An image is an array of pixel values, each with its own set of attributes. A monochrome image has one attribute: intensity. A color image has three attributes: red, green and blue. A multispectral image has up to 10 attributes, and a hyperspectral image has more than 10.
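
As a minimal illustration of the difference, assuming a NumPy representation (the frame size and band counts are examples, not a standard):

```python
import numpy as np

h, w = 480, 640  # example frame size (an assumption for the sketch)

mono = np.zeros((h, w), dtype=np.uint8)                 # 1 attribute: intensity
rgb = np.zeros((h, w, 3), dtype=np.uint8)               # 3 attributes: R, G, B
multispectral = np.zeros((h, w, 8), dtype=np.uint16)    # a few to ~10 spectral bands
hyperspectral = np.zeros((h, w, 128), dtype=np.uint16)  # tens to hundreds of bands

print(mono.shape, rgb.shape, multispectral.shape, hyperspectral.shape)
```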

Multispectral or hyperspectral imaging can be done in two ways:

With a specialized camera;
With a monochrome camera and multispectral illumination.
During imaging, several images of the same area of the object are formed simultaneously, but in different parts of the electromagnetic spectrum. Various combinations of these images make it possible to reveal processes and phenomena that are difficult or impossible to detect in an image taken with a color camera, because the spectrum used is not limited to the visible range: it can include, for example, the infrared (IR) or ultraviolet (UV) range.

R-spectrum, RG-spectrum and RGB-spectrum images: a color image taken with a monochrome camera and multispectral (RGB) illumination is better suited for software processing than a color image taken with a color camera.
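
A minimal sketch of the second approach: three monochrome frames are captured under sequential R, G and B illumination and stacked into one color image. The capture_frame helper is hypothetical and simulated here; replace it with your camera and lighting SDK calls.

```python
import numpy as np

def capture_frame(illumination: str) -> np.ndarray:
    """Hypothetical helper: switch on the light source named `illumination`,
    expose the monochrome camera once and return a 2-D intensity frame.
    Simulated here with random data; replace with your camera SDK call."""
    rng = np.random.default_rng()
    return rng.integers(0, 256, size=(480, 640), dtype=np.uint8)

# One monochrome frame per illumination band, stacked into a color composite.
frames = [capture_frame(band) for band in ("red", "green", "blue")]
composite = np.dstack(frames)   # shape (480, 640, 3): an RGB-like color image
```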

A typical camera most often operates in the visible and near-infrared range, from 350 to 1000 nm. Check the technical documentation for the exact spectral bandwidth of your camera.

LUCID Triton 2.8 MP camera: spectral range from 320 to 1100 nm (monochrome and color versions)

In the end, you need to understand what kind of image the task requires: should it be a color image to estimate the characteristic of interest, a spectral image to reveal more complex parameters, or is a monochrome image enough?

If color is not a prerequisite, we recommend opting for a monochrome camera, as monochrome cameras have higher sensitivity and spatial resolution.

What should the image resolution, frame rate and color depth be?
Is it always necessary to have a high resolution?

Keep in mind that the higher the resolution, the larger the data volume and the slower the system. To determine the resolution needed for your particular task, perform a simple calculation:

R = (S / a) × k

Here S is the size of the inspected object along the axis, a is the minimum significant feature size, and the coefficient k defines how many pixels the minimum significant feature must occupy so that its borders can be distinguished programmatically. For a monochrome camera k is usually 3 to 5 pixels; for an ordinary color camera this coefficient is doubled.

Let us take an example: you need to detect a defect on a part measuring 400 × 300 mm, and the minimum defect size is 0.65 × 0.65 mm. With k = 3, the required resolution is 400 / 0.65 × 3 ≈ 1846 pixels along one axis and 300 / 0.65 × 3 ≈ 1385 pixels along the other.

Now, using the obtained values, choose a camera with the same or slightly higher resolution. The Lucid Triton 2.8 MP model (1936 × 1464 pixels) is suitable for this task.
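
The same calculation can be scripted. A minimal sketch, assuming k = 3 as above (the function name is ours, not from any camera SDK):

```python
import math

def required_resolution(object_size_mm: float, min_feature_mm: float, k: int = 3) -> int:
    """Pixels needed along one axis: R = (S / a) * k, rounded up."""
    return math.ceil(object_size_mm / min_feature_mm * k)

rx = required_resolution(400, 0.65)  # 1847 px along the 400 mm axis
ry = required_resolution(300, 0.65)  # 1385 px along the 300 mm axis
print(rx, ry)  # the 1936 x 1464 Lucid Triton 2.8 MP covers both axes
```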

In some cases, for example when measuring the geometry of objects, sub-pixel resolution can be used instead of buying an expensive camera with higher resolution, as in the sketch below. In addition, if your algorithms are based on neural networks, look carefully at their image requirements: they often need images of no more than 2 to 3 MP.
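
As an illustration of the sub-pixel idea (one common approach, not the only one): fitting a parabola through three neighboring samples around a peak locates a feature more precisely than the pixel grid alone.

```python
import numpy as np

def subpixel_peak(profile: np.ndarray) -> float:
    """Locate the maximum of a 1-D profile with sub-pixel precision by
    fitting a parabola through the peak sample and its two neighbors."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                       # peak on the border: no refinement
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    offset = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return i + offset                         # fractional pixel position

# Example: an edge/line response sampled on the pixel grid
profile = np.array([0.1, 0.3, 0.9, 1.0, 0.7, 0.2])
print(subpixel_peak(profile))                 # 2.75 instead of the integer 3
```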

Image size also matters when selecting processing hardware and communication components (cables, network adapters, etc.) for multi-camera systems.

Frame rate and exposure

Frame rate (in the case of line scan cameras, the term “line scan rate” is used) and exposure should not be confused. Frame rate is the number of frames a sensor can capture and transmit in one second. The higher the frame rate, the greater the amount of data transmitted and the more computing power is required to process it.
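
To get a feel for how frame rate and resolution translate into data volume, a rough estimate of the uncompressed data rate is useful. A minimal sketch; the figures are illustrative:

```python
def data_rate_mb_per_s(width: int, height: int, bits_per_pixel: int, fps: float) -> float:
    """Uncompressed data rate in megabytes per second."""
    return width * height * bits_per_pixel / 8 * fps / 1e6

# e.g. a 1936 x 1464 sensor, 8-bit monochrome, at 30 frames per second
print(data_rate_mb_per_s(1936, 1464, 8, 30))  # ≈ 85 MB/s
```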

Exposure is the interval of time during which the camera shutter is open and light reaches the photosensitive area of the camera sensor.

When monitoring objects moving along a conveyor belt at very high speed, for example, the exposure must be on the order of milliseconds or less so that the object in the image is not blurred. Keep in mind, however, that the exposure must not only be short enough, but also long enough to produce an image at all: too long an exposure causes over-exposure, and too short an exposure causes under-exposure. The minimum exposure the camera can provide is 40 µs; if an even shorter exposure is required, pulsed illumination must be used.
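
As a rough guide to choosing an exposure for moving objects, you can require that the object travels no more than an allowed blur distance during the exposure. A minimal sketch with assumed numbers (the belt speed and allowed blur are illustrative):

```python
def max_exposure_s(allowed_blur_mm: float, speed_mm_per_s: float) -> float:
    """Longest exposure for which motion blur stays within allowed_blur_mm."""
    return allowed_blur_mm / speed_mm_per_s

# e.g. belt speed 2 m/s, blur allowed up to 0.2 mm (roughly one pixel in the
# resolution example above, where one pixel covers about 0.65 / 3 ≈ 0.22 mm)
t = max_exposure_s(0.2, 2000.0)
print(f"{t * 1e6:.0f} µs")  # 100 µs
```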

Color depth (bit depth) is a term that refers to the degree of detail in tone reproduction. The human eye can perceive about 17 million shades of color, or 256 shades of gray.

1 bit: 2¹ = 2 colors. A binary image is most often represented in black and white. Shades are conveyed by the density of black dots: the more densely the black dots are placed, the darker the tone.

8 bits: 2⁸ = 256 shades of gray for a monochrome image, or 2⁸ (R) × 2⁸ (G) × 2⁸ (B) ≈ 16.78 million color shades.

8-bit grayscale and 8-bit color image
In addition, there are 10-bit, 12-bit and even 16-bit images. To the human eye they look no different from an 8-bit image, but machine processing can take advantage of the higher bit depth, which improves analysis and the accuracy of mathematical calculations.
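
For example, a 12-bit image is typically stored in a 16-bit container. A minimal sketch, assuming a NumPy representation, of reducing it to 8 bits for display while keeping the full-precision data for measurements:

```python
import numpy as np

# Simulated 12-bit image stored in a uint16 container (values 0..4095)
img12 = np.random.default_rng(0).integers(0, 4096, size=(480, 640), dtype=np.uint16)

# 8-bit version for display: drop the 4 least significant bits
img8 = (img12 >> 4).astype(np.uint8)

# Measurements and calculations should stay on the higher-bit-depth data
print(img12.max(), img8.max())  # e.g. 4095 vs 255
```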