Hyperspectral cameras used in industrial environments or onboard airborne platforms (UAVs or airplanes) often operate in a pushbroom fashion. These cameras, such as those from Specim, are all built following the same approach:
- a front lens, to define the field of view (FOV)
- an entrance slit, so that the sensor sees only a thin line of the target at once
- dispersive optics, so that the thin beam of light entering the system spreads over the last main component
- a matrix detector.
With such a configuration, the whole sample or scene is not seen at once, but only a thin line of it. Movement is therefore needed to image the full object, provided by, e.g., a conveyor belt or a UAV. However, we may wonder how much of the sample, scene, or target is actually seen by the sensor.
What the sensor effectively measures depends on several parameters:
- the frame rate of the camera
- the speed of the movement
- the integration time
- the slit width of the camera
- the front objective
- the measurement distance.
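Two of these parameters, the front objective and the measurement distance, set the across-track sampling: a pixel's footprint on the target is the swath (set by the FOV and the working distance) divided by the number of spatial pixels. A minimal sketch, assuming a flat target perpendicular to the optical axis and illustrative values not taken from the text (a 38° FOV and 640 spatial pixels):

```python
import math

def across_track_pixel_size(fov_deg: float, distance_mm: float, n_pixels: int) -> float:
    """Across-track footprint of one spatial pixel, assuming a flat target
    perpendicular to the optical axis and uniform sampling across the swath."""
    swath_mm = 2.0 * distance_mm * math.tan(math.radians(fov_deg) / 2.0)
    return swath_mm / n_pixels

# Illustrative values (assumptions, not from the text): 38 deg FOV,
# 640 spatial pixels, working distance chosen so the swath spans a 1 m belt.
fov_deg = 38.0
distance_mm = 1000.0 / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
print(across_track_pixel_size(fov_deg, distance_mm, 640))  # ~1.56 mm per pixel
```

Moving the camera closer or fitting a narrower-FOV objective shrinks this footprint proportionally, which in turn changes the frame rate needed to keep square pixels.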
To fully understand this, let us consider Figure 1, with a concrete example: a Specim FX17 hyperspectral camera placed above a 1 m wide conveyor belt, sorting plastic flakes moving at 2 m/s.
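The frame rate needed to preserve the aspect ratio in this example can be sketched as follows, assuming the FX17's 640 spatial pixels are spread over the full 1 m belt width (the actual swath depends on the chosen objective and measurement distance):

```python
# Assumptions: 640 spatial pixels imaged across the full 1 m belt width,
# belt speed 2 m/s (the belt width and speed are from the example).
belt_width_mm = 1000.0
n_spatial_pixels = 640
belt_speed_mm_s = 2000.0

# Across-track footprint of one pixel on the belt.
pixel_mm = belt_width_mm / n_spatial_pixels  # 1.5625 mm

# Frame rate at which the belt advances exactly one pixel per frame,
# i.e. square pixels and a preserved aspect ratio.
fps_square = belt_speed_mm_s / pixel_mm
print(fps_square)  # 1280.0
```

Under these assumptions, roughly 1280 fps gives square pixels; half that rate (close to the 641 fps quoted below) stretches each pixel to twice its width along the direction of travel.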
Here we are in a configuration where the slit-width image is much larger than A, and Figure 1 becomes as follows:
As can be seen, because of the large slit, two consecutive frames overlap each other. Even if frames were acquired at a twice lower pace (641 fps in our present case), the aspect ratio would no longer be preserved (a round object would be imaged as an oval), but the sample would still be fully imaged.
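This aspect-ratio distortion can be quantified: the along-track footprint of each frame is the belt speed divided by the frame rate, so halving the frame rate doubles the along-track stretch. A short check, reusing the assumption of 640 spatial pixels over the 1 m belt:

```python
belt_speed_mm_s = 2000.0
across_track_mm = 1000.0 / 640  # assumed per-pixel width: 640 pixels over 1 m

for fps in (1280.0, 641.0):
    along_track_mm = belt_speed_mm_s / fps      # belt advance per frame
    stretch = along_track_mm / across_track_mm  # 1.0 means square pixels
    print(f"{fps:.0f} fps: {along_track_mm:.2f} mm/frame, stretch {stretch:.2f}")
```

At 641 fps each frame spans about 3.12 mm along the belt against a 1.56 mm across-track pixel, so a round flake comes out roughly twice as long as it is wide.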
Looking at the width of the slit image convolved with the distance traveled by the belt during the integration time, the distance the belt runs between consecutive frames can be up to 4.48 mm, leading to a minimum frame rate of 446 fps for a fully imaged sample. Below that frame rate, two consecutive frames would no longer overlap, and part of the sample would not be captured.
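The reasoning above can be sketched numerically: the effective along-track footprint of one frame is the slit image on the belt widened by the distance the belt travels during the integration time, and frames must be spaced no farther apart than that footprint. Only the 4.48 mm total comes from the text; the split between slit image and integration time below is a hypothetical decomposition chosen to reproduce it:

```python
# Hypothetical decomposition (only the 4.48 mm footprint is given in the text):
slit_image_mm = 4.0           # assumed slit-width image projected onto the belt
integration_time_s = 240e-6   # assumed integration time
belt_speed_mm_s = 2000.0

# Effective along-track footprint of one frame: the slit image widened by
# the motion smear accumulated during the integration time.
footprint_mm = slit_image_mm + belt_speed_mm_s * integration_time_s  # 4.48 mm

# Minimum frame rate so consecutive frames still touch (no gaps in coverage).
fps_min = belt_speed_mm_s / footprint_mm
print(round(footprint_mm, 2), round(fps_min, 1))  # 4.48 446.4
```

Any frame rate at or above this minimum guarantees full coverage of the sample, at the cost of overlap (and aspect-ratio distortion) when it exceeds the square-pixel rate.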