Although the
basic principles of non-contact infrared temperature sensing and infrared thermal imaging are the same, we want to offer
a few additional basics of thermal imaging as well. This includes a brief overview of the optical fundamentals, which are essentially
the same for infrared thermal imaging as for normal visible-light cameras. In addition, we provide helpful hints about the most important characteristics to consider for spatial temperature measurements.
If you want to learn more about possible applications, you can directly follow the links to
person detection and
hot spot detection.
An infrared (IR) optical system can be described by the same parameters that apply for the visible spectrum. The main difference,
apart from the wavelength, is the material of the lenses. For IR optics, usually Germanium (Ge), Silicon (Si), Zinc Sulfide (ZnS)
or Chalcogenide glass is used, since these materials show good transparency in the relevant IR spectrum, while ordinary glass
is NOT transparent in the thermal infrared spectrum. The most common ones are Ge and Si, where Ge shows better transparency,
but at a higher price. Special optical coatings can further improve the transparency, but of course this also comes at a higher price.
We will not go into unnecessary detail here; for the general optical basics you may want to consult other information sources such as Wikipedia. In the following, we focus on the most important points.
The two main parameters describing the optical system are the focal length and the f-number. The focal length f in combination
with the dimensions of the focal plane array (FPA) determines the field of view (FOV) of the camera. The f-number (N) is the ratio
of the focal length to the lens aperture, essentially the diameter of the entrance pupil D. Since it is defined as N = f/D, the f-number
gets smaller as the entrance aperture gets larger.
In general, a smaller f-number means that more radiation can reach the sensitive pixel matrix of the FPA. More radiation results
in a better signal-to-noise ratio (SNR). Because a low f-number requires a lens system with a larger diameter, it also requires more material and tighter manufacturing tolerances; better performance is therefore only achievable at a higher price.
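As a small illustration of the relation N = f/D, here is a minimal Python sketch; the focal length and aperture diameters are made-up example values, not specifications of any particular lens:

```python
# f-number N = f / D: the larger the entrance pupil D, the smaller N.
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    return focal_length_mm / aperture_diameter_mm

# Hypothetical lenses with the same focal length but different apertures:
for diameter in (5.0, 10.0, 20.0):
    print(f"f = 17 mm, D = {diameter} mm -> N = {f_number(17.0, diameter):.2f}")
# Prints N = 3.40, 1.70, 0.85 -- a larger aperture gives a smaller f-number.
```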
Furthermore, the f-number also has an influence on the dynamic range (temperature measurement range) of the optical system.
The larger the aperture and the smaller the f-number, the more radiation is detected by the IR sensitive pixel at a given object temperature. This reduces the maximum temperature that can be detected, since the signal processing in our FPAs has a fixed gain which cannot be adjusted for different optics. For the analog-to-digital conversion this means that at a certain level of target radiation
a maximum digital output value is produced. If the sensor receives more radiation due to a smaller f-number, the output will still
be the maximum digital value, so the measurement range is truncated and the sensor is said to be saturated at those pixel locations.
To expand the dynamic range without saturation, optical filters can be used to attenuate parts of the IR spectrum and thereby reduce
the amount of radiation reaching the sensor. The combination of a small f-number and a carefully selected optical filter allows a good SNR at lower object temperatures as well as an increased measurement range.
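To make the clipping behaviour described above a bit more tangible, here is a minimal sketch of a fixed-gain analog-to-digital conversion; the 16-bit range and the gain value are arbitrary assumptions for illustration, not device specifications:

```python
# Illustrative saturation at the ADC: above a certain radiation level the
# fixed gain maps everything to the maximum digital value.
ADC_MAX = 65535        # assumed 16-bit converter
FIXED_GAIN = 100.0     # arbitrary fixed gain (digits per radiation unit)

def digital_output(radiation: float) -> int:
    return min(int(radiation * FIXED_GAIN), ADC_MAX)

for radiation in (100.0, 650.0, 700.0, 1000.0):
    print(radiation, "->", digital_output(radiation))
# 700 and 1000 radiation units both map to 65535: those pixels are saturated,
# which is why a smaller f-number (more radiation) lowers the maximum measurable temperature.
```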
If you want to take a thermal image of a scene or object, the three main parameters that determine the spatial resolution are the pixel pitch of the sensor array, the FOV, and the distance between sensor and object. To get a better understanding
of this relation, please refer to the following image:
Imagine the FPA is projected through the lens optics onto a distant screen. The FOV determines the projected size of the FPA depending on the distance to the sensor. For the same distance (A or B, respectively) and the same pixel pitch of the FPA, a large FOV will result
in a larger image with larger individual pixels than a small FOV. So for greater distances the small FOV optics will have a higher spatial resolution, but of course they also show a smaller part of the scene. If you want to get the same spatial resolution with a large FOV, you have two options. One is to reduce the measurement distance (from B to A). The other is to increase the number
of pixels, which for the same FPA size means reducing the pixel pitch. Please note that increasing the number of pixels while keeping
the pixel pitch the same results in a larger FPA size, which in turn gives a larger FOV.
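The interplay of FOV, pixel count and distance can also be estimated numerically. The following sketch projects the FOV onto a flat scene at a given distance and divides it by the number of pixels; the FOV values, pixel count and distances are example numbers only:

```python
import math

# Approximate width of the scene that a single pixel covers at distance d:
# total projected width of the FOV divided by the number of pixels per row.
def pixel_footprint_m(fov_deg: float, n_pixels: int, distance_m: float) -> float:
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return scene_width_m / n_pixels

for fov in (24.0, 60.0):        # small vs. large FOV, same pixel count
    for d in (1.0, 3.0):        # two measurement distances (A and B)
        footprint_cm = pixel_footprint_m(fov, 32, d) * 100.0
        print(f"FOV {fov:4.1f} deg, d = {d} m -> one pixel covers {footprint_cm:.1f} cm")
```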
Regarding spatial temperature measurements, the aforementioned relationships are important to keep in mind.
To determine the temperature of a specific feature or detail in your thermal image, this feature or detail has to illuminate at least one complete pixel. If this is not the case, the pixel will detect a mixed temperature of the object and the adjacent background. The following image will help to make this clear:
There are two pixels shown, illustrating the filling factor of the dog versus the background. For the 100% filled pixel in the middle, the camera will detect the temperature of that specific part of the dog. But for the 50% filled pixel at the dog's head, the camera will measure a superposition of the dog's head temperature and the background. For example: if the dog's head temperature is 30°C and the background is 20°C, the camera will detect 25°C as the dog's head temperature.
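The 50% example can be written as a simple weighted mix. Note that averaging temperatures linearly is the simplification used in the example above; strictly speaking, the mixing happens in the received radiation rather than in temperature:

```python
# Mixed-pixel reading as a weighted average of object and background temperature
# (simplified linear mixing, as in the dog example above).
def mixed_pixel_temperature(t_object_c: float, t_background_c: float,
                            fill_factor: float) -> float:
    return fill_factor * t_object_c + (1.0 - fill_factor) * t_background_c

print(mixed_pixel_temperature(30.0, 20.0, 1.0))   # 30.0 C -> fully filled pixel
print(mixed_pixel_temperature(30.0, 20.0, 0.5))   # 25.0 C -> half-filled pixel at the head
```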
This problem occurs especially for small objects and features. Even if the object is larger than one pixel, the position of the object can have a strong influence on the temperature reading of the sensor. You can see this in the image below:
A shift or movement of small objects can result in significant changes of the temperature readings, so their temperature cannot be determined reliably. Thus, to determine the correct temperature of an object
or feature, the smallest feature that should be reliably detected ought to illuminate more than one pixel. It follows that for large target distances or small object sizes you should consider a smaller FOV or a sensor with more pixels.
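To get a feeling for what this rule means in practice, one can estimate the largest distance at which a feature of a given size still covers at least two pixels. The sketch below uses the small-angle approximation that one pixel covers roughly d · P / f of the scene at distance d; the feature size, focal length and pixel pitch are example values only:

```python
# At distance d one pixel covers roughly d * P / f of the scene, so a feature
# of size O spans about O * f / (d * P) pixels. Requiring at least k pixels
# gives the maximum distance d_max = O * f / (k * P).
def max_distance_m(object_size_m: float, focal_length_m: float,
                   pixel_pitch_m: float, min_pixels: float = 2.0) -> float:
    return object_size_m * focal_length_m / (min_pixels * pixel_pitch_m)

# Example: a 10 cm feature, 5 mm focal length, 90 um pixel pitch
print(f"{max_distance_m(0.10, 0.005, 90e-6):.1f} m")   # ~2.8 m
```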
The FOV itself can be calculated from the focal length and the dimensions of the FPA as FOV = 2 · arctan(n · P / (2 · f)), where P equals the pixel pitch and n the number of elements in the corresponding direction. This means the FOV can vary in x- and y-direction if the number of elements is not equal in both directions.
To give an example: an 80x64 thermopile array has a pixel pitch of 90 μm. Combined with a 17 mm focal length optics, this results in a FOV of roughly 24° x 20°.
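The same numbers can be reproduced with a few lines of Python using the arctangent formula above (the computed vertical value of about 19.2° is quoted as roughly 20° in the example):

```python
import math

# FOV per direction: FOV = 2 * arctan(n * P / (2 * f))
def fov_deg(n_pixels: int, pixel_pitch_m: float, focal_length_m: float) -> float:
    return math.degrees(2.0 * math.atan(n_pixels * pixel_pitch_m / (2.0 * focal_length_m)))

print(f"horizontal: {fov_deg(80, 90e-6, 0.017):.1f} deg")   # ~23.9 deg
print(f"vertical:   {fov_deg(64, 90e-6, 0.017):.1f} deg")   # ~19.2 deg
```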
Note that this formula does not work well for wide FOV optics, since the aberrations of the system are not considered. To determine
whether an image is large enough for a filling factor of 100%, the ray law can also be used. The image size
I can easily be calculated as I = O · f / d, where O is the object size, f the focal length and d the distance of the object. The image size divided by the pixel pitch gives the number of pixels illuminated.
For example: a human with a shoulder width of 50 cm is 2 meters away from an HTPA32x32 L5.0. Therefore
f = 0.005 m, O = 0.5 m and d = 2 m. This results in an image size of
I = 1.25e-3 m, i.e. 1.25 mm. With a pixel pitch of 90 μm we get a total of about 13.9 pixels illuminated.
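The worked example can be checked with a few lines of Python using the same numbers as above:

```python
# Thin-lens image size I = O * f / d and the number of pixels it covers.
def image_size_m(object_size_m: float, focal_length_m: float, distance_m: float) -> float:
    return object_size_m * focal_length_m / distance_m

O, f, d, pitch = 0.5, 0.005, 2.0, 90e-6   # shoulder width, focal length, distance, pixel pitch
I = image_size_m(O, f, d)
print(f"image size: {I * 1000:.2f} mm")           # 1.25 mm
print(f"pixels illuminated: {I / pitch:.1f}")     # ~13.9 pixels
```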