The subjective and objective assessment of image quality in security technology
This article discusses the effect of image compression algorithms on security image data. It compares three different assessment techniques: objective criteria, subjective criteria, and recognizability. We chose two typical security images of different initial quality (a license plate and a face) and three compression methods: two dedicated compression technologies (JPEG and LuRaWave, LWF) and one applied technology (the Karhunen-Loève transform, KLT). For each of the three compression techniques, a series of compressed images was obtained from the original image at different compression rates. Finally, the MSE (mean squared error) as an objective criterion, the subjective image quality according to ITU-R Rec. BT.500, and the recognizability of the image were evaluated and compared.
At present, many different image compression techniques are applied to the transfer and storage of images. These techniques have an important influence on the perception of the image and its final interpretation. The effects are not critical in relatively broadband systems such as digital television, but they become extremely important in narrowband systems. The artifacts and distortions introduced by different compression algorithms have different sources and different impacts. This article tries to illustrate some typical situations and to show their significance in the security field.
Image quality is of fundamental importance to the identification and classification of targets (people, vehicles, etc.), and to the analysis of motion and movement in both routine and emergency situations. We can usually define several parameters related to image perception, including detail resolution (image sharpness), grayscale resolution (gradation), color resolution, and motion resolution. All these parameters change when compression introduces distortion or artifacts.
■ Lossy image coding standards
In two basic situations, efficient and aggressive image compression is necessary: a massive image database with limited storage space, and transmission of images over a very narrow communication channel. Although there are a variety of lossless image compression techniques (especially computer image formats), the statistical nature of a particular image (its limited source entropy) means they can achieve only a limited degree of compression. For higher compression rates, we must introduce lossy compression formats, which add distortion (blurring, errors, etc.) to the reconstructed image. The most commonly used image quality assessment method is to calculate objective parameters such as MSE, SNR, etc. These parameters are easy to calculate, but they have limited relevance to the subjective impression (perception) of the image, and they are of little help for the resolution or classification of objects in the image. In addition, subjective image quality assessment is being extensively studied and there are many references. Below we restrict our discussion to static black-and-white images and limited compression rates.
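As a minimal illustration (not the authors' code), the MSE used here as the objective criterion can be computed directly from the pixel values of the original and reconstructed images:

```python
def mse(original, reconstructed):
    """Mean squared error between two equally sized grayscale images,
    both given as flat sequences of pixel intensities."""
    if len(original) != len(reconstructed):
        raise ValueError("images must have the same size")
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
```

Identical images give an MSE of zero; any compression distortion raises it, regardless of whether a human observer would actually notice the difference, which is exactly the limitation discussed above.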
Currently, lossy image compression generally uses two major techniques. The first uses a discrete cosine transform (DCT) followed by quantization with a quantization table to reduce the amount of data; this is part of the JPEG standard. The other technique is the wavelet transform, WT (used, e.g., in the FBI fingerprint database). The WT is part of the LuRaWave and JPEG2000 formats. The IRFAN package we use supports both JPEG and LuRaWave.
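The principle of the DCT-plus-quantization step can be sketched in one dimension (a hedged sketch, not the JPEG reference implementation; JPEG applies the 2-D DCT to 8×8 blocks with per-coefficient quantization tables):

```python
import math

def dct_1d(block):
    """Unnormalized DCT-II of a 1-D block; the 2-D JPEG DCT applies
    this transform to the rows and then the columns of an 8x8 block."""
    n = len(block)
    return [
        sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n)) for i, x in enumerate(block))
        for k in range(n)
    ]

def quantize(coeffs, q):
    """Coarse quantization: small (mostly high-frequency) coefficients
    collapse to zero, which is where the data reduction comes from."""
    return [round(c / q) for c in coeffs]
```

For a flat block all the signal energy lands in the DC coefficient, so after quantization only one nonzero value remains; natural image blocks behave similarly, with energy concentrated in a few low-frequency coefficients.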
The third method, the Karhunen-Loève expansion, is based on the statistical properties of the image data. We divide the image matrix into a series of M sub-matrices of size N×N (elements of a vector space), and treat each sub-matrix as a realization of a random process in the N×N space. From these realizations we can estimate the correlation (covariance) matrix R. In the RMS sense, the optimal basis is constructed from the eigenvectors v_i of this matrix, satisfying R v_i = λ_i v_i, where λ_i is an eigenvalue of the covariance matrix.
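The construction above can be sketched as follows (an illustrative sketch under the assumption that the sub-matrices are flattened into vectors; power iteration stands in for a full eigendecomposition and recovers only the dominant KLT basis vector):

```python
def covariance(blocks):
    """Sample covariance matrix R of a set of equally sized,
    flattened image sub-blocks (the realizations of the process)."""
    n = len(blocks[0])
    mean = [sum(b[i] for b in blocks) / len(blocks) for i in range(n)]
    return [
        [sum((b[i] - mean[i]) * (b[j] - mean[j]) for b in blocks) / len(blocks)
         for j in range(n)]
        for i in range(n)
    ]

def power_iteration(R, steps=100):
    """Dominant eigenvector of R -- the first KLT basis vector,
    i.e. the direction capturing the most block-to-block variance."""
    v = [1.0] * len(R)
    for _ in range(steps):
        w = [sum(R[i][j] * v[j] for j in range(len(R))) for i in range(len(R))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting each block onto the first few eigenvectors and discarding the rest is what makes the KLT optimal in the RMS sense, at the price of an image-dependent basis that must be transmitted along with the data.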
■ The image compression standard as an image processing system
Obviously, we can treat a compression algorithm like any other image processing system. In standard image processing, such a system is described by a point spread function (PSF) or a modulation transfer function (MTF). The response of an image processing system to a point source (a two-dimensional Dirac impulse) is defined as its impulse response, or point spread function (PSF). The PSF is often used to characterize effects such as optical blurring. The relationship between the imaged target and the original target is given by the convolution of the original target with the PSF.
In the frequency domain, based on the Fourier transform, the convolution operation becomes a multiplication operation.
The Fourier transform of the PSF is the optical transfer function (OTF).
The OTF is generally complex, so it has a modulus (absolute value) and an argument. The modulus of the OTF is called the modulation transfer function, MTF, or, when it is related to contrast, the contrast transfer function, CTF. The argument is called the phase transfer function and is related to frequency shifts. An important assumption here is that the image processing system is linear, so that any input and output can be represented as a series of harmonics at different spatial frequencies.
We can also define a so-called contrast C to describe the contrast transmission efficiency of the MTF in the spatial frequency domain: C = (Bmax − Bmin) / (Bmax + Bmin), where B is the brightness. Every part of the image processing system, including the atmosphere, the objective lens, the image sensor, the image display, and even the observer's eyes, can be described by an MTF. The MTF of the whole system is then the product of all the partial MTFs.
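A minimal sketch of these two quantities, assuming a 1-D discretized PSF and scalar brightness values (real MTF measurements are 2-D and use calibrated test patterns):

```python
import math

def mtf_1d(psf):
    """Magnitude of the DFT of a 1-D PSF, normalized so that MTF(0) = 1."""
    n = len(psf)
    mags = []
    for k in range(n):
        re = sum(p * math.cos(2 * math.pi * k * i / n) for i, p in enumerate(psf))
        im = -sum(p * math.sin(2 * math.pi * k * i / n) for i, p in enumerate(psf))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

def contrast(b_max, b_min):
    """Michelson contrast C = (Bmax - Bmin) / (Bmax + Bmin)."""
    return (b_max - b_min) / (b_max + b_min)
```

A delta-like PSF (an ideal system) yields a flat MTF of 1 at every spatial frequency, while a uniform-blur PSF suppresses the high frequencies, which is exactly the behavior a lossy CODEC exhibits when modeled as a black box.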
For ease of simulation, image compression standards (coder-decoders, CODECs) are often treated as "black boxes" that produce output images. As an example, we chose one of the most important compression standards, JPEG.
According to the theory above, we can treat this "black box" (the JPEG compression CODEC) as an image processing system with a PSF and an MTF. We use these parameters because of their close relationship with both the subjective and the objective assessment of image quality.
■ Relationship between subjective and objective image quality - the visual model
The subjective quality of an image can be explained in more depth using a model of human vision. Almost all published models share a generic structure that includes the optical properties of the human visual system, retinal sampling, and some major functions of the visual cortex, including multichannel decomposition of the image into several spatial-frequency bands and several edge orientations. The parameters of these models are derived from many psychophysiological experiments. Such a model processes two input images: the original image and the distorted (measured) image. The output of the model is often referred to as a Just Noticeable Difference (JND) map. Each point of the JND map indicates whether there is a noticeable difference between the corresponding points of the original and measured images, and how large that difference is.
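In drastically simplified form (a toy sketch only; a real JND model also includes the optical, retinal, and cortical stages just described, and a perceptual visibility threshold rather than a fixed one), the two-input, map-output structure looks like this:

```python
def jnd_map(original, measured, threshold):
    """Toy per-pixel 'noticeable difference' map over two flattened images:
    0 where the difference is below the visibility threshold, otherwise
    the magnitude of the difference."""
    return [abs(a - b) if abs(a - b) > threshold else 0
            for a, b in zip(original, measured)]
```

The key idea the sketch preserves is that the output is not a single number like MSE but a spatial map, so impairments can be weighted by where they occur, e.g. inside a region of interest.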
The prediction accuracy of the model can be further improved by taking other psychophysiological phenomena into account. One notable refinement is the use of automatic region-of-interest detection algorithms: for a given image, impairments inside a region of interest degrade the overall subjective quality much more severely than impairments outside it.
Another important point to consider when modeling the human visual system is the difference between the subjective image quality (which can be defined as the impression that a given image causes when viewed) and the information content of the image processed by the human visual system. In security technology, the information capacity of an image is undoubtedly more important than the overall subjective quality of the image. The capacity of information provided by images is indeed difficult to describe. However, depending on how the image is viewed and analyzed, two extreme cases can be defined.
The first extreme situation is the "offline" inspection of an image. Here the observer has enough time to analyze the given image, examining and evaluating every part of it. In this case, the quality of the interpreted image can be evaluated using one of the models above. The information obtained by the observer is limited mainly by the quality of the examined image, not by physiological and psychological phenomena.
The second extreme situation occurs when a scene is observed in "real time", where the time factor plays an important role, together with the characteristics of the human visual system and psychological and physiological phenomena. A visual model for this second scenario must include aspects such as the attraction of the observer's attention, various masking effects, and the search for regions of interest. The effect of visual attention on the capacity of the interpreted image information can be seen in the example images (Figures 3 and 4).
Figure 3 shows the original image of a scene taken by the camera, while Figure 4 shows the output of the first stages of a human visual system model, comprising the eyeball optics, retinal sampling, and the visual processing of the nerve cell layers closest to the retina. The image in Figure 4 corresponds to what is seen by an observer fixating on the face area in the upper left corner. This particular model simulates the rapid reduction in spatial resolution (more precisely, angular resolution) as the angle between an object and the fixation point increases. The degradation of spatial resolution in the diagonal direction is far more severe than in the vertical direction, and degradation in the horizontal direction is the lightest. It can be concluded from Figure 4 that while fixating on a point in the scene, the observer cannot resolve peripheral objects at a large angle to the visual axis, even if the object is quite large, such as the license plate in Figure 4. As a result, the observer cannot recall such an object, because it remains below the observer's resolving power throughout the observation.
■ Subjective and objective image quality comparison and subjective testing methods
Our work focuses on the evaluation of image quality in both subjective and objective terms. We use the mean squared error (MSE) as the objective measure of image quality. Subjective quality is evaluated by a series of subjective tests, elaborated below. To evaluate image quality we chose subjective tests based on ITU-R BT.500. The method used was the Double Stimulus Impairment Scale (DSIS) method, slightly modified to suit the assessed image quality.
Two different test images were prepared (a license plate and a face). The size of the license plate number is 55×11 pixels, and the size of the face is about 60×85 pixels in a typical scene. The images were then compressed at different compression rates using the different lossy compression methods (JPEG, LuRaWave, KLT). These images were arranged into two groups in random order. The contents of each group are the same, but the compression rates and methods differ. A reference (original) image is placed in front of each group. Each individual test then consists of two presentations: the reference image is presented first, followed by the compressed image. The quality of the image was judged on a 5-point scale with two significant figures (1.0, 1.1, 1.2, …, 4.9, 5.0). At the beginning of the test, to explain some important image parameters, we showed the observers the reference image, the worst-quality image, and a medium-quality image. The observers were non-expert students of the School of Electrical Engineering.
The test images were obtained from high-quality slide scans. The scans use a 1200 dpi resolution, and the selected area is subsampled to a 720×576 resolution. A Nikon FE10 camera with a 35-70/3.5 lens and Fuji Provia 100 ASA color film was used. In addition, we also used an OLYMPUS Camedia C-1400L digital camera, which has a 2/3-inch CCD image sensor and a resolution of 1.4 megapixels; its images are stored in SHQ (super high quality) mode as a 1280×1024 raster in JFIF format. The license plate image was taken with the digital camera, and the face image with the film camera.
The reference images were processed at different compression ratios with the three compression methods described above and displayed on a Nokia 19-inch display. The room was covered by black curtains during the test, so viewing conditions were close to laboratory conditions. Since the maximum luminance of the screen is 120 cd/m², the luminance of the non-displaying screen area was set, according to the standard, to 2.4 cd/m²; in a completely dark room, the luminance of the black level on the screen was set to 1.2 cd/m², and the background luminance to 1.8 cd/m². The observation distance is 5 times the image height, and the observation angle is 0 degrees (on axis).
A group of 11 observers carried out the recognition test (identifying the license plate or the face), and another group of 6 observers carried out the subjective image quality assessment.
■ Objective and subjective image quality assessment results
This section summarizes our simulation and experimental results. Figures 5 and 6 show the original versions of the two images (license plate and face).
Figures 7 to 9 (omitted) show the dependence of the objective measure (MSE) and the subjective image quality on the compression rate. The compression rate is calculated from the actual image file size. The figures also mark the recognizability limits. Figures 7a and 7b show the JPEG compression method, and Figures 7c and 7d its worst case. Similarly, the LWF and KLT compression methods are shown in Figures 8a-d and 9a-d, respectively.
■ Conclusion
The interrelationship between objective distortion measure, subjective quality and recognizability is a very complicated issue. In this article we have shown two