2011 Color and Imaging Conference, Part III: Courses B

This post covers the rest of the CIC 2011 courses that I attended; it will be followed by posts describing the other types of CIC content (keynotes, papers, etc.).

Lighting: Characterization & Visual Quality

This course was given by Prof. Françoise Viénot, Research Center of Collection Conservation, National Museum of Natural History, Paris, France.

During the 20th century, the major forms of light were incandescent (including tungsten-halogen) and discharge (fluorescent as well as compact fluorescent lamps – CFL). Incandescent lights glow from heat, and discharge lamps include an energetic spark or discharge which emits a lot of UV light, which is converted by a fluorescent coating to visible light. LEDs are relatively new as a lighting technology; they emit photons at a frequency based on the bandgap between semiconductor quantum energy levels. LEDs are often combined with fluorescent phosphors to change the light color.

Correlated Color Temperature is used to describe the color of natural or “white” light sources. CCT is defined as the temperature of the blackbody (an idealized physical object which glows due to its heat) whose color is nearest the color of the tested illuminant. Blackbody colors range from reddish at around 1000K (kelvin) through yellow, white, and finally blue-white (for temperatures over 10,000K). The CCT is only defined for illuminants with colors reasonably near one of these, so it is meaningless to talk about the CCT of, e.g., a green or purple light. “Nearest color” is defined on a CIE uv chromaticity diagram. Reciprocal CCT (one over CCT) is also sometimes used – reciprocal CCT lines are spaced very nearly at equal uv distances, which is a useful coincidence (for example, “color temperature” interface sliders should work proportionally to reciprocal CCT for intuitive operation).
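As a concrete illustration of that last point, here is a small sketch (my own, not from the course) of a slider that interpolates in reciprocal-CCT (mired) space; the 2000K–10,000K endpoints are arbitrary assumptions:

```python
# Sketch: mapping a "color temperature" slider to CCT via reciprocal CCT
# (mired = 1e6 / CCT). Endpoints are illustrative assumptions.

def cct_to_mired(cct_kelvin):
    """Convert correlated color temperature (K) to mired (reciprocal megakelvin)."""
    return 1e6 / cct_kelvin

def slider_to_cct(t, cct_min=2000.0, cct_max=10000.0):
    """Interpolate linearly in mired space, so equal slider steps give
    roughly equal perceptual (uv) steps, per the coincidence noted above."""
    m_warm = cct_to_mired(cct_min)  # larger mired value = warmer light
    m_cool = cct_to_mired(cct_max)
    mired = m_warm + t * (m_cool - m_warm)  # t in [0, 1]
    return 1e6 / mired
```

Interpolating in CCT directly would instead crowd the perceptually distinct low-temperature range into a small portion of the slider's travel.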

Perhaps confusingly, in terms of psychological effect low CCT corresponds to “warm colors” or “warm ambience” and high CCT corresponds to “cool colors” or “cool ambience”. Desirable interior lighting is about 3000K CCT.

Light manufacturers do not reproduce the exact spectra of real daylight or blackbodies, they produce metamers (different spectra with the same perceived color) of the white light desired. For example, four different lights could all match daylight with CCT 4500K in color, but have highly different spectral distributions. Actual daylight has a slightly bumpy spectral power distribution (SPD), incandescent light SPDs are very smooth, discharge lamps have quite spiky SPDs, and LED SPDs tend to have two somewhat narrow peaks.

Since LEDs are a new technology, they are expected to be better than, or at least equal to, existing lighting technologies. Expectations include white light, high luminous efficacy (converting a large percentage of the energy consumed into visible light rather than wasting it on UV or IR), low power consumption, long lifetime, high values of flux (emitted light quantity), innovations such as dimmability and addressability, and high visual quality (color rendition, comfort & well-being). LED light is clustered into peaks that are not quite monochromatic – they are “quasi-monochromatic”, with a smooth but narrow peak (spectral width around 100nm).

Most white light LEDs are “phosphor-converted LEDs” – blue LEDs with fluorescent powder (phosphor) that captures part of the blue light and emits yellow light, creating two peaks (blue and yellow) which produce an overall white color. By balancing the two peaks (varying the amount of blue light captured by the fluorescent powder), LED lights with different CCTs can be produced. It is also possible to add a second phosphor type to create more complex spectra. New LED lights are under development that use a UV-emitting LED coupled with 3 phosphor types.

An alternative approach to producing white-light LEDs is to create “color-mixed LEDs” combining red, green, and blue LEDs. There are also hybrid mixtures with multiple LEDs as well as phosphors. This course focused on phosphor-converted LEDs. They have better color rendition and good luminous efficacy, and are simple to control. On the other hand, RGB color-mixed LEDs have the advantage of being able to vary color dynamically.

Regarding luminous efficacy, in the laboratory cool white LED lamps can achieve very high values – about 150 lumens per Watt (steady-state operation). Commercially available cool white LED lamps can reach a bit above 100 lm/Watt, commercial warm white ones are slightly lower. US Department of Energy targets are for commercial LED lights of both types to approach 250 lm/Watt by 2020.

Intensity and spectral width strongly depend on temperature (cooling the lamp makes it brighter and “spikier”, heating does the opposite). Heat also reduces LED lifetime. As LEDs age, their flux (light output) decreases, but CCT doesn’t typically change. The rate of flux reduction varies greatly with manufacturer.

One way to improve LED lifetime is to operate it for short durations (pulse width modulation). This is done at a frequency between 100-2000 Hz, and of course reduces the flux produced.
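The flux reduction is simply proportional to the duty cycle, as in this small sketch (the numbers are mine, not from the course):

```python
# Illustrative sketch: under pulse-width modulation the average flux is
# the peak flux scaled by the duty cycle. The drive frequency
# (100-2000 Hz per the course) only affects flicker visibility,
# not the average output.

def pwm_average_flux(peak_flux_lm, duty_cycle):
    """Average luminous flux (lumens) for an LED pulsed at the given duty cycle."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be in [0, 1]")
    return peak_flux_lm * duty_cycle
```

For example, a hypothetical LED emitting 800 lm continuously would average 600 lm at a 75% duty cycle.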

Heat dissipation is the primary problem in scaling LED lights to high-lumen applications (cost is also a concern) – they top out around 1000 lumens.

The Color Rendering Index (CRI) is the method recommended by the CIE to grade illumination quality. The official definition of color rendering is “effect of an illuminant on the colour appearance of objects by conscious or subconscious comparison with their colour appearance under a reference illuminant”. The instructor uses “color rendition”, for which she has a simpler definition: “effect of an illuminant on the colour appearance of objects”.

CIE’s procedure for measuring the general color rendering index Ra consists of comparing the color of a specific collection of eight samples when illuminated by the tested light vs. a reference light. This reference light is typically a daylight or blackbody illuminant with the same or similar CCT as the tested light (if the tested light’s color is too far from the blackbody locus to have a valid CCT a rendering index cannot be computed; in any case such an oddly-colored light is likely to have very poor color rendering). The process includes a Von Kries chromatic adaptation to account for small differences in CCT between the test and reference light sources. After both sets of colors are computed, the mean of the chromatic distances between the color pairs is used to generate the CIE color rendering index. The scaling factors were chosen so that the reference illuminant itself would get a score of 100 and a certain “known poor” light source would get a score of 50 (negative scores are also possible). For office work, a score of at least 80 is required.
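The final scoring step can be sketched as follows, assuming the per-sample color differences have already been computed (in the CIE 1964 U*V*W* space the method uses, after chromatic adaptation); the 4.6 scaling factor is the standard CIE choice that makes the reference “known poor” lamp score about 50:

```python
# Sketch of the final scoring step of CIE Ra. Inputs are the eight
# color differences dE_i between test and reference renderings of the
# standard sample set; the earlier colorimetric steps are assumed done.

def special_rendering_index(delta_e_uvw):
    """Special index R_i for one sample; can go negative for large dE."""
    return 100.0 - 4.6 * delta_e_uvw

def general_cri(delta_es):
    """General color rendering index Ra: mean of the eight special indices."""
    assert len(delta_es) == 8
    return sum(special_rendering_index(de) for de in delta_es) / 8.0
```

A perfect match (all dE = 0) scores exactly 100, matching the scaling described above.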

Various problems with CRI have been identified over the years, and alternatives have been proposed. The Gamut Area Index (GAI) is an approach that describes the absolute separation of the chromaticities of the eight color chips, rather than their respective distances vis-à-vis a reference light. Incandescent lights tend to get low scores under this index. Another alternative metric, the Color Quality Scale (CQS) was proposed by the National Institute of Standards and Technology (NIST). It is similar to CRI in basic approach but contains various improvements in the details. Other approaches focus on whether observers find the colors under the tested light to be natural or vivid.

In general, there are two contradictory approaches to selecting lights: you can either emphasize fidelity, discrimination, and “naturalness”, or colorfulness enhancement and “beautification” – you can’t have both. Which is more desirable will depend on the application. For everyday lighting situations, full-spectrum lights are likely to provide the best combination of color fidelity and visual comfort.

There are also potential health issues – lights producing a high quantity of energy in the blue portion of the spectrum may be harmful for the vision of children as well as adults with certain eye conditions. In general, the “cooler” (bluer) the light source, the greater the risk, but there are other factors, such as brightness density. Looking directly at a “cool white” LED light is most risky; “warm white” lights of all types as well as “cool white” frosted lamps (which spread brightness over the lamp surface) are more likely to be OK.

The Role of Color in Human Vision

This course was taught by Prof. Kathy T. Mullen from the Vision Research Unit in the McGill University Dept. of Ophthalmology.

Prof. Mullen started by stating that primates are the only trichromats (having three types of cones in the retina) among mammals – all other mammals are dichromats (have two types of cones in the retina). One of the cone types mutated into two different ones relatively recently (in evolutionary terms). There is evidence that other species co-evolved with primate color vision (e.g. fruit colors changed to be more visible to primates).

The Role of Color Contrast in Human Vision

Color contrast refers to the ability to see color differences in the visual scene. It allows us to better distinguish boundaries, edges, and objects.

Color contrast has 4 roles.

Role 1: Detection of objects that would otherwise be invisible due to being seen against a dappled background – for example, seeing red berries among semi-shadowed green foliage.

Role 2: Segregation of the visual field into elements that belong together – if an object’s silhouette is split into several parts by closer objects, color enables us to see that these are all parts of the same object.

Role 3: Helps tell the difference between variations in surface color and variations in shading. This ability depends on whether color and achromatic contrasts coincide spatially or not. For example, a square with chrominance stripes (stripes of different color but the same luminance) at 90 degrees to luminance stripes (stripes that only change luminance) is strongly perceived as a 3D shaded object. If the chrominance and luminance stripes are aligned, then the object appears flat.

Role 4: Distinguishing between otherwise similar objects. This leads into color identification. If after distinguishing objects by color, we can also identify the colors, then we can infer more about the various objects’ properties.

Color Identification and Recognition

Color identification & recognition is a higher, cognitive stage of color vision, involving identifying, recognizing, and naming colors. It requires an internalized “knowledge” of what the different colors are. There is a (very rare) condition called “color agnosia” where color recognition is missing – people suffering from this condition perform normally on (e.g.) color-blindness vision tests, but they can’t identify or name colors at all.

Color is an object property. People group, categorize and name colors using 11 basic color categories: Red, Yellow, Green, Blue, Black, Grey, White, Pink, Orange, Purple, and Brown (there is some evidence that Cyan may also be a fundamental category).

Psychophysical Investigations of Color Contrast’s Role in Encoding Shape and Form

For several decades, vision research was guided by an understanding of color’s role which Prof. Mullen calls the “coloring book model”. The model holds that achromatic contrast is used to extract contours and edges and demarcates the regions to be filled in by color, and color vision has a subordinate role – it “fills in” the regions after the fact. In other words, color edges have no role in the initial shape processing occurring in the human brain.

To test this model, you can perform experiments that ask the following questions:

  1. Does color vision have the basic building blocks needed for form processing: spatially tuned detectors & orientation tuning?
  2. Can color vision extract contours and edges from the visual scene?
  3. Can color vision discriminate global shapes?

The coloring book model would predict that the answer to all of these questions is “no”.

Prof. Mullen then described several experiments done to determine the answers to these questions. These experiments relied heavily on “isoluminant colors” – colors with different chromaticity but the same luminance. The researchers needed extremely precise isolation of luminance, so they had to find individual isoluminant color pairs for each observer. This was done via an interesting technique called “minimum motion”, which relies on the fact that color vision is extremely poor at detecting motion. The researchers had observers stare at the center of an image of a continually rotating wheel with two alternating colors on the rim. The colors were varied until the rim appeared to stop turning – at that point the two colors were recorded as an isoluminant pair for that observer.

The experiments showed that color vision can indeed extract contours and edges from the scene, and discriminate global shapes, although slightly less well than achromatic (luminance) vision. It appears that the “coloring book” model is wrong – color contrast can be used in the brain in all the same ways luminance contrast can. However, color vision is relatively low-resolution, so very fine details cannot be seen without some luminance contrast.

The Physiological Basis of Color Vision

Color vision has three main physiological stages:

  1. Receptoral (cones) – light absorption – common to all daytime vision
  2. Post-receptoral 1 – cone opponency extracts color but not color contrast
  3. Post-receptoral 2 – double cone opponency extracts color contrast

The retina has three types of cone cells used for daytime (non-low-light) vision. Each type is sensitive to a different range of wavelengths – L cones are most sensitive to long-wavelength light, M cones are most sensitive to light in the middle of the visual spectrum, and S cones are most sensitive to short-wavelength light.

Post-receptoral 1: There are three main types of neurons in this layer, each connected to a local bundle of differently-typed cones. One forms red-green color vision from the opponent (opposite-sign) combination of L and M cones. The second forms blue-yellow color vision from the opponent combination of S with L and M cones. These two types of neurons are most strongly excited (activated) by uniform patches of color covering the entire cone bundle (some of them serve a different role by detecting luminance edges instead). The third type of neuron detects the luminance signal, and is most strongly excited by a patch of uniform luminance covering the entire cone bundle.

Post-receptoral 2: these are connected to a bundle of neurons from the “post-receptoral 1” phase – of different polarity; for example, a combination of “R-G+” neurons (that activate when the color is less red and more green) and “R+G-” neurons (that activate when the color is more red and less green). Such a cell would detect red-green edges (a similar mechanism is used by other cells to detect blue-yellow edges). These types of cells are only found in the primate cortex – other types of mammals don’t have them.

Introduction to Multispectral Color Imaging

This course was presented by Dr. Jon Y. Hardeberg from the Norwegian Color Research Laboratory at Gjøvik University College.

Metamerism (the phenomenon of different spectral distributions which are perceived as the same color) is both a curse and a blessing. Metamerism is what enables our display technologies to work. However, two surfaces with the same appearance under one illuminant may very well have a different appearance under another illuminant.

Besides visual metamerism, you can also have camera metamerism – a camera can generate the same RGB triple from two different spectral distributions. Most importantly, camera metamerism is different than human metamerism. For the two to be the same, the sensor sensitivity curves of the camera would have to be linearly related to the human cone cell sensitivity curves. Unfortunately, this is not true for cameras in practice. This means that cameras can perceive two colors as being different when humans would perceive them to be the same, and vice versa.
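This “linearly related” condition can be made concrete with a small numerical check (my own sketch, using synthetic sensitivity curves): fit the best 3×3 matrix mapping the cone sensitivities onto the camera sensitivities and look at the residual – a nonzero residual means camera and human metamerism differ.

```python
import numpy as np

# Sketch of testing whether camera sensitivities are a linear transform
# of cone sensitivities (the condition for matching human metamerism).
# Both inputs are (3, n_wavelengths) arrays of sampled sensitivity curves;
# the curves used in the test below are synthetic, not real measurements.

def linear_relation_residual(camera, cones):
    """Relative residual of the best least-squares fit camera ~ M @ cones.
    Near zero means the camera sees the same metamers as the observer."""
    M, *_ = np.linalg.lstsq(cones.T, camera.T, rcond=None)  # (3, 3) matrix
    fit = (cones.T @ M).T
    return np.linalg.norm(camera - fit) / np.linalg.norm(camera)
```

In practice the residual for real cameras is nonzero, which is exactly why camera metamerism differs from human metamerism.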

Multispectral color imaging is based on spectral reflectance rather than ‘only’ color; the number of channels required is greater than the three used for colorimetric imaging. Multispectral imaging can be thought of as “the ultimate RAW” – capture the physics of the scene now, make the picture later. Applications include fine arts / museum analysis and archiving, medical imaging, hi-fi printing and displays, textiles, industrial inspection and quality control, remote sensing, computer graphics, and more.

What is the dimensionality of spectral reflectance? This relates to the number of channels needed by the multispectral image acquisition system. In theory, spectral reflectance has infinite dimensionality, but objects don’t have arbitrary reflectance spectra in practice. Various studies have been done to answer this question, typically using PCA (Principal Component Analysis). However, these studies tend to produce a wide variety of answers, even when looking at the same sample set.

For the Munsell color chip set, various studies have derived dimensionalities ranging from 3 to 8. For paint/artwork from 5 to 12, for natural/general reflectances from 3 to 20. Note that these numbers do not correspond to a count of required measurement samples (regularly or irregularly spaced), but to the number of basis spectra required to span the space.

Dr. Hardeberg did a little primer on PCA. Plotting the singular values can let you know when to “cut off” further dimensions. He proposed defining effective dimensionality as the number of dimensions needed for the accumulated energy to reach 99% of the total – accumulated energy sounds like a good measure for PCA.
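The cutoff criterion can be sketched in a few lines (my own illustration, using synthetic data rather than any of the real reflectance sets; note that whether the mean spectrum is subtracted first varies between studies):

```python
import numpy as np

# Sketch of the 99% "accumulated energy" cutoff: take the SVD of a matrix
# of reflectance spectra (rows = samples, columns = wavelengths), and find
# the smallest number of basis spectra whose squared singular values
# account for the desired fraction of the total energy.

def effective_dimensionality(spectra, energy_fraction=0.99):
    s = np.linalg.svd(np.asarray(spectra, dtype=float), compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)  # accumulated energy curve
    return int(np.searchsorted(energy, energy_fraction) + 1)
```

Running this on the reflectance sets listed below is what produces the per-set dimensionalities Dr. Hardeberg reported.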

Dr. Hardeberg next discussed his own work on dimensionality estimation. He analyzed several reflectance sets:

  • MUNSELL: 1269 chips with matte finish, available from the University of Joensuu in Finland.
  • NATURAL: 218 colored samples collected from nature, also available at Joensuu.
  • OBJECT: 170 natural and man-made objects, online courtesy of Michael Vrhel.
  • PIGMENTS: 64 oil pigments used in painting restoration, provided to ENST by National Gallery under the VASARI project (not available online)
  • SUBLIMATION: 125 equally spaced patches of a Mitsubishi S340-10 CMY sublimation printer

Based on the 99% accumulated energy criterion, he found the following dimensionalities for the various sets: 18 for MUNSELL, 23 for NATURAL, 15 for OBJECT, 13 for PIGMENTS, 10 for SUBLIMATION. The results suggest that 20 dimensions is a reasonable general-purpose number, but the optimal number will depend on the specific application.

The finding of 10 dimensions for the SUBLIMATION dataset may be viewed as surprising since only three colorants (cyan, magenta, and yellow ink) were used. This is due to the nonlinear nature of color printing. A nonlinear model could presumably use as few as three dimensions, but a linear model needs 10 dimensions to get to 99% accumulated energy.

Multispectral color image acquisition systems are typically based on a monochrome CCD camera with several color filters. There are two variants – passive (filters in the optical path) and active (filters in the light path). Instead of multiple filters it is also possible to use a single Liquid Crystal Tunable Filter (LCTF). Dr. Hardeberg gave brief descriptions of several multispectral acquisition systems in current use, ranging from 6 to 16 channels.

Getting spectral reflectance values out of the multichannel measured values requires some work – Dr. Hardeberg detailed a model-based approach that takes a mathematical model of the acquisition device (how it measures values based on spectral input) and inverts it to generate spectral reflectance from the measured values.
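A minimal sketch of the idea, assuming a purely linear device model (real systems add smoothness or basis constraints, since recovering many wavelengths from few channels is underdetermined):

```python
import numpy as np

# Sketch of model-based inversion: model the device as c = A @ r, where
# A (channels x wavelengths) holds the effective spectral sensitivities
# and r is the reflectance spectrum. The pseudo-inverse gives the
# minimum-norm least-squares estimate of r from the channel values c.

def recover_reflectance(A, c):
    return np.linalg.pinv(A) @ c
```

The recovered spectrum always reproduces the measured channel values exactly when they are consistent with the model, even though the true spectrum is not uniquely determined.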

There is work underway to find spectral acquisition systems that are cheaper, easier to operate, and faster while still generating high-quality reflectance data. One of these is happening in Dr. Hardeberg’s group, based on a Color-Filter Array (CFA) – similar to the Bayer mosaics found in many digital camera, but with more channels. This allows capturing spectral information in one shot, with one sensor. Another example is a project that takes a stereo camera and puts different filters on each of the lenses, processing the resulting images to get stereoscopic spectral images with depth information.

Dr. Hardeberg ended by going over various current research areas for improving multispectral imaging, including a new EU-sponsored project by his lab which is focusing on multispectral printing.

Fundamentals of Spectral Measurements for Color Science

This course was presented by Dr. David R. Wyble, Munsell Color Science Lab at Rochester Institute of Technology.

Colorimetry isn’t so much measuring a physical value, as predicting the impression that will be formed in the mind of the viewer. Spectral measurements are more well-defined in a physical sense.

Terminology: Spectrophotometry measures spectral reflectance, transmittance or absorptance of a material as a function of wavelength. The devices used are spectrophotometers, which measure the ratio of two spectral photometric quantities, to determine the properties of objects or surfaces. Spectroradiometry is more general – measurement of spectral radiometric quantities. The devices used (spectroradiometers) work by measuring spectral radiometric quantities to determine the properties of light sources and other self-luminous objects. Reflectance, transmittance, absorptance are numerical ratios; the words “reflection”, “transmission”, and “absorption” refer to the physical processes. Most spectrophotometers measure at 10nm resolution, and spectroradiometers typically at 5-10nm.

Spectrophotometers

Spectrophotometers measure a ratio with respect to a reference, so no absolute calibration is needed. For reflectance we reference a Perfect Reflecting Diffuser (PRD) and for transmittance we use air. A PRD is a theoretical device – a Lambertian diffuser with 100% reflectance. Calibration transfer techniques are applied to enable the calculation of reflectance factor from available measured data.

Reflectance is the ratio of the reflected flux to the incident flux (problem – measuring incident flux). Reflectance Factor is the ratio of the flux reflected from the sample to the flux that would be reflected from an identically irradiated PRD (problem – where’s my PRD?).

The calibration equation (or why we don’t need a PRD): a reference sample (typically white) is provided together with Rref(λ) – the known spectral reflectance of the sample (λ stands for wavelength). This sample is measured to provide the “reference signal” iref(λ). In addition, “zero calibration” (elimination of dark current, stray light, etc.) is performed by measuring a “dark signal” idark(λ). Dark signal is measured either with a black reference sample or “open port” (no sample in the device). The calibration equation combines Rref(λ), iref(λ) and idark(λ) with the measured sample intensity isample(λ) to get the sample’s spectral reflectance Rsample(λ):

Rsample(λ) = Rref(λ) × (isample(λ) – idark(λ)) / (iref(λ) – idark(λ))
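In code form (my own transcription of the equation above):

```python
import numpy as np

# The calibration equation, written out. All arrays are sampled at the
# same wavelengths: R_ref is the certified reflectance of the white
# reference, and the i_* values are raw detector signals.

def sample_reflectance(R_ref, i_sample, i_ref, i_dark):
    R_ref, i_sample, i_ref, i_dark = map(
        np.asarray, (R_ref, i_sample, i_ref, i_dark))
    return R_ref * (i_sample - i_dark) / (i_ref - i_dark)
```

Note how the dark signal cancels out of both numerator and denominator, removing dark current and stray light from the ratio.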

Note that Rref(λ) was similarly measured against some other reference, and so on. So you have a pedigree of standards, ultimately leading to some national standards body. For example, if you buy a white reference from X-Rite, it was measured by X-Rite against a white tile they have that was measured at the National Institute of Standards and Technology (NIST).

A lot of lower-cost spectrophotometers don’t come with a reflectance standard – Dr. Wyble isn’t clear on how those work. You can always buy a reflectance standard separately and do the calibration yourself, but that is more risky – if it all comes from the same manufacturer you can expect that it was done properly.

Transmittance is the ratio of transmitted flux to incident flux. At the short path lengths in these devices, air is effectively a perfect transmitter for visible light, so a “transmittance standard” is not needed: the incident flux can be measured directly by measuring “open port” (no sample). For liquids you can measure an empty container, and when measuring specific colorants dissolved in a carrier fluid you can measure a container full of clean carrier fluid.

Calibration standards must be handled, stored and cleaned with care according to manufacturer instructions, otherwise incorrect measurement will result. A good way to check is to measure the white standard and check the result just before measuring the sample.

A spectrophotometer typically includes a light source, a sample holder, a diffraction grating (for separating out spectral components) and a CCD array sensor, as well as some optics.

Measurement geometry refers to the measuring setup; variables such as the angles of the light source and sensor to the sample, the presence or absence of baffles to block certain light paths, the use (or not) of integrating hemispheres, etc. Dr. Wyble went into a few examples, all taken from the CIE 15:2004 standards document. Knowledge of which measurement geometry was used can be useful, e.g. to estimate how much specular reflectance was included in a given measurement (different geometries exclude specular by different degrees). Some special materials (“gonio-effects” pigments that change color based on angle, fluorescent, metallic, retroreflective, translucent, etc.) will break the standard measurement geometries and need specialized measuring methods.

Spectroradiometers

Similar to spectrophotometers, but have no light source or sample holder. The light from the luminous object being measured goes through some optics and a dispersing element to a detector. There are no standard measurement geometries for spectroradiometry.

Some spectroradiometers measure radiance directly emitted from the source through focused optics (typically used for measuring displays). Others measure irradiance – the light incident on a surface (typically used for measuring illuminants). Irradiance measurements can be done by measuring radiance from a diffuse white surface, such as pressed polytetrafluoroethylene (PTFE) powder.

Irradiance depends on the angle of incident light and the distance of the detector. Radiance measured off diffuse surfaces is independent of angle to the device. Radiance measured off uniform surfaces is independent of distance to the device.

Instrument Evaluation: Repeatability (Precision) and Accuracy

Repeatability – do you get similar results each time? Accuracy – is the result (on average) close to the correct one? Repeatability is more important since repeatable inaccuracies can be characterized and corrected for.

Measuring repeatability – the standard deviations of reflectance or colorimetric measurements. The time scale is important: short-term repeatability (measurements one after the other) should be good for pretty much any device. Medium-term repeatability is measured over a day or so, and represents how well the device does between calibrations. Long-term repeatability is measured over weeks or months – the device would typically be recalibrated several times over such an interval. The most common measure of repeatability is Mean Color Difference from the Mean (MCDM). It is measured by making a series of measurements of the same sample (removing and replacing it each time to simulate real measurements), calculating L*a*b* values for each, calculating the mean, calculating ΔE*ab between each value and the mean, and finally averaging the ΔE*ab values to get the MCDM. The MCDM will typically be about 0.01 (pretty good) to 0.4 (really bad). Small handheld devices commonly have around 0.2.
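The MCDM computation can be sketched directly from that description (using plain Euclidean ΔE*ab):

```python
import numpy as np

# Sketch of the MCDM computation: repeated CIELAB measurements of the
# same sample (one row per re-measurement), averaged, then the mean
# Euclidean dE*ab from that mean is reported.

def mcdm(lab_measurements):
    lab = np.asarray(lab_measurements, dtype=float)      # shape (n, 3)
    mean_lab = lab.mean(axis=0)
    delta_es = np.linalg.norm(lab - mean_lab, axis=1)    # dE*ab per measurement
    return delta_es.mean()
```

A result around 0.01 would indicate an excellent instrument by the figures quoted above; around 0.4 a very poor one.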

Quantifying accuracy – typically done by measuring the spectral reflectance of a set of known samples (e.g. BCRA tiles) that have been previously measured at high-accuracy laboratories: NIST, NRC, etc. The measured values are compared to the “known” values and the MCDM is calculated as above. Once the inaccuracy has been quantified, this can be used to correct further measurement with the device (using regression analysis). When applied to the test tile values, the correction attempts to match the reference tile values. When applied to measured data, the correction attempts to predict reflectance data as if the measurements were made on the reference instrument. Note that the known values of the samples have uncertainties in them. The best uncertainty you can get is the 45:0 reflectometer at NIST, which is about 0.3%-0.4% (depending on wavelength) – you can’t do better than that.
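A minimal sketch of such a regression correction (my own illustration; real corrections may use higher-order or cross-wavelength models):

```python
import numpy as np

# Sketch of regression-based instrument correction: per wavelength, fit a
# linear map from this instrument's readings to the reference instrument's
# readings over a set of known tiles, then apply that map to new data.

def fit_correction(measured, reference):
    """measured, reference: (n_tiles, n_wavelengths) reflectance arrays.
    Returns per-wavelength (gain, offset) from ordinary least squares."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    gains, offsets = [], []
    for j in range(measured.shape[1]):
        g, o = np.polyfit(measured[:, j], reference[:, j], 1)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

def apply_correction(measured, gain, offset):
    """Predict what the reference instrument would have read."""
    return np.asarray(measured, dtype=float) * gain + offset
```

Corrected readings can still be no more certain than the reference tile values themselves, per the NIST uncertainty floor noted above.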

Using the same procedure, instead of aligning your instruments with NIST, you can align a corporate “fleet” of instruments (used in various locations) to a “master” instrument.
