2011 Color and Imaging Conference, Part IV: Featured Talks

CIC typically has several featured talks such as keynotes and an “evening lecture” – these are invited talks about topics of interest to attendees:

Keynote: Color Responses of the Human Brain Explored with fMRI

The first keynote of the conference was given by Kathy Mullen from McGill Vision Research at the McGill University Department of Ophthalmology.

In this keynote, Prof. Mullen (who also taught the course “The Role of Color in Human Vision” this year) discussed research into human vision that uses fMRI (functional magnetic resonance imaging) to measure the BOLD (blood-oxygen-level-dependent) response. This takes advantage of the fact that oxyhemoglobin increases in venous blood during neuronal activity, which increases the intensity of the BOLD signal after a delay of 2-3 seconds. BOLD is imaged volumetrically at a resolution of about 3 mm cubed, which is typical for fMRI.

At a high level, visual information flows from the optic nerve via the thalamus (relayed through the lateral geniculate nucleus – LGN) to the visual cortex in the back of the head. Then it splits into two primary streams – the dorsal stream, thought to be associated with motion, and the ventral stream, thought to be associated with objects. The BOLD experiments attempt to localize particular aspects of human perception more precisely. These experiments involve showing volunteers specific stimuli which are carefully designed to isolate certain visual processing areas.

A few different fMRI studies were discussed; for example, it was found that the different opponency signals (blue-yellow, red-green, and achromatic) have widely differing intensities in the LGN (corresponding roughly to the differing proportions of the cones and opponency neurons driving them), but the cortex responds to all three roughly equally – this implies that selective amplification is occurring between the LGN and the cortex. This amplification also appears to depend on temporal frequency – it does not occur for signals cycling at 8 Hz or faster.

In general, fMRI appears to be a bit of a blunt instrument but it can tell us where in the brain certain things happen, making it a useful complement to psychophysical data and low-level (single neuron) experiments on monkeys.

Keynote: The Challenge of Our Known Knowns

This keynote was given by Robert W. G. Hunt (Michael R. Pointer was supposed to be presenting part of it but couldn’t make it to the conference due to illness).

Dr. Hunt is a titan in the field of color science, following up 36 years at Kodak Research (where his accomplishments included the design of the first commercial subtractive color printer) with 30 years as an independent color consultant. He has written over 100 papers (including several highly influential ones) on color vision, color reproduction, and color measurement, as well as two highly-regarded textbooks: “The Reproduction of Colour” (now in its 6th edition) and “Measuring Colour” (now in its 4th edition). He has won many accolades for his work in the field, including appointment as an Officer of the Order of the British Empire (OBE) in 2009. He has been a constant presence at CIC over the years, teaching courses and giving many keynotes.

This keynote speech focused on factors which are known to have a (sometimes profound) effect on the appearance of colored objects, but for which agreed quantitative measures are not yet available.

Successive Contrast

This is the phenomenon of adaptation to previous images affecting the current image. It needs to be accounted for in (e.g.) motion picture film editing, but there is no measure for it. We need a quantitative representation of successive contrast as a function of the luminance, chromaticity, and time of exposure of the adapting field.

Simultaneous Contrast

The appearance of a color can be greatly affected by the presence of other colors around it. This phenomenon has been known since the mid-19th century. A dark surround is known to make colors look lighter, and a light surround makes them look darker. CIECAM02 and other proposed measures do account for simultaneous contrast, but not for the fact that the strength of the effect depends on the extent of the contact between the color and its surround (for example, the perceived color of a thin “X” embedded in the surround will be affected much more than that of a rectangular patch). A comprehensive quantitative representation would have to be a function of the luminance factor and chromaticity of adjacent areas and the extent of their contact, and include an allowance for cognitive effects.

Assimilation

When stimuli cover small angles in the field of view, the opposite of simultaneous contrast can occur: the colors appear to be more, rather than less, like their surroundings, an effect called assimilation. The likely causes of the effect include scattering of light in the eye, and the fact that the color difference signals in the visual system have lower resolution than the achromatic signal. A quantitative representation of assimilation would have to be a function of the luminance factor and chromaticity of the adjacent areas, the extent of their contact, and the angular subtense of the elements.

Gloss

The surfaces of most objects have some gloss, and the appearance of their colors is affected by the geometry of the lighting. Colors of glossy objects can appear very different in lighting that is diffuse vs. directional. Current methods of measuring gloss do so by measuring the ratio of magnitude of specularly reflected light to incident light at certain angles.

However, such measurements do not account for other factors that affect apparent gloss – geometry of illumination, roughness of objects, and amount of non-specular (diffuse) reflection.

There appear to be two major perceptual dimensions to gloss: contrast gloss (a function of the specular and diffuse reflectance factors) and distinctness of image (a function of specular spread). A quantitative representation of gloss might have to be a function or functions of not just specular reflectance factor, diffuse reflectance factor, and spread of specular reflection, but also of the geometry of illumination.

Translucency

Translucency is very important in the apparent quality of foodstuffs. Like gloss, it also appears to have two main perceptual dimensions: clarity (the extent to which fine detail can be perceived through the material) and haze (the extent to which objects viewed through the material appear to be reduced in contrast). Clarity and haze can be measured with the right apparatus. A quantitative representation of translucency might have to be separate functions of clarity and haze, but the relationship between these requires further research.

Surface Texture

Pattern is a fundamental attribute belonging to a surface; texture is a parameter relating to the perception of that pattern, which will, among other variables, be a function of the viewing distance. Surface textures can be characterized in terms of structure (structured-unstructured), regularity (irregular-regular) and directionality (directional-isotropic). The measurement of texture is still in its infancy. A quantitative representation of texture might have to be a function of structure, regularity and directionality. Some early research into this area is being done by people working in machine vision.

Summary

Are color, gloss, translucency, and surface texture all independent phenomena? They all derive from the optical properties of materials. Gloss can have a large effect on color (light reflections decrease saturation), and surface texture has been found to have a large effect on gloss. All these phenomena are known, but quantitative measures are lacking – many industries would benefit from them.

Evening Lecture: Exploring the Fascinating World of Color Beneath the Sea

The lecture was given by David Gallo, Director of Special Projects at the Woods Hole Oceanographic Institution. David Gallo is a prominent undersea explorer and oceanographer who has used manned submersibles and robots to map the ocean world with great detail. Among many other expeditions, he has co-led expeditions exploring the RMS Titanic and the German battleship Bismarck.

The talk was very interesting and included a lot of stunning imagery, but is hard to summarize. David Gallo emphasized the importance of the ocean, and the fact that humans have only explored about 5% of it. Images were shown of underwater lakes, rivers, and waterfalls (formed of denser and saltier water than the surrounding ocean), as well as a variety of bioluminescent creatures from both mid and deep waters, including species living around poisonous hydrothermal vents. Acquiring high-quality color images in the deep ocean is a difficult challenge, ably met by David’s colleague William Lange at the Woods Hole Advanced Imaging and Visualization Lab.

Keynote: The Human Demosaicing Algorithm

This keynote was given by Prof. David Brainard, from the Department of Psychology at the University of Pennsylvania.

Almost all color cameras use interleaved trichromatic sampling – not all channels are sampled at every pixel; instead there is a mosaic (e.g., the Bayer mosaic). The output of this mosaic must be processed (“demosaiced”) into a trichromatic image. This involves information loss, often resulting in artifacts such as magenta-green fringing. Algorithms are constantly being improved to reduce the artifacts, but they can never be completely eliminated.
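As a rough illustration of what demosaicing involves (a generic bilinear scheme with an assumed RGGB layout, not any particular camera's pipeline), each missing channel value can be filled in by averaging whatever samples of that channel exist in a small neighborhood:

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(raw):
    """Bilinear demosaic of a single-channel RGGB Bayer image.

    raw: 2-D float array of mosaic samples.
    Returns an (H, W, 3) RGB array, filled in by normalized
    convolution: the average of the available samples of each
    channel within each 3x3 window.
    """
    h, w = raw.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        "R": (yy % 2 == 0) & (xx % 2 == 0),
        "G": (yy % 2) != (xx % 2),          # two G sites per 2x2 block
        "B": (yy % 2 == 1) & (xx % 2 == 1),
    }
    kernel = np.ones((3, 3))
    out = np.zeros((h, w, 3))
    for i, ch in enumerate("RGB"):
        m = masks[ch].astype(float)
        num = convolve(raw * m, kernel, mode="reflect")
        den = convolve(m, kernel, mode="reflect")
        out[..., i] = num / den
    return out
```

A uniform scene is reconstructed exactly by such a scheme; the color fringing mentioned above appears where the image contains detail near the sampling frequency of the mosaic, which is exactly where averaging neighbors misleads.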

The human retina has the same interleaved design; there is only one cone (long – L, medium – M, or short – S) at each location. S cones are sparse, and the L & M cones are arranged in a quasi-random pattern. The same ambiguities exist, but we very rarely see these chromatic fringes. High enough frequencies do reach the retina to cause such artifacts in theory, so some very clever algorithm must be at work.

Making the problem more complicated, there are very large differences between observers in proportions of L, M and S cones, even among observers that test with normal color vision. Some people have a lot more M than L, or the other way around. Yet somehow this does not affect their color perception.

There are two functional questions:

  1. How does the human visual system (HVS) process the responses of an interleaved cone mosaic to approximate full trichromacy?
  2. How do individuals with very different mosaics perceive color in the same way?

Prof. Brainard uses a Bayesian approach to analyze cases where sensory data is ambiguous about physical variables of interest. He picked this approach because it has simple underlying principles, provides an optimal performance benchmark, and is often a good choice for the null hypothesis about performance.

The basic Bayesian recipe is: model the sensory system as a likelihood, express statistical regularities of the environment as a prior distribution, and apply Bayes’ rule.

Prof. Brainard gave a simple example to illustrate the use of Bayesian priors and posteriors, and then scaled it up to the systems used in his work.
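In the same spirit, a toy scalar version of the recipe (invented numbers and a Gaussian prior and likelihood; not the example or model from the talk) shows how the prior pulls a noisy, ambiguous observation toward what is statistically typical:

```python
def posterior_gaussian(x, sigma_n, mu0, sigma0):
    """Posterior over a scalar signal s given one noisy sample.

    Likelihood: x ~ N(s, sigma_n^2); prior: s ~ N(mu0, sigma0^2).
    Bayes' rule for this conjugate pair yields another Gaussian whose
    mean is a reliability-weighted blend of the data and the prior.
    """
    w = sigma0**2 / (sigma0**2 + sigma_n**2)  # weight given to the data
    mean = w * x + (1 - w) * mu0
    var = (sigma0**2 * sigma_n**2) / (sigma0**2 + sigma_n**2)
    return mean, var

# With equally reliable data and prior, the estimate sits halfway:
# posterior_gaussian(1.0, 1.0, 0.0, 1.0) -> (0.5, 0.5)
```

The scaled-up versions used in vision research replace the scalar with an image, the prior with natural-scene statistics, and the likelihood with a model of the cone mosaic, but the logic is the same.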

The Bayesian system was able to correctly predict many facets of human color vision, including the demosaicing, and how people with very different cone distributions are able to have similar color vision. To stress-test the predictive power of the model, they examined an experiment by Hofer et al. (2005) which used corrective optics to image spots of light on the retina smaller than the spacing between adjacent cones. The results predicted by the Bayesian system matched Hofer’s results.

For this method to work, the visual system must process the cone output with ‘knowledge’ of the type of each cone at a fine spatial scale. It appears that the brain needs to learn the cone types, since there doesn’t seem to be a biochemical marker. Another experiment was done to determine how well cones can be typed via unsupervised learning, and it was found that this is indeed possible – about 2500 natural images are sufficient for a system to learn the type of each input in the mosaic.
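As a loose illustration of that last result (simulated responses with invented statistics, not Brainard's actual procedure), sensor types can be recovered without supervision from response correlations alone, because sensors of the same spectral class respond similarly to the same scenes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_images = 40, 2500
types = rng.integers(0, 2, n_sensors)  # hidden: 0 = "L-like", 1 = "M-like"

# Each "image" drives two underlying channels; each sensor reads the
# channel matching its hidden type, plus independent noise.
channels = rng.normal(size=(2, n_images))
responses = channels[types] + 0.5 * rng.normal(size=(n_sensors, n_images))

# Unsupervised typing: correlate every sensor with sensor 0 and split
# on a correlation threshold (same-type pairs correlate strongly).
corr = np.corrcoef(responses)[0]
guess = (corr < corr.mean()).astype(int)

# The labeling is only recoverable up to a global swap of the two names.
agree = max((guess == types).mean(), (guess != types).mean())
```

With 2,500 simulated images the split is essentially perfect, loosely echoing the talk's figure for how much natural-image data suffices to type every input in the mosaic.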

None of this proves that the HVS works in this way, but it does show that it is possible and correctly predicts all the data. In the future, Prof. Brainard plans to explore engineering applications of these methods.