
Cortical colouring in: what does your brain do with black-and-white images?

New research into the cortical basis for colour vision suggests that our brain ‘fills in’ black-and-white images, based on our memory of what the colour should be

You’re looking out on a vast plain of crops of all different colours, shapes and sizes. Flowers of reds, yellows and greens explode under a sea of blue light from the sky. For the most part, though, it’s all in your head. A green plant isn’t actually green; it simply absorbs all of the wavelengths of light apart from those in a specific range, and the colour receptors at the back of your eye respond to that reflected light in a way your brain interprets as green. In this sense, colour is a deeply subjective property, and it raises one of the all-time great philosophical questions – is your red the same as my red?

Despite this, our brains do a pretty good job of keeping colours constant for us. When we look at a banana, for instance, it will look yellow regardless of the conditions we’re viewing it in. Bright white fluorescent light, a deep red sunset, the dull grey of dusk – it doesn’t matter, we still see a yellow banana. This stability is mainly due to a phenomenon called colour constancy, but for objects that have a particularly strong colour association it can also be influenced, in part, by the fact that we know what colour those objects should be. This idea is called ‘memory colour’ – bananas are yellow, broccoli is green, strawberries are red. But what happens when we see these sorts of objects in black-and-white?

A recent study published in Current Biology looked at exactly this question. Michael Bannert and Andreas Bartels from the University of Tübingen in Germany asked 18 participants to perform a motion perception task while undergoing a functional MRI (fMRI) scan. In the first part of the experiment, the participants were shown rotating black-and-white pictures of bananas, broccoli, strawberries and other objects with strong colour associations, and asked which way the pictures were moving. Later on, they were shown rings coloured red, green, yellow or blue. Framing the experiment as a motion perception task was a ruse – Bannert and Bartels wanted to keep the participants’ attention on the screen without having them concentrate on anything other than the movement of the objects. What the researchers were really interested in was whether the brain was encoding the colours of objects presented in greyscale.

To look at this, they performed a multi-voxel pattern analysis (MVPA). Traditional fMRI analysis techniques try to detect activation of structures within the brain that might correspond to a given behaviour or respond to a certain type of stimulus. fMRI detects where in the brain oxygenated blood is being directed, and the idea is that those areas need more oxygen because they’re being used in the task at hand. So say you run a study in which you present participants with a series of basic objects, such as squares or triangles, and in some trials they’re moving while in others they’re staying still. Provided you’ve run a well-controlled study, you might conclude that areas of the brain showing activation in the motion trials but not the static trials are involved in motion perception in some way.
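To make that logic concrete, here’s a minimal sketch of the classic voxelwise approach in Python – a toy illustration, not any particular package’s pipeline. It assumes per-trial response amplitudes have already been estimated (real analyses get these from a model fit in tools such as SPM, FSL or nilearn), and all of the data and variable names here are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_motion, n_static, n_voxels = 40, 40, 500

# Simulated per-trial response amplitudes (trials x voxels); the first 50
# voxels respond more strongly during motion trials.
motion = rng.normal(0.0, 1.0, (n_motion, n_voxels))
motion[:, :50] += 1.0
static = rng.normal(0.0, 1.0, (n_static, n_voxels))

# One independent-samples t-test per voxel: motion trials vs static trials.
t_vals, p_vals = stats.ttest_ind(motion, static, axis=0)

# Voxels surviving a Bonferroni-corrected threshold count as "activated";
# everything below threshold is simply discarded.
activated = p_vals < 0.05 / n_voxels
print(f"{activated.sum()} of {n_voxels} voxels show a motion effect")
```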

But this discards a vast amount of data – any areas of the brain that show weak activation aren’t considered for further analysis, as they’re assumed to not be substantially involved in whatever it is that you’re measuring. MVPA is different, in that it looks for patterns of activity that incorporate both strong and weak responses. The neat thing about this sort of analysis is that you can use this information to train computer algorithms, called classifiers, to learn the associations between patterns of activity and real stimuli. Once they’ve learned the correct associations, the classifiers can then be used to predict what a person was looking at, based on their brain activity.
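As a rough illustration of that idea (a sketch, not the authors’ actual pipeline), here’s how a linear classifier from scikit-learn might be trained on simulated multi-voxel patterns and scored on held-out trials – if it can decode the stimulus above chance, the patterns must carry stimulus information:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 30, 200
colours = ["red", "green", "yellow", "blue"]

# Give each colour its own weak, distributed voxel pattern, then add
# trial-to-trial noise: the signal lives in the pattern as a whole,
# not in any single strongly activated voxel.
patterns = {c: 0.3 * rng.normal(size=n_voxels) for c in colours}
X = np.vstack([rng.normal(0.0, 1.0, (n_trials, n_voxels)) + patterns[c]
               for c in colours])
y = np.repeat(colours, n_trials)

# Cross-validated decoding accuracy; chance level is 1/4 for four colours.
accuracy = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = 0.25)")
```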

Bannert and Bartels first trained classifiers to distinguish between the patterns of fMRI responses to the real colours. Then, by looking at activity in three areas of the brain – the primary visual cortex (V1), a colour-processing area (V4), and an object-processing area called the lateral occipital cortex (LOC) – they were able to predict the memory colour of the black-and-white images. In other words, even though pictures of things like bananas and strawberries were presented in black-and-white, the brain seems to be encoding the colour that the objects should be.
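The crucial move is that the classifiers were never trained on the greyscale objects at all – they were trained on the coloured rings and then tested on the objects. Here’s a hedged sketch of that cross-decoding logic, using simulated data in which the greyscale objects are simply assumed to re-evoke a faint copy of their memory colour’s pattern (the object names and signal strengths are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels = 30, 200
colours = ["red", "green", "yellow"]
patterns = {c: 0.4 * rng.normal(size=n_voxels) for c in colours}

def trials(colour, strength=1.0, n=n_trials):
    """Simulated trials: noise plus a scaled copy of a colour's pattern."""
    return rng.normal(0.0, 1.0, (n, n_voxels)) + strength * patterns[colour]

# Train on responses to genuinely coloured rings...
X_train = np.vstack([trials(c) for c in colours])
y_train = np.repeat(colours, n_trials)

# ...then test on greyscale objects, assumed here to faintly re-evoke
# their memory colour (strawberry -> red, broccoli -> green, and so on).
memory_colour = {"strawberry": "red", "broccoli": "green", "banana": "yellow"}
X_test = np.vstack([trials(c, strength=0.5) for c in memory_colour.values()])
y_test = np.repeat(list(memory_colour.values()), n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = (clf.predict(X_test) == y_test).mean()
print(f"cross-decoding accuracy: {accuracy:.2f} (chance = 0.33)")
```

Above-chance accuracy in this setup means the patterns evoked by the greyscale objects overlap with those evoked by real colour – which is the logic behind the study’s headline result.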

“This is actually one of the most elegant experimental designs I've seen in a long time – one of those experiments I wish I'd thought of,” says Dr Matt Wall, an imaging scientist based at Imperial College London. “It reminds me of the work showing activation in V5/MT (an area which normally responds to motion) to static stimuli. All of these studies are broadly about seeing things that aren't really there, and showing corresponding brain activity in the visual cortex.”

One of the stranger aspects of this study was that activity in V1 seemed to be encoding the memory colour of objects. We usually think of V1 as one of the simplest visual areas of the brain, mainly responsible for very basic properties such as the gross organisation and orientation of whatever we’re looking at. Colour is quite a complex property that’s usually associated with area V4, and memory colour even more so. The fact that memory colour could be predicted from activity in V1 seems to imply that it receives strong feedback from higher visual areas, and this study adds to a growing number of findings that are shifting our thinking about V1’s general role in vision. Rather than relying purely on raw sensory input (so-called "bottom-up" processing), it seems that V1 is heavily influenced by factors such as prior knowledge about objects in our environment (what’s known as "top-down" processing).

Understanding how previously obtained information affects visual processing at such an early stage in the brain has implications beyond colour vision. For example, imagine you’re back looking at that wide expanse of crops again. It’s starting to cloud over, but then you glimpse a strange blob in the sky. Is it a UFO? Is it a weather balloon? Or is it a lenticular cloud? Your answer, it seems, is based on more than meets the eye.

