Precision and the Bayesian brain. Daniel Yon, Chris D. Frith. Current Biology, Volume 31, Issue 17, pages R1026-R1032, September 13, 2021. https://doi.org/10.1016/j.cub.2021.07.044
Summary: Scientific thinking about the minds of humans and other animals has been transformed by the idea that the brain is Bayesian. A cornerstone of this idea is that agents set the balance between prior knowledge and incoming evidence based on how reliable or ‘precise’ these different sources of information are — lending the most weight to that which is most reliable. This concept of precision has crept into several branches of cognitive science and is a lynchpin of emerging ideas in computational psychiatry — where unusual beliefs or experiences are explained as abnormalities in how the brain estimates precision. But what precisely is precision? In this Primer we explain how precision has found its way into classic and contemporary models of perception, learning, self-awareness, and social interaction. We also chart how ideas around precision are beginning to change in radical ways, meaning we must get more precise about how precision works.
Precise and imprecise percepts
Imagine you are walking a particularly disobedient dog. After being let off the leash he leaps into the bushes, and you have to dive in to fetch him out. But you are not entirely sure where he is. You hear the sound of twigs cracking to the left, but you see the leaves shake to the right. Where should you jump in to catch him?
Locating your dog based on a combination of sight and sound is an example of a general class of multisensory integration problems where our perceptual systems have to triangulate different sensory signals. In our example, the visual signal (shaking leaves to the right) and the auditory signal (cracking twigs to the left) both tell us something about one feature of the environment (the dog’s location), and so it makes sense to combine them. But how? A simple approach could be for our brain to average them together — if sight says right and sound says left, we should dive in straight ahead.
But simple averaging turns out to be suboptimal when some signals are more reliable than others. For example, the spatial acuity of vision is much greater than that of hearing, meaning visual estimates of location are considerably more precise than auditory ones. This insight was formalised in Marc Ernst and Martin Banks’ Bayesian model of multisensory integration, which assumes that our perceptual systems combine different signals according to their reliability or uncertainty. Agents are thought to achieve this by keeping track of the noise or variance in different sensory modalities — with low noise taken as an index of high precision — and affording a higher weight to those channels that are more precise.
This idea of ‘precision-weighting’ provides a good account of near-optimal cue integration seen in humans and other animals. Typically, we won’t jump straight to catch the dog, but will veer off to the right as our brains give more credence to the more precise visual signal. Importantly, this idea can also explain why perception sometimes errs, as when we are fooled by a ventriloquist’s dummy. It is common to say that the ventriloquist ‘throws their voice’ so it appears to be coming from the silent puppet. In fact, the illusion of the speaking doll emerges because our perceptual systems infer that the visual and auditory signals come from a common source, but give more weight to what we see than what we hear as we try to pinpoint where this source is located. The perceptual experience is false — the voice is not coming from the dummy — but this can still be thought of as an optimal inference from the brain’s perspective, given that coincident sensory signals often do come from a common source, and visual information about the location of these sources is typically so much more precise.