Abstract: Bayesian brain theories suggest that perception, action and cognition arise as creatures minimise the mismatch between their expectations and reality. This principle could unify cognitive science with the broader natural sciences, but leave key elements of cognition and behaviour unexplained.
See also: Yon, Daniel, Carl Bunce, and Clare Press. 2019. “Illusions of Control Without Delusions of Grandeur.” PsyArXiv, November 21. doi:10.31234/osf.io/zk4vg
Abstract: We frequently experience feelings of agency over events we do not objectively influence – so-called ‘illusions of control’. These illusions have prompted widespread claims that we can be insensitive to objective relationships between actions and outcomes, and instead rely on grandiose beliefs about our abilities. However, these illusory biases could instead arise if we are highly sensitive to action-outcome correlations, but attribute agency when such correlations emerge simply by chance. We motion-tracked participants while they made agency judgements about a cursor that could be yoked to their actions or follow an independent trajectory. A combination of signal detection analysis, reverse correlation methods and computational modelling indeed demonstrated that ‘illusions’ of control could emerge solely from sensitivity to spurious action-outcome correlations. Counterintuitively, this suggests that illusions of control could arise because agents have excellent insight into the relationships between actions and outcomes in a world where causal relationships are not perfectly deterministic.

And: ‘Now you see it’. Our brains predict the outcomes of our actions, shaping reality into what we expect. That’s why we see what we believe. Daniel Yon, Aeon, November 2019. https://aeon.co/essays/how-our-brain-sculpts-experience-in-line-with-our-expectations
---
Why do we do anything at all? In everyday life we tend to explain the behaviour of ourselves and other creatures in terms of beliefs and desires. For example, we might say that a rat pulls a lever or a scientist runs an experiment because they believe that certain outcomes will ensue (a piece of food or a piece of data, respectively) and because these are outcomes they desire (e.g., because they are hungry or curious).
The idea that action is motivated by belief-like and desire-like representations – respectively defining which states of the world are most probable and most valuable (see Box 1) – is also a deep feature of many programmes of work across the cognitive sciences. For example, cognitive models suggest goal-directed action depends on separate associations between actions and outcomes (instrumental beliefs) and outcomes and values (incentives)1,2. A similar distinction is fundamental to models of economic choice, where decisions are thought to reflect a combination of utilities (how good is this option?) and probabilities (how certain am I to obtain it?)3.
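To make this two-ingredient scheme concrete, here is a minimal Python sketch (the option names, utilities and probabilities are all invented for illustration and are not taken from the cited models) in which belief-like probabilities and desire-like utilities are stored separately and combined only at the point of choice:

```python
# Minimal sketch: decisions as a combination of desire-like utilities
# and belief-like probabilities. All names and numbers are invented
# for illustration; nothing here is taken from the cited models.

options = {
    "press_lever": {"utility": 1.0, "probability": 0.8},  # food: valuable, likely
    "groom":       {"utility": 0.2, "probability": 1.0},  # certain but low value
}

def expected_value(option):
    # Beliefs and desires are kept as separate quantities
    # and are only combined here, at the moment of decision.
    return option["utility"] * option["probability"]

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # -> press_lever (expected value 0.8 beats 0.2)
```

The point of the sketch is structural: delete either ingredient and the choice becomes undefined, which is precisely the intuition that single-state ‘prediction’ accounts must reconstruct.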
However, in recent decades cognitive scientists have been enticed by the possibility that the familiar double act of beliefs and desires can be replaced by theories that explain behaviour using only one kind of internal state – ‘prediction’ (see Fig. 1)4. These predictive processing accounts5 assume that the brain acts as a model of the extracranial world, optimised to fit information arriving at the senses. According to this view, the brain is structured in a hierarchical way such that ‘higher’ cortical areas embody hypotheses about the activity expected in ‘lower’ areas, which in turn send information up the processing hierarchy signalling the mismatch or ‘error’ between prediction and reality. This structure allows the brain to optimise its fit to the outside world through two kinds of process or ‘inference’. The first is perceptual inference, where incoming sensory signals are used to adjust hypotheses at higher levels, such that the hypotheses more closely match the outside world. The second is active inference, where strong top-down predictions engage muscles and organs to drive action, changing states of the body and the world such that they conform with the prior predictions. More simply put, the brain can either revise its predictions to match the world or change the world to make the predictions come true.
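A toy sketch may help fix ideas. In the hypothetical Python snippet below (a one-variable caricature with invented learning rates, not any published implementation), the same prediction-error signal can be cancelled in two ways: by updating the prediction (perceptual inference) or by changing the world (active inference):

```python
# Toy illustration of the two routes to error minimisation described
# above. A single scalar prediction stands in for a whole cortical
# hierarchy; the rates and values are arbitrary choices for the sketch.

class World:
    def __init__(self, state):
        self.state = state  # e.g., 0.0 = hand by my side

class PredictiveAgent:
    def __init__(self, prediction):
        self.prediction = prediction  # e.g., 1.0 = hand on the mug

    def error(self, sensation):
        # Mismatch between bottom-up input and top-down prediction.
        return sensation - self.prediction

    def perceptual_inference(self, sensation, rate=0.5):
        # Route 1: revise the hypothesis to fit the world.
        self.prediction += rate * self.error(sensation)

    def active_inference(self, world, rate=0.5):
        # Route 2: act on the world so it fits the hypothesis.
        world.state -= rate * self.error(world.state)

# Route 2 in action: the prediction stays fixed, the world moves.
world = World(state=0.0)
agent = PredictiveAgent(prediction=1.0)
for _ in range(10):
    agent.active_inference(world)   # "reflexes" move the hand
print(round(world.state, 3))        # -> 0.999, approaching the predicted 1.0

# Route 1 in action: the world stays fixed, the prediction moves.
observer = PredictiveAgent(prediction=1.0)
for _ in range(10):
    observer.perceptual_inference(sensation=0.0)
print(round(observer.prediction, 3))  # -> 0.001, converging on the input
```

Note that both routes shrink the same quantity; what differs is which side of the agent-world boundary gets updated.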
Proponents of this view4 suggest that these models leave us with a ‘desert landscape’ view of cognition, where mental states once thought to be crucial in explaining behaviour – such as goals, drives and desires – are boiled down to predictions. Under this view “desired outcomes [are] simply…those that an agent believes, a priori, it will obtain”6. Here, the hungry rat presses the lever because it expects itself to press, since it expects not to be hungry in the future.
One particularly attractive feature of this predictive processing scheme is its potential to integrate cognitive science with other life and social sciences through a common set of principles. For example, it can be shown that any plausible biological system – whether brain, bacterium or birch tree – behaves as though it possesses a predictive model of its environment, and acts in ways that improve the fit between this model and the outside world7,8. More recently, it has been suggested that the same mathematical principles can explain cultural evolution, with ‘cultural ensembles’ equipping their members with shared predictions about the outside world, the contents of which are adjusted in response to cultural prediction errors9. The idea that a range of natural phenomena, on a range of scales, can be understood as prediction-making, error-minimising systems commends predictive processing models to ‘unity of science’ enthusiasts. These models are a boon for scientists who seek continuity between the principles explaining human and animal behaviour and those explaining the rest of the natural world.
However, the unifying potential of predictive processing models may come at a cost to explanatory power. There may still be good reasons for the cognitive scientist to retain concepts of belief-like and desire-like states in their theoretical arsenal. For example, predictive processing models of active inference assume that we act by generating (false) predictions about the states of our body (e.g., my hand is over there) and enslaving peripheral reflexes to make the prediction come true (i.e., move it). While this formulation provides an elegant account of how motor commands are generated and unpacked in the spinal cord, and there would be little dispute that goals are achieved through error minimisation processes, a key ingredient in this scheme is the assumption that agents suspend perception of their actions until their predictions are realised10. This assumption is required because one state plays the role of both belief and desire – I cannot simultaneously represent with one state that my hand is by my side and that I would like it to be grasping the mug. These incarnations of the model therefore appear difficult to reconcile with evidence that agents can simultaneously act and perceptually monitor their actions as they unfold – for example, when adapting to unexpected perturbations in a visually guided reaching movement11. It is unclear that there is a straightforward solution to this problem. This kind of sensory-guided goal-directed action is compatible with there being some levels in the hierarchy that do not distinguish between belief-like and desire-like information1,11, but not with the absence of this distinction at all levels.
Retaining the distinction between belief-like and desire-like states may also help clinical scientists explain atypical aspects of action. For example, neuropsychiatric work has shown that addicts can expect drugs to be unrewarding, yet still feel strong compulsions to consume them – with expectations about the pleasantness of consumption (‘liking’) and about one’s future actions (‘wanting’) subserved by dissociable mechanisms12. A similar distinction may be important in obsessive compulsive disorder, where patients feel strong urges to perform actions they believe to be causally impotent13. Such experiences are difficult to explain without distinguishing desire-like and belief-like mechanisms (see Box 1).
Intriguingly, some recent predictive processing models may point towards rejecting the desert landscape view of cognition, by emphasising that agents like us can act in ways that minimise future prediction errors14. These temporally deep models describe agents that maintain separate predictions about states of the world and predictions about plausible actions they could perform. This feature could allow the distinction between beliefs and desires to be reintroduced. However, in doing so, proponents of predictive processing must also accept that the aim of unifying scientific explanation is only partially achieved. The desert landscape of cognition is not as bare as it seems, and we must accept that there is a discontinuity between different types of mental state, and between error-minimising systems that possess predictions about the future (e.g., animals) and those that do not (e.g., viruses).
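To illustrate what keeping these two kinds of prediction apart might look like computationally, here is a hypothetical Python sketch of a deliberately tiny planner (the states, transition table and preference values are all invented): beliefs live in a transition model, desires in a separate preference function, and behaviour emerges from evaluating whole action sequences against preferred futures.

```python
# Sketch of a "temporally deep" agent in the sense gestured at above:
# it evaluates candidate action sequences (policies) by predicting the
# future states they lead to, and scores those futures against a
# separate set of prior preferences. All values here are invented.

from itertools import product

transitions = {  # belief-like: predicted next state given (state, action)
    ("hungry", "press"): "fed",
    ("hungry", "wait"):  "hungry",
    ("fed",    "press"): "fed",
    ("fed",    "wait"):  "fed",
}

preferences = {"fed": 1.0, "hungry": -1.0}  # desire-like: preferred futures

def score(policy, state="hungry"):
    # Sum preference over the states a policy is predicted to visit.
    total = 0.0
    for action in policy:
        state = transitions[(state, action)]
        total += preferences[state]
    return total

policies = list(product(["press", "wait"], repeat=2))  # all two-step plans
best = max(policies, key=score)
print(best)  # -> ('press', 'press'): plans that reach "fed" score highest
```

The design point is that `transitions` and `preferences` are separate objects: one can change what the agent wants without touching what it believes, and vice versa, which is exactly the dissociation a single-state account struggles to express.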
In conclusion, prominent predictive processing models have suggested that it is possible to do away with traditional concepts of belief and desire, explaining all cognition and behaviour in terms of predictions. This account holds promise for uniting the study of the mind with the study of the natural world, given the diverse range of natural systems that can be thought of as ‘error-minimisers’. However, abandoning these concepts may limit cognitive science’s ability to explain the subtleties of motivated action in health and disease. Though both beliefs and desires could be crafted from the sands of a desert landscape, the cognitive scientist may still find them to be as different as concrete and glass.
---
Full text, graphs, references, etc., at the link above.