Cognitive Science Symposium
 

Bastos abstract

We all agree that the brain computes and uses predictions to inform our cognition and behavior. Unpredictable inputs take longer to process, and prediction plays a role in visual and social cognition. In addition, the circuits for prediction are impaired in autism and schizophrenia. But how is prediction implemented at the neuronal level? The theory of predictive coding proposes a candidate mechanism: higher-order brain regions build up predictions (PDs) and send them down the cortical hierarchy to sensory areas, where they interact with feedforward sensory input. Predictions either match the incoming sensory data or they do not: when they do not match, a prediction error (PE) is generated and used to update the next PD. During my PhD, Karl Friston and I proposed a neural architecture for PE and PD computations (Bastos et al., 2012). In this circuit, PE and PD computations rely on specific cortical layers, cell types, and frequencies: superficial (layers 2 and 3) gamma-frequency rhythms (40-100 Hz) signal feedforward PE, and deep (layers 5 and 6) alpha/beta-frequency rhythms (8-30 Hz) signal feedback PD. In this talk, I will review the underlying anatomy, neurophysiology, and theoretical ideas that led us to these hypotheses. I will also discuss recent experimental work that has tested critical aspects of this model using high-density laminar recordings in macaques as they perform a cognitive task that manipulates the predictability of visual cues.
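
The prediction/prediction-error loop described above can be illustrated with a minimal sketch. This is not code from Bastos et al. (2012); the variable names, the fixed sensory input, and the learning rate are assumptions chosen only to show how a mismatch (PE) drives the update of the next prediction (PD).

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 8        # dimensionality of the sensory input (assumed for illustration)
learning_rate = 0.1   # how strongly each prediction error updates the prediction (assumed)

prediction = np.zeros(n_features)            # feedback prediction (PD) from a higher-order area
sensory_input = rng.normal(size=n_features)  # a fixed feedforward input, for simplicity

for step in range(50):
    # Feedforward prediction error (PE): mismatch between sensory input and prediction
    prediction_error = sensory_input - prediction
    # The feedback prediction (PD) is updated by the error, reducing future mismatch
    prediction += learning_rate * prediction_error

print("remaining error:", np.linalg.norm(sensory_input - prediction))
```

With repeated updates the error shrinks toward zero, which is the sense in which the prediction comes to "explain away" the predictable part of the input in this toy setting.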

Enns abstract

For much of the past century, perception research and theory were conducted without much thought for the potential role of predictive mechanisms. Occam's-razor thinking of the day deemed prediction too messy a construct. This began to change as neuroscience reported new data and computer scientists realized that without prediction little intelligent work could be done. My colleagues and I first encountered the power of prediction in the 1990s when trying to account for new behavioral data on visual masking, motion perception, visual search, and visually guided rapid actions. In the 2000s we extended this way of thinking to help us understand the perception of emotion in others, the perception of others' social interactions, the ability to read others' intentions in the fine-grained details of their actions, and the use of body language in tasks of collaborative cognition and joint action.

Pickering abstract

What do comprehenders predict, and how do they use those predictions? I first present a series of “visual world” experiments suggesting that comprehenders initially predict in an automatic, associative manner and then shift to predicting in an allocentric manner, so that they predict what the speaker or character under discussion is likely to say. I argue that comprehenders predict by simulating the speaker, adjusting for differences between self and other, and then engaging aspects of the system that they use to produce language. I then integrate such prediction into a theory of dialogue as cooperative joint activity, in which interlocutors use shared plans to “post” contributions to a “shared workspace” that holds those contributions together with relevant non-linguistic context. They predict the upcoming state of this workspace, in a way that leads to alignment of their linguistic representations and their models of the situation under discussion.