Two talks on minimizing free energy during belief coordination

Hi there, it has been a while.

I'm currently busy writing up my dissertation, so I probably shouldn't be writing here. But it's late and I'm still a bit psyched that I've just been contacted twice about my recent talk on my modeling approach, its application in the context of social interaction, and its implications.

Here is a link to the abstract for the talk I just gave at the Computational Cognition workshop in Osnabrück:

I gave the other talk (on basically the same topic, with a different spin) at the EuroCogSci conference in Bochum a couple of weeks earlier.

I've been working on this model for over four years now, through different iterations of the implementation and further developments that came close to making me start yet another iteration. But maybe that's not the best idea when you want to finish your thesis soon, is it?

Well, first there was this idea to implement a simple MOSAIC-type model for motor coordination. This quickly grew into what can only be described as modeling the dynamics of the functional networks of the so-called social brain, and further, into understanding what the dynamics within a social brain mean for the dynamics between agents in social interaction. Now it's four years later, and I've stumbled my way towards a probabilistic hierarchical model of sensorimotor processes and mentalizing processes.

A central aspect of this model is its basis in the predictive processing framework, especially active inference. The framework's central idea is that every level of the processing hierarchy tries to predict the dynamics at the next lower level. This goes all the way down to the sensory and motor areas, in direct contact with the environment. In theory, an erroneous prediction leads to a switch to a better-fitting prediction, if one is available, to keep prediction error down. But that is simple inference in a predictive processing hierarchy. It gets more interesting when you think about how such a system would trigger action.
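To make the switching idea concrete, here is a minimal toy sketch of perceptual inference in a two-level hierarchy. Everything in it is my own illustration, not the thesis implementation: the higher level holds candidate predictions of the lower, sensory level, and a large prediction error triggers a switch to the best-fitting candidate.

```python
def prediction_error(prediction, observation):
    """Squared distance between predicted and observed sensory state."""
    return (prediction - observation) ** 2

def infer(hypotheses, observation, current, threshold=0.1):
    """Keep the current hypothesis unless its error exceeds the
    threshold; then switch to the best-fitting alternative."""
    if prediction_error(hypotheses[current], observation) <= threshold:
        return current
    return min(hypotheses,
               key=lambda h: prediction_error(hypotheses[h], observation))

# Two (made-up) high-level hypotheses about the incoming sensory signal:
hypotheses = {"reach": 1.0, "rest": 0.0}
state = "rest"
for obs in [0.05, 0.1, 0.9, 0.95]:  # the signal drifts toward "reach"
    state = infer(hypotheses, obs, state)
print(state)  # -> reach
```

The point of the toy is only the mechanism: nothing changes as long as the current prediction fits, and a mismatch is resolved by whichever prediction explains the input best.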

For action to happen, you need to handle your predictions in a special way. In some sense, you fix your higher-level prediction of what your action should look like, making your motor system responsible for finding an appropriate state that can again minimize prediction error. In effect, your body moves so that you meet your predictions.
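The same error signal from before can be read the other way around. In this hedged sketch (again my own toy, not the actual model), the high-level prediction of a hand position is held fixed, and the "motor system" reduces prediction error by moving the hand rather than by revising the prediction:

```python
def act(predicted, actual, rate=0.5, steps=20):
    """Descend the prediction error by changing the body state,
    while the prediction itself stays fixed."""
    for _ in range(steps):
        error = predicted - actual
        actual += rate * error  # movement, driven by the error
    return actual

hand = act(predicted=1.0, actual=0.0)
print(round(hand, 3))  # -> 1.0 (the hand ends up where it was predicted to be)
```

Inference and action thus minimize the same quantity; the only design choice is which side of the error is allowed to change.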

When you now switch to social interaction with another agent, you need to start thinking about inferring the other's beliefs. You don't have direct access to those beliefs; you have to infer them from the other's behavior. The basic idea I follow is simply to extend the model so that it can coordinate these beliefs reciprocally. In other words: the back and forth of perceived beliefs leads to a shared understanding for all interaction partners.
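The back-and-forth can be sketched as two coupled update loops. This is a deliberately crude illustration under my own assumptions (behavior is treated as a direct readout of belief, and each agent simply nudges its belief toward its estimate of the other's); the thesis model is hierarchical and probabilistic, which this toy is not:

```python
def coordinate(belief_a, belief_b, rate=0.3, rounds=30):
    """Two agents reciprocally adapt their beliefs based on the
    behavior each observes from the other."""
    for _ in range(rounds):
        behavior_a, behavior_b = belief_a, belief_b  # behavior expresses belief
        belief_a += rate * (behavior_b - belief_a)   # A infers B's belief, adapts
        belief_b += rate * (behavior_a - belief_b)   # B does the same in turn
    return belief_a, belief_b

a, b = coordinate(0.0, 1.0)
print(round(a, 3), round(b, 3))  # -> 0.5 0.5 (a shared understanding emerges)
```

Even in this reduced form, the qualitative outcome matches the claim in the text: neither agent ever accesses the other's belief directly, yet the reciprocal loop drives both onto a common value.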

For the full idea, I'll probably have to write a summary of my thesis. But just to tease you, I think the combination of active inference and a process of establishing shared understanding can lead to a subjective sense of direct access to the other agent's beliefs.