
Be.AI conference

Biomimetic embodied Artificial Intelligence

Miguel Aguilera

Nonequilibrium Neural Computation: Stochastic thermodynamics of the asymmetric Sherrington-Kirkpatrick model

Effective neural information processing entails flexible architectures integrating multiple sensory streams that vary in time with internal and external events. Physically, neural computation is, in a thermodynamic sense, an out-of-equilibrium, non-stationary process that changes dynamically, giving rise to entropy production. Cognitively, neural activity results in dynamic changes in sensory streams and internal states. In contrast, classical neuroscience theory focuses on stationary, equilibrium information paradigms (e.g., efficient coding theory), which often fail to describe the role of nonequilibrium fluctuations in neural processes. Consequently, there is a pressing demand for mathematical tools to understand the dynamics of large-scale nonequilibrium network systems and to analyse the high-dimensional datasets recorded from them. Inspired by the success of the equilibrium Ising model in investigating disordered systems in the thermodynamic limit, we study the nonequilibrium thermodynamics of the asymmetric Sherrington-Kirkpatrick model with both synchronous and asynchronous updates as a prototypical model of large-scale nonequilibrium networks. We employ a path-integral method to calculate a generating functional over trajectories, deriving exact solutions for the order parameters, the conditional entropy of trajectories, and the steady-state entropy production of infinitely large networks. Inspecting the system's phase diagram, we find that entropy production peaks at the critical order-disorder phase transition, but is even more prominent outside the critical regime, especially in disordered phases with low entropy rates. While entropy production is becoming a popular measure for characterizing various complex systems, our results reveal that increased entropy production can be linked to radically different dynamical scenarios. Combining multiple thermodynamic quantities yields a more precise picture of the different temporally irreversible spiking patterns. These results contribute to an understanding of the distinct roles such regimes may play in neural computation, in the light of an exact analytical theory of the thermodynamics of large-scale nonequilibrium systems and their phase transitions.
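For readers who want to experiment, here is a minimal numerical sketch of the asymmetric kinetic Sherrington-Kirkpatrick model with synchronous updates, estimating entropy production directly from sampled trajectories. This is an illustration only, not the exact path-integral solution described above; the network size, temperature and coupling statistics are assumed purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the paper's exact values)
N, beta, T = 500, 1.2, 2000
J0, dJ, H = 1.0, 0.5, 0.0

# Fully asymmetric couplings: J[i, j] is drawn independently of J[j, i]
J = rng.normal(J0 / N, dJ / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

def log_trans_prob(s_new, s_old):
    """log P(s_new | s_old) under synchronous (parallel) Glauber dynamics."""
    h = H + J @ s_old
    return np.sum(beta * s_new * h - np.log(2 * np.cosh(beta * h)))

s = rng.choice([-1.0, 1.0], size=N)
sigma = 0.0
for t in range(T):
    h = H + J @ s
    s_new = np.where(rng.random(N) < 1 / (1 + np.exp(-2 * beta * h)), 1.0, -1.0)
    # Path-wise entropy production: forward minus time-reversed log-probability
    sigma += log_trans_prob(s_new, s) - log_trans_prob(s, s_new)
    s = s_new

print(f"magnetisation ~ {s.mean():.3f}, entropy production per step ~ {sigma / T:.3f}")
```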

Henry Shevlin

Uncanny communicators: the coming age of Social AI

Since the launch of ChatGPT in November last year, there has been growing public awareness that generative AI will soon have massive impacts on our professional lives. What is less widely recognised, however, is the pervasive influence that AI will soon have on our social and romantic lives. Or so I argue in this talk. Beginning with a survey of existing social, conversational, and romantic applications of generative AI, I argue that deep features of human cognition and 21st-century social arrangements make it highly likely that AI friendships and romantic partnerships will become commonplace in many societies around the world before the end of the decade. After considering and responding to some sceptical doubts about this suggestion, I go on to survey some of the ethical and political concerns – from social deskilling to the commodification of relationships – raised by this new stage in the development of human-AI interactions, and argue that we should begin planning for these as soon as possible.

Miguel Maravall

Neuronal sensitivity to multiple task variables induced in the mouse somatosensory and associative cortex

To explore how cortical neuronal activity underpins sequence discrimination, we recently developed a task in which mice distinguished between tactile sequences constructed from segments assembled in different orders and delivered to the animal’s whiskers. Animals licked to report the presence of a target sequence (GO/NOGO design). We recorded and manipulated cortical activity with two-photon imaging and optogenetics while mice performed the task. We expected that learning the task would induce neurons in the primary somatosensory cortex (S1bf) to refine their sensory tuning, becoming more selective to the target GO sequence. Instead, two-photon imaging showed that, upon learning, neurons in both S1bf and posterior parietal cortex (PPC) became sensitive to multiple task variables, including sensory input but also the animal’s action decision (goal-directed licking) and the trial’s outcome (presence or absence of a predicted reward). While S1bf was necessary for sequence discrimination, PPC was not. Contrary to expectation, classifiers trained on the activity of S1bf neurons discriminated the animal’s decision on a trial (lick vs no lick) better than the sensory identity of the trial (GO vs NOGO), while classifiers trained on PPC neurons discriminated sensory information better than actions. Our results demonstrate that conditioning on a goal-directed sensory discrimination task results in neurons within S1bf whose activity reflects the learnt links between target stimulus and licking. They also show that PPC contains copies of task-relevant information even while playing no causal role in the animal’s performance. These results and others from multiple sensory areas of cortex challenge the traditional neuroscience textbook view in which sensory and motor function are seen as separate areas of enquiry: classic work on anaesthetised animals was interpreted to suggest that sensory pathways provide a general-purpose representation of the environment, independent of decision-making and action. Instead, sensory cortex may be involved in task-specific processing and – as a consequence – an animal may not sense the world independently of its current behavioural goal. I will discuss possible roles of task-dependent S1bf activity.
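As a rough sketch of the decoding analysis described above, one can train linear classifiers to read out either a trial's sensory identity (GO vs NOGO) or the animal's choice (lick vs no lick) from population activity. The data below are synthetic, with choice coding deliberately made stronger than stimulus coding to mimic the reported S1bf result; none of the parameters come from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical population data: trials x neurons, mixing stimulus identity
# (GO/NOGO) and the animal's choice (lick/no lick)
n_trials, n_neurons = 400, 100
stimulus = rng.integers(0, 2, n_trials)                                # 1 = GO, 0 = NOGO
choice = np.where(rng.random(n_trials) < 0.8, stimulus, 1 - stimulus)  # imperfect behaviour

w_stim = rng.normal(0, 0.3, n_neurons)    # weak stimulus coding
w_choice = rng.normal(0, 1.0, n_neurons)  # strong choice coding, as reported for S1bf
X = (stimulus[:, None] * w_stim + choice[:, None] * w_choice
     + rng.normal(0, 1.0, (n_trials, n_neurons)))

for label, y in [("stimulus (GO vs NOGO)", stimulus), ("choice (lick vs no lick)", choice)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"decoding {label}: accuracy {acc:.2f}")
```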

Multimodal units extract comodulated information

We continuously detect sensory data, like sights and sounds, and use this information to guide our behaviour. However, rather than relying on single sensory channels, which are noisy and can be ambiguous alone, we merge information across our senses and leverage this combined signal. In biological networks, this process (multisensory integration) is implemented by multimodal neurons, which are thought to receive the information accumulated by unimodal areas and to fuse it across channels – an algorithm we term accumulate-then-fuse. However, is implementing this algorithm their main function? Here, we explore this question by developing novel multimodal tasks and deploying probabilistic and spiking neural network models. Using these models, we demonstrate that multimodal units are not necessary for accuracy, or for balancing speed and accuracy, in classical multimodal tasks, but are critical in a novel set of tasks in which we comodulate signals across channels. We show that these comodulation tasks require multimodal units to implement an alternative fuse-then-accumulate algorithm, and we demonstrate that this algorithm excels in naturalistic settings like predator-prey interactions. Ultimately, our work suggests that multimodal neurons are critical for extracting comodulated information, and it provides novel tasks and models for exploring this in biological systems.
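The difference between the two algorithms can be seen in a small simulation. Below is a sketch of a comodulation detection task: in target trials the two channels share a common amplitude modulator, and a fuse-then-accumulate readout (combining the channels at each time step before integrating) detects this, while an accumulate-then-fuse readout (integrating each channel first) cannot. The task construction and readouts are simplified stand-ins for the probabilistic and spiking models in the abstract, not the authors' implementations.

```python
import numpy as np

rng = np.random.default_rng(2)

def trial(comodulated, T=200):
    """Two noisy channels; in comodulated trials they share an amplitude modulator."""
    m = np.abs(rng.normal(0, 1, T))
    a = m if comodulated else np.abs(rng.normal(0, 1, T))
    b = m if comodulated else np.abs(rng.normal(0, 1, T))
    return a + rng.normal(0, 0.5, T), b + rng.normal(0, 0.5, T)

def accumulate_then_fuse(x, y):
    # Integrate each channel first, then combine: blind to moment-by-moment comodulation
    return x.sum() + y.sum()

def fuse_then_accumulate(x, y):
    # Combine channels at each time step (here, multiplicatively), then integrate
    return (x * y).sum()

def accuracy(decision_stat, n=2000):
    pos = [decision_stat(*trial(True)) for _ in range(n)]
    neg = [decision_stat(*trial(False)) for _ in range(n)]
    thr = np.median(pos + neg)
    return (np.mean(np.array(pos) > thr) + np.mean(np.array(neg) <= thr)) / 2

print("accumulate-then-fuse:", accuracy(accumulate_then_fuse))  # near chance
print("fuse-then-accumulate:", accuracy(fuse_then_accumulate))  # well above chance
```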

Consciousness, neural complexity and integrated information

Understanding the fundamental physical substrate of consciousness is sometimes considered the final frontier of science. Here, the integrated information theory of consciousness (IIT) is unprecedentedly ambitious in that it proposes a universal mathematical formula, derived from fundamental properties of conscious experience, to describe the quality and quantity of consciousness for any physical system that possesses it. However, in its current formulation, IIT's formulae are not always well defined, and empirical results do not support the level of specificity present in the theory. In this talk, I will (i) give an accessible overview of the key features of, and problems with, IIT; and (ii) discuss exciting empirical motivation for a weaker, less prescriptive form of IIT. This 'weak IIT' retains the key hypothesis that neural activity associated with consciousness should exhibit a form of 'dynamical complexity', whereby there are many interactions between regions and yet each region has distinct and rich dynamics.
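As a concrete, if simplified, example of 'dynamical complexity': one simple member of this family of measures compares the time-delayed mutual information of a whole system with the sum over its parts. The sketch below computes this for a toy two-node linear Gaussian system; it illustrates the general whole-versus-parts idea and is not IIT's official Phi or the speaker's preferred measure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 2-node linear system with cross-coupling (each node drives the other)
A = np.array([[0.4, 0.3],
              [0.3, 0.4]])
T = 50_000
X = np.zeros((T, 2))
for t in range(T - 1):
    X[t + 1] = A @ X[t] + rng.normal(0, 1, 2)

def tdmi(past, future):
    """Time-delayed mutual information for (approximately) Gaussian data."""
    joint = np.cov(np.hstack([past, future]).T)
    d = past.shape[1]
    return 0.5 * np.log(np.linalg.det(joint[:d, :d]) * np.linalg.det(joint[d:, d:])
                        / np.linalg.det(joint))

past, future = X[:-1], X[1:]
whole = tdmi(past, future)
parts = sum(tdmi(past[:, [i]], future[:, [i]]) for i in range(2))
# Positive excess: the whole predicts its future beyond what the parts do alone
print(f"whole-system TDMI {whole:.3f}, sum of parts {parts:.3f}, excess {whole - parts:.3f}")
```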

Many of the greatest societal challenges we face involve the management of complex adaptive systems. Such systems are inherently hard to manage due to their openness and adaptability, long causal chains and non-linear interactions between components, meaning that interventions such as policies seldom have the intended effect and continuous adaptive management is necessary. Additionally, owing to their complex and social nature, challenges here can be characterised as wicked problems, in which no optimal solution exists, only trade-offs between different actors' desired outcomes and values. The urgency of these global societal challenges is clear, and I strongly believe that fields such as biomimetic embodied AI and Artificial Life, with their particular combinations of tools, approaches and philosophy, have immense potential to engage with real-world problems. However, we must develop this potential from ideas into experiments and practice, in collaboration with other disciplines and those working on the ground. This will require us to think both critically and creatively about our possible roles and our assumptions. In this talk I will present the "actionable complexity" approach that I have developed in moving from ALife to applying complexity in policy. To illustrate some of the challenges of, and potential responses to, working under real-world constraints, I will describe my work with the UK government as part of CECAN, the Centre for the Evaluation of Complexity Across the Nexus, which aims to design and deliver innovative, complexity-appropriate tools for policy evaluation in complex adaptive systems. In particular, I will discuss participatory systems mapping and co-production approaches in work with Defra (the Department for Environment, Food and Rural Affairs) and BEIS (the Department for Business, Energy and Industrial Strategy). This method allows system stakeholders to collaboratively develop causal models of their complex system, consisting of the system's key components, from any domain, and the causal interconnections between them; to analyse these models to suggest ways in which system complexity bears on policy-relevant questions; and to design policy which exploits or works with system structure and dynamics. It produces insights which can be applied directly in the policy process, whilst simultaneously increasing complexity literacy. Most importantly, the method can be used within the constraints of policy contexts and provide actionable insights, whilst highlighting the reality of policy domains as large, dynamic complex systems in which policies interact with extant human systems involving social, economic, technical and ecological components. Systems and complexity methods such as these are increasingly popular in government, but despite their current popularity and success there is potential, and a need, to go much further: both in our methods and in shifting how we interact with the world towards a genuine understanding of ourselves and others as being in continuous interaction with multiple complex, potentially hybrid, living or cognitive systems at multiple scales.

In the second half of my talk, I will discuss this work within the bigger picture of the need to develop new approaches for the participatory steering of complex adaptive systems, and open up a discussion about what be.AI and related approaches could offer to this mission, as well as considering the provisos, the challenges, and the guiding principles for working that could allow us to make a genuine difference to real-world societal challenges.
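To make the kind of object a participatory systems map is more concrete, the sketch below represents a map as a signed directed graph and enumerates its feedback loops, classifying each as reinforcing or balancing. Both the map's content and the analysis are invented for illustration and are not CECAN's actual tooling.

```python
import networkx as nx

# Toy signed, directed systems map: nodes are system factors named by
# stakeholders; edges are causal influences with +1/-1 polarity.
# (Illustrative content only, not an actual CECAN map.)
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("subsidies", "farm income", +1),
    ("farm income", "land conversion", +1),
    ("land conversion", "biodiversity", -1),
    ("biodiversity", "pollination", +1),
    ("pollination", "farm income", +1),
], weight="polarity")

# Feedback loops are where complexity bites: enumerate simple cycles and
# classify each as reinforcing (even number of negative links) or balancing.
for cycle in nx.simple_cycles(G):
    sign = 1
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= G[u][v]["polarity"]
    kind = "reinforcing" if sign > 0 else "balancing"
    print(f"{kind} loop: {' -> '.join(cycle)}")

# Nodes sitting on many causal paths are candidate intervention points
print(nx.betweenness_centrality(G))
```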

Human metacognition and uncertainty estimation

Humans have a seemingly unique ability for metacognition: we can reflect on and evaluate our inferences and decisions, and "know when we don't know" (and when we do). Recent developments in chatbots such as ChatGPT highlight the need to start building metacognitive capacities into AI, so that users can make informed decisions about whether its outputs should be trusted. In this talk I will discuss how we construct metacognitive confidence in our choices, the kinds of information incorporated into metacognitive confidence, and the degree to which metacognitive confidence is distinct from uncertainty estimation. I will also discuss the characteristics and behaviours – some unintuitive – of human confidence in perceptual decisions that bio-inspired AI would need to reproduce in order to be human-like.
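One standard way to formalise the link between confidence and uncertainty (a textbook signal detection account, not necessarily the speaker's model) treats confidence as the posterior probability that the chosen response is correct given noisy internal evidence:

```python
import numpy as np

rng = np.random.default_rng(4)

# Signal detection sketch: two stimulus categories produce internal evidence
# x ~ N(+mu, 1) or N(-mu, 1); the observer chooses by the sign of x and
# reports confidence as the posterior probability of being correct given x.
mu, n = 1.0, 100_000
category = rng.choice([-1, 1], n)
x = category * mu + rng.normal(0, 1, n)

choice = np.sign(x)
# For equal priors, P(correct | x) is logistic in |x|
confidence = 1 / (1 + np.exp(-2 * mu * np.abs(x)))

# An ideal observer is calibrated: accuracy tracks reported confidence
correct = choice == category
for lo, hi in [(0.5, 0.7), (0.7, 0.9), (0.9, 1.0)]:
    sel = (confidence >= lo) & (confidence < hi)
    print(f"confidence in [{lo}, {hi}): accuracy {correct[sel].mean():.2f}")
```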

Learning requires the brain to assign credit to billions of synapses, and how it achieves this feat remains one of the unsolved mysteries of neuroscience. Recently, inspired by deep learning, we introduced a novel model of hierarchical credit assignment in cortico-cortical networks. Our model combines synaptic, sub-cellular, cellular, microcircuit, and cortico-cortical computations to enable error-driven learning of challenging tasks. I will show that, in contrast to previous work, our model (i) is consistent with experimental observations, (ii) provides rapid credit assignment across multiple cortical areas, and (iii) does not require a multi-phase learning process. Experimental evidence suggests that neuromodulation also plays a key role in controlling learning. Inspired by deep learning, we propose that cholinergic neuromodulation implements adaptive learning rules in the cortex. In our model, cholinergic neuromodulation democratizes learning by continuously shifting learning to different neuronal populations. I will show that such distributed learning has two key consequences: it (i) greatly speeds up learning and (ii) produces a more distributed task encoding. Importantly, more distributed representations result in networks that are more robust to perturbations (e.g. cell death), thereby providing the first theoretical explanation of why cholinergic deficits are commonly associated with dementia, aging and injury. In summary, our AI-driven modelling is opening a window onto a new understanding of learning in the brain, with important implications for health and disease.
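The abstract does not spell out the model's learning rule, so as a generic illustration of biologically motivated credit assignment, here is a minimal two-layer network trained with feedback alignment (Lillicrap et al., 2016), which replaces backpropagation's weight transport with a fixed random feedback matrix. It demonstrates the general idea of error-driven learning without exact weight symmetry, not the specific dendritic microcircuit model above.

```python
import numpy as np

rng = np.random.default_rng(5)

# Minimal feedback-alignment sketch: errors are routed backwards through a
# fixed random matrix B instead of W2.T, avoiding the weight-transport problem.
n_in, n_hid, n_out, lr = 10, 64, 2, 0.05
W1 = rng.normal(0, 0.3, (n_hid, n_in))
W2 = rng.normal(0, 0.3, (n_out, n_hid))
B = rng.normal(0, 0.3, (n_hid, n_out))    # fixed random feedback pathway

W_true = rng.normal(0, 1, (n_out, n_in))  # toy teacher task

for step in range(3001):
    x = rng.normal(0, 1, (32, n_in))
    y = x @ W_true.T
    h = np.tanh(x @ W1.T)
    y_hat = h @ W2.T
    e = y_hat - y                  # output error
    dh = (e @ B.T) * (1 - h**2)    # error routed via B, not W2.T
    W2 -= lr * e.T @ h / len(x)
    W1 -= lr * dh.T @ x / len(x)
    if step % 1000 == 0:
        print(f"step {step}: mse {np.mean(e**2):.3f}")
```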