
Thinking About Robots
MANUEL BALTIERI
​
The English translation of the essay "Přemýšlení o robotech" from the Czech book Robot 100: Sto rozumů, edited by Jitka Čejková, published by the University of Chemistry and Technology Prague (Vysoká škola chemicko-technologická v Praze) in 2020.
What would it feel like to be saved by a droid as creative and resourceful as R2D2 from Star Wars? What would it be like to interact with androids as human-like as some of the characters found in Westworld? Some of us may also wonder: What would it be like to build a robot, droid, or android of this kind?
​
These questions are, however, rather difficult to answer, and one could even consider them naive given the state of modern robotics. Our robots are not really autonomous; unlike R2D2, they are incapable of adapting to increasingly complicated problems. It is tempting to look at the results shown in videos such as the ones released by Boston Dynamics, which show robots jumping and running around, and marvel at how easily they perform backflips when most of us would fail. In practice, however, these machines lack almost any form of autonomy, with humans often controlling them remotely. The “human-likeness” of the androids in Westworld is also nothing but a dream at the moment. Systems found in competitions such as the RoboCup, where robots need to play football in real time, often fall short in different aspects of “human-likeness,” with slow, jerky, and repetitive movements that look anything but human.
​
Even in their current state, however, robots are a fundamental tool for research in different fields. In engineering, robots’ electronic and mechanical components are built to withstand the hardships of complex environments, allowing us to ask, for instance, how we can build better joints for robots to move on uneven terrain, or how we can write more effective algorithms to control them. In psychology, robots can be used as models of synthetic, fictional agents [1]: to investigate mechanisms that could drive different organisms, imaginary and real alike, and to perform complex behaviours, such as being attracted to another individual or running away from a predator, while facing different hurdles. Engineers can also use robots to better understand processes of learning, by trying to build robots that adapt to different tasks: for example, what would it be like for a robot to learn how to solve problems in the same way a baby does? In studies of ethics, we can imagine what it would be like to live and co-exist with intelligent machines, to understand the standards and morals required to create a society promoting peaceful coexistence between different groups [2].
While it is clear how different areas of engineering can benefit from thinking about robots to push the state of the art and improve their design, using robots to study the mechanisms of learning or complex adaptation may not be so obvious. Nonetheless, some of us might have already thought that what makes most of the famous robots really unique and special is their ability to acquire and use more knowledge over time. The more imaginative of us (or the ones closer to sci-fi literature) may have also considered that creating intelligent machines would have non-trivial ethical repercussions on a society including other intelligent beings. But how far can we push the use of robots to improve our understanding of other fields? What about philosophy of mind?
​
What could robots have done for philosophers such as Plato, Descartes or Heidegger? What can they do for those of us studying philosophy nowadays? The intuition and the ensuing questions are, in many ways, quite similar to the ones asked in other fields [3]: What would it be like to build a robot like R2D2? What understanding of philosophy of mind would we need to design such a machine? What more could we learn by trying to build one? And, perhaps more importantly, would a robot like R2D2 have something like a “mind”? What would that be like? Would such a robot also be conscious? Clearly, this is no easy task, and not everyone should be required to build a robot in order to voice their opinion. However, the message behind this idea is rather simple: if one has an interesting proposal, why not work to test it one day? A useful way to discuss a theory and its assumptions is to check whether it can ever work in the real world, or whether it is fundamentally flawed. In practice this is often difficult due to the limitations of current technologies (hardware and software, for example). However, this shouldn’t be used as an excuse. On the one hand, it simply means that we should push for our theories to become testable, not hiding behind the idea that “more resources” (more computing power, bigger power plants, etc.) will simply solve our problems in the future, as is often claimed in areas such as modern “deep learning” [4]. On the other hand, this should also be seen as an opportunity to consider more ambitious attempts to understand the mind, relying on embodied and situated robots [5]: agents that freely move in a world, subject to physical laws and constraints that may play a crucial role in the emergence of something like intelligence [6], or even a “mind”.
​
With this in mind, I recently worked on a project to implement robots using a new theory in neuroscience and philosophy of mind, nowadays going by the names of “active inference” and “predictive processing” respectively [7]. According to this theory, the behaviour of living and cognitive systems can be explained by a rather simple principle: the minimisation of some prediction error. In this context, one should see agents (humans, animals) as “trying to” constantly minimise the discrepancies between a set of world variables (temperature, humidity, etc.) and the predictions about, or estimates of, these variables that agents can generate. For instance, to stop sweating, one should be able to estimate (with a certain level of accuracy) the outside temperature; the discrepancy between this estimate and the actual temperature is an example of a “prediction error.” If one doesn’t stop sweating, they are probably underestimating the heat on a sunny day, or wearing too many clothes. They will keep sweating until they reduce their prediction error by correctly estimating the temperature and, consequently, moving to a cooler place or taking off some clothes. To study this new framework and its ambitious claim that all behaviours could be understood using this very general idea, I began thinking about robots with one question in mind: Was it possible to build robots following this theory, unleashing its full potential for neuroscience, biology and perhaps cognitive science?
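The core idea above can be sketched in a few lines of code. This is a deliberately minimal toy, not the full active inference scheme: a single internal estimate is nudged downhill on the squared prediction error between the estimate and a sensed value (here, an outside temperature; the function name, learning rate, and numbers are illustrative assumptions, not part of the theory).

```python
# Toy sketch of prediction-error minimisation (an illustrative assumption,
# not the full active inference machinery): an agent iteratively updates
# its internal estimate of the outside temperature by descending the
# squared prediction error e = (sensed - estimate)^2 / 2.

def minimise_prediction_error(sensed, estimate=0.0, lr=0.1, steps=100):
    """Gradient descent on the squared prediction error."""
    for _ in range(steps):
        error = sensed - estimate   # the prediction error
        estimate += lr * error      # reduce it a little at each step
    return estimate

# The estimate converges towards the sensed temperature (30 degrees here).
final = minimise_prediction_error(sensed=30.0)
```

In the full theory the agent can also reduce the error by acting on the world (moving to a cooler place) rather than only revising its estimate; this sketch shows just the estimation half.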
​
After months spent considering different robots and their possible applications, it wasn’t clear how one could ever derive useful models from such a simple, but rather generic, principle of “minimisation of prediction error” alone. Under the “right” set of conditions, and lacking proper constraints, (almost) any mathematical model can be implemented as a process of minimising some prediction error. With the “right” knowledge, one can describe a ball as “trying to” hit the ground after a player kicks it into the air, so as to minimise a prediction error driven by its predictions of landing. One can even understand the motion of celestial bodies as “trying to” follow certain trajectories using a simpler version of the error minimisation advocated by active inference, the least-squares method [8]. Active inference and its claims regarding a “unified brain theory” [9] became somewhat puzzling to me. If I couldn’t build a robot with it, what was it good for? Once stripped of the hype [7] surrounding it, one could see active inference as a general principle capturing existing knowledge in neuroscience and cognitive science, and as a foundation for a mathematical theory describing processes that remain, to date, largely unclear, often due to the lack of a proper quantitative understanding of different phenomena [10].
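The least-squares method mentioned above can itself be written down in a few lines. The sketch below, in the spirit of the Gauss-to-Kalman lineage surveyed in [8], fits a straight line to noisy observations by minimising the summed squared errors in closed form; the data points are made up for illustration.

```python
# Least squares in closed form: find the line y = a + b*x that minimises
# sum((y_i - a - b*x_i)^2) over a set of observations. The data below is
# invented purely for illustration.

def least_squares_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form minimiser of the summed squared error
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Noisy samples of (roughly) y = 1 + 2x
a, b = least_squares_line([0, 1, 2, 3], [1.0, 3.1, 4.9, 7.0])
```

This is the same “minimise a squared discrepancy” recipe at the heart of the essay’s argument: given enough freedom in choosing the model, almost any regular behaviour, from a fitted line to a planetary orbit, can be cast as error minimisation.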
The modern idea of robots as automata, programmable artificial machines combining a set of mechanical parts made of metal, isn’t an accurate description of the fictional characters called “robots” originally introduced in R.U.R. [11]. In R.U.R., robots were described as conscious agents built out of organic matter, with limited emotional and self-preservation capabilities (at least initially). At the same time, these fictional characters were rooted in the historical background of Czechoslovakia in the 1920s, as a critique of the role of automation, with an eye to the dehumanisation of labourers. R.U.R. offered one of the first examples of the use of “robots” to explore the possible implications of different scientific theories and philosophical stances (in this case automated labour and the Futurist [12] celebration of technology), inspiring discussions and studies of ethics, morality, technology and anthropology for years to come. Even without physical implementations situated in the real world, thinking about robots can prove useful to explain a theory, showing its implications and clarifying its assumptions. When a scientist, or a philosopher, tries to test a framework in the real world, they must carefully consider different aspects of their hypotheses, crafting a set of rules that convert their ideas into testable predictions: it defines a set of rules and a goal to be achieved. In my work on active inference, a modern proposal of brain function introduced as a grand “unified brain theory” [9], this process proved difficult, since the original “theory” represents more a set of guidelines inspired by a general principle of error minimisation. The implementation of robots, or even just thinking about robots, can thus show its true potential in a variety of different fields, not confined to engineering or the applied sciences.
Importantly, grounding a problem in terms of robots doesn’t limit the applications of a theory but rather, as in my case, helps in clarifying what it is more suitable for, with active inference showing, for example, its possible role in neuroscience and philosophy of mind as a more flexible mathematical language to describe cognitive processes [10].
[1] Braitenberg, V. (1986). Vehicles: Experiments in synthetic psychology. MIT press.
[2] Urasawa, N. (2003). Pluto. https://www.viz.com/pluto-urasawa-x-tezuka
[3] Harvey, I. (2000). Robotics: Philosophy of mind using a screwdriver. Evolutionary robotics: From intelligent robots to artificial life, 3, 207-230.
[4] Marcus, G. (2019). An epidemic of AI misinformation. https://thegradient.pub/an-epidemic-of-ai-misinformation/
[5] Pfeifer, R., & Bongard, J. (2006). How the body shapes the way we think: a new view of intelligence. MIT press.
[6] Brooks, R. A. (1991). Intelligence without representation. Artificial intelligence, 47(1-3), 139-159.
[7] Raviv, S. (2019). The Genius Neuroscientist Who Might Hold the Key to True AI. https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence/
[8] Sorenson, H. W. (1970). Least-squares estimation: from Gauss to Kalman. IEEE spectrum, 7(7), 63-68.
[9] Friston, K. (2010). The free-energy principle: a unified brain theory?. Nature reviews neuroscience, 11(2), 127.
[10] Baltieri, M. (2019). Active inference: building a new bridge between control theory and embodied cognitive science (Doctoral dissertation, University of Sussex).
[11] Čapek, K. (2004). R.U.R. (Rossum's Universal Robots). Penguin.
[12] Marinetti, F. T. (1909). Manifesto del Futurismo.