Andy Pickering

On Stafford Beer's cybernetic theory, its foundations, inspirations, and applications in industry and government.

_Interview

Andrew Pickering, a British sociologist, philosopher, and historian of science, discussed his understanding of the history of cybernetics in a detailed conversation held on July 26, 2022.

The conversation reviewed some of the pivotal concepts in cybernetic theory, scrutinized the nature of Stafford Beer's theories, and assessed their foundations, inspirations, and applications in both industry and government.

***

Evgeny: What does cybernetics mean to you, and how do you define it?

Andy: Cybernetics can mean many things to different people, but I follow the definition from Stafford Beer, who said that cybernetics is the science of exceedingly complex systems. This refers to systems that are endlessly complicated and might be changing all the time, which you can't calculate. Beer saw cybernetics as a different way of thinking about and acting in the world compared to ordinary science or operations research.

I'm interested in exploring cybernetics in various fields, but one that stands out to me is organization. I find it fascinating to examine what cybernetics looks like as a science of organization.

The branch of cybernetics that I studied, and the one that interests me particularly, began as brain science. The founders, in one way or another, were actually working as psychiatrists. The important thing for cybernetics was that they had a very particular conception of what brains are and what brains do.

For them, the brain was geared into the performance of bodies. It was an organ of performance, of doing things, and the brain - especially in that branch of psychiatry - was what helped us adapt to situations that we'd never been in before. So the brain was primarily an organ of performative adaptation, and that became the problematic of this whole branch of cybernetics, which Stafford Beer was part of.

CYBERNETICS: KEY TERMS

Evgeny: Excellent. Before we get to Stafford Beer and his thought, it would be nice to have you define a few other key cybernetic concepts. Shall we start with the “black box”?

Andy: In cybernetics, a black box refers to an object that you can interact with by supplying it with inputs and observing its outputs. You don't need to know how it works inside, but you can map it in terms of input-output relations.

The modern sciences aim to open up black boxes to uncover what's happening inside them, but what's interesting about cybernetics is that it treats the entire world, including people, as a constellation of black boxes. The world is made up of an infinite assemblage of black boxes, and the question is how to get along in this kind of world. We don't necessarily need to know what goes on inside the black boxes, but we need to have a performative relation to them. We're always doing things, and things are always coming back to us. The question then is how to get along with these black boxes.
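To make this concrete, here is a toy sketch in Python; the hidden rule and the probing loop are invented purely for illustration. The point is that we can characterize the box entirely by its input-output relations, without ever opening it up.

```python
import random

# A toy black box: we may feed it inputs and observe outputs,
# but its internal rule is hidden from us by construction.
def make_black_box():
    secret = random.randint(1, 9)         # hidden internal state
    return lambda x: (x * secret) % 17    # we never look at `secret`

box = make_black_box()

# The cybernetic move: map the box purely as an input-output relation,
# a performative description of what it does, not of what it is.
io_map = {x: box(x) for x in range(17)}
print(io_map)
```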

Evgeny: Can you explain the concept of "variety" in cybernetics, and why it's important?

Andy: Variety, for cyberneticians like Ross Ashby and Beer, is a measure of the number of different states a system can be in. The larger the number of states the system can be in, the higher its variety. The idea is that if I'm trying to control something, I need to have as much variety as the system that I'm trying to control. Only variety can control variety, according to Ashby's law of requisite variety.
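A toy illustration of the law, with invented disturbances and responses: a regulator can hold the outcome steady only if its repertoire of responses is at least as varied as the disturbances it faces.

```python
# Toy sketch of Ashby's law of requisite variety (invented example):
# the regulator holds the outcome steady only if it has at least as
# many distinct responses as there are disturbances.
disturbances = ["heat", "cold", "wind", "rain"]   # variety 4

def best_outcome(d, responses):
    # Assume the regulator always picks its best available response.
    return "stable" if d in responses else "unstable-" + d

def achievable_outcomes(responses):
    return {best_outcome(d, responses) for d in disturbances}

print(achievable_outcomes(["heat", "cold"]))
# variety 2: 'unstable-wind' and 'unstable-rain' slip through
print(achievable_outcomes(["heat", "cold", "wind", "rain"]))
# variety 4: only {'stable'} remains; matching variety absorbs every disturbance
```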

Evgeny: Great. What about the homeostat – a device that inspired so much cybernetic theorizing?

Andy: Yes, the homeostat was an object built by Ross Ashby in 1948, and it's incredibly important in the history of cybernetics in Britain. It was a device that emitted electrical outputs and received electrical inputs, and its goal was to achieve equilibrium with its environment in terms of inputs and outputs. If it found itself out of equilibrium, it would randomly reconfigure its internal workings until it achieved equilibrium again. It's important to note that the homeostat wasn't trying to understand the world; it was just interacting with it and adapting to it performatively.
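The logic is easy to caricature in code. The sketch below is not Ashby's electrical circuit, just the bare loop it embodied, assuming an invented three-value internal state: when the essential variable leaves its safe range, the device randomly rewires itself and tries again.

```python
import random

random.seed(0)
state = [0.1, -0.2, 0.3]                            # recent outputs, fed back in
weights = [random.uniform(-1, 1) for _ in state]    # the internal "wiring"

def step(state, weights):
    # Output depends on the wiring plus a small environmental disturbance.
    return sum(w * s for w, s in zip(weights, state)) + random.gauss(0, 0.1)

rewirings = 0
for t in range(1000):
    output = step(state, weights)
    if abs(output) > 1.0:   # essential variable out of its safe range
        # No analysis, no representation; just random reconfiguration
        # of the internal wiring until equilibrium returns.
        weights = [random.uniform(-1, 1) for _ in state]
        rewirings += 1
    state = state[1:] + [output]   # the world feeds the output back in

print(f"settled after {rewirings} random rewirings")
```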

Evgeny: I understand that Ashby also saw the homeostat as a model of the brain. How did the homeostat differ from conventional robots in terms of its interaction with the world?

Andy: Yes, that's correct. Ashby considered the homeostat to be a model of the performative brain, as opposed to the brain as a system that forms images of the world and reasons about them. The main difference between the homeostat and conventional robots is that a conventional robot forms a representation of the world and constructs its actions in the light of that representation. In contrast, the homeostat had no representations inside it and interacted with the world without trying to understand it.

Evgeny: The inner workings of the homeostat tie into a broader philosophical principle that cyberneticians refer to as "ultrastability". Can you elaborate on that?

Andy: Yes, ultrastability is the principle behind the homeostat's operations. An ultrastable system is one that, regardless of its situation, reconfigures itself until it reaches equilibrium. This was Ashby's fundamental idea of ultrastability.

In terms of the homeostat, its inner organization originates from within. The device, rather than being programmed, finds its suitable configuration. This contrasts with externally imposed organization or programming. While the homeostat reorganizes itself electronically, numerous adaptive systems in nature also reconfigure themselves biologically to adapt to environmental changes.

Ecosystems, for instance, are significant examples of ultrastability in the natural world. Consider a pond ecosystem. The various elements continuously adjust their relations to each other and the climate, responding to factors like temperature changes, rainfall, and the presence of different chemicals. The ecosystem, like other biological systems, finds its way to exist in whatever situation it encounters. This self-regulation is similar to how our bodies maintain a nearly constant internal temperature, regardless of the external environment.

PONDS AS MANAGERS

Evgeny: Is there some deeper connection between this broader philosophical interest in “ultrastability” among the early cyberneticians and their efforts to study the brain?

Andy: In a sense, yes. Take Stafford Beer, who was originally an operational research man, trying to find scientific ways of optimizing the function of organizations like factories. He thought a lot about that and believed that you could automate many of the things that happened in a factory and optimize them. But the thing that you couldn't optimize was how to make a factory that could adapt to changes in the environment that it found itself in. If people stop buying the thing it makes, what's it going to do next? You can't calculate something like that. That's where the brain comes in.

Having figured out how to automate the factory, the question became: how can this factory think, so to speak? How can it adapt itself to changes in its situation, which, as far as Ashby was concerned, was the primary definition of a brain? The first move that Stafford Beer made was to think rather literally: "This factory needs a brain!" Ashby had defined a brain as an adaptive system, so maybe if we could find an adaptive system and plug it into the factory instead of management, that adaptive system could help the factory adapt to changes in its environment.

Evgeny: This is how Beer turned to biological computing in his early career, right?

Andy: Yes. Beer and his collaborator Gordon Pask were trying to find a way to create an adaptive, performative brain that could manage a factory instead of human beings. The idea was to use a pond ecosystem as a black box that could accept inputs and return outputs that would control what was happening inside the factory.

The pond would process inputs that somehow signified the health of the factory and return them as outputs that could help the factory respond appropriately to changes in its environment. If the right variables were wired up properly, the adaptive quality of the pond could serve the health of the factory.

Unfortunately, the project never worked as intended. The biggest challenge was transducing the factory's variables into something that the pond could care about and respond to. For example, Beer tried to get daphnia, tiny freshwater crustaceans, to absorb iron filings into their bodies so that the pond would react to magnetic fields, which might represent the state of the factory. But the daphnia just excreted the iron into the water they were living in, and it became a total mess.

The project failed because finding the right variables and getting the pond to care about them was a difficult task. It required a deep understanding of the factory's functioning and the ecosystem of the pond. Moreover, this biological computing work was done by Beer and Pask in their spare time, while Beer was also running a steel factory in his working hours. So, it's not surprising that the project didn't go very far. The concept of using a pond ecosystem as a black box to manage a factory was a fascinating and original idea, but it turned out to be very challenging.

Evgeny: Can you tell us about the "growing an ear" project, and how it differed from the biological computing experiments we discussed earlier?

Andy: Sure. The "growing an ear" project was a version of the biological computing project that wasn't strictly biological. It had to do with electrochemical reactions and threads, or whiskers, that would grow in a cell supplied with voltages from different electrodes. Between the electrodes, threads of metal would be deposited, and the cells would always find the best route to link up the electrodes, depending on the pattern of voltages being applied. These cells had a kind of memory of the changes they had been through.

Beer and Pask treated these whiskers as a black box, which had a certain input-output relation. In the experiments, they looked at the way the input-output relations were affected by the environment that the cells were in. If the cell did something interesting in response to an input they were interested in, they could reward it somehow. The idea was that if they rewarded it, it would grow in a certain way and take on certain properties that might be of interest to them.

They discovered that by connecting one of these cells up to a microphone, they could train it to respond to different frequencies of sound. It wasn't a matter of building a microphone, analyzing the signal, and reading out a frequency; it was a matter of training an electrochemical device to respond performatively to a cue. It could be sounds, it could be magnetic fields, it could be anything they liked. The key feature of their work with threads was encouraging the device to develop a sense that wasn't built into it.

The contrast here is with programming a computer to respond to a specific cue. Beer and Pask were training the cell to respond performatively, rather than programming it to do so. The idea was to create an adaptive, performative device that could cope with changing environments without having to be explicitly programmed for them.

CYBERNETICS IN THE US AND THE UK

Evgeny: What do you think were the main differences between British and American cybernetics?

Andy: One of the main differences is the sense of fun that the British cyberneticians had; they were clearly enjoying their work. American cybernetics, on the other hand, had a rather grim atmosphere. Warren McCulloch, a key American cybernetician, said that by the end of each day of the Macy Conferences, people just couldn't talk to each other because of the vicious academic arguments they'd been having.

Norbert Wiener was the key American cybernetician, and the device that was central to his cybernetics was the servo-mechanism, something like a domestic thermostat, which focused on negative feedback and killing variations in its environment. Wiener was very concerned with the political implications of automation, which he famously wrote about.

In contrast, on the British side, the key device was the homeostat, which experimented productively and in a future-forward way with the world it found itself in. This performative experimentation is what distinguishes British cybernetics from American cybernetics.

American cybernetics was heavily influenced by World War II and a certain approach to automation, which gave it a particular flavor. Wiener was on the engineering side of cybernetics, and a lot of engineering cybernetics owes a debt to him. He was more concerned with the political implications of automation, whereas British cyberneticians were amateurs as far as cybernetics was concerned. They were doing their distinctively cybernetic work because it interested them, was imaginative, new, and fun, and could also be useful. They weren't tied to the grim context that American cyberneticians were.

Evgeny: How did cybernetics view the counterculture movements in the US and Britain?

Andy: American cybernetics, for the most part, tried to distance itself from the counterculture. The American cyberneticians were very keen on the fact that they were scientists and meant science in an old-fashioned way. On the other hand, British cyberneticians were less keen to keep their distance from the counterculture, and the counterculture was very interested in British cybernetics from several angles. One of the most important was that, following up on the example of Ashby's homeostat, British cybernetics was all about experimenting performatively. The counterculture was also a culture of experimentation, so there were big resonances between cybernetics and the counterculture.

Evgeny: Were there any other connections between cybernetics and other cultural or philosophical movements?

Andy: Yes, one interesting thing about British cybernetics was that it was a non-dualist field that focused on interconnections between people and things. One source of inspiration for this perspective was Eastern philosophy and spirituality. British cybernetics had a strong spiritual streak, and the counterculture was fascinated with all things Eastern. So there was another resonance between cybernetics and the counterculture in this regard.

Evgeny: How did cybernetics influence the arts of the 1960s and 1970s?

Andy: Cybernetics was interested in dynamic relations between systems, and the homeostat was an important device that could change in response to the environment it found itself in. This idea of changing in response to the world was essential in the arts of the 1960s and interactive artworks. Gordon Pask was a key figure in this field. In architecture, cybernetics also influenced the design of buildings that could adapt themselves to the people who were inhabiting them. Cybernetics fed very strongly into the arts of the 1960s and 1970s.

BEER’S NOTION OF AUTONOMY

Evgeny: Can you explain how cybernetics approached the concept of freedom?

Andy: For cyberneticians like Beer, freedom was a crucial concept. However, total freedom, in an anarchist sense, was not viewed as a viable option. Ashby demonstrated that, once a significant number of entities or people started interacting freely, it would take longer than the age of the universe for some kind of ordered situation to emerge. Beer's idea was that this kind of total freedom would result in chaos. He argued that reducing the variety in a system was essential to optimize freedom without imposing a hierarchical structure.

Beer's work aimed to find ways to reduce the variety in a system without imposing a hierarchical structure on it. His later projects, the Viable System Model and Syntegration, were different ways of reducing variety without imposing hierarchy. Beer saw these as ways of optimizing the freedom within a system: he claimed that the Viable System Model optimized the amount of freedom for people within an organization, consistent with it remaining an organization. The challenge was to find a way to reduce variety while still maintaining some level of freedom within the system.

Evgeny: Can you explain Stafford Beer's numerical indices and models, and their importance to his approach to management?

Andy: In his operations research career, Beer developed numerical indices that measured the performance of production units like machines or factories. The indices included ratios between theoretical and actual production, among other things. They were used as guides for action and were fed into higher-order models for operations research or economics. The important thing was not the indices themselves, but that the models built on them were revisable in practice, guiding action while remaining open to revision.
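Beer's published scheme for such indices is straightforward to sketch. The version below follows the three levels of achievement he describes (actuality, capability, potentiality) and the ratios between them; the numbers themselves are invented for illustration.

```python
# A sketch of Beer-style performance indices (the three levels and their
# ratios follow Beer's published scheme; the numbers are invented).
actuality    = 620    # what the unit actually produced this period
capability   = 800    # what existing resources could produce, run flat out
potentiality = 1000   # what could be produced if resources were developed

productivity = actuality / capability      # how well we use what we have
latency      = capability / potentiality   # how much potential is undeveloped
performance  = actuality / potentiality    # overall index; note that it
                                           # equals productivity * latency

print(f"productivity={productivity:.2f}, latency={latency:.2f}, "
      f"performance={performance:.2f}")
```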

Beer's work on optimizing factory performance through operations research showed that there was something that could not be optimized using those methods. That residue of unknowability was what became the subject matter of cybernetics for Beer.

Evgeny: Would you say that there was a definitive political slant to VSM?

Andy: I think the Viable System Model was a kind of sub-politics that was interested in organizing groups of people practically. It wasn't developed to advance capitalist or socialist goals. If fully implemented, the VSM would look very different from capitalism as we know it today, as it didn't prioritize shareholder value or profits. The idea was to find out what a given organization was for, which would undercut capitalism from below.

Evgeny: Can you tell us more about the implementation of the Viable System Model in Cybersyn, and how it functioned in practice?

Andy: In Cybersyn, the VSM was supposed to function at different levels, with the lower levels consisting of the production units and the higher levels being grand models of how the Chilean economy fit into the world. The VSM was intended to be recursive, meaning that even a unit at Level 1 could have a VSM-like structure, but as far as I know, this was never fully implemented in Chile. The information flow was from the production units to Santiago, but the recursive aspects of the VSM were not put into practice.
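The recursive structure is the easiest part to render in code. The sketch below models only the System 1 recursion, with invented names; Beer's Systems 2 through 5 are omitted.

```python
from dataclasses import dataclass, field

# Minimal sketch of the VSM's recursion: every viable system contains
# operational units (System 1) that are themselves viable systems.
@dataclass
class ViableSystem:
    name: str
    system1: list["ViableSystem"] = field(default_factory=list)  # operational units

    def depth(self) -> int:
        # How many levels of recursion sit at or below this system.
        return 1 + max((u.depth() for u in self.system1), default=0)

economy = ViableSystem("economy", [
    ViableSystem("textile sector", [ViableSystem("factory A"),
                                    ViableSystem("factory B")]),
    ViableSystem("steel sector", [ViableSystem("mill C")]),
])
print(economy.depth())  # 3: the same structure repeats at every level
```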

Evgeny: What do you make of the Operations Room of Project Cybersyn? How does it fit into your overall take on cybernetics?

Andy: The control room is often talked about in terms of its aesthetics, and I often show a picture of it when giving a talk on the history of cybernetics. It does look like the bridge of the Starship Enterprise, but there was nothing inherently cybernetic about its appearance.

It’s what was meant to happen in the control room that was distinctively cybernetic. People were meant to be playing with models, looking at forecasts, and revising parameters. The information technology was the clever thing about the control room, allowing information to be brought visibly into play in real-time.

The whole setup of Cybersyn and the VSM was designed to operate in an unknowable world that would always surprise you, and so the models were always meant to be revisable, with nothing concrete ever finally known. The control room was meant to be a continual, experimental exploration and reaction to a world that could never be fully known.

The control room was not a place where orders were issued and then obeyed at all levels below. Instead, it was a place where proposals were made to the operations research people in Level Three, who would then provide feedback and suggestions that would go back up to Level Four, and then bounce around with Levels One and Two. This horizontal arrangement made the VSM a cybernetic project, with invisible elements that made it function as such, rather than just its visible appearance.

Evgeny: Would you say Project Cybersyn was a way to practice cybernetic planning?

Andy: I've never really thought of cybernetics as being about planning. However, if we think about the word "plan," there is a concept of it within cybernetics. Ross Ashby wrote about plans and his idea was that a plan was just one part of what was going on.

In the case of Project Cybersyn, the equivalent of a plan would be the System Dynamics model at Level Four of the whole Cybersyn project. This model was continually revised and messed around with, and it was at stake in running the Chilean economy. It's not the be-all and end-all, but rather a part of the whole experimental dance of agency that is central to cybernetics. So, while planning is not the core concept of cybernetics, it does play a role in certain applications like Project Cybersyn.
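At bottom, a system-dynamics model is stocks and flows iterated over time. The one-stock toy below, with invented parameters, is nowhere near the real Level Four model, but it shows why such a model invites continual revision: rerunning it with adjusted parameters is cheap, so the plan can be treated as an ongoing experiment rather than a fixed target.

```python
# One-stock system-dynamics toy (invented parameters; not the actual
# Level Four model of the Chilean economy).
def simulate(stock: float, inflow_rate: float, outflow_rate: float, steps: int):
    history = []
    for _ in range(steps):
        # Flows are proportional to the stock; the stock accumulates the net.
        stock += inflow_rate * stock - outflow_rate * stock
        history.append(stock)
    return history

# The model is a revisable guide to action, not a final description:
forecast = simulate(stock=100.0, inflow_rate=0.05, outflow_rate=0.03, steps=12)
# ...when incoming data disagree, the parameters get revised and rerun:
revised = simulate(stock=100.0, inflow_rate=0.04, outflow_rate=0.03, steps=12)
print(round(forecast[-1], 1), round(revised[-1], 1))
```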

CYBERNETIC CAPITALISM

Evgeny: In our conversation, you kept emphasizing performative – some might say action-oriented – aspects of cybernetics. I hear certain echoes of pragmatism in such descriptions. Would you agree?

Andy: Indeed, my journey into pragmatism began at MIT in 1985, when colleagues recommended I read William James. His work resonated with me; James is my real hero. Pragmatism is about action: the truth is what works. But James leaves us asking, "So what? How should we behave?" Pragmatism interests me, but the question is how to make its abstract ideas of performance and unknowability tangible across multiple fields.

That's where cybernetics comes in. It takes the philosophical aspects of pragmatism and applies them in a practical way across various fields, including but not limited to management, psychiatry, anti-psychiatry, arts, education, theater, and robotics.

One of the challenges with philosophy is that it often revolves around abstract ideas without clear practical applications; it's like pushing a boulder up a hill. Pragmatism, dualism, Cartesianism: as bodies of theory, they all operate in a realm without tangible impact. While Dewey offers some practical insights, it is cybernetics that truly brings these concepts to life by demonstrating imaginative ways we can grapple with unknowability.

Evgeny: You've described the peripheral status of cybernetics since around the 70s. Yet, Silicon Valley, with its search engines like Google, thrives on feedback loops, which is a very cybernetic idea. Does this suggest that today's capitalism is heavily influenced by cybernetics?

Andy: It's possible to see diluted elements of cybernetics in these sectors, but they're typically narrowed and dominated by the pursuit of profit and market supremacy. Cybernetics doesn't inherently support such domination, which is why it's a more intriguing concept than capitalism.

Cybernetics garnered attention in the 60s before receding into the background. I see it as a distinct paradigm, pushed to the margins of our modern Western approaches. Its essence is often disguised—you have to delve deeply to find true instances of cybernetics in today's world.

Take academia. It can certainly be viewed as a cybernetic structure in the sense that, at the ground level of teaching and research, academics have a lot of freedom to do what they like. However, the university system can also have a hierarchical management structure that does not prioritize adaptability and experimentation. This is similar to how companies like Google and Amazon can have a self-organized sub-structure that is dominated by a non-cybernetic super-structure. The point is that a cybernetic sub-structure can be in service of something that has a narrow view of what people and organizations are for. In academia, the metrics of student satisfaction and research income can overshadow the adaptive and experimental nature of cybernetic principles.

THE UNKNOWABILITY OF THE WORLD

Evgeny: Would you agree that cybernetics, for all its hubris, can also be a very humble discipline at times, especially when, in the hands of people like Beer at least, it explicitly acknowledges that the world is just too complex to model and predict in advance?

Andy: Absolutely. It does posit that unknowability is an existential condition that we have no choice but to confront in our complex world. Stafford Beer and Ross Ashby emphasized that we cannot fully know the world, but we do have a choice in how we respond to this reality. We can choose to ignore it and act as if we are the masters of the universe, which can lead to catastrophic outcomes such as global warming and environmental disasters. Alternatively, we can acknowledge the presence of unknowability and act in the face of it, as cybernetics proposes. Cybernetics highlights the importance of adapting to and experimenting with complex systems that can always surprise us, and it prompts us to question how we should go on in such a world.

Evgeny: Can you give an example of where such a prior commitment to unknowability pays off politically and intellectually?

Andy: Well, in the book I’m currently writing, my argument is that if we ignore the unknowability of the environment and act on it as if we're the masters of it, we precipitate all of the dark sides of modernity. This includes global warming, extinction of species, and other environmental disasters that are happening all the time. The paradigm of domination and mastery has been the classic pattern of modernity, but it has led to these negative consequences.

On the other hand, there's another paradigm that I'm trying to bring back to life. This is the paradigm of acting with the world, more like a marginal swimmer in a river. In this paradigm, we don't dominate the environment but let it carry us along and surprise us. This acknowledges the unknowability of the environment and seeks to work with it instead of against it.

Evgeny: But this idea of “acting in the world” sounds rather politically neutral, e.g. big companies that pollute the environment are also “acting in the world” – and might even be using cybernetic methods to circumvent the law more efficiently. Don’t we need some normative criterion to make proper political sense of cybernetics?

Andy: I’m not so sure. The political consequence of cybernetics is that it opens up many more possibilities for how we can organize ourselves. If you're interested in socialism or anarchism, for example, you might start thinking about individuals being free to do what they like, collectively making decisions, and building things up from the ground level. This is a different approach from big political decisions about nationalizing industry, for example.

To achieve effective organization from the bottom up, we do need some form of variety reduction, which is what the Viable System Model and Syntegration are about. We need to think about what sort of world we want to live in and how we are going to organize things. The different segments need to relate to one another, and Syntegration was supposed to achieve that through a kind of back and forth.

Cybernetics was developed to help us understand the world around us and how we interact with it. By doing so, it helps us question what it's like to be human. We need to challenge the stipulation that production is important and that fun and enjoyment are irrelevant. Cybernetics isn't just about the VSM and Cybersyn. We might want to think about the anti-psychiatry movement or Gordon Pask's artworks.

Evgeny: But someone like Hayek might say that the market is just that very cybernetic system – developed to help us understand the world and interact in it. Wouldn’t the cybernetic language also legitimate some of the worst market excesses?

Andy: Cybernetics emphasizes the experimental quality of trying things when you don't know what the result is going to be. This is very different from organizing a system around the market, which is fixed and not experimental. The market also relies on a certain picture of what people are like, which is also fixed and not experimental. Cybernetics undercuts capitalism by being much closer to something you might call socialism or anarchism, as it is about open-ended inquiry.

_notes

Ross Ashby: A British psychiatrist and a founding father of cybernetics, he championed the concept of homeostasis in explaining how complex systems maintain stability.

Operations research: A problem-solving and decision-making discipline using advanced analytical methods to help manage an effective organization. It originated in Britain during WWII as a way of integrating technologies (such as radar) into warfare tactics.

Gordon Pask: An esteemed British cybernetician and educational theorist, he played pivotal roles in advancing conversational and interactive learning systems. His influential work extended to the field of architecture.

Warren McCulloch: An American neurophysiologist and cybernetician, he contributed to the founding of neural networks and computational neuroscience.

Macy Conferences: Held from 1946 to 1953, these conferences were crucial in shaping the conceptual foundations of cybernetics and systems theory.

Norbert Wiener: An American mathematician, he is regarded as the father of cybernetics, the interdisciplinary study of the structure of regulatory systems, which has impacted fields as diverse as computer science, robotics, philosophy, and biology.

Viable System Model: A conceptual model for the organizational structure of any viable or autonomous system, based on the structure of the human nervous system and the cybernetic principles of adaptive behavior and requisite variety, designed by Stafford Beer.

Syntegration: A non-hierarchical problem-solving model devised by Stafford Beer for use in small teams, integrating diverse positions and information streams into a common result.

Project Cybersyn: A 1970s initiative to manage Chile's economy using real-time data, networked computing, and principles of cybernetics. It was dismantled after Pinochet's 1973 coup.

_bibliography

Pickering, Andrew. “Cybernetics.” International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015, pp. 645–50.

_Links

Andrew's faculty page.

The Cybernetic Brain: Sketches of Another Future (2010)

The Mangle of Practice: Time, Agency, and Science (1995)

An indexed guide to the Santiago Boys universe (coming soon).