On second order cybernetics


Copyright 2017-9 Graham Berrisford. One of a hundred papers on the System Theory page at http://avancier.website .


For discussion of Ashby’s classical (first order) cybernetics, read https://bit.ly/2TRAqlA

This article explores insights that have been attributed to Heinz von Foerster and his second order cybernetics.

It argues some are truisms, axiomatic in modern science, and others are questionable.

It also argues that von Foerster is at least partly responsible for making the term “system” meaningless in much of today’s systems thinking discussion.


There was no description before life

A description is a reality, but not the reality it describes

A description is subjective, yet can prove objective

A few of Heinz von Foerster’s ideas

What is a self-organising system?

Challenges to self-organisation

Conclusions

Footnotes on self-organisation in other discussions


There was no description before life

Some attribute to second order cybernetics the idea that knowledge is a biological phenomenon.

The idea is older, more general, even axiomatic.

It is explored in papers on memories and messages, and language and logic, on the System Theory page at http://avancier.website.


You may have come across something called a WKID triangle or pyramid; there are several versions.

This table presents a version compatible with the discussion here.





Wisdom        the ability to respond effectively to knowledge

Knowledge     information that is accurate enough to be useful

Information   any meaning created or found in a structure by an actor

Data          a structure of matter/energy in which information has been created or found


Information is created or found by an actor in a data structure.

In other words, an actor creates or finds meaning in a message or signal at the point of its creation or use.

For eons, in the history of our planet, the actors were biological organisms, but today they can be computers.
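The idea that meaning is created or found at the point of decoding can be illustrated in code. The sketch below is not from the source: the same four-byte data structure yields different information depending on how an actor chooses to decode it.

```python
import struct

# The same data structure (four bytes) yields different "information"
# depending on how an actor chooses to decode it.
data = b"ABCD"

as_text = data.decode("ascii")         # an actor reading characters
as_int = struct.unpack(">I", data)[0]  # an actor reading a big-endian integer

print(as_text)  # ABCD
print(as_int)   # 1094861636
```

Neither reading is "the" meaning of the bytes; each is created by the decoding actor.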

A description is a reality, but not the reality it describes

Some attribute to second order cybernetics the idea that description is not reality.

The idea is older, more general, even axiomatic.

Descriptions of realities appear in the physical forms of memories and messages.

They represent the realities they describe, but they are not the same thing.

A description can never be more than a very selective perspective or model of a reality.


This insight underpins Ashby’s classical cybernetics.

Ashby pointed out that infinitely many possible variables might be used to describe “the real machine”.

His system includes only that tiny selection of variables that the observer wants to monitor or direct.

The system is not the real machine; it is a highly selective model of that machine.
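Ashby's selection of variables can be sketched in code. The machine, variable names and values below are hypothetical illustrations: a "system" is just the small subset of a machine's variables that an observer chooses to monitor.

```python
# The "real machine" has indefinitely many measurable variables;
# a "system" is the small selection an observer chooses to monitor.
real_machine = {
    "temperature": 21.5,
    "pressure": 101.3,
    "colour": "grey",
    "mass": 12.0,
    "vibration": 0.02,
    # ... indefinitely many more variables could be listed
}

def define_system(machine, chosen_variables):
    """An observer's system: the selected variables only."""
    return {v: machine[v] for v in chosen_variables}

# Two observers of the same machine define two different systems.
thermal_system = define_system(real_machine, ["temperature", "pressure"])
mechanical_system = define_system(real_machine, ["mass", "vibration"])

print(thermal_system)     # {'temperature': 21.5, 'pressure': 101.3}
print(mechanical_system)  # {'mass': 12.0, 'vibration': 0.02}
```

One machine, two systems: each system is a highly selective model, not the machine itself.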


The same insight underpins all science.

Physicists describe the universe in terms of particles or waves.

They do not say either description is “true”; each is only a model we find useful.

Scientists do not think in terms of absolute truth.

The role of science is to find, test, agree and share those descriptions we can rely on to be useful.

A description is subjective, yet can prove objective

Some attribute to second order cybernetics the idea that each individual constructs his or her own model of reality.

The idea is older, more general, even axiomatic.

“Everything that is said is said by an observer.” Heinz von Foerster, Observing Systems.

This insight underpins Ashby’s classical cybernetics and science in general.


Each mental model is unique to the mind that holds it.

However, subjective and objective descriptions are best not seen as mutually exclusive.

Subjective means personal, perhaps influenced by personal feelings, tastes, or opinions.

Objective means not restricted to one individual and considered to be factual.


Obviously, no two of us share the same biochemical mental model of the world.

But to say “no one shares the same knowledge of the world” is patently untrue, since it denies the success of social animal species.

And surely, von Foerster did not mean to say “Everything said is only subjective.”

Because the evidence is that we do manage to share “facts” that we abstract from observations of reality.

By communicating and testing descriptions, we turn what might be called subjective into objective.


Objective does not mean description = reality; it does not mean absolutely true.

It only means a) not limited to one person and b) supported by empirical evidence.

It is accurate enough to be demonstrably useful to more than one person.



All social animals depend on being able to communicate descriptions that usage proves to be objective.

A honey bee can encode its mental model of a pollen location in a dance.

Another bee can decode the dance into a mental model of where that pollen is.

To find the pollen, the second bee must share the mental model of the first.

This shows that both mental models represent the same facts – the distance and direction of the pollen source.

The facts recorded in these mental models are objective and accurate enough for us to call them “true”.


Suppose you see one honey bee find the pollen described to it by another honey bee.

You (3rd party observer or experimenter) have evidence that they have shared an objective description of the world.

That is the very definition of objective - not limited to one intelligent entity - and confirmed by empirical evidence.

Moreover, in an example of cross-species communication, scientists can read the dance of a honey bee and find the pollen themselves!
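The encode/decode loop can be sketched in code. The dance format and function names below are invented for illustration (a real bee dance encodes distance and direction quite differently): a sender encodes facts in a message, a receiver decodes the same facts, and a third party can verify the two models match.

```python
# A toy sketch of encoding and decoding a shared fact (not real bee biology).
def encode_dance(distance_m, direction_deg):
    """Sender encodes its mental model in a message (the 'dance')."""
    return f"{distance_m}:{direction_deg}"

def decode_dance(dance):
    """Receiver decodes the message into its own mental model."""
    distance, direction = dance.split(":")
    return int(distance), int(direction)

fact = (150, 40)                      # sender's model of the pollen location
dance = encode_dance(*fact)           # encoded in a message
receiver_model = decode_dance(dance)  # receiver's model, decoded from it

# The third-party test of objectivity: do the two models represent the same facts?
assert receiver_model == fact
```

The two models are physically distinct, yet a third party can confirm they represent the same facts.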



We (you and I) can both read and remember this sentence.

Our two mental models of the sentence are different and yet the same.

Different – our mental models are bio-chemically distinct and different.

The same - we can both recall and recite the sentence accurately.

The objectivity of our mental models is found not in their biochemistry.

It is found in communication showing we both recall and recite the sentence.

A 3rd party can test what we recite is objectively accurate.


By circular communication and theory-test experimentation we shift descriptions from subjective towards objective.

The more a mental model or message proves useful to its owner or receiver, the more experiments confirm a hypothesis, the more confidence the owner, receiver or scientist can place in its objectivity.

To deny that would be to deny the survival and flourishing of social animals in our biosphere, in the universe.

A few of Heinz von Foerster’s ideas

This section highlights four ideas, three of which are questionable.


Self-regulating systems

In the 1950s, W Ross Ashby popularised the use of the term “cybernetics” to refer to self-regulating systems.

Homeostatic control systems constrain the range of a variable’s values, or the population of a stock, between upper and lower bounds.

Self-regulation means an entity maintains its own state in a stable, orderly or homeostatic fashion (its state being describable in the values of variables).
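A homeostatic regulator of this kind can be sketched in a few lines. The bounds, disturbance range and clamp-style corrective rule below are illustrative assumptions, not Ashby's own formulation.

```python
import random

# A minimal homeostat sketch: a regulator keeps a state variable between
# upper and lower bounds despite random environmental disturbances.
LOWER, UPPER = 18.0, 22.0

def regulate(value):
    """Corrective action: push the variable back inside its bounds."""
    return max(LOWER, min(UPPER, value))

random.seed(1)
temperature = 20.0
for _ in range(100):
    temperature += random.uniform(-1.5, 1.5)  # environmental disturbance
    temperature = regulate(temperature)       # homeostatic correction

# The regulated variable never leaves its bounds.
assert LOWER <= temperature <= UPPER
```

The variable's value changes continually, but the regulator confines it to a range; that is state maintenance, not self-organisation.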


Wikipedia cites von Foerster as writing this in 1963.

“The main theme of this report is a particular facet of the general problem of pre-organization in self-organizing systems, namely, the theory and circuitry of information processing networks.

One may consider these networks as a special type of parallel computation channels which extract from the set of all possible inputs a particular subset which is defined by the internal structure of the network.

The advantage of such operationally deterministic networks in connection with adaptive systems is the obvious reduction in channel capacity of the adaptors, if it is possible to predetermine classes of inputs which are supposed to be meaningful for those interacting with the automaton.”


At this time, von Foerster’s “self-organizing systems” and “adaptive systems” were automatons that change state in reaction to “pre-determined classes of inputs”.

These systems cannot react to unforeseen inputs, or change their own “circuitry”.


The hermeneutic principle of communication?

Later, von Foerster wrote as follows in "Notes on an epistemology for living things", in Observing Systems, The Systems Inquiry Series, Intersystems Publications (1981), pp. 258-271.

In this work, he appeared to endorse what some call the hermeneutic principle of communication.

“Information” is a relative concept that assumes meaning only when related to the cognitive structure of the observer of this utterance (the “recipient”).


This is somewhat misleading, because an utterance also has meaning when related to the cognitive structure of its sender.

First, for any act of communication to occur, there must be some of what Ashby called variety.

If my office door is always open, I cannot use it to convey a message.

If it can be left open or closed, I can use it to convey a message (I am open to visitors or not).


Ashby emphasised that the meaning of a message depends on what the receiver/decoder knows of the sender/encoder.

In his example, two soldiers are taken prisoner by countries A and B; their wives each receive the same brief message “I am well”.

Though each has received the same message (or signal), they have received different informations (or meanings).

Because country A allows the prisoner a choice of three messages: I am well, I am slightly ill and I am seriously ill,

Whereas country B allows only one message: I am well (meaning no more than “I am alive”).
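Ashby's point can be put in numbers. Assuming each permitted message is equally likely (an assumption added here for simplicity), the information conveyed by one message is the base-2 logarithm of the variety of messages the sender could have chosen.

```python
import math

# The information conveyed by a message depends on the variety of messages
# the sender could have chosen, not on the message text itself.
def information_bits(variety):
    """Bits conveyed by one choice from a set of equally likely messages."""
    return math.log2(variety)

country_a_variety = 3  # "I am well" / "slightly ill" / "seriously ill"
country_b_variety = 1  # "I am well" only

print(information_bits(country_a_variety))  # about 1.58 bits
print(information_bits(country_b_variety))  # 0.0 bits - no choice, no information
```

The same message text carries about 1.58 bits from country A and zero bits from country B.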


Successful communication requires a sender to encode some meaning in a message or signal, and a receiver to decode the same meaning from that message or signal.

In other words, a sender encodes some information in a data structure, and receivers decode the same information from that data structure.

Every business depends on information processing systems in which it is assumed that senders and receivers, who encode and decode messages or data structures, share the same language.


We need a theory of the observer?

Von Foerster was something of a dilettante (“I don't know where my expertise is; my expertise is no disciplines”).

He enjoyed thinking about all things circular, recursive and self-referential and enjoyed provoking people with thoughts about them.

In the same 1981 work he wrote:

it was clear that the classical concept of an “ultimate science”, that is an objective description of the world in which there are no subjects (a “subjectless universe”), contains contradictions.

To remove these one had to account for an “observer” (that is at least for one subject):

(i) Observations are not absolute but relative to an observer’s point of view (i.e., his coordinate system: Einstein);

(ii) Observations affect the observed so as to obliterate the observer’s hope for prediction (i.e., his uncertainty is absolute: Heisenberg).

After this, we are now in the possession of the truism that a description (of the universe) implies one who describes it (observes it).

What we need now is the description of the “describer” or, in other words, we need a theory of the observer.”


It is indeed a truism that a description is a product of a describer; but it is not true that a describer necessarily affects what is described.

Heisenberg’s uncertainty principle applies to observations of micro-scale sub-atomic particles; it is not generally applicable to macro-scale objects and social phenomena.


Obviously, people can be involved in and affect the social phenomena they observe and describe.

This is rightly a concern to sociologists (e.g. Mead) studying the behaviours of tribal societies.

But it is possible to understand and describe other social phenomena with no “theory of the observer”.


In observing a tennis match, you need no description of the people who watch it.

In studying the laws of tennis, you need no description of the people who wrote those laws.

In observing how pricing affects supply and demand, Hayek did not need to exclude himself as a buyer or seller in the market.


We need no theory of the observer beyond understanding that people can make choices (as Ackoff noted), and are aware of their own actions.


Self-organising system?

A decade later, in "Ethics and Second-Order Cybernetics", von Foerster wrote:

“Something strange evolved among the philosophers, the epistemologists and, the theoreticians.

They began to see themselves more and more as being included in a larger circularity; maybe within the circularity of their family;

or that of their society and culture; or even being included in a circularity of cosmic proportions!”


von Foerster presented second order cybernetics as the science of “self-organising systems”.

He was fond of aphorisms and questions like: “Am I a part of the system, or am I apart from the system?”


You are never a part of a designed system (as a wheel is part of a car); rather, you play a role in countless systems (e.g. as a driver of a car).

At any time, you can choose not to play your role in system X; you can break the rules; you can act outside system X – perhaps in another system that competes with system X.

One thing you can do outside system X is to observe it; another thing you can do is redefine the rules of your role.

But unless you get all other actors in the system on board with that, that is to disorganise rather than organise.


Von Foerster is at least partly responsible for making the term “system” meaningless in much of today’s systems thinking discussion.

We need no theory of self-organisation beyond understanding that people are aware of their own actions, and can agree to change the rules of a system they act in.

What is a self-organising system?

Terms like “adaptation”, “evolution” and “change” are used with different meanings by different people.


These kinds of action, which a biologist and a sociologist might name differently:

·         Maintaining the values of state variables in a confined range

·         Increasing the values of state variables over time

·         Changing the state variable types or rules of a system – reorganising it to make a new system (at generation N+1)




System change may be divided into types thus.

·         State change: changing the values of given state variables (typically triggered by inputs).

·         Behaviour change: changing the variable types or the rules that update their values.

·         Reconfiguration: changing behaviour in a pre-ordained way.

·         Mutation: changing behaviour in a random or creative way.
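The distinction between state change and behaviour change can be sketched in code. The class and rule names below are illustrative: updating variable values under fixed rules leaves the system intact, while replacing the variables or rules yields a new system generation.

```python
# State change updates variable values under fixed rules; behaviour change
# replaces the variables or rules, making a new system generation (N+1).
class SystemGeneration:
    def __init__(self, generation, variables, rules):
        self.generation = generation
        self.state = dict(variables)  # variable names and current values
        self.rules = rules            # rules that update the values

    def state_change(self, inputs):
        """Change the values of given variables; same system."""
        for name, rule in self.rules.items():
            self.state[name] = rule(self.state[name], inputs)

    def behaviour_change(self, new_variables, new_rules):
        """Change the variable types or rules; a NEW system (generation N+1)."""
        return SystemGeneration(self.generation + 1, new_variables, new_rules)

s1 = SystemGeneration(1, {"stock": 10}, {"stock": lambda v, i: v + i})
s1.state_change(5)  # stock becomes 15; still generation 1
s2 = s1.behaviour_change({"stock": 0, "flow": 0},
                         {"stock": lambda v, i: v + 2 * i,
                          "flow": lambda v, i: i})
print(s1.generation, s1.state)  # 1 {'stock': 15}
print(s2.generation)            # 2
```

Note that `behaviour_change` does not modify the running system; it yields a successor, consistent with the view that changing the rules makes a new system.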


When people speak of “self-organisation”, they rarely make clear which kind(s) of change they are thinking of.

The term self-organisation is used in a variety of contexts and ways.

It may be loosely defined as the emergence of order from interactions between initially disordered elements.

However, the elements may initially be ordered in a different way.


The term self-organisation has been used in many disciplines, but with different meanings.

·         In chemistry, the self assembly of crystals.

·         In economics, the emergence of order in a free market as price changes influence supply and demand (after Hayek).

·         In biology, the emergence of complex life forms from the process of evolution (after Darwin).

·         In classical cybernetics, the maintenance of homeostasis (after Wiener and Ashby).

·         In chaos theory, arriving at an island of predictability in a sea of chaotic unpredictability.


Analysis of examples suggests there are at least two different ideas of what it means to call an entity self-organising; it might mean:

1.      the entity maintains its state in a stable, orderly or homeostatic fashion (its state being describable in the values of variables)

2.      the entity changes its own characteristics or properties (describable as variable types and/or rules).


Regarding 1: Heinz von Foerster articulated the self-organising principle of "order from noise".

This means that random perturbations ("noise") stimulate a system to move through a variety of states in its state space.

And the system may arrive near an “attractor” drawing it into a steady state, or an orderly state change pattern.
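"Order from noise" can be sketched as a simulation. The contraction rule and the attractor at zero below are invented for illustration: random perturbations push the state around its state space, while the dynamics draw it into a small region near the attractor.

```python
import random

# "Order from noise": random perturbations move the state around, while an
# attractor (here, the point 0.0) draws it into a bounded steady region.
random.seed(42)
state = 50.0
for _ in range(200):
    noise = random.uniform(-1.0, 1.0)  # random perturbation ("noise")
    state = 0.5 * state + noise        # contraction toward the attractor at 0

# The state ends up in a small bounded region near the attractor,
# however far from it the system started.
assert abs(state) < 2.5
print(round(state, 2))
```

Whatever the noise sequence, the contraction guarantees the state ends within a bounded region around the attractor.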


Regarding 2: von Foerster introduced a second and different kind of self-organisation.

His second-order cybernetics is said to be the recursive application of cybernetics to itself.

Here, self-organisation means actors observe and change the organisation they work in – they change their own roles and rules.


In social systems thinking, the term “self-organising” usually implies actors can observe and change a system they play a role in.

Some social systems thinkers have turned this idea into a political agenda or mission - to promote a “participatory democracy”.

A question that needs addressing is how self-organisation can happen without undermining the very concept of a system.

Especially where many actors play the same role, and should all change their behaviour simultaneously when the role is changed.

Challenges to self-organisation

Maturana stated he would “never use the notion of self-organization, because it cannot be the case... it is impossible.

That is, if the organization of a thing changes, the thing changes”.

(Maturana, H. (1987). Everything is said by an observer. In Gaia, a Way of Knowing, edited by W. Thompson, Lindisfarne Press, Great Barrington, MA, pp. 65-82, p. 71.)


“By organization, Maturana refers to the relations between components that give a system its identity, that make it a member of a particular type. Thus, if the organization of a system changes, so does its identity.”

(John Mingers, Self-Producing Systems: Implications and Applications of Autopoiesis. Contemporary Systems Thinking. New York: Plenum Press, 1995)


Similarly, in Forrester’s System Dynamics, if you change the stocks or flows, you change the identity of the system.

You create a new system, or system generation N+1.

"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows


Ashby put it thus:

"If the system is to be self-organising, the self must be enlarged to include… some outside agent."

(Ashby 1962)


Ashby rejected the idea that a system can change itself by creating new variable types or rules.

“One of Ashby’s goals was to repudiate that interpretation of self-organization, commonly held, that a machine or organism can change its own organization.”

Instead he postulated “A new higher level of the machine was activated to reset the lower level's internal connections or organisation.”

(Goldstein’s introduction to Ashby’s 1962 treatise on self-organisation.)


Ashby’s higher and lower level machines can be seen as “coupled” in a wider or aggregate machine.

But the aggregate machine is only ever partially self-organising, since one part of the aggregate machine always drives the change to the other part.


Lars Lofgren reported Ashby’s view thus.

“Ashby… dared to suggest that no machine can be said self–organizing in a certain complete sense.

“Thus the appearance of being ‘self-organizing’ can be given only by the machine S being coupled to another machine ...

Then the part S can be ‘self-organizing’ within the whole S+α.

Only in this partial and strictly qualified sense can we understand that a system is ‘self–organizing’ without being self–contradictory.”

(Lars Lofgren’s introduction to “The Wholeness of the Cybernetician”.)


In short, the roles and rules of a lower level system (S) cannot be set or changed from inside the system.

But they can be set or changed by a higher level process or meta system (M).
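This two-level arrangement can be sketched in code. The class names and rules below are illustrative, not Ashby's notation: the lower system S can change its state, but only the meta system M has an operation to replace S's rules.

```python
# The lower system S runs under fixed rules; only the meta system M
# can reset the lower level's organisation (its rules).
class LowerSystem:
    def __init__(self, rules):
        self.rules = rules
        self.state = 0

    def step(self, inp):
        # S can change its state, but has no operation to change its rules.
        self.state = self.rules["update"](self.state, inp)

class MetaSystem:
    def change_rules(self, system, new_rules):
        """The higher level resets the lower level's organisation."""
        system.rules = new_rules

s = LowerSystem({"update": lambda state, inp: state + inp})
s.step(3)  # state change within S: state = 3
m = MetaSystem()
m.change_rules(s, {"update": lambda state, inp: state * inp})
s.step(3)  # under the new rules: state = 9
print(s.state)  # 9
```

S plus M together appear "self-organising", but the rule change is always driven from the higher level, in Ashby's partial and qualified sense.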


The roles and rules of this system    are found in

Planets in a solar system             The law of gravity produced by the universe

Termites building a termite nest      The DNA produced by a process of reproduction

Players in a tennis match             The laws produced by the Lawn Tennis Association.


In a human social system, importantly, one actor may alternate between a role in a lower level system and a role in a meta system.

The actors in a system have insight into the system, and may be able to change it.

An actor can act as a system definer who changes the roles and rules of a social system they play a role in.

Still, one action is in one or the other system – not in both.


Conclusions

Some of what second order cyberneticians say seems axiomatic, and not unique.

·         Knowledge is a biological phenomenon – there was no description before life.

·         Knowledge "fits" but does not "match" the world of experience – a description is a reality, but not the reality it describes.

·         Each individual constructs his or her own "reality" – our mental models are bio-chemically distinct and different.


By circular communication and theory-test experimentation we can shift our descriptions from subjective towards objective.

The more a mental model or message proves useful to its owner or receiver, the more experiments confirm a hypothesis, the more confidence the owner, receiver or scientist can place in its objectivity.


Obviously, people, in everyday life and in business, must use their imagination and creativity.

They must communicate, learn from each other and respond in ad hoc ways to unforeseen inputs.

This is true of enterprise architects, software developers and all system designers.

And all system designers must respond to the experience of system implementation.

We don’t need second order cybernetics to tell us this circular sense-respond loop is vital.


The trouble with second order cybernetics is its central notion of a “self-organising system”.

Because this undermines the very idea of a system (as in classical cybernetics and more general system theory).


Distinguishing social networks from social systems

If every named entity is a system (1 for 1) the term system adds no value.

Ashby urged us to distinguish a system (a set of variables) from the real machine or animal that realises it.

He said one machine or animal can realise infinite different systems.


In sociology, we should distinguish a social system (a set of roles and rules) from the social network (a group of inter-communicating actors) that realises it.

The actors in one social network can play different roles in many parallel systems, S, T, U...


The second order cybernetics idea of a self-organising social system arises out of confusing the two concepts.

If the roles in system S include actions that change the roles in system S, that makes a nonsense of the system concept. 

Imagine several actors, who currently play the same role, each changing that role – as they see fit - while the system is running.

The result is the opposite of a system, it is disorderly, irregular and possibly uncoordinated behaviour.


Of course, we can coordinate human actors in a social network by giving them the same goal, or asking them to agree the same goal.

But motivating people is surely better classified as “management science” or some such, rather than system theory.

Business managers may create an organisation in which people are given only goals (not rules).

And then encourage those people to act and cooperate however they see fit.

That is not a general system theory; it is a very special human-only system theory, and little or nothing to do with cybernetics.


Distinguishing a higher or meta system from a lower system

Ashby would surely agree that a human actor playing a role in system S can observe that system and envisage changes to it.

But to adhere to classical cybernetics, that change must be made under change control.


Ashby’s concept of a higher level machine helps us reconcile classical cybernetics with self-organisation.

To change a role in a system S, the actor must step outside the lower system to act (however briefly) in a higher level or meta system (M) to system S.



Consider how two tennis players can change the rules of a tennis match they are playing.

They stop the match (step outside it) agree a rule change, then restart the match.

Via successive changes, the two players may radically change the nature of a tennis match.

"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows



Consider how a society can avoid “the tragedy of the commons”.

The lower level system is a group of people who share access to limited resources.

Such as fishermen who share fishing grounds, or farmers who share an irrigation system.

How can they avoid the tragedy, by which competition exhausts the common resource?

The meta system is the cooperative in which the fishermen or farmers agree their rules.

Now and then, the fishermen must stop fishing, and farmers stop farming, to define the rules of their social system.

(Elinor Ostrom (1990, 2010) defined eight generic conditions for such a cooperative.)


This idea needs a name, and for want of anything better it is here called 3rd order cybernetics.

3rd order cybernetics seems a better fit (than second order cybernetics) to most systems of interest to us, including social systems.

It is developed and further exemplified in the paper on 3rd order cybernetics on the System Theory page at http://avancier.website.

Footnotes on self-organisation in other discussions


Human organisations

Of course, there is much to be said about human organisations that is outside general system theory.

·         John Kotter: Organizations won't change unless there's a "burning platform".

·         Thomas Kuhn: New models are accepted when the adherents of the old models retire.

·         James C. Scott: Organizations become optimized to make them easier to observe and control, which is at odds with making them better and more efficient.

·         Public Choice Economics: Organizations don't have goals. Individuals in organizations do.

·         Bruce Bueno de Mesquita: People at the top have to be good at accumulating power. Act as if that's their only goal.

·         Pournelle's Iron Law: "In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely."


Turing machines

Ashby modelled systems as machines that are predictable to some extent.

He wrote of determinate machines, which respond predictably to each combination of input and current state.

And wrote of Markovian machines, which respond less predictably, using probability rules.

Even if his machine could have well-nigh infinite states, it could not have infinite functions/rules/probabilities.

And to add a new variable or function/rule/probability is to make a new machine/system/organisation.


Ashby did not mention Turing machines.

A reader has suggested a Turing machine (as opposed to a finite state machine) can be self-organising.

A Turing machine is "a mathematical model of a hypothetical computing machine which can use a predefined set of rules to determine a result from a set of input variables."

Can a Turing machine change its predefined set of rules? Can it invent new variables or new rules?

Suppose a Turing machine could organise itself; then we’d want to know:

·         In what ways can it change its own state variable types or rules?

·         Are change options prescribed within the machine, or infinite?

·         What triggers the machine to change itself?

·         How does it choose between changes it might make to itself?
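These questions can be made concrete with a minimal Turing-machine sketch (the rule table below is an invented example). Its rule table is predefined before it runs, and nothing in its operation adds to or alters that table.

```python
# A minimal Turing machine sketch: the rules are predefined before the
# machine runs, and its operation cannot add a rule to the table.
# This machine inverts a binary string, then halts.
def run_turing_machine(tape):
    tape = list(tape)
    # (state, symbol) -> (symbol to write, head move, next state)
    rules = {
        ("scan", "0"): ("1", 1, "scan"),
        ("scan", "1"): ("0", 1, "scan"),
        ("scan", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    state, head = "scan", 0
    while state != "halt":
        if head >= len(tape):
            tape.append("_")              # extend the tape with blanks
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_turing_machine("1011"))  # 0100
```

The machine can rewrite its tape (its state) at will, but the `rules` dictionary is fixed for its lifetime; changing it would make a different machine.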


Moreover, note that in most systems of interest to us, many actors or machines can play the same role.

When a state variable or rule is changed, how is it distributed to all actors in the system?


Pending answers, I favour an Ashby-like version of self-organisation, in which a higher or meta system changes the description of another.

It seems a better fit to the social and business systems of interest to us.


Bio-physical chemistry

Professor Manfred Eigen was a Nobel prize laureate in chemistry. https://lnkd.in/dsVBPYc

He and Peter Schuster researched and published on hypercycles, in the transdisciplinary domain of bio-physical chemistry.

Hypercycles demonstrate natural phenomena of self-organisation.

They demonstrate functionally coupled self-replicative entities.

A hypercycle is a cycle of autocatalytic reactions, arranged in a circle so that each reaction’s product catalyses the production of its clockwise neighbour.
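A hypercycle can be sketched as replicator dynamics. The equations and parameters below are a toy illustration, not Eigen and Schuster's full model: each entity's replication is catalysed by its anti-clockwise neighbour, and concentrations are normalised so they always sum to one.

```python
# A toy hypercycle: four self-replicating entities arranged in a circle,
# each one's growth catalysed by the concentration of its predecessor.
def hypercycle_step(x, dt=0.01):
    n = len(x)
    growth = [x[i] * x[(i - 1) % n] for i in range(n)]  # catalysed replication
    total = sum(growth)  # average "fitness" of the population
    # Replicator dynamics: normalise so concentrations stay summing to 1.
    return [xi + dt * (g - xi * total) for xi, g in zip(x, growth)]

x = [0.4, 0.3, 0.2, 0.1]
for _ in range(1000):
    x = hypercycle_step(x)

# All entities persist, coupled in a self-perpetuating cycle.
print([round(v, 3) for v in x])
```

No entity organises the others; each persists only through the cycle of mutual catalysis, which is why "self-perpetuating" may describe it better than "self-organising".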


What does "self-organising" mean here? It sounds more like self-perpetuating, or cyclical symbiosis.