Von Foerster’s ideas
On second order cybernetics, and disambiguating self-organisation
Copyright 2017-9 Graham Berrisford. One of more than 100 papers on the “System Theory” page at http://avancier.website. Last updated 15/06/2019 20:35
Many of today’s social system thinkers (knowingly or not) refer to the ideas found in Von Foerster’s second order cybernetics.
This paper argues that some of these ideas are truisms (axiomatic in all modern science) and others are questionable.
“Thanks for sharing this invaluable knowledge.” Linkedin message
This is one of many companion papers that analyse some systems thinkers’ ideas.
· Read Ashby’s ideas for an introduction to his ideas about cybernetics, mostly integral to general system theory
· Read Ackoff’s ideas on the application of general system theory (after von Bertalanffy and others) to management science.
· Read Ashby’s ideas about variety on his measure of complexity and law of requisite variety.
· Read Beer’s ideas on the application of cybernetics (after Ashby and others) to management science.
· Read Von Foerster’s ideas on ideas attributed to Heinz von Foerster and his second order cybernetics.
Further reading on the “System Theory” page at http://avancier.website includes:
Boulding’s ideas, Checkland’s ideas, Luhmann’s ideas, Marx and Engels’ ideas, Maturana’s ideas and Snowden’s ideas.
Heinz von Foerster (1911 to 2002) was an Austrian American scientist who combined physics with philosophy.
He is widely credited as the originator of second-order cybernetics.
He and other "second order cyberneticians" asserted that:
· Knowledge is a biological phenomenon (Maturana, 1970)
· Knowledge "fits" but does not "match" the world of experience (Glasersfeld, 1987)
· Each individual constructs his or her own "reality" (von Foerster, 1973).
This paper explains that these ideas are not unique to second order cybernetics or social systems thinking.
They underpin not only Ashby’s classical cybernetics (1956) but all modern science.
"Objectivity is the delusion that observations could be made without an observer." Heinz von Foerster
This is axiomatic: there was no observation before observers, and no description before life, because, as Maturana said, knowledge is a biological phenomenon.
This table (one version of the WKID triangle or pyramid) distinguishes four concepts.
· Wisdom: the ability to respond effectively to knowledge in new situations.
· Knowledge: information that is accurate enough to be useful.
· Information: any meaning created or found in a structure by an actor.
· Data: a structure of matter/energy in which information has been created or found.
You can create or find information or meaning in any structure or motion that is variable - has a variety of values.
E.g. a shadow, an office door, a dance, or the sounds we call words – all of them can be regarded as data.
From the ever-changing direction of a shadow on a sundial you may read the time of day.
The position of your office door (open or closed) can be used as a data structure.
You can use that structural variety to convey the message that you are open to visitors or not.
Honey bees communicate using the movements of a dance.
The spoken word gives us the ability (with almost no effort) to form messages.
Those messages can contain infinitely many different structures.
And the written word gives us the ability to preserve those structures in shared memory spaces.
For an act of communication to succeed, two roles must be played.
One actor must encode some information or meaning in a data structure or message.
Another must decode the same information or meaning from that data structure or message – using the same code.
There is no information or meaning in a data structure (shadow, door, dance or words) per se.
Information or meaning only exists in the communicating actors at the time a data structure is created (encoded) or used (decoded), using a code
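The two roles above can be sketched in code. This is a minimal illustration of encoding and decoding via a shared code, using the office-door example from the text; the mapping and function names are invented for illustration, not taken from any real system:

```python
# The shared code: a mapping between meanings and data values.
# (Illustrative: the "open/closed office door" example from the text.)
CODE = {"open to visitors": "door open", "do not disturb": "door closed"}
DECODE = {v: k for k, v in CODE.items()}

def encode(meaning):
    """Sender role: create information in a data structure."""
    return CODE[meaning]

def decode(data):
    """Receiver role: find information in the data structure."""
    return DECODE[data]

# Communication succeeds only because both roles use the same code.
message = encode("do not disturb")
assert decode(message) == "do not disturb"
```

The data structure (the string "door closed") carries no meaning in itself; the meaning exists only in the encoding and decoding actors who hold the shared code.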
If the information created or found in a data structure is accurate enough to be useful, then it may be called knowledge.
E.g. the knowledge of where to find some pollen can be communicated by one honey bee to another.
For most of the history of the world, the actors who communicated thus were biological organisms.
But today they can be computers, which are creations of biological organisms.
Wisdom is the ability to respond effectively to knowledge in new situations.
“We cannot know the essences of things in themselves; all we can know is what we know as abstracting nervous systems.” Alfred Korzybski
“Knowledge "fits" but does not "match" the world of experience” (Glasersfeld, 1987).
This is axiomatic: a description is not the reality it describes.
This triangular graphic separates descriptions from the realities they describe, symbolise or represent.
Abstraction of description from reality (a triangle in the original graphic):
· Describers <create and use> Descriptions
· Descriptions <represent> Realities
· Describers <observe & envisage> Realities
A description is created by a process of encoding and used by a process of decoding.
A description can never be more than a very selective perspective or model of a reality (else it would be the reality).
Recursively, descriptions of realities are also real.
Descriptions are abstractions that appear in the physical forms of memories and messages – and can be described.
"The environment as we perceive it is our invention." ~ Heinz von Foerster
Heinz von Foerster (2007). “Understanding Understanding: Essays on Cybernetics and Cognition”, p.212, Springer Science & Business Media
This is axiomatic: every mental model is unique to the mind that holds it.
Obviously, no two of us share the same biochemical mental model of the world.
But for second order cyberneticians to say “no one shares the same knowledge of the world” is clearly untrue, since it denies the success of social animal species.
Did von Foerster really mean to say “Everything said is only subjective”?
The concepts of subjective and objective are not mutually exclusive.
Subjective means personal, perhaps influenced by personal feelings, tastes, or opinions.
Objective means not restricted to one individual and considered to be factual.
“We cannot transcend ourselves as organisms that abstract” Alfred Korzybski
The evidence suggests we can; since all science, all humanity, depends on it.
We do it whenever we successfully cooperate socially.
All social animals depend on being able to communicate descriptions that usage proves to be objective.
We do manage to share “facts” that we abstract from observations of reality.
Your butcher advertises pork chops; you buy and eat them.
You share an understanding of the abstraction labelled “pork chop”.
You ask someone to tell you the time; they tell you the time.
You share an understanding of the abstraction labelled “the time”.
You use Newton's laws of motion to calculate a force.
A honey bee can encode its mental model of a pollen location in a dance.
Another bee can decode the dance into a mental model of where that pollen is.
To find the pollen, the second bee must share the mental model of the first.
This shows that both mental models represent the same facts – the distance and direction of the pollen source.
The facts recorded in these mental models are objective and accurate enough for us to call them “true”.
Suppose you see one honey bee find the pollen described to it by another honey bee.
You (3rd party observer or experimenter) have evidence that they have shared an objective description of the world.
That is the very definition of objective - not limited to one intelligent entity - and confirmed by empirical evidence.
Moreover, in an example of cross-species communication, scientists can read the dance of a honey bee and find the pollen themselves!
We (you and I) can both read and remember this sentence.
Our two mental models of the sentence are different and yet the same.
They are different – our mental models are bio-chemically distinct and different.
They are the same - we can both recall and recite the sentence accurately.
The objectivity of our mental models is found not in their biochemistry.
It is found in communication showing we both recall and recite the sentence.
A 3rd party can test what we recite is objectively accurate.
Objective does not mean description = reality.
It only means a) not limited to one person and b) well supported by social, logical or empirical verification.
Objective does not mean a description is absolutely true.
Physicists can describe elements of the universe as particles or waves; they do not say either description is “true”, they say only that it is a model we find useful.
Scientists do not think in terms of absolute truth; the role of science is to find, test, agree and share those descriptions we can rely on to be useful.
There is no absolute truth; a description is true to the extent it proves useful.
What a message sender considers true, a message receiver may consider false, and vice versa.
E.g. I say the swimming pool is warm; you act on that information by diving in; the swimming pool is colder than you expected, and now recall the information as a lie.
Sometimes, the accuracy of information can be measured more objectively, but all measurement has a limited degree of accuracy.
E.g. Newton’s laws of motion are approximations.
In short, through social, logical or empirical verification we shift descriptions from subjective towards objective.
The more a mental model or message proves useful to its owner or receiver, the more experiments confirm a hypothesis, the more confidence the owner, receiver or scientist can place in its objectivity.
To deny that would be to deny the survival and flourishing of social animals in our biosphere.
Von Foerster was something of a dilettante (“I don't know where my expertise is; my expertise is no disciplines”).
He enjoyed thinking about all things circular, recursive and self-referential and enjoyed provoking people with thoughts about them.
"Should one name one central concept, a first principle, of cybernetics, it would be circularity." ~ Heinz von Foerster
OK, but the circularity in a feedback loop between cooperating machines is radically different from the circularity between a machine in operation and its designer.
“Information” is a relative concept that assumes meaning only when related to the cognitive structure of the observer of this utterance (the “recipient”).
von Foerster "Notes on an epistemology for living things" in Observing Systems, The Systems Inquiry Series, Intersystems. Publications (1981), p. 258-271.
Here, von Foerster appeared to endorse what some call the hermeneutic principle of communication.
This is at least somewhat misleading, because an utterance has meaning when related to the cognitive structure of its sender as well as its recipient.
Again, for an act of communication to succeed, two roles must be played.
One actor must encode some information or meaning in a data structure or message.
Another must decode the same information or meaning from that data structure or message – using the same code.
Ashby emphasised that the meaning of a message depends on what the recipient knows of the sender.
E.g. in Ashby’s example, two soldiers are taken prisoner by countries A and B; their wives receive the same brief message: “I am well”.
Though each has received the same message (or signal), they have received different information/meanings.
Because country A allows the prisoner a choice of three messages: I am well, I am slightly ill and I am seriously ill,
Whereas country B allows only one message: I am well (meaning no more than “I am alive”).
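Ashby’s point can be made quantitative with a small calculation. Assuming each permitted message is equally likely, the information conveyed by a signal depends on the variety of messages the sender could have chosen, not on the signal itself. The message lists are from Ashby’s example; the function is illustrative:

```python
import math

# Variety of messages each country permits its prisoners to send.
country_a_messages = ["I am well", "I am slightly ill", "I am seriously ill"]
country_b_messages = ["I am well"]

def information_bits(possible_messages):
    # Assuming equally likely messages, information = log2(variety).
    return math.log2(len(possible_messages))

print(information_bits(country_a_messages))  # ~1.58 bits
print(information_bits(country_b_messages))  # 0.0 bits: no choice, no information
```

The same signal “I am well” carries about 1.58 bits from country A and zero bits from country B, because wife A knows her husband had a choice of three messages and wife B knows he had none.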
Aside: every business depends on information/data processing systems in which it is assumed that:
· Data transmission is perfect
· Receivers decode messages or data structures using the same language senders use to encode them.
In this context, the terms information and data are usually interchangeable.
E.g. a so-called “information model” is likely to be a somewhat informal and abstract “data model”.
… the classical concept of an “ultimate science”… contains contradictions.
To remove these one had to account for an “observer” (that is at least for one subject):
(i) Observations are not absolute but relative to an observer’s point of view (i.e., his coordinate system: Einstein);
(ii) Observations affect the observed so as to obliterate the observer’s hope for prediction (i.e., his uncertainty is absolute: Heisenberg)
After this, we are now in the possession of the truism that a description (of the universe) implies one who describes it (observes it).
What we need now is the description of the “describer” or, in other words, we need a theory of the observer.”
von Foerster in "Notes on an epistemology for living things" in Observing Systems, The Systems Inquiry Series, Intersystems. Publications (1981).
Obviously, a description is a product of a describer; but it is simply not true that a describer necessarily affects what is described.
Von Foerster’s reference to Heisenberg’s uncertainty principle applies to observations of micro-scale sub-atomic particles.
It is not generally applicable to macro-scale objects and social phenomena.
E.g. We need no knowledge of Newton to understand and apply his laws of motion.
We need no knowledge of the umpire to watch a tennis match and understand the score board.
In studying the laws of tennis, we need no description of the people who wrote those laws.
In observing how pricing affects supply and demand, the economist Hayek did not need to exclude himself as a buyer or seller in the market.
True, observers can become involved in, and affect, the social phenomena they observe and describe.
This is rightly a concern to sociologists (e.g. Mead) in studying the behaviours of tribal societies.
But it is possible to understand and describe other physical and social phenomena with no “theory of the observer”.
“Something strange evolved among the philosophers, the epistemologists and, the theoreticians.
They began to see themselves more and more as being included in a larger circularity; maybe within the circularity of their family;
or that of their society and culture; or even being included in a circularity of cosmic proportions!”
von Foerster in "Ethics and Second-Order Cybernetics"
Homeostatic control systems constrain the range of a variable’s values, or the population of a stock, between upper and lower bounds.
Self-regulation means an entity maintains its own state in a stable, orderly or homeostatic fashion.
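Homeostatic self-regulation can be sketched as a simple control loop. The bounds, step size and disturbance sequence below are invented for illustration:

```python
def regulate(value, lower, upper, step=2.0):
    """Return a corrective adjustment that pushes the value back inside bounds."""
    if value < lower:
        return step
    if value > upper:
        return -step
    return 0.0

# Simulate: the environment perturbs a variable; the regulator counteracts.
value = 25.0
for disturbance in [3.0, 3.0, 3.0, -1.0]:
    value += disturbance                   # perturbation from outside
    value += regulate(value, 20.0, 30.0)   # homeostatic correction
# Despite the disturbances, the variable ends inside the range [20, 30].
```

Note that the regulator only changes the *value* of the variable; it never changes the bounds or the rule. That is self-regulation, not self-organisation.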
It wasn’t long before systems thinkers turned their attention from self-regulation to the radically different concept of self-organisation.
The concept of self-organisation takes different forms in von Foerster’s writing.
It appears in one guise as the principle of "order from noise".
Random perturbations ("noise") will stimulate a system to move through a variety of states in its state space.
As it does so, the system may arrive near an “attractor” drawing it into a steady state, or an orderly state change pattern.
This kind of “self-stabilising” behavior can be observed in simple mechanical systems.
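The “order from noise” idea can be sketched with a toy dynamical system. The dynamics (a contraction toward the fixed point x = 0) and the noise range below are invented for illustration:

```python
import random

# Random perturbations ("noise") drive the state through its state space;
# the dynamics pull it toward an attractor (here, the fixed point of x -> x/2).
random.seed(1)
x = 100.0
for _ in range(200):
    x = x / 2 + random.uniform(-0.5, 0.5)  # dynamics + noise

# The state does not wander off: the contraction traps it near the attractor,
# so the noisy system settles into an orderly neighbourhood of x = 0.
print(abs(x) < 5.0)  # True
```

The noise supplies the variety of states; the attractor supplies the order. Neither the map nor the noise is changed by the run, so this is self-stabilisation, not self-organisation.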
What about human social systems? Von Foerster is credited with initiating second-order cybernetics.
Second-order cybernetics is said to be the recursive application of cybernetics to itself.
"First-order cybernetics is the science of observed systems; second-order cybernetics is the science of observing systems." von Foerster
In this context, the term self-organisation means that the actors playing roles in a system can observe and change the variables or rules of that system.
This table lists some systems that cannot be changed by the actors playing roles in them; the variables and rules of each system are found elsewhere.
· Planets in a solar system – in the law of gravity produced by the universe.
· Termites building a termite nest – in the DNA produced by a process of reproduction.
· Players in a tennis match – in the laws produced by the Lawn Tennis Association.
By contrast, many human social systems are partly or wholly defined by the actors who play roles in them.
Von Foerster asked: “Am I a part of the system, or am I apart from the system?”
How would Ashby have answered this question?
“Heinz, cybernetics does not ask "what is this thing?" but ''what does it do?" It is thus essentially functional and behaviouristic.
You must distinguish yourself as a concrete thing from the actions you perform in various systems.
Your part in any social or socio-technical system is the behavior you contribute to it, rather than you as a person.” (A reply based on Ashby’s writing).
E.g. You are only a “part” of a car system in so far as you play a role in it, say as a driver.
Being self-aware and having free will, you can ignore or break the rules of that role: jump a red light, or close your eyes for a nap and crash the car.
You have the same freedom of choice in every social or socio-technical system you play a role in.
You can also act simultaneously in several systems (drive the car, listen to the radio, answer a call from your boss, sing a song to your child in the back seat.)
Some system theorists (notably Ashby and Maturana) have said the concept of a self-organising system makes no sense.
However, the term is widely used, and has been interpreted in an extraordinarily wide variety of ways.
Self-organisation = absence of a design or pattern?
This means there is no law, rule or definition of how a system forms or changes.
However, one may say there is a blueprint for much so-called “self-organisation”.
· The blueprint for self-organisation in a solar system is found in the laws of physics.
· The blueprint for self-organisation in a molecule is found in the chemists’ periodic table.
· The blueprint for self-organisation in a biological organism is found in its DNA.
· The blueprint for self-organisation in a social system is found in the minds or documents of its actors.
In the first three examples, the self-organisation is predetermined and predictable in theory if not in practice.
Self-organisation = decentralised control?
Decentralisation means there is no central control - no overarching controller of system processes.
Rather, the processes of the system are distributed between atomic components or agents.
In sociology: this may be called anarchy or a participatory democracy (apparently the vision of many social systems thinkers).
In computing: this corresponds to the “choreography” design pattern rather than the “orchestration” pattern.
The former is found even in very simple software systems; it does not imply “self-organisation”.
Self-organisation = accretion?
Accretion means growth or increase by the gradual accumulation of additional layers or matter.
The accretion of a crystal growing in a super-saturated liquid is a simple process that has been called self-organisation.
The accretion of a city growing as it attracts more people and money has also been called self-organisation.
But neither is what people usually think of by “self-organisation”.
Self-organisation = flocking?
Flocking means to move or go together in a crowd.
The shoaling behavior of fish has been called “self-organisation”.
The behavior of a flock of starlings wheeling in the sky has also been called “self-organisation”.
In both examples, many simple interactions between many adjacent actors produce a complicated moving shape.
However, the complexity of these (visual) state-change effects is more in the eye of the observer than in the system itself.
Self-organisation = morphogenesis?
By any measure, the morphogenesis of an organism is a complex process.
The process, predetermined by DNA, inexorably builds an adult organism from an egg.
As the process proceeds, new kinds of component and interaction emerge, increasing the complexity of the organism.
Self-organisation = business reorganisation?
All the processes above are very different from the “self-organisation” of a social or business organisation.
The morphogenesis of a business organisation is a process that leads to outcomes that are not inexorable or pre-determined.
Organisation changes are stimulated by a variety of internal and external forces - that might either complexify or simplify the organisation
Ultimately, external forces (political, economic, social, legislative and environmental) dominate the internal ones.
A business, though it relies on countless systems, is not well described as a system as a whole.
Inside all business organisations, the actors behave in a mix of ad hoc and regular ways.
Sometimes actors act in business systems, where they behave in regular ways; often they act outside of any describable system.
Ashby and Maturana said the concept of a self-organising system makes no sense.
They rejected the idea that a system can change itself by creating new variables or rules.
They said a system can only be re-organised from outside the system, by a higher process or meta system.
“By “organization” Maturana refers to the relations between components that give a system its identity, that make it a member of a particular type.
Thus, if the organization of a system changes, so does its identity.”
(John Mingers in Self-Producing Systems: Implications and Applications of Autopoiesis. Contemporary Systems Thinking. New York: Plenum P, 1995)
“Maturana stated he would "never use the notion of self-organization, because it cannot be the case... it is impossible.
That is, if the organization of a thing changes, the thing changes”.
(Maturana, H. (1987). Everything is said by an observer. In Gaia, a Way of Knowing, edited by W. Thompson, Lindisfarne Press, Great Barrington, MA, pp. 65-82, p. 71.)
Forrester’s and Meadows’ view
In Forrester’s System Dynamics, if you change the stocks or flows, you change the identity of the system.
You create a new system, or system generation N+1.
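That view can be sketched as a toy stock-and-flow model. The stocks, flows and rates below are invented for illustration; this is not Forrester’s notation, only the distinction it implies between changing stock values and changing the system’s structure:

```python
# A system is a fixed set of stocks and flows; running it changes only the
# stock *values*. Adding or removing a stock or flow defines a new system
# (generation N+1). All names and rates here are illustrative.

def step(stocks, flows, dt=1.0):
    """Advance stock values one time step; the structure is unchanged."""
    new = dict(stocks)
    for source, sink, rate in flows:
        amount = rate(stocks) * dt
        if source: new[source] -= amount
        if sink:   new[sink]  += amount
    return new

# Generation N: one stock, one inflow.
stocks = {"population": 100.0}
flows = [(None, "population", lambda s: 0.02 * s["population"])]  # births
stocks = step(stocks, flows)   # values change; the system is the same

# Generation N+1: adding a death flow changes the system's identity.
flows_gen2 = flows + [("population", None, lambda s: 0.01 * s["population"])]
```

Running `step` is ordinary system behaviour; constructing `flows_gen2` is a change to the system itself, and on the Forrester/Meadows view it creates a new system.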
“One of Ashby’s goals was to repudiate that interpretation of self-organization, commonly held, that a machine or organism can change its own organization.”
Instead he postulated “A new higher level of the machine was activated to reset the lower level's internal connections or organisation.”
(Goldstein’s introduction to Ashby’s 1962 treatise on self-organisation.)
Ashby’s higher and lower level machines can be seen as “coupled” in a wider or aggregate machine.
But the aggregate machine is only ever partially self-organising, since one part of the aggregate machine always drives the change to the other part.
Lars Lofgren reported Ashby’s view thus.
“Ashby… dared to suggest that no machine can be said self–organizing in a certain complete sense.”
“Thus the appearance of being ‘self-organizing’ can be given only by the machine S being coupled to another machine ...
Then the part S can be ‘self-organizing’ within the whole S+α.
Only in this partial and strictly qualified sense can we understand that a system is ‘self–organizing’ without being self–contradictory.”
(Lars Lofgren’s introduction to “The Wholeness of the Cybernetician”.)
As Ashby put it:
"If the system is to be self-organising, the self must be enlarged to include… some outside agent." (Ashby 1962)
The roles and rules of a lower level system cannot be set or changed from inside the system.
But they can be set or changed by a higher level process or meta system.
Moreover, in a human social system, one actor may alternate between a role in a lower level system and a role in a meta system.
The actors in a system have insight into the system, and may be able to change it.
An actor can act as a system definer who changes the roles and rules of a social system they play a role in.
Still, one action is in one or the other system – not in both.
If every named entity is a system (1 for 1) the term system adds no value.
Von Foerster and his second order cybernetics are at least partly responsible for making the term “system” meaningless in much of today’s systems thinking discussion.
Some of what second order cyberneticians say seems axiomatic.
· Knowledge is a biological phenomenon – yes, there was no description before life.
· Knowledge "fits" but does not "match" the world of experience – yes, a description is a reality, but not the reality it describes
· Each individual constructs his or her own "reality" – yes, our mental models are bio-chemically distinct and different.
However, by social, logical and empirical verification we shift our descriptions from purely subjective towards objective.
The more useful a mental model or message proves to its owner or receiver, the more experiments confirm a hypothesis, the more confidence the owner, receiver or scientist can place in its objectivity.
The trouble with the apparent circularity of a self-organising system
Ashby and Maturana rejected the notion of a “self-organising system”.
It undermines the very idea of a system in classical cybernetics and more general system theory.
For sure, the actors who play roles in a system may agree to change the variables or rules of that system.
But whenever actors discuss and agree such a change, they are (for that time) acting in a higher or meta system.
And once the change is made, the actors (still members of the same social network) now act in a new system or system generation.
Some social systems thinkers treat the idea of self-organisation as a political agenda or mission - to promote a “participatory democracy”.
How can self-organisation happen without undermining the very concept of a system?
Where many human actors cooperate in a system, they must understand and agree to any change before it can be rolled out.
Such inter-generational evolution is different from chaotic ad hoc change.
If actors continually change the properties and functions of the organisation they work in, then the concept of a system is lost.
Distinguishing social networks from social systems
Ashby urged us to distinguish a system (a set of variables) from the real machine or animal that realises it.
He said one machine or animal can realise infinite different systems.
We need to distinguish:
· a social system - a set of roles and rules that actors may comply with
· a social network - a group of inter-communicating actors who can realise any number of systems.
The second order cybernetics idea of a self-organising social system confuses the two concepts.
If the roles in system S include actions that change the roles in system S, that makes a nonsense of the system concept.
Imagine several actors, who currently play the same role, each changing that role – as they see fit - while the system is running.
The result is the opposite of a system; it leads to disorderly, irregular and likely uncoordinated behaviour.
Distinguishing a higher or meta system from a lower system
Ashby would surely agree that a human actor playing a role in system S can observe that system and envisage changes to it.
But to adhere to classical cybernetics, that change must be made under change control.
Ashby’s concept of a higher level machine helps us reconcile classical cybernetics with self-organisation.
To change a role in a system S, the actor must step outside the lower system to act (however briefly) in a higher level or meta system (M) to system S.
Consider how two tennis players can change the rules of a tennis match they are playing.
They stop the match (step outside it) agree a rule change, then restart the match.
Via successive changes, the two players may radically change the nature of a tennis match.
"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows
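The change-control idea above can be sketched in code: a rule change is agreed in the meta-system and yields a new system generation, while the running system’s rules are never edited in place. The class and rule names are illustrative:

```python
# "3rd order" change control sketch: actors act under the current rules;
# to change a rule they step into a meta-system, agree the change, and a
# new system generation results. All names here are illustrative.

class SystemGeneration:
    def __init__(self, generation, rules):
        self.generation = generation
        self.rules = dict(rules)

def agree_rule_change(system, rule, new_value, votes):
    """Meta-system process: requires unanimity; never mutates the old system."""
    if not all(votes):
        return system                       # no agreement, no change
    new_rules = dict(system.rules)
    new_rules[rule] = new_value
    return SystemGeneration(system.generation + 1, new_rules)

match = SystemGeneration(1, {"sets_to_win": 3})
match = agree_rule_change(match, "sets_to_win", 2, votes=[True, True])
print(match.generation, match.rules["sets_to_win"])  # 2 2
```

The tennis match of generation 1 and the match of generation 2 are realised by the same two players (the same social network), but they are different systems.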
Consider how a society can avoid “the tragedy of the commons”.
The lower level system is a group of people who share access to limited resources.
Such as fishermen who share fishing grounds, or farmers who share an irrigation system.
How can they avoid the tragedy of the commons, by which competition exhausts the common resource?
The meta system is the cooperative in which the fishermen or farmers agree their rules.
Now and then, the fishermen must stop fishing, and farmers stop farming, to define the rules of their social system.
(Elinor Ostrom (1990, 2010) defined eight generic conditions for such a cooperative.)
This idea needs a name, and for the want of anything better it is here called 3rd order cybernetics.
3rd order cybernetics seems a better fit (than second order cybernetics) to most systems of interest to us, including social systems.
The need for a circular sense-respond loop
Obviously, people, in everyday life and in business, must use their imagination and creativity.
They must communicate, learn from each other and respond in ad hoc ways to unforeseen inputs.
To verify any non-trivial logic requires social verification by peer review and/or empirical verification by observation of test results.
All system designers, enterprise architects and software developers know the importance of peer review and testing.
And all must respond to the experience of system implementation.
We don’t need second order cybernetics to tell us this circular sense-respond loop is vital.
Distinguishing the motivation of people in a social network from the behaviors of a system
Of course, managers may coordinate human actors in a social network by giving them a goal, or asking them to agree a goal.
Business managers may create an organisation in which people are given only goals (not rules).
And then encourage those people to act and cooperate however they see fit.
That is not a general system theory; it is a very special human-only system theory, and little or nothing to do with cybernetics.
It is better called “management science” or some such, than system theory.
The definition in Wikipedia of self-organisation is “the emergence of order from interactions between initially disordered elements.”
But the term is used in many other ways, and people rarely make clear which way they are thinking of.
The term has been used with a remarkably wide variety of meanings, for example:
· In chemistry, the self assembly of a crystal in a liquid by accretion.
· In economics, the emergence of order in a free market as price changes influence supply and demand (after Hayek).
· In biology, the emergence of complex life forms from the process of evolution (after Darwin).
· In mechanics, the maintenance of homeostasis (after Wiener and Ashby).
· In chaos theory, arriving at an island of predictability in a sea of chaotic unpredictability.
Further analysis of system change varieties suggests this (tentative) classification.
· Update: changing the values of variables in response to inputs.
· Accretion: as in the expansion of a city, or the inexorable growth of a crystal in a super-saturated liquid
· Flocking: as in the flocking of starlings, or the shoaling behavior of fish
· Self-regulation: as in the maintenance of homeostasis during the life of an entity
· Self-sustaining: in which autopoietic processes make and maintain the structures that perform the processes.
· System change: changing the variables or the rules that update their values.
· Reconfiguration: changing behaviour in a pre-ordained way.
· Leverage: switching a system from one given configuration to another.
· Morphogenesis: as in the inexorable growth of an embryo into an adult.
· Evolution: changing behaviour in a random or creative way.
· Discrete mutation: replacement of one system generation by the next.
· Mutation with random change: as in biological evolution.
· Mutation with designed change: redesign by external observers, or by self-aware actors who observe and change the system they play roles in.
· Continuous mutation: n/a. Impossible here, since it is contrary to the notion of a system.
Self-organisation appears above in several guises; perhaps the most popular in social systems thinking discussion are:
· Internal discrete mutation with designed change
· Continuous mutation
But the latter must be rejected, because it undermines the concept of a system.
Professor Manfred Eigen was a Nobel prize laureate in chemistry. https://lnkd.in/dsVBPYc
He and Peter Schuster researched and published on hypercycles, in the trans-disciplinary domain of bio-physical chemistry.
Hypercycles demonstrate natural phenomena of self-organisation.
They demonstrate functionally coupled self-replicative entities.
A hypercycle is a cyclic symmetry of autocatalytic reactions that are arranged in a circle so that each reaction's product catalyses the production of its clockwise neighbour.
What does "self-organising" mean here? It sounds more like self-sustaining (autopoiesis), self-perpetuating, or cyclical symbiosis.
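A toy replicator model suggests what the cyclic coupling does. The growth constant and starting concentrations below are invented for illustration; this is not Eigen and Schuster’s model, only a sketch of its circular catalysis:

```python
# Hypercycle sketch: each species' replication is catalysed by its
# neighbour in the cycle. Concentrations are renormalised to fractions
# so relative abundances can be compared.

def hypercycle_step(x, k=0.1):
    n = len(x)
    # species i is catalysed by species i-1, closing the circle
    growth = [x[i] * (1 + k * x[(i - 1) % n]) for i in range(n)]
    total = sum(growth)
    return [g / total for g in growth]

x = [0.4, 0.3, 0.2, 0.1]
for _ in range(500):
    x = hypercycle_step(x)
# The circular catalysis sustains every member: none dies out.
```

The cycle perpetuates its own members, which supports reading the phenomenon as self-sustaining rather than self-organising: the reaction rules never change, only the concentrations.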
Ashby modelled systems as machines that are predictable to some extent.
He wrote of determinate machines, which respond predictably to each combination of input and current state.
And wrote of Markovian machines, which respond less predictably, using probability rules.
Even if his machine could have well-nigh infinite states, it cannot have infinite functions/rules/probabilities.
And to add a new variable or function/rule/probability is to make a new machine/system/organisation.
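The distinction between Ashby’s two kinds of machine can be sketched as two tiny rule tables (the states and inputs are invented):

```python
import random

# A determinate machine maps (state, input) to exactly one next state.
DETERMINATE = {("off", "press"): "on", ("on", "press"): "off"}

def determinate_step(state, inp):
    return DETERMINATE[(state, inp)]

# A Markovian machine maps (state, input) to a probability distribution
# over next states.
MARKOVIAN = {("on", "press"): [("off", 0.9), ("on", 0.1)]}

def markovian_step(state, inp, rng):
    outcomes, weights = zip(*MARKOVIAN[(state, inp)])
    return rng.choices(outcomes, weights=weights)[0]

assert determinate_step("off", "press") == "on"   # always the same
rng = random.Random(0)
print(markovian_step("on", "press", rng))         # usually "off"
```

In both kinds of machine the rule table is fixed: running the machine changes its state, never its rules. Adding an entry to either table would, in Ashby’s terms, make a new machine.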
Ashby did not mention Turing machines.
A reader has suggested a Turing machine (as opposed to a finite state machine) can be self-organising.
A Turing machine is "a mathematical model of a hypothetical computing machine which can use a predefined set of rules to determine a result from a set of input variables."
Can a Turing machine change its predefined set of rules? Can it invent new variables or new rules?
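For comparison, here is a minimal Turing-style machine with a predefined rule table; the example machine (invented for illustration) flips a string of bits. Running it changes the tape, never the table:

```python
# (state, symbol) -> (write, head move, next state). The table is fixed:
# nothing in the run can add a rule or a new symbol to RULES.
RULES = {
    ("flip", "0"): ("1", 1, "flip"),
    ("flip", "1"): ("0", 1, "flip"),
    ("flip", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape):
    tape, head, state = list(tape) + ["_"], 0, "flip"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))  # "0100"
```

The tape is unbounded state storage, but the machine’s organisation is its rule table, and that is given in advance; any change to it is made from outside the machine.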
Suppose a Turing machine could organise itself; then we’d want to know:
· In what ways can it change its own state variable types or rules?
· Are change options prescribed within the machine, or infinite?
· What triggers the machine to change itself?
· How does it choose between changes it might make to itself?
Moreover, note that in most systems of interest to us, many actors or machines can play the same role.
When a state variable or rule is changed, how is it distributed to all actors in the system?
Pending answers, I favour an Ashby-like version of self-organisation, in which a higher or meta system changes the description of another.
It seems a better fit to the social and business systems of interest to us.
Of course, there is much to be said about human organisations that is outside general system theory.
· John Kotter: Organizations won't change unless there's a "burning platform".
· Thomas Kuhn: New models are accepted when the adherents of the old models retire.
· James C. Scott: Organizations become optimized to make them easier to observe and control, which is at odds with making them better and more efficient.
· Public Choice Economics: Organizations don't have goals. Individuals in organizations do.
· Bruce Bueno de Mesquita: People at the top have to be good at accumulating power. Act as if that's their only goal.
· Pournelle's Iron Law: "In any bureaucracy, the people devoted to the benefit of the bureaucracy itself always get in control and those dedicated to the goals the bureaucracy is supposed to accomplish have less and less influence, and sometimes are eliminated entirely."