Ashby’s six essential ideas

For applying “cybernetics” to “management science”

Copyright 2019 Graham Berrisford. One of more than 100 papers on the “System Theory” page. Last updated 01/01/2020 19:03


W. Ross Ashby (1903-1972) was a psychologist and systems theorist.

He popularised the usage of the term 'cybernetics' to refer to self-regulating (rather than self-organising) systems.

This paper abstracts six ideas from three of Ashby’s works, which you can find on the internet:

·       “Design for a Brain” (1952)

·       “Introduction to Cybernetics” (1956/7)

·       “Principles of the self-organizing system” (1962).


Preface: on things and types

Ashby 1: We convert the continuous to the discrete

Ashby 2: Observers create and use abstract systems to represent regular behaviors

Many real-world entities can realise one system

Many systems can be abstracted from one real world entity

Ashby 3: A system applies a set of rules to a set of variables

Ashby 4: The adaptation ambiguity: distinguish changing a system’s state from changing its rules

Ashby 5: The self-organization ambiguity: distinguish assembling parts from improving

Ashby 6: The law of requisite variety

Obstacles to applying cybernetic ideas to management science

Applying Ashby’s self-organization principle to management science


Preface: on things and types

Ashby’s cybernetics is based on simple presumptions about how we describe reality.

We divide the world into things, and describe them using types, with some interest or requirement in mind.


Below are three versions of the triangular epistemology used in this work: one for philosophers, one for type theorists, and one for systems thinkers.


Our philosophy

Descriptions

<create and use>     <represent>

Describers <observe and envisage> Realities


Our type theory

Types

<create and use>    <characterize>

Describers <observe and envisage> Things


Ashby’s cybernetics

Systems

<create and use>           <represent>

Systems thinkers <observe and envisage> Real machines


Ashby’s “real machine” is a thing (observed or envisaged) that behaves according to rules.

E.g. a clock, a pair of sticklebacks in their mating ritual, a flight of geese, a game of poker.

Despite being rule-bound, it may have surprising “emergent properties” and “chaotic” outcomes.


A system is an abstraction we make from such a real machine.

It is a complex type composed of simpler types that characterize features of the machine.

Ashby 1: We convert the continuous to the discrete

Klaus Krippendorff (a student of Ashby) wrote as follows:

"Differences do not exist in nature. They result from someone drawing distinctions and noticing their effects.”

“Bateson's ‘recognizable change’ [is] something that can be recognised and observed."


To distinguish two things or states is to imply a noticeable, potentially measurable, difference between them.


Ashby wrote:

2/1. “The most fundamental concept in cybernetics is that of ‘difference’, either that two things are recognisably different, or that one thing has changed with time…

We assume change occurs by a measurable jump.”


In other words, we:

·       divide continuous space or material into discrete entities

·       divide continuous time or inputs into discrete events

·       divide continuously varying qualities or attributes into discrete quantities or enumerable amounts

·       divide continuously changing entities into discrete generations.
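These divisions can be sketched in code. The Python fragment below (the temperature thresholds and state names are illustrative assumptions of my own, not Ashby’s) discretizes a continuous stream of readings, so that only a measurable jump between discrete states counts as a recognizable change.

```python
# A minimal sketch of converting the continuous to the discrete.
# The thresholds and state names here are illustrative assumptions.

def discretize(reading: float) -> str:
    """Map a continuous temperature reading to a discrete state."""
    if reading < 36.0:
        return "cold"
    elif reading <= 37.5:
        return "normal"
    else:
        return "hot"

# A continuous stream of readings becomes a sequence of discrete states;
# only "measurable jumps" between states count as recognizable change.
readings = [36.4, 36.6, 35.1, 38.2]
states = [discretize(r) for r in readings]
changes = [(a, b) for a, b in zip(states, states[1:]) if a != b]
print(states)   # ['normal', 'normal', 'cold', 'hot']
print(changes)  # [('normal', 'cold'), ('cold', 'hot')]
```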


Should we ever need to do the reverse, we can do it.

“As a simple trick, the discrete can often be carried over into the continuous, in a way suitable for practical purposes ...” Ashby

It turns out that converting the discrete to the continuous can be problematic.

It can lead to errors in plotting the trajectory of a continuous system’s state change over time.

Such “lines of behavior” are of interest to those who use Forrester’s system dynamics and draw causal loop diagrams.


However, the systems of interest here are inherently discrete.

Our interest is in modelling social and business systems, human and computer activity systems.

We model these systems verbally, using words and graphical symbols that serve as words.

We describe social and business systems in terms of discrete entities and events, which have properties or variables.

We describe discrete actors, discrete activities and discrete state variable values.

Ashby 2: Observers create and use abstract systems to represent regular behaviors

The notion that a system is a "perspective" of an entity is deeply embedded in the history of systems thinking.

Yet today, equating entities with systems (one to one) is still the most common mistake in systems thinking discussion.


Peter Checkland promoted a “soft systems methodology”.

His premise was that different observers have different world views.

They may perceive different systems, some in conflict, in any one human organization or other entity.


Checkland’s Soft Systems Methodology

World views

<create and use>                        <represent>

Stakeholders <observe and envisage> Human organizations


Russell Ackoff, a writer on management science, spoke of abstract and concrete systems.

An abstract system is a description or model of how an entity behaves, or should behave.

A concrete system is any entity that conforms well enough to an abstract system.


Ackoff’s system theory

Abstract systems

<create and use>                        <represent>

System thinkers <observe and envisage> Concrete systems


An abstract system does not have to be a perfect model of an entity’s behavior; only accurate enough to be useful.

We can test that an entity realises an abstract system - to the degree of accuracy we need for practical use.


W Ross Ashby, writing on cybernetics, distinguished entities from the abstract systems they realise. 

3/11 “At this point we must be clear about how a "system" is to be defined.

Our first impulse is to point at [some real-world entity] and to say "the system is that thing there".

This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems.

Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” (Ashby 1956)


In cybernetics, a system is an abstraction, a theory of how an entity behaves, or should behave.

Ashby’s system is a model of some regular behaviour; it represents any entity or “real machine” that performs as described in the model.


Ashby’s cybernetics

Systems

<create and use>                   <represent>

System thinkers <observe and envisage> Real machines


These papers take this triangular view of system theory as axiomatic.


3/1 “We are concerned in this book with those aspects of systems that are determinate—that follow regular and reproducible courses.

It is the determinateness that we shall study, not the material substance.”


We study what we observe to be some regular behaviour in the real world.

Consider a description or model of how a regular heartbeat should happen.

1.     The walls of the atria contract and force blood into the ventricles.

2.     The walls of the ventricles contract and force blood out of the heart into the lungs and body.

3.     The atria fill with blood and the cycle begins again.


Now consider the beating of your own heart as a concrete realization of that abstract system.

Note that your real-world heart does more than realize that simple cyclical process.

It is important to understand that the relationship between abstract and concrete systems is many-to-many.


Many real-world entities can realise one system

In other words, one abstract system may be realised by countless physical entities.

E.g. The game of poker may be realised by countless card schools.


Many systems can be abstracted from one real world entity

Krippendorff wrote:

“Ashby insisted anything can afford multiple descriptions.”


In other words, one physical entity may act as many concrete systems.

E.g. One card school may play poker, or play whist or share a pizza.


Ashby wrote:

“Since different systems may be abstracted from the same real thing, a statement that is true of one may be false of another.

… there can be no such thing as the unique behavior of a very large system [meaning, a real thing], apart from a given observer.

… there can be as many systems as observers

… some so different as to be incompatible.

… studying only carefully selected aspects of things is simply what is always done in practice.”


As an example of incompatible abstractions, consider the modelling of light as waves and as particles.

Or perhaps the definition of two business goals - to increase profit margin and to increase turnover.

Ashby 3: A system applies a set of rules to a set of variables

Krippendorff wrote:

“What we know of a system always is an ‘observer’s digest’.”

"It is important to stress Ashby defined a system not as something that exists in nature.

A system consisted of a set of variables chosen for attention and relationships between these variables, established by observation, experimentation, or design."


General system theory (after von Bertalanffy and others) does not sharply distinguish abstract systems from concrete systems in the natural world.

By contrast, Ashby’s system is what an observer perceives to be the regular behavior of a material entity.

With some interest in mind, the observer picks out some variables that characterise the entity, and behaviors or rules that affect the values of those variables.


Cybernetics defines discrete objects in terms of variables, and discrete operations in terms of rules that change the values of variables.

Ashby wrote:

2/5. “every real machine embodies no less than an infinite number of variables, most of which must of necessity be ignored.”


Ashby’s “real machine” can be any material entity that behaves in a deterministic way.

In other words, the next state the machine will move to is determined by its internal state and its external environment.

(Where humans are actors in a machine, they have choices we’ll return to later.)


“Our starting point is the idea, much more than a century old, that a machine, in given conditions and at a given internal state, always goes to a particular state.”

·       “A variable is a measurable quantity that has a value.”

·       “The state of the system is the set of values that the variables have.”

·       “A system is any set of variables which he [observer] selects from those available on the real machine.” (1956)


Consider your heart as a machine that realizes the abstract regular heart beat cycle.

Your heart embodies or manifests state variables (e.g. atria volume), and can be seen by an observer as giving values to those variables (e.g. full or empty).

It is also infinitely more than a realization of that system, or any other system we could conceive it as realizing.


Ashby’s observer characterizes an entity as a machine that applies a set of rules to a set of variables.

One might perhaps apply this idea to an entity that can be modelled as a Turing Machine, which has an unbounded tape and so infinitely many possible states.

But every social and business entity of interest to us here can be modelled as a Finite State Machine.

Meaning it (or our interest in it) can be modelled as having a finite set of variables, each with a finite set of values.
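As a sketch of the idea, the Python fragment below models a hypothetical order process as a finite state machine: a finite set of state values, and rules that determine the next state from the current state and a discrete input event. The order example and its state names are illustrative assumptions, not from Ashby.

```python
# A minimal finite state machine: a finite set of state values, and rules
# that determine the next state from the current state and an input event.
# The "order" example and its states are illustrative assumptions.

TRANSITIONS = {
    ("open", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "closed",
}

def next_state(state: str, event: str) -> str:
    # The machine is determinate: given a state and an input event,
    # the next state is fixed by the rules; unknown events change nothing.
    return TRANSITIONS.get((state, event), state)

state = "open"
for event in ["pay", "ship", "deliver"]:
    state = next_state(state, event)
print(state)  # closed
```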


Aside: more ideas from “Design for a Brain”

19/2. Each variable [numeric quantity] is a function of time.

2/9. The state of a system at a given instant is the set of numerical values which its variables have at that instant.

2/10. A line of behavior is specified by a succession of states and the time-intervals between them. The first state is the initial state.

2/13. A system's field is the phase-space containing all the lines of behavior found by releasing the system from all possible initial states.

3/8. Given an organism, its environment is defined as those variables whose changes affect the organism, and those variables which are changed by the organism's behavior.

3/11. The organism affects the environment, and the environment affects the organism; such a system is said to have feedback.

5/3. A form of behavior is adaptive if it maintains the essential variables within physiological limits.”


Ashby 4: The adaptation ambiguity: distinguish changing a system’s state from changing its rules

System theory shares with biology the idea that a machine can change in two ways.

It can change state over time, and it can change its nature from one generation to the next.


“5/7. the word 'adaptation' is commonly used in two senses which refer to different processes.

[Consider body temperature maintenance as a characteristic or mechanism]

The change from a species too primitive to show such a reaction to a species which had developed the reaction as a characteristic.

A member of the species, born with the mechanism, is subjected to cold and changes from not-shivering to shivering.”


Remember Ashby’s most fundamental concept is that of “difference”.

Having converted the continuous to the discrete, we can recognize:

a)     A discrete change in the values of variables – e.g. from not shivering to shivering.

b)     A new rule that governs the values of a variable – e.g. if temperature drops then start shivering.


In short, we can recognize discrete changes a) to a system’s state and b) to a system’s organization.

The first, changing the state of a system, changing the values of its variables, is a relatively straightforward idea.

The second, re-organizing the system itself, making a new system or system generation, is a more challenging idea.
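The shivering example can be sketched in Python. The temperature threshold and rule names below are illustrative assumptions; the point is that (a) changes the value of a variable under fixed rules, while (b) replaces a rule, producing a new generation of the system.

```python
# Sketch distinguishing (a) a change of state under fixed rules from
# (b) a change of the rules themselves (a new "generation" of the system).
# The threshold and rule names are illustrative assumptions.

def no_reaction(temperature: float) -> str:
    # A species too primitive to show the reaction: no rule for shivering.
    return "not-shivering"

def shiver_when_cold(temperature: float) -> str:
    # The new characteristic: if temperature drops, then start shivering.
    return "shivering" if temperature < 10.0 else "not-shivering"

rule = no_reaction
assert rule(5.0) == "not-shivering"   # cold input, but no state change possible

rule = shiver_when_cold               # (b) the rule itself changes: a new generation
assert rule(5.0) == "shivering"       # (a) now a cold input changes the state
```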


Consider observing the metamorphosis of a caterpillar into a butterfly.

You may choose to see this as one organism that realises two systems, one replacing the other.


Consider using a computer application, such as a word processor.

Use the same word processor on a different computer, you still see the system as the same.

Use a different word processor on the same computer, and you see the system has changed.


Consider observing a Universal Turing Machine, which contains a description of its behavior, ‘the program’.

Change that program, and you’ll observe the behavior of the machine has changed.

Your system of interest has changed – has acquired a new characteristic - has moved to a new generation.

Ashby 5: The self-organization ambiguity: distinguish assembling parts from improving


On emergent behavior

Emergent properties can usually be both explained and predicted.


To Ashby and others, emergent properties are ones observable in a "whole" that arise from a coupling of interacting “parts”, but are not observable in a part on its own.

E.g. the V shape in a flight of geese. Or the flexing of the Tacoma Narrows bridge in the wind.

Both examples may appear mysterious at first sight, but when you know the explanation, they are predictable.


The term behavior is ambiguous; it can mean the activity of a system, or the trajectory of its state change over time.

Every such “line of behavior” - which may be shown on a graph - emerges from the activity of the system. 

Whether the line is stable (perhaps a flat line or a sine wave) or chaotic (perhaps a line that crashes) it emerges from a system's initial state and rules.

The shape of the line is explainable, even if it is not predictable - as chaos theory tells us.


A different situation worth mentioning is the evolution of system rules from one generation to the next.

It seems Bertalanffy didn't distinguish this from other kinds of emergence, but Ashby did.


On self-stabilizing

In 1947, Ashby wrote of self-organization as self-stabilizing.

“The cybernetician William Ross Ashby formulated the original principle of “self-organization”

It states that any deterministic dynamic system automatically evolves towards a state of equilibrium that can be described in terms of an attractor in a basin of surrounding states.” Wikipedia

Surely not every deterministic dynamic system moves to a stable state?

What about a rocket?


“Once there, the further evolution of the system is constrained to remain in the attractor.

This constraint implies a form of mutual dependency or coordination between its constituent components or subsystems.

In Ashby's terms, each subsystem has adapted to the environment formed by all other subsystems.” Wikipedia

I read Ashby to say the attractor lies within the finite state space of the system.


In 1947, it appears Ashby did not consider reorganizing the state space itself.

Having thought about it, in 1962 he concluded:

“The use of the phrase [self-organization] tends to perpetuate a fundamentally confused and inconsistent way of looking at the subject”

To make sense of the term, he divided self-organization into two kinds.


Self-connecting: assembling parts into a whole

Think of a goose joining a flight of geese, and following the rules that keep the flight in a V shape.

Or a crystal growing by accretion in a liquid.

Ashby spoke of this as: “Changes from parts separated to parts joined” “Self-connecting” “Perfectly straightforward”.


Self-improving: changing from bad to good

Think of a system mutating so it can respond to new environmental conditions.

Ashby spoke of this as: “Changing from a bad way of behaving to a good.”

“No machine can be self-organizing in this sense.”

“The appearance of being self-organizing can be given only by the machine S being coupled to another machine x.

Then the part S can be self-organizing within the whole S+x.”


Example 1: Ashby’s homeostat

Goldstein described how Ashby built a homeostat to illustrate inter-generational reorganization.

If the environmental conditions changed and shifted variables beyond the range safe for the lower machine to function, then a new higher level of the machine was activated.

On observing a changed environment, the “higher” machine:

1.     took as input the rules applied by the homeostat to its variables

2.     changed those rules at random

3.     triggered the homeostat to realise the new rules.
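The three steps above can be sketched in Python. The feedback rule and its numbers are illustrative assumptions, not a model of Ashby’s actual hardware: when the lower machine leaves the essential variable outside its safe range, the higher machine changes the rule at random and tries again.

```python
import random

# A rough sketch of the two-level arrangement: when an essential variable
# drifts outside its safe range, a higher machine rewires the lower
# machine's rule at random and re-runs it. All numbers are illustrative.

random.seed(0)
SAFE_RANGE = (-1.0, 1.0)

def run_lower(gain: float, value: float, steps: int = 20) -> float:
    # The lower machine repeatedly applies its rule (here, a feedback gain).
    for _ in range(steps):
        value = gain * value
    return value

def higher_machine(value: float) -> float:
    # Steps 1-3: take the current rule, change it at random, and trigger
    # the lower machine, until the essential variable is back in range.
    gain = 2.0  # an initial rule that happens to be destabilizing
    while not (SAFE_RANGE[0] <= run_lower(gain, value) <= SAFE_RANGE[1]):
        gain = random.uniform(-2.0, 2.0)  # step 2: random change of rule
    return gain

stable_gain = higher_machine(0.5)
assert SAFE_RANGE[0] <= run_lower(stable_gain, 0.5) <= SAFE_RANGE[1]
```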


Example 2: Biological evolution

“[Consider] the process of evolution through natural selection.

The function-rule (survival of the fittest) is fixed”. (Ashby’s Design for a Brain 1/9)


The rules of organic living are encoded in an organism's DNA.

What process embodies the “survival of the fittest” rule?

An organism’s systems are changed from one generation to the next by the fertilisation of an egg.

The process of sexual reproduction starts with two organisms that succeed in mating; it:

1.     takes as input the DNA of the male and female parents

2.     transforms that input into new DNA

3.     triggers a new organism to realise the new abstract system.

Ashby 6: The law of requisite variety

"The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate".  Ashby 1956

Conversely, the larger the variety of state variable changes an entity can experience, the more different actions a controller needs to maintain the entity in stable state.


This law is not counted here as fundamental to our interest in social and business systems.

Partly because maintaining homeostatic stability is peripheral to that interest.


Variety as a measure of complexity

“A system's variety V measures the number of possible states it can exhibit.” Ashby 1956

Ashby equated the complexity of a system with its variety.

But variety is not an absolute measure of a thing; it relates to a controller’s interest in that thing.

The number of possible states a target system has is relative to the interest the controller has.

Different controllers - with different interests - perceive a target as having different varieties.


The law of requisite variety: “only variety can absorb variety”

"The larger the variety of actions available to a control system, the larger the variety of perturbations it is able to compensate".  Ashby 1956

Perturbations are changes in the values of a target system’s variables that need to be regulated by the controller.
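A minimal sketch of the law, under an arithmetic model of my own (not Ashby’s table-based formulation): a controller can hold the outcome constant only if it has a distinct compensating action for each distinct perturbation.

```python
# "Only variety can absorb variety": to hold the outcome constant, the
# controller needs a compensating action for each distinct perturbation.
# This additive model is an illustrative assumption, not Ashby's own.

perturbations = [-2, -1, 0, 1, 2]        # variety of disturbances: 5

def outcomes(actions):
    # For each perturbation, the controller picks the action that best
    # cancels it; the result is the set of outcomes it cannot eliminate.
    return {p + min(actions, key=lambda a: abs(p + a)) for p in perturbations}

rich_controller = [-2, -1, 0, 1, 2]      # variety 5: enough to absorb all 5
poor_controller = [-1, 0, 1]             # variety 3: some disturbance gets through

print(outcomes(rich_controller))  # {0}  -- full regulation
print(outcomes(poor_controller))  # residual variety remains in the outcomes
```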


Ashby’s law of requisite variety tells us something useful; but it can be misinterpreted and there is more to know.

Three things the law does not say:

·       The law does not say variety is all a controller needs

·       The law does not mean a controller knows all the state variables of the thing controlled

·       The law does not imply a controller should maximise its variety.


Other things to know include:

·       One real world entity may be regulated by several controllers

·       Controllers may cooperate to regulate one entity

·       Controllers may compete to regulate one entity

·       The real world can be seen as a mess of more and less tightly coupled systems.

Obstacles to applying cybernetic ideas to management science

Applying Ashby’s ideas to social and business systems is far from straightforward, for reasons indicated here.


The real world is a mess of more and less tightly coupled systems

"Managers do not solve problems, they manage messes." Russell L. Ackoff

The business managed by a manager is not a system.

The business is countless systems: as many systems as observers can a) define, and b) demonstrate that the business conforms to.

Moreover, much of what happens in a business is not systematic – and need not be systematic.


"There are no separate systems. The world is a continuum.

Where to draw a boundary around a system depends on the purpose of the discussion." ~ Donella Meadows

We can divide the world into distinct systems, nested systems, overlapping systems and competing systems.

The fuzziness of social network and system boundaries is a challenge for any attempt by sociologists to treat a social group as a system.


Managing complexity (meaning variety)

Remember Ashby’s law of requisite variety is that “only variety can absorb variety”.

When a controller has insufficient variety to control a target system, then design options include:

·       Amplifying (increasing) the variety in the control/management system

·       Attenuating (reducing) the variety in the target/operational system.


Amplifying and attenuating variety were major themes in Beer's work in management (the profession of control, as he called it).

However, you cannot control a target that is in reality controlled by other levers, outside your control.

And there are other design options for managing complexity, including:

·       Improve information flow qualities (speed, throughput, integrity etc.)

·       Tighten the control-target coupling

·       Remove any rival controllers that compete with the one we wish to succeed

·       Decentralise control: divide regulation responsibilities between maximally autonomous controllers (E.g. kidney and liver)

·       Decentralise what is managed: divide the target system into maximally autonomous subsystems (E.g. “microservices”).


Read Beer’s ideas on the application of cybernetics (after Ashby and others) to management science.


Clemson’s principles of management cybernetics

Clemson has identified 22 cybernetic principles he considers applicable to the management of human organizations.

Some of them are mentioned in the next paper on “Complexity Science”.

Many of the remainder are questionable, as discussed separately in this slide show.

Applying Ashby’s self-organization principle to management science

The principle is that to re-organize one system, we must couple it to another system.

The convention here is to call the first the “lower system” (or S) and call the second “a higher process or meta system” (or M).


Considering a variety of examples, we can generalise Ashby’s second kind of self-organization as “rule-setting reorganization”.

Generally speaking, to change a real machine that realizes S, M must:

1.     take as input the abstract system (S version N) realized by the real machine

2.     transform that input into a new abstract system (S version N+1)

3.     trigger a real machine to realise the new abstract system.


How can M change a system from bad to good (as Ashby discussed)?

M may iteratively make random changes to a system, favouring ones that lead an entity to behave better (in some pre-defined way).

M may have the ability to observe the abstract system, understand it and invent changes that are likely to make it better.
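The first of these strategies can be sketched in Python. The “rule” here is a single parameter and the measure of “better” is an illustrative assumption: M keeps a random change only when it improves the behavior by that pre-defined measure.

```python
import random

# A sketch of M iteratively making random changes to S's rule, favouring
# changes that make the behavior better by a pre-defined measure.
# The one-parameter rule and the measure of "better" are illustrative.

random.seed(1)
TARGET = 0.0  # the pre-defined notion of "good": output close to zero

def behave(rule: float) -> float:
    return rule * 10.0          # S's behavior under the current rule

def badness(rule: float) -> float:
    return abs(behave(rule) - TARGET)

rule = 5.0                      # S version 1: a "bad" rule
for _ in range(1000):
    candidate = rule + random.uniform(-0.5, 0.5)   # a random change
    if badness(candidate) < badness(rule):          # favour improvements
        rule = candidate                            # S version N+1 replaces N

assert badness(rule) < badness(5.0)  # the system has changed from bad to good
```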


The latter implies M is intelligent, probably human.

People are not robotic automatons who inexorably and helplessly play their roles in an orderly system.

They are intelligent and creative; they have the ability to redefine the rules of a system they play a role in.


Including the observer in the system

Krippendorff wrote:

 “Although second-order cybernetics (Foerster et al. 1974) was not known at this time, Ashby included himself as experimenter or designer of systems he was investigating.”


Ashby’s observer can not only observe but also change the variables and rules of a system.

Where humans are actors in a determinate machine, they have three choices.

·       They can follow the rules of the system.

·       They can ignore or break the rules – which may be recognised in the system as an “exception”.

·       They can change the rules of the system itself.


In the third case, the actor plays different roles in different systems.

They act in the lower system (S) as an actor in its regular behavior, and in a higher system (M) as observer of the lower system (S).