What is a complex system?
On applying “complexity science” to “management science”
Copyright 2019 Graham Berrisford. One of more than 100 papers on the “System Theory” page at http://avancier.website. Last updated 02/12/2019 15:40
The earlier paper discussed Ashby’s ideas, including these three principles.
1. To make sense of things, we convert the continuous to the discrete.
2. Observers create and use abstract systems to represent regular behaviors.
3. A system applies a set of rules to a set of variables.
Today, there are software systems with hundreds or thousands of variables and rules.
By contrast, a simple system may be defined as one with relatively few variables and few rules.
There are simple games.
E.g. A game of checkers has 24 entities (each defined by three variables: row, column and status) and about 10 rules.
There are simple systems in biology too.
E.g. The pattern in a flight of geese.
E.g. The coupling between wolf and sheep populations in a causal loop.
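The checkers example above can be made concrete. The following is a minimal sketch (names and setup are my own illustration, not the author's): the whole game state is a small set of entities, each defined by just three variables.

```python
from dataclasses import dataclass

# A sketch of a "simple system": each checkers piece is defined
# by three variables - row, column and status.
@dataclass
class Piece:
    row: int       # 1..8
    column: int    # 1..8
    status: str    # "man", "king" or "captured"

# One player's 12 men start on the dark squares of rows 1-3.
dark = [(r, c) for r in (1, 2, 3) for c in range(1, 9) if (r + c) % 2 == 0]
black = [Piece(row=r, column=c, status="man") for r, c in dark]
```

The point is only that the state space of such a system is small and enumerable, in contrast to a software system with thousands of variables.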
What is a complex system? That is a challenging question.
Complexity science embraces the study of several phenomena listed below, along with the mathematics of these situations.
But the use of the term “complex” is curious, since there is no widely agreed measure of complexity.
And simple systems (of the kind above) can display the following properties discussed in complexity science.
Structural patterns: especially fractal ones
E.g. the pattern in a leaf.
Systems with non-linear or chaotic lines of behavior
A line of behavior is the trajectory by which a quantitative state changes over time (cf. a line on a graph).
E.g. A thermostat keeps the temperature of its environment stable – near to a straight line when plotted over time.
E.g. When coupled in a causal loop, wolf and sheep populations can have chaotically curved or jagged lines of behavior.
(Even simple systems can show non-linear and chaotic lines of behavior.
What makes one line of behavior more “complex” than another?)
Systems in which actors have a degree of autonomy
Autonomy can be interpreted in one or both of two ways.
First, that actors play roles in a system of their own accord, driven by internal rules, rather than being directed by others.
Second, that actors can join or leave the system, and exist independently of it.
(A so-called “complex system” may be composed of many actors, like the fish in a shoal.
Yet those actors may all play the same role and follow the same simple rules.)
Systems in which order emerges from disorder
E.g. The pattern in a flight of geese is used to illustrate how order can emerge from disorder.
The organization is created by the choreography of rule-using actors.
(Is that objectively “more complex” than an organization orchestrated by a “higher” actor?
Imagine the lead goose in a flight had to bark orders to direct the following geese to maintain their positions.
Would that system be simpler or more complex?)
The following sections discuss four somewhat inter-related domains of complexity science:
· NETWORKS (network types, structures, graph theory)
· NON-LINEARITY (feedback, instability, chaos)
· ADAPTATION (cybernetics, game theory, evolution)
· SELF-ORGANIZATION (emergence, synchronization, pattern formation).
Network structures are interesting but peripheral to the main purpose here.
You can of course have simple networks, structures and graphs, as well as complex ones.
The nodes (vertices) in a network may be passive elements (as posts in a fence) or interacting actors.
The nodes may be fixed in a network, or free to join and leave it.
Of interest here is the concept of a social network, which actors may join and leave.
Later, this will be distinguished from the concept of social system, in which actors play roles and follow rules.
In a “complex system” actors often interact in feedback loops to change a system’s state.
A line of behavior is the trajectory by which a quantitative state changes over time (cf. a line on a graph).
Every quantitative state variable has a line of behavior over time.
The value of a function acting on two or more inter-related state variables can also be plotted over time as a line of behavior.
Some discussion confuses linearity with stability, but as the sections below show, they are distinct concepts.
Linear increase or decrease
Exponential increase or decrease
Stable lines of behavior
Negative feedback is when increasing a quantitative variable leads to an effect that decreases it again (or vice-versa).
It tends to dampen variation from a starting point, and maintain the system in a stable state.
E.g. Decreasing the air temperature causes a thermostat to switch a heater on, thus keeping the temperature above a minimum.
AND Increasing the air temperature causes the thermostat to switch the heater off, thus keeping the temperature below a maximum.
If the line of behavior is flat, then it is linear.
If the line of behavior is wavy, oscillating around a norm, then strictly speaking it is non-linear – but it might still be called linear.
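The thermostat example can be sketched as a toy simulation (thresholds and step sizes are my own choices, not from the text): negative feedback switches the heater on below 19° and off above 21°, so the line of behavior oscillates around a norm instead of drifting away.

```python
# Toy negative feedback: decreasing temperature switches the heater on,
# increasing temperature switches it off, so the state stays in a band.
temp, heater = 15.0, False
trace = []
for _ in range(50):
    if temp < 19.0:
        heater = True             # too cold: switch the heater on
    elif temp > 21.0:
        heater = False            # too warm: switch it off
    temp += 0.5 if heater else -0.5
    trace.append(temp)
```

Plotted over time, `trace` is the wavy-but-stable line of behavior described above.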
Unstable lines of behavior
Positive feedback is when increasing/decreasing a quantitative variable leads to an effect that further increases/decreases it.
It tends to amplify variation from a starting point, and change the state of a system in an exponential curve.
E.g. Shrinking the area of the north pole ice cap increases the heat absorbed by the ocean.
AND increasing the heat absorbed by the ocean shrinks the area of the north pole ice cap.
If the line of behavior is a straight line, upward or downward, then it may be called linear.
If the line of behaviour is a curve, upward or downward, then it may be called non-linear.
In the material world, endless increase or decrease is unsustainable; the entity must crash, or flip to a new regime or configuration.
E.g. eventually, cooled water turns to ice, and heated water turns to gas.
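Positive feedback can be sketched just as simply (the growth factor and threshold are mine): each step amplifies the previous value, tracing an exponential line of behavior until a limit is hit and the run "crashes or flips" (here it simply stops).

```python
# Toy positive feedback: the effect further increases the cause,
# so the line of behavior curves upward exponentially until a threshold.
x = 1.0
line = [x]
while x < 1000.0:
    x *= 1.5        # each increase causes a further increase
    line.append(x)
```

The loop must terminate: in the material world, as the text notes, endless increase is unsustainable.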
Basins of stability
In some systems, a variable may be led by events to settle on one particular value.
A line of behavior that starts from any other variable value will slide towards that value, which appears to be its goal.
Some so-called complex systems have several basins of stability, separated by thresholds of instability.
A system “parked” on a ridge will “roll downhill” into the basin, being attracted to the nearest of several possible stable values.
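The "roll downhill" metaphor can be sketched numerically. Here is a toy model (the potential function is my own choice): V(x) = (x² - 1)² has two basins, with stable values at x = -1 and x = +1, separated by a ridge at x = 0; a state parked near the ridge slides into the nearest basin.

```python
# Toy basins of stability: repeatedly step downhill on the slope of
# V(x) = (x^2 - 1)^2 until the state settles at an attractor.
def settle(x, step=0.01, iters=10_000):
    for _ in range(iters):
        grad = 4 * x * (x * x - 1)   # slope dV/dx at the current state
        x -= step * grad             # roll a little downhill
    return round(x, 3)
```

Starting just right of the ridge settles at +1; just left of it, at -1 – the nearest of the possible stable values.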
Chaotic lines of behavior
A chaotic system’s line of behavior is not only non-linear, but also very sensitive to initial conditions.
E.g. Butterfly flaps wings > positive feedback > Tornado.
E.g. increasing the number of wolves decreases the population of edible sheep - which decreases the population of wolves.
AND decreasing the number of wolves increases the population of edible sheep - which increases the population of wolves.
This particular feedback loop may lead the two populations into a basin of stability.
But depending on the initial conditions, the populations may have chaotically curved or jagged lines of behavior.
And a chaotic line of behavior may end when one or the other population crashes to zero.
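The wolf-sheep loop can be sketched as a toy discrete model (the coefficients are my own, not a calibrated ecology): the coupled populations trace lines of behavior over time, and a small change in the starting state changes the whole trajectory – the sensitivity to initial conditions described above.

```python
# Toy wolf-sheep causal loop: sheep breed, wolves eat sheep,
# eating feeds wolves, wolves die off.
def run(wolves, sheep, steps=200):
    history = []
    for _ in range(steps):
        births      = 0.1 * sheep              # sheep reproduce
        predation   = 0.002 * sheep * wolves   # wolves eat sheep
        wolf_growth = 0.0005 * sheep * wolves  # eating feeds wolves
        wolf_deaths = 0.05 * wolves
        sheep  = max(sheep + births - predation, 0.0)
        wolves = max(wolves + wolf_growth - wolf_deaths, 0.0)
        history.append((round(wolves, 1), round(sheep, 1)))
    return history

a = run(wolves=40.0, sheep=100.0)
b = run(wolves=40.0, sheep=101.0)   # one extra sheep at the start
```

With some parameters and starting points the populations settle into a basin of stability; with others they oscillate or crash.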
Dynamic systems change state over time.
Adaptation means that a dynamic system changes in response to changes in another system or its environment.
There are simple self-regulating systems (e.g. predator-prey populations).
Ashby distinguished two kinds of adaptation, which may be broadly characterised as:
· Changing a system’s state: rule-using homeostatic change (as in a heating system) and progressive change (as in the development of leaf pattern).
· Changing a system’s rules: rule-setting reorganization (as in biological evolution and business management).
It is important to draw the following distinctions.
Reorganization differs from state change
Rule-using state change.
Advancing the state variables of a system.
· E.g. dealing a card in a game of poker.
· E.g. heating water to boiling point.
State change can be sudden, dramatic, catastrophic, chaotic.
Rule-setting reorganization of behavior
Changing the variables or rules of a system.
· E.g. changing the rules of poker.
· E.g. inter-generation mutation in biology.
System reorganization is often gradual and incremental.
Reorganization differs from regulation
A controller monitors the current state of a target system.
And maintains or advances that state.
· E.g. maintains temperature
· E.g. deals cards in a game of poker.
Rule-setting reorganization of behavior
A “higher” process or meta system responds to long-term outcomes of a system.
It changes the rules of a “lower” system.
· E.g. creates new DNA
· E.g. redefines the rules of poker.
What prompts rule-setting reorganization?
· the results or outcomes of the system in operation?
· the state the system has reached?
· or both?
The concept of self-organization has appeared in many guises, for example:
· In chemistry, the self-assembly of a crystal in a liquid by accretion.
· In economics, the emergence of order in a free market as price changes influence supply and demand (after Hayek).
· In biology, the emergence of complex life forms from the process of evolution (after Darwin).
· In mechanics, the maintenance of homeostasis (after Wiener and Ashby).
· In chaos theory, arriving at an island of predictability in a sea of chaotic unpredictability.
However, we’d like here to define what self-organization means in general.
von Foerster’s view
The environment of a system can act as a source of perturbations or “noise” to the system of interest.
“The cybernetician Heinz von Foerster formulated the principle of ‘order from noise’.
It notes that self-organization is facilitated by random perturbations (‘noise’) that let the system explore a variety of states in its state space.
This increases the chance that the system will arrive into the basin of a ‘strong’ or ‘deep’ attractor, from which it then quickly enters the attractor itself.” (Wikipedia)
To begin with, von Foerster spoke of a system exploring states in its own finite state space.
He may not, then, have considered reorganizing the state space or rules of the system itself.
Later, in his “second-order cybernetics”, he encouraged consideration of how the observer can affect a system.
And a major concern of social systems thinkers is the re-organization of how a social organization works – of its rules.
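Von Foerster's "order from noise" can be sketched as a toy search (the landscape and probabilities are my own illustration): a state randomly explores three states whose basins have different depths; because noise occasionally lets it leave a shallow basin, it ends up spending most of its time in the deepest attractor.

```python
import random

# Toy "order from noise": noise lets the state escape shallow basins,
# so it settles in the deepest ("strongest") attractor.
random.seed(42)
DEPTH = {0: 1, 1: 0, 2: 3}           # state 2 is the deepest basin

state = 0                            # start in a shallow basin
visits = [0, 0, 0]
for _ in range(3000):
    candidate = random.choice([0, 1, 2])
    # always accept an equally deep or deeper basin;
    # noise: accept a worse state 10% of the time
    if DEPTH[candidate] >= DEPTH[state] or random.random() < 0.1:
        state = candidate
    visits[state] += 1
```

Counting visits shows the state overwhelmingly occupies the deep attractor, exactly because random perturbations let it explore the state space.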
Prigogine’s thermodynamic view
“The thermodynamicist Ilya Prigogine formulated a principle as ‘order through fluctuations’ or ‘order out of chaos’.
It is applied in the method of simulated annealing for problem solving and machine learning.” (Wikipedia)
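Simulated annealing itself can be sketched in a few lines (the objective function and cooling schedule below are my own choices): at high "temperature" the search accepts many uphill fluctuations; as it cools, it settles into a deep minimum of a bumpy landscape – order through fluctuations.

```python
import math, random

# Minimal simulated annealing on a landscape with several basins.
def f(x):
    return x * x + 10 * math.sin(3 * x)

random.seed(1)
x, temperature = 5.0, 2.0
for _ in range(20_000):
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    # accept improvements always; accept worse moves with a
    # probability that shrinks as the temperature falls
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature = max(temperature * 0.9995, 0.001)
```

Without the early fluctuations, the search would be trapped in whichever basin it started in.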
Data science view
What is now called “data science” includes:
· data mining – in which algorithms find previously unknown facts in large data sets.
· machine learning – in which algorithms find “generalizable predictive patterns”.
If I understand correctly, the algorithms discover some order where none was previously recognised.
They may discover new things, but they don’t modify the way they discover things.
Maturana’s biological view
The biologist Maturana differentiated the “structure” and “organization” of an organism.
“By “organization” Maturana refers to the relations between components that give a system its identity, that make it a member of a particular type.
Thus, if the organization of a system changes, so does its identity.”
(John Mingers, Self-Producing Systems: Implications and Applications of Autopoiesis. Contemporary Systems Thinking. New York: Plenum Press, 1995)
Maturana defined “organization” as a “network of processes”.
“Maturana stated he would "never use the notion of self-organization, because it cannot be the case... it is impossible.
That is, if the organization of a thing changes, the thing changes”.
(Maturana, H. (1987). Everything is said by an observer. In Gaia, a Way of Knowing, edited by W. Thompson, Lindisfarne Press, Great Barrington, MA, pp. 65-82, p. 71.)
Forrester’s System Dynamics view
In System Dynamics, if you change the stocks or flows of a system, then you change the identity of the system.
You create a new system, or system generation N+1.
As in Cybernetics, the system is an abstraction, it cannot change itself.
Ashby’s cybernetic view
Like Maturana, Ashby was sceptical about self-organization.
"One of Ashby’s goals was to repudiate that interpretation of the notion of self-organization, one commonly held to this day,
that a machine or living organism could change its own organization (or, in his phraseology, the functional mappings).” Goldstein
“The use of the phrase [self-organization] tends to perpetuate a fundamentally confused and inconsistent way of looking at the subject”
To make sense of the term, he divided self-organization into two kinds, self-connecting and self-improving.
Connecting parts in a whole
Think of a goose joining a flight of geese, and following the rules that keep the flight in a V shape.
Ashby spoke of this as “changes from parts separated to parts joined”, calling it “self-connecting” and “perfectly straightforward”.
Improving from bad to good
Think of a system mutating so it can respond to new environmental conditions.
Ashby spoke of this as: “Changing from a bad way of behaving to a good.”
“No machine can be self-organizing in this sense.”
“The appearance of being self-organizing can be given only by the machine S being coupled to another machine x.
Then the part S can be self-organizing within the whole S+x.”
Imagine a software system that overwrites its own script.
Ashby proposes this rule-setting reorganization requires at least two subsystems:
· S, a subsystem that does its job using the rules defined now
· x, a subsystem that determines and makes a change to the rules of S.
x might change S by
· sending a message for S to read and change its own text
· stopping S version 1, “reaching in” to change its text, and restarting S version 2.
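Ashby's S + x coupling can be sketched directly (the class names are mine): S alone cannot change its own rules, but within the whole S + x, the subsystem x can "reach in", change the rule of S, and let S run again as generation N+1.

```python
# Toy S + x: S applies its rules; x re-sets them.
class S:
    def __init__(self, rule):
        self.rule = rule            # the rules S uses "as defined now"
    def step(self, state):
        return self.rule(state)     # rule-using behavior

class X:
    def reorganize(self, s, new_rule):
        s.rule = new_rule           # rule-setting reorganization of S

s = S(rule=lambda n: n + 1)
before = s.step(10)                 # generation N behavior

X().reorganize(s, new_rule=lambda n: n * 2)
after = s.step(10)                  # generation N+1 behavior
```

The reorganization is performed by x, not by S; only the whole S + x appears "self-organizing".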
Example 1: Ashby’s homeostat
Goldstein described how Ashby built a homeostat to illustrate inter-generational reorganization.
If the environmental conditions changed and shifted variables beyond the range safe for the lower machine to function, then a new higher level of the machine was activated.
On observing a changed environment, the “higher” machine
1. took as input the rules applied by the homeostat to its variables
2. changed those rules at random
3. triggered the homeostat to realise the new rules.
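The homeostat's behavior can be sketched as a toy (the coefficient, range and restart policy are my own simplification of Goldstein's description): the lower machine applies a feedback coefficient to its variable; whenever the variable drifts beyond the safe range, the higher machine selects a new coefficient at random and restarts the lower machine, until a rule is found that keeps the variable in range.

```python
import random

# Toy homeostat: a "higher" machine randomly re-sets the rule of a
# "lower" machine whenever its variable leaves the safe range.
random.seed(0)
k = 1.5                 # current rule: |k| >= 1 amplifies, so it is unstable
x = 1.0
for _ in range(100):
    x = k * x                           # lower machine applies its rule
    if abs(x) > 10.0:                   # variable beyond the safe range
        k = random.uniform(-2.0, 2.0)   # higher machine changes the rule at random
        x = 1.0                         # restart the lower machine
```

The higher machine neither understands nor designs the rules; it just keeps trying random ones until the lower machine behaves.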
Example 2: Biological evolution
“[Consider] the process of evolution through natural selection.
The function-rule (survival of the fittest) is fixed”. (Ashby’s Design for a Brain 1/9)
The rules of organic living are encoded in an organism's DNA.
What process embodies the “survival of the fittest” rule?
An organism’s systems are changed from one generation to the next by the fertilisation of an egg.
The process of sexual reproduction starts with two organisms that succeed in mating; it:
1. takes as input the DNA of the male and female parents
2. transforms that input into new DNA
3. triggers a new organism to realise the new abstract system.
Example 3: Software system maintenance
To maintain a software system, developers:
1. take as input the abstract system (code at generation N) realized by a computer
2. transform that input into a new abstract system (code at generation N+1)
3. trigger a computer to realise the new abstract system.
We can generalise Ashby’s two kinds of self-organization thus:
· Rule-using creation of structure or order.
· Rule-setting reorganization.
Some characterize complex systems by the emergence of order from disorder.
The change is from disorganized to organized.
Consider the fish in a shoal, the geese in a flight of geese, and the cells in a leaf.
When these actors interact, following simple rules, an orderly structural pattern emerges.
Note that emergence requires parts to interact (not simply to be accumulated in a pile).
To begin with, there is no order by way of centralized authority or control.
But there is order in the rules that actors (fish, geese, leaf cells) follow in their roles.
Order can be found in the inherited ability of the actors to follow rules.
The fish, the geese, the leaf cells, are robotic actors who follow simple rules.
The fish and the geese are "autonomous" in that they live independently - outside their roles in the system.
But note that they are not self-aware enough to change the rules of the system they play roles in.
Geese, fish and leaf cells are automatons that cannot change the structural patterns they are able to produce.
(Just as machine-learning algorithms don’t rewrite themselves.)
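The emergence of order from simple rule-following actors can be sketched as a bare-bones alignment model (a simplification in the spirit of flocking models; parameters are mine, and angle wrap-around is ignored for simplicity): each "goose" repeatedly turns a little towards the flock's mean heading, with no goose directing the others.

```python
import math, random

# Toy emergence: disordered headings converge to an orderly pattern
# through a simple local rule, with no central controller.
random.seed(3)
headings = [random.uniform(-math.pi, math.pi) for _ in range(20)]
initial_spread = max(headings) - min(headings)

for _ in range(200):
    mean = sum(headings) / len(headings)
    # each actor turns 20% of the way towards the mean heading
    headings = [h + 0.2 * (mean - h) for h in headings]
```

The order is not imposed by a "higher" actor; it is latent in the rule each robotic actor follows.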
Some take a thermodynamic view.
Within a system, the change from disorder to order is a change from high entropy to low entropy.
To form a structural pattern, autonomous actors must consume energy to follow rules.
The order grows in proportion to the amount of chaos dissipated.
But note Ashby said we usually take energy for granted, and thermodynamics plays little role in cybernetics.
The designers of social and business systems do not apply thermodynamics.
What about maintaining order in an already-orderly system? Is that ongoing “self-organization"?
Ashby’s principle is that to re-organize one system, we must couple it to another system.
The convention here is to call the first the “lower system” (or S) and call the second “a higher process or meta system” (or M).
Generally speaking, to change a real machine that realizes S, M must:
1. take as input the abstract system (S version N) realized by the real machine
2. transform that input into a new abstract system (S version N+1)
3. trigger a real machine to realise the new abstract system.
Actors playing roles in an entity that realizes system S are supposed to follow its rules.
Where can M find those rules?
· outside of the entity, in an abstract type (like the rules of poker)?
· copied into the entity (as a program is loaded into a computer)?
· embedded in the entity from its birth (as is DNA)?
How can M trigger an entity to use a new rule? M can:
· Create and send a message for the entity to read and change its own system
· Stop the entity, “reach in” to change its system, then restart the entity.
· Create a new copy of the entity, embed the rules in it, and initiate its operation.
The last is how biological reproduction and software engineering work.
People often use the term “complex” to describe social and business systems.
And it is widely agreed that today’s software systems are the most complex machines ever devised.
Yet these uses of the term “complex” have little to do with complexity science.
Structural patterns: especially fractal ones?
Clearly, social networks often organise themselves hierarchically.
And a business may be successively subdivided into smaller business units.
But I doubt any complexity scientist would regard the structure as fractal.
Systems with non-linear or chaotic lines of behavior?
Clearly, we can use variables to describe a social or business system.
But many of those variables are qualitative rather than quantitative.
And while business managers might predict a line of behavior over time (e.g. predict turnover), predicting lines of behavior is not what enterprise and business architects do.
Systems in which actors have a degree of autonomy?
Clearly, the customers, suppliers and employees of a business are autonomous actors.
But they are not the rule-following robots found in complexity science – like the geese in a flight of geese.
The responsibility for people’s psychological and sociological welfare (in and outside their roles) lies with business managers.
Where enterprise or business architects propose changes to roles that raise concerns about this, a business change function is likely to be employed.
Systems in which order emerges from disorder?
Clearly, social and business systems are orderly.
Enterprise and business architects design systems in which actors play roles and follow rules.
How does this relate to the concept of “self-organization” in complexity science?
Ashby, Forrester and Maturana each approached “self-organization” from their own direction.
They may all have accepted that a real machine can mutate continually – as an organism does during morphogenesis.
But each eschewed the idea of a continuously-changing system, because a system is defined by its rules.
Since it is an observer’s abstraction, Ashby’s system can only mutate in discrete steps.
M may iteratively make random changes to a system, favouring ones that lead an entity to behave better (in some pre-defined way).
M may have the ability to understand the abstract system and invent changes that are likely to make it better.
The latter implies M is intelligent.
“Although second-order cybernetics (Foerster et al. 1974) was not known at this time, Ashby included himself as experimenter or designer of systems he was investigating.”
Ashby’s observer may not only observe but also change the variables and rules of a system.
Where humans are actors in a system, they have three choices.
They can follow the rules of the system as defined.
They can ignore or break the rules – which may be recognised in the system as an “exception”.
Or they can change the rules of the system itself.
In the third case, the actor plays different roles in different systems – in a lower one as an actor, and in a higher one as an observer of the lower.
People are not robotic automatons, who inexorably and helplessly play their role in creating an orderly pattern.
They are intelligent and creative; they have the ability to redefine the rules of a system they play a role in.
When doing this, they act in a different role, in a (higher) meta system to that first (lower) system.
To be consistent with Ashby’s ideas, what sociologists and management scientists often call a system is better called something else.
It is a continuously-changing social entity, or social network, in which the actors may now and then modify the social systems they act in.
Now let us apply Ashby’s ideas to those two concepts.
It is important to recognise the relationship is many to many.
Many social networks can realise one social system
In other words, one abstract social system may be realised by countless concrete social networks.
E.g. The game of poker may be realised by countless card schools.
Many social systems can be abstracted from one social network
In other words, the actors in one concrete social network may act in many abstract social systems.
E.g. One card school may play poker, or play whist, or share a pizza.
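The many-to-many relationship can be sketched directly (all names below are my own illustration): one concrete social network (a card school) can realise several abstract social systems (poker, whist), and one abstract system can be realised by many networks.

```python
# Abstract social systems: sets of rules the actors follow.
poker = {"hand_size": 5, "wins": "best hand"}
whist = {"hand_size": 13, "wins": "most tricks"}

# Concrete social networks: sets of actors.
card_school_1 = ["Ann", "Bob", "Cas", "Dee"]
card_school_2 = ["Eve", "Fay", "Gus", "Hal"]

def realise(network, system):
    """Bind a network of actors to a system's roles and rules."""
    return {"players": network, "rules": system}

tonight   = realise(card_school_1, poker)   # one network, one system
tomorrow  = realise(card_school_1, whist)   # same network, another system
elsewhere = realise(card_school_2, poker)   # same system, another network
```

The network and the system only coincide for as long as the actors play the roles and follow the rules.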
To paraphrase Ashby:
“Since different social systems may be abstracted from one social network, a statement that is true of one may be false of another.
… there can be no such thing as the unique behavior of a social network, apart from a given observer.
… there can be as many social systems as observers
… some so different as to be incompatible
… studying only carefully selected aspects of a social network’s behavior is simply what is always done in practice.”
As an example of incompatible abstractions consider the definition of two business goals - to increase profit margin and to increase turnover.
Or the targeting of one customer by different sales departments, wanting the customer (with a limited budget) to purchase different products.
Or the clash between what managers do to keep wages low and what employees do to increase their wages.
Clemson has identified 22 cybernetic principles he considers applicable to the management of human organizations.
Some of them are mentioned above.
Many of the remainder are questionable, so they are discussed separately in this slide show.
1. System Holism
4. Complementarity Law
6. Gödel’s Incompleteness Theorem
7. Entropy – the Second Law of Thermodynamics
8, 9, 10. Redundancy of Information, Resources and Potential Command
11. Relaxation time
12, 13. Circular Causality
14. Feedback dominance theorem
16. Steady State
17. Requisite Variety Law
18. Conant-Ashby theorem
19. Self-Organizing Systems
20. Basins of Stability
22. Recursive System Theorem