Copyright 2014-17 Graham Berrisford.
One of about 300 papers at http://avancier.website. Last updated 03/08/2017 21:56
Discussions of “complex systems” are often unclear or confused about the meaning of “complex” and/or “system”.
This paper explores differences in what people think these terms mean.
Ashby said that: “a system is any set of variables which he [the observer] selects”.
“A system's variety V measures the number of possible states it can exhibit, and corresponds to the number of independent binary variables.
But in general, the variables used to describe a system are neither binary nor independent.”
Ashby proposed complexity = variety = the number of possible states a system can exhibit.
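To illustrate why this measure explodes, assume (for simplicity, and contrary to Ashby’s caveat above) a system described by n independent binary variables; its variety is then 2^n. A minimal sketch in Python:

```python
# Toy sketch of Ashby's variety measure: for n independent binary
# variables, the number of possible states is 2**n.
# (Ashby notes real variables are neither binary nor independent.)

def variety(n_binary_variables: int) -> int:
    """Number of possible states for n independent binary variables."""
    return 2 ** n_binary_variables

print(variety(10))   # 1024 states
print(variety(300))  # more states than atoms in the observable universe
```

Even a modest description of 300 binary variables yields a state count (about 2 x 10^90) that no one could enumerate.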
There are several difficulties with this definition of complexity:
First, the measure is incalculably large for any system with a non-trivial state.
Second, many other measures have been proposed: e.g. McCabe’s procedural complexity, and other measures below.
Here, we might propose: complexity = the number of event/state combinations * the average procedural complexity of the rules applied to them.
There is no agreement as to which complexity measure is right or best.
Third, does the measure apply to the control system, the target system, or the real world entity of which only selected variables are controlled?
“every real machine embodies no less than an infinite number of variables, all but a few of which must of necessity be ignored.” (Ashby in “Design for a Brain”)
Consider the complexity of a tennis match in the real world.
Does it include the structures and movements of atomic particles in the players, balls, court surface etc.? Or the thought processes of its players?
A moment’s thought is enough to conclude you can never measure the complexity of a real-world entity or behavior per se.
You can only measure it with respect to your chosen description of its elements (roles, actors, processes, variables, whatever).
And then, only measure it at the level of abstraction at which you choose to describe those elements and their inter-relationships.
From the viewpoint of a describer, a system is only as complex as its description.
From the viewpoint of a control system, a target system is only as complex as those variables the control system monitors and controls.
Here are a few complexity measures I have picked up over the years.
· Procedural (cyclomatic) complexity = number of decisions + 1 (McCabe).
· Structural complexity = variety = number of states (Ashby).
· Structural complexity = inter-component dependencies (Sessions).
· Maximum structural complexity = components * (components - 1) / 2 (after Brooks).
· System complexity = the number of variables, and the number and nature of their interactions, required to explain the properties and behavior of that system (Ackoff, 2003).
· Complexification formula: For every 25% increase in the complexity of the problem space, there is a 100% increase in the complexity of the solution space (Glass’s Law).
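As a minimal sketch of how three of the formulas above compute (the input counts are hypothetical):

```python
# Hypothetical sketch of three complexity formulas from the list above.

def cyclomatic_complexity(decision_count: int) -> int:
    """McCabe: number of decisions + 1."""
    return decision_count + 1

def max_structural_complexity(components: int) -> int:
    """After Brooks: maximum pairwise dependencies = n(n-1)/2."""
    return components * (components - 1) // 2

def behavioral_complexity(event_state_combinations: int,
                          avg_procedural_complexity: float) -> float:
    """Event/state combinations * average procedural complexity of rules."""
    return event_state_combinations * avg_procedural_complexity

print(cyclomatic_complexity(4))        # 5
print(max_structural_complexity(10))   # 45
print(behavioral_complexity(20, 5.0))  # 100.0
```

Note that the three functions give numbers on unrelated scales; that incommensurability is itself part of the problem discussed here.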
I have proposed two complexity measures:
· Structural complexity = relationships between component types / component types.
· Behavioral complexity = the number of event/state combinations * the average procedural complexity of the rules applied to them.
But there are many more (at least 40) possible measures of complexity.
"The diversity of measures that have been proposed indicates that the notions of complexity that we're trying to get at have many different interacting dimensions and probably can’t be captured by a single measurement scale” ("Complexity: A Guided Tour" Chapter 7 "Defining and Measuring Complexity" Melanie Mitchell.)
Some assume that if a system behaves unpredictably, it must be complex. Not so.
Very simple systems can be unpredictable; very complex systems can be predictable.
E.g. A system with just two element types (wolves and sheep) and simple rules can behave unpredictably or chaotically.
The behavior of an individual actor (e.g. a wolf) in response to an event may be deterministic, predictable from its current state.
Yet at a macro level, the volumes of populations (wolf packs and sheep flocks) may fluctuate in what seems a random or chaotic manner.
Multiple actions and interactions between individual actors at a micro-level may lead to unpredictable outcomes at the macro level.
Populations may remain stable for a while, then boom or bust unexpectedly.
For more, read Modelling a continuously varying system using System Dynamics.
A system dynamics model (or agent-based modelling) may be used to predict how interacting populations will change in the real world.
You can test the model by running it over time, then comparing its results with the reality it is supposed to model.
To act upon the predictions of a model before you have tested it would be risky, since you have no idea what stocks and flows you have missed.
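The wolves-and-sheep example above can be sketched as a minimal stock-and-flow model. The update rules below are Lotka-Volterra style, and all rate coefficients and starting populations are hypothetical:

```python
# Minimal stock-and-flow sketch of the wolves-and-sheep example.
# The rules are simple and deterministic, yet the population levels
# rise and fall in boom-and-bust cycles.
# All rate coefficients and initial populations are hypothetical.

def step(sheep: float, wolves: float, dt: float = 0.01) -> tuple[float, float]:
    birth, predation, feeding, death = 1.0, 0.1, 0.02, 0.5
    d_sheep = birth * sheep - predation * sheep * wolves   # born minus eaten
    d_wolves = feeding * sheep * wolves - death * wolves   # fed minus dying
    return sheep + d_sheep * dt, wolves + d_wolves * dt

sheep, wolves = 40.0, 9.0
for _ in range(2000):              # simulate 20 time units
    sheep, wolves = step(sheep, wolves)
print(round(sheep, 1), round(wolves, 1))
```

The model is tiny (two stocks, four coefficients), which illustrates the point: a simple describable system can produce fluctuating, hard-to-predict population levels.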
Ashby might have said we need to think less about structural resources (Men, Materials, Machines and Money) and more about the regular behaviors that we require.
Beer said that thinking about the four Ms is inadequate, that we need to think about managing complexity.
Knowing that Ashby’s measure of complexity is incalculable for all business systems of interest, Beer said that relative statements are valid.
How to assess relative complexity? How to objectively compare the relative complexity of two real entities, machines, societies or businesses?
You could do as follows.
1. Choose your measure of complexity.
2. Identify the elements to be described (roles, actors, processes, variables, whatever).
3. Describe two real-world entities in terms of those elements.
4. Demonstrate your two descriptions have been made to the same level of abstraction.
5. Demonstrate by testing that the two real world entities behave according to your two descriptions.
6. Then apply the complexity measure and compare.
However, the process looks fanciful and impractical, leaving us with complexity as a subjective assessment.
People glibly assert that a system is complex, without reference to any description or complexity measure.
A human contains a brain which contains a brain cell. Which is the most complex?
Some say the human brain is the most complex thing in the universe.
Yet a brain has a simple structure (forebrain, midbrain and hindbrain) at the highest level of description.
You may well envisage a description of the brain’s structure at the level of neurons (perhaps 10 billion) and connections (perhaps 10 trillion).
But that is to consider only the structure of the brain; what about its behavior?
Perhaps we should measure a brain’s complexity by the variety and success of the mental models it makes?
A system can exist in two forms - which may be called “abstract” and “concrete”.
A concrete system realises (or instantiates) an abstract system description (or type).
· The Solar System: the abstract system description is the “Solar System” as described by an astronomer; the concrete system realisation is several large physical bodies orbiting the sun.
· A Beethoven symphony: the abstract system description is a musical score as written by Beethoven; the concrete system realisation is performances that instantiate the symphony in physical sound waves.
You can only measure a thing’s complexity against a description of it; and every reality can be described in many ways.
So every reality has as many complexities as there are descriptions of it.
And then, that number can be multiplied by the number of different complexity measures applied to the description.
The complexity of any real-world behavior/process or structure/component depends on three things.
1. The boundary you choose to draw around it in time, space or logic.
2. The kind and level of detail you choose to include in your description.
3. The measure (of many complexity measures) you choose to apply to those details.
Sitting at your laptop, sending an email seems a simple one-step action.
None of us can begin to imagine the full sequence of actions involved.
The message bounces from node to node across the internet.
One might guess millions of software actions are performed between sender and receiver.
Perhaps billions of detectable physical events at the level of atoms, electrons and radio waves?
The same physical reality (atoms, electrons, light waves, sound waves) underpins biological and psychological processes.
The physical reality of any machine, organism or society is infinitely complex.
Whether you see it as simple or complex depends on the level of detail in your description of it.
“At this point we must be clear about how a "system" is to be defined.
Our first impulse is to point at [some entity] and to say "the system is that thing there".
This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems.
Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.
What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” (Ashby in “Introduction to Cybernetics”).
The phrase "complex system" is used in curious ways.
For example, to label a system dynamics model that leads to a so-called "chaotic" outcome, but where the system itself is simple.
Or social entities where there is very little by way of a "system", and the remainder is irregular, not repeated and unrepeatable (not a system at all).
Again, terms mean different things to different people.
There are profound terminology clashes between general system theory and socio-cultural systems thinking.
General System Theory:
· Activities and rules are complex; actors are required to perform them.
· The system is self-sustaining and/or maintains its own state, according to feedback from entities and events in its environment.
· The system is a collection of repeated or repeatable processes.
Socio-cultural systems thinking:
· Actors are complex; activities and rules are lightly prescribed, if at all.
· Actors are self-directing; they can change their roles, rules and goals.
· The system is an ever-evolving social entity.
In general system theory, cybernetics and system dynamics, a system is a collection of repeated or repeatable processes.
To some socio-cultural systems thinkers, a system is a collection of actors in a social entity, who may act and interact as they choose.
In general system theory, a system is self-sustaining and/or maintains its own state; it adapts according to feedback from entities and events in its environment.
To socio-cultural systems thinkers, a system is self-organising; it evolves as actors change their roles, rules and goals, and even the goals of the system.
In general system theory, a system is complex if the describable system is complex.
To socio-cultural systems thinkers, a system is complex if the reality behind it is complex, beyond the describable system.
For example, if its actors are intelligent, and can change the aims, properties, rules and roles of the group they participate in.
In so far as a social entity is stable and demonstrably repeats behaviors, it is well-called a system.
If its aims, properties, rules and roles are in flux, then there is a complex reality, but little or no system.
And at the extreme, a social entity is not a system at all in the general system theory sense.
It is merely a named entity - a group of actors who do whatever they like.
Some definitions of complexity are mathematical, applied in computing and taken for granted by enterprise architects.
By contrast, some social systems thinkers speak of “complexity theory” with a very different meaning, as in this source.
That source asserts that complex systems have the four characteristics below.
· Identity: meaning a set of core characteristics that identify the system, and remain when it evolves or changes.
· Homeostasis: meaning the system maintains its relative internal stability.
· Permeability: meaning the system interacts with its environment.
· Self-organization: meaning adaptive behaviors emerge in response to changes in the environment.
The first three are irrelevant, since identity, homeostasis and permeability are found in the simplest of mechanical control systems.
So, we are left with self-organisation as the defining feature of a complex system.
Another source aligns self-organisation with complex adaptive systems, and pitches them halfway between chaos and order:
· Chaotic system: unconstrained; there is no system
· Complex adaptive / self-organising system: constrained by some rules, but actors behave outside those rules or change the rules
· Ordered system: fully constrained to follow the rules of the system.
So it seems a “complex adaptive system” is halfway between an ordered system and a chaotic one.
How does a general system theorist see such a system?
Is it complex? It is an infinitely complex physical phenomenon. But what can be described as a system is simple.
Is it adaptive? It evolves. But that is very different from adapting in the homeostatic sense.
Is it self-organising? It is self-directing. But that is very different from self-sustaining in the autopoietic biological sense.
Is it a system? It barely registers as a system to a general system theorist; it is rather more an unfolding process.
How can a "social system" be defined in a way compatible with general system theory?
A physical social entity is a set of actors who communicate with each other.
To paraphrase Ashby:
Our first impulse is to point at a social entity and to say "the system is that group of people there".
This method, however, has a fundamental disadvantage: every social entity can act as several social systems.
Its actors can play unrelated roles in (say) a football team and a choir.
Any suggestion that we can measure all the characteristics of a social entity is unrealistic, and actually the attempt is never made.
Instead, we pick out and study roles and rules that are relevant to some interest already given.
We define a logical social system as a set of interrelated roles and rules.
And then, we can measure the complexity of a particular social entity in terms of those roles and rules.
A complexity measure must assume a system is bounded.
You must exclude entities and activities outside the system, in its environment.
E.g. in measuring the complexity of a retail shop, you ought to ignore
the remote payment card systems that enable payment card transactions.
A complexity measure must define the atomicity of components.
You must exclude the internals of components you consider to be atomic.
E.g. to measure the complexity of a human organisation, you ignore the internal biochemistry of the humans.
Given a railway network, you ignore the internal complexity of switching systems, and railway carriages.
A complexity measure likely excludes pre-defined components.
You probably ought to exclude the internals of generic components you can plug in.
E.g. to measure the complexity of a clock, you’ll probably ignore the internal complexity of the replaceable battery.
A measure of one view’s complexity may hide complexity in another view.
You have to consider what kinds of complexity matter to you, since there are infinite ways to juggle internal design elements and trade complexity in one area for simplicity in another.
E.g. reducing the complexity of individual components may increase the complexity of inter-component communication.
The more abstract the description, the simpler the system appears.
That implies you ought to multiply the measure of description complexity by a measure of the abstraction gap between the description and the operational system.
How to measure the gap? Unfortunately, there is no rule governing how far a description abstracts from reality.
Internal complication and external complexity.
The internal complication of parts and rules is one thing.
Ignoring these, you might measure the amount and complexity of input-output transformations made.
This involves counting and assessing the atomic inputs and outputs.
A technique called function point analysis considers individual data movements made by information systems.
Is there any similar technique for other kinds of system?
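As a hypothetical sketch of the counting idea (much simplified, and not the official function point method; the movement kinds and weights below are illustrative assumptions):

```python
# Hypothetical, much-simplified sketch of counting input-output
# transformations, loosely in the spirit of function point analysis.
# The movement kinds and weights are illustrative, not official.

WEIGHTS = {"input": 4, "output": 5, "inquiry": 4}

def unadjusted_size(data_movements: list[str]) -> int:
    """Sum the weights of each atomic data movement."""
    return sum(WEIGHTS[kind] for kind in data_movements)

# A toy 'place order' transaction: two inputs, one output, one inquiry.
print(unadjusted_size(["input", "input", "output", "inquiry"]))  # 17
```

One could imagine an analogous scheme for other kinds of system, counting and weighting the atomic inputs and outputs crossing the system boundary.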
It seems pretty obvious to suggest as follows.
A simple system has a simple internal structure and/or simple behaviors, with little variety and few inter-component dependencies.
A complex system has a complicated internal structure and/or complicated behaviors, with much variety and many inter-component dependencies.
But it is impossible to answer questions about complexity without answering questions about abstraction and measurement.
There are two completely different and irreconcilable views of complexity:
· The complexity of what is in a system description – without reference to an operational system.
· The complexity of what is in an operational system – with reference to a system description.
Systems that are complex in description
To a system architect or designer, a system is as complex as its system description is complex – no more and no less.
Software systems are surely - by some distance - the most complex systems we describe.
Even then, the programmers never consider the internal complexity of the operating system or other platform technologies.
Human activity systems may also be complex in system description.
Even then, the system designers never consider the internal complexity of the human brain.
And whatever happens in the operation of a system (beyond the system description) cannot be counted in a measure of its complexity.
Systems that are complex in operation
To a sociologist, social systems are complex because they include complex biological entities.
Biological entities are the most complex machines we know.
Surely the most complex operational machine we know of is the human brain.
So of course, any operational system that relies on human abilities may be regarded as complex.
Yet in description, social systems can be very simple. E.g. “you scratch my back, I scratch yours”.
If you tell people to do whatever it takes to meet a goal, then you are setting in motion a highly complex operational system, and yet your system description is almost trivial.
Much happens in a social organisation (beneficial to its members or owners) that is never mentioned in any system description.
This behavior is outside the "system" as Ashby defines it, and its complexity cannot be measured.
Complexity measurement is challenging.
Ashby’s preferred measure of complexity (variety, or number of possible states) is incalculably large for any significant system.
And it is only one of very many possible measures.
And there is a bigger problem: you cannot measure the complexity of an operational system per se.
The full complexity of an operational system includes the thought processes of any human participants.
And it extends down to the structures and movements of atomic particles.
You cannot begin to think about the complexity of a system beyond what you can observe or describe of it.
The complexity of an operational system is mostly hidden from you.
In observing a social entity, you completely ignore the staggeringly complex internal biochemistry of the participants.
In observing a computer system, you see nothing of the software or network complexity.
It is inconceivable you could observe, describe or measure an operational system at the level of a genome, brain cell, or atomic particle.
You can only consider a system at a much higher level of abstraction.
You can only measure what is in a system description.
You can measure the complexity of a system at the level of abstraction at which you describe the components and process steps of the operational system.
But the gap between your system description and the operational system will be wide.
The level of abstraction in a system description is a matter of choice.
And so, its complexity is also a matter of choice.
Clearly, there is immense complexity in the operational system, most of which lies in the heads of the individual participants.
But what is a social system beyond what a sociologist describes of it?
Our systems descriptions are simpler than our operational systems.
(Sadly, we often omit mention of things that turn out to be important to systems in operation.)
One difference between sociology and mechanical engineering is the size of the description-reality gap.
The gap is larger in social systems.
The architects of human systems can get away with high level abstraction and sloppy system description.
The human actors will compensate by making intelligent judgements about the right thing to do, or indeed by changing the system’s rules on the fly.
Various forces (external and internal) drive the people working within the system to change it.
After a period of system operation, the system is usually significantly different from the initial system description.
The architects of deterministic systems cannot get away with it.
How would cybernetics guru Ross Ashby speak of what social system thinkers call a "complex adaptive system"?
He might well call it a “simple evolutionary unfolding process”.
The terminology clash is so profound it seems wiser to separate socio-cultural systems thinking from general system theory than to pretend they are readily generalised as “systems thinking”.
Architects cannot describe a chaotic human system, because there is no describable system, there is only a named collection of human actors.
(Though the way actors interact and behave might be susceptible to some kind of analysis, statistical or other.)
EA frameworks propose maintaining several levels of abstraction in enterprise description.
So, the enterprise can be seen as simple or complex, depending on which level of description you look at.
All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.
If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.