Copyright 2014-17 Graham Berrisford.
One of about 300 papers at http://avancier.website. Last updated 22/01/2017 13:30
Discussions of “complex systems” are often unclear or confused about the meaning of “complex” and/or “system”.
This paper explores differences in what people think these terms mean.
A system can be characterised as parts that interact in regular, repeated or repeatable behaviors.
Such a system can exist in two forms - which may be called “abstract” and “concrete”.
A concrete system realises (or instantiates) an abstract system description (or type).
The Solar System:
· Abstract system description: the “Solar System” as described by an astronomer.
· Concrete system realisation: several large physical bodies orbiting the sun.
A Beethoven symphony:
· Abstract system description: a musical score as written by Beethoven.
· Concrete system realisation: performances that instantiate the symphony in physical sound waves.
There is no way to measure the complexity of a concrete system directly.
“every real machine embodies no less than an infinite number of variables, all but a few of which must of necessity be ignored.”
(Ashby “Design for a Brain”)
You can only measure its complexity against a description of it.
And every reality can be described in many ways.
So every reality has as many complexities as there are descriptions of it.
And then, that number can be multiplied by the number of different complexity measures applied to the description.
In short, complexity is a subjective measure.
The complexity of any real-world behavior/process or structure/component depends on three things.
1. The boundary you choose to draw around it in time, space or logic.
2. The kind and level of detail you choose to include in your description.
3. The measure (of many complexity measures) you choose to apply to those details.
Sitting at your laptop, sending an email seems a simple, one-step action.
Yet none of us can begin to imagine the full sequence of actions actually involved.
The message bounces from node to node across the internet.
One might guess millions of software actions are performed between sender and receiver.
Perhaps billions of detectable physical events at the level of atoms, electrons and radio waves?
The same physical reality (atoms, electrons, light waves, sound waves) underpins biological and psychological processes.
The physical reality of any machine, organism or society is infinitely complex.
Whether you see it as simple or complex depends on the level of detail in your description of it.
“At this point we must be clear about how a "system" is to be defined.
Our first impulse is to point at [some entity] and to say "the system is that thing there".
This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems.
Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.
What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.”
(Ashby in “Introduction to Cybernetics”).
What makes a system complex is far from agreed.
Some assume that if a system behaves unpredictably, it must be complex. Not so.
Very simple systems can be unpredictable. Very complex systems can be predictable.
A system with just two element types (say wolves and sheep) and simple rules can behave unpredictably or chaotically.
The behavior of an individual actor (wolf or sheep) in response to an event may be deterministic, predictable from its current state.
Yet at a macro level, the volumes of populations (wolf packs and sheep flocks) may fluctuate in what seems a random or chaotic manner.
Multiple actions and interactions between individual actors at a micro-level may lead to unpredictable outcomes at the macro level.
Populations may remain stable for a while, then boom or bust unexpectedly.
A system dynamics model (or agent-based modelling) may be used to predict how populations will change in the real world.
You can test the model by running it over time, then comparing its results with the reality it is supposed to model.
To act upon the predictions of a model before you have tested it would be risky, since you have no idea what stocks and flows you have missed.
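The wolf/sheep dynamics above can be sketched as a minimal stock-and-flow model. The rate equations are the classic Lotka-Volterra predator-prey equations; the parameter values and starting populations below are illustrative assumptions, not calibrated to any real population.

```python
# Minimal stock-and-flow sketch of a wolf/sheep model, using the classic
# Lotka-Volterra rate equations. All parameters are illustrative assumptions.

def simulate(steps=2000, dt=0.01, sheep=20.0, wolves=5.0,
             birth=1.1, predation=0.4, efficiency=0.1, death=0.4):
    history = []
    for _ in range(steps):
        # Flows: sheep births minus predation; wolf growth from predation minus deaths
        d_sheep = (birth * sheep - predation * sheep * wolves) * dt
        d_wolves = (efficiency * predation * sheep * wolves - death * wolves) * dt
        sheep += d_sheep
        wolves += d_wolves
        history.append((sheep, wolves))
    return history

history = simulate()
sheep_series = [s for s, _ in history]
print(f"sheep range: {min(sheep_series):.1f} to {max(sheep_series):.1f}")
```

Each actor-level rule here is deterministic, yet the population totals cycle through booms and busts. Comparing such a model's curves against observed counts is the test the text describes; acting on an untested model risks missing stocks and flows.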
People glibly assert that a system is complex, without reference to any description or complexity measure.
The only way to measure a thing’s complexity is by reference to a description of its elements.
A human contains a brain which contains a brain cell. Which is the most complex?
Some say the human brain is the most complex thing in the universe.
Yet a brain has a simple structure (forebrain, midbrain and hindbrain) at the highest level of description.
You may well envisage a description of the brain’s structure at the level of neurons (perhaps 10 billion) and connections (perhaps 10 trillion).
But that is to consider only the structure of the brain; what about its behavior?
Perhaps we should measure a brain’s complexity by the variety and success of the mental models it makes?
To measure the complexity of an entity (machine, organism or society) you must
1. Choose the elements of interest (roles, actors, decisions, actions, variables?)
2. Describe the system’s elements of interest at the composition level of interest
3. Choose one of several complexity measures that could be applied to those elements
4. Count the relevant system elements and calculate the complexity measure.
Here are a few measures I have picked up over the years.
· Procedural (cyclomatic) complexity = number of decisions + 1 (McCabe)
· Structural complexity = variety = number of states (Ashby)
· Structural complexity = inter-component dependencies (Sessions)
· Structural complexity = relationship types / component types (my own measure)
· Maximum structural complexity = components * (components - 1) / 2 (after Brooks)
· System complexity = number of variables and the number and the nature of their interactions required to explain the properties and behavior of that system (Ackoff, 2003)
· Complexification formula: For every 25% increase in the complexity of the problem space, there is a 100% increase in the complexity of the solution space (Glass’s Law).
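As a sketch, three of the measures above can be applied to one small system description. The component names, dependencies and decision count below are invented; the point is that each measure yields a different number for the same description.

```python
# Applying three complexity measures to one invented system description.

components = ["UI", "API", "DB", "Cache"]
dependencies = [("UI", "API"), ("API", "DB"), ("API", "Cache")]
decisions = 4  # branch points counted in a process model (assumed)

# McCabe: cyclomatic complexity = number of decisions + 1
cyclomatic = decisions + 1

# Sessions-style: structural complexity = count of inter-component dependencies
structural = len(dependencies)

# After Brooks: maximum structural complexity = n * (n - 1) / 2
n = len(components)
max_structural = n * (n - 1) // 2

print(cyclomatic, structural, max_structural)  # 5 3 6
```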
There are many more (at least 40) possible measures of complexity.
"The diversity of measures that have been proposed indicates that the notions of complexity that we're trying to get at have many different interacting dimensions and probably can’t be captured by a single measurement scale”
"Complexity: A Guided Tour" Chapter 7 "Defining and Measuring Complexity" Melanie Mitchell.
The phrase "complex system" is used in curious ways.
For example, to label a system dynamics model that leads to a so-called "chaotic" outcome, but where the system itself is simple.
Or social entities in which there is very little by way of a "system", and the remainder is irregular, not repeated and unrepeatable (not a system at all).
Again, terms mean different things to different people.
There are profound terminology clashes between general system theory and socio-cultural systems thinking.
General System Theory:
· Activities and rules are complex; actors are required to perform them.
· The system is self-sustaining and/or maintains its own state, according to feedback from entities and events in its environment.
· A system is a collection of repeated or repeatable processes.
Socio-cultural systems thinking:
· Actors are complex; activities and rules are lightly prescribed, if at all.
· Actors are self-directing; they can change their roles, rules and goals.
· A system is an ever-evolving social group.
In general system theory, cybernetics and system dynamics, a system is a collection of repeated or repeatable processes.
To some socio-cultural systems thinkers, a system is a collection of actors in a social group, who may act and interact as they choose.
In general system theory, a system is self-sustaining and/or maintains its own state; it adapts according to feedback from entities and events in its environment.
To socio-cultural systems thinkers, a system is self-organising; it evolves as actors change their roles, rules and goals, and even the goals of the system.
In general system theory, a system is complex if the describable system is complex.
To socio-cultural systems thinkers, a system is complex if the reality is complex, beyond what is described as the system.
For example, if its actors are intelligent, and can change the aims, properties, rules and roles of the group they participate in.
In so far as a social group is stable and demonstrably repeats behaviors, it is well-called a system.
If its aims, properties, rules and roles are in flux, then there is a complex reality, but little or no system.
And at the extreme, a social group is not a system at all in the general system theory sense.
It is merely a named entity - group of actors who do whatever they like.
Some definitions of complexity are mathematical, applied in computing and taken for granted by enterprise architects.
By contrast, some social systems thinkers speak of “complexity theory” with a very different meaning, as in this source.
This paper asserts that complex systems have the four characteristics below.
· Identity: meaning a set of core characteristics that identify the system, and remain when it evolves or changes.
· Homeostasis: meaning the system maintains its relative internal stability.
· Permeability: meaning the system interacts with its environment.
· Self-organization: meaning adaptive behaviors emerge in response to changes in the environment.
The first three are irrelevant, since identity, homeostasis and permeability are found in the simplest of mechanical control systems.
So, we are left with self-organisation as the defining feature of a complex system.
Another source aligns self-organisation with complex adaptive systems, and pitches them halfway between chaos and order:
· Chaotic system: unconstrained; there is no system
· Complex adaptive / self-organising system: constrained by some rules, but actors behave outside those rules or change the rules
· Ordered system: fully constrained to follow the rules of the system.
So it seems a “complex adaptive system” is halfway between an ordered system and a chaotic one.
How does a general system theorist see such a system?
Is it complex? It is an infinitely complex physical phenomenon. But what can be described as a system is simple.
Is it adaptive? It evolves. But that is very different from adapting in the homeostatic sense.
Is it self-organising? It is self-directing. But that is very different from self-sustaining in the autopoietic biological sense.
Is it a system? It barely registers as a system to a general system theorist; it is rather more an unfolding process.
How can a "social system" be defined in way compatible with general system theory?
A physical social group is a set of actors who communicate with each other.
To paraphrase Ashby:
Our first impulse is to point at a social group and to say "the system is that group of people there".
This method, however, has a fundamental disadvantage: every social group can act as several social systems.
Its actors can play unrelated roles in (say) a football team and a choir.
Any suggestion that we can measure all the characteristics of a social group is unrealistic, and actually the attempt is never made.
Instead, we pick out and study roles and rules that are relevant to some interest already given.
We define a logical social system as a set of interrelated roles and rules.
And then, we can measure the complexity of a particular social group in terms of those roles and rules.
A complexity measure must assume a system is bounded.
You must exclude entities and activities outside the system, in its environment.
E.g. in measuring the complexity of a retail shop, you ought to ignore
the remote payment card systems that enable payment card transactions.
A complexity measure must define the atomicity of components.
You must exclude the internals of components you consider to be atomic.
E.g. to measure the complexity of a human organisation, you ignore the internal biochemistry of the humans.
Given a railway network, you ignore the internal complexity of switching systems, and railway carriages.
A complexity measure likely excludes pre-defined components.
You probably ought to exclude the internals of generic components you can plug in.
E.g. to measure the complexity of a clock, you’ll probably ignore the internal complexity of the replaceable battery.
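The effect of these boundary and atomicity choices can be shown with the Brooks-style measure n*(n-1)/2 listed earlier. The shop decompositions below are invented examples; the same shop gets two different scores depending on which parts are treated as atomic.

```python
# The Brooks-style measure, n*(n-1)/2, gives different numbers for the same
# shop depending on the atomicity chosen. Decompositions are invented examples.

def max_structural_complexity(n: int) -> int:
    return n * (n - 1) // 2

coarse = ["till", "stockroom", "payment-service"]   # till treated as atomic
fine = ["scanner", "display", "cash-drawer",        # till opened up into parts
        "stockroom", "payment-service"]

print(max_structural_complexity(len(coarse)))  # 3
print(max_structural_complexity(len(fine)))    # 10
```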
A measure of one view’s complexity may hide complexity in another view.
You have to consider what kinds of complexity matter to you, since there are infinite ways to juggle internal design elements and trade complexity in one area for simplicity in another.
For example, reducing complexity in one part of a design may increase the complexity of inter-component communication.
The more abstract the description, the simpler the system appears.
That implies you ought to multiply the measure of description complexity by a measure of the abstraction gap between the description and the operational system.
How to measure the gap? Unfortunately, there is no rule governing how far a description abstracts from reality.
Internal complication and external complexity.
The internal complication of parts and rules is one thing.
Ignoring these, you might measure the amount and complexity of input-output transformations made.
This involves counting and assessing the atomic inputs and outputs.
A technique called function point analysis considers individual data movements made by information systems.
Is there any similar technique for other kinds of system?
It seems obvious to suggest the following.
A simple system has a simple internal structure and/or simple behaviors, with little variety and few inter-component dependencies.
A complex system has a complicated internal structure and/or complicated behaviors, with much variety and many inter-component dependencies.
But it is impossible to answer questions about complexity without answering questions about abstraction and measurement.
There are two completely different and irreconcilable views of complexity
· The complexity of what is in a system description – without reference to an operational system
· The complexity of what is in an operational system – with reference to a system description
Systems that are complex in description
To a system architect or designer, a system is as complex as its system description is complex – no more and no less.
Software systems are surely - by some distance - the most complex systems we describe.
Even then, the programmers never consider the internal complexity of the operating system or other platform technologies.
Human activity systems may also be complex in system description.
Even then, the system designers never consider the internal complexity of the human brain.
And whatever happens in the operation of a system (beyond the system description) cannot be counted in a measure of its complexity.
Systems that are complex in operation
To a sociologist, social systems are complex because they include complex biological entities.
Biological entities are the most complex machines we know.
Surely the most complex operational machine we know of is the human brain.
So of course, any operational system that relies on human abilities may be regarded as complex.
Yet in description, social systems can be very simple. E.g. “you scratch my back, I scratch yours”.
If you tell people to do whatever it takes to meet a goal, then you are setting in motion a highly complex operational system, and yet your system description is almost trivial.
Much happens in a social organisation (beneficial to its members or owners) that is never mentioned in any system description.
This behavior is outside the "system" as Ashby defines it, and its complexity cannot be measured.
Complexity measurement is challenging.
Ashby’s preferred measure of complexity (variety, or number of possible states) is incalculably large for any significant system.
And it is only one of very many possible measures.
And there is a bigger problem: you cannot measure the complexity of an operational system per se.
The full complexity of an operational system includes the thought processes of any human participants, and it extends down to the structures and movements of atomic particles.
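A one-line sketch shows why Ashby's variety measure is incalculable for any significant system: even with every variable reduced to a binary, the state count grows as 2 to the power of the number of variables.

```python
# Variety (Ashby): number of possible states. For n independent binary
# variables that is 2**n, which outruns any practical count very quickly.

def variety(n_binary_variables: int) -> int:
    return 2 ** n_binary_variables

print(variety(10))   # 1024
print(variety(300))  # ~2e90: more states than atoms in the observable universe
```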
You cannot begin to think about the complexity of a system beyond what you can observe or describe of it.
The complexity of an operational system is mostly hidden from you.
In observing a social group, you completely ignore the staggeringly complex internal biochemistry of the participants.
In observing a computer system, you see nothing of the software or network complexity.
It is inconceivable you could observe, describe or measure an operational system at the level of a genome, brain cell, or atomic particle.
You can only consider a system at a much higher level of abstraction.
You can only measure what is in a system description.
You can measure the complexity of a system at the level of abstraction at which you describe the components and process steps of the operational system.
But the gap between your system description and the operational system will be wide.
The level of abstraction in a system description is a matter of choice.
And so, its complexity is also a matter of choice.
Clearly, there is immense complexity in the operational system, most of which lies in the heads of the individual participants.
But what is a social system beyond what a sociologist describes of it?
Our systems descriptions are simpler than our operational systems.
(Sadly, we often omit mention of things that turn out to be important to systems in operation.)
One difference between sociology and mechanical engineering is the size of the description-reality gap.
The gap is larger in social systems.
The architects of human systems can get away with high level abstraction and sloppy system description.
The human actors will compensate by making intelligent judgements about the right thing to do, or indeed by changing the system’s rules on the fly.
Various forces (external and internal) drive the people working within the system to change it.
After a period of system operation, the system is usually significantly different from the initial system description.
The architects of deterministic systems cannot get away with it.
How would cybernetics guru Ross Ashby speak of what social system thinkers call a "complex adaptive system"?
He might well call it a “simple evolutionary unfolding process”.
The terminology clash is so profound it seems wiser to separate socio-cultural systems thinking from general system theory than to pretend they are readily generalised as “systems thinking”.
Architects cannot describe a chaotic human system, because there is no describable system, there is only a named collection of human actors.
(Though the way actors interact and behave might be susceptible to some kind of analysis, statistical or other.)
EA frameworks propose maintaining several levels of abstraction in enterprise description.
So, the enterprise can be seen as simple or complex, depending on which level of description you look at.
All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.
If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.