Some system thinkers and their ideas

Copyright 2017 Graham Berrisford. One of several hundred papers at Last updated 30/05/2019 10:25


The systems of interest here are islands of orderly behavior in the ever-unfolding process of the universe.

The role of system architects is to observe baseline systems, envisage target systems, and describe both.

So, you might assume architects are taught about system theory and systems thinking; but this is far from the case.


This paper introduces some systems thinkers and their ideas.

For deeper explanation and exploration of the underlying principles and ideas, read Introducing system ideas.

That other paper discusses passive and activity systems, open and closed systems, abstract and concrete systems.

And explores the concepts of adaptation, atomicity, black box, chaos, complexity, coupling between systems, determinism, dynamics, emergent properties,

goal seeking, hierarchy and system of systems, holistic view, information, self-organisation, and unpredictability.



Preface

Classical cybernetics - the science of system control

General system theory – the cross-science notion of a system

System Dynamics – animating a system model to predict long-term outcomes

Soft systems thinking – loosening the system concept

Decision theory - or theory of choice

Social group as organism

Second-order cybernetics – undermining the system concept

Conclusions

Footnote 1: Two more sociological systems thinkers

Footnote 2: a couple of half-baked comparisons



Read the science and philosophy of systems thinking for a brief history of the universe.

The discussion there is of how humans came to conceptualise the world in terms of systems.


Willard Gibbs (1839 – 1903) was a scientist, instrumental in the development of chemistry into a science.

He defined a system as “a portion of the ... universe which we choose to separate in thought from the rest of the universe."

But if every entity we think of is a system, one for one, then the concept of a system has no value.

Let us use the term “entity” for any portion of the universe we can locate in space and time.

And look at how the term “system” has been given more interesting and useful meanings. 


Read thinkers who foreshadowed system theory for ideas attributed to the thinkers below.

·        Adam Smith (1723-1790) subdivision within and competition between systems.

·        Charles Darwin (1809-1882) system mutation by reproduction with modification.

·        Claude Bernard (1813-1878) homeostatic feedback loops.

·        Herbert Spencer (1820-1903) social systems as organic systems.

·        Vilfredo Pareto (1848-1923) the Pareto principle.

·        Emile Durkheim (1858-1917) collective consciousness and culture.

·        Gabriel Tarde (1843-1904) social system as emergent from the actions of individual actors.

·        Max Weber (1864-1920) a bureaucratic model – hierarchy, roles and rules.

·        Kurt Lewin (1890-1947) group dynamics.

·        Lawrence Joseph Henderson (1878-1942) meaning in communication.

·        Talcott Parsons (1902-1979) action theory.


These thinkers may not have spent much time analysing what the word “system” means.

The term may have been used by some to mean only "a group of interrelated things".

And not all of their ideas stand the test of time; however, they did influence 20th century systems thinkers.


When “system theory” became established as a topic in its own right is debatable.

Some suggest system theory is a branch of sociology.

“Systems theory, also called social systems theory...

Others suggest the reverse, that social systems thinking is a branch of general system theory.

“Though it grew out of organismic biology, general system theory soon branched into most of the humanities.” Laszlo and Krippner.


It is certainly true that the general concept of a system became a focus of attention after the second world war.

And there was a burst of systems thinking in the period 1945 to 1980.

Before we look at that, here are a few things to be borne in mind.


Inside a system there are actors and activities

Actors are structures that exist in space and perform activities.

Activities are behaviors that happen over time, and change the state of the system or something in its environment.

The actors and activities can be orderly in the sense that they conform to some roles and rules.

The roles and rules can be described, and the conformance of a system’s behavior to the rules can be assessed.


There are natural and designed (accidental and purposive) systems

A designed system is often described in terms of aims (motivations), activities (behaviors), actors and objects (structures).

It is created by intent, with aims in mind - though its outcomes may diverge from its aims.

By contrast, a natural system (e.g. the solar system) evolves without any intent.

Some refer to its outcomes (e.g. stable orbits) as its aims, but really they are unintended consequences.


Social networks and social systems are different things

The principles of general system theory and cybernetics can be applied to the roles and rules of a social system.

Note however that a social network (a collection of communicating actors) is a different concept.

One social network can realise several distinct social systems.

And one social system can be realised by several social networks.

Classical cybernetics - the science of system control

Cybernetics emerged out of efforts in the 1940s to understand the role of information in system control.

Thinkers in this domain include:

·        Norbert Wiener (1894-1964) the science of system control.

·        W. Ross Ashby (1903-1972) the law of requisite variety.

·        Alan Turing (1912 –1954) finite state machines and artificial intelligence.


Wiener introduced cybernetics as the science of biological and mechanical control systems.

He discussed how a controller directs a target system to maintain its state variables in a desired range.

E.g. A thermostat directs the actions of a heating system.

Ashby popularised the usage of the term 'cybernetics' to refer to self-regulating (rather than self-organising) systems.


The Ratio Club, which met from 1949 to 1958, was founded by neurologist John Bates to discuss cybernetics.

Its members included Alan Turing and Ross Ashby.

Many of its 21 members went on to become prominent scientists - neurobiologists, engineers, mathematicians and physicists.


Abstract and concrete systems

For some, to understand systems thinking requires making a paradigm shift as radical as is needed to understand Darwin’s evolution theory.

In discussion, people often refer to a named entity as a system.

They point at a machine or a business (like IBM) and say "the system is that thing there".

But with no further description, that is vacuous to the point of being meaningless.


Ashby, Ackoff, Checkland and other systems thinkers emphasise that a system is one perspective of a reality.

Ashby spoke of systems and real machines; Ackoff spoke of abstract systems and concrete systems; Checkland spoke of soft systems.

An abstract system (e.g. the game of “poker”) describes how some part of the real world behaves, or should behave.

A concrete system (e.g. a real world game of poker) is the realisation by a social network of an abstract system.


One abstract system may be realised by several real world entities (each of which might also do other things).

One real world entity can realise several abstract systems – each defined by taking a different perspective of the entity.

Unfortunately, we use the term system both for abstract systems (types) and for concrete/real world entities that instantiate them.

And so, in systems thinking discussions, we tend to confuse abstract systems with concrete things that realise them.


Systems as abstractions

Abstract systems (descriptions)

<create and use>                   <represent>

Systems thinkers <observe & envisage> Concrete systems (realisations)


A concrete system can be viewed as “the real machine”.

Or instead, in terms of the extent and accuracy to which that concrete structure realises the abstract system.


Information feedback loops

Another idea important to cybernetics is the concept of an information feedback loop.

Information is encoded in flows or messages that pass between systems.

A control system:

·        receives messages that describe the state of a target system

·        responds by sending messages to direct activities in the target system.


Information feedback loops are found in both organic and mechanical systems:

·        A missile guidance system senses spatial information and sends messages to direct the missile.

·        A brain holds a model of things in its environment, which an organism uses to manipulate those things.

·        A business database holds a model of business entities and events, which people use to monitor and direct those entities and events.

·        A software system holds a model of entities and events that it monitors and directs in its environment.
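The control loop these examples share can be sketched in code. The following is a minimal, hypothetical illustration (the thermostat rule and all numbers are invented, not taken from any of the systems above): a controller receives state messages from a target system and responds with directives that keep a state variable in a desired range.

```python
# A minimal sketch of a cybernetic feedback loop (all numbers invented):
# a thermostat (control system) receives state messages from a room
# (target system) and responds with directives that change its state.

def thermostat(temperature, low=19.0, high=21.0):
    """Decide a directive from a state message (the current temperature)."""
    if temperature < low:
        return "heat on"
    if temperature > high:
        return "heat off"
    return "no change"

def room(temperature, heating):
    """Target system: the room warms when heated, cools otherwise."""
    return temperature + (0.5 if heating else -0.3)

temperature, heating = 15.0, False
for _ in range(30):                      # run the loop for 30 time steps
    directive = thermostat(temperature)  # feedback: state -> directive
    if directive != "no change":
        heating = (directive == "heat on")
    temperature = room(temperature, heating)

assert 18.0 <= temperature <= 22.0       # state held in the desired range
```

The essential point is the loop itself: without the feedback of state messages, the room's temperature would drift without limit.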


Read Introduction to Cybernetics and Ashby’s ideas for more on cybernetics.

In the 1950s, Turing thought that computers would give us insights into how the brain works.

Here, the brain’s ability to typify and predict things is more interesting than its workings.

General system theory – the cross-science notion of a system

The 1954 meeting of the American Association for the Advancement of Science in California was notable.

Some people at that meeting conceived a society for the development of General System Theory.

They included:

·        Ludwig von Bertalanffy (1901-1972) the cross-science notion of a system

·        Kenneth Boulding (1910-1993) applying general system theory to “management science”.

·        Anatol Rapoport (1911 to 2007) wrote on game theory and social network analysis.


Bertalanffy was a biologist who promoted the idea of a general system theory.

His aim was to discover patterns and elucidate principles common to systems in every scientific discipline, at every level of nesting.

He looked for concepts and principles applicable to several disciplines or domains of knowledge rather than to one.

“There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind.”


Bertalanffy related system theory to communication of information between the parts of a system and across its boundary.

“Another development which is closely connected with system theory is that of… communication.

The general notion in communication theory is that of information.

A second central concept of the theory of communication and control is that of feedback.”

“Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow…”


Here are some ideas that Ashby’s cybernetics shares with wider cross-science general system theory.

·        Environment: the world outside the system of interest.

·        Boundary: a line (physical or logical) that separates a system from its environment, and encapsulates the system as an input-process-output black box.

·        Interface: a description of inputs and outputs that cross the system boundary.

·        Hierarchy: a system is composed from interacting subsystems; systems are recursively composable and decomposable.

·        Emergence: properties emerge, at a higher level of composition, from the coupling of lower-level subsystems.

·        Coupling: systems are coupled by input/output information.

·        State: the current structure or variables of a system, which change over time.

·        Determinism: a system processes its inputs in a regular way with respect to its memory/state data.
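The last two ideas – state and deterministic processing – can be illustrated with a small finite state machine of the kind Turing studied. The turnstile below is a standard textbook example, not taken from this paper: given the current state and an input event, the next state is fully determined.

```python
# A sketch of deterministic processing of inputs with respect to state:
# a turnstile finite state machine. Given the current state and an input
# event, the next state is fully determined by a fixed transition table.

TRANSITIONS = {
    ("locked",   "coin"): "unlocked",  # paying unlocks the turnstile
    ("locked",   "push"): "locked",    # pushing while locked does nothing
    ("unlocked", "push"): "locked",    # passing through locks it again
    ("unlocked", "coin"): "unlocked",  # extra coins are accepted but wasted
}

def run(inputs, state="locked"):
    """Process a sequence of input events, returning the state trajectory."""
    trajectory = [state]
    for event in inputs:
        state = TRANSITIONS[(state, event)]
        trajectory.append(state)
    return trajectory

# The same inputs from the same initial state always yield the same result.
print(run(["coin", "push", "push", "coin"]))
```
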


Cooperation and conflict between actors

When actors interact in a system, they don’t necessarily help each other.

They may cooperate, as within a football team or a business system.

They may compete, as in a game of chess, or a market; or hurt each other, as in a boxing match or a war.

Cooperation, conflict and conflict resolution is a focus of bio-mathematics and game theory.


Anatol Rapoport was a mathematical psychologist and biomathematician who made many contributions.

He pioneered the modeling of parasitism and symbiosis, and researched cybernetic theory.

This gave a conceptual basis for his work on conflict and cooperation in social groups.


Game theory: cooperation and conflict resolution

In the 1980s, Rapoport won a computer tournament designed to further understanding of the ways in which cooperation could emerge through evolution.

He was recognized for his contribution to world peace through nuclear conflict restraint via his game theoretic models of psychological conflict resolution.
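The tournament in question was Robert Axelrod's iterated prisoner's dilemma tournament, which Rapoport's "tit-for-tat" strategy won. A minimal sketch of the game (the strategies and round count here are illustrative; the payoff values are the ones conventionally used in such tournaments):

```python
# A sketch of the iterated prisoner's dilemma, in which Rapoport's
# tit-for-tat strategy (cooperate first, then copy the opponent's last
# move) famously prevailed. Payoffs are the conventional T=5, R=3, P=1, S=0.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Return the total scores of two strategies over repeated rounds."""
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history_a, history_b)
        b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

# Tit-for-tat is exploited only in the first round, then retaliates:
print(play(tit_for_tat, always_defect))   # -> (9, 14) over 10 rounds
print(play(tit_for_tat, tit_for_tat))     # -> (30, 30): mutual cooperation
```

Cooperation emerges because tit-for-tat does well against itself and cannot be exploited repeatedly, which is the insight the tournament was designed to explore.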


Social network analysis

Rapoport showed that one can measure flows through large networks.

This enables learning about the speed of the distribution of resources, including information, through a society, and what speeds or impedes these flows.
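As a crude, illustrative sketch of such measurement (not Rapoport's own method), breadth-first search over a small hypothetical acquaintance network counts the hops a message needs to reach each actor – a proxy for the speed of information spread, and for the bottlenecks that impede it.

```python
from collections import deque

# An illustrative sketch (not Rapoport's own method): breadth-first search
# measures how many "hops" a message needs to reach each actor in a network,
# a crude proxy for the speed of information spread through a society.

NETWORK = {                       # a hypothetical acquaintance network
    "ann": ["bob", "cat"],
    "bob": ["ann", "dan"],
    "cat": ["ann", "dan"],
    "dan": ["bob", "cat", "eve"],
    "eve": ["dan"],
}

def hops_from(source, network):
    """Return the minimum number of hops from source to each reachable actor."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        actor = queue.popleft()
        for neighbour in network[actor]:
            if neighbour not in hops:
                hops[neighbour] = hops[actor] + 1
                queue.append(neighbour)
    return hops

# eve can be reached from ann only through dan, and only in 3 hops:
print(hops_from("ann", NETWORK))
```
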


Read Introducing general system theory for more.

System Dynamics – animating a system model to predict long-term outcomes

System Dynamics was founded and first promoted by:

·        Jay Forrester (1918 to 2016) every system is a set of quantities that are related to each other.

·        Donella H. Meadows (1941 to 2001) resource use, environmental conservation and sustainability.


Read these papers for more.

·        System Dynamics

·        System state change by circular causal loops


Today, akin to system dynamics, there are agent-based approaches to the analysis of systems.
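A System Dynamics model represents a system as stocks, flows and causal feedback loops, and animates it over time. The sketch below is a minimal, invented example (all rates and numbers are assumptions, not from any published model): one stock (a population) changed by two flows (births and deaths), stepped through time to predict a long-term outcome.

```python
# A minimal System Dynamics sketch (all numbers invented for illustration):
# one stock (population) changed by two flows (births, deaths), stepped
# through time to predict a long-term outcome. The birth flow includes a
# balancing feedback loop: crowding near capacity suppresses births.

def simulate(population=1000.0, birth_rate=0.03, death_rate=0.01,
             capacity=10_000.0, years=200):
    """Step a stock-and-flow model forward and return the final stock."""
    for _ in range(years):
        births = birth_rate * population * (1 - population / capacity)
        deaths = death_rate * population
        population += births - deaths      # net flow into the stock
    return population

# The balancing loop makes the stock level off well below capacity.
print(round(simulate()))
```

Running the loop is the "animation": the long-term outcome (an equilibrium well below the nominal capacity) is not obvious from the rates alone, which is the point of the technique.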

Soft systems thinking – loosening the system concept

Bertalanffy didn’t like some directions in “the System Movement”, especially those specific to one science.

However, he saw the movement as “a fertile chaos” that generated many insights and inspirations.

General system theory doesn’t start from or depend on sociology, or analysis of human behavior.

However, it stimulated people to look afresh at social systems in general and business systems in particular.


The term “soft system” emerged in the 1970s.

Three well-known soft system thinkers are:

·        Peter Checkland (born 1930) the Soft Systems Methodology.

·        Stafford Beer (1926- 2002) management cybernetics and the Viable System Model.

·        Russell L Ackoff (1919-2009) human organisations as purposeful systems.


Checkland observed that the distinction between hard and soft system approaches is slippery.

The distinction is also questionable since, even to Ashby, every system is a “soft system” in the sense of being one perspective of the real world.

Read Soft Systems for more on Checkland.

Read Ackoff’s ideas on applying general system theory to management science.

Read Beer’s ideas on applying cybernetics to management science.

The main problem here (as is explained and resolved in other papers) is the confusion of a social group or network with a social system.

Decision theory - or theory of choice

Ackoff noted that the actors in a system described as per classical cybernetics act according to roles and rules.

By contrast, the actors in a human social system have free will and can act as they choose.

This prompts the question as to how people do, or should, make choices.

A biologist or psychologist may look to instinct, homeostasis, emotions or Maslow’s hierarchy of needs as the basis for making decisions.

A sociologist or mathematician may take a different perspective.

A sociological perspective

Herbert Alexander Simon (1916 to 2001) was a political scientist, economist, sociologist, psychologist, and computer scientist.

According to Wikipedia, Simon argued that fully rational decision making is rare: human decisions are based on a complex admixture of facts and values.

And decisions made by people as members of organizations are distinct from their personal decisions.

He proposed that understanding organizational behavior in humans depends on understanding the concepts of Authority, Loyalties and Identification.



A theory of why and how humans conform to group norms has to start from the evolutionary advantage it gives them.

Authority, Loyalties and Identification are matters for biology and management science rather than a general system theory.

A mathematical perspective

Anatol Rapoport (1911 to 2007) wrote on game theory and social network analysis.

Game theory is concerned with choices made by agents interacting with each other, as in a game of poker.

By contrast, decision theory is concerned with the choices made by agents regardless of such a structured interaction.


“Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.

Decision theory can be broken into two branches:

·        normative decision theory, which gives advice on how to make the best decisions, given a set of uncertain beliefs and a set of values; and

·        descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions.

Empirical applications of this theory are usually done with the help of statistical and econometric methods, especially via the so-called choice models, such as probit and logit models.


Advocates for the use of probability theory point to the work of Richard Threlkeld Cox, Bruno de Finetti, and complete class theorems, which relate rules to Bayesian procedures

Others maintain that probability is only one of many possible approaches to making choices, such as fuzzy logic, possibility theory, quantum cognition, Dempster–Shafer theory and info-gap decision theory.

And point to examples where alternative approaches have been implemented with apparent success.” Wikipedia 31/12/2018.
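For illustration, the core computation of the logit choice model mentioned in the quote can be sketched as follows (the options and utility values are invented): each option's choice probability is the exponential of its utility, normalised over all options.

```python
import math

# A sketch of the multinomial logit choice model: the probability of an
# agent choosing an option is the exponential of that option's utility,
# normalised over all options. Utilities here are invented for illustration.

def logit_probabilities(utilities):
    """Map each option's utility to its choice probability."""
    weights = {option: math.exp(u) for option, u in utilities.items()}
    total = sum(weights.values())
    return {option: w / total for option, w in weights.items()}

choices = logit_probabilities({"train": 1.0, "car": 0.5, "bus": -0.2})
print(choices)   # probabilities sum to 1; higher utility -> more likely
```

In descriptive work, the utilities are estimated from observed choices; the model itself only says how utilities translate into choice probabilities.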


Decision theory is beyond the scope of this work on system theory.

The Wikipedia entry on Decision Theory will give you other links to follow.

For some practical case studies, look here

They feature (for example) the use of a Bayesian approach with Markov Chain Monte Carlo numerical methods.

The applications include assessing the risks of mechanical system failures, in order to inform decisions about their use and maintenance.


Be cautious about claims made about mathematics-based decision making.

First, the comparison is not between making a decision and making no decision.

It is between making a decision by mathematical analysis and making a more intuitive or experience-based decision.

Second, mathematics-based decision making cannot be extended to that large set of business decisions where

·        the time or cost of the analysis is too high

·        one can make only wild guesses about the raw data/numbers

That last is the category of decision that top level managers are often faced with, and paid to make.

Social group as organism

In the theory of evolution by natural selection, can a social group be treated as an organism?

Can selection between groups (favoring cooperation) successfully oppose selection within a group (by competition)?

Thinkers who addressed this include:

·        Lynn Margulis – the evolution of cells, organisms and societies

·        Boehm – the evolution of hunter-gatherer groups

·        Elinor Ostrom – the formation of cooperatives.

The evolution of cells, organisms and societies

Lynn Margulis (1970) proposed how nucleated cells evolved from symbiotic associations of bacteria.

The general idea is that members of groups can become so cooperative that the group becomes a higher-level organism in its own right.

The idea was later generalized (Maynard Smith and Szathmary 1995, 1999) to explain other major transitions, such as the rise of

·        multicellular organisms

·        eusocial insect colonies

·        human evolution.


It is common for people to draw a questionable analogy from biology to sociology.

The cooperation between smaller biological entities in a larger one is inflexible, rigidly rule-bound.

The interactions between people in a social network are extremely flexible, and a matter of choice for each individual.

Moreover, the people in one social network may realise several, possibly conflicting, rule-bound systems.

The evolution of hunter-gatherer groups

Hunter-gatherer societies are famously egalitarian, but not because everyone is nice to everybody else.

Group members can collectively suppress bullying and other self-aggrandizing behaviors within their ranks.

Boehm (1993, 1999, 2011) saw this as the defining criterion of a major evolutionary transition in human society.

With little disruptive competition within a group, succeeding as a group became the main selective force in human evolution.

The formation of cooperatives

Consider a group of people who share access to resources.

Such as fishermen who share fishing grounds, or farmers who share an irrigation system.

How to avoid “the tragedy of the commons” by which competition exhausts the common resource?

For a while, the fishermen must stop fishing, and farmers stop farming, to define the rules of their social system – a cooperative.


Elinor Ostrom (1990, 2010) defined eight generic conditions for such a cooperative.


1 Clearly defined boundaries

members know they are members of a group, and its aims

2 Proportional equivalence between benefits and costs

members must earn benefits and can’t just appropriate them

3 Collective choice arrangements

members must agree decisions, so nobody can be bossed around

4 Monitoring, 5 Graduated sanctions, 6 Fast and fair conflict resolution

disruptive self-serving behaviors must be detected and punished

7 Local autonomy

the group must have the elbow room to manage its own affairs

8 Appropriate relations with other rule-making authorities (polycentric governance)

all the rules above apply equally to inter-group relations


David Sloan Wilson reports projects that successfully applied Ostrom’s eight principles.

However, there are two constraints on how widely applicable they are.

First, the eight conditions are very demanding.

Second, the notion of pre-modern grouping of humans by geographical location has largely broken down.


The section above was adapted from a paper by David Sloan Wilson.

Read that paper for more detail and references.

The transformation of social groups

In the ancient world, humans (like apes) were naturally grouped by geography.

They communicated only with people in the same location or territory.

First, vehicular transport transformed our ability to mix in different societies.

Then, telecommunications transformed our ability to communicate remotely.


What now is a social group?

How does an individual actor join or leave a group? Who decides?

How many groups can one individual be a member of? Are there degrees of membership?

How does an actor prioritise, apportion time and attention, between groups they belong to?

How do they reconcile the conflicting norms or goals of different groups?

Second-order cybernetics – undermining the system concept

Second-order cybernetics was developed around 1970 by thinkers including Heinz von Foerster and Margaret Mead.


The distinction between classical and second-order cybernetics is fundamental.

Classical cybernetics emerged out of thinking about biological and mechanical control systems.

A system responds in a regular way to changes in its environment or another (coupled) system.

The term adaptation usually means system state change - changing state variable values in response to events.

It often means homeostasis - maintaining given variable values in a desirable range.

Homeostasis maintains the nature of a system.

The term evolution might be used to mean the path followed by state variable values over time.

The trajectory of state change (be it linear, circular or chaotic) is an inexorable result of the system’s rules.


Second-order cybernetics emerged out of thinking about social organisations.

The term adaptation often means system mutation - or evolution - changing state variable types or the rules for responding to events.

This changes the very nature of the system; it changes its laws.

Moreover, second order cybernetics is the recursive application of cybernetics to itself.

It allows systems actors to be system thinkers, who study the system they play roles in, and re-organise it.

The trouble is that continual reorganisation undermines what is general to the concept of a system – regularity.

Consequently, second-order cybernetics and “complexity science” tend to undermine more general system theory.


Read second order cybernetics for more.


There is probably little dispute about these basic ideas about systems.

·        There are forms and functions – actors and activities - in a system.

·        There are accidental and purposive - natural and designed - systems.

·        There are descriptions and realisations - abstract and concrete - systems.


Activity systems can be seen as games in which actors follow defined roles and rules.

You rely on countless human activity systems.

You wouldn't want to:

·        stand trial in a court that didn’t follow court procedures

·        board a train or airplane operated by people who didn’t follow the rules

·        invest in a company that didn’t repay its loans as promised

·        play tennis against somebody who ignored the laws of the game.


Unfortunately, second-order cybernetics and “complexity science” have undermined the concept of a system.

Some sociologists have come to call any social network a system, even where there are no defined roles or rules.

And may neglect to consider that one human actor can belong to different social networks and play roles in different systems.


To call every problem, situation, society or social group “a system” is unhelpful.

A business system has roles and rules, which are defined and coordinated to meet agreed business aims.

Employees frequently act outside of defined roles – whether to the benefit or at the cost of a business.

But those extramural actions are not part of the business system in any meaningful sense.


The following papers unscramble ways that the ideas in this table have been confused.


A social network | A social system

A set of actors who communicate as they choose. | A set of activities performed by actors.

A concrete entity in the real world. | A performance of abstract roles and rules by actors in a social network.

Ever-changing at the whim of the actors. | Described and changed under change control.


Read Systems thinking approaches for more.

Footnote 1: Two more sociological systems thinkers

Social systems thinking continued alongside the post-war system theory movement, sometimes in touch with it, sometimes far apart from it.

Luhmann: autopoietic social systems

Niklas Luhmann (1927–1998) was a German sociologist and student of Parsons.

Like writers a century earlier, he presumed a system is homeostatic and sustains itself, though in a very curious way.

David Seidl (2001) said the question facing a social system theorist is what to treat as the basic elements of a social system.

“The sociological tradition suggests two alternatives: either persons or actions.”

Luhmann chose neither; he proposed that the basic elements of a social system are communicative events about a code that lead to decisions that sustain the system.

Each social system is centred on one code, which is a concept such as “justice” or “sheep shearing”.

He endorsed the “hermeneutic principle” that the hearer alone determines the meaning of a communicative event.

Read Luhmann’s ideas for more.



Luhmann’s system is radically different from systems as understood by most other system theorists.

It is well-nigh diametrically opposed to that of classical cybernetics.

Since the system has no persistent structure, no persistent state, and no memory of communication events.

And the hermeneutic principle is contrary to common sense and to biology.

Since communication requires a receiver to decode the same meaning from a message that a sender intentionally encoded in that message.

Luhmann’s whole scheme (like that of Parsons before him) seems more metaphysical than scientific.

However, the idea of a system based on a code might be seen as having a counterpart in more general system theory.

That is a “domain-specific language” for communication of information about entities and events related to one body of knowledge.

Habermas: universal pragmatics

Jürgen Habermas (born 1929) is a critic of Luhmann’s theory of social systems.

He developed the social theory of communicative reason or communicative rationality.

According to Wikipedia, this distinguishes itself from the rationalist tradition, by locating rationality in structures of interpersonal linguistic communication rather than in the structure of the cosmos.

It rests on the argument called universal pragmatics – that all speech acts have an inherent "purpose" – the goal of mutual understanding.

He presumed human beings possess the communicative competence to bring about such understanding.

And hoped that coming to terms with how people understand or misunderstand one another could lead to a reduction of social conflict.



A theory of why and how animate entities communicate has to start from the evolutionary advantage it gives them.

Natural human language is inherently fluid and fuzzy; it is a tool for social bonding and communication, but can easily lead to misunderstandings.

Footnote 2: a couple of half-baked comparisons

The tendency of “systems thinkers” to draw analogies between “systems” in different sciences is not itself science.

Consider how very different are the systems in this (possibly questionable) table.


A (designed) software system | A (natural) free market economic system

Is a designed entity. | Is a natural entity.

Must be described fully, in excruciating detail. | Can be described lightly, as a collection of actors making buy and sell decisions.

Must be tested to ensure its actions add up to a coherent, consistent and correct whole. | Needs no testing before it runs.

Proceeds by actions that are predetermined and coordinated. | Proceeds by actions that are neither predetermined nor coordinated.

Proceeds only when intentionally deployed and set in motion. | Proceeds regardless of any overarching intention.

Changes only by design of actors outside the system. | Changes as a result of actors inside the system.

Changes only incrementally and predictably. | Evolves continually and unpredictably.

Is relatively complex. | Is relatively simple (however complex the real world actors and activities are).


And consider how very different are the systems in this table.


The actors in an agile software development system | The actors in a free market economic system

The actors are a small team of developers (cf. a hunter-gatherer group). | Actors are millions of individuals who act in their own self interest.

Actors act to build a coherent, consistent and correct software system. | Actors act to sell or buy something (anything).

Actors must make coordinated decisions. | Actors make ad hoc decisions.

Actors make decisions with the purposes of the system’s owners and users in mind. | Actors make decisions with no wider purposes in mind.

Actors’ decisions are highly constrained by requirements, technologies, standards etc. | Actors’ decisions are constrained by the money they possess.


All free-to-read materials at are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to it in whichever social media you use.