Classic system ideas – part one

Copyright 2017 Graham Berrisford. a chapter in “the book” at https://bit.ly/2yXGImr. Last updated 13/05/2021 10:14

 

Reading online? If your screen is wide, shrink its width for readability.

This is the first chapter in part one of the book.

 

Part one: General view

Preface to the book

Ideas you should know

Classic system ideas – one

More you should know – one

More about dynamics

Disambiguating terms

Interpretations, use & abuse

Relating cybernetics to EA

Forrester’s system dynamics

Meadows’ generalizations

 

If you want to get to grips with systems thinking, you’ll need to understand the core ideas that have emerged over the last century, not merely hear the words and interpret them as you like. This chapter is the first of two that take you on a canter through a history of systems thinkers and their ideas. It introduces three classic systems thinking schools (cybernetics, system dynamics and soft systems methodology) and outlines their ideas about modelling activity systems.

Contents

Early systems thinkers

Cybernetics

System Dynamics

Soft systems thinking

General system theory

Conclusions and remarks

 

Introduction

Some ideas that prefigure modern systems thinking emerged centuries ago. Notable authors have included:

·       Isaac Newton (1642-1726), who described the world as a system of objects that interact by forces, according to the laws of motion

·       Adam Smith (1723-1790), who wrote on specialization within businesses, and cooperation and competition between them (cf. autonomous agents)

·       Charles Darwin (1809-1882), who wrote on the evolution of species by reproduction with modification

·       Claude Bernard (1813-1878), who wrote on homeostatic feedback loops

·       Vilfredo Pareto (1848-1923), who is famous for the Pareto principle.

 

The general concept of a system became a focus of attention after the second world war. There was a burst of theory development in the period 1940 to 1980, out of which two major schools of system science emerged. First, cybernetics, after Wiener and Ashby. Later, system dynamics, after Forrester, which was popularized by Meadows.

 

Today, what people call “systems thinking” ranges from the softest of sociology to the hardest of physics and mathematics. Some generalize across these different domains of knowledge by defining a system as a whole made of parts. (By the way, what looks like a whole to you may look like a part to another observer/describer, and vice-versa. The scope or granularity of every whole and part is a decision made by describers.)

 

Here, the trouble is that every individual entity you can see or name is divisible into separately describable parts. If the term system is to be useful, more than a vacuous noise word, more than a synonym for entity, then it must have some more particular meaning. What other meanings can we find in the system sciences introduced in the previous chapter?

 

In most system sciences, every system of interest has two contrasting properties. First, it is dynamic in the sense it displays behavior and changes from state to state over time. Structures within the system interact in describable behaviors to change its state and/or produce outputs from inputs. Second, it is stable in the sense its way of behaving is regular enough to be modelled. It is a describable pattern of behavior (however transient) in the ever-unfolding process that is the universe.

 

Read on to find out more about what a system is in cybernetics, system dynamics and soft systems methodology.

Cybernetics

Cybernetics emerged out of efforts to understand the role of information in controlling the state of a mechanical or biological system. It is much about the storage and transmission of information, in memories and messages, to describe and direct the state of things.

 

Cybernetics is independent of any physical form or medium for a memory or a message. Actors can encode information or meaning in any structure of matter or energy that they can later decode. Information can be transmitted using smells, sounds, gestures, words, pictures, dials, lines drawn in the sand, electrical pulses or radio waves.

 

Well-known cyberneticians include:

·       Norbert Wiener (1894-1964), who defined cybernetics as “the science of control and communication, in the animal and the machine”

·       W. Ross Ashby (1903-1972), known for his law of requisite variety

·       Alan Turing (1912-1954), known for work on finite state machines and artificial intelligence.

 

Cybernetics was discussed and promoted by two influential groups.

·       1941 to 1960: The Macy Conferences - cross-disciplinary meetings in New York, with a mandate to aid medical research. Topics included connective tissues, metabolism, the blood, the liver and renal function. Also, infancy, childhood, aging, nerve impulses, and consciousness.

·       1949 to 1958: The Ratio Club - a cross-disciplinary group in the UK, founded by the neurologist John Bates to discuss cybernetics. Members included psychologists, neurobiologists, engineers, physicists, and mathematicians. Many went on to become prominent scientists.

 

Though he is not as well-known as many of the scientists his work influenced, a prominent member of the Ratio Club was the psychologist W. Ross Ashby, who generalized cybernetics, and defined a system in two ways.

 

System as a set of state variables

Ashby defined a system as a set of state variables selected from observation of some real-world entity, machine or situation. He represented the state of a physical entity by the values of state variables, as, for example, we represent the state of a tennis match by game, set and match scores, or may represent the state of a predator-prey system by predator and prey population numbers.

 

Ashby didn’t exclude the appearance in messages of qualitative information (as a location might be named in his gale warning broadcast). However, the focus of cybernetics is mostly on quantitative variables, like the levels of a stock, population or resource. Given that a quantity changes over time, the variable’s line of behavior may be shown on a graph of quantity against time.

 

Core concept: Line of behavior: the trajectory of how a state variable’s value changes over time. The shape of the line is an inexorable result of the system following its rules. It may hover around a homeostatic norm, or dwell for a while in one stable state (an attractor), then move to another stable state, or zoom up or down, or change in some other non-linear way as shown in figure 7/1/1 in Ashby’s “Design for a Brain” (1954).

[jpg not visible to online readers]

 

 

The progress of a system with two or three variables can be represented on a graph as a two or three-dimensional shape.
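As an illustration (mine, not Ashby’s), the idea of a line of behavior can be sketched in a few lines of Python: one state variable, a rule that updates it each time-step, and the recorded trajectory. The rule and numbers below are arbitrary.

```python
# A minimal sketch of a "line of behavior": one state variable,
# updated by a fixed rule each time-step, recorded so that its
# trajectory can be plotted against time.

def line_of_behavior(x0, rule, steps):
    """Apply `rule` repeatedly to x0, returning the trajectory."""
    line = [x0]
    for _ in range(steps):
        line.append(rule(line[-1]))
    return line

# An illustrative homeostatic rule: the variable decays toward a norm of 20.
homeostatic = lambda x: x + 0.5 * (20 - x)

trajectory = line_of_behavior(x0=50.0, rule=homeostatic, steps=10)
# The values hover ever closer to the norm: 50.0, 35.0, 27.5, 23.75, ...
```

Plotting the trajectory against time gives the kind of graph Ashby drew; with two or three variables, each gets its own line.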

 

System as a set of rule-bound activities

Ashby defined systems by the way they behave over time, as a set of regular or repeatable state changes, rather than what they are made of. This shift in perspective, from the physical structure of a system to its logical behavior, is central to the cybernetic view of the world. When a cybernetician calls a system complex, the reference is to its lines of behavior rather than its material structure.

 

Ashby was particularly interested in machines and organisms that change from state to state under their own internal drive, without input. By contrast, enterprise architects are mostly concerned with systems in which actors are prompted to act when inputs arrive, or internal states reach a threshold value. Activities advance the internal state of the business and/or produce outputs that advance the state of external actors. Despite their different interests, there is a considerable overlap of ideas, explored in other chapters.

 

The good regulator

The “good regulator” theorem, conceived by Roger C. Conant and W. Ross Ashby, is central to cybernetics. In short, it states that every good regulator of a system must have a model of that system.

 

From the abstract: “The design of a complex regulator often includes the making of a model of the system to be regulated. The making of such a model has hitherto been regarded as optional, as merely one of many possible ways. In this paper a theorem is presented which shows, under very broad conditions, that any regulator that is maximally both successful and simple must be isomorphic with the system being regulated. Making a model is thus necessary.”

https://www.tandfonline.com/doi/abs/10.1080/00207727008920220

 

A regulator can be any animal, machine or business that has a model, or has access to a model, of what it needs to monitor and control. So, read this triangle from left to right: regulators <have and use> models, which <represent> targets.

 

The good regulator

                          Models
        <have and use>            <represent>
Regulators   <monitor and regulate>   Targets

 

Ashby wrote that a regulator’s model of a target must be isomorphic with the target that is regulated, meaning, the elements and relationships in the model must be correlatable with elements and relationships in the reality. This isomorphism is logical rather than physical.

 

“The theorem has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment.”

https://www.tandfonline.com/doi/abs/10.1080/00207727008920220

 

Evidently, to function and respond to changes, an animal must “know” what is going on in its world. It needs a model of entities and events in its environment if it is to find food and mates, and avoid enemies. The richer the model, the more adaptive the animal can be to changes in its environment. Similarly, a business needs to know the state of things it seeks to monitor or direct.

 

The question is not whether an animal or a business has a model; it is how complete and accurate the model is. To which the answers might be both “very incomplete and somewhat inaccurate” and “remarkably, complete and accurate enough”. Thinking about these answers leads inexorably to the view of description and reality that is outlined in the second half of this book.
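To make the triangle concrete, here is a minimal sketch (the class names and numbers are illustrative, not from Conant and Ashby’s paper) of a regulator that holds a model of its target’s state, updates that model from state information, and consults the model, not the target directly, to decide what to do.

```python
# A sketch of the good regulator idea: the regulator keeps a model
# of the target's state, refreshes it by observation, and acts on
# the basis of the model.

class Target:
    def __init__(self, level):
        self.level = level

class Regulator:
    def __init__(self, desired):
        self.desired = desired
        self.model = None  # the regulator's model of the target's state

    def observe(self, target):
        self.model = target.level          # update model from state information

    def direct(self, target):
        # consult the model, then direct the target
        if self.model < self.desired:
            target.level += 1
        elif self.model > self.desired:
            target.level -= 1

tank = Target(level=3)
reg = Regulator(desired=5)
for _ in range(4):
    reg.observe(tank)
    reg.direct(tank)
# After a few observe/direct cycles the target settles at the desired level.
```

The model here is trivially isomorphic with the target: one number in the regulator corresponds to one number in the reality it regulates.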

 

Feedback and homeostasis

"Self-regulating mechanisms have existed since antiquity, and the idea of feedback had started to enter economic theory in Britain by the 18th century, but it did not have a name.... In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback."  (Wikipedia)

 

The biologist Claude Bernard (1813-1878) wrote on homeostatic feedback loops that maintain the essential variables of the body within critical limits. Later Walter Cannon wrote “Physiological Regulation of Normal States” (1926).

 

If a regulator is to monitor and direct the state of a target then it must know or remember the state of that target, and update that knowledge. Cybernetics explains how a regulator does this by means of a feedback loop that connects the regulator and target subsystems such that the output from one is input to the other.

 

Subsystem       Feedback loop              Subsystem
Regulator       ← state information        Target
                direction →
Thermostat      ← too cold or hot          Heater
                switch on or off →

 

A controller/regulator typically contains three kinds of component:

·       receptors that sense changes in the state of a target

·       a control center that determines the regulator’s responses to those changes

·       effectors that act to change the state of a target.
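The thermostat/heater loop in the table above, with its three kinds of component, can be sketched as follows (a toy illustration; the set point and step sizes are arbitrary). The receptor senses the room temperature, the control center compares it with a set point, and the effector switches the heater, whose output feeds back into the room’s state.

```python
# A sketch of the thermostat/heater feedback loop:
# receptor (sense), control center (decide), effector (act).

def thermostat_step(temp, set_point=20.0):
    sensed = temp                   # receptor: sense the target's state
    heater_on = sensed < set_point  # control center: "too cold or hot"
    # effector: the heater's output feeds back into the room's state
    temp += 0.5 if heater_on else -0.5
    return temp, heater_on

temp = 17.0
history = []
for _ in range(20):
    temp, heater_on = thermostat_step(temp)
    history.append(temp)
# The room temperature climbs to the set point, then oscillates around it.
```

The list of recorded temperatures is this little system’s line of behavior: a climb toward the norm, then homeostatic hovering.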

 

In his “Design for a Brain” Ashby presented the brain as a regulator that monitors and maintains the state of the body.

·       sensors detect changes in bodily state variables

·       the brain determines the responses to those changes

·       motor neurons, via glands and muscles, increase or decrease state variable values

 

Subsystem          Feedback loop            Subsystem
Regulator          ← state information      Target
                   direction →
Sensors, Brain,    ← variable values        Body
Muscles, Glands    increase/decrease →      state variables

 

This table compares Ashby’s model with general activity system thinking.

 

Generic activity system: Actors interact in orderly activities to maintain system state and/or consume/deliver inputs/outputs from/to the wider environment.

Ashby’s design for a brain: Brain cells interact in processes to maintain body state variables by receiving/sending information from/to sensors, muscles and glands.

 

Feedback loops are found in mechanical, organic and business systems. A missile guidance system senses spatial information, and sends messages to direct the missile. An animal brain senses things in its environment, and uses its mental model of the world to respond to or manipulate those things.

 

A business is coupled to its environment by producing outputs that affect external entities, and receiving inputs, some of which are responses to previous outputs. A business information system maintains a model of the entities and events that it monitors and directs in its environment. In EA, given that the focus is on information flows rather than material flows, the relationship of a business to the external actors it monitors and serves or directs may be seen as a regulator-to-target feedback loop.

 

Business activity system    Feedback loop          Business environment
Regulator                   ← state information    Target
                            direction →
Consumes inputs             ← Inputs               External actors
Produces outputs            → Outputs              Environment state
Maintains system state

 

A core concept: Feedback loop: the circular fashion in which output flows influence future input flows.

 

Progressive entities or processes

Feedback is important in machines, animals, social entities and business operations. A business is coupled to its environment by producing outputs that affect external entities, and receiving inputs, some of which are responses to previous outputs. However, business activities are usually more progressive than homeostatic.

 

As an example of progressive state changes, in his “Introduction to Cybernetics”, Ashby discussed an abstract system realized in nature. In the table below, after Tinbergen, the columns show the roles of two sticklebacks in their mating ritual; the rows show the succession of activities/states of the system/process as it progresses to a conclusion.

 

The stickleback mating ritual: the abstract system

The female’s role is to                      The male’s role is to
present swollen abdomen, special movements
                                             present a red color and a zigzag dance
swim towards the male
                                             turn around and swim rapidly to the nest
follow the male to the nest
                                             point its head into the nest entrance
enter the nest
                                             quiver in reaction to the female entering the nest
spawn fresh eggs in the nest
                                             fertilize the eggs

 

The sticklebacks communicate by sending/receiving information in the form of visual signals, which are stimuli (akin to data flows or messages in business operations) that inform and direct activity. The sticklebacks may be called actors or active structures; the nest and eggs may be called passive structures.
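To illustrate Ashby’s point that the ritual is an abstract system, separable from any particular pair of fish, it can be sketched as an ordered list of (role, activity) pairs; “running” the list realizes one instance of the ritual. (This is my illustration, not Ashby’s notation.)

```python
# The abstract system: an ordered set of (role, activity) pairs
# that any pair of sticklebacks may realize.

RITUAL = [
    ("female", "present swollen abdomen, special movements"),
    ("male",   "present a red color and a zigzag dance"),
    ("female", "swim towards the male"),
    ("male",   "turn around and swim rapidly to the nest"),
    ("female", "follow the male to the nest"),
    ("male",   "point its head into the nest entrance"),
    ("female", "enter the nest"),
    ("male",   "quiver in reaction to the female entering the nest"),
    ("female", "spawn fresh eggs in the nest"),
    ("male",   "fertilize the eggs"),
]

def realize(ritual):
    """One physical occurrence of the abstract system: each visual
    signal (stimulus) triggers the next activity in the sequence."""
    return [f"{role}: {activity}" for role, activity in ritual]

trace = realize(RITUAL)
```

The same list (the abstract system) can be realized by many stickleback pairs; each trace is one physical activity system.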

Ashby’s terminology

Ashby’s work had a profound influence on many prominent scientists. However, his writing is a tough read. He used the terms system and machine variously, with and without quotes, with and without prefixed adjectives (real, whole, regular, dynamic, independent etc.). Between books, even within a book, he did not use these terms entirely consistently (and he had no “find next” and “change all” function to help him fix that).

 

Ashby’s real system and real machine usually refer to a whole physical entity with infinite potentially definable variables (and sometimes infinite possible inputs) which “stands apart from any observer”. It could be an electrical transducer, an animal, or even a society.

 

His system (without the prefix “real”) is defined as an observer’s selection of variables. In his first book, the selection is arbitrary. In his second book, the selection relates to some “main interest that is already given”. Perversely however, Ashby also sometimes wrote of an animal or a society as being a system, regardless of any observer or selected variables.

 

In his first book, Ashby narrowed the scope of “system” by saying in chapter 2 that

“from now on we shall be concerned mostly with regular systems. We assume that preliminary investigations have been completed and that we have found a system, based on the real 'machine', that (1) includes the variables in which we are specially interested, and (2) includes sufficient other variables to render the whole system regular.”

 

Later, in chapters 2/17 and 14/3, he narrowed “system” further to mean a selection of variables that are inter-related, and change independently of other variables.

 

His term 'machine' with quotes often seems the same as real 'machine'. By contrast, the term machine without quotes sometimes seems limited to what a physical entity does by way of realizing a system of variables that have been selected (whether arbitrarily, or as related to some given interest, or as related to each other and changing independently of others).

 

(For examples of Ashby using these terms in different ways, see 2/4, 2/5, 2/14, 2/17, 14/3 and 21/6 in “Design for a Brain”, and see 2/1, 3/1, 3/11, 4/3, 6/1 and 6/14 in “Introduction to Cybernetics”.)

 

Abstracting systems from physical entities

Ashby made clear that a system (a set of variables and rules) is abstracted from the infinite complexity of any physical entity or real 'machine' in which that system may be observed. The table below divides his machine into a physical entity and the physical activity system it performs, when it realizes a particular abstract system.

 

                                                             Stickleback mating ritual
System     Abstract system (roles, rules, variables)         The ritual typified above
Machine    Physical activity system (activities, variable    An instance of the ritual
           values)
           Physical entity (actors in the material world)    Sticklebacks, nest and eggs

 

Generally, the relationship between abstract systems and physical entities is many to many. So, the mating ritual in the table above can be realized by many stickleback pairs; and conversely, those sticklebacks may play roles in other systems. Also, there is more to a physical entity than any abstract system it realizes; there is more to know about the sticklebacks than their mating ritual.

 

SEE NEXT CHAPTER

System Dynamics

Jay Forrester (1918 to 2016), sometime a professor at the MIT Sloan School of Management, defined the system dynamics method. Donella H. Meadows (1941 to 2001), whose interests included resource use, environmental conservation and sustainability, used and promoted the method (more on Meadows’s ideas later).

 

Much as Ashby defined a system as a set of variables chosen for attention, and relationships between those variables, Forrester defined a system as a set of “stocks” chosen for attention, and the “flows” between those stocks, be they observed or envisaged.

 

A stock can represent any quantifiable population or resource, e.g. wolf and sheep populations, happiness level, sunlight level, and variables that affect climate change such as CO2 level. A stock level, a variable quantity representing the amount of the stock, is increased and decreased by inter-stock flows. A flow, connecting two stocks, represents how growth in one stock increases or decreases another stock.

 

A negative or balancing feedback loop, where growth in one stock tends to reduce another, tends to restore a system to a stable state, regardless of the initial conditions.

 

Growth in         Feedback loop     Growth in
Predators         decreases →       Prey
                  ← increases
Homes for Sale    decreases →       House Prices
                  ← increases

 

A positive or reinforcing feedback loop drives the two coupled populations or resources to extreme levels (barring some natural limit or control from another direction).

 

Growth in           Feedback loop    Growth in
Virus population    ← increases      Infected people
                    increases →
Polar ice cap       ← increases      Reflected sunlight
                    increases →

 

In a network of feedback loops, radically different end states are possible from very similar initial conditions. This is a defining feature of “chaos theory”.

 

Typically, the modeller begins by drawing a causal loop diagram that names stocks (in nodes) and connects them by flows (arrows). To show whether a flow increases or decreases a stock, the modeller annotates the flow arrow with a + or − sign. For examples, see this introductory book https://bit.ly/3blcJaT.

 

A causal loop diagram is only a sketch of a system's dynamics. The modeller can go on to complete the system dynamics model by defining the mathematical rules for how flows modify stock quantities. When completed, the model can be animated. You can give the state variables some initial values, then set the system in motion. Its state will change in discrete time-steps, and the result can be reported on a graph showing stock level changes over time (lines of behavior).
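As an illustration (the stocks, flows and rate constants below are mine, not Forrester’s), a completed model of the predator-prey loop shown earlier can be animated in a few lines of Python: each discrete time-step applies the flow rules to the stock levels, and the recorded levels are the lines of behavior.

```python
# A sketch of a completed system dynamics model: two stocks
# (prey, predators) coupled by a balancing loop, advanced in
# discrete time-steps. Rate constants are illustrative.

def step(prey, predators, dt=0.01):
    births      = 1.0 * prey                 # inflow to the prey stock
    predation   = 0.02 * prey * predators    # outflow: predators reduce prey
    pred_growth = 0.01 * prey * predators    # inflow: prey feed predators
    pred_deaths = 1.0 * predators            # outflow from the predator stock
    prey      += dt * (births - predation)
    predators += dt * (pred_growth - pred_deaths)
    return prey, predators

prey, predators = 100.0, 20.0                # initial stock levels
lines = {"prey": [], "predators": []}        # lines of behavior
for _ in range(1000):
    prey, predators = step(prey, predators)
    lines["prey"].append(prey)
    lines["predators"].append(predators)
# Plotting each list against time shows oscillating lines of behavior.
```

Changing the initial values or rate constants changes the shape of the lines, which is exactly the kind of experiment a system dynamics model is built for.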

 

SEE “FORRESTER’S SYSTEM DYNAMICS” CHAPTER

Soft systems thinking

The term “soft system” is used with at least three meanings. Soft system 1: An observer’s perspective of a business system or situation. Soft system 2: A system in which the observer participates. Soft system 3: A human activity system in which people undertake activities to meet aims.

 

The distinction between hard and soft systems is debatable. The first two meanings are no different from the view of a system taken in cybernetics and system dynamics. Ashby and Forrester both saw a system as “soft” in the sense that it is an observer’s perspective or model of some physical phenomena. And both presumed an observer may set the parameters for a system to begin operation, and monitor its state changes over time.

 

Where the interest is particularly in the regular operations of a business, all four soft systems thinkers below looked at the business as an “organization” in which some (not all) human activities are regular and repeatable enough to be modelled.

 

Churchman, one of the first soft systems thinkers, said "a thing is what it does". He outlined these considerations for designing a system managed by people:

·       “The total system objectives and performance measures;

·       the system’s environment: the fixed constraints;

·       the resources of the system;

·       the components of the system, their activities, goals and measures of performance; and,

·       the management of the system.”

 

Like others, Churchman sought to integrate system theory into “management science”, and, replacing the word “business” by “system”, he conflated the ideas of a social entity and a social activity system.

 

Peter Checkland (born 1930) promoted a “soft systems methodology” for business analysis and design. Checkland regarded a business as an input-to-output transformation system. His “business activity model” shows a network of activities that transform inputs into outputs. So, whatever “soft system” means, the end result of his soft systems method is the definition of a human activity system. The same general principles apply to this as to other kinds of activity system models.

 

1.     Regular activities transform inputs into outputs wanted by customers

2.     Feedback loops connect a business to its environment thus: a) a business detects changes in the state of its environment b) it determines responses and c) it directs entities to perform activities.

3.     Observers may draw a business activity model, and read the current state of a system in its data store(s).

 

Using Checkland’s soft systems method, one draws a business activity model to show how regular or repeated activities complete a “transformation” that human actors are employed to make. Of the six letters in his CATWOE acronym for describing a system of interest, five can be found in the extended SIPOC diagram drawn below.

 

Environment    Transformation                    Environment
Suppliers      Inputs → Processes → Outputs      Customers
Actors         Owner and Actors                  Actors

 

Different modellers, speaking to different stakeholders, may come up with different CATWOE sentences, different systems of interest. Reconciling different perspectives is part of what must be done, even in the hardest of engineering projects. Checkland noted the distinction between hard and soft system approaches is slippery. He wrote that people get it one day, and lose it the next. And sometimes he said the term “soft” was intended to characterize his method – not the system of interest.

 

Among other “soft systems thinkers”, Russell L Ackoff (1919-2009) spoke of human organizations as purposeful systems, and Stafford Beer (1926-2002) wrote on “management cybernetics”. You’ll meet Checkland, Ackoff and Beer again in part two of this book – on systems thinking in management science.

 

SEE PART TWO

General system theory

“There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind.” Bertalanffy

 

The 1954 meeting of the American Association for the Advancement of Science in California was notable. Some at that meeting conceived a society for the development of general system theory. The founding members of the International Society for the Systems Sciences (1955 to date) included:

·       Ludwig von Bertalanffy (1901-1972), who promoted the cross-science notion of a system

·       Anatol Rapoport (1911-2007), who wrote on game theory and social network analysis

·       Kenneth Boulding (1910-1993), who applied general system theory to “management science” (see part two).

 

Ludwig von Bertalanffy was a biologist who promoted the idea of a cross-science general system theory from the 1940s. He looked for patterns and principles applicable to several sciences and domains of knowledge. Many have been outlined earlier in this chapter. Two more branches of general system theory should be mentioned here.

 

Anatol Rapoport was a mathematical psychologist and biomathematician who made many contributions. In researching cybernetic theory, he pioneered the modelling of parasitism and symbiosis. This gave a conceptual basis for his work on conflict and cooperation in social groups.

 

When actors interact in a system, they may cooperate, as in a football team or a business system. But they don’t necessarily help each other. They may compete, as in a game or a market; or hurt each other, as in a boxing match or a war. Cooperation, conflict and conflict resolution are a focus of bio-mathematics and game theory.

 

Game theory: cooperation and conflict resolution. In the 1980s, Rapoport won a computer tournament that was designed to further understanding of how cooperation could emerge through evolution. He was recognized for his contribution, through nuclear conflict restraint, to world peace.
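That tournament (Axelrod’s iterated prisoner’s dilemma tournament) was won by Rapoport’s famously simple “Tit for Tat” strategy: cooperate on the first move, then copy the opponent’s previous move. A minimal sketch (the payoff values are the standard ones used in the tournament):

```python
# Iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
# PAYOFF maps a pair of moves to the pair of scores.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then copy the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(b_hist), strategy_b(a_hist)
        pa, pb = PAYOFF[(a, b)]
        a_hist.append(a); b_hist.append(b)
        a_score += pa; b_score += pb
    return a_score, b_score

scores = play(tit_for_tat, always_defect)
# Tit for Tat is exploited only in the first round, then matches defection.
```

Against a fellow cooperator, Tit for Tat cooperates throughout; against a defector it loses only once. That robustness is why so simple a rule won.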

 

Social network analysis. Rapoport showed that one can measure flows through large networks. This enables learning about the speed of the distribution of resources, including information, through a society, and what speeds or impedes these flows.

Conclusions and remarks

For a distillation of ideas attributed to the nineteenth century thinkers mentioned at the start, you could read my sketchy notes on thinkers who foreshadowed system theory.

 

To get to grips with modern systems thinking, you must understand the core ideas discussed under that heading, not merely hear the words and interpret them as you like. This chapter, the first of two that canter through a history of systems thinkers and their ideas, has introduced three classic systems thinking schools (cybernetics, system dynamics and soft systems methodology) and outlined their ideas about modelling activity systems.

 

In all these system sciences, all the systems of interest have two contrasting properties. First, they are dynamic in the sense they display behavior and change from state to state over time. Structures within a system interact in describable behaviors to change the state of that system and/or produce outputs from inputs. Second, they are stable in the sense there is some order or pattern in their way of behaving. A system is a transient island of stability, a describable pattern of behavior, in the ever-unfolding process that is the universe.

 

There is more to know about systems, as explained in the following chapters. And later, the second chapter of this pair looks at ideas that have emerged from the history of more sociological “systems thinking”.

 

Let us finish this chapter with some remarks about the most naïve kind of “systems thinking” you will find on the internet. Often, the term implies little more than a glorification of holistic thinking (or “everything is connected”). And some naively use the adjectives on the left in the table below pejoratively, to deprecate a way of thinking.

 

Contrasting terms

Reductionistic        Holistic
Linear                Non-linear
Simple or solvable    Complex or wicked
Hard system           Soft system

 

To deprecate cybernetics, system dynamics or soft systems method as reductionist, linear, simple or hard is to suggest a misunderstanding of those approaches, those words, or both.

 

Reductionistic or holistic systems thinking?

All the approaches in this chapter take a holistic view of systems. They are concerned with how a whole system produces effects that its parts cannot on their own. At the same time, all are also reductionistic in the sense they divide a system into parts, and sometimes divide coarser-grained parts into finer-grained parts.

 

Linear or non-linear thinking?

All the approaches in this chapter presume that effects can be traced to causes, whether in end-to-end processes or in feedback loops. However, they allow that over time, a system may behave in complex, non-linear or chaotic ways. And may evolve into a different system.

 

Simple/solvable or complex/wicked?

All the approaches in this chapter presume reality is infinitely complex. We can understand and test only what we can describe. And most of the ten points that define wicked problems apply to most challenges in human society. Be the problem large or small, there is rarely a perfect answer; rather, there are trade-offs to be made between competing goals, and balances to be drawn between different design options. The options can’t be neatly divided into good or bad, only placed on a scale between those extremes.

 

Hard or soft systems thinking?

Midgley (2000) presented hard, soft and critical systems thinking approaches as though they were steps in a historical progression. Yet the hardest of system sciences regards a system as soft in the sense that it is a perspective of an entity or phenomenon, represented in a model made by observers, in the light of some given interests. This book starts from the position that all system thinking approaches (hard or soft) should help us describe a human activity system in a holistic way. Even mechanical engineers are taught to:

 

·           Analyse current situations, understand assumptions

·           Look at the big picture, overarching drivers, goals and principles

·           Engage in critical thinking

·           Identify stakeholders, their concerns and viewpoints

·           Unfold multiple perspectives, promote shared understanding

·           Monitor and manage changes to requirements, time, cost, resources etc.

·           Outline solutions or changes and how to make them.

 

Activity systems thinking or social entity thinking?

Much confusion in systems thinking discussion stems from people over-generalizing different schools of thought. Even respected authors use the terms of one with reference to different concepts in another. To make better sense of the systems thinking field it is necessary to differentiate social entity thinking from activity systems thinking. One is not an evolution of the other; they are not competitors; both are needed.

 

We’ll return to social entity thinking in parts two and three of this book. In part one, we explore activity systems thinking, particularly with reference to the “dynamics” of systems.