A new vocabulary for systems thinking



Reading on-line? Consider shrinking the display width for easier reading.


Preface

CHAPTER 1 Systems, parts and information

CHAPTER 2: Basic system thinking ideas

CHAPTER 3: Thinking

CHAPTER 4: A new model of thinking

CHAPTER 5: A new model for systems thinking

CHAPTER 6 Hard, soft and other viewpoints

CHAPTER 7 Self-organization – varieties

CHAPTER 8 Causality – varieties

CHAPTER 9 On what makes people special

CHAPTER 10 Complexity – varieties

CHAPTER 11: Looking forward

APPENDIX 1: More on abstraction

APPENDIX 2: Abstracting systems from phenomena



What does this book do?

What’s new?

More to follow?




This book is a foundational reference work for teachers and students of “system thinking” and “complexity science” around the world. It is also relevant to enterprise and business architects and analysts, other business change agents, teachers of system engineering, and users of activity system modelling languages like UML and ArchiMate.

The book gives you a vocabulary that disambiguates terms used in systems thinking. It synthesizes disparate views of the topic into a new and more coherent whole. In part, it does this by taking a naturalist, psycho-biological, view of philosophy.

Countering an anti-science tendency

Some systems thinkers put scientists down for thinking in "hard, systematic, reductionistic, linear" ways, rather than "soft, systemic, holistic, non-linear" ways. Actually, scientists think in all those ways.

Some speak in a pseudo-scientific way, borrowing terms from mathematics, physics and biology (fractal, attractor, chaos, autopoiesis) and using them with debatable and/or different meanings in discussion of human organizations. Some draw causal loop diagrams of socio-economic situations, and present them as truths, without any evidence.

Some quote aphorisms of gurus that don't fit the context, or don't bear close examination. Many use terms (non-linear, complex etc.) with no clear meaning. There is now a very large body of material in which the lax, loose or lazy use of words leads readers to confuse thinking about social entities with thinking about activity systems.

Since so much posted today under the banners of "systems thinking", "management science" and "complexity theory" is ambiguous, even mystical, this work renders a service to readers by exposing many ambiguities, and resolving them in a new vocabulary of more than 100 terms for systems thinking.

This work does take a scientific view of systems, but you won’t find mathematical proofs or evidence drawn from practical experiments. Instead, you will find assertions are analyzed critically and logically, and conclusions are illustrated by simple examples, such as a ridden bicycle, a game of poker, a card school, nest building, a termite colony, and a business.

The importance of symbolic modelling

Any analysis of systems thinking must address thinking, and what we know of the world around us. That is the subject matter of epistemology, and of semiotics, the study of the use of symbols.

"Signs, symbols, and signals are basic to our existence on many organizational levels, from the biological to the psychological to the social. The ‘semiosphere’… envelopes and incorporates us at every turn. Symbolic nucleotide sequences lie at the root of our biological organizations, neural pulse codes subserve the coherent functional organizations in our brains that permit us to think, while the symbol sequences of our languages afford the complex communications that make human society possible. Semiotic concepts, properly developed, are critical for a deep understanding of the organization of life, the functioning of the brain, and the functional organization of the observer." P. Cariani

This work requires no deep knowledge of science or philosophy. However, it does offer a new triadic model for systems thinking, and a new classification of systems thinking approaches, which are readily understood.

A triadic model for systems thinking

Underlined names are writers referenced at the end of the book.

Ludwig von Bertalanffy was a biologist who looked to a "unity of science”, a general system theory of principles and patterns applicable to systems in all sciences. His idea stimulated others to look afresh at social systems. Unfortunately, this has led to some confusion.

To refer to an enterprise as a "system of systems", is to confuse a social entity with activity systems it participates in. To give you a “heads up”, this work disentangles a mess of terms and concepts by distinguishing not two, but three, things systems thinkers are interested in.

1.      A material or social entity – say, a card school.

2.      An abstract model of an activity system – the rules of poker.

3.      A concrete instantiation of 2 by 1 - a poker game.

Reading this book

This is a short book. However, it does more than introduce a few ideas and bloviate about them. The ratio of ideas to pages is high. To get an impression of the content, you might skim through the chapters looking only at the tables, before returning to read the narrative. Even if you are familiar with many of the ideas, this work is best read from start to end, because there is a logical flow from simple ideas at the start to more challenging concepts later. Your patience will be rewarded.












CHAPTER 1 Systems, parts and information


As a certified systems thinker, you have to get your head around some abstract ideas. This first chapter introduces a few of them. And like most chapters to follow, it addresses some common misconceptions.

The importance of you, the thinker or observer

Systems thinker, there is no system at the start. First, you model some entity or situation as a system. Then, when you look at the world, you may see a concrete instantiation of that system, or build one.

(There is an exception: to see a biological organism as a system, you don’t need to model or build one, because nature has already done that, by creating a model encoded in the form of DNA.)

As a systems thinker, you scope a “system of interest”. The interest may be in effects or results observed as interesting or problematic today, or envisaged as desirable in future. You identify things relevant to that interest, and relate them as parts of a whole – a system.

Your system is a whole composed of related parts, but it is not the whole of an entity or situation. It is an abstraction that represents a real system that you observe or envisage, a system that is instantiated by that real-world entity or in that situation.

"The art of model-building is the exclusion of real but irrelevant parts of the problem, and entails hazards for the builder and the reader." Philip Anderson, 1977

Given a different interest in future, what looks now to be a system might then be seen as one part in a wider system. Or, what looks now like a part might then be seen as a whole system needing investigation.

Parts and relationships

Defining a system as “a whole made of related parts” is a naïve but useful starting point, because it implies important things about parts and relationships.

Parts are discrete. In an abstract system, the part types must be differentiable from each other (say “predator” and “prey”, or “molecule”). And in a real or run-time system, the instances of those part types must also be separable (individual wolves and sheep, or molecules in a gas cloud).

Parts in a system are related to each other, directly or indirectly. Else there would be two or more distinct systems.

Parts and relationships can be named. Relationship types are best defined using a controlled vocabulary. For example: “Predators <consume> prey”, or “Molecules <collide with> molecules”.

Parts can be related in a graphical model. A variety of graphical forms may be used to model how parts (at the type or the individual level) are related in a system, most notably these three.

Network: a graph with parts in boxes/vertices related by lines/edges.

Hierarchy: a tree structure in which one part is divided into several parts, and so on.

Matrix: a table relating parts in rows to parts in columns.
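For readers who think in code, the three forms can be sketched in a few lines of Python. This is only an illustration; the part names and the relationship type used here are hypothetical.

```python
# Network: parts as vertices, labelled relationships as edges.
edges = [("wolf", "consumes", "sheep"), ("sheep", "consumes", "grass")]

# Hierarchy: one part divided into several parts.
hierarchy = {"ecology": ["wolf", "sheep", "grass"]}

# Matrix: parts in rows related to parts in columns.
parts = hierarchy["ecology"]
matrix = {row: {col: "" for col in parts} for row in parts}
for subject, verb, obj in edges:
    matrix[subject][obj] = verb

print(matrix["wolf"]["sheep"])  # consumes
```

The same parts and relationships appear in all three forms; only the presentation differs.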

This simplistic view of an ecology shows aggregate parts in boxes, and relationships between them as arrows.




[Diagram: two aggregate parts in boxes; one increases the other (“increases →”), which in turn decreases the first (“← decreases”).]

Figure 1.1

Relationships are as important as parts. Relational theory (the general one rather than relational data theory) embraces any framework for understanding or modelling reality in terms of how different objects or entities (and their properties, or variable attributes) relate to each other. Several kinds of relationship appear in the following sections.

Abstraction relationships

Some see the ability to abstract as a defining feature of intelligence. Abstraction is not a singular idea; to pin it down in one useful sentence is difficult. First, one has to distinguish description from reality.

Description: an abstraction from reality that conceptualizes or models a real-world phenomenon.

Abstraction: selecting or hiding, when describing something or when simplifying a description.

The table below shows various ways B may be abstracted from A.

A <realizes or refines> B              Abstraction variety
A <is an instance of> B                B is a Type
A <is a part of> B                     B is a Composite
A <is a subtype of, or extends> B      B is a Generalization
A <makes more concrete> B              B is an Idealization
A <is encapsulated by> B               B is an Interface
A <serves> B                           B is a Delegator

Figure 1.2

Type: a description of properties that instantiations of the type share.

Chapter 11 explores composition (hiding parts inside a whole), generalization (hiding what is different, describing shared properties), idealization (hiding physical details, abstracting a more logical view), encapsulation (hiding what is inside a system, abstracting an interface definition) and delegation (hiding the workings of a server from a client).
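For readers who know an object-oriented language, several of these abstraction varieties can be illustrated in one short Python sketch. The class names are hypothetical.

```python
class Vehicle:                      # a Generalization: hides what subtypes differ in
    def describe(self):             # an Interface: callers see only this operation
        return "a vehicle"

class Bicycle(Vehicle):             # Bicycle <is a subtype of, or extends> Vehicle
    def __init__(self):
        self.wheels = ["front", "rear"]   # each wheel <is a part of> the Composite

b = Bicycle()                       # b <is an instance of> the Type Bicycle
print(b.describe(), isinstance(b, Vehicle))  # a vehicle True
```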

Abstraction in the hierarchy of sciences

The table below outlines what Bertalanffy called the “tremendous hierarchy” of sciences.








                                              Information in writing
Human sociology         Human groups          Information in speech       Teaching, logic
Social psychology       Animal groups         Information in signals      Parenting, copying
                                              Sense, memory, response
                                              Sense, response
Organic chemistry       Hydrocarbons          Organic reactions
Inorganic chemistry                           Inorganic reactions

Figure 1.3

Reading from the bottom up, you can see a history of the universe, from the big bang to human civilization. Systems may be found at every level.

“We cannot reduce the biological, behavioral, and social levels to the lowest level, that of the constructs and laws of physics.” Bertalanffy

Scientists at one level mostly think and work without reference to laws established in other levels. Even within biology, one scientist may study a cell in the liver as a whole system, and a second may see it as an atomic part of a wider system. The second pays no attention to what the first sees inside the cell.

Association relationships

Association: a relationship between things, or types of things.

Scientists are much concerned with how the parts of a system are associated with each other. There are countless varieties of association. This table shows some found in different branches of science.

Part A <is associated with> Part B      Structure kind
A <is physically connected to> B
A <exerts a force on> B
A <sends energy to> B
A <is logically associated with> B
A <sends information to> B

Figure 1.4

The storage and transmission of information/data by systems is important in much systems thinking. In sociology and management science, the systems of interest feature human actors who are related logically and communicate by exchanging information.

Information and data

The mathematical expressions for thermodynamic entropy are similar to those for information entropy (by Claude Shannon and Ralph Hartley). Physicists often refer to a flow of energy or radiation as information, and see the structure of anything as a memory of processes that created or modified it.

In social systems thinking, the term information is used in the sense of knowledge encoded by an actor in memory or message for use later, or by another actor. Several WKID hierarchies have been presented. Here is one compatible with this work.

Wisdom: the ability to use knowledge to envisage and predict the future.

Knowledge: information true enough to be useful.

Information: meaning, an actor’s correlation of data to what is observed or envisaged.

Data: a matter/energy structure, in a memory or message, in which information has been encoded, that is decodable into information.


On paper, on its own, Newton’s formula F = M * A is data. It is information, has meaning, when an actor writes it down or reads it. When decoded as Newton intended, it is true enough to be called knowledge. In their wisdom, scientists used that knowledge to predict the forces needed to put man on the moon. (By contrast, F = M/A is data that, when decoded, turns out to be false information.)

Despite the distinctions drawn above, in practice, because people assume data is decoded with the same meaning it had when it was encoded, the terms information and data are used interchangeably.
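The encode/decode distinction can be illustrated in computing terms, where a character string stored as bytes is data, and the decoded text carries the information. A minimal sketch:

```python
# Data: a physical structure (here, bytes in memory) in which information is encoded.
data = "F = M * A".encode("utf-8")

# Decoding the data, as the encoder intended, recovers the meaningful formula.
information = data.decode("utf-8")
print(information)  # F = M * A
```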

In business and software architecture, human and computer actors maintain data that records the state of real-world entities (customers, orders, payments etc.) that a business must remember for its processes to succeed. The entities and how they relate to each other may be represented in a concept/knowledge graph.

Conceptual entity-relationship (ER) models

Conceptual ER models, which classify things of interest (entity types) in a domain of knowledge, relate to ideas about sets, types and predicate logic that emerged in the early 1900s.

Paul Williams has summarized how, from the 1960s to the 1970s, several people developed and/or popularized ER modelling techniques. In historical order, he mentions Charles Bachman, J. Barrie Leigh, A. P. G. Brown, Peter Chen, Clive Finkelstein and James Martin.

Enrico Franconi has shown how ER models, though drawn to define data structures stored in databases, can be seen as concept/knowledge graphs that relate variables or entity types in predicate statements.

Edgar Codd introduced the relational model for structuring the data a business must remember. An application of his model is a relational data model. If you name the relationships between the tables in a relational data model, you reveal some behavior of the system, and so turn the data model into a conceptual ER model of the system of interest.

Predicate sentences that relate entity types

General relational theory      A subject <is related to> an object.
Relational data model          A row in table A <is associated with> a row in table B.
Conceptual ER model            One of type A <is associated with> one of type B.
Specific domain model          An Order <was placed by> a Customer.
Concrete instantiation         Order 1234 <was placed by> Mr Bean.

Figure 1.6

In the bottom row above, substituting specific values for the types or variables turns an abstract predicate statement into a concrete proposition about particular entities.
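That substitution can be sketched in Python, treating the abstract predicate as a function of variables and the concrete proposition as its result. The function name is hypothetical.

```python
# Abstract predicate at the type level: an Order <was placed by> a Customer.
def was_placed_by(order_id, customer):
    return f"Order {order_id} <was placed by> {customer}"

# Substituting specific values yields a concrete proposition about particular entities.
print(was_placed_by("1234", "Mr Bean"))  # Order 1234 <was placed by> Mr Bean
```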


Rules can be specified in various ways, including constraints on variable values, and pre and post conditions of activities.

A data model is a passive structure that relates the state variables of an activity system. Some rules can be documented as constraints that limit the values of variables.

·        Domain constraint: the value of a variable (Birthday) is limited to a type (a date).

·        Uniqueness constraint: the value of a variable (Name) is unique to an entity instance, and is a candidate identifier for it.

·        Referential integrity constraint: the value of a variable (Parent Name) matches the value of another variable (Name).
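The three constraints can be sketched as checks over a hypothetical set of records. This is a minimal Python illustration of the idea, not a database implementation.

```python
from datetime import date

# Hypothetical records keyed by Name; the dict keys enforce the uniqueness constraint.
people = {
    "Ann": {"Birthday": date(1960, 1, 1), "Parent Name": None},
    "Bob": {"Birthday": date(1985, 6, 2), "Parent Name": "Ann"},
}

def check(people):
    for name, row in people.items():
        # Domain constraint: the value of Birthday is limited to the type "date".
        if not isinstance(row["Birthday"], date):
            return False
        # Referential integrity: Parent Name matches the Name of some entity instance.
        if row["Parent Name"] is not None and row["Parent Name"] not in people:
            return False
    return True

print(check(people))  # True
```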

Here, our interest is not only in the state of a system but also in the actors and activities that change a system’s state over time.

Cause: an input, event or condition that triggers an effect.

Effect: an activity, state change or output triggered by a cause.

Rule: a law that constrains and determines the effect of a cause, ranging from the laws of physics (such as Newton’s F = M * A) to business rules.

The relationship Customer <places> Order implies an activity. An activity may be defined in terms of preconditions (which must be true for it to work) and postconditions (which should be true after it has finished). According to Hoare logic, if the preconditions are true, and the activity succeeds, then the postconditions will be true. E.g.,

Activity: Place Order.

·        Input message: Order details.

·        Preconditions: Order value + customer debt < customer credit limit.

·        Postconditions: An order recorded with the status “to be fulfilled”.

·        Output message: Fulfilment instructions.

Some include inputs in the preconditions, and outputs in the postconditions; others distinguish them as above.
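The Place Order activity can be sketched in Python, with the pre and post conditions written as assertions in the Hoare-logic style. The names and numbers are hypothetical.

```python
def place_order(order_value, customer_debt, credit_limit, orders):
    # Precondition: must be true for the activity to work.
    assert order_value + customer_debt < credit_limit, "precondition failed"
    orders.append({"value": order_value, "status": "to be fulfilled"})
    # Postcondition: should be true after the activity has finished.
    assert orders[-1]["status"] == "to be fulfilled"
    return "Fulfilment instructions"      # output message

orders = []
print(place_order(100, 50, 500, orders))  # Fulfilment instructions
```

If the precondition fails (the order would breach the credit limit), the activity does not run and no order is recorded.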


CHAPTER 2: Basic system thinking ideas

If everything is a system, then systems thinking is just thinking. And if a system is merely a collection of related parts (bounded and defined by a thinker with some interest in mind) then what, larger than a quark, is not a system?

System: a collection of inter-related parts that transform inputs into outputs and/or interact to produce results or effects that no part can produce on its own, and whose parts are orderly in some way.

Structural order: parts are arranged according to, or correlated with, a given pattern, structure or sequence.

Behavioral order: parts interact according to some rule(s).


Aside: In the physics of a many-particle system, order signifies symmetry or correlation to a pattern, and disorder designates the absence of any symmetry or pattern. A crystal has structural order, a cloud of gas doesn’t. However, the cloud has behavioral order, since its particles collide in rule-bound ways.


A rule-bound activity system has a particular "way of behaving". In a whirling flock of starlings, the parts (starlings) interact in a regular and repeatable way, which produces “emergent effects” that no part can produce on its own. Moreover, the whole (flock) may change state suddenly or unexpectedly.

Note that the idea of a collective producing results that individuals cannot is so appealing that management science gurus often use the terminology of system theory, though not necessarily the concepts.


Two kinds of emergence may be distinguished. Systems thinkers may address how systems are formed in the first place, which may be by design, or by chance.

Emergent system: one that arises in the evolution of the universe, either from disorder or from modifications to a prior system generation. E.g., our solar system emerged from an apparently disorderly cloud of interstellar gas and dust. Billions of years ago, it settled into the regular way of behaving it has repeated, near enough, ever since.

More usually, systems thinking means observing or envisaging how the parts of a whole interact with each other and things in their wider environment to produce emergent properties, which are sometimes construed to be purposes.

Emergent property: an effect, result or ability of a given system that arises from interactions between its parts or actors. E.g.

·        the forward motion of a yacht when a wind passes over its sail.

·        the printed pages produced from paper and ink by a printer.

·        the response to invoking an operation in an interface

·        the response to requesting a service in a service level agreement

·        any line of behavior graph that shows how a variable's value changes over time - how it increases exponentially, goes up and down, or stabilizes.

Emergent properties may be surprising, or not. A designed system is purposefully designed to produce them. For example, the requirements for a ridden bicycle may be defined in terms of emergent properties.


A ridden bicycle. Each required emergent property emerges from interactions or feedback between parts such as these.

·        The rider’s legs and feet, the pedals, the rotating parts of the drive mechanism, the wheels, etc.

·        The rider’s arms and hands, the handlebar, and the shaft to the axle of the front wheel.

·        The rider’s left-right lean, the direction of the handlebars, and the centrifugal force produced by rotating wheels.

·        The rider’s bottom, hands and feet, the saddle, the suspension, the tyres, the spokes, the handlebars and pedals.

·        The rider’s thumb and the bell on the handlebars.


Remember, a system is an abstraction that represents an entity or situation. You choose the boundary of the entity, and the granularity of its parts, with some interests or requirements in mind. The parts of the system above interact (holistically) to produce the required emergent properties.

System designers may:

·        Replace one rider by another, without changing the properties above.

·        Remove the warning bell with no effect on the other properties above.

·        Attend to one property and one subsystem at a time.

·        Trade-off between properties, such as comfort and speed.

·        Ignore the internals of what seem atomic parts (legs, pedals, ball bearings)

·        Be completely ignorant of a rider’s cardio-vascular system, or the internal structure of a ball bearing.


Holism: considering how things interact (in a whole) to produce emergent properties, effects or outputs.

Holism does not mean an emergent property requires every part of a system. You could potentially define a different system for each emergent property of a bicycle. However, a bicycle manufacturer will likely say the requirements for the whole “system of interest” include all the emergent properties above, and so include all the parts mentioned.

In the case of the ridden bicycle, and in general:

·        The whole can do some useful things without every part.

·        A part (here, the rider) may do useful things in other wholes.

Note that wholes may overlap. Not only can one whole contain many parts, but also, one part may appear in any number of wider wholes. For example, you can see a person as participating (with more or less commitment) in many different social entities, overlapping and nested, large and small.

In defining a system, you zoom in and out. You analyze the whole and synthesize (or relate) its parts. You can't analyze until you have identified a whole. Conversely, you can't synthesize until you have identified some parts.

Other things holism does not mean

Remember: "The art of model-building is the exclusion of real but irrelevant parts of the problem."

Holism is not wholeism (considering every conceivable aspect or element of a thing). Wholeism is impossible. Even a grain of sand is beyond our full comprehension. We must exclude most of what is conceivably knowable about a thing. We have no other way of looking at or understanding the world.

Holism does not mean limitlessly zooming in (to sub-atomic particles) or zooming out (to encompass the whole universe). A system’s boundary, and the granularity of its atomic parts, are whatever the systems thinker decides.

Holism does not imply “wholesale” change. The history of human society suggests most attempts to replace one regime by another have nasty unintended consequences.

Holism does not mean ignoring parts. Ackoff is often quoted as saying "improving a part does not necessarily improve the whole" and "you might focus on the interaction of the parts”. But he also spoke of focusing on the contribution a part makes: "Don't change the part because it makes the part better without considering its impact on the whole". Taken as a whole, his advice adds up to - do what you think is best for the whole.

Modern approaches to improving a system are based on agile incremental development, and the "doctrine of marginal gains". The principle is that major improvements emerge from making many small incremental improvements to parts. To improve the performance of a cyclist, we can do well to focus separately on the fitness of the cyclist, the weight of the bicycle frame, and the aerodynamics of the wheels.

On reductionism as a myth

Some social systems thinkers who promote “holism” also condemn scientists as being “reductionists”.

"The anti-reductionist stance [of the scientist Philip Anderson] is not some uninformed and poorly thought-out gibberish condemning science that unfortunately one finds too much of."  Commentator on Anderson’s paper

Not only do scientists take a holistic view of systems, but it is hard to understand what it means to condemn them for being reductionist.

Reductionism (1): reducing all to physics; explaining biological, psychological and social phenomena in terms of the rules governing interactions between atomic particles. Who does that? Surely no scientist tries to explain an organism or society in terms of atoms or inter-atom collisions?

Reductionism (2): analysing and describing one part of a whole (say, one atom in a molecule, cell in a body, wheel of a bicycle, person in a family, buyer or seller in an economy) without considering how parts interact. Surely either the part studied is the whole system of interest, or else scientists do consider how parts interact?

System state

“The most fundamental concept in cybernetics is that of ‘difference’, either that two things are recognisably different, or that one thing has changed with time.” (Ashby).

When modelling a system, we name the different things in it, their attributes and the relationships between them. Attributes are variables that can represent qualities, quantities and states.

Variable type: a quality or type (species, color), or quantity (weight), or state (on and off; egg, maggot and fly).


Aside: what look like two types of thing in one context (say, “caterpillar” and “butterfly”) may look like two states of one thing in another. And a type in one context (say “human”) can be an instance of a type in another (one member in the set of named species).


State variables are important to systems thinking. The state of a dynamic system changes over time, either under its own internal drive or in response to inputs.

State: the current values of a system’s state variables

Microstate: the state of individual parts, their values or quantities (say, the mass of each organism).

Macrostate: the state of aggregate system-level properties that emerge from system behavior (say, total biomass).

Attractor: a state towards which a system will move, from a variety of starting states; it will stay in that state until affected by a major disturbance.

Basin of attraction: a region of a system’s phase space (its potential states) in which the system will tend to fall towards the attractor.

Strange attractor: an attractor with a fractal structure.
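A minimal illustration of an attractor and its basin of attraction, using a hypothetical update rule: wherever the state starts in the basin (here, any x > 0), repeatedly applying the rule x -> (x + 2/x) / 2 pulls the system towards a single attractor state (the square root of 2).

```python
def settle(x, steps=50):
    # Repeatedly apply the update rule until the state settles.
    for _ in range(steps):
        x = (x + 2 / x) / 2
    return round(x, 6)

# A variety of starting states all fall towards the same attractor state.
print({settle(s) for s in (0.5, 1.0, 3.0, 10.0)})
```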

Open linear systems

This section goes along with the common idea that we can contrast open linear systems with closed non-linear systems.

Open system: a system connected to external entities by consuming inputs or producing outputs.

Linear system: typically, a system characterized as transforming inputs into outputs in a predictable way (other meanings are possible).

An open system transforms inputs into outputs. To define it, you must separate what is inside it from what is outside. To define its inputs and outputs is to define its boundary – to encapsulate the activities it performs - to define the interface between the system and its environment.

A conventional way to define an open system is to define its suppliers, inputs, processes, outputs and consumers/customers (SIPOC). To design a system, we begin by defining external entities who are impacted and the outputs they require, before defining actors and activities inside the system.  Consider how a material processing system can be represented in what is known as a SIPOC diagram as shown below.


A material processing system:

Suppliers → Inputs → Activity → Activity → Outputs → Customers


Or consider this information/energy processing system.


A microphone:

Speaker → Sound → Activity → Activity → Signal → Device


Sometimes, the consumers of outputs and suppliers of inputs are different entities, as above. Other times, the consumers of outputs and suppliers of inputs are the same entity. Consider for example the user of a telephone, or a word processor.


Word processor (one external entity supplies the inputs and consumes the outputs):

→ Inputs → Activity → Activity
← Outputs ← Activity ← Activity


Activity systems can be triggered to perform activities by

a)      internal events or state changes and/or

b)     inputs (open systems only).

Activities can produce two kinds of effect or result

a)      internal events or state changes and/or

b)     outputs (which change the state of the external environment).

The results of activities can affect

a)      actors within the system (members or employees) and/or

b)     actors outside the system (consumers or customers).

Any actor may find the effects or results of some activity to be beneficial, harmful or neutral. The aim of a system designer is to produce "desired effects", that is, effects that some actors (sponsors or other stakeholders) find beneficial. However, a system (say, an atomic power station) may produce a mixture of beneficial and harmful effects, about which different actors may have different opinions.

Closed non-linear systems

Closed system: a system that behaves independently of its environment or wider world.

Non-linear system: typically, a system characterized as “self-organizing” in the sense that its emergent properties emerge from feedback loops between its parts (other meanings are possible).

A closed non-linear system contains parts or actors that interact to produce results or effects that no part can produce on its own. The parts or subsystems are related in feedback loops. The table below is a generalised illustration.


[Closed system, as shown in a causal loop diagram: Stocks A, B, C and D joined in a loop, each pair of adjacent stocks linked by two opposed flows labelled “increases” or “decreases”.]


To draw a system dynamics model, we define stocks in the system and how they interact. No external entity is defined, because as soon as we identify a stock that is impacted, it is drawn inside the system of interest.

Loop: a physical or logical path that returns to the same place in space or state in a process; it can be continuous, or else a chain of discrete elements.

Feedback loop: a loop that returns from output to input, or effect to cause. Consider how water falls in rain drops from the clouds and rises in evaporation from the ocean, thus balancing the volume of water in each.

The effect of two cause-effect flows in a feedback loop can increase, decrease or maintain the size of a stock, population or resource.

If both flows are positive (+/+), then the loop has an amplifying or reinforcing effect on the stock, which can lead it to grow exponentially as in this example:





[Amplifying loop: “Infected people” and a second stock, each linked to the other by an “increases” flow.]



Here is another example of an amplifying feedback loop.



Sea water evaporating  increases →  Hurricane wind speed
Hurricane wind speed  increases →  Sea water evaporating


If one flow is positive and the other is negative (+/-), then the loop has a dampening or balancing effect, which may lead a stock to oscillate around an “attractor” state, as in homeostasis. Consider how wolf and sheep populations interact in a loop that represents discrete birth and death events in the lives of individual wolves and sheep.





Sheep  increases →  Wolves
Wolves  decreases →  Sheep




The network of feedback loops in a system dynamics model can cause stocks, populations or resources to increase exponentially, go up and down chaotically, or stabilize homeostatically. A physicist might call these “lines of behavior” non-linear, complex, chaotic or self-organizing.
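The two loop polarities can be sketched as simple iterations over a stock variable. The rates and numbers here are hypothetical.

```python
def reinforcing(stock, rate=0.1, steps=10):
    # +/+ loop: the flow amplifies the stock, so the stock grows exponentially.
    for _ in range(steps):
        stock += rate * stock
    return stock

def balancing(stock, attractor=100.0, rate=0.5, steps=50):
    # +/- loop: a dampening flow pulls the stock towards an attractor state.
    for _ in range(steps):
        stock += rate * (attractor - stock)
    return stock

print(reinforcing(100.0) > 200.0)   # True: exponential growth
print(round(balancing(10.0)))       # 100: the stock settles at the attractor
```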

The idea of producing effects or results without any overarching organizer appeals to sociologists and management scientists. The trouble is, some use the terms of the physical sciences with different meanings when discussing a social entity, such as a business.

Looking inside a system

The universe is an ever-unfolding process, from the big bang onwards. From the continuous expanse of space, we carve out discrete structures - entities and actors. And from the continuous flow of change over time, we carve out discrete behaviors - events and activities. This table illustrates the difference between structures and behaviors.


Structure examples | Behavior examples
Solar system |
Motor cycle | Two-stroke cylinder cycle
| Billing process


Although some refer to a passive classification or organization structure as a system, almost all discussion under the headings of systems thinking, enterprise, business and software architecture is about activity systems that feature both structures and behaviors.


System context | what exists in space | what happens over time
1950s Cybernetics | state variables | state changes
1960s System dynamics | stocks or populations | stock level changes
1970s Soft systems methods | | activities in processes
1980s Structured systems analysis | |
1990s Unified Modelling Language | |
2000s ArchiMate modelling language | components & interfaces | processes & services
2010s RESTful software architecture | web resources (w URIs) | HTTP operations
2020s Fractal Enterprise Modelling | |




The structure/behavior distinction is related to some other distinctions we naturally use to describe the existence of phenomena in space and time: instantiation/occurrence, persistent/transient, enduring/fleeting.

Structure: a thing in space that can range from an atomic particle, through a database or organic system to a solar system.

Part: a structure within a system.

Passive structure or part: a structure that does not act but may be acted on (say, a variable, the chemist’s periodic table, or a database schema).

Active structure or part: a structure that acts, exhibits behavior (say, a termite, or a steam engine).

Behavior: either a) an activity or process, or b) the life history of a variable or structure.

Line of behavior: the trajectory of a quantitative variable; a line on a graph showing the life history of a variable, how its value goes up and down, perhaps chaotically, perhaps staying in an attractor state or moving between attractor states.

Given a model of a system’s dynamics, a computer can animate it, and show its lines of behavior on a graph. Two or three variables can be represented as x, y and z coordinates in space. The progress over time can be drawn as a two or three-dimensional shape.
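
A minimal sketch of that idea, using an invented damped-oscillator model: stepping two state variables and collecting (x, v) points that, plotted as coordinates, trace the system’s line of behavior toward an attractor state:

```python
def line_of_behavior(x=1.0, v=0.0, dt=0.05, steps=2000):
    """Step a damped oscillator; each (x, v) pair is a point on the graph."""
    points = []
    for _ in range(steps):
        a = -x - 0.2 * v        # restoring force plus damping
        v += a * dt
        x += v * dt
        points.append((x, v))
    return points

trajectory = line_of_behavior()
x_final, v_final = trajectory[-1]
# Drawn in two dimensions, the points spiral in toward the attractor
# state at (0, 0).
print(abs(x_final) < 0.001 and abs(v_final) < 0.001)   # True
```

Feeding the points to any plotting tool draws the two-dimensional shape the text describes; a third variable would give the z coordinate.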

Recursive composition

Systems and their parts are composable/decomposable into bigger/smaller systems and parts.

System composition: assembling parts or systems into a larger (eco)system.

System decomposition: dividing a system into smaller parts or (sub)systems.

The systems within a system are often called subsystems or components. And systems that contain systems are sometimes called ecosystems.
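
Recursive composition can be sketched as a part that may itself contain parts. The class and part names below are illustrative, not the book’s:

```python
class Part:
    """A structure within a system; may itself be a system of parts."""
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)   # empty for an atomic part

    def decompose(self):
        """Yield this part and every part at every level of granularity."""
        yield self
        for p in self.parts:
            yield from p.decompose()

engine = Part("engine", [Part("cylinder"), Part("piston")])
motorcycle = Part("motorcycle", [engine, Part("wheel"), Part("wheel")])

print([p.name for p in motorcycle.decompose()])
# ['motorcycle', 'engine', 'cylinder', 'piston', 'wheel', 'wheel']
```

The same structure works upwards: wrapping `motorcycle` in a larger `Part` composes it into an (eco)system.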

Unfortunately, the field of systems description is terminology torture, because some use the same term for a concept at every level of granularity, but others use different terms.

E.g., a process (however long or short) is a process. Nevertheless, you’ll find processes appear in systems analysis and design methods under different names - such as value stream, procedure, activity, action, use case, epic, user story, operation and method. You’ll also find them represented in different kinds of diagram - such as value stream diagram, flowchart, and interaction/sequence diagram.

Approaches to activity system thinking

Systems thinkers model entities as systems, with a view to understanding, predicting or directing their behavior. The entities may be material or social objects, situations or phenomena.

Since the 1950s, many approaches to modelling a system’s structures and processes have been developed. In a business activity model, processes depend on other processes. In a system dynamics model, structures interact with other structures. In Maturana’s “autopoietic” life, processes maintain structures which perform processes which maintain structures and so on.

See Chapter 6 for more


CHAPTER 3: Thinking

For sure, there is a reality out there. Systems thinking is largely about what we know of reality, and how we perceive and describe it. The evolution of systems thinking might be distilled as follows.

1.      Animals evolved to sense phenomena

2.      Animals evolved to retain and recall models of phenomena

3.      Social animals evolved to share models using symbolic languages

4.      We humans evolved oral languages to share models.

5.      We developed written languages to record models.

6.      We learnt how to typify things (entities and events) in models.

7.      We learnt ways to specify rules that constrain what things are and can do.

8.      We developed system modelling techniques.

A philosophical view of thinking

A systems thinker doesn’t need to read philosophers. Nevertheless, systems thinking does raise issues that philosophers have debated for centuries. This work refers in passing to Descartes, Nietzsche, Kant, Chomsky, Locke, Peirce, Popper, Frege and Hilbert.

The universe existed for nine billion years before the earth was formed. At that time, there was no knowledge, description, model or classification of things on earth. The earth rolled on for a while longer without life on it. Eventually, animals evolved to perceive and describe things in the world, because doing that helped them survive and thrive. Here, cognition is not a particular biological process, but rather any process by which an organism can recognize some phenomenon in its environment, observe or envisage it, and create or use a description of it.

Rene Descartes famously stated: “I think, therefore I am”. Here, we assume more. We assume we exist as animals, along with other things we can observe and envisage out there in the real world, of which we can retain knowledge.

As Humberto Maturana said, “Knowledge is a biological phenomenon". Which is to say: all thinkers, thinking, thoughts, words and signalling gestures are phenomena that have emerged out of biological evolution.

Obviously, brains do not record and process information like computers do. Still, we do remember things well enough to recognise them. Somehow (mysteriously) a memory is encoded biochemically. Whether it is in one cell, many cells, interactions between cells, or the whole brain is irrelevant.

Your mental model of a dollar bill is very, very vague. Nevertheless, when you see one, you recognise what it is. In other words, you have a mental model of a dollar bill that is good enough for practical use. If that mental model is not to be called "information" or "knowledge", then what is it to be called?

Certainly, a mental model is fuzzy, fragile, forgettable and flexible. It is influenced by emotions, confusable with similar mental models, very incomplete and somewhat inaccurate. It may be changed every time you bring it to mind. But biological evolution requires only that you can recognise food, friends and enemies when you see them. And evidently, you do remember them often enough and well enough. What form a mental model takes, and how thinking works at a biochemical level, are irrelevant.

In the field of mathematics, whether we think about reality directly, or about models of it, has long and famously been debated.

·        Frege believed maths is carried out at the level of thoughts about real entities, rather than models of them; he believed models are imperfect representations of thoughts.

·        Hilbert said that mathematical thinking is manipulation of models, regardless of what the entities are in reality (so whether and how far an entity corresponds to a model has to be verified).

The Stanford Encyclopaedia of Philosophy says Hilbert is now regarded as having won the debate. Geometry does not address the whole of a thing. Rather, it addresses only those features that are describable by geometry.

Hilbert concluded that mathematical thinking is manipulation of models. Since observing and envisaging things involves creating and using mental models of them, all four processes are inextricably intertwined in thinking.

From the idea that thinking involves creating and using models (mental, oral, written or honey bee dances) a new triadic model of thinking emerges.

·        Thinkers <observe & envisage> Phenomena (of any kind)

·        Thinkers <create & use> Models (of any kind)

·        Models <represent> Phenomena

In academic terms, this chapter touches on epistemology and semiotics.


Humberto Maturana said: "Knowledge is a biological phenomenon". In other words, before there can be descriptions of the world, there must be thinkers who can describe it. There were no concepts or descriptions before there were conceivers or describers. There were no conceivers or describers before there was life. Our abilities to conceptualise and describe what we observe emerged out of biological evolution.

To observe a thing, an animal requires a sensor to detect it, then create and transmit a signal. The message may be very slight. Noses can only convey the odour of a thing. Ears can convey the sound and location of a thing. Eyes can convey the location of a thing, and features of its surface. However slight the information conveyed, all observations encode a model of a thing in a message.


Aside: If a thinker’s recall of a model changes it (as in false memory syndrome), then model use is entangled with model creation.


Having observed and memorized a route through your house, a mouse will never forget it. Consider the examples below of what animals know of reality, and how they describe it.


From biochemistry to symbolic language

Biological example

Biochemical signals

Flies smell food

Sensor cells <sense> Food items

Sensor cells <create & use> Biochemical signals

Biochemical signals <represent> Food items

Fruit flies <smell> Fruit odours

Fruit flies <create & use> Biochemical signals

Biochemical signals <represent> Fruit odours


Mice remember routes through a house

Animals <observe> Things

Animals <form and recall> Memories

Memories <represent> Things

Mice <observe> Routes

Mice <form and recall> Route Memories

Route Memories <represent> Routes


Birds sound alarm calls.

Social animals <observe> Things

Social animals <create & use> Messages

Messages <represent> Things

Birds <observe & envisage> Presence of predators

Birds <create & use> Alarm calls

Alarm calls <represent> Presence of predators


Honey bees describe pollen locations

Observers <observe & envisage> Aspects of reality

Observers <create & use> Descriptions

Descriptions <represent> Aspects of reality

Honey bees <observe & envisage> Pollen locations

Honey bees <act in and read> Dances

Dances <represent> Pollen locations


The examples show not only that non-human animals have knowledge, but also that neither knowledge nor communication requires a verbal language.

The game of thinking

Thinking without words

A general principle of coding is this: to store knowledge, an actor encodes it in a model; to retrieve the knowledge, an actor decodes the model by reversing the process used to encode it.
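
A minimal sketch of that principle, using JSON as the model notation (an arbitrary illustrative choice, not the book’s):

```python
# Store knowledge by encoding it in a model; retrieve it by reversing
# the encoding. The "knowledge" here is an invented example.
import json

knowledge = {"blackberries": "taste nicer when black and shiny"}

model = json.dumps(knowledge)    # store: encode the knowledge in a model
retrieved = json.loads(model)    # retrieve: decode by reversing the encoding

print(retrieved == knowledge)    # True
```

Whatever the notation - biochemical, verbal or digital - retrieval works only if the decoding reverses the encoding.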

Without words, animals must manipulate models (encode, decode and translate them) to remember useful things and communicate.

Without words, animals must remember simple relations in some mysterious biochemical form. Say, “my mother will help me”, or “blackberries taste nicer when they are black and shiny”. Notice, we have no option here but to express such ideas in words.

Without words, animals can communicate. When a bird sees a fox, a series of internal model-to-model translations ends with the sounding of an alarm call symbolizing the presence of a predator.

So, the premise of "symbolic AI", that verbal language is a precursor of thinking, makes no more sense to a biologist or psychologist than John Locke’s idea that our mind is a blank slate when we are born.

The importance of language to human thought

This biology-based naturalist philosophy differs from that of Nietzsche, Kant, Wittgenstein or Chomsky. You might read it as a rejection of not only metaphysical and theological philosophies, but also linguistic ones.

However, it is evident that verbal language changed the game of thinking. The evolution in humans of the ability to translate mental models into and out of verbal models started a revolution in the evolution of thinking.

Given words and grammar, we can connect ideas together. In thought and speech, we can sequence and relate words. We can communicate complicated ideas, and ask others to verify them. And given writing, we can build much larger, more elaborate, systems of thought, both on our own and in collaboration with others.

So, while we do think in non-verbal ways, the kind of thinking needed to write this sentence certainly involves thinking in words. Moreover, it seems likely that a human-like AI must develop human-like linguistic skills. It will surely need the ability to interrogate other intelligent beings – be they natural or artificial.

The interplay between internal and external models

If we want a concept to exist outside of our mind, we translate it from a private mental model into words other actors can hear. Our words will convey our meaning to any actor who can decode them.

Our mental models are fragile. Our oral models are forgettable. The development of writing enabled more stable, more persistent, more shareable, and more verifiable models of what we observe and envisage to be stored. Moreover, written models can be incrementally extended into a structure that is larger, more elaborate and more internally consistent than any human mind can hold or comprehend.

The word “force” used to mean simply something that has a push or pull effect. Isaac Newton introduced the idea that Force = Mass * Acceleration. Imagine that Newton died as soon as he first translated his mental model into a written model. Then, hundreds of years later, you are the first to discover his writings. Knowing the vocabulary and grammar he used, you instantly understand the concept Newton had in mind. So, where was the concept for all those years?

My answer is this: All knowledge, thinkers, thinking, thoughts and words are biological phenomena. Until there were conceivers, there could be no concepts. Concepts exist only where conceivers create or put them. Those places can be minds, memories or messages of any kind. We share concepts by communicating them and copying them. We find meanings in the acts of creating and using concepts. But if all minds, memories and messages were destroyed, then all concepts and meanings would disappear from the universe.

That is a controversial answer. And you don’t have to believe it to read on.


CHAPTER 4: A new model of thinking

An analysis of systems thinking should address how we think. Epistemology is the study of knowledge, with regard to its methods, validity, and scope. Semiotics is the study of signs and symbols and their use.

Old triadic models of thinking

We speak of conceptualizing things in minds, and representing things in words. The challenge below is to place five concepts (thinkers, thinking, thoughts, things and words) in the following five triadic views of semiotics.


Ogden and Richards: Semiotic Triangle

Symbols | stand for Referents
Referents | are referred to by References
References | are symbolized by Symbols



Above, words symbolize thoughts, which seems to imply that thoughts symbolize things.


Charles Peirce: Triadic sign relation

Signs | represent Objects
Objects | are referred to by Interpretants
Interpretants (Thinkers? Thoughts?) | understand Objects from Signs


“What Peirce means by the interpretant is difficult to pin down. It is something like a mind, a mental act, a mental state, or a feature or quality of mind.” Stanford Encyclopaedia of Philosophy.


Karl Popper: Three worlds view

Products of the Mind | describe/predict Physical Realities
Physical Realities | are referred to by Mental Worlds
Mental Worlds | produce Products of the Mind



Above, Popper’s mental worlds might be mapped to any of thinkers, thinking and thoughts.

The Object Management Group have published a standard called the Semantics of Business Vocabulary and Business Rules. It formalizes the use of natural language for the purposes of conceptual modelling, and the sharing of meanings. It presumes a triad of the kind below.



SBVR Triad

1 Words | used in thinking and discussing 2 and 3
2 Real-world things | are conceptualized by Concepts
3 Concepts | held in our minds



Above, real-world things include representations (say, as records in information systems), yet exclude concepts in minds. Why? Our mental models may be fuzzy and fragile, but they are just as real as spoken or written words, or drawings.

To say that thinkers conceptualize things in minds and represent things in messages is to imply these are fundamentally different processes. Yet both may be viewed as encodings of knowledge. The next triad combines them.


Pierre Bourdieu: three relations

[Figure: three relations among things, thinkers, and models (Thoughts, Words).]







A new triadic model of thinking

Following Hilbert’s view that thinking is manipulation of models, and Bourdieu’s three relations, this work introduces a triadic relation that is more readily understood and used than the older ones above.



Our triad

1 Models (Thoughts, Words) | in minds, memories & messages
2 Phenomena | represented in 1
3 Thinkers | observe & envisage 2, create & use 1



For example:

·        Architects <observe & envisage> Buildings

·        Architects <create & use> Building Models (mental, graphical or physical)

·        Building Models <represent> Buildings

In other words, to observe or envisage a thing is to create and/or use a model that carves it out of the wider universe, and identifies some of its qualities.

Thinker: an actor who observes and envisages phenomena.

Phenomenon: something that exists or happens, especially a concrete system.

Charles Sanders Peirce suggested signs or models can take three forms, defined here as follows.

Model: something that represents or signifies a phenomenon:

·        an icon (like a painting or statue),

·        an indicator (a symptom, like a footprint or a smell) or

·        a symbolic model (biochemical, verbal, graphical, other).

Thinkers may be characterized as having the ability to create and use models of reality, and manipulate them for practical purposes. Most models are hugely abstracted from the reality they represent. Michelangelo’s statue of David represents an infinitesimal fraction of the features of the real-life David.

The focus here is on symbolic models – biochemical or verbal. Animals are born with some knowledge, and acquire some by copying what parents do, and from trial-and-error experience. Symbolic verbal language and bigger brains evolved in humans because they proved so useful. They massively increased our ability to share knowledge, communicate about things in sophisticated ways, and create new knowledge.

From consideration of the above, a new triadic model of thinking emerges.

·        Thinkers <observe & envisage> Phenomena (of any kind)

·        Thinkers <create & use> Models (of any kind)

·        Models <represent> Phenomena

Note that since thinkers and models are themselves phenomena, they too can be modelled by thinkers.

The innovation here is not the nouns, it is the verbs that connect them.

·        To observe is to perceive something and create a mental model that symbolizes it. Say, a fruit fly’s sensor cells smell the odour of some fruit, and create a biochemical signal symbolizing the presence of that fruit.

·        To envisage is to recall (“bring to mind”) a mental model, or create a new mental model of something that might be materialized in the future. Say, a mouse recalls a route through your house and uses it to navigate. Or say, an architect draws a banana-shaped building.

·        To create or use a symbolic model is to encode or decode it.

Constructing triads with different nouns, but the same verbs, shows how robust the model is.


Variations of the epistemological triad














[Table: variations of the triad in which Phenomena are <observed and envisaged by> Thinkers, who <create and use> Models, Descriptions or Types.]








To equate models, descriptions and types may seem strange to you at first, but will turn out to be important later.

Nobody knows how thinking works. But evidently, observing and envisaging involve creating and using models. And note that models may be modified on translation, accidentally or purposefully.


CHAPTER 5: A new model for systems thinking

To describe the flocking behavior of starlings as a system is to carve it out of the universe and model those features that make it a system. The model may be called an abstract system, and the phenomena in which entities instantiate it (near enough to satisfy observers) may be called a concrete system.

Systems thinker: an actor who observes and envisages concrete systems, by creating and using abstract systems.

Abstract system: a model of actors and/or activities, which represents any concrete system that instantiates it.

Concrete system: a performance or instantiation of an abstract system by one or more material and/or social entities.

Systems thinking may be represented using our triadic model thus:

·        Systems thinkers <observe & envisage> Concrete systems

·        Systems thinkers <create & use> Abstract systems

·        Abstract systems <represent> Concrete systems performed or instantiated by entities.

Systems thinkers do not address the whole of an entity (a common misinterpretation of holism). Rather, they address those features describable in a system model.

A conversation – part one

A systems thinking guru: There is much confusion over whether systems are tangible objects/entities/phenomena and/or mental models that explain how something works.

Graham: Indeed, there is much confusion. However, early systems thinkers (Ashby, Checkland, Forrester, Ackoff etc.) all viewed systems as “soft” in the sense they are defined by observers. Not only did they distinguish

a)      abstract systems - mental models in mind or writing, from

b)     concrete systems - instantiations of abstract systems

but also, they allowed that observers may abstract different (potentially conflicting) systems from observing one material entity or situation.

The separation of abstract models from concrete phenomena has been fundamental to systems thinking for many decades. Ross Ashby made the point strongly in his “Introduction to Cybernetics” (1956).

3/11 “At this point we must be clear about how a “system” is to be defined. Our first impulse is to point at a [physical entity] and to say “the system is that thing there”. This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems. … Any suggestion that we should study “all” the facts is unrealistic, and actually the attempt is never made. [We must] pick out and study the facts that are relevant to some main interest that is already given.” (Ashby’s “Introduction to Cybernetics”)

Guru: Peter Checkland hit the nail on the head when he said: “The use of the word “system” is no longer applied to the world, it is instead applied to the process of dealing with the world. Experience shows that this distinction is a slippery concept which many people find very hard to grasp. Probably because embedded in our habits is the way we use the word system”. Having spoken with Ackoff years ago, I can say he didn’t differ from Checkland.

Graham: In 1971, Russell Ackoff distinguished abstract systems from the concrete systems that they typify, describe or represent. His abstract system is a mental construct. His concrete system is a physical instantiation of the abstract system.

Guru: Geoffrey Vickers, an eminent systems thinker, talks about resisting our urge to view business organizations or social entities as “systems”. Systems are mental models or theories rather than material entities, or social entities.

Graham: To put it another way, taking a soft systems approach implies the systems we discuss are the ones we can represent in system models. The remainder of what happens in the universe is not well called a system – unless and until we understand how to bound it and model it as a system.

Note that a model is a model, regardless of what form it takes, and where it is, in a mind, a memory or a message. As Ashby pointed out, in the process of forming and communicating a model we translate it from one form to another, often several times over.

Two perspectives of a human organization

When reading what is posted under the heading of “systems thinking”, you need to distinguish between discussion of:

·        abstract models of concrete activity systems that actors play roles in, and

·        the management and motivation of actors in a particular social entity.

In the 1950s, the idea of a general system theory was taken up by “management scientists” concerned with the structures and behaviors of socio-technical entities that employ human actors.

Kenneth Boulding wrote what is probably the first article on applying general system theory to management science. He questioned whether the “parts” of a social or business system are actors or the roles they play.

“The unit of such [social] systems is not perhaps the person – the individual human – but the role - that part of the person which is concerned with the organization or situation in question.”

David Seidl, in his article on Niklas Luhmann’s social autopoiesis, pointed out that:

"The first decision is what to treat as the basic elements of the social system. The sociological tradition suggests two alternatives: either persons or actions."

Both Boulding and Seidl pointed us to the question: which is the fundamental element of a social system - the people or the roles they play, the actors or the activities they perform?


Aside: The answer matters to a systems thinker, because we can model the dynamics of an activity system (be it a flock of birds, a chemical reaction, or a game of chess or poker). And we can simulate the behavior of such a system in software. However, we cannot model the dynamics of a social entity in which people continually redefine their own activities, except at such an abstract level of thinking (about the meta system that defines systems) that it tells us nothing about the business at hand.


Social entity thinkers, who discuss organizing and motivating actors to do what is wanted, including performing their roles in one or more activity systems, should distinguish between these two kinds of “system”.

Activity system: a pattern of activities, performed by parts or actors, that are related as shown in a model (as in the rules of poker, or the score of a symphony).

Social entity: a group of actors who interact by creating and using messages (say, a card school, or an orchestra).

You can see light as waves or as particles, but not both at once. Similarly, you cannot simultaneously view a human organization as an activity system and a social entity, since they are different viewpoints.

Activity systems design

To design an activity system, you focus on the activities and roles needed to meet some given aims. At run-time, entities or actors interact in rule-bound activities to change its state, represented by state variable values, over time. (For discussion of “rule”, see the chapter on causality.)

To model an activity system is to describe an entity's way of behaving, how it changes state and/or transforms inputs into outputs. The model can include roles for actors, rules for activities, results (state changes or outputs) produced; also, state variables, information maintained in memories and exchanged in messages. Change the model, change the pattern (say, change the rules of poker) and you change the system.
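
This can be sketched as state variables plus rules: keep the actors and events the same, change a rule, and the system’s results change. The rule names and events below are invented for illustration:

```python
def run(state, rules, events):
    """Apply rule-bound activities to change the system's state over time."""
    for event in events:
        for rule in rules:
            state = rule(state, event)
    return state

# Rule set 1: every event changes the state.
def count_all(state, event):
    return {**state, "count": state["count"] + 1}

# Rule set 2 (a changed model): only "win" events count.
def count_wins(state, event):
    if event == "win":
        return {**state, "count": state["count"] + 1}
    return state

events = ["win", "lose", "win"]
print(run({"count": 0}, [count_all], events))    # {'count': 3}
print(run({"count": 0}, [count_wins], events))   # {'count': 2}
```

Same actors, same events, different rules - and so, in the terms used here, a different system.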

Activity systems thinking appears in the form of cybernetics, system dynamics and soft systems methodology. It is about systems with a stable “way of behaving”. Their behavior is regular and repeatable enough to be modelled in terms of parts interacting in an orderly way to produce a set of results or effects that no part can produce on its own.