Things to know about systems thinking

A chapter in the draft book at https://bit.ly/2yXGImr.

 

Reading on-line? If your screen is wide, shrink the display width for easier reading.

 

This work is for enterprise architects, business architects and people who already consider themselves to be “system thinkers”.

 

Preface

Writing this first chapter – an overview of terms and concepts explored in later chapters - has been rather like trying to solve the mystery of the Bermuda triangle. Analysis of that by Larry Kusche led him to describe it as a manufactured mystery, perpetuated by writers who either purposely or unknowingly made use of misconceptions, faulty reasoning, and sensationalism.

 

Much posted today on "system change/innovation" and "the management of complexity" under the headings of "systems thinking", "management science" and "complexity theory" is mysterious on first reading. Analysis of it reveals some authors quote aphorisms of gurus that don't apply out of context, or don't bear close examination; and some unknowingly misinterpret the terms of mathematics, physics, biology or more general system theory. The result is a very large and incohesive body of material, some so ambiguous as to be incoherent.

 

So, this chapter has been written in the hope that you (reader) want to resolve the ambiguities in systems thinking terminology, and distinguish the sense from the nonsense, with a view to having more meaningful and useful discussions about systems of interest. You won’t find mathematical proofs and evidence drawn from practical experiments. You will find assertions about systems thinking are analyzed critically and logically, and conclusions are illustrated by easily understood examples, such as a ridden bicycle, a game of poker, a card school, nest building, a termite colony, and a business (say IBM).

 

A reader has asked: “There’s a lot to know about systems thinking. Do I have to study it before doing it?” In my view, you ought to understand at least the ideas in this one chapter. Ackoff spoke to the effect that people learn despite their teachers. That is truer of (say) swimming than of academic subjects. Teaching does involve explaining things learners may never come to understand without a teacher’s help, backed up by decades, even centuries, of thinking about the subject matter by countless other people – in this case, people like Ross Ashby and Charles Darwin.

 

One conclusion of this chapter is that the traditional contrast between hard and soft systems would be better replaced by one between activity systems and social entities, associated in a many-to-many relationship. We will do better to distinguish activity systems thinking from social entity thinking, and apply both to the organization of a business, without confusing one with the other.

 

 

Contents

Description and reality

Holism and emergence

General activity system concepts

Linear and non-linear systems

Three ways of looking at activity systems

More concepts

System change by state change and by evolution

System: soft or hard?

Social system: activity system or social entity?

More ambiguities

Questionable assertions about a system

Conclusions and remarks

Appendix: relevance to “complexity science”

 

Description and reality

For sure, there is a reality out there - there are phenomena that thinkers can not only observe but also envisage. The terms subjective and objective apply not to reality but to the descriptions thinkers make of reality - to their models. A model (mental or documented) is always subjective in the sense it is formed by one or more thinkers. It can also be objective to the extent its accuracy is confirmed by empirical, logical and social verification.

 

What follows is underpinned by the epistemological triangle below.

 

1.     Thinkers <observe and envisage> Phenomena.

2.     Thinkers <create and use> Models.

3.     Models <represent> Phenomena.

 

You may replace thinkers by observers, phenomena by aspects of reality, or models by descriptions. And note the triangle is recursive, since both thinkers and models are also phenomena. For more detailed definition and deep exploration of the triangle, read this other article/chapter.

 

How do systems thinkers distinguish description from reality? Russell Ackoff (1971) distinguished abstract systems from concrete systems.

a)     An abstract system is a model (say, the rules of poker), created by thinkers to describe a phenomenon as they observe or envisage it.

b)     A concrete system is a phenomenon (a particular game of poker) in which a material entity (a card school) realizes an abstract system, near enough to satisfy observers.
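
To make the distinction concrete, here is a minimal sketch in Python (the class names and rule values are illustrative, not drawn from any real rule book): the abstract system is a description of the rules; a concrete system is a particular realization of those rules by a card school.

```python
class PokerRules:
    """Abstract system: a model of how any game should be played."""
    hand_size = 5
    min_players = 2

class PokerGame:
    """Concrete system: a card school realizing the abstract system."""
    def __init__(self, rules, players):
        # The material entity must realize the model near enough.
        if len(players) < rules.min_players:
            raise ValueError("too few players to realize the rules")
        self.rules = rules
        self.players = players

    def deal(self):
        # Each player gets the hand size the abstract system prescribes.
        return {p: self.rules.hand_size for p in self.players}
```

Many concrete games can realize the one abstract system, and one card school can realize many abstract systems (poker tonight, bridge tomorrow).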

 

Even seasoned systems thinkers casually refer to a social entity (say, a card school) as being a system, without reference to which of many conceivable abstract systems they have in mind. And even Ackoff lost the plot when he said that “not every system is a [human] organization but every organization is a system”.

 

When a systems thinker refers to the organization displayed by a whirling flock of starlings (or other birds) as a complex adaptive system (or CAS), you may reasonably ask them: In what sense is it any of those?

 

Complex? Each actor in the system follows simple flying rules, adapting their flight path to changes in conditions around them. Does a flock of 200 birds have the same complexity as a flock of 100 birds, or is it twice as complex, or else, how do their complexities relate?

 

Adaptive? Each evening, starlings play the same roles and follow the same rules in the same activity, never changing them. In what sense does the whole flock adapt, and to what?

 

A system? For sure, it is a set of parts that interact (holistically) to produce an ever-shifting shape that no part can produce on its own. But when the flocking stops, and the birds drop down to roost in some trees, is it still the same system, or a system at all?

 

What is the system of interest? Is it the flock - a social entity - as in social systems thinking? Or the regular and repeated pattern of flocking behavior they engage in - an activity system?

 

A flight of geese or a ridden bicycle is a rule-bound activity system that adapts to changing conditions by changing its state. By contrast, a human social entity may adapt to changing conditions by changing its way of behaving (the activity systems it participates in) or even changing its aims. Is that a CAS? Or is it better called an evolving social entity (ESE)?

 

This first chapter explores these questions. Sit down in a quiet place and take it slow.

Holism and emergence

A system is not unreasonably defined (holistically) as a whole composed of parts or actors that interact to produce results or effects (emergent properties) that one part cannot produce on its own.

 

There is a tradition of thinkers rejecting a mythical past, or knocking down straw men, before purporting to bring some new insight. Systems thinkers often decry scientists for defining systems in a top-down, analytic or reductionist way. And it does seem that Ludwig von Bertalanffy, the first general system theorist, saw his own science of biology as reductionist. Yet bottom-up synthesis, studying how things relate holistically in a bigger whole, is pervasive in science. That is how our picture of the solar system emerged.

 

You can see a thing both as a whole and as a part of a wider whole - see it as what Arthur Koestler called a “holon”. Moreover, you can see a thing as a part in any number of wider wholes, which may overlap or be nested. For example, you can see a person as participating (with more or less commitment) in many different social entities, overlapping and nested, large and small.

 

The idea of a collective producing results that individuals cannot is so appealing to management science gurus that they often use the terminology of system theory, though not necessarily the concepts. And remember, one individual actor may play roles in many wholes (as one person may live in more than one city, and participate in several card schools and choirs) with varying degrees of participation or commitment.

 

Two kinds of emergence may be distinguished.

a)     emergent systems arise in the evolution of the universe, either from disorder or from modifications to a prior system generation.

b)     emergent properties arise from interactions between entities or actors in a given activity system.

 

If you can detect no order or pattern in a cloud of interstellar gas and dust, then there is no definable system. There is only an ever-evolving entity or ever-unfolding process. Once our solar system emerged from such a cloud, billions of years ago, it settled into its regular way of behaving and has repeated it, near enough, ever since. The system is the pattern of planetary orbits that we can detect and describe.

 

An emergent property is an effect, result or ability of a given system that arises from interactions between its parts. Consider the forward motion of a yacht that arises from the passage of a wind over its sail. Or the outputs produced from inputs by a mechanical, human or computer activity system (perhaps in response to invoking an operation defined in an interface or a service in a service level agreement). Or any line-of-behavior graph that shows the trajectory of a variable's value over time - how it increases exponentially, goes up and down, or stabilizes.

 

Consider a ridden bicycle, and emergent properties observers may be interested in.

 

A Ridden Bicycle

Emergent property   Emerges from interactions or feedback between these parts

Forward motion      The rider’s legs and feet, the pedals, the rotating parts
                    of the drive mechanism, the wheels, etc.

Steering            The rider’s arms and hands, the handlebar, and the shaft
                    to the axle of the front wheel.

Balance             The rider’s left-right lean, the direction of the
                    handlebars, and the centrifugal force produced by
                    rotating wheels.

Comfort             The rider’s bottom, hands and feet, the saddle, the
                    suspension, the tyres, the spokes, the handlebars and
                    pedals.

Warnings            The rider’s thumb and the bell on the handlebars.

 

Note that system designers may:

·       Replace one rider by another, without changing the properties above.

·       Remove the warning bell with no effect on the other properties above.

·       Attend to one property and one subsystem at a time.

·       Trade-off between properties, such as comfort and speed of forward motion.

·       Ignore the internals of what they see as atomic parts (legs, pedals, ball bearings)

·       Be completely ignorant of a rider’s cardio-vascular system, or the internal structure of a ball bearing.

 

“Holism” may be misinterpreted to mean an emergent property requires all of a system’s parts to interact. In practice, some requirements for a bicycle require only some parts of the whole. You could potentially define a different system for each emergent property. However, a bicycle manufacturer will likely say the requirements for the whole “system of interest” include all the emergent properties above.

 

Some advise us that to improve a system one should focus on how parts relate rather than on parts on their own. Have they misinterpreted Ackoff's "Improving a part does not necessarily improve the whole"? He didn't mean don't improve a part on its own, he only meant "don't change the part because it makes the part better without considering its impact on the whole".

 

Today, the opposite "doctrine of marginal gains" is the principle that major improvements emerge from making many small incremental improvements. To improve the performance of a cyclist, we do well to focus separately on the fitness of the cyclist, the weight of the bicycle frame, and the aerodynamics of the wheels. We may also usefully attend to a relationship between two or more parts. But in any non-trivial system, we cannot focus on all of its parts or relationships; we can only focus on a few of them at any time.

 

In the case of the ridden bicycle, and in general:

·       The scope of the whole is a choice made by observers/describers.

·       The performance of the whole can be measured in several ways.

·       The whole can do some useful things without every part.

·       The granularity of a part is a choice made by observers/describers.

·       A part (here, the rider) may do useful things in other wholes.

·       The whole entity is not wholly knowable.

 

“Holism” is sometimes misinterpreted to mean zooming in to study every conceivable part of an entity, or zooming out to study every conceivable effect of a system on its environment. In practice, every system we model is an abstraction that excludes almost all of what is conceivably knowable about an entity or situation. We have no other way of looking at or understanding the world. An activity system is an observer's view of how selected parts of an entity interact holistically to produce particular emergent properties of interest.

General activity system concepts

Most systems thinkers are concerned with dynamic systems shaped by

a)     evolution (as in cosmology and biology), or by

b)     design (as in human and computer activity systems).

 

This section exposes many other dichotomies that all systems thinkers should be aware of. To begin with, structures may be classified as

a)     passive structures (like the chemist’s periodic table, or a database schema) and

b)     active structures (like the solar system, or a steam engine, or a database).

 

Aside: the table below shows a variety of passive structures used in modelling how the structures and behaviors of a real-world activity system are organized.

 

Descriptive structures

Hierarchy   Composition (or granularity) hierarchy
            Delegation (or dependency) hierarchy
            Generalization (or classification) hierarchy

Network     A graph of nodes connected by lines

Matrix      A table of rows and columns

 

The hierarchical structures in the table above relate to varieties of abstraction discussed in a later chapter.

·       Encapsulation: abstracting an interface definition from the inner workings of a component.

·       Delegation: abstracting a client component from the inner workings of a server component.

·       Generalization: abstracting properties shared by several components.

·       Idealization: abstracting a more logical description from a more physical one, of which interface definition is an example.

 

Although some refer to a passive classification or organization structure as a system, almost all discussion under the heading of “systems thinking” is about activity systems that feature both structures and behaviors.

 

Context                              Structures                  Behaviors

General                              what exists in space        what happens over time
1950s Cybernetics (Ashby)            state variables             state changes
1960s System dynamics (Forrester)    stocks or populations       stock level changes
1970s Soft systems methods           actors                      activities in processes
1980s Structured systems analysis    entities                    events
1990s Unified Modelling Language     objects                     operations
2000s ArchiMate language             components and interfaces   processes and services

 

The structure/behavior distinction is related to others (instantiation/occurrence, persistent/transient, enduring/fleeting) we naturally use to describe the existence of phenomena in space and time.

 

Aside: Bertalanffy speculated that all structures can be reduced to behaviors. The structural entity-relationship models drawn by systems analysts commonly represent both persistent entity types (“customer” and “employee”) and transient event types (“order” and “payment”). However, that distinction is only a perspective, since it depends on the time-scale over which you model reality. Within their life-time, a person is an endurant. Over the billions of years of life on earth, a person is a fleeting process that conveys some genes from one generation to the next. And in the history of the universe, a star is a transient process that runs from its birth to its death.

 

Activity systems can be

a)     closed (as in a system dynamics model), or

b)     open (as in most business activity systems).

 

To encapsulate the activities performed by a system, you must separate what is inside it from what is outside. To define the boundary of an open system is to define its inputs and outputs, to define the interface between the system and its environment. Using the ArchiMate modelling language, an open activity system can be defined in terms of four concepts named in the table below.

 

                           Structures    Behaviors

External (boundary) view   Interfaces    Services
Internal (contents) view   Components    Processes
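
The four concepts can be sketched in code. In the hypothetical Python below (OrderService and OrderComponent are invented names, not ArchiMate constructs), the interface exposes a service to the environment, while the component and its process stay hidden inside the system boundary.

```python
class OrderComponent:
    """Internal view: a component whose process realizes the service."""
    def run_order_process(self, item, qty):
        # The process: internal activities hidden from the environment.
        return f"confirmed {qty} x {item}"

class OrderService:
    """External view: an interface exposing a service to the environment."""
    def __init__(self, component):
        self._component = component   # internals hidden behind the interface

    def place_order(self, item, qty):
        # The service: inputs cross the boundary, outputs cross back.
        return self._component.run_order_process(item, qty)
```

An external actor sees only the interface and the service it offers; the component behind it could be replaced without the environment noticing.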

 

Activity systems can be triggered to perform activities by

a)     internal events or state changes and/or

b)     inputs (open systems only).

 

Activities can produce two kinds of effect or result

a)     internal events or state changes and/or

b)     outputs (which change the state of the external environment).

 

The results of activities can affect

a)     actors within the system (members or employees) and/or

b)     actors outside the system (consumers or customers).

 

Any actor may find the effects or results of some activity to be beneficial, harmful or neutral. The aim of a system designer is to produce "desired effects", that is, effects that some actors (sponsors or other stakeholders) find beneficial. However, a system (say, an atomic power station) may produce a mixture of beneficial and harmful effects, about which different actors may have different opinions.

Linear and non-linear systems

Some systems thinkers decry what they call linear thinking in favor of non-linear thinking. Yet linear and non-linear systems have much in common. In the table below, most concepts (to be explored in this and later chapters) are characteristics of both.

 

In a so-called linear system, as might be modelled in a value stream diagram:

Activity 1 → Activity 2 → Activity 3

Properties or results emerge from interactions between interacting entities or actors joined synthetically or holistically in causal or cause-effect relationships.

In a so-called non-linear system, as might be modelled in a causal loop diagram:

Entity A ⇄ Entity B

Properties, including self-organization, emerge from feedback loops between interacting entities or populations joined synthetically or holistically in causal or cause-effect relationships.

 

To design a linear system, you may approach the task in a linear fashion by defining the

1.     desired effects or results (emergent properties).

2.     end-to-end process(es) to produce 1 from a given starting point.

3.     roles required to perform 2.

4.     exception conditions that may prevent the successful completion of 2.

5.     exception-handling processes and roles.

 

Then buy, build, hire or educate components or actors to perform the roles in the processes at 2 and 5.
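
As a rough illustration of the five steps, the hypothetical Python pipeline below defines an end-to-end process (steps 1–3), an exception condition (step 4), and an exception-handling process (step 5). All names and rules are invented.

```python
def receive(order):                      # activity performed by role 1
    if not order.get("item"):
        raise ValueError("empty order")  # exception condition (step 4)
    return order

def pick(order):                         # activity performed by role 2
    return {**order, "picked": True}

def ship(order):                         # activity performed by role 3
    return {**order, "shipped": True}

def handle_exception(order, error):      # exception-handling process (step 5)
    return {**order, "rejected": str(error)}

def end_to_end_process(order):
    # The linear process produces the desired effect (step 1) from a
    # given starting point, or diverts to the exception path.
    try:
        return ship(pick(receive(order)))
    except ValueError as e:
        return handle_exception(order, e)
```

Actors (human or automated) can then be bought, built, hired or educated to perform each role.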

 

Typically, a material processing system may be represented in what is known as a SIPOC diagram.

 

A linear material processing system (SIPOC)

Suppliers → Inputs → Process (Activity → Activity → Activity) → Outputs → Customers

 

Often, in an information processing system, the customers supply the inputs as well as receive the outputs.

 

A linear information processing system

Environment    Interface      System

Customers  →   Inputs     →   Activity → Activity
           ←   Outputs    ←   Activity ← Activity

 

Note that a linear business activity system may (externally) be connected to the wider world by feedback loops in which outputs (say, products used by customers) prompt future inputs (more orders for products of the same kind). Also, it may (internally) contain loops of requests and replies between actors playing different roles.

 

The notion of a feedback loop is typically an abstraction from real world events. Consider the loop connecting water in clouds and water in oceans. Discrete water molecules fall in the form of rain drops and rise in the form of evaporation. The looping path of a single water molecule is a track through space and time we cannot see or touch. Consider a loop connecting wolf and sheep populations. The loop represents a stream of discrete birth and death events in the lives of individual wolves and sheep. There is no continuous, visible or tangible loop.

 

Non-linear systems are often characterized as “self-organizing”, meaning their properties or results emerge from feedback loops between their parts. When a feedback loop connects two quantitative variables, then the two flows may have mutually-reinforcing effects (+/+) on the variable values, which leads them to grow exponentially. For example:

 

Epidemic

Virus population    increases →    Infected people population
                    ← increases
 

Or

 

Hurricane

Sea water evaporating    increases →    Hurricane wind speed
                         ← increases

 

Or else, they may have opposing effects (+/-) on two stock levels, which may lead them to oscillate around an “attractor” state, as in homeostasis.

 

Ecology

Prey population    increases →    Predator population
                   ← decreases

 

The rule-bound feedback loops in a system dynamics model can cause populations or resources to increase exponentially, go up and down chaotically, or stabilize homeostatically. A physicist might call these “lines of behavior” non-linear, complex, chaotic or self-organizing.
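
A minimal sketch of a reinforcing (+/+) loop of the epidemic kind, assuming illustrative rates rather than empirical ones: each stock's growth depends on the other stock's level, so both lines of behavior rise exponentially.

```python
def simulate_epidemic(steps, infection_rate=0.5, replication_rate=0.5):
    """Animate a reinforcing (+/+) feedback loop between two stocks."""
    virus, infected = 1.0, 1.0
    line_of_behavior = []
    for _ in range(steps):
        # Each flow increases the other stock: a reinforcing loop.
        virus += replication_rate * infected
        infected += infection_rate * virus
        line_of_behavior.append((virus, infected))
    return line_of_behavior

line_of_behavior = simulate_epidemic(10)
# Both stock levels rise faster at every step - exponential growth.
```

Swapping the sign of one flow (so one stock decreases the other) turns this into a balancing loop, which can oscillate or stabilize instead.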

 

The idea of producing effects or results without any overarching organizer appeals to sociologists and management scientists. The trouble is, they use the terms of the physical sciences with different meanings when discussing a social entity, such as a business.

 

The term "self-organization" has several meanings, including these two.

a)     In physical systems, how entities synchronize their behavior, often by following simple rules (as in the flight of a flock of geese).

b)     In sociology, how human actors playing roles in (linear or non-linear) activities may creatively redefine those roles.

 

The term “behavior” also has at least two meanings.

a)     In physics, the “lines of behavior” produced by a machine-like activity system

b)     In sociology, the “purposeful behavior” of people in a social entity.

 

By masking ambiguities in the terms used, discussions under the heading of “systems thinking” also mask the dichotomy between two schools. Even respected gurus (like Donella Meadows) sometimes confuse activity systems, of the kind representable in a causal loop diagram, with social entities. Some gurus are concerned with how to diagnose and solve problems in human “organizations” (say, to do with the motivation of people, the management structure, training or culture) with little regard to any particular activity system, linear or non-linear.

 

This book explores and relates these two different schools of thought.

Three ways of looking at activity systems

Peter Checkland promoted a soft systems methodology in which a business system’s parts are actors typified by their roles in transforming inputs into outputs required by customers.

 

A linear business activity system (in its environment)

Inputs from Suppliers → Transformation performed by Actors → Outputs consumed by Customers

 

The “transformation” can be described in a business activity model, as a network of activities related by dependencies.

 

Norbert Wiener promoted cybernetics as the science of control in mechanical and biological systems, in which systems and their parts are typified by state variables that change value over time, and can be related in feedback loops. W Ross Ashby extended cybernetics into a theory of how all kinds of system change can be described in terms of rules applied to state variables, and how feedback leads to complex system dynamics.

 

Jay Forrester promoted system dynamics in which system parts are stocks (aka populations or resources) with a measurable level. Any two stocks may interact via a flow that represents how increasing or decreasing the quantity of one stock acts to increase or decrease the quantity of the other stock.

 

The system dynamics diagram below is only a fragment of what is potentially a very large and very complex system. The whole model is a complex type, which relates the primitive or simple types in a theory of how some part of the world is observed or envisaged to work. Note there can be time delays on flows.

 

Epidemic

Virus        decreases →   Infectable    increases →   Infected    increases →   Immune
population                 people        ← increases   people      ← decreases   people

 

Arguably, since Forrester’s system dynamics may be seen as a branch of it, cybernetics is the dominant science in the field of systems thinking.

More concepts

This section analyzes some more ideas considered important in systems thinking.

 

System as orderly

A disorderly system is a contradiction in terms. Every system we abstract from observation of reality is orderly in some way, meaning its parts are arranged according to a given sequence, rule, structure or pattern, or they interact according to some rules.

 

A passive structure can display structural order. At first you may see a list of names (each: forename, middle name, surname) as disorderly. When I show you the list is sorted alphabetically on the middle name, you’ll see the list as ordered. At first you may see a three-dimensional model of the particles in a fluid as a random product of Brownian motion. When I show you the structure matches a model of the stars in our galaxy, you’ll see the fluid as ordered.

 

An active structure (as modelled in cybernetics or system dynamics) can exhibit behavioral order, meaning its behavior over time follows given rules.

 

Aside: Might we measure the complexity of a system’s order in terms of the work needed to impose its definitive pattern on its structure and/or follow its definitive rules? See the appendix for discussion.

 

System (and terminology) composition and decomposition

Systems and all their elements are composable and decomposable. The field of systems analysis and design is terminology torture, because some use the same terms at different levels of granularity, in a fractal or recursive way, and others use different terms for the same concept at different levels of granularity.

 

A system (however big or small) is a system. Nevertheless, the systems within a system are often called subsystems or components. And systems that contain systems are sometimes called ecosystems.

 

A process (however long or short) is a process, definable by its preconditions and postconditions thus: "If the preconditions hold true, and the process completes successfully, then the postconditions will hold true.” Nevertheless, you’ll find processes appear in systems analysis and design methods under different names - such as value stream, procedure, activity, action, use case, epic, user story, operation and method. You’ll also find them represented in different kinds of diagram - such as value stream diagram, flowchart, and interaction/sequence diagram.
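
The precondition/postcondition definition can be illustrated with a trivial process (a hypothetical withdrawal; the names and amounts are arbitrary):

```python
def withdraw(balance, amount):
    """A process defined by its preconditions and postconditions."""
    # Preconditions: if these do not hold, the process does not apply.
    assert amount > 0 and balance >= amount, "preconditions do not hold"
    new_balance = balance - amount
    # Postcondition: the balance has decreased by exactly the amount.
    assert new_balance == balance - amount
    return new_balance
```

The same contract holds whether the process is an atomic operation or a value stream composed of many sub-processes.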

 

Types and states

“The most fundamental concept in cybernetics is that of ‘difference’, either that two things are recognisably different, or that one thing has changed with time.” (Ashby).

 

Ashby’s point is general, since we all describe things in the world by differentiating them from other things. To describe a thing (its position in space, its membership of a family or generation, its qualities or states) we differentiate discrete:

·       types of things or qualities (say, different species, or colors),

·       instantiations of a type (the different individuals of a species, or appearances of a color in different rainbows),

·       states of a thing that changes over time (the egg, maggot and fly phases in the life of a housefly; or the cyclical on and off states of a light).

 

Which descriptive tool to use depends on the context. What are named as two types in one context (say, “caterpillar” and “butterfly”) may be described as two states of one thing in another context. And a type in one context (say “human”) can be an instance of a type in another (one member in the set of named species).
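
A small Python sketch of the three descriptive tools, using the housefly example above (the class and its phases are illustrative):

```python
class Housefly:                              # a type of thing
    PHASES = ["egg", "maggot", "fly"]        # the states of one thing over time

    def __init__(self):
        self.phase = "egg"                   # an instantiation, in its first state

    def develop(self):
        i = self.PHASES.index(self.phase)
        if i < len(self.PHASES) - 1:
            self.phase = self.PHASES[i + 1]  # a state change, not a new type
```

In another context the same phases might be modelled as types in their own right, which is the point: the choice of tool depends on the describer's interest.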

 

The states of a system

In system dynamics, a system is driven to change state over time under its own internal drive.

a)     The microstate is the state of every individual stock - its level or quantity.

b)      The macrostate is the state of whatever aggregate or system-level properties (say, total biomass, or GDP) emerge from inter-stock interactions.

 

Given a model of a system’s dynamics, we can use a computer to animate it, and so simulate how a real-world instance of the system behaves, how its state changes over time. Its “lines of behavior” may be shown on a graph that reveals how (say) populations of predators and prey go up and down chaotically, or stabilize homeostatically, or flip between chaos and homeostasis.
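
Such an animation can be written in a few lines. The sketch below uses a simplified discrete predator-prey scheme (a Lotka-Volterra-style model with illustrative, not empirical, rates); its line of behavior rises and falls around the attractor state.

```python
def simulate(steps, prey=15.0, predators=5.0,
             birth=0.1, predation=0.02, death=0.1, conversion=0.01):
    """Animate two interacting stocks; return their line of behavior."""
    line_of_behavior = []
    for _ in range(steps):
        # Prey grow by births, shrink by predation events.
        prey += birth * prey - predation * prey * predators
        # Predators grow by feeding on prey, shrink by deaths.
        predators += conversion * prey * predators - death * predators
        line_of_behavior.append((round(prey, 2), round(predators, 2)))
    return line_of_behavior
```

Plotting the returned pairs against the step number shows the two populations going up and down as each stock's flow feeds back on the other.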

System change by state change and by evolution

To recap, a system of interest to us may be defined as a collection of two or more parts that interact in an orderly way to produce a set of results or effects that no part can produce on its own. It is triggered to act by the occurrence of events or conditions – be they external or internal.

 

Since the 19th century, some thinkers have focused on systems that adapt to changing conditions by homeostatically maintaining a stable state. On the other hand, it has also long been clear that the universe, our biosphere, our societies and businesses are continually evolving; and that feedback loops can lead to unstable or chaotic behavior.

 

For discussion of system stability and change to be coherent, it is necessary to distinguish two kinds of stability and two kinds of change. First, a system may be stable in that it:

a)     maintains or hovers around an attractor state, as in homeostasis, or

b)     repeatedly acts according to the same rules (which may either maintain a system’s state, or change it progressively, and perhaps dramatically).

 

Second, a system can change in the sense of:

a)     state changes within the life history of a system

b)     evolutionary changes that produce new (different) system generations.

 

A state change is a measurable increase or decrease in a stock level, or a change in any variable’s value. It is “a measurable difference in the qualities of something from one state to the next… we assume change occurs by a measurable jump.” (Ashby).

 

By contrast, an evolutionary change modifies the system itself. It may add, remove or change stocks or variables. It may change the flows or rules that dictate how stocks or variables change state over time. This changes the very nature of a system; it replaces an older system by a newer system or system generation.

 

An activity system is a pattern (observers can detect and describe) in the behavior of an entity.

E.g. A solar system is a pattern in the behavior of the planets that orbit a star.

A game of bridge is a pattern in the behavior of a card school.

 

A system changes from one state to the next according to rules, which may be laws of nature or mankind.

E.g. The law of gravity and motion govern the orbits of planets.

The rules of bridge are defined by the World Bridge Federation.

 

A system can evolve, mutate or be changed from one generation to another.

E.g. Biological evolution depends on inter-generational mutations.

The rules of bridge have been changed about ten times since 1933.

 

We may detect and describe a system without knowing anything about how it emerged or how it evolves from generation to generation.

 

We may describe the regular processes of    without knowing anything about

A solar system      How planets emerge from a cloud of dust and gas
An organism         Reproduction with modification and Darwinian evolution
A game of bridge    The World Bridge Federation
A business          Enterprise architecture

 

But all is a matter of perspective! Sooner or later, we may turn to look at those higher-level processes as systems in their own right.

 

What to call a process or entity that initializes, produces or changes the rules of a given system? Let us call it a higher-level process or meta system. Now suppose we bound a lower-level system and higher-level meta system together in a wider system. Two insights emerge from thinking about this.

 

a)     A generational evolutionary change to the lower-level system is a state change in the life history of the higher-level meta system.

b)     With respect to the “self-organization” of a human social entity, the same actor(s) can play roles in both lower and higher-level systems.

 

These insights are not original. They are highlighted here because they are important to understanding how we can separate discussion of social entities from discussion of the activity systems their actors not only participate in, but may also redefine.

System: soft or hard?

Ashby’s “Introduction to Cybernetics” (1956) featured the concept of a “soft system” in all but name.

 

3/11 “We must be clear about how a “system” is to be defined. Our first impulse is to point at the pendulum and to say “the system is that thing there”. This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems. The real pendulum, for instance, has not only length and position; it has also mass, temperature, electric conductivity, crystalline structure, chemical impurities, some radio-activity, velocity, reflecting power, tensile strength, a surface film of moisture, bacterial contamination, an optical absorption, elasticity, shape, specific gravity, and so on and on. Any suggestion that we should study “all” the facts is unrealistic, and actually the attempt is never made. What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” Ashby 1956

 

It is debatable whether an entity can be described by infinite or merely many variables. But for sure, different observers of one entity may describe it using different sets of interrelated variables, and identify different systems.

 

6/14 “These facts emphasise an important matter of principle in the study of the very large system. Faced with such a system, the observer must be cautious in referring to “the system”, for the term will probably be ambiguous, perhaps highly so. “The system” may refer to the whole system quite apart from any observer to study it— the thing as it is in itself; or it may refer to the set of variables (or states) with which some given observer is concerned. Though the former sounds more imposing philosophically, the practical worker inevitably finds the second more important.

 

Then the second meaning can itself be ambiguous if the particular observer is not specified, for the system may be any one of the many sub-machines provided by homomorphism. Why all these meanings should be distinguished is because different sub-machines can have different properties; so that although both sub-machines may be abstracted from the same real “thing”, a statement that is true of one may be false of another. It follows that there can be no such thing as the (unique) behaviour of a very large system, apart from a given observer. For there can legitimately be as many sub-machines as observers, and therefore as many behaviours, which may actually be so different as to be incompatible if they occurred in one system.” Ashby 1956

 

In other words, faced with a material entity, there are two kinds of systems thinking.

a)     A physicist may think of the entity – “the thing as it is in itself” – as being a system – regardless of any observer’s interest in it.

b)     A cybernetician thinks of a system as a “set of variables” or “way of behaving” selected by an observer as describing the entity, with some interest in mind.

 

The latter is akin to what Peter Checkland called a “soft system”, which is commonly interpreted to be the first or both of the following.

a)     A stakeholder’s subjective perspective of some business entity or situation

b)     A whole whose parts include human actors.

 

For example, we might perceive a manufacturing business to be a system that transforms input supplies into output products, or transforms input payments from customers into output payments to shareholders, employees and suppliers. Similarly, is IBM a system to make computers? Or to sell consultancy services? Or to transform customers’ payments into salaries and dividends? Or to pay taxes and oil the wheels of the USA’s government and economy?

 

Having abstracted different abstract systems from one concrete entity or situation, we may find each description or perspective is:

a)     valid or invalid (accurate or inaccurate)

b)     compatible or incompatible with other descriptions

 

Valid? Though an abstract system description is subjective in the sense it is a perspective, it should also be objective in the sense that it is valid. Meaning that when a concrete system is tested, it matches the abstract system to a sufficient degree of accuracy.

 

Aside: The scientific method has evolved over centuries. Karl Popper and others improved our understanding of it in the 20th century. Models are approximations. Science does not assume a model or proposition is 100% valid. It is valid to the extent it passes tests. Sometimes, it has a measurable accuracy or degree of truth. Sometimes, especially in sociology, it has only a statistical degree of certainty, allowing, say, a 5% chance that an experimental result was due to chance. For reasoning in the face of uncertainty, statistical methods have been developed out of methods for eliminating error.

 

The term “perspective” is sometimes mistakenly read to imply that kind of “perspectivism” or “relativism” which has infected some university studies of the humanities, in which all descriptions of the world are seen as equally valid, and given equal weight. Rather, a perspective has a degree of validity that is either enough for some given purposes, or not.

 

Two valid perspectives may be incompatible. Consider the two perspectives of light as a wave pattern or a stream of particles. Two valid perspectives may also be valued differently. Consider the two perspectives, taken by opposing sides in debates about abortion, of an unborn child. For addressing conflicting perspectives of a system, we have Checkland's Soft Systems Method. And for addressing "Wicked Problems" we have General Morphological Analysis, as discussed in a later chapter.

 

Checkland observed that people get the soft/hard distinction one day and lose it the next. A ridden bicycle may be described in various ways (as above, or in terms of aerodynamics, or including the surface the bicycle travels over) and it includes a human actor who decides where and how fast the bicycle goes. So, is it a soft system? I doubt many would say so. Still, in my view, every system description or model is soft in the sense it is an observer’s perspective; and many hard systems are soft in the sense they involve human actors.

 

This chapter goes on to identify a dichotomy that is arguably more fundamental than the soft/hard one.

Social system: activity system or social entity?

Kenneth Boulding, in probably the first article on applying general system theory to management science (1956), questioned whether the “parts” of a social or business system are actors or the roles they play. “The unit of such [social] systems is not perhaps the person – the individual human – but the role - that part of the person which is concerned with the organization or situation in question.”

 

David Seidl, in his article (2001) on Niklas Luhmann’s social autopoiesis, pointed out that: "The first decision is what to treat as the basic elements of the social system. The sociological tradition suggests two alternatives: either persons or actions."

 

Both Boulding and Seidl pointed us to the question: Which is the fundamental element of a social system: the people or the roles they play? The actors or the activities they perform?

 

Aside: The answer matters to a systems thinker, because we can model the dynamics of an activity system (be it a flock of birds, a chemical reaction, or a game of chess or poker), with or without feedback loops. And we can simulate the behavior of such a system in software. However, we cannot model the dynamics of a social entity in which people continually redefine their own activities, except at such an abstract level of thinking (about the meta system that defines systems) that it tells us nothing about the business at hand.

 

The two answers lead to two kinds of system: a population of actors or a pattern of activities. For discussion of social systems to be coherent, it is necessary to distinguish:

a)     the social entities we in human society sometimes call "organizations" from

b)     the activity systems they participate in

and to recognize there is a many-to-many relationship between them.

 

Russell Ackoff, in an effort to build a unifying system theory that bridges the schism between activity systems and social entities, built elaborate hierarchies of aims, activities and system types. The graphic below is my attempt to stitch Ackoff’s system ideas together in a coherent whole that differentiates four kinds of system.

 

Ackoff’s system classes | Actors (parts) | Activities | Aims (purposes)
State maintaining system | Roles in an activity system | No optional activities | Fixed aims
Goal-seeking system | Roles in an activity system | Some optional activities | Fixed aims
Purposive system | Members of a social entity | Define their activities | Fixed aims
Purposeful system | Members of a social entity | Define their activities | Define their aims
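Ackoff's four classes can be read as answers to three yes/no questions about the actors: do they have optional activities, do they define their activities, do they define their aims? A small sketch of that reading (my encoding of the table, not Ackoff's own formulation):

```python
# My encoding of Ackoff's classification, not his own: classify a system
# by what its actors can vary - nothing, activities chosen, activities
# defined, or aims defined.

def ackoff_class(optional_activities, defines_activities, defines_aims):
    if defines_aims:
        return "purposeful"
    if defines_activities:
        return "purposive"
    if optional_activities:
        return "goal-seeking"
    return "state maintaining"

ackoff_class(False, False, False)  # -> "state maintaining"
ackoff_class(True,  False, False)  # -> "goal-seeking"
ackoff_class(True,  True,  False)  # -> "purposive"
ackoff_class(True,  True,  True)   # -> "purposeful"
```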

 

Ackoff described the first two kinds of systems (activity systems with a fixed range of possible activities) as deterministic; I don't know if he recognized they could instead be probabilistic or possibilistic (terms to be defined in a later chapter).

 

His primary interest was in the third and fourth kinds of system; in social entities that are business organizations or institutions. He characterized them as "purposive" or "purposeful", to distinguish them from the activity systems defined in cybernetics.

 

Activity systems thinking (explored in a later chapter)

An activity system (say, a game of poker) is a pattern in an entity's way of behaving.  Change the pattern (change the rules of poker) and you change the system. An instance of the system is dynamic in the sense that entities or actors interact in rule-bound activities to change its state, represented by state variable values, over time.

 

To model an activity system is to describe an entity's way of behaving, how it changes state and/or transforms inputs into outputs. The model can include roles for actors, rules for activities, results (state changes or outputs) produced; also, state variables, information maintained in memories and exchanged in messages.

 

Activity systems thinking appears in the form of cybernetics, system dynamics and soft systems methodology.

 

Social entity thinking (explored in a later chapter)

A social entity (say, a card school) is a population of communicating actors. It is dynamic in the sense that its actors communicate. It is a community of actors who interact by exchanging information to meet aims – both personal and shared. The actors may be free to determine how they communicate and behave (regardless of any given activity system model).

 

The word “social” implies actors interact by creating and using messages. They do this in a more or less organized way, influenced by various structures (power, reporting, competency, friendship, family) that may be found in a social entity.

 

Much social systems thinking is about how a group of human actors are:

·       motivated and managed to accept, agree or achieve some aim(s)

·       connected in hierarchical, network or matrix structure(s)

·       more or less constrained in their actions by the rules of activity system(s)

·       more or less able to change an activity system they participate in.

 

If the aims are stable, but we change the structure, is it still the same system? If the structure is stable, but we change the actors or activities, is it still the same system? If a system thinker cannot answer those questions, their concept of a system is unclear.

 

Social entity thinking features in “2nd order cybernetics”, and in the works of Jackson and Senge.

 

On relating the two perspectives

You can see light as waves or particles; take either view at different times, but not both at once. Similarly, you cannot simultaneously view a system as a set of actors and a set of activities, since they are different viewpoints.

 

For sure, one activity system may be performed or realized by one social entity, as this table indicates.

 

An activity system may be | performed by a social entity
Nest building | A termite colony
A card game | A card school
An orchestral performance | An orchestra
Expense claims and payments | A business

However, to conflate the two concepts is misleading, because they are related many-to-many. You can change the members of a card school without changing the games they play. You can change a card game without changing the card schools who play it. So, to see a card school and a card game as one system is to be blind to the many-to-many relationship between the two different concepts.

 

A many-to-many relationship
1 to N | A Card school <plays> Card games
1 to N | A Card game <is played by> Card schools
N-to-N Link | One instance of a game

 

The table below contains some more generic many-to-many associations.

 

Three generic many-to-many relationships
1 to N | A concrete thing <conforms to> abstract types
1 to N | An abstract type <is conformed to by> concrete things
N-to-N Link | One instantiation of a type by a thing
1 to N | An Actor <plays> Roles
1 to N | A Role <is played by> Actors
N-to-N Link | One playing of a role by an actor
1 to N | A Social entity <realizes> Activity systems
1 to N | An Activity system <is realized by> Social entities
N-to-N Link | One instantiation of a system by a social entity
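In code, the N-to-N link can be modelled as a link record tying one social entity to one realization of one activity system. A minimal sketch, with invented example names:

```python
# Illustrative data model of the many-to-many relationship: each link
# record is one realization (one instance) of one activity system by
# one social entity. All names below are invented examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Realization:            # the N-to-N link
    social_entity: str        # e.g. a named card school
    activity_system: str      # e.g. a named card game

links = [
    Realization("card school A", "poker"),
    Realization("card school A", "bridge"),  # one school plays many games
    Realization("card school B", "poker"),   # one game, many schools
]

def systems_realized_by(entity):
    return {l.activity_system for l in links if l.social_entity == entity}

def entities_realizing(system):
    return {l.social_entity for l in links if l.activity_system == system}

systems_realized_by("card school A")  # -> {"poker", "bridge"}
entities_realizing("poker")           # -> {"card school A", "card school B"}
```

Changing the membership of a card school changes no link records for the games; changing a game's rules changes no link records for the schools, which is the many-to-many point.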

 

Consider a termite colony as social entity. The actors mostly interact in an organized way, playing roles and following rules. We can observe and describe the behavior of one colony in terms of several distinct activity systems it realizes (nest building, swarming, fighting intruders etc.). Moreover, each one of those system types is realized by many termite colonies.

 

Consider IBM as a social entity containing actors performing activities to meet aims. The actors are organized to some extent, and perform some activities in accord with the rules of given activity systems. However, IBM is only well-called an activity system when, where and in so far as it realizes a system model. It is meaningless to call it a system without reference to a particular system model.

 

Of course, human actors (unlike computer actors) may ignore the roles and rules of any described activity system, and determine their own responses to stimuli. Such ad hoc activities lie outside of any definable activity system. Nevertheless, of course, a business usually depends on the ability of its human actors to act in ad hoc and creative ways.

 

Later chapters discuss how Meadows tried to generalize activity systems thinking and social entity thinking.

More ambiguities

Purpose: outcome or aim?

Daniel Kim (in this article) defined a system’s parts as forming “a complex and unified whole that has a specific purpose”. Who has, defines or agrees the purpose of the solar system, a hurricane, a termite, a game of poker, or IBM?

 

Stafford Beer coined the phrase: The Purpose Of a System Is What It Does (POSIWID), which may be read with two very different meanings.

a)     To Ashby, the results produced by the dynamics of an activity system are the inexorable consequence of actors following its rules.

b)     To Ackoff, the results produced by actors playing roles in a social entity are the inexorable consequence of actors following their self-interest.

 

The purposes of a thing lie in the perspectives actors take of it. An activity system produces the results its dynamics lead to; so, if the results aren’t what you want, you’ll have to change the dynamics. A social entity produces the results its members want; so, if the results aren’t what you want, you’ll have to redirect or motivate the members.

 

Obviously, every business change agent should start with the purposes (aims and requirements) of sponsors and other stakeholders, and the products/services to be produced/delivered. Thereafter both kinds of system thinking may be needed.

a)     Activity systems thinking, to define the required activities and roles.

b)     Social entity thinking, to direct, organize, and motivate the required actors.

Organized: from above or within?

What does it mean to refer to a business as an “organization”?  Most businesses are organized so as to realize some modellable activity system(s). Of course, every model is an incomplete and feeble description of a whole business. And there may be leakage around the edges of any model - variations from any described structure or behavior. Moreover, in practice, business actors also interact in ad hoc (unorganized) ways. They spend time acting outside of any known activity system, in pursuit of shared or personal aims.

 

Recognizing the importance of human creativity to business success, management scientists often propose a business is or should be a “self-organizing system”. The trouble is that systems thinkers use the term in several ways.

 

In activity systems thinking

Self-organization can mean:

a)     organization that emerges (bottom up) from interactions between actors, as in a shoal of fish, or

b)     self-assembly, as in the growth of a crystal.

 

Francis Heylighen, in “The Science of Self-Organization and Adaptivity”, wrote as follows.

 

“Self-organization can be defined as the spontaneous creation of a globally coherent pattern out of local interactions. Because of its distributed character, this organization tends to be robust, resisting perturbations. The dynamics of a self-organizing system is typically non-linear, because of circular or feedback relations between the components”.

 

“Positive [amplifying] feedback leads to an explosive growth, which ends when all components have been absorbed into the new configuration, leaving the system in a stable, negative [dampening] feedback state. Non-linear systems have in general several stable states, and this number tends to increase (bifurcate) as an increasing input of energy pushes the system farther from its thermodynamic equilibrium.”

 

There is no implication above that the elements or actors in a self-organizing system have any awareness of what their collective action will produce, or any aim to produce it.
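The quoted pattern can be illustrated by the simplest textbook case, logistic growth: the growth term is positive (amplifying) feedback, the damping term is negative (dampening) feedback, and the run ends in a stable state. This is an illustration of the quoted idea, not Heylighen's own model:

```python
# Illustration (not Heylighen's own model): logistic growth couples a
# positive feedback term (growth proportional to x) with a negative
# feedback term (damping as x nears capacity), so explosive early
# growth ends in a stable state.

def logistic_run(x0, rate, capacity, steps):
    x = x0
    history = [x]
    for _ in range(steps):
        x = x + rate * x * (1 - x / capacity)  # amplifying * dampening
        history.append(x)
    return history

h = logistic_run(x0=1.0, rate=0.5, capacity=100.0, steps=60)
# Early steps multiply x by roughly 1.5 (positive feedback dominates);
# late steps barely move x at all (negative feedback dominates).
```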

 

In social entity thinking

Self-organization is usually related to the ability of self-aware human actors to

a)     purposefully change an activity system they participate in, or

b)     purposefully act in ad-hoc ways outside any definable activity system.

 

The former is compatible with activity systems thinking. Thinking about a system and redefining its variables, roles or rules is meta systems thinking. It occurs over and above the activity system to be changed. And making the step change from old to new activity system will require some change management activity.

 

The latter is incompatible with activity systems thinking, since those activities are not knowable, definable or agreed before they occur, and may never be repeated.

Adaptation: state change or evolution?

A system may adapt or be adapted in either of the ways defined earlier:

a)     state changes within the life history of a system

b)     evolutionary changes that produce new (different) system generations.

 

In the first sense, an activity system may explore a wide “state space”, settling for a while into one or other “attractor” state.

 

“To adapt to a changing environment, the system needs a variety of stable states that is large enough to react to all perturbations but not so large as to make its evolution uncontrollably chaotic. The most adequate states are selected according to their fitness, either directly by the environment, or by subsystems that have adapted to the environment at an earlier stage.

 

Formally, the basic mechanism underlying self-organization is the variation which explores different regions in the system’s state space until it enters an attractor. This precludes further variation outside the attractor, and thus restricts the freedom of the system’s components to behave independently. This is equivalent to the increase of coherence, or decrease of statistical entropy, that defines self-organization.” Heylighen
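The attractor idea can be sketched with a toy state space: a double-well landscape with two stable states. Wherever variation leaves the system, its dynamics pull it into one attractor or the other, after which variation outside that attractor is precluded (illustrative figures, not drawn from Heylighen):

```python
# A toy state space (illustrative, not from Heylighen): a double-well
# landscape with two attractor states, at x = -1 and x = +1. From
# wherever variation leaves the system, its dynamics pull it into one
# attractor, precluding further variation outside that attractor.

def settle(x, steps=200, lr=0.1):
    for _ in range(steps):
        x = x - lr * (x**3 - x)  # descend the double-well x**4/4 - x**2/2
    return x

settle(0.3)    # settles near the attractor at +1
settle(-2.0)   # settles near the attractor at -1
```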

 

People often talk about the need for "adaptivity" in a business or other human social entity, but without reference to any model of it as a system. So, it is often unclear whether they are referring to the ability of human actors to

a)     evolve any activity system they participate in by changing its way of behaving

b)     act in ad-hoc ways outside of any definable activity system.

 

In short, there are three kinds of change:

a)     orderly state changes within the life history of a modellable orderly system

b)     generational evolutionary changes that change an activity system from one modellable system generation to the next

c)     disorderly, ad hoc or creative changes individual actors make to how they interact with others in a social entity, which cannot be modelled.

Causality: situational or dispositional?

Activities are triggered, and effects or results are produced, in response to events or conditions. Some distinguish

a)     situational causality, triggered by external events or conditions, from

b)     dispositional causality, triggered by internal events or conditions.

 

The distinction is more cosmetic than fundamental, because systems (like all wholes) can be nested such that an external event to one system can be internal to a system at a higher level of system composition.

 

Moreover, often, a state change or effect is caused by a combination of an external event and an internal condition. For example, I have seen a drinking glass crumble apparently spontaneously, presumably due to some internal stress. But normally, when struck, a glass may be disposed to either ring or shatter depending on both:

a)     the external force and

b)     the internal condition of the glass.

 

Aside: on the flow of time. Can effects precede causes? Can time slow down, stop or go into reverse? In theory yes, but that time flows steadily forward is an illusion we cannot escape from, because time runs at the speed of the physical and chemical processes that underpin all change in the world, ranging from the beat of an atomic clock to human thinking. A person in a room with no light and sound still senses time passing, because the thinking and ageing processes run in the same direction. If time were to go into reverse, those processes would become unthinking and rejuvenating.

 

We detect time passing by observing a change in the state of something, be it a clock or our thoughts. If all changes stop, then time stops. Conversely, if time stops, then all changes stop. No chemical reactions occur, no synapses fire, and no light moves. Observation and remembering are processes that run in the direction that biochemical changes happen. So if time were to stop or reverse, we could not observe or remember it. The only time we can observe and remember is change in the direction we can detect and remember. Change in the other direction is theoretically possible, but practically undetectable.

Autonomy: rule-bound or creative agents?

Are the options (between this or that action) an autonomous actor chooses between better viewed as

a)     constrained - by the rules of an activity system the actor plays a role in, or

b)     created anew - by the agent?

 

Much social systems thinking is about entities in which actors are said to be self-governing or autonomous. They are “agents” who have “agency”. They can not only choose between pre-determined actions, they can be creative and invent new actions.

 

Russell Ackoff defined the abilities to choose between activities and between aims as the characteristics that differentiate what he called “purposive” and “purposeful” systems from the kind of rule-bound activity systems discussed in cybernetics and system dynamics.

 

A wine glass, when struck, may “choose” to ring or to shatter, depending on definable conditions. However, Ackoff was surely thinking of something different, the kind of choice we humans make “consciously”. At this point, a host of questions arise, to which brief answers are offered below.

 

Consciousness and language

In the range of animals alive today, self-awareness, forethought and communication skills may be placed on scales from primitive to advanced. The presumption must be that all our human abilities, including our advanced forms of consciousness and language, are side effects of biological evolution.

 

Throughout the animal kingdom, the process of evolution has been remarkably effective in developing abilities that prove functional or useful, and discarding ones that don't. Advances in consciousness and language improved our abilities to make decisions and cooperate with others (and probably reduced our need to be as strong as our Neanderthal cousins).

 

Consciousness is what enables animals to compare descriptions of the past, the present and envisaged futures, and so make better decisions. To choose between this or that action, a thinking animal predicts the future consequences of each, and compares their pros and cons (see below). It seems big-brained humans are exceptionally well able not only to remember the past (as elephants do) but also to envisage long-term futures.

 

Human language enables both sophisticated communication and elaborate descriptions of envisaged futures (including target systems). But natural language is highly flawed. Dictionaries often define one word in several different ways; the definitions of one word change over time; and worse, every natural language sentence is inherently ambiguous. Every symbolic model or expression is potentially ambiguous. The ambiguities don't lie in the model or expression itself; they lie in the different meanings associated with it by the actors who create and use it.

 

A written sentence has no meaning in itself. It only means what its writer thinks it means, and what each reader thinks it means. So, the successful use of natural language depends on

·       shared context: writers and readers sharing just enough understandings of just enough of the words and grammar used

·       redundancy: repetition of one meaning using different words and/or sentence constructions

·       verification: by way of question and answer.

 

In defining human and computer activity systems, to avoid the ambiguities of natural language, we use domain-specific languages. This is inescapably essential in software system design.

 

Aside: Our consciousness and language give us abilities that have well-nigh cost-free side-effects, like the ability to write poetry. You may say our ability to write poetry "emerges" from the evolution of consciousness and language skills for more functional reasons. Such side effects are incidental to our survival as organisms.

 

The general decision-making process

Generally, animals choose how to act by comparing futures they can envisage, depending on the disadvantages and advantages of different courses of action.

                                                                    

“Behavior analysts have… been able to develop a simple mathematical equation that can predict the choices of animals almost perfectly! The equation is called the “matching law” because their decisions have been found to match the combined advantages and disadvantages of their choices. This equation is able to account for all of the different ways scientists have come up with for changing the qualities of the two choices. This includes amount of food, quality of food, delay to food, and much more….

 

While the matching law has been shown to predict the decisions of animals, it’s reasonable to question whether it is also able to account for the choices people make every day. Further research, both experimental and archival, has found that the matching law does, in fact, describe human decision-making accurately!”

https://thedecisionlab.com/insights/society/parallels-between-human-animal-decision-making
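The matching law quoted above has a simple form: the share of responses allocated to an option matches that option's share of the combined reinforcement obtained from it. A minimal sketch:

```python
# The matching law (Herrnstein): the share of behavior allocated to each
# option matches that option's share of combined reinforcement:
#   B1 / (B1 + B2) = R1 / (R1 + R2)

def matching_allocation(reinforcements):
    # Predicted share of choices per option, given each option's combined
    # advantages (amount of food, quality, inverse of delay, and so on).
    total = sum(reinforcements)
    return [r / total for r in reinforcements]

matching_allocation([30, 10])  # -> [0.75, 0.25]: 3x the reward, 3x the choices
```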

 

In management science, the universal animal decision-making process above is quantified in the form called cost/benefit analysis. This is always a subjective process, since somebody has to identify the costs, who pays them, the benefits, who gets them, and the time-scale over which the analysis is done – all of which may be debatable.
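The arithmetic of cost/benefit analysis is simple; the subjectivity lies in the inputs. A sketch with invented figures (the option names and numbers are purely illustrative):

```python
# Sketch of cost/benefit arithmetic with invented figures. The listed
# costs and benefits (and who bears or receives them, over what
# time-scale) are subjective inputs - which is why results are debatable.

def net_benefit(costs, benefits):
    # Net result of one envisaged course of action over the chosen time-scale.
    return sum(benefits) - sum(costs)

def choose(options):
    # Pick the option whose envisaged future has the greatest net benefit.
    return max(options, key=lambda name: net_benefit(*options[name]))

options = {                               # option -> (costs, benefits)
    "automate":  ([50, 20], [100, 10]),   # net 40
    "outsource": ([40], [60, 10]),        # net 30
}
choose(options)  # -> "automate"
```

Change who counts as a stakeholder, or extend the time-scale, and the same arithmetic can reverse the ranking, which is the subjectivity the text describes.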

 

Aside: What is the use of emotions? The evolutionary benefit to animals of emotions lies in how effective they are in influencing the weights given to the pros and cons of different envisaged futures. Perhaps humans are unique in their ability to use reason and logic to overcome the influence of emotions in at least some decision-making processes.

 

Creative actions

Remember the question: Are the options (between this or that action) an autonomous actor chooses between better viewed as

a)     constrained - by the rules of an activity system the actor plays a role in, or

b)     created anew - by the agent?

 

The actions chosen by players in a game of poker may be unpredictable, but they are constrained by the rules of the game. The winner may be unpredictable, but we know there will be a winner. Change the rules governing activities and you change the system.

 

You may see the bees in a beehive as subsystems of an activity system, with little or no self-determination. By contrast, social systems thinkers like Ackoff see the individual actors in a human organization as having agency, as being able to define not only activities but also the aims they pursue.

 

Thus, a human social entity is hugely more than any specifiable activity system. A card school is hugely more than a poker game. The card players can make a mistake, or deliberately break the rules. Either way, their action lies outside of the activity system. Card school members can act in creative ways, with regard to purposes that are personal or shared.

 

So, I propose we apply the term “creative” to actions we have no prospect of predicting from the rules of a system. Not because feedback loops mean the system behaves in a “self-organizing” or “chaotic” way (as weather systems do) but because the actors are thinkers. Remember our epistemological triangle? It is perhaps the only original idea in this work, and arguably, an improvement on the classic semiotic triangle.

 

1)     Thinkers <observe and envisage> Phenomena.

2)     Thinkers <create and use> Models.

3)     Models <represent> Phenomena.

 

Thinkers can follow the rules of a given activity system. Thinkers can, instead, behave in an intelligently purposeful way. They can

·       creatively envisage two or more future phenomena

·       model and predict the outcomes of those phenomena

·       assess the costs and benefits of those outcomes

·       choose or propose what action to take.

 

A biologist may think of creative choices and proposals as merely options in a deterministic activity system yet to be defined. However, a commitment to determinism at the level of physics, chemistry and biology is not necessarily incompatible with accepting the ideas of self-consciousness, free will and creativity at the level of sociology.

 

Given that every rational thinker can envisage different futures, weigh up their pros and cons, and choose to override what their biological instincts urge them to do, a sociologist may reasonably treat their choices as demonstrating free will. In practice, we sometimes ask a court to decide if a human’s action was rule-bound or creative.

 

The choices and proposals of individual actors in a human social entity can be filtered through group decision-making mechanisms (democratic or authoritarian) to appear as though the whole social entity makes purposeful choices in a creative way. Which brings us to the management of agents in a social entity.

Questionable assertions about a system

If you’re with it so far, then you’re now equipped to see that some of what you may hear said about systems is ambiguous or questionable.

 

Systems thinking is holistic or anti-reductionist? Hard to say, because people confuse “holism” with wholeism and interpret “reductionism” in various ways.

 

A whole system is more than the sum of its parts? True if the “parts” (think rider and bicycle) are structures, and they interact in behaviors that produce results/effects the parts cannot produce on their own. But false if you consider those behaviors and effects to be “parts”.

 

Everything in a system is connected? False if connected implies physical contact. But true if “connected” means every part of a system is related to every other part directly or indirectly, since otherwise there would be two or more discrete systems.

The purpose of a system is what it does? True in the case of a system successfully designed to meet a purpose. Otherwise, false, unless "purpose" is oddly read to mean any result/effect of a system in operation, devoid of any prior intent/motivation.

There is no system out there? False, since "out there" are both abstract systems (descriptions, theories or models) and concrete entities (realities, examples or instantiations) that behave near enough as defined by abstract systems.

 

Structure determines behavior? When physicists discuss the structure in which atoms connect in a molecule, they are discussing the predominant feature of what makes a molecule a molecule. Changing the structure can radically alter the behavior of the molecule. By contrast, when sociologists discuss the structure under which the employees of a business are managed, they are discussing a feature that can be so peripheral to the business at hand that reorganizing the management structure has very little effect on business operations.

 

It is as true or even truer that behavior determines structure, that behaviors create, change and destroy structures. In the world of physics, after the big bang, the initial energy of the universe produced its material structures. In the world of sociology, communications between actors create and maintain the structures of social entities. In the world of data processing, human thought processes define types of process and data structure, and then, instances of processes determine the values in instances of data structures in messages and memories. The first general system theorist, von Bertalanffy, suggested all structures may be seen as the results of behaviors.

 

All systems of interest are non-linear? False. There are interesting linear systems. Anyway, what does “non-linear” mean? Some refer to non-linear lines of behavior, which occur when a feedback loop has an amplifying or dampening effect on some variable, leading a system either to exponential growth in some quantity, or to oscillation around an attractor state, as in homeostasis. Others relate non-linearity to some measure of “complexity” (see below for further discussion).
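
The two lines of behavior just mentioned can be sketched in a few lines of code. The rates and the attractor value below are invented for illustration:

```python
# Two feedback loops acting on a variable x over discrete time steps.
# The rates (0.1, 0.5) and attractor value (20.0) are invented.

def amplifying(x, rate=0.1, steps=50):
    """Reinforcing loop: each step feeds a fraction of x back into x."""
    for _ in range(steps):
        x += rate * x              # feedback amplifies the variable
    return x

def dampening(x, target=20.0, rate=0.5, steps=50):
    """Balancing loop: each step closes part of the gap to the attractor."""
    for _ in range(steps):
        x += rate * (target - x)   # feedback shrinks the gap
    return x

print(amplifying(1.0))   # grows exponentially: 1.1 ** 50, about 117.4
print(dampening(100.0))  # settles on the attractor state, 20.0
```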

 

All systems of interest are fractal? False. For sure, a system may be composed into a wider ecosystem and/or decomposed into smaller subsystems, but that is not what fractal means.

 

In short, don’t believe everything you hear said about systems – or assume you know what it means.

Conclusions and remarks

We will do better to distinguish activity systems thinking from social entity thinking, and apply both to the organization of a business, without confusing one with the other.

The schism in systems thinking

Midgley (2000) presented hard, soft and critical systems thinking approaches as three phases in a historical progression. Yet the hard/soft distinction has been interpreted in several ways, as discussed above. And even mechanical and information system engineers are taught “soft” and “critical” systems thinking techniques.

 

Others have divided the history of systems thinking into three or four waves, as though it is one coherent and continually progressing science, but it isn’t, partly because there is a schism between the two schools of thinking differentiated in this chapter. For discussion of social systems to be coherent, it is necessary to distinguish the evolving social entities we in human society sometimes call "organizations" from the activity systems they participate in, and to recognize there is a many-to-many relationship between them.

 

Activity systems thinking (exemplified in system dynamics, classical cybernetics and soft systems methodology) is about systems with a stable “way of behaving”. Their behavior is regular and repeatable enough to be modelled in terms of parts interacting in an orderly way to produce a set of results or effects that no part can produce on its own.

 

In the 1970s, social entity thinkers, including Heinz von Foerster, started using the term "second order cybernetics” in relation to meta systems thinking, to people thinking about and redefining systems they observe and/or work in. Do not confuse this with classical cybernetics! Rather, look at it as a collection of thoughts about how a group of human beings may be organized, managed or motivated to achieve some aims, or respond to a changing environment.

 

Social entity thinking is not an evolution of activity systems thinking; they are not competitors; they are different perspectives with different traditions. Both have their place, but until systems thinkers distinguish them more clearly, the field will remain confused and confusing.

 

This is not to imply people should espouse or limit their thinking to either activity systems or social entities. Rather, it is to say they cannot have meaningful conversations about these things until they recognise the many ambiguities that make much of today's systems thinking discussion incoherent, if not meaningless babble.

Relevance to “management science”

An enterprise is a human social entity that employs and participates in many business activity systems. It is capable of purposefully redesigning an activity system it employs, rather than waiting for chance evolutionary changes to prove beneficial.

 

How to solve the problem that a business activity system is not producing the effects desired by its directors, employees or customers? Or is producing harmful effects? Or is simply costing too much?

 

We analyse the current system, then design a solution. This typically involves buying, hiring or designing and building an improved generation of a system, or a different system. To this end, some apply soft systems thinking methods. Others apply “management cybernetics” of kinds discussed in later chapters.

To change a human activity system, we do well to involve in the effort the actors who play roles in it. This kind of “self-organization” has little to do with the use of the term in more general system theory. Ross Ashby repudiated (as superfluous metaphysics) the idea that a machine or organism could change its own constitution. Of course, a social entity can do that, but only by acting as a meta system in which self-aware actors observe the way they are currently organised and envisage a new way.

 

The term complex adaptive system (CAS) appears in discussion of management science. Yet it is multiply ambiguous. The three words are variously defined, separately and together. Where the writer has in mind a human social entity that adapts to changing conditions by changing its way of behaving, changing activity systems it participates in, or even changing its aims, isn’t that better called an evolving social entity (ESE)?

Relevance to Enterprise Architecture (EA)

For sure, enterprise architects must attend to social entity thinking. Where architecture work involves changes to people's roles, architects often work in partnership with a business change function.

 

Having said that, most enterprise architects focus more on activity systems thinking. What is documented in an EA repository is mostly descriptive of the human and computer activity systems a business participates in and depends on.

 

The phrase "every enterprise has an architecture" confuses reality with description of it. To say "describe the architecture of the enterprise" is to say "create an architectural description of the enterprise”, one which defines regular business operations at an abstract level. EA involves consideration of three concepts.

 

1)     The social entity. The business as a thing in reality, which employs actors capable of performing business operations.

2)     The phenomenon. The operations performed in reality, at run-time, by the enterprise, which requires actors to have descriptions of their roles in those operations.

3)     A model. The operations as they are represented, at description time, in a readable description of some form.

 

Even if there is no architect, a model of operations must exist in the form of some written instructions or mental model that human and computer actors follow to perform their roles. So, in that sense, every enterprise has an architecture. But it is not the architectural description stored in an EA repository. And the dark secret of EA is that very few enterprises have a comprehensive and useful architectural description.

 

Later chapters at https://bit.ly/2yXGImr expand on and extend the points made above.

Appendix: relevance to “complexity science”

Can you place the following in order of complexity: the solar system, a steam engine, a beehive, a ridden bicycle, IBM, and Google's software (reputedly two billion lines of code)?

 

Obviously, you cannot do this until it is agreed what “complexity” means. In practice, the term is used so variously and so meaninglessly, it is hard to say anything general about it.

 

Aside: This “map of the complexity sciences” is a brave attempt to generalize by imposing a sequence and coherence on a messy jumble of ideas, but it is questionable. The line from classical cybernetics to second-order cybernetics is especially misleading. Confusions arise from sociologists and management scientists taking words from maths and physical sciences and applying them with different meanings to social and business organisations. Until people distinguish activity systems thinking from social entity thinking, the field of "systems thinking" will remain confused.

 

It is easy to say the complexity of a system relates to "emergence", "adaptivity", "self-organization" or “non-linearity” but all those terms are used with two or more meanings - as discussed above. The bi-metal strip in a thermostat controls a heating system in an adaptive, non-linear way to produce the emergent property of an air temperature that oscillates around the one desired by the actor who sets the dial on the thermostat. Is that a complex adaptive system? Or is it a simple control device, based on a simple feedback loop?
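
That thermostat-and-heater loop can be sketched as a bang-bang control simulation. The setpoint, switching band and heating/cooling rates below are invented for illustration:

```python
# A bi-metal-strip thermostat as a simple on/off feedback loop.
# Setpoint, switching band and heat gain/loss rates are invented.

def run_thermostat(temp=15.0, setpoint=20.0, band=0.5, steps=200):
    """Simulate heater switching; return the temperatures visited."""
    heater_on = False
    history = []
    for _ in range(steps):
        if temp < setpoint - band:
            heater_on = True       # strip bends one way: close the circuit
        elif temp > setpoint + band:
            heater_on = False      # strip bends back: open the circuit
        temp += 0.3 if heater_on else -0.2   # heat gain vs heat loss
        history.append(temp)
    return history

history = run_thermostat()
# After warming up, the temperature oscillates around the setpoint.
print(min(history[50:]), max(history[50:]))
```

An air temperature that oscillates around the setpoint emerges from this handful of rules, which rather supports the “simple control device” reading.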

 

How do scientists measure complexity? You may know that to maintain its order, a thermodynamic system consumes energy. Perhaps, more generally speaking, we might measure the complexity of a system in terms of the work needed to impose its definitive pattern on its structure and/or follow its definitive rules?

Complexity in matter and energy processing

Physicists often speak of a physical structure (say, a molecule, or a solar system) as a system. Several have proposed measuring the complexity of a structure in terms of the process needed to build it (or a problem in terms of the process needed to solve it).

 

Seth Lloyd (in this paper) proposed a bounded entity’s complexity is its thermodynamic depth, the gap between its microstates and macrostate. He defined the complexity of a system in terms of the process that produces the system’s macrostate (emergent properties, such as temperature, pressure, volume and density) from its microstate (a specific configuration of microscopic parts), discarding information along the way.

 

Philip Anderson proposed a bounded entity’s complexity is a measure of its asymmetry (or perhaps better, its departure from symmetry), or the complexity of the process to build or draw the structure.

 

Thus, physicists tend to speak of a physically bounded entity as being a system, with a given measure of complexity. When they stop to think about it, they may realize they are really only talking about one perspective of an entity, but they usually don’t. Instead, they plough on as though every material entity has one physical state, which they can identify. By contrast, cyberneticians and soft systems thinkers are aware that, by taking different viewpoints of an entity, they can identify different systems with different states and different ways of behaving.

 

As Ashby indicated, faced with a given material or social entity, there are two kinds of system thinking.

a)     A physicist thinks of a system as a whole entity – “the thing as it is in itself” – its essential nature – regardless of any observer.

b)     A cybernetician thinks of a system as a set of variables and way of behaving of interest to some observer(s).

 

So, it seems to me there is a dichotomy in how the term complexity is used.

a)     A physicist speaks of the complexity they see as inherent in some material entity.

b)     A cybernetician speaks of the complexity of an observer’s subjective perspective or description of a material or social entity.

 

If an entity can realise different systems, then it can have different complexities.

Complexity in information processing

Thermodynamics and information theory are related. Ashby’s cybernetics leans on the concept of information entropy developed by Claude Shannon in the 1940s, which is similar to the concept of thermodynamic entropy established in the 1870s. And yet, in his Introduction to Cybernetics, Ashby wrote:

 

1/2 “Cybernetics started by being closely associated in many ways with physics, but it depends in no essential way on the laws of physics or on the properties of matter.”

1/5 “In this discussion, questions of energy play almost no part—the energy is simply taken for granted.” "Even whether the system is closed to energy or open is often irrelevant”.

4/15 “cybernetics is not bound to the properties found in terrestrial matter, nor does it draw its laws from them.” “What is important is the extent to which the observed behaviour is regular and reproducible.”

7/24. Decay of variety: “any system, left to itself, runs to some equilibrium”. “Sometimes the second law of thermodynamics is appealed to, but this is often irrelevant to the systems discussed here."
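
The information entropy Ashby’s cybernetics leans on is straightforward to compute for a discrete message (a minimal sketch; the example strings are invented):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Bits per symbol: H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

print(shannon_entropy("aaaa"))   # zero bits: a wholly regular message
print(shannon_entropy("abab"))   # 1.0 bit per symbol: two equally likely symbols
print(shannon_entropy("abcd"))   # 2.0 bits per symbol: four equally likely symbols
```

The measure depends only on the regularity of the observed symbols, not on any property of the matter that carries them, which is Ashby’s point in the quotes above.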

 

Kolmogorov complexity (aka algorithmic complexity) is the length of a rule stated in the most efficient fashion possible. In algorithmic information theory the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output.
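
Kolmogorov complexity is uncomputable in general, but a general-purpose compressor gives a rough, computable proxy: the better a string compresses, the shorter its rule. A sketch using Python’s zlib (the example data is invented):

```python
import random
import zlib

def approx_complexity(data: bytes) -> int:
    """Compressed length as a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

ordered = b"ab" * 500   # generated by a short rule: repeat "ab" 500 times
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))   # no short rule

print(approx_complexity(ordered))  # small: the repetition compresses well
print(approx_complexity(noisy))    # near 1000: no shorter description found
```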

 

Although the concept of information is central in much if not most system theory, there is a dichotomy in how the term information is used. In short:

a)     A physicist speaks of information as a quasi-thermodynamic property, inherent in some physical entity, regardless of any observer.

b)     A sociologist speaks of the information created by an observer to describe some physical entity (or a fantasy), and its purpose or use.

 

The physicist Seth Lloyd wrote: “Information is represented physically by the different states of physical system – a frequency of a wave – the level of current in a semiconductor. Almost all physical processes involve the exchange and transformation of information.”

 

Sociologists and software engineers may say the opposite, that the different states of a physical system can be represented by information. They discuss information that actors register, remember and exchange in subjective views of physical reality. This information processing occurs over and above the thermodynamics of physical entities and processes represented by the information. In so far as this information is useful, it may be called knowledge, of which Maturana noted “Knowledge is a biological phenomenon”.

Complexity in “complexity science”

In discussions of “complexity theory” the term complexity is associated with many and various interesting qualities you might observe or describe, including those discussed above.

·       Many define complexity as a feature of self-organizing and/or adaptive systems, disregarding that some such "non-linear" systems are simple in the everyday sense of the term.

·       Ross Ashby pointed to the ambiguity of “self-organization" and repudiated the idea that a machine or organism could change its own organization.

·       Several have proposed ways to measure the computational complexity of a structure in terms of the process needed to produce it from a given starting point.

·       Seth Lloyd defined the complexity of a bounded entity in terms of the "thermodynamic depth" of a process that turns the micro-scale properties of its parts into the macro-scale properties of the whole.

·       Philip Anderson defined the complexity of a bounded entity in terms of it being more asymmetrical than a symmetrical structure.

·       Some now relate complexity to the "edge of chaos", a zone between order and disorder.

 

In other sources, complexity has been associated with the following specific features of an activity system or social entity.

·       Emergent properties. Yet the simplest of systems has emergent properties.

·       Emergent systems. Yet a simple system may emerge from evolution, and a continually evolving entity is an ever-unfolding process rather than a modellable system.

·       Networks. Yet there are simple networks.

·       Open systems. Yet there are simple open systems, and complex closed ones.

·       Adaptive or self-organizing behavior emerging from feedback loops in a closed non-linear system. Yet consider the simple feedback loop between a thermostat and a heater.

·       Decisions made on a random or statistical basis. Yet consider the randomness in the calls made in a game of poker, or the probability that a customer fails to pay for goods received. Those decisions make the outcomes of a system more unpredictable, but (as in chaos theory) simple systems can be unpredictable.

·       Chaotic system dynamics. Yet such a chaotic system is still an orderly arrangement of stocks and flows, as may be shown in a simple causal loop diagram.

 

Most features above can be found in simple systems; a few are not relatable to any system that can be modelled. As a result, it is hard to pin down what "complexity science" or "complexity theory" is about. Beyond bundling interesting ideas people want to talk about under the heading of a "science", there seems no overarching coherence to the field.

 

If complex does mean any of the above (say, unpredictable, or chaotic), then it would be clearer to use the more specific word. Especially since some unpredictable and chaotic systems (say, a double pendulum) are simple in any normal sense of the term.

 

Surely, there is no way to measure the complexity of a thing per se? We can only measure a thing with regard to a particular perspective or description of it, or a particular algorithm? And since a thing can be described in many different ways, and at any level of abstraction you choose, it has many different complexities.

 

For sure, every business of interest to us has many complexities. Many system design problems are messy, confusing and cannot be solved in a way that meets all requirements (whether due to time, cost, or resource constraints, or conflicts between requirements). Many feature most if not all of the ten points that define wicked problems. There is rarely a perfect answer; rather, there are trade-offs to be made between competing goals, and balances to be drawn between different design options. So, the best solution we can offer is a compromise that trades off between different needs.

Complexity at the edge of chaos?

The term chaos is used in two ways:

a)     The wide variety of outcomes produced by a system, given tiny variations in its starting conditions (as in chaos theory)

b)     Disorder, the absence of a discernible pattern (as in thermodynamics).
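
Sense (a) can be illustrated with the logistic map, a standard textbook example (not from this chapter): a one-line rule whose trajectories diverge from a tiny variation in starting conditions:

```python
def logistic_gap(x0, x1, r=4.0, steps=50):
    """Iterate two copies of the logistic map x -> r*x*(1-x), chaotic at
    r = 4, and return the widest gap the two trajectories reach."""
    gap = 0.0
    for _ in range(steps):
        x0, x1 = r * x0 * (1 - x0), r * x1 * (1 - x1)
        gap = max(gap, abs(x0 - x1))
    return gap

# Starting conditions differ by one part in two million, yet the two
# trajectories soon range far apart across the unit interval.
print(logistic_gap(0.2, 0.2000001))
```

A rule this simple producing unpredictable outcomes is why “chaotic” should not be read as a synonym for “complex”.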

 

Here, chaos is disorder. To impose order on some parts of a whole is to arrange them according to a sequence or a pattern, as might be drawn in a causal loop diagram. Seth Lloyd made the interesting observations that, intuitively,

·       wholly ordered and wholly random structures are not complex

·       duplicating a complex structure does little to increase the complexity of the whole

 

If a structure is highly ordered, a perfectly symmetrical triangle or octagon, then it seems simpler than an asymmetrical triangle or octagon. On the other hand, if a structure is a randomly arranged set of points, if there is no pattern at all, then it seems simple in a different way. Some now relate complexity to the "edge of chaos", a zone between order and disorder, another ill-defined concept.
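
Lloyd’s second observation can be illustrated with a compression proxy (a sketch using Python’s zlib; the data is invented). Note the proxy captures only the duplication point: under compressed length, a wholly random structure scores as maximal rather than simple:

```python
import random
import zlib

random.seed(1)
# An incompressible (random) stand-in for a maximally intricate structure.
structure = bytes(random.randrange(256) for _ in range(1000))

once = len(zlib.compress(structure, 9))
twice = len(zlib.compress(structure * 2, 9))

# Duplicating the structure adds far less than another 1000 bytes: the
# compressor describes the second copy as "repeat the first".
print(once, twice)
```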