The ideas of system theory and systems thinkers

(Separating systems from entities)

https://bit.ly/2w5XKNK

Copyright 2017 Graham Berrisford. One of several hundred papers at http://avancier.website. Last updated 17/03/2020 19:25

                                                  

“A very interesting potted history of the evolution of systems thinking and the different strands of thoughts that have evolved.”

“Thank you for sharing.”

“I cannot overstate the sense of clarity that I get whenever I read your articles. It is very much appreciated.”

 

The systems of interest to us here feature actors performing activities.

What is here called system theory is about regular activities, which are performed by actors.

What is here called systems thinking is looser than system theory; it is about a group of actors who interact in some activities to some purposes.

Some systems thinkers try to have it both ways, or slip from one to the other without noticing.

Some use the terms of system theory, but with different meanings.

An aim here is to disambiguate some terms and point to how system theory can be reconciled with systems thinking.

 

Contents

Preface on system theory and systems thinking

1. Basic system theory

2. Separating systems from entities

3. System theory in the 20th century

4. Distinguishing systems thinking from system theory

5. System theory in the 21st century

6. Conclusions

APPENDIX: more sociologically-inclined systems thinking

 

Preface on system theory and systems thinking

 

Systems

The systems of interest to us here feature actors performing activities.

The actors are active structures, human and/or other, that occupy some space.

The activities are behaviors performed by actors, over time.

 

“The first decision [a theoretician has to make] is what to treat as the basic elements of the social system.

The sociological tradition suggests two alternatives: either persons or actions." Seidl 2001

 

In other words, there are two kinds of system.

There is a set of inter-related activities - in which actors interact - the domain of what may be called "system theory".

And there is a set of inter-related actors - who interact - the domain of what may be called "systems thinking".

 

System theory is about regular activities, which are performed by actors.

It embraces Ashby's cybernetics, Forrester's system dynamics, and some “soft systems” techniques.

It surfaces in enterprise, business and software architecture models, such as business activity models, process flow charts and data flow diagrams.

And in social systems definable as activities performable by different actors in different social entities.

 

In general system theory, gurus (e.g. Ashby, Forrester) take an activity-centric view of systems.

And when applying general system theory to human activity systems we presume:

1.     A system features activities performed by actors, who act on objects and interact with each other.

2.     Interacting involves exchanging information encoded in the data structures of messages and memories.

3.     Actors create and find information in messages and memories by encoding meaning in symbols, and decoding meanings from symbols, using a language.

4.     Successful communication in a system requires its actors to share a language - to share the meanings of symbols created and found in messages and memories.

5.     Between discrete generations of the system, its language may be changed.
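
To make points 2 to 4 concrete, here is a minimal illustrative sketch (not from the source; the tiny "codebook" language is invented) of actors encoding meaning into symbols and decoding it again, where communication succeeds only when the language is shared.

```python
# Illustrative only: meanings are encoded as symbols using a shared language
# (a codebook); decoding succeeds only when the receiver shares that language.

english_codebook = {"stop": "RED", "go": "GREEN"}    # meaning -> symbol

def encode(meaning, codebook):
    """Create information: encode a meaning as a symbol in a message."""
    return codebook[meaning]

def decode(symbol, codebook):
    """Find information: decode a symbol back into a meaning, if possible."""
    reverse = {s: m for m, s in codebook.items()}
    return reverse.get(symbol, "meaning not understood")

message = encode("stop", english_codebook)
print(decode(message, english_codebook))             # shared language -> "stop"

other_codebook = {"stop": "HALT"}                    # a different language
print(decode(message, other_codebook))               # -> "meaning not understood"
```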

 

Relevance to complex system theory

Much "complexity science" is about a set of interrelated activities in which autonomous actors interact, and in which the rules governing those interactions lead to

·       unexpected "emergent properties" (e.g. the shimmering of a shoal of fish, or the V-shaped flight of geese) and/or

·       non-linear state change trajectories (e.g. unit prices in a stock market, or the number of people infected by a virus).

 

Even if the results of an activity system are unexpected or complicated, the rules of that system may be simple by any normal definition of complexity.

 

Systems thinking is looser than system theory.

It is about actors in a social group or network, who interact as they choose to some purposes - be they shared or individual.

Sociological systems thinking presumes actors in social groups or networks are able to send and receive information.

E.g. Organization structures are often seen as power structures, or interaction patterns, in which actors exchange information in the form of directions and reports.

 

Sociological systems thinkers often take an actor-centric view of systems.

E.g. Herbert George Blumer (1900 to 1987) was an American sociologist.

His “symbolic interactionism” rests on four premises about human activities:

1.     A system features actors, who act on objects and interact with each other.

2.     Actors act on things on the basis of the meanings those things have for them.

3.     The meanings of things derive from the social interactions between actors.

4.     Meanings are handled in, and modified through, an interpretive process.

 

How does systems thinking differ from system theory?

On the surface, systems thinking seems similar to system theory.

However:

·       Re. point 1, system theory starts from activities, systems thinking starts from actors.

·       Re. point 2, system theory and systems thinking are the same.

·       Re. point 3, in system theory, the meanings of things to an actor may be down to their biology or individual experience of the world, not just social interactions.

·       Re. point 4, in system theory, successful communication requires actors to share a language, not just to interpret a message as they see fit.

 

Finally, system theory allows a system to change only in discrete inter-generational steps.

Systems thinking is a looser kind of sociological thinking in which a system can change continually.

1. Basic system theory

In general discussion, the term system is often used for a collection of inter-connected things, parts or people.

If every whole divisible into parts is a system, then everything larger than a quark is a system.

If every describable entity or situation (larger than a quark) is a system, then the term is a noise word; it adds no useful meaning.

 

The systems of interest here are not static structures, like the periodic table.

They are dynamic, meaning they display behavior and change state over time.

E.g. Consider a tennis match, whose current state is displayed on the score board.

 

So, when and where is an entity or situation properly called a system?

Answer: when and where its parts interact in regular ways, where there is a pattern of behavior.

As might be documented in a causal loop diagram, or in Checkland’s “Business Activity Model”.

 

System theory is useful whenever we seek to:

·       understand how some outcome arises from some regular behavior.

·       predict how some outcome will arise from some regular behavior.

·       design a system to behave in a way that produces a desired outcome.

·       intervene in a situation to change some system(s) for some reason.

Core terms and concepts

There is probably little dispute that there are forms and functions – actors and activities - in a system.

There are descriptions and realizations - abstract and physical - of systems.

There are accidental and purposive - natural and designed - systems.

This section generalises more concepts from Ashby’s introduction to cybernetics and Meadows’ introduction to system dynamics.

And does this in a way compatible with Bertalanffy’s general system theory.

 

In most real-world (physical) systems of interest, some actors interact in regular activities that advance the state of the system.

 

System | System kind | Actors (active structures) | Activities (behaviors) | State (facts of interest)
A solar system | physical | planets and star | planets orbit the star | positions of the planets
A windmill | physical | sails, shafts, cogs, millstones | rotate to transform wind energy and corn into flour | wind speed, quantity of corn, quantity of flour
A digestive system | biological | teeth, intestines, liver, pancreas etc. | transform food into nutrients and waste | quantities of nutrients and waste in the system
A termite nest | biological | termites | disperse pheromone, deposit material at pheromone peaks | the structure of the nest
A prey-predator system | ecological | e.g. wolves and sheep | births, deaths and predations | e.g. wolf and sheep populations
A tennis match | social | tennis players | motions of the ball and the players | game, set and match scores
A church | social | people | play roles in the church’s organization and services | many and various attributes of roles and services
A circle property calculator | software | software component | calculate perimeter, calculate area | radius, the value of pi (invariant)
An information system | socio-technical | humans and/or computers | messaging and message processing | facts in memories
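
To make the “circle property calculator” row above concrete, here is a minimal sketch (illustrative only, not from the source) of that software system: one actor (a software component), two activities (calculate perimeter, calculate area), and a state consisting of the radius plus the invariant value of pi.

```python
import math

class CircleCalculator:
    """A minimal 'circle property calculator' system."""

    def __init__(self, radius: float):
        self.radius = radius       # state variable
        self.pi = math.pi          # invariant state

    def perimeter(self) -> float:  # activity 1
        return 2 * self.pi * self.radius

    def area(self) -> float:       # activity 2
        return self.pi * self.radius ** 2

calc = CircleCalculator(radius=2.0)
print(calc.perimeter(), calc.area())
```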

 

Actors

Actors are active structures of any kind (people, planets, molecules, machines or whatever).

Actors interact in a system to perform its characteristic activities.

The activities are regular or repeatable processes that advance the state of the system.

 

Activities

Activities are regular behaviors performed by actors.

Activities create, use and change structures, both passive structures (material or information) and active ones (actors).

The results of activities are internal state changes and outputs (which change the state of the external environment).

 

System state

State is the current status of a system's physical materials and/or logical information/memory.

Most systems of interest are stateful, rather than stateless.

This means the system’s structures persist over time, and between discrete activities.

Actors advance the state of the system, which sometimes means recording information for future use.

Importantly in sociology, actors may rest between activities, or do something irrelevant to, even in conflict with, a system they play a role in.

 

In Ashby’s cybernetics, a system's state is the values that a particular set of state variables have.

“A variable is a measurable quantity that has a value.”

“The state of the system is the set of values that the variables have.”

 

In Meadows’ system dynamics, a system's state is the quantities of a particular set of stocks, populations or qualitative attributes.

The trajectory of a system’s state change over time, as shown on a graph, might be oscillating, linear, curved or jagged.

Whatever its shape, this “line of behavior” is an inexorable result of the system following its rules.
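
To illustrate those definitions (with a made-up example, not from the source), the sketch below treats a system's state as a set of named variable values, and records the line of behavior as the sequence of states produced by repeatedly applying the system's rules.

```python
# State = the set of values that the system's variables have (Ashby).
# The "line of behavior" is the trajectory of those values over time.
# The rules and starting values here are invented for illustration.

def step(state):
    """One application of the system's rules."""
    return {
        "stock": round(state["stock"] * 1.1, 2),                   # grows by 10%
        "waste": round(state["waste"] + 0.05 * state["stock"], 2), # accumulates
    }

state = {"stock": 100.0, "waste": 0.0}
line_of_behavior = [state]
for _ in range(5):
    state = step(state)
    line_of_behavior.append(state)

for s in line_of_behavior:
    print(s)
```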

 

Changes to the information state variables of a system may reflect changes to the material state of the real-world entity that is modelled.

On the other hand, the entity may experience material state changes not reflected in information state changes.

 

System | Information state | Material state
A prey-predator system | wolf and sheep populations | the current physical condition of each wolf and sheep
A tennis match | game, set and match scores | the current condition of the court, the balls and the players

 

System boundary – input and output

The boundary of a system is arbitrary, a choice made by its observers or designers.

A closed system is not connected to or influenced by anything outside its boundary.

An open system is connected to its wider environment by inputs and outputs – which are describable and testable.

The way that an open system’s activities are driven by inputs is characteristic of that system.

 

Interactions – physical and logical

The actors or subsystems of a system interact physically and/or logically.

They may interact by physical flows of energy (e.g. electromagnetic radiation) or force (e.g. gravity).

Or else interact by exchanging logical information, either directly by sending/receiving messages, or indirectly by writing/reading some shared memory.

 

Information

The interest here is particularly in social systems in which actors interact by communicating - by exchanging information - such as descriptions, directions and decisions.

Actors play roles that involve creating and using information in messages and memories.

They respond to information in messages, often in ways determined by some information in memory.

Remembered information represents the last known state of entities or events of importance to the business at hand.

Input messages can update the information state.

 

Some position information in a hierarchy of Wisdom, Knowledge, Information and Data (WKID).

Here is a way to make sense of that hierarchy.

 

·       Data = a structure of matter/energy in which information has been created (encoded) or found (decoded).

·       Information = meaning created or found in data by an actor.

·       Knowledge = information that is accurate enough to be useful.

·       Wisdom = the ability to apply knowledge in new situations.

 

Information is only created or found by an actor when performing a data coding/decoding process, and with reference to a language.

And to succeed in communicating, the communicating actors must share the same language.

(By the way, with respect to these definitions, Shannon's "information theory" is about maintaining the integrity of data structures rather than information.)

 

System purposes or goals

What is the purpose or goal of the solar system?

There are different views of what goals (if any) a system meets or is supposed to meet.

Different people may perceive and express the purposes of a system in different ways, which may be related or distinct.

 

For some, purposes are goals declared by observers inside or outside the system.

Goals may be given to a system by its observers, sponsors, designers or other stakeholders.

The actors who play roles in a system may share those given goals and/or have different goals.

 

On the interplay between personal and shared purposes

Peter Senge recommends building a shared vision from personal visions through interaction, give and take.

You may have to suppress some personal visions that others do not share, at least in the same social network/group.

However, you do not (as seems presumed in some systems thinking) belong to only one social group; you belong to many.

As different groups develop different visions, you may choose to spend more of your time with those whose shared vision is closer to your own.

The internet helps you do the reverse of what Senge suggests: it helps you discover groups that already share your personal vision.

Clicking on "accept" so often (as you do) means that the internet will steer you towards such groups.

 

For some, purposes are what the system does by way of advancing system state and/or producing outputs.

Beer coined the phrase “the purpose of a system is what it does” (POSIWID).

 

For some, purposes are lines of behavior, showing how the system changes state over time.

Meadows said you deduce a system’s purposes from its behavior over time, not from rhetorical declarations of goals.

The epistemological triangle

It is important to understand that systems exist in two forms, abstract and physical, related via physical entities.

There are abstract systems – which can be named and described in terms of aims, roles and activities (e.g. the rules of poker).

There are physical entities - the actors and resources needed to realise an abstract system (e.g. a card school with a pack of cards).

There are physical systems - realisations in the real world of an abstract system by a physical entity (e.g. a game of poker).

 

To paraphrase Ashby (later), a common error is to point to the physical entity and call it "a system".

The notion that a system is a "perspective" of a real-world entity or situation is deeply embedded in systems theory.

An entity is only a "physical system" when, where and in so far as it realises an "abstract system".

 

Moreover, one entity may realise several systems at once.

E.g. you observe a physical entity/situation - the members of a card school, playing cards.

·       As a card player yourself, you conclude they are realizing the "poker" system - playing their cards according to the roles and rules of that game.

·       As an economist, you conclude they are realizing a system to transfer money from the less skilled to the more skilled. 

·       As a psychologist, you conclude each player is realizing a system in which they are conditioned by occasional and near random rewards to repeat a behavior. 

·       As a sociologist, you conclude the players are realizing a system in which the card game is a front for exchanging anecdotes and reinforcing social bonds and/or a dominance hierarchy. 

·       As a heating engineer, you conclude the players are realizing a system that generates heat and so reduces the heating bill of the host.

 

The five systems above are different, and might even be in conflict to some extent.

Each is an abstraction from the same real-world entity, made by an observer, given the interests they bring to their observation.

 

We need continually to remind ourselves that a map is not the territory, a description is an abstraction from a reality.

This paper employs a new device that separates and relates describers, descriptions and realities in an epistemological triangle.

 

Epistemology

System theory

Descriptions

<create and use>     <represent>

Describers <observe and envisage> Realities

Abstract systems

<create and use>              <represent>

System describers <observe and envisage> Physical systems

 

For a detailed explanation of this triangle, read “A philosophy of systems”.

Later in this paper, the triangle is edited to reflect the system theories of Bertalanffy, Checkland, Ackoff, Forrester and Ashby.

2. Separating systems from entities

Many ideas that appear in systems thinking discussion today can be traced to earlier sources.

In system theory today, some of these ideas are used, some must be set aside, and some re-expressed.

 

Looking back more than 100 years, notable authors included:

·       Adam Smith (1723 to 1790) specialisation of and competition between enterprises.

·       Charles Darwin (1809 to 1882) system mutation by reproduction with modification.

·       Claude Bernard (1813 to 1878) homeostatic feedback loops.

·       Willard Gibbs (1839 to 1903) the development of chemistry into a science.

·       Vilfredo Pareto (1848 to 1923) the Pareto principle.

 

Gibbs defined a system as: “a portion of the ... universe which we choose to separate in thought from the rest of the universe.”

E.g. a planet, a tree, a rain forest, a tennis match, a brick, a church (building or social organization), or a socio-technical entity such as IBM.

However, since the 1950s, system theorists have separated the concept of a system from Gibbs’s discrete entity.

 

Entities

Our universe is a space-time continuum.

Every intelligent animal makes sense of the universe by carving it up into mentally-separable chunks.

We naturally perceive and describe the world around us in terms of discrete entities.

Some entities have a physical boundary (e.g. the moon, a tree, a mate).

Other entities have only the boundaries we ascribe to them (e.g. a rainbow, a symphony, the democratic party, a corporation).

In management science, the boundary of an entity often encapsulates what must be managed or monitored.

And reflects the scope in the mind of somebody who sponsors work to direct, design or change an entity.

 

An animal recognizes an entity in its environment by matching a perception of it to a remembered pattern.

In other words, we identify an entity by comparing it to a mental model.

Given the gift of human language, we can also identify an entity by giving it a name (or another symbolic identifier).

Hearing the name alone is enough to bring a mental model to mind.

Our mental models may have fuzzy edges, may overlap, may conflict in some way, and may change over time.

 

In this context, everything you can name or describe is an entity; there is no limit to what you can refer to as an entity.

By contrast, what you can refer to as a system is constrained; since without some constraint, the term system has no special meaning.

 

Separating systems from entities

We can describe an entity of any size, from a quark to the universe, if not beyond.

We can divide an entity into parts, and compose it with other entities into a larger entity.

If every entity that is divisible into parts is called a system, then the concept of a system brings nothing new or useful.

In system theory, a system is a particular way of looking at a real-world entity or situation.

The entity must behave systematically.

 

So, presumptions here include:

1)     we can observe an entity in the real world, describe it, and prove it exists to others’ satisfaction

2)     we can observe an entity's behavior, describe it, and test how well it conforms to a particular system description

3)     we can also envisage entities and systems that do not yet exist, and then make them

4)     we can do all above accurately enough to be useful.

 

Separating social systems from social entities

The first sociological thinkers included:

·       Herbert Spencer (1820 to 1903) social systems as organic systems.

·       Emile Durkheim (1858 to 1917) collective consciousness and culture.

·       Gabriel Tarde (1843 to 1904) social systems emerge from the actions of individual actors.

·       Max Weber (1864 to 1920) a bureaucratic model – hierarchy, roles and rules

·       Kurt Lewin (1890 to 1947) group dynamics.

·       Lawrence Joseph Henderson (1878 to 1942) meaning in communication

 

Much “systems thinking” discussion is about social entities

Many have likened a social entity or business organization to a biological organism.

And many have presumed that a social or business system is homeostatic.

Though these ideas have influenced systems thinkers for 150 years, they are at least somewhat misleading.

 

(Read thinkers who foreshadowed system theory for ideas attributed to the thinkers above.)

Post-war system theory

When “system theory” became established as a topic in its own right is debatable.

Some suggest system theory is a branch of sociology.

“Systems theory, also called social systems theory... https://www.britannica.com/topic/systems-theory

Others suggest the reverse, that social systems thinking is a branch of general system theory.

 

The modern concept of a system became a focus of attention after the second world war.

And there was a burst of systems theory in the period 1940 to 1980.

Influential bodies and groups have included these three.

 

1941 to 1960: The Macy Conferences - cross-disciplinary meetings in New York.

On cybernetics, with a leaning towards the Macy Foundation’s mandate to aid medical research.

Topics included connective tissues, metabolism, the blood, the liver and renal function.

Also, infancy, childhood, aging, nerve impulses, and consciousness.

 

1949 to 1958: The Ratio Club - a cross-disciplinary group in the UK.

It was founded by neurologist John Bates to discuss cybernetics.

Many members went on to become prominent scientists - neurobiologists, engineers, mathematicians and physicists.

Members included psychologists (Ashby) and mathematicians (Turing).

See the next section below.

 

1955 to date: The International Society for the Systems Sciences (ISSS).

This was conceived in 1954 by Bertalanffy, Rappaport and Boulding; see the next section.

General system theory (Bertalanffy, Rapoport, Boulding)

The 1954 meeting of the American Association for the Advancement of Science in California was notable.

Some people at that meeting conceived a society for the development of General System Theory (the ISSS mentioned above).

They included:

·       Ludwig von Bertalanffy (1901-1972) the cross-science notion of a system

·       Anatol Rapoport (1911 to 2007) wrote on game theory and social network analysis.

·       Kenneth Boulding (1910-1993) applying general system theory to “management science”.

 

Ludwig von Bertalanffy was a biologist who introduced the idea of a cross-science general system theory in the 1940s.

“There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind.”

His aim was to discover patterns and elucidate principles common to systems in every scientific discipline, at every level of nesting.

He looked for concepts and principles applicable to several disciplines or domains of knowledge rather than to one.

A selection of concepts follows, accompanied by some definitions.

 

System structure and behavior

“A similar hierarchy is found both in "structures" and in "functions."”

In the last resort, [structures and behaviors] may be the very same thing: in the physical world matter dissolves into a play of energies.” Bertalanffy 1968

Concepts:

Part: a structure, be it active (actor) or passive (material or information).

Process: a sequence of activities that changes or reports a system’s state, or the logic that controls the sequence.

State: the current material or information structure (variable values) of a system, which changes over time.

 

System boundary

“Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow…” Bertalanffy 1968

A boundary line (physical or logical) separates a system from its environment, and encapsulates an open system as an input-process-output black box.

Concepts:

System environment: the world outside the system boundary.

System interface: a description of inputs and outputs that cross the system boundary.

 

Inter-system flows and feedback

“Another development which is closely connected with system theory is that of… communication.

The general notion in communication theory is that of information.

A second central concept of the theory of communication and control is that of feedback.” Bertalanffy 1968

Concepts:

Flow: the conveyance of a force, matter, energy or information.

Information or data flow: the conveyance of information in a message from a sender to a receiver.

Feedback loop: the circular fashion in which output flows influence future input flows, and vice-versa.

 

Holism and emergent properties

“General System Theory… is a general science of wholeness… systems [are] not understandable by investigation of their respective parts in isolation.” Bertalanffy 1968

Holism means considering how the parts of a whole interact to do things they cannot do on their own.

E.g. consider how the smooth forward motion of rider and bicycle emerges from their interaction.

Or the properties of a higher-level system (e.g. consciousness) emerge from the interactions of lower-level systems (e.g. neurons).

 

Reductionism - studying or describing each part on its own - is often deprecated by “systems thinkers”.

The difficulty with the holism/reductionism distinction is that the boundary of the whole is an arbitrary choice made by an observer.

As you zoom in or zoom out, a holistic view becomes reductionistic or vice-versa.

 

E.g. Consider the beating of the human heart.

An observer describes the regular beat as an emergent property of parts (muscles) interacting in a whole (the heart).

You describe it as an ordinary/assumed property of one part in a wider whole (a body).

 

E.g. Consider the flexing of the Tacoma Narrows bridge.

An observer wrongly describes the flexing as an emergent property of the whole thing (the bridge).

You realise it is really an emergent property of a wider whole in which some part(s) of the bridge interact with some part(s) of its environment (the wind).

 

Concepts:

Holism: looking at a thing in terms of how its parts join up rather than studying and dissecting each part on its own.

Emergence: the appearance of properties in a wider or higher-level system, from the coupling of lower-level subsystems.

 

Organicism / hierarchy

“We presently "see" the universe as a tremendous hierarchy, from elementary particles to atomic nuclei… to cells, organisms and beyond to supra-individual organizations.” Bertalanffy 1968

To describe and understand any large and complex reality, observers tend to create a hierarchical description.

By zooming in and out, observers can decompose a system into subsystems, and compose subsystems into a system.

Note that the decomposition of a system is not usually fractal, since the system at one level is very different from the next higher or lower system.

 

Abstraction

“One of the important aspects of the modern changes in scientific thought is that there is no unique and all-embracing "world system."

All scientific constructs are models representing certain aspects or perspectives of reality.” (Bertalanffy 1968, p94)

In other words, a system is a selective perspective of a reality.

Moreover, one reality may be represented by different scientific models - as light may be modelled as waves or particles.

 

Concepts:

We can represent the separation of models from reality using our triangle.

 

General system theory

Models

<create and use>          <represent>

Observers <observe and envisage> Realities

 

Bertalanffy often referred to real-world entities as systems.

And provided you disregard any failing or diseased parts, it is indeed natural to see a biological organism as a system.

Each biological entity, such as an animal or a tree, grows as a system.

Its entire existence depends on its cells (all sharing the same DNA) cooperating in systematic behaviors.

The organism/system is built up from a single cell by deterministic processing of the information in that cell’s DNA.

 

Some still draw that questionable analogy - more than a century old - between a business organization and a biological organism.

But while it feels natural to speak of an organism as a system, to speak of a business as a system is a very different matter.

The cells of an organism are born to play their particular role in the whole entity; they have no role outside of that.

All human actors in a business were born or created outside it; they may play various roles inside and outside that entity, possibly in conflict with each other.

 

Anatol Rapoport was a mathematical psychologist and biomathematician who made many contributions.

He pioneered the modelling of parasitism and symbiosis, researching cybernetic theory.

This gave a conceptual basis for his work on conflict and cooperation in social groups.

 

When actors interact in a system, they may cooperate, as within a football team or a business system.

But they don’t necessarily help each other; they may compete, as in a game of poker, or a market; or hurt each other, as in a boxing match or a war.

Cooperation, conflict and conflict resolution are a focus of bio-mathematics and game theory.

 

Game theory: cooperation and conflict resolution

In the 1980s, Rapoport won a computer tournament designed to further understanding of the ways in which cooperation could emerge through evolution.

He was recognized for his contribution to world peace, through nuclear conflict restraint, via his game-theoretic models of psychological conflict resolution.
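
The tournament in question is generally identified as Axelrod's iterated prisoner's dilemma tournament, which Rapoport's simple "tit for tat" entry won. As an illustration only (the payoff values are the standard ones; the rest is a sketch, not Rapoport's own code), tit for tat can be written in a few lines: it cooperates first, then copies whatever its opponent did last.

```python
# Standard prisoner's dilemma payoffs for the first-named player:
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(own_history, their_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history_a, history_b)
        b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))   # exploited once, then retaliates: (9, 14)
```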

 

Social network analysis

Rapoport showed that one can measure flows through large networks.

This enables learning about the speed of the distribution of resources, including information through a society, and what speeds or impedes these flows.

 

Kenneth Boulding was among the first to explore the application of general system theory ideas to “management science”.

He speculated whether the elements of a social system are the human actors, or the roles they play.

Thus, he tentatively acknowledged the difference between a social entity and an abstract system.

He is probably best known today for this hierarchical classification of system types.

1.     Static structures

2.     Clock works

3.     Control mechanisms

4.     Open systems

5.     Lower organisms

6.     Animals

7.     Man

8.     Socio-cultural systems

9.     Symbolic systems

 

“This survey is impressionistic and intuitive with no claim for logical rigor.” Bertalanffy 1968

The hierarchy mixes up several scales, including:

·       Steps in complexity (simple to complex)

·       Steps in composition (small to large)

·       Steps in biological evolution

 

As Bertalanffy said, none of those scales are applied rigorously.

E.g. The hierarchy stretches from static structures (1) through robotic organisms and self-aware organisms to their logical products (9) which are static structures (1).

E.g. Clockwork mechanisms (2) are open (4) in that they acquire energy from a winder, and give that energy to some movable entity of interest.

E.g. The interactions between cells in a lower organism (5) can be much more complex than interactions between animals in a social system (8).

(The complexity of a system can only be measured with respect to a given system description, at a given level of abstraction.)

 

For more, read Introducing general system theory.

Bertalanffy didn’t like some directions in “the System Movement”, but saw the movement as “a fertile chaos” that generated many insights and inspirations.

3. System theory in the 20th century

As system theory developed, authors distinguished abstract systems (observers’ perspectives) from physical entities or situations.

·       In von Bertalanffy’s general system theory, a “system model” is a selective perspective of a system reality.

·       In Forrester’s system dynamics, a system is a mathematical model of inter-stock flows that increase and decrease stocks.

·       In Ashby’s cybernetics, a system is an observer’s highly selective model of what something does.

·       In Ackoff’s vocabulary for systems, an “abstract system” idealises a “concrete system”.

·       In Checkland’s approach to business analysis, a “soft system” is a perspective of a real-world business.

 

All these gurus see a system as an abstraction from a reality (be it observed or envisaged).

Moreover, the relationship between physical entities and abstract systems is many-to-many.

One abstract system may be realised by countless physical entities; less obviously, one physical entity may realise countless abstract systems.

 

This section outlines system theories mentioned above.

System Dynamics (Forrester, Meadows)

System Dynamics was founded and first promoted by:

·       Jay Forrester (1918 to 2016) every system is a set of quantities that are related to each other.

·       Donella H. Meadows (1941 to 2001) resource use, environmental conservation and sustainability.

 

Jay Forrester (a professor at the MIT Sloan School of Management) was the founder of System Dynamics.

In Forrester’s system dynamics (1950s) a system is a mathematical model of inter-stock flows that increase and decrease stocks.

·       A stock is a variable number representing the level of a quantity, or instances of a type.

·       A flow between two stocks represents how increasing or decreasing one stock increases or decreases another stock.

·       A causal loop connects two or more stocks by flows that form a circular feedback loop.

 

Wherever one stock has an effect on another stock, that causal relationship is defined as an inter-stock flow.

So, the system can be modelled in a causal loop diagram, supported by rules for the flows that modify the quantities of the stocks.

Such a model is of course most useful when it accurately represents the behavior of things in the real world – and how stocks, populations or resources change over time.

 

Remember that all the systems of interest here are dynamic, they display behavior and change state over time, either continually or in discrete steps.

Forrester was concerned in the first place with entities that can be modelled as having continuous dynamics.

Mathematically, the model is a set of coupled, nonlinear, first-order differential (or integral) equations.  

But when system dynamics is simulated using software tools, the continuous dynamics is converted into discrete event-driven dynamics.

A software tool divides time into discrete intervals; it steps the model through one interval at a time, and reports how stocks, populations or resources change over time.
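
For illustration (the rates and starting values are invented, not taken from the source), the sketch below steps the wolf-and-sheep stock-and-flow model through discrete time intervals in exactly that way: in each interval the flows (births, deaths and predations) are computed from the current stocks and then applied to them.

```python
# A minimal stock-and-flow simulation of the wolf/sheep prey-predator system.
# Stocks: sheep and wolves. Flows: births, deaths and predations.

def simulate(sheep=100.0, wolves=10.0, dt=0.1, steps=200):
    history = [(sheep, wolves)]
    for _ in range(steps):
        sheep_births = 0.8 * sheep * dt
        predations   = 0.02 * sheep * wolves * dt
        wolf_births  = 0.01 * sheep * wolves * dt
        wolf_deaths  = 0.5 * wolves * dt
        sheep  = max(sheep + sheep_births - predations, 0.0)
        wolves = max(wolves + wolf_births - wolf_deaths, 0.0)
        history.append((sheep, wolves))
    return history  # the line of behavior of the two stocks over time

for step_number, (sheep, wolves) in enumerate(simulate()):
    if step_number % 20 == 0:
        print(f"t={step_number * 0.1:4.1f}  sheep={sheep:7.1f}  wolves={wolves:6.1f}")
```

This is a discrete (Euler) approximation of the coupled differential equations mentioned above; the smaller the time interval, the closer the simulation comes to the continuous dynamics.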

 

System Dynamics

Mathematical models of causal loops

<create and animate>                          <represent>

System modellers <observe and envisage> Inter-related quantities of things

 

Today, akin to system dynamics, there are agent-based approaches to the analysis of systems.

 

Donella Meadows (1941 –2001) was an environmental scientist, teacher, and writer.

She was much concerned with resource use, environmental conservation and sustainability

She is surely best known as lead author of the popular and influential book “Thinking in Systems: a Primer.”

 

Chapter one of Meadows' book starts thus:

“A system isn’t just any old collection of things. A system is an interconnected set of elements that is coherently organized in a way that achieves something.

If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.”

 

In this definition, Meadows makes what might be seen as a canny generalisation, since readers may interpret each kind of thing in different ways.

·        Elements: may be read as physical material structures (unitary actors or resource items), or logical data structures (quantities of a population, resource or quality).

·       Interconnections: may be read as physical material flows, or logical data flows (signals or messages), or even, perhaps, social relationships.

·       Functions or purposes: may be read as state changes over time (lines of behavior), or motivations (goals or intentions, individual or communal).

 

Did Meadows intend all these interpretations to be made or allowed?

People aren’t always clear which interpretation they have made, and sometimes slip from one to another.

Meadows didn’t entirely avoid slipping between meanings, but she was more specific about what her three basic terms mean.

 

Elements (structures)

Meadows wrote of elements as structures – physical or logical - material or data (quantitative variable attributes).

 “For example, the elements of your digestive system include teeth, enzymes, stomach, and intestines.” [material structures]

“A football team is a system with elements such as players, coach, field, and ball” [material structures.]

“The elements of a system are often the easiest parts to notice, because many of them are visible, tangible things.” [material structures.]

“The elements that make up a tree are roots, trunk, branches, and leaves.” [material structures]

[Elements include also intangibles such as] “school pride and academic prowess” [quantitative variable attributes possessed by a structure.]

 

Interconnections (flows or behaviors)

Meadows wrote of interconnections as flows or behaviors – physical or logical - material or data.

“Some interconnections in systems are actual physical flows, such as the water in the tree’s trunk or the students progressing through a university.

Many interconnections are flows of information—signals that go to decision points or action points within a system.”

These flows represent activities or causal relationships that are definitive of the system.

 

Functions and purposes

Meadows was less clear about the concept of a function or purpose.

“The word function may be used for a system with non-human actors; the word purpose for a human one.

But the distinction is blurred, and many systems have both human and non-human actors.”

“A system’s function or purpose is not necessarily spoken, written, or expressed explicitly, except through the operation of the system.

The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.”

 

Changing a system’s individual elements (actors, resource items or units of a stock) does not necessarily change the system.

But change the system’s element types and you certainly change the system.

And to change a system’s interconnections, functions or purposes is also to change the system.

"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows

 

Separation of systems from entities?

Different models of one reality can each be accurate and useful for some purpose, yet be unrelatable or even incompatible.

E.g. physicists may model a stream of light as waves or particles.

 

At the start of Meadows’s book, the following entities are given as examples of a system.

"A school, a city, a factory, a corporation, a national economy, an animal, a tree, the earth, the solar system, a galaxy."

Whether by accident or design, Meadows seems here to deny the possibility of abstracting different systems from one real-world entity.

More generally, a system is one perspective of a real-world entity (be it a school, a corporation, or planet earth).

And in practice, many systems can be abstracted from observation of such an entity - some of them incompatible with each other.

 

Separation of social systems from social entities?

In the second half of the book, Meadows sometimes refers to a human organization as though it is a system.

And says some things that seem contrary to system dynamics and/or conventional business management practices.

 

“[A system’s] purposes are deduced from its behavior over time, not from rhetoric or stated goals.”

This is a very particular view of “purpose”, one of several already discussed above.

 

“Hierarchical systems evolve from the bottom up.”

This may apply to biological organisms, but business organisations are often shaped and reshaped from the top down by their directors.

 

“The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers.”

This may be desirable in a human society, but it is a subjective view, and it is possible to take the opposite view.

 

In business, the term organization usually refers to a management structure.

Meadows doesn't clearly distinguish this kind of organization from her core idea of looking at a real-world entity as a causal loop network.

A result is some statements that casual readers may read as they like, but studious readers may find difficult to interpret.

E.g. What is a subsystem of a business?

Is it one section of a wider causal loop structure? Or one division/leg of a management structure? These are very different things.

 

We’ll return to this discussion in the conclusions below.

For more on system dynamics, read System Dynamics.

Cybernetics (Wiener, Ashby)

Cybernetics is the science of how a physical, biological or social machine can be controlled.

It emerged out of efforts in the 1940s to understand the role of information in mechanical system control.

Thinkers in this domain include:

·       Norbert Wiener (1894-1964) the science of system control.

·       W. Ross Ashby (1903-1972) the law of requisite variety.

·       Alan Turing (1912 –1954) finite state machines and artificial intelligence.

 

Wiener discussed how a controller directs a target system to maintain its state variables in a desired range.

E.g. Consider how a thermostat (control system) directs the actions of a heating system (target system).

The control system receives signals or messages that describe the state of a target system.

The control system responds by sending signals or messages to direct activities in the target system.
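
Here is a minimal sketch of that control loop (the room model and the numbers are invented for illustration): the thermostat reads the temperature, decides whether the heater should be on, and the room's state responds, closing the feedback loop and holding the state variable near the setpoint.

```python
def thermostat(temperature, setpoint=20.0):
    """Control system: turn the heater on below the setpoint, off above it."""
    return "on" if temperature < setpoint else "off"

def room(temperature, heater):
    """Target system: warms when heated, otherwise cools towards 10 degrees."""
    if heater == "on":
        return temperature + 0.5
    return temperature - 0.1 * (temperature - 10.0)

temperature = 15.0
for t in range(30):
    heater = thermostat(temperature)          # feedback: state -> direction
    temperature = room(temperature, heater)   # direction -> new state
    print(f"t={t:2d}  heater={heater:3s}  temperature={temperature:5.2f}")
```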

 

Information feedback loops are found in organic, mechanical, business and software systems alike:

·       A missile guidance system senses spatial information, and sends messages to direct the missile.

·       A brain holds a model of things in its environment, which an organism uses to manipulate those things.

·       A business database holds a model of business entities and events, which people use to monitor and direct those entities and events.

·       A software system holds a model of entities and events that it monitors and directs in its environment.

 

Ashby popularised the usage of the term 'cybernetics' to refer to self-regulating (rather than self-organizing) systems.

For decades, many thinkers had been particularly interested in homeostatic systems.

In “Design for a Brain” (1952), Ashby discussed biological organisms as homeostatic systems.

He presented the brain as a regulator that maintains each of a body’s state variables in the range suited to life.

This table distils the general idea.

 

Generic system | Ashby’s design for a brain
Actors | Brain cells
interact in orderly activities to | interact in processes to
maintain system state and/or | maintain body state variables by
consume/deliver inputs/outputs | receiving/sending information
from/to the wider environment. | from/to bodily organs/sensors/motors.

 

Homeostatic entities and processes are only a subset of systems in general.

Our main interest here is in real-world systems whose behavior over time is the outcome of actors interacting in regular ways.

Actors interact in regular activities to advance system state and/or consume/deliver inputs/outputs from the wider environment.

 

The systems of interest to us

Remember all the systems of interest here are dynamic, they display behavior and change state over time, either continually or in discrete steps.

Ashby was concerned with entities that can be modelled as having either continuous or discrete event-driven dynamics.

But for convenience, he modelled continuous behavior as event-driven.

This means dividing time and continuous input flows into discrete events.

 

Most business systems are discrete event-driven systems to begin with.

A model of a business system – a theory of how some part of the business behaves, or should behave – may be called "the system of interest".

Ashby and others have made three additional points about such a system of interest.

 

First, since every model is selective, a large and complex entity can be far more than whatever system of interest we focus on.

That extra stuff can include activities outside of the particular system of interest, perhaps even contrary to it.

 

Second, different observers may form different models of an entity such as IBM.

And so, see IBM as realising different systems, some of them incompatible. 

 

Third, therefore, it is meaningless to point to IBM and call it a system without reference to a particular system of interest.

That system of interest must be defined somewhere, whether in a mental or documented model.

 

For more, read Ashby’s ideas and the section below.

Abstract and physical systems

In Ashby’s “Introduction to Cybernetics” (1956), a system is an observer’s highly selective model of what something does.

Krippendorff, a student of Ashby, wrote:

"It is important to stress Ashby defined a system not as something that exists in nature.”

“What we know of a system always is an ‘observer’s digest’.”

 

Ashby’s system is an abstraction from a real-world entity (be it observed or envisaged).

It is a theory of how the entity (be it mechanical, biological or social) behaves, or should behave.

Ashby sometimes referred to the entity as a “real machine”.

This triangle relates observers (like Ashby) to systems and to real machines.

 

Ashby’s cybernetics

Systems

<create and use>                   <represent>

Observers <observe and envisage> Real machines

 

Ashby urged us to distinguish material entities from abstract systems they realise. 

3/11 “At this point we must be clear about how a "system" is to be defined.

Our first impulse is to point at [some entity in the real world] and to say "the system is that thing there".

This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems.

Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” (Ashby 1956)

 

Ashby noted that the term “system” is ambiguous in discussions, because systems thinkers use the term in at least two ways.

Those two ways are described here as:

·       An entity = a complete real-world thing (e.g. all the people, processes and technologies of a business) regardless of which observer looks at it.

·       A system = an observer’s view of some regular or repeatable behaviors that advance some variables/quantities.

 

Ashby noted that the second is the practical view.

"Though the first sounds more imposing… the practical worker inevitably finds second more important."

"Since different systems may be abstracted from the same real thing, a statement true of one may be false of another."

“There can be no such thing as the unique behavior of a [large real thing], apart from a given observer.”

"There can be as many systems as observers... some so different as to be incompatible.”

“[Therefore] studying [large real-world entities] by studying only carefully selected aspects of them is simply what is always done in practice.” (Ashby 1956).

 

A system may be envisaged in an abstract model and realized in a physical form – as a performance of an abstract system.

The abstract system is a model or type; the physical system is an instance of that type.

 

Instances in a physical system | Types in an abstract system
Actors (structures in space that perform activities) | Roles
Activities (behaviors over time that advance the state of the system or something in its environment) | Rules
System state (structures changed by activities) | State variables
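
As an illustrative sketch of that type/instance relationship (invented names, not from the source), the class below plays the part of an abstract system - roles, rules and state variables - and each object created from it is one physical realization of that system by particular actors.

```python
class TennisMatch:
    """Abstract system: two player roles, a rule that a point won advances
    the score, and one state variable per player."""

    def __init__(self, player_a, player_b):
        # Binding real-world actors to the roles creates a physical system.
        self.points = {player_a: 0, player_b: 0}   # state variables

    def point_won_by(self, player):                # a rule governing an activity
        self.points[player] += 1

# Two physical systems realizing the same abstract system.
match_1 = TennisMatch("Ann", "Bob")
match_2 = TennisMatch("Cho", "Dai")
match_1.point_won_by("Ann")
print(match_1.points, match_2.points)
```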

 

Our epistemological triangle can be used to indicate the abstract to physical relationship.

(Strictly, all words are abstractions, but the terms actors, activities and state are used here to signify the physical system elements.)

 

How systems may be described

Roles, Rules, Variables

<create and use>                       <represent>

System describers <observe and envisage> Actors, Activities, State

 

An abstract system describes roles for actors and rules for activities that advance system state variables.

A physical system is the realization by a real-world entity or situation of an abstract system description.

E.g. A real-world hurricane is a realization in the atmosphere of an abstract weather system described by meteorologists.

E.g. Your beating heart beats in accord with an abstract system known to medical science.

 

An abstract system does not have to be a perfect model of an entity’s behavior; only accurate enough to be useful.

We can test that an entity realises an abstract system to the degree of accuracy we need for practical use.

 

Ashby pointed out that different observers may abstract countless different abstract systems from observations of a material entity.

IBM can realise countless different abstract systems in parallel, some of which may be in conflict, and most of which are changed over time.

 

Abstract systems as models

Observers can use various modelling techniques to describe the actors, activities and state of a system.

Using Ashby’s cybernetics, observers model a system as a set of state variables advanced by processes.

Using Forrester’s system dynamics, observers model a system as a set of stocks (variable quantities) increased and decreased by inter-stock flows.

Using Checkland’s soft systems method, observers model a system as actors playing roles in activities that transform inputs from the environment into outputs for customers.

Relating system dynamics and cybernetics

Somebody who applies Meadows’ ideas may speak of elements, interconnections and functions or purposes.

Meadows relates a system’s function to the trajectory of a system’s state change over time.

Somebody who applies Ashby’s cybernetics might speak instead of actors interacting in activities to advance system state.

In that vocabulary, what Meadows calls a system’s function corresponds to its line of behavior.

Given a stock, resource or population, a line of behavior might continually increase its quantity, maintain its quantity, or exhaust it.

 

This table compares the two vocabularies.

 

Meadows’ terms | General system theory terms
Element | Actor
Interconnection | Interaction
Behavior | Activity
Pattern of behavior over time | Line of behavior (state change trajectory)
Function or purpose | Purpose

 

This table compares the ideas of system dynamics and cybernetics.

 

Meadows’ ideas (Introduction and Chapter one- the basics)

General system theory ideas (expressed with cybernetics in mind)

A system is a set of things [elements] people, cells, molecules, or whatever

interconnected in such a way that they produce their own pattern of behavior over time.

Actors (people, planets, cells, molecules, or whatever) interact in a system to perform its characteristic activities.

The activities are regular or repeatable processes that advance the state of the system.

The activities produce a line of behavior over time.

The system may be buffeted, constricted, triggered, or driven by outside forces.

But the system’s response to these forces is characteristic of itself.

The behavior of a system cannot be known just by knowing the elements of which the system is made.

A system may be either closed, or open and driven by inputs from its environment.

A system’s activities in response to inputs are characteristic of the system.

A system isn’t just any old collection of things.

A system is an interconnected set of elements that is coherently organized in a way that achieves something.

A system isn’t just any old collection of actors.

It is a collection of actors organized to perform the system’s characteristic activities.

Is there anything that is not a system?

Yes—a conglomeration [of elements] without any particular interconnections or function.

Is there anything that is not a system?

Yes, a passive structure.

Or a collection of actors that do not interact in the particular ways characteristic of a system.

How to know whether you are looking at a system or just a bunch of stuff:

A) Can you identify parts? . . . and

B) Do the parts affect each other? . . . and

C) Do the parts together produce an effect that is different from the effect of each part on its own? and perhaps

D) Does the effect, the behavior over time, persist in a variety of circumstances?

How to know whether you are looking at a system or just a bunch of stuff:

A) Can you identify roles played by actors in interactions?

B) Do actors have effects (change the state of the system or produce outputs)?

C) Do actors cooperate to produce effects that differ from the effects of actors on their own?

D) Are the activities regular and repeatable?

If information-based relationships are hard to see, functions or purposes are even harder.

A system’s function or purpose is not necessarily spoken, written, or expressed explicitly,

except through the operation of the system.

The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.

Purposes are deduced from behavior, not from rhetoric or stated goals.

Actors occupy space, activities (which run over time) are harder to see.

And purposes are even harder.

A system’s purposes may be perceived and expressed in several ways

·        as goals - by observers inside or outside the system

·        as what the system does – its effects by way of producing state changes and/or outputs

·        as a line of behavior, showing how the system changes state over time.

The word function is generally used for a nonhuman system, the word purpose for a human one,

but the distinction is not absolute, since so many systems have both human and nonhuman elements.

The different kinds of purpose may be related or distinct.

Changing elements usually has the least effect on the system.

If you change all the players on a football team, it is still recognizably a football team.

A system generally goes on being itself, changing only slowly if at all,

even with complete substitutions of its elements

—as long as its interconnections and purposes remain intact.

If the interconnections change, the system may be greatly altered. It may even become unrecognizable.

Changes in function or purpose also can be drastic.

Changing actors usually has the least effect on a system.

Change all the players on a football team, and it is still recognizably a football team.

But if a system’s activities change, then it mutates into a new system generation, or a different system.

Changing a desired purpose usually implies changing the activities.

Changing the activities sometimes implies changing the actors.

To ask whether elements, interconnections, or purposes are most important is to ask an unsystemic question.

All are essential. All interact. All have their roles.

But the least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior.

Interconnections are also critically important.

Changing relationships usually changes system behavior.

The elements, the parts of systems we are most likely to notice,

are often (not always) least important in defining the unique characteristics of the system.

Which of actors, activities and purposes are most important?

All are essential to what a system does, and interdependent.

What usually matter most are a system’s purposes and/or the effects of its regular activities.

The actors, the most tangible and visible elements, are often the least important.

Soft Systems (Churchman, Ackoff, Checkland)

The term “soft system” emerged in the 1970s; however, the distinction between hard and soft systems is debatable.

Remember that all system theorists discussed above consider a system to be “soft” in the sense that it is a perspective of the real world.

All three gurus below mixed some system theory (about activity systems) with some systems thinking (about an “organization” of human actors).

 

Churchman, one of the first soft systems thinkers, said "a thing is what it does".

He outlined these considerations for designing a system managed by people:

·       “The total system objectives and performance measures;

·       the system’s environment: the fixed constraints;

·       the resources of the system;

·       the components of the system, their activities, goals and measures of performance; and,

·       the management of the system.”

 

Like other soft systems thinkers, Churchman sought to integrate system theory into “management science”.

The trouble is that, by replacing the word “business” with “system”, he tended to confuse a social entity with a social system.

 

Somewhat better-known soft systems thinkers include:

·       Russell L Ackoff (1919-2009) – human organizations as purposeful systems.

·       Peter Checkland (born 1930) – the Soft Systems Methodology.

·       Stafford Beer (1926-2002) – management cybernetics and the Viable System Model.

 

Ackoff wrote many works on systems thinking.

In Ackoff’s vocabulary for systems (1971), an “abstract system” idealises a “concrete system”.

He said different observers may see different abstract systems in the same physical reality.

His abstract system is a description or model of how an entity behaves, or should behave.

His concrete system is any entity that conforms well enough to an abstract system.

 

Ackoff’s system theory

Abstract systems

<create and use>                        <represent>

Systems thinkers <observe and envisage> Concrete systems

 

Ackoff wrote widely and wisely about the management of human institutions or organizations.

The trouble is that, when replacing the word “organization” with “system”, he tended to confuse a social entity with a social system.

 

Checkland promoted a “soft systems methodology” for the analysis and design of business systems.

He regarded a business system as an input-to-output transformation.

He pointed out that different observers may perceive different systems of that kind, possibly in conflict, in one human organization.

He called each perspective of a business (each a “soft system” if you like) a “Weltanschauung” or world view.

 

Checkland’s Soft systems methodology

World views

<create and use>                        <represent>

Observers <observe and envisage> Human organizations

 

Checkland said the term “soft” was intended to characterize his methodology or approach.

He noted that the distinction between hard and soft system approaches is slippery; people get it one day, and lose it the next.

 

Today, soft systems approaches typically involve:

·       Considering the bigger picture

·       Studying the vision/problems/objectives

·       Identifying owners, customers, suppliers and other stakeholders

·       Identifying stakeholder concerns and assumptions

·       Unfolding multiple views, promoting mutual understanding

·       Analysis, visual modelling, experimentation or prototyping

·       Considering the cultural attitude to change and risk

·       Prioritizing requirements.

 

However, software engineers, and even mechanical engineers, use these ideas; so the nature of the system itself is not the central issue here.

The issue is rather the challenge of managing a change process, gathering different views, reconciling them and making a successful change.

 

For more, read

·       Checkland’s ideas

·       Ackoff’s ideas

·       Beer’s ideas

4. Distinguishing systems thinking from system theory

Our definition of a system rests on drawing a distinction between actors and activities.

Forget the contrast between nouns and verbs, and any linguistic philosophy.

Think rather of actors as structures in space, and activities as behaviors over time.

 

System theory

What is here called system theory is about regular activities, which are performed by (replaceable) actors.

System theory embraces Ashby's cybernetics, Forrester's system dynamics, and some “soft systems” techniques.

It surfaces in enterprise, business and software architecture models such as process flow charts and data flow diagrams.

 

A living thing is a large and complex entity; and life on earth is an even larger and more complex entity.

We may regard both as systems; a difference lies in the rules that shape the entities.

For the living things, we have DNA, which stores information - the rules for an organism’s development and function.

For life on earth, we have the rules of evolution by natural selection.

A biosphere can only survive as a whole if

a)     it has balancing (rather than amplifying) loops, such as a CO2-Oxygen balancing system, and

b)     its actors (via sexual reproduction) can adapt to permanent changes in those gas levels - within the bounds that biochemistry allows.
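
A minimal stock-and-flow sketch of point (a) may help; it is written in Python for illustration only, and the stocks, rates and linear feedback are assumptions, not a model of the real biosphere.

    # Two stocks coupled by flows that are damped by the stock they draw on.
    co2, o2 = 120.0, 80.0                 # arbitrary starting quantities
    for step in range(10):
        photosynthesis = 0.02 * co2       # consumes CO2, releases O2
        respiration    = 0.02 * o2        # consumes O2, releases CO2
        co2 += respiration - photosynthesis
        o2  += photosynthesis - respiration
        print(f"step {step}: CO2={co2:.1f}, O2={o2:.1f}")
    # Because each flow slows as its source stock falls, deviations are corrected:
    # a balancing loop, rather than an amplifying (reinforcing) one.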

 

Our example model of the biosphere is so abstract it explains very little.

The fact is that, in practice, we are necessarily always reductionist; we can only understand what we can describe or model.

The largest and most complex causal loop diagram we could ever build will only ever model one small and simple aspect of life on earth.

And the same applies to any model of a socio-technical system like IBM.

 

Many have adopted the terminology of general system theory, cybernetics and system dynamics.

“Though it grew out of organismic biology, general system theory soon branched into most of the humanities.” Laszlo and Krippner.

But some have adopted the words rather than the concepts.

 

System theory v systems thinking paradigm clash

Systems thinkers often use the terms of system theory with different meanings.

Some thinkers are antagonistic to system theory; they deprecate system theory as "reductionistic" or "linear".

This is either to show ignorance of system theory, or to use those words with obscure meanings.

Some deprecate system theory as "mechanistic"; yet many are also fans of system dynamics - which is mechanistic system theory.

 

Many social systems thinkers draw the equation: 1 human organization = 1 system.

And use the scientific-sounding term "complex adaptive system".

This paper proposes we separate the concepts of social entity and social system.

The actors in a human social entity may determine and change the activities they perform. 

Which is to say, the activity system(s) realised by a given social entity are changeable.

 

Three kinds of organization

If your interest is in enterprises that employ many people, you might be interested in:

·       Organization kind 1: a causal loop structure in which employees are triggered by flows to perform activity instances.

·       Organization kind 2: a management structure in which employees are directed as to what their activity types are.

·       Organization kind 3: a social network of employees within an enterprise.

·       Anarchy: the extent to which employees determine their own activities, and even their own purposes.

 

A real-world business, like IBM, is a complex adaptive entity in which all four of these things exist.

In practice we cannot, and are never expected to, describe the whole entity as one coherent system.

3/11 “Our first impulse is to point at [IBM] and to say "the system is that thing there".

This method, however, has a fundamental disadvantage: every [such entity] contains no less than an infinity of variables and therefore of possible systems.

Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” (Ashby 1956)

 

Three kinds of flow between actors

The opening sentence of chapter one in Meadows’ book is often quoted but variously interpreted.

“A system is an interconnected set of elements that is coherently organized in a way that achieves something.” 

That definition is so generalized that it embraces readings contrary to assertions Meadows makes about systems.

Consider the three different kinds of flow that appear in the three different kinds of organization above.

 

1: Causal flows - in causal loop structure(s) - in which employees are triggered by flows to perform regular activities

A causal loop structure organizes some quantifiable stocks, populations or resources in a structure of causal relationships.

Here, coherently organized means system elements are inter-related as in a causal loop diagram (not a management structure).

And to achieve something is to produce "a pattern of behavior over time" (not meet goals set by people).

 

The flows in a causal loop structure are causal relationships, which trigger actors to perform regular activities.

Most of Meadows’ book is about systems of this kind, as in system dynamics.

(She takes a side-swipe at event-driven models, of the kind used in most business system modelling.)

 

It is impossible, and never necessary in practice, to describe every causal flow in a business.

And impossible to address all conflicts that may arise between them.

“There can be no such thing as the unique behavior of [IBM], apart from a given observer.”

"There can be as many systems as observers... some so different as to be incompatible.”

“[Therefore] studying [IBM] by studying only carefully selected aspects of [it] is simply what is always done in practice.” (Ashby 1956).

 

2: Deontic causal flows - in management structures – which direct employees as to what to do and/or achieve

A human institution typically organizes people from the top down in a structure of authority/reporting relationships.

Here, coherently organized means people are related in a management structure.

And to achieve something is to perform some duties, or meet some goals, typically cascaded downwards from higher managers.

 

Some flows between actors in a management structure report the results of activities (usually to those higher up).

Other flows convey goals, duties or obligations (usually to those lower down).

Some call these deontic causal flows – which sound like causal flows, but are very different.

Causal flows trigger the performance of defined activities.

Deontic causal flows define activities to be performed (or tell actors enough to define the activities for themselves).

 

In a deontic relationship, a manager gives goals, duties and obligations to an employee.

The manager specifies the actor's role in a regular business system (S).

Ashby said: “No machine can be self-organizing in this sense.”

He meant that to re-organize a system, it must be coupled to another, “higher” process or meta system (M).

 

M = the higher process or meta system, in which actors act to specify the roles, rules and variables of S.

S = the system in which actors perform activities as directed by M.

 

Here, the manager plays a role in M rather than in S.

To draw a causal loop structure for S is difficult enough; to draw one for M is a bigger challenge.
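
The contrast may be easier to see in a minimal sketch (illustrative Python; all names are hypothetical). A causal flow triggers S to perform an activity that is already defined; a deontic flow from M re-specifies what that activity is.

    class SystemS:
        def __init__(self, rule):
            self.rule = rule                  # the defined activity, as specified by M
        def on_causal_flow(self, order):
            return self.rule(order)           # a causal flow triggers the defined activity

    class MetaSystemM:
        def direct(self, s, new_rule):
            s.rule = new_rule                 # a deontic flow: M re-specifies S's roles/rules

    s = SystemS(rule=lambda order: f"ship {order} by road")
    print(s.on_causal_flow("order-1"))        # causal flow: the activity runs as defined

    m = MetaSystemM()
    m.direct(s, lambda order: f"ship {order} by rail")
    print(s.on_causal_flow("order-2"))        # same trigger, but a mutated system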

 

3: Ad hoc information flows - in social networks

A social network is a structure in which actors create connections by communicating with each other.

Much of the inter-actor communication in corporations is ad hoc and impromptu.

Though much of it is essential to a business, the behavior of this network is irregular and outside any definable system.

 

Again, a real-world business, like IBM, will surely feature all three kinds of flow above.

Some of today’s systems thinking discussion is generic - about how groups of people work effectively together.

It is not about particular business operations; it is instead about how people shape and steer those operations.

It is about a higher process or meta system (M) that defines the workings of regular business operations (S).

 

Separation of social systems from social entities?

Meadows’ book is about regular system dynamics of the kind modellable in a causal loop network.

But some readers equate a human or business organization to a system, and don't notice the incongruity.

Many systems describable as causal loop networks might be observed in one managed human institution.

A causal loop diagram cannot fully define the behavior of IBM; it can only define one mechanistic system realized by that entity.

Any system we abstract from the staggering complexity of IBM as a whole is simple, and only one of many, possibly conflicting, perspectives.

Moreover, deontic causal flows (directing actors’ behavior) may have effects that are contrary to regular causal flows.

In chapter 5 of "Thinking in Systems", Meadows observes that manager-set goals can lead to unintended consequences and counter-productive results in operational systems.

 

So, IBM may well be called a complex adaptive entity.

But sorry Donella, it is meaningless to call IBM a system with no reference to your perspective or model, be it mental or documented.

Having said that, chapter 7 of Meadows’ “Thinking in Systems” does discuss the importance of verifying system models against realities.

“Expose Your Mental Models to the Light of Day... making them as rigorous as possible, testing them against the evidence,

and being willing to scuttle them if they are no longer supported is nothing more than practicing the scientific method

—something that is done too seldom even in science, and is done hardly at all in social science or management or government or everyday life."

 

Reconciling system theory with systems thinking

In system theory, a system is a particular way of looking at a real-world entity or situation.

The entity must behave systematically.

System theory is useful whenever we seek to:

·       understand how some outcome arises from some regular behavior.

·       predict how some outcome will arise from some regular behavior.

·       design a system to behave in a way that produces a desired outcome.

·       intervene in a situation to change some system(s) for some reason.

 

An aim in what follows is to clarify ambiguities in wider systems thinking discussion, including those in the next section.

We shall consider the impacts of resolving these ambiguities on social systems thinking and on “complexity science”.

And point to how system theory can be reconciled with systems thinking.

5. System theory in the 21st century

An abiding sin of some “systems thinkers” is over-generalisation.

They take a word from one domain and use it with a different meaning in a different (often social or business) domain.

This does not produce a more general system theory - it merely draws a superficial analogy between what can be very different concepts.

This phenomenon, sometimes called "overloading", introduces ambiguity into discussions, and the analogy can be misleading.

Much systems thinking discussion is confused by three ambiguities Ashby identified.

 

System

1 An entity (material object, real-world thing or organization)

2 A system realised by an entity

 

Adaptation

1 System state change (homeostatic or progressive)

2 System mutation

 

Self-organization

1 Rule-bound self-assembly (of parts into a whole)

2 Rule-changing improvement (system mutation)

 

If we are to progress system theory, we must acknowledge and address these ambiguities.

The need to separate systems from entities

To repeat: Ashby noted that the term “system” is ambiguous in discussions, because systems thinkers use the term in at least two ways.

The ways are described here as:

·       An entity = a complete real-world thing (e.g. all the people, processes and technologies of a business) regardless of which observer looks at it.

·       A system = an observer’s view of some regular or repeatable behaviors that advance some variables/quantities.

 

Remember that the relationship between physical entities and abstract systems is many-to-many.

One abstract system may be realised by countless physical entities.

 

One abstract system (type) may be instantiated (in a physical system) by many real-world entities:

·       “Carbon capture” may be realised (by photosynthesis processes) by countless rain forests.

·       “Poker” may be followed (in a game) by many card schools.

·       One musical score may be performed (in a performance) by many orchestras.

·       One program may be executed (in an execution) by many computers.

 

One physical entity may realise countless abstract systems.

 

One real-world entity may instantiate many different abstract systems:

·       One rain forest may capture carbon in tree trunks, and sustain biodiversity.

·       One card school may play many different games (poker, whist and pizza sharing).

·       One orchestra may perform many different musical scores.

·       One computer may execute many different programs.
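
The many-to-many relationship in the two lists above can be sketched in a few lines of illustrative Python; the entities and abstract systems named are just the examples used above.

    abstract_systems = {"poker", "whist", "carbon capture"}   # types: roles and rules

    realisations = [                       # which entity realises which abstract system
        ("card school A", "poker"),
        ("card school B", "poker"),        # one abstract system, many entities
        ("card school A", "whist"),        # one entity, many abstract systems
        ("rain forest",   "carbon capture"),
    ]

    def entities_realising(system):
        return [e for e, s in realisations if s == system]

    def systems_realised_by(entity):
        return [s for e, s in realisations if e == entity]

    print(entities_realising("poker"))           # ['card school A', 'card school B']
    print(systems_realised_by("card school A"))  # ['poker', 'whist']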

 

The need to separate social systems from social entities

It is normal to distinguish business management from project management. Typically:

 

Business managers manage regular business operations, and the actors employed in existing systems.

To paraphrase Meadows: “The actors of a system are the noticeable parts, because they are visible, tangible things.”

 

Project managers manage changes to how systems behave, or create new systems that behave in a new way.

To paraphrase Meadows: “A system isn’t just any old collection of actors. It is a collection of actors organized to perform the system’s characteristic activities.”

 

Many write about the management or organization of socio-technical entities that employ human actors.

They casually refer to a socio-technical entity, such as IBM, as a system.

However, many of the points they make are more sociological than systemological.

They speak of a system with no reference to any particular perspective, "characteristic set of behaviors" or "pattern of behavior over time".

And presume the purposes are those declared as goals by managers or actors in the system.

 

Moreover, the actors in businesses and other socio-technical entities often behave in ad hoc, impromptu and disorderly ways.

In fact, every business depends on people doing this, making decisions and acting in creative and innovative ways.

This behavior is neither what Meadows called a "pattern of behavior" nor "characteristic" of the business.

So, it lies outside of any system employed or deployed by that business.

Which is fine; there is no reason to presume a socio-technical business - as a whole - is one coherent "system" in the sense defined by Meadows or Ashby.

 

To rescue the system concept, we need to distinguish social entities from social systems, as the following comparison indicates.

 

·       A social entity is a set of actors who communicate as they choose; a social system is a set of activities performed by actors.

·       A social entity is a physical entity in the real world; a social system is a performance of abstract roles and rules by actors in a social entity.

·       A social entity is ever-changing at the whim of the actors; a social system is described and changed under change control.

 

A social entity is simply a group of people who inter-communicate.

It is an entity, a bounded whole, but is it a system?

When and where the actors creatively invent how they act and interact, the entity does not behave as a system.

The entity is a system only when and in so far as its actors interact in regular ways – where there are describable roles or rules.

 

A social system can be seen as a game in which actors play roles and follow rules.

You rely on countless human activity systems; for example, you wouldn't want to:

·       stand trial in a court that didn’t follow court procedures

·       board a train or airplane operated by people who didn’t follow the rules.

·       invest in a company that didn’t repay its loans as promised

·       play poker with people who ignore the laws of the game.

 

Every human actor can belong to many social entities and play roles in many systems.

One social entity can realise several distinct social systems.

And one social system can be realised by several social entities.

The need to distinguish system state change from system mutation

Continuous change is the nature of our universe.

However, a system is (by definition) an island of regularity in the ever-unfolding process that is the universe.

It can change in two different ways:

·       System state change: a change to the state of a system.

·       System mutation: a change to the roles, rules or variables of a system.

 

Change in Ashby’s classical cybernetics

Ashby distinguished two kinds of system change or adaptation.

“5/7. the word 'adaptation' is commonly used in two senses which refer to different processes.”

                                                                 

System state change

A system can change state, either under its own internal drive, or in response to changes in its environment.

E.g. A word processor changes state when you select the font you want to use when typing.

 

System mutation

A system’s nature (its roles, rules and variables) can change or be changed, creating a new and different system.

E.g. A word processor mutates when the vendor releases a new version.
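
A minimal sketch of the distinction, using the word processor example (illustrative Python; the version numbers and behavior are assumptions):

    class WordProcessor:
        VERSION = "1.0"                   # part of the system's nature (its rules)
        def __init__(self):
            self.font = "Times"           # part of the system's state
        def select_font(self, font):
            self.font = font              # state change: same system, new state

    class WordProcessorV2(WordProcessor):
        VERSION = "2.0"                   # mutation: a new and different system
        def select_font(self, font):
            self.font = font.upper()      # the rule itself has changed

    wp = WordProcessor()
    wp.select_font("Arial")               # system state change
    wp2 = WordProcessorV2()               # system mutation: the next generation
    wp2.select_font("Arial")
    print(wp.font, wp2.font)              # Arial ARIAL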

 

Changing a purpose of a system usually implies changing the activities.

Changing the activities sometimes implies changing the actors.

The actors, though the most tangible and visible elements, are often the least important.

Changing actors usually has the least effect on a system.

Change all the players on a football team, and it is still recognizably a football team.

But if a system’s activities change, then it mutates into a new system generation, or a different system.

 

Change in Meadows’ system dynamics

Meadows was less clear than Ashby about the distinction above, but did imply it.

 

System state change

System dynamics defines a system in a mathematical model of processes and variables.

The processes change the values of the variables over time – that is, they change the system’s information state.

Changes to the information state may reflect changes to the material state of the real-world entity that is modelled.

Meadows wrote that changing actors usually has the least effect on a system.

“If you change all the players on a football team, it is still recognizably a football team.”

“A system generally goes on being itself, changing slowly if ever, even with complete substitutions of its actors —as long as its interactions and purposes remain intact.”

 

System mutation

A real-world system that conforms to a system dynamics model repeats its behaviors, performing the same operations on the same resources.

It continues doing this until the system exhausts some necessary resource, at which point it stops.

To change the model is to define a new system.

That new system may be regarded as the next generation of the system, or a different system altogether.

"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows

 

In chapter 1, Meadows wrote:

“If the interconnections change, the system may be greatly altered. It may even become unrecognizable.

Changes in function or purpose also can be drastic.”

Which is to say that changing the purpose of a system is to change the system itself.

 

In chapter 5 Meadows wrote:

“Back in Chapter One, I said that one of the most powerful ways to influence the behavior of a system is through its purpose or goal.”

Since this paper separates systems from the entities that realise them, she should have written:

“One of the most powerful ways to influence the behavior of a (social entity) is through its purpose or goal.

By changing the purpose, you can change the system that the entity realises.”

 

Change in social systems thinking

The focus of social systems thinking discussion is often on system mutation.

And the term “complex adaptive system” is often used.

For sure, a complex socio-technical entity (like IBM) may continually adapt to changes in the environment - by changing itself.

And IBM might reasonably be called a complex adaptive entity.

But it is meaningless to call it a complex adaptive system without reference to a system of interest – the particular perspective of some observer(s).

Moreover, continuous adaptation or reorganization undermines the concept of a system – which rests on regularity.

If we don't distinguish an ever-evolving social entity from the various modellable systems it may realise, the concept of a system evaporates.

For more on that distinction, read on, and read about second-order cybernetics.

The need to distinguish rule-bound and rule-creating self-organization

Ashby distinguished two kinds of self-organization.

·       “Changes from parts separated to parts joined” “Self-connecting” “Perfectly straightforward”

·       “Changing from a bad way of behaving to a good.”

 

The two kinds might be named and differentiated as:

·       rule-bound self-assembly (as when autonomous geese join in a flight of geese)

·       rule-changing improvement (as when a machine reconfigures itself to behave in a different way).

 

Of the second kind, Ashby said: “No machine can be self-organizing in this sense.”

He meant that to re-organize a system, it must be coupled to another system – which we may call a higher-level process or meta system.

Note that one actor can play a role in both systems, lower and higher.

6. Conclusions

Much systems thinking discussion is confused by three ambiguities Ashby identified.

If we are to progress system theory, we must acknowledge and address these ambiguities.

 

The need to separate systems from entities

1 An entity (material object, real-world thing or organization)

2 A system realised by an entity       

 

The need to distinguish system state change from system mutation

1 System state change (homeostatic or progressive)

2 System mutation

 

The need to distinguish rule-bound and rule-creating self-organization

1 Rule-bound self-assembly (of parts into a whole)

2 Rule-changing improvement (system mutation)

 

We shall explore and address these needs in later papers.

 

Relevance to Enterprise Architecture (EA)?

Remember there is no meaning in data alone.

Meaning appears only in those moments when actors create and use data structures or symbols, with reference to a language/system in which that data is mapped to a meaning.

 

In unregulated human society, the meaning of a word like "policy" is ambiguous.

It may be interpreted by each actor as they see fit, and may be changed by one actor regardless of how other actors interpret it.

Successful communication in business systems requires communicating actors to share a controlled vocabulary.

The meaning of (say) "policy number" is registered as the identifier of a particular "policy", which has an agreed set of attributes.

 

In EA, the architects of a business system define a domain-specific vocabulary in some kind of data dictionary or canonical data model.

This "meta data" defines the meanings created/used by actors when performing a coding/decoding process.
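
As a minimal sketch of that idea (illustrative Python; the schema and field names are assumptions, not any real tool’s format), a canonical data model can act as the shared vocabulary against which actors encode and decode data:

    canonical_model = {
        "policy_number": "identifier of a particular policy",
        "premium":       "amount payable per period",
    }

    def encode(message, vocabulary):
        # A sender may only use terms defined in the shared vocabulary.
        unknown = set(message) - set(vocabulary)
        if unknown:
            raise ValueError(f"terms not in the shared vocabulary: {unknown}")
        return message

    def decode(data, vocabulary):
        # A receiver finds the same information only by reading the data
        # against the same vocabulary the sender used.
        return {term: (value, vocabulary[term]) for term, value in data.items()}

    data = encode({"policy_number": "P-123", "premium": 42.0}, canonical_model)
    print(decode(data, canonical_model))
    # With different vocabularies, sender and receiver would no longer create
    # and find the same information in the same data.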

 

In EA, modelers define systems in which it is usually assumed that the meaning of a data item is shared by its creators and users.

That is, the senders and receivers of some data create and find the same information in that data.

The concepts of data and information are then in 1-to-1 correspondence, and the terms are often used interchangeably.

 

In EA, modelers define a system that is stable for a generation.

The system can change in discrete steps from generation to generation (under change control).

But its actors cannot continually change how they individually interpret messages and memories, since this undermines the very concept of a system.

 

In short, EA modelers apply "system theory" rather than "systems thinking".

They do not model ad hoc inter-actor communications in a social entity or social network.

Nevertheless, these ad hoc communications are essential to the success of any business, and systems thinking tools and techniques may be useful to EA, for example in discussion of business change management.

 

Further reading

Principles and concepts of business architecture

APPENDIX: more sociologically-inclined systems thinking

General system theory doesn’t start from or depend on sociology, or analysis of human behavior.

However, it stimulated people to look afresh at social systems in general and business systems in particular.

 

Read Sociological Systems Thinkers for discussion of the following.

 

Second-order Cybernetics (Von Förster, Bateson, Mead)

“Second-order cybernetics” was developed in the early 1970s.

It was pursued by thinkers including Heinz von Foerster, Gregory Bateson and Margaret Mead.

 

Decision theory - or theory of choice

Ackoff noted that the actors in a system, when described as per classical cybernetics, act according to roles and rules.

By contrast, the actors in a human social system have free will and can act as they choose.

This prompts the question as to how people do, or should, make choices.

A biologist might look to instinct or homeostasis as the basis for making decisions.

A psychologist might look to emotions or Maslow’s hierarchy of needs as the basis for making decisions.

A sociologist or mathematician may take a different perspective.

 

Social entities as organisms

In the theory of evolution by natural selection, can a social group be treated as an organism?

Can selection between groups (favoring cooperation) successfully oppose selection within a group (by competition)?

Thinkers who addressed this include:

·       Lynn Margulis – the evolution of cells, organisms and societies

·       Boehm – the evolution of hunter-gatherer groups

·       Elinor Ostrom – the formation of cooperatives.

 

Luhmann: autopoietic social systems

Habermas: universal pragmatics

Systems thinking babble and “systemantics”

 

 

All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to http://avancier.website in whichever social media you use.