System thinkers and their ideas

https://bit.ly/2w5XKNK

Copyright 2017 Graham Berrisford. One of several hundred papers at http://avancier.website. Last updated 23/01/2020 17:23

 

“A very interesting potted history of the evolution of systems thinking and the different strands of thoughts that have evolved.

Thank you for sharing.”

 

Contents

Before systems thinking – recap

The post war boom in system theory

Classical cybernetics - the science of system control

General system theory – the cross-science notion of a system

System Dynamics – animating a system model to predict long-term outcomes

Basic system ideas

Soft systems thinking – loosening the system concept

Decision theory - or theory of choice

Second order cybernetics

Conclusions and remarks

Footnote on sociologically-inclined system thinking

 

Before systems thinking – recap

Read “Thinkers who foreshadowed system theory” for the ideas attributed to the thinkers below.

Thinkers included:

·       Adam Smith (1723 to 1790) subdivision within and competition between systems.

·       Charles Darwin (1809 to 1882) system mutation by reproduction with modification.

·       Claude Bernard (1813 to 1878) homeostatic feedback loops.

·       Willard Gibbs (1839 to 1903) the development of chemistry into a science.

·       Vilfredo Pareto (1848 to 1923) the Pareto principle.

 

Gibbs defined a system as: “a portion of the ... universe which we choose to separate in thought from the rest of the universe.”

By this definition, every describable thing is a system.

Here, Gibbs’s “portion of the universe” is an entity, and a system is a particular way of looking at an entity.

 

The notion that a system is a perspective of a real-world entity or situation is deeply embedded in systems thinking.

In his introduction to cybernetics (1956), Ashby wrote that a system is an observer’s highly selective model of what a “material object” or "real machine" does.

In system dynamics (1950s) Forrester showed how a system of stocks and inter-stock flows can simulate some interrelated real-world behaviors.

In his general system theory (1968), von Bertalanffy discussed system models as perspectives of system realities.

In his vocabulary for system thinking (1971), Ackoff said different observers may see different abstract systems in the same concrete reality.

In his approach to business analysis (1970s), Checkland positioned a soft system as one “world view” of a real-world business.

 

A philosophy of systems must address questions debated by philosophers for millennia; notably, how do descriptions relate to realities?

This paper employs a new device, an epistemological triangle that relates describers, descriptions and realities.

 

Epistemology

Descriptions

<create and use>     <represent>

Describers <observe and envisage> Realities

 

For a detailed explanation of this triangle, read “A philosophy of systems”.

Later in this paper, the triangle is edited to reflect the system theories of von Bertalanffy, Checkland, Ackoff, Forrester and Ashby.

 

Our interest is in real world systems whose behavior is the outcome of actors interacting in regular ways.

The first sociological thinkers included:

·       Herbert Spencer (1820 to 1903) social systems as organic systems.

·       Emile Durkheim (1858 to 1917) collective consciousness and culture.

·       Gabriel Tarde (1843 to 1904) social systems emerge from the actions of individual actors.

·       Max Weber (1864 to 1920) a bureaucratic model – hierarchy, roles and rules.

·       Kurt Lewin (1890 to 1947) group dynamics.

·       Lawrence Joseph Henderson (1878 to 1942) meaning in communication.

 

It was commonly presumed that a social system is homeostatic, and that a business organization is like a biological organism.

These ideas have influenced systems thinkers for 150 years, but are at least somewhat misleading.

The post war boom in system theory

When “system theory” became established as a topic in its own right is debatable.

Some suggest system theory is a branch of sociology.

“Systems theory, also called social systems theory...” https://www.britannica.com/topic/systems-theory

Others suggest the reverse, that social systems thinking is a branch of general system theory.

 

The modern concept of a system became a focus of attention after the Second World War.

And there was a burst of systems thinking in the period 1940 to 1980.

Influential bodies and groups have included these three.

 

1941 to 1960: The Macy Conferences - cross-disciplinary meetings in New York.

On Cybernetics, with a leaning to The Macy Foundation’s mandate to aid medical research.

Topics included connective tissues, metabolism, the blood, the liver and renal function.

Also infancy, childhood, aging, nerve impulses, and consciousness.

 

1949 to 1958: The Ratio Club - a cross-disciplinary group in the UK.

On Cybernetics in general: members included psychologists (Ashby), mathematicians (Turing) and engineers.

See the next section below.

 

1955 to date: The International Society for the Systems Sciences (ISSS).

On General System Theory: conceived in 1954 by Bertalanffy, Boulding and Rapoport.

See the next but one section below.

 

In the second half of the 20th century, sociologists and management scientists (like Boulding, Ackoff and Clemson) were quick to adopt system terminology.

“Though it grew out of organismic biology, general system theory soon branched into most of the humanities.” Laszlo and Krippner.

But some adopted the words rather than the concepts.

Some still draw the 19th century sociologists’ equation of human “organization” with system, treating systems and human organizations as synonymous.

Classical cybernetics - the science of system control

Cybernetics emerged out of efforts in the 1940s to understand the role of information in system control.

Thinkers in this domain include:

·       Norbert Wiener (1894-1964) the science of system control.

·       W. Ross Ashby (1903-1972) the law of requisite variety.

·       Alan Turing (1912-1954) finite state machines and artificial intelligence.

 

Wiener introduced cybernetics as the science of biological and mechanical control systems.

He discussed how a controller directs a target system to maintain its state variables in a desired range.

E.g. A thermostat directs the actions of a heating system.

Ashby popularised the usage of the term 'cybernetics' to refer to self-regulating (rather than self-organising) systems.

 

The Ratio Club, which met from 1949 to 1958, was founded by neurologist John Bates to discuss cybernetics.

Many members (e.g. Alan Turing) went on to become prominent scientists - neurobiologists, engineers, mathematicians and physicists.

Since the 19th century, many authors have been particularly interested in homeostatic systems.

In “Design for a Brain” (1952), Ashby discussed biological organisms as homeostatic systems.

He presented the brain as a regulator that maintains each of a body’s state variables in the range suited to life.

This table distils the general idea.

 

Generic system: Actors interact in orderly activities to maintain system state and/or consume/deliver inputs/outputs from/to the wider environment.

Ashby’s design for a brain: Brain cells interact in processes to maintain body state variables by receiving/sending information from/to bodily organs/sensors/motors.

 

However, homeostatic entities and processes are only a subset of systems in general.

In his more general work, “Introduction to Cybernetics” (1956), Ashby defined a system as a set of regular or repeatable behaviors, which advance a set of state variables.
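
To illustrate that definition, here is a minimal sketch in Python, assuming a system modelled as a set of state variables advanced by one regular, repeatable behavior; the variables, numbers and update rule are invented for illustration, not Ashby’s own notation.

```python
# A system in Ashby's sense: a set of state variables advanced by a
# regular, repeatable behavior (all values invented for illustration).

state = {"temperature": 39.0, "glucose": 4.0}  # the observer's chosen variables

def behavior(s):
    """One repeatable state transition, applied at each step."""
    return {
        "temperature": s["temperature"] + 0.5 * (37.0 - s["temperature"]),
        "glucose": 0.9 * s["glucose"] + 0.5,
    }

for _ in range(5):
    state = behavior(state)

print(state)  # the variables approach a steady state (37.0, 5.0)
```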

 

Ashby urged us to distinguish entities from the abstract systems they realise. 

“At this point we must be clear about how a "system" is to be defined.

Our first impulse is to point at [some real-world entity] and to say "the system is that thing there".

This method, however, has a fundamental disadvantage: every material object contains no less than an infinity of variables and therefore of possible systems.

Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” (Ashby 1956)

 

In cybernetics, a system is an abstraction, a theory of how an entity behaves, or should behave.

Ashby’s system is a model of some regular behavior; it represents any entity or “real machine” that performs as described in the model.

 

Ashby’s cybernetics

Systems

<create and use>                   <represent>

Observers <observe and envisage> Real machines

 

Information feedback loops

Cybernetics is the science of how a physical, biological or social machine can be controlled.

It addresses how a control system (via an input-output feedback loop) can control at least some activities in a target system.

Information is encoded in flows or messages that pass between systems.

A control system:

·       receives messages that describe the state of a target system

·       responds by sending messages to direct activities in the target system.
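
To make this loop concrete, here is a minimal sketch of the thermostat example above, assuming a simple on/off rule and invented temperatures; a sketch, not a definitive implementation.

```python
# A control system (thermostat) directing a target system (heater).

def thermostat_step(room_temp, target, heater_on):
    """Receives a message describing the target system's state and
    responds with a directive for its activity."""
    if room_temp < target - 0.5:
        return True    # direct the heating system to switch on
    if room_temp > target + 0.5:
        return False   # direct the heating system to switch off
    return heater_on   # state variable in the desired range: no change

temp, heater = 18.0, False
for _ in range(10):
    heater = thermostat_step(temp, target=20.0, heater_on=heater)
    temp += 0.8 if heater else -0.3   # the directive's effect on system state
    print(f"temp={temp:.1f} heater={'on' if heater else 'off'}")
```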

 

Information feedback loops are found in both organic and mechanical systems:

·       A missile guidance system senses spatial information, and sends messages to direct the missile.

·       A brain holds a model of things in its environment, which an organism uses to manipulate those things.

·       A business database holds a model of business entities and events, which people use to monitor and direct those entities and events.

·       A software system holds a model of entities and events that it monitors and directs in its environment.

 

Ashby’s classical cybernetics is widely used - explicitly or implicitly.

However, his extrapolations from cybernetics to learning and intelligence look questionable.

Here, the brain’s ability to typify, to see resemblances and patterns, and use them to predict the immediate future, is more interesting than how it works.

 

For more on cybernetics, read Ashby’s ideas.

General system theory – the cross-science notion of a system

Von Bertalanffy (a biologist) introduced the idea of a cross-science general system theory in the 1940s.

“There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind.”

His aim was to discover patterns and elucidate principles common to systems in every scientific discipline, at every level of nesting.

He looked for concepts and principles applicable to several disciplines or domains of knowledge rather than to one.

 

The 1954 meeting of the American Association for the Advancement of Science in California was notable.

Some people at that meeting conceived a society for the development of General System Theory (the ISSS mentioned above).

They included:

·       Ludwig von Bertalanffy (1901-1972) the cross-science notion of a system

·       Kenneth Boulding (1910-1993) applying general system theory to “management science”.

·       Anatol Rapoport (1911-2007) wrote on game theory and social network analysis.

 

Von Bertalanffy was ambiguous about the distinction between description and reality.

In 1968, he wrote that: “All scientific constructs are models representing certain aspects or perspectives of reality.”

We can represent his assertion using our triangle.

 

General system theory

System Models

<create and use>          <represent>

Observers <observe and envisage> System Realities

 

Unfortunately, by referring to a biological entity as a system, he tended to conflate (at least in readers’ minds) the system reality and the system model. 

 

Organicism

People create hierarchical descriptions of reality by zooming in and zooming out.

They commonly decompose systems into subsystems and compose subsystems into systems.

Being a biologist, von Bertalanffy called this organicism.

He was very familiar with decomposing an organism into organs into cells into organelles into molecular structures and inter-reactions.

Note that the decomposition is not fractal; the system at one level is very different from the system at the next higher or lower level.

 

Note that as you zoom in and zoom out, what appears holistic at one level is reductionistic at another.

E.g. Consider the beating of the human heart.

I describe the regular beat as an emergent property of parts (muscles) interacting in a whole (the heart).

You describe it as an ordinary/assumed property of one part in a wider whole (a body).

E.g. Consider the flexing of the Tacoma Narrows bridge.

I wrongly describe the flexing as an emergent property of the whole thing (the bridge).

You realise it is really an emergent property of a wider whole in which some part(s) of the bridge interact with some part(s) of its environment (the wind).

 

Information

Bertalanffy related system theory to communication of information between the parts of a system and across its boundary.

“Another development which is closely connected with system theory is that of… communication.

The general notion in communication theory is that of information.

A second central concept of the theory of communication and control is that of feedback.”

“Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow…”

 

Some general system ideas

Regarding information feedback loops, Bertalanffy was on the same page as Ashby.

Here are some ideas that Ashby’s cybernetics shares with wider cross-science general system theory.

·       System: an entity describable as actors interacting in activities - to advance the system’s state and/or transform inputs into outputs.

·       Coupling: the relating of subsystems in a wider system by flows.

·       Flow: the conveyance of a force, matter, energy or information.

·       Feedback loop: the circular fashion in which output flows influence future input flows and vice-versa.

·       Emergence: the appearance of properties in a wider or higher-level system, from the coupling of lower-level subsystems.

·       Holism: looking at a thing in terms of how its parts join up rather than dissecting each part.

·       Information flow: the conveyance of information in a message from a sender to a receiver.

·       Process: a sequence of activities that changes or reports a system’s state, or the logic that controls the sequence.

·       System environment: the world outside the system of interest.

·       System boundary: a line (physical or logical) that separates a system from its environment, and encapsulates an open system as an input-process-output black box.

·       System interface: a description of inputs and outputs that cross the system boundary.

·       System state: the current structure or variable values of a system, which change over time.

·       System state change: a change to the state of a system.

·       System mutation: a change to the roles, rules or variables of a system.

 

Read Introducing system ideas for a discussion of the system terms and concepts above, and some ambiguities.

Beware that many terms used by systems thinkers (e.g. emergence, complexity and self-organisation) are open to several interpretations.

 

Cooperation and conflict between actors

When actors interact in a system, they don’t necessarily help each other.

They may cooperate, as within a football team or a business system.

They may compete, as in a game of poker, or a market; or hurt each other, as in a boxing match or a war.

Cooperation, conflict and conflict resolution are a focus of bio-mathematics and game theory.

 

Anatol Rapoport was a mathematical psychologist and biomathematician who made many contributions.

He pioneered the modeling of parasitism and symbiosis, researching cybernetic theory.

This gave a conceptual basis for his work on conflict and cooperation in social groups.

 

Game theory: cooperation and conflict resolution

In the 1980s, Rapoport won a computer tournament designed to further understanding of the ways in which cooperation could emerge through evolution.

He was recognized for his contribution to world peace through nuclear conflict restraint via his game theoretic models of psychological conflict resolution.

 

Social network analysis

Rapoport showed that one can measure flows through large networks.

This enables learning about the speed of the distribution of resources (including information) through a society, and about what speeds or impedes these flows.

 

For more, read Introducing general system theory.

System Dynamics – animating a system model to predict long-term outcomes

System Dynamics was founded and first promoted by:

·       Jay Forrester (1918 to 2016) every system is a set of quantities that are related to each other.

·       Donella H. Meadows (1941 to 2001) resource use, environmental conservation and sustainability.

 

Jay Forrester (a professor at the MIT Sloan School of Management) was the founder of System Dynamics.

He defined a system as a set of stocks (or populations) that interact and affect each other.

Where one stock has an effect on another stock, that causal relationship is defined as an inter-stock flow.

·       A stock is a variable number representing the level of a quantity, or instances of a type.

·       A flow between two stocks represents how increasing or decreasing one stock increases or decreases another stock.

·       A causal loop connects two or more stocks by flows that form a circular feedback loop.

 

The system’s behavior can be modelled in a causal loop diagram, supported by rules that modify quantitative variable values.

Generally, and mathematically, the system is seen as a set of coupled, nonlinear, first-order differential (or integral) equations.  

However, the system is commonly simulated in software by dividing time into discrete intervals and stepping the model through one interval at a time.

Which is to say, the system is animated as a discrete event-driven system (as many business systems are).
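
To illustrate, here is a minimal sketch of such a discrete-time animation, assuming two invented stocks coupled in a causal loop; the stocks and rate constants are illustrative, not taken from any published model.

```python
# Two stocks (rabbits, foxes) coupled by inter-stock flows that form a
# causal feedback loop, stepped through discrete time intervals.

dt = 0.1                        # one discrete time interval
rabbits, foxes = 50.0, 12.0     # initial stock levels (invented)

for step in range(500):
    # Flows: each rate depends on current stock levels, so the stocks
    # increase or decrease each other around the loop.
    births     = 0.10  * rabbits
    predation  = 0.01  * rabbits * foxes
    fox_growth = 0.005 * rabbits * foxes
    fox_deaths = 0.20  * foxes

    # Advance each stock by its net flow over one interval (Euler step).
    rabbits += (births - predation) * dt
    foxes   += (fox_growth - fox_deaths) * dt

print(f"after {500 * dt:.0f} time units: rabbits={rabbits:.1f}, foxes={foxes:.1f}")
```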

 

System Dynamics

Models of system dynamics

<create and animate>                          <represent>

System modellers <observe and envisage> Inter-related quantities of things

 

Today, akin to system dynamics, there are agent-based approaches to the analysis of systems.

 

Systems as patterns of behaviour

Meadows defined a system thus:

“A set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviors often classified as its function or purpose."

Meadows said this of the elements or parts.

·       A system isn’t just a set of interconnected things.

·       A system is a set of elements that are connected and coherently organized so as to achieve something.

·       The elements, the parts we… notice, are often… least important.

·       A system generally goes on being itself... even with complete substitutions of its elements.

Meadows characterised a system by its behaviors.

·       A system is a set of things interconnected so as to produce a pattern of behavior over time.

·       The behavior of a system cannot be known just by knowing its elements.

·       The way to deduce a system’s purposes is to watch how the system behaves.

·       You deduce its purposes from its behaviors, not from a declaration of goals.

·       Intervening in a system [to change its behaviors] is to change the system.

The ideas that Meadows expresses above are compatible with Ashby’s ideas, but incompatible with much other discussion of "systems".

Sociological and management thinkers often casually refer to a sociological-technical entity, such as IBM, as a system.

They do so without reference to any particular interest, perspective, "characteristic set of behaviors" or "pattern of behavior over time".

 

To paraphrase Ashby, the error is to point to an entity and call it "a system".

An entity is only a "concrete system" when, where and in so far as it realises an "abstract system".

In other words, a concrete system is the instantiation by an entity of an abstract system.

E.g. a game of poker is the instantiation by a card school of the abstract type that is the rules of poker.

The same system can be instantiated by other entities (other card schools).

The same entity can instantiate other systems (a game of whist).

 

Ashby pointed out that infinitely many systems may be abstracted from one material entity; there may be as many systems as there are observers.

IBM can realise countless different abstract systems in parallel, some of which may be in conflict.

Moreover, the systems that IBM realises may change over time. 

 

For more, read System Dynamics.

Basic system ideas

 

When is an entity describable as a system?

In the 19th century, Gibbs defined a system as: “a portion of the ... universe which we choose to separate in thought from the rest of the universe.”

Here, we prefer to use the term “entity” for any portion of the cosmos we separate in thought from the rest.

It could be a group of people, a moon, a hurricane or a wardrobe.

 

If every discrete entity or situation is a system, then the term is a noise word; it adds no useful meaning.

So, when is an entity a system? When the entity shows a pattern of behavior.

We speak of an entity as a system when seeking to:

·       understand how some outcome arises from some regular behavior.

·       predict how some outcome will arise from some regular behavior.

·       design a system to produce some outcome.

 

When is a whole a system?

Some define a system as a whole composed of two or more parts.

The parts of a whole can be physically bounded by

·       a phase boundary, skin or wall, as in an organism or a factory.

·       physical interconnections, as in the parts of a shirt or a necklace.

Parts can be logically bounded by

·       having a membership identifier, as in the employees of a business.

·       the choice of a describer or designer, as in a pair of curtains or a ridden bicycle.

 

If every whole divisible into parts is a system, then everything larger than a quark is a system.

So, when is a whole a system? When the whole shows a pattern of behavior.

And when that behavior emerges from how parts of the whole interact.

E.g. consider how the smooth forward motion of rider and bicycle emerges from their interaction.

In other examples of emergence, the properties of a higher-level system (e.g. consciousness) are said to emerge from the interactions of lower-level systems (e.g. neurons).

 

A system of interacting parts

A structure is composed of interconnected parts.

A system is a structure in which parts (aka actors) interconnect by interacting. 

Behaviors transform inputs into outputs.

E.g. a windmill transforms wind energy and corn into flour.

The system’s structures are designed to perform the required behaviors.


There are physical, biological and social systems, for example.

·       In a solar system, the actors are planets that orbit a star.

·       In a digestive system, the actors are parts (teeth, intestines, liver, pancreas etc.) that transform food into nutrients and waste.

·       In a church, the actors are people who play roles in the church’s organization and services.

 

How to describe a system of interest?

Most are describable in terms of actors (or roles for them), activities (or rules for them) and system state.

·       Actors are structures (in space) that perform activities - in roles and processes that are describable and testable.

·       Activities are behaviors (over time) that change the state of the system or something in its environment - governed by rules that are describable and testable.

·       A system’s state is changed by activities – and is describable as a set of state variables.

 

Most systems of interest are open (rather than closed), meaning the system is connected to its wider environment by inputs and outputs that are describable and testable.

Most systems of interest are stateful (rather than stateless), meaning the system’s structures persist over time and between discrete activities.

 

So, generally, a system can be described as actors that interact in regular activities to

·       transform inputs into outputs and/or

·       advance the system’s state.

 

Actors may act to

·       consume inputs from suppliers to the system.

·       produce outputs for customers of the system.

·       advance the state of the system - which can mean recording some information for future use.

 

Actors may also rest between activities or do something irrelevant to a system they play a role in.
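
To tie these ideas together, here is a minimal sketch of the actors/activities/state view, assuming a trivial order-taking example; all names are invented for illustration.

```python
# A system described as actors (structures in space), activities
# (behaviors over time) and state (variable values changed by activities).

from dataclasses import dataclass, field

@dataclass
class ShopSystem:
    actors: list                                # roles played by actors
    state: dict = field(default_factory=dict)   # persists between activities

    def take_order(self, item):
        """A regular activity: consumes an input from the environment
        and advances the system's state (records information)."""
        self.state.setdefault("orders", []).append(item)

shop = ShopSystem(actors=["clerk", "customer"])
shop.take_order("widget")   # an input from the environment
print(shop.state)           # {'orders': ['widget']}
```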

 

Physical, social and business systems

The systems of interest feature parts interacting in a pattern of behaviour.

In physical systems, parts may interact by energy such as electromagnetic radiation, or forces such as gravity.

In a social system, actors interact directly by sending and receiving information in messages.

They may also interact indirectly by writing or reading some shared memory (representing system state).

 

Our main interest is in applying system theory to socio-technical and business systems.

Actors play roles in rule-bound processes.

Actors process information encoded in memories and messages.

Actors respond to messages, often in ways determined by some received or stored information.

The information represents the state of entities or events of importance to the business at hand.

 

Goals

There are different views of what goals (if any) a system meets or is supposed to meet.

Does the solar system have a goal?

Goals may be given to a system by its observers, sponsors, designers or other stakeholders.

The actors who play roles in a system may share those given goals and/or have different goals.

However, Meadows said you deduce a system’s purposes from its behaviors, not from a declaration of goals.

And Beer coined the phrase “the purpose of a system is what it does” (POSIWID), where what it does is advance system state and/or produce outputs.

 

Abstract systems as types

A system may be envisaged in an abstract form - as in a causal loop diagram, or the rules of poker.

A system may be realized in a concrete form – as a performance of an abstract system.

Given an abstract system (a type), a concrete system is an instance of that type.
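
This type/instance relationship can be sketched in code; here, a class stands for an abstract system (the rules of poker) and each object stands for a concrete system (a game actually played). Names are illustrative only.

```python
# An abstract system as a type; concrete systems as instances of it.

class PokerRules:                    # the abstract system: a type
    def __init__(self, players):
        self.players = players       # one card school realising the rules

friday_game = PokerRules(["Ann", "Bob", "Cas"])  # one concrete system
casino_game = PokerRules(["Dee", "Eli"])         # another instance of the same type
# The same players could equally instantiate a different abstract system,
# e.g. a WhistRules class, just as one entity can realise many systems.
```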

 

Modelling systems

Observers can use various modelling techniques to describe or model the actors, activities and state of a system.

Using Ashby’s cybernetics, observers model a system as a set of state variables advanced by processes.

Using Forrester’s system dynamics, observers model a system as a set of stocks (variable quantities) increased and decreased by inter-stock flows.

Using Checkland’s soft systems method, observers model a system as actors playing roles in activities that transform inputs from the environment into outputs for customers.

Soft systems thinking – loosening the system concept

General system theory doesn’t start from or depend on sociology, or analysis of human behavior.

However, it stimulated people to look afresh at social systems in general and business systems in particular.

And the term “soft system” emerged in the 1970s.

 

Three well-known soft system thinkers are:

·       Russell L Ackoff (1919-2009) human organisations as purposeful systems.

·       Peter Checkland (born 1930) the Soft Systems Methodology.

·       Stafford Beer (1926-2002) management cybernetics and the Viable System Model.

 

The distinction between hard and soft systems is debatable.

Remember that Ashby’s system is “soft” in the sense it is a perspective of the real world.

One of the first soft system thinkers, Churchman, said "a thing is what it does".

He outlined these considerations for designing a system:

·       “The total system objectives and performance measures;

·       the system’s environment: the fixed constraints;

·       the resources of the system;

·       the components of the system, their activities, goals and measures of performance; and,

·       the management of the system.”

 

Russell Ackoff, a writer on management science, spoke of abstract and concrete systems.

An abstract system is a description or model of how an entity behaves, or should behave.

A concrete system is any entity that conforms well enough to an abstract system.

 

Ackoff’s system theory

Abstract systems

<create and use>                        <represent>

System thinkers <observe and envisage> Concrete systems

 

Consider these examples:

 

·       Composer: abstract system = a musical score; concrete system = a performance of the score.

·       Software engineer: abstract system = a program; concrete system = a computer that executes the program.

·       Sociologist: abstract system = a social system model; concrete system = a network of people playing roles in the system.

 

An abstract system does not have to be a perfect model of an entity’s behavior; only accurate enough to be useful.

We can test that an entity realises an abstract system - to the degree of accuracy we need for practical use.

 

The relationship between physical entities and abstract systems is many-to-many.

One physical entity (e.g. a person) may realise countless abstract systems (e.g. body temperature maintenance, poetry recital).

One abstract system (e.g. the game of poker) may be realised by countless physical entities.

 

Peter Checkland promoted a “soft systems methodology”.

He regarded a system as an input-to-output transformation, a perspective of a reality, a world view or “Weltanschauung”.

Different observers may perceive different systems, some in conflict, in any one human organization or other entity.

 

Checkland’s Soft systems methodology

World views

<create and use>                        <represent>

Observers <observe and envisage> Human organizations

 

Checkland observed that the distinction between hard and soft system approaches is also slippery.

Today, soft systems thinking approaches typically involve:

·       Considering the bigger picture

·       Studying the vision/problems/objectives

·       Identifying owners, customers, suppliers and other stakeholders

·       Identifying stakeholder concerns and assumptions

·       Unfolding multiple views, promoting mutual understanding

·       Analysis, visual modelling, experimentation or prototyping

·       Considering the cultural attitude to change and risk

·       Prioritizing requirements.

 

However, even mechanical engineers use these ideas.

And note the confusion in systems thinking between social networks and social systems.

 

For more, read

·       Checkland’s ideas

·       Ackoff’s ideas

·       Beer’s ideas

Decision theory - or theory of choice

Ackoff noted that the actors in a system, when described as per classical cybernetics, act according to roles and rules.

By contrast, the actors in a human social system have free will and can act as they choose.

This prompts the question as to how people do, or should, make choices.

A biologist or psychologist may look to instinct, homeostasis, emotions or Maslow’s hierarchy of needs as the basis for making decisions.

A sociologist or mathematician may take a different perspective.

A sociological perspective

Herbert Alexander Simon (1916 to 2001) was a political scientist, economist, sociologist, psychologist, and computer scientist.

According to Wikipedia, Simon argued that fully rational decision making is rare: human decisions are based on a complex admixture of facts and values.

And decisions made by people as members of organizations are distinct from their personal decisions.

He proposed that understanding organizational behavior in humans depends on understanding the concepts of Authority, Loyalties and Identification.

 

Observations:

A theory of why and how humans conform to group norms has to start from the evolutionary advantage it gives them.

Authority, Loyalties and Identification are matters for biology and management science rather than a general system theory.

A mathematical perspective

Game theory is concerned with choices made by agents interacting with each other – as in a game of poker.

By contrast, decision theory is concerned with the choices made by agents regardless of such a structured interaction.

 

“Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.

Decision theory can be broken into two branches:

·       normative decision theory, which gives advice on how to make the best decisions, given a set of uncertain beliefs and a set of values; and

·       descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions.

Empirical applications of this theory are usually done with the help of statistical and econometric methods, especially via the so-called choice models, such as probit and logit models.

 

Advocates for the use of probability theory point to the work of Richard Threlkeld Cox, Bruno de Finetti, and complete class theorems, which relate rules to Bayesian procedures

Others maintain that probability is only one of many possible approaches to making choices, such as fuzzy logic, possibility theory, quantum cognition, Dempster–Shafer theory and info-gap decision theory.

And point to examples where alternative approaches have been implemented with apparent success.” Wikipedia, 31/12/2018.
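
To illustrate the normative branch, here is a minimal sketch of its central rule, choosing the action with the highest expected utility; the actions, probabilities and utilities are invented for illustration.

```python
# Normative decision theory's core rule: maximize expected utility.

actions = {
    "launch now": [(0.6, 100), (0.4, -50)],  # (probability, utility) pairs
    "delay":      [(0.9, 40), (0.1, -10)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # "launch now", EU = 40.0
```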

 

Decision theory is beyond the scope of this work on system theory.

The Wikipedia entry on Decision Theory will give you other links to follow.

For some practical case studies, look here http://www.attwaterconsulting.com/Papers.htm

They feature (for example) the use of a Bayesian approach with Markov Chain Monte Carlo numerical methods.

The applications include assessing the risks of mechanical system failures, in order to inform decisions about their use and maintenance.

 

In the real world, we make decisions about decision making.

We can choose to make a decision or make no decision (kick the can down the road).

We can choose to make a decision using strictly mathematical analysis, or using a “Pugh matrix”, or using intuition/experience.

For many of the business decisions that top-level managers are paid to make:

·       the time or cost of strictly mathematical analysis is too high, or

·       the raw data/numbers entered into a Pugh Matrix are unreliable guesses.
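
For readers unfamiliar with it, a Pugh matrix is a simple scoring table; here is a minimal sketch, assuming each option is scored +1 (better), 0 (same) or -1 (worse) against a baseline option on each criterion. The criteria, options and scores are invented for illustration.

```python
# A Pugh matrix: options scored against a baseline, one row per option.

criteria = ["cost", "speed", "risk"]
scores = {
    "option A": {"cost": +1, "speed": 0, "risk": -1},
    "option B": {"cost": -1, "speed": +1, "risk": +1},
}

for option, row in scores.items():
    total = sum(row[c] for c in criteria)
    print(option, total)   # option A: 0, option B: 1
```

The arithmetic is trivial; as the bullet above says, the practical difficulty lies in the reliability of the scores entered.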

Second order cybernetics

“Second-order cybernetics” was developed around 1970.

It was developed and pursued by thinkers including Heinz von Foerster, Gregory Bateson and Margaret Mead.

It is said to be the circular or recursive application of cybernetics to itself.

It shifts attention from observed systems to the observers of systems.

It is often applied to human societies or businesses.

In those contexts, a system’s actors can also be system thinkers, who study and reorganise the system they play roles in.

 

Von Foerster said much that is axiomatic, and much that is questionable.

In this video https://youtu.be/acx-GiTyoNk he proposed a system is “a pattern which connects”.

And proposed a three-stage process.

·       First (“sci”) separate the whole into parts.

·       Second (“sy”) connect the parts in a pattern, a system.

·       Third, look not only at the pattern, but at the matrix in which patterns connect.

 

The first two steps are axiomatic, the third is questionable.

Why distinguish a matrix from other patterns? Why presume only one matrix? Why presume patterns must connect?

What matters is that each pattern/system is a useful perspective of reality.

In cybernetics and other 20th century sources we have more specific and useful definitions of a system’s properties than simply “a pattern which connects".

In most modern systems thinking, the “parts” of a system are actors or components that interact in activities.

 

Unfortunately, second order cybernetics tends to lead people away from classical cybernetics.

A common issue in discussion of systems is the one Ashby warned us of – the confusion of real-world entities with systems.

In much systems thinking discussion there is little or no recognition that:

·       one entity can realise several (possibly conflicting) systems at the same time

·       there is a need to verify an entity behaves in accord with a system description

 

Seeing the world as a duality of systems and observers is naive.

The observer(s) of a real word entity may discuss it (its properties, problems and possibilities) without reference to any system.

And/or abstract countless different (possibly conflicting) systems from its behavior.

 

Moreover, discussion of system change often confuses two kinds of change or adaptation.

In classical cybernetics, a system responds in a regular way to changes in its environment or another (coupled) system.

The term adaptation usually means system state change - changing state variable values in response to events.

The trajectory of a system’s state change (be it oscillating, linear, curved or jagged) is an inexorable result of the system’s rules.

Second-order cybernetics is often applied to thinking about social organisations.

Here, the term adaptation often means system mutation or evolution – changing the system’s state variables, roles and rules.

This changes the very nature of the system; it changes its laws.

The trouble is that continual adaptation or reorganisation of a system undermines the general concept of a system – which is regularity.

 

Consequently, second-order cybernetics tends to undermine more general system theory.

If we don't distinguish an ever-evolving social network from the various modellable systems it may realise, the concept of a system evaporates.

For more, read second order cybernetics.

Conclusions and remarks

Bertalanffy didn’t like some directions in “the System Movement”, especially those specific to one science.

However, he saw the movement as “a fertile chaos” that generated many insights and inspirations.

 

Many terms used by systems thinkers (e.g. emergence, complexity and self-organisation) are open to several interpretations.

Here is an example of the kind of paper now written under the heading of “systems thinking”.

The Non-Systemic Usages of Systems as Reductionism: Quasi-Systems and Quasi-Systemics

The abundant use of questionable terms in this paper makes it well-nigh unreadable.

 

A reader writes:

Having quickly read that paper I can't decide whether to call it pseudoscience or deism!

Or Dadaism - an artistic movement from the early 20th century whose purpose was to ridicule the modern world.

It is exactly the kind of writing on systems that John Gall's "Systemantics" sends up.

 

I once wrote a 50-page pamphlet on "human factors in hierarchical organisations".

It never occurred to me that anybody would relate it to system theory.

Much "systems thinking" is about the human condition rather than systems of the kind Ashby wrote about. 

 

There is probably little dispute about these basic ideas about systems.

·       There are forms and functions – actors and activities – in a system.

·       There are accidental and purposive – natural and designed – systems.

·       There are descriptions and realisations – abstract and concrete – systems.

 

However, much social systems thinking discussion seems confused in the way that Ashby warned us of.

It confuses real-world entities with systems they realise.

To rescue the system concept, we need to distinguish social networks from social systems, as this table indicates.

 

A social network:

·       A set of actors who communicate as they choose.

·       A concrete entity in the real world.

·       Ever-changing at the whim of the actors.

A social system:

·       A set of activities performed by actors.

·       A performance of abstract roles and rules by actors in a social network.

·       Described and changed under change control.

 

A social system can be seen as a game in which actors play roles and follow rules.

You rely on countless human activity systems; for example, you wouldn't want to:

·       stand trial in a court that didn’t follow court procedures.

·       board a train or airplane operated by people who didn’t follow the rules.

·       invest in a company that didn’t repay its loans as promised.

·       play poker with people who ignore the laws of the game.

 

Social networks v social systems

A social network is simply a group of people who inter-communicate.

It is an entity, a bounded whole, but is it a system?

When and where the actors creatively invent how they act and interact, the network does not behave as a system.

The network is a system only when and in so far as its actors interact in regular ways – where there are describable roles or rules.

 

Every human actor can belong to many social networks and play roles in many systems.

One social network can realise several distinct social systems.

And one social system can be realised by several social networks.

 

For more read Systems thinking approaches.

Footnote on sociologically-inclined system thinking

Social systems thinking continued alongside the post-war system theory movement, sometimes in touch with it, sometimes far apart from it.

Below are a few notes on sociologically-inclined systems thinkers.

Social groups as organisms

In the theory of evolution by natural selection, can a social group be treated as an organism?

Can selection between groups (favoring cooperation) successfully oppose selection within a group (by competition)?

Thinkers who addressed this include:

·       Lynn Margulis – the evolution of cells, organisms and societies

·       Boehm – the evolution of hunter-gatherer groups

·       Elinor Ostrom – the formation of cooperatives.

 

The evolution of cells, organisms and societies

Lynn Margulis (1970) proposed that nucleated cells evolved from symbiotic associations of bacteria.

The general idea is that members of groups can become so cooperative that the group becomes a higher-level organism in its own right.

The idea was later generalized (Maynard Smith and Szathmary 1995, 1999) to explain other major transitions, such as the rise of

·       multicellular organisms

·       eusocial insect colonies

·       human evolution.

 

It is common for people to draw a questionable analogy from biology to sociology.

The cooperation between smaller biological entities in a larger one is inflexible, rigidly rule-bound.

The interactions between people in a social network are extremely flexible, and a matter of choice for each individual.

Moreover, the people in one social network may realise several, possibly conflicting, rule-bound systems.

 

The evolution of hunter-gatherer groups

Hunter-gatherer societies are famously egalitarian, but not because everyone is nice to everybody else.

Group members can collectively suppress bullying and other self-aggrandizing behaviors within their ranks.

Boehm (1993, 1999, 2011) saw this as the defining criterion of a major evolutionary transition in human society.

With little disruptive competition within a group, succeeding as a group became the main selective force in human evolution.

 

The formation of cooperatives

Consider a group of people who share access to resources.

Such as fishermen who share fishing grounds, or farmers who share an irrigation system.

How to avoid “the tragedy of the commons” by which competition exhausts the common resource?

For a while, the fishermen must stop fishing, and farmers stop farming, to define the rules of their social system – a cooperative.

 

Elinor Ostrom (1990, 2010) defined eight generic conditions for such a cooperative.

 

1 Clearly defined boundaries – members know they are members of a group and its aims.

2 Proportional equivalence between benefits and costs – members must earn benefits and can’t just appropriate them.

3 Collective choice arrangements – members must agree decisions so nobody can be bossed around.

4 Monitoring, 5 Graduated sanctions, and 6 Fast and fair conflict resolution – disruptive self-serving behaviors must be detected and punished.

7 Local autonomy – the group must have the elbow room to manage its own affairs.

8 Appropriate relations with other rule-making authorities (polycentric governance) – all the rules above apply equally to inter-group relations.

 

David Wilson reports projects that successfully applied Ostrom’s eight principles.

However, there are two constraints on how widely applicable they are.

First, the eight conditions are very demanding.

Second, the notion of pre-modern grouping of humans by geographical location has largely broken down.

 

This section above was edited from this paper http://evonomics.com/tragedy-of-the-commons-elinor-ostrom by David Sloan Wilson.

Read that paper for more detail and references.

 

The modern transformation of social groups

In the ancient world, humans (like apes) were naturally grouped by geography.

They communicated only with people in the same location or territory.

First, vehicular transport transformed our ability to mix in different societies.

Then, telecommunications transformed our ability to communicate remotely.

 

What now counts as a social group?

How does an individual actor join or leave a group? Who decides?

How many groups can one individual be a member of? Are there degrees of membership?

How does an actor prioritise, apportion time and attention, between groups they belong to?

How do they reconcile the conflicting norms or goals of different groups?

Luhmann: autopoietic social systems

Niklas Luhmann (1927–1998) was a German sociologist and student of Parsons.

Like writers a century earlier, he presumed a system is homeostatic and sustains itself, though in a very curious way.

David Seidl (2001) said the question facing a social system theorist is what to treat as the basic elements of a social system.

“The sociological tradition suggests two alternatives: either persons or actions.”

Luhmann chose neither; he proposed the basic elements of a social system are communicative events about a code that lead to decisions that sustain the system.

Each social system is centred on one code, which is a concept such as “justice” or “sheep shearing”.

He endorsed the “hermeneutic principle” that the hearer alone determines the meaning of a communicative event.

Read Luhmann’s ideas for more.

 

Observations:

Luhmann’s system is radically different from systems as understood by most other system theorists.

It is well-nigh diametrically opposed to that of classical cybernetics, since the system has no persistent structure, no persistent state, and no memory of communication events.

And the hermeneutic principle is contrary to common sense and to biology, since communication requires a receiver to decode the same meaning from a message that the sender intentionally encoded in it.

Luhmann’s whole scheme (like that of Parsons before him) seems more metaphysical than scientific.

Having said that, the idea of a system based on a code might be seen as having a counterpart in more general system theory.

That counterpart is a “domain-specific language” for communicating information about entities and events related to one body of knowledge.

Habermas: universal pragmatics

Jürgen Habermas (born 1929) was a critic of Luhmann’s theory of social systems.

He developed the social theory of communicative reason or communicative rationality.

According to Wikipedia, this distinguishes itself from the rationalist tradition, by locating rationality in structures of interpersonal linguistic communication rather than in the structure of the cosmos.

It rests on the argument called universal pragmatics – that all speech acts have an inherent "purpose" – the goal of mutual understanding.

He presumed human beings possess the communicative competence to bring about such understanding.

And hoped that coming to terms with how people understand or misunderstand one another could lead to a reduction of social conflict.

 

Observations:

A theory of why and how animate entities communicate has to start from the evolutionary advantage it gives them.

Natural human language is inherently fluid and fuzzy; it is a tool for social bonding and communication, but can easily lead to misunderstandings.

 

All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.