Some system thinkers and their ideas

https://bit.ly/2w5XKNK

Copyright 2017 Graham Berrisford. One of several hundred papers at http://avancier.website. Last updated 22/10/2019 18:35

 

“A very interesting potted history of the evolution of systems thinking and the different strands of thoughts that have evolved.

Thank you for sharing.”

 

Contents

Before systems thinking - recap

The post war boom in system theory

Systems in general - recap

Classical cybernetics - the science of system control

General system theory – the cross-science notion of a system

System Dynamics – animating a system model to predict long-term outcomes

Soft systems thinking – loosening the system concept

Decision theory - or theory of choice

Second order cybernetics

Conclusions and remarks

Footnote on sociologically-inclined system thinking

 

Before systems thinking - recap

For the ideas attributed to the thinkers below, read Thinkers who foreshadowed system theory.

Thinkers included:

·       Adam Smith (1723 to 1790) subdivision within and competition between systems.

·       Charles Darwin (1809 to 1882) system mutation by reproduction with modification.

·       Claude Bernard (1813 to 1878) homeostatic feedback loops.

·       Willard Gibbs (1839 to 1903) the development of chemistry into a science.

·       Vilfredo Pareto (1848 to 1923) the Pareto principle.

 

Gibbs defined a system as: “a portion of the ... universe which we choose to separate in thought from the rest of the universe.”

By this definition, every describable thing is a system.

Here, Gibbs’s “portion of the universe” is an entity, and a system is a particular way of looking at an entity.

 

In the 19th century, sociology started with thinkers including:

·       Herbert Spencer (1820 to 1903) social systems as organic systems.

·       Emile Durkheim (1858 to 1917) collective consciousness and culture.

·       Gabriel Tarde (1843 to 1904) social systems emerge from the actions of individual actors.

·       Max Weber (1864 to 1920) a bureaucratic model – hierarchy, roles and rules.

·       Kurt Lewin (1890 to 1947) group dynamics.

·       Lawrence Joseph Henderson (1878 to 1942) meaning in communication.

 

Note the tendency to presume a social or business organization is like a biological organism, is a homeostat, or is a bureaucratic machine.

These three ideas, which have influenced systems thinkers for 150 years, are misleading.

The post war boom in system theory

When “system theory” became established as a topic in its own right is debatable.

Some suggest system theory is a branch of sociology.

“Systems theory, also called social systems theory...” https://www.britannica.com/topic/systems-theory

Others suggest the reverse, that social systems thinking is a branch of general system theory.

 

The modern concept of a system became a focus of attention after the Second World War.

And there was a burst of systems thinking in the period 1940 to 1980.

Influential bodies and groups have included these three.

 

1941 to 1960: The Macy Conferences - cross-disciplinary meetings in New York.

On Cybernetics, with a leaning toward the Macy Foundation’s mandate to aid medical research.

Topics included connective tissues, metabolism, the blood, the liver and renal function.

Also infancy, childhood, aging, nerve impulses, and consciousness.

 

1949 to 1958: The Ratio Club - a cross-disciplinary group in the UK.

On Cybernetics in general: members included psychologists (Ashby), mathematicians (Turing) and engineers.

See the next section below.

 

1955 to date: The International Society for the Systems Sciences (ISSS).

On General System Theory: conceived in 1954 by Bertalanffy, Boulding and Rapoport.

See the next but one section below.

 

Later in the 20th century, sociologists and management scientists (like Boulding, Ackoff and Clemson) were quick to adopt system terminology.

“Though it grew out of organismic biology, general system theory soon branched into most of the humanities.” Laszlo and Krippner.

But some adopted the words rather than the concepts.

And some still presume the 19th century equation: human “organization” = system.

Systems in general – recap

Here, we use the word “entity” to mean “an observable or conceivable part of the world”.

It could be a planet, a hurricane, a group of people, or a performance of a symphony.

In his work on cybernetics, Ashby urged us not to confuse an entity with any abstract system that the entity realises.

To equate entities with systems (one to one) is the most common mistake you will find in “systems thinking” discussion.

 

The notion that a system is a "perspective" of an entity is deeply embedded in the history of systems thinking.

E.g. W Ross Ashby (cybernetics), Russell Ackoff (management science) and Peter Checkland (soft systems method).

All three indicated that a human organization is an entity in which observers may perceive many different systems, some of which conflict.

 

General ideas about a system

Given a real-world entity or situation, to speak of it as a system implies our listeners share our perspective (in mind or documentation) of that entity as a system.

Notably its scope/boundary, and some regular behavior that is observable and testable.

E.g. we agree a solar system features named planets, which orbit a sun.

A digestive system features parts (teeth, intestines, liver, pancreas, blood flow etc.), which transform food into nutrients and waste.

A church features members, who play roles in the church’s organization and services.

 

Holism

For a whole to be a whole, its parts must be bounded.

Either physically bounded by a phase boundary, wall or skin.

Or logically grouped by sharing a type, having a membership identifier, or by design.

 

For a whole to be a system, its parts must interact.

The parts can be actors, cells, components, stocks or subsystems of any kind.

Note that distributed parts may interact by gravity, electromagnetic radiation, or communication of information.

So, a communication network may be needed to enable a system of interest.

 

Systems thinkers take a “holistic view” of a thing.

They look at how the properties of the whole thing “emerge” from cooperation between its “parts”.

Emergence

A system is composed of two or more related parts (said Ackoff). E.g. rider + bicycle + road.

Emergent properties arise from interactions between parts (said Ashby). E.g. smooth forward motion.

If you speak of a thing as a complete whole with no awareness of or reference to its parts, then you don’t have a “system” with “emergent properties”.

You just have a “thing” with “properties”.

 

Holism is Relative rather than Absolute

Holistic, reductionistic and emergent are relative to the scope considered.

As you zoom in and zoom out, what appears holistic at one level is reductionistic at another.

In a nested “hierarchy” of things, I see an emergent property (a regular beat) of parts (muscles) interacting in a whole (a heart).

You see that as an ordinary/assumed property of an elementary part of a wider whole (a body).

In a “coupling” of things, I think a thing (the Tacoma Narrows bridge) has an emergent property (flexing).

You see it is really an emergent property of a wider whole in which some part(s) of that thing (the bridge) interact with some part(s) of its environment (the wind).

 

A system as more than “a pattern which connects”

Von Foerster was a dilettante who said much that is axiomatic, and much that is questionable.

In this video https://youtu.be/acx-GiTyoNk he proposed a system is “a pattern which connects”.

And proposed a three-stage process.

·       First (“sci”) separate the whole into parts.

·       Second (“sy”) connect the parts in a pattern, a system.

·       Third, look not only at the pattern, but at the matrix in which patterns connect.

 

That is two axiomatic steps and a questionable one.

A matrix is just another pattern. Why presume only one matrix? Why presume patterns must connect? Is light composed of waves or particles?

What matters is that each pattern/system is a useful perspective of reality.

And in cybernetics and other 20th century sources we have more specific and useful definitions of a system’s properties than simply “a pattern which connects".

In most modern systems thinking, the “parts” of a system are actors or components that interact in activities.

 

Dynamic activity systems

Generally, a system can be described as actors interacting in activities to advance the system’s state and/or transform inputs into outputs.

·       The actors are structures (in space) that perform activities - in roles and processes that are describable and testable.

·       The activities are behaviors (over time) that change the state of the system or something in its environment -  governed by rules that are describable and testable.

·       The state is describable as a set of state variables - each with a range of values.

·       An open system is connected to its wider environment - by inputs and outputs that are describable and testable.

 

These concepts can be seen in writings of Ashby, Forrester and Checkland.

In Ashby’s cybernetics, a system is modelled as processes that advance a set of state variables.

In Forrester’s system dynamics, a system is modelled as inter-stock flows that advance a set of stocks (variable populations).

In Checkland’s soft systems method, a system is modelled as actors who perform processes that transform inputs into outputs for customers.
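
All three views fit one sketch. Below is a minimal illustration in Python (the class, names and numbers are invented for this paper, not drawn from Ashby, Forrester or Checkland): an actor/process advances a state variable and transforms inputs into outputs.

```python
# A toy "dynamic activity system": one state variable, one rule-bound
# activity, inputs consumed and outputs delivered. Illustrative only.

class System:
    def __init__(self):
        self.state = {"count": 0}       # state variables, each with a range of values

    def step(self, inp):
        """One activity: advance the state and deliver an output."""
        self.state["count"] += inp      # rule: accumulate the input
        return self.state["count"]      # output: report the new state

s = System()
print([s.step(i) for i in (1, 2, 3)])   # [1, 3, 6] - regular, testable behavior
```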

Classical cybernetics - the science of system control

Cybernetics emerged out of efforts in the 1940s to understand the role of information in system control.

Thinkers in this domain include:

·       Norbert Wiener (1894 to 1964) the science of system control.

·       W. Ross Ashby (1903 to 1972) the law of requisite variety.

·       Alan Turing (1912 to 1954) finite state machines and artificial intelligence.

 

Wiener introduced cybernetics as the science of biological and mechanical control systems.

He discussed how a controller directs a target system to maintain its state variables in a desired range.

E.g. a thermostat directs the actions of a heating system.

Ashby popularised the usage of the term 'cybernetics' to refer to self-regulating (rather than self-organising) systems.

 

The Ratio Club, which met from 1949 to 1958, was founded by neurologist John Bates to discuss cybernetics.

Many members (e.g. Alan Turing) went on to become prominent scientists - neurobiologists, engineers, mathematicians and physicists.

Since the 19th century, many authors have been particularly interested in homeostatic systems.

In “Design for a Brain” (1952), Ashby discussed biological organisms as homeostatic systems.

He presented the brain as a regulator that maintains each of a body’s state variables in the range suited to life.

This table distils the general idea.

 

Generic system: Actors interact in orderly activities to maintain system state and/or consume/deliver inputs/outputs from/to the wider environment.

Ashby’s design for a brain: Brain cells interact in processes to maintain body state variables by receiving/sending information from/to bodily organs/sensors/motors.

 

However, homeostatic entities and processes are only a subset of systems in general.

In his more general work, “Introduction to Cybernetics” (1956), Ashby defined a system as a set of regular or repeatable behaviors, which advance a set of state variables.

 

On abstract and concrete systems

Ashby, Ackoff, Checkland and others emphasised that a system is a perspective of a reality.

They distinguished what some call “soft systems” and Ackoff called “abstract systems” from their realizations.

An abstract system (e.g. the rules of Poker) is a theory of, or insight into, how some part of the world works.

A concrete system (e.g. a game of Poker) is a real-world application, or empirical example, of such a theory.

Science requires us to show the latter conforms (well enough) to the former.

 

The basis of system theory

Abstract systems (descriptions)

<create and use>                              <represent>

System thinkers   <observe and envisage>   Concrete systems (realities)

 

These papers take this triangular, scientific, view of system theory as axiomatic.

·       An abstract system (e.g. the normal regular heart beat) is a description or model of how some part of the world behaves, or should behave.

·       A concrete system (e.g. your own heart beating) is a realisation by a real-world entity that conforms well enough to an abstract system.

 

An abstract system does not have to be a perfect model of what is described.

It only has to be accurate enough to be useful in understanding and predicting the behaviour of a concrete system.

 

The relationship between physical entities and abstract systems is many-to-many.

·       One physical entity (e.g. a person) may realise countless abstract systems (e.g. body temperature maintenance, poetry recital).

·       One abstract system (e.g. the game of poker) may be realised by countless physical entities.
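
A minimal sketch of this many-to-many relationship in Python (the class and examples are invented for illustration):

```python
# Sketch: one abstract system realised by many concrete entities, and one
# entity realising several abstract systems at once. Invented names only.

class AbstractSystem:
    """A testable description: a name plus roles and rules."""
    def __init__(self, name, rules):
        self.name, self.rules = name, rules

poker = AbstractSystem("Poker", ["deal cards", "bet in turn"])
recital = AbstractSystem("Poetry recital", ["recite verses from memory"])

# One abstract system, countless concrete realisations:
friday_game = {"realises": poker, "actors": ["Ann", "Bob", "Cas"]}
casino_game = {"realises": poker, "actors": ["Dee", "Eli"]}
print(friday_game["realises"] is casino_game["realises"])   # True: same description

# One physical entity (Ann) realising several abstract systems:
anns_systems = [poker, recital]
```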

 

Information feedback loops

Cybernetics is the science of how a system (be it organic or mechanical) can be controlled.

It addresses how a control system (via an input-output feedback loop) can control at least some activities in a target system.

Information is encoded in flows or messages that pass between systems.

A control system:

·       receives messages that describe the state of a target system

·       responds by sending messages to direct activities in the target system.

 

Information feedback loops are found in both organic and mechanical systems:

·       A missile guidance system senses spatial information, and sends messages to direct the missile.

·       A brain holds a model of things in its environment, which an organism uses to manipulate those things.

·       A business database holds a model of business entities and events, which people use to monitor and direct those entities and events.

·       A software system holds a model of entities and events that it monitors and directs in its environment.
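
The loop itself is simple enough to sketch in code. In this toy model (a hypothetical thermostat; the numbers are invented), a controller receives a message describing the target system’s state and responds with a message directing its activity:

```python
# A minimal feedback loop: state messages in, directive messages out.

def controller(temperature, target=20.0):
    """Receive a state message; respond with a directive."""
    return "heat on" if temperature < target else "heat off"

temperature = 15.0
for _ in range(10):
    directive = controller(temperature)   # feedback: state -> directive
    if directive == "heat on":
        temperature += 1.5                # the heater warms the room
    else:
        temperature -= 0.5                # the room cools when the heater is off
    print(f"{temperature:4.1f}  {directive}")
# The state variable oscillates in a narrow range around the target.
```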

 

Ashby’s classical cybernetics is more widely used (explicitly or implicitly) than second order cybernetics (discussed later).

However, his extrapolations from cybernetics to learning and intelligence look questionable.

In the 1950s, cyberneticians (Ashby and Turing, Wiener and McCulloch) thought that computers would give us insights into how the brain works.

Their hopes that artificial intelligence would work like natural intelligence were dashed, and the latter will surely not be understood in the foreseeable future.

Here, the brain’s ability to typify, to see resemblances and patterns, and use them to predict the immediate future, is more interesting than how it works.

 

For more on cybernetics, read Ashby’s ideas.

General system theory – the cross-science notion of a system

The 1954 meeting of the American Association for the Advancement of Science in California was notable.

Some people at that meeting conceived a society for the development of General System Theory (the ISSS mentioned above).

They included:

·       Ludwig von Bertalanffy (1901 to 1972) the cross-science notion of a system.

·       Kenneth Boulding (1910 to 1993) applying general system theory to “management science”.

·       Anatol Rapoport (1911 to 2007) game theory and social network analysis.

 

Bertalanffy was a biologist who promoted the idea of a general system theory.

His aim was to discover patterns and elucidate principles common to systems in every scientific discipline, at every level of nesting.

He looked for concepts and principles applicable to several disciplines or domains of knowledge rather than to one.

“There exist models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind.”

 

Bertalanffy related system theory to communication of information between the parts of a system and across its boundary.

“Another development which is closely connected with system theory is that of… communication.

The general notion in communication theory is that of information.

A second central concept of the theory of communication and control is that of feedback.”

“Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow…”

 

Regarding information feedback loops, Bertalanffy was on the same page as Ashby.

Here are some ideas that Ashby’s cybernetics shares with wider cross-science general system theory.

·       System: an entity describable as actors interacting in activities to advance the system’s state and/or transform inputs into outputs.

·       Coupling: the relating of subsystems in a wider system by flows.

·       Flow: the conveyance of a force, matter, energy or information.

·       Feedback loop: the circular fashion in which output flows influence future input flows and vice-versa.

·       Emergence: the appearance of properties in a wider or higher-level system, from the coupling of lower-level subsystems.

·       Holism: looking at a thing in terms of how its parts join up rather than dissecting each part.

·       Information flow: the conveyance of information in a message from a sender to a receiver.

·       Process: a sequence of activities that changes or reports a system’s state, or the logic that controls the sequence.

·       System environment: the world outside the system of interest.

·       System boundary: a line (physical or logical) that separates a system from its environment, and encapsulates an open system as an input-process-output black box.

·       System interface: a description of inputs and outputs that cross the system boundary.

·       System state: the current structure or variable values of a system, which change over time.

·       System state change: a change to the state of a system.

·       System mutation: a change to the roles, rules or variables of a system.

 

Read Introducing system ideas for a discussion of the system terms and concepts above, and some ambiguities.

Beware that many terms used by systems thinkers (e.g. emergence, complexity and self-organisation) are open to several interpretations.

 

Cooperation and conflict between actors

When actors interact in a system, they don’t necessarily help each other.

They may cooperate, as within a football team or a business system.

They may compete, as in a game of poker, or a market; or hurt each other, as in a boxing match or a war.

Cooperation, conflict and conflict resolution are a focus of bio-mathematics and game theory.

 

Anatol Rapoport was a mathematical psychologist and biomathematician who made many contributions.

He pioneered the modeling of parasitism and symbiosis, and researched cybernetic theory.

This gave a conceptual basis for his work on conflict and cooperation in social groups.

 

Game theory: cooperation and conflict resolution

In the 1980s, Rapoport won a computer tournament designed to further understanding of the ways in which cooperation could emerge through evolution.

He was recognized for his contribution to world peace through nuclear conflict restraint, via his game-theoretic models of psychological conflict resolution.
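
The tournament was Robert Axelrod’s iterated prisoner’s dilemma competition, and Rapoport’s winning entry was the famously simple “tit for tat” strategy. Here is a minimal sketch of it in Python (the payoff numbers are the conventional illustrative ones):

```python
# Tit for tat: cooperate first, then repeat the opponent's last move.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(a, b, rounds=6):
    ha, hb, score = [], [], [0, 0]
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)             # each player sees the other's past moves
        ha.append(ma); hb.append(mb)
        pa, pb = PAYOFF[(ma, mb)]
        score[0] += pa; score[1] += pb
    return score

print(play(tit_for_tat, tit_for_tat))     # [18, 18]: mutual cooperation
print(play(tit_for_tat, always_defect))   # [5, 10]: exploited once, then retaliates
```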

 

Social network analysis

Rapoport showed that one can measure flows through large networks.

This enables learning about the speed at which resources, including information, are distributed through a society, and what speeds or impedes these flows.

 

For more, read Introducing general system theory.

System Dynamics – animating a system model to predict long-term outcomes

System Dynamics was founded and first promoted by:

·       Jay Forrester (1918 to 2016) every system is a set of quantities that are related to each other.

·       Donella H. Meadows (1941 to 2001) resource use, environmental conservation and sustainability.
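
Forrester’s idea can be sketched in a few lines. This minimal stock-and-flow model (the population example and rates are invented, not taken from Forrester or Meadows) advances one stock by two flows:

```python
# One stock (a population) advanced by two flows (births and deaths).

population = 1000.0    # stock: a variable quantity
birth_rate = 0.04      # inflow per head per time step
death_rate = 0.03      # outflow per head per time step

for year in range(1, 6):
    births = birth_rate * population
    deaths = death_rate * population
    population += births - deaths   # the flows advance the stock
    print(year, round(population))  # the stock grows by ~1% per step
```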

 

Today, akin to system dynamics, there are agent-based approaches to the analysis of systems.

 

For more, read System Dynamics.

Soft systems thinking – loosening the system concept

General system theory doesn’t start from or depend on sociology, or analysis of human behavior.

However, it stimulated people to look afresh at social systems in general and business systems in particular.

And the term “soft system” emerged in the 1970s.

 

Three well-known soft system thinkers are:

·       Peter Checkland (born 1930) the Soft Systems Methodology.

·       Stafford Beer (1926 to 2002) management cybernetics and the Viable System Model.

·       Russell L Ackoff (1919 to 2009) human organisations as purposeful systems.

 

The distinction between hard and soft systems is debatable.

Remember that Ashby’s system is “soft” in the sense it is a perspective of the real world.

One of the first soft system thinkers, Churchman, said "a thing is what it does".

He outlined these considerations for designing a system:

·       “The total system objectives and performance measures;

·       the system’s environment: the fixed constraints;

·       the resources of the system;

·       the components of the system, their activities, goals and measures of performance; and,

·       the management of the system.”

 

Checkland observed the distinction between hard and soft system approaches is also slippery.

Today, soft systems thinking approaches typically involve:

·       Considering the bigger picture

·       Studying the vision/problems/objectives

·       Identifying owners, customers, suppliers and other stakeholders

·       Identifying stakeholder concerns and assumptions

·       Unfolding multiple views, promoting mutual understanding

·       Analysis, visual modelling, experimentation or prototyping

·       Considering the cultural attitude to change and risk

·       Prioritizing requirements.

 

However, even mechanical engineers use these ideas.

And note the confusion in systems thinking between social networks and social systems.

 

For more, read

·       Checkland’s ideas

·       Ackoff’s ideas

·       Beer’s ideas

Decision theory - or theory of choice

Ackoff noted that the actors in a system, when described as per classical cybernetics, act according to roles and rules.

By contrast, the actors in a human social system have free will and can act as they choose.

This prompts the question as to how people do, or should, make choices.

A biologist or psychologist may look to instinct, homeostasis, emotions or Maslow’s hierarchy of needs as the basis for making decisions.

A sociologist or mathematician may take a different perspective.

A sociological perspective

Herbert Alexander Simon (1916 to 2001) was a political scientist, economist, sociologist, psychologist, and computer scientist.

According to Wikipedia, Simon argued that fully rational decision making is rare: human decisions are based on a complex admixture of facts and values.

And decisions made by people as members of organizations are distinct from their personal decisions.

He proposed that understanding organizational behavior in humans depends on understanding the concepts of Authority, Loyalties and Identification.

 

Observations:

A theory of why and how humans conform to group norms has to start from the evolutionary advantage it gives them.

Authority, Loyalties and Identification are matters for biology and management science rather than a general system theory.

A mathematical perspective

Game theory is concerned with choices made by agents interacting with each other – as in a game of poker.

By contrast, decision theory is concerned with the choices made by agents regardless of such a structured interaction.

 

“Decision theory (or the theory of choice) is the study of the reasoning underlying an agent's choices.

Decision theory can be broken into two branches:

·       normative decision theory, which gives advice on how to make the best decisions, given a set of uncertain beliefs and a set of values; and

·       descriptive decision theory, which analyzes how existing, possibly irrational agents actually make decisions.

Empirical applications of this theory are usually done with the help of statistical and econometric methods, especially via the so-called choice models, such as probit and logit models.

 

Advocates for the use of probability theory point to the work of Richard Threlkeld Cox, Bruno de Finetti, and complete class theorems, which relate rules to Bayesian procedures.

Others maintain that probability is only one of many possible approaches to making choices, such as fuzzy logic, possibility theory, quantum cognition, Dempster–Shafer theory and info-gap decision theory.

And point to examples where alternative approaches have been implemented with apparent success.” Wikipedia 31/12/2018.
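
To make the quoted distinction concrete, here is a minimal sketch in Python of the normative rule (choose the option with the highest expected utility); the options, beliefs and utilities are invented for illustration:

```python
# Normative decision theory in miniature: expected utility maximisation.

beliefs = {"rain": 0.3, "dry": 0.7}   # uncertain beliefs (probabilities)
utility = {("umbrella", "rain"): 8, ("umbrella", "dry"): 5,
           ("no umbrella", "rain"): 0, ("no umbrella", "dry"): 10}

def expected_utility(option):
    return sum(p * utility[(option, outcome)] for outcome, p in beliefs.items())

best = max(["umbrella", "no umbrella"], key=expected_utility)
print(best, expected_utility(best))   # "no umbrella": 0.3*0 + 0.7*10 = 7.0 > 5.9
```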

 

Decision theory is beyond the scope of this work on system theory.

The Wikipedia entry on Decision Theory will give you other links to follow.

For some practical case studies, look here http://www.attwaterconsulting.com/Papers.htm

They feature (for example) the use of a Bayesian approach with Markov Chain Monte Carlo numerical methods.

The applications include assessing the risks of mechanical system failures, in order to inform decisions about their use and maintenance.

 

In the real world, we make decisions about decision making.

We can choose to make a decision or make no decision (kick the can down the road).

We can choose to make a decision using strictly mathematical analysis, or using a “Pugh matrix”, or using intuition/experience.

For many of the business decisions that top-level managers are paid to make:

·       the time or cost of strictly mathematical analysis is too high, or

·       the raw data/numbers entered into a Pugh Matrix are unreliable guesses.

Second order cybernetics

“Second-order cybernetics” was developed around 1970.

It was developed and pursued by thinkers including Heinz von Foerster, Gregory Bateson and Margaret Mead.

It is said to be the circular or recursive application of cybernetics to itself.

It shifts attention from observed systems to the observers of systems.

It is often applied to human societies or businesses.

In those contexts, a system’s actors can also be system thinkers, who study and reorganise the system they play roles in.

 

Unfortunately, second order cybernetics tends to lead people away from classical cybernetics.

A common issue in discussion of systems is the one Ashby warned us of – the confusion of real-world entities with systems.

In much systems thinking discussion there is little or no recognition that:

·       one entity can realise several (possibly conflicting) systems at the same time

·       there is a need to verify an entity behaves in accord with a system description

 

Seeing the world as a duality of systems and observers is naive.

Classical cybernetics gives us a more sophisticated triangular view.

A concrete entity is a system only when and in so far as it realises a testable system description.

The observer(s) of a real-world entity may abstract countless different (possibly conflicting) systems from its behaviour.

 

Referring to every entity, every social network, as a system, is naïve.

Observers may well discuss an entity (its properties, problems and possibilities) without reference to any system.

 

Moreover, discussion of system change often confuses two kinds of change or adaptation.

In classical cybernetics, a system responds in a regular way to changes in its environment or another (coupled) system.

The term adaptation usually means system state change - changing state variable values in response to events.

The trajectory of a system’s state change (be it oscillating, linear, curved or jagged) is an inexorable result of the system’s rules.

Second-order cybernetics is often applied to thinking about social organisations.

Here, the term adaptation often means system mutation or evolution – changing the system’s state variables, roles and rules.

This changes the very nature of the system; it changes its laws.

The trouble is that continual adaptation or reorganisation of a system undermines the general concept of a system – which is regularity.
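
A toy sketch (an invented example, not from any cited author) may sharpen the distinction between the two kinds of change:

```python
# State change: variable values move under a fixed rule.
# Mutation: the rule itself is replaced, changing the system's laws.

class Counter:
    def __init__(self):
        self.count = 0                      # a state variable
        self.rule = lambda c: c + 1         # a fixed rule of behavior

    def step(self):
        self.count = self.rule(self.count)  # state change: regular, predictable

c = Counter()
c.step(); c.step()
print(c.count)                 # 2 - the trajectory follows from the rule

c.rule = lambda c: c * 10      # mutation: a new law of behavior
c.step()
print(c.count)                 # 20 - the old model of the system no longer applies
```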

 

Consequently, second-order cybernetics tends to undermine more general system theory.

If we don't distinguish an ever-evolving social network from the various modellable systems it may realise, the concept of a system evaporates.

For more, read second order cybernetics.

Conclusions and remarks

Bertalanffy didn’t like some directions in “the System Movement”, especially those specific to one science.

However, he saw the movement as “a fertile chaos” that generated many insights and inspirations.

 

Many terms used by systems thinkers (e.g. emergence, complexity and self-organisation) are open to several interpretations.

Here is an example of the kind of paper now written under the heading of “systems thinking”.

The Non-Systemic Usages of Systems as Reductionism: Quasi-Systems and Quasi-Systemics

The abundant use of questionable terms in this paper makes it well-nigh unreadable.

 

A reader writes:

Having quickly read that paper I can't decide whether to call it pseudoscience or deism!

Or Dadaism - an artistic movement from the early 20th century whose purpose was to ridicule the modern world.

It is exactly the kind of writing on systems that John Gall's "Systemantics" sends up.

 

I once wrote a 50-page pamphlet on "human factors in hierarchical organisations".

It never occurred to me that anybody would relate it to system theory.

Much "systems thinking" is about the human condition rather than systems of the kind Ashby wrote about. 

 

There is probably little dispute about these basic ideas about systems.

·       There are forms and functions – actors and activities – in a system.

·       There are accidental and purposive – natural and designed – systems.

·       There are descriptions and realisations – abstract and concrete – systems.

 

However, much social systems thinking discussion seems confused in the way that Ashby warned us of.

It confuses real-world entities with systems they realise.

To rescue the system concept, we need to distinguish social networks from social systems, as this table indicates.

 

A social network: a set of actors who communicate as they choose; a concrete entity in the real world; ever-changing at the whim of the actors.

A social system: a set of activities performed by actors; a performance of abstract roles and rules by actors in a social network; described and changed under change control.

 

A social system can be seen as a game in which actors play roles and follow rules.

You rely on countless human activity systems; for example, you wouldn't want to:

·       stand trial in a court that didn’t follow court procedures

·       board a train or airplane operated by people who didn’t follow the rules

·       invest in a company that didn’t repay its loans as promised

·       play poker with people who ignore the laws of the game.

 

A social network is simply a group of people who communicate.

One social network can realise several distinct social systems. E.g. one group of people may play roles in a church and in a game of poker.

And one social system can be realised by several social networks. E.g. the roles and rules of poker are realised by many groups of people.

 

Some refer to any social network as a system, even where there are no defined roles or rules.

Some neglect to consider that a human actor can belong to many social networks and play roles in many systems.

For more, read Systems thinking approaches.

Footnote on sociologically-inclined system thinking

Social systems thinking continued alongside the post-war system theory movement, sometimes in touch with it, sometimes far apart from it.

Below are a few notes on sociologically-inclined systems thinkers.

Social groups as organisms

In the theory of evolution by natural selection, can a social group be treated as an organism?

Can selection between groups (favoring cooperation) successfully oppose selection within a group (by competition)?

Thinkers who addressed this include:

·       Lynn Margulis – the evolution of cells, organisms and societies.

·       Christopher Boehm – the evolution of hunter-gatherer groups.

·       Elinor Ostrom – the formation of cooperatives.

 

The evolution of cells, organisms and societies

Lynn Margulis (1970) proposed how nucleated cells evolved from symbiotic associations of bacteria.

The general idea is that members of groups can become so cooperative that the group becomes a higher-level organism in its own right.

The idea was later generalized (Maynard Smith and Szathmary 1995, 1999) to explain other major transitions, such as the rise of

·       multicellular organisms

·       eusocial insect colonies

·       human evolution.

 

It is common for people to draw a questionable analogy from biology to sociology.

The cooperation between smaller biological entities in a larger one is inflexible, rigidly rule-bound.

The interactions between people in a social network are extremely flexible, and a matter of choice for each individual.

Moreover, the people in one social network may realise several, possibly conflicting, rule-bound systems.

 

The evolution of hunter-gatherer groups

Hunter-gatherer societies are famously egalitarian, but not because everyone is nice to everybody else.

Group members can collectively suppress bullying and other self-aggrandizing behaviors within their ranks.

Boehm (1993, 1999, 2011) saw this as the defining criterion of a major evolutionary transition in human society.

With little disruptive competition within a group, succeeding as a group became the main selective force in human evolution.

 

The formation of cooperatives

Consider a group of people who share access to resources.

Such as fishermen who share fishing grounds, or farmers who share an irrigation system.

How to avoid “the tragedy of the commons” by which competition exhausts the common resource?

For a while, the fishermen must stop fishing, and farmers stop farming, to define the rules of their social system – a cooperative.

 

Elinor Ostrom (1990, 2010) defined eight generic conditions for such a cooperative.

 

1 Clearly defined boundaries – members know they are members of a group and its aims.

2 Proportional equivalence between benefits and costs – members must earn benefits and can’t just appropriate them.

3 Collective choice arrangements – members must agree decisions so nobody can be bossed around.

4 Monitoring, 5 Graduated sanctions, 6 Fast and fair conflict resolution – disruptive self-serving behaviors must be detected and punished.

7 Local autonomy – the group must have the elbow room to manage its own affairs.

8 Appropriate relations with other rule-making authorities (polycentric governance) – all the rules above apply equally to inter-group relations.

 

David Sloan Wilson reports projects that successfully applied Ostrom’s eight principles.

However, there are two constraints on how widely applicable they are.

First, the eight conditions are very demanding.

Second, the notion of pre-modern grouping of humans by geographical location has largely broken down.

 

The section above was edited from this paper http://evonomics.com/tragedy-of-the-commons-elinor-ostrom by David Sloan Wilson.

Read that paper for more detail and references.

 

The modern transformation of social groups

In the ancient world, humans (like apes) were naturally grouped by geography.

They communicated only with people in the same location or territory.

First, vehicular transport transformed our ability to mix in different societies.

Then, telecommunications transformed our ability to communicate remotely.

 

What now is a social group?

How does an individual actor join or leave a group? Who decides?

How many groups can one individual be a member of? Are there degrees of membership?

How does an actor prioritise, apportion time and attention, between groups they belong to?

How do they reconcile the conflicting norms or goals of different groups?

Luhmann: autopoietic social systems

Niklas Luhmann (1927–1998) was a German sociologist and student of Parsons.

Like writers a century earlier, he presumed a system is homeostatic and sustains itself, though in a very curious way.

David Seidl (2001) said the question facing a social system theorist is what to treat as the basic elements of a social system.

“The sociological tradition suggests two alternatives: either persons or actions.”

Luhmann chose neither; he proposed that the basic elements of a social system are communicative events about a code, which lead to decisions that sustain the system.

Each social system is centred on one code, which is a concept such as “justice” or “sheep shearing”.

He endorsed the “hermeneutic principle” that the hearer alone determines the meaning of a communicative event.

Read Luhmann’s ideas for more.

 

Observations:

Luhmann’s system is radically different from systems as understood by most other system theorists.

It is well-nigh diametrically opposed to that of classical cybernetics, since the system has no persistent structure, no persistent state, and no memory of communication events.

And the hermeneutic principle is contrary to common sense and to biology, since communication requires a receiver to decode the same meaning from a message that a sender intentionally encoded in that message.

Luhmann’s whole scheme (like that of Parsons before him) seems more metaphysical than scientific.

Having said that, the idea of a system based on a code might be seen as having a counterpart in more general system theory.

That is, a “domain-specific language” for communication of information about entities and events related to one body of knowledge.

Habermas: universal pragmatics

Jürgen Habermas (born 1929) was a critic of Luhmann’s theory of social systems.

He developed the social theory of communicative reason or communicative rationality.

According to Wikipedia, this distinguishes itself from the rationalist tradition, by locating rationality in structures of interpersonal linguistic communication rather than in the structure of the cosmos.

It rests on the argument called universal pragmatics – that all speech acts have an inherent "purpose" – the goal of mutual understanding.

He presumed human beings possess the communicative competence to bring about such understanding.

And hoped that coming to terms with how people understand or misunderstand one another could lead to a reduction of social conflict.

 

Observations:

A theory of why and how animate entities communicate has to start from the evolutionary advantage it gives them.

Natural human language is inherently fluid and fuzzy; it is a tool for social bonding and communication, but can easily lead to misunderstandings.

 

All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.