Ackoff’s ideas – for applying system theory to management science

Copyright 2017 Graham Berrisford. One of more than 100 papers on the “System Theory” page at http://avancier.website . Last updated 01/11/2019 19:45

 

This paper reviews how Ackoff defined and classified systems in three papers that you can probably find on the internet.

·       1971 “Towards a System of Systems Concepts”.

·       1999 “Re-Creating the Corporation - A Design of Organizations for the 21st Century”

·       2003 “On The Mismatch Between Systems And Their Models”.

 

This is one of many companion papers that analyse some systems thinkers’ ideas.

·       Read Ashby’s ideas for an introduction to his cybernetics, which is largely integral to general system theory.

·       Read Ashby’s ideas about variety on his measure of complexity and law of requisite variety.

·       Read Beer’s ideas on the application of cybernetics (after Ashby and others) to management science.

·       Read Von Foerster’s ideas on the ideas attributed to Heinz von Foerster, including his second order cybernetics.

 

Further reading on the “System Theory” page at http://avancier.website includes:

Boulding’s ideas, Checkland’s ideas, Luhmann’s ideas, Marx and Engels’ ideas, Maturana’s ideas and Snowden’s ideas.

Contents

Preface

Ackoff’s basic ideas

Ackoff’s hierarchy of activities

Ackoff’s hierarchy of aims

Ackoff’s hierarchy of systems

Ackoff’s three system classifications: 1971, 1999 and 2003

Ackoff’s further ideas

Ackoff’s five conditions (1999)

Conclusions and remarks

Footnotes on human factors

 

Preface

Russell L Ackoff (1919-2009) was an American organizational theorist, operations researcher, systems thinker and management scientist.

He was a well-known “systems thinker”, respected for a large body of work focused on managed human social systems.

He was primarily interested in the management of organised human social networks – especially bureaucracies.

Ackoff’s “socio-systemic view of organizations” bears some similarity to today’s Open Group Architecture Framework (TOGAF).

 

Ackoff’s more general work draws on two “system” traditions.

The sociological tradition, which can be traced back to the 19th century, might be called “social systems thinking”.

The second tradition emerged after the second world war in two related movements known as general system theory and cybernetics.

 

General system theory concepts are largely taken for granted in today’s enterprise and software architecture methods.

Read Introducing system ideas for discussion of passive and activity systems, open and closed systems, abstract and concrete systems.

Also the concepts of adaptation, atomicity, black box, chaos, complexity, coupling between systems, determinism, dynamics, emergent properties,

goal directedness, hierarchy and system of systems, holistic view, information, self-organisation, and unpredictability.

 

Cybernetics emerged in the 1940s out of studying the role of information in system control.

Read Ashby’s ideas for an introduction to cybernetics.

Wiener introduced cybernetics as the science of control of biological and mechanical machines.

He discussed how a controller (e.g. a thermostat) directs a target system (e.g. a heating system) to maintain some state variable(s) in a desired range.
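The controller/target relationship can be sketched as a minimal feedback loop. This is an illustrative sketch, not any author’s implementation; all names, thresholds and step sizes are invented.

```python
# Minimal sketch of a cybernetic feedback loop: a controller (thermostat)
# directs a target system (heater) to hold a state variable (temperature)
# within a desired range. All names and numbers are illustrative.

def thermostat(temperature, low=18.0, high=22.0):
    """Controller: compare the sensed state variable to the desired range."""
    if temperature < low:
        return "heat_on"
    if temperature > high:
        return "heat_off"
    return "no_change"

def heater(temperature, command):
    """Target system: its state drifts downwards unless heated."""
    if command == "heat_on":
        return temperature + 1.0
    return temperature - 0.5

temperature = 15.0
for _ in range(20):                 # the feedback loop
    command = thermostat(temperature)
    temperature = heater(temperature, command)

assert 17.0 <= temperature <= 23.0  # held near the desired range
```

The point of the sketch is the loop itself: the controller senses the target’s state, the target’s next state depends on the controller’s command.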

Cybernetic ideas were soon adopted by psychologists and sociologists.

W. Ross Ashby (1903-1972) was a psychologist whose “Introduction to Cybernetics” was published in 1956.

Ashby emphasised that every system is a perspective, or partial representation, of a reality.

“Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” Ashby.

 

Abstract system description | Concrete system realisation
Theoretical system | An empirical system
System description | A system in operation
Roles and rules | Actors and activities

 

Ackoff’s ideas

“Though it grew out of organismic biology, general system theory soon branched into most of the humanities.” Laszlo and Krippner.

Ackoff’s primary interest was in the aims and activities of human organisations.

Simply put, the systems of interest to Ackoff may be described in terms of aims, activities and actors.

 

 | Example | Meaning
Aims (motivations) | win the world cup | target outcomes that give an entity a reason or logic to perform and choose between actions
Activities (behaviors) | compete in world cup matches | behaviors or processes that run over time with intermediate outcomes and a final aim or ideal
Actors (active structures) | players in a national football team | entities that interact by performing activities in system behaviors

 

Ackoff built elaborate hierarchies of aim, activity and system types.

They are outlined in a table at the end of the next section on Ackoff’s basic ideas.

Ackoff’s basic ideas

In “Towards a System of Systems Concepts” Ackoff declared more than thirty ideas about systems.

The majority of them are compatible with Ashby’s ideas about cybernetics.

This section outlines the first eleven.

 

Ackoff idea 1: System: a set of interrelated elements

All the elements must be related directly or indirectly, else there would be two or more systems.

This definition embraces both passive structures (e.g. tables) and activity systems.

 

In most schools of system thinking, the system of interest is an activity system.

·       Ashby (cybernetics) said a system’s elements include both state variables and behaviors

·       Forrester (system dynamics) said a system’s elements include both stocks and flows

·       Ackoff (management science) said a system’s elements include both properties and functions

 

Ackoff idea 2: Abstract system: a system in which the elements are concepts.

An abstract system may describe an observed reality, or an envisaged reality.

 

Abstract descriptions or models take concrete forms: mental, documented and material.

What matters here is not the form the description takes but the relationship of two things.

·       An abstract description (model, conceptualisation, idealisation) to

·       A concrete reality that is observed or envisaged as instantiating that description.

 

Ackoff idea 3: Concrete system: a system that has two or more objects.

A concrete system is a realization (in physical matter and/or energy) of an abstract system description.

 

Abstract system | Concrete system
The Dewey Decimal System | A set of books sorted on library shelves
“Solar system” | A set of planets orbiting a sun
The laws of tennis | A tennis match
The score of a symphony | A performance of a symphony
A computer program | A performance of a computer program

 

One abstract system (or type) can be realised (or instantiated) by many concrete system realities.

One concrete system can realise (or instantiate) many abstract systems.
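The type/instance relationship between abstract and concrete systems maps naturally onto classes and objects in software. A hedged sketch, using the tennis example from the table; the player names and match names are invented:

```python
# One abstract system description (a class) can be realised by many concrete
# systems (objects). Names here are illustrative only.

class TennisMatch:
    """Abstract system: the laws of tennis, expressed as roles and rules."""
    def __init__(self, player_a, player_b):
        self.players = (player_a, player_b)   # roles bound to actors
        self.games = {player_a: 0, player_b: 0}

    def win_game(self, player):
        self.games[player] += 1               # a rule-governed state change

# Two independent concrete realisations of the one abstract description.
wimbledon_final = TennisMatch("Alice", "Bea")
club_friendly = TennisMatch("Carol", "Dot")

wimbledon_final.win_game("Alice")
assert wimbledon_final.games["Alice"] == 1
assert club_friendly.games["Carol"] == 0      # realisations are independent
```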

 

Which comes first?

A natural concrete system (like the solar system) runs in reality before it is observed and described as a system.

A designed concrete system (like a motor car) cannot run in reality until after it has been envisaged by a designer.

 

People find this hard to understand and accept, but here goes…

It is meaningless to say a named entity is a system except with reference to a particular system description.

With no system description, there is no system, just stuff happening.

The universe is an ever-changing entity in which stuff happens.

A concrete system is an island of regular or repeatable behavior carved out of that universe.

·       a set of describable behaviors performed by describable actors

·       an entity we can test as doing what an abstract system description says.

 

Ackoff idea 4: System state: the values of the system’s properties at a particular time.

The state of a system is its current structure or variable values, which changes over time.

Ackoff used the term “property” to mean a system state variable.

 

Abstract state: a “complex type”, a composite of variables. | Cyclist Name, Direction, Speed, Gear Number
Concrete state: an instantiation of the type, a composite of values. | Guy Onnerbeich, South, 20 mph, 7
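The abstract/concrete state distinction is exactly the type/value distinction in a programming language. A sketch using the cyclist example above:

```python
from dataclasses import dataclass

# Ackoff's "system state": the values of the system's properties at a time.
# The abstract state is a complex type (the fields); a concrete state is one
# instantiation of that type (the values).

@dataclass
class CyclistState:          # abstract state: a composite of variable types
    name: str
    direction: str
    speed_mph: float
    gear: int

# Concrete state: a composite of values instantiating the type.
state = CyclistState("Guy Onnerbeich", "South", 20.0, 7)

state.gear = 8               # a state change: one variable's value modified
assert state.gear == 8
```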

 

The state variables of an entity may be part of one system but not part of another system.

E.g. the temperature of the earth’s atmosphere is vital to its role in the biosphere, but irrelevant to the earth’s role in the solar system.

 

Ackoff idea 5: System environment: those elements and their properties (outside the system) that can change the state of the system, or be changed by the system.

“The elements that form the environment… may become conceptualised as systems when they become the focus of attention.” Ackoff

The environment is the world outside the system of interest.

The system’s boundary is a line (physical or logical) that separates a system from its environment.

Encapsulation is the enclosure of a system as an input-process-output “black box”.

 

Every brain and business can be described as encapsulated in a wider environment.

·       It receives information (in messages) about the state of actors and activities in its environment.

·       It records information (in a memory)

·       It sends information (in messages) to inform and direct actors and activities in its environment.

Thus, every brain and business can be seen as a control system connected in a feedback loop with external actors and activities.

However, a systems thinker may switch attention to a system that contains those actors and activities instead.

 

Ackoff idea 6: System environment state: the values of the environment’s properties at a particular time.

The current state of an environment is defined by giving values to the variable types in a description of that environment.

The remainder of the real-world does not count as part of that environment (though it might count as part of another system’s environment).

 

“Different observers of the same phenomena [actors and actions] may conceptualise them into different systems and environments.” Ackoff

Observers of the same business may see different systems that are:

·       nested, one a subsystem of the other

·       overlapping or discrete

·       coupled by output/input flows

·       cooperating, competing or conflicting.

 

Ackoff idea 7: A closed system: one that has no environment.

An open system interacts with entities and events in a wider environment.

A closed system does not interact with its environment.

“Such conceptualizations are of relatively restricted use”. Ackoff

 

Forrester’s System Dynamics models a closed system of stocks that interact by flows,

but do not interact with anything in a wider environment.

 

Ashby’s general cybernetics can model open systems that interact by information flows with each other

and with entities in their environment.

A conventional business system design process proceeds along these lines.

1.     Define customers, outputs, inputs, suppliers and processes (known as SIPOC in Six Sigma)

2.     Define roles in the processes

3.     Hire, buy or build actors to play the roles.

4.     Organise, deploy, motivate and manage the actors – to perform the processes.

 

Ackoff idea 8: System/environment event: a change to the system variable values.

Ackoff was concerned with how state changes inside a system are related to state changes in its environment.

Ackoff did not distinguish events (input messages received) from state changes caused by those events.

 

On discrete event-driven behavior.

External events cross the boundary from the environment into the system.

In response to an event, a system refers to its current state.

It then “chooses” which actions to perform, to change its current state and/or produce outputs.

The choice is determined by values of input event variables and of internal state variables.

So one event can cause different, optional, state changes, depending on the current state of the system.

And a state change is the modification of one or more values of state variables.

Within a system, internal events pass between subsystems.
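The event-driven cycle described above (event arrives, state is consulted, an action is chosen, state changes and outputs may be produced) is the classic state machine. A sketch using a turnstile, which is a standard illustration rather than Ackoff’s own example:

```python
# A discrete event-driven system: on each external event it consults its
# current state, "chooses" an action determined by event and state values,
# changes state, and may produce an output.

TRANSITIONS = {
    # (current state, event) -> (next state, output)
    ("locked", "coin"):   ("unlocked", "unlock"),
    ("locked", "push"):   ("locked", "alarm"),
    ("unlocked", "push"): ("locked", "lock"),
    ("unlocked", "coin"): ("unlocked", "refund"),
}

def handle(state, event):
    """One event can cause different state changes, depending on state."""
    return TRANSITIONS[(state, event)]

state = "locked"
outputs = []
for event in ["push", "coin", "push"]:   # events crossing the boundary
    state, output = handle(state, event)
    outputs.append(output)

assert outputs == ["alarm", "unlock", "lock"]
assert state == "locked"
```

Note that the same event (“push”) produces different outcomes in different states, which is the point made in the text.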

 

Ackoff idea 9: Static (one state) system: a system to which no events occur.

Ackoff’s example is a table, which does experience events.

It can be laid for dinner, painted, stripped and polished, and have its wonky leg replaced.

It would better be called a passive structure, as in ArchiMate.

 

A passive structure is inanimate, it cannot act; but it can experience events, can be acted on or in.

An abstract system is a passive structure, a static description of an active concrete system.

Still, it can experience events and state changes (it is written, revised and realised).

In the odd cases of computer programs and DNA, an abstract system is directly used in a concrete system.

 

Ackoff idea 10: Dynamic (multi state) system: a system to which events occur.

The system of interest to most systems thinkers is not only dynamic (it changes state in response to events).

It is also an activity system - in which actors perform activities.

 

Ashby defined systems in terms of the state changes that result from events detected.

 

Ackoff idea 11: Homeostatic system: a static system whose elements and environment are dynamic.

“A house that maintains a constant temperature… is homeostatic”. Ackoff

Contrary to Ackoff’s definition (whose static system experiences no events) a homeostat does experience events, and its state can oscillate around a norm.

Ackoff’s example here is also questionable.

Surely the house is merely a container for the air whose temperature is controlled?

What maintains that air at a constant temperature is a heating/cooling system, controlled by a thermostat.

 

Several early social systems thinkers viewed sociology as analogous to biology.

They looked at societies as akin to homeostatic organisms.

Ackoff and other systems thinkers have deprecated using this analogy.

 

About the ideas above and to follow

Ackoff started with ideas that any general system theorist or cybernetician would recognise.

The systems of interest are multi-actor, multi-state activity systems that perform behaviors.

He distinguished abstract systems from concrete systems (as Ashby did).

 

Ackoff ideas 9, 10 and 11, and some of his further ideas are more questionable.

He built elaborate hierarchies of aim, activity and system types.

The table below is an attempt to stitch many of his ideas into a coherent whole.

I believe it gives a good impression of how a cybernetician like Ashby would see Ackoff’s ideas.

 

System types | Actors | Activities | Aims
A state-maintaining system | a deterministic machine | no optional responses | fixed
A goal-seeking system | a deterministic machine | some optional responses | fixed
A purposive system | a self-organising social network | defines its own responses | fixed
A purposeful system | a self-organising social network | defines its own responses | defines its own aims

 

Note that Ackoff used the term “function” ambiguously to mean a behaviour (activity) or an aim.

Ackoff’s hierarchy of activities

Most system theorists are concerned with activity systems.

In Forrester’s System Dynamics, a system is defined thus:

 “A set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviors." Meadows

And in the context of cybernetics:

 “Cybernetics deals with all forms of behavior in so far as they are regular, or determinate, or reproducible.”

“[It] treats, not things but ways of behaving. It does not ask "what is this thing?" but ''what does it do?" It is thus essentially functional and behavioristic.” (Ashby 1956)

 

Ackoff built an elaborate hierarchy of behavior concepts.

He distinguished acts, reactions and responses with reference to state changes inside the system and its environment.

 

Ackoff idea 12: System reaction: a system event that may be triggered directly by another event.

E.g. “A light going on when the switch is turned.” Ackoff

A system reaction is simplistic: there is no reference to memory and no other option.

If the reaction depends on the state of the world, then Ackoff would call this a response.

 

Ackoff idea 13: System response: a behavior that goes beyond the naive reaction at 12, since it also depends on the state of the system.

E.g. “A person’s turning on a light when it gets dark is a response to darkness”. Ackoff

In Ackoff’s example, a person compares two variables:

·       an internal state variable (visual acuity level needed for their current activity, say reading)

·       an external environment variable (current light level).

If the required acuity level is higher than the current light level allows, then the person switches on the light.

Presumably, people apply what might be called fuzzy logic to such internal variables rather than rigid rules.
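Ackoff’s two light examples (ideas 12 and 13) can be contrasted in code. The light examples are Ackoff’s; the acuity and light-level numbers are invented for illustration:

```python
# Idea 12 vs idea 13: a reaction is triggered directly by an event, with no
# reference to internal state; a response also consults the system's state.

def reaction(switch_turned):
    """12 Reaction: the light goes on when the switch is turned.
    No memory, no choice: the event alone determines the outcome."""
    return "light_on" if switch_turned else "light_off"

def response(current_light_level, required_acuity_level):
    """13 Response: a person turns the light on when it gets dark. The choice
    compares an external variable with an internal state variable."""
    if required_acuity_level > current_light_level:
        return "switch_light_on"
    return "do_nothing"

assert reaction(True) == "light_on"
assert response(current_light_level=3, required_acuity_level=7) == "switch_light_on"
assert response(current_light_level=8, required_acuity_level=7) == "do_nothing"
```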

 

Ackoff idea 14: System act: a self-determined, autonomous, behavior.

Ackoff said a system act may be triggered by an internal event (rather than an external event).

An internal event happens when the value of an internal state variable crosses a significant threshold.

E.g. when a person’s required acuity level is higher than the current light level allows?

 

Ackoff’s distinction between responses and acts seems fuzzy, since his example of a system response also fits his definition of a system act.

A deterministic system can determine its response to an event, and so produce different outcomes, by applying rules that refer to its memory.

You, an observer, will see the response to an event as being at least somewhat unpredictable if

·       the rules include applying fuzzy logic

·       you don’t know the internal state of the system (Boulding’s reason we can’t predict human responses.)

 

However, surely Ackoff was intending to distinguish determined behaviors (responses) from self-determined behaviors (acts), the latter being made by self-aware entities with free will.

 

Ackoff idea 15: System behavior: system events which… initiate other events

Ackoff says behaviors are state changes whose consequences are of interest.

It is arguable that all behaviors, all acts, reactions and responses have both:

·       consequences of interest at some level of investigation

·       antecedents of interest at some level of investigation.

 

I can’t find a definition of “outcome” in Ackoff’s 1971 paper.

Are they consequential changes in the state of the system? or its environment?

 

On the hierarchy of behavior types

Ackoff arranged behaviors (also aims and systems below) in a hierarchical structure using different words at different levels.

I am not confident that this classification is wholly coherent; this table represents my summary of it.

 

Behavior | Trigger event | Makes choices wrt system state | Outcome: system event/state change | Outcome: environment event/state change | Involves learning
12 Reaction | External | No | No | One | No
13 Response | External | Yes | Yes | Several | Yes?
14 Act | Internal | Not defined? | Yes | Not defined? | Not defined?

 

Again, I am not confident that Ackoff’s classification is wholly coherent or that my reading matches his intention.

Ackoff’s hierarchy of aims

Ackoff’s main interest was in managed human organisations.

Other things (the solar system, a virus, a tree, an ant, an ant colony, and a person) might be viewed as systems.

But it is probably best to consider his hierarchy of aim concepts in relation to a managed human organisation.

 

Ackoff idea 22: The relative value of an outcome: a value (between 0 and 1) compared with other outcomes in a set.

The highest value outcome is the preferred outcome.

Ackoff idea 23: The goal of a purposeful system: a preferred outcome within a time period.

Ackoff idea 24: The objective of a purposeful system: a preferred outcome that cannot be obtained in a time period.

Ackoff idea 25: An ideal: an objective that cannot be obtained in a time period, but can be approached.

Ackoff idea 26: An ideal-seeking system: a purposeful system that seeks goals leading towards its ideal.
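Ideas 22 and 23 amount to ranking a set of outcomes by relative value and preferring the maximum. A minimal sketch; the outcomes and values below are invented for illustration, continuing the world-cup example from earlier:

```python
# Idea 22: each outcome in a set is given a relative value between 0 and 1.
# Idea 23: the goal is the preferred (highest-valued) obtainable outcome.

outcome_values = {
    "draw the match": 0.2,
    "win the match": 0.7,
    "win the world cup": 1.0,
}

# Relative values must lie between 0 and 1.
assert all(0.0 <= v <= 1.0 for v in outcome_values.values())

# The preferred outcome is the highest-valued one.
preferred = max(outcome_values, key=outcome_values.get)
assert preferred == "win the world cup"
```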

 

It appears the hierarchy of aims is describable from the bottom up as shown in this table.

 

22 An outcome | can be valued and preferred as a | Goal
23 Goals | can be ordered with respect to | Objectives
24 Objectives | can be ordered with respect to | Ideals
25 Ideals | are persistent aims that appear “unobtainable in principle or in practice”.

 

 

Again, I can’t find a definition of “outcome” in Ackoff’s 1971 paper.

It appears an outcome is a consequential change in the state of a system or its environment.

The relationship of purposes to objectives and ideals is not clear, but they appear to sit at a higher level than goals.

 

On the elasticity of the concepts

Generally, systems may be recursively composed or decomposed in space, time or logic.

An event that is external to one system is internal to a wider system, and vice-versa.

So, what happens to aims if you expand the time period of interest?

Presumably, Ackoff’s ideal becomes an objective and an objective becomes a goal?

And what happens if you expand the system boundary – say from department to division to enterprise?

Presumably, a goal of a small subsystem is seen as one of many outcomes in a wider system?

Ackoff’s hierarchy of systems

Since a system is composed from interacting subsystems, systems are recursively composable and decomposable.

Like Boulding before him, Ackoff blurred the idea of system decomposition with the idea of different system types.

His aim and behavior hierarchies (above) may have been designed to fit his system type hierarchy (this section).

In the latter, he classified systems by dividing behaviors and outcomes between “determined” and “chosen”.

 

Ackoff idea 16: State maintaining system: the most naïve of reactive systems (cf. 12 reactions).

When an event of a given type occurs, a state maintaining system reacts to produce a fixed outcome.

“Fixed outcome” implies there is no choice as to the outcome of the event.

 

Ackoff’s discussion of this system type is dominated by self-regulating or homeostatic entities.

In this category, is maintaining a system state variable (within defined bounds) definable as a goal or objective?

 

What if maintaining one system state variable involves making choices related to the values of other state variables?

I guess that is a goal-seeking system - the next idea below.

 

Ackoff idea 17: Goal-seeking system: a system that chooses its response to events (cf. 13 responses).

This category of system seems to be defined by its ability to retain and use its memory to make choices.

Every business information system remembers facts in the form of system state variables and uses that memory to inform its actions.

It refers to its memory when choosing between behaviors that produce different outcomes, with an end goal “in mind”.

 

Ackoff refers at this point to a system that does more – it learns – it adapts its behavior by conditioning according to experience.

E.g. a rat in a maze increases its efficiency in meeting a goal by choosing not to repeat a behavior that failed earlier.

That primitive kind of learning can be mimicked by a computer program.
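That kind of conditioning (remember what failed, never repeat it) takes only a few lines to mimic. The maze, its choices and its dead ends below are invented for illustration:

```python
import random

# A goal-seeking system that learns by conditioning, like the rat in a maze:
# it remembers which choices failed and stops repeating them.

DEAD_ENDS = {"left", "straight"}          # failures (unknown to the "rat")
CHOICES = ["left", "straight", "right"]   # "right" leads to the goal

failed = set()                            # the system's memory of failures
attempts = 0
while True:
    attempts += 1
    choice = random.choice([c for c in CHOICES if c not in failed])
    if choice not in DEAD_ENDS:
        break                             # goal reached
    failed.add(choice)                    # condition: never repeat a failure

assert choice == "right"
assert attempts <= 3                      # each failure is tried at most once
```

Because failures are never repeated, efficiency improves with experience, which is all that Ackoff’s “primitive kind of learning” requires.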

 

It is unclear whether Ackoff intended to embrace all kinds of learning, for example:

·       Learning from simple physical sensation (e.g. that your lips may stick to an ice cube)

·       Learning facts (e.g. the colours of the rainbow)

·       Learning a physical process (e.g. to swim or play music)

·       Learning a logical process (e.g. multiplication, algebra, a business process)

·       Learning a cultural norm (e.g. to say please and thank you)

·       Machine learning (detecting patterns in data).

 

Ackoff idea 18: Process: a sequence of behavior that constitutes a system and has a goal-producing function.

You might call this a one-time system; it runs from trigger event to an outcome that meets its goal.

Ackoff says each step in the process (be it an act, reaction or response) brings the actor closer to the goal.

It isn’t clear whether his process can have control logic (be it strict or fuzzy) or lead from one event to a variety of different outcomes/goals.

It isn’t clear whether the goals of a system = the sum of all goals of processes performed by the system.

 

Ackoff idea 19: Multi-goal seeking system: a system that seeks different goals in different states.

So, the goal(s) of a system depend on the state it is in.

Presumably, Ackoff means the state-goal relationship is predetermined.

 

Here, Ackoff appears to presume system actors behave in a deterministic manner.
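A predetermined state-to-goal relationship amounts to a fixed lookup. A sketch, with states and goals invented to continue the world-cup example:

```python
# Idea 19: a multi-goal seeking system seeks different goals in different
# states, with the state-to-goal relationship predetermined (no self-chosen
# aims). States and goals below are invented for illustration.

GOAL_BY_STATE = {
    "group_stage": "qualify for the knockout rounds",
    "knockout":    "win the next match",
    "final":       "win the world cup",
}

def current_goal(state):
    """The goal depends deterministically on the state the system is in."""
    return GOAL_BY_STATE[state]

assert current_goal("final") == "win the world cup"
```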

 

Ackoff idea 20: Purposive system: a multi-goal seeking system where goals have in common the achievement of a common purpose.

The common purpose could be to survive, win the game or make a profit.

Purposive systems and sub-animate actors may be given goals but can’t change them.

Again, it seems purposes sit at a higher level than goals, but the relationship of purposes to objectives and ideals is not clear.

 

Here, Ackoff appears to presume system actors can choose between acts, but cannot determine goals.

 

Ackoff idea 21: Purposeful system: can produce the same outcome in different ways and different outcomes in the same and different states.

Ackoff believed that free will enabled purposefulness, which in turn demonstrated free will.

 

A book Ackoff wrote with Emery about purposeful systems (1972) put it more clearly.

“a purposeful system is one that can change its goals in constant environmental conditions;

it selects goals as well as the means by which to pursue them. It displays will.” Ackoff

Animate actors (being self-aware) might change the aims of any system they participate in.

"members are also purposeful individuals who intentionally and collectively formulate objectives and are parts of larger purposeful systems.” Ackoff

 

(How far the goals of a social group relate to or derive from individual actors’ goals is a question beyond the discussion here.)

 

On the hierarchy of system types

Again, the table below is a sketchy attempt to stitch ten of his ideas together into a coherent whole.

I believe it gives a good impression of what Ackoff was striving towards.

Later, he defined “organisations” as purposeful systems (21).

 

System types | Behavior types | Aim types
16 A state maintaining system | 12 Reacts as predetermined to produce | 22 one predetermined Outcome
17 A goal-seeking system chooses between | 13 Responses as predetermined to meet | 23 one predetermined Goal
20 A purposive system chooses between | 14 Acts by self-determination to meet | 23 predetermined Goals with a shared 24 Objective
21 A purposeful system chooses between | 14 Acts by self-determination to meet | 25 Goals and Objectives it chooses, leading to an Ideal

 

If (20) actors can define the acts they perform, there isn’t a system in the normal sense.

If (21) the only stable feature is an ideal (which Ackoff described as unobtainable), there is nothing systematic about the organisation.

What might a cybernetician like Ashby make of this? Find an answer in the next section.

Ackoff’s three system classifications: 1971, 1999 and 2003

Remember Ackoff’s primary system of interest was managed human organisations.

This section reviews how his classification of system types evolved over three decades.

Before Ackoff

Ackoff’s hierarchy of systems appears to have been an attempt to turn hierarchical system decomposition into a science.

In this regard, he seems to have followed in the footsteps of Kenneth Boulding.

However, Boulding's (1956) system classification was highly questionable.

 

Boulding proposed systems be classified in a hierarchy with seven levels that rise in two ways:

·       by aggregation from part to whole

·       by complexification from simple to complex

The hierarchy is topped by societies at level 6 and ecologies at level 7.

This has encouraged people, ever since, to apply the term "complex system" to social networks.

 

One trouble is that levels of aggregation and levels of complexity are different ideas.

Every system is a perspective, or partial representation, of a reality.

To take a holistic view of an aggregate is to deliberately ignore the complexity of each part.

E.g. a card game is a very simple social system; the biology of a human’s cardio-vascular system is very complex.

The staggering complexity of a card player’s biology is irrelevant to the card game (and to other social interactions).

1971: Classification by behavior and outcome

At first, Ackoff differentiated his systems of interest by whether behaviors and outcomes are “determined” and “chosen”.

Remember this table (copied from above) is my interpretation of what he was saying in 1971.

 

System types | Behavior types | Aim types
16 A state maintaining system | 12 Reacts as predetermined to produce | 22 one predetermined Outcome
17 A goal-seeking system chooses between | 13 Responses as predetermined to meet | 23 one predetermined Goal
20 A purposive system chooses between | 14 Acts by self-determination to meet | 23 predetermined Goals with a shared 24 Objective
21 A purposeful system chooses between | 14 Acts by self-determination to meet | 25 Goals and Objectives it chooses, leading to an Ideal

 

Later, Ackoff defined “organisations” as purposeful systems (21).

 

A cybernetician’s view

This table shows what Ashby might make of Ackoff’s classification.

 

System types | Actors | Activities | Aims
A state maintaining system | a deterministic machine | no optional responses | fixed
A goal-seeking system | a deterministic machine | some optional responses | fixed
A purposive system | a self-organising social network | defines its own responses | fixed
A purposeful system | a self-organising social network | defines its own responses | defines its own aims

 

Ackoff’s purposive and purposeful systems are only systems if the definitions of responses and aims are changed under change control.

Read Von Foerster’s ideas about second order cybernetics for further explanation and exploration of self-organisation.

1999: Classification by purposefulness

Later, Ackoff differentiated his systems of interest by the “purposefulness” of the parts and the whole.

The table is edited from table 2.1 in “Re-Creating the Corporation - A Design of Organizations for the 21st Century”.

 

Systems type | Parts | Whole | Notes
Deterministic | Not purposeful | Not purposeful | Motor cars, fans, clocks, computers, plants
Animated | Not purposeful | Purposeful | Animals
Social | Purposeful | Purposeful | Corporations, universities, societies
Ecological | Purposeful | Not purposeful | An environment that serves its parts. E.g. ocean, atmosphere

 

By 1999, Ackoff had defined “purposeful” more neatly than in 1971.

"An Entity is purposeful if it can select both means and ends in two or more environments."

He then contrasted goal-seeking entities…

“Although the ability to make choices is necessary for purposefulness, it is not sufficient. An entity that can behave differently (select different means) but produce only one outcome in any one of a set of different environments is goal seeking, not purposeful. For example, automatic pilots on ships and airplanes and thermostats for heating systems have a desired outcome programmed into them; they do not choose them.”

… with purposeful people and social networks

“people can pursue different outcomes in the same and in different environments and therefore are obviously purposeful; so are certain types of social networks.”

2003: Classification by choice

Ackoff subtly revised his system classification scheme as shown below.

Now the key differentiator is the ability of the parts and the whole to exercise “choice” – as Ackoff defined it.

 

Type of System Model | Parts | Whole | Beware!
Deterministic / mechanistic (Clock, Tree, Ant, Fish) | No choice | No choice | Not just machines! This class includes plants and “lower” animals
Animate (Human, Gibbon) | No choice | Choice | Not all animals! This class excludes “lower” animals; it includes only “higher” animals that Ackoff considered to exercise free will
Social (Church, Corporation) | Choice | Choice | Not all social networks! This excludes “lower” animal groups and informal groups; it is primarily if not only formal organisations
Ecological (Island) | Choice | No choice | Not an external environment! An environment that serves its parts

 

Read this paper for a detailed critique of this 2003 system classification.

Notice that he had previously allowed that machines make choices.

Clearly, the meaning Ackoff attached to “choice” is vital to his system classification; for a discussion of choice, read the footnote.

 

Ackoff was a doomsayer about the management of organised human social networks.

His 2003 joint paper “On The Mismatch Between Systems And Their Models” started: “Most contemporary social systems are failing.”

Surely most of them also succeed to some extent; at the least, they employ some people for a while?

 

Ackoff’s examples of failing organisations were questionable.

 

Ackoff’s example | Might this signify…
---------------- | -------------------
“Head Start is said to be a failure.” | Impossible targets? Or the impossibility of garnering enough resources to meet the huge challenge?
“The US has a higher percentage of its population in prison than any other developed country, but has the highest crime rate.” | The prison population is a consequence of the crime rate? And shows the success of criminal investigation organisations?
“Most corporations formed each year fail before the year is up.” | A flourishing entrepreneur base?
“Half the corporations on the Fortune 500 list twenty-five years ago no longer exist.” | A healthy free market adapting to changing times? (Biology requires that older individuals die to make way for new ones.)

 

“One could go on citing deficiencies in the management of our principal social systems.”

Is it reasonable to expect every organisation to succeed?

The difficulties of managing large organisations are multitudinous.

What if managers have impossible targets? Or they cannot garner enough resources? Or market forces change the game?

 

In short

Ackoff’s system classifications (1971, 1999 and 2003) appear plausible, yet they are often misunderstood.

And there is a fundamental problem with classifying real-world entities into system types.

That is, it undermines the distinction Ackoff originally drew between:

·       an entity such as an institution or other human organisation

·       the many different systems that might be abstracted from observation of that entity.

Ackoff’s further ideas

This section lists Ackoff’s further ideas without analysis.

Ackoff idea 27: The functions of a system: production of the outcomes that define its goals and objectives.

Ackoff idea 28: The efficiency of a system: defined in mathematical terms beyond the analysis here.

Ackoff idea 29: An adaptive system: reacts or responds to reduced efficiency by changing system or environment state.

Ackoff idea 30: “To learn is to increase one’s efficiency in the pursuit of a goal under unchanging conditions.” Ackoff

Ackoff idea 31: Control: “An element or system controls another element or system if its behavior is either necessary or sufficient for subsequent behavior of the element or the system itself and the subsequent behavior is necessary or sufficient for the attainment of one or more of its goals.” Ackoff

Ackoff idea 32: Organization: “An organization is a purposeful system that contains at least two purposeful elements which have a common purpose relative to which the system has a functional division of labor; its functionally distinct subsets can respond to each other’s behavior through observation or communication; and at least one subset has a system-control function”. Ackoff

 

Later, Ackoff deprecated the mechanical and biological views of systems taken by cyberneticians.

It is a little surprising therefore that (in 1971) he described inter-system relationships in terms of controls.

Ackoff’s five conditions (1999)

Remember: Ackoff’s primary system of interest was a managed human organisation.

In 1999, Ackoff defined his system of interest by five conditions.

 

Condition 1- The whole has one or more defining properties or functions.

Ashby (cybernetics) defined a system as having both state variables and behaviors.

Forrester (system dynamics) defined a system as having both stocks and flows.

Ackoff (management science) defined a system as having both properties and functions.

He used the term “property” to mean a system state variable.

 

Condition 2- Each part in the set can affect the behavior or properties of the whole.

At first sight, this seems to go without saying, but what is meant by a “part”?

Is it a type in an abstract organisation – a role, rule or variable?

You cannot remove a type without having an impact on the behavior or variables of the whole.

Or is it an instance in a concrete organisation – an actor or activity instance?

You can remove an instance (remove a fish from a shoal) with no effect on the behavior or properties of the whole.
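The type/instance distinction at issue here can be illustrated with a small sketch (the role and actor names are hypothetical): removing one actor instance leaves the system description untouched, while removing a role type changes the description of the whole.

```python
# Hypothetical sketch: roles (types) vs actors (instances).
roles = {"driver", "conductor"}               # abstract system: role types
actors = {"driver": ["Ann", "Bob"],           # concrete system: actor instances
          "conductor": ["Carol"]}

# Removing an instance: the abstract system is unaffected.
actors["driver"].remove("Bob")
assert roles == {"driver", "conductor"}       # description unchanged

# Removing a type: the description of the whole has changed.
roles.discard("conductor")
assert "conductor" not in roles
```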

 

Condition 3- There is a subset of parts that is sufficient in one or more environments for carrying out the defining function of the whole; each… necessary but insufficient for… this defining function.

The division of a system into parts, and the granularity of those parts, is entirely in the gift of the describer.

This third condition is not true of systems described at the level of coarse-grained parts that are tightly coupled.

E.g. there is no functioning subset of a rider and bicycle system.

Or a marriage between two people.

Or a commercial business divided into sales, delivery and accounting parts.

 

Condition 4- The way that each essential part of a system affects its behavior or variables depends on (the behavior or variables of) at least one other essential part of the system.

Again, the division of a system into parts, and the granularity of those parts, is entirely in the gift of the describer.

This fourth condition is not true of systems in which one subsystem (part) encapsulates all essential parts.

 

Condition 5- The effect of any subset of essential parts on the system as a whole depends on the behavior of at least one other such subset.”

The fifth condition seems an elaboration of the fourth.

 

Some system descriptions fit all Ackoff's 5 conditions.

Some do not, where all the described parts are essential and there is no functioning subset of parts.

Conclusions and remarks

Management scientists (e.g. Boulding, Ackoff and Beer) tried to meld management science with cybernetics or general system theory.

Unfortunately, attempts to merge different approaches can obscure what each has to offer.

And when management scientists borrow terms from cybernetics or general system theory they often use them with different meanings.

Read “Terms and ambiguities in systems thinking discussion” for more on this.

 

On abstract and concrete systems

In 1971, one of Russell Ackoff’s insights was this:

“Different observers of the same phenomena may conceptualise them into different systems”.

Ackoff distinguished the conceptualisation of a system from its realisation.

·       Abstract systems: conceptual - as are the roles and rules in a system description

·       Concrete systems: physical - as are the actors and activities in a working system

 

In discussion, many systems thinkers conflate the two ideas, yet there is a many-to-many association between them.

·       One abstract system can be realised by several concrete social networks.

·       One concrete social network can realise several abstract systems.
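The many-to-many association between abstract systems and the concrete networks that realise them can be sketched as a simple data model (the example names are illustrative only):

```python
# Illustrative many-to-many association between abstract systems
# and the concrete social networks that realise them.
realises = [
    ("tennis club rules", "club A members"),
    ("tennis club rules", "club B members"),    # one abstract system, two realisations
    ("first-aid procedure", "club A members"),  # one network realising a second system
]

abstract_systems = {a for a, _ in realises}
networks = {n for _, n in realises}

# One abstract system realised by several concrete networks:
assert len([n for a, n in realises if a == "tennis club rules"]) == 2
# One concrete network realising several abstract systems:
assert len([a for a, n in realises if n == "club A members"]) == 2
```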

 

In 1999, Ackoff’s first condition was that a system’s variables or functions must be defined.

So, a group of people or social network doing things is not a system just because people call it an “organisation”.

It becomes a social system when, where and in so far as it performs functions defined in a system description.

 

So, systems thinking involves separating abstract systems (e.g. laws like apartheid) from concrete things (e.g. societies) that realise them.

Just as a thing is infinitely more than any system it realises, so a society is infinitely more than any law that constrains its behaviors.

A society that realises a system will continue to evolve - under the influence of that system.

Sometimes a society evolves in a way that reinforces the system (cf. a social cell).

Sometimes a society evolves in a way that forces the system to be changed or abandoned.

The society and the system are two different things.

 

The trouble is - Ackoff went on to contradict himself by equating a social network with a social system.

He referred to a church, a corporation or a government agency as being a social system.

Whereas IBM, for example, is as many different systems as system describers can describe and test.

Some of those systems may conflict with each other, or undermine each other.

Moreover, much of what happens in a business like IBM is not at all systematic or systemic.

It relies on humans knowing what to do, choosing what to do, and inventing what to do when the need arises.

In equating an organization that depends on human knowledge and abilities (rather than on system description) with a system, Ackoff undermined the basic ideas he set out at the beginning.

 

On system composition/decomposition

Ackoff arranged aims, behaviors and systems in hierarchical structures, using different words at different levels.

It can be convenient to use different words for different levels of system concept, e.g.

 

Division in time | Aims        | Activities   | Actors     | Division in space
---------------- | ----------- | ------------ | ---------- | -----------------
Persistent       | Mission     |              | Enterprise | Whole
Long term        | Goal        | Value stream | Division   | Composite
Short term       | Objective   | Process      | Team       | Part
Immediate        | Requirement | Activity     | Actor      | Atom

 

However, systems are infinitely nestable; the level of decomposition is arbitrary – a choice made in a particular situation.

Pinning different words to different levels of a decomposition hierarchy can obscure the general nature of system theory.

At every level, a process is an event-triggered sequence of actions that may refer to system state, include choices and produce outcomes.

A choice is a choice: whether it is made by strict or fuzzy logic, deterministically or by free will (if you consider those to be incompatible).
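This general notion of a process can be written the same way at any level of decomposition. The sketch below is illustrative only (the order-handling names are hypothetical): an event triggers actions that refer to system state, include a choice, and produce an outcome.

```python
# At any level, a process: event-triggered, may refer to system state,
# includes a choice, and produces an outcome. Names are illustrative only.
state = {"stock": 2}

def handle_order(event):                # event-triggered
    if state["stock"] > 0:              # choice referring to system state
        state["stock"] -= 1             # action changing system state
        return "dispatched"             # outcome
    return "back-ordered"               # alternative outcome

assert handle_order("order #1") == "dispatched"
assert handle_order("order #2") == "dispatched"
assert handle_order("order #3") == "back-ordered"
```

Whether the choice point is implemented by strict logic, fuzzy logic, or a human decision makes no difference to the shape of the process.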

 

On system change or adaptation

Ackoff said all animated systems are organisms, which he said are defined as being autopoietic.

In discussion of social systems, he used the term self-organising, which is a different concept.

First, we need to distinguish self-sustaining from self-organising.

Then, if the latter means actors continually change the variables and functions of the organisation they work in, this conflicts with Ackoff’s first condition for a system.

 

For a more coherent discussion, we need to distinguish several possible meanings of “self-organising”.

Read Von Foerster’s ideas about second order cybernetics for further explanation and exploration of that topic.

Some speak of “complex adaptive systems”, where the meaning of all three terms is debatable.

Read “Complex adaptive systems” for more on that.

 

In short, it may seem Ackoff’s attempts to merge social systems thinking with more general system theory are doomed.

However, it is possible to reconcile the two traditions, as indicated in the papers linked to above.

Footnotes on human factors

Ackoff took an anthropomorphic view of systems.

Even so, he did not see all human social networks as organizations.

He showed little interest in social networks with minimal bureaucracy (a tennis club or pick-pocket gang).

His primary interest was in organizations of the kind addressed by “management science”.

Mostly, his interest was in bureaucracies that organise human actors - in the public or private sector.

On choice

Ackoff defined animate actors (humans and other higher animals) as ones that choose freely between possible actions.

He said they make choices in the light of their “ideals”.

 

"The capability of seeking ideals may well be a characteristic that distinguishes man from anything he can make, including computers".

Really?

Doesn’t a chess-playing computer choose between possible chess moves in the light of its ideal – to win the game?

Doesn’t a humanoid robot direct its limb movements in the light of its ideal – to stay on its feet?

Surely the main issue is not how actors make choices - by free will or not?

Rather, it is that animate actors may have different ideals from any system they participate in.

And so, the actors have to choose between conflicting ideals when choosing their next action.

 

“All organizations are social systems”

Ackoff said the reverse is not true; not all social systems are organizations.

 

“Most contemporary social systems are failing”

Did he mean to say most organizations are failing?

Sometimes he seems to be presenting something akin to a political manifesto.

Does he really mean government agencies rather than commercial businesses?

Most UK government agencies surely succeed a bit and fail a bit. Is his concern US government in particular?

 

“An organization is a purposeful system that contains at least two purposeful elements which have a common purpose”.

Can purposes be ascribed to a purposeful organization by entities inside or outside the organization?

 

“An aggregation of purposeful entities does not constitute an organization unless they have at least one common purpose”

Does it count if employees and suppliers have the common purpose of maximising their income from the organization?

Or all employees are motivated by self-respect from being employed and social interactions with their colleagues?

 

“Organizations display choice.”

Geoff Elliot told me that in the sociological perspective, only people can be “purposeful”.

But Ackoff said social systems (meaning social organizations) are also purposeful.

 

What does it mean to say a social organization makes choices or exercises free will?

Not all the purposes of an organization are abstracted from the purposes of the actors in it.

What about purposes ascribed to an organization by entities outside the organization – like the government?

 

Would you say an investment bank’s trading division chooses which stocks to buy and sell?

Those choices, formerly made by human actors, are now made by mechanistic computer actors.

So, in what sense does the organization display choice?

And how does it make a choice independently of choices made by its animate or mechanistic parts?

What does it mean to make a choice anyway?

 

To invent a response to an event?

What is stable in Ackoff’s social organization?

May actors leave and new actors join? Presumably yes.

May the functions be changed also? Presumably no, else where is the system?

If actors define their own actions, the notion of a system is lost.

 

To select one of several possible actions within a defined system?

Suppose external event or state change types are stable

And the range of performable actions is limited to rational responses to those events or state changes.

E.g. Given an invoice, a customer might choose to pay in full, pay in part, or refuse to pay.

The question then becomes – how does a human actor choose between actions?

 

Select deterministically by following definable rules?

Determinism means that if you know an actor’s internal state and reasoning rules, you can predict the actor’s choice of action in response to an event.

We don’t know whether a human is deterministic – because we don’t know the actor’s state or the rules their brain follows.

Further, animals may apply fuzzy logic to choices, surely more fuzzily than any computerised artificial intelligence.

So, a human’s choice may be unpredictable, but still deterministic in a complex way we cannot penetrate.
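Determinism in this sense can be illustrated with a toy sketch (the function and state names are hypothetical), using the invoice responses discussed earlier: anyone who knows the actor’s state and rules can predict the choice.

```python
# Hypothetical sketch: a deterministic actor choosing between the
# invoice responses discussed above. Knowing state + rules makes
# the choice predictable, however "free" it may look from outside.
def choose(actor_state, invoice_amount):
    funds = actor_state["funds"]
    if funds >= invoice_amount:
        return "pay in full"
    if funds > 0:
        return "pay in part"
    return "refuse to pay"

# Given the state and the rules, the choice is predictable:
assert choose({"funds": 500}, 200) == "pay in full"
assert choose({"funds": 100}, 200) == "pay in part"
assert choose({"funds": 0},   200) == "refuse to pay"
```

With a human actor we cannot inspect the state or the rules, so the choice may be unpredictable in practice while still being deterministic in principle.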

 

Select by free will?

How an actor makes a choice makes no difference to a describer of a system’s aims, variables, rules and roles.

The system describer has only to decide what range of options and actions to include in the system description, and what to exclude.

The describer must assume a human can choose any possible option, or fail to act.

A designer has to make allowances for any “exception paths” that actors may choose to follow.

 

Select purposefully in the light of the actor’s internal purposes?

If an animate actor applies logical reasoning when changing its aims (goals, objectives or purposes), is that a deterministic process?

If so, does that undermine the meaning of purposeful?

On people not being the “parts” of a system

The word “part” suggests containment within a boundary, be it physical or logical.

A man can contain a machine (say, an artificial heart) both physically and logically.

The heart has no aim or behavior outside of the man.

A machine (say, motor car) can contain a man physically but not logically.

E.g. a taxi driver has aims and behaviors outside of their role as driver.

 

Suppose you employ human actors, each with their own aims and behaviors, in your business.

Your business needs only some of the time and talents of each employee.

You ask an employee (Jane) to perform the activities expected of her given role.

Within her work hours, Jane will sometimes act outside her defined role, and sometimes contrary to it.

Outside of work hours, she may play a role in other (perhaps even competing) businesses.

In what sense is Jane a part of your business?

 

Your business can only be regarded as a system when and where it realises an abstract system (Ackoff ideas 2 & 3).

The “parts” of the system are not the actors per se; they are instantiations of defined roles by actors.

What actors do outside of those roles is not part of the defined system.

Moreover, much of what happens in a business is not at all systematic or systemic.

Human actors act differently from the roles they are supposed to play, and do much else besides.

They invent and do stuff as they choose; sometimes in accord with business goals; sometimes not.

On organising, deploying, motivating and managing actors to perform processes

“A group of unwilling slaves can be organised to do something that they do not want to do,

but they do not constitute an organization, even though they may form a system” Ackoff

There is a difference between not being keen to do some work and not being willing and able to do it well when asked.

We may baulk at starting a job, but get some enjoyment from it nevertheless.

And being appreciated by colleagues and managers has a lot to do with that.

 

Your new employee brings their own aims (ideals, purposes, objectives, goals) to your business.

Their aims may be in conflict with each other, in flux, unconscious or unrecognised.

Some of their aims may be contrary to the aims of you and your social organization.

Your employee may act to make your organization inefficient, or even to sabotage it (e.g. sharing information with a competitor).

OK, you can design security and exception handling processes to address the conflicting aims and contrary behaviors of employees.

And yes, human actors are special, they need special attention.

This is the territory of social psychology, neuro-linguistic programming and the like – rather than system theory.

 

Business managers and project managers are responsible for managing and motivating their employees.

It isn't the responsibility of an enterprise architecture function.

It attends to the "machinery" of human and computer activity systems, which is difficult enough.

Where people's roles are affected, the EA function works with business managers, HR and sometimes a "business change function" to ensure people are well managed and motivated.

 

 

All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.