Ackoff ideas from a GST perspective
Copyright 2017 Graham Berrisford. One of about 300 papers at http://avancier.website. Last updated 01/12/2017 14:29.
Russell L Ackoff (1919-2009) was an American organizational theorist, operations researcher and management scientist.
He was a well-known “systems thinker”, respected for a large body of work focused on managed human social systems.
Ackoff was primarily interested in the management of organised human social entities.
This paper is not about his “socio-systemic view of organizations”; analysis of that shows it bears some similarity to today’s Open Group Architecture Framework (TOGAF).
This paper is about Ackoff’s attempt to relate “management science” to more general system theory.
It shows how he started from GST principles, but later departed from them.
It reviews how Ackoff defined and classified systems in three papers that you can probably find on the internet.
· 1971 “Towards a System of Systems Concepts”
· 1999 “Re-Creating the Corporation - A Design of Organizations for the 21st Century”
· 2003 “On The Mismatch Between Systems And Their Models”
Contents
Ackoff’s three system classifications (1971, 1999, 2003)
The “parts” of a human activity system
Footnote 1: a few more remarks
Ackoff’s work inherits from two “system” traditions.
The first, sociological tradition can be traced back to the 19th century.
It might be called “socio-cultural systems thinking” (SST).
The second, general system theory (GST) emerged after the second world war.
In An Introduction to Cybernetics (1956), Ashby furthered the ideas of general system theory in relation to control systems.
Cybernetics “does not ask ‘what is this thing?’ but ‘what does it do?’ [It] deals with all forms of behavior in so far as they are regular, or determinate, or reproducible.”
Our companion on GST principles abstracts some principles from Ashby’s works, including the following three.
GST Principle: descriptions idealise observed or envisaged realities
We describe reality as a selective conceptualisation or idealisation of that reality.
“Any suggestion that we should study ‘all’ the facts is unrealistic, and actually the attempt is never made. What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” Ashby.
GST Principle: concrete systems realise abstract ones
There are two forms of system.
An abstract system description (or type) is realised (or instantiated) by one or more concrete system realities.
Abstract system description | Theoretical system | System description
Concrete system realisation | An empirical system | A system in operation
GST Principle: an open system interacts with its environment
An open system is encapsulated within a wider environment.
To encapsulate a system means defining its input-process-output (IPO) boundary.
The inputs and outputs can be flows of information, material or energy.
The flows of interest in business systems are sometimes of materials and usually of information.
A system describer starts with an already-given interest or aim, then defines the system boundary in terms of feedback loops between a system and its environment.
A conventional business system design process proceeds along these lines.
1. Define Customers, Outputs, Inputs, Suppliers, Processes and Roles.
2. Hire, buy or build Actors to play the Roles.
3. Organise, deploy, motivate and manage the Actors to perform the processes.
Ackoff often seemed primarily concerned with the last.
General system characteristics
Generally, a designed system is described in terms of aims, behaviors and active structures.
Terms | Example | Meaning
Aims | win the world cup | target outcomes that give an entity a reason or logic to perform and choose between actions
Behaviors | compete in world cup matches | processes that run over time with intermediate outcomes and a final aim or ideal
Active structures | players in a national football team | nodes, related in a hierarchy or network, that perform activities in behaviors
Ackoff’s interest was in the aims, behaviors and structures of managed/organised human social entities.
With this in mind, he built elaborate hierarchies of aim concepts, behavior concepts and system concepts.
But first, here are his basic ideas.
Most of Ackoff’s first 11 ideas are compatible with the GST principles and concepts above (though he could have been clearer).
Ackoff idea 1- System: a set of interrelated elements
All the elements must be related directly or indirectly, else there would be two or more systems.
This definition embraces both passive structures (e.g. tables) and activity systems.
The concern of GST is activity systems, in which structural elements interact in orderly behaviors.
Ackoff idea 2- Abstract system: a system in which the elements are concepts.
It may be purely conceptual; it may describe an envisaged reality or an observed reality.
Abstract descriptions do take the concrete forms of mental, documented and physical models.
What matters here is not the form but the relationship of
· a description (model, conceptualisation, idealisation) to
· a reality that is observed or envisaged as instantiating that description.
Ackoff idea 3- Concrete system: a system that has two or more objects.
A concrete system is a realization, in physical matter and/or energy, of an abstract system description.
Abstract system description | The Dewey Decimal System | “Solar system” | Laws of tennis | Defined roles (e.g. orchestral parts) | The score of a symphony
Concrete system realisation | Books sorted on library shelves | Planets in orbits | A tennis match | Actors (e.g. orchestra members) | A performance of that symphony
Which comes first? Abstract system description or concrete system realization?
A designed concrete system (like a motor car) cannot run in reality until after it has been described, however abstractly.
A natural concrete entity (like the solar system) runs in reality before it is recognised and described as a system.
The concern of GST is activity systems that operate in the real world, displaying behavior of some kind.
People find this hard to understand and accept, but here goes…
It is meaningless to say a named entity is a system except with reference to a system description.
With no abstract system description, an entity cannot rightly be called a system.
GST note: The universe is an ever-changing entity in which stuff happens. A concrete system is:
· an island of repeatable behaviors carved out of that universe
· a set of describable entities that interact in describable behaviors
· an entity we can test as doing what an abstract system description says.
With no system description, there is no testable system, just stuff happening.
Ackoff idea 4: System state: the values of the system’s properties at a particular time.
A concrete system’s property values realise property types or variables in its abstract system description.
System properties:
Abstract description of system state | Property types (air temperature, displayed colour)
Concrete realization of system state | Property values (air temperature is 80 degrees, displayed colour is red)
The current state of a concrete system realises (gives particular values to) property types or variables in a system description.
Other qualities of that entity are not a part of that system, but might count as part of another system.
E.g. the temperature of the earth’s atmosphere is irrelevant to its role in the solar system, but vital to its role in the biosphere.
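To make the type/value relationship concrete, here is a minimal Python sketch (my illustration, not Ackoff’s notation): an abstract description names property types, a concrete state supplies values, and a simple check tests whether the reality realises the description.

```python
# A minimal sketch (not Ackoff's formalism): an abstract system description
# declares property types; a concrete system state supplies values for them.

abstract_description = {
    "air_temperature": float,   # property type (a variable of the described system)
    "displayed_colour": str,
}

concrete_state = {
    "air_temperature": 80.0,    # property value at a particular time
    "displayed_colour": "red",
}

def realises(description: dict, state: dict) -> bool:
    """True if the state supplies a value of the right type for every property type."""
    return all(
        name in state and isinstance(state[name], expected)
        for name, expected in description.items()
    )

print(realises(abstract_description, concrete_state))  # True
```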
Ackoff idea 5: System environment: those elements and their properties (outside the system) that can change the state of the system, or be changed by the system.
“The elements that form the environment… may become conceptualised as systems when they become the focus of attention.” Ackoff
Any brain or business can be seen as a control system connected in a feedback loop with its environment.
It receives information in messages about the state of entities and activities in its environment.
It records information in a memory.
It sends information in messages to inform and direct motors, actors or entities.
Ackoff idea 6: System environment state: the values of the environment’s properties at a particular time.
A concrete environment’s property values realise property types or variables defined in an abstract description of that environment.
The remainder of the real-world does not count as part of that environment (though it might count as part of another system’s environment).
“Different observers of the same phenomena [actors and actions] may conceptualise them into different systems and environments.” Ackoff
Ackoff idea 7: A closed system: one that has no environment.
An open system interacts with entities and events in a wider environment.
A closed system does not interact with its environment.
“Such conceptualizations are of relatively restricted use.” Ackoff
Aside: Every “system dynamics” model is a closed system. It is a model of populations (stocks) that grow and shrink in response to continuous inter-stock event streams (flows). The whole system is closed, so all events are internal events. On the other hand, each stock can be seen as a subsystem, to which every inter-stock flow is an external event.
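As an illustration of the aside above, here is a minimal stock-and-flow sketch in Python, with invented stocks (prey and predators) and invented flow coefficients; all events are internal to the closed model.

```python
# A minimal sketch of a closed "system dynamics" model, assuming two stocks
# (prey and predators) coupled by flows. Coefficients are invented for illustration.

prey, predators = 100.0, 20.0
dt = 0.1

for step in range(1000):
    births    = 0.10 * prey                   # flow into the prey stock
    predation = 0.002 * prey * predators      # flow from the prey stock to the predator stock
    deaths    = 0.05 * predators              # flow out of the predator stock
    prey      += (births - predation) * dt
    predators += (0.5 * predation - deaths) * dt

print(round(prey, 1), round(predators, 1))
```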
Ackoff idea 8: System/environment event: a change to the system property values.
Ackoff was concerned with how state changes inside a system are related to state changes in its environment.
He did not appear to distinguish events from state changes.
It is generally presumed that
· a state change modifies the value(s) of system variable(s) in response to a discrete event.
· one event can cause different (optional) state changes, depending on the current state of the system.
GST note: On discrete event-driven behavior. External events cross the boundary from the environment into the system. Within a system, internal events pass between subsystems. In response to an event, a system refers to current system state. It then “chooses” what actions to take, including actions that change its own state. The choice depends on the values of input event variables and internal state variables.
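A minimal sketch of the note above, using an invented bank-account example: the action chosen in response to an external event depends on both the event’s variables and the system’s current state.

```python
# A minimal sketch of discrete event-driven behavior: the response to an event
# depends on the event's variables and on the system's current state.

class Account:                       # an illustrative system, not from Ackoff
    def __init__(self, balance=0):
        self.balance = balance       # internal state variable

    def handle(self, event: dict) -> str:
        """Choose an action (and possibly a state change) given an external event."""
        if event["type"] == "deposit":
            self.balance += event["amount"]
            return "accepted"
        if event["type"] == "withdraw":
            if event["amount"] <= self.balance:   # choice depends on current state
                self.balance -= event["amount"]
                return "accepted"
            return "refused"
        return "ignored"

acc = Account()
print(acc.handle({"type": "deposit", "amount": 50}))   # accepted
print(acc.handle({"type": "withdraw", "amount": 80}))  # refused
```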
Ackoff idea 9: Static (one state) system: a system to which no events occur.
Ackoff’s definition here can be improved, since his example, a table, does experience events.
A table is laid for dinner, painted, stripped and polished, and has its wonky leg replaced.
In today’s ArchiMate terminology, his static system is a passive structure rather than an active structure.
A passive structure can experience events, can be acted in or on, but is inanimate and cannot act itself.
Aside: You may see an abstract system description as a static definition of dynamic concrete systems realizations.
The system description does experience events and state changes; it is written, communicated, revised and realised.
There are odd cases where a system description is directly realised – notably, a computer program and DNA.
Ackoff idea 10: Dynamic (multi state) system: a system to which events occur.
System theorists often describe systems in terms of state changes that result from events detected.
Brains and businesses can be seen as control systems that remember the current state of entities and processes they monitor, inform and direct.
They update their memories in response to events that reveal state changes in those entities and processes.
Ackoff idea 11: Homeostatic system: a static system whose elements and environment are dynamic.
“A house that maintains a constant temperature… is homeostatic”. Ackoff
Hmm… Ackoff’s notion of the system here is questionable.
The heating system is dynamic, not a static system.
The environment property of interest is a variable - air temperature.
This too is dynamic, though maintained in a range of values between upper and lower bounds.
Looking at the house as merely a container of air, it is neither the system of interest nor its environment.
Looking at the house as a system, you’d assume its state can change in many other ways.
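For illustration, here is a minimal sketch of the heating example under discussion, with invented numbers: the system (the heater) is dynamic, while the monitored environment variable (air temperature) is held between upper and lower bounds.

```python
# A minimal sketch of the homeostatic heating example: the heating system switches
# state, and the monitored variable (air temperature) is kept within bounds.

import random

temperature = 17.0            # environment state variable
heater_on = False             # system state variable

for minute in range(60):
    if temperature < 19.0:    # lower bound
        heater_on = True
    elif temperature > 21.0:  # upper bound
        heater_on = False
    temperature += 0.3 if heater_on else -0.2
    temperature += random.uniform(-0.05, 0.05)   # disturbance from outside

print(heater_on, round(temperature, 1))
```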
In short
Ackoff started from basic points a general system theorist would recognise.
He discussed homeostatic systems - like the early general system theorists did.
He distinguished abstract systems from concrete systems, and structures from behaviors.
But from here onwards, Ackoff moved away from GST and from Ashby.
GST is concerned with activity systems that operate in the real world, displaying behavior of some kind.
“A set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviors." Meadows
The systems can be characterised as having parts that interact in “regular, repeated or repeatable behaviors” W Ross Ashby.
Ackoff’s interest was in the behaviors of managed/organised human social entities.
He built an elaborate hierarchy of behavior concepts.
He distinguished acts, reactions and responses with reference to state changes inside the system and its environment.
Ackoff idea 12: System reaction: a system event that may be triggered directly by another event.
E.g. “A light going on when the switch is turned.” Ackoff
The reaction is simplistic: there is no reference to memory and no choice of outcome.
By contrast, consider turning on a heating system.
The heating system may examine the temperature in several rooms then “choose” which heaters to switch on.
Ackoff might call this a response.
Ackoff idea 13: System response: a response that goes beyond the naive reaction at 12.
E.g. “A person’s turning on a light when it gets dark is a response to darkness”. Ackoff
This response might be seen as deterministic.
You apply a function that involves dividing an internal state variable (acuity needed) by an external environment variable (light level).
If the function is higher than N, then you switch on the light.
But of course animals apply complex fuzzy logic to such variables rather than simple arithmetic.
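A minimal sketch of that deterministic response, with invented variable names and threshold; the guard against a zero light level is my addition.

```python
# A minimal sketch of the deterministic "response" described above: the choice
# depends on an internal state variable (acuity needed) and an external
# environment variable (light level). Numbers are invented.

def should_switch_light_on(acuity_needed: float, light_level: float,
                           threshold: float = 1.0) -> bool:
    if light_level <= 0:            # guard against division by zero in the sketch
        return True
    return (acuity_needed / light_level) > threshold

print(should_switch_light_on(acuity_needed=8.0, light_level=3.0))   # True
print(should_switch_light_on(acuity_needed=2.0, light_level=10.0))  # False
```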
Ackoff idea 14: System act: a self-determined, autonomous behavior.
Ackoff says a system can be triggered to act by an internal state change, not triggered by an event.
However, all internal state changes are traceable to external events, time events and internal events.
An internal event happens when the value of an internal state variable crosses a significant threshold.
Both human and computer actors may be triggered to act by an internal event/state change of this kind.
Again, human actors apply fuzzy logic in processing such threshold-crossing changes rather than simple arithmetic.
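A minimal sketch of an internal event of this kind, using an invented hunger variable: when the variable crosses a significant threshold, the actor is triggered to act without any external event.

```python
# A minimal sketch of an "internal event": a state variable crossing a
# significant threshold triggers an act, with no external trigger.

hunger = 0.0
THRESHOLD = 5.0

for hour in range(10):
    previous = hunger
    hunger += 0.8                                  # internal state drifts over time
    if previous <= THRESHOLD < hunger:             # threshold crossed: internal event
        print(f"hour {hour}: internal event - go and eat")
        hunger = 0.0
```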
Ackoff idea 15: System behavior: system events which… initiate other events.
Ackoff says behaviors are state changes whose consequences are of interest.
But his behaviors include acts, reactions and responses whose antecedents are of interest.
And surely the consequences of acts, reactions and responses are outcomes?
So doesn’t every behavior have antecedents and consequences of interest?
AN ATTEMPT TO TABULATE ACKOFF’S BEHAVIOR TYPES
Ackoff arranged behaviors, aims and systems in hierarchical structures, using different words at different levels.
I am not confident that Ackoff’s division of behaviors into types is coherent and/or that my interpretation is correct.
This table presumes all his behaviors are triggered by events and produce outcomes, usually with reference to the system’s current state/memory.
15 Behavior | Trigger event | Choice wrt system event/state change | Choice wrt environment event/state change | Outcomes | Involves learning
12 Reaction | External | No | No | 1 | No
13 Response | External | Yes | Yes | Several | Yes?
14 Act | Internal | ? | Yes | ? | ?
Remember: there is recursive composition/decomposition of systems in space, time or logic.
An event that is external to one system is internal to a wider system, and vice-versa.
Ackoff’s interest was in the aims of managed/organised human social entities.
He built an elaborate hierarchy of aim concepts.
It may be distilled from the bottom up thus.
· An outcome can be valued and preferred as a goal.
· Goals can be ordered with respect to objectives.
· Objectives can be ordered with respect to ideals
· Ideals are persistent aims that appear to be unobtainable in principle or in practice.
I can’t find a definition of “outcome” in Ackoff’s 1971 paper.
Presumably, outcomes are state changes - inside the system and/or in its environment.
Ackoff idea 22: The relative value of an outcome: a value (between 0 and 1) compared with other outcomes in a set. The highest value outcome is the preferred outcome.
Ackoff idea 23: The goal of a purposeful system: a preferred outcome within a time period.
Ackoff idea 24: The objective of a purposeful system: a preferred outcome that cannot be obtained in a time period.
Ackoff idea 25: An ideal: an objective that cannot be obtained in a time period, but can be approached.
Ackoff idea 26: An ideal-seeking system: a purposeful system that seeks goals leading towards its ideal.
It seems purposes sit at a higher level than goals, but the relationship of purposes to objectives and ideals is not clear.
Ackoff doesn’t discuss: what happens if you expand the system-environment boundary or the time-frame?
Presumably:
· an ideal becomes an objective when the time frame is expanded?
· an objective becomes a goal when the time frame is expanded?
· a goal of a small subsystem may be seen as one of many outcomes in a larger system?
Ackoff’s aim and behavior hierarchies may well have been designed to fit his system type hierarchy.
He classified systems by dividing behaviors and outcomes between “determined” and “chosen”.
System type | Behavior | Outcome
16 State maintaining | Determined (reactive) | Fixed
17 Goal-seeking | Chosen (responsive) | Fixed
19 Purposive | Chosen | Variable but determined
20 Purposeful | Chosen | Variable and chosen
Ackoff idea 16: State maintaining system: the most naïve of reactive systems (cf. 12 reactions)
When an event of a given type occurs, a state maintaining system reacts to produce a fixed outcome.
“Fixed outcome” implies there is no choice as to the outcome of the event.
Ackoff’s discussion of this system type is dominated by homeostatic systems.
So, this category appears to be about maintaining a system state variable - within defined bounds - which is an aim in itself.
What if maintaining one system state variable involves making choices related to the values of other state variables?
I guess that is a goal-seeking system as at 17 below.
Ackoff idea 17: Goal-seeking system: a system that chooses its response to events (cf. 13 responses)
This category of system seems to be defined by its ability to retain and use its memory to make choices.
Every information system remembers facts in system state and uses that to inform choices between optional behaviors.
It refers to its memory when choosing between behaviors that produce particular outcomes, with an end goal “in mind”.
Ackoff refers here to a system that does more – it learns – it adapts its behavior by conditioning according to experience.
E.g. a rat in a maze increases its efficiency in meeting a goal by choosing not to repeat a behavior that failed earlier.
It is unclear whether Ackoff wants to distinguish between a system using its memory and learning by conditioning.
And perhaps that distinction is too subtle to be drawn.
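A minimal sketch of the rat-in-a-maze style example, under the assumption that “learning by conditioning” means remembering failed behaviors and not repeating them; names and numbers are invented.

```python
# A minimal sketch of learning by conditioning: the system remembers which
# behaviors failed (memory) and stops choosing them, so its efficiency improves.

import random

doors = ["left", "middle", "right"]
food_behind = "right"
failed = set()                       # memory of behaviors that failed earlier

for trial in range(10):
    choice = random.choice([d for d in doors if d not in failed])
    if choice == food_behind:
        print(f"trial {trial}: {choice} - goal reached")
        break
    failed.add(choice)               # conditioning: do not repeat a failed behavior
    print(f"trial {trial}: {choice} - failed")
```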
Ackoff idea 18: Process: a sequence of behavior that constitutes a system and has a goal-producing function.
This appears to be a one-time system; it runs from trigger event to an outcome that meets its goal.
Ackoff says each step in the process (be it an act, reaction or response) brings the actor closer to the goal.
It isn’t clear whether his process can have control logic (strict or fuzzy) or lead from one event to a variety of outcomes/goals.
Ackoff idea 19: Multi-goal seeking system: a system that seeks different goals in different states.
A deterministic system, by referring to its memory, can choose between reactions/responses to an event, and produce different outcomes.
If you don’t know the internal rules or state of the system, you cannot predict the outcome of an event.
And if those rules include applying fuzzy logic, this introduces a further level of unpredictability.
Ackoff idea 20: Purposive system: a multi-goal seeking system where goals have in common the achievement of a common purpose.
The common purpose could be to survive, win the game or make a profit. (Is that purpose also an objective?)
Purposive systems and sub-animate actors may be given goals but can’t change them.
Up to this point, Ackoff appears to presume a system behaves in a deterministic manner. But from this point on it becomes harder to reconcile Ackoff’s ideas with more science-based GST.
Ackoff idea 21: Purposeful system: can produce the same outcome in different ways and different outcomes in the same and different states.
Ackoff believed that purposefulness demonstrated free will.
The next two quotes are from a 1972 book Ackoff wrote with Emery about purposeful systems.
“a purposeful system is one that can change its goals in constant environmental conditions; it selects goals as well as the means by which to pursue them. It displays will.” Ackoff
Animate actors (being self-aware) might change the aims, variables, rules and roles of any system they participate in.
"members are also purposeful individuals who intentionally
and collectively formulate objectives and are parts of larger purposeful
systems.” Ackoff
Three more points are included here without analysis.
Ackoff idea 27: The functions of a system: production of the outcomes that define its goals and objectives.
Ackoff idea 28: The efficiency of a system: defined in mathematical terms beyond the analysis here.
Ackoff idea 29: An adaptive system: reacts or responds to reduced efficiency by changing system or environment state.
Ackoff goes on to define learning in a very particular way.
Ackoff idea 30: “To learn is to increase one’s efficiency in the pursuit of a goal under unchanging conditions.” Ackoff
That is a very limited definition, since learning could be extended, for example, to include learning that a goal is not worth pursuing.
Ackoff deprecated the mechanistic, biological and animalistic views of systems taken by other system theorists.
It is a little surprising therefore that (in 1971) he described inter-system relationships in terms of controls.
Ackoff idea 31: Control: “An element or system controls another element or system if its behavior is either necessary or sufficient for subsequent behavior of the element or the system itself and the subsequent behavior is necessary or sufficient for the attainment of one or more of its goals.” Ackoff
Ackoff idea 32: Organization: “An organization is a purposeful system that contains at least two purposeful elements which have a common purpose relative to which the system has a functional division of labor; its functionally distinct subsets can respond to each other’s behavior through observation or communication; and at least one subset has a system-control function”. Ackoff
Remember: Ackoff’s systems of interest were managed/organised human social entities.
This section reviews how his classification of system types evolved over twenty years.
Remember it is well-nigh axiomatic that:
· Systems can be hierarchically nested: one system can be a part or subsystem of another.
· An event that is external to a smaller system is internal to a wider system (and vice-versa).
· The emergent properties of a small system can be ordinary properties of a wider system it is a part of.
At first Ackoff differentiated his systems of interest by whether behaviors and outcomes are “determined” or “chosen”.
System type | Behavior | Outcome
16 State maintaining | Determined (reactive) | Fixed
17 Goal-seeking | Chosen (responsive) | Fixed
19 Purposive | Chosen | Variable but determined
20 Purposeful | Chosen | Variable and chosen
Later, Ackoff differentiated his systems of interest by the “purposefulness” of the parts and the whole.
The table is edited from table 2.1 in “Re-Creating the Corporation - A Design of Organizations for the 21st Century”
System type | Parts | Whole | Notes
Deterministic | Not purposeful | Not purposeful | Motor cars, fans, clocks, computers, plants
Animated | Not purposeful | Purposeful | Animals
Social | Purposeful | Purposeful | Corporations, universities, societies
Ecological | Purposeful | Not purposeful | An environment that serves its parts, e.g. the atmosphere
Ackoff defined “purposeful” more neatly in 1999 than in 1971.
"An Entity is purposeful if it can select both means
and ends in two or more environments."
He then contrasted goal-seeking entities.
“Although the ability to make choices is necessary for purposefulness, it is not sufficient.
An entity that can behave differently (select different means) but produce only one outcome in any one of a set of different environments is goal seeking, not purposeful.
For example, automatic pilots on ships and airplanes and thermostats for heating systems have a desired outcome programmed into them; they do not choose them.”
With purposeful people and social entities, “people can pursue different outcomes in the same and in different environments and therefore are obviously purposeful; so are certain types of social groups.”
Ackoff was a doomsayer about the management of organised human social entities.
His joint paper “On The Mismatch Between Systems And Their Models” started: “Most contemporary social systems are failing.”
Ackoff’s examples suggest a presumption that the blame for the failure of a business must lie with its managers.
“Head Start is said to be a failure.” Perhaps signifying the impossibility of garnering enough resources to meet the huge challenge?
“The US has a higher percentage of its population in prison than any other developed country, but nevertheless has the highest crime rate.” The former is a consequence of the latter?
“Most of the corporations formed each year fail before the year is up.” The sign of a flourishing entrepreneur base?
“Half the corporations on the Fortune 500 list twenty-five years ago no longer exist.” The sign of a healthy free market economy adapting to changing times?
“One could go on citing deficiencies in the management of our principal social systems.” Deficient management? Or inability to gather resources, impossible targets or market forces?
Ackoff subtly revised his system classification scheme as shown below.
Now the key differentiator is the ability of the parts and the whole to exercise “choice” – as Ackoff defined it.
System type | Parts | Whole | Beware!
Deterministic / mechanistic (clock, tree, bee) | No choice | No choice | Not just machines! This includes plants and “lower” animals
Animate (human) | No choice | Choice | Not all animals! This excludes “lower” animals. It includes only “higher” animals that Ackoff considers to exercise free will
Social (church, corporation) | Choice | Choice | Not all social groups! This excludes “lower” animal groups and informal groups. It is primarily if not only formal organisations
Ecological (island) | Choice | No choice | Not an external environment! An environment that serves its parts, e.g. an island
Read this paper for a detailed critique of this 2003 system classification.
Notice that “purposeful” was replaced by “choice” - though he had previously allowed that machines make choices.
Clearly, the meaning Ackoff attached to “choice” is vital to his system classification.
For more on choice, read the footnote.
Remember: Ackoff’s systems of interest were managed/organised human social entities.
In 1999, Ackoff defined his system of interest by five conditions.
Condition 1- The whole has one or more defining properties or functions.
It goes without saying that a condition for meaningful discussion of a thing is that its properties are defined.
Ackoff earlier used the term “properties” to mean system state variables in particular.
The values of system state variables are referred to and changed by the performances of system behaviors.
So are “functions” behaviors? Or the aims of behaviors? (Ideas 18 and 27 of his 1971 paper don’t make it clear.)
Condition 2- Each part in the set can affect the behavior or properties of the whole.
Surely it goes without saying that system parts are involved in system behaviors that affect system properties?
But what is meant by a “part” here?
A type in an abstract organisation – a human role and/or logical subsystem?
An instance in a concrete organisation – a human actor, organisation unit, or instantiation of a logical subsystem?
It is difficult to remove a structural type without having an impact on the behavior or properties of the whole.
You may however remove instances of a multiply occurring structure type with no effect on the behavior or properties of the whole.
(See the next section of this paper for further discussion of “parts”.)
Condition 3- There is a subset of parts that is sufficient in one or more environments for carrying out the defining function of the whole; each… necessary but insufficient for… this defining function.
The division of a system into parts, and the granularity of those parts, is entirely in the gift of the describer.
Ackoff’s third condition is not true of systems described in terms of coarse-grained and tightly coupled parts.
E.g. there is no functioning subset of:
· a rider and bicycle system
· a marriage between two people
· a commercial business divided into sales, delivery and accounting parts.
Yes, the first two examples are not relevant here, because Ackoff’s five conditions are not definitive of systems in general.
Which is to say, again, the five conditions characterise a large-scale managed/organised human social entity.
Condition 4- The way that each essential part of a system affects its behavior or properties depends on (the behavior or properties of) at least one other essential part of the system.
Condition 5- The effect of any subset of essential parts on the system as a whole depends on the behavior of at least one other such subset.
Again, the division of a system into parts, and the granularity of those parts, is entirely in the gift of the describer.
The fourth condition is not true of systems in which one subsystem (part) encapsulates all essential parts.
The fifth condition seems an elaboration of the fourth.
Example
Several GST compatible definitions could be drawn up of a tennis match, depending on the stakeholder viewpoint.
Here is a starter: an abstract description using a simple template.
· Identifier: Tennis Match.
· Why? Aims or purposes: to win the match; to enjoy the match.
· How? Behaviors: laws covering activities: serving, receiving, changing ends, etc.
· Who? Active structures: laws about players’ roles in behaviors (not player identities).
· What? Passive structures: laws covering objects: rackets, balls, court, etc.
· Where? Places: tennis court locations.
· When? Times: tennis match scheduling.
Again:
· The 1 abstract system description above can be realised by N concrete social entities.
· Any 1 social entity (e.g. you and me) can realise this and N other systems.
Much (even most) systems thinking discussion conflates the two ideas.
Some GST-compatible system descriptions will fit Ackoff's 5 conditions.
Some will not since all the described parts are essential and there is no functioning subset.
But why worry about the 5 conditions at all?
Ackoff was focused on the management of organised social entities.
He surely did not write the 5 conditions about systems in general; he wrote them to characterise the social entities of interest to him.
The word “part” suggests containment within a boundary, be it physical or logical.
A man can contain a machine (say, an artificial heart) both physically and logically; the heart has no aim or behavior outside of the man.
A machine (say, a motor car) can contain a man physically - but not logically; the taxi driver has aims and behaviors outside of their role as car driver.
Human activity system (HAS) is a kind or variety or subtype of Activity System.
Ackoff 1999 condition 1: a human activity system can only be regarded as a system if it is defined.
Ackoff 1971 ideas 2 & 3: a concrete human activity system realises an abstract (defined) human activity system.
The abstract system defines human roles in terms of Activities performed.
The “parts” of a concrete human activity system are not the Actors per se.
The “parts” are the instantiations of the defined roles by Actors.
That is to say, the Actors’ performances of Activities in defined roles.
What Actors do outside of their roles is not part of the defined system.
Though it may be valuable to a social entity that realises that defined system.
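A minimal sketch of that distinction, with invented roles and actors: the “parts” of the concrete system are performances of defined roles, so Jane’s singing falls outside this system even though Jane is one of its actors.

```python
# A minimal sketch of the role/actor distinction: the abstract system defines roles
# and their activities; the concrete system's "parts" are actors' performances of
# those roles, not the actors themselves. Names are invented for illustration.

role_activity_system = {                      # abstract description
    "server":   ["serve from behind the baseline"],
    "receiver": ["return the serve"],
}

actor_performances = [                        # concrete realisation
    {"actor": "Jane", "role": "server",     "activity": "serve from behind the baseline"},
    {"actor": "Raj",  "role": "receiver",   "activity": "return the serve"},
    {"actor": "Jane", "role": "choir alto", "activity": "sing"},   # outside this system
]

parts = [p for p in actor_performances
         if p["activity"] in role_activity_system.get(p["role"], [])]

print([f'{p["actor"]} as {p["role"]}' for p in parts])
# ['Jane as server', 'Raj as receiver'] - Jane's singing is not a part of this system
```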
Suppose you gather several systems, each with its own aims and behaviors, into one centrally-managed entity.
You expect those systems to work towards the aims of the whole entity.
But at the same time, they still have their own aims and behaviors.
In what sense, or to what extent, is your managed entity a single coherent system?
Suppose you employ human actors, each with their own aims and behaviors, in a centrally-managed entity.
Your business needs only some of the time and talents of each employee.
You ask your employee Jane to perform the activities expected of her given role.
Within her work hours, Jane will sometimes act outside her defined role, and sometimes contrary to it.
Outside of work hours, she may play a role in other (perhaps competing) businesses.
In what sense is Jane a part of your business?
Much of what happens in a business is not at all systematic or systemic.
Human actors act differently to roles they are supposed to play, and do much else besides.
They invent and do stuff as they choose; sometimes in accord with business goals; sometimes not.
“A group of unwilling slaves can be organised to do something that they do not want to do, but they do not constitute an organization, even though they may form a system.”
Surely there is a huge difference between not being keen to do some work and not being willing and able to do it well when asked?
Do soldiers want to go into battle?
We often baulk at starting a job, but get some enjoyment from it nevertheless.
And being appreciated by colleagues and managers has a lot to do with that.
Isn’t that the territory of social psychology, neuro-linguistic programming and the like (rather than system theory)?
Your new employee brings their own aims (ideals, purposes, objectives, goals) to your business.
Their aims may be in conflict with each other, in flux, unconscious or unrecognised.
Some of their aims may be contrary to the aims of you and your social organization.
Your employee may act to make your organization inefficient, or even to sabotage it (e.g. sharing information with a competitor).
OK, you can design security and exception handling processes to address the conflicting aims and contrary behaviors of employees.
And yes, human actors are special, they need special attention.
Again, isn’t that theory to be found in social psychology, neuro-linguistic programming and the like (rather than system theory)?
And isn’t this the job of business managers and human resources (rather than enterprise architects)?
Management scientists (e.g. Boulding, Ackoff and Beer) tried to meld management science and general system theory.
Unfortunately, attempts to merge such different approaches can obscure what each has to offer.
And much so-called systems thinking might more accurately be called “social entity thinking”.
On the ambiguity of Human Activity System (HAS)
One of Russell Ackoff’s most profound insights in 1971 was this:
“Different observers of the same phenomena may conceptualise them into different systems”.
Later, Ackoff contradicted himself by equating social entities with social systems.
In 1971, Ackoff pointed out there are abstract and concrete systems.
In other words, there are abstract human role-activity system (RAS) descriptions, which are realised by concrete human actor-performance systems (APS).
GST (as in this GST principles paper) presumes the scientific method is applicable.
There is an abstract (theoretical) system description, against which a concrete (empirical) system realisation can be tested.
E.g. there is a US constitution that describes a system of a government, against which the structures and behaviors of real-world US governments are tested.
Social system:
Abstract system description | US constitution
Concrete system realization | US government
GST differentiates abstract roles from concrete performances.
What realises a defined role (in a system) is not an actor; it is an actor’s performance of the actions expected in that role.
Social system:
Abstract system description | Roles
Concrete system realization | Performances of roles by actors
In 1999, Ackoff’s first condition was that a system’s properties must be defined.
With no descriptive role-activity system, the behavior of a social entity is undefined.
The entity becomes a social system when and in so far as it performs the activities in a role-activity system.
And the problem is?
Many systems thinkers point to an entity and call it a system.
And sadly, Ackoff ended up equating one managed/organised social entity with one social system.
He referred to a church, a corporation or a government agency as a social system.
To a general system theorist like Ashby, any of these entities can realise zero, one or many systems.
IBM is not a system; it is zero, one or many HAS.
It is as many actor-performance systems as can be judged to realise descriptive role-activity systems.
Every role-activity system is a highly selective abstraction from what might be called the chaos of real-world IBM employee behavior.
Calling something a system does not make it a system.
A group of people doing things is not a system just because people call it a “system” or an “organisation”.
The US economy, a church or IBM is not a system.
It is as many different systems as system describers can successfully describe and test.
Some of those systems may conflict with each other, or undermine each other.
Drawing the entity/system distinction
What an actor does outside their defined role in a system is not a part of that system, but it might contribute to some agreed aim of a social entity, or be part of a different system.
E.g. a person’s singing is irrelevant to their role in a tennis club but vital to their role in a choir.
A social entity is a collection of physical actors who communicate with each other. One social entity can act as several social systems – its actors can play unrelated roles in (say) a tennis club and a choir. A social system (say a tennis club, or choir) is a set of logical roles, which need actors to play them. One social organization can be realised by several social entities. All the actors who play its roles can be replaced.
On describing different levels of system composition/decomposition
Ackoff arranged aims, behaviors and systems in hierarchical structures, using different words at different levels.
It can be convenient to use different words for different levels of system concept, e.g.
Time-frame | Aims | Behaviors | Active structures | Granularity
Persistent | Business mission | | Enterprise | Whole
Long term | Goal | Value stream | Division | Composite
Short term | Objective | Process | Team | Part
Immediate | Requirement | Action | Actor | Atom
However, systems are infinitely nestable.
The level of composition or decomposition is arbitrary – a choice made in a particular situation.
It is impossible to be scientific about pinning different words to different levels of a three, four or five level decomposition.
And trying to do so can obscure the general nature of system theory.
The concepts are the same at whatever level of system composition you choose to model.
A process is an event-triggered sequence of actions that may refer to system state, include choices and produce outcomes.
A choice is a choice: whether it is made by strict or fuzzy logic, deterministically or by free will (if you consider those to be incompatible).
On change
It is vital in GST to distinguish system state change from system change.
That is to distinguish:
· system adaptation - changing state within a system generation
· system evolution - changing nature between system generations.
GST note: On change. The current state of a system is measurable in the values of described variables. Changing state means changing the variables’ values. Changing a system means changing the variable types or the rules applied to them.
Ackoff said all animated systems are organisms, which he said are defined as being autopoietic.
In biology, autopoiesis means that organisms are self-sustaining - their processes manufacture their structures from primitive edible chemicals.
But Ackoff used the term self-organising, which means something different in social sciences.
It can mean re-arranging people in a management structure.
It can mean people changing the aims, roles or rules of a system they work in.
Many systems thinkers speak of systems that continually change their aims and behaviors, and call these “complex adaptive systems”.
A general system theorist would rather call them “complex evolving entities”.
Calling a real world entity a "system” is merely a hypothesis until there is some evidence.
Call it a "system" is meaningless if
· there is no abstract description of the system
· there is nothing to suggest the concrete system matches an abstract system description
· actors can continually change the roles they play, the rules they follow or even the aims of the system
· there is no distinction between one system generation and the next.
Ackoff’s system classifications (1971, 1999 and 2003) appear plausible, yet are often misunderstood.
And there is a fundamental problem with classifying real-world entities into system types.
That is, it undermines the distinction Ackoff originally drew between an entity and the many different systems that might be idealised from observation of that entity.
As Ackoff extended GST to discuss organizations that depend on human abilities rather than any system description, he undermined the GST he started with.
Also, there is some confusion in SST between actors, the roles they are supposed to play and what they actually do.
Much of what happens in a business is not at all systematic or systemic.
Not only do human actors in a business perform activities way beyond what is described in abstract role descriptions; business system designers also rely on humans knowing how to do things, judging what to do, and inventing what to do when the need arises.
In short, Ackoff’s attempts to merge SST with GST seem doomed.
However, it is possible to reconcile the two traditions.
Read the end of the companion GST principles paper for more detail.
On choice as the distinguishing feature in Ackoff’s 2003 system classification
Ackoff defined animate actors (humans and other higher animals) as ones that choose freely between possible actions.
He said they make choices in the light of their “ideals”.
"The capability of seeking ideals may well be a characteristic
that distinguishes man from anything he can make, including computers".
Doesn’t a chess-playing computer choose between possible chess moves in the light of its ideal – to win the game?
And a humanoid robot directs its limb movements in the light of its ideal – to stay on its feet?
In any case, surely the main issue is not how actors make choices - by free will or not?
The key issue is that animate actors may have different ideals from any system they participate in.
And so, the actors have to choose between conflicting ideals when choosing their next action.
How they do that is not relevant to the fact that it has to be done.
Ackoff took an anthropomorphic view of systems, but did not see all human social entities as organizations.
He showed little interest in social entities (tennis club, pick-pocket gang) with minimal bureaucracy.
His primary interest was in organizations as in “management science”.
Mostly, his interest was in bureaucracies that administratively organise human actors - in the public or private sector.
“All organizations are social systems”
Ackoff said the reverse is not true; not all social systems are organizations.
“Most contemporary social systems are failing”
Surely he meant to say most organizations are failing?
Sometimes he seems to be presenting something akin to a political manifesto.
Does he really mean government agencies rather than commercial businesses?
Most UK government agencies surely succeed a bit and fail a bit. Is his concern US government in particular?
“An organization is a purposeful system that contains at least two purposeful elements which have a common purpose”.
Surely purposes can be ascribed to a purposeful organization - by entities inside or outside the organization.
Can an organization be both purposive and purposeful?
“An aggregation of purposeful entities does not constitute an organization unless they have at least one common purpose”
Does it count if employees and suppliers have the common purpose of maximising their income from the organization?
Or if all employees are motivated by self-respect from being employed and social interactions with their colleagues?
“Organizations display choice.”
Geoff Elliot told me that in the sociological perspective, only people can be “purposeful”.
But Ackoff said social systems (meaning social organizations) are also purposeful.
What does it mean to say a social organization makes choices or exercises free will?
Are all an organization’s purposes abstracted from some or all of its members’ purposes?
What about purposes ascribed to an organization by entities outside the organization?
Would you say an investment bank’s trading division chooses which stocks to buy and sell?
Those choices, formerly made by human actors, are now made by mechanistic computer actors.
So, in what sense does the organization display choice?
And how does it make a choice independently of choices made by its animate or mechanistic parts?
What does it mean to make a choice anyway?
To select one of several possible actions within a defined system?
What is stable in Ackoff’s social organization?
May human actors leave and others join? Presumably yes.
Are the actions stable, or might the organization perform any conceivable action?
If neither actors nor actions are stable, what is the system?
Suppose external event or state change types are stable, and the range of performable actions is limited to rational responses to those events or state changes.
E.g. Given an invoice, a customer might choose to pay in full, pay in part, or refuse to pay.
The question then becomes – how does a human actor choose between actions?
Select deterministically by following definable rules?
Determinism means that if you know an actor’s internal state and reasoning rules, you can predict the actor’s choice of action response to an event.
We don’t know whether a human is deterministic – because we don’t know the actor’s state or the rules their brain follows.
Further, animals may apply fuzzy logic to choices, surely more fuzzily than any computerised artificial intelligence.
So, a human’s choice may be unpredictable, but still deterministic in a complex way we cannot penetrate.
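A minimal sketch of that point, reusing the invoice example above with invented rules and state: the response is fully determined by hidden internal state, yet unpredictable to an observer who cannot see that state.

```python
# A minimal sketch of deterministic but opaque choice: the customer's response to
# an invoice follows fixed rules over internal state variables that an outside
# observer cannot see. Rules and values are invented for illustration.

def customer_response(invoice_amount: float, state: dict) -> str:
    """Deterministic rules applied to hidden internal state."""
    if state["disputes_invoice"]:
        return "refuse to pay"
    if state["cash_available"] >= invoice_amount:
        return "pay in full"
    return "pay in part"

hidden_state = {"cash_available": 400.0, "disputes_invoice": False}
print(customer_response(1000.0, hidden_state))   # "pay in part"
```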
Select by free will?
How an actor makes a choice makes no difference to a describer of a system’s aims, variables, rules and roles.
The system describer has only to decide what range of options and actions to include in the system description, and what to exclude.
The describer must assume a human can choose any possible option, or fail to act.
A designer has to make allowances for any “exception paths” that actors may choose to follow.
Select purposefully in the light of the actor’s internal purposes?
If an animate actor applies logical reasoning when changing its aims (goals, objectives or purposes) is that a deterministic process?
If so, does that undermine the meaning of purposeful?
All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.
If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.