Ackoff ideas from a GST perspective

Copyright 2017 Graham Berrisford.

One of about 300 papers. Last updated 18/05/2017 20:48


Russell L Ackoff (1919-2009) was an American organizational theorist, operations researcher, systems thinker and management scientist.

He was a well-known “systems thinker”, respected for a large body of work focused on human society.


This paper is not about the merits of Ackoff’s “socio-systemic view of organizations”.

It is about the nature of systems in general, and of the business systems addressed in enterprise architecture, ArchiMate and TOGAF.

It reviews how Ackoff classified systems in three papers.

·         1971 “Towards a System of Systems Concepts”.

·         1999 “Re-Creating the Corporation - A Design of Organizations for the 21st Century”.

·         2003 “On The Mismatch Between Systems And Their Models”.


It shows how Ackoff started from GST principles, but went on to stretch those principles to breaking point.

This paper is also the basis for one session in our next tutorial: System Theory Tutorial: May 2017 in London.


Before Ackoff

Ackoff’s basic ideas

Behaviors

Aims

Systems

Ackoff’s system classifications (1971, 1999, 2003)

On composing a “system” from disparate entities

Conclusions (abridged from GST principles)

Final remarks

Footnote: On “choice”


Before Ackoff

Ackoff’s work inherits from two “system” traditions.

The first, a sociological tradition, can be traced back to the 19th century.

It might be called “socio-cultural systems thinking” (SST).


The second, general system theory (GST), emerged after the second world war.

In An Introduction to Cybernetics (1956), Ashby furthered the ideas of general system theory in relation to control systems.

“Cybernetics does not ask ‘what is this thing?’ but ‘what does it do?’

[It] deals with all forms of behaviour in so far as they are regular, or determinate, or reproducible.”

This paper is a companion to our GST principles paper.

That companion paper abstracts the following principles of General System Theory from Ashby’s works.


Principle: descriptions idealise observed or envisaged realities

This is the general principle of an idealist philosopher or modern-day scientist.

We describe reality as a selective conceptualisation or idealisation of that reality.

“Any suggestion that we should study "all" the facts is unrealistic, and actually the attempt is never made.

What is necessary is that we should pick out and study the facts that are relevant to some main interest that is already given.” Ashby.


Principle: concrete systems realise abstract ones

There are two forms of system: an abstract system description (or type) is realised (or instantiated) by one or more concrete system realities.

Abstract system description: a theoretical system; a system description.

Concrete system realisation: an empirical system; a system in operation.


Principle: an open system interacts with its environment

A system is encapsulated within a wider environment.

There are usually feedback loops between a system and its environment.

To encapsulate a system means defining its input-process-output (IPO) boundary.

The inputs and outputs can be flows of information, material or energy.

A system describer defines system inputs and outputs with reference to an already-given interest or aim.


Remember, the system-environment boundary is a choice made by system describer(s).

It is meaningless to point to an entity and call it a system if its boundary and properties are obscure.

First, those in the discussion must agree on its boundary and on which I/O flows are relevant to some shared interest.

The I/O flows of interest in business system models are rarely energy flows, sometimes material flows and often information flows.


A conventional system design process starts by defining the system-environment boundary and proceeds along these lines

1.      Define Customers - entities in the environment that need outputs to meet their goals

2.      Design Outputs - that customers need from the system

3.      Design Inputs – that the system needs to produce the outputs

4.      Define Suppliers - entities in the environment that will supply the inputs

5.      Design Processes - to produce the outputs from the inputs.

6.      Define Roles - in which actors can perform the process steps

7.      Hire and/or make Actors - to play the roles

8.      Organise, deploy, motivate and manage the actors – to perform the processes.
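The outside-in order of these steps can be sketched in code. A minimal Python sketch (all names are illustrative, not from any method or standard):

```python
# Hypothetical sketch of the conventional outside-in design order above.
from dataclasses import dataclass, field

@dataclass
class SystemDesign:
    customers: list = field(default_factory=list)   # 1. entities that need the outputs
    outputs: list = field(default_factory=list)     # 2. what customers need from the system
    inputs: list = field(default_factory=list)      # 3. what producing the outputs requires
    suppliers: list = field(default_factory=list)   # 4. entities that supply the inputs
    processes: list = field(default_factory=list)   # 5. input -> output transformations
    roles: list = field(default_factory=list)       # 6. process steps grouped for actors
    actors: list = field(default_factory=list)      # 7/8. hired or made to play the roles

# Filling in the design in the prescribed order:
design = SystemDesign()
design.customers.append("diner")
design.outputs.append("meal")
design.inputs.append("ingredients")
design.suppliers.append("grocer")
design.processes.append("cook ingredients into meal")
design.roles.append("chef")
design.actors.append("Jane (plays chef)")
```

The point of the ordering is that each step is defined with reference to the step before it, starting from the environment rather than from the actors.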


Ackoff was much concerned with the organisation and motivations of actors in a social entity.


Further GST principles can be abstracted from Ashby’s work including:

·         A closed system is sealed from its environment

·         The primacy of behavior

·         Continuous behavior can be modelled as driven by discrete events

·         Systems can be composed and decomposed

·         System change differs from system state change

·         An entity is a system whenever and wherever it realises an abstract system


To discuss general system theory, we need a domain-specific language.

We need words for the core concepts used in describing systems.

For example, this table distinguishes three core concepts.

Aims (e.g. win the world cup): target outcomes that give an entity a reason or logic to perform and choose between actions.

Behaviors (e.g. compete in world cup matches): processes that run over time with intermediate outcomes and a final aim or ideal.

Active structures (e.g. players in a national football team): nodes, related in a hierarchy or network, that perform activities in behaviors.


Ackoff’s 1971 paper set out a domain-specific language containing 32 terms.

He started with terms and concepts that fit Ashby’s, then went further.

Ackoff’s basic ideas

Most of Ackoff’s first 11 ideas are compatible with the GST principles and concepts above (though he could have been clearer).


Ackoff idea 1- System: a set of interrelated elements

All the elements must be related directly or indirectly, else there would be two or more systems.

This definition embraces both passive structures (e.g. tables) and activity systems.

The concern of GST is activity systems, in which structural elements interact in orderly behaviors.


Ackoff idea 2- Abstract system: a system in which the elements are concepts.

It may be purely conceptual, or describe an imagined or envisaged reality, or describe an observed reality.

Abstract system descriptions: the Dewey Decimal System; “solar system”; the laws of tennis; defined roles (e.g. orchestral parts); the score of a symphony.


Abstract descriptions do take concrete forms; they are found in mental, documented and physical models.

What matters here is not the form but the relationship of the description (model, conceptualisation, idealisation) to a reality that is observed or envisaged.


Ackoff idea 3- Concrete system: a system that has two or more objects.

A concrete system is a realization, in physical matter and/or energy, of an abstract system description.

Abstract system description → Concrete system realisation:

·         The Dewey Decimal System → books sorted on library shelves

·         “Solar system” → planets in orbits

·         Laws of tennis → a tennis match

·         Defined roles (e.g. orchestral parts) → actors (e.g. orchestra members)

·         The score of a symphony → a performance of that symphony


The concern of GST is activity systems that operate in the real world, displaying behavior of some kind.


Which comes first? Abstract system description or concrete system realization?

A designed concrete system (like a motor car) cannot run in reality until after it has been described, however abstractly.

A natural concrete entity (like the solar system) runs in reality before it is recognised and described as a system.
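The description/realisation pairing parallels the type/instance distinction in software. A minimal sketch, with illustrative names only:

```python
class LawsOfTennis:
    """An abstract system description (a type).
    It describes no particular match; it constrains all matches."""
    def __init__(self, player_a, player_b):
        # Each instantiation is a concrete system realisation.
        self.players = (player_a, player_b)
        self.sets_won = {player_a: 0, player_b: 0}

# One description, many concrete realisations:
final = LawsOfTennis("Alice", "Bob")
exhibition = LawsOfTennis("Carol", "Dan")
```

One description can have many realisations; and, as with the solar system, a natural entity can run in reality long before anyone writes the "class".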


People find this hard to understand and accept, but here goes…

It is meaningless to say a named entity is a system until you are sure it realises a system description.

Until there is an abstract system description an entity cannot fairly be presumed to be a system.


GST note: The universe is an ever changing entity in which stuff happens.

A concrete system is

·         an island of repeatable behaviors carved out of that universe.

·         a set of describable entities that interact in describable behaviors.

·         an entity we can test as doing what an abstract system description says.

With no system description, there is no testable system, just stuff happening.


Ackoff idea 4: System state: the values of the system’s properties at a particular time.

A concrete system’s property values realise property types or variables in its abstract system description.


System properties:

·         Abstract description of system state: property types (air temperature, displayed colour).

·         Concrete realization of system state: property values (air temperature is 80 degrees, displayed colour is red).


The current state of a concrete system realises (gives particular values to) property types or variables in a system description.

Other qualities of that entity are not a part of that system, but might count as part of another system.

E.g. the temperature of the earth’s atmosphere is irrelevant to its role in the solar system, but vital to its role in the biosphere.
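A hedged sketch of the point: a description declares property types (variables); a concrete state gives each a value at a particular time, and other qualities of the entity fall outside the system. Names are illustrative:

```python
# The abstract description declares the property types (variables) of interest.
property_types = {"air_temperature": float, "displayed_colour": str}

def is_valid_state(state, types=property_types):
    """A state realises the description if every declared variable has a
    value of the declared type. Extra qualities are simply out of scope."""
    return all(isinstance(state.get(name), t) for name, t in types.items())

# A concrete state at a particular time; "owner_mood" is a quality of the
# entity that is not part of this system (it might belong to another system).
state_at_noon = {"air_temperature": 80.0, "displayed_colour": "red",
                 "owner_mood": "cheerful"}
assert is_valid_state(state_at_noon)
```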


Ackoff idea 5: System environment: those elements and their properties (outside the system) that can change the state of the system, or be changed by the system.

“The elements that form the environment… may become conceptualised as systems when they become the focus of attention.” Ackoff

Any brain or business can be seen as a control system connected in a feedback loop with its environment.

It receives information about the state of entities and activities in its environment.

It records that state information in memory.

It outputs information to inform and direct motors, actors and entities as need be.


Ackoff idea 6: System environment state: the values of the environment’s properties at a particular time.

A concrete environment’s property values realise property types or variables in an abstract description of that environment.

The remainder of the real-world does not count as part of that environment, but it might count as part of another environment.

“Different observers of the same phenomena [actors and actions] may conceptualise them into different systems and environments.” Ackoff


Ackoff idea 7: A closed system: one that has no environment.

An open system interacts with entities and events in a wider environment.

A closed system does not interact with its environment.

“Such conceptualizations are of relatively restricted use”. Ackoff


Aside: Every “system dynamics” model is a closed system.

It is a model of populations (stocks) that grow and shrink in response to continuous inter-stock event streams (flows).

The whole system is closed, so all events are internal events.

However, each stock can be seen as a subsystem, to which every event is an external event.
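A minimal stock-and-flow sketch of such a closed model (stock names and the rate are illustrative):

```python
def step(stock_a, stock_b, rate=0.1):
    """One discrete tick approximating a continuous flow: a fixed fraction
    of stock_a moves to stock_b. The whole model is closed - nothing enters
    or leaves - so every flow event is internal to the whole, but external
    to each stock seen as a subsystem."""
    flow = rate * stock_a
    return stock_a - flow, stock_b + flow

a, b = 100.0, 0.0
for _ in range(10):
    a, b = step(a, b)
assert abs((a + b) - 100.0) < 1e-9   # closed: the total is conserved
```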


Ackoff idea 8: System/environment event: a change to the system property values.

Ackoff was concerned with how state changes inside a system are related to state changes in its environment.

He does not appear to distinguish events from state changes.

It is generally presumed that a state change is a change to the value(s) of a system’s variable(s) in response to a discrete event.

And that one event can cause different (optional) state changes, depending on the current state of the system.


GST note: On discrete event-driven behavior.

External events cross the boundary from the environment into the system.

Within a system, internal events pass between subsystems.

In response to an event, a system refers to current system state.

It then “chooses” what actions to take, including actions that change its own state.

The choice depends on the values of input event variables and internal state variables.
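This event/state/choice cycle can be sketched as a simple transition function (the rules are illustrative):

```python
def handle(state, event):
    """In response to an event, consult the current state, 'choose' an
    action, and possibly change state. Returns (action, new_state)."""
    if event == "doorbell" and state == "awake":
        return "answer door", "awake"
    if event == "doorbell" and state == "asleep":
        return "ignore", "asleep"          # same event, different outcome
    if event == "alarm":
        return "wake up", "awake"          # action that changes own state
    return "do nothing", state

action, state = handle("asleep", "alarm")
```

The same event ("doorbell") produces different actions depending on the current state, which is the point being made above.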


Ackoff idea 9: Static (one state) system: a system to which no events occur.

Ackoff’s definition here can be improved, since his example, a table, does experience events.

A table is laid for dinner, painted, stripped and polished, and has its wonky leg replaced.

In ArchiMate terms, his static system is a passive structure rather than an active structure.

A passive structure can experience events, can be acted in or on, but is inanimate and cannot act itself.


Aside: You may see an abstract system description as a static definition of dynamic concrete system realizations.

The system description does experience events and state changes; it is written, communicated, revised and realised.

There are odd cases where a system description is directly realised – notably, a computer program and DNA.


Ackoff idea 10: Dynamic (multi state) system: a system to which events occur.

System theorists often describe systems in terms of state changes that result from events detected.

Brains and businesses can be seen as control systems that remember the current state of entities and processes they monitor, inform and direct.

They update their memories in response to events that reveal state changes in those entities and processes.


Ackoff idea 11: Homeostatic system: a static system whose elements and environment are dynamic.

“A house that maintains a constant temperature… is homeostatic”. Ackoff

Hmm… Ackoff’s notion of the system here is questionable.

The heating system is not a static system.

The property of interest in its environment (air temperature) is maintained between upper and lower bounds, but is not static.

Looking at the house as nothing but a container of air, it is neither the system of interest nor its environment.

Looking at the house as a system, you’d assume its state can change in various ways.
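For what it is worth, the heating system's behavior can be sketched as bang-bang control with hysteresis; the controlled variable stays between bounds but is never static (thresholds are illustrative):

```python
def thermostat(temperature, heater_on, low=18.0, high=21.0):
    """Switch the heater on below `low`, off above `high`; otherwise keep
    its current state. The air temperature oscillates between the bounds,
    so neither the system nor its environment is static."""
    if temperature < low:
        return True
    if temperature > high:
        return False
    return heater_on

heater = False
heater = thermostat(17.0, heater)   # cold: switches on
heater = thermostat(19.5, heater)   # within bounds: stays on
heater = thermostat(22.0, heater)   # warm: switches off
```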


In short, Ackoff started from basic points a general system theorist would recognise.

He discussed homeostatic systems - like the early general system theorists did.

He distinguished abstract systems from concrete systems, and structures from behaviors.


From here on, Ackoff moves away from GST and from Ashby.

He constructs a somewhat elaborate domain-specific language for more sociological systems thinking.

E.g. he defines a hierarchy of behavior concepts built on top of the simplest of mechanical “reactions”.

And a hierarchy of motivation concepts built on top of “outcomes”.

And a four-way classification of systems.

Principle: The primacy of behavior

GST is concerned with activity systems that operate in the real world, displaying behavior of some kind.

“A set of elements or parts that is coherently organized and interconnected in a pattern or structure that produces a characteristic set of behaviors." Meadows

The systems can be characterised as having parts that interact in “regular, repeated or repeatable behaviors” W Ross Ashby.


Ackoff defined behaviors in terms of acts, reactions and responses.

He distinguished these behaviors with reference to state changes inside the system and its environment.


Ackoff idea 12: System reaction: a system event that may be triggered directly by another event.

E.g. “A light going on when the switch is turned.” Ackoff

The reaction is simplistic: there is no reference to memory and no choice of outcome.

By contrast, consider turning on a heating system.

The heating system may examine the temperature in several rooms then “choose” which heaters to switch on.

Ackoff might call this a response.


Ackoff idea 13: System response: a behavior that goes beyond the naive reaction at 12.

E.g. “A person’s turning on a light when it gets dark is a response to darkness”. Ackoff

This response might be seen as deterministic.

You apply a function that involves dividing an internal state variable (acuity needed) by an external environment variable (light level).

If the function is higher than N, then you switch on the light.

But of course animals apply complex fuzzy logic to such variables rather than simple arithmetic.
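That deterministic reading can be sketched directly (the numbers and the threshold N are illustrative):

```python
def switch_light_on(acuity_needed, light_level, threshold=1.0):
    """True when the task demands more acuity than the ambient light
    allows: divide an internal state variable (acuity needed) by an
    environment variable (light level) and compare with a threshold N.
    Real animals apply fuzzy judgements, not this simple arithmetic."""
    return (acuity_needed / light_level) > threshold

assert switch_light_on(acuity_needed=8.0, light_level=2.0)       # reading at dusk
assert not switch_light_on(acuity_needed=1.0, light_level=5.0)   # walking at noon
```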


Ackoff idea 14: System act: a self-determined, autonomous, behavior.

Ackoff says a system can be triggered to act by an internal state change, not triggered by an event.

However, all internal state changes are traceable to external events, time events and internal events.

An internal event happens when the value of an internal state variable crosses a significant threshold.

Both human and computer actors may be triggered to act by an internal event/state change of this kind.

Again, human actors apply fuzzy logic in processing such threshold-crossing changes rather than simple arithmetic.


Ackoff idea 15: System behavior: system events which… initiate other events

Ackoff says behaviors are state changes whose consequences are of interest.

His behaviors include acts, reactions and responses whose antecedents are of interest.

Surely the consequences of acts, reactions and responses are outcomes?

So doesn’t every behavior have antecedents and consequences of interest?



Ackoff arranged behaviors, aims and systems in hierarchical structures, using different words at different levels.

Bear in mind he wanted to classify behaviors in a way that supports his hierarchical classification of system types (later).


I am not confident that Ackoff’s division of behaviors into types is coherent and/or that my interpretation is correct.

Ackoff’s behaviors (12 reaction, 13 response, 14 act) can be compared against four criteria: whether a choice is made with reference to system state; whether learning is involved; whether there is a system event/state change; and whether there is an environment event/state change. This comparison presumes all his behaviors are triggered by events and produce outcomes, usually with reference to the system’s current state/memory.


Remember: there is recursive composition/decomposition of systems in space, time or logic.

An event that is external to one system is internal to a wider system, and vice-versa.


Ackoff arranged behaviors, aims and systems in hierarchical structures, using different words at different levels.

Bear in mind Ackoff wanted to classify aims in a way that supports his hierarchical classification of system types (later).

Ackoff presents a hierarchy of motivation concepts built on top of outcomes.

·         An outcome can be valued and preferred as a goal.

·         Goals can be ordered with respect to objectives.

·         Objectives can be ordered with respect to ideals.

·         Ideals are persistent aims that appear to be unobtainable in principle or in practice.


The “outcome” concept doesn’t get a definition of its own.

Presumably, outcomes are state changes - inside the system and/or in its environment.


Ackoff idea 22: The relative value of an outcome: a value (between 0 and 1) compared with other outcomes in a set. The highest value outcome is the preferred outcome.

Ackoff idea 23: The goal of a purposeful system: a preferred outcome within a time period.

Ackoff idea 24: The objective of a purposeful system: a preferred outcome that cannot be obtained in a time period.

Ackoff idea 25: An ideal: an objective that cannot be obtained in a time period, but can be approached.

Ackoff idea 26: An ideal-seeking system: a purposeful system that seeks goals leading towards its ideal.
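Ideas 22 and 23 can be sketched as selecting the highest-valued outcome from a set (the outcomes and values are illustrative):

```python
def preferred_outcome(relative_values):
    """relative_values: dict mapping each outcome in the set to a relative
    value between 0 and 1. The highest-valued outcome is the preferred
    outcome - Ackoff's goal of a purposeful system."""
    assert all(0.0 <= v <= 1.0 for v in relative_values.values())
    return max(relative_values, key=relative_values.get)

goal = preferred_outcome({"win the match": 0.9,
                          "draw": 0.4,
                          "avoid injury": 0.7})
assert goal == "win the match"
```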


It seems purposes sit at a higher level than goals, but the relationship of purposes to objectives and ideals is not clear.

Ackoff doesn’t discuss: What happens if you expand the system-environment boundary or the time-frame?

Can an ideal become an objective when the time frame is expanded?

Presumably, an objective may become a goal when the time frame is expanded.

And a goal of a small subsystem may be seen as one of many outcomes in a larger system.


Ackoff defined both aims and behaviors (above) in a way that seems designed to support his particular hierarchical classification of system types.

He classified systems by dividing behaviors and outcomes between “determined” and “chosen” (as he defined those terms).

System type → behavior/outcome:

·         16 State maintaining: determined (reactive)

·         17 Goal-seeking: chosen (responsive)

·         19 Purposive: variable but determined

·         20 Purposeful: variable and chosen


Ackoff idea 16: State maintaining system: the most naïve of reactive systems (cf. 12 reactions).

When an event of a given type occurs, a state maintaining system reacts to produce a fixed outcome.

“Fixed outcome” implies there is no choice as to the outcome of the event.

Ackoff’s discussion of this system type is dominated by homeostatic systems.

So, this category appears to be about maintaining a system state variable - within defined bounds – which is an aim in itself.

What if maintaining one system state variable involves making choices related to the values of other state variables?

I guess that is a goal-seeking system as at 17 below.


Ackoff idea 17: Goal-seeking system: a system that chooses its response to events (cf. 13 responses).

This category of system seems to be defined by its ability to retain and use its memory to make choices.

Every information system remembers facts in system state and uses that to inform choices between optional behaviors.

It refers to its memory when choosing between behaviors that produce particular outcomes, with an end goal “in mind”.

Ackoff refers here to a system that does more – it learns – it adapts its behavior by conditioning according to experience.

E.g. a rat in a maze increases its efficiency in meeting a goal by choosing not to repeat a behavior that failed earlier.

It is unclear whether Ackoff wants to distinguish between a system using its memory and learning by conditioning.

And perhaps that distinction is too subtle to be drawn.
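Whatever Ackoff intended, the rat-in-a-maze example can be sketched as memory-informed choice (illustrative only):

```python
def choose_path(paths, failed):
    """Goal-seeking with memory: prefer any path not yet known to fail;
    fall back to the first path if all are known to fail."""
    for path in paths:
        if path not in failed:
            return path
    return paths[0]

failed = set()                          # the system's memory
paths = ["left", "middle", "right"]
attempt = choose_path(paths, failed)    # tries "left"
failed.add(attempt)                     # left turned out to be a dead end
attempt = choose_path(paths, failed)    # now tries "middle"
```

Whether remembering failures like this counts as "learning by conditioning" or merely "using memory" is exactly the distinction the text says may be too subtle to draw.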


Ackoff idea 18: Process: a sequence of behavior that constitutes a system and has a goal-producing function.

This appears to be a one-time system; it runs from trigger event to an outcome that meets its goal.

Ackoff says each step in the process (be it an act, reaction or response) brings the actor closer to the goal.

It isn’t clear whether his process can have control logic (strict or fuzzy) or lead from one event to a variety of outcomes/goals.


Ackoff idea 19: Multi-goal seeking system: a system that seeks different goals in different states.

A deterministic system, by referring to its memory, can choose between reactions/responses to an event, and produce different outcomes.

If you don’t know the internal rules or state of the system, you cannot predict the outcome of an event.

And if those rules include applying fuzzy logic, this introduces a further level of unpredictability.


Ackoff idea 20: Purposive system: a multi-goal seeking system where goals have in common the achievement of a common purpose.

The common purpose could be to survive, win the game or make a profit. (Is that purpose also an objective?)

Purposive systems and sub-animate actors may be given goals but can’t change them.


Up to this point, Ackoff appears to presume a system behaves in a deterministic manner.

But from this point on it becomes harder to reconcile Ackoff’s ideas with more science-based GST.


Ackoff idea 21: Purposeful system: can produce the same outcome in different ways and different outcomes in the same and different states.

Ackoff believed that purposefulness demonstrated free will.

The next two quotes are from a 1972 book Ackoff wrote with Emery about purposeful systems.

“A purposeful system is one that can change its goals in constant environmental conditions; it selects goals as well as the means by which to pursue them. It displays will.” Ackoff

Animate animate actors (being self-aware) might change the aims, variables, rules and roles of any system they participate in.

"members are also purposeful individuals who intentionally and collectively formulate objectives and are parts of larger purposeful systems.” Ackoff


Three more points are included here without analysis.

Ackoff idea 27: The functions of a system: production of the outcomes that define its goals and objectives.

Ackoff idea 28: The efficiency of a system: defined in mathematical terms beyond the analysis here.

Ackoff idea 29: An adaptive system: reacts or responds to reduced efficiency by changing system or environment state.


Ackoff goes on to define learning in a very particular way.

Ackoff idea 30: “To learn is to increase one’s efficiency in the pursuit of a goal under unchanging conditions.” Ackoff

That is a very limited definition, since learning could be extended, for example, to include learning that a goal is not worth pursuing.


Ackoff deprecated the mechanistic, biological and animalistic views of systems taken by other system theorists.

It is a little surprising therefore that (in 1971) he described inter-system relationships in terms of controls.


Ackoff idea 31: Control: “An element or system controls another element or system if

its behavior is either necessary or sufficient for subsequent behavior of the element or the system itself and

the subsequent behavior is necessary or sufficient for the attainment of one or more of its goals.” Ackoff


Ackoff idea 32: Organization: “An organization is a purposeful system that contains at least two purposeful elements

which have a common purpose relative to which the system has a functional division of labor;

its functionally distinct subsets can respond to each other’s behavior through observation or communication;

and at least one subset has a system-control function”. Ackoff

Ackoff’s system classifications (1971, 1999, 2003)

It is well-nigh axiomatic that:

·         Systems can be hierarchically nested: one system can be a part or subsystem of another.

·         An event that is external to a smaller system is internal to a wider system (and vice-versa).

·         The emergent properties of a small system are ordinary properties of any wider system it is a part of.


Some draw a correspondence between system decomposition and system types.

This section reviews how Ackoff’s system classification evolved.

1971: Choice

At first, Ackoff differentiated system types according to whether behaviors and outcomes are “determined” or “chosen” – as he defined those terms.

System type → behavior/outcome:

·         16 State maintaining: determined (reactive)

·         17 Goal-seeking: chosen (responsive)

·         19 Purposive: variable but determined

·         20 Purposeful: variable and chosen

1999: Purposefulness

Ackoff described systems another way in table 2.1 of “Re-Creating the Corporation - A Design of Organizations for the 21st Century”.

Now the key differentiator is the “purposefulness” of the parts and the whole.

Systems type (parts / whole, with examples):

·         Deterministic: not purposeful parts / not purposeful whole (motor cars, fans, clocks, computers, plants)

·         Animated: not purposeful parts / purposeful whole

·         Social: purposeful parts / purposeful whole (corporations, universities, societies)

·         Ecological: purposeful parts / not purposeful whole (an environment that serves its parts, e.g. the atmosphere)


Ackoff defined five conditions for a system, built on four key concepts: parts, properties, behaviors and functions.

“A system is a whole consisting of two or more parts that satisfies the following five conditions:

1.      The whole has one or more defining properties or functions.

2.      Each part in the set can affect the behavior or properties of the whole.

3.      There is a subset of parts that is sufficient in one or more environments for carrying out the defining function of the whole; each… necessary but insufficient for… this defining function.

4.      The way that each essential part of a system affects its behavior or properties depends on (the behavior or properties of) at least one other essential part of the system.

5.      The effect of any subset of essential parts on the system as a whole depends on the behavior of at least one other such subset.”


It appears “parts” are purposeful actors or subsystems; “properties” are structural variables; “behaviors” are processes.

And “functions” are ideals, objectives or purposes and/or the behaviors that achieve them.

Note that point 1 differs from GST in allowing a system to have no defining characteristic other than a function or ideal.


Ackoff defined “purposeful” more neatly than in 1971.

"An Entity is purposeful if it can select both means and ends in two or more environments."


He then contrasted goal-seeking entities.

“Although the ability to make choices is necessary for purposefulness, it is not sufficient.

An entity that can behave differently (select different means) but produce only one outcome in any one of a set of different environments is goal seeking, not purposeful.

For example, automatic pilots on ships and airplanes and thermostats for heating systems have a desired outcome programmed into them; they do not choose them.”

In contrast, with purposeful people and social entities: “people can pursue different outcomes in the same and in different environments and therefore are obviously purposeful; so are certain types of social groups.”


Ackoff said all animated systems are organisms, which he reported as defined as autopoietic (meaning self-sustaining).

But he used the term as though it meant self-organising, which is something very different from manufacturing body parts from edible chemicals.

2003: Choice

Ackoff subtly revised his system classification scheme in the joint paper “On The Mismatch Between Systems And Their Models”, as shown below.

Now the key differentiator is the ability of the parts and the whole to exercise “choice” – as Ackoff defined it.

Type of System Model (parts / whole, with examples and caveats):

·         Deterministic / mechanistic: no choice parts / no choice whole (clock, tree, bee). Not just machines! This includes plants and “lower” animals.

·         Animated: no choice parts / choice whole. Not all animals! This excludes “lower” animals; it includes only “higher” animals that Ackoff considers to exercise free will.

·         Social: choice parts / choice whole (church, corporation). Not all social groups! This excludes “lower” animal groups and informal groups; it is primarily if not only formal organisations.

·         Ecological: choice parts / no choice whole. Not an external environment! An environment that serves its parts, e.g. an island.


Read this paper for a detailed critique of this 2003 system classification.

Notice that “purposeful” was replaced by “choice”, though he had previously allowed that machines make choices.

Clearly, the meaning Ackoff attached to “choice” is vital to his system classification.

For more on choice, read the footnote.

On composing a “system” from disparate entities

Challenges arise when you employ one system inside another, each with its own aims, variables, rules and roles.

Is an entity rightly called a system if its parts act with different aims/ideals from each other and from the whole entity?


Much of what happens in a business is not at all systematic or systemic.

There is some confusion in SST between actors, the roles they are supposed to play and what they actually do.

Employees invent and do stuff as they choose; sometimes in accord with business goals; sometimes not.

An employee (Jane) is not truly a part of a business system.

·         Jane may participate in several – perhaps competing - business systems.

·         A concrete business system needs only a part of Jane’s time and talents.

·         An abstract business system description defines only some actions Jane is supposed to perform in a given role.

·         Jane may act both beyond actions defined in a role and contrary to them.


“A group of unwilling slaves can be organised to do something that they do not want to do, but they do not constitute an organization, even though they may form a system.”

Surely there is a huge difference between not being keen to do some work and not being willing and able to do it well when asked?

Do soldiers want to go into battle?

We often baulk at starting a job, but get some enjoyment from it nevertheless.

And being appreciated by colleagues and managers has a lot to do with that.

Isn’t that the territory of social psychology, neuro-linguistic programming and the like (rather than system theory)?


Your new employee brings their own aims (ideals, purposes, objectives, goals) to your business.

Their aims may be in conflict with each other, in flux, unconscious or unrecognised.

Some of their aims may be contrary to the aims of you and your social organization.

Your employee may act to make your organization inefficient, or even to sabotage it (e.g. sharing information with a competitor).


OK, you can design security and exception handling processes to address conflicting aims and contrary behaviors.

And yes, human actors are special, they need special attention in any theory or practice of business management.

Again, isn’t that theory to be found in social psychology, neuro-linguistic programming and the like (rather than system theory)?

And isn’t this the job of business managers and human resources (rather than enterprise architects)?


Ackoff views human actors as parts of an organization; but what does that mean?

A man can contain a machine (an artificial heart) both physically and logically.

The heart has no aims, variables, rules and roles outside of the man.

A machine (motor car) can contain a man physically - but not logically.

The man has aims, variables, rules and roles way beyond those of driving a car.


Every human has different aims, variables, rules and roles from those of the organization that employs them.

So much of a human’s IPO is irrelevant to the IPO of any organization they participate in.

In IPO terms, a social organization encapsulates only the roles played by actors – rather than the actors themselves.

And where actors act outside given roles, they act outside that system (but perhaps inside another).

Conclusions (abridged from GST principles)

There are two “system” traditions.

The sociological tradition can be traced back to the 19th century.

It might be called “sociological systems thinking” (SST).

General system theory (GST) emerged after the second world war.


Management scientists (e.g. Boulding, Ackoff and Beer) have tried to merge SST and GST.

Unfortunately, attempts to merge different approaches can obscure what each has to offer.

Much of SST might more accurately be called “Social Entity Thinking”.


How to describe different levels of system composition/decomposition?

Ackoff arranged behaviors, aims and systems in hierarchical structures, using different words at different levels.

It can be convenient to use different words for different levels of system concept, e.g.





[Table: e.g. active structures – a business mission (long term); a value stream (short term).]











However, the level of composition or decomposition is arbitrary – a choice made in a particular situation.

It is impossible to be scientific about pinning different words to different levels of a three-, four- or five-level decomposition.

And trying to do so can obscure the general nature of system theory.


The concepts are the same at whatever level of system composition you choose to model.

A process is an event-triggered sequence of actions that may refer to system state, include choices and produce outcomes.

A choice is a choice: whether it is made by strict or fuzzy logic, deterministically or by free will (if you consider those to be incompatible).
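The process definition above can be sketched in code. This is a minimal illustration with hypothetical names (the invoice example is taken from the footnote on choice later in this paper): an event-triggered sequence of actions that refers to system state, includes a choice, and produces an outcome.

```python
def handle_invoice(state, event):
    """A process: triggered by an event, refers to system state,
    includes a choice, and produces an outcome."""
    amount = event["amount"]
    balance = state["balance"]
    # The choice: rule-based here, but GST does not care whether a
    # choice is made by strict or fuzzy logic, or by free will.
    if balance >= amount:
        state["balance"] = balance - amount
        return "paid in full"
    elif balance > 0:
        state["balance"] = 0
        return "paid in part"
    else:
        return "refused to pay"

state = {"balance": 50}
print(handle_invoice(state, {"amount": 30}))  # paid in full
print(handle_invoice(state, {"amount": 30}))  # paid in part
```

Note that the same event type can produce different outcomes depending on the current state – which is why a process description must name the state variables it refers to.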

It is vital in GST to distinguish system state change from system change.

That is to distinguish:

·         system adaptation - changing state within a system generation

·         system evolution - changing nature between system generations.


GST note: On change

The current state of a system is measurable in the values of described variables.

Changing state means changing the variables’ values.

Changing a system means changing the variable types or the rules applied to them.
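The distinction in the note above can be made concrete. In this minimal sketch (all names hypothetical), a system description is a set of variable types plus rules, and the system's state is the variables' current values; adaptation changes the values, evolution changes the description.

```python
# Abstract system description: variable types and the rules applied to them.
system_description = {
    "variables": {"temperature": float},
    "rules": {"max_temperature": 25.0},
}
# Current system state: the values of the described variables.
system_state = {"temperature": 21.5}

# Changing state (adaptation, within one system generation):
# new values for the same variables.
system_state["temperature"] = 23.0

# Changing the system (evolution, between system generations):
# new variable types or new rules.
system_description["variables"]["humidity"] = float
system_description["rules"]["max_temperature"] = 28.0
```

The point of the sketch is that the two kinds of change happen to two different things: state change touches only `system_state`; system change rewrites `system_description`, yielding a new system generation.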


In sociology, the term self-organising means something very different from autopoietic in biology.

It means people changing the aims, variables or rules of the business they work in.

Ackoff and other systems thinkers speak of systems that continually change their aims and behaviors.

This is sometimes called a “complex adaptive system”.

A general system theorist would rather call it a “complex evolving entity”, for the reason discussed below.

Ackoff’s contradiction

One of Russell Ackoff’s most profound insights in 1971 was this:

“Different observers of the same phenomena [usually, people doing stuff] may conceptualise them into different systems” or indeed, into no system at all.


Later, Ackoff contradicted himself by presuming a church, a corporation or a government agency is a “system” regardless of any conceptualisation.

The problem with much SST is this presumption: that all actors employed by an employer (along with everything they touch and do) necessarily form one “system”.


Calling something a system does not make it a system

A group of people doing things is not a system just because people call it a “system” or an “organisation”.

The US economy, a church or IBM is not a system.

It is as many different systems as system describers can successfully describe and test.

Some of those systems may conflict with each other, or undermine each other.

If there is no system description, then to assert IBM is a system conveys no meaning (beyond saying it exists as an entity).

GST presumes the scientific method is applicable

There is an abstract (theoretical) system description, against which a concrete (empirical) system realisation can be tested.

E.g. there is a US constitution that describes a system of a government, against which the structures and behaviors of real-world US governments are tested.

Social organization:

·         Abstract system description – US constitution

·         Concrete system realization – US government


GST differentiates abstract roles from concrete performances

What realises a defined role (in a system) is not an actor; it is an actor’s performance of the actions expected in that role.

Social organization:

·         Abstract system description – defined roles

·         Concrete system realization – assignments of actors to roles


What an actor does outside their defined system role is not a part of that system, but might be part of another defined system.

E.g. a person’s singing is irrelevant to their role in a tennis club but vital to their role in a choir.


A social entity is a collection of physical actors who communicate with each other.

One social entity can act as several social organizations – its actors can play unrelated roles in (say) a tennis club and a choir.

A social organization (say a tennis club, or choir) is a set of logical roles, which need actors to play them.

One social organization can be realised by several social entities. All the actors who play its roles can be replaced.
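The separation above – organizations as role sets, entities as actor sets – can be sketched in a few lines. All names here are hypothetical illustrations (the tennis club and choir are the paper's own examples): roles belong to the organization; actors are assigned to roles and can be replaced without changing the organization.

```python
# Two social organizations: each is a set of logical roles.
tennis_club_roles = {"treasurer", "coach"}
choir_roles = {"soprano", "conductor"}

# One social entity (a set of actors) realises both organizations:
# the same actors play unrelated roles in each.
assignments = {
    ("tennis club", "treasurer"): "Jane",
    ("tennis club", "coach"): "Ravi",
    ("choir", "soprano"): "Jane",
    ("choir", "conductor"): "Ravi",
}

# The organization survives replacement of any (or every) actor:
# the role is unchanged, only the assignment changes.
assignments[("choir", "soprano")] = "Mei"
```

Nothing about Jane beyond her assigned roles appears in either organization – which is the sense in which an organization encapsulates roles, not actors.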

Final remarks

Scientism is the practice of asserting things to be true in a way that sounds like a scientific statement.

E.g. “Organic foods are better for you”.

Science is more; it is the practice of testing assertions in an attempt to confirm or deny them.


Sadly, much SST is “scientism”.

People call an entity or organisation a “system”.

But this is an empty assertion, because there is no system description against which reality can be tested.

Or there is a description, but testing the reality is impossible.


The word "system" is a meaningless or useless label if

·         there is no agreed description of the system

·         actors can change the roles they play and rules they follow

·         actors may even change the aims of the system

·         the entity called “a system” may change continually

·         there is no distinction between one system generation and the next.


Ackoff’s system classifications (1971, 1999 and 2003) appear plausible, yet are often misunderstood.

And there is a fundamental problem with classifying real-world entities into system types.

That is, it undermines the distinction Ackoff originally drew between an entity and the many different systems that might be idealised from observation of that entity.


As Ackoff extended GST to discuss organizations that depend on human abilities rather than any system description, he undermined the GST he started with.

Also, there is some confusion in SST between actors, the roles they are supposed to play and what they actually do.

Much of what happens in a business is not at all systematic or systemic.

Not only do human actors in a business perform activities way beyond what is described in abstract role descriptions; business system designers also rely on humans knowing how to do things, judging what to do, and inventing what to do when the need arises.


In short, Ackoff’s attempts to merge SST with GST seem doomed.

However, it is possible to reconcile the two traditions.

Read the end of this GST principles paper for more detail.

Footnote: On “choice”

On choice as the distinguishing feature in Ackoff’s 2003 system classification

Ackoff defined animate actors (humans and other higher animals) as ones that choose freely between possible actions.

He said they make choices in the light of their “ideals”.


"The capability of seeking ideals may well be a characteristic that distinguishes man from anything he can make, including computers".

Doesn’t a chess-playing computer choose between possible chess moves in the light of its ideal – to win the game?

And a humanoid robot directs its limb movements in the light of its ideal – to stay on its feet?


In any case, surely the main issue is not how actors make choices - by free will or not?

The key issue is that animate actors may have different ideals from any system they participate in.

And so, the actors have to choose between conflicting ideals when choosing their next action.

How they do that is not relevant to the fact that it has to be done.


Ackoff took an anthropomorphic view of systems, but did not see all human social entities as organizations.

He showed little interest in social entities (tennis club, pick-pocket gang) with minimal bureaucracy.

His primary interest was in organizations as in “management science”.

Mostly, his interest was in bureaucracies that administratively organise human actors - in the public or private sector.


“All organizations are social systems”

Ackoff said the reverse is not true; not all social systems are organizations.


“Most contemporary social systems are failing”

Surely he meant to say most organizations are failing?

Sometimes he seems to be presenting something akin to a political manifesto.

Does he really mean government agencies rather than commercial businesses?

Most UK government agencies surely succeed a bit and fail a bit. Is his concern US government in particular?


“An organization is a purposeful system that contains at least two purposeful elements which have a common purpose”.

Surely purposes can be ascribed to a purposeful organization - by entities inside or outside the organization.

Can an organization be both purposive and purposeful?


“An aggregation of purposeful entities does not constitute an organization unless they have at least one common purpose”

Does it count if employees and suppliers have the common purpose of maximising their income from the organization?

Or all employees are motivated by self-respect from being employed and social interactions with their colleagues?


“Organizations display choice.”

Geoff Elliot told me that in the sociological perspective, only people can be “purposeful”.

But Ackoff said social systems (meaning social organizations) are also purposeful.


What does it mean to say a social organization makes choices or exercises free will?

Are all an organization’s purposes abstracted from some or all of its members’ purposes?

What about purposes ascribed to an organization by entities outside the organization?


Would you say an investment bank’s trading division chooses which stocks to buy and sell?

Those choices, formerly made by human actors, are now made by mechanistic computer actors.

So, in what sense does the organization display choice?

And how does it make a choice independently of choices made by its animate or mechanistic parts?

What does it mean to make a choice anyway?


To select one of several possible actions within a defined system?

What is stable in Ackoff’s social organization?

May human actors leave, and may new ones join? Presumably yes.

Are the actions stable, or might the organization perform any conceivable action?

If neither actors nor actions are stable, what is the system?


Suppose external event or state change types are stable, and the range of performable actions is limited to rational responses to those events or state changes.

E.g. Given an invoice, a customer might choose to pay in full, pay in part, or refuse to pay.

The question then becomes – how does a human actor choose between actions?


Select deterministically by following definable rules?

Determinism means that if you know an actor’s internal state and reasoning rules, you can predict the actor’s choice of action in response to an event.

We don’t know whether a human is deterministic – because we don’t know the actor’s state or the rules their brain follows.

Further, animals may apply fuzzy logic to choices - surely more fuzzily than any computerised artificial intelligence.

So, a human’s choice may be unpredictable, but still deterministic in a complex way we cannot penetrate.
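The definition of determinism above is easy to demonstrate in miniature. This sketch uses hypothetical names: a "deterministic actor" whose choice is fully fixed by its internal state and reasoning rules, even though an observer who cannot see that state would find its choices unpredictable.

```python
def choose(internal_state, event):
    """Deterministic reasoning rule: the same internal state and the
    same event always yield the same choice of action."""
    if event == "invoice" and internal_state["funds"] == "ample":
        return "pay in full"
    if event == "invoice" and internal_state["funds"] == "strapped":
        return "pay in part"
    return "refuse to pay"

# Knowing state and rules, the choice is predictable:
assert choose({"funds": "ample"}, "invoice") == "pay in full"
assert choose({"funds": "strapped"}, "invoice") == "pay in part"
```

An observer who cannot inspect `internal_state` or read the rule sees only varying responses to the same event – unpredictable, yet still deterministic, which is exactly the situation we are in with human actors.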


Select by free will?

How an actor makes a choice makes no difference to a describer of a system’s aims, variables, rules and roles.

The system describer has only to decide what range of options and actions to include in the system description, and what to exclude.

The describer must assume a human can choose any possible option, or fail to act.

A designer has to make allowances for any “exception paths” that actors may choose to follow.


Select purposefully in the light of the actor’s internal purposes?

If an animate actor applies logical reasoning when changing its aims (goals, objectives or purposes) is that a deterministic process?

If so, does that undermine the meaning of purposeful?



All free-to-read materials at are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to in whichever social media you use.