The adaptable enterprise

Copyright 2020 Graham Berrisford. A chapter in “the book”. Last updated 17/04/2021 19:21


This chapter addresses what it takes for an enterprise to be adaptable, including several interpretations of self-organization, and a way of approaching “wicked problems”.


Reading online? If your screen is wide, shrink its width for readability.


System change terms and concepts

Making an enterprise more adaptable

More about activity system mutation

Views of self-organization

Meta systems thinking

On tackling wicked problems

Conclusions and remarks


System change terms and concepts

Obviously, an enterprise may fail because it cannot adapt to changing circumstances, inside or outside the business. This first section unscrambles some terms and concepts used in discussing the adaptation, evolution or mutation of entities and systems.


Two kinds of system

[To speak of] “the system” [is] ambiguous. “The system” may refer to… the thing itself; or to the variables with which some given observer is concerned. (Ashby 1956, 6/14)


This book starts from the position that it is misguided to refer to a government or a university (for example) as a system. Rather, it is a social entity that employs and participates in several more or less clearly-defined human activity systems.


A social entity is a collection of actors who may both realize one or more activity systems and act outside of any defined activity system. We must distinguish the thing itself (a social entity in which actors may act in ad hoc and innovative ways) from any particular activity system in which those actors are bound by roles and rules.


At one extreme (think of an ant colony) the range of performable actions is limited. Expanding the range of possible actions gives actors a higher degree of freedom, and increases the system’s complexity, but it remains describable as an activity system. At the other extreme, where every action is self-determined and there is no observable regularity, repetition, or pattern, there is no recognizable activity system in the social entity.


A real-world business sits between those extremes. It is a social entity that gives its employees some degree of freedom, but also expects them to play roles in regular business activity systems.


Two kinds of system change

“'adaptation' is commonly used in two senses which refer to different processes.” (Ashby 1956, 5/7)


Ashby wrote primarily of adaptation of the kind whereby an entity (animal, mechanical or social) responds to events by maintaining itself in a homeostatic state. System theorists long ago recognized that mutation or evolution is a different kind of adaptation. To distil the distinction, an activity system can respond to events by:

·       rule-bound state change, whether to maintain homeostasis or to advance its state progressively.

·       rule-changing mutation, producing either the next generation of the system, or a new system altogether.
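To make the distinction concrete, here is a minimal Python sketch (all names and the thermostat example are hypothetical, not from the book) in which an activity system responds to events by rule-bound state change, while a mutation creates a new system generation with changed rules:

```python
class ActivitySystem:
    """An activity system: state variables updated according to fixed rules."""
    def __init__(self, rules, state):
        self.rules = rules       # variable name -> function(state, event) -> new value
        self.state = dict(state)

    def handle(self, event):
        """Rule-bound state change: variable values change, the rules do not."""
        for var, rule in self.rules.items():
            self.state[var] = rule(self.state, event)

def mutate(system, new_rules):
    """Rule-changing mutation: a new system generation with different rules."""
    return ActivitySystem({**system.rules, **new_rules}, system.state)

# A thermostat-like system maintains homeostasis by rule-bound state change...
thermostat = ActivitySystem(
    rules={"heater_on": lambda s, e: e["temperature"] < 18},
    state={"heater_on": False},
)
thermostat.handle({"temperature": 15})   # state changes; the rule is unchanged
assert thermostat.state["heater_on"] is True

# ...whereas changing the target temperature is a rule-changing mutation:
generation2 = mutate(thermostat, {"heater_on": lambda s, e: e["temperature"] < 21})
generation2.handle({"temperature": 19})  # under the old rule the heater would stay off
assert generation2.state["heater_on"] is True
```

The point of the sketch is only that the first response updates variable values under fixed rules, while the second replaces a rule and so defines a new generation of the system.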


“The distinction is fundamental and must on no account be slighted.” (Ashby 1956, 4/1)

"Change the rules from those of football to those of basketball, and you’ve got, as they say, a whole new ball game.” Meadows

"Social systems are not just ‘complex adaptive systems’ bound by the fixed rules of interaction of their parts. Rather, they are ‘complex evolving systems’ that can change the rules of their development as they evolve over time." This book Jackson 2003


In discussing social entities, Jackson preferred evolving to adaptive. To draw the distinctions needed to move systems thinking forward, we might do better to distinguish evolving social entities from adaptive activity systems.


A classification of system change types

“A system generally goes on being itself… even with complete substitutions of its actors as long as its interactions and purposes remain intact.” Meadows


Natural systems evolve before they are described. Consider solar systems, weather systems, plants, animals, subsystems of those such as the organs of a body, and regular and repeated behaviors observable in a social entity. Designed systems are described (in mind or documentation) before they are made. Consider a steam engine, marriage ceremony, game of poker, word processor or billing system.


State changes occur when state variable values change, as when a temperature goes up or down, or a stock grows or shrinks. System mutations occur when the roles, rules or variables of a system change. If we add or remove a stock or flow in a causal loop diagram, or we change a rule in a game of poker, then we define a new system generation or a different system altogether.


Continuous changes occur when the values of state variables, or the generations of a system, move along a continuous sliding scale, as when a planet moves through space, or a child matures into an adult. Discrete changes occur when there is a step change in the value of a state variable, or in the generation/version of a system.


Given the dichotomies above, system change can be classified as continuous or discrete, state change or mutation, and natural/accidental or designed/planned. Representing change as a three-dimensional phenomenon helps us to think about what it means to model change and design for it.





                    Continuous                         Discrete

State change        Natural: the growth of a crystal   Natural: asleep to awake, or day to night
                    Designed: analogue light dimmer    Designed: light on to light off

Mutation            Natural: child to adult            Natural: parent to child
                    Designed: X?                       Designed: version 1 to version 2


X? How to design an activity system that mutates continually? If there is no stable pattern, no regularity or repetition, then there is no describable system.


However, continuous change can be simulated by dividing changes into discrete steps frequent and small enough to appear continuous. And it is possible to design a higher “meta system” responsible for the design of systems, of which EA might be seen as an example, as discussed later in the section on self-organization.
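As a sketch of that idea (a hypothetical dimmer example, not from the book), a continuous fade can be approximated by discrete steps small enough to appear continuous:

```python
def fade(start, end, steps):
    """Yield evenly spaced discrete levels approximating a continuous change."""
    for i in range(steps + 1):
        yield start + (end - start) * i / steps

# 101 small discrete steps from 0% to 100% brightness look continuous to the eye
levels = list(fade(0.0, 1.0, 100))
assert levels[0] == 0.0 and levels[-1] == 1.0
assert all(b - a <= 0.011 for a, b in zip(levels, levels[1:]))  # each step is tiny
```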


Other ambiguities

This list indicates some other ways you might find words used differently in the two schools of thought – activity systems thinking (AST) and social entity thinking (SET).

·       Complexity. In AST: the complexity of actors and activities that is measurable with respect to an abstract system model. In SET: the un-measurable and infinite complexity of a physical entity.

·       Adaptation. In AST: changing state – updating state variable values according to rules. In SET: mutating – changing rules, to act in a different way.

·       System. In AST: a set of inter-related activities performed by actors playing roles according to rules. In SET: a set of inter-related actors who determine their own actions.

·       Emergence. In AST: properties arising from subsystems interacting in a larger system. In SET: properties not seen before, new or surprising.

This chapter focuses on adaptation of the system mutation kind.


Events and antifragility

A UK Prime Minister was asked what the most troubling problem of his Prime Ministership was. Famously, Harold Macmillan replied ‘Events, my dear boy, events’.


Obviously, an enterprise must respond to perturbations, shocks, attacks or failures. Here, all such life-threatening events are called Events (with a capital E).


Antifragility is a property of a system (or set of properties) that enables it to survive and thrive after Events. It is sometimes defined as robustness and/or resilience.


Robustness and resilience are so variously defined in the literature that no attempt is made to distinguish them here. Suffice to say that to survive Events a system may:


·       change state, but remain the same system (think, produce antibodies to an infection)

·       call for back up (think, fail over to a back-up system)

·       mutate, evolving into a new system generation (think, the evolution of a virus).


Agile development is a process by which a system readily mutates to handle Events.


An agile system is a system that can handle Events without mutating.

Making an enterprise more adaptable

An enterprise may become more adaptable by:


·       managing risks related to Events

·       enabling actors to innovate when Events happen

·       changing activity systems quickly and easily

·       designing activity systems up front to anticipate Events

·       tackling “wicked problems”.


This section discusses the first four above. The fifth is addressed at the end of this chapter.


Managing risks

A social entity can prepare for Events by taking out insurance or managing risks in the normal way.


·       Envisage the future: predict the most likely and serious Events

·       Prepare to resist or recover from bad Events

·       Prepare to respond to desirable new Events

·       Test those preparations if possible

·       Plan an escape route or alternative future if need be.


This chapter is not about the generalities of insurance and risk management in social entities. The focus is on ideas more directly related to systems thinking terms and concepts.


Enabling actors

One way for an enterprise to prepare for Events is to put responsibility in the hands of (well-trained and well-motivated) human actors, rather than computer actors or machines.


Management scientists often encourage seeing employees as autonomous agents who can learn from experience and adapt what they do. They promote "organizational learning". And obviously, there are many times and places where a business benefits from, or even depends on, its employees insightfully adapting what is done and how it is done. The question here is not whether this is a good thing; it is how to position it in terms of social entity thinking and activity systems thinking.


An adaptive social entity is one in which actors are free and able to:


1.     join and leave (be hired or fired)

2.     determine the activities they perform, and even the aims they seek

3.     learn from experience and act in ad hoc and innovative ways

4.     change the rules of activity systems they play roles in


Re 2 and 3. There is nothing more agile than a social entity in which human actors/agents use their intelligence and insight to decide what activities to perform, and even what aims to pursue. However, incrementally, businesses are replacing the evolved complexity of human adaptability with the designed complexity of human and computer activity systems, which is where EA comes in.


Re. 4. How a business makes discrete changes to the activity systems it employs is discussed later in the section on self-organization.


Changing activity systems quickly and easily

Another way for an enterprise to prepare for Events is to improve its ability to make incremental changes to the activity systems it employs. This has been a focus of software development methodology for several decades now. Among the best-known principles for agile software system design are:


·        You Ain't Gonna Need it (YAGNI)

·        Keep it Simple (KISS).


Software systems are infinitely malleable. Still, a price must be paid for taking a short-term incremental approach to system design, since applying the KISS and YAGNI principles inevitably produces designs that must be modified later, when the cost of database and software refactoring must be paid.


Some agile development principles may be applied in higher-level business system design. However, people and physical resources are not infinitely malleable. The need to hire, fire or retrain people, and buy or redesign physical machines and resources, can hinder the ability to roll out system changes quickly and easily.


In software system development, the ring-fencing of scope (by time boxing, cash boxing, functionality or persistent data) helps a two-pizza team to be more productive. But this can result in overlaps and disintegrities between systems – and so be sub-optimal at the enterprise level. To counter that, EA tends to favor more up-front analysis and design.


Designing systems up front to anticipate Events

To design a system in anticipation of future changes means making it more complex than is needed initially. From the start, the system has to include a wider range of possible choices and activities. So, contrary to the agile development principles above, the principles are:


·        We Are Gonna Need it (WAGNI)

·        Complexify for change or configurability (CFC).


What can EA do to prepare a business to handle future Events? Designing for bad Events, for robustness or resilience, requires redundant components, back-up versions, and fail-over and restore processes.


Design to handle desirable new Events or requirements implies using the "complexify" principle above, since we must design a system that is more complex and resource-intensive than it needs to be right now.


E.g. The architects of a logistics business might design over-sized and resourced "hubs" with highly mechanized storage and retrieval of items, even though the current throughput of packages to be delivered does not require it.


E.g. Software architects might design a broad and rich database structure that will be stable, or perhaps configurable. The idea is that the database need not be restructured during the following process of incremental and iterative agile software development.


Note that design up front faces many challenges.

·       How far ahead in time are we looking?

·       What proportion of future Events can we predict? 

·       How often or likely will each kind of Event occur?

·       Can we afford the redundancy, additional complexity and resource consumption that design up front requires?

·       And noting that a robust or resilient capability may be lost if not exercised now and then, will we be able to do those exercises?


One way or another, design for antifragility requires time and budget since a) agile development requires time and budget for refactoring after Events, and b) design up front requires time and budget before Events. In practice, this puts pressure on business managers to sponsor the minimum design effort they can get away with.

More about activity system mutation

Every weather system, every plant and animal, every business activity system (as described in soft systems methodology) and every causal loop diagram (as drawn in system dynamics) is a machine in the broadest sense. Although business activity systems allow human actors the freedom to choose between defined activities, still, the range of actions is limited to those available in the machine.


How do machines - natural and designed - evolve in discrete steps?


Mutation by natural selection

A species mutates, not continually, but via discrete birth and death events. Biology does not design for the future. Nature does not, as one system thinker opined, "design processes that foster adaptability and robustness for a range of scenarios that could come to pass."  While inter-generational genetic mutations are accidental, the process of natural selection produces “adaptive” changes.


·       "Genetic mutations arise by chance. They may or may not equip the organism with better means for surviving in its environment. But if a gene variant improves adaptation to the environment (for example, by allowing an organism to make better use of an available nutrient, or to escape predators more effectively—such as through stronger legs or disguising coloration), the organisms carrying that gene are more likely to survive and reproduce than those without it.

·       Over time, their descendants will tend to increase, changing the average characteristics of the population. Although the genetic variation on which natural selection works is based on random or chance elements, natural selection itself produces "adaptive" change—the very opposite of chance."


In biology, the adaptability of a species does not lie in the adaptability of an organism. It lies in the “higher” process of evolution by natural selection. This kind of evolution is wasteful in the sense that it discards almost every new feature it creates. Nature kills off "designs" that don’t work well enough to be reproduced. It replaces them by whatever new "designs" turn out to better fit today's environment.


Given an environment with limited resources, each generation must die to make space for the next. Thus, old entities are replaced by new ones.


Not only individuals, but also whole species, are replaced. Most (99.9%) of the species that evolved are now extinct. The analogy in the business world would be the most brutal of capitalist systems: one in which every business fails, to be replaced by start-ups, and only a small number of those start-ups' random innovations survive for long in a changing world.


Mutation by design

EA is about designed activity systems that are rolled out, changed and replaced in discrete steps. There are motivations, aims, goals or objectives for change. There are designers who work to invent or change the systems. There are designs or plans for systems to be built and deployed. There are baseline-to-target migration projects.


Incremental or transformational mutation?

“Instead of obsessing over spreadsheets, he said, executives should walk factory floors or interact with more customers. Innovation often doesn’t come through one breakthrough idea but a relentless focus on continuous improvement, he said.” Elon Musk


This table characterizes some contrasts that often appear in system thinking discussion.


Management style contrasts

Individuals and interactions     Following processes

Network structures               Hierarchical structures

Bottom up                        Top down

Peer-to-peer relationships       Client-server relationships

Self-organization of actions     Direction and coordination of actions


Many social entity thinkers promote the styles on the left. These styles tend to hinder making an enterprise-wide transformational change. (Though paradoxically, to adopt those styles, some promote making a radical transformation to how an organization works.)


How often and how big should discrete changes be? Is it better to make incremental or transformational changes?


The case for making a transformational change is best made when there is an existential threat to survival. It can work, but it requires clear leadership from the center, and a good deal of top-down command and control. It does not emerge naturally from self-organization.


Let me express some doubts about the notion that radical transformations are generally more effective than incremental development.


·       One system thinker - apparently deprecating the continual improvement motto of Lean manufacturing - is quoted as saying “the electric light did not come from the continuous improvement of candles”. However, electric light was not a singular innovation or disruptive transformation. It took many decades of continual incremental development and migration, nearly the whole of the 19th century, for electric light to replace the candle.

·       In this video Pia Mancini concludes that to push boundaries we must cast aside old models, and make a transformational change. Yet biological evolution has pushed boundaries and produced amazing sustainable solutions by incremental change.

·       What to conclude from McKinsey's report that 70% of business transformations fail?  Some may argue it was because people resisted change. Others may argue the transformation was ill-conceived or impractical in the first place.

·       When Karl Marx referred to Darwin in promoting a social revolution in Russia, he misrepresented the nature of biological evolution. Many social revolutions have led to disaster or the opposite of what was intended.


A challenge for system designers today is the pace of change in the system’s biological, social or technological environment. That does not imply we do better to make radical large-scale changes. Arguably it implies the opposite: we need to be flexible, and make more small-scale changes, more quickly.


EA doesn't have to be about wholesale transformation. It can be about stimulating, prioritizing, coordinating and optimizing smaller-scale incremental innovations. This is a principle of the manufacturing revolution (Kaizen), and was favored by Elon Musk in the quote above.


Taking a strategic view of change

Obviously, we should match our approach to the situation. Is everything OK, is radical transformation necessary, or do we see discrete problems and opportunities to be explored?


Is everything OK?

Suppose our business services/products appeal to customers, and we're in profit. Then, to embark on redesigning and transforming our whole business would be needlessly costly and risky. Better to prioritize things needing attention and fix or improve them incrementally.


Are there discrete problems?

Although Ackoff is often quoted as saying "Improving a part does not necessarily improve the whole", in practice, improving a part on its own is a reasonable way to improve the performance of the whole. Attending to the most costly part, and removing the largest bottleneck (one at a time) are recommended practices for improving a system. Such incremental development is a feature of biological evolution and agile system development.


Are there discrete opportunities to be explored?

Again, continuous improvement is the name of the game. EA is about planning discrete step changes. If the steps are frequent and small enough they can appear continuous. The only time to put incremental fixing of problems and making of improvements on hold is just before and during large-scale transformation.


Is radical transformation necessary?

Suppose our services/products are not selling, and we're losing money. Then we should consider substantially revising or largely replacing services/products. And then redesign and transform our enterprise to provide those different services/products. We cannot cling too hard to the idea that old businesses should be transformed to meet new demands.


The obstacle to transformative evolution is usually what some call "sunk costs". That is, the investment in people, processes, technologies, buildings and other resources that are hard to discard or change.  The history of civilization tells us that old businesses are replaced by fitter competitors and/or start ups. Sooner or later, most businesses are taken over by new management or replaced by other businesses.


Digital activity system mutation

The remarkable thing is not the difficulty we have designing complex software systems; it is that we succeed at all. Google is said to have 2 billion lines of code. It was not completely designed up front. It grew incrementally, complexifying what started out as a relatively simple system. In biology as in software, each generation of a system replaces the previous one.


The more activity systems a business digitizes, the more data is stored in digital memories, the more systems are integrated by digital messages (the more a business does what technology vendors, consultants and enterprise architects urge it to do), the larger and more complex the business application estate will be.


The vision of "simplifying" the application estate runs counter to the flow of history, the pressure to integrate systems (using ESB, ETL or RPA), the pressure to adopt new technologies, the pressure to complete agile development projects, not to mention mergers and acquisitions.


Nobody should be under the illusion that much legacy can readily be eliminated without costs and downsides. Moreover, a primary simplification technique (data store consolidation) is the very opposite of the current fashion (to make things more complex by dividing monoliths into microservices - for debatable reasons).


Views of self-organization

“The use of the phrase [self-organization] tends to perpetuate a fundamentally confused and inconsistent way of looking at the subject”… “No machine can be self-organizing in this sense.” Ashby 1962


The term self-organization has been used with several meanings, including

·       the emergence of order from chaos (as paper clips connect in a chain when shaken up)

·       homeostatic adaptation (as a body adapts to changes in temperature)

·       self-assembly (as of a crystal in a liquid)

·       self-improvement (as by insightful learning from experience)

·       mutation to survive changes in a wider environment.


Ashby and Maturana, separately, suggested the use of the term “self-organization” tends to undermine the concept of a system. However, both allowed that a system can evolve or mutate in coordination with its environment. Ashby spoke of system mutation as


“a change of its way of behaving… occurs at the whim of the experimenter or some other outside factor.”


The idea of systems being described and re-organized from the outside appears in many domains of knowledge.


In mathematics

Gödel's two incompleteness theorems (1931) demonstrate the limitations of every consistent formal axiomatic system capable of modelling basic arithmetic. The first states that no such system is capable of proving all truths about the arithmetic of natural numbers. The second shows that the system cannot demonstrate its own consistency. However, an observer, standing outside the system, can assess its truth or consistency.

In biology

Darwin’s evolution of species can be seen as overcoming the limits to the adaptability of an organism. No organism is capable of changing its own DNA. However, the higher system of evolution and the lower system of an organism are coupled whenever two organisms succeed in mating, and replacing one generation of those organisms by the next.


In sociology

The notion of the cooperative can be seen as overcoming the limitations of social entities that compete for resources (as in “the tragedy of the commons”). Typically, a committee or governing body determines the rules of the wider collective. The members of the committee may be drawn from the collective’s members. It designs changes to the roles of actors, and directs actors in the social entities to follow them. It also has some power to ensure compliance to the rules.


To a greater or lesser extent, members may be permitted to negotiate how they interact on a bi-partisan basis, as in the Morning Star example later.


In cybernetics

Ashby’s treatise on self-organization can be seen as overcoming the limitations of basic cybernetics. He stated that no machine is capable of changing itself. However, another (let us call it “higher”) machine can do that.


Ashby’s idea is that a higher machine can monitor the state and rules of a lower machine, and when some condition is recognized, can change those variables or rules. The two machines are coupled by a feedback loop in one system.


Note that neither machine re-organizes itself; the lower system does not self-organize, and the higher system does not self-organize. The whole composite (of higher and lower) is a larger and more complex system, but it does not self-organize either.


What happens might better be called “rule-changing mutation”. It turns out that this approach to self-organization sits well alongside the notion of a sociological cooperative. But first, a brief aside on second order cybernetics.


In second order cybernetics

Krippendorff wrote:

“Although second-order cybernetics (Foerster et al. 1974) was not known at this time, Ashby included himself as experimenter or designer of systems he was investigating.”


Second order cybernetics is not a later version of classical cybernetics; it is different. Authors who used the term in the 1970s include Heinz von Foerster, Gregory Bateson and Margaret Mead. However, they didn’t use the term with exactly the same meaning.


Second order cybernetics is usually said to be about systems that include the system describer in the system, which enables the system to be self-organizing and creatively self-improving. Remember Boulding’s distinction between roles and actors? Second order cybernetics conflates them, and so undermines the concept of an activity system in which actors are defined by their roles.



EA is a higher system that is coupled to the primary activity systems of a business, and plans changes to them.

Meta systems thinking


Rule-changing mutation

Ashby proposed that a system (call it S) cannot reorganize itself, but can be reorganized by a higher system or entity (call it M). M can observe and change the behavior of S, and when circumstances demand it, M can change the roles and rules of S, thus creating a new generation of S.


To change an entity that realizes S, M must:

1.     take as input the abstract system (S version N) realized by the entity

2.     transform that input into a new abstract system (S version N+1)

3.     trigger the entity to realize the new abstract system.
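Those three steps can be sketched in a few lines of Python (the names and the shop example are hypothetical, not from the book):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AbstractSystem:
    version: int    # generation number: S version N
    rules: dict     # role name -> rule description

@dataclass
class Entity:
    realized: AbstractSystem

    def realize(self, system: AbstractSystem) -> None:
        self.realized = system

def meta_system_change(entity: Entity, transform) -> AbstractSystem:
    current = entity.realized                 # 1. take S version N as input
    successor = AbstractSystem(
        current.version + 1,
        transform(current.rules))             # 2. transform it into S version N+1
    entity.realize(successor)                 # 3. trigger the entity to realize it
    return successor

shop = Entity(AbstractSystem(1, {"cashier": "take cash only"}))
meta_system_change(shop, lambda rules: {**rules, "cashier": "take cash or card"})
assert shop.realized.version == 2
```

The entity carries on realizing S version N until M triggers it to realize version N+1; neither S nor the entity changes its own rules.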


Meta systems thinking

“Meta systems thinking” adds one more idea to Ashby-style rule-changing mutation. The idea is that one actor may play two different roles: a role in the operation of a regular activity system (S), and another role in a higher system or entity (M) that observes and changes S.


E.g. One person can play a role as a tennis player in tennis matches, and another role as a law maker in the Lawn Tennis Association.



A diagram here shows: the LTA <create and use> the rules of tennis; the rules of tennis <represent> tennis matches; the LTA <observe and envisage> tennis matches; and tennis matches <are realized by> tennis players.


The idea that one human actor can play a role in systems at different levels helps us to reconcile activity system theory with “self-organization”, and gives us an alternative to second order cybernetics.


It does however presume that a system evolves by inter-generational steps. And in the case of human activity systems, this implies some kind of change control.


Applying meta systems thinking

Meta systems thinking reconciles activity system theory with self-organization. And it goes some way to reconcile second order and classical cybernetics. A few points to bear in mind:


·       Every activity system (ecological, socio-technological, whatever) we define is a "machine" in the Ashby sense.

·       It is logically impossible to define a fully self-defining system, because the moment it starts running it may depart from whatever was defined (e.g. peers reorganize themselves into a hierarchy).

·       We can however define a meta system that enables actors who play roles in a system to negotiate changes to those roles (as in the “Morning Star” example).

·       The system must be changed in discrete, generational steps, since without change control, the system’s activities would become incoherent.

·       Changes to a meta system are distinct from changes to a system that it defines; the former organizes the latter; neither organizes itself.


It turns out that meta system thinking can be applied in a variety of domains, to homeostatic machines, to biology and to sociology.

Applying meta system thinking to a homeostat

Ashby built a homeostat to illustrate inter-generational reorganization. A “higher” machine detects when environment variables move outside the range safe for the lower machine to function. The higher machine:

1.     takes as input the rules applied by the homeostat to its variables

2.     changes those rules at random

3.     triggers the homeostat to realise the new rules.


What if the higher-level system detects this doesn’t improve matters? Then it can change the rules again, until the lower system either dies or works better.
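That retry loop can be sketched as a toy homeostat in Python. The gain/disturbance model and all parameters here are hypothetical illustrations, not Ashby's actual device:

```python
import random

SAFE = (-1.0, 1.0)   # the range within which the essential variable must stay

def lower_machine(gain, disturbance):
    """The lower machine's fixed rule: respond to a disturbance with a given gain."""
    return gain * disturbance

def higher_machine(disturbance, max_tries=100):
    """Change the lower machine's rule at random until its output is safe again."""
    random.seed(42)                       # deterministic for the example
    for _ in range(max_tries):
        gain = random.uniform(-2.0, 2.0)  # step 2: change the rules at random
        if SAFE[0] <= lower_machine(gain, disturbance) <= SAFE[1]:
            return gain                   # the new rule keeps the variable safe
    return None                           # the lower system "dies"

gain = higher_machine(disturbance=3.0)
assert gain is not None and SAFE[0] <= gain * 3.0 <= SAFE[1]
```

Neither machine organizes itself: the lower machine only applies its current rule, and the higher machine only replaces that rule.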


What if the lower-level system is not an individual, but a species or population of actors or agents? If the higher-level system is like biological evolution, it will change the rules of each individual differently. And then leave it to the environment to favor individuals that work better. Or else, if the higher-level system is like a human government, then it can change the rules for all. And then monitor the effectiveness of those rule changes.

Applying meta system thinking in biology

“[Consider] the process of evolution through natural selection. The function-rule (survival of the fittest) is fixed”. (Ashby’s Design for a Brain 1/9)


Darwin’s evolution of species can be seen as overcoming the limits to the adaptability of an organism. No organism is capable of changing its own DNA. However, the higher system of evolution can do that. The two systems are coupled in that evolution is triggered whenever two organisms succeed in mating, and so replace one generation of those organisms by the next.


The rules of organic living are encoded in an organism's DNA. The rules of the organisms in a species are changed from one generation to the next by the fertilisation of an egg. The process of sexual reproduction embodies the “survival of the fittest” rule.

1.     male and female individuals mate

2.     their DNA mixes to form new DNA (think of it as an abstract system description)

3.     the new abstract system is realized by a new individual

4.     the environment favors individuals that make best use of the available resources.
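The four steps above can be sketched as a simple generational loop. Everything here is a toy illustration: DNA is reduced to a single number, and the fitness function (favoring a hypothetical resource optimum) is invented for the example.

```python
import random

rng = random.Random(1)

def fitness(dna, optimum=0.7):
    """The environment favors individuals whose trait is closest
    to a hypothetical resource optimum (higher is fitter)."""
    return -abs(dna - optimum)

def next_generation(population):
    """One inter-generational step: mate, mix DNA, realize new
    individuals, and let the environment select the fittest."""
    offspring = []
    for _ in range(len(population)):
        mother, father = rng.sample(population, 2)         # 1. mate
        dna = (mother + father) / 2 + rng.gauss(0, 0.05)   # 2. mix DNA, with variation
        offspring.append(dna)                              # 3. realize a new individual
    # 4. the environment favors the fittest half
    return sorted(population + offspring, key=fitness)[-len(population):]

pop = [rng.uniform(0, 1) for _ in range(20)]
for _ in range(30):
    pop = next_generation(pop)
```

No individual changes its own DNA; only the inter-generational loop does, which mirrors the coupling of organism and evolution described above.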

Applying meta system thinking in sociology

Can we stretch ideas about mechanical, biological and psychological machines to the level of sociology? In the theory of evolution by natural selection, can a social entity be treated as an organism? Might natural selection favor cooperation and oppose competition?


Thinkers who addressed this include:

·        Lynn Margulis – the evolution of cells, organisms and societies

·        Boehm – the evolution of hunter-gatherer groups

·        Elinor Ostrom – the formation of cooperatives.


The notion of the cooperative can be seen as overcoming the limitations of social entities that compete for resources (as in “the tragedy of the commons”). Typically, a committee or governing body determines the rules of the wider collective. It designs changes to the roles of actors, and directs actors in the social entities to follow them. It also has some power to ensure compliance with the rules.


Of course, people are not automatons who inexorably and helplessly play their roles. They are intelligent and creative; they can change the rules of an activity system in which they play a role. As free agents, they can

·       follow the rules of the system.

·       ignore or break the rules – which may be recognized in the system as an “exception”.

·       propose changing the rules of the system.


Generally speaking, an actor can act in the lower system (S), playing a role in its regular operations, and the same actor can act in a higher system (M) as an observer of the lower system (S).


How can M change a system from “bad” to “good”? Whereas a robotic M may iteratively make random changes to a system, favoring ones that lead an entity to behave better (in some pre-defined way), a human M can observe the system, understand it, and invent changes that are likely to make it better.

Applying meta system thinking to EA

The idea of meta system thinking readily applies to business operations. EA is the higher system or entity that monitors and changes business activity systems in discrete steps, under change control. The two levels are coupled in a feedback loop.


Meta system:    Enterprise architecture

Feedback loop:  ↑ baseline roles and rules   |   target roles and rules ↓

System:         Business activity system

EA monitors the state of a business activity system, and when some condition (say excessive cost or poor quality of service) is recognized, replaces one generation of the system by the next. The higher and lower systems are coupled in one enterprise, but neither re-organizes itself. (And to change EA, we’d need another system or entity, say business management, to monitor the success or failure of EA.)
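That feedback loop might be sketched as follows. All class names, metrics, and thresholds are hypothetical; the point is only that the meta system (EA) monitors state variables and, when a condition is breached, issues the next generation of roles and rules.

```python
class BusinessActivitySystem:
    def __init__(self, roles_and_rules):
        self.roles_and_rules = roles_and_rules   # baseline description
        self.generation = 1

    def measure(self):
        """Report the state variables EA monitors (stubbed here)."""
        return {"unit_cost": 120, "quality": 0.78}

class EnterpriseArchitecture:
    COST_LIMIT = 100       # hypothetical thresholds
    QUALITY_FLOOR = 0.9

    def review(self, system):
        """Feedback loop: compare measurements against thresholds;
        if breached, issue the next generation of roles and rules."""
        m = system.measure()
        if m["unit_cost"] > self.COST_LIMIT or m["quality"] < self.QUALITY_FLOOR:
            system.roles_and_rules += " (revised)"   # target replaces baseline
            system.generation += 1
        return system.generation

ea = EnterpriseArchitecture()
bas = BusinessActivitySystem("order-handling roles and rules")
ea.review(bas)   # cost 120 > 100, so a new generation is issued
```

Note that the system does not revise itself: the change is made by the higher-level reviewer, in a discrete step, just as the text describes.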


At the level of software design, an agile system development method is a meta system. “Stand-up meetings” are times when developers monitor and modify the software development system in which they work. Akin to biological evolution, inter-generational changes to the software system are small; and designers and testers eliminate changes that don't work. Unlike biological evolution, changed systems are fed with all the electricity they need, which can lead to wasteful use of resources.


“Let’s fire all the managers”

To design a fully self-organizing activity system is impossible, because the moment it starts running its autonomous actors (agents) may change whatever was designed. What we describe as autonomous actors might immediately reorganize themselves into a hierarchy, assigning responsibility for command and control to one of their number!


However, within a given organization structure, any two organization units or individual human actors may negotiate how they interact on a bi-partisan basis. E.g. two business units, one making cans, another canning fruit, agree how many cans will be delivered and when. On a regular basis (say bi-annually) they sit down to agree an interface specification, defining and quantifying the services that one supplies to another. This is exactly the self-organizing principle of a company called Morning Star.


“Every year, the 23 business units negotiate customer-supplier agreements [interface specifications] with one another. And each employee negotiates a Colleague Letter of Understanding [another interface specification] with associates most affected by his or her work. “As a colleague, I agree to provide this report to you, or load these containers into a truck, or operate a piece of equipment in a certain fashion.” [i.e. I agree to provide this collection of services]. This [interface specification] covers as many as 30 activity areas and spells out all the relevant performance metrics.”


I recommend you read the Harvard Business Review article in which Gary Hamel talks about Morning Star.


I guess (don’t know) that the parties in Morning Star agree service contracts (in customer-supplier agreements and in letters of understanding) using the “pull” principle of Lean manufacturing, meaning they are defined backwards from the end of the supply chain to the start.


Of course, it is not true that Morning Star fired all the managers. And even in Morning Star, there may be a role for EA in optimising and extending the creation and uses of information to support and enable business processes.

On tackling wicked problems

The original paper on wicked problems defined them in terms of ten points. In practice, most of those points apply to most human activity problems we are asked to address - small as well as large.


In short, a "wicked problem" is one with conflicting requirements and no ideal solution. There is no perfect answer; there are trade-offs to be made between competing goals, and balances to be drawn between different design options. The possible solutions can’t be neatly divided into good or bad, only placed on a scale between those extremes.


How to define and "solve" wicked problems in the messiest of social entities or situations? The first step is not to model a particular system; it is to do some meta system thinking.


This video introduces General Morphological Analysis as a tool to scope a problem and define solution options. GMA is a kind of meta system (see above).

·        The state variables of the meta system are called dimensions.

·        The actors are a pool of experts, and a facilitator.

·        The activities are to populate the dimensions with options and analyse them in various ways.


GMA prompts a panel of actors to think, question and consider options in a systematic way. The video suggests these actors are supported and enabled by a software tool. Typically, there are three two-day workshops with 6 or 7 carefully selected and heterogeneous experts. They define the critical dimensions of the problem (cf. the state variables of an activity system). In each dimension, they define a range of options (cf. values for the variables).


The experts examine each pair of options (drawn from different dimensions) to decide whether the combination is possible or impossible. Rejecting impossible combinations narrows the permutations to be considered. The many remaining permutations are akin to Ashby’s measure of a system's “variety”. Fixing the choice between options in one or more critical dimensions further narrows the possible option permutations.
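This cross-consistency assessment can be sketched in code. The dimensions, options, and impossible pairs below are invented for illustration; the point is how pairwise exclusions narrow the space of permutations.

```python
from itertools import product

# Hypothetical problem: three dimensions, each with a range of options
dimensions = {
    "funding":  ["public", "private"],
    "delivery": ["in-house", "outsourced"],
    "scale":    ["pilot", "national"],
}

# Pairs of options the experts judged impossible together (hypothetical)
impossible = {("public", "outsourced"), ("pilot", "outsourced")}

def consistent(config):
    """A configuration survives if no pair of its options is impossible."""
    return not any((a, b) in impossible or (b, a) in impossible
                   for a in config for b in config)

all_configs = list(product(*dimensions.values()))          # the full "variety"
viable = [c for c in all_configs if consistent(c)]         # narrowed by exclusions
```

Even in this toy case, two pairwise exclusions cut the eight permutations down to five; with real problems of hundreds or thousands of permutations, the narrowing is what makes the analysis tractable.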


This exhaustive and exhausting analysis of (potentially hundreds or thousands of) option permutations can identify unexpectedly viable and non-viable solutions. Assessment of the most favored options is done in whatever empirical, logical and normative (social) ways can be applied.


Further, proceeding to a solution, we might be able to use causal network analysis. Or else Thomas Saaty’s analytic hierarchy process (AHP). And eventually, apply system theory to design a chosen business activity system.
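As an illustration of AHP’s core step, the sketch below derives priority weights from a pairwise comparison matrix, using the geometric-mean approximation of the principal eigenvector (a common shortcut, not Saaty’s full eigenvector method). The judgment values are hypothetical.

```python
from math import prod

# comparisons[i][j] = how much more important criterion i is than j,
# on Saaty's 1-9 scale (hypothetical judgments for three criteria)
comparisons = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

def ahp_weights(matrix):
    """Geometric mean of each row, normalized to sum to 1."""
    n = len(matrix)
    gm = [prod(row) ** (1 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(comparisons)
```

The resulting weights rank the criteria by relative importance; in a full AHP study one would also check the consistency ratio of the judgments before trusting them.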

Conclusions and remarks

This chapter addresses what it takes for an enterprise to be adaptable, including several interpretations of self-organization, and a way of approaching “wicked problems”.


Aside: comments on a paper

This paper on robustness makes some questionable assertions. It defines robustness thus. “A core property of robust systems is given by the invariance of their function against the removal of some of their structural components.” Contrary to what some interpret Ackoff as having said, we can indeed remove parts from a whole without affecting its ability to meet its main aim. We can remove the spell checker from a word processor. We can remove the glove compartment, airbag, safety belt, carpet, arm rests and radio from a motor car, and still drive from A to B. We can remove the spleen, gall bladder and appendix from a human body with no significant effect on the functioning of the body.


The paper doesn’t distinguish actors from roles. Since it assumes one actor plays one role, and vice-versa, removing one removes both. In practice, it isn’t that simple. Given one role played by many actors, we can remove actors without qualitatively changing the system behavior. E.g. remove one of our two kidneys.  And given one role is responsible for several activities, removing the role may disable all of them. However, some processes may be completable without some ideally-expected activities. And in practice, some processes are more central to business success than others. This “centrality” may not be evident from a model of system structure and behavior.