Description Theory

From biological perception to formal system description

Copyright 2016 Graham Berrisford. One of about 300 papers at the Avancier web site. Last updated 08/02/2019 22:35.


This article takes a Darwinian approach to description.

The preceding article explored “semiotics” and what might be called “type theory”.

The next article looks at how actors exchange information (descriptions, directions, decisions and requests for them).


Description theory terms and concepts

The evolution of description

1: Perception and pattern recognition

2. Learning new patterns

3. Communication (sharing mental models)

4. Verbalisation of communications

5. Mathematics and computing

6. Artificial intelligence (AI)

What else makes humans special?

A general description theory

A general system description theory


Description theory terms and concepts

Our description theory is based on three propositions.

Describers observe and envisage realities (which they perceive to be composed of discrete objects and events).

Describers create and use descriptions (stored in memories and conveyed in messages using brains, speech, writing, pictures and other forms).

Descriptions conceptualise realities (they either mimic features of things, or represent them in some encoded form).


The reality of the universe is supposedly a space-time continuum, with no gaps in space or time.

However, all descriptions of the universe chunk it into discrete things.

Thing: a subdivision of the universe, locatable in space and time, describable as instantiating one or more types.

Natural thing: a thing that emerges from the evolution of matter and energy, regardless of description.

Organism: a natural thing whose form is defined by its genes, and which engages in the process of Darwinian evolution.

Designed thing: a thing described by an organism before it is created.


A premise here is: there was no description before life.

There could be no concept before there was an actor able to conceive it, and no description before there was a describer.

Describer: a thing (organism or machine) that can create and use a description.

A Darwinian explanation starts from the notion that organisms can recognise new things as resembling remembered things.

The survival of describers depends on their ability to create and use memories that are descriptions of realities.


Realities do not have descriptions, awaiting discovery.

Rather, describers create descriptions to help them understand and predict realities.

Description: a thing that idealises another thing by recording or expressing some of its properties.

Any memory, message, model or view that captures/encodes knowledge of a thing’s properties can be called a description.

It enables knowledge to be remembered and/or communicated.

It enables some questions about the described thing to be answered.


Set theory and semiotics provide some useful terms for descriptive concepts.

Set: a collection of things that are similar in so far as they instantiate (embody, exemplify) one type.

Type: an intensional definition, composed of property type(s) that describe a thing.

Instance: an embodiment by one thing of a type, giving values to the properties of that type.

Sign: a name, image or effect of a thing or a type, which describers use in recognising, thinking or communicating about that thing or type.

Token: an appearance of a sign or a type in a memory or message.
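The set/type/instance relationship above can be sketched in code. This is a minimal illustration with hypothetical names (`instantiates`, `bird_type`), not anything from the source: a type is treated as an intensional definition (a collection of property types), and a thing instantiates the type if it gives a value to each of those properties.

```python
def instantiates(thing: dict, type_properties: set) -> bool:
    """A thing instantiates a type if it gives a value to every property of that type."""
    return type_properties <= thing.keys()

# An intensional definition: the property types that describe a "bird".
bird_type = {"has_feathers", "has_beak", "can_fly"}

# Things are bags of property values; they may carry extra properties too.
robin = {"has_feathers": True, "has_beak": True, "can_fly": True, "colour": "red"}
worm = {"is_long": True, "is_thin": True}

# The set is the collection of things that instantiate one type (its extension).
things = [robin, worm]
bird_set = [t for t in things if instantiates(t, bird_type)]

print(instantiates(robin, bird_type))  # True
print(len(bird_set))                   # 1
```

The same sketch distinguishes the type (an intensional definition) from the set (the extension: whichever things happen to instantiate it).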

Description theory triangle

These articles present the propositions above in a graphical form as a triangle.

The term “idealise” can be read as conceptualise, represent, signify or symbolise.


Description theory


<create and use>                   <idealise>

Describers     <observe & envisage>     Realities


Note: this triangle does not match some other triangles you may have come across, like the semiotic triangle (see other triangular philosophies).

And there is recursion; the universe of realities includes describers and descriptions.


There were biological describers and descriptions before mankind evolved.

However, the ability to describe reality is uniquely and astonishingly well advanced in mankind.

We form descriptions in many different ways:

·         We naturally encode descriptions in mental models: e.g. remember the sensation of looking at a face.

·         We make descriptions that mimic reality in the form of pictures and other kinds of physical model.

·         We encode descriptions in spoken or written words: e.g. “a planet is a large body in space that orbits a star.”

·         We encode descriptions in other forms of documented model: e.g. a symphony score.

The evolution of description

 “Evolution is the accumulation of small errors that turn out to be advantageous” (Steve Jones).


Before life, the universe was composed of physical matter and energy.

There was a continuous process of change to that matter and energy over time.

This process led to some systems, like the solar system, that repeat behaviors in an orderly way.

However, there was no description of the universe or any system in it.

For eons, there was stuff happening, and even some orderly systems; but there was no description of those systems.

(Or else, there was only a metaphysical description by God that is unknowable).


Eventually, biological evolution led to humans with the ability to describe things using words.

But a Darwinian explanation of description cannot start from words, it has to start earlier.

It starts from the notion that organisms can recognise family resemblances between things.

And so recognise when a new thing is of a kind important to survival, or resembles a previously remembered thing.

This paper tells a story of steps that lead from recognising similar things to increasingly sophisticated creation and use of “types”.

1: Perception and pattern recognition

Sunflowers react to the sun’s daily passage across the sky.

Somehow they perceive the position of the sun.


Single-celled organisms can perceive some features of their environment – hot/cold, light/dark, edible/not edible.

We know from observation that they repeatedly react to similar things in the same or an appropriate way.

Knowledge of that perception must somehow be encoded/symbolised in their biochemistry.


Have you ever engaged a kitten by wiggling a piece of string?

Then you know that “mouse tail like” is somehow encoded in a cat’s biochemistry.

(It may be relevant that Hubel and Wiesel (1959) showed that cats' brains have cells dedicated to detecting movement in slit-shaped spots of light.)


A family resemblance

Long, thin, wiggly


A brain can remember a perception of a thing.

But surely, it cannot compare every newly-perceived thing with every past-remembered thing.

Rather, it compares every newly-perceived thing against patterns of related sensations.

And efficient pattern recognition gives organisms an advantage in the struggle for survival.


There must be infinitely many overlapping patterns.

Some are more common and more usefully remembered than others.

E.g. it is useful to distinguish firm and stable things we call “solid” from fluid and stirrable things we call “liquid”.

And predators must recognise things we call “prey”.


Naturally, members of the same species tend to conceptualise realities in the same ways.

They share a common bio-chemistry; they have similar experiences and similar needs.

However, their knowledge of the world does not have to be precise or perfect.

Evolution does not require an animal to have a perfect description of anything.


Holding mental models of the world helps animals to manipulate things in the world and predict their behavior.

But all mental models are partial and flawed models of reality.

These descriptions need only be accurate enough to help animals survive and breed.

Animal minds evolved to model reality, not exactly as the world is, but just well enough to sense, predict and direct events that matter to survival.

2. Learning new patterns

Evolution led on to animals able to recognise a wider range of family resemblances.

And to animals with the intelligence to learn from experience.

Meaning - they can remember new patterns of resemblances and match newly perceived things to them.

They encode perceptions in memory and recognise when new things resemble old ones.

E.g. A bird learns by trial and error to recognise things we call “edible”.


Creating and remembering new descriptions helps animals to recognise food, mates, friends and enemies.

Having flexible mental models of the world helps animals to survive, thrive, and pass their genes on.

The more intelligent the animal, the greater its ability to abstract descriptions from realities.


Most animals have no words to describe things and family resemblances between them.

Here, because we are discussing these matters in writing, we have no option but to use words.

A cat recognises long, thin wiggly things as being what we might call “mouse tail like”.

And flying things with feathers and a beak as what we call “birds”.


A family resemblance

has feathers, has a beak, can fly.


Words are so natural to us that our mental models may take verbal forms.

For more on mental models, read How the brain works.

3. Communication (sharing mental models)

The ability of actors to communicate is an amazing side effect of biological evolution.

To communicate, animals use tokens to signify types that their relatives can recognise.

E.g. A bird sounds an alarm call (a token) to signal the type we call “danger”.


How do we know one animal can understand a description created by another? By their actions.

One honey bee performs a wiggle dance to describe the location of a pollen source; another bee then flies to that source.

The only plausible conclusion is that the second honey bee understood the description of reality communicated by the first.


E.g. A honey bee remembers where pollen is found, and later describes that to other bees.

One bee’s wiggle dance contains tokens giving values to the types we call “direction” and “distance”.

Another honey bee observes the dance and finds that pollen source.


Senders encode meaningful information in signals.

Receivers decode meaningful information from signals.

Both translate internal/organic mental models into and out of external/inorganic forms of description.


Symbolic languages include: facial expressions, sounds (calls, barks, whistles), smells, movements, gestures and manipulated materials (e.g. nests).

By using such physical symbols, animals share knowledge about food, friends and enemies, and signal mating intentions.


Communication by messages

Many animals communicate using sound – since it is immediate, fast, easily manipulated and works in the dark.

Male and female mosquitoes, in flight, normally transmit sounds of different pitches.

When they can hear each other, they align the pitches of the sounds they make.


Many birds communicate by singing.

One may perceive danger and sound an alarm call; another hears that call and internalises it as a sense of danger.

Thus, the two birds share an internal mental model of the current situation as being what we symbolise as “dangerous”.


Honey bees must recognise and remember things of the type we might name “pollen location”.

Famously, they also communicate about pollen sources by performing and observing wiggle dances.

The dance of a honey bee encodes particular values for the variables “distance” and “direction” of a “pollen location”.

The proof that bees “know” the symbols for these two types is that bees do find pollen where it was described.


Externally, bees communicate pollen locations by dancing.

Internally, their mental models of pollen locations are somehow encoded in their biochemistry (how does not matter here).

They have to translate internal descriptions of reality into and out of external descriptions of reality.

One bee finds some pollen, encodes its distance and direction in private memory, and later, translates that information into a public dance.

Another bee decodes the information from the public dance, translates the information into private memory and later, finds the pollen.
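The encode/translate/decode cycle just described can be sketched as a round trip. All names here are hypothetical (a `PollenLocation` memory, an `encode_dance`/`decode_dance` pair); the point is only that a private memory is translated into a public description and back without loss.

```python
from dataclasses import dataclass

@dataclass
class PollenLocation:
    """A private mental model: values for the types 'direction' and 'distance'."""
    direction_degrees: float
    distance_metres: float

def encode_dance(memory: PollenLocation) -> dict:
    """Translate a private memory into a public description (the 'dance')."""
    return {"direction": memory.direction_degrees, "distance": memory.distance_metres}

def decode_dance(dance: dict) -> PollenLocation:
    """Translate a public description back into a private memory."""
    return PollenLocation(dance["direction"], dance["distance"])

scout_memory = PollenLocation(direction_degrees=45.0, distance_metres=120.0)
dance = encode_dance(scout_memory)    # external/inorganic description
forager_memory = decode_dance(dance)  # internal/organic description

print(forager_memory == scout_memory)  # True: the mental model has been shared
```

The proof of shared understanding, as in the bee example, is behavioral: the decoded memory directs the second actor to the same place.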


Beehive communications

Pollen location descriptions

<create and use>                   <idealise>

Honey bees   <observe and envisage> Real world pollen


Communication using shared memory spaces

Animals can leave a smell on a tree to say “This is my territory”.

They can shape materials into a nest and signal to a potential partner “I am ready for mating” (cf. Ashby’s sticklebacks later).

The smelly tree and the nest are persistent external memories; they are shared memory spaces.


For more on communication in social systems, read A communication theory.

4. Verbalisation of communications

There is nothing human-specific above; we were not the first to remember descriptions of things and communicate them.

What sets us apart is the use of words to remember and communicate information.

Our ability to communicate verbally (and in pictures) hugely amplifies the intelligence nature gave us.

It enables us to collaborate with others in complex projects to do complex things and create complex objects.


Communication by messages

Any matter or energy flow can be used by one actor to send information to another.

Evolution gave us humans a unique and dramatically well-developed communication tool.

We can create and use an infinite variety of sounds to symbolise things and similarities between things.


We translate internal mental models into and out of external oral descriptions.

We give voice to and hear verbal messages ranging from short and simple to long and complex.

Our messages contain descriptions, directions, decisions (and requests for them).

Oral communication was a huge step forward for mankind, and is essential to most peoples’ lives today.


Communication using shared memory spaces

We have a second huge advantage when it comes to sharing mental models.

We have shared memory spaces that far exceed those other animals can use - in scope, complexity and value.

We can record oral descriptions, decisions and directions using that triumph of human invention - the written record.

Thus, we can translate internal mental models into documented models for posterity, for agreement and for testing.




The need to externalise memory

Intelligent describers create descriptions as a result of observing or envisaging realities.

Their observations start in sensory perceptions; their envisagings start in either dream-like or consciously-directed mental activity.

Both observations and envisagings lead to the same result - the creation of private descriptive/mental models.

Unfortunately, recalling a biological memory has the effect of modifying it to some extent - though rarely as far as in "false memory syndrome".

That is why externalising memories – in publicly shareable written, audio and visual forms - is so important.

Written communication is so important to modern society that schools prioritise the teaching of reading and writing over other subjects.

And obviously, documented models are vital in any collaborative design exercise.


The fuzziness and fluidity of natural language

Human verbal languages are extremely flexible.

It isn’t just that we bend the rules of the language we learn.

E.g. We call a circus ring “circular”, though nothing in the real world is measurable as a perfect circle.

E.g. We say a penguin is a “bird”, though a bird may be defined by the family resemblance: has feathers, has a beak, can fly.


We also shuffle signs and types around, modify them, and use them loosely.

Dictionary editors strive to impose a structure on a vocabulary, using simple well-known words to define other words.

But there is no overarching type hierarchy, there is instead a complex network of signs for concepts.

There are synonyms, homonyms, and recursive references.


This fuzziness and fluidity of words surely reflects the fuzziness and fluidity of ideas in our heads.

Perhaps ambiguities are essential, since they lead to innovation in the evolution of human ideas.

Over two million years, humans have put countless ideas into words, tested them and found them wanting.

Axioms that seem obvious today are those that have survived testing and been shared for that reason.

Those models of reality that successfully predict outcomes will tend to survive and thrive better than those which don’t.


Constraining natural language

Natural languages give us enough communication capability to get by.

But natural languages are biological rather than logical.

It isn’t just that a word can have several different meanings.

The grammar we use is loose, and our natural language expressions can be unclear or ambiguous.


So, natural language is not precise and rigorous enough to support all our endeavours.

To specify complex artefacts, machines and systems we need more formal languages.

We use words to fix family resemblances in the form of sign-type pairs.


To prevent ambiguity and misunderstanding, people devise “controlled vocabularies.”

Different “bounded contexts” or “domains of knowledge” have their own “domain-specific languages”.

E.g. Accountants agree the meaning of the types “profit” and “loss”.


For more, read A language theory.

5. Mathematics and computing


From words to types

Human cognition is wonderful, various and mysterious beyond comprehension. 

It is surely a complex compound of different processes that are biological rather than logical.

However, the use of words enabled us to classify things and concepts into types.


A type is an intensional definition, composed of property type(s) that describe a thing.

This triangle represents how types idealise particular things.



General Types

<create and use>                <idealise>

Typifiers   <observe and envisage> Particular things


People and software engineers often use polythetic types.

These contain more properties than necessary – so a thing need not instantiate all properties of the type.

E.g. The employee record type has 20 attributes, but most employee records contain values for only some of them.


Mathematicians often use monothetic types.

These types contain only properties that are necessary and sufficient to be a thing of that type.

E.g. Every instance of “even number” must have all the properties of that type.
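The distinction can be sketched in code. This is an illustrative sketch with hypothetical names: a monothetic type is a necessary-and-sufficient rule, while a polythetic type is satisfied by giving values to only some of its listed properties.

```python
def is_even(n: int) -> bool:
    """Monothetic: every instance of 'even number' must satisfy exactly this rule."""
    return n % 2 == 0

# Polythetic: the type lists more properties than any one instance need have.
EMPLOYEE_PROPERTIES = {"name", "grade", "salary", "manager", "car_parking_space"}

def is_employee_record(record: dict, minimum: int = 2) -> bool:
    """A record counts as an instance if it gives values to enough of the properties."""
    return len(EMPLOYEE_PROPERTIES & record.keys()) >= minimum

print(is_even(10))                                      # True
print(is_employee_record({"name": "Ann", "grade": 7}))  # True, despite missing properties
```

The `minimum` threshold is the design choice that makes the type polythetic: there is no single property that every instance must have.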


Within such a “hard” domain of knowledge, a statement can be absolutely true or false, because truth and falsehood are defined by reference to the logic or rules of the domain itself.

E.g. it is universally and certainly true that the “circumference” of a “circle” is equal to its “diameter” times the value of “pi”.


From types to systems

And types enable people to think logically - a very narrow kind of thinking.

And thinking logically led people to invent (in the 19th century?) the deterministic system.

A deterministic system processes messages (containing values of given types) against memories (containing values of given types), and acts according to rules.
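That definition can be sketched directly. The example below is hypothetical (a toy account-keeping system, not from the source): messages containing values of given types are processed against a memory containing values of given types, according to fixed rules, so the same message and memory always yield the same result.

```python
def process(message: dict, memory: dict) -> dict:
    """Apply the system's rules to a typed message and the current memory; return new memory."""
    if message["type"] == "deposit":
        memory = {**memory, "balance": memory["balance"] + message["amount"]}
    elif message["type"] == "withdraw" and memory["balance"] >= message["amount"]:
        memory = {**memory, "balance": memory["balance"] - message["amount"]}
    return memory  # deterministic: no randomness, no hidden state

state = {"balance": 0}
for msg in [{"type": "deposit", "amount": 100}, {"type": "withdraw", "amount": 30}]:
    state = process(msg, state)

print(state["balance"])  # 70
```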


From systems to computers

Computing is nothing more or less than a formalisation of the deterministic system.

Computers directly imitate that very narrow kind of deterministic thinking humans developed.

Computers create meaning by encoding descriptions of the values of types.

6. Artificial intelligence (AI)

AI machines can create and use descriptions using neural networks, polythetic types and fuzzy logic.

AI overcomes the limitations of monothetic types by imitating biological processes.


Don’t misinterpret news (in 2017) of AI machines talking to each other and “inventing their own language”.

Look at what they actually said to each other, and you’ll see it looks very unlike a language, and far more like the babble of software going awry.

AI has been hyped for 40 years, and it has recently been joined by “big data”; surely, both are still early on the hype curve.

The ability of a machine to recognise patterns in, or derive types from, data is one thing.

Our ability to create complex system descriptions and then build complex systems is on a completely different level.


An aside on free will.

Robots can do unexpected things, by employing probabilistic fuzzy logic or randomising functions when choosing a response to inputs.

That doesn’t imply they have intention or will, but does prompt the question about whether animals have free will.

My view is that it doesn't matter how animal intelligence works, whether it is deterministic or not.

For almost all practical day-to-day purposes, we have to treat people as making choices of their own free will.

We allow judges and juries to make allowances for cases where people are “out of their minds” or coerced by others to make choices.

What else makes humans special?

This article tells a story of six steps that leads from perception, to fuzzy pattern recognition, to increasingly sophisticated creation and use of “types”.

This article presumes describers have the human abilities above.

So we can describe the structures and behaviors of a system by using words to typify them.

Translation between different kinds of description

We can translate between descriptions in three ways.


First, we can translate between internal mental models and external descriptions

It may seem natural to draw a division between internal mental models and external oral/documented models.

A mental model is unconsciously encoded in a biochemical form that is fuzzy, fluid, fragile, and prone to decay and forgetting.

A documented model is consciously encoded in much more stable and persistent physical form.


However, we continually translate between internal and external models of reality.

We translate mental models into oral/documented models and vice-versa.

We continually, well nigh automatically, translate between mental models and spoken words or written words.


Second, we can translate between any two kinds of external description

We frequently translate between external forms of description: e.g. between speech and writing.

That seems no different in principle from translating between internal and external models: e.g. translating mental models into and out of spoken words.

And no different in principle from translating up and down the mysterious communication stack from chemistry to consciousness.


Third, we can translate between descriptions usable by humans and by machines we make

Computers facilitate human communication of descriptions, directions and decisions.

They require that all descriptions, directions and decisions are translated into patterns of binary digits.

But translating into and out of binary code is no different in principle from translating between other symbolic languages.
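As a trivial illustration (not the author's, and deliberately simple): any description encoded in one symbolic language can be translated into a pattern of binary digits and back without loss.

```python
description = "a planet is a large body in space that orbits a star"

# Translate the description into a pattern of binary digits.
binary = description.encode("utf-8")
bits = "".join(f"{byte:08b}" for byte in binary)

# Translate the binary pattern back out into the original symbolic form.
decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

print(decoded == description)  # True: nothing is lost in translation
```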

Introspection and analysis

Biological evolution led to animals with self-awareness.

Experiments have shown many animals recognise themselves in a mirror, including elephants, apes, dolphins and whales.

However, we are not only more self-aware but also more introspective than other animals.

We analyse what is communicated, we challenge it, we test it.


It is easy to make assertions with no evidence.

Persuasive oratory and/or laziness can lead people to believe false assertions are true.

Historically however, the written record changed the game.

“As soon as writing made it possible to carry communication beyond the temporally and spatially limited circle of those present at a particular time, one could no longer rely on the force of oral presentation; one needed to argue more strictly about the thing itself.” (Luhmann)


The written record helps us to examine what is thought, said and written, to challenge it, to test it.

We need tools to confirm that assertions are true, that descriptions are accurate, and that hypotheses count as knowledge.

We have developed tools to test the truth of assertions: mathematics, logic and the scientific method.


Like everything else in the knowable universe, verbal descriptions can be described.

Logicians seek to describe things using statements that can be tested as true or false.

To do this, they use a generalised form of description called a predicate statement, which takes the form: subject <verbal phrase> object.

For example: “The Lawn Tennis Association <are responsible for defining> the laws of tennis.”
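One hypothetical way to represent such predicate statements in code is as subject/verbal-phrase/object triples, against which a statement can be tested as true or false. The `facts` store below is purely illustrative, using the source's own example sentence.

```python
# A recorded body of facts, each a (subject, verbal phrase, object) triple.
facts = {
    ("The Lawn Tennis Association", "are responsible for defining", "the laws of tennis"),
}

def is_true(subject: str, verbal_phrase: str, obj: str) -> bool:
    """A predicate statement is true here if it appears in the recorded facts."""
    return (subject, verbal_phrase, obj) in facts

print(is_true("The Lawn Tennis Association",
              "are responsible for defining", "the laws of tennis"))  # True
print(is_true("The Lawn Tennis Association",
              "are responsible for defining", "the laws of chess"))   # False
```

This triple form is essentially how logicians (and, later, knowledge bases) make verbal descriptions testable.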


For more on the use of predicate statements, read A language theory.

The scientific method

The word science is rooted in an old word for knowledge.

Science might be simplified to three propositions.

1.            Scientists <observe and predict> the universe – meaning the behavior of entities observable in reality

2.            Hypotheses <conceptualise> the universe – meaning the behavior of entities observable in reality.

3.            Scientists <create and use> hypotheses - which are regarded as knowledge if they successfully predict real-world behavior that is measured.


The three propositions are arranged in a triangle below.


Hypotheses & Knowledge

<create and use>                 <idealise>

Scientists    <describe and predict>   The universe


To generate a hypothesis is easy, and it does not amount to science.

As Thomas Edison said “Genius is 1% inspiration, 99% perspiration.”

The inspiration, creating a hypothesis, is 1% of the effort; the other 99% lies in analysing and testing the hypothesis.

For more on the limits to our knowledge, read Knowledge and truth.


There is a lot of pseudo-science about - not least in the world of “systems thinking”.

For more on that, read “Seven signs of shamanism”.

A general description theory

The description theory we started with can be distilled into this triangle.


Description theory


<create and use>               <idealise>

Describers     <observe & envisage>      Realities


Things change, and in the end, all is unstable.

A system is a transient island of stable behavior in the ever-changing universe.

What differentiates a "system" from any old named thing is that a system has one or more stable roles and rules.

For example, a game of chess, a tennis match, a business system, a software system, or anything describable in the form of a causal loop diagram.


Systems are islands of orderly behavior that are describable in terms of:

·                 Roles played by system actors - persistent entities – active structures – locatable in space

·                 Rules governing system activities – behaviors over time – which follow some logic or law – and change the state of the system.


To apply system theory is to describe those regular behaviors of an entity that are relevant to some given aim(s) and/or concern(s).


The description of a natural system

Our sun and its planets interact in the solar system by following the laws of gravity and motion.

It is describable as a system because its behavior conforms to laws.


Russell Ackoff encouraged people to distinguish abstract systems from concrete systems (though he went on to confuse the two himself).

1.            The "solar system" is an abstract system description; it is our way of making sense of a reality.

2.            The relevant bodies out there in space are the concrete system realisation.

The second is more than the first; the first is a selective abstraction from the second.


The description of a designed system

The solar system is a natural system; it predated the humans who first observed it.

By contrast, consider a business system that is designed and created by humans.

It too can be described as a system because its behavior conforms to some logical rules, for example:


A business system (a sequence of messages):

·         Place order

·         Send invoice

·         Send payment

·         Send receipt


Some branches of systems thinking are about social entities rather than social systems.

The behavior of an ever-changing social entity is not describable as a “system”.

A general system description theory

This table maps system theory to other ways of describing the world.



Description theory: Describers create and use Descriptions (logical), which idealise Realities (concrete, or relatively so).

Type theory: Typifiers create and use Types, which idealise Set members.

Science: Scientists create and use Hypotheses & knowledge, which idealise The universe.

System theory: System describers create and use Abstract system descriptions, which idealise Concrete systems.


System describers create and use abstract system descriptions which idealise concrete systems.

A concrete, running, system is made of individual things, somewhat different from each other, which realise/instantiate types in an abstract system description.

Conversely, an abstract system description is made of types that idealise similar things by naming properties they instantiate.


This table lists several applications of system theory.

General system theory: System describers create and use Abstract system descriptions, which idealise Concrete system realisations.

·         “The Dewey Decimal System” idealises the sorting of books on library shelves.

·         “The Solar System” idealises planets in orbits.

·         Lawn tennis: “The laws of tennis” idealise tennis matches.

·         Classical music: symphony scores idealise symphony performances.

·         Business systems: enterprise architects create and use business models, which idealise business actors and operations.

·         Software systems: software architects create and use software models, which idealise software objects and operations.


Ashby said in 1956: “our first impulse is to point at a concrete entity repeating a behavior and to say ‘the system is that thing there’”.

But to apply system theory is to select and describe those behaviors of an entity that are relevant to some given aim(s) and/or concern(s).

The describer must abstract from the infinite describable facts that could be found in observing the activities of the concrete entity.

And notice, moreover, that our concern is activity systems rather than passive structures.


"Cybernetics does not ask 'what is this thing?' but 'what does it do?' It is thus essentially functional and behavioristic.”

“[It] deals with all forms of behavior in so far as they are regular, or determinate, or reproducible.” Ashby 1956

Abstraction in system thinking

This triangle specialises our description theory for systems.

System description theory

Abstract system descriptions

<create and use>                            <idealise>

System describers <observe & envisage> Concrete system realisations


This table outlines four examples.






·         “Solar system” (abstract system description) idealises planets in orbits (concrete system realisation).

·         The laws of tennis idealise a tennis match.

·         The score of a symphony idealises a performance of that symphony.

·         The roles in a radio play idealise actors playing those roles.


By necessity, a system description abstracts from most aspects of what it describes.

It is impossible to describe the infinite potentially describable features of a concrete system realisation.

E.g. A definition of the solar system says nothing about life on earth.

E.g. The score of a symphony says nothing about the personalities of orchestra members.

E.g. Human role definitions say almost nothing about the individual actors that play those roles.


Remember our concern is activity systems.

The next table subdivides the concrete system realisation into the specified behavior and the entity that performs it.


System thinking levels

System 1:

·         Abstract system description (the specification): a symphony score

·         Concrete system realisation (the specified behavior): a performance of the score

·         The entity that performs it: the orchestra members in a concert hall

System 2:

·         Abstract system description (the specification): the design-time code of a computer program

·         Concrete system realisation (the specified behavior): a run-time execution of that code

·         The entity that performs it: a computer in a data centre


Remember, one entity can realise many systems.

E.g. one computer can perform many unrelated programs at once.

E.g. you (as a person) may realise at least two of the three systems in the table below at the same time (three at once is not advisable).



System thinking levels

System 3:

·         Abstract system description (the specification): “Oxygen to carbon dioxide”

·         Concrete system realisation (the specified behavior): a person breathing

System 4:

·         Abstract system description (the specification): “Gene reproduction”

·         Concrete system realisation (the specified behavior): a person making love

System 5:

·         Abstract system description (the specification): “System modelling”

·         Concrete system realisation (the specified behavior): drawing ArchiMate diagrams

In all three cases, the entity that performs the behavior is a person – you, for example.


Remember Ashby’s warning: “our first impulse is to point at a concrete entity repeating a behavior and to say ‘the system is that thing there’”.

An entity is rightly called a system only when and where it conforms to a system description.

No entity is rightly called a system without reference to a specific system description or model – one to which the entity demonstrably conforms.


The interaction between system describers, descriptions and realisations is especially intimate in the case of human and/or computer activity systems.

If you consider “friendliness” to be an important property of your run-time system, then you should put it in your role definitions.

The only features of the real-world entity that count as part of the system are the features included in your system description.

Concretion of system realisations

In practice, it is normal to regard the concrete entity as part of the system realisation, because the remainder of that entity, and whatever it does outside the system of interest, is out of scope.


The describer’s premise is that a concrete system will realise (give values to) variables in an abstract system description.

For example, the members of an orchestra will give values to the notes in a musical score.

We test this by observing and measuring the values the concrete system gives to those property types.

To run at all, the system must employ actors who can read the types in a description and assign values to them. 
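The conformance test just described can be sketched in code. All names below are hypothetical: the abstract description is reduced to a set of variables (as in a score's pitch and duration), and the concrete realisation conforms if every observed event gives a value to every variable.

```python
# The variables named in an abstract system description (e.g. a musical score).
description = {"pitch", "duration"}

# Values observed and measured from the concrete system realisation (a performance).
observed_performance = [
    {"pitch": "C4", "duration": 0.5},
    {"pitch": "E4", "duration": 0.5},
]

def conforms(observations: list, variables: set) -> bool:
    """The realisation conforms if every observed event gives a value to every variable."""
    return all(variables <= event.keys() for event in observations)

print(conforms(observed_performance, description))  # True
```

An entity whose observed behavior fails this check is not rightly called a realisation of that system description.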

Level of abstraction?

The level of abstraction in any system architecture is a matter of choice.

The architecture of a computer program can be described at several levels, from program code up to very abstract architectural models/diagrams.

The “system of interest” to a system architect extends only so far as the architect chooses to describe the structures and behaviors of that system.

For more on abstraction of description from reality, read A philosophy.



All free-to-read materials on the Avancier web site are paid for out of income from Avancier’s training courses and methods licences.

If you find them helpful, please spread the word and link to the site in whichever social media you use.