The science and philosophy of systems

Copyright 2018 Graham Berrisford. One of several hundred papers at Last updated 19/10/2018 11:10




System architects are supposed to observe baseline systems, envisage target systems, and describe both.

So, you might assume architects are taught about systems thinking; but this is far from the case.


The systems of interest here are islands of orderly behavior in the ever-unfolding process of the universe.

That does include the solar system, riding a bicycle and the cardio-vascular system, but the main interest is narrower.

Systems in which entities act in response to information encoded in messages and memories.

Human social systems in which that information describes or directs some entities or events of importance to people.

And business systems in which some or all of the information is digitised.


You can’t understand such systems without answering questions about the nature of description and reality.

These questions are often considered the domain of philosophers and linguists such as Nietzsche and Wittgenstein.

And thinking about such systems is often considered the domain of sociologists.

By contrast, the story of description and reality below is rooted in biology.


Contrary to some Postmodern Attacks on Science and Reality, this story respects science.

It positions systems thinking in a brief history of the universe and human evolution, and as a branch of science.

It discusses the relationship of real world actors and activities to descriptions of them (as in data structures).

And the specification of rules as pre and post conditions of activities (as in business processes).

It prepares the ground for later papers challenging some systems thinkers and their ideas.


The conclusions contain a list of points made in this pre-history of systems thinking.

If they seem obvious to you, then you may want to skip to the next paper on systems thinkers and their ideas.

If you find them obscure or controversial, then this paper may prepare you for what follows.


Reality and descriptions of it

Animal memories and messages

Human communications

Human languages, natural and artificial

Aside on mathematics

Systems as complex types

Conclusions and remarks

Footnotes on thermodynamics

Reference: Postmodern Attacks on Science and Reality


Reality and descriptions of it

Heinz von Foerster (1911 to 2002) was a thinker interested in the circularity of ideas.
(His contribution to systems thinking will be challenged later.)

He is reputed to have said “We live in the domain of descriptions that we invented.”

We do live in a society with some laws, roles and rules invented by people who described them.

But we don’t live in a world we have invented.

Scientists believe our universe started with a big bang about 14,000 million years ago.

The earth was formed about 4,500 million years ago.


Before life emerged, there were no perceptions or memory of the universe.

There was no conceptualisation or model of real world structures or behaviors.

Nothing was created to represent or symbolise what exists and what happens in the world.

In short, there was no description of the universe before life; description is a side effect of biological evolution.


“Modern physics strongly suggests ... reality is very much like what was inferred by some remarkable thinkers in the ancient world:

a universe composed of elementary objects that move around in an otherwise empty void.” Postmodern Attacks on Science and Reality


However, as von Foerster might have noted, even physicists have invented different ways of describing objects and their motions.

In classical physics, human-scale structures and behaviors are described as continuous in space and time (cf. analogue signals).

In quantum mechanics, tiny atoms, particles and changes are described as discrete objects in space and jumps in time (cf. digital signals).

But neither of those descriptions is only an invention.

Both are “true” in the sense they represent or symbolise reality well enough in practical tests.


As biological organisms, we all naturally perceive and describe the universe the same way.

We see it as composed of objects (structures) that occupy space at a moment in time and change over time (behaviors).

Our descriptions of reality are digital in the sense that we divide reality into discrete entities (structures) and events (behaviors).

A later paper discusses converting a discrete-event-driven system model into a continuous system dynamics model.

Animal memories and messages

It is probably more than 3,000 million years since life on earth began.


Biological information processing

All animals must know something of the world they live in.

Even the earliest organisms had to recognise chemicals in their environment.

Through biological evolution, animals developed ever more sophisticated ways of knowing the world.


Animals evolved to process descriptions of reality encoded in transient sensations.

By about 700 million years ago, jellyfish had nerve nets that enabled them to sense things in the world and manipulate them.

In a nerve net, intermediate neurons monitor messages from sensory neurons and react by sending messages to direct motor neurons.


By about 550 million years ago, some species had a central hindbrain to monitor and control some homeostatic state variables.

An internal information feedback loop connected that hindbrain to the organs and motors of the body.

The hindbrain must sense the body's state variables and send messages to direct actions that maintain those variables.


About 250 million years ago, the paleo-mammalian brain evolved to manage more complex emotional, sexual and fighting behaviors.

A wider information feedback loop is needed to connect that higher brain to the external world.

The higher brain must sense the current state of food items, friends and enemies in the world, and direct appropriate actions.


Recording knowledge in memory

Animals can not only sense and react to things, but also retain descriptions of things - if only as vague sense memories.

Like all biological traits, human memory is the result of a very long history, most of it shared with other animals.

At each stage in the path from vertebrate to mammal to primate to anthropoid to human, we acquired a different kind of memory.

The result, this research suggests, is that humans have seven different kinds of memory.


Animals don’t just remember static images; they can remember the sequences in which dynamic behaviors unfold.

This other research suggests even rats can replay memories in order to recognise things in sequence.

You can remember the sequence of steps in a dance, notes in a melody, or words in a story.

And of course, the sequence of words in a sentence or message is important to its meaning.


The need for description to represent reality

Friedrich Nietzsche (1844 to 1900) was a philosopher whose metaphysical ideas influenced many Western intellectuals.

He took the view, called “perspectivism”, that our conceptualisations of the world are shaped by how we view it.

To some extent, different people do see the world differently from each other, and from birds, bats and bees.

But more importantly, their conceptualisations are shaped by testing them against reality.


We don’t need metaphysics to explain descriptive concepts.

An animal’s mental models must describe the world well enough; else the animal would not survive.

The physical world includes not only you, your food, friends and enemies, but also your concepts of those things.

Your conceptualisations are “true” to the extent that they help you recognise and predict what exists and happens in reality.


Nietzsche is quoted as asserting “there are no facts, only interpretations”.

Some postmodernists read this as saying there is no objective truth or accurate knowledge of the world.

Some have interpreted the assertion as meaning all descriptions of the world are equally valid.

Any appealing belief or poetic assertion carries the same weight as scientific evidence.

Yet all animal life depends on two facts: a) there is a real world, and b) only some descriptions of reality prove accurate enough when tested.


Communicating via messages

Even very primitive animals signal mating intentions to each other.

By 100 million years ago, some animals cooperated in groups.

Perhaps the earliest social acts were related to marking territory, signalling danger and locating food.

E.g. Cats spray scent to mark their territory; other cats smell that scent.


Communication requires both the creation (encoding) and interpretation (decoding) of messages.

Messages are created by manipulating physical matter and energy to form symbols (smells, gestures, sounds etc.).

The symbols identify or represent things of interest, such as territorial claims, friends, enemies and food.

E.g. A honey bee can symbolise the direction and distance of a pollen source in the form of a wiggle dance.


Honey bee communication

Wiggle dances

<perform & read>        <symbolise>

Honey bees  <find & seek>  Pollen sources


Symbols can both identify things and describe their features (qualities, characteristics, attributes, properties).

E.g. Astonishingly, experiments have shown that honey bees can communicate quantities up to four.


Some system thinkers promote the “hermeneutic principle” that the hearer alone determines the meaning of an utterance.

This dreadful postmodern idea makes speakers guilty of causing offence where none was intended.

The principle needed for successful communication might be called the communication principle.

Communication requires that a receiver decodes the same meaning from a message that a sender intentionally encoded in that message.


Note that biological evolution has not demanded animals communicate perfectly accurate descriptions or absolute truths.

Animals send messages that represent reality accurately enough, often enough, for message receivers to find them useful.

Communications do fail when symbols are ill-formed, lost or obscured in transit, or misread or misinterpreted on receipt.

And animals do sometimes lie to each other, as this video illustrates.

Human communications

The earliest human brain, though larger than those of most other mammals, was about the same size as a chimpanzee's brain.

Over the last six or seven million years, the human brain tripled in size.

By two million years ago, homo erectus brains averaged a little more than 600 ml.

And by 300 thousand years ago, early homo sapiens brains averaged 1,200 ml, not far from the average today.


Why this growth?

Three million years ago, human-like primates learnt to make tools with a cutting edge or point.

Humans needed a bigger brain to make and use increasingly complex tools to hunt and cultivate food.

At the same time, intelligence was needed for the increasingly complex language humans used to cooperate.


Neither conceptualisation nor cooperation by communication is unique to humans.

But only humans communicate by inventing words to symbolise things and their qualities.


The spoken word – transient messages

Many non-human animals use sounds to communicate information about things of interest to them.

But they use sounds instinctively, with fixed meanings.

Between 150 and 300 thousand years ago, humans started inventing sounds (words) to convey meanings.

This emergence of speech may well have reflected changes in human society.

Notably, the change from a gorilla-style dominance hierarchy to the more cooperative and egalitarian lifestyle of hunter-gatherers.

Increasingly, humans used words to express descriptions, directions and decisions, and share them with each other.


The ability to create words and assign meanings to them had a profound effect on thinking.

In describing pollen sources, honey bees describe things that resemble each other, but they don’t discuss what those resemblances are.

Words enable humans to discuss the resemblances between things; inventing words such as “pollen source” to label all similar things.


To idealise a thing means to abstract some features or qualities of the thing, and represent them in a symbolic form – such as words.

We observe and envisage realities; we create and use descriptions; our descriptions idealise or symbolise realities.

These three relations can be shown in a triangle.


Human communication

Verbal descriptions

<create and use>         <symbolise>

Humans  <observe & envisage>  Realities


The ability to describe realities in words makes humans unique.

We don’t inherit words with particular meanings; we imitate and invent words.

We can invent words to symbolise infinite concepts – not only realistic ones but also impossible ones, like a flying elephant.

This freedom to invent words and sentences enables creative thinking and scientific postulations.


Words are unreliable; we not only abuse words, we also change their meanings.

The popular meaning of a word can evolve rapidly and change dramatically.

In every oral communication, a word has a meaning to its speaker when spoken, and a meaning to its receiver when heard.

There is no guarantee that the two meanings are the same.


Note again: biological evolution has not demanded that words express perfectly accurate descriptions or absolute truths.

It requires only that spoken words are understood well enough, often enough.

To improve the chances of being understood, we overload communications with redundant information.

Describing one thing in several ways reduces the chances of miscommunication.


The written word – persistent memories

Five or six thousand years ago, people found ways to persist spoken words using written symbols.

Scholars suggest this may have happened separately in Sumeria/Egypt, the Indus River, the Yellow River, the Central Andes and Mesoamerica.

Writing made one person’s thoughts available for inspection and use by others in different places and times.


The invention of writing enabled the development of civilization.

People could do business and conduct trade on the basis of facts recorded on clay tablets or papyrus.


Only humans invent words to typify similar things.

Translating spoken words into and out of written words helped people clarify their thoughts and communicate over distance and time.

The written record revolutionised our ability to think deeply, think straight, remember things and communicate.


One “landmark in the triumph of the centralised written record” recorded the enterprise architecture of a nation state.

After the Norman Conquest of England (1066), King William ordered an audit of locations in England and parts of Wales.

The aim was to record who held what land, provide proof of rights to land and obligations to tax and military service.

This survey resulted in The Domesday Book, which classifies towns, industries, resources and people into various types.

Human languages, natural and artificial

We use the words and grammar of a language to describe things.

The fluidity and imprecision of natural language enables human creativity and assists survival in a changing world.

But to specify a system in an unambiguous and testable way, an artificial language is needed.


Family resemblances and natural languages

Ludwig Wittgenstein (1889-1951) influenced the “Vienna circle” of logical empiricists (aka logical positivists).

In his “Tractatus Logico-Philosophicus” - a tough read - he set out seven propositions.

He argued philosophical disagreements and confusions can be resolved by analysing the use and abuse of language.

Later, he realised his “Tractatus” was self-contradictory, and developed an entirely different view of language.

He turned his focus from the precision of language to the fluidity of language.

He dropped the metaphor of language “picturing” reality and replaced it with language as a tool.

In “Philosophical Investigations”, he articulated the concept of family resemblances.

He considered “games” as a set, which includes activities as varied as chess, archery and Super Mario.

He argued the set members have overlapping lists of features, but no single feature in common.


A biologist might propose every game is “an activity that serves as a direct or indirect rehearsal of skills useful to survival.”

But that does not matter here, since animals certainly do recognise things as resembling each other in a loose and informal way.

What does matter here is to understand the limitations of linguistics.

Natural language is loose; both words and grammar are very flexible.

No word, description or message has a universally-agreed meaning.

There is ambiguity and fuzziness in the meanings of words, and degrees of truth in how well a reality matches a description.

But for a system description to be holistic, unambiguous and testable, an artificial domain-specific language is needed.


Types and artificial domain-specific languages

A. J. Ayer (1910 to 1989) was a philosopher who wrote about language, truth, logic and knowledge.

He rejected metaphysics and much philosophical discussion as meaningless, that is, not provable or disprovable by experience.

He pointed out that every coherent description of a thing or situation is a type definition.

“In describing a situation, one is not merely registering a [perception], one is classifying it in some way,

and this means going beyond what is immediately given.” Chapter 5 of “Language, truth and logic”


A type defines features (qualities, characteristics, attributes, properties) shared by things that instantiate or realise that type.

To describe something as the game is to imply it is the only one of that named type.

To describe something as a game is to imply it is one of a set containing many things of that named type.


As Wittgenstein indicated, the set of things people call a game is elastic, and it is difficult to agree a type that defines every set member.

However, words do give humans the ability to classify or typify things more formally and rigidly.

Within a bounded context, or domain of knowledge, we can determinedly fix the meanings of words.

A domain-specific language is an island of inter-related words with stable meanings, in the ever-unfolding evolution of natural language.

Words are treated as type names, and each type is defined by statements relating it to other types.



In the domain of mathematics, type names include: “number”, “division” and “remainder”.

Type definitions include: “An even number is a number that is divisible by two with no remainder.”


In the domain of physics, type names include “force,” “mass” and “acceleration”.

Type definitions include: “A force equals the mass of a body times its acceleration.”

(By contrast, in the language of management science, a force is a pressure acting on a business, such as competition or regulations.)


In any business domain, people define the rules of their specific business in terms of relations connecting types.

“An employee has a salary and may be assigned to a project.”

“An order is placed by a customer; a customer can place several orders.”
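Such type definitions and relations can be sketched in code. Below is a minimal illustration, assuming hypothetical class names and fields (they are not taken from any real business model), of how the rules above typify entities and the relations connecting them:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Project:
    name: str

@dataclass
class Employee:
    name: str
    salary: float                      # "an employee has a salary"
    project: Optional[Project] = None  # "...may be assigned to a project"

@dataclass
class Customer:
    name: str
    # "a customer can place several orders"
    orders: List["Order"] = field(default_factory=list)

@dataclass
class Order:
    number: int
    placed_by: Customer                # "an order is placed by a customer"

alice = Customer("Alice")
order = Order(1, placed_by=alice)
alice.orders.append(order)
assert order.placed_by is alice and len(alice.orders) == 1
```

The point is only that each rule fixes the meaning of a type name by relating it to other types within the bounded context.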


Again, for a system description to be holistic, unambiguous and testable, an artificial domain-specific language is needed.


The logical syntax of language

Rudolf Carnap (1891 – 1970) was a member of the Vienna circle who contributed to the philosophy of science and of language.

He too rejected the sentences of metaphysics as pseudo-sentences, which prove to be either empty phrases or phrases which violate the rules of syntax.

“Of the so-called philosophical problems, the only questions which have any meaning are those of the logic of science.”

Page 8, Logical Syntax of Language, quoted in Wikipedia.


He defined the purpose of logical syntax thus:

“to provide a system of concepts, a language, by the help of which the results of logical analysis will be exactly formulable.”

“Philosophy is to be replaced by the logic of science – that is to say, by the logical analysis of the concepts and sentences of the sciences...”

Foreword, Logical Syntax of Language, quoted in Wikipedia.


He defined the logical syntax of a language thus:

“the systematic statement of the formal rules which govern [the language] together with the development of the consequences which follow from these rules.”

Page 1, Logical Syntax of Language, quoted in Wikipedia.


As I understand it, Carnap said:

A statement is only meaningful with respect to a given theory (a set of inter-related domain-specific predicate statements).

And only true to the extent it can be supported by experience or testing.


The basic logic of statements is called propositional logic (or calculus).

A proposition is a statement that asserts a fact that may be true or false.

In natural (as opposed to mathematical) language it takes the form of a sentence: e.g. the sun is shining.

Simple propositions can be connected by connectives (and, or, not and if) into a compound proposition: e.g. pigeons fly and eat corn. 

Propositional logic is the foundation of first-order or predicate logic.

A predicate is a verbal phrase, with or without an object, which declares a feature of a subject.


Proposition (in the form of a predicate statement)

Subject | Verb phrase | Object
A particular thing or an instance of a general type | A verb or verbal phrase that either stands alone or relates the subject to the object | A particular thing or a general type related to the subject by the predicate
The sun | is shining |
A game of monopoly | results in | a winner with the largest credit amount
A game | is a kind of | an activity
A game | is played by | one or more animate players
A game | results in | a measure of achievement
An order | is placed by | a customer
A customer | can place | several orders


Predicate statements can include variables.

And “a subject (or object)” can be read as “one instance in the set of subjects (or objects)”.

So, a type definition can be presented as a predicate statement.

E.g. “A game is a kind of activity and is played by one or more animate players and results in a measure of achievement.”
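That compound type definition can be read as a predicate to be tested against candidate instances. A minimal sketch, in which the Activity structure and its fields are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    players: int
    achievement_measure: str

def is_game(x) -> bool:
    """'A game is a kind of activity and is played by one or more
    animate players and results in a measure of achievement.'"""
    return (isinstance(x, Activity)          # is a kind of activity
            and x.players >= 1               # played by one or more players
            and bool(x.achievement_measure)) # results in a measure of achievement

chess = Activity(players=2, achievement_measure="checkmate")
assert is_game(chess)
assert not is_game("not an activity")
```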



To paraphrase von Foerster: “We live in the domain of types that we invented.”

The types idealise and symbolise the realities we observe and envisage.

These relations can be shown in the triangle with which you may now be familiar.




Types

<invent>                 <symbolise>

Human intelligences <observe & envisage> Realities


A type is a description; moreover, a description may be viewed as a type.

Every coherent description, even a very long and complex one, serves as a type definition.


In a domain-specific language, a word is a type name, and is defined by predicate statements relating it to other types.

Typically: “An instance of type A is related by this verb phrase to one or more instances of type B.”


But note the matching of a thing to a type can be incomplete.

A monothetic type, like “even number”, requires all instances of the type to have all its features.

A polythetic type, like “game” does not require that all instances of the type will have all its features.
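The distinction can be made concrete with set operations. A minimal sketch, in which the feature names and the matching threshold are hypothetical choices:

```python
# Monothetic: an instance must have ALL the type's defining features.
# Polythetic: an instance need only have enough of them.
GAME_FEATURES = {"has_players", "has_rules", "has_winner", "is_fun"}

def matches_monothetic(features: set) -> bool:
    return GAME_FEATURES <= features                    # every feature required

def matches_polythetic(features: set, threshold: int = 3) -> bool:
    return len(GAME_FEATURES & features) >= threshold   # enough overlap suffices

patience = {"has_players", "has_rules", "is_fun"}       # no winner or loser
assert not matches_monothetic(patience)                 # fails the strict type
assert matches_polythetic(patience)                     # still counts as a game
```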


And there can be degrees of truth in a predicate statement.

Newton’s laws describe the motion of things in the reality we normally experience.

The laws are true to the degree of accuracy we need, but only approximations, neither wholly true nor wholly false.

For more on fuzzy logic and fuzzy sets, try this link.

Aside on mathematics

Did numbers exist before life?

It is arguable that numbers only emerged when animals could recognise similarities between discrete things.

And that mathematics only developed when we could describe those similarities - as types.


Animals evolved to

·        perceive the universe in terms of discrete things in space.

·        recognise similarities between things - such as food items and enemies.

·        recognise if a group of similar members gained or lost a member (experiments show babies do that before they have words).

·        count members in a group of somewhat similar things (experiments show honey bees can count up to four).


Then, we humans evolved the ability to

·        create words, to suggest and discuss similarities between things

·        abstract communicable descriptive types from reality.


Numbers can be seen as types we use to describe groups containing somewhat similar members.

·        “onesomeness” is the property shared by all groups with one thing in them.

·        “twosomeness” is the property shared by any onesome to which we have added one.

·        “empty (zero)” is the property of any group from which we have removed all members.


The question is not so much whether there were numbers before life.

It is whether there were any types before life.

The premise here is that there were no types, no descriptions, before life.

The infinite variety of types we manipulate depends on our ability to identify similarities between things and typify them.


However, there were always things that can (in retrospect) be regarded as similar.

This was first true at the level of atomic particles, then stars and planets.

Numbers did not exist in the form of types before life forms started to create, remember and communicate types.

But numbers always existed in the sense that numerous similar instances of what we now choose to describe as a type have existed.

Systems as complex types

Humans have long sought to understand people through the labelling of classes or types.

The Greeks divided dramatic roles into types: hero, ingénue, jester and wise man.

In the 11th century, The Domesday Book classified people into types according to their rank and role in a feudal society.


1800s Social systems thinking

In the 19th century, sociologists, taking a lead from biologists, looked at societies as systems.

A social system is an island of orderly behavior observable in a group of animals.

David Seidl (2001) said the question is what to see as the basic elements of a social system.

“The sociological tradition suggests two alternatives: either persons or actions.”

Some see a set of actors who perform activities; others see a set of activities performed by actors.

To describe a social system is to typify actors in role descriptions and/or activities in rule descriptions.


1900s System theory

After the Second World War, the general concept of a system became a focus of attention.

The systems of interest here are islands of orderly behavior in the ever-unfolding process of the universe.

Especially, systems in which entities process information encoded in memories and messages that describe or direct reality.



Cybernetics focuses attention on how information feedback loops connect systems.

A control system receives messages that describe the state of actors in a target system.

In response, the control system sends messages to direct the activities of those actors.


E.g. In a missile guidance system, a control system senses spatial information and sends messages to direct the missile.

A brain holds a model of things in its external environment, which the organism uses to manipulate those things.

A business database holds a model of business entities and events, which people use to monitor and direct those entities and events.

And (as Michael A Jackson taught me in the 1970s) a software system holds a model of entities and events that it monitors and directs in its environment.
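The feedback loop described above can be sketched as a toy thermostat, the classic cybernetic example. The class names and numbers below are illustrative assumptions, not taken from the paper:

```python
# A control system senses the state of a target system and sends
# messages (here, method calls) to direct its activities.
class Heater:                       # the target system
    def __init__(self):
        self.on = False
        self.temperature = 15.0
    def step(self):
        self.temperature += 0.5 if self.on else -0.3

class Thermostat:                   # the control system
    def __init__(self, set_point):
        self.set_point = set_point
    def control(self, heater):
        # sense the state variable, then direct an action
        heater.on = heater.temperature < self.set_point

heater, stat = Heater(), Thermostat(set_point=20.0)
for _ in range(100):                # the information feedback loop
    stat.control(heater)            # descriptive message in, directive out
    heater.step()
assert abs(heater.temperature - 20.0) < 1.0
```

The loop settles into a small oscillation around the set point, which is the "island of orderly behavior" that the control system maintains.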



Roles, Rules & Variables

<create and use>                   <symbolise>

Systems thinkers <observe & envisage> Actors, Activities & Values


The science of cybernetics was quickly embraced within a broader system theory movement.


General system theory

Thinkers looked for what is common to systems in all disciplines, from hard sciences to the humanities.

General system theory incorporates cybernetic concepts such as:

·        System environment: the world outside the system of interest.

·        System boundary: a line (physical or logical) that separates a system from its environment.

·        System interface: a description of inputs and outputs that cross the system boundary.

·        System state: the current structure or variables of a system, which changes over time.


System theorists distinguish abstract system descriptions from concrete entities that instantiate (realise) them.

Again, a system description is a complex type; it symbolises both the structures and the behaviors of each entity that realises the system.


General system theory

Abstract / theoretical systems

<create and use>                    <symbolise>

System theorists <observe & envisage>  Concrete / empirical systems


Aside: one principle of general system theory is holism.

This means focusing on how components interact, rather than the internals of those components.

The focus is on properties of a whole system that emerge from interactions between its components.

But note: it is impossible to think holistically until you have (reductionistically) identified the components of the whole.

And since smaller systems are recursively composable into bigger ones, one person’s system is another person’s component.

In other words, one person’s emergent properties are another person’s internal attributes or workings.


The specification of software systems

Both cybernetics and general system theory foreshadowed computing.

To develop a software system is to apply system theory in a scientific way.

Just as scientists develop a theory that describes/typifies how the universe works, then test that it does work that way.

So, software developers write code that describes process types (behaviors) that create and use data types (structures), then test that the system works.


Software systems

Software system code

<create and use>                 <symbolises>

Software developers <observe & envisage> Run-time data and processes


Of the statements in a system definition we might say:

A statement is only meaningful with respect to a given system description, expressed in an artificial language governed by the rules of logic and mathematics.

The statement is only true to the extent it can be supported by system testing.


The specification of behaviors

The grammar of algorithms is used to define the behaviors of a software system.

The concept of an algorithm, known to Greek mathematicians, was formalized in the 1930s.

See Wikipedia for references to Gödel–Herbrand–Kleene, Alonzo Church, Emil Post and Alan Turing.


An algorithm that processes information typically:

·        reads from the data structures of input messages, and writes to the data structures of output messages

·        stores data in the data structure of a memory (for future processing), and retrieves data from the data structure of a memory.
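The read/write/store/retrieve pattern above can be sketched in a few lines. This is a minimal illustration; the message shapes and the order-counting behavior are hypothetical:

```python
memory = {}  # stored data structure, keyed by customer name

def process(message: dict) -> dict:
    """Read an input message, update the memory, write an output message."""
    name = message["customer"]                            # read input structure
    memory.setdefault(name, []).append(message["order"])  # store for later processing
    return {"customer": name,                             # write output structure
            "order_count": len(memory[name])}

out1 = process({"customer": "Alice", "order": 101})
out2 = process({"customer": "Alice", "order": 102})
assert out2 == {"customer": "Alice", "order_count": 2}
```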


The specification of data structures

As Carnap indicated, to specify the data structures used in memories and messages, we use a domain-specific language.

We define a data type in terms of a generic data type (number, text, date etc.), plus a domain-specific meaning.

We relate data types in the structure of a memory, using the grammar of predicate logic, as an entity-attribute-relationship model.

We relate data types in the structure of a message, using a regular grammar, as a structure of sequences, selections and iterations.
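A message structure of sequences, selections and iterations can be written down directly as a regular expression. The toy message grammar below is an invented illustration, not a real message format:

```python
import re

# A message is: a header (sequence), then one or more order lines
# (iteration), each of type A or B (selection), then a trailer (sequence).
message_grammar = re.compile(
    r"^HDR;"            # sequence: header comes first
    r"(ORD:(A|B);)+"    # iteration of order lines; selection of line type
    r"TRL$"             # sequence: trailer comes last
)

assert message_grammar.match("HDR;ORD:A;ORD:B;TRL")
assert not message_grammar.match("HDR;TRL")   # no order lines: invalid
```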


The specification of rules

Charles Antony Richard Hoare (1934 - ) is a British computer scientist.

Few have taken up his work on formal specification languages such as CSP and Z.

But many use Hoare logic to describe how a process changes the state of a system.

The logic is based on the Hoare triple, which may be expressed as: {Precondition} Process {Post condition}.

The meaning is: If the precondition is true AND the process proceeds to completion THEN the post condition will be true.
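A Hoare triple can be sketched with runtime assertions. The withdraw process and its conditions below are an invented illustration of the {Precondition} Process {Post condition} form:

```python
def withdraw(balance: float, amount: float) -> float:
    # {Precondition}
    assert amount > 0 and balance >= amount
    # Process
    new_balance = balance - amount
    # {Post condition}
    assert new_balance == balance - amount and new_balance >= 0
    return new_balance

assert withdraw(100.0, 30.0) == 70.0
```

If the precondition holds and the process runs to completion, the postcondition is guaranteed; if the precondition fails, the triple promises nothing, which here surfaces as an AssertionError.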


There are many other useful ways to specify rules, including the predicate logic of an entity-attribute relationship model.

But defining the entry and exit conditions of an activity seems the most universally applicable.

It underpins many ways to analyse requirements and declare what business processes do.

It can be seen in definitions of “value streams”, “business scenarios”, “use cases” and “service contracts”.

To define a postcondition is to define a requirement, result, goal or outcome of value to the business (at a micro or macro level).


At the risk of being simplistic:

·        A concrete system is composed of actors performing activities.

·        An abstract system typifies actors in role descriptions and activities in rule descriptions.

·        A role is a list of activities performable by an actor.

·        A rule is a precondition or postcondition of an activity.
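The four terms above can be tied together in a short sketch. In this hedged Python illustration (the clerk role, the approve activity and the actor names are all invented): the abstract system is the role and rule descriptions, and the concrete system is the assignment of actors to those roles.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    precondition: str    # rule: what must hold before the activity runs
    postcondition: str   # rule: what will hold after the activity completes

@dataclass
class Role:
    name: str
    activities: list     # the list of activities performable by an actor in this role

# Abstract system: actors typified in role descriptions, activities in rule descriptions
approve = Activity("approve order", "order is submitted", "order is approved")
clerk = Role("order clerk", [approve])

# Concrete system: actors assigned to roles, performing the activities
actors = {"Alice": clerk}
```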


None of the specification tools in this section is unique to computing.

They can be used in any human activity system description that needs to be unambiguous and testable.

Conclusions and remarks

Systems thinkers observe baseline systems, envisage target systems, and describe both.

This paper traces the pre-history of systems thinking and concludes with a few modern ideas.

It discusses the relationship of real world actors and activities to descriptions of them (as in data structures).

And the specification of rules as pre and post conditions of activities (as in business processes).

Below are some of the points made above.


Reality and descriptions of it

·        The systems of interest here are islands of orderly behavior in the ever-unfolding process of the universe.

·        Especially systems in which entities act (systematically) in response to information encoded in messages and memories.

·        And usually, systems in which that information describes or directs some entities or events in reality.

·        Our descriptions of reality are digital in the sense that we divide reality into discrete entities (structures) and events (behaviors).


Animal memories and messages

·        Only some descriptions of reality prove useful when tested.

·        Communication requires that a receiver decodes the same meaning from a message that a sender intentionally encodes in that message.


Human communications

·        Only humans invent words to symbolise things and their qualities.

·        The written record revolutionised our ability to think deeply, think straight, remember things and communicate.


Human languages, natural and artificial

·        The fluidity and imprecision of natural language enables human creativity and assists survival in a changing world.

·        For a system description to be holistic, unambiguous and testable, an artificial domain-specific language is needed.

·        A domain-specific language is an island of inter-related words with stable meanings, in the ever-unfolding evolution of natural language.


Thinking about systems

·        A system description is a complex type that symbolises the structures and the behaviors of each entity that realises the system.

·        To make testable assertions about a system’s behavior, we specify processes by their pre and post conditions.

·        A concrete system is composed of actors performing activities.

·        An abstract system typifies actors in role descriptions and activities in rule descriptions.

·        A role is a list of activities performable by an actor.

·        A rule is a precondition or postcondition of an activity.


By the way, some systems thinkers speak of systems maintaining order, or “negative entropy”.

It turns out that thermodynamics is tangential to most practical applications of general system theory.

“Cybernetics depends in no essential way on the laws of physics.”

“In this discussion, questions of energy play almost no part; the energy is simply taken for granted.” Ashby

Having said that, a few notes on thermodynamics are included below.


Read Systems thinkers and their ideas for more on the history of systems thinking in the 19th and 20th centuries.

On side issues, other papers of possible interest include:

This paper for a philosophy based on the triangular graphics included above.

Personality classification for more on personality types.

The Domesday Book for more on that.

Footnotes on thermodynamics

Generally, a system is an island of orderly behavior in the ever-unfolding process of the universe.

To maintain order (or negative entropy) in its structures and behaviors, a system must consume energy.


The need for energy to maintain order

Ludwig von Bertalanffy (1901-1972) considered an organism as a thermodynamic system in which homeostatic processes keep entropy at bay.

“By importing complex molecules high in free energy, an organism can maintain its state, avoid increasing entropy…."


Observation: while homeostasis was a focus of many early system theorists, it is not a property of all systems.

The fact is that social and business systems can grow, shrink, die and produce chaotic outcomes.


Information as a subtype of order

Erwin Schrödinger (1887-1961) also discussed the thermodynamic processes by which organisms maintain themselves in an orderly state.

 “Living matter evades the decay to thermodynamical equilibrium by homeostatically maintaining negative entropy (today this quantity is called information) in an open system.”

“The increase of order inside an organism is more than paid for by an increase in disorder outside this organism by the loss of heat into the environment.” Cornell University web site.


Observation: In 2009, Mahulikar & Herwig re-defined the negative entropy (negentropy) of a dynamically ordered sub-system.

Negentropy = the entropy deficit of an ordered system relative to its surrounding chaos.

Negentropy might be equated with “free energy” in physics or with “order”; some equate it with "information".

But in cybernetics and systems thinking "information" usually has a more specific meaning.

Information is the meaning created or found by an actor in a description of a reality.

To encode meaning in a description (or data structure) is to use some energy to create a very specific kind of order.


The tendency of systems in competition to optimise their use of energy

“Nature's many complex systems--physical, biological, and cultural--are islands of low-entropy order within increasingly disordered seas of surrounding, high-entropy chaos.

Energy is a principal facilitator of the rising complexity of all such systems in the expanding Universe, including galaxies, stars, planets, life, society, and machines.

Energy flows are as centrally important to life and society as they are to stars and galaxies.

Operationally, those systems able to utilize optimal amounts of energy tend to survive and those that cannot are non-randomly eliminated.” Cornell University web site.


Observation: this “optimal use of energy” principle has been at work in the evolution of biological systems.

But where minimising energy consumption is of little or no advantage, evolution proceeds in a suboptimal way.


The tendency of systems, where resources are cheap, to sub-optimise use of energy

The highest energy consumption per head is not found in countries that are especially orderly.

Energy consumption is highest in countries that are:

·        Too cold: Iceland, Canada,

·        Too hot: Trinidad and Tobago, Qatar, Kuwait, Brunei Darussalam, United Arab Emirates, Bahrain, Oman, or

·        Too rich to care about the cost: Luxembourg, and the United States.


Many modern software systems are over-complex and suboptimal, because we give them as much memory space and electricity as they need.


All free-to-read materials on the Avancier web site are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to it in whichever social media you use.

Reference: Postmodern Attacks on Science and Reality

Victor J. Stenger, Ph.D.


Recent trends in some academic circles have called into question conventional notions of truth and reality. The claim is made in these circles that all statements, whether in science or literature, are simply narratives -- stories and myths that do nothing more than articulate the cultural prejudices of the narrator. In this view, one narrative is as good as another, since each is expressed in the language of its particular culture and thus contains all the assumptions about truth and reality embedded in that culture. Texts have no intrinsic meaning. Rather, their meanings are created by the reader. The conclusions are then drawn that no narrative can have universal validity and that "Western" science is no exception.


Today's college students, in the United States and elsewhere, hear this line of reasoning from many of their social science and humanities professors. "Alternative medicine" proponents often use similar arguments to reject science as a method of determining health-related truths.


The assertion that "Western" science is unexceptional begins with a plausible, though ultimately misleading, notion that we humans lack access to any mechanism by which we can learn the truth about an objective reality that exists independent of human thought processes. Certainly, science relies on thought processes and does not always follow a clear, logical path to the conclusions it makes about reality. True, it never proves the correctness of these conclusions. Science knows nothing for certain about the world and must always couch its results in terms of probabilities or likelihoods. Often the choice between competitive scientific theories is based on taste, fashion, or subjective notions of simplicity or aesthetic appeal.


Agreed. Scientists can never be certain of the "truth" of their theories. Nevertheless, the predictions of scientific theories are very often sufficiently close to certainty that we all bet our life on them, such as when we are in an airliner or on an operating table. When predictions are that reliable, we can rationally conclude, if not prove, that the concepts on which they are based must have some universal validity. That is, they must somehow be connected to the way things really are.


For example, we cannot predict with complete certainty what will happen if we jump off a tall building. It is always possible that we might land in a crate of feathers that, by luck, just happens to protrude from a window on the floor below. However, based on the law of gravity, we can predict with high likelihood that we will pass that floor and hit the ground with an unhealthy splat. The law of gravity has been tested with enough experiments to safely conclude that the concept of gravity is "real."


Reality acts to constrain our observations about the world, preventing at least some of those observations from being completely random, arbitrary, or what we might simply like them to be. Although much of what we do in fact observe is random -- far more than most people realize -- not everything is. And while we humans can exert a certain amount of control over reality, that reality is not merely the creation of our thought processes. In a dream about jumping off a building, we might float to the ground unharmed. In thinking about jumping off the building, we can imagine whatever we want about the outcome. Superman can fly by and rescue us, in our fantasies. An airplane with a mattress on its wings can appear just in time.

But, in reality, we fall to the ground no matter how we might wish otherwise.


Without getting too pedantic about defining reality, let me just say that our own observations in everyday life make it quite clear that we and the objects around us are subject to externally imposed constraints that neither we nor those objects can completely control. If I could control reality with my thoughts, I would look like I did when I was twenty and still be as smart as I am now. I don't. In science, we use our observations about what happens when we are not dreaming or fantasizing to make reasonable inferences about the nature of what supplies the impetus for the constraints we record with our measuring apparatus.


Modern physics strongly suggests a surprisingly uncomplicated, non-mysterious "ultimate reality" that may not be what we wish it to be, but is supported by all known data. Furthermore, this reality is very much like what was inferred by some remarkable thinkers in the ancient world: a universe composed of elementary objects that move around in an otherwise empty void. I call this atomic reality.


This proposal flies in the face of current fashion. That fashion repudiates all attempts, within science and without, to describe a universal, objective reality. I repudiate that fashion. Where the validity of certain ancient and modern concepts of truth and reality are denied, I affirm them. Where arguments are made that Western science tells us nothing of deep significance, I assert that it remains our foremost tool for the discovery of fundamental truth.

Many natural science professors, with their heads buried mainly in research, have ignored the attacks on science and rational thought. When they happen to hear assertions that science is just another tall tale, they typically dismiss the notion as nonsense. Instead, they should be speaking out.


Dr. Stenger is professor of physics and astronomy at the University of Hawaii. He received his doctoral degree from UCLA in 1963 and has had an active research career in elementary particle physics and astrophysics. His projects have included elaborating the properties of quarks, gluons, neutrinos, CP violation, and the weak neutral current. He has worked on high-energy gamma-ray and neutrino astronomy. He is currently a collaborator on Super-Kamiokande, an experiment in a mine in Japan that recently confirmed the solar neutrino anomaly and is expected to be the decade's most definitive experiment on solar neutrinos, proton decay, and neutrino oscillations. His writings include many articles for skeptical publications and three books published by Prometheus Books: Not By Design: The Origin of the Universe (1988); Physics and Psychics: The Search for a World Beyond the Senses (1990); and The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology (1995), which the Times Literary Supplement described as "an interesting, provocative, informative and impassioned attempt to rescue physics from the contemporary unscientific or anti-scientific appropriations of its softer-edged theoretical self-description."