One of about 300 papers at http://avancier.website. Last updated 26/03/2017 18:37
From the sketchiest of visions and requirements, to the most detailed software designs, architects describe complex systems.
What theory do we have for how to describe a system? Is there a general description theory?
This paper tells a story of description that starts in science and ends in philosophy.
The aim here is not to present a new “a priori” view of the world.
The aim is to integrate existing theories into a coherent story.
The story of description is well-nigh magical, and not limited to humans.
A honey bee can dance to describe where a discrete pollen source can be found.
It can communicate using a code that not only other bees but also humans can decode.
And recent experiments show that though a honey bee has a brain the size of a grain of sand, it can count up to four.
Physicists describe the world in terms of space and time.
They see our world as embedded in a four-dimensional space-time continuum.
It is called a “continuum” because it is assumed that space and time can be subdivided without any limit to size or duration.
This work presumes there is a real world out there, separate from any perception of it.
The universe is a continually unfolding process.
Stuff exists (for a while) as a product or side effect of stuff happening.
In the beginning, there was a lot of energy, and then a lot of disordered matter.
Gradually, the laws of physics and the chances of natural evolution created things so orderly they can seem to be designed.
Planets fell into orderly repeating orbits; tides ebbed and flowed on a daily basis.
So now some things in the universe behaved in an orderly fashion.
But still, there were no actors to describe that order.
Before life there could be no description.
There was stuff out there; relatively persistent stuff and relatively transient stuff.
There was matter, motion and energy, but no description of it.
[Diagram: Universe before life. No describers; only matter and energy.]
Darwinist evolution in biology is not directed by a fixed, forward-looking goal; rather, species adapt to an ever-changing environment.
Eventually, by consuming energy, some matter became organised into an organic life form.
An organism is an individual life form - or single-celled entity, a plant, an animal.
It is a system - meaning an organization of parts that cooperate in repeated processes to a common end.
Plants and then animals evolved to sense and react to variations in continuous variables, both internal (hydration, salinity, pH) and external (heat, light, sound).
They adapted their behavior to changes in these variables - autonomously.
[Diagram: Autonomic adaptation to change. Animals <inherit and use> senses of variables, which <abstract from> the matter and energy they <sense and react to>.]
“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.”
The universe is a continuum.
But descriptions of it divide it into discrete entities and events.
How? We detect abrupt changes in the form or state of the world - such as solid-fluid phase boundaries.
Animals evolved to perceive the world as composed of discrete things.
Recognising occurrences of food items, friends and enemies aids survival.
Animals adapt their behaviour to things they perceive.
This implies animals hold an internal model of the sense of a thing.
The model is a bio/electro/chemical pattern of some kind, inherited from a parent.
The animal can match new perceptions to it, and act according to inherited rules.
[Diagram: Autonomic adaptation to things. Animals <inherit and use> senses of things, which <abstract from> the matter and energy they <sense and react to>.]
Animals evolved to recall new things they perceived.
They can record a first perception in a first sense memory.
They encode a second perception in a second sense memory.
Then match the two memories, and act according to whether they recognise a recurrence of the first thing.
[Diagram: Intelligent adaptation to things. Animals <create and use> remembered senses of things, which <abstract from> the matter and energy they <sense and react to>.]
Research tells us animals can retain and recall memories of entities that their senses detect in the world.
But it also suggests most animals remember discrete entities very much better than discrete events.
And recent work suggests the human ability to remember arbitrary events is unique.
(Note also, humans remember things they see and touch better than what they hear.)
Conant’s “Good Regulator Theorem” is a fundamental principle of cybernetics.
It says "every good regulator of a system must be a model of that system".
[Diagram: Regulators <create and use> models of system variables, which <abstract measures of> the systems they <monitor and regulate>.]
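Conant's theorem can be illustrated with a minimal sketch (the class names and constants below are invented for illustration, not Conant and Ashby's formal construction): a thermostat regulates a room only by holding an internal measure, a model, of the room's temperature.

```python
# A minimal sketch of the Good Regulator idea: the regulator works
# through an internal model (here, one number) of the system variable.

class Room:
    """The regulated system: temperature drifts toward the outdoors."""
    def __init__(self, temp, outside=5.0):
        self.temp = temp
        self.outside = outside

    def step(self, heating):
        self.temp += 0.1 * (self.outside - self.temp)  # heat loss
        self.temp += heating                           # heater input

class Thermostat:
    """The regulator: its model is an abstract measure of the room."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
        self.model = None  # internal model of the system variable

    def regulate(self, sensed_temp):
        self.model = sensed_temp           # update model from sensing
        error = self.setpoint - self.model
        return max(0.0, 0.5 * error)       # heat only when too cold

room, stat = Room(temp=12.0), Thermostat(setpoint=20.0)
for _ in range(100):
    room.step(stat.regulate(room.temp))

print(round(room.temp, 1))  # 17.5
```

The proportional controller settles near, though not exactly at, its setpoint (the classic steady-state offset); the point here is only that the regulation works through the regulator's model of the system variable.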
Similarly, we humans monitor and direct entities and events in our environment.
We must have a model of those entities and events, however sketchy that model may be.
[Diagram: Humans <create and use> mental models, which <abstract concepts from> the environments they <monitor and influence>.]
The mental model discussed in these papers is an abstraction for which we have only indirect evidence.
But that evidence is overwhelming (honey bees tell other bees where the pollen is, and they find it).
So, we can use the term without needing to explain how that model is made or remembered.
Intelligence adds the ability to model future things as well as perceived things.
[Diagram: Envisaging interesting things. Describers <create and use> models, which <abstract from> the interesting things they <observe and envisage>.]
It is often said that perception is reality.
However, animals make predictions based on perceptions.
When predictions are tested and found to come true, then we have some assurance that, also, reality is perception.
That is how biological evolution helped animals with accurate sense models to survive, thrive and reproduce.
(This may be considered contrary to Wittgenstein’s argument that a private language is impossible).
To exclaim “Danger!” is to convey a meaning, a concept, a type that classifies the current situation.
“In describing a situation, one is not merely registering a [perception], one is classifying it in some way, and this means going beyond what is immediately given.”
Chapter 5 of “Language, Truth and Logic”, A. J. Ayer.
To compare every new thing we see against every remembered thing would be a very inefficient way to look at the world.
If we can remember a common pattern of sense memories, then we can compare a new thing against that pattern.
This must be a more efficient way of recognising what a thing is and how to deal with it.
Evidently, animals have evolved to perceive the world as composed of things that share family resemblances.
And learning depends on the ability to detect resemblances between things, and act accordingly.
Experiments show animals remember family resemblances – perhaps as a group of sense memories.
This forms an abstraction - a generalisation – a pattern - a “type” – that matches a set of similar things.
[Diagram: Describers <create and use> patterns, which <encode qualities of> the similar things they <observe>.]
Consider the shadow of a man on a cave wall; a pattern of light and shade.
It has no meaning until viewed by an actor who is able to recognise it as a description of something.
It might be recognised as the body shape of the man type.
Dogs use their sense memories to help them survive by recognising things and anticipating the behaviour of things.
A dog senses what it observes to be a discrete thing in reality, say a bone.
The dog has no words for the qualities and quantities that its senses detect.
Nevertheless, the dog does manage to encode a memory of the bone thing in its brain.
The memory might be a grouping of sensory measures (smell, taste, texture…) and experiences (chewable, pleasurable).
Later, when the dog finds a new thing that resembles the old thing, it can act according to its recorded experience.
The resemblance must, near enough, group the same qualities in similar quantities.
Repeated observations of resemblances can strengthen the dog’s memory of how best to deal with things of that type.
The goal of this learning process is to optimize performance and minimize the number of mistakes made in dealing with new things.
If an actor can recall a general pattern or type, then a startling new idea emerges; it becomes possible to count things of a type.
Animals can count things that resemble each other, and communicate those numbers to each other.
“Honeybees are clever little creatures.
They can form abstract concepts, such as symmetry versus asymmetry.
And they use symbolic language — the celebrated waggle dance — to direct their hivemates to flower patches.
New reports suggest that they can also communicate across species, and can count — up to a point.” http://www.livescience.com/2909-bees-count.html
Do bees really count, as you and I do?
Perhaps their ability is to differentiate between visual images showing two, three or four things?
A number is a description made by a describer who can do three things.
1. observe or envisage things as being discrete (e.g. solid objects, fishes, words).
2. observe or envisage a group of discrete things by
· aggregating them (e.g. items in basket, fish in a lake, words in a sentence)
· typifying them by their resemblances (e.g. the shape of apples, the functions of nouns)
3. count the things in the group.
Types enable counting, numbers imply types, and types can be organised in type hierarchies.
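The three steps above can be sketched in a few lines (the observed things are hypothetical): once things have been typified by resemblance, here trivially by equal names, counting falls out of the grouping.

```python
# A sketch of counting things of a type: observe discrete things,
# typify them by a resemblance test, then count each type's instances.

observed = ["apple", "fish", "apple", "word", "fish", "apple"]

counts = {}
for thing in observed:                      # step 1: discrete things
    counts[thing] = counts.get(thing, 0) + 1  # steps 2 and 3: typify, count

print(counts)  # {'apple': 3, 'fish': 2, 'word': 1}
```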
[Diagram: Abstraction of quantities from types. Describers <create and use> numbers, which <abstract totals of> the things of a type they <observe and envisage>.]
Sense memories help animals to direct behavior to their advantage.
The direction can be autonomic: you sense heat and automatically start sweating.
Or conscious: you see a train coming, predict it will run you over, and step off the track.
Animals who can form more accurate mental models survive and thrive better than those who form inaccurate models.
[Diagram: Animals <create and use> mental models, which <abstract concepts from> the realities they <observe and envisage>.]
Here, mental model does not mean a philosophy of life, world view or “Weltanschauung”.
It means a model of particular concepts/meanings that actors perceive or envisage in realities.
We form and store such mental models in our minds, in written words, and in other ways.
We share mental models using a variety of communication mechanisms.
· Gesturing animatedly to an oncoming train is enough to share the mental model of that as a threat to survival.
· Spoken words translate mental models into transient sound waves – heard by a currently present audience.
· Written words transcribe mental models into persistent graphical symbols – readable by future and remote audiences.
Logical and physical mental models.
Physical mental models are bio-electro-chemical.
Logical mental models are concepts/meanings that actors perceive or envisage.
Physical mental models are private, not directly shareable.
Logical mental models are shareable between animals.
How do we know both kinds of models exist?
We define a logical model, and predict actions to follow from sharing it.
We test that the logical model has been shared between animals.
Successful tests imply the logical model has been stored in a physical form.
E.g. we express the logical model represented in a honey bee’s dance as the “distance” and “direction” of a pollen source.
Then test that other bees find the pollen in the location thus described.
This shows those bees have stored mental models long enough to complete actions as predicted.
The storage of each physical mental model must be bio-electro-chemical, but how it works is irrelevant.
Physical models may be fragile, unstable, and inconsistent.
Nevertheless, evolution first made them good enough to be storable and actable on.
Then equipped animals with tools to share the logical information/meaning in them.
Animals evolved to communicate descriptions of things with a circle of fellows present at a particular time.
Actors translate their physical models into a code communicable to other actors - usually of the same species.
· Honey bees dance to describe pollen location.
· Dogs bark to tell us a stranger has arrived.
· We speak using words that symbolise qualities and quantities of things observed or envisaged.
Some animals can recognise things by name: a bottlenose dolphin recognises another by its signature whistle.
Some can speak in a very limited way: domesticated dogs communicate using meaningful barks, growls, howls and whimpers.
But the remainder of the story is essentially a human one.
Verbal descriptions of perceptions and memories are the foundation of advanced reasoning and communication.
They help us to communicate about complex things, both observed things and envisaged things.
We expect listeners/readers to recognise speakers’/writers’ complex meanings, and act accordingly.
We humans developed the ability to communicate descriptions of realities using spoken words.
Then found we could describe words using words, and convey these definitions to a message receiver.
We developed ways to preserve and share descriptions using written words.
We invented dictionaries, and conventions for defining words (by genus and difference).
Long ago, astronomers observed things in the sky that shared the property of being a light source, a “star”.
They noticed some “stars” shared a second common property, that of being “wandering”.
They invented the type name “planet” as a short-hand label for these two properties of a planet.
The spoken word is transient, and has a limited audience.
To illustrate verbal type names and definitions, this work has to use the written form.
Real things        are described using a type name    which is to imply the ideas/properties in this type definition
Venus, Mars etc.   “planet”                           Wandering thing, Starry thing
The table above shows how human actors idealise real things, and encode those ideas in verbal descriptions.
We use type names as a short-hand to describe things that are observed or envisaged.
By defining words using words, our brains surely grow a vast network of type names, associated with each other and with other sense memories.
A type might be recognised:
· in a private language known only to one individual
· in a domain-specific language shared by a specialist group
· as a universal concept (universally useful and logically consistent with previously agreed universal types).
In the long view, types turn into states.
· Over a short time, you see objects you can classify under structural types (e.g. child and adult).
· Over a long time, apparently static object types now appear as the transient states of processes.
This is why type hierarchies don’t work well as persistent database structures.
The English language used here is rich in words that have multiple meanings and words that share meanings.
Consider these words: type, concept, property, quality, feature and attribute. They are all used in definitive descriptions.
It seems some of these words work better with certain verbs than others. E.g.
· A rose bush is a plant that instantiates the types “thorny”, “flowering” and “bushy”
· A rose bush is a plant that embodies the concepts “thorny”, “flowering” and “bushy”
· A rose bush is a plant that exhibits the properties “thorny”, “flowering” and “bushy”
· A rose bush is a plant that has or possesses the qualities, features or attributes “thorny”, “flowering” and “bushy”.
Further confusion arises in discussion because concepts, properties and qualities exist in two forms: as types and instances (values or facts).
The term "property" is used ambiguously to mean property type (“height in metres”) and/or property instance (1.74 metres).
The term "concept" is probably used more often for the property type than the property instance.
Beware also that the terms “class” and “type” are widely confused, for example, in programming languages.
We use “type” as above – with reference to the description of one individual thing (e.g. one planet).
We try to reserve “class” to mean a set or group (e.g. all planets), which could have a property of its own (e.g. average volume).
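The distinction can be sketched in code (the names and volumes below are illustrative, not accurate planetary data): a type describes one individual thing, while a class, a set of such things, can carry properties of its own, such as an average volume.

```python
# A sketch of type vs class: the type describes one individual;
# the class (a set) has aggregate properties no individual has.

from dataclasses import dataclass

@dataclass
class PlanetType:            # the type: describes one individual planet
    name: str
    volume_km3: float        # a property instance held by each planet

planets = [                  # the class: a set of things of the type
    PlanetType("Venus", 9.0e11),
    PlanetType("Mars", 1.6e11),
]

# A property of the class (the set), not of any one planet
average_volume = sum(p.volume_km3 for p in planets) / len(planets)
print(f"{average_volume:.2e}")  # 5.30e+11
```

Note also the clash of vocabularies: what this paper calls a “type” is what the Python keyword `class` declares, which illustrates exactly the confusion in programming languages mentioned above.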
The by-products of evolution now include mathematics and computing.
Mathematicians define mathematical objects in terms of strict types.
They form mathematical proofs using those types.
Computers read process types and perform process instances.
They create data structures that instantiate data types.
They are based on Boolean logic, in which everything is "true or false" (1 or 0).
Whether everything is ultimately describable in binary terms is a philosophical question worth pursuing.
But in practice, much computer input and output data is somewhere between true and false.
The evolutionary story of description runs on to artificial intelligence.
What follows is edited from http://whatis.techtarget.com/definition/fuzzy-logic.
The words we use are not strict types (except within a very limited ontology).
Natural language (like most of the universe) is not reducible to true or false sentences.
A "state of matters" or "fact" that we recall or communicate has "degrees of truth".
0 and 1 may encode wholly false and wholly true; but there are degrees of truth in between.
E.g. the result of comparing two things is not "tall" or "short" but ".38 of tallness."
Think of it this way:
· the strict types used in maths evolved from the fuzzy types in nature.
· the binary or Boolean logic used in computers is a special case of the fuzzy logic in nature.
Fuzzy logic seems closer to the way our brains work.
We aggregate data and form a number of partial truths which we aggregate further into higher truths.
When certain thresholds are exceeded, further results such as motor reaction, are triggered.
A similar process is used in neural networks, expert systems and other artificial intelligence applications.
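The fuzzy reasoning described above can be sketched as follows (the membership functions and thresholds are hypothetical): qualities take degrees of truth between 0 and 1, partial truths are aggregated, and a reaction is triggered when a threshold is exceeded.

```python
# A sketch of fuzzy logic: degrees of truth instead of true/false,
# aggregated with min (a common fuzzy 'and'), then thresholded.

def tallness(height_m, short=1.5, tall=2.0):
    """Degree of truth of 'tall', rising linearly from short to tall."""
    return min(1.0, max(0.0, (height_m - short) / (tall - short)))

def nearness(distance_m, near=1.0, far=10.0):
    """Degree of truth of 'near', falling linearly from near to far."""
    return min(1.0, max(0.0, (far - distance_m) / (far - near)))

# Aggregate partial truths into a higher truth
threat = min(tallness(1.69), nearness(3.0))

print(round(tallness(1.69), 2))  # 0.38 of tallness, neither tall nor short
print(threat > 0.3)              # True: threshold exceeded, reaction fires
```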
Fuzzy logic is essential to the development of human-like capabilities for AI (aka artificial general intelligence).
AI is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the system can find a solution.
There are now robots that can perform the magic of abstracting a general type of thing from observations of particular things.
“Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning.
In supervised learning, an algorithm is given samples that are labeled in some useful way.
For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible.
The algorithm takes these previously labeled samples and uses them to induce a classifier.
This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm.
The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples.” Wikipedia
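The supervised learning described in the quotation can be sketched with a tiny 1-nearest-neighbour classifier (the mushroom features and samples below are invented for illustration): labeled samples induce a classifier that then assigns labels to samples it has never seen.

```python
# A sketch of supervised learning: induce a classifier from labeled
# samples, then label unseen samples by their nearest neighbour.

def induce_classifier(samples, labels):
    """Return a function that labels a new sample by nearest neighbour."""
    def classify(new):
        dists = [sum((a - b) ** 2 for a, b in zip(s, new)) for s in samples]
        return labels[dists.index(min(dists))]
    return classify

# Hypothetical mushroom features: (cap size, stem length, spots 0/1)
samples = [(2.0, 3.0, 0), (8.0, 9.0, 1), (2.5, 3.5, 0), (7.5, 8.0, 1)]
labels = ["edible", "poisonous", "edible", "poisonous"]

classify = induce_classifier(samples, labels)
print(classify((2.2, 3.1, 0)))  # edible
print(classify((8.1, 8.5, 1)))  # poisonous
```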
Biological and psychological evolution have developed and refined how we typify and describe reality.
The evolutionary story runs something like this.
· The big bang and the space-time continuum
· Sensing and reacting to continuous change
· Sensing and reacting to discrete things
· Remembering perceived entities and events
· Creating and using mental models
· Recognising family resemblances (fuzzy types)
· Counting things
· Abstracting logical mental models from physical ones
· Verbalising mental models
· Using words to label, define and communicate fuzzy types
· The strict types used in mathematics and computing
· Fuzzy logic and artificial intelligence
This story also leads us to a particular view of philosophy, outlined in the next paper.
All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.
If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.