The evolution of types and logic
Copyright Graham Berrisford 2018. One of a hundred articles on
the System Theory page at http://avancier.website.
Last updated 10/01/2021 15:59
This is a supplement to the chapter on information, description and types here https://lnkd.in/dQNhNbd
It explores the evolution of types and logic.
Contents
Recap from “the evolution of description”
Seeing things as discrete, describable and differentiable
Logic in neurology and human intelligence
Conclusions and terms relevant to EA
There were no descriptions before life
Before life there were things, but no description of them.
An understanding of the description-reality relationship has to start in biology.
Non-human animals communicate about things in the world effectively.
That is the empirical demonstration that they can model the world – well enough.
Animals evolved the ability to internalise descriptions of things – in memories.
They also evolved ways to externalise and share descriptions of things – in messages.
Messages and memories can represent reality to a degree – but perfect accuracy (or truth) is elusive.
Messages and memories contain what may be called “signs” in semiotics.
A sign is only meaningful or useful at those points in time when it is encoded or decoded by an actor.
Its meaning is in the understanding/intent of the encoder - or in the understanding/reaction triggered in the decoder.
There were no types before symbolisation
Verbalisation enabled us (humans) to formalise our sense of family resemblances between similar things into types.
Typification is both a game we play and a basis of science.
A type is a description of a set member.
Types/descriptions help us manipulate things and predict their behaviour.
Identifying a thing as instantiating a type helps us manipulate that thing and predict its behaviour.
Science believes in typification and identification in so far as they are supported by evidence.
The evidence shows humans have succeeded very well in typifying and identifying things; else humankind would be extinct.
But science doesn't say types/descriptions are the truth; it only says they represent reality well enough to be useful.
There is no way to know the world “as it is”; the idea doesn’t even make sense.
Since Einstein's day, scientists take the view that all we can understand is models we make of the world.
That is equally true of non-verbal models and verbal models.
Language cannot be the basis of thinking.
Eons before languages, animals could recognise a thing they had perceived before, even if that thing had changed a little.
They could recognise “family resemblances” between any two things they perceive to be similar.
But only we humans invent words to describe things, and this is a game changer.
Yes, we do naturally use words in the fuzzy and fluid way we perceive the world.
Surely, the flexibility of natural language assists survival and creativity in an ever-changing world.
In science however, we look to specify regular or repeatable behaviors more formally.
We typify the roles of actors and the rules they follow in an unambiguous and testable way.
And to do this, we need an artificial, logically consistent, language.
This section goes on to contrast natural and artificial languages.
Natural languages
To begin with, philosopher Ludwig Wittgenstein (1889-1951) presumed language is precise.
So, a word is a type name that defines the members of a set.
A “game” is one of a set containing many things of that named type.
Later, Wittgenstein changed his mind, and turned his focus to the fluidity of language.
He considered games as a set that includes activities as varied as chess, archery and Super Mario.
He argued the set members have overlapping lists of features, but no single feature in common.
He used “game” as an example to tell us that words (in natural language) are not type names.
Rather, games exhibit family resemblances.
The set of things people call “a game” is elastic.
And it is difficult to agree a definition of the word that defines every set member.
This is disappointing if you are a mathematician who hoped that words define sets.
But it is no surprise to a biologist or psychologist coming at this from a different direction.
Natural language is a biological phenomenon rather than a mathematical one.
We use words to indicate one thing resembles another in a loose and informal way.
No word, description or message has a universally-agreed meaning.
And since the words and grammar we use are so flexible, there is ambiguity and fuzziness in natural language.
There are degrees of truth in how well a reality matches a description we make of it.
However, words do give us the ability to classify or typify things more formally and rigidly.
The marvel is not that natural language is imprecise.
The marvel is that we can force words to act as the names of types that do have one or more features in common.
And to create a holistic, unambiguous and testable description of a system, we must do this.
We have to create an artificial domain-specific language in which words do act as type names.
From words to types
A. J. Ayer (1910 to 1989) was a philosopher who wrote about language, truth, logic and knowledge.
He rejected metaphysics and much philosophical discussion as meaningless, that is, not provable or disprovable by experience.
He pointed out that every coherent description of an individual thing or situation is a type definition.
“In describing a situation, one is not merely registering a [perception], one is classifying it in some way, and this means going beyond what is immediately given.” Chapter 5 of “Language, truth and logic”
Where do classes or types come from?
They come from the human urge to formalise family resemblances by defining a family member.
Within a bounded context, or domain of knowledge, we can and do determinedly fix the meanings of words.
Domain-specific language: domain specialists <observe & envisage> family members, and <invent> types which <symbolise> those family members.
Artificial (domain-specific) languages
A natural language can contain several popular and alternative meanings for the same word.
In an artificial language, terms are fixed to type definitions, and related to each other.
A domain-specific language is an island of inter-related words with stable meanings (in the ever-unfolding evolution of natural language).
Words are treated as type names, and each type is defined by statements relating it to other types.
A type defines features (qualities, characteristics, attributes, properties) shared by things that instantiate or realise that type.
E.g. a biologist might say: “A game is an activity that serves as a rehearsal of skills that have (in the past) proved useful to survival.”
This article presents the view that description and knowledge are instruments that evolved alongside life.
This part promotes a type theory that allows for fuzziness and transience in the conformance of things to types, in contrast to the more rigid set theory you may be familiar with.
Our type theory begins with an assertion and a relation.
The assertion: to describe a thing (e.g. draw a picture of the sun) is to typify it (e.g. as a “round” and “yellow”).
The relation: a thing can instantiate (embody, realize, manifest, conform to) one or more types.
Before we can think of types, we have to think of things as being discrete, describable and differentiable.
“The most fundamental concept in cybernetics is that of ‘difference’, either that
· two things are recognisably different, or that
· one thing has changed with time… We assume change occurs by a measurable jump.” Ashby, 1956 (section 2/1)
The universe is an ever-unfolding process, in which space and time are continuous.
But in our perceptions, memorizations and descriptions of phenomena, we divide continuous things and qualities into discrete chunks.
This requires what psychologists now call the “just noticeable difference”.
The concept is traceable to Ernst Weber, a 19th century experimental psychologist.
His “difference threshold” (or JND) is the minimum amount by which stimulus intensity must be changed in order to produce a noticeable variation in sensory experience.
Using his eyes, Newton divided the continuous spectrum of light into (first five and later seven) discrete colours.
And thousands of years ago, using their ears, musicians divided the continuous spectrum of sound into discrete notes (see appendix 1 for more on music).
One actor may use a JND in any physical form to encode information for other actors to read.
E.g. I leave the office door open to convey the information that I am open to visitors.
Differentiating things in space at one time
To describe the continuous space of the universe, we differentiate discrete entities.
An entity’s boundary may be physical (e.g. the boundary of a solid in a liquid or gas).
Or logical, such as the players in a soccer team, who are grouped by wearing the same style of shirt.
To describe the continuous qualities of things, we differentiate discrete types.
Types appear as qualitative attributes (colors) and quantifiable amounts (height, width and depth).
Differentiating changes to one thing over time
Change can be classified in three ways:
· continuous or discrete
· state change or mutation
· natural/accidental or designed/planned.
The universe is an ever-unfolding process of continual change.
A social entity continuously and naturally mutates.
Its members change, and it responds to environmental changes that were not predictable or anticipated.
By definition, an activity system is an island of regularity in the universe.
A system that continually changes its nature would be a contradiction in terms.
If there is no stable pattern, no regularity, no repetition, then there is no system to describe.
A system cannot possibly be designed to continually mutate into infinite different systems.
Ashby and Maturana, separately, rejected continual mutation as undermining the concept of a system.
However, continuous change can be simulated by dividing changes into steps frequent and small enough to appear continuous.
Discrete designed mutation is possible (e.g. from system version 1 to system version 2).
Klaus Krippendorff (a student of Ashby) wrote as follows:
“Differences do not exist in nature.
They result from someone drawing distinctions and noticing their effects.”
“Bateson's ‘recognizable change’ [is] something that can be recognised and observed.”
To describe the passage of time and its effects, we divide its continuous flow into discrete intervals or changes.
We differentiate:
· discrete qualities of a thing: e.g. asleep to awake
· discrete versions of a thing: e.g. caterpillar to butterfly
· discrete generations of a thing: e.g. parent to child.
To differentiate discrete qualities, versions and generations is to create descriptive types.
A type classifies things that are similar in some way, to some degree.
Two things (e.g. mouse and caterpillar) can be seen as either:
· two instances of one type (e.g. two animals), or
· instances of two different types (e.g. one mouse and one caterpillar).
Two things (e.g. green box and blue box) can be seen as either:
· two instances of one type (e.g. two boxes), or
· instances of two different types (e.g. one green box and one blue box).
Typification is so fundamental to our existence we have countless words for ideas about things.
Those ideas may be called concepts, qualities, properties, characteristics, features, or attributes.
We describe a thing by relating it to general ideas, classes or descriptive types.
· E.g. In general, a human body has height and shape property types.
To instantiate a type is to embody, exhibit, exemplify, manifest or realize that type in a particular value.
We describe a particular thing as instantiating its descriptive types.
· E.g. In particular, one body’s height is 2 metres and its shape is humanoid.
Unfortunately, we are sloppy about distinguishing types from instances.
And confusingly, we refer to both general types and particular values as qualities, properties, concepts, characteristics, and attributes.
Describers use types (classes, categories) to categorise and describe things.
The three essential relationships are:
· Describers <observe and envisage> phenomena.
· Describers <create and use> types to help them recognize and deal with phenomena.
· Types <characterize> phenomena in terms of their structural and/or behavioral attributes.
A description can be seen as a (perhaps complex, perhaps polythetic) type that identifies properties of the thing described.
One description may conceivably be embodied or realized in many physical instances, by many real-world things.
Having described one universe, physicists can posit the existence of infinite other universes.
Describer(s) | One abstract type | Many physical instances | Many physical entities
Composer(s) | One symphony score | Many symphony performances | Many orchestras
Building architect(s) | A set of architecture drawings | Many concrete buildings | Many builders
Business architect(s) | One set of business roles and rules | Many business processes | Many business actors
Software engineer(s) | One program | Many program executions | Many computers
Game designer(s) | The rules of “poker” | Many games of poker | Many card schools
In each example above, the description is embodied or realized in many instances, by many real-world entities.
· The description is a concept, a type or typifying assertion.
· And conversely, a type or typifying assertion is a concept, a description.
Before life, light existed and was reflected from the surfaces of objects.
But no color existed in the world then, either as a sensation or a description.
Experiments show animal brains manufacture the sensation of color, from a mixture of the light they perceive and their experience.
They also show that we perceive the same light waves as different colors, depending on the situation.
For more on color perception, read https://www.bbc.co.uk/news/science-environment-14421303.
Surely the concept of an elephant did not exist before elephants evolved, before their kind came to be encoded in some DNA?
When we define the elephant type in words, we encode it in another kind of description.
For sure, the planet type did not exist before life, it is a construct of the human mind.
The truth of the statement “Pluto is a planet” depends on how that type is defined.
And what is true has changed, as astronomers define and redefine the planet type.
The things we describe (as elephants and planets) exist in time and space.
The descriptions we create (of “elephant” and “planet”) also exist in time and space.
The orbits of real planets (being pulled by gravity in many directions) are never perfect ellipses.
Where does the general concept or type called ellipse exist?
Plato believed the concept or type exists in an ideal or ethereal sense.
It makes no practical difference here whether you agree with Plato or not.
Because the only types we can discuss are ones encoded in our minds and other descriptive forms.
In short, this philosophy presumes that types are instruments that life forms developed as side effects of biological evolution.
There is no type outside of a description encoded in a matter and/or energy structure.
And when all descriptions of a “rock”, “plant” or “circle” are destroyed, that type will disappear from the cosmos.
The idea of an ethereal type is useless, redundant, and better cut out using Occam’s razor.
There is no universal classification of real-world things into types.
There are only classifications we find useful to model things for particular purposes.
For each domain of knowledge in science, we need a domain-specific language.
In such a language, a word symbolises a type, and a type is symbolised by a word.
To define and relate types in a domain-specific language, we use taxonomies.
A taxonomy defines types (aka classes) to which things can be assigned.
A thing has the properties of each class it is assigned to.
There are:
· flat taxonomies (the rainbow, a glossary)
· multi-dimensional taxonomies (e.g. the periodic table)
· class hierarchies with single inheritance (e.g. the biologist’s classification of species)
· class hierarchies with multiple inheritance (directed graphs)
· ontologies (see below).
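The class-hierarchy idea, and the rule that a thing has the properties of each class it is assigned to, can be sketched with single inheritance. The species and property names below are illustrative assumptions, not a real biological classification.

```python
# Illustrative class hierarchy with single inheritance: each subclass
# inherits the properties of every class above it in the hierarchy.
class Animal:
    kingdom = "Animalia"

class Mammal(Animal):
    has_fur = True

class Mouse(Mammal):
    genus = "Mus"

# A thing assigned to the Mouse class has the properties of each
# class it is (transitively) assigned to.
thing = Mouse()
print(thing.kingdom, thing.has_fur, thing.genus)  # Animalia True Mus
```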
We impose hierarchies on things to make them manageable, and find them via search trees.
But ontologies tend not to include deep class hierarchies.
An ontology is a taxonomy that relates objects of classes by actions.
Typically in the form of proposition statements of the kind: Subjects <verb phrase> Objects.
E.g. Orders <are placed by> Customers.
Predicate logic statements can constrain how things are related.
E.g. Every Order <is placed by> one Customer.
Data models are ontologies composed of such statements.
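An ontology of this kind can be sketched in code as a collection of Subject <verb phrase> Object statements. A minimal sketch in Python follows; the stored relations are illustrative assumptions, not taken from any particular business model.

```python
# An ontology sketched as proposition statements of the form
# Subject <verb phrase> Object. The relations are illustrative.
ontology = [
    ("Order", "is placed by", "Customer"),
    ("Customer", "has placed", "Order"),
    ("Payment", "settles", "Order"),
]

def relations_of(subject):
    """Return the verb phrases and objects related to a subject."""
    return [(verb, obj) for s, verb, obj in ontology if s == subject]

print(relations_of("Order"))  # [('is placed by', 'Customer')]
```

Because each statement stands alone, new relations can be added to the ontology without reworking what is already there.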
Why do they tend not to feature class hierarchies?
Over time, a thing can change from one type to another.
The passage of time turns subtypes into states, or roles, or values of a variable.
Classes/types can be related by generalisation (subtypes to supertype), and a variety of that called idealisation (physical to logical).
Classes/types can be related by association (between types), and a variety of that called composition (parts to whole).
Arguably, there is another kind of taxonomy: a composition hierarchy (e.g. an army or organisation chart).
This looks like a class hierarchy, but connects classes in whole-part rather than type-subtype relationships.
Also, the classes may be singletons, meaning there is only one instance of each.
Domain-specific ontologies
To create a holistic, unambiguous and testable description of a system, we need an artificial language.
And to define a domain-specific vocabulary, we make statements using predicate logic.
Business
In any business domain, people define the rules of their specific business in terms of relations connecting types.
Thousands of years ago, businesses defined types such as “order” and “payment”.
Today they define types such as:
· “An employee has a salary and may be assigned to a project.”
· “An order is placed by a customer; a customer can place several orders.”
Science
In the domain of physics, type names include “Force,” “Mass” and “Acceleration”.
Type definitions include: Force = Mass * Acceleration.
By contrast, in the language of management science, a force is a pressure acting on a business, such as competition or regulations.
Maths
Where do numbers come from?
From enumerating the members of a family, and describing that quantity.
In the domain of mathematics, abstract type names include: “number”, “division” and “remainder”.
Type definitions include: “An even number is a number that is divisible by two with no remainder.”
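A type definition like that can be read as a membership test: a thing instantiates the type if it passes the test. A minimal sketch in Python:

```python
# The type definition "an even number is a number that is divisible
# by two with no remainder" expressed as a membership test.
def is_even(number: int) -> bool:
    return number % 2 == 0

print(is_even(4), is_even(7))  # True False
```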
Propositional logic (3rd century BC)
Propositions are assertions (statements, sentences) that can be true or false.
Principia Mathematica is a three-volume tome written by Bertrand Russell and Alfred Whitehead between 1910 and 1913.
They tried to show that all of mathematics could be built from the ground up using basic, indisputable logic.
Their building block was the proposition: the simplest possible statement, which asserts a fact that may be true or false.
In natural language it takes the form of a sentence: e.g. the sun is shining.
A predicate is a verbal phrase, with or without an object, which declares a feature of a subject.
A proposition (in the form of a predicate statement) divides into a subject and a predicate; the predicate divides into a verb phrase and an optional object.

Subject | Verb phrase | Object
A particular thing or instance of a general type | A verb or verbal phrase that either stands alone or relates the subject to the object | A particular thing or a general type related to the subject by the predicate
The sun | is shining |
A game of monopoly | results in | a winner with the largest credit amount
A game | is a kind of | activity
A game | is played by | one or more animate players
A game | results in | a measure of achievement
An order | is placed by | a customer
A customer | has placed | one or more orders
The basic logic of statements is called propositional logic (or calculus).
Compound propositions connect atomic propositions, for example:
· pigeons fly <and> eat corn.
Logical connectives include:
· conjunction (“and”),
· disjunction (“or”)
· negation (“not”)
· choice (“if”).
For more connectives, see https://en.wikipedia.org/wiki/Logical_connective.
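These connectives can be sketched directly, since a language like Python has them built in. The pigeon propositions are the article's own example; modelling "if" as material implication (not p, or q) is an assumption of this sketch.

```python
# Atomic propositions: each is simply true or false.
pigeons_fly = True
pigeons_eat_corn = True

# Compound propositions built with logical connectives.
conjunction = pigeons_fly and pigeons_eat_corn       # "and"
disjunction = pigeons_fly or pigeons_eat_corn        # "or"
negation = not pigeons_fly                           # "not"
implication = (not pigeons_fly) or pigeons_eat_corn  # "if pigeons fly, they eat corn"

print(conjunction, disjunction, negation, implication)  # True True False True
```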
Simple propositions can be related in increasingly complicated networks.
Thus, Russell and Whitehead derived the full complexity of modern mathematics.
Predicate logic (1879)
Gottlob Frege's predicate logic (aka first order logic) was an advance on propositional logic.
It allows assertions to contain quantified variables (and non-logical objects).
So a predicate statement may be true or false depending on the values of these variables.
E.g.
· “An instance of the customer type has placed one or more instances of the order type.”
· “An order is placed by one customer.”
An entity-attribute-relationship model is - in essence - an inter-related collection of predicate statements.
You could call it a domain-specific language or part of one.
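The idea that an entity-attribute-relationship model is a collection of predicate statements can be sketched as data plus quantified checks over it. The Customer/Order structures below are illustrative assumptions.

```python
# An illustrative entity-attribute-relationship model held as data,
# with two predicate statements checked over it.
customers = {1: {"name": "Acme"}}
orders = {10: {"customer_id": 1}, 11: {"customer_id": 1}}

def every_order_is_placed_by_one_customer() -> bool:
    """'An order is placed by one customer' as a quantified predicate."""
    return all(o["customer_id"] in customers for o in orders.values())

def orders_placed_by(customer_id):
    """'A customer has placed one or more orders.'"""
    return [key for key, o in orders.items() if o["customer_id"] == customer_id]

print(every_order_is_placed_by_one_customer())  # True
print(orders_placed_by(1))  # [10, 11]
```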
Hoare logic (1969)
This logic is based on the Hoare triple, which describes the execution of a behaviour that changes the state of a system.
It takes the form {P} B {Q}.
B is a behaviour.
P and Q are assertions (predicate logic sentences).
P is the precondition; Q is the post condition.
When P is true, performing the behaviour will establish Q.
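The triple {P} B {Q} can be sketched with runtime assertions around a state-changing behaviour. The bank-deposit example below is an illustrative assumption, not from the source.

```python
# {P} B {Q} sketched with assertions. P must hold before the
# behaviour B runs; Q is established when B completes.
def deposit(balance: int, amount: int) -> int:
    assert amount > 0                       # P: precondition
    new_balance = balance + amount          # B: the behaviour changes state
    assert new_balance == balance + amount  # Q: postcondition
    return new_balance

print(deposit(100, 50))  # 150
```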
It appears logic is not the basis of neurology and human intelligence.
Rather, logic is a painstakingly constructed artifact of human intelligence.
Here is a sad story.
https://getpocket.com/explore/item/the-man-who-tried-to-redeem-the-world-with-logic
In the 1940s and 50s, Wiener, McCulloch and Pitts (USA), and Ashby and Turing (UK), shared the same idea.
That the human brain is logical and can be mirrored by a digital computer.
In 1943 Pitts and McCulloch proposed the first mathematical model of a neural network.
Later, experiments showed animal thinking is not digital.
In 1959, "What the Frog’s Eye Tells the Frog’s Brain" (Maturana, Lettvin, McCulloch and Pitts) demonstrated
"analog processes in the eye were doing at least part of the interpretive work" in image processing.
As opposed to "the brain computing information… using the exacting implement of mathematical logic".
This led Pitts to burn years of unpublished research.
"Human intelligence seems to be able to find methods… 'transcending' the methods available to machines." Alan Turing
Today, it is still unclear how far artificial neural networks mirror the brain’s biology.
Experiments have shown that most people find logic difficult, unnatural to their normal way of thinking.
Nevertheless, logic is fundamental to the digital business systems we design.
These systems are deterministic; they follow the rules we give them.
Rudolf Carnap (1891 – 1970) was a member of the Vienna Circle who contributed to the philosophy of science and of language.
Carnap has been called a logical positivist, but he disagreed with Wittgenstein.
He considered philosophy must be committed to the primacy of science and logic, rather than verbal language.
Carnap’s first major work, Logical Syntax of Language, can be regarded as a response to Wittgenstein's Tractatus.
“the sentences of metaphysics are pseudo-sentences which on logical analysis are proved to be either empty phrases or phrases which violate the rules of syntax.
Of the so-called philosophical problems, the only questions which have any meaning are those of the logic of science.
To share this view is to substitute logical syntax for philosophy.”
— Carnap, Page 8, Logical Syntax of Language, quoted in Wikipedia.
He defined the purpose of logical syntax thus:
“to provide a system of concepts, a language, by the help of which the results of logical analysis will be exactly formulable.”
“Philosophy is to be replaced by the logic of science – that is to say, by the logical analysis of the concepts and sentences of the sciences...”
Foreword, Logical Syntax of Language, quoted in Wikipedia.
He defined the logical syntax of a language thus:
“the systematic statement of the formal rules which govern [the language] together with the development of the consequences which follow from these rules.”
Page 1, Logical Syntax of Language, quoted in Wikipedia.
Carnap’s second major work, Pseudoproblems in Philosophy, asserted that many metaphysical philosophical questions were meaningless.
His Principle of Tolerance says there is no such thing as a "true" or "correct" logic or language.
His concept of logical syntax is important in formalising the storage and communication of information/descriptions.
Computers require that logical data structures are defined using a formal grammar, such as a regular expression.
It is said that Carnap’s ideas helped the development of natural language processing and compiler design.
As I understand it, Carnap said:
A statement is only meaningful with respect to a given theory - a set of inter-related domain-specific predicate statements.
And only true to the extent it can be supported by experience or testing.
You cannot be interested in business rules and not in logic, since they are one and the same!
To define digital business systems we use domain-specific languages and predicate logic.
Structural rules
We specify system state variables using entity-attribute-relationship models - composed of Frege’s predicate statements.
E.g. Every order is placed by one and only one customer.
A data model is nothing more or less than a compound of such predicate statements.
Behavioral rules
We specify system behaviours using use case (epic) descriptions – a narrative form of the Hoare triple.
Behavioural rules relate to how events are processed when they occur.
E.g. If the named customer already exists (precondition), and the order event is processed successfully, then the order will be attached to the customer in the system's state structure (postcondition).
If you asked me to specify an application using only two artifacts, I would choose use case (epic) descriptions and a data model!
Rules in systems thinking
In classical system theory and cybernetics (after Ashby), systems are deterministic, rules are fixed (though they can be probabilistic).
The system is the rules; the rules are the system.
How far a "real machine" realises the system by conforming to the rules is another question.
Some systems are realised by software actors that never break the rules.
Other systems are realised by human actors who may break the rules, which means they are acting outside the system.
We can design "exception" rules for exception cases.
But at some point, the system passes exception handling to a human being and says "Sort this out as best you can".
In social systems thinking (after Ackoff), the actors are animate entities (usually human) that communicate in social networks.
They have goals and they make choices.
These social entities are not systems in the sense of activity systems.
They are goal-driven social networks that are asked to realise systems.
First, animals evolved to recognise family resemblances.
Their brains conceptualised things in what we can think of as a fuzzy open taxonomy (akin to that in a natural language dictionary).
Later, humans developed the ability to define a type more formally by naming it and defining it with reference to other types.
We developed the ability to formalise type definitions in glossaries and controlled vocabularies, and use them to build taxonomies and ontologies.
In the 20th century we used ontologies - built from predicate statements – to define the contents of digital data stores.
Artificial intelligence tools abstract new types from stored data (often from big data).
Here are definitions of relevant terms.
Data: a structure in which meaning is encoded and/or from which meaning can be decoded.
Information: a meaning encoded in data and/or decoded from data.
Knowledge: information that is accurate enough to be useful.
Wisdom: the ability to assess the accuracy of information and use knowledge effectively.
Intelligence: the ability to abstract information from data, and types from instances, and use the abstractions.
Artificial Intelligence (AI): the ability of computers to abstract information from data, and types from instances, and use the abstractions.
Data model: an ontology or schema composed of interrelated predicate statements of the form: object <is related to> object.
Triple store: a schema-less way of storing predicate statements, so every predicate statement expressed can be stored.
The larger and richer the triple store grows, the more an AI application can learn from it.
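A triple store of this kind can be sketched as a schema-less collection of (subject, predicate, object) tuples with a wildcard query. The stored facts below are illustrative assumptions.

```python
# A schema-less triple store: any predicate statement can be added
# without first defining a schema for it.
triples = [
    ("order-1", "is placed by", "customer-1"),
    ("customer-1", "is named", "Acme"),
    ("order-1", "has total", 250),
]

def query(subject=None, predicate=None):
    """Return matching triples; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)]

print(query(subject="order-1"))
```

Because there is no schema, the store grows by simply appending new triples, which is what lets an application keep learning from it.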
Learning is not the same as getting more intelligent.