The science of systems: language and logic

https://bit.ly/2xKc9zW

Copyright Graham Berrisford 2018. One of a hundred papers on the System Theory page at http://avancier.website. Last updated 07/04/2019 14:26

Find that page for a link to the next System Theory for Architects Tutorial in London.

 

You can’t understand systems without answering questions about the nature of description and reality.

These questions are often considered the domain of philosophers and linguists such as Nietzsche and Wittgenstein.

There is some philosophy here, but the perspective is primarily scientific.

 

Thinking about systems is often considered the domain of sociologists.

But if you are looking for discussion of social systems, you’ll have to wait for a while.

 

Some postmodernists have attacked science and reality, as reported in “A philosophical position statement”.

To the contrary, this story respects hard science.

It positions systems theory in a brief history of the universe and human evolution, and as a branch of science.

Our story of description and reality is rooted in phenomena that emerged through the evolution of animals and human thought:

·         memories and messages

·         languages and logic (this paper)

·         the use of logic to describe systems.

Contents

Recap

Human languages

Taxonomies

Ontologies

Logic

Business rules

Conclusions and terms relevant to computing and AI

FOOTNOTES

Logic in neurology and human intelligence

Logic in philosophy

 

Recap

 

There were no descriptions before life

Before life there were things, but no description of them.

An understanding of the description-reality relationship has to start in biology.

Non-human animals communicate about things in the world effectively.

That is the empirical demonstration that they can model the world – well enough.

 

Animals evolved the ability to internalise descriptions of things – in memories.

They also evolved ways to externalise and share descriptions of things – in messages.

Messages and memories can represent reality to a degree – but perfect accuracy (or truth) is elusive.

 

Messages and memories contain what may be called “signs” in semiotics.

A sign is only meaningful or useful at those points in time when it is encoded or decoded by an actor.

Its meaning is in the understanding/intent of the encoder - or in the understanding/reaction triggered in the decoder.

 

There were no types before symbolisation

Verbalisation enabled us (humans) to formalise our sense of family resemblances between similar things into types.

Typification is both a game we play and a basis of science.

A type is a description of a set member.

Types/descriptions help us manipulate things and predict their behaviour.

Identifying a thing as an instance of a type helps us manipulate that thing and predict its behaviour.

 

Science believes in typification and identification in so far as they are supported by evidence.

The evidence shows humans have succeeded very well in typifying and identifying things; else humankind would be extinct.

But science doesn't say types/descriptions are the truth; it only says they represent reality well enough to be useful.

There is no way to know the world “as it is”; the idea doesn’t even make sense.

Since Einstein's day, scientists take the view that all we can understand is models we make of the world.

That is equally true of non-verbal models and verbal models.

Human languages

Language cannot be the basis of thinking.

Eons before languages, animals could recognise a thing they had perceived before, even if that thing had changed a little.

They could recognise “family resemblances” between any two things they perceived to be similar.

 

But only we humans invent words to describe things, and this is a game changer.

Yes, we do naturally use words in the fuzzy and fluid way we perceive the world.

Surely, the flexibility of natural language assists survival and creativity in an ever-changing world.

In science, however, we look to specify regular or repeatable behaviours more formally.

We typify the roles of actors and the rules they follow in an unambiguous and testable way.

And to do this, we need an artificial, logically consistent language.

This section goes on to contrast natural and artificial languages.

 

Natural languages

To begin with, philosopher Ludwig Wittgenstein (1889-1951) presumed language is precise.

So, a word is a type name that defines the members of a set.

A “game” is one of a set containing many things of that named type.

 

Later, Wittgenstein changed his mind, and turned his focus to the fluidity of language.

He considered games as a set that includes activities as varied as chess, archery and Super Mario.

He argued the set members have overlapping lists of features, but no single feature in common.

He used “game” as an example to tell us that words (in natural language) are not type names.

Rather, games exhibit family resemblances.

The set of things people call “a game” is elastic.

And it is difficult to agree on a definition of the word that covers every set member.

 

This is disappointing if you are a mathematician who hoped that words define sets.

But it is no surprise to a biologist or psychologist coming at this from a different direction.

Natural language is a biological phenomenon rather than a mathematical one.

We use words to indicate that one thing resembles another in a loose and informal way.

No word, description or message has a universally-agreed meaning.

And since the words and grammar we use are so flexible, there is ambiguity and fuzziness in natural language.

There are degrees of truth in how well a reality matches a description we make of it.

 

However, words do give us the ability to classify or typify things more formally and rigidly.

The marvel is not that natural language is imprecise.

The marvel is that we can force words to act as the names of types that do have one or more features in common.

And to create a holistic, unambiguous and testable description of a system, we must do this.

We have to create an artificial domain-specific language in which words do act as type names.

 

From words to types

A. J. Ayer (1910-1989) was a philosopher who wrote about language, truth, logic and knowledge.

He rejected metaphysics and much philosophical discussion as meaningless, that is, not provable or disprovable by experience.

He pointed out that every coherent description of an individual thing or situation is a type definition.

“In describing a situation, one is not merely registering a [perception], one is classifying it in some way,

and this means going beyond what is immediately given.” Chapter 5 of “Language, truth and logic”

 

Where do classes or types come from?

They come from the human urge to formalise family resemblances by defining a family member.

Within a bounded context, or domain of knowledge, we can and do determinedly fix the meanings of words.

 

Domain-specific language

Types

<invent>                 <symbolise>

Domain specialists <observe & envisage> Family members

 

Artificial (domain-specific) languages

A natural language can contain several popular and alternative meanings for the same word.

In an artificial language, terms are fixed to type definitions, and related to each other.

 

A domain-specific language is an island of inter-related words with stable meanings (in the ever-unfolding evolution of natural language).

Words are treated as type names, and each type is defined by statements relating it to other types.

A type defines features (qualities, characteristics, attributes, properties) shared by things that instantiate or realise that type.

E.g. a biologist might say: “A game is an activity that serves as a rehearsal of skills that have (in the past) proved useful to survival.”

Taxonomies

There is no universal classification of real-world things into types.

There are only classifications we find useful to model things for particular purposes.

For each domain of knowledge in science, we need a domain-specific language.

In such a language, a word symbolises a type, and a type is symbolised by a word.

To define and relate types in a domain-specific language, we use taxonomies.

 

A taxonomy defines types (aka classes) to which things can be assigned.

A thing has the properties of each class it is assigned to.

There are:

·         flat taxonomies (the rainbow, a glossary)

·         multi-dimensional taxonomies (e.g. the periodic table)

·         class hierarchies with single inheritance (e.g. the biologist’s classification of species)

·         class hierarchies with multiple inheritance (directed graphs)

·         ontologies (see below).

 

We impose hierarchies on things to make them manageable, and find them via search trees.

But ontologies (below) tend not to include deep class hierarchies.
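
To make the contrast concrete, here is a minimal Python sketch of two of the taxonomy kinds above: a flat taxonomy (a glossary) and a class hierarchy with single inheritance. All names are invented for illustration.

```python
# A minimal sketch, with invented names: two kinds of taxonomy.

# Flat taxonomy: a glossary mapping each term to a definition.
glossary = {
    "game": "an activity played by one or more players",
    "order": "a request placed by a customer",
}

# Class hierarchy with single inheritance, in the style of the
# biologist's classification of species. A thing assigned to a class
# has the properties of every class above it in the hierarchy.
class Animal:
    mobile = True

class Bird(Animal):
    has_feathers = True

class Pigeon(Bird):
    eats_corn = True

pigeon = Pigeon()
assert pigeon.mobile and pigeon.has_feathers and pigeon.eats_corn
```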

Ontologies

An ontology is a taxonomy that relates objects of classes by actions.

Typically, it takes the form of proposition statements of the kind: Subjects <verb phrase> Objects.

E.g. Orders <are placed by> Customers.

 

Predicate logic statements can constrain how things are related.

E.g. Every Order <is placed by> one Customer.
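
As a hedged illustration, such ontology statements can be represented as subject <verb phrase> object triples, and the cardinality constraint tested over them. The identifiers and data below are invented.

```python
# A minimal sketch: an ontology as (subject, verb phrase, object) triples,
# plus a check of the constraint "Every Order <is placed by> one Customer".

ontology = [
    ("order-1", "is placed by", "customer-A"),
    ("order-2", "is placed by", "customer-A"),
    ("order-3", "is placed by", "customer-B"),
]

def customers_placing(order):
    """Return the customers that a given order is placed by."""
    return [o for (s, v, o) in ontology if s == order and v == "is placed by"]

# Every order must be placed by exactly one customer.
orders = {s for (s, v, _) in ontology if v == "is placed by"}
assert all(len(customers_placing(order)) == 1 for order in orders)
```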

 

Data models are ontologies composed of such statements.

Why do they tend not to feature class hierarchies?

Over time, a thing can change from one type to another.

The passage of time turns subtypes into states, or roles, or values of a variable.

 

Classes/types can be related by generalisation (subtypes to supertype), and a variety of that called idealisation (physical to logical).

Classes/types can be related by association (between types), and a variety of that called composition (parts to whole).

 

Arguably, there is another kind of taxonomy: a composition hierarchy (e.g. an army or organisation chart).

This looks like a class hierarchy, but connects classes in whole-part rather than type-subtype relationships.

Also, the classes may be singletons, meaning there is only one instance of each.

 

Domain-specific ontologies

To create a holistic, unambiguous and testable description of a system, we need an artificial language.

And to define a domain-specific vocabulary, we make statements using predicate logic.

 

Business

In any business domain, people define the rules of their specific business in terms of relations connecting types.

Thousands of years ago, businesses defined types such as “order” and “payment”.

Today they define types and rules such as:

·         “An employee has a salary and may be assigned to a project.”

·         “An order is placed by a customer; a customer can place several orders.”

 

Science

In the domain of physics, type names include “Force,” “Mass” and “Acceleration”.

Type definitions include: Force = Mass * Acceleration.

By contrast, in the language of management science, a force is a pressure acting on a business, such as competition or regulations.

 

Maths

Where do numbers come from?

From enumerating the members of a family, and describing that quantity.

In the domain of mathematics, abstract type names include: “number”, “division” and “remainder”.

Type definitions include: “An even number is a number that is divisible by two with no remainder.”
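
That type definition translates directly into a testable predicate; here is a minimal Python sketch.

```python
def is_even(number: int) -> bool:
    # An even number is divisible by two with no remainder.
    return number % 2 == 0

assert is_even(4) and not is_even(7)
```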

Logic

 

Propositional logic (3rd century BC)

Propositions are assertions (statements, sentences) that can be true or false.

 

Principia Mathematica is a three-volume tome written by Bertrand Russell and Alfred North Whitehead between 1910 and 1913.

They tried to show that all of mathematics could be built from the ground up using basic, indisputable logic.

Their building block was the proposition—the simplest possible statement, either true or false.

 

A proposition is a statement that asserts a fact that may be true or false.

In natural language it takes the form of a sentence: e.g. the sun is shining.

A predicate is a verbal phrase, with or without an object, which declares a feature of a subject.

 

Proposition (in the form of a predicate statement)

Subject: a particular thing, or an instance of a general type.

Predicate: a verb or verbal phrase that either stands alone or relates the subject to an object.

Object: a particular thing or a general type, related to the subject by the predicate.

Examples:

Subject               Verb phrase     Object
The sun               is shining      -
A game of monopoly    results in      a winner with the largest credit amount
A game                is a kind of    activity
A game                is played by    one or more animate players
A game                results in      a measure of achievement
An order              is placed by    a customer
A customer            has placed      one or more orders

 

The basic logic of statements is called propositional logic (or calculus).

 

Compound propositions connect atomic propositions, for example:

·         pigeons fly <and> eat corn.

 

Logical connectives include:

·         conjunction (“and”),

·         disjunction (“or”)

·         negation (“not”)

·         implication (“if … then”).

 

For more connectives, see https://en.wikipedia.org/wiki/Logical_connective.
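
As a minimal sketch, the connectives above map directly onto Boolean operations; the propositions here are invented examples.

```python
# Atomic propositions as booleans.
pigeons_fly = True
pigeons_eat_corn = True

# Compound propositions built with conjunction, disjunction and negation.
assert pigeons_fly and pigeons_eat_corn        # "pigeons fly <and> eat corn"
assert pigeons_fly or not pigeons_eat_corn
assert not (pigeons_fly and not pigeons_fly)

# Implication ("if p then q") is equivalent to "(not p) or q".
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

assert implies(pigeons_fly, pigeons_eat_corn)
```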

 

Simple propositions can be related in increasingly complicated networks.

Thus, Russell and Whitehead derived the full complexity of modern mathematics.

 

Predicate logic (1879)

Gottlob Frege's predicate logic (aka first order logic) was an advance on propositional logic.

It allows assertions to contain quantified variables (and non-logical objects).

So a predicate statement may be true or false depending on the values of these variables.

E.g.

·         “An instance of the customer type has placed one or more instances of the order type”.

·         “An order is placed by one customer.”

 

An entity-attribute-relationship model is - in essence - an inter-related collection of predicate statements.

You could call it a domain-specific language or part of one.
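
As a hedged sketch of the idea, here is a tiny entity-attribute-relationship model held as Python records, with two quantified predicate statements tested over it. All data is invented.

```python
# A minimal sketch: entities with attributes and relationships,
# and quantified predicate statements whose truth depends on the data.

customers = {"customer-A", "customer-B"}
orders = {
    "order-1": {"placed_by": "customer-A"},
    "order-2": {"placed_by": "customer-B"},
}

# "An order is placed by one customer."
assert all(o["placed_by"] in customers for o in orders.values())

# "An instance of the customer type has placed one or more orders."
for customer in customers:
    assert any(o["placed_by"] == customer for o in orders.values())
```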

 

Hoare logic (1969)

This logic is based on the Hoare triple, which describes the execution of a behaviour that changes the state of a system.

It takes the form {P} B {Q}.

B is a behaviour.

P and Q are assertions (predicate logic sentences).

P is the precondition; Q is the postcondition.

When P is true, performing the behaviour will establish Q.
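
A minimal executable sketch of the triple {P} B {Q}, using Python assertions for P and Q; the counter example is invented.

```python
# {P} B {Q}: if precondition P holds, performing behaviour B
# on the state establishes postcondition Q.

def run_hoare_triple(precondition, behaviour, postcondition, state):
    assert precondition(state), "precondition P does not hold"
    behaviour(state)  # B changes the state of the system
    assert postcondition(state), "postcondition Q does not hold"

state = {"count": 0}
run_hoare_triple(
    precondition=lambda s: s["count"] >= 0,              # P
    behaviour=lambda s: s.update(count=s["count"] + 1),  # B
    postcondition=lambda s: s["count"] >= 1,             # Q
    state=state,
)
```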

Business rules

You cannot be interested in business rules but not in logic, since they are one and the same!

To define digital business systems we use domain-specific languages and predicate logic.

 

Structural rules

We specify system state variables using entity-attribute-relationship models - composed of Frege’s predicate statements.

E.g. Every order is placed by one and only one customer.

A data model is nothing more or less than a compound of such predicate statements.

 

Behavioral rules

We specify system behaviours using use case (epic) descriptions – a narrative form of the Hoare triple.

Behavioural rules relate to how events are processed when they occur.

E.g. If the named customer already exists (precondition), and the order event is processed successfully, then the order will be attached to the customer in the system's state structure (postcondition).
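
The behavioural rule above can be sketched in the same Hoare-triple shape; the state structure and names here are invented for illustration.

```python
# A minimal sketch of the order-processing rule above.
state = {"customers": {"customer-A": {"orders": []}}}

def process_order_event(state, customer, order):
    # Precondition: the named customer already exists.
    assert customer in state["customers"]
    # Behaviour: attach the order to the customer.
    state["customers"][customer]["orders"].append(order)
    # Postcondition: the order is attached to the customer.
    assert order in state["customers"][customer]["orders"]

process_order_event(state, "customer-A", "order-1")
```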

 

If you asked me to specify an application using only two artifacts, I would choose use case (epic) descriptions and a data model!

 

Rules in systems thinking

In classical system theory and cybernetics (after Ashby), systems are deterministic and their rules are fixed (though the rules can be probabilistic).

The system is the rules; the rules are the system.

How far a "real machine" realises the system by conforming to the rules is another question.

Some systems are realised by software actors that never break the rules.

Other systems are realised by human actors who may break the rules, which means they are acting outside the system.

We can design "exception" rules for exception cases.

But at some point, the system passes exception handling to a human being and says "Sort this out as best you can".

 

In social systems thinking (after Ackoff), the actors are animate entities (usually human) that communicate in social networks.

They have goals and they make choices.

These social entities are not systems in sense 1.

They are goal-driven social networks that are asked to realise systems.

Conclusions and terms relevant to computing and AI

First, animals evolved to recognise family resemblances.

Their brains conceptualised things in what we can think of as a fuzzy, open taxonomy (akin to that in a natural language dictionary).

Later, humans developed the ability to define a type more formally by naming it and defining it with reference to other types.

We developed the ability to formalise type definitions in glossaries and controlled vocabularies, and use them to build taxonomies and ontologies.

In the 20th century we used ontologies - built from predicate statements – to define the contents of digital data stores.

Artificial intelligence tools abstract new types from stored data (often from big data).

 

Here are definitions of relevant terms.

Data: a structure in which meaning is encoded and/or from which meaning can be decoded.

Information: a meaning encoded in data and/or decoded from data.

Knowledge: information that is accurate enough to be useful.

Wisdom: the ability to assess the accuracy of information and use knowledge effectively.

Intelligence: the ability to abstract information from data, and types from instances, and use the abstractions.

Artificial Intelligence (AI): the ability of computers to abstract information from data, and types from instances, and use the abstractions.

Data model: an ontology or schema composed of interrelated predicate statements of the form: object <is related to> object.

Triple store: a schema-less way of storing predicate statements, so every predicate statement expressed can be stored.

The larger and richer the triple store grows, the more an AI application can learn from it.

Learning is not the same as getting more intelligent.
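
A minimal sketch of a schema-less triple store, with a pattern-matching query; the store layout and names are invented.

```python
# Any predicate statement (subject, predicate, object) can be stored,
# with no predefined schema.
triples = set()

def store(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return the triples matching the pattern; None matches anything."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

store("order-1", "is placed by", "customer-A")
store("customer-A", "lives in", "London")

print(query(predicate="is placed by"))
# [('order-1', 'is placed by', 'customer-A')]
```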

FOOTNOTES

Logic in neurology and human intelligence

In the 1930s and 40s there was a hope that the workings of the human brain – at least memory and learning - could be explained using logic.

Here is an interesting account of McCulloch and Pitts – their hope and disappointment.

 

https://getpocket.com/explore/item/the-man-who-tried-to-redeem-the-world-with-logic

 

“If one were to see a lightning bolt flash on the sky, the eyes would send a signal to the brain, shuffling it through a chain of neurons.

Starting with any given neuron in the chain, you could retrace the signal’s steps and figure out just how long ago lightning struck.

Unless, that is, the chain is a loop.

In that case, the information encoding the lightning bolt just spins in circles, endlessly.

It bears no connection to the time at which the lightning actually occurred.

It becomes, as McCulloch put it, “an idea wrenched out of time.” In other words, a memory.”

 

“they had found a way for the brain to abstract a piece of information, hang on to it, and abstract it yet again, creating rich, elaborate hierarchies of lingering ideas in a process we call “thinking...

Their model was vastly oversimplified for a biological brain, but it succeeded at showing a proof of principle...

Thought, they said, need not be shrouded in Freudian mysticism or engaged in struggles between ego and id...

“For the first time in the history of science,” McCulloch announced to a group of philosophy students, “we know how we know.”

 

“As Pitts began his work at MIT… he figured, that we all start out with essentially random neural networks...

He suspected that by altering the thresholds of neurons over time, randomness could give way to order and information could emerge...

Wiener excitedly cheered him on, because he knew if such a model were embodied in a machine, that machine could learn.”

 

In the 1950s, the hopes of Wiener, McCulloch and Pitts were dashed.

Logic is not the basis of neurology and human intelligence.

Rather, logic is a painstakingly constructed artifact of human intelligence.

 

"Recently the theorems of Gödel and related results (Gödel, Church, Turing) have shown that if one tries to use machines for such purposes as determining the truth or falsity of mathematical theorems and one is not willing to tolerate an occasional wrong result, then any given machine will in some cases be unable to give an answer at all.

On the other hand, human intelligence seems to be able to find methods of ever-increasing power for dealing with such problems 'transcending' the methods available to machines."

Alan Turing

 

Experiments have shown that most people find logic difficult and unnatural to their normal way of thinking.

Nevertheless, logic is fundamental to the digital business systems we design.

These systems are deterministic; they follow the rules we give them.

Logic in philosophy

Rudolf Carnap (1891-1970) was a member of the Vienna Circle who contributed to the philosophy of science and of language.

Carnap has been called a logical positivist, but he disagreed with Wittgenstein.

He considered that philosophy must be committed to the primacy of science and logic, rather than verbal language.

 

Carnap’s first major work, Logical Syntax of Language, can be regarded as a response to Wittgenstein’s Tractatus.

“The sentences of metaphysics are pseudo-sentences which on logical analysis are proved to be either empty phrases or phrases which violate the rules of syntax.

Of the so-called philosophical problems, the only questions which have any meaning are those of the logic of science.

To share this view is to substitute logical syntax for philosophy.”

— Carnap, Page 8, Logical Syntax of Language, quoted in Wikipedia.

 

He defined the purpose of logical syntax thus:

“to provide a system of concepts, a language, by the help of which the results of logical analysis will be exactly formulable.”

“Philosophy is to be replaced by the logic of science – that is to say, by the logical analysis of the concepts and sentences of the sciences...”

— Foreword, Logical Syntax of Language, quoted in Wikipedia.

 

He defined the logical syntax of a language thus:

“the systematic statement of the formal rules which govern [the language] together with the development of the consequences which follow from these rules.”

— Page 1, Logical Syntax of Language, quoted in Wikipedia.

 

Carnap’s second major work, Pseudoproblems in Philosophy, asserted that many metaphysical philosophical questions were meaningless.

His Principle of Tolerance says there is no such thing as a "true" or "correct" logic or language.

His concept of logical syntax is important in formalising the storage and communication of information/descriptions.

Computers require that logical data structures be defined using a formal grammar, such as a regular expression.

It is said that Carnap’s ideas helped the development of natural language processing and compiler design.
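
As a small illustration (the order-id format is invented), a regular expression fixes the syntax of a data item, so a computer can test conformance mechanically.

```python
import re

# An invented grammar for order identifiers, e.g. "ORD-0042".
order_id_grammar = re.compile(r"^ORD-\d{4}$")

assert order_id_grammar.match("ORD-0042")
assert not order_id_grammar.match("order 42")
```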

 

As I understand it, Carnap said:

A statement is only meaningful with respect to a given theory - a set of inter-related domain-specific predicate statements.

And it is only true to the extent that it can be supported by experience or testing.