The evolution of intelligence

Memories, messages, languages and civilization

Copyright 2018 Graham Berrisford. Now a chapter in “the book” at Last updated 03/07/2021 15:58


“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.” Stanford Encyclopedia of Philosophy


This chapter describes the emergence of human intelligence and civilization from the biological evolution of animals, with reference to symbolic languages and the sharing of knowledge in writing.


Knowledge as a biological phenomenon

From chemical to biological evolution

Animals and enterprise architecture (EA)

Three keys to the evolution of intelligence

Four keys to evolution of human civilisation

Conclusions and remarks


Knowledge as a biological phenomenon

The "second order cyberneticians" claimed that

·       knowledge is a biological phenomenon (Maturana, 1970), that

·       each individual constructs his or her own "reality" (Foerster, 1973) and that

·       knowledge "fits" but does not "match" the world of experience (von Glasersfeld, 1987).

Stuart A. Umpleby (1994) The Cybernetics of Conceptual Systems. p. 3.


The general principle goes way beyond second order cybernetics.

Knowledge and description evolved in biological organisms


“The contribution of Maturana to this new epistemological proposition is fundamental. He is, to our knowledge, the first biological scientist to suggest that knowledge is a biological phenomenon that can only be studied and known as such. Furthermore, his proposition is that life itself should be understood as a process of knowledge which serves the organism for adaptation and survival. Maturana’s work… visualizes human experience from a point of view situated from within itself and not from an external view from the outside.”


By the way, the quoted source mistakenly concludes that Maturana’s principle implies a “relativist” view that every individual’s view of reality is equally valid. Relativism is rebutted in a later chapter.


What does it mean to know or learn something? Consider:

·       A sunflower “knows” the direction sunlight is coming from.

·       An amoeba “knows” whether what it senses is food or not.

·       Penguins recognize (“know”) their babies in a flock, and vice-versa.

·       A macaque monkey “knows” it can use a stone to crack open a shellfish.


These are examples of non-human knowledge. It appears an organism (whether by inheritance or experience) has a model of things and phenomena in its environment. Or to put it another way, as Maturana did: “Knowledge is a biological phenomenon”.


Learning is also a biological phenomenon.

·       Primitive animals learn by habituation or sensitization; they decrease or increase the intensity of their response to a repeated stimulus.

·       More advanced animals learn by conditioning (by trial and error).

·       Still more intelligent animals learn by observation, play and insight.


Insight learning is using past experiences and reasoning to solve problems, often “in a flash”. Apes can do this. Even crows can do it. And being able to apply knowledge successfully in new situations might be called “wisdom”.

A good regulator has a description of what it regulates

The Conant-Ashby theorem, or “good regulator” theorem, was conceived by Roger C. Conant and W. Ross Ashby and is central to cybernetics. In short, it states that "every good regulator of a system must be a model of that system".


Abstract "The design of a complex regulator often includes the making of a model of the system to be regulated. The making of such a model has hitherto been regarded as optional, as merely one of many possible ways. In this paper a theorem is presented which shows, under very broad conditions, that any regulator that is maximally both successful and simple must be isomorphic with the system being regulated. (The exact assumptions are given.) Making a model is thus necessary.


The theorem has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment."


In short: a good regulator has a description of what it regulates. Here, a regulator is an animal or a machine that either has a model or has access to a model. So, read this triangle from left to right: regulators <have and use> models, which <represent> systems.


The good regulator

Models
<have and use>           <represent>
Regulators    <monitor and regulate>    Systems


Evidently, to function and respond to changes in its environment, a regulator must “know” what it regulates. Every animal needs a model of its environment if it is to find food and mates, and avoid enemies. And the more intelligent the animal, the richer its model of the entities and events in its environment. Similarly, a business needs to know the state of things it seeks to monitor or direct.


The question is not whether an animal or a business has a model; it is how complete and accurate the model is. To which the answers might be both “very incomplete and somewhat inaccurate” and “remarkably, complete and accurate enough”. Thinking about this has led me inexorably to the view of description and reality that is outlined in this and the following chapters.
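The theorem can be illustrated with a toy sketch (the scenario and names below are illustrative, not from Conant and Ashby’s paper). A regulator holds a variable on target exactly as well as its internal model predicts the disturbances acting on the system:

```python
# Toy illustration of the "good regulator" idea: a regulator that holds a
# variable on target does so by embodying a model of the system it regulates.
# The scenario and names are illustrative, not from Conant and Ashby's paper.

def run(steps, disturbance, model):
    """Regulate temperature toward 20.0, predicting each disturbance
    with `model` and applying the opposite correction."""
    temp, errors = 20.0, []
    for t in range(steps):
        d = disturbance(t)        # what the environment actually does
        correction = -model(t)    # what the regulator's model predicts, negated
        temp += d + correction
        errors.append(abs(temp - 20.0))
    return sum(errors) / steps    # mean regulation error

disturbance = lambda t: 1.0 if t % 2 == 0 else -1.0   # alternating heat gain/loss

perfect_model = disturbance        # the regulator models the system exactly
no_model = lambda t: 0.0           # the regulator has no model at all

print(run(10, disturbance, perfect_model))   # 0.0 - perfect regulation
print(run(10, disturbance, no_model))        # 0.5 - off target half the time
```

The better the regulator’s model matches the regulated system, the smaller the regulation error; with no model at all, regulation degrades to doing nothing.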

From chemical to biological evolution

Before life, the universe was unobserved

Heinz von Foerster (1911 to 2002) was a thinker interested in the circularity of ideas. He is reputed to have said “We live in the domain of descriptions that we invented.” But we don’t live in a universe we invented.


Scientists believe the universe started with a big bang about 14,000 million years ago. In the beginning, there was a lot of energy, and then a lot of disordered matter. Gradually, the laws of physics and evolution created things so orderly they can seem to be designed. Planets fell into orderly repeating orbits; tides ebbed and flowed on a daily basis. So some things in the universe came to behave in an orderly fashion.


The earth was formed about 4,500 million years ago. Life on earth began at least 3,500 million years ago, possibly earlier. Before then, there was no knowledge of the universe. There were no memories and messages. There was no conceptualisation or model of things in space and time. There were no representations or symbolisations of what exists and what happens.

Since life began, things have been observed and described by animals

This table (bottom to top) presents a history of the universe from the big bang to human civilisation.



                    | Elements or actors     | Interact by                     | Knowledge acquisition
--------------------+------------------------+---------------------------------+-----------------------
Human civilisation  | Human organizations    | Information encoded in writings | Science and enterprise
Human sociology     | Humans in groups       | Information encoded in speech   | Teaching and logic
Social psychology   | Animals in groups      | Information encoded in signals  | Parenting and copying
                    | Animals with memories  | Sense, thought, response        |
                    | Living organisms       | Sense, response, reproduction   |
Organic chemistry   | Carbon-based molecules | Organic reactions               |
Inorganic chemistry |                        | Inorganic reactions             |
                    | Matter and energy      |                                 |




The world we call earth rolled on for millennia without life on it. At that time, there was no knowledge, description, model or classification of things on earth. So, surely, the attempt to understand how we describe those things ought to start in biology rather than philosophy? As Maturana, a biologist and systems thinker, observed: “knowledge is a biological phenomenon”.


The amazing story of how knowledge of the world evolved starts well before humans. All animals must “know” something of what they observe in reality; and some animals retain internal descriptions of what they perceive. A honey bee can remember the location where a discrete pollen source can be found, and dance to describe it. It communicates that knowledge using a code that not only other bees, but also humans, can decode. And recent experiments show that though a honey bee has a brain the size of a grain of sand, it can count up to four.


Animals of different species see the world somewhat differently. E.g. birds and bees can see ultra-violet light. And it is estimated that a dog's sense of smell is between 10 and 100 thousand times more acute than ours. Some say this means an animal’s view of the world is subjective – that it does not describe the “real” world.


"… research highlights that the world we see is not the physical or the 'real' world. Different animals have very different senses, depending on the environment the animals operate in,"

Professor Lars Chittka from Queen Mary's School of Biological and Chemical Sciences.


However, the implication that a description is, or could ever be, reality makes no sense! A description is, by definition, an abstraction from reality. No animal can fully understand any physical reality, but it can understand it well enough for some purposes it has. Clearly, to find enough food to eat, an animal must have objective-enough knowledge of the world.


As humans, we do more than construct and recall descriptions in our minds, we classify things we observe as being similar by naming and describing types. The following outline of biological and psychological ideas will lead us towards a general type theory that depends in no way on mathematics.

Animals and enterprise architecture (EA)

The universe is an ever-unfolding process that physicists describe as a four-dimensional space-time continuum. The word “continuum” implies space and time can be subdivided without any limit to size or duration. But as animals who perceive, remember and describe phenomena, we divide the universe into discrete things. We divide the material world into things where structure or form changes – say from fluid to solid – and where the state of things changes – say from night to day.


In molecular memory, organisms sense and respond to molecular structures.

Perception turns an input message into a sensation an animal can respond to. About 800 million years ago, the very first animals knew enough not to eat themselves. They could perceive the difference between their own substance and chemicals in their environment.


In primitive animals, sense-response thinking is an end-to-end input-to-output process (cf. a value chain in business architecture). It begins in perception by a bodily sensor, where the state of things is first described by encoding a neural message; and ends with a direction to the body’s motors and organs, where a message is decoded and used in actions to change the state of things.


Animals must know something of the world they live in, for feeding, breeding and other purposes. Some knowledge is hard coded in their biochemistry. E.g. things that taste good are better for you than things that taste nasty. When an amoeba comes across an item, it detects chemicals on the item’s surface. It senses those chemicals as signals, informing it as to whether the item is of the type “food”, “predator” or other. It then responds by acting appropriately.


Autonomic reactions


<recognize>           <represent>

Animals      <detect and react to>     Phenomena


Animals evolved to abstract ever more knowledge from the phenomena they perceive, and ever more sophisticated ways of dealing with the world. By about 700 million years ago, jellyfish had nerve nets in which

·       sensory neurons detect information and send messages to

·       intermediate neurons, which react by sending messages to direct

·       motor neurons.


By about 550 million years ago, some animals used a central hindbrain to monitor and control homeostatic state variables. A hindbrain senses the state of the body’s state variables, and sends messages to maintain those variables, via an information feedback loop that connects the hindbrain to the organs and motors of the body.


Today, EA is about business activity systems that perceive and respond to events, and monitor or direct entities of interest.

·       Input messages inform the system about changes in the state of external entities.

·       The system determines its response to changes, either directly and autonomically, or by reference to an acquired memory.

·       Output messages inform and direct the behavior of external entities.
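The sense-decide-respond pattern above can be sketched as a minimal event handler (the message shape and entity names are illustrative assumptions, not from any particular EA framework):

```python
# Minimal sketch of a system that senses events, consults an acquired
# memory, and responds - the sense-decide-respond pattern described above.
# The message shape and entity names are illustrative assumptions.

memory = {}   # acquired state of external entities, keyed by entity id

def handle(event):
    """Decide a response to an input message, by reference to memory."""
    entity, state = event["entity"], event["state"]
    previous = memory.get(entity)    # what the system knew before
    memory[entity] = state           # update the model of the world
    if previous is None:
        return f"record {entity}"    # first sighting: just remember it
    if state != previous:
        return f"alert: {entity} changed from {previous} to {state}"
    return f"no change to {entity}"  # nothing to do

print(handle({"entity": "order-1", "state": "placed"}))   # record order-1
print(handle({"entity": "order-1", "state": "shipped"}))  # alert: ... changed
print(handle({"entity": "order-1", "state": "shipped"}))  # no change to order-1
```

The same input can produce different outputs depending on what the system remembers; the memory is what lets the response go beyond a fixed autonomic reaction.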


A sensor (e.g. an eye or ear) is a machine that can detect qualities or changes in its environment. The eye’s lens does nothing but focus light. The retina does more: it optimises the data passed in messages via the nervous system to the brain. And experiments show the retina of a cat's eye responds excitedly to thin wiggly lines - like mouse tails.


Intelligent reactions

Sensations and memories

<recognize>                 <represent>

Animals    <detect and react to>   Phenomena


At Cambridge university, in the 1950s, the neuroscientist Horace Barlow recorded signals from nerve cells in frogs’ retinas. Barlow found some nerve cells in the eye are “hard-wired” to detect small moving insects. (By the way, this was a disappointment to McCulloch and Pitts, mentioned later.) With others, Barlow discovered that signals from both eyes converge on a single cell in the visual cortex. It is supposed this cell creates a map of the three-dimensional space around an animal.


Note: the research referred to here is old. The science of perception and memory has advanced further, but we don’t need to understand that here. What matters is to know that animals can abstract information from phenomena, and remember it.

Active inference

Horace Barlow defined intelligence as “the art of good guessing”. In 1961, he proposed a successful model of sensory systems – the efficient coding hypothesis. Given finite resources to transmit information, neural systems optimise what they encode. They minimise the number of nerve cells and impulses needed to transmit data to the brain. They leave the rest to be inferred from inherited predispositions and acquired memories. That process is called active inference.
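Barlow’s hypothesis is loosely analogous to delta encoding in data compression: transmit only the surprise – the difference between the signal and what the receiver can already predict – and let the rest be inferred. A minimal sketch, not a model of real neurons:

```python
# Loose analogy to the efficient coding hypothesis: send only prediction
# errors, and let the receiver reconstruct the signal from a shared
# predictor. Not a model of real neurons - just an illustration.

def encode(signal, predict):
    """Sender transmits only the residuals (the surprises)."""
    seen, residuals = [], []
    for x in signal:
        residuals.append(x - predict(seen))   # surprise vs. the shared guess
        seen.append(x)
    return residuals

def decode(residuals, predict):
    """Receiver rebuilds the signal using the same predictor."""
    signal = []
    for r in residuals:
        signal.append(predict(signal) + r)
    return signal

predict = lambda history: history[-1] if history else 0   # "next like last"

signal = [10, 10, 10, 11, 11, 10]
residuals = encode(signal, predict)
print(residuals)                            # [10, 0, 0, 1, 0, -1] - mostly zeros
assert decode(residuals, predict) == signal
```

Most of the transmitted values are zero: whatever the shared predictor already expects costs nothing to send, which is the economy Barlow attributed to neural coding.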


In a talk, the neuroscientist Anil Seth says that human perception combines both:

·       Observation: sensing information input from what is out there.

·       Envisaging: making a best guess as to what has been sensed, with reference to what is expected.


What you perceive and remember is not purely an invention. Your brain (given the time and resources at its disposal) makes the best bet it can as to what your senses tell you about the world. Thus, the brain optimises its matching of perception and experience. Else, it would have the hopeless task of analysing every perception from scratch.


Intelligent life


<create and use>          <represent>

Animals   <perceive and manipulate>  Phenomena


When Seth says that perception is hallucination, does he mean phenomena do not exist? Again, what we see is not purely fanciful. It is a mix of what sensors detect, and what inheritance and experience predict is likely to be true.


The survival of every social animal depends on the presumption that things exist out there, that our memories of those things are useful mental models of them, and that we can share features of those models by translating them into and out of messages.


The sensation created by a perception may be fuzzy, incomplete and malleable. A memory or mental model is never a complete or a perfect representation of what we observe. It only needs to tell us enough to be useful, and its accuracy can be tested by using it.


In neural memory, animals remember things they have perceived through their senses.

An amoeba inherits a memory in the sense it “knows” the types of things it is likely to meet. Primitive animals recognize things by comparing new sensations with inherited sensations.


In higher animals, with neural memories, perception is a mix of observation (sensing) and envisaging (guessing) at what there is out in reality. Both observation and envisaging are processes that create (encode) and use (decode) descriptions in memory.


More advanced animals recognize things by comparing new sensations with remembered sensations. To do this, an animal must first store and later access memories that represent things.


Intelligent life

Information in memories

<create and use>          <represents>

Animals      <observe and envisage>    Phenomena


About 250 million years ago, the paleo-mammalian brain evolved to manage more complex emotional, sexual and fighting behaviors. A wider information feedback loop was needed to connect that higher brain to the external world. This higher brain had to sense the current state of friends and enemies in the world, and direct appropriate actions.


We still know very little about how the brain works. Still, it is evident that animals do create and use descriptions of reality. For example, honey bees find the pollen where other bees tell them to find it.


Research suggests most animals remember entities better than events. However, some animals can recognise a pattern or sequence in which events happen. Other research suggests even rats can replay memories in order to recognise things in sequence.


Nothing above depends on humans or human-invented technologies. But the same general principles apply to humans. Humans can remember the sequence of steps in a dance, notes in a melody, or words in a story. And of course, the sequence of words in a sentence or message is important to its meaning.


Like all biological traits, human memory is the result of a very long history, most of it shared with other animals. At each stage in the path from vertebrate to mammal to primate to anthropoid to human, we acquired a different kind of memory. The result, research suggests, is that humans have seven different kinds of memory.


By the way, a “photographic memory” or hyperthymesia is a neurological condition that allows people to store memories like pictures on a camera roll. The actress Marilu Henner says she can remember nearly all the events in her life, down to the minutest details. When she visits these memories, she claims it is like re-watching old videos or looking at old pictures. She can even pinpoint which memories occur at what point in her life and recall specific memories from that time. Most of us do not have this ability, which is reported to be more of a curse than a superpower.


Today, EA is about business activity systems that monitor and regulate entities and events in their environment, using memories stored in databases.


In social interaction, animals first used fixed format messages, like alarm calls.

Animals not only inherit and remember knowledge, and learn from experience, but also communicate knowledge. Even very primitive animals signal mating intentions to each other. Other early social acts were related to marking territory, signalling danger and locating food. E.g. cats spray scent to mark their territory, and other cats smell that scent. By 100 million years ago, some animals had evolved to cooperate in groups by communicating descriptions of things to their fellows.


Social life

Information in messages

<send & receive>             <represents>

Social animals  <observe and envisage>  Phenomena


One bottlenose dolphin can recognise another by its signature whistle. Domesticated dogs can communicate several meanings (e.g. a stranger has arrived) in barks, growls, howls and whimpers. We can watch a honey bee communicate the location of some pollen that it observed earlier.


Honey bee communication

Wiggle dances

<perform & read>         <represent>

Honey bees     <observe>    Pollen sources


Honey bees use the symbolic language of dance both to identify things (pollen sources) and to describe their features (direction and distance). And astonishingly, experiments have shown that honey bees can communicate quantities up to four.
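The dance can be sketched as a two-part code: direction is danced as an angle (relative to vertical, standing in for the sun’s bearing), and distance as waggle duration. The conversion factor below is an invented illustration, not a measured value:

```python
# Sketch of the waggle dance as a symbolic code: direction is encoded as a
# dance angle, distance as waggle duration. Any animal (or human) that
# shares the code can decode the message. The conversion factor is an
# invented assumption, not a measured value.

METERS_PER_SECOND_OF_WAGGLE = 750   # illustrative constant

def dance(bearing_deg, distance_m):
    """A bee encodes a pollen source as (dance angle, waggle duration)."""
    return bearing_deg, distance_m / METERS_PER_SECOND_OF_WAGGLE

def read_dance(angle_deg, duration_s):
    """Another bee (or a human observer) decodes the same signal."""
    return angle_deg, duration_s * METERS_PER_SECOND_OF_WAGGLE

signal = dance(40, 1500)       # pollen 1500 m away, 40 degrees from the sun
print(read_dance(*signal))     # (40, 1500.0) - the knowledge is shared
```

Encoding and decoding are inverse translations through a shared code, which is why a reader who knows the code, whatever its species, recovers the same description.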


More advanced animals have a “theory of mind” – meaning they can attribute mental states to other animals. This is a foundation for social interaction. It is important because it enables animals to interpret and predict the behavior of others. And it enables animals to envisage the consequences of actions they may take.


Note that biological evolution has not demanded animals communicate perfectly accurate descriptions. They send messages that represent reality accurately and often enough for message receivers to find them useful. Communications fail when symbols are ill-formed, lost or obscured in transit, or misread or misinterpreted on receipt.

Encoding meaning using a language

There is no meaning in a brain’s memory on its own. Meaning is found in the processes of

·       encoding a perception (or conscious thought) into memory

·       decoding that memory into action (or conscious thought).


Similarly, in a society of communicating actors, there is no information or meaning in the data structure of a message on its own. Meaning is found in

·       the sender’s encoding of a data structure, with reference to a language.

·       a receiver’s decoding of that data structure, with reference to a language.


Like Ashby, we shall eschew direct discussion of consciousness, though it is implicit in social entity thinking. More importantly here, note that to succeed in communicating, actors must share the same code or language. Where mistakes cannot be allowed, in science for example, domain-specific languages are defined. And in information system design, the meaning of a data structure is defined in meta data.


Today, EA is about business activity systems that consume and produce data structures – using messaging systems. The meanings of data structures in messages are defined in meta data.
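The point that sender and receiver must share the same code can be sketched as follows; here the “language” is a field order agreed in meta data, and the field names are illustrative assumptions:

```python
# Communication succeeds only when sender and receiver share the same code.
# Here the "language" is a field order agreed in meta data; the field names
# are illustrative assumptions.

def encode(record, schema):
    """The sender flattens a record into a data structure, per the schema."""
    return "|".join(str(record[field]) for field in schema)

def decode(data, schema):
    """The receiver interprets the data structure, per its own schema."""
    return dict(zip(schema, data.split("|")))   # values arrive as strings

shared = ["name", "quantity", "price"]
message = encode({"name": "widget", "quantity": 3, "price": 9}, shared)

print(decode(message, shared))                         # meanings match
print(decode(message, ["price", "name", "quantity"]))  # same data, wrong meaning
```

The second decode shows the failure mode described above: the data structure arrives intact, but a receiver using a different code assigns it a different, wrong, meaning.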

Three keys to the evolution of intelligence


Animals do not store persistent data structures as a computer does. Their mental images or models may be incomplete, fuzzy and malleable. But evidently, they do store enough information to recognize and manipulate things in the world. We may reasonably speak of animals having “mental models” that represent features of entities and events perceived. This section outlines three more features in the evolution of human intelligence.


In consciousness, animals compare descriptions of the past, the present and possible futures

Every remembered description of the past serves as a type that defines (one member of) a set that may contain more members in future. Envisaging the future involves creating and playing with descriptions of possible futures. Consciousness enables an animal to compare descriptions of past, present and possible future phenomena.

Big brains

The earliest human brain, though larger than those of most other mammals, was about the same size as a chimpanzee’s brain. Over the last six or seven million years, the human brain tripled in size. By two million years ago, homo erectus brains averaged a little more than 600 ml. And by 300 thousand years ago, early homo sapiens brains averaged 1,200 ml, not far from the average today.


This growth coincided with the development of tools and society. Three million years ago, human-like primates learnt to make tools with a cutting edge or point. Humans needed a bigger brain to make and use increasingly complex tools to hunt and cultivate food. At the same time, intelligence was needed for the increasingly complex language humans used to cooperate.


Cognitive psychologists speak of procedural and declarative knowledge. We learn both acts - how to perform procedures:

·       physical acts (e.g. to walk, to swim, to sing, to play the piano)

·       cultural acts (e.g. to say please and thank you)

·       logical acts (e.g. multiplication, algebra).


And facts - descriptive types and typifying assertions:

·       physical facts (e.g. touching a hot plate is painful)

·       cultural facts (e.g. the colors of the rainbow)

·       logical facts (e.g. patterns or types abstracted from observations).


Intelligent life

Acts and facts

<learn and use>                 <represent>

Animals      <observe and behave in>    Phenomena


How knowledge is acquired is peripheral to this book, but for your interest, it might be acquired by:

·       inheritance (cats know to chase mouse tails, babies know not to crawl over a cliff edge)

·       practising a skill (to walk, to swim, to sing, to play the piano)

·       conditioning (after which we know not to touch a hot plate)

·       intentional trial and error (which key fits the lock?)

·       observing and copying another (how monkeys learn to use tools)

·       logical deduction (how Einstein found e = mc squared)

·       instruction or education (from a teacher of any kind).


Also peripheral to this chapter is the division that some make of knowledge into three kinds:

·       explicit knowledge - expressed in a shareable description - like the colors of the rainbow

·       implicit knowledge – currently known in one or more minds, but not yet made explicit

·       tacit knowledge - cannot be articulated – like how to swim.


Polanyi said "in the end all knowledge is personal and tacit". But it is misleading to interpret this as meaning no knowledge is shareable. We humans not only perceive and recognize things, we successfully describe them to others. Moreover, teaching people to swim or play music shows some tacit knowledge can be made explicit.

Four keys to evolution of human civilisation


This section outlines four strands in the evolution of human civilization.

Speech and symbolic languages

In speech, humans encoded complex messages in sounds.

In humans, memories are translated into and out of verbal messages for communication. Ashby observed that in thought and communication “coding is ubiquitous”. The multiple translation steps involved in social communication are illustrated in the next chapter.


Animals evolved to communicate descriptions of things to their fellows. They translate their mental models into messages that others - even in other species – can interpret.

·       One bottlenose dolphin can recognise another by its signature whistle.

·       Honey bees dance to describe pollen locations.

·       Dogs bark to tell us a stranger has arrived.

·       Domesticated dogs can communicate several meanings in barks, growls, howls and whimpers.


A description must correlate in some way with the described phenomenon. There are three ways a description can do this.

An iconic description, like a statue or photograph, mimics some features of the described phenomenon, and is recognisable using the basic senses.

A signifying description, like the smoke of a fire, points to effects produced by a described phenomenon.

A symbolic description encodes some features of the described phenomenon using some kind of code, so can only be recognised by an animal or machine that knows that code.


Our main interest is in the last, in descriptions that are encoded using symbols (or “signs” in semiotics) and in how animals and machines create and use them.


Fixed-meaning symbols

One bird (a receiver) can understand the alarm call made by another bird (a sender). Prior to that alarm, the sender and receiver can be entirely unknown to each other. The alarm works because they inherited the same code for encoding and decoding the message.


Animals use various kinds of structure or behavior (sounds, smells, gestures) to symbolise meaning. A honey bee can encode its mental model of a pollen location in a dance. Another bee can decode that dance to create its own model of where that pollen is, and then find it. This demonstrates the two bees succeeded in sharing some knowledge of the world. Moreover, in an example of cross-species communication, humans can read a bee’s dance and find the pollen themselves!


The languages of alarm calls, honey bee dances and facial expressions are rarely ambiguous, because those languages were designed by nature to convey very specific meanings. An animal’s shared understanding of an alarm call is acquired by inheritance – biologically. By contrast, human understanding of what a sentence means is learnt – sociologically.


Flexible symbolic languages

Many animals use a limited range of sounds to communicate information about things of interest to them. Apes use sounds and gestures to communicate emotions and ideas to other apes. Between 150 and 300 thousand years ago, humans started inventing sounds (words) to convey meanings. Humans could convey messages in other ways, such as pointing to things and drawing on cave walls. But for most purposes, using words is so much easier and quicker.


At birth, by inheritance, we acquire the ability to use our vocal cords and ears. It costs us almost nothing to speak; and we can create an infinite variety of sentences. The cost comes in the time taken to learn to speak and write, and to interpret what we hear and read. As members of a society, we wrestle with the same realities and conceptualize them similarly. And by trial and error, we establish a well-enough shared pairing of sounds to concepts.


The evolution of verbal language coincided with and probably explains the expansion of the human brain. High intelligence was needed to use verbal languages, and cooperate in social groups. The emergence of speech may well have reflected changes in human society. Notably, the change from a gorilla-style dominance hierarchy to the more cooperative and egalitarian lifestyle of hunter-gatherers.


The ability to describe phenomena and ideas in words makes humans unique. We can symbolise infinite concepts – not only realistic ones but also impossible ones, like a flying elephant. This ability to create words and associate them with ideas had a profound effect on thinking. It enabled the development of human civilization and science, through creative thinking and scientific postulations.


For more discussion of symbolic languages, read the next chapter. Remember, meaning exists to a writer in the process of writing a message. And to a reader in the process of reading the same message. The two meanings may be the same or different; and either meaning may be true or false.


In writing, humans recorded ever more complex descriptions in a persistent and shareable form.

Five or six thousand years ago, people found ways to persist spoken words using written symbols. Scholars suggest this may have happened separately in Sumeria/Egypt, the Indus River, the Yellow River, the Central Andes and Mesoamerica. Writing made one person’s thoughts available for inspection and use by others in different places and times.


Description in writing


<write and read>         <represent>

Humans  <observe and envisage>  Phenomena


The invention of writing enabled the development of civilization in many ways.


Psychologically, writing enabled better thinking. The written record revolutionised our ability to analyse our ideas, think deeply, and think straight. Translating spoken words into and out of written words helps us clarify our thoughts. We can model much larger and much more complex things and systems. The models we build can be studied, analysed, and made more consistent and more coherent than anything we can hold in mind or speak of. They allow us to think the previously unthinkable.


Socially, writing enabled sharing knowledge over distance and over time. From c5,000 years ago, people could communicate over any distance and any time. They could do business and conduct trade on the basis of facts recorded on clay tablets or papyrus. Moreover, they could record ideas for inspection by future generations.

The metaphor of dwarfs standing on the shoulders of giants expresses the meaning of “discovering truth by building on previous discoveries”. This concept has been traced to the 12th century, attributed to Bernard of Chartres. Its most familiar expression in English is by Isaac Newton in 1675: “If I have seen further it is by standing on the shoulders of Giants.” (Wikipedia, December 2018)


Politically, writing enabled better government of people and organizations. After the Norman Conquest of England (1066), King William ordered an audit of locations in England and parts of Wales. The aim was to record who held what land, provide proof of rights to land and obligations to tax and military service. This survey resulted in The Domesday Book, which classifies towns, industries, resources and people into various types. This “landmark in the triumph of the centralised written record” recorded the enterprise architecture of a nation state.


Logically, writing enabled mathematics and computation. Writing made it possible to do complex calculations involving many variables.


Humans learnt to formalize the specification of types in verbal descriptions of reality.

There is no way to know the world “as it is”; the idea doesn’t even make sense. Since Einstein's day, if not before, scientists have said that all we can understand and discuss of the world are the descriptions we construct of it. We describe a thing verbally by typifying it, by relating it to general classes or types.


Typification is a basis of mathematics and science. For millions of years, social animals have described phenomena in symbolic messages. E.g. birds make alarm calls to typify situations as “dangerous”. Over only a few thousand years, people have learnt how to describe anything of interest by typifying it using quantitative measures and testable theories.


Many animals can recognise the difference between there being one, two or three things of a type. It is a short step from there to recognizing that two sets have the same number of members (say, two families have the same number of children). From there, it is not a huge step for a human to create a word for the quantity that is the same in the two sets. And then, a word to describe what is left when the last member is removed, the empty nest with zero members.


Ancient peoples quantified the items in a collection using words to represent numbers. About 5 thousand years ago, mathematicians introduced the type “zero”, to describe the emptiness of a collection with no items. About 2 thousand years ago, mathematicians created the decimal number system. Less than 2 hundred years ago, the mathematician Cantor introduced the concept of a set - a collection of things. To speak of a set, we need one of two ways to define a member. We need either a list of the members (an extensional definition), or a type (an intensional definition).
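The distinction between the two kinds of definition can be sketched in code. This is a minimal illustration of my own (the particular set chosen is arbitrary): the same set is defined first by listing its members, then by stating the type that a member must instantiate.

```python
# Extensional definition: simply list the members.
extensional = {2, 4, 6, 8}

# Intensional definition: state the type, as a membership test.
def is_member(n):
    """A member is an even integer between 0 and 10."""
    return isinstance(n, int) and 0 < n < 10 and n % 2 == 0

# Enumerating everything that satisfies the type yields the same set.
intensional = {n for n in range(10) if is_member(n)}

print(extensional == intensional)  # → True
```

Note that the intensional definition does more work: it can classify any candidate as a member or non-member, whereas the extensional definition can only be inspected.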

Reasoning and science

By reasoning, humans refined how to form theories, predict outcomes and test them.

Reasoning (like other human abilities) evolved because it helps a human to survive and thrive. Consider for example this reasoning: "if we drive the buffalo over the edge of a cliff it will kill them, and provide us with the meat and leather we need." Reasoning helps us to predict how things will turn out, generalize from observations, abduce rules of behavior, form logically coherent descriptions of reality, form theories and test them, communicate with other humans, design technologies, etc. All of which gives one human being, and others in the same co-dependent social group, an advantage over other individuals and groups.

Conclusions and remarks

This chapter started with the idea that knowledge is a biological phenomenon. It went on to describe the emergence of human intelligence and civilization from the biological evolution of animals, with reference to symbolic languages and the sharing of knowledge in writing.


Here is a recap of three principles in this chapter.

·       A good regulator has a description of what it regulates

·       Knowledge and description evolved in biological organisms

·       Consciousness enables us to compare the past, present and future.


Much of this book is about information held in memories and transmitted in messages. For some, reading it requires something of a paradigm shift. This conclusion contains a few pointers that may help you to avoid misunderstandings later.


Knowledge as a biological phenomenon

The world we call earth rolled on for millennia without life on it. At that time, there was no knowledge, description, model or classification of things on earth. So, surely, any attempt to understand how we describe those things ought to start in biology rather than philosophy?


“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.” Stanford Encyclopedia of Philosophy


Knowledge may be defined as information that we find accurate enough to be useful. The knowledge that an onrushing train will kill you is useful – it tells you to step off a railway track. Knowledge (along with emotions like love and fear) helps you to survive, thrive and pass your genes on.


Knowledge as biological sense-response phenomenon

Some knowledge is hard coded in the biochemistry of animals. E.g. things that taste good are better for you than things that taste nasty. Kittens innately know the properties of a mouse’s tail, and respond animatedly to any long, thin, wiggly thing. Experiment has shown that babies fear crawling over the edge of what appears (in a visual illusion) to be a cliff edge. And probably, you are born with the knowledge to avoid large onrushing objects – not just railway trains.


Knowledge as a psychological phenomenon

Animals evolved to remember what they perceive. We don’t know how a honey bee remembers where it found some pollen. The bio-electro-chemical form of those mental models is deeply mysterious. Perhaps it is a network that connects related images, symbols, sensations and experiences. The form doesn’t matter; what matters is that such mental models demonstrably exist.


Animals evolved to learn from observation and/or direct experience. You might learn about the danger of standing on railway tracks by watching a train squash an apple on the line. You acquire and remember knowledge of what works and what doesn’t, through trial and error.


A mark of intelligence is to recognize similarities between particular situations, entities and events. Moreover (to be discussed below) humans have evolved from

·       fuzzy matching - recognizing similarities between particular things, to

·       formal typification – classifying such similarities in intensional type definitions.


Knowledge as a social phenomenon

Animals evolved to share information with others. They can encode knowledge in messages, using sounds (alarm calls), smells (territory marking), body movements (honey bee dances) etc. Humans share knowledge by encoding it in speech and writing. Our knowledge is wrapped up with our use of words to label things and qualities we recognize. My mental model of you standing on a train track includes the knowledge that it is dangerous. In other words, you are in a situation we encode in verbal language as “dangerous”. So, if you don’t know that instinctively, or from experience, I can easily tell you.


In short, we can acquire knowledge through inheritance, experience and communication; and verbal languages massively increased our human ability to describe things.


A knowledge triangle


<inherit and acquire>   <represents>

Animals  <observe and envisage>  Phenomenon


Not only do we abstract knowledge from things or phenomena that we observe, but those abstractions are themselves phenomena in the real world. They are encoded somewhere - in biochemistry, speech or writing - in the human phenome.


Human knowledge and civilization depend on the use of symbolic languages. It is important to know that natural language messages are ambiguous. The meaning of a message exists not in the message per se, but a) to a writer in writing the message, and b) to a reader in reading that message. The two meanings may be the same or different; and either meaning may be true or false. The next chapter discusses the nature of truth and lies.

An aside on memories, messages and artificial intelligence

Primitive animals evolved to abstract knowledge from perception of real-world phenomena. More intelligent animals developed memories, to remember and recognize entities and events observed before. They can typify entities and events that are similar, and improve how they deal with new instances. Social animals evolved to share knowledge of what they perceive using messages.


Human civilization and scientific progress were enabled by the written word. In shared writing, we build ever more complex models, and stitch them together into larger models. Written descriptions, models and specifications are part of the human phenome (rather than genome).


Tools, including computers, are also part of the human phenome. In the 1940s, Turing proposed how a machine could read and respond logically to inputs. Computers read and write messages encoded using symbolic languages. They can also retain memories.
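Turing’s idea can be sketched in a few lines. The rule table below (a machine that inverts a string of bits) is my own illustrative example, not Turing’s; the essential point is only that the machine reads one symbol at a time, consults a table, writes, moves, and changes state.

```python
def run(tape, rules, state="start"):
    """Run a tiny Turing-style machine until it halts or runs off the tape."""
    tape, head = list(tape), 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]  # consult the rule table
        tape[head] = write                           # write a symbol
        head += move                                 # move the head
    return "".join(tape)

# Illustrative rule table: invert each bit, moving right.
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}

print(run("0110", rules))  # → "1001"
```

Everything the machine “knows” is in its rule table and tape: a written, symbolic description of behaviour, which is the point being made here.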


By the way, in this US court case, the judge seems to have confused the memory/message and private/public distinctions. In psycho-biology, there are two things: private memories (in brains) and public messages (in speech). In software, there are four things, since both memories and messages are encoded digital data structures, which can be public or private. And private doesn't mean hidden from all but one actor; it means hidden from all but those who are given access to the data structure.


In the 1940s, McCulloch and Pitts realized that a cyclic network of artificial neurons could act as a memory. At first, some hoped they had identified how the brain works, but Barlow’s experiments with frog’s eyes suggested otherwise. And by the way, other experiments show that most people find it difficult to apply the rules of logic. It seems the human brain evolved, not to work logically, but to handle the complexities of human social interactions.


We use computers in ways that extend our ability to create and use models. E.g. a message in this thread says a regular business computer can process 16,000 database transactions per second. Machine learning algorithms can abstract patterns or types from information they read. Other kinds of artificial intelligence are being used in ways that exceed particular (rather than general) human abilities.


All free-to-read materials on the Avancier web site are paid for out of income from Avancier’s training courses and methods licences. If you find them helpful, please spread the word and link to the site in whichever social media you use.