The evolution of intelligence and civilization
Including memories, messages and languages
Copyright 2018 Graham Berrisford. Now a chapter in “the book” at https://bit.ly/2yXGImr. Last updated 23/02/2021 09:51
“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.” Stanford Encyclopedia of Philosophy
Descartes is famously said to have started his philosophy from the premise “I think therefore I am”. Psycho-biologists presume more. They presume that, in space and time, there exist animals that can perceive phenomena, remember them and communicate about them.
Implicit in those premises is the idea that there was no description of reality before life. The Darwinian view is that both memories and messages are biological phenomena that evolved to help animals survive. Remembering and sharing descriptions helps animals to understand, predict and manipulate things in reality, and so improves their chance of reproducing.
Contents
From chemical to biological evolution
Animals and enterprise architecture (EA)
Four keys to the evolution of intelligence
Four keys to the evolution of human civilisation
“The contribution of Maturana to this new epistemological proposition is fundamental. He is, to our knowledge, the first biological scientist to suggest that knowledge is a biological phenomenon that can only be studied and known as such. Furthermore, his proposition is that life itself should be understood as a process of knowledge which serves the organism for adaptation and survival. Maturana’s work… visualizes human experience from a point of view situated from within itself and not from an external view from the outside.”
https://www.inteco.cl/articulos/1996/02/22/ontology-of-observing
By the way, the quoted source mistakenly concludes that Maturana’s principle implies a “relativist” view that every individual’s view of reality is equally valid. Relativism is rebutted in a later chapter.
What does it mean to know or learn something? Consider:
· A sunflower “knows” the direction sunlight is coming from.
· An amoeba “knows” whether what it senses is food or not.
· Penguins recognize (“know”) their babies in a flock, and vice-versa.
· A macaque monkey “knows” it can use a stone to crack open a shellfish.
These are examples of non-human knowledge. It appears an organism (whether by inheritance or experience) has a model of things and phenomena in its environment. Or to put it another way, as Maturana did: “Knowledge is a biological phenomenon”.
Learning is also a biological phenomenon.
· Primitive animals learn by habituation or sensitization; they decrease or increase the intensity of their response to a repeated stimulus.
· More advanced animals learn by conditioning (by trial and error).
· Still more intelligent animals learn by observation, play and insight.
Insight learning is using past experiences
and reasoning to solve problems, often “in a flash”. Apes can do this. Even
crows can do it. And being able to apply knowledge successfully in new
situations might be called “wisdom”.
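The simplest of those kinds of learning can be caricatured in a few lines of code. This is purely my own illustrative sketch (the class, decay factor and stimuli are invented, not from any biological model): habituation as a response whose intensity fades each time the same stimulus repeats, while a novel stimulus still gets a full response.

```python
# Illustrative sketch of habituation: the response to a repeated
# stimulus decays, while a novel stimulus evokes a full response.

class HabituatingAnimal:
    """Responds to a stimulus, but less intensely on each repetition."""

    def __init__(self, decay=0.5):
        self.decay = decay      # fraction of response retained per repeat
        self.strength = {}      # stimulus -> current response strength

    def respond(self, stimulus):
        s = self.strength.get(stimulus, 1.0)      # novel stimuli start at 1.0
        self.strength[stimulus] = s * self.decay  # habituate for next time
        return s

animal = HabituatingAnimal()
responses = [animal.respond("tap") for _ in range(4)]
print(responses)                # [1.0, 0.5, 0.25, 0.125] - response fades
print(animal.respond("flash"))  # 1.0 - a novel stimulus gets a full response
```

Sensitization would be the mirror image: a decay factor greater than 1, so repetition increases the response. Conditioning would require linking a stimulus to a reward or punishment, which this minimal sketch omits.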
Wisdom, knowledge, information and data may be related as in the table below. To be honest, one purpose for this table is to save us from having to discuss any of several other WKID variations that have been proposed and criticized elsewhere. This version might also be criticized, but it does help us to discuss the exchange of knowledge in a social system of communicating actors.
WKID | meaning |
Wisdom | the ability to apply knowledge in new situations. |
Knowledge | information that is accurate enough to be useful. |
Information | meaning created/encoded or found/decoded in data by an actor. |
Data | a structure of matter/energy in which information has been created/encoded or found/decoded. |
Wisdom implies something of the insight learning mentioned earlier.
In natural language, the terms data, information and knowledge are often
used interchangeably. Knowledge may be
defined as information that is accurate enough to be useful. The knowledge that
an onrushing train will kill you is useful – it tells you to step off a railway
track. Knowledge (along with emotions like love and fear) helps you to survive,
thrive and pass your genes on.
To remember and communicate information about phenomena they observe and envisage, animals construct data structures in memories and messages. The information in those structures (their meanings) appears in their use by their creators and users. To know something, to understand a memory or message structure, is to know when and how to use it to do something.
The Conant-Ashby theorem, or “good regulator” theorem, was conceived by Roger C. Conant and W. Ross Ashby and is central to cybernetics. In short, it states that "every good regulator of a system must be a model of that system".
Abstract "The design of a complex regulator often includes the making of a model of the system to be regulated. The making of such a model has hitherto been regarded as optional, as merely one of many possible ways. In this paper a theorem is presented which shows, under very broad conditions, that any regulator that is maximally both successful and simple must be isomorphic with the system being regulated. (The exact assumptions are given.) Making a model is thus necessary. The theorem has the interesting corollary that the living brain, so far as it is to be successful and efficient as a regulator for survival, must proceed, in learning, by the formation of a model (or models) of its environment."
https://www.tandfonline.com/doi/abs/10.1080/00207727008920220
The good regulator |
Models <have and use> <represent> Regulators <monitor and regulate> Systems |
Evidently, to function and respond to changes in its environment, a regulator must “know” what it regulates. Every animal needs a model of its environment if it is to find food and mates, and avoid enemies. And the more intelligent the animal, the richer its model of the entities and events in its environment. Similarly, a business needs to know the state of things it seeks to monitor or direct.
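The good regulator idea can be sketched in code. The example below is my own toy illustration (the room, thermostat, target temperature and step sizes are all invented): the regulator’s “model” of the system is reduced to a target state and a rule, and a feedback loop of observe-decide-act drives the regulated system toward that target.

```python
# Toy sketch of the Conant-Ashby idea: a regulator holds a simplified
# model of the system it regulates, and uses it to choose actions.

class Room:
    """The regulated system: temperature drifts down unless heated."""
    def __init__(self, temp=15.0):
        self.temp = temp
    def step(self, heat_on):
        self.temp += 1.0 if heat_on else -0.5

class Thermostat:
    """The regulator: its 'model' is just a desired state and a rule."""
    def __init__(self, target=20.0):
        self.target = target                    # model of the desired state
    def act(self, observed_temp):
        return observed_temp < self.target      # switch heating on or off

room, stat = Room(), Thermostat()
for _ in range(20):                 # feedback loop: observe, decide, act
    room.step(stat.act(room.temp))
print(round(room.temp, 1))          # settles close to the 20.0 target
```

The point of the theorem is stronger than the sketch: any maximally successful and simple regulator must embody such a model, whether it is a thermostat’s set-point, an animal’s mental map, or a business’s database.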
The question is not whether an animal or a business has a model; it is how complete and accurate that model is. To which the answers might be both “very incomplete and somewhat inaccurate” and “remarkably complete and accurate enough”. Thinking about this has led me inexorably to the view of description and reality that is outlined in this and the following chapters.
Scientists believe the universe started with a big bang about 14,000 million years ago. In the beginning, there was a lot of energy, and then a lot of disordered matter. Gradually, the laws of physics and evolution created things so orderly they can seem to be designed. Planets fell into orderly repeating orbits; tides ebbed and flowed on a daily basis. So now some things in the universe behaved in an orderly fashion.
This table (bottom to top) presents a history of the universe from the big bang to human civilisation.
 | Elements or actors | Interact by | Knowledge acquisition |
Human civilisation | Human organizations | Information encoded in writings | Science and enterprise |
Human sociology | Humans in groups | Information encoded in speech | Teaching and logic |
Social psychology | Animals in groups | Information encoded in signals | Parenting and copying |
Psychology | Animals with memories | Sense, thought, response | Conditioning |
Biology | Living organisms | Sense, response, reproduction | Inheritance |
Organic chemistry | Carbon-based molecules | Organic reactions | |
Inorganic chemistry | Molecules | Inorganic reactions | |
Physics | Matter and energy | Forces | |
The world we call earth rolled on for millennia without life on it. At that time, there was no knowledge, description, model or classification of things on earth. So, surely, understanding how we describe those things ought to start in biology rather than philosophy? As Maturana, a biologist and systems thinker, observed: “knowledge is a biological phenomenon”.
The amazing story of how knowledge of the world evolved starts well before humans. All animals must “know” something of what they observe in reality; and some animals retain internal descriptions of what they perceive. A honey bee can remember the location where a discrete pollen source can be found, and dance to describe it. It communicates that knowledge using a code that not only other bees, but also humans, can decode. And recent experiments show that though a honey bee has a brain the size of a grain of sand, it can count up to four.
Animals of different species see the world somewhat differently. E.g. birds and bees can see ultra-violet light. And it is estimated that a dog's sense of smell is between 10 and 100 thousand times more acute than ours. Some say this means an animal’s view of the world is subjective – it does not describe the “real” world.
"… research highlights that the world we see is not the physical or the 'real' world. Different animals have very different senses, depending on the environment the animals operate in,"
Professor Lars Chittka from Queen Mary's School of Biological and Chemical Sciences. https://www.bbc.co.uk/news/science-environment-11971274
However, the implication that a description is, or could ever be, reality makes no sense! A description is, by definition, an abstraction from reality. No animal can fully understand any physical reality, but it can understand it well enough for some of its purposes. Clearly, to find enough food to eat, an animal must have objective-enough knowledge of the world.
As humans, we do more than construct and recall descriptions in our minds, we classify things we observe as being similar by naming and describing types. The following outline of biological and psychological ideas will lead us towards a general type theory that depends in no way on mathematics.
The universe is an ever-unfolding process that physicists describe as a four-dimensional space-time continuum. The word “continuum” implies space and time can be subdivided without any limit to size or duration. But as animals who perceive, remember and describe phenomena, we divide the universe into discrete things. We divide the material world into things where its structure or form changes – say from fluid to solid. And where the state of things changes - say from night to day.
Perception turns an
input message into a sensation an animal can respond to. About 800 million years ago, the very first animals knew enough not to
eat themselves. They could perceive the difference between their own substance
and chemicals in their environment.
Animals must know something of the world they live in, for feeding, breeding and other purposes. Some knowledge is hard coded in their biochemistry. E.g. things that taste good are better for you than things that taste nasty. When an amoeba comes across an item, it detects chemicals on the item’s surface. It senses those chemicals as signals, informing it as to whether the item is of the type “food”, “predator” or other, and then responds by acting appropriately.
Autonomic reactions |
Sensations <recognize> <represent> Animals <detect and react to> Phenomena |
Animals evolved to abstract ever more knowledge from perceived phenomena, and ever more sophisticated ways of dealing with the world. By about 700 million years ago, jellyfish had nerve nets in which
· sensory neurons detect information and send messages to
· intermediate neurons, which react by sending messages to direct
· motor neurons.
By about 550 million years ago, some animals used a central hindbrain to monitor and control homeostatic state variables. A hindbrain senses the body’s state variables, and sends messages to maintain those variables, via an information feedback loop that connects the hindbrain to the organs and motors of the body.
Today, EA is about business activity systems that perceive and respond to events, and monitor or direct entities of interest.
· Input messages inform the system about changes in the state of external entities.
· The system determines the response to changes, either directly and autonomically, or by reference to an acquired memory.
· Output messages inform and direct the behavior of external entities.
A sensor (e.g. an eye or ear) is a machine that can detect qualities or changes in its environment. The eye’s lens does nothing but focus light. The retina does more: it acts to optimise the data passed in messages via the nervous system to the brain. And experiments show the retina of a cat's eye responds excitedly to thin wiggly lines - like mouse tails.
Intelligent reactions |
Sensations and memories <recognize> <represent> Animals <detect and react to> Phenomena |
At Cambridge University, in the 1950s, the neuroscientist Horace Barlow recorded signals from nerve cells in frogs’ retinas. Barlow found some nerve cells in the eye are “hard-wired” to detect small moving insects. (By the way, this was a disappointment to McCulloch and Pitts, mentioned later.) With others, Barlow discovered signals from both eyes converge on a single cell in the visual cortex. It is supposed this cell creates a map of the three-dimensional space around an animal.
Note: the research referred to here is old. The science of perception and memory has advanced further, but we don’t need to understand that here. What matters is to know that animals can abstract information from phenomena, and remember it.
An amoeba inherits a memory in the sense it “knows” the types of things it is likely to meet. Primitive animals recognize things by comparing new sensations with inherited sensations.
More advanced animals can not only sense and react to things, but also record descriptions of things. Then, they can recognize things by comparing new sensations with remembered sensations. To do this, an actor must first store and later access memories that represent things.
Intelligent life |
Information in memories <create and use> <represents> Animals <observe and envisage> Phenomena |
About 250 million years ago, the
paleo-mammalian brain evolved to manage more complex emotional, sexual and
fighting behaviors. A wider information feedback loop was needed to connect
that higher brain to the external world. This
higher brain had to sense the current state of friends
and enemies in the world, and direct appropriate actions.
We still know very little about how the brain works. Still, it is evident that animals do create and use descriptions of reality. For example, honey bees find the pollen where other bees tell them to find it.
This research suggests
most animals remember entities better than events. However,
some animals can recognise a pattern or sequence in which events happen.
This other research suggests even rats can replay memories in order to recognise things in
sequence.
Nothing above
depends on humans or human-invented technologies. But the same general
principles apply to humans. Humans can remember the
sequence of steps in a dance, notes in a melody, or words in a story. And of
course, the sequence of words in a sentence or message is important to its
meaning.
Like all biological traits, human memory is the result of a very long history, most of it shared with other animals. At each stage in the path from vertebrate to mammal to primate to anthropoid to human, we acquired a different kind of memory. The result, this research suggests, is that humans have seven different kinds of memory.
By the way, a “photographic memory” or hyperthymesia is a neurological condition that allows people to store memories like pictures on a camera roll. The actress Marilu Henner says she can remember nearly all the events in her life, down to the minutest details. When she visits these memories, she claims it is like re-watching old videos or looking at old pictures. She can even pinpoint which memories occur at what point in her life and recall specific memories from that time. Most of us do not have this ability, which is reported to be more of a curse than a superpower.
Today, EA is about business activity systems that monitor and regulate entities and events in their environment, using memories stored in databases.
Animals can not only inherit and remember knowledge, and learn from experience, but also communicate knowledge. Even very primitive animals signal mating intentions to each other. Other early social acts were related to marking territory, signalling danger and locating food. E.g. cats spray scent to mark their territory, and other cats smell that scent. By 100 million years ago, some animals had evolved to cooperate in groups by communicating descriptions of things to their fellows.
Social life |
Information in messages <send & receive> <represents> Social animals <observe and envisage> Phenomena |
One bottlenose dolphin can recognise another by its
signature whistle. Domesticated dogs can
communicate several meanings (e.g. a stranger has arrived) in barks, growls,
howls and whimpers. We can watch a honey bee
communicate the location of some pollen that it observed earlier.
Honey bee communication |
Wiggle dances <perform & read> <represent> Honey bees <observe> Pollen sources |
Honey bees use the symbolic language of dance both to identify things (pollen sources) and to describe their features (location and distance). And astonishingly, experiments have shown that honey bees can communicate quantities up to four.
Note that
biological evolution has not demanded animals communicate perfectly accurate
descriptions. They send messages that represent reality accurately and often
enough for message receivers to find them useful. Communications fail when
symbols are ill-formed, lost or obscured in transit, or misread or
misinterpreted on receipt.
Memories and messages are holders of information. In biology, internal memories and external messages are of different kinds. Memories are neural patterns, whereas messages take the form of sounds, smells and gestures. By contrast, in software, the distinction between memories and messages is blurred.
There is no meaning in a brain’s memory on its own. Meaning is found in the processes of
· encoding a perception (or conscious thought) into memory
· decoding that memory into action (or conscious thought).
Similarly, in a society of communicating actors, there is no information or meaning in the data structure of a message on its own. Meaning is found in
· the sender’s encoding of a data structure, with reference to a language.
· a receiver’s decoding of that data structure, with reference to a language.
Like Ashby, we shall eschew direct discussion of consciousness, though it is implicit in social entity thinking. More importantly here, note that to succeed in communicating, actors must share the same code or language. Where mistakes cannot be allowed, in science for example, domain-specific languages are defined. And in information system design, the meaning of a data structure is defined in meta data.
Today, EA is about business activity systems that consume and produce data structures – using messaging systems. The meanings of data structures in messages are defined in meta data.
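The dependence of meaning on a shared code can be shown in a few lines. This is my own invented illustration (the code tables and symbols are made up): the same symbol decodes correctly under the shared language, and decodes to the wrong meaning, or nothing at all, under a different one.

```python
# Illustrative sketch: meaning is found only in encoding and decoding
# with reference to a shared language (here, an invented code table).

shared_code = {"danger": "!", "food": "*", "mate": "~"}

def encode(meaning, code):
    return code[meaning]                       # sender: meaning -> symbol

def decode(symbol, code):
    inverse = {v: k for k, v in code.items()}
    return inverse.get(symbol)                 # receiver: symbol -> meaning

message = encode("danger", shared_code)
print(decode(message, shared_code))            # "danger" - codes match

other_code = {"danger": "?", "food": "!"}      # a different "language"
print(decode(message, other_code))             # "food" - misinterpretation!
```

The second decoding shows why actors must share a language: the message arrived intact, yet the receiver extracted a different meaning, which is exactly the kind of communication failure the chapter describes.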
Animals do not store persistent data structures as a computer does. Their mental images or models may be incomplete, fuzzy and malleable. But evidently, they do store enough information to recognize and manipulate things in the world. We may reasonably speak of animals having “mental models” that represent features of entities and events perceived. This section outlines four more features in the evolution of human intelligence.
Horace Barlow defined intelligence as “the art of good guessing”. In 1961, he proposed a successful model of sensory systems – the efficient coding hypothesis. Given finite resources to transmit information, neural systems optimise what they encode. They minimise the number of nerve cells and impulses needed to transmit data to the brain. They leave the rest to be inferred from inherited predispositions and acquired memories. That process is called active inference.
In this talk, Anil Seth says that human perception combines both:
· Observation: sensing information input from what is out there.
· Envisaging: making a best guess as to what has been sensed, with reference to what is expected.
What you perceive and remember is not purely an invention. Your brain (given the time and resources at its disposal) makes the best bet it can as to what your senses tell you about the world. Thus, the brain optimises its matching of perception and experience. Else, it would have the hopeless task of analysing every perception from scratch.
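That “best bet” can be caricatured as a weighted blend of what the senses report and what experience predicts. The function and numbers below are my own toy illustration, not Seth’s or Barlow’s actual model; the point is only that the percept lies between the raw observation and the prior expectation.

```python
# Toy sketch: perception as a trust-weighted blend of sensed input
# and prior expectation (an invented illustration, not a real model).

def perceive(observation, prior, trust_in_senses=0.7):
    """Best guess = weighted mix of what is sensed and what is expected."""
    return trust_in_senses * observation + (1 - trust_in_senses) * prior

prior_brightness = 0.2      # experience says the room is usually dim
sensed_brightness = 0.9     # the senses report a bright flash
print(perceive(sensed_brightness, prior_brightness))  # between the two
```

With high trust in the senses the percept tracks the observation; with low trust it clings to expectation. Either way, the brain is spared the hopeless task of analysing every perception from scratch.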
Intelligent life |
Memories <create and use> <represent> Animals <perceive and manipulate> Phenomena |
When Seth says that perception is hallucination, does he mean phenomena do not exist? Again, what we see is not purely fanciful. It is a mix of what sensors detect, and what inheritance and experience predict is likely to be true.
The survival of every social animal depends on the presumptions that things exist out there, that our memories of those things are useful mental models of them, and that we can share features of those models by translating them into and out of messages.
The sensation created by a perception may be fuzzy, incomplete and malleable. A memory or mental model is never a complete or a perfect representation of what we observe. It only needs to tell us enough to be useful, and its accuracy can be tested by using it.
The earliest human brain, though larger than those of most other mammals, was about the same size as a chimpanzee’s brain. Over the last six or seven million years, the human brain tripled in size. By two million years ago, Homo erectus brains averaged a little more than 600 ml. And by 300 thousand years ago, early Homo sapiens brains averaged 1,200 ml, not far from the average today.
This growth coincided with the development of tools and society. Three million years ago, human-like primates learnt to make tools with a cutting edge or point. Humans needed a bigger brain to make and use increasingly complex tools to hunt and cultivate food. At the same time, intelligence was needed for the increasingly complex language humans used to cooperate.
Every remembered description of the past serves as a type that defines one member of a set that may contain more members in future. Consciousness enables an animal to compare the past, present and future. It is a process that enables an animal to compare descriptions of past, present and possible future phenomena.
Cognitive psychologists speak of procedural and declarative knowledge. We learn both acts - how to perform procedures:
· physical acts (e.g. to walk, to swim, to sing, to play the piano)
· cultural acts (e.g. to say please and thank you)
· logical acts (e.g. multiplication, algebra).
And facts - descriptive types and typifying assertions:
· physical facts (e.g. touching a hot plate is painful)
· cultural facts (e.g. the colors of the rainbow)
· logical facts (e.g. patterns or types abstracted from observations).
Intelligent life |
Acts and facts <learn and use> <represent> Animals <observe and behave in> Phenomena |
How knowledge is acquired is peripheral to this book, but for your interest, it might be acquired by:
· inheritance (cats know to chase mouse tails, babies know not to crawl over a cliff edge)
· practising a skill (to walk, to swim, to sing, to play the piano)
· conditioning (after which we know not to touch a hot plate)
· intentional trial and error (which key fits the lock?)
· observing and copying another (how monkeys learn to use tools)
· logical deduction (how Einstein found e = mc squared)
· instruction or education (from a teacher of any kind).
Also
peripheral to this chapter is the division that some make of knowledge into
three kinds:
· explicit knowledge - expressed in a shareable description - like the colors of the rainbow
· implicit knowledge – currently known in one or more minds, but not yet made explicit
· tacit knowledge - cannot be articulated – like how to swim.
Polanyi said "in the end all knowledge is personal and tacit". But it is misleading to interpret this as meaning no knowledge is shareable. We humans not only perceive and recognize things, we successfully describe them to others. Moreover, teaching people to swim or play music shows some tacit knowledge can be made explicit.
This section outlines four strands in the evolution of human civilization.
Animals evolved to communicate descriptions of
things to their fellows. They translate their
mental models into messages that others - even in other species – can
interpret.
· One bottlenose dolphin can recognise another by its signature whistle.
· Honey bees dance to describe pollen
locations.
· Dogs bark to tell us a stranger has arrived.
· Domesticated dogs can communicate several meanings in barks, growls, howls and whimpers.
An iconic description, like a statue or
photograph, mimics some features of the described phenomenon, and is
recognisable using the basic senses.
A signifying description, like the smoke of a
fire, points to effects produced by a described phenomenon.
A symbolic description encodes some features of the described phenomenon using some kind of code, so it can only be recognised by an animal or machine that knows that code.
Our main interest is in the last kind: descriptions that are encoded using symbols (or “signs” in semiotics), and in how animals and machines create and use them.
Fixed-meaning symbols
One bird (a receiver) can understand the alarm call made by another bird (a sender). Prior to that alarm, the sender and receiver can be entirely unknown to each other. The alarm works because they inherited the same code for encoding and decoding the message.
Animals use various kinds of structure or behavior (sounds, smells, gestures) to symbolise meaning. A honey bee can encode its mental model of a pollen location in a dance. Another bee can decode that dance to create its own model of where that pollen is, and then find it. This demonstrates the two bees succeeded in sharing some knowledge of the world. Moreover, in an example of cross-species communication, humans can read a bee’s dance and find the pollen themselves!
The languages of alarm calls, honey bee dances and facial expressions are rarely ambiguous, because those languages were shaped by nature to convey very specific meanings. An animal’s shared understanding of an alarm call is acquired by inheritance – biologically. By contrast, human understanding of what a sentence means is learnt – sociologically.
Flexible symbolic languages
Many animals use a limited range of sounds to communicate information about things of interest to them. Apes use sounds and gestures to communicate emotions and ideas to other apes. Between 150 and 300 thousand years ago, humans started inventing sounds (words) to convey meanings. Humans could convey messages in other ways – pointing to things and drawing on cave walls. But for most purposes, using words is so much easier and quicker.
At birth, by inheritance, we acquire the ability to use our vocal cords and ears. It costs us almost nothing to speak; and we can create an infinite variety of sentences. The cost comes in the learning time to speak and write, and to interpret what we hear and read. As members of a society, we wrestle with the same realities and conceptualize them similarly. And by trial and error, we establish a well-enough shared pairing of sounds to concepts.
The evolution of
verbal language coincided with and probably explains the expansion of the human
brain. High intelligence was needed to use verbal languages, and cooperate in
social groups. The emergence of speech may well have
reflected changes in human society. Notably, the change from a gorilla-style
dominance hierarchy to the more cooperative and egalitarian lifestyle of
hunter-gatherers.
The ability to describe phenomena and
ideas in words makes humans unique. We can symbolise infinite concepts – not only realistic ones but also impossible
ones, like a flying elephant. This ability to create words and associate them
with ideas had a profound effect on thinking. It enabled the development of
human civilization and science, through creative thinking and scientific
postulations.
For more discussion of symbolic languages, read
the next chapter. Remember, meaning exists to a writer in the process of writing a
message. And to a reader in the process of reading the same message. The
two meanings may be the same or different; and either meaning may be true or
false.
Five or six thousand years ago, people
found ways to persist spoken words using written symbols. Scholars suggest this
may have happened separately in Sumeria/Egypt, the Indus River, the Yellow
River, the Central Andes and Mesoamerica. Writing made one person’s thoughts
available for inspection and use by others in different places and times.
Writings <write and read> <represent> Humans <observe and envisage> Phenomena |
The invention of writing enabled the
development of civilization in many ways.
Psychologically – better thinking
The written record revolutionised our ability to analyse our ideas, think deeply, and think straight. Translating spoken words into and out of written words helps us clarify our thoughts. We can model much larger and much more complex things and systems. The models we build can be studied, analysed, and made more consistent and more coherent than anything we can hold in mind or speak of. They allow us to think the previously unthinkable.
Socially – sharing knowledge over distance and over time
From c5,000 years ago, people could
communicate over any distance and any time. They could do business and conduct
trade on the basis of facts recorded on clay tablets or papyrus. Moreover, they
could record ideas for inspection by future generations.
"The metaphor of dwarfs standing on the shoulders of giants expresses the meaning of "discovering truth by building on previous discoveries". This concept has been traced to the 12th century, attributed to Bernard of Chartres. Its most familiar expression in English is by Isaac Newton in 1675: "If I have seen further it is by standing on the shoulders of Giants." Wikipedia, December 2018
Politically – government of people and organizations
After the Norman Conquest of England
(1066), King William ordered an audit of locations in England and parts of
Wales. The aim was to record who held what land, provide proof of rights to
land and obligations to tax and military service. This survey resulted in The Domesday Book, which classifies towns, industries,
resources and people into various types. This “landmark in the triumph of the centralised written record”
recorded the enterprise architecture of a nation state.
Logically, mathematically and computationally
Writing made it possible to do complex calculations involving many variables.
There is no way to know the world “as it is”; the idea doesn’t even make sense. Since Einstein's day, if not before, scientists say that all we can understand and discuss of the world, is descriptions we construct of it. We describe a thing verbally by typifying it, by relating it to general classes or types.
For more on typification, read the chapter on type theory.
For millions of years, social animals have described phenomena in symbolic messages. E.g. birds make alarm calls to typify situations as “dangerous”. And over thousands of years, people have described things by typifying them verbally.
Many animals can recognise the difference between there being one, two or three things of a type. It is short step from there to recognizing two sets have the same number of members (say, two families have the same number of children). And then, not a huge step for a human to create a word for the quantity that is the same in the two sets. And then, a word to describe what is left when the last member is removed, the empty nest with zero members.
Ancient peoples quantified the items in a collection using words to represent numbers. About 5 thousand years ago, mathematicians introduced the type “zero”, to describe the emptiness of a collection with no items. About 2 thousand years ago, mathematicians created the decimal number system. Less than 2 hundred years ago, the mathematician Cantor introduced the concept of a set - a collection of things.
To speak of a set, we need some way to define its members. We need either a list of the members (an extensional definition) or a type (an intensional definition).
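The distinction can be sketched in code (a minimal illustration, not from the original text): an extensional definition lists the members outright, while an intensional definition is a membership test - a type that any candidate can be checked against.

```python
# Extensional definition: the set is defined by listing its members.
primes_under_10 = {2, 3, 5, 7}

# Intensional definition: the set is defined by a membership test (a "type").
def is_prime_under_10(n: int) -> bool:
    return n in range(2, 10) and all(n % d for d in range(2, n))

# Both definitions pick out the same members.
members = {n for n in range(10) if is_prime_under_10(n)}
assert members == primes_under_10
```

Note the trade-off: the extensional form only works for small, known collections, whereas the intensional form can define sets too large (or too open-ended) to list.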
This is the first of several chapters on the nature of knowledge. It starts from the idea that knowledge is a biological phenomenon, then goes on to describe the emergence of human intelligence and civilization from the biological evolution of animals, with reference to symbolic languages and the sharing of knowledge in writing.
Here is a recap of three principles in this chapter.
· A good regulator has a description of what it regulates
· Knowledge and description evolved in biological organisms
· Consciousness enables us to compare the past, present and future.
Much of this book is about information held in memories and transmitted in messages. For some, reading it requires something of a paradigm shift. This conclusion contains a few pointers that may help you to avoid misunderstandings later.
Knowledge as a biological phenomenon
The world we call earth rolled on for eons without life on it. At that time, there was no knowledge, description, model or classification of things on earth. So, surely, an attempt to understand how we describe those things ought to start in biology rather than philosophy?
“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.”
http://plato.stanford.edu/entries/scientific-progress/#ReaIns
Knowledge may be defined as information that we find accurate enough to be useful. The knowledge that an onrushing train will kill you is useful – it tells you to step off a railway track. Knowledge (along with emotions like love and fear) helps you to survive, thrive and pass your genes on.
Knowledge as a biological sense-response phenomenon
Some knowledge is hard coded in the biochemistry of animals. E.g. things that taste good are better for you than things that taste nasty. Kittens innately know the properties of a mouse’s tail, and respond animatedly to any long, thin, wiggly thing. Experiments have shown that babies fear crawling over the edge of what appears (in a visual illusion) to be a cliff edge. And probably, you are born with the knowledge to avoid large onrushing objects – not just railway trains.
Knowledge as a psychological phenomenon
Animals evolved to remember what they perceive. We don’t know how a honey bee remembers where it found some pollen. The bio-electro-chemical form of those mental models is deeply mysterious. Perhaps it is a network that connects related images, symbols, sensations and experiences. The form doesn’t matter; what matters is that such mental models demonstrably exist.
Animals evolved to learn from observation and/or direct experience. You might learn about the danger of standing on railway tracks by watching a train squash an apple on the line. You acquire and remember knowledge of what works and what doesn’t, through trial and error.
A mark of intelligence is to recognize similarities between particular situations, entities and events. Moreover (to be discussed below) humans have evolved from
· fuzzy matching - recognizing similarities between particular things, to
· formal typification – classifying such similarities in intensional type definitions.
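The two steps above can be contrasted in a small sketch (hypothetical features, invented for illustration): fuzzy matching scores how alike two particular things are, while formal typification applies an intensional type definition that a thing either satisfies or does not.

```python
# Fuzzy matching: score how similar two particular things are,
# feature by feature (1.0 = identical on every feature mentioned).
def similarity(a: dict, b: dict) -> float:
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

# Formal typification: an intensional type definition - a predicate
# deciding whether a thing is an instance of the type "dangerous".
def is_dangerous(situation: dict) -> bool:
    return bool(situation.get("on_track") and situation.get("train_coming"))

train_scene = {"on_track": True, "train_coming": True}
apple_scene = {"on_track": True, "train_coming": True, "is_apple": True}

print(similarity(train_scene, apple_scene))  # high, but not 1.0
print(is_dangerous(apple_scene))             # True: matches the type
```

Fuzzy matching yields a degree of resemblance; the type definition yields a crisp yes/no, which is what makes formal classification shareable and testable.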
Knowledge as a social
phenomenon
Animals evolved to share information with others. They can encode knowledge in messages, using sounds (alarm calls), smells (territory marking), body movements (honey bee dances) etc. Humans share knowledge by encoding it in speech and writing. Our knowledge is wrapped up with our use of words to label things and qualities we recognize. My mental model of you standing on a train track includes the knowledge that it is dangerous. In other words, you are in a situation we encode in verbal language as “dangerous”. So, if you don’t know that instinctively, or from experience, I can easily tell you.
In short, we can acquire knowledge through inheritance, experience and communication, and verbal languages massively increased our human ability to describe things.
A knowledge triangle
· Animals <inherit and acquire> knowledge
· Knowledge <represents> phenomena
· Animals <observe and envisage> phenomena
Not only do we abstract knowledge from things or phenomena that we observe, but those abstractions are themselves phenomena in the real world. They are encoded somewhere - in biochemistry, speech or writing - in the human phenome.
Human knowledge and civilization depend on the use of symbolic languages. It is important to know that natural language messages are ambiguous. The meaning of a message exists not in the message per se, but a) to a writer in writing the message, and b) to a reader in reading that message. The two meanings may be the same or different; and either meaning may be true or false. The next chapter discusses the nature of truth and lies.
Primitive animals evolved to abstract knowledge from perception of real-world phenomena. More intelligent animals remember and recognize entities and events they have observed before. They can typify entities and events that are similar, and improve how they deal with new instances. Social animals evolved to share knowledge of what they perceive.
Humans evolved flexible symbolic languages to share knowledge, first orally, then in writing. Human civilization and scientific progress were enabled by the written word. In shared writing, we build ever more complex models, and stitch them together into larger models.
Written descriptions, models and specifications are part of the human phenome (rather than genome). Tools, including computers, are also part of the human phenome. In the 1930s, Turing proposed how a machine could read and respond logically to inputs. Computers read and write information encoded, using symbolic languages, in physical media.
In the 1940s, McCulloch and Pitts realized that a cyclic network of artificial neurons could act as a memory. At first, some hoped they had identified how the brain works, but Barlow’s experiments with frog’s eyes suggested otherwise. And by the way, other experiments show that most people find it difficult to apply the rules of logic. It seems the human brain evolved, not to work logically, but to handle the complexities of human social interactions.
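The McCulloch-Pitts idea can be sketched in a few lines (a minimal illustration, not their original notation): a neuron fires when the weighted sum of its binary inputs reaches a threshold, and feeding a neuron’s output back to itself makes a cyclic network that remembers a past input.

```python
# A McCulloch-Pitts neuron: fires (outputs 1) when the weighted sum
# of its binary inputs reaches the threshold.
def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# A cyclic network as memory: the neuron's own output is fed back as
# an input, so a single "set" pulse is remembered on all later steps.
def latch(set_pulses):
    state, history = 0, []
    for pulse in set_pulses:
        # Two inputs (the pulse and the fed-back state), both weight 1,
        # threshold 1: once triggered, the neuron keeps itself firing.
        state = neuron([pulse, state], [1, 1], 1)
        history.append(state)
    return history

print(latch([0, 0, 1, 0, 0]))  # [0, 0, 1, 1, 1] - the pulse is remembered
```

The feedback loop is the whole trick: without the cycle, the output would fall back to 0 as soon as the input pulse ended.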
We use computers in ways that extend our ability to create and use models. E.g. a regular business computer can process 16,000 database transactions per second. Machine learning algorithms can abstract patterns or types from information they read. Other kinds of artificial intelligence are being used in ways that exceed particular (rather than general) human abilities.
All free-to-read materials on the Avancier web site are paid for out of income from Avancier’s training courses and methods licences. If you find them helpful, please spread the word and link to the site in whichever social media you use.