How we share knowledge https://lnkd.in/dpgdQhv
By clarifying and verifying information
Copyright 2016 Graham Berrisford. Now a chapter in “the book” at https://bit.ly/2yXGImr. Last updated 13/02/2021 14:58
“A biological approach to human knowledge naturally gives emphasis to the pragmatist view that theories [descriptions of reality] function as instruments of survival.” Stanford Encyclopedia of Philosophy
This chapter discusses how we clarify information by reducing noise and ambiguity, and verify the truth of information by empirical, logical and social means. In doing so, it rejects extreme interpretations of relativism and perspectivism.
Contents
On sharing knowledge of the world
Verifying the truth of information
Earlier chapters discussed knowledge as a biological phenomenon. In short, animals can acquire knowledge through inheritance, experience and communication.
A knowledge triangle
Knowledge
<inherit and acquire>   <represents>
Animals   <observe and envisage>   Phenomena
Not only do animals abstract knowledge from things or phenomena they observe, but those abstractions are themselves phenomena in the real world. They are encoded somewhere - in biochemistry, in speech, in writing. That we abstract knowledge from things or phenomena we observe is widely accepted. And systems thinkers often refer to one or more of the following quotes.
“Each individual constructs his or her own reality" (von Foerster, 1973).
"The environment as we perceive it is our invention." (von Foerster, 2007).
“We cannot transcend ourselves as organisms that abstract” (Alfred Korzybski).
“All experience is subjective” (Gregory Bateson).
"Objectivity is the delusion that observations could be made without an observer." (von Foerster).
"Internal cognitions do not reflect any external reality" (source lost).
The trouble with these aphorisms is that they lead some to say we cannot share knowledge, and then to an extreme kind of “perspectivism” or “relativism” - to a view that knowledge and truth exist only in relation to each particular individual, society or historical context.
Friedrich Nietzsche (1844 to 1900) “rejected the idea of objective reality, arguing that knowledge is contingent and conditional, relative to various fluid perspectives or interests. This leads to constant reassessment of rules (i.e., those of philosophy, the scientific method, etc.) according to the circumstances of individual perspectives. This view has acquired the name perspectivism.” Wikipedia December 2018
This chapter sets out to explain that internal cognitions can represent reality; we do transcend ourselves as individual organisms; we do share knowledge, and objectivity is not a delusion! We can and do clarify the information in a description we receive by reducing noise and ambiguity; and we can verify the truth of the information by empirical, logical and social means.
Evidently, animals evolved to share information with others by encoding knowledge in messages. They encode danger signals in the sound of alarm calls, encode territory ownership in the smells of scent marking, encode directions to food locations in pointing and other gestures.
Honey bees don’t have direct access to each other’s memory. Instead, they share knowledge by performing and watching a “wiggle dance”. Honey bee A notices and remembers the location of some pollen. Later, A translates that memory into a wiggle dance message “read” by bees B and C. If bee B doesn’t find the pollen, then bees A and B have not shared knowledge. But if bee C does find the pollen, that suggests bees A and C have shared knowledge. And honey bees wouldn’t make such an effort to communicate if it did not help them find pollen often enough.
Humans share knowledge by encoding it verbally - in speech and writing. Verbal languages massively increased our human ability to describe things. And our knowledge is so wrapped up with our use of words (to label things and qualities we recognize) that we even think in words. My mental model of you standing on a train track includes the knowledge that it is dangerous. In other words, you are in a situation we encode in verbal language as “dangerous”. So, if you don’t know that instinctively, or from experience, I can easily tell you.
Of course, natural language messages are ambiguous. The meaning of a message exists not in the message per se, but a) to a writer in writing the message, and b) to a reader in reading that message. The two meanings may be the same or different; and either meaning may be true or false. So how do we clarify and verify information? Must knowledge be true or false, or can it be in between? Read on.
What if a message gains some meaningless noise in transit? Noise reduction is out of scope here, but note that “signal-to-noise ratio” has several meanings.
Signal-to-noise ratio in engineering
The strength of an electrical or other signal carrying information, compared to that of unwanted interference. Here, the signal is the data encoded by a sender within a message. The reader wants to remove or ignore any interference, in order to find the original signal/data.
Signal-to-noise ratio in sociology
The ratio of useful information to false or irrelevant data in a message or series of messages. Here the signal is the message(s) that readers are interested in. The reader wants to remove or ignore data that they regard as misleading, mistaken or irrelevant to their particular interest.
Signal-to-noise ratio in data analysis
A conclusion to be drawn from examining a sample of data values. The reader wants to ignore small, random or statistically insignificant variations, and focus on the largest variations.
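To make the engineering sense concrete, here is a minimal sketch in Python (added as an illustration; the power figures are invented) of a signal-to-noise ratio expressed both as a plain ratio and in decibels.

# Illustrative sketch: signal-to-noise ratio in the engineering sense.
# The figures below are invented for the example.
import math

signal_power = 2.0   # average power of the wanted signal (arbitrary units)
noise_power = 0.05   # average power of the unwanted interference

snr = signal_power / noise_power     # plain ratio
snr_db = 10 * math.log10(snr)        # the same ratio in decibels

print(f"SNR = {snr:.0f} (about {snr_db:.0f} dB)")
# A higher ratio means the receiver can more easily recover
# the original data from the interference.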
In a society of communicating humans, the meaning of a message is found in:
· the intention of an actor when encoding the message, with reference to a language
· the interpretation of an actor when decoding the message, with reference to a language.
To succeed in communicating, actors must share the same code or language.
But even then, there is no guarantee the two meanings are the same. Our natural languages are not enough to ensure successful communication. Evolution has not demanded that words express perfectly accurate descriptions or absolute truths. It requires only that words are understood well enough, often enough.
Words are unreliable; we not only abuse words, we change their meanings. The popular meaning of a word can evolve rapidly and change dramatically. In every communication, a word has a meaning to its creator, and a meaning to a receiver. One message may convey different meanings to different actors.
E.g. Consider this message or data structure: “I saw a man on a hill with a telescope”. To the speaker, it represents an observable phenomenon.
Yet different hearers might understand the message to mean:
· There was a man on a hill, whom I saw through a telescope.
· There was a man on a hill, who I saw has a telescope.
· There was a hill that had on it both a man and a telescope.
· I was on a hill, and from there saw a man using a telescope.
· There’s a man on a hill, whom I’m sawing with a telescope.
In short: there is no meaning in a message on its own. Meaning exists to a writer in the process of writing a message, and to a reader in the process of reading the same message. The two meanings may be the same or different.
SOS might mean different things in different codes. Using your code, you encode a request for help as "SOS". Using my code, I decode it as meaning Send Over Sausages, and act accordingly.
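To make that concrete, here is a minimal sketch in Python (an illustration added here; both codebooks are invented) in which sender and receiver use different codes, so the decoded meaning differs from the encoded intention.

# Illustrative sketch: the same message decoded differently under two codes.
# Both codebooks are invented for the example.
senders_code = {"request for help": "SOS"}
receivers_code = {"SOS": "Send Over Sausages"}

message = senders_code["request for help"]        # sender encodes "SOS"
decoded = receivers_code.get(message, "unknown")  # receiver decodes it

print(message, "->", decoded)   # SOS -> Send Over Sausages
# Communication succeeds only if sender and receiver share the same code,
# so that the decoded meaning matches the encoded intention.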
How can we be sure we are using the same code or language? How do we address the ambiguity of a natural language, which is learnt by trial and error? We may repeat the same message in different words. We may check for understanding (“know what I mean?”). We may use a controlled vocabulary or domain-specific language.
Possibly redundant repetition
Senders commonly overload verbal and written communications with redundant information. A sender may resend the same message, or express the same meaning in different ways. Describing one thing in several different ways reduces the chances of miscommunication.
E.g. having had no reply to an SOS message, its sender might send another message saying “Help!” That kind of iteration is very common in human discourse using natural language. The iteration may end when a receiver responds as expected, or some kind of time-out is reached.
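That resend-until-answered pattern might be sketched as follows (an illustration only; the messages, the one-second wait and the helper functions are invented stand-ins).

# Illustrative sketch: repeat a message until a reply arrives or we give up.
import time

def send(message):
    print("sent:", message)     # stand-in for broadcasting the message

def reply_received():
    return False                # stand-in for checking for a response

for message in ["SOS", "Help!", "Mayday!"]:
    send(message)               # say the same thing in a different way
    time.sleep(1)               # wait a little for a response (the "time-out")
    if reply_received():
        break                   # a receiver responded as expected
# The iteration ends when a receiver responds, or the sender runs out of
# ways (or patience) to repeat the message.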
Testing
Having interpreted a message, you may lack confidence you have found its intended meaning. You may test to see if your interpretation is confirmed by subsequent messages or events. If things turn out as expected or predicted, then a degree of confidence is established.
E.g. you find your way to a place on a map you were given. You find an apple where you were told to find it. Somebody promises to call you at 11.00 hours, and they call you at that time.
Controlled, domain-specific languages
In science and in business, it is important to minimise or eliminate communication errors. People use controlled vocabularies in which words and phrases (like SOS) have agreed meanings. And in business information system design, the meaning of data is defined in metadata.
E.g. The language used by mathematicians is the most controlled we have. If one says the shape of a planet’s orbit is an ellipse, another will get that meaning.
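A controlled vocabulary for data is often recorded as a small data dictionary. The sketch below is a hypothetical illustration (the terms, definitions and allowed values are invented), showing how metadata can fix what a data item means and which values it may take.

# Illustrative sketch: a tiny data dictionary (controlled vocabulary).
# The terms and definitions are hypothetical.
data_dictionary = {
    "SOS": {
        "definition": "a request for urgent help",
        "allowed_values": None,
    },
    "order_status": {
        "definition": "the state of a customer order",
        "allowed_values": ["placed", "paid", "shipped", "cancelled"],
    },
}

def is_valid(term, value):
    """Check a value against the controlled vocabulary, if one is defined."""
    allowed = data_dictionary[term]["allowed_values"]
    return allowed is None or value in allowed

print(is_valid("order_status", "shipped"))   # True
print(is_valid("order_status", "pending"))   # False - not in the agreed vocabulary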
Knowledge was defined above as information accurate or true enough to be useful. Truth is a measure we apply to information in descriptions of the world.
Our survival depends on information/models being true enough. A map is true enough if it helps us find things in a territory. And evolution favours social animals that communicate what is true enough, since that helps them to survive.
However, a description may be true, yet also be misleading or be misread. Actors may hallucinate (perceive something where there is nothing). And actors can misread, misremember or forget a description; and lie.
The scientific method can be seen as formalising our natural approach to learning from experience. A scientific theory represents a belief about some particular phenomena. The scientific method is based on testing that phenomena occur as the theory predicts.
Science
Theoretical models
<create and use>   <represent>
Scientists   <observe and envisage>   Phenomena
You may think the laws of physics are pure or perfect. Actually, in the history of cosmology one theory has been supplanted by another over several centuries. The sequence is often presented along these lines.
· Aristotle and Ptolemy thought the sun revolved around the earth.
· Copernicus placed the sun at the centre of the universe.
· Galileo championed Copernicanism, and made discoveries in the kinematics of motion and astronomy.
· Kepler’s laws of planetary motion provided a foundation for Newton's theory of universal gravitation.
· Newton defined the theory of universal gravitation and three laws of motion.
This epistemological triangle describes the role of Newton in relation to his laws. Read it left to right.
Physics
Laws of motion
<created and used>   <represent>
Newton   <observed and envisaged>   Phenomena
Newton's laws of motion are true enough in the world we live in. Astronomers have used them to describe how the sun and the planets interact in the solar system.
Einstein showed Newton’s laws only approximately describe the motion of things – in our part of the universe. But even Einstein’s laws are not the whole truth.
“As Einstein would have happily admitted, [his] new physics was not a definitive answer, nor did it negate the importance of Newton’s contribution. It was not ‘right’ or ‘true’, but simply a more accurate explanation than Newton’s, which was perfectly good for its time. As a pragmatist would say, it was a valid explanation.” Marcus Weeks
In retrospect, both Newton’s and Einstein’s laws presume three basic axioms about the universe.
1. An object cannot both exist and not exist at a given point in space and time.
2. An object must either exist or not exist at a given point in space and time.
3. An object cannot exist at different points in space at the same time.
Intuitively, these assertions seem obviously true, yet even these are undermined by quantum mechanics. Heisenberg’s uncertainty principle says an object’s position and speed cannot both be described exactly, at the same time. And to Einstein’s great discomfort, a bizarre property of quantum mechanics is that an electron can be in two places at the same time.
The modern “hard science” view is that we never know the universe as it really is, only as it is described in mental/documented/other models. Unfortunately, some “post-modernists” in the humanities have interpreted this as meaning that any mental/documented/other model of the world has equal validity. We’ll return to that later.
The description theory here allows there to be fuzziness. There are degrees in how closely phenomena conform to theories, or instances conform to types.
Newton’s three laws of motion describe how objects move.
1. Every object in a state of uniform motion will remain in that state of motion unless an external force acts on it.
2. Force equals mass times acceleration (f = ma).
3. For every action there is an equal and opposite reaction.
Engineers the world over rely on these laws being useful, and regard them as true. Our lives depend on engineers having applied these laws to physical situations. E.g. for most intents and purposes, Newton's laws of motion are 100% true. Yet three centuries later, Einstein showed the second law fails when objects move closer to the speed of light.
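As a rough illustration (a sketch added here; the mass, acceleration and speeds are invented), the snippet below applies f = ma, then prints the Lorentz factor by which relativistic effects depart from the Newtonian picture as speed approaches the speed of light.

# Illustrative sketch: f = ma, and how the Newtonian picture degrades at speed.
import math

mass = 1000.0          # kg (e.g. a small car)
acceleration = 3.0     # m/s^2
force = mass * acceleration
print(f"Newton's second law: f = ma = {force} N")

c = 299_792_458.0      # speed of light in m/s
for v in [30.0, 3.0e7, 2.9e8]:               # everyday, fast, and near-light speeds
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)  # Lorentz factor
    print(f"v = {v:.0e} m/s: relativistic correction factor = {gamma:.6f}")
# At everyday speeds the factor is indistinguishable from 1, so Newton's laws
# are true enough; near the speed of light the correction becomes large.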
In science, there is no absolute truth. Rather, there are degrees of confidence in the accuracy and usefulness of some information. The truth of a model is the degree to which the information in it proves accurate when tested. In hard sciences, that degree might be 99.99% or more. In the humanities, many models and classification schemes have a lower degree of truth.
Suppose a sender broadcasts an SOS message to any and every actor able to receive it. It might be true - intended to convey the information that help is needed. It might be a lie - intended to waste its receivers’ time, to convey misinformation.
There are accidental lies. What a message sender considers true, the message receiver may consider false. E.g. I feel the swimming pool is warm and you take me at my word. You dive in, but find the water colder than you expected, then complain I lied to you.
There are deliberate lies – good and bad. E.g. Electrons don’t orbit the nucleus of an atom; this lie-to-children is used to introduce a very complex subject. They are not like tiny marbles; they have a wavelength, are fuzzy and smeared out over space. They exist as a probability distribution, in a “cloud” around the nucleus. People cannot grasp this easily - because nothing in the macroscopic world we observe behaves this way.
For selfish reasons, social animals do sometimes lie to each other, as this video illustrates. Humans are unique in the extent they eagerly broadcast “fake news” on the internet.
Knowledge is “justified true belief”. (A J Ayer)
The logical positivist A J Ayer said a proposition can be discounted if it is not verifiable by:
· logical analysis or manipulation of descriptive elements according to agreed rules (e.g. 2 + 2 = 4)
· empirical testing of propositions about real-world entities and events.
The first approach, so wildly successful in mathematics, is not so successful in investigations of nature. The success of a scientific theory about the world depends on how well its results agree with observations of reality. If they do not agree, then the theory needs to be adjusted.
Here, we say the information in a theory, model or description can be verified in three ways, the last of which is clearly the weakest.
· Empirically true - supported by evidence from test cases.
· Logically true - can be deduced from other concepts within a body of knowledge.
· Socially true - widely shared by others in a social network.
Science means knowledge in Middle English (via Old French from the Latin scientia and scire “know”). And the scientific method is how we seek to prove the truth of knowledge.
A scientific theory may be considered true when experimental results agree with predictions made using the theory. Just as a description of a particular thing is considered true when the qualities of that thing match the qualities in the description.
Suppose a honey bee finds some pollen where another honey bee (by dancing) described it to be. Evidently, the bees have shared some knowledge - the distance and direction of the pollen source. In other words, their mental models represent reality accurately enough for you to call them “true”.
Similarly, suppose you buy a map, and then find your way to a place via a route described on that map. You judge the truth of the map by how well it predicts what you find.
Both the honey bee’s mental model and your map are objectively true in the sense they are a) not limited to one intelligence and are b) confirmed by empirical evidence.
Still, in science, no amount of successful tests can prove a model is perfectly and forever true, since that same model may fail the next test.
Falsification
Karl Popper famously proposed the test of a good theory is that it can be falsified. His idea has proved immensely useful to the progress of science. The best kind of scientific theory:
a) fits all circumstantial evidence,
b) passes all tests devised to disprove it, and
c) could be disproved by a future test.
Hovering behind a good theory is always the spectre of falsification by a future test. A theory that could never conceivably be disproved is considered weak – more a declaration of faith or belief.
And yet - even falsification does not necessarily invalidate knowledge. Newton’s laws of motion remain useful knowledge; they help us deal with the world we live in. They are propositions or models that describe and predict reality well enough. Scientists strive to recognise the limits of their models and understand when/where they are best applied.
The limitations of science
In science, there is no absolute truth. There are degrees of confidence in the accuracy of information. The truth of a model is the degree to which information proves accurate when tested. In hard sciences, that degree might be 99.99% or more. In the humanities, many models and classification schemes have a lower degree of truth.
Wonderful and powerful as science is, how certain is the knowledge that it gives us?
“The half-life of knowledge is the amount of time that has to elapse before half of the knowledge in a particular area is superseded or shown to be untrue.” Fritz Machlup (1962) The Production and Distribution of Knowledge in the United States.
There is a spectrum of precision and certainty in science - from physics through biology to economics and sociology.
“The half-life of a physics paper is on average 13.07 years, in Math it’s 9.17 years, and in Psychology it’s 7.15.” Samuel Arbesman (2012) The Half-life of Facts: Why Everything We Know Has an Expiration Date.
Worse: there is now so much published in the name of “science” that scientific journals have become an unreliable source.
“Most published scientific research papers are wrong, according to a new analysis. … there is less than a 50% chance that the results of any randomly chosen scientific paper are true.”
https://www.newscientist.com/article/dn7915-most-scientific-papers-are-probably-wrong
And perhaps most depressing of all to a scientist: pseudo-scientific papers can survive longer than scientific ones, because it is impossible to disprove them.
Mathematicians use logical analysis to prove conclusions drawn from axioms (which they can’t prove). Logical analysis has been wildly successful in mathematics. You can be certain a square is a rectangle, because the description of those things is entirely in your gift. Applying logic to the natural world is more problematic.
Newton’s three laws of motion presume three basic axioms about the universe.
1. An object cannot both exist and not exist at a given point in space and time.
2. An object must either exist or not exist at a given point in space and time.
3. An object cannot exist at different points in space at the same time.
The equivalent laws of logic are:
1. The law of contradiction: a proposition is not both true and false.
2. The law of excluded middle (or third): a proposition is either true or false.
3. The principle of identity: a proposition true of x must be true of x; OR x is identical with x.
Historically, some mathematicians saw these laws as intuitively obvious, yet others have challenged this. A sociologist may see the laws as far from obvious, since natural language descriptions can be entirely true, entirely false, a mix of both, or uncertain.
Suppose we recast the laws of logic as laws of description.
1. The law of contradiction: a description is not both true and false.
2. The law of excluded middle (or third): a description is true or false.
3. The principle of identity: a description true of x must be true of x; OR x is identical with x.
Consideration of syllogisms throws the first two laws into question. A syllogism is a form of reasoning in which a conclusion is drawn from two premises. A common or middle term is present in the two premises but not in the conclusion. Consider for example: all dogs are animals; all animals have four legs; therefore, all dogs have four legs. One of the premises is misleading, yet still (ignoring dogs born malformed) the conclusion is true.
Even mathematicians now allow that a proposition might be neither true nor false.
Aside: more on the "laws of thought"
The following is lightly edited from the Encyclopædia Britannica.
Logic symbol | Meaning
∼ | not
· | and
∨ | or
⊃ | formally implies
∀ | for every
= | is the same as
The three traditional laws of logic are listed in the table below.
Law | Meaning | Or | Symbolically
The law of contradiction | For all propositions p, it is impossible for both p and not p to be true | A description is not both true and not true | ∼(p · ∼p)
The law of excluded middle (or third) | Either p or ∼p must be true, there being no third or middle true proposition between them | A description is true or not true | p ∨ ∼p
The principle of identity | If a propositional function F is true of an individual variable x, then F is indeed true of x | A description true of x must be true of x | F(x) ⊃ F(x)
 | OR, a thing is identical with itself | For every x, x is the same as x | (∀x) (x = x)
There has been debate about the law of excluded middle. The following is lightly edited from: http://www.britannica.com/topic/laws-of-thought
[A doctrine of traditional logicians was that] the laws of thought are a sufficient foundation for the whole of logic. [And] all other principles of logic are mere elaborations of them. It has been shown, however, that these laws do not even comprise a sufficient set of axioms for the most elementary branch of logic (the propositional calculus)...
Aristotle cited the laws of contradiction and of excluded middle as examples of axioms. He partly exempted future contingents, or statements about unsure future events, from the law of excluded middle, holding that it is not (now) either true or false that there will be a naval battle tomorrow. [Rather] the complex proposition that either there will be a naval battle tomorrow or that there will not is (now) true. In the epochal Principia Mathematica (1910–13) of A.N. Whitehead and Bertrand Russell, this law occurs as a theorem rather than as an axiom.
The law of excluded middle and certain related laws have been rejected by L.E.J. Brouwer, a Dutch mathematical intuitionist. His school do not admit their use in mathematical proofs in which all members of an infinite class are involved. Brouwer would not accept, for example, the disjunction that either there occur ten successive 7’s somewhere in the decimal expansion of π or else not, since no proof is known of either alternative. But he would accept it if applied, for instance, to the first 10^100 digits of the decimal, since these could in principle actually be computed.
In 1920 Jan Łukasiewicz, a leading member of the Polish school of logic, formulated a propositional calculus that had a third truth-value, neither truth nor falsity, for Aristotle’s future contingents, a calculus in which the laws of contradiction and of excluded middle both failed. Other systems have gone beyond three-valued to many-valued logics—e.g., certain probability logics having various degrees of truth-value between truth and falsity.
In the natural sciences, truth is a fuzzy concept that can be determined with a degree of certainty rather than complete certainty.
If a bird sees a cat and makes an alarm call, then other birds had better bet that signal/message is true. It conveys the useful knowledge that something of the “dangerous” type is around.
After years in our house, I observed that every pair of hot and cold taps is different, and only one pair has red and blue dots on. When I asked my wife if she had observed the same, she replied “Yes, because I clean the taps.” I didn’t need that social verification. However, you might be inclined to believe the two of us more strongly than me alone.
It is easy to deceive ourselves. We continually validate our knowledge by inspecting it and discussing it with others. We engage in a kind of collective mental modelling, to hone the accuracy of our knowledge. However, the people we converse with may be equally deluded.
Scientists place great store by peer group review of papers. The value of a review is reduced when what is proposed cannot be verified empirically or logically, when the text is impenetrable or ambiguous, or when reviewers have something to gain from endorsing a paper. In the humanities, some make speculations expressed in pseudo-scientific terms. The proposals are metaphysical, purely conceptual; it is impossible to test them. The defence against this criticism is to include very long reference lists, hoping the reader will presume “shared perception is reality”.
Social communication spreads exhortations, suppositions, babble and nonsense as well as objective knowledge. Our survival depends in part on recognising when assertion is not truth, fake news is not news, correlation is not causation, and pseudo-science is not science.
Outside of very stable domains of knowledge, the types used to define qualities can be fluid. For example: Is Pluto a planet?
Before life, the question did not arise; there was an unknown, unnamed and unclassified body in space. When that body was discovered, it was named “Pluto” and classified as an instance of “planet”. But the definition of a planet’s qualities has changed over the centuries, and changed recently such that Pluto is no longer an instance of “planet”. The thing may have remained the same, yet the definitive description of the “planet” type has changed.
Remember the law of the excluded middle? In modern systems of knowledge, some probability logics have degrees of truth-value between truth and falsity. Especially, or at least, when the proposition declares something will be true in the future.
In the mathematics of fuzzy logic, predicates are the functions of a probability distribution. This replaces a strict true/false valuation by a quantity interpreted as the degree of truth. In natural language, the words we use to describe things are fuzzy polythetic types, meaning the described thing need not wholly match the word describing it. So, we convey meaning with probability rather than certainty, and interpreting the meaning of a communication is somewhat fuzzy.
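For example, a minimal sketch of a fuzzy predicate (an illustration added here; the temperature thresholds are invented) in which “the water is warm” is true to a degree rather than strictly true or false:

# Illustrative sketch: a fuzzy predicate "the water is warm".
# The thresholds (15C and 30C) are invented for the example.
def degree_warm(temperature_c):
    """Return a degree of truth between 0.0 (not warm) and 1.0 (fully warm)."""
    if temperature_c <= 15:
        return 0.0
    if temperature_c >= 30:
        return 1.0
    return (temperature_c - 15) / (30 - 15)   # linear ramp in between

for t in [10, 20, 27, 35]:
    print(f"{t}C: 'the water is warm' is true to degree {degree_warm(t):.2f}")
# The strict true/false valuation is replaced by a degree of truth,
# which is how two people can honestly disagree about "warm".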
Fuzziness in physical measurement
Is the Yankees baseball ground bigger than Lords cricket ground? If their areas are similar, your certainty about the answer depends on how accurately you can measure them. The truth of a proposition about the physical world is determined by how accurately you measure physical matter and energy.
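An illustrative sketch of that point (the areas and the margin of error below are invented, not real measurements of either ground):

# Illustrative sketch: comparing two measured areas with a margin of error.
# The figures are invented for the example.
area_a = 80_000.0      # measured area of ground A, in square metres
area_b = 79_500.0      # measured area of ground B, in square metres
margin = 1_000.0       # +/- measurement error on each figure

if area_a - margin > area_b + margin:
    verdict = "A is bigger"
elif area_b - margin > area_a + margin:
    verdict = "B is bigger"
else:
    verdict = "too close to call at this measurement accuracy"

print(verdict)   # too close to call at this measurement accuracy
# The truth of "A is bigger than B" can only be asserted with a certainty
# that depends on how accurately the two areas were measured.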
Fuzziness in social communication
Suppose two honey bees observe a third honey bee dancing to describe where pollen can be found. The first bee finds the pollen, and regards the dancing bee as telling the truth. The second bee fails to find the pollen, and regards the dancing bee as telling a lie.
The truth as you see it depends on your mental model, but others may have different mental models of the same reality. We can ask observers (a judge or jury) to examine a proposition, and give us a verdict. Or else devise test cases with predicted results, and compare the predictions with the actual results of running the tests. Either way, there is room for fuzziness, or a margin of error.
Read the chapter on type theory.
Nietzsche and von Foerster have a lot to answer for, as discussed in Postmodern Attacks on Science and Reality. Some Marxists, sociologists and philosophers deny the concept of objective knowledge. Some have proposed our knowledge of the world reflects nothing of the reality out there. At the extreme, this leads to the view that the “dialectic” is more important than evidence, and that any persuasively argued or widely believed assertion carries the same weight as science. Many bloggers on the world-wide web seem to presume their personal opinion is as true as the facts the world’s best scientists agree on.
The philosophy here rejects relativism, perspectivism and solipsism as dangerous ideas. People start by making a false presumption - that subjective and objective views are mutually exclusive - then go on to deny the evidence all around them that society, business and science are based on sharing knowledge.
Rather, we say here, internal cognitions can represent reality; we do transcend ourselves as individual organisms; and objectivity is not a delusion!
Scientists are aware that our sensory tools, perceptions, memories and communications are subjective and imperfect. That doesn’t mean science is unreliable and should be discarded; the reverse is the case. The scientific method is how we overcome our limitations as individual observers. By repeated testing of results against predictions, logical analysis and peer group review, we incrementally improve our confidence that a model or theory is valid.
For sure, how we see things is shaped by what species we belong to, and our personal experience of the world. Still, our neural systems evolved to enable us to perceive and remember some aspects of real-world phenomena. And our observations of phenomena are amenable to scientific analysis.
The physical world includes not only you, your food, friends and enemies, but also your mental models of things. Your mental models must describe the world well enough; else you would not survive.
Neural systems enabled animals to represent things in their environment (food, friends and enemies) in mental models. Social animals evolved further to share knowledge, by translating internal cognitions into external messages. And having shared a perception of reality, they can verify the truth of information that has been shared.
The plain fact is, one honey bee can find pollen at the location described by another. And when it happens, the two bees have demonstrably shared objective knowledge of the world.
Some say “perception is reality” as though that justifies making no effort to verify the perception. But still, we can and should verify that perceptions and constructs do represent reality - well enough.
Systems thinkers often say things like:
· “All experience is subjective” (Gregory Bateson).
· "Objectivity is the delusion that observations could be made without an observer." (von Foerster).
For sure, we cannot know – perfectly and completely – what a thing is; that is not even a meaningful suggestion. Yes, we can only know a thing as it is represented in descriptions, models or theories. But that does not mean all our knowledge of the world is entirely subjective or personal, that all descriptions of the world are equally valid, or that sharing an observation does nothing to increase our confidence in its objectivity.
The distinction between subjective and objective views is far from simple. Given a description made by one observer, you might regard it as subjective. Given the same description by two observers, you may have more confidence in its objectivity. The more observers give you the same description, the more objective you will likely consider it. And shown that Newton's description of force (f = ma) is used successfully every day, all over the world, you’d surely call it objective.
A subjective view is more fallible because it is personal, and influenced by an individual’s feelings, tastes, or opinions. We regard an observation as subjective if we believe it is shaped by a person’s preconceptions and experiences.
An objective view is not infallible; but has been verified - empirically and/or logically, and probably shared by two or more observers. Objectivity is distinguished from subjectivity by how we make observations. In objective observation we strip out what is personal to us, and instead use a model of observing that is shared with others. An objective observation is one made using strict, standardised, procedures of measurement, designed to eliminate as much as possible of the subjective content.
In quantum physics, an observer-independent view of reality is impossible, since observing something changes it. But at the level of classical physics, where we describe biological, social and technological systems, an observer-independent view is possible. Evidently, two honey bees can share some knowledge - the distance and direction of a pollen source. Their mental models are objectively true in that they are a) not limited to one actor and b) demonstrably confirmed by empirical evidence.
In short, to transcend our subjective experience, we use science, logic and peer review (and domain-specific languages). We turn the subjective into the objective by empirical, logical and social verification. To deny that would be to deny the success of social animals and science.
The communication principle is that a receiver should decode the same meaning from a message that a sender intentionally encoded in it. The philosophy here emphasises that successful communication requires the two meanings (encoded and decoded) to match - near enough.
Hermeneutics is a philosophy that defines human experience through the use of language; it grew out of studies and interpretations of the bible. Some take the hermeneutic principle to mean that only receivers determine the meanings of the words they hear/read. The hearer alone determines the meaning of an utterance.
This dreadful postmodern idea makes speakers guilty of causing offence where none was intended. It leads to a poisonous kind of “identity politics” in which people can be criminalized for using words (e.g. for skin color). Surely better, we neither assume nor grant the right never to be offended by words, but take no offence at being accurately described ourselves? Perhaps better not assume a word for a skin color means any more than that, and accept with good grace that others may disagree with us?
This chapter has discussed how we clarify information by reducing noise and ambiguity, and verify the truth of information by empirical, logical and social means. It concludes by rejecting extreme interpretations of relativism and perspectivism.
Knowledge is fuzzy, there are degrees of truth. We can reasonably point to a particular circus ring and call it “circular”. But on close inspection, no circus ring is perfectly circular; it is only near enough circular to be usefully described thus.
A truth triangle
True-enough propositions
<create and use>   <describe and predict>
Rational actors   <observe and envisage>   Realities
Given a proposition or model, then we might call it:
· a supposition, speculation or hypothesis if it has never been tested
· knowledge if it has been tested successfully
· babble or nonsense if it fails to pass tests we agree are important.
In practice, these distinctions are fuzzy. Much knowledge is a proposition or model that describes or predicts a reality well enough. Generally speaking, the propositions of sociologists and economists are less certain and reliable than those of chemists and physicists, and the “true enough” tests are more relaxed.
If you find them helpful, please spread the word and link to the site in whichever social media you use.