From science to scientism
Copyright 2016 Graham Berrisford.
One of about 300 papers at http://avancier.website. Last updated 24/05/2017 21:57
Some believe modern “systems thinking” approaches derive from, or are an advanced application of, general system theory.
Actually, systems thinking approaches tend to depart from general system theory in one or more of the ways listed below.
This paper explores the first and last differences.
General system theory (GST) can be contrasted with approaches that are:
·         Specific to situations in which humans interact (where GST is general)
·         About individual actors (especially people) rather than system roles and rules
·         About meta systems rather than systems
·         About solving a problem or changing a situation rather than describing testable systems
Thomas Carlyle, the 19th century Scottish writer and philosopher, coined the term "the dismal science" for economics.
In 1974, Friedrich von Hayek gave an Economic Sciences Nobel prize acceptance speech entitled “The Pretence of Knowledge”.
Here’s an excerpt:
“It seems to me that this failure of the economists to guide policy more successfully is closely connected with
their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences
- an attempt which in our field may lead to outright error.
It is an approach which has come to be described as the ‘scientistic’ attitude –
which, as I defined it some thirty years ago, ‘is decidedly unscientific in the true sense of the word,
since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.’"
In other words, drawing an analogy between ill-defined A and well-defined B is not the same as forming a generalisation of them.
It can give the appearance of knowledge about A without the substance, and lead to error.
A scientistic guru announces a theory, and recommends you act as that theory suggests.
The trouble is that verification or falsification is impossible because the guru is always right.
If the action fails, the guru will say you didn’t try hard enough, or unforeseeable phenomena got in the way.
And if it succeeds, you may suspect it was down to a different cause, because success has a thousand fathers.
For instance, where is the evidence that a debt crisis can be solved by accruing more debt (a de facto policy implemented today)?
Without evidence to support or contradict the policy, you can’t rationally argue for or against it.
If the policy fails it’s only because you did not pile on more debt fast enough, and so, the guru is always right.
Or if the policy succeeds, it might be down to other parallel influences (e.g. unforeseeable market forces coming into play).
Software engineering is far from immune to scientism.
Design fashions (OO, CBD, SOA, Microservices) come and go with little or no testing of one fashion against another.
As long as the code works, people don’t look into whether it works optimally or not.
Sooner or later, we are probably heading for some kind of worldwide calamity.
But since we still can’t predict the weather next month, it is difficult to be confident about predictions of when that calamity will happen.
The collapse of institutions
In the 1970s, Ackoff and Beer (independently) predicted the imminent collapse of governments, if not western civilisation.
Do those who admire Ackoff or Beer now consider their predictions have been verified or falsified by the evidence?
It turns out that many statistics have moved dramatically and surprisingly in the right direction since the 1970s.
There have been global increases in health and education, and reductions in poverty.
Google anything you can find from Hans Rosling, especially “200 Countries, 200 Years, 4 Minutes - The Joy of Stats”.
Ackoff and Beer were not the only systems thinkers to take a pessimistic view of the world and its future.
The limits to growth
“In the 1970s, the Club of Rome (Meadows et al, 1972) released its first report “The Limits to Growth”.
The scientists and philosophers of the Club took a systemic look at the development of present-day civilisations
by considering the interactions of global subsystems in the areas of population growth, agricultural production, dwindling resources and pollution.
On the basis of the computer simulation of the future course of the world ecology, they predicted worldwide calamity by the year 2025.” Bausch
Meadows’ team modelled industrialisation, population, food, use of resources, pollution (as stocks and flows in a “System Dynamics” model).
They modelled the historical data, then modelled a range of scenarios up to 2100, with varying assumptions about action on environmental and resource problems.
Their “business-as-usual” model predicted a catastrophic collapse in the economy, environment and population before 2070.
Forty-two years later, some pointed out that the “business-as-usual” model matches reality pretty well so far.
They therefore proposed that the catastrophic collapse will happen when Meadows’ model predicts it will happen.
Trouble is, first: the “business-as-usual” presumption is questionable, since the world has changed in so many ways.
(See the Hans Rosling video cited above.)
And second, the graphs (shown in the article above) show near-to-linear continuations of past trends so far.
So it remains impossible to be confident about when, if ever, the predicted change from linear to non-linear will happen.
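To make the mechanics concrete, here is a minimal stock-and-flow sketch in the style of System Dynamics. It is emphatically not the World3 model: the two stocks (population, resources), the flow equations and all the coefficients are invented for illustration. It shows only the core technique Meadows’ team used: stocks updated each time step by flows that are themselves functions of the stocks.

```python
# A toy stock-and-flow model in the spirit of System Dynamics.
# NOT the World3 model: stocks, flows and coefficients are illustrative only.

def simulate(years=200, dt=1.0):
    # Stocks (state variables), in arbitrary units
    population = 1.0
    resources = 100.0

    history = []
    for _ in range(int(years / dt)):
        # Flows (rates), computed from the current stocks
        births = 0.03 * population * (resources / 100.0)  # growth slows as resources dwindle
        deaths = 0.01 * population
        depletion = 0.5 * population

        # Integrate: stock(t + dt) = stock(t) + dt * (inflows - outflows)
        population += dt * (births - deaths)
        resources = max(0.0, resources - dt * depletion)
        history.append((population, resources))
    return history

history = simulate()
```

Even this toy model produces the characteristic “overshoot and collapse” shape: growth while resources are plentiful, then decline once depletion bites. That sensitivity to invented coefficients is exactly why such models invite the scientistic trap described above.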
Often, a System Dynamics model is a scientistic theory of how the world works.
The guru is always right: if the model fails to predict reality, the guru may point to an unforeseeable interference from something outside their theory.
Success has a thousand fathers: if the model predicts reality, it might be that something outside their theory (e.g. global warming) is the primary cause.
And it probably won’t be clear what the outcome of doing things differently would have been.
The balance between centralisation and distribution runs through much thinking about systems and societies.
A theme in socio-cultural systems thinking is the balance between totalitarianism (hierarchical control) and individualism.
Here, individualism might be interpreted as anarchy, or liberalism or participatory democracy.
There are well-known reasons why hierarchical bureaucracies tend to be inefficient and inept:
· Parkinson's law
· The Peter principle.
· The difficulty of recruiting, motivating and retaining employees to do boring or difficult work
· The impossibility of a top manager knowing enough to do much better than random in decision making
· The unintended consequences (distortions of behavior) that arise from top-down targets.
Some systems thinkers present their own insights into the weaknesses of commercial and government institutions.
Some advocate a particular organisational model – often towards the individualism end of the spectrum.
Bausch suggests systems thinkers have a mission to herald a new era of social organisation, of advancing participative democracy.
“If systems theory is applied to social processes in the manner exemplified in this book, it offers practical and ethical methods for advancing participatory democracy.”
But few thinkers present convincing empirical evidence to support their theories and recommendations.
Richard Feynman (1918-88) was recently ranked as one of the ten greatest physicists of all time, and left us with insights that go beyond the world of physics.
Here is what Feynman had to say about social sciences in a BBC interview in 1981 (the whole series is worth viewing).
“Because of the success of science, there is a kind of a pseudo-science.
Social science is an example of a science which is not a science.
They follow the forms. You gather data, you do so and so and so forth, but they don’t get any laws, they haven’t found out anything.
They haven’t got anywhere – yet. Maybe someday they will, but it’s not very well developed.
But what happens is, at an even more mundane level, we get experts on everything that sound like they are sort of scientific experts.
They are not scientists.
They sit at a typewriter and they make up something like ‘a food grown with a fertilizer that’s organic is better for you than food grown with a fertilizer that is inorganic’.
Maybe true, may not be true. But it hasn’t been demonstrated one way or the other.
But they’ll sit there on the typewriter and make up all this stuff as if it’s science and then become experts on foods, organic foods and so on.
There’s all kinds of myths and pseudo-science all over the place.
Now, I might be quite wrong. Maybe they do know all these things. But I don’t think I’m wrong.
See, I have the advantage of having found out how hard it is to get to really know something, how careful you have to be about checking your experiments, how easy it is to make mistakes and fool yourself.
I know what it means to know something. And therefore, I see how they get their information.
And I can’t believe that they know when they haven’t done the work necessary, they haven’t done the checks necessary, they haven’t done the care necessary.
I have a great suspicion that they don’t know and that they are intimidating people by it.
I think so. I don’t know the world very well but that’s what I think.”
Socio-cultural system thinkers strive to understand the collective behavior of social groups, sometimes with a view to changing them.
The trouble is, human social groups contain volatile, irrational, unpredictable and contrary human actors.
To Feynman’s point, rather than acknowledging this huge limitation, the gurus present their theories and models as truths without the evidence that harder sciences expect.
The gurus describe social systems using terms like "complex, adaptive non-linear systems”.
They borrow mathematical-sounding terms like “complexity theory”, “nonlinear dynamics”, “fractal geometry” and “chaos theory”.
And other scientific-sounding terms like “autopoiesis”, “emergent properties”, “strange attractors” and “entropy”.
Where von Hayek called this scientism, Feynman called it pseudo-science.
Scientism is the practice of asserting things to be true in a way that sounds like a scientific statement.
E.g. “Organic foods are better for you”.
Science is more; it is the practice of testing assertions in an attempt to confirm or deny them.
The importance of testing in science
Karl Popper taught us that scientific theories should be readily falsifiable.
GST presumes the scientific method is applicable: a system can be tested, as any other scientific theory can be tested.
There is an abstract (theoretical) system description, against which a concrete (empirical) system realisation can be tested.
E.g. there is a US constitution that describes a system of government, against which the structures and behaviors of real-world US governments can be and are tested.
(Diagram: an abstract system description, realised by a concrete system realisation.)
General system theory is about situations in which the actual performances of behaviors can be tested for conformance to descriptions of those behaviors.
The system is a collection of repeated or repeatable behaviors whose progress can be measured in terms of changes to variables.
As, for example, the actual orbits of planets are tested for conformance to astronomers’ descriptions of those orbits.
And the actual behaviors of US governments are tested for conformance to the description of those behaviors in the US constitution.
So in applying general system theory:
1. You observe or envisage a discrete concrete entity.
2. You hypothesise that the entity can be observed or built to realise (near enough) an abstract system description.
3. You describe the system (in a mental or documented model) by defining the abstract property types/variables and rules you envision will be measurable.
4. If the entity does not exist, then you manufacture it.
5. You observe the entity in motion and test that it gives values to the defined variables (near enough) as you expect.
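The steps above can be sketched in code. This is an illustrative toy of my own devising (the thermostat, its variables and its rule are not from the paper): an abstract system description defines a variable and a rule, a concrete realisation is built, and the realisation is tested for conformance to the description.

```python
# Illustrative sketch: testing a concrete system realisation against an
# abstract system description. All names and rules here are invented.

# Step 3: the abstract description defines variables and a rule.
def expected_heater_state(temperature, setpoint=20.0):
    """Abstract rule: the heater should be on when below the setpoint."""
    return temperature < setpoint

# Step 4: a concrete realisation (here, a trivial simulated thermostat;
# in practice this could be a machine or a human following a procedure).
class Thermostat:
    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def heater_on(self, temperature):
        return temperature < self.setpoint

# Step 5: observe the entity in motion and test that the values of its
# variables conform (near enough) to the abstract description.
def test_conformance():
    concrete = Thermostat()
    for temperature in [15.0, 19.9, 20.0, 25.0]:
        assert concrete.heater_on(temperature) == expected_heater_state(temperature)
    return True
```

The point of the sketch is the separation: the description exists independently of the realisation, so the claim “this entity is a system” is falsifiable by testing, which is exactly what the scientistic propositions discussed below lack.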
General system theory principles (encapsulation, information feedback, determinism etc) are used every day, all over the world.
Every software engineer applies these principles, and relies on system testing to prove the system theory they code.
Human activities in business operations are designed and tested using the same principles.
We all depend on humans and computers behaving, systematically, according to descriptions of those behaviors.
The lack of testing in social system theory
By contrast, much systems thinking is about situations in which the actual performances of behaviors are not tested (or even testable) against any description.
The implication is that social system thinkers are more scientistic than scientific.
This is true even at the basic level of calling an entity or organisation a "system”, since that is an empty assertion if there is no system description against which reality can be tested.
Scientistic propositions in social systems thinking are countless; they include:
· Classifications of how societies have evolved through history (often implying evolution = improvement).
· Classifications of systems and social groups into different kinds (using scalar, tabular and graphical models).
· Models for how to change the organisation of a social group to meet some goals.
· Models for how to make “interventions” (as business management consultants call them) to improve business operations.
It is unclear which are correct, effective or can be recommended, because it is so hard to test or evaluate them.
All free-to-read materials at http://avancier.website are paid for out of income from Avancier’s training courses and methods licences.
If you find the web site helpful, please spread the word and link to avancier.website in whichever social media you use.