From science to scientism

Copyright 2016 Graham Berrisford.

One of about 300 papers. Last updated 29/05/2017 16:48


Evidence-based medicine is scientific; a medicine man (or shaman) is scientistic.

Some suggest modern social systems thinking (SST) derives from, or is an advanced application of, general system theory (GST).

Actually, SST approaches both preceded GST and depart from it in one or more of the ways listed below.


General system theory (GST)

Can be contrasted with approaches that are:

·         Specific to situations in which humans interact (where GST is general)

·         About individual actors (purposeful people) rather than system roles and rules

·         About meta systems (M) rather than systems (S)

·         About solving a problem or changing a situation rather than describing testable systems


This paper explores the first difference.



“Scientism”

50 years of doomsaying

Totalitarianism and individualism

Pseudo-science

Conclusions and remarks



“Scientism”

Thomas Carlyle, the 19th century Scottish writer and philosopher, coined the term "the dismal science" for economics.

In 1974, Friedrich von Hayek gave an Economic Sciences Nobel prize acceptance speech entitled The Pretence of Knowledge.

Here’s an excerpt:


“It seems to me that this failure of the economists to guide policy more successfully is closely connected with

their propensity to imitate as closely as possible the procedures of the brilliantly successful physical sciences

- an attempt which in our field may lead to outright error.

It is an approach which has come to be described as the ‘scientistic’ attitude –

which, as I defined it some thirty years ago, ‘is decidedly unscientific in the true sense of the word,

since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.’"


In other words, drawing an analogy between ill-defined A and well-defined B is not the same as forming a generalisation of them.

It can give the appearance of knowledge about A without the substance, and lead to error.


A shaman announces a theory, and recommends you act as that theory suggests.

The trouble is that verification or falsification is impossible because the shaman is always right.

If the action fails, the shaman will say you didn’t try hard enough, or unforeseeable phenomena got in the way.

And if it succeeds, you may suspect it was down to a different cause, because success has a thousand fathers.


For instance, where is the evidence that a debt crisis can be solved by accruing more debt (a de facto policy implemented today)?

Without evidence to support or contradict the policy, you can’t rationally argue for or against it.

If the policy fails it’s only because you did not pile on more debt fast enough, and so, the guru is always right.

Or if the policy succeeds, it might be down to other parallel influences (say, unforeseeable market forces coming into play).


Software engineering is far from immune to scientism.

Design fashions (OO, CBD, SOA, Microservices) come and go with little or no testing of one fashion against another.

As long as the code works, people don’t look into whether it works optimally or not.

50 years of doomsaying

Sooner or later, we are probably heading for some kind of worldwide calamity.

But since we still can’t predict the weather next month, it is difficult to be confident about predictions of when that calamity will happen.


The collapse of institutions

In the 1970s, Ackoff and Beer (independently) predicted the imminent collapse of governments, if not western civilisation.

Have their predictions been verified or falsified by the evidence?


It turns out that many statistics have moved dramatically and surprisingly in the right direction since the 1970s.

There have been global increases in health and education, and reductions in poverty.

Google anything you can find from Hans Rosling, especially “200 Countries, 200 Years, 4 Minutes - The Joy of Stats”.


Ackoff and Beer were not the only systems thinkers to take a pessimistic view of the world and its future.


The limits to growth

“In the 1970s, the Club of Rome (Meadows et al, 1972) released its first report “The Limits to Growth”.

The scientists and philosophers of the Club took a systemic look at the development of present-day civilisations

by considering the interactions of global subsystems in the areas of population growth, agricultural production, dwindling resources and pollution.

On the basis of the computer simulation of the future course of the world ecology, they predicted worldwide calamity by the year 2025.” (Bausch)


Meadows’ team modelled industrialisation, population, food, use of resources and pollution (as stocks and flows in a “System Dynamics” model).

They modelled the historical data, then modelled a range of scenarios up to 2100, with varying assumptions about action on the environment and resources.

Their “business-as-usual” model predicted a catastrophic collapse in the economy, environment and population before 2070.
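A stock-and-flow model of the kind Meadows’ team built can be sketched in miniature. The sketch below is a drastically simplified, hypothetical illustration: the stocks, flows and rates are invented for the example, not Meadows’ calibrated values.

```python
# Hypothetical stock-and-flow sketch in the System Dynamics style:
# stocks are integrated over time from inflows and outflows (Euler steps).

def simulate(years=100, dt=1.0):
    population, resources, pollution = 100.0, 1000.0, 10.0  # initial stocks
    history = []
    for _ in range(int(years / dt)):
        # Flows: growth slows as resources deplete and pollution rises.
        resource_factor = resources / 1000.0
        births = 0.03 * population * resource_factor
        deaths = 0.01 * population * (1.0 + pollution / 100.0)
        depletion = 0.05 * population          # consumption drains the stock
        emissions = 0.02 * population
        absorption = 0.1 * pollution
        # Integrate the stocks over one time step.
        population += (births - deaths) * dt
        resources = max(resources - depletion * dt, 0.0)
        pollution += (emissions - absorption) * dt
        history.append((population, resources, pollution))
    return history

history = simulate()
```

Even this toy version shows the shape of the argument: growth continues until the resource stock runs down, after which the feedback loops turn growth into decline.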


Forty-two years later, some pointed out that the “business-as-usual” model matches reality pretty well so far.

They therefore proposed that the catastrophic collapse will happen when Meadows’ model predicts it will happen.



Trouble is, first: the “business-as-usual” presumption is questionable, since the world has changed in so many ways.


And second, the graphs show near-to-linear continuations of past trends so far.

So it remains impossible to be confident about when, if ever, the predicted change from linear to non-linear will happen.


Often, a System Dynamics model is a scientistic theory of how the world works.

The guru is always right: if the model fails to predict reality, the guru may point to an unforeseeable interference from something outside their theory.

Success has a thousand fathers: if the model predicts reality, it might be that something outside their theory (e.g. global warming) is the primary cause.

And it probably won’t be clear what the outcome of doing things differently would have been.

Totalitarianism and individualism

The balance between centralisation and distribution runs through much thinking about systems and societies.

A theme in socio-cultural systems thinking is the balance between totalitarianism (hierarchical control) and individualism.

Here, individualism might be interpreted as anarchy, or liberalism or participatory democracy.


There are well-known reasons why hierarchical bureaucracies can be inefficient and inept:

·         Parkinson's law

·         The Peter principle

·         The difficulty of recruiting, motivating and retaining employees to do boring or difficult work

·         The impossibility of a top manager knowing enough to do much better than random in decision making

·         The unintended consequences (distortions of behavior) that arise from top-down targets.


Some systems thinkers present their own insights into the weaknesses of commercial and government institutions.

Some advocate a particular organisational model – often towards the individualism end of the spectrum.

Bausch suggests systems thinkers have a mission to herald a new era of social organisation, of advancing participative democracy.

 “If systems theory is applied to social processes in the manner exemplified in this book, it offers practical and ethical methods for advancing participatory democracy.”

But few thinkers present convincing empirical evidence to support their theories and recommendations.


Pseudo-science

Richard Feynman (1918-88) was recently ranked as one of the ten greatest physicists of all time, and left us with insights that go beyond the world of physics.

Here is what Feynman had to say about social sciences in a BBC interview in 1981 (the whole series is worth viewing).


“Because of the success of science, there is a kind of a pseudo-science.

Social science is an example of a science which is not a science.

They follow the forms. You gather data, you do so and so and so forth, but they don’t get any laws, they haven’t found out anything.

They haven’t got anywhere – yet. Maybe someday they will, but it’s not very well developed.


But what happens is, at an even more mundane level, we get experts on everything that sound like they are sort of scientific, expert.

They are not scientists.

They sit at a typewriter and they make up something like ‘a food grown with a fertilizer that’s organic is better for you than food grown with a fertilizer that is inorganic’.

Maybe true, may not be true. But it hasn’t been demonstrated one way or the other.

But they’ll sit there on the typewriter and make up all this stuff as if it’s science and then become experts on foods, organic foods and so on.

There’s all kinds of myths and pseudo-science all over the place.


Now, I might be quite wrong. Maybe they do know all these things. But I don’t think I’m wrong.

See, I have the advantage of having found out how hard it is to get to really know something, how careful you have to be about checking your experiments, how easy it is to make mistakes and fool yourself.

I know what it means to know something. And therefore, I see how they get their information.

And I can’t believe that they know when they haven’t done the work necessary, they haven’t done the checks necessary, they haven’t done the care necessary.

I have a great suspicion that they don’t know and that they are intimidating people by it.

I think so.  I don’t know the world very well but that’s what I think.”


Socio-cultural system thinkers strive to understand the collective behavior of social groups, sometimes with a view to changing them.

The trouble is, human social groups contain volatile, irrational, unpredictable and contrary human actors.

To Feynman’s point, rather than acknowledging this huge limitation, the gurus present their theories and models as truths without the evidence that harder sciences expect.


The gurus describe social systems using terms like "complex, adaptive non-linear systems”.

They borrow mathematical-sounding terms like “complexity theory”, “nonlinear dynamics”, “fractal geometry” and “chaos theory”.

And other scientific-sounding terms like “autopoiesis”, “emergent properties”, “strange attractors” and “entropy”.

Where von Hayek called this scientism, Feynman called it pseudo-science.

Conclusions and remarks

Scientism is the practice of asserting things to be true in a way that sounds plausible, and like a scientific statement.

E.g. “Organic foods are better for you”.

Science is more; it is the practice of testing assertions in an attempt to confirm or deny them.

For the scientific method to be applicable, a system in operation must be testable against a system description, just as any good theory is testable.

Karl Popper taught us that scientific theories should be readily falsifiable.


The importance of system testing to GST

GST and first order cybernetics deal with behaviors that are regular, or deterministic, or reproducible.

Actual (empirical) performances of behaviors can be tested for conformance to abstract (theoretical) descriptions of those behaviors.

As, for example, the actual orbits of planets are tested for conformance to astronomers’ descriptions of those orbits.

And the actual behaviors of US governments are tested for conformance to the description of those behaviors in the US constitution.

Social system example:

Abstract system description:      US constitution

Concrete system realization:      US government


So to apply general system theory successfully:

1.      You observe or envisage a discrete concrete entity.

2.      You hypothesise that the entity can be observed or built to realise (near enough) an abstract system description.

3.      You describe the system (in a mental or documented model) by defining the abstract property types/variables and rules you envision will be measurable.

4.      If the entity does not exist, then you manufacture it.

5.      You observe the entity in motion and test that it gives values to the defined variables (near enough) as you expect.
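The five steps above can be sketched in code. The sketch is a hypothetical miniature: the thermostat-like rule, the variable names and the observations are all invented for illustration.

```python
# Hypothetical sketch of testing a concrete entity against an abstract
# system description: the description defines variables and a rule; the
# test checks that observed behavior conforms (near enough) to the rule.

# Step 3: describe the system - a thermostat-like rule over three variables.
description = {
    "variables": ["temperature", "setting", "heater_on"],
    # Rule: the heater should be on exactly when temperature < setting.
    "rule": lambda obs: obs["heater_on"] == (obs["temperature"] < obs["setting"]),
}

# Step 5: observe the entity in motion (invented observations).
observations = [
    {"temperature": 18.0, "setting": 20.0, "heater_on": True},
    {"temperature": 21.0, "setting": 20.0, "heater_on": False},
]

def conforms(description, observations):
    """Return True if every observed state satisfies the described rule."""
    return all(description["rule"](obs) for obs in observations)

result = conforms(description, observations)
```

The point of the sketch is the separation: the description is abstract and falsifiable, and the concrete entity either does or does not realise it when observed.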


GST principles (encapsulation, information feedback, determinism etc) are used every day, all over the world.

Every software engineer applies these principles, and relies on system testing to prove the system theory they code.

Human activities in business operations are designed and tested using the same principles.

We all depend on humans and computers behaving, systematically, according to descriptions of those behaviors.
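One of those principles, information feedback, can be sketched as a first-order-cybernetics regulator. The gain, target and step count below are invented for the example.

```python
# Hypothetical negative-feedback regulator: the controller senses the
# current level, compares it with a target, and feeds a correction back,
# so the level converges deterministically on the target.

def regulate(level, target, gain=0.5, steps=20):
    for _ in range(steps):
        error = target - level      # information fed back to the controller
        level += gain * error       # corrective action proportional to error
    return level

final = regulate(level=10.0, target=20.0)
```

Because the behavior is deterministic, it is testable: we can state in advance how close to the target the level should be after a given number of steps, and check the actual run against that description.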


The lack of testing in much social system theory

Evidence-based medicine is scientific; a medicine man (or shaman) is scientistic.

Some social systems thinking ideas are scientistic, meaning they are hard to verify or falsify by testing actual behaviors against a description of them.

E.g. the evidence for the effectiveness of Beer’s “viable system model” is sparse, and the “Hawthorne effect” offers an alternative explanation.


There have been countless other social systems thinking propositions.

It is often unclear whether a proposition is correct, effective or can be recommended, because it is so hard to test or evaluate it.


Even calling an entity or organisation a "system” is an empty assertion if there is no system description that real-world behaviors have been found to match.



All free-to-read materials are paid for out of income from Avancier’s training courses and methods licences.

If you find the web site helpful, please spread the word and link to it in whichever social media you use.