Challenging architecture assumptions

This page is published under the terms of the licence summarized in the footnote.

All free-to-read materials on the Avancier web site are paid for out of income from Avancier’s training courses and methods licences.

If you find them helpful, please spread the word and link to the site in whichever social media you use.

 

This paper outlines some possible premises related to system architecture, and challenges them.

Contents

There is a right, pure or best architectural style?
Our architects are like building architects?
Architects describe concrete system components?
Integrated systems are better than silo systems?
Loose-coupling is better than tight coupling?
Hub and spoke is better than point to point?
Reuse is a good thing?
Buying is better than building?
Replacing is better than maintaining?
Getting a system to work proves its architecture was good?
Architecture is separable from design?
Business systems are like biological systems?
Evolution is better than intelligent design?
Bottom up is better than top-down?
A top-to-bottom enterprise architecture can be fully documented?
Bottom to top traceability is possible?
Architects should expect new technologies to simplify systems and improve productivity?
Architecture is science?
Modularisation reduces the intellectual challenge?

 

A strategy can be agile?

In “The history of data processing” (http://www.infoq.com/presentations/db-history-data-processing), Mark Madsen lists eight choices facing people throughout history.

These choices are tabulated below under the headings of strategic and agile approaches.

 

Strategic       | Agile
Top-down        | Bottom up
Authority       | Anarchy
Bureaucracy     | Autonomy
Control         | Creativity
Hierarchy       | Network
Dynamic         | Static
Powerful        | Easy
Work up front   | Postponed

 

Madsen says: “in every choice, something is lost and something is gained.”

 

Given two extremes, a compromise usually has to be made, rather than choosing one or the other.

To occupy the middle ground is not to choose both extremes; it is to be neither wholly one thing nor wholly the other.

So, a term like “agile strategy” is misleading.

A compromise between the two is neither properly agile, nor properly strategic.

 

An agile approach empowers an agile team not only to change products in the pipeline, but also to shuffle and amend requirements.

Principles include "Do the minimum that can possibly work", "You ain't gonna need it" and "Fail faster is good".

Agile methods focus on what a team can achieve through collaboration rather than top-down command and control and adherence to contracts.

 

By contrast, a strategic approach requires top-down command and control of distributed teams.

A strategy sets out a stable long-term vision, directives and goals - e.g. to win a war.

Battles and skirmishes are conducted to reach this overall end rather than for short-term gain. 

 

You can follow a strategy while being agile about tactical steps and product details.

But that doesn't make the strategy itself agile.

 

To have no stable vision, principle or goal is anti-strategy rather than agile strategy.

Top-down command and control does not work if the vision, principles and goals are in flux.

People lose faith in there being a strategic direction, and start behaving as though it doesn't matter.

 

Sometimes, top-down command and control is counter-productive, because it inhibits agility.

It can be better to look for each team to do what can be achieved in the short term, at minimal cost.

That is a rational philosophy/position, but it is anti-strategy rather than agile strategy.

 

Most of the eight choices Madsen listed above surface in papers on the Software Architecture page at avancier.website.

The sixth choice (Dynamic or Static) appears in the next point.

Our architects are like building architects?

Analogies between system architecture and building architecture mislead people about what to focus on and how to go about it.

Our domain is the architecture of human and computer activity systems.

These systems are composed of components that cooperate in the performance of processes.

The processes are what matters, since they deliver the desired results.

The emphasis on dynamic behaviour over static structure makes activity system architecture different from building architecture.

 

Building architects (designing a house, bridge, ship or city):

·         Architectural drawings are of tangible components (walls, pillars, bulkheads, roads).

·         The buildings look like the drawings.

·         Builders cannot elaborate the drawings until they “work”.

·         Builders step from description to concrete materials.

·         Architects focus primarily on defining system structure.

·         They consider, but don’t specify, how people should behave in their structure.

 

Activity system architects (designing a human and computer activity system):

·         Architectural drawings are of abstract roles and processes.

·         The “buildings” look nothing like the architectural drawings.

·         “Builders” painstakingly elaborate the specification until it can be performed.

·         Builders do not step from description to concrete materials; all is abstraction.

·         Architects define system behaviour, and then what is needed to perform it.

·         They detail performable instructions for how humans and computers should behave.

Architects describe concrete system components?

True, an activity system employs concrete, tangible entities by way of human and computer actors.

In a human activity system, the component instances that perform procedures are people.

In a computer activity system, the component instances that perform programs are computers.

Both human and computer components are general-purpose processors.

We purchase or hire these components as commodities - we largely take their abilities for granted.

We do not describe these components or make them; we specify only a few qualities they must possess and the roles they play in our system.

Integrated systems are better than silo systems?

A silo system (or point solution) is an organisation unit, application or other system that:

·         is not standardised - does not follow the same rules or processes as another doing the same thing

·         is not integrated - does not share information with another doing something different

·         does not share/reuse common services – at the business or technology level.

 

A silo system either does not interoperate with other systems, or resists attempts to make it interoperate.

The result is that silo systems are connected by only a few links, and loosely-coupled even there.

Inevitably, the difficulty of connecting silo systems leads designers to duplicate components in different systems.

 

Approaches like SOA can be seen as a reaction to the “problem” of silo systems.

The visionary postulates a world in which software elements are very widely shared.

At the base of the system are components which each create, read, update and delete a discrete parcel of information.

At the top of the system are business processes - a thin layer of control flows which orchestrate the shared services to reach a specific goal.

Ultimately, information systems across the world become one vast information system, with no duplication of functionality between components.
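
To make the vision concrete, here is a minimal sketch in Python, with hypothetical service and process names: each entity service owns the create/read/update/delete access to one parcel of information, and the business process is a thin layer of control flow over those shared services.

```python
# A minimal sketch of the SOA vision described above (service and process
# names are illustrative). Each "entity service" owns CRUD access to one
# discrete parcel of information; the business process is a thin layer of
# control flow that orchestrates the shared services.

class CustomerService:
    """Owns the customer data; the only component allowed to touch it."""
    def __init__(self):
        self._customers = {}

    def create(self, customer_id, name):
        self._customers[customer_id] = {"name": name}

    def read(self, customer_id):
        return self._customers[customer_id]


class OrderService:
    """Owns the order data; the only component allowed to touch it."""
    def __init__(self):
        self._orders = []

    def create(self, customer_id, item):
        self._orders.append({"customer": customer_id, "item": item})
        return len(self._orders) - 1


def place_order_process(customers, orders, customer_id, item):
    """A thin business process: pure control flow over shared services."""
    customer = customers.read(customer_id)       # reuse the shared service
    order_id = orders.create(customer_id, item)  # no duplicated data logic here
    return f"Order {order_id} placed for {customer['name']}"


customers = CustomerService()
orders = OrderService()
customers.create("c1", "Acme Ltd")
print(place_order_process(customers, orders, "c1", "widget"))
```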

 

And yet - silo systems can be a good thing.

We can change one silo system without worrying too much about its effect on others.

Keeping silo systems apart stops the wider system from becoming too complex to understand and to manage.

In practice, people can and do manage scale and complexity by dividing a large and complex system into distinct silo systems.

Loose-coupling is better than tight coupling?

In “On the Criteria to Be Used in Decomposing Systems into Modules” (1972) Parnas introduced the idea of information hiding.

This was later re-expressed by others as the twin principles of high cohesion within a module and loose coupling between modules.
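
A minimal sketch of the idea, with made-up names: the module below hides its data representation behind a narrow interface, so it stays cohesive internally and its callers remain loosely coupled to it.

```python
# An illustration of Parnas-style information hiding (names are invented,
# not taken from Parnas's paper). Callers depend only on the narrow
# interface; the internal representation can change without affecting them.

class CustomerIndex:
    """High cohesion: everything about storing and finding customers lives here."""

    def __init__(self):
        # Design decision hidden from callers: a dict keyed by lower-cased name.
        # It could be replaced by a sorted list or a database without changing
        # the interface below.
        self._by_name = {}

    def add(self, name, phone):
        self._by_name[name.lower()] = phone

    def lookup(self, name):
        return self._by_name.get(name.lower())


# Loose coupling: this caller knows nothing about how the index is stored.
index = CustomerIndex()
index.add("Ada Lovelace", "0123 456789")
print(index.lookup("ada lovelace"))  # -> "0123 456789"
```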

 

The architectural fashion since the 1990s has been towards loosely-coupled architectures.

But loose-coupling has downsides. It can mean:

·         longer end-to-end processes: due to slower inter-component communication

·         higher process failure rates: due to inter-component communication failures

·         more problems arising from inconsistencies: due to discrepancies between data maintained in different subsystems

·         higher costs: due to the cost of network and mediator technologies to connect subsystems

·         higher system complexity

·         lower system change productivity.

 

As a crude generalisation, architects look for:

·         Increasingly high cohesion within a system as it gets smaller.

·         Increasingly loose coupling between systems as they get larger.

 

Within a system of a manageable size, you may do better to keep its subsystems closely coupled in some or all ways.

Only when the system reaches a size you can’t manage are you better advised to keep the systems loosely coupled in most ways.
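
A rough illustration of the trade-off, using an in-memory queue to stand in for network middleware (all names are invented): the tightly-coupled call is simple and fails fast, while the loosely-coupled version adds mediation, latency and a window of inconsistency.

```python
# A sketch of tight versus loose coupling between two components.

import queue

def bill_customer(amount):
    return f"billed {amount}"

# Tightly coupled: a direct call. Simple, fast, and any failure surfaces
# immediately in the caller - but the two components must change together.
def take_order_tight(amount):
    return bill_customer(amount)

# Loosely coupled: the caller only posts a message; billing happens later.
# The components can change and be deployed separately, but the end-to-end
# process is now longer, can fail in the middle, and the two sides can hold
# inconsistent data until the message is processed.
billing_queue = queue.Queue()

def take_order_loose(amount):
    billing_queue.put({"amount": amount})
    return "order accepted, billing pending"

def billing_worker():
    while not billing_queue.empty():
        msg = billing_queue.get()
        print(bill_customer(msg["amount"]))

print(take_order_tight(100))   # -> "billed 100"
print(take_order_loose(100))   # -> "order accepted, billing pending"
billing_worker()               # -> "billed 100" (eventually)
```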

Hub and spoke is better than point to point?

For many years, technology vendors have used the following story to sell their middleware technology - it barely matters what kind of middleware it is.

 

“Your trouble (Mr Customer) is that you have hundreds of applications, which talk to each other in a point-to-point style.

Looked at as a whole, your application portfolio is unmanageable spaghetti.

I can radically improve your enterprise architecture.

Put my technology in the middle.

Connect all your applications to my technology, instead of to each other.

The spaghetti has disappeared.

Your new hub-and-spoke enterprise architecture will be much more efficient and easier to manage.”

 

This is a con trick – even if the salesman doesn’t realise it.

He started by talking about logical point-to-point data flows, then switched to talking about physical hub-and-spoke communication channels.

The enterprise can find itself painfully coding most if not all the same point-to-point data flows inside the vendor’s technology.

Not only is the spaghetti still there, but each strand has been divided into two sections – to and from the hub.

And the enterprise now has yet another technology to operate and maintain.
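
A hypothetical sketch of the point: unless every application agrees a common message format, someone still has to code a mapping for each sender-receiver pair. The hub merely holds those mappings, and each flow now makes two hops.

```python
# A hypothetical "hub" showing why the spaghetti can survive inside it.
# Each point-to-point mapping still has to be written - it has simply
# been relocated, and split into sender-to-hub and hub-to-receiver hops.

def crm_to_billing(msg):
    return {"cust": msg["customer_id"], "amt": msg["total"]}

def crm_to_warehouse(msg):
    return {"cust": msg["customer_id"], "items": msg["lines"]}

# The hub is only a lookup table of the old point-to-point mappings.
HUB_MAPPINGS = {
    ("crm", "billing"): crm_to_billing,
    ("crm", "warehouse"): crm_to_warehouse,
    # ... one entry per sender-receiver pair, much as before
}

def hub_route(source, target, message):
    inbound = dict(message)                  # hop 1: sender to hub
    transform = HUB_MAPPINGS[(source, target)]
    return transform(inbound)                # hop 2: hub to receiver

order = {"customer_id": "c42", "total": 99.0, "lines": ["widget"]}
print(hub_route("crm", "billing", order))
```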

 

There is a place for middleware: for many-to-many messaging, and where senders and receivers are volatile.

But it is not a universal panacea, and in some cases it complicates rather than simplifies.

Reuse is a good thing?

Reuse does not always mean faster, better or cheaper.

Reuse is an ambiguous term; it can mean direct, copy or tailor reuse.

 

Direct reuse - one instance of a thing is used at run time by many.

This leads to the many service sharing challenges discussed elsewhere on the web site.

 

Copy reuse – many instances of a thing are used by many.

Copy reuse can be counter-productive.

It means you started with something you didn’t quite want, and are constrained by that.

 

Copy and tailor reuse – many variations of a thing are made at design time.

To revise what you copy costs you time and money.

Then you have to test it before you can use it.

Your testing may reveal features of the copied thing that are contrary to your purpose, meaning you have to withdraw it and revise it a bit more.

You now have a bespoke thing, and you are obliged to maintain it separately from (in parallel with) the original thing.

 

Improving operational efficiency may involve a trade-off between:

·         Producing more – by automating production – perhaps by copy reuse, or copy and tailor reuse.

·         Producing less – by careful design – by not copying things – by consolidating several things into one – by direct reuse.
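
A small illustration of the difference between direct reuse and copy-and-tailor reuse, with invented function names: the shared routine is maintained once, while the tailored copy must be tested and then maintained in parallel with the original.

```python
# Direct reuse versus copy-and-tailor reuse (names are made up).

# --- Direct reuse: one shared definition, used at run time by many callers.
def standard_discount(price):
    """The single shared routine; fix a bug here and all callers benefit."""
    return round(price * 0.90, 2)

# --- Copy-and-tailor reuse: a pasted copy, tweaked for one team's needs.
def standard_discount_sales_team(price):
    """A tailored copy: it must now be tested, and maintained in parallel
    with the original - fixes to standard_discount do not reach it."""
    discounted = round(price * 0.90, 2)
    return max(discounted, 5.00)   # local tailoring: never discount below 5.00

print(standard_discount(100.0))             # -> 90.0
print(standard_discount_sales_team(4.0))    # -> 5.0
```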

Buying is better than building?

Many businesses have set out to consolidate and standardise their applications by buying a big COTS package, only to be disappointed.

They say their enterprise has spent millions of dollars/pounds/euros on an ERP or CRM package, and then found it does little of what they want and/or has to be expensively configured by specialist consultants.

Replacing is better than maintaining?

The problem is rarely that the old system was no good.

Sometimes the old system is better than the new one.

The problem is sometimes that the intelligent design team who created the old system has moved on.

Legacy systems will die if there is no design team looking after the system with the same intelligence and care as the original design team.

Getting a system to work proves its architecture was good?

In 1975, Michael Jackson wrote: “The beginning of wisdom for a programmer is to recognise the difference between getting a program to work, and getting it right.”

By right, he meant economical and maintainable.

Software systems have been built according to many different design methods and patterns.

When a system works, the team naturally assumes their methods and patterns are correct, and the system is well-designed.

Just because you have made a design pattern work, doesn’t mean that pattern was best or right for the system you had to build.

 

A personal anecdote

In his 1971 paper on “stepwise refinement”, Niklaus Wirth said “program construction consists of a sequence of refinement steps. In each step a given task is broken up into a number of subtasks.”

The paper vastly over-complicated what should have been a simple program.

A year or two later, Dijkstra wrote up a different solution to the same program design problem.

A year or two later, I reworked it in an object-oriented style, designing it around an object that called another of the same type in a recursive fashion.

I was pleased to have worked through the intellectual challenge and impressed with my own work.

Many years later, I reworked it in event-oriented style, and found the program could be coded in about 20 lines of BASIC.
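
Wirth’s 1971 paper used the eight-queens problem as its worked example; assuming that is the program referred to here (the text does not name it), the sketch below shows one compact recursive solution in Python. It is not the author’s event-oriented BASIC version, only an illustration that the problem admits a short program.

```python
# Assuming the program in question is the eight-queens example from Wirth's
# 1971 stepwise-refinement paper (an assumption - the anecdote does not say),
# a compact recursive backtracking solution fits in a dozen or so lines.

def queens(placed=()):
    """Return one safe placement; placed[i] is the column of the queen in row i."""
    row = len(placed)
    if row == 8:
        return placed
    for col in range(8):
        # A column is safe if no earlier queen shares it or either diagonal.
        if all(col != c and abs(col - c) != row - r for r, c in enumerate(placed)):
            solution = queens(placed + (col,))
            if solution:
                return solution
    return None

print(queens())   # e.g. (0, 4, 7, 5, 2, 6, 1, 3)
```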

Architecture is separable from design?

“All architecture is design but not all design is architecture.

Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change.” Grady Booch

 

If a design is an abstraction from an executable specification, then an architecture description is an abstraction from a design.

But there is no fixed line between design and architecture.

 

There are scores of papers on the Avancier web site on the topics of enterprise, solution and software architecture.

You could replace all "architecture" with "design" in all the papers without any effect on their meaning.

 

A continuous abstraction hierarchy runs from the bottom-level executable specification to the highest-level system or enterprise description.

If you do draw a line between architecture and design, then it will shift up and down the abstraction hierarchy depending on

·         how high you start your analysis

·         how low you finish your design.

 

“I entirely concur. There is a design continuum that runs from more abstract to less abstract (also from more distributed to more local).

One end, by convention around the top of this continuum, can be referred to as 'architecture', with the bottom end 'implementation', but it is all design at some level…

The design continuum is often broken up by people into segments labeled 'architecture', 'design', 'detailed design', 'implementation'.

The boundaries of the segments are relative and depend on viewpoint. So any 'line' is a man-made, non-deterministic, judgment call, of convenience.” Ron Segal

Business systems are like biological systems?

Biological systems (at the level of their biochemistry) are not purposeful; they just work so as to sustain themselves (autopoiesis).

They contain components which interact to the effect of performing wider processes (breathing, perspiration etc.)

But there are no overarching procedures.

By contrast, business and software systems are purposeful; they are designed to support and enable “end to end” processes whose scope must be understood by architects.

 

In the human species (of which each human is an individual):

·         Reproduction of individuals is difficult, expensive and slow.

·         Reproduction creates varying individuals.

·         Variants that require more resources die out.

·         The species changes in tiny steps over millennia.

·         If an individual stops performing its processes, it dies.

·         An individual’s survival requires it to find inputs.

·         An individual decays and dies.

·         Individuals cooperate to help each other survive and/or kill each other.

·         An individual’s processes work because they grew by chance to help survival.

·         The whole has evolved to sustain the cell-level components and vice-versa.

·         Individuals choose goals and steer end-to-end human processes.

 

In any software species (of which every copy is an individual):

·         Reproduction of individuals is easy, cheap and fast.

·         Reproduction creates identical individuals.

·         Variants that require more resources are given more resources.

·         The species changes a lot in a 30 day sprint.

·         If an individual stops performing its processes, it simply rests.

·         An individual’s survival depends on people wanting its outputs.

·         An individual never decays (and need not die).

·         Individuals do not kill each other.

·         An individual’s processes work because they were designed to work.

·         The cell-level components are designed and organised to sustain the whole.

·         Designers impose goals and end-to-end business processes.

Evolution is better than intelligent design?

Evolution is a bottom up process by which a system changes slightly, from one generation to the next.

Evolution by natural selection proceeds by random changes – not designed ones.

Evolution in biology means a continuing iteration of small changes that better adapt a species to its environment.

There is much to be said for evolutionary development.

Evolution is fine if you don’t know where you are headed, and you are happy to proceed in small increments.

 

The opposite of evolution is intelligent design.

Intelligent design is needed to make wholly new things and large-scale changes.

Architects are intelligent designers who produce blueprints for new things and large-scale changes.

If you don’t have new things or large-scale changes to make, then you don’t need architects.

 

This comparison contrasts evolution in biology with evolution in IT, factor by factor.

 

Population survival and growth through change

In biology: There is a continuing cycle of reproduction with tiny changes in each generation. Changes that help individuals to thrive and reproduce (better than competitors) are more likely to be passed on. Changes that hinder individuals from doing this are less likely to be passed on.

In IT: Viruses replicate themselves, but we don't willingly download virus updates. Other kinds of software are so widely useful that they get not only copied but also updated. For example, most of us have a copy of the Java Virtual Machine, and willingly download updated versions of it. However, population growth is not what IT people think of when talking about evolution. They focus on the improvement of a single system instance, through iterative change management. The key here is the term 'management'.

 

Source of change

In biology: Changes occur through chance mutations in genes.

In IT: Changes occur by design in response to requirements, test results or change requests. This is a managed process that depends on intelligent design.

 

Speed of change

In biology: Genetic changes are very small and accumulate very slowly. Hundreds or thousands of generations may pass without perceptible change.

In IT: Change can be very fast; a new system may be released every day.

 

Direction of change

In biology: Most genetic changes are deleterious. Only a few are advantageous and survive.

In IT: Most software changes are improvements. Intelligent designers don't look to make any deleterious change.

 

Consolidation of poor design

In biology: Biological evolution can consolidate what a designer would consider a poor design. For example, the photoreceptors in the eye point backwards (an example quoted by Richard Dawkins).

In IT: Software enhancements can consolidate what a designer considers a poor design. A widely reused system may be badly designed, and may wastefully offer many services that nobody wants to use. As long as the system fits the niche its users want, it survives and may be copied.

 

Catastrophic change

In biology: There are occasionally dramatic changes in the physical environment which cause mass extinctions.

In IT: There are frequently changes in the operating systems and other platform software upon which applications depend.

 

Size of change

In biology: There is absolutely no possibility of any substantial 'redesign' to optimise the structure of a system or make it work in a new environment.

In IT: In agile software development, intelligent designers are encouraged to continually ‘refactor’ the system to remove waste and optimise the system design. Even a catastrophic change to the underlying platform software can be managed by intelligent designers.

 

However much users like an IT system, it will eventually die out.

And it will die sooner if there is no design team looking after it - with the same intelligence and care as the original design team.

Every kind of IT development requires intelligent designers.

Bottom up is better than top-down?

Dion Hinchliffe says that web-oriented architecture (based on REST) enables systems to be built from the bottom-up.

Some beautiful systems may emerge from the bottom-up.

But architecting is a top-down process: you can’t call a system architected if there was no architect, no intelligent design, no architectural blueprint.

A top-to-bottom enterprise architecture can be fully documented?

There are several obstacles to completing and maintaining a single enterprise architecture model.

The obstacles include the excessive scale of the model and insufficient sponsorship.

 

Excessive scale of the model

Suppose your enterprise has 32 business functions, each with 32 software applications, each with 32 components, each component with 32 modules/classes.

A full top-to-bottom model of your enterprise architecture would have over a million software modules (32 x 32 x 32 x 32 = 1,048,576) at the bottom level.

Perhaps one day we will have the resources, techniques and tools to complete a single coherent description with a million building blocks.

And connect them properly in a single hierarchical structure, and keep it up to date with changes.

I haven’t seen one yet.

 

Insufficient sponsorship

The code of a software application is a detailed description of a system’s structure and behaviour.

We have to complete it fully and correctly – we cannot get away with less – because a computer system cannot work from incomplete or incorrect instructions.

One application may be composed from a thousand modules/classes.

To build and maintain an abstract description of a thousand modules requires a huge effort.

It requires much in the way of skilled human resources, techniques and tools.

A Gartner report claimed that less than 20% of software code is in fact documented.

 

Surely, to build and maintain a comparably detailed description of a human activity system is impossible?

We could never get investment, because we can get away with far, far less.

We trust that human beings can and will work from incomplete and incorrect instructions.

Bottom to top traceability is possible?

Higher-level specifications act to define the requirements and constraints for lower-level specifications.

 

Just as too much close coupling between systems is a bad thing, so too much traceability between levels of specification is a bad thing.

You cannot document every relationship between elements in lower-level and higher-level specifications.

If you could, you would end up stitching them together into the humungous top-to-bottom architecture you were trying to avoid in the first place.

And you knew that would be too large and complex to maintain.

 

So here, in place of traceability records, we need a sense of duty and governance by human beings.

A lower-level architect has the duty to ensure their architecture conforms to the higher-level architecture as best they can.

The higher-level architect has the duty to govern the lower level, to assess the validity of the lower-level architecture during the course of architecture compliance reviews.

Architects should expect new technologies to simplify systems and improve productivity?

For many years, vendors have been selling new technologies as the answer to customers' system development and maintenance problems.

·         Languages: 3GLs > 4GLs > OOPLs > Model-Driven Architecture tools

·         Databases: CODASYL, Relational, NoSQL.

·         Communication: Client-server, Middleware, RESTful Web Services.

 

New technologies can open up new opportunities.

But it is far from obvious that productivity has improved as we move along any of these scales; some say it has got worse.

 

“The Mythical Man-Month: Essays on Software Engineering” by Fred Brooks was republished in 1995 (ISBN 0-201-83595-9) with the essay “No Silver Bullet”.

In it, Brooks argues that “we cannot expect ever to see two-fold gains every two years” in software development, as there are in hardware development.

And that “there is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity.”

Architecture is science?

Again with reference to “The Mythical Man-Month”.

A business naturally wants to minimise the number of people (N) it employs to build and maintain a system of a given size (X).

Or, to put it another way, it wants to maximise the system size (X) whose design is maintainable by a given number of people (N).

 

The key to successful system design is skilful layering and modularisation of the system.

Design principles and design patterns are useful, but most provide only qualitative guidance.

There are always trade-offs between design options.

We want guidance that is quantified in a way that will help us manage scale and complexity.

We want to measure alternative designs, and monitor the evolution of a system design over its lifetime.

 

Grady Booch and others have discussed the possibility of measuring the scale or complexity of architectures or systems.

See the papers on System Complexity and Agility at http://avancier.website.
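
By way of illustration only - not a metric taken from Booch or the Avancier papers - the sketch below counts dependency edges per module as a crude, quantitative proxy for coupling. Comparing the totals of alternative designs, or tracking them over time, is one small step in the quantified direction described above.

```python
# A crude, quantitative sketch: counting dependencies per module as a proxy
# for coupling. Module names and the dependency list are hypothetical.

from collections import Counter

# Hypothetical module dependency pairs: (dependent, depended-on).
DEPENDENCIES = [
    ("orders", "customers"), ("orders", "billing"), ("billing", "customers"),
    ("reports", "orders"), ("reports", "billing"), ("reports", "customers"),
]

fan_out = Counter(src for src, _ in DEPENDENCIES)   # what each module uses
fan_in = Counter(dst for _, dst in DEPENDENCIES)    # what uses each module

for m in sorted(set(fan_out) | set(fan_in)):
    print(f"{m:10}  fan-out={fan_out[m]}  fan-in={fan_in[m]}")

# A design alternative with fewer dependency edges scores lower total
# coupling; tracking the total over time gives one view of design evolution.
print("total coupling edges:", len(DEPENDENCIES))
```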

Modularisation reduces the intellectual challenge?

Architecting is intelligent design.

To build and maintain a system is intellectually challenging; it demands knowledgeable, intelligent and disciplined minds.

The complexity of a system design, and the manageability of the system’s evolution, is limited by the minds of its architects and maintainers.

Surely, the most efficient design will make best use of the power of a single human mind to remember and maintain a configuration structure?

Two strategies might be proposed.

·         Divide the configuration into many small loosely-coupled components that can each be maintained by a low-ability person. OR

·         Maximise the use of a few high-ability people by giving each the largest configuration they can manage.

 

The first strategy is OK when the configuration is a list or hierarchy of simple and largely unrelated components.

Surely, the second strategy will prove cheaper, faster and better for a large and complex system?

See the papers on System Complexity and Agility at http://avancier.website.

 

 

Copyright conditions

Creative Commons Attribution-No Derivative Works Licence 2.0 (24/05/2015 19:41)

Attribution: You may copy, distribute and display this copyrighted work only if you clearly credit “Avancier Limited: http://avancier.co.uk” before the start and include this footnote at the end.

No Derivative Works: You may copy, distribute, display only complete and verbatim copies of this page, not derivative works based upon it.

For more information about the licence, see http://creativecommons.org