Cloud Computing

This page is published under the terms of the licence summarized in the footnote.


Is Cloud Computing a threat - to the enterprise architect?

Definitions of cloud computing

Personal use

Small business use

SME use

Large enterprise use

The complexity of a security or downtime fault tree

Interim conclusion

Further discussion about why banks don’t use cloud computing


Definitions of cloud computing

First, here is a summary of the definition by the National Institute of Standards and Technology, Information Technology Laboratory (Peter Mell and Tim Grance, Version 15, 10-7-09) at


Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.


This cloud model promotes availability and is composed of:

5 Essential Characteristics:

  • On-demand self-service.
  • Broad network access.
  • Resource pooling.
  • Rapid elasticity.
  • Measured service.

3 Service Models:

  • Cloud Software as a Service (SaaS).
  • Cloud Platform as a Service (PaaS).
  • Cloud Infrastructure as a Service (IaaS).

4 Deployment Models:

  • Private cloud.
  • Community cloud.
  • Public cloud.
  • Hybrid cloud.



  • Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.
  • Cloud computing is still an evolving paradigm.
  • Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors.
  • These definitions, attributes, and characteristics will evolve and change over time.
  • The cloud computing industry represents a large ecosystem of many models, vendors, and market niches.
  • This definition attempts to encompass all of the various cloud approaches.
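NIST's point about cloud software being stateless and loosely coupled can be made concrete with a small sketch: a stateless service derives its entire response from the request, so any replica behind a load balancer can serve any call. All names and figures below are illustrative only.

```python
# Sketch: a stateless service keeps no per-client state between calls.
# Every request carries all the context the service needs, so any
# replica can serve it and replicas can be added or removed freely.

def stateless_quote(request: dict) -> dict:
    """Price a quote purely from the request; no session, no globals."""
    subtotal = sum(item["price"] * item["qty"] for item in request["items"])
    discount = 0.1 if request.get("loyalty_tier") == "gold" else 0.0
    return {"customer": request["customer"],
            "total": round(subtotal * (1 - discount), 2)}

# The same request always yields the same response, on any server:
req = {"customer": "C42", "loyalty_tier": "gold",
       "items": [{"price": 10.0, "qty": 3}, {"price": 5.0, "qty": 2}]}
print(stateless_quote(req))  # {'customer': 'C42', 'total': 36.0}
```

Because no call depends on an earlier one, this is exactly the property that lets cloud providers pool and rapidly re-provision resources.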


The definition is OK, if elaborate, because it attempts to encompass all forms of Cloud Computing.


Cloud Computing is sometimes called a Virtual Data Centre.

Cloud Computing is the provision of scalable computing power, memory and data storage over the internet.

Cloud Computing is the provision of scalable software applications running over the internet.

Cloud Computing service providers rely on browsers to deliver thin-client applications, and virtualisation to share computing resources.

Cloud Computing makes remote hosting of applications and data storage available to small-to-medium enterprises – and to business units within a large enterprise.

Cloud Computing (using “3rd layer” network services provided by the internet) may be able to beat in-house business continuity levels (since the internet can offer availability levels and recovery levels that are better than a private line to a disaster recovery centre).
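The availability claim above can be checked with back-of-envelope arithmetic: a set of redundant, independently routed paths fails only when all of them fail at once. The figures below are illustrative, not measured.

```python
# Back-of-envelope sketch: why redundant internet routes can beat a single
# private line for availability. Figures are illustrative, not vendor data.

def parallel(*avail):
    """Availability of redundant paths: up unless ALL paths are down."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

private_line = 0.999    # one leased line: 99.9% availability
internet_path = 0.995   # each internet route, individually worse

# Two independent routes together beat the single better line:
print(round(parallel(internet_path, internet_path), 6))  # 0.999975
print(private_line)                                      # 0.999
```

The comparison assumes the routes fail independently, which real internet paths only approximate.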


From an article “Securing the Cloud” BCS Dec 2009: Cloud Computing can be:

  • a private internal cloud offering applications on demand.
  • a public delivery system for software as a service, like Amazon Web Services (AWS).
  • an application platform as a service, such as Google Maps.
  • provision of computing infrastructure (power and memory) as a service.


You probably don’t know where the service provider is.

You probably don’t know where the applications run or where the data is stored.


You may not know what the service provider does with your data (besides store it and return it to you).

You may worry about remote storage of data, and inconsistency of business data.

Your data stores are business assets.

Your business applications maintain and rely on these data stores.

Integrity of business data (a vision of the enterprise architect) is threatened whenever a business unit starts to enter data into a new database – as it may do via Cloud Computing.

“Is it wise to move IT to a virtual environment? No doubt it can provide much greater storage capacity and savings on servers and the maintenance of applications; but some may be nervous about the accessibility (especially if internet connections go down), reliability and the integrity of the information held on the system.” Government Computing magazine, September 2009

Personal use

You already use Cloud Computing if you use the Google email service, or internet banking, or any online package for accounting.

Your data (emails, account details) are stored by a service provider on the other side of the Cloud.

Your service provider owns the application specification, including whatever controls you get for configuring business rules applied to your data.

You like the fact that you don’t have to store and manage your own data, don’t have to carry your PC about with you, don’t have to back up your data or secure it.

You don’t worry about data security, data protection, data integrity and business continuity, because you assume your service provider will manage these things better than you can.

You do sacrifice some control over risks.

Your Cloud IT service provider does not sign a service level agreement containing your non-functional requirements for access to your data, and to the applications that manipulate that data.

Your Cloud IT service provider does not compensate you financially if it fails to meet an acceptable service level.

The risk is entirely yours.

Small business use

Small businesses are already using Software as a Service (SaaS) delivered via the Cloud.

They probably don’t use the Cloud in place of an IT department (if they never had one).

Rather they use the Cloud to do stuff they never did before – like automate their Customer Relationship Management records.

They are using application services they couldn’t or wouldn’t pay to have developed or operated in-house.

And like the personal user above, they live with whatever functions are provided by the SaaS application.

A small business might notice some deterioration in services when lots of users are on-line.


But just like a personal user, the small business does not monitor service levels in a systematic way, and does not want to pay for a contract that guarantees them.

A small business may not worry much about data security issues - it may not even have considered what the service provider does with the business data it stores.

A small business probably doesn’t have many bespoke requirements.

They don’t submit application change requests to Microsoft and Google.

And they probably aren’t running systems that have to integrate data from Cloud applications with data held in in-house databases.

(Though that is a direction in which their IT systems might grow.)

SME use

A small-to-medium enterprise must evaluate the risks and benefits of cloud computing, like any other new technology.

What follows is a distillation and critique of a piece by David Moorman (12-Nov-2009 16:46:41) of DynaSis Integrated Systems, who serve small-to-medium enterprises in the Atlanta area.


Possible risk: My data won’t be as accessible in the cloud?

Risk analysis by David Moorman: Cloud computing is meant to maximize access to your data. Since your data is delivered over the Internet, you are no longer tied to a physical machine. At the office, home, or on the road, you can get to necessary files and applications.

Comment by me: That’s thin client computing. It implies a data centre, but not necessarily one in the cloud.

Possible risk: My provider will be able to hold my data hostage?

Risk analysis by David Moorman: A reputable managed IT company will have processes to transfer your data should you ever want to switch.

Comment by me: But you do need a contract, a service level agreement, with a reputable service provider.

Possible risk: My data won’t be as secure in the cloud?

Risk analysis by David Moorman: What if someone breaks in and steals your equipment, or an employee tampers with it after hours? Cloud computing offers physical and virtual security measures that may not be available to the typical small-to-medium enterprise (temperature control, fire suppression systems, around-the-clock monitoring, controlled access, enterprise firewall security, and virus protection).

Comment by me: Yes, your data will be highly protected and available. However, business continuity is not the same as security. You have to trust your security to technicians in the cloud rather than your own employees.

Possible risk: Sharing physical resources in the cloud creates security issues?

Risk analysis by David Moorman: The cloud often uses virtualization to accomplish high rates of uptime and to minimize the amount of physical equipment needed. Each virtual server has its own licensing, its own firewall, its own anti-virus software and so on. Just like two ships passing in the night, virtual servers run in parallel but never touch.

Comment by me: That’s good news about virtualisation. But it doesn’t address access to remote data stores.

Possible risk: Sharing physical resources in the cloud increases my risk of downtime?

Risk analysis by David Moorman: In a virtualized environment, if any one physical server crashes, all the virtual instances are easily moved to another resource, so no downtime is experienced. Providers offering cloud computing should be able to give you the percentage of guaranteed uptime they offer. Additionally, if your business experiences a spike in credit card transactions, or an increase in the number of users accessing the network during certain times of the day or week, any available resources within the server farm are temporarily given to you; when you no longer need them, that capacity is reallocated somewhere else.

Comment by me: Scalability is a feature of virtualised and clustered servers rather than of cloud computing.
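Moorman's elasticity point (resources temporarily given to you when load spikes, reclaimed when it falls) can be sketched as a trivial reactive autoscaling rule. The thresholds and names here are illustrative assumptions, not any provider's actual policy.

```python
# Sketch of elasticity: capacity follows demand, so you hold (and pay for)
# only what the current load needs. Thresholds are illustrative.

def scale(replicas: int, load_per_replica: float,
          high: float = 0.8, low: float = 0.3) -> int:
    """Very simple reactive autoscaling rule."""
    if load_per_replica > high:
        return replicas + 1       # spike: borrow capacity from the shared pool
    if load_per_replica < low and replicas > 1:
        return replicas - 1       # quiet: hand capacity back to the pool
    return replicas

print(scale(4, 0.9))  # 5: demand spike, add a replica
print(scale(5, 0.2))  # 4: demand fell, release one
print(scale(4, 0.5))  # 4: steady state, no change
```

Real autoscalers add damping (cooldown periods, averaged metrics) to avoid thrashing, but the principle is the same.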


In short, the message is that cloud computing is a virtual data centre; it gives large-enterprise levels of accessibility, availability and recoverability to the small-to-medium enterprise.

But the SME should ask the questions the large enterprise has to ask.

“If you are dissatisfied with either your cloud provider or your remotely hosted application, how do you move to another? And what would happen if there was a major disruption to the internet?” Chris Britton

Large enterprise use

Large enterprises – the kind that enterprise architects work with - already have a data centre, thin-client applications, and virtualisation.

How viable is the Cloud for those enterprise applications and databases that are currently managed in-house or under a service level agreement with an IT service provider?


Is the Cloud scalable enough for their enterprise applications? Scalable means, first, that the Cloud should provide scalable network and server resources, so it does not matter how much traffic we (and others) throw at it, or how much data we want to store.

Second, it means the contractual terms should scale up from the individual and small business user to the larger enterprise.


Q) What does your SLA say? Does it cover data locations, data retrievability, security features and business continuity levels?

Q) Do you care where your data is stored? Not just the live data but any back-up? If you are running offshore financial accounts with large tax implications, then whether the data is held in or out of the UK, EU or USA has big legal disclosure caveats.

Q) How to integrate remote apps with each other and with local apps? Packages run as SaaS may not give you the flexibility you need.

Q) Can you get ALL your data back into a local database, and in a form you can use? If you ever do have a problem, you’ll have to set up somewhere else after the event.


Q) Is there clear separation between your data and that of other customers (especially your competitors)? A hundred SMEs might fit into a single blade chassis.

One approach is the use of trust zones, such as those from Catbird, Tripwire and VMware.

Q) Are the right security measures in place? A hardened perimeter strategy is not sufficient.

You need authentication of users at connection-level and encryption of data in motion.

Q) Are all site links encrypted? Are the same admin passwords used for every client? Amazon works to SOX compliance and states in its security policy that it ‘will continue efforts to obtain the strictest of industry certifications in order to verify its commitment to provide a secure, world-class cloud computing environment’.

From an article “Securing the Cloud” BCS Dec 2009
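The requirement above for connection-level authentication and encryption of data in motion can be sketched with Python's standard ssl module. The fragment builds a verifying TLS client context without opening a connection; the commented usage at the end is illustrative.

```python
import ssl

# A minimal sketch of "authentication at connection level and encryption of
# data in motion": a TLS client context that insists on verifying the server
# before any business data flows. No network connection is made here.

ctx = ssl.create_default_context()

print(ctx.check_hostname)                    # True: server identity is checked
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: unverified certs rejected

# In use, the context wraps an ordinary socket (hostname is illustrative):
#   with socket.create_connection((host, 443)) as raw:
#       with ctx.wrap_socket(raw, server_hostname=host) as tls:
#           tls.sendall(request_bytes)   # encrypted in motion
```

Note this only covers the transport; authenticating the user over that channel is a separate layer.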


Before using the Cloud, the enterprise will want the service provider to agree a Service Level Agreement that covers requirements relating to:

  • data location, data retrievability, data security, data protection, data integrity.
  • business continuity (availability, reliability, recoverability).
  • performance (throughput and response time).
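One way to keep such a checklist honest is to encode it as data and compare it mechanically against what a provider actually offers. The field names below are assumptions for illustration, not a standard vocabulary.

```python
# Illustrative sketch: encode the SLA checklist as data, so gaps in a
# provider's offer can be flagged mechanically.

REQUIRED_SLA_TERMS = {
    "data_location", "data_retrievability", "data_security",
    "data_protection", "data_integrity",
    "availability", "reliability", "recoverability",
    "throughput", "response_time",
}

def missing_terms(offered_sla: dict) -> set:
    """Return the required terms the provider's SLA does not cover."""
    return REQUIRED_SLA_TERMS - set(offered_sla)

# A hypothetical provider offer covering only three of the ten terms:
offer = {"availability": "99.9%", "response_time": "< 2s",
         "data_location": "EU"}
print(sorted(missing_terms(offer)))
```

A real procurement exercise would also weight the terms and record the evidence behind each claim, but even this trivial diff makes silent gaps visible.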

Cloud IT service providers aren’t going to promise to meet these requirements without financial reward.

So if Google offers to run an enterprise’s IT services, then it becomes much like any other traditional IT service provider.

The threat posed by the Cloud to the enterprise IT department is no more or less than the threat posed by more traditional outsourcing of IT operations.

The complexity of a security or downtime fault tree

Cloud computing may hide some complexity from the enterprise’s business managers; it hides the day-to-day operations of the virtual data centre (or centres).

But in general, distribution of computing services across networks is making systems more complex.

And it may be harder to track down the cause of a security or downtime problem.


Traditionally, security controls are applied at the perimeter of the enterprise – be that the reception desk of a building or the routers at the edge of the enterprise’s wired network.

Now, the computers within an enterprise’s buildings may connect to wireless networks.

Or they may be used outside of the enterprise’s buildings.


The section below has been edited from “Increasing Complexity and Geographic Distribution Driving TCO” Source: IT Business Edge: 11/19/2008.

A conversation with Vikas Aggarwal, CEO of Zyrion.

The conversation is about the growing complexity of how application services are delivered by components within the enterprise’s own data center.


Applications are increasingly distributed across servers.

What looks like a simple service to the user has much complexity underneath.

Consider for example an enterprise with an e-commerce arm.

At first, there is a server or two.

Then a server farm… virtual servers… redundant networks connected to one or two databases… an authentication server, a billing server and a credit card server.

If one goes down it might impact the whole e-commerce service.
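The fault-tree arithmetic implied by that chain is simple but sobering: a service that depends on every component in series is only as available as the product of their availabilities. The component names and figures below are illustrative.

```python
# Sketch of the fault-tree arithmetic above: when a service depends on
# every component in a chain, its availability is the product of theirs.

def serial(*avail):
    """Availability of a chain: up only if EVERY component is up."""
    total = 1.0
    for a in avail:
        total *= a
    return total

chain = {"web farm": 0.999, "database": 0.999, "auth server": 0.999,
         "billing server": 0.999, "card gateway": 0.999}

# Five components at 99.9% each give only ~99.5% for the whole service:
print(round(serial(*chain.values()), 4))  # 0.995
```

So every component added to the critical path eats into the end-to-end availability, which is why the chain described above is fragile even when each piece looks solid.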


The ubiquity of IP is another complicating factor.

Businesses are increasingly dependent on their underlying IP infrastructure.

The database is in New York, but the users are in California.

Cloud computing, virtualization and software as a service (SAAS) are making things even more complex.


This increasing complexity makes it harder to see all the components, and to see how they are tied together to deliver the service.

You need to have a way to get your arms around the complexity.

And when something slows down, you must immediately be able to say what the reason is.

You have to make sure you have a way to see how everything is working together as a service.


One of the areas that needs to change is the silo approach in which:

  • Enterprise network architects look at the network infrastructure.
  • Enterprise server architects look at the server infrastructure.
  • Enterprise applications architects look at the applications.


Each of the three departments spends a lot of money; they configure their systems separately and then point fingers at each other when something goes wrong.

The fact that the three different groups are not talking makes the TCO a lot higher.

What to do?

  • Applications, server and network architects must talk to each other.
  • They can look for technology that is easy to use and maintain on an ongoing basis.
  • They should ask: how quickly can you find a run-time problem and recover? This requires the use of service-monitoring tools.
  • They can get cleverer at modelling architecture complexity up front.


Zyrion focuses on service-oriented monitoring of the IT infrastructure.

Because you can model your service, you can recover from outages faster.

That improves TCO.
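Zyrion's "model your service" idea can be sketched as a dependency map from user-facing services to components, which answers the root-cause question in reverse: which services does a failed component take down? All names below are illustrative.

```python
# A minimal sketch of service-oriented monitoring: map each user-facing
# service to the components it depends on, then query the map when a
# component fails. Service and component names are illustrative.

DEPENDS_ON = {
    "e-commerce": ["web farm", "database", "billing server"],
    "reporting":  ["database", "report server"],
}

def impacted_services(failed_component: str) -> list:
    """Which user-facing services does this component failure affect?"""
    return sorted(s for s, deps in DEPENDS_ON.items()
                  if failed_component in deps)

print(impacted_services("database"))       # ['e-commerce', 'reporting']
print(impacted_services("report server"))  # ['reporting']
```

This is the piece the silo approach lacks: the network, server and application views only answer the impact question when someone has joined them into one model.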


What happens when there is a security or downtime problem? If it is increasingly hard to know and to model what is going on within an enterprise’s data centre, then how much harder will it be when the enterprise’s employees are using applications provided from multiple data centres on the other side of a cloud?

Interim conclusion

One threat to the enterprise architect is the commoditisation of business applications in general purpose packages.

The more that Cloud Computing can deliver packages as SaaS, the easier it is for business managers to buy more packages, without due analysis.


The enterprise architect may also fear that standardisation of business applications will force all businesses to work the same way.

Without bespoke business requirements, enterprise architects are left with nothing to do but buy and install generic infrastructure.


However, if architects on training courses are to be believed, many businesses have set out to consolidate and standardise their applications by buying a big COTS package, only to be disappointed.

Leaving aside whether the package really does meet bespoke requirements, a big issue is that the package brings into the business a new database that stores business data already stored in other databases.

The new package has created data integrity issues, a systems integration challenge, and a dependency on specialist resources.


The enterprise architect is still needed to address the age-old challenges.

How to improve data integrity between disparate systems? How to automate workflows that join disparate systems together? Using SaaS packages, storing business data in the Cloud, running applications in the Cloud, creates work for enterprise architects.

Further discussion about why banks don’t use cloud computing

Cloud computing is a contractual agreement, not a technology.


"Cloud" is not simply a contract.

Cloud is a type of large-scale computing that has six distinct attributes that separate it from traditional computing:


·         Virtualization at every level - OS, Network, Storage

·         Multi-tenancy

·         Self-service provisioning / deprovisioning

·         API driven system configuration

·         High level of homogeneity

·         Abstraction of the hardware resources


While these individual traits have existed in different ways in other systems, when these six come together you have "cloud" technology.
The main issues with audit and compliance are around virtualization and multi-tenancy, even in a private cloud.

If your financial application shares a hypervisor with a non-financial application, you run several different risks of compromise.

Your system is operating within the same host OS as the other application.

Transactions are flowing through the same network card that the other application has access to.

Your data is on the same disks that the other application has access to. Etc.
The Open Source community has been very good about identifying, documenting, and closing vulnerabilities in Open Source cloud technologies.

Proprietary cloud technologies have been much more close-mouthed about which vulnerabilities exist and what they have done about them.


Wouldn’t you expect to see 4 or 5 of those 6 features in a private, non-cloud, data centre?

If you see cloud computing as a variant of the SOA story, then worrying about how the service provider works internally is to miss the point entirely.
I agree one has to have some confidence that the service provider can deliver what they promise in any contract they offer, so there is an obligation to do due diligence.
But then, surely that due diligence effort is best advised to focus on known performance metrics, track record, customer base, solvency etc?

Rather than examining the 6 techie features you mention?

As far as banking is concerned, we have many examples of a bank’s internal IT department failing to meet SLAs: e.g.

·         being unavailable to all customers (for several days in a recent case).

·         being insecure (my own account has been compromised, leaving me unable to do business for weeks).


Cloud service providers cannot provide specific terms to specific clients if they want to maintain a certain level of service, keep their platform stable, and their costs under control.

I work with financial institutions on a daily basis and see them struggle with the notion of standard contracts that the cloud service provider won't negotiate.

Take Security for example - Security means three things: Confidentiality, Integrity and Availability.

The cloud service provider may be able to guarantee Availability at a certain acceptable level, but there is no way it can guarantee Confidentiality and Integrity, for several reasons:


the client (bank) runs its systems in a virtual environment, created for it by underlying, non-virtual software that, if hacked, compromises the entire virtual environment used by the client.

Any controls or security technologies used by the client inside the compromised virtual environment are irrelevant and ineffective (unless it is using a data-centric security technique which is a topic for another discussion).


Banks are subject to dozens of regulatory frameworks, standards and guidelines.

For example: They need to use hardware based cryptography combined with elaborate key management procedures to be able to prove demonstrable control over cryptographic key material;

They must implement various mechanisms to guarantee the integrity of their data and implement sufficient mitigating controls to compensate for any suspected vulnerability; and so on.

No cloud service provider will be willing to subject itself to these requirements, go through endless audits and legally attest to their fulfilment given the potential financial and reputational impact of a compromise when a bank is involved.

Economies of scale
A cloud service provider leverages economies of scale to reduce its per-item cost.

However, banks have unique requirements such as

·         "residency" (where data can or cannot be stored),

·         "isolation" (physical separation of systems and data between entities),

·         "commissioning" and "decommissioning" of equipment (having to witness and document the entire supply chain of equipment from the manufacturer to the rack and back, sometimes destroying it with a 20lb sledgehammer...).

These requirements effectively prevent the cloud service provider from optimizing its resource consumption, turning the supposedly cloud service into a dedicated, managed service.

To sum it up: Processing financial transactions in the cloud can be done, but it requires a different consumption approach than simply offloading workloads into the cloud, in such a way that the bank can take full responsibility for the safety and accuracy of its data without having to rely on assurances from the CSP.


Let us assume these are matters of fact for a high street bank today

·         their legacy mainframe TP beats all comers for performance, CIA etc.

·         their transactions cannot be processed over a network to the same standards.


How does that justify making a technology-bound definition of Cloud Computing?

Suppose I am a start-up bank,

·         happy to be limited to (say) 1,000 customers in the first 6 months

·         my service provider's data centre is 50 miles away in the same country

·         all that passes over the internet is encrypted to the highest possible standard

·         my data will be stored separately from that of other customers

·         and the service provider never mentions the term "cloud"?


What you just described is the standard outsourcing data-center type of service that most banks already have today:

·         A detailed and specific contract with a provider

·         Dedicated hardware and floor space

·         At least two remote sites

·         Physical and logical separation from other clients

·         A private cloud environment even (which is called "the mainframe"...)

·         and the service provider never mentions the term "cloud"


Btw, there are two other things that enable this outsourcing model to work:

·         Some cryptography is done using dedicated FIPS 140-2 Level 3 HSM devices that often require the bank staff to physically visit the data centers and perform key management ceremonies (and other security related activities), protecting the bank from the provider itself.

·         The provider must participate in the bank's quarterly compliance programs and answer to the bank's auditors.

·         It must dedicate resources to maintain constant relationship with the specific client and provide evidence on demand.



Agreed. Thank you for that.


I believe that "moving to the cloud" means for the bank the ability to deviate from the standard outsourcing model described above (otherwise the question would not have been raised).

Meaning, the ability to turn a large capital investment into a variable cost by offloading workloads to the public cloud, where resources can be provisioned dynamically according to need, while benefiting from the most up-to-date SaaS functionality without having to build legacy systems. That brings us back to a technology-oriented definition of the cloud.

Surely the scalability is first and foremost in the contract?

So "Offloading workloads to where the cost is proportionate to the workload." Yes.
But "Offloading workloads to where resources can be provisioned dynamically".

Not necessarily. That is a matter for the supplier to determine.

I fear that using the word "cloud" tends to obscure what would otherwise be clear and rational discussions of requirements and architecture.

And if regulators are using it in regulations, that does not fill me with confidence about the regulators.



Footnote: Creative Commons Attribution-No Derivative Works Licence 2.0

Attribution: You may copy, distribute and display this copyrighted work only if you clearly credit “Avancier Limited:” before the start and include this footnote at the end.


No Derivative Works: You may copy, distribute, display only complete and verbatim copies of this page, not derivative works based upon it.

For more information about the licence, see