On virtualisation-unfriendly apps



Forgive me.

I edited this from an internet discussion and lost the reference to the source.


Yogesh said

We have been hosting a suite of apps on a private cloud infrastructure.

The apps are bespoke line-of-business apps developed in Win32, .NET, Java, etc.

These apps work well when installed on a physical machine.

But some do not work as expected and fail when deployed on a virtual machine (VM).

We do not have access to the source code of the apps.
We wonder: What about these apps could be virtualization-unfriendly?

I read somewhere that security products (antivirus, malware detectors, etc.) also encounter issues in virtualized environments.


David said
I have not met an app that will not run the same way

         on an OS installed on physical hardware, and

         on a similar virtual configuration of the same OS on a full-virtualization hypervisor such as VMware or Hyper-V.


My list of virtualization-unfriendly apps is more about what will break licensing, or require a configuration that negates the benefits of virtualization:

         apps dependent on specific hardware (hardware dongles, PCI cards);

         apps with licensing terms that make virtualization expensive or hard to manage (Oracle); or

         apps that try to do odd things with networks (e.g. problems with Microsoft's load balancing and clustering).

Apps with the above issues can be virtualized with modern hypervisors, but it usually becomes more expensive or painful than it is worth.


I suspect under-sizing in at least part of the infrastructure supporting the virtual servers.

Or a failure to consider that the underlying resources are shared, rather than the hypervisor, or virtualization in general, being the cause of the problems.

For example, problems with security products have been due to scheduling them to run on all of the servers at the same time of day.

For standalone servers scanning their own local hard drives, the timing doesn't matter, since each server's I/O is independent.

But when you move to shared storage (a SAN, or virtual servers on a shared storage system), stagger the file-system scans so they do not hit all the servers at once and cause apps to time out.

Solving this is often easy: change the scheduling so batch jobs don't overlap.
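As a rough sketch, staggering can be as simple as deriving a per-VM start offset from the hostname, so that VMs cloned from one template (with identical scan schedules baked in) don't all hit the shared storage at the same moment. The scanner command shown is hypothetical; substitute your own.

```shell
#!/bin/sh
# Sketch: compute a 0-59 minute offset from the hostname so each VM
# starts its scan at a different time. Assumes a Linux guest; the
# scan command below is a hypothetical placeholder.
OFFSET_MIN=$(hostname | cksum | awk '{print $1 % 60}')
echo "scan offset for this VM: ${OFFSET_MIN} minutes"
# sleep $((OFFSET_MIN * 60))          # uncomment to apply the delay
# /usr/local/bin/av-scan --full /     # hypothetical scanner invocation
```

The same idea works for any batch job that is scheduled identically across cloned servers.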

Sometimes it is more expensive and requires adding hardware, often more or higher-performance disks, since storage is usually the first bottleneck to show up.

Occasionally the hardware the virtual servers run on runs out of memory, or its CPU is maxed out, but these are both easier to spot and less likely on modern servers.



The only apps I have encountered problems with when moving to a virtual environment were apps with dongles or poorly written bespoke apps.

One particular app only behaved when given a couple of solid-state drives.


My experiences in this area had less to do with virtualisation per se, and more with the distribution of app components across geographically dispersed environments (Azure and on-site).

Latency was the initial major factor.

We resolved this by using a VPN and hardware accelerators to connect the two environments; the issue manifested as timeouts.
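Before tuning app timeouts in a split environment like this, it helps to measure the actual round-trip latency between the two sites. A minimal sketch, assuming a Linux host; the remote address and timeout value are placeholders:

```shell
#!/bin/sh
# Sketch: measure the average round-trip time to the remote environment
# and compare it with an app-level timeout. REMOTE is a placeholder.
REMOTE=10.0.0.1        # placeholder: on-site endpoint reachable from Azure
TIMEOUT_MS=500         # example app timeout in milliseconds

# ping's summary line looks like: rtt min/avg/max/mdev = 0.5/1.2/2.0/0.3 ms
AVG_MS=$(ping -c 5 "$REMOTE" | tail -1 | awk -F'/' '{print $5}')
echo "average RTT: ${AVG_MS} ms (app timeout: ${TIMEOUT_MS} ms)"
```

If the average RTT is a significant fraction of the app timeout, chatty protocols that make many round trips per operation will time out long before a single request would.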

The second was due to insufficient network connectivity in the virtual network: too many virtual devices sharing the same physical network connection.

We increased the number of physical network connections, allowing traffic to spread across them.
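On a Linux host, one common way to spread traffic over extra physical connections is to bond the NICs into a single logical link. A sketch using NetworkManager's `nmcli`, assuming 802.3ad (LACP) support on the switch; the connection and device names are placeholders:

```shell
# Bond two physical NICs so guest traffic spreads across both links.
# Requires root and a switch configured for LACP; eth0/eth1 are placeholders.
nmcli connection add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad"
nmcli connection add type ethernet con-name bond0-port1 ifname eth0 master bond0
nmcli connection add type ethernet con-name bond0-port2 ifname eth1 master bond0
```

Full virtualization hypervisors offer equivalent features (e.g. NIC teaming on a VMware vSwitch), so the same fix can be applied at the hypervisor layer instead.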