
5 Cloud Performance Measuring Strategies

 

Migrating systems to the public cloud requires many steps. One of them should be performance monitoring. Here are five ways IT pros can make sure their company is getting a return on its cloud investment. 

 

Shifting to public cloud computing from traditional sourcing, including direct server and software license purchases, has been picking up momentum. Gartner expects the public cloud market to total over $208 billion by the end of this year, and the research firm predicts IT spending on cloud services will grow to $216 billion in 2020.

But choosing to migrate to the public cloud is only the first step. Once the move has been made, all the stakeholders, including the CEO, CIO, and IT staff, will want to ensure the company is getting its money’s worth in terms of performance.

To determine that, IT should document baselines of how its workloads run. What are the response times, business logic calculation times, and transaction processing times? Keep a record of everything the staff can measure at the outset, and periodically check that the same performance is still being delivered later on.
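As a rough illustration of what "getting something on the record" can look like, here is a minimal baseline-recording sketch in Python. The endpoint URL, sample count, and output file name are illustrative assumptions, not details from the article; the point is simply to capture timings in a form you can compare against later.

```python
# Minimal baseline-recording sketch. The endpoint, sample count, and file name
# are hypothetical placeholders.
import json
import time
from statistics import mean
from urllib.request import urlopen

ENDPOINT = "https://example.internal/api/health"  # hypothetical workload endpoint
SAMPLES = 20

def time_request(url):
    """Return the wall-clock seconds taken to fetch the URL once."""
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

timings = [time_request(ENDPOINT) for _ in range(SAMPLES)]

baseline = {
    "endpoint": ENDPOINT,
    "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "samples": SAMPLES,
    "avg_response_s": mean(timings),
    "max_response_s": max(timings),
}

# Keep the baseline on record; compare against it if performance is questioned later.
with open("baseline.json", "w") as f:
    json.dump(baseline, f, indent=2)
```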

The issue was paramount for Netflix as it transitioned video distribution from its own data centers into the cloud. The company noticed that, in some cases, a virtual server it had commissioned kept registering slippage in performance, and it concluded that other tenants were already on the physical server when its workload landed there.

Those tenants were the “noisy neighbor” type, with heavy incoming and outgoing network traffic and frequent data exchanges with storage. They never stalled Netflix’s workload outright; they only slowed it down periodically as the server paused to let a co-tenant’s cache clear and queues unload.

In such cases, it was better to shut the workload down and shift it to a new server. Netflix also became skilled at sizing its workloads so that each required most of a server, leaving no room for noisy neighbors to plant themselves in the same neighborhood.

If at some point response times seem to be slowing, you may have to engage in a conversation with your cloud provider. With modeling and baseline processing times captured at the outset, you at least have some talking points when the provider insists performance is the same as it’s always been.

To do that, you’re going to need to establish those baselines. Consider the experience of First American Financial, which implemented Apptio’s IT process modeling and benchmarking capabilities. Going with a supplier such as Apptio is one option; inventing your own documentation method is another. The important thing is to get something on the record about performance.

Beyond setting up the defense, read on for five strategies for measuring public cloud performance.

 

DIY View

The simplest strategy for application performance measurement is for the IT staff to instrument an application with the metrics that matter to it, package it up as a set of virtual machine files, and send it into the cloud of its choice.

Your instruments, however you configure them, will need to periodically send information out to a logging or storage system, where you can retrieve it later or have it forwarded directly to an analytics application for continuous analysis. This is a do-it-yourself, engineering-intensive approach that depends entirely on the skills of the engineers involved.
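A bare-bones version of that DIY instrumentation might look like the Python sketch below. The collector URL and metric names are hypothetical; in practice you would point this at whatever logging or storage system your team operates.

```python
# DIY instrumentation sketch: measure a piece of business logic and ship the
# timing to a logging/storage system. The collector URL is an assumed placeholder.
import json
import time
import urllib.request

COLLECTOR_URL = "https://metrics.example.internal/ingest"  # assumed endpoint

def report_metric(name, value, unit="ms"):
    """Send one measurement to the logging/storage system for later analysis."""
    payload = json.dumps({
        "metric": name,
        "value": value,
        "unit": unit,
        "timestamp": time.time(),
    }).encode("utf-8")
    req = urllib.request.Request(
        COLLECTOR_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def timed(name, func, *args, **kwargs):
    """Run a function and report how long it took, in milliseconds."""
    start = time.perf_counter()
    result = func(*args, **kwargs)
    report_metric(name, (time.perf_counter() - start) * 1000.0)
    return result

# Example usage: wrap a transaction-processing call so each invocation is measured.
# result = timed("order_processing_ms", process_order, order)
```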

View From The Inside

Make full use of your provider’s monitoring and reporting mechanisms to deduce all you can about performance. But that won’t be much. In Amazon Web Services, for example, the CloudWatch service reports back on basics such as whether your virtual server is up or down.

However, you can set alarms on a defined metric and have Amazon Simple Notification Service let you know when the metric crosses a threshold. You can also extend the standard metrics by publishing your own custom metrics, for a small additional fee. Unless you do, your application may appear to be running fine according to CloudWatch when in fact it’s returning a stream of error messages to its users.
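For example, a custom application-level metric plus an alarm that notifies through SNS might be set up roughly as in the sketch below, using Python and boto3. The namespace, metric name, threshold, and topic ARN are placeholders, not values from the article.

```python
# Sketch: publish a custom CloudWatch metric and alarm on it via SNS (boto3).
# Namespace, metric name, threshold, and topic ARN are assumed placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish an application-level error count that CloudWatch would not see on its own.
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "UserFacingErrors",
        "Value": 12,          # e.g., errors counted over the last minute
        "Unit": "Count",
    }],
)

# Alarm when the custom metric crosses a threshold; the SNS topic fans out the alert.
cloudwatch.put_metric_alarm(
    AlarmName="myapp-user-facing-errors",
    Namespace="MyApp",
    MetricName="UserFacingErrors",
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # assumed topic ARN
)
```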

Outside View

CloudWatch and provider services like it sit inside the cloud. Now consider a performance measurement approach that sits outside and repeatedly asks one thing of the target application: answer my query. Synthetic transaction producers, sometimes known as “headless users,” run on PCs around the world. If you commission a service to do so, they will periodically fire thousands of synthetic queries at your application to test it.

The service records how long it normally takes to get a short response, or a fuller one, at given locations around the world, arriving at an average response time. It will sound an alert if response times drift over a threshold the customer doesn’t want to cross. Dynatrace, formerly part of Compuware, and other suppliers of synthetic queries can test your application from the outside.
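A toy version of what such a service does is sketched below in Python: fire a query on a schedule, keep a rolling average of response times, and flag drift past a threshold. The target URL, interval, window size, and threshold are illustrative assumptions.

```python
# Toy synthetic-probe sketch: query a target on a schedule and alert when the
# rolling average response time drifts past a threshold. All values are assumed.
import time
from statistics import mean
from urllib.request import urlopen

TARGET = "https://www.example.com/"   # the application under test (assumed)
THRESHOLD_S = 1.5                     # alert if the rolling average exceeds this
INTERVAL_S = 60                       # seconds between probes
WINDOW = 10                           # number of recent probes in the rolling average

recent = []
while True:
    start = time.perf_counter()
    with urlopen(TARGET, timeout=10) as resp:
        resp.read()
    recent.append(time.perf_counter() - start)
    recent = recent[-WINDOW:]

    avg = mean(recent)
    if avg > THRESHOLD_S:
        # A real service would page someone or open a ticket here.
        print(f"ALERT: rolling average response time {avg:.2f}s exceeds {THRESHOLD_S}s")
    time.sleep(INTERVAL_S)
```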

Hired View

New Relic, AppDynamics, Riverbed SteelCentral, AppNeta, ManageEngine, BMC, HPE, and other makers of performance measurement systems can offer a great deal of information. For a set of applications running on Microsoft Azure, New Relic’s SaaS system, for example, can give a total picture of CPU, memory, storage, and I/O rates from its management interface, then let a customer drill down into individual server details.

AppDynamics can run on-premises or in the cloud and monitor what’s going on with the workloads there, including assessing what’s needed to keep trouble from developing or getting worse. Several vendors have added analytics to troubleshoot specific problems.

Go Combo For Clearer View

Try a combination of the above. Start out with your own metrics, add whatever CloudWatch or another provider service can supply, and layer monitoring and analytics from one or more third parties on top. Don’t forget synthetic queries through an outside service such as Dynatrace. Combining more than one of these approaches will get you a lot closer to effective application performance measurement and management.