How to accurately identify impact of system issues on end-user response time

From Compuware APM Blog as of 4 June 2013

Triggered by current load projections for our community portal, our Apps Team was tasked with running a stress test on our production system to verify whether we can handle 10 times the load we currently experience on our existing infrastructure. In order to have the least impact in the event the site crumbled under the load, we decided to run the first test on a Sunday afternoon. Before we ran the test we gave our Operations Team a heads-up: they could expect significant load during a two-hour window, with the potential to affect other applications that run on the same environment.
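
To give a feel for what such a test does, here is a minimal load-generation sketch that ramps concurrent "virtual users" against a single URL. It is not the tooling or test plan we actually used; the URL, user counts, request counts and think time below are invented purely for illustration.

# Minimal load-generation sketch: ramp concurrent "virtual users" against one URL.
# Hypothetical values throughout; not the actual test plan or tooling we used.
import threading
import time
import urllib.request

TARGET_URL = "https://community.example.com/"   # placeholder, not our real portal
STEP_USERS = [10, 50, 100, 200]                 # ramp steps (current load -> ~10x)
REQUESTS_PER_USER = 20
THINK_TIME_SECONDS = 1.0

def virtual_user(results):
    """One simulated user: request the page repeatedly and record response times."""
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
                response.read()
            results.append(time.time() - start)
        except Exception:
            results.append(None)                # failed request
        time.sleep(THINK_TIME_SECONDS)

for users in STEP_USERS:
    results = []
    threads = [threading.Thread(target=virtual_user, args=(results,)) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ok = [r for r in results if r is not None]
    if ok:
        print(f"{users} users: {len(ok)}/{len(results)} ok, avg response {sum(ok) / len(ok):.2f}s")
    else:
        print(f"{users} users: all requests failed")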

During the test, with both the Ops and Application Teams watching the live performance data, we all saw end user response time go through the roof and the underlying infrastructure run out of resources once we hit a certain load level. What was very interesting in this exercise is that both teams looked at the same data but examined the results from different angles. Both relied on the recently announced Compuware PureStack Technology, the first solution that – in combination with dynaTrace PurePath – exposes how IT infrastructure impacts the performance of critical business applications in heavy production environments.

Bridging the Gap between Ops and Apps Data by adding Context: One picture that shows the Hotspots of the “Horizontal” Transaction as well as the “Vertical” Stack.

The root cause of the poor performance in our scenario was CPU exhaustion on the main server machine hosting both the Web and App Server, which prevented us from meeting our load goal. This turned out to be both an IT provisioning and an application problem. Let me explain the steps these teams took and how they came up with their list of action items to improve system performance before the second scheduled test.

Step 1: Monitor and Identify Infrastructure Health Issues

Operations Teams like having the ability to look at their list of servers and quickly see that all critical indicators (CPU, Memory, Network, Disk, etc.) are green. But when they looked at the server landscape as our load test reached its peak, their dashboard showed that two of their machines were having problems:

The core server for our community portal shows problems with the CPU and is impacting one of the applications that run on it.
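
The kind of green/red indicator check such a dashboard provides can be approximated with a few lines of Python. This is only an illustrative sketch using the third-party psutil package with made-up thresholds; it is not how PureStack itself works.

# Illustrative host health check with green/red thresholds (example values only).
# Requires: pip install psutil
import psutil

THRESHOLDS = {            # example "red" thresholds, not the ones our dashboard uses
    "cpu_percent": 90.0,
    "memory_percent": 85.0,
    "disk_percent": 90.0,
}

def host_health():
    """Sample the basic indicators and flag any that exceed their threshold."""
    samples = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    status = {name: "RED" if value >= THRESHOLDS[name] else "GREEN"
              for name, value in samples.items()}
    return samples, status

samples, status = host_health()
for name in samples:
    print(f"{name:>15}: {samples[name]:6.1f}  -> {status[name]}")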

Step 2: What is the actual impact on the hosted applications?

Clicking on the Impacted Applications Tab shows us the applications that run on the affected machine and which ones are currently impacted:

The increased load not only impacts the Community Portal but also our Support Portal

Already the load test has taught us something: As we expect higher load on the community in the future, we might need to move the support portal to a different machine to avoid any impact.
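
Conceptually, determining impacted applications is just a matter of relating each application to the host it runs on and flagging every application that shares a machine with an unhealthy one. A toy sketch with invented host and application names:

# Toy illustration of why co-hosted applications get dragged down together.
# Host and application names are made up for the example.
app_to_host = {
    "Community Portal": "srv-web-01",
    "Support Portal": "srv-web-01",     # shares the machine with the Community Portal
    "Licensing Service": "srv-app-02",
}

unhealthy_hosts = {"srv-web-01"}        # e.g. flagged by a CPU health check like the one above

impacted = [app for app, host in app_to_host.items() if host in unhealthy_hosts]
print("Applications potentially impacted:", ", ".join(impacted))
# -> Applications potentially impacted: Community Portal, Support Portal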

When examined independently, operations-oriented monitoring data is not that telling. But when it is placed in a context that relates it to the data the Applications team cares about (end user response time, user experience, …), both teams gain more insight. This is a good start, but there is still more to learn.

Step 3: What is the actual impact on the critical transactions?

Clicking on the Community Portal application link shows us the transactions and pages that are actually impacted by the infrastructure issue, but there still are two critical unanswered questions:

  • Are these the transactions that are critical to our successful operation?
  • How badly are these transactions and individual users impacted by the performance issues?

The automatic baseline tells us that response times for our main community pages show a significant performance impact. This also includes our homepage, which is the most valuable page for us.
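
To make the idea of an automatic baseline concrete, here is a much-simplified sketch that flags a page when its current response time deviates several standard deviations from its recent history. The real product's baselining is more sophisticated, and all of the numbers below are invented.

# Simplified response-time baselining: flag a page whose current response time
# is far outside its recent history. All figures are invented for illustration.
import statistics

history = {                      # seconds, collected under normal load
    "/home":      [0.80, 0.90, 0.85, 0.92, 0.88, 0.95, 0.87],
    "/community": [1.10, 1.20, 1.15, 1.05, 1.18, 1.22, 1.10],
}
current = {"/home": 3.40, "/community": 1.25}   # observed during the load test

def violates_baseline(samples, value, sigmas=3.0):
    """True if value is more than `sigmas` standard deviations above the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return value > mean + sigmas * stdev

for page, value in current.items():
    flag = "IMPACTED" if violates_baseline(history[page], value) else "ok"
    print(f"{page:12} {value:5.2f}s  {flag}")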

Step 4: Visualizing the impact of the infrastructure issue on the transaction

The transaction-flow diagram is a great way to get both the Ops and App Teams on the same page and view data in its full context, showing the application tiers involved, the physical and virtual machines they are running on, and where the hotspots are.

The Ops and Apps Teams have one picture that tells them where the Hotspots are, both in the “Horizontal” Transaction and in the “Vertical” Stack.

We knew that our pages are very heavy on content (Images, JavaScript and CSS), with up to 80% of the transaction time spent in the browser. Seeing that this performance hotspot is now down to 50% of the overall page load time, we immediately know that more of the transaction time has shifted to the new hotspot: the server side. The good news is that there is no problem with the database (it contributes only 1% of response time); the entire performance hotspot shift seems to be related to the Web and App Servers, both of which run on the same machine – the one with the CPU health issues.
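
A quick back-of-envelope calculation shows why the drop from 80% to 50% browser share matters: if the client-side time stays roughly constant, the server-side portion must have grown several times over. The figures below are illustrative, not measured values.

# Back-of-envelope check on the hotspot shift: if client-side time stays roughly
# constant but its share of the page load drops from 80% to 50%, the server side
# must have grown substantially. All numbers are illustrative.
browser_seconds = 4.0            # assumed constant client-side time

def server_seconds(browser_share):
    """Given the browser's share of total load time, return the server-side seconds."""
    total = browser_seconds / browser_share
    return total - browser_seconds

before = server_seconds(0.80)    # browser was ~80% of the page load
after = server_seconds(0.50)     # browser is now ~50% of the page load
print(f"server-side time before: {before:.1f}s, after: {after:.1f}s "
      f"({after / before:.0f}x increase)")
# -> server-side time before: 1.0s, after: 4.0s (4x increase)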

Step 5: Pinpointing the host health issue on the problematic machine

Drilling to the Host Health Dashboard shows what is wrong on that particular server:

The Ops Team immediately sees that the CPU consumption is mainly coming from one Java App Server. There are also some unusual spikes in Network, Disk and Page Faults that are all correlated in time.
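
Finding the top CPU consumers on a host is something you can sketch in a few lines of Python with psutil; the dashboard does this (and the time correlation of the other metrics) automatically, but the idea is the same. Process names and numbers will of course differ per machine.

# Sketch: find which processes account for most of the CPU on a host.
# Requires psutil; this is not how the product collects its data.
import psutil

# Prime the per-process CPU counters, then let them accumulate for a short interval.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

psutil.cpu_percent(interval=2)          # sample window of 2 seconds

usage = []
for proc in psutil.process_iter(["name"]):
    try:
        usage.append((proc.cpu_percent(None), proc.info["name"]))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# Print the five heaviest CPU consumers.
for cpu, name in sorted(usage, reverse=True)[:5]:
    print(f"{cpu:6.1f}%  {name}")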

Step 6: Process Health dashboards show slow app server response

We see that the two main processes on that machine are IIS (Web Server) and Tomcat (Application Server). A closer look shows how they are doing over time:

We are not running out of worker threads. Transfer Rate is rather flat. This tells us that the Web Server is waiting on the response from the Application Server.

It appears that the Application Server is maxing out on CPU. The incoming requests from the load testing tool queue up as the server can’t process them in time. The number of processed transactions actually drops.
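
The queuing behaviour is simple arithmetic: once the offered load exceeds what the CPU-bound server can complete per second, throughput flattens and the backlog grows. A toy illustration with invented figures:

# Toy illustration of why requests queue up once the app server's CPU is saturated:
# the server can only complete `capacity_rps` requests per second, so any excess
# arrival rate accumulates as a growing backlog. All figures are invented.
capacity_rps = 100.0                 # max requests/second the CPU-bound server can process
arrival_rates = [50, 90, 120, 200]   # offered load at successive test steps (req/s)

for offered in arrival_rates:
    completed = min(offered, capacity_rps)              # throughput flattens at capacity
    backlog_growth = max(0.0, offered - capacity_rps)   # requests queued per second
    print(f"offered {offered:4} req/s -> completed {completed:5.0f} req/s, "
          f"queue grows by {backlog_growth:4.0f} req/s")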

Step 7: Pinpointing heavy CPU usage

Our Apps Team is now interested in figuring out what consumes all this CPU and whether this is something we can fix in the application code or whether we need more CPU power:

The Hotspot shows two layers of the Application that are heavy on CPU. Let's drill down further.

Our sometimes rather complex pages, with lots of Confluence macros, cause the majority of the CPU usage.

Exceptions that capture stack trace information for logging are caused by missing resources and problems with authentication.
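
The expensive part here is not the logging itself but capturing and formatting a stack trace for every missing resource or failed authentication. The hotspot was in our Java stack, but the principle is easy to demonstrate in any language; this Python micro-benchmark is only an illustration:

# Illustration only: raising an exception and formatting its stack trace per event
# costs far more than logging a plain message.
import timeit
import traceback

def log_with_stack_trace():
    try:
        raise FileNotFoundError("missing resource")   # stand-in for the missing resources
    except FileNotFoundError:
        traceback.format_exc()                        # capture + format the stack trace

def log_plain_message():
    "resource missing: %s" % "missing resource"       # cheap string formatting only

n = 50_000
print("with stack trace:", timeit.timeit(log_with_stack_trace, number=n))
print("plain message:   ", timeit.timeit(log_plain_message, number=n))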

Ops and Apps teams can now easily prioritize both infrastructure and application fixes

So as mentioned, ‘context is everything’. But it’s not enough simply to have data – context relies on the ability to intelligently correlate all of the data into a coherent story. When the “horizontal” transactional data for end-user response-time analysis is connected to the “vertical” infrastructure stack information, it becomes easy to get both teams reading from the same page and to prioritize the fixes for the issues that have the greatest negative impact on the business.

This exercise allowed us to identify several action items:

  • Deploy our critical applications on different machines when the applications impact each other negatively
  • Optimize the way our pages are built to reduce CPU usage
  • Increase CPU power on these virtualized machines to handle more load

New Relic and SOASTA offer comprehensive solution for web application testing and performance management

According to a recent SOASTA press release…

SOASTA, the leader in cloud and mobile testing, and New Relic, the SaaS-based cloud application performance management provider, today announced their partnership and the integration of New Relic’s application performance management solution on SOASTA’s CloudTest platform.

The integrated solution gives developers complete visibility into the performance of their web applications throughout the entire application development lifecycle. Now developers can immediately begin building performance tests to assure the highest test coverage and tune their apps based on deep performance diagnostics to ensure the highest level of application performance and availability, all at no charge.

Hopefully we’ll get an update shortly…


SOASTA’s TouchTest is certainly a ‘game changer’

Screengrab of TouchTest in action

SOASTA’s launch today of TouchTest as part of the CloudTest family is certainly a great step forward for all those working with mobile applications and needing to performance test them.

The launch webinar today was certainly a sneak peek at what the new tool set can offer both the mobile developer community and professional QA testers alike.

The quick grab of the webinar slides (download here) will give many a good outline of the features SOASTA’s new solution can support. Initially it ships with support for iOS-based native applications, with access to functional, automation and performance testing capabilities from the outset. It looks like Android support won’t be far behind, and the roadmap looks to support web applications as well as hybrid ones, which many developers have now deployed.

What is certainly to SOASTA’s credit is the accessibility of this new addition to their product suite: a real commitment to all those involved in testing their mobile applications, be it the one-man band or the large enterprise with deep pockets.

The point being, TouchTest will have a ‘free’ entry point to enable single users to at least explore the possibilities of the tool on a single device to start with.

SOASTA announced the beta program and was open about pricing from the outset. A link to sign up for the beta is here.

Hats off to SOASTA for what looks to be an exciting product in a space where, until now, such a tool set has either been too expensive to access or, frankly, has fallen short of what both developers and clients demand.


eBook Web Load Testing For Dummies

This is certainly worth checking out if you are new to the web load testing arena.

Simply sign up via Compuware’s website and receive a copy of the Compuware-sponsored ‘For Dummies’ eBook.

Co-written by industry veteran Scott Barber and Compuware in-house product manager Colin Mason.


The book blurb:

Web applications that perform well can strengthen a company’s brand and reputation and create customer loyalty. Web applications that perform poorly put all of that at risk. Web load testing is a critical component of any risk management plan for web applications.

You will learn:

  • The ins and outs of web load testing — know what to expect from web load testing
  • The importance of outside-in load testing — determine what the performance feels like to an actual user
  • Why and when to test — set goals, gather your team, and implement
  • How to manage ongoing analysis — monitor how your testing is going
  • How diagnostic tools combined with web load testing dramatically reduce time to problem resolution

Gomez / Compuware sponsored Forrester webinar – “The Testing Tools Landscape”

Just spotted details of an upcoming Gomez / Compuware sponsored Forrester webinar that might be worth a look: “The Testing Tools Landscape”, on 17th February 2011 at 1:00 PM EST.

Here’s the blurb…

Gone are the days when application development and delivery teams could cavalierly ask the business to pick two: cost, time, or quality.

The business wants and needs all three. Quality must move beyond the purview of just the testing organization and must become an integrated part of the entire software development life cycle (SDLC) to reduce schedule-killing rework, improve user satisfaction, and reduce the risks of untested nonfunctional requirements such as security and performance.

These new requirements have motivated vendors to provide tools that support every role in the organization, considerably broadening the testing tools landscape.

Join Margo Visitacion of Forrester and learn:

  • Don’t lose before you get into the game
  • Why load testing can make the difference
  • How planning performance testing today can help budget planning tomorrow
  • How to develop your test game plan

Sign up here if you are interested…


Load Testing: A Quick Definition

What is the difference between Load and Stress testing?

Load testing is a blanket term that is used in many different ways across the professional software testing community.

Load testing generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program’s services concurrently.

As such, load testing is most relevant for a multi-user system, often one built using a client/server model, such as a web server. That said, you could also perform a load test on a word processor or graphics editor by forcing it to read an extremely large document, or on a financial package by forcing it to generate a report based on several years’ worth of data.

When the load placed on the system is accelerated beyond normal usage patterns in order to test the system’s response at unusually high or peak loads, it is known as Stress Testing. The load is usually so great that error conditions are the expected result, although there is a gray area between the two domains and no clear boundary exists where you could say that an activity ceases to be a load test and becomes a stress test.
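
To make the distinction concrete, here is a small (and entirely illustrative) sketch of the two kinds of ramp schedules: a load test ramps up to the expected peak number of concurrent users, while a stress test deliberately keeps ramping past it until errors appear.

# Illustrative ramp schedules: a load test stays within the expected peak of
# concurrent users, a stress test deliberately exceeds it. Numbers are invented.
expected_peak_users = 500

def ramp(target_users, steps=5):
    """Return a simple step schedule of concurrent users up to target_users."""
    return [round(target_users * (i + 1) / steps) for i in range(steps)]

load_test_schedule = ramp(expected_peak_users)          # models expected usage
stress_test_schedule = ramp(expected_peak_users * 3)    # pushes well beyond it

print("load test  :", load_test_schedule)
print("stress test:", stress_test_schedule)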