APM as a Service: 4 steps to monitor real user experience in production

From the Compuware APM Blog, 15 May 2013

With our new service platform and the convergence of dynaTrace PurePath Technology with the Gomez Performance Network, we are proud to offer an APMaaS solution that sets a higher bar for complete user experience management, with end-to-end monitoring technologies that include real-user, synthetic and third-party service monitoring as well as business impact analysis.

To showcase these capabilities, we used the free trial on our own about:performance blog as a demonstration platform. The blog is based on the popular WordPress technology, which uses PHP and MySQL as its implementation stack. With only four steps we get full availability monitoring as well as visibility into every one of our visitors, and we can trace any problem on our blog back to its cause in the browser (JavaScript errors, slow 3rd parties, …), the network (slow connectivity, a bloated website, …) or the application itself (slow PHP code, inefficient MySQL access, …).

Before we get started, let’s have a look at the Compuware APMaaS architecture. In order to collect real user performance data, all you need to do is install a so-called Agent on the Web and/or Application Server. The data gets sent in an optimized and secure way to the APMaaS Platform. Performance data is then analyzed through the APMaaS Web Portal, with drilldown capabilities into the dynaTrace Client.

Compuware APMaaS is a secure service to monitor every single end user on your application end-to-end (browser to database)

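To make the data flow concrete, here is a minimal, purely illustrative sketch of how an agent could batch measurements and forward them to a SaaS collector over HTTPS. The endpoint URL, payload fields and batch size are assumptions for illustration only and are not the actual Compuware APMaaS agent protocol.

```typescript
// Hypothetical sketch of the agent-to-platform data flow described above.
// Endpoint URL, field names and batch size are illustrative assumptions,
// not the actual Compuware APMaaS protocol. Assumes Node 18+ (global fetch).

interface Measurement {
  timestamp: number;   // epoch millis when the request was served
  url: string;         // request path on the monitored server
  durationMs: number;  // server-side response time
}

const COLLECTOR_URL = "https://apmaas.example.com/collector"; // assumed endpoint
const BATCH_SIZE = 50;

const buffer: Measurement[] = [];

// Called by the (hypothetical) instrumentation for every completed request.
export function record(measurement: Measurement): void {
  buffer.push(measurement);
  if (buffer.length >= BATCH_SIZE) {
    void flush();
  }
}

// Sends the buffered measurements to the SaaS platform over HTTPS.
async function flush(): Promise<void> {
  const batch = buffer.splice(0, buffer.length);
  try {
    await fetch(COLLECTOR_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(batch),
    });
  } catch {
    // On failure, keep the data and retry with the next flush.
    buffer.unshift(...batch);
  }
}
```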

4 Steps to set up APMaaS for our Blog powered by WordPress on PHP

From a high-level perspective, joining Compuware APMaaS and setting up your environment consists of four basic steps:

  1. Sign up with Compuware for the Free Trial
  2. Install the Compuware Agent on your Server
  3. Restart your application
  4. Analyze Data through the APMaaS Dashboards

In this article, we assume that you’ve successfully signed up, and will walk you through the actual setup steps to show how easy it is to get started.

After signing up with Compuware, the first sign of your new Compuware APMaaS environment will be an email notifying you that a new environment instance has been created:

Following the steps as explained in the Welcome Email to get started


While you can immediately take a peek into your brand new APMaaS account at this point, there’s not much to see: Before we can collect any data for you, you will have to finish the setup in your application by downloading and installing the agents.

After installation is complete and the Web Server is restarted the agents will start sending data to the APMaaS Platform – and with dynaTrace 5.5, this also includes the PHP agent which gives insight into what’s really going on in the PHP application!

Agent Overview shows us that we have both the Web Server and PHP Agent successfully loaded


Now we are ready to go!

For Ops & Business: Availability, Conversions, User Satisfaction

Through the APMaaS Web Portal, we start with some high level web dashboards that are also very useful for our Operations and Business colleagues. These show Availability, Conversion Rates as well as User Satisfaction and Error Rates. To show the integrated capabilities of the complete Compuware APM platform, Availability is measured using Synthetic Monitors that constantly check our blog while all of the other values are taken from real end user monitoring.

Operations View: Automatic Availability and Response Time Monitoring of our Blog


Business View: Real Time Visits, Conversions, User Satisfaction and Errors

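As a rough illustration of the synthetic side of this picture, the sketch below probes a URL on a fixed interval and derives an availability percentage from the results. It assumes Node 18+ for the global fetch API; the blog URL is a placeholder, and the real Gomez synthetic monitors of course run from a worldwide network of test locations rather than a single script.

```typescript
// Minimal sketch of a synthetic availability check, not the Gomez network itself.
// TARGET and INTERVAL_MS are assumed values for illustration.

const TARGET = "https://apmblog.example.com/";   // placeholder for the monitored URL
const INTERVAL_MS = 60_000;                      // probe once per minute

interface Probe { up: boolean; responseTimeMs: number; }
const probes: Probe[] = [];

async function probe(): Promise<void> {
  const started = Date.now();
  let up = false;
  try {
    const res = await fetch(TARGET, { redirect: "follow" });
    up = res.ok;                                 // any 2xx counts as available
  } catch {
    up = false;                                  // network error counts as an outage
  }
  probes.push({ up, responseTimeMs: Date.now() - started });

  const availability = (100 * probes.filter(p => p.up).length) / probes.length;
  console.log(`availability so far: ${availability.toFixed(2)}%`);
}

setInterval(() => void probe(), INTERVAL_MS);
```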

For App Owners: Application and End User Performance Analysis

Through the dynaTrace client we get a richer view of the real end user data. The PHP agent we installed is a full equivalent to the dynaTrace Java and .NET agents, and features like the application overview, together with our self-learning automatic baselining, just work the same way regardless of the server-side technology:

Application level details show us that we had a response time problem and that we currently have several unhappy end users


Before drilling down into the performance analytics, let’s have a quick look at the key user experience metrics such as where our blog users actually come from, the browsers they use, and whether their geographical location impacts user experience:

The UEM Key Metrics dashboards give us the key metrics of web analytics tools and tie them together with performance data. Visitors from remote locations are obviously impacted in their user experience


If you are responsible for User Experience and interested in some of our best practices I recommend checking our other UEM-related blog posts – for instance: What to do if A/B testing fails to improve conversions?

Going a bit deeper – What impacts End User Experience?

dynaTrace automatically detects important URLs as so-called “Business Transactions.” In our case we have different blog categories that visitors can click on. The following screenshot shows that we automatically get dynamic baselines calculated for these identified business transactions:

Dynamic baselining detects a significant violation of the baseline during a 4.5-hour period last night

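To make the idea of a dynamic baseline tangible, here is a minimal sketch. It assumes a simple sliding-window mean-plus-three-sigma rule, which is only an illustration of the concept and not dynaTrace’s actual self-learning algorithm.

```typescript
// Minimal sketch of dynamic baselining (illustrative only): learn mean and
// standard deviation from a sliding window of recent response times and flag
// measurements that exceed mean + 3 sigma.

const WINDOW = 200;          // number of recent samples forming the baseline
const history: number[] = [];

export function isBaselineViolation(responseTimeMs: number): boolean {
  let violation = false;
  if (history.length >= WINDOW) {
    const mean = history.reduce((a, b) => a + b, 0) / history.length;
    const variance =
      history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
    const threshold = mean + 3 * Math.sqrt(variance);
    violation = responseTimeMs > threshold;
  }
  history.push(responseTimeMs);
  if (history.length > WINDOW) history.shift();
  return violation;
}
```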

Here we see that our overall response time for requests by category slowed down on May 12. Let’s investigate what happened here, and move to the transaction flow which visualizes PHP transactions from the browser to the database and maps infrastructure health data onto every tier that participated in these transactions:

The Transaction Flow shows us a lot of interesting points such as Errors that happen both in the browser and the WordPress instance. It also shows that we are heavy on 3rd party but good on server health


Since we are always striving to improve our users’ experience, the first troubling thing on this screen is that we see errors happening in browsers – maybe someone forgot to upload an image when posting a new blog entry? Let’s drill down to the Errors dashlet to see what’s happening here:

3rd Party Widgets throw JavaScript errors and with that impact end user experience.


Apparently, some of the third party widgets we have on the blog caused JavaScript errors for some users. Using the error message, we can investigate which widget causes the issue and where it is happening. We can also see which browsers, versions and devices this happens on, to focus our optimization efforts. If you happen to rely on 3rd party plugins, you may want to check the blog post You only control 1/3 of your Page Load Performance.
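For readers curious what browser-side error capture looks like in principle, here is a minimal sketch. The beacon endpoint and payload fields are hypothetical; dynaTrace UEM collects this automatically, without such hand-written code.

```typescript
// Browser-side sketch (illustrative only) of capturing JavaScript errors and
// attributing them to first- vs third-party scripts. "/uem/errors" and the
// payload fields are assumptions, not the product's protocol.

window.addEventListener("error", (event: ErrorEvent) => {
  const source = event.filename || "inline/unknown";
  const thirdParty =
    source !== "inline/unknown" &&
    new URL(source, location.href).origin !== location.origin;

  navigator.sendBeacon(
    "/uem/errors", // assumed collection endpoint
    JSON.stringify({
      message: event.message,
      source,
      line: event.lineno,
      thirdParty,                     // separates widget errors from our own code
      userAgent: navigator.userAgent, // enables the browser/version/device breakdown
    })
  );
});
```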

PHP Performance Deep Dive

We will analyze the performance problems on the PHP server side in a follow-up blog post and show you the steps to identify problematic PHP code. In our case the culprit actually turned out to be a plugin that helps us identify bad requests (requests from bots, …).

Conclusion and Next Steps

Stay tuned for more posts on this topic, or try Compuware APMaaS out yourself by signing up here for the free trial!


It takes more than a tool! Swarovski’s 10 requirements for creating an APM culture

By Andreas Grabner at blog.dynatrace.com

Swarovski, the world’s leading producer of cut crystal, relies on its eCommerce store as much as other companies in the highly competitive eCommerce environment. Swarovski’s story is no different from others in this space: They started with “Let’s build a website to sell our products online” a couple of years ago and quickly progressed to “We sell to 60 million annual visitors across 23 countries in 6 languages”. There were bumps along the road, and they realized that it takes more than just a bunch of servers and tools to keep the site running.

Why APM, and why is a tool alone not enough?

Swarovski relies on Intershop’s eCommerce platform and faced several challenges as they rapidly grew. Their challenges required them to apply Application Performance Management (APM) practices to ensure they could fulfill the business requirements to keep pace with customer growth while maintaining an excellent user experience. The most insightful comment I heard was from René Neubacher, Senior eBusiness Technology Consultant at Swarovski: “APM is not just about software. APM is a culture, a mindset and a set of business processes.  APM software supports that.”

René recently discussed their journey to APM: what their initial problems were, what requirements they ended up with for APM, and the tools they needed to support their APM strategy. By now they have reached the next level of maturity by establishing a Performance Center of Excellence. This allows them to tackle application performance proactively throughout the organization instead of putting out fires reactively in production.

This blog post describes the challenges they faced, the questions that arose and the new generation APM requirements that paved the way forward in their performance journey:

The Challenge!

Swarovski had traditional system monitoring in place on all the systems across their delivery chain, including web servers, application servers, SAP, database servers, external systems and the network. Knowing that each individual component is up and running 99.99% of the time is great, but no longer sufficient. How might these individual component outages impact the user experience of their online shoppers? WHO is actually responsible for the end user experience, and HOW should you monitor the complete delivery chain and not just the individual components? These and other questions came up as the eCommerce site attracted more customers, which was quickly followed by more complaints about user experience:

APM includes getting a holistic view of the complete delivery chain and requires someone to be responsible for end user experience.


Questions that had no answers

In addition to “Who is responsible in case users complain?” the other questions that needed to be urgently addressed included:

  • How often is the service desk called before IT knows that there is a problem?
  • How much time is spent in searching for system errors versus building new features?
  • Do we have a process to find the root-cause when a customer reports a problem?
  • How do we visualize our services from the customer‘s point of view?
  • How much revenue, brand image and productivity are at risk or lost while IT is searching for the problem?
  • What do we do when someone says “it’s slow”?

The 10 Requirements

These unanswered questions triggered the need to move away from traditional system monitoring and develop the requirements for new generation APM and user experience management.

#1: Support State-of-the-Art Architecture

Based on their current system architecture it was clear that Swarovski needed an approach that would work in their architecture, now and in the future. The rise of more interactive Web 2.0 and mobile applications had to be factored in, to allow monitoring end users on many different devices regardless of whether they used a web application or a native mobile application as their access point.

Transactions need to be followed from the browser all the way back to the database. It is important to support distributed transactions. This approach also helps to spot architectural and deployment problems immediately


#2: 100% transactions and clicks – No Averages

Based on their experience, Swarovski knew that looking at average values or sampled data would not be helpful when customers complained about bad performance. Responding to a customer complaint with “Our average user has no problem right now – sorry for your inconvenience” is not what you want your helpdesk engineers to use as a standard phrase. Averages or sampling also hides the real problems you have in your system. Check out the blog post Why Averages Suck by Michael Kopp for more detail.
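
As a small illustration with made-up numbers of why averages (and sampling) mislead, the sketch below shows that 99 fast requests plus one ten-second outlier still produce an average that looks tolerable, while only looking at every single transaction reveals the user who actually suffered:

```typescript
// Illustrative numbers only: why averages hide the one user who had a problem.

const responseTimesMs: number[] = [
  ...Array.from({ length: 99 }, () => 200), // 99 requests at 200 ms
  10_000,                                   // one user waited 10 seconds
];

const average =
  responseTimesMs.reduce((a, b) => a + b, 0) / responseTimesMs.length;

function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil((p / 100) * sorted.length) - 1];
}

console.log(`average: ${average} ms`);                         // 298 ms - looks fine
console.log(`p95:     ${percentile(responseTimesMs, 95)} ms`); // 200 ms - still fine
console.log(`max:     ${Math.max(...responseTimesMs)} ms`);    // 10000 ms - the actual complaint
```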

Measuring end user performance of every customer interaction allows for quick identification of regional problems with CDNs, 3rd Parties or Latency.


Having 100% user interactions and transactions available makes it easy to identify the root cause for individual users


#3: Business Visibility

As the business had a growing interest in the success of the eCommerce platform, IT had to demonstrate to the business what it took to fulfill their requirements and how those requirements are impacted by the availability of, or a lack of investment in, the application delivery chain.

Correlating the number of Visits with Performance on incoming Orders illustrates the measurable impact of performance on revenue and what it takes to support business requirements.


#4: Impact of 3rd Parties and CDNs

It was important to not only track transactions involving their own Data Center but ALL user interactions with their web site – even those delivered through CDNs or 3rd parties. All of these interactions make up the user experience and therefore ALL of it needs to be analyzed.

Seeing the actual load impact of 3rd party components or content delivered from CDNs enables IT to pinpoint user experience problems that originate outside their own data center.


#5: Across the lifecycle – supporting collaboration and tearing down silos

The APM initiative was started because Swarovski was reacting to problems happening in production. Fixing these problems in production is only the first step. Their ultimate goal is to become proactive by finding and fixing problems in development or testing, before they spill over into production. Instead of relying on different sets of tools with different capabilities, the requirement is to use one single solution that is designed to be used across the application lifecycle (Developer Workstation, Continuous Integration, Testing, Staging and Production). A single solution makes it easier to share application performance data between lifecycle stages, allowing individuals not only to look at data from other stages but also to compare data and verify the impact and behavior of code changes between version updates.

Continuously catching regressions in Development by analyzing unit and performance tests allows application teams to become more proactive.


Pinpointing integration and scalability issues, continuously, in acceptance and load testing makes testing more efficient and prevents problems from reaching production.


#6: Down to the source code

In order to speed up problem resolution, Swarovski’s operations and development teams require as much code-level insight as possible: not only for their own engineers who are extending the Intershop eCommerce Platform, but also for Intershop to improve their product. Knowing what part of the application code is not performing well, with which input parameters or under which specific load on the system, eliminates tedious reproduction of the problem. The requirement is to lower the Mean Time To Repair (MTTR) from as much as several days down to only a couple of hours.

The SAP Connector turned out to have a performance problem. This method-level detailed information was captured without changing any code.

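As a conceptual illustration only (dynaTrace captures this method-level detail through automatic instrumentation, without code changes), the sketch below shows what “timing a method together with its input parameters” boils down to. The wrapped lookupPrice function and its body are hypothetical.

```typescript
// Conceptual sketch: record how long a call takes and which arguments it was
// given. In a real APM system this happens via instrumentation, not a wrapper.

function timed<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => R
): (...args: A) => R {
  return (...args: A): R => {
    const start = Date.now();
    try {
      return fn(...args);
    } finally {
      const elapsed = Date.now() - start;
      // In a real APM system this record would flow to the central repository.
      console.log(`${name}(${JSON.stringify(args)}) took ${elapsed} ms`);
    }
  };
}

// Hypothetical usage: wrap a slow call such as a connector lookup.
const lookupPrice = timed("lookupPrice", (sku: string): number => {
  // ... expensive work, e.g. a remote SAP call for this sku, would happen here ...
  return sku.length; // placeholder result
});

lookupPrice("CRYSTAL-123");
```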

#7: Zero/Acceptable overhead

“Who are we kidding? There is nothing like zero overhead, especially when you need 100% coverage!” Those were René’s words when we discussed this requirement. And he is right: once you start collecting information from a production system, you add a certain amount of overhead. A better term would be “imperceptible overhead”: overhead so small that you don’t notice it.

What is the exact number? It depends on your business and your users. The number should be worked out from the impact on the end user experience, rather than from the additional CPU, memory or network bandwidth required in the data center. Swarovski knew they had to stay below 2% overhead on page load times in production (on a two-second page load, that is no more than 40 milliseconds), as anything more would have hurt their business; and that is what they achieved.

#8: Centralized data collection and administration

Running a distributed eCommerce application that may be extended to additional geographic locations requires an APM system with centralized data collection and administration. It is not feasible to collect different types of performance information from different systems, servers or even data centers; that would require either multiple analysis tools or transforming the data into a single format before it could be properly analyzed.

Instead, Swarovski required a single, unified APM system. Central administration is equally important, as it eliminates the need to rely on remote IT administrators for changes to the monitored system, for example simple tasks such as changing the level of captured data or upgrading to a new version.

Storing and accessing performance data in a single, centralized repository enables fast and powerful analytics and visualization. For example, system metrics such as CPU utilization can be correlated with end-user response time or database execution time, all displayed on one single dashboard.

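A minimal sketch of what such a correlation could look like once both series live in one repository, assuming per-minute samples and a plain Pearson coefficient. The data shape and metric pairing (host CPU vs. end-user response time) are illustrative, not the product’s internals.

```typescript
// Illustrative only: join two metric series on their timestamps and compute
// how strongly they move together.

interface Sample { timestamp: number; value: number; }

function pearson(a: Sample[], b: Sample[]): number {
  // Join both series on their timestamps, as a single repository makes possible.
  const byTime = new Map<number, number>();
  for (const s of b) byTime.set(s.timestamp, s.value);

  const pairs = a
    .filter(s => byTime.has(s.timestamp))
    .map(s => [s.value, byTime.get(s.timestamp)!] as const);
  if (pairs.length === 0) return NaN;

  const n = pairs.length;
  const meanX = pairs.reduce((acc, [x]) => acc + x, 0) / n;
  const meanY = pairs.reduce((acc, [, y]) => acc + y, 0) / n;
  let cov = 0, varX = 0, varY = 0;
  for (const [x, y] of pairs) {
    cov += (x - meanX) * (y - meanY);
    varX += (x - meanX) ** 2;
    varY += (y - meanY) ** 2;
  }
  return cov / Math.sqrt(varX * varY);
}
```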

#9: Auto-Adapting Instrumentation without digging through code

As the majority of the application code is not developed in-house but provided by Intershop, it is mandatory to get insight into the application without doing any manual code changes. The APM system must auto-adapt to changes so that no manual configuration change is necessary when a new version of the application is deployed.

This means Swarovski can focus on making their applications positively contribute to business outcomes, rather than spend time maintaining IT systems.

#10: Ability to extend

Their application lives in an ever-growing, ever-changing IT environment. Where everything might once have been deployed on physical boxes, it might later be moved to virtualized environments or even into a public cloud.

Whatever the extension may be, the APM solution must be able to adapt to these changes and also be extensible to consume new types of data sources, e.g., performance metrics from Amazon Cloud Services, VMware, Cassandra or other Big Data solutions, or even legacy mainframe applications, and then bring these metrics into the centralized data repository to provide new insights into the application’s performance.

Extending the application monitoring capabilities to Amazon EC2, Microsoft Windows Azure, a public or private cloud enables the analysis of the performance impact of these virtualized environments on end user experience.

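To picture what “extensible to consume new types of data sources” can mean in practice, here is a hypothetical sketch of a small plug-in contract feeding a central repository. The interface and class names are invented for illustration and do not reflect the actual product APIs.

```typescript
// Hypothetical plug-in contract: any new data source (cloud API, Cassandra,
// mainframe, ...) implements MetricSource and feeds one central repository.

interface Metric {
  source: string;      // e.g. "aws-ec2", "vmware", "cassandra"
  name: string;        // e.g. "cpu.utilization"
  timestamp: number;
  value: number;
}

interface MetricSource {
  id: string;
  collect(): Promise<Metric[]>;
}

class CentralRepository {
  private readonly sources: MetricSource[] = [];

  register(source: MetricSource): void {
    this.sources.push(source);
  }

  // Pull the latest metrics from every registered source into one store.
  async collectAll(): Promise<Metric[]> {
    const batches = await Promise.all(this.sources.map(s => s.collect()));
    return batches.flat();
  }
}
```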

The Solution and the Way Forward

Needless to say, Swarovski took the first step by implementing APM as a new process and mindset in their organization. They are now in the next phase, implementing a Performance Center of Excellence. This allows them to move from reactive performance troubleshooting to proactive performance prevention.

Stay tuned for more blog posts on the Performance Center of Excellence and how you can build one in your own organization. The key message is that it is not just about using a bunch of tools; it is about living and breathing performance throughout the organization. If you are interested in this, check out the blog posts by Steve Wilson: Proactive vs Reactive: How to prevent problems instead of fixing them faster and Performance in Development is the Chief Cornerstone.


OPNET Technologies, Inc acquired by Riverbed Technology

Riverbed Technology, the performance company, and OPNET Technologies, Inc., a leading provider of solutions for application and network performance management, today announced that Riverbed® has entered into a definitive agreement to acquire OPNET for $43 per share in cash and stock, representing an equity value of $1 billion and an enterprise value of $921 million.

The acquisition will enable Riverbed to extend its network performance management (NPM) business into the multi-billion dollar application performance management (APM) market. The combination of Cascade® and OPNET will create a new force in the converged market for NPM and APM, with over $250 million in annualized revenue.

Networks and applications are required to work together to deliver the performance business users demand. The addition of OPNET’s broad-based family of APM products enhances Riverbed’s already strong position in the NPM market. The resulting combination is a product line with unparalleled visibility and insight into application and network performance. This acquisition enables Riverbed to provide customers with a unique integrated solution that not only monitors network and application performance, but also accelerates it.

OPNET has been recognised as a leader in the APM market: www.opnet.com/gartner-magic-quadrant-apm/.

“The addition of OPNET establishes Riverbed as the clear leader in the high-growth and converging application and network performance management markets,” said Jerry Kennelly, Chairman and CEO at Riverbed. “This acquisition also transforms Riverbed into a billion dollar revenue company.”

“Riverbed and OPNET have natural synergies,” said Marc Cohen, OPNET’s Chairman and CEO. “Riverbed’s leadership in accelerating business technology combined with OPNET’s industry-leading suite of APM products provides customers with a single solution for monitoring, troubleshooting and actually fixing the application and network performance problems challenging them today.”

OPNET will be combined with Riverbed’s Cascade business unit and is expected to be fully integrated by mid-2013.


Compuware unveils Outage Analyzer, a new generation performance analytics solution that raises the intelligence of SaaS APM

Tracks cloud and third-party web service outages with instant notification of cause and impact

Compuware, the technology performance company, today announced a new generation performance analytics solution that raises the intelligence of software-as-a-service (SaaS) application performance management (APM).

 

Outage Analyzer provides real-time visualizations and alerts of outages in third-party web services that are mission critical to web, mobile and cloud applications around the globe. Compuware is providing this new service free of charge. Check out Outage Analyzer here.

Utilizing cutting-edge big data technologies and a proprietary anomaly detection engine, Outage Analyzer correlates more than eight billion data points per day. This data is collected from the Compuware Gomez Performance Monitoring Network of more than 150,000 test locations and delivers information on specific outages including the scope, duration and probable cause of the event — all visualized in real-time.

“Compuware’s new Outage Analyzer service is a primary example of the emerging industry trend toward applying big data analytics technologies to help understand and resolve application performance and availability issues in near real-time,” said Tim Grieser, Program VP, Enterprise System Management Software at IDC. “Outage Analyzer’s ability to analyze and visualize large masses of data, with automated anomaly detection, can help IT and business users better understand the sources and causes of outages in third-party web services.”

Cloud and third-party web services allow organizations to rapidly deliver a rich user experience, but also expose web and mobile sites to degraded performance—or even a total outage—should any of those components fail. Research shows that the typical website has more than ten separate hosts contributing to a single transaction, many of which come from third-party cloud services such as social media, ecommerce platforms, web analytics, ad servers and content delivery networks.

Outage Analyzer addresses this complexity with the following capabilities:

  • Incident Visualization: Issues with third-party services are automatically visualized on Outage Analyzer’s global map view. This view displays information on the current status, impact—based on severity and geography—and duration, along with the certainty and probable cause of the outage. Outage Analyzer also provides a timeline view that shows the spread and escalation of the outage. The timeline has a playback feature to replay the outage and review its impact over time.
  • Incident Filtering and Searching: With Outage Analyzer, users can automatically view the most recent outages, filtered by severity of impact, or search for outages in specific IPs, IP ranges or service domains. This allows users to find the outages in services that are potentially impacting their own applications.
  • Alerting: Users can sign up to automatically receive alerts—RSS and Twitter feeds—and can specify the exact types of incidents to be alerted on, such as the popularity of the third-party web service provider, the certainty of an outage and the geographical region impacted. Alerts contain links to the global map view and details of the outage. This provides an early-warning system for potential problems.
  • Performance Analytics Big Data Platform: Utilizing cutting-edge big data technologies in the cloud, including Flume and Hadoop, Outage Analyzer collects live data from the entire Gomez customer base and Gomez Benchmark tests, processing more than eight billion data points per day. The processing from raw data to visualization and alerting on an outage all happens within minutes, making the outage data timely and actionable.
  • Anomaly Detection Algorithms: At the heart of Outage Analyzer’s big data platform is a proprietary anomaly detection engine that automatically identifies availability issues with third-party web services that are impacting performance of the web across the globe. Outage Analyzer then correlates the outage data, identifies the source of the problem, calculates the impact and lists the probable causes — all in real-time. A rough sketch of the general idea follows below.
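
Compuware’s detection engine is proprietary; purely to illustrate the general idea of flagging an outage from a stream of measurements, here is a minimal sketch using a robust median-plus-MAD threshold on a service’s error rate. All numbers, the threshold factor and the function names are assumptions.

```typescript
// Illustrative anomaly check: compare the latest error rate of a third-party
// service against a robust baseline (median + k * MAD) built from recent history.

function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

export function isAnomalous(history: number[], latest: number, k = 5): boolean {
  if (history.length < 30) return false;                    // need enough history first
  const med = median(history);
  const mad = median(history.map(v => Math.abs(v - med)));  // median absolute deviation
  const spread = mad || 1e-6;                               // avoid a zero threshold
  return Math.abs(latest - med) / spread > k;
}

// Example: a steady ~0.5% error rate for an hour, then a sudden spike to 12%.
const recent = Array.from({ length: 60 }, () => 0.4 + Math.random() * 0.2);
console.log(isAnomalous(recent, 12.0)); // true - flagged as a probable outage
```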

“Since Outage Analyzer has been up and running, we’ve seen an average of about 200 third-party web service outages a day,” said Steve Tack, Vice President of Product Management for Compuware’s APM business unit. “Outage Analyzer is just the beginning. Our big data platform, proprietary correlation and anomaly detection algorithms, and intuitive visualizations of issues with cloud and third-party web services are key building blocks to delivering a new generation of answer-centric APM.”

Outage Analyzer harnesses the collective intelligence of the Compuware Gomez Network, the largest and most active APM SaaS platform in the world. Now eight billion measurements a day across the global Internet can be harnessed by any organization serious about delivering exceptional web application performance. Determining whether an application performance issue is the fault of an organization’s code, or the fault of a third-party service has never been easier.

Compuware APM® is the industry’s leading solution for optimizing the performance of web, non-web, mobile, streaming and cloud applications. Driven by end-user experience, Compuware APM provides the market’s only unified APM coverage across the entire application delivery chain—from the edge of the internet through the cloud to the datacenter. Compuware APM helps customers deliver proactive problem resolution for greater customer satisfaction, accelerate time-to-market for new application functionality and reduce application management costs through smarter analytics and advanced APM automation.

With more than 4,000 APM customers worldwide, Compuware is recognized as a leader in the “Magic Quadrant for Application Performance Monitoring” report.

To read more about Compuware’s leadership in the APM market, click here.


Introducing the new web performance project: Speed of the web

From Alois Reitbauer at dynaTrace Compuware APM.

I am excited about the launch of a new project in the Web performance space. With SpeedoftheWeb we offer a free benchmarking and optimization service that provides key performance indicators (KPIs) calculated for industry verticals like Retail, Health, Media or Travel.

The idea behind the project is that Web performance also depends on the type of service your site provides. A simple static page is different from a content-rich site with a lot of interactive parts. The main questions are: how am I doing compared to my competition, and where can I improve? SpeedoftheWeb answers exactly these questions.

You can get a free report showing how you do against the top sites in your industry across the whole Web application delivery chain. We start from the user’s perspective by showing how long it takes to see the page or fully load it. Then we dive into how individual components like JavaScript, content or server-side processing contribute to the user experience, explicitly pointing out where you have to optimize.

Performance across the Web App Delivery Chain and where to improve

For a total of 15 Web performance KPIs we not only answer how good you are, but also what the range in your industry is. Often it is hard to specify performance KPIs because you do not know what the ideal site should look like. SpeedoftheWeb provides exactly this information. Below you see an example of how the JavaScript execution time of a page relates to equivalent pages in the industry.

JavaScript execution time compared to the competition

Getting better is also about learning from the best. That is why we tell you how many of the top sites in the field are better than you and what the best sites for each KPI are. Get insight into what these sites are doing and learn what their secret sauce is.
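
As a small sketch (with invented numbers) of the comparison the report makes for each KPI, the snippet below takes your value and the benchmarked values of the top sites in a vertical and reports the industry range and how many sites are faster than you. The data and function name are illustrative only.

```typescript
// Illustrative comparison of one KPI (e.g. JavaScript execution time in ms)
// against the benchmarked top sites of an industry vertical. Lower is better.

function compareToIndustry(mine: number, industry: number[]) {
  const sorted = [...industry].sort((a, b) => a - b);
  const betterThanMine = sorted.filter(v => v < mine).length;
  return {
    best: sorted[0],
    worst: sorted[sorted.length - 1],
    sitesBetterThanYou: betterThanMine,
    youBeatPercent: 100 * (1 - betterThanMine / sorted.length),
  };
}

// Hypothetical vertical with ten benchmarked sites and our own value of 430 ms.
const retailJsTimesMs = [180, 220, 250, 260, 310, 340, 360, 410, 520, 700];
console.log(compareToIndustry(430, retailJsTimesMs));
// -> 8 sites are faster; our JavaScript time sits in the upper part of the range
```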

SpeedoftheWeb will also help you justify why you want to invest in Web performance in a way that management will understand. You always wanted to get rid of that 2 MB Flash video on your start page? Show management that it makes you more than 1 second slower than the top pages in your industry.

SpeedoftheWeb provides several testing locations around the globe, enabling you to get data from where your users are. You can even use the data to compare performance across multiple locations.

All reports are persisted in our cloud storage and can be accessed via a web browser, so you can easily share them with your colleagues. We did our best to polish them up visually so you don’t have to put a lot of makeup on them before showing them to your boss, as is so often necessary with performance data.

There is even more that SpeedoftheWeb can do for you. Knowing in which area to improve is good, but knowing exactly what to do is even better. Therefore we automatically record an Ajax Edition session which can be downloaded for deep-dive analysis. So, if something is slow you will figure it out. A nice bonus is that you can now also record Ajax Edition sessions from around the globe for free.

Detailed Diagnostics Data in Ajax Edition

I am very excited about this new service and I hope it provides a lot of value to the Web performance community. If you have ideas on how to improve it, just let me know. If you want to gain deeper insight into how performance differs across various industries, I recommend checking out this presentation.

Don’t forget to visit www.speedoftheweb.org now and see what it can do for you. Enjoy using SpeedoftheWeb and provide feedback to make this an even better service.


Are your web and mobile apps ready for the revolution?

From the Compuware APM blog.

Today, more than 50% of Americans own a smartphone. They use them to update Facebook profiles, scan and deposit checks, find restaurants and, more often than not, to work.

As a result of today’s workforce becoming more dispersed and collaborative, employees often want to use technologies of their choice, including their own personal devices. Businesses have been quick to embrace this trend and give employees the ability to perform critical business functions anytime, anywhere, in the hopes of leveraging gains in productivity.

A recent study by Forrester Research shows that 60% of employees use their personal mobile devices at work. Some organizations have reported over 75% of devices on their networks are owned by employees. While this trend is viewed by some as risky and painful, causing security, liability, and management issues, others have embraced it.

Companies that have embraced ‘Bring Your Own Device’, or BYOD, have opened up a Pandora’s box of potential technical issues, only some of which have come to light. Recently IBM’s CIO tightened the company’s BYOD restrictions on certain software apps, citing security problems. A Cisco Systems study also revealed BYOD is creating internal support issues at many enterprises.

While BYOD offers flexibility, freedom and potential productivity gains, it also increases the complexity at the edge of the Internet.  To make matters more challenging, employees have the expectation that they can access their email, Internet and corporate resources not only from their laptop, but also on their iPhone, iPad, Android device or Blackberry.

It is not unusual to see the number of mobile clients in an enterprise double, or even triple, with the same number of employees, since each user may have two or even three devices in use, such as a laptop, smart phone and tablet. Employees also expect to receive the same high level of user experience, in terms of seamless access and wire-like performance that they receive with the services that they are used to consuming on their mobile devices.

Arguments for allowing employees to bring their own devices into the office environment abound – from helping employees become more productive, to making it easier to attract and retain talent, to simply submitting to a tidal wave of change that is impossible to resist.

However, the chart below provides a snapshot of just one culprit that can impact employees’ user experience – the large number of web browsers and how differently they perform on various mobile and desktop devices. This small sample shows average page load times for just a few of the hundreds of possible device/OS/browser combinations.

Since many applications are delivered via the web (and much of the work of an application happens within the browser itself) how can an IT department ensure the application speed and availability that the business demands for so many variations? The first step is to gain visibility into the performance of business sites and applications across all devices and networks used by employees.

But tracking and managing performance levels is now exponentially more difficult as more and more devices and applications are in play. If you can’t verify the reliability and performance of mobile sites and applications, then you’ll create frustrated, non-productive employees.

Visibility into application performance across networks and devices is needed to understand and track performance, quickly spot problems and correct issues before they impact employee productivity or result in unwanted support tickets.


New Relic and SOASTA offer comprehensive solution for web application testing and performance management

 

 

According to a recent SOASTA press release…

SOASTA, the leader in cloud and mobile testing, and New Relic, the SaaS-based cloud application performance management provider, today announced their partnership and the integration of New Relic’s application performance management solution on SOASTA’s CloudTest platform.

The integrated solution gives developers complete visibility into the performance of their web applications throughout the entire application development lifecycle. Now developers can immediately begin building performance tests to assure the highest test coverage and tune their apps based on deep performance diagnostics to ensure the highest level of application performance and availability, all at no charge.

Hopefully we will get an update shortly…