AppDynamics releases powerful database monitoring solution, extends visibility beyond the application layer

1st APM vendor to bridge application and database performance with a single view.

AppDynamics, the next-generation Application Performance Management (APM) solution that simplifies the management of complex apps, has announced the release of AppDynamics for Databases to help enterprises troubleshoot and tune database performance problems. The new solution is available immediately and offers unmatched insight and visibility into how SQL and stored procedures execute within databases such as Oracle, SQL Server, DB2, Sybase, MySQL and PostgreSQL.

AppDynamics for Databases addresses the challenges that application support teams such as Developers and Operations face in trying to identify the cause of application performance issues that relate to database performance.  As many as 50% of application problems are the result of slow SQL calls and stored procedures invoked by applications—yet until now, databases have been a “black box” for application support teams.

“Giving our customers critical visibility and troubleshooting capability into the cause of database problems makes AppDynamics absolutely unique in the APM space,” said Jyoti Bansal, founder and CEO of AppDynamics. “Application support teams constantly wrestle with database performance problems in attempting to ensure uptime and availability of their mission-critical applications, but they usually lack the visibility they need to resolve problems. We’ve equipped them with a valuable new solution for ensuring application performance, and it will enable them to collaborate with their Database Administrator colleagues even more closely than before.”

With its new database monitoring solution, AppDynamics has applied its “secret sauce” from troubleshooting Java and .NET application servers to databases, allowing enterprises to pinpoint slow user transactions and identify the root cause of SQL and stored procedure queries. AppDynamics for Databases also offers universal database diagnostics covering Oracle, SQL Server, DB2, Sybase, PostgreSQL, and MySQL database platforms.

AppDynamics Pro for Databases includes the following features:

  • Production Ready: Less than 1% overhead in most production environments.
  • Application to Database drill-down: Ability to troubleshoot business transaction latency from the application right into the database and storage tiers.
  • SQL explain/execution plans: Allows developers and database administrators to pinpoint inefficient operations and logic, as well as diagnose why queries are running slowly (see the sketch after this list).
  • Historical analysis: Monitors and records database activity 24/7 to allow users to analyze performance slowdowns in the database tier.
  • Top database wait states: Provides insights and visibility into database wait and CPU states to help users understand database resource contention and usage.
  • Storage visibility for NetApp: Provides the ability to correlate database performance with performance on NetApp storage.
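To illustrate the kind of information an execution plan exposes, here is a minimal, hypothetical sketch using SQLite from Python's standard library (chosen only because it needs no server); on the platforms listed above the equivalent statements are EXPLAIN in MySQL and EXPLAIN ANALYZE in PostgreSQL, and the table and query below are invented for the example.

    import sqlite3

    # Hypothetical schema purely for illustration: an orders table queried by customer.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

    query = "SELECT total FROM orders WHERE customer_id = ?"

    # Without an index the plan reports a full table scan ("SCAN orders" / "SCAN TABLE orders").
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print("before index:", row)

    # After adding an index, the same query is resolved via an index search instead.
    conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
    for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
        print("after index: ", row)

A tool that captures this kind of plan information alongside the slow SQL it records saves application teams from having to reproduce the problem by hand.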

“It is great to have a tightly integrated way to monitor, troubleshoot and optimize the performance of our key applications and the databases that support them,” said Nadine Thomson, Group IT Operations Manager at STA Travel. “We’re enthusiastic about the ability to use deep database, Java, and .NET performance information all from within a single AppDynamics product.”

AppDynamics for Databases is available now; get a free trial here.


Amazon explains how to measure streaming video performance

Learn who the customers are and understand what’s important to them. An Amazon exec offers 12 best practices.

When it comes to online video performance, every second matters, and that is literally true. In his address at the recent Streaming Media West conference in Los Angeles, Nathan Dye, software development manager for Amazon Web Services, noted that studies have shown a one-second delay in an e-commerce web site's loading time can reduce revenue by seven percent.

Loading times are just as crucial for online video. Shoppers often don’t come back if videos are slow to load.

“Poor performance and video interruptions lead to less return traffic and less video viewed overall,” Dye said. “IMDB, of course, knows this very well. Their operations team is constantly using their performance measurement, their metrics and dashboards, to find issues with their infrastructures or find problems their customers are experiencing, pinpointing those issues and finally fixing them. Ultimately, that’s what performance measurement is all about: it’s about improving the streaming performance of your customers by first finding those issues and then fixing them.”

In his presentation, Dye offered 12 best practices for measuring streaming video performance.

“You have to start with your customers. If you don’t know what your customers care about, you won’t be able to measure it,” Dye explained. “You need to know what they’re watching, where they’re watching it from, how frequently they’re watching it. Depending on who your customers are, you may care about different performance criteria. For example, if you’re vending feature-length films, you may care a lot more about ensuring that customers get a high-quality stream that’s uninterrupted.”

Feature film vendors might decide to sacrifice some start-up latency to ensure that viewers get an uninterrupted stream, Dye explained.
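As a concrete illustration of that trade-off, the sketch below computes two of the per-session metrics a video publisher might track: start-up latency and rebuffering ratio. The session structure and field names are invented for the example; a real implementation would derive them from player events or CDN logs.

    from dataclasses import dataclass

    @dataclass
    class PlaybackSession:
        # All times in seconds; fields are hypothetical, for illustration only.
        play_requested_at: float   # user pressed play
        first_frame_at: float      # first frame rendered
        watch_time: float          # total time spent watching
        rebuffer_time: float       # total time stalled waiting for the buffer

    def startup_latency(s: PlaybackSession) -> float:
        """Time the viewer waited before the video started."""
        return s.first_frame_at - s.play_requested_at

    def rebuffer_ratio(s: PlaybackSession) -> float:
        """Fraction of the session lost to interruptions."""
        total = s.watch_time + s.rebuffer_time
        return s.rebuffer_time / total if total else 0.0

    session = PlaybackSession(play_requested_at=0.0, first_frame_at=2.3,
                              watch_time=540.0, rebuffer_time=4.5)
    print(f"start-up latency: {startup_latency(session):.1f}s")
    print(f"rebuffer ratio:   {rebuffer_ratio(session):.2%}")

A feature-film service might accept a slightly higher start-up latency target while alerting aggressively on rebuffering, which is exactly the trade-off Dye describes.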

For the other 11 best practices for measuring streaming video, watch the full presentation below.


HOW-TO: Best Practices for Measuring Performance of Streaming Video

In this presentation, you’ll learn about best practices for measuring and monitoring the quality of your videos streamed to end-users. We will provide practical guidance using external agent-based measurements and real user monitoring techniques, and discuss CDN architectures and how they relate to performance measurement. Finally we’ll walk through real-world CDN performance monitoring implementations used by Amazon CloudFront customers for video delivery.

Speaker: Nathan Dye, Software Development Manager, Amazon Web Services


Compuware unveils 2013 application performance management best practices and trends

Compuware has just published the first volume of its new Application Performance Management (APM) Best Practices collection titled: “2013 APM State-of-the-Art and Trends.” Written by Compuware’s APM Center of Excellence thought leaders and experts, the collection features 10 articles on the technology topics shaping APM in 2013.

For organisations that depend on high-performance applications, the collection provides an easy-to-absorb overview of the evolution of APM technology, best practices, methodology and techniques to help manage and optimize application performance. Download the APM Best Practices collection here.

The APM Best Practices: 2013 APM State-of-the-Art and Trends collection helps IT professionals and business stakeholders keep pace with these changes and learn how application performance techniques will develop over the new year. The collection not only explores APM technology but also examines the related business implications and provides recommendations for how best to leverage APM.

Topics covered in this collection include:

  • managing application complexity across the edge and cloud;
  • top 10 requirements for creating an APM culture;
  • quantifying the financial impact of poor user experience;
  • sorting myth from reality in real-user monitoring; and
  • lessons learned from real-world big data implementations.

To download the APM Best Practices collection, click here.

“This collection is a source of knowledge, providing valuable information about application performance for all business and technical stakeholders,” said Andreas Grabner, Leader of the Compuware APM Center of Excellence. “IT professionals can use the collection to help implement leading APM practices in their organizations and to set direction for proactive performance improvements. Organisations not currently using APM can discover how other companies are leveraging APM to solve business and technology problems, and how these solutions might apply to their own situations.”

More volumes of the APM Best Practices collection, covering additional topics, will become available throughout the year.

With more than 4,000 APM customers worldwide, Compuware is recognised as a leader in the “Magic Quadrant for Application Performance Monitoring” report. To read more about Compuware’s leadership in the APM market, click here.


It takes more than a tool! Swarovski’s 10 requirements for creating an APM culture

By Andreas Grabner at blog.dynatrace.com

Swarovski, the world’s leading producer of cut crystal, relies on its eCommerce store as much as any other company in the highly competitive eCommerce environment. Swarovski’s story is no different from others in this space: they started with “Let’s build a website to sell our products online” a couple of years ago and quickly progressed to “We sell to 60 million annual visitors across 23 countries in 6 languages”. There were bumps along the road, and they realized that it takes more than just a bunch of servers and tools to keep the site running.

Why APM, and why is a tool alone not enough?

Swarovski relies on Intershop’s eCommerce platform and faced several challenges as they rapidly grew. Their challenges required them to apply Application Performance Management (APM) practices to ensure they could fulfill the business requirements to keep pace with customer growth while maintaining an excellent user experience. The most insightful comment I heard was from René Neubacher, Senior eBusiness Technology Consultant at Swarovski: “APM is not just about software. APM is a culture, a mindset and a set of business processes.  APM software supports that.”

René recently discussed their journey to APM: what their initial problems were, and what requirements they ended up with for APM and for the tools needed to support their APM strategy. They have since reached the next level of maturity by establishing a Performance Center of Excellence, which allows them to tackle application performance proactively throughout the organization instead of reactively putting out fires in production.

This blog post describes the challenges they faced, the questions that arose and the new generation APM requirements that paved the way forward in their performance journey:

The Challenge!

Swarovski had traditional system monitoring in place on all systems across their delivery chain, including web servers, application servers, SAP, database servers, external systems and the network. Knowing that each individual component is up and running 99.99% of the time is great, but no longer sufficient. How might these individual component outages impact the user experience of their online shoppers? WHO is actually responsible for the end user experience, and HOW should you monitor the complete delivery chain and not just the individual components? These and other questions came up as the eCommerce site attracted more customers, which was quickly followed by more complaints about user experience:

APM includes getting a holistic view of the complete delivery chain and requires someone to be responsible for end user experience.

Questions that had no answers

In addition to “Who is responsible in case users complain?” the other questions that needed to be urgently addressed included:

  • How often is the service desk called before IT knows that there is a problem?
  • How much time is spent in searching for system errors versus building new features?
  • Do we have a process to find the root-cause when a customer reports a problem?
  • How do we visualize our services from the customer’s point of view?
  • How much revenue, brand image and productivity are at risk or lost while IT is searching for the problem?
  • What to do when someone says “it’s slow”?

The 10 Requirements

These unanswered questions triggered the need to move away from traditional system monitoring and develop the requirements for new generation APM and user experience management.

#1: Support State-of-the-Art Architecture

Based on their current system architecture, it was clear that Swarovski needed an approach that would work in their environment now and in the future. The rise of more interactive Web 2.0 and mobile applications had to be factored in, to allow monitoring of end users on many different devices, regardless of whether they use a web application or a native mobile application as their access point.
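A minimal sketch of the underlying idea: follow one user action across tiers by propagating a correlation ID. The header name and tier functions below are hypothetical, and commercial APM agents inject and read such identifiers automatically rather than requiring code changes.

    import uuid

    TRACE_HEADER = "X-Trace-Id"   # hypothetical header name

    def browser_request() -> dict:
        # The entry point (browser or mobile app) mints a trace ID for the user action.
        return {TRACE_HEADER: uuid.uuid4().hex}

    def app_server(headers: dict) -> None:
        trace_id = headers[TRACE_HEADER]
        print(f"[app] trace={trace_id} rendering product page")
        database(trace_id)

    def database(trace_id: str) -> None:
        # The same ID is logged at the data tier, so one slow click can be
        # followed from the browser all the way down to the SQL statement.
        print(f"[db ] trace={trace_id} SELECT ... FROM products")

    app_server(browser_request())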

Transactions need to be followed from the browser all the way back to the database, and it is important to support distributed transactions. This approach also helps to spot architectural and deployment problems immediately.

#2: 100% transactions and clicks – No Averages

Based on their experience, Swarovski knew that looking at average values or sampled data would not help when customers complained about bad performance. Responding to a customer complaint with “Our average user has no problem right now – sorry for your inconvenience” is not what you want your helpdesk engineers to use as a standard phrase. Averages and sampling also hide the real problems in your system. Check out the blog post Why Averages Suck by Michael Kopp for more detail.
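The point is easy to demonstrate with made-up numbers: a handful of very slow requests barely move the average but stand out immediately in the high percentiles and in the individual transactions.

    from statistics import mean, quantiles

    # Hypothetical response times in seconds: most requests are fast,
    # but a few users are hitting a 12-second problem.
    response_times = [0.4] * 95 + [12.0] * 5

    cuts = quantiles(response_times, n=100)   # 99 percentile cut points
    print(f"average: {mean(response_times):.2f}s")                              # ~0.98s, looks fine
    print(f"p50: {cuts[49]:.2f}s  p95: {cuts[94]:.2f}s  p99: {cuts[98]:.2f}s")  # the outliers are obvious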

Measuring end user performance of every customer interaction allows for quick identification of regional problems with CDNs, 3rd Parties or Latency.

Having 100% of user interactions and transactions available makes it easy to identify the root cause for individual users.

#3: Business Visibility

As the business had a growing interest in the success of the eCommerce platform, IT had to demonstrate what it took to fulfill the business requirements and how those requirements are impacted by investment, or the lack of it, in the application delivery chain.
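One simple way to make that impact visible is to correlate daily response time with daily orders. The figures below are invented for illustration, and statistics.correlation (Python 3.10+) returns the Pearson coefficient.

    from statistics import correlation

    # Hypothetical daily figures: as page response time (seconds) creeps up, orders drop.
    response_time = [1.1, 1.2, 1.3, 1.9, 2.4, 2.8, 1.4, 1.2]
    orders        = [930, 910, 905, 820, 760, 700, 890, 915]

    r = correlation(response_time, orders)   # Pearson's r, requires Python 3.10+
    print(f"correlation between response time and orders: {r:.2f}")   # strongly negative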

Correlating the number of Visits with Performance on incoming Orders illustrates the measurable impact of performance on revenue and what it takes to support business requirements.

#4: Impact of 3rd Parties and CDNs

It was important to not only track transactions involving their own Data Center but ALL user interactions with their web site – even those delivered through CDNs or 3rd parties. All of these interactions make up the user experience and therefore ALL of it needs to be analyzed.

Seeing the actual load impact of 3rd party components or content delivered from CDNs enables IT to pinpoint user experience problems that originate outside their own data center.

#5: Across the lifecycle – supporting collaboration and tearing down silos

The APM initiative was started because Swarovski was reacting to problems happening in production. Fixing these problems in production is only the first step. Their ultimate goal is to become proactive by finding and fixing problems in development or testing, before they spill over into production. Instead of relying on different sets of tools with different capabilities, the requirement is to use a single solution designed to be used across the application lifecycle (developer workstation, continuous integration, testing, staging and production). A single solution makes it easier to share application performance data between lifecycle stages, allowing individuals not only to look at data from other stages but also to compare data and verify the impact and behavior of code changes between version updates.
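One way to make this concrete in continuous integration is a test that fails the build when a key transaction regresses against an agreed budget. The sketch below is a generic illustration rather than a feature of any particular product, and the transaction, budget and timings are placeholders.

    import time
    import unittest

    LATENCY_BUDGET_S = 0.200   # hypothetical budget for the checkout transaction

    def checkout():
        """Placeholder for the transaction under test."""
        time.sleep(0.05)

    class CheckoutPerformanceTest(unittest.TestCase):
        def test_checkout_stays_within_budget(self):
            # Take the best of a few runs to reduce noise on shared CI hardware.
            durations = []
            for _ in range(5):
                start = time.perf_counter()
                checkout()
                durations.append(time.perf_counter() - start)
            self.assertLess(min(durations), LATENCY_BUDGET_S,
                            "checkout transaction regressed beyond its latency budget")

    if __name__ == "__main__":
        unittest.main()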

Continuously catching regressions in Development by analyzing unit and performance tests allows application teams to become more proactive.

Pinpointing integration and scalability issues, continuously, in acceptance and load testing makes testing more efficient and prevents problems from reaching production.

#6: Down to the source code

In order to speed up problem resolution, Swarovski’s operations and development teams require as much code-level insight as possible, not only for their own engineers who are extending the Intershop eCommerce platform but also for Intershop to improve their product. Knowing which part of the application code is not performing well, with which input parameters or under which specific load on the system, eliminates tedious reproduction of the problem. The requirement is to lower the Mean Time To Repair (MTTR) from as much as several days down to only a couple of hours.

The SAP Connector turned out to have a performance problem. This method-level detailed information was captured without changing any code.

#7: Zero/Acceptable overhead

“Who are we kidding? There is nothing like zero overhead, especially when you need 100% coverage!” Those were René’s words when he explained this requirement. And he is right: once you start collecting information from a production system, you add a certain amount of overhead. A better term would be “imperceptible overhead”: overhead so small that you don’t notice it.

What is the exact number? It depends on your business and your users. The number should be worked out from the impact on the end user experience, rather than additional CPU, memory or network bandwidth required in the data center. Swarovski knew they had to achieve less than 2% overhead on page load times in production, as anything more would have hurt their business; and that’s what they achieved.
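Expressed as a calculation, the budget simply compares page load times measured with and without the agent. The numbers below are illustrative only.

    from statistics import median

    # Hypothetical page load times (seconds) for the same page, with and without the agent.
    without_agent = [1.82, 1.90, 1.87, 1.85, 1.91]
    with_agent    = [1.85, 1.93, 1.90, 1.88, 1.94]

    overhead = (median(with_agent) - median(without_agent)) / median(without_agent)
    print(f"measured overhead: {overhead:.1%}")   # needs to stay below the 2% budget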

#8: Centralized data collection and administration

Running a distributed eCommerce application that may be extended to additional geographic locations requires an APM system with centralized data collection and administration. Collecting different types of performance information from different systems, servers or even data centers is not feasible: it would require either multiple analysis tools or transforming the data into a single format before it could be analyzed properly.

Instead, Swarovski required a single, unified APM system. Central administration is equally important, as it removes the need to rely on remote IT administrators for changes to the monitored system, for example simple tasks such as changing the level of captured data or upgrading to a new version.

Storing and accessing performance data in a single, centralized repository enables fast, powerful analytics and visualization. For example, system metrics such as CPU utilization can be correlated with end-user response time or database execution time, all displayed on a single dashboard.

#9: Auto-Adapting Instrumentation without digging through code

As the majority of the application code is not developed in-house but provided by Intershop, it is mandatory to get insight into the application without doing any manual code changes. The APM system must auto-adapt to changes so that no manual configuration change is necessary when a new version of the application is deployed.

This means Swarovski can focus on making their applications contribute positively to business outcomes, rather than spending time maintaining IT systems.

#10: Ability to extend

Their application lives in an ever-growing, ever-changing IT environment. What is deployed on physical boxes today might be moved to virtualized environments, or even into a public cloud, tomorrow.

Whatever the extension may be, the APM solution must adapt to these changes and be extensible enough to consume new types of data sources, e.g., performance metrics from Amazon cloud services, VMware, Cassandra or other big data solutions, or even legacy mainframe applications, and then bring these metrics into the centralized data repository to provide new insights into the application’s performance.

Extending the application monitoring capabilities to Amazon EC2, Microsoft Windows Azure, a public or private cloud enables the analysis of the performance impact of these virtualized environments on end user experience.

The Solution and the Way Forward

Needless to say, Swarovski took the first step by implementing APM as a new process and mindset in their organization. They are now in the next phase, implementing a Performance Center of Excellence, which allows them to move from reactive performance troubleshooting to proactive performance prevention.

Stay tuned for more blog posts on the Performance Center of Excellence and how you can build one in your own organization. The key message is that it is not just about using a bunch of tools; it is about living and breathing performance throughout the organization. If you are interested in this, check out the blogs by Steve Wilson: Proactive vs Reactive: How to prevent problems instead of fixing them faster and Performance in Development is the Chief Cornerstone.


OPNET Technologies, Inc acquired by Riverbed Technology

Riverbed Technology, the performance company, and OPNET Technologies, Inc., a leading provider of solutions for application and network performance management, today announced that Riverbed® has entered into a definitive agreement to acquire OPNET for $43 per share in cash and stock, representing an equity value of $1 billion and an enterprise value of $921 million.

The acquisition will enable Riverbed to extend its network performance management (NPM) business into the multi-billion dollar application performance management (APM) market. The combination of Cascade® and OPNET will create a new force in the converged market for NPM and APM, with over $250 million in annualized revenue.

Networks and applications are required to work together to deliver the performance business users demand. The addition of OPNET’s broad-based family of APM products enhances Riverbed’s already strong position in the NPM market. The resulting combination is a product line with unparalleled visibility and insight into application and network performance. This acquisition enables Riverbed to provide customers with a unique integrated solution that not only monitors network and application performance, but also accelerates it.

OPNET has been recognised as a leader in the APM market: www.opnet.com/gartner-magic-quadrant-apm/.

“The addition of OPNET establishes Riverbed as the clear leader in the high-growth and converging application and network performance management markets,” said Jerry Kennelly, Chairman and CEO at Riverbed. “This acquisition also transforms Riverbed into a billion dollar revenue company.”

“Riverbed and OPNET have natural synergies,” said Marc Cohen, OPNET’s Chairman and CEO. “Riverbed’s leadership in accelerating business technology combined with OPNET’s industry-leading suite of APM products provides customers with a single solution for monitoring, troubleshooting and actually fixing the application and network performance problems challenging them today.”

OPNET will be combined with Riverbed’s Cascade business unit and is expected to be fully integrated by mid-2013.


Compuware unveils Outage Analyzer, a new generation performance analytics solution that raises the intelligence of SaaS APM

Tracks cloud and third-party web service outages with instant notification of cause and impact

Compuware, the technology performance company, today announced a new generation performance analytics solution that raises the intelligence of software-as-a-service (SaaS) application performance management (APM).

 

Outage Analyzer provides real-time visualizations and alerts of outages in third-party web services that are mission critical to web, mobile and cloud applications around the globe. Compuware is providing this new service free of charge. Check out Outage Analyzer here.

Utilizing cutting-edge big data technologies and a proprietary anomaly detection engine, Outage Analyzer correlates more than eight billion data points per day. This data is collected from the Compuware Gomez Performance Monitoring Network of more than 150,000 test locations and delivers information on specific outages including the scope, duration and probable cause of the event — all visualized in real-time.
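Compuware’s detection engine is proprietary, but the general idea of flagging outages in a stream of measurements can be sketched with a trailing baseline and a deviation threshold. The availability series, window and threshold below are invented for illustration.

    from statistics import mean, stdev

    def detect_anomalies(series, window=20, threshold=4.0):
        """Flag points that deviate strongly from the trailing window's baseline."""
        anomalies = []
        for i in range(window, len(series)):
            baseline = series[i - window:i]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and abs(series[i] - mu) / sigma > threshold:
                anomalies.append(i)
        return anomalies

    # Hypothetical availability measurements (% of synthetic tests succeeding):
    normal = [99.5, 99.4, 99.6, 99.5, 99.3, 99.5, 99.6, 99.4, 99.5, 99.5]
    availability = normal * 4 + [72.0, 65.0, 70.0] + [99.5] * 10
    print(detect_anomalies(availability))   # flags the onset of the simulated outage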

“Compuware’s new Outage Analyzer service is a primary example of the emerging industry trend toward applying big data analytics technologies to help understand and resolve application performance and availability issues in near real-time,” said Tim Grieser, Program VP, Enterprise System Management Software at IDC. “Outage Analyzer’s ability to analyze and visualize large masses of data, with automated anomaly detection, can help IT and business users better understand the sources and causes of outages in third-party web services.”

Cloud and third-party web services allow organizations to rapidly deliver a rich user experience, but also expose web and mobile sites to degraded performance—or even a total outage—should any of those components fail. Research shows that the typical website has more than ten separate hosts contributing to a single transaction, many of which come from third-party cloud services such as social media, ecommerce platforms, web analytics, ad servers and content delivery networks.

Outage Analyzer addresses this complexity with the following capabilities:

  • Incident Visualization: Issues with third-party services are automatically visualized on Outage Analyzer’s global map view. This view displays information on the current status, impact—based on severity and geography—and duration, along with the certainty and probable cause of the outage. Outage Analyzer also provides a timeline view that shows the spread and escalation of the outage. The timeline has a playback feature to replay the outage and review its impact over time.
  • Incident Filtering and Searching: With Outage Analyzer, users can automatically view the most recent outages, filtered by severity of impact, or search for outages in specific IPs, IP ranges or service domains. This allows users to find the outages in services that are potentially impacting their own applications.
  • Alerting: Users can sign up to automatically receive alerts—RSS and Twitter feeds—and can specify the exact types of incidents to be alerted on, such as the popularity of the third-party web service provider, the certainty of an outage and the geographical region impacted. Alerts contain links to the global map view and details of the outage. This provides an early-warning system for potential problems.
  • Performance Analytics Big Data Platform: Utilizing cutting-edge big data technologies in the cloud, including Flume and Hadoop, Outage Analyzer collects live data from the entire Gomez customer base and Gomez Benchmark tests, processing more than eight billion data points per day. The processing from raw data to visualization and alerting on an outage all happens within minutes, making the outage data timely and actionable.
  • Anomaly Detection Algorithms: At the heart of Outage Analyzer’s big data platform is a proprietary anomaly detection engine that automatically identifies availability issues with third-party web services that are impacting performance of the web across the globe. Outage Analyzer then correlates the outage data, identifies the source of the problem, calculates the impact and lists the probable causes — all in real-time.

“Since Outage Analyzer has been up and running, we’ve seen an average of about 200 third-party web service outages a day,” said Steve Tack, Vice President of Product Management for Compuware’s APM business unit. “Outage Analyzer is just the beginning. Our big data platform, proprietary correlation and anomaly detection algorithms, and intuitive visualizations of issues with cloud and third-party web services are key building blocks to delivering a new generation of answer-centric APM.”

Outage Analyzer harnesses the collective intelligence of the Compuware Gomez Network, the largest and most active APM SaaS platform in the world. Now eight billion measurements a day across the global Internet can be harnessed by any organization serious about delivering exceptional web application performance. Determining whether an application performance issue is the fault of an organization’s code, or the fault of a third-party service has never been easier.

Compuware APM® is the industry’s leading solution for optimizing the performance of web, non-web, mobile, streaming and cloud applications. Driven by end-user experience, Compuware APM provides the market’s only unified APM coverage across the entire application delivery chain—from the edge of the internet through the cloud to the datacenter. Compuware APM helps customers deliver proactive problem resolution for greater customer satisfaction, accelerate time-to-market for new application functionality and reduce application management costs through smarter analytics and advanced APM automation.

With more than 4,000 APM customers worldwide, Compuware is recognized as a leader in the “Magic Quadrant for Application Performance Monitoring” report.

To read more about Compuware’s leadership in the APM market, click here.


Gartner highlights five things that private cloud is not

Ongoing hype around private cloud computing is creating misperceptions about private cloud, according to Gartner, Inc. To help reduce the hype and identify the real value of private cloud computing for IT leaders, Gartner explains five common misconceptions about private cloud.

“The growth of private cloud computing is being driven by the rapid penetration of virtualization and virtualization management, the growth of cloud computing offerings and pressure to deliver IT faster and cheaper,” said Tom Bittman, vice president and distinguished analyst at Gartner. “However, in the rush to respond to these pressures, IT organizations need to be careful to avoid the hype, and, instead, should focus on a private cloud computing effort that makes the most business sense.”

The five misconceptions about private cloud and the corresponding realities are:

1. Private Cloud Is Not Virtualization

Server and infrastructure virtualization are important foundations for private cloud computing. However, virtualization and virtualization management are not, by themselves, private cloud computing. Virtualization makes it easier to dynamically and granularly pool and reallocate infrastructure resources (servers, desktop, storage, networking, middleware, etc.). However, virtualization can be enabled in many ways, including virtual machines, operating systems (OSs) or middleware containers, robust OSs, storage abstraction software, grid computing software, and horizontal scaling and cluster tools.

Private cloud computing leverages some form of virtualization to create a cloud computing service. Private cloud computing is a form of cloud computing that is used by only one organization, or that ensures that an organization is completely isolated from others.

2. Private Cloud Is Not Just About Cost Reduction

An enterprise can reduce operational costs with a private cloud by eliminating common, rote tasks for standard offerings. A private cloud can reallocate resources more efficiently to meet enterprise requirements, possibly by reducing capital expenses for hardware.

However, private clouds require investment in automation software, and the savings alone might not justify the cost. As such, cost reduction is not the primary benefit of private cloud computing. The benefits of self-service, automation behind the self-service interface and metering tied to usage are primarily agility, speed to market, ability to scale to dynamic demand or to go after short windows of opportunity, and ability for a business unit to experiment.

3. Private Cloud Is Not Necessarily On-Premises

Private cloud computing is defined by privacy, not location, ownership or management responsibility. While the majority of private clouds will be on-premises (based on the evolution of existing virtualization investments), a growing percentage of private clouds will be outsourced and/or off-premises. Third-party private clouds will have a more flexible definition of “privacy.” A third-party private cloud offering might share data center facilities with others, could share equipment over time (from a pool of available resources), and could share resources, but be isolated by a virtual private network (VPN) and everything in between.

4. Private Cloud Is Not Only Infrastructure as a Service (IaaS)

Server virtualization is a major trend and, therefore, a major enabler for private cloud computing. However, private cloud is not limited in any way to IaaS. For example, with development and test offerings, enabling higher-level Platform as a Service (PaaS) offerings for developers makes more sense than a simple virtual machine provisioning service.

Today, the fastest growing segment of cloud computing is IaaS. However, IaaS only provides the lowest-level data center resources in an easy-to-consume way, and doesn’t fundamentally change how IT is done. Developers will use PaaS to create new applications designed to be cloud-aware, producing fundamentally new services that could be very differentiating, compared with old applications.

5. Private Cloud Is Not Always Going to Be Private

In many ways, Gartner analysts said that private cloud is a stopgap measure. Over time, public cloud services will mature, improving service levels, security and compliance management. New public cloud services targeting specific requirements will emerge. Some private clouds will be moved completely to the public cloud. However, the majority of private cloud services will evolve to enable hybrid cloud computing, expanding the effective capacity of a private cloud to leverage public cloud services and third-party resources.

“By starting with a private cloud, IT is positioning itself as the broker of all services for the enterprise, whether they are private, public, hybrid or traditional,” Mr. Bittman said. “A private cloud that evolves to hybrid or even public could retain ownership of the self-service, and, therefore, the customer and the interface. This is a part of the vision for the future of IT that we call ‘hybrid IT.'”

Additional information is available in the Gartner report: Five Things That Private Cloud Is Not