Performance Engineering

Top 5 Obstacles to Implementing Performance Testing (And How to Overcome Them)

by
Zach Sergio

July 25, 2024


There’s little debate about the importance of reliability and scalability for applications, especially for apps responsible for user engagement and revenue generation. However, while the importance of performance is clear, knowing how to start scaling performance engineering can be much less obvious.

In this article, we’ll review the top 5 obstacles teams face when first building a performance engineering practice for their business. Along the way, we’ll provide practical advice that you can incorporate into your strategy.

1) Limited Skillset & Expertise

The first barrier comes as little surprise: building a performance practice will be an uphill battle without the right personnel in place. While cross-training current staff on frameworks like JMeter may seem straightforward, it’s usually not practical to ask a functional automation engineer to start owning nonfunctional performance tests. Cross-training and upskilling certainly have their time and place, but it’s important to be mindful of the potential implications.

Short-term Drawbacks of Cross-Training

Hypothetically, consider a functional automation engineer who’s been tasked with taking on more responsibility for performance engineering. This requires an initial investment of time to cross-train on a new framework, which can be substantial for tools like JMeter. From there, activities like building, executing, and maintaining performance tests will consistently pull them away from their day-to-day responsibilities.
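
To make that workload concrete, here’s a minimal load-test sketch in Python using only the standard library. The endpoint, user count, and request volume are hypothetical; a real test plan would also model ramp-up, think times, and realistic user journeys.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical target and workload shape.
TARGET_URL = "https://example.com/api/login"
VIRTUAL_USERS = 25
REQUESTS_PER_USER = 20

def one_user(_):
    """Simulate one virtual user issuing sequential requests, recording latency."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        per_user = pool.map(one_user, range(VIRTUAL_USERS))
    latencies = sorted(l for user in per_user for l in user)
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"median: {statistics.median(latencies):.3f}s  p95: {p95:.3f}s")
```

Even this toy version leaves out correlation of dynamic data, assertions, and reporting, which is exactly the ongoing maintenance work that pulls a cross-trained engineer away from their core role.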

Even if a highly skilled automation engineer hammers out a set of performance test cases, the work doesn’t stop there. Beyond scripting, performance engineers also need expertise in areas like system architecture, resource management, and performance metrics. Before we examine the other key roles a performance engineer plays, here are a few suggestions for addressing the talent gap.

How to Bridge the Skill Gap

If you don’t have in-house performance expertise readily available, you have several options:

  • Full-Time Hires: Hiring a full-time performance engineer is one place to start, assuming you’re ready to address the other four barriers in this article.
  • Cross-Training on Low-Code Frameworks: Starting with JMeter can be more trouble than it’s worth. Low-code performance frameworks like NeoLoad have smaller learning curves and are much easier to maintain, making it far more practical to cross-train current staff.
  • Managed Services: Specialized firms, like Perform, have consultants that can work alongside your existing team as part of a managed service. This makes it easy to scale resources up and down if you’re not ready for full-time employees.
  • Nearshore or Offshore Hires: If you opt for the nearshore or offshore model, be sure to work with a partner that also has domain expertise in performance.

Sidenote: If you’d like to connect with one of our architects to discuss performance strategy, you can book a time here.

2) Difficulty Defining Performance Requirements & Objectives

As we discuss in the webinar, “5 Keys to High-Performance Mobile Apps,” performance engineers and architects impact various stages outside the QA department. They can make a notable impact in the early design and development phases, where functional requirements are often the primary focus.

Bridging Between Business & Technical Stakeholders

In the design and development phases, performance architects can serve as a ‘translator’ between business and technical stakeholders during discussions about performance requirements. For instance, a business stakeholder might express a need for a highly responsive experience when creating a new account, logging in, or checking out. Without a performance architect’s input, a technical stakeholder might deem a 3-second load time acceptable when the business actually expects 1-2 seconds.

Addressing Missed Requirements

If these nonfunctional requirements are missed, it starts a domino effect that first impacts QA. If QA relies solely on visual evidence for functional tests, they may miss nonfunctional performance issues altogether. As a new build progresses through the delivery pipeline into production, the app becomes more susceptible to performance degradations. Crashes and even downtime become more common when collections of APIs aren’t held to baseline standards.
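
One way to keep those requirements from being missed is to encode them as automated baseline checks that can gate a build. Here’s a sketch of the idea; the endpoints and thresholds below are illustrative stand-ins for whatever the performance architect and business stakeholders actually agree on.

```python
import time
from urllib.request import urlopen

# Illustrative baselines: endpoint -> maximum acceptable p95 response time (seconds).
BASELINES = {
    "https://example.com/api/accounts": 2.0,  # account creation
    "https://example.com/api/login": 1.5,     # login
    "https://example.com/api/checkout": 2.0,  # checkout
}

def p95_latency(url, samples=50):
    """Measure the 95th-percentile response time over repeated requests."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        with urlopen(url, timeout=10) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return latencies[int(len(latencies) * 0.95) - 1]

def check_baselines():
    """Flag any endpoint that exceeds its agreed baseline."""
    failures = []
    for url, limit in BASELINES.items():
        observed = p95_latency(url)
        if observed > limit:
            failures.append(f"{url}: p95 {observed:.2f}s exceeds limit {limit:.2f}s")
    assert not failures, "\n".join(failures)
```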

3) Lack of Clarity on Where to Start Performance Testing

When QA departments are first established, they typically start by building the functional test plan. It might seem logical to follow the lead of those functional tests when scripting performance tests, but this approach is counterproductive for a few reasons.

Identifying Performance Priorities

Prioritizing performance tests based on high-priority functional requirements can cause you to miss high-traffic areas of your application that need to handle heavier workloads. Additionally, a team’s repository of functional tests can run to hundreds or thousands of cases, making it impractical to write a corresponding performance test for each.

Practical Approaches

Here are a few ways you can identify areas of the application for performance testing:

  • For Existing Applications with APM in Place: Collaborate with DevOps to review metrics within your application performance monitoring (APM) solution. Tools like Dynatrace and New Relic make it easy to identify which transactions are most heavily used in your application.
  • For Existing Applications without APM in Place: If you don’t have a long history of metrics to review in an APM platform, check your access logs; request counts by endpoint will reveal your highest-traffic transactions (see the sketch after this list).
  • For New Applications: If you’re building a brand-new application, you won’t be able to rely on metrics from production. In this case, performance architects should gather the business’s needs and translate them into specified nonfunctional requirements.
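
For the log-based approach, the tally can be as simple as counting requests per path. Here’s a small Python sketch that assumes a common Apache/Nginx-style access log; adjust the pattern for your own format.

```python
import re
from collections import Counter

# Assumes request lines like: "GET /api/checkout HTTP/1.1" inside each log entry.
REQUEST_RE = re.compile(r'"(?:GET|POST|PUT|DELETE|PATCH) (\S+)')

def top_transactions(log_path, n=10):
    """Count requests per path to surface the highest-traffic endpoints."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                # Drop query strings so /search?q=a and /search?q=b group together.
                counts[match.group(1).split("?")[0]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for path, hits in top_transactions("access.log"):
        print(f"{hits:>8}  {path}")
```

The endpoints at the top of this list are your first candidates for performance test coverage.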

At Perform, we offer 1-3 month ‘Performance Kickstart’ engagements to help teams navigate the planning process. Our architects work alongside client teams to establish use case documents, workload models, and performance requirements.

4) Limitations of Current Tooling

We all know how painful it can be to use the wrong tool for the job. This applies in construction as much as it does in software engineering, and it’s especially true in performance testing.

Open Source Isn’t ‘Free’

Companies initially ‘dipping their toes’ into performance engineering tend to shy away from investing in robust frameworks. Similar to how functional automation teams may start with Appium or Selenium, performance teams often start with JMeter. However, ‘free’ open-source performance tools can actually carry a heavy price tag due to maintenance overhead.

Investing in the Right Tools

While some teams may have the experience and resources needed to scale JMeter, this is rare. More often than not, teams are better served by investing in sophisticated solutions like Tricentis NeoLoad and OctoPerf. Perform is a certified delivery partner for each of these tools.

Balancing Cost and Functionality

When selecting the right tool for a project, cost is always a consideration. However, it’s important not to let the mentality of ‘doing more with less’ take precedence. Factors like the type of application, test environments, tool functionality, scalability, and ease of use should also be weighed.

5) Complacency with Inefficient Cloud Architecture

Some organizations have a mindset where it’s acceptable to waste money on supporting inefficient cloud architecture. This often happens when a mission-critical application receives heavier-than-expected workloads, leading executives to authorize increased cloud spend. While this makes sense in the short term, it’s a ‘band-aid fix’.

The Importance of Cloud Audits

We’re not advocating for underfunding your cloud servers and allowing your app to crash in production. Instead, periodically and systematically review bottlenecks in your cloud infrastructure to see where improvements can be made.
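
For teams on AWS, even a quick utilization pass can surface waste. The sketch below (assuming boto3 is installed and credentials are configured; the 10% threshold and 14-day window are illustrative) flags running EC2 instances with persistently low average CPU as rightsizing candidates. CPU is only one signal; a real audit would also weigh memory, I/O, network, and application-level metrics.

```python
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials are configured

CPU_THRESHOLD = 10.0  # flag instances averaging below this utilization (%)
LOOKBACK_DAYS = 14

def underutilized_instances():
    """List running EC2 instances with persistently low average CPU."""
    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=LOOKBACK_DAYS)
    flagged = []
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                datapoints = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=start,
                    EndTime=end,
                    Period=86400,  # one datapoint per day
                    Statistics=["Average"],
                )["Datapoints"]
                if datapoints:
                    avg = sum(p["Average"] for p in datapoints) / len(datapoints)
                    if avg < CPU_THRESHOLD:
                        flagged.append((instance["InstanceId"], avg))
    return flagged

if __name__ == "__main__":
    for instance_id, avg_cpu in underutilized_instances():
        print(f"{instance_id}: {avg_cpu:.1f}% avg CPU over {LOOKBACK_DAYS} days")
```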

This requires a specialized skillset, which can be challenging to develop in-house. Perform conducts cloud assessments that help clients understand where their bottlenecks and inefficiencies are and provides detailed recommendations for remediation. For example, we helped a large banking institution reclaim over $450k per month in wasted AWS spend simply by optimizing bottlenecks in the app.

You can request more information about Perform’s cloud assessment workshop by clicking here.

Conclusion

Implementing performance testing can be challenging, but overcoming these obstacles is crucial for ensuring your applications' reliability and scalability. By addressing skillset gaps, defining clear performance requirements, prioritizing testing effectively, investing in the right tools, and optimizing cloud architecture, you can build a robust performance engineering practice. Perform is here to help you navigate these challenges and achieve your performance goals.

Ready to get started? Book a consultation with one of our architects today.
