There’s little debate about the importance of reliability and scalability for applications, especially those responsible for user engagement and revenue generation. However, while the importance of performance is clear, knowing how to start scaling performance engineering can be much less obvious.
In this article, we’ll review the top 5 obstacles teams face when first building a performance engineering practice for their business. Along the way, we’ll provide practical advice that you can incorporate into your strategy.
The first barrier comes as little surprise: building a performance practice will be an uphill battle without the right personnel and team in place. While cross-training current staff on frameworks like JMeter may seem straightforward, it’s usually not practical to ask a functional automation engineer to start owning nonfunctional performance tests. Cross-training and upskilling certainly have their time and place, but it’s important to be mindful of the potential implications.
Hypothetically, consider a functional automation engineer who’s been tasked with taking on more responsibility for performance engineering. This will require an initial investment of time to cross-train on a new framework, which can be substantial for a tool like JMeter. From there, activities like building, executing, and maintaining performance tests will consistently pull them away from their day-to-day responsibilities.
Even if a highly skilled automation engineer hammers out a set of performance test cases, the work doesn’t stop there. Beyond scripting, performance engineers also need expertise in areas like system architecture, resource management, and performance metrics. Before we examine the other key roles a performance engineer plays, here are a few suggestions for addressing the talent gap.
If you don’t have in-house performance expertise readily available, you have several options.
Sidenote: If you’d like to connect with one of our architects to discuss performance strategy, you can book a time here.
As we discuss in the webinar, “5 Keys to High-Performance Mobile Apps,” performance engineers and architects contribute well beyond the QA department. They can make a notable impact in the early design and development phases, where functional requirements are often the primary focus.
In the design and development phases, performance architects can serve as a ‘translator’ between business and technical stakeholders during discussions about performance requirements. For instance, a business stakeholder might express a need for a highly responsive experience for creating a new account, logging in, or checking out. Without a performance architect’s input, a technical stakeholder might optimize for a load time of 3 seconds instead of 1-2 seconds.
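One way to keep a requirement like this from slipping through the cracks is to codify it as an automated check. Below is a minimal sketch in Python using the `requests` library; the endpoint URL, 2-second threshold, and sample count are illustrative placeholders rather than prescriptions:

```python
import time
import requests

# Hypothetical endpoint and threshold -- substitute the nonfunctional
# requirements agreed on with your business stakeholders.
LOGIN_URL = "https://example.com/api/login"
MAX_RESPONSE_SECONDS = 2.0

def check_response_time(url: str, threshold: float, samples: int = 10) -> bool:
    """Issue several requests and verify the slowest stays under the threshold."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=threshold * 2)
        timings.append(time.perf_counter() - start)
    worst = max(timings)
    print(f"worst of {samples} samples: {worst:.2f}s (limit {threshold:.1f}s)")
    return worst <= threshold

if __name__ == "__main__":
    assert check_response_time(LOGIN_URL, MAX_RESPONSE_SECONDS), "requirement violated"
```

A check like this can run in the delivery pipeline, so a regression from 1-2 seconds to 3 seconds fails a build instead of surfacing in production.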
If these nonfunctional requirements are missed, it starts a domino effect that first impacts QA. If QA relies solely on visual evidence for functional tests, they may miss nonfunctional performance issues altogether. As a new build progresses through the delivery pipeline into production, the app becomes more susceptible to performance degradations. Crashes and even downtime are more common when collections of APIs aren’t held to baseline standards.
When QA departments are first established, they typically start by building the functional test plan. It might seem logical to follow the lead of functional tests when scripting performance tests, but this approach is counterproductive for a few reasons.
Prioritizing performance tests based on high-priority functional requirements can cause you to miss high-traffic areas of your application that need to handle heavier workloads. Additionally, a team’s repository can contain hundreds or thousands of functional test cases, making it impractical to write a correlating performance test for each.
Fortunately, there are easier ways to identify which areas of the application to target for performance testing; one is sketched below.
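Production traffic data is often the best guide. As a hedged illustration, the following Python sketch tallies request counts per endpoint from a combined-format web server access log; the log path and regex are assumptions to adapt to your own environment:

```python
import re
from collections import Counter

# Illustrative path; point this at your web server's access log.
LOG_PATH = "/var/log/nginx/access.log"

# Matches the request line in combined-format logs, e.g. "GET /api/login HTTP/1.1"
REQUEST_RE = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE) (\S+) HTTP/')

def top_endpoints(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Count requests per URL path and return the n busiest endpoints."""
    counts: Counter[str] = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                # Strip query strings so /search?q=a and /search?q=b group together.
                counts[match.group(1).split("?")[0]] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for endpoint, hits in top_endpoints(LOG_PATH):
        print(f"{hits:>8}  {endpoint}")
```

The busiest endpoints, along with revenue-critical flows like account creation and checkout, are usually the right first candidates for a workload model.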
At Perform, we offer 1-3 month ‘Performance Kickstart’ engagements to help teams navigate the planning process. Our architects work alongside client teams to establish use case documents, workload models, and performance requirements.
We all know how painful it can be to use the wrong tool for the job. This applies in construction as much as it does in software engineering, and it’s especially true in performance testing.
Companies initially ‘dipping their toes’ into performance engineering tend to shy away from investing in robust frameworks. Similar to how functional automation teams may start with Appium or Selenium, performance teams often start with JMeter. However, ‘free’ open-source performance tools can actually carry a heavy price tag due to maintenance overhead.
While some teams may have the experience and resources needed to scale JMeter, this is rare. More often than not, teams are better served by investing in sophisticated solutions like Tricentis NeoLoad and OctoPerf. Perform is a certified delivery partner for each of these tools.
When selecting the right tool for a project, cost is always a consideration. However, it’s important not to let the mentality of ‘doing more with less’ take precedence. Factors like the type of application, test environments, tool functionality, scalability, and ease of use should also be weighed.
Some organizations have a mindset where it’s acceptable to waste money on supporting inefficient cloud architecture. This often happens when a mission-critical application receives heavier-than-expected workloads, leading executives to authorize increased cloud spend. While this makes sense in the short term, it’s a ‘band-aid fix’.
We’re not advocating for underfunding your cloud servers and allowing your app to crash in production. Instead, periodically and systematically review bottlenecks in your cloud infrastructure to see where improvements can be made.
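What might such a review look like in practice? As one simplified sketch (not a substitute for a full assessment), the Python script below uses boto3 to pull two weeks of average CPU utilization for each EC2 instance in a region and flags low-utilization candidates for rightsizing; the 20% threshold and lookback window are arbitrary assumptions:

```python
from datetime import datetime, timedelta, timezone

import boto3  # assumes AWS credentials are configured in your environment

# Arbitrary assumption: instances averaging under 20% CPU for two weeks
# are worth a closer look for rightsizing or consolidation.
CPU_THRESHOLD_PERCENT = 20.0
LOOKBACK = timedelta(days=14)

def underutilized_instances(region: str = "us-east-1") -> list[tuple[str, str, float]]:
    """Return (instance_id, instance_type, avg_cpu) for low-utilization instances."""
    ec2 = boto3.client("ec2", region_name=region)
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    start = end - LOOKBACK

    flagged = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=start,
                EndTime=end,
                Period=3600,  # hourly datapoints
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < CPU_THRESHOLD_PERCENT:
                flagged.append(
                    (instance["InstanceId"], instance["InstanceType"], avg_cpu)
                )
    return flagged

if __name__ == "__main__":
    for instance_id, instance_type, cpu in underutilized_instances():
        print(f"{instance_id} ({instance_type}): {cpu:.1f}% average CPU")
```

Real assessments also weigh memory, I/O, autoscaling behavior, and application-level metrics, but even a simple utilization report like this often surfaces obvious waste.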
This requires a specialized skillset, which can be challenging to develop in-house. Perform conducts cloud assessments that help clients understand where their bottlenecks and inefficiencies are and provides detailed recommendations for remediation. For example, we helped a large banking institution reclaim over $450k per month in wasted AWS spend simply by optimizing bottlenecks in the app.
You can request more information about Perform’s cloud assessment workshop by clicking here.
Implementing performance testing can be challenging, but overcoming these obstacles is crucial for ensuring your applications' reliability and scalability. By addressing skillset gaps, defining clear performance requirements, prioritizing testing effectively, investing in the right tools, and optimizing cloud architecture, you can build a robust performance engineering practice. Perform is here to help you navigate these challenges and achieve your performance goals.
Ready to get started? Book a consultation with one of our architects today.