Best Practices in Performance Testing for CX Systems


At Cyara Xchange 2018 in March, I had the great privilege of hearing a presentation by two experts on performance testing of CX applications: Pete Dhadwar of Royal Bank of Canada and Chandra Golla of Cyara. In this blog post, I will share a few of their pearls of wisdom.


What is a performance test and why is it important?

Performance testing is done at the hand-off from the development team to the operations team. Thorough performance testing lets you catch issues before your customers find them for you. For example, Thirty-One Gifts, a Cyara customer, used performance testing to discover a configuration issue that was limiting capacity and returning busy signals. Performance tests assure that:

  • The environment is resilient in business-as-usual and disaster recovery situations
  • Scalability meets requirements
  • Integrations with new products or functionality work as expected

Testing for these things is important because, as we found out in a survey Cyara did with Frost & Sullivan, some of the top issues impacting operational customer experience include poor load handling and poorly integrated systems — both of which can be identified by performance testing.

There are different types of performance testing, and each is used for different purposes:   

  • Graduated testing runs a series of proactive benchmark tests at increasing traffic loads, improving testing efficiency by finding issues early, at lower loads.
  • Compliance load testing ensures the platform performs at the ratings specified by the vendor or the business.
  • Stress load testing executes tests at ratings beyond those specified by the vendor or the business to observe how the system behaves at its limits.
  • Avalanche testing sends a high volume of calls over a short period of time, and can help you prepare for events such as a service outage or the response to a new product release.
  • Disaster recovery testing deliberately fails components of the platform to invoke disaster recovery procedures and ensure they perform as expected.
  • Soak testing runs maximum concurrent interactions over an extended period of time to verify the system's stability and performance characteristics.
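To make the graduated approach concrete, here is a minimal Python sketch of the idea: benchmark at increasing load levels and stop at the first level that misses the success objective. Everything in it is hypothetical; `place_call`, the capacity value, and the 95% threshold are illustrative assumptions, not the behavior of any real platform or of Cyara's tooling.

```python
import random

# Hypothetical sketch of a graduated load test: run benchmarks at
# increasing loads and stop at the first level that misses the objective.
# place_call() is a stand-in for a real test call against the platform;
# the capacity and threshold values are illustrative assumptions.

SUCCESS_THRESHOLD = 0.95   # assumed objective: at least 95% of calls succeed

def place_call(concurrent_calls, capacity=500):
    """Simulated call: more likely to fail the further load exceeds capacity."""
    overload = max(0.0, (concurrent_calls - capacity) / capacity)
    return random.random() > overload

def graduated_test(levels, calls_per_level=200):
    """Benchmark each load level in order; stop at the first failing level."""
    results = {}
    for level in levels:
        successes = sum(place_call(level) for _ in range(calls_per_level))
        rate = successes / calls_per_level
        results[level] = rate
        if rate < SUCCESS_THRESHOLD:
            break  # issue found early, at the lowest load that exposes it
    return results

report = graduated_test([100, 250, 500, 1000, 2000])
for level, rate in report.items():
    print(f"{level} concurrent calls: {rate:.1%} success")
```

The payoff of ramping in steps is visible in the loop: if the lowest benchmark already fails, you have found the issue at a fraction of the cost of a full-load run.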

What are the challenges associated with performance testing?

One of the really interesting points the speakers made was about a common challenge they face: simply understanding what constitutes a successful performance test. By way of example, the speakers asked the audience if these two scenarios represented a successful performance test:

  • The test objectives were not met because chat agents only received a single chat session under concurrent chat volume
  • The test objectives were not met because there was an issue identified at the carrier level

While both of these tests “failed” because the test objectives were not met, they were in fact both successes! These tests identified issues before customers did, allowing the team to fix them before rolling out to production.

Other concerns include testing in production, impact on business operations, and skewing of operational metrics and reporting. All of these can be managed with proper planning and communication. Strategies include diverting call volume, using test IDs, testing during off hours, monitoring overflow in live queues, and keeping an open dialog with the business teams. One specific strategy they suggested was to run testing on Thursday evenings, after business hours, rather than on Fridays; that way, if issues are identified, a fully staffed team is available the next day to address them.
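One of those strategies, test IDs, can be sketched in a few lines: tag test traffic with a known identifier so it can be filtered out of operational reporting. The record fields and the `PERFTEST-` prefix below are hypothetical conventions, chosen only for illustration, not a real reporting schema.

```python
# Hypothetical sketch: tag test calls with a known identifier so they can
# be excluded from operational metrics. Record fields and the prefix are
# illustrative assumptions, not a real reporting schema.

TEST_ID_PREFIX = "PERFTEST-"  # assumed convention for tagging test traffic

def operational_metrics(call_records):
    """Average handle time over live calls only, with test calls filtered out."""
    live = [r for r in call_records
            if not r["caller_id"].startswith(TEST_ID_PREFIX)]
    if not live:
        return {"calls": 0, "avg_handle_time": 0.0}
    return {
        "calls": len(live),
        "avg_handle_time": sum(r["handle_time"] for r in live) / len(live),
    }

records = [
    {"caller_id": "PERFTEST-0001", "handle_time": 1.0},  # test call: excluded
    {"caller_id": "555-0142", "handle_time": 180.0},     # live call
    {"caller_id": "555-0199", "handle_time": 240.0},     # live call
]
print(operational_metrics(records))  # {'calls': 2, 'avg_handle_time': 210.0}
```

Filtering at reporting time like this is what lets you test in production without skewing the metrics the business teams watch.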

What does a performance test look like? 

The speakers shared a four-step approach to planning and executing a performance test:

  • Idea phase, when you define the objectives and the technical requirements, such as the number of concurrent calls, calls per second, and the skills to be tested. You also identify the stakeholders and define success criteria.
  • Scope/planning phase, when you define and document a clear, concrete timeline, including a plan for retries if the test fails. This plan must cover a wide array of details: readiness of the environment, call distribution, the numbers you will dial, required resources, how to handle the influx of calls, carrier/gateway preparation, and monitoring during the test.
  • Execution phase, when you actually run the test, including shakeout testing, load and performance testing, and capturing and archiving the logs.
  • Analysis phase, when you review the data captured during testing to identify root causes and define an action plan for remediation.
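The idea and analysis phases bookend each other: objectives and success criteria are defined up front, then checked against results at the end, with the earlier point in mind that a test that misses its objectives but surfaces issues is still a success. Here is a minimal sketch of that pairing; the class names, metric names, and thresholds are all assumptions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: objectives from the idea phase captured as data,
# then evaluated in the analysis phase. All names and values are assumed.

@dataclass
class TestPlan:
    concurrent_calls: int
    calls_per_second: float
    success_criteria: dict            # metric name -> required minimum

@dataclass
class TestResult:
    metrics: dict                     # metric name -> observed value
    issues_found: list = field(default_factory=list)

def analyze(plan: TestPlan, result: TestResult):
    """Analysis phase: check objectives; finding issues still counts as a win."""
    missed = {m: v for m, v in plan.success_criteria.items()
              if result.metrics.get(m, 0.0) < v}
    objectives_met = not missed
    # A "failed" test that surfaces issues before customers do is a success.
    valuable = objectives_met or bool(result.issues_found)
    return {"objectives_met": objectives_met, "missed": missed,
            "valuable": valuable}

plan = TestPlan(concurrent_calls=1000, calls_per_second=25,
                success_criteria={"answer_rate": 0.98})
result = TestResult(metrics={"answer_rate": 0.91},
                    issues_found=["carrier-level capacity issue"])
print(analyze(plan, result))
```

Writing the success criteria down as data before the execution phase is what makes the analysis phase mechanical rather than a debate about what "passing" meant.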


We all know that your CX is more important than ever today. Performance testing provides confidence in the resiliency of your environment and ensures that you eliminate issues and errors before your customers find them. Proper planning and good communication are keys to your success. And remember: even if you do not meet the test objectives but uncover issues, THAT IS A SUCCESS!

Thank you Pete and Chandra for sharing your expertise. I walked away from this session with a much better understanding of CX performance testing, and I hope my summary of their presentation is paying that forward.

And of course, I have more information to share for those interested in learning more about how Cyara can help you with your CX performance testing. Click here to learn more about Cyara Cruncher, or click here to read how Thirty-One Gifts used Cruncher to improve their CX. Or even better, contact us to learn how Cyara can help you assure your CX.