CX Assurance Blog

From Planning Ahead to ‘Smoke Tests’ — Top Five Tips for CX Regression Testing

Posted by Matthew Schwarz | Global Head of Customer Delivery

June 15, 2018

If you're a CX leader in the trenches day-to-day, you know that paying careful attention upfront in the design stage can make or break the efficacy of your CX testing, and ultimately the success of your CX. I've captured some best practices to keep in mind, start to finish, based on my own years of experience as a developer, on working side-by-side with many Cyara customers, and on a panel discussion I recently hosted with Cyara customers.


1. Design Your Tests as You Design Your Code

In a typical software development lifecycle, the developers write all the code, and when it's complete, they hand it over to the QA testers. In the world of CX, we don't want to wait until that code is complete. Instead, design your CX tests in parallel with code development; this is test-driven development. Once the business analyst provides a roadmap of what the IVR should do, get your developers and the CX testing team working simultaneously. Then, when the code is complete, you're already set to go: ideally, you click a button and pass at 100%. This level of preparation not only finds bugs in the code, it can also identify 'bugs' in the design, where they are much easier to fix. With two different teams responding to the same business requirements, it can reveal issues in the business logic itself, further improving and bulletproofing your customers' experience.
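To make the idea concrete, here's a minimal sketch of the test-first pattern in plain Python. The menu options, function name, and routing targets are all invented for illustration; a real Cyara test case would of course be built in the platform, not hand-coded like this.

```python
# Hypothetical sketch: the test below is written from the design
# document BEFORE the IVR code exists; route_dtmf is then implemented
# to make it pass. Menu options and names are invented.

def route_dtmf(digit: str) -> str:
    """IVR main-menu routing, implemented to satisfy the pre-written test."""
    menu = {"1": "account_balance", "2": "payment_options"}
    return menu.get(digit, "agent")  # unmapped keys fall back to an agent

def test_main_menu_design():
    # Derived from the business analyst's roadmap, not from the code.
    assert route_dtmf("1") == "account_balance"
    assert route_dtmf("2") == "payment_options"
    # A design question this style of test surfaces early:
    # what should happen on an unmapped key?
    assert route_dtmf("9") == "agent"

test_main_menu_design()
print("design tests pass")
```

Writing `test_main_menu_design` first is exactly where a design 'bug' can surface: if the roadmap never said what an unmapped key should do, the test forces that conversation before any code exists.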

2. Balance Test Data with Real User Experiences

Test data is a consistent issue that all our customers grapple with. In an ideal world, we'd all test with complete, real-world data, but typically customers just don't have that level of control and need to rely on test data. For example, when I worked in the airline industry, we could create reservations, but that data was wiped and restarted every single month, because we used a third-party database shared by all airlines. So we created a mock backend and were able to simulate any data we needed. That's the upside of test data: you have 100% control. You can test your front-end, your routing, and the connections to your back-end, and you can fully automate the process. The downside is that you're working within a dummy database that doesn't fully represent real-life scenarios. So the trick is to develop and maintain a robust test dataset while not losing sight of your actual customer experience. Make sure your test cases are built on what the user experience should be, then find the test data to fit those test cases (rather than the other way around). It's a simple QA best practice, but one that's easy to lose sight of, given the challenges of managing test data.
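The airline-style mock backend might look something like the sketch below. The class, record locators, and passenger names are all hypothetical; the point is the ordering: the test mirrors a real caller journey first, and the data is minted to fit it.

```python
# Hypothetical sketch of a mock backend: the test dataset is fully
# controlled, but each test case is still derived from a real user
# journey ("caller retrieves a reservation"), not from the data itself.

class MockReservationBackend:
    """Stands in for a shared third-party reservation database
    that would otherwise be wiped every month."""

    def __init__(self):
        self._reservations = {}

    def create(self, record_locator: str, passenger: str) -> None:
        self._reservations[record_locator] = {"passenger": passenger}

    def lookup(self, record_locator: str):
        return self._reservations.get(record_locator)

# Journey first, data second: the test mirrors what the caller does...
backend = MockReservationBackend()
backend.create("ABC123", "J. Smith")          # ...and we mint data to fit it.
print(backend.lookup("ABC123")["passenger"])  # J. Smith
print(backend.lookup("ZZZ999"))               # None: the "no booking found" path
```

Because the mock is fully under your control, the unhappy path ("no booking found") is just as easy to exercise as the happy one, which is hard to guarantee with shared production-like data.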

3. Err on the Side of Caution

Your level of caution, and how high you set your thresholds for passing tests, will depend on your industry, customers, and journeys. Many leading global brands lean toward a high standard and a conservative approach. One specific piece of advice, from a financial services company, was to raise speech recognition confidence thresholds to ensure the CX runs exactly as designed. They raised their default confidence score to 95 percent, and locked this into their processes by requiring manager approval to adjust it below 92 percent. You may have a little more wiggle room than this, but consider this example: The prompt should read, "You will not be charged for this order." Instead, it says, "You will be charged for this order." Notice the absence of the key word, "not." The system ranked that at an 88 percent confidence. Seemingly close, and yet significantly off, and something that would trigger a major customer issue.

Another note of caution: don't pass scripts that should fail. This sounds obvious, but it means paying close attention when the Cyara system fails a script, even when the confidence score is high. Another example from the same financial institution was a failed script, despite a prompt with a 96% confidence score. The catch that caused the failure? A $4,000 difference in the amount in a customer's account.
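The conservative-threshold logic can be sketched in a few lines of Python. This is an illustrative stand-in, not Cyara's actual scoring; the function name and normalization are invented, and only the 95 percent threshold and the 88 percent "missing not" example come from the story above.

```python
# Hypothetical sketch of conservative prompt verification: fail a test
# when confidence dips below a strict threshold, and never let a
# high score alone excuse a wording mismatch.

CONFIDENCE_THRESHOLD = 0.95  # the financial-services default from the example

def verify_prompt(expected: str, transcript: str, confidence: float,
                  threshold: float = CONFIDENCE_THRESHOLD) -> bool:
    """Pass only when both the confidence score and the exact
    wording meet the bar."""
    if confidence < threshold:
        return False
    # An 88%-confident match can still drop a critical word like "not",
    # so compare the normalized wording as well.
    return expected.strip().lower() == transcript.strip().lower()

# "You will not be charged" vs. "You will be charged":
# a seemingly close score, but the wrong meaning entirely.
print(verify_prompt("You will not be charged for this order.",
                    "You will be charged for this order.", 0.88))      # False
print(verify_prompt("You will not be charged for this order.",
                    "You will not be charged for this order.", 0.96))  # True
```

Note that both conditions must hold: the 96%-confidence, $4,000-off failure in the story is exactly the case where the score passes the threshold but the content check must still fail the script.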

4. Make Continuous Integration a Part of Your CX Culture

Our most successful customers have a commitment to ongoing, automated testing. There's a culture change that comes along with that: people need to understand how important it is to maintain test scripts and testing, and to lock testing into timelines. We have a model customer in this respect, who is essentially testing their CX constantly. Every single time a developer checks in code, a 'smoke test' kicks off. This is a 20-30 minute test at most, but it runs through a lightweight regression suite to make sure nothing major was broken by the update. If there's an issue, feedback gets to the development team immediately so they can fix it. Then, at the end of the day, they run a larger daily suite that tests everything against one primary testing database; that run takes about six hours. Over the weekend, they run the full gamut of everything they have, testing for 20-36 hours against multiple testing database sets.
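The tiered pattern that customer uses (per-commit smoke, nightly regression, weekend full run) can be summarized as a simple trigger-to-suite mapping. The suite names, case counts, and database labels below are illustrative, not from any real pipeline; only the rough durations come from the example above.

```python
# Hypothetical sketch of the tiered CI suites described above.
# Case counts and database names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Suite:
    name: str
    max_minutes: int                       # time budget for feedback
    databases: list = field(default_factory=lambda: ["primary"])

SUITES = {
    "commit":  Suite("smoke", max_minutes=30),             # every check-in
    "nightly": Suite("daily regression", max_minutes=360), # ~6 hours
    "weekend": Suite("full regression", max_minutes=36 * 60,
                     databases=["primary", "secondary", "archive"]),
}

def pick_suite(trigger: str) -> Suite:
    """Map a CI trigger to the right regression tier; unknown
    triggers fall back to the cheap smoke suite."""
    return SUITES.get(trigger, SUITES["commit"])

print(pick_suite("commit").name)             # smoke
print(pick_suite("weekend").databases)       # ['primary', 'secondary', 'archive']
```

The design point is the feedback budget: the per-commit tier stays under half an hour so developers hear about breakage while the change is still fresh, while the expensive multi-database runs are pushed to windows where nobody is waiting on them.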

5. Implement a Clear Naming Structure

A recurring theme is to organize test cases with a clear naming structure, one that saves you significant time as you modify test cases and helps you quickly identify where a bug is. Doing the work upfront pays dividends in the end. When you modify a block, that change can impact hundreds of test cases, and a clear naming structure vastly simplifies the complex chain reactions that come with those updates. The first part of your test case name (for example, a welcome menu) will always be the same, followed by your next step of options, such as account balance or payment options for a financial institution. If you use what we call the "prompt-as-a-block" approach, when something changes, instead of having to change a thousand cases, you change it in that one spot and it automatically updates all of them. With numerous test cases often sharing 80% of the same call flow, this makes the whole process far more efficient. And when a discrepancy does show up between a test case and the developed code, a name that maps to the documented design lets you quickly determine where the issue is and whether it's a defect in the design, the test case, or the application itself.
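Here's a minimal sketch of the prompt-as-a-block idea. The block names and prompt wording are invented for illustration; the mechanics, where a dotted name pinpoints the failing step and one block edit propagates to every case built from it, are the point.

```python
# Hypothetical "prompt-as-a-block" sketch: each test case is composed
# from shared blocks, so a wording change in one block flows into every
# case that includes it. Block names and prompts are illustrative.

BLOCKS = {
    "WelcomeMenu":    "Welcome. For balances, press 1; for payments, press 2.",
    "AccountBalance": "Your current balance is {balance}.",
    "PaymentOptions": "To pay by card, press 1; by bank transfer, press 2.",
}

def build_case(*block_names: str) -> dict:
    """Compose a test case from shared blocks; the dotted name
    pinpoints where in the flow a failure occurred."""
    return {
        "name": ".".join(block_names),
        "prompts": [BLOCKS[b] for b in block_names],
    }

case = build_case("WelcomeMenu", "AccountBalance")
print(case["name"])  # WelcomeMenu.AccountBalance

# Change the welcome wording once...
BLOCKS["WelcomeMenu"] = "Thanks for calling. For balances, press 1."
# ...and every case rebuilt from blocks picks it up automatically,
# instead of hand-editing a thousand individual cases.
print(build_case("WelcomeMenu", "PaymentOptions")["prompts"][0])
```

When a run fails at `WelcomeMenu.AccountBalance`, the name alone tells you which step of the documented design to compare against, which is what lets you triage design defect vs. test-case defect vs. application defect quickly.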

For more information on Velocity, watch the video or visit our Velocity page.

And contact us today to learn more.
