Every day all around the world, engineers and testers push new code live with a hope and a prayer because they know it hasn’t been thoroughly tested. They know the customers will probably find issues, but that’s what hot fixes are for, right?
Does this sound familiar?
It’s not that you want to deliver an experience that will frustrate your customers, but you’ve only got so much time, and you’ve got to get the release out. We’ve all been there. Maybe you feel like you’re there all too often. And it’s not unusual to wish you could have done a better job with testing.
Challenges of Finding and Fixing Issues
You have tight timelines, granted, but are there other reasons why it’s so hard to catch issues before the release goes out? If you test manually, it is virtually impossible to cover everything. Your technology stack is complex; in general terms, it’s made up of four layers:
- Network architecture
- Infrastructure (hardware and hardware setup, operating systems, databases, etc.)
- Applications
- Business logic (IVR application, interaction distribution rules, etc.)
Your environment is made up of these things, and your environment is unique. It’s impossible for vendors or solution providers to test their components in an environment that represents each of their customers'.
When you test, you probably go after the most common denominators. That is, you test with the hardware and software that you believe most of your customers use. The idea is that you may not catch everything, but you can try to catch a big portion of the errors your customers could experience. This isn’t perfect, but the alternative would be to deliver very little innovation at a very high cost!
The 80-20 Rule
If you are like most companies, you are testing only about 20% of your customer experience following a change. You may think that’s all you need to test since you only touched a portion of your environment. Or, if you are testing manually, this may be all that you have time to test before you have to go live. You know that something you didn’t test in the other 80% is almost guaranteed to come back around to bite you. It’s a rule.
Here’s how it bites you. Changes to one area often impact what you think is a wholly unrelated area. Every time you deploy code, you can potentially introduce errors throughout your environment.
What’s the Alternative?
You do have an option – automated testing. It can take your test coverage to 100%, and it lets you stop using your customers as your testers.
If you decide to transition to automated testing, here are three things to think about as you get started:
- Make sure your IVR documentation is up to date. You may not know what options have actually been deployed in your IVR. By using automation to crawl your IVR’s options, you can quickly develop design documentation. You probably don’t realize how simple this step can be, or how important this type of documentation is for development projects. Many executives also don’t realize how bad the customer experience delivered by their company’s IVR is. See our previous blog post, Is Your CX Making Your Customers See Red?, and the AVOKE webinar, Your IVR: Silent but Deadly, for more on CX and your IVR.
- Baseline your system. You may not have a baseline that shows your system’s current performance levels, or the one you have may be outdated. A baseline is critical for demonstrating incremental improvement as you move forward, and being able to show improvement can be significant when you look for funding for your projects.
- Reuse as much as possible from project to project. As you move through projects, you can think about how to reuse and repurpose project artifacts. This includes building a test library over time. This is also a good way for you to stretch the investment you make in automated testing across several projects instead of treating each project as a one-off.
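The reuse advice above can be sketched as a small test library: each IVR test case is just data (the DTMF digits to press and the prompts the caller should hear) that can be replayed against any new build. Everything below is hypothetical – the `IVRTestCase` shape, the `MENU` tree, and `run_case` are a minimal illustration of the idea, not any particular tool’s API.

```python
from dataclasses import dataclass

# Hypothetical menu tree standing in for a deployed IVR:
# each node maps a DTMF digit to (prompt played, next node).
MENU = {
    "root": {"1": ("For balances, press 1.", "balances"),
             "2": ("For support, press 2.", "support")},
    "balances": {"0": ("Returning to the main menu.", "root")},
    "support": {},
}

@dataclass
class IVRTestCase:
    """A reusable test case: a call path and the prompts it should hear."""
    name: str
    digits: list
    expected_prompts: list

def run_case(menu, case):
    """Replay the DTMF sequence against the menu and compare prompts."""
    node, heard = "root", []
    for digit in case.digits:
        prompt, node = menu[node][digit]
        heard.append(prompt)
    return heard == case.expected_prompts

# The library grows project over project; cases are reused, not rewritten.
LIBRARY = [
    IVRTestCase("reach-balances", ["1"], ["For balances, press 1."]),
    IVRTestCase("balances-and-back", ["1", "0"],
                ["For balances, press 1.", "Returning to the main menu."]),
]

results = {case.name: run_case(MENU, case) for case in LIBRARY}
```

Because the cases are plain data, the same library can run unchanged against each release, which is what stretches the automation investment across projects.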
By following this advice, you can begin to build a test-centered DNA that puts you on the road to improving the customer experience you deliver and your processes overall.
Reaching Software Nirvana
Here’s a story that you might find inspiring if you decide to make the move to automated testing. Last year, I had a chance to talk to the person who manages testing for a major US financial institution that has moved to a continuous delivery model. They deploy new releases in their environment multiple times a day. He mentioned that their service levels aren’t five 9s (a stretch goal for any enterprise) but 100%. They have to be that high because some of their customers can get access to communications only once a week; if an interaction fails, they don’t get another chance.
This led me to ask him about his average cost of testing per project. I was expecting a large number, but he told me that the industry benchmark is around 40% of project cost, and that the more advanced organizations manage 30%, a figure that matches my own experience. In his organization, however, it’s 10%! He says that’s the benefit of automated testing.
This is what you might call software Nirvana. When a developer checks in a new piece of code as completed, the development team’s built-in automation recognizes that the code needs to be exercised, creates the environment to deploy it, and then runs regression tests against it, with no human intervention in end-to-end testing. This is the result of a test-centered DNA.
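That check-in-to-deploy flow can be sketched as a pipeline of stages. The stage names and functions below are hypothetical simulations meant only to illustrate the sequence (detect the check-in, provision an environment, run the regression suite, then decide), not any real CI product’s API.

```python
# Hypothetical sketch of the pipeline described above: a check-in
# triggers environment provisioning and regression testing with no
# human step in between.

def provision_environment(commit):
    """Stand up a throwaway environment for this build (simulated)."""
    return {"commit": commit, "status": "ready"}

def run_regression_suite(env):
    """Run the full regression library against the environment (simulated)."""
    return {"commit": env["commit"], "passed": True}

def on_check_in(commit):
    """Entry point: automation fires on every developer check-in."""
    env = provision_environment(commit)
    result = run_regression_suite(env)
    return "deploy" if result["passed"] else "reject"

decision = on_check_in("abc123")
```

The point of the sketch is the shape, not the internals: every check-in flows through the same automated gate, which is how multiple deployments a day stay safe.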
Find out more about Cyara Crawler!