Before joining Cyara, I spent fifteen years building contact centers in Europe, Asia, Africa, and Australia, two-thirds of that time as an architect. When I joined Cyara, I started researching testing and the software development lifecycle. In this series of blog posts, of which this is the first, I will go through some of the things I have learned and wish I had known and applied earlier.
One of the recurring problems I faced across different organizations was reaching Acceptance Testing only to be told by the customer: "That's not what I wanted!" Last-minute changes ensued with little or no testing, because there wasn't enough time to execute manual tests. The result was typically delayed projects, deployments into production that caused major customer pain, and a technical team on deck for long hours performing heroics to fix things as quickly as possible. Stress levels were sky-high and team morale was low.
A Cinematic Analogy
So why does this happen? Well, let me use an analogy. You know when you go see a movie after having read the book? If you're like me, you often come out disappointed. That's because when you were reading the book, you pictured the characters and the scenes in your mind. These pictures were shaped by your beliefs, your experiences, and so on, and these are usually different from those of the movie director.
I believe something very similar happens on projects. Teams of business analysts bring together their requirements in large, wordy documents, then developers are left to interpret these (like a movie director would) before they are handed over for User Acceptance Testing (when the movie is released). Only after that does the conversation about what people really meant happen!
So what if there were a way to have that conversation at the beginning of the project and to document it as an artefact to be executed against the system under test, and to use the results as a feedback mechanism? Well, at Cyara, that's exactly what we're working on.
Using Test Cases as Documentation
With Cyara, analysts can use test cases to document in detail how they expect the system to work and how it should behave as the user interacts with it, much like a storyboard. Cyara does this in a conversational manner: each test case records what the user should expect to hear or see from the system, and what the user should reply. Once an analyst has finished documenting the scenario, the developers can execute the test case until it passes and send the result back to the analysts and the customer, who then have not only the documentation but also a way to see or hear the interaction. This should happen as early as possible, so disconnects are found well before the end of the project.
In addition, that same test case can be re-executed any time a change is made or a new requirement or scenario is added, to verify that existing functionality hasn't been broken in the process. This approach keeps customers, analysts, and developers aligned from the start, and it provides fast feedback on the progress of development and on the kind of experience the system will deliver. And because test execution is automated, the team is freed up to respond to change more quickly.
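To make the idea concrete, here is a minimal sketch, in plain Python rather than Cyara's actual product, of what a conversational test case might look like: each step documents what the caller should hear and how the caller replies, so the scenario reads as a storyboard yet can be executed against a system under test. All names here (the scenario, the toy IVR, the runner) are hypothetical illustrations, not a real API.

```python
# A hypothetical sketch of a conversational test case: each step records
# the prompt the caller should hear and the caller's scripted reply.
from dataclasses import dataclass

@dataclass
class Step:
    expect_prompt: str  # what the system should say to the caller
    reply: str          # what the simulated caller says next

# The scenario doubles as documentation: an analyst can read it top to bottom.
balance_enquiry = [
    Step("Welcome to Acme Bank. Say 'balance' or 'transfer'.", "balance"),
    Step("Please say your four-digit PIN.", "1234"),
    Step("Your balance is one hundred dollars. Goodbye.", ""),
]

def run_scenario(system, steps):
    """Drive the system under test through the scripted conversation.

    Returns (passed, failures) so the result can be fed back to the
    analysts and the customer as evidence of the actual interaction."""
    failures = []
    for i, step in enumerate(steps):
        prompt = system.next_prompt()
        if prompt != step.expect_prompt:
            failures.append((i, step.expect_prompt, prompt))
        system.hear(step.reply)
    return (not failures), failures

class ToyIVR:
    """A toy system under test with a hard-coded call flow, for illustration."""
    def __init__(self):
        self.prompts = iter([
            "Welcome to Acme Bank. Say 'balance' or 'transfer'.",
            "Please say your four-digit PIN.",
            "Your balance is one hundred dollars. Goodbye.",
        ])
    def next_prompt(self):
        return next(self.prompts)
    def hear(self, utterance):
        pass  # a real IVR would branch on the caller's reply

passed, failures = run_scenario(ToyIVR(), balance_enquiry)
print(passed)  # prints True: the toy system matches the documented scenario
```

Because the scenario is data rather than prose, the same `balance_enquiry` list can be re-run unchanged after every change to the call flow, which is exactly the regression-safety property described above.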