State of DevOps Insights: 5 Questions for Dr. Nicole Forsgren

This week, I had an opportunity to reconnect with Dr. Nicole Forsgren, CEO and chief scientist at DevOps Research and Assessment (DORA). Some of you DevOps aficionados will remember her fascinating keynote presentation, The Data Behind DevOps, at the Cyara Xchange conference earlier this year. Through her work with DORA, Nicole helps some of the world’s top companies become high-performing organizations that are able to move faster and build more secure, resilient systems.

Nicole is also the lead investigator on the largest DevOps study to date, and so has a unique viewpoint on the expanding intersection of DevOps and customer experience (CX). In this two-part blog post, we first discuss some of the study’s findings, and then, later this week in a follow-up post, we’ll talk about how these findings can be applied to development projects that focus on customer experience applications.

[Headshot: Dr. Nicole Forsgren]

1. Nicole, thanks for taking the time to share some of your insights into the DevOps industry with our users. Could you provide an overview of the survey and the report?

Thanks, Amy! It’s a pleasure to be here. We recently released our latest research, and the results are exciting. The “Accelerate: State of DevOps Report” is the largest and longest-running research of its kind. It represents the latest iteration of five years’ work, which includes data from more than 30,000 technical professionals worldwide, across all industries. This year, we conducted the research in collaboration with Google.

Our goal is to understand the practices that drive higher software delivery performance, which, in turn, generates powerful business outcomes. This is the only DevOps report that includes cluster analysis to help teams benchmark themselves against the industry as high, medium, or low performers. The report’s predictive analysis identifies specific strategies that teams can use to improve their performance and outcomes.
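
To give a rough picture of what benchmarking teams via cluster analysis can look like, here is a minimal Python sketch using scikit-learn. The metric names and numbers below are invented for illustration; this is not the report’s actual data or methodology.

```python
# Illustrative sketch of clustering teams on delivery metrics.
# NOT the report's methodology or data; the values below are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical survey responses, one row per team:
# [deploys per week, lead time (hours), time to restore (hours), change fail rate]
teams = np.array([
    [50.0,   4.0,   1.0, 0.05],   # ships often, recovers fast
    [30.0,   8.0,   2.0, 0.07],
    [ 7.0,  72.0,  24.0, 0.15],   # middle of the pack
    [ 1.0, 336.0,  48.0, 0.30],
    [ 0.5, 720.0, 168.0, 0.45],   # slow and unstable
])

# Standardize so no single metric dominates the distance calculation.
scaled = StandardScaler().fit_transform(teams)

# Group teams into three clusters, loosely "high / medium / low" performers.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(labels)
```

The actual study works from the full survey dataset; the point of the sketch is only that, once delivery metrics are measured consistently, teams separate into natural performance groups.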

2. Was there a specific focus for this year’s report?

This year, we examined the impact that cloud adoption, use of open source software, organizational practices (including outsourcing), and culture all have on software delivery performance. Our research has always examined the role of software delivery metrics—throughput and stability—in organizational performance and finds evidence of tradeoffs for organizations that conventionally optimize for stability.

For the first time, our research expanded our model of software delivery performance to include availability. I love the addition of availability this year, because availability is about making and keeping promises to our customers about the software we deliver, and customers are key. This addition improves our ability to explain and predict organizational outcomes and forms a more comprehensive view of developing, delivering, and operating software. We call this new construct software delivery and operational performance, or SDO performance. This new analysis allows us to offer even deeper insight into DevOps transformations.

3. The transformation away from more traditional ways of operating toward Agile and DevOps methodologies has gained tremendous traction in enterprise software development. What are the primary benefits that companies can gain by implementing this approach?

I’m so excited and encouraged by this shift, because we’re seeing a huge increase in the value and safety of delivery, which is a major benefit for customers and stakeholders. With older, traditional models of operating, the extended process of planning, requirements gathering, development, testing, and delivery was incredibly slow.

One big benefit I see—and the report underscores this—is the acceleration of value delivery, which you could also think about as reducing risk. That is, there used to be a huge gap between the initial idea for the application and its eventual delivery to the customer. That introduces risk, because what if our idea for delivering customer value wasn’t quite right? We’ve already invested a huge amount of time and effort into what is essentially a grand experiment, and research has shown that even for well-known features on mature products, only about one-third deliver value (Kohavi et al. 2009).

Another risk is that even if we are correct and we do deliver value, we’ve had to wait a long time to realize that value. And the cost of that delay can be extraordinary. Because of the way work is scoped in traditional software development, we batch low-value work together with high-value work, squeezing everything into the requirements doc for approval. This means we are stuck waiting for the highest-value features to be delivered along with much lower-priority items; there is no expediting in favor of the features we really need. A great example comes from a Maersk case study, where the highest-value features were worth $7 million and having to wait for all the features in a release was costing much more than that. When we can develop and deliver features quickly and incrementally, we don’t have to batch them up, and we avoid this risk and these costs.
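
To make the cost-of-delay reasoning concrete, here is a toy calculation with invented numbers; it is not data from the report or the Maersk case study, only an illustration of why batching a high-value feature behind a big release is expensive.

```python
# Toy cost-of-delay comparison; all numbers are invented for illustration.

value_per_week = 100_000        # weekly value of the one feature we really need

# Scenario A: the feature is batched into a big release that ships in 12 weeks.
batched_delay_weeks = 12
batched_cost_of_delay = value_per_week * batched_delay_weeks

# Scenario B: the feature ships on its own after 2 weeks of incremental delivery.
incremental_delay_weeks = 2
incremental_cost_of_delay = value_per_week * incremental_delay_weeks

print(f"Delay cost, batched release:     ${batched_cost_of_delay:,}")
print(f"Delay cost, incremental release: ${incremental_cost_of_delay:,}")
print(f"Value recovered by shipping early: "
      f"${batched_cost_of_delay - incremental_cost_of_delay:,}")
```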

I also want to point out that speeding up delivery makes our releases more stable: smaller releases introduce less change into the system. The data on this is clear: throughput and stability are interrelated in software. So if anyone is moving slowly because they think it’s a “safer” approach, that thinking is misguided. We actually see in the data that this slower approach introduces a much higher likelihood of long periods of downtime.

4. Clearly most organizations would love to achieve accelerated value delivery, risk reduction, and the resulting savings, but it’s HARD! In your findings, are companies actually succeeding in these transitions?

An overarching—and encouraging—theme this year is that high performance and excellence are possible for everyone. Our high-performing group is growing: it’s 48 percent of our respondents this year! This year, we also identified the emergence of a fourth performance group: elite performers. This new category exists for two reasons. The first is that we see the high-performing group growing and expanding, suggesting the overall industry is improving its software development and delivery practices. This trend signals that high performance is attainable for many teams in the industry and is not something reserved for an exclusive group of teams with unique characteristics. The second is that the elite group demonstrates that the bar for excellence remains high across the industry, with the highest performers still optimizing for both throughput and stability.

5. That is really encouraging—what an interesting data point! OK, so let’s talk some more about achieving those performance stats. What insights does the report provide on how to reach the level of a high-performing or elite-performing enterprise?

Not surprisingly, high performance comes down to execution. Anyone can achieve high performance, but you need to do the work. This is especially visible in the cloud computing numbers this year. So many of our respondents said they were using the cloud, but when we went on to ask if they were really executing in the ways that matter—such as broad network access and on-demand self-service (characteristics defined by NIST)—only 22 percent were adopting the essential cloud computing characteristics. And this is an important point, because teams that adopt these essential characteristics are 23 times more likely to be elite performers! So high performance is possible, but you need to commit to doing the work and executing consistently.
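
As a simple way to picture what adopting the essential characteristics means, here is a tiny sketch that treats NIST’s five essential cloud characteristics (from SP 800-145) as flags and counts a team as practicing essential cloud computing only when all five are in place. The team data is hypothetical.

```python
# Hypothetical illustration: a team only counts as adopting "essential cloud"
# computing when all five NIST SP 800-145 characteristics are in place.

NIST_ESSENTIAL_CHARACTERISTICS = [
    "on_demand_self_service",
    "broad_network_access",
    "resource_pooling",
    "rapid_elasticity",
    "measured_service",
]

def adopts_essential_cloud(team: dict) -> bool:
    """Return True only if the team reports all five characteristics."""
    return all(team.get(c, False) for c in NIST_ESSENTIAL_CHARACTERISTICS)

# Invented example: a team that "uses the cloud" but misses two characteristics.
team = {
    "on_demand_self_service": True,
    "broad_network_access": True,
    "resource_pooling": True,
    "rapid_elasticity": False,
    "measured_service": False,
}

print(adopts_essential_cloud(team))  # False
```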

6. Nicole, these are some excellent observations, but I know these are just the salient details. Where can our readers get access to the report to learn more about your findings?

It’s available for free download from the Google Cloud Platform website.

-------------

Please be sure to read the second installment of my Q&A with Dr. Forsgren, where we will dive into the importance of DevOps methods to CX.
