DORA Metrics: Measuring What Really Matters About Your Software Delivery


Introduction

Organisations around the world want to improve the efficiency of their software development teams so they can deliver software at scale to their customers. To improve the quality and efficiency of their teams, they first need to determine how to measure success. Some common questions that we hear from our clients include: 

  • How well are DevOps principles applied in our organisation? 
  • Where does my engineering team stand compared to other organisations? 
  • How can we increase the productivity of our development teams? 

While these are good questions, answering them is hard. How do we measure software team performance in a way that focuses on business outcomes and on what matters most to customers? 

If you can’t measure it, you can’t improve it. 

While the above statement is a little facetious, it illustrates that it’s very difficult to demonstrate improvements without appropriate metrics in place. When organisations first start determining what metrics to measure, some might think of: 

  • Lines of code written 
  • JIRA tickets completed 
  • Timesheets submitted 

While the examples listed above are measurable, are they measuring business outcomes that your organisation truly cares about?  
 
In this blog post, we explore four key metrics that can be used to gain insight into the business outcomes software delivery drives. We will also discuss industry benchmarks that define the different levels of software delivery performance.  
 
Finally, we look at how Sourced Group (Sourced) can help identify high-value interventions that push your organisation up a software performance grade. 

What Is DORA?

No, it’s not Dora the Explorer, but rather the DevOps Research & Assessment (DORA) program, which has been running for seven years and has gathered data from 32,000 professionals worldwide. It is the longest-running research investigation of its kind and provides an independent view into the practices and capabilities that drive high-performing technology organisations. 

State of DevOps Reports are released yearly, with the latest report released in September 2021. A popular book that we recommend, and that goes into deeper detail on the DORA metrics, is Accelerate: The Science of Lean Software and DevOps by DORA co-founder Nicole Forsgren, Jez Humble, and Gene Kim. 

Every year, the State of DevOps report is released with an updated research model. This enables the project to keep up to date with the industry as new methodologies and technologies are embraced. It provides an independent assessment of how organisations deliver software through four key metrics. The goal of this research is to determine practices that drive software delivery excellence and demonstrate how this is key to organisational success.  

DORA Metrics

Metrics like lines of code, tickets completed, and time utilisation focus on individual or siloed team outputs. DORA metrics track team outcomes rather than individual outputs. There are two main themes behind the DORA metrics: team velocity and quality & stability.  

Figure 1: Four Key DORA Metrics 

Deployment Frequency and Lead Time for Changes look at team velocity and are strong indicators of how quickly a team can deliver software. 

Deployment Frequency 

Being able to release more often allows you to receive feedback sooner and deliver value to end users more quickly.  

It also reduces the risk the team faces when making a change to production, as the batch size of each release is typically smaller. Smaller batches make it easier to determine which change caused an outage if one occurs, and they make rolling back a change simpler.  
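
To make this concrete, below is a minimal Python sketch of how Deployment Frequency could be calculated from deployment timestamps exported from a CI/CD tool. The sample timestamps and the deployments_per_week helper are illustrative assumptions, not part of any standard tooling.

```python
from datetime import datetime, timedelta

# Hypothetical production deployment timestamps exported from a CI/CD tool.
deployments = [
    datetime(2021, 9, 1, 10, 30),
    datetime(2021, 9, 1, 15, 45),
    datetime(2021, 9, 3, 9, 0),
    datetime(2021, 9, 8, 14, 20),
]

def deployments_per_week(timestamps: list[datetime]) -> float:
    """Average number of production deployments per week."""
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_weeks = (max(timestamps) - min(timestamps)) / timedelta(weeks=1)
    return len(timestamps) / max(span_weeks, 1.0)  # treat short spans as one week

print(f"{deployments_per_week(deployments):.1f} deployments per week")
```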

Lead Time for Changes 

A lower Lead Time for Changes means that your team can release new features to market and deliver value to end users very quickly, giving your business a huge competitive advantage. The definition of Lead Time for Changes varies across the industry, which can create confusion.  

We define Lead Time for Changes as the time when the requirement originates in the business until a particular feature is pushed into production. This metric is important because it allows us to measure the end-to-end process.  

If we only measure the time taken by the development teams, we might miss bottlenecks in other areas of the organisation (finance, product, legal, risk) and potential areas of improvement for the organisation as a whole. 
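
As a rough sketch of this end-to-end definition, the snippet below computes a median Lead Time for Changes from hypothetical change records that capture when a requirement was raised by the business and when the change reached production; the record shape and dates are assumptions for illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical change records: when the requirement was raised by the
# business, and when the resulting change reached production.
changes = [
    {"raised": datetime(2021, 9, 1), "deployed": datetime(2021, 9, 10)},
    {"raised": datetime(2021, 9, 2), "deployed": datetime(2021, 9, 6)},
    {"raised": datetime(2021, 9, 5), "deployed": datetime(2021, 9, 20)},
]

# Median is more robust to outliers than the mean for skewed lead times.
lead_times_days = [(c["deployed"] - c["raised"]).days for c in changes]
print(f"Median Lead Time for Changes: {median(lead_times_days)} days")
```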

Figure 2: Lead Time for Changes 

Mean Time to Recovery and Change Failure Rate focus more on quality and stability. They measure the stability of your production environment and the quality of your processes across the software delivery lifecycle. 

Mean Time to Recovery 

Quick recovery times reflect the team’s ability to diagnose and correct problems. Mean Time to Recovery is firstly an indicator of how good your observability is, since monitoring and alerting need to be in place to detect the problem at all. Once a problem has been detected, it measures how quickly the organisation can recover. 
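
A minimal sketch of the calculation, assuming incident records that capture when a problem was detected and when service was restored (both timestamps are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (time problem detected, time service restored).
incidents = [
    (datetime(2021, 9, 2, 10, 0), datetime(2021, 9, 2, 10, 45)),
    (datetime(2021, 9, 9, 14, 0), datetime(2021, 9, 9, 16, 30)),
]

def mean_time_to_recovery(records: list[tuple[datetime, datetime]]) -> timedelta:
    """Average time from detecting an incident to restoring service."""
    total = sum((restored - detected for detected, restored in records), timedelta())
    return total / len(records)

print(f"MTTR: {mean_time_to_recovery(incidents)}")  # 1:37:30
```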

Change Failure Rate 

When a failure occurs in production, it may mean downtime for your business. Time spent dealing with failures is also time not spent delivering new features and value to customers. If an organisation has a high change failure rate, it suggests that its QA processes may need work to ensure changes are well tested. 
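
Computing the metric itself is simple once each deployment can be labelled as failed or successful; the harder part is agreeing on what counts as a failure. A minimal sketch with made-up data:

```python
# Hypothetical deployment log: True marks a deployment that caused a
# failure in production (required a rollback, hotfix, or patch).
deployment_failed = [False, False, True, False, False, False, True, False]

change_failure_rate = sum(deployment_failed) / len(deployment_failed)
print(f"Change failure rate: {change_failure_rate:.0%}")  # 25%
```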

It’s important to note that organisations must measure all four metrics together rather than in isolation. If teams try to improve one metric without watching the other three, they might make things worse overall.  

For example, organisations could massively improve their deployment frequency by deploying broken code more often, but if we don’t also measure Change Failure Rate then it will look like we’ve made things better when we haven’t! 

DORA Benchmarks

In the State of DevOps Report 2021, it is evident that the difference between low and elite performing organisations has been growing steadily over the past few years. To illustrate this, below is an extract from the report that shows the magnitude of difference between low and elite performers: 

  • 973x more frequent code deployments 
  • 6570x faster lead time from commit to deploy 
  • 3x lower change failure rate (changes are one-third as likely to fail) 
  • 6570x faster time to recover from incidents 

Figure 3: Benchmarking Against the Industry, Elite Versus Low Performer Statistics from the 2021 State of DevOps Report

Figure 4: DevOps Benchmarks, State of DevOps 2021 

The industry is accelerating. There are more elite and high performers in 2021 than ever before. Furthermore, there is a chasm emerging between medium and high performers. 

Low and medium performers are in the minority for the first time and risk being left behind without intervention.  

How Can Sourced Help?

Sourced can help by identifying and delivering high value interventions to help push your business up a software delivery bracket. First, we need to help our clients measure their DORA metrics. 

There are several techniques we use to help clients gather a baseline for their DORA metrics: 

  • Value Stream Mapping 
  • Instrument existing CI/CD pipelines (see the sketch after this list) 
  • Build CD pipelines (if they don’t have them) 
  • DevOps Assessments 
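
As an example of the pipeline instrumentation option above, here is a minimal Python sketch of a step a CD pipeline could run after each production deployment to record an event in a central metrics store. The endpoint URL, payload shape, and record_deployment helper are all hypothetical.

```python
import json
from datetime import datetime, timezone
from urllib.request import Request, urlopen

def record_deployment(service: str, commit_sha: str) -> None:
    """Send a deployment event to a central metrics store (hypothetical endpoint)."""
    event = {
        "service": service,
        "commit": commit_sha,
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }
    request = Request(
        "https://metrics.example.com/deployments",  # illustrative URL
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    urlopen(request)  # POSTs, because a data payload is attached

# Typically invoked as the last step of the CD pipeline, e.g.:
# record_deployment("payments-api", "abc1234")
```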

We’ll discuss one of our favourite techniques below. 

Value Stream Mapping

Value stream mapping (VSM) is one of the techniques Sourced can use to establish a baseline for your organisation’s DORA metrics. Value stream mapping originated at Toyota in the 1990s, where lean manufacturing processes helped establish it as a modern best practice for running business processes at high efficiency.

Software teams eventually adopted these concepts and applied them to the software development lifecycle (SDLC). 

Below is a small example of how an organisation with no existing metrics can start generating a VSM to roughly estimate its Lead Time for Changes. 

In this example, we have mapped out an example of a user journey:

“I want to have a new AWS environment ready to host application workloads.” 

Figure 5: Value Stream Card Types 

Each colour represents a different type of card. These cards will be used in the value stream map to help visualise how an organisation can get from the onboarding phase to achieve their expected outcome, which is to build out a new AWS environment to host application workloads. 

Figure 6: Sample Value Stream Map 
Figure 7: Sample Value Stream Map 

In the example above, the ticket in the onboarding phase sits waiting for two days, despite the task itself taking five minutes to complete. Getting a hardware MFA token had no defined process and three days of wait time. Together, these delays add a week to the Lead Time for Changes. 

In order to get a rough metric for Lead Time for Changes, we can add up the event and wait times of the stream. 
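
A minimal sketch of that arithmetic, using hypothetical process and wait times (in working hours, assuming 8-hour days) loosely based on the steps above:

```python
# Hypothetical value stream steps: active (process) time and wait time,
# both in working hours, loosely based on the sample map in Figure 6.
steps = [
    {"name": "Raise onboarding ticket",  "process": 0.1, "wait": 16.0},  # ~2 days waiting
    {"name": "Provision AWS account",    "process": 4.0, "wait": 8.0},
    {"name": "Issue hardware MFA token", "process": 1.0, "wait": 24.0},  # ~3 days waiting
]

# Lead Time for Changes is the sum of all event and wait times.
process_time = sum(s["process"] for s in steps)
wait_time = sum(s["wait"] for s in steps)
lead_time = process_time + wait_time

print(f"Lead time: {lead_time:.1f}h, of which {wait_time / lead_time:.0%} is waiting")
```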

The value stream map can provide your organisation with several things:  

  1. An easy-to-understand visual representation of how to achieve the business outcome 
  2. A rough baseline of the Lead Time for Changes for this type of change 
  3. Identification of the events in the process where the biggest bottlenecks sit  

To get the most value from this exercise, we like stakeholders from all teams involved in the process to participate collectively in generating the value stream map. This typically leads to eureka moments where teams realise bottlenecks don’t need to exist and that they can collaborate better. It’s important that the session has an open, no-blame culture, which we work to foster. 

Practices to Improve Your DORA Metrics

Improving DORA metrics depends heavily on the business context and what the software delivery process looks like, but below are several techniques and initiatives that we have implemented with some of our clients. 

Figure 8: Practices Sourced Can Deliver to Improve DORA Metrics  

When trying out new software engineering techniques to improve DORA metrics, it’s important that: 

  1. We have a way to measure DORA metrics before and after we make the potential improvement, preferably automated (see the sketch after this list) 
  2. We have buy-in from the business to make changes to how the development team works 
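
For the first point, here is a minimal sketch of an automated before-and-after comparison, using illustrative metric values measured over two windows:

```python
# Illustrative DORA metric values measured over two windows: before and
# after an intervention (e.g. introducing trunk-based development).
baseline = {"deploys_per_week": 1.2, "change_failure_rate": 0.28}
after = {"deploys_per_week": 3.5, "change_failure_rate": 0.22}

for metric, before_value in baseline.items():
    after_value = after[metric]
    change = (after_value - before_value) / before_value
    print(f"{metric}: {before_value} -> {after_value} ({change:+.0%})")
```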

The second point can be difficult, especially in highly regulated organisations. One approach to navigate this is to take a small, low-criticality application, prove some of the concepts we have identified, then socialise the results with business stakeholders so the approach can be scaled out across other teams. 

Conclusion

DORA metrics provide organisations with a concise set of measurements that can be used to answer the following questions: 

  • How do we determine the success of any improvements we make to our software process? 
  • Where does our organisation’s software performance sit across industries? 

Adopting good DevOps practices is becoming standard for most organisations and that’s a good thing. But being able to determine the success or failure of said practices is key to accelerating your delivery.  

There is no better time than now to start measuring as the chasm between medium and high performers grows. 

Video 

We ran a public webinar on DORA metrics that you can watch here. There is a Q&A section at the end (37:20) that answers some questions that may arise as you read this blog post. 

Matt has over a decade of experience guiding organisations across various sectors through DevOps and cloud transformations, delivering high-value interventions. He is passionate about empathising with all teams involved in the software development journey and enabling them to collaborate as seamlessly as possible to focus on delivering value.

Kee Jin is an Associate Consultant at Sourced with close to 3 years of experience in the industry, working with clients from Financial Services and Telco. He has a strong passion for technology and hones his skills by tinkering with technology during his free time.