A Research-Based Approach to Measuring DevOps Metrics That Matter
- June 17, 2020
As the old adage goes, what gets measured gets done. Measurement is the key enabler of any DevOps transformation, and yet it’s an oft-neglected aspect of projects. Organizations struggle to get beyond the starting blocks when learning how to measure DevOps. As a result, in today’s article, I will share important DevOps metrics your team can use to get started on your journey to measuring positive change.
A common challenge I frequently hear from DevOps teams is that there is no clear starting place, no benchmark to begin measuring from. They ask, ‘If you don’t know where you are starting, how do you measure improvement from that place?’ My advice is to simply start. Start measuring and your yardstick will appear. You will see and be able to show improvement over time. Instead of measures like, “the release made it out to production,” you’ll start being able to report on metrics for DevOps that meaningfully impact the business.
Why DevOps Metrics Matter
DevOps metrics are important as they help inform data-driven decisions that can guide continuous improvement efforts. And, with the right measures, you can link DevOps improvements with measurable impact on greater goals like digital transformation efforts. The DevOps Research and Assessment (DORA) group helpfully provides us with clear metrics to track, and even more insights with its latest report, Accelerate State of DevOps 2019.
DORA’s Research-Driven Guidelines
Over the past six years, DORA has worked to develop four DevOps measurements indicative of an organization’s software delivery performance, and ability to meet its DevOps goals. This year the group has enriched its research by identifying the capabilities that drive improvement in each of these four key areas. Using DORA’s four key metrics as a foundation, let’s explore the options and tools available for gathering metrics in DevOps.
Deployment Frequency
This metric gauges the throughput of your software delivery process, telling you how often and how quickly new services or features are deployed to production. This measure tells you quite a bit about your process effectiveness. For example, if there are bottlenecks in the process, measuring deployment frequency will help you unearth them by asking key questions such as:
- Are there unnecessary steps in the process or are these steps in the wrong order?
- What can we automate?
- Are we the right team to manage this part of the process?
- Do upstream issues exist that affect our responsiveness?
- And, do we have access to the tools we need to ensure timely deployments?
Over time, deployment frequency should hold steady or increase. In the spirit of continuous improvement, decreases or dips should be reviewed closely to identify (and remediate when possible) the root cause. DORA identifies elite performers as those able to deploy on demand, multiple times a day. Conversely, low performers deploy once every one to six months.
Deployment frequency can be measured using Jenkins, Trello, and Git. In addition, off-the-shelf tools like XebiaLabs can help measure deployment frequency, as can Azure’s built-in DevOps tools.
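Whatever tool supplies the raw deployment events, the arithmetic itself is simple. Here is a minimal sketch, assuming you can export one timestamp per production deploy (from your CI server’s build history, for example); the function name and sample data are hypothetical:

```python
from datetime import datetime, timedelta

def deployments_per_week(timestamps):
    """Average number of production deploys per week over the observed window.

    `timestamps` is a list of datetimes, one per deploy, in any order.
    """
    if len(timestamps) < 2:
        return float(len(timestamps))
    span = max(timestamps) - min(timestamps)
    # Divide by at least one week so a short burst doesn't inflate the rate.
    weeks = max(span / timedelta(weeks=1), 1.0)
    return len(timestamps) / weeks

# Example: four deploys spread over two weeks.
deploys = [datetime(2020, 6, 1), datetime(2020, 6, 4),
           datetime(2020, 6, 9), datetime(2020, 6, 15)]
print(deployments_per_week(deploys))  # -> 2.0
```

Trending this number week over week gives you the baseline the article recommends: start measuring, and the yardstick appears.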
Lead Time for Code Changes
Along with deployment frequency, this metric measures the throughput of the software delivery process. DORA recommends measuring the lead time for code changes from the point in time when code is checked-in to the point it is released. This measure can also help you gauge the efficiency of your processes, supporting system effectiveness, and the general capabilities of your development team. For example, lengthy lead times can unearth inefficiencies in the development process or deployment bottlenecks.
As your team becomes more familiar and efficient with its DevOps processes, you should expect to see your lead time for changes decrease over time. Elite performers’ lead time is less than one day, whereas low performers need between one and six months.
Lead time for code changes can be effectively measured from Jenkins. Like deployment frequency, you may also opt to use Azure DevOps tools or XebiaLabs. Ultimately, it depends on your platform, what your team has experience with, and what you are willing to spend, both in dollars and in maintenance effort.
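The calculation reduces to joining each change’s check-in time with its release time and summarizing the gaps. A minimal sketch, assuming you can pair Git commit timestamps with deploy-log timestamps (the pairing itself is left to your tooling, and the names here are hypothetical):

```python
from datetime import datetime
from statistics import median

def lead_time_days(changes):
    """Median lead time, in days, from code check-in to production release.

    `changes` is a list of (committed_at, released_at) datetime pairs.
    The median is used so one stalled change doesn't skew the figure.
    """
    deltas = [(released - committed).total_seconds() / 86400
              for committed, released in changes]
    return median(deltas)

changes = [
    (datetime(2020, 6, 1, 9), datetime(2020, 6, 2, 9)),   # 1 day
    (datetime(2020, 6, 3, 9), datetime(2020, 6, 6, 9)),   # 3 days
    (datetime(2020, 6, 8, 9), datetime(2020, 6, 10, 9)),  # 2 days
]
print(lead_time_days(changes))  # -> 2.0
```

Watching this median fall below one day is, per DORA, one marker of elite performance.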
DevOps Change Failure Rate
DORA flags the change fail rate as a measure of the quality of the release process. It gets to the heart of how many application or service changes, builds, or deployments create a service issue large enough to require remediation. The change fail rate would ideally be managed down as close to zero as possible. And, indeed, all but low performers have a change fail rate between zero and 15%.
The IT ticket system is an effective tool for measuring fail rates, tracking each change’s outcome, the impact of any failure, and any required remediation. For example, your ticket system can report if an approved change led to a service outage that required a rollback of the change.
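Given a ticket-system export, the rate is just failed changes over total changes. A minimal sketch, assuming each exported change record carries a flag marking whether it needed remediation (the field names and sample records are hypothetical):

```python
def change_failure_rate(changes):
    """Percentage of production changes that required remediation.

    `changes` is a list of dicts, one per change ticket; the `failed`
    flag marks changes that caused an incident needing a rollback,
    hotfix, or patch.
    """
    if not changes:
        return 0.0
    failed = sum(1 for change in changes if change["failed"])
    return 100.0 * failed / len(changes)

changes = [
    {"id": "CHG-101", "failed": False},
    {"id": "CHG-102", "failed": True},   # rolled back after an outage
    {"id": "CHG-103", "failed": False},
    {"id": "CHG-104", "failed": False},
]
print(change_failure_rate(changes))  # -> 25.0
```

A result at or under 15% would put this team alongside DORA’s elite, high, and medium performers.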
Time to Restore Service
Once a service-impacting incident is detected, how long does it take to remediate and restore the service? This metric measures system stability. Naturally, you’ll want to restore services as quickly as possible, as the cost of service outages to the business can be extreme. A Fortune 1000 survey by IDC found that the average cost of an infrastructure failure is $100,000 per hour.
When it comes to this measure, DORA research finds a significant gap between elite and low performers. Elite organizations are able to restore services on average in less than one hour whereas low performers report taking between one week and one month. High and medium performers are able to restore service within a day.
If you issue tickets for system repairs, your ticket system should be able to report on time to restore service. Tracking this metric will give you a distinct trend line illustrating progress over time. This is just one way to measure this metric. Often, a look at the monitoring tools that come with your cloud resources will give you this information. In the best scenarios, failures are self-healing and take milliseconds to fail over.
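However the incident data is sourced, the metric is the average gap between detection and restoration. A minimal sketch, assuming you can pull detected/restored timestamp pairs from incident tickets or your monitoring system (the names and sample incidents are hypothetical):

```python
from datetime import datetime, timedelta

def mean_time_to_restore(incidents):
    """Mean time to restore service, returned as a timedelta.

    `incidents` is a list of (detected_at, restored_at) datetime pairs,
    one per service-impacting incident.
    """
    downtimes = [restored - detected for detected, restored in incidents]
    return sum(downtimes, timedelta()) / len(downtimes)

incidents = [
    (datetime(2020, 6, 1, 10, 0), datetime(2020, 6, 1, 10, 30)),  # 30 min
    (datetime(2020, 6, 8, 14, 0), datetime(2020, 6, 8, 15, 30)),  # 90 min
]
print(mean_time_to_restore(incidents))  # -> 1:00:00
```

Keeping this average under one hour is what distinguishes DORA’s elite performers; under one day covers high and medium performers.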
While these four metrics are a very helpful starting place to measure DevOps improvement and success, it is absolutely critical that teams take the initiative to link these metrics to the business. For example, increased deployment frequency allows the DevOps team to address new customer requests faster, growing customer satisfaction. Tracking key metrics is important to the business and even more so if you can show the business how DevOps processes are driving improvement over time that directly impacts key corporate goals.
Some tools allow for value stream mapping which directly ties code changes to features released. In some cases, e.g. retail applications, you can directly tie new feature introductions with impact to revenue.
With these four key metrics in hand, you are now in a position to build a dashboard for ongoing tracking and reporting. Commonly used DevOps metrics dashboard tools include Grafana, CloudBees DevOptics, XebiaLabs, and Azure DevOps dashboards. Depending on your budget, timeline, and technologies in use, any of these solutions could be a fit for you.
DORA’s four key metrics will not only allow you to show progress and highlight areas for improvement for the DevOps team; because they are common across the industry, they also let you benchmark your team against its peers for external validation of its progress. And, you’ll have a genuine numbers-based response when your boss drops by to ask how the team is progressing. Most importantly, with this data in hand, you will be prepared to change course quickly when you don’t see a benefit in something you have built, leverage the insights you gain from your experiments, and capitalize on your successes, helping the business reach its ultimate goals.
Interested in an experienced sherpa for your DevOps journey? Reach out to our consulting team today.
This article originally appeared on DevOpsDigest.