Algorithms for High Performance DevOps

Accelerating Digital Innovation – Speeding Concept to Cash

To determine the business case for the Cloud Native technologies of microservices, containers and Continuous Deployment, the key dynamic is ‘Business Value Throughput’, i.e. not just speeding up the production of deployed code, but of software that adds quantifiable value to the organization.

In this blog post Stackify makes the great observation that Agile and DevOps combine to holistically address the full lifecycle of translating business ideas into working code running in the Cloud hosting delivery environment.

Improving the throughput of this capability is how organizations increase innovation rates and bring more new products to market faster. Better synthesis of software development and infrastructure operations speeds up the overall cycle of new feature deployment, but of course it’s only a success if the ‘Ka Ching!’ part happens.

This “ah ha > ka ching!” process is also often described as the ‘Concept to Cash’ lifecycle: the core mechanic of generating new ideas and turning them into revenue-generating business services of some kind.

Continuous Deployment, Continuous Innovation

This establishes a direct link between technology and business practices, notably Continuous Deployment and Continuous Innovation, where the optimized software process is harnessed to accelerate the Concept to Cash lifecycle, and can be combined with business practices such as Lean to maximize development profitability.

ThoughtWorks provides an excellent article exploring this implementation, How to Practice Continuous Innovation; Solutions IQ offers a presentation, a 7 Minute Case Study of Agile and Concept to Cash; and a free sample chapter is available of the book Implementing Lean Software Development – From Concept to Cash.

Another example is the Amazon white paper ‘Jenkins on AWS’, which provides a recipe for implementing Continuous Deployment on AWS, using software such as Jenkins running on their Cloud service.

Critically, the paper also describes how this capability is the foundation for higher-level business benefits derived from the same productivity improvements, i.e. faster software = faster product innovation. This is described as Continuous Innovation: the business-level transformation achieved through adopting Continuous Deployment building blocks.

This defines the core mechanic of harnessing maturity model planning for Cloud computing, highlighting how the fast, easy deployment feature of IaaS can be leveraged to make available applications that themselves speed further adoption.

Maturity models

A number of maturity models are available to map the details of this evolution. ThoughtWorks offers a Forrester paper describing it within a context of executive strategy and perceptions of IT, setting the scene for where many organizations currently stand. HP, IBM, InfoQ and Arun Gupta of AWS each offer a DevOps and CD maturity model.

In short, each of these describes an approach of automating key steps to eliminate the various manual procedures of development and release management, as the mechanism for maturing the overall capability and speeding software throughput. This TechTarget article describes the maturity progression in a very simplified form, for which the models provide the detail.

Throughput Accounting – Business Value Metrics

To close the loop and map these development improvements to Business Value, management can leverage organizational performance practices, notably Six Sigma and the Theory of Constraints (TOC), to define their ‘DevOps Algorithms’. Although these originated in the manufacturing domain, they deal generally with whole-system design and the same principles can be applied to any industry workflow; in software engineering they enable an equivalent of a ‘software factory’.

A highly recommended paper that explores this in detail is Productivity in Software Metrics by Derick Bailey, describing the application of TOC to software development, such as treating ‘User Stories’ as the unit of work, and including a framework for performance metrics based upon its principles (a minimal calculation sketch follows the list):

  • Inventory (V) and Quantity: Unit of Production (Q) – how the software team quantifies what counts as ‘work in production’.

  • Optimizing Lead Time (LT) vs. Production (PR) – using Workload Management to schedule the optimum flow of work.

  • Investment (I), Operating Expense (OE) and Throughput (T) – maximizing Net Profit (NP) and Return on Investment (ROI), and calculating Average Cost Per Function (ACPF).
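
As a minimal sketch of these calculations (the figures are hypothetical; NP = T – OE and ROI = NP / I are standard TOC identities, while treating ACPF as operating expense spread across units delivered is an assumption here, not necessarily Bailey’s exact formulation):

```python
from dataclasses import dataclass

@dataclass
class ThroughputMetrics:
    """TOC financial metrics for a software team (symbols per the list above)."""
    throughput: float         # T: value generated per period (revenue minus truly variable costs)
    investment: float         # I: money tied up in the system
    operating_expense: float  # OE: cost of turning investment into throughput
    units_produced: int       # Q: units of production, e.g. user stories delivered

    def net_profit(self) -> float:
        """NP = T - OE, the standard TOC identity."""
        return self.throughput - self.operating_expense

    def roi(self) -> float:
        """ROI = NP / I."""
        return self.net_profit() / self.investment

    def average_cost_per_function(self) -> float:
        """ACPF, here assumed to be OE divided by the units delivered."""
        return self.operating_expense / self.units_produced

# Illustrative numbers only, not from the paper:
m = ThroughputMetrics(throughput=500_000, investment=1_000_000,
                      operating_expense=300_000, units_produced=40)
print(f"NP={m.net_profit():,.0f}  ROI={m.roi():.1%}  ACPF={m.average_cost_per_function():,.0f}")
```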

The key ingredient is the science of ‘Throughput Accounting’, versus traditional cost accounting. As the name suggests, where the latter is concerned with static snapshots of financial reporting, Throughput Accounting focuses on the system dynamics of what actually drives cash flow, reporting on performance relative to overall capacity.

These practices enable the development process to be viewed as a factory-like workflow, so you can then identify the overall throughput rate, the constraints in the ‘production line’ that slow progress and reduce output rates, and so forth.
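
To make the ‘production line’ analogy concrete, here is an illustrative sketch (stage names and weekly rates are invented) of how identifying the constraint reduces to finding the slowest stage:

```python
# Hypothetical weekly capacity (user stories) for each stage of the 'production line'.
stage_rates = {
    "analysis": 30,
    "development": 18,
    "code review": 12,   # lowest rate: this stage bounds the whole system
    "testing": 20,
    "deployment": 50,
}

# Per the Theory of Constraints, overall throughput can never exceed the bottleneck.
bottleneck = min(stage_rates, key=stage_rates.get)
print(f"Constraint: {bottleneck} at {stage_rates[bottleneck]} stories/week")
```

Improving any stage other than the constraint adds no system throughput, which is why this identification step comes before tool selection.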

Doing so enables you to view the organization through a business performance-centric lens and apply optimization best practices such as ‘Value Stream Mapping’, identifying the right tools and techniques to apply at the right process points, and thus improving developer productivity within a context of Business Value generation. In the DevOps article Lean Value Stream Mapping for DevOps, the IBM author describes how this kind of process optimization goal is used as a way of better organizing software development and innovation.
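
One simple Value Stream Mapping calculation, sketched below with invented stage timings, is flow efficiency: the share of total lead time actually spent adding value, which points to the queues worth attacking first:

```python
# Hypothetical value stream: (stage, value-add hours, waiting hours).
value_stream = [
    ("requirements", 8, 40),
    ("development", 24, 16),
    ("testing", 12, 72),   # the longest queue: a prime candidate for automation
    ("release", 2, 24),
]

value_add = sum(v for _, v, _ in value_stream)
lead_time = sum(v + w for _, v, w in value_stream)
print(f"Flow efficiency: {value_add / lead_time:.0%} of {lead_time}h total lead time")
```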

Technology Metrics and Toolchains – Continuous Containers

As the base unit of improvement design, this analysis can also identify and measure technology metrics, which in turn inform what permutation of DevOps tools to use.

For example, Lori McVittie of F5 provides a great blog post on developing KPIs around system-level performance relevant to DevOps goals, formulating a set of metrics (a sketch of computing them follows the list):

  • MTTR – Mean Time To Recover
  • LTTC – Lead Time To Change
  • MTTL – Mean Time to Launch
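
A minimal sketch of how such KPIs might be computed from event records (the timestamps and record shapes below are hypothetical, not taken from Lori’s post):

```python
from datetime import datetime, timedelta
from statistics import mean

def mean_delta(pairs) -> timedelta:
    """Average elapsed time across (start, end) event pairs."""
    return timedelta(seconds=mean((end - start).total_seconds() for start, end in pairs))

# Hypothetical incident records: (outage began, service recovered).
incidents = [
    (datetime(2017, 5, 1, 9, 0), datetime(2017, 5, 1, 9, 45)),
    (datetime(2017, 5, 3, 14, 0), datetime(2017, 5, 3, 16, 30)),
]
# Hypothetical change records: (change committed, change live).
changes = [
    (datetime(2017, 5, 2, 10, 0), datetime(2017, 5, 4, 10, 0)),
]

print(f"MTTR: {mean_delta(incidents)}")  # Mean Time To Recover
print(f"LTTC: {mean_delta(changes)}")    # Lead Time To Change; MTTL follows the same pattern
```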

Lori identifies the core benefit of containers versus virtual machines in terms of spin-up times, and how this would be especially beneficial in high-traffic, high-demand scenarios such as software-defined networking. From this she identifies a number of variables that define a performance algorithm.
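
As a toy model of this point (all timings invented, purely illustrative, and not Lori’s actual algorithm), time-to-added-capacity scales directly with per-instance spin-up time, which is where containers’ advantage shows:

```python
# Hypothetical spin-up times in seconds (illustrative orders of magnitude only).
VM_SPIN_UP = 180.0        # provision and boot a virtual machine
CONTAINER_SPIN_UP = 2.0   # start a container from a cached image

def time_to_capacity(instances: int, spin_up_s: float, parallelism: int = 10) -> float:
    """Seconds until the required instances are serving, launching in parallel batches."""
    batches = -(-instances // parallelism)  # ceiling division
    return batches * spin_up_s

for label, spin_up in [("VMs", VM_SPIN_UP), ("containers", CONTAINER_SPIN_UP)]:
    print(f"{label}: {time_to_capacity(50, spin_up):.0f}s to bring 50 instances online")
```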

This type of analysis identifies the constraints in technical operations that can slow overall performance throughput, and therefore what types of tools and automations to apply.

Conclusion: Maturity-model Driven Cloud Native Agility

The Cloud Native mix of microservices, containers and Continuous Deployment offers a combination of architecture and tools that can bring these types of improvements to any enterprise software team.

David Linthicum describes how containers, such as Docker, enable continuous operations. Rancher also explores this relationship in their white paper How to Supercharge your CI Pipeline.

However, in 4 Myths about Containers and Continuous Delivery, Todd DeCapua explains that this doesn’t happen magically: the use of these technologies does not automatically translate into improved development throughput. Instead he recommends the use of maturity models, citing the InfoQ Continuous Delivery maturity model, to identify your current capabilities and from this plan a future roadmap where the tools are deployed to achieve specific process improvements.
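
As a trivial sketch of that assessment step (the practice areas and levels are invented, not taken from the InfoQ model), the roadmap falls out of comparing current against target maturity per practice area:

```python
# Hypothetical maturity assessment: practice area -> (current level, target level), 1-5 scale.
assessment = {
    "build automation":      (3, 4),
    "test automation":       (2, 4),
    "deployment automation": (1, 3),
    "monitoring & feedback": (2, 4),
}

# A simple roadmap: tackle the largest capability gaps first.
for area in sorted(assessment, key=lambda a: assessment[a][0] - assessment[a][1]):
    current, target = assessment[area]
    print(f"{area}: level {current} -> {target} (gap {target - current})")
```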

 
