Algorithms for High Performance DevOps

Exciting as they are, Cloud Native technologies aren’t exempt from the practical considerations of adopting any new technology, most notably the business case, and especially the fact that organizational transformation is also a key ingredient for success. It is this combination that makes ‘Algorithms for High Performance DevOps’ possible.

In short, the adoption of new technology tools won’t address issues that arise from departmental conflicts and the bottlenecks they cause; an overall, holistic transformation of both technology and organization is required.

Identifying Process Constraints

This presentation from NTT Communications describes the challenge, making the specific point that tools could only achieve a certain level of improvement; their bigger challenges arose from organizational dynamics.

NTT make the critical observation that defines the limits of their progress, and with it the need for broader organizational transformation. On slide 23, showing their future plans and challenges, they highlight how they reached the ceiling of what toolchains alone can achieve:

“Each section has its own Agile environment.

Difficult to promote collaboration over the sections.”

In other words, tools can only solve tool-related problems; those created by organizational boundaries need organizational-level solutions.

Starting to define organizational change sets the scene for the role of Business Architecture, which provides the tools both to plan the desired future-state team and process models and to document current capabilities, often a big part of the challenge.

For example, Value Stream Mapping is the process of analyzing the flow of work across teams, literally identifying the activities that add value to the customer workflow. This starts to surface the blockages caused by departmental silos, as NTT describe. Working out how to optimize them then becomes the process of planning the DevOps transformation.
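As a minimal sketch of the arithmetic behind such a map, the example below models a hypothetical delivery stream as stages with value-adding process time and idle waiting time, then computes overall lead time, flow efficiency and the largest queue. The stage names and hours are illustrative assumptions, not figures from NTT or any other source.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    process_hours: float  # time spent actively adding value
    wait_hours: float     # time the work sits idle, e.g. waiting on another team

# Hypothetical value stream for a single feature (illustrative numbers only)
stream = [
    Stage("Analysis",       process_hours=4,  wait_hours=40),
    Stage("Development",    process_hours=16, wait_hours=8),
    Stage("QA hand-off",    process_hours=6,  wait_hours=72),
    Stage("Ops deployment", process_hours=2,  wait_hours=48),
]

total_process = sum(s.process_hours for s in stream)
total_lead = sum(s.process_hours + s.wait_hours for s in stream)
flow_efficiency = total_process / total_lead

# The stage with the longest queue is the obvious first constraint to attack
bottleneck = max(stream, key=lambda s: s.wait_hours)

print(f"Lead time:         {total_lead:.0f} h")
print(f"Value-adding time: {total_process:.0f} h")
print(f"Flow efficiency:   {flow_efficiency:.0%}")
print(f"Largest queue:     {bottleneck.name} ({bottleneck.wait_hours:.0f} h waiting)")
```

Even in this toy model most of the lead time is queueing between departments, which is exactly the kind of blockage that no single team’s tooling can remove.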

Business Capability Team – From Silos to DevOps

DevOps also encompasses organizational and team practices, referring to the fusion of the previously distinct departmental functions of software development and IT operations, a division that often leads to the kinds of challenges silos usually create. It sets out to break down the artificial boundaries that develop profusely in large, hierarchical organizations, and instead to self-organize around a ‘delivery pipeline’ of the work required to deploy code faster and with fewer errors.

IT Revolution, one of the leading experts in the field, captures these challenges and describes the transformation to new models very effectively, citing the ‘Inverse Conway Manoeuvre’ as the technique for designing DevOps team flows.

Conway published revealing research in the 1960s showing that the systems an organization designs mirror its communication structure, so organizational performance is directly shaped by the department structures management chooses for its teams. For example, cost-centric functional approaches, such as grouping software development and IT operations into their own departments, result in local optimizations but long lead times overall, caused by the bottlenecks that arise through slow hand-offs between them.

Agile DevOps teams have instead focused on the end-to-end process required to deliver new software and organized around it, implementing ‘Business Capability Teams’: multi-discipline teams that work together across the entire lifecycle. Martin Fowler closes the loop, describing how the approach goes hand in hand with the new microservices architecture, and Scott Prugh of AMC explores this transformation in detail in this Slidedeck.

Throughput Accounting

It is this level of transformation that will yield the real business improvements senior executives are hoping Cloud computing will deliver for them, and they can call upon management practices they already know, in particular Six Sigma and the Theory of Constraints (TOC), to quantify exactly how: the ‘Algorithms’ of the title.

These methods originated in manufacturing, applying whole-system design to improve the overall throughput of the production line rather than just making local optimizations, and the same principles can be applied to any industry workflow; in software engineering they enable an equivalent ‘software factory’.

A highly recommended paper that explores this in detail is Productivity in Software Metrics by Derick Bailey, describing the application of TOC to software development, such as treating ‘User Stories’ as the unit of work, and including a framework for performance metrics based upon its principles, illustrated with a short worked sketch after the list:

  • Inventory (V) and Quantity: Unit of Production (Q) – How does the software team quantify what is ‘work in production’?
  • Optimizing Lead Time (LT) vs. Production (PR) – Using Workload Management to schedule the optimal flow of work.
  • Investment (I), Operating Expense (OE) and Throughput (T) – Maximizing Net Profit (NP) and Return on Investment (ROI) and calculating Average Cost Per Function (ACPF).
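The Throughput Accounting relationships behind these metrics amount to a few lines of arithmetic. The sketch below uses the standard TOC definitions (Throughput T = revenue minus totally variable costs, Net Profit NP = T - OE, ROI = NP / I); the figures are invented, and Average Cost Per Function is computed here simply as operating expense spread over features delivered, an assumption for illustration rather than Bailey’s exact formula.

```python
def throughput(revenue: float, totally_variable_costs: float) -> float:
    """TOC Throughput (T): the rate at which the system generates money through sales."""
    return revenue - totally_variable_costs

def net_profit(t: float, operating_expense: float) -> float:
    """NP = T - OE."""
    return t - operating_expense

def roi(np_value: float, investment: float) -> float:
    """ROI = NP / I."""
    return np_value / investment

# Illustrative quarter for a software team (all numbers are invented)
T = throughput(revenue=500_000, totally_variable_costs=50_000)
NP = net_profit(T, operating_expense=300_000)
ROI = roi(NP, investment=1_200_000)

features_delivered = 30
acpf = 300_000 / features_delivered  # assumed definition: OE per delivered feature

print(f"T = {T:,.0f}  NP = {NP:,.0f}  ROI = {ROI:.1%}  ACPF = {acpf:,.0f}")
```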

The key ingredient is the science of ‘Throughput Accounting’, versus traditional cost accounting. As the name suggests, where the latter is concerned with static snapshots of financial reporting, Throughput Accounting focuses on the system dynamics of what actually drives cash flow, reporting on the performance of work streams relative to overall capacity.

Accelerating Digital Innovation – Speeding Concept to Cash

To determine the business case for the Cloud Native technologies of microservices, containers and Continuous Deployment, the key dynamic is ‘Business Value Throughput’, i.e. not just speeding up the production of deployed code but of software that adds quantifiable value to the organization.
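One hypothetical way to put a number on Business Value Throughput is to weight each deployed feature by an estimated business value and divide by elapsed time, rather than simply counting deployments. The feature names, value estimates and dates below are invented for illustration only.

```python
from datetime import date

# (feature, estimated business value in $, ship date) -- illustrative data only
shipped = [
    ("checkout-redesign",    120_000, date(2024, 1, 15)),
    ("fraud-scoring",         80_000, date(2024, 2, 3)),
    ("internal-report-tool",   5_000, date(2024, 2, 20)),
]

window_start, window_end = date(2024, 1, 1), date(2024, 3, 31)
weeks = (window_end - window_start).days / 7

deployment_rate = len(shipped) / weeks                    # raw deployment speed
value_throughput = sum(v for _, v, _ in shipped) / weeks  # value-weighted speed

print(f"Deployments per week:      {deployment_rate:.2f}")
print(f"Business value $ per week: {value_throughput:,.0f}")
```

The point of the distinction: a team can raise the first number without moving the second, which is why value weighting matters when building the business case.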

In this blog, Stackify make the great observation, which helps illustrate the overall idea, that Agile and DevOps combine to holistically address the full lifecycle of translating business ideas into working code running in the Cloud hosting delivery environment.

Improving the throughput of this capability is how organizations increase innovation rates and bring more new products to market faster: the improvement to high performance. Better synthesis of software development and infrastructure operations speeds up the overall cycle of new feature deployment, but of course it’s only a success if the ‘ka ching’ part happens.

This “ah ha > ka ching!” process is often described as the ‘Concept to Cash‘ lifecycle, the core mechanic of generating new ideas and turning them into revenue-generating business services of some kind.

Continuous Deployment, Continuous Innovation

What this establishes is a direct link between technology and business practices, notably Continuous Deployment and Continuous Innovation, where the optimized software process is harnessed to accelerate this Concept to Cash lifecycle, and can be combined with business practices such as Lean to maximize development profitability.

ThoughtWorks provide an excellent article exploring this implementation, How to Practice Continuous Innovation; Solutions IQ offer a presentation on a 7 Minute Case Study of Agile and Concept to Cash; and there is a free sample chapter available of the book Implementing Lean Software Development – From Concept to Cash.

Another example is the Amazon white paper ‘Jenkins on AWS‘, where they provide a recipe for implementing Continuous Deployment on AWS, using software such as Jenkins running on their Cloud service.

Critically, the paper also describes how this capability is the foundation for higher-level business benefits derived from the same productivity improvements, i.e. faster software = faster product innovation. This is equally described as Continuous Innovation, the business-level transformation achieved through adopting Continuous Deployment building blocks.

This defines the core mechanic of how to harness maturity-model planning for Cloud computing, highlighting how the fast, easy deployment that IaaS offers can be leveraged to make available applications that themselves speed further adoption.

Maturity models

A number of maturity models are available to map the details of this evolution. ThoughtWorks offer this Forrester paper describing it within a context of executive strategy and perceptions of IT, setting the scene for where many organizations currently are. HP, IBM, InfoQ and Arun Gupta of AWS each offer a DevOps and CD maturity model.

In short, each of these describes an approach of automating key steps to eliminate the various manual procedures of development and release management, as the mechanism for maturing the overall capability and speeding software throughput. This TechTarget article describes this maturity progression in a very simplified form that the models provide the detail for.
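As a rough sketch of what automating those manual steps looks like, the example below chains build, test and deploy commands so a release candidate either passes every gate or is stopped automatically. The specific commands and the deploy script are placeholders for whatever toolchain a team actually uses, not a prescription from the models above.

```python
import subprocess
import sys

# Each stage replaces a manual hand-off; the commands are placeholders for
# whatever build, test and deploy tooling a given team actually runs.
PIPELINE = [
    ("build",  ["docker", "build", "-t", "myapp:candidate", "."]),
    ("test",   ["pytest", "-q"]),
    ("deploy", ["./deploy.sh", "staging"]),  # hypothetical deploy script
]

def run_pipeline() -> None:
    for stage, cmd in PIPELINE:
        print(f"--- {stage}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"Stage '{stage}' failed; release stopped automatically.")
    print("All gates passed; candidate promoted.")

if __name__ == "__main__":
    run_pipeline()
```

Maturing the capability then means moving more of the remaining manual gates (approvals, environment provisioning, rollbacks) into this kind of automated sequence.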

Throughput Accounting – Business Value Metrics

To close the loop and map these development improvements to Business Value, management can apply the Throughput Accounting and Theory of Constraints practices described above to define their ‘DevOps Algorithms’.

These are practices that enable the development process to be viewed as a factory-like workflow, so you can then identify the overall throughput rates, constraints in the ‘production line’ that slow progress and reduce output rates, and so forth.

Doing so enables you to view the organization through a business performance-centric lens and apply optimization best practices such as ‘Value Stream Mapping‘, identifying the right tools and techniques to apply at the right process points and thus improving developer productivity within a context of Business Value generation. In the DevOps article Lean Value Stream Mapping for DevOps, the IBM author describes how they use this kind of process optimization goal as a way of better organizing software development and innovation.

Technology Metrics and Toolchains – Continuous Containers

As the base unit for improvement design, this analysis can also identify and measure technology metrics, and from these inform which permutation of DevOps tools to use.

For example, Lori McVittie of F5 provides a great blog on developing KPIs around system-level performance relevant to DevOps goals, formulating a set of metrics for:

  • MTTR – Mean Time To Recover
  • LTTC – Lead Time To Change
  • MTTL – Mean Time to Launch

Lori identifies the core benefit of containers versus virtual machines in terms of spin-up times, and how this would be especially beneficial in high-traffic, high-demand scenarios such as software-defined networking. From this she identifies a number of variables that define a performance algorithm.
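A minimal sketch of how such KPIs might be computed from recorded events, together with the spin-up comparison that makes the container case, is shown below; all timings and event data are invented for illustration and are not taken from Lori’s post.

```python
from statistics import mean

# Invented event data: (failure_detected_minute, service_restored_minute)
incidents = [(10, 25), (100, 112), (300, 340)]
# (change_requested_minute, change_live_minute)
changes = [(0, 480), (600, 900)]
# (instance_requested_minute, instance_serving_minute) for new capacity
launches = [(50, 53), (200, 202)]

mttr = mean(end - start for start, end in incidents)  # Mean Time To Recover
lttc = mean(end - start for start, end in changes)    # Lead Time To Change
mttl = mean(end - start for start, end in launches)   # Mean Time to Launch

print(f"MTTR {mttr:.1f} min, LTTC {lttc:.1f} min, MTTL {mttl:.1f} min")

# Illustrative spin-up comparison: how much sooner extra capacity arrives
vm_spinup_s, container_spinup_s = 90, 2  # assumed typical values, not benchmarks
instances_needed_per_spike = 20
saved = instances_needed_per_spike * (vm_spinup_s - container_spinup_s)
print(f"Capacity ready {saved} s sooner per traffic spike with containers")
```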

This type of analysis identifies the constraints in technical operations that can slow overall throughput, and therefore what types of tools and automation to apply.

Conclusion: Maturity-model Driven Cloud Native Agility

The Cloud Native mix of microservices, containers and Continuous Deployment offers a combination of architecture and tools that can bring these types of improvements to any enterprise software team.

David Linthicum describes how containers such as Docker enable continuous operations. Rancher also explores this relationship in their white paper How to Supercharge your CI Pipeline.

However, in 4 Myths about Containers and Continuous Delivery, Todd DeCapua explains that this doesn’t happen magically: the use of these technologies does not automatically translate into improved development throughput. Instead he recommends the use of maturity models, quoting the InfoQ Continuous Delivery maturity model, to identify your current capabilities and from these plan a future roadmap where the tools are deployed to achieve specific process improvements.

 
