Algorithms for High Performance DevOps

Exciting as they are, Cloud Native technologies aren’t exempt from the practical considerations of adopting any new technology, most notably the business case, and especially the fact that organizational transformation is also a key ingredient for success. It is this combination that makes ‘Algorithms for High Performance DevOps’ possible.

In short, the adoption of new technology tools won’t address issues that arise from departmental conflicts and the bottlenecks they cause; an overall, holistic transformation of both technology and organization is required.

Identifying Process Constraints

This presentation from NTT Communications describes this constraint, making the specific point that tools could only achieve a certain level of improvement; their bigger challenges arose from organizational dynamics.

NTT make the critical observation that defines the limitations of their progress, and with it the need for broader organizational transformation. On slide 23, showing their future plans and challenges, they highlight how they reached the ceiling of what toolchains alone can achieve:

“Each section has its own Agile environment.

Difficult to promote collaboration over the sections.”

In other words, tools can only solve tool-related problems; those created by organizational boundaries therefore need organizational-level solutions.

Starting to define organizational change sets the scene for the role of Business Architecture, which provides the tools both to plan the desired future-state team and process models and to document current capabilities, often a big part of the challenge.

For example, Value Stream Mapping is the process of analyzing the flow of work across teams, identifying which activities add value to the customer workflow. This starts to expose the blockages caused by departmental silos, as NTT describe, and working out how best to optimize them then becomes the process of planning the DevOps transformation.
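
To make this concrete, here is a minimal Python sketch of the kind of calculation a value stream map supports, comparing value-add time with total lead time and flagging the largest hand-off wait. The step names and timings are illustrative assumptions, not data from NTT:

    # Value stream mapping sketch: each step is (name, value-add hours,
    # wait hours spent queued before the step starts). All figures are
    # hypothetical placeholders.
    value_stream = [
        ("Write code",     8,  2),
        ("Code review",    1, 16),   # hand-off between development teams
        ("QA testing",     4, 40),   # separate QA department silo
        ("Ops deployment", 1, 72),   # change-ticket queue to IT operations
    ]

    value_add = sum(v for _, v, _ in value_stream)
    lead_time = sum(v + w for _, v, w in value_stream)

    print(f"Value-add time:     {value_add} h")
    print(f"Total lead time:    {lead_time} h")
    print(f"Process efficiency: {value_add / lead_time:.0%}")

    # The largest wait shows where a silo boundary is blocking flow
    name, _, wait = max(value_stream, key=lambda step: step[2])
    print(f"Largest hand-off delay: {wait} h before '{name}'")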

Business Capability Team – From Silos to DevOps

DevOps also encompasses organizational and team practices, referring to the fusion of the previously distinct departmental functions of software development and IT operations, a separation that often leads to the kinds of challenges silos usually create. It sets out to break down the artificial boundaries that proliferate in large, hierarchical organizations, and instead to self-organize around a ‘delivery pipeline’ of the work required to deploy code faster and with fewer errors.

IT Revolution, one of the leading experts in the field, captures these challenges and describes the transformation to new models very effectively, citing the ‘Inverse Conway Manoeuvre’ as the technique for designing DevOps team flows.

Conway published revealing research in the 1960s showing that organizational performance is directly related to the departmental structures management chooses for organizing teams. For example, cost-centric functional approaches, such as grouping software development and IT operations into their own departments, result in local optimizations but long lead times overall, caused by the bottlenecks that arise from slow handoffs between them.

Agile DevOps teams have instead focused on the end-to-end process required to deliver new software and organized around it, implementing ‘Business Capability Teams’: multi-discipline teams that work together across the entire lifecycle. Martin Fowler closes the loop, describing how the approach goes hand in hand with the new microservices architecture, and Scott Prugh of AMC explores this transformation in detail in this slide deck.

Throughput Accounting

It is this level of transformation that will yield the real business improvements senior executives are hoping Cloud computing will deliver for them, and they can call upon management practices they already know, in particular Six Sigma and the Theory of Constraints (TOC), to quantify exactly how: these are the ‘Algorithms’ of the title.

These methods originated in manufacturing, applying whole-system design to improve the overall throughput of the production line rather than just local optimizations, and the same principles can be applied to any industry workflow; in software engineering they enable an equivalent ‘software factory’.
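
As a minimal sketch of that whole-system principle, the Python example below (stage names and weekly capacities are assumptions made up for illustration) shows how overall throughput is governed by the constraint stage, so optimizing a non-constraint stage changes nothing:

    # Theory of Constraints applied to a delivery pipeline: whole-system
    # throughput is limited by the slowest stage (the constraint).
    pipeline = {"develop": 30, "review": 25, "test": 10, "deploy": 40}

    def system_throughput(stages):
        """Work items per week the whole pipeline can deliver."""
        return min(stages.values())

    print("Baseline throughput:", system_throughput(pipeline))        # 10/week

    # Local optimization of a non-constraint stage: no overall gain
    pipeline["develop"] = 60
    print("After doubling 'develop':", system_throughput(pipeline))   # still 10

    # Elevating the constraint is what raises whole-system throughput
    pipeline["test"] = 20
    print("After elevating 'test':", system_throughput(pipeline))     # 20/week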

A highly recommended paper that explores this in detail is Productivity in Software Metrics by Derick Bailey, describing the application of TOC to software development, such as treating ‘User Stories’ as the unit of work, and including a framework for performance metrics based upon its principles (a worked sketch follows the list):

  • Inventory (V) and Quantity of Units of Production (Q) – how the software team quantifies what is ‘work in production’.
  • Optimizing Lead Time (LT) vs. Production (PR) – using Workload Management to schedule the optimal flow of work.
  • Investment (I), Operating Expense (OE) and Throughput (T) – maximizing Net Profit (NP) and Return on Investment (ROI), and calculating Average Cost Per Function (ACPF).
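
As a rough worked example of how these measures fit together, the Python sketch below applies the standard TOC relationships (Throughput = revenue minus totally variable costs, NP = T - OE, ROI = NP / I). All figures are invented, and the ACPF formula shown is an assumption rather than necessarily Bailey’s exact definition:

    # Throughput Accounting sketch; every number here is illustrative.
    revenue           = 500_000   # sales generated by delivered features
    totally_variable  = 50_000    # costs that vary directly with each sale
    investment        = 400_000   # I: money tied up in the system (inventory V)
    operating_expense = 300_000   # OE: cost of turning investment into throughput
    units_delivered   = 120       # Q: user stories / features shipped

    throughput = revenue - totally_variable           # T
    net_profit = throughput - operating_expense       # NP = T - OE
    roi        = net_profit / investment              # ROI = NP / I
    acpf       = operating_expense / units_delivered  # assumed ACPF definition

    print(f"Throughput (T):        {throughput:,}")
    print(f"Net Profit (NP):       {net_profit:,}")
    print(f"Return on Investment:  {roi:.1%}")
    print(f"Avg Cost Per Function: {acpf:,.0f}")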

The key ingredient is the science of ‘Throughput Accounting’, versus traditional cost accounting. Where the latter is concerned with static snapshots of financial reporting, Throughput Accounting, as the name suggests, focuses on the system dynamics of what actually drives cash flow.

Speeding Concept to Cash

Where traditional accounting presents a financial statement snapshot at one point in time, Throughput Accounting is concerned with identifying the work streams that generate cash flow and other business benefits, and with reporting on their performance relative to overall capacity.

In this blog Stackify make a great observation that helps illustrate the overall idea: that Agile and DevOps combine to holistically address the full lifecycle of translating business ideas into working code running in the Cloud hosting delivery environment.

Improving the throughput of this capability is how organizations increase innovation rates and bring more new products to market faster; this is the improvement to high performance.

Better synthesis of software development and infrastructure operations speeds up the overall cycle of new feature deployment, but of course it’s only a success if the ‘ka-ching’ part happens, a process described as the ‘Concept to Cash‘ lifecycle: the core mechanic of generating new ideas and turning them into revenue-generating business services.
