Part of the series: Capability is everywhere in the organisation. For architects it is a strategic element. For everyone else it is what they do daily.
Where this builds from
The first post in this series established the nature distinction: strategic capabilities are claims about potential, operational capabilities are claims about reality. ArchiMate formalises the separation in the language architects already use. The second post extended it with a conditions layer: a capability must be tested against the environment it must actually operate in, and naming those conditions is an act of judgement.

- Capability is not confused. It is occupied (Mar 9, 2026): Most capability work fails before it starts because it does not ask which type of claim it is making. A diagnostic language for enterprise and security architects.
- Having a capability and being able to exercise it are not the same thing (Mar 16, 2026): Formal capability is not real capability. A conditions layer for testing whether a capability holds in the environment it must actually operate in.

What remains is the question of what happens when the distinction collapses or the conditions go unnamed. This post introduces the failure modes that follow, the feedback loop that prevents them and the diagnostic instrument that brings all three elements together.
Failure modes
ArchiMate is precise on this point: the name of a capability should emphasise what we do rather than how we do it. A capability is what the organisation can do. The business function that realises it is how it is done. They are related. They are not the same.
A failure mode is what follows when that distinction collapses. Each of the six below traces a specific mismatch: between what a capability claims, how it was assessed and the conditions it must hold under. They are not exhaustive. They are the patterns most consistently produced when capability work does not distinguish the type of claim it is making.
Naming a failure mode is an act of judgement. It requires being precise about which mismatch is driving the breakdown. That precision is what makes the diagnosis actionable rather than merely descriptive.
| Mode | What breaks |
|---|---|
| Omission | The map looks complete, so what is missing is invisible. Enterprise models that have never been tested against operational reality. |
| Illusion | What could be achieved is overstated. Strategic roadmaps that assert capability without tracking whether the conditions for it are in place. |
| Myopia | What exists dominates. Operational processes that score well locally while missing the wider system they belong to. |
| Fiction | Formal capability is not real capability. Programmes that exist on paper and fail in practice because adoption was never tested. |
| Isolation | The system is assessed in pieces and the pieces do not add up. What emerges from interaction is invisible when each component is evaluated alone. |
| Drift | Capability forms and operates without governance. Adaptive systems that produce real outcomes that nobody has taken responsibility for. |
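
The table is prose, but nothing stops it being working data. The sketch below, in TypeScript, encodes the six modes as a typed lookup. The field names and wording are illustrative rather than drawn from any standard; the point is that a diagnosis names a specific mode and the mismatch driving it.

```typescript
// A sketch only: the six failure modes as a typed lookup. Wording and
// field names are illustrative, not drawn from any standard.

type FailureMode =
  | "Omission" | "Illusion" | "Myopia"
  | "Fiction" | "Isolation" | "Drift";

interface ModeDescriptor {
  mismatch: string; // the gap between claim, assessment and conditions
  symptom: string;  // what it typically looks like in practice
}

const FAILURE_MODES: Record<FailureMode, ModeDescriptor> = {
  Omission:  { mismatch: "model never tested against operational reality",
               symptom:  "the map looks complete, so gaps are invisible" },
  Illusion:  { mismatch: "capability asserted without tracking its conditions",
               symptom:  "what could be achieved is overstated" },
  Myopia:    { mismatch: "local scores stand in for the wider system",
               symptom:  "what exists dominates what is possible" },
  Fiction:   { mismatch: "formal capability never tested for adoption",
               symptom:  "exists on paper, fails in practice" },
  Isolation: { mismatch: "components assessed alone",
               symptom:  "what emerges from interaction stays invisible" },
  Drift:     { mismatch: "capability operating without governance",
               symptom:  "real outcomes nobody has taken responsibility for" },
};

// The union type is the payoff: add a seventh mode and the compiler
// demands a descriptor for it.
```
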
The feedback loop
Strategic capabilities need operational grounding. A capability roadmap never tested against operational evidence will drift. An enterprise model never updated from operational reality becomes fiction.
This is not a new observation. TOGAF builds the loop into the architecture lifecycle itself. Phase G requires that implementation produces feedback to the architecture team. Phase H exists to determine whether that evidence warrants a new evolution cycle. The loop is not optional. It is structural.
The O-AA Standard (Open Agile Architecture) operationalises it at the capability level. OKRs must be defined for each operational capability so that execution is measurable and strategy is revisable. The Standard is precise here too: measurement is the governing link between intent and execution.
What closes the loop in practice is operational evidence continuously revising strategic intent. Without it, strategy hardens into assertion and operations collapse into local optimisation. Digital twins, cyber ranges and continuous control monitoring are the instruments that make this possible. Their purpose is to close the gap between what the organisation claims and what it demonstrably does. They are not a separate category of capability. They are the feedback loop made tangible.
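
What that loop asks for can be written down as a check. The sketch below assumes an illustrative shape for claims and evidence; the okr field, the names and the ninety-day cadence are assumptions of mine, not anything TOGAF or the O-AA Standard prescribes.

```typescript
// A minimal sketch of the loop as a check rather than a process. The
// shape of claims and evidence, the okr field and the 90-day cadence
// are illustrative assumptions, not taken from TOGAF or O-AA.

interface OperationalEvidence {
  okr: string;       // the measurable objective behind the claim
  measuredAt: Date;  // when it was last measured
  met: boolean;      // whether the measurement supported the claim
}

interface StrategicClaim {
  capability: string;
  evidence: OperationalEvidence[];
}

const MAX_EVIDENCE_AGE_DAYS = 90; // illustrative review cadence

function loopIsClosed(claim: StrategicClaim, now = new Date()): boolean {
  // A strategic claim is grounded only while at least one piece of
  // evidence is both recent and met; otherwise it has hardened into
  // assertion.
  return claim.evidence.some(e => {
    const ageDays = (now.getTime() - e.measuredAt.getTime()) / 86_400_000;
    return e.met && ageDays <= MAX_EVIDENCE_AGE_DAYS;
  });
}
```

The threshold is the judgement call; the structure is not.
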
The model
The three posts in this series build to a diagnostic instrument with four elements: two capability categories, a conditions layer and a feedback loop. Strategy shapes what operations must deliver. Operations provide the evidence that revises strategy. Conditions define what either can honestly claim.
| Element | Role |
|---|---|
| Strategic | What the organisation claims it can achieve |
| Operational | What is demonstrably done |
| Feedback loop | Operational evidence revising strategic intent |
| Conditions | What the capability must hold under |
The full picture
The table below is the diagnostic instrument in a single view. Each row is a perspective: the question it asks, the frameworks that operationalise it, the nature of the capability claim it produces (strategic or operational), the condition it must hold under and the failure mode that follows when the perspective is misapplied or the condition goes unnamed. Adaptive systems span both natures because emergence is directional: operational discovery becomes strategic intent.
| Perspective | Core question | Frameworks | Nature | Conditions | Failure mode |
|---|---|---|---|---|---|
| Enterprise modelling | What are we structured to do? | Zachman, BIZBOK | Strategic | Transformation | Omission |
| Strategic planning | What can we achieve? | TOGAF, Gartner, DoD | Strategic | Change | Illusion |
| Operational performance | What is demonstrably done? | CMMI, COBIT 2019, ITIL | Operational | Maturity | Myopia |
| Human development | Can this capability actually be exercised? | Sen, Nussbaum, UNDP, McKinsey OHI | Operational | Adoption | Fiction |
| Systems engineering | What fails when components interact? | INCOSE | Operational | Stress | Isolation |
| Adaptive systems | What becomes possible when components interact? | Cynefin | Operational to Strategic | Complexity | Drift |
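
Treating the table as data makes the instruction to name the perspective before the modelling begins concrete. The sketch below mirrors the rows above; the shape is illustrative, not a published schema, but once the perspective is named, the condition to test and the failure mode to watch for follow mechanically.

```typescript
// The table above as a typed lookup. The shape is illustrative, not a
// published schema; the rows mirror the table verbatim.

interface PerspectiveRow {
  coreQuestion: string;
  nature: "Strategic" | "Operational" | "Operational to Strategic";
  condition: string;
  failureMode: string;
}

const PERSPECTIVES: Record<string, PerspectiveRow> = {
  "Enterprise modelling": {
    coreQuestion: "What are we structured to do?",
    nature: "Strategic", condition: "Transformation", failureMode: "Omission",
  },
  "Strategic planning": {
    coreQuestion: "What can we achieve?",
    nature: "Strategic", condition: "Change", failureMode: "Illusion",
  },
  "Operational performance": {
    coreQuestion: "What is demonstrably done?",
    nature: "Operational", condition: "Maturity", failureMode: "Myopia",
  },
  "Human development": {
    coreQuestion: "Can this capability actually be exercised?",
    nature: "Operational", condition: "Adoption", failureMode: "Fiction",
  },
  "Systems engineering": {
    coreQuestion: "What fails when components interact?",
    nature: "Operational", condition: "Stress", failureMode: "Isolation",
  },
  "Adaptive systems": {
    coreQuestion: "What becomes possible when components interact?",
    nature: "Operational to Strategic", condition: "Complexity", failureMode: "Drift",
  },
};

// Name the perspective and the rest follows: the question being asked,
// the condition to test and the failure mode to watch for.
const row = PERSPECTIVES["Strategic planning"];
console.log(`${row.coreQuestion} Test under: ${row.condition}. Watch for: ${row.failureMode}.`);
```
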
The model as diagnostic instrument
The diagnostic instrument does three things. It explains why capability definitions differ: each perspective is asking a different question. It guides which definition to use: by naming the perspective before the modelling begins. It diagnoses what goes wrong when the wrong definition is applied: by tracing the failure back to a mismatch between perspective, nature and condition.
NIST CSF illustrates the model in action. The core functions (Govern, Identify, Protect, Detect, Respond and Recover) are strategic capability statements. The implemented controls beneath them are operational. The framework exists to answer the conditions question: do these capabilities hold under adversarial pressure? The feedback loop is what makes the answer reliable.
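
The same reading can be sketched directly. Here the six functions are strategic statements and the controls beneath them are operational claims; the control identifiers and the tested-under-pressure flag are invented for illustration, not drawn from the framework, but the conditions question reduces to a one-line check per function.

```typescript
// A sketch of the NIST CSF 2.0 reading above: the six functions as
// strategic statements, implemented controls as operational claims
// beneath them. Control ids and statuses are invented for illustration;
// they are not CSF identifiers.

type CsfFunction =
  | "Govern" | "Identify" | "Protect"
  | "Detect" | "Respond" | "Recover";

interface Control {
  id: string;                   // hypothetical internal control id
  testedUnderPressure: boolean; // e.g. exercised in a cyber range
}

const controls: Record<CsfFunction, Control[]> = {
  Govern:   [{ id: "CTL-101", testedUnderPressure: true }],
  Identify: [{ id: "CTL-214", testedUnderPressure: true }],
  Protect:  [{ id: "CTL-305", testedUnderPressure: false }],
  Detect:   [{ id: "CTL-412", testedUnderPressure: true }],
  Respond:  [{ id: "CTL-520", testedUnderPressure: false }],
  Recover:  [{ id: "CTL-608", testedUnderPressure: false }],
};

// The conditions question, asked per function: does the strategic
// statement hold under adversarial pressure, i.e. has every control
// beneath it actually been tested under it?
for (const [fn, cs] of Object.entries(controls)) {
  const holds = cs.every(c => c.testedUnderPressure);
  console.log(`${fn}: ${holds ? "holds under stress" : "untested claim"}`);
}
```
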
Three examples
These are not edge cases. They are typical outcomes of capability work that does not distinguish type, condition and evidence.
Strategic failure
An organisation builds a capability roadmap for digital transformation. Product management is listed as a strategic capability. The roadmap looks complete and well structured. Eighteen months later delivery is inconsistent, roadmaps drift and stakeholders have lost confidence.
The post-mortem finds the capability was defined and assessed strategically, focused on what could be achieved, but never grounded operationally. Nobody asked what was demonstrably done. The feedback loop was never closed.
| Summary | |
|---|---|
| Perspective | Strategic planning. What can we achieve? |
| Capability | Product management |
| Nature | Strategic. A claim about potential, not demonstrated reality |
| Condition | Change. Never tested against operational evidence to confirm the direction was achievable |
| Failure mode | Illusion. What was possible was overstated because reality was never consulted |
| Root cause | Feedback loop. Operational evidence never revised the strategic claim |
Operational failure
A security programme lists real-time threat intelligence as an operational capability. It appears in the maturity assessment at level 3. The process is documented, the tooling is in place and the coverage looks complete. Eighteen months later a significant breach occurs through a threat vector that was well known in the intelligence community.
The post-mortem finds the capability was assessed against process consistency, such as whether the feed was ingested, documented and reviewed, but never against outcomes. The organisation had the process. It did not have the capability. Process compliance substituted for capability.
| Summary | |
|---|---|
| Perspective | Operational performance. What is demonstrably done? |
| Capability | Real-time threat intelligence |
| Nature | Operational. Evidence-grade, but scoped to individual processes |
| Condition | Stress. Adversarial pressure the assessment was never designed to test |
| Failure mode | Isolation. The interaction between controls was invisible when each was assessed alone |
| Root cause | Conditions. The stress condition was never applied to the assessment |
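
The root cause is mechanically checkable. A sketch, with illustrative names: compare the conditions the assessment actually tested against the conditions the claim must hold under, and the gap is the diagnosis.

```typescript
// The root cause in this example, as a check. All names are
// illustrative: compare the conditions the assessment actually tested
// against the conditions the claim must hold under.

interface Assessment {
  capability: string;
  conditionsAssessed: Set<string>; // what the assessment tested
  conditionsRequired: Set<string>; // what the claim must hold under
}

function unassessedConditions(a: Assessment): string[] {
  return [...a.conditionsRequired].filter(c => !a.conditionsAssessed.has(c));
}

const threatIntel: Assessment = {
  capability: "Real-time threat intelligence",
  // Ingested, documented, reviewed: process consistency only.
  conditionsAssessed: new Set(["Maturity"]),
  // The claim also had to hold under adversarial pressure.
  conditionsRequired: new Set(["Maturity", "Stress"]),
};

console.log(unassessedConditions(threatIntel)); // ["Stress"]
```
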
Nature mismatch: Operational treated as Strategic
A security team has built a mature vulnerability management programme over several years. Scanning is consistent, remediation cycles are tracked and the process scores well on every assessment.
But the programme has never been connected to the organisation's strategic risk posture. High-severity vulnerabilities in non-critical systems are remediated ahead of medium-severity vulnerabilities in business-critical ones, because the process optimises for severity score, not for strategic exposure. The capability works. It is working on the wrong thing. Local optimisation replaced strategic intent.
| Summary | |
|---|---|
| Perspective | Operational performance. What is demonstrably done? |
| Capability | Vulnerability management |
| Nature | Operational. Demonstrably done, but never grounded in strategic intent |
| Condition | Change. The strategic risk posture shifted and the operational programme did not follow |
| Failure mode | Myopia. Local optimisation, missing the wider strategic picture |
| Root cause | Nature mismatch. Operational capability never connected to strategic intent |
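
The mismatch is visible in two sort orders over the same backlog. In the sketch below the exposure weighting is invented for illustration; what matters is that severity ordering and exposure ordering disagree exactly where strategic intent says they must not.

```typescript
// The mismatch in this example as two sort orders over one backlog.
// The exposure weighting is invented for illustration; the point is
// only that the two orderings disagree exactly where it matters.

interface Vulnerability {
  id: string;
  severity: number;          // e.g. CVSS base score, 0 to 10
  businessCritical: boolean; // does it sit in a business-critical system?
}

const backlog: Vulnerability[] = [
  { id: "VULN-A", severity: 9.1, businessCritical: false },
  { id: "VULN-B", severity: 6.4, businessCritical: true },
];

// What the process optimises for: severity score alone.
const bySeverity = [...backlog].sort((a, b) => b.severity - a.severity);

// What strategic intent asks for: severity weighted by where it lands.
const exposure = (v: Vulnerability) => v.severity * (v.businessCritical ? 2 : 1);
const byExposure = [...backlog].sort((a, b) => exposure(b) - exposure(a));

console.log(bySeverity.map(v => v.id)); // ["VULN-A", "VULN-B"]
console.log(byExposure.map(v => v.id)); // ["VULN-B", "VULN-A"]
```
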
What experienced architects already know
Experienced architects already navigate this terrain. They move between perspectives, adjust language to context and apply conditions without naming them. When they say a capability looks good on paper but needs to be seen under pressure, they are testing conditions. When they ask whether it actually works, they are invoking the feedback loop.
What is missing is not intuition. It is a shared language precise enough to make that intuition transferable across disciplines and legible to the people who work alongside architects.
The proliferation of definitions and models across this series is itself a signal. Capability resists unification because it is doing different work in different contexts. The multiplicity is not a flaw in the concept. It is a feature of the territory.
An architect fluent across these meanings, who can move between strategic and operational and who understands what the conditions column is asking, is not navigating confusion. They know which maturity model they are reaching for and why. They have asked what each capability must hold under. They are working with the full map. Most organisations are not.
Conclusion
Capability thinking is not a single definition. It is a way of reading what an organisation can achieve, what it does achieve and what holds under the conditions it faces. The model does not simplify the territory. It makes it legible.
Name the perspective. Test the conditions. Close the loop. Anything less is guesswork.
References
Standards and frameworks
- ArchiMate: The Open Group. ArchiMate 3.2 Specification. Document Number: C226. Published October 2022.
- The Open Group Architecture Framework (TOGAF): The Open Group. TOGAF Standard, various editions.
- O-AA Standard: The Open Group. Open Agile Architecture (O-AA) Standard.
- NIST Cybersecurity Framework (CSF): National Institute of Standards and Technology. Cybersecurity Framework, Version 2.0 (2024).
- CMMI: CMMI Institute. Capability Maturity Model Integration (CMMI).
- COBIT 2019: ISACA. COBIT 2019 Framework.
- ITIL: AXELOS. ITIL 4 Foundation.
- INCOSE Systems Engineering Handbook: INCOSE. Systems Engineering Handbook, 5th ed. (2023). Wiley.
- Cynefin Framework: Snowden, D.J. and Boone, M.E. (2007). A Leader's Framework for Decision Making. Harvard Business Review, 85(11), pp.68-76.
- DoD Architecture Framework (DoDAF): U.S. Department of Defense. DoD Architecture Framework, Version 2.02.
- Zachman, J.A. (1987). A Framework for Information Systems Architecture. IBM Systems Journal, 26(3).
- BIZBOK Guide: Business Architecture Guild. A Guide to the Business Architecture Body of Knowledge.
Academic and human development
- Sen, A. (1999). Development as Freedom. Oxford University Press.
- Nussbaum, M.C. (2011). Creating Capabilities: The Human Development Approach. Harvard University Press.
- UNDP Human Development Reports: United Nations Development Programme.
- McKinsey Organizational Health Index (OHI): McKinsey & Company.