Tuesday, March 1, 2011

Capability taxonomies can kill

The power of taxonomies never ceases to amaze me. We use a lot of (explicit AND implicit) taxonomies and other types of classifications in defense planning - as we of course also do in other walks of life. Most of us are not even aware that we put things into various mental 'boxes' for all sorts of purposes ("she's one/not one of us"; "he is attractive/not attractive"; etc.). Some of the more geeky amongst us might be more conscious of this, but might still think these categories are innocent, 'objective' mental 'aids' that just assist us in moving from the more abstract to the more concrete. My own take is that the taxonomies we currently use in defense are among the main culprits behind the 'armed forces' we have now, and behind why we have too much of certain types of military capability and too little of others.


Taxonomy is the study of the general principles of scientific classification, and taxonomies are hierarchical classifications - the most popular probably being the biological taxonomy named after the Swedish botanist Carl Linnaeus that 'neatly' organizes all forms of life into different classes and subclasses.

We certainly have 'boxes' galore in defense and in defense planning. We have 'environmental' boxes (air, land, sea - and now also cyber and space), temporal ones (e.g. short/medium/long-term; or the different phases of a conflict), geographical ones (nearby/far away; conflict zones; areas of responsibility), civil/military ones, technological ones (these really come in all sorts of colors and stripes), various functional ones (e.g. task lists, DOTMLPF/TEPIDOIL/PRICIE/...), and the list goes on. In the past century, the dominant categorization in defense has been the environmental one as embodied in our 'services': air, land, sea. By 'slicing' defense that way, our armed forces almost automatically obtain some 'balance' of capabilities between the (existing) services. In a recent (in many ways unique) defense benchmarking effort, for instance, McKinsey even used the average equipment mix of the United Kingdom and France as their 'ideal' target mix, because it felt that both countries have a good balance of army, navy, and air force equipment in all major categories. As soon as we have boxes, we always seem to want to fill them in some 'balanced' way. But we rarely stop to question whether this particular 'boxing method' is the only one, or even the 'best' one from a point of view of effectiveness and efficiency. It is extremely unlikely that a country would decide to 'give up' one of the services. There are still a fair number of land-locked countries with a Navy (even Austria, a land-locked country since 1918, only gave up its Navy in 2006). But it is also extremely difficult to add new capabilities that do not (or not easily) 'fit' into the category - such as air forces after World War II, or new cybercapabilities today.

There is a rich literature in cognitive psychology about the various mental 'shortcuts' (often called heuristics) and biases in our brains that help us make sense of the world around us, but that also impair our abilities to think logically or rationally [My own bible on this is Jonathan Baron's Thinking and Deciding]. We may think that some of us are (systematically) better or worse at this than others (yet another classification), but there is ample empirical evidence that this is just a fundamental design 'flaw' in humans. It's what we do. It helps us in managing our daily lives. But it also presents big dangers. Using taxonomies or 'putting things in boxes' just happens to be one of these heuristics.

There can be no doubt that we 'need' boxes in defense. 'Defense' is too big and unwieldy a concept to start developing capabilities 'out of the blue'. So we have to slice up that bigger pie into smaller, more chewable pieces. We also need to do so because there are service-specific attributes (specialized knowledge, kit, know-how, networks, etc.) that require separate, specialized care. It just makes sense to make airmen (chiefly) responsible for the air-specific parts of defence planning.

But we also have to realize that as soon as we start slicing things up, the slices tend to acquire a life of their own. They become parochial and often stovepiped. They will fight for 'their' capabilities, even if it is to the detriment of the 'other' slices or - worse yet - of the whole. And the responsibility for the 'whole' (as opposed to the parts) may become more difficult to safeguard against the knowledge and power of the parts. This is not just an inter-service rivalry issue - we see the same occur between the civilian and military sides of defense organizations, between short-termers and long-termers, etc. It is also not just a 'political' issue between groups of people (my grandfather used to say: wherever there are people, there will be 'peopling' going on), but also a purely analytical problem that analysts should find better solutions for.

Let me illustrate this with a concrete example from the NATO defense planning process. This (new and indeed somewhat improved) process goes through a number of steps to ensure that NATO can identify the 'right' capabilities that are required to cover the level of ambition it has set itself and to equitably apportion the burden of providing these capabilities.


The New NATO Defence Planning Process
In the first step, the NATO political leadership agrees upon a set of mission types it feels the Alliance should collectively be able to cover and on a specific (politically determined) level of ambition - essentially how many of which missions it wants to be able to successfully perform simultaneously. These missions include a number of very 'tough' missions such as collective defense or peacemaking in non-permissive environments, but also some 'softer' ones such as humanitarian operations. This list of missions is obviously also a categorization, but in this case it is a fully political one - so there is not much we analysts can do with this other than to use it. That is different in the second, more analytical (and analytically perilous) step, where this political guidance has to be somehow converted into concrete capability requirements. Here analysis has the lead - and in my eyes also the primary responsibility for trying to get it as right as possible.

At NATO, this second step falls under the responsibility of Allied Command Transformation (ACT), one of NATO's two major strategic commands, located in Norfolk, VA and currently headed by a European four-star, French General Stéphane Abrial. ACT is assisted in this task by NC3A, the NATO agency that houses much of the Alliance's (astonishingly sparse) operations research capability. NC3A has developed a tool suite for this process that is riddled with seemingly purely analytical taxonomies/categorizations that play a key role in the requirements-setting process.

Mission-to-Task Decomposition
It would be impossible to derive meaningful capability requirements from the fairly abstract mission types alone. Even within the mission type of 'collective defence', for instance, the actual capabilities required to defend - and I am giving illustrative examples here - Norway against Russia (far up North, with severe winters, a rugged coastline with significant maritime challenges, against a still fairly - and increasingly - impressive Russian order of battle) will differ dramatically from those needed to defend Turkey against Syria (with a radically different climate, topography, and military opponent) or the entire Alliance against a cyberattack. So the first thing analysts do here is transform the mission types into more concrete story-lines that span the full set of approved missions AND more real-life, planning-relevant parameters (such as climate, geography, etc.). There has always been (and continues to be) much politicking about these story-lines and how concrete they should/can be. NC3A has recently developed so-called 'Zoran Sea' scenarios - an imaginary but concrete setting that replicates many potential real-life scenarios with made-up countries, geography, etc. [I always think of it as akin to Africa but with a big sea in the middle of it, surrounded by anonymized Russias, Irans, Somalias, etc.]. So whether real-life or 'generic' - we use scenarios to help us make the big leap from missions to capabilities [For a defense planning approach that does not use scenarios - but that can also not be seen as an independent methodology - see my entry on Joint Ops 2030].

Yet even with these more concrete contextual elements, we still need analytical tools to identify what capabilities would be required to successfully deal with such contingencies. And this is where the taxonomies come in. One of the key taxonomies used here is the taxonomy of tasks and subtasks that have to be fulfilled to achieve the political objectives described in the contextual vignettes. These lists are now included in an NC3A tool called J-DARTS (Joint Defence Planning Analysis and Requirements Tool Set), an end-to-end tool for capability-based planning that builds on previous (less elegant and less comprehensive) incarnations of the same idea. The following image presents a screen dump from a part of J-DARTS called D-MIST (Defence Planning Mission Study Tool), an application that supports the hierarchical decomposition of missions into tasks (as well as the creation, storage and visualisation of individual scenarios and scenario descriptions).



As the image shows on the left-hand side, the generic (but concrete) scenario here is a Zoran Sea scenario dealing with the mission type 'evacuation'. A crisis in a fictitious failing state in an oil- and resource-rich region with some countries that are both friendly AND not-so-friendly with NATO triggers a humanitarian crisis that requires evacuation. On the right-hand side, we see how capabilities are derived from this in a number of steps: on the bottom right, for instance, we see the various tasks NATO armed forces will have to be able to execute - starting with the task 'planning of the operation' (which itself includes a few layers of subtasks), continuing with exercising command and control over the operation, etc. In subsequent steps, other tools then allow planners to translate these tasks into concrete capabilities (down to the platform level).
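For readers who think better in code than in screen dumps: the hierarchical decomposition D-MIST supports is, at its core, a simple tree of tasks whose leaves are what later steps map to concrete capabilities. Here is a minimal sketch of that idea - the class, the task names, and the structure are my own illustrative inventions, not the actual J-DARTS data model:

```python
# Hypothetical sketch of a mission-to-task decomposition tree, loosely modeled
# on the D-MIST idea described above. All names and structure are illustrative.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def leaf_tasks(self) -> list["Task"]:
        """Collect the lowest-level tasks - the ones later steps would map to capabilities."""
        if not self.subtasks:
            return [self]
        leaves = []
        for sub in self.subtasks:
            leaves.extend(sub.leaf_tasks())
        return leaves

# An 'evacuation' mission decomposed into tasks and subtasks
evacuation = Task("Evacuation mission", [
    Task("Plan the operation", [
        Task("Assess the situation"),
        Task("Develop courses of action"),
    ]),
    Task("Exercise command and control"),
    Task("Move evacuees to safe havens"),
])

for task in evacuation.leaf_tasks():
    print(task.name)
```

The point of writing it this way is to make the fragility visible: whatever is not a node in this tree simply never becomes a capability requirement downstream.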

I have always thought the task lists are one of the big Achilles heels in our current approach to capabilities-based planning. They are far from trivial to produce. NATO analysts have clearly put much thinking into them, and they have changed (and improved) significantly over time. Whenever we discover that some tasks may be missing (unfortunately far more often the hard way, after some painful lesson, than through sheer thinking power), we just add them to the list. They also draw on similar taxonomies established in some of the larger countries. The United States, for instance, has a gargantuan 1300-page Universal Joint Task List (and there is also an extremely interesting UJTL manual):
The Universal Joint Task List (UJTL) serves as a menu of tasks in a common language, which serve as the foundation for capabilities-based planning across the range of military operations. The UJTL will support DOD in joint capabilities-based planning, joint force development, readiness reporting, experimentation, joint training and education, and lessons learned. It is the basic language for development of a joint mission essential task list (JMETL) or agency mission essential task list (AMETL) used in identifying required capabilities for mission success.
A couple of years ago, as we were preparing the method for the new Dutch (whole-of-government) national security programme (which really does (some) capability-based planning across government as a whole), we looked at a number of these task lists and identified what we felt to be a number of important shortcomings:
  • most of them tended to be highly stovepiped, with defense organizations (and increasingly also homeland security organizations) typically having them, other departments (like Ministries of Foreign Affairs) typically not having them. We found no truly whole-of-government task lists - i.e. what are the general tasks governments (as opposed to individual departments or agencies) have to be able to fulfill in order to safeguard a country's security [which is why we set out to develop one for the Netherlands]. 
  • we observed that all task lists - most of which were constructed primarily 'bottom-up' - had a very strong operational focus, often underestimating non-operational tasks. In essence, both the front end ('paving the path') and the far end ('sit back and reflect') of the security chain are much less well developed. Another often missing dimension is the strategic level above the operational one, where the whole chain is viewed and put in perspective ('are we doing the right things?', 'why are we doing this?', and 'how do we ensure the balance between ends, ways, and means?'). 
  • most looked much more at the past and the present than at the future. DoD's Universal Joint Task List, for instance, is essentially a compilation of the existing task lists of the Services, augmented with a number of joint tasks that were already on the books, and amended based on lessons learned in (essentially) Iraq and Afghanistan. But all are clearly guided by today's challenges. In a period of deep uncertainty, this may not be the most appropriate approach.
  • they were overwhelmingly biased towards the 'response' side of the security chain at the expense of the prevention side. This again is a reflection of the way in which these lists emerge - with many (mid-level) 'operators' being more involved than (top-level) 'broader' people. As a result, one need not worry that the task 'force protection' will receive insufficient attention. But one DOES need to worry about capabilities such as building up the defense capabilities of others (what David Gompert and some RAND colleagues have provocatively labeled 'defense development'). "More armor, sure; but building up the capabilities of the African Union???"
  • we found the lists to be too weakly linked to the desired effects. In the NATO defense planning method I just described, analysts do specify operational (and not necessarily strategic!) objectives for the scenarios. But in the task lists, we find no direct linkages to those effects. Put concretely: 'strike' is likely to be developed in much detail (based on our long operational experience with strike), but how do we assess the return on our investment in 'strike' in terms of the ultimate goals we are trying to achieve (e.g. a more stable and 'friendly' Afghanistan) if we do not take into account the alienation of the population that comes with it? If we were to start the analysis truly from the effects, and then put more creativity into working our way back from there to a balanced capability bundle, the lists would likely look quite different. 
  • their complexity was directly correlated with the size of the country, with some smaller countries having fairly succinct task lists (but sometimes even better ones at the 'higher' levels) and the US having a gargantuan one. The key challenge here is to strike the proper balance between levels of aggregation. And certainly for forward planning (as opposed to operational planning) purposes, getting things right at the higher levels still seems more important than being totally exhaustive at the lowest levels.
  • most of them have a singular higher-level logic, often of a temporal nature. The taxonomy we saw in the NDPP J-DARTS slide is a nice example: it starts with making a plan, and then moves on to the 'next' thing planners have to do. Time is a useful heuristic for taxonomies, but it should be offset against other 'logics' to make sure we do not miss important tasks just because we were thinking from within a 'time' straitjacket.
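That last point - offsetting one organizing logic against another - can be made quite mechanical: cross-tabulate the same task list along two different logics and look for empty cells, each of which is a candidate blind spot. A minimal sketch, where the two logics (temporal phase, functional area) and the toy task list are entirely my own invention:

```python
# Hypothetical sketch: cross two classification logics over one task list and
# flag empty cells, which may indicate tasks a single-logic taxonomy missed.
# Tasks, phases, and functions are invented for illustration.

tasks = [
    ("Plan the operation",       "pre-conflict",  "command"),
    ("Strike hostile forces",    "conflict",      "engagement"),
    ("Protect the force",        "conflict",      "protection"),
    ("Evaluate lessons learned", "post-conflict", "command"),
]

phases    = ["pre-conflict", "conflict", "post-conflict"]  # temporal logic
functions = ["command", "engagement", "protection"]        # functional logic

# Which (phase, function) cells does the current task list actually cover?
filled = {(phase, function) for _, phase, function in tasks}

# Every empty cell is a question worth asking, not necessarily a real gap.
gaps = [(p, f) for p in phases for f in functions if (p, f) not in filled]
for phase, function in gaps:
    print(f"No task covers {function!r} in the {phase!r} phase")
```

An empty cell does not prove a task is missing - some combinations are genuinely empty - but it forces the question to be asked explicitly rather than left to the accident of a single ordering logic.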
Coming back to the NATO capability development process, I still feel these task lists (and many other taxonomies used in the process) deserve more creativity and attention than they currently receive. This is not necessarily the fault of the analysts working in the NDPP. Many of the weaknesses are due to the fact that they are still forbidden from taking a 'broader' (more comprehensive) view because of the resistance of a (small) number of countries. My point here is also not so much that NATO should necessarily be the organization that does this broader capability derivation (although it does have fairly unique capabilities AND experience in this matter). My point is more that this should be done somewhere, and that we need better and broader taxonomies than we currently have. 

And on a final note for us analysts - it seems to me that we have to be very careful not to become overly enamored with our 'nice' taxonomies. We like lists, we love matrices, we adore morphological analyses - and these all CAN be quite useful. We also prefer them 'neat' rather than messy, and when we feel something might be important but does not fit nicely into the 'logic' or the 'aggregation level' of a certain taxonomy, we may be tempted to leave it out. This we should not do. Ultimately, we should not aim for a 'neat' toolset, but for a robust AND adaptive one that increases our chances of getting capabilities 'right'. And so - to give another example of a taxonomy that is in danger of becoming overly fossilized - if we develop DOTMLPF analysis methods to make sure that capability solutions cover all of those areas, we should not then rest on our laurels and think that we have now 'solved' the problem. As I have written recently in some TNO work for the Dutch MoD (Beyond DOTMLPF. Designing capabilities that are 'born networked/ing'), there are many other 'angles' into the problem that should be investigated in a 'full-spectrum' capability investigation. And many of these deserve their own 'checklists' (along the lines of the DOTMLPF checklist). These other angles include:
  • where the problem comes from (what I call capability forensics) and whether more 'upstream' capabilities might not present higher value for money; 
  • more sensitivity for (and creativity in) 'framing' issues - is our way of framing this capability investigation the only way or even the right way? If we frame it in another way (especially 'out-of-the-box'), might we be able to come up with other capability requirements?
  • context-sensitivity analysis - given the importance of scenarios in the NDPP, the risk may be smaller there, but in general we have to make sure we test any capability 'solution' against a representative range of contexts (and not 'only' the seemingly 'logical' one(s))
  • sourcing tests - these typically occur too late in the process (usually in the acquisition community, and in most countries as a largely perfunctory exercise to see whether a small set of capabilities could be outsourced). The more fundamental question of whether certain capabilities are better generated nationally or through some 'ecosystem' is rarely asked. Yet if we think about expeditionary constabulary forces, for instance (which are in high demand and which not all countries have), the question whether we get more return on our investment from recruiting and training our own 'gendarmerie' forces or from training more African or Asian constabulary forces is certainly one worth asking in a capability investigation. 
  • positive capabilities - many capabilities we now have are essentially 'reactive', developed in response to 'negative gaps'. We think of the things we have to do mostly because of old or new threats. But should we not also have a category where we instead think of positive gaps: 'things we CAN now do to prevent or minimize problems in the future, or to strengthen forward resilience so that if problems occur, societies are better equipped to deal with them locally or regionally'?
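The context-sensitivity point in the list above also lends itself to a simple analytical pattern: instead of scoring a capability 'solution' against the one 'logical' scenario, score it across a representative range and compare the bundle with the best average against the bundle with the best worst case. A minimal sketch - all bundles, scenarios, and scores are invented for illustration:

```python
# Hypothetical sketch of a context-sensitivity test: score candidate capability
# bundles against a range of scenarios. Everything here is invented toy data.

scores = {
    "heavy strike package": {"collective defence": 10, "evacuation": 7, "humanitarian": 4},
    "balanced mix":         {"collective defence": 6,  "evacuation": 6, "humanitarian": 6},
    "constabulary focus":   {"collective defence": 2,  "evacuation": 7, "humanitarian": 8},
}

def best_average(scores):
    """The bundle that looks best when scenario scores are simply averaged."""
    return max(scores, key=lambda b: sum(scores[b].values()) / len(scores[b]))

def most_robust(scores):
    """Maximin: the bundle whose WORST scenario score is highest."""
    return max(scores, key=lambda b: min(scores[b].values()))

print("Best on average:", best_average(scores))
print("Most robust:    ", most_robust(scores))
```

With these toy numbers, the two criteria pick different bundles - which is exactly the situation in which testing against only the 'logical' scenario quietly builds fragility into the portfolio.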
Summing up, it seems to me we analysts have to constantly review the (often singular) taxonomies we use and develop new ones that allow us to 'tackle' defense planning issues from different angles (I'm trying out the neologism 'heterogonal' for this). We do need taxonomies. But they can kill - especially (but not only) in the defense world. They can lead to insufficiently robust capabilities (not necessarily robust in the kinetic sense, but in the sense of covering different contingencies from different angles) that can cost unnecessary lives - both on 'our' side and on the 'other' side(s). 

Let me end with a quote from an exquisite blog posting in an entirely different field that makes the same point:
In all circumstances, competing mechanisms should always be allowed to co-exist with a taxonomy, and should never be allowed to die out completely, because that will kill the value that a taxonomy brings. A taxonomy thrives on the bed of the Babel Instinct, and dies once it is cut off from that source of new insight, new perspectives, and new ways of looking at things. The whole essence of taxonomy work is to constantly repair the fragmentation caused by the Babel Instinct, and to weave the social fabric that allows collectives of people to work together effectively, to organize and exploit their knowledge for common use, and to discover new things. The taxonomy, ironically, is the servant of Babel, not the master of it, and it cannot survive the complete death of Babel’s diversity. An effective taxonomy sits between Chaos and Order and mediates the two; it does not, as it so often assumes, represent the domain of Order unequivocally.
