Small Wars Journal

Best Practices Guide for Conducting Assessments in Counterinsurgencies

Wed, 08/17/2011 - 11:39am

Download the Full Article

This guide provides practical advice to assessment strategy planners and practitioners. It aims to fill the gap between the instructions provided in handbooks and field manuals and the challenges faced when adapting those instructions to specific operations. Its purpose is to complement, not replace, more detailed planning and instructional documents. Wherever possible, the articles in this guide provide references to more detailed assessment planning documents. The guide also addresses some of the specific needs of implementing the Transition (Inteqal) process in Afghanistan.

Part One:  Assessment Philosophy

Article One:  Remain True to the Assessment’s Objective.  The objective of an assessment is to produce insights pertaining to the current situation, and to provide feedback that improves the decision maker’s decisions.  This article discusses how key elements of this objective should guide the assessment development process.

Article Two:  Take a Multi-dimensional Perspective.  This article describes why it is essential to build the assessment by looking at the environment through multiple perspectives that cross lines of operations and time periods.  It also highlights some errors that may arise if the assessment lacks a broad perspective. 

Article Three:  Serve as the Bodyguards of Truth.  Assessment teams develop what may, by default, become the only publicly available, official picture of the campaign.  Therefore, assessment teams must serve as the bodyguards of truth and never compromise the integrity of their reports.  This article outlines nine key practices that help preserve the integrity of assessments.

Article Four:  Ensure Independence and Access.  Strategic assessment teams need to be free to express their findings about the current conditions and the influential factors they discover.  They also need access to a wide array of information and people in order to perform their job properly.   This article describes how to secure independence and access through a partnership between the senior sponsor of the assessment team, individual line of operation owners, and the assessment team.

Article Five:  Nurture the Intelligence-Assessment Partnership.  The activities related to intelligence and assessments often seem remarkably similar, thus generating the potential for confusion or duplication of effort.  This article briefly discusses the mutually supporting relationship between the two activities. It uses references from formal documents and recommends that the leaders of the two communities deliberately develop a shared understanding of this symbiotic relationship in order to avoid problems.

Part Two: Method

Article Six:  Establish a Terms of Reference Document.  Unclear terms generate confusion in the design of the assessment framework, the analysis of data, and the reporting of insights.  Thus, it is in the team’s best interests to develop a Terms of Reference document as soon as possible.

Article Seven:   Build the Assessment Framework Iteratively, Incrementally, and Interactively.  The assessment framework should be built in stages through a collaborative process.  This approach minimizes complexity, allows for effective learning, and retains clearly established priorities.  It also allows the assessment team to refine the focus and scope of the assessment framework based on lessons learned during the development and use of earlier versions. 

Article Eight:  Discriminate between Indicators and Metrics.  Most people use the terms indicator and metric interchangeably and suffer little or no confusion as a consequence.  However, there are times when it is useful to distinguish between the two. This article offers a practical approach for when and how to make that distinction.

Article Nine:  Use Each Class of Indicator Properly.  Some indicators can be grouped into classes because they share a common set of characteristics that may be beneficial or detrimental to the assessment process.  Several of these broad classes are described in this article, including indicators that measure input versus outcome, indicators that signal failure to achieve a condition (spoilers), metrics that can indicate positive or negative effects depending upon context (bipolar), and metrics that serve as substitutes for other hard-to-measure indicators (proxies).
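
To make these classes concrete, here is a minimal illustrative sketch in Python, using hypothetical indicator names that are not drawn from the guide, of how a team might tag candidate indicators by class so that each is used, and caveated, appropriately:

    from dataclasses import dataclass
    from enum import Enum

    class IndicatorClass(Enum):
        """Broad indicator classes described in Article Nine."""
        INPUT = "measures effort expended, not effect achieved"
        OUTCOME = "measures a change in conditions on the ground"
        SPOILER = "signals failure to achieve a condition"
        BIPOLAR = "positive or negative depending on context"
        PROXY = "substitute for a hard-to-measure indicator"

    @dataclass
    class Indicator:
        name: str
        cls: IndicatorClass
        caveat: str = ""

    # Hypothetical entries illustrating each class.
    catalog = [
        Indicator("km of road paved", IndicatorClass.INPUT,
                  "shows effort, not effect"),
        Indicator("nighttime freedom of movement", IndicatorClass.OUTCOME),
        Indicator("assassination of local officials", IndicatorClass.SPOILER),
        Indicator("tip-line call volume", IndicatorClass.BIPOLAR,
                  "more tips may mean trust -- or more insurgent activity"),
        Indicator("market prices of staple goods", IndicatorClass.PROXY,
                  "stands in for hard-to-measure supply-route security"),
    ]

    for ind in catalog:
        print(f"{ind.name:35s} -> {ind.cls.name:8s} {ind.caveat}")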

Article Ten:  Beware of Manipulated Metrics.  Some metrics can be manipulated by the subjects under observation to send misleading signals to observers, rather than reflecting the reality of the current conditions.  The risk is particularly high for metrics used to promote or demote personnel, or to directly redistribute resources and money.  This article discusses several examples and suggests ways to detect and minimize such distortions of the data.
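
As an illustrative sketch only (this cross-check is a common technique, not necessarily the article's own method), one simple way to detect possible gaming is to compare a manipulable, self-reported metric against an independently collected companion metric and flag periods where the two diverge sharply. All figures below are invented:

    # Hypothetical data: a district-reported "security" score vs. an
    # independently collected survey-based score for the same periods.
    self_reported = [0.62, 0.65, 0.64, 0.88, 0.90]
    independent   = [0.60, 0.63, 0.61, 0.62, 0.60]

    def flag_divergence(reported, check, tolerance=0.15):
        """Return indices of periods where the reported metric outruns
        the independent check by more than `tolerance`."""
        return [i for i, (r, c) in enumerate(zip(reported, check))
                if r - c > tolerance]

    print("Suspect periods:", flag_divergence(self_reported, independent))
    # -> Suspect periods: [3, 4]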

Article Eleven:  Develop a Manageable Set of Metrics.  There are hundreds of metrics available for consideration at any point in time.  Thus, it is necessary to establish rules that help us select the metrics that contribute most to the assessment effort.  This article discusses several screening filters that help practitioners develop a manageable and effective set of metrics.
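
A minimal sketch of the screening-filter idea, using hypothetical filter criteria and candidate metrics (the guide's actual filters may differ): each candidate must pass every filter to make the working set.

    candidates = {
        # name: (relevant_to_objective, collectible_at_reasonable_cost, timely)
        "IED events per week":         (True,  True,  True),
        "GDP per capita":              (True,  False, False),  # too slow and costly to refresh
        "school attendance rate":      (False, True,  True),   # weak link to this objective
        "tip-line actionable reports": (True,  True,  True),
    }

    working_set = [name for name, scores in candidates.items() if all(scores)]
    print("Retained metrics:", working_set)
    # -> Retained metrics: ['IED events per week', 'tip-line actionable reports']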

Article Twelve:  Retain Balance in Both Metrics and Method.  Interrelated debates persist in the assessment world over the merits of narrative versus summary graphics, the organizational level at which assessments should be performed, and the need to preserve front-line commanders' views within higher-level summary assessment products.  This article suggests using a format that balances different metrics and methods to capture the best features of each alternative.

Article Thirteen:  Deploy Field Assessment Teams.  In order to provide actionable information to the decision maker, assessment insights must be relevant and credible.  For critical issues, the only way to achieve this standard is to get out into the field and engage directly with front-line units.  This article suggests that we rethink how we perform assessments and offers an approach that augments the traditional process with the use of field assessment teams.

Article Fourteen:  Bound Estimates with Eclectic Marginal Analysis.  When a desired metric is difficult to measure directly, we might be able to measure other factors that drive its value.  Under such conditions, we can use marginal analysis with an eclectic set of related metrics to generate a reasonable estimate of the target metric.  This article explains the technique and provides some examples of marginal analysis.
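
A small numeric sketch of the bounding idea, with invented figures: when a target metric cannot be measured directly, each related line of evidence can imply a rough interval for it, and the intersection of those intervals bounds the estimate.

    # Hypothetical (low, high) bounds for an unmeasurable target metric,
    # e.g. total insurgent strength, implied by three lines of evidence.
    bounds_from_related_metrics = {
        "detainee interrogation reports": (800, 2500),
        "weapons-cache discovery rate":   (1000, 3000),
        "attack tempo vs. cell size":     (1200, 2200),
    }

    low  = max(lo for lo, _ in bounds_from_related_metrics.values())
    high = min(hi for _, hi in bounds_from_related_metrics.values())

    if low <= high:
        print(f"Target metric plausibly lies in [{low}, {high}]")
    else:
        print("Bounds conflict -- revisit the assumptions behind each metric")
    # -> Target metric plausibly lies in [1200, 2200]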

Article Fifteen:  Anchor Subjectivity.  A degree of subjectivity in assessments is unavoidable.  This article discusses methods to minimize the degree of subjectivity, make that subjectivity transparent, and maintain consistency in the way we capture subjective assessments.
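
One widely used anchoring technique is a behaviorally anchored rating scale, in which each score is tied to a written descriptor so that different raters apply the same standard across districts and reporting periods. The sketch below uses invented descriptors and is not taken from the article:

    # Hypothetical anchors for a 1-5 district security rating.
    ANCHORS = {
        1: "Insurgents move openly; government presence limited to daylight raids",
        2: "Contested: government holds the district center, insurgents the rest",
        3: "Government controls main roads and towns; rural intimidation persists",
        4: "Insurgent activity limited to isolated incidents",
        5: "Routine civil governance; security incidents rare and criminal in nature",
    }

    def rate(district: str, score: int) -> str:
        """Record a subjective rating together with its anchor text, making
        the judgment transparent and consistent across raters."""
        return f"{district}: {score} -- anchored to: '{ANCHORS[score]}'"

    print(rate("District A", 3))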

Article Sixteen:  Share Data.  Every coalition effort faces information sharing challenges.  This article discusses important reasons for sharing information and offers some guidelines that promote effective sharing.

Article Seventeen:  Include Host Nation Data.  Two features of the COIN assessment environment should be considered when developing the assessment process: the existence of host nation data collection efforts and the ability of assessment teams to interact with those systems.  This article addresses the challenges of using host nation data and ways to work around those challenges.

Article Eighteen:  Develop Metric Thresholds Properly.  This article discusses key guidelines for developing metric thresholds, including aligning threshold levels with key phases of objective conditions, developing and sharing clear definitions of the thresholds, and ensuring that observations of metrics at these levels represent a significant change in underlying conditions.
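
A minimal sketch of the "significant change" guideline, under our own assumption (not necessarily the article's) that a threshold crossing should be reported only when it is sustained over several consecutive periods, so that ordinary noise does not trigger a reassessment:

    THRESHOLD = 10          # e.g., weekly security incidents in a district
    SUSTAIN_PERIODS = 3     # crossings must persist this long to count

    weekly_incidents = [12, 11, 9, 8, 7, 9, 6, 5]   # made-up data

    def sustained_crossing(series, threshold, periods):
        """Return the index where the series first stays below the
        threshold for `periods` consecutive observations, else None."""
        run = 0
        for i, value in enumerate(series):
            run = run + 1 if value < threshold else 0
            if run == periods:
                return i - periods + 1
        return None

    start = sustained_crossing(weekly_incidents, THRESHOLD, SUSTAIN_PERIODS)
    print("Threshold durably crossed at week:", start)   # -> week 2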

Article Nineteen:  Avoid Substituting Anecdotes for Analysis.  Anecdotes are a useful component of assessments when used properly.  Unfortunately, they are often used as substitutes for a solid assessment.  The best rule to keep in mind when using anecdotes is that they are generally the starting point for analysis, not the closing argument of an assessment. 

Article Twenty:  Use Survey Data Effectively.  Questions of motivation, satisfaction, degrees of trust or fear, as well as intentions regarding future actions, are difficult to measure by observing actions alone.  Often, we must capture this information through interviews or broader surveys.  This article addresses how to manage some of the major concerns associated with using survey data in assessments.
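
One standard practice when reporting survey results is to attach the sampling margin of error to the point estimate. Below is a minimal sketch using the normal approximation for a proportion, with made-up numbers; note that real COIN surveys also face non-sampling error, such as fear-driven response bias, which no formula corrects for.

    import math

    n = 1200          # respondents (invented)
    p = 0.58          # share expressing trust in local security forces (invented)
    z = 1.96          # ~95% confidence

    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"Estimate: {p:.0%} +/- {margin:.1%} (95% CI, sampling error only)")
    # -> Estimate: 58% +/- 2.8% (95% CI, sampling error only)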

Download the Full Article

About the Author(s)

Dave LaRivee is an Assistant Professor of Economics at the United States Air Force Academy.  He retired from the Air Force as a colonel in 2010 after serving as a combat analyst during initial operations in Iraq, directing an effects assessment survey team in the summer of 2003, and leading the Strategic Assessments cell, Multi-National Forces-Iraq, from 2007 to 2008.  He holds a Master’s Degree in Operations Research and a Doctorate in International Economics.

Comments

G Martin

Thu, 08/25/2011 - 6:29pm

Not being a "metrics" guru, I may be simply ignorant, but my experience with metrics gathered by government entities has been that they usually are turned around to support current strategies, conventional wisdom, and our own worldviews/what we want to believe (or "know" is true). Very seldom, if ever, are they used to test a hypothesis. In my view, metrics should be gathered to disprove hypotheses- not to "prove" we are doing the right things.

The articles mentioned above seem to me to assume that one can get to these objectives. That would take too much, IMO: you'd have to have the right ORSA folks, the right data gatherers, the right leaders to influence the assessments, the right environment, etc. I think we have to assume the opposite, that you'll never have most of these, if any, and then talk about uses of metrics/assessments in light of that reality.

Lastly, I think that an overreliance on metrics and assessments belies our "positivist" and engineered-solution culture, where we think we can break things down into categories, measure things, and then provide a useful interpretation of a complex environment. Metrics and assessments are a tool only, and only one tool. When taken with a grain of salt, when understood in the context of their limitations, when used to disprove hypotheses, and when used in conjunction with many other tools, they can help. But, more often than not, I would argue our metrics and assessments do the opposite of "help".

- Grant Martin

BillM

The article is a significant piece of guidance. It's hard to fault the analysis and the first-rate conceptual framework, except for the fact that, like many, many other assessment tools, the analysis is often more blindingly complex than the problem. The value in assessment should be in its ability to deliver simple, clear guidance.

Unfortunately you are probably right, Bill. Building roads, schools, etc. doesn't defeat an insurgency. We are clouded by our own fundamental belief in what maintains social harmony in our own society.

Maybe we are trying to apply the same template of COIN to every conflict, while looking to develop better assessment models/frameworks to prove the doctrine works, as with this paper.

Our war-fighters are also operating under intense political/media/public scrutiny combined with endless well-meaning do-gooders who deep down want everyone to love everyone.

Do we really believe Templer was all 'hearts and minds' in Malaya, in an era of considerably less media? Unless you are the Sri Lankan Government and don't give a rat's about what the world thinks of you.

Commanders in the fight are working with an NGO community who believe, as night follows day, that if only we pump in more money and more goodness, then people will stop hating each other and stop wanting to kill us.

Here is a question for the assessment tool: very recently I asked someone, "do you guys measure the effectiveness of what we are doing here?" The answer I got was "no, we only really want to know about output." The second question I asked was "last time, I could never get the metrics on whether all those fighting-aged men occupied in a particular area equalled less/more/the same IEDs, kinetic engagement?" The answer I got was "no, we don't measure that... no one has data on that."

To the author: could your framework tell us how effective we are, and show a correlation between money pumped in/men occupied and kinetics?

Regards

Jason

drmiller

Thu, 08/18/2011 - 5:49pm

I've also done counterinsurgency work, am very familiar with Col LaRivee's work in Iraq, and applaud this guide. Excellent advice here, from someone who ran the Assessments Office in Iraq under General Petraeus, and did a fantastic job.

WRT the 1st post comment that "COIN is not nation-building", it may not be, but in Iraq it certainly was vital, and in many COIN situations the political, nation-building aspects will be far more important than killing insurgents. Analysts don't get to choose the strategy, and it's likely that U.S. COIN operations will always include a full range of operations that need to be assessed and evaluated. Dave LaRivee has written a fantastic guide that I hope DoD, Dept of State, and others use.

Drew Miller,
Col USAFR, Ret

I did a quick read, and owe the author a more detailed read, but based on what I read this article reflects the damaging effects of the effects-based operations (EBO) process and our COIN doctrine on our collective logic. COIN is counterinsurgency, not nation building. The author's implication that we need to assess all the systems of PMESII to determine whether we are trending towards our desired transition/end state can be misleading. If our goal is to defeat the insurgency, then more focus should be directed at the insurgents themselves and less on the economy, infrastructure, etc.

If we're doing an assessment for nation building and/or stability operations, then these factors are probably relevant. However, building great roads, increasing employment, building better schools, etc. will rarely "defeat" an insurgency. This is a myth that we have managed to perpetuate, and now we believe it like a young child believes in Santa Claus. I would suggest that the first objective should be clarifying what we're assessing. To be fair, the author does a good job of tying most of his assessment criteria to the insurgency (though some points are debatable) and warns of drawing the wrong conclusions.

Once we move past the idealism that economic aid and development will defeat an insurgency, we can focus on what really works and then conduct a more pragmatic assessment. In Afghanistan we need to assess which areas are under government control compared to which areas are contested or under insurgent control. We need to assess insurgent strength and their ability to sustain recruiting, funding, etc., basically all the things that allow them to sustain their fight. If we want to know the truth about our effectiveness against the insurgency, then we need to focus more on the insurgents and less on distractions such as building schools, clinics, etc. We can deceive ourselves by focusing on other metrics such as economic development, social programs, political reform, etc., because we can show great progress in all those areas even while those efforts are not hampering the insurgency. That results in misleading not only ourselves, but also our civilian leadership.