26 April 2006. Mary O. McCarthy was recently fired by the CIA for allegedly talking to reporters and releasing classified information, which she has denied.
See also: Mary O. McCarthy, "The Mission to Warn: Disaster Looms," Defense Intelligence Journal 7, no. 2 (1998): 17-31. http://cryptome.org/mccarthy-mtw.htm
Source: Hardcopy of Defense Intelligence Journal.
Defense Intelligence Journal 3 (1994), 5-19
Mary McCarthy
In 1992, then-Director of Central Intelligence (DCI) Robert Gates established a Task Force on Improving Intelligence Warning and charged the group with reviewing the Intelligence Community's capability to warn in light of the enormous changes occurring in the world. According to that ten-member panel of highly respected intelligence and policy veterans, providing policymakers with persuasive and timely intelligence warning is the most important service the Intelligence Community can perform for the security of the United States. Earlier, in its 1992 Intelligence Authorization Act, Congress had been even less equivocal about what is expected of the Community. In the language of the Act: "the Intelligence Community's highest priority is warning of threats to US interests worldwide."1
Warning is a process of communicating judgments about threats to US security or policy interests to decisionmakers. Such communications must be received and understood in order for leaders to take action that can deter, defuse, or otherwise address the threat and, thereby, minimize the damage to US interests. Effective warning, therefore, involves both communication and timeliness.
There have been repeated attempts since the end of World War II to address deficiencies, create structures, and channel resources to better position intelligence agencies to provide timely strategic warning. Nevertheless, the DCI's 1992 Task Force concluded gloomily that effective intelligence warning has been an elusive goal of the Community since the post-1945 establishment of a national intelligence program.
History of the Effort
Strategic warning emerged as a topic for special scrutiny following three catastrophic surprises: Pearl Harbor, the imposition of the Berlin blockade in 1948, and the Korean War in 1950. In the hope of avoiding further intelligence warning failures, in 1953 DCI Walter Bedell Smith established a new community organization, the National Indications Center. Its charter was limited to watching for signs of military mobilization in the Soviet Union, China, and North Korea, or anywhere else a global conflict might arise.
The Community's subsequent failure to warn of a whole series of other events similarly adverse to US interests -- the Tet Offensive (1968), the Arab-Israeli War (1973), the leftist military coup in Portugal (1974), the Indian nuclear test (1974), and the military coup followed by the Turkish invasion of Cyprus (1974) -- prompted a spate of congressional inquiries into how well the Intelligence Community was positioned to provide warning of events other than major military mobilizations. These inquiries began with the 1976 Report on the Performance of the Intelligence Community by the infamous "Pike Committee" and culminated in an August 1978 study on warning by the Subcommittee on Evaluation of the House Permanent Select Committee on Intelligence (HPSCI). In those reports, Congress made a number of recommendations that led to the establishment of the position of National Intelligence Officer (NIO) for Warning and of a National Warning Staff.
Before the ink was even dry on the HPSCI's warning recommendations, however, the Community was in the throes of another failure, one that continues to resonate some sixteen years later. The fall of the Shah and the subsequent rise of clerical rule in Iran, with all the attendant implications for US policy in the region, generated expectations that warning intelligence should include warning of rapid political change and potential adverse developments for US foreign policy.
The Community's failure to warn persuasively of Saddam Husayn's intention to invade Kuwait in August 1990 prompted the most recent review of the national warning system. In the case of Iraq-Kuwait, the warners warned, but the rest of the Community equivocated. Thus, policymakers were not persuaded that the threat was real. In the aftermath, Congress encouraged intelligence agencies to devote more resources and pay more attention to warning.2 When Robert Gates returned to CIA as Director, he established the Task Force on Warning and ordered a complete review of the warning function in the Community. That study resulted in the most comprehensive plan to date for restructuring the Community to enhance its ability to warn.
Current Structure of the National Warning System
The structure that emerged was shaped by the rather sharp conclusions of the Task Force that the Community's effort to provide warning, historically, had been inadequate; that the task of warning was inherently difficult; that, with the exception of Defense, all intelligence agencies had treated warning intelligence assessments as by-products of their routine analytical activities, an approach that had proven inadequate; that there had been little accountability; and, most disturbingly, that the system previously in place had been largely notional.
To correct these deficiencies, the DCI devised a comprehensive strategy, one which integrates line analytic units of regional and functional experts into the warning process. In a fundamental departure from past practice, the warning system no longer exists in isolation from, or parallel to, the remainder of the analytic and collection community. In addition, each principal agency of the National Foreign Intelligence Board now has a unit that acts as its focal point for warning.
The NIO for Warning remains the DCI's and the Community's principal adviser on warning. The NIO for Warning and the warning elements of the various agencies collectively constitute the National Intelligence Warning System.
The NIO, specifically, is responsible for ensuring that warning intelligence is provided to the DCI in a timely manner and to consumers in a progressive fashion, as the situation develops. As part of the warning process, the NIO for Warning also is charged with engaging the analytic community on warning issues and influencing warning-related intelligence collection. In addition, in his or her role as adviser to the Community, the NIO for Warning oversees efforts to improve the quality of warning analysis by sponsoring sustained analyst training and by emphasizing methodological research.
The parameters of warning within which the National Intelligence Warning System works were left inexact, perhaps a recognition on the part of the system's creators that threats to US national security and interests may come from unpredictable, or even unthinkable, sources. Warning, the DCI noted, includes identifying or forecasting events that could cause the engagement of US military forces -- from the scale of embassy evacuations to larger military activities -- as well as events that would have a sudden deleterious effect on US foreign policy, such as coups d'etat, third party wars, and refugee surges.
Overcoming Occupational Hazards: Will the System Work?
The reasons for warning intelligence failure are numerous. As the 1992 Task Force on Warning discovered, however, the root causes seldom lie with individuals, particular organizations, or a lack of data. Rather, they lie in the nature of the task itself and in the system devised for doing it. The newest version of the National Warning System will work only if the root causes of past warning failures can be overcome or circumvented.
A Daunting Task. Warning of threats to US interests is an inherently difficult undertaking. In the first place, to warn successfully requires an assessment of intentions, a less secure analytic path than estimates of capabilities. Leaders and commanders can always change their minds, be persuaded, or be deterred right up to the moment of an attack, coup, or other precipitous action. Such possibilities increase analytic uncertainty. Furthermore, the actors in such events usually have surprise as a major goal and will seek to mask their intentions.
To make matters worse, potential outcomes that have sudden and deleterious effects on the security and policy of the United States often involve sharp shifts in the direction of developments in a country or region. In other words, the analyst is looking at something entirely new, a discontinuous phenomenon, an outcome that he or she has never seen before. Furthermore, the analyst only sees this new pattern emerge in bits and pieces. The risk is that each new bit of data will be mentally absorbed into an expected pattern that would lead to a familiar outcome. As Janice Stein put it, "it is little wonder, then, that intelligence frequently 'fails' at what is inherently almost an impossible task. What is surprising is that experts sometimes do succeed."3
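Stein's point can be made concrete with a small numerical illustration -- a minimal sketch in Python, with a prior, likelihood ratios, and indicator counts that are invented for illustration rather than drawn from any real case. It models the absorption trap as Bayesian updating: when discontinuities are rare, the analyst's prior is sensibly low, and a string of individually unremarkable indicators moves the posterior only slowly.

```python
# A hypothetical sketch of the "absorption" trap: Bayesian updating with a
# low prior. All numbers are invented for illustration.

def posteriors(prior: float, likelihood_ratios: list[float]) -> list[float]:
    """Posterior probability after each successive indicator."""
    odds = prior / (1.0 - prior)
    out = []
    for lr in likelihood_ratios:
        odds *= lr                       # each indicator scales the odds
        out.append(odds / (1.0 + odds))  # convert odds back to probability
    return out

# Discontinuities are rare, so the analyst starts at P(upheaval) = 0.05.
# Each report is twice as likely under "upheaval" as under "business as
# usual" (likelihood ratio 2) -- diagnostic, but unremarkable in isolation.
for i, p in enumerate(posteriors(0.05, [2.0] * 6), start=1):
    print(f"after indicator {i}: P(upheaval) = {p:.2f}")
# Three such reports still leave P(upheaval) below 0.30; even six leave it
# near 0.77. Each new "bit" fits comfortably into the familiar pattern.
```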
An analyst can work a whole lifetime on an area and never encounter such a major discontinuity. It is clear to us now that the demonstrations that began in Qom, Iran, in January 1978 would, within months, threaten the survival of the regime upon which the United States had based its security policy in the Persian Gulf for a generation. But it was not so clear in early 1978. Nor was the outcome inevitable, although it certainly became more likely as events that year unfolded.
Gary Sick, who served on the National Security Council Staff at the time, expresses some sympathy for the intelligence analysts who were working on the Iran account. Nothing in their experience prepared them for what they were about to see. Sick writes:
Genuine revolutions are rare. The mobilization of an entire populace to demand, at risk of personal violence, the radical transformation of their own society is historically uncommon, particularly in large nations of great strategic importance. However, when such an upheaval does occur, it alters the landscape of an age and generates political aftershocks that continue to reverberate long after the initial dust has settled. Arguably, the Iranian revolution was such an event.4
Pity the poor analyst who must make that kind of call. The fact is, he or she is unlikely to do so until the evidence is overwhelming, unless prodded and encouraged to explore alternative outcomes, especially those that would have a serious and harmful effect on US interests.
Moreover, in any complex situation representing a threat to vital US interests the data are abundant, but they do not point unambiguously to a single outcome. Roberta Wohlstetter's "signals" -- the clues and pieces of evidence that indicate a particular danger or enemy intention -- and "noise" -- the competing or contradictory data that are not useful for anticipating this particular disaster -- come into play and obscure the intentions of the adversary. "If it does nothing else," she writes, "an understanding of the noise present in any signal system will teach us humility and respect for the job of the information analyst."5
The Problem of the Rational Actor. When faced with an array of complex data, at least some of which is contradictory, analysts must rely on their own sense of what reasonably should be the expected outcome. Those expectations usually are based on assessments of what, in their view, would be rational behavior on the part of the foreign actors. In most cases, this approach works. Leaders tend to behave according to rational norms that are predictable, given a good understanding of their personal, cultural, and national background. Good analysts have a deep knowledge of the culture they are monitoring and can usually get it right by relying on their expertise.
But such reliance can be dangerous in a period of crisis. A number of warning failures since World War II have occurred because analysts -- in the face of mounting evidence of possible outcomes detrimental to US interests -- continued to adhere to their own, even expert, views of what would constitute rational behavior by foreign actors. When a leader in a crisis makes life or death decisions central to the future of his people, government, or group, his assessment of what is rational, or of what risks are acceptable, probably will not match what even the most experienced US analyst contends it should be. An analyst's continued reliance on his own view of rational foreign behavior under these circumstances risks a costly failure of intelligence warning. In October 1950, for example, the Intelligence Community produced a national estimate that said China would not intervene in Korea, despite mounting evidence and some analytical views to the contrary. It did so because, according to the calculations of US military leaders and intelligence agencies, the disadvantages of participation in the war rationally appeared to outweigh the advantages. China apparently used a different rationale to calculate its decision.6
It is especially dangerous for analysts to concoct what appear to be rational rules of behavior and attribute them to foreign leaders when no evidence or precedents exist. Some examples: "The military will not attempt to topple the government of country X because the economy is in such bad condition that it would not want to have to manage it"; "The military will not intervene in the democratic process of country Y because it fears international isolation"; or "Parties A and B would prefer to negotiate their differences and avoid a costly conflict." This is not good analysis. Unfortunately, it sometimes creeps into national products. The passive acceptance of such unimaginative reasoning has resulted in a number of warning failures.
The failure to warn of the Arab attack on Israel in October 1973 is one of the clearest examples of an intelligence disaster resulting from the assumption that foreign leaders will temper their intentions with a rational analysis of capabilities. Henry Kissinger says it was "a failure of political analysis." US analysts should take little comfort from the fact that most Israeli analysts similarly led themselves astray. "Every Israeli (and American) analysis before October 1973 agreed," writes Kissinger, "that Egypt and Syria lacked the military capability to regain their territory by force of arms; hence there would be no war."7 According to the Pike Committee, one agency flatly asserted that Egypt was not capable of an assault across the canal; a postmortem by another agency noted that analysts had thought the Arabs were so clearly inferior that another attack would be irrational and, thus, out of the question.8
Current Intelligence, the Enemy of Warning. Intelligence managers should not expect analysts who are responsible for daily production to be able to discern early or subtle developments that would indicate an emerging threat or evolving crisis. Most analysts assigned to current intelligence duties on "hot topics" are preoccupied with the mechanics of digesting the contents of their electronic inboxes, writing and coordinating articles, and shepherding them through review and publication. These same analysts also must prepare talking points for senior managers, give briefings, and attend endless meetings. Often much of their time is spent getting their work into the proper format, dealing with a balky printer, or negotiating adverbs -- will the situation become "significantly" more serious, or "slightly" more serious? Talented though these individuals may be -- and managers often assign their most capable analysts to current duties on important accounts -- the pace of work does not permit reflection, research, or the application of methodological techniques that might help them weigh alternative hypotheses in processing new data.
The demands of daily production and the tyranny of deadlines also impinge on creativity and imagination. Thomas C. Schelling, in his introduction to Wohlstetter's study of Pearl Harbor, says:
The danger is not that we shall read the signals and indicators with too little skill; the danger is in a poverty of expectations -- a routine obsession with a few dangers that may be familiar rather than likely. The problem is that those responsible for developing a wider range of contingencies also are overburdened with daily tasks and responsibilities.9
According to a 1979 HPSCI report evaluating the Community's performance on Iran:
Current intelligence is inherently episodic -- it does not lend itself readily to assessments of the long-term significance of events. It is an important vehicle but most effective in reporting [those] events that stand out clearly.
The report noted that events in Iran evidenced a clear pattern over time, but while they were occurring,
the 'signal to noise ratio' tended to obscure their significance to analysts [who were] caught up in a series of fast-breaking situations [and who] tended to overlook the immediate past in assessing the present.10
As a particular region gets "hot," more and more analytic talent is devoted to the daily care and feeding of senior intelligence managers and policymakers demanding a steady diet of finished products. In such an atmosphere, other analytical work that might include reaching some warning judgments or exploring alternative outcomes -- the writing of a national intelligence estimate, for example -- falls by the wayside. During the Iran crisis in 1978, as the HPSCI's 1979 evaluation report notes, the Community was totally engrossed in the task of producing current analysis. One intelligence agency, although it had significant substantive differences with the rest of the Community on an estimate (which was never completed), "devoted greater effort to convincing others to adopt a change in format..."11
Politicization. Despite frequent allegations of pressure from intelligence managers for analysts to espouse or eschew a particular line of analysis, no evidence exists that directed judgments have been responsible for warning failures. Postmortems, Congressional reviews, and interviews with analysts concerning major failures do, however, yield abundant evidence that analysts often shaped their judgments with policy in mind. In the case of Iran, the policymakers' confidence in the Shah skewed intelligence, yet analysts never tried to challenge that policy position. To make their work relevant, analysts must understand policy but, in doing so, they sometimes self-politicize their analysis.
Bureaucratic Imperative Toward Caution. Assessing most likely outcomes, rather than less likely but more problematic ones, is the daily fare of analytical organizations. Analysts and intelligence managers are wary of sending out false alarms by warning of a crisis that never develops. It is this reticence on the part of regional experts, along with an apparent misunderstanding of the warning process, that will result in future warning failures. Those concerned solely with warning intelligence do a better job of early identification of the warning issue, not by using advanced analytic techniques and not because they are by nature more insightful, but because their sole or primary mission is to look for credible evidence of threatening outcomes.
What is a Warning Success?
Most successful warnings appear to be false alarms. A warning success is a threat judgment communicated to policymakers in time for them to take action to deter the threat. Successful warning means that the outcome being warned of -- if the policy action also is successful -- never materializes. A warning failure, on the other hand, occurs when the warning is not given, is given too late for decisionmakers to address it, or is not persuasively communicated and, therefore, not heeded.
Warning is always a question of probabilities. If, however, the Intelligence Community waits until an outcome appears highly probable, it will have deprived our leadership of valuable time during which low-cost actions to deter the threat could have been taken. Yet, warning too early may be futile; few policymakers will focus on a possible outcome some 18 months in the future. The National Warning System usually concentrates on threats judged to be about six months away or less.
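The timing trade-off can be illustrated with a back-of-the-envelope expected-cost calculation -- a minimal sketch with invented costs, probabilities, and mitigation factor, not figures from the warning system itself. It shows why warning at moderate probability can beat waiting for near-certainty whenever early preventive action is cheap.

```python
# A hypothetical expected-cost comparison: warn-and-act early versus wait
# for near-certainty. All parameters are invented for illustration.

def expected_cost(p_crisis: float, crisis_cost: float,
                  action_cost: float = 0.0, mitigation: float = 1.0) -> float:
    """Expected cost when an optional action costs `action_cost` and
    multiplies the crisis probability by `mitigation` (1.0 = no action)."""
    return action_cost + p_crisis * mitigation * crisis_cost

CRISIS_COST = 100.0   # notional cost of the adverse outcome
EARLY_ACTION = 5.0    # cheap preventive step, assumed to halve the risk

for p in (0.2, 0.5, 0.9):
    wait = expected_cost(p, CRISIS_COST)
    act = expected_cost(p, CRISIS_COST, action_cost=EARLY_ACTION,
                        mitigation=0.5)
    print(f"P(crisis) = {p:.1f}: wait = {wait:5.1f}, act early = {act:5.1f}")
# Even at P = 0.2, acting early (15.0) beats waiting (20.0). Deferring the
# warning until P = 0.9 forfeits the window in which the cheap option existed.
```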
Communicating the Warning
The typical policymaker is handling at least a half dozen complex foreign policy problems at once, has several more on deck, and is managing a large office with all the usual personnel and budget concerns. The last thing he or she wants to hear is that one of those policy items is on its way to becoming a crisis. The policymaker bias, therefore, is to hear the warning, but fixate on the evidence that points away from the possible crisis outcome.
This tendency, unfortunately, came to the fore in the days preceding the 2 August 1990 Iraqi invasion of Kuwait. According to a Congressional review, despite the accumulating evidence of Iraqi hostile intent, senior decisionmakers relied on reassuring messages from a number of Arab heads of state who said Saddam might threaten, but the matter would be settled without resorting to hostilities. In addition, as pointed out in the same report, policymakers also relied on their own preconceived notions: one Arab country would not attack another Arab country; Saddam is merely saber-rattling; even if he did invade, it would only be a limited incursion.
Dismissing warnings that posited the opposite is a continuing problem that will not be solved in the short term.... It requires a greater receptivity on the part of policymakers to intelligence information, and a willingness to evaluate that information apart from preconceived notions.12
It is human nature to hope that all turns out well, to avoid thinking about potentially bad outcomes -- particularly if they are seen as less likely to occur -- and to embrace the facts that point in the direction of a happy resolution.
Warning communication somehow must attack those human tendencies by being persistent, persuasive, and resolute. Warning is a process. It begins with the first accumulation of credible evidence, perhaps when the potential crisis still appears to have a relatively low probability of occurrence. As evidence accumulates, additional warnings are given, noting the increasing prospects for an adverse outcome. The restructured National Warning System provides mechanisms and products for presenting Community-coordinated warning messages, an improvement that should enhance persuasiveness. Finally, the warning message should be delivered in a number of ways; it is not sufficient simply to put a piece of paper in the mail. Both written and oral warnings should be given, and the warners should engage the policymaker and elicit a response.
Warning, as Robert Gates told his Task Force on Warning, should sound an alarm, give notice, and offer admonishing advice to policymakers. Research currently underway will, when completed, provide the Intelligence Community with some insights into how warning messages can best be framed. Questions remain concerning which mechanisms for warning are best and what kind of language is most effective in getting people to respond to a warning.
Alarms, Wolves, and Sheep
Failures of intelligence warning are obvious, but the task of measuring a system's overall success rate is difficult. Warnings cannot be validated by their immediate outcomes. Action taken in response to the warning may have changed the outcome, or the perpetrators may have cancelled or postponed their activity. Warnings that Egyptian President Sadat would attack Israel in May 1973 were not incorrect; Sadat just deferred his plans until October.13 In many cases, the information needed to validate or disprove a warning may not be available for years.
The Intelligence Community is very sensitive to accusations of having "cried wolf" -- declaring a threat where no evidence of one exists. Such charges, however, are simply part of doing business. Moreover, they are almost invariably inaccurate; normally we see the wolf lurking and think he may attack. Warning that is based on credible evidence and that is given early enough for the policymaker to take action is the Community's primary mission. Intelligence managers and analysts will have to absorb a few barbs if warning failure is to be minimized. According to Stein, it is analysts who "cry sheep" of whom policymakers should be more wary.14 The warning failures of the past were all cases in which the Intelligence Community "cried sheep," or gave reassurances, despite indications that wolves were lurking about.
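A small simulation -- again with invented parameters, not data from any postmortem -- illustrates why "cried wolf" charges are built into successful warning: a perfectly calibrated system that warns at moderate probabilities, and whose warnings sometimes trigger successful deterrence, will see most of its warnings followed by no crisis at all.

```python
# A hypothetical simulation of the "cried wolf" accusation: warnings issued
# at a calibrated 30% threat probability, where timely policy action deters
# half of the real threats. All parameters are invented for illustration.

import random

random.seed(0)
N = 100_000            # number of warnings issued
P_THREAT = 0.30        # warnings are perfectly calibrated at this level
P_DETERRED = 0.50      # fraction of real threats defused by policy action

crises = sum(
    1
    for _ in range(N)
    if random.random() < P_THREAT        # the threat was real ...
    and random.random() >= P_DETERRED    # ... and action failed to deter it
)
print(f"warnings followed by no crisis: {1 - crises / N:.0%}")
# Roughly 85% of warnings look like "false alarms," although every warning
# was correctly calibrated and half of the quiet outcomes are deterrence
# successes -- exactly the warnings that did their job.
```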
Notes
1. US Congress, House, Permanent Select Committee on Intelligence, Intelligence Authorization Act, Fiscal Year 1992, 102d Congress, 1st Session, 1991, H. Rept. 102-65, Part I.
2. Ibid.
3. Janice Gross Stein, "The 1973 Intelligence Failure: A Reconsideration," The Jerusalem Quarterly 24 (Summer 1982): 50.
4. Gary Sick, All Fall Down: America's Tragic Encounter With Iran (New York: Random House, 1985), vii.
5. Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford: Stanford University Press, 1962), 2-3.
6. Joseph C. Goulden, Korea: The Untold Story of the War (New York: McGraw-Hill, 1982), 277. Also see: Clay Blair, The Forgotten War: America in Korea, 1950-1953 (New York: Times Books, 1987).
7. Henry Kissinger, Years of Upheaval (New York: Little, Brown, 1982), 452.
8. US Congress, House (Pike Committee), Report on the Performance of the Intelligence Community, 1976, 142.
9. Wohlstetter, xiii.
10. US Congress, House, Staff Report, Subcommittee on Evaluation, Permanent Select Committee on Intelligence, Iran: Evaluation of US Intelligence Performance Prior to November 1978, January 1979.
11. Ibid.
12. US Congress, 1991.
13. Intelligence Community Staff, Staff Postmortem on the Performance of the Intelligence Community Prior to the October 1973 Arab-Israeli War, unclassified portions released by the House Permanent Select Committee on Intelligence.
14. Stein, 54.