The intelligence cycle is a formal, step-wise process that government agencies have employed since roughly World War II to give high-level decision makers foreknowledge of events so they can make advantageous decisions.  Business later adapted the same cycle to learn about competitors in legal and ethical ways.  But in an era of Big Data this cycle is changing.  Here is how.

The traditional intelligence cycle uses five steps, as shown in the diagram below.  Two key areas are easy to overlook.

The intelligence cycle

First is the need.  Resources are not engaged or deployed until and unless they are necessary.  They are expensive, expose sources to risk, and cause damage if lost or discovered.  Likewise, in the business equivalent of the intelligence cycle, a need drives the decision to engage.  When money, time, and/or strategic advantage are at stake, companies use the intelligence cycle.

Planning & Direction and Collection are also critical steps.  These steps break the problem down into analytical and collection sub-components.  Doing so focuses the problem and sets it up for analytical success, as the sketch below illustrates.
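
As a rough illustration, the breakdown can be pictured as a simple tree of sub-questions mapped to collection tasks.  The requirement and tasks below are hypothetical, invented for this example rather than drawn from any real tasking.

```python
# A hypothetical decomposition of one intelligence need into analytical
# sub-questions, each tied to concrete collection tasks.
requirement = {
    "need": "Will competitor X enter our market next year?",
    "sub_questions": {
        "Is X hiring in our region?": [
            "monitor job postings",
            "track conference attendance",
        ],
        "Is X investing in relevant capacity?": [
            "review public filings",
            "watch supplier announcements",
        ],
    },
}

# Flatten the tree into a focused collection plan.
for question, tasks in requirement["sub_questions"].items():
    for task in tasks:
        print(f"{question} -> {task}")
```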

Intelligence is about a preponderance of evidence.  When the problem set is logically and thoroughly broken down, analysts have the chance to see such a preponderance.  Analysts can also stress-test what they believe by removing some evidence and checking whether they still reach the same conclusion.
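
One way to picture that stress test is a leave-one-out check over the evidence: drop each item in turn and see whether the conclusion survives.  This is a minimal sketch; the evidence items, weights, threshold, and scoring rule are all hypothetical.

```python
# A minimal sketch of the "remove evidence and re-check" stress test.
def supports_conclusion(evidence, threshold=0.7):
    """Toy scoring: the conclusion holds if weighted evidence clears a threshold."""
    return sum(weight for _, weight in evidence) >= threshold

evidence = [
    ("intercepted shipment manifest", 0.4),
    ("satellite imagery of site activity", 0.3),
    ("open-source financial filing", 0.2),
]

baseline = supports_conclusion(evidence)

# Leave-one-out ablation: does the conclusion survive losing any single item?
for i, (name, _) in enumerate(evidence):
    reduced = evidence[:i] + evidence[i + 1:]
    if supports_conclusion(reduced) != baseline:
        print(f"Conclusion hinges on: {name}")
```

If removing any single item flips the result, the conclusion rests on that item rather than on a preponderance of evidence.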

Big Data practice can both learn from this cycle and be bound by it.  The learning comes from the clarity and proof of method the intelligence cycle has demonstrated over time.  Big Data efforts often get bogged down by a lack of a priori goals, over-collection of information, and over-fitting of data models.  But Big Data efforts improve when boundaries to the problem are set, reasonable choices about data and analytical techniques are tied to goals, and an unwavering focus on decisions is maintained.
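
The over-fitting pitfall in particular is easy to demonstrate.  A minimal sketch on synthetic data, using only numpy: a needlessly flexible model chases the training noise, and evaluating on held-out data exposes it.

```python
import numpy as np

# Synthetic data: the underlying trend is linear, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 2 * x + rng.normal(scale=0.1, size=x.size)

# Hold out a quarter of the points for honest evaluation.
idx = rng.permutation(x.size)
train, test = idx[:30], idx[30:]

for degree in (1, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    test_err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree}: held-out error = {test_err:.4f}")
# The high-degree polynomial fits training noise; held-out error reveals it.
```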

On the other hand, some of the promise of Big Data lies in discovering things not previously seen or imagined.  It is helpful to chart this out.  There is a body of knowledge about the world, yet much remains to be learned about how the world works.  Our personal or corporate body of knowledge is a subset of the world's.

knowledge map

When our knowledge matches that of the world, we have certainty.  But there are two classes of correctable uncertainty.  One is when we know we don't know something; this becomes a call for effort to acquire the knowledge.  The other is when we don't know we already know something; this is correctable through discovery.

The last box in the diagram represents pure uncertainty.  In the famous words of Donald Rumsfeld, these are the things we don't know that we don't know.

Much information technology of the last 30 years has been devoted to correcting uncertainty through effort: acquiring knowledge we knew we did not have but that existed.  Big Data, through machine learning and predictive analytics, addresses the discovery type of correctable uncertainty.  We need the volume and speed of information processing that only modern computing power provides to enact this discovery.
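
A minimal sketch of that discovery mode, assuming scikit-learn is available: unsupervised clustering surfaces groupings that were already latent in data we held but had never labeled.  The "records" here are synthetic, generated purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Records we already possess: two latent behavior patterns, unlabeled.
records = np.vstack([
    rng.normal(loc=[10, 1], scale=0.5, size=(100, 2)),
    rng.normal(loc=[2, 8], scale=0.5, size=(100, 2)),
])

# Clustering "discovers" the structure without being told it exists.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print("discovered group sizes:", np.bincount(model.labels_))
```

No new collection happened here; the knowledge was in the data all along, which is exactly the "we don't know we already know" quadrant.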

The larger goal is to use both effort and discovery to squeeze the pure-uncertainty box ever smaller.  As we do so, the certainty box grows while all the other boxes shrink.

The intelligence cycle in government or business is impacted by Big Data in the same way.  Uncertainty in any form can now become an intelligence need in and of itself, where the volume of data and the analytical horsepower required may previously have precluded it.  Likewise, Big Data predictive analytics opens decision-making avenues not previously considered or seen.  Finally, good intelligence practice has always included after-action evaluation and feedback.  Now the practice of Big Data provides its own machine-learning-based feedback and correction, delivering insight and subtleties previously missed.
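
That feedback step can be sketched as a loop that scores each prediction against the observed outcome and folds the error back into the model.  The running-estimate "model" below is a deliberately simple stand-in for whatever predictive analytics are actually in use; the outcomes are invented numbers.

```python
class FeedbackModel:
    """Predicts a quantity and corrects itself from after-action outcomes."""

    def __init__(self, estimate=0.0, learning_rate=0.2):
        self.estimate = estimate
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def feedback(self, outcome):
        # After-action correction: nudge the estimate toward what happened.
        error = outcome - self.estimate
        self.estimate += self.learning_rate * error
        return error

model = FeedbackModel(estimate=100.0)
for outcome in (120.0, 118.0, 122.0):  # observed results, illustrative only
    print(f"predicted {model.predict():6.1f}, error {model.feedback(outcome):+6.1f}")
```

Each pass through the loop is an automated after-action review: the evaluation that intelligence practice once did by hand now runs continuously.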

Whether for national security or for optimal corporate resource allocation, Big Data is improving intelligence creation.  As the 2010 report to Congress and the President succinctly stated:

“Data volumes are growing exponentially.  There are many reasons for this growth, including the creation of all data in digital form, a proliferation of sensors and new data sources such as high-resolution imagery and video.  The collection, management and analysis of data is a fast growing concern.  Automated analysis techniques such as data mining and machine learning facilitate the transformation of data into knowledge and knowledge into action.  Every federal agency [and business] needs to have a ‘big data’ strategy.” 
