Intelligence Fundamentals: From Raw Data to Actionable Intelligence

TL;DR

Intelligence is not just data collection—it’s the systematic transformation of raw information into actionable insights that drive decision-making. This foundational piece covers:

  • The DIKW Pyramid: Data → Information → Knowledge → Wisdom progression
  • Intelligence Levels: Strategic, Operational, and Tactical distinctions
  • Collection Disciplines (INTs): HUMINT, SIGINT, OSINT, GEOINT, MASINT, FININT
  • The Intelligence Cycle: A continuous 6-phase process from requirements to feedback
  • Source Evaluation: The Admiralty System for rating reliability and credibility
  • Cognitive Biases: The silent killers of good intelligence analysis
  • Attribution Challenges: Why “whodunit” is harder than it looks
  • Application to Cyber Threat Intelligence (CTI): How traditional intelligence concepts map to modern threat detection

What is Intelligence?

Intelligence is frequently misunderstood. It is not merely the accumulation of facts, nor is it synonymous with data or information. Intelligence represents the refined product of a rigorous analytical process—the transformation of disparate data points into coherent, contextual understanding that enables informed decision-making.

“Intelligence is information that has been collected, integrated, evaluated, analyzed, and interpreted.” — Joint Publication 2-0, Joint Intelligence

The distinction matters. A sensor detecting network traffic produces data. When that data is parsed and correlated, it becomes information. When an analyst determines the traffic pattern matches known command-and-control behavior, it becomes knowledge. When leadership uses that knowledge to authorize defensive measures or attribute the activity to a specific threat actor, we approach wisdom.

Critical Point: Intelligence is not truth—it is an estimate of truth based on incomplete information. Every intelligence product carries inherent uncertainty, which is why confidence levels and source evaluation are fundamental to the discipline.


The DIKW Pyramid: From Noise to Insight

The Data-Information-Knowledge-Wisdom (DIKW) hierarchy provides a foundational framework for understanding how raw inputs transform into actionable intelligence.

[Figure: DIKW Pyramid]

Data (What?)

Raw, unprocessed facts without context. In cyber operations, this includes:

  • Packet captures
  • Log entries
  • File hashes
  • IP addresses
  • DNS queries

Data alone answers “What happened?” but provides no meaning.

Example: 192.168.1.105 connected to 45.33.32.156:443 at 03:42:17 UTC

Information (Who, When, Where?)

Data becomes information when organized, structured, and given context. Information answers basic interrogatives.

Example: “Workstation WS-105, assigned to user john.doe in the Finance department, established an encrypted connection to an IP address geolocated to St. Petersburg, Russia, during non-business hours.”

Knowledge (How? Why?)

Knowledge emerges from analyzing patterns, correlating multiple information sources, and applying expertise. It provides understanding of mechanisms and motivations.

Example: “This connection pattern matches known Cobalt Strike beacon behavior. The destination IP is associated with infrastructure previously attributed to APT29. The Finance department has access to sensitive M&A documentation. This likely represents initial access by a sophisticated threat actor conducting economic espionage.”

Wisdom (What Should We Do?)

Wisdom synthesizes knowledge with experience, ethical considerations, and strategic context to inform decisions. It enables foresight and optimal action selection.

Example: “Based on the attribution confidence level (MODERATE), potential business impact, and geopolitical context, we recommend: (1) immediate network isolation of affected systems, (2) engagement of incident response, (3) notification of legal counsel regarding potential nation-state involvement, and (4) coordination with sector ISAC partners who may face similar targeting.”


The Four Qualities of Actionable Intelligence

Not all intelligence is created equal. Effective intelligence exhibits four critical characteristics:

| Quality | Definition | Counter-Example |
| --- | --- | --- |
| Actionable | Enables specific decisions or actions | “Bad actors exist on the internet” |
| Timely | Delivered when it can still influence outcomes | Attribution report arriving 6 months post-breach |
| Relevant | Addresses the consumer’s specific requirements | Sending ICS/SCADA threat intel to a SaaS company |
| Accurate | Factually correct with appropriate confidence levels | Misattributing commodity malware to an APT |

The acronym ATRA (Actionable, Timely, Relevant, Accurate) provides a useful mnemonic for evaluating intelligence quality.


Source Evaluation: The Admiralty System

[Figure: Source Reliability Matrix]

Before any information becomes intelligence, it must be evaluated. The intelligence community uses standardized grading systems to assess both the reliability of the source and the credibility of the information.

The Admiralty/NATO System

This dual-axis evaluation system, also known as the “6x6 system” or “NATO System,” has been the standard since WWII:

Source Reliability (A-F)

| Grade | Description | Criteria |
| --- | --- | --- |
| A | Completely Reliable | No doubt about source’s authenticity, trustworthiness, and competence. History of complete reliability. |
| B | Usually Reliable | Minor doubts. History of mostly valid information. |
| C | Fairly Reliable | Doubts exist. Has provided valid information in the past. |
| D | Not Usually Reliable | Significant doubts. History of some valid, some invalid information. |
| E | Unreliable | Lacking authenticity, trustworthiness, or competence. History of invalid information. |
| F | Cannot Be Judged | No basis for evaluating reliability. New or unknown source. |

Information Credibility (1-6)

| Grade | Description | Criteria |
| --- | --- | --- |
| 1 | Confirmed | Confirmed by independent sources. Logical, consistent with other information. |
| 2 | Probably True | Not confirmed, but logical and consistent with other information. |
| 3 | Possibly True | Not confirmed. Reasonably logical but limited corroboration. |
| 4 | Doubtfully True | Not confirmed. Possible but not logical. No other information to support. |
| 5 | Improbable | Not confirmed. Not logical. Contradicted by other information. |
| 6 | Cannot Be Judged | No basis for evaluating credibility. |

Practical Application

Example Evaluation:

Source: Underground forum user "xShadowBrokerx" (active 3 years, 
        verified sales history, previous accurate leaks)
Information: Claims upcoming ransomware campaign targeting healthcare sector

Evaluation: B2
- Source Reliability: B (Usually Reliable) - Established presence, track record
- Information Credibility: 2 (Probably True) - Consistent with observed 
  threat landscape, not independently confirmed
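The dual-axis rating can also be carried as a small structured value in tooling. A minimal sketch; the class and field names are illustrative, not from any standard library:

```python
# Sketch of an Admiralty/NATO-style rating as a structured value.
# Class and field names are assumptions for illustration.
from dataclasses import dataclass

RELIABILITY = {  # source reliability axis (A-F)
    "A": "Completely Reliable", "B": "Usually Reliable", "C": "Fairly Reliable",
    "D": "Not Usually Reliable", "E": "Unreliable", "F": "Cannot Be Judged",
}
CREDIBILITY = {  # information credibility axis (1-6)
    1: "Confirmed", 2: "Probably True", 3: "Possibly True",
    4: "Doubtfully True", 5: "Improbable", 6: "Cannot Be Judged",
}

@dataclass(frozen=True)
class AdmiraltyRating:
    reliability: str   # "A".."F"
    credibility: int   # 1..6

    def __post_init__(self):
        if self.reliability not in RELIABILITY or self.credibility not in CREDIBILITY:
            raise ValueError("invalid Admiralty rating")

    def __str__(self):
        return f"{self.reliability}{self.credibility}"

    def describe(self):
        return f"{RELIABILITY[self.reliability]} / {CREDIBILITY[self.credibility]}"

rating = AdmiraltyRating("B", 2)
print(rating, "-", rating.describe())  # B2 - Usually Reliable / Probably True
```

Keeping the two axes separate (rather than collapsing to one score) preserves the distinction the system is built on: a reliable source can still pass along improbable information.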

Why This Matters: Without source evaluation, you cannot assign confidence levels to your assessments. An analyst who treats a B1 source the same as an E5 source will produce garbage intelligence.

Cyber-Specific Considerations

In CTI, source evaluation extends to:

| Source Type | Reliability Factors |
| --- | --- |
| Threat Intel Feeds | Vendor reputation, update frequency, false positive rate |
| Dark Web Forums | Account age, reputation score, verified transactions |
| Malware Samples | Submission source, sandbox environment, analysis depth |
| OSINT | Publication credibility, author expertise, corroboration |
| Technical Indicators | Collection method, age, context |

Cognitive Biases: The Silent Killers

[Figure: Cognitive Biases]

Intelligence failures are rarely about lack of information—they’re about failures in analysis. Cognitive biases are systematic errors in thinking that affect decisions and judgments.

“We see what we expect to see, not what is there to be seen.” — Richards Heuer, Psychology of Intelligence Analysis

Critical Biases for Intelligence Analysts

Confirmation Bias

Definition: Seeking, interpreting, and remembering information that confirms pre-existing beliefs.

Historical Example: Iraq WMD Assessment (2002-2003). Analysts focused on information supporting the existence of WMD programs while discounting contradictory evidence. The assumption that Saddam Hussein must have WMDs led to selective interpretation of ambiguous data.

Mitigation: Devil’s Advocacy, Analysis of Competing Hypotheses (ACH)


Anchoring Bias

Definition: Over-relying on the first piece of information encountered (the “anchor”).

Example in CTI: Initial attribution to a specific threat actor becomes the anchor. All subsequent evidence is interpreted through that lens, even when it should prompt reconsideration.

Mitigation: Explicitly document initial assumptions, revisit them regularly


Mirror Imaging

Definition: Assuming adversaries think and act as we would in their situation.

Historical Example: Pearl Harbor (1941). American analysts assumed Japan wouldn’t attack because it would be “irrational” given US military superiority. They failed to understand Japanese strategic calculus.

Mitigation: Red Team Analysis, Cultural expertise


Groupthink

Definition: Conformity pressure within a group suppresses dissenting views and alternative analysis.

Historical Example: Bay of Pigs (1961). CIA planners convinced themselves the invasion would succeed; dissenting voices were silenced or excluded.

Mitigation: Structured dissent (assigning “devil’s advocate” role), anonymous feedback


Availability Heuristic

Definition: Overweighting information that comes to mind easily (recent, dramatic, or personally experienced events).

Example in CTI: After a high-profile ransomware attack, analysts may over-attribute subsequent incidents to the same actor because that threat is top-of-mind.

Mitigation: Base rate analysis, structured checklists


Structured Analytical Techniques (SATs)

The intelligence community developed SATs specifically to counter cognitive biases:

| Technique | Purpose | Use When |
| --- | --- | --- |
| Analysis of Competing Hypotheses (ACH) | Systematically evaluate multiple explanations against evidence | Attribution, complex assessments |
| Key Assumptions Check | Identify and examine underlying assumptions | Any major assessment |
| Red Team Analysis | Think like the adversary | Threat assessment, vulnerability analysis |
| Devil’s Advocacy | Argue against the prevailing view | Before finalizing assessments |
| Indicators & Warnings (I&W) | Define observable events that would signal change | Monitoring, forecasting |

Note: Detailed SAT methodology will be covered in Part 5 of this series.


Intelligence Levels: Strategic, Operational, Tactical

Intelligence requirements and products vary significantly based on the consumer’s decision-making horizon.

[Figure: Intelligence Levels]

Strategic Intelligence

  • Audience: Executives, Board, Policy Makers
  • Horizon: 12-36 months
  • Focus: Threat landscape trends, risk posture, investment priorities, geopolitical shifts
  • Example: “Nation-state actors are increasingly targeting supply chains in our sector. We recommend diversifying critical vendors and implementing additional third-party risk controls.”

Operational Intelligence

  • Audience: Security Managers, IR Teams, Hunt Teams
  • Horizon: Weeks to months
  • Focus: Campaign analysis, threat actor profiles, TTP evolution, infrastructure patterns
  • Example: “APT41 has shifted from custom malware to living-off-the-land techniques in Q4. Hunt teams should prioritize detection of LOLBins abuse.”

Tactical Intelligence

  • Audience: SOC Analysts, Detection Engineers, Incident Responders
  • Horizon: Hours to days
  • Focus: IOCs, detection signatures, specific TTPs, immediate response procedures
  • Example: “Block hash abc123..., IP 45.33.32.156, and monitor for scheduled task persistence using schtasks.exe /create.”

Critical Insight: Organizations often over-invest in tactical intelligence (IOC feeds) while under-investing in strategic and operational intelligence. IOCs are inherently perishable—they represent the artifacts of an attack, not the behavior. Mature intelligence programs balance all three levels.


Intelligence Collection Disciplines (The “INTs”)

Intelligence collection is organized into specialized disciplines, each with distinct sources, methods, and analytical requirements. These disciplines are collectively referred to as “INTs.”

[Figure: Intelligence Disciplines]

Primary Collection Disciplines

HUMINT (Human Intelligence)

Definition: Intelligence derived from human sources through interpersonal contact.

Sources:

  • Informants and agents
  • Diplomatic reporting
  • Traveler debriefings
  • Defectors
  • Interrogations
  • Elicitation (social engineering in cyber context)

Characteristics:

  • Oldest intelligence discipline (dating to antiquity)
  • Provides intent and motivation (the “why”)
  • High value, high risk
  • Difficult to scale
  • Vulnerable to deception and double agents

Cyber Application: Engaging with threat actors in underground forums, recruiting sources within criminal organizations, social engineering assessments, insider threat programs.

Source Evaluation Challenge: Human sources can be turned, deceived, or may have their own agendas. Corroboration is essential.


SIGINT (Signals Intelligence)

Definition: Intelligence derived from the interception of signals, including communications and electronic emissions.

Sub-disciplines:

| Abbreviation | Name | Focus |
| --- | --- | --- |
| COMINT | Communications Intelligence | Voice, text, messaging intercepts |
| ELINT | Electronic Intelligence | Non-communication signals (radar, telemetry) |
| FISINT | Foreign Instrumentation Signals Intelligence | Weapons systems, space vehicle telemetry |

Characteristics:

  • Technical, scalable collection
  • Volume creates analytical challenges
  • Encryption is a significant obstacle
  • Provides communication patterns and content
  • Legal frameworks vary significantly by jurisdiction

Cyber Application: Network traffic analysis, malware C2 protocol analysis, encrypted traffic metadata analysis, passive DNS collection.


OSINT (Open Source Intelligence)

Definition: Intelligence derived from publicly available sources.

Sources:

  • Media (print, broadcast, online)
  • Academic publications
  • Government reports and filings
  • Social media
  • Commercial databases
  • Technical documentation
  • Conference proceedings
  • Court records
  • Patent filings

Characteristics:

  • Accessible and cost-effective
  • Volume requires sophisticated filtering
  • Verification challenges (misinformation, disinformation)
  • Estimated to provide 60-80% of intelligence requirements
  • Legal to collect, but ethical and legal considerations remain

Cyber Application: Threat actor research, vulnerability intelligence, leaked credentials monitoring, brand protection, attack surface mapping.

Legal/Ethical Considerations:

  • Privacy Laws: GDPR, CCPA, and other regulations may restrict collection and processing of personal data
  • Terms of Service: Scraping may violate platform ToS
  • Duty of Care: Collected information may reveal individuals at risk
  • Responsible Disclosure: Vulnerability information requires careful handling

GEOINT (Geospatial Intelligence)

Definition: Intelligence derived from the analysis of imagery and geospatial information.

Sources:

  • Satellite imagery (optical, radar, multispectral)
  • Aerial photography
  • Mapping data
  • Location-based services
  • Geolocation metadata

Characteristics:

  • Provides physical context
  • Commercial availability has democratized access
  • Temporal analysis reveals patterns
  • Integration with other INTs enhances value

Cyber Application: Physical infrastructure mapping, data center identification, supply chain verification, attribution support (correlating physical and cyber activities).


MASINT (Measurement and Signature Intelligence)

Definition: Intelligence derived from the analysis of data obtained from sensing instruments for the purpose of identifying distinctive features associated with the source, emitter, or sender.

Focus Areas:

  • Radar signatures
  • Acoustic signatures
  • Nuclear radiation
  • Chemical/biological detection
  • Spectral analysis
  • Materials analysis

Characteristics:

  • Highly technical
  • Requires specialized sensors
  • Provides unique identification capabilities
  • Often complements other INTs

Cyber Application: Electromagnetic emanations analysis (TEMPEST), side-channel attacks, hardware fingerprinting.


Additional Disciplines

| INT | Full Name | Focus |
| --- | --- | --- |
| FININT | Financial Intelligence | Monetary transactions, sanctions evasion, money laundering, cryptocurrency tracing |
| SOCMINT | Social Media Intelligence | Social platform analysis, influence operations, persona attribution |
| TECHINT | Technical Intelligence | Foreign weapons and equipment analysis, malware reverse engineering |
| IMINT | Imagery Intelligence | Subset of GEOINT focused on imagery analysis |
| MEDINT | Medical Intelligence | Health-related intelligence, biological threats, pandemic monitoring |

Note on “CYBINT”: While sometimes used colloquially, CYBINT is not a formally recognized standalone discipline. Cyber intelligence typically represents an “all-source” fusion of SIGINT, OSINT, HUMINT, and TECHINT applied to the cyber domain.


The Intelligence Cycle

The intelligence cycle describes the process by which raw information is converted into finished intelligence and delivered to consumers. While various models exist, the six-phase model is widely adopted:

[Figure: Intelligence Cycle]

Phase 1: Requirements (Planning & Direction)

Intelligence begins with a question. Requirements define:

  • Priority Intelligence Requirements (PIRs): Critical questions leadership needs answered
  • Essential Elements of Information (EEIs): Specific data points needed to address PIRs
  • Collection emphasis: Which INTs to prioritize
  • Intelligence Gaps: What we don’t know but need to know

Example PIR: “What are the most likely threat actors to target our organization in the next 12 months, and what TTPs do they employ?”

Common Failure Point: Requirements that are too vague (“tell me about threats”) or too narrow (missing emerging threats).

Phase 2: Collection

Systematic gathering of raw information using appropriate INTs. Effective collection:

  • Aligns with defined requirements
  • Employs multiple INTs for corroboration
  • Documents sources and methods
  • Maintains chain of custody
  • Identifies collection gaps

Collection Management involves prioritizing limited collection assets against numerous requirements. Not everything can be collected—tradeoffs are necessary.

Phase 3: Processing (Exploitation)

Raw collected material is converted into usable form:

  • Translation
  • Decryption
  • Format conversion
  • Data normalization
  • Deduplication
  • Initial validation
  • Source evaluation
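The processing steps above can be sketched as a small normalization-and-deduplication pass. Field names and the canonical schema below are illustrative assumptions, not from any particular product:

```python
# Sketch of the processing phase: map heterogeneous raw records onto one
# canonical schema, then deduplicate. Field names are assumptions.
from datetime import datetime, timezone

def normalize(record: dict) -> dict:
    """Convert a raw record into the canonical {ts, src, dst} schema."""
    raw_ts = record.get("timestamp", record.get("ts", ""))
    ts = datetime.fromisoformat(raw_ts.replace("Z", "+00:00"))
    return {
        "ts": ts.astimezone(timezone.utc).isoformat(),
        "src": record.get("src_ip") or record.get("source"),
        "dst": record.get("dst_ip") or record.get("destination"),
    }

def deduplicate(records):
    """Drop records that normalize to an already-seen (ts, src, dst) tuple."""
    seen, unique = set(), []
    for rec in map(normalize, records):
        key = (rec["ts"], rec["src"], rec["dst"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Two sensors reporting the same event with different field conventions:
raw = [
    {"timestamp": "2024-11-15T03:42:17Z", "src_ip": "192.168.1.105", "dst_ip": "45.33.32.156"},
    {"ts": "2024-11-15T03:42:17+00:00", "source": "192.168.1.105", "destination": "45.33.32.156"},
]
print(deduplicate(raw))  # the two raw records collapse into one normalized entry
```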

Phase 4: Analysis (Production)

The intellectual core of intelligence work. Analysis involves:

  • Correlation of multiple sources
  • Pattern recognition
  • Hypothesis development and testing
  • Application of Structured Analytical Techniques (SATs)
  • Assessment of reliability and credibility
  • Confidence level assignment
  • Production of finished intelligence products

Analytical Rigor: Professional analysis employs SATs to mitigate cognitive biases and ensure defensible conclusions. An assessment without a confidence level and supporting reasoning is an opinion, not intelligence.

Phase 5: Dissemination

Finished intelligence must reach consumers in appropriate formats and through secure channels:

  • Written reports (assessments, briefings)
  • Alerts and notifications
  • Machine-readable feeds (STIX/TAXII)
  • Dashboards and visualizations
  • Oral briefings
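For the machine-readable path, a finished indicator might be serialized in STIX 2.1 shape. The sketch below hand-builds the JSON rather than using a STIX library; the id, timestamps, name, and confidence value are illustrative:

```python
# Hedged sketch of a STIX 2.1-shaped indicator for dissemination.
# The id, timestamps, and confidence value are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Suspected C2 address",
    "pattern": "[ipv4-addr:value = '45.33.32.156']",
    "pattern_type": "stix",
    "valid_from": now,
    "confidence": 50,  # roughly "moderate" on STIX's 0-100 scale
}
print(json.dumps(indicator, indent=2))
```

A payload in this shape is what a TAXII server would carry; consumers can ingest it without parsing prose.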

Key Principle: Intelligence that doesn’t reach the right person at the right time is useless, no matter how accurate.

Phase 6: Feedback (Evaluation)

Consumers provide feedback on:

  • Relevance to their needs
  • Timeliness of delivery
  • Actionability of recommendations
  • Accuracy (over time)

This feedback refines future requirements, completing the cycle.


Attribution: The Confidence Spectrum

Attribution—determining who is responsible for an action—is one of the most challenging aspects of intelligence work, particularly in the cyber domain.

Why Attribution is Hard

| Challenge | Description |
| --- | --- |
| False Flags | Adversaries deliberately plant evidence pointing to others |
| Shared Tooling | Multiple actors use the same malware families (Cobalt Strike, Mimikatz) |
| Proxy Operations | Contractors, criminals-for-hire obscure true sponsors |
| Infrastructure Overlap | Shared hosting, bulletproof providers, compromised systems |
| Parallel Development | Similar TTPs may emerge independently |
| Deliberate Confusion | APT28 vs APT29 have overlapping operations |

Attribution Confidence Levels

| Level | Description | Basis |
| --- | --- | --- |
| HIGH | We assess with high confidence… | Multiple independent sources, corroborating evidence across INTs, consistent with known patterns |
| MODERATE | We assess with moderate confidence… | Good evidence from fewer sources, some analytical gaps, generally consistent |
| LOW | We assess with low confidence… | Limited evidence, significant gaps, plausible but uncertain |

Critical Point: Even “HIGH confidence” is not certainty. The 2002 Iraq WMD NIE assessed with “high confidence” that Iraq had WMD programs—and was wrong.

Attribution in Practice

Real-world attribution rarely produces clear answers:

Example - Better Attribution Statement:

“We assess with MODERATE confidence that APT29 is responsible for this intrusion, based on: (1) infrastructure overlap with previously attributed APT29 operations, (2) TTP consistency with documented APT29 playbooks, and (3) targeting alignment with Russian strategic interests. However, we cannot exclude the possibility of a sophisticated false flag operation, shared tooling, or contractor involvement. Alternative hypotheses considered include APT28 and criminal actors with Russian nexus.”

This is more honest—and more useful—than simply saying “APT29 did it.”


Application to Cyber Threat Intelligence (CTI)

The traditional intelligence framework maps directly to cyber threat intelligence:

| Traditional Concept | CTI Application |
| --- | --- |
| PIRs | “What ransomware groups target our sector?” |
| HUMINT | Dark web forum engagement, insider threat programs |
| SIGINT | Network traffic analysis, C2 protocol research |
| OSINT | Threat actor blogs, paste sites, code repositories |
| GEOINT | Infrastructure geolocation, data residency compliance |
| Source Evaluation | Threat feed quality assessment, indicator confidence |
| Intelligence Cycle | CTI program operations |
| Cognitive Biases | Analyst training, structured analysis |

The Pyramid of Pain

David Bianco’s Pyramid of Pain (2013) illustrates the relationship between indicator types and the cost to adversaries when those indicators are denied:

[Figure: Pyramid of Pain]

| Level | Indicator Type | Adversary Pain | Notes |
| --- | --- | --- | --- |
| Top | TTPs | TOUGH! | Must change behavior, tradecraft |
| | Tools | Challenging | Must find/develop new tools |
| | Host Artifacts | Annoying | Registry keys, file paths, mutexes |
| | Network Artifacts | Annoying | URI patterns, C2 protocols, JA3 |
| | Domain Names | Simple | DNS is cheap, but requires setup |
| | IP Addresses | Easy | Infrastructure is fungible |
| Bottom | Hash Values | Trivial | Recompile = new hash |

Key Insight: Organizations that focus detection on Tactics, Techniques, and Procedures (TTPs) rather than atomic indicators create significantly higher costs for adversaries. This aligns with the MITRE ATT&CK framework’s behavior-based approach.
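One practical consequence: a detection backlog can be ordered by pyramid level so behavior-based coverage gets built first. A toy sketch; the numeric pain scores are my own assumption, not part of Bianco's model:

```python
# Sketch: rank a detection backlog by Pyramid of Pain level, so TTP-level
# coverage is prioritized over atomic indicators. Scores are assumptions.
PAIN = {"hash": 0, "ip": 1, "domain": 2, "network_artifact": 3,
        "host_artifact": 3, "tool": 4, "ttp": 5}

def prioritize(detections):
    """Sort detection ideas so higher-pain indicator types come first."""
    return sorted(detections, key=lambda d: PAIN[d["type"]], reverse=True)

backlog = [
    {"type": "hash", "value": "known bad sample hash"},
    {"type": "ttp", "value": "scheduled task persistence via schtasks.exe"},
    {"type": "ip", "value": "45.33.32.156"},
]
for item in prioritize(backlog):
    print(item["type"], "->", item["value"])  # ttp first, hash last
```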


Case Study: From Data to Intelligence

Let’s trace how raw data becomes actionable intelligence through proper analytical process:

The Data

timestamp: 2024-11-15T03:42:17Z
src_ip: 192.168.1.105
dst_ip: 45.33.32.156
dst_port: 443
bytes_out: 2847
bytes_in: 156892

The Information

  • Source: Workstation WS-105 (Finance, user: john.doe)
  • Destination: IP geolocated to St. Petersburg, Russia
  • Provider: VPS provider with documented abuse history
  • Time: 03:42 UTC (non-business hours for US East Coast)
  • Traffic pattern: Small request, large response (possible exfiltration or beacon check-in)
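The jump from the raw record to the bullets above is mechanical enrichment. A sketch in which the asset inventory and GeoIP lookup are stand-in dictionaries (hypothetical data, not a real service):

```python
# Sketch of the data-to-information step: enrich a raw flow record with
# context. ASSETS and GEO are stand-ins for an inventory and GeoIP service.
ASSETS = {"192.168.1.105": {"host": "WS-105", "user": "john.doe", "dept": "Finance"}}
GEO = {"45.33.32.156": "St. Petersburg, RU"}

def enrich(flow: dict) -> dict:
    hour = int(flow["timestamp"][11:13])  # UTC hour from ISO 8601 string
    return {
        **flow,
        "asset": ASSETS.get(flow["src_ip"], {}),
        "dst_geo": GEO.get(flow["dst_ip"], "unknown"),
        "after_hours": hour < 6 or hour > 22,  # crude UTC heuristic
        "in_out_ratio": flow["bytes_in"] / flow["bytes_out"],
    }

flow = {"timestamp": "2024-11-15T03:42:17Z", "src_ip": "192.168.1.105",
        "dst_ip": "45.33.32.156", "dst_port": 443,
        "bytes_out": 2847, "bytes_in": 156892}
info = enrich(flow)
print(info["asset"]["dept"], info["dst_geo"], info["after_hours"],
      round(info["in_out_ratio"], 1))  # Finance St. Petersburg, RU True 55.1
```

Note that nothing here is analysis yet: the enriched record answers who/when/where, but the how/why still requires an analyst.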

Analysis Process

Step 1: Source Evaluation

  • Network telemetry: A1 (our own sensors, raw data)
  • Threat intel on destination IP: B2 (reputable vendor, not independently confirmed)
  • ISAC reporting: C2 (peer reports, limited detail)

Step 2: Hypothesis Generation

  1. Malicious C2 communication (APT)
  2. Malicious C2 communication (criminal)
  3. Legitimate but unusual business activity
  4. Compromised third-party application
  5. False positive (CDN, cloud service)

Step 3: Evidence Evaluation

| Evidence | H1 (APT) | H2 (Criminal) | H3 (Legit) | H4 (Third-party) | H5 (FP) |
| --- | --- | --- | --- | --- | --- |
| Russian IP | ++ | + | - | N | N |
| Non-business hours | + | + | - | N | N |
| Traffic pattern | ++ | ++ | N | + | - |
| Prior spearphish | ++ | + | N | | |
| ISAC correlation | + | + | N | N | |
| Finance dept targeting | ++ | + | N | N | N |

(++ strongly consistent, + consistent, - inconsistent, N neutral)
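ACH bookkeeping like this can be automated in a few lines. The key discipline is that ACH rejects hypotheses rather than confirming them, so the sketch below ranks by count of inconsistent evidence; the matrix entries are illustrative:

```python
# Minimal ACH bookkeeping sketch. ACH seeks the hypothesis with the LEAST
# inconsistent evidence, rather than the most confirming evidence.
def ach_rank(matrix):
    """matrix: {hypothesis: [consistency marks per evidence item]}.
    Marks: '++'/'+' consistent, 'N' neutral, '-'/'--' inconsistent.
    Returns (hypothesis, inconsistent_count) pairs, best-surviving first."""
    inconsistency = {
        h: sum(1 for mark in marks if mark.startswith("-"))
        for h, marks in matrix.items()
    }
    return sorted(inconsistency.items(), key=lambda kv: kv[1])

# Illustrative marks for three of the hypotheses against three evidence items:
matrix = {
    "H1 APT C2":         ["++", "+", "++"],
    "H3 Legit":          ["-", "-", "N"],
    "H5 False positive": ["N", "N", "-"],
}
for hypothesis, misses in ach_rank(matrix):
    print(hypothesis, "inconsistent items:", misses)
```

Counting only inconsistencies mirrors Heuer's point that confirming evidence is cheap; a single solid inconsistency should weigh more than many consistent-but-ambiguous observations.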

Step 4: Assessment

After applying ACH methodology:

We assess with MODERATE confidence that this activity represents command-and-control communication by a sophisticated threat actor, likely APT29 or affiliated entity. This assessment is based on: infrastructure correlation with previously attributed operations (B2), TTP consistency with documented nation-state playbooks, targeting alignment with economic espionage objectives, and corroborating activity reported by sector peers.

Alternative hypotheses: Criminal actor (possible but less consistent with targeting), compromised third-party software (requires additional investigation).

Key assumptions: Threat intelligence attribution of destination IP is accurate; spearphishing email was the initial vector.

Intelligence gaps: No malware sample recovered; limited visibility into lateral movement.

The Wisdom

Recommendations:

  1. Initiate incident response procedures (HIGH priority)
  2. Preserve forensic evidence for potential law enforcement engagement
  3. Engage legal counsel regarding nation-state attribution implications
  4. Coordinate with sector ISAC (share indicators, request additional context)
  5. Brief executive leadership on potential business impact
  6. Expand detection for related TTPs across enterprise
  7. Review third-party applications with Finance access

Looking Ahead

This foundational piece establishes the conceptual framework for intelligence operations. Subsequent articles in this series will provide deeper dives:

  • Part 1: OSINT Deep Dive — Sources, Tools, Tradecraft, and Legal Considerations
  • Part 2: SIGINT Fundamentals — From RF to Packet
  • Part 3: HUMINT in Cyber Operations — Social Engineering and Beyond
  • Part 4: GEOINT for Cyber — Physical-Digital Convergence
  • Part 5: Structured Analysis — ACH, Red Teaming, and Cognitive Bias Mitigation

Glossary

| Term | Definition |
| --- | --- |
| ACH | Analysis of Competing Hypotheses - structured technique for evaluating multiple explanations |
| COMINT | Communications Intelligence - subset of SIGINT |
| CTI | Cyber Threat Intelligence |
| EEI | Essential Elements of Information |
| ELINT | Electronic Intelligence - subset of SIGINT |
| FININT | Financial Intelligence |
| GEOINT | Geospatial Intelligence |
| HUMINT | Human Intelligence |
| I&W | Indicators and Warnings |
| IMINT | Imagery Intelligence |
| INT | Intelligence discipline |
| IOC | Indicator of Compromise |
| ISAC | Information Sharing and Analysis Center |
| MASINT | Measurement and Signature Intelligence |
| OSINT | Open Source Intelligence |
| PIR | Priority Intelligence Requirement |
| SAT | Structured Analytical Technique |
| SIGINT | Signals Intelligence |
| SOCMINT | Social Media Intelligence |
| TECHINT | Technical Intelligence |
| TTP | Tactics, Techniques, and Procedures |

References and Further Reading

Official Publications

  • Joint Publication 2-0: Joint Intelligence (US DoD)
  • Intelligence Community Directive 203: Analytic Standards
  • Intelligence Community Directive 206: Sourcing Requirements
  • NIST SP 800-150: Guide to Cyber Threat Information Sharing

Foundational Texts

  • Heuer, R. (1999). Psychology of Intelligence Analysis — Essential reading on cognitive biases
  • Lowenthal, M. (2019). Intelligence: From Secrets to Policy — Comprehensive overview
  • Clark, R. (2019). Intelligence Analysis: A Target-Centric Approach — Modern analytical methods

Historical Case Studies

  • Wohlstetter, R. (1962). Pearl Harbor: Warning and Decision — Classic study of intelligence failure
  • Jervis, R. (2010). Why Intelligence Fails — Analysis of Iraq WMD assessment failure


This article is part of the Intelligence Fundamentals series. The series aims to bridge traditional intelligence tradecraft with modern cyber threat intelligence operations.

Questions, corrections, or feedback? Open an issue on GitHub or reach out via the contact page.


Changelog:

  • v1.1: Added Source Evaluation section, Cognitive Biases section, Attribution Confidence Levels, expanded Legal/Ethical considerations, corrected Pyramid of Pain structure, added Glossary
  • v1.0: Initial release