Episode 42 — Prioritize Intelligence: Indicators, Observables, and the Pyramid of Pain

In this episode, we’re going to take the messy world of threat information and turn it into something you can actually prioritize, instead of treating every scary-looking detail as equally important. Beginners often collect facts the way someone scoops up every shell on a beach, and then they end up overwhelmed because none of it seems to connect. The goal here is to learn which details help you make better defensive decisions and which details are easy to collect but easy for attackers to change. We will focus on three ideas that work well together: indicators, observables, and the Pyramid of Pain. When you understand how these fit, you stop thinking of threat information as trivia and start using it like a map that tells you where to look, what to protect first, and what kinds of changes actually slow an adversary down. The result is calmer thinking and faster choices, even when the information you have is incomplete.

Before we continue, a quick note: this audio course is a companion to our course books. The first book covers the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Let’s start with the most basic building block: an observable. An observable is simply something you can notice in the world of computers and networks, like an I P address, a domain name, a filename, a process name, a user account, a registry path, or a pattern in network traffic. Observables are not automatically good or bad, and that is the key point. A domain name could be used by a normal business today and abused tomorrow, and a filename can be harmless in one context and suspicious in another. Observables are raw facts, like seeing footprints in sand. They tell you that something happened, but not necessarily who did it or why it matters. For a beginner, thinking in observables is helpful because it trains you to separate what you saw from what you believe. That separation makes your analysis cleaner, and it keeps you from turning one odd-looking detail into a full-blown conclusion before you have enough evidence.
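
If you are reading along, here is a minimal Python sketch of that idea: an observable as a plain record of what was seen, where, and when. The field names and sample values are invented for illustration; the point is that nothing in the structure carries a verdict.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observable:
    """A raw fact: what was seen and where. No verdict attached."""
    kind: str    # e.g. "ip", "domain", "filename", "process"
    value: str   # the thing that was observed
    source: str  # where it was seen, e.g. "proxy logs"

# Observables record what you saw, not what you believe about it.
seen = [
    Observable("domain", "updates.example-cdn.net", "dns logs"),
    Observable("process", "powershell.exe", "endpoint agent"),
]
```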

An indicator is different because it adds meaning and intention to an observable. When we say indicator, we mean an observable that is being used as a signal for a particular condition, like possible compromise or likely malicious behavior. You will often hear the phrase Indicators of Compromise (I O C), a common label for items used to detect that something bad may have happened or is still happening. The important idea is that an observable becomes an I O C only when you use it as a clue that points toward a threat hypothesis. That means the same observable might be an I O C in one environment and meaningless in another. A login from an unfamiliar country might be an indicator for a company with local-only staff, but not for a company with a global workforce. Good defenders are careful about this, because if you treat every observable as an I O C, you will generate endless false alarms and lose trust in your own signals.
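
To make that environment-dependence concrete, here is a small, hypothetical Python check built on the login example above. The function name and the country sets are invented for illustration; the takeaway is that the verdict comes from context, not from the observable alone.

```python
def is_login_indicator(login_country: str, workforce_countries: set[str]) -> bool:
    """Treat an unfamiliar login country as an indicator only when the
    environment makes it meaningful, such as a local-only workforce."""
    return login_country not in workforce_countries

# Same observable, different environments, different verdicts.
local_company = {"US"}
global_company = {"US", "DE", "IN", "BR", "JP"}
print(is_login_indicator("DE", local_company))   # True: worth a hypothesis
print(is_login_indicator("DE", global_company))  # False: routine
```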

Now we need one more word that often confuses new learners: intelligence. Intelligence is not just a list of bad things. Intelligence is information that has been processed and given context so it supports a decision. A raw list of I P addresses is not automatically intelligence, even if the list came from a respected source. It becomes intelligence when you know why those I Ps matter, what they are associated with, how recent and reliable the information is, and what action you can reasonably take because of it. Beginners sometimes believe that if something appears in a threat feed, it must be used immediately, but that approach usually creates noise. Intelligence should reduce uncertainty, not increase it. When you prioritize intelligence well, you become selective about what you store, what you alert on, and what you treat as high-confidence. That selectivity is not laziness; it is discipline, because your attention and response time are limited resources.

To prioritize well, you need a simple mental filter: how easy is it for the attacker to change this, and how much pain does it cause them if we block or detect it. That filter is exactly what the Pyramid of Pain is designed to teach. The Pyramid of Pain is a way of ranking different kinds of indicators based on how hard they are for an attacker to replace. The lower parts of the pyramid are things that are easy for attackers to change, and the higher parts are things that are harder to change because they reflect deeper choices and habits. If you only focus on the bottom of the pyramid, you might catch today’s version of an attack but miss tomorrow’s, because attackers can swap those items quickly. If you learn to detect and disrupt higher levels, you force attackers to spend more time, more money, and more effort to adapt, which is a real defensive win. This is not about being perfect; it is about choosing battles you can win repeatedly.
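
For readers following along, here is one way to sketch that ranking in Python. The level names and pain labels follow the commonly cited version of the pyramid from David Bianco; the function itself is just an illustration of the mental filter.

```python
# The Pyramid of Pain, bottom to top, with the rough cost each level
# imposes on an attacker who has to replace it (after David Bianco).
PYRAMID_OF_PAIN = [
    ("hash values", "trivial"),
    ("ip addresses", "easy"),
    ("domain names", "simple"),
    ("network/host artifacts", "annoying"),
    ("tools", "challenging"),
    ("ttps", "tough"),
]

def pain_rank(indicator_type: str) -> int:
    """Higher rank means more pain for the attacker to adapt."""
    for rank, (level, _cost) in enumerate(PYRAMID_OF_PAIN):
        if level == indicator_type:
            return rank
    raise ValueError(f"unknown indicator type: {indicator_type}")

print(pain_rank("ttps") > pain_rank("ip addresses"))  # True
```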

At the bottom of the pyramid, you typically find simple data points like file hashes. A hash is a short fingerprint of a file, and it can be very useful for quickly recognizing that you have seen the exact same file before. The weakness is that attackers can change a file slightly, even by adding harmless padding, and the hash changes completely. That makes hash-based I O C lists fragile, especially against attackers who rebuild or repackage malware often. Hashes still have value, particularly for confirming that a known bad file exists in your environment, but they are not a strong long-term strategy by themselves. For a beginner, the takeaway is to treat hashes as precise but brittle. They are great for matching the exact thing you already know about, but they are not great for predicting what the attacker will do next, because the attacker can easily give you a different file tomorrow.
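
You can demonstrate that brittleness in a few lines of Python using the standard hashlib module. Appending a single padding byte produces a completely different digest, which is exactly why exact-match hash lists age so quickly.

```python
import hashlib

original = b"malicious payload bytes"
padded = original + b"\x00"  # one harmless padding byte appended

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(padded).hexdigest())
# The two digests share nothing recognizable, so an exact-match
# hash IOC misses the padded file entirely.
```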

Moving up, you often see I P addresses. Blocking an I P address can be tempting because it feels direct, like slamming a door shut, and sometimes it works for a short time. The problem is that attackers can rent new infrastructure, move to new hosting providers, or route through compromised systems that change constantly. I P blocking can also create accidental harm if the I P is shared, used by legitimate services, or changes ownership. That is why I P addresses are often considered low to moderate in the pain they cause attackers. They can slow down careless attackers, but skilled attackers treat I P rotation as routine. When you see an I P in threat information, you should ask what role it plays. Is it an endpoint used for Command and Control (C 2), is it part of scanning activity, or is it tied to an actual intrusion story in your environment. The answer determines whether the I P is a useful short-term control or just a noisy data point.

Next you will often find domains, and domains can be a bit more painful for attackers than I Ps, but still not extremely painful. Attackers can register new domains quickly, but domains sometimes connect to branding tricks, phishing themes, and infrastructure patterns that take effort to rebuild convincingly. A domain used in phishing might have a name that mimics a real service, and the attacker may rely on it to look believable to victims. If you can detect that style, you can sometimes catch future domains that follow the same pattern even after the specific domain changes. Domains also matter because they often show up in multiple places, like email links, browser history, and network lookups, which gives defenders multiple chances to observe them. Still, you should remember that a domain is not automatically malicious just because someone says it is. Prioritization means asking about the source of the claim, how recent it is, and whether you have local evidence that the domain is involved in behavior you actually care about.
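
Here is a deliberately simple Python sketch of pattern-based domain matching, using an invented brand name. Real lookalike detection is far more involved; this only illustrates the idea of catching the naming style rather than one specific domain.

```python
import re

# A toy pattern for phishing domains imitating a hypothetical brand
# "acmepay": the brand name followed by a trust-themed filler word.
LOOKALIKE = re.compile(r"acmepay[-.]?(login|secure|verify|support)\.",
                       re.IGNORECASE)

candidates = [
    "acmepay-login.example.net",   # matches the style
    "acmepay.com",                 # the real service, no filler word
    "acmepaysecure.example.org",   # matches the style
]
for domain in candidates:
    print(domain, bool(LOOKALIKE.search(domain)))
```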

As you climb higher in the pyramid, you start reaching things that are less like single values and more like patterns, sometimes called network artifacts and host artifacts. A network artifact might be a specific pattern in traffic, like an unusual user agent string, a repeating connection rhythm, or a consistent sequence of requests that a tool produces. A host artifact might be something like a particular process behavior, a persistence method, or a suspicious parent-child process relationship. These artifacts are more valuable because they capture the attacker’s behavior, not just a label. If an attacker changes an I P but still uses the same tool, the tool often leaves similar traces. That makes artifact-based detection more durable than simple lists of hashes or I Ps. For beginners, this is where the concept of observables becomes powerful again. You are no longer collecting a single observable and calling it an I O C. You are looking at multiple observables together and recognizing a behavior pattern that is harder for an attacker to change without effort.
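
A toy Python example of a host artifact check might look like the sketch below. The parent-child pairs are illustrative examples of relationships that are rare in normal use, not a vetted detection list.

```python
# Certain parent/child process pairs are rare in normal use and common
# in attack tooling. These entries are illustrative only.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),  # document spawning a shell
    ("w3wp.exe", "cmd.exe"),            # web server spawning a shell
}

def suspicious_spawn(parent: str, child: str) -> bool:
    """Flag a parent/child process pair that matches a known-odd pattern."""
    return (parent.lower(), child.lower()) in SUSPICIOUS_PAIRS

print(suspicious_spawn("WINWORD.EXE", "powershell.exe"))  # True
```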

Higher still, the pyramid often discusses tools. A tool might be a specific piece of malware, a remote access program, a credential theft utility, or an exploitation framework. Blocking or detecting a tool can cause real pain because it can force the attacker to switch methods, retrain, and reconfigure their workflow. However, tool-based detection can still be tricked if you focus only on names or signatures, because attackers can change the appearance of a tool or use a similar one that produces comparable outcomes. The real advantage of thinking at the tool level is that it nudges you toward capability-based reasoning. Instead of obsessing over a specific file, you ask what capability the attacker needed, such as remote control, data staging, or credential access, and then you look for signs of that capability regardless of the exact tool. That mindset also makes your intelligence more reusable. If you learn that a certain campaign uses a tool that performs a certain behavior, you can watch for that behavior even when the tool changes.
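
One way to picture capability-based reasoning is a simple lookup from capabilities to the behaviors they tend to produce, as in this illustrative Python sketch. The entries are examples, not a complete catalog.

```python
# Capability-based reasoning: instead of keying on a tool's name, list
# the capability the attacker needs and the behaviors that capability
# tends to produce, regardless of the exact tool.
CAPABILITIES = {
    "remote control":    ["long-lived outbound connection", "beaconing rhythm"],
    "credential access": ["credential store reads", "unusual auth spikes"],
    "data staging":      ["large archive creation", "bulk file reads"],
}

def behaviors_to_watch(capability: str) -> list[str]:
    """Return the behaviors associated with a needed capability."""
    return CAPABILITIES.get(capability, [])

print(behaviors_to_watch("credential access"))
```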

At the top of the Pyramid of Pain is the level that creates the most friction for attackers: their Tactics, Techniques, and Procedures (T T P). Tactics are the big goals within an attack, like gaining initial access, moving laterally, or exfiltrating data. Techniques are the general methods used to achieve those goals, like phishing, credential dumping, or remote service abuse. Procedures are the specific ways an attacker applies those techniques, including timing, sequencing, and preferred approaches. This is the level where you are no longer playing whack-a-mole with single values. You are learning the attacker’s playbook style. Changing T T P is hard because it affects the attacker’s whole operation and increases their chance of making mistakes. When defenders build detections and response plans around T T P, they tend to be more resilient over time. Even if an attacker rotates infrastructure, the need to accomplish goals remains, and many attackers reuse what works.

So how do indicators and observables connect to the Pyramid of Pain in a practical way. Think of observables as ingredients and indicators as recipes. You might observe an I P, a domain, a process name, and an unusual login time. Any one of those could be nothing, but together they might form an indicator of suspicious remote access behavior. The pyramid then helps you decide what kind of indicator you are building and how durable it will be. If your indicator relies on one I P, it is low on the pyramid and easily evaded. If your indicator relies on a combination of behaviors, like a sequence of authentication events followed by a rare process behavior and unusual outbound traffic, it moves up the pyramid because the attacker must change more than one thing to avoid it. Prioritization means investing more effort into the indicators that sit higher, because they pay you back longer. That does not mean you ignore the bottom. It means you treat the bottom as quick, temporary wins and the top as the strategic layer.
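
Here is a hedged Python sketch of that recipe idea: a composite indicator that fires only when several independent behaviors line up. The event names and the threshold are invented for illustration and would need tuning in any real environment.

```python
def composite_indicator(events: dict[str, bool], threshold: int = 2) -> bool:
    """Fire only when several independent behaviors line up, so an
    attacker must change more than one thing to evade detection."""
    signals = [
        events.get("rare_auth_sequence", False),
        events.get("unusual_child_process", False),
        events.get("unusual_outbound_traffic", False),
    ]
    return sum(signals) >= threshold

# Two of three behaviors observed together: the indicator fires.
print(composite_indicator({"rare_auth_sequence": True,
                           "unusual_child_process": True}))  # True
```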

A common misconception is that higher-level intelligence is always better and lower-level intelligence is always useless. The truth is that lower-level indicators can be extremely valuable in the right moment, especially when you need fast containment. If you confirm that a specific domain is currently delivering a malicious payload to your users, blocking that domain immediately can prevent new victims. If you confirm a specific hash is present on multiple machines, that can speed up scoping and cleanup. The mistake is making the bottom of the pyramid your entire plan. You want a balanced approach where low-level indicators help you move quickly today, while higher-level indicators help you keep working tomorrow without restarting from scratch. This is also where context matters. A small organization with limited monitoring may start with simpler indicators because they are easier to implement mentally, but the long-term goal should always be moving upward toward behaviors and T T P.

When you evaluate threat information, you also need to think about confidence and relevance, because not all intelligence is equally trustworthy or useful. Confidence is about how likely it is that the information is correct, and relevance is about whether it matters in your environment. A high-confidence I O C list about attacks against a platform you do not use is not very relevant. A lower-confidence report that matches what you are seeing locally may be more valuable because it helps you form hypotheses and decide what to look for next. Beginners sometimes assume that intelligence must be perfect before it can be used, but intelligence is often probabilistic. The key is to connect it to decisions. If acting on an item is low risk, like increasing attention to certain behaviors, you can act with lower confidence. If acting on an item is high risk, like blocking a widely used service, you want higher confidence and local confirmation. Prioritization is not just about the pyramid level; it is also about the cost of being wrong.
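
That cost-of-being-wrong logic can be sketched as a tiny Python decision function. The thresholds below are illustrative, not prescriptive; the shape of the rule is what matters.

```python
def should_act(confidence: float, relevant: bool, action_risk: str) -> bool:
    """Tie intelligence to decisions: low-risk actions tolerate lower
    confidence, while high-risk actions demand more certainty."""
    if not relevant:
        return False
    required = {"low": 0.3, "medium": 0.6, "high": 0.9}[action_risk]
    return confidence >= required

# Watching for a behavior is cheap; blocking a shared service is not.
print(should_act(0.4, True, "low"))   # True: increase attention
print(should_act(0.4, True, "high"))  # False: get local confirmation first
```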

Another important beginner skill is learning the difference between detection and attribution, because people often mix them up. Detection is about noticing suspicious or malicious activity, while attribution is about confidently naming who is behind it. The Diamond Model from earlier discussions can support attribution thinking, but you do not need attribution to prioritize good intelligence. You can focus on behaviors and outcomes. If the behavior suggests credential theft, you prioritize protecting accounts, monitoring authentication patterns, and limiting permissions, regardless of which group name someone attaches to it. This is helpful because attribution can be slow and uncertain, and attackers sometimes try to trick defenders by leaving misleading clues. The Pyramid of Pain naturally pushes you toward the kinds of intelligence that matter for defense even when attribution is unclear. If you can disrupt T T P, you have gained defensive value whether or not you can name the adversary.

To make this stick, picture a simple scenario where you hear that a certain campaign uses a specific set of I P addresses. You could rush to block them and feel productive, but that might only stop yesterday’s infrastructure. A more prioritized approach would be to treat the I Ps as starting points and ask what they represent. If they are tied to C 2 traffic, what does that traffic look like, and what does the connection pattern suggest about the tool or technique. If they are tied to phishing, what themes were used, what kinds of pages were hosted, and what user actions were required. Those questions move you up the pyramid, because you are translating low-level indicators into higher-level behaviors. Over time, you build a library of durable signals that help you spot related activity even when the attacker swaps the easy parts. That translation process is what turns raw observables into intelligence you can reuse.

The practical takeaway is that prioritizing intelligence is really about prioritizing your limited attention and your limited ability to respond. You can’t chase everything, and you do not need to. You want a small set of high-value indicators that are relevant to your environment, grounded in observables you can actually see, and aligned to higher levels of the Pyramid of Pain whenever possible. You still keep some quick, low-level items for fast response, but you do not let them become the only thing you do. If you build the habit of asking how easily the attacker can change this and what it would cost them to adapt, you will naturally move toward better signals and better decisions. The decision rule to remember is this: treat observables as raw facts, treat I O C as hypotheses that guide action, and invest most of your defensive effort in signals that force attackers to change behavior rather than just swapping easily replaced values.
