Thinking Like an Adversary: Using ATT&CK to Build a Threat Hunt Scope

7 May 2026 | 9 min read | justruss.tech

Most threat hunting programmes start with the data. An analyst opens a SIEM, writes a query based on something they read in a blog post or a vendor report, and checks whether anything matches. This approach finds things occasionally but misses systematically. The gaps are not random; they follow directly from the limits of the analyst’s own visibility and imagination. Thinking like an adversary before you open the SIEM changes the starting point from “what data do I have” to “what would someone who wants to be in this environment actually do”, which produces a materially different and more complete set of things to look for.

The adversary’s decision framework

An attacker targeting a specific organisation is solving an optimisation problem. They want maximum access with minimum exposure, minimum noise, and minimum time. Every technique they choose is a trade-off between effectiveness and detectability. Understanding that trade-off is the foundation of adversary-centric thinking.

When an attacker has initial access to a network, they face a sequence of decisions. Do they move immediately or wait and observe? Do they use existing tools on the machine or bring their own? Do they escalate privileges now or operate within current permissions? Do they hit their objective directly or build a more stable foothold first? Each of these decisions has a detection implication. An attacker who moves immediately generates a compressed timeline of events that is easier to spot. An attacker who waits and moves slowly looks more like background noise. An attacker who uses only built-in tools has no malware to detect. An attacker who brings their own tools has file hashes and process names that can be matched against signatures.

The question to ask about any technique is: if I were trying to achieve this objective in this environment while avoiding detection, which approach would I choose and why? The answer tells you which techniques to prioritise in your hunting programme.

Mapping attacker objectives to MITRE ATT&CK

MITRE ATT&CK organises attacker behaviour into a taxonomy of tactics (the why) and techniques (the how). The tactics represent the sequential phases of an intrusion: initial access, execution, persistence, privilege escalation, defence evasion, credential access, discovery, lateral movement, collection, command and control, exfiltration, and impact. A real intrusion does not follow this sequence rigidly, but the tactics represent the objectives an attacker needs to achieve and the rough order in which they become relevant.

The adversary mindset approach uses ATT&CK not as a detection checklist but as a map of an attacker’s objective space. For each tactic, ask: given what I know about my environment and the threat actors most likely to target it, which techniques under this tactic would they actually choose? The answer is never “all of them”. An attacker targeting a financial services organisation for credential theft and fraud has a different technique selection than an APT targeting the same organisation for espionage. The techniques they choose reflect their objectives, their operational security requirements, and what they know about the defensive environment they are operating in.

Building an environment-specific attacker profile

Before translating attacker thinking into hunt hypotheses, you need a realistic picture of what an attacker in your specific environment would look like. This means understanding three things: what your environment looks like from the outside, what it looks like from the inside once access is achieved, and what your current detection capability covers and does not cover.

From the outside, your environment looks like a set of exposed services (email, VPN, web applications, cloud portals), a set of employees whose credentials and employment details are partially public, and a set of known software vendors whose products you use. Each of these is a potential initial access vector. Phishing is the most common for a reason: it requires no technical exploit, it scales easily, and the success rate against employees who are not specifically trained to resist it is high enough to be reliable.

From the inside, once an attacker has a foothold, your environment looks like a network topology, a set of Active Directory objects with permissions, a set of credentials cached on endpoints, and a set of services that can be reached from a compromised workstation. The attacker’s objective from this position is to understand what they have access to, find the path to whatever they actually want, and get there without triggering your detections along the way.

Generating a hunt scope from adversary decision trees

The practical output of this thinking is a prioritised list of hunt hypotheses tied to specific ATT&CK techniques. Here is how to build one for a realistic scenario: a financial services organisation targeted by a financially motivated threat actor group.

Start at initial access. The most likely techniques for this profile are phishing (T1566.001 and T1566.002), valid accounts obtained through credential stuffing or purchase (T1078), and exploitation of public-facing applications if any are running known-vulnerable software (T1190). Of these, phishing and valid accounts are most likely for a financially motivated actor who prioritises reliability over sophistication.

From a phishing-obtained foothold, the attacker’s immediate priority is execution and establishing persistence before the phishing email is reported and the initial access closes. This drives them toward techniques that work quickly and do not require additional downloads: PowerShell execution (T1059.001) via a macro or link in the phishing document, registry run key persistence (T1547.001), and possibly scheduled task creation (T1053.005). The persistence technique choice is driven by what survives a reboot and what looks least anomalous in your specific environment.
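A hunt for the run-key persistence choice above can be sketched as a review of Sysmon Event ID 13 registry writes, flagging writes to Run/RunOnce keys from processes outside a known-good allowlist. This is a minimal sketch: the field names (`image`, `target`) are assumptions about how the events were exported from your SIEM, and the allowlist here is illustrative, not a recommendation.

```python
# Sketch: flag registry run-key writes (T1547.001) from unexpected processes,
# using Sysmon Event ID 13 records exported from the SIEM.
# Assumed fields: 'image' (writing process path), 'target' (registry path).
RUN_KEY_FRAGMENTS = (
    r"\currentversion\run",      # also matches \currentversion\runonce
)
KNOWN_INSTALLERS = {"msiexec.exe", "setup.exe"}  # illustrative allowlist

def unexpected_runkey_writes(events):
    """Return events where a non-allowlisted process wrote to a run key."""
    hits = []
    for e in events:
        tgt = e["target"].lower()
        if not any(frag in tgt for frag in RUN_KEY_FRAGMENTS):
            continue  # not a run-key write
        proc = e["image"].rsplit("\\", 1)[-1].lower()
        if proc not in KNOWN_INSTALLERS:
            hits.append(e)
    return hits
```

In practice the allowlist is the hard part: it has to be built from your own baseline of legitimate software-deployment activity, which is itself a useful hunting exercise.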

With stable access, discovery is next. An attacker in an unfamiliar environment runs enumeration: account discovery (T1087), network scanning to find domain controllers and file servers (T1046), and Active Directory enumeration to understand the permission structure (T1069). The tools for this range from built-in Windows commands (net, nltest, dsquery) to purpose-built tools (BloodHound, ADExplorer) depending on the attacker’s operational security discipline.
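The discovery phase has a distinctive shape in the logs: several different enumeration binaries run on the same host in a short burst. A minimal sketch of that hunt over exported process-creation events (4688 or Sysmon Event ID 1) might look like the following; the field names and the ten-minute window are assumptions to tune against your own data.

```python
# Sketch: flag hosts where several distinct built-in recon binaries run
# within a short sliding window, from exported process-creation events.
# Assumed fields: 'time' (datetime), 'host', 'process'.
from datetime import datetime, timedelta

RECON_BINARIES = {"net.exe", "nltest.exe", "dsquery.exe", "whoami.exe"}

def enumeration_bursts(events, window=timedelta(minutes=10), threshold=3):
    """Return hosts running >= threshold distinct recon binaries within window."""
    hits = set()
    by_host = {}
    for e in sorted(events, key=lambda e: e["time"]):
        if e["process"].lower() not in RECON_BINARIES:
            continue
        runs = by_host.setdefault(e["host"], [])
        runs.append(e)
        # drop events that have fallen out of the sliding window
        runs[:] = [r for r in runs if e["time"] - r["time"] <= window]
        if len({r["process"].lower() for r in runs}) >= threshold:
            hits.add(e["host"])
    return hits
```

Administrators run these binaries too, so the value is in the burst pattern and the host context, not any single execution.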

From the adversary’s perspective, credential access is the highest-priority objective in an Active Directory environment because credentials unlock lateral movement without requiring exploits. LSASS dumping (T1003.001) is the most direct approach. Kerberoasting (T1558.003) is preferred by attackers with higher operational security awareness because it requires no elevated privileges, never touches a heavily monitored process like LSASS, and its service ticket requests blend into ordinary Kerberos traffic. AS-REP roasting (T1558.004) targets accounts misconfigured to skip pre-authentication and requires no credentials at all, only a list of account names.
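The classic Kerberoasting indicator is a burst of RC4-encrypted service ticket requests (Event ID 4769, ticket encryption type 0x17) from a single account, and a sketch of that hunt is below. The field names are assumptions about the SIEM export, and as the coverage table later notes, attackers can request AES tickets too, so this filter alone is a starting point rather than complete coverage.

```python
# Sketch: flag possible Kerberoasting from exported 4769 TGS-request events.
# Assumed fields: 'account' (requester), 'service' (target SPN account),
# 'enc_type' (hex string, '0x17' = RC4-HMAC).
RC4_HMAC = "0x17"

def possible_kerberoast(events, per_user_threshold=5):
    """Return accounts requesting many RC4 service tickets.

    krbtgt and machine accounts (trailing '$') are excluded as noise.
    """
    counts = {}
    for e in events:
        svc = e["service"]
        if svc.lower() == "krbtgt" or svc.endswith("$"):
            continue
        if e["enc_type"].lower() != RC4_HMAC:
            continue
        counts[e["account"]] = counts.get(e["account"], 0) + 1
    return {acct for acct, n in counts.items() if n >= per_user_threshold}
```

The per-user threshold is the tuning knob: one RC4 request can be a legacy application, but one account requesting tickets for many distinct SPNs in a short period rarely is.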

The detection coverage gap analysis

The adversary mindset exercise produces real value when it is crossed against your actual detection coverage. For each technique in the hypothetical attack chain, ask whether your current monitoring would detect it, how quickly, and how reliably. The gaps in that analysis are your hunt priorities.

## ATT&CK coverage assessment - map techniques to your detection coverage
## For each technique: DETECTED / PARTIAL / BLIND

## Initial Access
T1566.001  Spearphishing Attachment     PARTIAL  # Email gateway blocks known bad
                                                  # but not all payloads
T1566.002  Spearphishing Link           PARTIAL  # URL rewriting catches some
T1078      Valid Accounts               BLIND    # No anomalous login detection
T1190      Exploit Public App           DETECTED # WAF and vuln scanning

## Execution
T1059.001  PowerShell                   PARTIAL  # Script block logging enabled
                                                  # but not on all endpoints
T1059.003  Windows Command Shell        PARTIAL  # 4688 enabled but no cmdline
T1047      Windows Management Inst.     BLIND    # No WMI activity logging

## Persistence
T1547.001  Registry Run Keys            DETECTED # Sysmon Event ID 13
T1053.005  Scheduled Tasks              DETECTED # Event ID 4698 monitored
T1546.003  WMI Event Subscription       BLIND    # Not checked in IR process

## Privilege Escalation
T1134      Access Token Manipulation    BLIND
T1068      Exploitation for PrivEsc     PARTIAL  # EDR detects some exploits

## Credential Access
T1003.001  LSASS Memory                 DETECTED # Sysmon Event ID 10 configured
T1558.003  Kerberoasting                PARTIAL  # 4769 monitored for RC4 only
                                                  # AES variant missed
T1558.004  AS-REP Roasting              BLIND    # 4768 preauth not monitored

## Lateral Movement
T1021.001  Remote Desktop Protocol      PARTIAL  # Login events captured
T1021.002  SMB/Windows Admin Shares     PARTIAL
T1047      WMI                          BLIND

## PRIORITY ORDER (BLIND techniques in most-likely-used order):
## 1. T1078  Valid account anomaly detection
## 2. T1558.004  AS-REP roasting monitoring (4768 preauth type 0)
## 3. T1546.003  WMI event subscriptions
## 4. T1558.003  Kerberoasting AES variant (not just RC4)
## 5. T1047  WMI execution monitoring

Translating the gap analysis into hunt hypotheses

Each blind spot in the coverage analysis becomes a hunt hypothesis. The hypotheses are specific enough to test and tied to the realistic attacker behaviour identified in the decision tree analysis, rather than being generic searches for all possible techniques.

For T1078 (valid accounts): “A threat actor who purchased or phished credentials for this environment would log in from an IP address with no prior authentication history to this tenant, during hours inconsistent with the account owner’s normal pattern, and would immediately begin enumeration activity within the first 30 minutes of the session.” This hypothesis is testable: look for sign-ins from new IP addresses followed by Event ID 4662 directory service access events or net.exe execution within 30 minutes, filtered against known administrative accounts that legitimately log in from varied locations.
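Joining sign-ins against an IP baseline and a follow-on enumeration window can be sketched as below. This is a minimal illustration of the hypothesis, not an implementation: the data shapes are assumptions, and in practice the baseline comes from your historical authentication logs and the enumeration events from whichever sources (4662, process creation) you actually collect.

```python
# Sketch of the T1078 hunt: sign-in from a previously unseen IP followed
# by enumeration activity within a 30-minute window.
# Assumed shapes: signins/enum_events are dicts with the fields shown;
# baseline_ips maps account -> set of historically seen IPs.
from datetime import datetime, timedelta

def suspicious_new_ip_sessions(signins, baseline_ips, enum_events,
                               window=timedelta(minutes=30)):
    """Return accounts with a new-IP sign-in followed by enumeration in window."""
    flagged = set()
    for s in signins:
        if s["ip"] in baseline_ips.get(s["account"], set()):
            continue  # known location for this account, skip
        for e in enum_events:
            if (e["account"] == s["account"]
                    and timedelta(0) <= e["time"] - s["time"] <= window):
                flagged.add(s["account"])
                break
    return flagged
```

The filter for legitimately roaming administrative accounts mentioned above would be one more exclusion set applied before flagging.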

For T1558.004 (AS-REP roasting): “An attacker performing reconnaissance against this environment would send authentication requests for accounts that do not require Kerberos pre-authentication, harvesting AS-REP responses that can be cracked offline and generating Event ID 4768 with Pre-Authentication Type 0. Because this technique requires no valid credentials, it may occur from IP addresses not otherwise seen in authentication logs.” This is testable directly: query Event ID 4768 for PreAuthType=0 over the past 30 days and look for any results not matching known automation accounts.
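Because this hypothesis reduces to a single filter, the hunt itself is short. A sketch over exported 4768 events, with assumed field names, might be:

```python
# Sketch of the T1558.004 hunt: 4768 events with Pre-Authentication Type 0,
# excluding a reviewed set of known automation accounts.
# Assumed fields: 'account', 'ip', 'preauth_type' (int).
def asrep_candidates(events, known_automation=frozenset()):
    """Return (account, ip) pairs needing analyst review."""
    return {(e["account"], e["ip"])
            for e in events
            if e["preauth_type"] == 0
            and e["account"].lower() not in known_automation}
```

Any result that survives the automation-account exclusion is worth an immediate look, since there are very few legitimate reasons for an account to have pre-authentication disabled.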

The hunting cadence this produces

The adversary mindset exercise is not a one-time activity. Threat actor behaviour evolves. New techniques become available. Your environment changes. The coverage gap analysis should be revisited quarterly at minimum, and immediately after any significant change to the environment (a new cloud service, a merger, a major software deployment) or after any intelligence about active campaigns targeting your sector.

The output of this process is a living hunt plan: a prioritised list of hypotheses tied to realistic attacker behaviour in your specific environment, ordered by likelihood and impact, with the current coverage status documented alongside each one. Working through that list systematically, one hypothesis per hunt cycle, is how a threat hunting programme builds genuine coverage over time rather than repeatedly hunting the same well-documented techniques while the real blind spots go unchecked.

The discipline of asking “what would they actually do, and would I see it” before touching the SIEM is what separates hunting that finds sophisticated actors from hunting that finds the noise that your automated detection already knew about.