Lessons from a Ransomware Response

This case study is anonymized and composited from multiple real-world ransomware incidents our team has responded to. All identifying details have been changed while preserving technical accuracy and lessons learned.

The Call

Friday, 4:47 PM. The kind of call every CISO dreads.

“Our file servers are down. Users are reporting they can’t access shared drives. IT found a text file on the desktop that says our files are encrypted.”

Ransomware. The organization had 90 minutes before the attackers’ “business hours” ended and communication became significantly more difficult.

The 2025 Regulatory Reality

Before discussing technical response, security leaders need to understand the current regulatory landscape:

  • CIRCIA (USA) – Report significant incidents to CISA within 72 hours and ransom payments within 24 hours
  • NIS2/DORA (EU) – Strict executive liability for operational resilience failures
  • SEC Rules – Public companies must disclose material incidents within 4 business days

This incident triggered CIRCIA reporting obligations. The clock started immediately.

Phase 1: Containment (Hour 0-4)

What Went Right

The organization had practiced tabletop exercises. Within 15 minutes of discovering the attack:

  1. Legal counsel was engaged – Establishing attorney-client privilege before technical vendors arrived
  2. Out-of-band communication established – Assuming Microsoft 365 and Slack were compromised
  3. Crisis management team activated – Pre-defined roles with clear decision authority

What Went Wrong

The IT team’s immediate instinct was to “fix things.” Within 30 minutes of discovery, they had:

  • Re-imaged three infected servers (destroying forensic evidence)
  • Changed domain administrator passwords (alerting attackers to active response)
  • Begun restoring from backups (before understanding attack scope)

Critical mistake: These actions destroyed evidence needed to determine the breach scope and to identify the attackers’ persistence mechanisms.

The Identity-First Containment Approach

Modern incident response has shifted to identity-first containment. Rather than immediately disconnecting networks (which tips off attackers), we:

  1. Revoked all privileged credentials – Service accounts, domain admins, break-glass accounts
  2. Killed active sessions – Invalidated session tokens and Kerberos tickets
  3. Implemented emergency conditional access – Allowed access only from known, clean endpoints

This “soft containment” allowed forensics to proceed while preventing lateral movement.
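
As one concrete illustration, here is a minimal Python sketch of the session-revocation step, assuming a Microsoft Entra ID tenant and a Graph access token obtained through an emergency (break-glass) app registration; the account list and token handling are hypothetical and shown only to make the sequence tangible:

    # Sketch: revoke sign-in sessions for known privileged accounts via Microsoft Graph.
    # Assumes a token with User.RevokeSessions.All; the account list is illustrative.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    ACCESS_TOKEN = "<token-from-emergency-app>"      # placeholder
    PRIVILEGED_ACCOUNTS = [
        "svc-backup@corp.example",
        "da-admin01@corp.example",
        "breakglass01@corp.example",
    ]

    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    for upn in PRIVILEGED_ACCOUNTS:
        # Invalidates refresh tokens and session cookies issued to the account.
        resp = requests.post(f"{GRAPH}/users/{upn}/revokeSignInSessions",
                             headers=headers, timeout=30)
        print(upn, "revoked" if resp.ok else f"failed ({resp.status_code})")

Cloud session revocation does not by itself clear on-premises Kerberos tickets; those still require the credential resets in steps 1 and 2 (and, where golden tickets are a concern, a double KRBTGT reset).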

Edge Device Quarantine

The initial access vector turned out to be a compromised VPN appliance. The attackers had:

  • Exploited a zero-day vulnerability (later assigned CVE-2025-XXXXX)
  • Established persistent access through modified firmware
  • Used the VPN for command and control throughout the attack

Lesson learned: Modern ransomware groups compromise edge security devices first. Isolate perimeter devices during containment.

Phase 2: Forensic Investigation (Hour 4-48)

Memory vs. Disk

The first decision responders face is critical: power the machine off, or capture memory first?

We chose memory capture because:

  • RAM contains decryption keys, process lists, network connections
  • Disk encryption was already complete; minimal risk of further damage
  • Windows memory forensics reveals attacker commands and tools

Using Magnet RAM Capture and Volatility, we identified:

  • Active Cobalt Strike beacon in memory
  • PowerShell commands used for lateral movement
  • Credentials harvested via Mimikatz

Actionable recommendation: Have memory forensics tools pre-deployed and ready. Capturing RAM from 50+ servers while attackers are active requires preparation.
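
A rough Python sketch of what “pre-deployed and ready” can look like: a small wrapper that runs a fixed Volatility 3 triage set against a captured image and writes each plugin’s output to a file for review (the image path and output layout are placeholders):

    # Sketch: run a fixed Volatility 3 triage set against a memory image.
    # Assumes the Volatility 3 CLI ("vol") is installed and on PATH; paths are placeholders.
    import subprocess
    from pathlib import Path

    IMAGE = Path("evidence/FILESRV01.raw")
    OUTDIR = Path("triage/FILESRV01")
    PLUGINS = ["windows.pslist", "windows.netscan", "windows.cmdline", "windows.malfind"]

    OUTDIR.mkdir(parents=True, exist_ok=True)
    for plugin in PLUGINS:
        out = OUTDIR / f"{plugin}.txt"
        with out.open("w") as fh:
            # Each plugin's text output goes to its own file for later review.
            subprocess.run(["vol", "-f", str(IMAGE), plugin], stdout=fh, check=False)
        print(f"{plugin} -> {out}")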

The “Looping” Problem

Ransomware groups frequently maintain persistent access through:

  • Web shells on internet-facing servers
  • Scheduled tasks with encoded PowerShell
  • Registry Run key modifications
  • Compromised service accounts with valid credentials

In this incident, the organization attempted recovery on Sunday. By Monday morning, they were re-encrypted.

The attackers had maintained access through:

  1. Web shell on Exchange server (pre-dating the ransomware by 3 months)
  2. Scheduled task on domain controller executing obfuscated PowerShell
  3. Compromised service account with domain admin privileges

Critical lesson: Never begin recovery until persistent access is eliminated. Forensics must identify all persistence mechanisms first.
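
Before declaring the environment clean, each persistence mechanism listed above has to be hunted for explicitly, host by host. A simplified Python sketch of two of those checks on a Windows host, registry Run keys and scheduled tasks that invoke encoded PowerShell; the suspicious-string patterns are illustrative, not exhaustive:

    # Sketch: hunt two common persistence mechanisms on a Windows host.
    # Covers Run keys and scheduled tasks with encoded PowerShell; patterns are illustrative.
    import subprocess
    import winreg

    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    ]
    SUSPICIOUS = ("-enc", "-encodedcommand", "frombase64string", "downloadstring")

    def check_run_keys():
        for hive, path in RUN_KEYS:
            with winreg.OpenKey(hive, path) as key:
                for i in range(winreg.QueryInfoKey(key)[1]):     # number of values
                    name, value, _ = winreg.EnumValue(key, i)
                    if any(s in str(value).lower() for s in SUSPICIOUS):
                        print(f"[RunKey] {path}\\{name}: {value}")

    def check_scheduled_tasks():
        # Verbose CSV listing of every task, then search for encoded PowerShell.
        listing = subprocess.run(["schtasks", "/query", "/fo", "CSV", "/v"],
                                 capture_output=True, text=True).stdout
        for line in listing.splitlines():
            if any(s in line.lower() for s in SUSPICIOUS):
                print(f"[Task] {line[:200]}")

    if __name__ == "__main__":
        check_run_keys()
        check_scheduled_tasks()

A check like the second one is aimed at exactly the sort of finding described above: the obfuscated scheduled task on the domain controller.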

Communication Protocol Failures

During the incident, we observed:

  • Executive team discussing ransom payment in company Slack (potentially monitored)
  • IT team coordinating via compromised email system
  • Sensitive forensic findings shared over Teams (before verification of security)

Best practice: Establish out-of-band communication immediately. Use pre-arranged Signal groups, personal phone numbers, or separate emergency communication platforms.

Phase 3: Recovery Decisions (Hour 48-72)

The Payment Dilemma

The ransom demand was 50 Bitcoin ($2.1M at time of incident). The organization’s decision process involved:

Financial considerations:

  • Cyber insurance covered ransom payments up to $1M
  • Estimated recovery cost from backups: $3.2M in consulting, downtime, lost revenue
  • Business interruption costs: $500K per day

Legal considerations:

  • OFAC compliance (ensuring ransomware group wasn’t sanctioned entity)
  • CIRCIA 24-hour payment reporting requirement
  • Potential shareholder lawsuits if payment was made unnecessarily

Technical considerations:

  • Backups were 7 days old (RPO issue)
  • Decryptor reliability unknown (ransomware group had mixed reputation)
  • Data exfiltration confirmed (paying wouldn’t prevent data leak)

Decision: The organization chose NOT to pay because:

  1. Backups were viable (though outdated)
  2. The ransomware group was identified as potentially sanctioned (OFAC risk)
  3. Data exfiltration meant payment wouldn’t prevent public disclosure
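
For context, the raw dollars were not what settled the question. A back-of-the-envelope comparison using only the figures above (this is illustrative arithmetic; it ignores the sanctions, exfiltration, and reputational factors that actually drove the decision):

    # Sketch: the raw numbers behind the pay/don't-pay discussion, as listed above.
    ransom = 2_100_000                 # 50 BTC at the time of the incident
    insurance_ransom_cap = 1_000_000   # insurer's ransom coverage
    restore_estimate = 3_200_000       # consulting, downtime, lost revenue via backups
    interruption_per_day = 500_000     # business interruption cost

    pay_out_of_pocket = ransom - insurance_ransom_cap
    print(f"Pay (out of pocket after insurance): ${pay_out_of_pocket:,}")
    print(f"Restore from backups (estimate):     ${restore_estimate:,}")

    # Every additional day of outage moves the balance, whichever path is taken.
    for extra_days in (2, 4, 6):
        print(f"  +{extra_days} days of interruption adds ${extra_days * interruption_per_day:,}")

On paper, paying looked cheaper; the sanctions exposure and the confirmed data theft are what made it the wrong call.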

The Backup Reality Check

Organizations assume backups will save them. Reality:

  • A 7-day RPO meant a week of lost data (acceptable for file shares, catastrophic for databases)
  • Backup admin credentials compromised – Attackers had deleted several backup sets
  • No offline/immutable backups – Everything was accessible from the domain

What saved them:

The organization had implemented Veeam immutable backups 3 months prior as part of SOC 2 preparation. These backups were:

  • Isolated from Active Directory
  • Immutable (cannot be modified or deleted, even by admins)
  • Tested monthly (actual restore drills, not just backup verification)

Recovery took 6 days instead of 6 weeks.

Actionable recommendation: Immutable, offline backups are non-negotiable. Test restoration quarterly with actual fail-over scenarios.
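
A restore drill only proves something if the restored data is verified, not just mounted. A tiny Python sketch of one way to do that, comparing a test restore against a hash manifest captured at backup time (the paths and manifest format are hypothetical conventions, not features of any particular backup product):

    # Sketch: verify a test restore against a hash manifest taken at backup time.
    # Manifest format assumed: one "<sha256>  <relative/path>" entry per line.
    import hashlib
    from pathlib import Path

    RESTORE_ROOT = Path(r"D:\restore-drill\2025-06")      # placeholder restore target
    MANIFEST = Path(r"D:\restore-drill\manifest.sha256")  # placeholder manifest

    def sha256(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    failures = 0
    for line in MANIFEST.read_text().splitlines():
        expected, rel = line.split(maxsplit=1)
        restored = RESTORE_ROOT / rel
        if not restored.exists() or sha256(restored) != expected:
            failures += 1
            print(f"MISMATCH: {rel}")

    print("Drill passed" if failures == 0 else f"Drill failed: {failures} file(s) bad")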

Phased Restoration Strategy

Rather than restoring everything simultaneously, we implemented tiered recovery:

Tier 0 (Day 1-2): Identity and Core Infrastructure

  • New Active Directory forest (clean start)
  • DNS servers
  • Certificate authority
  • Identity provider

Tier 1 (Day 3-4): Critical Applications

  • Email (new Exchange environment)
  • VPN (new infrastructure, patched)
  • Core business applications

Tier 2 (Day 5-6): Standard Operations

  • File servers
  • Collaboration tools
  • Development environments

This approach prevented reinfection by ensuring foundational security before restoring potentially compromised systems.
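
One way to keep that ordering honest in the middle of a chaotic recovery is to encode the plan as data and refuse to bring a system back before everything in the lower tiers is done. A minimal sketch (the tier contents mirror the plan above; the completion status shown is hypothetical):

    # Sketch: enforce tiered recovery ordering; a system may be restored only
    # once every system in all lower tiers has been completed.
    RECOVERY_PLAN = {
        0: ["new AD forest", "DNS", "certificate authority", "identity provider"],
        1: ["email", "VPN (rebuilt, patched)", "core business applications"],
        2: ["file servers", "collaboration tools", "development environments"],
    }

    completed = {"new AD forest", "DNS"}    # hypothetical current status

    def may_restore(system: str) -> bool:
        tier = next(t for t, systems in RECOVERY_PLAN.items() if system in systems)
        prerequisites = [s for t, systems in RECOVERY_PLAN.items() if t < tier for s in systems]
        return all(s in completed for s in prerequisites)

    print(may_restore("certificate authority"))   # True: tier 0 has no prerequisites
    print(may_restore("email"))                   # False: tier 0 is not yet complete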

Phase 4: Post-Incident Analysis (Week 2-4)

Timeline Reconstruction

Forensic analysis revealed the actual timeline:

  • Day -90: Initial access via VPN zero-day
  • Day -87: Web shell installed on Exchange server
  • Day -60: Credentials harvested, lateral movement to domain controllers
  • Day -30: Data exfiltration begins (100GB+ to attacker infrastructure)
  • Day -7: Ransomware binary deployed but not executed
  • Day 0: Ransomware executed across estate

Critical insight: The attackers had 90 days of access before deploying ransomware. Most of that time was spent in reconnaissance and data theft.

What We Missed

Post-incident review identified missed opportunities:

  1. Endpoint alerts dismissed – EDR flagged credential dumping on Day -60, but the alert was closed as a false positive
  2. Unusual data transfers – Firewall logs showed large HTTPS uploads to a rare destination (never investigated)
  3. Off-hours administrative activity – A domain admin account was active at 2 AM, inconsistent with the user’s time zone and location

Every signal was available. None triggered a response.

Lesson: Detection without response is worthless. Organizations need SOC capacity to investigate anomalies, not just collect alerts.
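
The third missed signal is the cheapest to catch. A simplified Python sketch that flags privileged sign-ins outside the account owner’s working hours from an exported sign-in log (the CSV columns, watch list, and working-hours window are assumptions for illustration):

    # Sketch: flag privileged sign-ins outside the account owner's local working hours.
    # Assumes a CSV export with columns: timestamp_utc, account, source_ip, tz_offset_hours.
    import csv
    from datetime import datetime, timedelta

    PRIVILEGED = {"da-admin01", "da-backup", "svc-deploy"}   # illustrative watch list
    WORK_START, WORK_END = 7, 20                             # local hours considered normal

    with open("signin_export.csv", newline="") as fh:
        for row in csv.DictReader(fh):
            account = row["account"].lower()
            if account not in PRIVILEGED:
                continue
            utc = datetime.fromisoformat(row["timestamp_utc"])
            local = utc + timedelta(hours=float(row["tz_offset_hours"]))
            if not (WORK_START <= local.hour < WORK_END):
                print(f"OFF-HOURS ADMIN SIGN-IN: {account} at "
                      f"{local:%Y-%m-%d %H:%M} local from {row['source_ip']}")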

Key Takeaways for Security Leaders

Before an Incident

  1. Immutable backups with offline copies – Air-gapped or immutable storage
  2. Backup restoration drills – Actually practice fail-over scenarios
  3. Out-of-band communication plan – Pre-established Signal/WhatsApp groups
  4. Incident response retainer – Have DFIR firm on speed-dial with pre-negotiated rates
  5. Legal privilege structure – Understand how to invoke attorney-client privilege for forensics
  6. Tabletop exercises – Practice decision-making, not just technical response

During an Incident

  1. Engage legal counsel first – Before technical vendors
  2. Preserve evidence – No re-imaging until forensics complete
  3. Identity-first containment – Revoke credentials before network isolation
  4. Document everything – Every action timestamped for regulatory compliance
  5. Assume attacker visibility – Communicate out-of-band
  6. Don’t race to restore – Eliminate persistence before recovery

After an Incident

  1. Root cause analysis – What signals were missed?
  2. Detection engineering – Build rules for TTPs observed
  3. Architecture changes – Fix systemic issues (not just patch vulnerabilities)
  4. Regulatory reporting – CIRCIA, SEC, breach notification as required
  5. Lessons learned – Document and share (anonymized) internally

The Uncomfortable Truth

This organization had:

  • Modern EDR deployed
  • SOC monitoring 24/7
  • Penetration testing annually
  • Security awareness training quarterly

They still got compromised and encrypted.

Why?

Because security tools generate alerts, but organizations lack the capacity to investigate them. The signal was there. The response wasn’t.

The fix isn’t more tools. It’s:

  • Adequate SOC staffing to investigate anomalies
  • Clear escalation procedures for suspicious activity
  • Executive willingness to disrupt business based on incomplete information
  • Testing of detection and response capabilities (purple teaming)

Final Thoughts

Ransomware incidents are no longer “if” but “when” scenarios. The organizations that survive with minimal damage share common traits:

  1. Preparation – Practiced response, tested backups, pre-arranged resources
  2. Decisiveness – Clear authority to make hard calls quickly
  3. Humility – Assume compromise, verify security, don’t assume existing controls worked
  4. Investment – Willingness to fund proper forensics, recovery, and post-incident hardening

The technical aspects of incident response are well-understood. The challenge is organizational: building cultures that prepare for disaster before it strikes.

The question isn’t whether your organization will face ransomware. It’s whether you’ll be ready when it happens.

Kevin Sutton
https://hiredhackers.com/
Principal Security Consultant with over 30 years of IT and cybersecurity expertise spanning Fortune 100 companies and global enterprises. CISSP since 2003 and CISA since 2005, with deep experience securing critical infrastructure across the Energy, Aviation, Healthcare, Finance, and Retail industries.
