The Most Dangerous Cloud Misconfigurations in 2025

After conducting penetration tests across hundreds of cloud environments in 2025, we’ve identified consistent patterns: the same misconfigurations appear repeatedly, and they’re often trivial to exploit.

Here’s what we’re finding, why it matters, and how to fix it.

The Numbers

Recent research reveals the scope of the problem:

  • 68% of organizations experienced a cloud security incident in the past 12 months
  • 99% of cloud identities are considered over-privileged
  • 31% of S3 buckets are publicly accessible when first discovered
  • 47% of Azure security failures stem from storage misconfigurations
  • 55% of GCP service accounts have credentials older than 12 months
  • $4.88 million average cost of a misconfiguration-related breach
  • 43 misconfigurations per cloud account, on average

These aren’t theoretical risks. They’re active exposures we exploit during authorized penetration tests.

AWS: The Top Exploited Misconfigurations

1. S3 Bucket Exposure

The finding:

31% of S3 buckets are publicly accessible when first discovered. Many contain:

  • Database backups with customer PII
  • Application source code and credentials
  • Internal documentation and architecture diagrams
  • Log files containing session tokens and API keys

Real-world example:

During a recent pentest, we discovered an S3 bucket named [company]-backups exposed publicly. Contents:

  • PostgreSQL database dumps from production
  • Environment files (.env) containing AWS access keys
  • SSH private keys for production servers
  • Customer data in plaintext CSV exports

Total time to discover and download: 12 minutes using automated S3 bucket enumeration.

How to exploit:


# Find exposed buckets (authorized testing only)
aws s3 ls s3://[target-company-name]-backups --no-sign-request

# Download everything
aws s3 sync s3://[bucket-name] ./exfiltrated-data --no-sign-request

How to fix:

  1. Block Public Access at the account level (applies to all buckets)
  2. Enable S3 Block Public Access for the organization (applies to all accounts)
  3. Audit existing buckets using AWS Config or CSPM tools
  4. Implement bucket policies requiring encryption and authenticated access
  5. Use VPC endpoints for S3 access from EC2 (traffic never leaves AWS network)
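Step 1 is a single CLI call. The sketch below assumes the AWS CLI is configured with sufficient permissions; the account ID and bucket name are placeholders:

```shell
# Block public access for every bucket in the account (account ID is a placeholder)
aws s3control put-public-access-block \
  --account-id 111122223333 \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Verify a specific bucket's effective block settings (bucket name is a placeholder)
aws s3api get-public-access-block --bucket example-backups
```

The account-level setting overrides any per-bucket ACL or policy, so it catches buckets created later as well.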

2. IAM Over-Permissioning

The finding:

99% of cloud identities have more permissions than they actually use. Common patterns:

  • EC2 instance roles with AdministratorAccess or PowerUserAccess
  • Service accounts with wildcard (*) permissions
  • IAM users with both console and programmatic access (a least-privilege violation)

Real-world example:

We compromised an EC2 instance via an unpatched web application vulnerability. The instance’s IAM role had the AdministratorAccess policy attached.

From the compromised web server, we:

  1. Enumerated all S3 buckets in the account
  2. Downloaded database backups containing customer data
  3. Created new IAM users for persistent access
  4. Launched cryptocurrency mining instances
  5. Exfiltrated data to external infrastructure

The instance only needed:

  • S3 read access to a single bucket (application assets)
  • SSM Session Manager (for remote administration)

It had full account control instead.
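A least-privilege policy for that workload fits in a few lines. The sketch below uses a hypothetical assets bucket name and would be attached alongside the AWS-managed AmazonSSMManagedInstanceCore policy for Session Manager:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadAppAssets",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-assets",
        "arn:aws:s3:::example-app-assets/*"
      ]
    }
  ]
}
```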

How to fix:

  1. Use IAM Access Analyzer to identify unused permissions
  2. Generate policies from CloudTrail logs (AWS Access Analyzer policy generation)
  3. Remove wildcard permissions ("Resource": "*", "Action": "*")
  4. Implement SCPs (Service Control Policies) at organization level limiting maximum permissions
  5. Rotate credentials regularly (90 days for access keys, enforce via IAM policy)

3. Public EBS Snapshots

The finding:

EBS snapshots containing production databases are frequently shared publicly or with incorrect AWS account IDs.

Real-world example:

Using automated snapshot enumeration, we discovered publicly shared EBS snapshots containing:

  • MySQL databases with plaintext customer PII
  • Redis dumps with cached session tokens
  • Application servers with hardcoded credentials in config files

How to exploit:


# Find public snapshots (authorized testing only)
aws ec2 describe-snapshots --owner-ids [target-account-id] --restorable-by-user-ids all

# Create volume from snapshot
aws ec2 create-volume --snapshot-id [snap-id] --availability-zone us-east-1a

# Attach to attacker-controlled instance
aws ec2 attach-volume --volume-id [vol-id] --instance-id [attacker-instance]

# Mount and extract data
sudo mount /dev/xvdf /mnt

How to fix:

  1. Audit snapshot permissions using AWS Config rule ebs-snapshot-public-restorable-check
  2. Encrypt all snapshots using KMS (CMK, not AWS-managed keys)
  3. Implement automated remediation using Lambda to remove public snapshot permissions
  4. Use SCPs to prevent snapshot sharing outside the organization
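Steps 1 and 3 can be sketched with two CLI calls; snapshot IDs below are placeholders, and the audit query assumes the same enumeration technique shown in the exploit section, scoped to your own account:

```shell
# Audit: list your own snapshots that are restorable by anyone
aws ec2 describe-snapshots --owner-ids self \
  --restorable-by-user-ids all --query 'Snapshots[].SnapshotId'

# Remediate: strip the public createVolumePermission from a snapshot
aws ec2 modify-snapshot-attribute --snapshot-id snap-0123456789abcdef0 \
  --attribute createVolumePermission --operation-type remove --group-names all
```

Wrapping the remediation call in a Lambda triggered by the Config rule automates step 3.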

Azure: The Top Exploited Misconfigurations

1. Storage Account Misconfigurations

The finding:

47% of Azure security failures stem from storage account misconfigurations. Common issues:

  • Public blob containers with sensitive data
  • Storage accounts accessible without authentication
  • Unrestricted network access (allowing 0.0.0.0/0)

Recent critical vulnerability:

CVE-2025-55241 (CVSS 10.0) demonstrates the severity of Azure platform vulnerabilities. Privilege escalation via Azure AD Graph API allowed attackers to gain tenant-wide control.

How to fix:

  1. Disable anonymous blob access at storage account level
  2. Implement network restrictions using Azure Private Link or Service Endpoints
  3. Enable Azure Defender for Storage (threat detection)
  4. Require authentication using Azure AD (Entra ID) or SAS tokens with expiration
  5. Audit access using Storage Analytics logging
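Steps 1 and 2 map directly to the Azure CLI. The sketch below uses placeholder storage account, resource group, VNet, and subnet names, and assumes the subnet already has the Microsoft.Storage service endpoint enabled:

```shell
# Step 1: disable anonymous blob access at the account level
az storage account update \
  --name examplestorage --resource-group example-rg \
  --allow-blob-public-access false

# Step 2: default-deny network access, then allow a specific subnet
az storage account update \
  --name examplestorage --resource-group example-rg \
  --default-action Deny
az storage account network-rule add \
  --account-name examplestorage --resource-group example-rg \
  --vnet-name example-vnet --subnet app-subnet
```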

2. Entra ID (Azure AD) Identity Risks

The finding:

40% of Azure AD applications have keys or secrets active for more than 12 months. Many applications have credentials that never expire.

Real-world example:

During external penetration testing, we discovered hardcoded Azure AD application credentials in a public GitHub repository.

Using those credentials, we:

  1. Authenticated to Azure AD as the application
  2. Accessed Microsoft Graph API with delegated permissions
  3. Enumerated all users, groups, and directory roles
  4. Accessed SharePoint sites and OneDrive files
  5. Read emails via Microsoft Graph Mail API

The credentials were 3 years old and had never been rotated.

How to fix:

  1. Implement 90-day credential rotation for service principals
  2. Use managed identities instead of service principals where possible (eliminates credentials entirely)
  3. Audit application permissions using Azure AD Access Reviews
  4. Enable Conditional Access requiring device compliance for application access
  5. Monitor for suspicious sign-ins using Azure AD Identity Protection
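Step 2 is the highest-leverage fix, since a managed identity has no secret to leak to GitHub in the first place. A sketch for a VM workload, with placeholder resource names and a hypothetical role assignment:

```shell
# Enable a system-assigned managed identity on the VM
az vm identity assign --resource-group example-rg --name app-vm-01

# Grant the identity only the scope it needs (role and scope are placeholders)
principal_id=$(az vm show --resource-group example-rg --name app-vm-01 \
  --query identity.principalId --output tsv)
az role assignment create --assignee "$principal_id" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/example-rg"
```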

3. Network Security Group Misconfigurations

The finding:

Management protocols (RDP, SSH, WinRM) exposed to the internet (0.0.0.0/0) appear in virtually every Azure penetration test.

How to fix:

  1. Remove 0.0.0.0/0 from NSG rules for management protocols
  2. Use Azure Bastion for secure RDP/SSH access (no public IPs required)
  3. Implement Just-In-Time VM Access (Azure Defender feature)
  4. Require VPN or private connectivity for administrative access
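Step 1 can be audited and remediated from the CLI. The sketch below uses placeholder resource and rule names; the JMESPath query flags inbound allow rules open to any source:

```shell
# Find inbound rules allowing traffic from any source address
az network nsg rule list --resource-group example-rg --nsg-name example-nsg \
  --query "[?direction=='Inbound' && access=='Allow' && (sourceAddressPrefix=='*' || sourceAddressPrefix=='0.0.0.0/0')].{name:name, port:destinationPortRange}" \
  --output table

# Remove an offending rule (rule name is a placeholder)
az network nsg rule delete --resource-group example-rg \
  --nsg-name example-nsg --name allow-rdp-any
```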

GCP: The Top Exploited Misconfigurations

1. Service Account Credential Neglect

The finding:

55% of GCP service accounts have credentials not rotated in over 12 months. Many are stored in:

  • Source code repositories
  • CI/CD pipeline environment variables
  • Developer workstations (accidentally committed)

How to fix:

  1. Rotate service account keys every 90 days (automated via Cloud Scheduler)
  2. Use Workload Identity for GKE (eliminates need for service account keys)
  3. Implement short-lived credentials using Cloud IAM Service Account Credentials API
  4. Audit key age using Security Command Center
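The core of a step 4 audit is simple timestamp arithmetic. The sketch below hardcodes a placeholder creation time; in practice it would come from `gcloud iam service-accounts keys list --format=json` (the `validAfterTime` field), and it assumes GNU date:

```shell
# Placeholder key creation timestamp (would come from gcloud output)
key_created="2024-01-15T00:00:00Z"

# Compute key age in days and flag anything past the 90-day rotation window
now_s=$(date -u +%s)
created_s=$(date -u -d "$key_created" +%s)
age_days=$(( (now_s - created_s) / 86400 ))
if [ "$age_days" -gt 90 ]; then
  echo "ROTATE: key is ${age_days} days old"
fi
```

Looping this over every key in every project, on a Cloud Scheduler cadence, turns it into the automated audit described above.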

2. Default Service Account Over-Permissioning

The finding:

Google Compute Engine instances use the Compute Engine default service account by default, which has the Editor role on the project.

This means:

  • Any compromise of a GCE instance = compromise of the entire project
  • Attackers can create new instances, access Cloud Storage, modify databases

How to fix:

  1. Create custom service accounts with least-privilege permissions
  2. Remove Editor role from default service account
  3. Disable automatic key creation for default service account
  4. Use Workload Identity for GKE workloads
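Steps 1 and 2 can be sketched with gcloud; the project ID, account name, and role below are placeholders chosen for illustration:

```shell
# Create a dedicated service account for the workload
gcloud iam service-accounts create app-runtime \
  --project=example-project --display-name="App runtime (least privilege)"

# Grant only the permissions the workload actually needs
gcloud projects add-iam-policy-binding example-project \
  --member="serviceAccount:app-runtime@example-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Launch instances with the custom account instead of the Editor-holding default
gcloud compute instances create web-1 --project=example-project \
  --zone=us-central1-a \
  --service-account=app-runtime@example-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```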

3. Legacy IAM Primitive Roles

The finding:

Many organizations still use primitive roles (roles/owner, roles/editor, roles/viewer), which Google now calls basic roles, instead of predefined or custom roles.

Primitive roles grant excessive permissions across all GCP services.

How to fix:

  1. Migrate to predefined roles (e.g., roles/storage.objectViewer instead of roles/viewer)
  2. Create custom roles for organization-specific needs
  3. Audit IAM bindings using Cloud Asset Inventory
  4. Implement Organization Policy constraints preventing use of primitive roles
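Step 1 is a swap of IAM bindings. The sketch below replaces a primitive role with the narrower predefined role it was actually being used for; the project ID and principal are placeholders:

```shell
# Remove the broad primitive role binding
gcloud projects remove-iam-policy-binding example-project \
  --member="user:dev@example.com" --role="roles/viewer"

# Grant the predefined role that matches actual usage
gcloud projects add-iam-policy-binding example-project \
  --member="user:dev@example.com" --role="roles/storage.objectViewer"
```

Cloud Asset Inventory (step 3) can enumerate every binding on a primitive role first, so migrations are driven by real usage rather than guesswork.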

Multi-Cloud Hardening Checklist

Immediate Actions (This Week)

  • [ ] Enable MFA on root/privileged accounts across AWS, Azure, GCP
  • [ ] Block public access to object storage at account/subscription/organization level
  • [ ] Enable audit logging (CloudTrail, Azure Activity Log, Cloud Audit Logs) in all regions
  • [ ] Enable threat detection (GuardDuty, Defender for Cloud, Security Command Center)

This Month

  • [ ] Inventory all cloud identities and service accounts
  • [ ] Rotate credentials older than 90 days
  • [ ] Remove 0.0.0.0/0 from all security groups and NSG rules
  • [ ] Audit storage account/bucket permissions

This Quarter

  • [ ] Implement least-privilege IAM policies based on actual usage
  • [ ] Enable CIS Benchmark monitoring across all cloud environments
  • [ ] Deploy Cloud Security Posture Management (CSPM) tool
  • [ ] Conduct purple team exercise validating cloud detection capabilities

The Bottom Line

Cloud misconfigurations are the easiest path to compromise in modern environments. They require minimal skill to exploit and provide extensive access when successful.

The good news: They’re also the easiest to fix.

Organizations that invest in:

  • Automated misconfiguration detection (CSPM tools)
  • Least-privilege identity management
  • Regular credential rotation
  • Defense-in-depth network architecture

…are substantially more resilient than those relying on default cloud configurations.

The adversaries are using automated tools to discover these misconfigurations at scale. Your defenses should be equally automated.

Scott Sailors
https://www.hiredhackers.com
Principal Security Consultant with over 20 years of experience in security architecture, engineering, and executive leadership. Holds CISSP, OSCP, CISM, and CRISC certifications, along with Master's and Bachelor's degrees in Cybersecurity, and specializes in bridging technical teams and senior management to communicate complex security challenges in actionable terms.
