AWS Cost Optimization: 5 Proven Ways to Reduce Your Cloud Bill by 32%
Learn 5 actionable patterns to identify cloud waste and reduce your monthly bill by up to 32%.
32% of your cloud spend is probably going straight into the trash. That's not speculation—it's what Flexera's research found across thousands of organizations. For a company spending $100K monthly on AWS, that's $32K wasted every month.
The frustrating part? Most of this waste hides in plain sight. It's not rogue developers spinning up massive instances—it's the quiet accumulation of forgotten resources, oversized defaults, and commitments that no longer match reality.
Here are the five patterns bleeding your AWS accounts dry, with real numbers and actionable fixes.
1. Zombie Instances: The Undead of Your Cloud
The pattern: EC2 instances that technically run but serve no real purpose. Dev environments abandoned after a feature shipped. Staging servers for projects that wrapped up months ago. Test instances that someone "meant to shut down."
The damage: Research shows that idle or stopped resources account for 10-15% of monthly AWS bills across typical organizations. One analysis found that stopping instances for just 12 hours per day cuts their cost nearly in half.
Real-world example: A team running 100 test VMs in AWS achieved a 65% cost reduction simply by shutting them down outside business hours. If your team works 9-to-5, that's 128 hours of idle time per week you're paying for—per instance.
How to spot them:
Instances with near-zero CPU utilization over 7-14 days
Resources tagged as "temp," "test," or "dev" running for months
Instances in regions where your team no longer operates
The fix: Implement automated scheduling. Use an AWS Lambda function triggered on a cron schedule by Amazon EventBridge (formerly CloudWatch Events) to shut down non-production instances outside business hours. Better yet, adopt infrastructure-as-code and spin up dev environments on demand rather than keeping them running.
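A minimal sketch of what that scheduled Lambda might look like, using boto3. The Environment tag key and its values are assumptions; swap in whatever convention marks your non-production instances.

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances tagged as non-production
    # (the tag key and values here are placeholders)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev", "staging", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]

    if instance_ids:
        # Stop, don't terminate, so the instances can come back in the morning
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```

Pair it with a second EventBridge rule that calls start_instances before work hours, or skip the restart entirely for environments you can rebuild from infrastructure-as-code.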
MilkStraw tip: MilkStraw automatically detects idle and outdated EC2 instances across all your AWS accounts—no manual hunting required. Check out our EC2 Cost Optimization guide for safe remediation steps.
2. Overprovisioned Compute: Paying for Muscle You Never Flex
The pattern: Instances sized for peak load that only hits 5% of the time. Teams defaulting to larger instance types "just in case." The m5.xlarge that could easily be an m5.large.
The damage: Studies show 30% of EC2 instances in typical organizations are significantly oversized for their actual workloads. In one detailed analysis, 70-80% of instances were oversized by at least one size class. Over-provisioned compute contributes 10-12% of total cloud waste.
Why it happens: Fear of performance issues. Easier to ask for forgiveness than permission. No visibility into actual resource utilization. And the classic: "The default seemed reasonable."
How to spot it:
CPU utilization averaging below 20%
Memory utilization consistently under 40%
Network I/O far below instance capacity
The fix: Use AWS Compute Optimizer to get right-sizing recommendations based on actual utilization data. Start with your largest instances—a single right-sizing from r5.2xlarge to r5.xlarge saves over $1,200 annually per instance. Modern auto-scaling also means you don't need to provision for peak—let the infrastructure flex with demand.
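If you'd rather pull those recommendations programmatically than click through the console, Compute Optimizer exposes them through its API. A rough boto3 sketch, assuming Compute Optimizer is already enabled for the account:

```python
import boto3

co = boto3.client("compute-optimizer")

# Page through EC2 right-sizing recommendations and print the top suggestion
next_token = None
while True:
    kwargs = {"nextToken": next_token} if next_token else {}
    resp = co.get_ec2_instance_recommendations(**kwargs)
    for rec in resp["instanceRecommendations"]:
        options = rec.get("recommendationOptions", [])
        if options:
            print(
                f"{rec['instanceArn']}: {rec['finding']} "
                f"current={rec['currentInstanceType']} "
                f"suggested={options[0]['instanceType']}"
            )
    next_token = resp.get("nextToken")
    if not next_token:
        break
```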
3. Orphaned Storage: The Digital Junk Drawer
The pattern: EBS volumes disconnected from terminated instances. Snapshots of volumes that no longer exist. Old AMIs nobody uses but everyone's afraid to delete. The storage equivalent of "I might need this someday."
The damage: Orphaned storage artifacts add 3-6% of avoidable spend. Unattached EBS volumes cost $0.05-$0.10 per GB-month. That adds up: one documented case showed $1,200/month just for EBS snapshots that had accumulated over 4 years—over 100 snapshots at 300GB each.
The sneaky part: When you terminate an EC2 instance, the associated EBS volumes don't automatically delete unless you specifically enable "Delete on Termination." And when you deregister an AMI? The underlying snapshots stay behind, silently billing you.
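For instances that are already running, the flag can be flipped without a restart. A small boto3 sketch; the instance ID and device name are placeholders, so check the actual root device name (often /dev/xvda or /dev/sda1) before running it.

```python
import boto3

ec2 = boto3.client("ec2")

# Enable DeleteOnTermination for the root volume of an existing instance
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",   # placeholder device name
            "Ebs": {"DeleteOnTermination": True},
        }
    ],
)
```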
How to spot it:
EBS volumes in "available" state (not attached to any instance)
Snapshots referencing volume IDs that no longer exist
AMIs with creation dates over 6-12 months old
The fix:
Enable "Delete on Termination" as a standard for ephemeral workloads
Implement lifecycle policies that delete snapshots older than 90 days (unless compliance requires otherwise)
Run monthly audits using aws ec2 describe-volumes --filters Name=status,Values=available (a boto3 version is sketched after this list)
Move infrequently accessed snapshots to the archive tier for a 75% cost reduction
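Here's a rough boto3 version of that monthly audit. It only prints candidates; the 90-day cutoff is an assumption to align with your retention policy, and nothing here deletes anything.

```python
import boto3
from datetime import datetime, timezone, timedelta

ec2 = boto3.client("ec2")

# Unattached volumes: anything in the "available" state has no instance
orphaned = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]
for v in orphaned:
    print(f"Unattached: {v['VolumeId']} ({v['Size']} GB, created {v['CreateTime']})")

# Self-owned snapshots older than 90 days (review before deleting anything)
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
for s in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
    if s["StartTime"] < cutoff:
        print(f"Old snapshot: {s['SnapshotId']} from {s['StartTime']:%Y-%m-%d}")
```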
MilkStraw tip: MilkStraw automatically detects unused EBS volumes and orphaned snapshots across your accounts. Our EBS cleanup guide walks you through safe deletion—including how to verify volumes are truly orphaned and create backups before removal.
4. Legacy Storage Types: Paying Premium for Yesterday's Technology
The pattern: Still running gp2 volumes when gp3 exists. Using S3 Standard for archival data that hasn't been accessed in years. Provisioned IOPS on databases that never burst.
The damage: gp3 launched in 2020 with 20% lower base cost AND better performance than gp2. Yet one analysis found over 60% of volumes were still running gp2. The same applies to S3: Standard storage costs $0.023/GB while Glacier Deep Archive costs $0.00099/GB—a 95% reduction for data you're legally required to keep but never access.
Why it persists: "It works, why change it?" No automated migration path. Teams don't know better options exist. Change requires testing that nobody prioritizes.
How to spot it:
Any gp2 volume that isn't hitting IOPS limits
S3 buckets with objects not accessed in 90+ days
Provisioned IOPS volumes averaging well below provisioned capacity
The fix:
Migrate gp2 volumes to gp3; it's a zero-downtime operation (see the sketch after this list)
Implement S3 Intelligent-Tiering or lifecycle policies for automatic tier transitions
Review provisioned IOPS quarterly against actual CloudWatch metrics
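The gp2-to-gp3 conversion is a single modify_volume call per volume. A minimal boto3 sketch; it keeps gp3's default baseline of 3,000 IOPS and 125 MB/s, so review any large gp2 volumes that depend on higher burst performance before converting them.

```python
import boto3

ec2 = boto3.client("ec2")

# Find gp2 volumes and request an in-place conversion to gp3.
# The modification runs while the volume stays attached.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
)["Volumes"]

for v in volumes:
    print(f"Converting {v['VolumeId']} ({v['Size']} GB) to gp3")
    ec2.modify_volume(VolumeId=v["VolumeId"], VolumeType="gp3")
```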
MilkStraw tip: MilkStraw scans for outdated resource types across EC2, EBS, RDS, ElastiCache, and OpenSearch. We flag legacy configurations and provide step-by-step migration guides for zero-downtime upgrades.
5. Commitment Misalignment: The Forecasting Trap
The pattern: AWS Savings Plans or Reserved Instances purchased based on last year's architecture. Commitments for instance types you've since migrated away from. Over-committing "to be safe" and watching utilization drop to 70%.
The damage: 7 out of 10 companies over-commit due to unexpected infrastructure changes. If you commit to $70/hour in Savings Plans but only use $60, that $10/hour gap is roughly $7,300 wasted every month, or $87,600 a year. And unlike on-demand, unused commitments don't roll over. If you commit to $10/hour and only use $8 in a given hour, that $2 is gone forever.
The challenge: Modern cloud environments don't behave predictably. Autoscaling, spot usage, and frequent deployments mean usage shifts daily. The architecture you commit to today may not exist in six months.
How to spot it:
Savings Plan utilization below 90% in Cost Explorer
Reserved Instances showing as "underutilized" in Trusted Advisor
Large gaps between committed spend and actual compute spend
Traditional fixes:
Monitor hourly utilization, not just monthly averages, since AWS applies commitments hour by hour (a daily utilization pull via the Cost Explorer API is sketched after this list)
Sell unused EC2 Reserved Instances on the AWS Marketplace (only Standard RIs can be listed)
Start with 1-year commitments (35% savings) instead of jumping to 3-year (66% savings) until you understand your baseline
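To keep an eye on utilization without opening Cost Explorer, the same data is available from its API. A small boto3 sketch pulling daily Savings Plans utilization for the last 30 days; the API supports DAILY and MONTHLY granularity, so the finer hourly application isn't visible here.

```python
import boto3
from datetime import date, timedelta

ce = boto3.client("ce")  # Cost Explorer

end = date.today()
start = end - timedelta(days=30)

resp = ce.get_savings_plans_utilization(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
)
for day in resp["SavingsPlansUtilizationsByTime"]:
    u = day["Utilization"]
    print(
        day["TimePeriod"]["Start"],
        "used", u["UsedCommitment"],
        "of", u["TotalCommitment"],
        "utilization", u["UtilizationPercentage"],
    )
```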
The better approach: This is exactly where MilkStraw changes the game. Instead of you taking on commitment risk—forecasting future usage, managing utilization, losing money when needs change—MilkStraw handles the commitment. You get the same savings without the lock-in. No upfront payment, no 1-3 year obligation, full flexibility to scale. The commitment risk shifts from you to us.
Taking Action: Where to Start
You don't need to fix everything at once. Here's a prioritized approach:
The DIY Route
This Week:
Run aws ec2 describe-instances and identify anything running with <5% average CPU (a boto3 sketch that adds the CloudWatch CPU check follows this list)
Find unattached EBS volumes: aws ec2 describe-volumes --filters Name=status,Values=available
Check your Savings Plan utilization in Cost Explorer
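describe-instances alone won't show CPU, so here's a rough boto3 sketch that joins running instances with 14 days of CloudWatch CPU averages. The 14-day window and 5% threshold are assumptions; tune both to your own traffic patterns.

```python
import boto3
from datetime import datetime, timezone, timedelta

ec2 = boto3.client("ec2")
cw = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for r in reservations:
    for inst in r["Instances"]:
        points = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,          # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if points:
            avg = sum(p["Average"] for p in points) / len(points)
            if avg < 5:
                print(f"Likely idle: {inst['InstanceId']} (avg CPU {avg:.1f}%)")
```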
This Month:
Implement instance scheduling for non-production environments
Migrate gp2 volumes to gp3 (zero-downtime)
Set up snapshot lifecycle policies (a Data Lifecycle Manager sketch follows this list)
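One way to automate those snapshot lifecycles is Data Lifecycle Manager. A minimal boto3 sketch; the IAM role ARN, target tag, schedule, and 90-snapshot retention are all placeholders to adapt to your compliance requirements.

```python
import boto3

dlm = boto3.client("dlm")

# Daily snapshots of volumes tagged Backup=true, keeping the last 90
dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots, retain 90",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "daily",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 90},
            }
        ],
    },
)
```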
This Quarter:
Right-size your top 20 largest instances
Implement S3 lifecycle policies
Review and optimize your commitment strategy
The Faster Route: Let MilkStraw Do the Heavy Lifting
Instead of manually hunting through AWS accounts, MilkStraw automatically detects all five waste patterns covered in this post:
| What MilkStraw Detects | Safe Remediation |
| --- | --- |
| Unused EBS volumes & orphaned snapshots | Step-by-step cleanup guides |
| Outdated EC2 instance types | Migration recommendations |
| Legacy EBS volume types (gp2→gp3) | Zero-downtime upgrade paths |
| Outdated RDS configurations | Safe database optimization guides |
| Legacy ElastiCache node types | Cache modernization steps |
| Outdated OpenSearch domains | Domain upgrade documentation |
Every detection comes with documentation on how to remediate safely—no guesswork, no risk of breaking production.
The Bottom Line
Cloud waste isn't a technology problem—it's a visibility and process problem. The resources silently billing you are the ones nobody's watching. Only 30% of companies can accurately attribute their cloud costs, which explains why waste persists.
The good news? These five patterns account for the vast majority of avoidable spend. Fix them, and you're likely looking at 20-40% reduction in your monthly AWS bill.
MilkStraw tackles cloud waste from both angles:
Detection & Guidance — We automatically find unused volumes, outdated EC2/EBS/RDS/ElastiCache/OpenSearch resources, and provide safe remediation guides in our docs
Commitment-Free Savings — We handle Reserved Instance and Savings Plan commitments so you get the discounts without the lock-in or risk
Stop hunting through AWS consoles. Stop forecasting usage for 3-year commitments. Let MilkStraw handle the complexity while you focus on building.