Lambda is Cheaper than any EC2 Instance

AWS EC2 offers two types of instances: "burst" (T-class, e.g. t2) and "fixed performance" (m4, c4, etc.).

Burst, or "T-class," instances are pitched at bursty workloads. They're typically half the price of a fixed performance instance (m4, c4, etc.), but you only get about 20% of the throughput as a baseline. You'll also need to configure and maintain autoscaling, or risk depleting your CPU "credits," at which point AWS throttles the instance to almost nothing.
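To make the credit mechanics concrete, here's a back-of-the-envelope sketch using the published t2.micro figures at the time of writing (assumed here: a 10% CPU baseline, 6 credits earned per hour, a 144-credit maximum balance, and 1 credit = 1 minute of a vCPU at 100%):

```python
# Rough sketch of T2 credit depletion under sustained full load.
# Figures are assumptions based on the published t2.micro numbers:
# 6 credits earned/hour, 144 max balance, 1 credit = 1 vCPU-minute.
EARN_PER_HOUR = 6
MAX_BALANCE = 144
SPEND_PER_HOUR_AT_FULL_LOAD = 60  # minutes of full-core CPU per hour

net_burn = SPEND_PER_HOUR_AT_FULL_LOAD - EARN_PER_HOUR  # 54 credits/hour
hours_until_throttled = MAX_BALANCE / net_burn
print(f"~{hours_until_throttled:.1f} hours of full load before throttling")
```

Under those assumptions a t2.micro running flat out burns through a full credit balance in under three hours, after which it's throttled to its baseline.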

Fixed instances are just normal instances: you get 100% of the CPU with no throttling. These are supposedly comparable to VPSes from Linode, DigitalOcean, etc.

But Lambda, AWS's "function as a service," can serve the same burst loads as T-class instances. In fact, it scales to 1,000 concurrent executions with no autoscaling configuration, for far less money. It's not a VPS or "instance," but it serves the same purpose (executing code) and is billed by the same two metrics as VPSes and instances: CPU and memory.
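Lambda's billing itself is simple to model: you pay per request plus per GB-second of memory-duration. Here's a hedged sketch using the on-demand rates at the time of writing ($0.20 per million requests, $0.00001667 per GB-second; check the AWS pricing page for current figures):

```python
# Assumed on-demand Lambda rates (at the time of writing):
REQ_PRICE = 0.20 / 1_000_000   # $ per request
GBS_PRICE = 0.00001667         # $ per GB-second

def lambda_monthly_cost(req_per_sec, duration_s, memory_gb,
                        seconds_per_month=30 * 24 * 3600):
    """Raw Lambda bill for a steady request rate over a 30-day month."""
    requests = req_per_sec * seconds_per_month
    gb_seconds = requests * duration_s * memory_gb
    return requests * REQ_PRICE + gb_seconds * GBS_PRICE

# 10 req/sec, 100ms each, at 512MB:
print(round(lambda_monthly_cost(10, 0.1, 0.5), 2))  # ≈ $26.79
```

Note this is the raw bill, not the per-core normalization used in the table below, but it shows how directly Lambda's cost tracks actual usage.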

AWS pricing is not easy to understand. This 3rd-party pricing table helps, but it still makes burst vs. fixed hard to compare, and doesn't list Lambda at all.

Lastly, if you value concurrency or burst capacity, there's no obvious way to factor it into a pricing comparison. But Lambda is so much better at concurrency and burst that if you value them at all, there's no contest.

EC2/Lambda Normalized Pricing

(assumes autoscale configuration for T-Class)

| instance | price/hr | price/core/month | 10 req/sec @ 100ms/req | 10 req/sec @ 10ms/req |
| --- | --- | --- | --- | --- |
| t2.nano | $0.01 | $86 | $864 | $86 |
| t2.micro | $0.01 | $86 | $864 | $86 |
| t2.medium | $0.05 | $85 | $846 | $85 |
| t2.small | $0.02 | $83 | $828 | $83 |
| c4.large | $0.10 | $36 | $360 | $36 |
| m1.small | $0.04 | $32 | $317 | $32 |
| lambda 512 | $0.03 | $22 | $216 | $216 |
| lambda 1024 | $0.06 | $43 | $432 | $432 |
  • Lambda bills in 100ms increments, so 10ms workloads probably don't make sense on Lambda
  • If your workload is between 70ms and 300s, and you can tolerate some cold starts (up to 4s delay), then Lambda is the rational choice.
  • The pricing sweet spot is for workloads with memory <= 512MB and duration >= 70ms
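The 100ms-increment billing in the first bullet is why the two rightmost Lambda columns are identical: a 10ms invocation is rounded up and billed as 100ms. A small sketch (using the same assumed rates as above, $0.20 per million requests and $0.00001667 per GB-second):

```python
import math

# Assumed on-demand Lambda rates (at the time of writing):
REQ_PRICE = 0.20 / 1_000_000
GBS_PRICE = 0.00001667

def billed_cost_per_million(duration_s, memory_gb):
    # Lambda (at the time of writing) rounds each invocation up to
    # the next 100ms increment before billing.
    billed = math.ceil(duration_s / 0.1) * 0.1
    return 1_000_000 * (REQ_PRICE + billed * memory_gb * GBS_PRICE)

print(billed_cost_per_million(0.010, 0.5))  # 10ms, billed as 100ms
print(billed_cost_per_million(0.100, 0.5))  # exactly the same price
```

So a 10ms workload pays the full 100ms price per invocation, which is why fast, latency-sensitive handlers don't pencil out on Lambda.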

It doesn't make economic sense for latency-sensitive workloads, and probably not for high-memory (> 1GB) workloads either. It's also important to note that most web servers doing 100 req/sec may not be using much CPU at all (unless it's Ruby). A non-blocking, I/O-bound Node.js server serving static files is a good use case for a T-class instance.
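One way to see where the fixed instance takes over is the break-even sustained request rate. This sketch assumes the same Lambda rates as above and a c4.large at $0.10/hr (the figure from the table); actual break-even depends on your workload's CPU profile:

```python
# Assumed rates (at the time of writing):
REQ_PRICE = 0.20 / 1_000_000   # $ per Lambda request
GBS_PRICE = 0.00001667         # $ per GB-second

instance_monthly = 0.10 * 24 * 30                 # c4.large, ~$72/month
cost_per_request = REQ_PRICE + 0.1 * 0.5 * GBS_PRICE  # 100ms at 512MB
break_even_rps = instance_monthly / cost_per_request / (30 * 24 * 3600)
print(f"break-even ≈ {break_even_rps:.0f} req/sec sustained")
```

Under those assumptions, a steady load somewhere in the high twenties of requests per second makes the fixed instance cheaper; below that, or for spiky traffic, Lambda wins.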

But there are many sub-1GB-RAM, CPU-bound jobs where concurrency matters, and for those Lambda is a silver bullet.
