AWS recently sent an email to its customers about vCPU-based on-demand instance limits becoming available. As it came from AWS Marketing (motto: “We’re willing to be misunderstood for long periods of time!”), the messaging was confused and unclear.
After the third customer asked me to explain the email to them, it was time to write this blog post.
Before this change, EC2 service limits were based on instance type and instance family. Once upon a time, this was reasonable. It served to keep someone from accidentally billing themselves a phone number due to a misplaced zero or six. This means that, right now, there are a hilarious 261 service limits per region for EC2 alone—all because the EC2 team has no sense of “that’s enough” when it comes to launching new instance sizes, families, and hues.
Now, these limits are being dramatically simplified, and this is a good thing. Soon, there will only be five:
* One for the instances that human beings use: A, C, D, H, I, M, R, T, and Z.
* One each (four in total) for F, G, P, and X, each of which is a ridiculous horseshit instance family that virtually nobody should use under any circumstances (read as: these families consist of premium-flavored snake oil for machine learning purposes).
These new limits are all tied to the number of vCPUs and start with an on-demand baseline of 1152 vCPUs for the nine instance families people actually use. That could be 576 t3.nano instances (at 2 vCPUs apiece), 12 r5d.24xlarge instances (at 96 vCPUs apiece), or any combination of instances until you reach the limit.
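To make the arithmetic concrete, here's a minimal sketch of how a pooled vCPU limit works. The vCPU counts in the table below are illustrative (t3.nano at 2 vCPUs and r5d.24xlarge at 96 vCPUs match current EC2 instance specs, but confirm against the official documentation before relying on them), and the helper functions are hypothetical, not any AWS API:

```python
# Sketch: checking a planned fleet against the 1152-vCPU on-demand
# baseline for the standard (A, C, D, H, I, M, R, T, Z) families.

VCPU_LIMIT = 1152  # default on-demand baseline for the standard families

# Illustrative vCPU counts per instance type; verify against EC2 docs.
VCPUS_PER_INSTANCE = {
    "t3.nano": 2,
    "r5d.24xlarge": 96,
    "m5.large": 2,
}

def vcpus_used(fleet):
    """Total vCPUs consumed by a fleet given as {instance_type: count}."""
    return sum(VCPUS_PER_INSTANCE[itype] * n for itype, n in fleet.items())

def fits_limit(fleet, limit=VCPU_LIMIT):
    """True if the fleet fits under the pooled vCPU limit."""
    return vcpus_used(fleet) <= limit

# 12 r5d.24xlarge instances exactly hit the baseline: 12 * 96 = 1152.
print(vcpus_used({"r5d.24xlarge": 12}))              # 1152
# 576 t3.nano instances also hit it: 576 * 2 = 1152.
print(fits_limit({"t3.nano": 576}))                  # True
# Mixing families draws from the same pool, so one more instance overflows.
print(fits_limit({"t3.nano": 576, "m5.large": 1}))   # False
```

The point of the pooled model is exactly what the mixed-fleet check shows: it no longer matters which of the nine families you launch; they all draw down the same vCPU budget.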
At no point will this change drop your limits below your current instance usage.
I can see why AWS is telegraphing this change before it happens with a full salvo of email alerts. I just wish the messaging had been a lot clearer.