COVID-19: 11 Tips to reduce AWS costs today


COVID-19 creates financial challenges for many companies. Well-running businesses get into trouble because customers stay away and acquiring new customers is difficult.

Reduce AWS Costs

Today, many companies need to optimize their AWS costs quickly and effectively, because hosting is often a large part of their monthly spending.

Drawing on over 7 years of experience with AWS, I would like to share a few “Quick Tips” to effectively reduce AWS costs within a single day.

AWS must scale with your business model

The most common cause of AWS cost inefficiency is an architecture that does not scale “with the business”. But what does that mean?

Example: If the revenue model is based on active users, the hosting costs should scale mainly with that number. Doubling the users may double the costs, as long as halving the users also halves the costs. Sounds simple? In practice it often isn’t!

This is where you can start: examine why the costs have not fallen along with revenue / traffic, and what you can change about it.
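A simple way to make this visible is to track hosting cost per active user over time. A minimal sketch (all numbers are made up for illustration):

```python
# Hypothetical example: hosting cost per active user, month over month.
# If the architecture scales "with the business", this ratio stays roughly flat.

def cost_per_user(monthly_aws_bill: float, active_users: int) -> float:
    """Monthly AWS bill divided by active users."""
    return monthly_aws_bill / active_users

before = cost_per_user(10_000.0, 50_000)  # a normal month
after = cost_per_user(9_500.0, 25_000)    # users halved, bill barely moved

# A sharp rise in cost per user when traffic drops is the signal that the
# infrastructure is not scaling with the business.
print(f"before: ${before:.2f}/user, after: ${after:.2f}/user")
```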


I often see AWS setups where EC2 instances are used like “classic” dedicated servers. This, of course, throws the entire potential of a cloud provider right into the bin.

Many AWS services are scalable out-of-the-box with a small amount of effort. Only when we use these features do we get a sustainable, scalable and cost-effective infrastructure!
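For EC2 specifically, this usually means an Auto Scaling group with a target-tracking policy instead of a fixed fleet. A sketch of such a policy (the group name and target value are hypothetical, and applying it requires AWS credentials):

```python
import json

# Target-tracking keeps capacity proportional to load: instances are added
# or removed automatically to hold average CPU around the target value.
scaling_policy = {
    "AutoScalingGroupName": "web-asg",  # hypothetical group name
    "PolicyName": "keep-cpu-at-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # aim for ~50% average CPU utilization
    },
}

# To apply it for real (not executed here):
#   import boto3
#   boto3.client("autoscaling").put_scaling_policy(**scaling_policy)
print(json.dumps(scaling_policy, indent=2))
```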

Always touch a running System

There is often a fear of touching legacy systems because “something might break”.

If this is the case, this fear must be made transparent so you can work out how to address it. Modern setups with Infrastructure as Code (IaC) are well suited to handle such challenges in the long run. Like code, infrastructure must be able to be shut down and booted up without trouble.

If these principles are already followed, we can look at the individual components and analyze how to make them more cost-efficient. It is always important to consider what impact a change might have on performance and stability.

AWS S3: Optimize costs with Storage Classes

The AWS Simple Storage Service (S3) is certainly one of the most popular AWS services for storing files. But a lot of developers are not aware of this:

S3 offers different “storage classes” which differ mainly in access times, latency and durability.

If data is accessed frequently and latency is important, the default “Standard” storage class is usually fine. For the following scenarios, however, there is the possibility to save significant costs:

Overview of S3 Storage Classes and Pricing

Save up to 93% on backups with AWS S3 Glacier

Glacier is the storage class for data that is rarely accessed and where you are willing to wait up to 12 hours for access. This can reduce S3 costs by up to 93% compared to the S3 Standard class.
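The effect is easy to estimate. The per-GB prices below are illustrative examples only, not current AWS pricing (check the S3 pricing page for your region):

```python
# Example prices ($/GB-month); placeholders, not current AWS pricing.
STANDARD_PER_GB = 0.023        # S3 Standard (example)
DEEP_ARCHIVE_PER_GB = 0.00099  # S3 Glacier Deep Archive (example)

def monthly_cost(gb: float, price_per_gb: float) -> float:
    return gb * price_per_gb

backup_gb = 5_000  # hypothetical backup volume
standard = monthly_cost(backup_gb, STANDARD_PER_GB)
glacier = monthly_cost(backup_gb, DEEP_ARCHIVE_PER_GB)
savings = 1 - glacier / standard

print(f"${standard:.2f} vs ${glacier:.2f} -> {savings:.0%} saved")
```

With these example prices the savings are over 90%; the exact percentage depends on the region and on which Glacier tier you choose.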

Automatic optimization with AWS S3 Intelligent-Tiering

Amazon S3 Intelligent-Tiering automatically selects the best storage class based on access patterns. This brings automatic cost optimization without additional development effort.

It is important to make sure that your access patterns cannot be misinterpreted by AWS (for example, data that is accessed very rarely, but where low latency still matters when access does happen).

More information about Amazon S3 Intelligent-Tiering can be found here
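One low-effort way to adopt Intelligent-Tiering is an S3 lifecycle rule that transitions objects into it automatically. A sketch (bucket name and prefix are hypothetical; applying it requires AWS credentials):

```python
# Lifecycle rule: move objects under "uploads/" into Intelligent-Tiering,
# where AWS picks the cheapest tier based on observed access patterns.
lifecycle_config = {
    "Rules": [
        {
            "ID": "auto-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": "uploads/"},  # hypothetical prefix
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            ],
        }
    ]
}

# To apply it for real (not executed here):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)
```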

AWS EC2: 90% discount with “Spot Instances” for Development & Testing

The stability of an application often rests on the assumption that the underlying infrastructure is stable, reliable and always available (high availability). This is often a reasonable approach, as ensuring high availability yourself is usually very complex and expensive.

But this high availability is often not really necessary for all environments. Development and test systems, which also often run 24/7, can usually be unavailable for 5-10 minutes without a major impact on development.

If this is the case, or in similar scenarios, you can use “Amazon EC2 Spot Instances” instead of “On-Demand” instances.

What are Amazon EC2 Spot Instances?

Amazon EC2 Spot Instances are technically similar to regular EC2 On-Demand Instances, but are paid for via bids instead of a fixed price ($/hour). You set a maximum price you are willing to pay ($/hour), but effectively you only pay the current price needed to obtain instances from “the market”.

This way, AWS sells resources it is holding for future demand at a much lower price in the meantime. It is a win-win scenario for AWS and the customer.
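Requesting a spot instance works through the familiar RunInstances API. A sketch of the request parameters (the AMI ID is a placeholder, and launching requires AWS credentials):

```python
# Spot request expressed as regular RunInstances parameters.
spot_request = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
    "InstanceType": "m5.large",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {
        "MarketType": "spot",
        "SpotOptions": {
            # If MaxPrice is omitted, AWS caps the bid at the on-demand
            # price, so you never pay more than for a regular instance.
            "SpotInstanceType": "one-time",
        },
    },
}

# To launch it for real (not executed here):
#   import boto3
#   boto3.client("ec2").run_instances(**spot_request)
```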

Be aware: if your bid is too low, e.g. because fewer servers are available on the market or competitors bid higher prices, your EC2 instances are automatically “taken away” after a warning of a few minutes.

Spot Instances: It depends on the instance type.

When selecting a spot instance, it is useful to move away from the common instance types. Rarely used types are often significantly cheaper than the ubiquitous m5.large that appears in every tutorial.

AWS Spot Instances Pro Tips:

Maximum bid = on-demand price

An easy way to reduce costs quickly without risking many interruptions is to set the maximum bid to the on-demand price. Unfortunately, it can still happen that the price rises above this or that resources become unavailable in the spot market.

Check Availability Zones

AWS resources are always assigned to a region (e.g. us-west-1) and often also to a specific availability zone; this includes EC2 and EC2 spot instances. Spot prices also depend on the availability zone: an m5.xlarge can be more expensive in us-west-1a than in us-west-1b! Check this beforehand!
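Current prices per availability zone can be queried via the spot price history API. A sketch (region and instance type are examples; running the query requires AWS credentials):

```python
import datetime

# Parameters for a spot price lookup across availability zones.
params = {
    "InstanceTypes": ["m5.xlarge"],
    "ProductDescriptions": ["Linux/UNIX"],
    "StartTime": datetime.datetime.now(datetime.timezone.utc),
    "MaxResults": 10,
}

# To run it for real (not executed here):
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-west-1")
#   for entry in ec2.describe_spot_price_history(**params)["SpotPriceHistory"]:
#       print(entry["AvailabilityZone"], entry["SpotPrice"])
```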

Use the Spot Instance Advisor.

With the “Spot Instance Advisor”, AWS offers a tool to check current spot prices, savings, and the “frequency of interruptions”.

The “frequency of interruptions” is the probability that the instance will be “taken away” at the listed price. When selecting spot instances, you should therefore choose an instance type with a low “frequency of interruptions” to avoid unnecessary downtime.

The Spot Instance Advisor can be found here

Save costs with the AWS Trusted Advisor

The “AWS Trusted Advisor” service includes a “Cost Optimization” section that points out easy savings. There you will find, for example:

  • Underutilized EC2 instances that you can switch to smaller instance types
  • Oversized EBS volumes
  • Unused load balancers
  • …and much more

Especially if your infrastructure is not defined as code but was created manually via the console, unused resources are sometimes forgotten. This is a very easy way to save money instantly.

More information about the AWS Trusted Advisor can be found here
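The checks can also be read programmatically via the Support API (note: this API is only available on the Business or Enterprise support plan). A sketch:

```python
# Request parameters for listing Trusted Advisor checks.
request = {"language": "en"}

# To run it for real (not executed here; requires a Business or
# Enterprise support plan and AWS credentials):
#   import boto3
#   support = boto3.client("support", region_name="us-east-1")
#   checks = support.describe_trusted_advisor_checks(**request)["checks"]
#   for check in checks:
#       if check["category"] == "cost_optimizing":
#           print(check["name"])
```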

MySQL and PostgreSQL: AWS Aurora

Databases, especially MySQL or PostgreSQL, can also account for a large share of AWS costs. They can be optimized very easily, especially with “workload peaks”, by moving to AWS Aurora or, even better, AWS Aurora Serverless.

What is AWS Aurora?

Within the AWS Relational Database Service (RDS), AWS Aurora is a database engine developed by AWS itself that is fully compatible with MySQL and PostgreSQL. Since these are by far the most frequently used databases, regular MySQL or PostgreSQL databases can be migrated to AWS Aurora quite easily. AWS Aurora offers a number of advantages:

  • Up to 5x throughput compared to classic MySQL
  • Up to 10x more cost effective
  • Fully MySQL and PostgreSQL compatible
  • Automatic scaling of the required storage up to 64 TB as needed
  • 99.99% availability
  • Easy migration of existing MySQL and PostgreSQL databases with the AWS DMS service

More information about AWS Aurora can be found here.

What is AWS Aurora Serverless?

As the name suggests, this is the serverless version of AWS Aurora: you no longer have to choose a server instance, because AWS automatically scales the resources based on load, number of database queries, memory consumption, etc.

This is especially cost-effective if the load on the database varies a lot, for example due to:

  • Different load depending on the day of the week or time of day
  • Temporarily recurring high load through cron jobs, workers, ETL processes
  • Temporary high load from other company activities (e.g. TV advertising)

More information about AWS Aurora Serverless can be found here
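A minimal sketch of creating such a cluster (identifiers and credentials are placeholders; this describes Aurora Serverless v1, where capacity is measured in Aurora Capacity Units):

```python
# Parameters for an Aurora Serverless (v1) cluster that scales automatically
# and pauses when idle, so you pay nothing for compute during quiet periods.
cluster_params = {
    "DBClusterIdentifier": "demo-serverless",  # placeholder
    "Engine": "aurora-mysql",
    "EngineMode": "serverless",
    "MasterUsername": "admin",                 # placeholder
    "MasterUserPassword": "change-me-please",  # placeholder
    "ScalingConfiguration": {
        "MinCapacity": 1,   # Aurora Capacity Units (ACUs)
        "MaxCapacity": 8,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,  # pause after 5 idle minutes
    },
}

# To create it for real (not executed here):
#   import boto3
#   boto3.client("rds").create_db_cluster(**cluster_params)
```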

Savings with commitment

AWS offers various discounts if you guarantee a certain resource consumption / spend at AWS for 1-3 years. Here are the common options:

AWS Savings Plans

Compute Savings Plans

With Compute Savings Plans you can save up to 66% with high flexibility. You commit to a certain amount of compute spend on EC2, Fargate, or Lambda over the next 12-36 months, with or without upfront payment. Because you do not have to commit to specific EC2 instance types, you remain highly flexible regarding future technical decisions!
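A quick way to sanity-check whether a commitment pays off is to compare the yearly cost at both rates. The hourly rates below are placeholders, not real AWS prices:

```python
# Illustrative rates only; look up real prices on the Savings Plans page.
on_demand_rate = 0.096   # example $/hour, pay as you go
committed_rate = 0.060   # example $/hour under a Compute Savings Plan

hours_per_year = 24 * 365
yearly_on_demand = on_demand_rate * hours_per_year
yearly_committed = committed_rate * hours_per_year
savings_pct = 1 - yearly_committed / yearly_on_demand

print(f"${yearly_on_demand:.0f} vs ${yearly_committed:.0f} "
      f"-> {savings_pct:.0%} saved per year")
```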

EC2 Instance Savings Plans

You can save even more than with Compute Savings Plans (up to 72%), but you lose flexibility: you have to choose an instance family (e.g. M5) and a region (e.g. us-west-1).

More information about Savings Plans can be found here

AWS EDP / Private Pricing: Enterprise Discount Program

The Enterprise Discount Program (EDP) offers special discount conditions if you can commit to a higher spend over a longer period of time. Your sales or key account contact can tell you whether your account qualifies for EDP.
