One of the most beautiful aspects of working in the cloud is the widespread support for managing your infrastructure. Among other things, you can easily automate starting and stopping databases. Why automatically start and stop databases? Often, the users (developers, testers, etc.) of non-production environments only interact with them during business hours, which means you can stop the databases outside business hours for some cost savings.
How much cost savings are we talking about? Well, for Amazon Aurora, running a db.t2.small instance costs $0.041/hr. That might not seem like much, but with 11 hours of downtime every day, it works out to $13.53 a month. With a db.r5.large instance ($0.29/hr), that goes up to $95.70 a month in savings. These savings multiply when you have many databases, which is common with a microservices architecture. If you’re thinking “But what about the cost of setting it up?”, don’t worry, it only takes about half an hour to set up, so let’s get started.
The Plan
Effectively, we only need two things: something to start and stop the databases, and something to trigger that execution on a schedule. In AWS, you can set up two Lambda functions, one to start the databases and one to stop them. Starting and stopping the databases is easy to code with the AWS SDKs, but remember that the function’s assumed role will need the IAM permissions to do so. After creating the Lambda functions, you can set up CloudWatch schedule events to trigger them.
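How you grant those permissions is up to you (console, CloudFormation, Terraform, etc.), but as a rough boto3 sketch, attaching an inline policy to the execution role might look like this. The role and policy names below are placeholders, and you can scope the Resource down to specific cluster ARNs:

import json
import boto3

iam_client = boto3.client('iam')

# The RDS actions the start/stop functions actually call.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstances",
                "rds:StartDBCluster",
                "rds:StopDBCluster"
            ],
            "Resource": "*"
        }
    ]
}

# Attach the policy inline to the Lambda's execution role (names are hypothetical).
iam_client.put_role_policy(
    RoleName='rds-scheduler-lambda-role',
    PolicyName='rds-start-stop',
    PolicyDocument=json.dumps(policy)
)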
The Code
In my use case, I used Python with simple boto3 calls to get a list of cluster names, then iterated over them to start/stop each one. Below is the code for starting the clusters.
import botocore
import boto3
import json

print('Loading function')

rds_client = boto3.client('rds')


def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    try:
        print("Version is {}".format(botocore.__version__))
        response = rds_client.describe_db_instances()
        # Collect the unique Aurora cluster names; instances that are not part
        # of a cluster have no DBClusterIdentifier, so they are skipped.
        clusters_to_start = {
            instance["DBClusterIdentifier"]
            for instance in response["DBInstances"]
            if "DBClusterIdentifier" in instance
        }
        for name in clusters_to_start:
            print('Starting cluster: ' + name)
            rds_client.start_db_cluster(DBClusterIdentifier=name)
        return True
    except Exception as e:
        print(e)
        print('Error starting rds clusters.')
        raise e
The stop function is nearly identical, except that it calls “rds_client.stop_db_cluster” instead of start. To be clear, you start and stop the cluster when working with Aurora databases, whereas other RDS databases require the start/stop DB instance calls (start_db_instance/stop_db_instance).
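For reference, a minimal sketch of the stop handler, assuming the same setup as the start function above:

import boto3

rds_client = boto3.client('rds')


def lambda_handler(event, context):
    # Mirror of the start function, but calling stop_db_cluster instead.
    response = rds_client.describe_db_instances()
    clusters_to_stop = {
        instance["DBClusterIdentifier"]
        for instance in response["DBInstances"]
        if "DBClusterIdentifier" in instance
    }
    for name in clusters_to_stop:
        print('Stopping cluster: ' + name)
        rds_client.stop_db_cluster(DBClusterIdentifier=name)
    return True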
The Schedule
Next, we set up CloudWatch Events rules on a schedule that triggers our newly created Lambda functions. To do so, go to CloudWatch, click the Rules link, then click the Create rule button.

This takes you to the Create rule dialog, where you can select Schedule. Enter a cron expression, and select the Lambda function to trigger.

For my purposes, I want my databases to start before I get into work and stop after I leave. With a little buffer, that means starting at 7 AM and stopping at 8 PM, Monday through Friday. CloudWatch cron expressions are in UTC, so accounting for the time shift, they are as follows:
- Start cron (7 AM EST M-F) : 0 11 ? * MON-FRI *
- Stop cron (8 PM EST M-F) : 0 0 ? * TUE-SAT *
Note that the stop rule runs TUE-SAT because 8 PM Eastern has already rolled over to midnight UTC, i.e. the following day.
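If you prefer to script the rules rather than click through the console, the same schedule can be created with boto3. The function name and ARN below are placeholders; this sketch wires up the start rule, and the stop rule follows the same pattern with the other cron expression:

import boto3

events_client = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholder values; substitute your function's actual name and ARN.
function_name = 'start-rds-clusters'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:start-rds-clusters'

# Create (or update) the scheduled rule that fires at 11:00 UTC on weekdays.
rule = events_client.put_rule(
    Name='start-rds-clusters-schedule',
    ScheduleExpression='cron(0 11 ? * MON-FRI *)',
    State='ENABLED'
)

# Point the rule at the Lambda function.
events_client.put_targets(
    Rule='start-rds-clusters-schedule',
    Targets=[{'Id': 'start-rds-clusters', 'Arn': function_arn}]
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId='allow-cloudwatch-start-rule',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)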

Checking in after hours lets me see that it does indeed work as intended.
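If you'd rather not wait around for the console, a quick sketch like this asks the API for each cluster's current status:

import boto3

rds_client = boto3.client('rds')

# Print each Aurora cluster's current status (e.g. "available" or "stopped").
for cluster in rds_client.describe_db_clusters()["DBClusters"]:
    print(cluster["DBClusterIdentifier"], cluster["Status"])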


Conclusion
With a few Lambda functions and CloudWatch rules, you can automate starting and stopping databases in AWS. Even if you don’t have any experience with AWS Lambda, automated database management is pretty easy to get working, and it’s great practice in automating mundane tasks. Plus, you might save some money, so go check it out.