Migrate local database on Docker container to AWS

How to migrate your local Postgres database to AWS RDS (Relational Database Service)

In this article, we’ll see how to migrate your local Postgres database to AWS RDS.

Video version: youtu.be/87G3iUl-tj0

Just a couple of definitions before we start:

Postgres (PostgreSQL): a free, open-source relational database that is very popular and stable.

Amazon Relational Database Service (RDS): a service to set up, operate, and scale a relational database in the cloud. Amazon RDS supports several database engines:

  • Amazon Aurora
  • PostgreSQL
  • MySQL
  • MariaDB
  • Oracle Database
  • SQL Server

In this article, we'll do the migration by hand with pg_dump and psql. (AWS also offers the Database Migration Service, which can migrate or replicate existing databases to Amazon RDS, but we won't need it here.)

Usually we start with a local database for testing. After a while, though, we want to bring it to the cloud - for example, onto AWS.

This is the full procedure and it's not strictly related to Docker (though it’ll help to have Docker installed locally on your machine):

  1. Clone the repository
  2. Run the local instance of Postgres using docker compose
  3. Make some inserts in the database using Postman
  4. Make a backup of the database using Docker and psql
  5. Create an instance of RDS Postgres in AWS
  6. Restore the Postgres database on RDS using psql
  7. Final check

Clone the repository

To do this, we will use a project that we used in a previous article.

This repository is based on the official TinyStacks AWS Docker Express application repository on GitHub: github.com/tinystacks/aws-docker-templates-...

To clone the repository, open a command prompt and type:

git clone https://github.com/FrancescoXX/crud-api-node-postgres

And then change folders:

cd crud-api-node-postgres

Open this folder with your favorite IDE. If you have Visual Studio Code, you can use the command:

code .

Run the services using docker compose

Before starting our services, let's take a look at the docker-compose.yml file we used in the previous article. (If you want a more detailed explanation about these commands, you can check out my previous video.)

version: '3.8'

services: 
  node_backend:
    container_name: node_backend
    image: node-test:1.0.0
    build: 
      context: .
    ports: 
      - '8080:80'
    environment:
      - PGUSER=francesco
      - PGPASSWORD=12345
      - PGDATABASE=defaultdb
      - PGHOST=db
    depends_on: 
      - db  
  db:
    container_name: db
    image: 'postgres:12'
    ports: 
      - 5432:5432
    environment:
      - POSTGRES_USER=francesco
      - POSTGRES_PASSWORD=12345
      - POSTGRES_DB=defaultdb
    volumes: 
      - pgdata1:/var/lib/postgresql/data

volumes:
  pgdata1: {}

We have two services, plus a volume called pgdata1 where the database data is stored.

Let's run the database service with this command:

docker compose up -d db

And then the node backend service:

docker compose up -d node_backend
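
Before checking the containers, you can optionally verify that Postgres is already accepting connections. This is a sketch using pg_isready, which ships inside the postgres image (the service name and credentials match the docker-compose.yml above):

```shell
DB_SERVICE=db

# pg_isready reports whether the server is accepting connections;
# it exits 0 when the database is ready
docker compose exec "$DB_SERVICE" pg_isready -U francesco -d defaultdb
echo "sanity check sent to service: $DB_SERVICE"
```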

Let's check the running containers with this command:

docker ps -a

Now we’re ready to make our inserts into the database.

Database Inserts Using Postman

You can make the inserts into the database in different ways. In this article, we will use Postman, a tool for testing APIs.

Check if the backend is up and running by creating a new [GET] request at the /ping endpoint, port 8080:

Check the current users by creating a new [GET] request at the /users endpoint, port 8080. The response will be an empty response body ([]) with status 200, which is what we expect:

Create three new users on this database with a new [POST] request at the /users endpoint, port 8080:

If we check the users again via the /users endpoint, we will see the three new users.
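
If you prefer the command line to Postman, the same requests can be sketched with curl. Note that the JSON field names in the POST body are an assumption, not taken from the repository; check the API's route handlers for the exact schema:

```shell
BASE="http://localhost:8080"

# Health check
curl -s "$BASE/ping"

# List users: [] on a fresh database
curl -s "$BASE/users"

# Create a user (the field names here are hypothetical)
curl -s -X POST "$BASE/users" \
  -H "Content-Type: application/json" \
  -d '{"name": "francesco"}'
echo
```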

Now we are ready to create the backup of the database using the pg_dump command.

Database Dump with pg_dump

pg_dump is a utility for backing up a Postgres database. It makes consistent backups even if the database is being used concurrently, and it does not block other users accessing the database.

The good news is that we can use docker exec to run pg_dump inside the db container:

docker exec db pg_dump -U francesco defaultdb > backup.sql

That was easy!
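
To sanity-check the result, you can peek inside the dump file. A healthy dump starts with pg_dump's banner and contains a COPY block for each table that holds data:

```shell
DUMP_FILE="backup.sql"

if [ -s "$DUMP_FILE" ]; then
  head -n 5 "$DUMP_FILE"           # should show "-- PostgreSQL database dump"
  grep -c '^COPY ' "$DUMP_FILE"    # number of COPY blocks in the dump
else
  echo "no dump found at $DUMP_FILE - run the pg_dump command above first"
fi
```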

We are now ready to move on to the AWS console to create an RDS instance.


Create RDS Instance on AWS (UI)

Let's create the AWS RDS instance.

In the AWS console, search for RDS:

Click on Create Database:

Here, unsurprisingly, you can choose a database engine. Choose PostgreSQL, version 12.5 (the same major version we used locally, and it also has a Free tier on AWS, which is nice!), and select the Free tier template.

In the Settings section, you can choose a name for the database, a username, and a password:

Under Connectivity, select Yes for Public access (it's set to No by default) so we can access it from our local machine.

You also need a VPC with Security group open to the public. To learn more about security groups, check out the AWS documentation.

Leave everything else as it is, and click Create Database. (Note that since we are on the Free tier, we will not be charged.)

Database creation usually takes a couple of minutes.

If you click on the database name, you will see a summary of it:

Now that we have a Postgres instance running on AWS, we can use the psql command to restore the database.

Database Migration using Docker and psql

First of all, let's test the connection to the AWS Postgres database:

docker run -it --rm postgres psql --host database-2.c9nq6suhmqou.us-east-1.rds.amazonaws.com --port 5432 --username postgres

To check that it's working, type the following command (don't forget the ; at the end!):

select 1;

This means that the remote Postgres instance is accessible from our local machine!

Let's exit the current psql process with the command:

\q
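
As a side note, if you'd rather not retype the connection details every time, psql (via libpq) honors standard environment variables. The endpoint below is the example one from this walkthrough, and the password is a placeholder; substitute your own values:

```shell
# Placeholder values - replace with your own RDS endpoint and master password
export PGHOST="database-2.c9nq6suhmqou.us-east-1.rds.amazonaws.com"
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD="your-password-here"

# With these set, psql needs no connection flags at all, e.g.:
#   docker run -it --rm -e PGHOST -e PGPORT -e PGUSER -e PGPASSWORD postgres psql
echo "will connect as $PGUSER@$PGHOST:$PGPORT"
```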

Now, for ease of use, navigate into the folder where the backup.sql file is located. In our case, it's the same folder where we cloned the repository.

Database Migration

Now comes the interesting and most important part - the migration to AWS:

To do this, run:

psql -f backup.sql --host database-2.c9nq6suhmqou.us-east-1.rds.amazonaws.com --port 5432 --username postgres
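
If you want extra safety during the restore, psql's --single-transaction flag wraps the whole file in one transaction, so any failure rolls everything back instead of leaving a half-migrated database. A sketch, using the same example endpoint:

```shell
DUMP_FILE="backup.sql"
RDS_HOST="database-2.c9nq6suhmqou.us-east-1.rds.amazonaws.com"

if command -v psql >/dev/null 2>&1; then
  # Any error aborts and rolls back the entire restore
  psql --single-transaction -f "$DUMP_FILE" \
    --host "$RDS_HOST" --port 5432 --username postgres
else
  echo "psql is not installed locally - run it through the postgres Docker image instead"
fi
```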

Final Test

Now let's get back to our remote instance and check that our users are there.
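
One way to check is a one-shot query with psql's -c flag. The users table name is assumed from the API's /users endpoint:

```shell
RDS_HOST="database-2.c9nq6suhmqou.us-east-1.rds.amazonaws.com"

# -c runs a single statement and exits
docker run -it --rm postgres psql \
  --host "$RDS_HOST" --port 5432 --username postgres \
  -c 'SELECT * FROM users;'
echo "queried users table on $RDS_HOST"
```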

There it is! We have migrated our local database onto an RDS instance!

Video version: youtu.be/87G3iUl-tj0

If you want to know more and stay updated on TinyStacks news, check out our website!

 