Migrate DynamoDB tables with zero downtime and no data loss

2022-08-23 | #AWS #Serverless

Introduction

Migrating data with zero downtime and no data loss has always been a challenge. DynamoDB's new native import from S3 helps solve this. Most docs point you to the console or the CLI to do the import, but I found a yet-to-be-announced CloudFormation feature that creates a table directly from an S3 export. By doing it through CloudFormation instead of the console, you save yourself the headache of having to import the new table into a stack afterwards.

In this guide, I will show you how to migrate a DynamoDB table to a new account using the native export to and import from S3 functionality. Before AWS released the import feature, you had to use glue services to load a table from S3. Now you can simply point at the bucket that holds your exported data and create a new table from it.

Note

I’m using yet-to-be-announced features of CloudFormation in this guide. The API and configuration settings might not be stable.

You can see the architecture for the solution in the image below. You will do the initial migration by using the native export to and import from S3 functionality. You will then set up delta migration with a DynamoDB stream and a Lambda function.

Migration Architecture

Starting point

To keep things simple, I have chosen to only include a DynamoDB table in the application. To follow along with the examples, you will need access to two AWS accounts: Source and Target.

  1. To start, let's create an application and install dependencies:

    Terminal window
    $ mkdir app
    $ cd app
    $ cdk init app --language=typescript --generate-only

    Applying project template app for typescript
    # Welcome to your CDK TypeScript project

    ...

    All done!

    $ yarn

    yarn install v1.22.18
    warning package.json: No license field
    info No lockfile found.
    warning app@0.1.0: No license field
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    [3/4] Linking dependencies...
    [4/4] Building fresh packages...
    success Saved lockfile.
    Done in 19.49s.
  2. Replace the code in lib/app-stack.ts with the following. This code creates a DynamoDB table named MyTable:

    lib/app-stack.ts
    import * as cdk from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

    export class AppStack extends cdk.Stack {
      public readonly table: dynamodb.Table;

      constructor(scope: Construct, id: string, props?: cdk.StackProps) {
        super(scope, id, props);

        this.table = new dynamodb.Table(this, 'Table', {
          tableName: 'MyTable',
          partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
          billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
        });
      }
    }
  3. Bootstrap your Source account by running cdk bootstrap.

  4. Deploy your application to the Source account with cdk deploy AppStack.

  5. Make sure to bootstrap your Target account as well. (If you use named CLI profiles for the two accounts, see the sketch after this list.)
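If you keep credentials for the two accounts in named CLI profiles, the bootstrap and deploy steps could look roughly like this. The profile names source and target are assumptions; use whatever your setup calls them:

Terminal window
$ cdk bootstrap --profile source
$ cdk deploy AppStack --profile source
$ cdk bootstrap --profile target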

If you go to the DynamoDB console, you should see a table named MyTable. For the remainder of this example, we will use this console as the UI of our imaginary application. Before continuing, add a few items to your table so you have something to export later on.

Initial table data

Tutorial

Note

The steps below require you to alternate your credentials between the Source and Target accounts.

1. Enable point-in-time recovery and stream on source table

The export functionality of DynamoDB requires you to enable point-in-time recovery. You also need to enable a DynamoDB stream to stream any changes made to the table after you trigger the export. To enable both, add the following lines in lib/app-stack.ts:

this.table = new dynamodb.Table(this, 'Table', {
  tableName: 'MyTable',
  partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
  billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
  pointInTimeRecovery: true,
  stream: dynamodb.StreamViewType.NEW_IMAGE,
});

// Outputs
new cdk.CfnOutput(this, 'TableStreamArn', {
  value: this.table.tableStreamArn || '',
});

Deploy the updated stack to the Source account. You will need the ARN of the stream later on.
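If you want to verify that both settings took effect before moving on, you can inspect the table from the CLI. The source profile name here is an assumption from this guide's setup:

Terminal window
$ aws dynamodb describe-continuous-backups --table-name MyTable --profile source
$ aws dynamodb describe-table --table-name MyTable \
    --query 'Table.[StreamSpecification,LatestStreamArn]' --profile source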

2. Deploy S3 bucket and IAM role in target account

You will now create the required resources in the Target account. You will need an S3 bucket to which you will export the source table. You will also need an IAM role, with permissions to write items to the imported DynamoDB table. You will assume this role in the Lambda function in the Source account.

Create the file lib/target-stack.ts and add the following:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as s3 from 'aws-cdk-lib/aws-s3';

interface TargetStackProps extends cdk.StackProps {
  sourceAccount: string;
}

export class TargetStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: TargetStackProps) {
    super(scope, id, props);

    // Create Bucket that will hold exported data from Source DynamoDB
    const migrationBucket = new s3.Bucket(this, 'MigrationBucket', {});

    // Allow source account to list bucket
    migrationBucket.addToResourcePolicy(
      new iam.PolicyStatement({
        principals: [new iam.AccountPrincipal(props?.sourceAccount)],
        actions: ['s3:ListBucket'],
        resources: [migrationBucket.bucketArn],
      }),
    );

    // Allow source account to write to bucket
    migrationBucket.addToResourcePolicy(
      new iam.PolicyStatement({
        principals: [new iam.AccountPrincipal(props?.sourceAccount)],
        actions: ['s3:AbortMultipartUpload', 's3:PutObject', 's3:PutObjectAcl'],
        resources: [migrationBucket.arnForObjects('*')],
      }),
    );

    // Role for cross-account access to new DynamoDB table
    const role = new iam.Role(this, 'CrossAccountDynamoDBRole', {
      assumedBy: new iam.AccountPrincipal(props?.sourceAccount),
      managedPolicies: [
        iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonDynamoDBFullAccess'),
      ],
    });

    // Outputs
    new cdk.CfnOutput(this, 'MigrationBucketOutput', {
      value: migrationBucket.bucketArn,
    });

    new cdk.CfnOutput(this, 'CrossAccountDynamoDBRoleOutput', {
      value: role.roleArn,
    });
  }
}

This creates an S3 bucket with the necessary permissions for DynamoDB to export data to it. You can read more about the required permissions in the AWS documentation. It also creates an IAM role and attaches the managed policy AmazonDynamoDBFullAccess. The role’s trust policy allows the Source account to assume it.

In bin/app.ts, add the following:

#!/usr/bin/env node
import * as cdk from 'aws-cdk-lib';
import { AppStack } from '../lib/app-stack';
import { TargetStack } from '../lib/target-stack';

// Change these
const targetAccount = '111111111111';
const sourceAccount = '222222222222';

const app = new cdk.App();
new AppStack(app, 'AppStack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
});
new TargetStack(app, 'TargetStack', {
  env: {
    account: targetAccount,
    region: process.env.CDK_DEFAULT_REGION,
  },
  sourceAccount,
});

Deploy the stack to the Target account by running cdk deploy TargetStack.

3. Deploy (disabled) Lambda function for streaming data in source account

You will now create a Lambda function in the Source account. This function will subscribe to the DynamoDB stream of the source table. Exporting a table will capture the state at the time of the export. But you might still have live traffic coming into the old application that you will need to migrate as well. The Lambda function lets you replicate all those delta changes to the target table.

You will first deploy the Lambda function in a disabled state. While the export and import are in progress, the stream will store all changes made to the source table. When the export/import is complete, you will then enable it to replicate the changes in the target table.

Note

DynamoDB streams can only store records for up to 24 hours. Thus, you must be able to export all data, import it to a new table, and enable the stream within 24 hours. Otherwise, you will lose data.

Let’s start with the Lambda function. You’ll need a few development dependencies.

Terminal window
$ yarn add --dev \
    @types/aws-lambda \
    @aws-sdk/client-sts \
    @aws-sdk/credential-providers \
    @aws-sdk/client-dynamodb

Create a new folder functions/ under lib/ and create the file lib/functions/stream-handler.ts with the following code:

import { DynamoDBStreamEvent } from 'aws-lambda';
import { AssumeRoleCommandInput } from '@aws-sdk/client-sts';
import {
  DynamoDBClient,
  DeleteItemCommand,
  DeleteItemCommandInput,
  PutItemCommand,
  PutItemCommandInput,
  AttributeValue,
} from '@aws-sdk/client-dynamodb';
import { fromTemporaryCredentials } from '@aws-sdk/credential-providers';

const tableName = process.env.TARGET_TABLE;

// Use STS to assume the role in target account
const params: AssumeRoleCommandInput = {
  RoleArn: process.env.TARGET_ROLE,
  RoleSessionName: 'Cross-Acct-DynamoDB',
};

const client = new DynamoDBClient({
  credentials: fromTemporaryCredentials({ params }),
});

export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    if (record.eventName === 'REMOVE') {
      // Item was deleted, remove it from target table
      const input: DeleteItemCommandInput = {
        TableName: tableName,
        Key: record.dynamodb?.Keys as Record<string, AttributeValue>,
      };
      await client.send(new DeleteItemCommand(input));
    } else {
      // Item was created or modified, write it to the target table
      const input: PutItemCommandInput = {
        TableName: tableName,
        Item: record.dynamodb?.NewImage as Record<string, AttributeValue>,
      };
      await client.send(new PutItemCommand(input));
    }
  }
};

This is a simple Lambda function that will subscribe to events from a DynamoDB stream. It takes two environment variables, TARGET_TABLE and TARGET_ROLE. It uses AWS STS to generate temporary credentials for the role in the target account. It then loops over all incoming records and replicates the changes in the target table. I have left out stuff like error handling and logging for clarity.
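If you want a bit more robustness, here is a minimal sketch of what per-record logging could look like. It is not part of the original setup; rethrowing keeps the event source's default retry behavior, so a failed batch is retried instead of being silently dropped:

import { DynamoDBStreamEvent } from 'aws-lambda';

// Sketch only: same PutItem/DeleteItem loop as above, wrapped with logging.
export const handler = async (event: DynamoDBStreamEvent) => {
  for (const record of event.Records) {
    try {
      // ...the same PutItem/DeleteItem logic as in the handler above...
    } catch (err) {
      // Log enough context to find the offending record, then rethrow so
      // the batch is retried rather than lost.
      console.error('Failed to replicate record', {
        eventName: record.eventName,
        keys: record.dynamodb?.Keys,
      });
      throw err;
    }
  }
};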

Add a new stack file lib/source-stack.ts with the following code:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda-nodejs';
import * as dynamo from 'aws-cdk-lib/aws-dynamodb';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Runtime, StartingPosition } from 'aws-cdk-lib/aws-lambda';
import { DynamoEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

interface SourceStackProps extends cdk.StackProps {
  targetRole: string;
  tableName: string;
  sourceStreamArn: string;
}

export class SourceStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: SourceStackProps) {
    super(scope, id, props);

    const fn = new lambda.NodejsFunction(this, 'StreamHandler', {
      runtime: Runtime.NODEJS_16_X,
      entry: 'lib/functions/stream-handler.ts',
      memorySize: 1024,
      depsLockFilePath: 'yarn.lock',
      handler: 'handler',
      environment: {
        TARGET_ROLE: props.targetRole,
        TARGET_TABLE: props.tableName,
      },
    });

    // Let the Lambda function assume the role in the target account
    fn.addToRolePolicy(
      new iam.PolicyStatement({
        actions: ['sts:AssumeRole'],
        resources: [props.targetRole],
      }),
    );

    // Adding a stream as an event source requires a Table object.
    const sourceTable = dynamo.Table.fromTableAttributes(this, 'SourceTable', {
      tableName: props.tableName,
      tableStreamArn: props.sourceStreamArn,
    });

    fn.addEventSource(
      new DynamoEventSource(sourceTable, {
        startingPosition: StartingPosition.TRIM_HORIZON,
        batchSize: 100,
        enabled: false,
      }),
    );
  }
}

Add the new stack to bin/app.ts. It can be tricky to cross-reference values between different accounts, and since this setup is temporary anyway, we hardcode the variables targetRole, tableName, and sourceStreamArn. You can find targetRole and sourceStreamArn in the outputs of the TargetStack and AppStack respectively.
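If you would rather fetch the outputs from the CLI than from the console, something like this works (stack names are from this guide; the profile names are assumptions):

Terminal window
$ aws cloudformation describe-stacks --stack-name TargetStack \
    --query 'Stacks[0].Outputs' --profile target
$ aws cloudformation describe-stacks --stack-name AppStack \
    --query 'Stacks[0].Outputs' --profile source

With those values in hand, update bin/app.ts: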

...
import { SourceStack } from '../lib/source-stack';

const targetAccount = '111111111111';
const sourceAccount = '222222222222';
const targetRole = 'ROLE_ARN_FROM_TARGET_STACK_OUTPUT';
const tableName = 'MyTable';
const sourceStreamArn = 'YOUR_SOURCE_TABLE_STREAM_ARN';

new AppStack(...);
new TargetStack(...);
new SourceStack(app, 'SourceStack', {
  env: {
    account: sourceAccount,
    region: process.env.CDK_DEFAULT_REGION,
  },
  targetRole,
  tableName,
  sourceStreamArn,
});

Deploy the stack to the Source account with cdk deploy SourceStack.

4. Export source table to S3 bucket in target account

Log in to the AWS console in the Source account. Navigate to the DynamoDB console and click on Exports to S3 on the left-hand side.

DynamoDB Export to S3 console

Click the Export to S3 button to bring up the export configuration. Enter your bucket name and an optional prefix. Using prefixes can be useful if you want to migrate multiple tables. Also, make sure to check A different AWS account and enter your Target account number. Finally, click the Export button.

Configure the destination bucket for your S3 export
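If you prefer the CLI over the console, the same export can be started with export-table-to-point-in-time. The values below are placeholders based on this guide's example; substitute your own table ARN, bucket, region, and account IDs:

Terminal window
$ aws dynamodb export-table-to-point-in-time \
    --table-arn arn:aws:dynamodb:eu-west-1:222222222222:table/MyTable \
    --s3-bucket your-bucket-name \
    --s3-bucket-owner 111111111111 \
    --s3-prefix migration1 \
    --export-format DYNAMODB_JSON \
    --profile source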

Your export has now started, and it can take a while to finish. You can check on the progress in the console.

Export in progress

When the export status changes to completed, log in to the Target account and head over to the S3 console. Open your migration bucket and you should find your exported data under the prefix migration1/AWSDynamoDB/some-auto-generated-id/.

Export completed
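You can also list the export from the command line to grab the auto-generated id, which you will need for the import later (bucket name, prefix, and profile are the ones assumed in this example):

Terminal window
$ aws s3 ls s3://your-bucket-name/migration1/AWSDynamoDB/ --profile target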

5. Update and delete some items in the source table

Now that your export is complete, you will want to simulate some more traffic. Create, change, and delete some items in your source table. After the migration, you want these delta updates replicated in the target table; this is what makes the migration zero-downtime with no data loss.
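For example, from the CLI in the Source account (the item keys below are made up; pk matches the table's partition key):

Terminal window
$ aws dynamodb put-item --table-name MyTable \
    --item '{"pk": {"S": "new-item"}}' --profile source
$ aws dynamodb delete-item --table-name MyTable \
    --key '{"pk": {"S": "some-existing-item"}}' --profile source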

6. Deploy application stack with import configuration in target account

Here comes the magic part. AWS recently released support for creating a table from an S3 export without requiring any glue services. But all guides direct you to the console to do this, and doing it through the console means you have to resort to importing resources into CloudFormation if you want to manage your databases with, for example, CDK.

While searching the docs, I stumbled upon a yet-to-be-announced feature in CloudFormation: a new property on the AWS::DynamoDB::Table resource called ImportSourceSpecification. The documentation says that the property is Not currently supported by AWS CloudFormation. I've tested it, and it works. Always be careful, though, when using undocumented and unreleased features.

You will now update your AppStack to use this new property. Since the feature is not announced yet, you need to use raw overrides in CDK. You can read more about using overrides and other escape hatches here.

During the migration, your application will still be live in the Source account. Since you might need to release an urgent update there, you need to keep the stack in a deployable state for both accounts. You will thus only add the ImportSourceSpecification if you are deploying to the Target account.

Update the AppStack to accept the following properties:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

interface AppStackProps extends cdk.StackProps {
  targetAccount: string;
  importS3Bucket: string;
  importS3Prefix: string;
}

export class AppStack extends cdk.Stack {
  public readonly table: dynamodb.Table;

  constructor(scope: Construct, id: string, props: AppStackProps) {
    super(scope, id, props);

    this.table = new dynamodb.Table(this, 'Table', {
      tableName: 'MyTable',
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      pointInTimeRecovery: true,
      stream: dynamodb.StreamViewType.NEW_IMAGE,
    });
  }
}

Update bin/app.ts as well. The importS3Prefix should point to the data/ subfolder in the migration folder. In the example above, importS3Prefix should be migration1/AWSDynamoDB/some-auto-generated-id/data/.

const targetRole = '...';
const importS3Bucket = 'your-bucket-name'; // without s3://
const importS3Prefix = 'prefix/AWSDynamoDB/some-auto-generated-id/data/';

const app = new cdk.App();
new AppStack(app, 'AppStack', {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION,
  },
  targetAccount,
  importS3Bucket,
  importS3Prefix,
});

Now, conditionally add the ImportSourceSpecification configuration in lib/app-stack.ts:

export class AppStack extends cdk.Stack {
  public readonly table: dynamodb.Table;

  constructor(scope: Construct, id: string, props: AppStackProps) {
    super(scope, id, props);

    this.table = new dynamodb.Table(...);

    // Only add ImportSourceSpecification if deploying to target account
    if (props.targetAccount && props.targetAccount === props.env?.account) {
      const cfnTable = this.table.node.defaultChild as dynamodb.CfnTable;

      // ImportSourceSpecification is not yet supported on Table or CfnTable
      cfnTable.addPropertyOverride(
        'ImportSourceSpecification.S3BucketSource.S3Bucket',
        props.importS3Bucket,
      );
      cfnTable.addPropertyOverride(
        'ImportSourceSpecification.S3BucketSource.S3KeyPrefix',
        props.importS3Prefix,
      );
      cfnTable.addPropertyOverride(
        'ImportSourceSpecification.InputCompressionType',
        'GZIP',
      );
      cfnTable.addPropertyOverride(
        'ImportSourceSpecification.InputFormat',
        'DYNAMODB_JSON',
      );
    }
  }
}

The default behavior when exporting is to use the DynamoDB JSON format and GZIP compression, so we use those defaults when importing as well.

Deploy your stack in the Target account by running cdk deploy AppStack.

Open the DynamoDB console in the Target account and click on Imports from S3 on the left-hand side. You should see the import process starting.

Import in progress
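You can also check the import status from the CLI (the profile name is an assumption):

Terminal window
$ aws dynamodb list-imports --profile target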

When the import is complete, your table should be populated with the same data as the source table was at the time of the export.

Initial table data

7. Enable the streaming Lambda function in source account

You have now completed the initial migration. Now it’s time to replicate all changes that happened in step 5, after your initial export.

Flip the enabled switch to true in your Lambda function in lib/source-stack.ts:

fn.addEventSource(
  new DynamoEventSource(sourceTable, {
    startingPosition: StartingPosition.TRIM_HORIZON,
    batchSize: 100,
    enabled: true,
  }),
);

Deploy the updated stack in the Source account with cdk deploy SourceStack.

After some time, all changes that happened after you exported the source table should be replicated to the target table. Your databases should now fully match, and you should be able to direct all traffic to your migrated application.
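As a quick sanity check on a small table like this one, you could compare item counts in both accounts (a scan is fine here, but it is expensive on large tables):

Terminal window
$ aws dynamodb scan --table-name MyTable --select COUNT --profile source
$ aws dynamodb scan --table-name MyTable --select COUNT --profile target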

Full table data

Conclusion

There you go. You have successfully migrated a DynamoDB table to another account, with zero downtime and no data loss.

The new Import from S3 functionality makes it easy to migrate tables to another account. Before this, migrating a table required much more effort: you needed extra glue services just for the initial copy. Now, the native import functionality together with the yet-to-be-announced CloudFormation support makes it simple to migrate tables and reuse your existing stack templates in a new account. Not having to import resources into a stack is also a huge win.


About the author

I'm Elias Brange, a Cloud Consultant and AWS Community Builder in the Serverless category. I'm on a mission to drive Serverless adoption and help others on their Serverless AWS journey.
