Import an S3 bucket or DynamoDB table

Import an existing S3 bucket or DynamoDB table into your Amplify project. Get started by running the amplify import storage command to search for and import an S3 or DynamoDB resource from your account.

amplify import storage

Make sure to run amplify push to complete the import process and deploy this backend change to the cloud.

The amplify import storage command will:

  • automatically populate your Amplify Library configuration files (aws-exports.js, amplifyconfiguration.json) with your chosen S3 bucket information (see the sketch after this list)
  • provide your designated S3 bucket or DynamoDB table as a storage mechanism for all storage-dependent categories (API, Function, Predictions, and more)
  • enable Lambda functions to access the chosen S3 or DynamoDB resource if you permit it
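
For example, after the import is pushed, the storage fields in your generated configuration resemble the following sketch. This assumes a JavaScript project (the key names used by aws-exports.js and amplifyconfiguration.json); the bucket name and region are placeholders, and other configuration fields are omitted:

{
  "aws_user_files_s3_bucket": "{YOUR_S3_BUCKET_NAME}",
  "aws_user_files_s3_bucket_region": "us-east-1"
}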

This feature is particularly useful if you're trying to:

  • enable Amplify categories (such as API and Function) to access your existing storage resources;
  • incrementally adopt Amplify for your application stack;
  • independently manage S3 and DynamoDB resources while working with Amplify.

Note: Amplify does not manage the lifecycle of an imported resource.

Import an existing S3 bucket

Select the "S3 bucket - Content (Images, audio, video, etc.)" option when you've run amplify import storage.

Run amplify push to complete the import procedure.

Amplify projects are limited to exactly one S3 bucket.

Connect to an imported S3 bucket with Amplify Libraries

By default, the Amplify Libraries assume that your S3 bucket is configured with the following access patterns:

  • public/ - Accessible by all users of your app
  • protected/{user_identity_id}/ - Readable by all users, but writable only by the creating user
  • private/{user_identity_id}/ - Accessible only by the individual user

You can either configure your IAM role to use the Amplify-recommended policies, or override the default storage path behavior in your Amplify Libraries configuration.

It is highly recommended that you review your S3 bucket's CORS settings against the recommendations in the Amplify documentation.
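
As a starting point, a permissive CORS configuration along these lines is commonly recommended (this is a sketch, not a drop-in policy; tighten AllowedOrigins for production):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2",
      "ETag"
    ],
    "MaxAgeSeconds": 3000
  }
]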

If you're using an imported S3 bucket with an imported Cognito resource, you'll need to update the policies of your Cognito Identity Pool's authenticated and unauthenticated roles. Create new managed policies (not inline policies) for these roles with the following statements:

Make sure to replace {YOUR_S3_BUCKET_NAME} with your S3 bucket's name.
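
Each statement below belongs inside a standard IAM policy document, so when you create a managed policy, wrap the statements in the usual envelope. A minimal sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    // ...one or more of the statements listed below...
  ]
}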

Unauthenticated role policies

  • IAM policy statement for public/:
{
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/public/*"
  ],
  "Effect": "Allow"
}
  • IAM policy statement for read access to public/ and protected/:
{
  "Action": [
    "s3:GetObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/*"
  ],
  "Effect": "Allow"
},
{
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "public/",
        "public/*",
        "protected/",
        "protected/*"
      ]
    }
  },
  "Action": [
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}"
  ],
  "Effect": "Allow"
}

Authenticated role policies

  • IAM policy statement for public/:
{
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/public/*"
  ],
  "Effect": "Allow"
}
  • IAM policy statement for protected/:
{
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/${cognito-identity.amazonaws.com:sub}/*"
  ],
  "Effect": "Allow"
}
  • IAM policy statement for private/:
{
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/private/${cognito-identity.amazonaws.com:sub}/*"
  ],
  "Effect": "Allow"
}
  • IAM policy statement for read access to public/, protected/, and private/:
{
  "Action": [
    "s3:GetObject"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/*"
  ],
  "Effect": "Allow"
},
{
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "public/",
        "public/*",
        "protected/",
        "protected/*",
        "private/${cognito-identity.amazonaws.com:sub}/",
        "private/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  },
  "Action": [
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::{YOUR_S3_BUCKET_NAME}"
  ],
  "Effect": "Allow"
}
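
If you prefer the AWS CLI to the console, creating and attaching a managed policy can look like the following sketch. The policy name, role name, file name, and account ID are placeholders; your Identity Pool's role names will differ:

# Create a managed policy from a local file containing the statements above
aws iam create-policy \
  --policy-name AmplifyImportedBucketAuthPolicy \
  --policy-document file://authenticated-role-policy.json

# Attach the policy to the Identity Pool's authenticated role
aws iam attach-role-policy \
  --role-name {YOUR_AUTH_ROLE_NAME} \
  --policy-arn arn:aws:iam::{YOUR_ACCOUNT_ID}:policy/AmplifyImportedBucketAuthPolicy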

Import an existing DynamoDB table

Select the "DynamoDB table - NoSQL Database" option when you've run amplify import storage. In order to successfully import your DynamoDB table, your DynamoDB table needs to be located within the same region as your Amplify project.

Run amplify push to complete the import procedure.

Amplify projects can contain multiple DynamoDB tables.

Multi-environment support

When you create a new environment through amplify env add, Amplify CLI assumes by default that you're managing your app's storage resources outside of the Amplify project. You'll be asked either to import a different S3 bucket or set of DynamoDB tables, or to maintain the same imported storage resources.

If you want to have Amplify manage your storage resources in a new environment, run amplify remove storage to unlink the imported storage resources and amplify add storage to create new Amplify-managed S3 buckets and DynamoDB tables in the new environment.

To unlink an existing storage resource, run amplify remove storage. This only unlinks the referenced S3 bucket or DynamoDB table from the Amplify project; it does not delete the S3 bucket or DynamoDB table itself.

Run amplify push to complete the unlink procedure.
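
Putting it together, a typical sequence for moving a new environment from imported storage to Amplify-managed storage looks like this (the environment name is a placeholder):

amplify env add prod      # choose how to handle the imported resources when prompted
amplify remove storage    # unlink the imported S3 bucket or DynamoDB table
amplify add storage       # create new Amplify-managed storage resources
amplify push              # deploy the changes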

Configure environment variables for Amplify Hosting builds

In order to successfully build your application with Amplify Hosting, add the following environment variables to your build environment:

| Environment Variable | Description | Imported Resource | Required |
| --- | --- | --- | --- |
| AMPLIFY_STORAGE_BUCKET_NAME | The name of the S3 bucket being imported for storage | S3 bucket | Yes |
| AMPLIFY_STORAGE_REGION | The AWS region in which the S3 bucket or DynamoDB table is located (for example, us-east-1 or us-west-2) | S3 bucket or DynamoDB table | Yes |
| AMPLIFY_STORAGE_TABLES | A JSON map of storage resource names to the DynamoDB table names being imported | DynamoDB table | Yes |

The value of the AMPLIFY_STORAGE_TABLES environment variable needs to be in JSON format, such as:

{
  "STORAGE_RESOURCE_NAME_1": "DDB_TABLE_NAME_1",
  "STORAGE_RESOURCE_NAME_2": "DDB_TABLE_NAME_2" // Include one entry per imported DynamoDB table
}

The values for the STORAGE_RESOURCE_NAME and DDB_TABLE_NAME fields can be retrieved from the amplify/team-provider-info.json file.
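
The exact layout of amplify/team-provider-info.json varies by project, but the storage entries you're looking for typically resemble the following sketch (the environment, resource, and table names here are placeholders):

{
  "dev": {
    "categories": {
      "storage": {
        "STORAGE_RESOURCE_NAME_1": {
          "tableName": "DDB_TABLE_NAME_1",
          "region": "us-east-1"
        }
      }
    }
  }
}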