Import an S3 bucket or DynamoDB table
Import an existing S3 bucket or DynamoDB table into your Amplify project. Get started by running the `amplify import storage` command to search for and import an S3 or DynamoDB resource from your account.
```bash
amplify import storage
```
Make sure to run `amplify push` to complete the import process and deploy this backend change to the cloud.
The `amplify import storage` command will:
- automatically populate your Amplify Library configuration files (aws-exports.js, amplifyconfiguration.json) with your chosen S3 bucket information (see the excerpt after this list)
- provide your designated S3 bucket or DynamoDB table as a storage mechanism for all storage-dependent categories (API, Function, Predictions, and more)
- enable Lambda functions to access the chosen S3 or DynamoDB resource if you permit it
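For illustration, after a successful import the generated aws-exports.js typically contains storage entries along these lines; the bucket name and region below are placeholders, not values from your account:

```js
// aws-exports.js (excerpt) -- placeholder values for illustration
const awsmobile = {
  // ...other Amplify configuration...
  aws_user_files_s3_bucket: 'my-imported-bucket',
  aws_user_files_s3_bucket_region: 'us-west-2',
};

export default awsmobile;
```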
This feature is particularly useful if you're trying to:
- enable Amplify categories (such as API and Function) to access your existing storage resources;
- incrementally adopt Amplify for your application stack;
- independently manage S3 and DynamoDB resources while working with Amplify.
Note: Amplify does not manage the lifecycle of an imported resource.
Import an existing S3 bucket
Select the "S3 bucket - Content (Images, audio, video, etc.)" option when you've run amplify import storage
.
Run `amplify push` to complete the import procedure.
Amplify projects are limited to exactly one S3 bucket.
Connect to an imported S3 bucket with Amplify Libraries
By default, Amplify Libraries assume that S3 buckets are configured with the following access patterns:
- `public/` - Accessible by all users of your app
- `protected/{user_identity_id}/` - Readable by all users, but writable only by the creating user
- `private/{user_identity_id}/` - Only accessible by the individual user
You can either configure your IAM role to use the Amplify-recommended policies or use `path` to specify your own path.
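To make the access levels concrete, here is a minimal sketch using the Amplify JS Storage category (the v5-style API implied by aws-exports.js); the file names and contents are hypothetical:

```js
import { Storage } from 'aws-amplify';

async function demoAccessLevels() {
  // Uploads to public/welcome.png -- readable and writable by all app users
  await Storage.put('welcome.png', 'hello', { level: 'public' });

  // Uploads to protected/{user_identity_id}/avatar.png -- readable by all
  // users, writable only by the signed-in owner
  await Storage.put('avatar.png', 'avatar bytes', { level: 'protected' });

  // Returns a pre-signed URL for private/{user_identity_id}/notes.txt --
  // accessible only by the owner
  return Storage.get('notes.txt', { level: 'private' });
}
```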
We highly recommend reviewing your S3 bucket's CORS settings; see the recommendation guide here.
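As a reference point, a CORS configuration along the lines that guide recommends looks roughly like the following; confirm the exact settings against the linked guide before applying them:

```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "HEAD", "PUT", "POST", "DELETE"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": [
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2",
      "ETag"
    ],
    "MaxAgeSeconds": 3000
  }
]
```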
Configuring IAM role to use Amplify-recommended policies
If you're using an imported S3 bucket with an imported Cognito resource, then you'll need to update the policies of your Cognito Identity Pool's authenticated and unauthenticated roles. Create new managed policies (not inline policies) for these roles with the following statements; a CLI sketch for attaching them follows the policy statements below.
Make sure to replace `{YOUR_S3_BUCKET_NAME}` with your S3 bucket's name.
Unauthenticated role policies
- IAM policy statement for `public/`:

```json
{
  "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/public/*"],
  "Effect": "Allow"
}
```
- IAM policy statement for read access to `public/`, `protected/`, and `private/`:

```json
{
  "Action": ["s3:GetObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/*"],
  "Effect": "Allow"
},
{
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "public/",
        "public/*",
        "protected/",
        "protected/*"
      ]
    }
  },
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}"],
  "Effect": "Allow"
}
```
Authenticated role policies
- IAM policy statement for `public/`:

```json
{
  "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/public/*"],
  "Effect": "Allow"
}
```
- IAM policy statement for `protected/`:

```json
{
  "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/${cognito-identity.amazonaws.com:sub}/*"],
  "Effect": "Allow"
}
```
- IAM policy statement for `private/`:

```json
{
  "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/private/${cognito-identity.amazonaws.com:sub}/*"],
  "Effect": "Allow"
}
```
- IAM policy statement for read access to `public/`, `protected/`, and `private/`:

```json
{
  "Action": ["s3:GetObject"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}/protected/*"],
  "Effect": "Allow"
},
{
  "Condition": {
    "StringLike": {
      "s3:prefix": [
        "public/",
        "public/*",
        "protected/",
        "protected/*",
        "private/${cognito-identity.amazonaws.com:sub}/",
        "private/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  },
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::{YOUR_S3_BUCKET_NAME}"],
  "Effect": "Allow"
}
```
Import an existing DynamoDB table
Select the "DynamoDB table - NoSQL Database" option when you've run amplify import storage
. In order to successfully import your DynamoDB table, your DynamoDB table needs to be located within the same region as your Amplify project.
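If you're unsure of a table's region, you can verify it with the AWS CLI before importing; the table name and region below are placeholders:

```bash
# Confirm the table exists in your Amplify project's region
aws dynamodb describe-table \
  --table-name <your-table-name> \
  --region us-east-1 \
  --query 'Table.TableArn'
```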
Run `amplify push` to complete the import procedure.
Amplify projects can contain multiple DynamoDB tables.
Multi-environment support
When you create a new environment through `amplify env add`, Amplify CLI will assume by default that you're managing your app's storage resources outside of the Amplify project. You'll be asked to either import a different S3 bucket or different DynamoDB tables, or to keep the same imported storage resources.
If you want to have Amplify manage your storage resources in a new environment, run `amplify remove storage` to unlink the imported storage resources and `amplify add storage` to create new Amplify-managed S3 buckets and DynamoDB tables in the new environment.
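Put together, switching a new environment over to Amplify-managed storage looks like this sketch:

```bash
amplify env add          # prompts you to re-import or keep the imported resources
amplify remove storage   # unlink the imported S3 bucket / DynamoDB tables
amplify add storage      # create new Amplify-managed storage resources
amplify push             # deploy the changes to the new environment
```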
Unlink an existing S3 bucket or DynamoDB table
To unlink your existing storage resource, run `amplify remove storage`. This will only unlink the S3 bucket or DynamoDB table referenced by the Amplify project; it will not delete the S3 bucket or DynamoDB table itself.
Run `amplify push` to complete the unlink procedure.
Configure environment variables for Amplify Hosting builds
To successfully build your application with Amplify Hosting, add the following environment variables to your build environment:
| Environment variable | Description | Imported resource | Required |
| --- | --- | --- | --- |
| `AMPLIFY_STORAGE_BUCKET_NAME` | The name of the S3 bucket being imported for storage | S3 bucket | Yes |
| `AMPLIFY_STORAGE_REGION` | The AWS region in which the S3 bucket or DynamoDB table is located (for example: us-east-1, us-west-2) | S3 bucket or DynamoDB table | Yes |
| `AMPLIFY_STORAGE_TABLES` | The name of the storage resource and DynamoDB table being imported for storage | DynamoDB table | Yes |
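If you prefer to set these from the command line rather than the Amplify Hosting console, one option is the AWS CLI; the app ID and values below are hypothetical placeholders:

```bash
# Set the storage-related build environment variables (placeholder values)
aws amplify update-app \
  --app-id <your-amplify-app-id> \
  --environment-variables AMPLIFY_STORAGE_BUCKET_NAME=my-imported-bucket,AMPLIFY_STORAGE_REGION=us-west-2
```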