Page updated Mar 28, 2024

Storage

Under active development: The Storage experience for Amplify Gen 2 is under active development. The experience may change between versions of @aws-amplify/backend. Try it out and provide feedback at https://github.com/aws-amplify/amplify-backend/issues/new/choose

Adding storage to your Amplify backend enables uploading and downloading files. To get started using storage, create a file amplify/storage/resource.ts. Paste the following content into the file.

amplify/storage/resource.ts
import { defineStorage } from '@aws-amplify/backend';

export const storage = defineStorage({
  name: 'myProjectFiles'
});

Then include storage in your backend definition.

amplify/backend.ts
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { storage } from './storage/resource';

defineBackend({
  auth,
  storage
});

Now when you run npx amplify sandbox or deploy your app on Amplify, it will configure AWS resources for file upload and download.

Before files can be accessed by your application, you must configure storage access rules.

To learn how to use storage in your frontend, see docs on uploading files or downloading files.
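As a preview of the frontend side, the sketch below shows one way file upload and download might look with the Amplify JavaScript client (aws-amplify v6). The key path, file handling, and surrounding function are illustrative assumptions; this requires a client already configured with your deployed backend's outputs, so treat it as a sketch rather than a runnable sample.

```typescript
import { uploadData, downloadData } from 'aws-amplify/storage';

// Assumes Amplify.configure(...) has already been called on the client
// with the outputs generated for this backend.
async function uploadAndFetch(file: File) {
  // Upload into a path that your storage access rules permit (e.g. foo/*)
  await uploadData({ key: 'foo/photo.png', data: file }).result;

  // Download it back; the body can be read as a blob, text, or JSON
  const { body } = await downloadData({ key: 'foo/photo.png' }).result;
  console.log('downloaded bytes:', (await body.blob()).size);
}
```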

Storage access

By default, no users or other project resources have access to any files in storage. Access must be explicitly granted within defineStorage using the access callback.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'some/path/*': [
      // access rules that apply to all files within "some/path/*" go here
    ],
    'another/path/*': [
      // access rules that apply to all files within "another/path/*" go here
    ]
  })
});

The access callback returns an object where each key in the object is a file prefix and each value in the object is a list of access rules that apply to that prefix. The following sections enumerate the types of access rules that can be applied.

Authenticated user access

To grant all authenticated (signed-in) users of your application read access to files under the foo/* prefix, use the following access configuration.

Note that your backend must include defineAuth in order to use this access rule.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/*': [allow.authenticated.to(['read'])] // additional actions such as "write" and "delete" can be specified depending on your use case
  })
});

Guest user access

To grant all guest (not signed-in) users of your application read access to files under the foo/* prefix, use the following access configuration.

Note that your backend must include defineAuth in order to use this access rule.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/*': [allow.guest.to(['read'])] // additional actions such as "write" and "delete" can be specified depending on your use case
  })
});

User group access

If you have configured user groups in defineAuth, you can scope storage access to specific groups. Suppose you have a defineAuth config with admin and auditor groups.

amplify/auth/resource.ts
import { defineAuth } from '@aws-amplify/backend';

export const auth = defineAuth({
  loginWith: {
    email: true
  },
  groups: ['auditor', 'admin']
});

With the following access definition, auditors have read-only access to foo/* while admins have full access.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/*': [
      allow.group('auditor').to(['read']),
      allow.group('admin').to(['read', 'write', 'delete'])
    ]
  })
});

Owner-based access

Access to files under a given prefix can be scoped down to individual authenticated users. To do this, a placeholder token in the storage path is substituted with the user's identity ID when uploading or downloading files. The access rule then only allows a user to upload or download files under their own identity string.

Note that your backend must include defineAuth in order to use this access rule.

The following policy would allow authenticated users full access to files with a prefix that matches their identity id.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/{entity_id}/*': [
      // {entity_id} is the token that is replaced with the user identity id
      allow.entity('identity').to(['read', 'write', 'delete'])
    ]
  })
});

A user with identity ID "123" would be able to perform read/write/delete operations on files within foo/123/* and would not be able to perform actions on files with any other prefix. Likewise, a user with identity ID "ABC" would be able to perform read/write/delete operations only on files within foo/ABC/*. In this way, each user is granted a "private storage location" that is not accessible to any other user.

It may be desirable for a file owner to be able to write and delete files in their private location while allowing anyone to read from that location. For example, profile pictures should be readable by anyone, but only the owner can modify them. This use case can be configured with the following definition.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/{entity_id}/*': [
      allow.entity('identity').to(['read', 'write', 'delete']),
      allow.guest.to(['read']),
      allow.authenticated.to(['read'])
    ]
  })
});

When a non-id-based rule is applied to a path with the {entity_id} token, the token is replaced with a wildcard (*), meaning the rule applies to files uploaded by any user. In the above policy, write and delete are scoped to just the owner, but read is allowed for guest and authenticated users for any file within foo/*/*.
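The substitution rule above can be sketched as a small helper: owner rules receive the caller's identity ID, while non-id-based rules receive a wildcard. The function name is illustrative, not an Amplify API.

```typescript
// Hypothetical helper illustrating how the {entity_id} token expands.
// For the owner's own rule it becomes the caller's identity ID; for
// guest/authenticated rules it becomes a wildcard.
function expandPath(pattern: string, identityId: string | null): string {
  return pattern.replace('{entity_id}', identityId ?? '*');
}

// Owner with identity ID "123": the rule covers foo/123/*
console.log(expandPath('foo/{entity_id}/*', '123')); // foo/123/*
// Non-id-based (e.g. guest read) rule: covers any owner's files, foo/*/*
console.log(expandPath('foo/{entity_id}/*', null)); // foo/*/*
```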

Grant function access

In addition to granting application users access to storage files, you may also want to grant a backend function access to storage files. This could be used to enable a use case like resizing images, or automatically deleting old files. The following configuration is used to define function access.

amplify/storage/resource.ts
import { defineStorage, defineFunction } from '@aws-amplify/backend';

const demoFunction = defineFunction({});

export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/*': [allow.resource(demoFunction).to(['read', 'write', 'delete'])]
  })
});

This would grant the function demoFunction the ability to read, write, and delete files within foo/*.

When a function is granted access to storage, it also receives an environment variable that contains the name of the S3 bucket configured by storage. This environment variable can be used in the function to make SDK calls to the storage bucket. The environment variable is named <storageName>_BUCKET_NAME. In the above example, it would be named myProjectFiles_BUCKET_NAME.
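The naming convention can be captured in a one-line helper, shown here with a comment sketching how the variable would typically feed an AWS SDK S3 client inside the function. The helper name is illustrative.

```typescript
// Sketch of how a function granted storage access locates its bucket,
// following the <storageName>_BUCKET_NAME convention described above.
function bucketEnvVarName(storageName: string): string {
  return `${storageName}_BUCKET_NAME`;
}

// Inside the function handler you would read the variable and pass the
// bucket name to an AWS SDK S3 client, e.g.:
//   const bucket = process.env[bucketEnvVarName('myProjectFiles')];
//   const s3 = new S3Client({}); // from @aws-sdk/client-s3
console.log(bucketEnvVarName('myProjectFiles')); // myProjectFiles_BUCKET_NAME
```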

Learn more about function resource access environment variables

Access definition limitations

There are some limitations on the types of prefixes that can be specified in the storage access definition.

  1. All paths start at the storage root. Paths cannot be defined relative to other paths.
  2. All paths are treated as prefixes. To make this explicit, all paths must end with /*.
  3. Only one level of nesting is allowed. For example, you can define access controls on foo/* and foo/bar/*, but not on foo/bar/baz/*, because that path is nested under two other prefixes.
  4. Wildcards cannot conflict with the {entity_id} token. For example, you cannot have both foo/* and foo/{entity_id}/* defined because the wildcard in the first path conflicts with the {entity_id} token in the second path.
  5. A path cannot be a prefix of another path that contains an {entity_id} token. For example, foo/* together with foo/bar/{entity_id}/* is not allowed.

Prefix behavior

When one path is a subpath of another, the permissions on the subpath always override the permissions from the parent path. Permissions are not "inherited" from a parent path. Consider the following access definition example.

export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'foo/*': [allow.authenticated.to(['read', 'write', 'delete'])],
    'foo/bar/*': [allow.guest.to(['read'])],
    'foo/baz/*': [allow.authenticated.to(['read'])],
    'other/*': [
      allow.guest.to(['read']),
      allow.authenticated.to(['read', 'write'])
    ]
  })
});

The access control matrix for this configuration is:

|                     | foo/*               | foo/bar/* | foo/baz/* | other/*     |
| ------------------- | ------------------- | --------- | --------- | ----------- |
| Authenticated users | read, write, delete | NONE      | read      | read, write |
| Guest users         | NONE                | read      | NONE      | read        |

Authenticated users can read, write, and delete everything under foo/* EXCEPT foo/bar/* and foo/baz/*. For those subpaths, the scoped-down access overrides the access granted on the parent foo/*.
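The "most specific prefix wins" behavior can be modeled with a short resolver: collect every pattern that matches a file path, then apply only the longest one. This is an illustrative model of the rule described above, not an Amplify API.

```typescript
// Rules map a path pattern (ending in /*) to the actions each principal
// gets there, mirroring the access definition above.
type Rules = Record<string, Record<string, string[]>>;

const rules: Rules = {
  'foo/*': { authenticated: ['read', 'write', 'delete'] },
  'foo/bar/*': { guest: ['read'] },
  'foo/baz/*': { authenticated: ['read'] },
  'other/*': { guest: ['read'], authenticated: ['read', 'write'] }
};

function actionsFor(principal: string, filePath: string): string[] {
  // Keep only the longest (most specific) matching pattern; its rules
  // fully replace the parent's rather than being merged with them.
  const matches = Object.keys(rules)
    .filter((p) => filePath.startsWith(p.slice(0, -1))) // drop trailing '*'
    .sort((a, b) => b.length - a.length);
  if (matches.length === 0) return [];
  return rules[matches[0]][principal] ?? [];
}

console.log(actionsFor('authenticated', 'foo/file.txt')); // [ 'read', 'write', 'delete' ]
console.log(actionsFor('authenticated', 'foo/bar/file.txt')); // []
```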

Available actions

When you configure access to a particular storage prefix, you can scope the access to one or more CRUDL actions.

read

This is a convenience action that is equivalent to setting both get and list access.

get

This action maps to the s3:GetObject IAM action, scoped to the corresponding object prefix.

list

This action maps to the s3:ListBucket IAM action, scoped to the corresponding object prefix.

write

This action maps to the s3:PutObject IAM action, scoped to the corresponding object prefix. Note that this action grants the ability both to create new objects and to update existing ones. There is no way to scope access to only creating or only updating objects.

delete

This action maps to the s3:DeleteObject IAM action, scoped to the corresponding object prefix.
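The action-to-IAM mapping described above can be summarized as a lookup table; read is sugar for both get and list.

```typescript
// Summary of the mapping from storage access actions to the IAM actions
// they grant, per the descriptions above.
const iamActions: Record<string, string[]> = {
  get: ['s3:GetObject'],
  list: ['s3:ListBucket'],
  read: ['s3:GetObject', 's3:ListBucket'],
  write: ['s3:PutObject'],
  delete: ['s3:DeleteObject']
};

console.log(iamActions['read']); // [ 's3:GetObject', 's3:ListBucket' ]
```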

Configuring Amplify Gen 1-equivalent access patterns

To configure defineStorage in Amplify Gen 2 to behave the same way as the storage category in Gen 1, the following definition can be used.

amplify/storage/resource.ts
export const storage = defineStorage({
  name: 'myProjectFiles',
  access: (allow) => ({
    'public/*': [
      allow.guest.to(['read']),
      allow.authenticated.to(['read', 'write', 'delete'])
    ],
    'protected/{entity_id}/*': [
      allow.authenticated.to(['read']),
      allow.entity('identity').to(['read', 'write', 'delete'])
    ],
    'private/{entity_id}/*': [allow.entity('identity').to(['read', 'write', 'delete'])]
  })
});

Configure storage triggers

Function triggers can be configured to enable event-based workflows when files are uploaded or deleted. To add a function trigger, modify the defineStorage configuration.

First, in your storage definition, add the following:

amplify/storage/resource.ts
import { defineStorage, defineFunction } from '@aws-amplify/backend';

export const storage = defineStorage({
  name: 'myProjectFiles',
  triggers: {
    onUpload: defineFunction({
      entry: './on-upload-handler.ts'
    }),
    onDelete: defineFunction({
      entry: './on-delete-handler.ts'
    })
  }
});

Then create the function definitions at amplify/storage/on-upload-handler.ts and amplify/storage/on-delete-handler.ts.

amplify/storage/on-upload-handler.ts
import type { S3Handler } from 'aws-lambda';

export const handler: S3Handler = async (event) => {
  const objectKeys = event.Records.map((record) => record.s3.object.key);
  console.log(`Upload handler invoked for objects [${objectKeys.join(', ')}]`);
};
amplify/storage/on-delete-handler.ts
import type { S3Handler } from 'aws-lambda';

export const handler: S3Handler = async (event) => {
  const objectKeys = event.Records.map((record) => record.s3.object.key);
  console.log(`Delete handler invoked for objects [${objectKeys.join(', ')}]`);
};

Note: The S3Handler type comes from the @types/aws-lambda npm package. This package contains types for different kinds of Lambda handlers, events, and responses.

Now, when you deploy your backend, these functions will be invoked whenever an object is uploaded or deleted from the bucket.

Next steps