
Upload files

Implement upload functionality

Note: Refer to the Transfer Acceleration documentation to learn how to enable transfer acceleration for storage APIs.

Upload from file

The following example shows how to upload a file from a file object, which could be retrieved from the local machine or another source.

import React from 'react';
import { uploadData } from 'aws-amplify/storage';

function App() {
  const [file, setFile] = React.useState();

  const handleChange = (event) => {
    setFile(event.target.files?.[0]);
  };

  const handleClick = () => {
    if (!file) {
      return;
    }
    uploadData({
      path: `photos/${file.name}`,
      data: file,
    });
  };

  return (
    <div>
      <input type="file" onChange={handleChange} />
      <button onClick={handleClick}>Upload</button>
    </div>
  );
}

Upload from data

You can follow this example if you have data in memory that you would like to upload to the cloud.

import { uploadData } from 'aws-amplify/storage';

try {
  const result = await uploadData({
    path: "album/2024/1.jpg",
    // Alternatively, path: ({identityId}) => `album/${identityId}/1.jpg`
    data: file,
  }).result;
  console.log('Succeeded: ', result);
} catch (error) {
  console.log('Error : ', error);
}
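
For instance, here is a minimal sketch of uploading a string held in memory; the path and payload shown are illustrative, and other in-memory payload types such as Blob are also supported.

import { uploadData } from 'aws-amplify/storage';

// Hypothetical in-memory payload built at runtime
const report = JSON.stringify({ generatedAt: Date.now(), items: [] });

try {
  const result = await uploadData({
    path: 'reports/latest.json', // illustrative path
    data: report,
    options: { contentType: 'application/json' },
  }).result;
  console.log('Succeeded: ', result.path);
} catch (error) {
  console.log('Error : ', error);
}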

Upload to a specified bucket

You can also perform an upload operation to a specific bucket by providing the bucket option. You can pass in a string representing the target bucket's assigned name in Amplify Backend.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Specify a target bucket using name assigned in Amplify Backend
    bucket: 'assignedNameInAmplifyBackend'
  }
}).result;

Alternatively, you can pass in an object specifying the bucket name and region from the console.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Alternatively, provide bucket name from console and associated region
    bucket: {
      bucketName: 'bucket-name-from-console',
      region: 'us-east-2'
    }
  }
}).result;

Monitor upload progress

Monitor the progress of an upload by using the onProgress option.

import { uploadData } from 'aws-amplify/storage';

const monitorUpload = async () => {
  try {
    const result = await uploadData({
      path: "album/2024/1.jpg",
      // Alternatively, path: ({identityId}) => `album/${identityId}/1.jpg`
      data: file,
      options: {
        onProgress: ({ transferredBytes, totalBytes }) => {
          if (totalBytes) {
            console.log(
              `Upload progress ${Math.round(
                (transferredBytes / totalBytes) * 100
              )} %`
            );
          }
        },
      },
    }).result;
    console.log("Path from Response: ", result.path);
  } catch (error) {
    console.log("Error : ", error);
  }
};
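
As an illustrative usage sketch (the component and state names are assumptions, not part of the API), the same callback can drive component state for a progress indicator:

import React from 'react';
import { uploadData } from 'aws-amplify/storage';

function UploadWithProgress({ file }) {
  // Illustrative local state updated by the onProgress callback
  const [percent, setPercent] = React.useState(0);

  const handleUpload = () => {
    uploadData({
      path: `photos/${file.name}`,
      data: file,
      options: {
        onProgress: ({ transferredBytes, totalBytes }) => {
          if (totalBytes) {
            setPercent(Math.round((transferredBytes / totalBytes) * 100));
          }
        },
      },
    });
  };

  return (
    <div>
      <button onClick={handleUpload}>Upload</button>
      <progress value={percent} max={100} />
    </div>
  );
}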

Pause, resume, and cancel uploads

The task returned by uploadData exposes methods for pausing, resuming, and cancelling an in-flight request.

import { uploadData, isCancelError } from 'aws-amplify/storage';

// Pause, resume, and cancel a task
const uploadTask = uploadData({ path, data: file });
//...
uploadTask.pause();
//...
uploadTask.resume();
//...
uploadTask.cancel();
//...
try {
  await uploadTask.result;
} catch (error) {
  if (isCancelError(error)) {
    // Handle error thrown by task cancellation
  }
}
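
As an illustrative follow-up (the helper name and structure are assumptions, not from the docs), the task's methods can be returned as handlers for pause, resume, and cancel controls:

import { uploadData, isCancelError } from 'aws-amplify/storage';

// Hypothetical helper that starts an upload and returns UI-friendly handlers
function startUpload(path, file) {
  const task = uploadData({ path, data: file });

  task.result
    .then((item) => console.log('Uploaded: ', item.path))
    .catch((error) => {
      if (isCancelError(error)) {
        console.log('Upload was cancelled');
      } else {
        console.log('Error : ', error);
      }
    });

  // Attach these handlers to pause/resume/cancel buttons
  return {
    pause: () => task.pause(),
    resume: () => task.resume(),
    cancel: () => task.cancel(),
  };
}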

Transfer with Object Metadata

Custom metadata can be associated with your uploaded object by passing the metadata option.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    metadata: {
      customKey: 'customValue',
    },
  },
}).result;
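
As a follow-up sketch, the stored metadata can be read back with the getProperties API from aws-amplify/storage; the path here mirrors the example above:

import { getProperties } from 'aws-amplify/storage';

// Read back the object's properties, including the custom metadata map
const properties = await getProperties({
  path: 'album/2024/1.jpg',
});
console.log('Metadata: ', properties.metadata);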

More upload options

The behavior of uploadData and properties of the uploaded object can be customized by passing in additional options.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // content-type header to be used when downloading
    contentType: "image/jpeg",
    // configure how object is presented
    contentDisposition: "attachment",
    // whether to use accelerate endpoint
    useAccelerateEndpoint: true,
    // the account ID that owns the requested bucket
    expectedBucketOwner: "123456789012",
    // whether to check if an object with the same key already exists before completing the upload
    preventOverwrite: true,
    // whether to compute a checksum for the uploaded data so that S3 can verify its integrity
    checksumAlgorithm: "crc-32", // only 'crc-32' is supported currently
  },
}).result;

Option: bucket
Type: string | { bucketName: string; region: string; }
Default: default bucket and region from the Amplify configuration
Description: A string representing the target bucket's assigned name in Amplify Backend, or an object specifying the bucket name and region from the console. Read more at Configure additional storage buckets.

Option: contentType
Type: string
Default: application/octet-stream
Description: The default content-type header value of the file when downloading it. Read more at Content-Type documentation.

Option: contentEncoding
Type: string
Default: —
Description: The default content-encoding header value of the file when downloading it. Read more at Content-Encoding documentation.

Option: contentDisposition
Type: string
Default: —
Description: Specifies presentational information for the object. Read more at Content-Disposition documentation.

Option: metadata
Type: map<string>
Default: —
Description: A map of metadata to store with the object in S3. Read more at S3 metadata documentation.

Option: useAccelerateEndpoint
Type: boolean
Default: false
Description: Whether to use the accelerate endpoint. Read more at Transfer Acceleration.

Option: expectedBucketOwner
Type: string
Default: —
Description: The account ID that owns the requested bucket.

Option: preventOverwrite
Type: boolean
Default: false
Description: Whether to check if an object with the same key already exists before completing the upload. If it exists, a Precondition Failed error is thrown.

Option: checksumAlgorithm
Type: "crc-32"
Default: —
Description: Whether to compute a checksum for the data to be uploaded so that S3 can verify its integrity. Only 'crc-32' is currently supported.

Uploads initiated over one hour ago are cancelled automatically. In some cases (e.g. the device goes offline or the user logs out), the incomplete file remains in your Amazon S3 account. It is recommended to set up an S3 lifecycle rule to automatically clean up incomplete upload requests, for example as sketched below.
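
A minimal sketch using the AWS SDK for JavaScript v3 (this assumes administrative credentials outside of Amplify; the bucket name and retention period are illustrative, and the same rule can be configured in the S3 console):

import {
  S3Client,
  PutBucketLifecycleConfigurationCommand,
} from '@aws-sdk/client-s3';

const client = new S3Client({ region: 'us-east-2' });

// Abort multipart uploads that have not completed within 7 days (illustrative value)
await client.send(
  new PutBucketLifecycleConfigurationCommand({
    Bucket: 'your-amplify-storage-bucket', // hypothetical bucket name
    LifecycleConfiguration: {
      Rules: [
        {
          ID: 'abort-incomplete-multipart-uploads',
          Status: 'Enabled',
          Filter: { Prefix: '' },
          AbortIncompleteMultipartUpload: { DaysAfterInitiation: 7 },
        },
      ],
    },
  })
);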

MultiPart upload

Amplify will automatically perform an Amazon S3 multipart upload for objects that are larger than 5 MB. For more information about S3's multipart upload, see Uploading and copying objects using multipart upload.

Upload using a presigned URL

You can use the getUrl API with method: 'PUT' to generate a presigned URL for uploading files directly to S3. This is useful when:

  • You need to integrate with third-party tools or libraries that only accept standard HTTP URL endpoints (e.g. DuckDB, database export tools)
  • You want to upload from server-side environments such as Next.js API routes or other SSR frameworks
  • You need to share a temporary upload link with another client or service
import { getUrl } from 'aws-amplify/storage';

// Generate a presigned URL for uploading
const { url, expiresAt } = await getUrl({
  path: 'album/2024/1.jpg',
  options: {
    method: 'PUT',
    expiresIn: 3600, // URL valid for 1 hour
    contentType: 'image/jpeg',
  }
});

console.log('Upload URL: ', url);
console.log('URL expires at: ', expiresAt);

Then use the presigned URL to upload the file with a standard HTTP PUT request:

await fetch(url, {
  method: 'PUT',
  body: file,
  headers: {
    'Content-Type': 'image/jpeg',
  },
});

When method: 'PUT' is specified, the validateObjectExistence option is ignored since the object may not exist yet.

If you specify contentType when generating the presigned URL, you must include the matching Content-Type header in the upload request. A mismatch will cause S3 to reject the request with a signature error.
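
Because fetch does not reject on HTTP error statuses, a small illustrative sketch of checking the response status can surface such rejections:

const response = await fetch(url, {
  method: 'PUT',
  body: file,
  headers: {
    'Content-Type': 'image/jpeg', // must match the contentType used in getUrl
  },
});

if (!response.ok) {
  // S3 returns an error status (e.g. 403) if the signature or headers don't match
  throw new Error(`Upload failed with status ${response.status}`);
}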

Using presigned URLs with third-party tools

Because the presigned URL is a standard HTTP endpoint, it works with any tool that supports HTTP uploads:

import { getUrl } from 'aws-amplify/storage';

const { url } = await getUrl({
  path: 'analytics/data.parquet',
  options: {
    method: 'PUT',
    contentType: 'application/octet-stream',
  }
});

// Example: use with DuckDB to export query results directly to S3
await duckdb.query(`
  COPY (SELECT * FROM processed_data)
  TO '${url}'
  (FORMAT PARQUET)
`);

Presigned URL upload options

Option: method
Type: 'GET' | 'PUT'
Default: 'GET'
Description: The HTTP method for the presigned URL. Use 'PUT' to generate an upload URL.

Option: bucket
Type: string | { bucketName: string; region: string; }
Default: default bucket and region from the Amplify configuration
Description: A string representing the target bucket's assigned name in Amplify Backend, or an object specifying the bucket name and region from the console. Read more at Configure additional storage buckets.

Option: expiresIn
Type: number
Default: 900
Description: Number of seconds until the URL expires. The expiration time of the presigned URL depends on the session and maxes out at 1 hour.

Option: contentType
Type: string
Default: —
Description: The MIME type of the file to be uploaded. When specified, the matching Content-Type header must be included in the upload request.

Option: contentDisposition
Type: string | object
Default: —
Description: Specifies presentational information for the object. Can be a string (e.g. 'attachment; filename="file.jpg"') or an object (e.g. { type: 'attachment', filename: 'file.jpg' }).

Option: expectedBucketOwner
Type: string
Default: —
Description: The account ID that owns the requested bucket.