
Page updated Dec 12, 2024

Upload files

Implement upload functionality

Note: Refer to the Transfer Acceleration documentation to learn how to enable transfer acceleration for storage APIs.

Upload from file

The following is an example of how you would upload a file from a file object, which could be retrieved from the local machine or a different source.

import { uploadData } from "aws-amplify/storage";

const file = document.getElementById("file");
const upload = document.getElementById("upload");

upload.addEventListener("click", () => {
  const fileReader = new FileReader();
  fileReader.readAsArrayBuffer(file.files[0]);

  fileReader.onload = async (event) => {
    console.log("Complete File read successfully!", event.target.result);
    try {
      await uploadData({
        data: event.target.result,
        path: file.files[0].name
      });
    } catch (e) {
      console.log("error", e);
    }
  };
});

Upload from data

You can follow this example if you have data in memory that you would like to upload to the cloud.

import { uploadData } from 'aws-amplify/storage';

try {
  const result = await uploadData({
    path: "album/2024/1.jpg",
    // Alternatively, path: ({identityId}) => `album/${identityId}/1.jpg`
    data: file,
  }).result;
  console.log('Succeeded: ', result);
} catch (error) {
  console.log('Error : ', error);
}

Upload to a specified bucket

You can also perform an upload operation to a specific bucket by providing the bucket option. You can pass in a string representing the target bucket's assigned name in Amplify Backend.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Specify a target bucket using name assigned in Amplify Backend
    bucket: 'assignedNameInAmplifyBackend'
  }
}).result;

Alternatively, you can pass in an object specifying the bucket name and region from the console.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Alternatively, provide bucket name from console and associated region
    bucket: {
      bucketName: 'bucket-name-from-console',
      region: 'us-east-2'
    }
  }
}).result;

Monitor upload progress

Monitor the progress of an upload by using the onProgress option.

import { uploadData } from 'aws-amplify/storage';

const monitorUpload = async () => {
  try {
    const result = await uploadData({
      path: "album/2024/1.jpg",
      // Alternatively, path: ({identityId}) => `album/${identityId}/1.jpg`
      data: file,
      options: {
        onProgress: ({ transferredBytes, totalBytes }) => {
          if (totalBytes) {
            console.log(
              `Upload progress ${Math.round(
                (transferredBytes / totalBytes) * 100
              )} %`
            );
          }
        },
      },
    }).result;
    console.log("Path from Response: ", result.path);
  } catch (error) {
    console.log("Error : ", error);
  }
};
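The percentage arithmetic in the onProgress callback can be pulled into a small, reusable helper. The following is a sketch; the helper name is illustrative and not part of the Amplify API:

```javascript
// Illustrative helper (not part of the Amplify API): returns a whole-number
// progress percentage, or null while totalBytes is still unknown.
function uploadPercent(transferredBytes, totalBytes) {
  if (!totalBytes) return null;
  return Math.round((transferredBytes / totalBytes) * 100);
}
```

Inside the onProgress handler you would then call, for example, `uploadPercent(transferredBytes, totalBytes)` and only log when the result is not null.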

Pause, resume, and cancel uploads

The task returned by uploadData exposes methods for pausing, resuming, and cancelling an in-flight request.

import { uploadData, isCancelError } from 'aws-amplify/storage';

// Pause, resume, and cancel a task
const uploadTask = uploadData({ path, data: file });
//...
uploadTask.pause();
//...
uploadTask.resume();
//...
uploadTask.cancel();
//...
try {
  await uploadTask.result;
} catch (error) {
  if (isCancelError(error)) {
    // Handle error thrown by task cancellation
  }
}

Transfer with Object Metadata

Custom metadata can be associated with your uploaded object by passing the metadata option.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    metadata: {
      customKey: 'customValue',
    },
  },
}).result;

More upload options

The behavior of uploadData and properties of the uploaded object can be customized by passing in additional options.

import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // content-type header to be used when downloading
    contentType: "image/jpeg",
    // configure how object is presented
    contentDisposition: "attachment",
    // whether to use accelerate endpoint
    useAccelerateEndpoint: true,
    // the account ID that owns requested bucket
    expectedBucketOwner: "123456789012",
    // whether to check if an object with the same key already exists before completing the upload
    preventOverwrite: true,
    // whether to compute the checksum for the data to be uploaded, so S3 can verify the data integrity
    checksumAlgorithm: "crc-32", // only 'crc-32' is supported currently
  },
}).result;
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| bucket | string \| { bucketName: string; region: string; } | Default bucket and region from Amplify configuration | A string representing the target bucket's assigned name in Amplify Backend, or an object specifying the bucket name and region from the console. Read more at Configure additional storage buckets. |
| contentType | string | application/octet-stream | The default content-type header value of the file when downloading it. Read more at Content-Type documentation. |
| contentEncoding | string | — | The default content-encoding header value of the file when downloading it. Read more at Content-Encoding documentation. |
| contentDisposition | string | — | Specifies presentational information for the object. Read more at Content-Disposition documentation. |
| metadata | map\<string\> | — | A map of metadata to store with the object in S3. Read more at S3 metadata documentation. |
| useAccelerateEndpoint | boolean | false | Whether to use the accelerate endpoint. Read more at Transfer Acceleration. |
| expectedBucketOwner | string | — | The account ID that owns the requested bucket. |
| preventOverwrite | boolean | false | Whether to check if an object with the same key already exists before completing the upload. If it exists, a Precondition Failed error is thrown. |
| checksumAlgorithm | "crc-32" | — | Whether to compute a checksum for the data to be uploaded so S3 can verify data integrity. Only 'crc-32' is supported currently. |

Uploads that were initiated over one hour ago will be cancelled automatically. There are instances (e.g. the device goes offline, or the user logs out) where the incomplete file remains in your Amazon S3 account. It is recommended to set up an S3 lifecycle rule to automatically clean up incomplete upload requests.
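As a sketch of such a lifecycle rule, the following AWS CLI command aborts multipart uploads left incomplete for more than one day. The rule ID and the one-day window are arbitrary illustrative choices, not Amplify defaults, and the bucket name placeholder must be replaced with the bucket Amplify provisioned:

```shell
# Hypothetical example: abort incomplete multipart uploads after one day.
# Replace <your-bucket-name> with your storage bucket's name.
aws s3api put-bucket-lifecycle-configuration \
  --bucket <your-bucket-name> \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "abort-incomplete-multipart",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 1 }
      }
    ]
  }'
```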

MultiPart upload

Amplify will automatically perform an Amazon S3 multipart upload for objects that are larger than 5MB. For more information about S3's multipart upload, see Uploading and copying objects using multipart upload.
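To make the 5MB threshold concrete, the following sketch estimates whether an upload would go multipart and roughly how many parts it would involve, assuming a 5MB part size. The actual part sizing is an internal detail of the library, so treat these helpers as illustrative only:

```javascript
// Assumption: 5 MB threshold and part size, per the documentation above.
// The library's real part sizing may differ.
const FIVE_MB = 5 * 1024 * 1024;

// True when Amplify would switch to a multipart upload.
function usesMultipart(sizeInBytes) {
  return sizeInBytes > FIVE_MB;
}

// Rough part count under the 5 MB-per-part assumption.
function estimatedPartCount(sizeInBytes) {
  return usesMultipart(sizeInBytes) ? Math.ceil(sizeInBytes / FIVE_MB) : 1;
}
```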