Upload files
Implement upload functionality
Upload from file
The following example shows how to upload a file from a file object, which could be retrieved from the local machine or another source.
```ts
import { uploadData } from "aws-amplify/storage";

const file = document.getElementById("file");
const upload = document.getElementById("upload");

upload.addEventListener("click", () => {
  const fileReader = new FileReader();
  fileReader.readAsArrayBuffer(file.files[0]);

  fileReader.onload = async (event) => {
    console.log("Complete File read successfully!", event.target.result);
    try {
      await uploadData({
        data: event.target.result,
        path: file.files[0].name
      });
    } catch (e) {
      console.log("error", e);
    }
  };
});
```
Upload from data
Follow this example if you have data in memory and would like to upload it to the cloud.
```ts
import { uploadData } from 'aws-amplify/storage';

try {
  const result = await uploadData({
    path: "album/2024/1.jpg",
    // Alternatively: path: ({ identityId }) => `album/${identityId}/1.jpg`
    data: file,
  }).result;
  console.log('Succeeded: ', result);
} catch (error) {
  console.log('Error : ', error);
}
```
Upload to a specified bucket
You can also perform an upload operation to a specific bucket by providing the bucket option. You can pass in a string representing the target bucket's assigned name in Amplify Backend.
```ts
import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Specify a target bucket using the name assigned in Amplify Backend
    bucket: 'assignedNameInAmplifyBackend'
  }
}).result;
```
Alternatively, you can pass in an object specifying the bucket name and region from the console.
```ts
import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // Alternatively, provide the bucket name from the console and its region
    bucket: {
      bucketName: 'bucket-name-from-console',
      region: 'us-east-2'
    }
  }
}).result;
```
Monitor upload progress
Monitor the progress of an upload by using the onProgress option.
```ts
import { uploadData } from 'aws-amplify/storage';

const monitorUpload = async () => {
  try {
    const result = await uploadData({
      path: "album/2024/1.jpg",
      // Alternatively: path: ({ identityId }) => `album/${identityId}/1.jpg`
      data: file,
      options: {
        onProgress: ({ transferredBytes, totalBytes }) => {
          if (totalBytes) {
            console.log(
              `Upload progress ${Math.round(
                (transferredBytes / totalBytes) * 100
              )} %`
            );
          }
        },
      },
    }).result;
    console.log("Path from Response: ", result.path);
  } catch (error) {
    console.log("Error : ", error);
  }
};
```
Pause, resume, and cancel uploads
The task returned by uploadData exposes methods to pause, resume, and cancel the request.
```ts
import { uploadData, isCancelError } from 'aws-amplify/storage';

// Pause, resume, and cancel a task
const uploadTask = uploadData({ path, data: file });
//...
uploadTask.pause();
//...
uploadTask.resume();
//...
uploadTask.cancel();
//...
try {
  await uploadTask.result;
} catch (error) {
  if (isCancelError(error)) {
    // Handle error thrown by task cancellation
  }
}
```
Transfer with Object Metadata
Custom metadata can be associated with your uploaded object by passing the metadata option.
```ts
import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    metadata: {
      customKey: 'customValue',
    },
  },
});
```
More upload options
The behavior of uploadData and properties of the uploaded object can be customized by passing in additional options.
```ts
import { uploadData } from 'aws-amplify/storage';

const result = await uploadData({
  path: 'album/2024/1.jpg',
  data: file,
  options: {
    // content-type header to be used when downloading
    contentType: "image/jpeg",
    // configure how the object is presented
    contentDisposition: "attachment",
    // whether to use the transfer acceleration endpoint
    useAccelerateEndpoint: true,
    // the account ID that owns the requested bucket
    expectedBucketOwner: "123456789012",
    // whether to check if an object with the same key already exists
    // before completing the upload
    preventOverwrite: true,
    // whether to compute a checksum for the data to be uploaded,
    // so that S3 can verify its integrity
    checksumAlgorithm: "crc-32", // only 'crc-32' is currently supported
  },
});
```
| Option | Type | Default | Description |
|---|---|---|---|
| bucket | `string \| { bucketName: string; region: string; }` | Default bucket and region from Amplify configuration | A string representing the target bucket's assigned name in Amplify Backend, or an object specifying the bucket name and region from the console. Read more at Configure additional storage buckets |
| contentType | `string` | `application/octet-stream` | The default content-type header value of the file when downloading it. Read more at Content-Type documentation |
| contentEncoding | `string` | — | The default content-encoding header value of the file when downloading it. Read more at Content-Encoding documentation |
| contentDisposition | `string` | — | Specifies presentational information for the object. Read more at Content-Disposition documentation |
| metadata | `map<string>` | — | A map of metadata to store with the object in S3. Read more at S3 metadata documentation |
| useAccelerateEndpoint | `boolean` | `false` | Whether to use the transfer acceleration endpoint. Read more at Transfer Acceleration |
| expectedBucketOwner | `string` | — | The account ID that owns the requested bucket. |
| preventOverwrite | `boolean` | `false` | Whether to check if an object with the same key already exists before completing the upload. If one exists, a Precondition Failed error is thrown. |
| checksumAlgorithm | `'crc-32'` | — | Whether to compute a checksum for the data to be uploaded so that S3 can verify its integrity. Only 'crc-32' is currently supported. |
Multipart upload
Amplify automatically performs an Amazon S3 multipart upload for objects larger than 5 MB. For more information about S3's multipart upload, see Uploading and copying objects using multipart upload.
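You don't need to change your code for multipart uploads; Amplify manages part sizing and assembly internally. Purely as an illustration of how the mechanism splits an object, the sketch below estimates the number of parts assuming a 5 MiB part size (the S3 minimum); the helper and the fixed part size are illustrative assumptions, not part of the Amplify API.

```typescript
// Illustrative only: Amplify manages part sizing internally.
// This assumes a fixed 5 MiB part size, the S3 minimum.
const PART_SIZE = 5 * 1024 * 1024;

function estimatePartCount(totalBytes: number): number {
  // Objects at or below the threshold are uploaded as a single PUT.
  if (totalBytes <= PART_SIZE) return 1;
  // Larger objects are split into ceil(size / partSize) parts.
  return Math.ceil(totalBytes / PART_SIZE);
}

console.log(estimatePartCount(3 * 1024 * 1024));  // 1 (single PUT)
console.log(estimatePartCount(52 * 1024 * 1024)); // 11 parts
```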
Upload using a presigned URL
You can use the getUrl API with method: 'PUT' to generate a presigned URL for uploading files directly to S3. This is useful when:
- You need to integrate with third-party tools or libraries that only accept standard HTTP URL endpoints (e.g. DuckDB, database export tools)
- You want to upload from server-side environments such as Next.js API routes or other SSR frameworks
- You need to share a temporary upload link with another client or service
```ts
import { getUrl } from 'aws-amplify/storage';

// Generate a presigned URL for uploading
const { url, expiresAt } = await getUrl({
  path: 'album/2024/1.jpg',
  options: {
    method: 'PUT',
    expiresIn: 3600, // URL valid for 1 hour
    contentType: 'image/jpeg',
  }
});

console.log('Upload URL: ', url);
console.log('URL expires at: ', expiresAt);
```
Then use the presigned URL to upload the file with a standard HTTP PUT request:
```ts
await fetch(url, {
  method: 'PUT',
  body: file,
  headers: {
    'Content-Type': 'image/jpeg',
  },
});
```
Using presigned URLs with third-party tools
Because the presigned URL is a standard HTTP endpoint, it works with any tool that supports HTTP uploads:
```ts
import { getUrl } from 'aws-amplify/storage';

const { url } = await getUrl({
  path: 'analytics/data.parquet',
  options: {
    method: 'PUT',
    contentType: 'application/octet-stream',
  }
});

// Example: use with DuckDB to export query results directly to S3
await duckdb.query(`
  COPY (SELECT * FROM processed_data)
  TO '${url}' (FORMAT PARQUET)
`);
```
Presigned URL upload options
| Option | Type | Default | Description |
|---|---|---|---|
| method | `'GET' \| 'PUT'` | `'GET'` | The HTTP method for the presigned URL. Use 'PUT' to generate an upload URL. |
| bucket | `string \| { bucketName: string; region: string; }` | Default bucket and region from Amplify configuration | A string representing the target bucket's assigned name in Amplify Backend, or an object specifying the bucket name and region from the console. Read more at Configure additional storage buckets |
| expiresIn | `number` | `900` | Number of seconds until the URL expires. The expiration time of the presigned URL also depends on the session and maxes out at 1 hour. |
| contentType | `string` | — | The MIME type of the file to be uploaded. When specified, a matching Content-Type header must be included in the upload request. |
| contentDisposition | `string \| object` | — | Specifies presentational information for the object. Can be a string (e.g. 'attachment; filename="file.jpg"') or an object (e.g. { type: 'attachment', filename: 'file.jpg' }). |
| expectedBucketOwner | `string` | — | The account ID that owns the requested bucket. |
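Because the expiration window is capped, clients that store a presigned URL for later use should check expiresAt before reusing it. A minimal sketch follows; the isExpired helper is illustrative and not part of the Amplify API, and expiresAt is assumed to be the Date returned alongside the URL by getUrl.

```typescript
// Illustrative helper (not part of the Amplify API): decide whether a
// presigned URL's expiration time has passed before attempting a reuse.
function isExpired(expiresAt: Date, now: Date = new Date()): boolean {
  return now.getTime() >= expiresAt.getTime();
}

// e.g. a URL issued with expiresIn: 3600 expires one hour after issuance
const expiresAt = new Date(Date.now() + 3600 * 1000);
console.log(isExpired(expiresAt)); // false while the URL is still valid
```

If the check reports the URL as expired, request a fresh one with getUrl rather than retrying the PUT, since S3 rejects requests signed with an expired URL.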