Upload files

The Put method uploads files to Amazon S3.

It returns a {key: S3 Object key} object on success:

const result = await Storage.put('test.txt', 'Hello');
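
The returned key identifies the object for later operations; a minimal sketch:

const { key } = await Storage.put('test.txt', 'Hello');
console.log(key); // 'test.txt'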

Browser uploads

Upload an image in the browser:

async function onChange(e) {
  const file = e.target.files[0];
  try {
    await Storage.put(file.name, file, {
      contentType: "image/png", // contentType is optional
    });
  } catch (error) {
    console.log("Error uploading file: ", error);
  }
}

<input type="file" onChange={onChange} />;

Note: 'contentType' is metadata (saved under the key 'Content-Type') for the S3 object; it does not determine the 'Type' shown in the AWS S3 console. If the key of the uploaded object has no file extension, the S3 console's 'Type' field will be omitted. Otherwise, 'Type' is populated to match the key's extension. How the S3 object is actually treated is determined by the 'contentType' metadata, not by 'Type'.

For example: uploading a file with the key "example.jpg" will result in the 'Type' being shown as "jpg", but the 'contentType' metadata determines its behavior, so setting it to "text/html" will cause the file to be treated as an HTML file regardless of the 'Type' displayed in the S3 console.
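
A short sketch of that scenario:

// The key's .jpg extension makes the console show 'Type: jpg',
// but the contentType metadata means the object is served as HTML.
await Storage.put('example.jpg', '<h1>Hello</h1>', {
  contentType: 'text/html'
});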

When a networking error occurs during an upload, the Storage module retries the upload up to a maximum of 4 attempts. If the upload still fails after all retries, you will receive an error.
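
That final error can be handled with an ordinary try/catch, as in the browser example above:

try {
  await Storage.put('test.txt', 'Hello');
} catch (error) {
  // Reached only after the built-in retries are exhausted
  console.log('Upload failed after retries: ', error);
}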

Public level

Public is the default access level, so the level option can be omitted:

const result = await Storage.put('test.txt', 'Hello');

Protected level

const result = await Storage.put('test.txt', 'Protected Content', {
  level: 'protected',
  contentType: 'text/plain'
});

Private level

const result = await Storage.put('test.txt', 'Private Content', {
  level: 'private',
  contentType: 'text/plain'
});
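
When you later download one of these objects, pass the same level option; a minimal sketch for the private upload above:

// Returns a presigned URL for the current user's private copy of the object
const signedURL = await Storage.get('test.txt', { level: 'private' });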

Monitor progress of upload

To track the progress of your upload, you can use the progressCallback:

Storage.put('test.txt', 'File content', {
  progressCallback(progress) {
    console.log(`Uploaded: ${progress.loaded}/${progress.total}`);
  }
});
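
If you want a percentage rather than raw byte counts, you can derive it from loaded and total; a small sketch:

Storage.put('test.txt', 'File content', {
  progressCallback(progress) {
    const percent = Math.round((progress.loaded / progress.total) * 100);
    console.log(`Upload progress: ${percent}%`);
  }
});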

Encrypted uploads

To utilize Server-Side Encryption with AWS KMS, the following options can be passed in with the Put API like so:

const serverSideEncryption = 'AES256'; // or 'aws:kms'
const SSECustomerAlgorithm = 'string';
const SSECustomerKey = Buffer.from('...'); // a Buffer or a string
const SSECustomerKeyMD5 = 'string';
const SSEKMSKeyId = 'string';

const result = await Storage.put('test.txt', 'File content', {
  serverSideEncryption, SSECustomerAlgorithm, SSECustomerKey, SSECustomerKeyMD5, SSEKMSKeyId
});
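
For example, a sketch of a KMS-encrypted upload (the key ID below is a hypothetical placeholder):

const result = await Storage.put('secret.txt', 'Sensitive content', {
  serverSideEncryption: 'aws:kms',
  SSEKMSKeyId: '1234abcd-12ab-34cd-56ef-1234567890ab' // hypothetical KMS key ID
});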

Other options available are:

Storage.put('test.txt', 'My Content', {
  cacheControl: 'no-cache', // (String) Specifies caching behavior along the request/reply chain
  contentDisposition: 'attachment', // (String) Specifies presentational information for the object
  expires: new Date(Date.now() + 1000 * 60 * 60 * 24 * 7), // (Date) When the object is no longer cacheable; an ISO-8601 string or a UNIX timestamp in seconds is also accepted
  metadata: { key: 'value' }, // (map<String>) A map of metadata to store with the object in S3
  resumable: true, // (Boolean) Allows uploads to be paused and resumed
  useAccelerateEndpoint: true // If enabled on the bucket, use the accelerated S3 endpoint
});

Pause and resume uploads

Passing the resumable: true option to the Put API returns an object with the following methods: pause and resume. The Storage.cancel API is also available if you want to cancel the upload at any point.

const upload = Storage.put(file.name, file, {
  resumable: true
});

upload.pause();

upload.resume();

Storage.cancel(upload);

If a page refresh occurs during the upload, re-initializing the upload with the same file will continue from the previous progress point.

const upload = Storage.put(file.name, file, {
  resumable: true
});

// This duplicate upload will resume the original upload.
const duplicateUpload = Storage.put(file.name, file, {
  resumable: true
});

Uploads that were initiated over one hour ago will be cancelled automatically. There are instances (e.g., the device went offline, or the user logged out) where the incomplete file remains in your S3 bucket. It is recommended to set up an S3 lifecycle rule to automatically clean up incomplete upload requests.
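
One way to configure such a rule is with the AWS SDK for JavaScript v3 (it can also be set in the S3 console); a minimal sketch, where the bucket name and region are placeholders:

import { S3Client, PutBucketLifecycleConfigurationCommand } from '@aws-sdk/client-s3';

const client = new S3Client({ region: 'us-east-1' }); // placeholder region

// Abort multipart uploads that are still incomplete one day after initiation
await client.send(new PutBucketLifecycleConfigurationCommand({
  Bucket: 'your-amplify-storage-bucket', // placeholder bucket name
  LifecycleConfiguration: {
    Rules: [{
      ID: 'abort-incomplete-multipart-uploads',
      Status: 'Enabled',
      Filter: { Prefix: '' }, // apply to the whole bucket
      AbortIncompleteMultipartUpload: { DaysAfterInitiation: 1 }
    }]
  }
}));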

Event handlers

With the resumable: true flag, there are 3 callback functions available: completeCallback, progressCallback and errorCallback.

const upload = Storage.put(file.name, file, {
  resumable: true,
  completeCallback: (event) => {
    console.log(`Successfully uploaded ${event.key}`);
  },
  progressCallback: (progress) => {
    console.log(`Uploaded: ${progress.loaded}/${progress.total}`);
  },
  errorCallback: (err) => {
    console.error('Unexpected error while uploading', err);
  }
});

Currently, the progressCallback event is based on the number of parts that are sent to S3, and the library uploads at about 5 MB per part. If you upload a file smaller than this, the event will not be fired.