Uploading files to an S3 bucket from your LWC components

In this article we will cover how to upload files to your S3 buckets from an LWC component, since storing files in Salesforce is expensive.

The S3 bucket and IAM policies

First, you should have an AWS account, your bucket, and a user in IAM with enough permissions to store files in S3. In our Salesforce & Serverless 101 article, we explained how to create your AWS account and your first user, so let's create our bucket.

In the AWS console, head to S3 and click Create Bucket; choose a name and a region. For security purposes, keep Block all public access checked, as we will be using the S3 API to access our files. Also, depending on your company policies, enable the default encryption provided by AWS.

Creating the S3 bucket

Once our bucket is ready, we need an API key to read and write objects in it, so open the IAM tab in the AWS console and choose the user you will be using to access S3.

You should define a new policy, like this one, to allow reading and writing files in this bucket. We do not need list permissions, as we will be storing the paths in our records.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-example-bucket/*"
            ]
        }
    ]
}
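As a side note, if you would rather scope the credentials down to the single folder used later in this post, you can narrow the Resource in the statement above. The my-directory prefix here is an assumption that matches the controller code below:

"Resource": [
    "arn:aws:s3:::my-example-bucket/my-directory/*"
]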

Salesforce named credentials

The Salesforce platform can handle authentication with AWS for you, so you do not need to store access keys or secrets in custom objects. This is achieved with Salesforce named credentials.

We will create this named credential using the Metadata API, but you can also use the Setup tab to create it. Open Visual Studio Code, create a new file called aws_s3_storage.namedCredential-meta.xml, and type the following XML, replacing your access key and secret, the bucket region, and the bucket URL (your bucket name followed by .s3.amazonaws.com):

<?xml version="1.0" encoding="UTF-8"?>
<NamedCredential xmlns="http://soap.sforce.com/2006/04/metadata">
    <awsAccessKey>your-access-key-here</awsAccessKey>
    <awsAccessSecret>your-secret-here</awsAccessSecret>
    <awsRegion>your-region-here</awsRegion>
    <awsService>s3</awsService>
    <generateAuthorizationHeader>true</generateAuthorizationHeader>
    <endpoint>https://my-example-bucket.s3.amazonaws.com</endpoint>
    <label>AWS S3</label>
    <principalType>NamedUser</principalType>
    <protocol>AwsSv4</protocol>
</NamedCredential>

Warning: At the time of writing, there is a typo in the Salesforce documentation, where protocol is mistakenly documented as AwsSig4 instead of AwsSv4. You should use AwsSv4 to connect to AWS using the V4 signature.

Salesforce remote site settings

For security reasons, Salesforce will not allow requests to other domains, so we have to whitelist ours with a remote site setting. Create it via the Metadata API using the following example, and save it as aws_s3_storage.remoteSite-meta.xml.

<?xml version="1.0" encoding="UTF-8"?>
<RemoteSiteSetting xmlns="http://soap.sforce.com/2006/04/metadata">
    <description>Used for S3 upload callouts</description>
    <disableProtocolSecurity>false</disableProtocolSecurity>
    <isActive>true</isActive>
    <url>https://my-example-bucket.s3.amazonaws.com/</url>
</RemoteSiteSetting>

Replace the url with the same value as the endpoint of the named credential.

Attachment custom object

We will need to track all files uploaded to S3. If you expect only one file per record, you can store the S3 key for that file in a text field; otherwise, you should create a custom object for storing S3 keys, with a master-detail relationship to the owner object.

In this example, we created a new custom object called Attachment__c with a master-detail relationship to the standard Contact object.
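If you also manage this object through the Metadata API, here is a minimal sketch of what the Key__c field definition could look like, saved as Key__c.field-meta.xml under the object's fields directory (the label and length are assumptions):

<?xml version="1.0" encoding="UTF-8"?>
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Key__c</fullName>
    <label>Key</label>
    <length>255</length>
    <type>Text</type>
</CustomField>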

Coding the client side

Adding an upload input in the component

To allow uploads from your component, we need to add an input field, so type this in your component template, wherever you need the input to appear:

<lightning-input label="Upload file" onchange={handleSelectedFile} type="file"></lightning-input>

With the previous HTML tag, you should see a standard file upload input in your component.
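For context, here is a minimal sketch of a full template wrapping the input in a lightning-card (the card title and padding class are arbitrary choices, not from the original component):

<template>
    <lightning-card title="S3 Attachments">
        <div class="slds-p-around_medium">
            <lightning-input label="Upload file" onchange={handleSelectedFile} type="file"></lightning-input>
        </div>
    </lightning-card>
</template>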

Handling the file upload on the client side

Warning: Salesforce currently allows a maximum file size of 3 MB. There is an open idea on the Salesforce Trailblazer Community asking to increase this limit.

When a file is dropped onto this component, or chosen from the filesystem, the handleSelectedFile method will be called with an event parameter containing everything we need to read and upload the file.

handleSelectedFile(event) {
    if (event.target.files.length !== 1) {
        return;
    }
    this.handleFileUpload(event.target.files[0]);
}
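Since this limit is easy to hit, you may want to fail fast on oversized files. Here is a sketch of handleSelectedFile with a size check added (the 3 MB constant mirrors the warning above, and how you surface the error is up to you):

handleSelectedFile(event) {
    if (event.target.files.length !== 1) {
        return;
    }
    const file = event.target.files[0];
    // Reject anything over ~3 MB before reading it into memory
    if (file.size > 3 * 1024 * 1024) {
        console.error('File exceeds the 3 MB upload limit');
        return;
    }
    this.handleFileUpload(file);
}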

Inside handleSelectedFile we call another function named handleFileUpload. This is a tricky function, responsible for converting our file to base64 and then passing it to the Apex function.
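Note that uploadFileOrFail must be imported at the top of your component's JavaScript file. This is the standard import syntax for the @AuraEnabled method of the controller we will create later in this post:

import uploadFileOrFail from '@salesforce/apex/AttachmentUploadController.uploadFileOrFail';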

handleFileUpload(file) {
    let fileReaderObj = new FileReader();
    fileReaderObj.onloadend = () => {
        // Strip the "data:<mime>;base64," prefix from the data URL
        let fileContents = fileReaderObj.result;
        fileContents = fileContents.substr(fileContents.indexOf(',') + 1);

        // Decode the base64 payload into 1 KB byte slices
        let byteCharacters = atob(fileContents);
        let bytesLength = byteCharacters.length;
        let slicesCount = Math.ceil(bytesLength / 1024);
        let byteArrays = new Array(slicesCount);
        for (let sliceIndex = 0; sliceIndex < slicesCount; ++sliceIndex) {
            let begin = sliceIndex * 1024;
            let end = Math.min(begin + 1024, bytesLength);
            let bytes = new Array(end - begin);
            for (let offset = begin, i = 0; offset < end; ++i, ++offset) {
                bytes[i] = byteCharacters.charCodeAt(offset);
            }
            byteArrays[sliceIndex] = new Uint8Array(bytes);
        }

        // Rebuild a File from the slices, keeping the original name and MIME type
        let myFile = new File(byteArrays, file.name, { type: file.type });

        // Read the rebuilt file and send its base64 content to Apex
        let reader = new FileReader();
        reader.onloadend = async () => {
            try {
                // Call to our Apex function
                await uploadFileOrFail({
                    parentId: this.recordId,
                    filename: file.name,
                    fileContent: encodeURIComponent(reader.result.substr(reader.result.indexOf(',') + 1))
                });
            } catch (error) {
                console.log(error);
            }
        };

        reader.readAsDataURL(myFile);
    };

    fileReaderObj.readAsDataURL(file);
}

Now, the server side

The S3 service class:

We will need Apex to code our functions, so to keep things as clean as possible, we will create an S3 class that handles all logic related to communicating with the service. Create a new file called S3.cls with the following content.

This will call S3 using the credentials from the Named Credential we created before and will make a PUT request to the S3 bucket to store the content.

public with sharing class S3 {
    public static Boolean saveFileOrFail(String callout, String bucket, String key, String content) {
        // The client URL-encodes the base64 payload, so decode both layers back to a Blob
        Blob base64Content = EncodingUtil.base64Decode(EncodingUtil.urlDecode(content, 'UTF-8'));

        HttpRequest req = new HttpRequest();
        req.setMethod('PUT');
        // The named credential handles the AWS V4 signature for us
        req.setEndpoint('callout:' + callout + '/' + key);

        req.setHeader('Host', bucket + '.s3.amazonaws.com');
        // The length of the decoded body, not of the encoded string we received
        req.setHeader('Content-Length', String.valueOf(base64Content.size()));
        req.setHeader('Connection', 'keep-alive');
        req.setBodyAsBlob(base64Content);

        Http http = new Http();
        HTTPResponse res = http.send(req);

        if (res.getStatusCode() != 200) {
            throw new S3UploadErrorException(res.getBody());
        }

        return true;
    }
}

Also, we created a new exception class that we will throw in case any error happens, so create a new file called S3UploadErrorException.cls:

public with sharing class S3UploadErrorException extends Exception {
    
}
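With both classes deployed, you can smoke-test the upload from Execute Anonymous before wiring up the component. This is just a sketch: the key and content are made up, and the callout and bucket names assume the values used earlier in this post.

// Encode a tiny text file the same way the client side will
String content = EncodingUtil.urlEncode(
    EncodingUtil.base64Encode(Blob.valueOf('Hello from Salesforce!')),
    'UTF-8'
);
// Should return true and leave my-directory/test.txt in the bucket
S3.saveFileOrFail('aws_s3_storage', 'my-example-bucket', 'my-directory/test.txt', content);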

Creating the Controller:

Now, we will use the S3 service from the controller, so create a new file called AttachmentUploadController.cls and type the uploadFileOrFail function:

public with sharing class AttachmentUploadController {
    public static final String CALLOUT = 'aws_s3_storage';
    public static final String BUCKET = 'my-example-bucket';

    @AuraEnabled
    public static void uploadFileOrFail(Id parentId, String filename, String fileContent) {
        if (filename == null) {
            throw new AuraHandledException('Filename is empty');
        }

        // Sanitize the filename: lowercase, strip unsafe characters, turn whitespace into dashes
        String uploadFilename = filename.toLowerCase().trim().replaceAll('[^a-z0-9.\\s]+', '').replaceAll('[\\s]+', '-');
        String key = 'my-directory/' + uploadFilename;

        S3.saveFileOrFail(CALLOUT, BUCKET, key, fileContent);
        AttachmentRepository.insertAttachment(parentId, key);
    }
}
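For example, a file named My Photo (1).PNG ends up stored under the key my-directory/my-photo-1.png.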

Surely you noticed we called another class named AttachmentRepository. Its insertAttachment method inserts the new Attachment__c record (we cannot name the method insert, as that is a reserved word in Apex); here is the code in case you want to read it:

public with sharing class AttachmentRepository {
    public static Attachment__c insertAttachment(Id contactId, String key) {
        Attachment__c attachment = new Attachment__c();
        attachment.Contact__c = contactId;
        attachment.Key__c = key;
        insert attachment;

        return attachment;
    }
}

What have we learned?

Now we are ready to handle file uploads to an external service (S3 in this case). From here on, you can use any other service to host your files, or even make signed requests to any service, as named credentials can also handle OAuth and JWT authentication.

In our case, we created a small component capable of uploading images related to another object and sorting them.

Since the component knows the record identifier when it is used inside a record page view, we can reuse it wherever we need it. Also, for displaying images and creating thumbnails, we have used the Imgix CDN.

If you have any further questions, ask them in the comments! =)