This is a follow-up to a previous post on how we use Lambdas to generate presigned URLs so that a user's browser can upload directly to S3. We now want our S3 bucket to enforce server-side encryption for all uploaded files. Getting all the pieces to work together was a bit hairy: bucket policies, URL settings, HTTP headers, and, above all, the dreaded CORS configuration. The same approach should apply to other upload properties as well. Finally, we close with a comparison of the default AWS signature algorithm and the newer V4 signatures.
Architecture
The upload portion of our architecture looks like the following diagram. An Angular application, served to the browser from an S3 bucket, has a component that selects a file and invokes a getUploadURL function, which sends the filename and MIME type to a Lambda function; the Lambda calculates a presigned URL that permits uploading for a short time, using the IAM permissions applied to the Lambda. This lets the browser upload securely without leaking credentials; more details are in that earlier post.
Our system resources, policies, and Lambda code are defined using the
Serverless Framework; it tames the complexity and makes deployment a breeze.
S3 Bucket Policy Enforces Crypto
We define a policy on our S3 bucket that requires uploads to use server-side encryption (SSE) with the AES-256 cipher. It does this by checking the appropriate headers supplied with the upload. The canonical policy is in the
AWS docs; a sketch of how it might look in our serverless.yml is below.
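As a rough sketch (adapted from the policy in the AWS docs, and assuming the same ${self:custom.s3_name} bucket variable used later in our serverless.yml), a BucketPolicy resource denying unencrypted PUTs might look like:

Type: AWS::S3::BucketPolicy
Properties:
  Bucket: ${self:custom.s3_name}
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      # Deny PUTs that ask for any cipher other than AES-256
      - Sid: DenyIncorrectEncryptionHeader
        Effect: Deny
        Principal: "*"
        Action: s3:PutObject
        Resource: arn:aws:s3:::${self:custom.s3_name}/*
        Condition:
          StringNotEquals:
            s3:x-amz-server-side-encryption: AES256
      # Deny PUTs that omit the SSE header entirely
      - Sid: DenyUnencryptedObjectUploads
        Effect: Deny
        Principal: "*"
        Action: s3:PutObject
        Resource: arn:aws:s3:::${self:custom.s3_name}/*
        Condition:
          "Null":
            s3:x-amz-server-side-encryption: "true"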
Lambda returns Presigned URLs with SSE
When we generate the presigned URL, we include a requirement for SSE using AES. We’re using Python and the Boto3 SDK.
import boto3

s3 = boto3.client('s3')  # See below about non-default Signature Version 4
params = {
    'Bucket': UPLOAD_BUCKET_NAME,
    'Key': 'doc_pdf/' + filename,
    'ContentType': content_type,
    'ServerSideEncryption': 'AES256'
}
url = s3.generate_presigned_url('put_object',
                                Params=params,
                                ExpiresIn=PSU_LIFETIME_SECONDS)
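For context, here is a minimal sketch of how this snippet might sit inside the Lambda handler; the handler name, API Gateway event shape, and environment variable are our assumptions for illustration, not code from the post:

import json
import os

import boto3

s3 = boto3.client('s3')

def get_upload_url(event, context):
    # Hypothetical API Gateway proxy event: the browser POSTs the
    # filename and MIME type as JSON.
    body = json.loads(event['body'])
    params = {
        'Bucket': os.environ['UPLOAD_BUCKET_NAME'],  # hypothetical env var
        'Key': 'doc_pdf/' + body['filename'],
        'ContentType': body['content_type'],
        'ServerSideEncryption': 'AES256'
    }
    url = s3.generate_presigned_url('put_object', Params=params, ExpiresIn=300)
    # Return the URL so the browser can PUT the file directly to S3.
    return {'statusCode': 200,
            'headers': {'Access-Control-Allow-Origin': '*'},
            'body': json.dumps({'url': url})}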
The URL we get includes query string parameters indicating we want x-amz-server-side-encryption, and the shape of the URL depends on the AWS signature version we’re using (see below).
This seems fine, but it doesn't actually force the encryption. The generated URL can only carry information in its query string, and S3 doesn't take the encryption instruction from there; it looks for HTTP headers on the PUT to tell it how to handle the upload.
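As a quick illustration (a sketch, not from the original post: the requests library, the url variable from the Lambda, and the exact status codes under our bucket policy are our assumptions), a client that omits the header gets rejected:

import requests

# url: a presigned URL obtained from the Lambda above (hypothetical variable).
with open('mydoc.pdf', 'rb') as f:
    body = f.read()

# Without the SSE header, S3 refuses the upload, even though
# x-amz-server-side-encryption appears in the URL's query string.
resp = requests.put(url, data=body,
                    headers={'Content-Type': 'application/pdf'})
print(resp.status_code)  # expect 403

# With the header, the PUT satisfies both the signature and the bucket policy.
resp = requests.put(url, data=body,
                    headers={'Content-Type': 'application/pdf',
                             'x-amz-server-side-encryption': 'AES256'})
print(resp.status_code)  # expect 200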
Browser Must Set SSE HTTP Headers
Since S3 wants HTTP headers to tell it to enable encryption (as well as Content-Type and other metadata), we must have our client code set them. In our Angular app, we do this:
putUploadFile(uploadURL: UploadURL, file: File, fileBody): Observable<any> {
  const headers = {
    'Content-Type': file.type,
    'x-amz-server-side-encryption': 'AES256', // force SSE AES256 on PUT
  };
  const options = { headers: headers };
  return this.http.put(uploadURL.url, fileBody, options).pipe(
    tap(res => console.log(`putUploadFile got res=${JSON.stringify(res)}`)),
    catchError(this.handleError<UploadURL>('putUploadFile', null))
  );
}
Watching the browser console, we can grab the generated URL and use curl to PUT to the S3 bucket with the same presigned URL and HTTP headers; our upload works:
curl -v -X PUT \
  -H "Content-Type: application/pdf" \
  -H "x-amz-server-side-encryption: AES256" \
  --upload-file mydoc.pdf \
  "$PresignedUrlWeGotFromLambda"
However, when the Angular app does the HTTP PUT, it fails.
Angular PUT Requires S3 CORS Allowing the SSE Header
The console shows errors in the HTTP OPTIONS preflight check; this sure smells like a CORS problem. When we had our serverless.yml create our bucket, we defined a CORS configuration that allowed us to PUT, and to specify Content-Type headers. We just need to add a new CORS setting to tolerate the SSE header.
Type: AWS::S3::Bucket
Properties:
  BucketName: ${self:custom.s3_name}
  CorsConfiguration:
    # Needed so WebUI can do OPTIONS preflight check
    CorsRules:
      - AllowedMethods:
          - PUT
        AllowedOrigins:
          - "*"
        AllowedHeaders:
          - content-type
          - x-amz-server-side-encryption
We could have configured it with AllowedHeaders: "*", but that's more permissive than we'd like, so we opt to be explicit about what we tolerate.
We redeploy our Serverless stack to update the S3 configuration, and our app starts uploading successfully! If you’re not doing this with Serverless, just update through the AWS Console or whatnot.
Now we can see the files we uploaded are AES-256-encrypted:
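One way to verify this (a sketch using boto3's head_object; the bucket and key names are taken from the examples in this post):

import boto3

s3 = boto3.client('s3')
# head_object returns the object's metadata, including its SSE setting.
resp = s3.head_object(Bucket='myuploads-dev', Key='doc_pdf/mydoc.pdf')
print(resp['ServerSideEncryption'])  # prints 'AES256' for our uploads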
AWS Signature: default versus V4
By default, the boto3 S3 client does not use AWS Signature Version 4, yet the upload works anyway. We've used V4 before on other projects and understood it to be best practice; we thought it might be required here, but it turns out it's not. However, we can enable V4, and it works great. Interestingly, the generated presigned URLs are very different.
In both cases, the base URL we get is the same:
https://myuploads-dev.s3.amazonaws.com/doc_pdf/mydoc.pdf
There are significant differences in the query string parameters appended to this. Below we show the decoded parameters for comparison.
Default Signature
We get an S3 client with the default signature algorithm:
s3 = boto3.client('s3')
The query string parameters are:
AWSAccessKeyId: ASPI31415926535
Signature: Vqfl0NqIrr6ifBB3f9T1hXI5/+U=
content-type: application/pdf
x-amz-server-side-encryption: AES256
x-amz-security-token: …
Expires: 1541015925
AWS V4 Signature
We can request the V4 signature like:
from botocore.client import Config
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
The query string parameters become:
X-Amz-Algorithm: AWS4-HMAC-SHA256
X-Amz-Credential: ASPI31415926535/20181031/us-east-1/s3/aws4_request
X-Amz-Date: 20181031T190519Z
X-Amz-Expires: 3600
X-Amz-SignedHeaders: content-type;host;x-amz-server-side-encryption
X-Amz-Security-Token: …
X-Amz-Signature: a22b58dce238ed393026027ec0b40a7ffd0a9647d792fb0cc3d720bc1cc89fe4
Wrap-up
It takes a lot of cat-herding to make this work, but once in place it works beautifully: enforced encryption, time-limited presigned URLs, and browser uploads direct to S3. Now that we know all the pieces that need to be addressed, we can use the same approach to set other S3 object properties, like read-only ACLs, expiration dates, etc.