Comments for Upload a File to S3

Hmm in this case we are only allowing authenticated users to upload to the S3 bucket.

I was getting a 403 Access Denied during file upload for the IdentityPoolAuth_role with the policy mentioned in the IdentityPoolAuth_role Policy chapter.

Had to change the policy to the one mentioned in iam-roles for file upload to work.

  "Version": "2012-10-17",
  "Statement": [
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {"StringLike": {"s3:prefix": ["${}/*"]}}
      "Action": [
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::mybucket/private/${}/*"]
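For reference, the pattern AWS documents for scoping per-user S3 access to a Cognito identity looks like the sketch below. The bucket name is a placeholder, and the policy variable shown is the standard Cognito identity variable from the AWS docs; the quoted policy above had these details cut off.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::mybucket"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["private/${cognito-identity.amazonaws.com:sub}/*"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::mybucket/private/${cognito-identity.amazonaws.com:sub}/*"]
    }
  ]
}
```

The condition on the first statement limits listing to each user's own `private/<identity-id>/` prefix, while the second statement grants object-level access only under that same prefix.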

Thanks for reporting back!


Hey @jayair, I found a weird behaviour in this chapter. In the createNote function, the body of the note only gets sent to DDB if I pass the object {body: note} instead of {body: note.content}, which is weird because that's a whole object and not just the string we want. It does work as intended if written like this, like you did before the file upload. Any clues why?

Hmm can you paste the code you are referring to?

Nevermind @jayair I’m dumb ahah. Thanks anyway man!

tl;dr For config.js don’t use the S3 bucket name you manually created. Use the bucket name that’s created automatically with the Serverless command.

If you’re like me you’ve come here looking for answers as to why you keep getting an error (i.e. CORB and No ‘Access-Control-Allow-Origin’) when trying to upload a file to S3, even though you’ve followed all the instructions up to now to a T.

It appears that the S3 bucket we create here is not the same as the one created here (using the Serverless setup from the chapter, add-support-for-es6-es7-javascript). If you head over to AWS and check out your S3 buckets you’ll see the bucket you created plus an additional one that was created when you ran the Serverless command. For me, I had “notes-app-uploads-v01” and “notes-app-api-prod-serverlessdeploymentbucket-1c3wwxszxsnup”. Yours will differ slightly. In any case there are two different buckets. Don’t make the same mistake I made and delete the long-worded one. In fact keep it and use that name to populate the bucket name in config.js.

Hopefully this will save someone minutes/hours of frustration.


Thanks for the details @mikeyamato! I’m sure it’ll be helpful to others.

I was experiencing the same issues everyone is describing above. For myself the solution was to ensure my IAM policy specified the original bucket name that was configured earlier in the tutorial and config.js also referenced the correct bucket name.

  • I went to IAM and updated my authorized policy with the correct original bucket name.
  • In config.js I updated the value for s3.BUCKET to the correct bucket name.

I ignored the additional bucket created by the serverless deployment ( notes-app-api-prod-serverlessdeploymentbucket-17e7jhh2kw9hv in my case).

Fixed my issue.
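To make that concrete, the relevant section of config.js looks roughly like the sketch below (the file path, bucket name, and region are placeholders for your own values):

```javascript
// src/config.js (sketch — substitute your own values)
export default {
  s3: {
    REGION: "us-east-1",
    BUCKET: "notes-app-uploads-v01" // the bucket you created manually,
                                    // not the serverless deployment bucket
  }
  // ...other sections (apiGateway, cognito) omitted
};
```

Whichever bucket name you put here must also be the one your IAM policy references.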


Thank you for reporting back!

Hi! the response is .
The region is us-east-1

I want to think this is incorrect. Both buckets are in use. The one created during the deploy contains the configuration and Lambdas for our app when executing the serverless deploy command, while the one we created manually is just to store the uploaded files attached to our notes. I don’t think we should be uploading user files to the same bucket we are deploying our app to.

Hi all, I’m struggling with a permissions issue. The S3 upload is being rejected, but the CLI tester works OK.

The issue seems to be the Policy, specifically, this below works:

            "Effect": "Allow",
            "Action": [
            "Resource": [

But this doesn’t – the Resource constraints (following AWS documentation) generate Access Denied for my authenticated user. I’ve been around the block enough to suspect this may well be a symptom of something else rather than a cause (I’m going to check this user really is authenticated, for instance) – but in case anyone else has had this issue, I had to temporarily relax the policy so any user has access to any other user’s data for this demo app:

            "Effect": "Allow",
            "Action": [
            "Resource": [

When I try to use the test-api I get an HTTP 500 error. I tried running

serverless invoke --function create --path mocks/create-event.json 

and got the same 500 error there. When I run the test api utility I get the following

Making API request

   status: 403,
   statusText: 'Forbidden',
   data: '{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n' +
   '\n' +
   'The Canonical String for this request should have been\n' +
   "'POST\n" +
   '/prod/notes\n' +
   '\n' +
   'accept:application/json\n' +
   'content-type:application/json\n' +
   '\n' +
   'x-amz-date:20191227T175028Z\n' +
   '\n' +
   'accept;content-type;host;x-amz-date\n' +
   "3a99f7c41ea871222ce9eb05cc8c7a5bbfc8e141bbb3c3999cff381d1462d448'\n" +
   '\n' +
   'The String-to-Sign should have been\n' +
   "'AWS4-HMAC-SHA256\n" +
   '20191227T175028Z\n' +
   '20191227/eu-west-2/execute-api/aws4_request\n' +
  "614c1776e4a9a523adf669a111e77ecfe9a486c9fa83fa3e59a56a2e5d956620'\n" +

Any advice will be much appreciated
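One common cause of this particular signature error is a region mismatch: the signing scope in the error output (20191227/eu-west-2/execute-api/aws4_request) shows the request being signed for eu-west-2, so the region in the frontend config has to match where the API was actually deployed. A sketch of the relevant config section (all values are placeholders):

```javascript
// Sketch — both regions must match your actual deployment.
export default {
  apiGateway: {
    REGION: "eu-west-2", // must match the region the API was deployed to
    URL: "https://YOUR_API_ID.execute-api.eu-west-2.amazonaws.com/prod"
  },
  cognito: {
    REGION: "eu-west-2"
    // ...user pool and identity pool IDs omitted
  }
};
```

If the regions already match, the other usual suspects are a stale or mistyped invoke URL and wrong credentials being used to sign the request.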

I replied in the other thread, have a look.

Hmm did you manage to figure out why the original policy was failing?

I followed this all exactly, but when I try to upload a file it doesn’t even send the PUT request. Instead it instantly throws an exception “No credentials”.

I added window.LOG_LEVEL = "DEBUG"; and my console log says that it finds credentials, but then falls back to guest? Not sure what is wrong.

[DEBUG] 43:07.911 Credentials - getting credentials 
[DEBUG] 43:07.912 Credentials - picking up credentials 
[DEBUG] 43:07.912 Credentials - getting new cred promise 
[DEBUG] 43:07.913 Credentials - checking if credentials exists and not expired 
[DEBUG] 43:07.913 Credentials - need to get a new credential or refresh the existing one 
[DEBUG] 43:07.915 AuthClass - Getting current user credentials 
[DEBUG] 43:07.917 AuthClass - Getting current session 
[DEBUG] 43:07.918 AuthClass - Getting the session from this user: 
Object { username: "2a[GUID]0cf", pool: {…}, Session: null, client: {…}, signInUserSession: {…}, authenticationFlowType: "USER_SRP_AUTH", storage: Storage, keyPrefix: "CognitoIdentityServiceProvider.5a[ID]ci", userDataKey: "CognitoIdentityServiceProvider.5a[ID]ci.2a[GUID]0cf.userData", attributes: {…}, … }
[DEBUG] 43:07.921 AuthClass - Succeed to get the user session 
Object { idToken: {…}, refreshToken: {…}, accessToken: {…}, clockDrift: 2 }
[DEBUG] 43:07.922 AuthClass - getting session success 
Object { idToken: {…}, refreshToken: {…}, accessToken: {…}, clockDrift: 2 }
[DEBUG] 43:07.924 Credentials - set credentials from session 
[DEBUG] 43:07.924 Credentials - No Cognito Federated Identity pool provided 
[DEBUG] 43:07.925 AuthClass - getting session failed No Cognito Federated Identity pool provided
[DEBUG] 43:07.926 Credentials - setting credentials for guest
[WARN] 43:07.926 AWSS3Provider - ensure credentials error cannot get guest credentials when mandatory signin enabled
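The “No Cognito Federated Identity pool provided” line in that log usually means identityPoolId never made it into the Amplify configuration, so Storage cannot exchange the user session for AWS credentials and falls back to guest (which then fails because mandatory sign-in is enabled). A sketch of the expected Amplify.configure shape — all IDs and names here are placeholders:

```javascript
import Amplify from "aws-amplify";

// Sketch — substitute your own resource IDs and region.
Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: "us-east-1",
    userPoolId: "us-east-1_XXXXXXXXX",
    identityPoolId: "us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    userPoolWebClientId: "XXXXXXXXXXXXXXXXXXXXXXXXXX"
  },
  Storage: {
    region: "us-east-1",
    bucket: "notes-app-uploads-v01",
    identityPoolId: "us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  }
});
```

Worth checking that the identityPoolId appears in both the Auth and Storage sections and is the Identity Pool ID (region-prefixed GUID), not the User Pool ID.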

Hmm can you check that your S3 permissions are set correctly? It’s at the bottom of this chapter.

I think it’s an Amplify issue. It doesn’t even try to send the PUT request; it instantly throws the no credentials error. Looking at the Amplify source, it has a credentials object which the auth flow must not be setting up correctly.

CORS has been configured with:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="">

As well as an IAM policy allowing my Cognito user pool access to the S3 bucket.

Out of curiosity I spun up a temporary fully public read write bucket and still get the same error. It has to be something on the Amplify side and not to do with this chapter. Thanks.
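For anyone comparing their setup against the CORS configuration quoted earlier in this thread, a complete S3 CORS configuration for this kind of app typically looks like the sketch below. The namespace and elements are from the standard S3 CORS XML format; the wildcard origin is permissive and should be restricted to your app’s URL in production.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```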