Comments for Upload a File to S3

I was experiencing the same issues everyone is describing above. For me, the solution was to make sure my IAM policy specified the original bucket name configured earlier in the tutorial, and that config.js also referenced the correct bucket name.

  • I went to IAM and updated my authorized policy with the correct original bucket name.
  • In config.js I updated the value for s3.BUCKET to the correct bucket name.
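
In case it helps, the config.js piece is just the s3 section. A rough sketch of the shape from earlier in the guide (the region and bucket name here are placeholders for my own values):

export default {
  s3: {
    REGION: "us-east-1",          // region the uploads bucket lives in
    BUCKET: "notes-app-uploads"   // must match the bucket named in the IAM policy
  },
  // apiGateway and cognito sections stay the same
  // ...
};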

I ignored the additional bucket created by the serverless deployment (notes-app-api-prod-serverlessdeploymentbucket-17e7jhh2kw9hv in my case).

Fixed my issue.

Thank you for reporting back!

Hi! The answer is this StackOverflow post: https://stackoverflow.com/questions/25027462/aws-s3-the-bucket-you-are-attempting-to-access-must-be-addressed-using-the-spec/26725760#26725760 .
The region is us-east-1.

I don’t think that’s right. Both buckets are in use: the one created during the deploy holds the configuration and Lambda packages produced by the serverless deploy command, while the one we created manually just stores the files attached to our notes. I don’t think we should be uploading user files to the same bucket we deploy our app to.

Hi all, I’m struggling with a permissions issue: the S3 upload is being rejected, although the CLI tester works fine.

The issue seems to be the policy. Specifically, the Resource constraint below (following the AWS documentation) generates access denied for my authenticated user:

            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::kiwi-notes-app-uploads/private/${cognito-identity.amazonws.com:sub}/*"
            ]

But relaxing the Resource does work. Now I’ve been around the block enough to suspect the access denied is a symptom of something else rather than the cause (I’m going to check this user really is authenticated, for instance). But in case anyone else has had this issue: I had to temporarily relax the policy so any user has access to any other user’s data for this demo app:

            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::kiwi-notes-app-uploads/private/*"
            ]

When I try to use the test-api I get an HTTP 500 error. I tried running

serverless invoke --function create --path mocks/create-event.json 

and got the same 500 error there. When I run the test API utility I get the following:

Making API request

{
   status: 403,
   statusText: 'Forbidden',
   data: '{"message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n' +
   '\n' +
   'The Canonical String for this request should have been\n' +
   "'POST\n" +
   '/prod/notes\n' +
   '\n' +
   'accept:application/json\n' +
   'content-type:application/json\n' +
   'host:unbtq4ium0.execute-api.eu-west-2.amazonaws.com\n' +
   'x-amz-date:20191227T175028Z\n' +
   '\n' +
   'accept;content-type;host;x-amz-date\n' +
   "3a99f7c41ea871222ce9eb05cc8c7a5bbfc8e141bbb3c3999cff381d1462d448'\n" +
   '\n' +
   'The String-to-Sign should have been\n' +
   "'AWS4-HMAC-SHA256\n" +
   '20191227T175028Z\n' +
   '20191227/eu-west-2/execute-api/aws4_request\n' +
  "614c1776e4a9a523adf669a111e77ecfe9a486c9fa83fa3e59a56a2e5d956620'\n" +
  '"}'
}

Any advice would be much appreciated.

I replied in the other thread, have a look.

Hmm did you manage to figure out why the original policy was failing? One thing worth double-checking: the Resource in the stricter policy uses cognito-identity.amazonws.com instead of cognito-identity.amazonaws.com, so the policy variable would never resolve, which would explain the access denied.

I followed this all exactly, but when I try to upload a file it doesn’t even send the PUT request. Instead it instantly throws an exception “No credentials”.

I added window.LOG_LEVEL = "DEBUG"; and my console log says that it finds credentials, but then falls back to guest? Not sure what is wrong.

[DEBUG] 43:07.911 Credentials - getting credentials 
[DEBUG] 43:07.912 Credentials - picking up credentials 
[DEBUG] 43:07.912 Credentials - getting new cred promise 
[DEBUG] 43:07.913 Credentials - checking if credentials exists and not expired 
[DEBUG] 43:07.913 Credentials - need to get a new credential or refresh the existing one 
[DEBUG] 43:07.915 AuthClass - Getting current user credentials 
[DEBUG] 43:07.917 AuthClass - Getting current session 
[DEBUG] 43:07.918 AuthClass - Getting the session from this user: 
Object { username: "2a[GUID]0cf", pool: {…}, Session: null, client: {…}, signInUserSession: {…}, authenticationFlowType: "USER_SRP_AUTH", storage: Storage, keyPrefix: "CognitoIdentityServiceProvider.5a[ID]ci", userDataKey: "CognitoIdentityServiceProvider.5a[ID]ci.2a[GUID]0cf.userData", attributes: {…}, … }
[DEBUG] 43:07.921 AuthClass - Succeed to get the user session 
Object { idToken: {…}, refreshToken: {…}, accessToken: {…}, clockDrift: 2 }
[DEBUG] 43:07.922 AuthClass - getting session success 
Object { idToken: {…}, refreshToken: {…}, accessToken: {…}, clockDrift: 2 }
[DEBUG] 43:07.924 Credentials - set credentials from session 
[DEBUG] 43:07.924 Credentials - No Cognito Federated Identity pool provided 
[DEBUG] 43:07.925 AuthClass - getting session failed No Cognito Federated Identity pool provided
[DEBUG] 43:07.926 Credentials - setting credentials for guest
[WARN] 43:07.926 AWSS3Provider - ensure credentials error cannot get guest credentials when mandatory signin enabled

Hmm can you check that your S3 permissions are set correctly? It’s at the bottom of this chapter.

I think it’s an Amplify issue. It doesn’t even try to send the PUT request, it instantly throws the no credentials error. Looking at the Amplify source, it has a credentials object which the auth flow must not be setting up correctly.
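
That said, the "No Cognito Federated Identity pool provided" line in the debug log makes me wonder whether the identityPoolId is actually making it into Amplify.configure. For comparison, the shape the guide uses is roughly this (a sketch; key names per the Amplify docs, values pulled from config.js):

import Amplify from "aws-amplify";
import config from "./config";

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    // this is the Federated Identity pool the debug message refers to;
    // if it's missing or misnamed, Amplify falls back to guest credentials
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  },
  Storage: {
    region: config.s3.REGION,
    bucket: config.s3.BUCKET,
    identityPoolId: config.cognito.IDENTITY_POOL_ID
  }
});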

CORS has been configured with:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
    <ExposeHeader>x-amz-request-id</ExposeHeader>
    <ExposeHeader>x-amz-id-2</ExposeHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

As well as an IAM policy allowing my Cognito user pool access to the S3 bucket.

Out of curiosity I spun up a temporary, fully public read/write bucket and still get the same error. It has to be something on the Amplify side, not this chapter. Thanks.

I see. That’s really weird.

Hi, I ran into an issue with an undefined function, createNote. Since the code from the copy button doesn’t include that function, you’ll need to add it back manually.

Hmm can you show me a screenshot of what is missing? I’ll edit the guide.

I looked but I can’t find which chapter first defines the createNote function. But when I followed the steps in this chapter and pasted in the new “async function handleSubmit(event)” I no longer had the createNote function. So the error I got was just stating that there was no createNote function. Don’t know if that’s helpful. If I can trace my steps as to where/when the createNote function was lost I’ll post back here.
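
In case anyone else loses it: the helper that the new handleSubmit calls looks roughly like this (a sketch based on the earlier API chapter, assuming the API was registered under the name "notes"):

function createNote(note) {
  // POST the note (including the attachment key) to the notes API
  return API.post("notes", "/notes", {
    body: note
  });
}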

When I try to upload multiple images to S3, I get:

Error: Unsupported body payload object

import React, { useState, useEffect, useRef } from "react";
import { IonButton, IonButtons, IonInput, IonBackButton, IonContent, IonHeader, IonList, IonItem, IonItemGroup, IonLoading, IonItemDivider, IonLabel, IonPage, IonTitle, IonToolbar, IonAlert, IonRadio, IonRadioGroup, IonListHeader, IonSlide, IonSlides, IonIcon } from '@ionic/react';
import { LinkContainer } from "react-router-bootstrap";
import { API, Storage } from "aws-amplify";

function JobInfo(props) {
  // job and date are used below; assuming they come in via props
  const { job, date } = props;
  const [showLoading, setShowLoading] = useState(false);
  const file = useRef(null);

  function handleFileChange(event) {
    file.current = event.target.files[0];
  }

  async function handleSubmit(event) {
    event.preventDefault();

    setShowLoading(true);

    try {
      const attachment = file.current;
      const filename = `${job.JobId}-${file.name}`;

      const stored = await Storage.put(filename, file, {
        contentType: file.type
      });

      return stored.key;
    } catch (e) {
      alert(e);
    }
    setShowLoading(false);
  }

  var centerText = { textAlign: "center" };
  let today = new Date().toDateString();
  let start = new Date(date).toDateString();

  return (
    <IonPage>
      <IonHeader>
        <IonToolbar>
          <IonButtons slot="start">
            <IonBackButton />
          </IonButtons>
          <IonTitle>Upload Images</IonTitle>
        </IonToolbar>
      </IonHeader>
      <IonContent className="ion-padding">
        <div style={{ display: "flex", justifyContent: "center" }}>
          <IonItem>
            <IonInput name="file" multiple="true" id="file" type="file" onIonChange={(e) => handleFileChange}></IonInput>
          </IonItem>
          <IonButton expand="block" color="primary" strong="true" size="default" type="submit" onClick={handleSubmit}>Upload</IonButton>
        </div>
        <IonLoading
          isOpen={showLoading}
          onDidDismiss={() => setShowLoading(false)}
          message={'Please Wait...'}
          duration={6000}
        />
      </IonContent>
    </IonPage>
  );
}

In that snippet, how are you uploading multiple files?
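
One thing that jumps out: the "Unsupported body payload object" error usually means Storage.put isn’t being given a File/Blob as the body. In the snippet above it’s being passed the file ref object itself rather than file.current, and onIonChange={(e) => handleFileChange} never actually calls the handler. Storage.put also only takes one object per call, so multiple files need a loop. A rough sketch under those assumptions (setShowLoading and job.JobId taken from the snippet above):

function handleFileChange(event) {
  // keep the whole FileList so more than one file can be uploaded
  file.current = event.target.files;
}

async function handleSubmit(event) {
  event.preventDefault();
  setShowLoading(true);

  try {
    const files = Array.from(file.current || []);
    const keys = [];

    for (const f of files) {
      // pass the File object itself as the body, one Storage.put call per file
      const stored = await Storage.put(`${job.JobId}-${f.name}`, f, {
        contentType: f.type
      });
      keys.push(stored.key);
    }

    return keys;
  } catch (e) {
    alert(e);
  } finally {
    // runs even when we return early or an upload fails
    setShowLoading(false);
  }
}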

Is there no way to upload to any folders besides public/protected/private? Like if you wanted to have groups of users that could access/modify objects in a shared folder. It seems like Amplify Storage forces public/private/protected to be first in a key, and then everything in there goes into a folder specific to a user.

This one is a bit tricky. AFAIK, the way the files are protected is by creating them inside a folder that uses the user id in its path, which lets Cognito restrict access. If you wanted a group of users to access a shared folder, you’d probably need to create a private folder and then handle access controls on your own.

@jayair thanks. I think that’s right, but what’s weird is that with Storage.put, the documentation seems to always require that the first folder must be public/protected/private, and the second folder must be the user ID.

I see no way to specify any path, like ‘/my/own/path/helloWorld.jpg’.

It seems like for this case I may actually need to upload to a serverless function and let it put the file in S3?
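
A minimal sketch of what that could look like with the Node.js AWS SDK; the handler shape, the groups/ key layout, and sending the file base64-encoded in the request body are all assumptions, not something from the guide:

import AWS from "aws-sdk";

const s3 = new AWS.S3();

export async function main(event) {
  const data = JSON.parse(event.body);

  // The function decides the final key, so it can use any folder layout,
  // e.g. a shared folder per group instead of public/protected/private
  const key = `groups/${data.groupId}/${data.filename}`;

  await s3
    .putObject({
      Bucket: process.env.UPLOADS_BUCKET,
      Key: key,
      Body: Buffer.from(data.content, "base64"),
      ContentType: data.contentType
    })
    .promise();

  return {
    statusCode: 200,
    headers: { "Access-Control-Allow-Origin": "*" },
    body: JSON.stringify({ key })
  };
}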