Configure DynamoDB in Serverless

@jayair, I was using serverless 1.30.3. I just upgraded to 1.31.0.

Same error when running locally. But deployed, it appears to work.

Thanks. You're right, this is an issue. It happens because the Ref: option is a CloudFormation feature, and Serverless does not have access to it when running locally. Here are a couple of ways around it.

  1. You can reference the resource directly: tableName: ${self:resources.0.Resources.NotesTable.Properties.TableName}, where 0 is the index of the resource specified in your serverless.yml.

  2. Create a custom variable for the table name in your serverless.yml and use it in your DynamoDB resource.

We will probably update the tutorial to go with the second option.
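For reference, the second option looks roughly like this. This is a sketch, not the final tutorial code; the stage-based naming and variable names here are assumptions:

```yaml
# serverless.yml (sketch)
custom:
  stage: ${opt:stage, self:provider.stage}
  # Name the table per stage, e.g. dev-notes, prod-notes
  tableName: ${self:custom.stage}-notes

provider:
  environment:
    # Expose the name to the Lambda functions as well
    tableName: ${self:custom.tableName}

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        # Reuse the same custom variable in the resource definition
        TableName: ${self:custom.tableName}
```

Because the name comes from a plain Serverless variable rather than a CloudFormation Ref, it resolves the same way locally and when deployed.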

Thanks for getting back to me. I like the option for #2 and will likely implement that. #1 just feels a bit clunky.


Why can we remove the following line from libs/dynamodb-lib.js?

AWS.config.update({ region: "us-east-1" });

Without this line, how do Lambda functions know the region of DynamoDB that we want to connect to?

I need to confirm this, but by default the AWS SDK inside a Lambda function defaults to the region the Lambda is running in.

We are going to remove this line from Part I of the tutorial as well since it is causing some confusion.
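If you still want the region to be explicit for local runs, one approach is to read it from the environment with a fallback. This is just a sketch, not tutorial code; the helper name and the fallback value are assumptions (inside the Lambda runtime, AWS_REGION is set automatically):

```javascript
// Resolve the DynamoDB region: prefer the AWS_REGION environment
// variable (set automatically in the Lambda runtime), and fall back
// to a hard-coded default for local development.
function resolveRegion(env) {
  return env.AWS_REGION || "us-east-1";
}

// In libs/dynamodb-lib.js you could then do something like:
//   AWS.config.update({ region: resolveRegion(process.env) });
console.log(resolveRegion(process.env));
```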

Why can't I rename the sort key noteId to itemId, push to Git, and redeploy with Seed?

An error occurred: NotesTable - Property AttributeDefinitions is inconsistent with the KeySchema of the table and the secondary indexes.

And therefore a build fails.

I think what is happening here is that because of the index change it's not able to find what you had previously defined. Can you post what your serverless.yml or resource looked like before/after the change?

Before

Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: noteId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: noteId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}

After

Resources:
  NotesTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: ${self:custom.tableName}
      AttributeDefinitions:
        - AttributeName: userId
          AttributeType: S
        - AttributeName: itemId
          AttributeType: S
      KeySchema:
        - AttributeName: userId
          KeyType: HASH
        - AttributeName: itemId
          KeyType: RANGE
      # Set the capacity based on the stage
      ProvisionedThroughput:
        ReadCapacityUnits: ${self:custom.tableThroughput}
        WriteCapacityUnits: ${self:custom.tableThroughput}

Error

An error occurred: NotesTable - CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename dev-notes and update the stack again.

@cwayfinder
I am pretty sure that once a table is created you cannot change its primary key. You would most likely have to either remove the table and recreate it with the new schema, add a secondary index with your new keys, or create a new table (rename the table in your yaml file so a new table is built on deploy).

@cwayfinder It's exactly as @mathewgries is saying. You can't change the key schema after deploying it. In the docs a change like this entails a "Replacement", meaning that you can't update the table in place; it needs to be removed and replaced.

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-keyschema

You have two options:

  1. Remove the table first, and then recreate it with different KeySchema
  2. Give the new table a different name, e.g. dev-items instead of dev-notes, and then move the data over manually.

Thanks. Is it possible to resolve such a case during CI process without manual work?

@cwayfinder
You can try this.

  1. Backup the table if you need the data stored in it
  2. Manually delete the table from AWS
  3. Comment all lines referencing your table in serverless.yml
  4. Deploy the services ( this should remove all references to the db)
  5. Update the schema in your db declaration
  6. Deploy again

The new table should be created with your new schema
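If you want to script the manual parts, the backup and delete steps can be done with the AWS CLI. This is a sketch; dev-notes and the backup name are placeholders for your own table:

```bash
# 1. Back up the table if you need the data (on-demand backup)
aws dynamodb create-backup \
  --table-name dev-notes \
  --backup-name dev-notes-before-schema-change

# 2. Delete the table once the backup is available
aws dynamodb delete-table --table-name dev-notes
```

After that, follow the remaining steps above and deploy with the updated schema.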

@cwayfinder I'd add that this shouldn't happen often, and you probably don't want to do it through your CI.

Thank you so much for this site and your willingness to share your knowledge with the greater internet.
So glad I found you <3


Thank you for the kind words! It means a lot!

Please add clarity to how you're referencing the resources in your serverless.yml.

This is important information that could use some expanding on.

For example, how would I reference an index inside the table?

Hmm what do you mean by reference an index inside the table? Do you mean in your Lambda function code?

Sorry, that was super unclear. I was referring to referencing a table index when defining your iamRoleStatements resources. An index is separate from the table, so it needs its own permissions.

I ended up with:

Resource:
  # This builds the following format:
  # arn:aws:dynamodb:REGION:ACCOUNT_ID:table/TABLE_NAME/index/*
  Fn::Join:
    - ''
    - - 'arn:aws:dynamodb:'
      - Ref: AWS::Region
      - ':'
      - Ref: AWS::AccountId
      - ':table/'
      - '${self:custom.tableName}/'
      - 'index/*'

Or,

Resource: !Sub '${NotesTable.Arn}/index/*'

Ah I see. That's a good point. Thanks for sharing.