Introduction

The AWS-SDK for JavaScript/NodeJS sat at version 2 for a very long time. At the end of 2020 AWS released version 3 of the SDK, and this is now the default in Lambda from the NodeJS 18.x runtime onwards. AWS-SDK v3 is not backward compatible with v2, so changes to the code will need to be made.

This document provides a quick overview of the differences and gives code examples showing how API calls should be constructed.

General differences

Modularized

Where the AWS-SDK v2 was one big bundle, the AWS-SDK v3 is modularized. This means that the final codebase for a project can be much smaller. However, it also means that you need to be more careful in constructing your package.json file for a project.

AWS-SDK v2:

npm init
npm install aws-sdk

AWS-SDK v3:

npm init
npm install @aws-sdk/client-ec2

For a Lambda environment, the SDK is bundled with the runtime, just like before (v2 up to the NodeJS 16.x runtime, v3 from 18.x onwards), so you don't need to worry about installing packages there.

You also need to be more specific when importing the SDK in your code. You no longer declare the SDK as a whole; instead, you explicitly import the client and every command you need to use.

AWS-SDK v2:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2();

AWS-SDK v3:

import { EC2Client, DescribeTagsCommand } from "@aws-sdk/client-ec2";    // ES Modules import
// const { EC2Client, DescribeTagsCommand } = require("@aws-sdk/client-ec2");  // CommonJS import
var ec2 = new EC2Client();

If you import a large number of modules and there's a potential command name conflict, you can also "import as" (but only for ES Modules import):

import { EC2Client, DescribeTagsCommand as EC2_DescribeTagsCommand } from "@aws-sdk/client-ec2";
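
For instance (assuming you also need the Elastic Load Balancing v2 client), that client exports a DescribeTagsCommand as well, which you would alias in the same way:

import { ElasticLoadBalancingV2Client, DescribeTagsCommand as ELB_DescribeTagsCommand } from "@aws-sdk/client-elastic-load-balancing-v2";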

Alternative syntax for importing a v3 module

There is a different syntax that can be used with AWS-SDK v3 that more closely resembles the v2 syntax:

import { EC2 } from "@aws-sdk/client-ec2";    // ES Modules import
// const { EC2 } = require("@aws-sdk/client-ec2");  // CommonJS import
var ec2 = new EC2();

If you use this syntax, you don't have to instantiate a new command and send it to the client, but you can call the command as a method of the client directly. That makes code marginally shorter and more readable. (See examples further down.)

However, if you use tree-shaking tools to generate the smallest possible package sizes, for instance because the code needs to be downloaded and run inside a browser, then this style generates bigger packages. Also, the JavaScript/Node parser needs to parse a greater body of code when importing the packages in this way, leading to marginally slower startup of your code. (In a limited test, though, the time difference was not measurable.)

Support for SSO in the .aws/config file

The AWS-SDK v2 had no support for SSO sessions and role switches in the config file. The AWS-SDK v3 does. This means that the following in your ~/.aws/config file is now possible and handled properly:

[profile SSOProfile]
sso_session = sso-cloud9
sso_account_id = 123456789012 
sso_role_name = AdministratorAccess
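
For illustration, a minimal sketch of using such a profile explicitly from code, via the fromSSO credential provider (assuming the profile name above; alternatively, simply set the AWS_PROFILE environment variable):

const { fromSSO } = require( "@aws-sdk/credential-providers" );
const { S3Client } = require( "@aws-sdk/client-s3" );

// Load credentials from the SSO profile defined in ~/.aws/config
const s3 = new S3Client( {
  credentials: fromSSO( { profile: "SSOProfile" } )
} );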

Better handling of Promises

The AWS-SDK v2 primarily relied on callbacks. This led to the familiar "callback hell" if you needed to make multiple API calls, either in parallel or in series. It was possible to append ".promise()" to an API call to turn it into a Promise and handle it with async/await, but this required extra code that was easily forgotten. Furthermore, all the examples in the v2 documentation use callbacks.

An AWS-SDK v3 API call always returns a Promise that can be handled with await. The examples in the documentation also use these constructs.

If you are going to use await in your code, your outer function has to be an async function. In AWS-SDK v2 you could get away with a plain (non-async) handler if you were using callbacks:

exports.handler = (event, context) => {
  ...
  return( {} );
};

For all practical purposes, in AWS-SDK v3 your handler has to be defined as an async function:

exports.handler = async (event, context) => {
  ...
  return( Promise.resolve( {} ) );
};

(Note: If you have an "async" function and you return a non-Promise object, NodeJS will automatically wrap this in a resolved Promise. So if you write "return( {} );" on line 3 of the example above, things will still work OK. Nevertheless, I prefer to be explicit so that I know what's going on. Also, there is a corner case where the behaviour is not exactly the same. See this page.)

If your Lambda code, for testing purposes, also needs to run as a standalone program, then you need to make sure your main function is wrapped in an async function as well. So instead of something like this:

handler(testevent, testcontext);

you should now do this:

( async () => {
  await handler(testevent, testcontext);
})();

Support for pagination

The AWS-SDK v3 can generate an iterator-like object, similar to Python, that greatly simplifies the handling of paginated API calls. For examples, see further down.

Better use of new JS features

The AWS-SDK v3 tries to be more efficient in operation by using newer JS features. As an example, if you do an s3:GetObject, the object content itself is no longer contained directly in the response. Instead, the response contains a stream. This allows you to handle the contents of that object in a streaming fashion, which can be much more memory-efficient for large objects. There are also methods available that download the stream for you, but you will have to "await" these. See below for examples.

Full API Call examples

AWS-SDK v2 examples

AWS-SDK v2 with callbacks and error handling:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();
// params ({ Bucket, Key, UploadId }) is assumed to be defined elsewhere
s3.abortMultipartUpload(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

AWS-SDK v2 with Promise() (no error handling for clarity):

var s3 = new AWS.S3();
var result = await s3.abortMultipartUpload(params).promise();
console.log( result )

AWS-SDK v2 with Promise() (error handling with catch()):

var s3 = new AWS.S3();
var result = await s3.abortMultipartUpload(params).promise().catch( function(e) {
  console.error( e );
  return( {} );
});
console.log( result )

Note that the return statement on line 4 is the return statement of the inline catch() function. So the returned value ({}) ends up in the "result" variable.

AWS-SDK v2 with Promise() (error handling with try/catch):

var s3 = new AWS.S3();
try {
  var result = await s3.abortMultipartUpload(params).promise();
} catch( e ) {
  console.error( e );
  return( Promise.reject( e ) );
}
console.log( result )

Note that in this example, the return statement on line 6 is the return statement of the overall function. This needs to be an async function, and we return a rejected Promise.

AWS-SDK v3

The AWS-SDK v3 documentation does not use callbacks, and using catch() is uncommon. An API command always returns a Promise, which you need to "await" somehow, and errors are typically handled by wrapping each API call in a try/catch block:

const { S3Client, ListBucketsCommand } = require( "@aws-sdk/client-s3" ); 
const s3 = new S3Client();

( async () => {
  try {
    const response = await s3.send( new ListBucketsCommand( {} ) );
    console.log( response.Buckets.length );
  } catch( err ) {
    console.error(err);
    return( Promise.reject( err ));
  }
})();

AWS-SDK v3 Alternative Syntax

In order to make migration easier, there is also a style which more closely resembles the v2 syntax, but this leads to larger packages and slower code startup (see the remarks earlier):

const { S3 } = require( "@aws-sdk/client-s3" ); 
const s3 = new S3();

( async () => {
  try {
    const response = await s3.listBuckets( {} );
    console.log( response.Buckets.length );
  } catch( err ) {
    console.error(err);
    return( Promise.reject( err ));
  }
})();

This alternative way of calling the SDK is not documented on a per-command basis. It does work, provided that you pay attention to the following: in the first example the command is a standalone object which, by convention, is first-letter-capitalized (ListBucketsCommand, with three capitals in total). In the second example the command is a method of the client object which, by convention, starts with a lowercase letter (s3.listBuckets, with just one capital).

AWS-SDK v3 with callbacks

If you have existing v2 code that is written with callbacks, and you don't want to modify the structure of your code, then you can stay close to that style in v3 by chaining then() and catch() on the returned Promise. The code would then look like this:

const { S3Client, ListBucketsCommand } = require( "@aws-sdk/client-s3" ); 
const s3 = new S3Client();

s3.send( new ListBucketsCommand( {} ) ).then( function( response ) {
    console.log( response.Buckets.length );
}).catch( function( err ) {
    console.error( err );
    return( {} );
});

Note that in the v2 syntax a single callback function handled both the result and the error. With this v3 syntax you have to separate out that code into two functions.
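
For completeness: the v3 client's send() method also still accepts a NodeJS-style callback as its final argument, which keeps result and error handling together in a single function, closest to the v2 style:

const { S3Client, ListBucketsCommand } = require( "@aws-sdk/client-s3" );
const s3 = new S3Client();

// send() invokes the callback with (err, response), just like the v2 callbacks
s3.send( new ListBucketsCommand( {} ), function( err, response ) {
  if (err) console.error( err );
  else     console.log( response.Buckets.length );
});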

Configuration

When instantiating a client, you can pass in a Configuration block. This is commonly used to switch regions, but can also be used to use alternative credentials.

In SDK v2, the configuration was global, so any change was merged into the master SDK. In SDK v3, the configuration is local to the instantiated client. This helps a lot if you are doing parallel work across multiple accounts or regions.

As far as actual code is concerned, there are virtually no differences. One notable exception is that the individual credential fields (access key, secret key, session token) are now collected under a single "credentials" property. That property mirrors the credentials returned by AssumeRole, so it becomes easier to re-use credentials. (For an example of this, see Getting and Using STS Credentials below.)

AWS-SDK v2:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2( { region: "us-east-1" } );

AWS-SDK v3 standard syntax:

const { EC2Client } = require( "@aws-sdk/client-ec2" );
var ec2 = new EC2Client( { region: "us-east-1" } );

AWS-SDK v3 alternative syntax:

const { EC2 } = require( "@aws-sdk/client-ec2" );
var ec2 = new EC2( { region: "us-east-1" } );
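
To illustrate the per-client configuration, here is a sketch that queries two regions in parallel with two independently configured clients (the region names are just examples):

const { EC2Client, DescribeInstancesCommand } = require( "@aws-sdk/client-ec2" );

const useast = new EC2Client( { region: "us-east-1" } );
const euwest = new EC2Client( { region: "eu-west-1" } );

( async () => {
  // Each client keeps its own configuration, so the two calls don't interfere
  const [a, b] = await Promise.all( [
    useast.send( new DescribeInstancesCommand( {} ) ),
    euwest.send( new DescribeInstancesCommand( {} ) ),
  ] );
  console.log( a.Reservations.length, b.Reservations.length );
})();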

Pagination

With the AWS-SDK v2 you had to implement pagination yourself, based on the continuation token you got back from each API call. This is, of course, still possible with the AWS-SDK v3. But the AWS-SDK v3 finally supports pagination directly as well, through the use of an iterator-like object. This is similar to Python.
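
For comparison, a sketch of such a manual v2-style loop for ListTables (where the continuation token is called LastEvaluatedTableName; this must run inside an async function):

var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB();

var tableNames = [];
var lastTable = undefined;
do {
  // Pass the previous page's continuation token, if any
  var page = await dynamodb.listTables({ ExclusiveStartTableName: lastTable }).promise();
  tableNames.push(...page.TableNames);
  lastTable = page.LastEvaluatedTableName;
} while (lastTable);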

For the v3 paginator, keep in mind that all API calls are async, so you need to use "for await" when looping over the pages:

const { DynamoDBClient, paginateListTables } = require("@aws-sdk/client-dynamodb");

const paginatorConfig = {
  client: new DynamoDBClient({}),
  pageSize: 25
};
const commandParams = {};
const paginator = paginateListTables(paginatorConfig, commandParams);

const tableNames = [];
// Note: "for await" must itself run inside an async function
// (or an ES module with top-level await).
for await (const page of paginator) {
  // page contains a single paginated output.
  tableNames.push(...page.TableNames);
}

You can also simplify the syntax if you don't need a paginatorConfig but just want to use the defaults:

const { DynamoDBClient, paginateListTables } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

const tableNames = [];
for await (const page of paginateListTables({ client }, {})) {
    // page contains a single paginated output.
    tableNames.push(...page.TableNames);
}

Just like earlier, the only examples given in the official AWS documentation are for the standard syntax, like ListTablesCommand. The corresponding paginator, if available, will be called paginateListTables.

Getting and using STS Credentials (role switches)

The way credentials can be obtained, and how they are handled, has become more sophisticated in SDK v3. For a full discussion, see the documentation.

In our environment, performing a role switch is probably the most common use case:

AWS-SDK v2 (error handling skipped for clarity):

var sts = new AWS.STS({ region: stsRegion });
var params = {
  RoleArn: role,
  RoleSessionName: "Something"
};
var token = await sts.assumeRole(params).promise();

var s3 = new AWS.S3( {
  accessKeyId: token.Credentials.AccessKeyId,
  secretAccessKey: token.Credentials.SecretAccessKey,
  sessionToken: token.Credentials.SessionToken,
  region: region
})

AWS-SDK v3 (error handling skipped for clarity):

import { fromTemporaryCredentials } from "@aws-sdk/credential-providers";
import { S3Client } from "@aws-sdk/client-s3";

var credentials = fromTemporaryCredentials( {
  params: {
    RoleArn: role,
    RoleSessionName: "Something"
  }
});

var s3 = new S3Client({
  credentials: credentials,
  region: region
});
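
Note that fromTemporaryCredentials returns a credential provider rather than actual credentials: the AssumeRole call is only made when the first request needs to be signed, and the resulting temporary credentials are cached and refreshed when they expire.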

S3

Specific to S3, a notable difference is in the GetObject API call. In AWS-SDK v2, when you did a GetObject call, the complete contents of the object were already in the response. You just needed to transform the Body from a Buffer to a String or other data type as appropriate:

const aws = require('aws-sdk');
const s3 = new aws.S3(); // Pass in opts to S3 if necessary

var getParams = {
    Bucket: 'test-bucket', // your bucket name,
    Key: 'hello-s3.txt' // path to the object you're looking for
}

s3.getObject(getParams, function(err, data) {
  // Handle any error and exit
  if (err)
    return err;

  // No error happened
  // Convert Body from a Buffer to a String
  let objectData = data.Body.toString('utf-8'); // Use the encoding necessary
});

In AWS-SDK v3, the Body in the result of a GetObject call is a stream (shown in the API documentation as STREAMING_BLOB_VALUE), and you need to 'await' the method that downloads this stream and converts it to a byte array or string.

import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";

const client = new S3Client({});

export const main = async () => {
  const command = new GetObjectCommand({
    Bucket: "test-bucket",
    Key: "hello-s3.txt",
  });

  try {
    const response = await client.send(command);
    // The Body object also has 'transformToByteArray' and 'transformToWebStream' methods.
    const str = await response.Body.transformToString();
    console.log(str);
  } catch (err) {
    console.error(err);
  }
};

As an alternative to .transformToString(), .transformToByteArray() or .transformToWebStream(), you can also handle the stream yourself. See this link for an example. Depending on the rest of your code, handling large objects in a streaming fashion can be more memory-efficient than downloading the whole object and processing it afterwards.
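
For illustration, a minimal sketch of handling the stream yourself in NodeJS (assuming the same bucket and key as above), piping the object straight to a local file without buffering it in memory:

import { GetObjectCommand, S3Client } from "@aws-sdk/client-s3";
import { createWriteStream } from "node:fs";
import { pipeline } from "node:stream/promises";

const client = new S3Client({});

export const main = async () => {
  const response = await client.send(new GetObjectCommand({
    Bucket: "test-bucket",
    Key: "hello-s3.txt",
  }));
  // In NodeJS, response.Body is a Readable stream; pipe it to disk
  // chunk by chunk instead of buffering the whole object in memory.
  await pipeline(response.Body, createWriteStream("hello-s3.txt"));
};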

Note: The upgrade helper script (below) does NOT make this change for you.

Upgrade helper script

There is a GitHub project, aws-sdk-js-codemod, which will make most of the required changes for you.

The project page contains installation instructions and an example. Note that the script generates the "alternative" syntax, which is not necessarily the most efficient solution. However, in a Lambda environment, choosing the standard or the alternative syntax is likely not going to make a measurable difference.

Usage example:

$ npm install aws-sdk-js-codemod

$ cat example.js
import AWS from "aws-sdk";
const client = new AWS.DynamoDB();
const response = await client.listTables({}).promise();

$ npx aws-sdk-js-codemod@latest -t v2-to-v3 example.js

$ cat example.js
import { DynamoDB } from "@aws-sdk/client-dynamodb";
const client = new DynamoDB();
const response = await client.listTables({});

References

AWS-SDK v2 API Reference

AWS-SDK v3 API Reference