
Overwrite and customize resolvers

You are currently viewing the legacy GraphQL Transformer documentation.

Overwriting Resolvers

Let's say you have a simple schema.graphql...

type Todo @model {
  id: ID!
  name: String!
  description: String
}

and you want to change the behavior of the request mapping template for the Query.getTodo resolver that will be generated when the project compiles. To do this, create a file named Query.getTodo.req.vtl in the resolvers directory of your API project. The next time you run amplify push or amplify api gql-compile, your resolver template will be used instead of the auto-generated template. You may similarly create a Query.getTodo.res.vtl file to change the behavior of the resolver's response mapping template.
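For example, a minimal Query.getTodo.req.vtl that issues a plain DynamoDB GetItem could look like the sketch below. This is only an illustrative starting point, not the full generated template; the auto-generated version also includes logic such as authorization checks, so compare against the template the transformer produces before replacing it.

## resolvers/Query.getTodo.req.vtl (illustrative sketch)
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  }
}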

Custom Resolvers

You can add custom Query, Mutation, and Subscription fields when the generated ones do not cover your use case.

  1. Add the required Query, Mutation, or Subscription fields to your schema.
  2. Create resolvers for the newly added Query, Mutation, or Subscription fields by creating request and response templates in the <project-root>/amplify/backend/api/<api-name>/resolvers folder. The GraphQL Transformer follows the <TypeName>.<FieldName>.<req/res>.vtl naming convention for resolvers, so if you add a custom query named myCustomQuery, the resolvers would be named Query.myCustomQuery.req.vtl and Query.myCustomQuery.res.vtl.
  3. Add a resolver resource by creating a custom stack inside the <project-root>/amplify/backend/api/<api-name>/stacks directory of your API.

To add the custom fields, add the following to your schema:

# <project-root>/amplify/backend/api/<api-name>/schema.graphql
type Query {
  # Add all the custom queries here
}
type Mutation {
  # Add all the custom mutations here
}
type Subscription {
  # Add all the custom subscriptions here
}

The GraphQL Transformer by default creates a file called CustomResources.json inside <project-root>/amplify/backend/api/<api-name>/stacks, which can be used to add custom resolvers for newly added Query, Mutation, or Subscription fields. The custom stack gets the following arguments passed to it, allowing you to get details about the API:

Parameter | Type | Possible values | Description
AppSyncApiId | String | | The id of the AppSync API associated with this project
AppSyncApiName | String | | The name of the AppSync API
env | String | | Environment name
S3DeploymentBucket | String | | The S3 bucket containing all deployment assets for the project
S3DeploymentRootKey | String | | An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory
DynamoDBEnableServerSideEncryption | String | true or false | Enable server-side encryption powered by KMS
AuthCognitoUserPoolId | String | | The id of an existing User Pool to connect
DynamoDBModelTableReadIOPS | Number | | The number of read IOPS the table should support
DynamoDBModelTableWriteIOPS | Number | | The number of write IOPS the table should support
DynamoDBBillingMode | String | PAY_PER_REQUEST or PROVISIONED | Configure @model types to create DynamoDB tables with PAY_PER_REQUEST or PROVISIONED billing modes
DynamoDBEnablePointInTimeRecovery | String | true or false | Whether to enable Point in Time Recovery on the table
APIKeyExpirationEpoch | Number | | The epoch time in seconds when the API Key should expire
CreateAPIKey | Number | 0 or 1 | Controls whether an API Key will be created. The value is set automatically by the CLI; if it is 0, no API Key will be created
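Inside the custom stack, these arguments are received as ordinary CloudFormation parameters. A minimal Parameters block, shown here only as a sketch covering the values used by the resolver examples below, might look like this:

"Parameters": {
  "AppSyncApiId": {
    "Type": "String",
    "Description": "The id of the AppSync API associated with this project."
  },
  "env": {
    "Type": "String",
    "Description": "The current Amplify environment name."
  },
  "S3DeploymentBucket": {
    "Type": "String",
    "Description": "The S3 bucket containing all deployment assets for the project."
  },
  "S3DeploymentRootKey": {
    "Type": "String",
    "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
  }
}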

Any additional parameters you add to a custom stack will be exposed as parameters in the root stack, and their values can be set by adding entries for them to the <project-root>/amplify/backend/api/<api-name>/parameters.json file.
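For example, a parameters.json that switches the billing mode and sets a hypothetical custom parameter named MyCustomParam (the name is only for illustration) might look like:

{
  "DynamoDBBillingMode": "PAY_PER_REQUEST",
  "MyCustomParam": "some-value"
}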

To add a custom resolver, add the following to the Resources section of CustomResources.json:

{
  "Resources": {
    "CustomQuery1": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "CommentTable",
        "TypeName": "Query",
        "FieldName": "myCustomQuery",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.myCustomQuery.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.myCustomQuery.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  }
}

The request and response templates should be placed inside the <project-root>/amplify/backend/api/<api-name>/resolvers folder. Resolver templates are written in the Apache Velocity Template Language, commonly referred to as VTL. Query.myCustomQuery.req.vtl is a request mapping template, which receives an incoming AppSync request and transforms it into a JSON document that is subsequently passed to the resolver's data source. Similarly, Query.myCustomQuery.res.vtl is a response mapping template; it receives the data source's response and transforms the data before returning it to the caller.

Several example VTL files are discussed later in this documentation. For more detailed information on VTL, including how it can be used in the context of GraphQL resolvers, see the official AppSync Resolver Mapping Template Reference.

Add a custom resolver that targets a DynamoDB table from @model

This is useful if you want to write a more specific query against a DynamoDB table that was created by @model. For example, assume you had this schema with two @model types and a pair of @connection directives.

type Todo @model {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
}
type Comment @model {
  id: ID!
  content: String
  todo: Todo @connection(name: "TodoComments")
}

This schema will generate resolvers for Query.getTodo, Query.listTodos, Query.getComment, and Query.listComments at the top level, as well as for Todo.comments and Comment.todo to implement the @connection. Under the hood, the transform creates a global secondary index on the Comment table in DynamoDB, but it does not generate a top-level query field that queries the GSI, because you can already fetch the comments for a given todo via the Query.getTodo.comments query path. If you want to fetch all comments for a todo via a top-level query field such as Query.commentsForTodo, do the following:

  • Add the desired field to your schema.graphql.
# ... Todo and Comment types from above
type CommentConnection {
  items: [Comment]
  nextToken: String
}
type Query {
  commentsForTodo(todoId: ID!, limit: Int, nextToken: String): CommentConnection
}
  • Add a resolver resource to a stack in the stacks/ directory. The DataSourceName is auto-generated. In most cases, it'll look like {MODEL_NAME}Table. To confirm the data source name, open the AppSync Console (amplify console api) and click on the Data Sources tab.
{
  // ... The rest of the template
  "Resources": {
    "QueryCommentsForTodoResolver": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "CommentTable",
        "TypeName": "Query",
        "FieldName": "commentsForTodo",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  }
}
  • Write the resolver templates.
## Query.commentsForTodo.req.vtl
#set( $limit = $util.defaultIfNull($context.args.limit, 10) )
{
  "version": "2017-02-28",
  "operation": "Query",
  "query": {
    "expression": "#connectionAttribute = :connectionAttribute",
    "expressionNames": {
      "#connectionAttribute": "commentTodoId"
    },
    "expressionValues": {
      ":connectionAttribute": {
        "S": "$context.args.todoId"
      }
    }
  },
  "scanIndexForward": true,
  "limit": $limit,
  "nextToken": #if( $context.args.nextToken ) "$context.args.nextToken" #else null #end,
  "index": "gsi-TodoComments"
}

## Query.commentsForTodo.res.vtl
$util.toJson($ctx.result)
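After an amplify push, you should be able to exercise the new field from the AppSync console. A query along these lines (the todoId value is just a placeholder) would return the comments for a single todo:

query CommentsForTodo {
  commentsForTodo(todoId: "<todo-id>", limit: 10) {
    items {
      id
      content
    }
    nextToken
  }
}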

Add a custom resolver that targets an AWS Lambda function

Velocity is useful as a fast, secure environment to run arbitrary code, but when it comes to writing complex business logic you can just as easily call out to an AWS Lambda function. Here is how:

  • First create a function by running amplify add function. The rest of the example assumes you created a function named "echofunction" via the amplify add function command. If you already have a function then you may skip this step.

  • Add a field to your schema.graphql that will invoke the AWS Lambda function.

type Query {
  echo(msg: String): String
}
  • Add the function as an AppSync data source in the stack's Resources block.
"EchoLambdaDataSource": {
"Type": "AWS::AppSync::DataSource",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"Name": "EchoFunction",
"Type": "AWS_LAMBDA",
"ServiceRoleArn": {
"Fn::GetAtt": [
"EchoLambdaDataSourceRole",
"Arn"
]
},
"LambdaConfig": {
"LambdaFunctionArn": {
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
{ "env": { "Ref": "env" } }
]
}
}
}
}
  • Add an AWS IAM role to the stack's Resources block that allows AppSync to invoke the Lambda function on your behalf.
"EchoLambdaDataSourceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Sub": [
"EchoLambdaDataSourceRole-${env}",
{ "env": { "Ref": "env" } }
]
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "appsync.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "InvokeLambdaFunction",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"lambda:invokeFunction"
],
"Resource": [
{
"Fn::Sub": [
"arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
{ "env": { "Ref": "env" } }
]
}
]
}
]
}
}
]
}
}
  • Create an AppSync resolver in the stack's Resources block.
"QueryEchoResolver": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": {
"Fn::GetAtt": [
"EchoLambdaDataSource",
"Name"
]
},
"TypeName": "Query",
"FieldName": "echo",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
  • Create the resolver templates in the project's resolvers directory.

resolvers/Query.echo.req.vtl

{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "type": "Query",
    "field": "echo",
    "arguments": $utils.toJson($context.arguments),
    "identity": $utils.toJson($context.identity),
    "source": $utils.toJson($context.source)
  }
}

resolvers/Query.echo.res.vtl

$util.toJson($ctx.result)

After running amplify push, open the AppSync console with amplify api console and test your API with this simple query:

query {
  echo(msg: "Hello, world!")
}

Add a custom geolocation search resolver that targets an OpenSearch domain created by @searchable

To add geolocation search capabilities to an API, add the @searchable directive to an @model type.

type Todo @model @searchable {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
}

The next time you run amplify push, an Amazon OpenSearch domain will be created and configured so that data automatically streams from DynamoDB into OpenSearch. The @searchable directive on the Todo type will generate a Query.searchTodos query field and resolver, but it is not uncommon to want more specific search capabilities. You can write a custom search resolver by following these steps:

  • Add the relevant location and search fields to the schema.
type Comment @model {
  id: ID!
  content: String
  todo: Todo @connection(name: "TodoComments")
}
type Location {
  lat: Float
  lon: Float
}
type Todo @model @searchable {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
  location: Location
}
type TodoConnection {
  items: [Todo]
  nextToken: String
}
input LocationInput {
  lat: Float
  lon: Float
}
type Query {
  nearbyTodos(location: LocationInput!, km: Int): TodoConnection
}
  • Create the resolver record in the stack's Resources block.
"QueryNearbyTodos": {
"Type": "AWS::AppSync::Resolver",
"Properties": {
"ApiId": {
"Ref": "AppSyncApiId"
},
"DataSourceName": "ElasticSearchDomain",
"TypeName": "Query",
"FieldName": "nearbyTodos",
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
"ResponseMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.res.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
}
}
}
  • Write the resolver templates.
## Query.nearbyTodos.req.vtl
## Objects of type Todo will be stored in the /todo index
#set( $indexPath = "/todo/doc/_search" )
#set( $distance = $util.defaultIfNull($ctx.args.km, 200) )
{
  "version": "2017-02-28",
  "operation": "GET",
  "path": "$indexPath.toLowerCase()",
  "params": {
    "body": {
      "query": {
        "bool": {
          "must": {
            "match_all": {}
          },
          "filter": {
            "geo_distance": {
              "distance": "${distance}km",
              "location": $util.toJson($ctx.args.location)
            }
          }
        }
      }
    }
  }
}
## Query.nearbyTodos.res.vtl
#set( $items = [] )
#foreach( $entry in $context.result.hits.hits )
  #if( !$foreach.hasNext )
    #set( $nextToken = "$entry.sort.get(0)" )
  #end
  $util.qr($items.add($entry.get("_source")))
#end
$util.toJson({
  "items": $items,
  "total": $ctx.result.hits.total,
  "nextToken": $nextToken
})
  • Run amplify push

Amazon OpenSearch domains can take a while to deploy. Take this time to read up on OpenSearch to see what capabilities you are about to unlock.

Getting Started with OpenSearch

  • After the update is complete but before creating any objects, update your OpenSearch index mapping.

An index mapping tells OpenSearch how it should treat the data that you are trying to store. By default, if you create an object with field "location": { "lat": 40, "lon": -40 }, OpenSearch will treat that data as an object type when in reality you want it to be treated as a geo_point. You use the mapping APIs to tell OpenSearch how to do this.

Make sure you tell OpenSearch that your location field is a geo_point before creating objects in the index, because otherwise you will need to delete the index and try again. Go to the Amazon OpenSearch Console and find the OpenSearch domain that contains this environment's GraphQL API ID. Click on it and open the OpenSearch Dashboard link. To get the OpenSearch Dashboard to show up, you need to install a browser extension such as AWS Agent and configure it with your AWS profile's access key and secret so the browser can sign your requests to the OpenSearch Dashboard for security reasons. Once you have the OpenSearch Dashboard open, click the "Dev Tools" tab on the left and run the commands below using the in-browser console.

# Create the /todo index if it does not exist
PUT /todo

# Tell OpenSearch that the location field is a geo_point
PUT /todo/_mapping/doc
{
  "properties": {
    "location": {
      "type": "geo_point"
    }
  }
}
  • Use your API to create objects and immediately search them.

After updating the OpenSearch index mapping, open the AWS AppSync console with amplify api console and try out these queries.

mutation CreateTodo {
  createTodo(
    input: {
      name: "Todo 1"
      description: "The first thing to do"
      location: { lat: 43.476446, lon: -110.767786 }
    }
  ) {
    id
    name
    location {
      lat
      lon
    }
    description
  }
}

query NearbyTodos {
  nearbyTodos(location: { lat: 43.476546, lon: -110.768786 }, km: 200) {
    items {
      id
      name
      location {
        lat
        lon
      }
    }
  }
}

When you run Mutation.createTodo, the data will automatically be streamed via AWS Lambda into OpenSearch, so it becomes available via Query.nearbyTodos almost immediately.