Overwrite and customize resolvers

You are currently viewing the legacy GraphQL Transformer documentation.

Overwriting Resolvers

Let's say you have a simple schema.graphql...

```graphql
type Todo @model {
  id: ID!
  name: String!
  description: String
}
```

and you want to change the behavior of the request mapping template for the Query.getTodo resolver that will be generated when the project compiles. To do this, create a file named Query.getTodo.req.vtl in the resolvers directory of your API project. The next time you run amplify push or amplify api gql-compile, your resolver template will be used instead of the auto-generated template. You may similarly create a Query.getTodo.res.vtl file to change the behavior of the resolver's response mapping template.
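
For example, an override of Query.getTodo.req.vtl might look like the following. This is a sketch of a minimal DynamoDB GetItem request template, not the exact template the transformer generates for your project, which may differ:

```vtl
## resolvers/Query.getTodo.req.vtl
## Minimal DynamoDB GetItem request; customize the key or add
## condition logic here as needed.
{
  "version": "2017-02-28",
  "operation": "GetItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.id)
  }
}
```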

Custom Resolvers

You can add custom Query, Mutation, and Subscription fields when the generated ones do not cover your use case.

  1. Add the required Query, Mutation or Subscription type to your schema.
  2. Create resolvers for the newly created Query, Mutation or Subscription by creating request and response templates in the <project-root>/amplify/backend/api/<api-name>/resolvers folder. The GraphQL Transformer follows the <TypeName>.<FieldName>.<req/res>.vtl naming convention for resolvers. So if you're adding a custom query named myCustomQuery, the resolvers would be named Query.myCustomQuery.req.vtl and Query.myCustomQuery.res.vtl.
  3. Add the resolver resources by creating a custom stack inside the <project-root>/amplify/backend/api/<api-name>/stacks directory of your API.
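
Following these steps for a custom query named myCustomQuery, the API directory would be laid out roughly like this:

```
<project-root>/amplify/backend/api/<api-name>/
├── schema.graphql
├── resolvers/
│   ├── Query.myCustomQuery.req.vtl
│   └── Query.myCustomQuery.res.vtl
└── stacks/
    └── CustomResources.json
```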

To add the custom fields, add the following to your schema:

```graphql
# <project-root>/amplify/backend/api/<api-name>/schema.graphql

type Query {
  # Add all the custom queries here
}

type Mutation {
  # Add all the custom mutations here
}

type Subscription {
  # Add all the custom subscriptions here
}
```

The GraphQL Transformer by default creates a file called CustomResources.json inside <project-root>/amplify/backend/api/<api-name>/stacks, which can be used to add the custom resolvers for newly added Query, Mutation or Subscription fields. The custom stack receives the following parameters, giving you access to details about the API:

| Parameter | Type | Possible values | Description |
| --- | --- | --- | --- |
| AppSyncApiId | String | | The id of the AppSync API associated with this project |
| AppSyncApiName | String | | The name of the AppSync API |
| env | String | | Environment name |
| S3DeploymentBucket | String | | The S3 bucket containing all deployment assets for the project |
| S3DeploymentRootKey | String | | An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory |
| DynamoDBEnableServerSideEncryption | String | true or false | Enable server-side encryption powered by KMS |
| AuthCognitoUserPoolId | String | | The id of an existing User Pool to connect |
| DynamoDBModelTableReadIOPS | Number | | The number of read IOPS the table should support |
| DynamoDBModelTableWriteIOPS | Number | | The number of write IOPS the table should support |
| DynamoDBBillingMode | String | PAY_PER_REQUEST or PROVISIONED | Configure @model types to create DynamoDB tables with PAY_PER_REQUEST or PROVISIONED billing modes |
| DynamoDBEnablePointInTimeRecovery | String | true or false | Whether to enable Point in Time Recovery on the table |
| APIKeyExpirationEpoch | Number | | The epoch time in seconds when the API Key should expire |
| CreateAPIKey | Number | 0 or 1 | Controls whether an API Key will be created. The value is automatically set by the CLI. If set to 0, no API Key will be created |

Any additional parameters added to a custom stack will be exposed as parameters in the root stack, and their values can be set in the <project-root>/amplify/backend/api/<api-name>/parameters.json file.
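
For instance, if a custom stack declared a hypothetical CommentTableName parameter, its value could be supplied in parameters.json like this (both the parameter name and value here are illustrative):

```json
{
  "CommentTableName": "Comment-dev"
}
```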

To add a custom resolver, add the following to the Resources section of CustomResources.json:

```json
{
  "Resources": {
    "CustomQuery1": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "CommentTable",
        "TypeName": "Query",
        "FieldName": "myCustomQuery",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.myCustomQuery.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.myCustomQuery.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  }
}
```

The request and response templates should be placed inside the <project-root>/amplify/backend/api/<api-name>/resolvers folder. Resolver templates are written in the Apache Velocity Template Language, commonly referred to as VTL. Query.myCustomQuery.req.vtl is a request mapping template, which receives the incoming AppSync request and transforms it into a JSON document that is subsequently passed to the resolver's data source. Similarly, Query.myCustomQuery.res.vtl is a response mapping template, which receives the data source's response and transforms the data before returning it to the user.

Several example VTL files are discussed later in this documentation. For more detailed information on VTL, including how it can be used in the context of GraphQL resolvers, see the official AppSync Resolver Mapping Template Reference.

Add a custom resolver that targets a DynamoDB table from @model

This is useful if you want to write a more specific query against a DynamoDB table that was created by @model. For example, assume you had this schema with two @model types and a pair of @connection directives.

```graphql
type Todo @model {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
}
type Comment @model {
  id: ID!
  content: String
  todo: Todo @connection(name: "TodoComments")
}
```

This schema will generate resolvers for Query.getTodo, Query.listTodos, Query.getComment, and Query.listComments at the top level, as well as for Todo.comments and Comment.todo to implement the @connection. Under the hood, the transform will create a global secondary index on the Comment table in DynamoDB, but it will not generate a top-level query field that queries the GSI, because you can fetch the comments for a given todo object via the Query.getTodo.comments query path. If you want to fetch all comments for a todo object via a top-level query field (i.e. Query.commentsForTodo), then do the following:

  • Add the desired field to your schema.graphql.
```graphql
# ... Todo and Comment types from above

type CommentConnection {
  items: [Comment]
  nextToken: String
}
type Query {
  commentsForTodo(todoId: ID!, limit: Int, nextToken: String): CommentConnection
}
```
  • Add a resolver resource to a stack in the stacks/ directory. The DataSourceName is auto-generated. In most cases, it'll look like {MODEL_NAME}Table. To confirm the data source name, open the AppSync Console (amplify console api) and click on the Data Sources tab.
```json
{
  // ... The rest of the template
  "Resources": {
    "QueryCommentsForTodoResolver": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "CommentTable",
        "TypeName": "Query",
        "FieldName": "commentsForTodo",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.commentsForTodo.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  }
}
```
  • Write the resolver templates.
```vtl
## Query.commentsForTodo.req.vtl

#set( $limit = $util.defaultIfNull($context.args.limit, 10) )
{
  "version": "2017-02-28",
  "operation": "Query",
  "query": {
    "expression": "#connectionAttribute = :connectionAttribute",
    "expressionNames": {
      "#connectionAttribute": "commentTodoId"
    },
    "expressionValues": {
      ":connectionAttribute": {
        "S": "$context.args.todoId"
      }
    }
  },
  "scanIndexForward": true,
  "limit": $limit,
  "nextToken": #if( $context.args.nextToken ) "$context.args.nextToken" #else null #end,
  "index": "gsi-TodoComments"
}
```
```vtl
## Query.commentsForTodo.res.vtl

$util.toJson($ctx.result)
```
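
With the templates in place and the stack deployed, the new field can be queried like any other. The todoId value below is a placeholder:

```graphql
query {
  commentsForTodo(todoId: "some-todo-id", limit: 10) {
    items {
      id
      content
    }
    nextToken
  }
}
```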

Add a custom resolver that targets an AWS Lambda function

Velocity is useful as a fast, secure environment to run arbitrary code, but when it comes to writing complex business logic, you can just as easily call out to an AWS Lambda function. Here is how:

  • First create a function by running amplify add function. The rest of the example assumes you created a function named "echofunction" via the amplify add function command. If you already have a function then you may skip this step.

  • Add a field to your schema.graphql that will invoke the AWS Lambda function.

```graphql
type Query {
  echo(msg: String): String
}
```
  • Add the function as an AppSync data source in the stack's Resources block.
```json
"EchoLambdaDataSource": {
  "Type": "AWS::AppSync::DataSource",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "Name": "EchoFunction",
    "Type": "AWS_LAMBDA",
    "ServiceRoleArn": {
      "Fn::GetAtt": [
        "EchoLambdaDataSourceRole",
        "Arn"
      ]
    },
    "LambdaConfig": {
      "LambdaFunctionArn": {
        "Fn::Sub": [
          "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
          { "env": { "Ref": "env" } }
        ]
      }
    }
  }
}
```
  • Add an AWS IAM role to the stack's Resources block that allows AppSync to invoke the Lambda function on your behalf.
```json
"EchoLambdaDataSourceRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "RoleName": {
      "Fn::Sub": [
        "EchoLambdaDataSourceRole-${env}",
        { "env": { "Ref": "env" } }
      ]
    },
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "appsync.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    },
    "Policies": [
      {
        "PolicyName": "InvokeLambdaFunction",
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "lambda:invokeFunction"
              ],
              "Resource": [
                {
                  "Fn::Sub": [
                    "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:echofunction-${env}",
                    { "env": { "Ref": "env" } }
                  ]
                }
              ]
            }
          ]
        }
      }
    ]
  }
}
```
  • Create an AppSync resolver in the stack's Resources block.
```json
"QueryEchoResolver": {
  "Type": "AWS::AppSync::Resolver",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "DataSourceName": {
      "Fn::GetAtt": [
        "EchoLambdaDataSource",
        "Name"
      ]
    },
    "TypeName": "Query",
    "FieldName": "echo",
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.req.vtl",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.echo.res.vtl",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          }
        }
      ]
    }
  }
}
```
  • Create the resolver templates in the project's resolvers directory.

resolvers/Query.echo.req.vtl

```vtl
{
  "version": "2017-02-28",
  "operation": "Invoke",
  "payload": {
    "type": "Query",
    "field": "echo",
    "arguments": $utils.toJson($context.arguments),
    "identity": $utils.toJson($context.identity),
    "source": $utils.toJson($context.source)
  }
}
```

resolvers/Query.echo.res.vtl

```vtl
$util.toJson($ctx.result)
```

After running amplify push, open the AppSync console with amplify api console and test your API with this simple query:

```graphql
query {
  echo(msg: "Hello, world!")
}
```

Add a custom geolocation search resolver that targets an OpenSearch domain created by @searchable

To add geolocation search capabilities to an API, add the @searchable directive to an @model type.

```graphql
type Todo @model @searchable {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
}
```

The next time you run amplify push, an Amazon OpenSearch domain will be created and configured such that data automatically streams from DynamoDB into OpenSearch. The @searchable directive on the Todo type will generate a Query.searchTodos query field and resolver, but it is not uncommon to want more specific search capabilities. You can write a custom search resolver by following these steps:

  • Add the relevant location and search fields to the schema.
```graphql
type Comment @model {
  id: ID!
  content: String
  todo: Todo @connection(name: "TodoComments")
}
type Location {
  lat: Float
  lon: Float
}
type Todo @model @searchable {
  id: ID!
  name: String!
  description: String
  comments: [Comment] @connection(name: "TodoComments")
  location: Location
}
type TodoConnection {
  items: [Todo]
  nextToken: String
}
input LocationInput {
  lat: Float
  lon: Float
}
type Query {
  nearbyTodos(location: LocationInput!, km: Int): TodoConnection
}
```
  • Create the resolver record in the stack's Resources block.
```json
"QueryNearbyTodos": {
  "Type": "AWS::AppSync::Resolver",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "DataSourceName": "ElasticSearchDomain",
    "TypeName": "Query",
    "FieldName": "nearbyTodos",
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.req.vtl",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyTodos.res.vtl",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          }
        }
      ]
    }
  }
}
```
  • Write the resolver templates.
```vtl
## Query.nearbyTodos.req.vtl
## Objects of type Todo will be stored in the /todo index

#set( $indexPath = "/todo/doc/_search" )
#set( $distance = $util.defaultIfNull($ctx.args.km, 200) )
{
  "version": "2017-02-28",
  "operation": "GET",
  "path": "$indexPath.toLowerCase()",
  "params": {
    "body": {
      "query": {
        "bool": {
          "must": {
            "match_all": {}
          },
          "filter": {
            "geo_distance": {
              "distance": "${distance}km",
              "location": $util.toJson($ctx.args.location)
            }
          }
        }
      }
    }
  }
}
```
```vtl
## Query.nearbyTodos.res.vtl

#set( $items = [] )
#foreach( $entry in $context.result.hits.hits )
  #if( !$foreach.hasNext )
    #set( $nextToken = "$entry.sort.get(0)" )
  #end
  $util.qr($items.add($entry.get("_source")))
#end
$util.toJson({
  "items": $items,
  "total": $ctx.result.hits.total,
  "nextToken": $nextToken
})
```
  • Run amplify push

Amazon OpenSearch domains can take a while to deploy. Take this time to read up on OpenSearch to see what capabilities you are about to unlock.

Getting Started with OpenSearch

  • After the update is complete but before creating any objects, update your OpenSearch index mapping.

An index mapping tells OpenSearch how it should treat the data that you are trying to store. By default, if you create an object with field "location": { "lat": 40, "lon": -40 }, OpenSearch will treat that data as an object type when in reality you want it to be treated as a geo_point. You use the mapping APIs to tell OpenSearch how to do this.

Make sure you tell OpenSearch that your location field is a geo_point before creating objects in the index, because otherwise you will need to delete the index and try again. Go to the Amazon OpenSearch Console and find the OpenSearch domain that contains this environment's GraphQL API ID. Click on it and open the OpenSearch Dashboard link. To get the OpenSearch Dashboard to show up, you need to install a browser extension such as AWS Agent and configure it with your AWS profile's access key and secret key so the browser can sign your requests to the OpenSearch Dashboard for security reasons. Once you have the OpenSearch Dashboard open, click the "Dev Tools" tab on the left and run the commands below using the in-browser console.

```
# Create the /todo index if it does not exist
PUT /todo

# Tell OpenSearch that the location field is a geo_point
PUT /todo/_mapping/doc
{
  "properties": {
    "location": {
      "type": "geo_point"
    }
  }
}
```
  • Use your API to create objects and immediately search them.

After updating the OpenSearch index mapping, open the AWS AppSync console with amplify api console and try out these queries.

```graphql
mutation CreateTodo {
  createTodo(
    input: {
      name: "Todo 1"
      description: "The first thing to do"
      location: { lat: 43.476446, lon: -110.767786 }
    }
  ) {
    id
    name
    location {
      lat
      lon
    }
    description
  }
}

query NearbyTodos {
  nearbyTodos(location: { lat: 43.476546, lon: -110.768786 }, km: 200) {
    items {
      id
      name
      location {
        lat
        lon
      }
    }
  }
}
```

When you run Mutation.createTodo, the data is automatically streamed via AWS Lambda into OpenSearch such that it is available via Query.nearbyTodos almost immediately.