Set up custom queries and mutations
Define your custom business logic in a Lambda function resolver, HTTP resolver, or an AppSync JavaScript or VTL resolver and expose them in a GraphQL query or mutation. Extend or override Amplify-generated GraphQL resolvers to optimize for your specific use cases.
Create a custom query or mutation
While @model automatically generates dedicated "create", "read", "update", "delete", and "subscription" queries and mutations for you, there are some cases where you want to define a stand-alone query or mutation.
- Define your custom query or mutation:

  ```graphql
  type Mutation {
    myCustomMutation(args: String): String # your custom mutations here
  }

  type Query {
    myCustomQuery(args: String): String # your custom queries here
  }
  ```

- Use one of these resolver choices to handle the query or mutation request:
  - Lambda function resolver: use a custom Lambda function to handle the query or mutation
  - HTTP resolver: call an HTTP endpoint upon a query or mutation
  - AppSync JavaScript or VTL resolver (most advanced): use AppSync's JavaScript resolver or AppSync's VTL mapping templates to customize the query and mutation logic

- Secure your custom query or mutation with field-level authorization rules.
  - Note: Dynamic authorization rules are not supported on a custom query or mutation.
Lambda function resolver
The @function directive allows you to quickly and easily configure AWS Lambda resolvers for your GraphQL API. You can use any AWS Lambda function created with the Amplify CLI or AWS CDK, or reference an existing AWS Lambda function created by any other means.
For example, use amplify add function to add a Lambda function called "echofunction" with the following handler:

```js
exports.handler = async function (event, context) {
  return event.arguments.msg;
};
```
To connect an AWS Lambda resolver to the GraphQL API, add the @function directive to a field in your schema.

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction-${env}")
}
```
The Amplify CLI provides support for maintaining multiple environments. When you deploy a function via amplify add function, it will automatically add the environment suffix to your Lambda function name. For example, if you create a function named echofunction using amplify add function in the dev environment, the deployed function will be named echofunction-dev. The @function directive allows you to use ${env} to reference the current Amplify CLI environment.
First, create your Lambda function in CDK with your logic and set the functionName parameter.

```ts
const echoLambda = new lambda.Function(this, 'EchoLambda', {
  functionName: 'echofunction', // MAKE SURE THIS MATCHES THE @function's "name" PARAMETER
  code: lambda.Code.fromAsset(path.join(__dirname, 'handlers/echo')),
  handler: 'index.handler',
  runtime: lambda.Runtime.NODEJS_18_X
});

const amplifyApi = new AmplifyGraphqlApi(this, 'AmplifyCdkGraphQlApi', {
  definition: AmplifyGraphqlDefinition.fromFiles(
    path.join(__dirname, 'schema.graphql')
  ),
  authorizationModes: {
    defaultAuthorizationMode: 'API_KEY',
    apiKeyConfig: {
      expires: cdk.Duration.days(30)
    }
  }
});
```
To connect an AWS Lambda resolver to the GraphQL API, add the @function directive to a field in your GraphQL schema.

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction")
}
```
Optionally, if you don't want to hard-code the function name into the GraphQL schema, you can set an arbitrary name in the schema and then map that name to a function within CDK.
```ts
const coolLambdaFunction = new lambda.Function(this, 'MyCoolLambdaFunction', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'handlers/echo')),
  handler: 'index.handler',
  runtime: lambda.Runtime.NODEJS_18_X
});

const amplifyApi = new AmplifyGraphqlApi(this, 'AmplifyCdkGraphQlApi', {
  definition: AmplifyGraphqlDefinition.fromFiles(
    path.join(__dirname, 'schema.graphql')
  ),
  authorizationModes: {
    defaultAuthorizationMode: 'API_KEY',
    apiKeyConfig: {
      expires: cdk.Duration.days(30)
    }
  },
  functionNameMap: {
    echofunction: coolLambdaFunction // Remap the function name to any function you define or reference within CDK.
  }
});
```
If you deployed your Lambda function without the Amplify CLI, then you must provide the full Lambda function name in the name parameter. If you deployed the same function with the name echofunction, then you would have:

```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction")
}
```
Structure of the function event
When writing Lambda functions that are connected via the @function directive, you can expect the following structure for the AWS Lambda event object.
Key | Description |
---|---|
typeName | The name of the parent object type of the field being resolved. |
fieldName | The name of the field being resolved. |
arguments | A map containing the arguments passed to the field being resolved. |
identity | A map containing identity information for the request. Contains a nested key 'claims' that contains the JWT claims if they exist. |
source | When resolving a nested field in a query, the source contains the parent value at runtime. For example, when resolving Post.comments, the source will be the Post object. |
request | The AppSync request object. Contains header information. |
prev | When using pipeline resolvers, this contains the object returned by the previous function. You can return the previous value for auditing use cases. |
Your function should follow the Lambda handler guidelines for your specific language. See the developer guides from the AWS Lambda documentation for your chosen language. If you choose to use structured types, your type should serialize the AWS Lambda event object outlined above. For example, if using Golang, you should define a struct with the above fields.
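For illustration, a TypeScript handler could model this event with an interface like the one below. This is a minimal sketch based on the table above; the field types (and the exact identity shape in particular) are assumptions that depend on your authorization mode.

```ts
// Sketch of the @function Lambda event described above (field types are assumptions).
interface AppSyncFunctionEvent {
  typeName: string;                                // e.g. "Query" or "Mutation"
  fieldName: string;                               // e.g. "echo"
  arguments: Record<string, unknown>;              // arguments passed to the field
  identity?: { claims?: Record<string, unknown> }; // present when the request is authenticated
  source?: Record<string, unknown>;                // parent object when resolving nested fields
  request: { headers: Record<string, string> };    // AppSync request, including headers
  prev?: { result?: unknown };                     // previous function's result in pipeline resolvers
}

export const handler = async (event: AppSyncFunctionEvent) => {
  // Mirrors the earlier "echofunction" example: echo the "msg" argument back.
  return event.arguments.msg;
};
```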
Calling functions in different regions
By default, the function is expected to be in the same region as the Amplify project. If you need to call a function in a different or specific region, you can provide the region argument.
```graphql
type Query {
  echo(msg: String): String @function(name: "echofunction", region: "us-east-1")
}
```
Calling functions in different AWS accounts is not supported via the @function directive but is supported by AWS AppSync.
Chaining functions
You can chain together multiple @function resolvers such that they are invoked in series when your field's resolver is invoked. To create a pipeline resolver that calls multiple AWS Lambda functions in series, use multiple @function directives on the field. Similarly, @function can be combined with field-level @auth. When combining these field directives, the ordering in the schema matches the ordering in the pipeline resolver. You can choose to have functions before and/or after field-level authorization is applied.
Note: Be careful when using @auth directives on a field in a root type. @auth directives on field definitions use the source object to perform authorization logic and the source will be an empty object for fields on root types. Static group authorization should perform as expected.
```graphql
type Mutation {
  doSomeWork(msg: String): String @function(name: "worker-function") @function(name: "audit-function")
}
```
In the example above, when you run a mutation that calls the Mutation.doSomeWork field, the worker-function will be invoked first, then the audit-function will be invoked with an event that contains the results of the worker-function under the event.prev.result key. The audit-function would need to return event.prev.result if you want the result of the worker-function to be returned for the field.
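As a sketch of what that second function could look like, the following hypothetical audit-function handler logs the invocation and passes event.prev.result through so the field still resolves to the worker-function's output:

```ts
// Hypothetical audit-function handler (illustrative only).
// It assumes the worker-function's return value arrives under event.prev.result,
// as described above for chained @function resolvers.
export const handler = async (event: {
  typeName: string;
  fieldName: string;
  arguments: Record<string, unknown>;
  prev?: { result?: unknown };
}) => {
  console.log(`Auditing ${event.typeName}.${event.fieldName}`, event.arguments);
  // Return the previous function's result so the field resolves to it.
  return event.prev?.result;
};
```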
How it works
Definition of @function directive:

```graphql
directive @function(name: String!, region: String) on FIELD_DEFINITION
```
Under the hood, Amplify creates an AppSync::FunctionConfiguration for each unique instance of @function in a document, and a pipeline resolver containing a pointer to a function for each @function on a given field.
The @function directive generates these resources as necessary:
- An AWS IAM role that has permission to invoke the function as well as a trust policy with AWS AppSync.
- An AWS AppSync data source that registers the new role and existing function with your AppSync API.
- An AWS AppSync pipeline function that prepares the Lambda event and invokes the new data source.
- An AWS AppSync resolver that attaches to the GraphQL field and invokes the new pipeline functions.
HTTP resolver
The @http directive allows you to quickly configure HTTP resolvers within your GraphQL API.
To connect to an endpoint, add the @http directive to a field in your GraphQL schema. The directive allows you to define URL path parameters, specify a query string, and/or specify a request body. For example, given the definition of a Post type:
```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
}

type Query {
  listPosts: [Post] @http(url: "https://www.example.com/posts")
}
```
Amplify generates the definition below, which sends a request to the URL when the listPosts query is used.

```graphql
type Query {
  listPosts: [Post]
}
```
Request headers
The @http directive generates resolvers that can handle XML and JSON responses. If an HTTP method is not defined, GET is used. You can specify a list of static headers to be passed with the HTTP requests to your backend in your directive definition.
```graphql
type Query {
  listPosts: [Post]
    @http(
      url: "https://www.example.com/posts"
      headers: [{ key: "X-Header", value: "X-Header-Value" }]
    )
}
```
Path parameters
You can create dynamic paths by specifying parameters in the directive URL by using the special :<parameter> notation. Your set of parameters can then be specified in the params input object of the query. Note that path parameters are not added to the request body or query string. You can define multiple parameters.
```graphql
type Query {
  getPost: Post @http(url: "https://www.example.com/posts/:id")
}
```
In the example above, the :id parameter will generate the appropriate query input as shown below:
```graphql
type Query {
  getPost(params: QueryGetPostParamsInput!): Post
}

input QueryGetPostParamsInput {
  id: String!
}
```
You can fetch a specific post by enclosing the id in the params input object.
```graphql
query post {
  getPost(params: { id: "POST_ID" }) {
    id
    title
  }
}
```
This executes the following request:
```
GET /posts/POST_ID
Host: www.example.com
```
Query String
You can send a query string with your request by specifying variables for your query. The query string is supported with all request methods.
Given the definition
```graphql
type Query {
  listPosts(sort: String!, from: String!, limit: Int!): Post
    @http(url: "https://www.example.com/posts")
}
```
Amplify generates
```graphql
type Query {
  listPosts(query: QueryListPostsQueryInput!): [Post]
}

input QueryListPostsQueryInput {
  sort: String!
  from: String!
  limit: Int!
}
```
You can query for posts using the query input object:

```graphql
query posts {
  listPosts(query: { sort: "DESC", from: "last-week", limit: 5 }) {
    id
    title
    description
  }
}
```
which sends the following request:
```
GET /posts?sort=DESC&from=last-week&limit=5
Host: www.example.com
```
Request Body
The @http directive also allows you to specify the body of a request, which is used for POST, PUT, and PATCH requests. To create a new post, you can define the following.

```graphql
type Mutation {
  addPost(title: String!, description: String!, views: Int): Post
    @http(method: POST, url: "https://www.example.com/post")
}
```
Amplify generates the addPost field with the query and body input objects, since this type of request also supports a query string. The generated resolver verifies that non-null arguments (e.g., the title and description) are passed in at least one of the input objects; if not, an error is returned.
```graphql
type Mutation {
  addPost(query: QueryAddPostQueryInput, body: QueryAddPostBodyInput): Post
}

input QueryAddPostQueryInput {
  title: String
  description: String
  views: Int
}

input QueryAddPostBodyInput {
  title: String
  description: String
  views: Int
}
```
You can add a post by using the body input object:

```graphql
mutation add {
  addPost(body: { title: "new post", description: "fresh content" }) {
    id
  }
}
```
which will send
```
POST /post
Host: www.example.com
{
  title: "new post"
  description: "fresh content"
}
```
Reference Amplify environment name
The @http directive allows you to use ${env} to reference the current Amplify CLI environment.

```graphql
type Query {
  listPosts: Post @http(url: "https://www.example.com/${env}/posts")
}
```
which, in the DEV environment, will send

```
GET /DEV/posts
Host: www.example.com
```
Combining the different components
You can use a combination of parameters, query, body, headers, and environments in your @http directive definition.
Given the definition
```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment]
}

type Comment {
  id: ID!
  content: String
}

type Mutation {
  updatePost(
    title: String!
    description: String!
    views: Int
    withComments: Boolean
  ): Post
    @http(
      method: PUT
      url: "https://www.example.com/${env}/posts/:id"
      headers: [{ key: "X-Header", value: "X-Header-Value" }]
    )
}
```
you can update a post with
```graphql
mutation update {
  updatePost(
    body: { title: "new title", description: "updated description", views: 100 }
    params: { id: "EXISTING_ID" }
    query: { withComments: true }
  ) {
    id
    title
    description
    comments {
      id
      content
    }
  }
}
```
which, in the DEV environment, will send

```
PUT /DEV/posts/EXISTING_ID?withComments=true
Host: www.example.com
X-Header: X-Header-Value
{
  title: "new title"
  description: "updated description"
  views: 100
}
```
Reference existing field data
In some cases, you may want to send a request based on existing field data. Take a scenario where you have a post and want to fetch comments associated with the post in a single query. Let's use the previous definition of Post and Comment.
```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment]
}

type Comment {
  id: ID!
  content: String
}
```
A post can be fetched at /posts/:id and a post's comments at /posts/:id/comments. You can fetch the comments based on the post id with the following updated definition. $ctx.source is a map that contains the resolution of the parent field (Post) and gives access to id.
```graphql
type Post {
  id: ID!
  title: String
  description: String
  views: Int
  comments: [Comment] @http(url: "https://www.example.com/posts/${ctx.source.id}/comments")
}

type Comment {
  id: ID!
  content: String
}

type Query {
  getPost: Post @http(url: "https://www.example.com/posts/:id")
}
```
You can retrieve the comments of a specific post with the following query and selection set.
```graphql
query post {
  getPost(params: { id: "POST_ID" }) {
    id
    title
    description
    comments {
      id
      content
    }
  }
}
```
Assuming that getPost retrieves a post with the id POST_ID, the comments field is resolved by sending this request to the endpoint:

```
GET /posts/POST_ID/comments
Host: www.example.com
```
Note that there is no check to ensure that the reference variable (here the post ID) exists. When using this technique, it is recommended to make sure the referenced field is non-null.
How it works
Definition of @http directive:

```graphql
directive @http(
  method: HttpMethod
  url: String!
  headers: [HttpHeader]
) on FIELD_DEFINITION

enum HttpMethod {
  PUT
  POST
  GET
  DELETE
  PATCH
}

input HttpHeader {
  key: String
  value: String
}
```
The @http transformer will create one HTTP data source for each identified base URL. For example, if multiple HTTP resolvers are created that interact with the "https://www.example.com" endpoint, only a single data source is created. Each directive generates one resolver. Depending on the definition, the appropriate body, params, and query input types are created. Note that the @http transformer does not support calling other AWS services where the Signature Version 4 signing process is required.
AppSync JavaScript or VTL resolver
You can use the AWS Cloud Development Kit (CDK) to define custom AppSync resolvers for your GraphQL API. @auth directives are not supported for custom queries or mutations that are connected to a JavaScript or VTL resolver. This is because you are replacing Amplify's auto-generated capabilities for that particular query or mutation with custom-defined cloud resources.
```bash
amplify add custom
```

```console
? How do you want to define this custom resource?
❯ AWS CDK
? Provide a name for your custom resource
❯ MyCustomResolvers
```
Next, install the AppSync dependencies for your custom resource:
```bash
cd amplify/backend/custom/MyCustomResolvers
npm i @aws-cdk/aws-appsync@~1.172.0
```
Note: Installations using the '~' character do not modify the package.json with your default npm configuration; make sure your package.json reflects the right dependency version to avoid compatibility errors in CDK.
Finally, add your custom resolvers into the cdk-stack.ts file. You can either add the JavaScript or VTL inline into your cdk-stack.ts file.
Unit Resolver
Review the AppSync JavaScript resolver tutorial for JavaScript resolver examples for different data sources.
```ts
import * as cdk from 'aws-cdk-lib';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';
import { Construct } from 'constructs';

const jsResolverTemplate = `
export function request(ctx) {
  return { payload: null };
}

export function response(ctx) {
  return ctx.arguments.message;
}
`;

export class cdkStack extends cdk.Stack {
  constructor(
    scope: Construct,
    id: string,
    props?: cdk.StackProps,
    amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps
  ) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current
       deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name'
    });

    // Access other Amplify Resources
    const retVal: AmplifyDependentResourcesAttributes =
      AmplifyHelpers.addResourceDependency(
        this,
        amplifyResourceProps.category,
        amplifyResourceProps.resourceName,
        [{ category: 'api', resourceName: '<YOUR-API-NAME>' }]
      );

    const resolver = new appsync.CfnResolver(this, 'CustomResolver', {
      // apiId: retVal.api.new.GraphQLAPIIdOutput,
      // https://github.com/aws-amplify/amplify-cli/issues/9391#event-5843293887
      // If you use Amplify you can access the parameter via Ref since it's a CDK parameter passed from the root stack.
      // Previously the apiId was set to the variable name, which is wrong; it should be the variable value as below.
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      fieldName: 'echo',
      typeName: 'Query', // Query | Mutation | Subscription
      code: jsResolverTemplate,
      dataSourceName: 'NONE_DS', // DataSource name
      runtime: {
        name: 'APPSYNC_JS',
        runtimeVersion: '1.0.0'
      }
    });
  }
}
```
Review the Resolver Mapping Template Programming Guide to learn more about the VTL template.
```ts
import * as cdk from 'aws-cdk-lib';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';
import { Construct } from 'constructs';

const requestVTL = `<YOUR CUSTOM VTL REQUEST MAPPING TEMPLATE HERE>`;
const responseVTL = `<YOUR CUSTOM VTL RESPONSE MAPPING TEMPLATE HERE>`;

export class cdkStack extends cdk.Stack {
  constructor(
    scope: Construct,
    id: string,
    props?: cdk.StackProps,
    amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps
  ) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current
       deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name'
    });

    // Access other Amplify Resources
    const retVal: AmplifyDependentResourcesAttributes =
      AmplifyHelpers.addResourceDependency(
        this,
        amplifyResourceProps.category,
        amplifyResourceProps.resourceName,
        [{ category: 'api', resourceName: '<YOUR-API-NAME>' }]
      );

    const resolver = new appsync.CfnResolver(this, 'custom-resolver', {
      // apiId: retVal.api.new.GraphQLAPIIdOutput,
      // https://github.com/aws-amplify/amplify-cli/issues/9391#event-5843293887
      // If you use Amplify you can access the parameter via Ref since it's a CDK parameter passed from the root stack.
      // Previously the apiId was set to the variable name, which is wrong; it should be the variable value as below.
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      fieldName: 'querySomething',
      typeName: 'Query', // Query | Mutation | Subscription
      requestMappingTemplate: requestVTL,
      responseMappingTemplate: responseVTL,
      dataSourceName: 'TodoTable' // DataSource name
    });
  }
}
```
Pipeline Resolver
```ts
import * as cdk from 'aws-cdk-lib';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import * as appsync from 'aws-cdk-lib/aws-appsync';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';
import { Construct } from 'constructs';

const beforeMappingVTL = `<YOUR CUSTOM VTL REQUEST MAPPING TEMPLATE HERE>`;
const afterMappingVTL = `<YOUR CUSTOM VTL RESPONSE MAPPING TEMPLATE HERE>`;
const function1requestVTL = `<YOUR CUSTOM VTL FUNCTION 1 REQUEST MAPPING TEMPLATE HERE>`;
const function1responseVTL = `<YOUR CUSTOM VTL FUNCTION 1 RESPONSE MAPPING TEMPLATE HERE>`;
const function2requestVTL = `<YOUR CUSTOM VTL FUNCTION 2 REQUEST MAPPING TEMPLATE HERE>`;
const function2responseVTL = `<YOUR CUSTOM VTL FUNCTION 2 RESPONSE MAPPING TEMPLATE HERE>`;

export class cdkStack extends cdk.Stack {
  constructor(
    scope: Construct,
    id: string,
    props?: cdk.StackProps,
    amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps
  ) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current
       deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name'
    });

    // Access other Amplify Resources
    const retVal: AmplifyDependentResourcesAttributes =
      AmplifyHelpers.addResourceDependency(
        this,
        amplifyResourceProps.category,
        amplifyResourceProps.resourceName,
        [{ category: 'api', resourceName: '<YOUR-API-NAME>' }]
      );

    const function1 = new appsync.CfnFunctionConfiguration(this, 'function1', {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      dataSourceName: 'NONE_DS', // DataSource name
      functionVersion: '2018-05-29',
      name: 'function1',
      requestMappingTemplate: function1requestVTL,
      responseMappingTemplate: function1responseVTL
    });

    const function2 = new appsync.CfnFunctionConfiguration(this, 'function2', {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      dataSourceName: 'TodoTable', // DataSource name
      functionVersion: '2018-05-29',
      name: 'function2',
      requestMappingTemplate: function2requestVTL,
      responseMappingTemplate: function2responseVTL
    });

    const resolver = new appsync.CfnResolver(this, 'pipeline-resolver', {
      apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
      fieldName: 'querySomething',
      typeName: 'Query', // Query | Mutation | Subscription
      kind: 'PIPELINE',
      pipelineConfig: {
        functions: [function1.attrFunctionId, function2.attrFunctionId]
      },
      requestMappingTemplate: beforeMappingVTL,
      responseMappingTemplate: afterMappingVTL
    });
  }
}
```
Note: Users moving from ElasticSearch to OpenSearch will need to change the data source name from ElasticSearchDomain to OpenSearchDataSource if the upgrade process changes the source name. For new @searchable models the data source name will default to OpenSearchDataSource.
You can alternatively define the VTL templates in another file such as Query.querySomething.req.vtl or Query.querySomething.res.vtl in amplify/backend/custom/MyCustomResolvers/. Then use the following code snippets to retrieve them:
```ts
requestMappingTemplate: appsync.MappingTemplate.fromFile(
  path.join(__dirname, "..", "Query.querySomething.req.vtl")
).renderTemplate(),
responseMappingTemplate: appsync.MappingTemplate.fromFile(
  path.join(__dirname, "..", "Query.querySomething.res.vtl")
).renderTemplate(),
```
Note: the .. is added to the path because the path is always relative to the build folder of the custom resource.
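Placed in context, those properties replace the inline mapping-template strings on the CfnResolver shown earlier. The following is a sketch that reuses the apiId, field, and data source values from the VTL unit resolver example above; it assumes you also import path at the top of cdk-stack.ts.

```ts
import * as path from 'path';

// Sketch: same resolver as before, but with the templates loaded from files.
const resolver = new appsync.CfnResolver(this, 'custom-resolver', {
  apiId: cdk.Fn.ref(retVal.api.replaceWithAPIName.GraphQLAPIIdOutput),
  fieldName: 'querySomething',
  typeName: 'Query',
  requestMappingTemplate: appsync.MappingTemplate.fromFile(
    path.join(__dirname, '..', 'Query.querySomething.req.vtl')
  ).renderTemplate(),
  responseMappingTemplate: appsync.MappingTemplate.fromFile(
    path.join(__dirname, '..', 'Query.querySomething.res.vtl')
  ).renderTemplate(),
  dataSourceName: 'TodoTable' // DataSource name
});
```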
Add authorization rules to custom queries and mutations
Authorization rules can be applied with the @auth directive in the same way as field-level authorization rules. See Field-level authorization rules for details.
In the example below, myCustomMutation can only be executed by signed-in customers who are authenticated with IAM:

```graphql
type Mutation {
  myCustomMutation(args: String): String @auth(rules: [{ allow: private, provider: iam }])
}
```
Known limitation: You can't combine the @auth directive with a custom query or mutation that is using a VTL resolver.
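For reference, invoking this mutation from a web client could look like the following sketch. It assumes the v5-style Amplify JavaScript API (API.graphql with an authMode override); the exact client call depends on your Amplify library version, so treat this as an illustration rather than the only way to call it.

```ts
import { API } from 'aws-amplify';

// GraphQL document for the custom mutation defined above.
const myCustomMutationDoc = /* GraphQL */ `
  mutation MyCustomMutation($args: String) {
    myCustomMutation(args: $args)
  }
`;

// Hypothetical helper: call the IAM-protected custom mutation as a signed-in user.
export async function runMyCustomMutation(args: string) {
  const result = await API.graphql({
    query: myCustomMutationDoc,
    variables: { args },
    authMode: 'AWS_IAM' // route the request through the signed-in user's IAM credentials
  });
  return result;
}
```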
Override Amplify-generated resolvers
Amplify generates AWS AppSync pipeline resolvers for your queries and mutations. The resolvers are listed in the following API resource folder: amplify/backend/api/<resource_name>/build/resolvers/.
To override an Amplify-generated resolver:
- Find the resolver file name you want to override under build/resolvers
- Place a .vtl file with the same file name into the resource's resolvers/ folder (not under build/)
- Upon the next amplify api gql-compile or amplify push, the Amplify-generated resolver file will be replaced with your overwritten resolver file
```
amplify/backend/api/<resource_name>
├── build
│   ├── ...
│   ├── resolvers
│   │   ├── ...
│   │   ├── Query.searchTodos.req.vtl # Find resolver file name
│   │   └── ...
|   ...
├── resolvers
│   └── Query.searchTodos.req.vtl # Place resolver overrides with the same file name here
```
The example above shows how the Query.searchTodos.req.vtl is overwritten with a custom resolver. Review the Resolver Mapping Template Programming Guide to learn more about the VTL template.
Extend Amplify-generated resolvers
Amplify generates AWS AppSync pipeline resolvers for your queries and mutations. You can "slot" in your custom business logic between Amplify-generated resolvers. You can find Amplify-generated resolvers under your API resource's build/resolvers/ folder. The resolver function's file name determines its placement within the slot sequence.

File name convention: [Query|Mutation|Subscription].[field name].[slot name].[slot placement].[req|res].vtl

Example: Mutation.createTodo.postAuth.1.req.vtl
To extend an Amplify-generated resolver:
- Find the resolver slot you want to add your custom business logic to
- Place a .vtl file with the correct file naming convention into resolvers/ (not under build/)
- Upon the next amplify api gql-compile or amplify push, your resolver file will be placed within the desired slot within the resolver sequence
```
amplify/backend/api/<resource_name>
├── build
│   ├── ...
│   ├── resolvers
│   │   ├── ...
│   │   ├── Mutation.createTodo.postAuth.1.req.vtl # Amplify-generated resolvers
│   │   └── ...
|   ...
├── resolvers
│   └── Mutation.createTodo.postAuth.2.req.vtl # Custom resolver slotted in after postAuth.1 resolver
```
For example, a resolver function file named Mutation.createTodo.postAuth.2.req.vtl will be slotted in right after the Mutation.createTodo.postAuth.1.req.vtl resolver. Review the Resolver Mapping Template Programming Guide to learn more about the VTL template.
Supported resolver slots
Query
Sequence | Slot name | Description |
---|---|---|
1 | init | Initial resolvers that are run. Usually used for initializing default values. |
2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
3 | auth | Resolvers that implement authorization rule checks. |
4 | postAuth | Resolvers that are run after authorization rule checks. |
5 | preDataLoad | Resolvers to configure values to make a request to the data source. |
6 | postDataLoad | Resolvers for post-processing after request to data source. |
7 | finish | Final set of resolvers before response is returned to client. Typically used for clean-up. |
Mutation
Sequence | Slot name | Description |
---|---|---|
1 | init | Initial resolvers that are run. Usually used for initializing default values. |
2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
3 | auth | Resolvers that implement authorization rule checks. |
4 | postAuth | Resolvers that are run after authorization rule checks. |
5 | preUpdate | Resolvers to configure values to make a request to the data source. |
6 | postUpdate | Resolvers for post-processing after request to data source. |
7 | finish | Final set of resolvers before response is returned to client. Typically used for clean-up. |
Subscription
Sequence | Slot name | Description |
---|---|---|
1 | init | Initial resolvers that are run. Usually used for initializing default values. |
2 | preAuth | Resolvers that are intended to run before authorization rule checks are applied. |
3 | auth | Resolvers that implement authorization rule checks. |
4 | postAuth | Resolvers that are run after authorization rule checks. |
5 | preSubscribe | Resolver slot that executes after auth but before the subscription returns. |