## Authoring a new plugin
The Amplify CLI provides the command `amplify plugin init` (with the alias `amplify plugin new`) for developing plugins. This command first collects your requirements and then creates the skeleton of the plugin package for you to start development. The newly created plugin is added to your local Amplify CLI plugin platform, so you can conveniently test its functionality while it is being developed. It can easily be removed from the local plugin platform with the `amplify plugin remove` command and added back with the `amplify plugin add` command.
### Step 1: Install the Amplify CLI

Via npm:

```bash
npm install -g @aws-amplify/cli
```

Via cURL (Mac and Linux):

```bash
curl -sL https://aws-amplify.github.io/amplify-cli/install | bash && $SHELL
```

Via cURL (Windows):

```bash
curl -sL https://aws-amplify.github.io/amplify-cli/install-win -o install.cmd && install.cmd
```
### Step 2: Initialize the plugin

```bash
amplify plugin init
```

You will be prompted to enter the plugin name, then select the plugin type and event subscriptions. The CLI will then create a plugin package for you and add it to the local Amplify CLI plugin platform.
### Step 3: Test your plugin

The newly created plugin package is already added to the local Amplify CLI, so you can start testing it immediately.

Let's say you have chosen to use the default plugin name, `my-amplify-plugin`:

```console
$ amplify my-amplify-plugin help
help command to be implemented.
```

You will see that the default help message is printed out.

At this point, there are only two sub-commands in the plugin package, `help` and `version`, both with dummy implementations. If you try to execute any other command, it will trigger the Amplify CLI plugin platform to perform a fresh scan; after the platform fails to find the command, it prints out the default help message.
From here, you can start developing the plugin package. See below for a detailed explanation of the package structure.
### Step 4: Publish to npm

After you complete a development cycle and are ready to release your plugin to the public, you can publish it to npm: https://docs.npmjs.com/getting-started/publishing-npm-packages
### Step 5: Install and use

Once your plugin is published to npm, other developers can install and use it:

```bash
npm install -g my-amplify-plugin
amplify plugin add my-amplify-plugin
amplify my-amplify-plugin help
```
### Plugin Package Structure

Here's the plugin package directory structure:

```
|_ my-amplify-plugin/
   |_ commands/
   |  |_ help.js
   |  |_ version.js
   |_ event-handlers/
   |  |_ handle-PostInit.js
   |  |_ handle-PostPush.js
   |  |_ handle-PreInit.js
   |  |_ handle-PrePush.js
   |_ amplify-plugin.json
   |_ index.js
   |_ package.json
```
#### amplify-plugin.json

The `amplify-plugin.json` file is the plugin's manifest: it specifies the plugin's name, type, commands, and event handlers. The Amplify CLI uses it to verify the plugin package and add it to its plugin platform.

Here are the contents of the file when it is first generated by the `amplify plugin init` command for a util plugin:

```json
{
  "name": "my-amplify-plugin",
  "type": "util",
  "commands": [
    "version",
    "help"
  ],
  "eventHandlers": [
    "PreInit",
    "PostInit",
    "PrePush",
    "PostPush"
  ]
}
```
#### index.js

The `"main"` file specified in `package.json` is the Amplify CLI's entry point for invoking the plugin's functionality specified in the manifest file `amplify-plugin.json`.

Here are the contents of the file when it is first generated by the `amplify plugin init` command for a util plugin:

```js
const path = require('path');

// Invoked for `amplify <plugin-name> <command>`: loads the matching
// module from the commands folder and runs it.
async function executeAmplifyCommand(context) {
  const commandsDirPath = path.normalize(path.join(__dirname, 'commands'));
  const commandPath = path.join(commandsDirPath, context.input.command);
  const commandModule = require(commandPath);
  await commandModule.run(context);
}

// Invoked for subscribed lifecycle events: loads the matching
// handle-<Event> module from the event-handlers folder and runs it.
async function handleAmplifyEvent(context, args) {
  const eventHandlersDirPath = path.normalize(path.join(__dirname, 'event-handlers'));
  const eventHandlerPath = path.join(eventHandlersDirPath, `handle-${args.event}`);
  const eventHandlerModule = require(eventHandlerPath);
  await eventHandlerModule.run(context, args);
}

module.exports = {
  executeAmplifyCommand,
  handleAmplifyEvent,
};
```
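The dispatch performed by `executeAmplifyCommand` can be sketched without touching the file system. In this hypothetical simulation (not part of the generated package), the command modules are in-memory objects instead of files under `commands/`, the context object is reduced to the two fields the sketch needs, and the `1.0.0` version string is made up:

```javascript
// Hypothetical stand-in for the files in commands/: each "module"
// exports a run function, just like the generated help.js and version.js.
const commands = {
  help: { run: async context => context.print.info('help command to be implemented.') },
  version: { run: async context => context.print.info('1.0.0') },
};

// Mirrors the dispatch in index.js: pick the module named by
// context.input.command and call its run function.
async function executeAmplifyCommand(context) {
  const commandModule = commands[context.input.command];
  if (!commandModule) {
    throw new Error(`Command ${context.input.command} not found`);
  }
  await commandModule.run(context);
}

// Example invocation mirroring `amplify my-amplify-plugin help`:
const context = { input: { command: 'help' }, print: { info: console.log } };
executeAmplifyCommand(context);
// prints "help command to be implemented."
```

An unknown command throws here; in the real CLI it instead triggers the plugin-platform rescan described in Step 3.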
#### commands

The `commands` folder contains the files that implement the `commands` specified in the manifest file `amplify-plugin.json`.
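A command implementation can look like the following minimal sketch. This is a hypothetical `commands/hello.js` (not generated by `amplify plugin init`), and it assumes the CLI context exposes `context.print.info` as its logging utility:

```javascript
// Hypothetical commands/hello.js: the file name must match a command
// listed in amplify-plugin.json, and the module must export `run`.
async function run(context) {
  // context.print.info is assumed to be the CLI's logging utility.
  context.print.info('hello from my-amplify-plugin!');
}

module.exports = { run };
```

Running `amplify my-amplify-plugin hello` would then load this file and invoke `run` with the CLI context, provided `hello` has been added to the `commands` array in `amplify-plugin.json`.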
#### event-handlers

The `event-handlers` folder contains the files that implement the `eventHandlers` specified in the manifest file `amplify-plugin.json`.
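An event handler follows the same shape. Here is a hypothetical sketch of `event-handlers/handle-PostInit.js`; the `args.event` field and the `context.print.info` logger are assumptions about the objects the CLI passes in:

```javascript
// Hypothetical event-handlers/handle-PostInit.js: invoked via
// handleAmplifyEvent in index.js after `amplify init` completes.
async function run(context, args) {
  // args.event is assumed to carry the event name, e.g. 'PostInit'.
  context.print.info(`my-amplify-plugin handled the ${args.event} event`);
}

module.exports = { run };
```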
## Authoring custom GraphQL transformers & directives

This section outlines the process of writing custom GraphQL transformers. The `@aws-amplify/graphql-transformer-core` package serves as a lightweight framework that takes a GraphQL SDL document and a list of GraphQL transformers as input and returns a set of deployment resources that fully implements the data model defined by the input schema. A GraphQL transformer is a class that defines a directive and a set of functions that manipulate a context; these functions are called whenever that directive is found in an input schema.
For example, the AWS Amplify CLI calls the GraphQL Transform like this:
```ts
import { GraphQLTransform } from '@aws-amplify/graphql-transformer-core';
import { FeatureFlagProvider, TransformerPluginProvider } from '@aws-amplify/graphql-transformer-interfaces';
import { AuthTransformer } from '@aws-amplify/graphql-auth-transformer';
import {
  BelongsToTransformer,
  HasManyTransformer,
  HasOneTransformer,
  ManyToManyTransformer,
} from '@aws-amplify/graphql-relational-transformer';
import { DefaultValueTransformer } from '@aws-amplify/graphql-default-value-transformer';
import { FunctionTransformer } from '@aws-amplify/graphql-function-transformer';
import { HttpTransformer } from '@aws-amplify/graphql-http-transformer';
import { IndexTransformer, PrimaryKeyTransformer } from '@aws-amplify/graphql-index-transformer';
import { ModelTransformer } from '@aws-amplify/graphql-model-transformer';
import { PredictionsTransformer } from '@aws-amplify/graphql-predictions-transformer';
import { SearchableModelTransformer } from '@aws-amplify/graphql-searchable-transformer';

// This adapter class supports the propagation of feature flag values from the CLI to the transformers
class TransformerFeatureFlagAdapter implements FeatureFlagProvider {
  getBoolean(featureName: string, defaultValue?: boolean): boolean {
    throw new Error('Method not implemented.');
  }
  getString(featureName: string, defaultValue?: string): string {
    throw new Error('Method not implemented.');
  }
  getNumber(featureName: string, defaultValue?: number): number {
    throw new Error('Method not implemented.');
  }
  getObject(featureName: string, defaultValue?: object): object {
    throw new Error('Method not implemented.');
  }
}

const modelTransformer = new ModelTransformer();
const indexTransformer = new IndexTransformer();
const hasOneTransformer = new HasOneTransformer();
const authTransformer = new AuthTransformer({
  authConfig: {
    defaultAuthentication: {
      authenticationType: 'API_KEY',
    },
    additionalAuthenticationProviders: [
      {
        authenticationType: 'AMAZON_COGNITO_USER_POOLS',
        userPoolConfig: {
          userPoolId: 'us-east-1_abcdefghi',
        },
      },
    ],
  },
  addAwsIamAuthInOutputSchema: true,
});

const transformers: TransformerPluginProvider[] = [
  modelTransformer,
  new FunctionTransformer(),
  new HttpTransformer(),
  new PredictionsTransformer(),
  new PrimaryKeyTransformer(),
  indexTransformer,
  new BelongsToTransformer(),
  new HasManyTransformer(),
  hasOneTransformer,
  new ManyToManyTransformer(modelTransformer, indexTransformer, hasOneTransformer, authTransformer),
  new DefaultValueTransformer(),
  authTransformer,
  new SearchableModelTransformer(),
];

const graphQLTransform = new GraphQLTransform({
  transformers,
  featureFlags: new TransformerFeatureFlagAdapter(),
  sandboxModeEnabled: false,
});

const schema = /* GraphQL */ `
  type Post @model {
    id: ID!
    title: String!
    comments: [Comment] @hasMany
  }

  type Comment @model {
    id: ID!
    content: String!
    post: Post @belongsTo
  }
`;

const { rootStack, stacks, schema: transformedSchema } = graphQLTransform.transform(schema);

console.log('Schema compiled successfully.');
```
As shown above, the `GraphQLTransform` class takes a list of transformers and can later `transform` GraphQL SDL documents into deployment resources, including the transformed GraphQL schema, CloudFormation templates, AppSync service resolvers, etc.
### The Transform Lifecycle

At a high level, the `GraphQLTransform` takes the input SDL, parses it, and validates that the schema is complete and satisfies the directive definitions. It then iterates through the list of transformers passed to the transform when it was created.

To support inter-communication and dependencies between transformers, the transformation is done in phases. The table below shows the lifecycle methods a transformer plugin can implement to handle the different phases of the transformer's execution:
| Lifecycle method | | Description |
| --- | --- | --- |
| before | | initialization of the transformer |
| GraphQL visitor pattern functions | object | for each type that has the directive defined by the transformer |
| | interface | for each interface that has the directive defined by the transformer |
| | field | for each field that has the directive defined by the transformer |
| | argument | for each argument that has the directive defined by the transformer |
| | union | for each union that has the directive defined by the transformer |
| | enum | for each enum that has the directive defined by the transformer |
| | enumValue | for each enumValue that has the directive defined by the transformer |
| | scalar | for each scalar that has the directive defined by the transformer |
| | input | for each input that has the directive defined by the transformer |
| | inputValue | for each inputValue that has the directive defined by the transformer |
| prepare | | transformers register themselves in the TransformerContext (as data provider or data enhancer) |
| validate | | the transformer validates the directive arguments |
| transformSchema | | the transformer updates/augments the output schema |
| generateResolvers | | the transformer generates resources such as resolvers, IAM policies, tables, etc. |
| after | | cleanup; this lifecycle method is invoked in reverse order for the registered transformers |
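To make the ordering concrete, here is a hypothetical, framework-free sketch (not the real `@aws-amplify/graphql-transformer-core` code; the per-definition visitor functions and exact phase order are simplified away) that pushes a list of plugin objects through the phases and then calls `after` in reverse registration order:

```javascript
// Simplified phase runner: each phase is invoked once per transformer,
// in registration order; `after` alone runs in reverse order.
function runLifecycle(transformers, context) {
  const phases = ['before', 'validate', 'prepare', 'transformSchema', 'generateResolvers'];
  for (const phase of phases) {
    for (const t of transformers) {
      if (typeof t[phase] === 'function') t[phase](context);
    }
  }
  for (let i = transformers.length - 1; i >= 0; i--) {
    if (typeof transformers[i].after === 'function') transformers[i].after(context);
  }
}

// Two dummy transformers that record the order of their calls.
const calls = [];
const makeTransformer = name => ({
  before: () => calls.push(`${name}.before`),
  after: () => calls.push(`${name}.after`),
});
runLifecycle([makeTransformer('A'), makeTransformer('B')], {});
// calls: ['A.before', 'B.before', 'B.after', 'A.after']
```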
Here is pseudo code showing how `graphQLTransform.transform(schema)` works:
```ts
public transform(schema: string): DeploymentResources {
  // ...

  for (const transformer of this.transformers) {
    // Run the `before` function one time per transformer.
    if (isFunction(transformer.before)) {
      transformer.before(context);
    }

    // Transform each definition in the input document.
    for (const def of context.inputDocument.definitions as TypeDefinitionNode[]) {
      switch (def.kind) {
        case 'ObjectTypeDefinition':
          this.transformObject(transformer, def, context);
          // Walk the fields and call field transformers.
          break;
        case 'InterfaceTypeDefinition':
          this.transformInterface(transformer, def, context);
          // Walk the fields and call field transformers.
          break;
        case 'ScalarTypeDefinition':
          this.transformScalar(transformer, def, context);
          break;
        case 'UnionTypeDefinition':
          this.transformUnion(transformer, def, context);
          break;
        case 'EnumTypeDefinition':
          this.transformEnum(transformer, def, context);
          break;
        case 'InputObjectTypeDefinition':
          this.transformInputObject(transformer, def, context);
          break;
        // Note: Extension and operation definition nodes are not supported.
        default:
          continue;
      }
    }
  }

  // Validate
  for (const transformer of this.transformers) {
    if (isFunction(transformer.validate)) {
      transformer.validate(context);
    }
  }

  // Prepare
  for (const transformer of this.transformers) {
    if (isFunction(transformer.prepare)) {
      transformer.prepare(context);
    }
  }

  // Transform Schema
  for (const transformer of this.transformers) {
    if (isFunction(transformer.transformSchema)) {
      transformer.transformSchema(context);
    }
  }

  // `after` is called in reverse order, as if popping off a stack.
  let reverseThroughTransformers = this.transformers.length - 1;
  while (reverseThroughTransformers >= 0) {
    const transformer = this.transformers[reverseThroughTransformers];
    if (isFunction(transformer.after)) {
      transformer.after(context);
    }
    reverseThroughTransformers -= 1;
  }

  // ...

  // Return the deployment resources.
  // In the future there will likely be a formatter concept here.
  return this.synthesize(context);
}
```
### The Transformer Context

The transformer context serves as an accumulator that is manipulated by transformers. See the source code to find out which methods are available to you.

For now, the transform only supports CloudFormation and uses the AWS CDK to create CloudFormation resources in code.
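As a rough mental model (a hypothetical sketch, not the real `TransformerContext` API), the context can be thought of as an object that transformers keep appending generated resources to:

```javascript
// Minimal accumulator-style context: transformers read the input
// document and append the resources they generate.
class MiniTransformerContext {
  constructor(inputDocument) {
    this.inputDocument = inputDocument;
    this.resources = [];
  }
  addResource(resource) {
    this.resources.push(resource);
  }
}

const ctx = new MiniTransformerContext('type Post @model { id: ID! }');
// A model-like transformer might add a table, then a resolver:
ctx.addResource({ kind: 'DynamoDBTable', name: 'PostTable' });
ctx.addResource({ kind: 'Resolver', name: 'Query.getPost' });
// ctx.resources now holds everything accumulated during the transform.
```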
### Adding Custom GraphQL Transformers to the Project

To add a custom GraphQL transformer to the list of transformers, it needs to be registered within the project. This registration is done by adding an entry to the `transform.conf.json` file, which can be found in the `amplify/backend/api/<api-name>` folder. A transformer can be registered by adding a file URI to the JavaScript file that implements the transformer, or by specifying the npm package name. The transformer modules will be dynamically imported during the transform process.

Example `transform.conf.json` file:

```json
{
  "transformers": [
    "some-transformer-via-npm",
    "file:///some/absolute/local/module"
  ]
}
```
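The two kinds of entries can be told apart by their prefix. Here is a hypothetical helper (the real CLI's resolution logic may differ) that classifies an entry as a `file:` URI or an npm package name before dynamic import:

```javascript
// Classify a transform.conf.json entry: file: URIs resolve to a local
// path, anything else is treated as an npm package name.
function resolveTransformerModule(entry) {
  if (entry.startsWith('file://')) {
    return { kind: 'file', specifier: new URL(entry).pathname };
  }
  return { kind: 'npm', specifier: entry };
}

resolveTransformerModule('file:///some/absolute/local/module');
// → { kind: 'file', specifier: '/some/absolute/local/module' }
resolveTransformerModule('some-transformer-via-npm');
// → { kind: 'npm', specifier: 'some-transformer-via-npm' }
```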
### Example

As an example, let's walk through how we implemented the `@model` transformer. The first thing to do is to define a directive for your transformer.

Note: Some parts of the code are omitted for brevity.

```ts
export const directiveDefinition = /* GraphQL */ `
  directive @model(
    queries: ModelQueryMap
    mutations: ModelMutationMap
    subscriptions: ModelSubscriptionMap
    timestamps: TimestampConfiguration
  ) on OBJECT
`;
```

Our `@model` directive can be applied to `OBJECT` type definitions and automatically adds CRUD functionality and timestamp fields to an API. For example, we might write:

```graphql
type Post @model {
  id: ID!
  title: String!
}
```
The next step after defining the directive is to implement the transformer's business logic. The `@aws-amplify/graphql-transformer-core` package makes this a little easier by exporting a common class through which we may define transformers. Users extend the `TransformerPluginBase` class and implement the required functions.

Note: In this example, `@model` extends a higher-level class, `TransformerModelBase`.

```ts
export class ModelTransformer extends TransformerModelBase implements TransformerModelProvider {
  // ...
}
```
Since your `directiveDefinition` only specifies `OBJECT` in its `on` condition, we have to implement the `object` method and some other lifecycle methods, like `validate`, `prepare`, and `transformSchema`, to have a fully functioning transformer. You may also implement the `before` and `after` methods, which will be called once at the beginning and end, respectively, of the transformation process.

```ts
/**
 * Users extend the TransformerPluginBase class and implement the relevant functions.
 */
export class ModelTransformer extends TransformerModelBase implements TransformerModelProvider {
  constructor(options: ModelTransformerOptions = {}) {
    super('amplify-model-transformer', directiveDefinition);
  }

  // ...
}
```
The following snippet shows the `prepare` method implementation, which takes all the types from the GraphQL schema and registers the transformer as a data source provider. Data source providers are used by transformers that create persistent resources, in this case DynamoDB tables. `prepare` is also the place to register data enhancers. Data enhancers are used by transformers that enrich existing types or operations by adding or modifying fields, arguments, etc.

```ts
prepare = (context: TransformerPrepareStepContextProvider) => {
  for (const modelTypeName of this.typesWithModelDirective) {
    const type = context.output.getObject(modelTypeName);
    context.providerRegistry.registerDataSourceProvider(type!, this);
  }
};
```
For the full source code of the `@model` transformer, see the `@aws-amplify/graphql-model-transformer` package.
## VS Code Extension

Add the VS Code extension to get code snippets and automatic code completion for Amplify APIs.