
Connect machine learning services

You are currently viewing the legacy GraphQL Transformer documentation.


The @predictions directive allows you to query an orchestration of AI/ML services such as Amazon Rekognition, Amazon Translate, and Amazon Polly.

Note: The @predictions directive uses the S3 storage bucket configured via the Amplify CLI. At the moment this directive works only with objects located under the public/ prefix.


The actions supported by this directive are included in its definition:

directive @predictions(actions: [PredictionsActions!]!) on FIELD_DEFINITION
enum PredictionsActions {
  identifyText # uses Amazon Rekognition to detect text
  identifyLabels # uses Amazon Rekognition to detect labels
  convertTextToSpeech # uses Amazon Polly in a Lambda to output a presigned URL to synthesized speech
  translateText # uses Amazon Translate to translate text from source to target language
}
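Each action can also be used on its own, not just in a sequence (see the list of supported sequences below). As a minimal sketch, assuming a hypothetical field name recognizeTextFromImage, a field that only identifies text could be defined as:

type Query {
  # Detect text in an image stored under public/ using Amazon Rekognition
  recognizeTextFromImage: String @predictions(actions: [identifyText])
}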


Given the following schema, a query operation is defined which will do the following with the provided image:

  • Identify text from the image
  • Translate the text from that image
  • Synthesize speech from the translated text

type Query {
  speakTranslatedImageText: String
    @predictions(actions: [identifyText, translateText, convertTextToSpeech])
}

An example of that query looks like:

query SpeakTranslatedImageText($input: SpeakTranslatedImageTextInput!) {
  speakTranslatedImageText(input: $input)
}

with variables such as:

{
  "input": {
    "identifyText": { "key": "myimage.jpg" },
    "translateText": { "sourceLanguage": "en", "targetLanguage": "es" },
    "convertTextToSpeech": { "voiceID": "Conchita" }
  }
}
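The field resolves to a single string. As a sketch of the response shape (the URL below is a hypothetical placeholder; per the convertTextToSpeech action it will be a presigned URL to the synthesized speech):

{
  "data": {
    "speakTranslatedImageText": "https://<your-storage-bucket>.s3.amazonaws.com/..."
  }
}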

A code example of this using the JS Library:

import React, { useState } from 'react';
import { Amplify, Storage, API, graphqlOperation } from 'aws-amplify';
import awsconfig from './aws-exports';
import { speakTranslatedImageText } from './graphql/queries';

/* Configure Exports */
Amplify.configure(awsconfig);

function SpeakTranslatedImage() {
  const [src, setSrc] = useState('');
  const [img, setImg] = useState('');

  function putS3Image(event) {
    const file = event.target.files[0];
    // Upload the image to the public/ prefix of the storage bucket
    Storage.put(file.name, file)
      .then(async (result) => {
        setSrc(await speakTranslatedImageTextOP(result.key));
        setImg(await Storage.get(result.key));
      })
      .catch((err) => console.log(err));
  }

  return (
    <div className="Text">
      <div>
        <h3>Upload Image</h3>
        <input
          type="file"
          onChange={(event) => {
            putS3Image(event);
          }}
        />
        <br />
        {img && <img src={img}></img>}
        {src && (
          <div>
            {' '}
            <audio id="audioPlayback" controls>
              <source id="audioSource" type="audio/mp3" src={src} />
            </audio>{' '}
          </div>
        )}
      </div>
    </div>
  );
}

async function speakTranslatedImageTextOP(key) {
  const inputObj = {
    translateText: {
      sourceLanguage: 'en',
      targetLanguage: 'es'
    },
    identifyText: { key },
    convertTextToSpeech: { voiceID: 'Conchita' }
  };
  // Run the pipeline: identify text, translate it, then synthesize speech
  const response = await API.graphql(
    graphqlOperation(speakTranslatedImageText, { input: inputObj })
  );
  return response.data.speakTranslatedImageText;
}

function App() {
  return (
    <div className="App">
      <h1>Speak Translated Image</h1>
      <SpeakTranslatedImage />
    </div>
  );
}

export default App;

How it works

From the example schema above, @predictions will create resources to communicate with Amazon Rekognition, Amazon Translate, and Amazon Polly. For each action the following is created:

  • IAM Policy for each service (e.g. Amazon Rekognition detectText Policy)
  • An AppSync VTL function
  • An AppSync DataSource

Finally, a resolver is created for speakTranslatedImageText: a pipeline resolver composed of the AppSync functions defined by the action list provided in the directive.
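Conceptually, each function in the generated pipeline passes its output to the next one. As a sketch of the flow for the example above (grounded in the action descriptions in the directive definition, not the generated code itself):

# Pipeline sketch for speakTranslatedImageText:
#   1. identifyText        -> Amazon Rekognition detects text in the S3 object
#   2. translateText       -> Amazon Translate translates the detected text
#   3. convertTextToSpeech -> Amazon Polly (in a Lambda) synthesizes speech and
#                             returns a presigned URL to the audio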


Each of the actions described in the @predictions definition section can be used individually, as well as in a sequence. The sequences of actions supported today are as follows (sketched in schema form after the list):

  • identifyText -> translateText -> convertTextToSpeech
  • identifyLabels -> translateText -> convertTextToSpeech
  • translateText -> convertTextToSpeech
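As a minimal sketch of how the other two sequences map onto the directive (the field names speakTranslatedLabelText and speakTranslatedText are hypothetical placeholders):

type Query {
  # identifyLabels -> translateText -> convertTextToSpeech
  speakTranslatedLabelText: String
    @predictions(actions: [identifyLabels, translateText, convertTextToSpeech])

  # translateText -> convertTextToSpeech
  speakTranslatedText: String
    @predictions(actions: [translateText, convertTextToSpeech])
}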

Action resources