
Tools

Tools allow LLMs to query information so they can respond with current and relevant answers. A tool is invoked only when the LLM requests to use it, based on the user's message and the tool's description.

There are a few different ways to define LLM tools in the Amplify AI kit.

  1. Model tools
  2. Query tools
  3. Lambda tools

The easiest way to define tools for your conversation route is a.ai.dataTool(), which works with data models and custom queries in your data schema. When you define a tool for a conversation route, Amplify takes care of the heavy lifting:

  • Describing the tools to the LLM: Each tool definition is an Amplify model query or custom query that is defined in the schema. Amplify knows the input parameters needed for that tool and describes them to the LLM.
  • Invoking the tool with the right parameters: After the LLM requests to use a tool with necessary input parameters, the conversation handler Lambda function invokes the tool, returns the result to the LLM, and continues the conversation.
  • Maintaining the caller identity and authorization: Through tools, the LLM can only access data that the application user has access to. When the LLM requests to invoke a tool, we will call it with the user's identity. For example, if the LLM wanted to invoke a query to list Todos, it would only return the todos that user has access to.
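
To see that identity pass-through in action, here is a minimal client-side sketch. It assumes a deployed chat conversation route like the ones defined below and the Amplify data client; the surrounding function and message text are illustrative.

import { generateClient } from "aws-amplify/api";
import type { Schema } from "../amplify/data/resource";

// The client carries the signed-in user's credentials, so any tool
// the LLM invokes during this conversation runs with that user's identity.
const client = generateClient<Schema>();

async function askAboutMyTodos() {
  const { data: conversation, errors } = await client.conversations.chat.create();
  if (!conversation) throw new Error(JSON.stringify(errors));

  // If the LLM decides it needs a tool to answer, the conversation
  // handler invokes it with this user's authorization and continues.
  await conversation.sendMessage({
    content: [{ text: "Which of my todos are due this week?" }],
  });
}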

Model tools

You can give the LLM access to your data models by referencing them in an a.ai.dataTool(). This requires that the model uses at least one of the following authorization strategies:

Per user data access

  • owner()
  • ownerDefinedIn()
  • ownersDefinedIn()

Any signed-in user data access

  • authenticated()

Per user group data access

  • group()
  • groupDefinedIn()
  • groups()
  • groupsDefinedIn()

amplify/data/resource.ts
import { type ClientSchema, a, defineData } from "@aws-amplify/backend";

const schema = a.schema({
  Post: a.model({
    title: a.string(),
    body: a.string(),
  })
    .authorization((allow) => allow.owner()),
  chat: a.conversation({
    aiModel: a.ai.model('Claude 3.5 Haiku'),
    systemPrompt: 'Hello, world!',
    tools: [
      a.ai.dataTool({
        // The name of the tool as it will be referenced in the message to the LLM
        name: 'PostQuery',
        // The description of the tool provided to the LLM.
        // Use this to help the LLM understand when to use the tool.
        description: 'Searches for Post records',
        // A reference to the `a.model()` that the tool will use
        model: a.ref('Post'),
        // The operation to perform on the model
        modelOperation: 'list',
      }),
    ],
  })
    .authorization((allow) => allow.owner()),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({ schema });

This will let the LLM list and filter Post records. Because the data schema has all the information about the shape of a Post record, the data tool will provide that information to the LLM so you don't have to. Also, the Amplify AI kit handles authorizing the tool use requests based on the caller's identity. This means if you have an owner-based model, the LLM will only be able to query the user's records.

The only supported model operation is 'list'.
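
As a purely hypothetical illustration, when the LLM uses the PostQuery tool its input maps to the arguments of the generated list query, including filters. The field names below come from the Post model above; the exact payload is managed by Amplify.

// Hypothetical tool-use input the LLM could produce for PostQuery.
// It corresponds to the generated list query's arguments.
const exampleToolInput = {
  filter: {
    title: { contains: "travel" },
  },
  limit: 10,
};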

Query tools

You can also give the LLM access to custom queries defined in your data schema. To do so, define a custom query with a function or custom handler and then reference that custom query as a tool. This requires that the custom query uses the allow.authenticated() authorization strategy.

amplify/data/resource.ts
import { type ClientSchema, a, defineData, defineFunction, secret } from "@aws-amplify/backend";

export const getWeather = defineFunction({
  name: 'getWeather',
  entry: './getWeather.ts',
  environment: {
    API_ENDPOINT: 'MY_API_ENDPOINT',
    API_KEY: secret('MY_API_KEY'),
  },
});

const schema = a.schema({
  getWeather: a.query()
    .arguments({ city: a.string() })
    .returns(a.customType({
      value: a.integer(),
      unit: a.string(),
    }))
    .handler(a.handler.function(getWeather))
    .authorization((allow) => allow.authenticated()),
  chat: a.conversation({
    aiModel: a.ai.model('Claude 3.5 Haiku'),
    systemPrompt: 'You are a helpful assistant',
    tools: [
      a.ai.dataTool({
        // The name of the tool as it will be referenced in the LLM prompt
        name: 'get_weather',
        // The description of the tool provided to the LLM.
        // Use this to help the LLM understand when to use the tool.
        description: 'Gets the weather for a given city',
        // A reference to the `a.query()` that the tool will invoke.
        query: a.ref('getWeather'),
      }),
    ],
  })
    .authorization((allow) => allow.owner()),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({ schema });

The Amplify data tool takes care of specifying the necessary input parameters to the LLM based on the query definition.
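
To make that concrete, below is a hedged sketch of the kind of tool specification the LLM could be given for getWeather; the exact wire format is an internal detail of the Amplify AI kit and the model provider, so treat this shape as illustrative only.

// Illustrative approximation of the tool description derived from the
// query definition; the real format is managed by the conversation handler.
const getWeatherToolSpec = {
  name: 'get_weather',
  description: 'Gets the weather for a given city',
  inputSchema: {
    json: {
      type: 'object',
      properties: {
        city: { type: 'string' },
      },
    },
  },
};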

Below is an illustrative example of a Lambda function handler for the getWeather query.

amplify/data/getWeather.ts
import { env } from "$amplify/env/getWeather";
import type { Schema } from "./resource";

export const handler: Schema["getWeather"]["functionHandler"] = async (
  event
) => {
  const { city } = event.arguments;
  if (!city) {
    throw new Error('City is required');
  }

  const url = `${env.API_ENDPOINT}?city=${encodeURIComponent(city)}`;
  const request = new Request(url, {
    headers: {
      Authorization: `Bearer ${env.API_KEY}`,
    },
  });

  const response = await fetch(request);
  const weather = await response.json();
  return weather;
};

Lastly, you will need to update your amplify/backend.ts file to include the newly defined getWeather function.

amplify/backend.ts
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { data, getWeather } from './data/resource';

const backend = defineBackend({
  auth,
  data,
  getWeather,
});

Connect to any AWS Service

You can connect to any AWS service by defining a custom query and calling that service in the function handler. To properly authorize the custom query function to call the AWS service, you will need to provide the Lambda with the proper permissions.

amplify/backend.ts
import { defineBackend } from "@aws-amplify/backend";
import { auth } from "./auth/resource";
import { data } from "./data/resource";
import { storage } from "./storage/resource";
import { getWeather } from "./functions/getWeather/resource";
import { PolicyStatement } from "aws-cdk-lib/aws-iam";

const backend = defineBackend({
  auth,
  data,
  storage,
  getWeather,
});

backend.getWeather.resources.lambda.addToRolePolicy(
  new PolicyStatement({
    resources: ["[resource arn]"],
    actions: ["[action]"],
  }),
);
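
As a concrete sketch under assumed names, the function below calls Amazon S3 from a custom query handler using the AWS SDK for JavaScript v3. The bucket name and handler are hypothetical; the matching policy above would scope resources to that bucket's ARN and actions to "s3:ListBucket".

import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Hypothetical bucket; grant the function s3:ListBucket on its ARN.
const BUCKET_NAME = "my-example-bucket";

const s3 = new S3Client();

// Hypothetical custom query handler that returns up to 10 object keys.
export const handler = async () => {
  const result = await s3.send(
    new ListObjectsV2Command({ Bucket: BUCKET_NAME, MaxKeys: 10 })
  );
  return (result.Contents ?? []).map((object) => object.Key);
};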

Custom Lambda Tools

You can also define a tool that executes in the conversation handler AWS Lambda function. This is useful if you want to define a tool that is not related to your data schema or that does simple tasks within the Lambda function runtime.

First install the @aws-amplify/backend-ai package.

Terminal
npm install @aws-amplify/backend-ai

Define a custom conversation handler function in your data schema and reference the function in the handler property of the a.conversation() definition.

amplify/data/resource.ts
import { type ClientSchema, a, defineData } from '@aws-amplify/backend';
import { defineConversationHandlerFunction } from '@aws-amplify/backend-ai/conversation';

export const chatHandler = defineConversationHandlerFunction({
  entry: './chatHandler.ts',
  name: 'customChatHandler',
  models: [
    { modelId: a.ai.model("Claude 3.5 Haiku") },
  ],
});

const schema = a.schema({
  chat: a.conversation({
    aiModel: a.ai.model('Claude 3.5 Haiku'),
    systemPrompt: "You are a helpful assistant",
    handler: chatHandler,
  })
    .authorization((allow) => allow.owner()),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({ schema });

Define the executable tool(s) and handler. Below is an illustrative example of a custom conversation handler function that defines a calculator tool.

amplify/data/chatHandler.ts
import {
  ConversationTurnEvent,
  createExecutableTool,
  handleConversationTurnEvent,
} from '@aws-amplify/backend-ai/conversation/runtime';

// Declared as const so the tool's input type can be derived
// from the JSON schema in the tool handler definition.
const jsonSchema = {
  json: {
    type: 'object',
    properties: {
      operator: {
        type: 'string',
        enum: ['+', '-', '*', '/'],
        description: 'The arithmetic operator to use',
      },
      operands: {
        type: 'array',
        items: {
          type: 'number',
        },
        minItems: 2,
        maxItems: 2,
        description: 'Two numbers to perform the operation on',
      },
    },
    required: ['operator', 'operands'],
  },
} as const;

const calculator = createExecutableTool(
  'calculator',
  'Returns the result of a simple calculation',
  jsonSchema,
  // The input type is derived from the JSON schema above.
  (input) => {
    const [a, b] = input.operands;
    switch (input.operator) {
      case '+': return Promise.resolve({ text: (a + b).toString() });
      case '-': return Promise.resolve({ text: (a - b).toString() });
      case '*': return Promise.resolve({ text: (a * b).toString() });
      case '/':
        if (b === 0) throw new Error('Division by zero');
        return Promise.resolve({ text: (a / b).toString() });
      default:
        throw new Error('Invalid operator');
    }
  },
);

export const handler = async (event: ConversationTurnEvent) => {
  await handleConversationTurnEvent(event, {
    tools: [calculator],
  });
};

Note that we throw an error in the calculator tool example above if the input is invalid. This error is surfaced to the LLM by the conversation handler function. Depending on the error message, the LLM may try to use the tool again with different input, or complete its response with text for the user.

Lastly, update your backend definition to include the newly defined chatHandler function.

amplify/backend.ts
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { data, chatHandler } from './data/resource';

defineBackend({
  auth,
  data,
  chatHandler,
});

Best Practices

  • Validate and sanitize any input from the LLM before using it in your application. For example, don't use it directly in a database query or pass it to eval(); see the validation sketch after this list.
  • Handle errors gracefully and provide meaningful error messages.
  • Log and monitor tool usage to detect potential misuse or issues.
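
As a minimal sketch of the first point, assuming a hypothetical tool handler that receives a city argument from the LLM, you could validate against an allow-list pattern before using the value anywhere else:

// Hypothetical validation helper: accept only plausible city names
// (letters, spaces, hyphens) before passing the value downstream.
const CITY_PATTERN = /^[\p{L}\s-]{1,80}$/u;

function assertValidCity(city: unknown): string {
  if (typeof city !== "string" || !CITY_PATTERN.test(city)) {
    // The error is surfaced to the LLM, which may retry with corrected input.
    throw new Error("Invalid city name");
  }
  return city;
}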