Connect to Amazon Bedrock for generative AI use cases
Amazon Bedrock is a fully managed service that removes the complexity of using foundation models (FMs) for generative AI development. It acts as a central hub, offering a curated selection of high-performing FMs from leading AI companies like Anthropic, AI21 Labs, Cohere, and Amazon itself.
Amazon Bedrock streamlines generative AI development by providing:
- Choice and Flexibility: Experiment with and evaluate a wide range of FMs to find the perfect fit for your use case.
- Simplified Integration: Access and use FMs through a single, unified API, reducing development time.
- Enhanced Security and Privacy: Benefit from built-in safeguards to protect your data and prevent misuse.
- Responsible AI Features: Implement guardrails to control outputs and mitigate bias.
In the following sections, we walk through the steps to add Amazon Bedrock to your API as a data source and connect to it from your Amplify app:
- Add Amazon Bedrock as a data source
- Define a custom query
- Configure custom business logic handler code
- Invoke a custom query to prompt a generative AI model
Step 1 - Add Amazon Bedrock as a data source
To connect to Amazon Bedrock as a data source, you can choose between two methods: using a Lambda function, or using a custom resolver powered by AppSync JavaScript resolvers. The following steps demonstrate both methods.
To use the Lambda function method, replace the content of your `amplify/backend.ts` file with the following code to add a Lambda function to your backend and grant it permission to invoke a generative AI model in Amazon Bedrock. The `generateHaikuFunction` Lambda function will be defined in and exported from the `amplify/data/resource.ts` file in the next steps:
```ts
import { defineBackend } from "@aws-amplify/backend";
import { auth } from "./auth/resource";
import { data, MODEL_ID, generateHaikuFunction } from "./data/resource";
import { Effect, PolicyStatement } from "aws-cdk-lib/aws-iam";

export const backend = defineBackend({
  auth,
  data,
  generateHaikuFunction,
});

// Allow the Lambda function to invoke the chosen foundation model
backend.generateHaikuFunction.resources.lambda.addToRolePolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["bedrock:InvokeModel"],
    resources: [`arn:aws:bedrock:*::foundation-model/${MODEL_ID}`],
  })
);
```
To use the custom resolver method instead, replace the content of your `amplify/backend.ts` file with the following code to add an HTTP data source for Amazon Bedrock to your API and grant it permission to invoke a generative AI model:
```ts
import { defineBackend } from "@aws-amplify/backend";
import { auth } from "./auth/resource";
import { data } from "./data/resource";
import { Effect, PolicyStatement } from "aws-cdk-lib/aws-iam";
import { Stack } from "aws-cdk-lib";

export const backend = defineBackend({
  auth,
  data,
});

const MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0";

// Add Amazon Bedrock as an HTTP data source, with requests signed for the
// Bedrock service in the API's region
const bedrockDataSource = backend.data.addHttpDataSource(
  "BedrockDataSource",
  "https://bedrock-runtime.us-east-1.amazonaws.com",
  {
    authorizationConfig: {
      signingRegion: Stack.of(backend.data).region,
      signingServiceName: "bedrock",
    },
  }
);

// Allow the data source to invoke the chosen foundation model
bedrockDataSource.grantPrincipal.addToPrincipalPolicy(
  new PolicyStatement({
    effect: Effect.ALLOW,
    actions: ["bedrock:InvokeModel"],
    resources: [
      `arn:aws:bedrock:${Stack.of(backend.data).region}::foundation-model/${MODEL_ID}`,
    ],
  })
);

// Expose the model ID to the resolver as an environment variable
backend.data.resources.cfnResources.cfnGraphqlApi.environmentVariables = {
  MODEL_ID,
};
```
For the purposes of this guide, we use Anthropic's Claude 3 Haiku to generate content. If you want to use a different model, you can find its ID in the Amazon Bedrock documentation's list of model IDs or in the Amazon Bedrock console, and replace the value of `MODEL_ID`.
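For example, here is a minimal sketch of swapping in a different Anthropic model; it assumes your account has been granted access to Claude 3 Sonnet in the target region, and the only change is the model ID constant (wherever you defined it):

```ts
// Hypothetical swap: use Claude 3 Sonnet instead of Claude 3 Haiku.
// Update the constant in backend.ts or resource.ts, depending on your method.
const MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0";
```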
Step 2 - Define a custom query
For the Lambda function method, replace the contents of your `amplify/data/resource.ts` file with the following code. This defines and exports the Lambda function that was granted permission to invoke a generative AI model in Amazon Bedrock in the previous step. A custom query named `generateHaiku` is added to the schema with `generateHaikuFunction` as the handler, using the `a.handler.function()` modifier:
```ts
import {
  type ClientSchema,
  a,
  defineData,
  defineFunction,
} from "@aws-amplify/backend";

export const MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0";

export const generateHaikuFunction = defineFunction({
  entry: "./generateHaiku.ts",
  environment: {
    MODEL_ID,
  },
});

const schema = a.schema({
  generateHaiku: a
    .query()
    .arguments({ prompt: a.string().required() })
    .returns(a.string())
    .authorization((allow) => [allow.publicApiKey()])
    .handler(a.handler.function(generateHaikuFunction)),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "apiKey",
    apiKeyAuthorizationMode: {
      expiresInDays: 30,
    },
  },
});
```
For the custom resolver method, with Amazon Bedrock added as a data source, you can reference it in custom queries using the `a.handler.custom()` modifier, which accepts the name of the data source and an entry point for your resolvers. Replace the contents of your `amplify/data/resource.ts` file with the following code to define a custom query named `generateHaiku` in the schema:
```ts
import { type ClientSchema, a, defineData } from "@aws-amplify/backend";

const schema = a.schema({
  generateHaiku: a
    .query()
    .arguments({ prompt: a.string().required() })
    .returns(a.string())
    .authorization((allow) => [allow.publicApiKey()])
    .handler(
      a.handler.custom({
        dataSource: "BedrockDataSource",
        entry: "./generateHaiku.js",
      })
    ),
});

export type Schema = ClientSchema<typeof schema>;

export const data = defineData({
  schema,
  authorizationModes: {
    defaultAuthorizationMode: "apiKey",
    apiKeyAuthorizationMode: {
      expiresInDays: 30,
    },
  },
});
```
Step 3 - Configure custom business logic handler code
For the Lambda function method, create a `generateHaiku.ts` file in your `amplify/data` folder with the handler code for the custom query added to your schema in the previous step. The following code uses the `BedrockRuntimeClient` from the `@aws-sdk/client-bedrock-runtime` package to invoke the generative AI model in Amazon Bedrock. The `handler` function takes the user prompt as an argument, invokes the model, and returns the generated haiku:
```ts
import type { Schema } from "./resource";
import {
  BedrockRuntimeClient,
  InvokeModelCommand,
  InvokeModelCommandInput,
} from "@aws-sdk/client-bedrock-runtime";

// Initialize the Bedrock runtime client
const client = new BedrockRuntimeClient();

export const handler: Schema["generateHaiku"]["functionHandler"] = async (
  event,
  context
) => {
  // User prompt
  const prompt = event.arguments.prompt;

  // Invoke the model with a system prompt that constrains it to haiku
  const input = {
    modelId: process.env.MODEL_ID,
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      system:
        "You are an expert at crafting a haiku. You are able to craft a haiku out of anything and therefore answer only in haiku.",
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: prompt,
            },
          ],
        },
      ],
      max_tokens: 1000,
      temperature: 0.5,
    }),
  } as InvokeModelCommandInput;

  const command = new InvokeModelCommand(input);

  const response = await client.send(command);

  // Parse the response and return the generated haiku
  const data = JSON.parse(Buffer.from(response.body).toString());

  return data.content[0].text;
};
```
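The handler reads the generated text from `data.content[0].text`. As an illustration only, a hypothetical, abbreviated sketch of the parsed Messages API response body might look like this (all field values are made up):

```ts
// Hypothetical, abbreviated example of the parsed response body;
// the generated text is read from content[0].text.
const exampleParsedBody = {
  id: "msg_example123", // hypothetical message ID
  type: "message",
  role: "assistant",
  content: [{ type: "text", text: "An example haiku would appear here" }],
  stop_reason: "end_turn",
};
```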
For the custom resolver method, create a `generateHaiku.js` file in your `amplify/data` folder with the resolver code for the custom query added to your schema in the previous step. The following code defines a `request` function that constructs the HTTP request to invoke the generative AI model in Amazon Bedrock, and a `response` function that parses the response and returns the generated haiku:
```js
export function request(ctx) {
  // Define a system prompt to give the model a persona
  const system =
    "You are an expert at crafting a haiku. You are able to craft a haiku out of anything and therefore answer only in haiku.";

  const prompt = ctx.args.prompt;

  // Construct the HTTP request to invoke the generative AI model
  return {
    resourcePath: `/model/${ctx.env.MODEL_ID}/invoke`,
    method: "POST",
    params: {
      headers: {
        "Content-Type": "application/json",
      },
      body: {
        anthropic_version: "bedrock-2023-05-31",
        system,
        messages: [
          {
            role: "user",
            content: [
              {
                type: "text",
                text: prompt,
              },
            ],
          },
        ],
        max_tokens: 1000,
        temperature: 0.5,
      },
    },
  };
}

// Parse the response and return the generated haiku
export function response(ctx) {
  const res = JSON.parse(ctx.result.body);
  const haiku = res.content[0].text;

  return haiku;
}
```
The code above uses the Messages API, which is supported by chat models such as Anthropic's Claude 3 Haiku. The `system` prompt gives the model a persona or directives to follow, and the `messages` array can contain a history of messages. The `max_tokens` parameter controls the maximum number of tokens the model can generate, and the `temperature` parameter determines the randomness, or creativity, of the generated response.
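As an illustration of passing message history, here is a minimal sketch of a multi-turn `messages` array; the conversation content is hypothetical:

```ts
// Hypothetical multi-turn conversation: earlier user and assistant turns
// are passed as history, and the final user message is the new prompt.
const messages = [
  {
    role: "user",
    content: [{ type: "text", text: "Write a haiku about the sea." }],
  },
  {
    role: "assistant",
    content: [{ type: "text", text: "Waves fold into foam" }],
  },
  {
    role: "user",
    content: [{ type: "text", text: "Now write one about mountains." }],
  },
];
```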
Step 4 - Invoke a custom query to prompt a generative AI model
From your generated Data client, you can find all your custom queries and mutations under the `client.queries` and `client.mutations` APIs, respectively.

The custom query below prompts a generative AI model to create a haiku based on the given prompt. Replace the `prompt` value with your desired prompt text or user input and invoke the query as shown:
```ts
const { data, errors } = await client.queries.generateHaiku({
  prompt: "Frank Herbert's Dune",
});
```
Here's an example of a simple UI that prompts a generative AI model to create a haiku based on user input:
```tsx
import { FormEvent, useState } from "react";

import { generateClient } from "aws-amplify/api";
import { Schema } from "@/amplify/data/resource";

import { Amplify } from "aws-amplify";
import outputs from "@/amplify_outputs.json";

Amplify.configure(outputs);

const client = generateClient<Schema>();

export default function App() {
  const [prompt, setPrompt] = useState<string>("");
  const [answer, setAnswer] = useState<string | null>(null);

  const sendPrompt = async (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();

    const { data, errors } = await client.queries.generateHaiku({
      prompt,
    });

    if (!errors) {
      setAnswer(data);
      setPrompt("");
    } else {
      console.log(errors);
    }
  };

  return (
    <main className="flex min-h-screen flex-col items-center justify-center p-24 dark:text-white">
      <div>
        <h1 className="text-3xl font-bold text-center mb-4">Haiku Generator</h1>

        <form className="mb-4 self-center max-w-[500px]" onSubmit={sendPrompt}>
          <input
            className="text-black p-2 w-full"
            placeholder="Enter a prompt..."
            name="prompt"
            value={prompt}
            onChange={(e) => setPrompt(e.target.value)}
          />
        </form>

        <div className="text-center">
          <pre>{answer}</pre>
        </div>
      </div>
    </main>
  );
}
```
Conclusion
In this guide, you learned how to connect to Amazon Bedrock from your Amplify app. By adding Bedrock as a data source, defining a custom query, configuring custom business logic handler code, and invoking custom queries, you can leverage the power of generative AI models in your application.
To clean up, delete your sandbox by accepting the prompt when you terminate the sandbox process in your terminal. Alternatively, you can use the AWS Amplify console to manage and delete sandbox environments.