Page updated Jan 16, 2024

Identify entities from images

Set up your backend with the following: if you haven't already done so, run amplify init inside your project and then amplify add auth (we recommend selecting the default configuration).

Run amplify add predictions and select Identify. Then use the following answers:

? What would you like to identify?
  Identify Text
❯ Identify Entities
  Identify Labels
  Learn More

? Would you like use the default configuration? Default Configuration

? Who should have access? Auth and Guest users

Now run amplify push, which will generate your aws-exports.js file and create the resources in the cloud. You can now either add this to your backend or skip it and add more features to your app.

Services used: Amazon Rekognition

Advanced configuration

You can enhance your application's ability to identify entities by performing indexing against a pre-defined collection of images and providing them to Amazon Rekognition. This can be done in one of two ways:

  1. Administrators provide images to be indexed from an S3 bucket
  2. Application users upload files to an S3 bucket which are indexed

Amplify configures Lambda Triggers to automatically perform this indexing when the files are uploaded. Rekognition does not store the faces themselves; it extracts facial features into a vector that is stored in the backend, so you can delete these images after indexing if you choose. Note, however, that by default the Lambda Triggers will remove the index entry when you delete an image, so you will need to modify their behavior if you want the index to survive image deletion.

To add this functionality to your application, choose advanced when prompted in the Identify flow (if you already enabled Identify Entities, you will need to run amplify update predictions):

? What would you like to identify? Identify Entities
? Would you like use the default configuration? Advanced Configuration
? Would you like to enable celebrity detection? Yes
? Would you like to identify entities from a collection of images? Yes
? How many entities would you like to identify 50
? Would you like to allow users to add images to this collection? Yes
? Who should have access? Auth and Guest users
? The CLI would be provisioning an S3 bucket to store these images please provide bucket name: mybucket

Note that if you already have an S3 bucket in your project from running amplify add storage, it will be reused. To upload images from the CLI for administrator indexing, run amplify predictions console and select Identify; this opens the S3 bucket location in the AWS console so you can upload your images.

For application users, when they upload images with Storage.put() they must specify the prefix under which the Lambda functions perform indexing, like so:

Storage.put('test.jpg', file, {
  level: 'protected',
  customPrefix: {
    protected: 'protected/predictions/index-faces/',
  },
});

In the sample React application code ahead, you will see that to use this functionality you need to set collection: true when calling Predictions.identify() and remove celebrityDetection: true. The flow is: first upload an image to S3 with the PredictionsUpload function (which is connected to a button in the app); after a few seconds, send the same image to Predictions.identify(), which will check whether that image has been indexed in the Collection.
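The upload-then-match flow can be sketched as a small helper. This is a hypothetical illustration, not code from the sample app: the `storage` and `predictions` parameters stand in for Amplify's Storage and Predictions categories (injected here so the sketch is self-contained), and `uploadThenIdentify` is an assumed name. The key and prefix mirror the Storage.put() call shown above.

```javascript
// Hypothetical helper sketching the collection flow described above.
async function uploadThenIdentify(storage, predictions, file) {
  // 1. Upload the image under the indexing prefix so the Lambda
  //    trigger can add it to the Rekognition collection.
  await storage.put('test.jpg', file, {
    level: 'protected',
    customPrefix: { protected: 'protected/predictions/index-faces/' },
  });
  // 2. Once indexing has had time to complete, check the same image
  //    against the collection -- note collection: true, and no
  //    celebrityDetection flag.
  const { entities } = await predictions.identify({
    entities: { source: { file }, collection: true },
  });
  return entities;
}
```

In a real app you would pass Amplify's Storage and Predictions objects directly, and allow a short delay (or a retry loop) between the upload and the identify call so the Lambda trigger can finish indexing.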

Working with the API

Predictions.identify({entities: {...}}) => Promise<> detects entities from an image, along with related information such as position, faces, and landmarks. It can also identify celebrities and entities that were previously indexed. This function returns a Promise that resolves to an object containing the identified entities.

Input can be sent directly from the browser (as a File object or an ArrayBuffer object) or as an Amazon S3 key from the project bucket.

Detect entities directly from an image uploaded from the browser (File object):

Predictions.identify({
  entities: {
    source: {
      file,
    },
  }
})
.then((response) => console.log({ response }))
.catch(err => console.log({ err }));

Detect entities directly from image binary data in the browser (ArrayBuffer object). This technique is useful when you have base64-encoded binary image data, for example from a webcam source.

Predictions.identify({
  entities: {
    source: {
      bytes: imageArrayBuffer,
    },
  }
})
.then((response) => console.log({ response }))
.catch(err => console.log({ err }));
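Since a webcam frame usually arrives as a base64 data URL rather than an ArrayBuffer, a small decoding step is needed first. The helper below is a sketch of that conversion (the function name is an assumption, not part of the Amplify API); it expects the base64 payload with the `data:image/...;base64,` header already stripped.

```javascript
// Hypothetical helper: decodes a base64 string (e.g. the payload of a
// webcam data URL) into the ArrayBuffer that Predictions.identify()
// accepts as `bytes`.
function base64ToArrayBuffer(base64) {
  const binary = atob(base64); // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i); // copy each byte value
  }
  return bytes.buffer;
}
```

The result can then be passed as `bytes: base64ToArrayBuffer(payload)` in the source object above.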

From Amazon S3 key

Predictions.identify({
  entities: {
    source: {
      key: pathToPhoto,
      level: 'public | private | protected', // optional; defaults to the level configured on the Storage category
    },
  }
})
.then((response) => console.log({ response }))
.catch(err => console.log({ err }));

The following options are independent of which source is specified. For demonstration purposes, file is used here, but an S3 key works as well.

Detecting the bounding boxes of faces in an image, along with their landmarks (eyes, mouth, nose):

Predictions.identify({
  entities: {
    source: {
      file,
    },
  }
})
.then(({ entities }) => {
  entities.forEach(({ boundingBox, landmarks }) => {
    const {
      width, // ratio of overall image width
      height, // ratio of overall image height
      left, // left coordinate as a ratio of overall image width
      top // top coordinate as a ratio of overall image height
    } = boundingBox;
    landmarks.forEach(landmark => {
      const {
        type, // string "eyeLeft", "eyeRight", "mouthLeft", "mouthRight", "nose"
        x, // ratio of overall image width
        y // ratio of overall image height
      } = landmark;
    })
  })
})
.catch(err => console.log({ err }));
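Because the bounding-box values are ratios of the overall image dimensions, they must be scaled by the image's pixel size before you can, for example, draw a rectangle over a face on a canvas. A minimal sketch of that conversion (the helper name is an assumption, not an Amplify API):

```javascript
// Hypothetical helper: converts a ratio-based boundingBox from
// Predictions.identify() into pixel coordinates for an image of the
// given size, e.g. for drawing an overlay rectangle on a <canvas>.
function boundingBoxToPixels(boundingBox, imageWidth, imageHeight) {
  return {
    left: Math.round(boundingBox.left * imageWidth),     // x offset in px
    top: Math.round(boundingBox.top * imageHeight),      // y offset in px
    width: Math.round(boundingBox.width * imageWidth),   // box width in px
    height: Math.round(boundingBox.height * imageHeight) // box height in px
  };
}
```

Landmark coordinates (`x`, `y`) are ratios as well and can be scaled the same way.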

Detecting celebrities in an image. Only detected celebrities are returned, with their names and URLs to related information.

Predictions.identify({
  entities: {
    source: {
      file,
    },
    celebrityDetection: true // boolean. It will only show detected celebrities
  }
})
.then(({ entities }) => {
  entities.forEach(({ boundingBox, landmarks, metadata }) => {
    const {
      name,
      urls
    } = metadata; // celebrity info

    // ...
  })
})
.catch(err => console.log({ err }));

Detecting entities from previously uploaded images (e.g. the Advanced Configuration for Identify Entities):

Predictions.identify({
  entities: {
    source: {
      file,
    },
    collection: true
  }
})
.then(({ entities }) => {
  entities.forEach(({ boundingBox, metadata: { externalImageId } }) => {
    const {
      width, // ratio of overall image width
      height, // ratio of overall image height
      left, // left coordinate as a ratio of overall image width
      top // top coordinate as a ratio of overall image height
    } = boundingBox;
    console.log({ externalImageId }); // this is the object key on S3 from the original image!

    Storage.get("", {
      customPrefix: {
        public: externalImageId
      },
      level: "public",
    }).then(src => console.log({ src }));
  })
})
.catch(err => console.log(err));