
Identify entities from images

Amplify iOS v1 is now in Maintenance Mode until May 31st, 2024. This means that we will continue to include updates to ensure compatibility with backend services and security. No new features will be introduced in v1.

Please use the latest version (v2) of Amplify Library for Swift to get started.

If you are currently using v1, follow these instructions to upgrade to v2.

Amplify libraries should be used for all new cloud connected applications. If you are currently using the AWS Mobile SDK for iOS, you can access the documentation here.

The following APIs will enable you to identify entities (faces and/or celebrities) from images.

For identifying entities on iOS, we use both AWS backend services and Apple's on-device Core ML Vision framework to provide you with the most accurate results. If your device is offline, we return results from Core ML only. If you are able to connect to AWS services, we return the union of results from both the service and Core ML. Switching between the backend services and Core ML happens automatically, with no additional configuration required.

Set up your backend

If you haven't already done so, run amplify init inside your project and then amplify add auth (we recommend selecting the default configuration).

Run amplify add predictions, then use the following answers:

? Please select from one of the categories below (Use arrow keys)
❯ Identify
  Convert
  Interpret
  Infer
  Learn More

? What would you like to identify?
  Identify Text
❯ Identify Entities
  Identify Labels

? Provide a friendly name for your resource
  <Enter a friendly name here>

? Would you like use the default configuration? (Use arrow keys)
❯ Default Configuration
  Advanced Configuration

? Who should have access?
  Auth users only
❯ Auth and Guest users

Run amplify push to create the resources in the cloud.

Working with the API

In order to match entities from a pre-created Amazon Rekognition Collection, ensure that both collectionId and maxEntities are set in your amplifyconfiguration.json file. The value of collectionId should be the name of the collection you created with the CLI or the SDK. The value of maxEntities should be a number between 1 and 50 (50 is the maximum number of entities Rekognition can detect from a collection). If collectionId and maxEntities are not both set to valid values in amplifyconfiguration.json, the call will instead detect general entities along with facial features, landmarks, etc.

Bounding boxes for entities are returned as ratios of the image dimensions. To place an entity's bounding box on an image, multiply the x value by the image's width, the y value by the image's height, and the width and height ratios by the image's width and height respectively.
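For reference, the identifyEntities block in amplifyconfiguration.json looks roughly like the following. This is a sketch: the collectionId and region values are placeholders, and exact keys may vary slightly with your CLI version.

"identifyEntities": {
    "collectionId": "<your-collection-id>",
    "maxEntities": "50",
    "celebrityDetectionEnabled": "true",
    "defaultNetworkPolicy": "auto",
    "region": "us-east-1"
}

Since bounding boxes come back as ratios, a small helper along these lines converts them to pixel coordinates (the denormalize name is ours, not part of the library):

import CoreGraphics

// Convert a normalized bounding box (all values in 0...1) into pixel
// coordinates for an image of the given size.
func denormalize(_ box: CGRect, in imageSize: CGSize) -> CGRect {
    CGRect(
        x: box.origin.x * imageSize.width,
        y: box.origin.y * imageSize.height,
        width: box.width * imageSize.width,
        height: box.height * imageSize.height
    )
}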

You can identify entity matches from your Rekognition Collection in your app using the following code sample:

import Amplify
import Foundation

func detectEntities(_ image: URL) {
    Amplify.Predictions.identify(type: .detectEntities, image: image) { event in
        switch event {
        case let .success(result):
            // With collectionId configured, results are matches against
            // your Rekognition Collection.
            let data = result as! IdentifyEntityMatchesResult
            print(data.entities)
        case let .failure(error):
            print(error)
        }
    }
}
Or, using Combine:

import Amplify
import Combine
import Foundation

func detectEntities(_ image: URL) -> AnyCancellable {
    Amplify.Predictions.identify(type: .detectEntities, image: image)
        .resultPublisher
        .sink {
            if case let .failure(error) = $0 {
                print(error)
            }
        }
        receiveValue: { result in
            let data = result as! IdentifyEntityMatchesResult
            print(data.entities)
        }
}

To detect general entities like facial features, landmarks, etc., you can use the following call pattern. Results are mapped to IdentifyEntitiesResult. For example:

func detectEntities(_ image: URL) {
    Amplify.Predictions.identify(type: .detectEntities, image: image) { event in
        switch event {
        case let .success(result):
            let data = result as! IdentifyEntitiesResult
            print(data.entities)
        case let .failure(error):
            print(error)
        }
    }
}
Or, using Combine:

func detectEntities(_ image: URL) -> AnyCancellable {
    Amplify.Predictions.identify(type: .detectEntities, image: image)
        .resultPublisher
        .sink {
            if case let .failure(error) = $0 {
                print(error)
            }
        }
        receiveValue: { result in
            let data = result as! IdentifyEntitiesResult
            print(data.entities)
        }
}
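Tying this back to the bounding-box math above, a sketch like the following maps each detected entity's normalized box to a pixel-space rectangle. It assumes Entity exposes a normalized boundingBox: CGRect and reuses the denormalize helper from earlier; frames(for:imageSize:) is our name, not a library API.

// Map each entity's normalized bounding box to pixel coordinates using the
// denormalize helper shown earlier. Assumes `Entity` exposes a normalized
// `boundingBox: CGRect` property.
func frames(for entities: [Entity], imageSize: CGSize) -> [CGRect] {
    entities.map { denormalize($0.boundingBox, in: imageSize) }
}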

Detecting Celebrities

To detect celebrities, pass .detectCelebrity as the type: argument. Results are mapped to IdentifyCelebritiesResult. For example:

func detectCelebs(_ image: URL) {
    Amplify.Predictions.identify(type: .detectCelebrity, image: image) { event in
        switch event {
        case let .success(result):
            let data = result as! IdentifyCelebritiesResult
            if let detectedCeleb = data.celebrities.first {
                print("Celebrity Name: \(detectedCeleb.metadata.name)")
            }
            print(result)
        case let .failure(error):
            print(error)
        }
    }
}
Or, using Combine:

func detectCelebs(_ image: URL) -> AnyCancellable {
    Amplify.Predictions.identify(type: .detectCelebrity, image: image)
        .resultPublisher
        .sink {
            if case let .failure(error) = $0 {
                print(error)
            }
        }
        receiveValue: { result in
            let data = result as! IdentifyCelebritiesResult
            if let detectedCeleb = data.celebrities.first {
                print("Celebrity Name: \(detectedCeleb.metadata.name)")
            }
            print(result)
        }
}

If you pass in the URL of an image of a well-known celebrity, you will see the corresponding celebrity name printed to the screen along with additional metadata.
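To try this end to end, a minimal sketch like the one below kicks off the Combine variant with a bundled image and retains the subscription so it is not cancelled immediately. The celebrity.jpg resource name is hypothetical; substitute any image URL you have.

import Combine
import Foundation

var cancellables = Set<AnyCancellable>()

// Hypothetical bundled resource; replace with your own image URL.
if let url = Bundle.main.url(forResource: "celebrity", withExtension: "jpg") {
    detectCelebs(url).store(in: &cancellables)
}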