# Amazon Data Firehose client

`AmplifyFirehoseClient` is a standalone client for streaming data to Amazon Data Firehose delivery streams. It provides:
- Local persistence for offline support
- Automatic retry for failed records
- Automatic batching (up to 500 records or 4 MB per request)
- Interval-based automatic flushing (default: every 30 seconds)
- Enable/disable toggle that silently drops new records while preserving cached ones
## Getting started

### Installation
Add AmplifyFirehoseClient to your project using Swift Package Manager. In Xcode, go to File > Add Package Dependencies and enter the repository URL for the Amplify Swift SDK.
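If you manage dependencies in a `Package.swift` manifest instead of Xcode, the declaration looks roughly like this. The repository URL, version, and product name are assumptions — verify them against the Amplify Swift repository:

```swift
// swift-tools-version:5.9
// Sketch only — URL, version, and product name are assumptions.
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        .package(url: "https://github.com/aws-amplify/amplify-swift", from: "2.0.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [
                .product(name: "AmplifyFirehoseClient", package: "amplify-swift")
            ]
        )
    ]
)
```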
### Initialize the client

```swift
import AmplifyFirehoseClient

let firehose = try AmplifyFirehoseClient(
    region: "us-east-1",
    credentialsProvider: credentialsProvider
)
```

### Configuration options
You can customize the client behavior by passing an options object:
| Option | Default | Description |
|---|---|---|
| `cacheMaxBytes` | 5 MB | Maximum size of the local record cache in bytes. |
| `maxRetries` | 5 | Maximum retry attempts per record before it is discarded. |
| `flushStrategy` | `.interval(30)` | Automatic flush interval in seconds. Use `.none` for manual-only flushing. |
| `configureClient` | `nil` | Closure to customize the underlying `FirehoseClientConfiguration`. |
```swift
let firehose = try AmplifyFirehoseClient(
    region: "us-east-1",
    credentialsProvider: credentialsProvider,
    options: .init(
        cacheMaxBytes: 10 * 1_024 * 1_024, // 10 MB
        maxRetries: 5,
        flushStrategy: .interval(30),
        configureClient: { config in
            // Customize the underlying FirehoseClientConfiguration
        }
    )
)
```

To disable automatic flushing:

```swift
options: .init(flushStrategy: .none)
```

## Usage
### Record data

Use `record()` to persist data to the local cache. Records are sent to Firehose during the next flush cycle (automatic or manual).

```swift
let result = try await firehose.record(
    data: "Hello Firehose".data(using: .utf8)!,
    streamName: "my-delivery-stream"
)
```

Records submitted while the client is disabled are silently dropped.
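For structured payloads, one common pattern is to JSON-encode a `Codable` value before recording it. A sketch — the event type here is hypothetical, not part of this client's API:

```swift
import Foundation

// Hypothetical event type — any Codable value works the same way.
struct ClickEvent: Codable {
    let page: String
    let timestamp: Date
}

let event = ClickEvent(page: "/home", timestamp: Date())
let payload = try JSONEncoder().encode(event)

// Cached locally; sent on the next automatic or manual flush.
let result = try await firehose.record(
    data: payload,
    streamName: "my-delivery-stream"
)
```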
### Flush records

The client automatically flushes cached records at the configured interval (default: 30 seconds). You can also trigger a manual flush:

```swift
let flushResult = try await firehose.flush()
print("Flushed \(flushResult.recordsFlushed) records")
```

Each flush sends at most one batch per stream (up to 500 records or 4 MB). Remaining records are picked up in subsequent flush cycles. If a flush is already in progress, the call returns immediately with `flushInProgress: true`.
Manual flushes work even when the client is disabled, allowing you to drain cached records without re-enabling collection.
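For example, a sketch of draining the cache while collection stays off (assumes the `flush()` result shape shown above):

```swift
// Stop accepting new records; already-cached ones remain in storage.
await firehose.disable()

// Each flush sends at most one batch per stream, so loop until
// nothing is left to send.
while try await firehose.flush().recordsFlushed > 0 {}
```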
### Clear cache

Delete all cached records from local storage:

```swift
let cleared = try await firehose.clearCache()
```

### Enable and disable
You can toggle record collection and automatic flushing at runtime. When disabled, new records are silently dropped but already-cached records remain in storage.
```swift
await firehose.disable()
// Records are dropped, auto-flush paused

await firehose.enable()
// Collection and auto-flush resume
```

## Advanced
### Escape hatch

Access the underlying AWS SDK `FirehoseClient` for operations not covered by this client's API:

```swift
let sdkClient = firehose.getFirehoseClient()
// Use sdkClient for direct Firehose API calls
```

## Error handling
All operations surface errors through a single `FirehoseError` type:
| Error type | Description |
|---|---|
| `FirehoseError.validation` | Record input validation failed (oversized record). |
| `FirehoseError.cacheLimitExceeded` | Local cache is full. Call `flush()` or `clearCache()` to free space. |
| `FirehoseError.cache` | Local database error. |
| `FirehoseError.unknown` | Unexpected or uncategorized error. |
Operations throw `FirehoseError`:

```swift
do {
    try await firehose.record(
        data: payload,
        streamName: "stream"
    )
} catch let error as FirehoseError {
    switch error {
    case .validation(let desc, _, _):
        print("Validation error: \(desc)")
    case .cacheLimitExceeded:
        print("Cache full")
    case .cache(let desc, _, _):
        print("Storage error: \(desc)")
    case .unknown(let desc, _, _):
        print("Unknown error: \(desc)")
    }
}
```

## Retry behavior
- All `PutRecordBatch` error codes (`ServiceUnavailableException`, `InternalFailure`) are treated as retryable.
- Each failed record's retry count is incremented after each attempt.
- Records exceeding `maxRetries` (default: 5) are permanently deleted from the cache.
- SDK-level Firehose errors are logged and skipped per stream, so other streams can still flush.
- Non-SDK errors (network failures, storage errors) abort the flush entirely.
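The per-record bookkeeping described above can be sketched as follows. This is illustrative only — the type and function names are assumptions, not the client's internals:

```swift
// Illustrative model of a cached record's retry budget.
struct CachedRecord {
    var retryCount = 0
    // payload, stream name, etc. omitted
}

let maxRetries = 5  // the client default

/// Called when a record fails within a PutRecordBatch response.
/// Returns true if the record stays cached for the next flush,
/// false once it has exceeded maxRetries and is permanently deleted.
func keepAfterFailure(_ record: inout CachedRecord) -> Bool {
    record.retryCount += 1                  // incremented after each attempt
    return record.retryCount <= maxRetries  // beyond maxRetries → deleted
}
```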
## Firehose service limits
The client enforces these limits before sending to the service:
| Limit | Value |
|---|---|
| Max records per `PutRecordBatch` request | 500 |
| Max single record size | 1,000 KiB |
| Max total payload per `PutRecordBatch` request | 4 MB |
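Since oversized records fail validation, a caller can pre-check payload size before calling `record()`. A minimal sketch — the constant mirrors the table above, and interpreting "1,000 KiB" as 1,000 × 1,024 bytes is an assumption:

```swift
import Foundation

// Per-record limit from the table above: 1,000 KiB (assumed 1,000 * 1,024 bytes).
let maxRecordBytes = 1_000 * 1_024

/// Returns true if `data` fits the single-record limit, so record()
/// will not throw FirehoseError.validation for size.
func fitsRecordLimit(_ data: Data) -> Bool {
    data.count <= maxRecordBytes
}
```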