Use AutoTranscribe in your Amazon Connect solution
Set up your account's IAM role (e.g., `kinesis-connect-access-role-for-asapp`) to trust ASAPP's role, `asapp-prod-mg-amazonconnect-role`, to assume it, and create a policy permitting list and read operations on the appropriate Kinesis Video Streams associated with your Amazon Connect instance.
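The snippet below is a minimal sketch of that setup using boto3. The role and policy names come from the example above; the ASAPP account ID, your account ID, and the stream ARN pattern are placeholders to confirm with ASAPP and scope to your own Connect instance.

```python
# Sketch: create the IAM role that ASAPP's asapp-prod-mg-amazonconnect-role can assume,
# plus an inline policy allowing list/read access to the Kinesis Video Streams used by
# Amazon Connect. Account IDs and the stream ARN pattern are placeholders.
import json
import boto3

iam = boto3.client("iam")

ASAPP_ACCOUNT_ID = "111111111111"  # placeholder: ASAPP-provided AWS account ID
ASAPP_ROLE_ARN = f"arn:aws:iam::{ASAPP_ACCOUNT_ID}:role/asapp-prod-mg-amazonconnect-role"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ASAPP_ROLE_ARN},
        "Action": "sts:AssumeRole",
    }],
}

kvs_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "kinesisvideo:ListStreams",
            "kinesisvideo:DescribeStream",
            "kinesisvideo:GetDataEndpoint",
            "kinesisvideo:GetMedia",
        ],
        # Placeholder: scope this to the streams created by your Connect instance.
        "Resource": "arn:aws:kinesisvideo:*:222222222222:stream/*",
    }],
}

iam.create_role(
    RoleName="kinesis-connect-access-role-for-asapp",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="kinesis-connect-access-role-for-asapp",
    PolicyName="asapp-kvs-read-access",
    PolicyDocument=json.dumps(kvs_read_policy),
)
```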
The `/start-streaming` endpoint request requires several fields, but three specific attributes must come from Amazon Connect:

- The contact identifier: `$.ContactId`, `$.InitialContactId`, or `$.PreviousContactId`
- `$.MediaStreams.Customer.Audio.StreamARN`
- `$.MediaStreams.Customer.Audio.StartFragmentNumber`
Requests to `/start-streaming` also require agent and customer identifiers. These identifiers can be sourced from Amazon Connect but may also originate from other systems if your use case requires it.

Create two Lambda functions: one for sending a request to ASAPP's `/start-streaming` endpoint and another for sending a request to ASAPP's `/stop-streaming` endpoint.
For the Lambda that calls the `/start-streaming` endpoint, place the Lambda block following the Start media streaming flow block. For the Lambda that calls the `/stop-streaming` endpoint, place the Lambda block immediately before the Stop media streaming flow block.
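The handler below is a sketch of the `/start-streaming` Lambda: it pulls the required attributes from the Amazon Connect contact flow event and posts them to ASAPP. The API host, authorization header, and request body field names are illustrative placeholders, not ASAPP's published schema; substitute the values from your ASAPP API reference.

```python
# Sketch of the /start-streaming Lambda invoked from the Amazon Connect contact flow.
# Host, auth header, and body field names are placeholders -- use your ASAPP API reference.
import json
import os
import urllib.request

ASAPP_HOST = os.environ["ASAPP_API_HOST"]    # placeholder, e.g. https://example.api.asapp.com
ASAPP_API_KEY = os.environ["ASAPP_API_KEY"]  # placeholder credential

def lambda_handler(event, context):
    contact = event["Details"]["ContactData"]
    audio = contact["MediaStreams"]["Customer"]["Audio"]

    # Hypothetical body; the real /start-streaming schema is defined in ASAPP's API reference.
    body = {
        "externalConversationId": contact["ContactId"],
        "streamArn": audio["StreamARN"],
        "startFragmentNumber": audio["StartFragmentNumber"],
        # Agent and customer identifiers may come from Connect attributes or another system.
        "customerExternalId": contact.get("Attributes", {}).get("customerId", "unknown"),
        "agentExternalId": contact.get("Attributes", {}).get("agentId", "unknown"),
    }

    req = urllib.request.Request(
        f"{ASAPP_HOST}/mg-autotranscribe/v1/start-streaming",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ASAPP_API_KEY}",  # placeholder auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        result = json.loads(resp.read())

    # Return the streamId as a contact attribute so later flow blocks (e.g., the
    # /stop-streaming Lambda) can reuse it.
    return {"asappStreamId": str(result.get("streamId", ""))}
```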
Once a successful request is made to the `/start-streaming` endpoint, AutoTranscribe begins to publish `transcript` messages, each of which contains a full utterance for a single call participant.
The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600ms.
Each `transcript` message is JSON encoded with these fields:
Field | Subfield | Description | Example Value |
---|---|---|---|
externalConversationId | | Unique identifier for the call, set to the Amazon Connect Contact Id | 8c259fea-8764-4a92-adc4-73572e9cf016 |
streamId | | Unique identifier assigned by ASAPP to each call participant's stream, returned in the response to `/start-streaming` and `/stop-streaming` | 5ce2b755-3f38-11ed-b755-7aed4b5c38d5 |
sender | externalId | Customer or agent identifier as provided in the request to `/start-streaming` | ef53245 |
sender | role | The participant role, either customer or agent | customer, agent |
autotranscribeResponse | message | Type of message | transcript |
autotranscribeResponse | start | Start of the utterance, in milliseconds | 0 |
autotranscribeResponse | end | Elapsed milliseconds since the start of the utterance | 1000 |
autotranscribeResponse | utterance | Transcribed utterance text | Are you there? |
`transcript` message format:
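The dictionary below is an illustrative reconstruction assembled from the field table above, not a verbatim ASAPP sample; refer to ASAPP's API reference for the authoritative payload.

```python
# Illustrative transcript message reconstructed from the field table above.
example_transcript_message = {
    "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",  # Amazon Connect Contact Id
    "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
    "sender": {
        "externalId": "ef53245",  # identifier supplied in the /start-streaming request
        "role": "customer",       # or "agent"
    },
    "autotranscribeResponse": {
        "message": "transcript",
        "start": 0,       # start of the utterance, in ms
        "end": 1000,      # elapsed ms since the start of the utterance
        "utterance": "Are you there?",
    },
}
```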
GET /conversation/v1/conversation/messages
Once a conversation is complete, make a request to the endpoint using a conversation identifier and receive back every message in the conversation.
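As a minimal sketch of that call, the helper below fetches all messages for a completed conversation; the host, authorization header, and query parameter name are assumptions rather than values from ASAPP's API reference.

```python
# Sketch: fetch all messages for a completed conversation.
# Host, auth header, and the query parameter name are placeholders.
import json
import urllib.parse
import urllib.request

def get_conversation_messages(host: str, api_key: str, conversation_id: str) -> dict:
    query = urllib.parse.urlencode({"conversationId": conversation_id})  # hypothetical parameter name
    req = urllib.request.Request(
        f"{host}/conversation/v1/conversation/messages?{query}",
        headers={"Authorization": f"Bearer {api_key}"},  # placeholder auth scheme
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())
```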
Transcriptions are also available in batch via the File Exporter's `utterances` data feed.
The File Exporter service is meant to be used as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g. nightly, weekly) or for ad hoc analyses. Data that populates feeds for the File Exporter service updates once daily at 2:00AM UTC.
Visit Retrieving Data from ASAPP Messaging for a guide on how to interact with the File Exporter service.
POST /mg-autotranscribe/v1/start-streaming
Request
`transcript` messages are sent for each participant from ASAPP's webhook publisher to a target endpoint configured to receive the messages.
HTTPS POST for Customer Utterance
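Below is a minimal sketch of a target endpoint that accepts the webhook publisher's HTTPS POSTs, using only the Python standard library; a production receiver would add TLS termination, authentication of the publisher, and durable storage.

```python
# Minimal sketch of a target endpoint for ASAPP's webhook publisher.
# Production deployments need TLS, request authentication, and durable storage.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TranscriptWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = json.loads(self.rfile.read(length))

        # Pull the utterance and speaker role out of the transcript message.
        sender = message.get("sender", {})
        utterance = message.get("autotranscribeResponse", {}).get("utterance", "")
        print(f"[{sender.get('role')}] {utterance}")

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TranscriptWebhookHandler).serve_forever()
```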
You can also send a request to the `/stop-streaming` endpoint to pause transcription and prevent hold music and promotional messages from being transcribed.
POST /mg-autotranscribe/v1/stop-streaming
Request
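As with `/start-streaming`, the host, authorization header, and request body field names in the sketch below are placeholders; it shows a Lambda pausing transcription before the caller is placed on hold, using the stream identifier returned by `/start-streaming`.

```python
# Sketch of a Lambda that pauses transcription (e.g., before hold music) via /stop-streaming.
# Host, auth header, and body field names are placeholders -- use your ASAPP API reference.
import json
import os
import urllib.request

ASAPP_HOST = os.environ["ASAPP_API_HOST"]    # placeholder
ASAPP_API_KEY = os.environ["ASAPP_API_KEY"]  # placeholder credential

def lambda_handler(event, context):
    contact = event["Details"]["ContactData"]

    # Hypothetical body; streamId is the value returned by /start-streaming,
    # carried here as a contact attribute set earlier in the flow.
    body = {
        "externalConversationId": contact["ContactId"],
        "streamId": contact.get("Attributes", {}).get("asappStreamId", ""),
    }

    req = urllib.request.Request(
        f"{ASAPP_HOST}/mg-autotranscribe/v1/stop-streaming",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ASAPP_API_KEY}",  # placeholder auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return {"statusCode": str(resp.status)}
```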