# Agent Desk Source: https://docs.asapp.com/agent-desk Use the Agent Desk to empower agents to deliver fast and exceptional customer service. Agent Desk is an end-to-end AI-native® messaging platform designed for digital customer service. It enhances digital adoption, maintains customer satisfaction (CSAT), and increases contact center capacity efficiently. At its core, Agent Desk uses an AI-Native design approach. AI is not just an added feature, but the foundation for building the entire platform. Agent Desk leverages advanced machine learning algorithms and generative AI to provide comprehensive support for digital customer service. This holistic approach benefits agents, leaders, and customers alike, offering a seamless and intelligent messaging experience across various channels. ### Supported Channels Agent Desk supports [multiple messaging channels](/agent-desk/integrations), including: * [Android SDK](/agent-desk/integrations/android-sdk "Android SDK") * [Apple Messages for Business](/agent-desk/integrations/apple-messages-for-business "Apple Messages for Business") * [iOS SDK](/agent-desk/integrations/ios-sdk "iOS SDK") * [Voice](/agent-desk/integrations/voice "Voice") * [Web SDK](/agent-desk/integrations/web-sdk "Web SDK") * [WhatsApp Business](/agent-desk/integrations/whatsapp-business "WhatsApp Business") ## How it works Agent Desk seamlessly integrates with your existing channels, creating a unified ecosystem for customer interactions and agent support. Here's how it enhances the experience for all stakeholders: **For your customers**: * Seamlessly connect with your [preferred messaging channels](#implement-messaging-platform) for a consistent brand experience. * Benefit from intelligent automation with [**Virtual Agent**](#virtual-agent). **For your agents**: * Leverage the powerful [**Digital Agent Desk**](#digital-agent-desk). * Boost productivity with built-in AI-powered tools like **AI Summary** and **AI Compose**. 
**For your management team**: * Gain valuable insights with [**Insights Manager**](#insights-manager) By seamlessly blending AI capabilities with human expertise, Agent Desk elevates your customer service operations to new heights of efficiency and satisfaction. ### Virtual Agent Virtual Agent is our cutting-edge automation solution that enables organizations to: * Recognize intent intelligently and route seamlessly * Automate common customer inquiries with natural language * Handle dynamic input and secure forms * Customize workflows tailored to your brand's unique requirements Learn more about Virtual Agent ### Digital Agent Desk Digital Agent Desk is our AI-enhanced app that empowers agents to deliver exceptional customer service via messaging: * Send and receive messages across multiple channels * Manage concurrent conversations with intelligent prioritization * Access interaction history for context-aware support * Use AI tools like AI Compose, Autopilot, and AI Summary for faster Average Handle Time (AHT) * Navigate an intuitive interface with integrated knowledge and customer information Learn more about Digital Agent Desk ### Insights Manager Insights Manager is our powerful analytics tool that optimizes contact center operations: * Identify and respond to customer trends in real time * Monitor contact center activity with intuitive dashboards * Manage conversation volume and agent workload efficiently * Gain insights through performance analysis and reporting * Investigate customer interactions for quality and compliance Insights Manager provides data-driven insights to improve your customer service operations. Learn more about Insights Manager ## Implement Agent Desk To start using Agent Desk, you need to choose the channels your users will engage with, and configure Agent Desk, Virtual Agent, and Insights Manager to meet your needs. Connect ASAPP to your messaging channels. 
The main application where agents can communicate with customers through chat (message) View feature release announcements for ASAPP Messaging # Digital Agent Desk Source: https://docs.asapp.com/agent-desk/digital-agent-desk Use the Digital Agent Desk to empower agents to deliver fast and exceptional customer service. The Digital Agent Desk for chat serves as the main application where agents communicate with customers. Agents can: * Send and receive messages across multiple channels. * Manage concurrent conversations with intelligent prioritization. * Access interaction history for context-aware support. * Utilize AI tools like AI Compose, Autopilot, and AI Summary for faster Average Handle Time (AHT). * Access an intuitive interface with integrated [knowledge base](/agent-desk/digital-agent-desk/knowledge-base) and customer info. ## AI tools Digital Agent Desk captures agent conversations and actions to power Machine Learning (ML) models. These models power a number of AI tools that help agents deliver exceptional customer service. **AutoPilot**: Automatically sends messages to customers based on conversation context so that agents can focus on the meaningful parts of a conversation. You can configure the Digital Agent Desk to send messages in English, Spanish, or French. * **AutoPilot Greeting**: Sends a greeting message to customers when conversations start. * **AutoPilot Ending**: Sends a closing message to customers when conversations end. * **AutoPilot Timeout**: Automatically handles closing conversations where customers have become inactive. Contact your ASAPP account team to enable Spanish or French language support for AutoPilot. **Autosuggest**: Shows full responses to your agent based on conversation context, allowing your agent to select a response from the list to quickly reply. As your agent types, the system suggests new complete responses. This empowers agents to take advantage of the full response library.
**Autocomplete**: Proposes inline completions as your agent types, streamlining the typing and response process. **Responses**: Agents can use a library of pre-written responses, either from your own organization or from their own saved responses. **AI Summary**: Streamlines post-call work by automatically summarizing conversations. ## Right-Hand Panel The right-hand panel serves as the hub for all agent activity. It provides a range of tools like key customer information, conversation history, knowledge base, and more to help agents deliver fast, accurate, and exceptional customer service. This data comes directly from ASAPP, or from your own CRM or other systems. ## Next Steps Learn how to navigate the Agent Desk Learn how to set up and use the Knowledge Base Connect your own systems and CRMs to the Agent Desk # Digital Agent Desk Navigation Source: https://docs.asapp.com/agent-desk/digital-agent-desk/agent-desk-navigation Overview of the Digital Agent Desk navigation and features. ## App Overview 1. [Main Navigation](#main-navigation) 2. [Conversation](#conversation) 3. [Agent Solutions](#agent-solutions) ## Main Navigation ### A. Agent Stats | **Feature** | **Feature Overview** | **Configurability** | | :--- | :--- | :--- | | Agent Stats | Basic statistics related to chats handled since the agent last logged into Agent Desk (Current Session) or to all chats handled in Agent Desk (All Time). | Core | ### B. Navigation | **Feature** | **Feature Overview** | **Configurability** | | :--- | :--- | :--- | | Concurrency Slots | The agent can see their concurrent chats and available 'Open Slots' directly in Agent Desk.
| Configurable | | Waiting Timers | A timer displays when either the customer is waiting or the agent is waiting. The customer waiting time appears in larger text with a badge around it. | Core | | Last Message Preview | Preview of the last message a customer sent in chat. | Core | | Color Coded Chat Cards | Unique color assigned to each chat card to help distinguish chats. | Core | | Copy Tool | Hover-over tool to easily copy entities across Agent Desk. | Core | ### C. Help & Resources | Feature | Feature Overview | Configurability | | :--- | :--- | :--- | | Agent Feedback | Text form for agent to send feedback to ASAPP team (available by default; can be disabled if an agent has an active chat, if an agent is in an available status, or in both instances). | Configurable | | Keyboard Shortcuts | List of Keyboard Shortcuts. **Ctrl+S** | Core | | Patent Notice | List of Patents. | Core | ### D. Preferences | Feature | Feature Overview | Configurability | | :--- | :--- | :--- | | Font Size | Select the Font Size: **Small**, **Medium**, **Large** | Core | | Color Temperature | Adjust the display to reduce eye strain. | Core | ### E. Status Switcher & Log Out | **Feature** | **Feature Overview** | **Configurability** | | :--- | :--- | :--- | | Agent Status | Configurable list of Agent statuses: **Active**, **After Chat Wrap-Up**, **Coaching**, **Lunch/Break**, **Team Meeting**, **Training**. | Configurable | | Go to Admin | Opens the Admin Dashboard in another tab. | Core | | Log Out | Logs out of Digital Agent Desk. | Core | ## 2.
Conversation Navigation ### A. Status | **Feature** | **Feature Overview** | **Configurability** | | :---------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | Active/Away Status | Configurable list of 'Away' statuses (instead of binary option 'Active' / 'Away'). | Configurable | | Auto Log Out Inactivity and After X Hours | If an agent does not move their mouse for over X hours, auto-log them out of Agent Desk.
If an agent is logged in for more than X hours, even if they are active, log them out (unless they are in an active chat with a customer). | Configurable | ### B. Navigation | **Feature** | **Feature Overview** | **Configurability** | | :--------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | Waiting Timers | A timer displays when either the customer is waiting or the agent is waiting.
The customer waiting time appears in larger text with a badge around it. | Core | | Last Message Preview | Preview of the last message a customer sent in chat. | Core | | Color Coded Chat Cards | Unique color assigned to each chat card to help distinguish chats. | Core | | Copy Tool | Hover-over tool to easily copy entities across Agent Desk. | Core | ## 3. Conversation ### A. Conversation Header | **Feature** | **Feature Overview** | **Configurability** | | :------------------------------------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | Chat Duration | Indication of how long the customer has been chatting and waiting, at the top of the conversation panel. | Core | | **Contextual Actions (From Left to Right)** | | | | Quick Notes | Enables an agent to type and save notes during a conversation that will save in Conversation History | Configurable | | Secure Messaging | Enables an agent to send an invite to customers to share sensitive information (e.g. credit card number) securely. | Configurable | | Send to Flow | Expose **Send Flow** buttons in the center panel drop-down menu that allow an agent to send the customer back to SRS and into a particular automated flow. | Configurable | | Autopilot Forms / Quick Send | Configurable forms and flows to send to customer and remain connected. You can configure deep links and single step flows. | Configurable | | Co-Browsing | Enables an agent to send an invitation to a customer to share their screen. The agent has limited capabilities (can scroll, draw, and focus, but can't click or type). 
| Configurable | | **End Controls** | | | | Autopilot Timeout (APTO) | Allows an agent to initiate an autopilot flow that checks in and eventually times out an unresponsive customer; timeout suggestions can appear after an initial conversation turn with a live agent | Configurable | | Timeout | Enables the agent to timeout a customer. | Core | | Transfer | Enables the agent to transfer a customer to another queue or individual agent. Queues are only available for transfer if business hours are open, the queue is not paused, and at least one agent in the queue is online. If needed, specific queues can be excluded from the transfer menu. | Configurable | | End Chat | Enables the agent to close an issue. | Core | | Auto Transfer on Agent Disconnect | If agents disconnect from Agent Desk for over 60 seconds, ASAPP will auto transfer any currently assigned issues to another agent. | Core | | Auto Requeue if Agent is unresponsive | When a chat is first connected to an agent, give them X seconds to send their first message. If they exceed this timer, auto-reassign the issue to the next available agent. | Configurable | ### B. Conversation | **Feature** | **Feature Overview** | **Configurability** | | :--------------- | :---------------------------------------------------------------------------------------------- | :------------------ | | Chat Log | Enables scrolling through the customer's previous conversation history. | Core | | Message Previews | Enables viewing a preview of what the customer is typing before the customer sends the message. | Core | ### C. 
Composer | **Feature** | **Feature Overview** | **Configurability** | | :----------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | Autosuggest | Suggested responses before the agent begins typing based on conversational context. | Core | | Autocomplete | Suggested responses after the agent begins typing based on conversational context. | Core | | Fluency boosting | If an agent makes a known spelling error while typing and hits the space bar, ASAPP will auto-correct the spelling mistake. The correction is indicated by a blue underline, and the agent may click on the word to undo the correction. | Core | | Profanity handling | Generic list of phrases ASAPP disables agents from sending to customers. | Core | ## 4. Agent Solutions ### Customer Information | **Feature** | **Feature Overview** | **Configurability** | | :------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------ | | Customer Profile (A) | Displays customer, company, and specific account information for authenticated customers. | Configurable | | Customer History (B) | A separate tab that gives a quick snapshot of each current and historical interaction with the customer, including time, duration, notes, intent, etc. | Core | | Copy Tool (C) | Hover-over tool to easily copy entities across Agent Desk. 
| Core | ### Knowledge Base | **Feature** | **Feature Overview** | **Configurability** | | :------------------------------------------------------------------ | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------- | | [Knowledge Base](/agent-desk/digital-agent-desk/knowledge-base) (A) | Agents can traverse a folder hierarchy of customer company specific content to search, add a favorite, and send content to customers. Select **Favorites** or **All Files**. | Requires you to upload and maintain Knowledge Base content via Admin or an integration. | | List of Favorites or All Files (B) | Displays your Favorites or All Files. | Configurable | | Knowledge Base Suggestions (C) | Suggests Knowledge Base articles to agents. | Core | | Contextual Actions (D) | Agents can attach an article (send to a customer) or make it a favorite. | Configurable | ### Responses

| **Feature** | **Feature Overview** | **Configurability** |
| :--- | :--- | :--- |
| Custom Responses (A) | Agents can create, edit, search, and view custom responses in Agent Desk. Agent Desk uses these custom responses in Auto Suggest. Click + to create new custom responses. To edit, hover over a response and select Edit. Click the Search icon to search custom responses. If an agent sends something that isn't in their custom library or the global whitelist, ASAPP recommends it back to them from a growing list of their favorites. | Core |
| Global Responses (A) | Agents can search, view, and click-to-insert responses from the global whitelist. Click the Search icon to search the global responses. | Core |
| Navigate Folders (B) | In both the custom and global response libraries, agents can navigate into and out of folders. | Core |
| Uncategorized Custom Responses (C) | Single custom responses that you add but do not categorize into a specific folder display here. | Core |
| Click-to-Insert (D) | In both the custom and global response libraries, agents can hover over a response and click Insert to insert the full text of the selected response into the typing field. | Core |
| Chat Takeover | Managers can take over an agent's chat. | Core |
| Receive Attachments | End customers can send PDF attachments to agents to provide more information about their case. | Core |

### Chat Takeover Administrators (managers or supervisors) can take over chats from agents or unassigned chats in the queue. This feature is useful for: * Closing resolved chats that need disposition * Handling complex or convoluted conversations * Managing queue traffic during high-volume periods To take over a chat: 1. Navigate to the conversation in Live Insights 2. Open the transcript area 3. Click the Takeover button in the upper left-hand corner 4. Confirm the takeover action Once transferred, administrators can continue the chat through Agent Desk. Access to this functionality requires appropriate permissions set up by ASAPP. ### Wrap-Up | **Feature** | **Feature Overview** | **Configurability** | | :--- | :--- | :--- | | Chat Notes (A) | Agents can leave notes during a chat and at the end of a chat. | Core | | End Chat Disposition (C) | Ask the customer if the initial intent was correct. | Core | | End Chat Resolution (D) | Agents can indicate if an issue is resolved or not while closing. | Core | ### Receiving Attachments Agents can ask for and receive PDF and image attachments from end customers. This feature is particularly useful for scenarios like fraud cases where agents need proof of transactions. When a customer sends an attachment, the agent receives a notification in the chat. Images can be viewed in a modal, while PDFs can be downloaded for the agent to view within their own desktop environment. Supported file types: * JPEG * JPG * PNG * PDF Size limits: images up to 10 MB; PDFs up to 20 MB. ### Undelivered Messages If the live agent fails to send a message to the customer, the agent will see an undelivered message indicator near the transcript. Message delivery issues impact customer experience, agent efficiency, and operational clarity.
With this feature: * Agents gain clarity into whether a customer actually received the message they sent, reducing unnecessary repetition. * Admins and supervisors gain visibility into reliability issues that previously went undetected, enabling quicker diagnosis and resolution. * Quality Assurance and Compliance teams benefit from transcripts that accurately reflect what the customer saw. * CSMs and Delivery teams gain insights to help customers troubleshoot delivery issues and validate expected outcomes. This feature strengthens trust in the CXP transcript as an accurate record and improves the overall reliability of customer-facing communication flows. When a message fails to deliver, the following occurs: 1. The transcript displays a **"Failed to send"** indicator directly below the undelivered message. 2. The indicator shows that message delivery failed and displays an additional banner stating "Messaging service is unavailable". 3. Agents immediately see this status when viewing the transcript during customer interactions. 4. Admins and supervisors see the same delivery failure status in conversation history and can use this information for quality assurance and review. 5. The feature automatically applies to all customers without requiring any action from end users. Undelivered Messages Interface # Agent SSO Source: https://docs.asapp.com/agent-desk/digital-agent-desk/agent-sso Learn how to use Single Sign-On (SSO) to authenticate agents and admin users to the Digital Agent Desk. ASAPP recommends that customers use SSO to authenticate agents and admin users to our applications. In this scenario: 1. ASAPP acts as the Service Provider (SP) while the customer serves as the Identity Provider (IDP). 2. The customer's authentication system performs user authentication using their existing customer credentials. 3. ASAPP supports Service Provider Initiated SSO. Customers provide the SSO URL to agents and admins. 4.
The URL points to the customer's SSO service, which will authenticate the users via their authentication system. 5. Once the customer's SSO service authenticates the user, it sends a SAML assertion with user information to ASAPP's SSO service. 6. ASAPP uses the information inside the SAML assertion to identify the user and redirect them to the appropriate application. The diagram below illustrates the IDP-initiated SSO flow. ## Configuring Single Sign-On via SAML ### Environments ASAPP supports SSO in non-production and production environments. We strongly recommend that customers configure SSO in both environments. ### Exchange of SAML metadata Both ASAPP and the customer generate their respective SAML metadata and exchange the metadata files. Each environment requires different metadata, so teams must generate metadata once per environment. Sample metadata file content (entity and certificate details redacted): ```xml theme={null} REDACTED REDACTED urn:oasis:names:tc:SAML:2.0:nameid-format:unspecified ``` ### SAML Profile Configuration Next, ASAPP and the customer configure their respective SSO services with each other's SAML profile. Teams can achieve this by importing the SAML metadata into the SSO service (if it supports a metadata import feature). ### SAML Attributes Configuration SAML Attributes are key-value fields within the SAML message (also called the SAML assertion) that the Identity Provider (IDP) sends to the Service Provider (SP). ASAPP requires the following fields to be included with the SAML assertion: | **Attribute Name** | **Required** | **Description** | **Example** | | :--- | :--- | :--- | :--- | | userId | yes | user's unique identifier used for authentication.
Can be a unique readable value such as the user's email or an opaque identifier such as a customer's internal user ID. | [jdoe@company.com](mailto:jdoe@company.com) | | firstName | yes | user's first name | John | | lastName | yes | user's last name | Doe | | nameAlias | yes | user's display name. Allows an agent, based on their personal preference or company's privacy policy, to set an alias to show to the customers they are chatting with. If this is not sent, the agent's firstName is displayed. | John Doe | | roles | yes | the roles the user has within the ASAPP platform. Typically mapped to one or more AD Security Groups on the IDP. | representative \| manager | The following fields are not **required** but **desired** to further automate the Agent Desk configuration: | **Attribute Name** | **Required** | **Description** | **Example** | | :--- | :--- | :--- | :--- | | groups | no | group(s) the user belongs to. This attribute controls the queue(s) that a user is assigned to. Not to be confused with the AD Security Groups (see the **roles** attribute above) | residential \| business | | concurrency | no | number of concurrent chats the user can handle | 5 | In addition, any custom fields can be configured in the SAML assertion. See the section below for more details. ### Sending User Data via SAML ASAPP uses SAML attribute fields to keep user data current in our system. This also allows us to register new users automatically when they log into the ASAPP application for the first time. In addition to the required fields that ASAPP needs to identify the user, customers can send additional fields in the SAML assertion that can be used for other purposes such as Reporting. An example can be the Agent Location.
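To make the attribute contract concrete, here is a minimal, hypothetical Python sketch (not part of any ASAPP SDK) that checks a decoded SAML attribute statement against the required and desired attribute names listed above. Custom attribute names are flagged because their names and values must be agreed upon before implementation:

```python
# Attribute names come from the tables above; the helper itself is illustrative.
REQUIRED = {"userId", "firstName", "lastName", "nameAlias", "roles"}
DESIRED = {"groups", "concurrency"}

def check_assertion_attributes(attrs: dict) -> list:
    """Return a list of problems; an empty list means the assertion is usable."""
    problems = ["missing required attribute: " + name
                for name in sorted(REQUIRED - attrs.keys())]
    # Custom fields (e.g. an agent-location field) are allowed, but only
    # if they were agreed upon before the SAML implementation.
    for name in sorted(attrs.keys() - REQUIRED - DESIRED):
        problems.append("custom attribute, must be pre-agreed: " + name)
    return problems
```

For example, an assertion carrying only `userId` and `roles` would come back with three "missing required attribute" entries, one each for `firstName`, `lastName`, and `nameAlias`.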
These fields are specific to each customer. The name and possible values of these fields need to be agreed upon and configured prior to the SAML implementation. ### SSO Testing SSO testing between the customer and ASAPP must be a coordinated effort due to the nature of the IDP-initiated SSO flow. The customer must provide several user accounts to be used for testing. Generally, the test scenarios are as follows: 1. An agent logs in for the first time. ASAPP observes that a new user record is created and the agent lands on the correct ASAPP application for their role (Desk for a rep, Admin for supervisor/manager). 2. The same agent logs out and logs back in. The agent observes that the correct application still opens. 3. Repeat the same test for another user account, ideally with different roles. Once testing is completed successfully, the SSO flow is certified for that environment. Setting up SSO in the Production environment should follow the same steps. # API Integration Source: https://docs.asapp.com/agent-desk/digital-agent-desk/api-integration Learn how to connect the Digital Agent Desk to your backend systems. ASAPP integrates with your APIs to provide customers and agents with richer and more personalized interactions. ASAPP accomplishes this by making real-time backend calls to customer APIs, providing customers with current, up-to-date information. Customers must expose relevant APIs securely for ASAPP to make server-to-server calls. ## Authentication Customers should secure their APIs with authentication mechanisms, addressing both Customer Authentication and API Authentication. ### API Authentication on behalf of the User ASAPP leverages customers' existing mechanisms for authenticating their customers, which should ideally remain consistent across different channels. Systems should issue identifiers with short expiration times while allowing for good user experience without requiring customers to authenticate multiple times during a session.
* **Cookie-based Authentication**: Users post login credentials to the customer's server and receive a signed cookie. The server stores the cookie and places a copy in the browser for use in subsequent interactions during the session. However, teams typically prefer token-based approaches where possible. * **Token-based Authentication**: Users post login credentials to the customer's server and receive a signed JSON Web Token (JWT). The server does not store this token, making all interactions fully stateless. All client requests include the JWT, which only the server can decode to authenticate every request. For more information on generating and signing JSON Web Tokens, refer to [https://jwt.io/](https://jwt.io/). **API Endpoint**: `POST /customer_authenticate` **Request** ```bash theme={null} curl -X POST https://api.example.com/auth/customer_authenticate \ -H 'cache-control: no-cache' \ -d 'username=<username>&password=<password>' ``` **Response** ```json theme={null} { "issued_at" : "1570733606449", "JWT" : "<jwt>", "expires_in" : "28799" } ``` ASAPP requires direct access to the `customer_authenticate` API to retrieve JWTs/cookies programmatically for testing. #### Communicating Customer Identifier with ASAPP Customers can implement any mechanism to authenticate their users, as long as they can pass the identifier (cookie, JWT, etc.) to ASAPP. The methods for passing this value to ASAPP depend on the chat channel used: [Web](/agent-desk/integrations/web-sdk/web-authentication), [iOS](/agent-desk/integrations/ios-sdk/user-authentication), or [Android](/agent-desk/integrations/android-sdk/user-authentication). #### Customer Identifier Requirements ASAPP uses this customer identifier as a pass-through value by including it as an HTTP Header or in the request body when requesting customer data from backend APIs.
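The stateless token flow described above can be illustrated with a minimal HS256 JWT built from the Python standard library. This is a sketch for intuition only; production systems should use a maintained JWT library, and the claim names and default TTL here (mirroring the sample `expires_in` of roughly eight hours) are illustrative assumptions:

```python
import base64, hashlib, hmac, json, time

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_jwt(secret: bytes, customer_id: str, ttl: int = 28800) -> str:
    """Sign header.claims with HMAC-SHA256; nothing is stored server-side."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = _b64url(json.dumps({"sub": customer_id,
                                 "exp": int(time.time()) + ttl}).encode())
    signing_input = header + "." + claims
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_jwt(secret: bytes, token: str) -> dict:
    """Recompute the signature and check expiry; raises ValueError on failure."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    payload = signing_input.split(".")[1]
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

Because the token carries its own signed expiry, every request can be authenticated without a server-side session store, which is what makes the interaction fully stateless.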
Since the Customer Identifier is the only data ASAPP uses to identify users, it must adhere to the following requirements: * **Unique**: ASAPP will associate every customer chat with this id allowing ASAPP to tie chats from different channels into one single conversation. It is imperative that the Customer Identifier be unique per customer. * **Consistent**: The Customer Identifier should remain consistent so that even if the customer returns after a significant amount of time, we are able to identify the customer. * **Opaque**: The Customer Identifier by itself should not contain any customer Personally Identifiable Information (PII). It should be hashed, encoded and/or encrypted so that when used by itself, it is of no value. ### API Authentication using System-level Credential Customers can secure backend APIs by restricting client access to specific resources for limited time periods. Teams can implement this using various mechanisms like OAuth 2.0, API Keys, or System Credentials. This section details OAuth using a Client Credentials Grant, which works well for server-to-server communication. #### Client Credentials Grant In this mechanism, the client sends an HTTP POST request with the following parameters in return for an access\_token. * **grant\_type** * **client\_id** * **client\_secret** **API Endpoint**: `POST /access_token` **Request** ```bash theme={null} curl -X POST https://api.example.com/oauth/access_token?grant_type=client_credentials \ -H 'cache-control: no-cache' \ -H 'content-type: application/x-www-form-urlencoded' \ -d 'client_id=<client_id>&client_secret=<client_secret>' ``` **Response** ```json theme={null} { "token_type" : "Bearer", "issued_at" : "1570733606449", "client_id" : "<client_id>", "access_token" : "<access_token>", "scope" : "client_credentials", "expires_in" : "28799" } ``` ### API Authorization Customers can also use API keys to provide authorization to specific APIs. API keys are passed in the HTTP header along with the authentication token.
**API Endpoint**: `POST /getprofile` **Request** ```bash theme={null} curl -X POST https://api.example.com/account/getprofile \ -H 'Authorization: Bearer <access_token>' \ -H 'customer-auth: JWT <jwt>' \ -H 'content-type: application/json' \ -H 'api-key: <api_key>' ``` # Knowledge Base Source: https://docs.asapp.com/agent-desk/digital-agent-desk/knowledge-base Learn how to integrate your Knowledge Base with the Digital Agent Desk. Knowledge Base (KB) is a technology used to store structured and unstructured information that agents can reference while servicing customer inquiries. You can integrate KB data into ASAPP Desk by manually uploading articles through an offline process or by integrating with digital systems that expose content via REST APIs. Knowledge Base helps agents access information without requiring them to navigate external systems by surfacing KB content directly within Agent Desk's Right-Hand Panel view. This approach reduces Average Handle Time and increases concurrency. KB also learns from agent interactions, suggests relevant articles, and supports Agent Augmentation. ## Integration ASAPP integrates with customer Knowledge Base systems or CRMs to pull data and make it available to Agent Desk. A dedicated service accomplishes this by consuming data from external systems that support standard REST APIs. The service layer offers enough flexibility to integrate with various industry-standard Knowledge Base systems as well as proprietary in-house systems. The service programmatically retrieves new and updated articles regularly to surface fresh and accurate content to agents in real-time. The system transforms data pulled from external systems into ASAPP's standard format and securely stores it in S3 and in a database. Refer to the [Data Storage](#data-storage) section below for more details. ### Configuration The service that integrates with customers uses configuration-driven approaches to interface with different systems supporting various data formats and structures.
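As an illustration of that configuration-driven approach, the sketch below maps a hypothetical external article payload onto a normalized record. Every source field name in `FIELD_MAP` (`sys_id`, `short_description`, and so on) is an assumed example, not a real customer schema:

```python
# Hypothetical declarative mapping: normalized field -> external API field.
FIELD_MAP = {
    "id": "sys_id",
    "title": "short_description",
    "body": "text",
    "updated_at": "sys_updated_on",
    "parent_id": "parent",  # parent-child relationship for hierarchical data
}

def normalize_article(raw: dict, field_map: dict = FIELD_MAP) -> dict:
    """Project an external article payload onto the normalized shape."""
    article = {ours: raw.get(theirs) for ours, theirs in field_map.items()}
    # Every article needs at least a unique id and a last-updated timestamp.
    if not article["id"] or not article["updated_at"]:
        raise ValueError("article is missing its unique id or last-updated timestamp")
    return article
```

Swapping integrations then becomes a matter of supplying a different `field_map` rather than writing new transformation code, which is the flexibility the service layer relies on.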
ASAPP requires the following information to integrate with APIs:

* REST endpoints and API definitions, data schemas, and SLAs
* URLs, connection info, and test accounts for each environment
* Authentication and authorization requirements
* JSON schema defining requests and responses, preferably Swagger
* API host that can handle HTTPS/TLS traffic
  * Resource
  * HTTP method(s) supported
  * Content type(s) supported and other request headers
  * Error handling documentation
  * Response sizes to expect
  * API call rate limits, if any
  * Response time SLAs
* API response requirements
  * Every 'article' should contain at least a unique identifier and a last-updated timestamp.
  * Hierarchical data needs to clearly define the parent-child relationships.
  * Content should not contain any PII/PCI-related information.
* Refreshing data
  * On a set cadence as determined and agreed upon by both parties
  * Size of data to help in capacity planning and scaling

## Data Storage

Once the service receives KB content, it stores the data in a secure S3 bucket that serves as the source of truth for all Knowledge Base articles. The system then structures and packages the data into standard Knowledge Base types: Category, Folder, and Article. The service then cleans, processes, and stores the packaged data in a database for further usage.

## Data Processing

ASAPP runs all Knowledge Base articles stored in the database through a Knowledge Base Ranker service, which ranks articles and feeds Agent Augmentation. Given a set of user utterances, the KB Ranker service assigns a score to every article in the Knowledge Base based on how relevant those articles are for that agent at that moment in the conversation. ASAPP determines relevance by considering the frequency of words in an article within the corpus of articles and words from a given subset of utterances.

## Data Refresh

ASAPP can refresh data periodically and schedule it to meet customer needs.
ASAPP uses a Unix cron-style scheduler to run the refresh job, which allows flexible configuration. Data Refresh replaces all current folders and articles with the new ones received. The refresh does not affect article ranking, as the system maintains their state separately.

# Live Agent Summary - GenAgent

Source: https://docs.asapp.com/agent-desk/digital-agent-desk/live-agent-summary

Learn how to receive summaries from GenerativeAgent conversations.

When using GenerativeAgent (GA), conversations may escalate to human agents. **Live Agent Summary** automatically generates concise summaries of these conversations, giving your agents the key context they need without requiring them to read through lengthy chat transcripts.

Live Agent Summary helps improve Average Handling Time (AHT) by generating concise, structured summaries that present key information to agents at handoff. The summary highlights:

* Customer intent
* Key actions taken
* Unresolved issues

Live Agent Summary Interface

## How Live Agent Summary Works

Live Agent Summary enhances your existing Digital Agent Desk by automatically generating context during escalations:

* **Smart Context Generation**: The system analyzes GA conversations and creates concise summaries when escalations occur
* **Inline Integration**: The system displays summaries directly in the transcript panel with highlight styling, so agents don't need to switch between views
* **Complete Context**: Agents maintain access to all standard transcript information (customer profile, conversation history) plus the generated insights inline
* **Immediate Understanding**: No need to read through entire conversations; the system highlights key information and makes it easily accessible

## Set Up Live Agent Summary

To set up Live Agent Summary, you need to work with ASAPP to enable the feature in your environment and configure the summary generation parameters.

Contact your ASAPP Implementation Manager to enable GenAgent summaries in your Digital
Agent Desk environment. This involves configuring the summary generation parameters, setting up handoff trigger conditions, and testing the summary output quality.

Live Agent Summary requires GenerativeAgent to be configured and operational in your environment.

Conduct pilot testing with a small group of agents to validate the effectiveness of summaries:

* Review summaries with agents to confirm accuracy and usefulness
* Gather feedback on summary clarity and completeness
* Adjust summary parameters based on agent input and preferences

Once validation is complete, roll out Live Agent Summary to your full agent team:

* Train agents on how to interpret and use summary information effectively
* Establish best practices for leveraging summary context in customer interactions
* Monitor AHT and CSAT metrics to measure improvements and identify optimization opportunities

## Next Steps

After setting up Live Agent Summary, you're ready to improve agent efficiency and enhance customer experience during handoffs. You may find the following sections helpful in advancing your Digital Agent Desk capabilities:

* Learn how to configure and optimize your knowledge base for better agent support.
* Configure user roles and permissions to optimize agent access and capabilities.
* Set up efficient queue management and routing to complement summary capabilities.
* Learn more about GenerativeAgent capabilities that power Live Agent Summary.

# Queues and Routing

Source: https://docs.asapp.com/agent-desk/digital-agent-desk/queues-and-routing

Learn how to manage conversation queues and agent routing in the Digital Agent Desk.

Digital Agent Desk routes customer conversations to the most appropriate agents through a structured workflow:

1. A customer initiates a conversation
2. The system labels the conversation with an intent
3. Queue Routing evaluates the intent and additional criteria to select the appropriate queue
4.
The queue assigns the conversation to an available agent from its associated agent group.

Issue Routing

Work with your ASAPP account team to configure intents, queues, and routing logic that align with your business needs.

## Managing Intents and Queues

An **Intent** classifies each customer conversation (issue) and serves as the primary method of categorization. ASAPP analyzes conversation data and business requirements to determine the available intents. During runtime, Machine Learning (ML) models automatically assign the most appropriate Intent to each new issue.

**Queue Routing** uses these Intents along with other defined criteria to direct conversations to specific queues (referred to as [Attributes Based Routing](/agent-desk/digital-agent-desk/queues-and-routing/attributes-based-routing)). Each **queue** represents a group of agents qualified to handle particular types of issues.

ASAPP manages the configuration and maintenance of Intents and Queue Routing. Work with your ASAPP account team to optimize these settings for your business needs.

## Optimizing Agent Concurrency

Concurrency controls how many simultaneous conversations each agent manages. Setting appropriate concurrency levels helps balance customer experience with agent workload. Each agent has an individual concurrency level setting that determines their maximum number of concurrent conversations.

Digital Agent Desk provides several tools to help manage agent workloads:

* [High Effort Issues](#high-effort-issues) - Automatically identifies complex conversations that require more agent attention
* [Flexible Concurrency](#flexible-concurrency) - Dynamically adjusts capacity during natural conversation lulls

### High Effort Issues

By default, each conversation occupies one concurrency slot. However, certain conversations may require more time and attention from agents due to their complexity or scope.
Digital Agent Desk can automatically identify high-effort issues and assign them multiple concurrency slots based on the intent and other attributes. For example, a technical troubleshooting conversation might count as two slots, while a simple account update remains one slot.

This intelligent slot allocation helps:

* Give agents adequate time for complex customer needs
* Maintain balanced workloads across your team
* Improve customer satisfaction on challenging issues

Work with your ASAPP account team to configure complexity rules that align with your specific business scenarios and agent capabilities.

#### Monitoring High Effort Issues

The Real Time Dashboard displays agents handling high-effort issues with a "high effort" icon. Select any agent's name to view their current conversation assignments.

High effort dashboard

### Flexible Concurrency

Flexible Concurrency maximizes agent productivity by temporarily increasing their conversation capacity during natural downtimes, such as:

* When conversations enter auto-pilot timeout (a period of customer inactivity)
* While agents complete disposition tasks

Flex concurrency

Configure Flexible Concurrency settings per queue to match different conversation types and agent capabilities.

#### Protecting Agents with Flex Protect

During auto-pilot timeout, the system assumes a conversation is temporarily inactive due to customer inactivity. However, customers may return and resume their conversation at any point during this timeout period. Without protection, this creates a challenging situation where an agent who received a new flexible assignment suddenly needs to handle both the returning customer and their new conversation simultaneously.
Flex Protect prevents this type of overload by:

* Assigning protected status to the agent
* Providing a configurable rest period during which the system blocks new flexible assignments for that agent

We recommend enabling Flex Protect, as agents may avoid using auto-pilot timeout if they fear being overloaded, leading to longer handle times.

Flex protect

#### Monitoring Flexible Assignments

The Real Time Dashboard displays agents handling flexible assignments with a "flex" icon. Select any agent's name to view their current conversation assignments.

Flex dashboard

# Attributes Based Routing

Source: https://docs.asapp.com/agent-desk/digital-agent-desk/queues-and-routing/attributes-based-routing

Learn how to use Attributes Based Routing (ABR) to route chats to the appropriate agent queue.

Attributes Based Routing (ABR) uses a rules-based system to determine which agent queue should receive an incoming chat. ASAPP invokes ABR by default after our Machine Learning model classifies customer utterances to an intent and determines that the intent cannot be handled by an automated flow.

## Attributes of ABR

Attributes can be any piece of information that customers can pass to ASAPP using the integrated SDKs. ASAPP natively defines the standard attributes below:

* Intent - A code determined by running customer utterances through various ML models. Ex: ACCTINFO, BILLING
* Web URL - The webpage that invoked the SDK. You can use any part of the URL as a value to route on. Ex: [www.customer.com/consumer/support](http://www.customer.com/consumer/support), [www.customer.com/business/sales](http://www.customer.com/business/sales)
* Channel - The channel the chat originated from. Ex: Web, iOS

The ASAPP SDK defines additional parameters, which can also be used in ABR. You can define these parameters as part of the ContextProvider.
* Company Subdivision - Ex: divisionId1, subDivisionId2
* Segments - Ex: NorthEast, USA, EMEA

You can also define custom, customer-specific attributes for routing. Customer Information allows you to define any number of attributes as key-value pairs, which you can set per chat and use for routing to specific agent queues. Refer to the Customer Information section for more details on defining custom attributes.

## Configuration

ABR can use any or all of the above attributes to determine which queue receives a chat. The configuration offers extreme flexibility and can accommodate various complex rules, including regular expression matches and multi-value matches. Contact your implementation manager to model the routing rules.

## Template for Submitting Rules

Customers can create an Excel document with a sheet for each attribute they would like to define. The sheet name should be the name of the attribute, and each sheet should have two columns: one defining all the possible attribute values, and the other containing the name of the queue to route to. If you are going to use multiple attributes in different combinations, define these conditions in a separate sheet, dedicating a row to every unique combination. ASAPP assumes that Excel attribute names that do not follow the ASAPP standard are custom defined and passed in 'Customer Information'. See the [User Management](/agent-desk/digital-agent-desk/user-management) section for more information.

## Queue Management

You can define queues based on business or technical needs. You can define any number of queues and follow any desired naming convention. You can apply business hours to queues individually. For more information on other features and functionality, contact your implementation manager.

You can assign agents to one or more queues based on skills and requirements. Refer to [User Management](/agent-desk/digital-agent-desk/user-management) for more details.
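As a rough illustration of how such rules combine (the rule format and queue names below are invented for this sketch, not ASAPP configuration), attributes-based routing can be thought of as a first-match lookup over attribute conditions, including regular expression matches:

```python
# Hypothetical illustration of attributes-based routing: each rule lists
# attribute conditions (exact values or regex patterns) and a target queue.
# Rule format and queue names are invented for this sketch.
import re

RULES = [
    # (conditions, queue) - the first matching rule wins
    ({"Intent": "BILLING", "Channel": "Web"}, "web_billing_queue"),
    ({"Intent": "BILLING"}, "billing_queue"),
    ({"WebURL": re.compile(r"/business/")}, "business_queue"),
]
DEFAULT_QUEUE = "general_queue"

def route(attributes: dict) -> str:
    """Return the first queue whose conditions all match the chat's attributes."""
    for conditions, queue in RULES:
        ok = True
        for key, expected in conditions.items():
            value = attributes.get(key, "")
            if isinstance(expected, re.Pattern):
                ok = expected.search(value) is not None  # regex match
            else:
                ok = value == expected                   # exact match
            if not ok:
                break
        if ok:
            return queue
    return DEFAULT_QUEUE

# A chat matches the most specific rule that fits its attributes:
assert route({"Intent": "BILLING", "Channel": "iOS"}) == "billing_queue"
assert route({"Intent": "ACCTINFO", "WebURL": "www.customer.com/business/sales"}) == "business_queue"
assert route({"Intent": "ACCTINFO", "Channel": "Web"}) == "general_queue"
```

Ordering rules from most to least specific, with a default queue as the final fallback, mirrors how a multi-attribute "combinations" sheet would be evaluated row by row.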
# User Management

Source: https://docs.asapp.com/agent-desk/digital-agent-desk/user-management

Learn how to manage users and roles in the Digital Agent Desk.

You control User Management (roles and permissions) within the Digital Agent Desk. These roles determine whether a user can authenticate to *Agent Desk*, *Admin Dashboard*, or both. Additionally, roles determine what views and data users see in the Admin Dashboard. You can pass user data to ASAPP via *SSO*, AD/LDAP, or other approved integrations.

This section describes the following:

* [Process Overview](#process-overview)
* [Resource Overview](#resource-overview)
* [Definitions](#definitions "Definitions")

## Process Overview

This is a high-level overview of the User Management setup process.

1. ASAPP demonstrates the Desk/Admin interface.
2. ASAPP holds a call with you to confirm access and permission requirements. Together, you and ASAPP complete a configuration spreadsheet defining all roles and permissions.
3. ASAPP sends you a copy of the configuration spreadsheet for review and approval. ASAPP makes additional changes if needed and sends them to you for approval.
4. ASAPP implements and tests the configuration.
5. ASAPP trains you to set up and modify user management.
6. ASAPP launches your new customer interaction system.

## Resource Overview

The following table lists and defines all resources:

| Feature | Overview | Resource | Definition |
| :--- | :--- | :--- | :--- |
| Agent Desk | The App where Agents communicate with customers. | Authorization | Enables you to successfully authenticate via Single Sign-On (SSO) into the ASAPP Agent Desk. |
| | | Go to Desk | Enables you to click Go to Desk from the Nav to open Agent Desk in a new tab. Requires Agent Desk access. |
| Default Concurrency | The default value for the maximum number of chats a newly added agent can handle at the same time. | Default Concurrency | Sets the default concurrency of all new users with access to Agent Desk if no concurrency was set via the ingest method. |
| Admin Dashboard | The App where you can monitor agent activity in real time, view agent metrics, and take operational actions (e.g., business hours adjustments). | Authorization | Enables you to successfully authenticate via SSO into the ASAPP Admin Dashboard. |
| Live Insights | Dashboard in Admin that displays how each of your queues is performing in real time. You can drill down into each queue to gain insight into what areas need attention. | Access | Enables you to see Live Insights in the Admin navigation and access it. |
| | | Data Security | Limits the agent-level data that certain users can see in Live Insights. If a user is not allowed to see data for any agents who belong to a given queue, that queue will not be visible to that user in Live Insights. |
| Historical Reporting | Dashboard in Admin where you can find data and insights from customer experience and automation all the way to agent performance and workforce management. | Power Analyst Access | Enables you to see the Historical Reporting page in the Admin navigation with the Power Analyst access type, which includes the following:<br />• Access to ASAPP Reports<br />• Ability to change widget chart type<br />• Ability to toggle dimensions and filters on/off for any report<br />• Export data per widget and dashboard<br />• Cannot share reports with other users<br />• Cannot create or copy widgets and dashboards |
| | | Creator Access | Enables you to see the Historical Reporting page in the Admin navigation with the Creator access type, which includes the following:<br />• Power Analyst privileges<br />• Can share reports<br />• Can create net new widgets and dashboards<br />• Can copy widgets and dashboards<br />• Can create custom dimensions/calculated metrics |
| | | Reporting Groups | Out-of-the-box groups are:<br />• Everybody: all users<br />• Power Analyst: users with the Power Analyst role<br />• Creator: users with the Creator role<br /><br />If a client has data security enabled for Historical Reporting, policies need to be written to add users to the following three groups:<br />• Core: users who can see the ASAPP Core Reports<br />• Contact Center: users who can see the ASAPP Contact Center Reports<br />• All Reports: users who can see both the ASAPP Contact Center and ASAPP Core Reports<br /><br />If you have any Creator users, you may want custom groups created. This can be achieved by writing a policy to create reporting groups based on a specific user attribute (e.g., reporting groups per queue, where queue is the attribute). |
| | | Data Security | Limits the agent-level data that certain users can see in Historical Reporting. If anyone has these policies, then the Core, Contact Center, and All Reports groups should be enabled. |
| Business Hours | Allows Admin users to set their business hours of operation and holidays on a per-queue basis. | Access | Enables you to see Business Hours in the Admin navigation, access it, and make changes. |
| Triggers | An ASAPP feature that allows you to specify which pages display the ASAPP Chat UI. You can show the ASAPP Chat UI on all pages with the ASAPP Chat SDK embedded and loaded, or on just a subset of those pages. | Access | Allows you to see Triggers in the Admin navigation, access it, and make changes. |
| Knowledge Base | An ASAPP feature that helps agents access information without needing to navigate external systems by surfacing KB content directly within Agent Desk. | Access | Enables you to see Knowledge Base content in the Admin navigation, access it, and make changes. |
| Conversation Manager | Admin feature where you can monitor current conversations individually. The Conversation Manager shows all current, queued, and historical conversations handled by SRS, a bot, or a live agent. | Access | Enables you to see Conversation Manager in the Admin navigation and access it. |
| | | Conversation Download | Enables you to select one or more conversations in Conversation Manager to export to either an HTML or CSV file. |
| | | Whisper | Enables you to send an inline, private message to an agent within a currently live chat, selected from the Conversation Manager. |
| | | SRS Issues | Enables you to see conversations handled only by SRS in the Conversation Manager. |
| | | Data Security | Limits the agent-assisted conversations that certain users can see at the agent level in the Conversation Manager. |
| User Management | Admin feature to edit user roles and permissions. | Access | Enables you to see User Management in the Admin navigation, access it, and make changes to queue membership, status, and concurrency per user. |
| | | Editable Roles | Enables you to change the role(s) of a user in User Management. |
| | | Editable Custom Attributes | Enables you to change the value of a custom user attribute per user in User Management. If disabled, these custom attributes will be read-only in the list of users. |
| | | Data Security | Limits the users that certain users can see or edit in User Management. |

## Definitions

The following table defines the key terms related to ASAPP Roles & Permissions.

| Term | Definition |
| :--- | :--- |
| Resource | The ASAPP functionality that you can permission in a certain way. ASAPP determines Resources when features are built. |
| Action | Describes the possible privileges a user can have on a given resource (e.g., View Only vs. Edit). |
| Permission | Action + Resource, e.g., "can view Live Insights". |
| Target | The user or set of users who are given a permission. |
| User Attribute | A describing attribute for a client user. User Attributes are either sent to ASAPP by the client via an accepted method, or are ASAPP Native. |
| ASAPP Native User Attribute | A user attribute that exists within the ASAPP platform without the client needing to send it. Currently:<br />• Role<br />• Group<br />• Status<br />• Concurrency |
| Custom User Attribute | An attribute specific to the client's organization that is sent to ASAPP. |
| Clarifier | An additional, optional layer of restriction in a policy. Must be defined by a user attribute that already exists in the system. |
| Policy | An individual rule that assigns a permission to a user or set of users. The structure is generally: Target + Permission (opt. + Clarifier) = Target + Action + Resource (opt. + Clarifier). |
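To make the Policy structure concrete, here is a minimal sketch of how Target + Action + Resource with an optional Clarifier might evaluate. All role names, attribute names, and the rule format are hypothetical illustrations, not ASAPP's actual policy engine:

```python
# Hypothetical sketch of the policy structure described above:
# Target + Action + Resource (+ optional Clarifier by user attribute).
# Role and attribute names are invented for illustration.

POLICIES = [
    # (target role, action, resource, clarifier dict or None)
    ("Supervisor", "view", "Live Insights", None),
    ("Supervisor", "edit", "Business Hours", {"Location": "US"}),
]

def is_allowed(user: dict, action: str, resource: str) -> bool:
    """Check whether any policy grants the user this action on the resource."""
    for role, pol_action, pol_resource, clarifier in POLICIES:
        if role not in user["roles"]:
            continue  # policy targets a role this user lacks
        if (pol_action, pol_resource) != (action, resource):
            continue  # policy is about a different permission
        if clarifier and any(user.get(k) != v for k, v in clarifier.items()):
            continue  # clarifier restricts the grant to matching attributes
        return True
    return False

supervisor_us = {"roles": ["Supervisor"], "Location": "US"}
supervisor_uk = {"roles": ["Supervisor"], "Location": "UK"}
assert is_allowed(supervisor_us, "view", "Live Insights")
assert is_allowed(supervisor_us, "edit", "Business Hours")
assert not is_allowed(supervisor_uk, "edit", "Business Hours")  # blocked by clarifier
```

Note how the clarifier only narrows an existing grant; it never grants anything on its own, matching its definition as an additional layer of restriction.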

## Grouping and Data Filtering via SSO

You can use attributes from your SSO/SAML configuration to control what chats and metrics users see within Live Insights, Conversation Manager, and User Management. This ensures users only see information relevant to their role and responsibilities.

These attributes create a hierarchical structure where:

* BPOs only see their service chats
* Workforce Management users see all chats and metrics for their BPO
* Agents see only their own chats and data
* Managers see chats for their assigned teams

To use this grouping, you need to:

1. Define groups using the following attributes, and make sure to define a name for each group:
   * BPO
   * Product
   * Role
   * Location
2. Reach out to your ASAPP account team with the groups you define. ASAPP will implement the groups for you.
3. Ensure that your SSO/SAML system sends the necessary attributes to ASAPP. You can reach out to your ASAPP account team with any questions.

Within Live Insights, Conversation Manager, and User Management, you can map the groups you defined to filters and queues. The groups will be applied to filter data and control access based on your defined mappings.

# Insights Manager Overview

Source: https://docs.asapp.com/agent-desk/insights-manager

Analyze metrics, investigate interactions, and uncover insights for data-driven decisions with Insights Manager.

ASAPP's Insights Manager aims to provide relevant and actionable learnings from your data. Insights highlights trends that impact your customers, provides live activity monitoring, volume management tools, in-depth performance analysis and reporting, and tools to conduct investigations on customer interactions.
Insights Manager includes three primary functions:

* Live Insights, to track and monitor agent activity in real time
* Historical Insights, to perform data analysis and output in-depth reports
* Conversation Manager, to conduct investigations on customer interactions

## Live Insights

Live Insights serves as your go-to platform to track and monitor agents, conversations, and performance activity in real time.

* Track agent performance in real time
* Monitor conversations as they happen through the live transcription service
* Whisper to agents as customer interactions happen to guide and course-correct behaviors
* Keep an eye on all your live performance metrics, such as handle time, queue volume, and resolution rate
* Mitigate high queue volume to better manage instances of high traffic

Visit Live Insights for a functional breakdown of reporting interfaces and metrics.

## Historical Insights

Historical Insights is a powerful tool to analyze performance metrics, conduct investigations, and uncover insights to make data-driven decisions.

* Access core performance dashboards that we pre-populate with your data and prepare for conducting analyses
* Program dashboards provide a deep overview of primary conversation and agent metrics
* Automation & Flow dashboards provide insights into the performance of flow containment, successful automations, and intent performance
* Operation & Workforce Management dashboards provide in-depth data to help you understand how agents are utilized and pinpoint areas ripe for improvement
* Outcomes dashboards provide a view into the voice of the customer
* Content creators can create and share dashboards with members of your organization
* Automate report sharing based on your preferred schedule. Attach data to automated emails to continue investigations in your preferred tools

## Conversation Manager

Conversation Manager provides robust features to help you conduct investigations on customer interactions.
Use the tools provided to find relevant conversations to support your quality control needs, to deepen research initiated in Historical Insights, or to review performance data associated with your conversations.

* Find all captured conversations, regardless of channel
* Filter and drill down into conversation content based on performance data, metadata, keywords, and personal customer identifiers
* Review feedback survey data that customers submit

## Users & Capabilities

Insights Manager supports two main types of users: **Workforce Management Leaders** and **Business Stakeholders**.

| Workforce Management Leaders | Business Stakeholders |
| :--- | :--- |
| **Who**: Supervisors, Managers, and Front Line Leaders directly involved in the day-to-day management of individual or multiple contact centers. | **Who**: Business & CX Analysts, Program Managers, and Directors working directly with ASAPP teams to implement and optimize for business goals. |
| **What**: Managing agent staffing and contact center volume; monitoring agent performance and customer satisfaction levels; involved in coaching and quality management efforts. | **What**: Focused on optimizing for specific business goals; creating and synthesizing data for end-to-end reporting; detecting trends and improving customer experience insights. |
### Monitoring Capabilities

| | Workforce Management Leaders | Business Stakeholders |
| :--- | :--- | :--- |
| Queue Groups & Personalization | ✓ | ✓ |
| Queue Performance | ✓ | ✓ |
| Agent Monitoring | ✓ | ✓ |
| CSAT Monitoring | ✓ | ✓ |
| Viewing Live Conversations | ✓ | - |
| Whisper | ✓ | - |
| High Queue Mitigation | - | ✓ |
| Chat Takeover | ✓ | - |
| Queue Overflow Routing | ✓ | - |

### Reporting Capabilities

| | Workforce Management Leaders | Business Stakeholders |
| :--- | :--- | :--- |
| Core Historical Reports | ✓ | ✓ |
| Creating & Sharing Reports | ✓ | ✓ |
| Data Definitions / Dictionary | ✓ | ✓ |
| Viewing Conversations | ✓ | ✓ |
| Filters | ✓ | ✓ |
| Notes | ✓ | ✓ |
| Search | ✓ | ✓ |
| Export | ✓ | ✓ |

### Management Capabilities

| | Workforce Management Leaders | Business Stakeholders |
| :--- | :--- | :--- |
| Business Hours | ✓ | - |
| Users | ✓ | - |

# Live Insights Overview

Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights

Learn how to use Live Insights to monitor and analyze real-time contact center activity.

Live Insights provides tools to track agent and conversation performance in real time. You can:

* Monitor all queues
* Monitor alerts
* Drill down into each queue to gain insight into what areas need attention

1. The Overview page (All Queues) shows a summary widget for each configured queue.
2. Click a **queue tile** or select a **queue** from the header dropdown to navigate to the Queue Details page.

## Monitor Performance per Queue

The Queue Details page for each queue shows performance across the most important metrics. All metrics that the dashboard displays update in true real time. You can categorize metrics either as "Right Now" or "Current Period":

* Right Now metrics update immediately upon a change in the ecosystem.
* Current Period metrics constantly update in aggregate over the day.

## Information Architecture

ASAPP continues to improve the Live Insights experience with new touch points to host live transcripts and to scale up when introducing new metrics and performance signals.

1. **All Queues** → Provides a performance overview of all queues and queue groups. Also provides customization tools to show/hide queues and create/manage queue groups.
2. **Single Queue and Queue Groups** → These now include two pages:
   * **Conversations:** Displays performance data for all conversations currently connected to an agent, as well as live transcripts and alerts.
   * **Performance:** Displays queue performance data, both for 'right now' and rolling 'since 12 am'. It also provides agent performance data and showcases feedback that customers send.

# Agent Performance

Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/agent-performance

Monitor agent performance in Live Insights.

ASAPP provides robust real-time agent performance data. You can monitor:

* Agent status
* Handle and response time performance
* Agent utilization

In addition, alerts and signals provide context to better understand how agents are performing.

## How to Access Agent Data

You can access agent performance data from the 'Performance' page.

1. **Open agent panel**: To view agent performance data, click the **Agent** icon on the right side of the screen. The system opens a panel that contains a list of all agents currently logged into the queue.
2. **Close agent panel**: To close the agent panel, click the **close** icon.
## Agent Real-time Performance Data

Live Insights automatically updates agent performance data in real time, every 15 seconds.

## Search for Agents & Sort Data

ASAPP provides tools to organize and find the right content. You can search for a specific agent name or sort the data based on current performance.

1. **Find agents**: To find a specific agent, enter the **agent name** in the search field. The list of agents filters down to the relevant results. To remove the query, delete the **agent name** from the search field.
2. **Sort agents**: You can use each column in the agent panel to sort content. By default, the system sorts the list by agent name. To sort by a different metric, click the **column name**. To change the sort order, click the **active column name**.

## View Agent Transcripts

You can access live agent transcripts from the 'Agent' panel.

1. **Agents with assignments**: The system underlines agents currently taking assignments in the 'Agent' panel. Click an **underlined agent name** to go to the 'Conversation' page to view relevant agent transcripts.
2. **Agent filter applied**: When you view an agent's transcript, the 'Conversation Activity' table displays only their chats. The filter chip displayed above the list of conversations indicates this filtering. To remove the filter, click the **X** icon in the filter chip.

# Alerts, Signals & Mitigation

Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/alerts,-signals---mitigation

Use alerts, signals, and mitigation measures to improve agent task efficiency.

To improve user focus and task efficiency, ASAPP elevates various alerts and signals within Live Insights. These alerts notify users when performance is degrading, when the system detects events, or when high queue mitigation measures can be activated based on volume.

## Type of Alerts

Live Insights displays four alert types:

1.
**Metric Highlighting**: Highlights metrics that are above their target threshold within Live Insights. The system shows the highlights on the Overview page, as well as within single queues and queue groups. The alert will persist until the metric's performance returns below its threshold. 2. **Event-based Alerts**: Detects and records events per conversation and displays them in the conversation activity table. 3. **High Queue Mitigation**: Activates when the queue volume exceeds the target threshold. When active, you can use mitigation measures to reduce queue volume impacts. 4. **High Effort Issue**: Indicates when a high effort issue is awaiting assignment and is currently blocking other issues from being assigned. ## Metric Highlighting Live Insights highlights metrics that are above their target threshold on the Overview page, as well as within single queues and queue groups. The alert persists until the metric's performance returns below its threshold. Where metrics are highlighted: 1. **Conversation performance**: The system can highlight both 'average handle time' and 'average response time'. 2. **Agent performance**: The system can highlight 'time in status', 'average handle time', and 'average response time'. 3. **Queue performance**: The system can highlight queue-level metrics within a single queue, queue groups, or on the Overview page. ## Event-based Alerts Events are generated by actions that agents, customers, or you take. Live Insights detects and records these events and displays them alongside conversation data, within the 'alert' column. 1. **Conversation events**: These events are related to a unique conversation. Agent actions or your actions can generate the events. * **Customer transfers**: When an agent transfers a customer, Live Insights displays an alert next to the conversation. * **Whisper sent**: When you send a whisper message to an agent, Live Insights records and displays the event next to the conversation. 2. 
**Agent events**: These events impact the agent workload and help you contextualize agent performance. Live Insights displays the events for all targeted agents, within the Agent Performance panel. * **High effort**: The agent is currently handling a high effort issue. * **Flex concurrency**: The agent is currently flexed and has a higher than normal utilization. ## High Queue Mitigation ASAPP provides tools that enable workforce management groups to act quickly when queues are, or could become, anomalously high. **Tools Overview** Live Insights can: * Monitor queues for unusually high volume. * Highlight the 'Queued' metric based on severity level. * Activate 'Custom High Wait Time' messaging and replace Estimated Wait Time messaging. * Pause queues experiencing extremely high volume and prevent new queue assignments. **Volume Thresholds:** Live Insights highlights metrics when they exceed a threshold defined for the queue. 1. **Low Severity:** Detects abnormal activity with moderate impact on the queue. 2. **High Severity**: Detects highly abnormal activity. The queue is severely impacted. **Mitigation Options:**

| **Mitigation** | **Description** | **Severity Threshold** | **Features available** |
| :------------- | :-------------- | :--------------------- | :--------------------- |
| **Default behavior** | Business as usual. All queues are operating based on this setting. | None | • Estimated Wait Time messaging is active. <br /> • Routing & assignment rules remain unchanged. |
| **Custom High Wait Time Message** | Low severity mitigation measure. Replaces Estimated Wait Time messaging. | Low Severity | • The system replaces Estimated Wait Time messaging with a custom message. <br /> • Routing & assignment rules remain unchanged. |
| **Pausing the Queue** | High severity mitigation measure. Prevents new assignments to the queue. | High Severity | • Estimated Wait Time messaging is replaced with a custom message alerting users the queue is currently closed due to high volume. <br /> • The system pauses assignment to the queue. <br /> • Users currently in the queue remain in the queue. <br /> • To time out users waiting in the queue, contact ASAPP. |

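The relationship between severity thresholds and the mitigation options they unlock can be sketched as a simple lookup. This is a minimal illustration in Python; the function name and threshold values are hypothetical, since real severity thresholds are configured per queue in Live Insights:

```python
from enum import Enum

class Mitigation(Enum):
    DEFAULT = "Default behavior"
    CUSTOM_WAIT_MESSAGE = "Custom High Wait Time Message"
    PAUSE_QUEUE = "Pausing the Queue"

def available_mitigations(queued: int, low_threshold: int, high_threshold: int) -> list:
    """Return the mitigation options a queue qualifies for.

    Threshold values here are illustrative; Live Insights defines
    low/high severity thresholds per queue.
    """
    options = [Mitigation.DEFAULT]  # Default behavior is always available
    if queued >= low_threshold:
        options.append(Mitigation.CUSTOM_WAIT_MESSAGE)  # low severity unlocked
    if queued >= high_threshold:
        options.append(Mitigation.PAUSE_QUEUE)  # high severity unlocked
    return options

print([m.value for m in available_mitigations(42, low_threshold=25, high_threshold=60)])
# → ['Default behavior', 'Custom High Wait Time Message']
```

A queue of 42 exceeds only the low threshold in this sketch, so pausing the queue is not offered.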
### Activate Mitigation 1. **Mitigation menu options**: When available, Live Insights displays a menu on the relevant queue card in the Overview, as well as on the 'Performance' page of single queues and queue groups. To view those options, click the **menu** icon. The menu icon only displays when the 'Queued' metric is highlighted. 2. **Select mitigation**: Based on the severity level, Live Insights displays different mitigation options. Select an **option** to activate it. To remove the mitigation behavior, select **Default behavior**. 3. **Mitigation applied**: When you select a mitigation option, the system indicates it on the queue card or on the Performance page. ## High Effort Issues ASAPP supports a capability that helps agents focus on higher effort issues while maintaining efficiency. This feature dynamically adjusts how many concurrent issues an agent should handle while assigned a high effort issue. ### What is a High Effort Issue? ASAPP will route customers based on the expected effort of their issue. All issues, by default, will have an effort of 1. Any issue with an effort value greater than 1 becomes "high effort". Reach out to your ASAPP Implementation team to configure high effort rules for your program. ## Feature Definitions * **Slot**: A slot represents a space for a chat to be assigned to an agent. You can assign and configure multiple slots to a single agent via User Management. * **Effort**: Effort represents what is needed from an agent to solve an issue. For each effort point assigned to an issue, an agent must have an equivalent number of available slots to be assigned that issue. ASAPP determines an issue's effort by its relevant customer attributes. * **High Effort Time Threshold**: A threshold that sets how much time an agent can parallelize a high effort issue with other issues. You can configure this threshold per queue. 
This threshold represents the duration of all existing assignments an agent is handling when a high effort issue is next in line. * **Flex Slot**: All agents have 1 additional slot that can be used if they are eligible to receive a flex assignment or if they are temporarily over-effort when handling a high effort issue. * **Linear Utilization Level:** A type of Linear Utilization relative to the number of assignments an agent has assigned at a given time, regardless of the assignment workload state. * **Assignment Workload**: A measure of Linear Workload relative to the number of active assignments an agent has assigned at a given time. An assignment is not considered active if it has caused an agent to become Flex Eligible. * **Effort Workload**: A measure of Linear Workload relative to the issue effort of all active assignments an agent has assigned at a given time. ### How are high effort issues prioritized and assigned? ASAPP assigns high effort chats in the order that they entered the queue. You can prioritize high effort chats higher in the queue using customer attributes. This prioritization is optional. A configurable *high effort time threshold* allows each queue to set how much time an agent can parallelize a high effort issue with other assignments. ### How are high effort issues assigned against other issues? ASAPP assigns high effort issues in order of configured priority and time of entry into the queue. An agent will receive a high effort assignment if they meet at least one of the following criteria: * An agent has 0 active assignments. * An agent has sufficient open slots to receive a high effort assignment. * The **high effort time threshold** has elapsed for all of an agent's current assignments and the high effort chat's effort would not extend the agent's Effort Workload past their Flex Slot. ### How do high effort issues impact performance? * High effort issues will not change current behavior for Queue Priority. 
* High effort issues will not change current behavior for Flex Eligibility or Flex Protect. * High effort issues take longer to assign because they have to wait for an agent to have sufficient effort capacity. * If a set of queues has 50% or more agents in common, then a high effort issue at the front of one queue will hold the issues in the other "shared" queues until it is assigned. ### How do I monitor the impact of high effort issues? You can view the 'Queued - High Effort' metric in Live Insights on queue detail pages. This metric captures the number of high effort issues currently waiting in the queue. If a high effort issue is first in queue and slows other issues from being assigned, Live Insights displays an alert on this metric. These changes will also be visible for programs that do not have high effort rules configured. ### How can I tell which agents are handling high effort issues? In the Agent Right Rail, you can monitor which agents are currently handling high effort issues. ASAPP displays an icon next to the agent's utilization indicating a high effort issue is assigned. These changes will also be visible for programs that do not have high effort rules configured. # Customer Feedback Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/customer-feedback Learn how to view customer feedback in Live Insights. Live Insights tracks customers that engage with the satisfaction survey. The Customer Feedback panel displays all feedback received throughout the day. ## Access Customer Feedback 1. **Open feedback panel**: To view feedback, click the **Feedback** icon on the right side of the 'Performance' page. The system opens the Customer Feedback panel. 2. **Time stamp**: Indicates when the system recorded the feedback. 3. **Agent name and issue ID**: Indicates the targeted agent, as well as the customer's issue ID. * **Issue ID link**: Click the **issue ID** to display the transcript in the Conversation Manager. 4. 
**Feedback**: Feedback that the customer left. 5. **CSAT**: CSAT score that the system calculates based on customer responses to the survey. 6. **Find agent**: You can filter the feedback received by agent. To view feedback related to a specific agent, type the **agent name** in the search field. # Live Conversations Data Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/live-conversations-data Learn how to view and interact with live conversations in Live Insights. You can find all conversations that are currently connected to an agent in Live Insights. Performance data updates automatically in Live Insights. If a conversation's metrics are outside their target range, the system displays alerts. ## Conversation Activity The conversation activity table is the core of real-time monitoring. You can see all conversations currently assigned to an agent. You can sort content by performance metrics to get the view most relevant to your needs. Live Insights automatically refreshes performance data every 15 seconds. Furthermore, you can access live transcripts for each conversation currently assigned. 1. **Links**: Provides a quick entry point to view historical transcripts or performance data. 2. **Conversation count & refresh**: Displays the total number of conversations in the table. Live Insights updates the content automatically every 15 seconds. 3. **Sorting**: You can sort the content by each of the metrics captured for each conversation. You can sort all columns in ascending/descending order. To sort, click the **column header**. Click the **header** again to reverse the sorting order. The default sort is ascending by time assigned. 4. **Conversations**: Each conversation currently assigned to an agent displays as a row in the Conversation Activity table. Metrics associated with the conversation display and update dynamically. 5. **Metric highlighting**: Metrics that have assigned thresholds are highlighted. 
See 'Metric Highlighting' for more information. 6. **Alerts**: When an event is recorded, it displays in the column. Not all conversations will include an event. See [Alerts, Signals & Mitigation](/agent-desk/insights-manager/live-insights/alerts,-signals---mitigation "Alerts, Signals & Mitigation") for more information. ## Conversation Data Anatomy Each row in the conversation activity table lists performance data. The chart below outlines data available in Live Insights for each chat conversation. 1. **Issue ID**: Unique conversation identifier that the system assigns to a customer intent. 2. **Agent name**: Name of the agent handling the conversation. 3. **Channel**: Detected channel the customer is engaging with. 4. **Intent**: Last detected intent before the system assigns the user to the queue. 5. **Time Assigned**: Time when the system assigned the conversation to an agent. 6. **Handle time**: Current handle time of the conversation. 7. **Average Response Time**: Average time it takes an agent to reply to customer utterances. 8. **Time Waiting**: Time elapsed since the last message, during which the sender has been waiting for a response. 9. **Alerts**: Event-based signals recorded throughout the conversation. 10. **Queue name**: Name of the queue that received the issue assignment. This column only displays in Queue Groups. Click the **queue name** to go to the queue details view. ## View a Live Transcript Each conversation connected to an agent includes a live transcript that you can view. The transcript updates in real time. You can send a Whisper to the agent from the transcript. 1. **Open transcripts**: To view a transcript, click any **row** in the Conversation Activity table. 2. **Transcript**: The transcript updates in real time. The system displays handle time alongside conversation data (issue ID, agent, channel and intent). 3. **Close transcripts:** To close a transcript, click the **Close** icon. 4. 
**Whisper**: A Whisper allows you to send a discreet message within the transcript that agents can see but that the system hides from customers. ## Conversations: Current Performance Data Current queue performance data appears to the right of the activity table. These metrics encompass all conversations currently in the queue or connected to an agent. You can view a drill-down, enhanced view of the performance data under the Performance page. 1. **Queue Activity**: Includes 'Queued', 'Avg current time in queue', 'Average wait time', and 'Average time to assign'. 2. **Volume**: Includes 'Offered', 'Assigned to agent', and 'Time out by agent'. 3. **Handle & Response Time**: Includes 'Average handle time (AHT)', 'Average response time (ART)', and 'Average first response time (AFRT)'. # Metric Definitions Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/metric-definitions Learn about the metrics available in Live Insights. ## Performance - 'Right Now' Metrics | **Metric name** | **Definition** | | :------------------------------------ | :------------------------------------------------------------------------------------------------------------------ | | **Offered** | The number of conversations that are currently connected with an agent or waiting in the queue. | | **Assigned to Agent** | The number of conversations where a customer is currently talking to a live agent. | | **Timed out by Agent** | Only available as a current period metric for the day. | | **Queued** | The number of customers who are waiting in the queue to be connected to an agent. | | **Queued - Eligible for Assignment** | The number of customers who are waiting in the queue, received a check-in message, and replied to it. | | **Max Queue Time** | The actual wait time of the customer who is positioned last in the given queue. 
| | **Average Wait Time** | The average queue time for all customers who are currently assigned to an agent or waiting in the queue, including 'zero-time' for customers directly assigned to an agent when there were available slots. | | **Average Time in Queue** | The average queue time for all customers who are currently waiting in the queue. | | **Average Time to Assign** | The average queue time for all customers who are currently assigned to an agent, including 'zero-time' for customers directly assigned to an agent when there were available slots. | | **Queue Abandons** | Only available as a current period metric for the day. | | **Average Abandon Queue Time** | Only available as a current period metric for the day. | | **Queue Abandonment Rate** | Only available as a current period metric for the day. | | **Average Agent Response Time** | The average amount of time to respond to a customer message across the assignment for agents who are currently handling chats. | | **Average Agent First Response Time** | The average amount of time to send the first line to a customer after the chat was assigned for agents who are currently handling chats. | | **Average Handle Time** | The time spent across all current chats by an agent per assignment starting from when the chat was assigned to when it is dispositioned. | | **Active Slots** | A ratio of the number of currently active conversations to number of concurrent slots for agents who are in an Active status or actively handling chats. | | **Occupancy** | The percentage of currently assigned chats to the number of agents with slots set to active. | | **Concurrency** | The percentage of currently assigned chats to the number of agents with currently assigned chats. | | **Logged In Agents** | The number of agents currently logged in to Agent Desk. | | **Active and Away Agents** | The number of agents with an active-type and away-type label respectively. | | **Agent Status** | The number of agents with each status label. 
| ## Agent Metrics | **Metric name** | **Definition** | | :------------------------ | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Agent Name** | Name of the agent currently logged in to Agent Desk. Agents currently handling assignments have their names underlined. When you click, the system displays the agent's current assignments in the 'Conversations' tab. | | **Agent Status** | Name of the status selected by the agent in Agent Desk. Green labels represent Available statuses, while orange labels represent Away statuses. | | **Time in Status** | The time an agent has spent in the currently displayed status. | | **Average Handle time** | The time spent across all current assignments, starting from when the chat was assigned to when it is dispositioned, for a given agent. | | **Average Response Time** | The average amount of time to respond to a customer message across all current assignments for a given agent. | | **Assignments** | The number of assignments an agent is currently handling. | ## Conversation Metrics | **Metric name** | **Definition** | | :------------------------ | :------------------------------------------------------------------------------------------------ | | **Issue ID** | Unique conversation identifier that the system assigns to a customer intent. | | **Agent Name** | Name of the agent handling the conversation. | | **Channel** | Channel the customer is engaging with. | | **Intent** | Last detected intent before the system assigns the user to the queue. | | **Queue Membership** | Queue where the system assigned the issue based on intent classification and queue routing rules. | | **Time Assigned** | Time when the system assigned the conversation to an agent. | | **Handle Time** | Current handle time of the conversation. 
| | **Average Response Time** | Average time it takes an agent to reply to customer utterances. | | **Time Waiting** | Time since the last message while the sender awaits a response. | | **Alerts** | Event-based signals that the system records throughout the conversation. | ## Performance - 'Current Period' Metrics (since 12 am) | **Metric name** | **Definition** | | :------------------------------------ | :---------------------------------------------------------------------------------------------------------------------------- | | **Offered - Total** | The total instances where the conversation was either placed in queue or assigned directly to an agent (without being placed in queue), attributed to the time interval in which the queue or direct assignment event occurred. | | **Assigned to Agent - Total** | The total instances where the customer was assigned to an agent. | | **Timed Out by Agent - Total** | The total instances assigned to an agent where they "Timed Out" the customer. | | **Queued - Total** | The total instances where a customer was placed in or is currently waiting in the queue to be connected to an agent. | | **Queued - Eligible for Assignment** | Only available as a right now metric. | | **Max Queue Time** | Only available as a right now metric. | | **Average Wait Time** | The average time a customer waited to abandon or be assigned to an agent, including 'zero-time' for customers directly assigned to an agent when there were available slots. | | **Average Time in Queue** | The average time a customer waited in queue for those who either abandoned the queue or were assigned to an agent. | | **Average Time to Assign** | The average queue time for all customers who were assigned to an agent, including 'zero-time' for customers directly assigned to an agent when there were available slots. | | **Queue Abandons** | The total count of customers who abandoned the queue. 
| | **Average Abandon Queue Time** | The average time a customer waited in queue prior to abandoning, either by being dequeued on the web or ending the chat before being assigned to an agent. | | **Queue Abandonment Rate** | The percent of customers who required a visit to the queue and abandoned before being assigned to an agent. | | **Average Agent Response Time** | The average amount of time it takes an agent to respond to a customer message across all assignments. | | **Average Agent First Response Time** | The average amount of time it takes an agent to send the first line to a customer after the chat was assigned across all assignments. | | **Average Handle Time** | The average amount of time spent by an agent per assignment, from when the chat was assigned to when the agent finishes dispositioning the assignment. | | **Active Slots** | Only available as a right now metric. | | **Occupancy** | The percentage of cumulative utilization time to cumulative available time for all agents who handled chats. | | **Concurrency** | The weighted average number of concurrent chats that agents are handling at once, given an agent is utilized by handling at least one chat. | | **Logged In Agents** | Only available as a right now metric. | | **Active and Away Agents** | Only available as a right now metric. | | **Agent Status** | Only available as a right now metric. | ## Teams and Locations You can track the live behaviors of agents by overseeing outages and staff levels at different geographic locations. Furthermore, each team provides hourly updates on which agents are active, at lunch, or in another state, and admins need easy access to this information. Admins see a list of agents when they click into a particular queue, select Performance from the left-hand panel, and click the Agents icon on the right-hand panel. 
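The hourly "who is active, who is at lunch" view described above amounts to tallying agent statuses per team or per location. A minimal sketch in Python with illustrative agent records and a hypothetical helper name; the real data comes from the Team and Location tables in Live Insights:

```python
from collections import Counter

def status_counts(agents, group_by="team"):
    """Tally agent statuses per team (or per location).

    `agents` is a list of illustrative dict records; Live Insights
    surfaces the equivalent data in its Team and Location tables.
    """
    counts = {}
    for agent in agents:
        key = agent[group_by]
        counts.setdefault(key, Counter())[agent["status"]] += 1
    return counts

agents = [
    {"name": "Ana", "team": "Billing", "location": "Austin", "status": "Active"},
    {"name": "Ben", "team": "Billing", "location": "Austin", "status": "Lunch"},
    {"name": "Cara", "team": "Sales", "location": "Manila", "status": "Active"},
]
print(status_counts(agents))
# per-team tallies, e.g. Billing has one Active and one Lunch agent
```

Grouping by `location` instead of `team` yields the Location table view of the same data.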
Admins can further review results by current-day performance metrics and filter both the agent list and metrics by any of the following attributes: * **Agent Name** * **Location** * **Team** * **Status** ### Team Table Admins can filter teams by type of role and review each company assigned to the team. Also, each result shows the size and the occupancy of each team. Each administrator can provide an hourly update of how many agents are active, at lunch, or in another state, as well as view corresponding metrics. ### Location Table Admins can filter locations by region and review the occupancy and size of each location. Each location provides updates of performance and agent names. # Navigation Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/navigation Learn how to navigate the Live Insights interface. ## How to Access Live Insights * You can access Live Insights from the primary navigation. To open, click the **Live Insights** link. ## How to Access a Queue or Queue Group Live Insights provides different views of queue performance data: * Overview of all queue activity, including queue groups and organizational groups. * Single queues and queue groups, which display queue and agent performance data. You can access a single queue or queue group in two ways: 1. From Overview, **click a tile** → the system opens the relevant queue details page. 2. From the **queue dropdown**, select a **queue** or **queue group**. ## Navigate Away from a Single Queue or Queue Group You can navigate back to the Live Insights Overview, or to a different queue or queue group. 1. **Back arrow**: When you click, the system opens the Live Insights Overview. 2. **Queue channel indicator**: Indicates if the queue is a voice or chat queue. 3. **Queue dropdown**: When you click, you can select a different queue or queue group. ## Channel-based Queues Queues and queue groups host channel-specific content. ASAPP supports three queue types: 1. 
**Chat queues**: includes all digital channels such as Apple Messages for Business, Web, SMS, iOS and Android. 2. **Voice queues**: includes all voice channels in one queue. 3. **Queue groups**: groups consist of aggregated queues of a single type. Each group contains either chat queues or voice queues. The number of queues in the queue group displays below the channel icon. ## Access Queue Performance and Conversation Data Single queues and queue groups include two views: performance data about the queue and conversation activity data. 1. **Performance**: Click to access the performance data of the queue, as well as agent performance data and customer feedback. 2. **Conversations**: Click to access conversation activity, view transcripts, and send whisper messages. # Performance Data Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/performance-data Learn how to view performance data in Live Insights. Live Insights provides a comprehensive view of today's performance within each queue and queue group. You can view performance data for 'right now', as well as for the 'current period', defined as since 12 am. You can also view alerts, signals, and agent performance data on the Performance page. 1. **Data definitions**: On click, opens a link to view metric definitions within Historical Reporting. 2. **Channel filter**: Filter performance data by channel. When you click, the system displays channel options. Select options to automatically filter data. 3. **Performance metrics**: Displays all performance metrics currently available. By default, the system shows performance metrics in a 'Right Now' view. 4. **Intraday**: Rolling data since the beginning of the day (12 am) is available upon activation. When active, the system displays the rolling count or averages since 12 am. 5. **Agent metrics and feedback data**: When you click, the system shows the Agent performance data or customer feedback received. 
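The 'Right Now' snapshot and the intraday rolling view mentioned above differ only in which events feed a metric: the current state versus everything recorded since midnight. A minimal sketch of the since-12-am aggregation in Python, with illustrative event records and hypothetical helper names (Live Insights computes these metrics server-side):

```python
from datetime import datetime, time

def since_midnight(events, now):
    """Keep only events recorded since 12 am on the current day."""
    midnight = datetime.combine(now.date(), time.min)
    return [e for e in events if midnight <= e["at"] <= now]

def average_handle_time(events):
    """Average handle time in seconds over a set of dispositioned chats."""
    if not events:
        return 0.0
    return sum(e["handle_seconds"] for e in events) / len(events)

now = datetime(2024, 5, 1, 14, 30)
events = [
    {"at": datetime(2024, 4, 30, 23, 50), "handle_seconds": 300},  # yesterday: excluded
    {"at": datetime(2024, 5, 1, 9, 15), "handle_seconds": 240},
    {"at": datetime(2024, 5, 1, 13, 5), "handle_seconds": 360},
]
print(average_handle_time(since_midnight(events, now)))  # → 300.0
```

The same filter-then-aggregate shape applies to the other 'Current Period' metrics, which is why some snapshot-only metrics have no 'since 12 am' counterpart.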
## Intraday Data You can view current performance data ('right now') or view aggregate counts and averages since the beginning of the day ('current period'). These two views provide you with a fuller picture of queue performance and facilitate investigations and contextualization of events. 1. **Right Now**: Default view. Provides performance data currently captured. 2. **Since 12 am**: Click the **toggle** to display 'current period' metrics. The system does not provide some metrics in this configuration. See Metric Definitions for more information. ## Filter by Channel You can segment performance data per channel or by groups of channels. 1. **Channel dropdown**: To filter data per channel, click the **channel** dropdown to activate channel selection. 2. **Channel options**: The channel dropdown shows all available channels. You can select one or more **channels** to filter data by. Once selected, the data will automatically update. # Queue Overview (All Queues) Source: https://docs.asapp.com/agent-desk/insights-manager/live-insights/queue-overview--all-queues- Learn how to view and customize the performance overview for all queues and queue groups. Live Insights provides a view of all single queues and queue groups, with a performance overview. Live Insights highlights metrics that are outside the normal performance range. You can also find customization tools to show/hide queues, or to create and manage queue groups, on this page. 1. **Queue count**: Displays the total number of queues available. 2. **Customization**: Users can access tools to customize the display of queues. * Queue visibility: Show/hide queues to customize the Overview page. * Queue groups: Create new queue groups and edit existing groups. 3. **Single Queues & Queue Groups**: Displays a performance overview for each queue and queue group. Each tile leads to a drill-down view. 
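One of the customization options above is queue groups, and as noted under Channel-based Queues, a group must aggregate queues of a single channel type (all chat or all voice). A minimal sketch of that constraint in Python, with illustrative queue records and a hypothetical helper name:

```python
def make_queue_group(name, queues):
    """Build a queue group from queues of a single channel type.

    `queues` holds illustrative dict records; mixing chat and voice
    queues raises, mirroring the single-type rule for queue groups.
    """
    channels = {q["channel"] for q in queues}
    if len(channels) != 1:
        raise ValueError("A queue group must contain either chat queues or voice queues, not both.")
    return {"name": name, "channel": channels.pop(), "queues": [q["name"] for q in queues]}

group = make_queue_group("Digital Support", [
    {"name": "Web", "channel": "chat"},
    {"name": "iOS", "channel": "chat"},
])
print(group["channel"], len(group["queues"]))  # → chat 2
```

Attempting to add a voice queue to this group would raise, matching the behavior where each group contains either chat queues or voice queues.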
## Customization ASAPP supports customization features to change the display of queues, as well as to create and manage queue groups. To access customization features, click the **Customize** button on the Overview page. Two options appear. Click an **option** to launch the associated customization feature. ## Change Queue Visibility ASAPP provides tools for you to customize the queues showcased on the Overview page. You can hide queues as needed. Click the **Customize** button on the All Queues page to sort and select the **queues** to display. 1. **Find a queue**: Use the search field to find a specific queue. Type in the **queue name** to filter the list of queues down to relevant matches. 2. **Sort queues**: You can sort in ascending or descending order. Click the **Sort** dropdown to select the desired sort order. 3. **Bulk selection**: Click the **bulk selection** control to select all queues. Click it again to deselect all queues. 4. **Single queue selection**: Select and deselect **queues** in the list. The system hides deselected queues on the Overview page. 5. **Apply and cancel**: To confirm your selection, click **Apply**. To dismiss changes or close the modal, select **Cancel**. ## Create and Edit Queue Groups You can create groups of queues to monitor performance across multiple queues more efficiently. When you create a queue group, the system displays a drill-down view of the queue group. A queue group behaves similarly to a single queue: you get access to all live transcripts across all queues selected in the group. You can access Performance data for all agents in the group, as well as consolidated customer feedback. Queue groups are unique to each user. You can create and edit an unlimited number of queue groups. 1. **Create new group**: Click this **button** to create a new queue group. 2. **Existing queue group**: You can view, edit, or delete existing groups. 3. 
**Organizational group**: Your organization creates queue groups that display with a 'Preset' tag. Groups with this tag are visible to all Live Insights users. These groups cannot be edited or deleted. 4. **Edit a group**: To edit an existing queue group, click the **Edit** icon. 5. **Delete a group**: To delete an existing queue group, click the **Delete** icon. **Edit a queue group:** 1. **Queue group name**: Name assigned to the queue group. 2. **Available queues**: List of all queues that you can add to a group. Select **queues** to add them to the group; deselect them to remove them. 3. **Queues added to group**: The system displays currently selected queues under the queue name. 4. **Apply and cancel**: To apply changes, click the **Apply** button. To dismiss changes or close the modal, click the **Cancel** button. ## Overflow Queue Routing Overflow Queue Routing enables administrators to redirect traffic from one queue to another to reduce estimated wait times for end customers and to support closed queues where legally required. Overflow Queue Routing can use two rules: 1. **Business Hours Rule**: The system redirects traffic from Queue A to Queue B when it is outside the operating business hours for Queue B. 2. **Agent Availability Rule**: The system redirects traffic from Queue A to Queue B when there are no available agents serving Queue A. Work with your ASAPP account team to configure overflow routing rules that align with your business needs. ## Bulk Close and Transfer Chats ASAPP provides capabilities to bulk close and transfer chats in Live Insights to help manage queues experiencing unusual activity or high traffic. ### Bulk Chat Transfer To transfer all chats from one queue to another: 1. Click the dropdown menu in the queue card. 2. Select "Transfer all chats". 3. The system displays a queue selection modal asking "Select the queue which you want to transfer all chats to?" 4. Select the target queue from the dropdown list. 5. 
Click "Transfer chats" to complete the action

The system will display a toast message confirming that all chats have been transferred. The end customer will not see any change on their side and will assume they are still waiting in a queue.

### Bulk Chat Closure

To close all chats in a queue:

1. Click the 3 dots in the upper right-hand corner of the queue card
2. Select "End all chats" from the dropdown menu
3. The system displays a confirmation modal asking "Are you sure you want to end all chats in this queue?"
4. Click "Confirm" or "Yes" to complete the action

Use these features carefully, as they affect multiple customer conversations simultaneously. Bulk actions are best used in situations where immediate intervention is needed to manage queue performance or address unusual activity.

# Integration Channels

Source: https://docs.asapp.com/agent-desk/integrations

Learn about the channels and integrations available for ASAPP Messaging.

ASAPP Messaging offers a wide range of integration options to connect your brand with customers across various channels and enhance your customer service capabilities. These integrations are divided into two main categories: [Customer Channels](#customer-channels) and [Applications Integrations](#applications-integrations).

**Customer Channels** are the direct touchpoints where your customers can interact with your brand. Regardless of which channels you choose to integrate, [Digital Agent Desk](/agent-desk/digital-agent-desk) standardizes the interaction for your agents into a single interface.

**Applications Integrations** enhance the functionality and efficiency of your customer service operations. These integrations cover various aspects such as agent authentication, routing, knowledge management, and user management.

## Customer Channels

## Applications Integrations

# Android SDK Overview

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk

Learn how to integrate the ASAPP Android SDK into your application.
You can integrate ASAPP's Android SDK into your application to provide a seamless messaging experience for your Android customers. ### Android Requirements ASAPP supports Android 5.0 (API level 21) and up. The SDK currently targets API level 30. ASAPP distributes the library via a Maven repository and you can import it with Gradle. ASAPP wrote the SDK in Kotlin. You can also use it if you developed your application in Java. ## Getting Started To get started with Android SDK, you need to: 1. [Gather Required Information](#1-gather-required-information "1. Gather Required Information") 2. [Install the SDK](#2-install-the-sdk "2. Install the SDK") 3. [Configure the SDK](#3-configure-the-sdk "3. Configure the SDK") 4. [Open Chat](#4-open-chat "4. Open Chat") ### 1. Gather Required Information Before downloading and installing the SDK, please make sure you have the following information. Contact your Implementation Manager at ASAPP if you have any questions. | | | | :------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | App ID | Also known as the "Company Marker", assigned by ASAPP. | | API Host Name | The fully-qualified domain name used by the SDK to communicate with ASAPP's API. Provided by ASAPP and subject to change based on the stage of implementation. | | Region Code | The ISO 3166-1 alpha-2 code for the region of the implementation, provided by ASAPP. | | Supported Languages | Your app's supported languages, in order of preference, as an array of language tag strings. Strings can be in the format "\{ISO 639-1 Code}-\{ISO 3166-1 Code}" or "\{ISO 639-1 Code}", such as "en-us" or "en". Defaults to \["en"]. | | Client Secret | This can be an empty or random string\* until otherwise notified by ASAPP. 
| | User Identifier | A username or similar value used to identify and authenticate the customer, provided by the Customer Company. | | Authentication Token | A password-equivalent value, which may or may not expire, used to authenticate the customer that is provided by the Customer Company. | \* In the future, the ASAPP-provided client secret will be a string that authorizes the integrated SDK to call the ASAPP API in production. ASAPP recommends fetching this string from a server and storing it securely using Secure Storage; however, as it is one of many layers of security, you can hard-code the client secret. ### 2. Install the SDK ASAPP distributes the library via a Maven repository and you can import it with Gradle. First, add the ASAPP Maven repository to the top-level `build.gradle` file of your project: ```groovy theme={null} repositories { maven { url "https://packages.asapp.com/chat/sdk/android" } } ``` Then, add the SDK to your application dependencies: `implementation 'com.asapp.chatsdk:chat-sdk:'` Please check the latest Chat SDK version in the [repository](https://gitlab.com/asappinc/public/mobile-sdk/android/-/packages) or [release notes](https://docs-sdk.asapp.com/api/chatsdk/android/releasenotes/). At this point, sync and rebuild your project to make sure you have successfully imported all dependencies. You can also validate the authenticity of the downloaded dependency by following these steps. ### Validate Android SDK Authenticity You can verify the authenticity of the SDK and make sure that ASAPP generated the binary. The GPG signature is the standard way ASAPP handles Java binaries when this is a requirement. #### Setup First, download the ASAPP public key [from here](https://docs-sdk.asapp.com/api/chatsdk/android/security/asapp_public.gpg). 
```shell theme={null}
wget -O asapp_public_key.asc https://docs-sdk.asapp.com/api/chatsdk/android/security/asapp_public.gpg
```

#### Verify File Signature

Use the console GPG command to import the key:

```shell theme={null}
gpg --import asapp_public_key.asc
```

You can verify that the public key was imported via `gpg --list-keys`.

Download the ASC file directly from [our repository](https://gitlab.com/asappinc/public/mobile-sdk/android/-/packages). Finally, you can verify the Chat SDK AAR and associated ASC files like so:

```shell theme={null}
gpg --verify chat-sdk-.aar.asc chat-sdk-.aar
```

### 3. Configure the SDK

Use the code below to create a configuration and initialize the SDK with it. You must pass your `Application` instance. Refer to the aforementioned [required information](#1-gather-required-information "1. Gather Required Information"). ASAPP recommends you initialize the SDK in your `Application.onCreate`.

```kotlin theme={null}
import com.asapp.chatsdk.ASAPP
import com.asapp.chatsdk.ASAPPConfig

val asappConfig = ASAPPConfig(
    appId = "my-app-id",
    apiHostName = "my-hostname.test.asapp.com",
    clientSecret = "my-secret",
    enableSDKCrashlytics = true,
    enableSentry = true
)
ASAPP.init(application = this, config = asappConfig)
```

If you face a compile issue after setting `enableSDKCrashlytics` to `true`, perform the following steps:

1. Add (if not present) the following plugins to your app module's `build.gradle.kts` or `build.gradle`:
   * `id("com.google.gms.google-services")`
   * `id("com.google.firebase.crashlytics")`
2. Add (if not present) the Firebase BOM to your app's dependencies block:
   * `implementation(platform("com.google.firebase:firebase-bom:29.0.0"))`
3. Create a dummy `google-services.json` and place it at `$APP_DIR/google-services.json`.

Initialize the SDK only once; you can update the configuration at runtime.

### 4. Open Chat

Once the SDK has been configured and initialized, you can open chat.
To do so, use the `openChat(context: Context)` function, which will start a new Activity:

```kotlin theme={null}
ASAPP.instance.openChat(context = this)
```

Once the chat interface is open, you should see an initial state similar to the one below:

## Next Steps

# Android SDK Release Notes

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/android-sdk-release-notes

The scrolling window below shows release notes for ASAPP's Android SDK. This content may also be viewed as a stand-alone webpage here: [https://docs-sdk.asapp.com/api/chatsdk/android/releasenotes](https://docs-sdk.asapp.com/api/chatsdk/android/releasenotes)

# Customization

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/customization

## Styling

The SDK uses color attributes that you define in the ASAPP theme, as well as extra style configuration options that you set via the style configuration class.

### Themes

To customize the SDK theme, extend the default ASAPP theme in your `styles.xml` file:

```xml theme={null}
```

You must define your color variants for day and night in the appropriate resource files, unless night mode is disabled in your application.

ASAPP recommends starting by only customizing `asapp_primary` to be your brand's primary color, and adjusting other colors when necessary for accessibility. The system uses `asapp_primary` as the message bubble background and in most buttons and other controls. The screenshot below shows the default theme (gray primary, center) and custom primary colors on the left and right.

There are two other colors you may consider customizing for accessibility or to achieve an exact match with your app's theme: `asapp_on_background` and `asapp_on_primary`. Elements that appear in front of the background use `asapp_on_background`. Text and other elements that appear in front of the primary color use `asapp_on_primary`.
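The day and night color variants described above use standard Android resource qualifiers. A minimal sketch, assuming `asapp_primary` is overridden as a color resource in your app; the hex values are placeholder brand colors:

```xml theme={null}
<!-- res/values/colors.xml — day variant (placeholder brand color) -->
<resources>
    <color name="asapp_primary">#0057B8</color>
</resources>
```

```xml theme={null}
<!-- res/values-night/colors.xml — night variant of the same color -->
<resources>
    <color name="asapp_primary">#4D8FD6</color>
</resources>
```

With both variants defined, the system resolves the correct color automatically when the night mode setting changes.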
### More Colors Besides the colors used for [themes](#themes "Themes"), you can override specific colors in a number of categories: the toolbar, chat content, messages, and other elements. You can override all properties mentioned below in the `ASAPPTheme.Chat` style. The status bar color is `asapp_status_bar` and toolbar colors are `asapp_toolbar` (background), `asapp_nav_button`, `asapp_nav_icon`, and `asapp_nav_text` (foreground). **General chat content colors** * `asapp_background` * `asapp_separator_color` * `asapp_control_tint` * `asapp_control_secondary` * `asapp_control_background` * `asapp_success` * `asapp_warning` * `asapp_failure` **Message colors** * `asapp_messages_list_background` * `asapp_chat_bubble_sent_text` * `asapp_chat_bubble_sent_bg` * `asapp_chat_bubble_reply_text` * `asapp_chat_bubble_reply_bg` ### Text and Buttons To customize fonts and colors for both text and buttons, use the `ASAPPCustomTextStyleHandler`. To set this optional handler use `ASAPPStyleConfig.setTextStyleHandler`. Use the given `ASAPPTextStyles` object to: * Set a new font family with `updateFonts`. If you set no new fonts, the system uses the default instead. * Override font sizes, letter spacing, text colors, and text casing styles. You can also customize the font family for each text style individually, if needed. * Override button colors for normal, highlighted and disabled states. 
Example:

```kotlin theme={null}
ASAPP.instance.getStyleConfig()
    .setTextStyleHandler { context, textStyles ->
        val regular = Typeface.createFromAsset(context.assets, "fonts/NH-Regular.ttf")
        val medium = Typeface.createFromAsset(context.assets, "fonts/Lato-Bold.ttf")
        val black = Typeface.createFromAsset(context.assets, "fonts/Lato-Black.ttf")
        textStyles.updateFonts(regular, medium, black)
        textStyles.body.fontSize = 14f
        val textHighlightColor = ContextCompat.getColor(context, R.color.my_text_highlight_color)
        textStyles.primaryButton.textHighlighted = textHighlightColor
    }
```

See `ASAPPTextStyles` for all overridable styles.

The system calls `setTextStyleHandler` when it creates an ASAPP activity. Use the given `Context` object when you access resources to make sure that all customization uses the correct resource qualifiers. For example, if a user is in chat and toggles Night Mode, the SDK automatically triggers an activity restart. Once the system creates the new activity, the SDK calls `setTextStyleHandler` with the new night/day context, which retrieves the correct color variants from your styles.

## Chat Header

The chat header (the toolbar in the chat activity) has no content by default, but you can add text or an icon using `ASAPPStyleConfig`.

### Text Title

To add text to the chat header, pass a String resource to `setChatActivityTitle`. By default, the system aligns the title to the start. For example:

```kotlin theme={null}
ASAPP.instance.getStyleConfig()
    .setChatActivityTitle(R.string.asapp_chat_title)
```

### Drawable Title

To add an icon to the chat header, use `setChatActivityToolbarLogo`. You can also center the header content by calling `setIsToolbarTitleOrIconCentered(true)`. For example:

```kotlin theme={null}
ASAPP.instance.getStyleConfig()
    .setChatActivityToolbarLogo(R.drawable.asapp_chat_icon)
    .setIsToolbarTitleOrIconCentered(true)
```

Icons have priority in the chat header: if you add both text and an icon, the system uses only the icon.
## Dark Mode

Android 10 (API 29) introduced Dark Mode (also known as night mode or dark theme), with a system UI toggle that allows users to switch between light and dark modes. ASAPP recommends reading the [developer documentation](https://developer.android.com/guide/topics/ui/look-and-feel/darktheme) for more information.

The ASAPP SDK theme defines default colors using the system resource "default" and "night" qualifiers, so chat reacts to changes to the system night mode setting. The ASAPP SDK does not automatically convert any color or image assets in Dark Mode; you must define night variants for each custom asset as described in [Themes](#themes "Themes").

### Disable or Force a Dark Mode Setting

To disable Dark Mode, or to force Dark Mode for Android API levels below 29, ASAPP recommends using the [AppCompatDelegate.setDefaultNightMode](https://developer.android.com/reference/androidx/appcompat/app/AppCompatDelegate#setDefaultNightMode\(int\)) AndroidX API. This function changes the night mode setting for the entire application session, which also includes ASAPP SDK activities.

For example, it is possible to use Dark Mode on Android API 21 with the following:

```kotlin theme={null}
AppCompatDelegate.setDefaultNightMode(AppCompatDelegate.MODE_NIGHT_YES)
```

## Atomic Customization

To customize styles at an atomic level, use `setAtomicViewStyleHandler` to update `viewStyles`. Customizing at the atomic level will **override any default style** set on the UI views. This is optional, and in most cases you won't need it; use it with caution, and only when general styling is not sufficient.

```kotlin theme={null}
ASAPP.instance.getStyleConfig()
    .setAtomicViewStyleHandler { context: Context, viewStyles: ASAPPCustomViewStyles ->
        // Update viewStyles as needed
    }
```

### Custom Theming Example

The following code snippet is an example of how to apply an atomic customized theme.
```kotlin theme={null} private fun setCustomStyling() { val orangeColor = "#ea7024" ASAPP.instance.getStyleConfig(reset = true) .setChatActivityToolbarLogo(0) .setChatActivityTitle(R.string.asapp_toolbar_custom_title) .setIsToolbarTitleOrIconCentered(true) .setAtomicViewStyleHandler { context: Context, viewStyles: ASAPPCustomViewStyles -> val iconWidthInDp = 24 val iconWidthInPx = (context.resources.displayMetrics.density * iconWidthInDp).toInt() val customRegularTypeface = Typeface.createFromAsset( context.assets, "fonts/Lato-Regular.ttf" ) val customMediumTypeface = Typeface.createFromAsset( context.assets, "fonts/Lato-Medium.ttf" ) val customBoldTypeface = Typeface.createFromAsset( context.assets, "fonts/Lato-Bold.ttf" ) with(viewStyles.connectionBar.success) { container.backgroundColor = Color.GREEN icon.src = R.drawable.nav_check_24px primaryText.color = Color.RED primaryText.typeface = customRegularTypeface icon.src = R.drawable.nav_check_24px icon.tintColor = ContextCompat.getColor(context, R.color.asapp_error_red) icon.width = iconWidthInPx } with(viewStyles.connectionBar.warn) { container.backgroundColor = Color.YELLOW primaryText.color = Color.WHITE primaryText.typeface = customMediumTypeface } with(viewStyles.connectionBar.error) { container.backgroundColor = Color.RED icon.src = R.drawable.asapp_img_icon_x primaryText.fontSize = 18f primaryText.typeface = customBoldTypeface icon.tintColor = ContextCompat.getColor(context, R.color.asapp_error_red) } with(viewStyles.bottomSheetConfirmationDialog) { confirmButtonBar.button.width = MATCH_PARENT cancelButtonBar.button.width = MATCH_PARENT confirmButtonBar.button.radius = Int.MAX_VALUE cancelButtonBar.button.radius = Int.MAX_VALUE confirmButtonBar.button.typeface = Typeface.DEFAULT_BOLD confirmButtonBar.button.textNormal = Color.WHITE confirmButtonBar.button.backgroundNormal = Color.DKGRAY cancelButtonBar.button.typeface = Typeface.DEFAULT_BOLD cancelButtonBar.button.textNormal = Color.DKGRAY 
cancelButtonBar.button.backgroundNormal = Color.WHITE cancelButtonBar.button.borderNormal = Color.DKGRAY } with(viewStyles.quickRepliesViewGroup) { container.maxHeight = 600 } viewStyles.titleBar = ASAPPCustomViewStyles.TitleBar.newInstance().apply { primaryText.color = Color.DKGRAY primaryText.fontSize = 14.0f primaryText.typeface = Typeface.DEFAULT icon.width = WRAP_CONTENT actionBackButton.color = Color.parseColor(orangeColor) actionMoreButton.color = Color.parseColor(orangeColor) } with(viewStyles.ewtBar) { progressBar.visibility = View.VISIBLE progressBar.progressColor = Color.parseColor("#eeeeee") progressBar.backgroundColor = Color.parseColor(orangeColor) btnLeave.textNormal = Color.parseColor("#006fd6") txtEwtTitle.color = Color.DKGRAY txtEwtTitle.fontSize = 16.0f txtEwtValue.color = Color.DKGRAY txtEwtValue.fontSize = 22.0f } with(viewStyles.chatComposerBar) { btnSend.color = Color.parseColor(orangeColor) } } } ``` ### Customization Details Support for the following customizations are available: #### Send Button Color ```kotlin theme={null} with(viewStyles.chatComposerBar) { btnSend.color = Color.parseColor(orangeColor) } ``` #### TitleBar customizations ```kotlin theme={null} viewStyles.titleBar = ASAPPCustomViewStyles.TitleBar.newInstance().apply { primaryText.color = Color.DKGRAY primaryText.fontSize = 14.0f primaryText.typeface = Typeface.DEFAULT icon.width = WRAP_CONTENT actionBackButton.color = Color.parseColor(orangeColor) actionMoreButton.color = Color.parseColor(orangeColor) } ``` #### Quick Reply Max Height ```kotlin theme={null} with(viewStyles.quickRepliesViewGroup) { // Using Dp container.maxHeight = ASAPPStyleConfig.dpToPx(context, 600) // Or Using Pixel container.maxHeight = 600 } ``` #### Estimated Wait Time (EWT) Bar Customization ```kotlin theme={null} with(viewStyles.ewtBar) { progressBar.visibility = View.GONE // Or VISIBLE progressBar.progressColor = Color.parseColor("#eeeeee") progressBar.backgroundColor = 
Color.parseColor(orangeColor) btnLeave.textNormal = Color.parseColor("#006fd6") txtEwtTitle.color = Color.DKGRAY txtEwtTitle.fontSize = 16.0f txtEwtValue.color = Color.DKGRAY txtEwtValue.fontSize = 22.0f } ``` #### Connection Status Bar with customized Success/Warning/Error ```kotlin theme={null} with(viewStyles.connectionBar.success) { container.backgroundColor = Color.GREEN icon.src = R.drawable.nav_check_24px primaryText.color = Color.RED primaryText.typeface = customRegularTypeface icon.src = R.drawable.nav_check_24px icon.tintColor = ContextCompat.getColor(context, R.color.asapp_error_red) icon.width = iconWidthInPx } with(viewStyles.connectionBar.warn) { container.backgroundColor = Color.YELLOW primaryText.color = Color.WHITE primaryText.typeface = customMediumTypeface } with(viewStyles.connectionBar.error) { container.backgroundColor = Color.RED icon.src = R.drawable.asapp_img_icon_x primaryText.fontSize = 18f primaryText.typeface = customBoldTypeface icon.tintColor = ContextCompat.getColor(context, R.color.asapp_error_red) } ``` #### Modal Button Styling Customization ```kotlin theme={null} with(viewStyles.bottomSheetConfirmationDialog) { confirmButtonBar.button.width = MATCH_PARENT cancelButtonBar.button.width = MATCH_PARENT confirmButtonBar.button.radius = Int.MAX_VALUE cancelButtonBar.button.radius = Int.MAX_VALUE confirmButtonBar.button.typeface = Typeface.DEFAULT_BOLD confirmButtonBar.button.textNormal = Color.WHITE confirmButtonBar.button.backgroundNormal = Color.DKGRAY cancelButtonBar.button.typeface = Typeface.DEFAULT_BOLD cancelButtonBar.button.textNormal = Color.DKGRAY cancelButtonBar.button.backgroundNormal = Color.WHITE cancelButtonBar.button.borderNormal = Color.DKGRAY } ``` # Deep Links and Web Links Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/deep-links-and-web-links ## Handling Deep Links in Chat Certain chat flows may present buttons that are deep links to another part of your app. 
To react to taps on these buttons, implement the `ASAPPDeepLinkHandler` interface:

```kotlin theme={null}
ASAPP.instance.deepLinkHandler = object : ASAPPDeepLinkHandler {
    override fun handleASAPPDeepLink(deepLink: String, data: JSONObject?, activity: Activity) {
        // Handle deep link.
    }
}
```

ASAPP provides an `Activity` instance for convenience, in case you need to start a new activity. Please ask your Implementation Manager if you have questions regarding deep link names and data.

### Example: Parsing and Opening Deep Links in Your Activity

If your app receives deep links through an Intent, you can extract the parameters and forward them to the ASAPP SDK when you reopen a chat.

```kotlin theme={null}
object AppDeepLinkHelper {
    fun getASAPPDeepLinkDataIfAny(context: Context, intent: Intent): Map<String, String>? {
        val uri = intent.data ?: return null
        if (context.getString(R.string.app_deep_link_host) != uri.host) return null
        val map = mutableMapOf<String, String>()
        uri.queryParameterNames.forEach { key ->
            val value = uri.getQueryParameter(key)
            val name = if (key == "intentKey") "Code" else key
            if (value != null) map[name] = value
        }
        return map
    }
}
```

Then handle it in your activity:

```kotlin theme={null}
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    ...
    handleIntent(intent) // Handle deep link if app is cold-started
}

override fun onNewIntent(intent: Intent) {
    super.onNewIntent(intent)
    handleIntent(intent) // Handle deep link if activity is reused
}

private fun handleIntent(intent: Intent) {
    val deepLinkData = AppDeepLinkHelper.getASAPPDeepLinkDataIfAny(this, intent)
    if (!deepLinkData.isNullOrEmpty()) {
        ASAPP.instance.openChat(this, asappIntent = deepLinkData)
    } else {
        openChatIfNotificationIntent(intent)
    }
}
```

✅ Tip: This approach ensures your app only responds to deep links from the expected host, and safely maps query parameters into a format that `openChat()` can consume.
## Handling Web Links in Chat

Certain chat flows may present buttons that are web links. To react to taps on these buttons, implement the `ASAPPWebLinkHandler` interface:

```kotlin theme={null}
ASAPP.instance.webLinkHandler = object : ASAPPWebLinkHandler {
    override fun handleASAPPWebLink(webLink: String, activity: Activity) {
        // Handle web link.
    }
}
```

If you don't implement the handler, the ASAPP SDK opens the link using the system default via `Intent.ACTION_VIEW`.

## Implementing Deep Links into Chat

### Getting Started

Please see the Android documentation on [Handling Android App Links](https://developer.android.com/training/app-links).

### Connecting the Pieces

Once you set up a custom URL scheme for your app and handle deep links into your application, you can open chat and pass along any data payload that you extract from the link:

```kotlin theme={null}
ASAPP.instance.openChat(context, asappIntent = mapOf("Code" to "EXAMPLE_INTENT"))
```

# Miscellaneous APIs

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/miscellaneous-apis

## Conversation Status

To get the current `ASAPPConversationStatus`, implement the `conversationStatusHandler` callback:

```kotlin theme={null}
ASAPP.instance.conversationStatusHandler = { conversationStatus ->
    // Handle conversationStatus.isLiveChat and conversationStatus.unreadMessages
}
```

* If `isLiveChat` is `true`, the customer is currently connected to a live support agent or in a queue.
* The `unreadMessages` integer indicates the number of new messages received since last entering Chat.

### Trigger the Conversation Status Handler

You can trigger this handler in two ways:

1. Manually trigger it with:

```kotlin theme={null}
ASAPP.instance.fetchConversationStatus()
```

The Chat SDK fetches the status asynchronously and calls back to `conversationStatusHandler` once it becomes available.

2. The system may trigger the handler when you receive a push notification while the application is in the foreground.
If your application handles Firebase push notifications, use:

```kotlin theme={null}
class MyFirebaseMessagingService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        super.onMessageReceived(message)
        val wasFromAsapp = ASAPP.instance.onFirebaseMessageReceived(message)
        // Additional handling...
    }
}
```

The Chat SDK only looks for conversation status data in the payload and doesn't cache or persist analytics. If ASAPP sent the push notification, the SDK returns `true` and triggers the `conversationStatusHandler` callback.

## Debug Logs

By default, the SDK only prints error logs to the console output. To allow the SDK to log warnings and debug information, use `setDebugLoggingEnabled`:

```kotlin theme={null}
ASAPP.instance.setDebugLoggingEnabled(BuildConfig.DEBUG)
```

Disable debug logs for production use.

## Clear the Persisted Session

To clear the ASAPP session persisted on disk:

```kotlin theme={null}
ASAPP.instance.clearSession()
```

Only use this when an identified user signs out. Don't use it for anonymous users, as it will cause chat history loss.

## Setting an Intent

### Open Chat with an Initial Intent

```kotlin theme={null}
ASAPP.instance.openChat(context, asappIntent = mapOf("Code" to "EXAMPLE_INTENT"))
```

To set the intent while chat is open, use `ASAPP.instance.setASAPPIntent()`. Only call this if chat is already open. Use `ASAPP.instance.doesASAPPActivityExist` to verify that the user is in chat.

## Handling Chat Events

Implement the `ASAPPChatEventHandler` interface to react to specific chat events:

```kotlin theme={null}
ASAPP.instance.chatEventHandler = object : ASAPPChatEventHandler {
    override fun handle(name: String, data: Map<String, Any?>?) {
        // Handle chat event
    }
}
```

These events relate to user flows inside chat, not user behavior like button clicks.
### Implement Chat End, New Issue, and Agent Assigned

To track the end of a chat, or run custom code on `new issue` and `agent assigned`, handle the following custom events. In this example, the system shows event messages as Toasts, but you can run any custom code here.

```kotlin theme={null}
chatEventHandler = object : ASAPPChatEventHandler {
    override fun handle(name: String, data: Map<String, Any?>?) {
        if (name == CustomEvent.CHAT_CLOSED.name) {
            Toast.makeText(applicationContext, "Chat is closed", Toast.LENGTH_LONG).show()
        } else if (name == CustomEvent.NEW_ISSUE.name) {
            Toast.makeText(applicationContext, "New Issue event received", Toast.LENGTH_LONG).show()
        } else if (name == CustomEvent.AGENT_ASSIGNED.name) {
            Toast.makeText(applicationContext, "Agent is assigned", Toast.LENGTH_LONG).show()
        }
    }
}
```

### Event Object

Each event passed to `ASAPPChatEventHandler`'s `handle` has a `name` with the name of the event and a `data` map which contains the following custom properties:

`IssueId` (String) The ASAPP identifier for an individual issue. This ID may change as a user completes and starts new queries to the ASAPP system.

`CustomerId` (String) The ASAPP identifier for a customer. This ID is consistent for authenticated users but may be different for anonymous ones. Anonymous users will have a consistent ID for the duration of their session.

`EventTime` (Long) The time the event occurred, in milliseconds since the epoch.

`EventId` (String) The ASAPP identifier for the event.

`ExternalSenderId` (String) The external identifier you provide to ASAPP that represents an agent identifier. This property will be undefined if the user is not connected with an agent.
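For logging or analytics, it can help to map the event properties above into a typed object. A minimal sketch; the `ChatEvent` class and `toChatEvent` helper are illustrative, not part of the SDK:

```kotlin theme={null}
// Illustrative only: a typed wrapper for the event `data` map described above.
data class ChatEvent(
    val issueId: String?,
    val customerId: String?,
    val eventTime: Long?,
    val eventId: String?,
    val externalSenderId: String?
)

fun toChatEvent(data: Map<String, Any?>?): ChatEvent = ChatEvent(
    issueId = data?.get("IssueId") as? String,
    customerId = data?.get("CustomerId") as? String,
    eventTime = data?.get("EventTime") as? Long,
    eventId = data?.get("EventId") as? String,
    externalSenderId = data?.get("ExternalSenderId") as? String
)
```

You could call `toChatEvent(data)` inside your `handle` implementation before forwarding the event to your analytics pipeline.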
# Notifications

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/notifications

ASAPP provides the following notifications:

* [Push Notifications](#push-notifications "Push Notifications")
* [Persistent Notifications](#persistent-notifications "Persistent Notifications")

## Push Notifications

ASAPP's systems may trigger push notifications at certain times, such as when an agent sends a message to an end customer who does not currently have the chat interface open. In such scenarios, ASAPP calls your company's API with data that identifies the recipient's device, which triggers push notifications. ASAPP's servers do not communicate with Firebase directly.

ASAPP provides methods in the SDK to register and deregister the customer's device for push notifications.

For a deeper dive on how ASAPP and your company's API handle push notifications, please see our documentation on [Push Notifications and the Mobile SDKs](../push-notifications-and-the-mobile-sdks "Push Notifications and the Mobile SDKs"). In addition to this section, see Android's documentation about [Firebase Cloud Messaging](https://firebase.google.com/docs/cloud-messaging) and specifically how to set up [Android Cloud Messaging](https://firebase.google.com/docs/cloud-messaging/android/client).

### Enable Push Notifications

1. Identify which token you will use to send push notifications to the current user. This token is usually either the Firebase instance ID or an identifier that your company's API generates for this purpose.
2. Then, register the push notification token using:

```kotlin theme={null}
ASAPP.instance.updatePushNotificationsToken(newToken: String)
```

If you issue a new token to the current user, you also need to update it in the SDK.
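If you use the Firebase token directly, a common place to register and refresh it is Firebase's `onNewToken` callback. A minimal sketch, assuming the Firebase-generated token is the one agreed upon with ASAPP:

```kotlin theme={null}
class MyFirebaseMessagingService : FirebaseMessagingService() {
    override fun onNewToken(token: String) {
        super.onNewToken(token)
        // Forward the new FCM token to the ASAPP SDK so push
        // notifications keep reaching this device.
        ASAPP.instance.updatePushNotificationsToken(token)
    }
}
```

Firebase calls `onNewToken` both on initial token generation and whenever the token rotates, so this covers the "issue a new token" case above as well.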
### Disable Push Notifications

If the user logs out of the application (or in similar scenarios), you can disable push notifications for the current user by calling `ASAPP.instance.disablePushNotifications()`.

Call this function before you change `ASAPP.instance.user` (or clear the session) to prevent the customer from receiving unintended push notifications.

### Handle Push Notifications

You can verify whether ASAPP triggered a push notification and pass its data payload into the SDK. Your application usually won't receive push notifications from ASAPP while the identified user for this device is connected to chat.

For a deeper dive on how Android handles push notifications, please see the Firebase docs on [Receiving Messages in Android](https://firebase.google.com/docs/cloud-messaging/android/receive).

#### Background Push Notifications

If your app receives a push notification while in the background or closed, the system displays the OS notification. Once the user taps the notification, the app starts with `Intent` data from that push notification.

To help differentiate notifications from ASAPP and others your app might receive, ASAPP recommends that the push notification has a `click_action` with the value `asapp.intent.action.OPEN_CHAT`. For more information on how to set a click action, please see the [Firebase documentation](https://firebase.google.com/docs/cloud-messaging/http-server-ref#notification-payload-support).
With a click action set on the push notification, you will need to add a new intent filter to match it:

```xml theme={null}
<!-- Sketch: in your AndroidManifest.xml, inside the activity that should
     handle the notification tap. The action matches the recommended
     click_action value. -->
<intent-filter>
    <action android:name="asapp.intent.action.OPEN_CHAT" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>
```

Once the system creates the activity, check if it's an ASAPP notification, and then open chat with the data:

```kotlin theme={null}
if (ASAPP.instance.shouldOpenChat(intent)) {
    ASAPP.instance.openChat(context = this, androidIntentExtras = intent.extras)
}
```

The helper function [`shouldOpenChat`](https://docs-sdk.asapp.com/api/chatsdk/android/latest/chatsdk/com.asapp.chatsdk/-a-s-a-p-p/should-open-chat.html) simply checks if the intent action matches the recommended one; its use is optional.

#### Foreground Push Notifications

When your app receives a Firebase push notification while in the foreground, Firebase calls `FirebaseMessagingService.onMessageReceived`. Check if that notification is from ASAPP:

```kotlin theme={null}
class MyFirebaseMessagingService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        super.onMessageReceived(message)
        val wasFromAsapp = ASAPP.instance.onFirebaseMessageReceived(message)
        // ...
    }
}
```

For a good user experience, ASAPP recommends providing UI feedback to indicate there are new messages instead of opening chat right away. For example, update the unread message counter for your app's chat badge. You can retrieve that information from `ASAPP.instance.conversationStatusHandler`.

## Persistent Notifications

The ASAPP Android SDK automatically surfaces a persistent notification when a user joins a queue or connects to a live agent (starting on v8.4.0). Tapping the notification triggers an intent that takes the user directly into ASAPP Chat. Once the live chat ends or the user leaves the queue, the SDK dismisses the notification.

Persistent notifications are:

* ongoing, non-dismissible [notifications](https://developer.android.com/reference/android/app/Notification).
* low priority, and do not vibrate or make sounds.
* managed directly by the SDK and do not require integration changes.

ASAPP enables this feature by default. To disable it, please reach out to your ASAPP Implementation Manager.

Persistent notifications are not push notifications; push notifications are created and handled by your application.

### Customize Persistent Notifications

#### Notification Title and Icon

To customize the title of persistent notifications, override the following string resource:

```text theme={null}
Chat for Customer Support
```

And to customize the icon, create a new drawable resource with the following identifier (file name):

```text theme={null}
drawable/asapp_ic_contact_support.xml
```

ASAPP recommends that you do not change the body of persistent notifications.

#### Notification Channel

By default, ASAPP sets the notification to [Notification Channel](https://developer.android.com/reference/android/app/NotificationChannel) `asapp_chat`, but it is possible to customize the channel being used. To customize the channel used by persistent notifications, override the following string resources:

```text theme={null}
asapp_chat
Chat for Customer Support
```

# User Authentication

Source: https://docs.asapp.com/agent-desk/integrations/android-sdk/user-authentication

As described in the Quick Start section, you can connect to chat as an anonymous user by not setting a user, or by initializing an `ASAPPUser` with a null customer identifier. However, many chat use cases require ASAPP to know the identity of the user.

To connect as an identified user, specify a customer identifier string and a request context provider function. This provider is called from a background thread whenever the SDK makes requests that require customer authentication with your company's servers.

The request context provider is a function that returns a map with keys and values agreed upon with ASAPP. Please ask your Implementation Manager if you have questions.
## Example:

```kotlin theme={null}
val requestContextProvider = object : ASAPPRequestContextProvider {
    override fun getASAPPRequestContext(user: ASAPPUser, refreshContext: Boolean): Map<String, Any>? {
        return mapOf(
            "Auth" to mapOf(
                "Token" to "example-token"
            )
        )
    }
}
ASAPP.instance.user = ASAPPUser("testuser@example.com", requestContextProvider)
```

## Handle Login Buttons

If you connect to chat anonymously, the user may be asked to log in when necessary by being shown a message button:

When the user taps the **Sign In** button, the SDK uses the `ASAPPUserLoginHandler` to call back into the application. Due to the asynchronous nature of this flow, your application should use the activity lifecycle to provide a result to the SDK.

How to Implement the Sign In Flow

1. Implement the `ASAPPUserLoginHandler` and start your application's `LoginActivity`, including the given request code.

```kotlin theme={null}
ASAPP.instance.userLoginHandler = object : ASAPPUserLoginHandler {
    override fun loginASAPPUser(requestCode: Int, activity: Activity) {
        val loginIntent = Intent(activity, LoginActivity::class.java)
        activity.startActivityForResult(loginIntent, requestCode)
    }
}
```

2. If a user successfully signs into your application, update the user instance and then finish your `LoginActivity` with `Activity.RESULT_OK`.

```kotlin theme={null}
ASAPP.instance.user = ASAPPUser(userIdentifier, contextProvider)
setResult(Activity.RESULT_OK)
finish()
```

3. If the user cancels the operation, finish your `LoginActivity` with `Activity.RESULT_CANCELED`.

```kotlin theme={null}
setResult(Activity.RESULT_CANCELED)
finish()
```

After your `LoginActivity` finishes, the SDK captures the result and resumes the chat conversation.
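The context provider in the example above returns a static token. In practice, you typically keep the current token somewhere and rebuild the map on each call, obtaining fresh credentials when the SDK asks for a refreshed context. A minimal sketch of that map-building logic (the `TokenStore` class and its API are hypothetical, for illustration only):

```kotlin theme={null}
// Hypothetical in-memory token store; a real app would obtain
// fresh credentials from its own auth backend.
class TokenStore(private var token: String) {
    fun current(): String = token
    fun refresh(): String {
        // Placeholder for a blocking call that fetches fresh credentials.
        token = "refreshed-$token"
        return token
    }
}

// Builds the request context map in the same shape as the example above.
// When refreshContext is true, fresh credentials are obtained first.
fun buildRequestContext(store: TokenStore, refreshContext: Boolean): Map<String, Any> {
    val token = if (refreshContext) store.refresh() else store.current()
    return mapOf("Auth" to mapOf("Token" to token))
}
```

A provider's `getASAPPRequestContext` implementation could then simply return `buildRequestContext(store, refreshContext)`.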
## Token Expiration and Refreshing the Context

If the provided token has expired, the SDK calls the [ASAPPRequestContextProvider](https://docs-sdk.asapp.com/api/chatsdk/android/latest/chatsdk/com.asapp.chatsdk/-a-s-a-p-p-request-context-provider) with the `refreshContext` parameter set to `true`, indicating that the context must be refreshed. In that case, make sure to return a map with fresh credentials that can be used to authenticate the user. If an API call is required to refresh the credentials, block the calling thread until the updated context can be returned.

# Apple Messages for Business

Source: https://docs.asapp.com/agent-desk/integrations/apple-messages-for-business

Apple Messages for Business is a service that enables your organization to communicate directly with your customers through your Customer Service Platform (CSP), which in this case is ASAPP, using the Apple Messages for Business app.

All third-party specifications are subject to change by Apple. As such, this section may become out-of-date. ASAPP will always attempt to use the most up-to-date third-party documentation. If you come across any errors or out-of-date content, please contact your ASAPP representative.

## Quick Start Guide

1. Register for an Apple Messages for Business account
2. Specify Entry Points
3. Complete User Experience Review
4. Determine Launch & Throttling Strategy

### Register for an Apple Messages for Business Account

Before integrating with ASAPP's Apple Messages for Business adapter, you must register an account with Apple. See the [Apple Messages for Business Getting Started](https://register.apple.com/resources/messages/messaging-documentation/register-your-acct#getting-started) documentation for more details.

### Specify Entry Points

Entry points are where your customers start conversations with your business. You can select from Apple and ASAPP entry points.
#### Apple Entry Points

Apple supports multiple entry points for customers to engage using the Messages app. See the [Apple Messages for Business Entry Points](https://register.apple.com/resources/messages/messaging-documentation/customer-journey#entry-points) documentation for more information.

#### ASAPP Entry Point

ASAPP supports the Chat Instead entry point. See the [Chat Instead](/agent-desk/integrations/chat-instead "Chat Instead") documentation for more information.

### Complete User Experience Review

Apple requires a Brand Experience QA review before you can launch the channel. Please work with your Engagement Manager to prepare and schedule the QA review. See the [Apple User Experience Review](https://register.apple.com/resources/messages/messaging-documentation/pass-apple-qa) documentation for more information.

### Determine Launch & Throttling Strategy

Depending on the entry points configured, your Engagement Manager will share launch best practices and throttling strategies.

## Customer Authentication

Apple Messages for Business supports Customer Authentication, which allows for a better and more personalized customer experience. You can implement authentication using OAuth.

### OAuth

* Requires OAuth 2.0 implemented by the customer
* No support for biometric (fingerprint/Face ID) authentication on the device
* Does not require a native iOS app

User Authentication in Apple Messages for Business can utilize a standards-based approach using an OAuth 2.0 flow with additional key validation and OAuth token encryption steps. This approach requires customers to implement and host a login page that Apple Messages for Business invokes to authenticate the user. Users have to sign in with their credentials every time their OAuth token expires.

Additional steps are required to support authorization for users with devices running versions older than iOS 15. Consult your ASAPP account team for more information.
#### Latest Authentication Flow

#### Old Authentication Flow

ASAPP requires the following customer functionalities to support the older authentication flow:

* An OAuth 2.0 login flow, including a login page that supports form autofill. This page must be Apple-compliant. See the [Authentication Message](https://register.apple.com/resources/messages/msp-rest-api/type-interactive#authentication-message) documentation for more details.
* An API endpoint for ASAPP to obtain an external user identifier. This should be the same identifier that is supplied via the ASAPP web and mobile SDKs as the CustomerId.
* An endpoint through which to obtain an access token by supplying an authcode. This endpoint must support URL-encoded parameters.
* An endpoint that can accept POST requests in the following format:

```text theme={null}
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=xxxx
&client_id=yyyy&client_secret=zzzz

where:
xxxx=authorization_code value
yyyy=client_id value
zzzz=client_secret value
```

The authorization request from the device to the customer's login page will always contain `response_type`, `client_id`, `redirect_uri`, `scope`, and `state`, and will be `application/x-www-form-urlencoded`.

Also note that the older authentication flow is backwards-compatible for iOS versions 16+.

# Chat Instead Overview

Source: https://docs.asapp.com/agent-desk/integrations/chat-instead

Chat Instead is a feature that prompts customers to chat instead of calling. When customer volume shifts from calls to chat, this reduces costs and improves the customer experience.

You can use any ASAPP SDK to display a menu when a customer taps on a phone number to give them the option to Chat Instead or to call.

To enable this feature:

1. Identify Call buttons or phone numbers on your website that you want to convert into entry points for Chat Instead.
2. Use the Chat Instead API, which is part of the ASAPP SDK.
3.
Contact your Implementation Manager to configure Chat Instead.

See the following sections for more information:

* [Android](/agent-desk/integrations/chat-instead/android "Android")
* [iOS](/agent-desk/integrations/chat-instead/ios "iOS")
* [Web](/agent-desk/integrations/chat-instead/web "Web")

# Android

Source: https://docs.asapp.com/agent-desk/integrations/chat-instead/android

## Requirements

Chat Instead requires ASAPP Android Chat SDK 8.0.0 or later, and a valid phone number. Before you proceed, make sure you configure it [correctly](/agent-desk/integrations/android-sdk).

## Phone Formats

Chat Instead accepts a wide variety of formats. See [tools.ietf.org/html/rfc3966](https://tools.ietf.org/html/rfc3966) for the precise definition. For example, Chat Instead accepts "+1 (555) 555-5555" and "555-555-5555".

## Getting Started

There are two ways to add Chat Instead. The easiest way is to add the `ASAPPChatInsteadButton` to the layout and call `ASAPPChatInsteadButton.init`. Alternatively, you can manage the lifecycle yourself.

### 1. Add an ASAPPChatInsteadButton

You can add this button to any layout, like any other [AppCompatButton](https://developer.android.com/reference/androidx/appcompat/widget/AppCompatButton).

```xml theme={null}
```

After that, be sure to call the `ASAPPChatInsteadButton.init` method. Only the phone number is mandatory. Optionally, you can overwrite the `ASAPPChatInsteadButton.onChannel` and the `ASAPPChatInsteadButton.onError` properties of the button.

### 2. Manual Setup of ASAPPChatInstead

1. Initialize Chat Instead

Somewhere after the `ASAPP.init` call, initialize Chat Instead:

```kotlin theme={null}
val chatInstead = ASAPPChatInstead.create(phoneNumber)
```

Depending on cache, this triggers a network call so channels are "immediately" available to the user once the fragment is displayed.
Along with an optional header and a chat icon, you can pass callbacks to be notified when a channel is tapped or when a channel error occurs. ASAPP makes both callbacks after Chat Instead tries to act on the tap.

2. Display Channels

With the instance returned by `ASAPPChatInstead.init`, call `ASAPPChatInstead.show` whenever you want to display the [BottomSheetDialogFragment](https://developer.android.com/reference/com/google/android/material/bottomsheet/BottomSheetDialogFragment?hl=en). Depending on cache, this might show a loading state.

3. Clear Chat Instead (optional)

You can interrupt Chat Instead's initial network call by calling `ASAPPChatInstead.clear`. ASAPP advises adding the call in `onDestroy` for Activities and `onDetachedFromWindow` for Fragments. If you call `ASAPPChatInstead.clear` after you create the [BottomSheetDialogFragment](https://developer.android.com/reference/com/google/android/material/bottomsheet/BottomSheetDialogFragment?hl=en) view, it has no effect.

## Error Handling and Debugging

If you run into problems, look for logs with the "ASAPPChatInstead" tag. Be sure to call `ASAPP.setDebugLoggingEnabled(true)` to enable the logs. Alternatively, you can set callbacks with `ASAPPChatInstead.init`.

### Troubleshoot Chat Instead Errors

#### Crash Caused by UnsupportedOperationException when Displaying the Fragment

This occurs when `asapp_primary` is not defined in the style that the calling Activity uses. Please refer to **Customization > Colors**.

#### "Unknown Channel" in the Log or the onError Callback

Talk to your Implementation Manager at ASAPP. ASAPP's backend sent a channel the SDK doesn't know how to handle. You might need to upgrade the Android SDK version.

#### "Unknown Error" in the Log

Talk to your Implementation Manager at ASAPP. This might be a bug. Please attach logs and reproduction steps.

#### "Activity Context Not Found" in the Log

This means you are not passing the right context to `ASAPPChatInstead.show`.
## Tablet and Landscape Support Chat Instead supports these configurations seamlessly. ## Customization ### Header By default, Chat Instead uses the text in `R.string.asapp_chat_instead_default_header`. You can send a different string when initializing Chat Instead, but the ASAPP Backend overwrites it if the call succeeds. ### Chat Icon You can customize the SDK Chat channel icon. By default, the system tints it with `asapp_primary` and `asapp_on_primary`. If you customize the icon, make sure to test how it looks in Night Mode (a.k.a. Dark Mode). ### Colors Chat Instead uses the ASAPP text styles and colors. For more information on how to customize, go to [Customization](../android-sdk/customization "Customization"). ## Remote settings Chat Instead receives configuration information from ASAPP's Backend (BE), in addition to the channels to display. The configuration enables/disables the feature and selects the device type (mobile, tablet, none). Contact your Implementation Manager at ASAPP if you have any questions. It's important to know how the BE affects customization. If you provide a header, the BE overwrites it. On the other hand, the BE cannot overwrite the phone number. ## Cache Chat Instead will temporarily cache the displayed channels to provide a better user experience. The cache is warmed at instantiation. The information persists through phone restarts. As usual, it does not survive an uninstall or a "clear cache" in App info. # iOS Source: https://docs.asapp.com/agent-desk/integrations/chat-instead/ios ## Pre-requisites * ASAPP iOS SDK 9.4.0 or later, correctly configured and initialized [see more here](/agent-desk/integrations/ios-sdk/ios-quick-start). ## Getting Started Once you've successfully configured and initialized the ASAPP SDK, you can start using Chat Instead for iOS. 1. Create a New Instance. 
```swift theme={null}
let chatInsteadViewController = ASAPPChatInsteadViewController(phoneNumber: phoneNumber, delegate: delegate, title: title, chatIcon: image)
```

| Parameter | Description |
| :--------------------- | :---------- |
| phoneNumber (required) | The phone number to call when the phone channel is selected. Must be a valid phone number. For more information, see Apple's documentation on [phone links](https://developer.apple.com/library/archive/featuredarticles/iPhoneURLScheme_Reference/PhoneLinks/PhoneLinks). |
| delegate (required) | An object that implements `ASAPPChannelDelegate`. |
| title (optional) | A title (also called the "Chat Instead header title") which is displayed at the top of the Chat Instead UI. (See [Customization](#customization "Customization")) |
| image (optional) | A UIImage that will override the default image for the chat channel. (See [Customization](#customization "Customization")) |

2. Implement the two functions that `ASAPPChannelDelegate` requires:

```swift theme={null}
func channel(_ channel: ASAPPChannel, didFailToOpenWithErrorDescription errorDescription: String?)
```

ASAPP calls this if an error occurs while trying to open a channel.

```swift theme={null}
func didSelectASAPPChatChannel()
```

This opens the ASAPP chat. You should use one of these methods:

```swift theme={null}
ASAPP.createChatViewControllerForPresentingFromChatInstead()
```

or

```swift theme={null}
ASAPP.createChatViewControllerForPushingFromChatInstead()
```

to present or push the view controller instance that ASAPP returned. This means that you must present/push the ASAPP chat view controller inside `didSelectASAPPChatChannel()`.
ASAPP highly recommends initializing `ASAPPChatInsteadViewController` as early as possible for the best user experience.

Whenever a channel is selected, ASAPP handles everything by default (except for the chat channel), but you can also handle a channel yourself by implementing `func shouldOpenChannel(_ channel: ASAPPChannel) -> Bool` and returning `false`.

3. Show the `chatInsteadViewController` instance by using:

```swift theme={null}
present(chatInsteadViewController, animated: true)
```

Only presentation works. Pushing the `chatInsteadViewController` instance does not work and causes unexpected behavior.

## Support for iPad

For the best user experience, you should configure popover mode, which is used on iPad. Use the `.popover` presentation style and set both the [sourceView](https://developer.apple.com/documentation/uikit/uipopoverpresentationcontroller/1622313-sourceview) and [sourceRect](https://developer.apple.com/documentation/uikit/uipopoverpresentationcontroller/1622324-sourcerect) properties following Apple's conventions:

```swift theme={null}
chatInsteadViewController.modalPresentationStyle = .popover
chatInsteadViewController.popoverPresentationController?.sourceView = aView
chatInsteadViewController.popoverPresentationController?.sourceRect = aRect
```

This will only have an effect when your app runs on iPad. If you set `modalPresentationStyle` to `.popover` and forget to set `sourceView` and `sourceRect`, the application will crash at runtime, so please be sure to set both if you're using popover mode.

## Customization

You can customize the Chat Instead header title and the chat icon when creating the `ASAPPChatInsteadViewController` instance. (See [Getting Started](#getting-started "iOS").)

`ASAPPChatInsteadViewController` uses [ASAPPColors](https://docs-sdk.asapp.com/api/chatsdk/ios/latest/Classes/ASAPPColors.html) for styling, so it will automatically use the colors set there (e.g.
`primary`, `background`, `onBackground`, etc.), which are the same colors used for customizing the ASAPP chat interface. There is no way to independently change the styling of the Chat Instead UI.

ASAPP supports [Dark Mode](../ios-sdk/customization#dark-mode-15935 "Dark Mode") by default, as long as you enable it.

## Remote settings

When you create an instance of `ASAPPChatInsteadViewController`, it automatically fetches remote settings to indicate which channels to display. You can configure these settings. These remote settings override local ones (i.e. the ones you pass in when creating the `ASAPPChatInsteadViewController` instance). If an error occurs while fetching the settings and no local values were set, the system uses the defaults.

## Cache

When fetching succeeds, the SDK caches the remote settings for a short period of time. The SDK references this cache in lieu of repeated fetches. The cache remains valid across multiple app sessions.

# Web

Source: https://docs.asapp.com/agent-desk/integrations/chat-instead/web

Chat Instead is a feature that prompts customers to use chat instead of calling. When customer volume shifts from phone to chat, this reduces costs and improves the customer experience. When customers tap a phone number, phone button, or any other entry point that the customer company chooses, ASAPP triggers an intercept that gives the customer the option to chat or call.

To enable this feature:

1. Identify Entry Points.

Contact your Implementation Manager to determine the best entry point to Chat Instead on your website. On mobile, the best entry point is a "Call" button or a clickable phone number. On desktop, the best entry point is a "Call" button.

ASAPP recommends that you modify your website to display a "Call Us" button (or other similar language) rather than displaying the phone number, and that the "Call Us" button invoke Chat Instead. ASAPP also recommends that you make all entry points telephone links (`href="tel:"`).
The customer company must specify the formatting to display for the phone number that they pass to the [showChatInstead](../web-sdk/web-javascript-api#-showchatinstead- "'showChatInstead'") API.

**Example Use Case:**

```text theme={null}
(800) 123-4567
```

2. Integrate with the [showChatInstead](../web-sdk/web-javascript-api#-showchatinstead- "'showChatInstead'") API.

3. Chat Instead receives configuration information from ASAPP's Backend (BE), in addition to the channels to display and the order to display them in. Contact your Implementation Manager to turn on Chat Instead and configure these options.

If you would like to use Apple Business Chat or another messaging application as an option within Chat Instead, please inform your Implementation Manager. This feature is currently available in the mobile Web SDK and desktop Web SDK.

# Customer Authentication

Source: https://docs.asapp.com/agent-desk/integrations/customer-authentication

Customer Authentication enables consistent and personalized conversations across channels and over time. The authentication requirements consist of two main elements:

1. A customer identifier
2. An access token

The source, format, and use of these items depend on the customer's infrastructure and services. However, where applicable and feasible, ASAPP imposes a few direct requirements for the integration of these components. The sections below outline these requirements and considerations.

Integrations leveraging customer authentication enable two main features of ASAPP:

1. Combine the conversation history of a customer into a single view to enable the true asynchronous behavior of ASAPP. This allows a customer to come back over time as well as change communication channels but maintain a consistent state and experience.
2. Validate a customer and make API calls for a customer's data to display to a representative or directly to the customer.
The following sequence diagram depicts an example of a customer authentication integration utilizing OAuth customer credentials and a JSON Web Token (JWT) for API calls.

## Identification

### What is a Customer Identifier?

A customer identifier is the first and most important piece of the Customer Authentication strategy. The identifier is the key element to determine:

* when to transition from unauthenticated to authenticated
* when to show previous conversation history within chat

When a customer returns with the same identifier, the customer sees all previous history within web and mobile chat. These identifiers are typically string values of hashed or encrypted account numbers or other internal values. However, it is important not to send identifiable or reusable information as the customer identifier, such as actual unprotected account numbers or PII.

### Customer Identifier Format

The customer may determine the format of the customer identifier. The ASAPP requirements for the customer identifier are:

* Consistent - the same customer must authenticate using the same customer identifier every time.
* Unique - the customer identifier must represent a unique customer; no two customers can have the same identifier.
* Opaque - ASAPP does not store PII data. The customer must obfuscate the customer identifier so it does not contain PII or any other sensitive data. An example obfuscation strategy is to generate a hash or an encrypted value of a unique user identifier (e.g. user ID, account number, or email address).
* Traceable - customer-traceable but not ASAPP-traceable.
  * The customer must be able to trace the customer identifier back to a user. However, it cannot be used by ASAPP, or any other party, to trace back to a specific user.
  * The reporting data generated by ASAPP includes the customer identifier. This reporting data is typically used to generate further analytics by the customer.
You can use the customer identifier to relate ASAPP's reporting data back to the actual user identifier and record on the customer side.

### Passing the Customer Identifier to ASAPP

Once a customer authenticates a user on their website or app, the customer must retrieve and pass the customer identifier to ASAPP (typically via the SDK parameters) as part of the conversation authentication flow. You can find more details for your specific integration in the following sections:

* [Web SDK - Web Authentication](/agent-desk/integrations/web-sdk/web-authentication "Web Authentication")
* [Android SDK - Chat User Authentication in the Android Integration Walkthrough](/agent-desk/integrations/android-sdk/user-authentication)
* [iOS SDK - Basic Usage in the iOS SDK Quick Start](/agent-desk/integrations/ios-sdk/ios-quick-start)

## Tokens

While they are not a hard requirement for Customer Authentication, tokens play an important part in the overall Customer Authentication strategy. Tokens provide a way of securely wrapping all communication between customers, customer companies, and ASAPP. You achieve this by ensuring that every request to a server is accompanied by a signed token, which ASAPP can verify for authenticity. Some of the benefits of using tokens over other methods, such as cookies, are that tokens are completely stateless and typically short-lived. The following sections outline some examples of token input, as well as requirements for their use and validation.

### Identity Tokens

Identity tokens are self-contained, signed, short-lived tokens containing user attributes like name, user identifiers, contact information, claims, and roles. The simplest and most common example of such a token is a JSON Web Token (JWT). JWTs contain a Header, Payload, and Signature. The Header contains metadata about the token, the Payload contains the user info and claims, and the Signature is the algorithmically signed portion of the token based on the payload.
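Because the Header and Payload are just Base64URL-encoded JSON, a JWT's contents can be inspected without any special library. A minimal Kotlin sketch (illustrative only; it does not verify the Signature, which production code must always do):

```kotlin theme={null}
import java.util.Base64

// Decodes the Base64URL-encoded payload of a JWT into a JSON string.
// Illustrative only: this performs NO signature verification.
fun decodeJwtPayload(jwt: String): String {
    val parts = jwt.split(".")
    require(parts.size == 3) { "Not a JWT: expected header.payload.signature" }
    return String(Base64.getUrlDecoder().decode(parts[1]), Charsets.UTF_8)
}
```

Running this on the example JWT shown in this section yields the payload `{"sub":"1234567890","name":"John Doe","iat":1516239022}`.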
You can find more information about JWTs at [https://jwt.io/](https://jwt.io/).

**Example JWT:**

```text theme={null}
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
```

### Bearer Tokens

A Bearer token is a lightweight security token which grants the bearer, or user, access to protected resources. When a user authenticates, the system issues a Bearer token that contains an access token and a refresh token, along with expiration details. Bearer tokens are short-lived, dynamic access tokens that you can update throughout a session using a refresh token.

**Example Bearer Token:**

```json theme={null}
{
  "token_type": "Bearer",
  "access_token": "eyJhbGci....",
  "expires_in": 3600,
  "expires_on": 1479937454,
  "refresh_token": "0/LTo...."
}
```

### Token Duration

Since every token has an expiration time, you need a way for ASAPP to know when a token is valid and when it expires. A customer can do this by:

* allowing decoding of signed tokens.
* providing an API to validate the token.

#### Token Refresh

You need to refresh expired tokens on either the client side, via the ASAPP SDK, or through an API call. You can find SDK token refresh implementation examples at:

* [Web SDK - Web Context Provider](/agent-desk/integrations/web-sdk/web-contextprovider#authentication "Web ContextProvider")
* [Web SDK - Set Customer](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'")
* [Android SDK - Context Provider](/agent-desk/integrations/android-sdk/user-authentication)
* [iOS SDK - Type Aliases](https://docs-sdk.asapp.com/api/chatsdk/ios/latest/Typealiases.html)

#### Token Validation

You need to validate tokens before you can rely on them for API access or user information.
Two examples of token validation are:

* **Compare multiple pieces of information** - ASAPP compares a JWT payload against the SDK input of the same attributes, or against response data from a UserProfile API call.
* **Signature Validation** - ASAPP can also validate signatures and decode data if needed. This requires sharing a trusted public certificate with ASAPP.

## Omni-Channel Strategy

One of the key capabilities of the ASAPP backend is that it supports customer interaction via multiple channels, such as chat on web portals or within mobile apps. This enables a customer to migrate from one channel to another, if they choose, within the same support dialog. For this to function, it is important that the process of Customer Authentication be common to all channels. The ASAPP backend should obtain the same access token to access the customer's API endpoints regardless of the channel that the customer selects. If a customer switches from one channel to another, the access token should remain the same.

## Testing

You need a comprehensive testing strategy to ensure success. This includes the ability to exercise edge cases with various permutations of test account data, as well as to utilize the customer login with direct test account credentials. Operationally, the customer handles customer login credentialing; however, ASAPP requires the ability to simulate the login process in order to execute end-to-end tests. This process is crucial for performing test scenarios that require customer authentication. As a corollary, it is equally important to ensure complete test scenario coverage with different types of test-based customer accounts.

# iOS SDK Overview

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk

Welcome to the ASAPP iOS SDK Overview! This document guides you through the process of integrating ASAPP functionality into your iOS application.
It includes the following sections:

* [Quick Start](/agent-desk/integrations/ios-sdk/ios-quick-start "iOS Quick Start")
* [Customization](/agent-desk/integrations/ios-sdk/customization "Customization")
* [User Authentication](/agent-desk/integrations/ios-sdk/user-authentication "User Authentication")
* [Miscellaneous APIs](/agent-desk/integrations/ios-sdk/miscellaneous-apis "Miscellaneous APIs")
* [Deep Links and Web Links](/agent-desk/integrations/ios-sdk/deep-links-and-web-links "Deep Links and Web Links")
* [Push Notifications](/agent-desk/integrations/ios-sdk/push-notifications "Push Notifications")

In addition, you can view the following documentation:

* [iOS SDK Release Notes](/agent-desk/integrations/ios-sdk/ios-sdk-release-notes "iOS SDK Release Notes")

## Requirements

ASAPP supports iOS 12.0 and up. As a rule, ASAPP supports two major versions behind the latest. Once iOS 15 is released, ASAPP will drop support for iOS 12 and only support iOS 13.0 and up.

The SDK is written in Swift 5 and compiled with support for binary stability, meaning it is compatible with any Swift compiler version greater than or equal to 5.

# Customization

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/customization

## Styling

### Themes

There is one main color that you can set to ensure the ASAPP chat view controller fits with your app's theme: `ASAPP.styles.colors.primary`. ASAPP recommends starting out only setting `.primary` to be your brand's primary color, and adjusting other colors when necessary for accessibility. The system uses `.primary` as the message bubble background and in most buttons and other controls.

There are two other colors you may consider customizing for accessibility or to achieve an exact match with your app's theme: `ASAPP.styles.colors.onBackground` and `.onPrimary`. Most other elements that might appear in front of the background use `.onBackground`.
Text and other elements that appear in front of the primary color use `.onPrimary`.

### Fonts

The ASAPP SDK uses the iOS system font family, SF Pro, by default. To use another font family, pass an `ASAPPFontFamily` to `ASAPP.styles.textStyles.updateStyles(for:)`. There are two `ASAPPFontFamily` initializers: one that takes font file names and another that takes `UIFont` references.

```swift theme={null}
let avenirNext = ASAPPFontFamily(
    lightFontName: "AvenirNext-Regular",
    regularFontName: "AvenirNext-Medium",
    mediumFontName: "AvenirNext-DemiBold",
    boldFontName: "AvenirNext-Bold")!

ASAPP.styles.textStyles.updateStyles(for: avenirNext)
```

## Overrides

The ASAPP SDK API allows you to override many aspects of the design of the chat interface, including [colors](#colors "Colors"), [button styles](#buttons "Buttons"), [navigation bar styles](#navigation-bar-styles "Navigation Bar Styles"), and various [text styles](#text-styles "Text Styles").

### Colors

Besides the colors used for themes, you can override specific colors in a number of categories:

* Navigation bar
* General chat content
* Buttons
* Messages
* Quick replies
* Input field

All property names mentioned below are under `ASAPP.styles.colors`.

Navigation bar colors are `.navBarBackground`, `.navBarTitle`, `.navBarButton`, and `.navBarButtonActive`.

General chat content colors are `.background`, `.separatorPrimary`, `.separatorSecondary`, `.controlTint`, `.controlSecondary`, `.controlBackground`, `.success`, `.warning`, and `.failure`.

Buttons use sets of colors defined with an `ASAPPButtonColors` initializer. You can override `.textButtonPrimary`, `.buttonPrimary`, and `.buttonSecondary`.

Message colors are `.messagesListBackground`, `.messageText`, `.messageBackground`, `.messageBorder`, `.replyMessageText`, `.replyMessageBackground`, and `.replyMessageBorder`.
Quick replies and action buttons also use `ASAPPButtonColors`. You can override `.quickReplyButton` and `.actionButton`.

The chat input field uses `ASAPPInputColors`. You can override `.chatInput`.

### Text Styles

ASAPP strongly recommends that you use one font family as described in the [Fonts](#fonts) section. However, if you need to, you may override: `ASAPP.styles.textStyles.navButton`, `.button`, `.actionButton`, `.link`, `.header1`, `.header2`, `.header3`, `.subheader`, `.body`, `.bodyBold`, `.body2`, `.bodyBold2`, `.detail1`, `.detail2`, and `.error`. To update all but the first four with a color, call `ASAPP.styles.textStyles.updateColors(with:)`.

### Navigation Bar Styles

You can override the default `ASAPP.styles.navBarStyles.titlePadding` using `UIEdgeInsets`.

### Buttons

The shape of primary buttons in message attachments, forms, and other dynamic layouts depends on the value of `ASAPP.styles.primaryButtonRoundingStyle`. The default value is `.radius(0)`. You can set a custom radius with `.radius(_:)` or make buttons fully rounded with `.pill`.

## Images

### Navigation Bar

There are three overridable images in the chat view controller's navigation bar: the icons for the **close ✕**, **back ⟨**, and **more ⋮** buttons. Each is tinted appropriately, so the image need only be a template in black with an alpha channel. ASAPP displays only one of the **close** and **back** buttons at a time; the former is used when the chat view controller is presented modally, and the latter when pushed onto a navigation stack.
```swift theme={null}
ASAPP.styles.navBarStyles.buttonImages.close
ASAPP.styles.navBarStyles.buttonImages.back
ASAPP.styles.navBarStyles.buttonImages.more
```

Use the `ASAPPCustomImage(image:size:insets:)` initializer to override each:

```swift theme={null}
ASAPP.styles.navBarStyles.buttonImages.more = ASAPPCustomImage(
    image: UIImage(named: "Your More Icon Name")!,
    size: CGSize(width: 20, height: 20),
    insets: UIEdgeInsets(top: 14, left: 0, bottom: 14, right: 0))
```

### Title View

To use a custom title view, assign `ASAPP.views.chatTitle`. If you set a custom title view, it overrides any string you set as `ASAPP.strings.chatTitle`. The title view is rendered in the center of the navigation bar.

## Quick Reply View Height

To set the quick reply view height, assign a value to the `maxQuickReplyViewHeight` property of the `ASAPPCustomViewStyles` class. The SDK calculates a safe height from the value you provide and the quick reply contents, so an arbitrarily large value cannot push the quick reply view up to the middle of the screen and hide the conversation list. Try values such as 180, 240, or 300 to see which works best for your UI. You can also set a preferred tint color on `titleBar.actionBackButton`:

```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.maxQuickReplyViewHeight = 300
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.orange
)
ASAPP.views.customViewStyle = customViewStyle
```

## Banner Theme

To set a customized theme for the success, warning, and failure banners:
```swift theme={null}
func setBannerTheme() {
    let customViewStyle = ASAPPCustomViewStyles()
    let successImageObj = UIImage(named: "PlusIcon")
    let warningImageObj = UIImage(named: "PlusIcon")
    let successFontInfo = UIFont(name: "BrutalType-Light", size: 14)!
    let failureFontInfo = UIFont(name: "BrutalType-Black", size: 14)!

    // Success
    customViewStyle.connectionBar.success = ConnectionBar(
        container: OptionalViewTypeConfig(
            backgroundColor: FeedbackType.success.getBackgroundColor()
        ),
        primaryText: OptionalTextTypeConfig(
            typeface: successFontInfo,
            letterSpacing: 0,
            color: .white
        ),
        icon: OptionalImageViewTypeConfig(width: 22, height: 22, src: successImageObj)
    )

    // Warning
    customViewStyle.connectionBar.warn = ConnectionBar(
        container: OptionalViewTypeConfig(
            backgroundColor: FeedbackType.warning.getBackgroundColor()
        ),
        primaryText: OptionalTextTypeConfig(
            typeface: UIFont.systemFont(ofSize: 14),
            letterSpacing: 0,
            color: .black
        ),
        icon: OptionalImageViewTypeConfig(width: 22, height: 22, src: warningImageObj)
    )

    // Failure
    customViewStyle.connectionBar.error = ConnectionBar(
        container: OptionalViewTypeConfig(
            backgroundColor: FeedbackType.error.getBackgroundColor()
        ),
        primaryText: OptionalTextTypeConfig(
            typeface: failureFontInfo,
            letterSpacing: 0,
            color: .white
        ),
        icon: OptionalImageViewTypeConfig(width: 22, height: 22, src: warningImageObj)
    )

    customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
        tintColorNormal: UIColor.orange
    )
    ASAPP.views.customViewStyle = customViewStyle
}
```

## Modal Button Styling

To style the bottom sheet alert's title and body text, and the confirmation and cancel buttons:

```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.blue
)
customViewStyle.bottomSheetConfirmationDialog.title = OptionalTextTypeConfig(typeface: UIFont.systemFont(ofSize: 16, weight: .bold), color: UIColor.black)
customViewStyle.bottomSheetConfirmationDialog.bodyText = OptionalTextTypeConfig(typeface: UIFont.systemFont(ofSize: 14, weight: .regular), color: UIColor.red)
customViewStyle.bottomSheetConfirmationDialog.confirmButtonBar.button = OptionalButtonTypeConfig(width: ButtonWidthType.matchParent.rawValue, margin: 40)
customViewStyle.bottomSheetConfirmationDialog.cancelButtonBar.button = OptionalButtonTypeConfig(width: ButtonWidthType.matchParent.rawValue, margin: 40)
ASAPP.views.customViewStyle = customViewStyle
```

## Leave Button Text and Progress Bar

To style the leave button, the queue position text, and the progress bar:

```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.orange
)
customViewStyle.ewtBar.progressBar = OptionalProgressBarTypeConfig(isVisible: true, backgroundColor: UIColor.red, progressColor: UIColor.lightGray)
customViewStyle.ewtBar.btnLeave = OptionalButtonTypeConfig(typeface: UIFont.systemFont(ofSize: 18.0, weight: .regular), textColorNormal: UIColor.red)
customViewStyle.ewtBar.txtEwtTitle = OptionalTextTypeConfig(typeface: UIFont.systemFont(ofSize: 16), color: UIColor.darkGray)
customViewStyle.ewtBar.txtEwtValue = OptionalTextTypeConfig(typeface: UIFont.systemFont(ofSize: 22.0), color: UIColor.darkGray)
ASAPP.views.customViewStyle = customViewStyle
```

## New Question Text Style

To customize the New Question text's font, text color, and highlighted color:
```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.orange
)
customViewStyle.restartButtonBar.primaryText = OptionalButtonTypeConfig(typeface: UIFont.systemFont(ofSize: 18), textColorNormal: UIColor.darkGray, textColorHighlighted: UIColor.green)
customViewStyle.restartButtonBar.icon = OptionalImageViewTypeConfig(src: UIImage(named: "PlusIcon"), tintColor: UIColor.orange)
ASAPP.views.customViewStyle = customViewStyle
```

## Send Button Theme

To customize the chat composer's Send button:

```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.purple
)
customViewStyle.chatComposerBar.btnSend = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.purple
)
ASAPP.views.customViewStyle = customViewStyle
```

## Title Bar Theme

To customize the back button, title bar icon and text, and More button:

```swift theme={null}
let customViewStyle = ASAPPCustomViewStyles()
customViewStyle.titleBar.actionBackButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.systemPink
)
customViewStyle.titleBar.actionMoreButton = OptionalButtonTypeConfig(
    tintColorNormal: UIColor.systemPink
)
let warningImageObj = UIImage(named: "PlusIcon")
customViewStyle.titleBar.primaryText = OptionalTextTypeConfig(typeface: UIFont.systemFont(ofSize: 18, weight: .bold), color: UIColor.orange)
customViewStyle.titleBar.icon = OptionalImageViewTypeConfig(width: 16, height: 16, src: warningImageObj, tintColor: UIColor.systemPink)
ASAPP.views.customViewStyle = customViewStyle

let strings = ASAPPStrings()
strings.chatTitle = "Chat Test"
ASAPP.strings = strings
```

## Dark Mode

Apple introduced Dark Mode in iOS 13. Please see Apple's [Supporting Dark Mode in Your Interface](https://developer.apple.com/documentation/xcode/supporting_dark_mode_in_your_interface) documentation for an overview.
The ASAPP SDK does not automatically convert any colors for use in Dark Mode; you must define dark variants for each custom color at the app level, which the SDK will use automatically when the interface style changes. ASAPP recommends that you add a Dark Appearance to colors you define in color sets in an asset catalog. Please see [Apple's documentation](https://developer.apple.com/documentation/xcode/supporting_dark_mode_in_your_interface#2993897) for more details. Once you have defined a color set, you can refer to it by name with the `UIColor(named:)` initializer, which was introduced in iOS 11. After you have defined a dark variant for at least the primary color, be sure to set it and enable the Dark Mode flag:

```swift theme={null}
ASAPP.styles.colors.primary = UIColor(named: "Your Primary Color Name")!
ASAPP.styles.isDarkModeAllowed = true
```

ASAPP highly recommends adding a Dark Appearance for any color you set. Don't forget to define a Dark Appearance for your custom logo if you have set `ASAPP.views.chatTitle`. If your app does not support Dark Mode, ASAPP recommends that you do not change the value of `ASAPP.styles.isDarkModeAllowed`, to ensure a consistent user experience.

## Orientation

The default value of `ASAPP.styles.allowedOrientations` is `.portraitLocked`, meaning the chat view controller always renders in portrait orientation. To allow landscape orientation on an iPad, set it to `.iPadLandscapeAllowed` instead. There is currently no landscape orientation option for iPhone.

## Strings

Please see the class reference for details on each member of `ASAPPStrings`.

# Deep Links and Web Links

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/deep-links-and-web-links

## Handle Deep Links in Chat

Certain chat flows may present buttons that are deep links to another part of your app.
To react to taps on these buttons, implement the `ASAPPDelegate` protocol, including the `chatViewControllerDidTapDeepLink(name:data:)` method. Please ask your Implementation Manager if you have questions regarding deep link names and data.

## Handle Web Links in Chat

Certain chat flows may present buttons that are web links. To react to taps on these buttons, implement the `ASAPPDelegate` protocol, including the `chatViewControllerShouldHandleWebLink(url:)` method. Return `true` if the ASAPP SDK should open the link in an `SFSafariViewController`; return `false` if you'd like to handle it instead.

## Implement Deep Links into Chat

### Getting Started

Please see Apple's documentation on [Allowing Apps and Websites to Link to Your Content](https://developer.apple.com/documentation/xcode/allowing_apps_and_websites_to_link_to_your_content).

### Connect the Pieces

Once you have set up a custom URL scheme for your app, you can detect links pointing to ASAPP chat within `application(_:open:options:)`. Call one of the four provided methods to create an ASAPP chat view controller:

```swift theme={null}
ASAPP.createChatViewControllerForPushing(fromNotificationWith:)
ASAPP.createChatViewControllerForPresenting(fromNotificationWith:)
ASAPP.createChatViewControllerForPushing(withIntent:)
ASAPP.createChatViewControllerForPresenting(withIntent:)
```

# iOS Quick Start

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/ios-quick-start

If you want to start fast, follow these steps:

1. [Gather Required Information](#1-gather-required-information "1. Gather Required Information")
2. [Download the SDK](#2-download-the-sdk "2. Download the SDK")
3. [Install the SDK](#3-install-the-sdk "3. Install the SDK")
4. [Configure the SDK](#4-configure-the-sdk "4. Configure the SDK")
5. [Open Chat](#5-open-chat "5. Open Chat")

## 1. Gather Required Information

Before downloading and installing the SDK, please make sure you have the following information.
Contact your Implementation Manager at ASAPP if you have any questions.

| Information | Description |
| :--- | :--- |
| App ID | Also known as the "Company Marker", assigned by ASAPP. |
| API Host Name | The fully-qualified domain name used by the SDK to communicate with ASAPP's API. Provided by ASAPP and subject to change based on the stage of implementation. |
| Region Code | The ISO 3166-1 alpha-2 code for the region of the implementation, provided by ASAPP. |
| Supported Languages | Your app's supported languages, in order of preference, as an array of language tag strings. Strings can be in the format `{ISO 639-1 Code}-{ISO 3166-1 Code}` or `{ISO 639-1 Code}`, such as "en-us" or "en". Defaults to \["en"]. |
| Client Secret | This can be an empty or random string\* until otherwise notified by ASAPP. |
| User Identifier | A username or similar value used to identify and authenticate the customer, provided by the Customer Company. |
| Authentication Token | A password-equivalent value, which may or may not expire, used to authenticate the customer, provided by the Customer Company. |

\* In the future, the ASAPP-provided client secret will be a string that authorizes the integrated SDK to call the ASAPP API in production. ASAPP recommends fetching this string from a server and storing it securely using Secure Storage; however, as it is one of many layers of security, you can hard-code the client secret.

## 2. Download the SDK

Download the iOS SDK from the [ASAPP iOS SDK releases page on GitHub](https://github.com/asappinc/chat-sdk-ios-release/releases).

## 3. Install the SDK

ASAPP provides the SDK as an `.xcframework` with and without bitcode, in dynamic and static flavors.
If in doubt, ASAPP recommends that you use the dynamic `.xcframework` with bitcode. Add your chosen flavor of the framework to the "Frameworks, Libraries, and Embedded Content" section of your target's "General" settings.

### Include SDK Resources When Using the Static Framework

Add the provided `ASAPPResources.bundle` to your target's "Frameworks, Libraries, and Embedded Content" and then include it in your target's "Copy Bundle Resources" build phase.

The SDK allows customers to take and upload photos, [unless you disable these features through configuration](https://docs-sdk.asapp.com/api/chatsdk/ios/latest/Classes/ASAPP.html#/Permissions). Since iOS 10, Apple requires descriptions for why your app uses the photo library and/or camera, which Apple displays to the customer. If you haven't already, you'll need to add these descriptions to your app's `Info.plist`.

* If you access `Info.plist` via Xcode's plist editor, the description keys are "Privacy - Camera Usage Description" and "Privacy - Photo Library Usage Description".
* If you access `Info.plist` via a text editor, the keys are "NSCameraUsageDescription" and "NSPhotoLibraryUsageDescription".

### Validate iOS SDK Authenticity

ASAPP uses GPG (GNU Privacy Guard) to create digital signatures. To install GPG on macOS:

1. Using [Homebrew](https://brew.sh), install gpg: `brew install gpg`
2. Download the [ASAPP SDK Team public key](https://docs-sdk.asapp.com/api/chatsdk/ios/security/asapp_public.gpg).
3. Add the key to GPG: `gpg --import asapp_public.gpg`

Optionally, you can also validate the public key. Please refer to the [GPG documentation](https://www.gnupg.org/documentation/manuals.html) for more information.

Then, verify the signature using: `gpg --verify <sdk-filename>.sig`

ASAPP provides the signature alongside the SDK in each release.

## 4. Configure the SDK

Use the code below to create a config, initialize the SDK with the config, and set an anonymous user.
Refer to [Required Information](#1-gather-required-information "1. Gather Required Information") above for more details. ASAPP recommends that you initialize the SDK on launch in `application(_:didFinishLaunchingWithOptions:)`. Please see the [User Authentication](/agent-desk/integrations/ios-sdk/user-authentication "User Authentication") section for details about how to authenticate an identified user.

```swift theme={null}
import ASAPPSDK

let config = ASAPPConfig(appId: appId,
                         apiHostName: apiHostName,
                         clientSecret: clientSecret,
                         regionCode: regionCode)
ASAPP.initialize(with: config)
ASAPP.user = ASAPPUser(userIdentifier: nil, requestContextProvider: { _ in
    return [:]
})
```

## 5. Open Chat

Once the SDK has been initialized with a config and a user has been set, you can create a chat view controller and push it onto the navigation stack. ASAPP recommends doing so when a navigation bar button is tapped.

```swift theme={null}
let chatViewController = ASAPP.createChatViewControllerForPushing(fromNotificationWith: nil)!
navigationController?.pushViewController(chatViewController, animated: true)
```

If you prefer to present the chat view controller as a modal, use the `ForPresenting` method instead:

```swift theme={null}
let chatViewController = ASAPP.createChatViewControllerForPresenting(fromNotificationWith: nil)!
present(chatViewController, animated: true, completion: nil)
```

Once the chat interface is open, you should see an initial state similar to the one below:

# iOS SDK Release Notes

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/ios-sdk-release-notes

The scrolling window below shows release notes for ASAPP's iOS SDK.
This content may also be viewed as a stand-alone webpage here: [https://docs-sdk.asapp.com/api/chatsdk/ios/releasenotes](https://docs-sdk.asapp.com/api/chatsdk/ios/releasenotes)

# Miscellaneous APIs

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/miscellaneous-apis

## Conversation Status

Call `ASAPP.getChatStatus(success:failure:)` to get the current conversation status. The first parameter of the success handler provides a count of unread messages, while the second indicates whether the chat is live. If `isLive` is `true`, the customer is currently connected to a live customer support agent, even if the user isn't currently on the chat screen or the application is in the background.

**Example:**

```swift theme={null}
ASAPP.getChatStatus(success: { unread, isLive in
    DispatchQueue.main.async { [weak self] in
        self?.updateBadge(count: unread, isLive: isLive)
    }
}, failure: { error in
    print("Could not get chat status: \(error)")
})
```

## Debug Logs

To allow the SDK to print more debugging information to the console, set `ASAPP.debugLogLevel` to `.debug`. Please see [`ASAPPLogLevel`](https://docs-sdk.asapp.com/api/chatsdk/ios/latest/Enums/ASAPPLogLevel.html) for more options, and make sure to set the level to `.errors` or `.none` in release builds.

**Example:**

```swift theme={null}
#if DEBUG
ASAPP.debugLogLevel = .debug
#else
ASAPP.debugLogLevel = .none
#endif
```

## Clear the Persisted Session

To clear the session persisted on disk, call `ASAPP.clearSavedSession()`. This also disables push notifications to the customer.

## Set an Intent

To open chat with an initial intent, call one of the two functions below, passing in a dictionary specifying the intent in a format provided by ASAPP. Please ask your Implementation Manager for details.
### Create a Chat View Controller with an Initial Intent

```swift theme={null}
let chat = ASAPP.createChatViewControllerForPushing(withIntent: ["Code": "EXAMPLE_INTENT"])
// or
let chat = ASAPP.createChatViewControllerForPresenting(withIntent: ["Code": "EXAMPLE_INTENT"])
```

To set the intent while chat is already open, call `ASAPP.setIntent(_:)`, passing in a dictionary as described above. Only call this if a chat view controller already exists.

## Handle Chat Events

Certain agreed-upon events may occur during chat. To react to these events, implement the `ASAPPDelegate` protocol, including the `chatViewControllerDidReceiveChatEvent(name:data:)` method. Please ask your Implementation Manager if you have questions regarding chat event names and data.

## Send Customer Information

This API is primarily used to send information that triggers a proactive chat prompt when a specific criteria or set of criteria are met. To use it, create a data structure like the one below and call the `ASAPP.updateCustomerDataInfo(customerParams:)` method.

```swift theme={null}
let customerInfo: [String: Any] = [
    "CustomerInfo": [
        "key1": "value1",
        "OrangeKey": "A Key",
        "FirstName": "A name",
        "key4": "value4"
    ]
]
ASAPP.updateCustomerDataInfo(customerParams: customerInfo)
```

## Custom Chat Events

To track the end of a chat, or to add custom handling when a new issue is created or an agent is assigned, implement the following custom events. The event data contains:

* `eventName`: Name of the chat event.
* `issueId`: The ASAPP identifier for an individual issue. This ID may change as a user completes and starts new queries to the ASAPP system.
* `customerId`: The ASAPP identifier for a customer. This ID is consistent for authenticated users but may be different for anonymous ones. Anonymous users have a consistent ID for the duration of their session.
* `eventTime`: The time the event occurred.
* `eventId`: The ASAPP identifier for the event.
* `externalSenderId`: The external identifier you provide to ASAPP that represents an agent identifier. This property is undefined if the user is not connected with an agent.

```swift theme={null}
func chatViewControllerDidReceiveChatCustomEvents(eventData: [String: Any]?) {
    if let eventDetails = eventData {
        let issueId = eventDetails["issueId"] as? Int64
        let customerId = eventDetails["customerId"] as? Int64
        let eventTime = eventDetails["eventTime"] as? Double
        let eventId = eventDetails["eventId"] as? String
        let externalSenderId = eventDetails["externalSenderId"] as? String
        let eventName = eventDetails["eventName"] as? String
        if eventName == "issue:end" {
            // Triggered when a chat conversation is ended
        } else if eventName == "issue:new" {
            // Triggered when a user taps "New Question" or a new issue is created
        } else if eventName == "agent:assigned" {
            // Triggered when a new agent has been assigned
        }
    }
}
```

# Push Notifications

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/push-notifications

## Get Started with Push Notifications

Please see Apple's documentation on the [Apple Push Notification service](https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview#//apple_ref/doc/uid/TP40008194-CH8-SW1) and the [User Notifications](https://developer.apple.com/documentation/usernotifications) framework.

## ASAPP Push Notifications

ASAPP's systems may trigger push notifications at certain times, such as when an agent sends a message to a customer who does not currently have the chat interface open. These push notifications are triggered by ASAPP's servers calling your company's API with data that identifies the recipient's device; ASAPP's servers do not communicate with APNs directly. Therefore, the SDK provides methods to register and deregister the customer's device for ASAPP push notifications.
For a deeper dive on how push notifications are handled between ASAPP and your company's API, please see our documentation on [Push Notifications and the Mobile SDKs](../push-notifications-and-the-mobile-sdks "Push Notifications and the Mobile SDKs").

### Enable Push Notifications

To enable push notifications for the current user using the token provided by APNs in `didRegisterForRemoteNotificationsWithDeviceToken(_:)`, call `ASAPP.enablePushNotifications(with deviceToken: Data)`. To enable push notifications using an arbitrary string that uniquely identifies the device and current user, call `ASAPP.enablePushNotifications(with uuid: String)`.

### Disable Push Notifications

To disable push notifications for the current user on the device, call `ASAPP.disablePushNotifications(failure:)`. The failure handler is called in the event of an error. Make sure you call this function before you change or clear `ASAPP.user`, to prevent the customer from receiving push notifications that are not meant for them.

### Handle Push Notifications

Implement `application(_:didReceiveRemoteNotification:[fetchCompletionHandler:])` and pass the `userInfo` dictionary to `ASAPP.canHandleNotification(with:)` to determine if the push notification was triggered by ASAPP. If the function returns `true`, you can then pass `userInfo` to `ASAPP.createChatViewControllerForPushing(fromNotificationWith:)`. Your application usually won't receive push notifications from ASAPP if the user is currently connected to chat.

### Request Permissions for Push Notifications

When a user joins a queue in the ASAPP mobile app, a prompt screen asks them to enable push notifications and explains the benefits. If the user has already accepted or denied these permissions, they will not see this prompt. After enablement, users receive a push notification every time there is a new message in the app chat. Users only receive push notifications when the app is not active.
You can control this feature remotely. Please contact your Integration Manager for further information. ASAPP highly recommends that you enable this feature.

# User Authentication

Source: https://docs.asapp.com/agent-desk/integrations/ios-sdk/user-authentication

## Set an ASAPPUser with a Request Context Provider

As in the Quick Start section, you can connect to chat as an anonymous user by specifying a nil user identifier when initializing an `ASAPPUser`. However, many use cases require ASAPP to know the identity of the customer. To connect as an identified user, specify a user identifier string and a request context provider function. This provider is called from a background thread when the SDK makes requests that require customer authentication with your company's servers. The request context provider is a function that returns a dictionary with keys and values agreed upon with ASAPP. Please ask your Implementation Manager if you have questions.

**Example:**

```swift theme={null}
let requestContextProvider = { needsRefresh in
    return [
        "Auth": [
            "Token": "exampleValue"
        ]
    ]
}
ASAPP.user = ASAPPUser(userIdentifier: "testuser@example.com",
                       requestContextProvider: requestContextProvider)
```

## Handle Login Buttons

If a customer connects to chat anonymously, they may be asked to log in when necessary by being shown a message button. If the customer then taps the **Sign In** button, the SDK calls a delegate method: `chatViewControllerDidTapUserLoginButton()`. Implement this method and set `ASAPP.user` once the customer has logged in. The SDK will detect the change and then authenticate the user. You may set `ASAPP.user` from any thread. Make sure to set the delegate as well: for example, `ASAPP.delegate = self`. See `ASAPPDelegate` for more details.
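The login-button flow above can be sketched as follows. This is a minimal sketch, not a complete implementation: the protocol's other methods are omitted for brevity, and `presentLogin(completion:)` is a hypothetical hook into your app's own login UI, with the context dictionary keys being example values you would agree upon with ASAPP.

```swift theme={null}
import ASAPPSDK

class ChatCoordinator: ASAPPDelegate {
    // Called by the SDK when the customer taps the Sign In button in chat.
    func chatViewControllerDidTapUserLoginButton() {
        // Present your own login flow; once the customer has logged in,
        // set ASAPP.user and the SDK detects the change and re-authenticates.
        presentLogin { identifier, token in
            ASAPP.user = ASAPPUser(userIdentifier: identifier,
                                   requestContextProvider: { _ in
                return ["Auth": ["Token": token]]
            })
        }
    }

    // Hypothetical hook into your app's login UI.
    private func presentLogin(completion: @escaping (String, String) -> Void) {
        // ...
    }
}
```

Remember to assign the delegate (for example, `ASAPP.delegate = chatCoordinator`) before opening chat, or the login tap will not reach your code.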
## Token Expiration and Context Refresh

If the provided token has expired, the SDK calls the request context provider with an argument of `true`, indicating that you must refresh the context. In that case, make sure to return a dictionary with fresh credentials that the SDK can use to authenticate the user. If refreshing the credentials requires an API call, block the calling thread until you can return the updated context.

# Push Notifications and the Mobile SDKs

Source: https://docs.asapp.com/agent-desk/integrations/push-notifications-and-the-mobile-sdks

## Use Cases

In ASAPP Chat, users can receive Push Notifications (a.k.a. ASAPP background messages) for the following reasons:

* **New live messages**: if a customer is talking to a live agent and leaves the chat interface, the system can deliver new messages via Push Notifications.
* **Proactive messages**: used to notify customers about promotions, reminders, or other relevant information, depending on the requirements of the implementation.

If you are looking for a way to get the most recent Conversation Status, please see the [Android](/agent-desk/integrations/android-sdk/miscellaneous-apis "Miscellaneous APIs") or [iOS](/agent-desk/integrations/ios-sdk/miscellaneous-apis "Miscellaneous APIs") documentation.

## Overall Architecture

### Overview 1 - Device Token Registration

Figure 1: Push Notification Overview 1 - Device Token Registration.

### Overview 2 - Sending Push Notifications

Figure 2: Push Notification Overview 2 - Sending Push Notifications.

## Device Token

After the Customer App (Figure 1) acquires the Device Token, it is responsible for registering it with the ASAPP SDK. ASAPP's servers use this token to send push notification requests to a Customer-provided API endpoint (Customer Backend), which in turn sends requests to Firebase and/or APNs. The ASAPP SDK and servers act as intermediaries with regard to the Device Token.
In general, the Device Token must be a customer-defined and customer-generated string that uniquely identifies the device. The Device Token format and content can be customized to include whatever information the Customer's Backend service needs to send the push notifications. For example, the Device Token can be a base64-encoded JSON Web Token (JWT) that contains the end user information required by the Customer's Backend service. ASAPP does not need to understand the content of the Device Token; however, the ASAPP Push Notification system persists the Device Token. Please consult with us if there is a requirement to include one or more PII data fields in the Device Token. ASAPP's servers do not communicate directly with Firebase or APNs; it is the responsibility of the customer to do so.

## Customer Implementation

This section details the customer work necessary to integrate Push Notifications in two parts: the App and the Backend.

### Customer App

The Customer App manages the Device Token. In order for ASAPP's servers to route notifications properly, the Customer App must register and deregister the token with the ASAPP SDK. The Customer App also detects when push notifications are received and handles them accordingly.

#### Register for Push Notifications

Please refer to Figure 1 for a high-level overview. There are usually two situations where the Customer App needs to register the Device Token:

* **App start**: after you initialize the ASAPP SDK and set up the ASAPP User properly, register the Device Token.
* **Token update**: if the Device Token changes, register the token again.

Please refer to the specific [Android](/agent-desk/integrations/android-sdk/notifications#push-notifications "Push Notifications") and [iOS](/agent-desk/integrations/ios-sdk/push-notifications "Push Notifications") docs for more detailed information.
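On iOS, the two registration situations above can be sketched as follows. This is a minimal sketch that assumes the SDK has already been configured and `ASAPP.user` set as shown in the iOS Quick Start; `ASAPP.enablePushNotifications(with:)` is the registration call described in the iOS Push Notifications docs.

```swift theme={null}
import ASAPPSDK
import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // App start: initialize the ASAPP SDK and set ASAPP.user first,
        // then ask the system for an APNs token.
        application.registerForRemoteNotifications()
        return true
    }

    func application(_ application: UIApplication,
                     didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
        // Token update: this callback also fires when APNs rotates the
        // token, so registering here covers both situations.
        ASAPP.enablePushNotifications(with: deviceToken)
    }
}
```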
#### Deregister to Disable Push Notifications

If the user signs out of the Customer App, it is important to call the SDK API to deregister for push notifications. This must be done before changing the ASAPP user credentials so that the SDK can use those credentials to properly disable Push Notifications for the user who is signing out. If the device token deregistration is not done properly, there is a risk that the device will continue to receive Push Notifications for the user who previously signed out.

Refer to the specific [Android](/agent-desk/integrations/android-sdk/notifications#push-notifications "Push Notifications") and [iOS](/agent-desk/integrations/ios-sdk/push-notifications "Push Notifications") docs for more detailed information.

#### Receive Messages in the Foreground

If the user is currently in chat, the system sends the message directly to chat via WebSocket and sends no push notification. See Scenario 2 in Figure 2.

On **Android**, you usually receive foreground Push Notifications via a Firebase callback. To check whether a Push Notification was generated by ASAPP, call `ASAPP.instance.getConversationStatusFromNotification`, which returns a non-null status object for ASAPP notifications. The Customer App can then display user feedback as desired using the status object.

On **iOS**, if you have set a `UNUserNotificationCenterDelegate`, it calls [userNotificationCenter(\_:willPresent:withCompletionHandler:)](https://developer.apple.com/documentation/usernotifications/unusernotificationcenterdelegate/1649518-usernotificationcenter) when a push notification is received while the app is in the foreground. In your implementation of that delegate method, call `ASAPP.canHandleNotification(with: notification.request.userInfo)` to determine if ASAPP generated the notification.
An alternative method is to implement [application(\_:didReceiveRemoteNotification:fetchCompletionHandler:)](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1623013-application), which is called when a push notification is received regardless of whether the app is in the foreground or the background. In both cases, you can access `userInfo["UnreadMessages"]` to determine the number of unread messages.

#### Receive Push Notifications in the Background

See Scenario 1 in Figure 2. When the App is in the background (or the device is locked), a system push notification displays as usual. When the user opens the push notification:

* On **Android**: the App opens with an Android Intent. The Customer App can verify whether the Intent is from an ASAPP-generated Push Notification by calling the utility method `ASAPP.instance.shouldOpenChat`; if it is, the App should open chat. See more details and code examples in the Android SDK [Handle Push Notifications](/agent-desk/integrations/android-sdk/notifications#handle-push-notifications "Handle Push Notifications") section.
* On **iOS**: if the app is running in the background, it calls [application(\_:didReceiveRemoteNotification:fetchCompletionHandler:)](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1623013-application) as above. If the app is not running, the app will start and call [application(\_:didFinishLaunchingWithOptions:)](https://developer.apple.com/documentation/uikit/uiapplicationdelegate/1622921-application), with the notification's payload accessible at `launchOptions[.remoteNotification]`. Once again, call `ASAPP.canHandleNotification(with:)` to determine if ASAPP generated the notification.

### Customer Backend

The Customer solution commonly already includes middleware that handles Push Notifications. This middleware usually provides the Customer App with the Device Tokens and sends Push Notification requests to Firebase and/or APNs.
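To make the Customer Backend's role concrete, here is a minimal sketch of a handler for a push request coming from an ASAPP adapter. The payload contract is agreed upon per implementation; the field names and the `expectedAuthToken` check below are assumptions, and a real backend would hand the result to its APNs or FCM client.

```typescript
// Sketch of a Customer Backend handler for an ASAPP push request.
// The payload shape mirrors a typical agreed-upon contract; it is
// not a fixed ASAPP schema.
interface AsappPushRequest {
  authToken: string;
  deviceToken: string;
  payload: { aps: { alert: { title: string; body: string } } };
}

function handlePushRequest(req: AsappPushRequest, expectedAuthToken: string) {
  // Reject requests that do not carry the shared secret.
  if (req.authToken !== expectedAuthToken) {
    throw new Error("unauthorized push request");
  }
  // In a real backend, this object would be passed to APNs/FCM.
  return { device: req.deviceToken, alert: req.payload.aps.alert };
}
```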
If the middleware provides an endpoint that can be called to trigger push notifications, ASAPP can integrate with it (provided an authentication strategy is in place). Otherwise, ASAPP requires that the Customer provides or implements an endpoint for this purpose. ASAPP's Push Notification adapters call the provided endpoint with a previously agreed-upon payload format. The following is a payload example:

```json theme={null}
{
  "authToken": "auth-token",
  "deviceToken": "the-device-token",
  "payload": {
    "aps": {
      "alert": {
        "title": "New Message",
        "body": "Hello, how can we help?"
      }
    },
    ...
  },
  ...
}
```

## ASAPP Implementation

### ASAPP Backend

For any new Push Notification Integration, ASAPP creates an "adapter" for ASAPP's Notification Hub service. This adapter translates messages sent by Agents into a request that is compatible with the Customer Backend. This usually means that the Notification Hub adapter makes HTTP calls to the Customer's specified endpoint, with a previously agreed-upon payload format.

### ASAPP SDK

The ASAPP Android and iOS SDKs already supply the interfaces and utilities needed for Customer Apps to register and deregister for Push Notifications.

### Testing Environments and QA

From a Quality Assurance standpoint, ASAPP requires access to lower environments with credentials so that we can properly develop and test new adapters.

# User Management

Source: https://docs.asapp.com/agent-desk/integrations/user-management

This section provides an overview of User Management (Roles and Permissions). These roles dictate whether an ASAPP user can authenticate to *Agent Desk*, *Admin Dashboard*, or both. In addition, roles determine what views and data users see in the Admin Dashboard. You pass User Data to ASAPP via *SSO*, AD/LDAP, or another approved integration.
This section describes the following:

* [Process Overview](#process-overview)
* [Resource Overview](#resource-overview)
* [Definitions](#definitions "Definitions")

## Process Overview

This is a high-level overview of the User Management setup process.

1. ASAPP demos the Desk/Admin Interface.
2. You hold a call with ASAPP to confirm the access and permission requirements, and together you complete a Configuration spreadsheet defining all the Roles & Permissions.
3. ASAPP sends you a copy of the Configuration spreadsheet for review and approval, makes additional changes if needed, and sends it back to you for approval.
4. ASAPP implements and tests the configuration.
5. ASAPP trains you to set up and modify User Management.
6. ASAPP goes live with your new Customer Interaction system.

## Resource Overview

The following table lists and defines all resources:

| Feature | Overview | Resource | Definition |
| :--- | :--- | :--- | :--- |
| Agent Desk | The App where Agents communicate with customers. | Authorization | Allows you to successfully authenticate via Single Sign-On (SSO) into the ASAPP Agent Desk. |
| | | Go to Desk | Allows you to click Go to Desk from the Nav to open Agent Desk in a new tab. Requires Agent Desk access. |
| Default Concurrency | The default value for the maximum number of chats a newly added agent can handle at the same time. | Default Concurrency | Sets the default concurrency of all new users with access to Agent Desk if the ingest method did not set any concurrency. |
| Admin Dashboard | The App where you can monitor agent activity in real time, view agent metrics, and take operational actions (e.g. business hours adjustments). | Authorization | Allows you to successfully authenticate via SSO into the ASAPP Admin Dashboard. |
| Live Insights | Dashboard in Admin that displays how each of your queues is performing in real time. You can drill down into each queue to gain insight into what areas need attention. | Access | Allows you to see Live Insights in the Admin navigation and access it. |
| | | Data Security | Limits the agent-level data that certain users can see in Live Insights. If a user is not allowed to see data for any agents who belong to a given queue, Live Insights will not show that queue to that user. |
| Historical Reporting | Dashboard in Admin where you can find data and insights from customer experience and automation all the way to agent performance and workforce management. | Power Analyst Access | Allows you to see the Historical Reporting page in the Admin Navigation with the Power Analyst access type, which entails the following:<br>• Access to ASAPP Reports<br>• Ability to change widget chart type<br>• Ability to toggle dimensions and filters on/off for any report<br>• Export data per widget and dashboard<br>• Cannot share reports with other users<br>• Cannot create or copy widgets and dashboards |
| | | Creator Access | Allows you to see the Historical Reporting page in the Admin Navigation with the Creator access type, which entails the following:<br>• Power Analyst privileges<br>• Can share reports<br>• Can create net new widgets and dashboards<br>• Can copy widgets and dashboards<br>• Can create custom dimensions/calculated metrics |
| | | Reporting Groups | Out-of-the-box groups are:<br>• Everybody: all users<br>• Power Analyst: users with the Power Analyst role<br>• Creator: users with the Creator role<br><br>If a client has data security enabled for Historical Reporting, policies need to be written to add users to the following three groups:<br>• Core: users who can see the ASAPP Core Reports<br>• Contact Center: users who can see the ASAPP Contact Center Reports<br>• All Reports: users who can see both the ASAPP Contact Center and ASAPP Core Reports<br><br>If you have any Creator users, you may want custom groups created. This can be achieved by writing a policy that creates reporting groups based on a specific user attribute (e.g. reporting groups per queue, where queue is the attribute). |
| | | Data Security | Limits the agent-level data that certain users can see in Historical Reporting. If anyone has these policies, then the Core, Contact Center, and All Reports groups should be enabled. |
| Business Hours | Allows Admin users to set their business hours of operation and holidays on a per-queue basis. | Access | Allows you to see Business Hours in the Admin navigation, access it, and make changes. |
| Triggers | An ASAPP feature that allows you to specify which pages display the ASAPP Chat UI. You can show the ASAPP Chat UI on all pages with the ASAPP Chat SDK embedded and loaded, or on just a subset of those pages. | Access | Allows you to see Triggers in the Admin navigation, access it, and make changes. |
| Knowledge Base | An ASAPP feature that helps Agents access information without needing to navigate any external systems, by surfacing KB content directly within Agent Desk. | Access | Allows you to see Knowledge Base content in the Admin navigation, access it, and make changes. |
| Conversation Manager | Admin feature where you can monitor current conversations individually. The Conversation Manager shows all current, queued, and historical conversations handled by SRS, a bot, or a live agent. | Access | Allows you to see Conversation Manager in the Admin navigation and access it. |
| | | Conversation Download | Allows you to select one or more conversations in Conversation Manager to export to either an HTML or CSV file. |
| | | Whisper | Allows you to send an inline, private message to an agent within a currently live chat, selected from the Conversation Manager. |
| | | SRS Issues | Allows you to see conversations handled only by SRS in the Conversation Manager. |
| | | Data Security | Limits the agent-assisted conversations that certain users can see at the agent level in the Conversation Manager. |
| User Management | Admin feature to edit user roles and permissions. | Access | Allows you to see User Management in the Admin navigation, access it, and make changes to queue membership, status, and concurrency per user. |
| | | Editable Roles | Allows you to change the role(s) of a user in User Management. |
| | | Editable Custom Attributes | Allows you to change the value of a custom user attribute per user in User Management. If off, these custom attributes are read-only in the list of users. |
| | | Data Security | Limits the users that certain users can see or edit in User Management. |

## Definitions

The following table defines the key terms related to ASAPP Roles & Permissions.

| Term | Definition |
| :--- | :--- |
| Resource | The ASAPP functionality that you can permission in a certain way. ASAPP determines Resources when features are built. |
| Action | Describes the possible privileges a user can have on a given resource (e.g. View Only vs. Edit). |
| Permission | Action + Resource, e.g. "can view Live Insights". |
| Target | The user or set of users who are given a permission. |
| User Attribute | A describing attribute for a client user. User Attributes are either sent to ASAPP by the client via an accepted method, or are ASAPP Native. |
| ASAPP Native User Attribute | A user attribute that exists within the ASAPP platform without the client needing to send it. Currently: Role, Group, Status, and Concurrency. |
| Custom User Attribute | An attribute specific to the client's organization that is sent to ASAPP. |
| Clarifier | An additional, optional layer of restriction in a policy. Must be defined by a user attribute that already exists in the system. |
| Policy | An individual rule that assigns a permission to a user or set of users. The structure is generally: Target + Permission (opt. + Clarifier) = Target + Action + Resource (opt. + Clarifier). |
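To make the Policy structure concrete, here is an illustrative sketch of how a Target + Action + Resource policy with an optional Clarifier might be represented and evaluated. The attribute names used (`role`, `queue`) are example user attributes, not a fixed ASAPP schema.

```typescript
// Illustrative model of Target + Action + Resource (+ optional Clarifier).
interface User { attributes: Record<string, string>; }
interface Policy {
  target: Record<string, string>;      // attributes a user must match
  action: string;                      // e.g. "view"
  resource: string;                    // e.g. "Live Insights"
  clarifier?: Record<string, string>;  // optional extra restriction
}

function hasPermission(user: User, policies: Policy[], action: string, resource: string): boolean {
  const matches = (cond: Record<string, string>) =>
    Object.entries(cond).every(([key, value]) => user.attributes[key] === value);
  return policies.some(p =>
    p.action === action &&
    p.resource === resource &&
    matches(p.target) &&
    (p.clarifier === undefined || matches(p.clarifier)));
}
```

For example, a policy with target `{ role: "supervisor" }`, action `view`, resource `Live Insights`, and clarifier `{ queue: "billing" }` grants the "can view Live Insights" permission only to supervisors in the billing queue.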

# Voice

Source: https://docs.asapp.com/agent-desk/integrations/voice

The ASAPP Voice Agent Desk includes web-based agent-assist services, which provide telephone agents with a desktop powered by machine learning and natural language processing. Voice Agent Desk augments the agent's ability to respond to inbound telephone calls from end customers by providing quick access to relevant customer information, along with actionable suggestions that ASAPP infers from analyzing the ongoing conversation. The content, actions, and responses ASAPP provides help agents respond more quickly and effectively to end customers. Voice Agent Desk interfaces with relevant customer applications to enable desired features.

The ASAPP Voice Agent Desk is not in the call path; it is an active listener that uses two different integrations to provide the real-time augmentation:

* [SIPREC](#glossary "Glossary") - you enable SIP RECording on the customer [Session Border Controllers (SBC)](#glossary "Glossary") and route a copy of the media stream, call information, and metadata per session to ASAPP.
* [CTI](#glossary "Glossary") Events - ASAPP subscribes to the telephony events of the voice agents via the CTI server (login, logout, on-hook, off-hook, etc.).

ASAPP associates and aggregates the media sessions and CTI events within the ASAPP solution and uses them to power the agent augmentation features presented to agents in Voice Agent Desk. The ASAPP Voice Agent Desk solution provides agents with real-time features that automate many of their repeatable tasks. Agents log in to Voice Agent Desk via the customer's SSO and can use it for:

* The real-time transcript
* Conversation Summary - where agents add notes and structured data tags that ASAPP suggests, and disposition the call both during the interaction and once it is complete
* Customer information (optional)
* Knowledge Base integration (optional)

## Customer Current State Solution

ASAPP works with you to understand your current telephony infrastructure and ecosystem, including the type of voice work assignment platform(s) and other capabilities available, such as SIPREC.

## Solution Architecture

After ASAPP completes the discovery of your current state, ASAPP completes the architecture definition, including integration points into the existing infrastructure. You can deploy the ASAPP [media gateways and media gateway proxies](#glossary "Glossary") within your existing AWS instance or within ASAPP's, providing additional flexibility and control.

### Network Connectivity

ASAPP will determine the network connectivity between your infrastructure and the ASAPP AWS Virtual Private Cloud (VPC) based on the architecture; in all cases, ASAPP will deploy secure connections between your data centers and the ASAPP VPC.

### Port Details

The ports and protocols in use for the Voice implementation are depicted in the following diagram. These definitions provide visibility to your security teams for the provisioning of firewalls and ACLs.

* SIP/SIPREC - TCP (5060, 5070-5072)
  * SBC to Media Gateway Proxies
  * SBC to Media Gateways
* Audio Streams - UDP
* CTI Event Feed - TCP
* API Endpoints - TCP 443

In customer firewalls, disable the [SIP Application Layer Gateway (ALG)](#glossary "Glossary") and any 'Threat Detection' features, as they typically interfere with the SIP dialogs and the re-INVITE process.

### Data Flow

The Voice Agent Desk Data Flow diagram illustrates the [PCI Zone](#glossary "Glossary") within the ASAPP solution. The customer SBC originates the SIPREC sessions and media streams and sends them to the ASAPP media gateways, which repackage the streams into secure WebSockets and send them to the [Voice Streamer](#glossary "Glossary") within the PCI zone. ASAPP encrypts the data in transit and at rest.
The SBC does not typically encrypt the SIPREC sessions and associated media streams from the SBC to the ASAPP media gateways, but usually encapsulates them within a secure connection. You are responsible for the compliance and security of the network path between the SBC and the media gateways, in accordance with applicable customer policies.

## SIPREC and CTI Correlation and Association

To associate the correct audio stream with the correct agent and agent desktop, ASAPP must correlate the audio session with the CTI events of that particular agent. ASAPP assigns voice agents a unique Agent ID, which is added to the SSO profile as a custom attribute; ASAPP then maps this to the Agent ID within ASAPP. You configure the SBCs to set a unique call identifier, such as [UCID](#glossary "Glossary") (Avaya) or [GUID](#glossary "Glossary")/GUCID (Cisco), on inbound calls, which gives ASAPP the means to correlate the individual SIPREC stream with the CTI events of the correct agent.

The SBCs initiate a SIPREC session INVITE for each new call. With SIPREC, the customer SBC and the ASAPP media gateway negotiate the media attributes via the [SDP](#glossary "Glossary") offer/answer exchange during the establishment of the session. The codecs in use today are:

1. G.711
2. G.729

Traffic and load considerations:

* Total number of voice agents using ASAPP
* Maximum concurrently logged-in agents
* Maximum concurrent calls at each SBC pair
* Maximum calls per second at each SBC pair

### Load Balancing for ASAPP Media Gateway Proxies

To distribute traffic across all of the media gateway proxies, the SBCs load balance the SIPREC dialogs to the ASAPP MG Proxies. To facilitate this, you configure the SBCs with a proxy list that provides business continuity and enables failover to the next available proxy if one of the proxies becomes unavailable.
Session Recording Group example: the customer data center SBCs use different orders for the media gateway proxy list.

Data Center 1:

1. MG Proxy #1
2. MG Proxy #2
3. MG Proxy #3

Data Center 2:

1. MG Proxy #3
2. MG Proxy #2
3. MG Proxy #1

## Media Failover and Survivability

### Session Border Controller (SBC) to Media Gateways (MG) and Proxies

* Signaling and audio are typically unencrypted, carried through a secure connection/private tunnel.
  * You can encrypt the traffic in theory, but the SBC has cost and scale limitations associated with encrypting traffic, and MG costs increase as you will need more instances.
* ASAPP accepts SIPREC dialogs but initially sets SDP media to "inactive," which pauses the audio while the call is in the IVR and in queue.
  * The ASAPP media gateway re-invites the session and renegotiates the media parameters to resume the audio stream when the agent answers the call.
* The SIP RFC handles some level of packet loss and retransmissions, but if the SIP signal is lost, the SIPREC dialog will be torn down and the media will no longer be sent.
* Media is sent via UDP.
  * There are no retransmissions, so packet loss or disconnects result in permanent loss of the audio.
* Proxies are transactionally stateless.
  * No audio is ever sent to or through proxies; all audio goes directly to media gateways.
  * Proxies are no longer in the signal path after the first transaction.
* If a proxy fails or is disconnected, SBCs can "hunt" or fail over to the next proxy in their configuration.
  * No existing calls are impacted.
* If media gateways fail or are disconnected, the next SIP transaction will fail and the existing media stream (if resumed) will send via UDP to nothing (the media is lost).
* Media gateways send regular SIP OPTIONS to the static proxies, indicating whether they are available and their current number of calls.
  * Proxies use this active call load to evenly load balance to the least-used media gateway, and to dynamically pick up when a media gateway is no longer available or new ones come online.
* Any inbound calls coming in over ISDN-PRI/TDM trunk facilities will not have associated SIPREC sessions, as these calls do not traverse the SBC.

### Media Gateways to ASAPP Voice Streamers

* A secure WebSocket is initiated per stream (two per call) to the ASAPP Voice Streamer.
* Media gateways do not store media; all processing is done in memory.
* Some packet loss can be tolerated via TCP retransmissions.
* Buffer-overrun audio data in the media gateway is purged instantly (per stream).
* If a secure WebSocket connection is lost, the media gateway will attempt a limited number of reconnections and then fail.
* If a Voice Streamer fails, a media gateway will reconnect to a new streamer.
* If a media gateway fails, the SIPREC stream is lost and the SBC can no longer send audio for that group of calls.

## Integration

### API Integration

Integration with existing customer systems enables ASAPP to call for information from those systems to present to the agent, such as:

* customer profile information
* billing history/statements
* customer product purchases
* Knowledge Base

Integration also enables ASAPP to push information to those systems, such as disposition notes and account changes/updates. ASAPP will work with you to determine use cases for each integration that will add value to the agent and customer experience.

### Custom Call Data from CTI Information

In many instances, CTI carries specific information about the end customer and the call. This may be in the form of [User-to-User Information (UUI)](#glossary "Glossary"), `Call Variables`, custom `NamedVariables`, or custom `KVList UserData`. ASAPP uses this data to provide more information to agents and admins.
It may contain customer identity information, route codes, queue information, customer authentication status, IVR interactions/outputs, or simply unique identifiers for further data lookup from APIs. ASAPP extracts the custom fields and leverages the data in real time to give agents as much information as possible at the start of the interaction. Each environment is different, and ASAPP needs to understand what data is available from the CTI events to maximize relevant data for the agent and for voice intelligence processing.

Examples:

**Avaya**

```json theme={null}
UserToUserInfo: "10000002321489708161;verify=T;english;2012134581"
```

**Cisco**

```json theme={null}
CallVariable1:10000002321489708161
CallVariable7:en-us
user.AuthDetect:87
```

**Genesys**

```json theme={null}
userAccount:10000002321489708161
userLanguage:en
userFirstName:John
```

**Twilio**

```json theme={null}
```

### SSO Integration

[Single Sign-On (SSO)](#glossary "Glossary") allows users to sign in to ASAPP using their existing corporate credentials. ASAPP supports [Security Assertion Markup Language](#glossary "Glossary") (SAML) 2.0 Identity Provider (IdP) based authentication. ASAPP requires SSO integration to support implementation.

To enable the SSO integration, the customer must populate and pass the Agent Login ID as a custom attribute in the SAML payload. Then, when a user logs in to ASAPP and authenticates via the existing SSO mechanism, the Agent Login ID value is passed to ASAPP via SAML assertion for subsequent CTI event correlation.

The ASAPP Voice Agent Desk supports role-based access. You can define a specific role for each user that determines their permissions within the ASAPP platform. For example, you can define the "app-asappagentprod" role in Active Directory to send to ASAPP via SAML for those specific users who should have access to ASAPP Voice Agent Desk only.
You can define multiple roles for an agent, such as access to Voice Agent Desk, Digital Agent Desk, and Admin Desk. You must define roles for voice agents and supervisors and include them in the SAML payload as a custom attribute. The table below provides examples of SAML user attributes.

| SAML Attribute | ASAPP Usage | Examples |
| :--- | :--- | :--- |
| Agent Login ID | Provides mapping of the customer telephony agent ID to ASAPP's internal user ID. | `user.extensionattribute1` or `cti_agent_id` |
| Givenname | Given name | `user.givenname` |
| Surname | Surname | `user.surname` |
| Mail | Email address | `user.mail` |
| Unique User Identifier | The User ID (authRepId); can be represented as an employee ID or email address. | `user.employeeid` or `user.userprincipalname` |
| PhysicalDeliveryOfficeName | Physical delivery office name | `user.physicaldeliveryofficename` |
| HireDate | Hire date attribute used by reporting. | `HireDate` |
| Title | Can be used for reporting. | `Title` |
| Role | The roles define what agents can see in the UI and have access to when they log in. | `user.role`, e.g. `app-asappadminprod`, `app-asappagentprod` |
| Group | For Voice, this is only for reporting purposes. For digital chat, this can also be used for queue management. | `user.groups` |

## Call Flows

Once an inbound [Automatic Call Distribution (ACD)](#glossary "Glossary") call is connected to an agent, the agent may need to transfer or conference the customer in with another agent or skill group. It is important to identify and document these types of call flows, where the transcript and customer data need to be provided to another agent due to a change in call state. ASAPP will then test these call scenarios as part of the QA and UAT testing process. These scenarios include:

* Cold Transfers
  * The agent transfers the call to a queue (or similar) but does not stay on the call.
* Warm Transfers
  * The agent talks to the receiving agent prior to completing the transfer, in order to prepare the agent with the context of the call/customer issue.
* Conferences
  * The agent conferences in another agent or supervisor and remains on the call.
* Other
  * Customer callback applications or other unique call flows.

## Speech Files for Model Training

To prepare for a production launch, ASAPP will train the speech models on the customer language and vocabulary, which provides better transcription accuracy. ASAPP will use a set of customer call recordings from previous interactions. You will need to provide ASAPP with a minimum of 1,000 hours of agent/customer dual-channel (speech-separated) media files in .wav format, with a sample rate of 8000 Hz and signed 16-bit [Pulse-Code Modulation (PCM)](#glossary "Glossary"), in order for ASAPP to train the speech recognition models.

* ASAPP will set up an SFTP site in our PCI zone to receive voice media files from you. You will provide an SSH public key and ASAPP will configure the SFTP location within S3.
* ASAPP prefers that you redact the PCI data from the provided voice recordings. Regardless, ASAPP will use its media redaction technology to remove sensitive customer data (Credit Card Numbers and Social Security Numbers) from the recordings to the extent possible.
In addition to the default redaction noted above, ASAPP can customize redaction criteria per your requirements and feature considerations.

* The unredacted voice media files will remain within the [PCI Zone](#glossary "Glossary").
* ASAPP will use a combination of automated and manual transcription to refine our speech models. Data that ASAPP shares with vendors goes through the redaction process described above and is transferred via secured mechanisms such as SFTP.

## Non-Production Lower Environments

As part of our implementation strategy, ASAPP will implement two lower environments for testing (UAT and QA) by both ASAPP and customer resources. It is important that the lower environments do not use production data, including audio data, as it may contain PCI information or other customer information that should not be exposed to the lower environments.

You can implement lower environments using either a lab environment or production infrastructure. When using the production infrastructure to support the lower environments, ASAPP separates production traffic from the lower environment traffic. The lower environments will have dedicated inbound numbers and routing that allow them to be isolated, giving ASAPP and the customer teams the ability to fully test using non-production traffic.

As part of the environments' buildout, ASAPP will need a way to initiate and terminate test calls. The ASAPP team will use the same soft client and tools used by agents to log in as a voice agent, answer inbound test calls, and simulate the various call flows used within the customer contact center. ASAPP proposes that customers allocate two [Direct Inward Dialing](#glossary "Glossary") (DID)/[Toll Free Number](#glossary "Glossary") (TFN) numbers, one for each of the two test environments:

* Demo Environment - a lower environment used by both ASAPP and customers.
* Preprod Environment - a lower environment used by ASAPP QA for testing.
At the SBC level, you should configure the Demo and Preprod DID numbers with their own Session Recording Server (SRS), separate from the production SRS configuration. This allows the test environments to always have SIPREC turned on without sending excess/production traffic to ASAPP, and also allows them to operate independently of production. With Oracle/Acme, you can accomplish this with session agents. For Avaya SBCE, you can accomplish this with End Point Flows.

ASAPP will have a separate set of media gateways and media gateway proxies for each environment to ensure traffic and data separation. The lower environments (not PCI compliant) are for testing only and will not receive actual customer audio. The production environment is where ASAPP transcribes and redacts the audio in a PCI zone.

## Appendix A - Avaya Configuration Details

This section provides specific configuration details for the solution that leverages Avaya telephony infrastructure.

**Avaya Communication Manager**

* Set Avaya [Internet Protocol - Private Branch Exchange (IP-PBX)](#glossary "Glossary") SIP trunks to 'shared' to ensure the UCID is not reset by the PBX.
  * Change trunk-group x -> page 3 -> UUI Treatment: shared
* Set the `SendtoASAI` parameter to 'yes'.
  * Change system-parameters features -> page 13 -> Send UCID to ASAI? Y
* Add ASAPP voice agents to a new skill, one that is not used for queuing or routing.
  * Configure AES to monitor the new skill.
  * ASAPP will use the `cstaMonitorDevice` service to monitor the ACD skill.
  * ASAPP may also call `cstaMonitorCallsViaDevice` if more call data is needed.

**Avaya AES [TSAPI](#glossary "Glossary") configuration**

* Networking -> Ports -> TSAPI Ports
  * Enabled
  * TSAPI Service Port (450)
  * Firewalls will also need to allow these ports.
| **Connection Type** | **TCP Min Port** | **TCP Max Port** |
| :------------------ | :--------------- | :--------------- |
| unencrypted/TCP     | 1050             | 1065             |
| encrypted/TLS       | 1066             | 1081             |

* AES link to ASAPP connection provisioning
* Provisioning of the new ASAPP Voice skill for monitoring

## Appendix B - Cisco Configuration Details

This section provides specific configuration details for the solution that leverages Cisco telephony infrastructure.

**Cisco CTI Server configuration**

* ASAPP will connect with `CTI_SERVICE_ALL_EVENTS`.
* You will need the preferred `ClientID` (identifier for ASAPP) and `ClientPassword` (if not null) to send the `OPEN_REQ` message.
* Ports 42027 (side A) and 43027 (side B)
  * An instance number other than 0 will increase these ports.
  * Firewalls will also need to allow these ports.
* `CallVariable1`-10 definitions/usages
* Custom `NamedVariables` and `NamedArrays` definitions/usages
* Events currently used by ASAPP:
  * `OPEN_REQ`
  * `OPEN_CONF`
  * `SYSTEM`
  * `AGENT_STATE`
  * `AGENT_PRE_CALL`
  * `BEGIN_CALL`
  * `CALL_DATA_UPDATE`
  * `CALL_CLEARED`
  * `END_CALL`

## Appendix C - Oracle (Acme) Session Border Controller

To provide the correlation between the SIPREC session and specific CTI events, ASAPP uses the following approach:

* Session Border Controller
  * Configure the SBC to create an Avaya UCID (universal call identifier) in the SIP header.
  * UCID generation is a native feature for Oracle/Acme Packet session border controller platforms.
  * [Oracle SBC UCID Admin](https://docs.oracle.com/en/industries/communications/enterprise-session-border-controller/8.4.0/configuration/universal-call-identifier-spl#GUID-97456BB9-264F-4290-AB92-8C60F64B9734)
* In the Oracle (Acme Packet) SBCs, load balancing across the ASAPP Media Gateway Proxies requires the use of static IP addresses rather than dynamic hostnames.
* SBC Settings for Media Gateway Proxies - Production and Lower Environments: * Transport = TCP * SIP OPTIONS = disabled * Load Balancing strategy = "hunt" * Session-recording-required = disabled * Port = 5070 ## Glossary | **Term** | **Acronym** | **Definition** | | :-------------------------------------------------------- | :---------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Automated Speech Recognition** | ASR | The service that converts speech (audio) to text. | | **Automatic Call Distributor** | ACD | A telephony system that automatically receives incoming calls and distributes them to available agents. Its purpose is to help inbound contact centers sort and manage large volumes of calls to avoid overwhelming the team. | | **Computer Telephony Integration** | CTI | The means of linking a call center's telephone systems to a business application. In this case, ASAPP monitors agents and receives call state event data via CTI. | | **Direct Inward Dialing** | DID | A service that allows a company to provide individual phone numbers for each employee without a separate physical line. | | **Globally Unique IDentifier** | GUID | A numeric label used for information in communications systems. When generated according to the standard methods, GUIDs are, for practical purposes, unique. Also known as Universally Unique IDentifier (UUID) | | **Internet Protocol Private Branch Exchange** | IP-PBX | A system that connects phone extensions to the Public Switched Telephone Network (PSTN) and provides internal business communication. | | **Media Gateway** | MG | Entry point for all calls from Customer. Receives and forwards SIP and audio data. | | **Media Gateway Proxy** | MGP | SIP Proxy, used for SIP signaling to/from customer SBC. 
| | **Payment Card Industry Data Security Standard** | PCI DSS | Payment card industry compliance refers to the technical and operational standards that businesses follow to secure and protect credit card data provided by cardholders and transmitted through card processing transactions. | | **Payment Card Industry Zone** | PCI Zone | PCI Level I Certified environment for cardholder data and other sensitive customer data storage (Transport layer security for encryption in transit, encryption at rest, access tightly restricted and monitored). | | **Pulse-Code Modulation** | PCM | Pulse-code modulation is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in digital telephony. | | **Security Assertion Markup Language** | SAML | An open standard for exchanging authentication and authorization data between an identity provider and a service provider. | | **Session Border Controller** | SBC | SIP-based voice security platform; source of the SIPREC sessions to ASAPP. | | **Session Description Protocol** | SDP | Used between endpoints for negotiation of network metrics, media types, and other associated properties, such as codec and sample size. | | **Session Initiation Protocol Application-Level Gateway** | SIP ALG | A firewall function that enables the firewall to inspect the SIP dialog/s. This function should be disabled to prevent SIP dialog interruption. | | **Session Initiation Protocol Recording** | SIPREC | IETF standard used for establishing recording sessions and reporting of the metadata of the communication sessions. | | **Single Sign On** | SSO | Single sign-on is an authentication scheme that allows a user to log in with a single ID and password to any of several related, yet independent, software systems. | | **Toll-Free Number** | TFN | A service that allows callers to reach businesses without being charged for the call. The called person is charged for the toll-free number. 
| | **Telephony Services API** | TSAPI | Telephony server application programming interface (TSAPI) is a computer telephony integration standard that enables telephony and computer telephony integration (CTI) application programming. | | **Universal Call IDentifier** | UCID | UCID assigns a unique number to a call when it enters the call center network. The single UCID can be passed among platforms, and can be used to compile call-related information across platforms and sites. | | **User to User Information** | UUI | The SIP UUI header allows the IVR to insert information about the call/caller and pass it to downstream elements, in this case, Communication Manager. The UUI information is then available via CTI. | | **Voice Streamer** | VS | Receives SIP and audio data from the MG. Gets the audio transcribed into text through the ASR and sends that downstream. | # Web SDK Overview Source: https://docs.asapp.com/agent-desk/integrations/web-sdk Welcome to the ASAPP Chat SDK Web Overview! This document provides an overview of how to integrate the SDK (authenticate, customize, display) and the various API methods and properties you can use to call the ASAPP Chat SDK. In addition, it provides an overview of the ASAPP ContextProvider, which allows you to pass various user information to the Chat SDK. If you're just getting started with the ASAPP Chat SDK, ASAPP recommends starting with the [Web Quick Start](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") section. There you will learn the basics of embedding the ASAPP Chat SDK and how to best align it with your site. ASAPP functionality can be integrated into your website simply by including a snippet of JavaScript in your site template.
The subsections below provide both an integration overview and detailed documentation, covering everything from getting started with your ASAPP integration to fine-grained customization of the look, feel, and behavior of ASAPP technology to meet your design and functional requirements. The Web SDK Overview includes the following sections: * [Web Quick Start](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") * [Web Authentication](/agent-desk/integrations/web-sdk/web-authentication "Web Authentication") * [Web Customization](/agent-desk/integrations/web-sdk/web-customization "Web Customization") * [Web Features](/agent-desk/integrations/web-sdk/web-features "Web Features") * [Web JavaScript API](/agent-desk/integrations/web-sdk/web-javascript-api "Web JavaScript API") * [Web App Settings](/agent-desk/integrations/web-sdk/web-app-settings "Web App Settings") * [Web ContextProvider](/agent-desk/integrations/web-sdk/web-contextprovider "Web ContextProvider") * [Web Examples](/agent-desk/integrations/web-sdk/web-examples "Web Examples") # Web App Settings Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-app-settings This section details the various properties you can provide to the Chat SDK. These properties control display, feature, and application settings. Before utilizing these settings, make sure you've [integrated the ASAPP SDK](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") script on your page. Once you've integrated the SDK with your site, you can use the [JavaScript API](/agent-desk/integrations/web-sdk/web-javascript-api "Web JavaScript API") to apply these settings.
The properties available to the ASAPP Chat SDK include: * [APIHostName](#apihostname "APIHostName") * [AppId](#appid "AppId") * [ContextProvider](#contextprovider "ContextProvider") * [CustomerId](#customerid "CustomerId") * [Display](#display "Display") * [Chat](#chat "Chat") * [Intent](#intent "Intent") * [Language](#language) * [onLoadComplete](#onloadcomplete "onLoadComplete") * [RegionCode](#regioncode "RegionCode") * [Sound](#sound "Sound") * [UserLoginHandler](#userloginhandler-11877 "UserLoginHandler") Each property has three attributes: * Key - provides the name of the property that you can set. * Available APIs - lists the [JavaScript APIs](/agent-desk/integrations/web-sdk/web-javascript-api "Web JavaScript API") that the property is accepted on. * Value Type - describes the primitive type of value required. ## APIHostName * Key: `APIHostName` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `String` Sets the ASAPP APIHostName for connecting customers with customer support. ## AppId * Key: `AppId` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `String` Your unique Company Identifier. 
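As a minimal sketch, `APIHostName` and `AppId` can be supplied together in a single Load call. The stand-in loader function below is illustrative only: the real queueing snippet comes from your ASAPP integration, and the host name and App ID values shown are placeholders.

```javascript
// Stand-in for the ASAPP loader snippet (illustrative only): the real
// snippet provided with your integration defines the global ASAPP
// function, which queues calls until the SDK script finishes loading.
var ASAPP = ASAPP || function () {
  (ASAPP.q = ASAPP.q || []).push(arguments);
};

// Initialize the Chat SDK with your company's values (placeholders shown).
ASAPP('load', {
  APIHostName: 'example-co-api.asapp.com',
  AppId: 'example-co'
});
```

Any settings described in the sections below are passed as additional keys in the same object.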
## Chat * Key: `Chat` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load) * Value Type: `Object` The `Chat` setting allows you to customize the: * [Styling](/agent-desk/integrations/web-sdk/web-customization#dynamic-styling-configuration) * [Icons](/agent-desk/integrations/web-sdk/web-customization#custom-icons) * [Features](/agent-desk/integrations/web-sdk/web-customization#chat-features) ## ContextProvider * Key: `ContextProvider` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'"), ['setCustomer'](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'") * Value Type: `Function` The ASAPP `ContextProvider` is used for passing various information about your users to the Chat SDK. This information may include authentication, analytics, or session information. Please see the in-depth section on [Using the ContextProvider](/agent-desk/integrations/web-sdk/web-contextprovider "Web ContextProvider") for details about each of the use cases. ## CustomerId * Key: `CustomerId` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'"), ['setCustomer'](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'") * Value Type: `String` The unique identifier for an authenticated customer. This value is typically a customer's login name or account ID. If setting a **`CustomerId`**, you must also provide a [ContextProvider](#contextprovider "ContextProvider") property to pass along their access token and any other required authentication properties. ## Display * Key: `Display` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `Object` The `Display` setting allows you to customize the presentation aspects of the Chat SDK. The setting is an object that contains each of the customizations you wish to provide.
Read on below for the currently supported keys: ```javascript theme={null} ASAPP('load', { "APIHostname": "example-co-api.asapp.com", "AppId": "example-co", "Display": { "Align": "left", "AlwaysShowMinimize": true, "BadgeColor": "rebeccapurple", "BadgeText": "Support", "BadgeType": "tray", "FrameDraggable": true, "FrameStyle": "sidebar", "HideBadgeOnLoad": false, "Identity": "electronics" } }); ``` ### Align * Key: `Align` * Value Type: `String` * Accepted Values: `'left'`, `'right'` (default) Renders the [Chat SDK Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge") and [iframe](/agent-desk/integrations/web-sdk/web-customization#iframe "iframe") on the left or right side of your page. ### AlwaysShowMinimize * Key: `AlwaysShowMinimize` * Value Type: `Boolean` Determines if the iframe minimize icon displays in the Chat SDK's header. The default `false` value displays the button only on tablet and mobile screen sizes. When set to `true`, the button will also be visible on desktop-sized screens. ### BadgeColor * Key: `BadgeColor` * Value Type: `String` * Accepted Values: `Color Keyword`, `RGB hex value` Customizes the background color of the [Chat SDK Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge"). This will be the primary color of Proactive Messages and Channel Picker if the `PrimaryColor` is not provided. ### BadgeText * Key: `BadgeText` * Value Type: `String` Applies a caption to the [Chat SDK Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge"). This setting only works when applying `BadgeType`: `'tray'`. ### BadgeType * Key: `BadgeType` * Value Type: `String` * Accepted Values: `'tray'`, `'badge'` (default), `'none'` Customizes the display of the [Chat SDK Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge"). When you set the type to `'tray'`, you may also enter a `BadgeText` value. When you set this to `'none'`, the badge will not render.
### FrameDraggable * Key: `FrameDraggable` * Value Type: `Boolean` Enabling this setting allows a user to reposition the placement of the [Chat SDK iframe](/agent-desk/integrations/web-sdk/web-customization#iframe "iframe"). When this is set to `true`, a user can hover over the frame's heading region, then click and drag to reposition the frame. The user's frame position will be recalled as they navigate your site or minimize/open the Chat SDK. If the user has repositioned the frame, a button will appear allowing them to reset the Chat SDK to its default position. ### FrameStyle * Key: `FrameStyle` * Value Type: `String` * Accepted Values: `'sidebar'`, `'default'` (default) Customizes the layout of the [Chat SDK iframe](/agent-desk/integrations/web-sdk/web-customization#iframe "iframe"). By default, the frame will appear as a floating window with a responsive height and width. When set to `'sidebar'`, the frame will be docked to the side of the page and take 100% of the browser's viewport height. The `'sidebar'` setting will adjust your page's content as though the user resized their browser viewport. Use the `Align` setting if you wish to change which side of the page the frame appears on. ### HideBadgeOnLoad * Key: `HideBadgeOnLoad` * Value Type: `Boolean` * Accepted Values: `true`, `false` (default) When set to `true`, the [Chat Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge") is not visible on load. You can open the [Chat SDK iframe](/agent-desk/integrations/web-sdk/web-customization#iframe "iframe") via Proactive Message, [Chat Instead](../chat-instead/web "Web"), or the [Show API](/agent-desk/integrations/web-sdk/web-javascript-api#show "'show'"). Once you open the Chat SDK iframe, the Chat Badge will become visible, allowing a user to minimize/reopen it. ### Identity * Key: `Identity` * Value Type: `String` A string that represents the branding you wish to display on the SDK. Your ASAPP Implementation Manager will help you determine this value.
If set to an unsupported value, the Chat SDK will display a generic, non-branded experience. ### PrimaryColor * Key: `PrimaryColor` * Value Type: `String` * Accepted Values: `Color Keyword`, `RGB hex value` Customizes the primary color of Proactive Messages and [Chat Instead](/agent-desk/integrations/chat-instead/web "Web"). This will be the background color of the [Chat SDK Badge](/agent-desk/integrations/web-sdk/web-customization#badge "Badge") if the `BadgeColor` is not provided. ## Intent * Key: `Intent` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `Object` The intent code you wish for a user's conversation to initialize with. The setting takes an object with a required key of `Code`. `Code` accepts a string. Your team and your ASAPP Implementation Manager will determine the available values. ```javascript theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', Intent: { Code: 'PAYBILL' } }); ``` ## Language * Key: `Language` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `String` By default, the SDK will use English (`en`). You can override this by setting the `Language` property. It accepts a value of: * `en` for English * `fr` for French * `es` for Spanish ASAPP does not support switching languages mid-session, after a conversation has started. You must set a language before starting a conversation. ```javascript English theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', Language: 'en' }); ``` ```javascript French theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', Language: 'fr' }); ``` ## onLoadComplete * Key: `onLoadComplete` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `Function` A callback that is triggered once the Chat SDK has finished initializing.
This is useful when attaching events via the [Action API](/agent-desk/integrations/web-sdk/web-javascript-api#action-on-or-off "action: 'on' or 'off'") or whenever you need to perform custom actions on the SDK after it has loaded. The provided method receives a single argument as a boolean value. If the value is `false`, then the page is not configured to display under the [ASAPP Trigger feature](/agent-desk/integrations/web-sdk/web-features#triggers "Triggers"). If the value is `true`, then the Chat SDK has loaded and finished appending to your DOM. ```javascript theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', onLoadComplete: function (isDisplayingChat) { console.log('ASAPP Loaded'); if (isDisplayingChat) { ASAPP('on', 'message:received', handleMessageReceivedEvent); } else { console.log('ASAPP not enabled on this page'); } } }); ``` ## RegionCode * Key: `RegionCode` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `String` Localizes the Chat SDK to a certain region. It accepts a value from the [ISO 3166 alpha-2 country codes](https://www.iso.org/obp/ui/#home) representing the country you wish to localize for. ## Sound * Key: `Sound` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `Boolean` When set to `true`, users will receive an audio notification when they receive a message in the chat log. This defaults to `false`. ## UserLoginHandler * Key: `UserLoginHandler` * Available APIs: [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") * Value Type: `Function` The `UserLoginHandler` allows you to provide a means of authentication so a user may access account information via the ASAPP Chat SDK. When the Chat SDK determines that a user is unauthorized, a "Log In" button appears. When the user clicks that button, the Chat SDK will call the method you provided.
See the [Authentication](/agent-desk/integrations/web-sdk/web-authentication "Web Authentication") page for options on how you can authenticate your customers. Note: If you do not provide a `UserLoginHandler`, a user will not be able to transition from an anonymous to an authorized session. When the Chat SDK calls the `UserLoginHandler`, it provides a single argument. The argument is an object and contains various session information that may be useful to your integration. You and your Implementation Manager determine the information provided. It may contain things such as [CompanySubdivision](/agent-desk/integrations/web-sdk/web-contextprovider#company-subdivisions "Company Subdivisions"), [ExternalSessioninformation](/agent-desk/integrations/web-sdk/web-contextprovider#session-information "Session Information"), and more. ```javascript theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', UserLoginHandler: function (data) { if (data.CompanySubdivision === 'chocolatiers') { // Synchronous login window.open('/login?makers=tempering') } else { // Get Customer Id and access_token ... var CustomerId = 'Retrieved customer ID'; var access_token = 'Retrieved access token'; // Call SetCustomer with retrieved access_token, CustomerId, and ContextProvider ASAPP('setCustomer', { CustomerId: CustomerId, ContextProvider: function (callback) { var context = { Auth: { Token: access_token } }; callback(context); } }); } } }); ``` # Web Authentication Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-authentication This section details the process for authenticating your users to the ASAPP Chat SDK. 
* [Authenticating at Page Load](#authenticating-at-page-load "Authenticating at Page Load") * [Authenticating Asynchronously](#authenticating-asynchronously "Authenticating Asynchronously") * [Using the 'UserLoginHandler' Method](#using-the-userloginhandler-method "Using the 'UserLoginHandler' Method") Before getting started, make sure you've [embedded the ASAPP Chat SDK](/agent-desk/integrations/web-sdk/web-quick-start#1-embed-the-script "1. Embed the Script") into your site. Your site is responsible for the entirety of the user authentication process: presenting a login interface, maintaining a session, and retrieving and formatting context data about that user. Please read the [Authentication with the ContextProvider](/agent-desk/integrations/web-sdk/web-contextprovider#authentication "Authentication") section to understand how you can pass authorization information to the Chat SDK. Once your site has authenticated a user, you can securely pass that authentication forward to the ASAPP Chat environment by making certain calls to the ASAPP Chat SDK (more on those calls below). Your user is then authenticated both on your website and in the ASAPP Chat environment, enabling them to complete use cases within ASAPP Chat that require authentication. ASAPP provides two methods for authenticating a user to the ASAPP Chat SDK. * You can proactively [authenticate your user at page load](#authenticating-at-page-load "Authenticating at Page Load"). * You can [authenticate your user midway through a session](#authenticating-asynchronously "Authenticating Asynchronously") using the [SetCustomer API](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'").
With rare exceptions, you must also configure [UserLoginHandler](#using-the-userloginhandler-method "Using the 'UserLoginHandler' Method") to enable ASAPP to handle cases where a user requires authentication or re-authentication in the midst of a chat session (e.g., if a user's authentication credentials expire during a chat session). ## Authenticating at Page Load If a user who is already authenticated with your site requests a page that includes ASAPP chat functionality, you can proactively authenticate that user to the ASAPP SDK at page load time. This allows an authenticated user who initiates a chat session to have immediate access to their account details without having to log in again. To authenticate a user to the ASAPP Chat SDK on page load, use the ASAPP [Load API](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") providing both [ContextProvider](/agent-desk/integrations/web-sdk/web-app-settings#contextprovider "ContextProvider") and [CustomerId](/agent-desk/integrations/web-sdk/web-app-settings#customerid "CustomerId") as additional keys in the [Load method](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'"). For example: ```javascript theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', CustomerId: 'Retrieved customer ID', ContextProvider: function (callback) { callback({ Auth: { Token: 'Retrieved access token' } }); } }); ``` The sample above initializes the ASAPP Chat SDK with your user's `CustomerId` and a `ContextProvider` incorporating that user's `Auth`. When a user opens the ASAPP Chat SDK, they are already authenticated to the chat client and can access account information within the chat without being asked to log in again. ## Authenticating Asynchronously If a user's authentication credentials are not available at page load time, you can authenticate asynchronously using the ASAPP [SetCustomer](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'") API. After you've retrieved your user's credentials, you can call the API to authenticate that user with the ASAPP Chat SDK mid-session.
You might want to asynchronously authenticate a user to the ASAPP Chat SDK when, for example, that user has just completed a login flow, their credentials are retrieved after the page initially loads, or a session expires and the user needs to reauthenticate. The following sample snippet shows how to call the SetCustomer API: ```javascript theme={null} ASAPP('setCustomer', { CustomerId: 'Retrieved customer ID', ContextProvider: function (callback) { callback({ Auth: { Token: 'Retrieved access token' } }); } }); ``` Once you call the [SetCustomer](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'") method, and as long as the provided `Auth` information remains valid on your backend, any ASAPP Chat SDK actions that require authentication will authenticate properly. The SetCustomer method is typically called as part of the [UserLoginHandler](/agent-desk/integrations/web-sdk/web-app-settings#userloginhandler-11877 "UserLoginHandler"). See the section on [Using the 'UserLoginHandler' Method](#using-the-userloginhandler-method "Using the 'UserLoginHandler' Method") for a complete picture of how you may want to authenticate a user during an ASAPP Chat SDK session. ## Using the 'UserLoginHandler' Method ```javascript theme={null} ASAPP('load', { APIHostname: 'example-co-api.asapp.com', AppId: 'example-co', UserLoginHandler: function (data) { // Retrieve your user's CustomerId and access token ... var CustomerId = 'Retrieved customer ID'; var access_token = 'Retrieved access token'; // ...then authenticate the session by calling SetCustomer ASAPP('setCustomer', { CustomerId: CustomerId, ContextProvider: function (callback) { callback({ Auth: { Token: access_token } }); } }); } }); ``` # Web ContextProvider Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-contextprovider This section details the various ways you can use the ASAPP ContextProvider with the Chat SDK API. Before using the ContextProvider, make sure you've [integrated the ASAPP SDK](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") script on your page. The ASAPP `ContextProvider` is used for passing various information about your users or their sessions to the Chat SDK. It is a key that may be set in the [Load and SetCustomer](/agent-desk/integrations/web-sdk/web-javascript-api) APIs. The key must be assigned a function that will receive two arguments. The first argument is a `callback` function. The second argument is a `needsRefresh` boolean indicating whether or not the authorization information needs to be refreshed.
The `ContextProvider` is called whenever the user types in the Chat SDK. ## 'Callback' After you've retrieved all the context needed for a user, call the `callback` argument with your context object as the sole argument. This will pass your context object to the ASAPP Chat SDK. ## 'needsRefresh' The `needsRefresh` argument is a boolean value indicating whether or not your user's authorization has expired. ```javascript theme={null} function contextProviderHandler(callback, needsRefresh) { var contextObject = Object.assign( {}, yourGetAnalyticsMethod(), yourGetSessionMethod(), yourGetAuthenticationMethod() ); if (needsRefresh) { Object.assign(contextObject.Auth, getUpdatedAuthorization()); } callback(contextObject); } ASAPP('setCustomer', { CustomerId: yourGetCustomerIdMethod(), ContextProvider: contextProviderHandler }); ``` ## Authentication The `ContextProvider` plays an important role in authorizing your users with the ASAPP Chat SDK. Whether your users are always authenticated or transitioning from an anonymous to integrated use case, you must use the ContextProvider's `Auth` key to provide a user's authorization. Your site is responsible for retrieving and providing all authorization information. Once provided to ASAPP, your user will be allowed secure access to any integrated use cases. Along with providing a [CustomerId](/agent-desk/integrations/web-sdk/web-app-settings#customerid "CustomerId"), you'll need to provide any request body information, cookies, headers, or access tokens required for ASAPP to authorize with your systems. You may provide this information using the `Auth` key and the following set of nested properties: ```javascript theme={null} function contextProviderHandler(callback, needsRefresh) { var contextObject = { // Auth key provided to the ContextProvider Auth: { Body: { customParam: 'value' }, Cookies: { AuthCookie: 'authCookieValue' }, Headers: { 'X-Custom-Header': 'value' }, Scopes: ['paybill'], Token: 'b34r3r...'
} }; callback(contextObject); } ``` Each key within the `Auth` object is optional, but you must provide any necessary information for your authenticated users. * The `Body`, `Cookies`, and `Headers` keys all accept an object containing any number of key:value pairs. * The `Scopes` key accepts an array of strings defining which services may be updated with the provided token. * The `Token` key accepts a single access token string. Please see the [Authentication](/agent-desk/integrations/web-sdk/web-authentication "Web Authentication") section for full details on using the `ContextProvider` for authenticating your users. ## Customer Info You may assign analytic data and add other customer information to a user's Chat SDK interactions by using the `CustomerInfo` key. The key is a child of the context object and contains a series of key:value pairs. Your page is responsible for defining and setting the keys you would like to track. You may define and pass along as many keys as you would like. You must discuss and agree upon the attribute names with your Implementation Manager. **CustomerInfo:** * Key: `CustomerInfo` * Value Type: `Object` The object should contain a set of key:value pairs that you wish to provide as analytics or customer information. The value of each key must be a string. **WARNING ABOUT SENSITIVE DATA** Do NOT send sensitive data via `CustomerInfo`, `custom_params`, or `customer_params`. For more information, [click here](/security/warning-about-customerinfo-and-sensitive-data "Warning about CustomerInfo and Sensitive Data"). A user does not need to be authenticated in order to provide analytics information. The following code snippet shows the `CustomerInfo` key being used to pass along analytics data. 
```javascript theme={null} function contextProviderHandler(callback, needsRefresh) { var contextObject = { CustomerInfo: { // Your own key: value pairs category: 'payment', action: 'ASAPP', parent_page: 'Pay my Bill' } }; // Invoke the callback with your context object callback(contextObject); } ASAPP('load', { APIHostname: '[API_HOSTNAME]', AppId: '[APP_ID]', ContextProvider: contextProviderHandler }); ``` ## Session Information The `ContextProvider` may be used for passing existing session information along to the Chat SDK. This is for connecting a user's page session with their SDK session. You may provide two keys---`ExternalSessionId` and `ExternalSessionType`---for connecting session information. The value of each key is at your discretion. A user does not need to be authenticated in order to provide session information. ### ExternalSessionId * Key: `ExternalSessionId` * Value Type: `String` * Example Value: `'j6oAOxCWZh...'` Your user's unique session identifier. This information can be used for joining your session IDs with ASAPP's session IDs. ### ExternalSessionType * Key: `ExternalSessionType` * Value Type: `String` * Example Value: `'visitID'` A descriptive label of the type of identifier being passed via the `ExternalSessionId`. ## Company Subdivisions If your company has multiple entities segmented under a single AppId, you may use the `ContextProvider` to pass the entity information along to the Chat SDK. To do so, provide the optional `CompanySubdivision` key with a value of your subdivision's identifier. The identifier value will be determined in coordination with your ASAPP Implementation Manager. ### CompanySubdivision * Key: `CompanySubdivision` * Value Type: `String` * Example Value: `'divisionId'` A string containing your subdivision's identifier.
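Session and subdivision context can be supplied together through the same `ContextProvider` function shown above. A minimal sketch, assuming placeholder values throughout (`yourGetVisitIdMethod` is a hypothetical helper standing in for however your site retrieves its own session ID):

```javascript
// Hypothetical helper that returns your site's own session identifier.
function yourGetVisitIdMethod() {
  return 'j6oAOxCWZh...';
}

// ContextProvider passing session and subdivision information to the SDK.
function contextProviderHandler(callback, needsRefresh) {
  var contextObject = {
    ExternalSessionId: yourGetVisitIdMethod(),
    ExternalSessionType: 'visitID',
    CompanySubdivision: 'divisionId'
  };
  callback(contextObject);
}

// Simulate the Chat SDK invoking the provider (the SDK calls it for you).
contextProviderHandler(function (context) {
  console.log(context.ExternalSessionType); // logs 'visitID'
}, false);
```

A user does not need to be authenticated for either key to be passed along.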
## Segments If your company needs to group users at a more granular level than [AppId](/agent-desk/integrations/web-sdk/web-app-settings#appid "AppId") or [CompanySubdivision](#company-subdivisions "Company Subdivisions"), you may use the `Segments` key to apply labels to your reports. Each key you provide allows you to filter your reporting dashboard by those values. ### Segments * Key: `Segments` * Value Type: `Array` * Example Value: \[`'north america'`, `'usa'`, `'northeast'`] The key value must be an array containing a set of strings. # Web Customization Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-customization Once properly installed and configured, the ASAPP Chat SDK embeds two snippets of HTML markup into your host web page: * [Chat SDK Badge](#badge "Badge") * [Chat SDK iframe](#iframe "iframe") This section details how these elements function. In addition, it describes how to [Customize the Chat UI](#customize-the-chat-ui "Customize the Chat UI"), [Dynamic Styling Configuration](#dynamic-styling-configuration), [Custom Icons](#custom-icons), and [Advanced Configuration Options](#advanced-configuration-options). ## Badge The ASAPP Chat SDK Badge is the default interface element your customers can use to open or close the ASAPP Chat iframe. When a user clicks on this element, it will trigger the [ASAPP('show')](/agent-desk/integrations/web-sdk/web-javascript-api#show "'show'") or [ASAPP('hide')](/agent-desk/integrations/web-sdk/web-javascript-api#hide "'hide'") APIs. This toggles the display of the ASAPP Chat SDK iframe. ### Badge Markup By default, the ASAPP Chat SDK Badge is inserted into your markup as a lightweight `button` element, with a click behavior that toggles the display of the [iframe](#iframe "iframe") element. ASAPP recommends that you use the default badge element so you can take advantage of our latest features as they become available.
However, if you wish to customize the badge, you can do so by either manipulating the CSS associated with the badge, or by hiding/removing the element from your DOM and toggling the display of the iframe using your own custom element. See the [Badge Styling](#asapp-badge-styling "ASAPP Badge Styling") section below for more details on customizing the appearance of the ASAPP Chat SDK Badge.

```html theme={null}
<!-- Illustrative markup: the SDK inserts and manages this element itself -->
<button id="asapp-chat-sdk-badge" class="asappChatSDKBadge" type="button">
  <svg class="icon" aria-hidden="true"><!-- badge icon --></svg>
</button>
```

### ASAPP Badge Styling

You can customize the ASAPP Chat SDK Badge with CSS using the ID `#asapp-chat-sdk-badge` or classname `.asappChatSDKBadge` selectors. ASAPP recommends that you use [BadgeColor](/agent-desk/integrations/web-sdk/web-app-settings#display "Display"). The following snippet is an example of how you might use these selectors to customize the element to meet your brand needs:

```css theme={null}
#asapp-chat-sdk-badge {
  background-color: rebeccapurple;
}

#asapp-chat-sdk-badge:focus,
#asapp-chat-sdk-badge:hover,
#asapp-chat-sdk-badge:active {
  -webkit-tap-highlight-color: rgba(102, 51, 153, 0.25);
  background-color: #fff;
}

#asapp-chat-sdk-badge .icon {
  fill: #fff;
}

#asapp-chat-sdk-badge:focus .icon,
#asapp-chat-sdk-badge:hover .icon,
#asapp-chat-sdk-badge:active .icon {
  fill: rebeccapurple;
}
```

### Custom Badge

You can hide the ASAPP Chat SDK Badge and provide your own interface for opening the ASAPP Chat SDK iframe.

* Set [BadgeType](/agent-desk/integrations/web-sdk/web-app-settings#display "Display") to `none`.
* Call [`ASAPP('show')`](/agent-desk/integrations/web-sdk/web-javascript-api#show "'show'") and/or [`ASAPP('hide')`](/agent-desk/integrations/web-sdk/web-javascript-api#hide "'hide'") when your custom badge is clicked to open/close the iframe.
* To ensure that the Chat SDK is ready, ASAPP recommends displaying your custom badge in a disabled/loading state at first, then utilizing [onLoadComplete](/agent-desk/integrations/web-sdk/web-app-settings#onloadcomplete "onLoadComplete") to enable it.
**Example:** In the code example below, the 'Chat with us' button is not clickable until you enable it using onLoadComplete. Once enabled, a user can click the button to open the ASAPP SDK iframe.

Custom Button:

```html theme={null}
<!-- Hypothetical custom badge, disabled until the SDK has loaded -->
<button id="custom-chat-badge" disabled>Chat with us</button>
```

Load config example:

```javascript theme={null}
// Hypothetical load configuration: hide the default badge, then enable
// the custom button once the SDK reports that it has finished loading.
ASAPP('load', {
  APIHostname: '[API_HOSTNAME]',
  AppId: '[APP_ID]',
  Display: {
    BadgeType: 'none'
  },
  onLoadComplete: function (isDisplayingChat) {
    var badge = document.getElementById('custom-chat-badge');
    if (isDisplayingChat && badge) {
      badge.disabled = false;
      badge.addEventListener('click', function () {
        ASAPP('show');
      });
    }
  }
});
```

## iframe

The ASAPP Chat SDK iframe contains the interface that your customers will use to interact with the ASAPP platform. The element is populated with ASAPP-provided functionality and styled elements, but the iframe itself is customizable to your brand's needs.

### iframe Markup

The SDK iframe is instantiated as a lightweight `iframe` element.

### iframe Styling

You can customize the ASAPP Chat SDK iframe by using the ID `#asapp-chat-sdk-iframe` or classname `.asappChatSDKIFrame` selectors. The following snippet is an example of how you may want to use these selectors to customize the element to your brand.

```css theme={null}
@media only screen and (min-width: 415px) {
  #asapp-chat-sdk-iframe {
    box-shadow: 0 2px 12px 0 rgba(35, 6, 60, 0.05), 0 2px 49px 0 rgba(102, 51, 153, 0.25);
  }
}
```

Modifying the sizing or positioning of the iframe is currently not supported. Change those properties at your own risk; a moved or resized iframe is not guaranteed to work with upcoming releases of the ASAPP platform.

## Customize the Chat UI

ASAPP will customize the Chat SDK iframe User Interface (UI) in close collaboration with design and business stakeholders. ASAPP will work within your branding guidelines to apply an appropriate color palette, logo, and typeface. There are two technical requirements in particular that we can assess early on to provide a more seamless delivery:

### 1. Chat Header Logo

The ASAPP SDK Team will embed your logo into the Chat SDK Header.
Please provide your logo in the following format:

* SVG format
* Does not exceed 22 pixels in height
* Does not exceed 170 pixels in width
* Should not contain animations
* Should not contain filter effects

If you follow the above guidelines your logo will:

* display at an optimal size for responsive devices
* sit well within the overall design
* display properly

### 2. Custom Typefaces

Using a custom typeface within the ASAPP Chat SDK requires detailed technical requirements to ensure that the client is performant, caching properly, and displaying the expected fonts. For the best experience, you should provide ASAPP with the following:

* The font should be available in any of the following formats: WOFF2, WOFF, OTF, TTF, or EOT.
* The font should be hosted in the same place that your own site's custom typeface is hosted.
* The same hosted font files should have an `Access-Control-Allow-Origin` that allows `sdk.asapp.com` or `*`.
* The files should have proper cache-control headers as well as GZIP compression. For more information on web font performance enhancements, ASAPP recommends the article: [Web Font Optimization](https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/webfont-optimization), published by Google and Ilya Grigorik.
* You will provide ASAPP with the URLs for each of the hosted font formats for use in a CSS @font-face declaration, hosted on sdk.asapp.com.
* If your font becomes unavailable for display, ASAPP will default to using [Lato](https://fonts.google.com/specimen/Lato), then Arial, Helvetica, or a default sans-serif font.

## Dynamic Styling Configuration

The ASAPP Chat SDK supports dynamic styling through configuration. This allows you to customize the appearance of various chat components without requiring CSS modifications.
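As a rough mental model (not the SDK's actual internals), configuration-driven styling amounts to walking the nested `Chat.Styling` object and applying each leaf group of CSS declarations as inline styles to the matching element. The sketch below uses plain objects with a `style` property as stand-ins for DOM elements, and a path-based lookup as a simplification of how elements are targeted by class name; the function name and element registry are hypothetical.

```javascript
// Illustrative sketch of configuration-driven styling: walk a nested
// styling config and treat any object whose values are all strings as a
// block of CSS declarations for the element registered at that path.
function applyChatStyling(stylingConfig, elements) {
  function walk(node, path) {
    var values = Object.keys(node).map(function (k) { return node[k]; });
    var isDeclarationBlock = values.length > 0 &&
      values.every(function (v) { return typeof v === 'string'; });

    if (isDeclarationBlock) {
      var el = elements[path.join('.')];
      if (el) Object.assign(el.style, node); // apply as inline styles
      return;
    }
    Object.keys(node).forEach(function (key) {
      walk(node[key], path.concat(key));
    });
  }
  walk(stylingConfig, []);
}
```

For example, a config of `{ QuickReplies: { Button: { backgroundColor: '#5255e2' } } }` would set `backgroundColor` on the element registered under `'QuickReplies.Button'`. The real SDK also attaches event listeners for state-based styles (such as the send button's `Enabled`/`Disabled` variants), which this sketch omits.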
### Incorporating Styling in the SDK To dynamically apply styles based on the provided configuration, the SDK implements the following approach: * **Extract Styling Configurations**: The SDK parses the `Chat.Styling` section of the configuration and identifies the defined styles for various elements. * **Apply Styles Dynamically**: The SDK targets elements based on their class names and applies inline styles dynamically, ensuring that the defined styles are reflected in real-time. * **Handle State-Based Styling**: Elements that support different states (e.g., button with default and focus styles) have event listeners added dynamically to switch styles when users interact with them. ### Configuration Schema The enhanced configuration schema includes a `Chat` section that supports styling, icons, and features: ```json theme={null} { "Chat": { "Styling": { /* ChatStyling object */ }, "Icons": { /* ChatIcons object */ }, "Features": { /* ChatFeatures object */ } } } ``` ### Styling Configuration The `Chat.Styling` object allows you to customize the appearance of various chat elements: #### Available Styling Options **QuickReplies**: Customize quick reply buttons and their container * `Container`: Scrollable container holding all quick reply buttons * `Button`: Individual quick reply action buttons ```json theme={null} { "Chat": { "Styling": { "QuickReplies": { "Container": { "backgroundColor": "#f8f9fa", "padding": "12px", "borderRadius": "8px" }, "Button": { "backgroundColor": "#5255e2", "color": "#ffffff", "border": "none", "borderRadius": "20px", "padding": "8px 16px", "fontSize": "14px", "fontWeight": "500" } } } } } ``` **ChatInput**: Style chat input components * `AttachmentButton.Icon`: Icon within the attachment button * `SendButton.Icon.Disabled`: Send button icon in disabled state * `SendButton.Icon.Enabled`: Send button icon in enabled state ```json theme={null} { "Chat": { "Styling": { "ChatInput": { "AttachmentButton": { "Icon": { "fill": "#5255e2", 
"width": "20px", "height": "20px" } }, "SendButton": { "Icon": { "Disabled": { "fill": "#cccccc", "opacity": "0.5" }, "Enabled": { "fill": "#5255e2", "opacity": "1" } } } } } } } ``` **EwtSheet**: Estimated Wait Time display styling * `ProgressBar.Container`: Background container for progress bar * `ProgressBar.Line`: Active progress indicator line * `Content`: Main content area styling * `SecondaryText`: Secondary description text * `PrimaryText`: Primary time display text * `TextContainer`: Wrapper for text elements * `Button`: Action buttons in EWT sheet ```json theme={null} { "Chat": { "Styling": { "EwtSheet": { "ProgressBar": { "Container": { "backgroundColor": "#e9ecef", "borderRadius": "4px", "height": "8px" }, "Line": { "backgroundColor": "#5255e2", "borderRadius": "4px", "transition": "width 0.3s ease" } }, "Content": { "padding": "20px", "backgroundColor": "#ffffff", "borderRadius": "12px" }, "PrimaryText": { "fontSize": "18px", "fontWeight": "600", "color": "#212529" }, "SecondaryText": { "fontSize": "14px", "color": "#6c757d", "marginTop": "8px" }, "TextContainer": { "textAlign": "center", "marginBottom": "16px" }, "Button": { "backgroundColor": "#5255e2", "color": "#ffffff", "border": "none", "borderRadius": "6px", "padding": "10px 20px", "fontSize": "14px", "fontWeight": "500" } } } } } ``` **ActionSheet**: Full screen modal action buttons * `Buttons.Container`: Wrapper for all action buttons * `Buttons.Primary`: Primary action button styling * `Buttons.Secondary`: Secondary action button styling ```json theme={null} { "Chat": { "Styling": { "ActionSheet": { "Buttons": { "Container": { "display": "flex", "flexDirection": "column", "gap": "12px", "padding": "20px" }, "Primary": { "backgroundColor": "#5255e2", "color": "#ffffff", "border": "none", "borderRadius": "8px", "padding": "12px 24px", "fontSize": "16px", "fontWeight": "600" }, "Secondary": { "backgroundColor": "transparent", "color": "#5255e2", "border": "2px solid #5255e2", "borderRadius": 
"8px", "padding": "12px 24px", "fontSize": "16px", "fontWeight": "500" } } } } } } ``` **NewQuestion**: New question prompt styling * `Container`: Container for new question UI * `Content.Text`: Text elements in prompt * `Content.Icon`: Icon in new question UI ```json theme={null} { "Chat": { "Styling": { "NewQuestion": { "Container": { "backgroundColor": "#f8f9fa", "borderRadius": "12px", "padding": "16px", "border": "1px solid #dee2e6" }, "Content": { "Text": { "fontSize": "14px", "color": "#495057", "fontWeight": "500" }, "Icon": { "fill": "#5255e2", "width": "18px", "height": "18px" } } } } } } ``` **ChatBanner**: Chat banner notifications * `Container`: Banner container * `Text`: Main banner text * `Warning`: Warning variant styling * `Error`: Error variant styling * `Success`: Success variant styling * `Icon`: Icon variant styling ```json theme={null} { "Chat": { "Styling": { "ChatBanner": { "Container": { "padding": "12px 16px", "borderRadius": "8px", "marginBottom": "12px", "display": "flex", "alignItems": "center", "gap": "8px" }, "Text": { "fontSize": "14px", "fontWeight": "500", "lineHeight": "1.4" }, "Warning": { "backgroundColor": "#fff3cd", "borderColor": "#ffeaa7", "color": "#856404" }, "Error": { "backgroundColor": "#f8d7da", "borderColor": "#f5c6cb", "color": "#721c24" }, "Success": { "backgroundColor": "#d1edff", "borderColor": "#bee5eb", "color": "#0c5460" }, "Icon": { "width": "16px", "height": "16px", "flexShrink": "0" } } } } } ``` **ChatMessagesView**: Chat messages display * `ScrollView`: Chat view for chat messages ```json theme={null} { "Chat": { "Styling": { "ChatMessagesView": { "ScrollView": { "backgroundColor": "#ffffff", "padding": "16px", "overflowY": "auto", "maxHeight": "400px" } } } } } ``` #### Styling Example Here is a comprehensive example that combines multiple styling options: ```json theme={null} { "APIHostname": "sample.api.com", "AppId": "sampleAppId", "Language": "", "RegionCode": "", "CustomerId": "", "Display": { 
"Align": "right", "AlwaysShowMinimize": true, "BadgeColor": "#5255e2", "BadgeText": "", "BadgeType": "badge", "FrameDraggable": false, "FrameStyle": "default", "FrameTitle": null, "HideBadgeOnLoad": false, "Identity": "", "PrimaryColor": "#5255e2", "DarkColor": "#494a7c" }, "Chat": { "Styling": { "QuickReplies": { "Container": { "backgroundColor": "#f8f9fa", "padding": "12px" }, "Button": { "backgroundColor": "#5255e2", "color": "#ffffff", "borderRadius": "20px" } }, "ChatInput": { "SendButton": { "Icon": { "Enabled": { "fill": "#5255e2" }, "Disabled": { "fill": "#cccccc" } } } }, "ChatBanner": { "Container": { "borderRadius": "8px", "padding": "12px" }, "Success": { "backgroundColor": "#d1edff", "color": "#0c5460" } } } }, "Intent": null, "Sound": true } ``` ## Custom Icons The `Chat.Icons` section allows you to specify custom SVG path data for key icons used within the chat interface. ### Icon Configuration The SDK supports using SVG `d` path data to create customizable icons. You only need to provide the `d` attribute value from your SVG: ```json theme={null} { "Chat": { "Icons": { "Minimize": "M 44 24 C 44 24.847656 43.308594 25.539062 42.460938 25.539062 L 5.539062 25.539062 C 4.691406 25.539062 4 24.847656 4 24 C 4 23.152344 4.691406 22.460938 5.539062 22.460938 L 42.460938 22.460938 C 43.308594 22.460938 44 23.152344 44 24 Z M 44 24", "Send": "M 2.289062 9.734375 C 1.773438 8.703125 1.957031 7.460938 2.746094 6.621094 C 3.539062 5.78125 4.765625 5.53125 5.832031 5.976562 L 44.332031 22.4375 C 45.347656 22.867188 46 23.859375 46 24.957031 C 46 26.054688 45.347656 27.050781 44.332031 27.476562 L 5.832031 43.9375 C 4.773438 44.390625 3.539062 44.136719 2.746094 43.296875 C 1.957031 42.453125 1.773438 41.210938 2.289062 40.183594 L 9.921875 24.964844 Z M 12.3125 26.339844 L 4.75 41.425781 L 40.042969 26.339844 Z M 40.042969 23.59375 L 4.75 8.507812 L 12.3125 23.59375 Z M 40.042969 23.59375", "NewQuestion": "M 16.5 33.601562 C 17.882812 33.601562 19 34.671875 19 
36 L 19 39.601562 L 26.664062 34.082031 C 27.09375 33.765625 27.625 33.601562 28.164062 33.601562 L 39 33.601562 C 40.382812 33.601562 41.5 32.527344 41.5 31.199219 L 41.5 9.601562 C 41.5 8.273438 40.382812 7.199219 39 7.199219 L 9 7.199219 C 7.617188 7.199219 6.5 8.273438 6.5 9.601562 L 6.5 31.199219 C 6.5 32.527344 7.617188 33.601562 9 33.601562 Z M 4 9.601562 C 4 6.953125 6.242188 4.800781 9 4.800781 L 39 4.800781 C 41.757812 4.800781 44 6.953125 44 9.601562 L 44 31.199219 C 44 33.847656 41.757812 36 39 36 L 28.164062 36 L 18.5 42.960938 C 18.125 43.230469 17.613281 43.273438 17.1875 43.070312 C 16.757812 42.867188 16.5 42.457031 16.5 42 L 16.5 36 L 9 36 C 6.242188 36 4 33.847656 4 31.199219 Z M 4 9.601562", "ChatBanner": "your-custom-banner-icon-d-path-here" } } } ```

### Available Icons

* **Minimize**: Icon for minimizing the chat window
* **Send**: Icon for the send button
* **NewQuestion**: Icon for starting a new question
* **ChatBanner**: Icon for chat banner notifications

Make sure to size your icons correctly. You may use an online SVG editor to resize your icons to meet the chat interface specifications.

## Advanced Configuration Options

### Chat Features

The `Chat.Features` section controls the availability of specific chat features:

#### Estimated Wait Time (EWT)

Control whether the EWT feature is displayed:

```json theme={null}
{
  "Chat": {
    "Features": {
      "EWT": {
        "enabled": true
      }
    }
  }
}
```

#### Focus Trap

Enable focus trap within the chat instance for better accessibility:

```json theme={null}
{
  "Chat": {
    "Features": {
      "TrapFocus": true
    }
  }
}
```

# Web Examples

Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-examples

This section provides a few common integration scenarios with the ASAPP Chat SDK. Before continuing, make sure you've [integrated the ASAPP SDK](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") script on your page. You must have the initial script available before utilizing any of the examples below.
Also, be sure that you have a [Trigger](/agent-desk/integrations/web-sdk/web-features#triggers "Triggers") enabled for the page(s) you wish to display the Chat SDK.

* [Basic Integration (no Authentication)](#basic-integration-no-authentication "Basic Integration (no Authentication)")
* [Basic Integration (With Authentication)](#basic-integration-with-authentication "Basic Integration (With Authentication)")
* [Customizing the Interface](#customizing-the-interface "Customizing the Interface")
* [Advanced Integration](#advanced-integration "Advanced Integration")

## Basic Integration (no Authentication)

The most basic integrations are ones with no customizations to the ASAPP interface and no integrated use cases. If your company is simply providing an un-authed user experience, an integration like the one below may suffice. See the [App Settings](/agent-desk/integrations/web-sdk/web-app-settings "Web App Settings") page for details on the [APIHostname](/agent-desk/integrations/web-sdk/web-app-settings#apihostname "APIHostName") and [AppId](/agent-desk/integrations/web-sdk/web-app-settings#appid "AppId") settings.

The following code snippet is an example of a non-authenticated integration with the ASAPP Chat SDK.

```javascript theme={null}
document.addEventListener('DOMContentLoaded', function () {
  ASAPP('load', {
    APIHostname: 'example-co.api.asapp.com',
    AppId: 'example-co'
  });
});
```

## Basic Integration (With Authentication)

Integrating the Chat SDK with authenticated users requires the addition of the `CustomerId`, `ContextProvider`, and `UserLoginHandler` keys.
See the [App Settings](/agent-desk/integrations/web-sdk/web-app-settings "Web App Settings") page for more detailed information on their usage. With each of these keys set, a user will be able to access integrated use cases or be capable of logging in if they are not already.

The following code snippet is an example of providing user credentials for allowing a user to enter integrated use cases.

```javascript theme={null}
document.addEventListener('DOMContentLoaded', function () {
  ASAPP('load', {
    APIHostname: 'example-co.api.asapp.com',
    AppId: 'example-co',
    CustomerId: 'hashed-customer-identifier',
    ContextProvider: function (callback, tokenIsExpired) {
      var context = {
        Auth: {
          Token: 'secure-session-user-token'
        }
      };
      callback(context);
    },
    // If a user's token expires or their user credentials
    // are not available, handle their login path
    UserLoginHandler: function () {
      window.location.href = '/login';
    }
  });
});
```

With the above information set, a user will be able to access integrated use cases. If their session or token information has expired, then the user will be presented with a "Sign In" button. Once the user clicks the Sign In button, the Chat SDK will call your provided `UserLoginHandler`, allowing them to authorize. Here's a sample of what the Sign In button looks like.

## Customizing the Interface

The Chat SDK offers a few basic keys for customizing the interface to your liking. The `Display` key enables you to perform those customizations as needed. Please see the [Display Settings](/agent-desk/integrations/web-sdk/web-app-settings#display "Display") section for detailed information on each of the available keys.

The following code snippet shows how to add the Display key to your integration to customize the display settings of the Chat SDK.
```javascript theme={null}
document.addEventListener('DOMContentLoaded', function () {
  ASAPP('load', {
    APIHostname: 'example-co.api.asapp.com',
    AppId: 'example-co',
    Display: {
      Align: 'left',
      AlwaysShowMinimize: true,
      BadgeColor: '#36393A',
      BadgeText: 'Chat With Us',
      BadgeType: 'tray',
      FrameDraggable: true,
      FrameStyle: 'sideBar'
    }
  });
});
```

For cases in which you have more specific styling needs, you may utilize the available IDs or classnames for targeting and customizing the Chat SDK elements with CSS. These selectors are stable and can be used to target the ASAPP Badge and iFrame for specific styling needs.

The following code snippet provides a CSS example showcasing a few advanced style changes.

```css theme={null}
#asapp-chat-sdk-badge {
  border-radius: 25px;
  bottom: 10px;
  box-shadow: 0 0 0 2px #fff, 0 0 0 4px #36393A;
}

#asapp-chat-sdk-iframe {
  border-radius: 0;
}
```

With the above customizations in place, the Chat SDK Badge will look like the following.

## Advanced Integration

Here's a more robust example showing how to utilize most of the ASAPP Chat SDK settings. In the examples below we will define a few helper methods, then pass those helpers to the [Load](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") or [SetCustomer](/agent-desk/integrations/web-sdk/web-javascript-api#setcustomer "'setCustomer'") APIs.

The following example showcases a [ContextProvider](/agent-desk/integrations/web-sdk/web-contextprovider "Web ContextProvider") that sets some basic session information, then sets any available user authentication information. Once that information is retrieved, it passes the prepared context to the `callback` so that ASAPP can process each Chat SDK request.

The following code snippet is a ContextProvider example utilizing session expiration conditionals.
```javascript theme={null}
function asappContextProvider (callback, tokenIsExpired, sessionInfo) {
  var context = {
    CustomerInfo: {
      Region: 'north-america',
      ViewingProduct: 'New Smartphone'
    }
  };

  if (tokenIsExpired || !sessionInfo) {
    sessionInfo = retrieveSessionInfo();
  }

  if (sessionInfo) {
    context.Auth = {
      Cookies: {
        'X-User-Header': sessionInfo.cookies.userValue
      },
      Token: sessionInfo.access_token
    };
  }

  callback(context);
}
```

The next example shows conditional logic for logging a user in on a single- or multi-page application. You'll likely only need to handle one of the cases in your application. If a user enters a use case they are not authorized for, they will be presented with a "Sign In" button within the SDK. When the user clicks that link, it will trigger your provided [UserLoginHandler](/agent-desk/integrations/web-sdk/web-app-settings#userloginhandler "UserLoginHandler") so you can allow the user to authenticate.

The following code snippet shows a UserLoginHandler utilizing page redirection or modals to log a user in.

```javascript theme={null}
function asappUserLoginHandler () {
  if (isSinglePageApp) {
    displayUserLoginModal()
      .then(function (customer, sessionInfo) {
        ASAPP('SetCustomer', {
          CustomerId: customer,
          ContextProvider: function (callback, tokenIsExpired) {
            asappContextProvider(callback, tokenIsExpired, sessionInfo);
          }
        });
      });
  } else {
    window.location.href = '/login';
  }
}
```

The next helper defines the [onLoadComplete](/agent-desk/integrations/web-sdk/web-app-settings#onloadcomplete "onLoadComplete") handler. It is used for preparing any additional logic you wish to tie to ASAPP or your own page functionality. The below example checks whether the Chat SDK loaded via a [Trigger](/agent-desk/integrations/web-sdk/web-features#triggers "Triggers") (via the `isDisplayingChat` argument).
If it's configured to display, it prepares some event bindings through the [Action API](/agent-desk/integrations/web-sdk/web-javascript-api#action-on-or-off "action: 'on' or 'off'") which in turn call an example metrics service.

The following code snippet shows an `onLoadComplete` handler being used with the isDisplayingChat conditional and Action API.

```javascript theme={null}
function asappOnLoadComplete (isDisplayingChat) {
  if (isDisplayingChat) {
    // Chat SDK has loaded and exists on the page
    document.body.classList.add('chat-sdk-loaded');

    var customerId = retrieveCurrentSessionOrUserId();

    ASAPP('on', 'issue:new', function (event) {
      metricService('set', 'chat:action', {
        actionName: event.type,
        customerId: customerId,
        externalCustomerId: event.detail.customerId,
        issueId: event.detail.issueId
      });
    });

    ASAPP('on', 'message:received', function (event) {
      metricService('set', 'chat:action', {
        actionName: event.type,
        customerId: customerId,
        externalCustomerId: event.detail.customerId,
        isLiveChat: event.detail.isLiveChat,
        issueId: event.detail.issueId,
        senderType: event.detail.senderType
      });
    });
  } else {
    // Chat SDK is not configured to display on this page.
    // See Display Settings: Triggers documentation
  }
}
```

Finally, we tie everything together. The example below shows a combination of adding the above helper functions to the ASAPP [Load API](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'"). It also combines many of the [App Settings](/agent-desk/integrations/web-sdk/web-app-settings "Web App Settings") available to you and your integration.
```javascript theme={null}
document.addEventListener('DOMContentLoaded', function () {
  var customerId = retrieveCustomerIdentifier();

  ASAPP('load', {
    APIHostname: 'example-co.api.asapp.com',
    AppId: 'example-co',
    Display: {
      Align: 'left',
      AlwaysShowMinimize: true,
      BadgeColor: 'rebeccapurple',
      BadgeText: 'Chat With Us',
      BadgeType: 'tray',
      FrameDraggable: true,
      FrameStyle: 'sideBar',
      Identity: 'subsidiary-branding'
    },
    Intent: {
      Code: 'PAYBILL'
    },
    RegionCode: 'US',
    Sound: true,
    CustomerId: customerId,
    ContextProvider: asappContextProvider,
    UserLoginHandler: asappUserLoginHandler,
    onLoadComplete: asappOnLoadComplete
  });
});
```

# Web Features

Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-features

This section describes various features that are unique to the ASAPP Web SDK:

* [Triggers](#triggers "Triggers")
* [Deeplinks](#deeplinks-11865 "Deeplinks")

In addition, please see [Chat Instead](/agent-desk/integrations/chat-instead/web "Web").

## Triggers

A Trigger is an ASAPP feature that allows you to specify which pages display the ASAPP Chat UI. You may choose to show the ASAPP Chat UI on all pages where the ASAPP Chat SDK is [embedded and loaded](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start"), or on just a subset of those pages. You must enable at least one Trigger in order for the ASAPP Chat UI to display anywhere on your site; until you do, the Chat UI will not display.

Once you've [embedded](/agent-desk/integrations/web-sdk/web-quick-start#1-embed-the-script "1. Embed the Script") and [loaded](/agent-desk/integrations/web-sdk/web-javascript-api#load "'load'") the Chat SDK on your web pages, ASAPP will determine whether or not to display the Chat UI on the user's current URL. URLs that are enabled for displaying the UI are configured by a feature known as Triggers.

You will need to be set up as a user of the ASAPP Admin Control Panel in order to make the changes described below.
Once you are granted permissions, you may utilize the Triggers as a means of specifying which pages are eligible to show the ASAPP Chat UI.

### Creating a Trigger

1. Visit the **Admin > Triggers** section of your Admin Desk.
2. Click the **Add +** button from the Triggers settings page.
3. In the **URL Link** field, enter the URL for the page where you would like to display the ASAPP Chat UI. (See the **Types of Triggers** section below for some example values.)
4. Click **Next >**.
5. Give the Trigger a display name. (Display names are used only on the Triggers settings page to help you organize and manage your triggers.)
6. Click **Save**.
7. You should now see the new entry on your Trigger settings page.
8. Visit the newly configured page on your site to double-check that the ASAPP Chat UI is loading or hiding as you expect.

### Types of Triggers

You may finely control the display of the ASAPP Chat UI on your site by adding as many Triggers as you like. Triggers can be defined in two different ways: as **Wildcard** and as **Single-Page Triggers**.

#### Wildcard Triggers

You can use the wildcard character in the URL Link field of a Trigger to enable the display of the Chat SDK on pages that follow a URL pattern. The asterisk (`*`) is the wildcard character you use when defining a Trigger. When you use an asterisk in the URL Link of your Trigger definition, that character will match any sequence of one or more characters.

To set a wildcard for your entire domain, enter a **URL Link** value for your domain name, followed by `/*` (e.g., `example.com/*`). This will enable the display of the ASAPP Chat UI on all pages of your site.

To enable the ASAPP Chat UI to appear on a more limited set of pages, enter a **URL Link** value that includes the appropriate sub-route path, followed by the `/*` wildcard (e.g., `example.com/settings/*`).
This will cause the Chat UI to display on any pages that start with the URL and sub-route `example.com/settings/`, such as `example.com/settings/profile` and `example.com/settings/payment`.

#### Single-Page Triggers

If you want the ASAPP Chat UI to display on only a few specific pages, you can create a separate Trigger for each of those pages, one at a time, by entering the exact URL for the page you wish to enable in the URL Link field of the Trigger definition. For example, entering `example.com/customer-support/shipping.html` in the URL Link field of your Trigger definition will enable the ASAPP Chat UI to display on that single page.

## Deeplinks

Deeplinks define how the SDK opens hyperlinks when a user clicks a link to another document. In the ASAPP Web SDK, we use the browser's `window.location.origin` API to determine whether the link should open in the same window or a new window.

In order for a link to open in the same window as the user's current SDK window, the `window.location.origin` must return a matching protocol and hostname. For example, if a user is on `https://www.example.com` and clicks a link to `https://www.example.com/page-two`, the SDK changes the current page to the destination page in the same window.

A link opens in a *new* window if there is any difference between the current page and the destination page origin. When a user clicks a link from `https://www.example.com` to `https://subdomain.example.com`, the SDK opens the destination page in a new window due to the hostname variation. A link from `https://example.com` to `http://example.com` also opens a new window due to a mismatched protocol. When a link opens a new window, the user's SDK window remains open.

# Web JavaScript API

Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-javascript-api

This section details the various API methods you can call to the ASAPP Chat SDK.
Before making any API calls, make sure you've [integrated the ASAPP SDK](/agent-desk/integrations/web-sdk/web-quick-start "Web Quick Start") script on your page. Once you've integrated the SDK with your site, you can use the JavaScript API to toggle settings in the Chat SDK, trigger events, or send information to ASAPP.

The Chat SDK Web JavaScript API allows you to perform a variety of actions after the SDK has been initialized with the [`load`](#load) method, such as authorizing a user after a conversation has started or updating the customer's info mid-conversation with the [`setCustomer`](#setcustomer) method.

Read on for details on each of these methods:

* [action: `on` or `off`](#action-on-or-off)
* [`getState`](#getstate)
* [`hide`](#hide)
* [`load`](#load)
* [`refresh`](#refresh)
* [`send`](#send)
* [`set`](#set)
* [`setCustomer`](#setcustomer)
* [`setIntent`](#setintent)
* [`show`](#show)
* [`showChatInstead`](#showchatinstead)
* [`unload`](#unload)
* [`unloadAndClearSession`](#unloadandclearsession)

## action: `on` or `off`

This API subscribes or unsubscribes to events that occur within the Chat SDK. A developer can apply custom behavior, track metrics, and more by subscribing to one of the Chat SDK custom events.

To utilize the Action API, pass either the `on` (subscribes) or `off` (unsubscribes) keywords to the `ASAPP` method. The next argument is the name of the event binding. The final argument is the callback handler you wish to attach.
The following code snippet is an example of the Action API subscribing and unsubscribing to the `agent:assigned` and `message:received` events:

```javascript theme={null}
function agentAssignedHandler (event) {
  onAgentAssigned(event.detail.issueId, event.detail.externalSenderId);
}

function messageHandler (event) {
  const { isFirstMessage, externalSenderId, senderType } = event.detail;
  const isFromRep = senderType === 'agent';

  if (isFirstMessage && externalSenderId) {
    onAgentInteractive(event.detail.issueId, event.detail.customerId);
  } else if (isFirstMessage === false && isFromRep) {
    ASAPP('off', 'message:received', messageHandler);
  }
}

ASAPP('load', {
  onLoadComplete: () => {
    ASAPP('on', 'agent:assigned', agentAssignedHandler);
    ASAPP('on', 'message:received', messageHandler);
  }
});
```

### Event Object

Each event receives a `CustomEvent` object as the first argument to your event handler. This is a [standard event object](https://developer.mozilla.org/en-US/docs/Web/API/CustomEvent) with all typical interfaces. The object has an `event.type` with the name of the event and an `event.detail` key which contains the following custom properties:

`issueId` (Number)
The ASAPP identifier for an individual issue. This ID may change as a user completes and starts new queries to the ASAPP system.

`customerId` (Number)
The ASAPP identifier for a customer. This ID is consistent for authenticated users but may be different for anonymous ones. Anonymous users will have a consistent ID for the duration of their session.

`externalSenderId` (String)
The external identifier you provide to ASAPP that represents an agent identifier. This property will be undefined if the user is not connected with an agent.

### Chat Events

Chat events trigger when a user opens or closes the Chat SDK window. These events do not have any additional event details.

`chat:show`

* Cancellable: true

This event triggers when a user opens the Chat SDK. It may fire multiple times per session if a user repeatedly closes and opens the chat.
`chat:hide`
* Cancellable: true

This event triggers when a user closes the Chat SDK. It may fire multiple times per session if a user repeatedly opens and closes the chat.

### Issue Events

Issue events occur when the state of an Issue changes within the ASAPP system. These events do not have any additional event details.

`issue:new`
* Cancellable: false

This event triggers when a user has opened a new issue. It fires when they first open the Chat SDK or if they complete an issue and start another one.

`issue:end`
* Cancellable: false

This event triggers when a user or agent has ended an issue. It fires when the user has completed an automated support request or when a user or agent ends an active chat.

### Agent Events

Agent events occur when particular actions involve an agent within ASAPP's system. These events do not have any additional event details.

`agent:assigned`
* Cancellable: false

This event triggers when a user is connected to an agent for the first time. It fires once the user has left an automated support flow and has been connected to a live support agent.

### Message Events

Message events occur when the user receives a message from either SRS or an agent. These events have the following additional event details:

`senderType` (String)
Returns either `srs` or `agent`.

`isLiveChat` (Boolean)
Returns `true` when a user is connected with an agent. Returns `false` when a user is within an automated flow.

`isFirstMessage` (Boolean)
Returns `true` only when a message is the first message received from an agent or SRS. Otherwise returns `false`.

`message:received`
* Cancellable: false

This event triggers whenever the Chat SDK adds a received message to the chat log. It fires when a user receives a message from SRS or an agent.

## getState

This API returns the current state of the Chat SDK session. It accepts a callback which receives the current state object.
```javascript theme={null}
ASAPP('getState', function(state) {
  console.log(state);
});
```

### State Object

The state object contains the following keys which give you insight into the user's actions:

`hasContext` (Object)
Returns the current [context](/agent-desk/integrations/web-sdk/web-contextprovider "Web ContextProvider") known by the SDK.

`hasCustomerId` (Boolean)
Returns true when the SDK has been provided with a [CustomerId](/agent-desk/integrations/web-sdk/web-app-settings#customerid "CustomerId") setting.

`isFullscreen` (Boolean)
Returns true when the SDK will render in fullscreen for mobile web devices.

`isLiveChat` (Boolean)
Returns true when the user is connected to an agent.

`isLoggingIn` (Boolean)
Returns true if the user has been presented with and clicked on a button to log in.

`isMobile` (Boolean)
Returns true when the SDK is rendering on a mobile or tablet device.

`isOpen` (Boolean)
Returns true if the user has the SDK open on the current page or had it open on the previous page.

`unreadMessages` (Integer)
Returns a count of how many messages the user has received since minimizing the SDK.

## hide

This API hides the Chat SDK iframe. See [show](#show) for revealing the Chat SDK iframe. This method is useful when you want to close the SDK iframe after certain page interactions or if you've provided a custom Badge entry point.

```javascript theme={null}
ASAPP('hide');
```

## load

This API initializes the ASAPP Chat SDK for display on your pages. To call the `load` API and initialize the SDK, you can specify any of the [Web App Settings](/agent-desk/integrations/web-sdk/web-app-settings), though the following are required:

* [APIHostname](/agent-desk/integrations/web-sdk/web-app-settings#apihostname): The hostname for connecting customers with customer support.
* [AppId](/agent-desk/integrations/web-sdk/web-app-settings#appid "AppId"): Your unique Company Identifier (or company marker).

Work with your ASAPP Account Team to determine the correct values for these settings.

Typically, you'll also specify a [`ContextProvider` handler](/agent-desk/integrations/web-sdk/web-contextprovider) to provide context to the SDK, such as user authentication information or other customer information.

```javascript Load with CustomerInfo And Authentication Token theme={null}
ASAPP('load', {
  APIHostname: '[API_HOSTNAME]',
  AppId: '[APP_ID]',
  ContextProvider: (callback) => {
    const context = {
      CustomerInfo: {
        category: 'payment',
        parent_page: 'Pay my Bill'
      },
      Auth: {
        Token: '[AUTH_TOKEN]'
      }
    };
    callback(context);
  }
});
```

Please see the [Web App Settings](/agent-desk/integrations/web-sdk/web-app-settings) page for a list of all the available properties that can be passed to the `load` API.

## refresh

This API ensures that [Triggers](/agent-desk/integrations/web-sdk/web-features#triggers) continue to work properly when the page URL changes in a SPA (Single-Page Application). You should call this API every time the page URL changes if your website is a SPA.

```javascript theme={null}
ASAPP('refresh')
```

## send

Use this API to update the `customerInfo` object at any time, regardless of whether the user is currently typing in the Chat SDK. Typically, the `customerInfo` is updated as part of your [`contextProviderHandler`](/agent-desk/integrations/web-sdk/web-app-settings#contextprovider) function defined in your [`load`](#load) call, which is called whenever the user types in the Chat SDK.

This API is primarily used to send information that triggers a proactive chat prompt when a specific criterion or set of criteria is met. The `send` API is rate limited to one request every 5 seconds.

To use this API:

* Specify a `type` of `customer`. Only the `customer` event type is supported.
* Provide a `data` object containing the `customerInfo` object:

```javascript theme={null}
ASAPP('send', {
  type: 'customer',
  data: {
    "key1": "value1",
    "key2": "value2"
  }
});
```

For example, you could use a key within `CustomerInfo` to indicate that a customer had abandoned their shopping cart.

Do not use the `send` API for transmitting any information that you would consider sensitive or Personally Identifiable Information (PII).

## set

This API applies various user information to the Chat SDK. Calling this API does not make a network request. The API accepts two arguments: the first is the name of the key you want to update; the second is the value you wish to assign to that key.

```javascript theme={null}
ASAPP('set', 'Auth', { Token: '3858f62230ac3c915f300c664312c63f' });
ASAPP('set', 'ExternalSessionId', 'j6oAOxCWZh...');
```

Please see the [Context Provider](/agent-desk/integrations/web-sdk/web-contextprovider "Web ContextProvider") page for a list of all the properties you can provide to this API.

## setCustomer

This API associates an access token with your customer's account after the Chat SDK has already loaded. This method is useful if a customer logs into their account or if you need to refresh your customer's auth token from time to time.

See the [SDK Settings](/agent-desk/integrations/web-sdk/web-app-settings "Web App Settings") section for details on the [CustomerId](/agent-desk/integrations/web-sdk/web-app-settings#customerid "CustomerId") (Required), [ContextProvider](/agent-desk/integrations/web-sdk/web-app-settings#contextprovider "ContextProvider") (Required), and [UserLoginHandler](/agent-desk/integrations/web-sdk/web-app-settings#userloginhandler "UserLoginHandler") properties accepted in `setCustomer`'s second argument.
```javascript theme={null}
ASAPP('setCustomer', {
  CustomerId: 'a1b2c3x8y9z0',
  ContextProvider: function (callback) {
    var context = {
      Auth: {
        Token: '3858f62230ac3c915f300c664312c63f'
      }
    };
    callback(context);
  }
});
```

## setIntent

This API lets you set an intent after the Chat SDK has already been loaded, and it takes effect even if the user is in chat. ASAPP recommends that you instead use [Intent](/agent-desk/integrations/web-sdk/web-app-settings#intent "Intent") via App Settings during load.

This method takes an object as a parameter, with a required key of `Code`. `Code` accepts a string. Your team and your ASAPP Implementation Manager will determine the available values.

```javascript theme={null}
ASAPP('setIntent', {Code: 'PAYBILL'});
```

## show

This API shows the Chat SDK iframe. See [hide](#hide) for hiding the Chat SDK iframe. This method is useful when you want to open the SDK iframe after certain page interactions or if you've provided a custom Badge entry point.

```javascript theme={null}
ASAPP('show');
```

## showChatInstead

This API displays the [Chat Instead](../chat-instead/web "Web") feature. To enable this feature, integrate with the `showChatInstead` API and then contact your Implementation Manager.

**Options:**

| Key | Description | Required |
| --- | --- | --- |
| `phoneNumber` | Phone number used when a user clicks phone in Chat Instead. | Yes |
| `APIHostName` | Sets the ASAPP APIHostName for connecting customers with customer support. | No (Required if you have not initialized the WebSDK via the Load API on the page) |
| `AppId` | Your unique Company Identifier. | No (Required if you have not initialized the WebSDK via the Load API on the page) |
**Example Use Case:** For instance, a phone link on your page can invoke Chat Instead when clicked:

```html theme={null}
<a href="tel:8001234567" onclick="ASAPP('showChatInstead', { phoneNumber: '(800) 123-4567' }); return false;">(800) 123-4567</a>
```

## unload

This API removes all SDK-related elements from the DOM (Badge, iframe, and Proactive Messages, if any). If the SDK is already open or a user is in live chat, ASAPP will ignore this call. To reload the SDK, you need to call the `load` API.

```javascript theme={null}
ASAPP('unload')
```

## unloadAndClearSession

This API removes all SDK instances from the browser and clears the session details. This method is specifically designed to be called during user logout to ensure complete session cleanup and allow users to reinstantiate the chat with a completely new session.

This API will:

* Remove chat instances from the browser
* Clear all session details and stored data
* Allow reinstantiation of chat with a fresh session
* Ensure proper session cleanup for user privacy

Unlike the `unload` API, this method performs a complete cleanup of all session-related data, making it ideal for logout scenarios where you want to ensure no user data persists.

```javascript theme={null}
ASAPP('unloadAndClearSession')
```

**Usage Example:**

```javascript theme={null}
// Call this method when the user logs out
function handleUserLogout() {
  // Perform other logout operations
  performLogout();

  // Clear the ASAPP chat session
  ASAPP('unloadAndClearSession');
}
```

It's recommended to call `unloadAndClearSession` as part of your application's logout process to ensure proper session cleanup and user privacy. After calling this API, you will need to call the `load` API again to reinitialize the Chat SDK.

# Web Quick Start

Source: https://docs.asapp.com/agent-desk/integrations/web-sdk/web-quick-start

If you want to start fast, follow these steps:

1. Embed the Script
2. Initialize the SDK
3. Customize the SDK
4. Authenticate Users

In addition, see an example of a [Full Snippet](#full-snippet "Full Snippet").

## 1. Embed the Script

There are two ways to add the script to your site:

1. Embed the script directly inline. See the instructions below.
2.
Use a tag manager to control where and how the scripts load. The ASAPP Chat SDK works with most tag managers. See the tag manager documentation for more detailed instructions.

To enable the ASAPP Chat SDK, you'll first need to paste the [ASAPP Chat SDK Web snippet](#full-snippet) into your site's HTML. You can place it anywhere in your markup, but it's ideal to place it near the top of the `<body>` element.

```javascript theme={null}
(function(win, doc, hostname, namespace, script) {
  script = doc.createElement('script');
  win[namespace] = win[namespace] || function() {
    (win[namespace]._ = win[namespace]._ || []).push(arguments)
  }
  win[namespace].Host = hostname;
  script.async = 1;
  script.src = hostname + '/chat-sdk.js';
  script.type = 'text/javascript';
  doc.body.appendChild(script);
})(window, document, 'https://sdk.asapp.com', 'ASAPP');
```

This snippet does two things:

1. Creates a `<script>` element that asynchronously downloads the Chat SDK from `https://sdk.asapp.com/chat-sdk.js`.
2. Defines a global `ASAPP` function that queues any API calls you make until the SDK has finished loading.

## 2. Initialize the SDK

With the snippet in place, initialize the SDK by calling the `load` API with your `APIHostname` and `AppId`:

```javascript theme={null}
ASAPP('load', {
  APIHostname: 'API_HOSTNAME',
  AppId: 'APP_ID'
});
```

**Note:** ASAPP will provide the `APIHostname` and `AppId` values to you after coordination between your organization and your ASAPP Implementation Manager. Once ASAPP determines and provides these values, you can make the following updates:

1. Replace `API_HOSTNAME` with the hostname of your ASAPP API location. This string will look something like `'examplecompanyapi.asapp.com'`.
2. Replace `APP_ID` with your Company Marker identifier. This string will look something like `'examplecompany'`.

Calling `ASAPP('load')` will make a network request to your APIHostname and determine whether or not it should display the Chat SDK Badge. The Badge displays based on your company's business hours, your trigger settings, and whether or not you have enabled the SDK in your Admin control panel.

For more advanced ways to display the ASAPP Chat SDK, see the [JavaScript API Documentation](/agent-desk/integrations/web-sdk/web-javascript-api "Web JavaScript API").

## 3. Customize the SDK

After you Embed the Script and Initialize the SDK, the ASAPP Chat SDK should display and function on your web page. You may wish to head to the [Customization](/agent-desk/integrations/web-sdk/web-customization "Web Customization") section of the documentation to learn how to customize the appearance of the ASAPP Chat SDK.

## 4. Authenticate Users

Some integrations of the ASAPP Chat SDK allow users to access sensitive account information.
If any of your use cases fall under this category, please read the [Authentication](/agent-desk/integrations/web-sdk/web-authentication "Web Authentication") section to ensure your users experience a secure and seamless session.

## Full Snippet

For additional legibility, here's the full Chat SDK Web integration snippet:

```javascript theme={null}
(function(win, doc, hostname, namespace, script) {
  script = doc.createElement('script');
  win[namespace] = win[namespace] || function() {
    (win[namespace]._ = win[namespace]._ || []).push(arguments)
  }
  win[namespace].Host = hostname;
  script.async = 1;
  script.src = hostname + '/chat-sdk.js';
  script.type = 'text/javascript';
  doc.body.appendChild(script);
})(window, document, 'https://sdk.asapp.com', 'ASAPP');
```

# WhatsApp Business

Source: https://docs.asapp.com/agent-desk/integrations/whatsapp-business

WhatsApp Business is a service that enables your organization to communicate directly with your customers in WhatsApp through your Customer Service Platform (CSP), which in this case is ASAPP.

## Quick Start Guide

1. Create a Business Manager (BM) Account with Meta
2. Create WhatsApp Business Accounts (WABA) in AI-Console
3. Modify Flows and Test
4. Create and Implement Entry Points
5. Determine Launch and Throttling Strategy

### Create a Business Manager (BM) Account

Before integrating with ASAPP's WhatsApp adapter, you must create a Business Manager (BM) account with Meta; visit [this page for account creation](https://www.facebook.com/business/help/1710077379203657?id=180505742745347). Following account creation, Meta will also request that you complete a [business verification](https://www.facebook.com/business/help/1095661473946872?id=180505742745347) process before proceeding.

### Create WhatsApp Business Accounts (WABAs)

Once a Business Manager account is created and verified, proceed to set up WhatsApp Business Accounts (WABAs) using Meta's embedded signup flow in AI-Console's **Messaging Channels** section.
Five total WABAs need to be created: three for lower environments, one for the demo (testing) environment, and one for production. Your ASAPP account team can assist with the creation of WABAs for lower environments if needed; please reach out to coordinate account creation.

In this signup flow, you will set up an account name, time zone, and payment method for the WABA and assign full control permissions to the `ASAPP (System User)`.

#### Register Phone Numbers

As part of the signup flow, each WABA must have at least one phone number assigned to it (multiple phone numbers per WABA are supported). Before adding a number, you must also create a profile display name, **which must match the name of the Business Manager (BM) account.**

For implementation speed, ASAPP recommends using ASAPP-provisioned phone numbers for the three lower environment WABAs. Your ASAPP account team can guide you through this process. All provisioned phone numbers registered to WABAs need to meet [requirements specified by Meta](https://developers.facebook.com/docs/whatsapp/phone-numbers#pick-number).

### Modify Flows and Test

The WhatsApp customer experience is distinct from the ASAPP SDKs in several ways: some elements of the Virtual Agent are displayed differently, while others are not supported. Your ASAPP account team will work with you to implement intent routing and flows to account for nodes with unsupported elements, and to validate expected behaviors during testing before launch.

#### Buttons and Forms

All buttons with external links display using message text with a link for each button. For example, two buttons (**Hello, I open a link** and **Hello, I open a view**) each render as a message with a link.

Similarly, forms that agents send and feedback forms at the end of chat also send messages with links to a separate page to complete the survey. Once users complete the survey, the system redirects them back to WhatsApp.
#### Quick Reply Limitations

Quick replies in WhatsApp also have different limitations from other ASAPP SDKs:

* Each node may only include up to three quick reply options; the system truncates a node with more than three replies and shows only the first three.
* Each quick reply may only include up to 20 characters; the system truncates a quick reply with more than 20 characters and shows only the first 17 characters, followed by an ellipsis.
* You can send a node that includes both a button in the message and quick replies, but this is not recommended, as the system will send the links to the customer out of order.

#### Authentication

The WhatsApp Cloud API currently **does not support authentication**. As such, login nodes should not be used in flows that can be reached by users on WhatsApp.

#### Attachments and Cards

Nodes that include attachments, such as cards and carousels, are not supported in this channel.

WhatsApp also excludes some features that typically support live chat with an agent:

* **Images, voice messages, and emojis**: Agents cannot view images that customers send. The same applies to voice messages and emojis, which are part of the WhatsApp interface.
* **Typing preview and indicators**: Agents will not see typing previews or indicators while the customer is typing, and the customer will not see a typing indicator while the agent is typing.
* **Co-browsing**: This capability is not currently supported in WhatsApp.

### Create and Implement Entry Points

Entry points are where your customers start conversations with your business. You can embed a WhatsApp entry point into your websites in multiple ways: a clickable logo, a text link, an on-screen QR code, etc. You can also direct customers to WhatsApp from social media pages or use Meta's Ads platform to provide an entry point.
Ads are fully configurable within the Meta suite of products, and conversations that originate from interactions with them incur no cost.

ASAPP does not currently support [Chat Instead](/agent-desk/integrations/chat-instead "Chat Instead") functionality for WhatsApp.

### Determine Launch and Throttling Strategy

Depending on the entry points configured, your ASAPP account team will share launch best practices and throttling strategies.

# Virtual Agent

Source: https://docs.asapp.com/agent-desk/virtual-agent

Learn how to use Virtual Agent to automate your customer interactions.

Virtual Agent is a set of automation tools that enables you to automate your customer interactions and route them to the right agents when needed. Virtual Agent provides a means for better understanding customer issues, offering self-service options, and connecting with live agents when necessary.

You can deploy Virtual Agent to your website or mobile app via our Chat SDKs, or directly to channels like Apple Messages for Business. While you'll start with a baseline set of [core dialog capabilities](#core-dialog "Core Dialog"), the Virtual Agent requires thoughtful configuration to appropriately handle the use cases specific to your business.

## Customization

Virtual Agent is fully customizable to fit your brand's unique needs. This includes:

* Determining the list of Intents and how they are routed.
* Building advanced flows that take in structured and unstructured input.
* Reaching out to APIs to receive and send data.

### Access

You configure the Virtual Agent through AI-Console. To access AI-Console, log into [Insights Manager](/agent-desk/insights-manager "Insights Manager"), click your user icon, and then select **Go to AI-Console**. This option is available only if your organization has granted you permission to access AI-Console.
## How It Works

The Virtual Agent understands what customers say and transforms it into structured data that you can use to define how the Virtual Agent responds. This is accomplished via the following core concepts and components:

### Intents

The Virtual Agent recognizes Intents when customers first reach out, representing the set of reasons that customers might contact your business. The Virtual Agent can also understand when a user changes intent in the middle of a conversation (see [digressions](#core-dialog "Core Dialog")). Our teams can work with you to refine your intent list on an ongoing basis and train the Virtual Agent to recognize them.

Examples include requests to "Pay Bill" or "Reset Password". Once the Virtual Agent recognizes an intent, you can use it to determine what happens next in the dialog.

### Intent Routes

Once the Virtual Agent has recognized an intent, the next question is "so what?". Intent routes house the logic that determines what will happen after the Virtual Agent recognizes an intent.

* Once the Virtual Agent classifies a customer's intent, the default behavior places the customer in an agent queue.
* Alternatively, you can use an intent route to specify a pre-defined flow for the Virtual Agent to execute, which can collect additional information, offer solutions, or link customers out to self-serve elsewhere.
* To promote flexibility, intent routes can point to different flows based on conditional logic that uses contextual data, like customer channels.

For a comprehensive breakdown of the intent list and routes, please refer to the [Intent Routing](/agent-desk/virtual-agent/intent-routing "Intent Routing") section.

### Flows

Flows define how the Virtual Agent interacts with the customer given a specific situation.
They can be as simple as an answer to an FAQ, or as complex as a multi-turn dialog that offers self-service recommendations. You build flows through a series of [nodes](#flow-nodes "Flow Nodes") that dictate the flow of the conversation as well as any business logic it needs to perform. Once built, flows can be reached through [intent routing](#intent-routes "Intent Routes") or redirected to from other flows.

For more information on how flows are built, see our Flow Building Guide.

### Core Dialog

While much of what the Virtual Agent does is customized in flows, some fundamental aspects are driven by the Virtual Agent's core dialog system. This system defines the behavior for:

* **Welcome experience**: The messages that the system sends when someone opens a chat window, or receives a first message.
* **Disambiguation**: How the Virtual Agent clarifies ambiguous or vague initial utterances.
* **Digressions**: How the Virtual Agent handles a new path of dialog when a customer expresses a new intent.
* **Enqueuement & waiting**: How the Virtual Agent transitions customers to live chat, including enqueuement, wait time, & business hours messaging.
* **Post-live-chat experience**: What the Virtual Agent does when a customer concludes an interaction with a live agent.
* **Error handling**: How the Virtual Agent handles API errors or customer responses it doesn't recognize.

If you have any questions about these settings, please contact your ASAPP Team.

## Flow Nodes

You build flows through a series of nodes that dictate the flow of the conversation as well as any business logic it needs to perform.

1. **Response Node**: The most basic function of a flow is to define how the Virtual Agent should converse with the customer. Response nodes accomplish this by allowing you to configure Virtual Agent responses, send deeplinks, and classify what customers say in return.
2.
**Login Node**: When building a flow, you may want to require users to log in before proceeding to later nodes in a flow. You accomplish this by adding a login node to your flow that directs the customer to authenticate in order to proceed.
3. **API Node**: If the Virtual Agent has available API integrations, you can leverage those integrations to display data dynamically to customers and to route to different responses based on what the API returns. API nodes allow for the retrieval of data fields and the usage of that data within a flow.
4. **Redirect Node**: Flows can link to one another through the use of redirect nodes. This is powerful in situations where the same series of dialog turns appears in multiple flows. Flow redirects allow you to manage those dialog turns in a single location that is referenced by many flows.
5. **Agent Node**: In cases where the flow is unable to address the customer's concern on its own, an agent node directs the customer to an agent queue. The data associated with the customer is used to determine which live agent queue to put them in.
6. **End Node**: When your flow has reached its conclusion, an end node wraps up the conversation by confirming whether the customer needs additional help.

# Attributes

Source: https://docs.asapp.com/agent-desk/virtual-agent/attributes

ASAPP supports attributes that can be routed on to funnel customers to the right flow through [intent routing](/agent-desk/virtual-agent/intent-routing "Intent Routing"). Attributes tell the Virtual Agent who the customer is. For example, they indicate whether a customer is currently authenticated, which channel the customer is using to communicate with your business, or which services and products the customer is engaged with.

## Attributes List

The Attributes List contains all the attributes available for intent routing. Here, you'll find the following information displayed in table format:

1.
**Attribute name:** Display name of the attribute.
2. **Definition:** Indicates whether the attribute is Standard or Custom. ASAPP natively supports Standard attributes; Custom attributes are added in accordance with your business requirements.
3. **Type:** Indicates the value type of an attribute. There are two possible types: Boolean or Value.
   a. **Boolean:** A boolean attribute includes two values. For example: Yes/No, True/False, On/Off.
   b. **Value:** A value attribute can include any number of values. For example: Market 1, Market 2, Market 3.
4. **Origin Key:** Exact value that the company passes to ASAPP.

Contact your ASAPP team for more details on how to add a custom attribute.

## Attribute Details

To view specific attribute details, click an **attribute name** to launch the details modal.

1. **Description:** Describes what the attribute is.
2. **Value ID:** Unique, non-editable key that the system passes directly to ASAPP for that attribute (it can be non-human-readable).
3. **Value name:** Display name for the value to describe what the attribute value is. These value names appear in intent routing for ease of use.

Descriptions and value names can be edited. To modify these fields, make your changes and click **Save**. Changes are saved automatically and take effect immediately. There is no support for versioning or adding new attributes and/or values at this time; please contact your ASAPP team for support in this area.

# Best Practices

Source: https://docs.asapp.com/agent-desk/virtual-agent/best-practices

## Designing your Virtual Agent

### 1. Focus on Customer Problems

The most important thing to keep in mind when designing a good flow is whether it is likely to resolve the intent for most of your customers. It can be easy to diverge from this strategy (perhaps because a flow is designed with containment top of mind; perhaps because of inherent business process limitations).
But it's the best way you can truly allow customers to self-serve. ### (a) Understanding the Intent Since flows are invoked when ASAPP classifies an intent, understanding the intent in question is key to successfully designing a flow. The best way to do this is to review recent utterances that ASAPP has classified to the intent and categorize them into more nuanced use cases that your flow must address. This will ensure that the flow you design is complete in its coverage given how customers will enter the flow. ASAPP Historical Reporting makes these utterances accessible through the First Utterance table. ### (b) Ongoing Refinement Every flow you build can be thought of as a hypothesis for how to effectively understand and respond to your customers in a given scenario. Your ability to refine those hypotheses over time—and test new ones—is key to managing a truly effective virtual agent program that meets your customers' needs. We recommend performing the following steps on a regular basis—at least monthly—to identify opportunities for flow refinement, and improve effectiveness over time. #### Step 1: Identify opportunity areas in particular flows 1. **Flows with relatively high containment, but a low success rate:** This indicates that customers are dropping out of the flow before they receive useful information. 2. **Flows with the highest negative EndSRS rates:** This indicates that the flow did not meet the customer's needs. #### Step 2: Determine Likely Causes for Flow Underperformance, Identify Remedies Once you've identified problematic flows, the next step is to determine why they underperform. In most cases, reviewing transcripts of issueIDs from Conversation Manager in Insights Manager will quickly reveal at least one of the following issues with your flow: **1. General unhelpfulness or imprecise responses** Oftentimes flows break down when the virtual agent responds confidently in a manner that is on-topic but completely misses the customers' point. 
A common example is customers reaching out about difficulty logging in, only to be sent to the same "forgot your password" experience they were having issues with in the first place. Issues of this type typically receive a negative EndSRS score from the customer, who doesn't believe their problem has been solved.

The key to increasing the performance of these flows is to configure the virtual agent to ask further, more specific questions before jumping to conclusions. Following the example above, you could ask "Have you tried resetting your password yet?". Including this question can go a long way to ensure that the customer receives the support they're looking for.

**2. Unrecognized customer responses**

This happens when the customer says or wants to say something that the virtual agent is unable to understand. In free-text channels, this will result in classification errors where the virtual agent has re-prompted the customer to no avail, or has incorrectly attempted to digress to another intent.

You can identify these issues by searching for re-prompt language in transcripts where customers have escalated to an agent from the flow in question. Looking at the customers' problematic response, you can determine how best to improve your flow. If the customers' response is reasonable given the prompt, you can introduce a new response route in the flow and train it to understand what the customer is saying. Even if it's a path of dialog you don't want the virtual agent to pursue, it's better for the virtual agent to acknowledge what they said and redirect rather than failing to understand entirely.

**Don't:**

* "Which option would you prefer?"
* "Let's do both"
* "Sorry I didn't understand that. Could you try again?"

**Do:**

* "Which option would you prefer?"
* "Let's do both"
* "Sorry, but we can only accommodate one. Do you have a preference?"
Another option for avoiding unrecognized customer responses in free-text channels is to rephrase the prompt in a manner that reduces the number of ways that a customer is likely to respond. This is often the best approach in cases where the virtual agent prompt is vague or open-ended. **Don't:** * "What issue are you having with your internet?" * "I think maybe my router is broken" * "Sorry I didn't understand that. Could you try again?" **Do:** * "Is your internet slow, or is it completely down?" * "It's completely down" In SDK channels (web or mobile apps), which are driven by quick replies, the concern here is to ensure that customers have the opportunity to respond in the way that makes sense given their situation. A common example is failing to provide an "I'm not sure" quick reply option when asking a "yes or no" question. Faced with this situation, customers will often click on "new question" or abandon the chat entirely, leaving very little signal on what they intended. The best way to improve quick reply coverage is to maintain a clear understanding of the different contexts in which a customer might enter the flow---how they conceive of their issue, what information they might or might not have going in, etc. Gaining this perspective is helped greatly by reviewing live chat interactions that relate to the flow in question, and determining whether your flow could have accommodated the customer's situation. **3. Incorrect classification** This issue is unique to free-text use cases and happens when the virtual agent thinks the customer said one thing, when in fact the customer meant something else. One example would be a response like "no idea" being misclassified as "no" rather than the expected "I'm not sure." Another example might be a response triggering a digression (i.e., a change of intent in the middle of a conversation), rather than an expected trained response route.
This can happen in flows where you've trained response routes to help clarify a customer's issue but their response sounds like an intent and thus triggers a digression instead of the response route you intended. For example: ``` "I need help with a refund" "No problem. What is the reason for the refund?" "My flight got cancelled" "Are you trying to rebook travel due to a cancelled flight?" << Digression "No, I'm asking about a refund" ``` While these issues tend to occur infrequently, when you do encounter them, the best place to start is revising the prompt to encourage responses that are less likely to be classified incorrectly. For example, instead of asking an open-ended question like "What is the reason for your refund?"---to which a customer response is very likely to sound like an intent---you can ask directly ("Was your flight cancelled?") or ask for more concrete information from which you can infer the answer ("No problem! What's the confirmation number?"). Alternatively, you can solve issues of incorrect classification by training a specific response route that targets the exact language that is proving problematic. In the case of the unclear "I'm not sure" route, a response route that's trained explicitly to recognize "no idea" might perform better than one that is broadly trained to recognize the long tail of phrases that more or less mean "I'm not sure." In this case, you can point the response route to the same node as your generic "I'm not sure" route to resolve the issue. **4. Too much friction** Another cause for underperformance is too much friction in a particular flow. This happens when the virtual agent is asking a lot of the customer. One type of friction is authentication. Customers don't always remember their specific login or PINs, so authentication requests should be used only when needed. If customers are asked to find their authentication information unnecessarily, many will abandon the chat.
Another type of friction is repetitive or redundant steps--particularly around disambiguating the customer. While it's helpful to clarify what a customer wants to do to adequately solve their need, repetitive questions that don't feel like they are progressing the customer forward often lead to a feeling of frustration--and abandonment. #### Step 3: Version, improve, and track the impact of flow changes Once you've identified an issue with a specific flow, create a new version of it in AI-Console with one of the remedies outlined above. After you have implemented a new version, you can save and release the new version to a lower environment to test it, and subsequently to production. Then, track the impact in Historical Reporting in Insights Manager by looking at the Flow Success Rate for that flow on the Business Flow Details tab of the Flow Dashboard. ### 2. Know your Channels Messaging channels have advantages and limitations. Appreciating the differences will help you optimize virtual agents for the channels they live on, and avoid channel-specific pitfalls. To illustrate this, look at a single flow rendered in Apple Messages for Business vs the ASAPP SDK: The ASAPP SDK has quick replies, while Apple Messages for Business supports list pickers. #### (a) General rules of thumb * Be aware of each channel's strengths and limitations and optimize accordingly--these are described below. * Pay particular attention to potentially confusing interface states, and compensate by being explicit about how you expect customers to interact with a flow (e.g., "Choose an option below ...") * Be sure to test the flow on the device/channel it is deployed to in a lower environment. #### (b) Channel-specific considerations ##### ASAPP SDK The ASAPP SDKs (Web, Android, and iOS) have a number of features that help to build rich virtual agent experiences. Strengths of SDKs: 1.
Quick Replies - surface explicit text options to a customer to tap/click on, and route to the next part of a flow. 2. Authentication / context - with the authentication of customers, the SDK allows for a persistent chat history which provides seamless continuity. Additionally, authentication allows for the direct calling of APIs (e.g. retrieving a bill amount). Limitations: * Not as sticky an experience (i.e. it's not an application the customer has top of mind/high visibility), so the customer may abandon the chat. One cause for this is the lack of guaranteed push notifications -- particularly in the Web SDK. How to optimize for ASAPP SDKs: * We encourage you to build more complicated, multi-step flows, leveraging quick replies that keep customers on the rails. ### 3. Promote Function over Form First and foremost, your virtual agent needs to be effective at facilitating dialog. It may be tempting to prioritize virtual agent tone and voice, but that can ultimately detract from the virtual agent's functional purpose. Next we'll offer examples of effective and ineffective dialogs that will help you when building out your flows. #### (a) It's OK to sound Bot-like The virtual agent **is** a bot, and it primarily serves a functional purpose. It is much better to be explicit with customers and move the conversation forward, rather than making potential UX sacrifices to sound friendly or human-like. Customers are coming to a virtual agent to solve a specific problem efficiently. Here is a positive example of a greeting that, while bot-like, is clear and effective: ``` "Hello! How can I help you today? Choose from a topic below or type a specific question." ``` #### (b) Tell People How to Interact Customers interact with virtual agents to solve a problem and/or to achieve something. They benefit from explicit guidance on how they are supposed to interact with the virtual agent.
If your flow design expects the customer to do something, tell them upfront. Here is a positive example of clear instructions telling a customer how to interact with the virtual agent: ``` "Please choose an option below so we can best help" ``` #### (c) Set Clear Expectations for Redirects The virtual agent can't always handle a customer's issue. When you need to redirect the customer to self-serve on a website, or even to call a phone number, set clear expectations for what they need to do next. You never want a customer to feel abandoned. Here are two positive examples of very clear instructions about what the customer will need to do next, and what they can expect: ``` "To process your payment and complete your request, you'll need to call us at 1-800-555-5555. **Agents are available** from 8am to 9pm ET, Monday through Friday" "You can check the status of your order on our website by either **entering your order number** or **logging in**." ``` #### (d) Acknowledge Progress & Justify Steps Think of a bot like a standard interaction funnel -- a customer has to go through multiple steps to achieve an outcome. Acknowledging progress made and justifying steps to the customer makes for a better user experience, and makes it more likely for the customer to complete all of the steps (think of a breadcrumb in a checkout flow). The customer should have a sense of where they are in the process. Here's a simple example of orienting a customer to where they are in a process: ``` "We're happy to help answer questions about your bill, but will need you to sign in so we can access your account information." ``` #### (e) Be careful with Personification Over-personifying your virtual agent can make for a frustrating customer experience: * **Do** frame language in a more impersonal "we" * **Don't** make the virtual agent "I" * **Do** frame the virtual agent as a representative for your company. * **Don't** give your virtual agent a name / distinct personality.
* **Do** give your virtual agent a warm, action-oriented tone. * **Don't** give your virtual agent an overly friendly, text-heavy tone. * **Do** "Great! We can help you pay your bill now. What payment method would you like to use?" * **Don't** "Great, thank you so much for clarifying that! I am so happy to help you with your bill today." #### (f) Affirm What Customers Say, Not What the Flow Does Affirmations help customers feel heard, and they help customers understand what the virtual agent is taking away from their responses. When drafting a virtual agent response, ensure that you match the copy to the variety of customer responses that may precede it -- historical customer responses can be viewed in the utterance table in historical reporting. If there is a broad set of reasons for a customer to end up on a node or a flow, your affirmation should likewise be broad: * **Do** "We can help with that" * **Do** "We can help you with your bill" * **Don't** "We can help you pay your bill online" Similarly, if there is a narrow set of reasons for a customer to end up in a node or a flow, your affirmation should likewise be narrow. Even then, it's important not to phrase things in such a way that you're putting words in the mouth of the customer, so they don't feel frustrated by the virtual agent. * **Do** "To set up autopay ..." * **Don't** "It sounds like you want to set up autopay" * **Don't** "Okay, so autopay" In some cases where writing a good affirmation feels particularly tricky, feel free to err on the side of not having one. That's fine so long as the virtual agent responds in an expected manner given what the customer just said. ### 4. Reduce Friction If interacting with your virtual agent is confusing or hard, people will revert to tried and true escalation pathways like shouting "agent" or just calling in. As you are designing flows, be mindful of the following friction points you could potentially introduce in your flows.
#### (a) Be Judicious with Deep Links Deep links are used when you link a customer out of chat to self-serve. It is tempting to leverage existing web pages, and to create dozens of flows that are simple link outs. But this often does not provide a good customer experience. A virtual agent that is mostly single-step deep links will feel like a frustrating search engine. Wherever possible, try to solve a customer's problem conversationally within the chat itself. Don't rely on links as a default. But, when you **do** rely on a deep link, make sure to: 1. Validate the link actually solves the customer's intent and is accessible to all customers (e.g. not behind an authentication wall, or only accessible to certain types of customers). 2. Leverage native app links where possible. 3. Be clear about what the customer needs to do when they go to the link and leave the chat experience. #### (b) Avoid All-or-Nothing Flow Requirements Be careful with "all or nothing" requirements in a flow; if you want a customer to sign in to allow you to access an API, that's great, but give customers an alternative option at that moment too. Some customers might not remember their password. When you are at a point in a flow where there is a required step or just one direction a customer can go, think about what alternative answer there could be for a customer. If you don't, those customers might just abandon the virtual agent at that point. ### 5. Anticipate Failure It's tempting to design with the happy path in mind, but customers don't always go down the flow you expect. Anticipate the failure points in a virtual agent, and design for them explicitly. #### (a) Explicit Design for Error Cases Always imagine something will go wrong when asking the customer to do something: * When asking the customer to complete something manually, give them a response route or a quick reply that allows them to acknowledge it's not working (e.g. the speed test isn't working).
* When asking the customer to self-serve on a web page or in chat: allow them to go down a path in case that doesn't work (e.g. login isn't working). * When designing flows that involve self-service through APIs: explicitly design for what happens when the API doesn't work. #### (b) Consider Free Text Errors In channels where free text is always enabled (i.e., AMB, SMS), the customer input may not be recognized. We recommend writing language that guides the customer to explicitly understand the types of answers the virtual agent expects. Leverage "else" conditions in your flows (on Response Nodes). **Don't:** * "What issue are you having with your internet?" * "I think maybe my router is broken" * "Sorry I didn't understand that. Could you try again?" **Do:** * "Is your internet slow, or is it completely down?" * "I think maybe my router is broken" * "Sorry I didn't understand that. Is your internet slower than usual, or is your internet completely off?" ## Measuring Virtual Agents ### 1. Flow Success Containment is a measure of whether a customer was prevented from escalating to an agent; it is the predominant measure in the industry for chatbot effectiveness. ASAPP, however, layers on a more stringent definition called "Flow success," which indicates whether or not a customer was actually helped by the virtual agent. ### Important When you are designing a new flow or modifying an existing flow, be sure to enable flow success when you have provided useful information to the customer. "Flow success" is defined as when a customer arrives at a screen or receives a response that: 1. Provides useful information addressing the recognized intent of the inquiry. 2. Confirms a completed transaction in a back-end system. 3. Acknowledges the customer has resolved an issue successfully. With flow success, chronology matters. If a customer starts a flow, and is presented with insightful information (i.e. success), but then escalates to an agent in the middle of a flow (i.e.
negation of success), that issue will be recorded as not successful. ### How It Works Flow success is an event that can be emitted on a [node](/agent-desk/virtual-agent/flows#node-types "Node Types"). It is incumbent on the author of a flow to define which steps in the flow they design could be considered successful. Default settings: * **Response Nodes:** When flow reporting status is **on**, the **success** option will be chosen by default. * **Agent Node:** When flow reporting status is **on**, the **failure** option will be chosen. * **End & Redirect:** Flow success is not available in the tooling. By default, the End Node question will emit or not emit flow success depending on the customer response. ### 2. Assessing a Flow's Performance You can track your flows' performance on the "Automation Success" report in historical reporting. There you can assess containment metrics and flow success, which will help you determine whether a flow is performing according to expectations. ## Tactical Flow Creation Guide ### 1. Naming Nodes Flows are composed of different node types, which represent a particular state or action of a given flow. When you create a flow, you create a number of different nodes. We recommend naming nodes to describe what the node accomplishes in a flow. Clear node names will make the data more readable going forward. Here are some best practices to keep in mind: * Response node (no prompt): name it by the content (e.g. "NoBalanceMessage") * Response node (with prompt): name it by the request (e.g. "RequestSeatPreferences") * Any node that takes an action of some sort should start with the action being taken and end with what is being acted upon (e.g. "ResetModem") ### 2. Training Response Routes When you create a Response Node that is expected to classify free text customer input (e.g. "Would you like a one way flight or a round trip flight?"), you need to supply training utterances to train a response route.
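For example, routes for the one-way/round-trip question above might be trained with utterance sets shaped like the following. This is an illustrative sketch with hypothetical phrasing: in practice, response routes and their training utterances are configured in AI-Console, not in code.

```python
# Illustrative only: routes and utterances are configured in AI-Console.
# This sketch shows the shape of good training data for the prompt
# "Would you like a one way flight or a round trip flight?"
response_routes = {
    "ONE_WAY": [                 # be explicit, vary your language
        "one way",
        "just one way please",
        "I only need to get there",
        "no return flight",
        "a single trip",
    ],
    "ROUND_TRIP": [              # keep the neighboring route in mind:
        "round trip",            # answers here should not overlap with ONE_WAY
        "both ways",
        "I need a return flight too",
        "there and back",
        "departing and returning",
    ],
}

# Each custom route needs at least five training utterances.
assert all(len(utterances) >= 5 for utterances in response_routes.values())
```

Note how no utterance appears in both routes; overlapping training data across neighboring routes makes classification less reliable.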
There are some best practices you should keep in mind: * Be explicit where possible. * Vary your language. * More training utterances are almost always better. * Keep neighboring routes in mind -- what are the different types of answers you will be training, and how will the responses differ between them? ### 3. Designing Disambiguation Sometimes customers initiate conversations with vague utterances like "Help with bill" or "Account issues." In these cases the virtual agent understands enough to classify the customer's intent, but not enough to immediately solve their problem. When this happens, you can design a flow that asks follow-up questions to disambiguate the customer's particular need. Based on the customer's response you can redirect them to more granular intents where they can better be helped. Designing effective disambiguation starts with reviewing historical conversations to get a sense of what types of issues customers are having related to the vague intent. Once you've determined these, you'll want to optimize your prompt and response routes for the channel you're designing for: #### (a) ASAPP SDKs These channels are driven by quick replies only, meaning that the customer can only choose an option that is provided by the virtual agent. Here, the prompt matters less than the response branches / quick replies you write. Just make sure they map to things a customer would say---even if multiple response routes lead to the same place. For example: ``` We're happy to help! Please choose an option below: - Billing history - Billing complaint - Billing question - Something else ``` #### (b) Free-Text Channels, with Optional Quick Replies (Post-iOS 15 AMB) These channels offer quick replies, but do not prevent customers from responding with free text. The key here is optimizing your question to increase the likelihood that customers choose a quick reply. ``` We're happy to help!
Please tap on one of the options below: - Billing history - Billing complaint - Billing question - Something else ``` #### (c) Free-Text-Only Channels (Pre-iOS 15 AMB, SMS) These channels are often the most challenging, as the customer could respond in any number of ways, and given the minimal context of the conversation it's challenging to train the virtual agent to adequately understand all of them. Similar to other channels, the objective is to prompt in a manner that limits how customers are likely to respond. The simplest approach here is to list out options as part of your prompt: ``` Please tell us more about your billing needs. You can say things like "Billing history" "Question" "Complaint" or "Something else" ``` ### 4. Message Length Keep messages short and to the point. Walls of text can be intimidating. Never allow an individual message to exceed 400 characters (or even less if there are spaces). An example of something to avoid: ### 5. Quick Replies Quick Replies should be short and to the point. Some things to keep in mind when writing Quick Replies: * Avoid punctuation * Use sentence case capitalization, unless you're referring to a specific product or feature. * Keep to at least two and up to five quick replies per node. * While this is generally best practice, it is required for Quick Replies in Apple Messages for Business. * If there are more than 3 Quick Replies, the list will be truncated to the first 3 in WhatsApp Business. * External channels have character limits and any Quick Replies longer than these limits will be truncated: * Apple Messages for Business: 24 characters maximum * WhatsApp Business: 20 characters maximum # Flows Source: https://docs.asapp.com/agent-desk/virtual-agent/flows Learn how to build flows to define how the virtual agent interacts with the customer. Flows define how the virtual agent interacts with the customer.
They can be as simple as an answer to an FAQ, or as complex as a multi-turn dialog used to offer self-service recommendations. Flows are built through a series of [nodes](getting-started#flow-nodes "Flow Nodes") that dictate the flow of the conversation as well as any business logic it needs to perform. Once built, customers can reach flows from intents, or other flows can redirect to them. ## Flows List On the flows page, you will find a list of existing flows for your business. The system displays the following information in table format: * **Flow Name** A unique flow name, with letters and numbers only. * **Flow Description** A brief description of the objective of the flow. * **Traffic from Intent** Intents can be routed to specific flows through [intent routing](/agent-desk/virtual-agent/intent-routing "Intent Routing"). In this column, you will see which intents route to the respective flow. You can click the intent to navigate to the specific [intent routing detail page](/agent-desk/virtual-agent/intent-routing#intent-routing-detail-page "Intent Routing Detail Page") to view routing behavior details. * **Traffic from Redirect** Flows can link to one another through the use of [redirect nodes](#redirect-node "Redirect Node"). In this column, you will be able to see which existing flows redirect to the respective flow. You can click the flow to navigate to the specific [flow builder page](#flow-builder "Flow Builder") to view flow details. ## Flow Builder The flow builder consists of three major parts: 1. Flow Graph 2. Node Configuration Panel 3. Toolbar ### Flow Graph The Flow Graph is a visual representation of the conversation flow you're designing, and displays all possible paths of dialog as you create them. #### Select Nodes Each node in the graph can be selected by clicking anywhere on the node. Upon selection, the node configuration panel will automatically expand on the right.
#### Flow Graph Zoom You can zoom in on particular parts of the flow by using the zoom percentage bar at the bottom right or using your computer trackpad or mouse. ### Node Configuration Panel The node configuration panel allows you to manage settings and configure routing rules for the following [node types](#node-types "Node Types"): * [Response Node](#node-types "Node Types"): configure virtual agent responses, send deeplinks, and classify what customers say in return. * [Login Node](#login-node "Login Node"): direct the customer to authenticate before proceeding in the flow. * [Redirect Node](#redirect-node "Redirect Node"): redirect customer to another flow. * [Agent Node](#agent-node "Agent Node"): direct the customer to an agent queue. * [End Node](#end-node "End Node"): wrap up the conversation by confirming whether the customer needs additional help. * [API Node](#api-node): use API fields dynamically in your flows. ### Toolbar The toolbar displays the flow name and allows you to perform a number of different functions: 1. [Version Dropdown:](#navigate-flow-versions "Navigate Flow Versions") view and toggle through multiple versions of the flow. 2. [Version Indicators](#version-indicators "Version Indicators"): keep track of flow version deployment to Test or Production environments 3. [Manage Versions](#manage-versions "Manage Versions"): manage flow version deployment to Test or Production environments 4. [Preview](#preview-flow "Preview Flow"): click to preview your current flow version in real-time 5. More Actions: * Copy link to test: Navigate to your demo environment to test a flow. * Flow Settings: View flow information such as name, description, and flow shortcut. Learn more: [Save, Deploy, and Test](#save-new-flow "Save New Flow") ## Node Types ### Response Node The **Response** node allows you to configure virtual agent responses, send deeplinks, and classify what customers say in return. It consists of three sections: 1. **Content** 2. 
**Routing** 3. **Advanced Settings** ### Content The **Content** section allows you to specify the responses and deeplinks that the system will send to the customer. You can add as many of either as you like by clicking **Add Content** and selecting from the menu. Once added, this content can be easily reordered by dragging, or deleted by hovering over the content block and clicking the trash icon. In the flow graph, you will be able to preview how the system will display the content to the customer. #### Responses Any response text you specify will be sent to the customer when they reach the node. #### Deeplinks After selecting **Deeplink** from the **Add Content** menu, the following additional fields will appear: * **Link to**: select an existing link from the dropdown or directly [create a new link](/agent-desk/virtual-agent/links#create-a-link "Create a Link"). If you select an existing link, you can click **View link definition** to open the specific [link details](/agent-desk/virtual-agent/links#edit-a-link "Edit a Link") in a new tab. * **Call to action**: define the accompanying text that the customer will click on in order to navigate to the link. * **Hide button after new message**: choose to remove the deeplink after a new response appears to prevent users from navigating to the link past this node. ### Routing The **Routing** section is where you will configure what happens after the content is sent. You have two options: * **Jump to node** Choosing to **Jump to node** allows you to define a default routing rule that will execute immediately after the node content has been delivered to the user. * **Wait for response** Choosing to **Wait for response** means that the virtual agent will pause until the customer responds, then attempt to classify their response and branch accordingly. 
When this option is selected, you'll need to specify the branches and [quick reply text](#quick-replies "Quick Replies") for each type of response you wish the virtual agent to classify. See the [Branch Classifiers](#branch-classifiers "Branch Classifiers") section for more detailed information. Flows cannot end on a response node. To appropriately end a flow after a response node, route to an [End node](#end-node "End Node"). #### Branch Classifiers When **Wait for response** is selected for routing, you can define the branches for each type of response you wish the virtual agent to classify. There are two types of branch classifiers that you can use: * **System classifiers** ASAPP supports pre-trained system templates to classify free text user input. You can use branches like `CONFIRM` or `DENY` that are already trained by our system and are readily available for use for polar (yes/no) questions. You do not need to supply training utterances for system classifiers. * **Custom classifiers** If pre-trained classifiers do not meet your needs, define your own custom branches and supply training utterances. You must give your branch classifier a **Display Name** and supply at least five training utterances to train this custom classification. Learn more about how to best train your custom branches in the [Training response route](/agent-desk/virtual-agent/best-practices#2-training-response-routes "2. Training Response Routes") section. #### Quick Replies For each branch classifier, you should define the corresponding **Quick Reply text**. These will appear in our SDKs (web, mobile) and third-party channels as tappable options. ### Advanced Settings In the **Advanced Settings** section, you can set flow success reporting for the response node. #### Flow Success Flow success attempts to accurately measure whether a customer has successfully self-served through the virtual agent.
You measure this by setting the appropriate flow reporting status on certain nodes within a flow. Learn more: [How do I determine flow success?](/agent-desk/virtual-agent/best-practices#measuring-virtual-agents "Measuring Virtual Agents") To set flow reporting status for response nodes: 1. Toggle **Set flow reporting status** on. 2. By default, **Success** is selected for response nodes, but this can be modified for your particular flow. ### Login Node The **Login Node** enables customer authentication within a flow. In this node, you can define the following: * **Content** * **Routing** * **Advanced Settings** #### Content The **Content** section allows you to define the text to be shown to the customer to accompany the login action. All login nodes have default text which you can modify to suit your particular flow needs. * **Message text**: Define the main text that will prompt the customer to log in. * **Call to action**: Define the accompanying text that the customer will click on in order to log in. * **Text sent to indicate that a login is in process**: Customize the text that is sent after the customer has tried to log in. In the flow graph, you can preview how the content will be displayed to the customer. #### Routing Flows cannot end on a login node. The **Routing** section is where you can configure what happens after a customer successfully logs in or optionally configure branches for exceptional conditions. ##### On login In the **On login** section, you must define the default routing rule that will execute after the customer successfully logs in. ##### On response Similar to response nodes, you can optionally add response branches in the **On response** section to account for exceptional conditions that may occur when a customer is trying to authenticate, such as login errors or retries and refreshes.
Please see [Branch Classifiers](#branch-classifiers "Branch Classifiers") on the response node for more information on how to configure these routing rules. ##### Else In the **Else** section, you can define what happens if login is unsuccessful and we do not recognize customer responses. #### Advanced Settings In **Advanced Settings**, you have the option to **Force reauthentication**, which will prompt all customers to log in again, regardless of current authentication state. ### API Node The API node allows you to use API fields dynamically in your flows. The data you retrieve on an API node can be used for two things: 1. **Displaying the data** on subsequent nodes. 2. **Routing to different nodes** based on the data. #### Data Request The **Data Request** section allows you to add data fields from an existing API integration. Select **Add data fields** to choose objects from existing integrations, which will allow you to add collections of data fields to the node. There is a search bar that allows you to easily search through the available fields. After you select objects, all of the referenced fields will automatically populate in the API node. In addition to objects and arrays, you can request actions. You can only select one action per node; selecting an action will automatically disable the selection of additional objects, actions, and arrays. #### Input Parameters Some actions require input parameters for the API call, which you can define in AI-Console. In the node edit panel, you can see a field for defining parameters that will be passed as part of the API call. This field leverages curly brackets: click the **curly bracket** icon or press the **Shift + \{** or **}** keys to choose the API value to pass as an input parameter. Only valid data can be used for input parameters; objects or arrays will not be surfaced through curly brackets. #### Displaying Data You can easily display API fields from an API node in subsequent response nodes.
This field leverages curly brackets: click the **curly bracket** icon or type **\{** or **}** in the Response Node **Content** section to choose API values to display, which will render as a dynamic API field in the flow graph. When you click the API field itself, data format options appear that allow you to specify exactly what format to display to the end user. #### Routing to Different Nodes Routing and data operators allow you to specify different flow branching based on what is returned from an API. This leverages the same framework as routing on other nodes, but provides additional functionality around operators to give you flexibility in configuring routing conditions. Operators allow you to contextually define conditions to route on. #### Error Handling API nodes provide default error handling, but you can create custom error handling on the node itself if desired. You can specify where a user should be directed in the event of an error with the API call. #### API Library API fields are available under the integrations menu. On this page, you can view and search through all available objects and associated data fields. ### Redirect Node The **Redirect Node** serves to link flows with one another by directing the customer to a separate flow. A Redirect Node does not display content to the customer. In this node, you can define the following: * **Destination** * **Routing** * **Advanced Settings** #### Destination The **Destination** section allows you to define where to redirect the customer. You can redirect to an existing **flow** or an **intent**. * Select **Flow** to redirect to an individual flow destination. * Select **Intent** to redirect the customer to a broader issue intent that may route them to different flows depending on the [intent routing rules](/agent-desk/virtual-agent/intent-routing "Intent Routing"). 
Depending on the option you select, you will be able to select the destination flow or intent from the dropdown. #### Routing (Return Upon Completion) Redirect nodes can end your flow, or you can choose to have the customer return to your flow after the destination flow has completed. To do so, toggle on **Return upon completion**. After doing so, you can define the default routing rule that will execute upon the customer's return. ### Agent Node The **Agent Node** enables you to direct the customer to an agent queue in order to help resolve their issue. The data associated with the customer will be used to determine the live agent queue to put them in. #### Advanced Settings In the Advanced Settings section, you can set flow success reporting for the agent node. ##### Flow Success Flow success attempts to accurately measure whether a customer has successfully self-served through the virtual agent. This is measured by setting the appropriate flow reporting status on certain nodes within a flow. Learn more: [How do I determine flow success?](/agent-desk/virtual-agent/best-practices#measuring-virtual-agents "Measuring Virtual Agents") For agent nodes, this is always considered a failure. To set flow reporting status for agent nodes: 1. Toggle **Set flow reporting status** on. 2. By default, **Failure** will be selected for agent nodes. ### End Node The **End Node** wraps up the conversation by confirming whether the customer needs additional help. #### Advanced Settings In the **Advanced Settings** section, you can select the end Semantic Response Score (SRS) options (see below) for your flow. By default, all three options will be selected when an end node is added, thus presenting all three options for the customer to select from. You can expand the section to modify the options presented to the customer. 
##### End SRS Options At the end of a flow, the virtual agent will ask the customer: "Is there anything else we can help you with today?"\* After the above message is sent, there are three options available for the customer to select from: * **"Thanks, I'm all set"** A customer selecting this **positive** option will prompt the virtual agent to wrap up and resolve the issue. * **"I have another question"** A customer selecting this **neutral** option will prompt the virtual agent to ask the customer what their question is. * **"My question has not been answered"** A customer selecting this **negative** option will prompt the virtual agent to escalate the customer into agent chat to help resolve their issue. \*Exact end SRS options and text may vary. Please contact your ASAPP team for more details. ### Logic Nodes The **Logic Node** enables you to define a "rule" or "logic" by which a flow should branch based on different conditions. This gives you the ability to create more dynamic flows that adapt to different customer inputs or other conversational context. For example, a Logic Node can evaluate whether a user's `zipcode` equals `New York`: * If true, the flow continues to Message Node 1 ("Area is eligible for discount") * Otherwise, it goes to Message Node 2 ("Area is not eligible for a discount") ## Quick Start: Flows ### Create Flow Click **Create** to open a dialog for creating a new flow. The following data must be provided: * **Name:** Give a unique name for your flow, using letters and numbers only. * **Description:** Give a brief description of the purpose of the flow. Avoid vague flow names. Clear names and descriptions allow others to quickly distinguish the purpose of your flow. We recommend following an "Objective + **Topic**" naming convention, such as: Find **Food** or Pay **Bill**. Click **Next** to go to the flow builder where you will design and build your flow using the various [node types](#node-types "Node Types"). 
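As an illustration of the Logic Node behavior described above, here is a minimal sketch of the `zipcode` example in Python. This is purely illustrative: the dictionary-based context and the node identifiers are assumptions for the sketch, not the flow builder's actual data model.

```python
def evaluate_logic_node(context: dict) -> str:
    """Illustrative Logic Node: branch to a node based on a condition.

    `context` is a hypothetical bag of conversation attributes.
    """
    # Condition from the example: does the user's zipcode equal "New York"?
    if context.get("zipcode") == "New York":
        return "message_node_1"  # "Area is eligible for discount"
    return "message_node_2"      # "Area is not eligible for a discount"
```

In AI-Console you configure this branching visually rather than in code; the sketch just makes the true/otherwise routing explicit.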
### Preview Flow You can preview your flow as you are building it! In the toolbar, select the **eye** icon to open the in-line preview. This panel allows you to preview the version of the flow that is currently displayed. As you are actively editing a flow, select this icon at any time to preview your progress. To preview a previously saved version of the flow, navigate to the flow version in the [version dropdown](#version-indicators "Version Indicators"), then click the **eye** icon to preview. #### Preview Capabilities There are a few capabilities to leverage in preview: * **Resetting:** puts you back at the first node of the flow and allows you to test it again. * **Debug information:** opens a panel that provides more detailed insight into where you are in a flow and the metadata associated with your preview. * **Close:** closes the in-line preview. #### Preview with Mocked Data The real-time preview can also preview integrated flows using mocked data. By mocking data directly in the preview, you can test different flow paths based on the different values an API can return. 1. Define Request * You can define whether the request is a success or failure when previewing. Each API node is treated as a separate call in the preview experience. 2. View and Edit Mock Data Fields * For a successful API call, you can view and edit mock data fields, which will inform the subsequent flow path in the preview. * By default, all returned values are selected and pre-filled. Values set in the preview are cached until you leave the flow builder, to prevent the need to re-enter each mock data form. ### Save New Flow When you are building a new flow, the following buttons display in the toolbar: * **Discard changes:** remove all unsaved changes made to the flow. * **Save:** save changes to the flow as a new version or overwrite an existing version. To save your new flow, select **Save**. ### Deploy New Flow Newly created flows (i.e. 
the initial version) will **immediately deploy to test environments and production**. These new flows can be deployed without harm, since customers will not be able to invoke the flow unless there are incoming routes due to [intent routing](/agent-desk/virtual-agent/intent-routing "Intent Routing"). ### Test New Flow After deploying your flow to test, navigate to your respective test environment in order to verify your flow changes: 1. In the upper right corner of the toolbar, click the icon for **More actions**. 2. Select **Copy link to demo**. 3. Copy the **Flow Shortcut**. 4. Choose **Go to demo env**. 5. Once there, select the chat bubble and paste the flow shortcut into the text entry to start testing your flow. ### Edit & Save New Version You can make changes to your new flow by selecting a node and making edits in the [Node Configuration Panel](#node-configuration-panel "Node Configuration Panel"). Once you are ready to save your changes, select **Save**. Since the current version of the flow is already deployed to production, you will **NOT** be able to save over the current version and **MUST** save as a new version to prevent unintentional changes to flows in production. For future flow versions that are not deployed to production, you will be able to save your changes as a **new flow version** or to overwrite the **current flow version**. ### Deploy Version to Test After saving, you will be directed to **Manage Versions**, where you will manage which flow version is deployed to test environments and to production. All flows should be verified in test environments, such as demo or pre-production environments, before production. Therefore, new flow versions **MUST** be deployed to test **PRIOR** to [deployment in production](#deploy-version-to-prod "Deploy Version to Prod"). To deploy a flow version to test environments: 1. Select the new version you want to deploy in the version dropdown for **Test**. 2. After selection, click **Save**. 3. 
Flow version will deploy to all lower test environments within 5-10 minutes. ### Test Version After deploying your flow to test, navigate to your respective test environment in order to verify your flow changes: 1. In the upper right corner of the toolbar, click the icon for **More actions**. 2. Select **Copy link to demo**. 3. Copy the **Flow Shortcut**. 4. Choose **Go to demo env**. 5. Once there, select the chat bubble and paste the flow shortcut into the text entry to start testing your flow. ### Deploy Version to Prod After verifying the expected flow behavior in **Test**, you can deploy the flow version to production, which will impact customers if the [flow is routed from an intent](/agent-desk/virtual-agent/intent-routing "Intent Routing"): 1. Select the version you want to deploy in the version dropdown for **Prod**. 2. After selection, click **Save**. 3. Flow version will deploy to Production within 5-10 minutes. ### Manage Versions When you are simply viewing a flow without making any changes, **Manage Versions** will always be at the top of the toolbar for you to manage flow version deployments. Upon selection, the versions that are currently deployed to Test and Prod environments will display, which you can edit as appropriate. In addition to version deployments, you can view any existing [intents that route to this flow](/agent-desk/virtual-agent/intent-routing "Intent Routing") in **Incoming Routes**. Upon selection, you will be directed to the specific [intent detail](/agent-desk/virtual-agent/intent-routing#intent-routing-detail-page "Intent Routing Detail Page") page where you can view the intent routing rules. ### Navigate Flow Versions Many flows may iterate through multiple versions. You can toggle to view previous flow versions using the version dropdown: 1. Next to the flow name, click the version dropdown in the toolbar. 2. Select the version you want to view. 3. Once selected, the version details will display in the flow graph. 
4. You can click any node to start editing that specific flow version. #### Version Indicators As flow versions are iteratively edited and deployed to Test and Prod, there are a few indicators in the toolbar to help you quickly understand which version is being edited and which versions have been deployed to an environment: * **Unsaved changes** If the version is denoted with an asterisk along with a filled gray indicator of "Unsaved Changes", the flow version is currently being edited and must be saved before navigating away from the page. * **Unreleased version** If a version is denoted with a hollow *gray* indicator of *Unreleased version*, the flow version is saved but not deployed to any environment. * **Available in test** If a version is denoted with a hollow *orange* indicator of *Available in test*, the flow version is deployed to test environments (e.g. demo) but it is **not routed** from an intent. * **Live in test** If a version is denoted with a filled *orange* indicator of *Live in test*, the flow version is deployed to test environments (e.g. demo) and it is **routed from an intent**. * **Available in prod** If a version is denoted with a hollow *green* indicator of *Available in prod*, the flow version is deployed to the production environment but it is **not routed** from an intent. * **Live in prod** If a version is denoted with a filled *green* indicator of *Live in prod*, the flow version is deployed to the production environment and it **is routed from an intent which can be reached by customers**. * **Available in test and prod** If a version is denoted with a hollow *green* indicator of *Available in test and prod*, the flow version is deployed to both test environments (e.g. demo) and production, but it is **not routed** from an intent. 
* **Live in test and prod** If a version is denoted with a filled *green* indicator of *Live in test and prod*, the flow version is deployed to all environments and it **is routed from an intent which can be reached by customers**. #### View Intent Routing If a flow is **routed from an intent** (e.g. Live in...), you can hover over these indicators to view and navigate to the respective intent routing page. # Glossary Source: https://docs.asapp.com/agent-desk/virtual-agent/glossary | | | | :--- | :--- | | **Term** | **Definition** | | **Agent Node** | A flow node used to direct customers to a live agent. | | **AI-Console** | A web-based application for managing your implementation of ASAPP's virtual agent. | | **AMB** | See "Apple Messages for Business". | | **Ambiguous Utterance** | Customer utterances characterized by having multiple distinct meanings, like "My battery died." This contrasts with "vague" utterances, which are characterized by having broad, but still distinct, meaning (e.g. "Phone issue"). | | **Apple Messages for Business (AMB)** | Offers the ability for customers to chat with businesses directly in the Apple Messages app. Includes dedicated UIs to facilitate more efficient interactions than would be possible using traditional SMS, as well as support for highly impactful entry points in Siri Suggestions and chat intercepts for customers who tap on phone numbers while on their iOS device. Learn more at: [apple.com/ios/business-chat](https://www.apple.com/ios/business-chat/). 
| | **ASAPP Team** | Your direct representatives at ASAPP, inclusive of your assigned Solutions Architect, Customer Success Manager, and Implementation Manager. | | **Business Flow** | Business Flows resolve customer needs as indicated by their intent. This contrasts with "Non-Business Flows," which serve more generic purposes such as greeting a customer, disambiguating an utterance, or enabling customers to log in or connect with an agent. | | **Chat SDKs** | Embeddable chat UI that ASAPP offers for web, iOS, and Android applications. Each SDK supports quick replies, rich components, and various other content interactions to facilitate conversations between businesses and their customers. | | **Classification** | Refers to the process of classifying the customer's intent by analyzing the language they use. | | **Containment** | The rate at which the virtual agent resolves customer interactions without the need for human interaction. | | **Core Dialog** | Refers to the settings that define how the virtual agent behaves in common dialog scenarios like initial welcome, live chat enqueuement, digressions (triggering a new intent in the middle of a flow), and error handling. | | **Customer** | Your customer who is engaging with your virtual agent. | | **Customer Channels** | The set of UIs and applications that your customers can use to engage with your business. Includes chat SDKs, Apple Messages for Business, SMS, etc. | | **Deeplinks** | Links that send users directly to a web page or to a specific page in an app. | | **Dialog Turns** | The conversational steps required for a virtual agent to acquire the relevant information from the end-user. | | **Disambiguation** | The process whereby the virtual agent gets clarification from the customer on what the customer's message means. Disambiguation is often triggered when the customer's message matches multiple intents. 
| | **End Node** | The flow node used to end a flow and trigger end SRS options (see: Semantic Response Score). | | **Enqueuement** | Refers to the process where a customer is waiting in queue to chat with a live agent. | | **Flow** | Flows define how the virtual agent interacts with the customer given a specific situation. They are built through a series of nodes. | | **Flow Success** | Metric to accurately measure whether a customer has successfully self-served through the virtual agent. | | **Free Text** | The unstructured customer utterances that can be freely typed and submitted without Autocomplete or quick replies. | | **Insights Manager** | The operational hub through which users can monitor traffic and conversations in real time, gain insights through Historical Reporting, and manage queues and queue settings. Learn more in the [Insights Manager overview](../insights-manager "Insights Manager"). | | **Intent** | Intents are the set of reasons that a customer might contact your business and are recognized by the virtual agent when the customer first reaches out. The virtual agent can also understand when a user changes intent in the middle of a conversation (see: digressions). | | **Intent Code** | Unique, capitalized identifier for an intent. | | **Intent Routes** | The logic that determines what will happen after an intent has been recognized. | | **Library** | The panel that houses content that can be used within intent routing and flows. | | **Login Node** | A flow node used to enable customer authentication within a flow. | | **Multi-Turn Dialog** | A dialog in which questions are filtered or refined over several turns to determine the correct answer. | | **Node** | Functional objects used in flows to dictate the conversation as well as any business logic it needs to perform. | | **Queue** | A group of agents assigned to handle a particular set of issues or types of incoming customers. 
| | **Quick Reply** | The set of buttons that customers can directly choose to respond to the virtual agent. | | **Redirect Node** | A flow node used to link to other flows. | | **Response Node** | A flow node used to configure virtual agent responses, send deeplinks, and classify what customers say in return. | | **Response Routes** | On a response node, the set of routes defined to classify a customer response and branch accordingly. Users will define the training data and quick reply text for each type of response. | | **Routing (within flows)** | On any given node, the set of rules that determine what node the virtual agent should execute next. | | **Self-Serve** | Regarding the virtual agent, self-serve refers to cases where the virtual agent helps a customer resolve their issue without the need for human agent intervention. | | **Semantic Response Score (SRS)** | Options presented at the end of a flow to help gauge whether or not the virtual agent met the customer's needs. | | **User** | Refers to the user of AI-Console. Users of the chat experience are referred to as "customers." | | **Vague Utterance** | Customer utterances characterized by having broad, but still distinct, meaning (e.g. "Phone issue"). This contrasts with "ambiguous" utterances, which are characterized by having multiple distinct meanings, like "My battery died." | | **Virtual Agent** | The ASAPP Virtual Agent is a chat-based, multi-channel artificial intelligence that can understand customer issues, offer self-service options, and connect customers with live agents when necessary. | # Intent Routing Source: https://docs.asapp.com/agent-desk/virtual-agent/intent-routing Learn how to route intents to flows or agents. Intents are the set of reasons that a customer might contact your business, and the virtual agent recognizes them when the customer first reaches out. Your ASAPP team works with you to optimize your intent list on an ongoing basis. 
Within intent routing, you can view the full list of intents and the routing behavior after an intent is recognized. Creators have the ability to modify this behavior. ## Navigate to Intent Routing You can access **Intent Routing** in the **Virtual Agent** navigation panel. ## Intent Routing List On the intent routing page, you will find a filterable list of intents along with their routing information. The table displays the following information: 1. **Intent name:** displays the name of the intent, as well as a brief description of what it is. 2. **Code:** unique identifier for each intent. 3. **Routing:** displays the flow routing rules currently configured for an intent, if available. a. If the intent is routed to one or more flows, the column will list those flows. b. If the intent is not routed to any flow, the column will display 'Add Route...'. These intents will immediately direct customers to an agent queue. ## Intent Routing Detail Page Clicking on a specific intent in the list will direct you to a page where routing behavior for the intent can be defined. The intent detail page is broken down as follows: 1. **Routing behavior** 2. **Conditional rules and default flow** 3. **Intent information** 4. **Intent toolbar** ### Routing Behavior Routing behavior for a specific intent is determined by selecting one of the following options: 1. **Route to a live agent** When the system identifies the intent, it will immediately direct the customer to an agent queue. This is the default selection for any new intents unless configured otherwise. 2. **Route to a flow** When the system identifies the intent, it will direct the customer to a flow in accordance with the [conditional rules](#conditional-rules-and-default-flow) that you will subsequently define. 
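Conceptually, the routing behavior above can be sketched as follows. This is a simplified illustrative model, not ASAPP's implementation: the attribute names and the dictionary-based condition format are assumptions. Conditional routes (covered in the next section) are evaluated in order, with a default flow as the fallback:

```python
def route_intent(routing_behavior, conditional_routes, default_flow, attributes):
    """Simplified model of intent routing.

    conditional_routes: ordered list of (condition, flow) pairs, where a
    condition is a dict of attribute -> expected value (hypothetical format).
    Conditions are evaluated top to bottom; the first valid route wins.
    """
    if routing_behavior == "live_agent":
        return "agent_queue"  # default behavior for new intents
    for condition, flow in conditional_routes:
        if all(attributes.get(k) == v for k, v in condition.items()):
            return flow
    return default_flow  # no conditional route was valid
```

For example, a route conditioned on `authentication equals true` would send authenticated customers to one flow while everyone else falls through to the default flow.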
### Conditional Rules and Default Flow If an intent is configured to be [routed to a flow](#routing-behavior), you have the option to build conditional rules and route to a flow only when the system validates the conditions as TRUE. If none of the conditional rules are valid, the system will route customers to a [default flow](#default-flow) of your choosing. #### Add Conditional Route To add a new conditional route: 1. Select **Add Conditional Route**. 2. Define a conditional statement in the **Conditional Route** editor by: a. Selecting an available [attribute](/agent-desk/virtual-agent/attributes) as the target from the drop-down menu and choosing the value to validate against, e.g. authentication equals true. i. Multiple conditions can be added by clicking **Add Conditions**. Once added, they can be reordered by dragging, or deleted by clicking the trash can icon. b. Selecting, from the dropdown, the flow to route customers to if the conditions are validated. c. Clicking **Apply** to save your changes. 3. Edit or delete a route by hovering over the route and selecting the respective icons. #### Multiple Conditional Routes You can add multiple conditional rules that route to different flows. You can reorder these conditions by dragging the conditional rule from the icon on the left. Once saved, the system evaluates conditions from top to bottom, with the customer being routed to the first flow for which the system validates the conditions. If no conditional route is valid, the system will route the customer to the [default flow](#default-flow). #### Default Flow A default flow must be selected if the routing behavior is defined to [route to a flow](#routing-behavior). Customers will be routed to the selected default flow if no conditional routes exist, or if none of the conditional routes were valid. ### Intent Information The **Intent Information** panel will display the intent name, code, and description for easy reference as you are viewing or editing intent routes. 
The **Assigned routes** will display any flow(s) that are currently routed from the intent. ### Intent Toolbar When you are editing intent routing, the toolbar displays the following buttons: * **Discard changes**: remove all unsaved changes. * **Save**: save changes to intent routing. ## Save Intent Routing To save any changes to intent routing, click **Save** from the toolbar. By default, when saving an intent route, it is immediately released to production. There is currently no versioning available when saving intent routes. ### Test a Different Intent Route in Test Environments To avoid impacting customer routing and assignments in production, you can test a particular intent route in a test environment before releasing it to customers by following the steps below: * In the **Conditional Route** editor, add a condition that targets the 'InTest' attribute. a. The value assigned to 'InTest' should equal 'TRUE'. b. Select the flow that you want to test the routing for. c. Click **Apply**. To fully release the intent route to production, delete the conditional statement and update the routing to the new flow. ## Test Intent Routes Intent routes can be tested in demo environments. To test an intent route: 1. Access your demo environment. 2. Type `INTENT_INTENTCODE`, where `INTENTCODE` is the code associated with the intent you want to test. Note that this is case sensitive. 3. Press **Enter** to test intent routes for that intent. # Links Source: https://docs.asapp.com/agent-desk/virtual-agent/links Learn how to manage external links and URLs that direct customers to web pages. ASAPP provides a powerful mechanism to manage external links and URLs that direct customers to web pages. Links are used predominantly in flows, core dialogs, and customer profiles. ## Links List The Links list page displays a list of all links available to use in AI-Console. 
When you create a link, you can attach it to content in a node in Flow Tooling, include it in the Customer Profile panels, assign it to a View, etc. Here, you'll find the **Link name & URL**. When adding a link to a flow or other feature, you select it from a list of all link names. ## Create a Link To create a link: 1. From the **Links** landing page, click the **+** button at the bottom right. 2. A modal window will open. 3. **Link name:** Provide a name for the link. Make the name descriptive so that other users can recognize its purpose. 4. **URL:** Include the full external URL, including **http\://** (e.g., `http://example.com/about`). 5. **Channel Targets:** This feature is optional. It allows users to create a link variant that targets customers using a specific channel. See details below. ### Add a Channel Target Variant 1. Click **Add Channel Target** to add a URL variant. The system adds a new input field. a. **URL Override:** Include the URL variant for the targeted channel. Follow the same URL syntax as described under **Create a Link**. b. **Channel Target:** From the drop-down menu, select which channel to target. Note that only a single variant per channel is currently supported. 2. **Delete targets:** To remove a target, click the **Delete** icon. 3. **Save:** To save the link, click the **Save** button. The link will not be active until it is assigned to a flow, customer profile, or any other feature that supports **Links**. 4. **Cancel:** Click to discard all changes. ### Link Assignments Once you create a link, you can send it to customers in flows. The **Links** feature keeps tabs on where each link has been assigned and provides quick access to those feature areas. When viewing a specific link, the Usage section indicates which flows are currently using the respective link. Click a flow to navigate directly to it. When a link is not assigned in any flow, the system displays 'Not yet assigned'. 
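The channel-target behavior described above can be sketched as a simple fallback lookup. This is an illustrative model only; the channel names and the mapping structure are assumptions, not the actual configuration schema:

```python
def resolve_link_url(base_url: str, channel_overrides: dict, customer_channel: str) -> str:
    """Return the URL variant for the customer's channel, if one exists.

    channel_overrides maps a channel name to its override URL
    (only a single variant per channel is supported).
    Falls back to the link's base URL when no variant targets the channel.
    """
    return channel_overrides.get(customer_channel, base_url)
```

For example, a link whose base URL is `http://example.com/about` could carry an iOS-specific override that is served only to customers on the iOS SDK channel.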
## Edit a Link Link changes are global, which means that the system immediately pushes saved changes to all features that reference the link. 1. From the **Links** landing page, click the **link name** you want to edit. 2. **Link ID:** After you save a link for the first time, the system automatically assigns a unique identifier to the link. This identifier does not change over time, including when you edit the link. a. The **Link ID** can be referenced in **Historical Reporting** for your reporting needs. 3. Make any changes to the link configuration. 4. **Save:** When changes are complete, click **Save** to automatically apply the changes. ## Delete a Link Links can be deleted, but only if they are not currently assigned. To delete a link that is assigned, remove the assignments first. 1. If the link is assigned: When opening the Link modal, the **Delete** button will be disabled. The delete function will remain disabled until all link assignments have been removed. 2. If the link is not assigned: The link can be deleted by clicking the **Delete** button in the bottom-left area of the link modal. # Surveys Source: https://docs.asapp.com/agent-desk/virtual-agent/surveys Collect customer satisfaction feedback after interactions with agents and AI systems. Surveys allow you to gather customer satisfaction (CSAT) feedback following interactions with Virtual Agents or GenerativeAgents. This feature helps you evaluate and improve your support quality by capturing structured feedback directly from customers. With surveys, you can: * **Capture structured feedback** on agent and AI interactions * **Use survey data** for reporting and automation optimization * **Reduce manual feedback collection** or disjointed tools * **Improve CSAT** without disrupting the user experience ## Survey Types ASAPP supports three types of surveys: * **Virtual Agent Surveys**: Collect feedback after interactions with automated Virtual Agent flows. 
* The system triggers these surveys only at the end of Virtual Agent interactions. * **GenerativeAgent Surveys**: Collect feedback after interactions with AI-powered GenerativeAgents. * The system triggers these surveys only at the end of GenerativeAgent interactions, not when escalating from Virtual Agent to GenerativeAgent, to avoid redundant feedback collection. * **Third-Party Surveys**: Collect feedback through an external survey provider, such as Qualtrics. See [Third-Party Survey Support](#third-party-survey-support) below. ## Setting up Surveys Each survey type has a different setup process: Virtual Agent surveys require configuration, and the system then triggers them at the end of the Virtual Agent interaction when the final flow ends with SRS options and the customer selects the positive outcome. Work with your ASAPP account team to enable Virtual Agent surveys for your organization. Virtual Agent surveys are disabled by default and require ASAPP assistance to enable. Once enabled, you can edit the survey form, which determines what information is collected from the customer. 1. Navigate to Company Resources > Library > Forms 2. Select Virtual Agent Feedback Survey Form 3. Modify the form according to your business needs 4. Save the form Once the form is created, you can customize the survey settings. 1. Navigate to VirtualAgent > Core Dialog 2. Select "VirtualAgentCSATSurveyFlow" 3. Modify the settings according to your business needs 4. Save and deploy the settings Once the survey is enabled and configured, you can test it by walking through a flow that ends with SRS options. Preview the flow, follow through the flow, and select the positive outcome. The survey should appear at the end of the flow. GenerativeAgent surveys require configuration, and the system then triggers them at the end of the GenerativeAgent interaction when GenerativeAgent returns control to VirtualAgent with a specific system transfer. Work with your ASAPP account team to enable GenerativeAgent surveys for your organization. GenerativeAgent surveys are disabled by default and require ASAPP assistance to enable. 
Create an intent that maps to a flow that opens the GenerativeAgent survey form. 1. Navigate to VirtualAgent > Intents 2. Create a new intent (e.g., "survey\_feedback") 3. Map this intent to a flow that opens the survey form 4. Save the intent configuration Create or modify a system transfer function to return a key/value pair for survey triggering. 1. Navigate to GenerativeAgent > Functions 2. Create or modify a system transfer function 3. Configure the function to return the following key/value pair in its output variables: * Key: `external_intent` * Value: The intent name you created in the previous step 4. Save the function configuration Add the system transfer function to relevant GenerativeAgent tasks to enable survey triggering. Once the survey is enabled and configured, you can test it by walking through a flow that engages GenerativeAgent and following the conversation until GenerativeAgent returns control to VirtualAgent. The survey should appear after GenerativeAgent returns control to VirtualAgent. ## Third-Party Survey Support You can use third-party survey providers, such as Qualtrics, to collect feedback from your customers. The system displays a survey link that the user can click to open the third-party survey tool. The system integrates results from external surveys into your reporting. To use a third-party survey provider, work with your ASAPP account team to configure the survey settings to display the survey link. ## Survey Results and Reporting The system automatically stores survey results and makes them viewable in Conversation Manager. Conversation Manager does not provide results for third-party surveys. The system creates events for when a survey is displayed and when a survey is submitted. This data is available via the [ASAPP Messaging Feeds](/reporting/retrieve-messaging-data). ## Next Steps Learn how to build and configure flows for your Virtual Agent interactions. Understand how to route customer intents to appropriate flows and surveys.
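The GenerativeAgent survey trigger configured above can be sketched as a small routing helper: when GenerativeAgent hands control back to VirtualAgent, the system transfer's output variables carry the `external_intent` key, and that intent routes to the survey flow. The flow name, mapping, and helper below are illustrative assumptions; `survey_feedback` is the example intent name from the setup steps.

```python
# Sketch of the hand-back trigger described in the setup steps above.
# The flow name and intent-to-flow mapping are illustrative assumptions;
# "survey_feedback" is the example intent name from step 2.
INTENT_TO_FLOW = {"survey_feedback": "GenerativeAgentSurveyFlow"}  # hypothetical mapping

def flow_on_handback(transfer_output):
    """Return the flow to open when GenerativeAgent returns control to
    VirtualAgent, based on the system transfer's output variables."""
    intent = transfer_output.get("external_intent")
    return INTENT_TO_FLOW.get(intent)
```

If the output variables contain no `external_intent` key (or an unmapped value), no survey flow is opened and the conversation proceeds normally.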
# AI Compose Source: https://docs.asapp.com/ai-productivity/ai-compose ASAPP AI Compose helps agents compose the best response to customers, using machine learning techniques to suggest complete responses, partial sentences, key phrases and spelling fixes in real-time based on both the context of the conversation and past agent behavior. ## Features AI Compose provides the following features: | Feature | Description | | :----------------------- | :------------------------------------------------------------------------------------------------------------------------ | | **Autosuggest** | Provides up to three suggestions that appear in a suggestion drawer above the typing field before the agent begins typing | | **Autocomplete** | Provides up to three suggestions that appear in a suggestion drawer above the typing field after the agent starts typing | | **Phrase autocomplete** | Provides in-line phrase suggestions that appear while an agent is typing | | **Response quicksearch** | Allows in-line search of global and custom responses | | **Fluency correction** | Applies automatic grammar corrections that agents can undo | | **Profanity blocking** | Prevents agents from sending messages containing profanity to customers | | **Custom response list** | Enables management of an individual agent's custom responses in a simple library interface | | **Global response list** | Enables management of global responses in a simple tooling interface | ## How it works AI Compose takes in a live feed of your agents' conversations and then uses our various AI models to return a list of changes or suggested responses based on the state of the conversation and the currently typed message. 1. Provide conversation data via the Conversation API. 2. In your Agent Application, call the AI Compose APIs to retrieve the list of changes or suggested responses. 3. Show the potential changes or responses to your agents for them to incorporate. 
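The three-step loop above can be sketched as follows. The endpoint paths match the deployment guide, but the base URL, header placeholders, helper names, and payload field names are illustrative assumptions rather than the definitive client implementation:

```python
"""Sketch of the AI Compose request loop: create a conversation,
request suggestions, and render them to the agent for review."""

BASE = "https://api.example-asapp-sandbox.com"  # hypothetical base URL

HEADERS = {
    "asapp-api-id": "<your-api-id>",        # issued in the Developer Portal
    "asapp-api-secret": "<your-api-secret>",
}

def create_conversation(external_id, agent_id, customer_id):
    # Step 1: provide conversation data via the Conversation API.
    return {
        "method": "POST",
        "url": f"{BASE}/conversation/v1/conversations",
        "headers": HEADERS,
        "json": {
            "externalId": external_id,                    # assumed field name
            "timestamp": "2024-01-01T12:00:00.000000Z",   # RFC3339, UTC
            "agent": {"externalId": agent_id},            # assumed shape
            "customer": {"externalId": customer_id},
        },
    }

def request_suggestions(conversation_id, query):
    # Step 2: call AI Compose with the agent's currently typed text.
    return {
        "method": "POST",
        "url": f"{BASE}/autocompose/v1/conversations/{conversation_id}/suggestions",
        "headers": HEADERS,
        "json": {"query": query},
    }

# Step 3: render the returned suggestions in the agent interface for review.
```

Each helper returns a request description rather than performing I/O, so the same sketch applies whatever HTTP client your agent application uses.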
This improves your agents' efficiency while still allowing them to review changes, ensuring only the highest quality of responses reach your customers. AI Compose has the following technical components: | Component | Description | | :--------------------- | :------------------------------------------------------------------------------------------------------------------------------------ | | **Autosuggest model** | LLM that ASAPP retrains with agent usage data | | **Data Storage** | Storage for historical conversations, global response lists, and agent historical feature usage that ASAPP uses for weekly retraining | | **Conversation API** | An API for creating and updating conversations and conversation data | ## Get Started Integrate AI Compose into your applications and scale up your agent response rates. ### Integrate AI Compose AI Compose is available both as an integration into leading messaging applications and as an API for custom-built messaging interfaces. For technical instructions on how to implement the service for each approach, refer to the deployment guides below: Learn more on the use of AI Compose API Deploy AI Compose via LivePerson Deploy AI Compose on your Salesforce solution ### Use AI Compose For a functional breakdown and walkthrough of effective use cases and configurations, refer to the guides below: Learn more on the use of AI Compose Check the tooling options for AI Compose ### Feature Releases Visit the feature releases for new additions to AI Compose functionality Product and Deployment Guides are updated as new features become available in production. ## Enhance AI Compose ASAPP AI Summary is a recommended pairing with AI Compose, generating conversation summaries of key events for 100% of customer interactions. Note-taking and disposition questions take call time and agent focus, both of which can have a negative impact on agent performance.
Removing summarization tasks from agents through automation can keep agents focused on messaging with customers and yield higher summary data coverage than manual agent notes. Head to AI Summary Overview to learn more. Learn more about AI Summary on ASAPP.com # AI Compose Tooling Guide Source: https://docs.asapp.com/ai-productivity/ai-compose/ai-compose-tooling-guide Learn how to use the AI Compose tooling UI ## Overview This page outlines how to manage and configure global response lists for AI Compose in Agent Desk. The global response list is created and maintained by program administrators, and the responses contained within it can be suggested to the full agent population. Suggestions given to agents can also include custom responses created by agents and organic responses, a personalized list of each agent's frequently used responses. To learn more about AI Compose Features, go to [AI Compose Product Guide](/ai-productivity/ai-compose/product-guide). Agent Desk gives program administrators full control over the global response list. In Agent Desk, click **AI Compose** and then **Global Responses** in the sidebar. ## Best Practices The machine learning models powering AI Compose look at the global response list and select the response that is most likely to be said by the agent. To create an effective global response list, take into account the following best practices: 1. We recommend a global response list containing 1,000-5,000 responses. * The more global responses, the better. Having responses that cover the full range of common conversational scenarios enables the models to make better selections. * Deploying a small response list that contains only one way of saying each phrase is not recommended. The best practice is to include several ways of saying the same phrase, as that will enable our machine learning models to match each agent's conversational style.
* Typically, the list is generated by collecting and curating the most frequent agent messages from historical chats at the beginning of an ASAPP deployment. 2. Keep responses up to date as business logic and policies change to avoid suggesting stale information. ## Managing Responses The Global Responses page contains a table where each row represents a response that can be suggested to an agent. There are two ways of managing the global response list: 1. Directly add or edit responses through the AI-Console UI, which provides a simple and intuitive experience. This method is best suited for small volumes of changes. 2. Upload a .csv file containing the entire global response list to perform a bulk edit. This method is best suited for large volumes of changes. The following table describes the elements that can be included with each response: | Field | Description | Required | | :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- | | Text | The text field contains the response that can be suggested to an agent. Optionally, the text can include [metadata inserts](#metadata "Metadata") to dynamically embed information into a response. | Yes | | Title | Used to provide short descriptors for responses. If you specify a title, the response will display its title when suggested to an agent. | No | | Metadata filters | Used to determine when a response can appear as a suggestion. Allows responses to be filtered to specific agents based on one or more conditions (e.g., filtering responses to specific queues). | No | | Folder path | Used to organize responses into folder hierarchies. Agents can access and navigate these folders to discover relevant responses.
| No | ## Uploading Responses in Bulk The global response list can be updated by uploading a .csv file containing the full response list. The recommended workflow is to first download the most recent response list, make changes, and upload the list back into AI-Console. ### .csv Templates The following instructions provide detailed descriptions of how responses need to be defined when using a .csv file. **Text** The text field should contain the exact response that will be suggested to an agent. Optionally, the text field may contain metadata inserts. To use a metadata insert within a response, type the key of the metadata insert inside curly brackets: > "Hello, my name is \{rep\_name}. How may I assist you today?" To learn more about which metadata inserts are available to use within responses, see [Metadata](#metadata "Metadata"). **Folder path** Responses can be organized within a folder structure. This field can contain a single folder name, or a series of nested folders. If using nested folders, each folder should be separated by the ">" character (e.g. "PARENT FOLDER > CHILD FOLDER"). **Title** The title field enables short descriptions for responses. Titles do not need to be unique. **Metadata filters** You can add metadata filters by specifying conditions using the metadata filter key and metadata filter value columns. Key: The metadata filter key contains the field on which to condition the response. For example, if you want to filter a response to a specific queue, the metadata key should be "queue\_name". Value: The metadata filter value specifies for which values of the metadata key the response will be valid. A single metadata filter key can have multiple values, which should be written as a comma-separated list. For example, if the response should be available to the "general" and "escalation" queues, then the metadata filter value should be "general, escalation". A response can contain multiple conditions. 
To define multiple conditions, separate each with a new line; press shift+enter on Windows or option+enter on Mac to enter a new line in the same cell. [Click here to download a global responses template file](https://docs-sdk.asapp.com/product_features/global-responses-template.csv). **Getting the "invalid character �" error when uploading a response list?** If you are uploading a response list and seeing an error message that a response contains the invalid character �, it is likely caused by using Microsoft Excel to edit the response list, as Excel uses a non-standard encoding mechanism. To fix this issue, select **Save as...** and under **File Format**, select **CSV UTF-8 (Comma delimited) (.csv)**. ## Saving and Deploying Saving changes to the global response list or uploading a new list from a .csv file will create a new version. You can see past versions by selecting **Past Versions** under the vertical ellipses menu. You can easily deploy the global response list into testing or production environments. An indicator at the top of each version indicates the status of the response list: unsaved draft, saved draft, deployed in a testing environment, or deployed in production. ## Metadata The Metadata Dictionary, accessible through the navigation panel, provides an overview of metadata that is available for your organization to use in global responses. There are two types of metadata: * **Metadata inserts** are used within the text of each response as templates that can dynamically insert information. Inserts are defined using curly brackets (e.g. Hello, this is \{rep\_name}, how may I assist you today?). * **Metadata filters** introduce conditions to control in which conversations responses can be suggested. By default, responses without any metadata filters are available as suggestions for the entire agent population. Common patterns for filtering include restricting responses to specific queues or lines of business.
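Tying the .csv conventions and metadata types together, the sketch below writes a small, valid UTF-8 response list: a metadata insert in curly brackets, a ">"-separated folder path, and comma-separated filter values. The column names are assumptions modeled on the downloadable template file; check the template for the authoritative headers.

```python
import csv

# Column names are assumptions modeled on the template file referenced above.
FIELDS = ["text", "title", "folder path", "metadata filter key", "metadata filter value"]

rows = [
    {
        "text": "Hello, my name is {rep_name}. How may I assist you today?",  # metadata insert
        "title": "Greeting",
        "folder path": "GREETINGS > OPENERS",  # nested folders separated by ">"
        "metadata filter key": "queue_name",
        # Multiple values for one filter key are comma-separated:
        "metadata filter value": "general, escalation",
    },
]

# Writing with UTF-8 avoids the "invalid character" upload error caused
# by editors that use non-standard encodings.
with open("global-responses.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Note that the `csv` module automatically quotes the comma-separated filter values so they stay within a single cell.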
The metadata on global responses doesn’t control visibility or access for agents in the Global Responses tab of the Right-Hand Panel. The metadata implementation influences only when a response is suggested by the model in AI Compose. ### Metadata Inserts A response that contains a metadata insert is a templated response. When the system suggests a templated response, it shows the response to the agent with the metadata insert filled in. *Adding a templated response in AI-Console* *Templated response being suggested to the agent in AI Compose* If the needed metadata insert (such as customer or agent name) is unavailable for a particular response (e.g. the customer in the conversation is unidentifiable), the response will not be suggested by AI Compose. To view all metadata inserts available to use within a conversation, navigate to **Metadata Dictionary** in the navigation panel. ### Metadata Filters Responses that do not have associated metadata filters will be available to the full agent population. In the metadata dictionary, click on any metadata filter to view details about the filter and all possible values available for it. # Deploying AI Compose API Source: https://docs.asapp.com/ai-productivity/ai-compose/deploying-ai-compose-api Communicate with AI Compose via API.
AI Compose has the following technical components: * **An autosuggest model** that ASAPP retrains weekly with [agent usage data you provide through the `/analytics/message-sent` endpoint](#sending-agent-usage-data "Sending Agent Usage Data") * **Data storage** for historical conversations, global response lists, and agent historical feature usage that ASAPP uses for weekly retraining * The **Conversation API** for creating and updating conversation data, and the **AI Compose API**, which interfaces with the application agents use and receives agent usage data in the form of message analytics events ### Setup ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following: * Access relevant API documentation (e.g. OpenAPI reference schemas) * Access API keys for authorization * Manage user accounts and apps In order to use ASAPP's APIs, all apps must be registered through the portal. Once registered, each app will be provided unique API keys for ongoing use. Visit the [Get Started](/getting-started/developers) page on the Developer Portal for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs. ## Usage ASAPP AI Compose exposes API endpoints that each enable distinct features in the course of an agent's message composition workflow. You should send requests to each endpoint based on events in the conversation and actions taken by the agent in their interface. For example, the sequence below shows the requests made for a typical new conversation in which the agent begins creating their first message, sends the first message, and receives one message in return from an end-customer: This example does not cover every endpoint request supported by AI Compose. Refer to the [Endpoints](#endpoints-25843 "Endpoints") section for a full listing of endpoints. **In this example:**

| Conversation Event | API Request |
| :--- | :--- |
| Conversation starts | 1. Create a new ASAPP conversation record<br />2. Request first set of response suggestions |
| Agent keystroke | 1. Request updated response suggestions |
| Agent uses the spacebar | 1. Request updated response suggestions<br />2. Check the spelling of the most recent word |
| Agent searches for a response | 1. Get the response list that pertains to their search |
| Agent saves a custom response | 1. Add the new response to their personal library |
| Agent submits their message | 1. Check if any profanity is present in the message |
| Agent message is sent | 1. Add the message to ASAPP’s conversation record<br />2. Create analytics event for the message that details how the agent used AI Compose<br />3. Request updated response suggestions |
| Customer message is sent | 1. Add the message to ASAPP’s conversation record<br />2. Request updated response suggestions |
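The keystroke-and-spacebar requests in the example above can be sketched as a small client-side dispatcher: every keystroke refreshes the suggestions, and a trailing space additionally triggers a spellcheck of the just-completed word. The helper name and payload fields (`query`, `text`, `cursorPosition`) are illustrative assumptions; the endpoint reference has the authoritative schemas.

```python
def on_agent_keystroke(conversation_id, typed_text):
    """Build the API requests triggered by a single agent keystroke.

    Returns request descriptions rather than performing I/O; payload
    field names are illustrative assumptions.
    """
    requests_to_send = [
        {
            "method": "POST",
            "path": f"/autocompose/v1/conversations/{conversation_id}/suggestions",
            "json": {"query": typed_text},
        }
    ]
    # A trailing space means a word was just completed, so also check
    # its spelling via the spellcheck endpoint.
    if typed_text.endswith(" "):
        requests_to_send.append(
            {
                "method": "POST",
                "path": "/autocompose/v1/spellcheck/correction",
                "json": {"text": typed_text, "cursorPosition": len(typed_text)},
            }
        )
    return requests_to_send
```

The same dispatch pattern extends naturally to the other events in the sequence, such as posting the message and its analytics event once the agent sends it.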
The [Endpoints](#endpoints-25843 "Endpoints") section below outlines how to use each endpoint. ### Endpoints Listing For all requests, you must provide a header containing the `asapp-api-id` API Key and the `asapp-api-secret`. You can find them under your Apps in the [AI Services Developer Portal](https://developer.asapp.com/). All requests to ASAPP sandbox and production APIs must use `HTTPS` protocol. Traffic using `HTTP` will not be redirected to `HTTPS`. Use the links below to skip to information about the relevant fields and parameters for the corresponding endpoint(s): **[Conversations](#conversations-api-25843 "Conversations API")** * `POST /conversation/v1/conversations` * `POST /conversation/v1/conversations/\{conversationId\}/messages` [**Requesting Suggestions**](#requesting-suggestions "Requesting Suggestions") * `POST /autocompose/v1/conversations/\{conversationId\}/suggestions` [**Checking Profanity & Spelling**](#check-profanity-spelling "Check Profanity & Spelling") * `POST /autocompose/v1/profanity/evaluation` * `POST /autocompose/v1/spellcheck/correction` [**Sending Agent Usage Data**](#sending-agent-usage-data "Sending Agent Usage Data") * `POST /autocompose/v1/analytics/message-sent` [**Getting Response Lists**](#getting-response-lists "Getting Response Lists") * `GET /autocompose/v1/responses/globals` * `GET /autocompose/v1/responses/customs` [**Updating Custom Response Lists**](#updating-custom-response-lists "Updating Custom Response Lists") * `POST /autocompose/v1/responses/customs/response` * `PUT /autocompose/v1/responses/customs/response/\{responseId\}` * `DELETE /autocompose/v1/responses/customs/response/\{responseId\}` * `POST /autocompose/v1/responses/customs/folder` * `PUT /autocompose/v1/responses/customs/folder/\{folderId\}` * `DELETE /autocompose/v1/responses/customs/folder/\{folderId\}` ### Conversations API ASAPP receives conversations through POST requests to the Conversations API. 
This service creates a record of conversations referenced as a source of truth by all ASAPP services. By promptly sending conversation and message data to this API, you ensure that ASAPP's conversation records match your own and that ASAPP services use the most current information available. [`POST /conversation/v1/conversations`](/apis/conversations/create-or-update-a-conversation) Use this endpoint to create a new conversation record or update an existing conversation record. **When to Call** This service should be called when a conversation starts or when something about the conversation changes (e.g. a conversation is reassigned to a different agent). **Request Details** Requests must include a conversation identifier from your system of record (external to ASAPP) and a timestamp (formatted as an RFC3339 microsecond date-time expressed in UTC) for when the conversation started. Requests to create a conversation record must also include identifying information about the human participants. Two types of requests are supported to create a new conversation: 1. **Conversations started with an agent:** Provide both the `agent` and `customer` objects in the request when the conversation begins. 2. **Conversations started with a virtual agent:** Provide only the `customer` object in the initial request when the conversation with the virtual agent begins; you must send a subsequent request that includes both the `agent` and `customer` objects once the agent joins the conversation. Requests may also include key-value pair metadata for the conversation that can be used either (1) to insert values into templated responses for agents or (2) as filter criteria to determine whether a conversation is eligible for specific response suggestions.
To support inserting the customer's time of day (morning, afternoon, evening) into templated agent responses, conversation metadata key-value pairs should take the format of `CUSTOMER_TIMEZONE: ` **Response Details** When successful, this endpoint responds with a unique ASAPP identifier (`id`) for the conversation. This identifier should be used whenever referencing this conversation in the future. For example, adding new messages to this conversation record will require use of this identifier so that ASAPP knows to which conversation messages should be added. [`POST /conversation/v1/conversations/\{conversationId\}/messages`](/apis/messages/create-a-message) Use this endpoint to add a message to an existing conversation record. **When to Call** This service should be called after each sent message by a participant in the conversation. If a conversation begins with messages between a customer and virtual agent/bot, ensure the conversation record is updated once the agent joins the conversation, prior to posting messages to this endpoint for the agent. **Request Details** The path parameter for this request is the unique ASAPP conversation ID that was provided in the response body when the conversation record was initially created. Requests must include the message's text and the message's sent timestamp (formatted as an RFC3339 microsecond date-time expressed in UTC). Requests must also include identifying information about the sender of the message, including their `role`; supported values include `agent`, `customer`, or `system` (for virtual agent messages). **Response Details** When successful, this endpoint responds with a unique ASAPP identifier (`id`) for the message. This identifier should be used if a need arises to reference this message in the future. When a conversation message is posted, ASAPP applies redaction to the message text to prevent storage of sensitive information.
Visit the [Data Redaction](/security/data-redaction "Data Redaction") section to learn more. Reach out to your ASAPP account contact for information on available redaction capabilities to configure for your implementation. ### Requesting Suggestions ASAPP provides suggestions through one POST request to the AI Compose API. [`POST /autocompose/v1/conversations/\{conversationId\}/suggestions`](/apis/autocompose/generate-suggestions) Use this endpoint to get suggestions for the next agent message in the conversation. **When to Call** This service should be called when an agent joins the conversation, after every agent keystroke, and after a message is sent by either the customer or the agent. In each of these instances, AI Compose takes into account new conversation context (e.g. the next letter the agent typed) and will return suggestions suitable for that context. If a conversation begins with messages between a customer and virtual agent/bot, ensure the conversation record is updated once the agent joins the conversation. Suggestion requests to this endpoint will fail if no agent is associated with a conversation. While making a request for a suggestion, a new sent message by either the customer or agent can be posted to the conversation record by including it in the request body. This optional approach to updating the conversation record is in lieu of sending a separate request to the `/messages` endpoint. New messages cannot be added to the conversation record using the suggestions endpoint if no agent is associated with the conversation. **Request Details** The path parameter for this request is the unique ASAPP conversation ID that was provided in the response body when the conversation record was initially created. Requests must include any text that the agent has already typed (called the `query`). 
To add a message to the conversation record during a suggestion request, you must also include a message object that contains the text of the sent message, the sender role and ID, and the timestamp for the sent message. **Response Details** When successful, this endpoint responds with a set of suggestions or phrase completions, and a unique ASAPP identifier (`id`) that corresponds to this set of suggestions. Full suggestions will be returned when the agent has not yet typed and early in the composition of their typed message. Once the agent's typed message is sufficiently complete, no suggestions will be returned. Phrase completions are only provided when a high-confidence phrase is available to complete a partially typed message with several words. If no such phrases fit the message, phrase completions will not be returned. If a message object was included in the request body, the response will include a message object with a unique message identifier. **Metadata Inserts** Suggestions will always include messages with `text` and `templateText` fields. The `text` field contains the message as it should be shown in the end-user interface, whereas `templateText` indicates where metadata was inserted into a templated part of the message. For example, `text` would read `"Sure John"` and `templateText` would read `"Sure \{NAME\}"`. AI Compose currently supports inserting metadata about a customer name or agent name into a templated suggestion. `templateText` will be returned even if there are no metadata elements being inserted into the suggestion `text`. In these cases, the `templateText` and `text` will be identical. ### Check Profanity & Spelling [`POST /autocompose/v1/profanity/evaluation`](/apis/autocompose/evaluate-profanity) Use this endpoint to receive an evaluation of a text string to verify if it contains a word present on ASAPP's profanity blocklist.
**When to Call** This service should be called when a carriage return or "enter" is used to send an agent message in order to prevent sending profanities in the chat. **Request Details** Requests need only specify the text to be checked for profanity. **Response Details** When successful, this endpoint responds with a boolean indicating whether or not the submitted text contains profanity. [`POST /autocompose/v1/spellcheck/correction`](/apis/autocompose/check-for-spelling-mistakes) Use this endpoint to get a spelling correction for a message as it is being typed. **When to Call** This service should be called after a space character is entered, checking the most recently completed word in the sentence. **Request Details** Requests must include the text the agent has typed and the position of the cursor to indicate which word the agent has just typed to be checked for spelling. The request may also specify a user dictionary of any words that should not be corrected if present. **Response Details** When successful and a spelling mistake is present, this endpoint identifies the misspelled text, the correct spelling of the word, and the start position of the cursor where the misspelled word begins so that it can be replaced. ### Sending Agent Usage Data [`POST /autocompose/v1/analytics/message-sent`](/apis/autocompose/create-a-messagesent-analytics-event) Use this endpoint to create an analytics event describing the agent's usage of AI Compose for a given message. ASAPP uses these events to train AI Compose, identifying which forms of augmentation should be credited for contributing to the final sent message. **When to Call** This service should be called after both of the following have occurred: 1. A message has been submitted by an agent 2.
A successful request has been made to add this message to ASAPP's record of the conversation. Message-sent analytics events should be posted after every agent message regardless of whether any AI Compose capabilities were used. **Request Details** Requests must include the ASAPP identifiers for the conversation and the specific message that the analytics data describes. Requests must also include an array called `augmentationType` that describes the agent's sequence of AI Compose usage before sending the message. Valid `augmentationType` values are described below: | augmentationType | When to Use | | :------------------- | :--------------------------------------------------------------------------------------------------- | | AUTOSUGGEST | When agent uses a full response suggestion with no text in the composer | | AUTOCOMPLETE | When agent uses a full response suggestion with text already in the composer | | PHRASE\_AUTOCOMPLETE | When agent uses a phrase completion rather than a full response suggestion | | CUSTOM\_DRAWER | When agent inserts a custom message from a drawer menu in the composer | | CUSTOM\_INSERT | When agent inserts a custom message from a response panel | | GLOBAL\_INSERT | When agent inserts a global message from a response panel | | FLUENCY\_APPLY | When a fluency correction is applied to a word | | FLUENCY\_UNDO | When a fluency correction is undone | | FREEHAND | When the agent types the entire message themselves and does not use any augmentation from AI Compose | Requests should include identifiers for the initial set of suggestions shown to the agent and the last set of suggestions where the agent made a selection (if any selections were made). If a selection was made, the index of the selected message (from the list of three) should also be specified.
Requests may also include further metadata describing the agent's editing keystrokes after selecting a suggestion, their time crafting and waiting to send the message, the time between the last sent message and their first action, and their interactions with phrase completion suggestions (if relevant). **Response Details** When successful, this endpoint confirms the analytics message event was received and returns no response body. ### Getting Response Lists ASAPP provides access to the global response list and agent-specific custom response lists through GET requests to two endpoints. Each endpoint is designed to be used to show an agent the contents of the response list in a user interface as they browse or search the list. [`GET /autocompose/v1/responses/globals`](/apis/autocompose/list-the-global-responses) Use this endpoint to retrieve the global responses and associated folder organization. **When to Call** This service should be called to show an agent the global response list - the list of responses available to all agents - in a user interface in response to an action taken by the agent, such as clicking on a response panel icon or searching for a specific response. **Request Details** Requests must include the agent's unique identifier from your system - this is the same identifier used to create conversation and conversation message records. Requests may include parameters about what values the returned list should contain based on the context of the request: * Only values within a specific folder * Only responses, only folders, or both * Only values that match an agent search term Results can be returned in multiple pages based on a maximum-per-page parameter, set to ensure a user interface only receives the number of responses it can support. This endpoint can be called again with the same query parameters and a pageToken to indicate which page to retrieve in a multi-page list.
**Response Details**

When successful, this endpoint responds with a response list (if requested) that fits the criteria of the request query parameters, including the identifier of the response along with the text, title, corresponding folder to which it belongs and any key-value pair metadata associated with the response. As discussed previously in Metadata Inserts, responses can be templated to insert metadata into specific parts of the message, such as the customer or agent's name. ASAPP can also use metadata associated with a response (e.g. agent skills for which that response is allowed) to filter out that response from suggestions for a given conversation. If there is a next page to the response list, a pageToken is provided in the response for use in a subsequent call to show the next page to the user. This endpoint also responds with a folder list (if requested) including the identifier of the folder, its name, and parent folder (if one exists), and version information about the global list of responses from which this list is sourced. Global responses are returned in alphabetical order, sorted on the text of the response. Folders are sorted by folder name.

[`GET /autocompose/v1/responses/customs`](/apis/autocompose/get-custom-responses)

Use this endpoint to retrieve the custom responses and associated folder organization.

**When to Call**

This service should be called to show an agent their custom response list - the list of responses available to only that agent - in a user interface in response to an action taken by the agent, such as clicking on a response panel icon or searching for a specific response.

**Request Details**

Requests must include the agent's unique identifier from your system - this is the same identifier used to create conversation and conversation message records.
Requests may include parameters about what values the returned list should contain based on the context of the request:

* Only values within a specific folder
* Only responses, only folders, or both
* Only values that match an agent search term

Requests can be returned in multiple pages based on a maximum per page parameter set to ensure a user interface only receives the number of responses it can support. This endpoint can be called again with the same query parameters and a pageToken to indicate which page to retrieve in a multi-page list.

**Response Details**

When successful, this endpoint responds with a response list (if requested) that fits the criteria of the request query parameters, including the identifier of the response along with the text, title, corresponding folder to which it belongs and any key-value pair metadata associated with the response. As discussed previously in Metadata Inserts, responses can be templated to insert metadata into specific parts of the message, such as the customer or agent's name. ASAPP can also use metadata associated with a response (e.g. agent skills/queues for which that response is allowed) to filter out that response from suggestions for a given conversation. If there is a next page to the response list, a pageToken is provided in the response for use in a subsequent call to show the next page to the user. This endpoint also responds with a folder list (if requested) including the identifier of the folder, its name, and parent folder (if one exists). Custom responses are returned in alphabetical order, sorted on the title of the response. Folders are sorted by folder name.

### Updating Custom Response Lists

Each agent's custom responses and the related folders can be added, updated and deleted using six endpoints. These endpoints are designed to carry out actions taken by agents in their personal list management interface.
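As an illustration of how the six endpoints fit together, here is a hedged Python sketch of a create-then-move-then-delete flow for a single custom response. The `send(method, path, body)` helper is a stand-in for your HTTP client, the body fields mirror the use case examples later in this guide, and agent identification (required on every request) is omitted for brevity.

```python
CUSTOMS = "/autocompose/v1/responses/customs"

def create_custom_response(send, text, title, folder_id=None):
    """POST a new custom response; returns the ASAPP id from the response body."""
    body = {"text": text, "title": title}
    if folder_id is not None:
        body["folderId"] = folder_id  # omit to create at the __root folder level
    return send("POST", f"{CUSTOMS}/response", body)["id"]

def move_custom_response(send, response_id, text, title, folder_id):
    """PUT the response into another folder; text and title are required
    on update, so they are re-sent unchanged."""
    send("PUT", f"{CUSTOMS}/response/{response_id}",
         {"text": text, "title": title, "folderId": folder_id})

def delete_custom_response(send, response_id):
    """DELETE the response by the ASAPP identifier returned at creation."""
    send("DELETE", f"{CUSTOMS}/response/{response_id}", None)
```

The key design point is that the ASAPP `id` returned by the create call is the handle for every subsequent update and delete on that response.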
#### For Responses

[`POST /autocompose/v1/responses/customs/response`](/apis/autocompose/create-a-custom-response)

Use this endpoint to add a single custom response for an agent.

**When to Call**

This service should be called when an agent creates a new custom response.

**Request Details**

Requests must include the agent's unique identifier from your system - this is the same identifier used to create conversation and conversation message records. Requests must also include the text of the custom response and its title. Requests may include the identifier of the folder in which the response should be stored; if not provided, the response is created at the \_\_root folder level. Requests may also specify metadata to be inserted into specific parts of the message, such as the customer or agent's name.

**Response Details**

When successful, the endpoint responds with a unique ASAPP identifier for the response. This value should be used to update and delete the same response.

[`PUT /autocompose/v1/responses/customs/response/\{responseId\}`](/apis/autocompose/update-a-custom-response)

Use this endpoint to update a specific custom response for an agent.

**When to Call**

This service should be called once an agent edits a custom response.

**Request Details**

The path parameter for this request is the unique ASAPP response ID provided in the response body when creating the response. Requests must also include the text and title values of the updated custom response. Requests may include the identifier of the folder in which the response should be stored and may also specify metadata to be inserted into specific parts of the message, such as the customer or agent's name.

**Response Details**

When successful, this endpoint confirms the update and returns no response body.

[`DELETE /autocompose/v1/responses/customs/response/\{responseId\}`](/apis/autocompose/delete-a-custom-response)

Use this endpoint to delete a specific custom response for an agent.
**When to Call**

This service should be called when an agent deletes a response.

**Request Details**

The path parameter for this request is the unique ASAPP response ID provided in the response body when creating the response. Requests must also include the agent's unique identifier from your system.

**Response Details**

When successful, this endpoint confirms the deletion and returns no response body.

#### For Folders

[`POST /autocompose/v1/responses/customs/folder`](/apis/autocompose/create-a-response-folder)

Use this endpoint to add a single folder for an agent.

**When to Call**

This service should be called when an agent creates a new custom response folder.

**Request Details**

Requests must include the agent's unique identifier from your system - this is the same identifier used to create conversation and conversation message records. Requests must also include the name of the custom response folder. Requests may include the identifier of the parent folder in which to create the new folder.

**Response Details**

When successful, the endpoint responds with a unique ASAPP identifier for the folder. This value should be used to update and delete the same folder.

[`PUT /autocompose/v1/responses/customs/folder/\{folderId\}`](/apis/autocompose/update-a-response-folder)

Use this endpoint to update a specific folder for an agent.

**When to Call**

This service should be called once an agent edits the name or hierarchy location of the folder.

**Request Details**

The path parameter for this request is the unique ASAPP folder ID provided in the response body when creating the folder. Requests must include the agent's unique identifier from your system and the name of the folder once updated. Requests may include the identifier of the parent folder in which the folder should be stored if that parent folder has been updated.

**Response Details**

When successful, this endpoint confirms the update and returns no response body.
[`DELETE /autocompose/v1/responses/customs/folder/\{folderId\}`](/apis/autocompose/delete-a-response-folder)

Use this endpoint to delete a specific folder for an agent.

**When to Call**

This service should be called when an agent deletes a folder.

**Request Details**

The path parameter for this request is the unique ASAPP folder ID provided in the response body when creating the folder. Requests must include the agent's unique identifier from your system.

**Response Details**

When successful, this endpoint confirms the deletion and returns no response body.

## Certification

Before providing credentials for applications to use production services, ASAPP reviews your completed integration in the sandbox environment to certify that your application is ready. The following criteria are used to certify that the integration is ready to use the AI Compose API in a production environment:

* Under normal conditions, the integration is free of errors
* Under abnormal conditions, the integration provides the correct details in order to troubleshoot the issue
* The correct analytics events are being provided for agent messages that are sent

To test these criteria, an ASAPP Solution Architect will review these AI Compose functionalities:

* Load a new customer conversation onto the agent desktop/view (with existing customer messages)
* Present the agent with suggestions and enable them to select an option and send
* Enable the agent to modify or add to a selected suggestion, and then send
* Enable the agent to freely type and use a phrase completion
* Enable the agent to use the spell check and profanity functionality
* Verify that correct analytics details are sent to ASAPP when an agent sends a message
* Disable API Keys in developer.asapp.com and generate an error message

The following are the test scenarios and accompanying sequence of expected API requests:

| Scenario | Expected Requests |
| :-------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------- |
| **A**: Start new chat for agent with pre-existing customer messages | POST /conversation<br />POST /messages<br />POST /suggestions |
| **B**: Populate suggestions, select a suggestion and send | POST /suggestions<br />POST /spellcheck<br />POST /profanity<br />POST /messages<br />POST /message-sent |
| **C**: Populate suggestions, don’t choose one and type “Hello” and send message | POST /suggestions<br />POST /suggestions per keystroke<br />POST /spellcheck<br />POST /profanity<br />POST /messages<br />POST /message-sent |
| **D**: Choose a suggestion and edit suggestion and select a phrase completion | POST /suggestions<br />POST /suggestions per keystroke<br />POST /spellcheck<br />POST /profanity<br />POST /messages<br />POST /message-sent |
| **E**: Choose a suggestion and add to it, purposely misspelling a word and undoing the spelling correction | POST /suggestions<br />POST /suggestions per keystroke<br />POST /spellcheck<br />POST /profanity<br />POST /messages<br />POST /message-sent |
| **F**: Choose a suggestion and edit with profanity | POST /suggestions<br />POST /suggestions per keystroke<br />POST /spellcheck<br />POST /profanity<br />POST /messages<br />POST /message-sent |

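As a reading aid, the shorthand requests in Scenario B above correspond to full endpoint paths documented in the use case examples that follow. The sketch below simply lists that sequence in order; it is illustrative only, and each real call carries the request bodies shown in those examples.

```python
def scenario_b_sequence(conversation_id):
    """Ordered endpoint calls for Scenario B: populate suggestions,
    select a suggestion, and send."""
    return [
        ("POST", f"/autocompose/v1/conversations/{conversation_id}/suggestions"),
        ("POST", "/autocompose/v1/spellcheck/correction"),
        ("POST", "/autocompose/v1/profanity/evaluation"),
        ("POST", f"/conversation/v1/conversations/{conversation_id}/messages"),
        ("POST", "/autocompose/v1/analytics/message-sent"),
    ]
```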
## Use Case Examples

### 1. Create a Conversation and Ask for Suggestions

The example below is a conversation post request with one customer message. Notice that the `id` value provided in the `/conversations` response is used as the `conversationId` path parameter in subsequent calls. The conversation and message calls are followed by a suggestion request and response for the agent's reply which includes two suggestions without a title and one suggestion with a title. The `phraseCompletion` field is not returned, as the agent has only just begun typing their message with `"query": "Sure"` when this suggestion request was made.

**POST** `/conversation/v1/conversations`

**Request**

```json theme={null}
{
  "externalId": "33411121",
  "agent": {
    "externalId": "671",
    "name": "agentname"
  },
  "customer": {
    "externalId": "11462",
    "name": "Sarah Jones"
  },
  "metadata": {
    "organizationalGroup": "some-group",
    "subdivision": "some-division",
    "queue": "some-queue"
  },
  "timestamp": "2021-11-23T12:13:14.55Z"
}
```

**Response**

*STATUS 200: Successfully created or updated conversation*

```json theme={null}
{ "id": "5544332211" }
```

**POST** `/conversation/v1/conversations/5544332211/messages`

**Request**

```json theme={null}
{
  "text": "Hello, I would like to upgrade my internet plan to GOLD.",
  "sender": {
    "role": "customer",
    "externalId": "3455123"
  },
  "timestamp": "2021-11-23T12:13:18.55Z"
}
```

**Response**

*STATUS 200: Successfully created message in conversation*

```json theme={null}
{ "id": "099455443322115544332211" }
```

**POST** `/autocompose/v1/conversations/5544332211/suggestions`

**Request**

```json theme={null}
{ "query": "Sure" }
```

**Response**

*STATUS 200: Successfully fetched suggestions for the conversation*

```json theme={null}
{ "id": "453466732233", "suggestions": [ { "text": "Sure, can I get your account number for verification please?", "templateText": "Sure, can I get your account number for verification please?", "title": "" }, { "text": "Sure Sarah, I can 
certainly help you with that.", "templateText": "Sure {NAME}, I can certainly help you with that.", "title": "" }, { "text": "The GOLD plan is a great choice", "templateText": "The GOLD plan is a great choice", "title": "Gold plan great choice" } ] }
```

### 2. Check Profanity

The example below is of a profanity check request and response for a text string that does not contain any words found in the profanity blocklist:

**POST** `/autocompose/v1/profanity/evaluation`

**Request**

```json theme={null}
{ "text": "This is a perfectly decent sentence." }
```

**Response**

*STATUS 200: Successfully fetched an evaluation result of the sentence.*

```json theme={null}
{ "hasProfanity": false }
```

### 3. Check Spelling

The example below is of a spell check request and response for a text string that contains a misspelling in the last typed word of the string:

**POST** `/autocompose/v1/spellcheck/correction`

**Request**

```json theme={null}
{
  "text": "How is tihs ",
  "typingEvent": {
    "cursorStart": 11,
    "cursorEnd": 12
  },
  "userDictionary": [ "Hellooo" ]
}
```

**Response**

*STATUS 200: Successfully checked for a spelling mistake.*

```json theme={null}
{
  "misspelledText": "tihs",
  "correctedText": "this",
  "position": 7
}
```

### 4. Send an Analytics Message Event

The example below is of an analytics message event being sent to ASAPP that provides metadata about how an agent used AI Compose for a given message. For this message example, the agent used a spelling correction, selected the first response suggestion offered, and subsequently used the first phrase completion presented to finish the sentence, in that order.
**POST** `/autocompose/v1/analytics/message-sent`

**Request**

```json theme={null}
{
  "conversationId": "5544332211",
  "messageId": "ee675e6576c0faf40dbb92d0d5993f5f",
  "augmentationType": [ "FLUENCY_APPLY", "AUTOSUGGEST", "PHRASE_AUTOCOMPLETE" ],
  "numEdits": 2,
  "selectedSuggestionText": "How can I help you today?",
  "selectedSuggestionsId": "5e9491b203e6ecccfef964e26fb1a5d3",
  "selectedSuggestionIndex": 1,
  "initialSuggestionsId": "5e9491b203e6ecccfef964e26fb1a5d3",
  "timeToAction": 1.891412,
  "craftingTime": 10.9472,
  "dwellTime": 4.132985,
  "phraseAutocompletePresentedCt": 1,
  "phraseAutocompleteSelectedCt": 1
}
```

**Response**

*STATUS 200: Successfully created a MessageSent event.*

In this example, the agent typed a message and sent it without using any assistance from AI Compose:

**Request**

```json theme={null}
{
  "messageId": "ee675e6576c0faf40dbb92d0d5993e2q",
  "augmentationType": [ "FREEHAND" ],
  "initialSuggestionsId": "5e9491b303e6ecccfef164e26fb1afq9",
  "timeToAction": 2.891412,
  "craftingTime": 20.9472,
  "dwellTime": 5.132985
}
```

**Response**

*STATUS 200: Successfully created a MessageSent event.*

In this example, the agent typed "hel", selected the second suggestion presented to them and sent it:

**Request**

```json theme={null}
{
  "messageId": "ee675e1236c0faf40dcb92h0e5y93e2p",
  "augmentationType": [ "AUTOCOMPLETE" ],
  "selectedSuggestionText": "Hello there, welcome to customer support chat!",
  "selectedSuggestionsId": "4d2fd982640c311394008259594399a1",
  "selectedSuggestionIndex": 2,
  "initialSuggestionsId": "4d2fd982640c311394008259594399a1",
  "timeToAction": 1.891412,
  "craftingTime": 11.9472,
  "dwellTime": 2.132985
}
```

**Response**

*STATUS 200: Successfully created a MessageSent event.*

In this example, the agent typed "htis", hit the space bar, and spellcheck corrected the text to "this".
Then the agent accidentally reversed the spell check and sent the message to the customer without using any other AI Compose assistance:

**Request**

```json theme={null}
{
  "messageId": "fe675e1236c0fbf40dcb33h0e5y93e1d",
  "augmentationType": [ "FLUENCY_APPLY", "FLUENCY_UNDO" ],
  "initialSuggestionsId": "2d2fd982640c311146008259594399a2",
  "timeToAction": 1.891412,
  "craftingTime": 11.9472,
  "dwellTime": 2.132985
}
```

**Response**

*STATUS 200: Successfully created a MessageSent event.*

### 5. Show an Agent Global Responses

The example below is of a request to show global responses only to an agent who is searching a folder for a particular response.

**NOTE**: The response below is shortened to show two responses.

**GET** `/autocompose/v1/responses/globals`

**Request**

*Query Parameters:*

```json theme={null}
folderId: "9923599"
resourceType: "responses"
searchTerm: "transfer"
```

**Response**

*STATUS 200: The global responses for this company*

```json theme={null}
{
  "responses": {
    "responsesList": [
      {
        "id": "425523523599",
        "text": "I’d be happy to transfer you to my supervisor.",
        "title": "Sup Transfer 2",
        "folderId": "9923599"
      },
      {
        "id": "425523523598",
        "text": "No problem {NAME}, I’d be happy to transfer you to my supervisor.",
        "title": "Sup Transfer 1",
        "folderId": "9923599",
        "metadata": [
          {
            "name": "NAME",
            "allowedValues": [ "customer.name" ]
          }
        ]
      }
    ]
  },
  "version": {
    "id": "12134",
    "description": "June 5 2022 Update"
  }
}
```

### 6. Creating a New Custom Response Folder and Response

The example below shows the calls that would accompany an agent creating a new greeting custom response without a folder, then adding it to an existing folder.
**POST** `/autocompose/v1/responses/customs/response`

**Request**

```json theme={null}
{
  "text": "Howdy, how can I help you today?",
  "title": "Howdy Help"
}
```

**Response**

*STATUS 200: Acknowledgement that the response was successfully added*

```json theme={null}
{
  "id": "425523523523",
  "text": "Howdy, how can I help you today?",
  "title": "Howdy Help",
  "folderId": "__root"
}
```

**PUT** `/autocompose/v1/responses/customs/response/425523523523`

**Request**

```json theme={null}
{
  "text": "Howdy, how can I help you today?",
  "title": "Howdy Help",
  "folderId": "9923523"
}
```

**Response**

*STATUS 201: Acknowledgement that the custom response was successfully updated*

## Data Security

ASAPP's security protocols protect data at each point of transmission, from first user authentication, to secure communications, to our auditing and logging system, all the way to securing the environment when data is at rest in the data logging system. Access to data by ASAPP teams is tightly constrained and monitored. Strict security protocols protect both ASAPP and our customers.

The following security controls are particularly relevant to AI Compose:

1. Client sessions are controlled using a time-limited authorization token. Privileges for each active session are controlled server-side to mitigate potential elevation-of-privilege and information disclosure risks.
2. To avoid unauthorized disclosure of information, unique, non-guessable IDs are used to identify conversations. These conversations can only be accessed using a valid client session.
3. Requests to API endpoints that can potentially receive sensitive data are put through a round of redaction to strip the request of sensitive data (like SSNs and phone numbers).

## Additional Considerations

### Historical Conversation Data for Generating a Response List

ASAPP uses past agent conversations to generate a customized response list tailored to a given use case.
In order to create an accurate and relevant list, ASAPP requires a minimum of 200,000 historical transcripts to be supplied ahead of implementing AI Compose. For more information on how to transmit the conversation data, reach out to your ASAPP account contact. Visit [Transmitting Data to SFTP](/reporting/send-sftp "Transmitting Data to SFTP") for instructions on how to send historical transcripts to ASAPP.

# Deploying AI Compose for LivePerson

Source: https://docs.asapp.com/ai-productivity/ai-compose/deploying-ai-compose-for-liveperson

Use AI Compose on your LivePerson application.

## Overview

This page describes how to integrate AI Compose into your LivePerson application.

### Integration Steps

There are four parts to the AI Compose setup process. Use the links below to skip to information about a specific part of the process:

1. [Install the ASAPP browser extension](#1-install-the-asapp-browser-extension) on all agents' desktops (via a system policy or using your company's existing deployment processes)
2. [Configure the LivePerson organization](#2-configure-liveperson) centrally using an administrator account
3. [Set up agent/user authentication](#3-set-up-single-sign-on) through the existing single sign-on (SSO) service
4. [Work with your ASAPP contact to configure Auto-Pilot Greetings](#4-configure-auto-pilot-greetings), if desired

## Requirements

**Browser Support**

ASAPP AI Compose is supported in Google Chrome and Microsoft Edge.

* NOTE: This support covers the latest version of each browser and extends to the previous two versions

Please consult your ASAPP account contact if your installation requires support for other browsers.

**LivePerson**

ASAPP supports LivePerson's Messaging conversation type.

**SSO Support**

The AI Compose widget supports SP-initiated SSO with either OIDC (preferred method) or SAML.
**Domain Whitelisting**

In order for AI Compose to interact with ASAPP's backend and third-party support services, the following domains need to be accessible from end-user environments:

| Domain                                     | Description                                                        |
| :----------------------------------------- | :----------------------------------------------------------------- |
| \*.asapp.com                               | ASAPP service URLs                                                 |
| \*.ingest.sentry.io                        | Application performance monitoring tool                            |
| fonts.googleapis.com                       | Fonts                                                              |
| google-analytics.com                       | Page analytics                                                     |
| asapp-chat-sdk-production.s3.amazonaws.com | Static ASAPP AWS URL for desktop network connectivity health check |

**Policy Check**

Before proceeding, check the current order of precedence of policies deployed in your organization. Platform-deployed policies (like Group Policy Objects) and cloud-deployed policies (like Google Admin Console) are enforced in a priority order that can lead to lower-priority policies not being enforced.

* If installing the ASAPP browser extension via Group Policy Objects, set platform policies to have precedence over cloud policies.
* If installing the ASAPP browser extension via Google Admin Console, set cloud policies to have precedence over platform policies.

For more on how to check and modify order of precedence, see [policy management guides from Google Enterprise](https://support.google.com/chrome/a/answer/9037717).

## Integrate with LivePerson

### 1. Install the ASAPP Browser Extension

Customers have two options for installing the AI Compose browser extension:

A. Group Policy Objects (GPO)

B. Google Admin Console

#### A. Install Group Policy Objects (GPO)

Customers can automatically install and manage the ASAPP AI Compose browser extension via Group Policy Objects (GPO). ASAPP provides an installation server from which the extension can be downloaded and automatically updated. The Customer's system administrator must configure GPO rules to allow the installation server URL and the software component ID.
Through GPO, the administrator can choose to force the installation (i.e., install without requiring human intervention). The following policies will configure Chrome and Edge to download the AI Compose browser extension on all on-premise managed devices via GPO:

| **Policy Name** | **Value to Set** |
| :---------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| [ExtensionInstallSources](https://cloud.google.com/docs/chrome-enterprise/policies/?policy=ExtensionInstallSources)      | https\://\*.asapp.com/\* |
| [ExtensionInstallAllowlist](https://cloud.google.com/docs/chrome-enterprise/policies/?policy=ExtensionInstallAllowlist)  | bfcmlmledhddbnialbbdopfefoelbbei |
| [ExtensionInstallForcelist](https://cloud.google.com/docs/chrome-enterprise/policies/?policy=ExtensionInstallForcelist)  | bfcmlmledhddbnialbbdopfefoelbbei;[https://app.asapp.com/autocompose-liveperson-chrome-extension/updates](https://app.asapp.com/autocompose-liveperson-chrome-extension/updates) |

Each Policy Name above links to documentation that describes how to set the values with the proper format depending on the platform. When policy changes occur, you may need to reload policies manually or force restart the browser to ensure newly deployed policies are applied.

Figure 2 shows example policy files for the Windows platform. The policy adds the URL 'https\://\*.asapp.com/\*' as a valid extension install source, allows the extension ID 'bfcmlmledhddbnialbbdopfefoelbbei', and forces the extension installation.
Google Chrome:

```registry theme={null}
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallAllowlist]
"1"="bfcmlmledhddbnialbbdopfefoelbbei"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallSources]
"1"="https://*.asapp.com/*"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist]
"1"="bfcmlmledhddbnialbbdopfefoelbbei;https://app.asapp.com/autocompose-liveperson-chrome-extension/updates"
```

Microsoft Edge:

```registry theme={null}
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ExtensionInstallAllowlist]
"1"="bfcmlmledhddbnialbbdopfefoelbbei"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ExtensionInstallSources]
"1"="https://*.asapp.com/*"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\ExtensionInstallForcelist]
"1"="bfcmlmledhddbnialbbdopfefoelbbei;https://app.asapp.com/autocompose-liveperson-chrome-extension/updates"
```

Figure 2: Example policy files to install the AI Compose browser extension in Google Chrome and Microsoft Edge browsers respectively (*Windows Registry*)

#### B. Install via Google Admin Console

For Google Chrome deployments, customers can install and manage the ASAPP AI Compose browser extension using Managed Chrome Device policies in the Google Admin console. The Customer's system administrator must set up the AI Compose browser extension through the Google Admin console by creating a custom app and configuring the extension ID and XML manifest URL. Through managed Chrome policies the administrator can choose to force the installation (i.e. install without requiring human intervention).

In order to have Chrome download the ASAPP hosted extension in all managed devices through the Google Admin console:

1. Navigate to **Device management > Chrome**.
2. Click **Apps & Extensions**.
3. 
Click on **Add (+)** and search for the **Add Chrome app or extension by ID** option.
4. Complete the fields using the values provided below. Be sure to select the **From a custom URL** option.

| **Field** | **Value** |
| :-------- | :--------------------------------------------------------------------------------------------------------------------------------------------- |
| ID        | bfcmlmledhddbnialbbdopfefoelbbei |
| URL       | [https://app.asapp.com/autocompose-liveperson-chrome-extension/updates](https://app.asapp.com/autocompose-liveperson-chrome-extension/updates) |

Please check Google's [Managing Extensions in Your Enterprise](https://docs.google.com/document/d/1pT0ZSbGdrbGvuCsVD2jjxrw-GVz-80rMS2dgkkquhTY/edit#heading=h.ojow7ntunwpx) for more information.

To ensure that cloud policies are enabled for production environment users in a given organizational unit, locate that group of users by navigating to the **Devices** > **Chrome** > **Settings** menu in Google Suite. Ensure the setting **[Chrome management for signed-in users](https://support.google.com/chrome/a/answer/2657289?hl=en#zippy=%2Cchrome-management-for-signed-in-users)** is set to **Apply all user policies when users sign into Chrome, and provide a managed Chrome experience.**

**Testing**

The following two checks on a target machine will verify the extension is installed correctly:

1. **The extension is force-installed in the browser**
   a. Expand the extension icon in the browser toolbar.
   b. Alternatively, navigate to chrome://extensions/ and look for 'ASAPP Extension'.
   c. Alternatively, navigate to edge://extensions/ and look for 'ASAPP Extension'.
2. **The extension is properly configured**
   a. Click the extension icon and validate that the allowlist and denylist values in the extension's options are as they were set.
   b. Alternatively, navigate to chrome://policy and search for the extension policies.
   c. Alternatively, navigate to edge://policy and search for the extension policies.
### 2. Configure LivePerson

**Before You Begin**

You will need the following information to configure ASAPP for LivePerson:

* The URL for your custom widget, which will be provided to you by ASAPP
* Credentials to log in to your LivePerson organization as an administrator

**Configuration Steps**

1. **Add New Widget**
   * Open the LivePerson website and log in as an administrator.
   * Go to 'agent workspace' and click **Night Vision**, in the top right:
   * Click +, then **Add new widget**.
2. **Enter Widget Attributes**
   * Fill in the **Widget name** as 'ASAPP'
   * Assign the conversation skill(s) to which ASAPP is being deployed in the **Assigned skills** dropdown menu. Leaving **Assigned skills** blank will show the ASAPP widget for all conversations regardless of skill.
   * Enter the URL that contains the API key you were provided by your ASAPP account contact for your custom widget in the **URL** field. When configuring for a sandbox environment, use this URL format: `https://app.asapp.com/autocompose-liveperson/autocompose.html?apikey=\{your_sandbox_api_key\}&asapp_api_domain=api.sandbox.asapp.com` When configuring for a production environment, use this URL format: `https://app.asapp.com/autocompose-liveperson/autocompose.html?apikey=\{your_prod_api_key\}`
   * Click the **Save** button. Ensure **Hide** and **Manager View** are unselected once you are ready for agents to see the widget for conversations with the assigned skill(s).
3. **Move Widget to Top**
   * Click the **Organize** button
   * Scroll down to the ASAPP widget, and click the **Move top** button:
   * Click the **Done** button
4. **Enable Pop-in Composer**
   * In the Agent Workspace, click the nut icon (similar to a gear shape) next to the **+** icon at the bottom of the AI Compose panel widget.
   * Enable the **Pop-in Composer** option.
Press the escape key and reload the page to see the changes; the ASAPP widget should now be available across your LivePerson organization.

Upon login to the Agent Workspace, the ASAPP widget for AI Compose will appear in place of the standard LivePerson composer, underneath the conversation transcript. By default, the response panel for AI Compose will appear to the right of the conversational panel.

### 3. Set Up Single Sign-On

ASAPP handles authentication through the customer's SSO service to confirm the identity of the agent. ASAPP acts as the Service Provider (SP) with the customer acting as the Identity Provider (IDP). The customer's authentication system performs user authentication using their existing user credentials. ASAPP supports SP-initiated SSO with either OIDC (preferred method) or SAML. Once the user initiates sign-in, ASAPP detects that the user is authenticated and requests an assertion from the customer's SSO service.

**Configuration Steps for OIDC (preferred method)**

1. Create a new IDP OIDC application with type `Web`
2. Set the following attributes for the app:

| Attribute             | Value\*                                                                                                                                                      |
| :-------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Grant Type            | authorization code                                                                                                                                            |
| Sign-in Redirect URIs | Production: `https://api.asapp.com/auth/v1/callback/{company_marker}` Sandbox: `https://api.sandbox.asapp.com/auth/v1/callback/{company_marker}-sandbox`      |
**\*NOTE:** ASAPP to provide `company_marker` value

3. Save the application and send ASAPP the `Client ID` and `Client Secret` from the app through a secure communication channel
4. Set scopes for the OIDC application: * Required: `openid` * Preferred: `email`, `profile`
5. Tell ASAPP which end-user attribute should be used as a unique identifier
6. Tell ASAPP your IDP domain name

**Configuration Steps for SAML**

1. Create a new IDP SAML application.
2. Set the following attributes for the app:

| Attribute            | Value\*                                                                                                                                                                                                                                                                  |
| :------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Single Sign On URL   | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Recipient URL        | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Destination URL      | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Audience Restriction | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Response             | Signed                                                                                                                                                                                                                                                                     |
| Assertion            | Signed                                                                                                                                                                                                                                                                     |
| Signature Algorithm  | RSA\_SHA256                                                                                                                                                                                                                                                                |
| Digest Algorithm     | SHA256                                                                                                                                                                                                                                                                     |
| Attribute Statements | externalUserId: `{unique_id_to_identify_the_user}`                                                                                                                                                                                                                         |

**\*NOTE:** ASAPP to provide `company_marker` value

3. Save the application and send the ASAPP team the public certificate used to validate the signature of this application's SAML payload
4. Send the ASAPP team the URL of the SAML application

### 4. Configure Auto-Pilot Greetings

If you so choose, you can work with your ASAPP contact to enable Auto-Pilot Greetings in your AI Compose installation. Auto-Pilot Greetings automatically generates a greeting at the beginning of a conversation, and that greeting can be automatically sent to a customer on your agent's behalf after a configurable timer elapses. Your ASAPP contact can:

* Turn Auto-Pilot Greetings on or off for your organization
* Set a countdown timer value after which the Auto-Pilot Greeting is sent if an agent does not cancel Auto-Pilot by typing or clicking a "cancel" button
* Set the global default messages that will be provided for Auto-Pilot Greetings across your organization (note that agents can optionally customize their Auto-Pilot Greetings messages within the Auto-Pilot tab of the AI Compose panel)

## Usage

### Customization

#### LivePerson

For LivePerson, the standard process is to download ASAPP AI Compose as a standalone widget. If you already have your own LivePerson custom widget, ASAPP also provides the option to embed our custom widget inside yours, economizing on-screen real estate.

**Conversation Attributes**

Once the ASAPP AI Compose widget is embedded, LivePerson shares the following conversation attributes with ASAPP: customer name, agent name and skill. 
ASAPP can use name attributes to populate values into templated responses (e.g. "Hi \[customer name], how can I help you today?") and to selectively filter response lists based on the skill of the conversation. **Conversation Redaction** When message text in the conversation transcript is sent to ASAPP, ASAPP applies redaction to the message text to prevent transmission of sensitive information. Reach out to your ASAPP account contact for information on available redaction capabilities to configure for your implementation. ### Data Security ASAPP's security protocols protect data at each point of transmission from first user authentication, to secure communications, to our auditing and logging system, all the way to securing the environment when data is at rest in the data logging system. Access to data by ASAPP teams is tightly constrained and monitored. Strict security protocols protect both ASAPP and our customers. The following security controls are particularly relevant to AI Compose: 1. Client sessions are controlled using a time-limited authorization token. Privileges for each active session are controlled server-side to mitigate potential elevation-of-privilege and information disclosure risks. 2. To avoid unauthorized disclosure of information, unique, non-guessable IDs are used to identify conversations. These conversations can only be accessed using a valid client session. 3. Requests to API endpoints that can potentially receive sensitive data are put through a round of redaction to strip the request of sensitive data (like SSNs and phone numbers). ### Additional Considerations #### Historical Conversation Data for Generating a Response List ASAPP uses past agent conversations to generate a customized response list tailored to a given use case. In order to create an accurate and relevant list, ASAPP requires a minimum of 200,000 historical transcripts to be supplied ahead of implementing AI Compose. 
For more information on how to transmit the conversation data, reach out to your ASAPP account contact. #### LivePerson ASAPP uses a browser extension to replace the LivePerson composer with the ASAPP composer. In the unlikely event that the DOM of the LivePerson composer or its surrounding area changes, the LivePerson composer may no longer be replaced by the ASAPP composer. In this case, the customer service representative (CSR) has the option to toggle the ASAPP composer so that it 'retreats' into the ASAPP Custom Widget. In such a case, the ASAPP composer will continue to be fully functional, even if it is no longer ideally placed just below the LivePerson chat history. In order to quickly restore the placement of the ASAPP composer directly beneath the LivePerson chat log, ASAPP deploys its extension so that the extension's configuration is pulled down from our servers in real time. If the LivePerson DOM does change, we can deploy a fix rapidly. # Deploying AI Compose for Salesforce Source: https://docs.asapp.com/ai-productivity/ai-compose/deploying-ai-compose-for-salesforce Use AI Compose on Salesforce Lightning Experience. ## Overview This page describes how to integrate AI Compose into your Salesforce application. ### Integration Steps There are three parts to the AI Compose setup process. Use the links below to skip to information about a specific part of the process: 1. [Configure the Salesforce organization](#1-configure-the-salesforce-organization-centrally) centrally using an administrator account 2. [Set up agent/user authentication](#2-set-up-single-sign-on) through the existing single sign-on (SSO) service 3. 
[Work with your ASAPP contact to configure Auto-Pilot Greetings](#3-configure-auto-pilot-greetings), if desired. Expected effort for each part of the setup process: * 1 hour for installation and configuration of the ASAPP chat components * 1-2 hours to enable user authentication, depending on SSO system complexity ## Requirements **Browser Support** ASAPP AI Compose is supported in Google Chrome and Microsoft Edge. NOTE: This support covers the latest version of each browser and extends to the previous two versions. Please consult your ASAPP account contact if your installation requires support for other browsers. **Salesforce** ASAPP supports Lightning-based chat (as opposed to Salesforce Classic). **SSO Support** The AI Compose widget supports SP-initiated SSO with either OIDC (preferred method) or SAML. **Domain Whitelisting** In order for AI Compose to interact with ASAPP's backend and third-party support services, the following domains need to be accessible from end-user environments: | Domain | Description | | :----------------------------------------- | :----------------------------------------------------------------- | | \*.asapp.com | ASAPP service URLs | | \*.ingest.sentry.io | Application performance monitoring tool | | fonts.googleapis.com | Fonts | | google-analytics.com | Page analytics | | asapp-chat-sdk-production.s3.amazonaws.com | Static ASAPP AWS URL for desktop network connectivity health check | ## Integrate with Salesforce ### 1. Configure the Salesforce Organization Centrally **Before You Begin** You will need the following information to configure ASAPP for Salesforce: * Administrator credentials to log in to your Salesforce organization account. * **NOTE:** Organization and Administrator should be enabled for 'chat'. * A URL for the ASAPP installation package, which will be provided by ASAPP. ASAPP provides the same install package for implementing both AI Compose and AI Summary in Salesforce. Use this guide to configure AI Compose. 
If you're looking to implement AI Summary, [use this guide](/ai-productivity/ai-summary/salesforce-plugin). * API Id and API URL values, which can be found in your ASAPP Developer Portal account (developer.asapp.com) in the **Apps** section. **Configuration Steps** **1. Install the ASAPP Package** * Open the package installation URL from ASAPP. * Log in with your Salesforce organization administrator credentials. The package installation page appears: * Choose **Install for All Users** (as shown above). * Check the acknowledgment statement and click the **Install** button: * The installation runs. An **Installation Complete!** message appears: * Click the **Done** button. **2. Add ASAPP to the Chat Transcript Page** * Open the 'Service Console' page (or your chat page). * Choose an existing chat session or start a new chat session so that the chat transcript page appears (the exact mechanism is organization-specific). * In the top-right, click the **gear** icon, then right-click **Edit Page** and choose **Open Link in a New Tab**. * Navigate to the new tab to see the chat transcript edit page: * Select the conversation panel (middle) and delete it. * Drag the **chatAsapp** component (left) inside the conversation panel: * Drag the **exploreAsapp** component (left) to the right column. Next, add your organization's **API key** and **API URL** (found in the ASAPP Developer Portal) in the rightmost panel: The API key is labeled as **API Id** in the ASAPP Developer Portal. The API URL should be `https://api.sandbox.asapp.com` for lower environments and `https://api.asapp.com` for production. * Click **Save**, then click **Activate**. * Click **Assign as org default**. * Choose the **Desktop** form factor, then click **Save**. * Return to the chat transcript page and refresh; the ASAPP composer should appear. ### 2. Set Up Single Sign-On ASAPP handles authentication through the customer's SSO service to confirm the identity of the agent. 
ASAPP acts as the Service Provider (SP), with the customer acting as the Identity Provider (IDP). The customer's authentication system performs user authentication using their existing user credentials. ASAPP supports SP-initiated SSO with either OIDC (preferred method) or SAML. Once the user initiates sign-in, ASAPP detects that the user is authenticated and requests an assertion from the customer's SSO service. **Configuration Steps for OIDC (preferred method)** 1. Create a new IDP OIDC application with type `Web` 2. Set the following attributes for the app:

| Attribute             | Value\*                                                                                                                                                      |
| :-------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Grant Type            | authorization code                                                                                                                                            |
| Sign-in Redirect URIs | Production: `https://api.asapp.com/auth/v1/callback/{company_marker}` Sandbox: `https://api.sandbox.asapp.com/auth/v1/callback/{company_marker}-sandbox`      |

**\*NOTE:** ASAPP to provide `company_marker` value

3. Save the application and send ASAPP the `Client ID` and `Client Secret` from the app through a secure communication channel
4. Set scopes for the OIDC application: * Required: `openid` * Preferred: `email`, `profile`
5. Tell ASAPP which end-user attribute should be used as a unique identifier
6. Tell ASAPP your IDP domain name

**Configuration Steps for SAML**

1. Create a new IDP SAML application.
2. Set the following attributes for the app:

| Attribute            | Value\*                                                                                                                                                                                                                                                                  |
| :------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Single Sign On URL   | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Recipient URL        | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Destination URL      | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Audience Restriction | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml` Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox`              |
| Response             | Signed                                                                                                                                                                                                                                                                     |
| Assertion            | Signed                                                                                                                                                                                                                                                                     |
| Signature Algorithm  | RSA\_SHA256                                                                                                                                                                                                                                                                |
| Digest Algorithm     | SHA256                                                                                                                                                                                                                                                                     |
| Attribute Statements | externalUserId: `{unique_id_to_identify_the_user}`                                                                                                                                                                                                                         |

**\*NOTE:** ASAPP to provide `company_marker` value

3. Save the application and send the ASAPP team the public certificate used to validate the signature of this application's SAML payload
4. Send the ASAPP team the URL of the SAML application

### 3. Configure Auto-Pilot Greetings

If you so choose, you can work with your ASAPP contact to enable Auto-Pilot Greetings in your AI Compose installation. Auto-Pilot Greetings automatically generates a greeting at the beginning of a conversation, and that greeting can be automatically sent to a customer on your agent's behalf after a configurable timer elapses. Your ASAPP contact can:

* Turn Auto-Pilot Greetings on or off for your organization
* Set a countdown timer value after which the Auto-Pilot Greeting is sent if an agent does not cancel Auto-Pilot by typing or clicking a "cancel" button
* Set the global default messages that will be provided for Auto-Pilot Greetings across your organization (note that agents can optionally customize their Auto-Pilot Greetings messages within the Auto-Pilot tab of the AI Compose panel)

## Usage

### Customization

#### Conversation Attributes

Once the ASAPP AI Compose widget is embedded, Salesforce shares the following conversation attributes with ASAPP: customer name, agent name and skill. ASAPP can use name attributes to populate values into templated responses (e.g. "Hi \[customer name], how can I help you today?") and to selectively filter response lists based on the skill of the conversation.

#### Conversation Redaction

When message text in the conversation transcript is sent to ASAPP, ASAPP applies redaction to the message text to prevent transmission of sensitive information. 
Reach out to your ASAPP account contact for information on available redaction capabilities to configure for your implementation. #### Composer Placement ASAPP currently targets Lightning desktops. Within Lightning-based desktops, you are free to place our composer wherever you choose. However, we suggest placing it immediately below the Salesforce conversation widget, such that the chat log appears above the ASAPP composer. ### Data Security ASAPP's security protocols protect data at each point of transmission from first user authentication, to secure communications, to our auditing and logging system, all the way to securing the environment when data is at rest in the data logging system. Access to data by ASAPP teams is tightly constrained and monitored. Strict security protocols protect both ASAPP and our customers. The following security controls are particularly relevant to AI Compose: 1. Client sessions are controlled using a time-limited authorization token. Privileges for each active session are controlled server-side to mitigate potential elevation-of-privilege and information disclosure risks. 2. To avoid unauthorized disclosure of information, unique, non-guessable IDs are used to identify conversations. These conversations can only be accessed using a valid client session. 3. Requests to API endpoints that can potentially receive sensitive data are put through a round of redaction to strip the request of sensitive data (like SSNs and phone numbers). ### Additional Considerations #### Historical Conversation Data for Generating a Response List ASAPP uses past agent conversations to generate a customized response list tailored to a given use case. In order to create an accurate and relevant list, ASAPP requires a minimum of 200,000 historical transcripts to be supplied ahead of implementing AI Compose. For more information on how to transmit the conversation data, reach out to your ASAPP account contact. 
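The redaction control described above (stripping patterns such as SSNs and phone numbers before requests are processed) can be illustrated with a toy, regex-based stand-in. This is not ASAPP's redaction engine, whose actual capabilities are configurable per implementation; it only demonstrates the idea of masking sensitive patterns before transmission:

```python
import re

# Toy stand-in for the redaction step: mask common sensitive patterns
# before message text is transmitted. Pattern names and replacement
# tokens are illustrative, not ASAPP's actual configuration.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # e.g. 123-45-6789
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # e.g. 555.123.4567
]

def redact(text: str) -> str:
    """Replace each sensitive pattern with a placeholder token."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Applying the more specific SSN pattern first prevents an SSN from being half-matched by the looser phone-number pattern.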
# AI Compose Product Guide Source: https://docs.asapp.com/ai-productivity/ai-compose/product-guide Learn more about the features and insights of AI Compose ## Getting Started This page provides an overview of the features and functionalities in AI Compose. After you integrate AI Compose into your applications, you can use its features to scale up your agent responses. The following UI descriptions are examples of AI Compose Integrations with LivePerson and Salesforce. API-based integrations do not include custom UIs. ### Suggestions AI Compose supports agents throughout the conversation with both complete response suggestions before they type and suggestions while typing to complete their sentence. The machine learning models powering AI Compose suggestions use the entire conversation context (not just the last few responses) and personal agent response history to predict the most likely next agent message or phrase in the conversation. ### Response Library AI Compose suggests responses from a library curated from a wide range of domain-specific conversation topics. The response library combines three lists: 1. **Global response list:** Messages created and maintained by program administrators available to a designated full agent population. 2. **Custom response list:** Messages created and maintained directly in AI Compose by individual agents; only available to the agent that created the message. 3. **Organically growing response list:** Messages that ASAPP automatically creates for each agent based on their most commonly used messages that do not already exist in the global response list or the agent's curated custom response list. Agents use custom responses to make their favorite messages readily available for sending quickly: well-honed explanations for difficult processes and concepts, discovery questions, personal anecdotes, and greetings and farewells infused with their personal style.  
Agents often curate their custom responses based on global responses, with a bit of their own personal touch. ### Agent Interface AI Compose provides three complete response suggestions in the drawer above the composer both before typing begins and in the early stages of message composition; the system makes phrase completion suggestions directly in-line as the agent types more of a sentence. Agents can also search for a response in two places: 1. **Composer:** As agents type, they can choose to search for their typed text in the global response list to see the full list of related messages with that term. By typing `/` in the empty composer, agents can also browse their custom response library by using either the message text or title of the custom response as a search term. 2. **Response panel:** In the response panel, agents can browse both the global and custom response lists, either using a folder hierarchy or with the provided search field. The organically growing response list is not available for agents to browse - responses from this list only appear in suggestions.  Agents are encouraged to add these frequently used responses to their custom response list. ### Autocomplete Once the agent begins typing, AI Compose provides two forms of autocomplete suggestions at different stages in the message composition: * As the agent begins typing a message, complete response suggestions are available. At this point, the agent is in the early stages of composing their response and several potential complete response options are relevant. * After several words are typed, a high-confidence phrase completion can be recommended in-line to help agents finish their already well-formed thought. AI Compose provides phrase completions from common, high-frequency phrases used in each implementation's production conversations. 
AI Compose only makes phrase suggestions when a sufficiently high-confidence phrase is available and only uses language found in the global and custom response library. ### Templated Responses AI Compose can dynamically insert metadata into designated templated responses in the global response list. For example, a customer's first name can be automatically populated into this templated response: "Hi *\{name}*, how can I help you today?". By default, AI Compose supports inserting customer first name, agent first name and the customer's time of day (morning, afternoon, evening) into templated responses. Time of day can be set to a single zone or be dynamically determined for each conversation. AI Compose also supports inserting custom conversation-specific metadata passed to ASAPP. For more information on custom inserts, reach out to your ASAPP account team. If the needed metadata is unavailable for a particular templated response (e.g. there is no customer name available), that response will not be suggested by AI Compose. ### Fluency and Profanity **Fluency Boosting** AI Compose automatically corrects commonly misspelled words once the space bar is pressed following a given word. Corrections are underlined in the composer for agent-awareness and can be undone if needed by hovering over the corrected word. **Profanity Blocking** AI Compose checks for profanity in messages when the agent attempts to send the message. If any terms match ASAPP's profanity criteria, the system does not send the message and informs the agent. ## Customization ### Suggestion Model The AI Compose suggestion model functions as a custom recommendation service for each agent. The model references the global response list, a library of custom responses created by each agent, and also learns from each agent's unique production message-sending behaviors to surface the best responses. 
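The templated-response behavior described above — filling in available metadata and withholding a response entirely when required metadata is missing — can be sketched as a toy function. The `{placeholder}` syntax and field names here are illustrative, not ASAPP's actual template format:

```python
import re
from typing import Optional

def fill_template(template: str, metadata: dict) -> Optional[str]:
    """Toy illustration of templated responses: substitute available
    metadata into {placeholders}; if any required field has no value,
    return None so the response is not suggested at all."""
    fields = re.findall(r"\{(\w+)\}", template)
    if any(metadata.get(field) is None for field in fields):
        return None  # missing metadata: withhold the suggestion
    return template.format(**metadata)
```

The key design point mirrored from the text is the all-or-nothing gate: a template with an unfillable placeholder is dropped from the suggestion set rather than shown with a blank.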
### Global Response List Prior to deployment, ASAPP can generate a domain-relevant global response list using representative historical conversations as an input. This is a highly recommended customization to ensure agents receive useful, relevant suggestions as early as possible. If historical conversation data is unavailable prior to deploying AI Compose, you can use production conversations after deployment to adapt the response list at a later date. | **Option** | **Description** | **Requirements** | | :-------------- | :--------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------- | | Model-generated | Fully-custom global response list that extracts relevant terminology and sentences from real conversations | 200,000 historical transcripts to enable prior to implementation | For more information on sending historical transcript files to ASAPP, see [Transmitting Data to ASAPP](/reporting/send-s3#historical-transcript-file-structure). ### Queue/Skill Response List Filtering AI Compose can filter the global response list by agent queue/skill for a given conversation. For example, a subset of responses appropriate only for sales conversations can be labeled to be removed from technical troubleshooting conversations. The system labels responses with applicable queue(s)/skill(s) and makes them unavailable for suggestion if their labels do not match the conversation. | **Option** | **Description** | **Requirements** | | :------------------------------------------ | :----------------------------------------------------------------------------------------------------------- | :---------------------------------------- | | Global Response List with filter attributes | Global responses are labeled with optional attributes for skills for which they are exclusively appropriate. 
| Review and labeling of specific responses | For technical information about implementing this service, refer to the deployment guides for AI Compose: * [AI Compose API](/ai-productivity/ai-compose/deploying-ai-compose-api "Deploying AI Compose API") * [AI Compose for LivePerson](/ai-productivity/ai-compose/deploying-ai-compose-for-liveperson "Deploying AI Compose for LivePerson") * [AI Compose for Salesforce](/ai-productivity/ai-compose/deploying-ai-compose-for-salesforce "Deploying AI Compose for Salesforce") ## Use Cases ### For Improved Agent Productivity **Challenge** Agents spend a lot of time manually crafting responses to similar customer problems.  Using scripts can make the conversation sound robotic, so agents who do use canned responses spend a lot of time adjusting the language to sound more like them or use them too rarely to impact their response time or ability to handle multiple conversations concurrently. Average response crafting time and messaging concurrency, even with canned response library usage, remains high for most digital messaging programs. **Using AI Compose** AI Compose drastically reduces crafting time by not only serving up response suggestions from a much more exhaustive set of responses, but it also addresses the problem of canned responses sounding overly generic by empowering agents to craft messaging in their personal style. This is why adoption, and therefore efficiency gains overall, are so impressive when AI Compose is deployed. ### For CX Quality and Consistency **Challenge** Agents have a lot of information to learn to become domain experts and are often handling issues with which they have limited experience or that they have trouble recalling the best way to handle. Many companies use a variety of resources that agents have to search through to find answers on how to best handle a customer's problem.  
This can be difficult for an agent to juggle while in a live conversation, especially if they are unsure where to begin their search. **Using AI Compose** AI Compose learns from the global population of agents over time, which is incredibly useful to newer agents or agents who are beginning to handle conversations in a new domain. While the model may not initially have much indication of the language a particular agent likes to use, it naturally adapts by surfacing suggestions from the global response list and the global history of how similar conversations have been handled in the past.  This helps ensure that agents follow consistent behaviors when handling issues that they are less certain about. ## FAQs ### Model Improvement **How does the suggestion model improve over time?** The system automatically trains the model weekly on the latest historical data, informed by agent interactions with AI Compose at given moments of conversations. As the system observes more situational agent behaviors, it surfaces better response suggestions at more relevant points in the conversation. **Does the model adapt to topics not seen before?** As a baseline, models are able to make inferences about which existing responses are most relevant even if the topic is new. **Do new topics require new entries to be added to the global response list?** Major new topics are best handled by updating the global response list with appropriate responses. If, for example, you want to prepare for a new product launch, our recommendation is to make additions and edits to the global response list in advance, then upload on the day of the launch. Our models will immediately start making suggestions from the updated response list. As more agents use the system, the models will become even smarter at identifying when these new responses should be suggested. 
### Response Lists **How is the global response list updated?** AI-Console gives program administrators full control to make targeted or bulk updates to the global response list and manage deployments of those changes. Once deployed to production, response list changes are immediately available to agents. For more information, refer to the [AI Compose Tooling Guide](/ai-productivity/ai-compose/ai-compose-tooling-guide "AI Compose Tooling Guide"). **Does the global response list change automatically?** The global response list does not automatically update. It is managed exclusively by product owners for a given implementation. The organically growing list of commonly used responses updates automatically without the need for manual updates. **What is the difference between the global and custom response list?** The global response list is managed by center leadership and contains a comprehensive list of responses available across the agent population. This list is intended to be the recommended wording for responding to customers. The custom response list is managed by each agent and is exclusively accessible to them. Responses in the custom response list are suggested by AI Compose alongside entries in the global response list. Like the global response list, the custom response list also supports a folder structure and can be manually searched by the agent. **Does the suggestion model act differently from one agent to the next?** The suggestion model uses the agent's live conversation context and agent-specific responses from both the custom response list and the organically growing response list. AI Compose suggestion models are not unique to each agent, but have different inputs and potential responses that personalize their experience. **Can the global response list be customized by queue/skill?** Yes. 
When the global response list is being created or edited, ASAPP can add metadata to specific responses that make them eligible for specific queues or skills, and ineligible for suggestions for all others. For example, a set of 40 responses may only be applicable for an escalation queue, and be tagged such that they don't appear as suggestions in any conversation that appears in another queue. # AI Summary Source: https://docs.asapp.com/ai-productivity/ai-summary Use AI Summary to extract insights and data from your conversations AI Summary provides a set of APIs that enable you to extract insights from the wealth of data generated when your agents talk to your customers. AI Summary insights use ASAPP's Generative AI (LLMs). Organizations use these insights to identify custom data, intents, topics, entities, sentiment drivers, and other structured data from every voice or chat (message) interaction between a customer and an agent. You can customize AI Summary to your specific use cases, such as workflow optimizations, trade confirmations, compliance monitoring, and quality assurance. 
## Insights and Data

With AI Summary, you can extract the following information:

| Insight | Description | This enables you to |
| :--- | :--- | :--- |
| [Free text summary](/ai-productivity/ai-summary/free-text-summary) | Generates a concise text summary of each conversation | • Reduces average handle time by eliminating post-call summarization.<br />• Improves customer experience by allowing agents to focus on customers. |
| [Intents](/ai-productivity/ai-summary/intent) | Identifies the topic-level categorization of the customer's primary issue or question | • Optimizes operations by analyzing contact reasons.<br />• Improves customer experience through better conversation routing. |
| [Structured Data](/ai-productivity/ai-summary/structured-data) | Extracts specific, customizable data points from a conversation:<br />• Questions: answers to predefined queries (e.g., "Was the customer issue resolved?", "Did the agent follow the script?")<br />• Entities: key information stated in the conversation, such as claim numbers, account details, approval dates, monetary amounts, and more. | • Automates data collection for analytics and reporting.<br />• Facilitates compliance monitoring and quality assurance.<br />• Enables rapid population of CRMs and other business tools.<br />• Supports data-driven decision making and process improvement. |

## Customizable

ASAPP designs AI Summary to be highly customizable to meet your specific business needs:

* **Free Text Summaries and Intents**: We train these features on your historical conversation data for optimal performance.
* **Structured Data**:
  * **Questions**: You have full control over the questions asked. Define any yes/no questions relevant to your business processes or compliance needs.
  * **Entities**: Configure the system to extract the specific data points that matter most to your organization.

This level of customization ensures that AI Summary provides precisely the insights you need for your unique use cases.

## Implementation

AI Summary requires conversation transcripts to evaluate. You have multiple methods available to provide transcripts:

* **API (Real-Time)**: Use the Conversation API to upload conversations. A Getting Started guide covers this approach.
* **[AI Transcribe (speech-to-text service)](/ai-productivity/ai-transcribe)**: Use ASAPP's AI Transcribe to transcribe your phone calls.
* **[Salesforce plugin (for free text summaries only)](/ai-productivity/ai-summary/salesforce-plugin)**: If you use Salesforce Chat, install our plugin to automatically handle the API interactions. Only free text summary is supported.

Learn how to start using AI Summary

# Example Use Cases

Source: https://docs.asapp.com/ai-productivity/ai-summary/example-use-cases

See examples of how AI Summary can be used

AI Summary can be applied to various industries and use cases. This section provides examples of how AI Summary can be implemented to solve specific business challenges. Each use case includes a brief overview, key components, and a high-level architecture diagram.

## Regulatory Compliance Monitoring

AI Summary automatically flags customer conversations that trigger regulatory compliance requirements, such as Regulation Z (Truth in Lending Act) and Regulation E (Electronic Funds Transfer Act) in the financial services industry.
| Industry | Category | AI Summary Features |
| :--- | :--- | :--- |
| Financial Services | Compliance | • Structured Data extraction<br />• Intent identification |

### Implementation

1. Configure Structured Data extraction to identify key compliance-related information (e.g., loan terms, electronic fund transfer details).
2. Set up Intent identification to categorize conversations related to the compliance information.
3. Integrate with existing call center software for real-time or batch processing.
4. Connect outputs to risk management systems for review and reporting.

### Architecture

## Real-time Product Quality Monitoring (Retail, Telecommunications)

AI Summary generates free-text summaries of customer complaints about product quality, allowing for real-time identification of defects and issues. This includes data such as specific products, complaints, or issue types.

| Industry | Category | AI Summary Features |
| :--- | :--- | :--- |
| Retail, Telecommunications | Quality Assurance | Entity Extraction |

### Implementation

1. Configure Entity Extraction to identify product names and specific defect or issue descriptions.
2. Integrate with call center software for real-time processing.
3. Connect outputs to business intelligence systems for analysis and reporting.

### Architecture

## Automated Call Wrap-up (Multiple Industries)

AI Summary can automate the process of summarizing customer interactions, eliminating the need for manual note-taking by agents and providing consistent, high-quality call summaries. You can directly insert the summary and specific data elements into your contact center or CRM tool to remove manual steps.

| Industry | Category | AI Summary Features |
| :--- | :--- | :--- |
| • Retail<br />• Telco<br />• Insurance<br />• Travel<br />• Financial Services<br />• *Any* | • Call Center Operations<br />• Quality Assurance | • Free Text Summary generation<br />• Targeted Structured Data (Questions)<br />• Entity Extraction |

### Implementation

1. Set up Free Text Summary to generate comprehensive call summaries.
2. Configure Targeted Structured Data to answer key questions (e.g., "Was the customer's issue resolved?", "Were any follow-up actions required?").
3. Set up API connections to populate summaries into CRM or agent desktop applications.
4. Implement quality assurance processes to validate summary accuracy.

### Architecture

## Trade Confirmations (Financial Services)

AI Summary ensures compliance with financial regulations like FINRA by automatically verifying whether agents have confirmed trade details with customers before entering orders into the system. You can use Structured Data to extract the price, the type of order, the security being bought or sold, etc.

| Industry | Category | AI Summary Features |
| :--- | :--- | :--- |
| Financial Services | Compliance | Entity Extraction |

### Implementation

1. Configure Structured Data extraction to identify order type, security name/symbol, quantity, and price.
2. Set up Entity Extraction to capture customer account numbers and trade confirmation phrases.
3. Integrate with trading platforms for real-time verification.
4. Implement an alerting system for non-compliant trade confirmations.

### Architecture

# Free Text Summary

Source: https://docs.asapp.com/ai-productivity/ai-summary/free-text-summary

Generate conversation summaries with Free Text Summary

A Free Text Summary is a generated summary or description of a conversation. AI Summary generates high-quality, free-text summaries that are fully configurable in both format and content. You have the flexibility to include or exclude targeted elements based on your needs. This eliminates the need for agents to take notes during or after calls and minimizes post-call forms.

## How it works

To help understand how free-text summary works, let's use an example conversation:

> **Agent**: Hello, thank you for contacting XYZ Insurance.
How can I assist you today?\ > **Customer**: Hi, I want to check the status of my payout for my claim.\ > **Agent**: Sure, can you please provide me with the claim number?\ > **Customer**: It's H123456789.\ > **Agent**: Thank you. Could you also provide the last 4 digits of your account number?\ > **Customer**: 6789\ > **Agent**: Let me check the details for you. One moment, please.\ > **Agent**: I see that your claim was approved on June 10, 2024, for \$5000. The payout has been processed.\ > **Customer**: Great! When will I receive the money?\ > **Agent**: The payout will be credited to your account within 3-5 business days.\ > **Customer**: Perfect, thank you so much for your help.\ > **Agent**: You’re welcome! Is there anything else I can assist you with?\ > **Customer**: No, that's all. Have a nice day.\ > **Agent**: You too. Goodbye! The system selects each word in a paragraph summary uniquely for a given conversation transcript, rather than using predefined tags. The paragraph incorporates language used by the customer and agent to create a faithful representation of what was discussed in the conversation. Since the summary is generated, there may be minor variations in grammar if you repeatedly generate summaries for the same conversation. Here is an example summary generated from the above conversation: > The customer inquired about the status of a payout for an approved claim. The agent confirmed that the claim was approved and the payout has been processed and will be credited within 3-5 business days. For conversations that involve transfers or multiple agents, AI Summary can generate summaries for both the entire multi-leg conversation and specific legs. ## Generate a free text summary To generate a free text summary, first provide the conversation transcript to ASAPP. This example uses our Conversation API, but you have options to use AI Transcribe or batch integration options. 
### Step 1: Create a conversation

To create a **`conversation`**, provide your IDs for the conversation and customer.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "externalId": "[Your id for the conversation]",
    "customer": {
      "externalId": "[Your id for the customer]",
      "name": "customer name"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

A successfully created conversation returns a status code of 200 and a conversation ID.

### Step 2: Add messages

Next, add the messages for the conversation. In this example, we use the `/messages/batch` endpoint to add the entire example conversation.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/5544332211/messages/batch' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      { "text": "Hello, thank you for contacting XYZ Insurance. How can I assist you today?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:00:00Z" },
      { "text": "Hi, I want to check the status of my payout for my claim.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:01:00Z" },
      { "text": "Sure, can you please provide me with the claim number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:02:00Z" },
      { "text": "It'\''s H123456789.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:03:00Z" },
      { "text": "Thank you. Could you also provide the last 4 digits of your account number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:04:00Z" },
      { "text": "****", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:05:00Z" },
      { "text": "Let me check the details for you. One moment, please.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:06:00Z" },
      { "text": "I see that your claim was approved on June 10, ****, for ****. The payout has been processed.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:07:00Z" },
      { "text": "Great! When will I receive the money?", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:08:00Z" },
      { "text": "The payout will be credited to your account within 3-5 business days.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:09:00Z" },
      { "text": "Perfect, thank you so much for your help.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:10:00Z" },
      { "text": "You'\''re welcome! Is there anything else I can assist you with?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:11:00Z" },
      { "text": "No, that'\''s all. Have a nice day.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:12:00Z" },
      { "text": "You too. Goodbye!", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:13:00Z" }
    ]
  }'
```

### Step 3: Generate free text summary

Now that you have a conversation with messages, you can generate a free text summary. To generate the summary, provide the ID of the conversation.
```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/autosummary/v1/free-text-summaries/5544332211' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>'
```

A successful free text summary generation returns 200 and the summary.

```json theme={null}
{
  "conversationId": "5544332211",
  "summaryId": "0992d936-ff70-49fc-ac88-76f1246d8t27",
  "summaryText": "The customer inquired about the status of a payout for an approved claim. The agent confirmed that the claim was approved and the payout has been processed and will be credited within 3-5 business days."
}
```

This summary covers the entire conversation, regardless of the number of agents.

## Multi-leg summaries

You may have a conversation where one end user talks to multiple agents about different topics. With AI Summary, you can generate summaries for a conversation based on the agent interactions you want to summarize.

To generate a summary for one leg, provide the ID of the conversation in the path and the agent ID as a query parameter.

```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/autosummary/v1/free-text-summaries/5544332211?agentExternalId=agent_1234' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>'
```

This generates a summary only for the parts of the conversation that the specific agent participated in.

## Customization

AI Summary allows extensive customization of the free text summary to meet your specific needs. Whether you want to highlight particular aspects of conversations or adhere to industry-specific standards, this feature provides the flexibility to tailor summaries in a way that aligns with your operational goals.

To customize your free text summaries, work with your ASAPP account team to refine what you want from your summaries. As an example, using the sample conversation, you may want summaries to be specific about dates and amounts mentioned.
Here is an example with that customization:

> The customer inquired about the status of a payout for an approved claim. The agent confirmed that the claim was approved on **June 10, 2024**, for **\$5,000**, and the payout has been processed and will be credited within 3-5 business days.

# Getting Started

Source: https://docs.asapp.com/ai-productivity/ai-summary/getting-started

Learn how to get started with AI Summary

To start using AI Summary, choose your integration method:

* **API (Real-Time)**: Upload transcripts, or use a conversation from AI Transcribe, and receive the insights instantly. You can use this approach for real-time experiences like conversation routing and form pre-filling. For digital channels, provide the chat messages directly; for voice channels, use AI Transcribe or your own transcription service. This integration is covered in this Getting Started guide.
* **Salesforce plugin**: Supports only Salesforce Chat and inserts free-text summaries into conversation objects. Learn how to use the Salesforce Plugin.

The following instructions cover the **API (Real-Time) integration**, as it is the most common method. To use AI Summary via API:

1. Provide transcripts
2. Extract insights with the AI Summary API

## Before you Begin

Before you start integrating AI Summary, you need to:

* Get your API Key Id and Secret
* Ensure your account and API key can access AI Summary. Reach out to your ASAPP team if you are unsure.

## Step 1: Provide transcripts

How you provide transcripts depends on the conversation channel.

**For digital channels:**

* Use the **Conversation API** to upload the messages directly.

**For voice channels:**

* Use the **AI Transcribe** service to transcribe the audio, or
* Upload utterances via the Conversation API if using your own transcription service.

To send transcripts via the Conversation API, you need to:

1. Create a `conversation`.
2. Add `messages` to the `conversation`.

To create a `conversation`, provide your IDs for the conversation and customer.
```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "externalId": "[Your id for the conversation]",
    "customer": {
      "externalId": "[Your id for the customer]",
      "name": "customer name"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

This conversation represents a thread of messages between an end user and one or more agents. A successfully created conversation returns a status code of 200 and provides the `id` of the conversation.

```json theme={null}
{"id":"01HNE48VMKNZ0B0SG3CEFV24WM"}
```

Each time your end user or an agent sends a message, add it to the conversation by creating a `message` on the `conversation`. This may be either the chat messages in digital channels or the audio transcript from your transcription service.

You can add a **single message** for each turn of the conversation, or upload a **batch of messages** for a conversation.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/01HNE48VMKNZ0B0SG3CEFV24WM/messages' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": "Hello, I would like to upgrade my internet plan to GOLD.",
    "sender": {
      "role": "customer",
      "externalId": "[Your id for the customer]"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

A successfully created message returns a status code of 200 and the id of the message. We show only one message as an example, though you would create many messages over the course of the conversation.

Use the `/messages/batch` endpoint to send multiple messages at once for a given conversation.
```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/5544332211/messages/batch' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      { "text": "Hello, thank you for contacting XYZ Insurance. How can I assist you today?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:00:00Z" },
      { "text": "Hi, I want to check the status of my payout for my claim.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:01:00Z" },
      { "text": "Sure, can you please provide me with the claim number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:02:00Z" },
      { "text": "It'\''s H123456789.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:03:00Z" },
      { "text": "Thank you. Could you also provide the last 4 digits of your account number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:04:00Z" },
      { "text": "****", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:05:00Z" },
      { "text": "Let me check the details for you. One moment, please.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:06:00Z" },
      { "text": "I see that your claim was approved on June 10, ****, for ****. The payout has been processed.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:07:00Z" },
      { "text": "Great! When will I receive the money?", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:08:00Z" },
      { "text": "The payout will be credited to your account within 3-5 business days.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:09:00Z" },
      { "text": "Perfect, thank you so much for your help.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:10:00Z" },
      { "text": "You'\''re welcome! Is there anything else I can assist you with?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:11:00Z" },
      { "text": "No, that'\''s all. Have a nice day.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:12:00Z" },
      { "text": "You too. Goodbye!", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:13:00Z" }
    ]
  }'
```

AI Transcribe converts audio streams into real-time transcripts. Regardless of the platform you use:

1. AI Transcribe generates a `conversation` object for each transcribed interaction.
2. You'll receive a unique `conversation` ID.
3. Use this `conversation` ID to extract insights via the AI Summary API.

Platform-specific integration steps vary. Refer to the AI Transcribe documentation for detailed instructions for your chosen platform.

## Step 2: Extract Insights

AI Summary offers three types of insights, each with its own API endpoint:

* **Free text summary**
* **Intent**
* **Structured Data**

All APIs require the conversation ID to extract the relevant insight.
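The insight endpoints share the same call shape: a GET with the conversation ID in the path and the two API-key headers. As a minimal sketch, assuming hypothetical helper and credential names (only the free text summary and intent paths documented in this guide are included), the request can be assembled like this:

```python
# Sketch: assemble an AI Summary insight request.
# The base URL, paths, and header names come from the curl examples in this
# guide; the function name and credentials are hypothetical placeholders.
BASE_URL = "https://api.sandbox.asapp.com"

INSIGHT_PATHS = {
    "free_text_summary": "/autosummary/v1/free-text-summaries/{conversation_id}",
    "intent": "/autosummary/v1/intent/{conversation_id}",
}

def insight_request(insight: str, conversation_id: str,
                    api_id: str, api_secret: str) -> tuple:
    """Return the (url, headers) pair for an AI Summary insight call."""
    path = INSIGHT_PATHS[insight].format(conversation_id=conversation_id)
    headers = {"asapp-api-id": api_id, "asapp-api-secret": api_secret}
    return BASE_URL + path, headers

url, headers = insight_request("intent", "5544332211", "my-api-id", "my-api-secret")
print(url)  # https://api.sandbox.asapp.com/autosummary/v1/intent/5544332211
```

You would then issue the GET with your HTTP client of choice and parse the JSON body shown in the examples that follow.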
### Example: Generate a free text summary

To generate a free text summary, use the following API call:

```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/autosummary/v1/free-text-summaries/[conversationId]' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>'
```

A successful free text summary generation returns 200 and the summary:

```json theme={null}
{
  "conversationId": "5544332211",
  "summaryId": "0992d936-ff70-49fc-ac88-76f1246d8t27",
  "summaryText": "Customer called in saying their internet was slow. Customer wasn't home so couldn't run a speed test. Agent recommended calling back once they could run the speed test."
}
```

## Next Steps

Now that you understand the fundamentals of using AI Summary, explore further:

# Intent

Source: https://docs.asapp.com/ai-productivity/ai-summary/intent

Generate intents from your conversations

An intent is a topic-level descriptor (a single word or short phrase) that reflects the customer's main issue or question at the beginning of a conversation. Intents come with out-of-the-box support for common contact reasons, but you can customize them to match your unique use cases.

Intents enable you to optimize operations by analyzing contact reasons, route conversations more effectively, and contribute to your larger analysis activities.

## How it works

To help understand how intent identification works, let's use an example conversation:

> **Agent**: Hello, thank you for contacting XYZ Insurance. How can I assist you today?\
> **Customer**: Hi, I want to check the status of my payout for my claim.\
> **Agent**: Sure, can you please provide me with the claim number?\
> **Customer**: It's H123456789.\
> **Agent**: Thank you. Could you also provide the last 4 digits of your account number?\
> **Customer**: 6789\
> **Agent**: Let me check the details for you. One moment, please.\
> **Agent**: I see that your claim was approved on June 10, 2024, for \$5000. The payout has been processed.\
> **Customer**: Great!
When will I receive the money?\
> **Agent**: The payout will be credited to your account within 3-5 business days.\
> **Customer**: Perfect, thank you so much for your help.\
> **Agent**: You're welcome! Is there anything else I can assist you with?\
> **Customer**: No, that's all. Have a nice day.\
> **Agent**: You too. Goodbye!

AI Summary analyzes the conversation, focusing primarily on the initial exchanges, to determine the customer's main reason for contact. This is represented by the `name` of the intent and the `code`, a machine-readable identifier for that intent. In this case, the intent might be identified as:

```json theme={null}
{
  "code": "Payouts",
  "name": "Payouts"
}
```

The system determines the intent based on the customer's initial statement about checking the status of their payout, which is the primary reason for their contact.

## Generate an Intent

To generate an intent, first provide the conversation transcript to ASAPP. This example uses our **Conversation API** to provide the transcript, but you can use the [AI Transcribe](/ai-productivity/ai-transcribe) integration if you have voice conversations you want to send to ASAPP.

### Step 1: Create a conversation

To create a `conversation`, provide your IDs for the conversation and customer.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "externalId": "[Your id for the conversation]",
    "customer": {
      "externalId": "[Your id for the customer]",
      "name": "customer name"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

A successfully created conversation returns a status code of 200 and a conversation ID.

### Step 2: Add messages

Next, add the messages for the conversation. You can add a **single message** for each turn of the conversation, or upload a **batch of messages** for a conversation.
```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/01HNE48VMKNZ0B0SG3CEFV24WM/messages' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": "Hello, I would like to upgrade my internet plan to GOLD.",
    "sender": {
      "role": "customer",
      "externalId": "[Your id for the customer]"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

A successfully created message returns a status code of 200 and the id of the message. We show only one message as an example, though you would create many messages over the course of the conversation.

Use the `/messages/batch` endpoint to send multiple messages at once for a given conversation.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/5544332211/messages/batch' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      { "text": "Hello, thank you for contacting XYZ Insurance. How can I assist you today?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:00:00Z" },
      { "text": "Hi, I want to check the status of my payout for my claim.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:01:00Z" },
      { "text": "Sure, can you please provide me with the claim number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:02:00Z" },
      { "text": "It'\''s H123456789.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:03:00Z" },
      { "text": "Thank you. Could you also provide the last 4 digits of your account number?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:04:00Z" },
      { "text": "****", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:05:00Z" },
      { "text": "Let me check the details for you. One moment, please.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:06:00Z" },
      { "text": "I see that your claim was approved on June 10, ****, for ****. The payout has been processed.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:07:00Z" },
      { "text": "Great! When will I receive the money?", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:08:00Z" },
      { "text": "The payout will be credited to your account within 3-5 business days.", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:09:00Z" },
      { "text": "Perfect, thank you so much for your help.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:10:00Z" },
      { "text": "You'\''re welcome! Is there anything else I can assist you with?", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:11:00Z" },
      { "text": "No, that'\''s all. Have a nice day.", "sender": {"role": "customer", "externalId": "cust_1234"}, "timestamp": "2024-09-09T10:12:00Z" },
      { "text": "You too. Goodbye!", "sender": {"role": "agent", "externalId": "agent_1234"}, "timestamp": "2024-09-09T10:13:00Z" }
    ]
  }'
```

### Step 3: Generate Intent

With a conversation containing messages, you can generate an intent. To generate the intent, provide the ID of the conversation:

```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/autosummary/v1/intent/5544332211' \
  --header 'asapp-api-id: <API KEY ID>' \
  --header 'asapp-api-secret: <API KEY SECRET>'
```

A successful intent generation returns 200 and the intent:

```json theme={null}
{
  "conversationId": "5544332211",
  "intent": {
    "code": "Payouts",
    "name": "Payouts"
  }
}
```

This intent represents the primary reason for the customer's contact, regardless of the number of agents involved in the conversation.
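Each entry in the batch upload used in Step 2 repeats the same message shape (`text`, `sender`, `timestamp`). As a sketch (the helper name and the one-minute timestamp spacing are illustrative assumptions, not part of the API), the payload can be generated from a simple transcript list:

```python
from datetime import datetime, timedelta, timezone

def build_batch_payload(transcript, start, step_seconds=60):
    """Build a /messages/batch request body from (role, externalId, text) turns.
    Timestamps are spaced step_seconds apart, mirroring the example above.
    Hypothetical helper for illustration only."""
    messages = []
    for i, (role, external_id, text) in enumerate(transcript):
        ts = start + timedelta(seconds=i * step_seconds)
        messages.append({
            "text": text,
            "sender": {"role": role, "externalId": external_id},
            "timestamp": ts.strftime("%Y-%m-%dT%H:%M:%SZ"),
        })
    return {"messages": messages}

payload = build_batch_payload(
    [
        ("agent", "agent_1234", "Hello, thank you for contacting XYZ Insurance."),
        ("customer", "cust_1234", "Hi, I want to check the status of my payout for my claim."),
    ],
    start=datetime(2024, 9, 9, 10, 0, tzinfo=timezone.utc),
)
print(payload["messages"][1]["timestamp"])  # 2024-09-09T10:01:00Z
```

Serializing this dictionary as JSON gives the request body shown in the batch curl example, which is useful when you replay historical transcripts programmatically.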
## Customization

AI Summary provides extensive customization options for intent identification to meet your specific needs. Whether you want to use industry-specific intents or adhere to your company's unique categorization, this feature provides the flexibility to tailor intents in a way that aligns with your operational goals.

To customize your intents, you can use the Self-Service Configuration tool in ASAPP's AI Console. This tool allows you to:

1. Upload, create, or modify intent labels
2. Expand intent classifications by adding as many intents as needed
3. Configure the system to align with your specific operational requirements

For more advanced customization, work with your ASAPP account team to refine and implement a custom set of intents that suit your business needs.

# Deploying AI Summary for Salesforce

Source: https://docs.asapp.com/ai-productivity/ai-summary/salesforce-plugin

Learn how to use the AI Summary Salesforce plugin.

## Using This Guide

**Deployment Guides** describe the technical components of ASAPP services and provide information about how to interact with and implement these components in your organization.

## Overview

ASAPP AI Summary generates a summary of each voice or messaging (chat) interaction between a customer and an agent. AI Summary also generates Structured Data and intent outputs. With automated interaction summaries, organizations reduce agent time and effort both during and after calls, and gain high-coverage summary data for future reference by agents, supervisors, and quality teams.

AI Summary currently supports English-language conversations only.

### Technology

ASAPP AI Summary has the following technical components:

* An AI Summary model that ASAPP uses to generate summary text
* An ASAPP component that interfaces between ASAPP's AI Summary and Conversation APIs to generate summaries for each conversation
## Setup ### Requirements **Browser Support** ASAPP AI Summary is supported in Google Chrome and Microsoft Edge. This support covers the latest version of each browser and extends to the previous two versions. Please consult your ASAPP account contact if your installation requires support for other browsers. **Salesforce** ASAPP supports Lightning-based chat (not Classic). **SSO Support** AI Summary supports SP-initiated SSO with either OIDC (preferred method) or SAML. **Domain Whitelisting** In order for AI Summary to interact with ASAPP's backend and third-party support services, the following domains need to be accessible from end-user environments: | Domain | Description | | :------------------------------------------- | :----------------------------------------------------------------- | | `*.asapp.com` | ASAPP service URLs | | `*.ingest.sentry.io` | Application performance monitoring tool | | `fonts.googleapis.com` | Fonts | | `google-analytics.com` | Page analytics | | `asapp-chat-sdk-production.s3.amazonaws.com` | Static ASAPP AWS URL for desktop network connectivity health check | ### Procedure There are two parts to the AI Summary setup process. Use the links below to skip to information about a specific part of the process: 1. [Configure the Salesforce organization](#1-configure-the-salesforce-organization-centrally-35766 "1. Configure the Salesforce Organization Centrally") centrally using an administrator account 2. [Set up agent/user authentication](#2-set-up-single-sign-on-sso-user-authentication-35766 "2. Set Up Single Sign-On (SSO) User Authentication") through the existing single sign-on (SSO) service Expected effort for each part of the setup process: * 1 hour for installation and configuration of the ASAPP chat components * 1-2 hours to enable user authentication, depending on SSO system complexity #### 1. 
Configure the Salesforce Organization Centrally **Before You Begin** You will need the following information to configure ASAPP for Salesforce: * Administrator credentials to log in to your Salesforce organization account. * **NOTE:** Both the organization and the administrator must be enabled for 'chat'. * A URL for the ASAPP installation package, which will be provided by ASAPP. ASAPP provides the same install package for implementing both AI Compose and AI Summary in Salesforce. Use this guide to configure AI Summary. If you're looking to implement AI Compose, [use this guide](/ai-productivity/ai-compose/deploying-ai-compose-for-salesforce). * API Id and API URL values, which can be found in your ASAPP Developer Portal account (developer.asapp.com) in the **Apps** section. **Configuration Steps** **1. Install the ASAPP Package** * Open the package installation URL from ASAPP. * Log in with your Salesforce organization administrator credentials. The package installation page appears: * Choose **Install for All Users** (as shown above). * Check the acknowledgment statement and click the **Install** button: * The installation runs. An **Installation Complete!** message appears: * Click the **Done** button. **2. Add ASAPP to the Chat Transcript Page** * Open the 'Service Console' page (or your chat page). * Choose an existing chat session or start a new chat session so that the chat transcript page appears (the exact mechanism is organization-specific). * In the top-right, click the **gear** icon, right-click **Edit Page**, and choose **Open Link in New Tab**. * Navigate to the new tab to see the chat transcript edit page: * Select the conversation panel (middle) and delete it. * Drag the **chatAsapp** component (left) inside the conversation panel: * Drag the **exploreAsapp** component (left) to the right column. 
Next, add your organization's **API key** and **API URL** (found in the ASAPP Developer Portal) in the rightmost panel: The API key is labeled as **API Id** in the ASAPP Developer Portal. The API URL is `https://api.sandbox.asapp.com` for lower environments and `https://api.asapp.com` for production. * Click **Save**, then click **Activate**. * Click **Assign as org default**. * Choose the **Desktop** form factor, then click **Save**. * Return to the chat transcript page and refresh; the ASAPP composer should appear. **3. Add a new Salesforce field to populate AI Summary results** AI Summary writes only to the **Chat Transcript** object. You need to create a new field on the Chat Transcript object that will be used by the ASAPP component. * Go to the **Setup** > **Object Manager** > **Chat Transcript** > **Fields & Relationships** page (in this specific example, we choose to add the field for summarization on the Chat Transcript page). * Click the **New** button. * **Choose the field type (Step 1)**: we suggest setting this field as **Text Area (Long)**. Once this radio button is selected, click the **Next** button. * **Enter the field details (Step 2)**: add a **Field Label** and a **Field Name**, then click **Next**. Take note of this **Field Name**, as it will be needed when setting up the AI Summary widget. * **Establish field-level security (Step 3)**: no changes are needed. Click **Next**. * **Add to page layouts (Step 4)**: be sure to add the new field to page layouts for this implementation, then click **Save**. * Once created, you will be able to see the field on the following page: **4. Configure AI Summary Widget** * On the Service Console page, click **Configuration** (gear icon) and then click **Edit Page**. * Click the **ASAPP** panel. The configuration panel appears on the right of the page. Enter the following information into the fields: * **API key**: this is the **API Id** found in the ASAPP Developer Portal. 
* **API URL**: this is found in the ASAPP Developer Portal; use `https://api.sandbox.asapp.com` in lower environments and `https://api.asapp.com` in production. * Select the checkbox for **ASAPP AI Summary**. * **ASAPP AI Summary field**: enter the **Field Name** created as part of Step 3. This is the field where the ASAPP-generated summary will appear. * Click the **Save** button to apply the changes. These configuration steps add the AI Summary field to the Chat Transcript object. From this point forward, you may use this summary field as part of your agent-facing or internal summary data use case. A common use case is to display this field to the agent in the Record Detail widget. **5. Add Record Detail Widget (OPTIONAL)** * If the Record Detail widget is not already on the Chat Transcript page, drag the **Record Detail** widget from the left panel and place it on the page. * Click the **Save** button to apply the changes. * Refresh the page to see the changes applied. The AI Summary field should now be visible under the **Transcription** section of the Record Detail widget. Once the conversation has ended, the summary is displayed in this newly configured field in the Record Detail widget. #### 2. Set Up Single Sign-On (SSO) User Authentication ASAPP handles authentication through the customer's SSO service to confirm the identity of the agent. ASAPP acts as the Service Provider (SP), with the customer acting as the Identity Provider (IDP). The customer's authentication system performs user authentication using their existing user credentials. ASAPP supports SP-initiated SSO with either OIDC (preferred method) or SAML. Once the user initiates sign-in, ASAPP detects that the user is authenticated and requests an assertion from the customer's SSO service. **Configuration Steps for OIDC (preferred method)** 1. Create a new IDP OIDC application with type `Web` 2. 
Set the following attributes for the app:

| Attribute             | Value\*                                                                                                                                                      |
| :-------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Grant Type            | authorization code                                                                                                                                            |
| Sign-in Redirect URIs | Production: `https://api.asapp.com/auth/v1/callback/{company_marker}`<br />Sandbox: `https://api.sandbox.asapp.com/auth/v1/callback/{company_marker}-sandbox` |

\*ASAPP to provide the `company_marker` value.

3. Save the application and send ASAPP the `Client ID` and `Client Secret` from the app through a secure communication channel
4. Set scopes for the OIDC application: * Required: `openid` * Preferred: `email`, `profile`
5. Tell ASAPP which end-user attribute should be used as a unique identifier
6. Tell ASAPP your IDP domain name

**Configuration Steps for SAML**

1. Create a new IDP SAML application.
2. Set the following attributes for the app:

| Attribute            | Value\*                                                                                                                                                                                                                                                           |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Single Sign On URL   | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml`<br />Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox` |
| Recipient URL        | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml`<br />Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox` |
| Destination URL      | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml`<br />Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox` |
| Audience Restriction | Production: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml/endpoint/clients/asapp-saml`<br />Sandbox: `https://sso.asapp.com/auth/realms/standalone-{company_marker}-auth/broker/saml-sandbox/endpoint/clients/asapp-saml-sandbox` |
| Response             | Signed                                                                                                                                                                                                                                                            |
| Assertion            | Signed                                                                                                                                                                                                                                                            |
| Signature Algorithm  | RSA\_SHA256                                                                                                                                                                                                                                                       |
| Digest Algorithm     | SHA256                                                                                                                                                                                                                                                            |
| Attribute Statements | externalUserId:                                                                                                                                                                                                                                                   |

\*ASAPP to provide the `company_marker` value.

3. Save the application and send ASAPP the public certificate used to validate the signature of this app's SAML payload
4. Send the ASAPP team the URL of the SAML application

## Usage

### Customization

#### Historical Transcripts for Summary Model Customization

ASAPP uses past agent conversations to generate a customized summary model that is tailored to a given use case. In order to create a customized summary model, ASAPP requires a minimum of 500 representative historical transcripts to generate free-text summaries. Transcripts should identify both the agent and customer messages. Proper transcript formatting and sampling ensures data is usable for model training. Please ensure transcripts conform to the following:

**Formatting**

* Each utterance is clearly demarcated and sent by one identified sender
* Utterances are in chronological order and complete, from the beginning to the very end of the conversation
* Where possible, transcripts include the full content of the conversation rather than an abbreviated version. For example, in a digital messaging conversation:

| Full                                                                                                                                                      | Abbreviated                                                    |
| :-------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------- |
| Agent: Choose an option from the list below<br />Agent: (A) 1-way ticket (B) 2-way ticket (C) None of the above<br />Customer: (A) 1-way ticket           | Agent: Choose an option from the list below<br />Customer: (A) |

**Sampling** * Transcripts are from a wide range of dates to avoid seasonality effects; random sampling over a 12-month period is recommended * Transcripts mimic the production conversations on which models will be used - same types of participants, same channel (voice, messaging), same business unit * There are no duplicate transcripts
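As a rough pre-submission check, the formatting and sampling rules above can be validated programmatically. The Python sketch below assumes an illustrative transcript shape (`sender`, `text`, and `timestamp` keys, with senders labeled `agent` or `customer`); it is not an ASAPP-required format.

```python
from datetime import datetime


def transcript_problems(utterances):
    """Return formatting problems for one transcript: unidentified senders
    or utterances that are out of chronological order."""
    problems = []
    times = []
    for i, u in enumerate(utterances):
        if u.get("sender") not in ("agent", "customer"):
            problems.append(f"utterance {i}: sender not identified")
        # Python < 3.11 does not accept a trailing 'Z' in fromisoformat
        times.append(datetime.fromisoformat(u["timestamp"].replace("Z", "+00:00")))
    if times != sorted(times):
        problems.append("utterances not in chronological order")
    return problems


def has_duplicates(transcripts):
    """Flag identical transcripts within a sample; duplicates should be removed."""
    seen = set()
    for t in transcripts:
        key = tuple((u["sender"], u["text"]) for u in t)
        if key in seen:
            return True
        seen.add(key)
    return False
```

Running checks like these over your sample before transmission helps catch ordering, attribution, and duplication issues early.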
For more information on how to transmit the conversation data, reach out to your ASAPP account contact. Visit [Transmitting Data to SFTP](/reporting/send-sftp) for instructions on how to send historical transcripts to ASAPP. #### Conversation Redaction When message text in the conversation transcript is sent to ASAPP, ASAPP applies redaction to the message text to prevent transmission of sensitive information. Reach out to your ASAPP account contact for information on available redaction capabilities to configure for your implementation. ### Data Security ASAPP's security protocols protect data at each point of transmission from first user authentication, to secure communications, to our auditing and logging system, all the way to securing the environment when data is at rest in the data logging system. Access to data by ASAPP teams is tightly constrained and monitored. Strict security protocols protect both ASAPP and our customers. The following security controls are particularly relevant to AI Summary: 1. Client sessions are controlled using a time-limited authorization token. Privileges for each active session are controlled server-side to mitigate potential elevation-of-privilege and information disclosure risks. 2. To avoid unauthorized disclosure of information, unique, non-guessable IDs are used to identify conversations. These conversations can only be accessed using a valid client session. 3. Requests to API endpoints that can potentially receive sensitive data are put through a round of redaction to strip the request of sensitive data (like SSNs and phone numbers). # AI Summary Sandbox Source: https://docs.asapp.com/ai-productivity/ai-summary/sandbox Learn how to use the AI Summary Sandbox to test and validate summary generation. 
The AI Summary Sandbox is a testing environment accessible through AI Console that allows administrators and developers to: * Generate and visualize free-text summaries and structured data * Test summary generation on voice and messaging conversations * Validate summary outputs before deploying to production * Simulate conversations or upload existing transcripts ## Creating Test Conversations The AI Summary Sandbox supports two methods for testing summary generation: **Simulate Conversations** * Create new conversations by switching between customer and agent roles * Test voice conversations using real-time transcription via AI Transcribe * Validate summary generation on different conversation types and scenarios **Upload Transcripts** * Load existing conversation transcripts * Test summary generation on historical conversations * Validate model performance on real customer interactions ## Available Summary Types The Sandbox generates summaries based on your environment's configuration: | Type | Description | Availability | | :---------------- | :-------------------------------------------------------- | :------------------------------------------------------------- | | Free Text Summary | Concise narrative summary of the conversation | Always available | | Intent | Topic-level categorization of customer's primary issue | Available after custom model training | | Structured Data | Extracted data points and answers to predefined questions | Available after customizing your structured data configuration | You must configure intent and structured data capabilities for your specific business needs. Contact your ASAPP account team to enable these features. 
## Using the Sandbox Depending on the type of conversation you want to test, you can use one of the following methods: When testing voice conversations in the Sandbox: * AI Transcribe powers real-time transcription * If no custom AI Transcribe model exists, the system uses a baseline contact center model * The system generates transcripts in real-time as you speak For messaging conversations, you can: * Switch between customer and agent roles to simulate a chat * Type messages directly in the interface * Upload existing chat transcripts ### Generating Summaries 1. Create or upload a conversation using one of the methods above 2. Click "Generate Summary" to process the conversation 3. View the generated free-text summary and any enabled structured data 4. Use the conversation ID to retrieve summaries via API if needed # Structured Data Source: https://docs.asapp.com/ai-productivity/ai-summary/structured-data Extract entities and targeted data from your conversations Structured data extracts specific, customizable data points from conversations. This feature includes two main components: * **Entity extraction**: Automatically identifies and extracts specific information from conversations. * **Question extraction (Targeted Structured Data)**: Answers predefined questions based on conversation content. Both entity and question extraction come with industry-specific templates that can be customized to fit your unique use cases. Structured data's flexible nature enables it to solve numerous challenges, including: * Automating data collection for analytics and reporting * Facilitating compliance monitoring and quality assurance * Populating CRMs and other business tools efficiently * Supporting data-driven decision-making and process improvement ## How it works Here's an example conversation to demonstrate how Structured Data works: > **Agent**: Hello, thank you for contacting XYZ Insurance. 
How can I assist you today?\ > **Customer**: Hi, I want to check the status of my payout for my claim.\ > **Agent**: Sure, can you please provide me with the claim number?\ > **Customer**: It's H123456789.\ > **Agent**: Thank you. Could you also provide the last 4 digits of your account number?\ > **Customer**: 6789\ > **Agent**: Let me check the details for you. One moment, please.\ > **Agent**: I see that your claim was approved on June 10, 2024, for \$5000. The payout has been processed.\ > **Customer**: Great! When will I receive the money?\ > **Agent**: The payout will be credited to your account within 3-5 business days.\ > **Customer**: Perfect, thank you so much for your help.\ > **Agent**: You’re welcome! Is there anything else I can assist you with?\ > **Customer**: No, that's all. Have a nice day.\ > **Agent**: You too. Goodbye! AI Summary analyzes this conversation and extracts structured data in two ways: Entity Extraction automatically identifies and extracts specific information from conversations. Extracted entities can include claim numbers, account details, dates, monetary amounts, and more. For the example conversation above, the extracted entities would look like this: ```json theme={null} [ { "name": "Claim Number", "value": "H123456789" }, { "name": "Account Number Last 4", "value": "6789" }, { "name": "Approval Date", "value": "2024-06-10" }, { "name": "Payout Amount", "value": 5000 } ] ``` Targeted Structured Data (Questions) provides answers to predefined queries based on conversation content. You can customize these questions to address specific aspects of customer interactions, compliance requirements, or other relevant factors. 
For the example conversation above, predefined questions and their answers would look like this: ```json theme={null} [ { "name": "Customer Satisfied", "answer": "Yes" }, { "name": "Payout Information Provided", "answer": "Yes" }, { "name": "Verification Completed", "answer": "Yes" } ] ``` ## Structured Data Dashboard Use the Structured Data dashboard to configure which entities or questions you want to extract. Once configured, ASAPP will return the extracted structured data. The Structured Data dashboard enables you to: * **Add new entities**: Define fields to extract from conversations, such as order numbers, dates, amounts, or other relevant data points. * **Add new questions**: Create targeted questions to answer based on conversation content. Tailor these questions to your specific use cases, including compliance checks, behavioral insights, or downstream workflows. * **Modify or delete existing entities and questions**: Rename, change types, or update parameters for existing entities or questions to better suit your requirements. Remove any entities or questions that are no longer needed. * **Control visibility and lifecycle**: Enable or disable fields based on team, workflow, or environment. * **Preview changes**: Preview and validate extraction behavior before publishing. Structured Data Dashboard ## Before you begin Before configuring Structured Data, ensure you have: * Access to the AI Summary dashboard with necessary permissions to manage Structured Data settings. ## Configure Structured Data Configure Structured Data to extract specific entities and answer questions most relevant to your business needs. Follow these steps: 1. Access the Structured Data Dashboard from your AI Summary dashboard. 2. Navigate to the "Structured Data" section. 3. Select **Create Data Fields** and click **Create Entity** or **Create Question**. 4. Fill in the required details: name, description, category, segment tag, and examples (for entities). 5. 
Click **Create** to save the new entity or question. Create Structured Data Once you've configured your entities and questions, they'll appear in the Structured Data Interface where you can edit, delete, or manage their visibility as needed. ## View Extracted Structured Data Once created, Structured Data begins returning extracted data within 24 hours. View extracted structured data in a dashboard under the **"Historical Data"** section within the AI Console interface. You can also download the structured data as a report in either CSV or PDF format from this dashboard. The structured data represents both the entities and answered questions you've configured. You can export structured data to your data warehouse via [S3 bucket](/reporting/send-s3). Contact your ASAPP account team to enable this feature. ## Customization Structured Data questions and entities are fully customizable to meet your business needs. We provide industry-specific questions and entities to get you started. Work with your ASAPP account team to determine whether our out-of-the-box configurations meet your needs or if custom structured data is required. Use our [APIs to self-serve custom structured data fields](/ai-productivity/ai-summary/structured-data/segments-and-customization). # Segments and Customization Source: https://docs.asapp.com/ai-productivity/ai-summary/structured-data/segments-and-customization Learn how to customize the data extracted with Structured Data. Each business has unique needs and requirements for the type of data they want to extract from their conversations. We offer out-of-the-box configuration for common use cases but also offer two sets of APIs for you to customize the structured data yourself: * Create custom `structured data fields` to expand the types of data you can extract. * Create `segments` to customize which sets of structured data the system extracts for a defined type of conversation. 
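The two kinds of customization come together in a simple workflow: create a structured data field, then reference its id from a segment. As a preview (both request bodies appear in full later on this page), here is a hedged Python sketch that builds the two payloads; the helper names and example values are illustrative.

```python
def make_structured_data_field(field_id, name, question_text,
                               category_id="OUTCOME", field_type="QUESTION"):
    """Payload for POST /configuration/v1/structured-data-fields."""
    if not field_id.startswith(("q_", "e_")):
        raise ValueError("structured data field ids must begin with 'q_' or 'e_'")
    return {
        "id": field_id,
        "name": name,
        "categoryId": category_id,   # OUTCOME or CUSTOM
        "type": field_type,          # QUESTION or ENTITY
        "question": {"question": question_text},
        "active": True,
    }


def make_segment(segment_id, name, raw_query, field_ids):
    """Payload for POST /configuration/v1/segments."""
    return {
        "id": segment_id,
        "name": name,
        "query": {"type": "raw", "raw": raw_query},
        "structuredDataFieldIds": list(field_ids),
    }


field = make_structured_data_field(
    "q_promotion_was_offered", "Promotion was offered",
    "Did the agent offer the correct promotion?")
segment = make_segment("USER_SUPPORT", "Support", "TRUE", [field["id"]])
```

Each payload is then POSTed to the corresponding Configuration API endpoint with your `asapp-api-id` and `asapp-api-secret` headers, as shown in the curl examples that follow.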
## Before you Begin Before you start integrating with Custom Structured Data Fields and Segments, you need to: * [Get your API Key Id and Secret](/getting-started/developers) * Ensure your API key has been configured to access AI Summary Configuration APIs. Reach out to your ASAPP team if you are unsure. ## Custom Structured Data Fields Each piece of structured data you extract is defined by a `structured-data-field`. ASAPP sets up an initial set of structured data fields for you, but you can also query and create custom structured data fields yourself. To create a custom structured data field, you need to create a new [`structured-data-field`](/apis/configuration/structured-data-fields/create-structured-data-field) object with the following fields: * `id`: Your unique identifier for the structured data field. Must begin with `q_` or `e_`. * `name`: The name of the structured data field. * `categoryId`: The category of the structured data field. Must be either **OUTCOME** or **CUSTOM**. * `type`: The type of the structured data field. Must be either **QUESTION** or **ENTITY**. * `question`: The question that the system will answer using the context of the conversation. * `active`: Whether the structured data field is active. ```bash theme={null} curl --request POST \ --url https://api.sandbox.asapp.com/configuration/v1/structured-data-fields \ --header 'Content-Type: application/json' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --data '{ "id": "q_promotion_was_offered", "name": "Promotion was offered", "categoryId": "OUTCOME", "type": "QUESTION", "question": { "question": "Did the agent offer the correct promotion?" }, "active": true }' ``` A successfully created structured data field returns a `200` status code and the newly created `structured-data-field` object in the response body. 
```json theme={null} { "id": "q_promotion_was_offered", "name": "Promotion was offered", "categoryId": "OUTCOME", "type": "QUESTION", "question": { "question": "Did the agent offer the correct promotion?" }, "active": true } ``` The system will not extract an inactive structured data field from conversations. You can then use the structured data field id to create a segment. ## Segments Segments configure which sets of structured data fields the system extracts for a defined type of conversation. Segments are defined by two parts: * A **query** that matches against the conversation metadata and intent. * A list of **structured data field ids** that the segment includes. When you generate structured data for a conversation, the system follows these steps: 1. Checks the conversation against the queries of all segments 2. For each matching query: * Extracts the structured data fields that the segment defines 3. If multiple segments match: * Combines and extracts all structured data fields from all matching segments By default, there is a [**GLOBAL** segment](#global-segment) that represents the initially configured structured data fields with a query that matches TRUE on any conversation. Most companies will want to create custom segments to extract structured data fields for specific types of conversations, such as a support call involving a specific product or service, or types of sales calls. ### Create a new segment To create a new segment, you need to create a new [`segment`](/apis/configuration/segments/create-segment) object with the following fields: * `id`: Your unique identifier for the segment. * `name`: The name of the segment. * `query`: The [query](#query) that defines which conversations are included in the segment. * `structuredDataFieldIds`: The list of structured data field ids that the segment includes. 
```bash theme={null} curl --request POST \ --url https://api.sandbox.asapp.com/configuration/v1/segments \ --header 'Content-Type: application/json' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --data '{ "id": "USER_SUPPORT", "name": "Support", "query": { "type": "raw", "raw": "TRUE" }, "structuredDataFieldIds": [ "q_promotion_was_offered", "e_promotion_details" ] }' ``` A successfully created segment returns a 200 status code and the newly created segment object in the response body. ```json theme={null} { "id": "USER_SUPPORT", "name": "Support", "query": { "type": "raw", "raw": "TRUE" }, "structuredDataFieldIds": [ "q_promotion_was_offered", "e_promotion_details" ] } ``` ### Query The segment's query defines rules for when the system should apply a segment to a conversation. We currently only support a query type of `raw` that uses a SQL-like syntax with a focused set of operators for clear and precise matching. The query language supports these key elements: **Logical Operators** * `AND`, `OR`, `NOT` - Combine conditions * Parentheses `()` for grouping and precedence **Field Comparisons** * Equality: `field = 'value'` * List membership: `field IN ['value1', 'value2']` #### Available Fields The data you can query against is the conversation metadata that you upload as part of [metadata ingestion](/reporting/metadata-ingestion). 
Specifically, you can query against the following fields: **Conversation Metadata** | Field | Description | | -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | lob\_id | Line of business identifier | | lob\_name | Line of business name | | group\_id | Group identifier | | group\_name | Group name | | agent\_routing\_code | Agent's routing attribute | | campaign | Activities related to the issue | | device\_type | Client's device type (TABLET, PHONE, DESKTOP, WATCH, OTHER) | | platform | Client's platform type (SMS, WEB, IOS, ANDROID, APP, LOCAL, VOICE, VOICE\_IOS, VOICE\_ANDROID, VOICE\_ECHO, VOICE\_HOMEPOD, VOICE\_GGLHOME, VOICE\_WEB, APPLEBIZ, GOOGLEBIZ, GBM, WAB) | | company\_segment | Company's segment(s) that the issue belongs to | | company\_subdivision | Company's subdivision that the issue belongs to | | business\_rule | Business rule to use | | entry\_type | Way the issue started and created in the system | | operating\_system | Operating system used to enter the issue (MAC\_OS, LINUX, WINDOWS, ANDROID, IOS, OTHER) | | browser\_type | Browser type used | | browser\_version | Browser version used | **Conversation Intent** | Field | Description | | ------------ | ----------------- | | intent\_code | Intent identifier | | intent\_name | Intent name | #### Query Examples Here are some examples of how queries can be constructed for different types of conversations. Note that the field values used in these examples are arbitrary and for illustration purposes only. You will need to construct queries using your actual metadata fields and values based on your business needs. 
```sql theme={null} group_id IN ['mobile_support', 'mobile_tech'] AND platform = 'ios' ``` ```sql theme={null} intent_code IN ['UPGRADE_INQUIRY', 'ADDITIONAL_SERVICE', 'PREMIUM_FEATURES'] AND company_subdivision = 'inside_sales' ``` ```sql theme={null} intent_code = 'COMPLAINT' AND campaign = 'holiday_season' AND business_rule = 'high_priority' ``` ```sql theme={null} intent_code = 'BILLING' AND lob_id IN ['wireless_service', 'broadband_service'] ``` ### Global Segment The **GLOBAL** segment is a special segment that matches all conversations. The system automatically creates it when you first configure your structured data fields. You can update the **GLOBAL** segment to include new structured data fields or modify the query to change the criteria for matching conversations. We recommend that once you start creating custom segments, you update the **GLOBAL** segment to remove the structured data fields and rely on the custom segments to extract structured data. # AI Transcribe Source: https://docs.asapp.com/ai-productivity/ai-transcribe Transcribe your audio with best-in-class accuracy ASAPP AI Transcribe converts speech to text in real time for live call audio streams and audio recordings. Use AI Transcribe for voice interactions between contact center agents and their customers, in support of a broad range of use cases including real-time guidance, topical analysis, coaching, and quality management. ## How it Works ASAPP's AI Transcribe service is powered by a speech recognition model that transforms spoken forms into written text in real time, adding punctuation and capitalization. 
To optimize performance, you can customize the model to support domain-specific needs by training on historical call audio and adding custom vocabulary to further boost recognition accuracy. ASAPP designed AI Transcribe to be fast enough to show an agent what was said immediately after every utterance. You can implement AI Transcribe in three main integration patterns: 1. **WebSocket API**: All audio streaming, call signaling, and returned transcripts use a WebSocket API, preceded by an authentication mechanism that uses a REST API. 2. **SIPREC Media Gateway**: The ASAPP media gateway receives audio streaming, and a dedicated API receives call signaling; the system returns transcripts either in real time or post-call. 3. **Third-Party CCaaS**: A third-party contact center as a service (CCaaS) vendor sends audio to the ASAPP media gateway, and an API sends call signaling; the system returns transcripts either in real time or post-call. Learn more about AI Transcribe in the Product Guide ## Get Started To get started with AI Transcribe, you need to: 1. Follow the [Developer Quickstart](/getting-started/developers) to get your API Credentials 2. 
Choose the integration that best fits your use case:

### Platform Connectors

Transcribe audio from your SIPREC system using the ASAPP Media Gateway

Transcribe audio from your Twilio system using the ASAPP Media Gateway

Transcribe audio from your Amazon Connect system using the ASAPP Media Gateway

Transcribe audio from your Genesys system using the ASAPP Media Gateway

### Direct Integration

Use a WebSocket to send audio directly to AI Transcribe and receive the transcriptions

## Next Steps

Learn more about AI Transcribe in the Product Guide

Get started with the Developer Quickstart Guide

See a list of feature releases for AI Transcribe

# Deploying AI Transcribe for Amazon Connect

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/amazon-connect

Use AI Transcribe in your Amazon Connect solution

## Overview

This guide covers the **Amazon Connect** solution pattern, which consists of the following components to receive speech audio and call signals, and return call transcripts:

* Media gateways for receiving call audio from Amazon Kinesis Video Streams
* Start/Stop API for Lambda functions to provide call data and signals for when to start and stop transcribing call audio

  ASAPP can also accept requests to start and stop transcription via API from other call-state-aware services. AWS Lambda functions are the approach outlined in this guide.
* Required AWS IAM role to allow access to Kinesis Video Streams
* Webhook to POST real-time transcripts to a designated URL of your choosing, alongside two additional APIs to retrieve transcripts after the call

ASAPP works with you to understand your current telephony infrastructure and ecosystem. Your ASAPP account team will also identify the main use case(s) for the transcript data to determine where and how call transcripts should be sent.

### Integration Steps

There are five parts of the integration process:

1. Set Up Authentication for Kinesis Video Streams
2. Enable Audio Streaming to Kinesis Video Streams
3.
Add Start Media and Stop Media To Flows
4. Send Start and Stop Requests to ASAPP
5. Receive Transcript Outputs

### Requirements

**Audio Stream Codec**

AWS Kinesis Video Streams provides MKV format, which ASAPP supports. You do not need any modification or additional transcoding when forking audio to ASAPP.

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed .WAV media files with speaker-separated channels. Recordings for training should have a sample rate of 8000 samples/sec and 16-bit PCM audio encoding. See the [Customization section of the AI Transcribe Product Guide](/ai-productivity/ai-transcribe/product-guide#customization) for more on data requirements for transcription model training.

**Developer Portal**

ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following:

* Access relevant API documentation (e.g., OpenAPI reference schemas)
* Access API keys for authorization
* Manage user accounts and apps

Visit the [Get Started](/getting-started/developers) page for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs.

## Integrate with Amazon Connect

### 1. Set Up Authentication for Kinesis Video Streams

The audio streams for Amazon Connect are stored in the Amazon Kinesis Video Streams service in your AWS account where the Amazon Connect instance resides. [IAM policies control](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-iam) access to the Kinesis Video Streams service.

ASAPP will use [IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts) to receive a specific IAM role in the ASAPP account, for example `asapp-prod-mg-amazonconnect-role`.
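As an illustration of the cross-account setup described above, the trust and permissions policies might look roughly like the following sketch. The ASAPP role name comes from this guide; the account IDs, resource ARN, and exact action list are placeholders to confirm with your ASAPP account team.

```python
import json

# Hypothetical account IDs and resource ARN; only the ASAPP role name is
# taken from this guide. Confirm the exact KVS actions with your ASAPP team.
ASAPP_ROLE_ARN = "arn:aws:iam::111111111111:role/asapp-prod-mg-amazonconnect-role"

# Trust policy: allow ASAPP's IRSA role to assume your account's role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ASAPP_ROLE_ARN},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: list/read operations on the Kinesis Video Streams
# associated with your Amazon Connect instance
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "kinesisvideo:DescribeStream",
            "kinesisvideo:GetDataEndpoint",
            "kinesisvideo:GetMedia",
            "kinesisvideo:ListStreams",
        ],
        "Resource": "arn:aws:kinesisvideo:us-east-1:222222222222:stream/your-connect-instance-*/*",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Attach the trust policy to the role named in the next step and the permissions policy to the same role, scoped to the streams your Connect instance writes.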
Set up your account's IAM role (e.g., `kinesis-connect-access-role-for-asapp`) to trust `asapp-prod-mg-amazonconnect-role` to assume it, and create a policy permitting list/read operations on the appropriate Kinesis Video Streams associated with your Amazon Connect instance.

### 2. Enable Audio Streaming to Kinesis Video Streams

ASAPP retrieves streaming audio by sending requests to Kinesis Video Streams. Streaming media is not enabled by default, and you must turn it on manually. Enable live media streaming for applicable instances in your Amazon Connect console to ensure audio is available when ASAPP sends requests to Kinesis Video Streams.

If you choose to use a non-default KMS key, ensure that the IAM Role for Service Accounts (IRSA) created for ASAPP has access to this KMS key.

Amazon provides [documentation to guide enabling live media streaming to Kinesis Video Streams](https://docs.aws.amazon.com/connect/latest/adminguide/enable-live-media-streams).

### 3. Add Start Media and Stop Media To Flows

Sending streaming media to Kinesis Video Streams is initiated and stopped by inserting preset blocks - called **Start media streaming** and **Stop media streaming** - into Amazon Connect flows. Place these blocks into your flows to programmatically set when the system will start and stop streaming media - this determines what audio will be available for transcription.

Typically for ASAPP, audio streaming begins as close as possible to when the agent is assigned. Audio streaming typically stops ahead of parts of calls that should not be transcribed, such as holds, transfers, and post-call surveys.

When placing the **Start media streaming** block, ensure the **From the customer** and **To the customer** menu boxes are checked so that both participants' call media streams are available for transcription.
Amazon provides [documentation on adding Start media streaming and Stop media streaming blocks](https://docs.aws.amazon.com/connect/latest/adminguide/use-media-streams-blocks) to Amazon Connect flows.

### 4. Send Start and Stop Requests to ASAPP

AWS Lambda functions can be inserted into Amazon Connect flows to send requests directly to ASAPP APIs to start and stop transcription.

ASAPP can also accept requests to start and stop transcription via API from other call-state-aware services. If you are using another service to interact with ASAPP APIs, you can use AWS Lambda functions to send important call metadata to your other services before they send requests to ASAPP. The approach outlined in this guide is to call ASAPP APIs directly using AWS Lambda functions.

As outlined in [Requirements](#requirements "Requirements"), user accounts must be created in the developer portal in order to enroll apps and receive API keys to interact with ASAPP endpoints. Lambda functions (or any other service you use to interact with ASAPP APIs) require these API keys to send requests to start and stop transcription.

See the [Endpoints](#endpoints "Endpoints") section to learn how to interact with them, including what you need to include in requests to each endpoint.

ASAPP will not begin transcribing call audio until you request it, at which point we will request the audio from Kinesis Video Streams and begin transcribing. With AWS Kinesis Video Streams, there are two supported `startSelectorType` values for `/start-streaming`:

* **NOW**: Starts transcribing from the most recent audio data in the Kinesis stream.
* **FRAGMENT\_NUMBER**: Requires the additional `afterFragmentNumber` parameter, which identifies the fragment within the media stream at which transcription should start (for example, pass the stream's starting fragment number to capture all transcripts in the stream from before `/start-streaming` was called).
The `/start-streaming` endpoint request requires several fields, but three specific attributes must come from Amazon:

* Amazon Connect Contact Id (multiple possible sources)
  JSONPath formats: `$.ContactId`, `$.InitialContactId`, `$.PreviousContactId`
* Audio Stream ARN
  JSONPath format: `$.MediaStreams.Customer.Audio.StreamARN`
* \[OPTIONAL] Start Fragment Number
  JSONPath format: `$.MediaStreams.Customer.Audio.StartFragmentNumber`

Requests to `/start-streaming` also require agent and customer identifiers. These identifiers can be sourced from Amazon Connect but may also originate from other systems if your use case requires it.

Stop requests pause or end transcription whenever needed. For example, you could use a stop request when the agent initiates a transfer to another agent or queue, or at the end of the call to prevent transcribing post-call interactions such as satisfaction surveys.

AI Transcribe is only meant to transcribe conversations between customers and agents - you should implement start and stop requests to ensure the system does not transcribe non-conversation audio (e.g., hold music, IVR menus, surveys). Attempted transcription of non-conversation audio will negatively impact other services meant to consume conversation transcripts, such as ASAPP AI Summary.

#### Adding Lambda Functions to Flows

First, create and deploy two new Lambda functions in the AWS Lambda console: one for sending a request to ASAPP's `/start-streaming` endpoint and another for sending a request to ASAPP's `/stop-streaming` endpoint. Refer to the [API Reference in ASAPP's Developer Portal](/apis/autotranscribe-media-gateway/start-streaming) for detailed specifications for sending requests to each endpoint.

Once the Lambda functions are deployed and configured, add them to your Amazon Connect instance using the Amazon Connect console. Once added, the Lambda functions will be available for use in your existing applicable flows.
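The payload assembly inside such a Lambda can be sketched as follows. This is illustrative only: the event layout follows the typical Amazon Connect Lambda invocation shape, the `customerId`/`agentId` flow parameters are hypothetical, and the actual HTTPS POST with your API keys is omitted.

```python
import json

def build_start_streaming_body(event: dict) -> dict:
    """Assemble a /start-streaming request body from a Connect flow event.

    The JSONPaths listed above ($.ContactId,
    $.MediaStreams.Customer.Audio.StreamARN) are resolved against the
    ContactData object of a typical Amazon Connect Lambda event; the
    "customerId"/"agentId" parameter names are hypothetical placeholders.
    """
    contact = event["Details"]["ContactData"]
    params = event["Details"].get("Parameters", {})
    return {
        "namespace": "amazonconnect",
        "guid": contact["ContactId"],
        "customerId": params.get("customerId", "UNKNOWN"),
        "agentId": params.get("agentId", "UNKNOWN"),
        "autotranscribeParams": {"language": "en-US"},
        "amazonConnectParams": {
            "streamArn": contact["MediaStreams"]["Customer"]["Audio"]["StreamARN"],
            "startSelectorType": "NOW",
        },
    }

def lambda_handler(event, context):
    # The actual HTTPS POST to ASAPP's /start-streaming endpoint (with your
    # asapp-api-id / asapp-api-secret headers) is omitted from this sketch.
    return {"statusCode": 200, "body": json.dumps(build_start_streaming_body(event))}
```

A mirror-image function for `/stop-streaming` would send only the `namespace` and `guid`.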
In Amazon Connect's flow tool, add an **Invoke AWS Lambda function** block where you want to make a request to ASAPP's APIs.

* For requests to the `/start-streaming` endpoint, place the Lambda block following the **Start media streaming** flow block.
* For requests to the `/stop-streaming` endpoint, place the Lambda block immediately before the **Stop media streaming** flow block.

Amazon provides [documentation on invoking AWS Lambda functions](https://docs.aws.amazon.com/connect/latest/adminguide/connect-lambda-functions).

### 5. Receive Transcript Outputs

AI Transcribe outputs transcripts using three separate mechanisms, each corresponding to a different temporal use case:

* **[Real-time](#real-time-via-webhook "Real-Time via Webhook")**: Webhook posts complete utterances to your target endpoint as they are transcribed during the live conversation
* **[After-call](#after-call-via-get-request "After-Call via GET Request")**: GET endpoint responds to your requests for a designated call with the full set of utterances from that completed conversation
* **[Batch](#batch-via-file-exporter "Batch via File Exporter")**: File Exporter service responds to your request for a designated time interval with a link to a data feed file that includes all utterances from that interval's conversations

#### Real-Time via Webhook

ASAPP sends transcript outputs in real time via HTTPS POST requests to a target URL of your choosing.

**Authentication**

Once the target is selected, work with your ASAPP account team to implement one of the following supported authentication mechanisms:

* **Custom CAs:** Custom CA certificates for regular TLS (1.2 or above).
* **mTLS:** Mutual TLS using custom certificates provided by the customer.
* **Secrets:** A secret token. The secret name is configurable, as is whether it appears in the HTTP header or as a URL parameter.
* **OAuth2 (client\_credentials):** Client credentials to fetch tokens from an authentication server.
**Expected Load**

Target servers should be able to support receiving transcript POST messages for each utterance of every live conversation on which AI Transcribe is active. For reference, an average live call sends approximately 10 messages per minute. At that rate, 50 concurrent live calls represent approximately 8 messages per second. Please ensure the selected target server is load tested to support anticipated peaks in concurrent call volume.

**Transcript Timing and Format**

Once you have started transcription for a given call stream using the `/start-streaming` endpoint, AI Transcribe begins to publish `transcript` messages, each of which contains a full utterance for a single call participant.

The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600ms. Perceived latency will also be influenced by any network delay sending audio to ASAPP and receiving transcription messages in return.

Though we send messages in the order they are transcribed, network latency may impact the order in which they arrive or cause messages to be dropped due to timeouts. Where latency causes timeouts, the system drops the oldest pending messages first; AI Transcribe does not retry delivery of dropped messages.
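The expected-load guidance above is simple arithmetic; a quick sketch for capacity planning, using the 10 messages/minute average cited in this guide:

```python
MESSAGES_PER_CALL_PER_MINUTE = 10  # average rate cited above

def webhook_messages_per_second(concurrent_calls: int) -> float:
    """Approximate POST rate the webhook target must absorb.

    e.g. 50 concurrent calls -> roughly 8.3 messages per second.
    """
    return concurrent_calls * MESSAGES_PER_CALL_PER_MINUTE / 60.0
```

Size your load tests to your anticipated peak concurrency, not the average.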
The message body for `transcript` type messages is JSON encoded with these fields:

| Field | Subfield | Description | Example Value |
| :--- | :--- | :--- | :--- |
| externalConversationId | | Unique identifier with the Amazon Connect Contact Id for the call | 8c259fea-8764-4a92-adc4-73572e9cf016 |
| streamId | | Unique identifier that ASAPP assigns to each call participant's stream, returned in response to `/start-streaming` and `/stop-streaming` | 5ce2b755-3f38-11ed-b755-7aed4b5c38d5 |
| sender | externalId | Customer or agent identifier as provided in the request to `/start-streaming` | ef53245 |
| sender | role | A participant role, either customer or agent | customer, agent |
| autotranscribeResponse | message | Type of message | transcript |
| autotranscribeResponse | start | Start time of the utterance in milliseconds, relative to the start of the audio | 0 |
| autotranscribeResponse | end | End time of the utterance in milliseconds, relative to the start of the audio | 1000 |
| autotranscribeResponse | utterance | Transcribed utterance text | Are you there? |

Expected `transcript` message format:

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "",
  "streamId": "",
  "sender": {
    "externalId": "",
    "role": "customer" // or "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 0,
    "end": 1000,
    "utterance": [
      {"text": ""}
    ]
  }
}
```

## Error Handling

Should your target server return an error in response to a POST request, ASAPP will record the error details for the failed message delivery and drop the message.
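On the receiving side, a minimal sketch of validating an incoming `transcript` POST body against the fields above. The specific checks are illustrative, not an official schema:

```python
# Required top-level fields per the transcript message table above
REQUIRED_FIELDS = ("type", "externalConversationId", "streamId", "sender", "autotranscribeResponse")

def validate_transcript_message(msg: dict) -> list:
    """Return a list of problems found in an incoming transcript POST body."""
    problems = ["missing field: " + f for f in REQUIRED_FIELDS if f not in msg]
    if msg.get("type") != "transcript":
        problems.append("type must be 'transcript'")
    if msg.get("sender", {}).get("role") not in ("customer", "agent"):
        problems.append("sender.role must be 'customer' or 'agent'")
    response = msg.get("autotranscribeResponse", {})
    if not isinstance(response.get("utterance"), list):
        problems.append("autotranscribeResponse.utterance must be a list")
    return problems
```

Returning a 2xx only after validation succeeds keeps malformed deliveries visible in ASAPP's error records rather than silently accepted.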
#### After-Call via GET Request

AI Transcribe makes a full transcript available at the following endpoint for a given completed call:

`GET /conversation/v1/conversation/messages`

Once a conversation is complete, make a request to the endpoint using a conversation identifier and the system returns every message in the conversation.

**Message Limit**

This endpoint responds with up to 1,000 transcribed messages per conversation (roughly a two-hour continuous call). You receive all messages in a single response without any pagination. To retrieve all messages for calls that exceed this limit, use either the real-time mechanism or the File Exporter for transcript retrieval.

You set transcription settings (e.g., language, detailed tokens, redaction) for a given call with the Start/Stop API when you initiate call transcription. All transcripts retrieved after the call reflect the settings initially requested with the Start/Stop API. See the [Endpoints](#endpoints "Endpoints") section to learn how to interact with this API.

#### Batch via File Exporter

AI Transcribe makes full transcripts for batches of calls available using the File Exporter service's `utterances` data feed. The File Exporter service is meant to be used as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g., nightly, weekly) or for ad hoc analyses. Data that populates feeds for the File Exporter service updates once daily at 2:00 AM UTC.

Visit [Retrieving Data from ASAPP Messaging](/reporting/file-exporter) for a guide on how to interact with the File Exporter service.

## Use Case Example

Real-Time Transcription

This real-time transcription use case example consists of an English-language call between an agent and customer with redaction enabled, ending with a hold. Note that redaction is enabled by default and does not need to be requested explicitly.

1.
When the customer and agent are connected, send ASAPP a request to start transcription for the call:

**POST** `/mg-autotranscribe/v1/start-streaming`

**Request**

```json theme={null}
{
  "namespace": "amazonconnect",
  "guid": "8c259fea-8764-4a92-adc4-73572e9cf016",
  "customerId": "TT9833237",
  "agentId": "RE223444211993",
  "autotranscribeParams": {
    "language": "en-US"
  },
  "amazonConnectParams": {
    "streamArn": "arn:aws:kinesisvideo:us-east-1:145051540001:stream/streamtest-connect-asappconnect-contact-cccaa6b8-12e4-44a6-90d5-829c4fdf68e4/1696422764859",
    "startSelectorType": "NOW"
  }
}
```

**Response**

*STATUS 200: Router processed the request, details are in the response body*

```json theme={null}
{
  "isOk": true,
  "autotranscribeResponse": {
    "customer": {
      "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
      "status": {
        "code": 1000,
        "description": "OK"
      }
    },
    "agent": {
      "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
      "status": {
        "code": 1000,
        "description": "OK"
      }
    }
  }
}
```

2. The agent and customer begin their conversation, and ASAPP's webhook publisher sends separate HTTPS POST `transcript` messages for each participant to a target endpoint configured to receive the messages.
HTTPS **POST** for Customer Utterance

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",
  "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
  "sender": {
    "externalId": "TT9833237",
    "role": "customer"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 400,
    "end": 3968,
    "utterance": [
      {"text": "I need help upgrading my streaming package and my PIN number is ####"}
    ]
  }
}
```

HTTPS **POST** for Agent Utterance

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",
  "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
  "sender": {
    "externalId": "RE223444211993",
    "role": "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 4744,
    "end": 8031,
    "utterance": [
      {"text": "Thank you sir, let me pull up your account."}
    ]
  }
}
```

3. Later in the conversation, the agent puts the customer on hold. This triggers a request to the `/stop-streaming` endpoint to pause transcription and prevent hold music and promotional messages from being transcribed.
**POST** `/mg-autotranscribe/v1/stop-streaming`

**Request**

```json theme={null}
{
  "namespace": "amazonconnect",
  "guid": "8c259fea-8764-4a92-adc4-73572e9cf016"
}
```

**Response**

*STATUS 200: Router processed the request, details are in the response body*

```json theme={null}
{
  "isOk": true,
  "autotranscribeResponse": {
    "customer": {
      "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    },
    "agent": {
      "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    }
  }
}
```

### Data Security

ASAPP's security protocols protect data at each point of transmission, from first user authentication to secure communications to our auditing and logging system (which includes hashing of data in transit), all the way to securing the environment when data is at rest in the data logging system. The teams at ASAPP are also under tight restrictions on access to data. These security protocols protect both ASAPP and its customers.

# AI Transcribe via Direct Websocket

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/direct-websocket

Use a websocket URL to send audio media to AI Transcribe

Your organization can use AI Transcribe to transcribe voice interactions between contact center agents and their customers, in support of a broad range of use cases including analysis, coaching, and quality management. ASAPP AI Transcribe is a streaming speech-to-text transcription service that works both with live streams and with audio recordings of completed calls.

Integrating your voice system with AI Transcribe over a WebSocket enables real-time communication, allowing for seamless interaction between your voice platform and ASAPP's transcription services.
A speech recognition model powers the AI Transcribe service, transforming spoken language into written form in real time, along with punctuation and capitalization. To optimize performance, you can customize the model to support domain-specific needs by training on historical call audio and adding custom vocabulary to further boost recognition accuracy.

Some benefits of using a WebSocket to stream events include:

* WebSocket Connection: Establish a persistent connection between your voice system and the ASAPP server.
* API Streaming: All audio streaming, call signaling, and returned transcripts use a WebSocket API, preceded by an authentication mechanism using a REST API.
* Real-time Data Exchange: Messages are exchanged in real time, ensuring quick responses and efficient handling of user queries.
* Bi-directional Communication: WebSockets facilitate bi-directional communication, making the interaction smooth and responsive.

### Implementation Steps

1. Step 1: Authenticate with ASAPP
2. Step 2: Open a Connection
3. Step 3: Start an Audio Stream
4. Step 4: Send the Audio Stream
5. Step 5: Receive the Free-Text Transcriptions from AI Transcribe
6. Step 6: Stop the Audio Stream (finalize the audio stream when the conversation is over or escalates to a human agent)

### How it works

1. The API Gateway authenticates customer requests and returns a WebSocket URL, which points to the Voice Gateway with a secure protocol.
2. The Voice Gateway validates the client connection request, translates public WebSocket API calls to internal protocols, and sends live audio streams to the Speech Recognition Server.
3. The Redaction Server redacts the transcribed texts with given customizable redaction rules if you request redaction.
4.
AI Transcribe receives the text, analyzes it, and sends back the transcription.

This guide covers the **WebSocket API** solution pattern, which consists of an API Gateway, a Voice Gateway, a Speech Recognition Server, and a Redaction Server.

### Integration Steps

Here's a high-level overview of how to work with AI Transcribe:

1. Authenticate with ASAPP to gain access to the AI Transcribe API.
2. Establish a WebSocket connection with the ASAPP Voice Gateway.
3. Send a `startStream` message with the appropriate feature parameters specified.
4. Once the ASAPP Voice Gateway accepts the request, stream audio as binary data.
5. The ASAPP voice server will return transcripts in multiple messages.
6. Once the audio streaming is completed, send a `finishStream` message to indicate to the Voice server that there is no more audio to send for this stream request.
7. Upon completion of all audio processing, the server sends a `finalResponse`, which contains a summary of the stream request.

### Requirements

**Audio Stream Format**

To be transcribed properly, audio sent to ASAPP AI Transcribe must be mono (single-channel) for each speaker. You send audio in binary format through the WebSocket; you should provide the audio encoding (sample rate and encoding format) in the `startStream` message.

For real-time live streaming, ASAPP recommends that you stream audio chunk-by-chunk, sending every 20ms or 100ms of audio as one binary message and sending the next chunk after a 20ms or 100ms interval. If the chunk is too small, it requires more audio binary messages and more downstream message handling; if the chunk is too big, it increases buffering pressure and slows down server responsiveness. Exceptionally large chunks may result in WebSocket transport errors such as timeouts.

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed `.WAV` media files with speaker-separated channels.
Recordings for training and real-time streams should share the same sample rate (8000 samples/sec) and audio encoding (16-bit PCM). See the [Customization section of the AI Transcribe Product Guide](/ai-productivity/ai-transcribe/product-guide#customization) for more on data requirements for transcription model training.

**Developer Portal**

ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following:

* Access relevant API documentation (e.g., OpenAPI reference schemas)
* Access API keys for authorization
* Manage user accounts and apps

Visit the [Get Started](/getting-started/developers) page for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs.

## Step 1: Authenticate with ASAPP and Obtain an Access URL

All requests to ASAPP sandbox and production APIs must use the `HTTPS` protocol. Traffic using `HTTP` will not be redirected to `HTTPS`.

The following HTTPS REST API enables authentication with the ASAPP API Gateway:

* `asapp-api-id` and `asapp-api-secret` are required header parameters, both of which ASAPP will provide to you.
* We recommend that you send a unique conversation ID in the request body as `externalId`. ASAPP refers to this identifier from the client's system in real-time streaming use cases to redact utterances using context from other utterances in the same conversation (e.g., reference to a credit card in an utterance from 20s earlier). It is the client's responsibility to ensure `externalId` is unique.

[`POST /autotranscribe/v1/streaming-url`](/apis/autotranscribe/get-streaming-url)

Headers (required)

```json theme={null}
{
  "asapp-api-id": "<your-api-id>",
  "asapp-api-secret": "<your-api-secret>"
}
```

Request body (optional)

```json theme={null}
{
  "externalId": ""
}
```

If the authentication succeeds, the HTTP response body will return a secure WebSocket short-lived access URL. The default TTL (time-to-live) for this URL is 5 minutes.
```json theme={null}
{
  "streamingUrl": ""
}
```

## Step 2: Open a Connection

Before sending any message, create a WebSocket connection with the access URL obtained from the previous step:

`wss://<streaming-url>?token=<short_lived_access_token>`

The system will establish the WebSocket connection if it validates the `short_lived_access_token`. Otherwise, the system will reject the requested connection.

## Step 3: Start an Audio Stream

AI Transcribe uses the following message sequence for streaming audio, sending transcripts, and ending streaming:

| | **Send Your Request** | **Receive ASAPP Response** |
| :- | :--------------------- | :------------------------- |
| 1 | `startStream` message | `startResponse` message |
| 2 | Stream audio | `transcript` message |
| 3 | `finishStream` message | `finalResponse` message |

WebSocket protocol request messages in the sequence must be formatted as text (UTF-8 encoded string data); only the audio stream should be formatted in binary. All response messages will also be formatted as text.

### Send startStream message

Once you establish the connection, send a `startStream` message with information about the speaker, including their `role` (customer, agent) and their unique identifier (`externalId`) from your system, before sending any audio packets.

```json theme={null}
{
  "message": "startStream",
  "sender": {
    "role": "customer",
    "externalId": "JD232442"
  }
}
```

Provide additional [optional fields](#fields-and-parameters) in the `startStream` message to adjust default transcription settings. For example, the default `language` transcription setting is `en-US` if not denoted in the `startStream` message. To set the language to Spanish, set the `language` field to `es-US`. Once set, AI Transcribe will expect a Spanish conversation in the audio stream and return transcribed message text in Spanish.
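Before streaming audio in Step 4, it helps to work out the chunk sizing implied by the defaults above: at 8000 samples/sec and 16-bit (2-byte) PCM, a 100ms chunk is 1,600 bytes. A minimal sketch of slicing a PCM buffer into frames for `ws.send()`; the function name is illustrative:

```python
SAMPLE_RATE = 8000      # samples/sec (default per the fields table)
BYTES_PER_SAMPLE = 2    # L16: 16-bit PCM

def pcm_chunks(audio: bytes, chunk_ms: int = 100):
    """Yield chunk_ms-sized slices of a mono L16 PCM buffer.

    At the defaults, 100 ms -> 1600 bytes per binary WebSocket message.
    """
    chunk_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * chunk_ms // 1000
    for i in range(0, len(audio), chunk_bytes):
        yield audio[i:i + chunk_bytes]
```

For live audio, send each chunk as one binary message and pace sends at the chunk interval (20ms or 100ms), as recommended in the Requirements section.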
### Receive startResponse message

For any `startStream` message, the server will respond with a `startResponse` if the request is granted:

```json theme={null}
{
  "message": "startResponse",
  "streamID": "128342213",
  "status": {
    "code": "1000",
    "description": "OK"
  }
}
```

The `streamID` is a unique identifier that the ASAPP server assigns to the connection. The status code and description may contain additional useful information. If there is an application-level error with the request, the ASAPP server sends a `finalResponse` message with an error description, and the server then closes the connection.

## Step 4: Send the Audio Stream

You can start to stream audio as soon as you send the `startStream` message, without waiting for the `startResponse`. However, the system could reject a request due to either an invalid `startStream` or internal server errors. In that case, the server notifies you with a `finalResponse` message and drops any streamed audio packets.

Audio must be sent as binary data over the WebSocket protocol: `ws.send()`

The server does not acknowledge receiving individual audio packets. You can use the summary in the `finalResponse` message to verify whether the server failed to receive any audio packets. If audio can be transcribed, the server sends back `transcript` messages asynchronously.

For real-time live streaming, we recommend that you send audio chunk-by-chunk, sending every 20ms or 100ms of audio as one binary message. Exceptionally large chunks may result in WebSocket transport errors such as timeouts.

### Receive transcript messages

The server sends back the `transcript` message, which contains one complete utterance.
Example of a `transcript` message:

```json theme={null}
{
  "message": "transcript",
  "start": 0,
  "end": 1000,
  "utterance": [
    {"text": "Hi, my ID is 123."}
  ]
}
```

## Step 5: Receive Transcriptions from AI Transcribe

Now you can call `GET /messages` to receive all the transcript messages for a completed call. Conversation transcripts are available for seven days after they are completed.

```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/conversation/v1/conversation/messages' \
  --header 'asapp-api-id: <your-api-id>' \
  --header 'asapp-api-secret: <your-api-secret>' \
  --header 'Content-Type: application/json' \
  --data '{
    "externalId": "Your GUID/UCID of the SIPREC call"
  }'
```

A successful response returns a 200 and the call transcripts:

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "",
  "streamId": "",
  "sender": {
    "externalId": "",
    "role": "customer" // or "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 0,
    "end": 1000,
    "utterance": [
      {"text": ""}
    ]
  }
}
```

## Step 6: Stop the Audio Stream

### Send finishStream message

When you complete the audio stream, send a `finishStream` message. The service will drop any audio message sent after `finishStream`.

```json theme={null}
{
  "message": "finishStream"
}
```

Any other non-audio message sent after `finishStream` will be dropped, the service will send a `finalResponse` with error code 4056 (Wrong message order), and the connection will close.

### Receive finalResponse message

The server sends a `finalResponse` at the end of the streaming session and closes the connection, after which it will stop processing incoming messages for the stream. It is safe to close the WebSocket connection when you receive the `finalResponse` message.
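Putting Steps 3-6 together, a client-side dispatcher for the text frames described above can be sketched as follows. The `state` dict and function name are illustrative, not part of the API:

```python
import json

def handle_server_message(raw: str, state: dict) -> str:
    """Dispatch one text frame from the AI Transcribe WebSocket.

    Accumulates transcripts in `state`; returns the message type handled.
    """
    msg = json.loads(raw)
    kind = msg.get("message")
    if kind == "startResponse":
        # the docs show both "streamID" and "streamId" spellings
        state["streamId"] = msg.get("streamID") or msg.get("streamId")
    elif kind == "transcript":
        state.setdefault("utterances", []).append(
            " ".join(part["text"] for part in msg["utterance"]))
    elif kind == "finalResponse":
        state["summary"] = msg.get("summary", {})
        state["done"] = True  # safe to close the WebSocket now
    return kind
```

Binary frames (audio) only flow client-to-server, so a handler like this only ever sees the text message types listed in the sequence table.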
The server will end a given stream session if any of the following are true:

* Server receives `finishStream` and has processed all audio received
* Server detects connection idle timeout (at 60 seconds)
* Server internal errors occur (unable to recover)
* Request message is invalid (note: if the access token is invalid, the WebSocket will close with a WebSocket error code)
* A critical requested feature is not supported, for example, redaction
* Service maintenance
* Streaming duration over limit (default is 3 hours)

In case of non-application WebSocket errors, the WebSocket layer closes the connection, and the server may not get an opportunity to send a `finalResponse` message.

The `finalResponse` message has a summary of the stream along with the status code, which you can use to verify whether there are any missing audio packets or transcript messages:

```json theme={null}
{
  "message": "finalResponse",
  "streamId": "128342213",
  "status": {
    "code": "1000",
    "description": "OK"
  },
  "summary": {
    "totalAudioBytes": 300,    // number of audio bytes received
    "audioDurationMs": 6000,   // audio length in milliseconds processed by the server
    "streamingSeconds": 6,
    "transcripts": 10          // number of transcripts recognized
  }
}
```

## Fields & Parameters

### StartStream Request Fields

| Field | Description | Default | Supported Values |
| :--- | :--- | :--- | :--- |
| sender.role (required) | A participant role, usually the customer or an agent for human participants.
| n/a | "agent", "customer" | | sender.externalId (required) | Participant ID from the external system, it should be the same for all interactions of the same individual | n/a | "BL2341334" | | language | IETF language tag | en-US | "en-US", "es-US" | | samplingRate | Audio samples/sec | 8000 | 8000 | | encoding | 'L16': PCM data with 16 bit/sample | L16 | "L16" | | smartFormatting | Request for post processing: Inverse Text Normalization (convert spoken form to written form), e.g., 'twenty two --> 22'. Auto punctuation and capitalization | true | true, false | | detailedToken | If true, outputs word-level details like word content, timestamp and word type. | false | true, false | | audioRecordingAllowed | false: ASAPP will not record the audio; true: ASAPP may record and store the audio for this conversation | false | true, false | | redactionOutput | If detailedToken is true along with value 'redacted' or 'redacted\_and\_unredacted', the system will reject the request. If the client has not configured redaction rules for 'redacted' or 'redacted\_and\_unredacted', the system will reject the request. If smartFormatting is False, the system will reject requests with value 'redacted' or 'redacted\_and\_unredacted'. | redacted | "redacted", "unredacted","redacted\_and\_unredacted" | ### Transcript Message Response Fields | Field | Description | Format | Example Syntax | | :------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ | :------------------ | | start | Start time (millisecond) of the utterance (in milliseconds) relative to the start of the audio input | integer | 0 | | end | End time (millisecond) of the utterance (in milliseconds) relative to the start of the audio input | integer | 300 | | utterance.text | The written text of the utterance. 
While an utterance can have multiple alternatives (e.g., 'me two' vs. 'me too') ASAPP provides only the most probable alternative, based on model prediction confidence. | array | "Hi, my ID is 123." | If the `detailedToken` in `startStream` request is set to true, additional fields are provided within the `utterance` array for each `token`: | Field | Description | Format | Example Syntax | | :---------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------ | :------------- | | token.content | Text or punctuation | string | "is", "?" | | token.start | Start time (millisecond) of the token relative to the start of the audio input | integer | 170 | | token.end | End time (millisecond) audio boundary of the token relative to the start of the audio input, there may be silence after that, so it does not necessarily match with the startMs of the next token. | integer | 200 | | token.punctuationAfter | Optional, punctuation attached after the content | string | '.' | | token.punctuationBefore | Optional, punctuation attached in front of the content | string | '"' | ### Custom Vocabulary The ASAPP speech server can boost specific word accuracy if you provide a target list of vocabulary words before recognition starts, using an `updateVocabulary` message. You can send the `updateVocabulary` service multiple times during a session. Vocabulary is additive, which means the system appends the new vocabulary words to the previous ones. If you send vocabulary in between sent audio packets, it will take effect only after the end of the current utterance being processed. All `updateVocabulary` changes are valid only for the current WebSocket session. 
The following fields are part of an `updateVocabulary` message:

| Field | Description | Mandatory | Example Syntax |
| :--- | :--- | :--- | :--- |
| phrase | Phrase to be boosted. Avoid long phrases; add them as separate entries instead. | Yes | "IEEE" |
| soundsLike | The ways in which a phrase can be said or pronounced. Rules: spell out numbers (25 -> 'two five' and/or 'twenty five'); spell out acronyms (WHO -> 'w h o'); use lowercase letters for everything; limit phrases to English and Spanish-language letters (accented consonants and vowels accepted) | No | "i triple e" |
| category | Supported categories: 'address', 'name', 'number', 'company', 'currency'. Categories help the AI Transcribe service normalize the provided phrase so it can guess certain ways in which the phrase can be pronounced, e.g., '717 N Blvd' with the 'address' category helps the service normalize the phrase to 'seven one seven North Boulevard' | No | "address", "name", "number", "company", "currency" |

Example request and response:

**Request**

```json theme={null}
{
  "message": "updateVocabulary",
  "phrases": [
    {
      "phrase": "IEEE",
      "category": "company",
      "soundsLike": ["I triple E"]
    },
    {
      "phrase": "25.00",
      "category": "currency",
      "soundsLike": ["twenty five dollars"]
    },
    {
      "phrase": "HHilton",
      "category": "company",
      "soundsLike": ["H Hilton", "Hilton Honors"]
    },
    {
      "phrase": "Jon Snow",
      "category": "name",
      "soundsLike": ["John Snow"]
    },
    {
      "phrase": "717 N Shoreline Blvd",
      "category": "address"
    }
  ]
}
```

**Response**

```json theme={null}
{
  "message": "vocabularyResponse",
  "status": {
    "code": "1000",
    "description": "OK"
  }
}
```

### Application Status Codes

| Status code | Description |
| :--- | :--- |
| 1000 | OK |
| 1008 | Invalid or expired access token |
| 2002 | Error in fetching conversationId. This error code is only possible when integration with other AI Services is enabled |
| 4040 | Message format incorrect |
| 4050 | Language not supported |
| 4051 | Encoding not supported |
| 4053 | Sample rate not supported |
| 4056 | Wrong message order or missing required message |
| 4080 | Unable to transcribe the audio |
| 4082 | Audio decode failure |
| 4083 | Connection idle timeout. Try streaming audio in real time |
| 4084 | Custom vocabulary phrase exceeds limit |
| 4090 | Streaming duration over limit |
| 4091 | Invalid vocabulary format |
| 4092 | Redaction is only applied to smart-formatted text |
| 4093 | Redaction only supported if detailedToken is true |
| 4094 | redactionOutput cannot be 'unredacted' or 'redacted\_and\_unredacted' because the global configuration is set to always redact |
| 5000 | Internal service error |
| 5001 | Service shutting down |
| 5002 | No instances available |

## Retrieving Transcript Data

In addition to real-time transcription messages via WebSocket, AI Transcribe can also output transcripts through two other mechanisms:

* **After-call**: GET endpoint responds to your requests for a designated call with the full set of utterances from that completed conversation
* **Batch**: File Exporter service responds to your request for a designated time interval with a link to a data feed file that includes all utterances from that interval's conversations

### After-Call via GET Request

[`GET /conversation/v1/conversation/messages`](/apis/messages/list-messages-with-an-externalid)

Use this endpoint to retrieve all the transcript messages for a completed call.

**When to Call**

Once the conversation is complete. Conversation transcripts are available for seven days after they are completed.

For conversations that include transfers, the endpoint provides transcript messages for all call legs that correspond to the call's identifier.

**Request Details**

Requests must include a call identifier with the GUID/UCID of the SIPREC call.

**Response Details**

When successful, this endpoint responds with an array of objects, each of which corresponds to a single message. Each object contains the text of the message, the sender's role and identifier, a unique message identifier, and timestamps.
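As a sketch of consuming this response, the helper below groups message texts by sender role. The top-level `text` and nested `sender.role` field names follow the response description above, but the helper and sample data are hypothetical:

```python
from collections import defaultdict

def messages_by_role(messages: list[dict]) -> dict[str, list[str]]:
    """Group transcript message texts by sender role ("customer" or "agent")."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for message in messages:
        grouped[message["sender"]["role"]].append(message["text"])
    return dict(grouped)

# Hypothetical sample mirroring the documented response shape.
sample = [
    {"text": "Hi, my ID is 123.",
     "sender": {"role": "customer", "externalId": "TT9833237"}},
    {"text": "Thanks, let me pull up your account.",
     "sender": {"role": "agent", "externalId": "RE223444211993"}},
]
grouped = messages_by_role(sample)
```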
You set transcription settings (e.g., language, detailed tokens, redaction) for a given call with [the `startStream` WebSocket message](#startstream-request-fields) when you initiate call transcription. All transcripts retrieved after the call reflect the settings initially requested in the `startStream` message.

**Message Limit**

This endpoint responds with up to 1,000 transcribed messages per conversation, approximately a two-hour continuous call. You receive all messages in a single response without any pagination.

To retrieve all messages for calls that exceed this limit, use either a real-time mechanism or File Exporter for transcript retrieval.

### Batch via File Exporter

AI Transcribe makes full transcripts for batches of calls available using the File Exporter service's `utterances` data feed. You can use the File Exporter service as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g., nightly, weekly) or for ad hoc analyses.

Data that populates feeds for the File Exporter service updates once daily at 2:00 AM UTC.

Visit [Retrieving Data for AI Services](/reporting/file-exporter) for a guide on how to interact with the File Exporter service.
# Deploying AI Transcribe for Genesys AudioHook

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/genesys-audiohook

Use AI Transcribe in your Genesys AudioHook application

This guide covers the **Genesys AudioHook Media Gateway** solution pattern, which consists of the following components to receive speech audio and call signals, and return call transcripts:

* Media gateways for receiving call audio from Genesys Cloud
* HTTPS API which enables the customer to POST requests to start and stop call transcription
* Webhook to POST real-time transcripts to a designated URL of your choosing, alongside two additional APIs to retrieve transcripts after-call for one or a batch of conversations

ASAPP works with you to understand your current telephony infrastructure and ecosystem. Your ASAPP account team will also determine the main use case(s) for the transcript data to determine where and how call transcripts should be sent. ASAPP then completes the architecture definition, including integration points into the existing infrastructure.

### Integration Steps

There are three steps to integrate AI Transcribe into Genesys AudioHook:

1. Enable AudioHook and Configure for ASAPP
2. Send Start and Stop Requests
3. Receive Transcript Outputs

### Requirements

**Audio Stream Codec**

Genesys AudioHook provides audio in the mu-law format with an 8000 Hz sample rate, which ASAPP supports. You do not need any modification or additional transcoding when forking audio to ASAPP.

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed .WAV media files with speaker-separated channels. Recordings for training should have a sample rate of 8000 Hz and 16-bit PCM audio encoding.

Read the [Customization section of the AI Transcribe Product Guide](/ai-productivity/ai-transcribe/product-guide#customization) for more on data requirements for transcription model training.
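The training-audio requirements above can be sanity-checked before upload. Below is a minimal sketch using Python's built-in `wave` module; it assumes speaker separation means a two-channel file (adjust `expected_channels` if your recordings are one mono file per speaker):

```python
import io
import wave

def meets_training_requirements(wav_bytes: bytes, expected_channels: int = 2) -> bool:
    """Check a WAV recording against the stated training requirements:
    8000 samples/sec and 16-bit PCM, with the expected channel layout."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wav:
        return (
            wav.getframerate() == 8000      # 8000 samples/sec
            and wav.getsampwidth() == 2     # 16-bit PCM (2 bytes per sample)
            and wav.getnchannels() == expected_channels
        )

# Build a tiny conforming file in memory to demonstrate the check.
buffer = io.BytesIO()
with wave.open(buffer, "wb") as wav:
    wav.setnchannels(2)
    wav.setsampwidth(2)
    wav.setframerate(8000)
    wav.writeframes(b"\x00\x00" * 2 * 8000)  # one second of stereo silence
ok = meets_training_requirements(buffer.getvalue())
```

In practice you would read `wav_bytes` from each historical call recording before sending it via SFTP.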
**Developer Portal**

ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following:

* Access relevant API documentation (e.g., OpenAPI reference schemas)
* Access API keys for authorization
* Manage user accounts and apps

Visit the [Get Started](/getting-started/developers) page for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs.

## Integrate with Genesys AudioHook

### 1. Enable AudioHook and Configure for ASAPP

To enable AudioHook within Genesys:

1. Access Genesys Cloud Admin, navigate to Integrations > Integrations, and click the plus icon in the upper right to add an integration.
2. Find [AudioHook](https://help.mypurecloud.com/articles/install-audiohook-monitor-from-genesys-appfoundry/) Monitor and install it.
3. [Configure the AudioHook Monitor](https://help.mypurecloud.com/articles/configure-and-activate-audiohook-monitor-in-genesys-cloud/) integration, using the Connection URI (i.e., wss\://ws-example.asapp.com/mg-genesysaudiohook-autotranscribe/) and credentials provided by ASAPP.
4. [Enable voice transcription](https://help.mypurecloud.com/articles/configure-voice-transcription/) on desired trunks and within desired Architect Flows. You do not need to select ASAPP as the transcription engine.

### 2. Send Start and Stop Requests

The `/start-streaming` and `/stop-streaming` endpoints of the Start/Stop API control when transcription occurs for every call media stream (identified by the Genesys conversationId) sent to ASAPP's media gateway. See the [Endpoints](#endpoints) section to learn how to interact with them.

ASAPP will not begin transcribing call audio until you request it to. This prevents transcription of audio at the very beginning of the Genesys AudioHook streaming session, which may include IVR, hold music, or queueing.

Stop requests pause or end transcription for any needed reason.
For example, you could use a stop request mid-call when the agent places the call on hold, or at the end of the call to prevent transcribing post-call interactions such as satisfaction surveys.

AI Transcribe is only meant to transcribe conversations between customers and agents; implement start and stop requests to ensure the system does not transcribe non-conversation audio (e.g., hold music, IVR menus, surveys). Attempted transcription of non-conversation audio will negatively impact other services meant to consume conversation transcripts, such as ASAPP AI Summary.

### 3. Receive Transcript Outputs

AI Transcribe outputs transcripts using three separate mechanisms, each corresponding to a different temporal use case:

* **[Real-time](#real-time-via-webhook)**: Webhook posts complete utterances to your target endpoint as they are transcribed during the live conversation
* **[After-call](#after-call-via-get-request "After-Call via GET Request")**: GET endpoint responds to your requests for a designated call with the full set of utterances from that completed conversation
* **[Batch](#batch-via-file-exporter "Batch via File Exporter")**: File Exporter service responds to your request for a designated time interval with a link to a data feed file that includes all utterances from that interval's conversations

#### Real-Time via Webhook

ASAPP sends transcript outputs in real time via HTTPS POST requests to a target URL of your choosing.

**Authentication**

Once the target is selected, work with your ASAPP account team to implement one of the following supported authentication mechanisms:

* **Custom CAs:** Custom CA certificates for regular TLS (1.2 or above).
* **mTLS:** Mutual TLS using custom certificates provided by the customer.
* **Secrets:** A secret token. The secret name is configurable, as is whether it appears in the HTTP header or as a URL parameter.
* **OAuth2 (client\_credentials):** Client credentials to fetch tokens from an authentication server.
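For the secret-token mechanism, server-side verification might look like the sketch below. The header name `x-asapp-token` and URL parameter name `token` are purely illustrative, since the actual names are configured with your ASAPP account team:

```python
import hmac

# Illustrative value; in practice, load the shared secret from secure storage.
EXPECTED_TOKEN = "example-shared-secret"

def is_authorized(headers: dict[str, str], query_params: dict[str, str]) -> bool:
    """Accept the configured secret from either an HTTP header or a URL parameter."""
    presented = headers.get("x-asapp-token") or query_params.get("token", "")
    # Constant-time comparison avoids leaking the token via timing differences.
    return hmac.compare_digest(presented, EXPECTED_TOKEN)
```

Your webhook handler would call `is_authorized(...)` before processing any posted transcript message and reject the request otherwise.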
**Expected Load**

Target servers should be able to support receiving transcript POST messages for each utterance of every live conversation on which AI Transcribe is active. For reference, an average live call sends approximately 10 messages per minute. At that rate, 50 concurrent live calls represent approximately 8 messages per second.

Please ensure you load test the selected target server to support anticipated peaks in concurrent call volume.

**Transcript Timing and Format**

Once you have started transcription for a given call stream using the `/start-streaming` endpoint, AI Transcribe begins to publish `transcript` messages, each of which contains a full utterance for a single call participant.

The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600 ms. Perceived latency will also be influenced by any network delay sending audio to ASAPP and receiving transcription messages in return.

Though we send messages in the order they are transcribed, network latency may affect the order in which they arrive or cause the system to drop messages due to timeouts. Where latency causes timeouts, the system drops the oldest pending messages first; AI Transcribe does not retry delivery of dropped messages.
The message body for `transcript` type messages is JSON encoded with these fields:

| Field | Subfield | Description | Example Value |
| :--- | :--- | :--- | :--- |
| externalConversationId | | Unique identifier for the call, the Genesys conversationId | 8c259fea-8764-4a92-adc4-73572e9cf016 |
| streamId | | Unique identifier that ASAPP assigns to each call participant's stream, returned in response to `/start-streaming` and `/stop-streaming` | 5ce2b755-3f38-11ed-b755-7aed4b5c38d5 |
| sender | externalId | Customer or agent identifier as provided in the request to `/start-streaming` | ef53245 |
| sender | role | A participant role, either customer or agent | customer, agent |
| autotranscribeResponse | message | Type of message | transcript |
| autotranscribeResponse | start | Start time of the utterance in milliseconds | 0 |
| autotranscribeResponse | end | End time of the utterance in milliseconds | 1000 |
| autotranscribeResponse | utterance | Transcribed utterance text | Are you there? |

Expected `transcript` message format:

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "",
  "streamId": "",
  "sender": {
    "externalId": "",
    "role": "customer"  // or "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 0,
    "end": 1000,
    "utterance": [
      {"text": ""}
    ]
  }
}
```

## Error Handling

Should your target server return an error in response to a POST request, ASAPP records the error details for the failed message delivery and drops the message.
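A receiving endpoint only needs to parse the documented fields. The sketch below extracts the sender role and utterance texts from a posted `transcript` message body; the helper name is our own:

```python
import json

def extract_utterances(body: str) -> tuple[str, list[str]]:
    """Return (sender role, utterance texts) from a transcript webhook payload."""
    payload = json.loads(body)
    if payload.get("type") != "transcript":
        raise ValueError("not a transcript message")
    texts = [u["text"] for u in payload["autotranscribeResponse"]["utterance"]]
    return payload["sender"]["role"], texts

# A payload matching the documented message format.
posted = json.dumps({
    "type": "transcript",
    "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",
    "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
    "sender": {"externalId": "TT9833237", "role": "customer"},
    "autotranscribeResponse": {
        "message": "transcript", "start": 400, "end": 3968,
        "utterance": [{"text": "Are you there?"}],
    },
})
role, texts = extract_utterances(posted)
```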
### After-Call via GET Request

AI Transcribe makes a full transcript available at the following endpoint for a given completed call:

`GET /conversation/v1/conversation/messages`

Once a conversation is complete, make a request to the endpoint using a conversation identifier and the system returns every message in the conversation.

**Message Limit**

This endpoint responds with up to 1,000 transcribed messages per conversation, approximately a two-hour continuous call. You receive all messages in a single response without any pagination.

To retrieve all messages for calls that exceed this limit, use either a real-time mechanism or File Exporter for transcript retrieval.

You set transcription settings (e.g., language, detailed tokens, redaction) for a given call with the Start/Stop API when you initiate call transcription. All transcripts retrieved after the call reflect the settings initially requested with the Start/Stop API.

See the [Endpoints](#endpoints) section to learn how to interact with this API.

### Batch via File Exporter

AI Transcribe makes full transcripts for batches of calls available using the File Exporter service's `utterances` data feed. You can use the File Exporter service as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g., nightly, weekly) or for ad hoc analyses.

Data that populates feeds for the File Exporter service updates once daily at 2:00 AM UTC.

## Use Case Example: Real-Time Transcription

This use case example consists of an English-language call between an agent and customer with redaction enabled, ending with a hold. Note that redaction is enabled by default and does not need to be requested explicitly.

1. Ensure the Genesys AudioHook is enabled and configured on the desired trunk and flow.

2. When the customer and agent are connected, send ASAPP a request to start transcription for the call:

   **POST** `/mg-autotranscribe/v1/start-streaming`

   **Request**

   ```json theme={null}
   {
     "namespace": "genesysaudiohook",
     "guid": "090eaa2f-72fa-480a-83e0-8667ff89c0ec",
     "customerId": "TT9833237",
     "agentId": "RE223444211993",
     "autotranscribeParams": {
       "language": "en-US"
     }
   }
   ```

   **Response**

   *STATUS 200: Router processed the request, details are in the response body*

   ```json theme={null}
   {
     "isOk": true,
     "autotranscribeResponse": {
       "customer": {
         "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
         "status": {
           "code": 1000,
           "description": "OK"
         }
       },
       "agent": {
         "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
         "status": {
           "code": 1000,
           "description": "OK"
         }
       }
     }
   }
   ```

3. The agent and customer begin their conversation, and ASAPP's webhook publisher sends separate HTTPS POST `transcript` messages for each participant to a target endpoint configured to receive the messages.

   HTTPS **POST** for Customer Utterance

   ```json theme={null}
   {
     "type": "transcript",
     "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",
     "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
     "sender": {
       "externalId": "TT9833237",
       "role": "customer"
     },
     "autotranscribeResponse": {
       "message": "transcript",
       "start": 400,
       "end": 3968,
       "utterance": [
         {"text": "I need help upgrading my streaming package and my PIN number is ####"}
       ]
     }
   }
   ```

   HTTPS **POST** for Agent Utterance

   ```json theme={null}
   {
     "type": "transcript",
     "externalConversationId": "8c259fea-8764-4a92-adc4-73572e9cf016",
     "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
     "sender": {
       "externalId": "RE223444211993",
       "role": "agent"
     },
     "autotranscribeResponse": {
       "message": "transcript",
       "start": 4744,
       "end": 8031,
       "utterance": [
         {"text": "Thank you sir, let me pull up your account."}
       ]
     }
   }
   ```

4. Later in the conversation, the agent puts the customer on hold. This triggers a request to the `/stop-streaming` endpoint to pause transcription and prevent hold music and promotional messages from being transcribed.

   **POST** `/mg-autotranscribe/v1/stop-streaming`

   **Request**

   ```json theme={null}
   {
     "namespace": "genesysaudiohook",
     "guid": "8c259fea-8764-4a92-adc4-73572e9cf016"
   }
   ```

   **Response**

   *STATUS 200: Router processed the request, details are in the response body*

   ```json theme={null}
   {
     "isOk": true,
     "autotranscribeResponse": {
       "customer": {
         "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
         "status": {
           "code": 1000,
           "description": "OK"
         },
         "summary": {
           "totalAudioBytes": 1334720,
           "audioDurationMs": 83420,
           "streamingSeconds": 84,
           "transcripts": 2
         }
       },
       "agent": {
         "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
         "status": {
           "code": 1000,
           "description": "OK"
         },
         "summary": {
           "totalAudioBytes": 1334720,
           "audioDurationMs": 83420,
           "streamingSeconds": 84,
           "transcripts": 2
         }
       }
     }
   }
   ```

### Data Security

ASAPP's security protocols protect data at each point of transmission, from first user authentication, to secure communications, to our auditing and logging system (which includes hashing of data in transit), all the way to securing the environment when data is at rest in the data logging system. ASAPP teams also operate under tight restraints in terms of access to data. These security protocols protect both ASAPP and its customers.

# AI Transcribe Product Guide

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/product-guide

Learn more about the use of AI Transcribe and its features

## Getting Started

This page provides an overview of the features and functionalities in AI Transcribe. After you integrate AI Transcribe into your applications, you can use all of the configured features.

### Transcription Outputs

AI Transcribe returns transcriptions as a sequence of utterances with start and end timestamps in response to an audio stream from a single speaker.
As the agent and customer speak, ASAPP's automated speech recognition (ASR) model transcribes their audio streams and returns completed utterances based on the natural pauses from each speaker.

The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600 ms. Perceived latency will also be influenced by any network delay sending audio to ASAPP and receiving transcription messages in return.

The system enables Smart Formatting by default, producing utterances with punctuation and capitalization already applied. The system also automatically converts spoken forms of utterances to written forms (e.g., 'twenty two' becomes '22').

### Redaction

AI Transcribe can immediately redact audio for sensitive information, returning utterances with sensitive information denoted in hash marks. ASAPP applies default redaction policies to prevent exposure of sensitive combinations of numerical digits. To configure redaction rules for your implementation, consult your ASAPP account contact.

Visit the [Data Redaction](/security/data-redaction) section to learn more.

The system enables redaction by default. Smart Formatting must also be enabled (it is by default) for redaction to function.

## Customization

### Transcriptions

ASAPP customizes transcription models for each implementation of AI Transcribe to ensure domain-specific context and terminology are well incorporated prior to launch. Consult your ASAPP account contact if the required historical call audio files are not available ahead of implementing AI Transcribe.

| Option | Description | Requirements |
| :--- | :--- | :--- |
| Baseline | ASAPP's general-purpose transcription capability, trained with no audio from relevant historical calls | none |
| Customized | A custom-trained transcription model to incorporate domain-specific terminology likely to be encountered during implementation | For English custom models, a minimum of 100 hours of representative historical call audio between customers and agents; for Spanish custom models, a minimum of 200 hours |

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed `.WAV` media files with speaker-separated channels. Recordings for training and real-time streams should have both the same sample rate (8000 samples/sec) and audio encoding (16-bit PCM).

Visit [Transmitting Data to SFTP](/reporting/send-sftp) for instructions on how to send historical call audio files to ASAPP.

### Vocabulary

In addition to training on historical transcripts, AI Transcribe accepts explicitly defined custom vocabulary for terms that are specific to your implementation. AI Transcribe also boosts detection for these terms by accepting hints for how a term may ordinarily sound, so that it can be recognized and output with the correct spelling.

Common examples of custom vocabulary include:

* Branded products, services, and offers
* Commonly used acronyms or abbreviations
* Important corporate addresses

You can send custom vocabulary to ASAPP for each audio transcription session, and can keep it consistent for all transcription requests or adjust it for different use cases (different brands, skills/queues, geographies, etc.).

AI Transcribe implementations via WebSocket API only support session-specific custom vocabulary. For Media Gateway implementations, transcription models can also be trained with custom vocabulary through an alternative mechanism. Reach out to your ASAPP account team for more information.

## Use Cases

### For Live Agent Assistance

**Challenge**

Organizations are exploring technologies to assist agents in real time by surfacing customer-specific offers, troubleshooting process flows, topical knowledge articles, relevant customer profile attributes, and more. Agents have access to most (if not all) of this content already, but a great assistive technology makes content actionable by finding the right time to bring the right item to the forefront.
To do this well, these technologies need to know both what's been said and what is being said in the moment, with very low latency. Many of these technologies face agent adoption and click-through challenges for two reported reasons:

1. Recommended content often doesn't fit the conversation, which may mean the underlying transcription isn't an accurate representation of the real conversation
2. Recommended content doesn't arrive soon enough for agents to use it, which may mean the latency between the audio and outputted transcript is too high

**Using AI Transcribe**

AI Transcribe is built to be the call transcript input data source for models that power assistive technologies for customer interactions. Because AI Transcribe is specifically designed for customer service interactions and trained on implementation-specific historical data, the word error rate (WER) for domain- and company-specific language is substantially reduced, rather than those terms being transcribed incorrectly and leading models astray.

To illustrate this point, consider a sample of 10,000 hours of transcribed audio from a typical contact center. A speech-to-text service only needs to recognize the 241 most frequently used words to reach 80% accuracy; those are largely words like "the", "you", "to", "what", and so on. To reach 90% accuracy, the system needs to correctly transcribe the next 324 most frequently used words, and even more for every additional percent. These are often words that are unique to your business: the words that really matter.

Read more here about [why small increases in transcription accuracy matter.](https://www.asapp.com/blog/why-a-little-increase-in-transcription-accuracy-is-such-a-big-deal/)

To ensure these high-accuracy transcript inputs reach models quickly enough to make timely recommendations, the expected time from audio received to transcription of that same utterance is 200-600 ms (excluding effects of network delay, as noted in *Transcription Outputs*).
### For Insights and Compliance

**Challenge**

For many organizations, the lack of accuracy and coverage in speech-to-text technologies prevents them from effectively employing transcripts for insights, quality management, and compliance use cases. Transcripts that fall short of accurately representing conversations compromise the usability of insights and leave too much room for ambiguity for quality managers and compliance teams.

Transcription technologies that aren't accurate enough for many use cases also tend to be employed only for a minority share of total call volume, because the outputs aren't useful enough to pay for full coverage. As a result, quality and compliance teams must rely on audio recordings, since most calls don't get transcribed.

**Using AI Transcribe**

AI Transcribe is specifically designed to maximize domain-specific accuracy for call center conversations. It is trained on past conversations before being deployed and continues to improve early in the implementation as it encounters conversations at scale.

For non-real-time use cases, AI Transcribe also supports processing batches of call audio at an interval that suits the use case. Teams can query AI Transcribe outputs in time-stamped utterance tables for data science and targeted compliance use cases, or load customer and agent utterances into quality management systems for managers to review in messaging-style user interfaces.

### AI Services That Enhance AI Transcribe

Once accurate call transcripts are generated, automatic summarization of those customer interactions becomes possible. ASAPP AI Summary is a recommended pairing with AI Transcribe, generating analytics-ready structured summaries and readable paragraph summaries that save agents the distraction of needing to write and submit disposition notes on every call. Head to the AI Summary Overview to learn more.

Learn more about AI Summary on ASAPP.com

AI Summary currently supports English-language conversations only.
# Deploy AI Transcribe into SIPREC via Media Gateway

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/siprec

Integrate AI Transcribe into your SIPREC system using ASAPP Media Gateway

This guide covers the **SIPREC Media Gateway** solution pattern, which consists of the following components to receive speech audio and call signals, and return call transcripts:

* Session border controllers and media gateways for receiving call audio from your session border controllers (SBCs)
* HTTPS API to receive requests to start and stop call transcription
* Webhook to POST real-time transcripts to a designated URL of your choosing, alongside two additional APIs to retrieve transcripts after-call for one or a batch of conversations

ASAPP works with you to understand your current telephony infrastructure and ecosystem, including the type of voice work assignment platform(s) and other capabilities available, such as SIPREC. Your ASAPP account team will also determine the main use case(s) for the transcript data to determine where and how call transcripts should be sent. ASAPP then completes the architecture definition, including integration points into the existing infrastructure.

### Integration Steps

There are three steps to integrate AI Transcribe into SIPREC:

1. Send Audio to Media Gateway
2. Send Start and Stop Requests
3. Receive Transcript Outputs

### Requirements

**Audio Stream Codec**

With SIPREC, the customer SBC and the ASAPP media gateway negotiate the media attributes via the SDP offer/answer exchange during the establishment of the session. The codecs in use today are as follows:

* G.711
* G.729

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed `.WAV` media files with speaker-separated channels. Recordings for training and real-time streams should have both the same sample rate (8000 samples/sec) and audio encoding (16-bit PCM).
See the [Customization section of the AI Transcribe Product Guide](/ai-productivity/ai-transcribe/product-guide#customization) for more on data requirements for transcription model training.

**Developer Portal**

ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following:

* Access relevant API documentation (e.g. OpenAPI reference schemas)
* Access API keys for authorization
* Manage user accounts and apps

Visit the [Get Started](/getting-started/developers) guide for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs.

## Integrate to the Media Gateway

### 1. Send Audio to Media Gateway

Media Gateway (MG) and Media Gateway Proxy (MG Proxy) components receive real-time audio via the SIPREC protocol (acting as Session Recording Servers) along with metadata and send it to AI Transcribe. ASAPP offers a software-as-a-service approach, hosting MGs and MG Proxies in ASAPP's VPC in the PCI-scoped zone.

**Network Connectivity**

ASAPP will determine the network connectivity between your infrastructure and the ASAPP AWS Virtual Private Cloud (VPC) based on the architecture; however, we will deploy secure connections between your data centers and the ASAPP VPC.

* **Edge layer**: ASAPP has built an edge layer utilizing public IPv4 addresses registered to ASAPP. These IP addresses are NOT routed over the Internet, but they guarantee uniqueness across all IP networks. The edge layer houses firewalls and session border controllers that properly handle full NAT for both SIP and non-SIP traffic.
* **Customer connection aggregation**: ASAPP connects to customers via AWS Transit Gateway, which allows establishment of multiple route-based VPN connections to customers. Sample configuration for various customer devices is available on request.

**Port Details**

Ports and protocols in use for AI Transcribe implementations are shown below.
These definitions provide visibility to your security teams for the provisioning of firewalls and ACLs.

* **SIP/SIPREC:** TCP 5070 and above; your ASAPP account team will specify a value for your implementation
* **Audio Streams:** UDP; your ASAPP account team will specify a port range for your implementation
* **API Endpoints:** TCP 443

In customer firewalls, you must disable the SIP Application Layer Gateway (ALG) and any "Threat Detection" features, as they typically interfere with the SIP dialogs and the re-INVITE process.

#### Generating Call Identifiers

AI Transcribe uses your call identifier to ensure a given call can be referenced in subsequent start and stop requests and associated with transcripts. To ensure ASAPP receives your call identifiers properly, configure the SBC to create a universal call identifier (UCID or equivalent identifier).

UCID generation is a native feature of session border controller platforms. For example, the Oracle/Acme Packet session border controller platform documents UCID generation in its [configuration guide](https://docs.oracle.com/en/industries/communications/enterprise-session-border-controller/8.4.0/configuration/universal-call-identifier-spl). Other session border controller vendors have similar features, so refer to the vendor documentation for guidance.

### 2. Send Start and Stop Requests

As outlined above in requirements, you must create user accounts in the developer portal to enroll apps and receive API keys to interact with ASAPP endpoints.

The `/start-streaming` and `/stop-streaming` endpoints of the Start/Stop API control when transcription occurs for every call media stream (identified by the GUID/UCID) sent to ASAPP's media gateway. See the [Endpoints](#endpoints) section to learn how to interact with them.
ASAPP will not begin transcribing call audio until you request it to, thus preventing transcription of audio at the very beginning of the SIPREC session such as standard IVR menus and hold music. Stop requests pause or end transcription for any needed reason. For example, you could use a stop request mid-call when the agent places the call on hold, or at the end of the call to prevent transcribing post-call interactions such as satisfaction surveys.

AI Transcribe is only meant to transcribe conversations between customers and agents - you should implement start and stop requests to ensure the system does not transcribe non-conversation audio (e.g., hold music, IVR menus, surveys). Attempted transcription of non-conversation audio will negatively impact other services meant to consume conversation transcripts, such as ASAPP AI Summary.

### 3. Receiving Transcript Outputs

AI Transcribe outputs transcripts using three separate mechanisms, each corresponding to a different temporal use case:

* **[Real-time](#real-time-via-webhook)**: Webhook posts complete utterances to your target endpoint as they are transcribed during the live conversation
* **[After-call](#after-call-via-get-request)**: GET endpoint responds to your requests for a designated call with the full set of utterances from that completed conversation
* **[Batch](#batch-via-file-exporter)**: File Exporter service responds to your request for a designated time interval with a link to a data feed file that includes all utterances from that interval's conversations

#### Real-Time via Webhook

ASAPP sends transcript outputs in real-time via HTTPS POST requests to a target URL of your choosing.

**Authentication**

Once the target is selected, work with your ASAPP account team to implement one of the following supported authentication mechanisms:

* **Custom CAs:** Custom CA certificates for regular TLS (1.2 or above).
* **mTLS:** Mutual TLS using custom certificates provided by the customer.
* **Secrets:** A secret token. The secret name is configurable, as is whether it appears in the HTTP header or as a URL parameter.
* **OAuth2 (client\_credentials):** Client credentials to fetch tokens from an authentication server.

**Expected Load**

Target servers should be able to support receiving transcript POST messages for each utterance of every live conversation on which AI Transcribe is active. For reference, an average live call sends approximately 10 messages per minute. At that rate, 50 concurrent live calls represent approximately 8 messages per second. Please ensure you load test the selected target server to support anticipated peaks in concurrent call volume.

**Transcript Timing and Format**

See the [API Reference](/apis/overview) to learn how to interact with this API.

The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600ms. Perceived latency will also be influenced by any network delay sending audio to ASAPP and receiving transcription messages in return. Though we send messages in the order they are transcribed, network latency may impact the order in which they arrive or cause the system to drop messages due to timeouts. Where latency causes timeouts, the system drops the oldest pending messages first; AI Transcribe does not retry delivery of dropped messages.
The message body for `transcript` type messages is JSON encoded with these fields:

| Field                  | Subfield   | Description                                                                                                                              | Example Value                        |
| :--------------------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- |
| externalConversationId |            | Unique identifier with the GUID/UCID of the SIPREC call                                                                                  | 00002542391662063156                 |
| streamId               |            | Unique identifier that ASAPP assigns to each call participant's stream, returned in response to `/start-streaming` and `/stop-streaming` | 5ce2b755-3f38-11ed-b755-7aed4b5c38d5 |
| sender                 | externalId | Customer or agent identifier as provided in request to `/start-streaming`                                                                | ef53245                              |
| sender                 | role       | A participant role, either customer or agent                                                                                             | customer, agent                      |
| autotranscribeResponse | message    | Type of message                                                                                                                          | transcript                           |
| autotranscribeResponse | start      | The start ms of the utterance                                                                                                            | 0                                    |
| autotranscribeResponse | end        | Elapsed ms since the start of the utterance                                                                                              | 1000                                 |
| autotranscribeResponse | utterance  | Transcribed utterance text (`text`)                                                                                                      | Are you there?                       |

Expected `transcript` message format:

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "",
  "streamId": "",
  "sender": {
    "externalId": "",
    "role": "customer"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 0,
    "end": 1000,
    "utterance": [
      {"text": ""}
    ]
  }
}
```

**Error Handling**

Should your target server return an error in response to a POST request, ASAPP will record the error details for the failed message delivery and drop the message.

#### After-Call via GET Request

AI Transcribe makes a full transcript available at the following endpoint for a given completed call:

`GET /conversation/v1/conversation/messages`

Once a conversation is complete, make a request to the endpoint using a conversation identifier and the system returns every message in the conversation.
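As a sketch of this after-call retrieval flow in Python (the base URL and the `externalId` query parameter name are assumptions; consult the API Reference for the exact schema):

```python
import json
import urllib.parse
import urllib.request

# Assumed base URL; substitute the host for your ASAPP environment.
ASAPP_API_HOST = "https://api.sandbox.asapp.com"

def build_transcript_request(guid: str, api_id: str, api_secret: str) -> urllib.request.Request:
    """Build an authenticated GET request for a completed call's transcript."""
    # "externalId" is an assumed name for the GUID/UCID query parameter.
    query = urllib.parse.urlencode({"externalId": guid})
    return urllib.request.Request(
        f"{ASAPP_API_HOST}/conversation/v1/conversation/messages?{query}",
        headers={"asapp-api-id": api_id, "asapp-api-secret": api_secret},
        method="GET",
    )

def fetch_transcript(guid: str, api_id: str, api_secret: str) -> list:
    """Send the request and decode the array of message objects."""
    with urllib.request.urlopen(build_transcript_request(guid, api_id, api_secret)) as resp:
        return json.load(resp)
```

The builder is separated from the send so the authenticated request can be inspected (or retried) independently of network I/O.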
**Message Limit**

This endpoint responds with up to 1,000 transcribed messages per conversation, approximately a two-hour continuous call. You receive all messages in a single response without any pagination. To retrieve all messages for calls that exceed this limit, use either the real-time mechanism or the File Exporter for transcript retrieval.

You set transcription settings (e.g., language, detailed tokens, redaction) for a given call with the Start/Stop API when you initiate call transcription. All transcripts retrieved after the call reflect the settings initially requested via the Start/Stop API.

See the [Endpoints](#endpoints) section to learn how to interact with this API.

#### Batch via File Exporter

AI Transcribe makes full transcripts for batches of calls available using the File Exporter service's `utterances` data feed. You can use the File Exporter service as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g., nightly, weekly) or for ad hoc analyses. Data that populates feeds for the File Exporter service updates once daily at 2:00 AM UTC.

Visit [Retrieving Data for AI Services](/reporting/file-exporter) for a guide on how to interact with the File Exporter service.

## Usage

### Endpoints

ASAPP receives start/stop requests to signal when transcription for a given call should occur. You can send start and stop requests multiple times during a single call (for example, stopped when an agent places the call on hold and resumed when the agent resumes the call).

For all requests, you must provide a header containing the `asapp-api-id` API Key and the `asapp-api-secret`. You can find them under your Apps in the [AI Services Developer Portal](https://developer.asapp.com/).

All requests to ASAPP sandbox and production APIs must use the `HTTPS` protocol. Traffic using `HTTP` will not be redirected to `HTTPS`.
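To illustrate the header and HTTPS requirements above, here is a minimal Python sketch that builds a `/start-streaming` request (the base URL is an assumption; the body fields follow the request example later in this guide):

```python
import json
import urllib.request

# Assumed base URL; substitute the host for your ASAPP environment.
ASAPP_API_HOST = "https://api.sandbox.asapp.com"

def build_start_request(guid: str, customer_id: str, agent_id: str,
                        api_id: str, api_secret: str) -> urllib.request.Request:
    """Build an authenticated POST to start transcription for a SIPREC call."""
    body = {
        "namespace": "siprec",
        "guid": guid,
        "customerId": customer_id,
        "agentId": agent_id,
        "autotranscribeParams": {"language": "en-US"},
        "siprecParams": {"mediaLineOrder": "CUSTOMER_FIRST"},
    }
    return urllib.request.Request(
        f"{ASAPP_API_HOST}/mg-autotranscribe/v1/start-streaming",
        data=json.dumps(body).encode(),
        headers={
            "asapp-api-id": api_id,
            "asapp-api-secret": api_secret,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the built request with urllib.request.urlopen(...) returns the
# JSON response containing isOk plus a streamId for each participant.
```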
[`POST /mg-autotranscribe/v1/start-streaming/`](/apis/autotranscribe-media-gateway/start-streaming) Use this endpoint to tell ASAPP to start or resume transcription for a given call. **When to Call** Transcription can be started (or resumed after a [`/stop-streaming`](/apis/autotranscribe-media-gateway/stop-streaming) request) at any point during a call. **Request Details** Requests must include a call identifier with the GUID/UCID of the SIPREC call, a namespace (e.g., `siprec`), and an identifier from your system(s) for each of the customer and agent participants on the call. Agent identifiers provided here can tell ASAPP whether agents have changed, indicating a new leg of the call has begun. This agent information enables other services to target specific legs of calls rather than only the higher-level call. The `guid` field expects the decimal formatting of the identifier. Cisco example: `0867617078-0032318833-2221801472-0002236962` Avaya example: `00002542391662063156` Requests also include a parameter to indicate the mapping of media lines (m-lines) in the SDP of SIPREC protocol; the parameter specifies whether the top m-line is mapped to the agent or customer participant. The top m-line is typically reversed for outbound calls vs. inbound calls. Requests may also include optional parameters for transcription including: * Language (e.g., `en-us` for English or `es-us` for Spanish) * Whether detailed tokens are requested * Whether call audio recording is permitted * Whether transcribed outputs should be redacted, unredacted, or both redacted and unredacted outputs should be returned AI Transcribe can immediately redact audio for sensitive information, returning utterances with sensitive information denoted in hashmarks. Visit [Redaction Policies](/security/data-redaction/redaction-policies) to learn more. 
**Response Details** When successful, this endpoint responds with a boolean indicating whether the stream has started successfully along with a `customer` and `agent` object. Each object contains a stream identifier (`streamId`), status code and status description. [`POST /mg-autotranscribe/v1/stop-streaming/`](/apis/autotranscribe-media-gateway/stop-streaming) Use this endpoint to tell ASAPP to pause or end transcription for a given call. **When to Call** Transcription can be stopped at any point during a call. **Request Details** Requests must include a call identifier with the GUID/UCID of the SIPREC call and a namespace (e.g., `siprec`). The `guid` field expects the decimal formatting of the identifier. Cisco example: `0867617078-0032318833-2221801472-0002236962` Avaya example: `00002542391662063156` **Response Details** When successful, this endpoint responds with a boolean indicating whether the stream has stopped successfully along with a `customer` and `agent` object. Each object contains a stream identifier (`streamId`), status code and status description. Each object also contains a `summary` object of transcription stats related to that participant's stream. [`GET /conversation/v1/conversation/messages`](/apis/messages/list-messages-with-an-externalid) Use this endpoint to retrieve all the transcript messages for a completed call. **When to Call** Once the conversation is complete. Conversation transcripts are available for seven days after they are completed. For conversations that include transfers, the endpoint will provide transcript messages for all call legs that correspond to the call's identifier. **Request Details** Requests must include a call identifier with the GUID/UCID of the SIPREC call. **Response Details** When successful, this endpoint responds with an array of objects, each of which corresponds to a single message. 
Each object contains the text of the message, the sender's role and identifier, a unique message identifier, and timestamps. #### Error Handling ASAPP uses HTTP status codes to communicate the success or failure of an API Call. * 2XX HTTP status codes are for successful API calls. * 4XX and 5XX HTTP status codes are for errored API calls. ASAPP errors are returned in the following structure: ```json theme={null} { "error": { "requestId": "67441da5-dd2b-4820-b47d-441998f066e9", "message": "Bad request", "code": "400-02" } } ``` When using the `/start-streaming` and `/stop-streaming` endpoints, the system may return the following error codes: | Code | Description | | :-------- | :---------------------------------------------------- | | `400-201` | MG AI Transcribe API parameter incorrect | | `400-202` | AI Transcribe parameter or combination incorrect | | `400-203` | No call with specified guid found | | `409-201` | Call transcription already started or already stopped | | `409-202` | Another API request for same guid is pending | | `409-203` | SIPREC BYE being processed | | `500-201` | MG AI Transcribe or AI Transcribe internal error | #### Data Security ASAPP's security protocols protect data at each point of transmission from first user authentication, to secure communications, to our auditing and logging system, all the way to securing the environment when data is at rest in the data logging system. Access to data by ASAPP teams is tightly constrained and monitored. Strict security protocols protect both ASAPP and our customers. ## Use Case Example ### Real-Time Transcription This real-time transcription use case example consists of an English language call between an agent and customer with redaction enabled, ending with a hold. Note that redaction is enabled by default and does not need to be requested explicitly. 1. 
When the call record is created, ASAPP media gateway components receive real-time audio via SIPREC protocol along with metadata, most notably the call's Avaya-formatted UCID/GUID: `00002542391662063156` 2. When the customer and agent are connected, ASAPP is sent a request to start transcription for the call: **POST** `/mg-autotranscribe/v1/start-streaming` **Request** ```json theme={null} { "namespace": "siprec", "guid": "00002542391662063156", "customerId": "TT9833237", "agentId": "RE223444211993", "autotranscribeParams": { "language": "en-US" }, "siprecParams": { "mediaLineOrder": "CUSTOMER_FIRST" } } ``` **Response** *STATUS 200: Router processed the request, details are in the response body* ```json theme={null} { "isOk": true, "autotranscribeResponse": { "customer": { "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5", "status": { "code": 1000, "description": "OK" } }, "agent": { "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1", "status": { "code": 1000, "description": "OK" } } } } ``` 3. The agent and customer begin their conversation and ASAPP's webhook publisher sends separate HTTPS POST `transcript` messages for each participant to a target endpoint configured to receive the messages. 
**HTTPS POST for Customer Utterance**

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "00002542391662063156",
  "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
  "sender": {
    "externalId": "TT9833237",
    "role": "customer"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 400,
    "end": 3968,
    "utterance": [
      {"text": "I need help upgrading my streaming package and my PIN number is ####"}
    ]
  }
}
```

**HTTPS POST for Agent Utterance**

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "00002542391662063156",
  "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
  "sender": {
    "externalId": "RE223444211993",
    "role": "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 4744,
    "end": 8031,
    "utterance": [
      {"text": "Thank you sir, let me pull up your account."}
    ]
  }
}
```

4. Later in the conversation, the agent puts the customer on hold. This triggers a request to the `/stop-streaming` endpoint to pause transcription and prevents hold music and promotional messages from being transcribed.
**POST** `/mg-autotranscribe/v1/stop-streaming`

**Request**

```json theme={null}
{
  "namespace": "siprec",
  "guid": "00002542391662063156"
}
```

**Response**

*STATUS 200: Router processed the request, details are in the response body*

```json theme={null}
{
  "isOk": true,
  "autotranscribeResponse": {
    "customer": {
      "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    },
    "agent": {
      "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    }
  }
}
```

# Deploying AI Transcribe for Twilio

Source: https://docs.asapp.com/ai-productivity/ai-transcribe/twilio

Use AI Transcribe with Twilio

This guide covers the **Twilio Media Gateway** solution pattern, which consists of the following components to receive speech audio from Twilio, receive call signals, and return call transcripts:

* Media gateways for receiving call audio from Twilio
* HTTPS API which enables the customer to GET a streaming URL to which call audio is sent and POST requests to start and stop call transcription
* Webhook to POST real-time transcripts to a designated URL of your choosing, alongside two additional APIs to retrieve transcripts after-call for one or a batch of conversations

ASAPP works with you to understand your current telephony infrastructure and ecosystem. Your ASAPP account team will also determine the main use case(s) for the transcript data to determine where and how call transcripts should be sent. ASAPP then completes the architecture definition, including integration points into the existing infrastructure.

### Integration Steps

There are four steps to integrate AI Transcribe into Twilio:

1. Authenticate with ASAPP and Obtain a Twilio Media Stream URL
2. Send Audio to Media Gateway
3.
Send Start and Stop Requests
4. Receive Transcript Outputs

### Requirements

**Audio Stream Codec**

Twilio provides audio in mu-law format at an 8000 samples/sec sample rate, which ASAPP supports. You do not need any modification or additional transcoding when forking audio to ASAPP.

When supplying recorded audio to ASAPP for AI Transcribe model training prior to implementation, send uncompressed `.WAV` media files with speaker-separated channels. Recordings for training should have a sample rate of 8000 samples/sec and 16-bit PCM audio encoding.

See the [Customization section of the AI Transcribe Product Guide](/ai-productivity/ai-transcribe/product-guide#customization) for more on data requirements for transcription model training.

**Developer Portal**

ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can do the following:

* Access relevant API documentation (e.g. OpenAPI reference schemas)
* Access API keys for authorization
* Manage user accounts and apps

Visit the [Get Started](/getting-started/developers) guide for instructions on creating a developer account, managing teams and apps, and setting up AI Service APIs.

## Integrate with Twilio

### 1. Authenticate with ASAPP and Obtain a Twilio Media Stream URL

A Twilio media stream URL is required to start streaming audio. Begin by authenticating with ASAPP to obtain this URL.

All requests to ASAPP sandbox and production APIs must use the `HTTPS` protocol. Traffic using `HTTP` will not be redirected to `HTTPS`.

The following HTTPS REST API enables authentication with the ASAPP API Gateway:

[`GET /mg-autotranscribe/v1/twilio-media-stream-url`](/apis/autotranscribe-media-gateway/get-twilio-media-stream-url)

HTTP headers (required):

```json theme={null}
{
  "asapp-api-id": "<your API key ID>",
  "asapp-api-secret": "<your API key secret>"
}
```

ASAPP provides these header parameters to you in the [Developer Portal](https://developer.asapp.com/).
HTTP response body:

```json theme={null}
{
  "streamingUrl": ""
}
```

If authentication succeeds, the HTTP response body returns a secure WebSocket short-lived access URL. The TTL (time-to-live) for this URL is 5 minutes. The system checks the validity of the short-lived URL only at the beginning of the WebSocket connection, so sessions can last as long as needed.

You can use the same short-lived access URL to start as many unique sessions as desired within the 5-minute TTL. For example, if the call center has an average rate of 1 new call per second, it can use the same short-lived access URL to initiate 300 total calls (60 calls per minute × 5 minutes), and each call can last as long as needed, whether 2 minutes or longer than 30 minutes. After the five-minute TTL, however, the system will need to obtain a new short-lived access URL to start any new calls. We recommend refreshing the short-lived URL more often than every 5 minutes so that you always hold a valid one.

### 2. Send Audio to Media Gateway

With the URL obtained in the previous step, instruct Twilio to start sending its Media Stream to ASAPP Media Gateway components. Media Gateway (MG) components receive real-time audio along with Call SID metadata. Twilio provides multiple ways to initiate a Media Stream, which are described in [their documentation](https://www.twilio.com/docs/voice/api/media-streams#startstop-media-streams).

While instructing Twilio to send Media Streams, we highly recommend that you provide a `statusCallback` URL. Twilio will use this URL in the event connectivity is lost or has an error. Your call center will need to process this callback and instruct Twilio to start new Media Streams again, assuming transcriptions are still desired. See Handling Failures for Twilio Media Streams below for details.

ASAPP offers a software-as-a-service approach, hosting MGs in ASAPP's VPC in the PCI-scoped zone.
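One way to honor the 5-minute TTL from step 1 is a small cache that refreshes the streaming URL shortly before it expires. A sketch (the base URL is an assumption; substitute your environment's host):

```python
import json
import time
import urllib.request

# Assumed base URL; substitute the host for your ASAPP environment.
ASAPP_API_HOST = "https://api.sandbox.asapp.com"
URL_TTL_SECONDS = 5 * 60  # short-lived streaming URLs are valid for 5 minutes

_cached_url = None
_fetched_at = 0.0

def get_streaming_url(api_id: str, api_secret: str, margin_s: float = 30.0) -> str:
    """Return a cached streaming URL, refreshing before the 5-minute TTL lapses."""
    global _cached_url, _fetched_at
    if _cached_url is None or time.monotonic() - _fetched_at > URL_TTL_SECONDS - margin_s:
        request = urllib.request.Request(
            f"{ASAPP_API_HOST}/mg-autotranscribe/v1/twilio-media-stream-url",
            headers={"asapp-api-id": api_id, "asapp-api-secret": api_secret},
        )
        with urllib.request.urlopen(request) as resp:
            _cached_url = json.load(resp)["streamingUrl"]
        _fetched_at = time.monotonic()
    return _cached_url
```

The `margin_s` safety margin means new calls never start with a URL that is about to expire; already-connected sessions are unaffected by a refresh.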
**Network Connectivity**

Twilio cloud sends audio to ASAPP cloud via secure (TLS 1.2) WebSocket connections over the internet. No additional or custom networking is required.

**Port Details**

Ports and protocols in use for AI Transcribe implementations are shown below:

* **Audio Streams**: Secure WebSocket with destination port 443
* **API Endpoints**: TCP 443

**Handling Failures for Twilio Media Streams**

There are multiple reasons (e.g., intermittent internet failures, scheduled maintenance) why a Twilio Media Stream could be interrupted mid-call. The only way to know that the Media Stream was interrupted is to utilize the `statusCallback` parameter (along with `statusCallbackMethod` if needed) of the Twilio API. Should a failure occur, the URL you specified in the `statusCallback` parameter will receive an HTTP request informing you of the failure.

If you receive a failure notification, it means ASAPP has stopped receiving audio from Twilio and no more transcriptions for that call will take place. To restart transcriptions:

* Obtain a Twilio Media Stream URL - unless the failure occurred within 5 minutes of the start of the call, you won't be able to reuse the original streaming URL.
* Send Audio to Media Gateway - instruct Twilio through their API to start a new media stream to the Twilio Media Stream URL that ASAPP provided.
* Send a Start request (see [3. Send Start and Stop Requests](#3-send-start-and-stop-requests) for details).

**Generating Call Identifiers**

AI Transcribe uses your call identifier to ensure a given call can be referenced in subsequent [start and stop requests](#3-send-start-and-stop-requests) and associated with transcripts. Twilio automatically generates a unique Call SID identifier for the call.

### 3. Send Start and Stop Requests

As outlined in [requirements](#requirements), you must create user accounts in the developer portal to enroll apps and receive API keys to interact with ASAPP endpoints.
The `/start-streaming` and `/stop-streaming` endpoints of the Start/Stop API control when transcription occurs for every call. See the [API Reference](/apis/overview) to learn how to interact with this API. ASAPP will not begin transcribing call audio until you request it to, thus preventing transcription of audio at the very beginning of the audio streaming session, which may include IVR, hold music, or queueing. Stop requests pause or end transcription for any needed reason. For example, you could use a stop request mid-call when the agent places the call on hold or at the end of the call to prevent transcribing post-call interactions such as satisfaction surveys. AI Transcribe is only meant to transcribe conversations between customers and agents - you should implement start and stop requests to ensure the system does not transcribe non-conversation audio (e.g., hold music, IVR menus, surveys). Attempted transcription of non-conversation audio will negatively impact other services meant to consume conversation transcripts, such as ASAPP AI Summary. ### 4. Receive Transcript Outputs AI Transcribe outputs transcripts using three separate mechanisms, each corresponding to a different temporal use case: * **[Real-time](#real-time-via-webhook)**: Webhook posts complete utterances to your target endpoint as they are transcribed during the live conversation * **[After-call](#after-call-via-get-request)**: GET endpoint responds to your requests for a designated call with the full set of utterances from that completed conversation * **[Batch](#batch-via-file-exporter)**: File Exporter service responds to your request for a designated time interval with a link to a data feed file that includes all utterances from that interval's conversations #### Real-Time via Webhook ASAPP sends transcript outputs in real-time via HTTPS POST requests to a target URL of your choosing. 
**Authentication**

Once the target is selected, work with your ASAPP account team to implement one of the following supported authentication mechanisms:

* **Custom CAs:** Custom CA certificates for regular TLS (1.2 or above).
* **mTLS:** Mutual TLS using custom certificates provided by the customer.
* **Secrets:** A secret token. The secret name is configurable, as is whether it appears in the HTTP header or as a URL parameter.
* **OAuth2 (client\_credentials):** Client credentials to fetch tokens from an authentication server.

**Expected Load**

Target servers should be able to support receiving transcript POST messages for each utterance of every live conversation on which AI Transcribe is active. For reference, an average live call sends approximately 10 messages per minute. At that rate, 50 concurrent live calls represent approximately 8 messages per second. Please ensure you load test the selected target server to support anticipated peaks in concurrent call volume.

**Transcript Timing and Format**

Once you have started transcription for a given call stream using the `/start-streaming` endpoint, AI Transcribe begins to publish `transcript` messages, each of which contains a full utterance for a single call participant. The expected latency between when ASAPP receives audio for a completed utterance and provides a transcription of that same utterance is 200-600ms. Perceived latency will also be influenced by any network delay sending audio to ASAPP and receiving transcription messages in return. Though we send messages in the order they are transcribed, network latency may impact the order in which they arrive or cause the system to drop messages due to timeouts. Where latency causes timeouts, the system drops the oldest pending messages first; AI Transcribe does not retry delivery of dropped messages.
The message body for `transcript` type messages is JSON encoded with these fields:

| Field                  | Subfield   | Description                                                                                                                              | Example Value                        |
| :--------------------- | :--------- | :--------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- |
| externalConversationId |            | Unique identifier with the Twilio Call SID for the call                                                                                  | CA5b040e075515c424391012acc5a870cf   |
| streamId               |            | Unique identifier that ASAPP assigns to each call participant's stream, returned in response to `/start-streaming` and `/stop-streaming` | 5ce2b755-3f38-11ed-b755-7aed4b5c38d5 |
| sender                 | externalId | Customer or agent identifier as provided in request to `/start-streaming`                                                                | ef53245                              |
| sender                 | role       | A participant role, either customer or agent                                                                                             | customer, agent                      |
| autotranscribeResponse | message    | Type of message                                                                                                                          | transcript                           |
| autotranscribeResponse | start      | The start ms of the utterance                                                                                                            | 0                                    |
| autotranscribeResponse | end        | Elapsed ms since the start of the utterance                                                                                              | 1000                                 |
| autotranscribeResponse | utterance  | Transcribed utterance text (`text`)                                                                                                      | Are you there?                       |

Expected `transcript` message format:

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "",
  "streamId": "",
  "sender": {
    "externalId": "",
    "role": "customer"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 0,
    "end": 1000,
    "utterance": [
      {"text": ""}
    ]
  }
}
```

**Error Handling**

Should your target server return an error in response to a POST request, ASAPP will record the error details for the failed message delivery and drop the message.
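A minimal receiver for these webhook messages might look like the following sketch, using Python's standard library (TLS termination and the authentication mechanisms above are assumed to be handled in front of this handler; the port is arbitrary):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TranscriptWebhook(BaseHTTPRequestHandler):
    """Handle AI Transcribe `transcript` POST messages."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = json.loads(self.rfile.read(length))
        if message.get("type") == "transcript":
            role = message["sender"]["role"]  # "customer" or "agent"
            text = message["autotranscribeResponse"]["utterance"][0]["text"]
            print(f"[{role}] {text}")
        # Respond 2xx promptly; an error response causes ASAPP to
        # record the failure and drop the message.
        self.send_response(204)
        self.end_headers()

def serve(port: int = 8443) -> None:
    """Run the webhook receiver until interrupted."""
    HTTPServer(("0.0.0.0", port), TranscriptWebhook).serve_forever()
```

In production you would enqueue each message for downstream processing instead of printing, keeping the handler fast enough for the per-utterance load described above.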
#### After-Call via GET Request

AI Transcribe makes a full transcript available at the following endpoint for a given completed call:

`GET /conversation/v1/conversation/messages`

Once a conversation is complete, make a request to the endpoint using a conversation identifier and the system returns every message in the conversation.

**Message Limit**: This endpoint responds with up to 1,000 transcribed messages per conversation, roughly a two-hour continuous call. All messages arrive in a single response without pagination. To retrieve all messages for calls that exceed this limit, use either a real-time mechanism or File Exporter for transcript retrieval.

You set transcription settings (e.g., language, detailed tokens, redaction) for a given call with the Start/Stop API when you initiate call transcription. All transcripts retrieved after the call reflect the settings originally requested via the Start/Stop API.

See the [API Reference](/apis/overview) to learn how to interact with this API.

#### Batch via File Exporter

AI Transcribe makes full transcripts for batches of calls available using the File Exporter service's `utterances` data feed. You can use the File Exporter service as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g., nightly, weekly) or for ad hoc analyses. Data that populates feeds for the File Exporter service updates once daily at 2:00AM UTC.

Visit [Retrieving Data from ASAPP](https://asapp.mintlify.app/reporting/data-from-messaging-platform) for a guide on how to interact with the File Exporter service.

## Use Case Example: Real-Time Transcription

This real-time transcription use case example consists of an English language call between an agent and customer with redaction enabled, ending with a hold. Note that redaction is enabled by default and does not need to be requested explicitly.

1. Obtain a Twilio media streaming URL destination by authenticating with ASAPP.
**GET** `/mg-autotranscribe/v1/twilio-media-stream-url`

**Response**

*STATUS 200: OK - Twilio media stream url in the response body*

```json theme={null}
{
  "streamingUrl": "wss://localhost/twilio-media?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c"
}
```

2. With the URL obtained in the previous step, instruct Twilio to start a Media Stream to ASAPP's media gateway components. ASAPP will now receive real-time audio via Twilio Stream along with metadata, most notably the call's SID: `CA5b040e075515c424391012acc5a870cf`

3. When the customer and agent are connected, send ASAPP a request to start transcription for the call:

**POST** `/mg-autotranscribe/v1/start-streaming`

**Request**

```json theme={null}
{
  "namespace": "twilio",
  "guid": "CA5b040e075515c424391012acc5a870cf",
  "customerId": "TT9833237",
  "agentId": "RE223444211993",
  "autotranscribeParams": {
    "language": "en-US"
  },
  "twilioParams": {
    "trackMap": {
      "inbound": "customer",
      "outbound": "agent"
    }
  }
}
```

**Response**

*STATUS 200: Router processed the request, details are in the response body*

```json theme={null}
{
  "isOk": true,
  "autotranscribeResponse": {
    "customer": {
      "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
      "status": {
        "code": 1000,
        "description": "OK"
      }
    },
    "agent": {
      "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
      "status": {
        "code": 1000,
        "description": "OK"
      }
    }
  }
}
```

4. The agent and customer begin their conversation and ASAPP's webhook publisher sends separate HTTPS POST `transcript` messages for each participant to a target endpoint configured to receive the messages.
HTTPS **POST** for Customer Utterance

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "CA5b040e075515c424391012acc5a870cf",
  "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
  "sender": {
    "externalId": "TT9833237",
    "role": "customer"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 400,
    "end": 3968,
    "utterance": [
      {"text": "I need help upgrading my streaming package and my PIN number is ####"}
    ]
  }
}
```

HTTPS **POST** for Agent Utterance

```json theme={null}
{
  "type": "transcript",
  "externalConversationId": "CA5b040e075515c424391012acc5a870cf",
  "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
  "sender": {
    "externalId": "RE223444211993",
    "role": "agent"
  },
  "autotranscribeResponse": {
    "message": "transcript",
    "start": 4744,
    "end": 8031,
    "utterance": [
      {"text": "Thank you sir, let me pull up your account."}
    ]
  }
}
```

5. Later in the conversation, the agent puts the customer on hold. This triggers a request to the `/stop-streaming` endpoint to pause transcription and prevents hold music and promotional messages from being transcribed.
**POST** `/mg-autotranscribe/v1/stop-streaming`

**Request**

```json theme={null}
{
  "namespace": "twilio",
  "guid": "CA5b040e075515c424391012acc5a870cf"
}
```

**Response**

*STATUS 200: Router processed the request, details are in the response body*

```json theme={null}
{
  "isOk": true,
  "autotranscribeResponse": {
    "customer": {
      "streamId": "5ce2b755-3f38-11ed-b755-7aed4b5c38d5",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    },
    "agent": {
      "streamId": "cf31116-3f38-11ed-9116-7a0a36c763f1",
      "status": {
        "code": 1000,
        "description": "OK"
      },
      "summary": {
        "totalAudioBytes": 1334720,
        "audioDurationMs": 83420,
        "streamingSeconds": 84,
        "transcripts": 2
      }
    }
  }
}
```

### Data Security

ASAPP's security protocols protect data at every point of transmission: from initial user authentication, through secure communications and our auditing and logging system (which includes hashing of data in transit), to securing the environment where data is at rest in the data logging system. ASAPP teams also operate under tight restrictions on access to data. These security protocols protect both ASAPP and its customers.

# Check for spelling mistakes
Source: https://docs.asapp.com/apis/autocompose/check-for-spelling-mistakes
api-specs/autocompose.yaml post /autocompose/v1/spellcheck/correction

Get a spelling correction for a message as it is being typed, if there is a misspelling. Only the current word will be corrected, once it's fully typed (so it is recommended to call this endpoint after space characters).
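Since only the current word is corrected once it is fully typed, a client typically fires the spellcheck call on word boundaries. A minimal sketch of that client-side trigger, assuming the composer reports text before and after each keystroke (the function name is illustrative, not part of the API):

```python
def should_spellcheck(previous_text: str, current_text: str) -> bool:
    """Return True when the agent just completed a word, i.e. the newly
    typed character is whitespace -- the recommended moment to call
    /autocompose/v1/spellcheck/correction."""
    if len(current_text) <= len(previous_text):
        return False  # deletion or no change; no word was just completed
    return current_text[-1].isspace()

print(should_spellcheck("Hell", "Hello"))    # False: still mid-word
print(should_spellcheck("Hello", "Hello "))  # True: word just completed
```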
# Create a custom response
Source: https://docs.asapp.com/apis/autocompose/create-a-custom-response
api-specs/autocompose.yaml post /autocompose/v1/responses/customs/response

Add a single custom response for an agent

# Create a message analytic event
Source: https://docs.asapp.com/apis/autocompose/create-a-message-analytic-event
api-specs/autocompose.yaml post /autocompose/v1/conversations/{conversationId}/message-analytic-events

To improve the performance of ASAPP suggestions, provide information about the actions performed by the agent while composing a message by creating `message-analytic-events`. These analytic events indicate which AutoCompose features were used (or not used). This information, along with the conversation itself, is used to optimize our models, yielding better suggestions for agents.

We track the following types of message analytic events:

- suggestion-1-inserted: The agent selected the first of the `suggestions` from a `Suggestion` API response.
- suggestion-2-inserted: The agent selected the second of the `suggestions` from a `Suggestion` API response.
- suggestion-3-inserted: The agent selected the third of the `suggestions` from a `Suggestion` API response.
- phrase-completion-accepted: The agent selected the `phraseCompletion` from a `Suggestion` API response.
- spellcheck-applied: A correction provided in a `SpellcheckCorrection` API response was applied automatically.
- spellcheck-undone: A correction provided in a `SpellcheckCorrection` API response was undone by clicking the undo button.
- custom-response-drawer-inserted: The agent inserted one of their custom responses from the custom response drawer.
- custom-panel-inserted: The agent inserted a response from their custom response list in the custom response panel.
- global-panel-inserted: The agent inserted a response from the global response list in the global response panel.

Some of the event types have a corresponding event object to provide details.
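A client can map the position of an accepted suggestion to the documented event type before posting the analytic event. Only the event names come from the list above; everything else in this sketch is illustrative.

```python
# 1-based suggestion position -> documented message analytic event type.
SUGGESTION_EVENTS = {
    1: "suggestion-1-inserted",
    2: "suggestion-2-inserted",
    3: "suggestion-3-inserted",
}

def suggestion_event(position: int) -> str:
    """Return the event type for an accepted suggestion position."""
    try:
        return SUGGESTION_EVENTS[position]
    except KeyError:
        raise ValueError(
            f"only the top 3 suggestions have event types, got {position}"
        )

print(suggestion_event(2))  # suggestion-2-inserted
```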
# Create a MessageSent analytics event
Source: https://docs.asapp.com/apis/autocompose/create-a-messagesent-analytics-event
api-specs/autocompose.yaml post /autocompose/v1/analytics/message-sent

Create a MessageSent analytics event describing the agent's usage of AutoCompose augmentation features while composing a message

# Create a response folder
Source: https://docs.asapp.com/apis/autocompose/create-a-response-folder
api-specs/autocompose.yaml post /autocompose/v1/responses/customs/folder

Add a single folder for an agent

# Delete a custom response
Source: https://docs.asapp.com/apis/autocompose/delete-a-custom-response
api-specs/autocompose.yaml delete /autocompose/v1/responses/customs/response/{responseId}

Delete a specific custom response for an agent

# Delete a response folder
Source: https://docs.asapp.com/apis/autocompose/delete-a-response-folder
api-specs/autocompose.yaml delete /autocompose/v1/responses/customs/folder/{folderId}

Delete a folder for an agent

# Evaluate profanity
Source: https://docs.asapp.com/apis/autocompose/evaluate-profanity
api-specs/autocompose.yaml post /autocompose/v1/profanity/evaluation

Get an evaluation of a text to verify if it contains profanity, obscenity or other unwanted words. This service should be called before sending a message to prevent the agent from sending profanities in the chat.

# Generate suggestions
Source: https://docs.asapp.com/apis/autocompose/generate-suggestions
api-specs/autocompose.yaml post /autocompose/v1/conversations/{conversationId}/suggestions

Get suggestions for the next agent message in the conversation. There are several times when this should be called:

- when an agent joins the conversation,
- after a message is sent by either the customer or the agent,
- and as the agent is typing in the composer (to enable completing the agent's in-progress message).

Optionally, add a message to the conversation.
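The trigger points for the suggestions endpoint above can be encoded as a small client-side check. The event names here are illustrative; only the three trigger situations come from the documentation.

```python
# Conversation events after which a client should call the suggestions
# endpoint, per the list above: agent joins, any message is sent, or the
# agent is typing in the composer.
SUGGESTION_TRIGGERS = {
    "agent_joined",
    "customer_message_sent",
    "agent_message_sent",
    "agent_typing",
}

def should_fetch_suggestions(event: str) -> bool:
    """Decide whether this client-side event warrants a suggestions call."""
    return event in SUGGESTION_TRIGGERS

print(should_fetch_suggestions("agent_joined"))        # True
print(should_fetch_suggestions("conversation_closed")) # False
```

In practice the "agent_typing" trigger is usually debounced so the endpoint is not called on every keystroke.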
# Get autopilot greetings
Source: https://docs.asapp.com/apis/autocompose/get-autopilot-greetings
api-specs/autocompose.yaml get /autocompose/v1/autopilot/greetings

Get autopilot greetings for an agent

# Get autopilot greetings status
Source: https://docs.asapp.com/apis/autocompose/get-autopilot-greetings-status
api-specs/autocompose.yaml get /autocompose/v1/autopilot/greetings/status

Get autopilot greetings status for an agent

# Get custom responses
Source: https://docs.asapp.com/apis/autocompose/get-custom-responses
api-specs/autocompose.yaml get /autocompose/v1/responses/customs

Get custom responses for an agent. Responses are sorted by title, and folders are sorted by name.

# Get settings for AutoCompose clients
Source: https://docs.asapp.com/apis/autocompose/get-settings-for-autocompose-clients
api-specs/autocompose.yaml get /autocompose/v1/settings

Get settings for AutoCompose clients, including whether any features should be disabled. It may be desirable to disable some features in high-latency scenarios.

# List the global responses
Source: https://docs.asapp.com/apis/autocompose/list-the-global-responses
api-specs/autocompose.yaml get /autocompose/v1/responses/globals

Get the global responses and folder organization for a company. Responses are sorted by text, and folders are sorted by name.
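Client-side, the settings endpoint above can gate which AutoCompose features are active. This sketch assumes a hypothetical `disabledFeatures` list in the response; the real schema is defined in the API reference.

```python
def feature_enabled(settings: dict, feature: str) -> bool:
    """Honor server-side AutoCompose settings, e.g. features disabled in
    high-latency scenarios. `disabledFeatures` is an assumed field name."""
    return feature not in settings.get("disabledFeatures", [])

# Hypothetical settings payload for illustration only.
settings = {"disabledFeatures": ["phrase-completion"]}

print(feature_enabled(settings, "spellcheck"))         # True
print(feature_enabled(settings, "phrase-completion"))  # False
```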
# Update a custom response
Source: https://docs.asapp.com/apis/autocompose/update-a-custom-response
api-specs/autocompose.yaml put /autocompose/v1/responses/customs/response/{responseId}

Update a specific custom response for an agent

# Update a response folder
Source: https://docs.asapp.com/apis/autocompose/update-a-response-folder
api-specs/autocompose.yaml put /autocompose/v1/responses/customs/folder/{folderId}

Update a folder for an agent

# Update autopilot greetings
Source: https://docs.asapp.com/apis/autocompose/update-autopilot-greetings
api-specs/autocompose.yaml put /autocompose/v1/autopilot/greetings

Update autopilot greetings for an agent

# Update autopilot greetings status
Source: https://docs.asapp.com/apis/autocompose/update-autopilot-greetings-status
api-specs/autocompose.yaml put /autocompose/v1/autopilot/greetings/status

Update autopilot greetings status for an agent

# Create free text summary
Source: https://docs.asapp.com/apis/autosummary/create-free-text-summary
api-specs/autosummary.yaml post /autosummary/v1/free-text-summaries

Generates a concise, human-readable summary of a conversation. Provide an agentExternalId if you want to get the summary for a single agent's involvement with a conversation. You can use the id from ASAPP's system (conversationId or IssueId) or your own id (externalConversationId). Multilingual support: You can get summaries in languages different from English by making use of the 'Accept-Language' header.

# Create structured data
Source: https://docs.asapp.com/apis/autosummary/create-structured-data
api-specs/autosummary.yaml post /autosummary/v1/structured-data

Creates and returns a set of structured data about a conversation that is already known to ASAPP. You can use the id from ASAPP's system (conversationId or IssueId) or your own id (externalConversationId). Provide an agentExternalId if you want to get the structured data for a single agent's involvement with a conversation.
# Get conversation intent
Source: https://docs.asapp.com/apis/autosummary/get-conversation-intent
api-specs/autosummary.yaml get /autosummary/v1/intent/{conversationId}

Retrieves the primary intent of a conversation, represented by both an intent code and a human-readable intent name. If no intent is detected, "NO_INTENT" is returned. This endpoint requires:

1. Intent support to be explicitly enabled for your account.
2. A valid conversationId, which is an ASAPP-generated identifier created when using the ASAPP /conversations endpoint.

Use this endpoint to gain insights into the main purpose or topic of a conversation.

# Get free text summary
Source: https://docs.asapp.com/apis/autosummary/get-free-text-summary
api-specs/autosummary.yaml get /autosummary/v1/free-text-summaries/{conversationId}

**Deprecated** Replaced by [POST /autosummary/v1/free-text-summaries](/apis/autosummary/create-free-text-summary)

Generates a concise, human-readable summary of a conversation. Provide an agentExternalId if you want to get the summary for a single agent's involvement with a conversation. Multilingual support: You can get summaries in languages different from English by making use of the 'Accept-Language' header.

# Provide feedback
Source: https://docs.asapp.com/apis/autosummary/provide-feedback
api-specs/autosummary.yaml post /autosummary/v1/feedback/free-text-summaries/{conversationId}

Create a feedback event with the full and updated summary. Each event is associated with a specific summary id. The event must contain the final summary, in the form of text.

# Get Twilio media stream url
Source: https://docs.asapp.com/apis/autotranscribe-media-gateway/get-twilio-media-stream-url
api-specs/mg-autotranscribe.yaml get /mg-autotranscribe/v1/twilio-media-stream-url

Returns the URL where the [Twilio media stream](/autotranscribe/deploying-autotranscribe-for-twilio) should be sent.
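The intent endpoint earlier on this page returns the sentinel string "NO_INTENT" when nothing is detected. A client can normalize that sentinel before routing on the result; the `intentCode` field name in this sketch is an assumption for illustration.

```python
from typing import Optional

NO_INTENT = "NO_INTENT"

def primary_intent(response: dict) -> Optional[str]:
    """Return the detected intent code, or None when the API reports
    NO_INTENT. `intentCode` is an assumed response field name."""
    code = response.get("intentCode", NO_INTENT)
    return None if code == NO_INTENT else code

print(primary_intent({"intentCode": "BILLING_DISPUTE"}))  # BILLING_DISPUTE
print(primary_intent({"intentCode": "NO_INTENT"}))        # None
```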
# Start streaming
Source: https://docs.asapp.com/apis/autotranscribe-media-gateway/start-streaming
api-specs/mg-autotranscribe.yaml post /mg-autotranscribe/v1/start-streaming

This starts the transcription of the audio stream. Use in conjunction with the [stop-streaming](/apis/media-gateway/stop-streaming-audio) endpoint to control when transcription occurs for a given call. This allows you to prevent transcription of sensitive parts of a conversation, such as entering PCI data.

# Stop streaming
Source: https://docs.asapp.com/apis/autotranscribe-media-gateway/stop-streaming
api-specs/mg-autotranscribe.yaml post /mg-autotranscribe/v1/stop-streaming

This stops the transcription of the audio stream. Use in conjunction with the [start-streaming](/apis/media-gateway/start-streaming-audio) endpoint to control when transcription occurs for a given call. This allows you to prevent transcription of sensitive parts of a conversation, such as entering PCI data.

# Get streaming URL
Source: https://docs.asapp.com/apis/autotranscribe/get-streaming-url
api-specs/autotranscribe.yaml post /autotranscribe/v1/streaming-url

Get a [websocket streaming URL](/autotranscribe/deploying-autotranscribe-via-websocket) to transcribe audio in real time. This websocket is used to send audio to ASAPP's transcription service and receive transcription results.

# Create a custom vocabulary
Source: https://docs.asapp.com/apis/configuration/custom-vocabularies/create-custom-vocabularies
api-specs/partner-configuration.yaml post /configuration/v1/custom-vocabularies

Creates a new custom vocabulary configuration to improve transcription accuracy.
Custom vocabularies are used to enhance speech-to-text transcription by providing:

- Specific phrases that are commonly used in your domain
- Phonetic representations ("sounds like") to help the system recognize these phrases

For example, you might define:

- Phrase: "IEEE"
- Sounds Like: ["I triple E"]

This helps the system correctly transcribe technical terms, brand names, or industry-specific terminology. The API returns immediately, but the transcription service can take up to 1 minute to incorporate the custom vocabulary change.

# Delete a custom vocabulary
Source: https://docs.asapp.com/apis/configuration/custom-vocabularies/delete-custom-vocabularies
api-specs/partner-configuration.yaml delete /configuration/v1/custom-vocabularies/{customVocabularyId}

Deletes a custom vocabulary configuration

# Retrieve a custom vocabulary
Source: https://docs.asapp.com/apis/configuration/custom-vocabularies/get-custom-vocabularies
api-specs/partner-configuration.yaml get /configuration/v1/custom-vocabularies/{customVocabularyId}

Get a custom vocabulary configuration

# List custom vocabularies
Source: https://docs.asapp.com/apis/configuration/custom-vocabularies/list-custom-vocabularies
api-specs/partner-configuration.yaml get /configuration/v1/custom-vocabularies

Retrieves all custom vocabulary configurations.

# Retrieve a redaction entity
Source: https://docs.asapp.com/apis/configuration/redaction-entities/get-redaction-entity
api-specs/partner-configuration.yaml get /configuration/v1/redaction-entities/{entityId}

Get a specific redaction entity with an entity id

# List redaction entities
Source: https://docs.asapp.com/apis/configuration/redaction-entities/list-redaction-entities
api-specs/partner-configuration.yaml get /configuration/v1/redaction-entities

Lists all available redaction entities and their current activation status across different policies.
Redaction entities represent different types of sensitive information that can be automatically redacted from conversations. Each entity can be independently enabled or disabled for different redaction policies:

- Customer Immediate: Redaction in real-time for customer-facing content
- Customer Delayed: Redaction for stored customer-facing content
- Agent Immediate: Real-time redaction for agent-facing content
- Auto Transcribe: Redaction in transcription output
- Voice: Redaction in voice content

The API returns immediately, but the redaction service can take up to 1 minute to incorporate the redaction change.

# Update a redaction entity
Source: https://docs.asapp.com/apis/configuration/redaction-entities/update-redaction-entity
api-specs/partner-configuration.yaml patch /configuration/v1/redaction-entities/{entityId}

Update the policies of a specific redaction entity. Only the policies field can be modified.

# Create a segment
Source: https://docs.asapp.com/apis/configuration/segments/create-segment
api-specs/partner-configuration.yaml post /configuration/v1/segments

Creates a new `segment` to organize structured data extraction based on conversation metadata. A segment consists of:

1. Query logic that matches conversations based on metadata
2. A set of structured data fields to extract from matching conversations

For example, you can create segments to:

- Extract problem details from support conversations
- Extract product and promotion info from sales conversations

# Delete a segment
Source: https://docs.asapp.com/apis/configuration/segments/delete-segment
api-specs/partner-configuration.yaml delete /configuration/v1/segments/{segmentId}

Delete a specific segment specifying the id

# Retrieve a segment
Source: https://docs.asapp.com/apis/configuration/segments/get-segment
api-specs/partner-configuration.yaml get /configuration/v1/segments/{segmentId}

Get a specific segment specifying the id

# List segments
Source: https://docs.asapp.com/apis/configuration/segments/list-segments
api-specs/partner-configuration.yaml get /configuration/v1/segments

Retrieves a list of all segments.

# Partial update a segment
Source: https://docs.asapp.com/apis/configuration/segments/update-segment
api-specs/partner-configuration.yaml patch /configuration/v1/segments/{segmentId}

Update a specific segment specifying the id

# Create a structured data field
Source: https://docs.asapp.com/apis/configuration/structured-data-fields/create-structured-data-field
api-specs/partner-configuration.yaml post /configuration/v1/structured-data-fields

Creates a new structured data field configuration that defines what information should be extracted from conversations. This endpoint supports creating two types of structured data fields:

1. QUESTION type: Defines specific questions to be answered about the conversation. Example: "Did the agent offer the correct promotion?"
2. ENTITY type: Defines entities to be identified and extracted. Example: Product names mentioned in the conversation

These fields are used by the Structured Data API (/apis/autosummary/create-structured-data) to automatically extract the configured information from conversations.
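Create requests for the two documented field types might be shaped along these lines. Only the two type names (QUESTION, ENTITY) and the examples come from the description above; the property names are assumptions, so consult the API reference for the actual schema.

```python
# Sketch payloads for the two documented structured data field types.
# Property names like "name" and "prompt" are illustrative only.
question_field = {
    "type": "QUESTION",
    "name": "correct_promotion_offered",
    "prompt": "Did the agent offer the correct promotion?",
}

entity_field = {
    "type": "ENTITY",
    "name": "product_names",
    "prompt": "Product names mentioned in the conversation",
}

for field in (question_field, entity_field):
    print(field["type"], "->", field["prompt"])
```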
# Delete a structured data field
Source: https://docs.asapp.com/apis/configuration/structured-data-fields/delete-structured-data-field
api-specs/partner-configuration.yaml delete /configuration/v1/structured-data-fields/{structuredDataFieldId}

Delete a specific structured data field specifying the id

# Retrieve a structured data field
Source: https://docs.asapp.com/apis/configuration/structured-data-fields/get-structured-data-field
api-specs/partner-configuration.yaml get /configuration/v1/structured-data-fields/{structuredDataFieldId}

Get a specific structured data field specifying the id

# List structured data fields
Source: https://docs.asapp.com/apis/configuration/structured-data-fields/list-structured-data-fields
api-specs/partner-configuration.yaml get /configuration/v1/structured-data-fields

Retrieves a list of all configured structured data fields.

# Update a structured data field
Source: https://docs.asapp.com/apis/configuration/structured-data-fields/update-structured-data-field
api-specs/partner-configuration.yaml put /configuration/v1/structured-data-fields/{structuredDataFieldId}

Update a specific structured data field specifying the id

# Authenticate a user in a conversation
Source: https://docs.asapp.com/apis/conversations/authenticate-a-user-in-a-conversation
api-specs/conversations.yaml post /conversation/v1/conversations/{conversationId}/authenticate

Stores customer-specific authentication credentials for use in integrated flows.

- Can be called at any point during a conversation
- Commonly used at the start of a conversation or after mid-conversation authentication
- May trigger additional actions, such as GenerativeAgent API signals to customer webhooks

This API only accepts the customer-specific auth credentials; the customer is responsible for handling the specific authentication mechanism.
# Create or update a conversation
Source: https://docs.asapp.com/apis/conversations/create-or-update-a-conversation
api-specs/conversations.yaml post /conversation/v1/conversations

Creates a new conversation or updates an existing one based on the provided `externalId`. Use this endpoint when:

- Starting a new conversation
- Updating conversation details (e.g., reassigning to a different agent)

If the `externalId` is not found, a new conversation will be created. Otherwise, the existing conversation will be updated.

# List conversations
Source: https://docs.asapp.com/apis/conversations/list-conversations
api-specs/conversations.yaml get /conversation/v1/conversations

Retrieves a list of conversation resources that match the specified criteria. You must provide at least one search criterion in the query parameters.

# Retrieve a conversation
Source: https://docs.asapp.com/apis/conversations/retrieve-a-conversation
api-specs/conversations.yaml get /conversation/v1/conversations/{conversationId}

Retrieves the details of a specific conversation using its `conversationId`. This endpoint returns detailed information about the conversation, including participants and metadata.
# List feed dates
Source: https://docs.asapp.com/apis/file-exporter/list-feed-dates
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeeddates

Lists dates for a company feed/version/format

# List feed files
Source: https://docs.asapp.com/apis/file-exporter/list-feed-files
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeedfiles

Lists files for a company feed/version/format/date/interval

# List feed formats
Source: https://docs.asapp.com/apis/file-exporter/list-feed-formats
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeedformats

Lists feed formats for a company feed/version

# List feed intervals
Source: https://docs.asapp.com/apis/file-exporter/list-feed-intervals
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeedintervals

Lists intervals for a company feed/version/format/date

# List feed versions
Source: https://docs.asapp.com/apis/file-exporter/list-feed-versions
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeedversions

Lists feed versions for a company

# List feeds
Source: https://docs.asapp.com/apis/file-exporter/list-feeds
api-specs/fileexporter.yaml post /fileexporter/v1/static/listfeeds

Lists feed names for a company

# Retrieve a feed file
Source: https://docs.asapp.com/apis/file-exporter/retrieve-a-feed-file
api-specs/fileexporter.yaml post /fileexporter/v1/static/getfeedfile

Retrieves a feed file URL for a company feed/version/format/date/interval/file

# Get Twilio media stream url
Source: https://docs.asapp.com/apis/genagent-media-gateway/get-twilio-media-stream-url
api-specs/mg-genagent.yaml get /mg-genagent/v1/twilio-media-stream-url

Returns the URL where the Twilio media stream should be sent.

# Analyze conversation
Source: https://docs.asapp.com/apis/generativeagent/analyze-conversation
api-specs/generativeagent.yaml post /generativeagent/v1/analyze

Call this API to trigger GenerativeAgent to analyze and respond to a conversation.
This API should be called after a customer sends a message while not speaking with a live agent. The bot replies will not be returned on this request; they will be delivered asynchronously via the webhook callback. This API also accepts an optional **message** field to create a message for a given conversation before triggering the bot replies. The message object is the exact same message used in the conversations API /message endpoint.

# Create Call Transfer
Source: https://docs.asapp.com/apis/generativeagent/create-call-transfer
api-specs/generativeagent.yaml post /generativeagent/v1/call-transfers

Creates a new Call Transfer resource that represents an attempt to transfer a call from your IVR or CCaaS to ASAPP. The `type` indicates the type of call transfer:

- `PHONE_NUMBER`: A temporary phone number is assigned for the transfer.
- `SIP`: Session Initiation Protocol (SIP) transfer.

You can optionally provide `inputContext` to provide context for the conversation. This is passed to GenerativeAgent.

# Create stream URL
Source: https://docs.asapp.com/apis/generativeagent/create-stream-url
api-specs/generativeagent.yaml post /generativeagent/v1/streams

This API creates a generative agent event streaming URL to start a streaming connection (SSE). This API should be called when the client boots up to request a streaming_url, before it calls endpoints whose responses are delivered asynchronously (and most likely before calling any other endpoint). Provide the streamId to reconnect to a previous stream.

# Get Call Transfer
Source: https://docs.asapp.com/apis/generativeagent/get-call-transfer
api-specs/generativeagent.yaml get /generativeagent/v1/call-transfers/{callTransferId}

Get a Call Transfer resource by ID.

# Get GenerativeAgent state
Source: https://docs.asapp.com/apis/generativeagent/get-generativeagent-state
api-specs/generativeagent.yaml get /generativeagent/v1/state

This API provides the current state of the generative agent for a given conversation.
# Check ASAPP's API's health
Source: https://docs.asapp.com/apis/health-check/check-asapps-apis-health
api-specs/healthcheck.yaml get /v1/health

The API Health check endpoint enables you to check the operational status of our API platform.

# Create a submission
Source: https://docs.asapp.com/apis/knowledge-base/create-a-submission
api-specs/knowledge-base.yaml post /knowledge-base/v1/submissions

Initiate a request to add a new article or update an existing one. The provided title and content will be processed to create the final version of the submission.

# Retrieve a submission
Source: https://docs.asapp.com/apis/knowledge-base/retrieve-a-submission
api-specs/knowledge-base.yaml get /knowledge-base/v1/submissions/{id}

Obtain the details of a specific submission using its unique identifier.

# Retrieve an article
Source: https://docs.asapp.com/apis/knowledge-base/retrieve-an-article
api-specs/knowledge-base.yaml get /knowledge-base/v1/articles/{id}

Fetch a specific article by its unique identifier. If the article has not been created because the associated submission was not approved, a 404 status will be returned.

# Create a message
Source: https://docs.asapp.com/apis/messages/create-a-message
post /conversation/v1/conversations/{conversationId}/messages

Creates a message object, adding it to an existing conversation. Use this endpoint to record each new message in the conversation.

# Create multiple messages
Source: https://docs.asapp.com/apis/messages/create-multiple-messages
post /conversation/v1/conversations/{conversationId}/messages/batch

This creates multiple message objects at once, adding them to an existing conversation. Use this endpoint when you need to add several messages at once, such as when importing historical conversation data.

# List messages
Source: https://docs.asapp.com/apis/messages/list-messages
get /conversation/v1/conversations/{conversationId}/messages

Lists all messages within a conversation.
These messages are returned in chronological order.

# List messages with an externalId
Source: https://docs.asapp.com/apis/messages/list-messages-with-an-externalid
get /conversation/v1/conversation/messages

Get all messages from a conversation.

# Retrieve a message
Source: https://docs.asapp.com/apis/messages/retrieve-a-message
get /conversation/v1/conversations/{conversationId}/messages/{messageId}

Retrieve the details of a message from a conversation.

# Add a conversation metadata
Source: https://docs.asapp.com/apis/metadata/add-a-conversation-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/single-convo-metadata

Add metadata attributes of one issue/conversation

# Add a customer metadata
Source: https://docs.asapp.com/apis/metadata/add-a-customer-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/single-customer-metadata

Add metadata attributes of one customer

# Add an agent metadata
Source: https://docs.asapp.com/apis/metadata/add-an-agent-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/single-agent-metadata

Add metadata attributes of one agent

# Add multiple agent metadata
Source: https://docs.asapp.com/apis/metadata/add-multiple-agent-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/many-agent-metadata

Add multiple agent metadata items; submit items in a batch in one request

# Add multiple conversation metadata
Source: https://docs.asapp.com/apis/metadata/add-multiple-conversation-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/many-convo-metadata

Add multiple issue/conversation metadata items; submit items in a batch in one request

# Add multiple customer metadata
Source: https://docs.asapp.com/apis/metadata/add-multiple-customer-metadata
api-specs/metadata-ingestion.yaml post /metadata-ingestion/v1/many-customer-metadata

Add multiple customer metadata items; submit items in a batch in one request

# Overview
Source:
https://docs.asapp.com/apis/overview

Overview of the ASAPP API

The ASAPP API is resource-oriented, relying on REST principles. Our APIs accept and respond with JSON.

## Authentication

The ASAPP API uses a combination of an API Id and API Secret to authenticate requests.

```bash theme={null}
curl -X GET 'https://api.sandbox.asapp.com/conversation/v1/conversations' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: '
```

Learn how to find your API Id and API Secret in the [Developer quickstart](/getting-started/developers).

## Environments

The ASAPP API is available in two environments:

* **Sandbox**: Use the Sandbox environment for development and testing.
* **Production**: Use the Production environment for production use.

Use the API domain to make requests to the relevant environment.

| Environment | API Domain |
| :---------- | :------------------------------------------------------------- |
| Sandbox | [https://api.sandbox.asapp.com](https://api.sandbox.asapp.com) |
| Production | [https://api.asapp.com](https://api.asapp.com) |

## Errors

The ASAPP API uses standard HTTP status codes to indicate the success or failure of a request.

| Status Code | Description |
| :---------- | :-------------------- |
| 200 | OK |
| 201 | Created |
| 204 | No Content |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 429 | Too Many Requests |
| 500 | Internal Server Error |

We also return a `code` and `message` in the response body for each error. Learn more about error codes in the [Error handling](/getting-started/developers/error-handling) section.

# Building a GenerativeAgent

Source: https://docs.asapp.com/generativeagent/build-overview

Learn how GenerativeAgent works and the key components you'll configure.

Building a GenerativeAgent means configuring an agent that can handle customer conversations by understanding their needs, accessing relevant information, and taking appropriate actions.
## How GenerativeAgent Works

Unlike traditional bots with predefined flows, GenerativeAgent uses natural language processing to understand and respond to a wide range of customer queries and issues.

GenerativeAgent detailed flow

When a customer sends a message or speaks, the input reaches GenerativeAgent through a connector or API integration.

At every conversation turn, our Knowledge Base service searches GenerativeAgent's local copy of your knowledge base, selects the most relevant articles, and adds them to the conversation context. The local copy syncs with your knowledge base at regular intervals, according to your configuration.

GenerativeAgent identifies the appropriate task based on the customer's needs and executes the task instructions by choosing one of several actions:

* **Use a Function**: Call your APIs to retrieve data or perform actions (may make multiple function calls). Functions can also perform local logic, data manipulation, and variable setting.
* **Request Human Help**: Escalate to a human agent when needed
* **Change Task**: Switch to a different task if the conversation direction changes

GenerativeAgent generates a human-like response to communicate with the customer. This process continues in a loop until either:

* **GenerativeAgent resolves the conversation** and hands control back to your system
* GenerativeAgent cannot help further and **escalates to a human agent**

## Key Components You'll Configure

### Tasks

Define the specific issues or actions you want GenerativeAgent to handle. Each task includes:

* Clear instructions in human language
* Associated functions for performing actions
* Knowledge base filtering for relevant information

### Functions

Connect GenerativeAgent to your APIs so it can:

* Retrieve customer data
* Perform actions (refunds, account updates, etc.)
* Store conversation variables
* Transfer control to other systems

As part of function configuration, you will create [Backend System Integrations](/generativeagent/integrate) to connect to your APIs.

### Knowledge Base

Filter and connect your existing knowledge base so GenerativeAgent can:

* Access relevant articles for each task
* Use metadata to find the right information
* Provide accurate, up-to-date responses

## Understanding Environments

GenerativeAgent operates across three environments to ensure safe testing and deployment. As you refine your tasks, functions, and knowledge base, you'll deploy them through this progression:

* **Draft**: Test and configure your agent components before deployment
* **Sandbox**: Staging environment to validate agent behavior and responses
* **Production**: Live environment where your agent handles real customer conversations

You'll typically start by configuring and testing in Draft, then deploy to Sandbox for validation, and finally promote to Production once everything is working correctly.

Learn more about [deploying to GenerativeAgent](/generativeagent/configuring/deploying-to-generativeagent) for detailed deployment procedures.

## Next Steps

Learn how to add and configure use cases for your GenerativeAgent. Learn how to build your first GenerativeAgent that can use your Knowledge Base to start answering your users' questions.

# Adding a use case to GenerativeAgent

Source: https://docs.asapp.com/generativeagent/build/adding-a-use-case

Learn how to add new use cases to GenerativeAgent by creating tasks and functions

Adding a use case to GenerativeAgent involves creating a **Task** that represents the business scenario you want your agent to handle, along with the **Functions** that enable it to perform the necessary actions.

Tasks represent use cases - they define what GenerativeAgent can accomplish for your customers. Functions are the tools that enable those tasks by connecting to your backend systems.
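The task-and-function relationship described above can be sketched as plain data. This is purely illustrative: the names, fields, and endpoints below are hypothetical, and real tasks and functions are configured in the GenerativeAgent UI rather than in code.

```python
# Illustrative sketch only: how a task groups the functions that enable it.
# All names, fields, and endpoints are hypothetical -- actual configuration
# happens in the GenerativeAgent UI, not in code.

track_order_task = {
    "name": "Track an order",
    "selector_description": "Customer wants to know where their order is",
    "functions": ["look_up_order", "get_tracking_info"],
}

functions = {
    "look_up_order": {"type": "API call", "endpoint": "GET /orders/{orderId}"},
    "get_tracking_info": {"type": "API call", "endpoint": "GET /orders/{orderId}/tracking"},
}

# A task can reference one or more functions; each function points at one
# backend operation (via an API Connection).
missing = [f for f in track_order_task["functions"] if f not in functions]
assert missing == []
```

The point of the sketch is the shape: tasks describe *what* to accomplish, and each function a task references must exist and map to one backend operation.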
## Step 1: Define Your Use Case Start by clearly defining the business scenario you want GenerativeAgent to handle: * **Customer goal**: What does the customer want to accomplish? * **Business context**: What type of issue or request is this? * **Success criteria**: How do you know when the task is complete? * **Required actions**: What backend operations does the task need? **Customer goal**: "I want to check where my order is" **Business context**: Customer service inquiry about order tracking **Success criteria**: Customer receives current order status and tracking information **Required actions**: Look up order details, retrieve tracking information, provide status update **Customer goal**: "I need to book a doctor's appointment" **Business context**: Healthcare appointment booking **Success criteria**: Customer has a confirmed appointment with details **Required actions**: Check availability, book appointment, send confirmation ## Step 2: Create the Task Navigate to the Tasks page and create a new task that represents your use case: Go to the Tasks page in your GenerativeAgent configuration. Click "Create task" to start creating your task. Provide the following information: * **Task name**: Clear, descriptive name for your use case * **Task selector description**: How GenerativeAgent identifies when to use this task * **Task message** (optional): Initial response when this task is selected * **Channels**: Which channel(s) this task will support GenerativeAgent Task creation interface **This is the most critical part of creating a successful use case.** Your task instructions determine how GenerativeAgent behaves and whether it handles your use case correctly. 
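For illustration only, procedures for the hypothetical order-tracking use case above might read like this (the function name `look_up_order` and all policies here are made up; your wording will differ):

```text
Task: Track an order

1. Ask the customer for their order number if they have not provided one.
2. Call the look_up_order function with the order number.
3. If the order is found, share the current status and, when available,
   the tracking number and estimated delivery date.
4. If the order is not found after two attempts, apologize and escalate
   to a human agent.
5. Never share another customer's order details.
```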
In the task configuration, provide:

* **Procedures**: Detailed guidance on how to handle this use case
* **Voice settings**: Policies and communication guidelines that are specific to the Voice channel
* **Chat settings**: Policies and communication guidelines that are specific to the Chat channel

When writing your procedures, reference the functions you'll need for this task. You'll create these functions in the next step and connect them back to this task.

Effective task instructions should:

* **Be specific and actionable**: Tell GenerativeAgent exactly what to do in each scenario
* **Include examples**: Provide common customer requests and expected responses
* **Plan for edge cases**: Define what to do when things go wrong
* **Set clear boundaries**: Specify when to escalate to human agents

**Need help writing effective instructions?** See our comprehensive guides:

* [Task Best Practices](/generativeagent/configuring/task-best-practices) - Complete guide to writing effective task instructions
* [Improving Tasks](/generativeagent/configuring/tasks-and-functions/improving) - Resources and tools for optimizing task performance

Specify knowledge base metadata to restrict GenerativeAgent to using only articles with matching metadata for this specific use case. This helps ensure GenerativeAgent uses the most relevant information, improving accuracy and reducing irrelevant responses.

If you followed the [Build your first GenerativeAgent](/generativeagent/getting-started) guide, you already have a Knowledge Base imported. Otherwise you can [connect your knowledge base](/generativeagent/configuring/connecting-your-knowledge-base) now.

## Step 3: Create Required Functions

Functions enable GenerativeAgent to perform actions similar to a live agent. Based on the procedures you defined in your task instructions, create the functions needed to enable those actions.
### Available Function Types Connect to your existing APIs to fetch data or perform actions. The most common function type for integrating with backend systems. Define ideal API interactions before connecting to real systems. Perfect for testing and development. Store conversation data as reference variables for future use. Useful for tracking customer information across the conversation. Signal that control should be transferred to an external system or end the conversation with relevant data. ### Creating Functions Go to the Functions page in your GenerativeAgent configuration. Click "Create Function" to start defining your function. Provide the following information: * **Function name**: Clear, descriptive name for the function * **Description**: How GenerativeAgent should use this function * **API Connection**: The backend system this function connects to Select the appropriate function type based on your use case needs: For functions that need to interact with your live backend systems. 1. Select an existing [API connection](/generativeagent/configuring/connect-apis) 2. Choose the API version if the system offers multiple versions 3. Save the function GenerativeAgent will call the real API during customer interactions. For testing or when you want to define the ideal API interaction first. 1. Click "Integrate later" 2. Define request parameters in JSON schema format 3. Save the function You can replace the Mock call with a real API connection at any time. For storing conversation data as reference variables. 1. Select "Set variable" function type 2. Define the input GenerativeAgent should use 3. Add the variables you want to set 4. Save the function Useful for tracking customer information across the conversation. For signaling control transfer or conversation ending. 1. Select "System transfer" function type 2. Define the input GenerativeAgent should use 3. Optionally add variables to pass along 4. 
Save the function Helpful for ending conversations or handing control back to external systems. ## Step 4: Connect Functions to Your Task Once you've created the necessary functions, connect them to your task: 1. Return to your task configuration 2. Select the functions this task should use 3. Ensure the functions align with the actions your use case requires 4. Test the task with the Previewer to verify everything works correctly ## Step 5: Test Your Use Case Use the [Previewer](/generativeagent/configuring/previewer) tool to test your new use case: Navigate to the [Previewer](/generativeagent/configuring/previewer) in your GenerativeAgent configuration. Create test conversations that represent how customers would interact with your use case. Use [Test Scenarios](/generativeagent/configuring/tasks-and-functions/test-scenarios) to define customer profiles and test different edge cases. Ensure the system calls all functions correctly and they return expected results. Make adjustments to task instructions or function configurations based on test results. ## Next Steps Use the Previewer to test your new use case with realistic scenarios. Learn how to connect your knowledge base for better context and responses. Explore integration options for connecting GenerativeAgent to your existing systems. Learn how to deploy your configured GenerativeAgent to production. # Connect Your APIs Source: https://docs.asapp.com/generativeagent/configuring/connect-apis Learn how to connect your APIs to GenerativeAgent with API Connections GenerativeAgent can call your APIs to get data or perform actions through **API Connections**. These connections allow GenerativeAgent to handle complex tasks like looking up account information or booking flights. Our API Connection tooling lets you transform your existing APIs into LLM-friendly interfaces that GenerativeAgent can use effectively. 
Unlike other providers that require you to create new simplified APIs specifically for LLM use, ASAPP's approach lets you leverage your current infrastructure without additional development work.

Typically, a developer or other technical user will create API Connections. If you need help, reach out to your ASAPP team.

## Understanding API Connections

API Connections are the bridge between your GenerativeAgent and your external APIs. They allow your agent to interact with your existing systems and services, just like a human agent would.

### How API Connections Fit In

GenerativeAgent uses a hierarchical structure to organize its capabilities:

1. **Tasks**: High-level instructions that tell GenerativeAgent what to do. A task can have one or more functions.
2. **Functions**: Tools that help GenerativeAgent complete tasks. A function can point to a single API Connection.
3. **API Connections**: Configurations that enable Functions to interact with your APIs.

### Core Components

Each API Connection consists of three main parts that work together:

1. **API Source**:
   * Handles the technical details of calling your API
   * Manages authentication and security
   * Configures environment-specific settings (sandbox/production)
2. **Request Interface**:
   * Defines what information GenerativeAgent can send
   * Transforms GenerativeAgent's requests into your API's format
   * Includes testing tools to verify the transformation
3. **Response Interface**:
   * Controls what data GenerativeAgent receives
   * Transforms the API response to a GenerativeAgent-friendly format
   * Includes testing tools to verify the transformation

## Create an API Connection

To create an API Connection, you need to:

1. Navigate to **API Integration Hub** in your dashboard
2. Select the **API Connections** tab
3.
Click the **Create Connection** button Choose your API source type: Every API Connection requires an [OpenAPI specification](https://spec.openapis.org/oas/latest.html) that defines your API endpoints and structure. * Choose an existing API spec from your previously uploaded API Specs, or * Upload a new OpenAPI specification file We support any API that uses JSON for requests and responses. Use an MCP (Model Context Protocol) server to connect tools designed for LLM interaction. See [Using MCP Servers](/generativeagent/configuring/connect-apis/mcp-server) for detailed instructions. Use pre-built platform adapters (e.g., Salesforce, Slack, ServiceNow) to connect without defining APIs from scratch. See [Using Adapters](/generativeagent/configuring/connect-apis/adapters) for detailed instructions. Provide the essential information for your connection: * **Name**: A descriptive name for the API Connection * **Description**: Brief explanation of the connection's purpose * **Endpoint**: Select the specific API endpoint from your specification We only support endpoints with JSON request and response bodies. After creation, you'll be taken to the API Source configuration page. Here you'll need to: 1. Set up [authentication methods](#authentication) 2. Configure [environment settings](#environment-settings) 3. Define [error handling](#error-handling) rules 4. Add any required [static headers](#headers) Configure how GenerativeAgent interacts with your API: 1. Define the [Request Interface](#request-interface): * Specify the schema GenerativeAgent will use * Create request transformations * Test with sample requests 2. Configure the [Response Interface](#response-interface): * Define the response schema * Set up response transformations * Validate with sample responses Before finalizing your API Connection: 1. Run test requests in the sandbox environment 2. Verify transformations work as expected 3. 
Check error handling behavior Once your API Connection is configured and tested, you can [reference it in a Function](/generativeagent/configuring#step-4-create-functions) to enable GenerativeAgent to use the API. ### Duplicate an API Connection You can duplicate an existing API Connection from the list view. Open the overflow menu (⋮) on the connection and select **Duplicate**. The copy inherits the full configuration; you can customize it after creation. ## Request Interface The Request Interface defines how GenerativeAgent interacts with your API. It consists of three key components that work together to enable effective API communication. * [Request Schema](#request-schema): The schema of the data that GenerativeAgent can send to your API. * [Request Transformation](#request-transformation): The transformation that will apply to the data before sending it to your API. * [Testing Interface](#request-testing): The interface that allows you to test the request transformation with different inputs. Request Interface ### Request Schema The Request Schema specifies the structure of data that GenerativeAgent can send to your API. This schema should be designed for optimal LLM interaction. This schema is NOT the schema of the API. This is the schema that the system shows to GenerativeAgent. 
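To illustrate why flat, descriptive schemas pay off, here is a minimal hand-rolled validation sketch (not ASAPP tooling; the schema and field names are hypothetical). A flat schema with clear names and `"additionalProperties": false` makes malformed or unexpected requests trivial to catch:

```python
# Hand-rolled sketch, not ASAPP tooling: check an LLM-facing request
# against a flat schema. Field names are hypothetical.

schema = {
    "type": "object",
    "additionalProperties": False,
    "properties": {
        "customer_name": {"type": "string"},
        "order_date": {"type": "string"},
    },
    "required": ["customer_name"],
}

def check(request: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the request fits."""
    problems = []
    props = schema["properties"]
    for field in schema.get("required", []):
        if field not in request:
            problems.append(f"missing required field: {field}")
    for field, value in request.items():
        if field not in props:
            if not schema.get("additionalProperties", True):
                problems.append(f"unexpected field: {field}")
        elif props[field]["type"] == "string" and not isinstance(value, str):
            problems.append(f"{field} should be a string")
    return problems

assert check({"customer_name": "Ada"}, schema) == []
assert check({"cust_nm_001": "Ada"}, schema) != []  # cryptic extras are rejected
```

The point is only that flat schemas with clear names are easy to check and easy for the model to fill correctly; deep nesting and cryptic abbreviations make both harder.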
**Best Practices for Schema Design** ```json theme={null} // Good - Clear and descriptive { "type": "object", "properties": { "customer_name": { "type": "string" }, "order_date": { "type": "string" } } } // Avoid - Cryptic or complex { "type": "object", "properties": { "cust_nm_001": { "type": "string" }, "ord_dt_timestamp": { "type": "string" } } } ``` ```json theme={null} // Good - Flat structure { "type": "object", "properties": { "shipping_street": { "type": "string" }, "shipping_city": { "type": "string" }, "shipping_country": { "type": "string" } } } // Avoid - Deep nesting { "type": "object", "properties": { "shipping": { "type": "object", "properties": { "address": { "type": "object", "properties": { "street": { "type": "string" }, "city": { "type": "string" }, "country": { "type": "string" } } } } } } } ``` ```json theme={null} { "properties": { "order_status": { "type": "string", "description": "Current status of the order (pending, shipped, delivered)", "enum": ["pending", "shipped", "delivered"] } } } ``` * Keep only essential fields that GenerativeAgent needs * Set `"additionalProperties": false` to prevent unexpected data When first created, the Request Schema is a 1-1 mapping to the underlying API spec. ### Request Transformation The Request Transformation converts GenerativeAgent's request into the format your API expects. This is done using [JSONata](https://jsonata.org/) expressions. When first created, the Request Transformation is a 1-1 mapping to the underlying API spec. 
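Conceptually, a request transformation is just a function from the flat, LLM-facing request to the HTTP call your API expects. A Python sketch of that idea (field names are hypothetical; in ASAPP the transformation is written in JSONata):

```python
# Conceptual sketch only: map the flat request GenerativeAgent produced
# into the HTTP call the backend API actually expects -- headers,
# path/query parameters, and body. Field names are hypothetical.

def transform_request(request: dict) -> dict:
    return {
        "headers": {"Content-Type": "application/json"},
        "pathParameters": {"userId": request["id"]},
        "queryParameters": {"include": "details,preferences"},
        "body": {
            "name": request["userName"],
            "email": request["userEmail"],
        },
    }

call = transform_request({"id": "123", "userName": "Ada", "userEmail": "ada@example.com"})
assert call["pathParameters"] == {"userId": "123"}
assert call["body"]["name"] == "Ada"
```

In the real Request Transformation this same mapping is expressed declaratively in JSONata.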
Request Interface Configuration

**Common Transformation Patterns**

```javascript theme={null}
{
  "headers": { "Content-Type": "application/json" },
  "pathParameters": { "userId": request.id },
  "queryParameters": { "include": "details,preferences" },
  "body": {
    "name": request.userName,
    "email": request.userEmail
  }
}
```

```javascript theme={null}
{
  "body": {
    // Convert an ISO date to epoch milliseconds
    "timestamp": $toMillis(request.date),
    // Uppercase a value
    "region": $uppercase(request.country),
    // Join array values
    "tags": $join(request.categories, ",")
  }
}
```

```javascript theme={null}
{
  "body": {
    // Include field only if present
    "optional_field": $exists(request.someField) ? request.someField : undefined,
    // Transform based on condition
    "status": request.isActive = true ? "ACTIVE" : "INACTIVE"
  }
}
```

### Request Testing

Thoroughly test your request transformations to ensure GenerativeAgent can send the correct data to your API. The API Connection cannot be saved until the request transformation has a successful test.

**Testing Best Practices**

```json theme={null}
// Test 1: Minimal valid request
{ "customerId": "123", "action": "view" }

// Test 2: Full request with all fields
{
  "customerId": "123",
  "action": "update",
  "data": { "name": "John Doe", "email": "john@example.com" }
}
```

* Test with missing required fields
* Verify invalid data handling
* Check boundary conditions

By default, the API Connection testing is local. You can test against actual API endpoints by setting "Run test in" to Sandbox.

* Test against actual API endpoints
* Verify complete request flow
* Check response handling

## Response Interface

The Response Interface determines how GenerativeAgent processes and presents API responses. A well-designed response interface makes it easier for GenerativeAgent to understand and use the API's data effectively.
There are three main components to the response interface: * [Response Schema](#response-schema): The JSON schema for the data returned to GenerativeAgent from the API. * [Response Transformation](#response-transformation): A JSONata transformation where the API response is transformed into the response given to GenerativeAgent. * [Test Response](#response-testing): The testing panel to test the response transformation with different API responses and see the output. Response Interface Configuration ### Response Schema The Response Schema defines the structure of data that GenerativeAgent will receive. Focus on creating clear, simple schemas that are optimized for LLM processing. The Response Schema is NOT the schema of the underlying API. This is the schema of what the system returns to GenerativeAgent. **Schema Design Principles** ```json theme={null} // Good - Only relevant fields { "orderStatus": "shipped", "estimatedDelivery": "2024-03-20", "trackingNumber": "1Z999AA1234567890" } // Avoid - Including unnecessary details { "orderStatus": "shipped", "estimatedDelivery": "2024-03-20", "trackingNumber": "1Z999AA1234567890", "internalId": "ord_123", "systemMetadata": { /* ... */ }, "auditLog": [ /* ... */ ] } ``` ```json theme={null} { "type": "object", "properties": { "temperature": { "type": "number", "description": "Current temperature in Celsius" }, "isOpen": { "type": "boolean", "description": "Whether the store is currently open" }, "lastUpdated": { "type": "string", "format": "date-time", "description": "When this information was last updated" } } } ``` * Use consistent date/time formats * Normalize enumerated values * Use standard units of measurement When first created, the Response Schema is a 1-1 mapping to the underlying API spec. ### Response Transformation Transform complex API responses into GenerativeAgent-friendly formats using JSONata. The goal is to simplify and standardize the data. The Transformation's input is the raw API response. 
The output is the data that GenerativeAgent will receive and must match the Response Schema. When first created, the Response Transformation is a 1-1 mapping to the underlying API spec.

**Transformation Examples**

```javascript theme={null}
{
  // Extract and rename fields
  "status": clientApiCall.data.orderStatus,
  "items": $count(clientApiCall.data.orderItems),
  "total": clientApiCall.data.pricing.total
}
```

```javascript theme={null}
{
  // Convert ISO timestamp to readable format
  "orderDate": $fromMillis($toMillis(clientApiCall.data.created_at), "[FNn], [MNn] [D1o], [Y]"),
  // Format time in 12-hour clock
  "deliveryTime": $fromMillis($toMillis(clientApiCall.data.delivery_eta), "[h]:[m01] [P]")
}
```

```javascript theme={null}
{
  // Calculate order summary
  "orderSummary": {
    "totalItems": $sum(clientApiCall.data.items.quantity),
    "uniqueItems": $count(clientApiCall.data.items),
    "hasGiftItems": $exists(clientApiCall.data.items[type="GIFT"])
  },
  // Format address components
  "deliveryAddress": $join([
    clientApiCall.data.address.street,
    clientApiCall.data.address.city,
    clientApiCall.data.address.state,
    clientApiCall.data.address.zip
  ], ", ")
}
```

### Response Testing

Thoroughly test your response transformations to ensure GenerativeAgent receives well-formatted, useful data. The API Connection cannot be saved until the response transformation has a successful test.

Use [API Mock Users](/generativeagent/configuring/connect-apis/mock-apis) to save responses from your server and reuse them in response testing.

**Testing Strategies**

Make sure to test with the different response types your server may return. This should include happy paths, varied response types, and error paths.

* Check date/time formatting
* Verify numeric calculations
* Test string manipulations
* Handle null/undefined values
* Process empty arrays/objects
* Manage missing optional fields

## Redaction

You can redact sensitive fields from API Connection Logs and conversation details views.
There are two types of redaction, each affecting different parts of the request/response flow: * **Request/Response Interface**: The data that GenerativeAgent interacts with * **Raw API Request/Response**: The actual data sent to and received from the underlying API Redacting fields does not affect the data that GenerativeAgent can access. GenerativeAgent requires access to the data in order to perform its tasks. Redaction only impacts the views in the UI. Redact fields in the request and response that GenerativeAgent uses. This redacts the transformed data that appears in conversations and API Connection Logs. To redact request/response interface fields: 1. Add `x-redact` to the field in the Request Schema or Response Schema 2. Save the API connection to apply the changes Redacting internal fields affects both API Connection Logs and conversations where GenerativeAgent uses the API. Redact fields in the raw API request and response that are sent to and received from the underlying API. This redacts the underlying API data in API Connection Logs only. To redact raw API fields: 1. Navigate to **API Integration Hub** > **API Specs** 2. Click on the relevant API Spec 3. Click on the **Parameters** tab 4. Per endpoint, click the fields you want to redact Redacting raw API fields only affects the [API Connection Logs](#api-connection-logs) as the raw API data is not visible to GenerativeAgent. ## API Versioning Every update to an API Connection requires a version change. This is to ensure that no change can be made to an API connection that impacts a live function. If you make a change to an API connection, the Function that references that API connection will need to be explicitly updated to point to the new version. ## API Connection Logs We log all requests and responses for API connections. This allows you to see the raw requests and responses, and the transformations that were applied. Use the logs to debug and understand how API connections are working. 
Logs are available in API Integration Hub > API Connection Logs. ## Default API Spec Settings You can set default information in an API spec. These default settings serve as a template for newly created API connections, copying those settings for all API connections created for that API spec. You can set the following defaults: * Headers * Sandbox Settings: * Base URL * Authentication Method * Production Settings: * Base URL * Authentication Method You can make further changes to API connections as necessary. You can also change the defaults and it will not change existing API connections, though the system will use the new defaults on any new connections made with that Spec. ## Examples Here are some examples of how to configure API connections for different scenarios. This example demonstrates configuring an API connection for updating a passenger's name on a flight booking. #### API Endpoint ```json theme={null} PUT /flight/[flightId]/passenger/[passengerId] { "name": { "first": [Passenger FirstName], "last": [Passenger LastName] } } ``` #### API Response ```json theme={null} { "id": "pax-12345", "flightId": "XZ2468", "updatedAt": "2024-10-04T14:30:00Z", "passenger": { "id": "PSGR-56789", "name": { "first": "John", "last": "Doe" }, "seatAssignment": "14A", "checkedIn": true, "frequentFlyerNumber": "FF123456" }, "status": "confirmed", "specialRequests": ["wheelchair", "vegetarian_meal"], "baggage": { "checkedBags": 1, "carryOn": 1 } } ``` 1. Request Schema: ```json theme={null} { "type": "object", "properties": { "externalCustomerId": {"type": "string"}, "passengerFirstName": {"type": "string"}, "passengerLastName": {"type": "string"}, "flightId": {"type": "string"} }, "required": ["externalCustomerId", "passengerFirstName", "passengerLastName", "flightId"] } ``` 2. 
Request Transformation: ```javascript theme={null} { "headers": {}, "pathParameters": { "flightId": request.flightId, "passengerId": request.externalCustomerId }, "queryParameters": {}, "body": { "name": { "first": request.passengerFirstName, "last": request.passengerLastName } } } ``` 3. Sample Test Request: ```json theme={null} { "externalCustomerId": "CUST123", "passengerFirstName": "Johnson", "passengerLastName": "Doe", "flightId": "XZ2468" } ``` 1. Response Schema: ```json theme={null} { "type": "object", "properties": { "success": { "type": "boolean", "description": "Whether the name update was successful" } }, "required": ["success"] } ``` 2. Response Transformation: ```javascript theme={null} { "success": $exists(clientApiCall.data.id) and $exists(clientApiCall.data.passenger.name.first) and $exists(clientApiCall.data.passenger.name.last) and clientApiCall.data.status = "confirmed" } ``` 3. Sample Test Response: ```json theme={null} { "clientApiCall": { "data": { "id": "pax-12345", "flightId": "XZ2468", "updatedAt": "2024-10-04T14:30:00Z", "passenger": { "id": "PSGR-56789", "name": { "first": "John", "last": "Doe" }, "seatAssignment": "14A", "checkedIn": true, "frequentFlyerNumber": "FF123456" }, "status": "confirmed", "specialRequests": ["wheelchair", "vegetarian_meal"], "baggage": { "checkedBags": 1, "carryOn": 1 } } } } ``` This example shows how to simplify a complex flight status API response by removing unnecessary fields and flattening nested structures. 
#### API Endpoint ```json theme={null} GET /flights/[flightNumber]/status ``` #### API Response ```json theme={null} { "flightDetails": { "flightNumber": "AA123", "route": { "origin": { "code": "SFO", "terminal": "2", "gate": "A12", "weather": { /* complex weather object */ } }, "destination": { "code": "JFK", "terminal": "4", "gate": "B34", "weather": { /* complex weather object */ } } }, "schedule": { "departure": { "scheduled": "2024-03-15T10:30:00Z", "estimated": "2024-03-15T10:45:00Z", "actual": null }, "arrival": { "scheduled": "2024-03-15T19:30:00Z", "estimated": "2024-03-15T19:45:00Z", "actual": null } }, "status": "DELAYED", "aircraft": { /* aircraft details */ } } } ``` 1. Request Schema: ```json theme={null} { "type": "object", "properties": { "flightNumber": { "type": "string", "description": "The flight number to look up" } }, "required": ["flightNumber"] } ``` 2. Request Transformation: ```javascript theme={null} { "headers": {}, "pathParameters": { "flightNumber": request.flightNumber }, "queryParameters": {}, "body": {} } ``` 3. Sample Test Request: ```json theme={null} { "flightNumber": "AA123" } ``` 1. Response Schema: ```json theme={null} { "type": "object", "properties": { "flight_number": { "type": "string", "description": "The flight number" }, "flight_status": { "type": "string", "description": "Current status of the flight" }, "origin_airport_code": { "type": "string", "description": "Three-letter airport code for origin" }, "destination_airport_code": { "type": "string", "description": "Three-letter airport code for destination" }, "scheduled_departure_time": { "type": "string", "description": "Scheduled departure time" }, "scheduled_arrival_time": { "type": "string", "description": "Scheduled arrival time" }, "is_flight_delayed": { "type": "boolean", "description": "Whether the flight is delayed" } } } ``` 2. 
Response Transformation: ```javascript theme={null} { "flight_number": clientApiCall.data.flightDetails.flightNumber, "flight_status": clientApiCall.data.flightDetails.status, "origin_airport_code": clientApiCall.data.flightDetails.route.origin.code, "destination_airport_code": clientApiCall.data.flightDetails.route.destination.code, "scheduled_departure_time": clientApiCall.data.flightDetails.schedule.departure.estimated, "scheduled_arrival_time": clientApiCall.data.flightDetails.schedule.arrival.estimated, "is_flight_delayed": clientApiCall.data.flightDetails.status = "DELAYED" } ``` This example demonstrates date formatting and complex object transformation for an appointment lookup API. #### API Endpoint ```json theme={null} GET /appointments/[appointmentId] ``` #### API Response ```json theme={null} { "id": "apt_123", "type": "DENTAL_CLEANING", "startTime": "2024-03-15T14:30:00Z", "endTime": "2024-03-15T15:30:00Z", "provider": "Dr. Sarah Smith", "location": "Downtown Medical Center", "patient": { "id": "pat_456", "name": "John Doe", "dateOfBirth": "1985-06-15", "contactInfo": { "email": "john.doe@email.com", "phone": "+1-555-0123" } }, "status": "confirmed", "notes": "Regular cleaning and check-up", "insuranceVerified": true, "lastUpdated": "2024-03-01T10:15:00Z" } ``` 1. Request Schema: ```json theme={null} { "type": "object", "properties": { "appointmentId": { "type": "string", "description": "The ID of the appointment to look up" } }, "required": ["appointmentId"] } ``` 2. Request Transformation: ```javascript theme={null} { "headers": {}, "pathParameters": { "appointmentId": request.appointmentId }, "queryParameters": {}, "body": {} } ``` 3. Sample Test Request: ```json theme={null} { "appointmentId": "apt_123" } ``` 1. 
Response Schema: ```json theme={null} { "type": "object", "properties": { "appointmentType": { "type": "string", "description": "The type of appointment in a readable format" }, "date": { "type": "string", "description": "The appointment date in a friendly format" }, "startTime": { "type": "string", "description": "The appointment start time in 12-hour format" }, "doctor": { "type": "string", "description": "The healthcare provider's name" }, "clinic": { "type": "string", "description": "The location where the appointment will take place" }, "status": { "type": "string", "description": "The current status of the appointment" }, "patientName": { "type": "string", "description": "The name of the patient" } }, "required": ["appointmentType", "date", "startTime", "doctor", "clinic", "status", "patientName"] } ``` 2. Response Transformation: ```javascript theme={null} { /* Convert appointment type from UPPER_SNAKE_CASE to readable format */ "appointmentType": $replace(clientApiCall.data.type, "_", " ") ~> $lowercase(), /* Format date as "Friday, March 15th, 2024" */ "date": $fromMillis($toMillis(clientApiCall.data.startTime), "[FNn], [MNn] [D1o], [Y]"), /* Format start time as "2:30 PM" */ "startTime": $fromMillis($toMillis(clientApiCall.data.startTime), "[h]:[m01] [P]"), /* Map provider and location directly */ "doctor": clientApiCall.data.provider, "clinic": clientApiCall.data.location, /* Map status and patient name */ "status": clientApiCall.data.status, "patientName": clientApiCall.data.patient.name } ``` 3. Sample Transformed Response: ```json theme={null} { "appointmentType": "dental cleaning", "date": "Friday, March 15th, 2024", "startTime": "2:30 PM", "doctor": "Dr. Sarah Smith", "clinic": "Downtown Medical Center", "status": "confirmed", "patientName": "John Doe" } ``` ## Next Steps Now that you've configured your API connections, GenerativeAgent can interact with your APIs just like a live agent. 
Here are some helpful resources for next steps: # Adapters in API Connections Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/adapters Learn how to connect GenerativeAgent to third-party platforms using adapters Adapters are pre-built connectors to popular third-party platforms such as Salesforce, ServiceNow, Slack, and many more. They encapsulate API endpoints and authentication for each platform, so you can connect GenerativeAgent to these systems without defining APIs from scratch. Like [MCP servers](/generativeagent/configuring/connect-apis/mcp-server), adapter interfaces are designed for LLM use from the start, which simplifies setup. You can still customize request and response schemas and transformations when needed. ## List of platforms The following platforms are available as adapters. Use the Integration Hub to add a platform and select the actions you want to use. ## Use an adapter 1. Navigate to **API Integration Hub** > **API Connections** 2. Click **Create Connection** 3. Select **Adapters** from the connection type list 4. Click **Add New** to add a new adapter (platform) source, or choose an existing one if you have already connected that platform You can create, edit, and delete adapter sources from the **Sources** page in the **API Integration Hub**. Configure the adapter for the platform you selected: * **Platform**: The third-party platform (e.g., Salesforce, Slack) * **Authentication**: Sign in or provide credentials for sandbox and/or production as required by the platform * **Environment**: Use sandbox or production according to the platform’s options Adapter sources are stored at the company level. Once created, they appear in the adapter dropdown for future connections so you can reuse them. After adding and configuring the platform, select the tool (action) you want to use: 1. Browse the available tools/actions for that adapter (e.g., create\_case, update\_record, send\_message) 2. 
Select the tool you want to connect 3. The connection name and description are inherited from the action; you can change them if needed Adapter interfaces are already designed to be LLM-friendly, so request and response schemas are typically well-structured for GenerativeAgent. You may still configure [request and response transformations](/generativeagent/configuring/connect-apis#request-interface) if needed—for example to rename fields, reshape payloads, or redact data—but they are often unnecessary. Once your adapter connection is configured, [reference it in a Function](/generativeagent/configuring#step-4-create-functions) so GenerativeAgent can call the adapter tool. ## Configure request and response interfaces Adapter actions already expose LLM-friendly schemas, so transformations are often not needed. You can still configure: * [Request Interface](/generativeagent/configuring/connect-apis#request-interface): Change how GenerativeAgent sends data to the adapter action * [Response Interface](/generativeagent/configuring/connect-apis#response-interface): Transform the action’s response before it is passed to GenerativeAgent Start with the default schemas and transformations. Only customize when you need to adapt the adapter’s interface for your use case. ## Using adapters in code connections You can reference adapter actions from [code-based connections](/generativeagent/configuring/connect-apis/code-api-connections) when you need custom logic or composition of multiple adapter calls. ## Next steps Learn how to create functions that use your adapter connection so GenerativeAgent can call adapter tools. Explore other API connection types and how to configure request and response transformations. # Authentication Methods Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/authentication-methods Learn how to configure Authentication methods for API connections. APIs require authentication to control access to their endpoints. 
GenerativeAgent's API connections support the following authentication methods: * Basic Authentication (username/password) * Custom Header Authentication (API keys) * OAuth 2.0 (Authorization Code and Client Credentials flows) If your APIs require an authentication flow that is not supported by the default authentication methods, we can create a [custom authentication method](#custom-authentication-methods) for you. ## Create an Authentication Method To create an Authentication Method: * Provide a name and description * Select the Authentication Type matching your API's requirements * Configure the type-specific settings detailed in the sections below * Save the Authentication Method You may also create an Authentication Method while specifying the API Connection's API Source. In the API Connection's API Source tab, select this Authentication Method for the Sandbox or Production environment. ## Basic Authentication [Basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication) requires: * Username * Password ## Custom Header Custom headers add authentication data to API requests via HTTP headers. Common implementations include API keys and bearer tokens. To configure a custom header: 1. Optionally enable client authentication: * Enable this if you need to reference values from the client in a header. * Set the client data validity duration. * Reference client data using `{Auth.*}` 2. Configure the header: * Header Name (e.g., "Authorization", "X-API-Key") * Header Value (a static value or dynamic client data, e.g. `{Auth.client_token}`) ## OAuth OAuth 2.0 provides delegated authorization flows. GenerativeAgent supports the Authorization Code and Client Credentials flows. For the Authorization Code flow, the required configuration is: * Authorization Code reference: the location within the [client data](#client-authentication-data) that contains the authorization code, for example `{Auth.authorization_code}` * Client ID * Client secret * Token Request URL * Redirect URI You can use a variable from the client data for the redirect URI.
`{Auth.redirect_uri}` * How the client authentication data is passed * Basic Auth, or * Request Body * One or more headers to be added to the request. * Header Name * Header Value Use `{OAuth.access_token}` for the generated access token. You can also reference the client data in the header values, using the variable: `{Auth.[auth_data_key]}`. For the Client Credentials flow, the required configuration is: * Client ID * Client secret * Token Request URL * How the client authentication data is passed * Basic Auth, or * Request Body * One or more headers to be added to the request. * Header Name * Header Value Use `{OAuth.access_token}` for the generated access token. You can also reference the client data in the header values, using the variable: `{Auth.[auth_data_key]}`. ## Client Authentication Data Some authentication flows require dynamic data from the client: * OAuth authorization codes * User-specific API keys * Custom tokens How you provide client authentication data depends on your integration: If you are using GenerativeAgent independently of ASAPP Messaging, this Auth data is passed via the [`/authenticate`](/apis/conversations/authenticate-a-user-in-a-conversation) endpoint. If you are using GenerativeAgent as part of ASAPP Messaging, this Auth data is passed via the [SDKs](/agent-desk/integrations) for the chat channel you are using. ## Client Authentication Session Any authentication method that requires client data stores the auth data for the session. If the underlying API returns a `401`, the system requires new client authentication data for the session. GenerativeAgent communicates this in the event stream as an [`authenticationRequested`](/generativeagent/integrate/handling-events#user-authentication-required) event. ## Custom Authentication Methods If your API requires an authentication flow not supported by our default methods, we can work with you to create a custom solution. Contact your ASAPP account team to discuss your custom authentication requirements.
We'll work with you to build and implement the solution. ### Using Custom Authentication Methods Custom authentication methods work the same way as standard methods: * They appear in your authentication method list * You can select them when configuring API connections * They support both sandbox and production environments Custom authentication methods are read-only configurations. To modify an existing custom authentication method, please work with your ASAPP account team. # Custom Code API Connections Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/code-api-connections Create custom API connections using JavaScript code Code-based API connections allow you to create custom integrations with any external API using JavaScript. This enables you to support any authentication flow or API structure that your external systems require. You will implement a function that takes a request as input and returns the response that GenerativeAgent will receive. This allows you to transform data, handle complex authentication flows, and integrate with any external system. ## Before You Begin Before you begin, you will need: * An ASAPP Dashboard account with Edit permissions for Code-based API Connections. Your Admin should be able to enable this for you. Reach out to your ASAPP account team if you need help. * A basic understanding of JavaScript. ## Using Code-Based API Connections 1. Navigate to **API Integration Hub** > **API Connections** 2. Click **Create Connection** 3. Select **Code-based API Connection** from the dropdown 4. Enter a **Name** and optional **Description**. 5. Click **Create connection** This creates a new, unimplemented API connection. You will be dropped into the **Settings** tab. Code-based API connections restrict access to an explicit whitelist of domains as a security measure. You need to add each domain that your code will reach out to. 1.
In the **Basic Settings** tab, specify allowed domains for any API calls your code needs to make 2. Click **Add Domain** and enter URLs (e.g., `api.example.com`) 3. Wildcards (`*`) are supported for subdomains 4. Your code can only call URLs in the allowed domains, using `asappUtilities.callAPI(url, apiRequest)` If your code will call [adapters](/generativeagent/configuring/connect-apis/adapters) via `asappUtilities.getAdapter()`, add them in the **Basic Settings** tab: 1. In **Basic Settings**, find the **Import adapters** section 2. Click **Add adapters** and select the adapter(s) you need 3. The string you pass to `getAdapter()` in code must match the name of the adapter you added (e.g. `getAdapter("SalesforceAdapter")` requires an adapter named "SalesforceAdapter") Store and manage environment variables to be used in your code. This is often used to store data that changes between environments, such as the URL for the API call you will be making. Environment variables support storing **Secret** values, which the system encrypts when stored in the database. However, for API credentials and authentication data, we strongly recommend using **Authentication Methods** instead, as they provide better security and easier management. 1. In the **Environment Variables** settings tab, add any variables your code needs. 2. Specify the value for Sandbox and Production environments. 3. Use the **Secret** checkbox to store encrypted environment variables (for configuration data, not API credentials) 4. Access environment variables in your code using `asappUtilities.getEnvVariable("VARIABLE_NAME")` If your API connection requires authentication, add authentication methods to your API connection. 1. Navigate to **Settings** → **Authentication Methods** 2. For each environment, click **Add Authentication** to add a new authentication method. 3.
Create a new [authentication method](/generativeagent/configuring/connect-apis/authentication-methods) or select an existing one. You need to define the structure of data that your API connection will receive and return: 1. **Request Schema** (`schemas/request.json`) - Define the request schema that your function will receive 2. **Response Schema** (`schemas/response.json`) - Define the response schema that your function will return See the [Request Schema](#request-schema) and [Response Schema](#response-schema) sections below for detailed information on how to structure these schemas. Implement the `handleRequest(request)` function in `src/index.js` that takes the request object and returns the response object. See the [Implementing the Handle Function](#implementing-the-handle-function) section below for detailed information on how to implement your function. As you write code, test it to ensure it works as expected: 1. In the **Run** panel, define an example request that matches your `request.json` schema 2. Select which environment to run: **Sandbox** or **Production** 3. Click **Run** 4. Review the results in the left panel You cannot publish your code until you have successfully tested it. Once you have successfully tested your API connection: 1. Publish the API connection. This will save your code and make it available as a new version. The system does not provide separate Save and Publish functionality; you must directly publish any changes to your code. 2. It will now be available for use in your GenerativeAgent tasks and functions. If you have an existing function that uses this API connection, you will need to update it to use the new version. 3. Test the integration end-to-end to ensure it works as expected ## Request Schema The request schema (`schemas/request.json`) defines the structure of data that your function will receive.
This JSON schema is both the request schema that your function will receive and the parameters shown to GenerativeAgent. Add the variables you want as input. Ensure the name and description of each variable are easy for GenerativeAgent to understand. GenerativeAgent works best with a flat JSON object that has a small number of properties as its input. The more complex the input, the harder it is for GenerativeAgent to understand the request. By default, the request schema is empty. ```json theme={null} { "additionalProperties": false, "type": "object" } ``` ### Example Request Schema This example takes a last name and confirmation code as input for an airline rebooking function. ```json theme={null} { "type": "object", "properties": { "last_name": { "type": "string", "description": "The last name of the user" }, "confirmation_code": { "type": "string", "description": "The flight confirmation code from the user" } }, "required": ["last_name", "confirmation_code"] } ``` ## Response Schema The response schema (`schemas/response.json`) defines the structure of data that your function must return. This is the response that GenerativeAgent receives. Add the variables you want as output. Ensure the name and description of each variable are easy for GenerativeAgent to understand. GenerativeAgent reads complex JSON objects more effectively in responses than in requests. However, if GenerativeAgent has trouble understanding the response, try reducing the number of properties in the response. By default, the response schema is empty.
```json theme={null} { "additionalProperties": false, "type": "object" } ``` ### Example Response Schema ```json theme={null} { "type": "object", "properties": { "response": { "type": "string", "description": "Human-readable response message" }, "data": { "type": "array", "description": "Array of search results", "items": { "type": "object", "properties": { "id": { "type": "string" }, "title": { "type": "string" }, "description": { "type": "string" } } } }, "metadata": { "type": "object", "description": "Additional metadata about the response", "properties": { "totalResults": { "type": "number" }, "query": { "type": "string" } } } }, "required": ["response"] } ``` ## Implementing the Handle Function The `handleRequest(request)` function in `src/index.js` is the core of your API connection. This function takes the request object and returns the response object. When first created, your function looks like this: ```javascript theme={null} export async function handleRequest(request) { /* Implement this function to handle the request. Return the result or throw an error. > Success case return { "response": "Your response text", "data": {}, "metadata": {} }; > Failure case (throw an error) throw new asappUtilities.APIConnectionError({ customErrorCode: "SOME_ERROR", // Codes make it easier to specify logic to GenerativeAgent around error handling errorMessage: "Error message", // This is the human readable error message error: [Optionally pass along error object] }); */ throw new asappUtilities.APIConnectionError({ customErrorCode: "NOT_IMPLEMENTED" }); } ``` Replace the `NOT_IMPLEMENTED` error with your own implementation for the API connection. ### Function Parameters The `request` parameter contains the data defined in your request schema. ### Return Value Your function should return an object that matches your response schema. ### Error Handling We expose [`several error classes in the asappUtilities`](#asapp-utilities-errors) library. 
Always use one of these error classes for error handling to ensure proper error propagation to ASAPP: ```javascript theme={null} try { // Your API logic here return { response: "Success message", data: resultData, metadata: {} }; } catch (error) { throw new asappUtilities.APIConnectionError({ customErrorCode: "API_ERROR", errorMessage: "API call failed", error: error }); } ``` ## ASAPP Utilities ASAPP Utilities is a library designed for the integration of code-based API connections. It provides tools for writing secure and efficient code. ### Making API Calls The library provides a secure way to make API calls to allowed domains with the `callAPI` function. This function uses fetch under the hood and follows fetch's interface. This is the only way to make external HTTP API calls from your code. ```javascript theme={null} const response = await asappUtilities.callAPI(`${asappUtilities.getEnvVariable("API_URL")}/data`, { method: "GET", headers: { "Content-Type": "application/json" }, authMethods: { prod: "Production API Auth", sandbox: "Sandbox API Auth" } }); if (response.status === 200) { // Status codes must be checked; non-2XX responses don't raise errors. const data = await response.json(); console.log("Data received:", data); } else { throw new asappUtilities.APIConnectionError({ customErrorCode: `API_HTTP_STATUS_ERROR`, errorMessage: `API call failed with HTTP status ${response.status}: ${response.statusText}` }); } ``` ### Environment Variables Access environment variables configured during setup. Use environment variables for configuration data like API URLs. Do not use environment variables for sensitive credentials. For API credentials and authentication data, use [Authentication Methods](#using-authentication-methods).
```javascript theme={null} const apiUrl = asappUtilities.getEnvVariable("API_URL"); console.log(`Using API URL: ${apiUrl}`); ``` ### ASAPP Utilities Errors In order to properly propagate errors to ASAPP, you must throw an `asappUtilities` error: `APIConnectionError` is a generic error that can be used to report any error that occurs in your code. ```javascript theme={null} try { // Your business logic here const result = await processUserData(request.userId); if (!result) { throw new asappUtilities.APIConnectionError({ customErrorCode: "USER_NOT_FOUND", errorMessage: "User data could not be retrieved" }); } return result; } catch (error) { // Re-throw ASAPP errors as-is so specific codes like USER_NOT_FOUND are preserved if (error instanceof asappUtilities.APIConnectionError) { throw error; } throw new asappUtilities.APIConnectionError({ customErrorCode: "PROCESSING_ERROR", errorMessage: "An error occurred while processing the request", error: error }); } ``` `ClientAuthenticationError` is thrown when a client authentication error occurs. Throwing this error causes GenerativeAgent to send an [authentication\_required event](/generativeagent/integrate/handling-events#user-authentication-required) to the client. ```javascript theme={null} const response = await asappUtilities.callAPI(url, request); if (response.status === 401) { throw new asappUtilities.ClientAuthenticationError({ customErrorCode: "TOKEN_EXPIRED", errorMessage: "Token expired and user needs to re-authenticate" }); } ``` ### Context Data When you execute an API Connection, the ASAPP system may provide context data, available to you via `asappUtilities.getContextData()`. ```javascript theme={null} const contextData = asappUtilities.getContextData(); console.log(`Current ASAPP conversation ID: ${contextData.asapp.conversationId}`); ``` This is the context data that is available to you. Access it using dot notation.
```json theme={null} { "externalCustomerId": "1234567890", // The external customer ID for the customer "asapp": { "externalConversationId": "1234567890", // The external conversation ID for the conversation "conversationId": "1234567890" // The ASAPP generated ID for the conversation } } ``` ### Fetch Authentication Method Data To use authentication methods for an API call, provide them as part of the request to `asappUtilities.callAPI()`. There may be times you want to pull data out of the authentication method, such as grabbing claims from a JWT token. You can access the authentication methods you've configured in your code using the `asappUtilities.getAuthMethod()` function. ```javascript theme={null} const authMethodResults = asappUtilities.getAuthMethod({ prod: "prod-auth-method-1", sandbox: "sandbox-auth-method-2", }); // returns array of type { headers: Record; authResultContext?: object; ttlSeconds?:number;} console.log(`Current authMethodResults: ${authMethodResults}`); ``` ## Using Authentication Methods **Best Practice**: Always use authentication methods instead of storing API keys in environment variables. Authentication methods provide better security, easier management, and support for complex authentication flows like OAuth, JWT, and custom token management. If your API connection requires authentication, first add the authentication method in **Settings** > **Authentication Methods**. Then, when making API calls with `asappUtilities.callAPI()`, specify which authentication method to use by name for each environment: ```javascript theme={null} const response = await asappUtilities.callAPI( "https://api.example.com/data", { method: "GET", headers: { "Content-Type": "application/json" }, authMethods: { prod: "Your Production Auth Method Name", sandbox: "Your Sandbox Auth Method Name", }, } ); ``` The `authMethods` object maps environment names to the exact names of your configured authentication methods.
ASAPP automatically applies the appropriate authentication (e.g. tokens, API keys) to the headers based on the method you've configured for each environment. **Important**: The authentication method names in your code must exactly match the names you gave them when creating the authentication methods in the **Settings** → **Authentication Methods** section. ## Running your code While developing your code-based API connection, you can test it by running it in the **Run** panel. This allows you to test your code without having to publish it. Any `console.log` statements will be displayed in the **Run** panel. In the right-hand **Run** panel, specify the data that will be passed to your function: * The **Request** object that will be passed to your function. This must match the `request.json` schema. * The **Context** object that will be passed to your function. * The **Auth** object that will be passed to your function. This is only used if the authentication method you are using uses [client authentication data](/generativeagent/configuring/connect-apis/authentication-methods#client-authentication-data). Additionally, you can specify the **Environment** to run your code in using the Environment dropdown. ## Examples ### Calling an adapter Use `asappUtilities.getAdapter()` to call a pre-built [adapter](/generativeagent/configuring/connect-apis/adapters). First add the adapter in **Basic Settings** under **Import adapters** (click **Add adapters**); the string you pass to `getAdapter()` must match the adapter name exactly (e.g. `getAdapter("SalesforceAdapter")`). The third argument to `adapter.call()` is a timeout in milliseconds.
```javascript theme={null} export async function handleRequest(request) { const adapter = asappUtilities.getAdapter("SalesforceAdapter"); const result = await adapter.call("create_case", { input: { AccountId: "001Hn00001z6jgKIAQ", ContactId: "003Hn00002ro3tOIAQ", Description: "", Origin: "", Priority: "", Reason: "", Status: "", Subject: "Pepito22", Type: "" } }, 10000); return result; } ``` ### Simple API Call ```javascript theme={null} export async function handleRequest(request) { const { query, context } = request; try { const response = await asappUtilities.callAPI( `https://api.example.com/search?q=${encodeURIComponent(query)}`, { method: "GET", headers: { "Content-Type": "application/json" }, authMethods: { prod: "Production API Auth", sandbox: "Sandbox API Auth" } } ); if (!response.ok) { throw new Error("API call failed"); } const data = await response.json(); return { response: `Found ${data.results.length} results for "${query}"`, data: data.results, metadata: { totalResults: data.total, query: query } }; } catch (error) { throw new asappUtilities.APIConnectionError({ customErrorCode: "API_ERROR", error: error }); } } ``` ### Using JWT Claims from Authentication ```javascript theme={null} export async function handleRequest(request) { const { query, context } = request; try { // Get authentication method results to access JWT token const authMethodResults = asappUtilities.getAuthMethod({ prod: "Production JWT Auth", sandbox: "Sandbox JWT Auth" }); // Extract user ID from JWT token (assuming it's in the Authorization header) let userId = null; if (authMethodResults && authMethodResults.length > 0) { const authHeaders = authMethodResults[0].headers; const authHeader = authHeaders['Authorization'] || authHeaders['authorization']; if (authHeader && authHeader.startsWith('Bearer ')) { const token = authHeader.substring(7); // Decode JWT payload (this is a simplified example - in practice, you'd want proper JWT validation) try { const payload = 
JSON.parse(atob(token.split('.')[1])); userId = payload.user_id || payload.sub; } catch (e) { console.log('Could not decode JWT token'); } } } // Build API URL with user ID if available let apiUrl = `https://api.example.com/data?customer=${context.externalCustomerId}`; if (userId) { apiUrl += `&user=${encodeURIComponent(userId)}`; } const response = await asappUtilities.callAPI( apiUrl, { method: "GET", headers: { "Content-Type": "application/json" }, authMethods: { prod: "Production JWT Auth", sandbox: "Sandbox JWT Auth" } } ); const data = await response.json(); return { response: `Customer data retrieved successfully${userId ? ` for user ${userId}` : ''}`, data: data, metadata: { customerId: context.externalCustomerId, userId: userId } }; } catch (error) { throw new asappUtilities.APIConnectionError({ customErrorCode: "AUTH_ERROR", error: error }); } } ``` ## Additional Libraries Currently, code-based API connections support the core ASAPP Utilities library and do not allow the use of third-party libraries (e.g., via `require()` or `import` statements). If you require additional third-party libraries or tools for your integration, reach out to your ASAPP account team to discuss your specific needs. # API Design Best Practices for GenerativeAgent Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/designing-apis-for-generativeagent Learn how to design APIs that integrate smoothly with GenerativeAgent. As AI agents become the primary interface for many business operations, traditional API design patterns are failing. **Poorly designed APIs can reduce AI agent accuracy** and create frustrating user experiences. The solution? APIs designed specifically for AI consumption. Most AI agents use middleware to expose functions to the LLM and call underlying HTTP APIs, such as [MCP](https://modelcontextprotocol.io/docs/getting-started/intro) or custom solutions.
To streamline this process with GenerativeAgent, we built [API Connections](/generativeagent/configuring/connect-apis) to enable your GenerativeAgent to work with any API, regardless of how it's designed. Many customers still want to revamp or redesign their APIs to make them more LLM- and GenerativeAgent-friendly. We've found several best practices that help you design APIs that are easily interpreted by GenerativeAgent and other AI agents. ## API for Humans vs AI Agents Most APIs are designed for human consumption, which relies on implicit dependencies like documentation, trial and error, training, and technical support. Real-time AI agents (like GenerativeAgent) must perform API calls in one shot. They rely exclusively on specifications to determine how to call the API. We recommend using OpenAPI specifications for this purpose. To design APIs that are easily interpreted by GenerativeAgent or any other AI agent, you must prioritize **machine-readability** and **explicit semantic context** over human-centric documentation. AI agents can't infer intent from documentation prose. They need structured, self-describing contracts that eliminate ambiguity. Treat the API's specification not as documentation about the API, but as an integral, machine-readable part of the API itself. ## HTTP API and LLM Tools In virtually all AI agent use cases, middleware (such as MCP) handles transforming LLM requests into API calls. This middleware is critical because LLMs work best with targeted, RPC-like tool calls (such as `update_address` or `refund_order` with parameters for the ID or fields to update). They are not as successful at constructing raw HTTP API requests (such as a `call_api` function with parameters for verb, path, body, etc.). When you decide to update your APIs for AI agents, there are two approaches to consider: 1.
**Expose an API that is entirely focused on AI agents** and let human developers work around it: your APIs will consist of RPC-like tool calls. 2. **Expose an API that is designed for both human developers and AI agents**: ideally a RESTful API whose actions the middleware can convert into RPC-like tool calls.

Most customers choose the second approach to maximize API usage by both human developers and AI agents. Our guidance below focuses on this approach, though much of it also applies if you opt for RPC-like API calls.

The sections below cover guidance and best practices for designing HTTP APIs and tool calls that will be used by AI agents.

## API Design Principles

Consistency and common schema patterns make your APIs easier for AI agents to interpret. We show examples using OpenAPI specifications. There are many good general API design guides available, such as [Google's API design guide](https://cloud.google.com/apis/design). Here are the design principles that will help your APIs be AI-ready:

### Simplify Field Names

Use clear, descriptive, and simple field names. Avoid abbreviations that are not immediately clear.

```json Good - Clear and descriptive
{
  "type": "object",
  "properties": {
    "customer_name": { "type": "string" },
    "order_date": { "type": "string" }
  }
}
```

```json Avoid - Cryptic or complex
{
  "type": "object",
  "properties": {
    "cust_nm_001": { "type": "string" },
    "ord_dt_timestamp": { "type": "string" }
  }
}
```

### Use Intuitive, Human-Friendly Concepts

Design APIs that work for both humans and AI agents by balancing technical primitives with human-friendly concepts. Vague terms like "record", "details", or "session" are difficult for humans to understand without context, and AI agents will struggle even more. Instead, design around resources that can express their current state and requirements clearly.
This makes it easier for AI agents to understand what data is needed and how to provide it, and it makes your APIs easier to expand and map to [RPC-like tool calls](#tool-call-design-best-practices) that work well with GenerativeAgent.

### Be Verbose in Descriptions

Use the `summary` and `description` fields in OpenAPI for each endpoint and parameter. Describe the purpose in clear, simple language. GenerativeAgent can use this text to map a user's natural language request (e.g., "find a customer by their email") to the correct API call.

```json
{
  "properties": {
    "order_status": {
      "type": "string",
      "description": "Current status of the order (pending, shipped, delivered)",
      "enum": ["pending", "shipped", "delivered"]
    }
  }
}
```

### Define Strict Schemas

Meticulously define the data structures for every request and response using JSON Schema within the OpenAPI document. Specify data types (`string`, `integer`), formats (`date-time`, `email`), and constraints (`minLength`, `maximum`). This is how an agent knows exactly what to send and how to parse the result.

### Standardize Data Formats

Use universal standards for common data types:

* **Timestamps**: ISO 8601 format (`2024-01-15T10:30:00Z`)
* **Currency**: ISO 4217 codes (`USD`, `EUR`)
* **Phone numbers**: E.164 format (`+1234567890`)

### Consistent Naming Conventions

Use a predictable naming scheme for your API endpoints, such as plural nouns for collections (`/users`, `/orders`). Maintain a consistent case style (e.g., `camelCase` for JSON properties).

### Provide Rich Context

Don't just return an ID. If you have a `userId`, also include a `userName` or `userEmail`. Design API responses to be self-contained and descriptive.

```json
{
  "status": {
    "code": "A",
    "label": "Active"
  }
}
```

### Ensure Idempotency

For operations like `PUT` and `DELETE`, ensure they can be called multiple times with the same input and produce the same result.
GenerativeAgent may need to retry operations, and idempotency makes this safe.

### Design Structured Errors

Provide detailed, structured error information that AI systems can understand: a JSON error payload that an agent can parse.

```json
{
  "error": {
    "code": "InvalidParameter",
    "message": "The 'email' parameter is not a valid email address.",
    "target": "email",
    "details": "https://developer.example.com/errors#InvalidParameter"
  }
}
```

### Cursor Pagination

Implement cursor-based pagination for list endpoints. Include metadata for navigating to the next and previous pages, and provide a total count when possible.

### Essential OpenAPI Elements

We recommend using OpenAPI specifications for your APIs. Here are some essential elements to consider:

#### Descriptive Documentation

* Use clear `summary` and `description` fields for each endpoint
* Describe parameters and responses in simple language
* Include examples that show expected usage

#### Unique Operation IDs

* Provide descriptive `operationId` values (e.g., `getUserById`, `createOrder`)
* Use consistent naming conventions across your API
* Avoid generic names like `getData` or `processRequest`

## Tool Call Design Best Practices

### Granular Tool Names for Specific Use Cases

GenerativeAgent and other LLMs work best with targeted, RPC-like tool calls. If you want GenerativeAgent to update a user's address, instead of directly calling a generic `update_user` tool, create a tool called `update_user_address`.

**Start Simple, Expand as Needed**

You can start with basic CRUD tools based on your existing APIs, but expand on them to be more specific to the task. Directly exposing a RESTful API results in generic tools like `create_user` and `update_user`, which may be fine for some use cases, but you'll often end up exposing specific tools for specific tasks.

### Flatten Complex Structures

Often a good RESTful design has logical nesting for different entities within a given resource.
But LLMs can have a hard time understanding deeply nested objects. Flatten nested objects into a single level.

```json Good - Flat structure
{
  "type": "object",
  "properties": {
    "shipping_street": { "type": "string" },
    "shipping_city": { "type": "string" },
    "shipping_country": { "type": "string" }
  }
}
```

```json Avoid - Deep nesting
{
  "type": "object",
  "properties": {
    "shipping": {
      "type": "object",
      "properties": {
        "address": {
          "type": "object",
          "properties": {
            "street": { "type": "string" },
            "city": { "type": "string" },
            "country": { "type": "string" }
          }
        }
      }
    }
  }
}
```

### Focus on Essential Data

* Remove optional or additional fields that are not directly needed for the specific task
* Only expose the specific fields from the API that are needed for the specific task
* The more fields you expose, the more likely GenerativeAgent is to provide unnecessary information or get confused by a large API response

### Schema Consistency Across Tools

When exposing multiple tools to GenerativeAgent, use consistent naming regardless of the underlying system. If you have a person called a `user` in one API and an `account` in another, it may confuse GenerativeAgent. Use a consistent naming convention across all the APIs, and use that same naming in the task instructions.

### Cross Field Relationships

AI agents need explicit guidance about how fields relate to each other. Document field dependencies, constraints, and interactions clearly in your schema descriptions. Use descriptive field names that indicate relationships, and explicitly document when one field's value affects another field.

```json
{
  "type": "object",
  "properties": {
    "payment_method": {
      "type": "string",
      "description": "Payment method type.
When set to 'credit_card', the 'card_details' field becomes required.",
      "enum": ["credit_card", "bank_transfer", "paypal"]
    },
    "card_details": {
      "type": "object",
      "description": "Required when payment_method is 'credit_card'. Ignored for other payment methods.",
      "properties": {
        "card_number": { "type": "string" },
        "expiry_date": { "type": "string" }
      }
    }
  }
}
```

# MCP Servers in API Connections

Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/mcp-server

Learn how to connect GenerativeAgent to an MCP server.

MCP ([Model Context Protocol](https://modelcontextprotocol.io/)) servers provide a standardized way to expose tools and capabilities to LLMs. Unlike traditional APIs, which implicitly require a developer to read docs, experiment through trial and error, and rely on technical support, MCP servers are designed for LLM use from the start, making them easier to integrate with GenerativeAgent.

## Use MCP Server

1. Navigate to **API Integration Hub** > **API Connections**
2. Click **Create Connection**
3. Select **MCP Server** from the connection type list
4. Click **Add New** to create a new MCP server source

You can create, edit, and delete sources from the **Sources** page in the **API Integration Hub**.

Configure the MCP server:

* **URL**: Enter the URL of your MCP server
* **Authentication Method**: Select or create an [authentication method](/generativeagent/configuring/connect-apis/authentication-methods) for the server

You can reuse MCP server sources across multiple API connections. Once created, they appear in the MCP server dropdown for future connections.

After configuring the MCP server source, select the tool you want to use:

1. Browse available tools from the MCP server
2. Select the tool you want to connect
3.
The tool's name and purpose are automatically populated from the MCP server response.

MCP servers expose tools that are already designed for LLM use, so the request and response schemas are typically well-structured for GenerativeAgent. You may still configure [request and response transformations](/generativeagent/configuring/connect-apis#request-interface) if needed, but they're often unnecessary.

Once your MCP server connection is configured, [reference it in a Function](/generativeagent/configuring#step-4-create-functions) to enable GenerativeAgent to use the MCP tool.

## Configure Request and Response Interfaces

Since MCP servers are designed for LLM interaction, transformations are often not needed. However, you can still configure:

* [Request Interface](/generativeagent/configuring/connect-apis#request-interface): Modify how GenerativeAgent sends data to the tool
* [Response Interface](/generativeagent/configuring/connect-apis#response-interface): Transform the tool's response format

Start with the default schemas and transformations. Only customize if you need to adapt the tool's interface for your specific use case.

## MCP Server Schema Changes

MCP servers are unversioned. When you update the schema of your MCP tools, the existing MCP server source in your system continues to use the original schema. To incorporate schema changes:

1. Create a new MCP server source with the same URL and authentication
2. The new source will fetch and use the updated schema from your MCP server
3. Update your API connections to reference the new MCP server source instead of the old one

## Authentication

MCP server connections in API Connections support the same [authentication methods](/generativeagent/configuring/connect-apis/authentication-methods) as standard API connections. They **do not** currently support MCP's authentication specification.
The [MCP authorization specification](https://modelcontextprotocol.io/docs/tutorials/security/authorization) is designed for use cases where end users authorize bots to access their personal resources, such as internal productivity tools or ChatGPT accessing resources on your behalf (for example, your Google Drive). This flow generally does not work for customer service communication, where a user logs in to your website or app and presumes any chat experience is already authenticated. The authentication methods in API Connections are designed around that user auth flow.

To use authentication for your MCP server with API Connections, use one of the [Authentication Methods](/generativeagent/configuring/connect-apis/authentication-methods) we support. If your MCP server requires a unique authentication flow, contact your ASAPP account team to discuss your requirements. We'll work with you to build and implement the solution.

## Next Steps

Learn how to create functions that use your MCP server connection to enable GenerativeAgent to call MCP tools. Explore other API connection types and learn how to configure request and response transformations.

# Mock API Users

Source: https://docs.asapp.com/generativeagent/configuring/connect-apis/mock-apis

Learn how to mock APIs for testing and development.

While you are building your API Connection, you can use mock data to test the connection and ensure your transformations are working as expected. This mock data is saved as a Mock User, which groups mock responses for a given scenario. The system only uses the mock data when testing the API Connection. Use [Test Users](/generativeagent/configuring/tasks-and-functions/test-users) to test and simulate Tasks and Function responses.

## Mock Users

A mock user is a collection of mock responses that simulate how your server may respond. Each endpoint in use by an API Connection can have a mock response defined.
By default, the mock user will return the [default mock data](/generativeagent/configuring/connect-apis#api-source) defined in the API Connection's API Source.

To create a Mock User:

1. Access the **API Mock Users** section from the API Integration Hub.
2. Click **Create User** to start creating a new mock user.
3. Provide the following information:
   * Name of the user
   * Description of the user

The newly created mock user will have a default mock response for each endpoint in the API Connection. You can check **Override Default Mock Response** and specify a new mock response. Make sure to save the mock user to apply the changes.

## Using Mock Users

You can use mock users to test your transformations. From within the Response interface, select the mock user to use in the **Test Response** panel.

This allows you to save common responses from your server as sets of mock users. As you iterate on your API Connection, you can test your transformations against the same mock responses.

## Next Steps

Learn how to use test users to simulate and validate task and function responses. Understand how to connect and configure external APIs with your application. Learn how to authenticate your API connections. Step-by-step guide to integrate APIs with your GenerativeAgent implementation.

# Connecting your Knowledge Base

Source: https://docs.asapp.com/generativeagent/configuring/connecting-your-knowledge-base

Learn how to import, sync, and deploy your Knowledge Base for GenerativeAgent.

Your knowledge base is crucial for GenerativeAgent to provide accurate and contextually relevant responses to users. You have full control over which articles you include, their update frequency, and how you deploy those updates. All content and update management occurs within the ASAPP dashboard.
GenerativeAgent's Knowledge Base is designed to store reference material that GenerativeAgent can use to answer a variety of customer questions—both explicit (“What is the return policy?”) and implicit (“I don't understand ‘eligible for store credit’”). GenerativeAgent determines if a question should be answered from the Knowledge Base and searches for the most relevant content. To configure your Knowledge Base, you need to: 1. Import your knowledge base 2. Configure sync and deployment preferences 3. Deploy knowledge base articles Do not upload internal, agent-facing knowledge base material intended only for live agents. Use GenerativeAgent’s task instructions for internal-only guidance. ## Step 1: Importing Your Knowledge Base You can import content by: * Navigating to GenerativeAgent > Knowledge in the ASAPP dashboard * Clicking **Add content** * Choosing from: * **Import from URL** * **Import from Zendesk KB** * **Custom Knowledge Base Connector** * **Create Snippet** * **Add via API** Importing from a URL lets you specify a site or page for the crawler to harvest articles. 1. Select **Import from URL** 2. Enter the URL to start the import 3. (Optional) Use URL Prefixes or Excluded URLs to target specific sections Importing from Zendesk KB lets you import articles from your Zendesk Knowledge Base. 1. Select **Import from Zendesk KB** 2. Enter your Zendesk subdomain. This is the unique identifier for your Zendesk account. 3. Enter your Zendesk API credentials as an Authentication Method. 4. Optionally specify how to filter the content by: * Locale * Categories * Sections * Labels Connect a third-party or internal knowledge base using an [API Connection](/generativeagent/configuring/connect-apis). You implement an API that fetches articles from your KB; GenerativeAgent crawls it on a schedule and ingests the content. 
See [Custom KB Connectors](/generativeagent/configuring/connecting-your-knowledge-base/custom-kb-connectors) for the full setup (create API connection → add connector → crawl and deploy).

Snippets are stand-alone articles created manually.

1. Click **Create Snippet**
2. Enter the article's title and content
3. Optionally add instructions and metadata

Programmatically add or update articles using the Knowledge Base Article Submission API.

## Step 2: Configure Sync & Deployment Preferences

When adding or modifying a content source, you have advanced control over how and when it syncs and how updates are deployed to production.

**Automatic update options:**

* **Enable with review notification**: Scrapes and cleans content every 24 hours. Updates require manual review before deployment.
* **Enable with auto-deployment**: Scrapes and cleans content every 24 hours. The system *immediately* deploys updates to production, bypassing the review process.
* **Turn off**: Import content only once. No automated updates.

> You can adjust this setting anytime in the content source management screen. The system visually indicates the current sync mode for each source.

## Step 3: Article-Level Frequent Refresh (Critical Updates Only)

You can override the sync and deployment settings of a content source for specific, time-sensitive articles:

1. Click the three-dot menu next to the article and select **Set refresh frequency**.
2. In the dialog, enable **refresh frequency** for updates every 15 minutes.
   * The system will auto-deploy updates to production immediately, even if your content source normally requires review.
   * Disabling this will revert to the content source's settings.

Use frequent refresh and auto-deploy only for critical, time-sensitive articles (e.g., live incident or service disruption updates).
These changes will go live immediately without review. ## Step 4: Deploying Your Knowledge Base Once imported and configured, deploy your Knowledge Base and changes to your desired environment for GenerativeAgent. This ensures users receive the most accurate and timely information. You manage deployment as part of the [generative agent deployment process](/generativeagent/configuring/deploying-to-generativeagent). ## Reviewing Imported Articles By default, imported articles (from URL/API) require review and publishing (unless auto-deploy is enabled for the source or article). If articles are pending review, a banner will appear at the top of the Knowledge Base page. You can choose between a cleaned-up or raw version of each article before publishing. > If the system updates an article (by new crawl or API update), the same rules apply: > > * Requires re-review if the parent content source is in review mode > * Deploys instantly if in auto-deploy or frequent refresh mode ## Visual Indicators & Notifications * The system shows sync and deployment status both for sources and individual articles. * Recent auto-sync and deployment activity can be reviewed in audit logs or dashboards. *** ## Optimizing GenerativeAgent Article Usage Boost GenerativeAgent’s accuracy and retrieval behavior by leveraging: ### Query Examples Add typical customer questions to help surface relevant content: 1. In the “GenerativeAgent Instructions” column, click **Add query example** 2. Enter common questions as needed ### Additional Instructions Provide special clarifications or company-specific answers: 1. Click **Add Instruction** 2. Write a clear clarification or sample response ### Article Metadata Use metadata to ensure GenerativeAgent uses specific articles only for relevant tasks. 1. Navigate to the article and click **Edit Metadata**. 2. Add or modify metadata keys to enable targeted article discovery and control. 
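To make the metadata shape concrete: it is a list of key-value pairs, the same shape the Knowledge Base Article Submission API accepts. The keys and values below are hypothetical examples:

```json
[
  { "key": "department", "value": "Billing" },
  { "key": "audience", "value": "customer-facing" }
]
```

Per the guidance above, such keys let GenerativeAgent use specific articles only for relevant tasks.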
### Search & Filter Easily find or bulk manage articles using metadata, status, content source, creator, or deployment state. You can select and combine multiple filters with "AND" for precise searching. ## Preview Quickly test live how GenerativeAgent uses your knowledge base: 1. Click the eye icon next to "Deploy" 2. Start a test conversation to see answers pulled from your knowledge base For more details, see the [Previewer guide](/generativeagent/configuring/previewer). ## Next Steps Explore further integration topics: # Add via API Source: https://docs.asapp.com/generativeagent/configuring/connecting-your-knowledge-base/add-via-api Learn how to add Knowledge Base articles programmatically using the API The Knowledge Base Article Submission API offers an alternative to manual creation of article snippets and URL imports. This is especially beneficial for large data sources that are not easily scraped, such as internal knowledge bases or articles within a Content Management System. All content imported via API follows the [Imported Articles](/generativeagent/configuring/connecting-your-knowledge-base#imported-articles) review process. ## Before you Begin Before using the Knowledge Base Article Submission API, you need to: * [Get your API Key Id and Secret](/getting-started/developers#access-api-credentials) * Ensure your API key has been configured to access Knowledge Base APIs. Reach out to your ASAPP team if you need access enabled. ## Step 1: Create a submission To import an article via API, you need to create a `submission`. A **submission** is the attempt to import an article. It will still need to be reviewed and published like any other imported article. To [create a submission](/apis/knowledge-base/create-a-submission), you need to specify: * `title`: The title of the article * `content`: The content of the article There are additional optional fields that can be used to improve the articles such as `url`, `metadata`, and `queryExamples`. 
More information can be found in the [API Reference](/apis/knowledge-base/create-a-submission). As an example, here's a request to create a submission for an article, including additional values such as `url` and `metadata`:

```bash
curl --request POST \
  --url https://api.sandbox.asapp.com/knowledge-base/v1/submissions \
  --header 'Content-Type: application/json' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --data '{
    "title": "5G Data Plan",
    "content": "Our 5G data plans offer lightning-fast speeds and generous data allowances. The Basic 5G plan includes 50GB of data per month, while our Unlimited 5G plan offers truly unlimited data with no speed caps. Both plans include unlimited calls and texts within the country. International roaming can be added for an additional fee.",
    "url": "https://example.com/5g-data-plans",
    "metadata": [
      {
        "key": "department",
        "value": "Customer experience"
      }
    ],
    "queryExamples": [
      "What 5G plans do you offer?",
      "Is there an unlimited 5G plan?"
    ],
    "additionalInstructions": [
      {
        "clarificationInstruction": "Emphasize that 5G coverage may vary by location",
        "exampleResponse": "Our 5G plans offer great speeds and data allowances, but please note that 5G coverage may vary depending on your location. You can check coverage in your area on our website."
      }
    ]
  }'
```

## Step 2: Article Processing

The Article Submission API submits the article, which will still need review and publication like any other imported article. You can check the status of the submission by calling the [Get a Submission](/apis/knowledge-base/retrieve-a-submission) API. The response will include the `id` and `status` of the submission.
```json
{
  "id": "fddd060c-22d7-4aed-acae-8f8dcc093a88",
  "articleId": "8f8dcc09-22d7-4aed-acae-fddd060c3a88",
  "submittedAt": "2024-12-12T00:00:00",
  "title": "5G Data Plan",
  "content": "Our 5G data plans offer lightning-fast speeds and generous data allowances...",
  "status": "PENDING_REVIEW"
}
```

## Step 3: Publication and Updates

Once you approve the submission, the system will publish the article and make it available in the Knowledge Base. The status of the submission will update to `ACCEPTED`, and you will see it within the ASAPP AI-Console UI. You can also update the article after it has been published by creating another submission with the same `articleId`.

## Troubleshooting

Common API response codes and their solutions:

* A `500` code indicates an issue with the server. Wait and try again; if the error persists, contact your ASAPP team.
* A `400` code usually means missing required parameters. Recheck your request body and try again.
* A `401` code indicates incorrect credentials or unconfigured ASAPP credentials.
* If the request body is too large: article content is limited to 200,000 Unicode characters. Try again with less content.

## Next Steps

View the Knowledge Base API documentation. Learn more about managing your Knowledge Base articles. Configure how GenerativeAgent uses your Knowledge Base. Deploy your Knowledge Base to production.

# Custom Knowledge Base Connectors

Source: https://docs.asapp.com/generativeagent/configuring/connecting-your-knowledge-base/custom-kb-connectors

Connect a third-party knowledge base to GenerativeAgent using API Connections.

Custom Knowledge Base Connectors let you integrate with any knowledge base using [API Connections](/generativeagent/configuring/connect-apis). By using API Connections, we can crawl any knowledge base that exposes an API, which most knowledge management systems (KMSs) do.
To use a Custom KB Connector, you implement an API Connection that fetches articles from your knowledge base; ASAPP then crawls that connection on a schedule, ingesting the articles into the Knowledge Base for GenerativeAgent to use when answering user questions. We provide example implementations for popular KB platforms such as Salesforce; reach out to your ASAPP team for access.

## How it works

When the system crawls your Custom KB Connector, it runs your API Connection **repeatedly** until all articles are fetched:

1. **First call**: The connection is invoked with empty or undefined `paginationParams`.
2. **Your response**: You return a batch of articles and, if there are more, a `nextParams` object (e.g., page number, cursor, or offset).
3. **Next call**: The system invokes your connection again, passing that `nextParams` as `paginationParams`.
4. **Repeat**: Steps 2–3 continue until you indicate that there are no further pages.
5. **Ingestion**: All articles from every call are collected and submitted into the Knowledge Base for review and deployment.

The shape of `paginationParams` and `nextParams` is up to you; the system passes them through so you can support any pagination style your KB API uses.

Articles from Custom KB Connectors go through the same [review and deployment process](/generativeagent/configuring/connecting-your-knowledge-base#reviewing-imported-articles) as other imported content. You can enable auto-deploy on the connector if you want updates to go live without manual review.

## Before you begin

* [Get your API Key and Secret](/getting-started/developers#access-api-credentials), with access to GenerativeAgent and Knowledge Base as needed.
* Your knowledge base must have a publicly accessible API that supports listing and fetching articles in a way that can be adapted to pagination (e.g., by page, cursor, or offset).
## Step 1: Create an API connection for the Knowledge Base

Create an API Connection that fetches articles from your knowledge base. The connection must implement the Knowledge Base Connector request/response interface so the system can crawl all articles and handle pagination.

In **API Integration Hub** → **API Connections**, create a new connection or open an existing one to edit. You will implement the KB Connector interface in this connection.

Implement the request/response contract below. Your logic (JSONata or code) should call your knowledge base API and map its data into the required response shape. The implementation must conform to this contract:

**Request:** The connection receives a single argument with this shape:

```json
{
  "type": "object",
  "properties": {
    "paginationParams": {
      "type": "object",
      "description": "Pagination parameters passed on each request.",
      "additionalProperties": true
    }
  },
  "additionalProperties": false
}
```

On the first execution, `paginationParams` is `undefined` or `{}`. On each subsequent call, it contains the `nextParams` you returned in the previous response.

**Response:** Return an object with this shape:

```json
{
  "type": "object",
  "properties": {
    "articles": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": {
            "type": "string",
            "description": "The proposed title of the article. Required.",
            "minLength": 1,
            "maxLength": 256
          },
          "content": {
            "type": "string",
            "description": "The article content in plain text. Required.",
            "minLength": 1,
            "maxLength": 200000
          },
          "url": {
            "type": "string",
            "description": "Required. The canonical URL for the article. Used as the identifier during crawling so the system can update a previously uploaded article and avoid duplication."
}, "metadata": { "type": "array", "description": "Additional key-value pairs related to the article (optional).", "items": { "type": "object", "required": ["key", "value"], "properties": { "key": { "type": "string", "minLength": 1 }, "value": { "type": "string", "minLength": 1 } } } }, "queryExamples": { "type": "array", "description": "Examples of customer questions related to the article (optional).", "items": { "type": "string" } } }, "required": ["title", "content", "url"], "additionalProperties": false } }, "hasMore": { "type": "boolean" }, "nextParams": { "type": "object", "description": "Parameters to use as paginationParams on the next call (if hasMore true).", "additionalProperties": true } }, "required": ["articles", "hasMore"], "additionalProperties": false } ``` Each article must include a **url**. This URL is the identifier for that article during crawling: the system uses it to recognize when an article was already ingested, so it can update the existing one instead of creating a duplicate. When `hasMore` is `true`, you must provide `nextParams` with whatever your API needs for the next page (e.g., `{ page: 2 }` or `{ cursor: "abc123" }`). When `hasMore` is `false`, the crawl stops. Save and test the API Connection before moving to Step 2. The system submits these articles to the Knowledge Base; they will go through the normal review and approval flow unless the connector is set to auto-deploy. The system executes your API Connection in a loop: 1. First call: `paginationParams` is empty or undefined. 2. Your connection returns `articles`, `hasMore`, and `nextParams`. 3. If `hasMore` is `true`, the system calls again with `paginationParams` set to your `nextParams`, and repeats until `hasMore` is `false`. 4. All collected articles are submitted to the Knowledge Base and follow the standard [import and review process](/generativeagent/configuring/connecting-your-knowledge-base#reviewing-imported-articles). 
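To make the crawl loop concrete, here is a minimal sketch of a connector handler that satisfies the contract above, using a made-up in-memory article list and page-number pagination (the data, URLs, and page size are illustrative, not part of the ASAPP interface):

```javascript
// Illustrative KB data; in a real connector this would come from your KB API.
const kb = [
  { title: "Returns", content: "Items may be returned within 30 days.", url: "https://example.com/kb/returns" },
  { title: "Shipping", content: "Orders ship within 2 business days.", url: "https://example.com/kb/shipping" },
  { title: "Warranty", content: "Hardware carries a 1-year warranty.", url: "https://example.com/kb/warranty" }
];
const PAGE_SIZE = 2;

// One crawl step: receives { paginationParams }, returns { articles, hasMore, nextParams }.
function fetchArticles({ paginationParams }) {
  const page = (paginationParams && paginationParams.page) || 1;
  const start = (page - 1) * PAGE_SIZE;
  const articles = kb.slice(start, start + PAGE_SIZE);
  const hasMore = start + PAGE_SIZE < kb.length;
  return hasMore
    ? { articles, hasMore, nextParams: { page: page + 1 } }
    : { articles, hasMore };
}

// The crawler loops, feeding each nextParams back in, until hasMore is false:
let collected = [];
let params = {};
for (;;) {
  const res = fetchArticles({ paginationParams: params });
  collected = collected.concat(res.articles);
  if (!res.hasMore) break;
  params = res.nextParams;
}
// collected now holds all three articles, keyed by their urls
```

A cursor- or offset-based KB API fits the same contract; only the contents of `nextParams` change.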
After the API Connection is created and implements the KB Connector interface, go to the Knowledge Bases page to add the Custom connector. ## Step 2: Add the Custom connector Configure the Custom Knowledge Base Connector so GenerativeAgent knows which API Connection to use and how often to crawl it. 1. Navigate to **GenerativeAgent** → **Knowledge** in the ASAPP dashboard. 2. Click **Add content** (or the equivalent control to add a source). 3. From the **Add Source** dropdown, select **Custom Knowledge Base Connector**. In the Custom Knowledge Base Connector form, provide: * **Name** (required): A name for this connector (e.g., "Internal Wiki KB"). * **Description** (optional): Short description of the source. * **API Connection** (required): Select the API Connection you created in Step 1. Only connections that implement the KB Connector interface are eligible. The form validates that the selected connection conforms to the required interface. * **Crawling settings**: Use the same crawling and deployment options as other Knowledge Base sources (e.g., sync frequency, whether updates require review or auto-deploy). Save the Custom Knowledge Base Connector. It will appear as a content source on the Knowledge Bases page. The system will use your crawling settings to determine when to run the next sync. ## Step 3: Crawl and use the connector Once the Custom connector is saved, the system crawls it according to your crawling settings. All articles are fetched and appear in the Knowledge Base like any other content source—you can view, review, and manage them as described in [Connecting your Knowledge Base](/generativeagent/configuring/connecting-your-knowledge-base). After articles are published and deployed, GenerativeAgent uses the new content when answering questions, the same way it uses other Knowledge Base sources. 
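For reference, the crawl loop described in Step 1 can be sketched in Python. This is illustrative only — the real crawler runs inside the platform — and `fake_connector` is a made-up two-page connector; the point is how the article `url` acts as the identity key, so a re-crawled article updates the earlier copy instead of duplicating it.

```python
def crawl(connector):
    """Call the connector until hasMore is False, keying articles by url
    so a re-ingested article replaces its earlier copy."""
    articles_by_url = {}
    params = {}  # first call: empty paginationParams
    while True:
        resp = connector(params)
        for article in resp["articles"]:
            articles_by_url[article["url"]] = article  # url = identifier
        if not resp["hasMore"]:
            break
        params = resp["nextParams"]  # fed back as paginationParams
    return list(articles_by_url.values())

def fake_connector(pagination_params):
    # Hypothetical two-page connector; the same url appears on both
    # pages to demonstrate update-instead-of-duplicate behavior.
    page = (pagination_params or {}).get("page", 1)
    if page == 1:
        return {"articles": [{"title": "A", "content": "v1", "url": "https://x/a"}],
                "hasMore": True, "nextParams": {"page": 2}}
    return {"articles": [{"title": "A", "content": "v2", "url": "https://x/a"},
                         {"title": "B", "content": "b", "url": "https://x/b"}],
            "hasMore": False}
```

Running `crawl(fake_connector)` yields two articles: `https://x/a` holds the second-page content (`v2`), not a duplicate.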
## Viewing and managing articles

Articles imported from a Custom Knowledge Base Connector are shown in the Knowledge Base UI with the connector indicated as the content source. You get the same article-level controls as for other sources: edit metadata, set refresh frequency, search, and filter. Content source and deployment status are visible so you can confirm which articles came from your custom connector.

## Debugging your connector

All executions of your API Connection for the Knowledge Base are recorded in the API log. In the log view, you can inspect request/response data, execution time, and errors. Use this to troubleshoot pagination, authentication, or response-shape issues.

## Next steps

Overview of importing, syncing, and deploying your Knowledge Base Create and manage API Connections Submit individual articles via the Knowledge Base API Submission API and article format reference

# Conversations

Source: https://docs.asapp.com/generativeagent/configuring/conversations

Review and analyze GenerativeAgent conversations to improve performance.

**"Conversations"**, formerly known as Conversation Explorer, is now fully integrated into the AI-Console to provide a centralized platform for viewing, monitoring, and managing conversations. This integration also enables faster navigation, improved usability, and streamlined access control and permission management.

As you add use cases and refine GenerativeAgent's configuration, you can review, analyze, and fine-tune how GenerativeAgent handles real customer interactions.

The **Conversations** interface enables you to:

* Find specific conversations using search and filter options
* Share conversations with others
* Review how GenerativeAgent handled customer interactions by going through the conversation transcript or playing back the conversation.
* Use Evaluators to review and analyze conversations.
* Analyze GenerativeAgent's decision-making process via model actions

## Access Conversations

Once you have access to Conversations, follow these steps to navigate to the interface:

1. Sign in to the AI-Console.
2. From the AI-Console landing page, navigate to the "GenerativeAgent" section in the left-hand menu.
3. On the left-hand panel, navigate to Conversations.

Conversations

## Find conversations

In the Conversations interface, you can find specific conversations using the search and filter options. You can either combine multiple search terms and filters to narrow down your results or use them individually to explore different aspects of your GenerativeAgent interactions.

When you open a conversation, you can see the following:

* Full transcript of the interaction between the customer and GenerativeAgent.
* Model actions that show how GenerativeAgent processed the conversation.
* Flags that indicate potential quality issues in the conversation under **Quality**.
* Goals and their completion status under the **About** tab.
* A button to copy the conversation link for [sharing](#share-a-conversation).
* Summary of the conversation under the **Summary** tab, which provides a brief overview of the interaction.
* **Structured data** that was extracted from the conversation, which can be used for further analysis and insights.

You can access the **"Summary"** only if you have enabled [AI Summary](/ai-productivity/ai-summary), and "Structured Data" only if you have configured [Structured data extraction](/ai-productivity/ai-summary/structured-data).

### Search conversations

Use the search bar to find conversations containing specific words or phrases. Enclose terms in quotes for exact matches.
Search functionality

### Filter conversations

You can filter conversations using the following criteria:

* **Agent**: Filter by the GenerativeAgent handling the conversation
* **Agent ID**: Search for conversations by a specific GenerativeAgent ID
* **Conversation ID**: Search for a specific conversation
* **Customer ID**: Find conversations involving a specific customer
* **Date range**: Select specific time periods
* **Escalation from GenerativeAgent**: Find conversations that were escalated to a human agent
* **Functions**: Locate conversations that called particular APIs
* **Flags**: Find conversations flagged for quality issues
* **Goal Completion**: Filter conversations by whether customer goals were met
* **HILA Consult**: Find conversations that involved Human in the Loop assistance
* **Intent**: Search for conversations involving specific intents
* **Internal Conversation ID**: Search using internal conversation identifiers
* **Structured Data**: Filter conversations based on extracted structured data points
* **System Transfers**: Find conversations that involved system transfers
* **Task**: Find conversations where specific tasks were performed

Filter options

#### Finding Flagged conversations

Find both manually flagged conversations and those flagged by automated systems by using the "GenerativeAgent Flags" filter from the [filter](#filter-conversations) options.

Flags filter

You can further refine your search by selecting specific flag types (manual or automated), tags, and severity levels.

## Review conversations

You can review a conversation by playing it back to understand the flow of the interaction and how GenerativeAgent responded to different customer inputs. You can also manually annotate specific turns or messages that require attention to provide feedback, identify issues, or highlight important moments in the conversation.
### Playback a conversation

You can play back a conversation to listen to how GenerativeAgent interacted with the customer over time. Conversation Playback is useful for understanding the flow of the conversation and how GenerativeAgent responded to different customer inputs.

Conversation Playback enables you to:

* Play, pause, and navigate through the conversation timeline.
* Skip forward or backward by 15 seconds.
* Play the conversation at different speeds such as 1x, 1.5x, and 2x.

Conversation Playback

### Manually annotate a conversation

You can add manual annotations to a conversation to flag specific turns or messages that require attention. This is useful for providing feedback, identifying issues, or highlighting important moments in the conversation. Manual annotations are added with the [Manual Annotation Evaluator](/generativeagent/observe/evaluators/manual-annotation), which allows you to add flags and comments to specific turns in the conversation.

Manual annotations include:

* **Flags**: Mark specific turns or messages with predefined categories and severity levels (e.g., critical, major) to indicate quality issues or areas for improvement.
* **Comments**: Add detailed comments to provide context and rationale for the flags, which can help in understanding the issue and guiding improvements.
* **Flag refinement**: For turns already flagged by automated systems, you can confirm, recategorize, dismiss, or update the rationale to refine the annotation and improve evaluator training data.
* **Classification**: Use tags to classify the type of issue (e.g., policy violation, incorrect information, escalation required) for better organization and analysis.

Add Manual Annotation

You can find the manually flagged conversations by using the appropriate [filters](#finding-flagged-conversations). For more information on manual annotations, see [Manual Annotation Evaluator](/generativeagent/observe/evaluators/manual-annotation).
## Analyze conversations

Use the [Goal Completion](/generativeagent/observe/evaluators/goal-completion) and [Conversation Monitoring](/generativeagent/observe/evaluators/conversation-monitoring) evaluators to analyze conversations based on specific quality metrics. Evaluators help you analyze the quality of conversations, goal completion, and other important aspects of GenerativeAgent's performance.

### Goal Completion

Evaluate whether customer goals were successfully achieved during conversations. Goals are specific objectives that customers aim to accomplish during their interactions with GenerativeAgent, such as resolving an issue, obtaining information, or completing a transaction.

To evaluate goal completion, use the "Goal Resolution" and "Conversation Resolution" filters from the [filter](#filter-conversations) options to narrow down conversations by their [topline completion status](/generativeagent/observe/evaluators/goal-completion#topline-completion-status), by [specific assessments](/generativeagent/observe/evaluators/goal-completion#specific-assessments), or by a combination of both.

Goal Completion Filter

The goals associated with each conversation are displayed in the **Goal Resolution** section of the **About** tab. Each goal displays its completion status, including specific assessments, and goals are listed in the order the evaluator detected them. Review the goals to understand the root cause of incomplete or unmet goals.

Goal Completion Review

See [Goal Completion](/generativeagent/observe/evaluators/goal-completion) for detailed instructions on how to use this evaluator.

### Conversation Monitoring

The Conversation Monitoring evaluator enables you to monitor conversations and identify issues that may impact the quality of customer interactions. Quality evaluators assess each turn for adherence to configured tasks and knowledge.
When evaluators detect issues, they:

* Flag the conversation
* Highlight problematic utterances
* Provide rationale for flagging

You can find the system-flagged conversations by using the appropriate [filters](#finding-flagged-conversations).

Once a conversation is flagged, you will find an **inline indicator** near the relevant part of the conversation flow. You can analyze flags in the **Quality** tab of the conversation review interface. Flags can be **customized** by changing their severity level (e.g., from "major" to "critical") or dismissed if they are false positives.

Quality tab

See [Conversation Monitoring](/generativeagent/observe/evaluators/conversation-monitoring) for detailed instructions on how to use this evaluator.

## Analyze Model Actions

Once you have found a conversation, you can see exactly how GenerativeAgent makes decisions by viewing its internal reasoning process. Model actions are the input, knowledge, API calls, reasoning, and output of GenerativeAgent's model while handling the customer interaction. The information in the model actions can drive how you update the configuration of your tasks and functions.

When looking at a conversation, model actions are displayed inline with the conversation flow. This allows you to understand exactly when and why the AI made specific decisions during the interaction.

To analyze model actions:

1. Open a conversation
2. In the center panel, enable the model actions you want to review

Model actions filter

3. View the AI's reasoning process inline with the conversation, showing the chronological flow of decisions
4. Click any model action to see detailed information. This example shows a function response.

Model action details

You can also see the "Raw" JSON interaction between GenerativeAgent and the function.
Model actions in conversation

### Model actions categories

Model actions are categorized into the following. When you enable a model action category, multiple model actions with the same category may be displayed; for example, enabling Functions shows both a "Function Call" for the request and a "Function Response" for the response.

* Actions that occur when GenerativeAgent needs authentication data to call an API. Only used for API Connections that require [client data](/generativeagent/configuring/connect-apis/authentication-methods#client-authentication-data).
* Actions that occur when GenerativeAgent needs to confirm an action with the customer.
* Actions that occur when GenerativeAgent calls a function to handle the customer interaction. The model actions will show:
  * The function name
  * Input parameters
  * Output

  Function calls may appear as multiple entries in the model action stream. You can also see the "Raw" JSON interaction between GenerativeAgent and the function.
* Actions that occur when GenerativeAgent encounters an error.
* Actions that occur when GenerativeAgent needs consultation from a human agent as part of [Human in the Loop](/generativeagent/human-in-the-loop).
* The [input variables](/generativeagent/configuring/tasks-and-functions/input-variables) that GenerativeAgent uses to call a function.
* The Knowledge Base articles that GenerativeAgent was given to answer the customer's question.
* Actions that occur when GenerativeAgent determines that the customer's question is out of scope for the current task.
* The task that GenerativeAgent is entering or changing into in order to handle the customer interaction.
* GenerativeAgent's internal thoughts and reasoning process.
* Actions that occur when GenerativeAgent performs a [transfer to agent](/generativeagent/configuring/tasks-and-functions/system-transfer#transfer-to-agent) to escalate the conversation to a human agent.
* Actions that occur when GenerativeAgent determines that the customer's input is unsafe.
* Actions that occur when the system detects an unsafe message from GenerativeAgent.
* Actions that occur when GenerativeAgent transfers conversation control to an external system using a [System Transfer function](/generativeagent/configuring/tasks-and-functions/system-transfer).
* A system-generated instruction that provides essential context to GenerativeAgent and serves as a trigger for various types of responses, including reasoning-based responses, safety check validations, and predefined messages (e.g., welcome prompts). This instruction acts as the foundational context that guides the agent's behavior and response generation process.

## Share a conversation

You can share a conversation with others by clicking the "Copy Link" button when viewing a conversation.

Copy link

You can also share your current filtered view by copying the URL of your current page.

## Next Steps: Explore Evaluators

Evaluators help you review and analyze conversations based on specific quality metrics. You can use evaluators to perform an in-depth analysis of GenerativeAgent's performance.

Manually annotate conversations to provide feedback and identify quality issues. Evaluate whether customer goals were successfully achieved during conversations. Monitor and review conversations for compliance and quality assurance.

# Deploying to GenerativeAgent

Source: https://docs.asapp.com/generativeagent/configuring/deploying-to-generativeagent

Learn how to deploy GenerativeAgent.

After importing your Knowledge Base and connecting your APIs to GenerativeAgent, you need to manage deployments for GenerativeAgent's use. You can deploy and undeploy articles and API Connections in the GenerativeAgent UI. There are also options to view version history and roll back changes in the UI.

You must deploy Articles or Functions separately from each other.
## Environments

The GenerativeAgent UI offers the following environments to deploy, undeploy, or roll back:

* **Draft**: In this environment, you can try out any article or API connection.
* **Sandbox**: This environment works as a staging version to test GenerativeAgent's responses. You can test the behavior of GenerativeAgent and how it performs tasks or calls functions before deploying to a live environment.
* **Production**: When you deploy to this environment, GenerativeAgent is live, collaborating in flows and taking over tasks within your Production environments.

For any version or environment, you can deploy Articles. API Connections are tested via Trial Mode. This way, you are able to test how GenerativeAgent behaves with a specific article, resource, or API Connection.

## GenerativeAgent Versions

As we continue to update GenerativeAgent, we will release new versions of the core system. You can manage which version of GenerativeAgent is deployed for your organization with Pinned Versions.

On the Settings page, you can choose which version of GenerativeAgent you want to test in the [Previewer](/generativeagent/configuring/previewer) by selecting a specific version from the Version selector. This allows you to test how GenerativeAgent would behave under a new version.

* The `Default` version will always point to the latest version of GenerativeAgent.
* Versions with a `stable` badge have been thoroughly tested and will not change.
* Versions with a `beta` badge are in development and may change. Eventually they will become `stable`.

Your GenerativeAgent will use the `Default` version if no other version is pinned. Using the `Default` version ensures that GenerativeAgent is always using the safest version with the latest features. If you do want to manually pin your GenerativeAgent to a specific version, select the version from Settings and deploy the Settings to your production environment.

Older versions will eventually become deprecated.
ASAPP will reach out to you if you are using a deprecated version to communicate timelines and best practices for migration.

## Articles

### Deploy Articles

To deploy content to Sandbox or Production environments:

1. Click on Deploy, then choose the root and the target environments.
2. Write any Release Notes that you deem necessary.
3. For Resource, select Knowledge Base.
4. You will be prompted with a list of all resources pulled from your file. Choose the content you want to upload to the Knowledge Base Tooling.
5. Click on Deploy and the system will save the content in the new version.

You can now see a list of all recently deployed content.

### Undeploy Articles

You can undeploy content from Sandbox or Production environments:

1. Head to the content row and click on the ellipsis, then on Undeploy.
2. Select the environments that should undeploy the resource.

A confirmation message appears every time you successfully undeploy a resource. Keep in mind that undeployed resources can be redeployed via individual deployment.

### View Current Articles and Versions

After clicking on a resource, you can see all of its details. You can also review each resource's details per version.

### View Deployment History

Deployment History shows a detailed account of all deployments across environments for each article. On the Deployment History tab, you can:

1. Toggle between Production and Sandbox to access environment-specific deployments.
2. Filter deployment records by time frames.
3. Manage deployments and roll back to previous versions.

Each deployment entry shows the date, time, type, and a brief description of the deployment.

## API Connections

When you create an API Connection, it is automatically available for GenerativeAgent. You can test resources that use APIs, such as Functions, in the same environments before going live.

### Trial Mode

ASAPP safely deploys new API use cases to production via Trial Mode.
In Trial Mode, if there are multiple APIs configured for a task or a function, GenerativeAgent is only allowed to call the first API. After GenerativeAgent calls an API, it immediately escalates to an agent, which lets you observe GenerativeAgent's behavior after the API call.

Once you and your ASAPP Team are confident that GenerativeAgent is correctly using API Connections, you give GenerativeAgent full access to use the Connection. After that, you start Functional Testing on the next API Connection.

## Rollbacks

Rollback involves reverting a deployed resource to a previous version or state. Rollbacks restore the previous version of the resource, undoing any changes introduced by the most recent update. Version pointers for each resource indicate the new\_version\_number from the chosen deployment for rollback.

### Undeployment

Undeployment is restricted to individual resources (a task, a function, or an article). It is possible to remove resources from specific environments without deploying any version of them. Undeploying a resource does not change the state of the draft, and the system still considers the latest modification of the draft as the latest version. Undeploying also generates a new line item within the deployment history.

If a resource is critical for the functioning of other resources or services, undeployment is blocked to prevent system failures or disruptions.

### Edit History

Each resource has a history of all modifications. You can use Edit History to restore a resource to a past version.

### Resource Deletion

Deleting a resource results in the resource becoming inaccessible and invisible on the list. The system prohibits deletion if there are any dependencies, such as a task utilizing a function. The system does not permit deletion of deployed resources until the resource is undeployed from all the dependent environments to ensure uninterrupted service.
If a resource is critical for the functioning of other resources or services, the system blocks deletion to prevent system failures or disruptions. ## Next Steps With a functioning Knowledge Base Deployment, you are ready to use GenerativeAgent. You may find one of the following sections helpful in advancing your integration: # Functional Testing Source: https://docs.asapp.com/generativeagent/configuring/functional-testing Learn how to test GenerativeAgent to ensure it handles customer scenarios correctly before production launch. Functional Testing is a critical step in evaluating GenerativeAgent after setting requirements for Tasks and Functions. Given the dynamic nature of Large Language Models (LLMs), it's essential to validate that GenerativeAgent works as expected in various scenarios. Testing is the best strategy to ensure reliability and performance before launching any task into production. This testing phase is a crucial part of your integration process. We strongly recommend completing thorough functional testing, with assistance from the ASAPP team, before deploying GenerativeAgent in a live environment. This process involves verifying, validating, and confirming that GenerativeAgent functions as expected across a wide range of potential user interactions. It's helpful to have a high-level overview of how GenerativeAgent works while planning your testing. GenerativeAgent assumes it is engaging with a customer who has a problem it can help resolve. GenerativeAgent uses a combination of: * Task Instructions * API Response Data * Retrieved Knowledge Base Articles If GenerativeAgent cannot help the customer or is unsure about what to do, it will offer to escalate to a live agent. 
### Acceptance Testing Objectives

* Ensure GenerativeAgent does not make mistakes given expected inputs
* Focus on preventing potential hallucinations or bad inputs
* Ensure GenerativeAgent handles expected customer scenarios correctly

You perform Functional Testing after your ASAPP Team has configured GenerativeAgent Tasks and Functions. You will be able to fully integrate GenerativeAgent into your apps after the tests are passed.

## Testing Process

### Pretesting

In the pretesting phase, prepare by working through steps like the following, starting with a sample of production scenarios for this task:

* Read summaries for 100 sample conversations to understand typical conversations within this use case, across both the virtual agent and those that escalate to a live agent
* Have clear should/must-dos for each task
* Have a clear idea of the things that GenerativeAgent should do vs. must do within each task
* Keep in mind the common scenarios you expect users to go through based on the sample of real conversations
* Identify clear test users to do the testing
* Consider the permutations of test data that are important to cover. For example:
  * Someone with a flight canceled a few minutes ago
  * Someone with two flights, one which is canceled and one which is not
  * Someone with elite status vs. someone with no status

### Testing GenerativeAgent

Once you've completed the pretesting phase, you're ready to start testing GenerativeAgent itself. This phase involves simulating real-world scenarios and interactions to ensure GenerativeAgent performs as expected.
Here are some key points to keep in mind: * Aim to test approximately 100 conversations per use case * Go through the expected conversation scenarios, as relevant, for each of the test users * Make sure to operate in a manner that is consistent with the data in the test account you are using * Formulate questions, based on the sample of conversations, that aim to test the knowledge articles available to GenerativeAgent * Plan to repeat some scenarios with slight variation to ensure GenerativeAgent responses are consistent (though no response is likely to ever be exactly the same due to its generative nature) ## Example Test The following is an example scenario of Functional Testing for a task. ### Test Scenario If a customer asks about their flight status, GenerativeAgent should provide the relevant details. ### Preconditions A correct confirmation number and last name ### Test Procedure 1. IF a customer asks about their current flight status 2. THEN GenerativeAgent will invoke the flight\_status task 3. AND GenerativeAgent will request the necessary criteria to look up the customer's flight details 4. AND if the customer provides a valid confirmation number and last name 5. THEN GenerativeAgent will call the appropriate API 6. AND GenerativeAgent will retrieve the required information 7. AND GenerativeAgent will inform the customer of their current flight status based on the API response ### Test Objectives 1. Confirm that GenerativeAgent correctly invokes the flight\_status task 2. Verify that GenerativeAgent identifies the necessary information from the customer to verify the flight 3. Ensure that GenerativeAgent requests the required information (confirmation number and last name) 4. Check that the appropriate API is called 5. Validate the information provided by the customer through the API 6. Ensure GenerativeAgent gathers the necessary flight status information 7. 
Confirm GenerativeAgent accurately communicates the flight status to the customer This example illustrates the "happy path." But there are other scenarios such as: what if the customer only provides a confirmation number? Can they provide alternative information? What if the customer doesn't have a confirmation number? Consider other potential scenarios and instructions to test against. ## Next Steps With correct Acceptance Testing, you are ready to support real users. You may find one of the following sections helpful in advancing your integration: # Previewer Source: https://docs.asapp.com/generativeagent/configuring/previewer Learn how to use the Previewer in AI Console to test and refine your GenerativeAgent's behavior The Previewer is a testing and simulation tool that allows you to try out different configurations of your GenerativeAgent's behavior. The Previewer makes it easy to rapidly iterate on GenerativeAgent's design and provides a quick tool to test GenerativeAgent's capabilities. Opening the Previewer in AI Console ## Using Previewer To use the Previewer, select one of two modes: 1. **"Talk to GenerativeAgent"** to manually interact with GenerativeAgent by typing out messages as the customer. 2. **"Simulate a customer and conversation"** to automatically generate a conversation from a Test Scenario. This is useful when you want to test GenerativeAgent's behavior with a specific set of data. Starting the conversation will run the Test scenario. To use 'Simulate a customer and conversation' the goals need to be populated in your selected [Test Scenario](/generativeagent/configuring/tasks-and-functions/test-scenarios). You can also select the [previewing environment](#previewer-environment) that GenerativeAgent uses to test and preview a conversation with GenerativeAgent. This defaults to Draft. 
### Previewer Environment

Choose the [Environment](/generativeagent/configuring/deploying-to-generativeagent#environments) that GenerativeAgent uses to test and preview a conversation. Choose between:

* Draft
* Sandbox
* Production

## Talk to GenerativeAgent

"Talk to GenerativeAgent" enables you to manually interact with GenerativeAgent. This is useful for initial testing of your task and function configurations.

### Test Scenario Type

When directly talking to GenerativeAgent, you can choose what kind of data GenerativeAgent will use for its function calls by selecting the "Scenario type":

* **Test Scenario**: This uses the data from a previously created [Test Scenario](/generativeagent/configuring/tasks-and-functions/test-scenarios) where you have already defined the simulated mock data that a function would return. This allows you to try out different Tasks and iterate on task definitions or on Functions without concern of hitting actual APIs.
* **External Endpoint**: This will use the actual API Connections, allowing you to test GenerativeAgent using real APIs and data.

Most preview testing uses Test Scenarios, as they make design iteration faster. The External Endpoint is helpful for final QA testing and pre-launch validation.

### External Endpoint

When using the External Endpoint, you can provide:

* User ID: The ID of the user for the conversation. This is needed because ASAPP's APIs require it and many APIs rely on it.
* Task Name: The [specific task for GenerativeAgent to enter](/generativeagent/configuring/tasks-and-functions/enter-specific-task).
* Input Variables: The [input variables data](/generativeagent/configuring/tasks-and-functions/input-variables) that GenerativeAgent will use to perform the Task. Input variables can be submitted as key-value pairs in JSON format.

## Observing GenerativeAgent's Behavior

Previewer gives you insight into the actions that GenerativeAgent is taking with the **Turn inspector**.
This includes its thoughts during the conversation, the Knowledge Base articles it references, and the API calls it makes. Use the Turn Inspector to examine how instructions are processed within GenerativeAgent. Turn Inspector includes detailed visibility into: * Active Task Configuration * Current reference variables * Precise instruction parsing * Function call context and parameters * Execution state at each conversational turn ### Using Live Preview The Live Preview feature allows you to test changes in real-time during a conversation. You have the ability to: * **Regenerate a response**: For a given bot response, regenerate it using the latest state of the draft settings. * **Send a different message**: For a given customer message, change what is sent to see how GenerativeAgent would respond with that conversation context. Live Preview feature in AI Console Previewer ### Replaying Conversations During testing and configuration, you may want to replay conversations while trying out changes or validating GenerativeAgent across new versions. In Previewer, you can save the conversation to replay it again in the future. Save conversation option in AI Console Previewer ## Next Steps You may find one of the following sections helpful in advancing your integration: # Tasks Best Practices Source: https://docs.asapp.com/generativeagent/configuring/task-best-practices Improve task writing by following best practices Before integrating GenerativeAgent, define the tasks and functions that GenerativeAgent will perform for your organization. **Tasks** represent the issues or actions you want the generative agent to handle. Each task consists of **instructions** and the **functions** needed to perform those instructions. * **Instructions** define the business logic and acceptance criteria for a task. * **Functions** are the tools (such as APIs) needed to perform a task according to its instructions. 
The goal of all instructions is to deliver the desired outcome using the minimum number of expressions. ## Best Practices Clearly defining tasks is essential for configuring GenerativeAgent. GenerativeAgent acts on the tasks you ask it to perform and solves customer issues across your apps. When writing or defining tasks, keep the following practices in mind: ### Know where to place information Deciding which information belongs in tasks versus in the Knowledge Base can be challenging. Use this rule of thumb: * **Task instructions** define procedures and courses of action for GenerativeAgent. > Example: "Flip a coin. The result of coin\_flip determines whether the customer starts the game." * **Knowledge Base articles** contain information and guides on how to operate during an action. > Example: "Flipping coins must use quarters. Mark the result after the coin falls into your hand and stops moving. If the coin falls from your hand, the result is null." For example, a task that uses the `refund_eligibility` API would be: ``` Use the refund_eligibility API to check if the purchase is eligible for a refund. If eligible, ask the customer if they want store credit or a refund to their original payment method ``` A Knowledge Base article for this task would be: ``` Refunds typically take 7-10 days to appear on credit card statements. Store credit will be sent via email within one hour of issuing the refund. ``` ### Format Instructions Use clear instructions for tasks. Be consistent in how you use formatting elements, such as headers or bullet/numbered lists. Use markdown for task definitions. 
* Use headers to organize sections within the instructions * Use lists for clarity ```json theme={null} # Headers organize task sections - Use headers to break down complex tasks into sections - Use bullet points for clarity within each section -- Use sub bullet points for further clarification on a specific point # Task Format Example - Verify the customer's order ID - Use the order_status API to retrieve current status - Communicate the status to the customer -- If the status is pending, ask the customer to check back later. ``` ### Provide Resolution Steps Enumerate the steps that GenerativeAgent needs to resolve a task. This provides a logical flow of actions that GenerativeAgent can follow to be more efficient. Just as a human agent needs to check, read, resolve, and send information to a customer, GenerativeAgent needs these steps to be clearly defined. ```json theme={null} # Steps to take to check order status 1. Verify Purchase Eligibility - Check the purchase date to ensure it is within the 30-day refund policy. - Verify that the item is eligible for a refund 2. Gather Necessary Information - Ask the customer for their order ID. 3. Check Order Status - Call the `order_status` function to retrieve the current status of the order. - Confirm that the order is eligible for a refund. ``` ### Define Functions to Call Functions represent the set of APIs needed alongside their instructions. GenerativeAgent invokes functions to perform the necessary actions for a task. Task instructions must specify how and when GenerativeAgent should invoke a function. Here is an example of how to reference functions in task instructions: Within the "FlightStatus" task, functions might include: * `trip_details_extract_with_pnr`: Retrieves flight details using the customer's PNR and last name. * `trip_details_pnr_emails`: Handles email addresses associated with the PNR. * `send_itinerary_email_as_string`: Sends the trip receipt or itinerary to the customer via email. 
Here is how the task instruction would be outlined to use the function: ```json theme={null} "The function `trip_details_extract_with_pnr` is used within the 'FlightStatus' task to retrieve the current schedule of a customer's flight using their confirmation code and last name." ``` ### API Return Handling Provide instructions for handling API call responses after performing a function. Use the syntax `(data["APICallName"])` to reference specific data returned from an API call. Here is an example of API Return Handling: ```json theme={null} When called, if there is a past due amount, you MUST tell them their exact soft disconnect date (data["softDisconnectDate"]), and let them know that after that day, their service will be shut off, but still be easy to turn back on. ``` ### State Policies and Scenarios Clearly define company policies and outline what GenerativeAgent must do in various scenarios. Stating policies ensures consistency and compliance with your organization's standards. Remember that a good portion of policies can be taken from your Knowledge Base. ```json theme={null} # Refund eligibility - Customers can request a refund within 30 days of purchase. - Refunds will be processed to the original payment method. - Items must be returned in their original condition. # Conversational Style - Always refer to the customer as "customer." - Do not address the customer by their name or title. ``` ### Ensure Knowledge Base Resourcing Ensure that GenerativeAgent uses your Knowledge Base either through an API or the Knowledge Base tooling in the GenerativeAgent UI. Reference Knowledge Base resources within the task so GenerativeAgent can access them during conversations. You can test GenerativeAgent's behavior using the Previewer. Store task-related information in the Knowledge Base with metadata tags. Use metadata to ensure certain articles are only used by specific tasks. 
If an article and a task have the same metadata tags, GenerativeAgent will filter and only use that relevant information during a conversation. ### Outline Limitations Be clear about the limitations of each task. Provide instructions for handling customer requests that go beyond a task's limits. This helps GenerativeAgent manage customer expectations, provide alternative solutions, and switch to tasks that align with the customer's needs. ```json theme={null} # Limitations - Cannot process refunds for items purchased more than 30 days ago. - Redirect customers to the website for refunds involving gift cards. - No knowledge of specific reasons for payment failures. ``` ### Use Conditional Templates Use [conditional templating](/generativeagent/configuring/tasks-and-functions/conditional-templates) to make parts of the task instructions conditional on reference variables determined from API responses. This ensures that only the contextually relevant task instructions are available at the right time in the conversation. ```json theme={null} {% if data["refundStatus"] == "approved" %} - Inform the customer that their refund has been approved and will be processed shortly. {% elif data["refundStatus"] == "pending" %} - Let the customer know that their refund request is pending and provide an estimated time for resolution. {% endif %} ``` ### Use Reference Variables [Reference variables](/generativeagent/configuring/tasks-and-functions/reference-variables) let you store and reuse specific data returned from function responses. They are powerful tools for creating dynamic and context-aware tasks. 
Once a reference variable is created, you can use it to: * Conditionally make other functions available * Set conditional logic in prompt instructions * Compare values across different parts of your GenerativeAgent workflow * Control function exposure based on data from previous function calls * Toggle conditional instructions in your task's prompt depending on returned data * Extract and transform values without hard-coding logic into prompts or code For example: ```json theme={null} val == "COMPLIANT" → returns True if the string is "COMPLIANT" val == true or val == false → checks if the value is a boolean true/false val is not none and val|length > 0 → returns True if val has length > 0 ``` ### Create Subtasks Some tasks are larger and more complex than others. GenerativeAgent is more efficient with cohesive and direct tasks. A good practice for complex tasks is to divide them into subtasks. For example, to process a refund for a customer, GenerativeAgent might need to: * Confirm the customer's status * Confirm the policies allow for the refund * Process the refund ```json theme={null} For a customer seeking a refund, consider splitting the task into: OrderStatus: To check the status of the order and communicate the results to the customer. IssueRefund: To gather the information necessary to process the refund and actually process the refund. ``` ### Call Task Switch Sometimes GenerativeAgent needs to switch from one task to another. Be explicit about which tasks to switch to based on the context. ```json theme={null} # Damage Claims - For claims regarding damaged products, use the 'DamageClaims' task # Exchange Requests - For exchange inquiries, use the 'ExchangeProducts' task # No pets rule - (#rule_1) no dogs in the house - (#rule_2) no cats outside - (#rule_3) if either #rule_1 or #rule_2 are broken escalate to agent. ``` ### Outline Human Support State scenarios in which GenerativeAgent needs to escalate the issue to a human agent. 
This ensures GenerativeAgent's role in your organization is clearly defined. ```json theme={null} # Escalate to a Human Agent - Refunds involving high-value items. - Refunds where payment method issues are detected. ``` You can also state scenarios for HILA: ```json theme={null} # Call HILA and wait on approval - Refunds of purchases older than 30 days - Cancellation of high-value purchases ``` ### Keep It Simple Keep task instructions focused and concise. The more details you add to tasks, the greater the chance that essential instructions could be overlooked or diluted. If instructions are too long or complex, GenerativeAgent might not follow the most important steps precisely. We recommend not placing too much task-relevant information directly in the task. Instead, use the other tools GenerativeAgent provides, such as metadata, functions, and the Knowledge Base. We do not recommend directly uploading an internal agent-facing knowledge base to the GenerativeAgent Knowledge Base. GenerativeAgent's Knowledge Base is meant for GenerativeAgent's own use; guidance written for human agents is better expressed as task instructions. # API Functions Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/api-functions Connect to your existing APIs to fetch data or perform actions with API Functions. API Functions enable GenerativeAgent to call your existing APIs to fetch data or perform actions. They are the most common function type for integrating with backend systems and allow your agent to interact with your existing infrastructure without requiring additional development work. By using an API Function, you can: * Connect to your existing APIs without creating new simplified interfaces * Fetch real-time data from your backend systems * Perform actions like creating records, updating accounts, or processing transactions * Leverage your current infrastructure without additional development To create an API function: 1.
[Create a function](#step-1-create-a-new-function) 2. [Select an API connection](#step-2-select-an-api-connection) 3. [Specify name and purpose](#step-3-specify-the-name-and-purpose-of-the-function) 4. [Configure optional settings](#step-4-configure-optional-settings) 5. [Use the function in a task](#step-5-using-the-api-function-in-a-task) ## Step 1: Create a New Function Navigate to the Functions page and click "Create Function." 1. Select "Connect to an API" and click "Next: Choose an API" ## Step 2: Select an API Connection Under "Choose an API": 1. Select one of your existing [API connections](/generativeagent/configuring/connect-apis) 2. Click "Next: Function details" If you don't have any API connections yet, you'll need to [create one](/generativeagent/configuring/connect-apis) first or create a [Mock API Function](/generativeagent/configuring/tasks-and-functions/mock-api). ## Step 3: Specify the Name and Purpose of the Function * **Function Name**: Provide a concise, unique name, using underscores (e.g., `get_account_balance`). By default, the function name will be the same as the API connection name. * **Function Purpose**: Briefly describe what the function does (e.g., "Retrieves the current account balance for a customer"). * GenerativeAgent uses this description to determine if/when it should call the function. Click "Create Function" to create the function. You'll be taken to the function detail page where you can configure additional settings. 
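The naming guidance above (a concise, unique name, using underscores) can be checked mechanically. A minimal sketch, assuming a snake_case convention; the pattern below is an illustration, not a rule enforced by the product:

```python
import re

# Hypothetical check for the suggested underscore naming style,
# e.g. get_account_balance. Not an ASAPP-enforced rule.
SNAKE_CASE = re.compile(r"^[a-z][a-z0-9]*(_[a-z0-9]+)*$")

def is_snake_case(name: str) -> bool:
    """Return True if the name is lowercase words joined by underscores."""
    return SNAKE_CASE.match(name) is not None

print(is_snake_case("get_account_balance"))  # follows the guidance
print(is_snake_case("Get Account Balance"))  # spaces and capitals do not
```

A check like this is handy in CI if you manage many function definitions and want consistent names across teams.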
## Step 4: Configure Optional Settings After creating the function, you can configure additional fields to enhance the function's behavior: * **Message before sending**: Display a message to the user before calling the API * **Confirmation message**: Show confirmation after successful API calls * **Reference variables**: The API function response will be part of the conversation context for GenerativeAgent to reference, but you can optionally specify a specific field as a reference variable to either reference it in a [Conditional Template](/generativeagent/configuring/tasks-and-functions/conditional-templates), or to include it when you call a [System Transfer Function](/generativeagent/configuring/tasks-and-functions/system-transfer). The function detail page shows the function configuration and any available API endpoints. The function will call the real API during interactions. Make sure your API connection is properly configured and accessible. ## Step 5: Using the API Function in a Task Once you have created your API function, you must add the function to the task's list of available functions for GenerativeAgent to use it. GenerativeAgent will call the function when it determines the API call is needed to complete the task. Here's how the function works within a task and conversation flow: 1. GenerativeAgent analyzes the user's request and determines if it needs an API call 2. (Optional) The system can display a "Message before Sending" to the user 3. GenerativeAgent calls the API function with the appropriate parameters 4. The system processes the API response and can use it to generate a response to the user 5. (Optional) The system can display a "Confirmation Message" after successful API calls ```jinja theme={null} # Objective Help a customer check their account balance and recent transactions. # Context - Customer wants to know their current balance - They may also want to see recent transactions # Instructions 1. 
**Gather Account Information:** - Ask for the customer's account number or phone number - Store "account_number" once provided 2. **Check Account Balance:** - Call the `get_account_balance` function with the account number - Display the current balance to the customer 3. **Show Recent Transactions (if requested):** - If the customer asks about recent activity, call the `get_recent_transactions` function - Display the transaction history in a user-friendly format ``` ## Next Steps Learn how to create and configure API connections for your functions. Start with mock APIs for testing before connecting to real systems. Learn how to store and manipulate conversation data with Set Variable Functions. Test your API functions in real-time with the Previewer tool. # Bricks Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/bricks A modular system for composing prompts from reusable and configurable fragments Bricks are reusable prompt fragments that are managed centrally and can be inserted into task instructions. This enables you to author, reuse, and safely update modular prompts across GenerativeAgent tasks, reducing duplication, inconsistency, and risk. Each brick is traceable and synchronized across tasks, unlocking faster onboarding, more consistent customer experiences, and safer experimentation. Bricks Overview ### When to use Bricks Use bricks for any content you want to reuse across multiple tasks: * Greetings and standard opening messages * Disclaimers and legal notices * Business policies and guidelines * Company information and context * Tone guidelines for customer interactions * Common instructions that don't change ## Configuring Bricks Here is a step-by-step overview of how to configure and use Bricks: 1. Navigate to **Bricks** under the **Build** section. 2. Click on **Create Brick**. 3. Under **Variable Name**, provide a unique name for the Brick. 4. Under **Description**, provide a brief description of the Brick's purpose. 5. 
Click **Create** to save the Brick. 6. Add the prompt of the Brick in the text area. 7. Click **Save** to save the content of the Brick. Bricks support [conditional templates](/generativeagent/configuring/tasks-and-functions/conditional-templates), which enable you to reference variables and implement conditional logic in your bricks. When using a variable, ensure you have set the variable within the task instructions. Reference bricks wherever [Jinja variables](/generativeagent/configuring/tasks-and-functions/conditional-templates) are supported, including prompts and other text fields. For this example, we will reference the Brick in a Task. 1. Navigate to **Tasks** under the **Build** section. 2. Select an existing Task or create a new one. 3. In the Task editor, navigate to where you want to insert the Brick (e.g., Task Instructions, Talker Prompt, Reasoner Prompt). 4. Use the Jinja syntax to reference the Brick by its variable name, e.g., `@brick{brick_variable_name}`. 5. Click **Save** to save the Task. Use the Previewer to test how updated Bricks affect the behavior of GenerativeAgent. 1. Click on **Deploy** to deploy the updated tasks with the new Brick content. 2. Under **Select Resource**, choose **General Configurations**. 3. Select the variable names of the Bricks you want to deploy. 4. Click **Deploy** to apply the changes. ## Updating Bricks When a Brick is updated and deployed, all tasks that reference that Brick automatically receive the updated content upon deployment. This ensures consistency across all tasks using the same Brick; tasks that reference the Brick do not need to be redeployed. Before making changes or deploying updates to a Brick, you can check which tasks reference that Brick to understand the impact.
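The update behavior can be pictured as simple substitution: every task that references a Brick picks up the Brick's latest content at render time. A rough Python sketch, assuming the `@brick{variable_name}` reference syntax shown above (the storage and rendering details here are illustrative, not ASAPP's implementation):

```python
import re

# Central brick store (illustrative); keys are Brick variable names.
bricks = {
    "greeting": "Thank you for contacting support.",
    "disclaimer": "Refunds follow our published return policy.",
}

def render(prompt: str, bricks: dict) -> str:
    """Replace each @brick{variable_name} reference with the Brick's current content."""
    return re.sub(r"@brick\{(\w+)\}", lambda m: bricks[m.group(1)], prompt)

task_prompt = "@brick{greeting} Handle the refund request. @brick{disclaimer}"
before = render(task_prompt, bricks)

# Updating a Brick centrally changes every task that references it,
# without editing the task prompts themselves.
bricks["disclaimer"] = "Refunds over $100 require supervisor approval."
after = render(task_prompt, bricks)
```

This is why tasks referencing a Brick do not need to be redeployed: the reference, not the content, lives in the task.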
References to Tasks in Bricks ## Next Steps Learn how to use reference variables to store and reuse data from function responses Learn how to build your first GenerativeAgent that can use your Knowledge Base to start answering your users' questions. # Conditional Templates Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/conditional-templates Conditional Templates allow you to use saved values from API calls to change the instructions for a given task. GenerativeAgent uses conditional templates to render API and Prompt instructions based on the conditions and values in each template. Conditional Templates are written as conditional statements in the Jinja2 templating language. Head to the [Jinja Documentation](https://jinja.palletsprojects.com/en/3.0.x/templates/) to dive further into conditional statements. ## Write Conditional Templates Conditional templating supports rendering based on the presence of data in an API Response Model Action. The system pulls this data at run-time from the input model context (list of ModelActions) and stores it in reference variables that you can use in Jinja2 conditional statements. Rendering based on ModelActions that are not API Responses requires further help from your ASAPP team. Write a Conditional Template: 1. Identify the Function and the keypath to the value from the API response you want to use for conditional rendering. 2. Add a reference variable to the list of `reference_vars` for the Function in the company's functions.yaml. It should include `name` and `response_keypath` at minimum, with the `response_keypath` format being `response.<keypath>`. You can optionally define a transform expression with `val` as the keypath value to be transformed. Note that these reference variables are used across the company's Tasks, so the `name` parameter must be unique. 3.
Use the conditional in two places by referencing vars.get("my\_reference\_var\_name"): * In a Task, add Jinja2 conditional statements in the prompt\_instructions and define conditions for each TaskFunction so they only render when the condition evaluates to True. * Conditions on TaskFunctions are optional, and functions will always render in the final prompt if no conditions are provided. * In a Function, add Jinja2 conditional statements to the Function's description. ## Available Variables The following variables are available to use in the conditional templates: | Variable | Description | Example | | -------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- | | `vars` | A dictionary of [reference variables](/generativeagent/configuring/tasks-and-functions/reference-variables). | `vars.get("my_reference_var_name")` | | `input_vars` | A dictionary of [input variables](/generativeagent/configuring/tasks-and-functions/input-variables). | `input_vars.get("my_input_var_name")` | | `function_responses` | A dictionary where keys are function names and values are the corresponding function response data. If the system finds no function responses in the actions, it returns an empty dictionary. | `function_responses.get("my_function_response_name")` | | `current_datetime` | The current date and time. | `current_datetime` | | `channel_type` | The type of channel the conversation is on. Can be either `voice` or `messaging`. | `channel_type` | | `queue_status` | A dictionary of queues and a boolean of whether the queue is open. 
| `queue_status.get("general_queue")` | ## Use Case Example - Mobile Bill This use case example makes GenerativeAgent behave as follows: * If CPNI compliance is unknown, only render the identity API without its description about checking the response field "data\['cpniCompliance']", and render in the prompt\_instructions that tell the LLM it must first confirm the customer is CPNI compliant. * If a customer is not CPNI compliant, only render the identity API with its description about checking the response field "data\['cpniCompliance']", and do not render in the prompt\_instructions that tell the LLM it must first confirm the customer is CPNI compliant. * If a customer is CPNI compliant, do not render the identity API and render the APIs that require CPNI compliance instead, and do not render in the prompt\_instructions that tell the LLM it must first confirm the customer is CPNI compliant. ```json theme={null} identity: name: identity lexicon_name: identity-genagent lexicon_type: entity description: |:- Use this API call to determine whether you can discuss billing or account information with the customer. {%- if not vars.get("compliance_unknown") and not vars.get("is_compliant") %} - If the data['cpniCompliance'] does not return "COMPLIANT", you cannot discuss account or billing information with the customer. {%- endif %} message_before: Give me a few seconds while I pull up your account. reference_vars: - name: is_compliant # this variable to be used in conditions response_keypath: response.cpniCompliance # keypath to the value from the response transform: val == "COMPLIANT" # val is passed in from the keypath for the transform - name: compliance_unknown response_keypath: response.cpniCompliance transform: val == None ``` ```json theme={null} dname: MobileBill selector_description: For Mobile billing inquiries only, see the current billing situation and status of your Spectrum mobile account(s), including dues, balances, important dates and more. 
prompt_instructions: |:- - If the customer expresses anything about their question not being answered (EXAMPLES: "That didn't answer my question" "My question wasn't answered"), *before doing anything else* ask them for more details - The APIs in these instructions and the information they return MUST only be used to answer basic questions about a mobile bill or statements. - They MUST NOT be used to answer any out-of-scope concerns like the following: - - To answer questions related to cable (internet, TV, landline), use the command `APICALL change_task(task_name="CableBill")` to switch to the CableBill flow. - - concerns about why services are not working - - concerns about when service will be restored - - inquiries about where bills are being sent, or sending confirmation emails - - updating billing address {%- if vars.get("compliance_unknown") %} - You must confirm that a customer is CPNI compliant before telling them anything about their account or billing info. The only way to do this is via the identity() api as described below. - - Note: Authentication is not the same as being CPNI compliant. You still need to use the identity() api to confirm that a customer is CPNI compliant if they are authenticated. {%- endif %} - Mobile services are billed separately from Cable (Internet, TV, and Home phone) services. functions: - name: identity conditions: not vars.get("is_compliant") - name: mobile_current_balance conditions: vars.get("is_compliant") instructions: |:- - Anytime you call `mobile_current_balance`, you should also call `mobile_current_statement` - name: mobile_current_statement conditions: vars.get("is_compliant") - name: mobile_statements conditions: vars.get("is_compliant") instructions: |:- - When describing payments, your response to the customer must not imply that you know the purpose or reason for any payment or how it will affect the account.
- If you think you have found a payment the customer is referring to, ask the customer if it's the right payment, but do not say anything to confirm the customer's impression of the payment or what it is for. - name: mobile_specific_statement conditions: vars.get("is_compliant") ``` # Enter a Specific Task Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/enter-specific-task Learn how to enter a specific task for GenerativeAgent When GenerativeAgent analyzes a conversation, by default, it automatically selects the appropriate task and follows its instructions. If your system already knows which task to use, you can specify it by using the `taskName` attribute in the [`/analyze` request](/apis/generativeagent/analyze-conversation). ```bash theme={null} curl --request POST \ --url https://api.sandbox.asapp.com/generativeagent/v1/analyze \ --header 'Content-Type: application/json' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --data '{ "conversationId": "01BX5ZZKBKACTAV9WEVGEMMVS0", "message": { "text": "Hello, I would like to upgrade my internet plan to GOLD.", "sender": { "role": "agent", "externalId": 123 }, "timestamp": "2021-11-23T12:13:14.555Z" }, "taskName": "UpgradePlan" }' ``` # Human-in-the-loop Agent Function Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/hila-function Involve a human agent in a conversation with the HILA Function The HILA function lets you create a function that can be referenced in a task to seamlessly involve a human agent in a conversation. This is useful for handling complex queries or situations where human judgment is required. GenerativeAgent will call the HILA function when it determines that human intervention is necessary based on the task configuration and conversation context. By using the HILA function, you can: * Define specific scenarios where human intervention is required. * Ensure that complex or sensitive issues are handled appropriately.
* Improve customer satisfaction by providing a human touch when needed. To create a HILA function: 1. [Create a function](#step-1-create-a-new-function) 2. [Define input parameters](#step-2-define-input-parameters-json) 3. [Configure "Message before sending" (optional)](#step-3-message-before-sending) 4. [Set variables (optional)](#step-4-set-variables) 5. [Save the function](#step-5-save-your-function) 6. [Use the function in a task](#step-6-using-the-hila-function-in-a-task) 7. [Handle the HILA ticket in a conversation](#step-7-handle-the-hila-ticket-in-a-conversation) ## Step 1: Create a New Function Navigate to the Functions page and click "Create Function." 1. Select "Human-in-the-loop agent" and click "Next: Function details" 2. Specify the Name and Purpose of the Function * **Function Name**: Provide a concise, unique name, using underscores (e.g., `get_refund_approval`). * **Function Purpose**: Briefly describe what the function does (e.g., "Sends a message to the human agent to get approval for a refund request"). * GenerativeAgent uses this description to determine if/when it should call the function. ## Step 2: Define Input Parameters (JSON) The input parameters are the values that GenerativeAgent needs to pass when calling this function to create a HILA ticket for the human agent. Under "Input Parameters," enter a valid JSON schema describing the required parameters. GenerativeAgent gathers the necessary information (from user messages or prior context) before calling the function.
```json Example Input Schema theme={null} { "type": "object", "required": [ "customer_id", "issue_description", "refund_amount", "question" ], "properties": { "customer_id": { "type": "string", "description": "The unique identifier for the customer" }, "issue_description": { "type": "string", "description": "A detailed description of the issue that requires human intervention" }, "refund_amount": { "type": "number", "description": "The amount of refund requested" }, "question": { "type": "string", "description": "A question or instruction for the human agent" } } } ``` ## Step 3: Message before sending You can also configure a message that will be sent to the human agent when the HILA function is called. This message can provide context about the issue and any relevant information that the human agent may need to assist the customer effectively. This is optional, but it can help ensure that the human agent has all the necessary information to provide a timely and accurate response to the customer. ## Step 4: Set Variables You can optionally configure one or more reference variables: * Configure variables to rename or transform parameter values * Use [Jinja2](https://jinja.palletsprojects.com/en/stable/) for transformations if needed * Toggle "Include return variable as part of function response" to make variables immediately available **Jinja2 Templates**: Use Jinja2 to transform values before transfer. For example, to convert a string boolean to a proper boolean: ```jinja2 theme={null} true if params.get("is_refund_eligible") == "True" else false ``` ## Step 5: Save Your Function With your function defined, save it by clicking "Create Function". After saving, you'll see a detail page showing the JSON schema and any configured reference variables. ## Step 6: Using the HILA function in a task To use the HILA function, add it to the list of available functions in a task. 
When GenerativeAgent determines that human intervention is needed based on the task configuration and conversation context, it will call the HILA function to create a ticket for the human agent. Here's how the HILA function works within a task and conversation flow: 1. GenerativeAgent analyzes the user's request and determines if it needs human intervention. 2. (Optional) The system sends a "Message before Sending" to the human agent 3. GenerativeAgent calls the HILA function with the appropriate parameters to create a ticket for the human agent. ```jinja theme={null} # Objective - The customer is requesting a refund for a recent purchase. - The refund amount exceeds the threshold for automatic approval, so human intervention is required to review and approve the refund request. # Context - Customer has purchased an item but is unsatisfied and requests a refund. - The refund amount is $150, which exceeds the automatic approval limit of $100. - The customer has provided a reason for the refund request, citing that the product was defective. # Instructions 1. **Gather necessary information**: - Collect the customer's ID, a description of the issue, and the refund amount from the conversation and store them in variables for use in the HILA function. 2. **Call the HILA function**: - When the refund amount is detected to exceed the automatic approval limit, call the HILA function with the following parameters: - `question`: "A customer has requested a refund that exceeds the automatic approval limit. Please review the request and approve or deny the refund based on the provided information." - `customer_id`: The unique identifier for the customer. - `issue_description`: A detailed description of the issue that requires human intervention. - `refund_amount`: The amount of refund requested. 
``` ## Step 7: Handle the HILA ticket in a conversation When the human agent responds to the HILA ticket, GenerativeAgent will receive the response and continue the conversation with the customer based on the human agent's input. The human agent's response can include instructions, approvals, or any other relevant information needed to assist the customer effectively. The response will be stored in the response parameter of the HILA function: * `question`: The original question or instruction sent to the human agent. * `agentResponse`: The response provided by the human agent after reviewing the ticket. GenerativeAgent can then use this information to provide further assistance to the customer, such as confirming the refund approval or providing additional instructions based on the human agent's response. ## Next Steps Learn more about best practices for task and function configuration. Learn how to store and manipulate conversation data with Set Variable Functions. Learn how to set up a Human-in-the-Loop system to seamlessly involve human agents in conversations when needed. Test your HILA functions in real-time with the Previewer tool. # Improving Tasks Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/improving Learn how to improve task performance Tasks have a number of features and tools available to improve performance. Use these resources to optimize your tasks: A list of different strategies and approaches to improve task performance. Use conditional logic to dynamically change the instructions shown to GenerativeAgent. Learn how to have GenerativeAgent enter a specific task. Use Trial Mode to test whether GenerativeAgent can use new Functions correctly before rolling them out fully in production. Use Keep Fields to limit the data saved when calling a function. Learn how to use mock APIs for testing and development. Configure test users for development and testing purposes. 
Learn how to use input variables in your tasks and functions. # Input Variables Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/input-variables Learn how to pass information from your application to GenerativeAgent. Input Variables allow you to provide contextual information to GenerativeAgent when analyzing a conversation. This is the main way to pass information from your application to GenerativeAgent. These variables can then be referenced in the task instructions and functions. Use Input Variables to provide GenerativeAgent with contextual information such as: * Entities extracted from a previous system or API call * Relevant customer metadata * Conversation context, like a summary of previous interactions * Instructions on the next steps for a given task ## Add Input Variables to a conversation To add input variables to a conversation, call [`analyze`](/apis/generativeagent/analyze-conversation) with the `inputVariables` attribute. `inputVariables` is an untyped JSON object, so you can pass any key-value pairs. Make sure the key names you use in `/analyze` match those referenced in the task instructions. With each call, the system adds any new input variables to the conversation context. 
```bash theme={null} curl --request POST \ --url https://api.sandbox.asapp.com/generativeagent/v1/analyze \ --header 'Content-Type: application/json' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --data '{ "conversationId": "01BX5ZZKBKACTAV9WEVGEMMVS0", "message": { "text": "Hello, I would like to upgrade my internet plan to GOLD.", "sender": { "role": "agent", "externalId": 123 }, "timestamp": "2021-11-23T12:13:14.555Z" }, "taskName": "UpgradePlan", "inputVariables": { "context": "Customer called to upgrade their current plan to GOLD", "customer_info": { "current_plan": "SILVER", "customer_since": "2020-01-01" } } }' ``` Once you add the Input Variables to the conversation, they become part of GenerativeAgent's context. GenerativeAgent will consider them when interacting with your users. You can also reference them directly in the task instructions. ``` The customer has a plan status of {{ input_vars.get("customer_info.current_plan") }} ``` Input variables can be used as part of [Conditional Templates](/generativeagent/configuring/tasks-and-functions/conditional-templates). ## Add Input Variables in the Previewer While you are iterating on your tasks, you can simulate how GenerativeAgent responds with added Input Variables in the [Previewer](/generativeagent/configuring/previewer#input-variables). You can also simulate launching the customer directly into a specific task, instead of allowing GenerativeAgent to choose a task. In a scenario where an IVR has already gathered information, you can ensure GenerativeAgent picks up from where the IVR left off. # Keep Fields Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/keep-fields Learn how to keep fields from API responses so GenerativeAgent can use them for subsequent calls The history of responses in conversations is part of the data that GenerativeAgent continually uses as context to reply, analyze, and respond to your customers. 
As GenerativeAgent makes repeated calls to APIs via Functions, the response history grows. This can result in a lot of data in the conversation history and make it more difficult for GenerativeAgent to identify the most relevant fields or data to use in subsequent calls. While you can control this by specifying the data to return within the underlying API Connection, you can also use a different set of fields for multiple Tasks that share the same Function. With the Keep Fields functionality, you can configure which fields to keep in the context for that Task. Most users will not need to configure Keep Fields and can instead rely on specifying the fields to keep in the underlying [API Connection](/generativeagent/configuring/connect-apis). ## Configure a Keep Field Keep Fields are part of the Function page. To configure a Keep Field: 1. Identify the Function within a Task * Determine the function whose response fields you want to keep. 2. Go to the Keep Field Configuration * In the Function settings, see the Keep Configuration table. 3. Specify Keep Fields * List all the fields that this function should retain. Use a nested list format to specify the paths of the fields you want to keep. Each path should be an array of strings representing the keys to traverse in the JSON structure. ### Specify fields within objects JSON responses on API Connections often contain arrays of objects. To specify fields within these objects, you need to indicate that you are traversing an array: use the `[]` notation in the path to denote array elements, so that the remaining keys apply to each element of the array. 
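The `[]` semantics can be pictured as a small path-walking filter over the response JSON. The following plain-Python sketch (illustrative only; this is not ASAPP's implementation) shows how a keep-field list prunes a response, with `[]` fanning the remaining path out over every element of an array:

```python
# Illustrative sketch of keep-field filtering; not ASAPP's implementation.
def keep_fields(data, paths):
    kept = {}
    for path in paths:
        _copy(data, kept, path)
    return kept

def _copy(src, dst, path):
    key, rest = path[0], path[1:]
    if key == "[]":
        if not isinstance(src, list):
            return
        while len(dst) < len(src):   # dst mirrors the source array
            dst.append({})
        for s, d in zip(src, dst):   # apply the rest of the path per element
            _copy(s, d, rest)
        return
    if not isinstance(src, dict) or key not in src:
        return                       # path absent from this response; skip
    if not rest:
        dst[key] = src[key]          # leaf reached: keep the value
        return
    dst.setdefault(key, [] if rest[0] == "[]" else {})
    _copy(src[key], dst[key], rest)

# A pared-down response shaped like the flight example on this page.
response = {"response": {"flightChanged": True, "originalSlice": {"segments": [
    {"flightNumber": "XY101",
     "origin": {"scheduledDepartureDate": "2024-05-01",
                "scheduledDepartureTime": "09:00",
                "airportCode": "JFK"}}]}}}
paths = [
    ["response", "flightChanged"],
    ["response", "originalSlice", "segments", "[]", "flightNumber"],
    ["response", "originalSlice", "segments", "[]", "origin", "scheduledDepartureDate"],
    ["response", "originalSlice", "segments", "[]", "origin", "airportCode"],
]
kept = keep_fields(response, paths)  # scheduledDepartureTime is dropped
```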
## Example Keep Field Configuration See this example of a configuration to keep all fields except for `scheduledDepartureTime` under `origin` within `segments` of `originalSlice`: ```json theme={null} [ ["response", "flightChanged"], ["response", "flightChangeReason"], ["response", "flownStatus"], ["response", "flightStatus"], ["response", "isReaccommodated"], ["response", "eligibleToRebook"], ["response", "originalSlice", "available"], ["response", "originalSlice", "origin"], ["response", "originalSlice", "destination"], ["response", "originalSlice", "importantInformation"], ["response", "originalSlice", "segments", "[]", "flightNumber"], ["response", "originalSlice", "segments", "[]", "status"], ["response", "originalSlice", "segments", "[]", "bookingCode"], ["response", "originalSlice", "segments", "[]", "impacted"], ["response", "originalSlice", "segments", "[]", "numberOfLegs"], ["response", "originalSlice", "segments", "[]", "origin", "estimatedDepartureDate"], ["response", "originalSlice", "segments", "[]", "origin", "estimatedDepartureTime"], ["response", "originalSlice", "segments", "[]", "origin", "scheduledDepartureDate"], ["response", "originalSlice", "segments", "[]", "origin", "airportCode"], ["response", "originalSlice", "segments", "[]", "origin", "airportCity"], ["response", "originalSlice", "segments", "[]", "destination", "estimatedArrivalDate"], ["response", "originalSlice", "segments", "[]", "destination", "estimatedArrivalTime"], ["response", "originalSlice", "segments", "[]", "destination", "scheduledArrivalDate"], ["response", "originalSlice", "segments", "[]", "destination", "scheduledArrivalTime"], ["response", "originalSlice", "segments", "[]", "destination", "airportCode"], ["response", "originalSlice", "segments", "[]", "destination", "airportCity"], ["response", "rebookedSlice", "available"], ["response", "rebookedSlice", "origin", "estimatedDepartureDate"], ["response", "rebookedSlice", "origin", "estimatedDepartureTime"], 
["response", "rebookedSlice", "origin", "scheduledDepartureDate"], ["response", "rebookedSlice", "origin", "scheduledDepartureTime"], ["response", "rebookedSlice", "origin", "airportCode"], ["response", "rebookedSlice", "origin", "airportCity"], ["response", "rebookedSlice", "destination", "estimatedDepartureDate"], ["response", "rebookedSlice", "destination", "estimatedDepartureTime"], ["response", "rebookedSlice", "destination", "scheduledDepartureDate"], ["response", "rebookedSlice", "destination", "scheduledDepartureTime"], ["response", "rebookedSlice", "destination", "airportCode"], ["response", "rebookedSlice", "destination", "airportCity"], ["response", "rebookedSlice", "importantInformation", "[]", "alert"], ["response", "rebookedSlice", "importantInformation", "[]", "value"], ["response", "rebookedSlice", "importantInformation", "[]", "alertPriority"], ["response", "rebookedSlice", "segments", "[]", "flightNumber"], ["response", "rebookedSlice", "segments", "[]", "status"], ["response", "rebookedSlice", "segments", "[]", "bookingCode"], ["response", "rebookedSlice", "segments", "[]", "impacted"], ["response", "rebookedSlice", "segments", "[]", "numberOfLegs"], ["response", "rebookedSlice", "segments", "[]", "origin", "estimatedDepartureDate"], ["response", "rebookedSlice", "segments", "[]", "origin", "estimatedDepartureTime"], ["response", "rebookedSlice", "segments", "[]", "origin", "scheduledDepartureDate"], ["response", "rebookedSlice", "segments", "[]", "origin", "airportCode"], ["response", "rebookedSlice", "segments", "[]", "origin", "airportCity"], ["response", "rebookedSlice", "segments", "[]", "destination", "estimatedArrivalDate"], ["response", "rebookedSlice", "segments", "[]", "destination", "estimatedArrivalTime"], ["response", "rebookedSlice", "segments", "[]", "destination", "scheduledArrivalDate"], ["response", "rebookedSlice", "segments", "[]", "destination", "scheduledArrivalTime"], ["response", "rebookedSlice", "segments", "[]", 
"destination", "airportCode"], ["response", "rebookedSlice", "segments", "[]", "destination", "airportCity"] ] ``` # Managing Configuration Branches Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/local-configs Learn how to create and manage local branches for GenerativeAgent configurations. # Managing Configuration Branches ## Introduction Branching allows you to test new ideas, such as adding a new task or function, or changing instructions dramatically, without modifying the main version of GenerativeAgent. Think of a branch as your own “playground” to experiment and make changes freely—knowing the main configuration stays safe. Once you’re happy with your changes, you can bring them into the main setup to share with the rest of your team. GenerativeAgent allows users to create local branches of configurations. This feature enables experimentation and collaboration without affecting the main configurations. Local branches provide a safe environment for testing changes to tasks, functions, and settings. ## Creating a New Branch To create a new branch: 1. Navigate to the "Branch Switcher" in the GenerativeAgent interface. 2. Click on "Create branch." 3. Enter a unique branch name. Remember, branch names are case-insensitive and must include only letters, numbers, and dashes. 4. Select the source branch from which you wish to branch: Draft, Sandbox, or Production. 5. Click "Create branch." > **Note:** You cannot create a branch from another branch; only the main environments can be used as a base. ## Editing Configurations in a Branch Within a local branch, you can edit tasks, functions, and settings: * Access the configurations you wish to change. * Make your edits safely, knowing they won't affect the main versions. * Be aware that changes to the knowledge base, test users, and API connections will be visible across all environments and branches. 
> **Note:** Knowledge articles do not support branching yet, so all branches will use the knowledge articles from draft. ## Previewing Configurations To preview configurations in a local branch: 1. Select the branch you’re working on. 2. Click the "Preview" button. 3. The previewer will display the current state of your configurations within the branch. > **Limitations:** You cannot switch branches/environments during a conversation. To do so, you must restart the conversation and select a different branch or environment. ## Managing Branches ### Switching Branches * Use the "Branch Switcher" to select the desired branch for viewing or editing configurations. ### Deleting a Branch 1. Click the "Delete branch" button in the header. 2. Confirm the deletion. This action is irreversible, and all configurations in the branch will be lost. ## Promoting Changes to Main Environments When ready to implement changes: * Copy adjustments from your local branch to the main environments. * Review and test thoroughly to ensure seamless integration. > **Best Practices:** Document changes and collaborate with team members to maintain alignment and ensure successful deployment. With these steps, you can leverage the full power of configuration branching in GenerativeAgent, fostering a collaborative and flexible development process. # Mock API Connections Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/mock-api You can mock API connections using Mock APIs. GenerativeAgent supports mocking API connections to test your raw API responses. Mock API Functions let you define request parameters (in JSON) without needing a live API. The main benefits of mocking API connections are: * Rapid prototyping of new Functions without a fully built API. * Testing how GenerativeAgent processes or populates request parameters before real integration. 
* Simplifying configuration for teams that want to start interacting with GenerativeAgent quickly before building or exposing internal APIs. ## Create a Mock API Function Navigate to the Functions page in the main GenerativeAgent menu. 1. Click "Create Function" 2. Choose "Integrate Later" * You will be prompted to select an existing API or "Integrate later." * Select "Integrate later" to mark this Function as a Mock API and define the request parameters directly. 3. Name and describe the new Function * **Function Name**: Give it a concise, unique name * **Function Purpose**: Briefly describe what the Mock Function is for Example: * Name: “get\_flight\_details” * Purpose: “Retrieves flight information given a PNR” 4. Define Request Parameters (JSON) * Under "Request parameters," enter a valid JSON schema describing the parameters you want. * You can pick a template from the "Examples" dropdown or start with an empty JSON schema. * **Example Request** ```json theme={null} { "name": "name_of_function", "description": "Brief description of what the Function is for", "strict": true, "parameters": { "type": "object", "required": ["account_number"], "properties": { "account_number": { "type": "string", "description": "The user’s account number." }, "include_details": { "type": "boolean", "description": "Whether to include itemized details." } } } } ``` Make sure the JSON is valid. The system prevents invalid schemas from being saved. 5. Save your Function * Click "Create Function" (or "Save"). If any part of your schema is invalid, an error will appear. * After saving, you remain on the function detail page, which shows the Function's configuration and preview. You can configure additional fields and variables if you need prompts or placeholders in the conversation flow. 
For example: “Message before sending”, “Confirmation Message”, “Reference Variables” ### Best Practices Here are some recommendations to help you make the best use of the Mock API feature: Start with the core parameters. Add more detail as your needs become clearer. Parameter descriptions help GenerativeAgent understand what the parameters are and how to determine their values. They also help future users to remember each parameter purpose. Begin testing your Function with a Mock schema, then transition smoothly to a real API when ready. ## Connect to a real API When you are ready to connect the Function to an existing API in the Console: 1. Click "Replace" on the Function detail page. 2. Select an existing API connection or create a new one. 3. Once replaced, the Function will call the real API during interactions instead of the Mock schema. ### Use Test Users You can use [Test Scenarios](/generativeagent/configuring/tasks-and-functions/test-scenarios) to mock API return scenarios in the Previewer. # Reference Variables Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/reference-variables Learn how to use reference variables to store and reuse data from function responses Reference variables let you store and reuse specific data returned from function responses. Reference variables offer a powerful way to condition your GenerativeAgent tasks and functions on real data returned by your APIs—all without requiring code edits. By properly naming, key-pathing, and optionally transforming your variables, you can build flexible, dynamic flows that truly adapt to each user's situation. Once a reference variable is created, you can use it to: * Conditionally make other Functions available * Set conditional logic in prompt instructions * Compare values across different parts of your GenerativeAgent workflow * Control Function exposure based on data from previous function calls. 
* Extract and transform values without hard-coding logic into prompts or code Reference variables can be configured in the GenerativeAgent Function edit page under the "Reference vars" option. ## Define a Reference Variable To create a reference variable in the GenerativeAgent UI: 1. Navigate to the Function's settings 2. Find the "Reference vars (Optional)" section and click "Add" 3. Configure the following fields: * **Name** * **Response Keypath** * **Transform Expression** (Optional) ### Name This is the identifier you'll use to reference this variable in Jinja expressions. ```jinja theme={null} vars.get("variable_name") ``` ### Response Keypath This is the JSON path where the data will be extracted from, using dot notation. ```json theme={null} // For a response like: { "available_rooms": [...] } // Use keypath: response.available_rooms ``` ### Transform Expression (Optional) This is a Jinja expression to transform the extracted value. Common patterns include: ```jinja theme={null} # Check for specific string value val == "COMPLIANT" # Check boolean values val == true or val == false # Check for non-empty arrays/strings val is not none and val|length > 0 ``` Once saved, GenerativeAgent will automatically update these variables whenever the Function executes successfully and returns data matching the specified keypath. Reference variable names are not unique across the entire system. If more than one Function defines a reference variable with the same name, whichever Function you call last may overwrite a variable's value. GenerativeAgent also uses reference variables at runtime, meaning it extracts the specified response data from each API call that returns successfully and updates the variable accordingly. ## Example Condition The following example defines a Condition based on a `CheckRoomAvailability` Function. 1. 
Suppose a Reference Variable named `rooms_available` is defined with: * Response Keypath: `response.available_rooms` * Transform: `val is not none and val|length > 0` 2. The `rooms_available` variable will be True whenever the returned list has a length greater than zero. You can then write: 3. In a Function's conditions (to make a function available for use, based on the reference variable): ```json theme={null} conditions: vars.get("rooms_available") ``` 4. In Task instructions using Jinja: ```jinja theme={null} {%- if vars.get("rooms_available") %} The requested rooms are available. {%- else %} No rooms are currently available. {%- endif %} ``` ### Tips and Best Practices Here are some tips to enhance your experience with Reference Variables: Consider prefixing variable names to avoid clashes if multiple teams define references. Example: `user_is_compliant` vs. `is_compliant` Use short-circuit logic in transforms to avoid "NoneType cannot have length" errors. Example: `val is not none and val|length > 0` Keep in mind that if multiple Functions define the same reference variable name, one may overwrite the other depending on the call order. # Set Variable Functions Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/set-variable Save a value from the conversation with a Set Variable Function. You can store information determined during the conversation for reference in future steps using Set Variable Functions. This is useful for: * Storing key information (like account numbers, ages, cancellation types) so GenerativeAgent doesn't have to re-prompt the user later. * Returning or conditioning logic on data that GenerativeAgent has inferred. * Manipulating or filtering data from APIs (e.g., extracting the single charge the customer disputes). GenerativeAgent "sets" these variables in conversation, so they can be used immediately or in subsequent steps. You specify how the variable gets set based on the input parameters or existing variables. 
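Conceptually, a Set Variable Function derives a value from the collected parameters and existing variables, then stores it under a name for later steps. A minimal Python sketch of that idea (`run_set_variable` and the `setters` mapping are illustrative names, not part of the product):

```python
# Illustrative model: each setter derives a value from the collected
# params and the current variables; the result is stored by name.
def run_set_variable(params, conversation_vars, setters):
    for name, derive in setters.items():
        conversation_vars[name] = derive(params, conversation_vars)
    return conversation_vars

# Example: pick out the single charge the customer disputes from data an
# earlier API call already stored, so later steps can reference it.
conversation_vars = {
    "recent_charges": [{"id": "c1", "amount": 9.99},
                       {"id": "c2", "amount": 150.00}],
}
setters = {
    "disputed_charge": lambda p, v: next(
        c for c in v["recent_charges"] if c["id"] == p["charge_id"]),
}
run_set_variable({"charge_id": "c2"}, conversation_vars, setters)
```

In the real configuration, the derivation is expressed as a Jinja2 template over the input parameters and existing reference variables rather than Python code.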
To create a set variable function: 1. [Create a function](#step-1-create-a-function). 2. [Define the input parameters](#step-2-define-input-parameters-json). 3. [Specify the variables to set](#step-3-specify-set-variables). 4. [Save the function](#step-4-save-your-function). 5. [Use the function in a task](#step-5-use-the-function-in-the-conversation). ## Step 1: Create a Function Navigate to the Functions page and click "Create Function." 1. Select "Set variable" and click "Next: Function details" 2. Specify the Name and Purpose of the Function * **Function Name**: Provide a concise, unique name, using underscores (e.g., `get_lap_child_policy`). * **Function Purpose**: Briefly describe what the function does (e.g., "Determines whether a child can fly as a lap child"). * GenerativeAgent uses this description to decide if and when it should invoke the function. ## Step 2: Define Input Parameters (JSON) The input parameters are the values that GenerativeAgent needs to pass when calling this function. You can leave the input parameters empty if you won't need new values from the conversation. As with any function call, GenerativeAgent will gather the necessary information (from user messages or prior context) before calling the function. Under "Input Parameters," enter a valid JSON schema describing the parameters GenerativeAgent needs to pass when calling this function. Mark a field as "required" if GenerativeAgent must obtain these values from the conversation. 
```json Example Input Schema theme={null} { "type": "object", "required": [ "account_number", "first_name", "last_name" ], "properties": { "account_number": { "type": "string", "description": "Customer's account number" }, "first_name": { "type": "string", "description": "Customer's first name" }, "last_name": { "type": "string", "description": "Customer's last name" } } } ``` ## Step 3: Specify "Set Variables" At least one variable must be configured so GenerativeAgent can store the outcome of your function call. For each reference variable: * Provide a Variable Name (e.g., `lap_child_policy`). * Optionally, include [Jinja2](#jinja2-templates) transformations to manipulate or combine inputs or existing reference variables. * Toggle "Include return variable as part of function response" to make the new variable immediately available to GenerativeAgent after the function call. ### Jinja2 Templates Use [Jinja2](https://jinja.palletsprojects.com/en/stable/) to create or modify the stored value. For example, the following Jinja2 template sets the variable to **"Children under 2 can fly as a lap child."** if `child_age_at_time_of_flight` is less than 2. Otherwise, it sets the variable to **"Children 2 or older must have their own seat."** ```jinja2 theme={null} 'Children under 2 can fly as a lap child.' if params.child_age_at_time_of_flight < 2 else 'Children 2 or older must have their own seat.' ``` ## Step 4: Save Your Function With your function defined, you can save it by clicking "Create Function". After saving, you'll see a detail page showing the JSON schema and the configured reference variables. ## Step 5: Use the Function in the Conversation Once you have created your set variable function, add it to the task's list of available functions so GenerativeAgent can use it. GenerativeAgent may call the function proactively, but we recommend you instruct GenerativeAgent to call the function explicitly. 
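The Jinja2 template in Step 3 is a two-way conditional, so you can sanity-check the logic and the exact policy strings outside the console first. A plain-Python equivalent (illustrative only):

```python
# Mirrors the Step 3 Jinja2 expression: choose the policy text from the
# child's age (in years) at the time of the flight.
def lap_child_policy(child_age_at_time_of_flight):
    if child_age_at_time_of_flight < 2:
        return "Children under 2 can fly as a lap child."
    return "Children 2 or older must have their own seat."

infant_policy = lap_child_policy(1)   # under 2: lap child allowed
toddler_policy = lap_child_policy(2)  # 2 or older: own seat required
```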
Always make sure to test your functions with Previewer to ensure they work as expected. Here's how the function works within a task and conversation flow: 1. GenerativeAgent collects the required parameters from the user (or context). 2. (Optional) The system can display a "Message before Sending" to the user, clarifying why GenerativeAgent is saving data. 3. Jinja2 transformations convert or combine inputs, if defined. 4. The system creates reference variables as soon as the function runs successfully—GenerativeAgent can immediately incorporate them into logic or other function calls. 5. If you turned on "Include return variable as part of function response," GenerativeAgent receives the new values right away, shaping subsequent interaction steps. ```jinja2 theme={null} # Objective Assist the customer in adding a lap child to their flight reservation by determining eligibility and communicating relevant policies. # Context - The customer has provided their confirmation number. - No lap children currently exist on their reservation. # Instructions 1. **Eligibility Check:** - Call the `get_lap_child_policy` function to determine if the child is eligible as a lap child and obtain the policy. 2. **Communicate Eligibility and Policy:** - {% if vars.get("child_eligible_as_lap_child") == true %} - Inform the customer: "The child is eligible as a lap child and will be {{ vars.get('childs_age') }} at the time of the flight. Lap child policy: {{ vars.get('lap_child_policy') }}." - {% elif vars.get("child_eligible_as_lap_child") == false %} - Inform the customer: "The child is not eligible as a lap child because they will be {{ vars.get('childs_age') }} at the time of the flight. Lap child policy: {{ vars.get('lap_child_policy') }}." - {% endif %} 3. **Customer Action Based on Eligibility:** - {% if vars.get("child_eligible_as_lap_child") == true %} - Ask if the customer would like to add their child as a lap child. - If yes, call the `add_lap_child()` function. 
- {% elif vars.get("child_eligible_as_lap_child") == false %} - Offer assistance in purchasing a seat for the child. - Based on customer response: - Assist in seat purchase if desired. - If not, ask if further assistance is needed. - {% endif %} ``` ## Best Practices Here are some recommendations to help you make the best use of the set variables function type: Label your variables and functions clearly (e.g., "child\_age\_at\_time\_of\_flight") so GenerativeAgent and your team understand their purpose. By toggling "Include return variable as part of function response," GenerativeAgent can incorporate newly stored data immediately. Even if this is off, the variable is still saved for future reference. Apply conditionals and expressions to reduce guesswork—for instance, deciding if a child is under 2 for lap-child eligibility. In a Task's configuration, specify "Conditions" to control when GenerativeAgent should call this function. This helps you keep flows tidy. Avoid clutter or extraneous parameters. A clear schema helps GenerativeAgent gather exactly what's needed without prompting extra questions. ## Next Steps Learn more about best practices for task and function configuration. Use conditional logic to dynamically change instructions based on variables. Test your functions in a safe environment before deploying to production. Test your functions and variables in real-time with the Previewer tool. # System Transfer Functions Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/system-transfer Signal conversation control transfer to external systems with System Transfer Functions. System Transfer Functions signal that control of the conversation should be transferred from GenerativeAgent to an external system. They can also return reference variables (e.g., a determined "intent," or details about a charge) for further processing outside of GenerativeAgent. 
By using a System Transfer Function, you can: * End the conversation gracefully, indicating that GenerativeAgent is finished. * Hand control back to the calling application or IVR once a goal is met. * Send relevant conversation data (e.g., identified charges, subscription flags, or determined intent) for follow-up workflows. To create a system transfer function: 1. [Create a function](#step-1-create-a-new-function) 2. [Define input parameters](#step-2-define-input-parameters-json) 3. [Set variables (optional)](#step-3-optional-set-variables) 4. [Save the function](#step-4-save-your-function) 5. [Use the function in a task](#step-5-using-the-system-transfer-function-in-the-conversation) 6. [Handle the system transfer event](#step-6-handle-the-system-transfer-event) ## Step 1: Create a New Function Navigate to the Functions page and click "Create Function." 1. Select "System transfer" and click "Next: Function details" 2. Specify the Name and Purpose of the Function * **Function Name**: Provide a concise, unique name, using underscores (e.g., `issue_refund_request`). * **Function Purpose**: Briefly describe what the function does (e.g., "Takes the collected charge info and indicates a refund request should be processed"). * GenerativeAgent uses this description to determine if/when it should call the function. ## Step 2: Define Input Parameters (JSON) The input parameters are the values that GenerativeAgent needs to pass when calling this function to transfer control to the external system. Under "Input Parameters," enter a valid JSON schema describing the required parameters. GenerativeAgent gathers the necessary information (from user messages or prior context) before calling the function. 
```json Example Input Schema theme={null} { "type": "object", "required": [ "line_item_number", "is_eligible_for_refund", "is_subscription" ], "properties": { "line_item_number": { "type": "string", "description": "The line item number associated with the charge" }, "is_eligible_for_refund": { "type": "boolean", "description": "Whether or not the line item is eligible for a refund" }, "is_subscription": { "type": "boolean", "description": "Whether or not the charge is associated with a subscription" } } } ``` ## Step 3: (Optional) Set Variables Though System Transfer Functions typically return control to an external system, you can still configure one or more reference variables: * Configure variables to rename or transform parameter values for the external system * Use [Jinja2](https://jinja.palletsprojects.com/en/stable/) for transformations if needed * Toggle "Include return variable as part of function response" to make variables immediately available ### Jinja2 Templates Use Jinja2 to transform values before transfer. For example, to convert a string boolean to a proper boolean: ```jinja2 theme={null} true if params.get("is_subscription") == "True" else false ``` ## Step 4: Save Your Function With your function defined, save it by clicking "Create Function". After saving, you'll see a detail page showing the JSON schema and any configured reference variables. ## Step 5: Using the System Transfer Function in the Conversation Once you have created your system transfer function, add it to the task's list of available functions so GenerativeAgent can use it. GenerativeAgent may call the function proactively, but we recommend you instruct GenerativeAgent to call the function explicitly. Always make sure to test your functions with Previewer to ensure they work as expected. Here's how the function works within a task and conversation flow: 1. GenerativeAgent collects the required parameters from the user (or context). 2. 
(Optional) The system can display a "Message before Sending" to the user, clarifying why GenerativeAgent is transferring control. 3. Jinja2 transformations convert or combine inputs, if defined. 4. GenerativeAgent calls the System Transfer Function, signaling that control returns to the external system. * All reference variables collected during the conversation are passed along. * If configured, the function's specific variables also appear in the final response.

```jinja
# Objective
Identify the line item for an unrecognized charge, verify refund eligibility, and transfer control to the external system once the user confirms a refund request.

# Context
- We already have a list of recent transactions.
- The user has confirmed which charge is disputed.

# Instructions
1. **Identify the Charge:**
   - Gather details: date, amount, and merchant to confirm the correct line item.
   - Store "line_item_number" once identified.
2. **Check Refund Eligibility:**
   - If the line item meets the refund criteria, set "is_eligible_for_refund" to true.
   - If part of a subscription, set "is_subscription" to true for any special handling.
3. **Offer Refund:**
   - {% if vars.get("is_eligible_for_refund") == true %}
   - Ask the customer: "Shall we proceed with the refund?"
   - If yes:
     - Call the `issue_refund_request` System Transfer Function.
   - {% else %}
   - Apologize, indicate no refund is possible. Offer further assistance.
   - {% endif %}
```

### Best Practices

Choose function names like "issue\_refund\_request" or "complete\_intent\_transfer." Provide concise descriptions so GenerativeAgent knows when to transfer control.

If you only want the system transfer to occur after specific statuses or variables are set, configure "Conditions" in the Task's function list so GenerativeAgent calls it at the correct time.

Your function schema should cover only the data needed by the external system. Minimizing extra fields ensures smoother handoff.
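While drafting the schema, it can help to sanity-check sample parameters against its required fields before testing in Previewer. The following is an illustrative, stdlib-only Python sketch using the Step 2 example schema; it is a local design aid, not part of the ASAPP platform:

```python
import json

# The Step 2 example input schema, trimmed to the fields used here.
SCHEMA = json.loads("""
{
  "type": "object",
  "required": ["line_item_number", "is_eligible_for_refund", "is_subscription"],
  "properties": {
    "line_item_number": {"type": "string"},
    "is_eligible_for_refund": {"type": "boolean"},
    "is_subscription": {"type": "boolean"}
  }
}
""")

def missing_required(params: dict) -> list:
    """Names of required schema fields absent from the collected parameters."""
    return [field for field in SCHEMA["required"] if field not in params]

print(missing_required({"line_item_number": "1234567890"}))
# -> ['is_eligible_for_refund', 'is_subscription']
```

If this check reports missing fields for a realistic sample conversation, the task instructions may need to gather more information before calling the function.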
Handle naming or logic differences between GenerativeAgent and your external system with optional Jinja2 transformations (e.g., rename "is\_subscription" to "subscriptionFlag").

## Step 6: Handle the System Transfer Event

When the function is called in your task, the conversation is transferred to your system. This transfer is communicated via the [generative agent events](/generativeagent/integrate/handling-events) sent as part of conversation handling. All currently set reference variables are passed as `referenceVariables`, and any variables set in the function are passed as `transferVariables`.

```json Example System Transfer Event
{
  "generativeAgentMessageId": "bba4320f-de53-4874-83b4-6c8704d3620c",
  "externalConversationId": "33411121",
  "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3",
  "type": "systemTransfer",
  "systemTransfer": {
    "referenceVariables": {
      "customerName": "John Smith",
      "accountNumber": "12345",
      "isActive": true
    },
    "transferVariables": {
      "line_item_number": "1234567890",
      "is_eligible_for_refund": true,
      "is_subscription": false
    },
    "currentTaskName": "handle_refund_requests"
  }
}
```

## Next Steps

Learn more about best practices for task and function configuration. Learn how to store and manipulate conversation data with Set Variable Functions. Connect your external systems to enable system transfers. Test your system transfer functions in real-time with the Previewer tool.

# Test Scenarios

Source: https://docs.asapp.com/generativeagent/configuring/tasks-and-functions/test-scenarios

Learn how to create and use Test Scenarios to test conversations with GenerativeAgent.

A **Test Scenario** simulates a customer profile to test conversations with GenerativeAgent.
These scenarios help you define both the mock API responses and the customer's side of the interaction—goals, known information, and personality—so you can assess how GenerativeAgent handles different scenarios, edge cases, and potential issues. Use Test Scenarios to test: * Key happy paths * Edge cases * Common problems or issues There are two ways to use Test Scenarios: 1. **Manual:** 'Talk to GenerativeAgent' mode\ To save and start using a test scenario to talk to GenerativeAgent, fill in the **Test Name** and **User API profile**. 2. **Automatic:** 'Simulate a customer and conversation' mode\ To save and start using a test scenario to simulate a customer and conversation, fill in the **Test Name**, **User API profile**, **User Goals**, and **Information the user knows** sections so the customer portion of the conversation can be simulated. ### Create a Test Scenario To create a Test Scenario: Go to the **Test Scenarios** page in GenerativeAgent and click **"Create Scenario"** to begin creating a new test scenario. Start by providing the essential details that identify and describe your test scenario: * **Test name**: A clear, descriptive name for your scenario * **Description**: (Optional) A brief explanation of what you're testing and what you expect to achieve Set up the [User API Profile](#configure-user-api-profile) to define the mock responses for your test case. Add optional settings like [Starting Task](#starting-task-optional) or [Input Variables](#input-variables-optional) to define how the test conversation begins. If you are using the automatic mode, you can configure the customer's behavior and goals by filling in the [Simulation Details](#define-simulation-details) section. Toggle evaluations **on** and optionally add [Applicability Criteria](#applicability-criteria) and [Evaluation Criteria](#evaluation-criteria) to assess the conversation outcomes.
After completing these steps, save your Test Scenario and [start running it](#run-a-test-scenario) in either manual or automatic simulation mode. You can now preview the test scenario directly from the test file **after saving** your changes. ## Configure User API Profile The data that GenerativeAgent uses plays a critical role in how it behaves. The User API Profile defines the mock data that will be provided to GenerativeAgent when using this test scenario. You have two options to mock data: 1. [Auto-generate mock data](#auto-generate-mock-data). 2. [Manually define functions](#manually-define-functions) ### Auto-generate mock data Auto-generate mock data creates function mocks using the scenario you describe and the schema of the functions in the main branch of the draft environment. To auto-generate mock data: 1. Click **Generate**. 2. Describe your scenario in a few sentences. 3. Wait up to one minute for the profile to be created (you can close the dialog while you wait). The system pulls mocked data schemas from the main branch of the draft environment. To test a new function, it needs to have been added to the main branch of the draft environment. Adding details about what information the customer will have to share with GenerativeAgent and what GenerativeAgent will have to do will significantly improve the ability to auto-mock the appropriate APIs. ### Manually define functions If you know the specific data you want to mock for a scenario, you can manually define the functions and mock data. 1. Click **+ Select functions**. 2. Choose the API calls you want to mock (e.g. `getAccountInfo`, `confirmCode`). 3. For each function, provide the mocked **request** and **response** in JSON. #### Add variants to mocked data To simulate different API responses under the same function: 1. Click **+ Add variants** at the bottom of a function. 2. Define alternative request parameters and corresponding response schema. 3. Save to include the variant in your scenario. 
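For example, a manually defined mock for the `getAccountInfo` function mentioned above might pair a request with a response and one variant. The field names and shape below are hypothetical, for illustration only; use the request/response schemas of your own functions:

```json
{
  "function": "getAccountInfo",
  "request": { "account_number": "12345" },
  "response": {
    "account_number": "12345",
    "status": "active",
    "plan": "premium"
  },
  "variants": [
    {
      "request": { "account_number": "99999" },
      "response": { "error": "account_not_found" }
    }
  ]
}
```

The variant lets the same scenario exercise both the found and not-found paths of the function.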
### Updating Mock Data You may need to update the mock data after creating a Test Scenario to keep it aligned with your testing needs. You can do this by re-running auto-generation of the mock data or by manually editing the functions and mock data. ### Date & time override (optional) APIs often have an implicit or explicit date and time at which they are performed. By default, GenerativeAgent assumes the customer interaction is happening at the date and time the scenario is run and updates timestamps in the API profile accordingly. You can override the assumed timestamp of the interaction. ## Set Input Context The **Input Context** defines the initial data and configuration that your system provides to GenerativeAgent when a conversation begins. This ensures your test scenarios accurately simulate real-world interactions. The input context consists of two main components: 1. **[Input Variables](/generativeagent/configuring/tasks-and-functions/input-variables)**: JSON data that provides context about the customer, conversation, or session 2. **[Starting Task](/generativeagent/configuring/tasks-and-functions/enter-specific-task)**: The specific task or flow that GenerativeAgent should begin with Ensure your input context matches exactly how your production system passes data to GenerativeAgent. This guarantees your test scenarios will match real conversations identically. ## Define Simulation Details When using automatic mode, define how the simulated customer will behave. Describe what the simulated customer brings to the conversation. These fields drive the customer's interactions with GenerativeAgent: * The [User Goals](#user-goals). * The [Information the user knows](#information-the-user-knows). * The [User personality](#user-personality-optional). Additionally, you can set the **max number of turns** the simulation can run. By default, it is set to **15** turns.
### User goals Describe the motivations of the simulated customer (e.g. "You want to change your billing address", "You want a refund"). The customer will attempt each goal in sequence; if it fails, it may escalate or skip. ### Information the user knows List identity or account data the customer can provide (e.g., Name, address, confirmation code, account number). The customer will relay this information when prompted; omit some to simulate forgotten details. To test how GenerativeAgent will handle a customer who *does not* have their account number or a critical piece of information, just leave it off the list. ### User personality (optional) Define the tone and style for your simulated customer (rude, insistent, confused, etc.).\ By default: > Slightly frustrated but polite. Speaks in short, direct sentences. Only say one sentence at a time, of no more than 8 words. Click **Revert to default personality** to restore. ## Add Evaluations When evaluations are toggled on, the system runs evaluation logic after the conversation ends. ### Applicability Criteria (optional) Define preconditions or milestones that must be met in the conversation for the evaluation to run. This is useful to disqualify cases where the conversation didn't reach the part you're testing. If left blank, the evaluation runs by default. Example: > "Customer asked for their bill amount" ### Evaluation Criteria Define one or more evaluation checks to apply to the conversation. Examples: > "The Agent provided the current month's bill amount of \$144.72" > "The Agent did NOT offer a discount" Evaluation results appear in the **Previewer**, side-by-side with the conversation after clicking on the **Results** button. If you update your evaluation or applicability criteria later, you can return to the same conversation in the Previewer and click **Run Eval Again** to re-run against the updated test. ## Run a Test Scenario Test scenarios are run in the **Previewer** on the side-panel. 
To run a Test Scenario: 1. Navigate to the **Previewer** (play-button) on the side-panel. 2. Click **Simulate a customer and conversation**. 3. Select your test scenario from the dropdown. 4. Optionally: * Override the starting task by selecting one from the dropdown. * Override input variables by pasting JSON into the field. 5. Click **Start Conversation** to begin the test. Once the conversation ends: * Click **Results** to open detailed evaluation outcomes. * Click **Run Eval Again** to re-run evaluations on the conversation in the previewer. ## Migration of Test Users The system has migrated all existing Test Users to Test Scenarios and retains their functionality: * Your Test Users became Test Scenarios automatically. You can still talk to GenerativeAgent with your old test users by selecting them in the dropdown in the previewer. * The system converted the original default Test User into a default Test Scenario—do not delete it unless you no longer need it. # Getting Started Source: https://docs.asapp.com/generativeagent/getting-started To get started with GenerativeAgent, let's create a GenerativeAgent that can help a customer from your knowledge base. By the end, you'll have a working GenerativeAgent that can answer customer questions using your own content. This is a basic example to get you started quickly. As you develop more sophisticated GenerativeAgent solutions, you'll want to explore [additional use cases](/generativeagent/build/adding-a-use-case) and consider the [comprehensive design](/generativeagent/build-overview) of your implementation. ## Before you begin Before you begin, make sure you have: * Followed the [Set up your account](/getting-started/setup) guide to get access to the ASAPP AI Console. * A customer-facing knowledge base URL (such as your help center or support documentation) ## Step 1: Connect Your Knowledge Base The first step is to import your knowledge base content so GenerativeAgent can use it to answer customer questions. 
We will use Import from URL to bring in your knowledge base content, but there are other options for [connecting your knowledge base](/generativeagent/configuring/connecting-your-knowledge-base). Optionally, you can import specific sections of your site by specifying [allowed URL Prefix or Excluded URLs](/generativeagent/configuring/connecting-your-knowledge-base#step-1%3A-importing-your-knowledge-base). The system will import your content and place it in a **Pending Review** state. ## Step 2: Review and Publish Your Content After crawling your knowledge base, the system will create a list of articles that could be added to the GenerativeAgent knowledge base. You need to review and publish the articles: At the top of the Knowledge Base page, look for the notification indicating articles need review. For each article: * Choose between the cleaned-up or raw version * Add relevant query examples that customers might ask (e.g., "What is your return policy?") * Click **Publish** when you're satisfied with the content ## Step 3: Create a Basic Task Now that your knowledge base is set up, let's create a simple task for GenerativeAgent to handle customer inquiries. A task is a set of instructions that tells GenerativeAgent what to do. We go into more detail about tasks and functions in the [Learn More About Tasks and Functions](/generativeagent/configuring) guide. Here we provide sample names and instructions, but you are in full control of GenerativeAgent's behavior and can customize them to fit your needs. Fill in the following fields: * **Task name**: "Answer\_Customer\_Questions" * **Task selector description**: "Use this task when customers ask questions about our products, services, or policies." * **General Instructions**: "Answer customer questions clearly and concisely using information from the knowledge base.
If the answer isn't in the knowledge base, politely explain that you don't have that information and offer to help them find other information." ## Step 4: Try It Out in the Previewer Now let's test your GenerativeAgent with the [Previewer](/generativeagent/configuring/previewer): 1. Navigate to **GenerativeAgent > Previewer** 2. Select **Draft** environment from the dropdown 3. Type a test question related to your knowledge base content (e.g., "What is your return policy?") 4. Send the message and observe GenerativeAgent's response 5. Use the Turn Inspector panel on the right to see which knowledge base articles were used and how GenerativeAgent processed the request ## Safety ASAPP has developed GenerativeAgent with a safety-first approach. ASAPP ensures GenerativeAgent's accuracy and quality with rigorous testing and continuous updates, preventing hallucinations through advanced validation. Our team has incorporated Safety Layers that improve reliability and build trust in responses. Our safety standards include: * Safety Layers * Hallucination Control * Data Redaction * IP Blocking * Customer Info and Sensitive Data Protection You can learn more about this in [Safety and Troubleshooting](/generativeagent/configuring/safety-and-troubleshooting). ## What's Next Congratulations on setting up a basic GenerativeAgent to answer customer questions using your knowledge base! Now that you have a working GenerativeAgent, here are the recommended next steps to understand GenerativeAgent more holistically and build a complete solution: Get a comprehensive understanding of how GenerativeAgent works and the complete build process. Learn how to design and implement specific use cases for your contact center needs. Test and refine your GenerativeAgent before deploying to production. Connect your CCaaS and backend systems to create a complete GenerativeAgent solution.
# Go Live Source: https://docs.asapp.com/generativeagent/go-live After configuring GenerativeAgent and connecting to ASAPP's servers, you can go live in your production environments. These are the steps to take to go live: ## Step 1: Validate your Configuration Review that the following sections of the Configuration Phase are working as expected or have been signed off: * **Functional requirements**: Confirm that your ASAPP Team addressed your Tasks and Function Requirements and set them up correctly in GenerativeAgent. You can use the Previewer to test tasks and functions. * **Functional and UAT Testing**: Validate individual components and end-to-end functionality between GenerativeAgent and your customers. Your organization and your ASAPP Team must have signed off acceptance for the functionality of tasks, requirements, and user interactions before going live. * **Human-In-the-Loop Set-up**: Confirm you properly defined the Human-In-the-Loop definitions in GenerativeAgent's Tasks. You can use the Previewer to test Human-in-the-Loop. * **Credentials Verification**: Verify you obtained all your ASAPP Credentials and API Keys and that all key connections and calls to GenerativeAgent return data without any issue. * **API Connections**: Ensure you connected your APIs to GenerativeAgent and that your applications call GenerativeAgent and ASAPP to send messages and analyze them. * **Knowledge Base ingestion**: Ensure the Tasks and Functions you previously defined align with the responses that reference your Knowledge Base as the source of truth. ## Step 2: Validate your Integration Separate from GenerativeAgent's functional configuration, you need to ensure your voice or chat applications are fully integrated with GenerativeAgent before going live. Your method of integration determines the steps to go live. To validate that your integration is working smoothly, remember the following: **Event Handling** Ensure you are handling all events. GenerativeAgent communicates back to you via events.
The system sends these events via a Server-Sent-Events stream. **API Integration** Test your API exposure in the GenerativeAgent UI: verify how GenerativeAgent calls your APIs when performing Functions. You can do this in the Previewer. **Audio Integration** Audio integrations need a consistent flow of incoming and outgoing audio streams. Ensure that your organization opens, stops, and ends audio streams in every interaction between a customer and an agent. **AI Transcribe Websocket Integration** * **Real-time Messaging**: Ensure that the ASAPP Server continuously provides WebSocket connection URLs. * **WebSocket protocol**: Request messages in the sequence must be formatted as text (UTF-8 encoded string data); only the audio stream should be formatted in binary. All response messages must also be formatted as text. **Third Party Connectors** Follow the integration procedure for the Third Party Connectors of your choice. After the integration, ensure that your Third Party Connector is receiving and sending audio streams to the ASAPP Servers. This is done outside of the ASAPP applications. **Text-only Integration** Text conversations with GenerativeAgent can be validated via the Previewer. Ensure messages are sent and analyzed, and that GenerativeAgent replies with the expected outputs. ### Substep: Test the Integration Test your integration to ensure that GenerativeAgent behaves as expected: > **Performance Testing**: Simulate expected traffic or high-traffic scenarios to identify any breaking points and confirm that requirements are met. ## Step 3: Launch GenerativeAgent to Production Now you are ready to deploy GenerativeAgent into your Production environments. ### Deploy GenerativeAgent Deploy GenerativeAgent into your Production environments without further effort. You can do this from the GenerativeAgent UI. ### Talk with GenerativeAgent Now that GenerativeAgent is live in your Organization's environments, you can talk to GenerativeAgent and receive LLM support.
> If your Integration has a Voice Channel, call your internal phone numbers and ask about issues or inquiries your customers would raise. > > If your integration with GenerativeAgent has a chat (messaging) integration, use the GenerativeAgent UI to continuously review how GenerativeAgent helps with customer support and other issues. > > If your Integration involves Voice applications, you can also gather insights from GenerativeAgent's analyze calls in the GenerativeAgent UI. ## Step 4: Post Launch Maintenance There are still some things you can do after GenerativeAgent is deployed. Here are some things your organization can do to continuously monitor GenerativeAgent while it is live. Your ASAPP team is at your disposal to check anything else! **Performance Monitoring** * **Analytics**: Continuously analyze user interactions and system logs to make better use of analytics that can improve GenerativeAgent's performance. * **Alerts**: Use your internal monitoring tools to check on GenerativeAgent's performance. * **Enhancement**: ASAPP is continuously enhancing its AI products, so feel free to ask your ASAPP Team about new features or improvements. **Feedback** Feedback Sessions: Your ASAPP team is always ready to receive feedback, whether from customer satisfaction surveys or from internal audits. **Internal Training** Provide comprehensive training sessions for your internal staff on the scenarios where GenerativeAgent performs. If your Organization uses Human-in-the-Loop, train your staff for the scenarios where your human agents help GenerativeAgent with tasks. **Support Plan** Establish a support plan with your ASAPP team for post-launch assistance. This can work either through Helpdesk queries or direct support from your ASAPP Team. # Human in the Loop Agent Source: https://docs.asapp.com/generativeagent/human-in-the-loop Learn how GenerativeAgent works with human agents to handle complex cases requiring expert guidance or approval.
Human-in-the-loop is a first-of-its-kind capability that allows a human agent to guide GenerativeAgent in assisting a customer. GenerativeAgent may request human help in situations where it lacks the necessary API access or knowledge sources, or requires human approval to complete an action. GenerativeAgent raises a ticket requesting specific help through your existing digital support tool. Available agents within your organization are part of dedicated human-in-the-loop queues where they receive and respond to tickets from GenerativeAgent, thereby resolving the customer issue. Human-in-the-loop Agents (HILAs) within your organization wait in a queue and are routed to specific scenarios, where they help resolve the customer's issue and then hand the conversation back to GenerativeAgent. HILAs can also transfer the conversation to a Live Agent so that the live agent takes over from GenerativeAgent. ## When should GenerativeAgent consult a human You can invoke the Human-in-the-loop capability in the task instructions for GenerativeAgent. You can specify scenarios or actions that GenerativeAgent cannot perform automatically and that require human intervention for information or confirmation. This is similar to actions a human agent cannot complete without supervisor approval. Recommended scenarios for human assistance include: * Insufficient permissions: When GenerativeAgent should not act on its own without HILA approval. * No API to call: Whenever there is no API to call to retrieve specific customer information. * No Knowledge Base information: Whenever the question or issue raised by the customer has no content in the Knowledge Base source that GenerativeAgent can use. ## HILAs The primary function of the human-in-the-loop is to support and unblock GenerativeAgent. These supervisors handle tasks requiring approvals or a deep understanding of the issues. Tier 1 agents can address simpler queries.
When accepting a ticket from GenerativeAgent in your digital support tool, agents access an embedded Human-in-the-loop UI. The actions HILAs perform include: * Accepting ticket assignments * Editing responses and making decisions * Unlocking GenerativeAgent * Escalating to live agents Human-in-the-loop agents assist GenerativeAgent without directly interacting with customers, differing from live agents. The key benefit is that a single agent can manage multiple customer conversations simultaneously without engaging in calls or chats. If the Human-in-the-loop agent determines that a live agent would better serve the customer, they can use the Transfer option in the UI to hand off the conversation from GenerativeAgent to a live agent. **When does GenerativeAgent transfer to a live agent?** We recommend the following scenarios for transferring a conversation to a live agent: * A human-in-the-loop agent instructs GenerativeAgent to do so * There are no Human-in-the-loop agents available. The system manages this automatically and does not require explicit instructions. * The customer explicitly requests it (if configured). ## Agent UI **Enabling human-in-the-loop capability** Human-in-the-loop agents operate from the existing Agent Desk. To enable the Human-in-the-loop UI and task functions in GenerativeAgent, you need to configure [Human-in-the-loop Functions](/generativeagent/configuring/tasks-and-functions/hila-function). The Human-in-the-loop agent UI is the primary application where agents can interact with GenerativeAgent. Through this interface, they can: * Respond to GenerativeAgent * Transfer conversations to live agents * View the interaction thread history * Access relevant customer information and summarized conversation context This section provides an overview of important features available: * **Transfer**: Allows the agent to transfer the conversation from GenerativeAgent to a live agent.
* **Ticket Assignment Timer**: Tracks the time elapsed since the system assigned the ticket to the agent. * **Prompt**: Indicates the specific assistance GenerativeAgent needs to unblock the customer. This is generated by GenerativeAgent itself. * **Response**: The Human-in-the-loop agent can respond to GenerativeAgent through an open text field or structured options, depending on the configuration. * **Send**: After selecting a response, the agent can click 'Send' to submit the response and close the ticket simultaneously. * **Context**: Provides a summarized context of the conversation between GenerativeAgent and the customer. * **Transcript**: Displays the complete customer interaction thread prior to GenerativeAgent raising the ticket. * **Customer**: Shows the customer's details, including company and specific account information for authenticated customers. ## Next Steps After setting up Human-in-the-Loop, you are ready to speed up customer replies and solve their inquiries. You may find one of the following sections helpful in advancing your integration: # Approver Mode Source: https://docs.asapp.com/generativeagent/human-in-the-loop/approver-mode Learn how human agents review and refine generated responses to ensure safe, accurate, and on-brand customer interactions. Human-in-the-Loop Agent (HILA) **Approver Mode** is a powerful capability that enables human agents to supervise and refine GenerativeAgent responses in real time before they are delivered to customers. This information is used to **fine-tune GenerativeAgent** over time. This approach ensures safe, on-brand interactions and accelerates the training of AI systems to sound like your best support agents, making it ideal for piloting new intents and achieving high-quality automated support. Approver Mode works similarly to regular HILA: GenerativeAgent creates a ticket for a HILA queue, where a HILA can review the ticket. 
However, instead of providing guidance and information, the HILA approves or modifies GenerativeAgent's messages. ## Set Up Approver Mode ASAPP offers seamless integration into your existing agent desk through an iframe-based HILA application. HILAs operate from dedicated queues within your platform, receiving GenerativeAgent consultation requests directly. This ensures they work within a familiar interface, while you retain full control over routing and assignment. To set up HILA in your environment, contact your ASAPP Implementation Manager. HILAs are assigned tickets based on the routing and assignment configurations in your support platform. If no HILA is assigned within the configurable threshold (default: 60 seconds), the case is automatically escalated to a live agent. Reach out to your ASAPP account team to enable Approver Mode for your account and configure the Administrator settings. HILA in Approver Mode cannot be tried out in Previewer. ### Administrator Controls There are several settings that can be configured, particularly to ensure end customers receive quick responses: * **Auto-accept Timeout:** The system automatically approves responses after a configurable timeout (default: 30 seconds) to ensure the customer remains engaged if the HILA is inactive * **Agent Assignment Timeout:** If no HILA is assigned within a configurable timeout (default: 60 seconds), the conversation escalates to a live agent * **Task-level Control:** Define which tasks require complete supervision through Approver Mode * **Escalation Message:** When HILA initiates an escalation, the system sends a configurable message to the customer, informing them that their conversation will be handed over to a live agent These safeguards ensure that customers never wait long and always receive a human-verified message. 
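The timeout defaults above can be reasoned about with a small model. This sketch is illustrative only; the function name, arguments, and return values are invented for this example and are not part of the ASAPP API:

```python
AUTO_ACCEPT_TIMEOUT_S = 30   # default: approve automatically after 30s of HILA inactivity
ASSIGNMENT_TIMEOUT_S = 60    # default: escalate if no HILA is assigned within 60s

def resolve_pending_response(assigned_after_s, action=None, acted_after_s=float("inf")):
    """Model what happens to a GenerativeAgent response awaiting approval."""
    if assigned_after_s is None or assigned_after_s > ASSIGNMENT_TIMEOUT_S:
        return "escalate_to_live_agent"   # Agent Assignment Timeout fires
    if action is None or acted_after_s > AUTO_ACCEPT_TIMEOUT_S:
        return "auto_accept"              # Auto-accept Timeout keeps the customer engaged
    return action                         # 'accept', 'review', 'transfer', or 'end'

print(resolve_pending_response(None))                                  # escalate_to_live_agent
print(resolve_pending_response(10, action="review", acted_after_s=5))  # review
```

The key property the defaults guarantee: a customer message is never left pending indefinitely, because every path ends in an approval, an edit, an escalation, or an automatic acceptance.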
## Approver Mode Workflow The Approver Mode workflow follows the same process as regular HILA, with one key difference: instead of providing guidance and information to GenerativeAgent, agents approve or modify the messages that GenerativeAgent has already generated. Your support platform assigns tickets to HILAs based on its routing and assignment configurations. If the system does not assign a HILA within the configurable threshold (default: 60 seconds), the system automatically escalates the case to a live agent. The system presents HILAs with key information to help them answer GenerativeAgent's query. The HILA application includes: * **Transcript:** Complete history of the customer interaction before GenerativeAgent raised the ticket * **Context:** Summarized view of key customer intents and topics for quick understanding * **Customer information:** Details such as customer name and account data if authenticated * **Conversation thread:** Real-time view of the approver phase showing all approved messages, edits, and system messages * **Assignment timer:** Tracks how long the assigned HILA has worked on the ticket Unlike regular HILA, where agents provide guidance, in Approver Mode a HILA can do one of four things when a GenerativeAgent utterance comes for approval: 1. **Accept** the utterance as the most appropriate next response to the customer 2. **Review** the utterance and make necessary changes before sending it to the customer 3. **Transfer** if the situation requires escalation to a live agent for deeper support 4. **End** the ticket if GenerativeAgent resolves the conversation or escalates Based on the HILA's action, GenerativeAgent will continue the conversation and reach out to the HILA (Approver Mode) for subsequent messages. HILA and GenerativeAgent exchanges need to be quick since there is a customer waiting for resolution on the other side.
To support this, there are **real-time notifications** to ensure that HILA doesn't miss any new messages from GenerativeAgent.

## What Can Be Fine-Tuned?

Using Approver Mode, every agent correction becomes valuable feedback to improve GenerativeAgent over time. Today, we primarily use these edits to align GenerativeAgent's responses with your brand voice and the communication style of your best agents.

Additionally, agent interactions generate insights that inform future improvements, such as:

* **Knowledge Access:** Identify which APIs, knowledge base articles, or system data agents reference, helping refine task instructions
* **Content Gaps & Escalation Patterns:** Spot where agents frequently intervene or escalate, highlighting areas where task instructions or fallback behavior can be improved

These signals help build toward more confident, autonomous operation over time.

## Next Steps

After setting up Approver Mode, you can enhance your human-in-the-loop capabilities:

Learn about the core HILA capabilities and how to incorporate them into your use cases.

Learn how to improve GenerativeAgent's tasks and functions.

Learn how to configure GenerativeAgent tasks for your use case.

Follow steps and best practices to launch your GenerativeAgent deployment in production.

# GenerativeAgent Integration Overview

Source: https://docs.asapp.com/generativeagent/integrate

High-level guide to integrating GenerativeAgent with your systems - channel integration and backend connections

GenerativeAgent integration involves two main components that work together to create a complete conversational AI solution:

Connect your voice channels to GenerativeAgent to enable real-time conversations with end users.

Connect your APIs and backend systems to GenerativeAgent so it can perform actions and access data on behalf of users.

Add GenerativeAgent chat to your website with a customizable SDK that works alongside your existing chat systems.
## Channel Integration Methods

Channel integration is how conversation data is sent back and forth between the end user and GenerativeAgent.

Connect your existing voice platform to enable real-time conversations.

## Call Transfers

Transfer calls to GenerativeAgent while maintaining control over the call flow. Transferring with SIP or PSTN works with virtually any voice platform.

Transfer calls using SIP protocol with API-based or SIP headers approaches.

Transfer calls using temporary phone numbers for maximum platform compatibility.

## Connectors

We support out-of-the-box connectors to enable GenerativeAgent on contact center platforms:

### Platform-Specific Voice Integration

Integration guides for popular voice platforms:

Step-by-step guide for integrating using AWS components.

One-click installation and configuration for Genesys Cloud.

Step-by-step guide for integrating Twilio Voice.

Step-by-step guide for integrating Zendesk Talk.

Don't see your platform? Let us know what platform you'd like us to build an integration for.

### Web SDK

For web-based chat integrations, we provide a ready-to-use SDK:

Quickly integrate GenerativeAgent chat into your website with our customizable web SDK. Works alongside existing chat systems like Salesforce Chat or Zendesk.

## Direct API

### Direct Voice Integration

Send conversation transcript directly to GenerativeAgent:

Send conversation transcript directly to GenerativeAgent using your own ASR or voice processing.

## Backend System Integrations

GenerativeAgent needs to interact with your business systems, but exposing APIs directly to LLMs often fails. Most APIs are designed for **human developers** who can:

* read documentation
* experiment with trial and error
* get human support to get an API working

**LLMs don't have the same luxury**; they instead make a single API request for a given action, relying only on their current context.
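To illustrate the principle, here is a minimal sketch of a focused action that hides a multi-step lookup behind one simple call. Every name in it (`getLatestInvoice`, `findAccount`, `listInvoices`, and the field names) is invented for the example; it is not part of the ASAPP tooling or any real API:

```javascript
// A hypothetical raw CRM API needs an account lookup, then a paginated
// invoice query, then response shaping -- too many moving parts for a
// single LLM-issued request. A focused action collapses it into one
// simple, self-describing call with a stable output shape.
async function getLatestInvoice(accountNumber, crm) {
  const account = await crm.findAccount(accountNumber);
  const invoices = await crm.listInvoices(account.id, { sort: "date_desc", limit: 1 });
  const latest = invoices[0];
  // Return only the fields the model needs, never the raw API payload.
  return latest
    ? { invoiceId: latest.id, amountDue: latest.amount, dueDate: latest.dueDate }
    : { invoiceId: null, amountDue: 0, dueDate: null };
}
```

The model then only ever sees `getLatestInvoice(accountNumber)`; the pagination, internal identifiers, and error surface of the underlying API stay on your side of the boundary.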
In order to have an **LLM successfully call an API**, we need to provide **simple** and **focused actions** instead of complex, inconsistent interfaces.

Our **API Connection** tooling enables you to define the specific interface to GenerativeAgent while using tooling to map it to the more complex underlying API.

Perform single, JSON API requests. Works for most modern APIs and SaaS platforms like Salesforce, HubSpot, and Zendesk.

Concatenate multiple API calls as one action.

API formats such as HTTP Form Posts and SOAP.

Complex business workflows.

## Next Steps

Learn how to connect your voice platform using Call Transfer over PSTN.

Set up API connections and functions for your business systems.

Learn how GenerativeAgent works and the key components you'll configure.

Deploy your integration to production.

# Amazon Connect

Source: https://docs.asapp.com/generativeagent/integrate/amazon-connect-pstn

Learn how to integrate GenerativeAgent into Amazon Connect

The Amazon Connect integration with ASAPP's GenerativeAgent allows callers to have conversations with GenerativeAgent while maintaining the call entirely within your Amazon Connect contact center.

Call transfers facilitate connecting users to GenerativeAgent via PSTN (dialing a number) while maintaining your control over the entire call duration.

This guide demonstrates how GenerativeAgent integrates with Amazon Connect using Call Transfer over PSTN.

## How it works

At a high level, the Amazon Connect integration with GenerativeAgent works by handing off the conversation between your Amazon Connect flow and GenerativeAgent:

1. **Hand off the conversation** to GenerativeAgent through Call Transfer over PSTN.
2. **GenerativeAgent handles the conversation** using Lambda functions to communicate with ASAPP's APIs, and respond to the caller using a Text to Speech (TTS) service.
3.
**Return control back** to your Amazon Connect Flow when:
   * The conversation is successfully completed
   * The caller requests a human agent
   * An error occurs
4. **Use Output Context of the call** to determine the next course of action.

Here's how a GenerativeAgent call will work in detail within your Amazon Connect:

1. Incoming call: A customer calls your existing phone number.
2. Call Processing: Amazon Connect processes the call and determines when to transfer to GenerativeAgent.
3. Request a number: Amazon Connect invokes a Lambda function to request a phone number from ASAPP to transfer the call to.
4. Transfer the call: Amazon Connect transfers the call to GenerativeAgent using Call Transfer over PSTN.
5. GenerativeAgent interaction: GenerativeAgent takes over the call and engages with the customer using ASAPP's APIs. It processes the customer's requests, generates responses, and communicates back to the customer.
6. Call Transfer back: The call with GenerativeAgent disconnects, and control returns to Amazon Connect.
7. Request Call Context: Amazon Connect requests the call context from GenerativeAgent using a Lambda function and passes the input context/variables to determine the outcome of the conversation.
8. Call Context: GenerativeAgent returns the call context to Amazon Connect, which includes:
   * The conversation outcome
   * Any error messages
   * Instructions for next steps (e.g., transfer to agent)
9. Next Steps: Based on the call context, Amazon Connect decides the next steps, such as:
   * Ending the call if the conversation was successful.
   * Transferring to a human agent if requested by the customer.
   * Handling errors appropriately.

## Before you Begin

Before using the GenerativeAgent integration with Amazon Connect, you need to:

* [Get your API Key Id and Secret](/getting-started/developers#access-api-credentials)
* Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled.
* Have an existing Amazon Connect instance:
  * Have claimed phone numbers.
  * Access to an Amazon Connect admin account.
* Be familiar with AWS including Amazon Connect, IAM roles, and more. You will set up and configure the following AWS services:
  * **Amazon Connect** - Handles call flow and audio streaming
  * **Lambda functions** - These functions will handle communication between Amazon Connect and GenerativeAgent
* Enable Call Transfer over PSTN by contacting your ASAPP account team.

## Configuring Amazon Connect with GenerativeAgent

### Step 1: Create Lambda Functions

Lambda functions are used to interact with ASAPP's GenerativeAgent APIs to create call transfers and retrieve call context. They can be created using the AWS Console or AWS API.

To create a Lambda function in AWS Console:

1. Log in to the AWS Management Console.
2. Open Lambda.
3. Select **Author from scratch** and fill in the required fields:
   * **Function name**: `call-transfer` or `get-call-context`
   * **Runtime**: Select `Node.js 22.x`
   * **Architecture**: Select `x86_64`
   * Under **Change default execution role**, select **"Create a new role with basic Lambda permissions"**. This option automatically adds the necessary permissions for basic Lambda execution and CloudWatch logging.
     * CloudWatch logging is required for debugging and monitoring Lambda executions, so ensure these permissions are included.
     * If you are creating the Lambda function using infrastructure-as-code tools (such as Terraform), you must ensure the following permissions are included in the execution role:
       * Allow `logs:CreateLogStream` and `logs:PutLogEvents` for all streams under your CloudWatch Log Group.
       * Allow the `lambda:InvokeFunction` action in your resource-based policy.
       * List `connect.amazonaws.com` as the Principal Service in your resource policy.

   After filling in the required fields, click **Create function**.
4. In the **Function Overview** section, look for **ARN** and save it.
5.
Go to the **Code** tab, click **Upload from** and select **.zip file** to upload your Lambda function code. Click **Deploy**.
6. Go to the **Configuration** tab, select **Environment variables**, and add the following environment variables:

   | Variable | Description |
   | -----------------: | :---------------------------------------------------- |
   | `ASAPP_API_ID` | API Credential provided by ASAPP. |
   | `ASAPP_API_SECRET` | API Credential provided by ASAPP. |
   | `ASAPP_API_HOST` | API hostname provided by ASAPP, usually api.asapp.com |
7. Click **Save**.

```javascript theme={null}
const ASAPP_API_ID = process.env.ASAPP_API_ID;
const ASAPP_API_SECRET = process.env.ASAPP_API_SECRET;
const ASAPP_API_HOST = process.env.ASAPP_API_HOST;

export const handler = async function (event) {
  console.log("Received event:", JSON.stringify(event, null, 2));

  // The Amazon Connect contact ID doubles as the call transfer ID.
  let contactId = event.Details?.ContactData?.ContactId;
  if (!contactId) {
    console.error("ContactId not found in event");
    return { result: "error", error: "ContactId not found in event", transferNumber: "" };
  }

  let taskName;
  if (event.Details?.Parameters?.taskName) taskName = event.Details.Parameters.taskName;

  // Default the customerId to the caller's number unless one is passed explicitly.
  let customerId = event.Details?.ContactData?.CustomerEndpoint?.Address;
  if (event.Details?.Parameters?.customerId) customerId = event.Details.Parameters.customerId;

  let requestBody = {
    id: contactId,
    externalConversationId: contactId,
    type: "PHONE_NUMBER",
    phoneNumber: { country: "US" },
    inputContext: { inputVariables: {} }
  };
  if (taskName) requestBody.inputContext.taskName = taskName;
  if (customerId) requestBody.inputContext.inputVariables.customerId = customerId;

  // Pass any remaining block parameters through as input variables.
  for (const [key, value] of Object.entries(event.Details?.Parameters ?? {})) {
    if (key !== "taskName" && key !== "customerId") {
      requestBody.inputContext.inputVariables[key] = value;
    }
  }

  let response;
  try {
    response = await fetch(`https://${ASAPP_API_HOST}/generativeagent/v1/call-transfers`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "asapp-api-id": ASAPP_API_ID,
        "asapp-api-secret": ASAPP_API_SECRET
      },
      body: JSON.stringify(requestBody)
    });
  } catch (error) {
    console.error("Error calling ASAPP API:", error);
    return { result: "error", error: error.message, transferNumber: "" };
  }

  if (!response.ok) {
    let responseText = await response.text();
    console.error("Error calling ASAPP API:", response.statusText, responseText);
    return { result: "error", error: response.statusText, transferNumber: "" };
  }

  const responseData = await response.json();
  console.log("ASAPP API response:", responseData);

  let transferNumber = responseData.phoneNumber?.transferNumber;
  if (!transferNumber) {
    console.error("Transfer number not found in ASAPP API response");
    return { result: "error", error: "Transfer number not found in ASAPP API response", transferNumber: "" };
  }

  return { result: "ok", transferNumber: transferNumber };
};
```

**Parameters:**

| Parameter | Description |
| -----------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `taskName` | (Optional) The name of the GenerativeAgent task to be initiated on the initial connection. |
| `customerId` | The unique identifier for the customer. If not provided, the calling number of the caller will be used as the customerId. Ensure the value is unique and consistent for each customer to avoid integration issues. |

**Response:**

| Field | Type | Description |
| ---------------: | :----- | :--------------------------------------------------------------------- |
| `result` | string | "ok" if the call transfer was created successfully, "error" otherwise. |
| `error` | string | Error message if the call transfer failed. |
| `transferNumber` | string | An E.164 formatted PSTN number assigned by ASAPP to transfer the call. |

```javascript theme={null}
const ASAPP_API_ID = process.env.ASAPP_API_ID;
const ASAPP_API_SECRET = process.env.ASAPP_API_SECRET;
const ASAPP_API_HOST = process.env.ASAPP_API_HOST;

export const handler = async function (event) {
  console.log("Received event:", JSON.stringify(event, null, 2));

  // The contact ID identifies which call transfer to look up.
  let contactId = event.Details?.ContactData?.ContactId;
  if (!contactId) {
    console.error("ContactId not found in event");
    return { result: "error", error: "ContactId not found in event", outputContext: {} };
  }

  let response;
  try {
    response = await fetch(`https://${ASAPP_API_HOST}/generativeagent/v1/call-transfers/${contactId}`, {
      method: "GET",
      headers: {
        "asapp-api-id": ASAPP_API_ID,
        "asapp-api-secret": ASAPP_API_SECRET
      }
    });
  } catch (error) {
    console.error("Error calling ASAPP API:", error);
    return { result: "error", error: error.message, outputContext: {} };
  }

  if (!response.ok) {
    console.error("Error calling ASAPP API:", response.statusText);
    return { result: "error", error: response.statusText, outputContext: {} };
  }

  const responseData = await response.json();
  console.log("ASAPP API response:", responseData);

  if (!responseData.outputContext) {
    console.error("Output context not found in ASAPP API response");
    return { result: "error", error: "Output context not found in ASAPP API response", outputContext: {} };
  }

  return { result: "ok", outputContext: responseData.outputContext };
};
```

**Parameters:**

| Parameter | Description |
| ----------: | :---------------------------------------------------------------------------------------------------- |
| `contactId` | The unique identifier for the contact. This is typically the ContactId from the Amazon Connect event. |

**Response:**

| Field | Type | Description |
| --------------: | :----- | :---------------------------------------------------------------------------------------------------------------------------- |
| `result` | string | "ok" if the call context was retrieved successfully, "error" otherwise. On an "ok" response, it includes the Call Transfer Data. |
| `outputContext` | object | The output context of the call, which includes the conversation outcome. |

**Call Transfer Data**

| Field | Type | Description |
| ---------------------------------: | :----- | :--------------------------------------------------------------------------------------------------------- |
| `id` | string | The unique identifier for the call transfer. This is typically the Transfer ID. |
| `externalConversationId` | string | The external conversation ID associated with the call transfer. |
| `status` | string | The status of the call transfer, e.g., "COMPLETED". |
| `createdAt` | string | The timestamp when the call transfer was created. |
| `callReceivedAt` | string | The timestamp when the call was received. |
| `completedAt` | string | The timestamp when the call transfer was completed. |
| `inputContext` | object | The input context for the call transfer, which includes variables such as `taskName` and `inputVariables`. |
| `inputContext.taskName` | string | The name of the task being handled by GenerativeAgent. |
| `inputContext.inputVariables` | object | Key-value pairs of input variables used in the conversation. |
| `inputContext.inputVariables.name` | string | The unique identifier for the customer. |
| `type` | string | The type of transfer, which is "PHONE\_NUMBER" for this integration. |
| `phoneNumber` | object | The phone number details for the transfer. |
| `phoneNumber.country` | string | The country code for the phone number, e.g., "US". |
| `phoneNumber.transferNumber` | string | The E.164 formatted PSTN number assigned by ASAPP to transfer the call. |

**Status**

| Status | Description |
| ----------: | :----------------------------------------------------------------------------------------- |
| `ACTIVE` | The call transfer is currently active and the temporary number is waiting to be connected. |
| `ONGOING` | The call transfer is in progress and GenerativeAgent is still handling the call. |
| `COMPLETED` | The call transfer has been completed successfully. |
| `EXPIRED` | The call transfer has expired and the temporary number is no longer valid. |

**Output Context**

| Field | Type | Description |
| -------------------: | :----- | :--------------------------------------------------------------- |
| `transferType` | string | The type of transfer. This can either be `SYSTEM` or `AGENT`. |
| `currentTaskName` | string | The name of the current task being handled by GenerativeAgent. |
| `referenceVariables` | object | Key-value pairs of reference variables used in the conversation. |
| `transferVariables` | object | Key-value pairs of transfer variables used in the conversation. |

You can have the metrics, logging, redundancy, warm starts, and other settings configured as per your specific requirements, environment, and use cases.

### Step 2: Add your Lambda Functions to your Amazon Connect instance

The Lambda functions must be added to your Amazon Connect instance to be used in the contact flow. To do this:

1. Open the Amazon Connect console.
2. Select your Amazon Connect instance.
3. In the left navigation pane, choose **Contact flows**.
4. Choose **AWS Lambda functions** from the dropdown menu.
5. Click **Add Lambda function**.
6. Select the Lambda function `call-transfer` you created earlier.
7. Click **Add**.
8. Repeat the process for the `get-call-context` Lambda function.

### Step 3: Set Up Flow in Amazon Connect

To create a Call Transfer record in your Amazon Connect contact flow, you need to reference the `call-transfer` Lambda function:

1. In the Amazon Connect console, open your contact flow.
2. Add an **Invoke AWS Lambda function** block at the point where you want to initiate the transfer.
3. Select the `call-transfer` Lambda function.
4. Map any required parameters (such as `taskName` or `customerId`) in the block’s configuration.
5.
Use the output variable `transferNumber` from the Lambda function as the destination number in a **Transfer to phone number** block.
6. Check for failure scenarios and handle errors appropriately.
7. Connect the blocks to complete the flow.

1. Go to the **"Transfer to phone number"** block and navigate to the **Properties** panel.
2. Set the **Transfer Via** to **"Phone number"**.
3. Under **Phone number**, select **Set Dynamically**.
4. The **Namespace** must be set to `External`.
5. The **Key** must be set to `transferNumber`.
6. Set **Resume Flow after Disconnect** to **Yes**.
7. Connect the output of this block to the next step in your flow.

1. Add another **Invoke AWS Lambda function** block after the transfer block.
2. Select the `get-call-context` Lambda function.
3. Map the `ContactId` from the Amazon Connect flow to the Lambda function input.
4. Use the output variable `outputContext` from the Lambda function to determine the next steps.
5. Connect the output of this block to the next step in your flow.

1. Use the output context from the `get-call-context` Lambda function to determine the next steps in your flow.
2. Based on the following fields in the output context, you can decide how to proceed:
   * **Transfer Type**: If it is `AGENT`, you can transfer the call to a human agent. If it is `SYSTEM`, you can transfer the call back to the IVR.
   * **Current Task Name**: If it matches a specific task, you can route the call accordingly.
   * **Reference Variables**: Use these variables to provide additional context or information to the customer.
   * **Transfer Variables**: Use these variables to handle any specific transfer logic.
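The branching on the output context can be sketched as a small helper, for example in a Lambda or service that consumes the `get-call-context` result. This is an illustrative sketch only: `decideNextStep` and the `action` strings are hypothetical names, while `transferType`, `currentTaskName`, and `transferVariables` come from the output context fields described above:

```javascript
// Decide the next flow step from the outputContext returned by get-call-context.
// transferType is "AGENT" (escalate to a human) or "SYSTEM" (return to the IVR).
function decideNextStep(outputContext) {
  if (!outputContext || !outputContext.transferType) {
    // No context (e.g. the customer hung up mid-conversation):
    // fall back to your default call disposition.
    return { action: "END_CALL" };
  }
  if (outputContext.transferType === "AGENT") {
    return {
      action: "TRANSFER_TO_AGENT",
      // Pass transfer variables along so the agent desktop has context.
      attributes: outputContext.transferVariables || {}
    };
  }
  // SYSTEM transfer: route back into the IVR, optionally keyed by the last task.
  return {
    action: "RETURN_TO_IVR",
    menu: outputContext.currentTaskName || "main_menu"
  };
}
```

In an actual Amazon Connect flow you would express the same branching with **Check contact attributes** blocks rather than code; the helper just makes the decision table explicit.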
## Next Steps

Now that you have integrated GenerativeAgent with Amazon Connect, here are some important next steps to consider:

Learn how to configure GenerativeAgent's behaviors, tasks, and communication style

Configure your APIs to allow GenerativeAgent to access necessary data and perform actions

Connect and optimize your knowledge base to improve GenerativeAgent's responses

Follow the deployment checklist to launch GenerativeAgent in your production environment

# Call Transfer over PSTN

Source: https://docs.asapp.com/generativeagent/integrate/call-transfer-pstn

Learn how to use call transfers to handle GenerativeAgent calls with any platform

ASAPP's Call Transfer over PSTN enables businesses to integrate GenerativeAgent Voice without being constrained by their existing telephony infrastructure.

We have dedicated connector integrations for some platforms ([Genesys](/generativeagent/integrate/genesys-audiohook), [Amazon Connect](/generativeagent/integrate/amazon-connect)), but not every platform allows native integrations.

Call transfers allow you to use PSTN (dialing a number) to connect your users to GenerativeAgent while keeping you in control of the call the entire time.

## How it works

At a high level, call transfer over PSTN works by assigning a temporary phone number for a given customer that you can dial in for GenerativeAgent:

1. **Request a number**: Your system requests a temporary phone number from ASAPP. Optionally you can provide context to the call.
2. **GenerativeAgent handles the conversation**: Dial in GenerativeAgent via the temporary phone number so it can talk directly to the customer.
3. **Return control**: When GenerativeAgent has completed the call, it will disconnect the call and your system will fetch the resulting context and handle the rest of the call flow.

PSTN Call Transfer Flow

1. **Incoming Call**: A customer calls your existing phone number
2.
**IVR Processing**: Your existing IVR system processes the call and determines when to transfer to GenerativeAgent
3. **Request a number**: Your system requests a temporary phone number from ASAPP. Optionally you can provide context to the call.
4. **Dial the number (Supervised transfer)**: Your system dials the temporary phone number via a supervised transfer. With the supervised transfer, you can monitor the call and control the call flow while GenerativeAgent is talking to the user.
5. **Detect GenerativeAgent disconnect**: When GenerativeAgent has completed the call, it will disconnect the call and your system will detect the disconnect.
6. **Fetch the call context**: Your system will fetch the context, which includes the transfer type, from the call.
7. **Handle the call**: Using the context and transfer type, your system handles the agent escalation, call disposition, or any other steps in your call flow.

## Before you Begin

Before implementing PSTN Call Transfer, you need to:

* [Get your API Key Id and Secret](/getting-started/developers#access-api-credentials)
* Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled.
* [Configure Tasks and Functions](/generativeagent/configuring)
* Contact your ASAPP account team to enable call transfer over PSTN
  * This includes determining how many concurrent calls you need to support, phone number countries to support, etc.

## Step 1: Request a temporary number

To transfer a call to GenerativeAgent, you need to create a `call-transfer`. A call transfer is the attempt to transfer a call to GenerativeAgent. This resource will include a temporary phone number from ASAPP that you can connect the customer to.

This number will be assigned specifically for this customer interaction and will expire after a set time period, by default 1 minute.
To [create a call transfer](/apis/generativeagent/create-call-transfer), you need to specify:

| Parameter | Description |
| -----------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `id` | Your unique identifier for the call transfer. You will use this later to fetch the call transfer result. |
| `externalConversationId` | Your unique identifier for the conversation. This allows you to reconnect the customer to the same conversation and is used in reporting. |
| `type` | Specify a type of **PHONE\_NUMBER**. |
| `phoneNumber.country` | The country code for the phone number. |
| `inputContext` | Optionally specify the [`taskName`](/generativeagent/configuring/tasks-and-functions/enter-specific-task) and [`inputVariables`](/generativeagent/configuring/tasks-and-functions/input-variables) to trigger GenerativeAgent with specific task information and variables. |

```shell theme={null}
curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \
--header 'asapp-api-id: ' \
--header 'asapp-api-secret: ' \
--header 'Content-Type: application/json' \
--data '{
  "id": "[Your Transfer Id]",
  "externalConversationId": "[Your Conversation Id]",
  "type": "PHONE_NUMBER",
  "phoneNumber": {
    "country": "US"
  },
  "inputContext": {
    "taskName": "call_routing",
    "inputVariables": {
      "accountNumber": "3434",
      "name": "John Doe"
    }
  }
}'
```

A successful request returns 200 with the call transfer data:

```json theme={null}
{
  "id": "[Your Transfer Id]",
  "externalConversationId": "[Your Conversation Id]",
  "status": "ACTIVE",
  "type": "PHONE_NUMBER",
  "inputContext": {
    "taskName": "AsappDemo",
    "inputVariables": {
      "accountNumber": "3434",
      "name": "sample_name"
    }
  },
  "phoneNumber": {
    "transferNumber": "+19991234",
    "country": "US",
    "expireAt": "2025-06-19T13:39:40Z"
  }
}
```

**Save** the `phoneNumber.transferNumber`; you will need to transfer to it before the `phoneNumber.expireAt` time. This is 1 minute by default.

## Step 2: Make a supervised transfer

Perform a supervised transfer to the `phoneNumber.transferNumber` in the call transfer resource. Once you dial the number, GenerativeAgent is given the input context, if provided, and talks to the customer.

Once connected or expired, the number will be released and can be used by a subsequent call transfer.

The specific implementation on how to perform a supervised transfer depends on your call center system, but you must maintain call control throughout the transfer. With the supervised transfer, your system is on the call the entire time. You can monitor the call and control the call flow while GenerativeAgent is talking to the user.

## Step 3: Detect disconnect

With the supervised transfer, your system will be monitoring the call, and two scenarios are possible during the conversation that you must handle accordingly:

1.
**GenerativeAgent completes the conversation with an agent escalation or a system transfer**
   * GenerativeAgent has determined it needs to return the call to your system, either for an agent escalation or a system transfer.
   * GenerativeAgent disconnects from the call and the output context of the conversation can be retrieved; this is covered in the next step.
2. **Customer hangs up the phone call**
   * When the customer hangs up, there are no output variables since GenerativeAgent didn't close out the conversation.

The disconnect detection is crucial for maintaining proper call flow control and ensuring you can fetch the conversation context before handling the next steps.

## Step 4: Fetch the call context

Once you detect the disconnect, you need to retrieve the call transfer result and resulting outputContext.

To retrieve the call transfer result, you need to [fetch the `call-transfers`](/apis/generativeagent/get-call-transfer) with the `id` of the original call transfer:

```shell theme={null}
curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers/[Your Transfer Id]' \
--header 'asapp-api-id: ' \
--header 'asapp-api-secret: '
```

A successful request returns 200 with call transfer data:

```json theme={null}
{
  "id": "[Your Transfer Id]",
  "externalConversationId": "[Your Conversation Id]",
  "status": "COMPLETED",
  "createdAt": "2025-06-19T13:38:40Z",
  "callReceivedAt": "2025-06-19T13:38:45Z",
  "completedAt": "2025-06-19T13:39:12Z",
  "inputContext": {
    "taskName": "AsappDemo",
    "inputVariables": {
      "accountNumber": "3434",
      "name": "sample_name"
    }
  },
  "type": "PHONE_NUMBER",
  "outputContext": {
    "transferType": "SYSTEM",
    "currentTaskName": "currentTaskName",
    "referenceVariables": {
      "reference_variable_1": "4343"
    },
    "transferVariables": {
      "transfer_variable_": "4343"
    }
  },
  "phoneNumber": {
    "transferNumber": "+19991234",
    "country": "US",
    "expireAt": "2025-06-19T13:39:40Z"
  }
}
```

**Extract the key information:**

* **Status**: Indicates if the call was completed successfully.

  | Status | Description |
  | ------------- | ------------------------------------------------------------------------------------------------------ |
  | **ACTIVE** | The call transfer is active and the temporary phone number is waiting to be connected. |
  | **ONGOING** | The call was connected and GenerativeAgent is talking to the customer. |
  | **COMPLETED** | The call transfer has completed. |
  | **EXPIRED** | The call transfer has expired and the temporary phone number is no longer valid for that conversation. |

* **outputContext**: Contains the conversation results and any transfer variables
  * **transferType**: Indicates the type of transfer that occurred. This can be either `AGENT` or [`SYSTEM`](/generativeagent/configuring/tasks-and-functions/system-transfer).
  * **referenceVariables**: Context information about the customer and conversation
  * **transferVariables**: Data that should be passed to the next system or agent

With this information, handle the call according to your own business logic, such as routing the call to an agent or handling call disposition.

### Handling customer reconnections

There may be scenarios where you want to reconnect a customer directly to where they left off with GenerativeAgent. For example, if the customer hangs up the phone, or after transferring back to your system, you want to transfer them back again to GenerativeAgent.

To do this, ensure you use the same `externalConversationId` to reconnect the customer to the same conversation. GenerativeAgent will resume the conversation where it left off.
```shell theme={null}
curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \
--header 'asapp-api-id: ' \
--header 'asapp-api-secret: ' \
--header 'Content-Type: application/json' \
--data '{
  "id": "[Your New Transfer Id for this transfer attempt]",
  "externalConversationId": "[Your Original Conversation Id from the first conversation leg]",
  "type": "PHONE_NUMBER",
  "phoneNumber": {
    "country": "US"
  }
}'
```

## Next Steps

Now that you have successfully implemented PSTN Call Transfer with GenerativeAgent, here are some important next steps to consider:

Learn how to configure GenerativeAgent's behaviors, tasks, and communication style

Configure your APIs to allow GenerativeAgent to access necessary data and perform actions

Connect and optimize your knowledge base to improve GenerativeAgent's responses

Follow the deployment checklist to launch GenerativeAgent in your production environment

# Example Interactions

Source: https://docs.asapp.com/generativeagent/integrate/example-interactions

While each type of integration may have subtle differences in how replies are handled and sent back to end users, they all follow the same basic interaction pattern. These examples show some example scenarios, the API calls you would make, and the events you would receive.

* **[Simple Interaction](#simple-interaction "Simple interaction")**
* **[Conversation with an action that requires confirmation](#conversation-with-an-action-that-requires-confirmation "Conversation with an action that requires confirmation")**
* **[Conversation with authentication](#conversation-with-authentication "Conversation with authentication")**
* **[Conversation with transfer to an agent](#conversation-with-transfer-to-an-agent "Conversation with transfer to an agent")**

## Simple interaction

The example below shows a simple interaction with the GenerativeAgent.
We first use the Conversation API to create a conversation, and then call the GenerativeAgent API to analyze a message from the customer. **Request** `POST /conversation/v1/conversations` ```json theme={null} { "externalId": "33411121", "agent": { "externalId": "671", "name": "agentname" }, "customer": { "externalId": "11462", "name": "customername" }, "metadata": { "queue": "some-queue" }, "timestamp": "2024-01-23T13:41:20Z" } ``` **Response** Status 200: Successfully created the conversation. ```json theme={null} { "id": "01HMVXRVSA1EGC0CHQTF1X2RN3" } ``` Now that we have a Conversation ID, we can use it to analyze a new message from our user, like the following: **Request** `POST /generativeagent/v1/analyze` ```json theme={null} { "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "message": { "text": "How can I pay my bill?", "sender": { "externalId": "11462", "role": "customer" }, "timestamp": "2024-01-23T13:43:04Z" } } ``` **Response** Status 200: Successfully sent the analyze request and created the new message. ```json theme={null} { "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "messageId": "01HMVXSWK8J9RR0PNGNN7Z4FVM" } ``` **GenerativeAgent Events** As a result of the analyze request, the following sequence of events will be sent via the SSE stream: ```json theme={null} { generativeAgentMessageId: '116aaf51-8180-47b7-9205-9f61c8799c52', externalConversationId: '33411121', conversationId: '01HMVXRVSA1EGC0CHQTF1X2RN3', type: 'processingStart' } { generativeAgentMessageId: '5c020ad9-4a25-4746-a345-017bb9711dbe', externalConversationId: '33411121', conversationId: '01HMVXRVSA1EGC0CHQTF1X2RN3', type: 'reply', reply: { messageId: '01HMVXSZANHNGJ49R83HENDAJB', text: "I'm happy to help you! One moment please."
} } { generativeAgentMessageId: 'd566fda8-3b7c-42a2-ae39-d08b66397238', externalConversationId: '33411121', conversationId: '01HMVXRVSA1EGC0CHQTF1X2RN3', type: 'reply', reply: { messageId: '01HMVXTDR1AT9CNQXPYKKBPJ7F', text: 'You can pay your bill by calling (XXX) XXX-6094, using the Mobile App, or with a customer service agent over the phone (with a $5 fee).' } } { generativeAgentMessageId: 'bba4320f-de53-4874-83b4-6c8704d3620c', externalConversationId: '33411121', conversationId: '01HMVXRVSA1EGC0CHQTF1X2RN3', type: 'processingEnd' } ``` ## Conversation with an action that requires confirmation In this use case, we go through a scenario that requires confirmation before the GenerativeAgent can execute a task on the user's behalf. Besides showing the payloads of the GenerativeAgent events, we also check the conversation state. We assume there is an existing conversation with ID 01HMSHT9KKHHBRMRKJTFZYRCKZ. **Request** `POST /generativeagent/v1/analyze` ```json theme={null} { "conversationId": "01HMSHT9KKHHBRMRKJTFZYRCKZ", "message": { "text": "hello, how can I reset my router?", "sender": { "externalId": "11462", "role": "customer" }, "timestamp": "2024-01-21T15:08:50Z" } } ``` **Response** Status 200: Successfully sent the analyze request and created the new message.
```json theme={null} { "conversationId": "01HMSHT9KKHHBRMRKJTFZYRCKZ", "messageId": "01HMSHVZGHAXDZMS722JS1JJJK" } ``` **GenerativeAgent events** As a result of the analyze request, the following sequence of events will be sent via the SSE stream: ```json theme={null} { generativeAgentMessageId: '33843eb0-10f6-4531-a645-ed9481833301', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'processingStart' } { generativeAgentMessageId: '0ed65d99-215d-48b4-be28-fee936f4757e', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'reply', reply: { messageId: '01HMSQ946T9E3RCXHPNH1B65ZE', text: "I'm happy to help you! One moment please." } } { generativeAgentMessageId: '1121411d-e68e-45d3-bf9e-f2a3db73e7ca', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'reply', reply: { messageId: '01HMSQ96TWXB5DT4T259FP76RX', text: "Please say 'CONFIRM' to confirm the router reset. This action cannot be undone." } } { generativeAgentMessageId: '3c4b0f55-c702-453c-9a76-db591f685213', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'processingEnd' } ``` From the events above, we can see the GenerativeAgent requires user confirmation before it can proceed. The confirmation can be sent through another customer message (analyze API call). Optionally, we can check the current conversation state by calling the GET /state API, before the confirmation is sent: **Request** `GET /generativeagent/v1/state?conversationId=01HMSHT9KKHHBRMRKJTFZYRCKZ` **Response** Status 200. We see the GenerativeAgent is waiting for confirmation for this conversation.
```json theme={null} { "state": "waitingForConfirmation", "lastGenerativeAgentMessageId": "3c4b0f55-c702-453c-9a76-db591f685213" } ``` Now, the user sends the confirmation message: **Request** `POST /generativeagent/v1/analyze` ```json theme={null} { "conversationId": "01HMSHT9KKHHBRMRKJTFZYRCKZ", "message": { "text": "CONFIRM", "sender": { "externalId": "11462", "role": "customer" }, "timestamp": "2024-01-21T15:09:10Z" } } ``` **Response** Status 200: Successfully sent the analyze request and created the new message. ```json theme={null} { "conversationId": "01HMSHT9KKHHBRMRKJTFZYRCKZ", "messageId": "01HMVJTR2CPABZ46DM0QK1NS3T" } ``` The analyze request triggers the following events: ```json theme={null} { generativeAgentMessageId: 'bae280e8-26c7-4333-ae8f-018e5f7140e9', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'processingStart' } { generativeAgentMessageId: '7bcbab42-e64f-4e1b-9ec8-5db343d471e3', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'reply', reply: { messageId: '01HMSQ946T9E3RCXHPNH1B65ZE', text: "Please wait while your router is being reset..." } } { generativeAgentMessageId: 'd0e3cb51-79e4-4b90-8c05-3f345090fbdf', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'reply', reply: { messageId: '01HMSQ96TWXB5DT4T259FP76RX', text: "Router successfully reset." } } { generativeAgentMessageId: '6af4172c-7bb7-4fa7-a338-b73a35be5d1c', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'reply', reply: { messageId: '01HMSQ96TWXB5DT4T259FP76RX', text: "If you have any other questions or need further assistance, please don't hesitate to ask." } } { generativeAgentMessageId: '008a21a0-af04-4ece-8f58-b7a0c82a1115', externalConversationId: '33411121', conversationId: '01HMSHT9KKHHBRMRKJTFZYRCKZ', type: 'processingEnd' } ``` Finally, we can optionally check the state again. 
We see it has changed back to "ready". **Request** `GET /generativeagent/v1/state?conversationId=01HMSHT9KKHHBRMRKJTFZYRCKZ` **Response** ```json theme={null} { "state": "ready", "lastGenerativeAgentMessageId": "008a21a0-af04-4ece-8f58-b7a0c82a1115" } ``` ## Conversation with authentication In this scenario, the user tries to take an action that requires authentication first. GenerativeAgent will then ask for authentication via the GenerativeAgent event, which we can also confirm via the State API call. We'll authenticate and see the GenerativeAgent resuming the task. We assume there is an existing conversation with ID *01HMW15N6V27Y4V2HRCE0CBZJQ*. Please see the first use case to understand how to create a new conversation. **Request** `POST /generativeagent/v1/analyze` ```json theme={null} { "conversationId": "01HMW15N6V27Y4V2HRCE0CBZJQ", "message": { "text": "How much do I owe for my mobile?", "sender": { "externalId": "11462", "role": "customer" }, "timestamp": "2024-01-23T15:49:37Z" } } ``` **Response** Status 200: Successfully sent the analyze request and created the new message. ```json theme={null} { "conversationId": "01HMW15N6V27Y4V2HRCE0CBZJQ", "messageId": "01HMSHT9KKHHBRMRKJTFZYRCKZ" } ``` **GenerativeAgent events** As a result of the analyze request, the following sequence of messages will be sent via the SSE stream: ```json theme={null} { generativeAgentMessageId: '309181fd-be58-46fa-91b3-ea49f5f4b3d9', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'processingStart' } { generativeAgentMessageId: '3122535a-3d0b-4bb5-a0ff-6c26616d2325', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'reply', reply: { messageId: '01HMW172YTTESK1EG6A9Y8QRFZ', text: "I'm happy to help you! One moment please."
} } { generativeAgentMessageId: '49771949-c26e-49ab-86aa-5259d1a249ab', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'authenticationRequested' } { generativeAgentMessageId: 'd2d43ac5-e160-40fd-9c5b-773c8f7417e0', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'processingEnd' } ``` We can see the second-to-last message is of type authenticationRequested. This tells us that the GenerativeAgent needs authentication in order to continue. Additionally, we can check the conversation state, which is waitingForAuth: **Request** `GET /generativeagent/v1/state?conversationId=01HMW15N6V27Y4V2HRCE0CBZJQ` **Response** Status 200. We see the GenerativeAgent is waiting for authentication for this conversation. ```json theme={null} { "state": "waitingForAuth", "lastGenerativeAgentMessageId": "d2d43ac5-e160-40fd-9c5b-773c8f7417e0" } ``` Now let's call the authentication endpoint. Note that the specific format and content of the user credentials must be agreed upon between your organization and your ASAPP account team. **Request** `POST /conversation/v1/conversations/01HMW15N6V27Y4V2HRCE0CBZJQ/authenticate` ```json theme={null} { "customerExternalId": "33411121", "auth": { {{authentication payload}} } } ``` **Response** Status 204: Successfully sent the authenticate request; no response body is expected. **GenerativeAgent Events** After a successful authenticate request, the GenerativeAgent will resume if it was waiting for auth.
In this case, the following sequence of messages is sent via the SSE Stream: ```json theme={null} { generativeAgentMessageId: '07df33e7-8603-4393-8ea2-ac29e35197c9', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'processingStart' } { generativeAgentMessageId: 'adfe3156-18fe-457b-b726-90c489478c80', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'reply', reply: { messageId: '01HMY19BT31Z4AR05S0M5237EK', text: "Your current balance for your mobile account is $415.38, with no overdue amount and a past due amount of $10." } } { generativeAgentMessageId: '3325ea14-5b73-4c7a-9511-a6faebc5c98c', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'reply', reply: { messageId: '01HMY19CCJ9E8ENS34WNTQ29E2', text: 'There are 26 days remaining in your billing cycle.' } } { generativeAgentMessageId: '3325ea14-5b73-4c7a-9511-a6faebc5c98c', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'reply', reply: { messageId: '01HMY15DGHYHVHZ5GYAXR1TDWS', text: 'For more information on your mobile billing, you can visit https://website.com' } } { generativeAgentMessageId: 'd8785903-a680-4db5-a95f-ba9ed64a7aaa', externalConversationId: '33411121', conversationId: '01HMW15N6V27Y4V2HRCE0CBZJQ', type: 'processingEnd' } ``` ## Conversation with transfer to an agent This example showcases the bot transferring the conversation to an agent (a.k.a. agent escalation).  We assume there is an existing conversation with ID *01HMY50MM3D5JP23NPWXKPQVD4*. Please see the first use case to understand how to create a new conversation. 
**Request** `POST /generativeagent/v1/analyze` ```json theme={null} { "conversationId": "01HMY50MM3D5JP23NPWXKPQVD4", "message": { "text": "Can I talk to a real human?", "sender": { "externalId": "11462", "role": "customer" }, "timestamp": "2024-01-24T11:35:23Z" } } ``` **Response** Status 200: Successfully sent the analyze request and created the new message. ```json theme={null} { "conversationId": "01HMY50MM3D5JP23NPWXKPQVD4", "messageId": "01HMY5FRHW3B76JSS3BVP1VJJX" } ``` **GenerativeAgent Events** As a result of the analyze request, the following sequence of messages will be sent via the SSE Stream: ```json theme={null} { generativeAgentMessageId: '233e206d-a444-4736-9a66-1fde75e46df7', externalConversationId: '33411121', conversationId: '01HMY50MM3D5JP23NPWXKPQVD4', type: 'processingStart' } { generativeAgentMessageId: '2925b18f-4140-4312-b071-b56feac86d5a', externalConversationId: '33411121', conversationId: '01HMY50MM3D5JP23NPWXKPQVD4', type: 'reply', reply: { messageId: '01HMY5FWAMR5DF3DABGNB5118D', text: 'Sure, connecting you with an agent.' } } { generativeAgentMessageId: '42ec4212-02aa-4ac6-94e2-4c8fee24352f', externalConversationId: '33411121', conversationId: '01HMY50MM3D5JP23NPWXKPQVD4', type: 'transferToAgent' } { generativeAgentMessageId: '0deb0eb0-dc75-48e5-80ed-805f14d95e0c', externalConversationId: '33411121', conversationId: '01HMY50MM3D5JP23NPWXKPQVD4', type: 'processingEnd' } ``` The second-to-last message is of type transferToAgent. We can also optionally verify the conversation state by calling the state API: **Request** `GET /generativeagent/v1/state?conversationId=01HMY50MM3D5JP23NPWXKPQVD4` **Response** Status 200. We see the conversation has been transferred to an agent.
```json theme={null} { "state": "transferredToAgent", "lastGenerativeAgentMessageId": "0deb0eb0-dc75-48e5-80ed-805f14d95e0c" } ``` # Genesys AudioConnector for GenerativeAgent Source: https://docs.asapp.com/generativeagent/integrate/genesys-audiohook Learn how to integrate GenerativeAgent into Genesys Cloud using our Genesys AudioConnector integration. The Genesys AudioConnector integration with ASAPP's GenerativeAgent allows callers in your Genesys Cloud CX contact center to have conversations with GenerativeAgent while maintaining the call entirely within your Genesys environment. This guide demonstrates how to integrate GenerativeAgent using Genesys AudioConnector and ASAPP-provided components. It showcases how the various components work together, but you can adapt or replace any part of the integration to match your organization's requirements. ## How it works At a high level, the Genesys AudioConnector integration with GenerativeAgent works by streaming audio and managing conversations through your Genesys Architect flows: 1. **Stream the audio** to GenerativeAgent through Genesys AudioConnector. 2. **GenerativeAgent handles the conversation** using the audio stream and responds to the caller. Since calls remain within your Genesys infrastructure throughout the interaction, you maintain full control over call handling, including error scenarios and transfers. 3. **Return control back** to your Genesys flow when: * The conversation is successfully completed * The caller requests a human agent * An error occurs ## Before you Begin Before using the GenerativeAgent integration with Genesys Cloud CX, you need: * [Get your API Key and Secret](/getting-started/developers#access-api-credentials) * Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled. * Have your dedicated **Base Connection URI** from ASAPP. 
* This is a URI you will use when configuring the Genesys Audiohook Monitor, provided by your ASAPP account team. * Have an existing Genesys Cloud CX Instance * Genesys Cloud CX administrator account with permissions for: * Managing integrations * Configuring Architect flows * Setting up Audiohook Monitor * Managing audio streaming settings ## Step 1: Configure Genesys Cloud CX Integration First, you need to install and configure the ASAPP GenerativeAgent integration in your Genesys Cloud CX environment. You will need to install a separate Audio Connector Integration for each ASAPP environment (Sandbox and Production). From Admin Home, navigate to Integrations > Integrations. This is a list of third-party integrations you have available to install. Use the search functionality to find the ASAPP GenerativeAgent integration, called "AudioConnector". Once the install completes, you are taken to the Integration Details page. Genesys Integration Details You will have two sets of credentials, one for accessing the Production ASAPP environment and one for the Sandbox ASAPP environment. You will need to install a separate Audio Connector Integration for each. We highly recommend you include the appropriate environment when naming the connector, e.g. "ASAPP GenerativeAgent (Production)" or "ASAPP GenerativeAgent (Sandbox)". 1. Navigate to the Configuration tab > Properties and paste the **Base Connection URI**. 2. Navigate to the Credentials sub-tab and click "Configure". * Enter the **API Key** and **API Secret** for the appropriate environment and click "Ok". Ensure the integration is set to "Active". ## Step 2: Set Up Architect Flow With the Audio Connector configured, you need to incorporate GenerativeAgent into your call flows. This is done by adding the GenerativeAgent Audio Connector to your Architect flows at the points where you want GenerativeAgent to handle the conversation.
Genesys Audiohook Flow Open or create the Architect flow where you want to use GenerativeAgent. Determine where in the flow you want to add the GenerativeAgent Audio Connector. * In the Toolbox, expand "Bot" and drag the Audio Connector module to the flow. The connector should be placed at the point where you want to hand off the conversation to GenerativeAgent. * Name the connector. * Specify a Connector ID. * This is not required, but we recommend versioning the connector ID for future version control. * Optionally, configure input session variables: * `customerId`: Passed directly as the customer ID in ASAPP's system * `taskName`: Used to [enter a specific task](/generativeagent/configuring/tasks-and-functions/enter-specific-task) * All other variables are passed as [Input Variables](/generativeagent/configuring/tasks-and-functions/input-variables) When the GenerativeAgent Audio Connector is finished, it will return a result of either "Success" or "Failure". * **Success**: Indicates GenerativeAgent is transferring control back to your system or the caller has requested a human agent. * The block will return an output variable of `ASAPP_Disposition` with a value of: * `agent`: Indicates the caller requested a human agent. * `system`: Indicates GenerativeAgent has completed its task. * The block will also return output variables as defined in your tasks and functions as part of the [system transfer](/generativeagent/configuring/tasks-and-functions/system-transfer). * Configure your flow to route the conversation to the appropriate queue within Genesys Cloud. * **Failure**: Indicates an error occurred. Configure your flow to handle error scenarios, such as playing an error message to the caller and routing to a fallback option. ## Step 3: Test and Deploy Before deploying to production, thorough testing is essential to ensure the integration works as expected and provides a good caller experience. 
Test the integration thoroughly: * Make test calls through the flow. Test various scenarios, including normal conversations and requests for human agents. * Verify audio streaming quality and reliability * Test conversation handling * Ensure GenerativeAgent understands and responds appropriately * Test different caller accents and speech patterns * Verify handling of background noise and interruptions * Check error scenarios * Verify error handling paths in your flow ## Next Steps After successfully integrating GenerativeAgent with your Genesys Cloud CX environment, consider these next steps to optimize your implementation: Learn how to configure GenerativeAgent's behaviors and responses Understand safety features and how to troubleshoot common issues Follow our checklist for deploying to production # Handling GenerativeAgent Events Source: https://docs.asapp.com/generativeagent/integrate/handling-events While analyzing a conversation, GenerativeAgent communicates back to you via events. These events are sent via a [Server-Sent-Event](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) stream. The **single SSE stream** contains events for **all conversations** that are processed by GenerativeAgent. Each event contains the id of the conversation it relates to, and the type of event. Handling these events has two main steps: 1. Create SSE Stream 2. Handle the event ## Step 1: Create SSE Stream To create an SSE stream for GenerativeAgent, first generate a streaming URL, and then initiate the SSE stream with that URL. To create the SSE stream URL, POST to `/streams`: ```bash theme={null} curl -X POST 'https://api.sandbox.asapp.com/generativeagent/v1/streams' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{}' ``` A successful request returns 200 and the `streamingUrl` to use to create the SSE stream. Additionally, it returns a `streamId`.
Save this Id and use it to [reconnect SSE](#handle-sse-disconnects "Handle SSE disconnects") in case the stream disconnects. ```json theme={null} { "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV", "streamingUrl": "https://ws-coradnor.asapp.com/push-api/connect/sse?token=token", "messageTypes": [ "generative-agent-message" ] } ``` The streaming URL is only valid for 30 seconds. After that time, the connection will be rejected and you will need to request a new URL. Initiate the SSE stream by connecting to the URL and handling the events. How you connect to an SSE stream depends on the language you use and your preferred libraries. We include an [example in NodeJS](#code-sample "Code sample") below. ### Handle SSE disconnects If your SSE connection breaks, reestablish the stream using the `streamId` returned in the original request. ```bash theme={null} curl -X POST 'https://api.sandbox.asapp.com/generativeagent/v1/streams' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV" }' ``` A successful request returns 200 and the streaming URL you will reconnect with. ```json theme={null} { "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV", "streamingUrl": "https://ws-coradnor.asapp.com/push-api/connect/sse?token=token", "messageTypes": [ "generative-agent-message" ] } ``` Save the `streamId` to use in your `/analyze` requests. This will send all the GenerativeAgent messages for that analyze request to this SSE stream. ## Step 2: Handle events You need to process each event from GenerativeAgent. The data sent via SSE needs to be parsed as JSON and then handled accordingly: determine the conversation the event pertains to and take the necessary action depending on the event `type`. For a given analyze request on a conversation, you may receive any of the following event types: * **`processingStart`**: The bot started processing.
This can be used to trigger user feedback such as showing a "typing" indicator. * **`authenticationRequired`**: Some API Connections require additional user authentication. Refer to [User authentication required](#user-authentication-required "User Authentication Required") for more information. * **`reply`**: The bot has a reply for the conversation. We will automatically create a message for the bot, but you will need to send the response back to your user. On a text-based system, this can be the text directly; on voice channels, send it through your TTS. * **`processingEnd`**: The bot finished processing. This indicates there will be no further events until analyze is called again. * **`transferToAgent`**: The bot could not handle the request and the conversation should be transferred to an agent. * **`transferToSystem`**: The bot is transferring control to an external system. This is a system transfer function. Here is an example set of events where analyze is called: ```json theme={null} { "generativeAgentMessageId": "116aaf51-8180-47b7-9205-9f61c8799c52", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "processingStart" } { "generativeAgentMessageId": "5c020ad9-4a25-4746-a345-017bb9711dbe", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "reply", "reply": { "messageId": "01HMVXSZANHNGJ49R83HENDAJB", "text": "I'm happy to help you! One moment please." } } { "generativeAgentMessageId": "d566fda8-3b7c-42a2-ae39-d08b66397238", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "reply", "reply": { "messageId": "01HMVXTDR1AT9CNQXPYKKBPJ7F", "text": "You can pay your bill by calling (XXX) XXX-6094, using the Mobile App, or with a customer service agent over the phone (with a $5 fee)."
} } { "generativeAgentMessageId": "bba4320f-de53-4874-83b4-6c8704d3620c", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "processingEnd" } ``` ## User Authentication Required A key power of GenerativeAgent is its ability to call your APIs to look up information or perform an action. These are determined by the [API Connections](/generativeagent/configuring/connect-apis) you create. Some APIs require end user authentication. When this is the case, we send the `authenticationRequested` event. Work with your ASAPP team to determine those authentication needs and what needs to be passed back to ASAPP. Based on the specifics of your API, you will need to gather the end user authentication information and call [`/authenticate`](/apis/conversations/authenticate-a-user-in-a-conversation) on the conversation: ```bash theme={null} curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/[conversation Id]/authenticate' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "customerExternalId": "[Your Id of the customer]", "auth": { {{Your predetermined authentication payload}} } }' ``` A successful request returns a 204 status and no body. GenerativeAgent will continue processing and send you subsequent events. ## Code sample Here is an example of initiating the SSE stream and listening for events using Node.js.
This uses [axios](https://www.npmjs.com/package/axios) to get the URL and the [EventSource](https://www.npmjs.com/package/eventsource) package for handling the events: ```javascript theme={null} import axios from 'axios'; import EventSource from 'eventsource'; const response = await axios.post('https://api.sandbox.asapp.com/generativeagent/v1/streams', {}, { headers: { 'asapp-api-id': '[Your API key id]', 'asapp-api-secret': '[Your API secret]', 'Content-Type': 'application/json' } }); console.log('Using streaming URL:', response.data.streamingUrl); const eventSource = new EventSource(response.data.streamingUrl); eventSource.onopen = (event) => { console.log('Connection opened:', event.type); }; eventSource.onerror = (error) => { console.error('EventSource failed:', error); eventSource.close(); }; eventSource.onmessage = (event) => { console.log('Received uncategorized data:', event.data); }; eventSource.addEventListener('status', (event) => { console.log('Received status ping:', event.data); }) eventSource.addEventListener('generative-agent-message', (event) => { console.log('Received generative-agent-message:', event.data); try { const parsedData = JSON.parse(event.data); console.log('Parsed data:', parsedData); // Handle different event types here switch (parsedData.type) { case "processingStart": console.log("Bot started processing."); break; case "authenticationRequired": console.log("Initiate customer authentication."); break; case "reply": console.log("GenerativeAgent responded:", parsedData.reply.text); break; case "processingEnd": console.log("Bot finished processing"); break; case "transferToAgent": console.log("Bot could not handle request, transfer to a live agent."); break; default: console.log("Unknown event type:", parsedData.type); } } catch (error) { console.error('Error parsing event data:', error); } }) ``` This example is intended to be illustrative only.
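The reconnect flow described in "Handle SSE disconnects" can be factored into a small helper. The sketch below is illustrative only: `httpPost` is a hypothetical stand-in for your HTTP client (such as a wrapper around `axios.post`), injected so the request-building logic can be shown without a live network connection.

```javascript
// Build the request body for POST /generativeagent/v1/streams.
// A first connection sends an empty body; a reconnect sends the
// streamId saved from the original response.
function buildStreamRequest(savedStreamId) {
  return savedStreamId ? { streamId: savedStreamId } : {};
}

// Request a fresh streaming URL, preserving the streamId so a
// later disconnect can rejoin the same stream.
async function getStreamingUrl(httpPost, savedStreamId) {
  const body = buildStreamRequest(savedStreamId);
  const data = await httpPost(
    'https://api.sandbox.asapp.com/generativeagent/v1/streams',
    body
  );
  return { streamId: data.streamId, streamingUrl: data.streamingUrl };
}
```

Because each streaming URL is only valid for 30 seconds, request a new URL immediately before connecting rather than caching it.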
## Event Schema Each event is a JSON object with the following fields: | Field Name | Type | Description | | :---------------------------------- | :----------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | generativeAgentMessageId | string | A unique identifier for this event. | | conversationId | string | The internal identifier for the conversation from the ASAPP system. | | externalConversationId | string | The external identifier for the conversation from your external system. | | type | string, enum | The type of bot response. It can be one of the following:
  • reply
  • processingStart
  • processingEnd
  • authenticationRequired
  • transferToAgent
  • transferToSystem
| | reply.\* | object | If the `type` is **reply** then the bot's reply is contained in this object. | | reply.messageId | string | The identifier of the message sent in the reply. | | reply.text | string | The message text of the reply. | | transferToSystem.\* | object | If the `type` is **transferToSystem** then the variables to be transferred to the external system are contained in this object. | | transferToSystem.referenceVariables | object | A hash map of reference variables to be transferred to the external system. | | transferToSystem.transferVariables | object | A hash map of transfer variables to be transferred to the external system. | | transferToSystem.currentTaskName | string | The name of the current task that is being transferred to the external system. | ```json theme={null} { "generativeAgentMessageId": "d566fda8-3b7c-42a2-ae39-d08b66397238", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "reply", "reply": { "messageId": "01HMVXTDR1AT9CNQXPYKKBPJ7F", "text": "You can pay your bill by calling (XXX) XXX-6094, using the Mobile App, or with a customer service agent over the phone (with a $5 fee)."
} } ``` ```json theme={null} { "generativeAgentMessageId": "116aaf51-8180-47b7-9205-9f61c8799c52", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "processingStart" } ``` ```json theme={null} { "generativeAgentMessageId": "bba4320f-de53-4874-83b4-6c8704d3620c", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "processingEnd" } ``` ```json theme={null} { "generativeAgentMessageId": "7d9e4f12-b3a8-4c91-95d6-8ef2a7c31b59", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "authenticationRequired" } ``` ```json theme={null} { "generativeAgentMessageId": "9f47d8e3-c612-4b9a-8d5f-e31a2c4b6789", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "transferToAgent" } ``` ```json theme={null} { "generativeAgentMessageId": "bba4320f-de53-4874-83b4-6c8704d3620c", "externalConversationId": "33411121", "conversationId": "01HMVXRVSA1EGC0CHQTF1X2RN3", "type": "transferToSystem", "transferToSystem": { "referenceVariables": { "customerName": "John Smith", "accountNumber": "12345", "isActive": true }, "transferVariables": { "department": "billing", "priority": "high", "notes": ["Payment pending", "Requires callback"] }, "currentTaskName": "billing_transfer" } } ``` ## Next Steps After handling events from GenerativeAgent, you have control over what happens during conversations. You may find one of the following sections helpful in advancing your integration: # SIP Transfers Source: https://docs.asapp.com/generativeagent/integrate/sip-transfers Choose between API-based or SIP headers approaches for integrating GenerativeAgent with SIP ASAPP's SIP Transfers enable businesses to integrate GenerativeAgent Voice using Session Initiation Protocol (SIP) instead of traditional phone numbers. This approach provides more control over the call flow and can be more cost-effective for certain use cases.
We have dedicated connector integrations for some platforms ([Genesys](/generativeagent/integrate/genesys-audiohook), [Amazon Connect](/generativeagent/integrate/amazon-connect)), but not every platform allows native integrations. SIP transfers let you use the SIP protocol to connect your users to GenerativeAgent while keeping you in control of the call the entire time.

## Choose Your Integration Approach

**For rich context data**

* Unlimited context data via API calls
* Requires API integration

**For basic context data**

* Context in SIP headers (1024 char limit)
* No API calls needed

## SIP Requirements

ASAPP uses Twilio as our telephony provider. To ensure successful integration, your SIP infrastructure must meet the following requirements when transferring calls to GenerativeAgent:

### Network Configuration

**Media IP Range**: Twilio voice media uses the global IP range `168.86.128.0/18` with UDP ports `10000-60000`. Configure your firewall to allow traffic from this range.

**Codec Support**: Twilio supports the following audio codecs for media transmission:

* G.711 μ-law (PCMU)
* G.711 A-law (PCMA)

### Security

**TLS Encryption**: We strongly recommend using Transport Layer Security (TLS) for SIP signaling. Twilio uses signed certificates and supports specific TLS ciphers. For detailed TLS configuration requirements, refer to the [Twilio SIP interface documentation](https://www.twilio.com/docs/voice/api/sip-interface#securing-sip-traffic-using-tls).

Ensure your SIP infrastructure can handle the specified IP range and port requirements. Firewall misconfigurations are a common cause of connection failures.

## Next Steps

Choose your preferred integration approach above to get started with implementation.
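As a quick sanity check of the network requirement above, you can verify that an observed source address falls inside Twilio's media range before debugging deeper. This is a verification sketch using Python's standard `ipaddress` module, not part of the ASAPP API; the function name is illustrative.

```python
import ipaddress

# Twilio's global voice media range and UDP ports, per the
# Network Configuration requirements above.
TWILIO_MEDIA_NET = ipaddress.ip_network("168.86.128.0/18")
MEDIA_UDP_PORTS = range(10000, 60001)

def is_twilio_media_traffic(ip: str, port: int) -> bool:
    """Return True if ip:port falls inside the range your firewall must allow."""
    return ipaddress.ip_address(ip) in TWILIO_MEDIA_NET and port in MEDIA_UDP_PORTS

print(is_twilio_media_traffic("168.86.130.5", 12000))  # inside the /18 -> True
print(is_twilio_media_traffic("168.86.192.1", 12000))  # outside the /18 -> False
```

If a connection fails and the remote media address passes this check, the problem is more likely a firewall rule than the address range itself.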
# API-based SIP Transfers

Source: https://docs.asapp.com/generativeagent/integrate/sip-transfers-api

Use REST API calls to pass rich context data for SIP transfers with unlimited complexity

API-based SIP transfers use REST API calls to create call transfers with rich context input and output. This approach provides unlimited complexity for context data but requires API integration in your system.

## How it works

At a high level, API-based SIP transfers work by:

1. **Request a SIP URI**: Your system requests a destination SIP URI from ASAPP via API call. You can provide complex context to the call.
2. **GenerativeAgent handles the conversation**: Transfer the call to GenerativeAgent via the SIP URI so it can talk directly to the customer.
3. **Return control**: When GenerativeAgent has completed the call, it will transfer the call back to your specified return URI and your system will fetch the resulting context and handle the rest of the call flow.

SIP Transfer Flow

1. **Incoming Call**: A customer calls your existing phone number
2. **IVR Processing**: Your existing IVR system processes the call and determines when to transfer to GenerativeAgent
3. **Request a SIP URI**: Your system requests a destination SIP URI from ASAPP via API call. You can provide complex context to the call.
4. **Transfer the call**: Your system transfers the call to the destination SIP URI via SIP protocol.
5. **Detect call completion**: When GenerativeAgent has completed the call, it will transfer the call back to your return URI.
6. **Fetch the call context**: Your system will fetch the context, which includes the transfer type, from the call via API call.
7. **Handle the call**: Using the context and transfer type, your system handles the agent escalation, call disposition, or any other steps in your call flow.
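Step 3 of this flow is a plain REST call. As a rough sketch, the request body for creating a call transfer can be assembled like this; the function name is illustrative, but the field names (`externalConversationId`, `sip.returnURI`, `inputContext`) come from the parameter table and examples on this page.

```python
import json
import uuid

def build_call_transfer_body(conversation_id, return_uri=None,
                             task_name=None, input_variables=None):
    """Build the JSON body for POST /generativeagent/v1/call-transfers."""
    body = {
        "id": str(uuid.uuid4()),        # your unique transfer id; save it for the fetch step
        "externalConversationId": conversation_id,
        "type": "SIP",
    }
    if return_uri is not None:          # required for REFER/INVITE, omitted for BYE
        body["sip"] = {"returnURI": return_uri}
    if task_name or input_variables:
        body["inputContext"] = {"taskName": task_name,
                                "inputVariables": input_variables or {}}
    return body

body = build_call_transfer_body("33411121",
                                return_uri="sip:user@customer-sbc.example.com",
                                task_name="call_routing",
                                input_variables={"accountNumber": "3434"})
print(json.dumps(body, indent=2))
```

Your SIP transfer then carries the generated `id` in the `X-ASAPP-CallTransferId` header, as described in the steps below.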
## Before you Begin Before implementing API-based SIP Transfers, you need: * [Get your API Key Id and Secret](/getting-started/developers#access-api-credentials) * Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled. * [Configure Tasks and Functions](/generativeagent/configuring) * Contact your ASAPP account team to enable SIP transfers * This includes determining how many concurrent calls you need to support, SIP infrastructure requirements, etc. * **Configure SIP server authentication**: ASAPP requires authentication for all incoming SIP requests to ensure security. You must provide one or both of the following authentication methods: * **IP whitelist**: The IP address(es) of your SIP server(s) that ASAPP will allow to make SIP requests * **Username and password**: SIP authentication credentials that ASAPP will use to validate your SIP requests **Security requirement**: ASAPP cannot accept unauthenticated SIP requests. You must provide at least one authentication method (IP whitelist and/or username/password) during setup. * **Get your SIP URI for transfers**: Obtain the static SIP URI from ASAPP that you'll use to route calls to GenerativeAgent * **Configure transfer settings**: Set up your default [transfer type](#transfer-types) and authentication credentials (for INVITE transfers) with your ASAPP account team ### Transfer Types Your transfer type is configured as part of your static setup with ASAPP. 
Choose the appropriate transfer type based on your call flow needs: | Transfer Type | Behavior | Use Case | | ------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------- | | **BYE** | GenerativeAgent disconnects when done | Your system handles the disconnect (lower cost) | | **REFER** | Standard blind transfer - sends REFER message to your return URI | Transfer to another system after ASAPP completes (higher cost) | | **INVITE** | Keeps ASAPP in call flow for continued transcription | Provide end-to-end conversation understanding for GenerativeAgent (higher cost) | **Cost Implications**: REFER and INVITE transfer types have higher cost implications that need to be aligned with your sales team before implementation. Contact your ASAPP representative to discuss pricing for these transfer types. For INVITE transfers, you can provide authentication credentials (username/password) as part of your static configuration with ASAPP. ## Step 1: Create a call transfer To transfer a call to GenerativeAgent, you need to create a `call-transfer`. A call transfer is the attempt to transfer a call to GenerativeAgent. This resource will include context and configuration for the transfer, but you'll route the call to the static SIP URI provided by ASAPP. To [create a call transfer](/apis/generativeagent/create-call-transfer), you need to specify: | Parameter | Description | | -----------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `id` | Your unique identifier for the call transfer. You will use this later to fetch the call transfer result. 
| | `externalConversationId` | Your unique identifier for the conversation. This allows you to reconnect the customer to the same conversation and is used in reporting. | | `type` | Specify a type of **SIP**. | | `sip.returnURI` | Only used for REFER and INVITE transfer types.
The SIP URI to transfer the call back to when the conversation ends. You can include any parameters in the URI (e.g., User-to-User headers, custom parameters) and ASAPP will send the URI back exactly as provided. Maximum length: 1024 characters. | | `inputContext` | Optionally specify the [`taskName`](/generativeagent/configuring/tasks-and-functions/enter-specific-task) and [`inputVariables`](/generativeagent/configuring/tasks-and-functions/input-variables) to trigger GenerativeAgent with specific task information and variables. | ```shell REFER/INVITE Example theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your Transfer Id]", "externalConversationId":"[Your Conversation Id]", "type": "SIP", "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345" }, "inputContext": { "taskName":"call_routing", "inputVariables":{ "accountNumber":"3434", "name": "John Doe" } } }' ``` ```shell BYE Example theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your Transfer Id]", "externalConversationId":"[Your Conversation Id]", "type": "SIP", "inputContext": { "taskName":"call_routing", "inputVariables":{ "accountNumber":"3434", "name": "John Doe" } } }' ``` A successful request returns 200 with the call transfer data: ```json theme={null} { "id": "[Your Transfer Id]", "externalConversationId": "[Your Conversation Id]", "status": "ACTIVE", "type": "SIP", "inputContext": { "taskName": "call_routing", "inputVariables": { "accountNumber": "3434", "name": "John Doe" } }, "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345" }, "createdAt": "2025-01-15T13:06:00Z" } ``` **Save** the `id` from the response; you 
will need it to pass as the `X-ASAPP-CallTransferId` header when transferring the call and to query the call transfer result in Step 3. ## Step 2: Transfer the call and handle the response Once you've created the call transfer, you need to route the call to GenerativeAgent and handle the return transfer when the conversation ends. Transfer the call to the static SIP URI provided by ASAPP during setup. Include the call transfer `id` as the `X-ASAPP-CallTransferId` header in your SIP transfer. **Authentication required**: ASAPP will authenticate your SIP request using the IP whitelist and/or username/password credentials you provided during setup. Your SIP request must pass authentication or it will be rejected. Once you transfer the call, GenerativeAgent is given the input context, if provided, and talks to the customer. The specific implementation on how to perform a SIP transfer depends on your call center system. With SIP transfers, the customer is calling into your phone number so your SBC/PBX is handling the inbound call. Your system maintains visibility and ultimate ownership of the call the entire time. Two scenarios are possible during the conversation that you must handle accordingly: 1. **GenerativeAgent completes the conversation with an agent escalation or a system transfer** * GenerativeAgent has determined it needs to return the call to your system, either to an agent escalation or a system transfer. * GenerativeAgent handles the call return based on your transfer type: * **BYE Transfer**: GenerativeAgent disconnects the call. Your system needs to detect the disconnect and proceed to fetch the call context. * **REFER Transfer**: GenerativeAgent sends a SIP REFER message to your return URI. Your system should accept and handle the REFER, and then fetch the call context. * **INVITE Transfer**: GenerativeAgent sends a SIP INVITE to your return URI. Your system should accept the INVITE and then fetch the call context. 
ASAPP will continue to transcribe the call until the call is ended. * The output context of the conversation can be retrieved. 2. **Customer hangs up the phone call** * When the customer hangs up, there are no output variables since GenerativeAgent didn't close out the conversation. * No transfer is performed and no output context will be available for fetching. The call completion detection is crucial for maintaining proper call flow control and ensuring you can fetch the conversation context before handling the next steps. ## Step 3: Fetch the call context After handling the return transfer in Step 2, retrieve the call transfer result and outputContext to understand what happened during the conversation. **Only fetch call context when GenerativeAgent handed back the call**: You can only retrieve meaningful output context when GenerativeAgent completed the conversation and transferred the call back to your system. If the customer hung up during the conversation, there will be no output context available for fetching. 
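The two scenarios above reduce to a simple guard: only fetch the call context when GenerativeAgent handed the call back. A minimal sketch, where the event names are placeholders for however your SBC/PBX reports the end of the GenerativeAgent leg (they are not part of the ASAPP API):

```python
# How the GenerativeAgent leg ended, as observed by your telephony layer.
# These event names are placeholders, not part of the ASAPP API.
GENERATIVEAGENT_RETURN_EVENTS = {"BYE", "REFER", "INVITE"}

def should_fetch_context(end_event: str) -> bool:
    """Fetch outputContext only when GenerativeAgent handed the call back."""
    return end_event in GENERATIVEAGENT_RETURN_EVENTS

print(should_fetch_context("REFER"))            # handed back -> True
print(should_fetch_context("CUSTOMER_HANGUP"))  # caller hung up -> False
```

Wiring this guard in before the fetch avoids needless API calls for conversations that ended with a customer hang-up.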
To retrieve the call transfer result, you need to [fetch the `call-transfers`](/apis/generativeagent/get-call-transfer) with the `id` of the original call transfer: ```shell theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers/[Your Transfer Id]' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' ``` A successful request returns 200 with call transfer data: ```json theme={null} { "id": "[Your Transfer Id]", "externalConversationId": "[Your Conversation Id]", "status": "COMPLETED", "createdAt": "2025-01-15T13:06:00Z", "callReceivedAt": "2025-01-15T13:06:30Z", "completedAt": "2025-01-15T13:09:45Z", "inputContext": { "taskName": "call_routing", "inputVariables": { "accountNumber": "3434", "name": "John Doe" } }, "type": "SIP", "outputContext": { "transferType": "SYSTEM", "currentTaskName": "AccountInquiry", "referenceVariables": { "account_balance": "1250.00", "last_payment_date": "2025-01-10" }, "transferVariables": { "next_action": "schedule_callback", "priority": "high" } }, "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345" } } ``` **Extract the key information:** * **Status**: Indicates if the call was completed successfully. | Status | Description | | ------------- | --------------------------------------------------------------------------------------------------- | | **ACTIVE** | The call transfer is active and the destination SIP URI is waiting to be connected. | | **ONGOING** | The call was connected and GenerativeAgent is talking to the customer. | | **COMPLETED** | The call transfer has completed. | | **EXPIRED** | The call transfer has expired and the destination SIP URI is no longer valid for that conversation. | * **outputContext**: Contains the conversation results and any transfer variables * **transferType**: Indicates the type of transfer that occurred. This can be either `AGENT` or [`SYSTEM`](/generativeagent/configuring/tasks-and-functions/system-transfer). 
* **referenceVariables**: Context information about the customer and conversation * **transferVariables**: Data that should be passed to the next system or agent With this information, handle the call according to your own business logic, such as routing the call to an agent or handling call disposition. ### Handling customer reconnections There may be scenarios where you want to reconnect a customer directly to where they left off with GenerativeAgent. For example, if the customer hangs up the phone, or after transferring back to your system, you want to transfer them back again to GenerativeAgent. To do this, ensure you use the same `externalConversationId` to reconnect the customer to the same conversation. GenerativeAgent will resume the conversation where it left off. ```shell theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your New Transfer Id for this transfer attempt]", "externalConversationId":"[Your Original Conversation Id from the first conversation leg]", "type": "SIP", "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345", "returnTransferType": "REFER" } }' ``` ## Dynamic Transfer Configuration By default, your transfer type is configured as part of your static setup with ASAPP. However, you can optionally override the transfer type for specific calls by including the `sip.returnTransferType` parameter in your call transfer request. ### Override Transfer Type To use a different transfer type for a specific call, include the `sip.returnTransferType` parameter: | Parameter | Description | | ----------------------------------------: | :--------------------------------------------------------------------------------------- | | `sip.returnTransferType` | Override the default transfer type for this specific call (`BYE`, `REFER`, or `INVITE`). 
| | `sip.returnInviteAuthentication.username` | Username for authentication (optional for INVITE transfers). | | `sip.returnInviteAuthentication.password` | Password for authentication (optional for INVITE transfers). | ```shell INVITE with authentication example theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your Transfer Id]", "externalConversationId":"[Your Conversation Id]", "type": "SIP", "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345", "returnTransferType": "INVITE", "returnInviteAuthentication": { "username": "your_username", "password": "your_password" } }, "inputContext": { "taskName":"call_routing", "inputVariables":{ "accountNumber":"3434", "name": "John Doe" } } }' ``` ```shell REFER example theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your Transfer Id]", "externalConversationId":"[Your Conversation Id]", "type": "SIP", "sip": { "returnURI": "sip:user@customer-sbc.example.com;User-to-User=UUI-12345", "returnTransferType": "REFER" }, "inputContext": { "taskName":"call_routing", "inputVariables":{ "accountNumber":"3434", "name": "John Doe" } } }' ``` ```shell BYE example theme={null} curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "id":"[Your Transfer Id]", "externalConversationId":"[Your Conversation Id]", "type": "SIP", "sip": { "returnTransferType": "BYE" }, "inputContext": { "taskName":"call_routing", "inputVariables":{ "accountNumber":"3434", "name": "John Doe" } } }' ``` When using dynamic transfer configuration: * The 
`sip.returnURI` is required for REFER and INVITE transfer types, but not for BYE transfers * For INVITE transfers, you may provide `sip.returnInviteAuthentication.username` and `sip.returnInviteAuthentication.password` for authentication ## Next Steps Now that you have successfully implemented API-based SIP Transfers with GenerativeAgent, here are some important next steps to consider: Learn how to configure GenerativeAgent's behaviors, tasks, and communication style Configure your APIs to allow GenerativeAgent to access necessary data and perform actions Connect and optimize your knowledge base to improve GenerativeAgent's responses Follow the deployment checklist to launch GenerativeAgent in your production environment # SIP Headers Transfer Source: https://docs.asapp.com/generativeagent/integrate/sip-transfers-headers Use SIP headers to pass context directly in transfers without API calls SIP Headers Transfer enables you to integrate GenerativeAgent with your existing SIP telephony infrastructure by passing context directly in SIP headers. This approach eliminates the need for API calls during the transfer process, making it simpler to implement but with limited context complexity due to SIP header size constraints. ## How it works At a high level, SIP Headers Transfer works by: 1. **Transfer to ASAPP**: You transfer the call to ASAPP using SIP with context passed in headers 2. **GenerativeAgent handles the conversation**: Transfer the call to GenerativeAgent via SIP so it can talk directly to the customer 3. **Return control**: When GenerativeAgent has completed the call, it will transfer the call back to your system over SIP with return and transfer variables SIP Headers Transfer Flow 1. **Incoming Call**: A customer calls your existing phone number 2. **IVR Processing**: Your existing IVR system processes the call and determines when to transfer to GenerativeAgent 3. 
**Transfer the call**: Your system transfers the call to ASAPP's static SIP domain with context passed in SIP headers 4. **Receive the returned SIP transfer**: When GenerativeAgent has completed the call, it will transfer the call back to your return URI 5. **Process the return data**: Your system extracts the context from the Refer-To URI parameters and handles the call flow ## Prerequisites Before implementing SIP Headers Transfer, you need: * [Configure Tasks and Functions](/generativeagent/configuring) * Contact your ASAPP account team to enable SIP transfers * This includes determining how many concurrent calls you need to support, SIP infrastructure requirements, etc. * **Configure SIP server authentication**: ASAPP requires authentication for all incoming SIP requests to ensure security. You must provide one or both of the following authentication methods: * **IP whitelist**: The IP address(es) of your SIP server(s) that ASAPP will allow to make SIP requests * **Username and password**: SIP authentication credentials that ASAPP will use to validate your SIP requests **Security requirement**: ASAPP cannot accept unauthenticated SIP requests. You must provide at least one authentication method (IP whitelist and/or username/password) during setup. * ASAPP will provide a static SIP domain for you to transfer to * **Provide your SIP return URI**: You must provide ASAPP with your SIP URI for return transfers Unlike API-based transfers, SIP Headers Transfer does not require API credentials as context is passed directly in the SIP headers. ## SIP Return URI Configuration You need to provide ASAPP with your SIP return URI configuration to enable return transfers. 
This includes: ### Return Transfer Methods Choose the appropriate return transfer method based on your call flow needs: | Transfer Method | Behavior | Use Case | | --------------- | ---------------------------------------------------------------- | ----------------------------------------------------------------- | | **REFER** | Standard blind transfer - sends REFER message to your return URI | Transfer back to you after ASAPP completes | | **INVITE** | Keeps ASAPP in call flow for continued transcription | Provide end-to-end conversation understanding for GenerativeAgent | **Cost Implications**: INVITE transfer method has higher cost implications that need to be aligned with your sales team before implementation. Contact your ASAPP representative to discuss pricing for this transfer method. ### Required Configuration When setting up SIP Headers Transfer, provide ASAPP with: * **SIP Return URI**: The complete SIP URI where ASAPP should transfer calls back to * **Transfer Method**: Whether to use REFER or INVITE for return transfers * **Authentication (INVITE only)**: If using INVITE method, specify whether to use username/password authentication ## Step 2: Transfer the call to ASAPP Transfer the call to ASAPP using SIP. ASAPP will provide a static SIP domain for you to transfer to. **Authentication required**: ASAPP will authenticate your SIP request using the IP whitelist and/or username/password credentials you provided during setup. Your SIP request must pass authentication or it will be rejected. 
You have several headers you can use to pass context to ASAPP: | Header | Description | | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `X-GA-taskName` | The [entry task](/generativeagent/configuring/tasks-and-functions/enter-specific-task) for GenerativeAgent to handle the call | | `X-GA-extConversationId` | The external conversation ID for the call | | `X-GA-customerId` | The customer ID for the call | | `X-GA-iv-*` | Add any number of [input variables](/generativeagent/configuring/tasks-and-functions/input-variables) with this prefix. These can be referenced in the task instructions | For input variables, use the prefix `X-GA-iv-` followed by your variable name. For example, an input variable named `userName` would be passed as `X-GA-iv-userName`. ```bash Example SIP Transfer theme={null} INVITE sip:asapp-endpoint.com SIP/2.0 Via: SIP/2.0/UDP your-sbc.example.com:5060 From: To: Call-ID: call-12345@your-sbc.example.com CSeq: 1 INVITE X-GA-taskName: call_routing X-GA-extConversationId: conv-12345 X-GA-customerId: cust-67890 X-GA-iv-accountNumber: 3434 X-GA-iv-name: John Doe X-GA-iv-priority: high ``` After connecting, GenerativeAgent will handle the conversation using the provided context. ## Step 3: Handle the return transfer To transfer back a call to your system, GenerativeAgent will perform a SIP transfer back to you using the method you configured (REFER or INVITE). The transfer method determines how the call is returned: **REFER Transfer**: GenerativeAgent sends a SIP REFER message to your return URI. The transfer will send back the list of referenceVariables and transferVariables as query params in the Refer target (Refer-To). **INVITE Transfer**: GenerativeAgent sends a SIP INVITE to your return URI. 
ASAPP will continue to transcribe the call to provide end-to-end conversation understanding for GenerativeAgent. For **REFER transfers**, the following parameters are sent as query params in the Refer-To header: | Parameter | Description | | --------------- | ----------------------------------------------------------------------------------------- | | `X-GA-extConId` | The conversation ID you provided | | `X-GA-transfer` | Transfer type: **AGENT** (unable to help) or **SYSTEM** (called system transfer function) | | `X-GA-rv-*` | Each reference variable with this prefix | | `X-GA-tv-*` | Each transfer variable with this prefix | These are defined as part of the [System Transfer](/generativeagent/configuring/tasks-and-functions/system-transfer) function within the GenerativeAgent configuration. ```bash Example REFER Return Transfer theme={null} Refer-To: sip:your-sbc.example.com?X-GA-extConId=conv-12345&X-GA-transfer=SYSTEM&X-GA-tv-next_action=schedule_callback&X-GA-tv-priority=high&X-GA-rv-account_balance=1250.00&X-GA-rv-last_payment_date=2025-01-10 ``` ### Header Size Limit There is a **1024 character limit** for all context data. Headers are added in this order: 1. **Transfer variables** (`X-GA-tv-*`) - added first 2. **Reference variables** (`X-GA-rv-*`) - added second The system stops adding variables when it reaches the limit. Any remaining variables are dropped, so use shorter variable names and values to maximize the data you can transfer. The [System Transfer function](/generativeagent/configuring/tasks-and-functions/system-transfer) determines which variables are included, so make sure the total size of the variables you pass does not exceed this limit. ## Step 4: Process the return data The processing depends on the transfer method you configured: **For REFER transfers**: When you receive the REFER message, extract the context from the Refer-To URI parameters: 1. **Parse the Refer-To URI** to extract all parameters 2. 
**Identify the transfer type** from `X-GA-transfer`:
   * `AGENT`: GenerativeAgent was unable to help and needs human agent escalation
   * `SYSTEM`: GenerativeAgent called a system transfer function to hand control back to your system
3. **Extract variables**:
   * Reference variables (`X-GA-rv-*`): Context information about the customer and conversation
   * Transfer variables (`X-GA-tv-*`): Data that should be passed to the next system or agent

Use this information to handle the call according to your business logic, such as routing to an agent or handling call disposition.

## Next Steps

Now that you understand how SIP Headers Transfer works, here are some important next steps to consider:

Learn how to configure GenerativeAgent's behaviors, tasks, and communication style

Configure system transfer functions to control what data is returned in transfer variables

Learn how to structure input variables for optimal context passing

Follow the deployment checklist to launch GenerativeAgent in your production environment

# Direct API Integration

Source: https://docs.asapp.com/generativeagent/integrate/text-only-generativeagent

You have the option to integrate with GenerativeAgent using our APIs to directly provide the conversation transcript. This may be helpful if you:

* Have your own Speech-to-Text (STT) and Text-to-Speech (TTS) service.
* Are adding GenerativeAgent to a text-only channel like SMS or website chat.

GenerativeAgent works on a loop: you send the text content of the conversation, have GenerativeAgent analyze it, and then handle the results. This process is repeated until GenerativeAgent addresses the user's needs, or GenerativeAgent is unable to help the user and requests a transfer to an agent.

Your text-only integration needs to handle:

* Listening for and handling GenerativeAgent events. Create a single SSE stream where events from all conversations are sent.
* Connecting your chat system and triggering GenerativeAgent.

1.
Create a conversation
2. Add Messages
3. Analyze a conversation

This diagram shows the interaction between your server and ASAPP; these steps are explained in more detail below:

## Before you Begin

Before you start integrating to GenerativeAgent, you need to:

* [Get your API Key Id and Secret](/getting-started/developers)
* Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you are unsure.
* [Configure Tasks and Functions](/generativeagent/configuring).

## Step 1: Listen and Handle GenerativeAgent Events

GenerativeAgent sends you events during the conversation. All events for all conversations being evaluated by GenerativeAgent are sent through a single [Server-Sent-Event](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) (SSE) stream.

To create the SSE stream URL, POST to [`/streams`](/apis/generativeagent/create-stream-url):

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/generativeagent/v1/streams' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --header 'Content-Type: application/json' \
  --data '{}'
```

A successful request returns 200 and the streaming URL you will connect to.

```json theme={null}
{
  "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
  "streamingUrl": "https://ws-coradnor.asapp.com/push-api/connect/sse?token=token",
  "messageTypes": [
    "generative-agent-message"
  ]
}
```

Save the `streamId`. You will use it later to route GenerativeAgent events to this SSE stream.

You need to [listen and handle these events](/generativeagent/integrate/handling-events) to enable GenerativeAgent to interact with your users.

## Step 2: Create a Conversation

A `conversation` represents a thread of messages between an end user and one or more agents. GenerativeAgent evaluates and responds in a given conversation.
[Create a `conversation`](/apis/conversations/create-or-update-a-conversation) providing your Ids for the conversation and customer:

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --header 'Content-Type: application/json' \
  --data '{
    "externalId": "1",
    "customer": {
      "externalId": "[Your id for the customer]",
      "name": "customer name"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

A successfully created conversation returns a status code of 200 and the conversation's `id`. Save the conversation `id`, as it is used when calling GenerativeAgent.

```json theme={null}
{"id":"01HNE48VMKNZ0B0SG3CEFV24WM"}
```

## Step 3: Add messages

Whether you are implementing a text-based channel or using your own transcription, provide the utterances from your users by creating **`messages`**. A `message` represents a single communication within a conversation.

[Create a `message`](/apis/messages/create-a-message) providing the text of what your user said:

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations/01HNE48VMKNZ0B0SG3CEFV24WM/messages' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --header 'Content-Type: application/json' \
  --data '{
    "text": "Hello, I would like to upgrade my internet plan to GOLD.",
    "sender": {
      "role": "customer",
      "externalId": "[Your id for the customer]"
    },
    "timestamp": "2024-01-23T11:42:42Z"
  }'
```

Continue to provide the messages as the conversation progresses. You can provide a single message as part of the `/analyze` call if that works better with the design of your system.

## Step 4: Analyze conversation with GenerativeAgent

Once you have the SSE stream connected and are sending messages, you need to engage GenerativeAgent with a given conversation.
To have GenerativeAgent analyze a conversation, make a [POST request to `/analyze`](/apis/generativeagent/analyze-conversation):

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/generativeagent/v1/analyze' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --header 'Content-Type: application/json' \
  --data '{
    "conversationId": "01HNE48VMKNZ0B0SG3CEFV24WM",
    "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV"
  }'
```

Make sure to include the `streamId` created when you started the SSE stream.

GenerativeAgent evaluates the conversation at that moment in time to determine a response; it is not aware of any additional messages sent while it is processing.

A successful response returns a 200 and the conversation Id.

```json theme={null}
{
  "conversationId":"01HNE48VMKNZ0B0SG3CEFV24WM"
}
```

GenerativeAgent's response is communicated by the [events](/generativeagent/integrate/handling-events) sent through the SSE stream.

### Analyze with Message

You have the option to send a message when calling analyze.

```bash theme={null}
curl -X POST 'https://api.sandbox.asapp.com/generativeagent/v1/analyze' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --header 'Content-Type: application/json' \
  --data '{
    "conversationId": "01HNE48VMKNZ0B0SG3CEFV24WM",
    "streamId": "01ARZ3NDEKTSV4RRFFQ69G5FAV",
    "message": {
      "text": "hello, can I see my bill?",
      "sender": {
        "externalId": "321",
        "role": "customer"
      },
      "timestamp": "2024-01-23T11:50:50Z"
    }
  }'
```

A successful response returns a 200 status code, the id of the conversation, and the id of the message that was created.

```json theme={null}
{
  "conversationId":"01HNE48VMKNZ0B0SG3CEFV24WM",
  "messageId":"01HNE6ZEAC94ENQT1VF2EPZE4Y"
}
```

### Add Input Variables and Task context

As the conversation progresses, you can give GenerativeAgent more context by using the `taskName` and `inputVariables` attributes.
You can also simulate Tasks and Input Variables in the [Previewer](/generativeagent/configuring/previewer#input-variables)

```bash theme={null}
curl --request POST \
  --url https://api.sandbox.asapp.com/generativeagent/v1/analyze \
  --header 'Content-Type: application/json' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --data '{
    "conversationId": "01BX5ZZKBKACTAV9WEVGEMMVS0",
    "message": {
      "text": "Hello, I would like to upgrade my internet plan to GOLD.",
      "sender": {
        "role": "agent",
        "externalId": 123
      },
      "timestamp": "2021-11-23T12:13:14.555Z"
    },
    "taskName": "UpgradePlan",
    "inputVariables": {
      "context": "Customer called to upgrade their current plan to GOLD",
      "customer_info": {
        "current_plan": "SILVER",
        "customer_since": "2020-01-01"
      }
    }
  }'
```

## Next Steps

With your system integrated with GenerativeAgent, sending messages and engaging GenerativeAgent, you are ready to use it. You may find these other pages helpful in using GenerativeAgent:

# Twilio Voice

Source: https://docs.asapp.com/generativeagent/integrate/twilio-streams

Learn how to integrate GenerativeAgent into Twilio using Twilio Media Stream

The Twilio Voice integration with ASAPP's GenerativeAgent allows callers in your Twilio environment to have conversations with GenerativeAgent while maintaining complete control over call handling throughout the interaction.

This integration uses Twilio's Media Stream, allowing it to work with any Twilio integration strategy: Functions, Flows, custom webhooks, etc.

This guide demonstrates how to integrate GenerativeAgent using Twilio Media Stream inside Twilio Functions, showcasing how the various components work together. See the [detailed integration flow](#detailed-integration-flow) for the general approach for how Twilio Media Stream connects to GenerativeAgent.

## How it works

At a high level, the Twilio Media Stream integration with GenerativeAgent works by streaming audio and managing conversations through your Twilio Functions:

1.
**Stream the audio** to GenerativeAgent through Twilio Media Stream. 2. **GenerativeAgent handles the conversation** using the audio stream and responds to the caller. Since calls remain within your Twilio infrastructure throughout the interaction, you maintain full control over call handling, including error scenarios and transfers. 3. **Return control back** to your Twilio flow when: * The conversation is successfully completed * The caller requests a human agent * An error occurs 4. **Retrieve the call transfer state** after the conversation ends when GenerativeAgent ends the call. If the caller hangs up, there is no call transfer state to retrieve. This integration flow shows the general architecture of how Twilio and GenerativeAgent work together. Twilio provides multiple ways to initiate media streams, and this flow demonstrates the core components regardless of your specific implementation approach. Your system requests a Twilio media stream URL from ASAPP to start streaming audio. Use the [Get Twilio Media Stream URL API](/apis/genagent-media-gateway/get-twilio-media-stream-url) with your API credentials: ```shell theme={null} curl --location 'https://api.sandbox.asapp.com/mg-genagent/v1/twilio-media-stream-url' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' ``` The API returns a short-lived WebSocket URL valid for 5 minutes. The same URL can be used for multiple concurrent sessions within this timeframe. Your system instructs Twilio to initiate a bidirectional media stream to ASAPP Media Gateway components using the URL from the previous step. Twilio allows many methods to provide TwiML such as a webhook to your own server or via Twilio Functions (as shown in the integration walkthrough below). See [Twilio documentation](https://www.twilio.com/docs/voice/media-streams) for ways to initiate bidirectional media streams. 
Optionally, configure input session parameters:

* `asapp_callTransferId` - the call transfer ID
* `asapp_externalConversationId` - the external conversation ID
* `customerId` - the customer ID for the conversation
* `taskName` - used to [enter a specific task](/generativeagent/configuring/tasks-and-functions/enter-specific-task)
* Additional values are passed as [inputVariables](/generativeagent/configuring/tasks-and-functions/input-variables) for the GenerativeAgent conversation.

As an example, the following TwiML could be returned to initiate the bidirectional media stream with session parameters:

```xml Example TwiML Response theme={null}
<!-- Illustrative values; use the streaming URL obtained in the previous step -->
<Response>
  <Connect action="https://your-service-1234.twil.io/call-complete">
    <Stream url="wss://[streaming URL from the previous step]">
      <Parameter name="asapp_callTransferId" value="CA1234567890" />
      <Parameter name="asapp_externalConversationId" value="CA1234567890" />
      <Parameter name="customerId" value="customer123" />
      <Parameter name="taskName" value="customer_support" />
    </Stream>
  </Connect>
</Response>
```

When the call ends, your system retrieves the call transfer result and output context to determine next steps.

Call transfer results are only available when GenerativeAgent completes the conversation and transfers control back to your system. If the caller hangs up, there is no call transfer result to retrieve.

To retrieve the call transfer result, get the [Call Transfer](/apis/generativeagent/get-call-transfer) that would have been created with the `asapp_callTransferId` from the previous step:

```shell theme={null}
curl --location 'https://api.sandbox.asapp.com/generativeagent/v1/call-transfers/[Your Transfer Id]' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: '
```

A successful request returns call transfer data including:

* **Status**: Indicates if the call was completed successfully (ACTIVE, ONGOING, COMPLETED, or EXPIRED)
* **outputContext**: Contains the conversation results and transfer variables
* **transferType**: Either `AGENT` (transfer to human agent) or `SYSTEM` (system transfer)
* **referenceVariables**: Context information about the customer and conversation
* **transferVariables**: Data to pass to the next system or agent

Use this information to handle the call according to your business logic, such as routing to an agent or handling call disposition.
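That routing decision can be expressed as a small pure function. The field names below (`status`, `outputContext.transferType`, `outputContext.transferVariables`) follow the response description above; the returned action names (`queue`, `ivr`, `hangup`) are hypothetical placeholders for your own call-handling logic.

```javascript
// Sketch: map a retrieved call transfer result to a routing decision.
// The action names are placeholders for your own business logic.
function routeCallTransfer(result) {
  if (!result || result.status === 'EXPIRED') {
    return { action: 'hangup', reason: 'no usable transfer state' };
  }
  const transferType = result.outputContext && result.outputContext.transferType;
  if (transferType === 'AGENT') {
    // Route to a human agent queue, carrying the transfer variables along.
    return { action: 'queue', variables: result.outputContext.transferVariables || {} };
  }
  if (transferType === 'SYSTEM') {
    // Hand control to another system (e.g. back to your IVR).
    return { action: 'ivr', variables: result.outputContext.transferVariables || {} };
  }
  return { action: 'hangup', reason: 'conversation completed' };
}
```

Keeping this mapping pure makes it easy to unit-test your routing rules separately from the Twilio plumbing.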
## Before you Begin

Before using the GenerativeAgent integration with Twilio, you need:

* [Get your API Key and Secret](/getting-started/developers#access-api-credentials)
* Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled.
* Have a Twilio account with:
  * Access to Twilio Functions and Assets
  * A phone number for testing
  * Administrator permissions to configure phone numbers and functions

### Your Infrastructure, Your Control

This integration keeps calls within your Twilio infrastructure throughout the interaction. You maintain full control over call handling, including error scenarios and transfers. You may need to implement your own logic to handle the call transfer state and output context.

This integration walks you through setting up Twilio Functions and provides the code you need to get started. You can adapt, modify, or replace any part of this implementation to match your organization's specific requirements and business logic.

## Step 1: Create and Configure Your Twilio Function

You need to create a Twilio Function that will handle the integration with GenerativeAgent. This function authenticates with ASAPP, obtains a media stream URL, and configures the audio streaming.

You need a Twilio service to contain your function. Select an existing service from Functions and Assets > Services, or create a new one using the steps below.

To create a new service, follow these steps:

First, add the `Functions and Assets` tool in your Twilio Account Dashboard. From Twilio Account Home, navigate to your Account Dashboard sidebar > Explore products > Developer Tools. Find the `Functions and Assets` tool and click the "pin to sidebar" icon.

Twilio Function and Assets

The tool will now appear in your Account Dashboard sidebar. Go to the sidebar and click Functions and Assets > Services. Click the `Create Service` button.
Select a name for your service, such as `GenerativeAgentService`, and click `Next`.

Twilio New Service

Click the `Create your Function` button, or click `Add` > `Add Function`. Pick a name for your function path, such as `/engage`.

Twilio New Function

Add the axios dependency to the functions. The function code uses axios to reach out to ASAPP APIs.

Twilio Axios Dependency

Create the `/engage` function. This function establishes the connection between Twilio and GenerativeAgent. It performs the following:

1. Obtains a Twilio Media Stream URL with the [Get Twilio Media Stream URL API](/apis/genagent-media-gateway/get-twilio-media-stream-url)
2. Configures the media stream with input parameters
   * `asapp_callTransferId` is passed as the id of the call transfer.
   * `asapp_externalConversationId` is passed as the external conversation id of the conversation.
   * `customerId` is passed as the customer id for the conversation.
   * `taskName` is used to [enter a specific task](/generativeagent/configuring/tasks-and-functions/enter-specific-task)
   * All other values are passed as [inputVariables](/generativeagent/configuring/tasks-and-functions/input-variables) for the GenerativeAgent conversation.
3. Specifies a TwiML connect action to trigger the call completion function when the call ends
4.
Returns the TwiML response to Twilio ```javascript Engage Function expandable theme={null} const axios = require('axios'); exports.handler = async function (context, event, callback) { const asappApiId = context.ASAPP_API_ID; const asappApiSecret = context.ASAPP_API_SECRET; const asappApiHost = context.ASAPP_API_HOST; // Authenticate with ASAPP to obtain a Twilio Media Stream URL const url = `${asappApiHost}/mg-genagent/v1/twilio-media-stream-url`; console.log('Call event:', event); const twiml = new Twilio.twiml.VoiceResponse(); // Configure input parameters for the media stream const inputParameters = { "asapp_callTransferId": event.CallSid, "asapp_externalConversationId": event.CallSid // Use CallSid as external conversation ID for tracking // Add additional input parameters to be passed as inputVariables as needed: // "customerId": "customer123", // "taskName": "customer_support" }; try { // Request streaming URL from ASAPP const res = await axios.get(url, { headers: { 'asapp-api-id': asappApiId, 'asapp-api-secret': asappApiSecret, 'Content-Type': 'application/json' } }); if (res.status === 200) { console.log('Streaming URL obtained:', res.data); // Configure the media stream with the obtained URL const connect = twiml.connect({ action: `https://${context.DOMAIN_NAME}/call-complete?callSid=${event.CallSid}` }); const stream = connect.stream({ url: res.data.streamingUrl }); // Add input parameters as stream parameters if (inputParameters) { for (const name in inputParameters) { stream.parameter({name: name, value: inputParameters[name]}); } } return callback(null, twiml); } else { console.error(`API request failed: ${res.status} ${res.statusText}`); twiml.say('We are experiencing technical difficulties. Please try again later.'); return callback(null, twiml); } } catch (error) { console.error('Error:', error); twiml.say('We are experiencing technical difficulties. 
Please try again later.'); return callback(null, twiml); } }; ```

Create the `/call-complete` function. This function retrieves the call transfer state after the conversation ends with the [Get Call Transfer API](/apis/generativeagent/get-call-transfer). This example code just logs what transfer type occurred; you will need to implement your own business logic to handle what happens next.

```javascript Call Complete Function expandable theme={null}
const axios = require('axios');

exports.handler = async function (context, event, callback) {
  const asappApiId = context.ASAPP_API_ID;
  const asappApiSecret = context.ASAPP_API_SECRET;
  const asappApiHost = context.ASAPP_API_HOST;

  console.log('Call completed:', event);

  try {
    // Get call transfer state after call ends
    const callSid = event.CallSid;
    const resultUrl = `${asappApiHost}/generativeagent/v1/call-transfers/${callSid}`;
    const resultRes = await axios.get(resultUrl, {
      headers: {
        'asapp-api-id': asappApiId,
        'asapp-api-secret': asappApiSecret,
        'Content-Type': 'application/json'
      }
    });

    if (resultRes.status === 200) {
      const result = resultRes.data;
      console.log('Call transfer result:', result);

      // Handle the result based on your business logic
      if (result.outputContext) {
        const transferType = result.outputContext.transferType;
        const transferVariables = result.outputContext.transferVariables;

        // Process based on transfer type
        switch (transferType) {
          case 'AGENT':
            console.log('Transfer to agent required');
            // Implement your agent transfer logic here
            // Examples: Update CRM, send notification, route to queue
            break;
          case 'SYSTEM':
            console.log('System transfer with variables:', transferVariables);
            // Implement your system transfer logic here
            // Examples: Update database, trigger follow-up actions
            break;
          default:
            console.log('Conversation completed successfully');
        }
      }
    }
  } catch (error) {
    console.error('Error fetching call transfer state:', error);
  }

  return callback(null, 'OK');
};
```

Add the required environment variables to
your Twilio Function: | Key | Value | | ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ASAPP\_API\_HOST | The API host provided by ASAPP, e.g.: [https://api.sandbox.asapp.com](https://api.sandbox.asapp.com) (sandbox) or [https://api.asapp.com](https://api.asapp.com) (production) | | ASAPP\_API\_ID | The API Key provided by ASAPP | | ASAPP\_API\_SECRET | The API Secret provided by ASAPP | | DOMAIN\_NAME | Your Twilio Function domain within the service (e.g., your-service-1234.twil.io) | The streaming URL has a TTL (time-to-live) of 5 minutes. The same URL can be used to start multiple sessions within this timeframe, but you should obtain a new URL before the 5-minute expiration. ## Step 2: Configure Direct Phone Number Integration This approach directly connects a phone number to your GenerativeAgent function, making it the simplest setup for testing or simple use cases. Choose a phone number you want to connect to GenerativeAgent. You can use an existing number or buy a new one from Twilio. In the `Configure` tab of your selected phone number, set these settings: | Setting | Value | | --------------- | ---------------------------------------------------------- | | Configure With | "Webhook, TwiML Bin, Function, Studio Flow, Proxy Service" | | A call comes in | "Function" | | Service | The name of your service, e.g.: "GenerativeAgentService" | | Environment | "ui" | | Function Path | The name of your engagement function, e.g.: "/engage" | Click `Save Configuration` when done. We're using the "ui" environment for development. For production, you may want to deploy to specific environments (dev, staging, prod) and configure environment variables accordingly. 
## Step 3: Validate and Deploy Your Integration

Before going live, thoroughly test your integration to ensure it works as expected and provides a good caller experience.

### Test Your Integration

Test the integration thoroughly:

* Make test calls through the phone number configured in Step 2. Test various scenarios, including normal conversations and requests for human agents.
* Verify audio streaming quality and reliability
* Test conversation handling
  * Ensure GenerativeAgent understands and responds appropriately
  * Test different caller accents and speech patterns
  * Verify handling of background noise and interruptions
* Check error scenarios
  * Verify error handling paths in your flow

## Twilio Flow Integration

If you are using Twilio Studio Flows, you can modify the above integration to work with Flows.

### Flow Integration Approach

When using flows, the integration is very similar, but instead of relying on action callbacks to trigger the call transfer state function, you call it directly from your flow:

1. **Flow calls engage function via TwiML Redirect** - This connects GenerativeAgent to the conversation
2. **Flow calls call transfer state function** - Immediately after the call returns, call the function to get the transfer state and output context
3. **Flow processes results** - Using the returned transfer data, implement your own logic for routing and other business objectives.

### Setting Up Flow Integration

For flow integration, you need to remove the action callback from the engagement function.
Change from: ```javascript theme={null} // FROM: With action callback (for direct phone integration) const connect = twiml.connect({ action: `https://${context.DOMAIN_NAME}/call-complete?callSid=${event.CallSid}` }); const stream = connect.stream({ url: res.data.streamingUrl }); ``` To: ```javascript theme={null} // TO: Without action callback (for flow integration) const connect = twiml.connect(); const stream = connect.stream({ url: res.data.streamingUrl }); ``` Update your call transfer state function to return the call transfer data that the flow can reference. For this example, the function returns the call transfer status and output context from the GenerativeAgent API for the flow to use. ```javascript Call Transfer Function expandable theme={null} const axios = require('axios'); exports.handler = async function (context, event, callback) { const asappApiId = context.ASAPP_API_ID; const asappApiSecret = context.ASAPP_API_SECRET; const asappApiHost = context.ASAPP_API_HOST; console.log('Call completed:', event); try { // Get call transfer state after call ends const callSid = event.CallSid; const resultUrl = `${asappApiHost}/generativeagent/v1/call-transfers/${callSid}`; const resultRes = await axios.get(resultUrl, { headers: { 'asapp-api-id': asappApiId, 'asapp-api-secret': asappApiSecret, 'Content-Type': 'application/json' } }); if (resultRes.status === 200) { const result = resultRes.data; console.log('Call transfer result:', result); // Return call transfer data for the flow to use const flowVariables = { callTransferStatus: result.status, // Call status from API transferType: null, currentTaskName: null, referenceVariables: null, transferVariables: null }; if (result.outputContext) { flowVariables.transferType = result.outputContext.transferType; flowVariables.currentTaskName = result.outputContext.currentTaskName; flowVariables.referenceVariables = result.outputContext.referenceVariables; flowVariables.transferVariables = 
result.outputContext.transferVariables;
      }

      // Return the call transfer data to the flow
      return callback(null, flowVariables);
    } else {
      console.error(`API request failed: ${resultRes.status} ${resultRes.statusText}`);
      return callback(new Error('Failed to fetch call transfer state'));
    }
  } catch (error) {
    console.error('Error fetching call transfer state:', error);
    return callback(error);
  }
};
```

Reference the [call transfer API](/apis/generativeagent/get-call-transfer) to ensure you are capturing the data relevant to your flow.

In the flow where you want to integrate GenerativeAgent:

1. Add a **Redirect** widget where you want to engage GenerativeAgent.
   * The URL needs to be the URL of your engagement function, with a query parameter of `FlowEvent=return`, e.g. `https://your-service-1234.twil.io/engage?FlowEvent=return`.
2. Add a **Run Function** widget, specifying the function to fetch the call transfer state.

After the call transfer state function, add logic to handle the results using the returned call status and output context.

## Next Steps

Now that you have successfully integrated GenerativeAgent with Twilio, here are some important next steps to consider:

Learn how to configure GenerativeAgent's behaviors, tasks, and communication style

Configure your APIs to allow GenerativeAgent to access necessary data and perform actions

Connect and optimize your knowledge base to improve GenerativeAgent's responses

Follow the deployment checklist to launch GenerativeAgent in your production environment

# ASAPP SDK (Web)

Source: https://docs.asapp.com/generativeagent/integrate/web-sdk

Integrate GenerativeAgent chat into your website with our customizable web SDK

The ASAPP SDK (Web) allows you to quickly integrate GenerativeAgent chat into your website. The SDK provides a customizable chat interface that can seamlessly transfer conversations to your existing human agents when needed.
The ASAPP SDK works alongside your existing chat systems (like Salesforce Chat or Zendesk) rather than replacing them. When GenerativeAgent cannot handle a conversation, it transfers to your human agents using your original SDK.

Picture of what the ASAPP SDK (Web) looks like

## How it works

The ASAPP SDK creates a seamless chat experience that works alongside your existing customer service infrastructure:

1. **Website loads SDK**: Your website loads the ASAPP SDK script, which initializes the chat interface
2. **User talks to GA via SDK**: Customers interact with GenerativeAgent through the SDK's chat interface
3. **GA resolves conversation or transfers to agent**: GenerativeAgent either resolves the customer's issue or determines when human assistance is needed
4. **Transfer triggers existing chat system**: When a transfer is needed, the SDK calls your custom function to load your existing chat system SDK and unload the ASAPP SDK
5. **Customer talks to human agent**: The customer continues the conversation with a human agent using your existing chat system's interface

## Before you begin

Before implementing the ASAPP SDK, you need:

* [Configure Tasks and Functions](/generativeagent/configuring)
* Contact your ASAPP account team to enable the web SDK

## Getting started

Follow these steps to integrate the ASAPP SDK into your website:

You can obtain your App ID and configure where the chat widget appears from the **SDK Settings** page.

In the GenerativeAgent dashboard, navigate to the **API Integration Hub** and select **SDK**. In the **SDK Settings** tab, find your **App ID**, a unique identifier for your SDK integration.

Configure the following settings:

* **Include URLs**: Pages where the chat widget should appear
* **Exclude URLs**: Pages where the chat widget should be hidden (e.g., checkout, admin pages)

SDK Settings page in the API Integration Hub

The App ID is different from your API credentials and is specifically for the web SDK.
Include the ASAPP SDK script in your website's HTML:

```html theme={null}
<!-- The script URL is provided by your ASAPP account team -->
<script src="[ASAPP SDK script URL]"></script>
```

Add this script tag to all pages where you want the chat widget to appear.

Initialize the SDK with your App ID and environment:

```html theme={null}
<script>
  ASAPP('load', {
    appId: 'your-sdk-app-id',
    env: 'production'
  });
</script>
```

The chat widget should now appear on your website with the default styling. You can start interacting with GenerativeAgent right away to verify the integration is working. Try asking a simple question to confirm the chat widget is responding and GenerativeAgent is active.

This basic implementation allows you to start interacting with GenerativeAgent, but before you go live, you will need to implement [transfers](#handle-transfers) to be able to hand off conversations to your human agents:

Implement seamless handoff to your human agents when GenerativeAgent cannot handle the conversation

Customize the chat interface to match your brand's look and feel

Provide context to GenerativeAgent with taskName and inputVariables

## Styling and customization

Customize the chat interface to match your brand using the `styling` configuration. As you will be loading your existing chat SDK on transfers, you will likely need to match the styling of your existing chat SDK to the GenerativeAgent chat interface.

```javascript theme={null}
ASAPP('load', {
  appId: 'your-sdk-app-id',
  env: 'production',
  styling: {
    primaryColor: '#FF5733',
    accentColor: '#33FF57',
    textColor: '#FFFFFF',
    fontFamily: "Roboto Condensed",
    brandImageUrl: 'https://your-domain.com/logo.png',
    brandText: 'Your Company',
    header: {
      backgroundColor: '#F0F0F0',
      textColor: '#000000',
    },
    message: {
      user: {
        backgroundColor: '#F0F0F0',
        textColor: '#333333',
      }
    }
  }
});
```

Primary brand color for buttons and highlights. Accepts hex colors (e.g., '#FF5733').

Accent color for secondary elements. Accepts hex colors.

Default text color for the chat interface. Accepts hex colors.

URL to your company logo image. Recommended size: 40x40 pixels.

Your company name to display in the chat header.
Background color for the chat header. Accepts hex colors.

Text color for the chat header. Accepts hex colors.

Background color for user messages. Accepts hex colors.

Text color for user messages. Accepts hex colors.

## Input variables and task configuration

You can provide context to GenerativeAgent when it starts a new conversation by specifying the [starting task name](/generativeagent/configuring/tasks-and-functions/enter-specific-task) and [input variables](/generativeagent/configuring/tasks-and-functions/input-variables):

```javascript theme={null}
ASAPP('load', {
  appId: 'your-sdk-app-id',
  env: 'production',
  taskName: 'customer_support',
  inputVariables: {
    'current_plan': 'SILVER',
    'customer_since': '2024-01-01',
    'user_id': '12345'
  }
});
```

The specific task name to trigger when the conversation starts. This determines which GenerativeAgent configuration to use.

Key-value pairs of context variables to provide to GenerativeAgent. These help personalize the conversation.

## Handle transfers

When GenerativeAgent cannot handle a conversation, it triggers a transfer using the `onTransfer` callback.

There are two types of transfers:

* **Agent transfers** (`transferToAgent`): When GenerativeAgent determines a human agent is needed, it transfers to your existing chat SDK (Salesforce, Zendesk, etc.), which should immediately engage a human agent.
* **System transfers** (`transferToSystem`): When GenerativeAgent completes its task and needs to hand control back to your external system, it uses [System Transfer Functions](/generativeagent/configuring/tasks-and-functions/system-transfer) to pass relevant conversation data.

Your code should handle both transfer types appropriately in the `onTransfer` callback.

Depending on your platform, you may need to create a dedicated conversation entry point for these transfers to be able to directly engage the human agent.
Sending users through another option menu or intent classification before engaging the human agent is a frustrating experience for your customers and should be avoided. ```javascript Example onTransfer callback theme={null} ASAPP('load', { appId: 'your-sdk-app-id', env: 'production', onTransfer: (context) => { // Handle the transfer to your human agents console.log('Transfer triggered:', context); // Access transfer information const transferType = context.transferType; // 'transferToAgent' or 'transferToSystem' const transcript = context.transcript; if (transferType === 'transferToSystem') { // Only available for system transfers const transferVariables = context.transferVariables; const referenceVariables = context.referenceVariables; } // Example context objects: // Agent transfer: // { // transferType: "transferToAgent", // transcript: [ // { text: "I need help", sender: "CUSTOMER", timestamp: "2024-01-15T10:30:00Z" } // ] // } // // System transfer: // { // transferType: "transferToSystem", // transcript: [...], // transferVariables: { priority: "high" }, // referenceVariables: { customerId: "12345" } // } // Unload the ASAPP SDK before loading your existing chat system ASAPP('unload'); // Load your existing chat SDK (Salesforce, Zendesk, etc.) // Pass the conversation context to your human agents // This varies depending on your platform } }); ``` Type of transfer: `transferToAgent` for human agent escalation or `transferToSystem` for system-level transfer. Array of message objects representing the conversation transcript up to the transfer point. Each message has the same structure as the message parameter in [onMessage](#onmessage). Data that should be passed to the next system handling the conversation. Only available for system transfers (`transferToSystem`). Context information about the customer and conversation that should be preserved. Only available for system transfers (`transferToSystem`). 
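The two transfer types above can be handled with a small dispatcher inside your `onTransfer` callback. This is a sketch: `loadAgentChat` and `handOffToSystem` are hypothetical stand-ins for whatever your existing chat system's SDK requires, and you would still call `ASAPP('unload')` before loading it.

```javascript
// Sketch of an onTransfer dispatcher. loadAgentChat and handOffToSystem
// are hypothetical stand-ins for your existing chat system's APIs.
function handleTransfer(context, { loadAgentChat, handOffToSystem }) {
  if (context.transferType === 'transferToAgent') {
    // Give the human agent the conversation so far.
    loadAgentChat({ transcript: context.transcript });
    return 'agent';
  }
  if (context.transferType === 'transferToSystem') {
    // Pass the system-transfer payload along to your external system.
    handOffToSystem({
      transcript: context.transcript,
      transferVariables: context.transferVariables || {},
      referenceVariables: context.referenceVariables || {},
    });
    return 'system';
  }
  throw new Error(`Unknown transfer type: ${context.transferType}`);
}
```

Separating the dispatch from the SDK wiring keeps the routing logic easy to unit-test.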
### Unload the SDK When transferring to human agents, you must unload the ASAPP SDK to remove the chat interface and clean up resources: ```javascript theme={null} ASAPP('unload'); ``` Always call `ASAPP('unload')` in your transfer function to ensure the ASAPP chat interface is properly removed before loading your existing chat system. ## Conversation events The SDK exposes functions for key conversation events that you can use to integrate with your existing systems: ### onNewChat Called when GenerativeAgent starts a new chat. ```javascript theme={null} ASAPP('load', { appId: 'your-sdk-app-id', env: 'production', onNewChat: (conversationId, userId) => { // Create conversation in your system console.log('New chat started:', conversationId, userId); // Example: conversationId = "conv_123", userId = "user_456" } }); ``` Unique identifier for the new conversation. The customer identifier for the user who started the conversation. Initially, this is randomly generated by the SDK but if the customer authenticates, this will be the customer identifier provided by the `authenticationRequest` callback. ### onMessage Called when either GenerativeAgent or the user sends a message. ```javascript theme={null} ASAPP('load', { appId: 'your-sdk-app-id', env: 'production', onMessage: (message) => { // Update your CCaaS system with the new message console.log('Message received:', message); // Example: message.text = "Hello, how can I help?" // message.sender = "BOT" // message.timestamp = "2024-01-15T10:30:00Z" } }); ``` The text content of the message. Who sent the message: `'CUSTOMER'` or `'BOT'`. ISO 8601 timestamp of when the message was sent. ### onEndChat Called when the user chooses to end the conversation: This callback is not invoked when GenerativeAgent transfers the conversation. 
```javascript theme={null} ASAPP('load', { appId: 'your-sdk-app-id', env: 'production', onEndChat: (conversationId, transcript) => { // Handle conversation end in your system console.log('Chat ended:', conversationId, transcript); // Example: conversationId = "conv_123" // transcript[0].text = "Hello" // transcript[0].sender = "CUSTOMER" // transcript[0].timestamp = "2024-01-15T10:30:00Z" } }); ``` Unique identifier for the conversation that ended. Complete array of message objects from the conversation. Each message has the same structure as the message parameter in [onMessage](#onmessage). ## Authentication handling Some [API Connections](/generativeagent/configuring/connect-apis) require user-specific authentication data to access your APIs. When GenerativeAgent needs to call an API that requires client authentication, it will trigger the `authenticationRequest` callback. Authentication is only required when your API Connections use [client authentication data](/generativeagent/configuring/connect-apis/authentication-methods#client-authentication-data). If your API Connections only use static authentication (like API keys or basic auth), you won't need to implement this callback. Implement the `authenticationRequest` callback to return the required authentication data: ```javascript Example authenticationRequest callback theme={null} ASAPP('load', { appId: 'your-sdk-app-id', env: 'production', authenticationRequest: async () => { // Your authentication logic here // Return the required structure: return { customerExternalId: 'your_customer_id', auth: { // Authentication data structure depends on your API Connection token: 'user_specific_token', expiresAt: '2025-01-01' } }; } }); ``` Your internal customer identifier that will be passed to API Connections. Authentication data object. The structure depends on your API Connection's authentication method configuration. 
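Because the SDK expects a specific shape back from `authenticationRequest`, a small guard can catch misconfigured returns early. This is a sketch: the required fields (`customerExternalId`, `auth`) follow the description above, and `validateAuthResult` is a hypothetical helper, not part of the ASAPP SDK.

```javascript
// Sketch: validate the object returned by authenticationRequest before
// the SDK consumes it. validateAuthResult is a hypothetical helper.
function validateAuthResult(result) {
  if (!result || typeof result !== 'object') {
    return ['authenticationRequest must return an object'];
  }
  const errors = [];
  if (typeof result.customerExternalId !== 'string' || result.customerExternalId === '') {
    errors.push('customerExternalId must be a non-empty string');
  }
  if (!result.auth || typeof result.auth !== 'object') {
    errors.push('auth must be an object matching your API Connection configuration');
  }
  return errors; // empty array means the result looks well-formed
}
```

You might call this inside your `authenticationRequest` implementation during development and log any returned errors.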
## Complete implementation example

Here's a complete example combining the configuration options covered above:

```javascript theme={null}
ASAPP('load', {
  appId: 'your-sdk-app-id',
  env: 'production',
  onNewChat: (conversationId, userId) => {
    // Create conversation in your system
    console.log('New chat started:', conversationId, userId);
  },
  onMessage: (message) => {
    // Update your CCaaS system with the new message
    console.log('Message received:', message.text, message.sender, message.timestamp);
  },
  onEndChat: (conversationId, transcript) => {
    // Handle conversation end in your system
    console.log('Chat ended:', conversationId, transcript.length);
  },
  authenticationRequest: async () => {
    // Return user-specific authentication data for API Connections
    return {
      customerExternalId: 'your_customer_id',
      auth: {
        token: 'user_specific_token',
        expiresAt: '2025-01-01'
      }
    };
  }
});
```

## Next steps

With the ASAPP SDK successfully integrated, consider these next steps:

Learn how to configure GenerativeAgent's behaviors, tasks, and communication style

Configure your APIs to allow GenerativeAgent to access necessary data and perform actions

Prepare your GenerativeAgent integration for production deployment

# Zendesk Talk

Source: https://docs.asapp.com/generativeagent/integrate/zendesk-talk

Learn how to integrate GenerativeAgent with Zendesk Talk for automated call handling and ticket creation

The Zendesk Talk integration with ASAPP's GenerativeAgent allows callers in your Zendesk environment to have conversations with GenerativeAgent while automatically creating support tickets based on the conversation context. This integration provides phone numbers that you route within Zendesk (either through IVR menus or overflow routing) and uses SIP-IN lines to redirect calls back to Zendesk when needed.

This guide covers the customer configuration steps needed to set up the integration with your Zendesk Talk environment.

## How it works

At a high level, the Zendesk Talk integration with GenerativeAgent works by routing calls through GenerativeAgent and creating tickets based on conversation outcomes:

1. **Calls are routed to GenerativeAgent** through phone numbers configured in your Zendesk IVR menu or overflow routing
2. **GenerativeAgent handles the conversation** using voice interaction and determines the appropriate response
3. **Calls are redirected back to Zendesk** via SIP-IN when GenerativeAgent needs to transfer to a live agent or complete the call
4. **Tickets are automatically created** in Zendesk based on the conversation context and outcome

The integration maintains your existing Zendesk Talk workflows while adding GenerativeAgent capabilities for automated call handling and ticket creation.
## Before you Begin

Before setting up the Zendesk Talk integration, you need:

* [Get your API Key and Secret](/getting-started/developers#access-api-credentials)
* Ensure your API key has been configured to access GenerativeAgent APIs. Reach out to your ASAPP team if you need access enabled.
* A Zendesk account with:
  * Admin access to configure Talk settings
  * Ability to create SIP-IN lines
  * Access to configure triggers and business rules
* The following information to provide to ASAPP:
  * **Zendesk API Token** - For ticket creation and management
  * **Zendesk Subdomain** - Your Zendesk instance subdomain
  * **Zendesk User Email** - Email associated with the API token
* The following information to obtain from ASAPP:
  * **ASAPP SIP IP Address** - For configuring your SIP-IN line
  * **ASAPP Overflow Phone Numbers** - For routing calls to GenerativeAgent

You may have both sandbox and production environments. Be careful not to mix production phone numbers with Zendesk sandbox or vice versa. See [Zendesk sandbox environments](https://support.zendesk.com/hc/en-us/articles/6150628316058-About-Zendesk-sandbox-environments) for more information.

## Understanding Your Routing Options

Zendesk Talk only allows routing via phone numbers. We enable this for GenerativeAgent by providing you with **one or more phone numbers** that are **mapped to specific GenerativeAgent tasks**. You need to route these phone numbers within your Zendesk environment.

The routing approach you choose depends on how you want callers to interact with GenerativeAgent:

* **IVR Menu Routing**: Use Zendesk Talk's IVR Menu to route calls to GenerativeAgent based on the caller's input.
* **Overflow Routing**: Use Zendesk Talk's Overflow Routing to route calls to GenerativeAgent when you want GenerativeAgent to be the primary point of contact.

Work with your ASAPP team to determine which routing approach works best for your specific requirements and the tasks you want GenerativeAgent to handle.
## Step 1: Configure Zendesk SIP-IN Line You need to create a SIP-IN line in Zendesk Talk to receive calls redirected back from GenerativeAgent. The SIP-IN line is used to receive calls redirected back from GenerativeAgent. Go to **Talk** → **Lines** → **Add SIP Line** in your Zendesk admin panel. Adding a SIP-IN Line in Zendesk Talk Dashboard Follow the [Zendesk documentation for adding a SIP-IN line](https://support.zendesk.com/hc/en-us/articles/8397091234586-Adding-a-SIP-IN-line) to complete this step. Configure your SIP-IN line to allow the ASAPP SIP IP address that you obtained from your ASAPP team. This allows calls to be redirected back to your Zendesk Talk environment from GenerativeAgent. Configuring allowed SIP IP addresses for ASAPP in Zendesk Talk SIP-IN line settings After creating the SIP-IN line, you'll receive a SIP destination address. Provide this SIP destination to your ASAPP team along with the other required information (API token, subdomain, and user email) for the integration configuration. ## Step 2: Configure Phone Number Routing Follow the steps for your chosen routing approach: Add GenerativeAgent as an option in your existing IVR menu: 1. Go to **Routing** → **IVR Menu** in your Zendesk admin panel 2. Select your existing IVR menu or create a new one 3. Add the GenerativeAgent phone number(s) as menu options 4. Configure the menu prompts to include GenerativeAgent as a choice 5. Follow the [Zendesk IVR documentation](https://support.zendesk.com/hc/en-us/articles/4408885628698-Routing-incoming-calls-with-IVR) for detailed setup Make GenerativeAgent the primary point of contact: 1. Go to **Talk** → **Lines** → Select your primary entry number 2. Configure the number for call overflow using the [Zendesk overflow call routing guide](https://support.zendesk.com/hc/en-us/articles/4408832017690-Managing-overflow-calls-and-after-hours-call-routing) 3. 
Set the overflow destination to your GenerativeAgent phone number For each GenerativeAgent phone number, disable voicemail: 1. Go to **Talk** → **Lines** → Select each GenerativeAgent overflow number 2. Disable voicemail to ensure calls are properly handled by the integration Configure each GenerativeAgent phone number as an overflow number: 1. Go to **Talk** → **Lines** → Select each GenerativeAgent overflow number 2. Set up the overflow number configuration (provided by ASAPP) For each GenerativeAgent phone number, decide whether to record overflowed calls: 1. Go to **Talk** → **Lines** → Select each GenerativeAgent overflow number 2. Configure call recording based on your organization's requirements Create an empty group to ensure calls overflow immediately to GenerativeAgent: 1. Go to **Admin Center** → **People** → **Groups** in your Zendesk admin panel 2. Click **Add group** 3. Name the group (e.g., "GenAgent" or "Overflow") 4. **Do not add any agents to this group** - it should remain empty 5. Save the group 1. Go to **Talk** → **Lines** → Select the GenerativeAgent overflow number 2. In the **Routing** section, assign the overflow number to the empty group you created 3. This removes waiting time since no agents are in the queue and overflow triggers automatically Set up automatic ticket management for overflow calls: 1. Go to **Objects and rules** → **Business rules** → **Triggers** in your Zendesk admin panel 2. Click on **Ticket** → **Create trigger** 3. Configure the trigger: * **Trigger name**: Choose a descriptive name (e.g., "Close Overflow Tickets") * **Description**: Optional description for the trigger * **Category**: Choose an appropriate category 4. Set the conditions: * **Ticket contains the tag**: `call_overflow` * **Status is not**: `solved` 5. Set the action: * **Ticket status**: `solved` This automatically closes tickets created from overflow calls to keep your ticket queue clean and prevent confusion with regular support tickets. 
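If you manage Zendesk configuration as code, the trigger above can also be expressed as a payload for Zendesk's Triggers API (`POST /api/v2/triggers.json`). This is a sketch of the same conditions and actions; verify the field and operator names against Zendesk's trigger reference before relying on it:

```javascript
// Sketch: the "Close Overflow Tickets" trigger from the steps above,
// expressed as a Zendesk Triggers API payload. Field and operator names
// follow Zendesk's documented trigger schema; verify against your instance.
function buildOverflowTrigger() {
  return {
    trigger: {
      title: 'Close Overflow Tickets',
      conditions: {
        all: [
          // "Ticket contains the tag": call_overflow
          { field: 'current_tags', operator: 'includes', value: 'call_overflow' },
          // "Status is not": solved
          { field: 'status', operator: 'is_not', value: 'solved' },
        ],
        any: [],
      },
      // "Ticket status": solved
      actions: [{ field: 'status', value: 'solved' }],
    },
  };
}
```

The same payload can be posted to `https://<your-subdomain>.zendesk.com/api/v2/triggers.json` with your API token if you prefer scripted setup over the admin panel.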
## Step 3: Create System Transfer

The system transfer function is how GenerativeAgent passes calls back to Zendesk and creates tickets in your Zendesk system during that process. This function is configured in the AI Console and defines the schema for ticket creation.

You need to create a system transfer function in the AI Console that defines how tickets are created in Zendesk.

1. Go to **AI Console** → **GenerativeAgent** → **Functions**
2. Select **Create function** → **System Transfer**
3. Create a function with the following schema:

```json theme={null}
{
  "type": "object",
  "required": [
    "subject",
    "priority",
    "type",
    "status",
    "comment"
  ],
  "properties": {
    "subject": {
      "type": "string",
      "description": "Request subject based on the conversation"
    },
    "priority": {
      "type": "string",
      "enum": ["urgent", "high", "normal", "low"],
      "description": "Ticket priority based on the conversation"
    },
    "type": {
      "type": "string",
      "enum": ["problem", "incident", "question", "task"],
      "description": "The type of the request based on the conversation"
    },
    "status": {
      "type": "string",
      "enum": ["new", "open", "pending", "hold", "solved", "closed"],
      "description": "The state of the request"
    },
    "comment": {
      "type": "string",
      "description": "Detailed summary of the conversation containing all the relevant information provided by the customer"
    }
  }
}
```

This schema defines the ticket fields that will be created in Zendesk when GenerativeAgent completes a call. Tickets created through live agent escalation will not include the detailed conversation context that tickets created through successful GenerativeAgent completion will have.

Add this system transfer function to all relevant GenerativeAgent tasks, including instructions on how to complete these fields based on the conversation context. The Zendesk ticket will be successfully created if GenerativeAgent calls this system transfer function.
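While authoring the function, it can help to sanity-check a sample ticket payload against the schema's required fields and enum values. This is a minimal local sketch for development only; GenerativeAgent and Zendesk perform their own validation:

```javascript
// Minimal local validation of a ticket payload against the system
// transfer schema above (required fields plus the enum values).
// Development aid only; not part of any ASAPP or Zendesk SDK.
const REQUIRED_FIELDS = ['subject', 'priority', 'type', 'status', 'comment'];
const TICKET_ENUMS = {
  priority: ['urgent', 'high', 'normal', 'low'],
  type: ['problem', 'incident', 'question', 'task'],
  status: ['new', 'open', 'pending', 'hold', 'solved', 'closed'],
};

function validateTicket(ticket) {
  const errors = [];
  for (const field of REQUIRED_FIELDS) {
    if (typeof ticket[field] !== 'string') {
      errors.push(`missing or non-string field: ${field}`);
    }
  }
  for (const [field, allowed] of Object.entries(TICKET_ENUMS)) {
    if (typeof ticket[field] === 'string' && !allowed.includes(ticket[field])) {
      errors.push(`invalid ${field}: ${ticket[field]}`);
    }
  }
  return errors; // empty array means the payload matches the schema
}
```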
If live agent escalation happens, a ticket is also created, but without the detailed conversation context. ## Step 4: Test Your Integration Before going live, thoroughly test your integration to ensure it works as expected: Test scenarios such as: * **Normal conversation flow** - Verify GenerativeAgent can handle typical customer inquiries * **Live agent transfer** - Test that calls can be properly transferred to human agents when needed * **Overflow handling** - Ensure calls are properly routed through the overflow system * **Ticket creation** - Verify that tickets are created with correct information and formatting * **Different call types** - Test various types of customer inquiries and requests ## Next Steps Once your Zendesk Talk integration is set up and tested, consider these next steps: Learn how to configure GenerativeAgent's behaviors, tasks, and communication style Configure your APIs to allow GenerativeAgent to access necessary data and perform actions Connect and optimize your knowledge base to improve GenerativeAgent's responses Follow the deployment checklist to launch GenerativeAgent in your production environment # Conversation Monitoring Source: https://docs.asapp.com/generativeagent/observe/evaluators/conversation-monitoring Monitor and review conversations for compliance and quality assurance using GenerativeAgent. The Conversation Monitoring evaluator enables you to monitor conversations and identify issues in them that may impact the quality of customer interactions. Our monitoring system can flag a conversation as having potential quality issues as determined by our quality evaluators. ## How it works Our monitoring system uses quality evaluators to assess each turn for adherence to configured tasks and knowledge. When evaluators detect issues, they: * Flag the conversation * Highlight problematic utterances within the conversation * Provide rationale for flagging. 
Once a conversation is flagged, you can review it in the [Conversations](/generativeagent/configuring/conversations) interface by applying the appropriate filters.

### Identifying quality issues

When quality issues are detected, the conversation review interface provides the following features:

* **Inline indicators**: Flagged messages appear with visual indicators directly in the conversation flow. This allows the reviewer to quickly identify potential issues.
* **Quality Tab**: A dedicated tab in the **"Conversations"** interface provides detailed information about each detected utterance and acts as a centralized location for quality-related insights. This includes:
  * List of all the detected messages in the conversation
  * Specific turn(s) that were flagged
  * Reason provided by the conversation monitoring system for flagging the message
* **Customizable flagging**: You can change the severity level of the flagged messages (e.g., from "major" to "critical") or dismiss them if they are false positives. This helps refine the monitoring system over time.

Quality tab

The features above help reviewers quickly identify and understand quality issues in conversations, enabling them to take appropriate actions to address them.

## Next steps

After identifying quality issues in conversations, you can take the following next steps to improve the overall performance of your GenerativeAgent:

* Audit AI-driven response quality
* Identify regressions from new tasks, prompts, or configurations
* Generate insights for evaluator training, task design, and knowledge updates
* Improve automation accuracy and reduce escalations from model errors

Consider exploring the following evaluators for more in-depth analysis:

Manually annotate conversations to provide feedback and identify quality issues.

Spot and review customer goals not being met during GenerativeAgent interactions.
# Goal Completion Source: https://docs.asapp.com/generativeagent/observe/evaluators/goal-completion Spot and review customer goals not being met during GenerativeAgent interactions. Every customer interaction has specific goals that need to be addressed effectively. These goals can range from resolving issues, providing information, or completing transactions. Ensuring that these goals are met is crucial for customer satisfaction and overall service quality. The Goal Completion evaluator helps you identify and highlight instances where customer goals are not being met during interactions with the GenerativeAgent. By analyzing conversations, this evaluator can pinpoint specific moments where the agent may have failed to address customer needs effectively. ## How it works The user filters conversations based on the completion status of the conversation or goals. Once you apply the filter, review the conversations that meet the selected criteria to understand where the GenerativeAgent may have fallen short in achieving customer goals. The findings from this evaluator can be used to improve the GenerativeAgent's performance, enhance training data, and ultimately lead to better customer experiences. ### Filtering the conversations Use the Filter option in the [Conversations](/generativeagent/configuring/conversations) interface to filter by Goal Completion status. You can narrow down the conversations based on the following criteria: #### Topline Completion Status * "All goals are resolved": Conversations where all customer goals were successfully addressed by the GenerativeAgent. * "Some goals are unresolved": Conversations where one or more customer goals were not fully addressed. * "None goals are resolved": Conversations where none of the customer goals were met. #### Specific Assessments * "Completed": Goals that were successfully addressed by the GenerativeAgent. 
* "Escalation": Goals that led to escalation to a human agent after some interaction with the GenerativeAgent.
* "Immediate Escalation": Goals that resulted in immediate escalation to a human agent without any attempt by the GenerativeAgent to address them.
* "Unclear": Goals where it is unclear whether they were met due to lack of customer response.
* "Customer Rejected Solution": Goals where the customer explicitly rejected the solution provided by the GenerativeAgent.

You can select multiple criteria to refine your search and focus on conversations that are most relevant to your analysis of goal completion.

Goal Completion Filter

### Reviewing Goals

After filtering the conversations, you can proceed to review the goals associated with those conversations to understand the root cause of incomplete or unmet goals.

When reviewing a conversation, the goals associated with that conversation are displayed in the **Goal Resolution** section of the **About** tab. The customer goals are listed in the order they were detected by the evaluator. Each goal displays its completion status, which includes the specific assessment (e.g., Completed, Escalation, etc.) and any relevant notes or comments.

Autosummary displays a brief conversation summary above the goals section when enabled.

Goal Completion Review

## Next Steps

Once the goal completion status has been reviewed, you can take further actions to improve the GenerativeAgent's performance. This may include:

* **Training and Optimization**: Use the insights gained from the goal completion analysis and get in touch with your ASAPP team to train and optimize the GenerativeAgent for better performance in future interactions.
* **Feedback Loop**: Implement a feedback loop where insights from goal completion reviews are used to continuously improve the agent's capabilities and response strategies.
* **Customer Follow-up**: Identify customers whose goals were not met and consider following up with them to address any unresolved issues or concerns. Consider exploring the following evaluators for more in-depth analysis: Manually annotate conversations to provide feedback and identify quality issues. Monitor and review conversations for compliance and quality assurance using GenerativeAgent. # Manual Annotation Source: https://docs.asapp.com/generativeagent/observe/evaluators/manual-annotation Human-in-the-loop evaluation through manual conversation review Manual Annotations enable human reviewers to mark, classify, and provide context on turns within GenerativeAgent conversations. When automated evaluation identifies potential issues or when you discover conversations that aren't being handled as expected, manual annotations allow QA teams, supervisors, and AI specialists to apply human judgment, validate decisions, and deliver actionable insights for model tuning and quality processes. Human judgment complements automated evaluation, enabling accurate diagnosis of model behavior, evaluator training, and continuous GenerativeAgent improvements. Manual flagging includes: Reviewers will have the ability to flag specific GenerativeAgent turns and provide additional comments explaining the reason for the flag or if further action is needed. They also have the option to update Conversation Monitoring rationale to add more context for review and fine-tuning the classification. Assign a tag to Conversation Monitoring or manual flags to classify and investigate the type of issue identified in the flagged turn. This helps in categorizing issues for analysis and resolution. 
When a turn is flagged, reviewers can:

* Confirm the flag if they agree with the automated assessment
* Recategorize the flag as needed
* Dismiss the flag if they believe the turn is a false positive
* Add nuance or rationale to the flag for better context

## When to Use Manual Annotations

Use manual annotations to:

* **Control conversation quality**
  * Catch issues automated systems miss: Flag incorrect, incomplete, or risky model responses that automated evaluation didn't detect
  * Ensure compliance: Highlight policy, tone, or guideline violations that require human judgment
* **Improve automated evaluators**
  * Validate automated flags: Confirm or override automated evaluator flags to improve evaluator accuracy and reduce false positives
  * Train evaluators: Use confirmed or corrected flags as labeled training data to improve your automated evaluation systems
* **Enhance your GenerativeAgent system**
  * Identify knowledge gaps: Discover missing information in knowledge bases, APIs, or domain-specific instructions
  * Improve orchestration: Use annotated examples to refine conversation flows and routing logic
  * Enhance test suites: Add real-world scenarios from annotations to verify GenerativeAgent behavior
* **Manage review workflows**
  * Track follow-ups: Flag conversations requiring investigation, escalation, or additional review

## Adding Manual Annotations

Navigate to the [Conversations](/generativeagent/configuring/conversations) interface and select a conversation to review.

Identify the GenerativeAgent turn you want to annotate. Hover over the turn and click the **Add Flag** button.

Add any supporting **comments** or **rationale** for the flag. You can also update Conversation Monitoring rationale if the turn was already flagged by automated systems.

Select a category for the issue from the dropdown menu. This helps classify the type of issue for better organization and analysis.

Assign tags to classify the issue type.
Common classifications include: * Policy violations * Tone or guideline issues * Incorrect information * Incomplete responses * Escalation required Tags help organize and filter issues across all your conversations for systematic analysis and resolution. Click **Save** to apply the annotation. Add Manual Annotation You can edit existing flags at any time. For turns already flagged by automated Conversation Monitoring or other reviewers, you can confirm, recategorize, dismiss, or update the rationale to refine the annotation and improve evaluator training data. ## Reviewing and Filtering annotations The **Quality** tab in the [Conversations](/generativeagent/configuring/conversations) interface provides a consolidated view of all annotations. For each conversation, you can see: * All flags (both manual and automated) with their categories and severity levels * Reviewer comments and rationale for each flag * The associated GenerativeAgent transcripts Use the filters to narrow down conversations by: * Flag type (Manual vs. 
Automated) * Tags assigned to flags * Annotation severity level (critical or major) Flags filter ## Best Practices When creating manual annotations, follow these best practices: * Write specific rationale that describes what went wrong and why it matters * Apply consistent categorization to support downstream training and evaluation * Clearly state your reasoning when overriding conversation monitoring flags * Create concise but actionable annotations ## Next Steps The insights gained from manual annotations can be used to improve GenerativeAgent's performance in several ways: * **Evaluator training**: Train automated evaluators using confirmed or corrected flags as labeled data to reduce false positives and negatives * **Configuration improvements**: Use annotated examples to detect knowledge gaps and improve orchestration flows * **Knowledge updates**: Update knowledge bases, APIs, and domain-specific instructions based on feedback that identifies missing or incorrect information * **Test suite enhancement**: Update test suites with this feedback to verify specific GenerativeAgent behavior when deploying new tasks and configurations Consider exploring the following evaluator for more in-depth analysis: Spot and review customer goals not being met during GenerativeAgent interactions. Monitor and review conversations for compliance and quality assurance using GenerativeAgent. # Reporting Source: https://docs.asapp.com/generativeagent/reporting Learn how to track and analyze GenerativeAgent's performance. Monitoring how GenerativeAgent handles customer interactions is critical for ensuring optimal performance and customer satisfaction. By tracking key metrics around containment and task completion, you can continuously improve GenerativeAgent's effectiveness and identify areas for optimization. 
You can access GenerativeAgent reporting data in two ways:

| Reporting Option | Capabilities | Availability |
| :--------------- | :----------- | :----------- |
| **Out-of-the-box dashboards** | • Get started quickly with pre-built visualizations<br />• View basic performance metrics like task completion and containment | ASAPP Messaging only |
| **Data feeds** | • Export raw data for custom analysis<br />• Combine GenerativeAgent data with your own analytics<br />• Build custom reports in your BI tools<br />• Track end-to-end customer journeys across channels | ASAPP Messaging and Standalone GenerativeAgent |

## Out-of-the-box dashboards

The fastest way to start monitoring GenerativeAgent is through our pre-built dashboards. How you access them depends on whether you are using ASAPP Messaging or running GenerativeAgent standalone. These dashboards show you:

* Volume and containment over time
* Containment by task
* Intent and task breakdowns

We only provide out-of-the-box dashboards for GenerativeAgent running on [ASAPP Messaging](/agent-desk).

Access GenerativeAgent reporting through the [Historical Insights interface](/agent-desk/insights-manager#historical-insights):

1. Navigate to **ASAPP Core Digital Dashboards** -> **Automation & Flow** -> **GenerativeAgent**
2. Select **GenerativeAgent Quality Metrics**

## Data feeds

For deeper analysis, or to integrate GenerativeAgent metrics with your existing analytics infrastructure, you can pipe GenerativeAgent's data directly into your system using:

* [File Exporter APIs](/reporting/file-exporter) for standalone GenerativeAgent.
* [Download from S3](/reporting/retrieve-messaging-data) if you are using our [Messaging Platform](/agent-desk).

This approach is recommended when you need to:

* Combine GenerativeAgent metrics with other customer journey data
* Build custom dashboards in your BI tools
* Perform advanced analytics across channels
* Track end-to-end customer interactions

Use File Exporter to export data from a standalone GenerativeAgent. When exporting data via the File Exporter APIs, you need to specify a `feed` of **generativeagent**. The system generates reports hourly.
Here is an example to get a list of files in the generativeagent feed for a given day:

```bash theme={null}
curl --request POST \
  --url https://api.sandbox.asapp.com/fileexporter/v1/static/listfeedfiles \
  --header 'Content-Type: application/json' \
  --header 'asapp-api-id: ' \
  --header 'asapp-api-secret: ' \
  --data '{
    "feed": "generativeagent",
    "version": "1",
    "format": "jsonl",
    "date": "2024-06-27",
    "interval": "hr=23"
  }'
```

Refer to the [File Exporter documentation](/reporting/file-exporter) for more details on listing and retrieving files.

Use S3 to download data exported from the Messaging Platform. When exporting data via S3, you will need to specify the `FEED_NAME` as **generativeagent**.

Refer to the [Download from S3](/reporting/retrieve-messaging-data) guide for more details on the file structure and how to access the data.

## GenerativeAgent data schema

See all available metrics and their definitions in our data reference guide

# Developer Quickstart

Source: https://docs.asapp.com/getting-started/developers

Learn how to get started using ASAPP's APIs

Most of ASAPP's products require a combination of configuration and implementation, and making API calls is part of a successful integration.

If you are **only** integrating ASAPP Messaging and **no other ASAPP product**, then you can skip this quickstart and go straight to the [ASAPP Messaging](/agent-desk) guide.

To get started making API calls, you need to:

* [Log in to the developer portal](#log-in-to-the-developer-portal)
* [Understand Sandbox vs Production](#understand-sandbox-and-production)
* [Access your application's API Credentials](#access-api-credentials)
* [Make your first API call](#make-first-api-call)

## Log in to the developer portal

The developer portal is where you will:

* Grant access to developers and manage your team.
* Manage your API keys.

As part of [onboarding](/getting-started/intro), you would have appointed someone as the Developer Portal Admin.
This user is in control of adding users and adjusting user access within the Dev Portal.

### Managing the developer portal

The developer portal uses **teams** and **apps** to manage access.

The members of your team can have one of the following roles:

* **Owner**: This user controls the team; this user is also called the Developer Portal Admin.
* **App Admin**: These users are able to change the information on applications owned by the team.
* **Viewers**: These users can view API credentials, but cannot change any settings.

Apps represent access to all of ASAPP's products. Your team will already have an app created for you. One app can access all of ASAPP's products. There can be one or more keys for the app; by default, the system already generates an initial API key.

The ASAPP email login or SSO only grants access to the dev portal; all permission and team management must be done from within the developer portal tooling.

## Understand Sandbox and Production

Initially, you only have access to the sandbox environment and we will create a Sandbox team and app for you. The sandbox is where you can initially build your integration but also try out new features before launching in production.

The different environments appear in ASAPP's API Domains:

| Environment | API Domain |
| :---------- | :------------------------------ |
| Sandbox | `https://api.sandbox.asapp.com` |
| Production | `https://api.asapp.com` |

ASAPP's sandbox environment uses the same machine learning models and services as the production environment in order to replicate expected behaviors when interacting with a given endpoint.

All requests to ASAPP sandbox and production APIs **must** use the HTTPS protocol. The system will not redirect traffic using HTTP to HTTPS.

### Moving to Production

Once you are ready to launch with real traffic in production, request production access. Tell your ASAPP account team which user will be the Production Developer Portal Admin.
ASAPP will create a dedicated production team and app that you can manage as you did for the sandbox team and app. ## Access API Credentials To access your API credentials, once you've logged in: * Click your username and click Apps * Click your Sandbox App. * Navigate down to API Keys and copy your API Id and API Secret Save the API Id and Secret. All API requests use these for authentication. ## Make First API Call With credentials in hand, we can make our first API call. Let's start with creating a `conversation`, the root entity for any interaction within a call center. This example creates an empty conversation with required id from your system. You need to include the API Id and Secret as `asapp-api-id` and `asapp-api-secret` headers respectively. ```bash theme={null} curl -X POST 'https://api.sandbox.asapp.com/conversation/v1/conversations' \ --header 'asapp-api-id: ' \ --header 'asapp-api-secret: ' \ --header 'Content-Type: application/json' \ --data '{ "externalId": "con_1", "customer": { "externalId": "cust_1234" }, "timestamp": "2024-12-12T11:42:42Z" }' ``` # Error Handling Source: https://docs.asapp.com/getting-started/developers/error-handling Learn how ASAPP returns Errors in the API When you make an API call to ASAPP and there is an error, you will receive a non `2XX` HTTP status code. All errors return a `message`, `code`, and `requestId` for that request to help you debug the issue. The message will usually contain enough information to help you resolve the issue. If you require further help, reach out to support, including the requestId so that they can pinpoint the specific failing API call. 
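Every ASAPP error response carries a `message`, `code`, and `requestId` in the same envelope, so a thin wrapper can surface them uniformly. This is a sketch; the `parseAsappError` helper is illustrative, not part of any ASAPP SDK:

```javascript
// Sketch: convert an ASAPP error response body into a JavaScript Error
// carrying the fields you need for debugging and support requests.
// parseAsappError is a hypothetical helper, not an ASAPP SDK function.
function parseAsappError(status, body) {
  const details = (body && body.error) || {};
  const err = new Error(details.message || `ASAPP API error (HTTP ${status})`);
  err.code = details.code;           // e.g. "400-03"
  err.requestId = details.requestId; // include this when contacting support
  return err;
}

// Typical use with fetch:
// const res = await fetch(url, options);
// if (!res.ok) throw parseAsappError(res.status, await res.json());
```

Logging the `requestId` alongside your own correlation IDs makes it easy to hand support the exact failing call.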
## Error Structure | Field | Type | Description | | :-------------- | :----- | :--------------------------------------------------------------------------- | | error | object | The main error object containing details about the error | | error.requestId | string | A unique identifier for the request that generated this error | | error.message | string | A detailed description of the error, including the specific validation issue | | error.code | string | An error code in the format "HTTP\_STATUS\_CODE-ERROR\_SUBCODE" | Here is an example where a timestamp in the request has an incorrect format. ```json theme={null} { "error":{ "requestId":"3851a807-f0c3-4873-8ba6-5bad4261f0ca3100", "message":"ERROR - [Path '/timestamp'] String 2024-08-14T00:00:00.000K is invalid against requested date format(s) [yyyy-MM-dd'T'HH:mm:ssZ, yyyy-MM-dd'T'HH:mm:ss.[0-9]{1,12}Z]: []]", "code":"400-03" } } ``` # Health Check Source: https://docs.asapp.com/getting-started/developers/health-check Check the operational status of ASAPP's API platform ASAPP provides a simple endpoint to check if our API services are operating normally. You can use this to verify the platform's availability or implement automated health monitoring. ## Checking API Health Send a GET request to the [health check](/apis/health-check/check-asapps-apis-health) endpoint: ```bash theme={null} curl https://api.sandbox.asapp.com/v1/health \ -H "asapp-api-id: YOUR_API_ID" \ -H "asapp-api-secret: YOUR_API_SECRET" ``` A successful response will return: ```json theme={null} { "healthCheck": "SUCCESS" } ``` The status will be either `SUCCESS` when operating normally or `FAILED` if there are service disruptions. # API Rate Limits and Retry Logic Source: https://docs.asapp.com/getting-started/developers/rate-limits Learn about API rate limits and recommended retry logic. ASAPP implements rate limits on our APIs to ensure system stability and optimal performance for all users. 
To maintain a smooth integration with our APIs, you need to: 1. Be aware of the rate limits in different environments 2. Implement retry logic to handle rate limit errors effectively ## Spike Arrest Limits ASAPP sets a **100 requests per second** limit to prevent API abuse rather than to restrict regular expected usage. If your implementation is expected to approach or exceed these limits, contact your ASAPP account team in advance to discuss potential changes and prevent service interruptions. ## Behavior When Limits are Reached If you reach daily limits: * Calls to the endpoint will receive a 429 'Too Many Requests' response status code for the remainder of the day. * In cases of suspected abuse, ASAPP may revoke API tokens to temporarily suspend access to production services. ASAPP will inform you via ServiceDesk in such cases. ## Recommended Retry Logic ASAPP recommends implementing the following retry logic using an exponential backoff strategy only in response to **429** and **5xx** errors: ### On 429 Errors * 1st retry: 1s delay * 2nd retry: 2s delay * 3rd retry: 4s delay ### On 5xx Errors and Other Retriable Codes * 1st retry: 250ms delay * 2nd retry: 500ms delay * 3rd retry: 1000ms delay ### Other 4XX errors **Do not implement retries** for 4xx error codes except for 429. If you receive a `409 Conflict`, the system has already persisted the entity, so there is no need to retry the request. # Setup ASAPP Source: https://docs.asapp.com/getting-started/setup Learn how to get started with your ASAPP account To get started with ASAPP, you need to: 1. Create and access your account with ASAPP 2. Invite Users and Developers 3. Configure and Integrate your products ## Create and access your account The first step with ASAPP is getting your own account. If you haven't already, [request a demo](https://ai-console.asapp.com/).
During the initial conversations, an ASAPP team member will have asked you for the following: * Display name of company * Admin user email: ASAPP grants this user initial admin access and the user can invite subsequent users. * Developer email: This is the user who is responsible for the technical integration. They will receive access to the developer portal. An account will be created for you; this account is sometimes referred to as an **organization name** or **company marker**. This company marker is your main account with ASAPP and includes all configuration, user management, and login settings for your account. When you log in to the [ASAPP dashboard](https://ai-console.asapp.com/), called the AI Console, you will need to specify your **organization**, and then log in with your email. At first, login is based on your email, though we do support SSO authentication. If you don't have an account, you can [reach out](https://www.asapp.com/get-started) to see a demo and get an account. ### Multiple company markers Most users only need a single company marker. If you require different sets of configuration, such as sub-entities with distinct configuration needs, you may need multiple company markers. Work with your ASAPP account team to determine the best account structure for your business. ## Invite users and developers Once you have access to your account and the [ASAPP dashboard](https://ai-console.asapp.com/), you need to invite your teammates to access relevant products. You can fully manage most products within the AI Console. [ASAPP Messaging](/agent-desk) has a separate dashboard for configuring the platform, as opposed to the [Agent Desk](/agent-desk/digital-agent-desk) where your agents log in and interact with your customers. For developers, we will already have requested your developer's email to give them access to the developer portal where they can manage API Keys. Point your developers to the [developer quickstart](/getting-started/developers).
## Get started with GenerativeAgent Once you have access to your account and have invited your users, you can configure and implement GenerativeAgent. Learn how to build a GenerativeAgent that can use your KnowledgeBase to start answering your users' questions. # AI Console Source: https://docs.asapp.com/getting-started/setup/ai-console Central dashboard for configuring and managing ASAPP products The **AI Console** is ASAPP's central dashboard for configuring, managing, and monitoring your ASAPP products. It provides a unified interface for administrators, developers, and business users to set up and control the full suite of ASAPP's AI-powered solutions. AI Console Home ## Product Configuration AI Console allows you to set up and manage ASAPP products. Product Tiles in AI Console You can configure the following products: * [GenerativeAgent](/generativeagent): Configure Tasks, Functions, Knowledge Bases, and more. * [ASAPP Messaging](/agent-desk): Configure intents, flows, response libraries, and other ASAPP Messaging settings. * [Queues and Routing](/agent-desk/digital-agent-desk/queues-and-routing) * [Virtual Agent](/agent-desk/virtual-agent) * [End Customer IP Blocking](/security/external-ip-blocking) * [AI Compose](/ai-productivity/ai-compose): Configure responses and other settings for AI Compose. * [AI Summary](/ai-productivity/ai-summary): Try out AI Summary within the AI Console. ## Manage Company Resources **Company Resources** are shared entities that multiple ASAPP products use. Company Resources in AI Console Company Resources include: * **API Integration Hub**: Manage [API Connections](/generativeagent/configuring/connect-apis) that connect GenerativeAgent to external systems. This includes: * Managing API Connections, API Specs, and Authentication. * Viewing [API Logs](/generativeagent/configuring/connect-apis#api-connection-logs).
* **Entities**: Data fields used for routing, reporting, and personalization * **Integrations**: Data and API Integrations to connect ASAPP Messaging to external systems. * **Intents**: Customer intents recognized by Virtual Agent and other products * **Library**: Build and attach deep links or Forms to use in Flows. ## Administration AI Console is where you administer your ASAPP account. You can find the Administration page in the top right corner of AI Console. AI Console Administration Within the Administration page, you can: * **Manage Users and Roles**: * Invite users and assign them roles (admin, developer, business user, etc.) * Control permissions for access to different applications and features * Map SSO roles to ASAPP roles for seamless access * **View Audit Logs**: * Track all configuration changes, deployments, and user actions for compliance and troubleshooting * Search, filter, and export audit logs for review ## Quick Navigation AI Console provides quick navigation to the product pages and company resources. AI Console Quick Navigation ## Next Steps Now that you understand the AI Console, here are the recommended next steps to get started: Learn how to access the developer portal and make your first API calls. Set up and configure GenerativeAgent for your use cases. Configure your messaging platform with intents, flows, and virtual agents. Set up user roles and permissions for your team. # Audit Logs Source: https://docs.asapp.com/getting-started/setup/audit-logs Learn how to view, search, and export audit logs to track changes in AI Console. All activities in AI Console are saved as events and are viewable in audit logs. These logs provide a detailed record of configuration changes made in AI-Console for AI Services and ASAPP Messaging. The system saves these records indefinitely, providing administrators with a comprehensive historical view of changes made to ASAPP services, including when they were made and by whom.
Administrators of your ASAPP organization can access audit logs. Audit logs allow you to: * See the most recent changes made to every resource. * Investigate a particular historical change associated with a deployment. * Review activity for a given user or product over the course of weeks or months. To access Audit Logs: 1. Navigate to the AI-Console home page. 2. Select Admin. View of the audit logs landing page. The following list displays the resources being tracked: * **General** * Links * Custom entities * **Virtual Agent** * Flows * Intent routing * **AI Compose** * Global responses ## Audit Logs Entries For each audit log record, the system records the following fields: | Field | Description | | :------------ | :-------------------------------------------------------------------------------- | | Resource type | Type of resource modified. | | Resource name | Name of the resource modified. | | Event type | Type of event. Supported fields are create, deploy, undeploy, update, and delete. | | Environment | Environment where you deployed the resource. Only applicable for deploy events. | | User | Name of user who caused the event. | | Timestamp | Time and date the event occurred, in UTC format. | | Unique ID | (Optional) Unique identifier for the resource. | ## Searching Audit Logs Administrators can use the search bar to look for a specific resource name or user. To search your audit logs, navigate to the search bar on the top-right corner of the screen. The search functionality searches for exact matches with either the resource name or the user that made the change. Additionally, you can filter the results of the audit logs by using the filter dropdown menus. You can filter by the following fields: * Resource type * Event type * User * Date You can additionally click on the "timestamp" column to re-order the results by ascending or descending dates: Timestamp column highlighted on the Audit Logs main view.
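For teams that process these entries programmatically (for example, from a CSV export), the same exact-match filtering and timestamp ordering can be sketched in a few lines of Python. The field keys and records below are illustrative, not an ASAPP export format:

```python
from datetime import datetime

# Illustrative audit log entries using the fields listed above
entries = [
    {"resource_type": "Flows", "resource_name": "billing-flow",
     "event_type": "deploy", "user": "ana", "timestamp": "2024-06-02T10:15:00Z"},
    {"resource_type": "Links", "resource_name": "help-link",
     "event_type": "update", "user": "ben", "timestamp": "2024-06-01T09:00:00Z"},
    {"resource_type": "Flows", "resource_name": "billing-flow",
     "event_type": "update", "user": "ana", "timestamp": "2024-06-03T08:30:00Z"},
]

def filter_entries(entries, **criteria):
    """Keep entries whose fields exactly match every given criterion."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in criteria.items())]

def newest_first(entries):
    """Order entries by timestamp, descending (like the timestamp column)."""
    return sorted(entries,
                  key=lambda e: datetime.fromisoformat(e["timestamp"].replace("Z", "+00:00")),
                  reverse=True)

# All of ana's changes to Flows, most recent change first
ana_flow_changes = newest_first(filter_entries(entries, user="ana", resource_type="Flows"))
```

Exact matching mirrors the search bar's behavior described above; a real export would be loaded with the `csv` module before filtering.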
## Exporting Audit Logs Administrators can download the audit logs as a CSV file to store and review later. If you export the audit logs as a .csv file after filtering them using the search bar or filters, the downloaded file will also be filtered. To download the audit logs as a .csv file: 1. Navigate to the Audit Logs section in AI Console. 2. Click on the download button, next to the search bar. The system will record data in audit logs from the time the feature is enabled. The system will not display historical activity retroactively. # Manage Users Source: https://docs.asapp.com/getting-started/setup/manage-users Learn how to set up and manage users. You are in control of user management within ASAPP. This includes inviting users, granting access to applications, and assigning specific permissions for features and tooling. Managing users for the ASAPP dashboard is separate from the [Digital Agent Desk](/agent-desk/digital-agent-desk/user-management) Manage users from within the ASAPP dashboard, including [inviting users](#invite-users), deleting users, and managing [application access and permissions](#application-access-and-permissions). We also support [SSO](#sso), allowing you to manage user access via your own auth system. ## Invite Users To Invite Users: * Navigate to Home > Admin > User management * Click Invite Users * Enter the email and name for the user. * By default, users have the "Basic" role, but you may choose others. We will cover roles and permissions further below. * You may invite multiple users at once. ## Roles and Permissions Access to ASAPP is managed via roles. A role is a collection of permissions which dictate what UI elements a user has access to. By default, all users must have the Basic role, allowing them to log in to the dashboard, but you may create and assign as many additional roles as you like to a given user. ### Creating a Role To create a role: 1. Navigate to Home > Admin > Roles & Permissions. 2. Click "Create Role". 3.
Enter a name and description for the role. 4. Select the permissions for the role. 5. Optionally, if you are using SSO, [add IDP mapping](#idp-mapping) to the role. 6. Click "Save Permission". ### IDP Mapping If you are using SSO, you can map roles in your Identity Provider (IDP) to the roles in ASAPP, allowing you to manage access to ASAPP via your own IDP. You must work with your ASAPP account team to determine which claim from your IDP contains the roles list. For each role in ASAPP, you specify one or more roles within your IDP that the system should map to it. You can map multiple ASAPP roles to the same IDP role. ## SSO ASAPP supports Single Sign-On (SSO). SSO allows you to manage your team's access through an Identity Provider (IDP). ASAPP supports SSO using OpenID Connect and SAML. When using SSO, your IDP manages the creation and authentication of user accounts, and determines which roles a user should have in ASAPP. You still need to manage the permissions for a given role within ASAPP via [IDP mapping](#idp-mapping). If you are interested in using SSO, please reach out to your ASAPP account team to get set up. # Human + AI Source: https://docs.asapp.com/human-+-ai Empower GenerativeAgent with Human in the Loop Agent (HILA) and Agent Desk. As your GenerativeAgent helps your customers, you can bring human judgment into AI-driven conversations with a [Human in the Loop Agent (HILA)](/generativeagent/human-in-the-loop). Your Human in the Loop Agents provide guidance and explicit approvals to GenerativeAgent in real-time. Some conversations require humans to fully step in for the best customer experience. Our [Agent Desk](/agent-desk) platform allows you to natively hand off conversations to live agents while remaining in the ASAPP CXP. You can also hand off conversations to agent desk platforms separate from ASAPP's Agent Desk. Learn how to bring human judgment into the AI-driven conversation with a Human in the Loop Agent (HILA).
Learn how to use the Agent Desk to directly support customers. ## AI Productivity As part of the ASAPP CXP, you can use the following AI tools to enhance your customer service operations: Generate conversation summaries and insights with AI Summary Streamline agent response rates with AI Compose Transcribe voice interactions with AI Transcription # Release Notes Source: https://docs.asapp.com/releases/overview New updates and improvements across ASAPP products Welcome to the ASAPP Product Updates page. Here you'll find comprehensive information about the latest features, improvements, and changes across all ASAPP products. This page is regularly updated to keep you informed about our evolving platform. # 2026 ## API Connection Duplication - GenerativeAgent Easily create new API connections by duplicating existing ones, allowing you to reuse configurations and make necessary adjustments without starting from scratch. Learn how to duplicate API connections. ## Multi-Lingual Support - AI Compose AI Compose's core features of Autosuggest, Autocomplete, Fluency Correction, and Profanity Blocking now support multiple languages, including Spanish and French. Learn about AI Compose's multi-lingual capabilities. ## Adaptors for Custom KB Connectors - GenerativeAgent Adaptors provide ready-to-use connections for popular third-party platforms including Salesforce, ServiceNow, Slack, and more. Each adaptor handles API endpoints and authentication for its platform, enabling you to connect GenerativeAgent without manual API configuration. Learn about adaptors for custom KB connectors and the list of platforms they support. ## Integrate Custom Knowledge Bases - GenerativeAgent Custom Knowledge Base Connectors let you integrate any knowledge base with GenerativeAgent using API connections. By using an API connection, you can crawl and sync articles from any knowledge base that supports API access. Learn how to connect custom knowledge bases.
## Human-in-the-loop Function - GenerativeAgent The HILA Function enables seamless involvement of human agents in customer conversations when GenerativeAgent determines intervention is necessary. Create HILA functions and define specific scenarios for human intervention, ensuring complex or sensitive issues are handled appropriately while improving customer satisfaction. Learn how to create and use HILA functions. ## Omni-channel Tooling Optimization (OCTO) - GenerativeAgent OCTO allows a single task to be used across voice and chat channels, streamlining GenerativeAgent configuration management. Unified task management for both channels with the ability to edit procedures, voice settings, and chat settings in one place. Learn how to create cross-channel tasks. ## MCP Server Integration - GenerativeAgent Connect GenerativeAgent to Model Context Protocol (MCP) servers to access a wider range of language models and enhanced capabilities. Leverage different models and select specific tools for your MCP server connection. Learn how to integrate with MCP servers. ## Segments Tooling - AI Summary Define which structured data are to be extracted from specific conversation types. The intuitive UI allows you to run precise structured data questions and obtain faster insights on your own. Learn how to configure Segments for Structured Data. # 2025 ## Structured Data Dashboard - AI Summary ASAPP has introduced a user-friendly interface in AI Summary dashboard that allows users to easily configure and manage Structured Data extraction for AI Summary. The UI supports both Entity Extraction and Targeted Structured Data (Questions), enabling teams to customize data points that they extract from conversations without requiring technical expertise. Learn how to configure and manage Structured Data with an intuitive dashboard. ## Bricks - GenerativeAgent Create reusable prompt content that stays synchronized across your GenerativeAgent configuration. 
Author modular prompts across multiple tasks, update content without redeploying, and maintain consistency across customer interactions. Learn how to use Bricks in your tasks. ## Conversation Explorer integration with AI-Console Conversations Explorer has been renamed **"Conversations"**, and you can now access it directly from AI-Console. This integration streamlines your workflow by providing seamless access to conversation monitoring and review tools within the AI-Console interface. ### New Features * **Improved Navigation**: Easily access conversation monitoring and review tools directly from AI-Console * **Enhanced Filtering**: Utilize advanced filtering options to quickly find and review specific conversations based on various updated criteria * **Evaluators**: Review and analyze conversations with evaluators directly within the AI-Console for streamlined quality assurance. The evaluators include: * [Goal Completion](/generativeagent/observe/evaluators/goal-completion): Spot and review customer goals not being met during GenerativeAgent interactions. * [Conversation Monitoring](/generativeagent/observe/evaluators/conversation-monitoring): Monitor and review conversations for compliance and quality assurance using GenerativeAgent. * [Manual Annotation](/generativeagent/observe/evaluators/manual-annotation): Human-in-the-loop evaluation through manual conversation review and scoring. * **Enhanced Model Action analysis**: Additional model action categories for deeper insights into GenerativeAgent behavior during conversations. * **Conversation Playback**: Replay conversations to analyze GenerativeAgent interactions in detail. Learn how to navigate to Conversations through AI-Console for efficient conversation management. ## Zendesk KB Import - GenerativeAgent Import articles directly from your Zendesk Knowledge Base into your GenerativeAgent Knowledge Base, streamlining knowledge management and ensuring content consistency. Learn how to import Zendesk articles.
## ASAPP Chat SDK Integration - GenerativeAgent Deploy GenerativeAgent within your website using the ASAPP Chat SDK. Embed conversational AI in your website for seamless customer interactions with a customizable chat interface. Learn how to integrate the Web SDK. ## Configure Voice - GenerativeAgent Customize the voice of your GenerativeAgent for voice interactions with options for voice selection. You can preview different voices to find the best fit for your brand and customer experience. Learn how to configure voice settings for GenerativeAgent. ## Channel Integrations: Twilio Voice, Zendesk Talk, and SIP Transfer - GenerativeAgent Expanded GenerativeAgent's channel capabilities with new integrations for Twilio Voice, Zendesk Talk, and SIP Transfer, providing more flexibility in customer engagement across communication channels. Learn about Twilio Voice integration. ## Evaluation for Scenario Testing - GenerativeAgent Test Scenarios now support automated evaluation of simulated conversations. Define applicability and evaluation criteria, view pass/fail results in the Previewer, and set maximum turns for simulations. Learn how to define and run evaluations. ## Scenario Testing - GenerativeAgent Test Scenarios enable efficient testing of GenerativeAgent configurations through automated simulations. Automatically generate mock API responses, add customer goals and personalities, and review how GenerativeAgent handles different scenarios. Learn how to create Test Scenarios. ## Configuration Branches - GenerativeAgent Experiment with GenerativeAgent configurations through isolated local branching. Create branches from Draft, Sandbox, or Production environments, edit safely, and preview changes before promoting to main environments. Learn how to manage Configuration Branches. 
## Advanced Syncing and Auto-Deploy for Knowledge Base - GenerativeAgent Enhanced controls for syncing and deploying your knowledge base with configurable sync modes and high-frequency automatic updates for critical, time-sensitive articles. Learn about KB sync and deploy options. ## Auto-generating Test Users - GenerativeAgent Automatically generate test users by describing test scenarios, making it easier to simulate API interactions. Accelerates testing by creating realistic test data based on scenario descriptions. Learn how to create test users. ## Pinned Versions - GenerativeAgent Pin specific versions of GenerativeAgent to a deployment for safer and more predictable deployments. Maintain version stability, control feature rollout, and test changes in preview before deployment. Learn how to manage versions. ## Scope and Safety Fine Tuning - GenerativeAgent Customizable guardrails let you define what's "in-scope" and "safe" for your specific use cases while maintaining core safety protections. Customize boundaries aligned with business policies. Learn about safety and scope settings. ## Mock Functions - GenerativeAgent Enable rapid prototyping and testing of GenerativeAgent integrations without requiring live API endpoints. Prototype behaviors, test parameter handling, and accelerate development with simulated API responses. Learn how to create Mock Functions. ## Overflow Queue Routing - Insights Manager Redirect traffic from one queue to another based on business hours and agent availability rules. Reduce estimated wait time for customers and support closed queues when legally required. Learn about Insights Manager. ## Bulk Close and Transfer Chats - Insights Manager Bulk chat management features help alleviate queues experiencing unusual activity or high traffic. Transfer all chats from one queue to another or end all chats in a queue quickly. Learn about bulk chat management. 
## French and Spanish Language Support for AutoPilot - Agent Desk AutoPilot now supports French and Spanish languages, allowing agents to automate customer interactions in these additional languages for improved customer experience and operational efficiency. Learn about AutoPilot features. ## Undelivered Messages - Agent Desk Displays clear indicators when Live Agent messages fail to reach customers, enhancing transparency and accountability with visible indicators in transcripts whenever delivery failures occur. Learn about undelivered message handling. ## Live Agent Summary - Agent Desk AI-powered summaries presented directly to live agents at handoff from GenerativeAgent. Concise, structured summaries reduce AHT by eliminating the need to parse entire transcripts. Learn about Live Agent Summary. ## Chat Takeover - Agent Desk Managers can take over chats from agents or unassigned chats in the queue. Enabling them to close resolved chats, handle complex situations, or manage high-traffic periods. Learn about Agent Desk features. ## Send Attachments - Agent Desk End customers can send PDF attachments and images to agents to provide more information about their case. Supporting fraud cases and other scenarios requiring proof documentation. Learn about Agent Desk capabilities. ## Searchable Dropdown Component - Virtual Agent Searchable dropdown component enhances user experience when configuring flows, allowing users to quickly find and select options from long lists. Learn about Virtual Agent flows. # 2024 ## New ASAPP Dashboard Updated AI-Console home page with streamlined design enhancing the experience for all users. Easier navigation to key products, view recent activity, and access admin-related activities. Learn about AI-Console. ## Health Check API Verify the operational status of ASAPP's API platform through a simple endpoint for monitoring infrastructure health without needing to make calls to production endpoints. Learn about the Health Check API. 
## Audit Logs Review configuration changes made in AI-Console with complete visibility into what is being updated, when, and by whom for control and compliance. Learn about Audit Logs. ## Search Queue Names - Agent Desk Agents can easily transfer conversations by typing queue names and selecting from filtered results, particularly useful for long queue lists. Learn about Agent Desk features. ## Auto-Pilot Endings - Agent Desk Automate the end-of-chat process allowing agents to opt-in to flows that handle check-in and ending processes. Freeing agents to focus on more valuable tasks. Learn about AutoPilot Endings. ## Import and Export Flows - Virtual Agent Promote flows from lower environments into production by exporting JSON files and importing them. Allowing flow builders to manage version control effectively. Learn about flow management. ## Live Insights Metrics - Insights Manager Added Average First Response Time metric and SLA column to help workforce managers monitor capacity and meet contractual SLA commitments. Learn about Live Insights metrics. ## Form Messages for Apple Messages for Business - Customer Channels Rich, multi-page interactive forms replacing Omniforms with native Apple Messages format. Gather customer information through customizable forms without leaving Apple Messages. Learn about AMB integration. ## WhatsApp Business - Customer Channels Support for WhatsApp Business as a messaging channel. Enabling customers to interact with virtual agents and have conversations with live agents in their preferred app. Learn about WhatsApp integration. ## Authentication in Apple Messages for Business - Customer Channels Customers can securely log in to their accounts during interactions. Accessing personalized experiences in automated flows and when speaking with agents. Learn about authentication. ## Intents Self Service Tooling - AI Summary Streamlined interface for managing intent classification with automated, self-serve UI. 
Upload, create, and modify intent labels without support team intervention, with deployment to production within minutes. Learn about AI Summary features. ## Turn Inspector - GenerativeAgent Advanced diagnostic feature in Previewer providing granular visibility into GenerativeAgent's interaction workflow. Inspect active tasks, reference variables, instruction parsing, function calls, and execution state per turn. Learn how to use the Previewer. ## Knowledge Base Search - GenerativeAgent Powerful free-text search across article titles, text, and URLs with metadata filtering for content source, creation details, and deployment status. Making KB management easier. Learn about KB Search. ## KB Article API - GenerativeAgent Programmatic management of Knowledge Base articles for integration with private internal knowledge bases. Importing from non-scrapable sources, and fine-grained control over knowledge ingestion. Learn how to use the API. ## Trial Mode - GenerativeAgent Safely deploy GenerativeAgent use cases by trialing functions in production. Validate how your AI system interacts with external functions and APIs before full deployment. Learn about Trial Mode. ## Custom Vocab Features - AI Transcribe Self-serve Custom Vocabulary feature allows management of business-specific keywords to improve transcription accuracy. Add, update, and delete custom vocabulary terms independently through API. Learn about AI Transcribe features. ## Custom Redaction Entities - AI Transcribe Self-serve feature for managing redaction entities through configuration API. Independently enable or disable redaction of PCI and PII data in transcriptions. Learn about redaction features. ## Structured Data - AI Summary Fully customizable feature for extracting data points from conversations through entity extraction and question answering. Generate automated insights, populate CRM fields, and monitor compliance automatically. Learn about Structured Data. 
# 2023 ## Salesforce Integration - AI Summary Native Salesforce plugin with AI Summary integration enables quick installation and configuration within Lightning environment. Automatically generating and saving conversation summaries. Learn about Salesforce integration. ## Sandbox for AI Compose Playground environment for administrators to preview and test AI Compose experience before integration. Simulating how agents interact with suggestion features. Learn about AI Compose Sandbox. ## Amazon Connect Media Gateway - AI Transcribe AI Transcribe implementation pattern for Amazon Connect allowing Kinesis Video Streams audio to be easily sent to AI Transcribe. Learn about Amazon Connect integration. ## Sandbox for AI Transcribe Preview ASAPP's speech-to-text capabilities designed for real-time agent assistance without waiting for integration completion. Learn about AI Transcribe Sandbox. ## Sandbox for AI Summary - AI Summary Testing environment in AI-Console for validating and experimenting with summary generation before deploying to production. Supporting both voice and messaging conversations. Learn about the Sandbox. ## Twilio Media Gateway - AI Transcribe AI Transcribe implementation pattern for Twilio allowing Twilio Media Streams audio to be easily sent to AI Transcribe for simplified integration. Learn about Twilio integration. ## Feedback for AI Summary Model retraining using agent feedback through the feedback endpoint. Using differences between automatically generated and final submissions to improve the model over time. Learn about feedback features. ## Get Transcript API - AI Transcribe New endpoint retrieves the full set of messages for specified conversations. Providing complete transcripts on-demand instead of only real-time or in daily batches. Learn about the Get Transcript API. # 2022 ## Tooling for AI Compose Configuration of global response lists in AI-Console with management through bulk CSV uploads or targeted UI edits. 
Enabling self-serve maintenance of response libraries. Learn about AI Compose tooling. ## Voice Desk Updates Multiple enhancements including Themeable Desk with Dark Mode, Automatic Summary, Transcript Redesign, Whisper, and Left Rail redesign with color temperature settings. Learn about Agent Desk features. # Reporting and Insights Source: https://docs.asapp.com/reporting ASAPP reports data back via several channels, each with different use cases: Retrieve data and reports via a secure API for programmatic access to ASAPP data. Download data and reports via S3. Access real-time data from ASAPP Messaging. Send data to ASAPP via S3 or SFTP. Send conversation, agent, and customer metadata. ## Batch vs Realtime One high-level differentiating feature of these channels is how the underlying data is processed for reporting: * **Real-time**: Processed data flows to the reporting channel as it happens. * **Batch**: Processed data aggregates into time-based buckets, delivered with some delay to the reporting channel. For reference: * Reports visible in ASAPP's Desk/Admin are considered *real-time reports*. * RTCI reports are *real-time reports*. * ASAPP's S3 reports are *batch reports*, delivered with a predictable time delay. * Historical Reports are *batch reports*. Often, metrics similar in both name and underlying definition are delivered both via batch and via real-time channels. This can be confusing: a metric viewed in a real-time context (say, via ASAPP's Desk/Admin) might well differ in value from a similar metric viewed in a time-delayed batch context (say, via a report delivered by S3).
***In fact, customers should not expect that values for similar metrics will line up across real-time and batch reporting channels.***

The short explanation for such differences is that **real-time and batch processed metrics are necessarily calculated using different underlying data sets** (with the real-time set current up-to-the-minute, and the batch set delayed as a function of the time bucketing aggregation). Different underlying data will yield different reported values for your metrics between delivery channels.

The remainder of this document provides a few concrete examples to further explain the variance you will typically see between real-time and batch reported values for similar metrics.

### Batch vs Real-time Metric Discrepancies

Real-time metrics are calculated with a continual process, where computations are evaluated repeatedly with the most current data available. With multiple active and potentially geographically dispersed instances of an application communicating asynchronously across a global message bus, at times the data used to calculate real-time metrics can be intermediate or incomplete.

On the other hand, metrics produced by batch processing are computed with all available, terminal data for each reported interaction, and so can provide a more accurate metric at the expense of a time delay relative to real-time reporting. ASAPP S3 reports, for example, are normally computed over hours or days, and can therefore incorporate the most complete set of data points required to calculate a metric.

As a simplified example, let's consider a metric that shows a daily average for customer satisfaction ratings.
Let's assume:

* the day starts at 8:00 AM
* batch processing works against hourly aggregate buckets
* batch calculations run at 5 minutes past the hour
* it is a *very slow* day :)

Over the course of our pretend day, the following interactions are handled by the system:

| Time | Rating | Real-time avg for day | Batch avg for day |
| :------- | :----- | :-------------------- | :---------------- |
| 8:00 AM | 4 | 4 | N/A |
| 8:05 AM | 4 | 4 | N/A |
| 8:10 AM | 4 | 4 | N/A |
| 12:00 PM | 1 | 3.25 | 4 |
| 12:05 PM | 1 | 2.8 | 4 |
| 1:10 PM | 4 | 3 | 2.8 |

At 8:00 AM, batch processing will not have incorporated the rating that was provided at 8:00 AM, so the average rating can't yet be computed for a batch report. Since real-time reporting has access to up-to-the-minute data, it shows a value of 4 for the daily average customer satisfaction rating.

At 12:00 PM, the real-time metric shows an average satisfaction of 3.25 over 4 interactions. The batch system shows the average satisfaction rating as 4 over 3 interactions, since the 12:00 PM interaction has not yet been incorporated into the batch processing calculation. Given our example scenario, the interactions at 12:00 and 12:05 would not be incorporated into the batch reported metric until 1:05 PM.

In this simplified example, the batch processed metric would align with the real-time metric around 2:05 PM, once both the batch metric and the real-time metric are calculated against the same underlying data set.

The next example shows how values provided by real-time vs batch processing might show inconsistent values for "rep assigned time".

```text
8:00AM: NEW ISSUE
8:01AM: ENQUEUED
8:02AM: REP ASSIGNED: rep0
8:03AM: REP UNASSIGNED
8:04AM: REENQUEUED
8:05AM: REP ASSIGNED: rep0
8:06AM: ...
```

With real-time reporting, the value for rep\_assigned\_time might show either 8:02 AM or 8:05 AM, depending on when the data is read and the real-time metric is viewed.
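The event timeline above can be sketched as a short Python toy. This is a hypothetical illustration, not ASAPP's implementation: the metric definitions here are assumptions, with the real-time reader reporting the most recent assignment event it has observed so far, and the batch job, which always sees the complete history, reporting the first assignment.

```python
# Event log from the timeline above (zero-padded times on one pretend day).
EVENTS = [
    ("08:00", "NEW_ISSUE"),
    ("08:01", "ENQUEUED"),
    ("08:02", "REP_ASSIGNED"),    # rep0 assigned
    ("08:03", "REP_UNASSIGNED"),
    ("08:04", "REENQUEUED"),
    ("08:05", "REP_ASSIGNED"),    # rep0 re-assigned
]

def realtime_rep_assigned_time(read_at):
    """Real-time view: only events up to `read_at` are visible,
    and the reader reports the most recent assignment it has seen."""
    seen = [ts for ts, ev in EVENTS if ev == "REP_ASSIGNED" and ts <= read_at]
    return seen[-1] if seen else None

def batch_rep_assigned_time():
    """Batch view: the complete history is available,
    so the job reports the first assignment."""
    return next(ts for ts, ev in EVENTS if ev == "REP_ASSIGNED")

print(realtime_rep_assigned_time("08:03"))  # -> 08:02
print(realtime_rep_assigned_time("08:06"))  # -> 08:05 (same issue, later read)
print(batch_rep_assigned_time())            # -> 08:02, consistently
```

Two real-time reads of the same issue disagree depending on when the data is read, while the batch answer over the complete history is stable.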
Batch processed data, however, will have the complete historical data, and so will consistently report 8:02AM for the rep\_assigned\_time. Batch processed data and real-time processed data are almost always looking at different underlying data sets. Batch data is complete but time-delayed and real-time data is up-to-the-minute but not necessarily complete. As long as the data sets underlying real-time vs. batch reporting differ, customers should expect that the metrics calculated from those different data sets will differ more often than not. # ASAPP Messaging Feed Schemas Source: https://docs.asapp.com/reporting/asapp-messaging-feeds The tables below provide detailed information regarding the schema for exported data files available to you for ASAPP Messaging. ### Table: admin\_activity The admin\_activity table tracks ONLINE/OFFLINE statuses and logged in time in seconds for agents who use Admin. **Sync Time:** 1h **Unique Condition:** rep\_id, status\_description, status\_start\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------- | :----------- | :---------------------------------------------------- | :------------------ | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | rep\_name | varchar(191) | Name of agent | John | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | status\_description | varchar | Indicates status of the agent. | ONLINE | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | status\_start\_ts | datetime | Timestamp at which this agent entered that status. 
| 2018-06-10 14:23:00 | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | status\_end\_ts | datetime | Timestamp at which this agent exited that status. | 2018-06-10 14:23:00 | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | status\_time\_seconds | double | Time in seconds that the agents spent in that status. | 2353.23 | | | 2020-11-10 00:00:00 | 2020-11-10 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2025-01-09 00:00:00 | 2025-01-09 00:00:00 | no | | | | | ### Table: agent\_journey\_rep\_event\_frequency Aggregated counts of various agent journey event types partitioned by rep\_id **Sync Time:** 1d **Unique Condition:** primary-key: rep\_id, event\_type, instance\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | company\_marker | varchar(191) | The ASAPP company marker. | spear, aa | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | event\_type | varchar(191) | agent journey event type on record | CUSTOMER\_TIMEOUT, TEXT\_MESSAGE | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | event\_count | bigint | count of the agent journey event type on record | | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | disconnected\_count | bigint | number of times that a rep disconnected for less than 1 hour | | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | | disconnected\_seconds | bigint | cumulative number of seconds that a rep disconnected for less than 1 hour | | | | 2022-01-31 00:00:00 | 2022-01-31 00:00:00 | no | | | | | ### Table: autopilot\_flow This table contains factual data about autopilot flow. **Sync Time:** 1h **Unique Condition:** issue\_id, form\_start\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. 
This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| rep\_assigned\_ts | timestamp without time zone | | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| form\_start\_ts | timestamp without time zone | Timestamp of the autopilot form/flow being recommended by MLE, or timestamp of the flow sent from quick send. issue\_id + form\_recommended\_event\_ts should be unique. | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| form\_dismissed\_event\_ts | timestamp without time zone | Timestamp of the recommended autopilot form being dismissed. | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| form\_presented\_event\_ts | timestamp without time zone | Timestamp of the autopilot form being presented to the end user. | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| form\_submitted\_event\_ts | timestamp without time zone | Timestamp of the autopilot form being submitted by the end user. | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| flow\_id | varchar(255) | An ASAPP identifier assigned to a particular flow executed during a customer event or request.
| 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| flow\_name | varchar(255) | The ASAPP text name for a given flow which was executed during a customer event or request. | FirstChatMessage, AccountNumberFlow | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| form\_start\_from | character varying(191) | How the flow was sent by the agent. manual: sent manually from the quick send dropdown in Desk; accept: sent by accepting a recommendation from the ML server. | | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| is\_secure\_form | boolean | Is this a secure form flow. | false | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| queue\_id | integer | The ASAPP queue identifier in which the issue was placed. | 210001 | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| asapp\_mode | varchar(191) | Mode of the desktop that the rep is logged into (CHAT or VOICE). | CHAT, VOICE | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |
| company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2022-03-09 00:00:00 | 2022-03-09 00:00:00 | no | | | | |

### Table: convos\_intents

The convos\_intents table lists the current state for intent and utterance information associated with a conversation/issue that had events within the identified 15 minute time window. This table will include unended conversations.
**Sync Time:** 1h

**Unique Condition:** issue\_id

**Ordering Precedence:** instance\_ts

| Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_utterance\_ts | varchar(255) | The timestamp of the first customer utterance for an issue. | 2018-09-05 19:58:06 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_utterance\_text | varchar(255) | Text of the first customer message in the conversation. | 'Pay my bill', 'Check service availability' | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_intent\_code | varchar(255) | Code name used for classifying customer queries in first interaction. | PAYBILL, COVERAGE | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_intent\_code\_alt | varchar(255) | Alternative second best code name used for classifying customer queries in first interaction. | PAYBILL, COVERAGE | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| final\_intent\_code | varchar(255) | The final code name classifying the customer's query, based on the flow navigated; defaults to the first interaction code if no flow was followed.
| PAYBILL, COVERAGE | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | intent\_path | varchar(255) | A comma-separated list of all intent codes from the customer’s flow navigation. If no flow was navigated, this will match the first intent code. | OUTAGE,CANT\_CONNECT | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | disambig\_count | bigint | The number of times a disambiguation event was presented for an issue. | 2 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | ftd\_visit | boolean | Indicates whether free-text disambiguation was used to help the customer present a clearer intent, based on the number of texts sent to AI. | true, false | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | faq\_id | varchar(255) | The last FAQ identifier presented for an issue. | FORGOT\_LOGIN\_faq | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | final\_action\_destination | varchar(255) | The last deep-link URL clicked during the issue resolution process. | asapp-pil://acme/JSONataDeepLink | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | is\_first\_intent\_correct | boolean | Indicates whether the initial intent associated with the chat was correct, based on feedback from the agent. | true, false | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | first\_rep\_id | varchar(191) | The first ASAPP rep/agent identifier found in a window of time. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. 
| acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | |

### Table: convos\_intents\_ended

The convos\_intents\_ended table lists the current state for intent and utterance information associated with a conversation/issue that has had events within the identified 15 minute time window. This table will filter out unended conversations.

**Sync Time:** 1h

**Unique Condition:** issue\_id

**Ordering Precedence:** instance\_ts

| Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-07 00:00:00 | 2019-01-11 00:00:00 | no | | | | |
| first\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2018-11-07 00:00:00 | 2019-01-11 00:00:00 | no | | | | |
| customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_utterance\_ts | varchar(255) | Timestamp of the first customer message in the conversation. | 2018-09-05T19:58:06.203000+00:00 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_utterance\_text | varchar(255) | First message from the customer. | I need to pay my bill.
| | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_intent\_code | varchar(255) | Code name used for classifying customer queries in first interaction. | PAYBILL | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_intent\_code\_alt | varchar(255) | Alternative second best code name used for classifying customer queries in first interaction. | PAYBILL | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| final\_intent\_code | varchar(255) | The final code name classifying the customer's query, based on the flow navigated; defaults to the first interaction code if no flow was followed. | PAYBILL | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| intent\_path | varchar(255) | A comma-separated list of all intent codes from the customer’s flow navigation. If no flow was navigated, this will match the first intent code. | OUTAGE, CANT\_CONNECT | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| disambig\_count | bigint | The number of times a disambiguation event was presented for an issue. | 2 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| ftd\_visit | boolean | Indicates whether free-text disambiguation was used to help the customer present a clearer intent, based on the number of texts sent to AI. | false, true | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| faq\_id | varchar(255) | The last faq-id presented for an issue. | FORGOT\_LOGIN\_faq | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| final\_action\_destination | varchar(255) | The last deep-link URL clicked during the issue resolution process. | asapp-pil://acme-mobile/protection-plan-features | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| is\_first\_intent\_correct | boolean | Indicates whether the initial intent associated with the chat was correct, based on feedback from the agent.
| true, false | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| first\_rep\_id | varchar(191) | The first ASAPP rep/agent identifier found in a window of time. | 123008 | | | 2018-11-07 00:00:00 | 2018-11-07 00:00:00 | no | | | | |
| company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | |

### Table: convos\_metadata

The convos\_metadata table contains data associated with a conversation/issue during a specific 15 minute window. This table will include data from unended conversations. Expect to see columns containing the app\_version, the conversation\_end timestamp and whether it was escalated to chat or not.

**Sync Time:** 1h

**Unique Condition:** issue\_id

**Ordering Precedence:** last\_event\_ts

| Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation.
| ACMEsubcorp | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | first\_utterance\_ts | timestamp | Timestamp of the first customer message in the conversation. | 2018-09-05 19:58:06 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | first\_utterance\_text | varchar(255) | First message content from the customer. | "Hello, please assist me" | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | issue\_created\_ts | timestamp | Timestamp of the "NEW\_ISSUE" event for an issue. | 2018-09-05 19:58:06 | | | 2019-10-15 00:00:00 | 2019-10-15 00:00:00 | no | | | | | | last\_event\_ts | timestamp | The timestamp of the last event for an issue. | 2018-09-05 19:58:06 | | | 2019-09-16 00:00:00 | 2019-09-16 00:00:00 | no | | | | | | last\_srs\_event\_ts | timestamp without time zone | Timestamp of the last bot assisted event. | 2018-09-05 19:58:06 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | conversation\_end\_ts | timestamp | Timestamp when the conversation ended. | 2018-09-05 19:58:06 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | session\_id | varchar(128) | The ASAPP session identifier. It is a uuid generated by the chat backend. Note: a session may contain several conversations. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | session\_type | character varying(255) | ASAPP session type. | asapp-uuid | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | session\_event\_type | character varying(255) | Basic type of the session event. 
| UPDATE, CREATE | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | internal\_session\_id | character varying(255) | Internal identifier for the ASAPP session. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | internal\_session\_type | character varying(255) | An ASAPP session type for internal use. | asapp-uuid | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | internal\_user\_identifier | varchar(255) | The ASAPP customer identifier while using the ASAPP system. This identifier may represent either a rep or a customer. Use the internal\_user\_type field to determine which type the identifier represents. | 123004 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | internal\_user\_session\_type | varchar(255) | The customer ASAPP session type. | customer | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | external\_session\_id | character varying(255) | Client-provided session identifier passed to the SDK during chat initialization. | 062906ff-3821-4b5d-9443-ed4fecbda129 | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_session\_type | character varying(255) | Client-provided session type passed to the SDK during chat initialization. | visitID | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_user\_id | varchar(255) | Customer identifier provided by the client, available if the customer is authenticated. | EECACBD227CCE91BAF5128DFF4FFDBEC | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_user\_type | varchar(255) | The type of external user identifier. | acme\_CUSTOMER\_ACCOUNT\_ID | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_issue\_id | character varying(255) | Client-provided issue identifier passed to the SDK (currently unused). 
| | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | |
| external\_channel | character varying(255) | Client-provided customer channel passed to the SDK (currently unused). | | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | |
| customer\_id | bigint | ASAPP customer id. | 1470001 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| escalated\_to\_chat | bigint | Flag indicating whether the issue was escalated to an agent. | 1 | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | |
| platform | varchar(255) | A value indicating which consumer platform was used. | ios, android, web | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| device\_type | varchar(255) | The last device type used by the customer for an issue. | mobile, tablet, desktop, watch, unknown | | | 2019-06-17 00:00:00 | 2019-06-17 00:00:00 | no | | | | |
| first\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | |
| last\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | |
| external\_agent\_id | varchar(255) | deprecated: 2019-09-25 | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | |
| assigned\_to\_rep\_time | timestamp | Time when the issue was first assigned to a rep, if applicable. | 2018-09-05 19:58:06 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| disposition\_event\_type | varchar(255) | Event type indicating how the conversation ended. | resolved, unresolved, timeout | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| disposition\_ts | timestamp | Timestamp when the rep exited the issue or conversation. | 2018-09-05 19:58:06 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | |
| termination\_event\_type | varchar(255) | Event type indicating the reason for conversation termination.
| customer, agent, autoend | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | disposition\_notes | text | Notes added by the last rep after marking the chat as completed. | "The customer wanted to pay his bill. We successfully processed his payment." | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | ended\_resolved | integer | 1 if the rep marked the conversation resolved, 0 otherwise. | 1, 0 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | ended\_unresolved | integer | 1 if the rep marked the conversation unresolved, 0 otherwise. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | ended\_timeout | integer | 1 if the customer timed out or abandoned chat, 0 otherwise. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | ended\_auto | integer | 1 if the rep did not disposition the issue and it was auto-ended. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | ended\_other | integer | 1 if the customer or rep terminated the issue but the rep didn't disposition the issue. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | app\_version\_asapp | varchar(255) | ASAPP API version used during customer event or request. | com.asapp.api\_api:-2f1a053f70c57f94752e7616b66f56d7bf1d6675 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | app\_version\_client | varchar(255) | ASAPP SDK version used during customer event or request. | web-sdk-4.0.0 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | session\_metadata | character varying(65535) | Additional metadata information about the session, provided by the client. | | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | last\_sequence\_id | integer | Last sequence identifier associated with the issue. | 115 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | issue\_queue\_id | varchar(255) | Queue identifier associated with the issue. 
| 20001 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | issue\_queue\_name | varchar(255) | Queue name associated with the issue. | acme-wireless-english | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | csat\_rating | double precision | Customer Satisfaction (CSAT) rating for the issue. | 400.0 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | sentiment\_valence | character varying(50) | Sentiment of the issue. | Neutral, Negative | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | deep\_link\_queue | character varying(65535) | Deeplink queued for the issue. | | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | end\_srs\_selection | character varying(65535) | User selected button upon end\_srs. | | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | trigger\_link | VARCHAR | deprecated: 2020-04-25 aliases: current\_page\_url | | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | | | auth\_state | varchar(3) | Flag indicating if the user is authenticated. | false, true | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | auth\_external\_token\_id | character varying(65535) | Encrypted user identifier, provided by the client system, associated with the first authentication event for an issue. | 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_source | character varying(65535) | Source of the first authentication event for an issue. | ivr-url | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_external\_user\_type | character varying(65535) | External user type of the first authentication event for an issue. | ACME\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_external\_user\_id | character varying(65535) | User ID provided by the client for the first authentication event. 
| 9BE62CCD564D6982FF305DEBCEAABBB5 | | | 2019-05-15 00:00:00 | 2019-07-16 00:00:00 | no | | | | |
| is\_review\_required | boolean | Flag indicating whether an admin must review this issue. | true, false | | | 2019-07-24 00:00:00 | 2019-07-24 00:00:00 | no | | | | |
| mid\_issue\_auth\_ts | timestamp without time zone | Time when the user authenticates during the middle of an issue. | 2020-01-11 08:13:26.094 | | | 2019-07-24 00:00:00 | 2019-07-24 00:00:00 | no | | | | |
| first\_rep\_id | varchar(191) | ASAPP provided identifier for the first rep involved with the issue. | 60001 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | |
| last\_rep\_id | varchar(191) | ASAPP provided identifier for the last rep involved with the issue. | 60001 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | |
| external\_rep\_id | varchar(255) | Client-provided identifier for the rep. | 0671018510 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | |
| company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | |
| first\_voice\_customer\_state | varchar(255) | Initial state assigned to the customer when using voice. | IDENTIFIED | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | |
| first\_voice\_customer\_state\_ts | timestamp | Time when the customer was first assigned a voice customer state. | 2018-09-05 19:58:06 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | |
| first\_voice\_identified\_customer\_state\_ts | timestamp | Time when the customer was first assigned an IDENTIFIED state. | 2020-01-11 08:13:26.094 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | |
| first\_voice\_verified\_customer\_state\_ts | timestamp | Time when the customer was first assigned a VERIFIED state.
| 2020-01-11 08:13:26.094 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | | | merged\_ts | timestamp | Time when the issue was merged into another issue. data type: timestamp | 2020-01-11 08:13:26.094 | | | 2019-12-28 00:00:00 | 2019-12-28 00:00:00 | no | | | | | | desk\_mode\_flag | bigint | Bitmap encoding whether the agent handled a voice issue in the ASAPP desk and whether there was engagement with the desk. bitmap: 0: null, 1: 'VOICE', 2: 'DESK', 4: 'ENGAGEMENT', 8: 'INACTIVITY'. NULL for non-voice issues. | 0, 1, 2, 5, 7 | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | desk\_mode\_string | varchar(191) | Decodes the desk\_mode flag. Current possible values (Null, 'VOICE', 'VOICE\_DESK', 'VOICE\_DESK\_ENGAGEMENT', 'VOICE\_INACTIVITY'). NULL for non-voice issues. | VOICE\_DESK | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | current\_page\_url | varchar(2000) | URL link (stripped of parameters) that triggered the start chat event. Only applicable for WEB platforms. aliases: trigger\_link | https://www.acme.corp/billing/viewbill | | | 2020-04-24 00:00:00 | 2020-04-24 00:00:00 | no | | | | | | raw\_current\_page\_url | varchar(65535) | Full URL link (including parameters) that triggered the chat event. Only applicable for WEB platforms. aliases: raw\_trigger\_link | | | | 2020-04-25 00:00:00 | 2020-04-25 00:00:00 | no | | | | | | language\_code | VARCHAR(32) | Language code for the issue\_id. | English | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | | | assignment\_complex\_ct | integer | Count of complex chats. | 1 | | | 2026-01-22 00:00:00 | 2026-01-22 00:00:00 | no | | | | | ### Table: convos\_metadata\_ended The convos\_metadata\_ended table contains data associated with a conversation/issue during a specific 15-minute window. Expect to see columns containing the app\_version, the conversation\_end timestamp, and whether it was escalated to chat or not.
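The desk\_mode\_flag bitmap documented above maps onto the desk\_mode\_string labels. A minimal sketch of a decoder, assuming only the bit values listed in the column description (the function name is illustrative, not part of the export):

```python
# Decode the desk_mode_flag bitmap into a desk_mode_string-style label.
# Bit values per the column description:
#   1: 'VOICE', 2: 'DESK', 4: 'ENGAGEMENT', 8: 'INACTIVITY'; 0/NULL for non-voice issues.
DESK_MODE_BITS = [(1, "VOICE"), (2, "DESK"), (4, "ENGAGEMENT"), (8, "INACTIVITY")]

def decode_desk_mode(flag):
    """Return the underscore-joined labels set in `flag`, or None when unset."""
    if not flag:  # covers NULL (None) and 0
        return None
    return "_".join(name for bit, name in DESK_MODE_BITS if flag & bit)

# decode_desk_mode(7) -> "VOICE_DESK_ENGAGEMENT"
```

For example, a flag of 7 (bits 1, 2, and 4) decodes to the documented string value 'VOICE\_DESK\_ENGAGEMENT'.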
This export removes any unended issues and any issues which contained no chat activity. **Sync Time:** 1h **Unique Condition:** issue\_id **Ordering Precedence:** last\_event\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------- | :--------- | :---- | :------------------------------- | :------------------------------- | :----- | :-- | :------------ | :----------- | :------------ | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | first\_utterance\_ts | timestamp | Timestamp of the first customer message in the conversation. | 2019-09-22T13:12:26.073000+00:00 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | first\_utterance\_text | varchar(65535) | First message content from the customer. | "Hello, please assist me" | | | 2019-01-11 00:00:00 | 2022-06-08 00:00:00 | no | | | | | | issue\_created\_ts | timestamp | Timestamp when the "NEW\_ISSUE" event occurred.
| 2019-11-21T19:11:01.748000+00:00 | | | 2019-10-15 13:12:26.073000+00:00 | 2019-10-15 13:12:26.073000+00:00 | no | | | | | | last\_event\_ts | timestamp | Timestamp of the last event in the issue. | 2019-09-23T14:00:09.043000+00:00 | | | 2019-09-16 00:00:00 | 2019-09-16 00:00:00 | no | | | | | | last\_srs\_event\_ts | timestamp without time zone | Timestamp of the last bot assisted event. | 2019-09-22T13:12:26.131000+00:00 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | conversation\_end\_ts | timestamp | Timestamp when the conversation ended. | 2019-10-08T14:00:07.395000+00:00 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | session\_id | varchar(128) | The ASAPP session identifier. It is a uuid generated by the chat backend. Note: a session may contain several conversations. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | session\_type | character varying(255) | ASAPP session type. | asapp-uuid | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | session\_event\_type | character varying(255) | Basic type of the session event. | CREATE, UPDATE, DELETE | | | 2018-11-26 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | internal\_session\_id | character varying(255) | Internal identifier for the ASAPP session. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | internal\_session\_type | character varying(255) | An ASAPP session type for internal use. | asapp-uuid | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | internal\_user\_identifier | varchar(255) | The ASAPP customer identifier while using the ASAPP system. This identifier may represent either a rep or a customer. Use the internal\_user\_session\_type field to determine which type the identifier represents. 
| 123004 | | | 2018-11-26 00:00:00 | 2018-12-06 00:00:00 | no | | | | | | internal\_user\_session\_type | varchar(255) | The customer ASAPP session type. | customer | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | external\_session\_id | character varying(255) | Client-provided session identifier passed to the SDK during chat initialization. | 062906ff-3821-4b5d-9443-ed4fecbda129 | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_session\_type | character varying(255) | Client-provided session type passed to the SDK during chat initialization. | visitID | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_user\_id | varchar(255) | Customer identifier provided by the client, available if the customer is authenticated. | MjU0ZTRiMDQyNDVlNTcyNWNlOTljNmI1NDc2NWQzNzdmNmJmZTFjZDgyY2IwMzc3MDkwZDI5YmQwZDlkODJhNA== | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_user\_type | varchar(255) | The type of external user identifier. | acme\_CUSTOMER\_ACCOUNT\_ID | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_issue\_id | character varying(255) | Client-provided issue identifier passed to the SDK (currently unused). | | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | external\_channel | character varying(255) | Client-provided customer channel passed to the SDK (currently unused). | | | | 2018-11-26 00:00:00 | 2020-10-24 00:00:00 | no | | | | | | customer\_id | bigint | An ASAPP customer identifier. | 1470001 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | escalated\_to\_chat | bigint | 1 if an issue escalated to live chat, 0 if not | 1 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | platform | varchar(255) | The consumer platform in use. | ios, android, web, voice | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | device\_type | varchar(255) | The last device type used by the customer for an issue. 
| mobile, tablet, desktop, watch, unknown | | | 2019-06-17 00:00:00 | 2019-06-17 00:00:00 | no | | | | | | first\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | last\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | external\_agent\_id | varchar(255) | deprecated: 2019-09-25 | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | assigned\_to\_rep\_time | timestamp | Timestamp when the issue was first assigned to a rep, if applicable. | 2018-09-05T16:14:57.289000+00:00 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | disposition\_event\_type | varchar(255) | Event type indicating how the conversation ended. | resolved, unresolved, timeout | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | disposition\_ts | timestamp | Timestamp when the rep exited the issue or conversation. | 2018-09-05T16:14:57.289000+00:00 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | termination\_event\_type | varchar(255) | Event type indicating the reason for conversation termination. | customer, agent, autoend | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | disposition\_notes | text | Notes added by the last rep after marking the chat as completed. | "The customer wanted to pay his bill. We successfully processed his payment." | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | ended\_resolved | integer | Indicator (1 or 0) for whether the rep marked the conversation as resolved. | 1, 0 | | | 2019-04-30 00:00:00 | 2019-05-01 00:00:00 | no | | | | | | ended\_unresolved | integer | Indicator (1 or 0) for whether the rep marked the conversation as unresolved.
| 0, 1 | | | 2019-04-30 00:00:00 | 2019-05-01 00:00:00 | no | | | | | | ended\_timeout | integer | Indicator (1 or 0) for whether the customer abandoned or timed out of the chat. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-04-30 00:00:00 | no | | | | | | ended\_auto | integer | Indicator (1 or 0) for whether the issue was auto-ended without rep disposition. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-05-01 00:00:00 | no | | | | | | ended\_other | integer | Indicator (1 or 0) for whether the customer or rep terminated the issue without rep disposition. | 0, 1 | | | 2019-04-30 00:00:00 | 2019-05-01 00:00:00 | no | | | | | | app\_version\_asapp | varchar(255) | ASAPP API version used during customer event or request. | com.asapp.api\_api:-b393f2d920bb74ce5bbc4174ac5748acff6e8643 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | app\_version\_client | varchar(255) | ASAPP SDK version used during customer event or request. | web-sdk-4.0.2 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | session\_metadata | character varying(65535) | Additional metadata information about the session, provided by the client. | | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | last\_sequence\_id | integer | Last sequence identifier associated with the issue. | 25 | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | issue\_queue\_id | varchar(255) | Queue identifier associated with the issue. | 2001 | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | issue\_queue\_name | varchar(255) | Queue name associated with the issue. | acme-mobile-english | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | csat\_rating | double precision | Customer Satisfaction (CSAT) rating for the issue. | 400.0 | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | sentiment\_valence | character varying(50) | Sentiment of the issue. 
| Neutral, Negative | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | deep\_link\_queue | character varying(65535) | Deeplink queued for the issue. | | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | end\_srs\_selection | character varying(65535) | User selected button option at the end of the session. | | | | 2019-01-11 00:00:00 | 2019-01-11 00:00:00 | no | | | | | | trigger\_link | VARCHAR | deprecated: 2020-04-25 aliases: current\_page\_url | | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | auth\_state | varchar(3) | Flag indicating if the user is authenticated. | 0, 1 | | | 2018-11-26 00:00:00 | 2018-11-26 00:00:00 | no | | | | | | auth\_external\_token\_id | character varying(65535) | A client provided field. Encrypted user ID from client system associated with the first authentication event for an issue. | 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_source | character varying(65535) | The source of the first authentication event for an issue. | ivr-url | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_external\_user\_type | character varying(65535) | An external user type of the first authentication event for an issue. | ACME\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auth\_external\_user\_id | character varying(65535) | External user ID provided by the client for the first authentication event. | 9BE62CCD564D6982FF305DEBCEAABBB5 | | | 2019-05-15 00:00:00 | 2019-07-16 00:00:00 | no | | | | | | is\_review\_required | boolean | Flag indicates whether an admin must review this issue. data type: boolean | true, false | | | 2019-07-24 00:00:00 | 2019-07-24 00:00:00 | no | | | | | | mid\_issue\_auth\_ts | timestamp without time zone | Time when the user authenticates during the middle of an issue. 
| 2020-01-18T03:43:41.414000+00:00 | | | 2019-07-24 00:00:00 | 2019-07-24 00:00:00 | no | | | | | | first\_rep\_id | varchar(191) | Identifier for the first rep involved with the issue. | 60001 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | | | last\_rep\_id | varchar(191) | Identifier for the last rep involved with the issue. | 60001 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | | | external\_rep\_id | varchar(255) | Client-provided identifier for the rep. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | first\_voice\_customer\_state | varchar(255) | Initial state assigned to the customer when using voice. | IDENTIFIED, VERIFIED | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | | | first\_voice\_customer\_state\_ts | timestamp | Timestamp when the customer was first assigned a state. | 2020-01-18T03:43:41.414000+00:00 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | | | first\_voice\_identified\_customer\_state\_ts | timestamp | Time when the customer was first assigned an IDENTIFIED state. | 2020-01-18T03:43:41.414000+00:00 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | | | first\_voice\_verified\_customer\_state\_ts | timestamp | Time when the customer was first assigned a VERIFIED state. | 2020-01-18T03:43:41.414000+00:00 | | | 2019-11-21 00:00:00 | 2019-11-21 00:00:00 | no | | | | | | merged\_ts | timestamp | Time when the issue was merged into another issue. data type: timestamp | 2020-01-18T03:43:41.414000+00:00 | | | 2019-12-28 00:00:00 | 2019-12-28 00:00:00 | no | | | | | | desk\_mode\_flag | bigint | Bitmap encoding whether the agent handled a voice issue in the ASAPP desk and whether there was engagement with the desk.
bitmap: 0: null, 1: 'VOICE', 2: 'DESK', 4: 'ENGAGEMENT', 8: 'INACTIVITY'. NULL for non-voice issues. | 0, 1, 2, 5, 7 | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | desk\_mode\_string | varchar(191) | Decodes the desk\_mode flag. Current possible values (Null, 'VOICE', 'VOICE\_DESK', 'VOICE\_DESK\_ENGAGEMENT', 'VOICE\_INACTIVITY'). NULL for non-voice issues. | VOICE\_DESK | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | current\_page\_url | varchar(2000) | URL link (stripped of parameters) that triggered the start chat event. Only applicable for WEB platforms. aliases: trigger\_link | https://www.acme.corp/billing/viewbill | | | 2020-04-25 00:00:00 | 2020-04-25 00:00:00 | no | | | | | | raw\_current\_page\_url | varchar(65535) | Full URL link (including parameters) that triggered the chat event. Only applicable for WEB platforms. aliases: raw\_trigger\_link | | | | 2020-04-25 00:00:00 | 2020-04-25 00:00:00 | no | | | | | | language\_code | VARCHAR(32) | Language code for the issue\_id. | English | | | 2022-01-04 00:00:00 | 2022-01-04 00:00:00 | no | | | | | | assignment\_complex\_ct | integer | Count of complex chats. | 1 | | | 2026-01-22 00:00:00 | 2026-01-22 00:00:00 | no | | | | | ### Table: convos\_metrics The convos\_metrics table contains counts of various metrics associated with an issue/conversation (e.g. "attempted to chat", "assisted"). The table contains data associated with an issue during a given 15-minute window. The convos\_metrics table will include unended conversation data.
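Some of the timing columns in the convos\_metrics tables are defined in terms of one another: per the column descriptions, total\_wrap\_up\_time is the difference between total\_handle\_time and total\_lead\_time. A minimal sketch of that relationship, using the documented example values for those columns (the function name is illustrative, not part of the export):

```python
# total_wrap_up_time = total_handle_time - total_lead_time, per the column
# descriptions in the convos_metrics table.
def wrap_up_time(row):
    """Compute wrap-up seconds from a convos_metrics-style row (dict)."""
    return round(row["total_handle_time"] - row["total_lead_time"], 3)

# Documented example values: handle 168.093s, lead 163.222s.
example_row = {"total_handle_time": 168.093, "total_lead_time": 163.222}
# wrap_up_time(example_row) -> 4.871, the documented total_wrap_up_time example
```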
**Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | first\_utterance\_ts | timestamp | Time of the first customer message in the conversation. | 2019-05-16T02:47:13+00:00 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-06 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | platform | varchar(255) | The platform which was used by the customer for a particular event or request (web, ios, android, applebiz, voice). | web, ios, android, applebiz, voice | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | device\_type | varchar(255) | Last device type used by the customer for an issue. 
| mobile, tablet, desktop, watch, unknown | | | 2019-06-18 00:00:00 | 2019-06-18 00:00:00 | no | | | | | | assisted | tinyint(1) | Flag indicates whether a rep was assigned and responded to the issue (1 if yes, 0 if no). | 0, 1 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_handle\_time | double | Total time in seconds that reps spent handling the issue, from assignment to disposition. | 168.093 | | | 2019-03-05 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_lead\_time | double | Total time in seconds the customer spent interacting during the conversation, from assignment to last utterance. | 163.222 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_wrap\_up\_time | double | Total time in seconds spent by reps wrapping up the conversation, calculated as the difference between handle and lead time. | 4.871 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_session\_time | double | Total time the customer spent seeking resolution, including time in queue and up until the conversation end event. | 190.87900018692017 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | customer\_sent\_msgs | double | The total number of messages sent by the customer, including typed and tapped messages | 1, 3, 5 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | agent\_sent\_msgs | | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_generated\_msgs | bigint(20) | The number of messages sent by the AI system. | 0,2 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | first\_rep\_response\_count | bigint(20) | The number of first responses by reps, post-assignment. This field will increment if there are transfers and timeouts and then reassigned and a rep answers. This field will NOT increment if a rep is assigned but doesn't get a chance to answer. 
| 0, 1 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_seconds\_to\_first\_rep\_response | bigint(20) | Total time in seconds that passed before the rep responded to the customer. | 407.5679998397827 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | agent\_response\_count | | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | customer\_response\_count | bigint(20) | The total number of responses (excluding messages) sent by the customer. | 0, 4 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_rep\_seconds\_to\_respond | double | Total time in seconds the rep took to respond to the customer. | 407.5679998397827 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_cust\_seconds\_to\_respond | double | Total time in seconds the customer took to respond to the rep. | 65.87400007247925 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | time\_in\_queue | double | The cumulative time in seconds spent in queue, including all re-queues. | 78.30999994277954 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_suggest\_msgs | bigint(20) | The number of autosuggest messages sent by a rep. | 0, 1, 3 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_complete\_msgs | bigint(20) | The number of autocomplete messages sent by a rep. | 0, 1, 3 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_wait\_for\_agent\_msgs | bigint | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | customer\_wait\_for\_agent\_msgs | bigint | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | attempted\_chat | tinyint(1) | TinyInt value indicates if there was an attempt to connect the customer to a rep. A value of 1 if the customer receives an out of business hours message or if a customer was asked to wait for a rep.
Also a value of 1 if customer was escalated to chat. deprecation-date: 2020-04-14 expected-eol-date: 2021-10-15 | 0, 1 | | | 2018-11-06 00:00:00 | 2019-07-26 00:00:00 | no | | | | | | out\_business\_ct | bigint | The number of times that a customer received an out of business hours message. | 0, 2 | | | 2018-11-06 00:00:00 | 2019-04-23 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_sent\_msgs | bigint(20) | The number of messages a rep sent. | 0, 6, 7 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_response\_count | bigint(20) | The count of responses (not messages) sent by the reps. (Note: A FAQ or send-to-flow should count as a response, since from the perspective of the customer they are getting a response of some kind.) | 0, 5, 6 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | auto\_wait\_for\_rep\_msgs | bigint(20) | The number of times a user was asked to wait for a rep. | 0, 1, 2 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | customer\_wait\_for\_rep\_msgs | bigint(20) | The number of times a user asked to speak with a rep. | 0, 1 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | hold\_ct | bigint | The number of times the customer was placed on hold. This applies to VOICE only. | 0, 1, 2 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | total\_hold\_time\_seconds | float | The total amount of time in seconds that the customer was placed on hold. This applies to VOICE only. | 180.4639995098114 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: convos\_metrics\_ended The convos\_metrics\_ended table contains counts of various metrics associated with an issue/conversation (e.g.
"attempted to chat", "assisted"). The table contains data associated with an issue during a given 15 minute window. This table will filter out unended conversations and issues with no activity. **Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------------------------- | :----------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | first\_utterance\_ts | timestamp | Time of the first customer message in the conversation. | 2018-09-05 19:58:06 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | platform | varchar(255) | The platform which was used by the customer for a particular event or request (web, ios, android, applebiz, voice). 
| web, ios, android, applebiz, voice | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | device\_type | varchar(255) | The last device type used by the customer. | mobile, tablet, desktop, watch, unknown | | | 2019-06-18 00:00:00 | 2019-06-18 00:00:00 | no | | | | | | assisted | tinyint(1) | Flag indicates whether a rep was assigned and responded to the issue (1 if yes, 0 if no). | 0,1 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_handle\_time | double | Total time in seconds that reps spent handling the issue, from assignment to disposition. | 718.968 | | | 2019-03-05 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_lead\_time | double | Total time in seconds the customer spent interacting during the conversation, from assignment to last utterance. | 715.627 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_wrap\_up\_time | double | Total time in seconds spent by reps wrapping up the conversation, calculated as the difference between handle and lead time. | 27.583 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | total\_session\_time | double | Total time the customer spent seeking resolution, including time in queue and up until the conversation end event. | 1441.0329999923706 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | customer\_sent\_msgs | double | The total number of messages sent by the customer, including typed and tapped messages | 2, 1 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | agent\_sent\_msgs | | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_generated\_msgs | bigint(20) | The number of messages sent by SRS. | 5, 3 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | first\_rep\_response\_count | bigint(20) | The number of first responses by reps, post-assignment. This field will increment if there are transfers and timeouts and then reassigned and a rep answers. 
This field will NOT increment if a rep is assigned but doesn't get a chance to answer. | 0, 1 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_seconds\_to\_first\_rep\_response | bigint(20) | Total time in seconds that passed before the rep responded to the customer. | 4.291000127792358 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | agent\_response\_count | | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | customer\_response\_count | bigint(20) | The total number of responses (excluding messages) sent by the customer. | 3, 0, 8 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_rep\_seconds\_to\_respond | double | Total time in seconds the rep took to respond to the customer. | 240.28499960899353 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | total\_cust\_seconds\_to\_respond | double | Total time in seconds the customer took to respond to the rep. | 227.27100014686584 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | time\_in\_queue | double | Total time spent by the customer in the queue, including any re-queues. | 71.74499988555908 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_suggest\_msgs | bigint(20) | The number of autosuggest messages sent by rep. | 0, 3, 4 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_complete\_msgs | bigint(20) | The number of autocomplete messages sent by rep. | 0, 1, 2 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | auto\_wait\_for\_agent\_msgs | bigint | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | customer\_wait\_for\_agent\_msgs | bigint | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | attempted\_chat | tinyint(1) | A binary value of 1 indicates if there was an attempt to connect the customer to a rep. 
Also if a customer receives an out of business hours message or if the customer was asked to wait for a rep or was escalated to chat. deprecation-date: 2020-04-14 expected-eol-date: 2021-10-15 | 0, 1 | | | 2018-11-06 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | out\_business\_ct | bigint | The number of times that a customer received an out of business hours message. | 0, 1 | | | 2018-11-06 00:00:00 | 2019-04-23 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_sent\_msgs | bigint(20) | The number of messages a rep sent. | 0, 4, 7 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_response\_count | bigint(20) | The count of responses (not messages) sent by the reps. (Note: A FAQ or send-to-flow should count as a response, since from the perspective of the customer they are getting a response of some kind.) | 0, 1, 20 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | auto\_wait\_for\_rep\_msgs | bigint(20) | The number of times a user was asked to wait for a rep. | 0, 3, 4 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | customer\_wait\_for\_rep\_msgs | bigint(20) | The number of times a user asked to speak with a rep. | 0, 1, 2 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | hold\_ct | bigint | The number of times the customer was placed on hold. This field applies to VOICE. | 0, 1, 2 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | total\_hold\_time\_seconds | float | The total amount of time in seconds that the customer was placed on hold. This field applies to VOICE. | 53.472 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data.
| acme | | | 2019-11-01 00:00:00 | 2019-11-01 00:00:00 | no | | | | | ### Table: convos\_summary\_tags The convos\_summary\_tags table contains information regarding all AI generated auto-summary tags populated by the system when a rep initiates the "end chat" disposition process. **Sync Time:** 1h **Unique Condition:** company\_id, issue\_id, summary\_tag\_presented | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------ | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. 
| ACMEsubcorp | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | queue\_id | integer | The identifier of the group to which the rep (who dispositioned the issue) belongs. | 20001 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | queue\_name | varchar(255) | The name of the group to which the rep (who dispositioned the issue) belongs. | acme-mobile-english | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | disposition\_ts | timestamp | The time at which the rep dispositioned this issue (Exits the screen/frees up a slot). | 2020-01-18T00:21:41.423000+00:00 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | summary\_tag\_presented | character varying(65535) | The name of the auto-summary tag populated by the system when a rep ends an issue. The value is an empty string if no tag was presented to the rep. | '(customer)-(cancel)-(phone)', '(rep)-(add)-(account)' | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | summary\_tag\_selected\_bool | boolean | Boolean field returns true if a rep selects the summary\_tag\_presented. | false, true | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | disposition\_notes | text | Notes that the rep took when dispositioning the chat. Can be generated from free text or the chat summary tags. | 'no response from customer', 'edu cust on activation handling porting requests' | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. 
| acme | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | ### Table: csid\_containment The csid\_containment table tracks and organizes customer interactions by associating them with a unique session identifier (csid), using a 30-minute window. It consolidates data related to customer sessions, including associated issue\_ids, session durations, and indicators of containment success. Containment success measures whether an issue was resolved within a session without escalation. This table is critical for analyzing customer interaction patterns, evaluating the effectiveness of issue resolution processes, and identifying areas for improvement. **Sync Time:** 1h **Unique Condition:** csid, company\_name | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | | | :-------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- | :--------- | :---------- | :------------------ | :------------------ | :------------------ | :------------------ | :------------ | :----------- | :------------ | - | - | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | customer\_id | bigint | The customer identifier on which this session is based, after merge if applicable. | 123008 | | | 2018-11-06 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | | | external\_customer\_id | varchar(255) | The customer identifier as provided by the client. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | csid | varchar(255) | Unique identifier for a continuous period of activity for a given customer, starting at the specified timestamp. | '24790001\_2018-09-24T22:17:41.341' | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | csid\_start\_ts | timestamp without time zone | The start time of the customer's session. | 2019-12-23T16:00:10.072000+00:00 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | csid\_end\_ts | timestamp without time zone | The end time of the active session. | 2019-12-23T16:00:10.072000+00:00 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | agents\_involved | | deprecated: 2019-09-25 | | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | included\_issues | character varying(65535) | Pipe-delimited list of issues involved in this period of customer activity. 
| '2044970001 | 2045000001 | 2045010001' | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | is\_contained | boolean | Flag indicating whether reps were involved with any issues during this csid. | true, false | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | event\_count | bigint | The number of customer (only) events active during this csid. | 21 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | fgsrs\_event\_count | bigint | The number of FGSRS events during this csid. | 5 | | | 2019-08-30 00:00:00 | 2019-08-30 00:00:00 | no | | | | | | | | was\_enqueued | boolean | Flag indicating if enqueued events existed for this session. | true, false | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | rep\_msgs | bigint | Count of text messages sent by reps during this csid. | 6 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | messages\_sent | bigint | Number of text messages typed or quick replies clicked by the customer during this csid. | 4 | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | has\_customer\_utterance | boolean | Flag indicating if the csid contains customer messages. | true, false | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | attempted\_escalate | boolean | A boolean value indicating if the customer or flow tried (or succeeded) to reach a rep. | false, true | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | last\_platform | VARCHAR(191) | The last platform used by the customer. | ANDROID, WEB, IOS | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | | last\_device\_type | VARCHAR(191) | Last device type used by the customer. | mobile, tablet, desktop, watch, unknown | | | 2019-06-18 00:00:00 | 2019-06-18 00:00:00 | no | | | | | | | | first\_auth\_source | character varying(65535) | First source of the authentication event for a csid. 
| ivr-url | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_source | character varying(65535) | Last source of the authentication event for a csid. | ivr-url | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | distinct\_auth\_source\_path | character varying(65535) | Comma-separated list of all distinct authentication event sources for the csid. | ivr-url, facebook | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_user\_type | character varying(65535) | The first external user type of the authentication event for a csid. | client\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_user\_type | character varying(65535) | The last external user type of the authentication event for a csid. | client\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_user\_id | character varying(65535) | Client-provided field for the first external user ID linked to an authentication event. | 64b0959a65a63dec32e1be04fe755be1 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_user\_id | character varying(65535) | Client-provided field for the last external user ID linked to an authentication event. | 64b0959a65a63dec32e1be04fe755be1 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_token\_id | character varying(65535) | A client provided field. The first encrypted user ID from client system associated with an authentication event. | 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_token\_id | character varying(65535) | A client provided field. The last encrypted user ID from client system associated with an authentication event. 
| 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | reps\_involved | varchar(4096) | Pipe-delimited list of reps associated with any issues during this session. | '209000 | 2020001' | | | 2018-11-06 00:00:00 | 2018-11-06 00:00:00 | no | | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | | ### Table: csid\_containment\_1d The csid\_containment\_1d table tracks and organizes customer interactions by associating them with a unique session identifier (csid), using a 24-hour window. It consolidates data related to customer sessions, including associated issue\_ids, session durations, and indicators of containment success. Containment success measures whether an issue was resolved within a session without escalation. This table is critical for analyzing customer interaction patterns, evaluating the effectiveness of issue resolution processes, and identifying areas for improvement. **Sync Time:** 1h **Unique Condition:** csid, company\_name | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | | | :-------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- | :--------- | :---------- | :------------------ | :------------------ | :------------------ | :------------------ | :------------ | :----------- | :------------ | - | - | | instance\_ts | timestamp | The time window of computed elements. 
This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | customer\_id | bigint | The customer identifier on which this session is based, after merge if applicable. | 123008 | | | 2018-01-15 00:00:00 | 2018-11-07 00:00:00 | no | | | | | | | | external\_customer\_id | varchar(255) | The customer identifier as provided by the client. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | csid | varchar(255) | Unique identifier for a continuous period of activity for a given customer, starting at the specified timestamp. | '24790001\_2018-09-24T22:17:41.341' | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | csid\_start\_ts | timestamp without time zone | The start time of the customer's session. | 2019-12-23T16:00:10.072000+00:00 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | csid\_end\_ts | timestamp without time zone | The end time of the active session. 
| 2019-12-23T16:00:10.072000+00:00 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | agents\_involved | | deprecated: 2019-09-25 | | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | included\_issues | character varying(65535) | Pipe-delimited list of issues involved in this period of customer activity. | '2044970001 | 2045000001 | 2045010001' | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | is\_contained | boolean | Flag indicating whether reps were involved with any issues during this csid. | true, false | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | event\_count | bigint | The number of customer (only) events active during this csid. | 21 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | fgsrs\_event\_count | bigint | The number of FGSRS events during this csid. | 5 | | | 2019-08-30 00:00:00 | 2019-08-30 00:00:00 | no | | | | | | | | was\_enqueued | boolean | Flag indicating if enqueued events existed for this session. | true, false | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | rep\_msgs | bigint | Count of text messages sent by reps during this csid. | 6 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | messages\_sent | bigint | Number of text messages typed or quick replies clicked by the customer during this csid. | 4 | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | has\_customer\_utterance | boolean | Flag indicating if the csid contains customer messages. | true, false | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | attempted\_escalate | boolean | A boolean value indicating if the customer or flow tried (or succeeded) to reach a rep. | false, true | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | last\_platform | VARCHAR(191) | The last platform used by the customer. 
| ANDROID, WEB, IOS | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | | last\_device\_type | VARCHAR(191) | Last device type used by the customer | mobile, tablet, desktop, watch, unknown | | | 2019-06-18 00:00:00 | 2019-06-18 00:00:00 | no | | | | | | | | first\_auth\_source | character varying(65535) | First source of the authentication event for a csid. | ivr-url | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_source | character varying(65535) | Last source of the authentication event for a csid. | ivr-url | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | distinct\_auth\_source\_path | character varying(65535) | Comma-separated list of all distinct authentication event sources for the csid. | ivr-url, facebook | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_user\_type | character varying(65535) | The first external user type of the authentication event for a csid. | client\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_user\_type | character varying(65535) | The last external user type of the authentication event for a csid. | client\_CUSTOMER\_ACCOUNT\_ID | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_user\_id | character varying(65535) | Client-provided field for the first external user ID linked to an authentication event. | 64b0959a65a63dec32e1be04fe755be1 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_user\_id | character varying(65535) | Client-provided field for the last external user ID linked to an authentication event. | 64b0959a65a63dec32e1be04fe755be1 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | first\_auth\_external\_token\_id | character varying(65535) | A client provided field. 
The first encrypted user ID from client system associated with an authentication event. | 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | last\_auth\_external\_token\_id | character varying(65535) | A client provided field. The last encrypted user ID from client system associated with an authentication event. | 82EFDDADC5466501443E3E61ED640162 | | | 2019-05-15 00:00:00 | 2019-05-15 00:00:00 | no | | | | | | | | reps\_involved | varchar(4096) | Pipe-delimited list of reps associated with any issues during this session. | '209000 | 2020001' | | | 2018-01-15 00:00:00 | 2018-01-15 00:00:00 | no | | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | | ### Table: customer\_feedback The customer\_feedback table contains customers' feedback regarding how well their issue was resolved. This table contains columns such as the feedback question prompted at issue completion, the customer's response, and the last rep identifier associated with an issue\_id. 
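As a sketch of how rows from this table are typically consumed, the snippet below averages numeric rating answers per last\_rep\_id. The sample rows and the four-column subset (issue\_id, last\_rep\_id, question\_category, answer) are hypothetical illustrations, not the table's full schema.

```python
from collections import defaultdict

# Hypothetical customer_feedback rows, restricted to four columns:
# (issue_id, last_rep_id, question_category, answer)
rows = [
    (21352352, "123008", "rating", "4"),
    (21352353, "123008", "rating", "5"),
    (21352354, "123009", "rating", "3"),
    (21352355, "123009", "comment", "great service"),  # free text, not a rating
]

def avg_rating_by_rep(rows):
    """Average the numeric answers to 'rating' questions per last_rep_id."""
    ratings = defaultdict(list)
    for issue_id, rep_id, category, answer in rows:
        if category != "rating":
            continue
        try:
            ratings[rep_id].append(float(answer))
        except ValueError:
            continue  # answers can be free text for some question types
    return {rep: sum(v) / len(v) for rep, v in ratings.items()}

print(avg_rating_by_rep(rows))  # {'123008': 4.5, '123009': 3.0}
```

Non-numeric answers are skipped defensively, mirroring the mix of question\_category values (rating, comment, levelOfEffort) documented for this table.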
**Sync Time:** 1d **Unique Condition:** issue\_id, company\_marker, last\_rep\_id, question, instance\_ts **Ordering Precedence:** last\_event\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | last\_agent\_id | varchar(191) | deprecated: 2019-09-25 | 123008 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | question | character varying(65535) | Question presented to the user. | VOC Score, endSRS rating, What did the agent do well, or what could the agent have done better? 
(1000 character limit) | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | question\_category | character varying(65535) | The question category type. | rating, comment, levelOfEffort | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | question\_type | character varying(65535) | The type of question. | rating, scale, radio | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | answer | character varying(65535) | The customer's answer to the question. | 0, 1, 17, yes | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | ordering | integer | The sequence or order of the question. | 0, 1, 3, 5 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | last\_rep\_id | varchar(191) | The last ASAPP rep/agent identifier found in a window of time. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2021-09-10 00:00:00 | 2021-09-10 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2021-09-10 00:00:00 | 2021-09-10 00:00:00 | no | | | | | | platform | varchar(255) | The platform which was used by the customer for a particular event or request (web, ios, android, applebiz, voice). | web, ios, android, applebiz, voice | | | 2021-09-10 00:00:00 | 2021-09-10 00:00:00 | no | | | | | | feedback\_type | character varying(65535) | The classification of feedback provided by the customer. 
| FEEDBACK\_AGENT, etc | | | 2021-09-10 00:00:00 | 2021-09-10 00:00:00 | no | | | | | | feedback\_form\_type | character varying(65535) | Indicates the type of feedback form completed by the customer. | ASAPP\_CSAT, GBM | | | 2021-09-10 00:00:00 | 2021-09-10 00:00:00 | no | | | | | ### Table: customer\_params The customer\_params table contains information which the client sends to ASAPP. The table may have multiple rows associated with one issue\_id. Clients specify the information to store using a JSON entry which may contain multiple semicolon separated (key, value) pairs. **Sync Time:** 1d **Unique Condition:** event\_id, param\_key **Ordering Precedence:** event\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | event\_ts | timestamp | The time at which this event was fired. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | The subdivision of the company. | ACMEsubcorp | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_segments | varchar(255) | The segments of the company. | marketing,promotions | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | rep\_id | varchar(191) | deprecated: 2022-06-30 | 123008 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | referring\_page\_url | character varying(65535) | The URL of the page the user navigated from. | [https://www.acme.com/wireless](https://www.acme.com/wireless) | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | event\_id | character varying(256) | A unique identifier for the event within the customer parameter payload. | | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | platform | varchar(255) | The platform the customer is using to interact with ASAPP. | 08679ded-38b7-11ea-9c44-debfe2011fef | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | session\_id | varchar(128) | The websocket UUID associated with the current request's session. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | auth\_state | boolean | Flag indicating if the user is authenticated. | true, false | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | params | character varying(65535) | A string representation of the JSON parameters. 
| `{"Key1":"Value1"; "Key2":"Value2"}` | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | param\_key | character varying(255) | A value of a specific key within the parameter JSON. | Key1 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | param\_value | character varying(65535) | The value corresponding with the specific key in param\_key. | Value1 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | current\_page\_url | varchar(2000) | The URL of the page where the customer initiated the ASAPP chat. | [https://www.asapp.com](https://www.asapp.com) | | | 2021-09-16 00:00:00 | 2021-09-16 00:00:00 | no | | | | | ### Table: customer\_params\_hourly The customer\_params table contains information which the client sends to ASAPP. The table may have multiple rows associated with one issue\_id. Clients specify the information to store using a JSON entry which may contain multiple semicolon separated (key, value) pairs. **Sync Time:** 1h **Unique Condition:** event\_id, param\_key | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. 
This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | event\_ts | timestamp | The time at which this event was fired. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | The subdivision of the company. | ACMEsubcorp | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_segments | varchar(255) | The segments of the company. | marketing,promotions | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | rep\_id | varchar(191) | deprecated: 2022-06-30 | 123008 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | referring\_page\_url | character varying(65535) | The URL of the page the user navigated from. | [https://www.acme.com/wireless](https://www.acme.com/wireless) | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | event\_id | character varying(256) | A unique identifier for the event within the customer parameter payload. | | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | platform | varchar(255) | The platform the customer is using to interact with ASAPP. 
| 08679ded-38b7-11ea-9c44-debfe2011fef | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | session\_id | varchar(128) | The websocket UUID associated with the current request's session. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | auth\_state | boolean | Flag indicating if the user is authenticated. | true, false | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | params | character varying(65535) | A string representation of the JSON parameters. | `{"Key1":"Value1"; "Key2":"Value2"}` | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | param\_key | character varying(255) | A value of a specific key within the parameter JSON. | Key1 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | param\_value | character varying(65535) | The value corresponding with the specific key in param\_key. | Value1 | | | 2019-01-25 00:00:00 | 2019-01-25 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | current\_page\_url | varchar(2000) | The URL of the page where the customer initiated the ASAPP chat. | [https://www.asapp.com](https://www.asapp.com) | | | 2021-09-16 00:00:00 | 2021-09-16 00:00:00 | no | | | | | ### Table: dim\_queues The dim\_queues table creates a mapping of queue\_id to queue\_name. This is an hourly snapshot of information. 
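Since this table exists to map queue\_id to queue\_name, a minimal sketch of the lookup it supports (using hypothetical rows and a four-column subset) is:

```python
# Hypothetical dim_queues rows: (queue_key, queue_id, queue_name, company_name)
rows = [
    (100001, 210001, "Voice", "acme"),
    (100002, 210002, "acme-mobile-english", "acme"),
]

# Build the queue_id -> queue_name lookup this table is designed for.
queue_names = {queue_id: name for _, queue_id, name, _ in rows}

print(queue_names[210001])  # Voice
```

Because the table is an hourly snapshot keyed by queue\_key, rebuild the lookup from the latest snapshot rather than caching it indefinitely.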
**Sync Time:** 1h **Unique Condition:** queue\_key | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------ | :----------- | :----------------------------------------------------- | :------ | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2022-01-27 00:00:00 | 2022-01-27 00:00:00 | no | | | | | | queue\_key | bigint | Numeric primary key for dim queues | 100001 | | | 2022-01-27 00:00:00 | 2022-01-27 00:00:00 | no | | | | | | queue\_id | integer | The ASAPP queue identifier which the issue was placed. | 210001 | | | 2022-01-27 00:00:00 | 2022-01-27 00:00:00 | no | | | | | | queue\_name | varchar(255) | Name of the queue. | Voice | | | 2022-01-27 00:00:00 | 2022-01-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2025-01-27 00:00:00 | 2025-01-27 00:00:00 | no | | | | | ### Table: flow\_completions The purpose of this table is to list the flow success information, any negation data, and other associated metadata for all issues. This table provides insights into the success or failure of any issue. Flow Success refers to the successful completion of a predefined process or interaction flow without interruptions, errors, or escalations, as determined by specific business logic. 
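The flow-success semantics described above can be sketched as follows, assuming the simplified rule implied by the column definitions: an issue counts as a flow success when a success event occurred and no negation event occurred within the same issue. The function names are illustrative, not part of the export.

```python
from datetime import datetime
from typing import Optional

def is_flow_success_issue(success_event_ts: Optional[datetime],
                          negation_event_ts: Optional[datetime]) -> bool:
    """An issue is a flow success when a success event occurred and no
    negation event occurred within the same issue."""
    return success_event_ts is not None and negation_event_ts is None

def ordering_key(success_event_ts: Optional[datetime],
                 negation_event_ts: Optional[datetime]) -> Optional[datetime]:
    """Mirrors a COALESCE(success_event_ts, negation_event_ts) precedence:
    prefer the success timestamp, fall back to the negation timestamp."""
    return success_event_ts if success_event_ts is not None else negation_event_ts
```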
**Sync Time:** 1h **Unique Condition:** issue\_id, flow\_name, flow\_status\_ts, success\_event\_details **Ordering Precedence:** COALESCE(success\_event\_ts, negation\_event\_ts) | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------ | :-------------------------- | :-------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-14 00:00:00 | 2019-09-12 00:00:00 | no | no | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | platform | varchar(255) | The customer's platform. | web, ios, android, applebiz, voice | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | external\_user\_id | varchar(255) | Client-provided identifier for customer, Available if the customer is authenticated. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | customer\_session\_id | character varying(65535) | The ASAPP application session identifier for this customer. 
| c5d7afcc-89b9-43cc-90e2-b869bb2be883 | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | success\_rule\_id | character varying(256) | The tag denoting whether the flow was successful within this issue. | LINK\_RESOLVED, TOOLING\_SUCCESS | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | success\_event\_details | character varying(65535) | Any additional metadata about this success rule. | asapp-pil://acme/grande-shop, EndSRSPositiveMessage | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | success\_event\_ts | timestamp without time zone | The time at which the flow success occurred. | 2019-12-03T01:43:17.079000+00:00 | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | negation\_rule\_id | character varying(256) | The tag denoting the last negation event that reverted a previous success. | TOOLING\_NEGATION, NEG\_QUESTION\_NOT\_ANSWERED | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | negation\_event\_ts | timestamp without time zone | The time at which this negation occurred. | 2019-12-03T01:49:19.875000+00:00 | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | is\_flow\_success\_event | boolean | True if this event was not negated directly, false otherwise. | true, false | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | is\_flow\_success\_issue | boolean | True if a success event occurred within this issue and no negation event occurred within this issue, false otherwise. | true, false | | | 2018-11-14 00:00:00 | 2018-11-14 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. 
| acme | | | 2019-11-01 00:00:00 | 2019-11-01 00:00:00 | no | | | | | | last\_relevant\_event\_ts | timestamp | Timestamp of the most recent relevant event (success or negation) detected for this issue, useful for deduplication. | 2020-01-02T19:13:27.698000+00:00 | | | 2019-12-10 00:00:00 | 2019-12-10 00:00:00 | no | | | | | ### Table: flow\_detail The purpose of the flow\_detail table is to list the data associated with each node traversed during an issue's lifespan. One use of this table is to trace the path a particular issue took through a flow, node by node. **Sync Time:** 1h **Unique Condition:** event\_ts, issue\_id, event\_type | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------ | :----------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | event\_ts | timestamp | The time of a given event. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | event\_type | varchar(191) | The type of event within a given flow.
| MESSAGE\_DISPLAYED | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | no | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-08-14 00:00:00 | 2018-08-27 00:00:00 | no | no | | | | | session\_id | varchar(128) | The ASAPP session identifier. It is a uuid generated by the chat backend. Note: a session may contain several conversations. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | flow\_id | varchar(255) | An ASAPP identifier assigned to a particular flow executed during a customer event or request. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | flow\_name | varchar(255) | The ASAPP text name for a given flow which was executed during a customer event or request. | FirstChatMessage, AccountNumberFlow | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | event\_name | character varying(65535) | The event name within a given flow. | FirstChatMessage, SuccessfulPaymentNode | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | no | | | | | link\_resolved\_pil | character varying(65535) | An ASAPP internal URI for the link. | asapp-pil://acme/bill-history | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | no | | | | | link\_resolved\_pdl | character varying(65535) | The resolved host deep link or web link. | [https://www.acme.com/BillHistory](https://www.acme.com/BillHistory) | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | no | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | flow\_type | character varying(256) | Type of the flow. Flow types include the following values: FLOW\_TYPE\_PRE\_ASAPP\_CHAT -- A flow to be completed before chatting with SRS. 
FLOW\_TYPE\_PRE\_ASAPP\_LIVE\_CHAT -- A flow to be completed before chatting with a live agent. FLOW\_TYPE\_ASAPP\_LIVE\_CHAT -- A flow that signifies a user is escalating or has escalated to an agent. FLOW\_TYPE\_BUSINESS -- An SRS flow intended to solve the customer's issue. FLOW\_TYPE\_UTILITY -- A background flow used for routing or gathering additional information from the customer. | FLOW\_TYPE\_UTILITY | | | 2025-05-06 00:00:00 | 2025-05-06 00:00:00 | no | no | | | | ### Table: intents The intents table contains a list of intent codes and other information associated with each code, including flow\_name and short\_description. **Sync Time:** 1d **Unique Condition:** code | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :---------------------- | :---------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | code | character varying(128) | The ASAPP internal code for a given intent. | ACCTNUM | | | 2018-07-26 00:00:00 | 2018-07-26 00:00:00 | no | no | | | | | name | character varying(256) | The user-friendly name associated with an intent. | Get account number | | | 2018-07-26 00:00:00 | 2018-07-26 00:00:00 | no | no | | | | | intent\_type | character varying(128) | The hierarchical classification of this intent. | SYSTEM, LEAF, PARENT | | | 2018-07-26 00:00:00 | 2021-11-24 00:00:00 | no | no | | | | | short\_description | character varying(1024) | A short description for the intent code. | 'Users asking to get their account number.', 'Television error codes.'
| | | 2018-07-26 00:00:00 | 2019-02-12 00:00:00 | no | no | | | | | flow\_name | varchar(255) | The ASAPP flow code attached to this intent code. | AccountNumberFlow | | | 2018-12-13 00:00:00 | 2018-12-13 00:00:00 | no | no | | | | | default\_disambiguation | boolean | True if the intents are part of the first "welcome" screen of disambiguation buttons presented to a customer, false otherwise. | false, true | | | 2018-12-13 00:00:00 | 2018-12-13 00:00:00 | no | no | | | | | actions | character varying(4096) | Describes the type of action for the customer interface (e.g., "flow" for forms, "link" for URLs, or "text" for help content). An empty value indicates no specific action or automation. | flow, link, test, NULL | | | 2018-12-20 00:00:00 | 2018-12-20 00:00:00 | no | no | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2021-04-09 00:00:00 | no | | | | | | deleted\_ts | timestamp | The date when this intent was removed. If blank or null, the intent is still active as of the export. An intent can be "undeleted" at a later date. | NULL, 2018-12-13 01:23:34 | | | 2021-11-23 00:00:00 | 2021-11-23 00:00:00 | no | no | | | | ### Table: issue\_callback\_3d The issue\_callback table relates issues from the same customer during a three day window. This table will help measure customer callback rate, the rate at which the same customer recontacts within a three day period. The issue\_callback table is applicable only to specific clients. 
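The three-day recontact window described above can be sketched as follows: the cutoff is three days after an issue disconnects, and the next callback issue is the first issue by the same customer created inside that window. This is a simplified illustration with hypothetical names, not the export's computation.

```python
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

CALLBACK_WINDOW = timedelta(days=3)  # issue_cutoff_ts = disconnect + 3 days

def next_callback_issue(
    disconnect_ts: datetime,
    later_issues: List[Tuple[int, datetime]],  # (issue_id, created_ts) pairs
) -> Tuple[Optional[int], Optional[datetime]]:
    """Return the id and created_ts of the first issue by the same customer
    created after disconnect_ts but on or before the 3-day cutoff."""
    cutoff = disconnect_ts + CALLBACK_WINDOW
    for issue_id, created_ts in sorted(later_issues, key=lambda p: p[1]):
        if disconnect_ts < created_ts <= cutoff:
            return issue_id, created_ts
    return None, None  # no callback within the window
```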
**Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :----------------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. 
| 123008 | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | issue\_created\_ts | timestamp | Timestamp when the issue ID is created. | 2018-09-05 19:58:06 | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | issue\_disconnect\_ts | timestamp without time zone | Timestamp when the issue ID is Disconnected. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | issue\_cutoff\_ts | timestamp without time zone | The timestamp when the callback period expires for an issue. This is calculated as 3 days after the issue\_disconnect\_ts. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | next\_callback\_issue\_id | bigint | The ID of the next issue created by the same customer. This must occur between issue\_disconnect\_ts and issue\_cutoff\_ts. Null if no such issue exists. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | next\_callback\_issue\_created\_ts | timestamp without time zone | Time when the next\_callback\_issue was created. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | time\_btwn\_next\_callback\_issue\_seconds | double precision | The duration in seconds between issue\_disconnect\_ts and next\_callback\_issue\_created\_ts | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | callback\_prev\_issue\_id | bigint | The ID of any previous issue created by the same customer, provided it was disconnected within 3 days of the current issue's create\_ts. Null if no such issue exists. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | callback\_prev\_issue\_created\_ts | timestamp without time zone | The timestamp when the callback\_prev\_issue was created. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | callback\_prev\_issue\_disconnect\_ts | timestamp without time zone | The timestamp when the callback\_prev\_issue was disconnected. 
| | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | time\_btwn\_callback\_prev\_issue\_seconds | double precision | The duration in seconds between callback\_prev\_issue\_disconnect\_ts and issue\_created\_ts. | | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-14 00:00:00 | 2019-11-14 00:00:00 | no | | | | | ### Table: issue\_entity\_genagent An hourly snapshot of issue-grain generative\_agent data, including both dimensions and metrics aggregated over "all time" (two days in practice). **Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :---------------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------------- | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30).
As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_turns\_\_turn\_ct | int | Number of turns ( one cycle of interaction between GenerativeAgent and a user) | 1 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_turns\_\_turn\_duration\_ms\_sum | bigint | Total duration in milliseconds between PROCESSING\_START and PROCESSING\_END across all turns. | 2 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_turns\_\_utterance\_ct | int | Number of generative\_agent utterances. | 2 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_turns\_\_contains\_escalation | boolean | Indicates if any turn in the conversation resulted in an escalation to a human agent. | 1 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_first\_task\_name | varchar(255) | Name of the first task initiated by the generative agent. | SomethingElse | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_last\_task\_name | varchar(255) | Name of the last task initiated by the generative agent. | SomethingElse | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_task\_ct | int | Number of tasks entered by generative\_agent. | 2 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_configuration\_id | varchar(255) | The configuration version responsible for the actions of the generative agent. | 4ea5b399-f969-49c6-8318-e2c39a98e817 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_used\_hila | boolean | Boolean representing if the conversation used a HILA escalation. 
True doesn't guarantee that there was a HILA response in the conversation. | TRUE | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | genagent\_tasks | | generative\_agent\_monitoring\_\_flagged\_for\_review | boolean | Boolean representing if the conversation has at least one suggested review flag. | TRUE | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | genagent\_monitoring | | generative\_agent\_monitoring\_\_review\_flags\_ct | bigint | Number of review flags. | 2 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | genagent\_monitoring | | generative\_agent\_monitoring\_\_evaluation\_ct | bigint | Number of evaluations. | 10 | | | 2024-11-08 00:00:00 | 2024-11-08 00:00:00 | no | | | | genagent\_monitoring | ### Table: issue\_entry This table describes, at the issue grain, how a user began an interaction with the SDK. **Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45.
All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | issue\_created\_ts | timestamp | Timestamp of the "NEW\_ISSUE" event for an issue. | 2018-09-05 19:58:06 | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | company\_id | bigint | The ASAPP identifier of the company or test data source. | 10001 | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | entry\_type | character varying(384) | Initiation source of the first activity for the issue ID, such as a proactive invitation, reactive button click, or deep-link ask-secondary-question. Examples: PROACTIVE, REACTIVE, ASK, DEEPLINK. | | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | treatment\_type | varchar(64) | Indicates whether proactive messaging is configured to route the customer to an automated flow or a live agent. | QUEUE\_PAUSED | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | rule\_name | character varying(65535) | Name of the logical set of criteria met by the customer to trigger a proactive invitation or reactive button display. | | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | is\_new\_conversation | boolean | Indicates whether the issue was created as a new conversation when the customer was not engaged in any ongoing or active issue.
| | | | 2019-11-15 00:00:00 | 2019-11-15 00:00:00 | no | | | | | | is\_new\_user | boolean | Indicates if this is the first issue from the customer. | | | | 2019-11-15 00:00:00 | 2019-11-15 00:00:00 | no | | | | | | current\_page\_url | varchar(2000) | The URL of the page where the SDK was displayed. | [https://www.asapp.com](https://www.asapp.com) | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | referring\_page\_url | character varying(65535) | The URL of the page that directed the user to the current page. | | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | | client\_uuid | character varying(36) | A short-lived UUID (lasting roughly fifteen minutes) generated on each fresh SDK cache that can identify a unique person. Used for internal debugging; it does not go through sync transformations (it appears exactly as it comes from the source). | c3944019-24d3-4887-8794-045cd61d5a22 | | | 2024-07-01 00:00:00 | 2021-06-01 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2024-07-01 00:00:00 | 2024-07-01 00:00:00 | no | | | | | ### Table: issue\_omnichannel This table captures omnichannel tracking events related to the different platforms we support.
(Initially only ABC) **Sync Time:** 1h **Unique Condition:** issue\_id, third\_party\_customer\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2020-06-02 00:00:00 | 2020-06-02 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2020-06-02 00:00:00 | 2020-06-02 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2020-06-02 00:00:00 | 2020-06-02 00:00:00 | no | | | | | | omni\_source | character varying(191) | The source of the information. | 'ABC' | | | 2020-06-03 00:00:00 | 2020-06-03 00:00:00 | no | | | | | | opaque\_id | varchar(191) | deprecated: 2020-09-11 | 'urn:mbid:XXXXXX' | | | 2020-06-03 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | external\_intent | character varying(65535) | The intention or purpose of the chat as specified by the business, such as account\_question. 
deprecated: 2020-09-11 | 'account\_question' | | | 2020-06-03 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | external\_group | character varying(65535) | Group identifier for the message, as specified by the business, such as department name. deprecated: 2020-09-11 | 'credit\_card\_department' | | | 2020-06-03 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | first\_utterance | character varying(191) | Captures the text of the first customer statement in an issue. | | | | 2020-06-03 00:00:00 | 2020-06-03 00:00:00 | no | | | | | | event\_ts | timestamp | deprecated: 2020-09-11 | 2019-11-08 14:00:06.957000+00:00 | | | 2020-06-02 00:00:00 | 2020-06-02 00:00:00 | no | | | | | | third\_party\_customer\_id | character varying(65535) | An encrypted identifier which is permanently mapped to an ASAPP customer. | 'urn:mbid:XXXXXX' | | | 2020-07-23 00:00:00 | 2020-07-23 00:00:00 | no | | | | | | external\_context\_1 | character varying(65535) | Provides traffic source or customer context from external platforms, including Apple Business Chat Group ID and Google Business Messaging Entry Point. | 'credit\_card\_department' | | | 2020-07-23 00:00:00 | 2020-07-23 00:00:00 | no | | | | | | external\_context\_2 | character varying(65535) | Provides additional traffic source or customer context from external platforms, including Apple Business Chat Intent ID and Google Business Messaging Place ID. | 'account\_question' | | | 2020-07-23 00:00:00 | 2020-07-23 00:00:00 | no | | | | | | created\_ts | timestamp | Timestamp at which the message was sent. | '2019-11-08T14:00:06.957000+00:00' | | | 2020-07-23 00:00:00 | 2020-07-23 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2025-01-09 00:00:00 | 2025-01-09 00:00:00 | no | | | | | ### Table: issue\_queues The purpose of the issue\_queues table is to capture relevant data associated with an issue in a wait queue.
Data captured includes the issue\_id, the enqueue time, the rep, the event type and flowname. This is captured in 15 minute windows of time. **Sync Time:** 1h **Unique Condition:** issue\_id, queue\_id, enter\_queue\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. 
| marketing,promotions | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | enter\_queue\_ts | timestamp without time zone | Timestamp when the issue was added to the queue. | 2019-12-26T18:25:22.836000+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | exit\_queue\_ts | timestamp | Timestamp when the issue was removed from the queue. | 2019-12-26T18:25:28.552000+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | queue\_id | integer | ASAPP queue identifier which the issue was placed. | 20001 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | queue\_name | varchar(255) | Queue name which the issue was placed. | Acme Residential, Acme Wireless | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | abandoned | boolean | Flag indicating whether the issue was abandoned. | true, false | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | enqueue\_time | double precision | Duration in seconds that the issue spent in the queue. | 5.716000080108643 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | exit\_queue\_eventtype | character varying(65535) | Reason the customer exited the queue. | CUSTOMER\_TIMEDOUT, NEW\_REP | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | enter\_queue\_eventtype | character varying(65535) | Reason the customer entered the queue. | TRANSFER\_REQUESTED, SRS\_HIER\_AND\_TREEWALK | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | enter\_queue\_eventflags | bigint | Event causing the issue to be enqueued. | (1=customer, 2=rep, 4=bot) | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | enter\_queue\_flow\_name | character varying(65535) | Name of the flow which the issue was in before being enqueued. 
| LiveChatAgentsBusyFlow | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | enter\_queue\_message\_name | character varying(65535) | Message name within the flow the user was in before being enqueued. | someoneWillBeWithYou, shortWaitFormNode | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | exit\_queue\_eventflags | bigint | Event causing the issue to be dequeued. | (1=customer, 2=rep, 4=bot) | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: issue\_sentiment The issue\_sentiment table captures sentiment analysis information related to customer issues. Each row represents an issue and its associated sentiment score or classification. This table helps track customer sentiment trends, assess the emotional tone of interactions, and support decision-making for issue resolution strategies. **Sync Time:** 1d **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. 
This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-07-26 00:00:00 | 2018-09-29 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-07-26 00:00:00 | 2018-07-26 00:00:00 | no | | | | | | score | double precision | The sentiment score applied to this issue. | 0.5545974373817444, -1000.0 | | | 2018-07-26 00:00:00 | 2018-07-26 00:00:00 | no | | | | | | status | character varying(65535) | Reason for the sentiment score, which may be NULL | CONVERSATION\_TOO\_SHORT | | | 2018-07-26 00:00:00 | 2018-07-26 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: issue\_session\_merge A list of the merged issues that have occurred as a result of transferring to a queue during a cold transfer and the first issue\_id associated with this new issue\_id. Only relevant for VOICE. 
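To make the relationship between the session-merge columns concrete, here is a minimal sketch of how the first and last issue of a merged voice session could be derived. The rows and values below are invented for illustration only; they are not real export data, and the helper logic is an assumption rather than ASAPP's actual pipeline.

```python
# Hypothetical issue_session_merge-style rows: each maps an issue to its session.
rows = [
    {"session_id": "guid:abc", "issue_id": 101, "issue_created_ts": "2024-01-17T10:00:00"},
    {"session_id": "guid:abc", "issue_id": 205, "issue_created_ts": "2024-01-17T10:05:00"},
    {"session_id": "guid:abc", "issue_id": 309, "issue_created_ts": "2024-01-17T10:12:00"},
]

# Within a session, the earliest issue corresponds to first_issue_id
# and the latest to last_issue_id, as exported in the table below.
ordered = sorted(rows, key=lambda r: r["issue_created_ts"])
first_issue_id = ordered[0]["issue_id"]
last_issue_id = ordered[-1]["issue_id"]
print(first_issue_id, last_issue_id)  # 101 309
```

In the real export, every row for the session carries the same first\_issue\_id and last\_issue\_id, so a single row is enough to recover the chain endpoints.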
activate-date: 2024-01-17 **Sync Time:** 1h **Unique Condition:** issue\_id, session\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------ | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | | | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | session\_id | varchar(128) | The ASAPP session identifier. It is a uuid generated by the chat backend. Note: a session may contain several conversations. 
| 'guid:2348001002-0032128785-2172846080-0001197432' | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | issue\_created\_ts | timestamp | Timestamp this issue\_id was created. | 2018-09-05 19:58:06 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | first\_issue\_id | bigint | The first issue\_id for this session. | 21352352 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | first\_issue\_created\_ts | timestamp | Timestamp when the NEW\_ISSUE event occurred for the first issue\_id associated with this session. | 2018-09-05 19:58:06 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | last\_issue\_id | bigint | The last issue\_id associated with this session. | 21352352 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | | last\_issue\_created\_ts | timestamp | Timestamp when the NEW\_ISSUE event occurred for the last issue\_id associated with this session | 2018-09-05 19:58:06 | | | 2020-02-05 00:00:00 | 2020-02-05 00:00:00 | no | | | | | ### Table: issue\_type The purpose of the issue\_type table is to capture any client specific naming of issue parameters. This captures per issue the initial "issue type name" which the client has specified. This is captured in 15 minute window increments. 
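The 15-minute windowing described for instance\_ts throughout these tables (12:34 rounds down to 12:30) can be sketched as follows. `round_to_window` is a hypothetical helper written for illustration, not part of the ASAPP export tooling.

```python
from datetime import datetime

def round_to_window(ts: datetime, minutes: int = 15) -> datetime:
    """Round a timestamp down to the nearest interval boundary.

    For a 15-minute interval, 12:34 maps to 12:30, so an instance_ts
    of 12:30 covers events from 12:30 up to (but not including) 12:45.
    """
    floored = (ts.minute // minutes) * minutes
    return ts.replace(minute=floored, second=0, microsecond=0)

print(round_to_window(datetime(2019, 11, 8, 12, 34)))  # 2019-11-08 12:30:00
```

The same function covers the 1-hour case (`minutes=60`), where any minute value floors to the top of the hour.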
**Sync Time:** 1h **Unique Condition:** customer\_id, issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | prechat\_survey\_ts | timestamp without time zone | Timestamp when the pre-chat survey was completed to route the issue to an expert. | 2019-08-07 19:34:18.844 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | type\_change\_ts | timestamp without time zone | The timestamp when the issue type was changed (e.g. escalated from question.) Null if the issue type was not changed. | 2019-08-07 19:45:57.325 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. 
| 21352352 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | queue\_id | integer | The unique identifier for the queue to which the issue was routed. | 20001 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | issue\_type | character varying(65535) | Current type of the issue (question or escalation). | ESCALATION | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | initial\_type | character varying(65535) | Original type of the issue when it was created. | QUESTION | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | subsidiary\_name | character varying(65535) | Name of the company to which this issue is associated. | ACMEsubsid | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | channel\_type | character varying(65535) | Indicates the channel (voice or chat) if the issue started as ESCALATION, or null otherwise. | CALL | | | 2019-08-12 00:00:00 | 2019-08-12 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: knowledge\_base This table captures interactions with articles in the knowledge base. 
An article can be viewed, attached to a chat, or marked as a favorite. **Sync Time:** 1h **Unique Condition:** issue\_id, article\_id, event\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------- | :-------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | article\_id | character varying(65535) | The knowledge base identifier for the article. | 5, 16580001 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | interaction | character varying(8) | An indicator of whether the article was viewed or attached to a chat. | 'Viewed', 'Attached' | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | is\_favorited | boolean | Indicates whether the article is marked as a favorite. | TRUE, FALSE | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | event\_ts | timestamp | The time of a given event. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | event\_type | varchar(191) | Either Interaction events requested: ('OPEN\_ARTICLE', 'PAPERCLIP\_ARTICLE') or Recommendation events requested: ('DISPLAYED','AGENT\_HOVERED', 'AGENT\_CLICKED\_EXTERNAL\_ARTICLE\_LINK', 'AGENT\_CLICKED\_THUMBS\_UP' 'AGENT\_CLICKED\_THUMBS\_DOWN', 'AGENT\_CLICKED\_EXPAND\_CARD', 'AGENT\_CLICKED\_COLLAPSE\_CARD') | CUSTOMER\_TIMEOUT, TEXT\_MESSAGE | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | event\_name | character varying(191) | A string that determines if the action comes from an Interaction event or a Recommendation event | 'INTERACTION', 'SUGGESTION' | | | 2019-12-20 00:00:00 | 2019-12-20 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2020-03-30 00:00:00 | 2020-03-30 00:00:00 | no | | | | | | rep\_assigned\_ts | timestamp without time zone | timestamp of the NEW\_REP event | | | | 2020-10-15 00:00:00 | 2020-10-15 00:00:00 | no | | | | | | article\_category | character varying(191) | Category to distinguish between flows and knowledge base articles. REGULAR is for knowledge base articles. FLOWS is for flows recommendation. | 'REGULAR' | | | 2020-10-15 00:00:00 | 2020-10-15 00:00:00 | no | | | | | | discovery\_type | character varying(256) | How article was presented/discovered. (recommendation, quick\_access\_kbr, favorite, search, filebrowser) | recommendation | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | | position | integer | Position of article recommendation when multiple recommendations are presented. Default is 1 when a single recommendation is presented. | 1 | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | | span\_id | varchar(128) | Identifier for a recommendation. Can be used to tie a recommendation to an interaction such as HOVER, OPEN\_ARTICLE. 
| 'coo9c7b8-7a50-11eb-b13e-8ad0401b5458' | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | | article\_name | character varying(65535) | Short description of the article. | 500 | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | | is\_paperclip\_enabled | boolean | Flag which indicates whether the article is paper clipped (Bookmark). | TRUE | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | | external\_article\_id | character varying(65535) | Identifier for external article id. | 4567 | | | 2021-03-09 00:00:00 | 2021-03-09 00:00:00 | no | | | | | ### Table: live\_agent\_opportunities The live\_agent\_opportunities table tracks instances where automated processes, such as chatbots or virtual assistants, escalate a conversation or issue to a live agent. It offers insights into the effectiveness of automation, the reasons behind escalations, and key metrics for improving both customer experience and agent performance. The term "Opportunity" refers to the period from when the conversation is handed over to an agent until its closure. **Sync Time:** 1h **Unique Condition:** issue\_id, customer\_id, opportunity\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. 
This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | rep\_id | varchar(191) | The identifier of the rep this opportunity was assigned to or null if it was never assigned. | 123008 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | opportunity\_ts | timestamp | Timestamp of the opportunity event. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | platform | varchar(255) | The platform which was used by the customer for a particular event or request (web, ios, android, applebiz, voice). | web, ios, android, applebiz, voice | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | device\_type | varchar(255) | Last device type used by the customer. 
| mobile, tablet, desktop, watch, unknown | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | first\_opportunity | boolean | Indicator of whether this is the first opportunity for this issue. | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | triggered\_when\_busy | boolean | Indicator of whether the customer was asked if they wanted to wait for an agent. | true | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | triggered\_outside\_hours | boolean | Indicator of whether the customer was told they are outside of business hours. | false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | queue\_id | integer | Identifier of the agent group this opportunity will be routed to. | 2001 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | queue\_name | varchar(255) | Name of the queue this opportunity will be routed to. | Residential | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | intent\_code | character varying(128) | The most recent intent code used for routing this issue. | SALESFAQ, BILLINFO | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | event\_type | varchar(191) | The event\_type of this opportunity. This can be useful to determine if this is a transfer, etc. | NEW\_REP, SRS\_HIER\_AND\_TREEWALK | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | previous\_event\_type | character varying(65535) | The event\_type that occurred prior to this opportunity. This can be useful to determine if the customer was previously transferred or timed out. | SRS\_HIER\_AND\_TREEWALK | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | flow\_name | varchar(255) | The flow associated with the routing intent, if any. | ForceChatFlow | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | by\_request | boolean | Indicator of whether the customer explicitly requested to speak to an agent (i.e. the intent code has AGENT as a parent). 
| true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | by\_end\_srs | boolean | Indicator of whether this opportunity occurred because of a negative end srs response. | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | by\_api\_error | boolean | Indicator of whether this opportunity occurred because of an error in partner API. | true, false | | | 2019-10-21 00:00:00 | 2019-10-21 00:00:00 | no | | | | | | by\_design | boolean | Indicator of whether intent\_code is not null AND not by\_request AND not by\_end\_srs AND not by\_api\_error. Note this includes cases where a flow sends the customer to an agent if it has not successfully solved the problem. (ex: I am still not connected after a reset my router flow.) | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | by\_other | boolean | Catch all indicator for all cases that are not by request, design or end\_srs. This generally happens if we are missing the intent code, either because of an API error or because of a data bug. | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | enqueued\_ts | timestamp | The time which this opportunity was sent to a queue, or null if it never was enqueued. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | exit\_queue\_ts | timestamp | Time at which the customer exited the queue. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | abandoned\_ts | TIMESTAMP | The datetime when the customer abandoned the queue. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | assigned\_ts | timestamp | Timestamp when the opportunity was assigned to a representative; null if it was never assigned. 
| 2020-01-03T18:54:45.140000+00:00 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | escalation\_initiated\_ts | timestamp | The lesser of enqueued and assigned time, null if never escalated. | 2020-01-06 23:13:50.617 | | | 2019-06-04 00:00:00 | 2019-06-04 00:00:00 | no | | | | | | rep\_first\_response\_ts | TIMESTAMP | The time when a rep first responded to the customer. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | dispositioned\_ts | timestamp | The time at which the rep dispositioned this issue (Exits the screen/frees up a slot). | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | customer\_end\_ts | timestamp without time zone | The time at which customer ended the issue, if the customer ended the issue. | 2020-01-06 23:13:50.617 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | disposition\_event\_type | varchar(255) | Event type indicating how the conversation ended. | resolved, timedout | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | cust\_utterance\_count | bigint | Count of customer utterances from issue\_assigned\_ts to dispositioned\_ts | 4 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | rep\_utterance\_count | bigint | Count of rep utterances from issue\_assigned\_ts to dispositioned\_ts | 5 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | cust\_response\_ct | int | Total count of responses by customer. Max of one message following a rep message counted as a response. | 3 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | rep\_response\_ct | int | Total count of responses by agent. Max of one message following a customer message counted as a response. | 10 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | is\_ghost\_customer | boolean | True if the customer was assigned to a rep but never responded to the rep. 
| true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | handle\_time\_seconds | double precision | Time in seconds spent by an agent working on a particular assignment. Time between the assignment and disposition events. | 824.211 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | lead\_time\_seconds | double precision | Time in seconds spent by an agent leading the conversation. Time between assignment and the time of the last utterance by the customer. If the customer made no utterance, lead time is total\_handle\_time. | 101.754 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | wrap\_up\_time\_seconds | double precision | Time in seconds spent by an agent wrapping up the conversation. Defined as total\_handle\_time - total\_lead\_time. | 61.989 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | accepted\_wait\_ts | timestamp without time zone | Timestamp at which the customer was sent a message confirming they had been placed into a queue. | 2019-09-11T14:15:59.312000+00:00 | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | is\_transfer | boolean | Indicator whether this opportunity is due to a transfer. | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | is\_reengagement | boolean | Indicator whether this opportunity is due to the user returning from a timeout. | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | is\_conversation\_initiation | boolean | Indicator of whether this opportunity is from a conversation initiation (i.e. not from transfer or reengagement). | true, false | | | 2019-07-01 00:00:00 | 2019-07-01 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | from\_queue\_id | bigint | The identifier of the group from which the issue was transferred. 
| 30001 | | | 2019-12-18 00:00:00 | 2019-12-18 00:00:00 | no | | | | | | from\_queue\_name | character varying(191) | The name of the group from which the issue was transferred. | service, General | | | 2019-12-18 00:00:00 | 2019-12-18 00:00:00 | no | | | | | | from\_rep\_id | bigint | The identifier of the rep from which the issue was transferred. | 81001 | | | 2019-12-18 00:00:00 | 2019-12-18 00:00:00 | no | | | | | | is\_check\_in\_reengagement | boolean | Indicates whether this opportunity is due to the user returning within 24 hours after being timed out for not answering a check-in prompt on time. | true | | | 2020-01-14 00:00:00 | 2020-01-14 00:00:00 | no | | | | | | desk\_mode\_flag | bigint | Bitmap encoding whether the agent handled a voice issue in the ASAPP desk and whether they engaged with the desk. Bitmap values: 0: null, 1: 'VOICE', 2: 'DESK', 4: 'ENGAGEMENT', 8: 'INACTIVITY'. NULL for non-voice issues. | 0, 1, 2, 5, 7 | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | desk\_mode\_string | varchar(191) | Decodes the desk\_mode flag. Current possible values (Null, 'VOICE', 'VOICE\_DESK', 'VOICE\_DESK\_ENGAGEMENT', 'VOICE\_INACTIVITY'). NULL for non-voice issues. | VOICE\_DESK | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | merged\_from\_issue\_id | bigint | The issue id before the merge. | 21352352 | | | 2020-06-30 00:00:00 | 2020-06-30 00:00:00 | no | | | | | | merged\_ts | timestamp | The time the merge occurred. | 2019-11-08T14:00:06.957000+00:00 | | | 2020-06-30 00:00:00 | 2020-06-30 00:00:00 | no | | | | | | exclusive\_phrase\_auto\_complete\_msgs | bigint | Count of utterances where at least one phrase autocomplete was accepted/sent and no other augmentation was used. 
| | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | autopilot\_ending\_msgs\_ct | integer | Number of autopilot endings. | 2 | | | 2024-04-19 00:00:00 | 2024-04-19 00:00:00 | no | | | | | ### Table: queue\_check\_ins Exports queue check-in events for each 15-minute window. **Sync Time:** 1h **Unique Condition:** company\_id, issue\_id, customer\_id, check\_in\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :---------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. | 123008 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | check\_in\_ts | timestamp without time zone | Timestamp at which the check-in message was presented to the customer. | 2018-06-10 14:23:00 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | wait\_time\_threshold\_ts | timestamp without time zone | Timestamp at which the queue wait time threshold was reached. | 2018-06-10 14:22:58 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | check\_in\_result | character varying(9) | The result of the check-in message: the customer either 'Accepted' or was 'Dequeued'. | 'Dequeued' | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | | check\_in\_result\_ts | timestamp without time zone | Timestamp at which the result of the check-in message was received. | 2018-06-10 14:24:00 | | | 2020-01-02 00:00:00 | 2020-04-24 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-03-23 00:00:00 | 2019-03-23 00:00:00 | no | | | | | | wait\_time\_threshold\_ct\_distinct | bigint | Number of times the queue wait time threshold was reached before the check-in message was sent. | 2 | | | 2020-04-25 00:00:00 | 2020-04-25 00:00:00 | no | | | | | | queue\_id | integer | The ASAPP identifier of the queue in which the issue was placed. | 20001 | | | 2020-06-11 00:00:00 | 2020-06-11 00:00:00 | no | | | | | | queue\_name | varchar(255) | The name of the queue in which the issue was placed. | Acme Residential, Acme Wireless | | | 2020-06-11 00:00:00 | 2020-06-11 00:00:00 | no | | | | | | opportunity\_ts | timestamp | Timestamp of the opportunity event | 2023-01-02 19:58:06 | | | 2020-01-02 00:00:00 | 2020-01-02 00:00:00 | no | | | | | ### Table: quick\_reply\_buttons The quick\_reply\_button\_interaction table contains information associated with a specific quick\_reply\_button, its final intent, and any aggregation counts over the day (e.g. 
escalated\_to\_chat, escalation\_requested). Aggregated for a 24 hour period. Only ended issues are counted. **Sync Time:** 1d **Unique Condition:** company\_id, company\_subdivision, company\_segments, final\_intent\_code, quick\_reply\_button\_text, escalated\_to\_chat, escalation\_requested, quick\_reply\_button\_index | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :----------------------------- | :----------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. 
| marketing,promotions | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | final\_intent\_code | character varying(255) | The last intent code of the flow which the user navigated. | PAYBILL | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | escalated\_to\_chat | bigint | 1 if an issue escalated to live chat, 0 if not. | 1 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | escalation\_requested | integer | 1 if customer was asked to wait for an agent or if a customer asked to speak to an agent. | 1 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | quick\_reply\_button\_text | character varying(65535) | The text of the quick reply button. | 'Billing' | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | quick\_reply\_button\_index | integer | The position of the quick reply button shown. | (1,2,3) | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | quick\_reply\_displayed\_count | bigint | The number of times this button was shown. | 42 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | quick\_reply\_selected\_count | bigint | The number of times this button was selected. | 42 | | | 2019-02-12 00:00:00 | 2019-02-12 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: reps The rep table contains a listing of data regarding each rep. Expected data includes their name, the rep id, their slot configuration and the rep status. This rep data is collected daily. 
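Because the snapshot is keyed by rep\_id and marks removed reps via disabled\_time, total concurrent chat capacity can be derived by summing max\_slot over reps that are still enabled. The sketch below assumes hypothetical sample rows shaped like the columns described here; it is an illustration, not part of the feed itself.

```python
# Sketch: derive total concurrent capacity from a daily `reps` snapshot.
# The rows below are hypothetical examples matching the documented columns.
reps = [
    {"rep_id": "123008", "name": "Smith, Anne", "max_slot": 4, "disabled_time": None},
    {"rep_id": "123009", "name": "Doe, Jane", "max_slot": 2,
     "disabled_time": "2019-02-27T12:56:34+00:00"},
]

# A rep with a non-null disabled_time has been removed from the ASAPP system,
# so only reps without one count toward capacity.
active = [r for r in reps if r["disabled_time"] is None]
capacity = sum(r["max_slot"] for r in active)
print(capacity)  # 4
```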
**Sync Time:** 1d **Unique Condition:** rep\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------- | :-------------------------- | :---------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | created\_ts | timestamp | The timestamp at which the record was generated. | 2019-02-19T21:31:43+00:00 | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | crm\_agent\_id | varchar(255) | deprecated: 2019-09-25 | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | name | varchar(255) | The rep name as imported from the CRM. | Smith, Anne | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | max\_slot | smallint | The number of slots or concurrent conversations this rep can have at the same time. | 4 | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | disabled\_time | timestamp without time zone | The time when this rep was removed from the ASAPP system. | 2019-02-27T12:56:34+00:00 | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | agent\_status | | deprecated: 2019-09-25 | | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2018-09-21 00:00:00 | 2018-09-21 00:00:00 | no | | | | | | crm\_rep\_id | character varying(191) | The rep identifier from the client system. | monica.rosa | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | | | rep\_status | varchar(255) | The last known status of the rep at UTC midnight.
| 80001 | | | 2019-09-26 00:00:00 | 2019-09-26 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: rep\_activity The rep\_activity table tracks status and slot information of each agent over time, including time spent in each status and time utilized in chats. In this table, the data is captured in 15 minute increments throughout the day. instance\_ts is actually the 15-minute window in question, and is part of the primary key. It does not indicate the last time a relevant event happened as in other tables. Windows may be re-stated when information from a later window amends them, for example to account for additional utilized time. **Sync Time:** 1h **Unique Condition:** instance\_ts, rep\_id, status\_id, in\_status\_starting\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------ | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------ | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The start of the 15-minute time window under observation. As an example, for a 15 minute interval an instance\_ts of 12:30 implies activity from 12:30 to 12:45. | 2019-11-08 14:00:00 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | update\_ts | timestamp without time zone | The timestamp at which the last event for this record occurred. This usually represents the status end or the end of the last conversation handled in this status. 
| 2018-06-10 14:24:00 | | | 2019-12-16 00:00:00 | 2019-12-16 00:00:00 | no | | | | | | export\_ts | timestamp | The end of the time window for which this record was exported. This is used for de-duplicating records. | 2018-06-10 14:30:00 | | | 2019-12-16 00:00:00 | 2019-12-16 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | The company subdivision relates to the customer issue and is not relevant to reps. Intentionally left blank. | ACMEsubcorp | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | company\_segments | varchar(255) | The company segments field relates to the customer issue and is not relevant to reps. Intentionally left blank. | marketing,promotions | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | agent\_name | | deprecated: 2019-09-25 | | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | status\_id | character varying(65535) | The ASAPP identifier for a given status. | OFFLINE, 1 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | status\_description | character varying(65535) | The human-readable name for a given status. | | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | orig\_status\_description | character varying(191) | The text of the status before alteration for disconnects. | Available, Away, Coffee Break, Active | | | 2020-01-07 00:00:00 | 2020-01-07 00:00:00 | no | | | | | | in\_status\_starting\_ts | timestamp without time zone | The time within this 15-minute window at which the agent entered this status. | 2020-01-08T19:32:38.352000+00:00 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | linear\_ute\_time | double precision | Time in seconds the agent spent handling at least one issue in this status within this 15-minute time window.
| 253.34, 0.0, 5.046 | | | 2019-03-05 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | cumul\_ute\_time | double precision | The collective time in seconds the agent spent handling all issues in this status within this 15-minute time window. This time may exceed the status time due to concurrency slots. | 498.82, 0.0, 0.428 | | | 2019-03-05 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | unutilized\_time | double precision | The time in seconds the agent spent not handling any issues in this status within this 15-minute time window. | 37.60, 0.0 | | | 2019-03-05 00:00:00 | 2019-03-05 00:00:00 | no | | | | | | window\_status\_time | double precision | The length of time in seconds that the agent was in this status. | 0.107, 900.0 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | total\_status\_time | double precision | Time in seconds that the agent spent in this status, including contiguous time spent outside of this 15-minute time window. | 5.046, 0.107 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | max\_slots | integer | The number of issue slots or concurrency values which the rep set for themselves for this window. | 3, 2 | | | 2018-10-01 00:00:00 | 2018-10-01 00:00:00 | no | | | | | | status\_end\_ts | timestamp without time zone | The timestamp at which this agent exited the designated state. Note that this time may be null or after the next instance\_ts, which implies that the agent did not change statuses within this 15-minute window. | 2018-06-10 14:23:00 | | | 2020-01-07 00:00:00 | 2020-01-07 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_name | varchar(191) | The name of this rep. | Jane Doe, John | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data.
| acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | desk\_mode | varchar(191) | The mode of the desktop the agent is logged into. Modes include CHAT or VOICE. | 'CHAT', 'VOICE' | | | 2019-12-10 00:00:00 | 2019-12-10 00:00:00 | no | | | | | | last\_dispositioned\_ts | timestamp | Timestamp at which the rep was unassigned, for the rep status that started at the given time. | 2018-06-10 14:24:00 | | | 2024-05-29 00:00:00 | 2024-05-29 00:00:00 | no | | | | | ### Table: rep\_assignment\_disposition This view contains information relating to rep-disposition responses. **Sync Time:** 1h **Unique Condition:** issue\_id, rep\_id, rep\_assigned\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------------------------------- | :-------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | rep\_assigned\_ts | timestamp without time zone | The timestamp at which the issue was assigned to the rep.
| | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | disposition\_event | character varying(65535) | The event type associated with the disposition event. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | disposition\_notes\_txt | character varying(65535) | Disposition notes associated with the disposition event. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | disposition\_notes\_valid | boolean | Boolean value indicating whether the notes are non-blank and non-null. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | crm\_offered\_ts | timestamp without time zone | Timestamp of the last CRM offered event. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | crm\_outcome\_ts | timestamp without time zone | Timestamp of the last CRM outcome event. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | crm\_is\_success | boolean | Boolean value indicating whether the disposition event was successfully sent to the partner CRM. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | crm\_error\_type | character varying(65535) | The type of error that occurred in the pipeline. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | crm\_error\_source | character varying(65535) | Where in the pipeline the event failed to publish. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | presented\_tags | character varying(65535) | Unique list of all summary tags presented to agent for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | selected\_tags | character varying(65535) | Unique list of all summary tags selected by agent for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | notes\_presented\_tags | character varying(65535) | Unique list of the summary tags presented to agent at the OTF NOTES state for this assignment.
| | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | notes\_selected\_tags | character varying(65535) | Unique list of the summary tags selected by agent at the OTF NOTES state for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | assignment\_end\_presented\_tags | character varying(65535) | Unique list of the summary tags presented to agent at the end of assignment state. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | assignment\_end\_selected\_tags | character varying(65535) | Unique list of the summary tags selected by agent at the end of assignment state. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | presented\_tags\_ct\_distinct | bigint | Distinct count of all summary tags presented to agent for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | selected\_tags\_ct\_distinct | bigint | Distinct count of all summary tags selected by agent for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | notes\_presented\_tags\_ct\_distinct | bigint | Distinct count of the summary tags presented to agent at the OTF NOTES state for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | notes\_selected\_tags\_ct\_distinct | bigint | Distinct count of the summary tags selected by agent at the OTF NOTES state for this assignment. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | assignment\_end\_presented\_tags\_ct\_distinct | bigint | Distinct count of the summary tags presented to agent at the end of assignment state. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | assignment\_end\_selected\_tags\_ct\_distinct | bigint | Distinct count of the summary tags selected by agent at the end of assignment state. | | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. 
| acme | | | 2020-09-03 00:00:00 | 2020-09-03 00:00:00 | no | | | | | | auto\_summary\_txt | character varying(65535) | Text of the automatic generative summary of this assignment, if applicable. Note that this field will be null if no auto summary could be found or if the feature is not enabled. | | | | 2023-02-16 00:00:00 | 2023-02-16 00:00:00 | no | | | | | ### Table: rep\_attributes The rep\_attributes table contains various data associated with a rep, such as their given role. This table may not exist or may be empty if the client chooses to use rep\_hierarchy instead. This is a daily snapshot of information. **Sync Time:** 1d **Unique Condition:** rep\_attribute\_id, rep\_id, created\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :---------------------- | :------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | created\_ts | timestamp | The date this agent was created. | 2019-06-24T18:02:05+00:00 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | attribute\_name | character varying(64) | The attribute key value. | role, companygroup, jobcode | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | attribute\_value | character varying(1024) | The attribute value associated with the attribute\_name.
| manager, representative, lead | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | agent\_attribute\_id | | deprecated: 2019-09-25 | | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | external\_agent\_id | varchar(255) | deprecated: 2019-09-25 | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_attribute\_id | bigint | The ASAPP identifier for this attribute. | 1200001 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | external\_rep\_id | varchar(255) | Client-provided identifier for the rep. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: rep\_augmentation The rep\_augmentation table tracks a specific issue and rep, and calculates metrics on augmentation types and counts of augmentation usage. This table supports billing for the augmentation feature on a per-issue basis.
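As a minimal sketch of the per-issue billing logic implied by the columns below (is\_billable is documented as "the rep marked the conversation resolved after using autocomplete or autosuggest"), the following recomputes that flag from a hypothetical row. The helper and sample values are illustrative assumptions, not part of the feed.

```python
# Sketch: recompute the is_billable flag described in rep_augmentation.
# An issue is treated as billable when the rep marked it resolved AND used
# at least one autosuggest or autocomplete prompt during the conversation.
def is_billable(row: dict) -> bool:
    used_augmentation = (row["auto_suggest_msgs"] + row["auto_complete_msgs"]) > 0
    return bool(row["is_rep_resolved"]) and used_augmentation

# Hypothetical row shaped like the documented columns.
row = {"issue_id": 21352352, "auto_suggest_msgs": 3,
       "auto_complete_msgs": 2, "is_rep_resolved": True}
print(is_billable(row))  # True
```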
**Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | customer\_id | bigint | The ASAPP internal customer identifier. 
| 123008 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | external\_customer\_id | varchar(255) | The customer identifier as provided by the client. | | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | conversation\_end\_ts | timestamp | Timestamp when the conversation ended. | 2018-06-23 21:23:53 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | auto\_suggest\_msgs | bigint | The number of autosuggest prompts used by the rep. | 3 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | auto\_complete\_msgs | bigint | The number of autocompletion prompts used by the rep. | 2 | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | did\_customer\_timeout | boolean | Boolean value indicating whether the customer timed out. | false, true | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | is\_rep\_resolved | boolean | Boolean value indicating whether the rep marked this conversation resolved. | true, false | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | is\_billable | boolean | Boolean value indicating whether the rep marked the conversation resolved after using autocomplete or autosuggest. | true, false | | | 2018-11-27 00:00:00 | 2018-11-27 00:00:00 | no | | | | | | custom\_auto\_suggest\_msgs | bigint | The number of custom autosuggest prompts used by the rep (a subset of auto\_suggest\_msgs). | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | custom\_auto\_complete\_msgs | bigint | The number of custom autocompletion prompts used by the rep (a subset of auto\_complete\_msgs). | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | drawer\_msgs | bigint | The number of custom drawer messages used by the rep.
| 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | kb\_search\_msgs | bigint | The number of messages used from knowledge base search. | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | kb\_recommendation\_msgs | bigint | The number of messages used from knowledge base recommendations. | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_id | varchar(191) | Last rep\_id that worked on this issue. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | autopilot\_timeout\_msgs | bigint | Number of autopilot timeout messages. | 2 | | | 2020-06-11 00:00:00 | 2020-06-11 00:00:00 | no | | | | | | phrase\_auto\_complete\_presented\_msgs | integer | Count of utterances where at least one phrase autocomplete was suggested/presented. | | | | 2020-06-24 00:00:00 | 2020-06-24 00:00:00 | no | | | | | | cume\_phrase\_auto\_complete\_presented | integer | Total number of phrase autocomplete suggestions per issue. | | | | 2020-06-24 00:00:00 | 2020-06-24 00:00:00 | no | | | | | | phrase\_auto\_complete\_msgs | integer | Count of utterances where at least one phrase autocomplete was accepted/sent. | | | | 2020-06-24 00:00:00 | 2020-06-24 00:00:00 | no | | | | | | cume\_phrase\_auto\_complete | integer | Total number of phrase autocompletes per issue. | | | | 2020-06-24 00:00:00 | 2020-06-24 00:00:00 | no | | | | | | exclusive\_phrase\_auto\_complete\_msgs | integer | Count of utterances where at least one phrase autocomplete was accepted/sent and no other augmentation was used. 
| | | | 2020-06-24 00:00:00 | 2020-06-24 00:00:00 | no | | | | | ### Table: rep\_convos The rep\_convos table captures metrics associated with a rep and an issue. Expected metrics include "average response time", "cumulative customer response time", "disposition type" and "handle time". This data is captured in 15 minute window increments. **Sync Time:** 1h **Unique Condition:** issue\_id, rep\_id, issue\_assigned\_ts | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | issue\_assigned\_ts | timestamp without time zone | The time when an issue was first assigned to this rep. | 2019-10-31T18:37:37.848000+00:00 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | agent\_first\_response\_ts | | deprecated: 2019-09-25 | | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | dispositioned\_ts | timestamp | The time when the issue left the rep's screen. | 2019-10-31T18:46:39.869000+00:00 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | customer\_end\_ts | timestamp without time zone | The time at which the customer ended the issue. This may be NULL. | 2019-10-31T18:46:12.559000+00:00 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | disposition\_event\_type | varchar(255) | Event type indicating how the conversation ended. | rep, customer, batch (system/auto ended) | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | cust\_utterance\_count | bigint | The count of customer utterances from issue\_assigned\_ts to dispositioned\_ts. | 5 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | rep\_utterance\_count | bigint | The count of rep utterances from issue\_assigned\_ts to dispositioned\_ts.
| 5 | | | 2018-09-01 00:00:00 | 2018-09-01 00:00:00 | no | | | | | | handle\_time\_seconds | double precision | Total time in seconds that reps spent handling the issue, from assignment to disposition. | 428.9 | | | 2019-03-19 00:00:00 | 2019-03-20 00:00:00 | no | | | | | | lead\_time\_seconds | double precision | Total time in seconds the customer spent interacting during the conversation, from assignment to last utterance. | 320.05 | | | 2019-03-19 00:00:00 | 2019-03-20 00:00:00 | no | | | | | | wrap\_up\_time\_seconds | double precision | Total time in seconds spent by reps wrapping up the conversation, calculated as the difference between handle and lead time. | 3.614 | | | 2019-03-19 00:00:00 | 2019-03-20 00:00:00 | no | | | | | | rep\_response\_ct | int | The total count of responses by the rep. A maximum of one message following a customer message is counted as a response. | 5 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | cust\_response\_ct | int | The total count of responses by the customer. A maximum of one message following a rep message is counted as a response. | 12 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | auto\_suggest\_msgs | bigint | The number of autosuggest prompts used by the rep (inclusive of custom\_auto\_suggest\_msgs). | 5 | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | auto\_complete\_msgs | bigint | The number of autocompletion prompts used by the rep (inclusive of custom\_auto\_complete\_msgs). | 5 | | | 2019-07-29 00:00:00 | 2019-07-29 00:00:00 | no | | | | | | custom\_auto\_suggest\_msgs | bigint | The number of custom autosuggest prompts used by the rep (a subset of auto\_suggest\_msgs). | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | custom\_auto\_complete\_msgs | bigint | The number of custom autocompletion prompts used by the rep (a subset of auto\_complete\_msgs).
| 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | drawer\_msgs | bigint | The number of custom drawer messages used by the rep. | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | kb\_search\_msgs | bigint | The number of messages used by the rep from knowledge base searches. | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | kb\_recommendation\_msgs | bigint | The number of messages used by the rep from knowledge base recommendations. | 2 | | | 2019-09-25 00:00:00 | 2019-09-25 00:00:00 | no | | | | | | is\_ghost\_customer | boolean | Boolean value indicating if the customer was assigned a rep but never responded. | true, false | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | first\_response\_seconds | bigint | The total time taken by the rep to send the first message, once the issue was assigned. | 26.148 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | cume\_rep\_response\_seconds | bigint | The total time across the assignment for the rep to send response messages. | 53.243 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | max\_rep\_response\_seconds | double precision | The maximum time across the assignment for the rep to send a response message. | 77.965 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | avg\_rep\_response\_seconds | double precision | The average time across assignment for the rep to send response messages. | 22.359 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | cume\_cust\_response\_seconds | bigint | The total time across the assignment for the customer to send response messages. | 332.96 | | | 2019-05-17 00:00:00 | 2019-05-17 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier.
| 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_first\_response\_ts | datetime | The time when a rep first responded to the customer. | 2019-10-31T18:38:03.996000+00:00 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | hold\_ct | bigint | The total number of times this rep was part of a hold call. This field is not applicable to chat. | 1 | | | 2019-11-19 00:00:00 | 2019-11-19 00:00:00 | no | | | | | | cume\_hold\_time\_seconds | double precision | The total duration of time the rep placed the customer on hold across the call. This field is not applicable to chat. | 93.30 | | | 2019-11-19 00:00:00 | 2019-11-19 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | client\_mode | varchar(191) | The communication mode used by the customer for a given issue (CHAT or VOICE). | CHAT, VOICE | | | 2019-12-10 00:00:00 | 2019-12-10 00:00:00 | no | | | | | | cume\_cross\_talk\_seconds | numeric(38,5) | Total duration of time where both agent and customer were speaking. Only relevant for voice client mode. | | | | 2019-12-28 00:00:00 | 2019-12-28 00:00:00 | no | | | | | | desk\_mode\_flag | bigint | Bitmap encoding whether the agent handled the voice issue in the ASAPP desk and whether they engaged with the desk. Bits: 0: null, 1: 'VOICE', 2: 'DESK', 4: 'ENGAGEMENT', 8: 'INACTIVITY'. NULL for non-voice issues. | 0, 1, 2, 5, 7 | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | desk\_mode\_string | varchar(191) | Decodes the desk\_mode flag. Current possible values (Null, 'VOICE', 'VOICE\_DESK', 'VOICE\_DESK\_ENGAGEMENT','VOICE\_INACTIVITY'). NULL for non-voice issues. | VOICE\_DESK | | | 2020-02-19 00:00:00 | 2020-02-19 00:00:00 | no | | | | | | queue\_id | integer | The ASAPP identifier of the queue in which the issue was placed.
| 20001 | | | 2021-04-08 00:00:00 | 2021-04-08 00:00:00 | no | | | | | | autopilot\_timeout\_msgs | integer | Number of autopilot timeout messages. | 2 | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | exclusive\_phrase\_auto\_complete\_msgs | integer | Count of utterances where at least one phrase autocomplete was accepted/sent and no other augmentation was used. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | custom\_click\_to\_insert\_msgs | integer | Total count of custom click\_to\_insert messages. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | ms\_auto\_suggest\_msgs | integer | Total count of multi-sentence auto-suggest messages. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | ms\_auto\_complete\_msgs | integer | Total count of multi-sentence auto-complete messages. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | ms\_auto\_suggest\_custom\_msgs | integer | Total count of custom multi-sentence auto-suggest messages. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | ms\_auto\_complete\_custom\_msgs | integer | Total count of custom multi-sentence auto-complete messages. | | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | autopilot\_form\_msgs | bigint | Number of autopilot form messages. | 2 | | | 2021-08-02 00:00:00 | 2021-08-02 00:00:00 | no | | | | | | click\_to\_insert\_global\_msgs | integer | Number of click to insert messages. | 2 | | | 2023-02-15 00:00:00 | 2023-02-15 00:00:00 | no | | | | | | autopilot\_greeting\_msgs | bigint | Number of autopilot greeting messages. | 2 | | | 2023-02-15 00:00:00 | 2023-02-15 00:00:00 | no | | | | | | augmented\_msgs | bigint | Number of augmented messages. 
| 2 | | | 2023-02-22 00:00:00 | 2023-02-22 00:00:00 | no | | | | | | autopilot\_ending\_msgs\_ct | integer | Number of autopilot ending messages. | 2 | | | 2024-04-19 00:00:00 | 2024-04-19 00:00:00 | no | | | | | | rep\_assignment\_flex\_source | varchar(256) | Identifies the flex source type, i.e., LOW\_WORKLOAD, DIRECT\_TRANSFER, CUSTOMER\_ENDED. | LOW\_WORKLOAD | | | 2026-01-22 00:00:00 | 2026-01-22 00:00:00 | no | | | | | ### Table: rep\_hierarchy The rep\_hierarchy table maps each rep to their direct reports and their manager. It is a daily snapshot of rep hierarchy information. This table may be empty; if it is, consult rep\_attributes instead. **Sync Time:** 1d **Unique Condition:** subordinate\_rep\_id, superior\_rep\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :---------------------- | :---------------------- | :------------------------------------------------------------------------------------------------------- | :----------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | subordinate\_agent\_id | | deprecated: 2019-09-25 | | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | superior\_agent\_id | | deprecated: 2019-09-25 | | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | reporting\_relationship | character varying(1024) | Relationship between subordinate and superior reps, e.g. "superiors\_superior" for skip-level reporting. | superior, superiors\_superior | | | 2018-08-14 00:00:00 | 2018-08-14 00:00:00 | no | | | | | | subordinate\_rep\_id | bigint | ASAPP rep identifier that is the subordinate of the superior\_rep\_id. | 110001 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | superior\_rep\_id | bigint | ASAPP rep identifier that is the superior of the subordinate\_rep\_id.
| 20001 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | ### Table: rep\_utilized The rep\_utilized table tracks a rep's activity and how much time they spend in each state. It records utilization time and total minutes per state in 15-minute intervals throughout the day. The instance\_ts field identifies the 15-minute window and is part of the primary key; unlike other tables, records are not keyed to the most recent event. A record may be updated if later information changes it, for example when additional utilization time is attributed to the window. Utilization refers to the rep's efficiency. **Sync Time:** 1h **Unique Condition:** instance\_ts, rep\_id, desk\_mode, max\_slots, company\_marker | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------------------------- | :----------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The start of the 15-minute time window under observation. As an example, for a 15-minute interval an instance\_ts of 12:30 implies activity from 12:30 to 12:45.
| 2019-11-08 14:00:00 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | update\_ts | timestamp | Timestamp at which the last event for this record occurred - usually the last status end or conversation end that was active in this window deprecated: 2020-11-09 | 2019-06-10 14:24:00 | | | 2020-01-29 00:00:00 | 2020-01-29 00:00:00 | no | | | | | | export\_ts | timestamp | The end of the time window for which this record was exported. This is used for de-duplicating records. | 2019-06-10 14:30:00 | | | 2020-01-29 00:00:00 | 2020-01-29 00:00:00 | no | | | | | | company\_id | bigint | The ASAPP identifier of the company or test data source. | 10001 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | Relates to the customer issue, not relevant to reps. Intentionally left blank. | ACMEsubcorp | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | company\_segments | varchar(255) | Relates to the customer issue, not relevant to reps. Intentionally left blank. | marketing,promotions | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | rep\_name | varchar(191) | The name of the rep. | John Doe | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | max\_slots | integer | Maximum chat concurrency slots enabled for this rep. | 2 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_logged\_in\_min | bigint | Cumulative Logged In Time (min) -- Total cumulative time (linear time x max slots) the rep logged into the agent desktop. | 120 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_logged\_in\_min | bigint | Linear Logged In Time (min) -- Total linear time rep logged into agent desktop. 
| 60 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_avail\_min | bigint | Cumulative Available Time (min) -- Total cumulative time (linear time x max slots) the rep logged into agent desktop while in the "Available" state. | 90 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_avail\_min | bigint | Linear Available Time (min) -- Total linear time the rep logged into the agent desktop while in the "Available" state. | 45 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_busy\_min | bigint | Cumulative Busy Time (min) -- Total cumulative time (linear time x max slots) the rep logged into agent desktop while in a "Busy" state. | 30 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_busy\_min | bigint | Linear Busy Time (min) -- Total linear time rep logged into agent desktop while in a "Busy" state. | 15 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_prebreak\_min | bigint | Cumulative Busy Time - Pre-Break (min) -- Total cumulative time (linear time x max slots) rep logged into agent desktop while in the Pre-Break version of the "Busy" state. | 10 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_prebreak\_min | bigint | Linear Busy Time - Pre-Break (min) -- Total linear time the rep logged into Agent Desktop while in the Pre-Break version of Busy state | 5.6 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_ute\_total\_min | bigint | Cumulative Utilized Time (min) -- Total cumulative time (linear time x active slots) the rep logged into agent desktop and utilized over all states. | 27.71 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_ute\_total\_min | bigint | Linear Utilized Time (min) -- Total linear time rep logged into agent desktop and utilized over all states. 
| 5.5 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_ute\_avail\_min | bigint | Cumulative Utilized Time While Available (min) -- Total cumulative time (linear time x active slots) rep logged into agent desktop and utilized while in the "Available" state. | 11.5 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_ute\_avail\_min | bigint | Linear Utilized Time While Available (min) -- Total linear time rep logged into agent desktop and utilized while in the "Available" state. | 5.93 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_ute\_busy\_min | bigint | Cumulative Busy Time - While Chatting (min) -- Total cumulative time (linear time x active slots) rep logged into agent desktop while in a Busy state and handling at least one assignment. | 7.38 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_ute\_busy\_min | bigint | Linear Utilized Time While Busy (min) -- Total linear time rep logged into agent desktop while in a Busy state and handling at least one assignment. | 3.44 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_ute\_prebreak\_min | bigint | Cumulative Utilized Time While Busy Pre-Break (min) -- Cumulative time rep logged into agent desktop and utilized while in the "Pre-Break Busy" state. | 5.35 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | lin\_ute\_prebreak\_min | bigint | Linear Utilized Time While Busy Pre-Break (min) -- Linear time rep logged into agent desktop and utilized while in the "Pre-Break Busy" state. | 3.65 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | labor\_min | bigint | Total linear time rep logged into agent desktop in the available state, plus cumulative time rep was handling issues in any "Busy" state. | 18.44 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | busy\_clicks\_ct | bigint | Busy Clicks -- Number of times the rep moved from an active to a busy state. 
| 1 | | | 2019-05-10 00:00:00 | 2019-05-10 00:00:00 | no | | | | | | ute\_ratio | float | Utilization ratio - cumulative utilized time divided by linear total potential labor time. | 1.71 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | act\_ratio | float | Active utilization ratio - cumulative utilized time in the available state divided by total labor time. | 1.67 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2025-01-27 00:00:00 | no | | | | | | desk\_mode | varchar(191) | The mode of the desktop that the agent is logged into - whether CHAT or VOICE. | 'CHAT', 'VOICE' | | | 2019-12-10 00:00:00 | 2019-12-10 00:00:00 | no | | | | | | lin\_utilization\_level\_over\_min | bigint | Total linear time in minutes when the rep's assignments exceed the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | lin\_utilization\_level\_full\_min | bigint | Total linear time in minutes when the rep's assignments equal the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | lin\_utilization\_level\_light\_min | bigint | Total linear time in minutes when the rep's assignments are fewer than the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | workload\_level\_no\_min | bigint | Total time in minutes when the rep has no active assignment. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | workload\_level\_over\_min | bigint | Total time in minutes when the rep's active assignments exceed the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | workload\_level\_full\_min | bigint | Total time in minutes when the rep's active assignments equal the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | 
workload\_level\_light\_min | bigint | Total time in minutes when the rep's active assignments are fewer than the rep's max slots. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | flex\_protect\_min | bigint | Total time in minutes when the rep is flex protected. | 120 | | | 2020-11-09 00:00:00 | 2020-11-09 00:00:00 | no | | | | | | cum\_weighted\_min | | | | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_weighted\_seconds | bigint | Total effort\_workload when a rep has active assignments. | 10 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_ute\_weighted\_avail\_unflexed\_seconds | bigint | Total weighted time in seconds when a rep is available. | 160 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | | cum\_weighted\_inactive\_seconds | bigint | Total effort\_workload when a rep has no active assignments. | 10 | | | 2019-03-11 00:00:00 | 2019-03-11 00:00:00 | no | | | | | ### Table: sms\_events Exports SMS flow events for each 15-minute window. **Sync Time:** 1h **Unique Condition:** company\_id, sms\_flow\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :---------------------------------- | :-------------------------- | :--------------------------------------------------------------------------------------------- | :---------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | sms\_flow\_id | character varying(65535) | The flow identifier. | 019bf9e4-a01a-4420-b419-459659a1b50e | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | external\_session\_id | character varying(65535) | The session identifier received from the client.
| 772766038 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | message\_sent\_result | character varying(6) | The status of an SMS request, as received from the 3rd party SMS provider. | 'Sent' | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | message\_sent\_result\_status\_code | character varying(65535) | The failure reason received from the 3rd party SMS provider. | 30001 (Queue Overflow), 30004 (Message Blocked) | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | message\_character\_count | integer | The SMS message's character count. | 29 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | partner\_triggered\_ts | timestamp without time zone | The date and time at which a partner sends an SMS request to ASAPP. | 2018-03-03 12:23:52 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | provider\_sent\_ts | timestamp without time zone | The date and time at which ASAPP sends an SMS request to the 3rd party SMS provider. | 2018-03-03 12:23:52 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | provider\_status\_ts | timestamp without time zone | The date and time at which the 3rd party SMS provider sends back the status of an SMS request. | 2018-03-03 12:23:52 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_id | bigint | The ASAPP identifier of the company or test data source. | 10001 | | | 2019-11-08 00:00:00 | 2019-11-08 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data.
| acme | | | 2019-11-08 00:00:00 | 2020-03-23 00:00:00 | no | | | | | ### Table: transfers The purpose of the transfers table is to capture information associated with an issue transfer between reps. The data is captured per 15 minute window. **Sync Time:** 1h **Unique Condition:** company\_id, issue\_id, rep\_id, timestamp\_req | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------ | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | timestamp\_req | timestamp without time zone | The date and time when the transfer was requested. | 2019-06-11T13:27:09.470000+00:00 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | timestamp\_reply | timestamp without time zone | The date and time when the transfer request was received. 
| 2019-06-11T13:31:58.537000+00:00 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | agent\_id | bigint | deprecated: 2019-09-25 | 123008 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2018-08-04 00:00:00 | 2018-08-04 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-10-04 00:00:00 | 2018-10-04 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-10-04 00:00:00 | 2018-10-04 00:00:00 | no | | | | | | requested\_agent\_transfer | | deprecated: 2019-09-25 | | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | group\_transfer\_to | character varying(65535) | The group identifier where the issue was transferred. | 20001 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | group\_transfer\_to\_name | character varying(191) | The group name where the issue was transferred. | acme-mobile-eng | | | 2018-08-04 00:00:00 | 2018-08-04 00:00:00 | no | | | | | | group\_transfer\_from | character varying(65535) | The group identifier which transferred the issue. | 87001 | | | 2018-08-04 00:00:00 | 2018-08-04 00:00:00 | no | | | | | | group\_transfer\_from\_name | character varying(191) | The group name which transferred the issue. | acme-residential-eng | | | 2018-08-04 00:00:00 | 2018-08-04 00:00:00 | no | | | | | | actual\_agent\_transfer | | deprecated: 2019-09-25 | | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | accepted | boolean | A boolean flag indicating whether the transfer was accepted.
| true, false | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | is\_auto\_transfer | boolean | A boolean flag indicating whether this was an auto-transfer. | true, false | | | 2019-07-22 00:00:00 | 2019-07-22 00:00:00 | no | | | | | | exit\_transfer\_event\_type | character varying(65535) | The event type which concluded the transfer. | TRANSFER\_ACCEPTED, CONVERSATION\_END | | | 2019-07-22 00:00:00 | 2019-07-22 00:00:00 | no | | | | | | transfer\_button\_clicks | bigint | The number of times a rep requested a transfer from transfer initiation to when the transfer was received. | 1 | | | 2019-08-22 00:00:00 | 2019-08-22 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | rep\_id | varchar(191) | The ASAPP rep/agent identifier. | 123008 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | requested\_rep\_transfer | bigint | The rep which requested the transfer. | 1070001 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | actual\_rep\_transfer | bigint | The rep which received the transfer. | 250001 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | requested\_group\_transfer\_id | bigint | The group identifier where the transfer was initially requested. | 123455 | | | 2019-12-13 00:00:00 | 2019-12-13 00:00:00 | no | | | | | | requested\_group\_transfer\_name | character varying(191) | The group name where the transfer was initially requested. 
| support | | | 2019-12-13 00:00:00 | 2019-12-13 00:00:00 | no | | | | | | route\_code\_to | varchar(191) | IVR routing code indicating the customer contact reason to which the issue is being transferred. | 2323 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | | route\_code\_from | varchar(191) | IVR routing code indicating the customer contact reason from the previous assignment. | 2323 | | | 2018-08-03 00:00:00 | 2018-08-03 00:00:00 | no | | | | | ### Table: utterances The purpose of the utterances table is to list each utterance and its associated data captured during a conversation. This table includes data from ongoing conversations that have not yet ended. **Sync Time:** 1h **Unique Condition:** created\_ts, issue\_id, sender\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :-------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC.
| 2019-11-08 14:00:06.957000+00:00 | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | created\_ts | timestamp | The date and time which the message was sent. | 2019-12-17T17:11:41.626000+00:00 | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | conversation\_id | bigint | deprecated: 2019-09-25 | 21352352 | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | company\_subdivision | varchar(255) | String identifier for the company subdivision associated with the conversation. | ACMEsubcorp | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | company\_segments | varchar(255) | String with comma separated segments for the company enclosed by square brackets. | marketing,promotions | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | sequence\_id | integer | deprecated: 2019-09-26 | | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | sender\_id | bigint | The identifier of the person who sent the message. | | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | sender\_type | character varying(191) | The type of sender. | customer, bot, rep, rep\_note, rep\_whisper | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | utterance\_type | character varying(65535) | The type of utterance sent. | autosuggest, autocomplete, script, freehand | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | sent\_to\_agent | boolean | deprecated: 2019-09-25 | | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | utterance | character varying(65535) | Text sent from a bot or human (i.e. customer, rep, expert). | 'Upgrade current device', 'Is there anything else we can help you with?' | | | 2018-07-13 00:00:00 | 2018-07-13 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. 
| 21352352 | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | sent\_to\_rep | boolean | A boolean flag indicating if an utterance was sent from a customer to a rep. | true, false | | | 2019-09-27 00:00:00 | 2019-09-27 00:00:00 | no | | | | | | utterance\_start\_ts | timestamp without time zone | This timestamp marks the time when a person began speaking in the voice platform. In chat platforms or non-voice generated messages, this timestamp will be NULL. | NULL, 2019-10-18T18:45:00+00:00 | | | 2019-12-06 00:00:00 | 2019-12-06 00:00:00 | no | | | | | | utterance\_end\_ts | timestamp without time zone | This timestamp marks the time when a person finished speaking in the voice platform. In chat platforms or non-voice generated messages, this timestamp will be NULL. | NULL, 2019-10-18T18:45:00+00:00 | | | 2019-12-06 00:00:00 | 2019-12-06 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2019-11-01 00:00:00 | 2024-05-24 00:00:00 | no | | | | | | event\_uuid | varchar(36) | A UUID uniquely identifying each utterance record | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2020-10-23 00:00:00 | 2020-10-23 00:00:00 | no | | | | | ### Table: voice\_intents The voice intents table includes fields that provide visibility to the customer's contact reason for the call **Sync Time:** 1h **Unique Condition:** issue\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------ | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- | :--------- | :---- 
| :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2021-08-10 00:00:00 | 2021-08-10 00:00:00 | no | | | | | | issue\_id | bigint | The ASAPP issue or conversation id. | 21352352 | | | 2021-08-10 00:00:00 | 2021-08-10 00:00:00 | no | | | | | | company\_id | bigint | DEPRECATED 2024-03-25 | 10001 | | | 2021-08-10 00:00:00 | 2021-08-10 00:00:00 | no | | | | | | voice\_intent\_code | varchar(255) | Voice intent code with the highest score associated to the issue | PAYBILL | | | 2021-08-10 00:00:00 | 2021-08-10 00:00:00 | no | | | | | | voice\_intent\_name | varchar(255) | Voice intent name with the highest score associated to the issue | Payment history | | | 2021-08-10 00:00:00 | 2021-08-10 00:00:00 | no | | | | | | company\_name | varchar(255) | Name of the company associated with the data. | acme | | | 2025-01-27 00:00:00 | 2025-01-27 00:00:00 | no | | | | | ### Table: vagafeedbacksurvey\_shown This feed displays all instances where a Virtual Agent or Generative Agent customer feedback survey was shown to a customer for an issue. 
**Sync Time:** 1h | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------- | :-------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2025-11-05 00:00:00 | | no | | | | | | event\_ts | timestamp | The time of a given event. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2025-11-05 00:00:00 | | no | | | | | | company\_marker | varchar | Name of the company associated with the data. | acme corp | | | 2025-11-05 00:00:00 | | no | | | | | | customer\_id | varchar | Unique identifier generated by the ASAPP application for the customer. | abc123 | | | 2025-11-05 00:00:00 | | no | | | | | | issue\_id | varchar | Unique identifier generated by the ASAPP application for the issue or conversation. | abc123 | | | 2025-11-05 00:00:00 | | no | | | | | | intent\_code | varchar | The ASAPP internal code for a given intent.
| ACCTNUM | | | 2025-11-05 00:00:00 | | no | | | | | | flow\_id | varchar | An ASAPP identifier assigned to a particular flow executed during a customer event or request. | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | | | 2025-11-05 00:00:00 | | no | | | | | | flow\_name | varchar | The ASAPP text name for a given flow which was executed during a customer event or request. | FirstChatMessage, AccountNumberFlow | | | 2025-11-05 00:00:00 | | no | | | | | | flow\_version | varchar | Version of the flow. | 1,2,3 | | | 2025-11-05 00:00:00 | | no | | | | | | flow\_type | varchar | Type of flow. | business, utility | | | 2025-11-05 00:00:00 | | no | | | | | | flow\_tags | varchar | Tags associated with the flow. | v1, v2 | | | 2025-11-05 00:00:00 | | no | | | | | | is\_external\_survey | boolean | Indicates if the survey form is owned by an external system. | True or False | | | 2025-11-05 00:00:00 | | no | | | | | | client\_type | varchar | Type of client. | web, mobile, etc. | | | 2025-11-05 00:00:00 | | no | | | | | | survey\_shown | varchar | Type of survey shown, i.e., Virtual Agent or Generative Agent. | SURVEY\_FEEDBACK\_TYPE\_VIRTUAL\_AGENT or SURVEY\_FEEDBACK\_TYPE\_GENERATIVE\_AGENT | | | 2025-11-05 00:00:00 | | no | | | | | ### Table: vagafeedbacksurvey\_submitted This feed displays the submitted results of all Virtual Agent or Generative Agent customer feedback surveys. **Sync Time:** 1h | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :----------------- | :-------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | 
:--------------------------------------------------------------------------------------------------------------------- | :--------- | :---- | :------------------ | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2025-11-05 00:00:00 | | no | | | | | | event\_ts | timestamp | The time of an given event. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2025-11-05 00:00:00 | | no | | | | | | company\_marker | varchar | Name of the company associated with the data. | acme corp | | | 2025-11-05 00:00:00 | | no | | | | | | customer\_id | varchar | Unique identifier generated by the ASAPP application for the customer. | abc123 | | | 2025-11-05 00:00:00 | | no | | | | | | issue\_id | varchar | Unique identifier generated by the ASAPP application for the issue or conversation. | abc123 or 123542 | | | 2025-11-05 00:00:00 | | no | | | | | | survey\_type | varchar | Type of survey i.e Virtual Agent or Generative Agent | SURVEY\_FEEDBACK\_TYPE\_VIRTUAL\_AGENT or SURVEY\_FEEDBACK\_TYPE\_GENERATIVE\_AGENT | | | 2025-11-05 00:00:00 | | no | | | | | | question | varchar | The question asked of the user. | VOC Score, endSRS rating, What did the agent do well, or what could the agent have done better? (1000 character limit) | | | 2025-11-05 00:00:00 | | no | | | | | | question\_category | varchar | The question category type. | rating, comment, levelOfEffort | | | 2025-11-05 00:00:00 | | no | | | | | | question\_type | varchar | The type of question. 
| rating, scale, radio | | | 2025-11-05 00:00:00 | | no | | | | | | answer | varchar | The customer's answer to the question. | 0, 1, 17, yes | | | 2025-11-05 00:00:00 | | no | | | | | | ordering | integer | The question ordering. | 0, 1, 3, 5 | | | 2025-11-05 00:00:00 | | no | | | | | Last Updated: 2026-01-22 00:00:00 UTC # File Exporter Source: https://docs.asapp.com/reporting/file-exporter Learn how to use File Exporter to retrieve data from Standalone ASAPP Services. Use ASAPP's File Exporter service to securely retrieve AI Services data via API. The service provides a specific link to access the requested data based on the file parameters of the request that include the feed, version, format, date, and time interval of interest. The File Exporter service is meant to be used as a batch mechanism for exporting data to your data warehouse, either on a scheduled basis (e.g. nightly, weekly) or for ad hoc analyses. Data that populates feeds for the File Exporter service updates once daily at 2:00AM UTC. Data feeds are not available by default. Reach out to your ASAPP account contact to ensure data feeds are enabled for your implementation. ## Before You Begin To use ASAPP's APIs, all apps must be registered through the AI Services Developer Portal. Once registered, each app will be provided unique API keys for ongoing use. Get your API credentials and learn how to set up AI Service APIs by visiting our [Developer Quick Start Guide](/getting-started/developers). ## Endpoints The File Exporter service uses six parameters to specify a target file: * `feed`: The name of the data feed of interest * `version`: The version number of the feed * `format`: The file format * `date`: The date of interest * `interval`: The time interval of interest * `fileName`: The data file name Each parameter is retrieved from a dedicated endpoint. 
Once all parameters are retrieved, request the target file from the `/fileexporter/v1/static/getfeedfile` endpoint, which takes these parameters in the request and returns a download URL.

* `POST` `/fileexporter/v1/static/listfeeds`
  Retrieve an array of feed names available for your implementation.
* `POST` `/fileexporter/v1/static/listfeedversions`
  Retrieve an array of versions available for a given data feed.
* `POST` `/fileexporter/v1/static/listfeedformats`
  Retrieve an array of available file formats for a given feed and version.
* `POST` `/fileexporter/v1/static/listfeeddates`
  Retrieve an array of available dates for a given feed/version/format.
* `POST` `/fileexporter/v1/static/listfeedintervals`
  Retrieve an array of available intervals for a given feed/version/format/date.
* `POST` `/fileexporter/v1/static/listfeedfiles`
  Retrieve an array of file names for a given feed/version/format/date/interval. Values for `file` will differ based on the requested `date` and `interval` parameters. Always call this endpoint prior to calling `/getfeedfile`.
* `POST` `/fileexporter/v1/static/getfeedfile`
  Retrieve a single file URL for the data specified using parameters returned from the above endpoints. In the `getfeedfile` request, all parameters are required except `interval`.

## Making Routine Requests

Only two requests are needed to export data on an ongoing basis for different timeframes. To export a file each time, make these two calls:

1. Call `/listfeedfiles` using the same `feed`, `version`, and `format` parameters, altering the `date` and `interval` parameters as necessary (`interval` is optional) to specify the time period of the data file you wish to retrieve. In response, you will receive the name(s) of the `file` needed for making the next request.
2. Call `/getfeedfile` with the same parameters as above and the `file` name parameter returned from `/listfeedfiles`. In response, you will receive the access `url`.
Your final request to `/getfeedfile` for the file `url` would look like this:

```json
{
  "feed": "feed_test",
  "version": "version=1",
  "format": "format=jsonl",
  "date": "dt=2022-06-27",
  "fileName": "file1.jsonl"
}
```

## Data Feeds

File Exporter makes the following data feeds available:

1. **Conversation State**: `staging_conversation_state`
   Combines ASAPP conversation identifiers with metadata including summaries, augmentation counts, intent, crafting times, and important timestamps.
2. **Utterance State**: `staging_utterance_state`
   Combines ASAPP utterance identifiers with metadata including sender type, augmentations, crafting times, and important timestamps. **NOTE:** Does not include utterance text.
3. **Utterances**: `utterances`
   Combines ASAPP conversation and utterance identifiers with utterance text and timestamps. Identifiers can be used to join utterance text with metadata from the utterance state feed.
4. **GenerativeAgent**: `generativeagent`
   Contains the per-conversation data for GenerativeAgent. [GenerativeAgent Feed Data can be found here](/generativeagent/reporting/data-reference)
5. **Free-Text Summaries**: `autosummary_free_text`
   Retrieves data from free-text summaries generated by AI Summary. This feed has one record per free-text summary produced and can have multiple summaries per conversation.
6. **Feedback**: `autosummary_feedback`
   Retrieves the text of the feedback submitted by the agent. Developers can join this feed to the AI Summary free-text feed using the summary ID.
7. **Structured Data**: `autosummary_structured_data`
   Retrieves structured data to extract information and insights from conversations in the form of yes/no answers (up to 20) from summaries generated by AI Summary.

[Click here to view the full schema](/reporting/fileexporter-feeds) for each feed table. Feed table names that include the prefix `staging_` are not referencing a lower environment; table names have no connection to environments.
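The two-call routine described above can be sketched in Python. This is a minimal illustration, not a reference client: the base URL and the authentication header names (`asapp-api-id`, `asapp-api-secret`) are assumptions for the sketch — substitute the host and credentials issued for your implementation via the AI Services Developer Portal.

```python
# Sketch of the File Exporter two-call export routine.
# ASSUMPTIONS: the base URL and auth header names below are illustrative
# placeholders; use the values provided for your ASAPP implementation.
import json
from urllib import request

BASE_URL = "https://api.sandbox.asapp.com"  # hypothetical host


def build_params(feed, version, fmt, date, interval=None, file_name=None):
    """Assemble the request body shared by /listfeedfiles and /getfeedfile.

    `interval` is optional; `file_name` is supplied only on the
    final /getfeedfile call.
    """
    body = {"feed": feed, "version": version, "format": fmt, "date": date}
    if interval is not None:
        body["interval"] = interval
    if file_name is not None:
        body["fileName"] = file_name
    return body


def post(path, body, api_key_id, api_key_secret):
    """POST a File Exporter request and decode the JSON response."""
    req = request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "asapp-api-id": api_key_id,          # assumed header name
            "asapp-api-secret": api_key_secret,  # assumed header name
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


# 1. List the file names for the period of interest:
# files = post("/fileexporter/v1/static/listfeedfiles",
#              build_params("feed_test", "version=1", "format=jsonl",
#                           "dt=2022-06-27"),
#              API_KEY_ID, API_KEY_SECRET)
#
# 2. Fetch the signed URL for one of the returned files:
# result = post("/fileexporter/v1/static/getfeedfile",
#               build_params("feed_test", "version=1", "format=jsonl",
#                            "dt=2022-06-27", file_name="file1.jsonl"),
#               API_KEY_ID, API_KEY_SECRET)
```

The same `build_params` output works for both calls, which matches the routine above: only `date`, `interval`, and `fileName` change between scheduled runs.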
# File Exporter Feed Schema Source: https://docs.asapp.com/reporting/fileexporter-feeds The tables below provide detailed information regarding the schema for exported data files available to you via the File Exporter API. ### Table: autosummary\_feedback The autosummary feedback table stores summary text submitted by the agent after they have reviewed and edited it. This export will be sent daily and contains the hour for time zone conversion later. **Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, summary\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | common | | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | common | | | external\_conversation\_id | VARCHAR(255) | Client-provided issue identifier. 
| vjs654 | | | | | no | | | common | | | agent\_id | varchar(255) | The agent identifier in the conversation provided by the customer. | cba321 | | | | | no | | | common | | | summary\_id | VARCHAR(36) | Unique identifier for AutoSummary feedback and free-text summary events | 57ffe572-e9dc-4546-963b-29d90b0d92a9 | | | | | no | | | AutoSummary | | | autosummary\_feedback\_ts | timestamp | The timestamp of the autosummary\_feedback\_summary event. | 2023-05-01 14:00:09 | | | | | no | | | AutoSummary | | | autosummary\_feedback | string | Text submitted with agent edits, summarizing the conversation from the autosummary freetext API call. | Customer chatted in to check whether the app worked | | | | | no | | | AutoSummary | | | company\_marker | varchar(255) | Identifier of the customer-company. | agnostic | | | | | no | | | common | | | dt | varchar(255) | Date string when summary feedback was submitted. | 2019-11-08 | | | | | no | | | common | | | hr | varchar(255) | Hour string when summary feedback was submitted. | 18 | | | | | no | | | common | | ### Table: autosummary\_free\_text The autosummary free text table stores the raw output of ASAPP's API. It is the unedited summary initially shown to the agent to be reviewed. This export will be sent daily and contains the hour for time zone conversion later. 
**Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, summary\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :--------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | common | | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | common | | | external\_conversation\_id | VARCHAR(255) | Client-provided issue identifier. | vjs654 | | | | | no | | | common | | | agent\_id | varchar(255) | The agent identifier in the conversation provided by the customer. | cba321 | | | | | no | | | common | | | summary\_id | VARCHAR(36) | Unique identifier for AutoSummary feedback and free-text summary events | 57ffe572-e9dc-4546-963b-29d90b0d92a9 | | | | | no | | | AutoSummary | | | autosummary\_free\_text\_ts | timestamp | The timestamp of the autosummary\_free\_text\_summary event. 
| 2023-05-01 14:00:09 | | | | | no | | | AutoSummary | | | autosummary\_free\_text | string | Unedited text summarizing the conversation at the time of the autosummary free text API call. | Customer chatted in to check whether the app worked | | | | | no | | | AutoSummary | | | is\_autosummary\_feedback\_used | integer | An indicator that the AutoSummary had a feedback summary. | 1 | | | | | no | | | AutoSummary | | | is\_autosummary\_feedback\_edited | integer | An indicator that the AutoSummary had a feedback summary that was edited. | 0 | | | | | no | | | AutoSummary | | | autosummary\_free\_text\_length | integer | Length of the FreeText AutoSummaries. Will only have a value when there is both freetext and feedback summaries. | 54 | | | | | no | | | AutoSummary | | | autosummary\_feedback\_length | integer | Length of the Feedback AutoSummaries. Will only have a value when there is both freetext and feedback summaries. | 54 | | | | | no | | | AutoSummary | | | autosummary\_levenshtein\_distance | integer | Levenshtein edit distances between the AutoSummaries FreeText and Feedback. Will only have a value when there is both freetext and feedback summaries. | 0 | | | | | no | | | AutoSummary | | | autosummary\_sentences\_removed | string | autosummary\_sentences\_removed contains the sentences that the freetext summary generates and the feedback summary edits or removes. | Customer called to pay their bill | | | | | no | | | AutoSummary | | | autosummary\_sentences\_added | string | autosummary\_sentences\_added contains the sentences added as part of feedback summary compared to freetext summary. | Customer called to pay the bill | | | | | no | | | AutoSummary | | | company\_marker | varchar(255) | Identifier of the customer-company. | agnostic | | | | | no | | | common | | | dt | varchar(255) | Date string when summary feedback was submitted. | 2019-11-08 | | | | | no | | | common | | | hr | varchar(255) | Hour string when summary feedback was submitted. 
| 18 | | | | | no | | | common | | ### Table: autosummary\_structured\_data The autosummary structured data table stores the raw output of ASAPP's API. These structured data outputs consist of LLM generated answers to yes/no questions along with extracted entities based on configuration settings. These outputs can be aggregated and packaged into business insights. This export will be sent daily and contains the hour for time zone conversion later. **Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, structured\_data\_id, structured\_data\_field\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | common | | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | common | | | external\_conversation\_id | VARCHAR(255) | Client-provided issue identifier. 
| vjs654 | | | | | no | | | common | | | agent\_id | varchar(255) | The agent identifier in the conversation provided by the customer. | cba321 | | | | | no | | | common | | | structured\_data\_id | varchar(36) | Unique identifier for AutoSummary structured data event | 57ffe572-e9dc-4546-963b-29d90b0d92a9 | | | | | no | | | common | | | structured\_data\_ts | timestamp | The timestamp of the autosummary\_structured\_data event. | 2023-05-01 14:00:09 | | | | | no | | | common | | | structured\_data\_field\_id | varchar(255) | The structured data id. | q\_issue\_escalated | | | | | no | | | common | | | structured\_data\_field\_name | varchar(255) | The structured data name. | Issue escalated | | | | | no | | | common | | | structured\_data\_field\_value | varchar(255) | The structured data value. | No | | | | | no | | | common | | | structured\_data\_field\_category | varchar(255) | The structured data category. | Outcome | | | | | no | | | common | | | company\_marker | varchar(255) | Identifier of the customer-company. | agnostic | | | | | no | | | common | | | dt | varchar(255) | Date string when summary structured data was generated. | 2019-11-08 | | | | | no | | | common | | | hr | varchar(255) | Hour string when summary structured data was generated. | 18 | | | | | no | | | common | | ### Table: contact\_entity\_generative\_agent hourly snapshot of contact grain generative\_agent data including both dimensions and metrics aggregated over "all time" (two days in practice). 
**Sync Time:** 1h **Unique Condition:** company\_marker, conversation\_id, contact\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :-------------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :------------------ | :------------------ | :----- | :-- | :------------ | :----------- | :------------ | | company\_marker | string | The ASAPP identifier of the company or test data source. | acme | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | contact\_id | string | | | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_turns\_\_turn\_ct | int | Number of turns. 
| 1 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_turns\_\_turn\_duration\_ms\_sum | bigint | Total number of milliseconds between PROCESSING\_START and PROCESSING\_END across all turns. | 2 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_turns\_\_utterance\_ct | int | Number of generative\_agent utterances. | 2 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_turns\_\_contains\_escalation | boolean | Boolean indicating the presence of a turn in the conversation that ended with an indication to escalate to a human agent. | 1 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_turns\_\_is\_contained | boolean | Boolean indicating whether or not the conversation was contained (NOT generative\_agent\_turns\_\_contains\_escalation). | 1 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_first\_task\_name | varchar(255) | Name of the first task entered by generative\_agent. | SomethingElse | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_last\_task\_name | varchar(255) | Name of the last task entered by generative\_agent. | SomethingElse | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_task\_ct | int | Number of tasks entered by generative\_agent. | 2 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | | generative\_agent\_tasks\_\_configuration\_id | varchar(255) | The configuration version that produced generative\_agent actions. | 4ea5b399-f969-49c6-8318-e2c39a98e817 | | | 2025-01-06 00:00:00 | 2025-01-06 00:00:00 | no | | | | | ### Table: staging\_conversation\_state This issue-grain table provides a consolidated view of metrics produced across multiple ASAPP services for a given issue. The table is populated daily and includes hour-level data for time zone conversion. 
**Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, dt, hr | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :------------- | :------------ | | conversation\_id | timestamp | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | Conversation | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. | 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | Conversation | | | first\_event\_ts | timestamp | Timestamp of the first event associated with the conversation\_id. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | conversation\_start\_ts | timestamp | Timestamp indicating the start of the conversation as provided by the customer; this will be null if is not provided or conversation started on a previous day. Alternative timestamps include the customer\_first\_utterance\_ts and agent\_first\_response\_ts timestamps or the first\_event\_ts (earliest time for ASAPP involvement). 
| 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | external\_conversation\_id | VARCHAR(255) | The conversation id provided by the customer. | 750068130001 | | | | | no | | | Conversation | | | conversation\_customer\_effective\_ts | timestamp | The timestamp of the last change to the customer\_id provided by the customer. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | customer\_id | varchar(255) | The customer identifier provided by the customer. | abc123 | | | | | no | | | Conversation | | | conversation\_agent\_effective\_ts | timestamp | The timestamp of the last change to the agent\_id provided by the customer. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | last\_agent\_id | varchar(191) | The last agent identifier in the conversation provided by the customer. | abc123 | | | | | no | | | Conversation | | | all\_agent\_ids | | A list of all the agent identifiers within the conversation provided by the customer. | \[abc123,abc456] | | | | | no | | | Conversation | | | customer\_utterance\_ct | | Count of all customer messages. | 5 | | | | | no | | | Conversation | | | agent\_utterance\_ct | | Count of all agent messages. | 16 | | | | | no | | | Conversation | | | customer\_first\_utterance\_ts | timestamp | Timestamp of the first customer utterance. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | agent\_first\_utterance\_ts | | Timestamp of the first agent utterance. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | customer\_last\_utterance\_ts | timestamp | Timestamp of the last customer utterance. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | agent\_last\_utterance\_ts | | Timestamp of the last agent utterance. | 2019-11-08 14:00:07 | | | | | no | | | Conversation | | | autosuggest\_utterance\_ct | | Count of utterances where AutoSuggest was used. | 6 | | | | | no | | | AutoCompose | | | autocomplete\_utterance\_ct | | Count of utterances where AutoComplete was used. 
| 2 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_utterance\_ct | | Count of utterances where Phrase AutoComplete was used. | 0 | | | | | no | | | AutoCompose | | | custom\_drawer\_utterance\_ct | | Count of utterances where Custom Drawer was used. | 1 | | | | | no | | | AutoCompose | | | custom\_insert\_utterance\_ct | | Count of utterances where Custom Insert was used. | 0 | | | | | no | | | AutoCompose | | | global\_insert\_utterance\_ct | | Count of utterances where Global Insert was used. | 1 | | | | | no | | | AutoCompose | | | fluency\_apply\_utterance\_ct | | Count of utterances where Fluency Apply was used. | 0 | | | | | no | | | AutoCompose | | | fluency\_undo\_utterance\_ct | | Count of utterances where Fluency Undo was used. | 0 | | | | | no | | | AutoCompose | | | autosummary\_structured\_summary\_tags\_event\_ts | timestamp | The timestamp of the last autosummary\_structured\_summary\_tags event. | 2019-11-08 14:00:07 | | | | | no | | | AutoSummary | | | autosummary\_tags | string | Comma-separated list of tags or codes indicating key topics of this conversation. | `{"server":"some-server","server_version":"unknown"}` | | | | | no | | | AutoSummary | | | autosummary\_free\_text\_summary\_event\_ts | timestamp | The timestamp of the last autosummary\_free\_text\_summary event. | 2019-11-08 14:00:07 | | | | | no | | | AutoSummary | | | autosummary\_text | string | Text summarizing the conversation. | Unresponsive Customer. | | | | | no | | | AutoSummary | | | is\_autosummary\_structured\_summary\_tags\_used | | An indicator that the conversation had AutoSummary structured summary tags. When aggregating from conversation by day to conversation use MAX(). | 1 | | | | | no | | | AutoSummary | | | is\_autosummary\_free\_text\_summary\_used | | An indicator that the conversations had AutoSummary free text summary. When aggregating from conversation by day to conversation use MAX(). 
| 1 | | | | | no | | | AutoSummary | | | is\_autosummary\_feedback\_used | int | An indicator that the conversation had AutoSummary feedback summary. When aggregating from conversation by day to conversation use MAX(). | 1 | | | | | no | | | AutoSummary | | | is\_autosummary\_used | | An indicator that the conversation that had any response (tag, free text, feedback) in Autosummary. When aggregating from conversation by day to conversation use MAX(). | 1 | | | | | no | | | AutoSummary | | | is\_autosummary\_feedback\_edited | int | An indicator that the conversation had at least one AutoSummary that received Feedback with an edited summary. When aggregating from conversation by day to conversation use MAX(). | 1 | | | | | no | | | Conversation | | | autosummary\_feedback\_ct | bigint | Count of AutoSummaries that received Feedback for the conversation. | 4 | | | | | no | | | Conversation | | | autosummary\_feedback\_edited\_ct | bigint | Count of AutoSummaries that received edited Feedback for the conversation. | 3 | | | | | no | | | Conversation | | | autosummary\_free\_text\_length\_sum | bigint | Sum of the length of all the FreeText AutoSummaries for the conversation. | 80 | | | | | no | | | Conversation | | | autosummary\_feedback\_length\_sum | bigint | Sum of the length of all the Feedback AutoSummaries for the conversation. | 120 | | | | | no | | | Conversation | | | autosummary\_levenshtein\_distance\_sum | bigint | Sum of the Levenshtein edit distances between the AutoSummaries FreeText and Feedback. | 40 | | | | | no | | | Conversation | | | first\_intent\_effective\_ts | timestamp | The timestamp of the last first\_intent event. | 2019-11-08 14:00:07 | | | | | no | | | JourneyInsight | | | first\_intent\_message\_id | string | The id of the message that was sent with the first intent. 
| 01GA9V1F2B7Q4Y8REMRZ2SXVRT | | | | | no | | | JourneyInsight | | | first\_intent\_intent\_code | string | The intent code associated with the rule that was sent as the first intent within the conversation. | INCOMPLETE | | | | | no | | | JourneyInsight | | | first\_intent\_intent\_name | string | The intent name correspondent to the intent\_code that was sent as the first intent within the conversation. | INCOMPLETE | | | | | no | | | JourneyInsight | | | first\_intent\_is\_known\_good | boolean | Indicates if the classification for the first\_intent data comes from a known good. | FALSE | | | | | no | | | JourneyInsight | | | conversation\_metadata\_effective\_ts | timestamp | The timestamp of the last conversation metadata | 2019-11-08 14:00:07 | | | | | no | | | Metadata | | | conversation\_metadata\_lob\_id | string | Line of business ID from Conversation Metadata | 1038 | | | | | no | | | Metadata | | | conversation\_metadata\_lob\_name | string | Line of business descriptive name from Conversation Metadata | manufacturing | | | | | no | | | Metadata | | | conversation\_metadata\_agent\_group\_id | string | Agent group ID from Conversation Metadata | group59 | | | | | no | | | Metadata | | | conversation\_metadata\_agent\_group\_name | string | Agent group descriptive name from Conversation Metadata | groupXYZ | | | | | no | | | Metadata | | | conversation\_metadata\_agent\_routing\_code | string | Agent routing attribute from Conversation Metadata | route-13988 | | | | | no | | | Metadata | | | conversation\_metadata\_campaign | string | Campaign from Conversation Metadata | campaign-A | | | | | no | | | Metadata | | | conversation\_metadata\_device\_type | string | Client device type from Conversation Metadata | TABLET | | | | | no | | | Metadata | | | conversation\_metadata\_platform | string | Client platform type from Conversation Metadata | IOS | | | | | no | | | Metadata | | | conversation\_metadata\_company\_segment | \[]string | Company segment from 
Conversation Metadata | \["Sales","Marketing"] | | | | | no | | | Metadata | | | conversation\_metadata\_company\_subdivision | string | Company subdivision from Conversation Metadata | operating | | | | | no | | | Metadata | | | conversation\_metadata\_business\_rule | string | Business rule from Conversation Metadata | Apply customer's discount | | | | | no | | | Metadata | | | conversation\_metadata\_entry\_type | string | Type of entry from Conversation Metadata, e.g., proactive vs reactive | reactive | | | | | no | | | Metadata | | | conversation\_metadata\_operating\_system | string | Operating system from Conversation Metadata | OPERATING\_SYSTEM\_MAC\_OS | | | | | no | | | Metadata | | | conversation\_metadata\_browser\_type | string | Browser type from Conversation Metadata | Safari | | | | | no | | | Metadata | | | conversation\_metadata\_browser\_version | string | Browser version from Conversation Metadata | 14.1.2 | | | | | no | | | Metadata | | | contact\_journey\_contact\_id | string | (NULLIFIED) Conversation Contact ID | | | | | | no | | | Contact | | | contact\_journey\_last\_conversation\_inactive\_ts | timestamp | Last time the conversation went inactive (may be limited to voice conversations) | 2023-06-11 18:45:29 | | | | | no | | | Contact | | | contact\_journey\_first\_contact\_utterance\_ts | timestamp | First utterance in the contact | 2023-06-11 18:32:21 | | | | | no | | | Contact | | | contact\_journey\_last\_contact\_utterance\_ts | timestamp | Last utterance in the contact | 2023-06-11 18:40:29 | | | | | no | | | Contact | | | contact\_journey\_contact\_start\_ts | timestamp | First event in the contact | 2023-06-11 18:30:29 | | | | | no | | | Contact | | | contact\_journey\_contact\_end\_ts | timestamp | Last event in the contact | 2023-06-11 18:58:29 | | | | | no | | | Contact | | | aug\_metrics\_effective\_ts | timestamp | Timestamp of the last augmentation metrics event | "2023-08-09T19:21:34.224620050Z" | | | | | no | | | 
AutoCompose | | | augmented\_utterances\_ct | | Count of all utterances that used any augmentation feature (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | multiple\_augmentation\_features\_used\_ct | | Count utterances where multiple augmentation features (excluding fluency) were used | 100 | | | | | no | | | AutoCompose | | | autosuggest\_ct | | Count of utterances where only AutoSuggest augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | autocomplete\_ct | | Count of utterances where only AutoComplete augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_ct | | Count of utterances where only Phrase AutoComplete augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | custom\_drawer\_ct | | Count of utterances where only Custom Drawer augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | custom\_insert\_ct | | Count of utterances where only Custom Insert augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | global\_insert\_ct | | Count of utterances where only Global Insert augmentation is used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | unknown\_augmentation\_ct | | Count of utterances where only an unidentified augmentation was used (excluding fluency) | 100 | | | | | no | | | AutoCompose | | | fluency\_apply\_ct | | Count of utterances where Fluency Apply augmentation is used | 100 | | | | | no | | | AutoCompose | | | fluency\_undo\_ct | | Count of utterances where Fluency Undo augmentation is used | 100 | | | | | no | | | AutoCompose | | | message\_edits\_ct | bigint | Total accumulated sum of the number of characters entered or deleted by the user and not by augmentation, after the most recent augmentation that replaces all text in the composer (AUTOSUGGEST, AUTOCOMPLETE, CUSTOM\_DRAWER). 
If the agent selected a suggestion and sends without any changes, this number is 0. | 100 | | | | | no | | | AutoCompose | | | time\_to\_action\_seconds | float | Total accumulated sum of the number of seconds between the agent sending their previous message and their first action for composing this message. | 100 | | | | | no | | | AutoCompose | | | crafting\_time\_seconds | float | Total accumulated sum of the number of seconds between the agent's first action and last action for composing this message. | 100 | | | | | no | | | AutoCompose | | | dwell\_time\_seconds | float | Total accumulated sum of the number of seconds between the agent's last action for composing this message and the message being sent | 100 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_presented\_ct | bigint | Total accumulated sum of the number of phrase autocomplete suggestions presented to the agent. Resets when augmentation\_type resets | 100 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_selected\_ct | bigint | Total accumulated sum of the number of phrase autocomplete suggestions selected by the agent. Resets when augmentation\_type resets. | 100 | | | | | no | | | AutoCompose | | | single\_intent\_effective\_ts | timestamp | The timestamp of the last single intent event. | 2019-11-08 14:00:07 | | | | | no | | | | | | single\_intent\_intent\_code | string | Intent code | CHECK\_COVERAGE | | | | | no | | | | | | single\_intent\_intent\_name | string | Intent name | Check Coverage | | | | | no | | | | | | single\_intent\_messages\_considered\_ct | bigint | How many utterances were considered to calculate a single intent code. | 2 | | | | | no | | | | | | company\_marker | string | Identifier of the customer-company. | agnostic | | | | | no | | | Conversation | | | dt | string | Date string representing the date during which the conversation state happened. 
| 2019-11-08 | | | | | no | | | Conversation | | | hr | string | Hour string representing the hour during which the conversation state happened. | 18 | | | | | no | | | Conversation | | ### Table: staging\_utterance\_state This utterance-grain table contains insights for individual conversation messages. Each record in this dataset represents an individual utterance, or message, within a conversation. The table is populated daily and includes hour-level data for time zone conversion purposes. **Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, message\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | Conversation | | | message\_id | string | This is the ULID id of a given message. | 01GASGE3WAG84BGARCS238Z0FG | | | | | no | | | Conversation | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | Conversation | | | chat\_message\_event\_ts | timestamp | The timestamp of the last chat\_message event. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | external\_conversation\_id | VARCHAR(255) | The issue or conversation id from the customer/client perspective. | ffe8a632-545f-4c2e-a0ae-c296e6ad4a22 | | | | | no | | | Conversation | | | sender\_type | string | The type of sender. | SENDER\_CUSTOMER | | | | | no | | | Conversation | | | sender\_id | string | Unique identifier of the sender user. | ffe8a632-545f-4c2e-a0ae-c296e6ad4a25 | | | | | no | | | Conversation | | | private\_message\_ct | bigint | Number of private messages; a private message is one exchanged only between agents/admins, not with the customer. | 1 | | | | | no | | | Conversation | | | tags | string | Key-value map of additional properties. | | | | | | no | | | Conversation | | | utterance\_augmentations\_effective\_ts | timestamp | The timestamp of the last utterance\_augmentations event. | 2018-06-23 21:28:23 | | | | | no | | | AutoCompose | | | augmentation\_type\_list | string | DEPRECATED Type of augmentation used. If multiple augmentations were used, a comma-separated list of types. | AUTOSUGGEST,AUTOCOMPLETE | | | | | no | | | AutoCompose | | | num\_edits\_ct | bigint | Number of edits made to an augmented message. | 2 | | | | | no | | | AutoCompose | | | selected\_suggestion\_text | string | DEPRECATED The text inserted into the composer by the last augmentation that replaced all text (AUTOSUGGEST, AUTOCOMPLETE, CUSTOM\_DRAWER). | Hi. How may I help you? | | | | | no | | | AutoCompose | | | time\_to\_action\_seconds | float | Number of seconds between the agent sending their previous message and their first action for composing this message. | 3.286 | | | | | no | | | AutoCompose | | | crafting\_time\_seconds | float | Number of seconds between the agent's first action and last action for composing this message.
| 0.0 | | | | | no | | | AutoCompose | | | dwell\_time\_seconds | float | Number of seconds between the agent's last action for composing this message and the message being sent. | 0.844 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_presented\_ct | bigint | Number of phrase autocomplete suggestions presented to the agent. | 1 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_selected\_ct | bigint | Number of phrase autocomplete suggestions selected by the agent. | 0 | | | | | no | | | AutoCompose | | | utterance\_message\_metrics\_effective\_ts | timestamp | The timestamp of the last utterance\_message\_metrics event. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | utterance\_length | int | Length of utterance message. | 13 | | | | | no | | | Conversation | | | agent\_metadata\_effective\_ts | timestamp | The timestamp of the last agent\_metadata event. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | agent\_metadata\_external\_agent\_id | string | The external rep/agent identifier. | abc123 | | | | | no | | | Conversation | | | agent\_metadata\_event\_ts | timestamp | The timestamp of when this event happened (system driven). | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | agent\_metadata\_start\_ts | timestamp | The timestamp of when the agent started. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | agent\_metadata\_lob\_id | string | Line of business id. | lobId\_7 | | | | | no | | | Conversation | | | agent\_metadata\_lob\_name | string | Line of business descriptive name. | lobName\_7 | | | | | no | | | Conversation | | | agent\_metadata\_group\_id | string | Group id. | groupId\_7 | | | | | no | | | Conversation | | | agent\_metadata\_group\_name | string | Group descriptive name. | groupName\_7 | | | | | no | | | Conversation | | | agent\_metadata\_location | string | Agent's supervisor Id. 
| supervisorId\_7 | | | | | no | | | Conversation | | | agent\_metadata\_languages | string | Agent's languages. | `[{"value":"en-us"}]` | | | | | no | | | Conversation | | | agent\_metadata\_concurrency | int | Number of issues that the agent can take at a time. | 3 | | | | | no | | | Conversation | | | agent\_metadata\_category\_label | string | An agent category label that indicates the types of workflows these agents have access to or problems they solve. | categoryLabel\_7 | | | | | no | | | Conversation | | | agent\_metadata\_account\_access\_level | string | Agent levels mapping to the level of access they have to make changes to customer accounts. | accountAccessLevel\_7 | | | | | no | | | Conversation | | | agent\_metadata\_ranking | int | Agent's rank. | 2 | | | | | no | | | Conversation | | | agent\_metadata\_vendor | string | Agent's vendor. | vendor\_7 | | | | | no | | | Conversation | | | agent\_metadata\_job\_title | string | Agent's job title. | jobTitle\_7 | | | | | no | | | Conversation | | | agent\_metadata\_job\_role | string | Agent's role. | jobRole\_7 | | | | | no | | | Conversation | | | agent\_metadata\_work\_shift | string | The hours or shift name they work. | workShift\_7 | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_01\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_01\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr1\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_02\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_02\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). 
| attr2\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_03\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_03\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr3\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_04\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_04\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr4\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_05\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_05\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr5\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_06\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_06\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr6\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_07\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_07\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). 
| attr7\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_08\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_08\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr8\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_09\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_09\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr9\_name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_10\_name | string | Name of the arbitrary attribute (not indexed or used internally, used as pass through). | name | | | | | no | | | Conversation | | | agent\_metadata\_attributes\_attr\_10\_value | string | Value of the arbitrary attribute (not indexed or used internally, used as pass through). | attr10\_name | | | | | no | | | Conversation | | | augmented\_utterances\_ct | int | Count of all utterances that used any augmentation feature (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | multiple\_augmentation\_features\_used\_ct | int | Count utterances where multiple augmentation features (excluding fluency) were used. 
| 1 | | | | | no | | | AutoCompose | | | autosuggest\_ct | int | Count of utterances where only AutoSuggest augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | autocomplete\_ct | int | Count of utterances where only AutoComplete augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | phrase\_autocomplete\_ct | int | Count of utterances where only Phrase AutoComplete augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | custom\_drawer\_ct | int | Count of utterances where only Custom Drawer augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | custom\_insert\_ct | int | Count of utterances where only Custom Insert augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | global\_insert\_ct | int | Count of utterances where only Global Insert augmentation is used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | unknown\_augmentation\_ct | int | Count of utterances where only an unidentified augmentation was used (excluding fluency) | 1 | | | | | no | | | AutoCompose | | | fluency\_apply\_ct | int | Count of utterances where Fluency Apply augmentation is used | 1 | | | | | no | | | AutoCompose | | | fluency\_undo\_ct | int | Count of utterances where Fluency Undo augmentation is used | 1 | | | | | no | | | AutoCompose | | | company\_marker | string | Identifier of the customer-company. | agnostic | | | | | no | | | Conversation | | | dt | string | Date string representing the date during which the utterance state happened. | 2018-06-23 | | | | | no | | | Conversation | | | hr | string | Hour string representing the hour during which the utterance state happened. | 21 | | | | | no | | | Conversation | | ### Table: utterances This S3 feed captures raw utterances, enabling customers to map message IDs and metadata to specific utterances.
Each record in this feed represents an individual message within a conversation, providing utterance-level insights. The feed remains minimal and secure, including a comprehensive mapping of message IDs to their corresponding utterances, information not available in the utterance state file. For security purposes, this feed will only be accessible externally, retaining a maximum of 32 days of data before purging. The feed will be exported daily, with time-stamped data for time zone conversion. **Sync Time:** 1d **Unique Condition:** company\_marker, conversation\_id, message\_id | Column | Type | Description | Example | Aggregates | Notes | Date Added | Date Modified | Ignore | PII | release state | Specific Use | Feature Group | | :------------------------- | :----------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------- | :--------- | :---- | :--------- | :------------ | :----- | :-- | :------------ | :----------- | :------------ | | conversation\_id | string | Unique identifier generated by the ASAPP application for the issue or conversation. | ABC21352352 | | | | | no | | | Conversation | | | message\_id | | This is the ULID id of a given message. | 01GASGE3WAG84BGARCS238Z0FG | | | | | no | | | Conversation | | | instance\_ts | timestamp | The time window of computed elements. This window is usually 15 minutes or 1 hour depending on the generation time. Times are rounded down to the nearest interval (for a time of 12:34 and an interval of 15m, this is 12:30). As an example, for a 15 minute interval an instance\_ts of 12:30 implies events from 12:30 -> 12:45. All times are in UTC. 
| 2019-11-08 14:00:06.957000+00:00 | | | | | no | | | Conversation | | | chat\_message\_event\_ts | | The timestamp of the last chat\_message event. | 2018-06-23 21:28:23 | | | | | no | | | Conversation | | | external\_conversation\_id | VARCHAR(255) | The issue or conversation id from the customer/client perspective. | ffe8a632-545f-4c2e-a0ae-c296e6ad4a22 | | | | | no | | | Conversation | | | utterance | | Text of the utterance message. | Hello, I need to talk to an agent | | | | | no | | | Conversation | | | company\_marker | string | Identifier of the customer-company. | agnostic | | | | | no | | | Conversation | | | dt | string | Date string representing the date during which the utterance state happened. | 2018-06-23 | | | | | no | | | Conversation | | | hr | string | Hour string representing the hour during which the utterance state happened. | 21 | | | | | no | | | Conversation | | Last Updated: 2025-01-16 06:37:08 UTC # Metadata Ingestion API Source: https://docs.asapp.com/reporting/metadata-ingestion Learn how to send metadata via Metadata Ingestion API. Customers with AI Services implementations use ASAPP's Metadata Ingestion API to send key attributes about conversations, customers, and agents. Developers can join metadata with AI Service output data to sort and filter reports and analyses using attributes important to your business. Metadata Ingestion API is not accessible by default. Reach out to your ASAPP account contact to ensure it is enabled for your implementation. ## Before You Begin ASAPP provides an AI Services [Developer Portal](/getting-started/developers). Within the portal, developers can: * Access relevant API documentation (e.g., OpenAPI reference schemas) * Access API keys for authorization * Manage user accounts and apps In order to use ASAPP's APIs, all apps must be registered through the portal. Once registered, each app will be provided unique API keys for ongoing use. 
Visit the [Get Started](/getting-started/developers) page on the Developer Portal for instructions on creating a developer account, managing teams and apps, and setup for using AI Service APIs. ## Endpoints The Metadata Ingestion endpoints are used to send information about agents, conversations, and customers. Metadata can be sent for a single entity (e.g., one agent) or for multiple entities at once (e.g., several hundred agents) in a batch format. ### Agent The OpenAPI specification for each agent endpoint shows the types of metadata the API accepts. Examples include information about lines of business, groups, locations, supervisors, languages spoken, vendor, job role, and email. The endpoints also accept custom-defined `attributes` in key-value pairs if no existing field in the schema suits the type of metadata you wish to upload. * [`POST /metadata-ingestion/v1/single-agent-metadata`](/apis/metadata/add-an-agent-metadata) * Use this endpoint to add metadata for a single agent. * [`POST /metadata-ingestion/v1/many-agent-metadata`](/apis/metadata/add-multiple-agent-metadata) * Use this endpoint to add metadata for a batch of agents all at once. ### Conversation The OpenAPI specification for each conversation endpoint shows the types of metadata the API accepts. Examples include unique identifiers, lines of business, group and subdivision identifiers, routing codes, associated campaigns and business rules, browser and device information. The endpoints also accept custom-defined `attributes` in key-value pairs if no existing field in the schema suits the type of metadata you wish to upload. * [`POST /metadata-ingestion/v1/single-convo-metadata`](/apis/metadata/add-a-conversation-metadata) * Use this endpoint to add metadata for a single conversation. * [`POST /metadata-ingestion/v1/many-convo-metadata`](/apis/metadata/add-multiple-conversation-metadata) * Use this endpoint to add metadata for a batch of conversations all at once. 
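As an illustration of how such a request body might be assembled before sending, here is a minimal sketch for `POST /metadata-ingestion/v1/single-convo-metadata`. The field names used below are illustrative assumptions, not the authoritative schema; consult the OpenAPI reference in the Developer Portal for the exact field names and types your implementation accepts.

```python
import json


def build_convo_metadata(external_conversation_id: str, lob_name: str, **attributes) -> str:
    """Assemble an illustrative JSON body for the single-conversation
    metadata endpoint. Anything without a dedicated schema field goes
    into the custom `attributes` key-value map."""
    body = {
        # Hypothetical field names -- check the OpenAPI reference.
        "externalConversationId": external_conversation_id,
        "lineOfBusiness": lob_name,
        "attributes": attributes,
    }
    return json.dumps(body)


payload = build_convo_metadata(
    "ffe8a632-545f-4c2e-a0ae-c296e6ad4a22",
    "manufacturing",
    campaign="campaign-A",
    deviceType="TABLET",
)
```

The resulting string would be sent as the request body, authenticated with the API keys issued to your registered app.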
### Customer The OpenAPI specification for each customer endpoint shows the types of metadata the API accepts. Examples include unique identifiers, statuses, contact details, and location information. The endpoints also accept custom-defined `attributes` in key-value pairs if no existing field in the schema suits the type of metadata you wish to upload. * [`POST /metadata-ingestion/v1/single-customer-metadata`](/apis/metadata/add-a-customer-metadata) * Use this endpoint to add metadata for a single customer. * [`POST /metadata-ingestion/v1/many-customer-metadata`](/apis/metadata/add-multiple-customer-metadata) * Use this endpoint to add metadata for a batch of customers all at once. # Building a Real-Time Event API Source: https://docs.asapp.com/reporting/real-time-event-api Learn how to implement ASAPP's real-time event API to receive activity, journey, and queue state updates. ASAPP provides real-time access to events, enabling customers to power internal use cases. Typical use cases that benefit from real-time ASAPP events include: * Tracking the end-user journey through ASAPP * Supporting workforce management needs * Integrating with customer-maintained CRM systems ASAPP's real-time events provide raw data. Batch analytics and reporting handle complex processing, such as aggregation or deduplication. ASAPP presently supports three real-time event feeds: 1. **Activity**: Agent status change events, for tracking schedule adherence 2. **Journey**: Events denoting milestones in a conversation, for tracking the customer journey 3. **Queue State**: Updates on queues for tracking size and estimated wait times In order to utilize these available real-time events, a customer will need to configure an API endpoint service under the customer's control. 
The balance of this document describes the high-level tasks a customer will need to accomplish in order to receive real-time events from ASAPP, as well as further information on the events available from ASAPP. ## Architecture Discussion Upon a customer's request, ASAPP can provide several types of real-time event data. Note that ASAPP can separately enhance standard real-time events to accommodate specific customer requirements. Such enhancements would usually be specified and implemented as part of ASAPP's regular product development process. *(Diagram: Data-ERTAPI-Arch -- real-time event delivery architecture)* The diagram above provides a high-level view of how a customer-maintained service that receives real-time ASAPP events might be designed: a service that runs on ASAPP-controlled infrastructure pushes real-time event data to one or more HTTP endpoints maintained by the customer. For each individual event, the ASAPP service makes one POST request to the endpoint. ASAPP transmits event data using mTLS-based authentication (see [Securing Endpoints with Mutual TLS](/reporting/secure-data-retrieval#certificate-configuration) for details). ### Customer Requirements * The customer must implement a POST API endpoint to handle the event messages. * The customer and ASAPP must develop the mTLS authentication integration to secure the API endpoint. * All ASAPP real-time "raw" events will post to the same endpoint; the customer is expected to filter the received events to their needs based on name and event type. * Each ASAPP real-time "processed" reporting feed can be configured to post to one arbitrary endpoint, at the customer's specified preference (i.e., each feed can post to a separate URI, all can post to the same URI, or any combination required by the customer's use case).
It should be noted that real-time events do not implement the de-duplication and grouping of ASAPP's batch reporting feeds; rather, these real-time events provide building blocks for the customer to aggregate and build on. When making use of ASAPP's real-time events, the customer will be responsible for grouping, de-duplication, and aggregation of related events as required by the customer's particular use case. The events include metadata fields to facilitate such tasks. ### Endpoint Sizing The endpoint configured by the customer should be provisioned with sufficient scale to receive events at the rate generated by the customer's ASAPP implementation. As a rule of thumb, customers can expect: * A voice call will generate on the order of 100 events per issue * A text chat will generate on the order of 10 events per issue So, for example, if the customer's application services 1000 issues per minute, that customer should expect their endpoint to receive 10,000 -- 100,000 messages per minute, or on the order of 1,000 messages per second. ### Endpoint Configuration ASAPP can configure its service with the following parameters: * **url:** The destination URL of the customer API endpoint that is set up to handle POST HTTP requests. * **timeout\_ms:** The number of milliseconds to wait for an HTTP 200 "OK" response before timing out. * **retries:** The number of times to retry sending a message after a failed delivery. * **event\_list (optional):** List of `event_types` to send. If the list is empty, all events for this feed are sent. List only the necessary `event_type` values to reduce unnecessary traffic. If the number of retries is exceeded and the customer's API is unable to handle any particular message, that message will be dropped. Real-time information lost in this way will typically be available in historical reporting feeds.
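The sizing rules of thumb above can be turned into a quick capacity estimate. A minimal sketch, using the stated approximations (~100 events per voice issue, ~10 per chat issue); the resulting numbers are order-of-magnitude estimates, not guarantees of actual traffic:

```python
# Rough endpoint-sizing estimate from the rules of thumb above.
EVENTS_PER_VOICE_ISSUE = 100
EVENTS_PER_CHAT_ISSUE = 10


def peak_events_per_second(voice_issues_per_min: float, chat_issues_per_min: float) -> float:
    """Approximate event rate the customer endpoint must absorb."""
    events_per_min = (voice_issues_per_min * EVENTS_PER_VOICE_ISSUE
                      + chat_issues_per_min * EVENTS_PER_CHAT_ISSUE)
    return events_per_min / 60.0


# 1000 voice issues/minute -> ~100,000 events/minute,
# i.e. on the order of 1,000+ events per second.
rate = peak_events_per_second(voice_issues_per_min=1000, chat_issues_per_min=0)
```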
## Real-time Overview ASAPP's standard real-time events include data representing human interactions and general issue lifecycle information from the ASAPP feeds named `com.asapp.event.activity`, `com.asapp.event.journey`, and `com.asapp.event.queue`. In the future, when additional event sources are added, the name of the stream will reflect the event source. ## Payload Schema Each of ASAPP's feeds will deliver a single event's data in a payload composed of a two-level JSON object. The delivered payload includes: 1. Routing metadata at the top level common to all events. *A small set of fields that should always be present for all events, used for routing, filtering, and deduplication.* 2. Metadata common to all events. *These fields should usually be present for all events to provide meta-information on the event. Some fields may be omitted if they do not apply to the specific feed.* 3. Data specific to the event feed. *Some fields may be omitted, but the same total set can be expected for each event of the same origin.* 4. Details specific to the event type. The schema omits null fields -- the customer's API is expected to interpret missing keys as null. **Versioning** Minor-version upgrades to the events are expected to be backwards-compatible; major-version updates typically include an interface-breaking change that may require the customer to update their API in order to take advantage of new features. ## Activity Feed The agent activity feed provides a series of events for agent login and status changes. ASAPP processes the event data minimally before pushing it into the `activity` feed to: * Convert internal flags to meaningful human-readable strings * Filter the feed to include only data fields of potential interest to the customer ASAPP's `activity` feed does not implement complex event processing (e.g., aggregation based on time windows, groups of events, de-duplication, or system state tracking).
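Because the schema omits null fields, a receiving service should read payload keys defensively rather than indexing directly. A minimal sketch of such a parser (the set of fields extracted is illustrative):

```python
import json


def parse_event(raw: str) -> dict:
    """Pull routing fields out of a real-time event payload.
    Missing keys are interpreted as null via dict.get(), matching the
    convention that the schema omits null fields."""
    event = json.loads(raw)
    meta = event.get("meta_data", {})
    return {
        "name": event.get("name"),
        "event_type": event.get("event_type"),
        "event_id": event.get("event_id"),
        "issue_id": meta.get("issue_id"),      # may be absent -> None
        "session_id": meta.get("session_id"),  # may be absent -> None
    }


parsed = parse_event(
    '{"name": "com.asapp.event.activity", "event_type": "UNKNOWN", '
    '"event_id": "abc-123", "meta_data": {"issue_id": "i-1"}}'
)
```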
Any required aggregation or deduplication should be executed by the customer after receiving `activity` events. ### Sample Event JSON ```json theme={null} { "api_version": "v1.3.0", "name": "com.asapp.event.activity", "meta_data": { "create_time": "2022-06-21T20:10:24.411Z", "event_time": "2022-06-21T20:10:24.411Z", "session_id": "string", "issue_id": "string", "company_subdivision": "string", "company_id": "string", "company_segments": [ "string" ], "client_id": "string", "client_type": "SMS" }, "data": { "rep_id": "string", "desk_mode": "UNKNOWN", "rep_name": "string", "agent_given_name": "string", "agent_family_name": "string", "agent_display_name": "string", "external_rep_id": "string", "max_slots": 0, "queue_ids": [ "string" ], "queue_names": [ "string" ] }, "event_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "event_type": "UNKNOWN", "details": { "status_updated_ts": "2022-06-21T20:10:24.411Z", "status_description": "string", "routing_status_updated_ts": "2022-06-21T20:10:24.411Z", "routing_status": "UNKNOWN", "assignment_load_updated_ts": "2022-06-21T20:10:24.411Z", "assigned_customer_ct": 0, "previous_routing_status_updated_ts": "2022-06-21T20:10:24.411Z", "previous_routing_status": "UNKNOWN", "previous_routing_status_duration_sec": 0, "previous_routing_status_start_ts": "2022-06-21T20:10:24.411Z", "utilization_5_min_updated_ts": "2022-06-21T20:10:24.411Z", "utilization_5_min_window_start_ts": "2022-06-21T20:10:24.411Z", "utilization_5_min_window_end_ts": "2022-06-21T20:10:24.411Z", "utilization_5_min_any_status": { "linear_sec": 0, "linear_utilized_sec": 0, "cumulative_sec": 0, "cumulative_utilized_sec": 0 }, "utilization_5_min_active": { "linear_sec": 0, "linear_utilized_sec": 0, "cumulative_sec": 0, "cumulative_utilized_sec": 0 }, "utilization_5_min_away": { "linear_sec": 0, "linear_utilized_sec": 0, "cumulative_sec": 0, "cumulative_utilized_sec": 0 }, "utilization_5_min_offline": { "linear_sec": 0, "linear_utilized_sec": 0, "cumulative_sec": 0, 
"cumulative_utilized_sec": 0 }, "utilization_5_min_wrapping_up": { "linear_sec": 0, "linear_utilized_sec": 0, "cumulative_sec": 0, "cumulative_utilized_sec": 0 } } } ``` ### Field Explanations | Field | Description | | :---------------------- | :------------------------------------------------------------------------------------------------------------------- | | api\_version | Major and minor version of the API, compatible with the base major version | | name | Source of this event stream - use for filtering / routing | | event\_type | Event type within the stream - use for filtering / routing | | event\_id | Unique ID of an event, used to identify identical duplicate events | | meta\_data.create\_time | UTC creation time of this message | | meta\_data.event\_time | UTC time the event happened within the system - usually some ms before create time | | meta\_data.session\_id | Customer-side identifier to link events together based on customer session. May be null for system-generated events. | | meta\_data.client\_id | May include client type, device, and version, if present in the event headers | | data.rep\_id | Internal ASAPP identifier of an agent | | details | These fields vary based on the individual event type - only fields relevant to the event type will be present | Adding the `event_list` filter in the configuration allows the receiver of the real-time feed to indicate for which event types they want to receive an Activity message. This message will still contain all the fields that have been populated, as the events are being accumulated in the Activity message for that same `rep_id`. For example: If the `event_list` contains only `agent_activity_status_updated`, the Activity messages will still contain all the fields (`status_description`, `routing_status`, `previous_routing_status`, `assigned_customer_ct`, `utilization_5_min_active`, etc), but will only be sent whenever the agent status was updated. 
### Event Types * `agent_activity_identity_updated` * `agent_activity_status_updated` * `agent_activity_capacity_updated` * `agent_activity_assignment_load_updated` * `agent_activity_routing_status_updated` * `agent_activity_previous_routing_status` * `agent_activity_queue_membership` * `agent_activity_utilization_5_min` ## Journey Feed The customer journey feed tracks important events in the customer's interaction with ASAPP. ASAPP processes the event data before pushing it into the `journey` feed to: * Collect conversation and session events into a single feed of the customer journey * Add metadata properties to the events to assist with contextualizing the events This feature is available only for ASAPP Messaging. ASAPP's `journey` feed does not implement aggregation. Any aggregation or deduplication required by the customer's use case will need to be executed by the customer after receiving `journey` events. ### Sample Event JSON ```json theme={null} { "api_version": "string", "name": "com.asapp.event.journey", "meta_data": { "create_time": "2024-08-06T13:57:43.053Z", "event_time": "2024-08-06T13:57:43.053Z", "session_id": "string", "issue_id": "string", "company_subdivision": "string", "company_id": "string", "company_segments": [ "string" ], "client_id": "string", "client_type": "UNKNOWN" }, "data": { "customer_id": "string", "opportunity_origin": "UNKNOWN", "opportunity_id": "string", "queue_id": "string", "session_id": "string", "session_type": "string", "user_id": "string", "user_type": "string", "session_update_ts": "2024-08-06T13:57:43.053Z", "agent_id": "string", "agent_name": "string", "agent_given_name": "string", "agent_family_name": "string", "agent_display_name": "string", "queue_name": "string", "external_agent_id": "string" }, "event_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "event_type": "ISSUE_CREATED", "details": { "issue_start_ts": "2024-08-06T13:57:43.053Z", "intent_code": "string", "business_intent_code": "string", "flow_node_type": 
"string", "flow_node_name": "string", "intent_code_path": "string", "business_intent_code_path": "string", "flow_name_path": "string", "business_flow_name_path": "string", "issue_ended_ts": "2024-08-06T13:57:43.053Z", "survey_responses": [ { "question": "string", "question_category": "string", "question_type": "string", "answer": "string", "ordering": 0 } ], "survey_submit_ts": "2024-08-06T13:57:43.053Z", "last_flow_action_called_ts": "2024-08-06T13:57:43.053Z", "last_flow_action_called_node_name": "string", "last_flow_action_called_action_id": "string", "last_flow_action_called_version": "string", "last_flow_action_called_inputs": { "additionalProp1": { "value": "string", "value_type": "VALUE_TYPE_UNKNOWN" }, "additionalProp2": { "value": "string", "value_type": "VALUE_TYPE_UNKNOWN" }, "additionalProp3": { "value": "string", "value_type": "VALUE_TYPE_UNKNOWN" } }, "detected_ts": "2024-08-06T13:57:43.053Z", "escalated_ts": "2024-08-06T13:57:43.053Z", "queued_ts": "2024-08-06T13:57:43.053Z", "assigned_ts": "2024-08-06T13:57:43.053Z", "abandoned_ts": "2024-08-06T13:57:43.053Z", "queued_ms": 0, "opportunity_ended_ts": "2024-08-06T13:57:43.053Z", "ended_type": "string", "assigment_ended_ts": "2024-08-06T13:57:43.053Z", "handle_ms": 0, "is_ghost_customer": true, "last_agent_utterance_ts": "2024-08-06T13:57:43.053Z", "agent_utterance_ct": 0, "agent_first_response_ms": 0, "timeout_ts": "2024-08-06T13:57:43.053Z", "last_customer_utterance_ts": "2024-08-06T13:57:43.053Z", "customer_utterance_ct": 0, "is_resolved": true, "customer_ended_ts": "2024-08-06T13:57:43.053Z", "customer_params_field_01": "string", "customer_params_field_02": "string", "customer_params_field_03": "string", "customer_params_field_04": "string", "customer_params_field_05": "string", "customer_params_field_06": "string", "customer_params_field_07": "string", "customer_params_field_08": "string", "customer_params_field_09": "string", "customer_params_field_10": "string", "customer_params_key_name_01": 
"string", "customer_params_key_name_02": "string", "customer_params_key_name_03": "string", "customer_params_key_name_04": "string", "customer_params_key_name_05": "string", "customer_params_key_name_06": "string", "customer_params_key_name_07": "string", "customer_params_key_name_08": "string", "customer_params_key_name_09": "string", "customer_params_key_name_10": "string", "uploaded_files_list": [ { "file_upload_event_id": "string", "file_upload_ts": "2024-10-03T12:30:55.123Z", "file_name": "string", "file_mime_type": "UNKNOWN", "file_size_mb": 0, "file_image_width": 0, "file_image_height": 0 } ], "last_assignment_summary_ts": "2025-10-01T13:03:35.360Z", "last_assignment_summary_text": "string", "disposition_ts": "2025-10-01T13:03:35.360Z", "disposition_fields": [ { "field_type": "UNKNOWN", "field_name": "string", "field_value": "string" } ] } } ``` ### Field Explanations | Field | Description | | :------------------------------ | :----------------------------------------------------------------------------------------------------- | | api\_version | Major and minor version of the API, compatible with the base major version | | name | Source of this event stream - use for filtering / routing | | event\_type | Event type within the stream - use for filtering / routing | | event\_id | Unique ID of an event, used to identify identical duplicate events | | meta\_data.create\_time | UTC creation time of this message | | meta\_data.event\_time | UTC time the event happened within the system - usually some ms before create time | | meta\_data.session\_id | Customer-side identifier to link events together based on customer session | | meta\_data.issue\_id | ASAPP internal tracking of a conversation - used to tie events together in the ASAPP system | | meta\_data.company\_subdivision | Filtering metadata | | meta\_data.company\_segments | Filtering metadata | | meta\_data.client\_id | May include client type, device, and version | | data.customer\_id | Internal ASAPP 
identifier of the customer | | data.rep\_id | Internal ASAPP identifier of an agent. Will be null if no rep is assigned | | data.group\_id | Internal ASAPP identifier of a company group or queue. Will be null if not routed to a group of agents | | details | The details of the event. All details are omitted when empty | ### Event Types * `ISSUE_CREATED` * `ISSUE_ENDED` * `INTENT_CHANGE` * `FIRST_INTENT_UPDATED` * `INTENT_PATH_UPDATED` * `NODE_VISITED` * `LINK_RESOLVED` * `FLOW_SUCCESS` * `FLOW_SUCCESS_NEGATED` * `END_SRS_RESPONSE` * `SURVEY_SUBMITTED` * `CONVERSATION_ENDED` * `CUSTOMER_ENDED` * `ISSUE_SESSION_UPDATED` * `DETECTED` * `OPPORTUNITY_ENDED` * `OPPORTUNITY_ESCALATED` * `QUEUED` * `QUEUE_ABANDONED` * `TIMED_OUT` * `TEXT_MESSAGE` * `FIRST_OPPORTUNITY` * `QUEUED_DURATION` * `CUSTOMER_RESPONSE_BY_OPPORTUNITY` * `ISSUE_OPPORTUNITY_QUEUE_INFO_UPDATED` * `ASSIGNED` * `ASSIGNMENT_ENDED` * `AGENT_RESPONSE_BY_OPPORTUNITY` * `SUPERVISOR_UTTERANCE_BY_OPPORTUNITY` * `AGENT_FIRST_RESPONDED` * `ISSUE_ASSIGNMENT_AGENT_INFO_UPDATED` * `LAST_FLOW_ACTION_CALLED` * `JOURNEY_CUSTOMER_PARAMETERS` * `FILE_UPLOAD_DETECTED` * `DISPOSITION` * `LAST_ASSIGNMENT_SUMMARY` Adding the `event_list` filter in the configuration allows the receiver of the real-time feed to indicate for which event types they want to receive a Journey message. This message will still contain all the fields that have been populated, as the events are being accumulated in the Journey message for that same `issue_id`. Example: if the `event_list` contains only `SURVEY_SUBMITTED` the Journey messages will still contain all the fields (`issue_start_ts`, `assigned_ts`, `survey_responses`, etc), but will only be sent whenever the survey submitted event happens. ## Queue State Feed The queue state feed provides a set of events describing the state of a queue over the course of time. 
ASAPP processes the event data before pushing it into the `queue` feed to: * Collect queue volume, queue time and queue hours events into a single feed of the queue state * Add metadata properties to the events to assist with contextualizing the events ASAPP's `queue` feed does not implement aggregation. Any aggregation or deduplication required by the customer's use case will need to be executed by the customer after receiving `queue` events. ### Sample Event JSON ```json theme={null} { "api_version": "string", "name": "com.asapp.event.queue", "meta_data": { "create_time": "2025-10-31T18:03:58.321Z", "event_time": "2025-10-31T18:03:58.321Z", "session_id": "string", "issue_id": "string", "company_subdivision": "string", "company_id": "string", "company_segments": [ "string" ], "client_id": "string", "client_type": "UNKNOWN" }, "data": { "queue_id": "string", "queue_name": "string", "business_hours_time_zone_offset_minutes": 0, "business_hours_time_zone_name": "string", "business_hours_start_minutes": [ 0 ], "business_hours_end_minutes": [ 0 ], "holiday_closed_dates": [ "2025-10-31T18:03:58.321Z" ], "queue_capping_enabled": true, "queue_capping_estimated_wait_time_seconds": 0, "queue_capping_size": 0, "queue_capping_fallback_size": 0, "mitigation_status": "UNKNOWN", "queue_availability_last_scheduled_open_ts": "2025-10-31T18:03:58.321Z", "queue_availability_last_scheduled_close_ts": "2025-10-31T18:03:58.321Z", "queue_availability_next_scheduled_open_ts": "2025-10-31T18:03:58.321Z", "queue_availability_next_scheduled_close_ts": "2025-10-31T18:03:58.321Z" }, "event_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "event_type": "UNKNOWN", "details": { "last_queue_size": 0, "last_queue_size_ts": "2025-10-31T18:03:58.321Z", "last_queue_size_update_type": "UNKNOWN", "estimated_wait_time_updated_ts": "2025-10-31T18:03:58.321Z", "estimated_wait_time_seconds": 0, "estimated_wait_time_is_available": true, "queue_availability_is_closed": true, 
"queue_availability_is_closed_by_business_hours": true, "queue_availability_is_closed_by_holiday_settings": true, "queue_availability_is_closed_by_estimated_wait_time_cap": true, "queue_availability_is_closed_by_size_cap": true, "queue_availability_is_closed_by_fallback_size_cap": true, "queue_availability_is_closed_by_mitigation": true, "queue_availability_is_estimated_wait_time_after_closed_hours": true } } ``` ### Field Explanations For a complete detail of all the fields please refer to the full openapi schema. | Field | Description | | :-------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------- | | api\_version | Major and minor version of the API, compatible with the base major version | | name | Source of this event stream - use for filtering / routing | | meta\_data.create\_time | UTC creation time of this message | | meta\_data.event\_time | UTC time the event happened within the system - usually some ms before create time | | meta\_data.session\_id | Customer-side identifier to link events together based on customer session. May be null for system-generated events. | | meta\_data.issue\_id | ASAPP internal tracking of a conversation - used to tie events together in the ASAPP system | | meta\_data.company\_subdivision | Filtering metadata | | meta\_data.company\_id | The short name used to uniquely identify the company associated with this event. This will be constant for any feed integration. 
| | meta\_data.company\_segments | Filtering metadata | | meta\_data.client\_id | May include client type, device, and version | | meta\_data.client\_type | The lower-cardinality, more general classification of the client used for the customer interaction | | data.queue\_id | Internal ASAPP ID for this queue | | data.queue\_name | The name of the queue | | data.business\_hours\_time\_zone\_offset\_minutes | The number of minutes offset from UTC for calculating or displaying business hours | | data.business\_hours\_time\_zone\_name | A time zone name used for display or lookup | | data.business\_hours\_start\_minutes | A list of offsets (in minutes from Sunday at 0:00) that correspond to the time the queue transitions from closed to open | | data.business\_hours\_end\_minutes | Same as business\_hours\_start\_minutes but for the transition from open to closed | | data.holiday\_closed\_dates | A list of dates currently configured as holidays | | data.queue\_capping\_enabled | Indicates if any queue capping is applied when enqueueing issues | | data.queue\_capping\_estimated\_wait\_time\_seconds | If the estimated wait time exceeds this threshold (in seconds), the queue will be capped. Zero is no threshold. | | data.queue\_capping\_size | If the queue size is greater than or equal to this threshold, the queue will be capped. Zero is no threshold. This applies independent of estimated wait time. | | data.queue\_capping\_fallback\_size | If there is no estimated wait time and the queue size is greater than or equal to this threshold, the queue will be capped. Zero is no threshold. 
| | event\_id | Unique ID of an event, used to identify identical duplicate events | | event\_type | Event type within the stream - use for filtering / routing | | details.last\_queue\_size | The latest size of the queue | | details.last\_queue\_size\_ts | Time when the latest queue size update happened | | details.last\_queue\_size\_update\_type | The reason for the latest queue size change | | details.estimated\_wait\_time\_updated\_ts | Time when the estimate was last updated | | details.estimated\_wait\_time\_seconds | The number of seconds a user at the end of the queue can expect to wait | | details.estimated\_wait\_time\_is\_available | Indicates if there is enough data to provide an estimate | ### Event Types * `queue_info_updated` * `queue_size_updated` * `queue_estimated_wait_time_updated` * `business_hours_settings_updated` * `holiday_settings_updated` * `queue_capping_settings_updated` * `queue_mitigation_updated` * `queue_availability_updated` # Retrieving Data for ASAPP Messaging Source: https://docs.asapp.com/reporting/retrieve-messaging-data Learn how to retrieve data from ASAPP Messaging ASAPP provides secure access to your messaging application data through SFTP (Secure File Transfer Protocol). You need to deduplicate the exported data before importing it into your system. If you're retrieving data from ASAPP's AI Services, use [File Exporter](/reporting/file-exporter) instead. ## Download Data via SFTP To download data from ASAPP via SFTP, you need to: You need to generate a SSH key pair and share the public key with ASAPP. If you don't have one already, you can generate one using the ssh-keygen command. ```bash theme={null} ssh-keygen -b 4096 ``` This will walk you creating a key pair. Share your `.pub` file with your ASAPP team. 
To connect to the SFTP server, use the following information:

* Host: `prod-data-sftp.asapp.com`
* Port: `22`
* Username: `sftp{company name}`

If you are unsure what your company name is, please reach out to your ASAPP account team.

You do not need an SSH password; the SSH key pair authenticates you. If you have a passphrase on your SSH key, you will need to enter it when prompted.

Once connected, you can download or upload files. The folder structure and file names follow a naming standard indicating the feed type, time of export, and other relevant information:

`/FEED_NAME/version=VERSION_NUMBER/format=FORMAT_NAME/dt=DATE/hr=HOUR/mi=MINUTE/DATAFILE(S)`

| Path Element | Description |
| :--- | :--- |
| **FEED\_NAME** | The name of the table, extract, feed, etc. |
| **version** | The version of the feed at hand. Changes whenever the schema, meaning of a column, etc., changes in a way that could break existing integrations. |
| **format** | The format of the exported data. Almost always, this will be JSON Lines. |
| **dt** | The YYYY-MM-DD formatted date corresponding to the exported data. |
| **hr** | The hour of the day the data was exported. |
| **mi** | The minute of the hour the data was exported. |
| **DATAFILE(s)** | The filename or filenames of the exported feed partition. |

It is possible to have duplicate entries within a given data feed for a given day. You need to [remove duplicates](#removing-duplicate-data) before importing the data.
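As a sketch, the partition path convention above can be split into its components like this (the feed name and values shown are illustrative):

```python
def parse_partition_path(path: str) -> dict:
    """Split an exported partition path into feed name, key=value parts, and file name."""
    feed_name, *pairs, datafile = [p for p in path.strip("/").split("/") if p]
    info = {"feed_name": feed_name, "datafile": datafile}
    for pair in pairs:
        key, _, value = pair.partition("=")  # e.g. "dt=2024-05-01"
        info[key] = value
    return info

parsed = parse_partition_path(
    "/convos_metrics/version=1/format=jsonl/dt=2024-05-01/hr=13/mi=30/convos_metrics.jsonl.gz"
)
```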
File names that correspond to an exported feed partition will have names in the following form:

`\{FEED_NAME\}\{FORMAT\}\{SPLIT_NUMBER\}.\{COMPRESSION\}.\{ENCRYPTION\}`

| File name element | Description |
| :--- | :--- |
| **FEED\_NAME** | The feed name from which this partition is exported. |
| **FORMAT** | .jsonl |
| **SPLIT\_NUMBER** | (optional) In the event that a particular partition's export needs to be split across multiple physical files in order to accommodate file size constraints, each split file will be suffixed with a dot followed by a two-digit incrementing sequence. If the whole partition can fit in a single file, no SPLIT\_NUMBER will be present in the file name. |
| **COMPRESSION** | (optional) .gz will be appended to the file name if the file is gzip compressed. |
| **ENCRYPTION** | (optional) In the atypical case where a file written to the SFTP store is doubly encrypted, the filename will have a .enc extension. |

### Verifying the Data Export is Complete

ASAPP continuously generates new export files depending on the feed and the export schedule. You can check the `_SUCCESS` file to verify that the export is complete.

Upon completing generation for a particular partition, ASAPP will write an empty file named `_SUCCESS` to the same path as the export file or files. This `_SUCCESS` file acts as a flag indicating that generation for the associated partition is complete. A `_SUCCESS` file will be written even if there is no data selected for export for the partition at hand.
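A minimal readiness check based on this marker might look like the following sketch (directory layout per the export convention above):

```python
from pathlib import Path

def partition_ready(partition_dir: str) -> bool:
    """True once ASAPP has written the _SUCCESS marker for this partition."""
    return (Path(partition_dir) / "_SUCCESS").exists()
```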
Until ASAPP creates the `_SUCCESS` file, the export is in progress and you should not import the associated data files. Check for this file before downloading any data partition.

### General Data Formatting Notes

All ASAPP exports follow this format:

* Files are in [JSON Lines format](http://jsonlines.org/).
* ASAPP export files are UTF-8 encoded.
* Control characters are escaped.
* Files are formatted with Unix-style line endings.

## Removing Duplicate Data

ASAPP continuously generates data, which means newer files may contain updated versions of previously exported records. To ensure you're working with the most up-to-date information, you need to remove duplicate data by keeping only the latest version of each record and discarding older duplicates.

To remove duplicates from the feeds, download the latest instance of a feed and use the **Unique Conditions** as the "primary key" for that feed. Each table's unique conditions appear in the relevant [feed schema](/reporting/asapp-messaging-feeds).

### Example

To remove duplicates from the table [`convos_metrics`](/reporting/asapp-messaging-feeds#table-convos-metrics), use this query:

```sql theme={null}
SELECT *
FROM (
  SELECT *,
    ROW_NUMBER() OVER (
      PARTITION BY {{ primary_key }}
      ORDER BY {{ logical_timestamp }} DESC, {{ insertion_timestamp }} DESC
    ) AS row_idx
  FROM convos_metrics
)
WHERE row_idx = 1
```

The subquery partitions by the `primary_key` for the table and orders each partition by `logical_timestamp` DESC, so the most recent record comes first. The outer query then selects only rows where `row_idx = 1`, keeping the latest information for each `issue_id`.

### Schema Adjustments

We will occasionally extend the schema of an existing feed to add new columns. Your system should be able to handle these changes gracefully. We will communicate any changes to the schema via your ASAPP account team.
You can also detect schema changes automatically by checking `export_docs.yaml`, which ASAPP generates each day and sends via the S3 feed. Incorporating this check into your workflows lets you identify schema adjustments early and adapt your imports before they break.

## Export Schema

We publish a [schema for each feed](/reporting/asapp-messaging-feeds). These schemas include the unique conditions for each table that you can use to remove duplicates from your data.

If you are retrieving data from Standalone Services, you need to use [File Exporter](/reporting/file-exporter).

# Secure Data Retrieval

Source: https://docs.asapp.com/reporting/secure-data-retrieval

Learn how to set up secure communication between ASAPP and your real-time event API.

TLS, specifically mutual TLS (mTLS), secures communication between ASAPP and a customer's real-time event API endpoint. This document details the expected configuration of the mTLS-secured connection between ASAPP and the customer application.

## Overview

Mutual TLS requires that both server and client authenticate using digital certificates. The mTLS-secured integration with ASAPP relies on public certificate authorities (CAs). In this scenario, clients and servers host certificates issued by trusted public CAs (such as DigiCert or Symantec).

## Certificate Configuration

To further secure the connection, ASAPP requires the following additional configurations:

1. ASAPP's client certificate will contain a unique identifier in the "Subject" field. Customers can use this identifier to confirm that the presented certificate is from a legitimate ASAPP service. This identifier will be based on client identification conventions mutually agreed upon by ASAPP and the customer (e.g., UUIDs, namespaces).
2. Both server and client certificates will have validities of less than 3 years, in accordance with industry best practices.
3.
Server certificates must have an "Extended Key Usage" field that supports "TLS Web Server Authentication" only. Client certificates must have an "Extended Key Usage" field that supports "TLS Web Client Authentication" only.
4. Minimum key sizes for client/server certificates should be:
   * 3072-bit for RSA
   * 256-bit for ECC (elliptic curve)

## TLS/HTTPS Settings

REST endpoints must only support TLSv1.3 when setting up HTTPS connections. Older versions support weak ciphers that can be broken if a sufficient number of packets are captured.

### Supported Ciphers

Ensure that each endpoint supports only the following ciphers (or equivalent):

* TLS\_ECDHE\_ECDSA\_WITH\_AES\_256\_GCM\_SHA384
* TLS\_ECDHE\_RSA\_WITH\_AES\_128\_GCM\_SHA256
* TLS\_ECDHE\_RSA\_WITH\_AES\_256\_GCM\_SHA384
* TLS\_ECDHE\_ECDSA\_WITH\_CHACHA20\_POLY1305\_SHA256
* TLS\_ECDHE\_ECDSA\_WITH\_CHACHA20\_POLY1305
* TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305\_SHA256
* TLS\_ECDHE\_RSA\_WITH\_CHACHA20\_POLY1305

### Session Limits

TLS settings should limit each session to a short period. TLS libraries like OpenSSL set this to 300 seconds by default, which is sufficiently secure. A short session limits the use of per-session AES keys, preventing potential brute-force analysis by attackers who capture session packets.

Qualys provides a free tool, SSL Server Test ([https://www.ssllabs.com/ssltest/](https://www.ssllabs.com/ssltest/)), to check for common issues in server TLS configuration. We recommend using this tool to test your public TLS endpoints before deploying to production.

# Transmitting Data via S3

Source: https://docs.asapp.com/reporting/send-s3

S3 supports ongoing data transmissions, though you can also use it for one-time transfers where needed.
ASAPP customers can transmit the following types of data to S3:

* Call center data attributes
* Conversation transcripts from messaging or voice interactions
* Recorded call audio files
* Sales records with attribution metadata

## Getting Started

### Your Target S3 Buckets

ASAPP will provide you with a set of S3 buckets to which you may securely upload your data files, as well as a dedicated set of credentials authorized to write to those buckets. See the next section for more on those credentials.

ASAPP names buckets using the following convention:

`s3://asapp-\{env\}-\{company_name\}-imports-\{aws-region\}`

| Key | Description |
| :--- | :--- |
| env | Environment (prod, pre\_prod, test) |
| company\_name | The company name: acme, duff, stark\_industries, etc. Note: the company name should not contain spaces. |
| aws-region | us-east-1. Note: this is the current region supported for your ASAPP instance. |

So, for example, an S3 bucket set up to receive pre-production data from ACME would be named: `s3://asapp-pre_prod-acme-imports-us-east-1`

#### S3 Target for Historical Transcripts

ASAPP has a distinct target location for sending historical transcripts for AI Services and will provide an exclusive access folder to which transcripts should be uploaded. The S3 bucket location follows this naming convention:

`asapp-customers-sftp-\{env\}-\{aws-region\}`

Values for `env` and `aws-region` are set in the same way as above. As an example, an S3 bucket to receive transcripts for use in production is named: `asapp-customers-sftp-prod-us-east-1`

See the [Historical Transcript File Structure](/reporting/send-s3#historical-transcript-file-structure "Historical Transcript File Structure") section for more information on how to format transcript files for transmission.

### Encryption

ASAPP ensures that the data you write to your dedicated S3 buckets is encrypted with TLS/SSL in transit and with AES-256 at rest.

### Your Dedicated Export AWS Credentials

ASAPP will provide you with a set of AWS credentials that allow you to securely upload data to your designated S3 buckets. (Since you need write access in order to upload data to S3, you'll need to use a different set of credentials than the read-only credentials you might already have.)

In order for ASAPP to securely send credentials to you, you must provide ASAPP with a public GPG key that we can use to encrypt a file containing those credentials. GitHub provides one of many good tutorials on GPG key generation: [https://help.github.com/en/articles/generating-a-new-gpg-key](https://help.github.com/en/articles/generating-a-new-gpg-key). It's safe to send your public GPG key to ASAPP using any available channel. Please do NOT provide ASAPP with your private key.
Once you've provided ASAPP with your public GPG key, we'll forward you an expiring HTTPS link pointing to an S3-hosted file containing credentials that have permission to write to your dedicated S3 target buckets. ASAPP's standard practice is to have these links expire after 24 hours. The file itself will be encrypted using your public GPG key.

Once you decrypt the provided file using your private GPG key, your credentials will be contained within a tab-delimited file with the following structure:

`id     secret      bucket     sub-folder (if any)`

## Data File Formatting and Preparation

**General Requirements:**

* Encode files in UTF-8.
* Escape control characters.
* You may provide files in CSV or JSONL format, but we strongly recommend JSONL where possible. (CSV files are just too fragile.)
* If you send a CSV file, ASAPP recommends that you include a header. Otherwise, your CSV must provide columns in the exact order listed below.
* When providing a CSV file, you must provide an explicit null value (as the unquoted string `NULL`) for missing or empty values.

### Call Center Data File Structure

The table below shows the required fields to include in your uploaded call center data.

| FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES |
| :--- | :--- | :--- | :--- | :--- |
| **customer\_id** | Yes | String | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | External User ID. This is a hashed version of the client ID. |
| **conversation\_id** | No | String | 21352352 | If filled in, should map to ASAPP's system. May be empty if the customer has not had a conversation with ASAPP. |
| **call\_start** | Yes | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. Time/date call is received by the system. |
| **call\_end** | Yes | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. Time/date call ends. Note: duration of call should be Call End - Call Start. |
| **call\_assigned\_to\_agent** | No | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. The date/time the call was answered by the agent. |
| **customer\_type** | No | String | Wireless Premier | Customer account classification by client. |
| **survey\_offered** | No | Bool | true/false | Whether a survey was offered or not. |
| **survey\_taken** | No | Bool | true/false | When a survey was offered, whether it was completed or not. |
| **survey\_answer** | No | String | | Survey answer. |
| **toll\_free\_number** | No | String | 888-929-1467 | Client phone number (toll-free number) used to call in; allows tracking of different numbers, particularly ones referred directly by SRS. If websource or click-to-call, the web campaign is passed instead of the TFN. |
| **ivr\_intent** | No | String | Power Outage | Phone pathing logic for routing to the appropriate agent group or providing self-service resolution. Could be multiple values. |
| **ivr\_resolved** | No | Bool | true/false | Caller triggered a self-service response from the IVR and then disconnected. |
| **ivr\_abandoned** | No | Bool | true/false | Caller disconnected without receiving a self-service response from the IVR or being placed in a live agent queue. |
| **agent\_queue\_assigned** | No | String | Wireless Sales | Agent group/agent skill group (aka queue name). |
| **time\_in\_queue** | No | Integer | 600 | Seconds caller waits in queue to be assigned to an agent. |
| **queue\_abandoned** | No | Bool | true/false | Caller disconnected after being assigned to a live agent queue but before being assigned to an agent. |
| **call\_handle\_time** | No | Integer | 650 | Call duration in seconds from call assignment event to call disconnect event. |
| **call\_wrap\_time** | No | Integer | 30 | Duration in seconds from call disconnect event to end of agent wrap event. |
| **transfer** | No | String | Sales Group | Agent queue name if call was transferred. NA or Null value for calls not transferred. |
| **disposition\_category** | No | String | Change plan | Categorical outcome selection from agent. Alternatively, could be a category like 'Resolved', 'Unresolved', 'Transferred', 'Referred'. |
| **disposition\_notes** | No | String | | Notes from agent regarding the disposition of the call. |
| **transaction\_completed** | No | String | Upgrade Completed, Payment Processed | Name of transaction type completed by call agent on behalf of customer. Could contain multiple delimited values. May not be available for all agents. |
| **caller\_account\_value** | No | Decimal | 129.45 | Current account value of customer. |
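Tying the general requirements and the field list above together, a single record in the recommended JSONL format might look like the following sketch. All field values are illustrative, and emitting optional fields as JSON `null` is an assumption on our part, since the explicit-`NULL` rule above is stated for CSV:

```python
import json

# One call center interaction, using the field names from the table above.
# In a JSONL upload, each record is one JSON object serialized on one line.
record = {
    "customer_id": "347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c",
    "conversation_id": "21352352",
    "call_start": "2020-01-03T20:02:13Z",
    "call_end": "2020-01-03T20:14:47Z",
    "call_assigned_to_agent": "2020-01-03T20:03:55Z",
    "customer_type": "Wireless Premier",
    "survey_offered": True,
    "survey_taken": False,
    "ivr_intent": "Power Outage",
    "time_in_queue": 102,
    "call_handle_time": 650,
    "transfer": None,  # assumption: optional values emitted as JSON null
}

line = json.dumps(record)  # one line of the JSONL upload file
print(line)
```

Each line is independently parseable, which is why JSONL tolerates embedded commas and quotes far better than CSV.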

### Historical Transcript File Structure ASAPP accepts uploads for historical conversation transcripts for both voice calls and chats. The fields described below must be the columns in your uploaded .CSV table. Each row in the uploaded .CSV table should correspond to one sent message. | FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES | | :--------------------------- | :-------- | :-------- | :------------------------------- | :------------------------------------------------ | | **conversation\_externalId** | Yes | String | 3245556677 | Unique identifier for the conversation | | **sender\_externalId** | Yes | String | 6433421 | Unique identifier for the sender of the message | | **sender\_role** | Yes | String | agent | Supported values are 'agent', 'customer' or 'bot' | | **text** | Yes | String | Happy to help, one moment please | Message from sender | | **timestamp** | Yes | Timestamp | 2022-03-16T18:42:24.488424Z | ISO 8601 formatted UTC timestamp | Proper transcript formatting and sampling ensures data is usable for model training. Please ensure transcripts conform to the following: **Formatting** * Each utterance is clearly demarcated and sent by one identified sender * Utterances are in chronological order and complete, from beginning to very end of the conversation * Where possible, transcripts include the full content of the conversation rather than an abbreviated version. For example, in a digital messaging conversation:

| Full | Abbreviated |
| :--- | :--- |
| Agent: Choose an option from the list below<br />Agent: (A) 1-way ticket (B) 2-way ticket (C) None of the above<br />Customer: (A) 1-way ticket | Agent: Choose an option from the list below<br />Customer: (A) |

**Sampling** * Transcripts are from a wide range of dates to avoid seasonality effects; random sampling over a 12-month period is recommended * Transcripts mimic the production conversations on which models will be used - same types of participants, same channel (voice, messaging), same business unit * There are no duplicate transcripts
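As an illustration of the transcript structure and formatting rules above, the following Python sketch (the conversation content is invented) builds two message rows and checks the chronological-order requirement before serializing them one message per line:

```python
import json
from datetime import datetime

# Illustrative transcript rows using the field names from the table above.
rows = [
    {"conversation_externalId": "3245556677", "sender_externalId": "6433421",
     "sender_role": "customer", "text": "My flight was cancelled",
     "timestamp": "2022-03-16T18:42:20.100000Z"},
    {"conversation_externalId": "3245556677", "sender_externalId": "98811",
     "sender_role": "agent", "text": "Happy to help, one moment please",
     "timestamp": "2022-03-16T18:42:24.488424Z"},
]

def parse_ts(ts: str) -> datetime:
    # ISO 8601 UTC; swap the trailing 'Z' so fromisoformat accepts it
    # on Python versions older than 3.11.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Utterances must be chronological, each attributed to one known sender role.
timestamps = [parse_ts(r["timestamp"]) for r in rows]
assert timestamps == sorted(timestamps)
assert all(r["sender_role"] in {"agent", "customer", "bot"} for r in rows)

jsonl = "\n".join(json.dumps(r) for r in rows)  # one sent message per line
```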
**Transmitting Transcripts to S3**

Historical transcripts are sent to a distinct S3 target separate from other data imports. Please refer to the [S3 Target for Historical Transcripts](/reporting/send-s3#your-target-s3-buckets "Your Target S3 Buckets") section for details.

### Sales Methods & Attribution Data File Structure

The table below shows the required fields to include in your uploaded sales methods and attribution data.

| FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES |
| :--- | :--- | :--- | :--- | :--- |
| **transaction\_id** | Yes | String | 1d71dce2-a50c-11ea-bb37-0242ac130002 | An identifier, unique within the customer system, used to track this transaction. |
| **transaction\_time** | Yes | Timestamp | 2007-04-05T14:30:05.123Z | ISO 8601 formatted UTC timestamp. Used to detect potential duplicates and to attribute the transaction to the right period of time. |
| **transaction\_value\_one\_time** | No | Float | 65.25 | Single value of initial purchase. |
| **transaction\_value\_recurring** | No | Float | 7.95 | Recurring value of subscription purchase. |
| **customer\_category** | No | String | US | Custom category value per client. |
| **customer\_subcategory** | No | String | wireless | Custom subcategory value per client. |
| **external\_customer\_id** | No | String | 34762720001 | External User ID. This is a hashed version of the client ID. To attribute to ASAPP metadata, either this or the conversation ID is required. |
| **issue\_id** | No | String | 1E10412200CC60EEABBF32 | If filled in, should map to ASAPP's system. May be empty if the customer has not had a conversation with ASAPP. To attribute to ASAPP metadata, either this or the customer ID is required. |
| **external\_session\_id** | Yes | String | 1a09ff6d-3d07-45dc-8fa9-4936bfc4e3e5 | External session ID so we can track a customer. |
| **product\_category** | No | String | Wireless Internet | Category of product purchased. |
| **product\_subcategory** | No | String | Broadband | Subcategory of product purchased. |
| **product\_name** | No | String | Broadband Gold Package | The name of the product. |
| **product\_id** | No | String | WI-BBGP | The identifier of the product. |
| **product\_quantity** | Yes | Integer | 1 | A number indicating the quantity of the product purchased. |
| **product\_value\_one\_time** | No | Float | 60.00 | Value of the product for a one-time purchase. |
| **product\_value\_recurring** | No | Float | 55.00 | Value of the product for a recurring purchase. |

## Uploading Data to S3

At a high level, uploading your data is a three-step process:

1. Build and format your files for upload, as detailed above.
2. Construct a "target path" for those files following the convention in the section "Constructing your Target Path" below.
3. Signal the completion of your upload by writing an empty \_SUCCESS file to your "target path", as described in the section "Signaling that your upload is complete" below.

### Constructing your target path

ASAPP's automation uses the S3 filename of your upload when deciding how to process your data file, where the filename is formatted as follows:

`s3://BUCKET_NAME/FEED_NAME/version=VERSION_NUMBER/format=FORMAT_NAME/dt=DATE/hr=HOUR/mi=MINUTE/DATAFILE_NAME(S)`

The following table details the convention that ASAPP follows when handling uploads:

### Signaling that Your Upload Is Complete

Upon completing a data upload, you must upload an EMPTY file named \_SUCCESS to the same path as your uploaded file, as a flag that indicates your data upload is complete.
Until this file is uploaded, ASAPP will assume that the upload is in progress and will not import the associated data file. For example, if you upload one day of call center data as a set of files, you would write an empty \_SUCCESS file to the same target path once every data file has finished uploading.

### Incremental and Snapshot Modes

You may provide data to ASAPP as either Incremental or Snapshot data. The value you provide in the `format` field discussed above tells ASAPP whether to treat the data you provide as Incremental or Snapshot data.

When importing data using **Incremental** mode, ASAPP will **append** the given data to the existing data imported for that `FEED_NAME`. When you specify **Incremental** mode, you are telling ASAPP that for a given date, the data which was uploaded is for that day only. If you use the value `dt=2018-09-02` in your constructed filename, you are indicating that the data contained in that file includes records from `2018-09-02 00:00:00 UTC` → `2018-09-02 23:59:59 UTC`.

When importing data using **Snapshot** mode, ASAPP will **replace** any existing data for the indicated `FEED_NAME` with the contents of the uploaded file. When you specify **Snapshot** mode, ASAPP treats the uploaded data as a complete record from "the time history started" until the end of that particular day. A date of `2018-09-02` means the data effectively includes everything from `1970-01-01 00:00:00 UTC` → `2018-09-02 23:59:59 UTC`.

### Other Upload Notes and Tips

1. Make sure the structure of the imported file (whether columnar or JSON formatted) matches the current import standards (see below for details).
2. Data imports are scheduled daily, 4 hours after UTC midnight (for the previous day's data).
3. If you upload historical data (i.e., from older dates than are currently in the system), please inform your ASAPP team so a complete re-import can be scheduled.
4. Snapshot data must go into a format=snapshot\_\{type} folder.
5. Providing a Snapshot allows you to provide all historical data at once.
In effect, this reloads the entire table rather than appending data as in the non-snapshot case.

### Upload Example

The example below assumes a shell terminal with Python 2.7+ installed.

```shell theme={null}
# install the aws cli (assumes python)
pip install awscli

# configure your S3 credentials if not already done
aws configure

# push the files for 2019-01-20 for the call_center_issues import,
# for a company named `umbrella-corp`, from your local drive to production
aws s3 cp /location/of/your/file.csv s3://asapp-prod-umbrella-corp-imports-us-east-1/call_center_issues/version=1/format=csv/dt=2019-01-20/
aws s3 cp _SUCCESS s3://asapp-prod-umbrella-corp-imports-us-east-1/call_center_issues/version=1/format=csv/dt=2019-01-20/

# you should now see the files in the s3 location
aws s3 ls s3://asapp-prod-umbrella-corp-imports-us-east-1/call_center_issues/version=1/format=csv/dt=2019-01-20/
file.csv
_SUCCESS
```

# Transmitting Data to SFTP

Source: https://docs.asapp.com/reporting/send-sftp

SFTP supports **one-time data transmissions**, typically for sending training data files during the implementation phase prior to initial launch. ASAPP customers can transmit the following types of training data via SFTP:

* Conversation transcripts from messaging or voice interactions
* Recorded call audio files
* Free-text agent notes associated with messaging or voice interactions

## Getting Started

ASAPP will require you to provide the following information to set up the SFTP site:

* An SSH public key. This should use RSA encryption with a key length of 4096 bits.

ASAPP will provide you a username to associate with the key. This will be of the form `sftp<companymarker>`, where the company marker is selected by ASAPP. For example, a username could be `sftptestcompany`. In your network, open port 22 outbound to sftp.us-east-1.asapp.com.

## Data File Formatting and Preparation

**General Requirements:**

* Encode files in UTF-8.
* Escape control characters.
* You may provide files as CSV or JSONL format, but we strongly recommend JSONL where possible. (CSV files are just too fragile.) * If you send a CSV file, ASAPP recommends that you include a header. Otherwise, your CSV must provide columns in the exact order listed below. * When providing a CSV file, you must provide an explicit null value (as the unquoted string: `NULL` ) for missing or empty values. ### Call Center Data File Structure The table below shows the required fields to include in your uploaded call center data.

| FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES |
| :--- | :--- | :--- | :--- | :--- |
| **customer\_id** | Yes | String | 347bdddb-d3a1-45fc-bbcd-dbd3a175fc1c | External User ID. This is a hashed version of the client ID. |
| **conversation\_id** | No | String | 21352352 | If filled in, should map to ASAPP's system. May be empty if the customer has not had a conversation with ASAPP. |
| **call\_start** | Yes | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. Time/date call is received by the system. |
| **call\_end** | Yes | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. Time/date call ends. Note: duration of call should be Call End - Call Start. |
| **call\_assigned\_to\_agent** | No | Timestamp | 2020-01-03T20:02:13Z | ISO 8601 formatted UTC timestamp. The date/time the call was answered by the agent. |
| **customer\_type** | No | String | Wireless Premier | Customer account classification by client. |
| **survey\_offered** | No | Bool | true/false | Whether a survey was offered or not. |
| **survey\_taken** | No | Bool | true/false | When a survey was offered, whether it was completed or not. |
| **survey\_answer** | No | String | | Survey answer. |
| **toll\_free\_number** | No | String | 888-929-1467 | Client phone number (toll-free number) used to call in; allows tracking of different numbers, particularly ones referred directly by SRS. If websource or click-to-call, the web campaign is passed instead of the TFN. |
| **ivr\_intent** | No | String | Power Outage | Phone pathing logic for routing to the appropriate agent group or providing self-service resolution. Could be multiple values. |
| **ivr\_resolved** | No | Bool | true/false | Caller triggered a self-service response from the IVR and then disconnected. |
| **ivr\_abandoned** | No | Bool | true/false | Caller disconnected without receiving a self-service response from the IVR or being placed in a live agent queue. |
| **agent\_queue\_assigned** | No | String | Wireless Sales | Agent group/agent skill group (aka queue name). |
| **time\_in\_queue** | No | Integer | 600 | Seconds caller waits in queue to be assigned to an agent. |
| **queue\_abandoned** | No | Bool | true/false | Caller disconnected after being assigned to a live agent queue but before being assigned to an agent. |
| **call\_handle\_time** | No | Integer | 650 | Call duration in seconds from call assignment event to call disconnect event. |
| **call\_wrap\_time** | No | Integer | 30 | Duration in seconds from call disconnect event to end of agent wrap event. |
| **transfer** | No | String | Sales Group | Agent queue name if call was transferred. NA or Null value for calls not transferred. |
| **disposition\_category** | No | String | Change plan | Categorical outcome selection from agent. Alternatively, could be a category like 'Resolved', 'Unresolved', 'Transferred', 'Referred'. |
| **disposition\_notes** | No | String | | Notes from agent regarding the disposition of the call. |
| **transaction\_completed** | No | String | Upgrade Completed, Payment Processed | Name of transaction type completed by call agent on behalf of customer. Could contain multiple delimited values. May not be available for all agents. |
| **caller\_account\_value** | No | Decimal | 129.45 | Current account value of customer. |

### Historical Transcript File Structure ASAPP accepts uploads for historical conversation transcripts for both voice calls and chats. The fields described below must be the columns in your uploaded .CSV table. Each row in the uploaded .CSV table should correspond to one sent message. | FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES | | :--------------------------- | :-------- | :-------- | :------------------------------- | :------------------------------------------------ | | **conversation\_externalId** | Yes | String | 3245556677 | Unique identifier for the conversation | | **sender\_externalId** | Yes | String | 6433421 | Unique identifier for the sender of the message | | **sender\_role** | Yes | String | agent | Supported values are 'agent', 'customer' or 'bot' | | **text** | Yes | String | Happy to help, one moment please | Message from sender | | **timestamp** | Yes | Timestamp | 2022-03-16T18:42:24.488424Z | ISO 8601 formatted UTC timestamp | Proper transcript formatting and sampling ensures data is usable for model training. Please ensure transcripts conform to the following: **Formatting** * Each utterance is clearly demarcated and sent by one identified sender * Utterances are in chronological order and complete, from beginning to very end of the conversation * Where possible, transcripts include the full content of the conversation rather than an abbreviated version. For example, in a digital messaging conversation:

| Full | Abbreviated |
| :--- | :--- |
| Agent: Choose an option from the list below<br />Agent: (A) 1-way ticket (B) 2-way ticket (C) None of the above<br />Customer: (A) 1-way ticket | Agent: Choose an option from the list below<br />Customer: (A) |

**Sampling** * Transcripts are from a wide range of dates to avoid seasonality effects; random sampling over a 12-month period is recommended * Transcripts mimic the production conversations on which models will be used - same types of participants, same channel (voice, messaging), same business unit * There are no duplicate transcripts
### Sales Methods & Attribution Data File Structure

The table below shows the required fields to include in your uploaded sales methods and attribution data.

| FIELD NAME | REQUIRED? | FORMAT | EXAMPLE | NOTES |
| :--- | :--- | :--- | :--- | :--- |
| **transaction\_id** | Yes | String | 1d71dce2-a50c-11ea-bb37-0242ac130002 | An identifier, unique within the customer system, used to track this transaction. |
| **transaction\_time** | Yes | Timestamp | 2007-04-05T14:30:05.123Z | ISO 8601 formatted UTC timestamp. Used to detect potential duplicates and to attribute the transaction to the right period of time. |
| **transaction\_value\_one\_time** | No | Float | 65.25 | Single value of initial purchase. |
| **transaction\_value\_recurring** | No | Float | 7.95 | Recurring value of subscription purchase. |
| **customer\_category** | No | String | US | Custom category value per client. |
| **customer\_subcategory** | No | String | wireless | Custom subcategory value per client. |
| **external\_customer\_id** | No | String | 34762720001 | External User ID. This is a hashed version of the client ID. To attribute to ASAPP metadata, either this or the conversation ID is required. |
| **issue\_id** | No | String | 1E10412200CC60EEABBF32 | If filled in, should map to ASAPP's system. May be empty if the customer has not had a conversation with ASAPP. To attribute to ASAPP metadata, either this or the customer ID is required. |
| **external\_session\_id** | Yes | String | 1a09ff6d-3d07-45dc-8fa9-4936bfc4e3e5 | External session ID so we can track a customer. |
| **product\_category** | No | String | Wireless Internet | Category of product purchased. |
| **product\_subcategory** | No | String | Broadband | Subcategory of product purchased. |
| **product\_name** | No | String | Broadband Gold Package | The name of the product. |
| **product\_id** | No | String | WI-BBGP | The identifier of the product. |
| **product\_quantity** | Yes | Integer | 1 | A number indicating the quantity of the product purchased. |
| **product\_value\_one\_time** | No | Float | 60.00 | Value of the product for a one-time purchase. |
| **product\_value\_recurring** | No | Float | 55.00 | Value of the product for a recurring purchase. |

## Generate SSH Public Key Pair and Upload Files

You can generate the key and upload files via Windows, Mac, or Linux.

### Windows Users

If you are using Windows, follow the steps below:

#### 1. Generate an SSH Key Pair

There are multiple tools that you can use to generate an SSH key pair, for example puTTYgen (available from [PuTTY](https://www.putty.org/)), as shown below. Choose RSA and 4096 bits, then click **Generate** and move the mouse pointer randomly. When the key is generated, enter `sftp` followed by your company marker as the key comment.

#### 2. Provide the Public Key to ASAPP

Save the public and private key. Only send the public file for your key pair to ASAPP. This is not a secret and can be emailed.

#### 3. Upload Files

Use an SFTP utility such as [Cyberduck](https://cyberduck.io/) to upload files to ASAPP. Click **Open Connection**, add sftp.us-east-1.asapp.com as the Server, and add `sftpcompanymarker` as the Username. Choose the private key you generated in step 1 as the SSH Private Key and click **Connect**. The following screenshots show how to do this using Cyberduck. A pop-up window appears. Click to allow the unknown fingerprint. You will then see the `in` and `out` directories. Double click the `in` directory and click **Upload** to choose files to send to ASAPP.
### Mac/Linux Users

If you are using a Mac or Linux, follow the steps below:

#### 1. Generate an SSH Key Pair

If you are using a Mac or Linux, you can generate a key pair from the terminal as follows. If you already have an `id_rsa` file in the `.ssh` directory that you use with other applications, you should specify a different filename for the key so you do not overwrite it. You can either do that with the `-f` option or type in a `filename` when prompted.

`ssh-keygen -t rsa -b 4096 -C sftp<companymarker> -f filename`

For example:

`ssh-keygen -t rsa -b 4096 -C sftptestcompany -f keyforasapp`

The filename will be the name of the two files generated: `filename` (the private key, which you must keep to yourself) and `filename.pub` (the public key, which ASAPP needs).

If you do not have an `id_rsa` file in the `.ssh` directory, you can go with the default filename of `id_rsa` and do not need to use the `-f` option:

`ssh-keygen -t rsa -b 4096 -C sftp<companymarker>`

#### 2. Provide the Public Key to ASAPP

Send the `.pub` file for your key pair to ASAPP. This is not a secret and can be emailed.

#### 3. Upload Files

You can upload files using the terminal or you can use [Cyberduck](https://cyberduck.io/). This section describes how to upload files using the terminal. To log in to the ASAPP server, type one of the following.

If you used the default `id_rsa` key name:

`sftp sftp<companymarker>@sftp.us-east-1.asapp.com`

If you specified a different filename for the key:

`sftp -oIdentityFile=filename sftp<companymarker>@sftp.us-east-1.asapp.com`

For example:

`sftp -oIdentityFile=keyforasapp sftptestcompany@sftp.us-east-1.asapp.com`

You will see the command line prompt change to `sftp>`. If the `sftp` command fails, adding the `-v` parameter will provide logging information to help diagnose the problem. Use terminal commands such as `ls`, `cd`, and `mkdir` on the remote server.
* `ls`: list files
* `cd`: change directory
* `mkdir`: make a new directory

`ls` will show two directories: `in` (for sending files to ASAPP) and `out` (for receiving files from ASAPP). To create a transcripts directory on the remote machine to send transcripts to ASAPP, type:

```shell theme={null}
cd in
mkdir transcripts
cd transcripts
```

To navigate on the local machine, prefix terminal commands with `l`:

* `lcd`: change the local directory
* `lls`: list local files
* `lpwd`: see the local working directory

Use `get` (retrieve) and `put` (upload) to transfer files. `get` will fetch files from the remote server to the current directory on the local machine. For example, `get output.csv` will transfer a file named output.csv from the remote server. `put` will transfer files to the remote server from the current directory on the local machine. Navigate to the local directory containing your transcripts; then `put transcripts.csv` will transfer a file named transcripts.csv to the remote server, `put *` will transfer all files in the local directory, and `put -r` works recursively, transferring all files in the local directory, all subdirectories, and all files within them to the remote machine. For example, `put -r sftptest` will transfer the sftptest directory and everything within and below it from the local machine to the remote machine. To end the SFTP session, type `quit` or `exit`.

# Transmitting Data to ASAPP

Source: https://docs.asapp.com/reporting/transmitting-data-to-asapp

Learn how to transmit data to ASAPP for Applications and AI Services. Customers can securely upload data for ASAPP's consumption for Applications and AI Services using three distinct mechanisms. Read more on how to transmit data by clicking on the link that best matches your use case.

* [**Upload to S3**](/reporting/send-s3 "Transmitting Data to S3") S3 supports ongoing data transmissions, though you can also use it for one-time transfers where needed.
ASAPP customers can transmit the following types of data to S3: * Call center data attributes * Conversation transcripts from messaging or voice interactions * Recorded call audio files * Sales records with attribution metadata * **[Upload to SFTP](/reporting/send-sftp "Transmitting Data to SFTP")** SFTP supports **one-time data transmissions**, typically for sending training data files during the implementation phase prior to initial launch. ASAPP customers can transmit the following types of training data via SFTP: * Conversation transcripts from messaging or voice interactions * Recorded call audio files * Free-text agent notes associated with messaging or voice interactions Reach out to your ASAPP account contact to coordinate sending data via SFTP. # Security Source: https://docs.asapp.com/security Security is a critical aspect of any platform, and ASAPP takes it seriously, being SOC2 and PCI compliant. We have implemented several security measures to ensure that your data is protected and secure. ## Trust portal ASAPP's [Trust Portal](https://trust.asapp.com) provides a centralized location for accessing security documentation and compliance information. Through the Trust Portal, you can: * Download security documentation including SOC2 reports * Access compliance certifications * Stay up to date with ASAPP's latest security updates ## Next steps Learn how Data Redaction removes sensitive data from your conversations. Use External IP Blocking to block IP addresses from accessing your data. Learn how to securely handle Customer Information. Find the latest security updates and security documentation on ASAPP's Trust Portal. # Data Redaction Source: https://docs.asapp.com/security/data-redaction Learn how Data Redaction removes sensitive data from your conversations. Live conversations are completely uninhibited and as such, customers may mistakenly communicate sensitive information (e.g. credit card number, SSN, etc.) in a manner that increases risk. 
In order to mitigate this risk, ASAPP applies redaction logic that you can customize for your business's needs. You also have the ability to add your own [custom redaction rules](#custom-regex-redaction-rules) using regular expressions. Reach out to your ASAPP account team to learn more.

## Custom Regex Redaction Rules

In AI-Console, you can view your organization's existing custom, regex-based redaction rules and add new ones. Rules match specific patterns using regular expressions. You can deploy these new rules to testing environments and to production. Custom redaction rules live in the Core Resources section of AI-Console.

* The system displays custom redaction rules as an ordered list of named rules.
* Each individual rule displays its underlying regex.

To add a custom rule:

1. Click **Add new**
2. Create a unique Regex Name
3. Add the regex for the particular rule
4. Test your regex rule to ensure it works as expected
5. Add the regex to sandbox

Once you add a rule to the sandbox environment, test it in your lower environment to ensure it behaves as expected.

# External IP Blocking

Source: https://docs.asapp.com/security/external-ip-blocking

Use External IP Blocking to block IP addresses from accessing your data. ASAPP has tools in place to monitor and automatically block activity based on malicious behavior and bad-reputation sources (IPs). This blocking inhibits traffic from IPs that could damage, disable, overburden, disrupt, or impair any ASAPP servers or APIs. By default, ASAPP does not block IPs of end users who exhibit abusive behaviors towards agents. IP blocking is trivial to evade and often causes unintended collateral damage to normal users, since IP addresses are dynamic: a previously blocked IP address can become the IP address of a valid user, preventing that user from using ASAPP and your product.
While we do not recommend IP blocking, you are still able to block users by IP address to help address urgent protection needs.

## Blocking IP Addresses in AI-Console

AI-Console provides the ability for administrators with the correct permissions to block external IP addresses that may present a threat to your organization. To block an IP address in AI-Console:

1. Manually enter (or copy) an individual IP address in the Denylist
2. Click Save and Deploy to save the changes to production

You can access IP addresses in Conversation Manager, giving you insight into the IP address associated with potentially malicious users. You can unblock IP addresses by clicking the trash icon on the blocked IP's row, and then saving and deploying the updated list. Blocked users receive an error message and the Chat bubble will not appear on their screen. From the API perspective, *shouldDisplayWebChat* will return a **403 Forbidden** error.

## Additional Contextual Information

Dynamic ISP-assigned IPs rotate quite often. This means that the 1:1 relationship between a public IP and an individual/device/client is merely temporary, and the assignment will continually change over time, as illustrated below.

**ISP Assignment over Time**

```text theme={null}
IP(1) --- UserA
IP(2) --- UserB
IP(3) --- UserC
.......................
IP(1) --- UserC
IP(2) --- UserB
IP(3) --- UserA
```

If ASAPP prevents UserA from reaching our platform by blocking IP(1), there is a risk that ISPs assign IP(1) to UserB or UserC at some point in the future. There are also many scenarios where legitimate users share a single IP with abusive users, such as public WiFi networks:

* Company named networks
* College or corporate campuses that route many users through a single outbound IP
* Personal and corporate VPN devices that aggregate many users to a single IP

Blocking those IPs will prevent many other legitimate users from accessing the ASAPP platform.
# Warning about CustomerInfo and Sensitive Data Source: https://docs.asapp.com/security/warning-about-customerinfo-and-sensitive-data Learn how to securely handle Customer Information. Do not send sensitive data via `CustomerInfo`, `custom_params`, or `customer_params`. ASAPP implements strict controls to ensure the confidentiality and security of ALL data  we handle on behalf of our customers. For **sensitive data**, ASAPP employs an even more stringent level of control. ("Sensitive data" includes such categories as Personal Health Information, Personally Identifiable Information, and financial/PCI data.) In general, ASAPP recommends that customers ONLY send sensitive data in specified fields, and where ASAPP expects to receive such data. ASAPP treats all customer data securely. By default, however, ASAPP may not apply the strictest levels of controls that we maintain for **sensitive data** for content submitted via `CustomerInfo`, `custom_params`, or `customer_params`. ## What is CustomerInfo? Certain calls available via ASAPP APIs and SDKs provide a parameter that supports the inclusion of arbitrary data with the call. We'll refer to such fields as **"CustomerInfo"** here, even though different ASAPP interfaces may call them "custom\_params", "customer\_params", and "CustomerInfo". CustomerInfo is typically a JSON object containing a set of key:value pairs that ASAPP and ASAPP customers can use in multiple ways. For example, as context input for use in the ASAPP Web SDK: ```javascript theme={null} "CustomerInfo": { "Inflight": true, "TierLevel": "Gold" } ``` ## Do not send sensitive data as cleartext via CustomerInfo ASAPP strongly recommends that our customers do NOT send sensitive data using CustomerInfo. If customer requirements dictate that sensitive data must be sent via CustomerInfo, CUSTOMERS MUST ENCRYPT SENSITIVE DATA BEFORE SENDING. 
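For illustration only, the following Python sketch shows client-side authenticated encryption of a sensitive value before it is placed in CustomerInfo. It assumes the third-party `cryptography` package and AES-256-GCM; the field name `AccountToken` and the in-memory key handling are purely illustrative, and in practice the key would live in your own key management system and never be shared with ASAPP:

```python
import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: generate and hold a 256-bit key privately.
# In practice, manage this key in your own KMS; never share it with ASAPP.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

sensitive = b"4111-1111-1111-1111"  # example sensitive value
nonce = os.urandom(12)              # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, sensitive, None)

# Send only the opaque, encrypted value via CustomerInfo, e.g.:
customer_info = {
    "AccountToken": base64.b64encode(nonce + ciphertext).decode("ascii")
}

# Later, on your side only, recover the plaintext:
blob = base64.b64decode(customer_info["AccountToken"])
plaintext = aesgcm.decrypt(blob[:12], blob[12:], None)
```

Because only your systems hold the key, the value remains opaque to ASAPP while transiting ASAPP systems.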
The customer should encrypt any sensitive data before sending it via CustomerInfo, using a private encryption mechanism (i.e., a mechanism not known to ASAPP). This practice ensures that ASAPP never has access to the customer's sensitive data, so the data remains securely protected while in transit through ASAPP systems.

Additionally, ASAPP strongly recommends that our customers use strong encryption. Specifically, we insist that customers use one of the following configurations:

* **Symmetric Encryption Model:** use AES-GCM-256 (authenticated encryption) with a random [salt](https://en.wikipedia.org/wiki/Salt_\(cryptography\)) to provide data integrity, confidentiality, and enhanced security. Each combination of salt and associated data should be unique.
* **Asymmetric Encryption Model:** use RSA with a key size of 2048 bits.

ASAPP recommends setting a key expiration date of less than two years. ASAPP and the customer should both have mechanisms in place to update the key in use. Temporarily retain rotated private keys so that previously encrypted data remains accessible.

In extraordinary circumstances, ASAPP can make exceptions to these requirements. Please contact your ASAPP account team to discuss options if you have a compelling business need for ASAPP to implement an exception.

# Support Overview

Source: https://docs.asapp.com/support

You can reach out to [ASAPP support](https://support.asapp.com) for help with your ASAPP account and implementation.

# Reporting Issues to ASAPP

Source: https://docs.asapp.com/support/reporting-issues-to-asapp

## Incident Management

### Overview

The goals of incident management at ASAPP are:

* To minimize the negative impact of service incidents on our customers.
* To restore our customers' ASAPP implementation to normal operation as quickly as possible.
* To take the necessary steps to prevent similar incidents in the future.
* To successfully integrate with our customers' standard incident management policies.

### Severity Level Classification

| Severity Level | Description | Report To |
| :--- | :--- | :--- |
| 1 | ASAPP is unusable or inoperable; a major function is unavailable with no acceptable bypass/workaround; a security or confidentiality breach occurred. | Service Desk interface via support.asapp.com |
| 2 | A major function is unavailable but an acceptable bypass/workaround is available. | Service Desk interface via support.asapp.com |
| 3 | A minor function is disabled by a defect; a function is not working correctly; the defect is not time-critical and has minimal user impact. | Service Desk interface via support.asapp.com |
| 4 | The issue is not critical to the day-to-day operations of any single user; and there is an acceptable alternative solution. | Service Desk interface via support.asapp.com |

### Standard Response and Resolution Times

This table displays ASAPP's standard response and resolution times based on issue severity as outlined in the Service Level Agreement.

| Severity Level | Initial Response Time | Issue Resolution Time |
| :--- | :--- | :--- |
| 1 | 15 minutes | 2 hours |
| 2 | 15 minutes | 4 hours |
| 3 | 24 hours | 15 business days |
| 4 | 1 business day | 30 business days |

### Severity Level 1 Incidents

**Examples:**

* Customer chat is inaccessible.
* \>5% of agents are unable to access Agent Desk.
* \>5% of agents are experiencing chat latency (>5 seconds to send or receive a chat message)

**Overview:**

* Severity Level 1 Incidents can require a significant amount of ASAPP resources beyond normal operating procedures, and outside of normal operating hours.
* Escalating via Service Desk initiates an escalation policy that is more effective than reaching out directly to any individual ASAPP contact. * You will receive an acknowledgment from ASAPP within 15 minutes. ### Severity Level 2 & 3 Incidents **Severity Level 2 Examples:** * Conversation list screen within the Admin dashboard is blanking out for supervisors, but Agents are still able to maintain chats. * User Management screen within Admin is unavailable. **Severity Level 3 Examples:** * Historical Reporting data has not been refreshed in 90+ minutes. * A limited number of chats appear to be routing incorrectly. * A single agent is unable to access Agent Desk. ### Issue Ticketing and Prioritization * ASAPP maintains all client reported issues in its own ticketing system. * ASAPP triages and slates issues for sprints based on severity level and number of users impacted. * ASAPP's engineering teams work in 2 week sprints, meaning that the teams typically resolve reported issues within 1-2 sprints. * ASAPP will consider Severity Level 1 and 2 issues for a hotfix (i.e. breaking the ASAPP sprint and release process, and being released directly to PreProd / Prod). ### Issue Reporting Process * **For Severity 1 Issues:** In the event of a Severity 1 failure of a business function in the ASAPP environment, report issues via the Service Desk interface at [support.asapp.com](http://support.asapp.com) by filling out all required fields. By selecting the Severity value as **Critical**, you will automatically mobilize ASAPP's on-call team, who will assess the incident and start working on a solution if applicable. * **For Severity 2-4 Issues:** In the event of any non-critical issues with a business function in the ASAPP environment, report issues via the Service Desk interface at [support.asapp.com](http://support.asapp.com) by filling out all required fields. 
ASAPP will escalate the reported issue to the relevant members of the ASAPP team, and you will receive updates via the ticket comments. The ASAPP team will follow the workflow outlined below for each Service Desk ticket. Each box corresponds to the Service Desk ticket status.

### Issue Reporting Template

When you report issues to ASAPP, please provide the following information whenever possible.

* **Issue ID**: provide the Issue ID if the bug took place during a specific conversation.
* **Hashed, encrypted customer ID:** (see below)
* **Severity\*:** provide the severity level based on the 4-point severity scale.
* **Subject\*:** provide a brief, accurate summary of the observed behavior.
* **Date Observed\*:** provide the date you observed the issue. (Please note: the observed date may differ from the date the issue is being reported.)
* **Description\*:** provide a detailed description of the observed issue, including the number of impacted users, the specific users experiencing the issue, impacted sites, and the timestamp when the issue began.
* **Steps to Reproduce\*:** provide a detailed list of the steps taken at the time you observed the issue.
* **ASAPP Portal\*:** indicate the environment if the bug does not occur in production.
* **Device/Operating System\*:** provide the device / OS in use at the time you observed the issue.
* **Browser\*:** provide the browser in use at the time you observed the issue.
* **Attachments**: include any screenshots or videos that may help clearly illustrate the issue.
* **\*** Indicates a required field.

ASAPP deliberately does not log unencrypted customer account numbers or any kind of personally identifiable information.

### Locate the Issue ID

**In Desk:** During the conversation, click the **Notes** icon at the top of the center panel. The issue ID is next to the date. The issue ID is also in the Left Hand Panel and the End Chat modal window.

**In Admin:** Go to Conversation Manager.
Issue IDs appear in the first column for both live and ended conversations. # Service Desk Information Source: https://docs.asapp.com/support/service-desk-information **What is the ASAPP Service Desk?** Service Desk is the tool ASAPP uses for the ingestion and tracking of all issues and modifications in our customers' demo and production environments. All issue reports and configuration requests between ASAPP and our customers are handled via the tool. **How can I access the Service Desk?** The Service Desk can be accessed at [support.asapp.com](http://support.asapp.com). Your ASAPP account team provisions Service Desk access after the initial Service Desk training is completed. All subsequent access requests and/or modifications should go through email with your ASAPP account team. **When do I create a ticket?** A Service Desk ticket should be created any time you identify an issue with an ASAPP product (this includes Admin, Desk, Chat SDK, and AI Services) in either the demo or production environment. Additionally, a ticket should be created in cases where an existing configuration needs to be updated. **How do I create a ticket?** A Service Desk ticket can be created by navigating to support.asapp.com, logging in, clicking **Submit a Request** in the top right corner of the screen, and filling out and submitting the ticket form. **What happens after I've created a ticket?** Once you submit the ticket form, ASAPP will receive an automatic notification. The ASAPP Technical Services team will acknowledge and review the ticket, triage internally, and request additional information if needed. All correspondence, including requests for additional information, explanations of existing functionality, and updates on fix timelines will take place in the ticket comments. **Should I use Service Desk to ask questions?** In general, reaching out to your ASAPP account contact(s) directly is the best way to answer a question or begin a discussion. 
Your ASAPP account contacts can help you determine whether observed behavior is expected or an unexpected issue. If you confirm an issue is unexpected or a configuration is available, create a ticket in Service Desk to begin addressing it.

**What if I have an urgent production problem?**

ASAPP calls urgent production issues **Severity 1** and defines them as follows: "ASAPP is unusable or inoperable; a major function is unavailable with no acceptable bypass/workaround; a security or confidentiality breach occurred"

Report an issue that meets these criteria as a ticket in Service Desk with the Priority level **Urgent**. See [Severity Level Classification](/support/reporting-issues-to-asapp#severity-level-classification "Severity Level Classification") for descriptions, illustrative examples, and associated reporting processes.

# Troubleshooting Guide

Source: https://docs.asapp.com/support/troubleshooting-guide

This document outlines some preliminary checks to determine the health and status of the connection between the customer or agent's browser and an ASAPP backend host before escalating to the ASAPP Support Team. You must have access to Chrome Developer Tools in order to use this guide.

## Troubleshooting from a Web Browser

### Using Chrome Developer Tools

If you are not already familiar with Chrome Developer Tools, please take a moment to familiarize yourself with them. ASAPP bases troubleshooting efforts for front-end Web SDK use on the Chrome web browser.

[https://developers.google.com/web/tools/chrome-devtools/open](https://developers.google.com/web/tools/chrome-devtools/open)

ASAPP will also inspect network traffic as the Web SDK makes API calls to the ASAPP backend. Please also review the documentation on Chrome Developer Tools regarding networking.
[https://developers.google.com/web/tools/chrome-devtools/network](https://developers.google.com/web/tools/chrome-devtools/network)

### API Call HTTP Return Status Codes

In general, you can check connectivity and environment status by looking at the HTTP return codes from the API calls the ASAPP Web SDK makes to the ASAPP backend. You can accomplish this by limiting calls to ASAPP traffic in the Network tab. This narrows the results to traffic that uses the string "ASAPP" somewhere in the call.

First and foremost, look for traffic that does not return a successful 200 HTTP status code. If there are 400- and 500-level errors, there may be network connectivity or configuration issues between the user and the ASAPP backend. Please review HTTP status codes at: [https://www.restapitutorial.com/httpstatuscodes.html](https://www.restapitutorial.com/httpstatuscodes.html).

To view HTTP return codes:

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Filter network traffic to **ASAPP**.
4. Look at the "Status" for each call. The system highlights failed calls in red.
5. For non-200 status codes, note what the call is by the "Request URL" and the return status. You can find other helpful information in context in the "Request Payload".

### Environment Targeting

To determine the ASAPP environment targeted by the front-end, look at the network traffic and note what hostname the traffic references. For instance, ...something-demo01.test.asapp.com is the demo environment for that implementation. You will see this on every call to the ASAPP backend, and it may be helpful to filter the network traffic to "ASAPP".

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Filter network traffic to **ASAPP**.
4. Look at the "Request URL" for the network call.
5.
Parse the hostname from `https://something-demo01.test.asapp.com/api/noauth/ShouldDisplayWebChat?ASAPP-ClientType=web-sdk&ASAPP-ClientVersion=4.0.1-uat`: something-demo01.test.asapp.com

### WebSocket Status

In addition to looking at the API calls, it is important to look at the WebSocket connections in use. You should also be able to inspect the frames within the WebSocket to ensure the system receives messages properly.

[https://developers.google.com/web/tools/chrome-devtools/network/reference#frames](https://developers.google.com/web/tools/chrome-devtools/network/reference#frames)

## Troubleshooting Customer Chat

### Should Display Web Chat

If chat does not display on the desired web page, the first place to check is ASAPP's call for `ShouldDisplayWebChat` via the **Chrome Developer Tool Network** tab. A successful API call response should contain a `DisplayCustomerSupport` field with a value of `true`. If this value is `false` for a page that should display chat, please reach out to the ASAPP Support Team.

Superusers can access the Triggers section of ASAPP Admin, which enables them to determine whether the visited URL should display chat.

To troubleshoot:

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Filter network traffic to **ASAPP**.
4. Look at the response for `ShouldDisplayWebChat` and confirm that `DisplayCustomerSupport` is `true`.

### Web SDK Context Input

To view the context provided to the SDK, you can look at the request payload of most SDK API calls. Context input may vary, but typical items include:

* Subdivisions
* Segments
* Customer info parameters
* External session IDs

Review the ASAPP SDK Web Context Provider page.

To view the context:

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Filter network traffic to **ASAPP**.
4.
Look at the "Request Payload" for `GetOfferedMessageUnauthed` and open the tree as follows:

**Params -> Params -> Context** -> All Customer Context (including Auth Token)

**Params -> Params -> AuthenticationParams** -> Customer ID

### Customer Authentication Input

For authenticated customer chat sessions, you can see the Auth key within the context parameters used throughout the API calls to ASAPP. The values passed into the Auth context depend on the integration.

Review the ASAPP SDK Web Context Provider page for the complete use of this key.

## Troubleshooting Agent Chat from Agent Desk

### Connection Status Banners

There are 3 connection statuses:

* Disconnected (Red)
* Reconnecting (Yellow)
* Connected (Green)

You will see a banner at the bottom of the ASAPP Agent Desk in the corresponding color: red, yellow, or green. The red and green banners only appear briefly while the connection state changes; the yellow banner remains until a connection is reestablished.

The connection state relies on 2 main inputs:

* 204 API calls
* WebSocket echo timeouts

After a 5-second grace period for either of these timeouts, a red or yellow banner appears.

**Yellow Reconnecting Banner**

### 204 API call

The ASAPP Agent Desk makes API calls to the backend periodically to ensure status and connectivity reporting is functional. Verify the HTTP status and response timing of these calls to look for indicators of an issue. These calls display as the number 204 in the Chrome Developer Tools Network tab.

To view these calls:

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Filter network traffic to **ASAPP**.
4. Look at the "204" calls over time to determine good health.

### WebSocket

When a customer chat loads onto the ASAPP Agent Desk, this creates a WebSocket.
During the life of that conversation, ASAPP sends continual echoes (requests and responses) to determine WebSocket health and status. ASAPP sends the echoes every 16-18 seconds and has a 6-second timeout by default. If these requests and responses intermittently time out, there is likely a network issue between the Agent Desktop and the ASAPP Desk application.

You can also view messages being sent through the WebSocket as the agent-to-customer conversation happens:

1. Open **Dev Tools** and navigate to the **Network** tab.
2. Reload the page or navigate to a page with ASAPP chat enabled.
3. Click **WS** next to the Filter text box to filter network traffic to WebSocket.
4. Look at the Messages tab in the WebSocket.

If you see one of these pairs of echoes missing, it is most likely because Agent Desk did not receive the echo from the ASAPP backend due to packet loss. If the 'Attempting to reconnect...' message shows, Agent Desk attempts to reconnect with the ASAPP backend to establish a new WebSocket. The messages display in red text starting with 'request?ASAPP-ClientType' in the Network tab of Chrome Developer Tools.

If you lose network connectivity and then re-establish it, there will be multiple WebSocket entries visible when you click **WS**.

## Troubleshooting Agent Desk/Admin Access Issues

### Using Employee List in ASAPP Admin

If a user has issues logging in to ASAPP, you can view their details within ASAPP Admin after their first successful login. Check the Enabled status, Roles, and Groups for the user to determine if there are any user-level issues. ASAPP will reject the user's login attempt if their account is disabled.

To find an employee:

1. Log in to ASAPP Admin.
2. Navigate to Employee List.
3. Use the filter to find the desired account.
4. Check account attributes for: Enabled, Roles, and Groups.

### Employee Roles Mismatch

During the user's SSO login process, ASAPP receives a list of user roles via the Single Sign-On SAML assertion.
If the user roles in the Employee List are incorrect:

1. Check with your Identity & Access Management team to verify that the user has been added to the correct set of AD Security Groups.
2. Once you have verified the user's AD Security Groups, please ask the user to log out and log back in using the IDP-initiated SSO URL.
3. If you still see a mismatch between the user's AD Security Groups and the ASAPP Employee List, then please reach out to the ASAPP Support Team.

### Errors During User Login

The SSO flow is a series of browser redirects in the following order:

1. Your SSO engine IDP-initiated URL -- typically hosted within your domain. This is the URL that users must use to log in.
2. Your system's authentication system -- typically hosted within your domain. If the user is already authenticated, it immediately redirects the user back to your SSO engine URL. Otherwise, the user is presented with the login page and prompted to enter their credentials.
3. ASAPP's SSO engine -- hosted on the auth-\{customerName}.asapp.com domain.
4. ASAPP's Agent/Admin app -- hosted on the \{customerName}.asapp.com domain.

There are several potential errors that may happen during login. In all of these cases, it is beneficial to find out:

1. The SSO login URL the user is using to log in.
2. The error page URL and error message displayed.

#### Incorrect SSO Login URL

Confirm the user logs in at the correct SSO URL. Due to browser redirects, users may accidentally bookmark an incorrect URL (e.g., ASAPP's SSO engine URL instead of your SSO engine IDP-initiated URL).

#### Invalid Role Values in the SSO SAML Assertion

If the user sees a "Failed to authenticate user" error message and the URL is an ASAPP URL (...asapp.com), then please confirm that correct role values are being sent in the SAML assertion. This error message typically indicates that the user role value is not recognizable within ASAPP.
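When debugging a role mismatch like the one above, it can help to diff the role values your IdP actually asserts against the set provisioned in ASAPP. The sketch below is illustrative only; the role names and the flat array shape of the parsed SAML role attribute are assumptions, not ASAPP's actual values:

```javascript
// Illustrative sketch: flag SAML-asserted role values that the platform
// would not recognize. Role names here are hypothetical examples.
const provisionedRoles = new Set(["agent", "supervisor", "admin"]);

function unrecognizedRoles(assertedRoles) {
  // Any asserted value outside the provisioned set would trigger a
  // "Failed to authenticate user"-style rejection.
  return assertedRoles.filter((role) => !provisionedRoles.has(role));
}
```

For example, `unrecognizedRoles(["agent", "AGENT_TIER2"])` would flag `"AGENT_TIER2"` as a value to reconcile with your AD Security Group mapping.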
#### Other Login Errors For any other errors, please check the error page URL. If the error page URL is an ASAPP URL (ends in asapp.com), please reach out to the ASAPP Support Team for help. If the URL is your SSO URL or your system's authentication system, please contact your internal support team. # Welcome to ASAPP Source: https://docs.asapp.com/welcome A personal AI agent for every customer with ASAPP Customer Experience Platform (CXP) Welcome to the ASAPP documentation! This is the place to find information on how to deploy the **ASAPP CXP**, with **GenerativeAgent**. Learn more about [ASAPP CXP](https://www.asapp.com/cxp). ## Getting Started with GenerativeAgent If you're new to ASAPP CXP, start here to learn how to build your first GenerativeAgent.
Learn how to build a GenerativeAgent that can use your KnowledgeBase to start answering your users' questions.
## Human + AI Empower your GenerativeAgent with human judgment and support by bringing in real-time human judgment, or fully handing off the conversation to a live agent. Learn how to bring human judgment into the AI-driven conversation with a Human in the Loop Agent (HILA). Learn how to use the Agent Desk to directly support customers. [Contact our sales team](https://www.asapp.com/get-started) for a personalized demo.