The most important thing to keep in mind when designing a good flow is whether it is likely to resolve the intent for most of your customers. It can be easy to diverge from this strategy (perhaps because a flow is designed with containment top of mind; perhaps because of inherent business process limitations). But it’s the best way you can truly allow customers to self-serve.
Since flows are invoked when ASAPP classifies an intent, understanding the intent in question is key to successfully designing a flow. The best way to do this is to review recent utterances that have been classified to the intent and categorize them into the more nuanced use cases your flow must address. This ensures that the flow you design is complete in its coverage, given how customers will enter the flow.
These utterances are accessible through ASAPP Historical Reporting, in the First Utterance table.
Every flow you build can be thought of as a hypothesis for how to effectively understand and respond to your customers in a given scenario. Your ability to refine those hypotheses over time—and test new ones—is key to managing a truly effective virtual agent program that meets your customers' needs.
We recommend performing the following steps on a regular basis—at least monthly—to identify opportunities for flow refinement, and improve effectiveness over time.
Once you’ve identified problematic flows, the next step is to determine why they are under-performing. In most cases you’ll quickly identify at least one of the following issues with your flow by reviewing transcripts of issueIDs from Conversation Manager in Insights Manager:
1. General unhelpfulness or imprecise responses
Oftentimes flows break down when the virtual agent responds confidently in a manner that is on-topic but completely misses the customer's point. A common example is customers reaching out about difficulty logging in, only to be sent to the same "forgot your password" experience they were having trouble with in the first place. Issues of this type typically receive a negative EndSRS score from the customer, who doesn't believe their problem has been solved.
The key to increasing the performance of these flows is to configure the virtual agent to ask further, more specific questions before jumping to conclusions. Following the example above, you could ask "Have you tried resetting your password yet?" Including this question can go a long way toward ensuring that the customer receives the support they're looking for.
2. Unrecognized customer responses
This happens when the customer says or wants to say something that the virtual agent is unable to understand.
In free-text channels, this will result in classification errors where the virtual agent has re-prompted the customer to no avail, or has incorrectly attempted to digress to another intent. You can identify these issues by searching for re-prompt language in transcripts where customers have escalated to an agent from the flow in question. Looking at the customer's problematic response, you can determine how best to improve your flow. If a customer's response is reasonable given the prompt, you can introduce a new response route in the flow and train it to understand what the customer is saying. Even if it's a path of dialog you don't want the virtual agent to pursue, it's better for the virtual agent to acknowledge the response and redirect than to fail to understand entirely.
Don’t:
Do:
Another option for avoiding unrecognized customer responses in free-text channels is to rephrase the prompt in a manner that reduces the number of ways a customer is likely to respond. This is often the best approach in cases where the virtual agent prompt is vague or open-ended.
Don’t:
Do:
In SDK channels (web or mobile apps), which are driven by quick replies, the concern is to ensure that customers have the opportunity to respond in the way that makes sense given their situation. A common example is failing to provide an "I'm not sure" quick reply option when asking a "yes or no" question. Faced with this situation, customers will often click on "new question" or abandon the chat entirely, leaving very little signal on what they intended. The best way to improve quick reply coverage is to maintain a clear understanding of the different contexts in which a customer might enter the flow: how they conceive of their issue, what information they might or might not have going in, and so on. Gaining this perspective is helped greatly by reviewing live chat interactions that relate to the flow in question, and determining whether your flow could have accommodated the customer's situation.
3. Incorrect classification
This issue is unique to free-text use cases and happens when the virtual agent thinks the customer said one thing, when in fact the customer meant something else. One example would be a response like “no idea” being misclassified as “no” rather than the expected “I’m not sure.”
Another example might be a response triggering a digression (i.e., a change of intent in the middle of a conversation), rather than an expected trained response route. This can happen in flows where you’ve trained response routes to help clarify a customer’s issue but their response sounds like an intent and thus triggers a digression instead of the response route you intended. For example:
While these issues tend to occur infrequently, when you do encounter them, the best place to start is revising the prompt to encourage responses that are less likely to be classified incorrectly. For example, instead of asking an open-ended question like "What is the reason for your refund?" (to which a customer response is very likely to sound like an intent), you can ask directly ("Was your flight cancelled?") or ask for more concrete information from which you can infer the answer ("No problem! What's the confirmation number?").
Alternatively, you can solve issues of incorrect classification by training a specific response route that targets the exact language that is proving problematic. In the case of the unclear “I’m not sure” route, a response route that’s trained explicitly to recognize “no idea” might perform better than one that is broadly trained to recognize the long tail of phrases that more or less mean “I’m not sure.” In this case, you can point the response route to the same node as your generic “I’m not sure” route to resolve the issue.
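As a sketch of this idea (hypothetical route names and structure, not the ASAPP API), a narrow route trained on the exact problematic phrase can simply point at the same destination node as the broader route:

```python
# Hypothetical routing table: a route trained explicitly on "no idea" points
# to the same node as the generic "I'm not sure" route, so both resolve
# identically. None of these names come from the product; they illustrate
# the pattern only.

ROUTES = {
    "not_sure": {
        "utterances": ["i'm not sure", "not sure", "i don't know"],
        "next_node": "clarify_issue",
    },
    # Narrow route targeting the exact phrase that was misclassified as "no".
    "not_sure_no_idea": {
        "utterances": ["no idea"],
        "next_node": "clarify_issue",  # same destination as the generic route
    },
    "no": {
        "utterances": ["no", "nope"],
        "next_node": "offer_alternatives",
    },
}

def resolve(utterance: str) -> str:
    """Return the next node for an utterance, checking each route's phrases."""
    text = utterance.strip().lower()
    for route in ROUTES.values():
        if text in route["utterances"]:
            return route["next_node"]
    return "reprompt"  # unmatched input falls through to a re-prompt
```

Because both routes share a destination, fixing the misclassification requires no new flow logic, only more precise training data.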
4. Too much friction
Another cause for underperformance is too much friction in a particular flow. This happens when the virtual agent is asking a lot of the customer.
One type of friction is authentication. Customers don't always remember their specific login or PINs, so authentication requests should be used only when needed. If customers are asked to find their authentication information unnecessarily, many will abandon the chat.
Another type of friction is repetitive or redundant steps—particularly around disambiguating the customer. While it’s helpful to clarify what a customer wants to do to adequately solve their need, repetitive questions that don’t feel like they are progressing the customer forward often lead to a feeling of frustration—and abandonment.
Once you've identified an issue with a specific flow, create a new version of it in AI-Console with one of the remedies outlined above. After you have implemented a new version, you can save and release it to a lower environment to test it, and subsequently to production. Then, track the impact in Historical Reporting in Insights Manager by looking at the Flow Success Rate for that flow on the Business Flow Details tab of the Flow Dashboard.
Messaging channels have advantages and limitations. Appreciating the differences will help you optimize virtual agents for the channels they live on, and avoid channel-specific pitfalls.
To illustrate this, look at a single flow rendered in Apple Messages for Business vs the ASAPP SDK:
The ASAPP SDK has quick replies, while Apple Messages for Business supports list pickers.
The ASAPP SDKs (Web, Android, and iOS) have a number of features that help to build rich virtual agent experiences.
Strengths of SDKs:
Limitations:
How to optimize for ASAPP SDKs:
First and foremost, your virtual agent needs to be effective at facilitating dialog. It may be tempting to focus on the virtual agent's tone and voice, but that can ultimately detract from its functional purpose. Next, we'll offer examples of effective and ineffective dialogs to help you when building out your flows.
The virtual agent is a bot, and it primarily serves a functional purpose. It is much better to be explicit with customers and move the conversation forward, rather than making potential UX sacrifices to sound friendly or human-like. Customers are coming to a virtual agent to solve a specific problem efficiently. Here is a positive example of a greeting that, while bot-like, is clear and effective:
Customers interact with virtual agents to solve a problem and/or to achieve something. They benefit from explicit guidance with how they are supposed to interact with the virtual agent. If your flow design expects the customer to do something, tell them upfront. Here is a positive example of clear instructions telling a customer how to interact with the virtual agent:
The virtual agent can’t always handle a customer’s issue. When you need to redirect the customer to self-serve on a website, or even on a phone number, set clear expectations for what they need to do next. You never want a customer to feel abandoned. Here are two positive examples of very clear instructions about what the customer will need to do next, and what they can expect:
Think of a bot like a standard interaction funnel — a customer has to go through multiple steps to achieve an outcome. Acknowledging progress made and justifying steps to the customer makes for a better user experience, and makes it more likely that the customer completes all of the steps (think of a breadcrumb in a checkout flow). The customer should have a sense of where they are in the process. Here's a simple example of orienting a customer to where they are in a process:
Over-personifying your virtual agent can make for a frustrating customer experience:
Affirmations help customers feel heard, and they help customers understand what the virtual agent is taking away from their responses. When drafting a virtual agent response, ensure that you match the copy to the variety of customer responses that may precede it — historical customer responses can be viewed in the utterance table in historical reporting.
If there is a broad set of reasons for a customer to end up on a node or a flow, your affirmation should likewise be broad:
Similarly, if there is a narrow set of reasons for a customer to end up in a node or a flow, your affirmation should likewise be narrow. Even then, take care not to put words in the customer's mouth, so they don't feel frustrated by the virtual agent.
In cases where writing a good affirmation feels particularly tricky, err on the side of not having one. That's fine, as long as the virtual agent responds in an expected manner given what the customer just said.
If interacting with your virtual agent is confusing or hard, people will revert to tried-and-true escalation pathways like typing "agent" or just calling in. As you are designing flows, be mindful of the following friction points you could introduce.
Deep links are used when you link a customer out of chat to self-serve. It is tempting to leverage existing web pages, and to create dozens of flows that are simple link outs. But this often does not provide a good customer experience.
A virtual agent that is mostly single step deep links will feel like a frustrating search engine. Wherever possible, try to solve a customer’s problem conversationally within the chat itself. Don’t rely on links as a default.
But, when you do rely on a deep link, make sure to:
Be careful with “all or nothing” requirements in a flow; if you want a customer to sign in to allow you to access an API, that’s great, but give customers an alternative option at that moment too. Some customers might not remember their password.
When you are at a point in a flow where there is a required step or just one direction a customer can go, think about what alternative answer there could be for a customer. If you don’t, those customers might just abandon the virtual agent at that point.
It’s tempting to design with the happy path in mind, but customers don’t always go down the flow you expect. Anticipate the failure points in a virtual agent, and design for them explicitly.
Always imagine something will go wrong when asking the customer to do something:
In channels where free text is always enabled (e.g., AMB, SMS), the customer's input may not be recognized. We recommend writing language that explicitly guides the customer toward the types of answers you're expecting. Leverage "else" conditions in your flows (on Response Nodes).
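The "else" branch can be sketched as follows (hypothetical route and prompt names, not the ASAPP API): any reply that matches no trained route falls through to guidance that restates the expected answers.

```python
# Hypothetical Response Node with an "else" condition: unrecognized free-text
# input falls back to a prompt that lists the answers the flow expects,
# instead of failing silently. Route names and phrases are illustrative.

TRAINED_ROUTES = {
    "one_way": ["one way", "just one way"],
    "round_trip": ["round trip", "return flight", "both ways"],
}

ELSE_PROMPT = 'Sorry, I didn\'t catch that. Please reply "one way" or "round trip".'

def classify_reply(reply: str) -> str:
    """Match a reply against trained routes; unmatched input takes 'else'."""
    text = reply.strip().lower()
    for route, phrases in TRAINED_ROUTES.items():
        if any(p in text for p in phrases):
            return route
    return "else"

def respond(reply: str) -> str:
    route = classify_reply(reply)
    if route == "else":
        return ELSE_PROMPT  # the else branch guides the customer explicitly
    return f"Got it, {route.replace('_', ' ')} it is."
```

The key design point is that the fallback message names the expected answers, so the customer's next attempt is far more likely to be recognized.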
Don’t:
Containment is a measure of whether a customer was prevented from escalating to an agent; it is the predominant measure in the industry for chatbot effectiveness. ASAPP, however, layers a more stringent definition called “Flow success,” which indicates whether or not a customer was actually helped by the virtual agent.
When you are designing a new flow or modifying an existing flow, be sure to enable flow success when you have provided useful information to the customer.
“Flow success” is defined as when a customer arrives at a screen or receives a response that:
With flow success, chronology matters. If a customer starts a flow, and is presented with insightful information (i.e. success), but then escalates to an agent in the middle of a flow (i.e. negation of success), that issue will be recorded as not successful.
Flow success is an event that can be emitted on a node.
It is incumbent on the author of a flow to define which steps in the flow they design could be considered successful.
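The chronology rule above can be sketched as a simple fold over an issue's event stream (hypothetical event names, not the ASAPP data model): a success event counts only if no escalation follows it.

```python
# Hypothetical computation of flow success for one issue: a "flow_success"
# event emitted on a node is negated by any later escalation to an agent.
# Event names are illustrative, not the product's actual event taxonomy.

def issue_successful(events: list[str]) -> bool:
    """Return True only if a success event occurs and is not followed
    by an escalation within the same issue."""
    success_seen = False
    for event in events:
        if event == "flow_success":
            success_seen = True
        elif event == "escalate_to_agent":
            success_seen = False  # later escalation negates earlier success
    return success_seen
```

This mirrors the example in the text: success followed by a mid-flow escalation records the issue as not successful.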
Default settings:
You're able to track your flows' performance on the "Automation Success" report in Historical Reporting. There you can assess containment metrics and flow success, which will help you determine whether a flow is performing according to expectations.
Flows are composed of different node types, each representing a particular state or action in a given flow. When you create a flow, you create a number of different nodes.
We recommend naming nodes to describe what the node accomplishes in a flow. Clear node names will make the data more readable going forward. Here are some best practices to keep in mind:
When you create a Response Node that is expected to classify free text customer input (e.g. “Would you like a one way flight or a round trip flight?”), you need to supply training utterances to train a response route. There are some best practices you should keep in mind:
Sometimes customers initiate conversations with vague utterances like “Help with bill” or “Account issues.” In these cases the virtual agent understands enough to classify the customer’s intent, but not enough to immediately solve their problem.
In these cases, you are able to design a flow that asks follow-up questions to disambiguate the customer’s particular need. Based on the customer’s response you can redirect them to more granular intents where they can better be helped.
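As a sketch of this redirection pattern (hypothetical intent names and structure, not the ASAPP API), a vague intent can map each follow-up selection to a more granular intent:

```python
# Hypothetical disambiguation flow: a vague "billing_vague" intent asks a
# follow-up question, and the customer's selection redirects to a granular
# intent. All names here are illustrative.

DISAMBIGUATION = {
    "billing_vague": {
        "prompt": "What can I help you with on your bill?",
        "routes": {
            "My bill is too high": "billing_dispute",
            "I want to pay my bill": "billing_payment",
            "I don't understand a charge": "billing_explain_charge",
        },
    },
}

def redirect(intent: str, selection: str) -> str:
    """Return the granular intent for a selection, or keep the original
    intent when no route matches."""
    flow = DISAMBIGUATION.get(intent)
    if flow and selection in flow["routes"]:
        return flow["routes"][selection]
    return intent
```

Keeping the original intent as the fallback means an unmatched selection can still be handled by the vague intent's default path rather than dead-ending.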
Designing effective disambiguation starts with reviewing historical conversations to get a sense of what types of issues customers are having related to the vague intent. Once you've determined these, you'll want to optimize your prompt and response routes for the channel you're designing for:
These channels are driven by quick replies only, meaning that the customer can only choose an option that is provided by the virtual agent. Here, the prompt matters less than the response branches and quick replies you write. Just make sure they map to things a customer would say, even if multiple response routes lead to the same place. For example:
These channels offer quick replies, but do not prevent customers from responding with free text. The key here is optimizing your question to increase the likelihood that customers choose a quick reply.
These channels are often the most challenging, as the customer could respond in any number of ways, and given the minimal context of the conversation it’s challenging to train the virtual agent to adequately understand all of them. Similar to other channels, the objective is to prompt in a manner that limits how customers are likely to respond. The simplest approach here is to list out options as part of your prompt:
Keep messages short and to the point. Walls of text can be intimidating. Never allow an individual message to exceed 400 characters, including spaces.
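A simple way to enforce this guideline in tooling (a hedged sketch; the 400-character limit comes from the text above, the helper itself is hypothetical) is to split long copy into chunks at word boundaries:

```python
# Hypothetical helper enforcing the length guideline: split a long virtual
# agent message into chunks under the limit, breaking on spaces. A single
# word longer than the limit would still exceed it; this sketch ignores
# that edge case.

MAX_MESSAGE_CHARS = 400

def split_message(text: str, limit: int = MAX_MESSAGE_CHARS) -> list[str]:
    """Split text into whitespace-delimited chunks of at most `limit` chars."""
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > limit and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

In practice you would run a check like this at authoring time, so oversized messages are flagged before they reach customers.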
An example of something to avoid:
Quick Replies should be short and to the point. Some things to keep in mind when writing Quick Replies: