The age of assistance is here! Voice-based conversational technology has become readily available, and the tools offered by Google’s Dialogflow make it easy to model and create such interfaces. This lowers the barrier to entry and allows you to provide a personalized experience on a digital assistant platform.
This blog answers the most common questions about (voice) assistants, based on our investigations with Google’s Dialogflow technology.
What do you mean by “the age of assistance is here”?
Recent progress in machine learning has enabled the creation of powerful speech recognition and language processing software, paving the way for voice-based human interaction with devices. The ubiquity of the internet-enabled mobile phone has made it very easy to tap into this processing power, and that is why we say the age of assistance has arrived. According to Voicebot there are already over a billion devices that support this technology, and in the US alone there are over 130 million monthly active users.
What’s the difference between a Google Home/Alexa and a voice assistant?
Google Home and Alexa are devices that encapsulate a voice assistant. They are basically a speaker, a microphone and a bit of software that lets you use the voice technology of one of those companies. A voice assistant itself can be used on many more devices, such as a smart TV, a smart home system, or a smartwatch.
What are some of the use cases for this technology?
With this new technology come a lot of new opportunities to engage with your customers. Here are a few examples that have been explored already:
- Assisting customers in deciding what to buy
- Answering general questions about your company
- Checking the status of orders
- Placing orders
- Replacing (parts of) a call center with a Dialogflow agent
- Enabling customers to check the stock of your products
With continuous improvements on the technology side, we can only expect this list to keep growing.
How do I get started with this technology?
The beauty of Dialogflow is that you do not need a PhD in machine learning in order to get started with the complex task of natural language processing. Dialogflow handles all the intricacies so you can focus on the conversation you want your customer to be able to have. You can simply train a voice assistant in Dialogflow (called an agent) by entering training phrases as follows:
Here you can see we are training an agent to answer questions regarding buying furniture. The highlighted parts are concepts that Dialogflow knows about (in this example the quantities “2” and “one”, and the products “lamp”, “desk” and “table”). You can use these processed concepts (parameters) to answer the customer most appropriately based on your stock information.
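If you prefer to automate agent setup, the same training phrases can also be created programmatically. Below is a minimal sketch using the google-cloud-dialogflow Python client (Dialogflow ES, v2 API). The project ID, the “buy.furniture” intent name and the @product entity are placeholders for illustration; the custom @product entity type and matching intent parameters would need to exist in your agent.

```python
# A minimal sketch, assuming the google-cloud-dialogflow client (Dialogflow ES, v2 API).
from google.cloud import dialogflow

project_id = "your-gcp-project"          # placeholder
intents_client = dialogflow.IntentsClient()
parent = f"projects/{project_id}/agent"  # resource path of the agent

# Training phrases; annotated parts map pieces of text to entities (parameters),
# e.g. the quantity "2" and the product "lamps" from the example above.
# @product is a hypothetical custom entity type assumed to exist in the agent.
training_phrases = [
    dialogflow.Intent.TrainingPhrase(parts=[
        dialogflow.Intent.TrainingPhrase.Part(text="I would like to buy "),
        dialogflow.Intent.TrainingPhrase.Part(
            text="2", entity_type="@sys.number", alias="quantity"),
        dialogflow.Intent.TrainingPhrase.Part(text=" "),
        dialogflow.Intent.TrainingPhrase.Part(
            text="lamps", entity_type="@product", alias="product"),
    ]),
    dialogflow.Intent.TrainingPhrase(parts=[
        dialogflow.Intent.TrainingPhrase.Part(text="Do you sell desks?"),
    ]),
]

# A simple static response; a webhook can later replace this with a stock-aware answer.
message = dialogflow.Intent.Message(
    text=dialogflow.Intent.Message.Text(text=["Let me check our stock for you."])
)

intent = dialogflow.Intent(
    display_name="buy.furniture",
    training_phrases=training_phrases,
    messages=[message],
)

response = intents_client.create_intent(request={"parent": parent, "intent": intent})
print("Created intent:", response.name)
```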
Another big advantage is that you don’t need to concern yourself with turning text responses into speech. Google’s text-to-speech library handles that for you.
Do I need to figure out all of this myself?
Dialogflow offers pre-built agents: complete agents that each specialize in handling a certain task. You can use these to get started quickly or as guidance on how to implement tasks of your own.
Besides these pre-built agents there is also a self-guided tutorial section on the Dialogflow website. And of course, Xebia also offers a training course if you prefer a personal guided tour on how to build voice assistants. More information will be available soon. Do you want us to keep you informed when it becomes available? Leave your details here.
How can I help my customers find my assistant?
There is not really an organic way for your customers to discover that you have a voice assistant available for them; there is nothing like an App Store for voice assistants where you can promote your app. There is, however, a directory listing of Google Assistant-enabled applications explaining how to interact with them, which can be found at: https://assistant.google.com/explore
Am I tied to the Google ecosystem when I start using this?
When using Dialogflow you are tied to the Google ecosystem for the speech recognition and natural language processing part of your conversations. Once Google has processed a phrase and converted it into the parameters you have configured, you can tell Google to send those parameters to your existing systems and let them handle the generation of the response for the customer. There seem to be no plans for performing the speech recognition and natural language processing on device, or for making the software capable of this available outside of the Google ecosystem.
How can I integrate this technology with my existing systems?
Dialogflow offers webhooks through which you can transfer control from Dialogflow to your existing systems. This allows you to go beyond simple preprogrammed answers and serve tailored responses to your customers’ requests based on information provided by your own infrastructure. Using entities, you can extract information from your customers’ utterances and pass that data as parameters to your own services. Dialogflow has a large number of built-in entities it can recognize and turn into parameters for your services. A complete overview of these built-ins can be found here: https://dialogflow.com/docs/reference/system-entities
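To give an idea of what such a webhook can look like, here is a minimal sketch of a fulfillment webhook built with Flask. The “product” and “quantity” parameter names and the check_stock helper are hypothetical stand-ins for your own inventory systems; Dialogflow ES posts the extracted parameters under queryResult.parameters and expects the reply in the fulfillmentText field.

```python
# A minimal sketch of a Dialogflow ES fulfillment webhook using Flask.
# The "product"/"quantity" parameters and check_stock are hypothetical placeholders.
from flask import Flask, request, jsonify

app = Flask(__name__)

def check_stock(product: str) -> int:
    """Placeholder for a call into your own inventory system."""
    fake_inventory = {"lamp": 12, "desk": 3, "table": 0}
    return fake_inventory.get(product, 0)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(silent=True) or {}
    params = body.get("queryResult", {}).get("parameters", {})

    product = params.get("product", "")
    quantity = int(params.get("quantity", 1) or 1)

    in_stock = check_stock(product)
    if in_stock >= quantity:
        text = f"Good news, we have {in_stock} {product}s in stock."
    else:
        text = f"Sorry, we currently have only {in_stock} {product}s left."

    # Dialogflow ES expects the reply in the "fulfillmentText" field.
    return jsonify({"fulfillmentText": text})

if __name__ == "__main__":
    app.run(port=8080)
```

You would expose an endpoint like this over HTTPS and enable fulfillment for the intents that need it.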
How do I integrate this with existing conversational applications?
Dialogflow is not limited to the Google Assistant platform. It allows for integrations with, for example, the Facebook Messenger platform or Skype, allowing you to cater to an even broader audience than the Google-based ecosystem. The full list of current integrations can be found here. Besides these pre-built integrations Dialogflow offers an API that allows you to build custom integrations.
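As a sketch of what a custom integration can look like, the snippet below sends a user’s text to an agent through the detectIntent API using the google-cloud-dialogflow Python client and prints the agent’s reply. The project ID and session ID are placeholders; in a real integration the session ID identifies one ongoing conversation.

```python
# A minimal sketch of a custom integration via the detectIntent API.
from google.cloud import dialogflow

def detect_intent_text(project_id: str, session_id: str, text: str,
                       language_code: str = "en"):
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the user's text in a query input for the agent.
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    result = response.query_result
    print("Matched intent:", result.intent.display_name)
    print("Agent reply:   ", result.fulfillment_text)
    return result

if __name__ == "__main__":
    # "your-gcp-project" and "some-session-id" are placeholders.
    detect_intent_text("your-gcp-project", "some-session-id",
                       "Do you have 2 lamps in stock?")
```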
How can I offer a personalized experience with my assistant?
Dialogflow offers various methods for linking the person interacting with the assistant to an identity. The easiest one is Google Sign-In. This way you can identify customers who have registered with your services using their @gmail.com email addresses.
A second option is using OAuth. This mechanism adds a bit more friction to the login process, as it cannot be done with voice only. You will need to ask your customer to continue the login flow on a device with a screen, but once that one-time process has been completed the customer can continue using voice.
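For illustration, here is a hedged sketch of how a fulfillment webhook could pick up the Google Sign-In identity. It assumes Google Sign-In account linking is configured and that the Actions on Google request is forwarded by Dialogflow under originalDetectIntentRequest.payload; the client ID is a placeholder.

```python
# A hedged sketch: reading a Google Sign-In identity inside a fulfillment webhook.
# Assumes the Actions on Google request arrives under originalDetectIntentRequest.payload
# and that Sign-In account linking has been set up for the action.
from typing import Optional

from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

ACTIONS_CLIENT_ID = "1234567890-example.apps.googleusercontent.com"  # placeholder

def identify_user(webhook_body: dict) -> Optional[str]:
    """Return the verified e-mail address of the signed-in user, if any."""
    payload = webhook_body.get("originalDetectIntentRequest", {}).get("payload", {})
    token = payload.get("user", {}).get("idToken")
    if not token:
        return None  # the user has not completed the sign-in flow yet

    # Verify the ID token against Google's public keys and our client ID.
    claims = id_token.verify_oauth2_token(
        token, google_requests.Request(), ACTIONS_CLIENT_ID
    )
    return claims.get("email")
```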
Another way to personalize the experience for your customer is to use their current location. You can use this to offer localized responses to the user’s queries, such as “what is the nearest store with tables in stock?”. Getting the customer’s location depends on the Google Maps API and is therefore subject to the usage limits of that API.
What happens when the assistant doesn’t understand a customer?
Dialogflow offers a mechanism for handling the situation where the customer’s input cannot be processed into something meaningful. It falls back to a default response, in which you can present the customer with examples of what they can ask your assistant.
As a Dialogflow developer you have access to the Training section of your agent. This offers an overview of phrases that could not be processed. You can select phrases here and add them to the training phrases mentioned earlier so your assistant will recognize them correctly in the future. This way you can keep on improving your conversations and thereby your customer’s experience.
What about my sales funnel?
Dialogflow offers powerful insights into the way customers are interacting with your assistant. You can see the session flow of your customers. This way you can easily investigate where your customers are dropping off in a conversation with your assistant and what you can optimize in your voice offering.
The following chart shows analytics for a fairly simple agent allowing a customer to check the stock of products and subsequently reserve them for pickup later during the day.
You can easily navigate between the different steps of the conversation and see the numbers and percentages for each step.
What about privacy?
Storing customer data is subject to the same regulations as it is for websites, so you will need to get consent from the user for storing that data and provide a way for the user to remove it. Google also requires you to specify a privacy statement for your assistant. Data sent to Google’s APIs (for natural language processing, for example) is covered by the following privacy policy.
Is it limited to English?
No, at the moment of writing Dialogflow supports over 20 languages. For a complete list of supported languages see this page. As language is essential to the functioning of the agent, you need to design your conversations per language. This means you need to translate your conversations into all the languages you would like to support.