Customizing the Assistant
Once the Assistant has been created, you can customize the following:
- Supported languages and the default one
- Color and theme of the Assistant
- The various prompts
- The default actions to be performed
Note that it's not mandatory to do the customization at this point. You can jump directly to the code integration step and come back to this later. Check with your Slang engineering contact for the recommended process for your app.
Go to the App Settings section
Select the "App Settings"
Then select "Language Settings" in the tabs below that
Select "Language Settings"
There you can specify the languages you want your Assistant instance to support and also select the "default" language.
Set the supported languages and the default one
Note that you can also change these settings via code, but the Console is the preferred approach.
The next thing to configure is usually the prompt that is spoken when the Assistant starts up. This can be configured from the "Prompts" section under "App Settings".
You can change two types of prompts here: the greeting prompt and the clarification prompt.
Greeting prompt
This is the prompt that will be spoken (which can be turned off if required) and shown to the user as soon as the user (or the app) invokes the Assistant.
The greeting prompt has 3 levels, and for each level you can create multiple versions of the prompt, from which the Slang Assistant will pick randomly.
- Level 1 - This is the prompt that will be spoken when Slang is invoked for the very first time. Here you can create slightly longer prompts like "Welcome to My Awesome Company. I can help you get what you want quickly. What product are you searching for?"
- Level 2 - This is the prompt that will be spoken when Slang is invoked during subsequent sessions of the app. So prompts like "Welcome back. What product are you looking for?"
- Level 3 - This is the prompt that will be spoken when Slang is invoked again in the same session. Prompts like "Next?" or "What's next?" are ideal. This prompt should be short and crisp, as the user might invoke the Assistant multiple times in the same session.
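The level logic above can be sketched as follows. This is a hypothetical illustration of the selection behavior described here, not the SDK's actual implementation; the prompt strings mirror the examples above.

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

public class GreetingPrompts {
    // Multiple versions per level; the Assistant picks one at random.
    static final Map<Integer, List<String>> PROMPTS = Map.of(
        1, List.of("Welcome to My Awesome Company. I can help you get what "
                 + "you want quickly. What product are you searching for?"),
        2, List.of("Welcome back. What product are you looking for?"),
        3, List.of("Next?", "What's next?")
    );

    // Level 1: very first invocation ever; Level 2: first invocation in a
    // later session; Level 3: repeat invocation within the same session.
    static int level(boolean firstEver, boolean firstInSession) {
        if (firstEver) return 1;
        return firstInSession ? 2 : 3;
    }

    static String pick(int level, Random rng) {
        List<String> versions = PROMPTS.get(level);
        return versions.get(rng.nextInt(versions.size()));
    }

    public static void main(String[] args) {
        Random rng = new Random();
        System.out.println(pick(level(true, true), rng));   // long welcome
        System.out.println(pick(level(false, false), rng)); // short prompt
    }
}
```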
Clarification prompt
This is the prompt that is spoken when Slang does not understand what the user said and wants the user to repeat their command or say it in a different way.
Suppose the user says "what is the time" to a Slang Assistant trained for the Retail domain. This is not something Slang is usually trained to understand (remember, Slang is not a general-purpose assistant like Alexa but is purpose-oriented and domain-specific), so Slang will tell the user that it did not understand what they said and ask them to try again.
By default, Slang will retry 3 times to understand what the user said before giving up, and you can configure a different prompt for each of those 3 levels.
While CONVA does not enforce how the prompts are actually structured, here are some guidelines for the different levels -
- Level 1 - Something very simple and straightforward like "Sorry I did not understand. Try again"
- Level 2 - The user is probably not clear about what to say. Give more detail, with examples of things that might work: "Sorry. Try saying things like Mango 2 kg"
- Level 3 - Maybe they are speaking too fast or before the system is ready. Tell them to speak more slowly: "Sorry. Can you try speaking slowly and after the beep sound?"
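The escalation across the 3 retries can be sketched like this. Again, this is a hypothetical illustration of the behavior described above, not Slang's actual code; the prompt strings are the examples from the guidelines.

```java
public class ClarificationPrompts {
    // One prompt per retry level, escalating in detail.
    static final String[] LEVELS = {
        "Sorry I did not understand. Try again",
        "Sorry. Try saying things like Mango 2 kg",
        "Sorry. Can you try speaking slowly and after the beep sound?"
    };

    // attempt is 1-based; Slang gives up after 3 failed attempts.
    static String promptFor(int attempt) {
        if (attempt < 1 || attempt > LEVELS.length) {
            throw new IllegalArgumentException("attempt must be 1..3");
        }
        return LEVELS[attempt - 1];
    }

    public static void main(String[] args) {
        for (int attempt = 1; attempt <= 3; attempt++) {
            System.out.println(promptFor(attempt));
        }
    }
}
```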
Once the Assistant recognizes what the user spoke and classifies it into one of the supported journeys, the next thing is for the app to perform the action corresponding to that journey. CONVA allows you to specify the actions to be performed directly via the Console, without having to write any code.
But this will only work if the user journey target can be reached via either -
- A deep link (for all platforms)
- An Android intent (for Android)
This way of defining actions does not let the app exploit the full power of the App States and conditions that the Assistant exposes (for multi-turn and multi-modal conversations). So use this mechanism to get started quickly, but once you are comfortable with the idea behind actions, it's recommended to define them via code, as explained here.
To specify the actions, go to the "User Journeys" section
Go to the "User Journeys" section
And below that click on the "hand" icon next to the various journeys
Click on the "hand" icon which represents the action to be taken
And enter the deep link that will take you to the corresponding page in the app
Enter the deep-link corresponding to the journey
You can also pass some of the data that Slang retrieved from the user's utterance to this deep link. The data is specific to each journey.
Currently, tags are supported only for the Search journey. More tags and support for more journeys will be added over time
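As an illustration of how a tag could flow through a deep link, the sketch below builds a Search deep link carrying the user's search phrase as a query parameter and shows how the app's deep-link handler could pull it back out. The `myawesomeapp://search` scheme and the `query` parameter name are hypothetical; use your app's actual deep-link format.

```java
import java.net.URI;
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class SearchDeepLink {
    // Hypothetical scheme and parameter; substitute your app's real ones.
    static String build(String searchPhrase) {
        return "myawesomeapp://search?query="
            + URLEncoder.encode(searchPhrase, StandardCharsets.UTF_8);
    }

    // What the app's deep-link handler would do: recover the tag value.
    static String extractQuery(String deepLink) {
        String rawQuery = URI.create(deepLink).getQuery(); // e.g. "query=mango+2+kg"
        String value = rawQuery.substring(rawQuery.indexOf('=') + 1);
        return URLDecoder.decode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String link = build("mango 2 kg");
        System.out.println(link);
        System.out.println(extractQuery(link));
    }
}
```

On Android the same extraction would typically happen via `getIntent().getData()` in the Activity the deep link resolves to.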
The next step is to integrate the Retail Assistant SDK into the app and follow the code integration steps.