Nonverbal capabilities and dialog strategies

In this demo we show the current state of NAVEL’s nonverbal and verbal dialog capabilities. The demo is cut together from two recording sequences.

As the technical basis for the dialog system, we use Rasa, which employs various AI modules both for NLU tasks such as intent detection and for selecting response strategies. For example, the system recognizes user answers that do not exactly match the predefined ones, and the dialog system can also handle topic changes that were not previously defined in the stories. For speech recognition, we currently use Google ASR. With a few tricks, we were able to reduce latency enough to create a pleasant dialog flow.
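
To give an idea of how such out-of-scope handling can look in Rasa, here is a minimal sketch of a custom action using the rasa_sdk API. The action name, the confidence threshold, and the reply texts are our own assumptions for the example, not NAVEL’s actual dialog content.

```python
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionHandleTopicChange(Action):
    """Hypothetical fallback action for user messages that do not fit the
    predefined answers or the current story."""

    def name(self) -> Text:
        return "action_handle_topic_change"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        # Intent and confidence that Rasa's NLU assigned to the latest message
        intent = tracker.latest_message.get("intent", {})
        confidence = intent.get("confidence", 0.0)

        if confidence < 0.6:
            # Low confidence: ask the user to rephrase instead of guessing
            dispatcher.utter_message(text="Sorry, could you say that again in other words?")
        else:
            # Recognized but off-script: acknowledge the new topic, then steer back
            dispatcher.utter_message(text="That's a nice topic! Let's come back to it after our exercise.")
        return []
```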
 
NAVEL’s special nonverbal capabilities are partly controlled directly by the dialog system so that they stay synchronized and consistent with the dialog. Basic emotions and specific gestures are set in the dialog editor. In addition, we have automated as much of the nonverbal behavior as possible, so that authentic, lifelike behavior emerges without much manual work. Functions such as eye contact, including saccades and gaze aversion, are fully automated. Even the movement patterns appropriate to different dialog situations are not scripted by hand but automated.
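
To make that layering concrete, here is a purely illustrative Python sketch of how an automated gaze layer might alternate between eye contact, small saccades, and occasional gaze aversion. The class name, parameters, and numeric values are assumptions made for this example, not NAVEL’s actual implementation.

```python
import random


class GazeController:
    """Illustrative automated gaze layer: keeps eye contact, adds small
    saccades, and occasionally averts the gaze to look more natural."""

    def __init__(self, aversion_probability: float = 0.15,
                 saccade_amplitude_deg: float = 2.0):
        self.aversion_probability = aversion_probability
        self.saccade_amplitude_deg = saccade_amplitude_deg

    def next_gaze_target(self, face_yaw_deg: float, face_pitch_deg: float):
        """Return the next (yaw, pitch) gaze target in degrees, given the
        detected position of the user's face."""
        if random.random() < self.aversion_probability:
            # Brief gaze aversion to the side, as people do while thinking
            return (face_yaw_deg + random.choice([-15.0, 15.0]),
                    face_pitch_deg - 5.0)
        # Otherwise hold eye contact, perturbed by a small saccade
        return (
            face_yaw_deg + random.uniform(-1.0, 1.0) * self.saccade_amplitude_deg,
            face_pitch_deg + random.uniform(-1.0, 1.0) * self.saccade_amplitude_deg,
        )
```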
 
For the dialog itself, we use a wide variety of communication strategies to guide and activate the user: open and closed questions, positive feedback, recognition, self-disclosure, activating questions, and so on. In terms of content, we currently cover exemplary functions suitable for care settings, including games, entertainment, information, small talk, and exercises to improve well-being. For the latter we draw on exercises from mindfulness and positive psychology.
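
To illustrate what such strategies can look like in practice, here is a small, hypothetical mapping from strategy names to example utterances. All wording is invented for this sketch and not taken from NAVEL’s dialogs.

```python
# Hypothetical examples of communication strategies and matching utterances
STRATEGY_EXAMPLES = {
    "open_question": "What did you enjoy most about your day?",
    "closed_question": "Would you like to try a short breathing exercise?",
    "positive_feedback": "That was a lovely answer, thank you for sharing it.",
    "self_disclosure": "I sometimes need a moment to think, too.",
    "activating_question": "Shall we play a quick guessing game together?",
}
```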
 
We are also very excited to see how it goes when we return to the nursing homes with these new skills and NAVEL interacts with “normal” people!
 

Stay tuned.
