AI-Enabled Non-verbal Expression for Chatbot Enhancement
Chatbot technology relies on textual content, both typed text and speech-recognised text, to devise subsequent responses to the other parties in a conversation. Audio and visual non-verbal communication cues, such as emotion, body language and facial expression, are neither recognised nor incorporated into the analysis during the information exchange. Such non-verbal cues may carry supportive context that helps chatbots engage better in the ongoing conversation.
This project aims to explore the use of artificial intelligence and machine learning to train three non-verbal expression detectors that recognise the audio and visual non-verbal cues representing the immediate reactive responses of the parties in a conversation. The audio non-verbal cues will be classified by a trained sound response detector, while the visual non-verbal cues will be classified by a trained facial response detector and a trained upper-body gesture response detector.
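As a minimal sketch of how the three detectors' outputs could be combined into extra context for a chatbot, consider the fusion step below. The detector labels, confidence scores and threshold are illustrative assumptions for this sketch, not details specified by the project:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical detector output: each of the three detectors (sound,
# face, upper-body gesture) emits a cue label with a confidence score.
@dataclass
class CueDetection:
    modality: str    # "sound", "face", or "gesture"
    label: str       # e.g. "laughter", "smile", "shrug" (illustrative)
    confidence: float

def fuse_nonverbal_cues(detections: List[CueDetection],
                        threshold: float = 0.5) -> List[str]:
    """Keep only confident cues and format them as context tags that a
    chatbot's response model could consume alongside the text input."""
    confident = [d for d in detections if d.confidence >= threshold]
    # Sort so the strongest reactive signals come first.
    confident.sort(key=lambda d: d.confidence, reverse=True)
    return [f"[{d.modality}:{d.label}]" for d in confident]

# Example outputs the three trained detectors might produce for one
# conversational turn (values are made up for illustration).
detections = [
    CueDetection("sound", "laughter", 0.91),
    CueDetection("face", "smile", 0.84),
    CueDetection("gesture", "shrug", 0.32),  # below threshold, discarded
]
print(fuse_nonverbal_cues(detections))  # ['[sound:laughter]', '[face:smile]']
```

The fused tags could then be prepended to the text of the turn, giving the chatbot's response model access to the reactive non-verbal context without changing its text-based interface.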
This enhancement will be useful for chatbots expected to be deployed in industries such as travel, elderly care and recruitment.