HMI Modelling for Multimodal Lithuanian Applications

Rytis Maskeliunas, Kastytis Ratkevicius

Abstract


Spoken dialogue based human-machine interfaces (HMI) are becoming increasingly integrated into computer applications. Speech allows some tasks to be performed more easily and quickly. Its combination with more traditional means of input and output, i.e. multimodality, is becoming increasingly important, as it allows wider accessibility. It is important to model and design spoken language dialog trees that mimic natural conversation in human-computer interaction, especially in information retrieval systems and applications. This paper presents three algorithms for HMI dialogs and the results of their experimental evaluation. The results showed that it is possible to achieve about 97% recognition accuracy in simple phrase-based dialogs and about 93% in very naturally sounding keyword-spotting based dialogs.
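To illustrate the difference between a rigid phrase-based dialog and a keyword-spotting based one, the following is a minimal sketch (not the authors' implementation) of a dialog-tree step that routes on keywords spotted in an already-recognized utterance. The tree contents, node names, and the route() helper are hypothetical and only stand in for the kind of dialog structure discussed in the paper.

```python
# Toy dialog tree: each node lists the keywords that lead to other nodes.
# Node names, prompts, and keywords are illustrative assumptions.
DIALOG_TREE = {
    "root": {
        "keywords": {
            "oras": "weather",      # Lithuanian "weather"
            "naujienos": "news",    # Lithuanian "news"
        },
        "prompt": "Ko pageidaujate: oro ar naujienu?",
    },
    "weather": {"keywords": {}, "prompt": "Pranesu orus."},
    "news": {"keywords": {}, "prompt": "Pranesu naujienas."},
}


def route(node: str, transcription: str) -> str:
    """Return the next dialog node whose keyword is spotted in the utterance."""
    words = transcription.lower().split()
    for keyword, target in DIALOG_TREE[node]["keywords"].items():
        if keyword in words:
            return target
    return node  # no keyword spotted: stay at the current node and re-prompt


if __name__ == "__main__":
    # A free-form utterance still routes correctly, because only the keyword
    # matters - unlike a phrase-based dialog, which expects an exact phrase.
    next_node = route("root", "noreciau suzinoti koks bus oras rytoj")
    print(next_node, "->", DIALOG_TREE[next_node]["prompt"])
```

The sketch operates on text for simplicity; in the paper's setting the spotting is performed by the speech recognizer itself, which is why the keyword-spotting dialogs sound more natural but show somewhat lower recognition accuracy.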

DOI: http://dx.doi.org/10.5755/j01.itc.41.2.909


Keywords


speech recognition; voice dialog modeling; human-machine interfaces


Print ISSN: 1392-124X 
Online ISSN: 2335-884X