HMI Modelling for Multimodal Lithuanian Applications

Rytis Maskeliunas, Kastytis Ratkevicius


Spoken dialogue based human-machine interfaces (HMI) are becoming increasingly integrated into computer applications. Speech allows some tasks to be performed more easily and quickly. Its combination with more traditional means of input and output – i.e. the multimodality factor – is becoming ever more important, allowing wider accessibility. It is important to model and design spoken-language dialog trees that mimic natural conversation in human-computer interaction, especially in information retrieval systems and applications. The paper presents three algorithms for HMI dialogs and the results of their experimental evaluation. The results showed that it is possible to achieve about 97% recognition accuracy in simple phrase-based dialog conversations and about 93% in very naturally sounding keyword-spotting-based dialogs.
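The two dialog styles contrasted in the abstract – strict phrase-based navigation of a dialog tree versus keyword spotting in free-form utterances – can be sketched as follows. This is an illustrative sketch only, not the three algorithms evaluated in the paper; the tree contents, node names, and helper functions are invented for the example.

```python
# Illustrative sketch: a minimal phrase-based dialog tree and a naive
# keyword spotter. NOT the paper's algorithms; all names are hypothetical.

DIALOG_TREE = {
    "root": {"prompt": "What would you like to do?",
             "options": {"check balance": "balance",
                         "transfer money": "transfer"}},
    "balance": {"prompt": "Here is your balance.", "options": {}},
    "transfer": {"prompt": "How much would you like to transfer?",
                 "options": {}},
}

def next_node(node_id, recognized_phrase):
    """Phrase-based dialog: advance only on an exact match against the
    current node's allowed phrases; otherwise stay and re-prompt."""
    options = DIALOG_TREE[node_id]["options"]
    return options.get(recognized_phrase.strip().lower(), node_id)

def spot_keywords(utterance, keywords):
    """Keyword spotting: accept a free-form utterance and return whichever
    keywords occur anywhere in it, ignoring the surrounding words."""
    words = utterance.lower().split()
    return [k for k in keywords if k in words]
```

The trade-off the abstract's accuracy figures reflect is visible here: `next_node` constrains the recognizer to a few fixed phrases (easier to recognize), while `spot_keywords` lets the user speak naturally at the cost of having to find the keyword inside arbitrary surrounding speech.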



speech recognition; voice dialog modeling; human-machine interfaces


Print ISSN: 1392-124X 
Online ISSN: 2335-884X