Abstract: This is an important problem, since voice-controlled devices and speech-to-text transcription are only two examples of how automatic recognition of spoken language in noisy environments may ...
Abstract: The Mixture of Experts (MoE) model is a promising approach for handling code-switching speech recognition (CS-ASR) tasks. However, the existing CS-ASR work on MoE has yet to leverage the ...
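Since the abstract above refers to the standard Mixture of Experts architecture, a minimal sketch of a gated MoE forward pass may help illustrate the idea. All names, dimensions, and the dense routing scheme below are hypothetical illustrations, not details taken from the cited paper.

```python
import numpy as np

# Minimal Mixture of Experts (MoE) sketch: a gate scores each input frame
# and the output is a gate-weighted combination of small expert networks.
rng = np.random.default_rng(0)
d_model, n_experts, d_ff = 16, 4, 32

# Each "expert" is a small two-layer feed-forward network (hypothetical sizes).
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.1,
     rng.standard_normal((d_ff, d_model)) * 0.1)
    for _ in range(n_experts)
]

# The gate maps each frame to a softmax distribution over experts.
gate_w = rng.standard_normal((d_model, n_experts)) * 0.1


def moe_forward(x: np.ndarray) -> np.ndarray:
    """x: (n_frames, d_model) acoustic features; returns the same shape."""
    logits = x @ gate_w                               # (n_frames, n_experts)
    logits -= logits.max(axis=-1, keepdims=True)      # numerical stability
    gates = np.exp(logits)
    gates /= gates.sum(axis=-1, keepdims=True)

    # Gate-weighted sum of expert outputs (dense routing for clarity;
    # practical MoE layers usually route each frame to the top-k experts only).
    out = np.zeros_like(x)
    for e, (w1, w2) in enumerate(experts):
        expert_out = np.maximum(x @ w1, 0.0) @ w2     # ReLU MLP expert
        out += gates[:, e:e + 1] * expert_out
    return out


frames = rng.standard_normal((10, d_model))
print(moe_forward(frames).shape)  # (10, 16)
```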