Profile
The Chair of Speech Technology and Cognitive Systems conducts research at the interface of speech technology, phonetics, and machine learning. Our overarching goal is to better understand the cognitive, biomechanical, and physical processes involved in speech production and perception, and to apply these findings to practical problems in speech technology. The main research areas are:
- the computational modeling of the vocal tract and vocal folds, together with the simulation of speech movements, aerodynamics, and acoustics during speech, to enable flexible and natural speech synthesis (articulatory speech synthesis),
- physical replication of the vocal tract and vocal folds as a computer-controlled electro-mechanical system,
- physical realizations of the concept of echo state networks (recurrent artificial neural networks),
- the recognition of silently produced speech, both for "silent telephony" and to provide a substitute voice for people whose larynx has been removed,
- the development of instrumental methods for measuring movements during speech, e.g., electro-optic stomatography for measuring tongue and lip movements, a method for non-invasive measurement of soft palate movements, and a method for non-invasive measurement of vocal fold movement.
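Among the areas above, echo state networks have a particularly compact algorithmic core: a fixed random recurrent "reservoir" is driven by the input signal, and only a linear readout is trained. The following minimal sketch illustrates this idea on a toy task (one-step-ahead prediction of a sine wave); all sizes, parameters, and the task itself are illustrative assumptions, not the chair's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: neither W_in nor W is ever trained.
n_reservoir = 200
spectral_radius = 0.9  # keep below 1 to encourage the echo state property

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Drive the reservoir with the input sequence u; collect all states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.array([u_t]) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(1000)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])
y = u[1:]

washout = 100  # discard the initial transient before fitting
X, y = X[washout:], y[washout:]

# Ridge-regression readout: the only trained part of an ESN.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Because only the readout weights are solved for in closed form, training is a single linear regression; this cheapness is also what makes physical realizations of the reservoir attractive, since the fixed recurrent part need not be adjustable.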
Teaching at the professorship includes lectures on speech synthesis, speech recognition, signal processing, and pattern recognition (machine learning), with accompanying exercises and lab courses.