Nonvocal mobile communication based on radar sensors and articulatory speech synthesis ("RadarSpeech")
This project is financed by tax funds on the basis of the budget approved by the Saxon State Parliament.
Brief description
The popularity of speech assistants, whether built directly into our smartphones or provided as standalone devices such as Google Now or Alexa (to name only a few), has increased tremendously in recent years. In all cases, however, the user has to speak to the device aloud, which can be awkward in public settings (e.g., on public transport, in restaurants, or on busy streets) for both the user and the people nearby. This is especially problematic for private or confidential matters.
The goal of the RadarSpeech project is to develop a technology capable of detecting silently uttered speech by measuring only the movements of the articulators (e.g., the tongue, jaw, and lips) with radar sensors. Several small antennas are placed on the chin or cheeks, which measure the reflection and transmission properties of the vocal tract, from which speech can be inferred.
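To make the sensing idea concrete, the following is a minimal, hypothetical sketch of how such radar measurements might be turned into features for a speech-inference model. It assumes each measurement frame yields one complex reflection or transmission coefficient per antenna pair; the channel count, frame rate, and feature choices below are illustrative assumptions, not the project's actual processing chain.

```python
import numpy as np

# Illustrative assumptions (not from the project description):
N_CHANNELS = 4    # assumed number of antenna pairs
FRAME_RATE = 100  # assumed measurement frames per second

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Turn complex radar coefficients into real-valued feature vectors.

    frames: complex array of shape (n_frames, N_CHANNELS), one
    reflection/transmission coefficient per antenna pair and frame.
    Returns an array of shape (n_frames, 3 * N_CHANNELS) holding
    magnitude, unwrapped phase, and frame-to-frame magnitude deltas,
    which together reflect articulator position and movement.
    """
    mag = np.abs(frames)
    phase = np.unwrap(np.angle(frames), axis=0)
    delta = np.diff(mag, axis=0, prepend=mag[:1])  # movement cue
    return np.concatenate([mag, phase, delta], axis=1)

# Example with synthetic data standing in for real sensor readings:
rng = np.random.default_rng(0)
raw = (rng.normal(size=(FRAME_RATE, N_CHANNELS))
       + 1j * rng.normal(size=(FRAME_RATE, N_CHANNELS)))
features = extract_features(raw)
print(features.shape)  # (100, 12)
```

Feature vectors of this kind would then serve as input to a recognizer or an articulatory speech synthesizer; the mapping from radar features to speech is the core research question of the project.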
Apart from voiceless communication, this technology could also improve spoken communication under acoustically difficult conditions, such as when wearing a breathing mask or working near noisy machinery.
Project duration
Period: 15.8.2019 - 30.6.2022
Project partner
TUD internal project partners
Chair of Radio Frequency and Photonics Engineering, Institute of Communication Technology (IfN)
Contact
Professor
Prof. Dr.-Ing. Peter Birkholz
Chair of Speech Technology and Cognitive Systems
Visiting address:
Barkhausenbau, Room S48
Helmholtzstraße 18
01069 Dresden