This workshop aims to bring together interaction designers, usability researchers, and general HCI and speech processing practitioners. Our goal is to create, through an interdisciplinary dialogue, momentum for increased research and collaboration in:
Formally framing the challenges to the widespread adoption of speech, acoustic, and natural language interaction,
Taking concrete steps toward developing a framework of user-centric design guidelines for speech-, acoustic-, and language-based interactive systems, grounded in good usability practices,
Establishing directions to take and identifying further research opportunities in designing more natural interactions that make use of speech and natural language, and
Identifying key challenges and opportunities for enabling and designing multi-input modalities for a wide range of emerging devices such as wearables, smart home personal assistants, or social robots.
We invite the submission of position papers demonstrating research, design, practice, or interest in areas related to speech, acoustic, language, and multimodal interaction that address one or more of the workshop goals, with an emphasis on, but not limited to, applications such as mobile, wearable, smart home, social robots, or pervasive computing.
Position papers should be 4-6 pages long, in the ACM SIGCHI extended abstract format, and include a brief statement justifying the fit with the workshop's topic. Summaries of previous research are welcome if they contribute to the workshop's multidisciplinary goals (e.g., speech processing research in clear need of HCI expertise). Submissions will be reviewed according to:
Fit with the workshop topic
Potential to contribute to the workshop goals
A demonstrated track record of research in the workshop area (HCI or speech/multimodal processing, with an interest in both areas).
January 25th, 2017: Submission of position papers
February 8th, 2017: Notification of acceptance
February 22nd, 2017: Camera-ready submissions
Workshop URL: http://www.dgp.toronto.edu/dsli2017/