Voice-driven interaction in XR spaces (VOX Reality)

Summary

  • VOXReality aims to facilitate the convergence of Natural Language Processing (NLP) and Computer Vision (CV) technologies in the Extended Reality (XR) field.

  • The project develops innovative models that combine language as a core interaction medium with visual understanding, resulting in next-generation applications that comprehensively understand users' goals, surroundings, and context.

  • The resulting virtual assistants will be deployed in three use cases: a factory setting, a virtual conference (Immersive Tech Week), and a theatre play (Athens Epidaurus Festival).

 

Keywords

Extended Reality (XR), Natural Language Processing (NLP), Computer Vision (CV), Digital Agents, Virtual Conferencing, Theatre

Date

October 2022 - Ongoing

Budget

€ 4.78M (total)

Role

Design Researcher

 

My Role

  • I conducted interactive focus group workshops with the three use case partners - theatre (7 participants, Greece), virtual conference (7 participants, Netherlands), and digital agents (2 participants, Germany) - to gather and define user requirements for developing voice-driven VR/AR interaction technology.

Details soon to be updated :P

 
Sueyoon Lee

Sueyoon is a user experience designer & researcher based in Amsterdam. She creates immersive yet comfortable experiences with design and technology through a user-centric approach.

Previous

Ignite the Immersive Media Sector by Enabling New Narrative Visions (TRANSMIXR)

Next

Designing and Evaluating a VR Lobby for a socially enriching remote Opera watching experience