We are pleased to announce the upcoming Virtual Acoustics for Immersive Audio workshop, taking place July 21 – August 1, 2025, at the Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, with the option to attend remotely.
The workshop is aimed at those interested in exploring spatial audio for virtual and augmented reality applications. The goal is to provide participants with the theoretical background and practical tools needed to develop their own immersive audio projects. The course will combine lectures with hands-on exercises. Week 1 will focus on room acoustics and artificial reverberation, key concepts for understanding how sound behaves in both physical and virtual spaces. This foundational knowledge will prepare participants for Week 2, which will cover the practical spatial audio techniques used to create immersive sound experiences.

Details:
• When: July 21 – August 1, 2025
• Where: CCRMA, Stanford University, California – or join remotely
• Cost: $800 in person / $350 remote
• Requirements: Familiarity with linear algebra, basic DSP knowledge, and some experience with a scientific programming language (we’ll be using Python).
• Scholarships: A limited number of scholarships are available for students and individuals from underrepresented backgrounds in the field.

The application deadline is May 16th at 23:59 (AoE).

Please visit the official workshop page for the full program and registration details:
https://ccrma.stanford.edu/workshops/virtual-acoustics-immersive-audio

Instructors:

Orchisama Das brings a strong background in spatial audio and artificial reverberation, with experience spanning both academia and industry. She is currently a postdoctoral researcher at King’s College London, working on real-time room acoustics rendering for immersive audio. She was previously at Sonos, where she developed spatial audio algorithms for binaural rendering. Orchi received her PhD from CCRMA.
Gloria Dal Santo has a strong interest in the use of machine learning for audio applications, with a particular focus on artificial reverberation. As part of her PhD research at the Aalto Acoustics Lab in Finland, she is exploring how machine learning can help address some of the key challenges that remain in this field.

Feel free to reach out to Orchisama ([email protected]) or Gloria ([email protected]) if you have any questions!

Warm regards,
Orchisama and Gloria
