Speakers

Jakub Klimeš
Jakub Klimeš is a sound engineer, producer, and pianist currently studying Art of Sound at the Royal Conservatoire in The Hague, The Netherlands. He graduated from the Academy of Performing Arts in Prague with a Bachelor’s degree in Music Production. Jakub specializes in recording Classical and Early music, mostly chamber ensembles and smaller orchestras. His research topics include specific approaches to recording early music and the improvement of classical music producers’ training. Besides music production, he is an active musician; as a pianist, he focuses on chamber music and solo repertoire.

Jakub Pesek
Jakub Pesek is a 25-year-old music producer, audio researcher and singer-songwriter currently studying Art of Sound at the Royal Conservatoire in The Hague, The Netherlands. Jakub is a creative person who loves music and the digital world and is excited about new technology. He is also a self-employed web developer and a qualified tennis coach. In 2019, Jakub graduated with a Bachelor of Arts in Popular Music and Music Production from the University of the Highlands and Islands.

Demo: Spatial Audio Designer
The Spatial Audio Designer (SAD) is a high-performance pro audio tool for creating content and monitoring sound for movies, music, games, VR, and events. The SAD enables the mixing and monitoring of any immersive (surround or 3D) format, such as 5.1, 7.1, Dolby Atmos 7.1.2 Bed, 11.1, and 22.2, with speakers or regular headphones in any DAW. If no appropriate loudspeaker system is available, the SAD provides the high-quality, flexible, and personally adjustable Headphone Surround 3D binaural loudspeaker virtualization for standard headphones. Jakub Pesek and Jakub Klimeš will introduce this powerful creation tool, having attended masterclasses with Tom Ammermann, the developer of the Spatial Audio Designer.


Maria Schween is a freelance media artist with a focus on music and sound design for film and games. She studied media design, followed by a media arts master’s specializing in electroacoustic composition at SEAM in Weimar, and audio engineering at SAE in Leipzig. She has composed the music for films such as "Totentanz" by Urban Gad and "Lady Europa" by Toni Aurelio Agliata. One of her most important compositions for video games is "Bauhaus_Oasis", which could be experienced in several German museums in 2019. In addition to her work as a composer and sound designer, she gives specialist lectures and is a member of the German Games and XR Association “Games und XR Verband Mitteldeutschland”.

Adaptive Music in Games
The electronic game format enables the listener to actively participate in the process of composing and interpreting music. What does this mean for electroacoustic music? Adaptive music and adaptive sound design are standard in the game industry, and reaching this market with new music requires engaging with their technical and compositional fundamentals. What are the special features of composing adaptive music? What methods of sound design are there? What influence can the player have? What are the limits of adaptive music? These questions will be explored using examples from “NieR: Automata” (2017), an ARPG by Square Enix, and the mixed reality installation “Bauhaus Oasis” (2019) by Florian Froger. On the whole, I am convinced that the video game field, and especially VR, can drive a further development of electroacoustic music. In addition, music composition and sound design go hand in hand like in no other medium. Since the immersion in VR is so great, the visual layer makes it possible to create spaces that cannot easily be realized with a loudspeaker orchestra.
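One common adaptive-music technique the talk's questions point toward is vertical layering: stems stack up or fade out as a single game-state parameter changes. The sketch below is an illustration of that general idea only, not material from the talk or from either game; the stem names and fade windows are hypothetical.

```python
def layer_gains(intensity, layers):
    """Map a game-state intensity (0.0-1.0) to per-stem gains.

    Each layer fades in linearly over its [start, end] window, so
    stems accumulate as the action intensifies (vertical layering).
    """
    gains = {}
    for name, (start, end) in layers.items():
        if intensity >= end:
            gains[name] = 1.0
        elif intensity <= start:
            gains[name] = 0.0
        else:
            gains[name] = (intensity - start) / (end - start)
    return gains

# Hypothetical stems for an exploration-to-combat transition.
LAYERS = {
    "pads":       (0.0, 0.0),   # always on
    "percussion": (0.2, 0.4),
    "strings":    (0.4, 0.7),
    "brass":      (0.7, 1.0),
}
```

In a game engine these gains would be re-evaluated every frame and smoothed over time, so the score responds continuously to the player rather than jumping between cues.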


Peter Pabon studied biochemistry, signal processing and sonology at Utrecht University. His professional career started in 1983 as a part-time researcher on a project called Objective Recording of Voice Quality with Professor Plomp at VU University in Amsterdam, and he worked at Utrecht University as a teacher/researcher on (singing) voice analysis and speech and music acoustics from 1983 until 2011.

He initiated a project for singing voice synthesis and analysis at the Royal Conservatoire that later resulted in a cooperative project with the singing department to monitor voice change as an effect of voice training. In 2002, he founded Voice Quality Systems, a company in which he develops the voice quality recording system Voice Profiler, which is nowadays in use at many clinical centres, conservatories and schools for professional voice training. Special to this recording system is a dual microphone headset that automatically selects the singer’s voice by continuously tracking the distance and orientation of the sound source. Peter Pabon completed a PhD thesis at KTH Stockholm, which has generated several papers and presentations on Voice Range Profile (VRP) recording methodology and the effects of voice training.

How to become a virtual listener in a virtual sound field?
Which techniques are available to model our binaural hearing in a virtual sound space? To what extent can we sample or (re)synthesize a private reconstruction of a sound field in which you, as a local listener, can move freely? This presentation touches upon a number of modelling techniques, such as HRTFs, head tracking, wave field synthesis, and Ambisonics. These terms will be briefly introduced and, where possible, (links to) simple demonstrations will be given. I thereby hope to clarify how the above methods fit either the approach of modelling the listener’s ears within the sound field, or that of getting the sound field to the listener’s ears.
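To give one of those terms a concrete shape: the core of the HRTF approach is filtering a mono source through a left/right pair of head-related impulse responses. The sketch below is a toy illustration of this idea (my addition, not from the presentation); the two-tap "HRIRs" merely mimic head shadow and interaural time delay, standing in for measured responses.

```python
def convolve(signal, ir):
    """Plain FIR convolution (output length len(signal)+len(ir)-1)."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Filter one mono source through an HRIR pair -> (left, right)."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Toy HRIRs for a source on the listener's left: the right ear is
# attenuated (head shadow) and delayed by one sample (the ITD).
HRIR_L = [1.0, 0.0]
HRIR_R = [0.0, 0.5]

left, right = binauralize([1.0, 0.0, 0.0], HRIR_L, HRIR_R)
```

Real binaural renderers do the same filtering with measured or modelled HRIRs, switching the pair as head tracking reports a new source direction relative to the listener.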


Master’s student in Sound of Innovation, ArtEZ Conservatorium.

Functional music production for a self-built music therapy prototype
In the second year of the master’s programme, we decided to apply the prototype built in the first year to the field of music therapy. To realize this idea, a series of studies was conducted to address how music production can be applied to music therapy. This lecture presents the music production procedure used in the project, including setting functional goals, collaborating on production within a team, and applying innovative music technology. The prototype will be introduced at the beginning as contextualization.

As the son of a magician, Steye tries to create magic with the newest media and technology. A main thread running through his work is putting the audience in the centre of his experiences. A good example is his latest project: The Smartphone Orchestra. An orchestra created by the phones of the audience members themselves with which Steye tells stories with the audience instead of to them.
Steye studied both music and fine arts at the Royal Academy for Music, Dance and Art in The Hague in The Netherlands and was creative lead at the Medialab of Dutch broadcaster VPRO. After this he worked as creative director for JauntXR EMEA. The VR music video he made for his own band (What do we care4) was nominated for a UK music video award in 2015 and was a worldwide hit amongst early adopters of virtual reality. The cinematic VR experience Ashes to Ashes, which Steye directed, won gold at the Dutch VR Awards and was nominated for the Dutch Oscars. Weltatem, an interactive virtual reality opera game (in which the audience was taught how to sing), won two Dutch Game Awards. At this moment Steye works as creative director for The Smartphone Orchestra and for 4DR Studio, a volumetric video capture studio in the Netherlands.

Thilo Schaller is a composer, music producer and audio educator currently residing in Canada. As recording engineer/producer, Thilo has worked on award-winning productions with solo artists, ensembles, and orchestras in Europe, Russia, and South and North America. His composition projects include autonomous music as well as music for feature film and interactive media. In 2010, Thilo joined the faculty at the University of Lethbridge and has since been involved in developing and designing courses and study programs in audio arts. He currently works as Assistant Professor at Buffalo State College, State University of New York. With his experience as an educator and his expertise in interactive and immersive audio, Thilo is a frequent presenter at AES conventions and has been invited for guest lectures, workshops and master classes at renowned institutions such as the Banff Centre (Canada) and UNTREF (Argentina).

Arne Bock is a live sound, recording, mixing and mastering engineer. His work encompasses sound system design, optimization, and operation as well as music production. He is an expert in acoustic music with a focus on contemporary classical music productions and spatial audio and has worked with internationally acclaimed opera companies and ensembles. His work has led Arne to some of the most prestigious performance venues around the world, and albums he produced have been released on major labels. His current research focuses on the application of immersive audio tools in live sound and in interactive audio and audiovisual installations.

In 2020, Arne and Thilo - both Art of Sound graduates from the Royal Conservatoire, The Hague - joined forces to create IMEXsound (www.imexsound.com), aiming to produce creatively and technically compelling 3D audio content and provide spatial audio consulting services.

Tools for Creativity - Choosing the Right Technology for Being Creative in Immersive and Interactive Applications
Choosing the appropriate technical tools based on creative intent is an important aspect when composing and producing music for interactive and/or immersive media. Technical possibilities of hardware and software solutions need to be considered while selecting the ‘right’ approach for each project. This presentation provides an overview of various production tools in relation to creative considerations for immersive and interactive audio experiences. Several examples of ‘static’ binaural music and interactive applications will be presented and software/hardware solutions will be compared. Furthermore, creative considerations as well as technical challenges and limitations that directly influence the final experience will be discussed.


Thomas is a spatial audio designer, sound designer, composer and founder of scopeaudio. He studied Media Arts at the University of Applied Arts in Salzburg, focusing on multichannel audio applications. He has worked in several studios on projects such as the BMW Museum, where he gained experience in the field of multichannel sound installations. Since 2015, Thomas has focused on designing 3D audio environments for VR/AR/XR. He recently released a location-based augmented audio application on the Viennese Heldenplatz, called SONIC TRACES, where people can wander through a 3D soundscape and listen to stories from 1848 and 2084. More information: sonictraces.com

Location based 6DoF Spatial Audio: Creating walkable soundscapes in a 3D audio environment
In this lecture students will learn to create a Unity-based scene for a walkable, location-based AR audio experience. We will explore and compare the possibilities of different spatial audio renderers, but also their limitations (at a beginner level) in Unity when it comes to creating believable soundscapes, both natural-sounding and artistic. We will therefore take a look at a middleware (an audio engine for Unity) called Wwise, which helps modify the behaviour of sound according to user input and is, in my opinion, essential for controlling your soundscape. We will also combine aesthetic and practical choices in recording the material and creating the soundscape, to a) sound better and b) help guide the user through it. Furthermore, we will discuss where AR audio might be heading and for which applications it makes sense to use 6DoF spatial audio.
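To make the distance behaviour of a walkable soundscape concrete, here is a generic sketch (my own illustration, not Unity's or Wwise's actual API) of the kind of rolloff curve such engines expose: each source is attenuated by listener distance, with a near clamp and a far cut-off.

```python
import math

def distance_gain(listener, source, min_dist=1.0, max_dist=20.0):
    """Inverse-distance attenuation with a near clamp and far cut-off,
    similar in spirit to the rolloff curves game audio engines expose.

    listener and source are (x, y, z) positions in metres.
    """
    d = math.dist(listener, source)
    if d <= min_dist:
        return 1.0          # inside the near field: full level
    if d >= max_dist:
        return 0.0          # beyond audible range: silent
    return min_dist / d     # 1/d rolloff in between
```

In a 6DoF scene this gain is recomputed every frame as the tracked listener walks, which, together with direction-dependent binaural filtering, is what makes sources feel anchored in the space.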

Founder, CEO WeMakeVR.

Director, writer and Creative Technologist of Immersive Experiences.

Avinash Changa is an all-rounder in the field of digital concepts and production techniques, and a true VR evangelist. Brand names such as Tommy Hilfiger, Samsung, Heineken, Oculus, JBL, IBM, as well as musical institutions such as the New York Symphony Orchestra, the London Symphony Orchestra, ID&T, Sensation, Into the Great Wide Open, the Berliner Philharmoniker and many more have chosen to work with WeMakeVR.

A few notable works he and his team have worked on:

The MetaMovie Presents: Alien Rescue – Venice Biennale official selection 2020

Ahorse – SXSW Official selection 2019

Souvenir / On Entre – EYE Dioramas Nov/Dec. 2019

Meeting Rembrandt – Halo award 2018

Ashes to Ashes – 2017 Golden Calf Nominee



Since founding the award-winning studio WeMakeVR in 2013, he has become a much-requested speaker and guest at international conferences, TV programs and other media covering the immersive industry. Highlights include BBC News Live, MIT’s EmTech, The Next Web, CodeMotion, and the Guangzhou International Innovation Festival in China.

He often speaks about the role of immersive technologies such as AR, VR and MR in the future of other industries, and he’s passionate about their untapped potential. His mission is to bring meaningful applications and content to the world that improve quality of life for everyone.



“Immersive Tech: past, present and future”

Talk summary

Throughout time humanity has been searching for better ways to communicate and transfer knowledge. Immersive technologies such as VR and AR have proven to help convey knowledge and practical skills more efficiently and effectively than any technology that came before. This bold claim has been supported by research from universities such as Stanford, and researchers such as Mel Slater, Jeremy Bailenson and many others.

Areas such as entertainment, arts, healthcare and education are evolving. In the academic field, immersive technologies are increasingly being implemented as valuable tools, and processes such as training, safety and experiment design are moving into a world of new possibilities. And now, in a world dealing with the consequences of a global pandemic, immersive tech is facilitating new possibilities for social connection and entertainment.

In his talk, Avinash Changa will take the audience on a journey along the past, present and future of immersive technologies.

Talk duration:

60 minutes + 20 min. Q&A

Suitable audience level:

Accessible to beginners / people unfamiliar with VR

Please note: this symposium is not open to the public. Students eligible for this symposium will be contacted directly by email.