As a design studio, we first embraced virtual reality design back in 2015, after trying a few early demonstrations on the development kits available to us at the time. Even at that early stage, we could immediately see the advantages this medium held over the CGI renders and video flythroughs we were then producing. By placing a design into an immersive 3D space for the user to explore, homeowners can be shown their future property at full 1:1 scale, giving architects and other designers a powerful, visual way to share their ideas.
In our line of work, photo-realism is crucial. Our clients expect us to recreate textures, materials and environmental effects such as light and shadow to an incredibly detailed degree. Yet, as the axiom goes, sound comprises half of the cinematic experience, and poor audio effects initially compromised how lifelike our earliest VR projects could be. A poorly placed or unrealistic audio source is distracting to the user, reminding them that the virtual world they see around them is, after all, just an illusion.
Our initial efforts in VR audio were incredibly simple: in our first project, we had only a single audio source, which was triggered when the user interacted with the television and turned it on. Over time, we have refined this aspect, and the audio of our VR experiences has become much more sophisticated. As well as incorporating a range of sound effects – from birdsong to rock music – throughout the interior and exterior of a property, we now integrate multiple audio call-outs that are triggered by certain events. Examples include adding background noise from the street outside when the user opens a window, then cutting this off when it is closed again, or changing the resonance of a user's footsteps when they move from one type of flooring to another.
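Stripped of any engine specifics, the switching logic behind these event-triggered cues can be sketched in plain C++. Everything here is illustrative – the state names, gain value and sample filenames are assumptions, not our production setup:

```cpp
#include <string>

// Hypothetical sketch of event-triggered audio cues: street noise is
// gated by the window state, and the footstep sample follows the
// surface underfoot. Names and values are illustrative only.
enum class Floor { Wood, Tile, Carpet };

struct RoomAudioState {
    bool windowOpen = false;
    Floor floor = Floor::Wood;

    // Volume of the street-noise loop: audible only while the window is open.
    float streetNoiseGain() const { return windowOpen ? 0.6f : 0.0f; }

    // Pick the footstep sample that matches the current flooring.
    std::string footstepSample() const {
        switch (floor) {
            case Floor::Wood:   return "footstep_wood.wav";
            case Floor::Tile:   return "footstep_tile.wav";
            case Floor::Carpet: return "footstep_carpet.wav";
        }
        return "footstep_wood.wav";
    }
};
```

Opening the window simply raises the loop's gain from zero, and stepping from wood onto carpet swaps the sample – the realism comes from tying these tiny state changes to the user's actions.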
We create our VR experiences in Unreal Engine, a game engine that gives us a solid platform for creating realistic visual and aural environments. As we have continued to experiment with VR audio in Unreal, our studio can now place several individual audio sources in the same space without having to worry about audio bleed or delay issues between them.
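One reason several sources can coexist without bleeding into each other is per-source attenuation: each emitter fades to silence within its own radius, so a distant source contributes nothing to the mix. A minimal standalone sketch of that idea (not Unreal's actual attenuation API, and using a simple linear falloff for clarity):

```cpp
#include <cmath>

// Illustrative point source with its own attenuation radius, so a
// sound fades to silence before it can bleed into a neighbouring room.
struct Source {
    float x, y, z;     // world position of the emitter
    float gain;        // base volume at the emitter
    float maxRadius;   // beyond this distance the source is inaudible
};

// Linear falloff from full gain at the emitter to zero at maxRadius.
float audibleGain(const Source& s, float lx, float ly, float lz) {
    float dx = s.x - lx, dy = s.y - ly, dz = s.z - lz;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (d >= s.maxRadius) return 0.0f;
    return s.gain * (1.0f - d / s.maxRadius);
}
```

In practice engines offer richer falloff curves (logarithmic, natural sound), but the principle is the same: each source is audible only inside its own bubble.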
Creating a realistic surround-sound stage for architectural VR
Over the past year, Unreal has made significant changes to the way it handles audio within its platform, forcing designers such as ourselves to frequently adapt how we implement music and other audio cues. That said, this rapid development means that a number of features are becoming available – such as multi-platform EQ and reverb master effects – that will allow us to make the sound quality of our architectural builds significantly more realistic. Coupled with improvements such as submix effects, source effects, real-time synthesis, and better audio plug-in support, we are now able to experiment with fully-fledged 3D audio soundscapes in our designs for the first time.
One trial project we are running involves mapping out a home cinema room in its entirety, allowing a user to compare different brands of loudspeaker, different channel configurations (e.g. 5.1 surround sound, Dolby Atmos), different types of acoustic treatment (e.g. wall panels, carpet thickness), and different bass and treble characteristics assigned to individual source points. Developing a realistic equalisation process for this kind of space will make an immediate difference for technology integrators when it comes to selling and designing home cinemas, allowing a customer not only to see the options available but to hear them as well.
This is still a work in progress: we are waiting on further platform updates from Unreal and trialling a number of 3D audio delivery options in order to fully realise this ambition. Creating a true simulation of surround sound in VR remains problematic and time-consuming but, gradually, we are getting closer to getting it right.