VR works


This is a quick reading list for getting started with 360 videos. It will expand in particular on the directing, storyboarding and editing processes.
Also check out this playlist of tutorial videos I've collected.

Some of my VR videos are available on my YouTube channel. The most professional examples are instead hosted on our Facebook page: 360 Video and Audio - Concerts in Parma
I've also recorded several editions of the Verdi Festival, and created a simple site to host the recordings: Verdi 360.

A couple of more recent works: an underwater recording with spatial audio at Panarea Blackpoint, Sicily, and a series of 360 videos designed to help kids choose their high school, the Orientamente project. The latter was the first time I used lavalier microphones, creating the Ambisonics mix in post-production. A Brahma microphone was also used for ambience.

Finally, some of our videos are available for download and local playback on my father's Jump Videos page. Also, my whole thesis project was about VR!


Minimal workflow

The prototype Brahma literally gaffa-taped to the Qoocam

Most video editors natively support 360 video these days, and most 360 cameras come with their own stitching software, which is usually perfectly adequate. You can simply create a sequence that matches the correct resolution, frame rate and so on, making sure to mark it as VR and to enable spatial audio. 360 video without spatial audio is, in my opinion, completely unwatchable.
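
As a quick sanity check before uploading, you can verify that the exported file actually carries a four-channel (first-order AmbiX) audio stream. A minimal sketch in Python, shelling out to ffprobe (the file name is just an example; ffprobe must be on your PATH):

    import subprocess

    def audio_channels(path: str) -> int:
        """Return the channel count of the first audio stream, via ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "a:0",
             "-show_entries", "stream=channels", "-of", "csv=p=0", path],
            capture_output=True, text=True, check=True,
        )
        return int(out.stdout.strip())

    # "concert_360.mp4" is a hypothetical export
    assert audio_channels("concert_360.mp4") == 4, "expected 4-channel Ambisonics"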

For musical performances

For live music, ambience is really important. To capture it I recommend using an Ambisonics microphone array, of which there are many at pretty much every price point. They usually come with an A-format to B-format converter, or with appropriate convolution matrices. It is then just a question of setting up your video editor for Ambisonics (here's a little guide for Premiere) and synchronizing it to the in-camera audio.
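
For reference, the idealized conversion for a tetrahedral array is just a 4x4 matrix applied to the capsule signals; real microphones ship measured filters that also correct each capsule's response, so treat this numpy sketch (with an assumed capsule ordering) as a toy model:

    import numpy as np

    # A-format capsule ordering assumed: FLU (front-left-up), FRD (front-right-down),
    # BLD (back-left-down), BRU (back-right-up).
    A_TO_B = np.array([
        [1,  1,  1,  1],   # W: omnidirectional
        [1,  1, -1, -1],   # X: front-back
        [1, -1,  1, -1],   # Y: left-right
        [1, -1, -1,  1],   # Z: up-down
    ])

    def a_to_b(a_format: np.ndarray) -> np.ndarray:
        """Convert (n_samples, 4) A-format to (n_samples, 4) B-format (W, X, Y, Z)."""
        return a_format @ A_TO_B.T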

For speech

If somebody is speaking, the most important thing is to hear them clearly. The best ways of achieving this are with highly directional microphones or lavaliers. It is then necessary to position the sound source in the Ambisonics field. For static characters you can use a VST plugin, such as O3A Panner, and export a screenshot of your scene to aid in aiming.
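
For intuition, this is roughly what such a panner does internally, reduced here to first order with AmbiX (ACN/SN3D) conventions; O3A itself works at third order, so this is only a sketch:

    import numpy as np

    def encode_first_order(mono: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
        """Encode a mono signal at (azimuth, elevation), in radians, into
        first-order AmbiX: ACN channel order W, Y, Z, X with SN3D weights."""
        gains = np.array([
            1.0,                                  # W (ACN 0)
            np.sin(azimuth) * np.cos(elevation),  # Y (ACN 1)
            np.sin(elevation),                    # Z (ACN 2)
            np.cos(azimuth) * np.cos(elevation),  # X (ACN 3)
        ])
        return mono[:, None] * gains[None, :]

    # e.g. a speaker 30 degrees to the listener's left, at eye level:
    # b_format = encode_first_order(lav_track, np.deg2rad(30), 0.0)
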
For moving characters you need either more advanced software, like the Facebook Spatial Workstation, or to be very quick in tracking the subject across the image. This is particularly tricky with something like O3A Panner, because you cannot see your video in real time. I therefore cannot really recommend this approach for complex scenes, but as a minimal workflow it works quite well. There are also more advanced programs that let you display the video while you edit the audio and, more importantly, perform an auralization that takes the source's distance into account.
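
Those more advanced tools essentially automate that panning over time. A toy sketch of a keyframed moving source, reusing the conventions above (the linear interpolation and the 1/distance attenuation law are illustrative assumptions):

    import numpy as np

    def encode_moving(mono: np.ndarray, sr: int, key_t, key_az, key_dist) -> np.ndarray:
        """Encode a mono track whose azimuth (radians) and distance (metres)
        are keyframed at times key_t (seconds); elevation fixed at zero."""
        t = np.arange(len(mono)) / sr
        az = np.interp(t, key_t, key_az)        # linear azimuth automation
        dist = np.interp(t, key_t, key_dist)
        s = mono / np.maximum(dist, 1.0)        # crude distance attenuation
        # per-sample first-order AmbiX gains (W, Y, Z, X)
        return np.stack([s, s * np.sin(az), np.zeros_like(s), s * np.cos(az)], axis=1)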


Room scale playback


This isn't strictly speaking VR, but we've set up the "white room" of the Casa del Suono to play back third-order Ambisonics in sync with our videos. We're using the pre-existing audio system, and a computer with an Nvidia Quadro driving four 1080p projectors. The sense of presence isn't comparable to a dedicated HMD, but it still provides an immersive, yet social, audiovisual experience.
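
For the curious, the step that turns the Ambisonics stream into speaker feeds can be sketched with a simple mode-matching decoder (pseudo-inverse of the re-encoding matrix). Our actual system runs at third order with its own calibrated decoder; this first-order, horizontal-ring version is only illustrative:

    import numpy as np

    def decoder_matrix(speaker_azimuths: np.ndarray) -> np.ndarray:
        """First-order AmbiX mode-matching decoder for a horizontal ring of
        speakers; returns D such that speaker_feeds = D @ b_format_sample."""
        az = np.asarray(speaker_azimuths)
        # One column of first-order gains (W, Y, Z, X) per speaker, elevation 0.
        Y = np.stack([np.ones_like(az), np.sin(az), np.zeros_like(az), np.cos(az)])
        return np.linalg.pinv(Y)

    # Example: a square layout at 45, 135, -135 and -45 degrees.
    D = decoder_matrix(np.deg2rad([45, 135, -135, -45]))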


The future


The future is six degrees of freedom capture and playback, allowing the user not only to rotate their head but also to translate their position, both vertically and horizontally. The audio part has pretty much been solved by Zylia. The video part is still a work in progress, but what is needed is basically photogrammetry of the venue and volumetric video of any moving characters.
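
On the audio side, translation can be approximated by keeping each source as an object and re-encoding it relative to the listener on every update. A toy sketch, with illustrative coordinates (x forward, y left, z up) and an assumed attenuation law:

    import numpy as np

    def relative_gains(source_pos, listener_pos):
        """First-order AmbiX gains (W, Y, Z, X) for a point source as heard
        from listener_pos; both positions are (x, y, z) in metres."""
        v = np.asarray(source_pos, dtype=float) - np.asarray(listener_pos, dtype=float)
        dist = np.linalg.norm(v)
        x, y, z = v / dist                      # direction of arrival
        g = 1.0 / max(dist, 1.0)                # crude distance attenuation
        return g * np.array([1.0, y, z, x])

    # Walking towards a source changes both its direction and its level:
    print(relative_gains((2.0, 1.0, 0.0), (0.0, 0.0, 0.0)))
    print(relative_gains((2.0, 1.0, 0.0), (1.5, 0.0, 0.0)))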