Metadata-based audio production for Next Generation Audio formats
Next-generation immersive audio formats will require changes to the audio production workflow. Monitoring the audio while authoring and verifying dynamic metadata will become a new challenge.

New procedures need to be established for managing object-based encoded content, as well as for the personalization of services through the selection of alternative audio objects (such as commentator languages).

Object-based audio will give end users the option to personalize their experience by selecting from a number of audio sources and controlling their level, and perhaps even their position, in the mix. In object-based audio, an "object" is essentially an audio stream with accompanying descriptive metadata. The metadata carries information for the playback rendering process in the final decoder/receiver.

What does this all mean for the production of future audio content? A complete rethink of workflows and audio processing equipment will be required, and some additions to production and distribution equipment are inevitable. In the case of immersive and object-based audio, all the metadata appropriate for the final codec must be created and must reach the final emission encoder. File-based processes need to accomplish this in the same way as stream-based real-time live content production. The way of mixing and mastering will change, and more automated production procedures will be used to deliver NGA audio and legacy formats simultaneously.
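To make the object-plus-metadata idea concrete, the following is a minimal sketch of an audio "object" as an audio stream paired with descriptive metadata, and of a render-side personalization step that selects a commentary language and applies a listener gain adjustment. All class and field names here are hypothetical illustrations, loosely inspired by concepts in the Audio Definition Model (ITU-R BS.2076); they do not represent any specific NGA codec's metadata schema.

```python
# Sketch only: hypothetical metadata fields, not a real codec's schema.
from dataclasses import dataclass, field


@dataclass
class ObjectMetadata:
    """Descriptive metadata carried alongside an audio stream."""
    object_id: str
    label: str                  # human-readable name, e.g. "English commentary"
    language: str | None = None  # set for selectable-language objects
    gain_db: float = 0.0         # default level in the mix
    position: tuple[float, float, float] = (0.0, 1.0, 0.0)  # x, y, z for the renderer
    user_adjustable: bool = False  # may the listener change this object's gain?


@dataclass
class AudioObject:
    """An 'object': an audio stream plus its descriptive metadata."""
    metadata: ObjectMetadata
    samples: list[float] = field(default_factory=list)  # placeholder for PCM audio


def personalize(objects: list[AudioObject],
                preferred_language: str,
                gain_overrides: dict[str, float]) -> list[AudioObject]:
    """Receiver-side selection: keep language-neutral objects, pick the
    object matching the preferred language, and apply any listener gain
    overrides that the metadata permits."""
    selected = []
    for obj in objects:
        md = obj.metadata
        # Drop alternative-language objects that don't match the preference.
        if md.language is not None and md.language != preferred_language:
            continue
        if md.user_adjustable and md.object_id in gain_overrides:
            md.gain_db += gain_overrides[md.object_id]
        selected.append(obj)
    return selected


if __name__ == "__main__":
    programme = [
        AudioObject(ObjectMetadata("bed", "Ambience bed")),
        AudioObject(ObjectMetadata("commentary_en", "English commentary",
                                   language="en", user_adjustable=True)),
        AudioObject(ObjectMetadata("commentary_de", "German commentary",
                                   language="de", user_adjustable=True)),
    ]
    # A listener picks the German commentary and raises it by 3 dB.
    mix = personalize(programme, preferred_language="de",
                      gain_overrides={"commentary_de": 3.0})
    for obj in mix:
        print(obj.metadata.label, obj.metadata.gain_db, "dB")
```

In a production chain, metadata of this kind would be authored alongside the mix, carried intact through file-based and live workflows to the emission encoder, and finally interpreted by the renderer in the decoder/receiver, where the personalization step above takes place.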
- Published
- 2017-10
- Content type
- Original Research
- Keywords
- Next Generation Audio, Object Based Audio, Dynamic Metadata, Descriptive Metadata, Audio Rendering, Legacy Formats
- DOI
- 10.5594/M001800
- ISBN
- 978-1-61482-959-1