From Meshes to Gaussians: Evolving 3D Scene Rendering for Broadcast Applications
This paper presents a technical overview of the evolving landscape of 3D scene representation and rendering, focusing on photorealistic content generation for broadcast. While traditional polygon meshes render highly efficiently, achieving convincing photorealism with them requires extensive manual modeling or complex capture setups. Recent breakthroughs in computer vision, however, attain high photorealistic fidelity by jointly learning scene geometry, lighting, and appearance directly from images. A Neural Radiance Field (NeRF) encodes a scene implicitly as a continuous volumetric function, whereas 3D Gaussian Splatting (3DGS) represents it explicitly as a set of Gaussian primitives. We introduce the fundamental principles of polygon meshes, NeRF, and 3DGS, then examine the applicability of NeRF and 3DGS in the broadcast domain, emphasizing both the progress achieved by the research community and the challenges that remain for large-scale adoption in time-critical live production environments. Finally, we illustrate the potential of 3DGS through a preliminary case study in the demanding context of sports broadcasting.
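To make the contrast concrete, the explicit 3DGS representation mentioned above can be caricatured in a few lines: a scene is a list of Gaussian primitives, each with a position, spread, color, and opacity, which are projected and alpha-composited front to back. This is a deliberately minimal sketch, not the actual 3DGS pipeline (which uses anisotropic covariances, spherical-harmonic colors, and a tile-based GPU rasterizer); the function name and the isotropic, orthographic simplifications are illustrative assumptions.

```python
import numpy as np

def render_gaussians(means, scales, colors, opacities, size=32):
    """Toy splatting: rasterize isotropic 3D Gaussians onto a small image
    by orthographic projection along +z with front-to-back alpha blending.
    This is an illustrative simplification of the 3DGS idea, not the real
    renderer (no anisotropy, no perspective, no tiling)."""
    h = w = size
    image = np.zeros((h, w, 3))
    transmittance = np.ones((h, w))  # how much light still passes each pixel
    order = np.argsort(means[:, 2])  # sort front to back (smaller z = closer)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in order:
        cx, cy = means[i, 0], means[i, 1]
        s = scales[i]
        # 2D Gaussian footprint after dropping the depth axis.
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * s ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.999)
        # Standard front-to-back "over" compositing.
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return image

# Two overlapping Gaussians: a red one in front occludes a blue one behind it.
means = np.array([[16.0, 16.0, 0.0], [16.0, 16.0, 5.0]])
scales = np.array([3.0, 3.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
opacities = np.array([0.9, 0.9])
img = render_gaussians(means, scales, colors, opacities)
```

In contrast, a NeRF would replace the explicit list of primitives with a learned function queried along camera rays; the explicit form above is what makes 3DGS amenable to fast rasterization.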
- Published
- 2025-10-13
- Content type
- Original Research
- Keywords
- 3D rendering, 3D scene reconstruction, meshes, neural radiance fields, Gaussian splatting, virtual sets, virtual cameras, free viewpoint, immersive replays, live, broadcast applications
- ISBN
- 978-1-61482-966-9