Ofinno’s Standards Readouts feature expert insight and analysis that translate complex standardization progress into actionable guidance for navigating the future of 5G/6G, next-gen Wi-Fi, and media compression technologies.
Chia-Yang Tsai
Director, Advanced Media Lab
Damian Ruiz Coll
Principal Engineer
Médéric Blestel
Senior Video Standard Researcher
Chao Cao
Senior Engineer
Stephen Siemonsma
Engineer
Key Takeaways
How do you compress a rapidly evolving 3D data representation that is under 3 years old?
MPEG is continuing its exploration of Gaussian Splat Coding (GSC)—the compression of Gaussian Splat (GS) data representations used in 3D scene capture—with involvement from multiple working groups including WG 4, WG 5, and WG 7. The effort remains officially in the Exploration stage, as Part 45 – Gaussian splat coding: Implicit Neural Visual Representation, with the current focus on extending existing standards to support GSC and preparing for a potential Call for Proposals (CfP) for GSC as an individual standard part. The core question—how to provide efficient compression technologies that satisfy the varied requirements and use cases in the current market—is still open.
The race to replace VVC just got a calendar
The Call for Proposals will likely be published on May 15, 2026, with registration expected to open August 1 and proposals due January 2027. Some details may still change following Ad Hoc Group discussions at the April meeting. Contributions to the shared experimental codebase (ECM) are declining as companies shift to competitive mode and save their best work for their own submissions.
Point cloud compression is quietly building toward something bigger
The Geometry-based Point Cloud Compression (G-PCC) family continues to expand. The Geometry for Solid branch (GeS-PCC), which targets dense and solid point cloud content, is progressing toward a Committee Draft with a national body ballot targeted after the April meeting. The working group reviewed exploration experiments for adopting new tools and is also exploring how to adapt point-cloud-based coding techniques for GS-specific attributes such as opacity, scale, rotation, and color coefficients. This positions the geometry-based approach as one potential path for GSC, complementing video-based approaches being explored in parallel.
Overview
Gaussian Splat Coding—the compression of data used in photorealistic 3D scene representations—drew cross-group interest at the 41st Joint Video Experts Team (JVET) meeting (Virtual, January 14–23, 2026) and concurrent 153rd MPEG meeting. Multiple MPEG working groups, including WG 4, WG 5, and WG 7, are involved in exploring how best to compress GS data—collections of colored 3D ellipsoids that represent photorealistic 3D scenes. The technology has exploded from research paper to commercial products in under three years, and the compression of this data is now a focus of active MPEG exploration.
It is important to note that this work is officially in the Exploration stage—no formal standardization project has been launched. The exploration is examining use cases, requirements, and potential coding approaches, but whether and how this becomes one or more standards remains an open question.
Gaussian splat data is fundamentally a collection of points in 3D space, each carrying attributes like orientation, scale, opacity, and color coefficients—but how best to compress and transport that data is under active investigation. The compressed data could potentially be packaged within existing video infrastructure, or it could be handled through geometry-based point cloud frameworks. Different industries prefer different approaches depending on the infrastructure they already have. Aiming to offer flexible and efficient technologies, MPEG is continuing its exploration of GSC through a joint effort within the organization.
Meanwhile, the Next Generation Video Coding effort reached a concrete milestone: the Draft Joint Call for Proposals is complete, with target dates now in place. CfP publication will likely occur on May 15, 2026, with registration expected to open August 1 and proposals to be evaluated in January 2027—though some changes may result from ongoing Ad Hoc Group discussions ahead of the April meeting. The competitive dynamics we flagged in our October readout are now fully visible—contributions to the shared experimental codebase are declining, and companies are increasingly submitting information-only documents that demonstrate capability without sharing code.
On the point cloud side, the G-PCC family of standards—which now includes branches for enhanced temporal prediction (E-G-PCC), dense solid objects (GeS-PCC), and low-latency LiDAR (L3C2)—continued advancing. GeS-PCC, in particular, progressed toward Committee Draft status, with the working group also exploring how the framework could be extended for Gaussian splat data.
Gaussian Splat Coding: Exploration Advances Across MPEG
Gaussian Splatting is a rendering technique for three-dimensional scenes that has captured significant industry attention since its introduction in 2023. The basic idea: instead of building 3D models from triangles (the traditional mesh approach) or compressing point cloud representations, a scene is represented as millions of tiny colored 3D ellipsoids—Gaussian splats—positioned and oriented in space. When you want to see the scene from a new angle, you draw all the splats from that perspective. The result is photorealistic rendering fast enough for real-time VR, AR, and immersive video applications.
While Gaussian Splatting refers to this rendering technique, the standards exploration is focused on GSC—the efficient compression of the underlying Gaussian splat data representation. This data includes the positions, orientations, scales, opacities, and color coefficients (typically represented as spherical harmonics) of each splat in the scene. Compressing this data efficiently is essential for practical storage and transmission of Gaussian splat scenes, regardless of how they are ultimately rendered or used.
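To make the per-splat attribute list concrete, the sketch below defines a minimal record holding the fields named above. The field names, types, and layout are illustrative assumptions, not from any MPEG draft; the 48 spherical-harmonics coefficients correspond to the common choice of degree-3 harmonics (16 coefficients per color channel, three channels).

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GaussianSplat:
    """One splat primitive. Field layout is illustrative only, following
    the attributes commonly stored per splat; names are hypothetical."""
    position: Tuple[float, float, float]  # (x, y, z) center in scene space
    rotation: Tuple[float, float, float, float]  # unit quaternion (w, x, y, z)
    scale: Tuple[float, float, float]     # per-axis ellipsoid extents
    opacity: float                        # alpha in [0, 1]
    sh_coeffs: List[float]                # spherical-harmonics color coefficients
                                          # (3 channels x 16 for degree 3 = 48)
```

A full scene is then simply a large array of such records—often millions of them—which is exactly the payload GSC aims to compress.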
The technology has already moved from research to products—but without standardized compression, every company implements it differently, files cannot be exchanged between tools, and the ecosystem risks fragmenting before it matures.
The January meeting saw multiple groups engage with Gaussian Splat Coding exploration: JVET (video coding), WG 4 (MPEG video coding), and WG 7 (MPEG Coding of 3D Graphics and Haptics) all have interest in the space. The question driving this cross-group interest is about compression and transport: Gaussian splat data is fundamentally a collection of points with associated attributes, but the compressed data could be packaged for delivery in different ways—either within existing video stream infrastructure, leveraging the decoder hardware already in every phone and TV, or through geometry-based point cloud frameworks that work directly in 3D space.
Different industries want different answers. A gaming company might prefer video-stream packaging to leverage existing hardware. An autonomous vehicle company might prefer point cloud treatment that fits their LiDAR pipelines. One emerging perspective from the meeting is that the packaging question may be secondary—multiple encapsulation options could coexist, as long as the underlying data format and semantics are aligned. But it is equally possible that the distinct priorities lead to separate approaches rather than a unified specification. The exploration is still at an early stage, and how the organizational and technical questions are resolved will become clearer in coming meetings.
One practical challenge surfaced around computational requirements. Gaussian splat workflows typically assume access to a GPU—the specialized processor that handles graphics. A contribution that provided only a GPU implementation was asked to also supply a CPU version for reproducibility within the exploration process. But there was sympathy for the reality that GPUs are part of the pipeline; the group may accept GPU-focused techniques as long as CPU versions exist for verification and results can be cached for sharing.
A notable area of technical progress involved how to represent the orientation of each Gaussian splat. Each splat’s orientation is typically stored as a quaternion—a four-dimensional vector that encodes a three-dimensional rotation. This representation is inherently overparameterized: a quaternion q and its negation −q describe the same orientation, and because 3D Gaussian splats are trained with unordered local dimensions (the x, y, and z axes are not sorted by size) and ellipsoids have multiple axes of symmetry, there are several equivalent ways to represent the same effective shape and orientation. This redundancy is bad for compression, since a codec works best when each unique shape maps to exactly one representation. Several companies brought forward proposals to normalize quaternion data into a more compact three-component form that eliminates these redundancies. The proposals turned out to be quite similar to one another, and the group reached a resolution—a positive sign that practical compression of Gaussian splat orientation data is converging toward a shared approach.
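One way to picture the kind of normalization described above is a "smallest-three" style encoding, sketched here under stated assumptions: it is an illustrative scheme, not the specific proposals from the meeting. Because a unit quaternion satisfies |q| = 1, the largest-magnitude component can be dropped and later reconstructed, and flipping the overall sign so the dropped component is positive removes the q / −q ambiguity—so both q and −q map to the same three-component code.

```python
import math

def encode_quat(q):
    """Encode a unit quaternion as (dropped index, three components).
    Illustrative scheme: drop the largest-magnitude component and flip
    the quaternion's sign so that component is positive, which makes
    q and -q encode identically."""
    i = max(range(4), key=lambda k: abs(q[k]))
    sign = 1.0 if q[i] >= 0 else -1.0
    rest = [sign * q[k] for k in range(4) if k != i]
    return i, rest

def decode_quat(i, rest):
    """Reconstruct the dropped component from the unit-norm constraint."""
    dropped = math.sqrt(max(0.0, 1.0 - sum(c * c for c in rest)))
    return rest[:i] + [dropped] + rest[i:]
```

In a real codec the three surviving components would also be quantized and entropy-coded; the point here is only that the redundant fourth dimension and the sign ambiguity can both be eliminated before compression.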
Ofinno’s Stephen Siemonsma contributed a first-person locomotion mode for the Gaussian Splatting renderer, enabling fly-through navigation of captured scenes. The contribution was received favorably, demonstrating that targeted software enhancements are a powerful way to establish presence and goodwill in the standardization space.
NGVC: The Calendar Takes Shape
The Next Generation Video Coding effort—the project to build a successor to VVC (H.266)—reached an important milestone with the completion of the Draft Joint Call for Proposals (JVET-AO0045). Where previous meetings established the general direction, this meeting produced target dates that companies can begin planning around, though some details may still be refined at the April meeting.
The target timeline: final CfP approval at the April meeting in Spain, with publication likely on May 15, test anchors available by end of May, registration expected to open August 1, test materials in October, subjective quality testing in November, and proposals due January 2027. Some details may be refined following AHG discussions at the April meeting. Test conditions are frozen—8K, 4K, and HD resolutions covering HDR formats, with use cases spanning streaming, video calls, cloud gaming, and user-generated content.
A key theme was practical deployability. Vitec, Huawei, and HiSilicon presented overviews of their chip implementations, emphasizing the gap between research-paper performance and what can actually run on a phone without draining the battery. The committee wants a codec that can be implemented in real hardware, not just one that looks good in software simulations.
One point of disagreement: AI-generated content was excluded from the test materials. Some experts felt this was a mistake—AI-generated video has different compression characteristics than camera-captured content, and the standard will eventually need to handle it. The door was left open for adding such content during the later collaborative phase, but for now, it stays out.
The competitive shift we noted in October is now fully visible. Contributions to the Enhanced Compression Model—the shared experimental codebase—are declining. More submissions are information-only documents that demonstrate capability without providing code. The exploratory work has reached a ceiling where further gains require either dramatically more complexity or fundamental architectural changes. The easy wins have been captured, and companies are now saving their best ideas for their own CfP submissions.
G-PCC and Its Branches: Advancing Toward Committee Draft
Geometry-based Point Cloud Compression (G-PCC) is an MPEG standard (ISO/IEC 23090-9) for compressing 3D point cloud data—the sparse, irregular spatial information produced by LiDAR sensors and 3D scanners. Unlike video-based approaches that project 3D data into 2D images for compression, G-PCC works directly in three-dimensional space using hierarchical data structures like octrees, compressing both the geometry (where points are) and the attributes (what properties each point carries, such as color).
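The octree idea at the heart of G-PCC geometry coding can be sketched in a few lines. This is a simplified illustration, not the codec itself: each node recursively splits its cube into eight octants and emits an 8-bit occupancy byte marking which octants contain points; a real implementation then entropy-codes those bytes using neighbor-based contexts.

```python
def octree_occupancy(points, origin, size, depth):
    """Emit 8-bit occupancy codes for a point set, breadth-last per branch.
    Simplified sketch in the spirit of octree geometry coding: each byte
    marks which of a node's eight child octants contain points."""
    codes = []

    def recurse(pts, ox, oy, oz, s, d):
        if d == 0 or not pts:
            return
        half = s / 2
        children = {}
        byte = 0
        for (x, y, z) in pts:
            # Child index from the three axis comparisons (one bit each).
            idx = ((x >= ox + half) << 2) | ((y >= oy + half) << 1) | (z >= oz + half)
            byte |= 1 << idx
            children.setdefault(idx, []).append((x, y, z))
        codes.append(byte)
        for idx, sub in sorted(children.items()):
            cx = ox + half if idx & 4 else ox
            cy = oy + half if idx & 2 else oy
            cz = oz + half if idx & 1 else oz
            recurse(sub, cx, cy, cz, half, d - 1)

    recurse(points, *origin, size, depth)
    return codes
```

Two points at opposite corners of an 8-unit cube, for example, yield a root byte with exactly two bits set, then one single-bit byte per occupied child—sparse geometry compresses well precisely because most octants stay empty.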
The G-PCC framework has evolved well beyond the original core standard, branching into several specialized tracks that target distinct application needs. Enhanced G-PCC (E-G-PCC) adds temporal prediction across frames, improving coding of dynamic and time-evolving point clouds. Geometry for Solid (GeS-PCC) targets dense and solid point cloud content—surfaces that behave more like continuous manifolds—aiming to approach the performance of video-based compression while staying purely geometry-based. And Low Complexity, Low Latency LiDAR Coding (L3C2) is tailored specifically for spinning LiDAR sensors, enabling real-time processing for autonomous driving and robotics applications. L3C2 has already reached FDIS (Final Draft International Standard) stage.
The primary focus of the January meeting was continued progress on GeS-PCC, which is advancing toward a Committee Draft targeting a national body ballot after the April meeting. The working group reviewed exploration experiments (EEs) for adopting new compression tools and prepared the Committee Draft text. This represents significant forward motion for this branch’s capabilities in handling dense, solid 3D content.
Alongside this core progress, the group is exploring how to adapt GeS-PCC for Gaussian splat data. Since Gaussian splat primitives are fundamentally points in space with associated attributes, extending the geometry-based framework to handle Gaussian-specific properties—opacity, scale, rotation, and color coefficients—is a natural extension of the codec’s existing architecture. This exploration positions the geometry-based approach as one candidate path for GSC, complementing the video-based approaches being explored in parallel by other MPEG groups.
Whether the point cloud track, the video track, or both ultimately contribute to future GSC standards may depend less on technical merit alone than on which industries adopt the technology first and what infrastructure they already have in place.
What’s Next
The next major milestone is the April 2026 JVET meeting in Spain, where several threads converge:
NGVC: The Joint Call for Proposals gets final approval, with publication likely on May 15. Registration is expected to open August 1, and proposals will be evaluated at the January 2027 meeting.
Gaussian Splat Coding: The involved groups—WG 4, WG 5, and WG 7—will continue their respective exploration efforts. Expect further refinement of use cases and requirements, and clearer signals on whether the groups will coordinate closely or pursue separate approaches to GSC.
G-PCC / GeS-PCC: The Committee Draft ballot after April will mark a significant milestone for GeS-PCC’s capabilities in dense solid content compression, while the exploration of Gaussian splat extensions continues in parallel.