MPEG Column: 124th MPEG Meeting in Macau, China

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The MPEG press release comprises the following aspects:

  • Point Cloud Compression – MPEG promotes a video-based point cloud compression technology to the Committee Draft stage
  • Compressed Representation of Neural Networks – MPEG issues Call for Proposals
  • Low Complexity Video Coding Enhancements – MPEG issues Call for Proposals
  • New Video Coding Standard expected to have licensing terms timely available – MPEG issues Call for Proposals
  • Multi-Image Application Format (MIAF) promoted to Final Draft International Standard
  • 3DoF+ Draft Call for Proposal goes Public

Point Cloud Compression – MPEG promotes a video-based point cloud compression technology to the Committee Draft stage

At its 124th meeting, MPEG promoted its Video-based Point Cloud Compression (V-PCC) standard to Committee Draft (CD) stage. V-PCC addresses lossless and lossy coding of 3D point clouds with associated attributes such as colour. By leveraging existing and future video codecs as well as video ecosystems in general (hardware acceleration, transmission services, and infrastructure), the V-PCC technology enables new applications. The current V-PCC encoder implementation provides a compression ratio of 125:1, which means that a dynamic point cloud of 1 million points could be encoded at 8 Mbit/s with good perceptual quality.
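
As a quick plausibility check, the stated ratio and bitrate imply roughly 1 Gbit/s of uncompressed point cloud data; the per-point breakdown below is my own assumption, not part of the press release.

```python
# Back-of-the-envelope check of the reported V-PCC figures.
target_bitrate = 8e6        # 8 Mbit/s, as reported
compression_ratio = 125     # 125:1, as reported

raw_bitrate = target_bitrate * compression_ratio
print(raw_bitrate / 1e9)    # 1.0 -> about 1 Gbit/s of raw point cloud data

# One plausible (assumed) raw format yielding roughly that rate:
points_per_frame = 1_000_000
fps = 30
bits_per_point = raw_bitrate / (points_per_frame * fps)
print(round(bits_per_point, 1))  # 33.3 bits per point (geometry + attributes)
```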

A next step is the storage of V-PCC in ISOBMFF for which a working draft has been produced. It is expected that further details will be discussed in upcoming reports.

Research aspects: Video-based Point Cloud Compression (V-PCC) is at CD stage and a first working draft for the storage of V-PCC in ISOBMFF has been provided. Thus, a natural next step is the delivery of V-PCC encapsulated in ISOBMFF over networks utilizing various approaches, protocols, and tools. Additionally, one may also consider different encapsulation formats if needed.

MPEG issues Call for Proposals on Compressed Representation of Neural Networks

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, media coding, data analytics, and many other fields. Their recent success is based on the feasibility of processing much larger and complex neural networks (deep neural networks, DNNs) than in the past, and the availability of large-scale training data sets. Some applications require the deployment of a particular trained network instance to a potentially large number of devices and, thus, could benefit from a standard for the compressed representation of neural networks. Therefore, MPEG has issued a Call for Proposals (CfP) for compression technology for neural networks, concentrating on the compression of parameters and weights, and focusing on four use cases: (i) visual object classification, (ii) audio classification, (iii) visual feature extraction (as used in MPEG CDVA), and (iv) video coding.

Research aspects: As pointed out last time, research here will mainly focus on compression efficiency for both lossy and lossless scenarios. Additionally, communication aspects such as transmission of compressed artificial neural networks within lossy, large-scale environments including update mechanisms may become relevant in the (near) future.

 

MPEG issues Call for Proposals on Low Complexity Video Coding Enhancements

Upon request from the industry, MPEG has identified an area of interest in which video technology deployed in the market (e.g., AVC, HEVC) can be enhanced in terms of video quality without the need to necessarily replace existing hardware. Therefore, MPEG has issued a Call for Proposals (CfP) on Low Complexity Video Coding Enhancements.

The objective is to develop video coding technology with a data stream structure defined by two component streams: a base stream decodable by a hardware decoder and an enhancement stream suitable for software processing implementation. The project is meant to be codec agnostic; in other words, the base encoder and base decoder can be AVC, HEVC, or any other codec in the market.
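
As a toy illustration of this two-stream idea (my own sketch, not the actual design sought by the CfP), the base stream below carries a downsampled signal that a legacy decoder could handle on its own, while the enhancement stream carries the residual that a software layer adds back for full-resolution reconstruction:

```python
# Toy two-stream codec: base stream (codec agnostic) + software enhancement.

def downsample(signal):
    """Half-rate base signal, standing in for the legacy base codec's input."""
    return signal[::2]

def upsample(base):
    """Simple sample-repeat upsampling performed by the enhancement layer."""
    return [s for s in base for _ in range(2)]

def encode(signal):
    base = downsample(signal)
    residual = [s - u for s, u in zip(signal, upsample(base))]
    return base, residual       # base stream, enhancement stream

def decode(base, residual):
    """Hardware decodes the base; software upsamples and adds the residual."""
    return [u + r for u, r in zip(upsample(base), residual)]

signal = [10, 12, 20, 22, 30, 31]
base, residual = encode(signal)
assert decode(base, residual) == signal  # full reconstruction in this toy
```

Real designs differ substantially (prediction, transforms, entropy coding), but the split (a self-contained base plus a lightweight software enhancement) is the structural point.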

Research aspects: The interesting aspect here is that this use case assumes a legacy base decoder – most likely realized in hardware – which is enhanced with software-based implementations to improve coding efficiency and/or quality without sacrificing the capabilities of the end user in terms of complexity and, thus, energy efficiency of the software-based solution.

 

MPEG issues Call for Proposals for a New Video Coding Standard expected to have licensing terms timely available

At its 124th meeting, MPEG issued a Call for Proposals (CfP) for a new video coding standard to address combinations of both technical and application (i.e., business) requirements that may not be adequately met by existing standards. The aim is to provide a standardized video compression solution which combines coding efficiency similar to that of HEVC with a level of complexity suitable for real-time encoding/decoding and the timely availability of licensing terms.

Research aspects: This new work item is more related to business aspects (i.e., licensing terms) than technical aspects of video coding.

 

Multi-Image Application Format (MIAF) promoted to Final Draft International Standard

The Multi-Image Application Format (MIAF) defines interoperability points for creation, reading, parsing, and decoding of images embedded in the High Efficiency Image File (HEIF) format by (i) only defining additional constraints on the HEIF format, (ii) limiting the supported encoding types to a set of specific profiles and levels, (iii) requiring specific metadata formats, and (iv) defining a set of brands for signaling such constraints including specific depth map and alpha plane formats. For instance, it addresses the use case in which a capturing device uses one of the HEIF codecs with a specific HEVC profile and level in the HEIF files it creates, while a playback device is only capable of decoding AVC bitstreams.

Research aspects: MIAF is an application format which is defined as a combination of tools (incl. profiles and levels) of other standards (e.g., audio codecs, video codecs, systems) to address the needs of a specific application. Thus, the research is related to use cases enabled by this application format. 

 

3DoF+ Draft Call for Proposal goes Public

Following investigations on the coding of “three Degrees of Freedom plus” (3DoF+) content in the context of MPEG-I, the MPEG video subgroup has provided evidence demonstrating the capability to encode 3DoF+ content efficiently while maintaining compatibility with legacy HEVC hardware. As a result, MPEG decided to issue a draft Call for Proposal (CfP) to the public containing the information necessary to prepare for the final Call for Proposal expected to occur at the 125th MPEG meeting (January 2019) with responses due at the 126th MPEG meeting (March 2019).

Research aspects: This work item is about video (coding) and, thus, research is about compression efficiency.

 

What else happened at #MPEG124?

  • MPEG-DASH 3rd edition is still in the final editing phase and not yet available. Last time, I wrote that we expect final publication later this year or early next year, and we hope this is still the case. At this meeting, Amendment 5 progressed to DAM, and the conformance/reference software for SRD, SAND, and Server Push was also promoted to DAM. In other words, DASH is pretty much in maintenance mode.
  • MPEG-I (systems part) is working on immersive media access and delivery and I guess more updates will come on this after the next meeting. OMAF is working on a 2nd edition for which a working draft exists and phase 2 use cases (public document) and draft requirements are discussed.
  • Versatile Video Coding (VVC): working draft 3 (WD3) and test model 3 (VTM3) have been issued at this meeting, including a large number of new tools. Both documents (and software) will be publicly available after the editing periods (Nov. 23 for WD3 and Dec. 14 for VTM3).

 

JPEG Column: 80th JPEG Meeting in Berlin, Germany

The 80th JPEG meeting was held in Berlin, Germany, from 7 to 13 July 2018. During this meeting, JPEG issued a record number of ballots and output documents, spread across the multiple activities taking place. These record numbers reveal the level of commitment of the JPEG standardisation committee. A strong effort is being made on the standardisation of new solutions for emerging image technologies, enabling the interoperability of different systems in the growing multimedia market. Moreover, it is intended that these new initiatives provide royalty-free patent licensing solutions in at least one of the available profiles, which shall promote a wider adoption of these future JPEG standards by the consumer market as well as application and system developers.

Significant progress in low-latency and high-throughput standardisation initiatives took place at the Berlin meeting. The new Part 15 of JPEG 2000, known as High Throughput JPEG 2000 (HTJ2K), is finally ready and has reached Committee Draft status. Furthermore, JPEG XS profiles and levels were released for their second and final ballot. Hence, these new low-complexity standards are expected to be finalised in a short time, providing new solutions for developers and consumers in applications where mobility is important and large bandwidth is available. Virtual and augmented reality, as well as 360º images and video, are among the several applications that might benefit from these new standards.


JPEG meeting plenary in Berlin.

The 80th JPEG meeting had the following highlights:

  • HTJ2K reaches Committee Draft status;
  • JPEG XS profiles and levels are under ballot;
  • JPEG XL publishes additional information to the CfP;
  • JPEG Systems – JUMBF & JPEG 360;
  • JPEG-in-HEIF;
  • JPEG Blockchain white paper;
  • JPEG Pleno Light Field verification model.

The following summarizes the various highlights during JPEG’s Berlin meeting.

HTJ2K

The JPEG committee is pleased to announce a significant milestone, with ISO/IEC 15444-15 High-Throughput JPEG 2000 (HTJ2K) reaching Committee Draft status.

HTJ2K introduces a new FAST block coder to the JPEG 2000 family. The FAST block coder can be used in place of the JPEG 2000 Part 1 arithmetic block coder and, as illustrated in Table 1, offers on average an order-of-magnitude increase in encoding and decoding throughput – at the expense of slightly reduced coding efficiency and the elimination of quality scalability.

Table 1. Comparison between the FAST block coder and the JPEG 2000 Part 1 arithmetic block coder. Results were generated by optimized implementations evaluated as part of the HTJ2K activity, using professional video test images in the transcoding context specified in the Call for Proposal available at https://jpeg.org. Figures are relative to the JPEG 2000 Part 1 arithmetic block coder (bpp – bits per pixel).

Bitrate (JPEG 2000 Part 1 block coder)    0.5 bpp   1 bpp    2 bpp    4 bpp    6 bpp    lossless
Average FAST block coder speedup factor   17.5x     19.5x    21.1x    25.5x    27.4x    43.7x
Average FAST block decoder speedup factor 10.2x     11.4x    11.9x    14.1x    15.1x    24.0x
Average increase in codestream size       8.4%      7.3%     7.1%     6.6%     6.5%     6.6%

Apart from the block coding algorithm itself, the FAST block coder does not modify the JPEG 2000 codestream and allows mathematically lossless transcoding to and from JPEG 2000 codestreams. As a result, the FAST block coding algorithm can be readily integrated into existing JPEG 2000 applications, where it can bring significant increases in processing efficiency.

 

JPEG XS

This project aims at the standardization of a visually lossless, low-latency, and lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV, and other markets. Targeted use cases are video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression (in particular in the automotive industry). The Core Coding System, expected to be published in Q4 2018, allows for visually lossless quality at a 6:1 compression ratio for most content, 32 lines of end-to-end latency, and ultra-low-complexity implementations in ASIC, FPGA, CPU, and GPU. Following the 80th JPEG meeting in Berlin, profiles and levels (addressing specific application fields and use cases) are now under final ballot (expected publication in Q1 2019). Different means to store and transport JPEG XS codestreams in files, over IP networks, or over SDI infrastructures are also defined and go to a first ballot.
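
To put these figures into perspective, a rough calculation for an assumed 1080p60 10-bit 4:2:2 source (the format choice is mine, not from the standard text):

```python
# Assumed source format: 1080p60, 4:2:2 chroma subsampling, 10-bit samples.
width, height, fps = 1920, 1080, 60
samples_per_pixel = 2          # 1 luma + 1 chroma sample per pixel in 4:2:2
bit_depth = 10

raw = width * height * samples_per_pixel * bit_depth * fps
print(raw / 1e9)               # ~2.49 Gbit/s uncompressed

mezzanine = raw / 6            # visually lossless at the stated 6:1 ratio
print(mezzanine / 1e6)         # ~415 Mbit/s mezzanine bitrate

latency = 32 / (height * fps)  # 32 lines of end-to-end latency
print(latency * 1e6)           # ~494 microseconds
```

Such sub-millisecond latencies at bitrates compatible with professional link capacities are what make a mezzanine codec usable in live production chains.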

 

JPEG XL

The JPEG Committee issued a Call for Proposals (CfP) following its 79th meeting (April 2018), with the objective of seeking technologies that fulfill the objectives and scope of the Next-Generation Image Coding activity. The CfP, with all related information, can be found at https://jpeg.org/downloads/jpegxl/jpegxl-cfp.pdf. The deadline for expression of interest and registration was August 15, 2018, and submissions to the CfP were due on September 1, 2018.

As an outcome of the 80th JPEG meeting in Berlin, a document was produced containing additional information related to the objective and subjective quality assessment methodologies that will be used to evaluate the anchors and the proposals to the CfP, available at https://jpeg.org/downloads/jpegxl/wg1n80024-additional-information-cfp.pdf. Moreover, a detailed workflow is described, together with the software and command lines used to generate the anchors and to compute the objective quality metrics.

To stay posted on the action plan of JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to its e-mail reflector.

 

JPEG Systems – JUMBF & JPEG 360

The JPEG Committee progressed towards a common framework and definition for metadata which will improve the ability to share 360-degree images. At the 80th meeting, the Committee Draft ballot was completed and the comments were reviewed; the activity is now progressing towards DIS text for upcoming ballots on “JPEG Universal Metadata Box Format (JUMBF)” as ISO/IEC 19566-5 and “JPEG 360” as ISO/IEC 19566-6. Investigations have started on applying the framework to the structure of JPEG Pleno files.

 

JPEG-in-HEIF

The JPEG Committee made significant progress towards standardizing how JPEG XR, JPEG 2000, and the upcoming JPEG XS will be carried in the ISO/IEC 23008-12 image file container.

 

JPEG Blockchain

Fake news, copyright violation, media forensics, privacy, and security are emerging challenges for digital media. JPEG has determined that blockchain technology has great potential as a technology component to address these challenges in transparent and trustable media transactions. However, blockchain needs to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. JPEG calls for industry participation to help define use cases and requirements that will drive the standardization process. To reach this objective, JPEG issued a white paper entitled “Towards a Standardized Framework for Media Blockchain” that elaborates on the initiative, exploring relevant standardization activities, industrial needs, and use cases. In addition, JPEG plans to organise a workshop during its 81st meeting in Vancouver on Tuesday, 16 October 2018. More information about the workshop is available at https://www.jpeg.org. To keep informed and get involved, interested parties are invited to register on the ad hoc group’s mailing list at http://jpeg-blockchain-list.jpeg.org.

 

JPEG Pleno

The JPEG Committee is currently pursuing three activities in the framework of the JPEG Pleno Standardization: Light Field, Point Cloud and Holographic content coding.

At its Berlin meeting, a first version of the verification model software for light field coding was produced. This software supports the core functionality intended for the light field coding standard and serves for intensive testing of the standard. JPEG Pleno Light Field Coding supports various acquisition systems, ranging from lenslet cameras to high-density camera arrays, as well as light-field-related content production chains up to light field displays.

For the coding of point clouds and holographic data, activities are still in an exploratory phase, addressing the elaboration of use cases and the refinement of requirements for coding such modalities. In addition, experimental procedures are being designed to facilitate the quality evaluation and testing of technologies that will be submitted in later calls for coding technologies. Interested parties active in point-cloud- and holography-related markets and applications, both from industry and academia, are welcome to participate in this standardization activity.

 

Final Quote 

“After a record number of ballots and output documents generated during its 80th meeting, the JPEG Committee pursues its activity on the specification of effective and reliable solutions for image coding offering needed features in emerging multimedia applications. The new JPEG XS and JPEG 2000 part 15 provide low complexity compression solutions that will benefit many growing markets such as content production, virtual and augmented reality as well as autonomous cars and drones.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 80th JPEG Meeting was held on 7-13 July 2018, in Berlin, Germany. The next 81st JPEG Meeting will be held on 13-19 October 2018, in Vancouver, Canada.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

 

Future JPEG meetings are planned as follows:

  • No 81, Vancouver, Canada, October 13 to 19, 2018
  • No 82, Lisbon, Portugal, January 19 to 25, 2019
  • No 83, Geneva, Switzerland, March 16 to 22, 2019

MPEG Column: 123rd MPEG Meeting in Ljubljana, Slovenia

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • MPEG issues Call for Evidence on Compressed Representation of Neural Networks
  • Network-Based Media Processing – MPEG evaluates responses to call for proposal and kicks off its technical work
  • MPEG finalizes 1st edition of Technical Report on Architectures for Immersive Media
  • MPEG releases software for MPEG-I visual activities
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with new features

MPEG issues Call for Evidence on Compressed Representation of Neural Networks

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, media coding, data analytics, translation, and many other fields. Their recent success is based on the feasibility of processing much larger and complex neural networks (deep neural networks, DNNs) than in the past, and the availability of large-scale training data sets. As a consequence, trained neural networks contain a large number of parameters (weights), resulting in a quite large size (e.g., several hundred MBs). Many applications require the deployment of a particular trained network instance, potentially to a large number of devices, which may have limitations in terms of processing power and memory (e.g., mobile devices or smart cameras). Any use case in which a trained neural network (and its updates) needs to be deployed to a number of devices could thus benefit from a standard for the compressed representation of neural networks.

At its 123rd meeting, MPEG has issued a Call for Evidence (CfE) for compression technology for neural networks. The compression technology will be evaluated in terms of compression efficiency, runtime, and memory consumption and the impact on performance in three use cases: visual object classification, visual feature extraction (as used in MPEG Compact Descriptors for Visual Analysis) and filters for video coding. Responses to the CfE will be analyzed on the weekend prior to and during the 124th MPEG meeting in October 2018 (Macau, CN).

Research aspects: As this is about “compression” of structured data, research aspects will mainly focus around compression efficiency for both lossy and lossless scenarios. Additionally, communication aspects such as transmission of compressed artificial neural networks within lossy, large-scale environments including update mechanisms may become relevant in the (near) future. Furthermore, additional use cases should be communicated towards MPEG until the next meeting.

Network-Based Media Processing – MPEG evaluates responses to call for proposal and kicks off its technical work

Recent developments in multimedia have brought significant innovation and disruption to the way multimedia content is created and consumed. At its 123rd meeting, MPEG analyzed the technologies submitted by eight industry leaders as responses to the Call for Proposals (CfP) for Network-Based Media Processing (NBMP, MPEG-I Part 8). These technologies address advanced media processing use cases such as network stitching for virtual reality (VR) services, super-resolution for enhanced visual quality, transcoding by a mobile edge cloud, or viewport extraction for 360-degree video within the network environment. NBMP allows service providers and end users to describe media processing operations that are to be performed by entities in the network. NBMP will describe the composition of network-based media processing services out of a set of NBMP functions and make these NBMP services accessible through Application Programming Interfaces (APIs).

NBMP will support the existing delivery methods such as streaming, file delivery, push-based progressive download, hybrid delivery, and multipath delivery within heterogeneous network environments. MPEG issued a Call for Proposal (CfP) seeking technologies that allow end-user devices, which are limited in processing capabilities and power consumption, to offload certain kinds of processing to the network.

After a formal evaluation of submissions, MPEG selected three technologies as starting points for the (i) workflow, (ii) metadata, and (iii) interfaces for static and dynamically acquired NBMP. A key conclusion of the evaluation was that NBMP can significantly improve the performance and efficiency of the cloud infrastructure and media processing services.

Research aspects: I reported about NBMP in my previous post and basically the same applies here. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

MPEG finalizes 1st edition of Technical Report on Architectures for Immersive Media

At its 123rd meeting, MPEG finalized the first edition of its Technical Report (TR) on Architectures for Immersive Media. This report constitutes the first part of the MPEG-I standard for the coded representation of immersive media and introduces the eight MPEG-I parts currently under specification in MPEG. In particular, it addresses three Degrees of Freedom (3DoF; three rotational and unlimited movements around the X, Y and Z axes (respectively pitch, yaw and roll)), 3DoF+ (3DoF with additional limited translational movements (typically, head movements) along the X, Y and Z axes), and 6DoF (3DoF with full translational movements along the X, Y and Z axes) experiences, but it mostly focuses on 3DoF. Future versions are expected to cover aspects beyond 3DoF. The report documents use cases and defines architectural views on elements that contribute to an overall immersive experience. Finally, the report also includes quality considerations for immersive services and introduces minimum requirements as well as objectives for a high-quality immersive media experience.

Research aspects: ISO/IEC technical reports are typically publicly available and provide informative descriptions of what the standard is about. In MPEG-I, this technical report can be used as a guideline for possible architectures for immersive media. This first edition focuses on three Degrees of Freedom (3DoF; three rotational and unlimited movements around the X, Y and Z axes (respectively pitch, yaw and roll)) and outlines the other degrees of freedom currently foreseen in MPEG-I. It also highlights use cases and quality-related aspects that could be of interest for the research community.

MPEG releases software for MPEG-I visual activities

MPEG-I visual is an activity that addresses the specific requirements of immersive visual media for six-degrees-of-freedom virtual walkthroughs with correct motion parallax within a bounded volume. MPEG-I visual covers application scenarios from 3DoF+, with slight body and head movements in a sitting position, to 6DoF, allowing some walking steps from a central position. At the 123rd MPEG meeting, important progress was achieved in software development. A new Reference View Synthesizer (RVS 2.0) has been released for 3DoF+, allowing the synthesis of virtual viewpoints from an unlimited number of input views. RVS integrates code bases from Universite Libre de Bruxelles and Philips, who acted as software coordinator. A Weighted-to-Spherically-uniform PSNR (WS-PSNR) software utility, essential to the 3DoF+ and 6DoF activities, has been developed by Zhejiang University. WS-PSNR is a full-reference objective quality metric for all flavors of omnidirectional video. RVS and WS-PSNR are essential software tools for the upcoming Call for Proposals on 3DoF+, expected to be released at the 124th MPEG meeting in October 2018 (Macau, CN).
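
For readers unfamiliar with WS-PSNR: in an equirectangular frame, each row is weighted by the cosine of its latitude so that the heavily oversampled polar regions do not dominate the error measure. The following pure-Python sketch illustrates the idea for the equirectangular case only (my own illustration; the MPEG utility covers further projection formats):

```python
import math

def ws_psnr(ref, dist, max_val=255.0):
    """WS-PSNR for an equirectangular frame given as 2-D lists of pixel rows.

    Row j of an H-row frame is weighted by cos(latitude), where the latitude
    of the row centre is (j + 0.5 - H/2) * pi / H.
    """
    h = len(ref)
    weighted_err = 0.0
    weight_sum = 0.0
    for j in range(h):
        w = math.cos((j + 0.5 - h / 2) * math.pi / h)
        for r, d in zip(ref[j], dist[j]):
            weighted_err += w * (r - d) ** 2
            weight_sum += w
    wmse = weighted_err / weight_sum
    if wmse == 0:
        return float("inf")      # identical images
    return 10.0 * math.log10(max_val ** 2 / wmse)
```

Because the weight depends only on the row, near-equator distortions count almost fully while near-pole distortions are strongly discounted, matching their smaller solid angle on the sphere.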

Research aspects: MPEG does not only produce text specifications but also reference software and conformance bitstreams, which are important assets for both research and development. Thus, it is very much appreciated to have a new Reference View Synthesizer (RVS 2.0) and a Weighted-to-Spherically-uniform PSNR (WS-PSNR) software utility available, which enable interoperability and reproducibility of R&D efforts/results in this area.

MPEG enhances ISO Base Media File Format (ISOBMFF) with new features

At the 123rd MPEG meeting, a couple of new amendments related to ISOBMFF have reached the first milestone. Amendment 2 to ISO/IEC 14496-12 6th edition will add the option to use relative addressing as an alternative to offset addressing, which in some environments and workflows can simplify the handling of files, and will allow the creation of derived visual tracks using items and samples in other tracks with some transformation, for example rotation. Another amendment that reached its first milestone is the first amendment to ISO/IEC 23001-7 3rd edition. It will allow the use of multiple keys for a single sample and the scrambling of some parts of AVC or HEVC video bitstreams without breaking conformance to existing decoders. That is, the bitstream will be decodable by existing decoders, but some parts of the video will be scrambled. It is expected that these amendments will reach the final milestone in Q3 2019.

Research aspects: The ISOBMFF reference software is now available on GitHub, which is a valuable service to the community and allows for active participation in the standardization process even from outside of MPEG. It is recommended that interested parties have a look at it and consider contributing to this project.


What else happened at #MPEG123?

  • The MPEG-DASH 3rd edition is finally available as an output document (N17813; only available to MPEG members), combining the 2nd edition, four amendments, and two corrigenda. We expect final publication later this year or early next year.
  • A new DASH amendment and corrigenda items are in the pipeline and should also progress to final stages some time next year. The status of MPEG-DASH (July 2018) can be seen below.

MPEG-DASH standardization status (July 2018).

  • MPEG received a rather interesting input document related to “streaming first” which resulted in a publicly available output document entitled “thoughts on adaptive delivery and access to immersive media”. The key idea here is to focus on streaming (first) rather than on the file/encapsulation formats typically used for storage (with streaming second). This document should become available here.
  • For the past couple of meetings, MPEG has maintained a standardization roadmap highlighting recent/major MPEG standards and documenting the roadmap for the next five years. It is definitely worth keeping this in mind when defining/updating your own roadmap.
  • JVET/VVC issued Working Draft 2 of Versatile Video Coding (N17732 | JVET-K1001) and Test Model 2 of Versatile Video Coding (VTM 2) (N17733 | JVET-K1002). Please note that N-documents are MPEG internal but JVET-documents are publicly accessible here: http://phenix.it-sudparis.eu/jvet/. An interesting aspect is that VTM2/WD2 should have >20% rate reduction compared to HEVC, all with reasonable complexity and the next benchmark set (BMS) should have close to 30% rate reduction vs. HEVC. Further improvements expected from (a) improved merge, intra prediction, etc., (b) decoder-side estimation with low complexity, (c) multi-hypothesis prediction and OBMC, (d) diagonal and other geometric partitioning, (e) secondary transforms, (f) new approaches of loop filtering, reconstruction and prediction filtering (denoising, non-local, diffusion based, bilateral, etc.), (g) current picture referencing, palette, and (h) neural networks.
  • In addition to VVC – which is a joint activity with VCEG – MPEG is working on two video-related exploration activities, namely (a) an enhanced quality profile of the AVC standard and (b) a low-complexity enhancement video codec. Both topics will be further discussed within the respective Ad-hoc Groups (AhGs), and further details are available here.
  • Finally, MPEG established an Ad-hoc Group (AhG) dedicated to the long-term planning which is also looking into application areas/domains other than media coding/representation.

In this context it is probably worth mentioning the following DASH awards at recent conferences

Additionally, there have been two tutorials at ICME related to MPEG standards, which you may find interesting

JPEG Column: 79th JPEG Meeting in La Jolla, California, U.S.A.

The JPEG Committee had its 79th meeting in La Jolla, California, U.S.A., from 9 to 15 April 2018.

During this meeting, JPEG held a final celebration of the 25th anniversary of its first JPEG standard, usually known as JPEG-1. This celebration coincided with two interesting facts. The first was the approval of a reference software for JPEG-1, “only” after 25 years. At the time of approval of the first JPEG standard, a reference software was not considered, as is common in recent image standards. However, the JPEG committee decided that it was still important to provide a reference software, as current applications and standards can largely benefit from this specification. The second coincidence was the launch of a call for proposals for a next-generation image coding standard, JPEG XL. This standard will define a new representation format for photographic information that includes current technological developments, and it can become an alternative to the 25-year-old JPEG standard.

An informative two-hour JPEG Technologies Workshop marked the 25th anniversary celebration on Friday April 13, 2018. The workshop had presentations of several committee members on the current and future JPEG committee activity, with the following program:


Touradj Ebrahimi, convenor of JPEG, presenting an overview of JPEG technologies.

  • Overview of JPEG activities, by Touradj Ebrahimi
  • JPEG XS by Antonin Descampe and Thomas Richter
  • HTJ2K by Pierre-Anthony Lemieux
  • JPEG Pleno – Light Field, Point Cloud, Holography by Ioan Tabus, Antonio Pinheiro, Peter Schelkens
  • JPEG Systems – Privacy and Security, 360 by Siegfried Foessel, Frederik Temmermans, Andy Kuzma
  • JPEG XL by Fernando Pereira, Jan De Cock

After the workshop, a social event was organized where a past JPEG committee Convenor, Eric Hamilton, was recognized for key contributions to JPEG standardization.

The La Jolla JPEG meeting comprised mainly the following highlights:

  • Call for proposals of a next generation image coding standard, JPEG XL
  • JPEG XS profiles and levels definition
  • JPEG Systems defines a 360 degree format
  • HTJ2K
  • JPEG Pleno
  • JPEG XT
  • Approval of the JPEG Reference Software

The following summarizes various activities during JPEG’s La Jolla meeting.

JPEG XL

Billions of images are captured, stored and shared on a daily basis demonstrating the self-evident need for efficient image compression. Applications, websites and user interfaces are increasingly relying on images to share experiences, stories, visual information and appealing designs.

User interfaces can target devices with stringent constraints on network connection and/or power consumption in bandwidth-constrained environments. Even though network capacities are improving globally, bandwidth is constrained to levels that inhibit application responsiveness in many situations. User interfaces that rely on images with larger resolutions, higher dynamic ranges, wider color gamuts and higher bit depths further contribute to larger volumes of data, even in higher-bandwidth environments.

The JPEG Committee has launched a Next Generation Image Coding activity, referred to as JPEG XL. This activity aims to develop a standard for image coding that offers substantially better compression efficiency than existing image formats (e.g. more than 60% improvement when compared to the widely used legacy JPEG format), along with features desirable for web distribution and efficient compression of high-quality images.

To this end, the JPEG Committee has issued a Call for Proposals following its 79th meeting in April 2018, with the objective of seeking technologies that fulfill the objectives and scope of a Next Generation Image Coding. The Call for Proposals (CfP), with all related info, can be found at jpeg.org. The deadline for expression of interest and registration is August 15, 2018, and submissions to the Call are due September 1, 2018. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

This project aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets such as VR/AR/MR applications and autonomous cars. Important use cases identified include video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression in the automotive industry. During the La Jolla meeting, profiles and levels were defined to help implementers accurately size their designs for their specific use cases. Transport of JPEG XS over IP networks or SDI infrastructures is also being specified and will be finalized during the next JPEG meeting in Berlin (July 9-13, 2018). The JPEG committee therefore invites interested parties, in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions, to contribute to the specification process. Publication of the core coding system as an International Standard is expected in Q4 2018.

 

JPEG Systems – JPEG 360

The JPEG Committee continues to make progress towards its goals to define a common framework and definitions for metadata which will improve the ability to share 360 images and provide the basis to enable new user interaction with images.  At the 79th JPEG meeting in La Jolla, the JPEG committee received responses to a call for proposals it issued for JPEG 360 metadata. As a result, JPEG Systems is readying a committee draft of “JPEG Universal Metadata Box Format (JUMBF)” as ISO/IEC 19566-5, and “JPEG 360” as ISO/IEC 19566-6.  The box structure defined by JUMBF allows JPEG 360 to define a flexible metadata schema and the ability to link JPEG code streams embedded in the file. It also allows keeping unstitched image elements for omnidirectional captures together with the main image and descriptive metadata in a single file.  Furthermore, JUMBF lays the groundwork for a uniform approach to integrate tools satisfying the emerging requirements for privacy and security metadata.
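To illustrate the box convention JUMBF builds on, the following is a minimal sketch of a parser for a flat sequence of ISOBMFF-style boxes (4-byte big-endian size followed by a 4-byte type). It only walks the top level and does not interpret JUMBF-specific payloads, which are defined normatively in ISO/IEC 19566-5.

```python
import struct

def parse_boxes(data: bytes):
    """Walk a flat sequence of ISOBMFF-style boxes: each box starts with a
    4-byte big-endian size and a 4-byte ASCII type. JUMBF reuses this box
    convention; this sketch does not interpret JUMBF-specific payloads."""
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii", errors="replace")
        if size == 1:
            # a 64-bit "largesize" follows the type field
            size, = struct.unpack_from(">Q", data, offset + 8)
        elif size == 0:
            # the box extends to the end of the data
            size = len(data) - offset
        boxes.append((box_type, offset, size))
        offset += size
    return boxes
```

Because the size field covers the whole box, a parser can skip unknown box types, which is what lets JUMBF embed a flexible metadata schema without breaking legacy readers.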

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector. 

 

HTJ2K

High Throughput JPEG 2000 (HTJ2K) aims to develop an alternate block-coding algorithm that can be used in place of the existing block coding algorithm specified in ISO/IEC 15444-1 (JPEG 2000 Part 1). The objective is to significantly increase the throughput of JPEG 2000, at the expense of a small reduction in coding efficiency, while allowing mathematically lossless transcoding to and from codestreams using the existing block coding algorithm.

As a result of a Call for Proposals issued at its 76th meeting, the JPEG Committee has selected a block-coding algorithm as the basis for Part 15 of the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). The algorithm has demonstrated an average tenfold increase in encoding and decoding throughput, compared to the algorithms based on JPEG 2000 Part 1. This increase in throughput results in less than 15% average loss in coding efficiency, and allows mathematically lossless transcoding to and from JPEG 2000 Part 1 codestreams.

A Working Draft of Part 15 of the JPEG 2000 suite of standards is now under development.

 

JPEG Pleno

The JPEG Committee is currently pursuing three activities in the framework of the JPEG Pleno Standardization: Light Field, Point Cloud and Holographic content coding.

JPEG Pleno Light Field finished a third round of core experiments for assessing the impact of individual coding modules and started work on creating software for a verification model. Moreover, additional test data has been studied and approved for use in future core experiments. Working Draft documents for JPEG Pleno specifications Part 1 and Part 2 were updated. A JPEG Pleno Light Field AhG was established with mandates to create a common test conditions document; perform exploration studies on new datasets, quality metrics, and random-access performance indicators; and to update the working draft documents for Part 1 and Part 2.

Furthermore, use cases were studied and are under consideration for JPEG Pleno Point Cloud. A current draft list is under discussion for the next period and will be updated and mapped to the JPEG Pleno requirements. A final document on use cases and requirements for JPEG Pleno Point Cloud is expected at the next meeting.

JPEG Pleno Holography has reviewed the draft of a holography overview document. Moreover, the current databases were classified according to use cases, and plans to analyze numerical reconstruction tools were established.

 

JPEG XT

The JPEG Committee released two corrigenda to JPEG XT Part 1 (core coding system) and JPEG XT Part 8 (lossless extension JPEG-1). These corrigenda clarify the upsampling procedure for chroma-subsampled images by adopting the centered upsampling in use by JFIF.

 

JPEG Reference Software

The JPEG Committee is pleased to announce that the CD ballot for Reference Software has been issued for the original JPEG-1 standard. This initiative closes a long-standing gap in the legacy JPEG standard by providing two reference implementations for this widely used and popular image coding format.

Final Quote

“The JPEG Committee is hopeful that its recently launched Next Generation Image Coding activity, JPEG XL, will result in a format that becomes as important for imaging products and services as its predecessor: the widely used and popular legacy JPEG format, which has been in service for a quarter of a century,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 79th JPEG Meeting was held on 9-15 April 2018, in La Jolla, California, USA. The next 80th JPEG Meeting will be held on 7-13 July 2018, in Berlin, Germany.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

 

Future JPEG meetings are planned as follows:

  • No 80, Berlin, Germany, July 7 to 13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018
  • No 82, Lisbon, Portugal, January 19 to 25, 2019

MPEG Column: 122nd MPEG Meeting in San Diego, CA, USA

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

MPEG122 Plenary, San Diego, CA, USA.

MPEG122 Plenary, San Diego, CA, USA.

The MPEG press release comprises the following topics:

  • Versatile Video Coding (VVC) project starts strongly in the Joint Video Experts Team
  • MPEG issues Call for Proposals on Network-based Media Processing
  • MPEG finalizes 7th edition of MPEG-2 Systems Standard
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with two new features
  • MPEG-G standards reach Draft International Standard for transport and compression technologies

Versatile Video Coding (VVC) – MPEG’s and VCEG’s new video coding project starts strong

The Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16’s VCEG, commenced work on a new video coding standard referred to as Versatile Video Coding (VVC). The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard (i.e., typically a factor of two) and to be completed in 2020. The main target applications and services include, but are not limited to, 360-degree and high-dynamic-range (HDR) videos. In total, JVET evaluated responses from 32 organizations using formal subjective tests conducted by independent test labs. Interestingly, some proposals demonstrated compression efficiency gains of typically 40% or more when compared to using HEVC. Particular effectiveness was shown on ultra-high-definition (UHD) video test material. Thus, we may expect compression efficiency gains well beyond the targeted 50% for the final standard.

Research aspects: Compression tools and everything around them, including their objective and subjective assessment. The main application areas are clearly 360-degree and HDR video. Watch out for conferences like PCS and ICIP (later this year), which will be full of papers making references to VVC. Interestingly, VVC comes with a first draft, a test model for simulation experiments, and a technology benchmark set, which is useful and important for any developments both inside and outside MPEG as it allows for reproducibility.

MPEG issues Call for Proposals on Network-based Media Processing

This Call for Proposals (CfP) addresses advanced media processing technologies such as network stitching for VR services, super-resolution for enhanced visual quality, transcoding, and viewport extraction for 360-degree video within the network environment, allowing service providers and end users to describe media processing operations that are to be performed by the network. The aim of network-based media processing (NBMP) is thus to allow end-user devices to offload certain kinds of processing to the network. To this end, NBMP describes the composition of network-based media processing services based on a set of media processing functions and makes them accessible through Application Programming Interfaces (APIs). Responses to the NBMP CfP will be evaluated on the weekend prior to the 123rd MPEG meeting in July 2018.
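As a hedged illustration of the offloading idea, and emphatically not the actual NBMP API (the endpoint path, function names, and document shape below are invented for this sketch), a client might describe a workflow of processing tasks and hand it to the network:

```python
import json

# Hypothetical sketch, not the actual NBMP API: the function names
# ("360-stitching", "viewport-extraction") and the endpoint path are
# illustrative assumptions only.

def build_workflow(tasks):
    """Serialize a list of task descriptions into a JSON workflow document."""
    return json.dumps({"workflow": {"tasks": tasks}})

def submit_workflow(post, endpoint, tasks):
    """Submit the workflow via an injected `post` callable so the sketch
    stays self-contained and testable (no real network access)."""
    return post(endpoint, build_workflow(tasks))
```

The point of the sketch is the division of labor: the device only authors a declarative description, while the network entity hosting the media processing functions interprets and executes it.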

Research aspects: This project reminds me a lot of what has been done in the past in MPEG-21, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG targets APIs rather than pure metadata formats, which is a step in the right direction as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

7th edition of MPEG-2 Systems Standard and ISO Base Media File Format (ISOBMFF) with two new features

More than 20 years after its inception, development of MPEG-2 Systems technology (i.e., transport/program stream) continues. New features include support for: (i) JPEG 2000 video with 4K resolution and ultra-low latency, (ii) media-orchestration-related metadata, (iii) sample variants, and (iv) HEVC tiles.

The partial file format enables the description of an ISOBMFF file partially received over lossy communication channels. This format provides tools to describe the received data and to document transmission information, such as received or lost byte ranges and whether the corrupted/lost bytes are present in the file, as well as repair information, such as the location of the source file, possible byte offsets in that source, and the byte stream position at which a parser can try processing a corrupted file. Depending on the communication channel, this information may be set up by the receiver or through out-of-band means.
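To make the idea concrete, here is a minimal, hypothetical model of such reception metadata. The field names and the derivation of lost ranges are illustrative and not taken from the specification; the sketch only shows why recording received byte ranges plus a repair location is enough to drive a repair process.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PartialFileInfo:
    """Hypothetical model of the kind of metadata the ISOBMFF partial file
    format carries: which byte ranges of a source file were received, plus
    repair information such as where the original can be re-fetched.
    Field names are illustrative, not taken from the specification."""
    source_url: str                                    # repair info: where to re-fetch
    file_size: int                                     # total size of the source file
    received: List[Tuple[int, int]] = field(default_factory=list)  # [start, end) ranges

    def lost_ranges(self) -> List[Tuple[int, int]]:
        """Derive the lost byte ranges as the complement of the received ones."""
        lost, cursor = [], 0
        for start, end in sorted(self.received):
            if start > cursor:
                lost.append((cursor, start))
            cursor = max(cursor, end)
        if cursor < self.file_size:
            lost.append((cursor, self.file_size))
        return lost
```

A repair client could then issue HTTP range requests against `source_url` for exactly the ranges returned by `lost_ranges()`.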

ISOBMFF’s sample variants (2nd edition) are typically used to provide forensic information in the rendered sample data that can, for example, identify the specific Digital Rights Management (DRM) client which has decrypted the content. This variant framework is intended to be fully compatible with MPEG’s Common Encryption (CENC) and agnostic to the particular forensic marking system used.

Research aspects: MPEG systems standards are mainly relevant for multimedia systems research with all its characteristics. The partial file format is specifically interesting as it targets scenarios with lossy communication channels.

MPEG-G standards reach Draft International Standard for transport and compression technologies

MPEG-G provides a set of standards enabling interoperability for applications and services dealing with high-throughput deoxyribonucleic acid (DNA) sequencing. At its 122nd meeting, MPEG promoted its core set of MPEG-G specifications, i.e., transport and compression technologies, to Draft International Standard (DIS) stage. These parts of the standard provide new transport technologies (ISO/IEC 23092-1) and compression technologies (ISO/IEC 23092-2) supporting rich functionality for the access and transport, including streaming, of genomic data by interoperable applications. Reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5) will reach this stage in the next 12 months.

Research aspects: The main focus of this work item is compression; transport is still in its infancy. Therefore, research on the actual delivery of compressed DNA information as well as its processing is solicited.

What else happened at MPEG122?

  • The Requirements subgroup is exploring new video coding tools dealing with low complexity and process enhancements.
  • The activity around coded representation of neural networks has defined a set of vital use cases and is now soliciting test data until the next meeting.
  • The MP4 registration authority (MP4RA) has a new awesome web site http://mp4ra.org/.
  • MPEG-DASH is finally approving and working on the 3rd edition, comprising a consolidated version of recent amendments and corrigenda.
  • CMAF started an exploration on multi-stream support, which could be relevant for tiled streaming and multi-channel audio.
  • OMAF kicked off its activity towards a 2nd edition enabling support for 3DoF+ and social VR, with the plan to go to committee draft (CD) in Oct’18. Additionally, a test framework has been proposed which allows assessing the performance of various CMAF tools. Its main focus is on video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It could be interesting to see these two frameworks combined in one way or the other.
  • MPEG-I architectures (yes, plural) are becoming mature, and I think this technical report will become available very soon. In terms of video, MPEG-I looks more closely at 3DoF+, defining common test conditions and planning a call for proposals (CfP) for MPEG123 in Ljubljana, Slovenia. Additionally, explorations of 6DoF and compression of dense representations of light fields are ongoing and have been started, respectively.
  • Finally, point cloud compression (PCC) is in its hot phase of core experiments for various coding tools, resulting in updated versions of the test model and working draft.

Research aspects: In this section I would like to focus on DASH, CMAF, and OMAF. Multi-stream support, as mentioned above, is relevant for tiled streaming and multi-channel audio, which has recently been studied in the literature and is also highly relevant for industry. The efficient storage and streaming of such content within the file format is an important aspect that is often underrepresented in both research and standardization. The goal here is to keep the overhead low while maximizing the utility of the format to enable certain functionalities. OMAF now targets the social VR use case, which has been discussed in the research literature for a while and finally makes its way into standardization. An important aspect here is both user experience and quality of experience, which requires intensive subjective testing.

Finally, on May 10 MPEG will celebrate 30 years, as its first meeting dates back to 1988 in Ottawa, Canada, with around 30 attendees. The 122nd meeting had more than 500 attendees, and MPEG has around 20 active work items. A total of more than 170 standards have been produced (that’s approx. six standards per year), and some standards have up to nine editions, like the HEVC standards. Overall, MPEG is responsible for more than 23% of all JTC 1 standards, some of them showing extraordinary longevity regarding extensions, e.g., MPEG-2 Systems (24 years), MPEG-4 file format (19 years), and AVC (15 years). MPEG standards serve billions of users (e.g., MPEG-1 video, MP2, MP3, AAC, MPEG-2, AVC, ISOBMFF, DASH). Some standards, five to be precise, have received Emmy Awards in the past (MPEG-1, MPEG-2, AVC (2x), and HEVC).

Thus, happy birthday MPEG! At 30, one enters the high-performance era, basically the time of “compression”: we apply all that we have learnt and live it out fully, a truly optimistic perspective for our generation-X (millennial) standards body!

JPEG Column: 78th JPEG Meeting in Rio de Janeiro, Brazil

The JPEG Committee had its 78th meeting in Rio de Janeiro, Brazil. Relevant to its ongoing standardization efforts in JPEG Privacy and Security, JPEG organized a special session to explore how blockchain and distributed ledger technologies could be supported in past, ongoing and future JPEG standards. This is motivated by the potential impact of such technologies on the future of multimedia: standardization will be required to enable interoperability between different imaging systems and services relying on blockchain and distributed ledger technologies.

Blockchain and distributed ledger technologies are behind the well-known crypto-currencies. These technologies can provide means for content authorship, as well as intellectual property and rights management control of multimedia information. New possibilities become available, namely support for tracking the online use of copyrighted images and the ownership of digital content.


JPEG meeting session.

The Rio de Janeiro JPEG meeting comprised mainly the following highlights:

  • JPEG explores blockchain and distributed ledger technologies
  • JPEG 360 Metadata
  • JPEG XL
  • JPEG XS
  • JPEG Pleno
  • JPEG Reference Software
  • JPEG 25th anniversary of the first JPEG standard

The following summarizes various activities during JPEG’s Rio de Janeiro meeting.

JPEG explores blockchain and distributed ledger technologies

During the 78th JPEG meeting in Rio de Janeiro, the JPEG committee organized a special session on blockchain and distributed ledger technologies and their impact on JPEG standards. As a result, the committee decided to explore use cases and standardization needs related to blockchain technology in a multimedia context. Use cases will be explored in relation to the recently launched JPEG Privacy and Security, as well as in the broader landscape of imaging and multimedia applications. To that end, the committee created an ad hoc group with the aim of gathering input from experts to define these use cases and to explore the potential needs and advantages of a standardization effort focused on imaging and multimedia applications. To get involved in the discussion, interested parties can register for the ad hoc group’s mailing list. Instructions to join the list are available at http://jpeg-blockchain-list.jpeg.org.

JPEG 360 Metadata

The JPEG Committee notes the increasing use of multi-sensor images from multi-sensor devices, such as 360-degree capture cameras or dual-camera smartphones available to consumers. Images from these cameras are shown on computers, smartphones, and Head-Mounted Displays (HMDs). JPEG standards are commonly used for image compression and file formats. However, because existing JPEG standards do not fully cover these new uses, incompatibilities have reduced the interoperability of their images, thus reducing the ubiquity that consumers have come to expect when using JPEG files. Additionally, new modalities for interacting with images, such as computer-based augmentation, face-tagging, and object classification, require support for metadata that was not part of the original scope of JPEG. A set of such JPEG 360 use cases is described in the JPEG 360 Metadata Use Cases document.

To avoid fragmentation in the market and to ensure wide interoperability, a standard way of interacting with multi-sensor images with richer metadata is desired in JPEG standards. JPEG invites all interested parties, including manufacturers, vendors and users of such devices to submit technology proposals for enabling interactions with multi-sensor images and metadata that fulfill the scope, objectives and requirements that are outlined in the final Call for Proposals, available on the JPEG website.

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector.

JPEG XL

The Next-Generation Image Compression activity (JPEG XL) has produced a revised draft Call for Proposals, and intends to publish a final Call for Proposals (CfP) following its 79th meeting (April 2018), with the objective of seeking technologies that fulfill the objectives and scope of Next-Generation Image Compression. During the 78th meeting, objective and subjective quality assessment methodologies for anchor and proposal evaluations were discussed and analyzed. As an outcome of the meeting, source code for objective quality assessment has been made available.

The draft Call for Proposals, with all related info, can be found on the JPEG website. Comments are welcome and should be submitted as specified in the document. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

Since its previous 77th meeting, subjective quality evaluations have shown that the initial quality requirement of the JPEG XS Core Coding System has been met, i.e., visually lossless quality at a compression ratio of 6:1 for the large majority of images under test. Several profiles are now under development in JPEG XS, as well as transport and container formats. The JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the furthering of the specifications in the above directions. Publication of the International Standard is expected for Q3 2018.

JPEG Pleno

JPEG Pleno activity is currently divided into Pleno Light Field, Pleno Point Cloud and Pleno Holography. JPEG Pleno Light Field has been preparing a third round of core experiments for assessing the impact of individual coding modules on the overall rate-distortion performance. Moreover, it was decided to continue collecting additional test data and to progress with the preparation of working documents for JPEG Pleno specifications Part 1 and Part 2.

Furthermore, quality modelling studies are under consideration for both JPEG Pleno Point Cloud and JPEG Pleno Holography. In particular, JPEG Pleno Point Cloud is considering a set of new quality metrics provided as contributions to this work item. It is expected that the new metrics will replace the current state of the art, as they have shown superior correlation with subjective quality as perceived by humans. Moreover, new subjective assessment models have been tested and analysed to better understand the perception of quality for such new types of visual information.

JPEG Reference Software

The JPEG committee is pleased to announce that its first JPEG image coding specification is now augmented by a new part, ISO/IEC 10918-7, containing reference software. The proposed candidate software implementations have been checked for compliance with ISO/IEC 10918-2. Considering the positive results, this new part of the JPEG standard will continue to evolve quickly.


JPEG meeting room window view during a break.

JPEG 25th anniversary of the first JPEG standard

The third and final celebration of the 25th anniversary of JPEG’s first standard is planned for the next 79th JPEG meeting, taking place in La Jolla, CA, USA. The anniversary will be marked by a two-hour workshop on Friday, 13 April, on current and emerging JPEG technologies, followed by a social event where past JPEG committee members with relevant contributions will be awarded.

Final Quote

“Blockchain and distributed ledger technologies promise a significant impact on the future of many fields. JPEG is committed to providing standard mechanisms to apply blockchain to multimedia applications in general and to imaging in particular,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee meets nominally four times a year, in different world locations. The latest 77th meeting was held from 21st to 27th of October 2017, in Macau, China. The next 79th JPEG Meeting will be held on 9-15 April 2018, in La Jolla, California, USA.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 79, La Jolla (San Diego), CA, USA, April 9 to 15, 2018
  • No 80, Berlin, Germany, July 7 to 13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018

 

 

MPEG Column: 121st MPEG Meeting in Gwangju, Korea

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level
  • MPEG-G standards reach Committee Draft for metadata and APIs
  • MPEG issues Calls for Visual Test Material for Immersive Applications
  • Internet of Media Things (IoMT) reaches Committee Draft level
  • MPEG finalizes its Media Orchestration (MORE) standard

At the end, I will also briefly summarize what else happened with respect to DASH, CMAF, and OMAF, as well as discuss future aspects of MPEG.

Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level

The Committee Draft (CD) for CDVA has been approved at the 121st MPEG meeting, which is the first formal step of the ISO/IEC approval process for a new standard. This will become a new part of MPEG-7 to support video search and retrieval applications (ISO/IEC 15938-15).

Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors which can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers, so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. The CDVA standard specifies descriptors that fulfil these needs and includes (i) the components of the CDVA descriptor, (ii) its bitstream representation and (iii) the extraction process. The final standard is expected to be finished in early 2019.

CDVA introduces a new descriptor based on features which are output from a Deep Neural Network (DNN). CDVA is robust against viewpoint changes and moderate transformations of the video (e.g., re-encoding, overlays), and it supports partial matching and temporal localization of the matching content. The CDVA descriptor has a typical size of 2–4 KBytes per second of video. For typical test cases, it has been demonstrated to reach a correct matching rate of 88% (at a 1% false matching rate).
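Using the 2–4 KByte/s figure quoted above, a quick back-of-the-envelope calculation shows why exchanging compact descriptors instead of source video is attractive; the archive size chosen below is purely illustrative.

```python
def cdva_storage_gb(hours_of_video, kbytes_per_second=4):
    """Back-of-the-envelope sizing based on the 2-4 KByte/s descriptor rate
    quoted above (upper bound by default); decimal GB for simplicity."""
    return hours_of_video * 3600 * kbytes_per_second / 1e6

# e.g. descriptors for a hypothetical 10,000-hour video archive
archive_gb = cdva_storage_gb(10_000)   # 144 GB at 4 KByte/s
```

By comparison, the same archive as source video at even a modest 2 Mbit/s would occupy tens of terabytes, which is why only the descriptors need to be exchanged for matching or querying.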

Research aspects: There are probably endless research aspects in the visual descriptor space, ranging from validation of the results achieved so far to further improving informative aspects with the goal of increasing the correct matching rate (and consequently decreasing the false matching rate). In general, however, the question is whether there is a need for descriptors in the era of bandwidth/storage/computing over-provisioning and the rising usage of artificial intelligence techniques such as machine learning and deep learning.

MPEG-G standards reach Committee Draft for metadata and APIs

In my previous report I introduced the MPEG-G standard for compression and transport technologies of genomic data. At the 121st MPEG meeting, metadata and APIs reached CD level. The former – metadata – provides relevant information associated with genomic data and the latter – APIs – allow for building interoperable applications capable of manipulating MPEG-G files. Additional standardization plans for MPEG-G include the CDs for reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5), which are planned to be issued at the next 122nd MPEG meeting with the objective of producing Draft International Standards (DIS) at the end of 2018.

Research aspects: Metadata typically enables certain functionality which can be tested and evaluated against requirements. APIs allow building applications and services on top of the underlying functions, which could be a driver for research projects making use of such APIs.

MPEG issues Calls for Visual Test Material for Immersive Applications

I have reported about the Omnidirectional Media Format (OMAF) in my previous report. At the 121st MPEG meeting, MPEG worked on extending OMAF functionalities to allow the modification of viewing positions, e.g., in case of head movements when using a head-mounted display, or for use with other forms of interactive navigation. Unlike OMAF, which only provides three degrees of freedom (3DoF) for the user to view the content from a perspective looking outwards from the original camera position, the anticipated extension will also support motion parallax within some limited range, which is referred to as 3DoF+. In the future, with further enhanced technologies, full six degrees of freedom (6DoF) will be achieved with changes of the viewing position over a much larger range. To develop technology in these domains, MPEG has issued two Calls for Test Material in the areas of 3DoF+ and 6DoF, asking owners of image and video material to provide such content for use in developing and testing candidate technologies for standardization. Details about these calls can be found at https://mpeg.chiariglione.org/.

Research aspects: The good thing about test material is that it allows for reproducibility, which is an important aspect within the research community. Thus, it is more than appreciated that MPEG issues such calls, and let's hope that this material will become publicly available. Typically this kind of visual test material targets coding, but it would also be interesting to have such test content for storage and delivery.

Internet of Media Things (IoMT) reaches Committee Draft level

The goal of IoMT is to facilitate the large-scale deployment of distributed media systems with interoperable audio/visual data and metadata exchange. This standard specifies APIs providing media things (i.e., cameras/displays and microphones/loudspeakers, possibly capable of significant processing power) with the capability of being discovered, setting up ad-hoc communication protocols, exposing usage conditions, and providing media and metadata as well as services processing them. IoMT APIs encompass a large variety of devices, not just connected cameras and displays but also sophisticated devices such as smart glasses, image/speech analyzers and gesture recognizers. IoMT enables the expression of the economic value of resources (media and metadata) and of associated processing in terms of digital tokens leveraged by the use of blockchain technologies.

Research aspects: The main focus of IoMT is APIs which provide easy and flexible access to the underlying devices' functionality and, thus, are an important factor in enabling research within this interesting domain. For example, using these APIs to enable communication among these various media things could bring up new forms of interaction with these technologies.
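As a rough illustration of what such APIs enable, the sketch below models discovery of media things by capability. All class, method, and capability names are hypothetical and invented for this example; the normative IoMT APIs are defined by the standard itself.

```python
# Hypothetical sketch: a registry where media things announce their
# capabilities and clients discover them by capability and/or kind.
# Names are invented for illustration, not the normative IoMT API.

class MediaThing:
    def __init__(self, name, kind, capabilities):
        self.name = name
        self.kind = kind                    # e.g. "camera", "display", "analyzer"
        self.capabilities = set(capabilities)

class Registry:
    def __init__(self):
        self._things = []

    def register(self, thing):
        """A media thing announces itself to the registry."""
        self._things.append(thing)

    def discover(self, capability=None, kind=None):
        """Find registered media things matching the given filters."""
        return [t for t in self._things
                if (capability is None or capability in t.capabilities)
                and (kind is None or t.kind == kind)]

reg = Registry()
reg.register(MediaThing("cam1", "camera", {"h264", "pan-tilt"}))
reg.register(MediaThing("glasses1", "display", {"overlay", "h264"}))
reg.register(MediaThing("speech1", "analyzer", {"speech-to-text"}))

print([t.name for t in reg.discover(capability="h264")])  # → ['cam1', 'glasses1']
```

A client could then set up a session between any discovered pair, e.g. stream from `cam1` to `glasses1`, without knowing either device in advance.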

MPEG finalizes its Media Orchestration (MORE) standard

The MPEG “Media Orchestration” (MORE) standard has reached Final Draft International Standard (FDIS), the final stage of development before being published by ISO/IEC. The scope of the Media Orchestration standard is as follows:

  • It supports the automated combination of multiple media sources (i.e., cameras, microphones) into a coherent multimedia experience.
  • It supports rendering multimedia experiences on multiple devices simultaneously, again giving a consistent and coherent experience.
  • It contains tools for orchestration in time (synchronization) and space.

MPEG expects the Media Orchestration standard to be especially useful in immersive media settings. This applies notably to social virtual reality (VR) applications, where people share a VR experience and are able to communicate about it. Media Orchestration is expected to allow synchronizing the media experience for all users and to give them a spatially consistent experience, as it is important for a social VR user to be able to understand when other users are looking at them.
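The temporal-synchronization aspect can be illustrated with a simple wall-clock scheme: if all devices agree on a shared reference clock and on the wall-clock time at which the experience started, each device can compute the playback position it should be at and correct its local drift. This is a generic sketch under those assumptions, not the mechanism specified by the MORE standard.

```python
# Generic sketch of cross-device playback synchronization: each device
# derives its target media position from a shared wall clock and seeks
# when its local position has drifted beyond a tolerance. This is an
# assumption-based illustration, not the normative MORE tooling.

def target_position(now, start_wallclock, rate=1.0):
    """Media position (seconds) a device should be at, given shared time."""
    return max(0.0, (now - start_wallclock) * rate)

def correction(local_position, now, start_wallclock, tolerance=0.04):
    """Return seconds to seek by, or 0.0 if within tolerance (~1 frame)."""
    drift = target_position(now, start_wallclock) - local_position
    return drift if abs(drift) > tolerance else 0.0

# Experience started at shared time 1000.0 s; it is now 1010.0 s,
# so every device should be at media position 10.0 s.
print(correction(local_position=9.5, now=1010.0, start_wallclock=1000.0))
print(correction(local_position=10.01, now=1010.0, start_wallclock=1000.0))
```

The first device is 0.5 s behind and seeks forward; the second is within tolerance and plays on untouched. Real deployments additionally need clock synchronization (e.g., NTP-like offset estimation) to establish the shared `now`.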

Research aspects: This standard enables the social multimedia experience proposed in the literature. Interestingly, the W3C is working on something similar referred to as the timing object, and it would be interesting to see whether these approaches have some commonalities.


What else happened at the MPEG meeting?

DASH is fully in maintenance mode and we are still waiting for the 3rd edition, which is supposed to be a consolidation of existing corrigenda and amendments. Currently only minor extensions are proposed and conformance/reference software is being updated. Similar things can be said about CMAF, where we have one amendment and one corrigendum under development. Additionally, MPEG is working on CMAF conformance. OMAF reached FDIS at the last meeting and MPEG is also working on its reference software and conformance. It is expected that in the future we will see additional standards and/or technical reports defining/describing how to use CMAF and OMAF in DASH.

Regarding the future video codec, the call for proposals has been out since the last meeting as announced in my previous report, and responses are due for the next meeting. Thus, it is expected that the 122nd MPEG meeting will be the place to be in terms of MPEG's future video codec. Speaking about the future, shortly after the 121st MPEG meeting, Leonardo Chiariglione published a blog post entitled “a crisis, the causes and a solution”, which is related to HEVC licensing, the Alliance for Open Media (AOM), and possible future options. The blog post certainly caused some reactions within the video community at large, and I think this was also intended. Let's hope it will galvanize the video industry, not to push the button, but to start addressing and resolving the issues. As pointed out in one of my other blog posts about what to care about in 2018, the upcoming MPEG meeting in April 2018 is certainly a place to be. Additionally, it highlights some conferences related to various aspects also discussed in MPEG which I'd like to republish here:

  • QoMEX — Int’l Conf. on Quality of Multimedia Experience — will be hosted in Sardinia, Italy from May 29-31, which is THE conference to be for QoE of multimedia applications and services. Submission deadline is January 15/22, 2018.
  • MMSys — Multimedia Systems Conf. — and specifically Packet Video, which will be on June 12 in Amsterdam, The Netherlands. Packet Video is THE adaptive streaming scientific event 2018. Submission deadline is March 1, 2018.
  • Additionally, you might be interested in ICME (July 23-27, 2018, San Diego, USA), ICIP (October 7-10, 2018, Athens, Greece; specifically in the context of video coding), and PCS (June 24-27, 2018, San Francisco, CA, USA; also in the context of video coding).
  • The DASH-IF academic track hosts special events at MMSys (Excellence in DASH Award) and ICME (DASH Grand Challenge).
  • MIPR — 1st Int’l Conf. on Multimedia Information Processing and Retrieval — will be in Miami, Florida, USA from April 10-12, 2018. It has a broad range of topics including networking for multimedia systems as well as systems and infrastructures.
 

MPEG Column: 120th MPEG Meeting in Macau, China

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

MPEG Plenary Meeting

The MPEG press release comprises the following topics:

  • Point Cloud Compression – MPEG evaluates responses to call for proposal and kicks off its technical work
  • The omnidirectional media format (OMAF) has reached its final milestone
  • MPEG-G standards reach Committee Draft for compression and transport technologies of genomic data
  • Beyond HEVC – The MPEG & VCEG call to set the next standard in video compression
  • MPEG adds better support for mobile environment to MMT
  • New standard completed for Internet Video Coding
  • Evidence of new video transcoding technology using side streams

Point Cloud Compression

At its 120th meeting, MPEG analysed the technologies submitted by nine industry leaders as responses to the Call for Proposals (CfP) for Point Cloud Compression (PCC). These technologies address the lossless or lossy coding of 3D point clouds with associated attributes such as colour and material properties. A point cloud is an unordered set of points in 3D space, typically captured using various setups of multiple cameras, depth sensors, LiDAR scanners, etc., but it can also be generated synthetically; point clouds are in use in several industries. They have recently emerged as representations of the real world enabling immersive forms of interaction, navigation, and communication. Point clouds are typically represented by extremely large amounts of data, which presents a significant barrier for mass market applications. Thus, MPEG has issued a Call for Proposals seeking technologies that allow reduction of point cloud data for its intended applications. After a formal objective and subjective evaluation campaign, MPEG selected three technologies as starting points for the test models for static, animated, and dynamically acquired point clouds. A key conclusion of the evaluation was that state-of-the-art point cloud compression can be significantly improved by leveraging decades of 2D video coding tools and combining 2D and 3D compression technologies. Such an approach provides synergies with existing hardware and software infrastructures for rapid deployment of new immersive experiences.
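A back-of-the-envelope calculation shows why compression is essential here. The precisions below (10 bits per coordinate axis, 8-bit RGB attributes) and the 100:1 ratio are illustrative assumptions, not figures from the CfP:

```python
# Rough raw data rate of a dynamic point cloud under assumed precisions:
# 3 x 10-bit coordinates + 3 x 8-bit colour components per point.
points_per_frame = 1_000_000
bits_per_point = 3 * 10 + 3 * 8          # geometry + colour = 54 bits
frames_per_second = 30

raw_bps = points_per_frame * bits_per_point * frames_per_second
print(f"raw: {raw_bps / 1e9:.2f} Gbit/s")               # raw: 1.62 Gbit/s

# Even an (assumed) 100:1 compression ratio still leaves a hefty stream:
compressed_bps = raw_bps / 100
print(f"100:1 compressed: {compressed_bps / 1e6:.1f} Mbit/s")  # 16.2 Mbit/s
```

At over a gigabit per second raw, such content is far beyond typical distribution networks, which is exactly the barrier the CfP set out to remove.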

Although the initial selection of technologies for point cloud compression was concluded at the 120th MPEG meeting, it can also be seen as a kick-off for its scientific evaluation and various further developments, including optimizations thereof. It is expected that various scientific conferences will focus on point cloud compression and may issue calls for grand challenges, as for example at IEEE ICME 2018.

Omnidirectional Media Format (OMAF)

The understanding of the virtual reality (VR) potential is growing, but market fragmentation caused by the lack of interoperable formats for the storage and delivery of such content stifles VR's market potential. MPEG's recently started project referred to as Omnidirectional Media Format (OMAF) has reached Final Draft International Standard (FDIS) at its 120th meeting. It includes

  • equirectangular projection and cubemap projection as projection formats;
  • signalling of metadata required for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data; and
  • a selection of audio-visual codecs for this application.

It also includes technologies to arrange video pixel data in numerous ways to improve compression efficiency and reduce the size of video, a major bottleneck for VR applications and services. The standard also includes technologies for the delivery of OMAF content with MPEG-DASH and MMT.

MPEG has defined a format comprising a minimal set of tools to enable interoperability among implementers of the standard. Various aspects are deliberately excluded from the normative parts to foster innovation leading to novel products and services. This enables us, researchers and practitioners alike, to experiment with these new formats in various ways and to focus on informative aspects where typically competition can be found. For example, efficient means for encoding and packaging of omnidirectional/360-degree media content and its adaptive streaming, including support for (ultra-)low latency, will become a big issue in the near future.

MPEG-G: Compression and Transport Technologies of Genomic Data

The availability of high-throughput DNA sequencing technologies opens new perspectives in the treatment of several diseases, making possible the introduction of new global approaches in public health known as “precision medicine”. While routine DNA sequencing in the doctor's office is still not current practice, medical centers have begun to use sequencing to identify cancer and other diseases and to find effective treatments. As DNA sequencing technologies produce extremely large amounts of data and related information, the ICT costs of storage, transmission, and processing are also very high. The MPEG-G standard addresses and solves the problem of efficient and economical handling of genomic data by providing new

  • compression technologies (ISO/IEC 23092-2) and
  • transport technologies (ISO/IEC 23092-1),

which reached Committee Draft level at its 120th meeting.

Additionally, the Committee Drafts for

  • metadata and APIs (ISO/IEC 23092-3) and
  • reference software (ISO/IEC 23092-4)

are scheduled for the next MPEG meeting and the goal is to publish Draft International Standards (DIS) at the end of 2018.

This new type of (media) content, which requires compression and transport technologies, is emerging within the multimedia community at large and, thus, input is welcome.

Beyond HEVC – The MPEG & VCEG Call to set the Next Standard in Video Compression

The 120th MPEG meeting marked the first major step toward the next-generation video coding standard in the form of a joint Call for Proposals (CfP) with ITU-T SG16's VCEG. After two years of collaborative informal exploration studies and a gathering of evidence that successfully concluded at the 118th MPEG meeting, MPEG and ITU-T SG16 agreed to issue the CfP for future video coding technology with compression capabilities that significantly exceed those of the HEVC standard and its current extensions. They also formalized an agreement on the formation of a joint collaborative team called the “Joint Video Experts Team” (JVET) to work on development of the new planned standard, pending the outcome of the CfP that will be evaluated at the 122nd MPEG meeting in April 2018. To evaluate the proposed compression technologies, formal subjective tests will be performed using video material submitted by proponents in February 2018. The CfP includes the testing of technology for 360° omnidirectional video coding and the coding of content with high dynamic range and wide colour gamut in addition to conventional standard-dynamic-range camera content. Anticipating a strong response to the call, a “test model” draft design is expected to be selected in 2018, with completion of a potential new standard in late 2020.

The major goal of a new video coding standard is to be better than its predecessor (HEVC). Typically this “better” is quantified as 50%, meaning that it should be possible to encode the video at the same quality with half the bitrate, or at a significantly higher quality with the same bitrate. However, at this time the “Joint Video Experts Team” (JVET) of MPEG and ITU-T SG16 faces competition from the Alliance for Open Media, which is working on AV1. In any case, we are looking forward to an exciting time frame from now until this new codec is ratified and to seeing how it will perform compared to AV1. Multimedia systems and applications will also benefit from new codecs, which will gain traction as soon as first implementations become available (note that AV1 is already available as open source and is continuously being developed further).
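To make the 50% target concrete, a quick calculation helps; the stream parameters below (a 16 Mbit/s 4K HEVC stream, a two-hour movie) are illustrative assumptions:

```python
# Illustrative impact of a 50% bitrate reduction at equal quality.
hevc_bitrate_bps = 16_000_000              # assumed 4K HEVC stream
new_codec_bitrate_bps = hevc_bitrate_bps * 0.5

movie_seconds = 2 * 60 * 60                # a two-hour movie

def gigabytes(bitrate_bps, seconds):
    """Total transferred data in GB for a constant-bitrate stream."""
    return bitrate_bps * seconds / 8 / 1e9

saved = gigabytes(hevc_bitrate_bps, movie_seconds) - \
        gigabytes(new_codec_bitrate_bps, movie_seconds)
print(f"data saved per movie: {saved:.1f} GB")   # data saved per movie: 7.2 GB
```

Multiplied across millions of streaming sessions, such savings explain why operators care so much about each codec generation.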

MPEG adds Better Support for Mobile Environment to MPEG Media Transport (MMT)

MPEG has approved the Final Draft Amendment (FDAM) to MPEG Media Transport (MMT; ISO/IEC 23008-1:2017), which is referred to as “MMT enhancements for mobile environments”. In order to reflect industry needs regarding MMT, which has been well adopted by broadcast standards such as ATSC 3.0 and Super Hi-Vision, it addresses several important issues concerning the efficient use of MMT in mobile environments. For example, it adds a distributed resource identification message to facilitate multipath delivery and a transition request message to change the delivery path of an active session. This amendment also introduces the concept of an MMT-aware network entity (MANE), which might be placed between the original server and the client, and provides a detailed description of how to use it for both improving efficiency and reducing delivery delay. Additionally, this amendment provides a method to use WebSockets to set up and control an MMT session/presentation.
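To illustrate the WebSocket-based control idea only, the sketch below builds JSON control messages for session setup and a delivery-path transition. The message and field names are invented for this example; the actual signalling messages and their syntax are defined in the MMT amendment, not here.

```python
import json

# Hypothetical session-control messages; message and field names are
# invented for illustration and are NOT the normative MMT signalling.

def setup_session(session_id, asset_url):
    """Build a (hypothetical) session-setup control message."""
    return json.dumps({"type": "session_setup",
                       "session_id": session_id,
                       "asset": asset_url})

def transition_request(session_id, new_path):
    """Ask to move an active session to a different delivery path."""
    return json.dumps({"type": "transition_request",
                       "session_id": session_id,
                       "new_path": new_path})

msg = transition_request("s-42", "wlan0")
print(msg)
# A client would send such messages over an established WebSocket
# connection, e.g. ws.send(msg) with any WebSocket client library.
```

The point is architectural: a persistent, bidirectional WebSocket channel lets the client (or a MANE) steer an active session, e.g. switching delivery paths mid-stream, without tearing the session down.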

New Standard Completed for Internet Video Coding

A new standard for video coding, suitable for the internet as well as other video applications, was completed at the 120th MPEG meeting. The Internet Video Coding (IVC) standard was developed with the intention of providing the industry with an “Option 1” video coding standard. In ISO/IEC language, this refers to a standard for which patent holders have declared a willingness to grant licenses free of charge to an unrestricted number of applicants for all necessary patents on a worldwide, non-discriminatory basis and under other reasonable terms and conditions, to enable others to make, use, and sell implementations of the standard. At the time of completion of the IVC standard, the specification contained no identified necessary patent rights except those available under Option 1 licensing terms. During the development of IVC, MPEG removed from the draft standard any necessary patent rights that it was informed were not available under such Option 1 terms, and MPEG is optimistic about the outlook for the new standard. MPEG encourages interested parties to provide information about any other similar cases. The IVC standard has roughly similar compression capability to the earlier AVC standard, which has become the most widely deployed video coding technology in the world. Tests have been conducted to verify IVC's strong technical capability, and the new standard has also been shown to have relatively modest implementation complexity requirements.

Evidence of new Video Transcoding Technology using Side Streams

Following a “Call for Evidence” (CfE) issued by MPEG in July 2017, evidence was evaluated at the 120th MPEG meeting to investigate whether video transcoding technology has been developed for transcoding assisted by side data streams that is capable of significantly reducing the computational complexity without reducing compression efficiency. The evaluations of the four responses received included comparisons of the technology against adaptive bit-rate streaming using simulcast as well as against traditional transcoding using full video re-encoding. The responses span the compression efficiency space between simulcast and full transcoding, with trade-offs between the bit rate required for distribution within the network and the bit rate required for delivery to the user. All four responses provided a substantial computational complexity reduction compared to transcoding using full re-encoding. MPEG plans to further investigate transcoding technology and is soliciting expressions of interest from industry on the need for standardization of such assisted transcoding using side data streams.

MPEG currently works on two related topics, which are referred to as network-distributed video coding (NDVC) and network-based media processing (NBMP). Both activities involve the network, which is evolving more and more into a highly distributed compute and delivery platform, as opposed to a bit pipe that is merely supposed to deliver data as fast as possible from A to B. This phenomenon could also be interesting when looking at developments around 5G, which is actually much more than just a radio access technology. These activities are certainly worth monitoring, as they basically contribute to making networked media resources accessible or even programmable. In this context, I would like to refer the interested reader to the December '17 theme of the IEEE Computer Society Computing Now, which is about Advancing Multimedia Content Distribution.


Publicly available documents from the 120th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Gwangju, Korea, January 22-26, 2018. Feel free to contact Christian Timmerer for any questions or comments.


Some of the activities reported above are considered within the Call for Papers at 23rd Packet Video Workshop (PV 2018) co-located with ACM MMSys 2018 in Amsterdam, The Netherlands. Topics of interest include (but are not limited to):

  • Adaptive media streaming, and content storage, distribution and delivery
  • Network-distributed video coding and network-based media processing
  • Next-generation/future video coding, point cloud compression
  • Audiovisual communication, surveillance and healthcare systems
  • Wireless, mobile, IoT, and embedded systems for multimedia applications
  • Future media internetworking: information-centric networking and 5G
  • Immersive media: virtual reality (VR), augmented reality (AR), 360° video and multi-sensory systems, and their streaming
  • Machine learning in media coding and streaming systems
  • Standardization: DASH, MMT, CMAF, OMAF, MiAF, WebRTC, MSE, EME, WebVR, Hybrid Media, WAVE, etc.
  • Applications: social media, game streaming, personal broadcast, healthcare, industry 4.0, education, transportation, etc.

Important dates

  • Submission deadline: March 1, 2018
  • Acceptance notification: April 9, 2018
  • Camera-ready deadline: April 19, 2018

JPEG Column: 77th JPEG Meeting in Macau, China


JPEG XS is now entering the final phases of its standard definition and will soon be available. It is important to clarify the change from the typical JPEG approach, as this is the first JPEG image compression standard not developed solely to target the best compression performance for the best perceptual quality. Instead, JPEG XS establishes a compromise between compression efficiency and low complexity. This new approach is complemented by the development of a new part of the well-established JPEG 2000, named High Throughput JPEG 2000.

With these initiatives, the JPEG committee is standardizing low-complexity and low-latency codecs that slightly sacrifice the compression performance usually sought in previous standards. This change of paradigm is justified considering the current trends in multimedia technology, with the continuous growth of devices that are highly dependent on battery life cycles, namely mobiles, tablets, and also augmented reality devices or autonomous robots. Furthermore, this standard provides support for applications like omnidirectional video capture or real-time video storage and streaming. Nowadays, networks tend to grow in available bandwidth, and the memory available in most devices has also been reaching impressive numbers. Although compression is still required to simplify the handling of large amounts of data, its performance might become secondary if kept at acceptable levels. Obviously, considering the advances in coding technology over the last 25 years, these new approaches define codecs with compression performance largely above the JPEG standard used in most devices today. Moreover, they provide enhanced capabilities like HDR support, lossless or near-lossless modes, and alpha plane coding.

At the 77th JPEG meeting, held in Macau, China, from 21 to 27 October, several activities were considered, as shortly described in the following.


  1. A call for proposals on JPEG 360 Metadata for the current JPEG family of standards has been issued.
  2. New advances on low complexity/low latency compression standards, namely JPEG XS and High Throughput JPEG 2000.
  3. Continuation of the JPEG Pleno project that will lead to a family of standards on different 3D technologies, like light fields, digital holography, and also point cloud data.
  4. New CfP for the Next-Generation Image Compression Standard.
  5. Definition of a JPEG reference software.

Moreover, a celebration of the 25th JPEG anniversary took place, at which early JPEG committee members from Asia were honoured with awards.

The different activities are described in the following paragraphs.

 

JPEG Privacy and Security

JPEG Privacy & Security is a work item (ISO/IEC 19566-4) aiming at developing a standard that provides technical solutions which can ensure privacy, maintain data integrity, and protect intellectual property rights (IPR). A Call for Proposals was published in April 2017, and based on a descriptive analysis of the submitted solutions for supporting protection and authenticity features in JPEG files, a working draft of JPEG Privacy & Security in the context of JPEG Systems standardization was produced during the 77th JPEG meeting in Macau, China. To collect further comments from the stakeholders in this field, an open online meeting on JPEG Privacy & Security will be conducted before the 78th JPEG meeting in Rio de Janeiro, Brazil, on Jan. 27-Feb 2, 2018. The JPEG Committee invites interested parties to the meeting. Details will be announced in the JPEG Privacy & Security AhG email reflector.

 

JPEG 360 Metadata

The JPEG Committee has issued a “Draft Call for Proposals (CfP) on JPEG 360 Metadata” at the 77th JPEG meeting in Macau, China. The JPEG Committee notes the increasing use of multi-sensor images from multiple image sensor devices, such as 360-degree capturing cameras or dual-camera smartphones available to consumers. Images from these cameras are shown on computers, smartphones, and Head Mounted Displays (HMDs). JPEG standards are commonly used for image compression and as a file format to store and share such content. However, because existing JPEG standards do not fully cover all new uses, incompatibilities have reduced the interoperability of 360 images, and thus the widespread ubiquity which consumers have come to expect when using JPEG-based images. Additionally, new modalities for interaction with images, such as computer-based augmentation, face-tagging, and object classification, require support for metadata that was not part of the scope of the original JPEG. To avoid fragmentation in the market and to ensure interoperability, a standard way of interacting with multi-sensor images with richer metadata is desired in JPEG standards. This CfP invites all interested parties, including manufacturers, vendors, and users of such devices, to submit technology proposals for enabling interactions with multi-sensor images and metadata that fulfill the scope, objectives, and requirements.

 

High Throughput JPEG 2000

The JPEG Committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K).

Since the release of an initial Call for Proposals (CfP) at the outcome of its 76th meeting, the JPEG Committee has completed the software test bench that will be used to evaluate technology submissions, and has reviewed initial registrations of intent. Final technology submissions are due on 1 March 2018.

The HTJ2K activity aims to develop an alternate block-coding algorithm that can be used in place of the existing block coding algorithm specified in ISO/IEC 15444-1 (JPEG 2000 Part 1). The objective is to significantly increase the throughput of JPEG 2000, at the expense of a small reduction in coding efficiency, while allowing mathematically lossless transcoding to and from codestreams using the existing block coding algorithm.

 

JPEG XS

This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After four rounds of Core Experiments, the Core Coding System has now been finalized and the ballot process has been initiated.

Additional parts of the Standard are still being specified, in particular future profiles, as well as transport and container formats. The JPEG Committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

 

JPEG Pleno

This standardization effort is targeting the generation of a multimodal framework for the exchange of light field, point cloud, depth+texture, and holographic data in end-to-end application chains. Currently, the JPEG Committee is defining the coding framework of the light field modality, for which the signalling syntax will be specified in Part 2 of the JPEG Pleno standard. In parallel, JPEG is reaching out to companies and research institutes that are active in the point cloud and holography arenas and invites them to contribute to the standardization effort. JPEG is seeking additional input, both in terms of test data and quality assessment methodologies for these specific image modalities, as well as technology that supports their generation, reconstruction, and/or rendering.

 

JPEG XL

The JPEG Committee has launched a Next-Generation Image Compression Standardization activity, also referred to as JPEG XL. This activity aims to develop a standard for image compression that offers substantially better compression efficiency than existing image formats (e.g. >60% over JPEG-1), along with features desirable for web distribution and efficient compression of high-quality images.

The JPEG Committee intends to issue a final Call for Proposals (CfP) following its 78th meeting (January 2018), with the objective of seeking technologies that fulfill the objectives and scope of the Next-Generation Image Compression Standardization activity.

A draft Call for Proposals, with all related information, has been issued and can be found on the JPEG website. Comments are welcome and should be submitted as specified in the document.

To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector. You will receive information to confirm your subscription, and upon acceptance by the moderator you will be included in the mailing list.

 

JPEG Reference Software

Along with its celebration of the 25th anniversary of the commonly known JPEG still image compression specification, the JPEG Committee has launched an activity to fill a long-known gap in this important image coding standard, namely the definition of a JPEG reference software. At its 77th meeting, the JPEG Committee collected submissions for a reference software, which were evaluated for suitability, and has now started the standardization process of such software on the basis of the submissions received.



JPEG 25th anniversary of the first JPEG standard

The JPEG Committee held a 25th anniversary celebration of its first standard in Macau, specifically organized to honour past committee members from Asia, and was proud to award Takao Omachi for his contributions to the first JPEG standard, Fumitaka Ono for his long-lasting contributions to the JBIG and JPEG standards, and Daniel Lee for his contributions to the JPEG family of standards and long-lasting service as Convenor of the JPEG Committee. The celebrations of the anniversary of this successful standard, which is still growing in use after 25 years, will have a third and final event during the 79th JPEG meeting planned in La Jolla, CA, USA.


 

Final Quote

“JPEG is committed to the design of specifications that ensure privacy and other security and protection solutions across the entire JPEG family of standards,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 77th meeting was held on October 23-27, 2017, in Macau, China. The next, 78th JPEG meeting will be held on January 27 to February 2, 2018, in Rio de Janeiro, Brazil.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list at https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No 78, Rio de Janeiro, Brazil, January 27 to February 2, 2018
  • No 79, La Jolla (San Diego), CA, USA, April 9 to 15, 2018
  • No 80, Berlin, Germany, July 7 to 13, 2018

JPEG Column: 76th JPEG Meeting in Turin, Italy

The 76th JPEG meeting was held at Politecnico di Torino, Turin, Italy, from 15 to 21 July 2017. The ongoing standardisation activities were complemented by the celebration of the 25th anniversary of the first JPEG standard. In parallel, JPEG pursues the development of different standardised solutions to meet current challenges in imaging technology, namely emerging new applications and low-complexity image coding. The 76th JPEG meeting featured mainly the following highlights:

  • JPEG 25th anniversary of the first JPEG standard
  • High Throughput JPEG 2000
  • JPEG Pleno
  • JPEG XL
  • JPEG XS
  • JPEG Reference Software

In the following, an overview of the main JPEG activities at the 76th meeting is given.

JPEG 25th anniversary of the first JPEG standard – JPEG is proud to celebrate the 25th anniversary of its first standard. This very successful standard won an Emmy award in 1995-96 and its usage is still rising, reaching in 2015 an impressive rate of over 3 billion images exchanged daily on just a few social networks. During the celebration, a number of early members of the committee were awarded for their contributions to this standard, namely Alain Léger, Birger Niss, Jorgen Vaaben and István Sebestyén. Richard Clark was also rewarded for his long-lasting contribution as JPEG Webmaster and his contributions to many JPEG standards. The celebration will continue at the next, 77th JPEG meeting, to be held in Macau, China from 23 to 27 October 2017.


High Throughput JPEG 2000 – The JPEG committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). In a significant milestone, the JPEG Committee has released a Call for Proposals that invites technical contributions to the HTJ2K activity. The deadline for an expression of interest is 1 October 2017, as detailed in the Call for Proposals, which is publicly available on the JPEG website at https://jpeg.org/jpeg2000/htj2k.html.

The objective of the HTJ2K activity is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the block coding defined in JPEG 2000 Part-1. Based on existing evidence, it is believed that significant increases in encoding and decoding throughput are possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content libraries. To ensure this, the alternate block coding algorithm supports mathematically lossless transcoding between HTJ2K and JPEG 2000 Part-1 codestreams at the code-block level.

JPEG Pleno – The JPEG committee intends to provide a standard framework to facilitate the capture, representation and exchange of omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. JPEG Pleno aims at defining new tools for improved compression while providing advanced functionalities at the system level. Moreover, it aims to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights, as well as other security mechanisms. At the 76th JPEG meeting in Turin, Italy, responses to the call for proposals for JPEG Pleno light field image coding were evaluated using subjective and objective evaluation metrics, and a Generic JPEG Pleno Light Field Architecture was created. The JPEG committee defined three initial core experiments to be performed before the 77th JPEG meeting in Macau, China. Interested parties are invited to join these core experiments and JPEG Pleno standardization.

JPEG XL – The JPEG Committee is working on a new activity, known as Next Generation Image Format, which aims to develop an image compression format that demonstrates higher compression efficiency, at equivalent subjective quality, than currently available formats, and that supports features for both low-end and high-end use cases. On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery. A draft Call for Proposals (CfP) on JPEG XL has been issued for public comment, and is available on the JPEG website.

JPEG XS – This project aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created. Several rounds of core experiments have allowed further improvement of the Core Coding System, the latest of which was reviewed during this 76th JPEG meeting in Torino. More core experiments are under way, including subjective assessments. The JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

JPEG Reference Software – Together with the celebration of the 25th anniversary of the first JPEG standard, the committee continued its important activities around the omnipresent JPEG image format. While all newer JPEG standards define a reference software guiding users in interpreting and implementing a given standard, no such reference exists for the most popular image format of the Internet age. The JPEG committee therefore issued a call for proposals (https://jpeg.org/items/20170728_cfp_jpeg_reference_software.html) asking interested parties to participate in the submission and selection of valuable and stable implementations of JPEG (formally, Rec. ITU-T T.81 | ISO/IEC 10918-1).
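The baseline process that such a reference software must pin down is built around an 8×8 forward DCT followed by quantization. As a purely illustrative, stdlib-only sketch (not the reference software sought by the call), the two steps can be written as follows, using the example luminance quantization table from Annex K of Rec. ITU-T T.81:

```python
import math

# Example luminance quantization table from Annex K of Rec. ITU-T T.81.
QUANT = [
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
]

def dct_8x8(block):
    """2-D DCT-II of an 8x8 block of level-shifted samples (-128..127)."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

def quantize(coeffs):
    """Divide each DCT coefficient by its table entry and round."""
    return [[round(coeffs[u][v] / QUANT[u][v]) for v in range(8)]
            for u in range(8)]

# A uniform block (level-shifted value 100) puts all its energy into the
# DC coefficient: the DCT yields 8 * 100 = 800, quantized by 16 to 50.
flat = [[100] * 8 for _ in range(8)]
print(quantize(dct_8x8(flat))[0][0])  # 50
```

This is exactly the kind of behaviour, down to rounding conventions, that competing submissions to the call must agree on for a stable reference implementation.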

 

Final Quote

“The experience shared by developers of the first JPEG standard during the celebration was an inspiring moment that will guide us in furthering the ongoing development of standards responding to new challenges in imaging applications,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 76th meeting was held on July 15-21, 2017, in Torino, Italy. The next, 77th JPEG meeting will be held on October 23-27, 2017, in Macau, China.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list at https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 77, Macau, CN, 23 – 27 October 2017