JPEG Column: 76th JPEG Meeting in Turin, Italy

The 76th JPEG meeting was held at Politecnico di Torino, Turin, Italy, from 15 to 21 July 2017. The ongoing standardisation activities were complemented by a celebration of the 25th anniversary of the first JPEG standard. Simultaneously, JPEG pursues the development of different standardised solutions to meet current challenges in imaging technology, namely emerging new applications and low-complexity image coding. The 76th JPEG meeting featured mainly the following highlights:

  • JPEG 25th anniversary of the first JPEG standard
  • High Throughput JPEG 2000
  • JPEG Pleno
  • JPEG XL
  • JPEG XS
  • JPEG Reference Software

In the following an overview of the main JPEG activities at the 76th meeting is given.

JPEG 25th anniversary of the first JPEG standard – JPEG is proud to celebrate the 25th anniversary of its first standard. This very successful standard won an Emmy award in 1995-96 and its usage is still rising, reaching in 2015 an impressive rate of over 3 billion images exchanged daily on just a few social networks. During the celebration, a number of early members of the committee were awarded for their contributions to this standard, namely Alain Léger, Birger Niss, Jorgen Vaaben and István Sebestyén. Richard Clark was also recognised during the same ceremony for his long-lasting contribution as JPEG webmaster and his contributions to many JPEG standards. The celebration will continue at the next, 77th JPEG meeting that will be held in Macau, China from 23 to 27 October 2017.


High Throughput JPEG 2000 – The JPEG committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). In a significant milestone, the JPEG Committee has released a Call for Proposals that invites technical contributions to the HTJ2K activity. The deadline for an expression of interest is 1 October 2017, as detailed in the Call for Proposals, which is publicly available on the JPEG website at https://jpeg.org/jpeg2000/htj2k.html.

The objective of the HTJ2K activity is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the block coding defined in JPEG 2000 Part-1. Based on existing evidence, it is believed that significant increases in encoding and decoding throughput are possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content libraries. To ensure this, the alternate block coding algorithm supports mathematically lossless transcoding between HTJ2K and JPEG 2000 Part-1 codestreams at the code-block level.
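
The throughput motivation is easy to reproduce on any workstation: the JPEG 2000 Part-1 block coder is substantially slower in software than, for example, baseline JPEG encoding. Below is a minimal benchmark sketch, assuming a Pillow build with OpenJPEG support; the synthetic frame and parameters are illustrative only, and this does not implement the HTJ2K block coder itself.

```python
# Minimal encode-throughput sketch: JPEG vs. JPEG 2000 with Pillow.
# Assumes a Pillow build with OpenJPEG support; numbers are illustrative
# only, and this does NOT implement the HTJ2K block coder itself.
import io
import time

import numpy as np
from PIL import Image

def encode_throughput(img, fmt, runs=5, **save_kwargs):
    """Average encode throughput in megapixels per second."""
    start = time.perf_counter()
    for _ in range(runs):
        buf = io.BytesIO()
        img.save(buf, format=fmt, **save_kwargs)
    elapsed = time.perf_counter() - start
    return runs * img.width * img.height / 1e6 / elapsed

# Synthetic 1080p RGB frame; substitute real content for fair numbers.
frame = Image.fromarray(
    np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8))

print(f"JPEG     : {encode_throughput(frame, 'JPEG', quality=90):6.1f} MP/s")
print(f"JPEG 2000: {encode_throughput(frame, 'JPEG2000'):6.1f} MP/s")
```

The absolute numbers depend heavily on the build and content; the point is only the order-of-magnitude gap that HTJ2K aims to close while retaining the JPEG 2000 feature set.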

JPEG Pleno – The JPEG committee intends to provide a standard framework to facilitate capture, representation and exchange of omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. JPEG Pleno aims at defining new tools for improved compression while providing advanced functionalities at the system level. Moreover, it aims to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights as well as other security mechanisms. At the 76th JPEG meeting in Turin, Italy, responses to the call for proposals for JPEG Pleno light field image coding were evaluated using subjective and objective evaluation metrics, and a Generic JPEG Pleno Light Field Architecture was created. The JPEG committee defined three initial core experiments to be performed before the 77th JPEG meeting in Macau, China. Interested parties are invited to join these core experiments and JPEG Pleno standardization.

JPEG XL – The JPEG Committee is working on a new activity, known as Next generation Image Format, which aims to develop an image compression format that demonstrates higher compression efficiency than currently available formats at equivalent subjective quality, and that supports features for both low-end and high-end use cases. On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery. A draft Call for Proposals (CfP) on JPEG XL has been issued for public comment, and is available on the JPEG website.

JPEG XS – This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created. Several rounds of Core Experiments have allowed further improvement of the Core Coding System, the last round being reviewed during this 76th JPEG meeting in Turin. More core experiments are on their way, including subjective assessments. The JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

JPEG Reference Software – Together with the celebration of the 25th anniversary of the first JPEG standard, the committee continued its important activities around the omnipresent JPEG image format. While all newer JPEG standards define reference software guiding users in interpreting and helping them in implementing a given standard, no such reference exists for the most popular image format of the Internet age. The JPEG committee therefore issued a call for proposals (https://jpeg.org/items/20170728_cfp_jpeg_reference_software.html) asking interested parties to participate in the submission and selection of valuable and stable implementations of JPEG (formally, Rec. ITU-T T.81 | ISO/IEC 10918-1).


Final Quote

“The experience shared by developers of the first JPEG standard during the celebration was an inspiring moment that will guide us to further the ongoing developments of standards responding to new challenges in imaging applications,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 76th meeting was held on July 15-21, 2017, in Turin, Italy. The next, 77th JPEG meeting will be held on October 23-27, 2017, in Macau, China.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 77, Macau, CN, 23 – 27 October 2017


MPEG Column: 119th MPEG Meeting in Turin, Italy

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Evidence of New Developments in Video Compression Coding
  • Call for Evidence on Transcoding for Network Distributed Video Coding
  • 2nd Edition of Storage of Sample Variants reaches Committee Draft
  • New Technical Report on Signalling, Backward Compatibility and Display Adaptation for HDR/WCG Video Coding
  • Draft Requirements for Hybrid Natural/Synthetic Scene Data Container

Evidence of New Developments in Video Compression Coding

At the 119th MPEG meeting, responses to the previously issued call for evidence were evaluated, and all of them successfully demonstrated evidence of improved compression capability. The call requested responses for use cases of video coding technology in three categories:

  • standard dynamic range (SDR) — two responses;
  • high dynamic range (HDR) — two responses; and
  • 360° omnidirectional video — four responses.

The evaluation of the responses included subjective testing and an assessment of the performance of the “Joint Exploration Model” (JEM). The results indicate significant gains over HEVC: for a considerable number of SDR and HDR test cases, comparable subjective quality was achieved at 40-50% less bit rate, with some positive outliers (i.e., even higher bit rate savings). Thus, the MPEG-VCEG Joint Video Exploration Team (JVET) concluded that evidence exists of compression technology that may significantly outperform HEVC after further development to establish a new standard. As a next step, the plan is to issue a call for proposals at the 120th MPEG meeting (October 2017), with responses expected to be evaluated at the 122nd MPEG meeting (April 2018).
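
Bit rate savings of this kind are commonly summarized, for objective metrics, with the Bjøntegaard delta rate (BD-rate), which fits rate-distortion curves and integrates the rate difference over the overlapping quality range. A minimal sketch of the standard cubic-fit computation follows; the rate/PSNR points are made up purely for illustration.

```python
# Sketch of the Bjøntegaard delta rate (BD-rate) computation commonly used
# to report average bit rate savings at equal quality: cubic polynomial fit
# in the (PSNR, log-rate) plane, integrated over the overlapping PSNR range.
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average % bit rate difference of 'test' vs 'ref' (negative = savings)."""
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))   # overlapping quality interval
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_ref = (np.polyval(int_ref, hi) - np.polyval(int_ref, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    return (np.exp(avg_test - avg_ref) - 1) * 100

# Illustrative (made-up) rate/PSNR points for an HEVC anchor and a test codec.
hevc_rate, hevc_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 39.0, 41.5]
test_rate, test_psnr = [600, 1200, 2400, 4800], [34.1, 36.6, 39.1, 41.6]
print(f"BD-rate: {bd_rate(hevc_rate, hevc_psnr, test_rate, test_psnr):.1f}%")
```

For subjective scores, as used in the JVET evaluation, the same integration can be applied to MOS values instead of PSNR.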

We already witness an increasing number of research articles addressing video coding technologies with capabilities beyond HEVC, and this trend will continue. The main driving force is over-the-top (OTT) delivery, which calls for more efficient bandwidth utilization. However, competition is also increasing with the emergence of AOMedia's AV1, and we may observe a growing number of articles in that direction, including evaluations thereof. An interesting aspect is that the number of use cases is also increasing (e.g., see the different categories above), which adds further challenges to the “complex video problem”.

Call for Evidence on Transcoding for Network Distributed Video Coding

The call for evidence on transcoding for network distributed video coding targets interested parties possessing technology that can transcode video at lower computational complexity than a full re-encode. The primary application is adaptive bitrate streaming, where the highest-bitrate stream is transcoded into lower-bitrate streams. It is expected that responses may use “side streams” (or side information; some may call it metadata) accompanying the highest-bitrate stream to assist in the transcoding process. MPEG expects submissions for the 120th MPEG meeting, where compression efficiency and computational complexity will be assessed.
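
For context, the baseline that such assisted transcoding competes against is a full re-encode of the mezzanine into each rung of the bitrate ladder. A minimal sketch of that baseline follows, assuming ffmpeg with libx264 is on the PATH; the ladder (resolutions and bitrates) is an arbitrary example.

```python
# Baseline "full re-encode" ABR ladder: the reference point the call for
# evidence wants to improve upon. Assumes ffmpeg with libx264 on PATH;
# the ladder below is an arbitrary example, not a recommendation.
import subprocess

LADDER = [  # (height, video bitrate)
    (1080, "6000k"),
    (720,  "3000k"),
    (480,  "1500k"),
    (360,  "800k"),
]

def transcode_ladder(mezzanine="master_1080p.mp4"):
    for height, bitrate in LADDER:
        cmd = [
            "ffmpeg", "-y", "-i", mezzanine,
            "-vf", f"scale=-2:{height}",      # keep aspect ratio, even width
            "-c:v", "libx264", "-b:v", bitrate,
            "-c:a", "aac", "-b:a", "128k",
            f"out_{height}p.mp4",
        ]
        subprocess.run(cmd, check=True)       # each rung is a full re-encode

if __name__ == "__main__":
    transcode_ladder()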

Transcoding has been discussed for a long time already, and I can certainly recommend this article from 2005 published in the Proceedings of the IEEE. The question is: what is different now, 12 years later, and what metadata (or side streams/information) is required for interoperability among different vendors (if any)?

A Brief Overview of Remaining Topics…

  • The 2nd edition of storage of sample variants reaches Committee Draft and expands its usage to the MPEG-2 transport stream, whereas the first edition primarily focused on the ISO base media file format.
  • The new technical report for high dynamic range (HDR) and wide colour gamut (WCG) video coding comprises a survey of various signaling mechanisms including backward compatibility and display adaptation.
  • MPEG issues draft requirements for a scene representation media container enabling the interchange of content for authoring and rendering rich immersive experiences, which is currently referred to as the hybrid natural/synthetic scene (HNSS) data container.

Other MPEG (Systems) Activities at the 119th Meeting

DASH is now fully in maintenance mode, as only minor enhancements/corrections have been discussed, including contributions to conformance and reference software. The omnidirectional media format (OMAF) is certainly the hottest topic within MPEG Systems; it is currently between two stages (i.e., between DIS and FDIS) and, thus, a study of DIS has been approved, and national bodies are kindly requested to take this into account when casting their votes (incl. comments). The study of DIS comprises format definitions with respect to the coding and storage of omnidirectional media including audio and video (aka 360°). The common media application format (CMAF) was ratified at the last meeting and awaits publication by ISO. In the meantime, the CMAF activity is focusing on conformance and reference software as well as amendments regarding various media profiles. Finally, requirements for a multi-image application format (MiAF) have been available since the last meeting, and at the 119th MPEG meeting a working draft was approved. MiAF will be based on HEIF and the goal is to define additional constraints to simplify its file format options.

We have successfully demonstrated live 360° adaptive streaming as described here, but we expect various improvements from standards available and under development within MPEG. Research aspects in these areas are certainly interesting with respect to performance gains and evaluations of bandwidth efficiency in open networks, as well as how these standardization efforts could be used to enable new use cases.

Publicly available documents from the 119th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Macau, China, October 23-27, 2017. Feel free to contact me for any questions or comments.

JPEG Column: 75th JPEG Meeting in Sydney, Australia


The 75th JPEG meeting was held at National Standards Australia in Sydney, Australia, from 26 to 31 March. Multiple activities ensued, pursuing the development of new standards that meet current requirements and challenges in imaging technology. JPEG continuously tries to provide new reliable solutions for different image applications. The 75th JPEG meeting featured mainly the following highlights:

  • JPEG issues a Call for Proposals on Privacy & Security;
  • A new draft Call for Proposals for Part 15 of the JPEG 2000 standard on High Throughput coding;
  • JPEG Pleno defines methodologies for proposals evaluation;
  • A test model for the upcoming JPEG XS standard was created;
  • A new standardisation effort on Next generation Image Formats was initiated.

In the following an overview of the main JPEG activities at the 75th meeting is given.

JPEG Privacy & Security – JPEG Privacy & Security is a work item (ISO/IEC 19566-4) aiming at developing a standard that provides technical solutions for ensuring privacy, maintaining data integrity, and protecting intellectual property rights (IPR). JPEG Privacy & Security is exploring ways to design and implement the necessary features without significantly impacting coding performance while ensuring scalability, interoperability, and forward & backward compatibility with current JPEG standard frameworks.
Since the JPEG committee intends to interact closely with actors in this domain, public workshops on JPEG Privacy & Security were organised at previous JPEG meetings. The first workshop was organized on October 13, 2015 during the JPEG meeting in Brussels, Belgium. The second workshop was organized on February 23, 2016 during the JPEG meeting in La Jolla, CA, USA. Following the great success of these workshops, a third and final workshop was organized on October 18, 2016 during the JPEG meeting in Chengdu, China. These workshops targeted understanding industry, user, and policy needs in terms of technology and supported functionalities. The proceedings of these workshops are published on the Privacy and Security page of the JPEG website at www.jpeg.org under the Systems section.
The JPEG Committee released a Call for Proposals that invites contributions on adding new capabilities for protection and authenticity features for the JPEG family of standards. Interested parties and content providers are encouraged to participate in this standardization activity and submit proposals. The deadline for an expression of interest and submissions of proposals has been set to October 6th, 2017, as detailed in the Call for Proposals. The Call for Proposals on JPEG Privacy & Security is publicly available on the JPEG website, https://jpeg.org/jpegsystems/privacy_security.html.

High Throughput JPEG 2000 – The JPEG committee is working towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). The goal of this project is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the algorithm defined in JPEG 2000 Part-1. Based on existing evidence, it is believed that large increases in encoding and decoding throughput (e.g., 10X or beyond) should be possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content repositories. In order to ensure this, the alternate block coding algorithm that will be the subject of this new Part of the standard should support mathematically lossless transcoding between HTJ2K and JPEG 2000 Part-1 codestreams at the code-block level. A draft Call for Proposals (CfP) on HTJ2K has been issued for public comment, and is available on the JPEG website.

JPEG Pleno – The responses to the JPEG Pleno Call for Proposals on Light Field Coding will be evaluated at the July JPEG meeting in Turin. During the 75th JPEG meeting, the quality assessment procedure for this highly challenging type of large-volume data was defined. In addition to light fields, JPEG Pleno is also addressing point cloud and holographic data. Currently, the committee is undertaking in-depth studies to prepare standardization efforts on coding technologies for these image data types, encompassing the collection of use cases and requirements, but also investigations towards accurate and appropriate quality assessment procedures for the associated representation and coding technologies. The JPEG committee is probing for input from the involved industrial and academic communities.

JPEG XS – This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created, and results of core experiments were reviewed during the 75th JPEG meeting in Sydney. More core experiments are on their way to further improve the final standard; the JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process.

Next generation Image Formats – The JPEG Committee is exploring a new activity, which aims to develop an image compression format that demonstrates higher compression efficiency than currently available formats at equivalent subjective quality, and that supports features for both low-end and high-end use cases. On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery.

Final Quote

“JPEG is committed to accommodating reliable and flexible security tools for JPEG file formats without compromising legacy usage of our standards,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 75th meeting was held on March 26-31, 2017, in Sydney, Australia. The next, 76th JPEG meeting will be held on July 15-21, 2017, in Torino, Italy.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro (pinheiro@ubi.pt) or Frederik Temmermans (ftemmerm@etrovub.be) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 76, Torino, IT, 17 – 21 July, 2017
  • No. 77, Macau, CN, 23 – 27 October 2017

MPEG Column: 118th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The entire MPEG press release can be found here comprising the following topics:

  • Coded Representation of Immersive Media (MPEG-I): new work item approved and call for test data issued
  • Common Media Application Format (CMAF): FDIS approved
  • Beyond High Efficiency Video Coding (HEVC): call for evidence for “beyond HEVC” and verification tests for screen content coding extensions of HEVC

Coded Representation of Immersive Media (MPEG-I)

MPEG started to work on the new work item referred to as ISO/IEC 23090 with the “nickname” MPEG-I, targeting future immersive applications. The goal of this new standard is to enable various forms of audio-visual immersion, including panoramic video with 2D and 3D audio, with various degrees of true 3D visual perception. It currently comprises five parts: (pt. 1) a technical report describing the scope of this new standard and a set of use cases and applications; (pt. 2) an application format for omnidirectional media (aka OMAF) to address the urgent need of the industry for a standard in this area; (pt. 3) immersive video, which is a kind of placeholder for the successor of HEVC (if at all); (pt. 4) immersive audio, as a placeholder for the successor of 3D audio (if at all); and (pt. 5) point cloud compression. The point cloud compression standard targets lossy compression for point clouds in real-time communication, six Degrees of Freedom (6DoF) virtual reality, and dynamic mapping for autonomous driving, cultural heritage applications, etc. Part 2 is related to OMAF, which I’ve discussed in my previous blog post.

MPEG also established an Ad-hoc Group (AhG) on immersive media quality evaluation with the following mandates: 1. Produce a document on VR QoE requirements; 2. Collect test material with immersive video and audio signals; 3. Study existing methods to assess human perception of and reaction to VR stimuli; 4. Develop a test methodology for immersive media, including simultaneous video and audio; 5. Study VR experience metrics and their measurability in VR services and devices. AhGs are open to everybody and are mostly run over mailing lists (join here: https://lists.aau.at/mailman/listinfo/immersive-quality). Interestingly, a Joint Qualinet-VQEG team on Immersive Media (JQVIM) has recently been established with similar goals, and the VR Industry Forum (VRIF) has also issued a call for VR360 content. It seems there’s a strong need for a dataset similar to the one we created for MPEG-DASH a long time ago.

The JQVIM has been created as part of the QUALINET task force on “Immersive Media Experiences (IMEx)”, which aims at providing end users the sensation of being part of the particular media, resulting in a worthwhile, informative user and quality of experience. The main goals are providing datasets and tools (hardware/software), subjective quality evaluations, field studies, and cross-validation, including a strong theoretical foundation alongside the empirical databases and tools, which hopefully results in a framework, methodology, and best practices for immersive media experiences.

Common Media Application Format (CMAF)

The Final Draft International Standard (FDIS) was issued at the 118th MPEG meeting, which concludes the formal technical development process of the standard. At this point in time, national bodies can only vote Yes|No, and editorial changes are allowed (if any) before the International Standard (IS) becomes available. The goal of CMAF is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption — it is derived from the ISO Base Media File Format (ISOBMFF). As it is a combination of various MPEG standards, it is referred to as an Application Format (AF), which mainly takes existing formats/standards and glues them together for a specific target application. The CMAF standard clearly targets dynamic adaptive streaming (over — but not limited to — HTTP) while focusing on the media format only and excluding the manifest format. Thus, the CMAF standard shall be compatible with other formats such as MPEG-DASH and HLS. In fact, HLS was extended some time ago to support ‘fragmented MP4’, which we have also demonstrated, and this has been interpreted as a first step towards the harmonization of MPEG-DASH and HLS, at least at the segment format level. The delivery of CMAF content with DASH will be described in part 7 of MPEG-DASH, which basically comprises a mapping of CMAF concepts to DASH terms.
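
Since CMAF media is ISOBMFF-based, the top-level structure of a CMAF/fMP4 segment can be inspected with a few lines of code: every box starts with a 32-bit big-endian size and a four-character type, where size == 1 signals a 64-bit largesize and size == 0 signals a box that runs to the end of the file (ISO/IEC 14496-12). A minimal sketch:

```python
# Minimal ISOBMFF top-level box walker, useful for peeking at CMAF/fMP4
# segments. Per ISO/IEC 14496-12: 32-bit big-endian size + 4-char type;
# size==1 means a 64-bit "largesize" follows; size==0 means the box
# extends to the end of the file.
import struct
import sys

def walk_boxes(path):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:  # 64-bit largesize follows
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            print(f"{box_type.decode('ascii', 'replace')}  {size} bytes")
            if size == 0:  # box runs to end of file
                break
            f.seek(size - header_len, 1)  # skip payload to the next box

if __name__ == "__main__":
    walk_boxes(sys.argv[1])  # e.g., a CMAF segment: styp, moof, mdat, ...
```

Running this on a CMAF segment typically prints a styp/moof/mdat sequence, mirroring the fragmented structure that both DASH and HLS can consume.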

From a research perspective, it would be interesting to explore how certain CMAF concepts are able to address current industry needs, specifically in the context of low-latency streaming which has been demonstrated recently.

Beyond HEVC…

The preliminary call for evidence (CfE) on video compression with capability beyond HEVC has been issued and is addressed to interested parties that have technology providing better compression capability than the existing standard, either for conventional video material or for other domains such as HDR/WCG or 360-degree (“VR”) video. Test cases are defined for SDR, HDR, and 360-degree content. This call has been made jointly by ISO/IEC MPEG and ITU-T SG16/Q6 (VCEG). The evaluation of the responses is scheduled for July 2017 and, depending on the outcome of the CfE, the parent bodies of the Joint Video Exploration Team (JVET) of the MPEG and VCEG collaboration intend to issue a Draft Call for Proposals by the end of the July meeting.

Finally, verification tests have been conducted for the Screen Content Coding (SCC) extensions to HEVC showing exceptional performance. Screen content is video containing a significant proportion of rendered (moving or static) graphics, text, or animation rather than, or in addition to, camera-captured video scenes. For scenes containing a substantial amount of text and graphics, the tests showed a major benefit in compression capability for the new extensions over both the Advanced Video Coding standard and the previous version of the newer HEVC standard without the new SCC features.

The question of whether and how new codecs like (beyond) HEVC compete with AV1 is subject to research and development. It has also been discussed in the scientific literature, but vendor-neutral comparisons are lacking, as they are difficult to achieve without comparing apples with oranges (due to the high number of different coding tools and parameters). An important aspect to keep in mind is that one typically compares specific implementations of a coding format and not the standard itself, as the encoding is usually not defined; only the bitstream syntax, which implicitly defines the decoder, is.

Publicly available documents from the 118th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Torino, Italy, July 17-21, 2017. Feel free to contact us for any questions or comments.

Standards Column: JPEG and MPEG

Introduction

The ISO/IEC JTC 1/SC 29 area of work comprises the standardization of coded representation of audio, picture, multimedia and hypermedia information, and sets of compression and control functions for use with such information. SC29 basically hosts two working groups responsible for “the development of international standards for the compression, decompression, processing, and coded representation of media content, in order to satisfy a wide variety of applications”: specifically, WG1 targeting “digital still pictures” — also known as JPEG — and WG11 targeting “moving pictures, audio, and their combination” — also known as MPEG. The earlier SC29 standards, namely JPEG, MPEG-1 and MPEG-2, received the Technology & Engineering Emmy award in 1995-96.

The standards columns within ACM SIGMM Records provide timely updates about the most recent developments within JPEG and MPEG respectively. The JPEG column is edited by Antonio Pinheiro and the MPEG column is edited by Christian Timmerer. The editors and an overview of recent JPEG and MPEG achievements as well as future plans are highlighted in this article.

Antonio Pinheiro received the BSc (Licenciatura) from I.S.T., Lisbon in 1988 and the PhD in Electronic Systems Engineering from the University of Essex in 2002. He has been a lecturer at U.B.I. (Universidade da Beira Interior), Covilha, Portugal since 1988 and is a researcher at I.T. (Instituto de Telecomunicações), Portugal. Currently, his research interests are in Image Processing, namely Multimedia Quality Evaluation and Medical Image Analysis. He was a Portuguese representative of the European Union Actions COST IC1003 – QUALINET, COST IC1206 – DE-ID, and COST 292, and is currently one for COST BM1304 – MYO-MRI. He is currently involved in the project EmergIMG, funded by the Portuguese funding agency and H2020, and he is a Portuguese delegate to JPEG, where he is currently the Communication Subgroup chair and involved with the JPEG Pleno project.


Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments) both from the Alpen-Adria-Universität (AAU) Klagenfurt. He joined the AAU in 1999 (as a system administrator) and is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communications, streaming, adaptation, Quality of Experience, and Sensory Experience. He was the general chair of WIAMIS 2008, QoMEX 2013, and MMSys 2016 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH where he also served as standard editor. In 2012 he cofounded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH where he holds the position of the Chief Innovation Officer (CIO).

Major JPEG and MPEG Achievements

In this section we would like to highlight major JPEG and MPEG achievements without claiming to be exhaustive.

JPEG developed the well-known digital picture coding standard, known as the JPEG image format, almost 25 years ago. Due to the recent increase in social network usage, the number of JPEG-encoded images shared online grew to an impressive 1.8 billion per day in 2014. JPEG 2000 is another successful JPEG standard that also received the 2015 Technology and Engineering Emmy award. This standard uses state-of-the-art compression technology, providing higher compression and a wider application domain. It is widely used at the professional level, namely in movie production and medical imaging. JPEG also developed the JBIG2, JPEG-LS, JPSearch and JPEG-XR standards. More recently, JPEG launched JPEG-AIC, JPEG Systems and JPEG-XT. JPEG-XT defines backward-compatible extensions of JPEG, adding support for HDR, lossless/near-lossless, and alpha coding. An overview of the JPEG family of standards is shown in the figure below.

[Figure: overview of the JPEG family of standards]
An overview of existing MPEG standards and achievements is shown in the figure below (taken from here).

[Figure: overview of existing MPEG standards]

A first major milestone and success was the development of MP3, which revolutionized digital audio content, resulting in a sustainable change of the digital media ecosystem. The same holds for MPEG-2 video & systems, where the latter, i.e., the MPEG-2 Transport Stream, received the Technology & Engineering Emmy award. The mobile era within MPEG was introduced with the MPEG-4 standard, resulting in the development of AVC (which received yet another Emmy award), AAC, and also the MP4 file format, all of which have been deployed widely. Finally, streaming over the open internet is addressed by DASH, and new forms of digital television, including ultra-high-definition & immersive services, are targeted by MPEG-H, comprising MMT, HEVC, and 3D audio.

Roadmap for Future JPEG and MPEG Standards

In this section we would like to highlight a roadmap for future JPEG and MPEG standards.

A roadmap for future JPEG standards is represented in the figure below. The main efforts are towards the JPEG Pleno project, which aims to standardize new immersive technologies like light fields, point clouds or digital holography. Moreover, JPEG is launching JPEG-XS for low-latency and lightweight coding, while JPEG Systems is also developing a new part to add privacy and security protection to its standards. Furthermore, JPEG continuously seeks new technological developments and is committed to providing new standardized image coding solutions.

[Figure: roadmap for future JPEG standards]

The future roadmap of MPEG standards is shown in the Figure below (taken from here).

[Figure: roadmap for future MPEG standards]

MPEG’s roadmap for future standards comprises a variety of tools, ranging from traditional audio-video coding to new forms of compression technologies like genome compression and light fields. The systems aspects will cover application domains that require media orchestration, as well as focus on becoming the enabler for immersive media experiences.

Conclusion

In this article we briefly highlighted achievements and future plans of JPEG and MPEG, but the future is not set and requires participation from both industry and academia. We hope that our JPEG and MPEG columns will stimulate research and development within the multimedia domain, and we are open to any kind of feedback. Contact Antonio Pinheiro (pinheiro@ubi.pt) or Christian Timmerer (christian.timmerer@itec.uni-klu.ac.at) for any further questions or comments.

JPEG Column: 74th JPEG Meeting

The 74th JPEG meeting was held at ITU Headquarters in Geneva, Switzerland, from 15 to 20 January, featuring the following highlights:

  • A Final Call for Proposals on JPEG Pleno was issued focusing on light field coding;
  • Creation of a test model for the upcoming JPEG XS standard;
  • A draft Call for Proposals for JPEG Privacy & Security was issued;
  • The JPEG AIC technical report on Guidelines for image coding system evaluation was finalized;
  • An AHG was created to investigate the evidence of high throughput JPEG 2000;
  • An AHG on next generation image compression standard was initiated to explore a future image coding format with superior compression efficiency.


JPEG Pleno kicks off its activities towards standardization of light field coding

At the 74th JPEG meeting in Geneva, Switzerland the final Call for Proposals (CfP) on JPEG Pleno was issued particularly focusing on light field coding. The CfP is available here.

The call encompasses coding technologies for lenslet light field cameras, and content produced by high-density arrays of cameras. In addition, system-level solutions associated with light field coding and processing technologies that have a normative impact are called for. In a later stage, calls for other modalities such as point cloud, holographic and omnidirectional data will be issued, encompassing image representations and new and rich forms of visual data beyond the traditional planar image representations.

JPEG Pleno intends to provide a standard framework to facilitate capture, representation and exchange of these omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. It aims to define new tools for improved compression while providing advanced functionalities at the system level. Moreover, it aims to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights, as well as other security mechanisms.


JPEG XS aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used for a wide range of applications, including as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals issued on March 11th, 2016 and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created during the 73rd JPEG meeting in Chengdu, and the results of a first set of core experiments were reviewed during the 74th JPEG meeting in Geneva. More core experiments are on their way before the standard is finalized; the JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process.


JPEG Privacy & Security aims at developing a standard for realizing secure image information sharing, capable of ensuring privacy, maintaining data integrity, and protecting intellectual property rights (IPR). JPEG Privacy & Security will explore how to design and implement the necessary features without significantly impacting coding performance, while ensuring scalability, interoperability, and forward and backward compatibility with current JPEG standard frameworks.

A draft Call for Proposals for JPEG Privacy & Security has been issued and the JPEG committee invites interested parties to contribute to this standardisation activity in JPEG Systems. The draft of CfP is available here.

The call addresses protection mechanisms and technologies such as handling hierarchical levels of access and multiple protection levels for metadata and image protection, checking the integrity of image data and embedded metadata, and supporting backward and forward compatibility with JPEG coding technologies. Interested parties are encouraged to subscribe to the JPEG Privacy & Security email reflector to collect more information. A final version of the JPEG Privacy & Security Call for Proposals is expected at the 75th JPEG meeting in Sydney, Australia.


JPEG AIC provides guidance and standard procedures for advanced image coding evaluation. At this meeting, JPEG completed a technical report: TR 29170-1, Guidelines for image coding system evaluation. This report is a compendium of JPEG’s best practices in evaluation that draws on several different international standards and recommendations. The report discusses the use of objective tools, subjective procedures and computational analysis techniques, and when to use the different tools. Some of the techniques are tried-and-true tools familiar to image compression experts and vision scientists. Several tools represent new fields where few tools have been available, such as the evaluation of coding systems for high dynamic range content.
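
As a trivial instance of the objective tools such guidelines cover, the peak signal-to-noise ratio (PSNR) between a reference and a decoded image can be computed in a few lines. A minimal sketch with synthetic data:

```python
# The most basic objective tool covered by such evaluation guidelines:
# peak signal-to-noise ratio (PSNR) between a reference and a decoded image.
import numpy as np

def psnr(reference, decoded, peak=255.0):
    """PSNR in dB for 8-bit images given as numpy arrays of equal shape."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: reference vs. a noisy "decoded" version.
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
dec = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dec):.2f} dB")
```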


High throughput JPEG 2000

The JPEG committee started a new activity on high throughput JPEG 2000, and an AHG was created to investigate the evidence for such a standard. Experts are invited to participate in this group and to join the mailing list.


Final Quote

“JPEG continues to offer standards that redefine imaging products and services contributing to a better society without borders.” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.


About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Tim Bruylants of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.


Future JPEG meetings are planned as follows:

  • No. 75, Sydney, AU, 26 – 31 March, 2017
  • No. 76, Torino, IT, 17 – 21 July, 2017
  • No. 77, Macau, CN, 23 – 27 October 2017


MPEG Column: 117th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The 117th MPEG meeting was held in Geneva, Switzerland and its press release highlights the following aspects:

  • MPEG issues Committee Draft of the Omnidirectional Media Application Format (OMAF)
  • MPEG-H 3D Audio Verification Test Report
  • MPEG Workshop on 5-Year Roadmap Successfully Held in Geneva
  • Call for Proposals (CfP) for Point Cloud Compression (PCC)
  • Preliminary Call for Evidence on video compression with capability beyond HEVC
  • MPEG issues Committee Draft of the Media Orchestration (MORE) Standard
  • Technical Report on HDR/WCG Video Coding

In this article, I’d like to focus on the topics related to multimedia communication starting with OMAF.

Omnidirectional Media Application Format (OMAF)

Real-time entertainment services deployed over the open, unmanaged Internet – streaming audio and video – now account for more than 70% of the evening traffic in North American fixed access networks, and it is assumed that this figure will reach 80% by 2020. More and more of these bandwidth-hungry applications and services are pushing onto the market, including immersive media services such as virtual reality and, specifically, 360-degree video. However, the lack of appropriate standards and, consequently, reduced interoperability is becoming an issue. Thus, MPEG has started a project referred to as the Omnidirectional Media Application Format (OMAF). The first milestone of this standard has been reached, and the committee draft (CD) was approved at the 117th MPEG meeting. Such application formats “are essentially superformats that combine selected technology components from MPEG (and other) standards to provide greater application interoperability, which helps satisfy users’ growing need for better-integrated multimedia solutions” [MPEG-A]. In the context of OMAF, the following aspects are defined:

  • Equirectangular projection format (note: others might be added in the future)
  • Metadata for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data
  • Storage format: ISO base media file format (ISOBMFF)
  • Codecs: High Efficiency Video Coding (HEVC) and MPEG-H 3D audio

OMAF is the first specification defined as part of a bigger project currently referred to as ISO/IEC 23090 — Immersive Media (Coded Representation of Immersive Media). It currently has the acronym MPEG-I; we previously used MPEG-VR, which is now replaced by MPEG-I (and that still might change in the future). It is expected that the standard will become Final Draft International Standard (FDIS) by Q4 of 2017. Interestingly, it does not include AVC and AAC, probably the most obvious candidates for video and audio codecs, which have been massively deployed in the last decade and probably will remain a major dominator (and also denominator) in upcoming years. On the other hand, the equirectangular projection format is currently the only one defined, as it is already broadly used in off-the-shelf hardware/software solutions for the creation of omnidirectional/360-degree videos. Finally, the metadata formats enabling the rendering of 360-degree monoscopic and stereoscopic video are highly appreciated. A solution for MPEG-DASH based on AVC/AAC utilizing the equirectangular projection format for both monoscopic and stereoscopic video is shown as part of Bitmovin’s solution for VR and 360-degree video.
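
For intuition, equirectangular projection maps longitude (yaw) and latitude (pitch) linearly onto the pixel grid. The following minimal sketch shows the direction-to-pixel lookup a renderer performs when sampling a viewport from a 360-degree frame; the frame size and viewing angles are arbitrary examples.

```python
# Equirectangular projection maps longitude (yaw) and latitude (pitch)
# linearly onto the pixel grid. Sketch of the direction -> pixel lookup a
# renderer performs when sampling a viewport from a 360-degree frame.
import math

def sphere_to_equirect(yaw, pitch, width, height):
    """yaw in [-pi, pi], pitch in [-pi/2, pi/2] -> (column, row)."""
    u = (yaw / (2.0 * math.pi) + 0.5) * width    # longitude -> column
    v = (0.5 - pitch / math.pi) * height         # latitude  -> row
    return int(u) % width, min(int(v), height - 1)

# Example: where the viewing direction 30 deg right, 10 deg up lands
# in a 3840x1920 equirectangular frame.
col, row = sphere_to_equirect(math.radians(30), math.radians(10), 3840, 1920)
print(col, row)  # -> (2240, 853)
```

Note that this mapping oversamples the poles relative to the equator, which is one motivation for the remark above that other projection formats might be added in the future.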

Research aspects related to OMAF can be summarized as follows:

  • HEVC supports tiles, which allow for efficient streaming of omnidirectional video, but HEVC is not as widely deployed as AVC. Thus, it would be interesting to explore how to mimic such a tile-based streaming approach utilizing AVC.
  • How to efficiently encode and package HEVC tile-based video is an open issue and calls for a tradeoff between tile flexibility and coding efficiency.
  • When combined with MPEG-DASH (or similar), there’s a need to update the adaptation logic, as with tiles yet another dimension is added that needs to be considered in order to provide a good Quality of Experience (QoE).
  • QoE is a big issue here and not well covered in the literature. Various aspects are worth investigating, including a comprehensive dataset to enable reproducibility of research results in this domain. Finally, as omnidirectional video allows for interactivity, the user experience is also becoming an issue that needs to be covered within the research community.

A second topic I’d like to highlight in this blog post is related to the preliminary call for evidence on video compression with capability beyond HEVC. 

Preliminary Call for Evidence on video compression with capability beyond HEVC

A call for evidence is issued to see whether sufficient technological potential exists to start a more rigorous phase of standardization. Currently, MPEG together with VCEG has developed a Joint Exploration Model (JEM) algorithm that is already known to provide bit rate reductions in the range of 20-30% for relevant test cases, as well as subjective quality benefits. The goal of this new standard — with a preliminary target date for completion around late 2020 — is to develop technology providing better compression capability than the existing standard, not only for conventional video material but also for other domains such as HDR/WCG or VR/360-degree video. An important aspect in this area is certainly over-the-top video delivery (as with MPEG-DASH), which includes features such as scalability and Quality of Experience (QoE). Scalable video coding has been added to video coding standards since MPEG-2 but never reached widespread adoption. That might change if it becomes a prime-time feature of a new video codec, as scalable video coding clearly shows benefits when doing dynamic adaptive streaming over HTTP. QoE has already found its way into video coding, at least when it comes to evaluating the results, where subjective tests are now an integral part of every new video codec developed by MPEG (in addition to the usual PSNR measurements). Therefore, the most interesting research topics from a multimedia communication point of view would be to optimize the DASH-like delivery of such new codecs with respect to scalability and QoE. Note that if you don’t like scalable video coding, feel free to propose something else as long as it reduces storage and networking costs significantly.


MPEG Workshop “Global Media Technology Standards for an Immersive Age”

On January 18, 2017, MPEG successfully held a public workshop on “Global Media Technology Standards for an Immersive Age”, hosting a series of keynotes from Bitmovin, DVB, Orange, Sky Italia, and Technicolor. Stefan Lederer, CEO of Bitmovin, discussed today’s and future challenges with new forms of content like 360°, AR and VR. All slides are available here, and MPEG took the feedback into consideration in an update of its 5-year standardization roadmap. David Wood (EBU) reported on the DVB VR study mission and Ralf Schaefer (Technicolor) presented a snapshot of VR services. Gilles Teniou (Orange) discussed video formats for VR, pointing out a new opportunity to increase the content value but also raising the question of what is missing today. Finally, Massimo Bertolotti (Sky Italia) introduced his view on the immersive media experience age.

Overall, the workshop was well attended and, as mentioned above, MPEG is currently working on a new standards project related to immersive media. Currently, this project comprises five parts. The first part comprises a technical report describing the scope (incl. kind of system architecture), use cases, and applications. The second part is OMAF (see above) and the third/fourth parts are related to immersive video and audio, respectively. Part five is about point cloud compression.

For those interested, please check out the slides from industry representatives in this field and draw your own conclusions about what could be interesting for your own research. I’m happy to see any reactions, hints, etc. in the comments.

Finally, let’s have a look what happened related to MPEG-DASH, a topic with a long history on this blog.

MPEG-DASH and CMAF: Friend or Foe?

For MPEG-DASH and CMAF it was a meeting “in between” official standardization stages. MPEG-DASH experts are still working on the third edition, which will be a consolidated version of the 2nd edition and various amendments and corrigenda. In the meantime, MPEG issued a white paper on the new features of MPEG-DASH, which I would like to highlight here.

  • Spatial Relationship Description (SRD): allows describing tiles and regions of interest for partial delivery of media presentations. This is highly related to OMAF and VR/360-degree video streaming (see the parsing sketch after this list).
  • External MPD linking: this feature allows describing the relationship between a single program/channel and a preview mosaic channel having all channels at once within the MPD.
  • Period continuity: a simple signaling mechanism to indicate whether one period is a continuation of the previous one, which is relevant for ad insertion or live programs.
  • MPD chaining: allows for chaining two or more MPDs to each other, e.g., a pre-roll ad when joining a live program.
  • Flexible segment format for broadcast TV: separates the signaling of switching points and random access points in each stream; thus, the content can be encoded with good compression efficiency, yet allowing a higher number of random access points, but with a lower frequency of switching points.
  • Server and network-assisted DASH (SAND): enables asynchronous network-to-client and network-to-network communication of quality-related assisting information.
  • DASH with server push and WebSockets: basically addresses issues related to the HTTP/2 push feature and WebSocket.
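
To make SRD concrete: it annotates adaptation sets with SupplementalProperty or EssentialProperty descriptors using the scheme urn:mpeg:dash:srd:2014, whose value lists the source id, the tile's position and size, and optionally the total grid size. A minimal parsing sketch follows; the MPD file name is hypothetical.

```python
# Sketch: extract Spatial Relationship Description (SRD) descriptors from a
# DASH MPD. SRD uses Supplemental/EssentialProperty descriptors with
# schemeIdUri "urn:mpeg:dash:srd:2014"; the value carries
# source_id,object_x,object_y,object_width,object_height[,total_w,total_h,...]
import xml.etree.ElementTree as ET

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
SRD_SCHEME = "urn:mpeg:dash:srd:2014"

def extract_srd(mpd_path):
    tree = ET.parse(mpd_path)
    for aset in tree.iter("{urn:mpeg:dash:schema:mpd:2011}AdaptationSet"):
        for tag in ("SupplementalProperty", "EssentialProperty"):
            for prop in aset.findall(f"dash:{tag}", NS):
                if prop.get("schemeIdUri") == SRD_SCHEME:
                    fields = [int(x) for x in prop.get("value").split(",")]
                    src, x, y, w, h = fields[:5]
                    print(f"tile source={src} at ({x},{y}) size {w}x{h}")

if __name__ == "__main__":
    extract_srd("tiled_360.mpd")  # hypothetical MPD of a tiled 360° stream
```

In a tiled 360-degree setup, a client can use these descriptors to fetch only the tiles covering the current viewport at high quality, which ties this feature directly to the OMAF discussion above.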

CMAF issued a study document which captures the current progress, and all national bodies are encouraged to take this into account when commenting on the Committee Draft (CD). To answer the question in the headline above, it looks more and more like DASH and CMAF will become friends — let’s hope that the friendship lasts for a long time.

What else happened at the MPEG meeting?

  • Committee Draft MORE (note: type ‘man more’ on any unix/linux/mac terminal and you’ll get ‘less – opposite of more’): MORE stands for “Media Orchestration” and provides a specification that enables the automated combination of multiple media sources (cameras, microphones) into a coherent multimedia experience. Additionally, it targets use cases where a multimedia experience is rendered on multiple devices simultaneously, again giving a consistent and coherent experience.
  • Technical Report on HDR/WCG Video Coding: This technical report comprises conversion and coding practices for High Dynamic Range (HDR) and Wide Colour Gamut (WCG) video coding (ISO/IEC 23008-14). The purpose of this document is to provide a set of publicly referenceable recommended guidelines for the operation of AVC or HEVC systems adapted for compressing HDR/WCG video for consumer distribution applications.
  • CfP Point Cloud Compression (PCC): This call solicits technologies for the coding of 3D point clouds with associated attributes such as color and material properties. It will be part of the immersive media project introduced above.
  • MPEG-H 3D Audio verification test report: This report presents results of four subjective listening tests that assessed the performance of the Low Complexity Profile of MPEG-H 3D Audio. The tests covered a range of bit rates and a range of “immersive audio” use cases (i.e., from 22.2 down to 2.0 channel presentations). Seven test sites participated in the tests with a total of 288 listeners.

The next MPEG meeting will be held in Hobart, April 3-7, 2017. Feel free to contact us for any questions or comments.

MPEG Column: 116th MPEG Meeting


Chengdu, China – The 116th MPEG meeting was held in Chengdu, China, from 17 to 21 October 2016.

MPEG Workshop on 5-Year Roadmap Successfully Held in Chengdu

At its 116th meeting, MPEG successfully organised a workshop on its 5-year standardisation roadmap. Various industry representatives presented their views and reflected on the need for standards for new services and applications, specifically in the area of immersive media. The results of the workshop (roadmap, presentations) and the planned phases for the standardisation of “immersive media” are available at http://mpeg.chiariglione.org/. A follow-up workshop will be held on 18 January 2017 in Geneva, co-located with the 117th MPEG meeting. The workshop is open to all interested parties and free of charge. Details on the program and registration will be available at http://mpeg.chiariglione.org/.

Summary of the “Survey on Virtual Reality”

At its 115th meeting, MPEG established an ad-hoc group on virtual reality which conducted a survey on virtual reality with relevant stakeholders in this domain. The feedback from this survey has been provided as input for the 116th MPEG meeting where the results have been evaluated. Based on these results, MPEG aligned its standardisation timeline with the expected deployment timelines for 360-degree video and virtual reality services. An initial specification for 360-degree video and virtual reality services will be ready by the end of 2017 and is referred to as the Omnidirectional Media Application Format (OMAF; MPEG-A Part 20, ISO/IEC 23000-20). A standard addressing audio and video coding for 6 degrees of freedom where users can freely move around is on MPEG’s 5-year roadmap. The summary of the survey on virtual reality is available at http://mpeg.chiariglione.org/.

MPEG and ISO/TC 276/WG 5 have collected and evaluated the answers to the Genomic Information Compression and Storage joint Call for Proposals

At its 115th meeting, MPEG issued a Call for Proposals (CfP) for Genomic Information Compression and Storage in conjunction with the working group for standardisation of data processing and integration of the ISO Technical Committee for biotechnology standards (ISO/TC 276/WG 5). The call sought submissions of technologies that can provide efficient compression of genomic data and metadata for storage and processing applications. During the 116th MPEG meeting, responses to this CfP were collected and evaluated by a joint ad-hoc group of both working groups, with twelve distinct technologies submitted. An initial assessment of the performance of the best eleven solutions for the different categories reported compression factors ranging from 8 to 58 for the different classes of data.

The twelve submitted technologies show consistent improvements over the results assessed in response to the Call for Evidence in February 2016. Further improvements of the technologies under consideration are expected from the first phase of core experiments defined at the 116th MPEG meeting. The open core-experiment process planned for the next 12 months will comprise multiple, independent, directly comparable, and rigorous experiments performed by independent entities, to determine the specific merit of each technology and their mutual integration into a single solution for standardisation. The process will consider the submitted technologies as well as new solutions within the scope of each specific core experiment. The final inclusion of submitted technologies into the standard will be based on the experimental comparison of performance, on the validation of requirements, and on the inclusion of essential metadata describing the context of the sequence data, and will be reached by consensus within and across both committees.

Call for Proposals: Internet of Media Things and Wearables (IoMT&W)

At its 116th meeting, MPEG issued a Call for Proposals (CfP) for Internet of Media Things and Wearables (see http://mpeg.chiariglione.org/), motivated by the understanding that more than half of major new business processes and systems will incorporate some element of the Internet of Things (IoT) by 2020. Therefore, the CfP seeks submissions of protocols and data representations enabling dynamic discovery of media things and media wearables. A standard in this space will facilitate the large-scale deployment of complex media systems that can exchange data in an interoperable way between media things and media wearables.

MPEG-DASH Amendment with Media Presentation Description Chaining and Pre-Selection of Adaptation Sets

At its 116th MPEG meeting, a new amendment for MPEG-DASH reached the final stage of Final Draft Amendment (ISO/IEC 23009-1:2014 FDAM 4). This amendment includes several technologies useful for industry practices of adaptive media presentation delivery. For example, the media presentation description (MPD) can be daisy-chained to simplify the implementation of pre-roll ads in cases of targeted dynamic advertising for live linear services. Additionally, the amendment enables pre-selection in order to signal suitable combinations of audio elements that are offered in different adaptation sets. As several amendments and corrigenda have been produced, this amendment will be published as part of the 3rd edition of ISO/IEC 23009-1 together with the amendments and corrigenda approved after the 2nd edition.
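To make the daisy-chaining mechanism concrete, here is a minimal sketch of how a client could discover a chained MPD. The scheme identifier and MPD layout below are assumptions for illustration; the normative signalling is defined in the published amendment text.

```python
# Minimal sketch: detecting an MPD chaining descriptor in a DASH manifest.
# The scheme URI below is an assumption; consult ISO/IEC 23009-1 FDAM 4.
import xml.etree.ElementTree as ET

MPD_NS = "urn:mpeg:dash:schema:mpd:2011"
CHAINING_SCHEME = "urn:mpeg:dash:mpd-chaining:2016"  # assumed identifier

mpd_xml = f"""
<MPD xmlns="{MPD_NS}" type="static">
  <EssentialProperty schemeIdUri="{CHAINING_SCHEME}"
                     value="https://example.com/next-presentation.mpd"/>
</MPD>
"""

root = ET.fromstring(mpd_xml)
for prop in root.iter(f"{{{MPD_NS}}}EssentialProperty"):
    if prop.get("schemeIdUri") == CHAINING_SCHEME:
        # Play the chained MPD once the current presentation ends, e.g. a
        # pre-roll ad MPD chaining into the live linear service.
        print("Chained MPD:", prop.get("value"))
```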

How to contact MPEG, learn more, and find other MPEG facts

To learn about MPEG basics, discover how to participate in the committee, or find out more about the array of technologies developed or currently under development by MPEG, visit MPEG’s home page at http://mpeg.chiariglione.org. There you will find information publicly available from MPEG experts past and present including tutorials, white papers, vision documents, and requirements under consideration for new standards efforts. You can also find useful information in many public documents by using the search window.

Examples of tutorials that can be found on the MPEG homepage include tutorials for: High Efficiency Video Coding, Advanced Audio Coding, Universal Speech and Audio Coding, and DASH to name a few. A rich repository of white papers can also be found and continues to grow. You can find these papers and tutorials for many of MPEG’s standards freely available. Press releases from previous MPEG meetings are also available. Journalists who wish to receive MPEG Press Releases by email should contact Dr. Christian Timmerer at christian.timmerer@itec.uni-klu.ac.at or christian.timmerer@bitmovin.com.

Further Information

Future MPEG meetings are planned as follows:
No. 117, Geneva, CH, 16 – 20 January, 2017
No. 118, Hobart, AU, 03 – 07 April, 2017
No. 119, Torino, IT, 17 – 21 July, 2017
No. 120, Macau, CN, 23 – 27 October 2017

For further information about MPEG, please contact:
Dr. Leonardo Chiariglione (Convenor of MPEG, Italy)
Via Borgionera, 103
10040 Villar Dora (TO), Italy
Tel: +39 011 935 04 61
leonardo@chiariglione.org

or

Priv.-Doz. Dr. Christian Timmerer
Alpen-Adria-Universität Klagenfurt | Bitmovin Inc.
9020 Klagenfurt am Wörthersee, Austria, Europe
Tel: +43 463 2700 3621
Email: christian.timmerer@itec.aau.at | christian.timmerer@bitmovin.com

MPEG Column: 115th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The 115th MPEG meeting was held in Geneva, Switzerland and its press release highlights the following aspects:

  • MPEG issues Genomic Information Compression and Storage joint Call for Proposals in conjunction with ISO/TC 276/WG 5
  • Plug-in free decoding of 3D objects within Web browsers
  • MPEG-H 3D Audio AMD 3 reaches FDAM status
  • Common Media Application Format for Dynamic Adaptive Streaming Applications
  • 4th edition of AVC/HEVC file format

In this blog post, however, I will cover topics specifically relevant for adaptive media streaming, namely:

  • Recent developments in MPEG-DASH
  • Common media application format (CMAF)
  • MPEG-VR (virtual reality)
  • The MPEG roadmap/vision for the future.

MPEG-DASH Server and Network assisted DASH (SAND): ISO/IEC 23009-5

Part 5 of MPEG-DASH, referred to as SAND – server and network-assisted DASH – has reached FDIS. This work item started some time ago at a public MPEG workshop during the 105th MPEG meeting in Vienna. The goal of this part of MPEG-DASH is to enhance the delivery of DASH content by introducing messages between DASH clients and network elements, or among network elements themselves. These messages improve the efficiency of streaming sessions by conveying real-time operational characteristics of networks, servers, proxies, caches, and CDNs, as well as the DASH client’s performance and status. In particular, it defines the following (a small delivery sketch follows the list):

  1. The SAND architecture which identifies the SAND network elements and the nature of SAND messages exchanged among them.
  2. The semantics of SAND messages exchanged between the network elements present in the SAND architecture.
  3. An encoding scheme for the SAND messages.
  4. The minimum requirements to implement a SAND message delivery protocol.
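
As an illustration of these definitions, the sketch below shows how a DASH client might deliver a status message to a DASH-aware network element (DANE). The endpoint, message layout, element names, and media type are illustrative assumptions; the normative XML schema and delivery requirements are defined in ISO/IEC 23009-5.

```python
# Minimal sketch of a DASH client posting a SAND status message to a DANE.
import urllib.request

DANE_ENDPOINT = "https://dane.example.com/sand"  # hypothetical DANE endpoint

# Illustrative status message: the client announces segments it expects to
# request next, so caches along the path could prefetch them.
status_message = """<?xml version="1.0" encoding="UTF-8"?>
<SANDMessage senderId="client-42">
  <AnticipatedRequests>
    <Request sourceUrl="https://cdn.example.com/video/seg_0042.m4s"/>
  </AnticipatedRequests>
</SANDMessage>
"""

req = urllib.request.Request(
    DANE_ENDPOINT,
    data=status_message.encode("utf-8"),
    headers={"Content-Type": "application/sand+xml"},  # assumed media type
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to send to a real DANE
```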

The way that this information is to be utilized is deliberately not defined within the standard and left open for (industry) competition (or other standards developing organizations). In any case, there’s plenty of room for research activities around the topic of SAND, specifically:

  • A main issue is the evaluation of MPEG-DASH SAND in terms of qualitative and quantitative improvements with respect to QoS/QoE. Some papers are available already and have been published within ACM MMSys 2016.
  • Another topic of interest includes an analysis regarding scalability and possible overhead; in other words, I’m wondering whether SAND’s benefits justify its signalling overhead and deployment cost in practice.

MPEG-DASH with Server Push and WebSockets: ISO/IEC 23009-6

Part 6 of MPEG-DASH has reached DIS stage and deals with server push and WebSockets, i.e., it specifies the carriage of MPEG-DASH media presentations over full-duplex HTTP-compatible protocols, particularly HTTP/2 and WebSocket. The specification comes with a set of generic definitions for which protocol-specific bindings are defined; currently, bindings exist for HTTP/2 and WebSocket.

For the former, a push policy is defined as an HTTP header extension, whereas the latter requires the definition of a DASH subprotocol. Luckily, these are the preferred extension mechanisms for HTTP/2 and WebSocket respectively and, thus, interoperability is provided. Whether or not the industry will adopt these extensions cannot be answered right now, but I would recommend keeping an eye on this; there are certainly multiple research topics worth exploring in the future.
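
As a rough illustration of the WebSocket binding, the sketch below negotiates a DASH-specific subprotocol during the opening handshake using the third-party websockets package. The endpoint, subprotocol identifier, and message format are placeholders, not the identifiers defined in ISO/IEC 23009-6.

```python
# Sketch: offering a DASH subprotocol in the WebSocket opening handshake.
import asyncio
import websockets  # third-party: pip install websockets

async def fetch_init_segment():
    async with websockets.connect(
        "wss://streaming.example.com/dash",  # hypothetical endpoint
        subprotocols=["mpeg-dash.example"],  # placeholder identifier
    ) as ws:
        # Over a full-duplex channel the server can push new segments as
        # they are published, instead of the client polling via HTTP GET.
        await ws.send("GET video/init.mp4")  # illustrative request format
        data = await ws.recv()
        print(f"received {len(data)} bytes")

# asyncio.run(fetch_init_segment())  # uncomment against a real server
```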

An interesting aspect for the research community would be to quantify the utility of using push methods within dynamic adaptive environments in terms of QoE and start-up delay. Some papers provide preliminary answers but a comprehensive evaluation is missing.

To conclude the recent MPEG-DASH developments, the DASH-IF recently established the Excellence in DASH Award at ACM MMSys’16 and the winners are presented here (including some of the recent developments described in this blog post).

Common Media Application Format (CMAF): ISO/IEC 23000-19

The goal of CMAF is to enable application consortia to reference a single MPEG specification (i.e., a “common media format”) that allows a single media encoding to be used across many applications and devices. Therefore, CMAF defines the encoding and packaging of segmented media objects for delivery and decoding on end-user devices in adaptive multimedia presentations. This sounds very familiar and reminds us a bit of what the DASH-IF is doing with its interoperability points. One of the goals of CMAF is to align HLS with MPEG-DASH, which is backed by a WWDC video in which Apple announces support for fragmented MP4 in HLS. The stream of this announcement is only available in Safari and through the WWDC app, but Bitmovin has shown that it also works on iOS 10 and above and, for PC users, in all recent browser versions including Edge, Firefox, Chrome, and (of course) Safari.
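
For readers less familiar with the container level, the sketch below walks the top-level ISOBMFF boxes of a CMAF-style fragmented MP4, illustrating the segmented structure (moof/mdat pairs after the initialization data) that both DASH and fragmented-MP4 HLS can share. The file name is hypothetical, and the real CMAF constraints on brands and box ordering are defined in ISO/IEC 23000-19.

```python
# Sketch: walking the top-level boxes of a fragmented MP4 (ISOBMFF) file.
import struct

def top_level_boxes(path):
    """Yield the four-character types of the top-level boxes in an MP4 file."""
    with open(path, "rb") as f:
        while header := f.read(8):
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            name = box_type.decode("ascii", "replace")
            if size == 0:    # box extends to the end of the file
                yield name
                break
            if size == 1:    # 64-bit largesize follows the header
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            else:
                f.seek(size - 8, 1)
            yield name

# for box in top_level_boxes("segment.cmfv"):  # hypothetical file name
#     print(box)  # e.g. ftyp, moov, styp, moof, mdat, moof, mdat, ...
```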

MPEG Virtual Reality

Virtual reality is becoming a hot topic across the industry (and also academia), which has also reached standards developing organizations like MPEG. Therefore, MPEG established an ad-hoc group (with an email reflector) to develop a roadmap required for MPEG-VR. Others, such as DVB, DASH-IF, and QUALINET, have also started working on this (and maybe many more: W3C, 3GPP). In any case, it shows that there’s massive interest in this topic, and Bitmovin has already shown what can be done in this area within today’s Web environments. Obviously, adaptive streaming is an important aspect of VR applications, including many research questions to be addressed in the (near) future. A first step towards a concrete solution is the Omnidirectional Media Application Format (OMAF), which is currently at working draft stage (details to be provided in a future blog post).

The research aspects cover a wide range of activities including – but not limited to – content capturing, content representation, streaming/network optimization, consumption, and QoE.

MPEG roadmap/vision

At its 115th meeting, MPEG published a document that lays out its medium-term strategic standardization roadmap. The goal of this document is to collect feedback from anyone in professional and B2B industries dealing with media, specifically but not limited to broadcasting, content and service provision, media equipment manufacturing, and the telecommunication industry. The roadmap is depicted below and further described in the document available here. Please note that “360 AV” in the figure also covers VR, even though VR is not (yet) explicitly reflected in the figure. In any case, the roadmap points out the aspects to be addressed by MPEG in the future, which are relevant for both industry and academia.

[Figure: MPEG 5-year standardisation roadmap]

The next MPEG meeting will be held in Chengdu, October 17-21, 2016.

MPEG Column: Press release for the 114th MPEG meeting

Screen Content Coding Makes HEVC the Flexible Standard for Any Video Source

San Diego, USA − The 114th MPEG meeting was held in San Diego, CA, USA, from 22 – 26 February 2016

Powerful new HEVC tools improve compression of text, graphics, and animation

The 114th MPEG meeting marked the completion of the Screen Content Coding (SCC) extensions to HEVC – the High Efficiency Video Coding standard. This powerful set of tools augments the compression capabilities of HEVC to make it the flexible standard for virtually any type of video source content that is commonly encountered in our daily lives.

Screen content is video containing a significant proportion of rendered (moving or static) graphics, text, or animation rather than, or in addition to, camera-captured video scenes. The new SCC extensions of HEVC greatly improve the compression of such content. Example applications include wireless displays, news and other television content with text and graphics overlays, remote computer desktop access, and real-time screen sharing for video chat and video conferencing.
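
To give a flavour of why such tools help, the toy example below mimics the palette idea used for screen content: rendered text and graphics blocks contain few distinct sample values, so a block can be represented as a small palette plus per-sample indices. The actual SCC palette mode is considerably more elaborate than this sketch.

```python
# Toy palette representation of a screen-content block.
import numpy as np

# A 4x4 block of rendered, text-like content with only two sample values.
block = np.array([[255, 255,   0,   0],
                  [255,   0,   0,   0],
                  [255, 255, 255,   0],
                  [  0,   0, 255, 255]], dtype=np.uint8)

palette, indices = np.unique(block, return_inverse=True)
print("palette:", palette)                         # [  0 255] -> 2 entries
print("indices:\n", indices.reshape(block.shape))  # one bit per sample here
# 16 eight-bit samples reduce to a 2-entry palette plus 16 one-bit indices.
```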

The technical development of the SCC extensions was performed by the MPEG and VCEG video coding joint team JCT-VC, following a joint Call for Proposals issued in February 2014.

CfP issued for technologies to orchestrate capture and consumption of media across multiple devices

At its 114th meeting, MPEG issued a Call for Proposals (CfP) for Media Orchestration. The CfP seeks submissions of technologies that will facilitate the orchestration of devices and media, both in time (advanced synchronization, e.g. across multiple devices) and in space, where the media may come from multiple capturing devices and may be consumed by multiple rendering devices. An example application is the coordination of consumer electronics devices to record a live event. The CfP for Media Orchestration can be found at http://mpeg.chiariglione.org/meetings/114.

User Description framework helps recommendation engines deliver better choices

At the 114th meeting, MPEG has completed a standards framework (in ISO/IEC 21000-22) to facilitate the narrowing of big data searches to help recommendation engines deliver better, personalized, and relevant choices to users. Understanding the personal preferences of a user, and the context within which that user is interacting with a given application, facilitates the ability of that application to better respond to individual user requests. Having that information provided in a standard and interoperable format enables application providers to more broadly scale their services to interoperate with other application providers. Enter MPEG User Description (MPEG-UD). The aim of MPEG User Description is to ensure interoperability among recommendation services, which take into account the user and his/her context when generating recommendations for the user. With MPEG-UD, applications can utilize standard descriptors for users (user descriptor), the context in which the user is operating (context descriptor), recommendations (recommendation descriptor), and a description of a specific recommendation service that could be eventually consumed by the user (service descriptor).
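
The following sketch models the four descriptor roles as plain data classes to show how they fit together. All field names and the toy recommendation logic are illustrative assumptions; the normative schemas are defined in ISO/IEC 21000-22.

```python
# Sketch of the four MPEG-UD descriptor roles with illustrative fields.
from dataclasses import dataclass, field

@dataclass
class UserDescriptor:            # who the user is and what they prefer
    user_id: str
    preferences: dict = field(default_factory=dict)

@dataclass
class ContextDescriptor:         # where and how the user is interacting
    location: str = ""
    device: str = ""

@dataclass
class ServiceDescriptor:         # what a recommendation service offers
    service_id: str
    catalog: list = field(default_factory=list)

@dataclass
class RecommendationDescriptor:  # the service's answer for this user/context
    items: list = field(default_factory=list)

def recommend(user: UserDescriptor, ctx: ContextDescriptor,
              svc: ServiceDescriptor) -> RecommendationDescriptor:
    # Trivial stand-in for a real engine: filter the catalog by preferred
    # genre; a real service would also weigh the context (ctx).
    genre = user.preferences.get("genre")
    items = [i for i in svc.catalog if genre in i] if genre else list(svc.catalog)
    return RecommendationDescriptor(items=items)

user = UserDescriptor("u1", preferences={"genre": "jazz"})
ctx = ContextDescriptor(location="home", device="tv")
svc = ServiceDescriptor("music-svc", catalog=["jazz playlist", "rock playlist"])
print(recommend(user, ctx, svc).items)  # ['jazz playlist']
```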

Publish/Subscribe Application Format is finalized

The Publish/Subscribe Application Format (PSAF, ISO/IEC 23000-16) has reached the final milestone of FDIS at this MPEG meeting. The PSAF enables a communication paradigm where publishers do not communicate information directly to intended subscribers but instead rely on a service that mediates the relationship between senders and receivers. In this paradigm, Publishers create and store Resources and their descriptions, and send Publications; Subscribers send Subscriptions. Match Service Providers (MSP) receive and match Subscriptions with Publications and, when a Match has been found, send Notifications to users listed in Publications and Subscriptions. This paradigm is enabled by three other MPEG technologies which have also reached their final milestone: Contract Expression Language (CEL), Media Contract Ontology (MCO) and User Description (UD). A PSAF Notification is expressed as a set of UD Recommendations.
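
The minimal in-memory sketch below captures this paradigm: Publishers and Subscribers never address each other directly, and the MSP notifies both sides when a match is found. The keyword-overlap matching rule and class layout are purely illustrative; actual PSAF Notifications are expressed as MPEG-UD Recommendations, as noted above.

```python
# Minimal in-memory sketch of the PSAF roles (matching rule is illustrative).
class MatchServiceProvider:
    def __init__(self):
        self.publications = []   # (keyword set, notification callback)
        self.subscriptions = []

    def publish(self, keywords, notify):
        pub = (set(keywords), notify)
        self.publications.append(pub)
        for sub in self.subscriptions:
            self._match(pub, sub)

    def subscribe(self, keywords, notify):
        sub = (set(keywords), notify)
        self.subscriptions.append(sub)
        for pub in self.publications:
            self._match(pub, sub)

    @staticmethod
    def _match(pub, sub):
        overlap = pub[0] & sub[0]
        if overlap:              # a Match has been found: notify both parties
            pub[1](overlap)
            sub[1](overlap)

msp = MatchServiceProvider()
msp.publish({"concert", "jazz"}, lambda kw: print("publisher notified:", kw))
msp.subscribe({"jazz", "live"}, lambda kw: print("subscriber notified:", kw))
# -> publisher notified: {'jazz'}; subscriber notified: {'jazz'}
```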

CEL is a language to express contracts regarding digital licenses, i.e., the complete business agreements between the parties. MCO is an ontology to represent contracts dealing with rights on multimedia assets and intellectual-property-protected content in general. A specific vocabulary is defined in a model extension to represent the most common rights and constraints in the audiovisual context. PSAF contracts between Publishers or Subscribers and MSPs are expressed in CEL or MCO.

Augmented Reality Application Format reaches FDIS status

At the 114th MPEG meeting, the 2nd edition of ARAF, MPEG’s Application Format for Augmented Reality (ISO/IEC 23000-13), reached FDIS status and will soon be published as an International Standard. The MPEG ARAF enables augmentation of the real world with synthetic media objects by combining multiple existing MPEG standards within a single specific application format addressing certain industry needs. In particular, ARAF comprises three components referred to as scene, sensor/actuator, and media. The target applications include geolocation-based services, image-based object detection and tracking, audio recognition and synchronization, mixed and augmented reality games, and real-virtual interactive scenarios.

Genome compression progresses toward standardization

At its 114th meeting, MPEG has progressed its exploration of genome compression toward formal standardization. The 114th meeting included a seminar to collect additional perspectives on genome data standardization, and a review of technologies that had been submitted in response to a Call for Evidence. The purpose of that CfE, which had been previously issued at the 113th meeting, was to assess whether new technologies could achieve better performance in terms of compression efficiency compared with currently used formats.

In all, 22 tools were evaluated. The results demonstrate that by integrating multiple of these tools, it is possible to improve compression by up to 27% with respect to the best state-of-the-art tool. With this evidence, MPEG has issued a Draft Call for Proposals (CfP) on Genomic Information Representation and Compression. The Draft CfP targets technologies for compressing raw and aligned genomic data and metadata for efficient storage and analysis.

As demonstrated by the results of the Call for Evidence, improved lossless compression of genomic data beyond the current state-of-the-art tools is achievable by combining and further developing them. The call also addresses lossy compression of the metadata, which makes up the dominant volume of the resulting compressed data. The Draft CfP seeks lossy compression technologies that can provide higher compression performance without affecting the accuracy of analysis application results. Responses to the Genomic Information Representation and Compression CfP will be evaluated prior to the 116th MPEG meeting in October 2016 (in Chengdu, China). An ad hoc group, co-chaired by Martin Golobiewski, convenor of Working Group 5 of ISO TC 276 (i.e., the ISO committee for biotechnology), and Dr. Marco Mattavelli (of MPEG), will coordinate the receipt and pre-analysis of submissions received in response to the call. Detailed results of the CfE and the presentations shown during the seminar will soon be available as MPEG documents N16137 and N16147 at: http://mpeg.chiariglione.org/meetings/114.

MPEG evaluates results to CfP for Compact Descriptors for Video Analysis

MPEG has received responses from three consortia to its Call for Proposals (CfP) on Compact Descriptors for Video Analysis (CDVA). This CfP addresses compact (i.e., compressed) video description technologies for search and retrieval applications, i.e. for content matching in video sequences. Visual content matching includes matching of views of large and small objects and scenes, which is robust to partial occlusions as well as changes in vantage point, camera parameters, and lighting conditions. The objects of interest include those that are planar or non-planar, rigid or partially rigid, and textured or partially textured. CDVA aims to enable efficient and interoperable design of video analysis applications for large databases, for example broadcasters’ archives or videos available on the Internet. It is envisioned that CDVA will provide a complementary set of tools to the suite of existing MPEG standards, such as the MPEG-7 Compact Descriptors for Visual Search (CDVS). The evaluation showed that sufficient technology was received such that a standardization effort could be started. The final standard is expected to be ready in 2018.
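
As a toy illustration of the kind of matching compact descriptors enable, the sketch below summarizes feature vectors as short binary signatures and compares them by Hamming distance. Real CDVA descriptors and their matching pipeline are far more sophisticated; this only conveys the compact-signature idea.

```python
# Toy compact-signature matcher (random projection + sign binarization).
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.standard_normal((64, 256))  # fixed random projection

def binary_signature(features):
    # Stand-in for descriptor extraction: project, then binarize by sign.
    return (features @ PROJECTION > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

clips = [rng.standard_normal(64) for _ in range(5)]   # fake clip features
database = [binary_signature(c) for c in clips]
query = binary_signature(clips[2] + 0.1 * rng.standard_normal(64))  # noisy re-capture

best = min(range(len(database)), key=lambda i: hamming(query, database[i]))
print("best match: clip", best)  # expected: clip 2
```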

Workshop on 5G / Beyond UHD Media

A workshop on 5G / Beyond UHD Media was held on February 24th, 2016 during the 114th MPEG meeting. The workshop was organized to acquire relevant information about the context in which MPEG technology related to video, virtual reality, and the Internet of Things will be operating in the future, and to review the status of mobile technologies with the goal of guiding future codec standardization activity.

Dr. James Kempf of Ericsson reported on the challenges that Internet of Things devices face in a mobile environment. Dr. Ian Harvey of FOX discussed content creation for virtual reality applications. Dr. Kent Walker of Qualcomm promoted the value of unbundling technologies and creating relevant enablers. Dr. Jongmin Lee of SK Telecom explained challenges and opportunities in next-generation mobile multimedia services. Dr. Sudhir Dixit of the Wireless World Research Forum reported on the next-generation mobile 5G network and its challenges in supporting UHD media. Emmanuel Thomas of TNO showed trends in 5G and future media consumption, using media orchestration as an example. Dr. Charlie Zhang of Samsung Research America focused his presentation on 5G key technologies and recent advances.

Verification test complete for Scalable HEVC and MV-HEVC

MPEG has completed verification tests of SHVC, the scalable form of HEVC. These tests confirm the major savings that can be achieved by Scalable HEVC’s nested layers of data from which subsets can be extracted and used on their own to provide smaller coded streams. These smaller subsets can still be decoded with good video quality, as contrasted with the need to otherwise send separate “simulcast” coded video streams or add an intermediate “transcoding” process that would add substantial delay and complexity to the system.

The verification tests for SHVC showed that scalable HEVC coding can save an average of 40–60% in bit rate for the same quality as with simulcast coding, depending on the particular scalability scenario. SHVC includes capabilities for using a “base layer” with additional layers of enhancement data that improve the video picture resolution, the video picture fidelity, the range of representable colors, or the dynamic range of displayed brightness. Aside from a small amount of intermediate processing, each enhancement layer can be decoded by applying the same decoding process that is used for the original non-scalable version of HEVC. This compatibility that has been retained for the core of the decoding process will reduce the effort needed by industry to support the new scalable scheme.
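
A back-of-the-envelope calculation illustrates the simulcast comparison. All rates below are made-up numbers for illustration, not figures from the verification tests.

```python
# Illustrative layer rates in Mbit/s (made-up numbers):
base_720p         = 2.0  # base layer, decodable on its own
enhancement_1080p = 3.0  # enhancement layer coded on top of the base
simulcast_1080p   = 4.5  # independently coded 1080p stream

# Serving both resolutions with SHVC vs. two independent streams:
scalable_total  = base_720p + enhancement_1080p
simulcast_total = base_720p + simulcast_1080p
saving = 1 - scalable_total / simulcast_total
print(f"bit rate saving vs simulcast: {saving:.0%}")  # ~23% in this example
```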

Further verification tests were also conducted on MV-HEVC, whose Multiview Main Profile exploits the redundancy between different camera views using the same layering concept as scalable HEVC, with the same property that each view-specific layer is decodable by the ordinary HEVC decoding process. The results demonstrate that, for stereo (two-view) video, a data rate reduction of 30% compared to simulcast (independent HEVC coding of the views), and of more than 50% compared to the multiview version of AVC (known as MVC), can be achieved for the same video quality.

Exploring new Capabilities in Video Compression Technology

Three years after finishing the first version of the HEVC standard, this MPEG meeting marked the first full meeting of a new partnership to identify advances in video compression technology. At its previous meeting, MPEG and ITU-T’s VCEG had agreed to join together to explore new technology possibilities for video coding that lie beyond the capabilities of the HEVC standard and its current extensions. The new partnership is known as the Joint Video Exploration Team (JVET), and the team is working to explore both incremental and fundamentally different video coding technology that shows promise to potentially become the next generation in video coding standardization. The JVET formation follows MPEG’s workshops and requirements-gathering efforts that have confirmed that video data demands are continuing to grow and are projected to remain a major source of stress on network traffic – even as additional improvements in broadband speeds arise in the years to come. The groundwork laid at the previous meeting for the JVET effort has already borne fruit. The team has developed a Joint Exploration Model (JEM) for simulation experiments in the area, and initial tests of the first JEM version have shown a potential compression improvement over HEVC by combining a variety of new techniques. Given sufficient further progress and evidence of practicality, it is highly likely that a new Call for Evidence or Call for Proposals will be issued in 2016 or 2017 toward converting this initial JVET exploration into a formal project for an improved video compression standard.

How to contact MPEG, learn more, and find other MPEG facts

To learn about MPEG basics, discover how to participate in the committee, or find out more about the array of technologies developed or currently under development by MPEG, visit MPEG’s home page at http://mpeg.chiariglione.org. There you will find information publicly available from MPEG experts past and present including tutorials, white papers, vision documents, and requirements under consideration for new standards efforts. You can also find useful information in many public documents by using the search window.

Examples of tutorials that can be found on the MPEG homepage include tutorials for: High Efficiency Video Coding, Advanced Audio Coding, Universal Speech and Audio Coding, and DASH to name a few. A rich repository of white papers can also be found and continues to grow. You can find these papers and tutorials for many of MPEG’s standards freely available. Press releases from previous MPEG meetings are also available. Journalists who wish to receive MPEG Press Releases by email should contact Dr. Christian Timmerer at Christian.timmerer@itec.uni-klu.ac.at.

Further Information

Future MPEG meetings are planned as follows:

No. 115, Geneva, CH, 30 May – 03 June 2016
No. 116, Chengdu, CN, 17 – 21 October 2016
No. 117, Geneva, CH, 16 – 20 January, 2017
No. 118, Hobart, AU, 03 – 07 April, 2017