JPEG Column: 78th JPEG Meeting in Rio de Janeiro, Brazil

The JPEG Committee held its 78th meeting in Rio de Janeiro, Brazil. Relevant to its ongoing standardization efforts in JPEG Privacy and Security, JPEG organized a special session to explore how blockchain and distributed ledger technologies can be supported in past, ongoing and future standards of the JPEG family. This is motivated by the potential impact of such technologies on the future of multimedia: standardization will be required to enable interoperability between different imaging systems and services relying on blockchain and distributed ledger technologies.

Blockchain and distributed ledger technologies underpin the well-known cryptocurrencies. They can also provide means for establishing content authorship and for intellectual property and rights management of multimedia information, opening up new possibilities such as tracking the online use of copyrighted images and proving ownership of digital content.


JPEG meeting session.

The 78th JPEG meeting in Rio de Janeiro mainly comprised the following highlights:

  • JPEG explores blockchain and distributed ledger technologies
  • JPEG 360 Metadata
  • JPEG XL
  • JPEG XS
  • JPEG Pleno
  • JPEG Reference Software
  • JPEG 25th anniversary of the first JPEG standard

The following summarizes various activities during JPEG’s Rio de Janeiro meeting.

JPEG explores blockchain and distributed ledger technologies

During the 78th JPEG meeting in Rio de Janeiro, the JPEG committee organized a special session on blockchain and distributed ledger technologies and their impact on JPEG standards. As a result, the committee decided to explore use cases and standardization needs related to blockchain technology in a multimedia context. Use cases will be explored in relation to the recently launched JPEG Privacy and Security, as well as in the broader landscape of imaging and multimedia applications. To that end, the committee created an ad hoc group with the aim of gathering input from experts to define these use cases and to explore potential needs for, and advantages of, a standardization effort focused on imaging and multimedia applications. To get involved in the discussion, interested parties can register on the ad hoc group’s mailing list. Instructions to join the list are available at http://jpeg-blockchain-list.jpeg.org

JPEG 360 Metadata

The JPEG Committee notes the increasing use of multi-sensor images from multi-sensor devices, such as 360-degree capturing cameras or dual-camera smartphones available to consumers. Images from these cameras are shown on computers, smartphones, and Head Mounted Displays (HMDs). JPEG standards are commonly used for image compression and file format. However, because existing JPEG standards do not fully cover these new uses, incompatibilities have reduced the interoperability of their images, and thus the widespread ubiquity that consumers have come to expect when using JPEG files. Additionally, new modalities for interacting with images, such as computer-based augmentation, face-tagging, and object classification, require support for metadata that was not part of the original scope of JPEG. A set of such use cases is described in the JPEG 360 Metadata Use Cases document.

To avoid fragmentation in the market and to ensure wide interoperability, a standard way of interacting with multi-sensor images with richer metadata is desired in JPEG standards. JPEG invites all interested parties, including manufacturers, vendors and users of such devices, to submit technology proposals for enabling interactions with multi-sensor images and metadata that fulfill the scope, objectives and requirements outlined in the final Call for Proposals, available on the JPEG website.

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector.

JPEG XL

The Next-Generation Image Compression activity (JPEG XL) has produced a revised draft Call for Proposals, and intends to publish a final Call for Proposals (CfP) following its 79th meeting (April 2018), with the objective of seeking technologies that fulfill the objectives and scope of the activity. During the 78th meeting, objective and subjective quality assessment methodologies for anchor and proposal evaluations were discussed and analyzed. As an outcome of the meeting, source code for objective quality assessment has been made available.

The draft Call for Proposals, with all related information, can be found on the JPEG website. Comments are welcome and should be submitted as specified in the document. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

Since the previous 77th meeting, subjective quality evaluations have shown that the initial quality requirement of the JPEG XS Core Coding System has been met, i.e. visually lossless quality at a compression ratio of 6:1 for the large majority of images under test. Several profiles are now under development in JPEG XS, as well as transport and container formats. The JPEG Committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification of these parts. Publication of the International Standard is expected for Q3 2018.
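
To make the 6:1 figure concrete, a quick back-of-the-envelope calculation helps; the video format below is an illustrative assumption, not something mandated by JPEG XS:

```python
# Back-of-the-envelope mezzanine bitrate at a 6:1 compression ratio.
# The 1080p60 4:2:2 10-bit video format is an illustrative assumption,
# not a requirement of JPEG XS.
width, height, fps = 1920, 1080, 60
bits_per_pixel = 2 * 10  # 4:2:2 sampling: 2 samples per pixel, 10 bits each

uncompressed = width * height * fps * bits_per_pixel  # bits per second
compressed = uncompressed / 6                         # 6:1 ratio

print(f"Uncompressed: {uncompressed / 1e9:.2f} Gbit/s")  # ~2.49 Gbit/s
print(f"JPEG XS 6:1:  {compressed / 1e6:.0f} Mbit/s")    # ~415 Mbit/s
```

Around 415 Mbit/s for visually lossless 1080p60 hints at why IP and Ethernet transport are among the targeted use cases: several such streams fit into a single 10 Gigabit Ethernet link.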

JPEG Pleno

The JPEG Pleno activity is currently divided into Pleno Light Field, Pleno Point Cloud and Pleno Holography. JPEG Pleno Light Field has been preparing a third round of core experiments for assessing the impact of individual coding modules on the overall rate-distortion performance. Moreover, it was decided to continue collecting additional test data and to progress the preparation of working documents for JPEG Pleno specifications Part 1 and Part 2.

Furthermore, quality modelling studies are under consideration for both JPEG Pleno Point Cloud and JPEG Pleno Holography. In particular, JPEG Pleno Point Cloud is considering a set of new quality metrics provided as contributions to this work item. It is expected that the new metrics will replace the current state of the art, as they have shown superior correlation with subjective quality as perceived by humans. Moreover, new subjective assessment models have been tested and analysed to better understand the perception of quality for such new types of visual information.

JPEG Reference Software

The JPEG Committee is pleased to announce that its first JPEG image coding specification is now augmented by a new part, ISO/IEC 10918-7, which contains reference software. The proposed candidate software implementations have been checked for compliance with ISO/IEC 10918-2. Considering the positive results, this new part of the JPEG standard will continue to evolve quickly.


JPEG meeting room window view during a break.

JPEG 25th anniversary of the first JPEG standard

The third and final celebration of the 25th anniversary of the first JPEG standard is planned for the next 79th JPEG meeting, taking place in La Jolla, CA, USA. The anniversary will be marked by a two-hour workshop on Friday 13 April on current and emerging JPEG technologies, followed by a social event where past JPEG committee members with relevant contributions will be awarded.

Final Quote

“Blockchain and distributed ledger technologies promise a significant impact on the future of many fields. JPEG is committed to providing standard mechanisms to apply blockchain to multimedia applications in general and to imaging in particular,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee meets nominally four times a year, in different world locations. The latest 78th meeting was held from 27 January to 2 February 2018, in Rio de Janeiro, Brazil. The next 79th JPEG Meeting will be held on 9-15 April 2018, in La Jolla, California, USA.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 79, La Jolla (San Diego), CA, USA, April 9 to 15, 2018
  • No 80, Berlin, Germany, July 7 to 13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018

 

 

MPEG Column: 121st MPEG Meeting in Gwangju, Korea

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level
  • MPEG-G standards reach Committee Draft for metadata and APIs
  • MPEG issues Calls for Visual Test Material for Immersive Applications
  • Internet of Media Things (IoMT) reaches Committee Draft level
  • MPEG finalizes its Media Orchestration (MORE) standard

At the end, I will also briefly summarize what else happened with respect to DASH, CMAF, and OMAF, as well as discuss future aspects of MPEG.

Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level

The Committee Draft (CD) for CDVA has been approved at the 121st MPEG meeting, which is the first formal step of the ISO/IEC approval process for a new standard. This will become a new part of MPEG-7 to support video search and retrieval applications (ISO/IEC 15938-15).

Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors which can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers, so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. The CDVA standard specifies descriptors that fulfil these needs and includes (i) the components of the CDVA descriptor, (ii) its bitstream representation and (iii) the extraction process. The final standard is expected to be finished in early 2019.

CDVA introduces a new descriptor based on features output from a Deep Neural Network (DNN). It is robust against viewpoint changes and moderate transformations of the video (e.g., re-encoding, overlays), and it supports partial matching and temporal localization of the matching content. The CDVA descriptor has a typical size of 2–4 KBytes per second of video. For typical test cases, it has been demonstrated to reach a correct matching rate of 88% (at 1% false matching rate).
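
The standard itself defines the descriptor components, bitstream and extraction process; purely as a minimal sketch of why compact descriptors enable scalable instance search, the snippet below matches hypothetical L2-normalized feature vectors by cosine similarity instead of exchanging the source videos (dimensions, database size and the distortion model are invented for illustration):

```python
import numpy as np

def match_score(desc_a: np.ndarray, desc_b: np.ndarray) -> float:
    """Cosine similarity between two L2-normalized clip descriptors."""
    return float(desc_a @ desc_b)

rng = np.random.default_rng(0)

# Hypothetical 512-dimensional descriptors for 10,000 database clips;
# real CDVA descriptors are compact bitstreams (~2-4 KB per second of video).
database = rng.standard_normal((10_000, 512))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A query: a distorted copy of clip 42 (simulating re-encoding/overlays).
query = database[42] + 0.1 * rng.standard_normal(512)
query /= np.linalg.norm(query)

scores = database @ query  # one matrix-vector product over all clips
best = int(np.argmax(scores))
print(best, round(match_score(database[best], query), 3))  # 42, close to 1.0
```

The point is that matching reduces to a single matrix-vector product over tens of megabytes of descriptors, rather than decoding terabytes of source video.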

Research aspects: There are probably endless research aspects in the visual descriptor space, ranging from validation of the results achieved so far to further improving informative aspects with the goal of increasing the correct matching rate (and consequently decreasing the false matching rate). In general, however, the question is whether there’s a need for descriptors in the era of bandwidth-storage-computing over-provisioning and the rising usage of artificial intelligence techniques such as machine learning and deep learning.

MPEG-G standards reach Committee Draft for metadata and APIs

In my previous report I introduced the MPEG-G standard for compression and transport technologies of genomic data. At the 121st MPEG meeting, metadata and APIs reached CD level. The former – metadata – provides relevant information associated with genomic data and the latter – APIs – allow for building interoperable applications capable of manipulating MPEG-G files. Additional standardization plans for MPEG-G include the CDs for reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5), which are planned to be issued at the next 122nd MPEG meeting with the objective of producing Draft International Standards (DIS) at the end of 2018.

Research aspects: Metadata typically enables certain functionality which can be tested and evaluated against requirements. APIs allow building applications and services on top of the underlying functions, which could be a driver for research projects that make use of such APIs.

MPEG issues Calls for Visual Test Material for Immersive Applications

I have reported about the Omnidirectional Media Format (OMAF) in my previous report. At the 121st MPEG meeting, MPEG was working on extending OMAF functionalities to allow the modification of viewing positions, e.g., in case of head movements when using a head-mounted display, or for use with other forms of interactive navigation. Unlike OMAF which only provides 3 degrees of freedom (3DoF) for the user to view the content from a perspective looking outwards from the original camera position, the anticipated extension will also support motion parallax within some limited range which is referred to as 3DoF+. In the future with further enhanced technologies, a full 6 degrees of freedom (6DoF) will be achieved with changes of viewing position over a much larger range. To develop technology in these domains, MPEG has issued two Calls for Test Material in the areas of 3DoF+ and 6DoF, asking owners of image and video material to provide such content for use in developing and testing candidate technologies for standardization. Details about these calls can be found at https://mpeg.chiariglione.org/.
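
As a didactic sketch of the difference between the three navigation regimes (the data structure and the 0.5 m range below are invented for illustration, not OMAF syntax):

```python
from dataclasses import dataclass

@dataclass
class ViewerPose:
    yaw: float   # orientation in degrees: the only freedom in 3DoF
    pitch: float
    roll: float
    x: float = 0.0  # translation in metres: fixed in 3DoF
    y: float = 0.0
    z: float = 0.0

def is_supported(pose: ViewerPose, regime: str) -> bool:
    """3DoF: orientation only; 3DoF+: translation within a limited range
    (0.5 m is an invented example bound); 6DoF: free movement."""
    t = max(abs(pose.x), abs(pose.y), abs(pose.z))
    if regime == "3DoF":
        return t == 0.0
    if regime == "3DoF+":
        return t <= 0.5
    return True  # 6DoF

print(is_supported(ViewerPose(90, 0, 0, x=0.2), "3DoF+"))  # True
print(is_supported(ViewerPose(90, 0, 0, x=0.2), "3DoF"))   # False
```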

Research aspects: The good thing about test material is that it allows for reproducibility, which is an important aspect within the research community. Thus, it is more than appreciated that MPEG issues such calls, and let’s hope that this material will become publicly available. Typically this kind of visual test material targets coding, but it would also be interesting to have such test content for storage and delivery.

Internet of Media Things (IoMT) reaches Committee Draft level

The goal of IoMT is to facilitate the large-scale deployment of distributed media systems with interoperable audio/visual data and metadata exchange. This standard specifies APIs providing media things (i.e., cameras/displays and microphones/loudspeakers, possibly capable of significant processing power) with the capability of being discovered, setting up ad-hoc communication protocols, exposing usage conditions, and providing media and metadata as well as services processing them. IoMT APIs encompass a large variety of devices, not just connected cameras and displays but also sophisticated devices such as smart glasses, image/speech analyzers and gesture recognizers. IoMT enables the expression of the economic value of resources (media and metadata) and of associated processing in terms of digital tokens leveraged by the use of blockchain technologies.

Research aspects: The main focus of IoMT is its APIs, which provide easy and flexible access to the underlying devices’ functionality and, thus, are an important factor in enabling research within this interesting domain. For example, using these APIs to enable communication among these various media things could bring up new forms of interaction with these technologies.
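
To give a flavour of the discovery-style interactions such APIs enable, here is a toy sketch; all names are hypothetical and do not reproduce the actual IoMT specification:

```python
from dataclasses import dataclass, field

@dataclass
class MediaThing:
    """Toy media thing exposing identity, capabilities and usage conditions."""
    thing_id: str
    kind: str  # e.g. "camera", "microphone", "display"
    capabilities: dict = field(default_factory=dict)
    usage_conditions: str = "unrestricted"

class Registry:
    """Toy discovery service; the real standard defines protocol-level
    discovery and session setup rather than an in-process registry."""
    def __init__(self) -> None:
        self._things: list[MediaThing] = []

    def register(self, thing: MediaThing) -> None:
        self._things.append(thing)

    def discover(self, kind: str) -> list[MediaThing]:
        return [t for t in self._things if t.kind == kind]

registry = Registry()
registry.register(MediaThing("cam-1", "camera", {"resolution": "4K"}))
registry.register(MediaThing("mic-1", "microphone", {"channels": 2}))

for cam in registry.discover("camera"):
    print(cam.thing_id, cam.capabilities, cam.usage_conditions)
```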

MPEG finalizes its Media Orchestration (MORE) standard

The MPEG “Media Orchestration” (MORE) standard reached Final Draft International Standard (FDIS), the final stage of development before being published by ISO/IEC. The scope of the Media Orchestration standard is as follows:

  • It supports the automated combination of multiple media sources (i.e., cameras, microphones) into a coherent multimedia experience.
  • It supports rendering multimedia experiences on multiple devices simultaneously, again giving a consistent and coherent experience.
  • It contains tools for orchestration in time (synchronization) and space.

MPEG expects the Media Orchestration standard to be especially useful in immersive media settings. This applies notably to social virtual reality (VR) applications, where people share a VR experience and are able to communicate about it. Media Orchestration is expected to synchronize the media experience for all users and to give them a spatially consistent experience, as it is important for a social VR user to be able to understand when other users are looking at them.
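
A minimal sketch of orchestration in time, assuming all devices share a synchronized wall clock (e.g., via NTP); the class and the signalling of the session start are illustrative, not MORE syntax:

```python
import time

class OrchestratedPlayer:
    """Toy player deriving its media position from a shared wall clock,
    so independently started devices render the same media instant."""
    def __init__(self, session_start_epoch: float) -> None:
        # Agreed once via signalling; identical on all devices.
        self.session_start = session_start_epoch

    def media_time(self) -> float:
        """Seconds into the shared presentation, assuming device clocks
        are synchronized (e.g., via NTP)."""
        return time.time() - self.session_start

session_start = time.time() - 12.3  # pretend the session began 12.3 s ago
player_a = OrchestratedPlayer(session_start)
player_b = OrchestratedPlayer(session_start)

# Both report (almost) the same media position regardless of join time.
print(round(player_a.media_time(), 1), round(player_b.media_time(), 1))
```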

Research aspects: This standard enables the social multimedia experiences proposed in the literature. Interestingly, the W3C is working on something similar, referred to as the timing object, and it would be interesting to see whether these approaches have some commonalities.


What else happened at the MPEG meeting?

DASH is fully in maintenance mode and we are still waiting for the 3rd edition, which is supposed to be a consolidation of existing corrigenda and amendments. Currently only minor extensions are proposed and conformance/reference software is being updated. Similar things can be said for CMAF, where we have one amendment and one corrigendum under development. Additionally, MPEG is working on CMAF conformance. OMAF reached FDIS at the last meeting and MPEG is also working on its reference software and conformance. It is expected that in the future we will see additional standards and/or technical reports defining/describing how to use CMAF and OMAF in DASH.

Regarding the future video codec, the call for proposals has been out since the last meeting, as announced in my previous report, and responses are due for the next meeting. Thus, it is expected that the 122nd MPEG meeting will be the place to be in terms of MPEG’s future video codec. Speaking about the future, shortly after the 121st MPEG meeting, Leonardo Chiariglione published a blog post entitled “a crisis, the causes and a solution”, which is related to HEVC licensing, the Alliance for Open Media (AOM), and possible future options. The blog post certainly caused some reactions within the video community at large and I think this was also intended. Let’s hope it will galvanize the video industry — not to push the button — but to start addressing and resolving the issues. As pointed out in one of my other blog posts about what to care about in 2018, the upcoming MPEG meeting in April 2018 is certainly a place to be. That post also highlights some conferences related to various aspects discussed in MPEG, which I’d like to republish here:

  • QoMEX — Int’l Conf. on Quality of Multimedia Experience — will be hosted in Sardinia, Italy from May 29-31, which is THE conference to be for QoE of multimedia applications and services. Submission deadline is January 15/22, 2018.
  • MMSys — Multimedia Systems Conf. — and specifically Packet Video, which will be on June 12 in Amsterdam, The Netherlands. Packet Video is THE adaptive streaming scientific event 2018. Submission deadline is March 1, 2018.
  • Additionally, you might be interested in ICME (July 23-27, 2018, San Diego, USA), ICIP (October 7-10, 2018, Athens, Greece; specifically in the context of video coding), and PCS (June 24-27, 2018, San Francisco, CA, USA; also in the context of video coding).
  • The DASH-IF academic track hosts special events at MMSys (Excellence in DASH Award) and ICME (DASH Grand Challenge).
  • MIPR — 1st Int’l Conf. on Multimedia Information Processing and Retrieval — will be in Miami, Florida, USA from April 10-12, 2018. It has a broad range of topics including networking for multimedia systems as well as systems and infrastructures.
 

MPEG Column: 120th MPEG Meeting in Macau, China

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

MPEG Plenary Meeting

The MPEG press release comprises the following topics:

  • Point Cloud Compression – MPEG evaluates responses to call for proposal and kicks off its technical work
  • The omnidirectional media format (OMAF) has reached its final milestone
  • MPEG-G standards reach Committee Draft for compression and transport technologies of genomic data
  • Beyond HEVC – The MPEG & VCEG call to set the next standard in video compression
  • MPEG adds better support for mobile environment to MMT
  • New standard completed for Internet Video Coding
  • Evidence of new video transcoding technology using side streams

Point Cloud Compression

At its 120th meeting, MPEG analysed the technologies submitted by nine industry leaders as responses to the Call for Proposals (CfP) for Point Cloud Compression (PCC). These technologies address the lossless or lossy coding of 3D point clouds with associated attributes such as colour and material properties. Point clouds are unordered sets of points in a 3D space, typically captured using various setups of multiple cameras, depth sensors, LiDAR scanners, etc., but they can also be generated synthetically and are in use in several industries. They have recently emerged as representations of the real world enabling immersive forms of interaction, navigation, and communication. Point clouds are typically represented by extremely large amounts of data, providing a significant barrier for mass market applications. Thus, MPEG has issued a Call for Proposals seeking technologies that allow reduction of point cloud data for its intended applications. After a formal objective and subjective evaluation campaign, MPEG selected three technologies as starting points for the test models for static, animated, and dynamically acquired point clouds. A key conclusion of the evaluation was that state-of-the-art point cloud compression can be significantly improved by leveraging decades of 2D video coding tools and combining 2D and 3D compression technologies. Such an approach provides synergies with existing hardware and software infrastructures for rapid deployment of new immersive experiences.
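
As a drastically simplified sketch of the “leverage 2D video coding” conclusion, the snippet below orthographically projects a point cloud onto an image plane, yielding depth and texture maps that a 2D codec could then compress; the actual test models add patch segmentation, occupancy maps and much more:

```python
import numpy as np

def project_to_maps(points, colors, resolution=256):
    """Orthographically project a point cloud onto the XY plane: the
    closest point per pixel wins, yielding a depth map and a texture map
    that a 2D video codec could compress. (Toy version only: no patch
    segmentation, occupancy maps or reconstruction.)"""
    depth = np.full((resolution, resolution), np.inf)
    texture = np.zeros((resolution, resolution, 3), dtype=np.uint8)

    # Normalize coordinates into pixel indices.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    scaled = (points - mins) / (maxs - mins + 1e-9)
    u = (scaled[:, 0] * (resolution - 1)).astype(int)
    v = (scaled[:, 1] * (resolution - 1)).astype(int)

    for i in range(len(points)):
        z = scaled[i, 2]
        if z < depth[v[i], u[i]]:  # keep the point closest to the plane
            depth[v[i], u[i]] = z
            texture[v[i], u[i]] = colors[i]
    return depth, texture

rng = np.random.default_rng(1)
pts = rng.random((5000, 3))                            # synthetic cloud
cols = (rng.random((5000, 3)) * 255).astype(np.uint8)  # per-point colour
depth_map, texture_map = project_to_maps(pts, cols)
print(depth_map.shape, texture_map.shape)  # (256, 256) (256, 256, 3)
```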

Although the initial selection of technologies for point cloud compression was concluded at the 120th MPEG meeting, it can also be seen as a kick-off for their scientific evaluation and various further developments, including optimizations thereof. It is expected that various scientific conferences will focus on point cloud compression and may open calls for grand challenges, as for example at IEEE ICME 2018.

Omnidirectional Media Format (OMAF)

Understanding of virtual reality (VR) is growing, but market fragmentation caused by the lack of interoperable formats for the storage and delivery of such content stifles VR’s market potential. MPEG’s recently started project referred to as Omnidirectional Media Format (OMAF) has reached Final Draft International Standard (FDIS) at its 120th meeting. It includes

  • equirectangular projection and cubemap projection as projection formats;
  • signalling of metadata required for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data; and
  • a selection of audio-visual codecs for this application.

It also includes technologies to arrange video pixel data in numerous ways to improve compression efficiency and reduce the size of video, a major bottleneck for VR applications and services. The standard also includes technologies for the delivery of OMAF content with MPEG-DASH and MMT.

MPEG has defined a format comprising a minimal set of tools to enable interoperability among implementers of the standard. Various aspects are deliberately excluded from the normative parts to foster innovation leading to novel products and services. This enables us — researchers and practitioners — to experiment with these new formats in various ways and to focus on informative aspects where competition can typically be found. For example, efficient means for encoding and packaging of omnidirectional/360-degree media content and its adaptive streaming, including support for (ultra-)low latency, will become a big issue in the near future.

MPEG-G: Compression and Transport Technologies of Genomic Data

The availability of high-throughput DNA sequencing technologies opens new perspectives in the treatment of several diseases, making possible the introduction of new global approaches in public health known as “precision medicine”. While routine DNA sequencing in the doctor’s office is still not current practice, medical centers have begun to use sequencing to identify cancer and other diseases and to find effective treatments. As DNA sequencing technologies produce extremely large amounts of data and related information, the ICT costs of storage, transmission, and processing are also very high. The MPEG-G standard addresses and solves the problem of efficient and economical handling of genomic data by providing new

  • compression technologies (ISO/IEC 23092-2) and
  • transport technologies (ISO/IEC 23092-1),

which reached Committee Draft level at its 120th meeting.

Additionally, the Committee Drafts for

  • metadata and APIs (ISO/IEC 23092-3) and
  • reference software (ISO/IEC 23092-4)

are scheduled for the next MPEG meeting and the goal is to publish Draft International Standards (DIS) at the end of 2018.

This new type of (media) content, which requires compression and transport technologies, is emerging within the multimedia community at large and, thus, input is welcome.

Beyond HEVC – The MPEG & VCEG Call to set the Next Standard in Video Compression

The 120th MPEG meeting marked the first major step toward the next generation of video coding standard in the form of a joint Call for Proposals (CfP) with ITU-T SG16’s VCEG. After two years of collaborative informal exploration studies and a gathering of evidence that successfully concluded at the 118th MPEG meeting, MPEG and ITU-T SG16 agreed to issue the CfP for future video coding technology with compression capabilities that significantly exceed those of the HEVC standard and its current extensions. They also formalized an agreement on formation of a joint collaborative team called the “Joint Video Experts Team” (JVET) to work on development of the new planned standard, pending the outcome of the CfP that will be evaluated at the 122nd MPEG meeting in April 2018. To evaluate the proposed compression technologies, formal subjective tests will be performed using video material submitted by proponents in February 2018. The CfP includes the testing of technology for 360° omnidirectional video coding and the coding of content with high dynamic range and wide colour gamut in addition to conventional standard-dynamic-range camera content. Anticipating a strong response to the call, a “test model” draft design is expected to be selected in 2018, with development of a potential new standard concluding in late 2020.

The major goal of a new video coding standard is to be better than its predecessor (HEVC). Typically this “better” is quantified as 50%, which means that it should be possible to encode video at the same quality with half the bitrate, or at significantly higher quality with the same bitrate. However, at this time the “Joint Video Experts Team” (JVET) of MPEG and ITU-T SG16 faces competition from the Alliance for Open Media, which is working on AV1. In any case, we are looking forward to an exciting time frame from now until this new codec is ratified, and to seeing how it will perform compared to AV1. Multimedia systems and applications will also benefit from new codecs, which will gain traction as soon as first implementations become available (note that AV1 is already available as open source and is continuously being further developed).

MPEG adds Better Support for Mobile Environment to MPEG Media Transport (MMT)

MPEG has approved the Final Draft Amendment (FDAM) to MPEG Media Transport (MMT; ISO/IEC 23008-1:2017), which is referred to as “MMT enhancements for mobile environments”. In order to reflect industry needs regarding MMT, which has been well adopted by broadcast standards such as ATSC 3.0 and Super Hi-Vision, it addresses several important issues concerning the efficient use of MMT in mobile environments. For example, it adds a distributed resource identification message to facilitate multipath delivery and a transition request message to change the delivery path of an active session. This amendment also introduces the concept of an MMT-aware network entity (MANE), which might be placed between the original server and the client, and provides a detailed description of how to use it for both improving efficiency and reducing delivery delay. Additionally, this amendment provides a method to use WebSockets to set up and control an MMT session/presentation.

New Standard Completed for Internet Video Coding

A new standard for video coding suitable for the internet as well as other video applications was completed at the 120th MPEG meeting. The Internet Video Coding (IVC) standard was developed with the intention of providing the industry with an “Option 1” video coding standard. In ISO/IEC language, this refers to a standard for which patent holders have declared a willingness to grant licenses free of charge to an unrestricted number of applicants for all necessary patents on a worldwide, non-discriminatory basis and under other reasonable terms and conditions, to enable others to make, use, and sell implementations of the standard. At the time of completion of the IVC standard, the specification contained no identified necessary patent rights except those available under Option 1 licensing terms. During the development of IVC, MPEG removed from the draft standard any necessary patent rights that it was informed were not available under such Option 1 terms, and MPEG is optimistic about the outlook for the new standard. MPEG encourages interested parties to provide information about any other similar cases. The IVC standard has roughly similar compression capability to the earlier AVC standard, which has become the most widely deployed video coding technology in the world. Tests have been conducted to verify IVC’s strong technical capability, and the new standard has also been shown to have relatively modest implementation complexity requirements.

Evidence of new Video Transcoding Technology using Side Streams

Following a “Call for Evidence” (CfE) issued by MPEG in July 2017, evidence was evaluated at the 120th MPEG meeting to investigate whether video transcoding technology has been developed for transcoding assisted by side data streams that is capable of significantly reducing the computational complexity without reducing compression efficiency. The evaluations of the four responses received included comparisons of the technology against adaptive bit-rate streaming using simulcast as well as against traditional transcoding using full video re-encoding. The responses span the compression efficiency space between simulcast and full transcoding, with trade-offs between the bit rate required for distribution within the network and the bit rate required for delivery to the user. All four responses provided a substantial computational complexity reduction compared to transcoding using full re-encoding. MPEG plans to further investigate transcoding technology and is soliciting expressions of interest from industry on the need for standardization of such assisted transcoding using side data streams.
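
As a toy illustration of why side information can cut transcoding complexity (the modes, costs and hint format below are invented; actual responses may carry hints such as partitioning or mode decisions alongside the highest-bitrate stream):

```python
import random

MODES = range(16)  # hypothetical candidate coding modes per block

def rd_cost(block: int, mode: int) -> float:
    """Stand-in for an expensive rate-distortion evaluation."""
    random.seed(block * 1000 + mode)  # deterministic toy cost
    return random.random()

def full_reencode(blocks):
    """Baseline transcoder: search every candidate mode per block."""
    evals = 0
    for b in blocks:
        _ = min(MODES, key=lambda m: rd_cost(b, m))
        evals += len(MODES)
    return evals

def guided_transcode(blocks, side_info):
    """Guided transcoder: reuse the mode hint carried in the side stream."""
    evals = 0
    for b in blocks:
        _ = rd_cost(b, side_info[b])
        evals += 1
    return evals

blocks = list(range(1000))
# Side stream produced once, during the first (highest-bitrate) encode.
hints = {b: min(MODES, key=lambda m: rd_cost(b, m)) for b in blocks}

print("full re-encode evaluations:", full_reencode(blocks))             # 16000
print("guided transcode evaluations:", guided_transcode(blocks, hints)) # 1000
```

The trade-off named in the call shows up here as well: the hints cost extra bits in the distribution network, in exchange for a fraction of the re-encoding work at the transcoding point.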

MPEG currently works on two related topics, referred to as network-distributed video coding (NDVC) and network-based media processing (NBMP). Both activities involve the network, which is increasingly evolving into a highly distributed compute and delivery platform, as opposed to a bit pipe that is merely supposed to deliver data as fast as possible from A to B. This phenomenon is also interesting when looking at developments around 5G, which is actually much more than just a radio access technology. These activities are certainly worth monitoring as they basically contribute to making networked media resources accessible or even programmable. In this context, I would like to refer the interested reader to the December ’17 theme of the IEEE Computer Society Computing Now, which is about Advancing Multimedia Content Distribution.


Publicly available documents from the 120th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Gwangju, Korea, January 22-26, 2018. Feel free to contact Christian Timmerer for any questions or comments.


Some of the activities reported above are considered within the Call for Papers of the 23rd Packet Video Workshop (PV 2018), co-located with ACM MMSys 2018 in Amsterdam, The Netherlands. Topics of interest include (but are not limited to):

  • Adaptive media streaming, and content storage, distribution and delivery
  • Network-distributed video coding and network-based media processing
  • Next-generation/future video coding, point cloud compression
  • Audiovisual communication, surveillance and healthcare systems
  • Wireless, mobile, IoT, and embedded systems for multimedia applications
  • Future media internetworking: information-centric networking and 5G
  • Immersive media: virtual reality (VR), augmented reality (AR), 360° video and multi-sensory systems, and their streaming
  • Machine learning in media coding and streaming systems
  • Standardization: DASH, MMT, CMAF, OMAF, MiAF, WebRTC, MSE, EME, WebVR, Hybrid Media, WAVE, etc.
  • Applications: social media, game streaming, personal broadcast, healthcare, industry 4.0, education, transportation, etc.

Important dates

  • Submission deadline: March 1, 2018
  • Acceptance notification: April 9, 2018
  • Camera-ready deadline: April 19, 2018

JPEG Column: 77th JPEG Meeting in Macau, China


JPEG XS is now entering the final phases of its definition and will soon be available. It is important to clarify the change from the typical JPEG approach, as this is the first JPEG image compression standard that is not developed targeting only the best compression performance at the best perceptual quality. Instead, JPEG XS establishes a compromise between compression efficiency and low complexity. This new approach is complemented by the development of a new part of the well-established JPEG 2000, named High Throughput JPEG 2000.

With these initiatives, the JPEG Committee is standardizing low-complexity and low-latency codecs, with a slight sacrifice of the compression performance usually sought in previous standards. This change of paradigm is justified by current trends in multimedia technology, with the continuous growth of devices that are highly dependent on battery life cycles, namely mobiles, tablets, and also augmented reality devices or autonomous robots. Furthermore, these standards provide support for applications like omnidirectional video capture or real-time video storage and streaming. Nowadays, networks tend to grow in available bandwidth, and the memory available in most devices has also been reaching impressive numbers. Although compression is still required to simplify the manipulation of large amounts of data, its performance might become secondary if kept within acceptable levels. Obviously, considering the advances in coding technology over the last 25 years, these new approaches define codecs with compression performance largely above that of the JPEG standard used in most devices today. Moreover, they provide enhanced capabilities like HDR support, lossless or near-lossless modes, and alpha plane coding.

At the 77th JPEG meeting, held in Macau, China, from 21 to 27 October, several activities were pursued, as shortly described in the following.


  1. A call for proposals on JPEG 360 Metadata for the current JPEG family of standards has been issued.
  2. New advances on low complexity/low latency compression standards, namely JPEG XS and High Throughput JPEG 2000.
  3. Continuation of the JPEG Pleno project that will lead to a family of standards on different 3D technologies, like light fields, digital holography and also point cloud data.
  4. New CfP for the Next-Generation Image Compression Standard.
  5. Definition of a JPEG reference software.

Moreover, a celebration of the 25th JPEG anniversary took place, at which early JPEG committee members from Asia were awarded.

The different activities are described in the following paragraphs.

 

JPEG Privacy and Security

JPEG Privacy & Security is a work item (ISO/IEC 19566-4) aiming at the development of a standard that provides technical solutions which can ensure privacy, maintain data integrity, and protect intellectual property rights (IPR). A Call for Proposals was published in April 2017 and, based on a descriptive analysis of submitted solutions for supporting protection and authenticity features in JPEG files, a working draft of JPEG Privacy & Security in the context of JPEG Systems standardization was produced during the 77th JPEG meeting in Macau, China. To collect further comments from the stakeholders in this field, an open online meeting on JPEG Privacy & Security will be conducted before the 78th JPEG meeting in Rio de Janeiro, Brazil, on Jan. 27-Feb 2, 2018. The JPEG Committee invites interested parties to the meeting. Details will be announced in the JPEG Privacy & Security AhG email reflector.

 

JPEG 360 Metadata

The JPEG Committee has issued a “Draft Call for Proposals (CfP) on JPEG 360 Metadata” at the 77th JPEG meeting in Macau, China. The JPEG Committee notes the increasing use of multi-sensor images from multiple image sensor devices, such as 360 degree capturing cameras or dual-camera smartphones available to consumers. Images from these cameras are shown on computers, smartphones and Head Mounted Displays (HMDs). JPEG standards are commonly used for image compression and as a file format to store and share such content. However, because existing JPEG standards do not fully cover all new uses, incompatibilities have reduced the interoperability of 360 images, and thus the widespread ubiquity that consumers have come to expect when using JPEG-based images. Additionally, new modalities for interaction with images, such as computer-based augmentation, face-tagging, and object classification, require support for metadata that was not part of the original scope of JPEG. To avoid fragmentation in the market and to ensure interoperability, a standard way of interacting with multi-sensor images with richer metadata is desired in JPEG standards. This CfP invites all interested parties, including manufacturers, vendors and users of such devices, to submit technology proposals for enabling interactions with multi-sensor images and metadata that fulfill the scope, objectives and requirements.

 

High Throughput JPEG 2000

The JPEG Committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K).

Since the release of an initial Call for Proposals (CfP) at the outcome of its 76th meeting, the JPEG Committee has completed the software test bench that will be used to evaluate technology submissions, and has reviewed initial registrations of intent. Final technology submissions are due on 1 March 2018.

The HTJ2K activity aims to develop an alternate block-coding algorithm that can be used in place of the existing block coding algorithm specified in ISO/IEC 15444-1 (JPEG 2000 Part 1). The objective is to significantly increase the throughput of JPEG 2000, at the expense of a small reduction in coding efficiency, while allowing mathematically lossless transcoding to and from codestreams using the existing block coding algorithm.

 

JPEG XS

This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After four rounds of Core Experiments, the Core Coding System has now been finalized and the ballot process has been initiated.

Additional parts of the Standard are still being specified, in particular future profiles, as well as transport and container formats. The JPEG Committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

 

JPEG Pleno

This standardization effort is targeting the generation of a multimodal framework for the exchange of light field, point cloud, depth+texture and holographic data in end-to-end application chains. Currently, the JPEG Committee is defining the coding framework of the light field modality, for which the signalling syntax will be specified in Part 2 of the JPEG Pleno standard. In parallel, JPEG is reaching out to companies and research institutes that are active in the point cloud and holography arenas and invites them to contribute to the standardization effort. JPEG is seeking additional input, both at the level of test data and quality assessment methodologies for these specific image modalities, and in terms of technology that supports their generation, reconstruction and/or rendering.

 

JPEG XL

The JPEG Committee has launched a Next-Generation Image Compression Standardization activity, also referred to as JPEG XL. This activity aims to develop a standard for image compression that offers substantially better compression efficiency than existing image formats (e.g. >60% over JPEG-1), along with features desirable for web distribution and efficient compression of high-quality images.

The JPEG Committee intends to issue a final Call for Proposals (CfP) following its 78th meeting (January 2018), with the objective of seeking technologies that fulfill the objectives and scope of the Next-Generation Image Compression Standardization activity.

A draft Call for Proposals, with all related information, has been issued and can be found on the JPEG website. Comments are welcome and should be submitted as specified in the document.

To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector. You will receive information to confirm your subscription, and upon acceptance by the moderator you will be included in the mailing list.

 

JPEG Reference Software

Along with its celebration of the 25th anniversary of the commonly known JPEG still image compression specification, the JPEG Committee has launched an activity to fill a long-known gap in this important image coding standard, namely the definition of a JPEG reference software. At its 77th meeting, the JPEG Committee collected submissions for a reference software, evaluated them for suitability, and has now started the standardization process of such software on the basis of the submissions received.



JPEG 25th anniversary of the first JPEG standard

The JPEG Committee held a 25th anniversary celebration of its first standard in Macau, specifically organized to honour past committee members from Asia, and was proud to award Takao Omachi for his contributions to the first JPEG standard, Fumitaka Ono for his long-lasting contributions to the JBIG and JPEG standards, and Daniel Lee for his contributions to the JPEG family of standards and his long-lasting service as Convenor of the JPEG Committee. The celebrations of the anniversary of this successful standard, which is still growing in its use after 25 years, will have a third and final event during the 79th JPEG meeting planned in La Jolla, CA, USA.


 

Final Quote

“JPEG is committed to the design of specifications that ensure privacy and other security and protection solutions across the entire JPEG family of standards,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest 77th meeting was held from 21 to 27 October 2017, in Macau, China. The next 78th JPEG Meeting will be held from January 27 to February 2, 2018, in Rio de Janeiro, Brazil.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list at https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG

Future JPEG meetings are planned as follows:

  • No 78, Rio de Janeiro, Brazil, January 27 to February 2, 2018
  • No 79, La Jolla (San Diego), CA, USA, April 9 to 15, 2018
  • No 80, Berlin, Germany, July 7 to 13, 2018

JPEG Column: 76th JPEG Meeting in Turin, Italy

The 76th JPEG meeting was held at Politecnico di Torino, Turin, Italy, from 15 to 21 July. The current standardisation activities were complemented by the celebration of the 25th anniversary of the first JPEG standard. Simultaneously, JPEG pursues the development of different standardised solutions to meet current challenges in imaging technology, namely emerging new applications and low-complexity image coding. The 76th JPEG meeting featured mainly the following highlights:

  • JPEG 25th anniversary of the first JPEG standard
  • High Throughput JPEG 2000
  • JPEG Pleno
  • JPEG XL
  • JPEG XS
  • JPEG Reference Software

In the following an overview of the main JPEG activities at the 76th meeting is given.

JPEG 25th anniversary of the first JPEG standard – JPEG is proud to celebrate the 25th anniversary of its first standard. This very successful standard won an Emmy award in 1995-96 and its usage is still rising, reaching in 2015 the impressive daily rate of over 3 billion images exchanged in just a few social networks. During the celebration, a number of early members of the committee were awarded for their contributions to this standard, namely Alain Léger, Birger Niss, Jorgen Vaaben and István Sebestyén. Richard Clark was also awarded during the same ceremony for his long-lasting contribution as JPEG webmaster and his contributions to many JPEG standards. The celebration will continue at the next 77th JPEG meeting, to be held in Macau, China from 21 to 27 October 2017.


High Throughput JPEG 2000 – The JPEG committee is continuing its work towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). In a significant milestone, the JPEG Committee has released a Call for Proposals that invites technical contributions to the HTJ2K activity. The deadline for an expression of interest is 1 October 2017, as detailed in the Call for Proposals, which is publicly available on the JPEG website at https://jpeg.org/jpeg2000/htj2k.html.

The objective of the HTJ2K activity is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the block coding defined in JPEG 2000 Part-1. Based on existing evidence, it is believed that significant increases in encoding and decoding throughput are possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content libraries. To ensure this, the alternate block coding algorithm supports mathematically lossless transcoding between HTJ2K and JPEG 2000 Part-1 codestreams at the code-block level.

JPEG Pleno – The JPEG committee intends to provide a standard framework to facilitate the capture, representation and exchange of omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. JPEG Pleno aims at defining new tools for improved compression while providing advanced functionalities at the system level. Moreover, it targets support for data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights, as well as other security mechanisms. At the 76th JPEG meeting in Turin, Italy, responses to the call for proposals for JPEG Pleno light field image coding were evaluated using subjective and objective evaluation metrics, and a Generic JPEG Pleno Light Field Architecture was created. The JPEG committee defined three initial core experiments to be performed before the 77th JPEG meeting in Macau, China. Interested parties are invited to join these core experiments and the JPEG Pleno standardization.

JPEG XL – The JPEG Committee is working on a new activity, known as Next Generation Image Format, which aims to develop an image compression format that demonstrates higher compression efficiency at equivalent subjective quality than currently available formats, and that supports features for both low-end and high-end use cases. On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery. A draft Call for Proposals (CfP) on JPEG XL has been issued for public comment, and is available on the JPEG website.

JPEG XS – This project aims at the standardization of a visually lossless low-latency lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created. Several rounds of Core Experiments have allowed further improvement of the Core Coding System, the last round being reviewed during this 76th JPEG meeting in Torino. More core experiments are on their way, including subjective assessments. The JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process. Publication of the International Standard is expected for Q3 2018.

JPEG Reference Software – Together with the celebration of the 25th anniversary of the first JPEG standard, the committee continued its important activities around the omnipresent JPEG image format; while all newer JPEG standards define reference software guiding users in interpreting and helping them in implementing a given standard, no such reference exists for the most popular image format of the Internet age. The JPEG committee therefore issued a call for proposals (https://jpeg.org/items/20170728_cfp_jpeg_reference_software.html) asking interested parties to participate in the submission and selection of valuable and stable implementations of JPEG (formally, Rec. ITU-T T.81 | ISO/IEC 10918-1).

 

Final Quote

“The experience shared by developers of the first JPEG standard during the celebration was an inspiring moment that will guide us in furthering the ongoing development of standards responding to new challenges in imaging applications,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest 76th meeting was held on July 15-21, 2017, in Torino, Italy. The next (77th) JPEG Meeting will be held on October 23-27, 2017, in Macau, China.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Frederik Temmermans of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list at https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 77, Macau, CN, 23 – 27 October 2017

 

MPEG Column: 119th MPEG Meeting in Turin, Italy

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Evidence of New Developments in Video Compression Coding
  • Call for Evidence on Transcoding for Network Distributed Video Coding
  • 2nd Edition of Storage of Sample Variants reaches Committee Draft
  • New Technical Report on Signalling, Backward Compatibility and Display Adaptation for HDR/WCG Video Coding
  • Draft Requirements for Hybrid Natural/Synthetic Scene Data Container

Evidence of New Developments in Video Compression Coding

At the 119th MPEG meeting, responses to the previously issued call for evidence were evaluated, and all of them successfully demonstrated evidence. The call requested responses for use cases of video coding technology in three categories:

  • standard dynamic range (SDR) — two responses;
  • high dynamic range (HDR) — two responses; and
  • 360° omnidirectional video — four responses.

The evaluation of the responses included subjective testing and an assessment of the performance of the “Joint Exploration Model” (JEM). The results indicate significant gains over HEVC for a considerable number of test cases, with comparable subjective quality at 40-50% less bit rate compared to HEVC for the SDR and HDR test cases, with some positive outliers (i.e., higher bit rate savings). Thus, the MPEG-VCEG Joint Video Exploration Team (JVET) concluded that evidence exists of compression technology that may significantly outperform HEVC after further development to establish a new standard. As a next step, the plan is to issue a call for proposals at the 120th MPEG meeting (October 2017), with responses expected to be evaluated at the 122nd MPEG meeting (April 2018).

We already witness an increase in research articles addressing video coding technologies with capabilities beyond HEVC, which will further increase in the future. The main driving force is over-the-top (OTT) delivery, which calls for more efficient bandwidth utilization. However, competition is also increasing with the emergence of AV1 from AOMedia, and we may also observe an increasing number of articles in that direction, including evaluations thereof. Interestingly, the number of use cases is also increasing (e.g., see the different categories above), which adds further challenges to the “complex video problem”.

Call for Evidence on Transcoding for Network Distributed Video Coding

The call for evidence on transcoding for network distributed video coding targets interested parties possessing technology providing transcoding of video at lower computational complexity than transcoding done using a full re-encode. The primary application is adaptive bitrate streaming where a highest bitrate stream is transcoded into lower bitrate streams. It is expected that responses may use “side streams” (or side information, some may call it metadata) accompanying the highest bitrate stream to assist in the transcoding process. MPEG expects submissions for the 120th MPEG meeting where compression efficiency and computational complexity will be assessed.

Transcoding has been discussed for a long time already, and I can certainly recommend this article from 2005 published in the Proceedings of the IEEE. The question is: what is different now, 12 years later, and what metadata (or side streams/information) is required for interoperability among different vendors (if any)?

A Brief Overview of Remaining Topics…

  • The 2nd edition of storage of sample variants reaches Committee Draft and expands its usage to MPEG-2 transport streams, whereas the first edition focused primarily on the ISO base media file format.
  • The new technical report for high dynamic range (HDR) and wide colour gamut (WCG) video coding comprises a survey of various signalling mechanisms, including backward compatibility and display adaptation.
  • MPEG issues draft requirements for a scene representation media container enabling the interchange of content for authoring and rendering rich immersive experiences, currently referred to as the hybrid natural/synthetic scene (HNSS) data container.

Other MPEG (Systems) Activities at the 119th Meeting

DASH is now fully in maintenance mode, as only minor enhancements/corrections have been discussed, including contributions to conformance and reference software. The omnidirectional media format (OMAF) is certainly the hottest topic within MPEG Systems; it is currently between two stages (i.e., between DIS and FDIS) and, thus, a study of the DIS has been approved, and national bodies are kindly requested to take this into account when casting their votes (incl. comments). The study of the DIS comprises format definitions with respect to the coding and storage of omnidirectional media, including audio and video (aka 360°). The common media application format (CMAF) was ratified at the last meeting and awaits publication by ISO. In the meantime, the CMAF activity is focusing on conformance and reference software as well as amendments regarding various media profiles. Finally, requirements for a multi-image application format (MiAF) have been available since the last meeting, and a working draft was approved at the 119th MPEG meeting. MiAF will be based on HEIF, and the goal is to define additional constraints to simplify its file format options.

We have successfully demonstrated live 360° adaptive streaming as described here, but we expect various improvements from standards that are available or under development within MPEG. Research aspects in these areas are certainly interesting, particularly performance gains and evaluations with respect to bandwidth efficiency in open networks, as well as how these standardization efforts could be used to enable new use cases.

Publicly available documents from the 119th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Macau, China, October 23-27, 2017. Feel free to contact me for any questions or comments.

JPEG Column: 75th JPEG Meeting in Sydney, Australia


The 75th JPEG meeting was held at National Standards Australia in Sydney, Australia, from 26 to 31 March. Multiple activities ensued, pursuing the development of new standards that meet current requirements and challenges in imaging technology. JPEG continuously strives to provide new, reliable solutions for different image applications. The 75th JPEG meeting featured mainly the following highlights:

  • JPEG issues a Call for Proposals on Privacy & Security;
  • New draft Call for Proposal for a Part 15 of JPEG 2000 standard on High Throughput coding;
  • JPEG Pleno defines methodologies for proposals evaluation;
  • A test model for the upcoming JPEG XS standard was created;
  • A new standardisation effort on Next generation Image Formats was initiated.

In the following an overview of the main JPEG activities at the 75th meeting is given.

JPEG Privacy & Security – JPEG Privacy & Security is a work item (ISO/IEC 19566-4) aiming at developing a standard that provides technical solutions for ensuring privacy, maintaining data integrity, and protecting intellectual property rights (IPR). JPEG Privacy & Security is exploring how to design and implement the necessary features without significantly impacting coding performance, while ensuring scalability, interoperability, and forward & backward compatibility with current JPEG standard frameworks.
Since the JPEG committee intends to interact closely with actors in this domain, public workshops on JPEG Privacy & Security were organised at previous JPEG meetings. The first workshop was organized on October 13, 2015 during the JPEG meeting in Brussels, Belgium. The second workshop was organized on February 23, 2016 during the JPEG meeting in La Jolla, CA, USA. Following the great success of these workshops, a third and final workshop was organized on October 18, 2016 during the JPEG meeting in Chengdu, China. These workshops aimed at understanding industry, user, and policy needs in terms of technology and supported functionalities. The proceedings of these workshops are published on the Privacy and Security page of the JPEG website at www.jpeg.org under the Systems section.
The JPEG Committee released a Call for Proposals that invites contributions on adding new capabilities for protection and authenticity features for the JPEG family of standards. Interested parties and content providers are encouraged to participate in this standardization activity and submit proposals. The deadline for an expression of interest and submission of proposals has been set to October 6th, 2017, as detailed in the Call for Proposals. The Call for Proposals on JPEG Privacy & Security is publicly available on the JPEG website, https://jpeg.org/jpegsystems/privacy_security.html.

High Throughput JPEG 2000 – The JPEG committee is working towards the creation of a new Part 15 to the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). The goal of this project is to identify and standardize an alternate block coding algorithm that can be used as a drop-in replacement for the algorithm defined in JPEG 2000 Part 1. Based on existing evidence, it is believed that large increases in encoding and decoding throughput (e.g., 10x or beyond) should be possible on modern software platforms, subject to small sacrifices in coding efficiency. An important focus of this activity is interoperability with existing systems and content repositories. To ensure this, the alternate block coding algorithm that will be the subject of this new Part of the standard should support mathematically lossless transcoding between HTJ2K and JPEG 2000 Part 1 codestreams at the code-block level. A draft Call for Proposals (CfP) on HTJ2K has been issued for public comment and is available on the JPEG website.
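A quick back-of-the-envelope calculation illustrates why such a speed-up matters. The Part 1 decoder throughput used below is an assumed figure for illustration only, not a measurement from the HTJ2K activity.

```python
# Back-of-the-envelope check: does a 10x block-coding speed-up make
# software decoding of UHD video feasible in real time?
part1_throughput = 50e6                 # assumed Part 1 decode rate, pixels/s
htj2k_throughput = 10 * part1_throughput

uhd_pixel_rate = 3840 * 2160 * 60       # 4K at 60 fps ~= 498 Mpixel/s
print(htj2k_throughput >= uhd_pixel_rate)  # True: ~500 vs ~498 Mpixel/s
```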

JPEG Pleno – The responses to the JPEG Pleno Call for Proposals on Light Field Coding will be evaluated at the July JPEG meeting in Torino. The quality assessment procedure for this highly challenging type of large-volume data was defined during the 75th JPEG meeting. In addition to light fields, JPEG Pleno is also addressing point cloud and holographic data. Currently, the committee is undertaking in-depth studies to prepare standardization efforts on coding technologies for these image data types, encompassing the collection of use cases and requirements, as well as investigations towards accurate and appropriate quality assessment procedures for the associated representation and coding technologies. The JPEG committee is soliciting input from the involved industrial and academic communities.

JPEG XS – This project aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created, and results of core experiments were reviewed during the 75th JPEG meeting in Sydney. More core experiments are on their way to further improve the final standard; the JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process.

Next generation Image Formats – The JPEG Committee is exploring a new activity which aims to develop an image compression format that offers higher compression efficiency at equivalent subjective quality than currently available formats, and that supports features for both low-end and high-end use cases. On the low end, the new format addresses image-rich user interfaces and web pages over bandwidth-constrained connections. On the high end, it targets efficient compression for high-quality images, including high bit depth, wide color gamut and high dynamic range imagery.

Final Quote

“JPEG is committed to accommodating reliable and flexible security tools for JPEG file formats without compromising legacy usage of our standards,” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG group meets nominally three times a year, in Europe, North America and Asia. The latest, 75th meeting was held on March 26-31, 2017, in Sydney, Australia. The next (76th) JPEG Meeting will be held on July 15-21, 2017, in Torino, Italy.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro (pinheiro@ubi.pt) or Frederik Temmermans (ftemmerm@etrovub.be) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

Future JPEG meetings are planned as follows:

  • No. 76, Torino, IT, 17 – 21 July, 2017
  • No. 77, Macau, CN, 23 – 27 October 2017

MPEG Column: 118th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The entire MPEG press release can be found here comprising the following topics:

  • Coded Representation of Immersive Media (MPEG-I): new work item approved and call for test data issued
  • Common Media Application Format (CMAF): FDIS approved
  • Beyond High Efficiency Video Coding (HEVC): call for evidence for “beyond HEVC” and verification tests for screen content coding extensions of HEVC

Coded Representation of Immersive Media (MPEG-I)

MPEG started to work on the new work item referred to as ISO/IEC 23090, with the “nickname” MPEG-I, targeting future immersive applications. The goal of this new standard is to enable various forms of audio-visual immersion, including panoramic video with 2D and 3D audio and various degrees of true 3D visual perception. It currently comprises five parts: (pt. 1) a technical report describing the scope of this new standard and a set of use cases and applications; (pt. 2) an application format for omnidirectional media (aka OMAF) to address the urgent need of the industry for a standard in this area; (pt. 3) immersive video, which is a kind of placeholder for the successor of HEVC (if at all); (pt. 4) immersive audio, as a placeholder for the successor of 3D audio (if at all); and (pt. 5) point cloud compression. The point cloud compression standard targets lossy compression for point clouds in real-time communication, six Degrees of Freedom (6DoF) virtual reality, dynamic mapping for autonomous driving, cultural heritage applications, etc. Part 2 is related to OMAF, which I discussed in my previous blog post.

MPEG also established an Ad-hoc Group (AhG) on immersive media quality evaluation with the following mandates: 1. Produce a document on VR QoE requirements; 2. Collect test material with immersive video and audio signals; 3. Study existing methods to assess human perception of and reaction to VR stimuli; 4. Develop a test methodology for immersive media, including simultaneous video and audio; 5. Study VR experience metrics and their measurability in VR services and devices. AhGs are open to everybody and mostly operate via mailing lists (join here: https://lists.aau.at/mailman/listinfo/immersive-quality). Interestingly, a Joint Qualinet-VQEG team on Immersive Media (JQVIM) has recently been established with similar goals, and the VR Industry Forum (VRIF) has also issued a call for VR360 content. It seems there is a strong need for a dataset similar to the one we created for MPEG-DASH a long time ago.

The JQVIM has been created as part of the QUALINET task force on “Immersive Media Experiences (IMEx)”, which aims at providing end users the sensation of being part of the particular media, resulting in a worthwhile, informative user and quality of experience. The main goals are providing datasets and tools (hardware/software), subjective quality evaluations, field studies, and cross-validation, together with a strong theoretical foundation underpinning the empirical databases and tools, which will hopefully result in a framework, methodology, and best practices for immersive media experiences.

Common Media Application Format (CMAF)

The Final Draft International Standard (FDIS) was issued at the 118th MPEG meeting, which concludes the formal technical development process of the standard. At this point in time, national bodies can only vote Yes|No, and only editorial changes are allowed (if any) before the International Standard (IS) becomes available. The goal of CMAF is to define a single format for the transport and storage of segmented media including audio/video formats, subtitles, and encryption; it is derived from the ISO Base Media File Format (ISOBMFF). As it is a combination of various MPEG standards, it is referred to as an Application Format (AF), which mainly takes existing formats/standards and glues them together for a specific target application. The CMAF standard clearly targets dynamic adaptive streaming (over, but not limited to, HTTP) while focusing on the media format only and excluding the manifest format. Thus, the CMAF standard shall be compatible with other formats such as MPEG-DASH and HLS. In fact, HLS was extended some time ago to support ‘fragmented MP4’, which we have also demonstrated, and this has been interpreted as a first step towards the harmonization of MPEG-DASH and HLS, at least on the segment format. The delivery of CMAF contents with DASH will be described in part 7 of MPEG-DASH, which basically comprises a mapping of CMAF concepts to DASH terms.
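Since CMAF media is plain ISOBMFF, its segment structure can be inspected with a few lines of code. Below is a minimal sketch of a top-level box walker based on the ISOBMFF box layout (32-bit big-endian size followed by a 4-character type); the file name segment.m4s is a hypothetical example, not something defined by CMAF.

```python
import struct

def walk_boxes(data, offset=0, end=None):
    """Yield (box_type, payload_offset, payload_size) for the top-level
    ISOBMFF boxes in a byte buffer (e.g., a CMAF/fMP4 segment)."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack(">I", data[offset:offset + 4])
        box_type = data[offset + 4:offset + 8].decode("ascii", "replace")
        header = 8
        if size == 1:  # 64-bit 'largesize' follows the box type
            size, = struct.unpack(">Q", data[offset + 8:offset + 16])
            header = 16
        elif size == 0:  # box extends to the end of the buffer
            size = end - offset
        if size < header:  # malformed box; stop rather than loop forever
            break
        yield box_type, offset + header, size - header
        offset += size

# 'segment.m4s' is a hypothetical file name for a CMAF media segment:
with open("segment.m4s", "rb") as f:
    for box_type, _, payload_size in walk_boxes(f.read()):
        print(box_type, payload_size)  # e.g., styp, sidx, moof, mdat
```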

From a research perspective, it would be interesting to explore how certain CMAF concepts are able to address current industry needs, specifically in the context of low-latency streaming which has been demonstrated recently.

Beyond HEVC…

The preliminary call for evidence (CfE) on video compression with capability beyond HEVC has been issued and is addressed to interested parties that have technology providing better compression capability than the existing standard, either for conventional video material or for other domains such as HDR/WCG or 360-degree (“VR”) video. Test cases are defined for SDR, HDR, and 360-degree content. This call has been made jointly by ISO/IEC MPEG and ITU-T SG16/Q6 (VCEG). The evaluation of the responses is scheduled for July 2017 and, depending on the outcome of the CfE, the parent bodies of the Joint Video Exploration Team (JVET), a collaboration between MPEG and VCEG, intend to issue a Draft Call for Proposals by the end of the July meeting.

Finally, verification tests have been conducted for the Screen Content Coding (SCC) extensions to HEVC showing exceptional performance. Screen content is video containing a significant proportion of rendered (moving or static) graphics, text, or animation rather than, or in addition to, camera-captured video scenes. For scenes containing a substantial amount of text and graphics, the tests showed a major benefit in compression capability for the new extensions over both the Advanced Video Coding standard and the previous version of the newer HEVC standard without the new SCC features.

Whether and how new codecs like (beyond-)HEVC compete with AV1 is subject to research and development. It has also been discussed in the scientific literature, but vendor-neutral comparisons are lacking; they are difficult to achieve without comparing apples with oranges (due to the high number of different coding tools and parameters). An important aspect that always needs to be considered is that one typically compares specific implementations of a coding format and not the standard itself, as the encoding process is usually not standardized; only the bitstream syntax is, which implicitly defines the decoder.

Publicly available documents from the 118th MPEG meeting can be found here (scroll down to the end of the page). The next MPEG meeting will be held in Torino, Italy, July 17-21, 2017. Feel free to contact us for any questions or comments.

JPEG Column: 74th JPEG Meeting

The 74th JPEG meeting was held at ITU Headquarters in Geneva, Switzerland, from 15 to 20 January, featuring the following highlights:

  • A Final Call for Proposals on JPEG Pleno was issued focusing on light field coding;
  • Creation of a test model for the upcoming JPEG XS standard;
  • A draft Call for Proposals for JPEG Privacy & Security was issued;
  • JPEG AIC technical report finalized on Guidelines for image coding system evaluation;
  • An AHG was created to investigate the evidence of high throughput JPEG 2000;
  • An AHG on next generation image compression standard was initiated to explore a future image coding format with superior compression efficiency.

 

JPEG Pleno kicks off its activities towards standardization of light field coding

At the 74th JPEG meeting in Geneva, Switzerland, the final Call for Proposals (CfP) on JPEG Pleno was issued, particularly focusing on light field coding. The CfP is available here.

The call encompasses coding technologies for lenslet light field cameras, and content produced by high-density arrays of cameras. In addition, system-level solutions associated with light field coding and processing technologies that have a normative impact are called for. In a later stage, calls for other modalities such as point cloud, holographic and omnidirectional data will be issued, encompassing image representations and new and rich forms of visual data beyond the traditional planar image representations.

JPEG Pleno intends to provide a standard framework to facilitate the capture, representation and exchange of these omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. It aims to define new tools for improved compression while providing advanced functionalities at the system level. Moreover, it aims to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights, as well as other security mechanisms.

 

JPEG XS aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used for a wide range of applications, including as a mezzanine codec for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals issued on March 11, 2016 and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created during the 73rd JPEG meeting in Chengdu, and the results of a first set of core experiments were reviewed during the 74th JPEG meeting in Geneva. More core experiments are on their way before the standard is finalized; the JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process.

 

JPEG Privacy & Security aims at developing a standard for realizing secure image information sharing that is capable of ensuring privacy, maintaining data integrity, and protecting intellectual property rights (IPR). JPEG Privacy & Security will explore how to design and implement the necessary features without significantly impacting coding performance, while ensuring scalability, interoperability, and forward and backward compatibility with current JPEG standard frameworks.

A draft Call for Proposals for JPEG Privacy & Security has been issued, and the JPEG committee invites interested parties to contribute to this standardisation activity in JPEG Systems. The draft CfP is available here.

The call addresses protection mechanisms and technologies such as handling hierarchical levels of access and multiple protection levels for metadata and image protection, checking the integrity of image data and embedded metadata, and supporting backward and forward compatibility with JPEG coding technologies. Interested parties are encouraged to subscribe to the JPEG Privacy & Security email reflector for more information. A final version of the JPEG Privacy & Security Call for Proposals is expected at the 75th JPEG meeting in Sydney, Australia.

 

JPEG AIC provides guidance and standard procedures for advanced image coding evaluation. At this meeting JPEG completed a technical report: TR 29170-1 Guidelines for image coding system evaluation. This report is a compendium of JPEG's best practices in evaluation that draws on several different international standards and international recommendations. The report discusses the use of objective tools, subjective procedures and computational analysis techniques, and when to use the different tools. Some of the techniques are tried-and-true tools familiar to image compression experts and vision scientists. Several tools represent new fields where few tools have been available, such as the evaluation of coding systems for high dynamic range content.

 

High throughput JPEG 2000

The JPEG committee started a new activity on high throughput JPEG 2000, and an AHG was created to investigate the evidence for such a standard. Experts are invited to participate in this group and to join the mailing list.

 

Final Quote

“JPEG continues to offer standards that redefine imaging products and services contributing to a better society without borders.” said Prof. Touradj Ebrahimi, the Convener of the JPEG committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG PLENO families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Tim Bruylants of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

 

Future JPEG meetings are planned as follows:

  • No. 75, Sydney, AU, 26 – 31 March, 2017
  • No. 76, Torino, IT, 17 – 21 July, 2017
  • No. 77, Macau, CN, 23 – 27 October 2017

 

MPEG Column: 117th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The 117th MPEG meeting was held in Geneva, Switzerland and its press release highlights the following aspects:

  • MPEG issues Committee Draft of the Omnidirectional Media Application Format (OMAF)
  • MPEG-H 3D Audio Verification Test Report
  • MPEG Workshop on 5-Year Roadmap Successfully Held in Geneva
  • Call for Proposals (CfP) for Point Cloud Compression (PCC)
  • Preliminary Call for Evidence on video compression with capability beyond HEVC
  • MPEG issues Committee Draft of the Media Orchestration (MORE) Standard
  • Technical Report on HDR/WCG Video Coding

In this article, I’d like to focus on the topics related to multimedia communication starting with OMAF.

Omnidirectional Media Application Format (OMAF)

Real-time entertainment services deployed over the open, unmanaged Internet – streaming audio and video – now account for more than 70% of the evening traffic in North American fixed access networks, and it is assumed that this figure will reach 80% by 2020. More and more such bandwidth-hungry applications and services are pushing onto the market, including immersive media services such as virtual reality and, specifically, 360-degree videos. However, the lack of appropriate standards and, consequently, reduced interoperability is becoming an issue. Thus, MPEG has started a project referred to as the Omnidirectional Media Application Format (OMAF). The first milestone of this standard has been reached, and the committee draft (CD) was approved at the 117th MPEG meeting. Such application formats “are essentially superformats that combine selected technology components from MPEG (and other) standards to provide greater application interoperability, which helps satisfy users’ growing need for better-integrated multimedia solutions” [MPEG-A]. In the context of OMAF, the following aspects are defined:

  • Equirectangular projection format (note: others might be added in the future)
  • Metadata for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data
  • Storage format: ISO base media file format (ISOBMFF)
  • Codecs: High Efficiency Video Coding (HEVC) and MPEG-H 3D audio

OMAF is the first specification defined as part of a bigger project currently referred to as ISO/IEC 23090 – Immersive Media (Coded Representation of Immersive Media). It currently has the acronym MPEG-I; we previously used MPEG-VR, which is now replaced by MPEG-I (and that still might change in the future). It is expected that the standard will become a Final Draft International Standard (FDIS) by Q4 of 2017. Interestingly, it does not include AVC and AAC, probably the most obvious candidates for video and audio codecs, which have been massively deployed in the last decade and probably will remain a major dominator (and also denominator) in upcoming years. On the other hand, the equirectangular projection format is currently the only one defined, as it is already broadly used in off-the-shelf hardware/software solutions for the creation of omnidirectional/360-degree videos. Finally, the metadata formats enabling the rendering of 360-degree monoscopic and stereoscopic video are highly appreciated. A solution for MPEG-DASH based on AVC/AAC utilizing the equirectangular projection format for both monoscopic and stereoscopic video is shown as part of Bitmovin’s solution for VR and 360-degree video.
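For illustration, the equirectangular projection maps a viewing direction directly to pixel coordinates. Below is a minimal sketch of this standard spherical-to-planar mapping; it is the textbook formulation, not code taken from the OMAF specification.

```python
import math

def equirect_to_pixel(yaw, pitch, width, height):
    """Map a viewing direction to equirectangular pixel coordinates.
    yaw in [-pi, pi] (longitude), pitch in [-pi/2, pi/2] (latitude)."""
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v

# Looking 90 degrees to the right, slightly up, in a 3840x1920 frame:
print(equirect_to_pixel(math.pi / 2, math.pi / 8, 3840, 1920))  # (2880.0, 720.0)
```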

Research aspects related to OMAF can be summarized as follows:

  • HEVC supports tiles, which allow for efficient streaming of omnidirectional video, but HEVC is not as widely deployed as AVC. Thus, it would be interesting to investigate how to mimic such a tile-based streaming approach utilizing AVC.
  • How to efficiently encode and package HEVC tile-based video is an open issue and calls for a tradeoff between tile flexibility and coding efficiency.
  • When combined with MPEG-DASH (or similar), there is a need to update the adaptation logic, as tiles add yet another dimension that needs to be considered in order to provide a good Quality of Experience (QoE); a toy sketch of such a logic is given after this list.
  • QoE is a big issue here and not well covered in the literature. Various aspects are worth investigating, including a comprehensive dataset to enable reproducibility of research results in this domain. Finally, as omnidirectional video allows for interactivity, the user experience also becomes an issue that needs to be covered within the research community.
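As an illustration of the adaptation-logic point above, here is a toy sketch of viewport-aware tile rate selection: tiles inside the predicted viewport get the highest quality the throughput estimate can sustain, background tiles the lowest. The bitrate ladder, tile counts, and safety margin are all assumptions for illustration, not part of any standard or deployed player.

```python
# Available per-tile bitrates in Mbit/s (assumed ladder):
LADDER = [0.5, 1.0, 2.0, 4.0]

def select_tile_bitrates(tiles_in_viewport, tiles_total, throughput_mbps):
    """Return (viewport_rate, background_rate) per tile in Mbit/s."""
    background = tiles_total - tiles_in_viewport
    # Reserve the lowest rate for background tiles, keep a 10% safety margin.
    budget = 0.9 * throughput_mbps - background * LADDER[0]
    # Pick the highest ladder step the remaining budget can sustain
    # for all viewport tiles.
    for rate in reversed(LADDER):
        if tiles_in_viewport * rate <= budget:
            return rate, LADDER[0]
    return LADDER[0], LADDER[0]

# 6 of 24 tiles in the viewport at an estimated 25 Mbit/s throughput:
print(select_tile_bitrates(6, 24, 25.0))  # (2.0, 0.5)
```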

A second topic I’d like to highlight in this blog post is related to the preliminary call for evidence on video compression with capability beyond HEVC. 

Preliminary Call for Evidence on video compression with capability beyond HEVC

A call for evidence is issued to see whether sufficient technological potential exists to start a more formal phase of standardization. Currently, MPEG together with VCEG have developed a Joint Exploration Model (JEM) algorithm that is already known to provide bit rate reductions in the range of 20-30% for relevant test cases, as well as subjective quality benefits. The goal of this new standard (with a preliminary target date for completion around late 2020) is to develop technology providing better compression capability than the existing standard, not only for conventional video material but also for other domains such as HDR/WCG or VR/360-degree video. An important aspect in this area is certainly over-the-top video delivery (as with MPEG-DASH), which includes features such as scalability and Quality of Experience (QoE). Scalable video coding has been added to video coding standards since MPEG-2 but never reached widespread adoption. That might change if it becomes a prime-time feature of a new video codec, as scalable video coding clearly shows benefits when doing dynamic adaptive streaming over HTTP. QoE has already found its way into video coding, at least when it comes to evaluating the results, where subjective tests are now an integral part of every new video codec developed by MPEG (in addition to the usual PSNR measurements). Therefore, the most interesting research topics from a multimedia communication point of view would be to optimize the DASH-like delivery of such new codecs with respect to scalability and QoE. Note that if you don't like scalable video coding, feel free to propose something else as long as it reduces storage and networking costs significantly.

 

MPEG Workshop “Global Media Technology Standards for an Immersive Age”

On January 18, 2017, MPEG successfully held a public workshop on “Global Media Technology Standards for an Immersive Age”, hosting a series of keynotes from Bitmovin, DVB, Orange, Sky Italia, and Technicolor. Stefan Lederer, CEO of Bitmovin, discussed today's and future challenges with new forms of content like 360°, AR and VR. All slides are available here, and MPEG took the feedback into consideration in an update of its 5-year standardization roadmap. David Wood (EBU) reported on the DVB VR study mission, and Ralf Schaefer (Technicolor) presented a snapshot of VR services. Gilles Teniou (Orange) discussed video formats for VR, pointing out a new opportunity to increase the content value but also raising the question of what is missing today. Finally, Massimo Bertolotti (Sky Italia) introduced his view on the immersive media experience age.

Overall, the workshop was well attended and, as mentioned above, MPEG is currently working on a new standards project related to immersive media. Currently, this project comprises five parts. The first part comprises a technical report describing the scope (incl. the kind of system architecture), use cases, and applications. The second part is OMAF (see above), and the third/fourth parts are related to immersive video and audio, respectively. Part five is about point cloud compression.

For those interested, please check out the slides from industry representatives in this field and draw your own conclusions about what could be interesting for your own research. I'm happy to see any reactions, hints, etc. in the comments.

Finally, let’s have a look what happened related to MPEG-DASH, a topic with a long history on this blog.

MPEG-DASH and CMAF: Friend or Foe?

For MPEG-DASH and CMAF it was a meeting “in between” official standardization stages. MPEG-DASH experts are still working on the third edition, which will be a consolidated version of the 2nd edition and various amendments and corrigenda. In the meantime, MPEG issued a white paper on the new features of MPEG-DASH, which I would like to highlight here.

  • Spatial Relationship Description (SRD): allows describing tiles and regions of interest for the partial delivery of media presentations. This is highly related to OMAF and VR/360-degree video streaming (see the parsing sketch after this list).
  • External MPD linking: this feature allows describing the relationship between a single program/channel and a preview mosaic channel that carries all channels at once within the MPD.
  • Period continuity: a simple signaling mechanism to indicate whether one period is a continuation of the previous one, which is relevant for ad insertion or live programs.
  • MPD chaining: allows chaining two or more MPDs to each other, e.g., a pre-roll ad when joining a live program.
  • Flexible segment format for broadcast TV: separates the signaling of switching points and random access points in each stream; thus, the content can be encoded with good compression efficiency, allowing a higher number of random access points but a lower frequency of switching points.
  • Server and network-assisted DASH (SAND): enables asynchronous network-to-client and network-to-network communication of quality-related assisting information.
  • DASH with server push and WebSockets: basically addresses issues related to the HTTP/2 push feature and WebSocket.
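To give an idea of how SRD surfaces in a manifest, below is a small sketch that collects SRD descriptors from an MPD using Python's standard library. The value layout follows the urn:mpeg:dash:srd:2014 scheme; error handling is omitted, and manifest.mpd is a hypothetical file name.

```python
import xml.etree.ElementTree as ET

MPD_NS = "{urn:mpeg:dash:schema:mpd:2011}"
SRD_SCHEME = "urn:mpeg:dash:srd:2014"

def extract_srd(mpd_xml):
    """Collect SRD descriptors from an MPD string. The value attribute
    carries 'source_id,object_x,object_y,object_width,object_height',
    optionally followed by 'total_width,total_height[,spatial_set_id]'."""
    root = ET.fromstring(mpd_xml)
    srds = []
    for aset in root.iter(MPD_NS + "AdaptationSet"):
        props = (aset.findall(MPD_NS + "SupplementalProperty")
                 + aset.findall(MPD_NS + "EssentialProperty"))
        for prop in props:
            if prop.get("schemeIdUri") == SRD_SCHEME:
                srds.append([int(v) for v in prop.get("value").split(",")])
    return srds

# Usage: each returned list describes one tile's position on the source grid.
print(extract_srd(open("manifest.mpd").read()))
```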

CMAF issued a study document which captures the current progress, and all national bodies are encouraged to take this into account when commenting on the Committee Draft (CD). To answer the question in the headline above: it looks more and more like DASH and CMAF will become friends – let's hope that the friendship lasts for a long time.

What else happened at the MPEG meeting?

  • Committee Draft MORE (note: type ‘man more’ on any unix/linux/mac terminal and you'll get ‘less – opposite of more’;): MORE stands for “Media Orchestration” and provides a specification that enables the automated combination of multiple media sources (cameras, microphones) into a coherent multimedia experience. Additionally, it targets use cases where a multimedia experience is rendered on multiple devices simultaneously, again giving a consistent and coherent experience.
  • Technical Report on HDR/WCG Video Coding: This technical report comprises conversion and coding practices for High Dynamic Range (HDR) and Wide Colour Gamut (WCG) video coding (ISO/IEC 23008-14). The purpose of this document is to provide a set of publicly referenceable recommended guidelines for the operation of AVC or HEVC systems adapted for compressing HDR/WCG video for consumer distribution applications.
  • CfP Point Cloud Compression (PCC): This call solicits technologies for the coding of 3D point clouds with associated attributes such as color and material properties. It will be part of the immersive media project introduced above.
  • MPEG-H 3D Audio verification test report: This report presents results of four subjective listening tests that assessed the performance of the Low Complexity Profile of MPEG-H 3D Audio. The tests covered a range of bit rates and a range of “immersive audio” use cases (i.e., from 22.2 down to 2.0 channel presentations). Seven test sites participated in the tests with a total of 288 listeners.

The next MPEG meeting will be held in Hobart, April 3-7, 2017. Feel free to contact us for any questions or comments.