JPEG Column: 80th JPEG Meeting in Berlin, Germany

The 80th JPEG meeting was held in Berlin, Germany, from 7 to 13 July 2018. During this meeting, JPEG issued a record number of ballots and output documents across its multiple ongoing activities. These record numbers reflect the level of commitment of the JPEG standardisation committee. A strong effort is underway to standardise new solutions for emerging image technologies, enabling the interoperability of different systems in the growing multimedia market. Moreover, these new initiatives are intended to provide royalty-free patent licensing in at least one of the available profiles, which should promote wider adoption of these future JPEG standards by the consumer market as well as by application and system developers.

Significant progress on the low-latency and high-throughput standardisation initiatives took place at the Berlin meeting. The new Part 15 of JPEG 2000, known as High Throughput JPEG 2000 (HTJ2K), has reached Committee Draft status. Furthermore, JPEG XS profiles and levels were released for their second and final ballot. These new low-complexity standards are therefore expected to be finalised shortly, providing new solutions for developers and consumers in applications where mobility is important and large bandwidth is available. Virtual and augmented reality, as well as 360º images and video, are among the applications that might benefit from these new standards.


JPEG meeting plenary in Berlin.

The 80th JPEG meeting had the following highlights:

  • HTJ2K reaches Committee Draft status;
  • JPEG XS profiles and levels are under ballot;
  • JPEG XL publishes additional information to the CfP;
  • JPEG Systems – JUMBF & JPEG 360;
  • JPEG-in-HEIF;
  • JPEG Blockchain white paper;
  • JPEG Pleno Light Field verification model.

The following summarizes the various highlights during JPEG’s Berlin meeting.

HTJ2K

The JPEG committee is pleased to announce a significant milestone, with ISO/IEC 15444-15 High-Throughput JPEG 2000 (HTJ2K) reaching Committee Draft status.

HTJ2K introduces a new FAST block coder to the JPEG 2000 family. The FAST block coder can be used in place of the JPEG 2000 Part 1 arithmetic block coder and, as illustrated in Table 1, offers on average an order-of-magnitude increase in decoding and encoding throughput, at the expense of slightly reduced coding efficiency and the elimination of quality scalability.

Table 1. Comparison between the FAST block coder and the JPEG 2000 Part 1 arithmetic block coder. Results were generated by optimized implementations evaluated as part of the HTJ2K activity, using professional video test images in the transcoding context specified in the Call for Proposals available at https://jpeg.org. Figures are relative to the JPEG 2000 Part 1 arithmetic block coder (bpp: bits per pixel).

JPEG 2000 Part 1 Block Coder Bitrate        0.5 bpp  1 bpp  2 bpp  4 bpp  6 bpp  lossless
Average FAST Block Coder Speedup Factor     17.5x    19.5x  21.1x  25.5x  27.4x  43.7x
Average FAST Block Decoder Speedup Factor   10.2x    11.4x  11.9x  14.1x  15.1x  24.0x
Average Increase in Codestream Size         8.4%     7.3%   7.1%   6.6%   6.5%   6.6%

Apart from the block coding algorithm itself, FAST block coding does not modify the JPEG 2000 codestream and allows mathematically lossless transcoding to and from JPEG 2000 codestreams. As a result, the FAST block coding algorithm can be readily integrated into existing JPEG 2000 applications, where it can bring significant increases in processing efficiency.
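
To make the throughput/efficiency trade-off concrete, here is a back-of-envelope calculation using the 1 bpp column of Table 1 (11.4x decode speedup, 7.3% codestream growth). The baseline decode time, image size and archive size are hypothetical illustration values, not figures from the HTJ2K activity:

```python
# Back-of-envelope trade-off using the 1 bpp column of Table 1.
# Baseline figures (decode time, sizes) are hypothetical illustrations.
BASELINE_DECODE_MS = 40.0   # assumed JPEG 2000 Part 1 decode time per image
BASELINE_SIZE_MB = 2.0      # assumed codestream size per image at 1 bpp
DECODE_SPEEDUP = 11.4       # from Table 1 (FAST decoder, 1 bpp)
SIZE_INCREASE = 0.073       # from Table 1 (codestream growth, 1 bpp)

fast_decode_ms = BASELINE_DECODE_MS / DECODE_SPEEDUP
fast_size_mb = BASELINE_SIZE_MB * (1 + SIZE_INCREASE)

images = 1_000_000  # a hypothetical image archive
saved_hours = images * (BASELINE_DECODE_MS - fast_decode_ms) / 1000 / 3600
extra_storage_gb = images * (fast_size_mb - BASELINE_SIZE_MB) / 1024

print(f"decode time per image: {fast_decode_ms:.1f} ms (was {BASELINE_DECODE_MS} ms)")
print(f"CPU time saved over {images} decodes: {saved_hours:.1f} h")
print(f"extra storage: {extra_storage_gb:.0f} GB")
```

Under these assumptions, roughly ten CPU-hours of decoding are traded for about 140 GB of extra storage, which illustrates why the standard targets throughput-bound rather than storage-bound applications.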

 

JPEG XS

This project aims at the standardization of a visually lossless, low-latency and lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets. Targeted use cases are video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression (in particular in the automotive industry). The Core Coding System, expected to be published in Q4 2018, allows for visually lossless quality at a 6:1 compression ratio for most content, 32 lines of end-to-end latency, and ultra-low-complexity implementations in ASIC, FPGA, CPU and GPU. Following the 80th JPEG meeting in Berlin, profiles and levels (addressing specific application fields and use cases) are now under final ballot, with publication expected in Q1 2019. Different means to store and transport JPEG XS codestreams in files, over IP networks or over SDI infrastructures have also been defined and go to a first ballot.
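
As a rough illustration of what these numbers mean in practice, the sketch below works out bandwidth and latency for one assumed video format (UHD 4:2:2 10-bit at 60 fps; the format choice and the 10 GbE comparison are illustrative assumptions, not taken from the standard):

```python
# Rough bandwidth/latency arithmetic for JPEG XS (assumed UHD 4:2:2 10-bit 60p).
W, H, FPS = 3840, 2160, 60
BITS_PER_PIXEL = 2 * 10     # 4:2:2 chroma subsampling, 10 bits per sample
COMPRESSION_RATIO = 6       # visually lossless ratio cited for most content
LATENCY_LINES = 32          # end-to-end latency in video lines

raw_gbps = W * H * BITS_PER_PIXEL * FPS / 1e9
compressed_gbps = raw_gbps / COMPRESSION_RATIO
latency_ms = LATENCY_LINES / H * (1000 / FPS)  # fraction of one frame period

print(f"raw:        {raw_gbps:.2f} Gbit/s")
print(f"compressed: {compressed_gbps:.2f} Gbit/s (fits a 10 GbE link)")
print(f"latency:    {latency_ms:.3f} ms")
```

Uncompressed UHD at these settings needs close to 10 Gbit/s; at 6:1 it drops to about 1.7 Gbit/s, and 32 lines of a 2160-line frame at 60 fps correspond to roughly a quarter of a millisecond, which is what makes the codec attractive for live production links.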

 

JPEG XL

The JPEG Committee issued a Call for Proposals (CfP) following its 79th meeting (April 2018), with the objective of seeking technologies that fulfill the objectives and scope of the Next-Generation Image Coding activity. The CfP, together with all related information, can be found at https://jpeg.org/downloads/jpegxl/jpegxl-cfp.pdf. The deadline for expressions of interest and registration was August 15, 2018, and submissions to the CfP were due on September 1, 2018.

As an outcome of the 80th JPEG meeting in Berlin, a document was produced with additional information on the objective and subjective quality assessment methodologies that will be used to evaluate the anchors and the proposals to the CfP; it is available at https://jpeg.org/downloads/jpegxl/wg1n80024-additional-information-cfp.pdf. Moreover, it describes a detailed workflow, together with the software and command lines used to generate the anchors and to compute the objective quality metrics.

To stay posted on the action plan of JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to its e-mail reflector.

 

JPEG Systems – JUMBF & JPEG 360

The JPEG Committee progressed towards a common framework and definition for metadata which will improve the ability to share 360 images. At the 80th meeting, the Committee Draft ballot was completed and the comments were reviewed; the activity is now progressing towards DIS text for upcoming ballots on the “JPEG Universal Metadata Box Format (JUMBF)”, ISO/IEC 19566-5, and “JPEG 360”, ISO/IEC 19566-6. Investigations have also started on applying the framework to the structure of JPEG Pleno files.

 

JPEG-in-HEIF

The JPEG Committee made significant progress towards standardizing how JPEG XR, JPEG 2000 and the upcoming JPEG XS will be carried in the ISO/IEC 23008-12 image file container.

 

JPEG Blockchain

Fake news, copyright violation, media forensics, privacy and security are emerging challenges for digital media. JPEG has determined that blockchain technology has great potential as a technology component to address these challenges in transparent and trustworthy media transactions. However, blockchain needs to be integrated closely with a widely adopted standard to ensure broad interoperability of protected images. JPEG calls for industry participation to help define use cases and requirements that will drive the standardization process. To reach this objective, JPEG issued a white paper entitled “Towards a Standardized Framework for Media Blockchain” that elaborates on the initiative, exploring relevant standardization activities, industrial needs and use cases. In addition, JPEG plans to organise a workshop during its 81st meeting in Vancouver on Tuesday, 16 October 2018. More information about the workshop is available at https://www.jpeg.org. To keep informed and get involved, interested parties are invited to register on the ad hoc group’s mailing list at http://jpeg-blockchain-list.jpeg.org.

 

JPEG Pleno

The JPEG Committee is currently pursuing three activities in the framework of the JPEG Pleno Standardization: Light Field, Point Cloud and Holographic content coding.

At its Berlin meeting, the committee produced a first version of the verification model software for light field coding. This software supports the core functionality intended for the light field coding standard and serves for intensive testing of the standard. JPEG Pleno Light Field Coding supports content from various sensors, ranging from lenslet cameras to high-density camera arrays, as well as light-field-related content production chains up to light field displays.

For the coding of point clouds and holographic data, activities are still in an exploratory phase, addressing the elaboration of use cases and the refinement of requirements for coding these modalities. In addition, experimental procedures are being designed to facilitate the quality evaluation and testing of technologies that will be submitted in later calls for coding technologies. Interested parties active in point-cloud- and holography-related markets and applications, from both industry and academia, are welcome to participate in this standardization activity.

 

Final Quote 

“After a record number of ballots and output documents generated during its 80th meeting, the JPEG Committee pursues its activity on the specification of effective and reliable solutions for image coding offering needed features in emerging multimedia applications. The new JPEG XS and JPEG 2000 part 15 provide low complexity compression solutions that will benefit many growing markets such as content production, virtual and augmented reality as well as autonomous cars and drones.” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 80th JPEG Meeting was held on 7-13 July 2018, in Berlin, Germany. The next 81st JPEG Meeting will be held on 13-19 October 2018, in Vancouver, Canada.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

 

Future JPEG meetings are planned as follows:

  • No 81, Vancouver, Canada, October 13 to 19, 2018
  • No 82, Lisbon, Portugal, January 19 to 25, 2019
  • No 83, Geneva, Switzerland, March 16 to 22, 2019

MPEG Column: 123rd MPEG Meeting in Ljubljana, Slovenia

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • MPEG issues Call for Evidence on Compressed Representation of Neural Networks
  • Network-Based Media Processing – MPEG evaluates responses to call for proposal and kicks off its technical work
  • MPEG finalizes 1st edition of Technical Report on Architectures for Immersive Media
  • MPEG releases software for MPEG-I visual activities
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with new features

MPEG issues Call for Evidence on Compressed Representation of Neural Networks

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, media coding, data analytics, translation and many other fields. Their recent success is based on the feasibility of processing much larger and more complex neural networks (deep neural networks, DNNs) than in the past, and on the availability of large-scale training data sets. As a consequence, trained neural networks contain a large number of parameters (weights), resulting in a quite large size (e.g., several hundred MBs). Many applications require the deployment of a particular trained network instance, potentially to a large number of devices, which may have limitations in terms of processing power and memory (e.g., mobile devices or smart cameras). Any use case in which a trained neural network (and its updates) needs to be deployed to a number of devices could thus benefit from a standard for the compressed representation of neural networks.

At its 123rd meeting, MPEG has issued a Call for Evidence (CfE) for compression technology for neural networks. The compression technology will be evaluated in terms of compression efficiency, runtime, and memory consumption and the impact on performance in three use cases: visual object classification, visual feature extraction (as used in MPEG Compact Descriptors for Visual Analysis) and filters for video coding. Responses to the CfE will be analyzed on the weekend prior to and during the 124th MPEG meeting in October 2018 (Macau, CN).

Research aspects: As this is about “compression” of structured data, research aspects will mainly focus on compression efficiency for both lossy and lossless scenarios. Additionally, communication aspects, such as the transmission of compressed artificial neural networks within lossy, large-scale environments including update mechanisms, may become relevant in the (near) future. Furthermore, additional use cases should be communicated to MPEG before the next meeting.
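
One baseline that CfE responses will likely be measured against is plain scalar quantization of the weights. The sketch below is a generic illustration of that idea, not a technique prescribed by the CfE; the toy weight vector is invented:

```python
# Uniform 8-bit quantization of weights: a generic baseline for
# neural-network compression, not a method prescribed by the MPEG CfE.
def quantize(weights, bits=8):
    """Map floats to signed integers of the given bit width plus a scale."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.50, -1.27, 0.003, 0.9999, -0.42]       # toy weight vector
q, scale = quantize(weights)
recon = dequantize(q, scale)

# float32 -> int8 is a 4x size reduction before any entropy coding,
# at the cost of a bounded per-weight error of at most scale / 2.
max_err = max(abs(w - r) for w, r in zip(weights, recon))
print(f"scale={scale:.6f}, max reconstruction error={max_err:.6f}")
```

The CfE goes further than this baseline by also evaluating the impact on task performance (classification accuracy, feature matching, coding gain), which lossless size reduction alone does not capture.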

Network-Based Media Processing – MPEG evaluates responses to call for proposal and kicks off its technical work

Recent developments in multimedia have brought significant innovation and disruption to the way multimedia content is created and consumed. At its 123rd meeting, MPEG analyzed the technologies submitted by eight industry leaders as responses to the Call for Proposals (CfP) for Network-Based Media Processing (NBMP, MPEG-I Part 8). These technologies address advanced media processing use cases such as network stitching for virtual reality (VR) services, super-resolution for enhanced visual quality, transcoding by a mobile edge cloud, or viewport extraction for 360-degree video within the network environment. NBMP allows service providers and end users to describe media processing operations that are to be performed by the entities in the networks. NBMP will describe the composition of network-based media processing services out of a set of NBMP functions and make these NBMP services accessible through Application Programming Interfaces (APIs).

NBMP will support the existing delivery methods such as streaming, file delivery, push-based progressive download, hybrid delivery, and multipath delivery within heterogeneous network environments. MPEG issued a Call for Proposal (CfP) seeking technologies that allow end-user devices, which are limited in processing capabilities and power consumption, to offload certain kinds of processing to the network.

After a formal evaluation of submissions, MPEG selected three technologies as starting points for the (i) workflow, (ii) metadata, and (iii) interfaces for static and dynamically acquired NBMP. A key conclusion of the evaluation was that NBMP can significantly improve the performance and efficiency of the cloud infrastructure and media processing services.

Research aspects: I reported about NBMP in my previous post and basically the same applies here. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

MPEG finalizes 1st edition of Technical Report on Architectures for Immersive Media

At its 123rd meeting, MPEG finalized the first edition of its Technical Report (TR) on Architectures for Immersive Media. This report constitutes the first part of the MPEG-I standard for the coded representation of immersive media and introduces the eight MPEG-I parts currently under specification in MPEG. In particular, it addresses three Degrees of Freedom (3DoF; three rotational and unlimited movements around the X, Y and Z axes (respectively pitch, yaw and roll)), 3DoF+ (3DoF with additional limited translational movements (typically, head movements) along the X, Y and Z axes), and 6DoF (3DoF with full translational movements along the X, Y and Z axes) experiences, but it mostly focuses on 3DoF. Future versions are expected to cover aspects beyond 3DoF. The report documents use cases and defines architectural views on elements that contribute to an overall immersive experience. Finally, the report also includes quality considerations for immersive services and introduces minimum requirements as well as objectives for a high-quality immersive media experience.

Research aspects: ISO/IEC technical reports are typically publicly available and provide informative descriptions of what a standard is about. For MPEG-I, this technical report can be used as a guideline for possible architectures for immersive media. This first edition focuses on 3DoF (three rotational and unlimited movements around the X, Y and Z axes, i.e., pitch, yaw and roll) and outlines the other degrees of freedom currently foreseen in MPEG-I. It also highlights use cases and quality-related aspects that could be of interest for the research community.

MPEG releases software for MPEG-I visual activities

MPEG-I visual is an activity that addresses the specific requirements of immersive visual media for six-degrees-of-freedom virtual walkthroughs with correct motion parallax within a bounded volume. MPEG-I visual covers application scenarios from 3DoF+ with slight body and head movements in a sitting position to 6DoF allowing some walking steps from a central position. At the 123rd MPEG meeting, important progress was achieved in software development. A new Reference View Synthesizer (RVS 2.0) has been released for 3DoF+, allowing virtual viewpoints to be synthesized from an unlimited number of input views. RVS integrates code bases from Universite Libre de Bruxelles and Philips, who acted as software coordinator. A Weighted-to-Spherically-uniform PSNR (WS-PSNR) software utility, essential to the 3DoF+ and 6DoF activities, has been developed by Zhejiang University. WS-PSNR is a full-reference objective quality metric for all flavors of omnidirectional video. RVS and WS-PSNR are essential software tools for the upcoming Call for Proposals on 3DoF+ expected to be released at the 124th MPEG meeting in October 2018 (Macau, CN).
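
The idea behind WS-PSNR is to weight each pixel of the equirectangular projection by the solid angle it covers on the sphere, i.e. by the cosine of its latitude, so oversampled polar regions do not dominate the score. The sketch below is an illustrative pure-Python rendering of that idea on toy frames; consult the reference software from Zhejiang University for the normative details:

```python
import math

def ws_psnr(ref, deg, max_val=255):
    """WS-PSNR for equirectangular frames given as lists of pixel rows.

    Each row j is weighted by the cosine of its latitude, approximating
    the solid angle its pixels cover on the sphere."""
    h = len(ref)
    num = den = 0.0
    for j in range(h):
        w = math.cos((j + 0.5 - h / 2) * math.pi / h)  # latitude weight
        for x, y in zip(ref[j], deg[j]):
            num += w * (x - y) ** 2
            den += w
    wmse = num / den
    if wmse == 0:
        return math.inf
    return 10 * math.log10(max_val ** 2 / wmse)

# Toy 4x4 example: the same error hurts less near a pole than at the equator.
ref = [[128] * 4 for _ in range(4)]
pole_err = [row[:] for row in ref]
pole_err[0][0] += 10        # error in the top (polar) row
equator_err = [row[:] for row in ref]
equator_err[2][0] += 10     # error near the equator
print(ws_psnr(ref, pole_err), ">", ws_psnr(ref, equator_err))
```

Because the equator row carries a larger weight, the equator-error frame yields a lower (worse) WS-PSNR than the pole-error frame, whereas plain PSNR would score both identically.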

Research aspects: MPEG does not only produce text specifications but also reference software and conformance bitstreams, which are important assets for both research and development. Thus, it is very much appreciated to have a new Reference View Synthesizer (RVS 2.0) and Weighted-to-Spherically-uniform PSNR (WS-PSNR) software utility available which enables interoperability and reproducibility of R&D efforts/results in this area.

MPEG enhances ISO Base Media File Format (ISOBMFF) with new features

At the 123rd MPEG meeting, a couple of new amendments related to ISOBMFF have reached their first milestone. Amendment 2 to the 6th edition of ISO/IEC 14496-12 will add the option of relative addressing as an alternative to offset addressing, which in some environments and workflows can simplify the handling of files, and will allow the creation of derived visual tracks using items and samples in other tracks with some transformation, for example rotation. Another amendment that reached its first milestone is the first amendment to the 3rd edition of ISO/IEC 23001-7. It will allow the use of multiple keys for a single sample and the scrambling of some parts of AVC or HEVC video bitstreams without breaking conformance to existing decoders. That is, the bitstream will remain decodable by existing decoders, but some parts of the video will be scrambled. These amendments are expected to reach their final milestone in Q3 2019.

Research aspects: The ISOBMFF reference software is now available on GitHub, which is a valuable service to the community and allows for active participation in the standard’s development even from outside of MPEG. Interested parties are encouraged to have a look at it and consider contributing to this project.


What else happened at #MPEG123?

  • The MPEG-DASH 3rd edition is finally available as an output document (N17813; only available to MPEG members), combining the 2nd edition, four amendments, and two corrigenda. We expect final publication later this year or early next year.
  • A new DASH amendment and corrigenda items are in the pipeline and should also progress to final stages some time next year. The status of MPEG-DASH (July 2018) can be seen below.

Status of MPEG-DASH (July 2018).

  • MPEG received a rather interesting input document related to “streaming first”, which resulted in a publicly available output document entitled “thoughts on adaptive delivery and access to immersive media”. The key idea is to focus on streaming first, rather than on the file/encapsulation formats typically used for storage (with streaming second). This document should become available here.
  • For a couple of meetings now, MPEG has maintained a standardization roadmap highlighting recent/major MPEG standards and documenting the roadmap for the next five years. It is definitely worth keeping this in mind when defining/updating your own roadmap.
  • JVET/VVC issued Working Draft 2 of Versatile Video Coding (N17732 | JVET-K1001) and Test Model 2 of Versatile Video Coding (VTM 2) (N17733 | JVET-K1002). Please note that N-documents are MPEG-internal, but JVET documents are publicly accessible here: http://phenix.it-sudparis.eu/jvet/. An interesting aspect is that VTM2/WD2 should provide >20% rate reduction compared to HEVC, all with reasonable complexity, and the next benchmark set (BMS) should provide close to 30% rate reduction vs. HEVC. Further improvements are expected from (a) improved merge, intra prediction, etc., (b) decoder-side estimation with low complexity, (c) multi-hypothesis prediction and OBMC, (d) diagonal and other geometric partitioning, (e) secondary transforms, (f) new approaches to loop filtering, reconstruction and prediction filtering (denoising, non-local, diffusion-based, bilateral, etc.), (g) current picture referencing, palette, and (h) neural networks.
  • In addition to VVC (a joint activity with VCEG), MPEG is working on two video-related exploration activities, namely (a) an enhanced quality profile of the AVC standard and (b) a low-complexity enhancement video codec. Both topics will be further discussed within respective Ad-hoc Groups (AhGs), and further details are available here.
  • Finally, MPEG established an Ad-hoc Group (AhG) dedicated to long-term planning, which is also looking into application areas/domains other than media coding/representation.

In this context it is probably worth mentioning the following DASH awards at recent conferences

Additionally, there have been two tutorials at ICME related to MPEG standards, which you may find interesting

Quality of Experience Column: An Introduction

“Quality of Experience (QoE) is the degree of delight or annoyance of the user of an application or service. It results from the fulfillment of his or her expectations with respect to the utility and / or enjoyment of the application or service in the light of the user’s personality and current state.” (Definition from the Qualinet White Paper, 2013).

Research on Quality of Experience (QoE) has advanced significantly in recent years and attracts attention from various stakeholders. Different facets have been addressed by the research community: subjective user studies to identify QoE influence factors for particular applications such as video streaming; QoE models to capture the effects of those influence factors on concrete applications; and QoE monitoring approaches, at the end-user site but also within the network, to assess QoE during service consumption and to provide means for QoE management and improved QoE. However, in order to progress in the area of QoE, new research directions have to be taken. The application of QoE in practice needs to consider the entire QoE ecosystem and the stakeholders along the service delivery chain to the end user.

The term Quality of Experience dates back to a presentation in 2001 (interestingly, at a Quality of Service workshop) and Figure 1 depicts an overview of QoE showing some of the influence factors.


Figure 1. Quality of Experience (from Ebrahimi’09)

Different communities have been very active in the context of QoE. A long-established community is Qualinet (www.qualinet.eu), which started in 2010. The Qualinet community provided a definition of QoE in its [Qualinet Whitepaper], a contribution of the European Network on Quality of Experience in Multimedia Systems and Services, Qualinet (COST Action IC 1003), to the scientific discussion about the term QoE and its underlying concepts. The concepts and ideas cited in that paper mainly refer to the Quality of Experience of multimedia communication systems, but may be helpful also for other areas where QoE is an issue. Qualinet is organized in different task forces which address various research topics: Managing Web and Cloud QoE; Gaming; QoE in Medical Imaging and Healthcare; Crowdsourcing; Immersive Media Experiences (IMEx). There is also a liaison with VQEG, and a task force on Qualinet Databases provides a platform with QoE-related datasets. The Qualinet database (http://dbq.multimediatech.cz/), a rich and internationally recognized collection of content of different sorts shared with the scientific community at large, is seen as key for current and future developments in Quality of Experience.

Another example of the Qualinet activities is the Crowdsourcing task force. The goals of this task force are, among others, to identify the scientific challenges and problems of QoE assessment via crowdsourcing, as well as its strengths and benefits, and to derive a methodology and setup for crowdsourcing in QoE assessment, including statistical approaches for proper analysis. Crowdsourcing is a popular approach that outsources tasks via the Internet to a large number of users. Commercial crowdsourcing platforms provide a global pool of users employed for performing short and simple online tasks. For the quality assessment of multimedia services and applications, crowdsourcing enables new possibilities by moving the subjective test into the crowd, resulting in a larger diversity of test subjects, faster turnover of test campaigns, and reduced costs due to the low reimbursement costs of participants. Furthermore, crowdsourcing makes it easy to address additional features like real-life environments. Crowdsourced quality assessment, however, is not a straightforward implementation of existing subjective testing methodologies in an Internet-based environment: additional challenges and differences from lab studies occur in conceptual, technical, and motivational areas. The white paper [Crowdsourcing Best Practices] summarizes the recommendations and best practices for crowdsourced quality assessment of multimedia applications from the Qualinet Task Force on “Crowdsourcing” and is also discussed within the ITU-T standardization work item P.CROWD.
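
A core piece of such a methodology is the statistical treatment of the collected ratings, e.g. reporting a Mean Opinion Score (MOS) together with a confidence interval rather than a bare average. The following is a minimal standard-library sketch; the ratings and the normal-approximation 95% interval are illustrative, and real crowdsourcing analyses add reliability screening of workers on top:

```python
import statistics

def mos_with_ci(ratings, z=1.96):
    """Mean Opinion Score with a normal-approximation 95% confidence interval."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    half = z * statistics.stdev(ratings) / n ** 0.5
    return mos, (mos - half, mos + half)

# Hypothetical crowdsourced ratings on the 5-point ACR scale.
ratings = [5, 4, 4, 3, 5, 4, 2, 4, 3, 4, 5, 4]
mos, (lo, hi) = mos_with_ci(ratings)
print(f"MOS = {mos:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The width of the interval makes the larger rating variance typical of crowdsourced (versus lab) tests visible, which is exactly why more participants per condition are usually recruited in the crowd.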

A selection of QoE related communities is provided in the following to give an overview on the pervasion of QoE in research.

  • Qualinet (http://www.qualinet.eu): European Network on Quality of Experience in Multimedia Systems and Services as outlined above. Qualinet is also technical sponsor of QoMEX.  
  • QoMEX (http://qomex.org/): The International Conference on Quality of Multimedia Experience (QoMEX) is a top-ranked international conference and among the twenty best conferences in Google Scholar for the subcategory Multimedia. The 11th International Conference on Quality of Multimedia Experience will be held from June 5th to 7th, 2019 in Berlin, Germany. It will bring together leading experts from academia and industry to present and discuss current and future research on multimedia quality, quality of experience (QoE) and user experience (UX). This way, it will contribute towards an integrated view on QoE and UX, and foster the exchange between the so-far distinct communities.
  • ACM SIGMM (http://www.sigmm.org/): Within the ACM community, QoE also plays a significant role in major events like ACM Multimedia (ACM MM), where “Experience” is one of the four major themes. ACM Multimedia Systems (MMSys) regularly publishes works on QoE and has included special sessions on these topics in recent years. ACM MMSys 2019 will be held from June 18 to 21, 2019 in Amherst, Massachusetts, USA.
  • ICME: The IEEE International Conference on Multimedia and Expo (IEEE ICME 2019) will be held from July 8-12, 2019 in Shanghai, China. It includes in the call for papers topics such as Multimedia quality assessment and metrics, and Multi-modal media computing and human-machine interaction.
  • ACM SIGCOMM (http://www.sigcomm.com): Within ACM SIGCOMM, Internet-QoE workshops have been initiated in 2016 and 2017. The focus of the last edition was on QoE Measurements, QoE-based Traffic Monitoring and Analysis, QoE-based Network Management.
  • Tracking QoE in the Internet Workshop: A summary and the outcomes of the “Workshop on Tracking Quality of Experience in the Internet” at Princeton give a very good impression of the QoE activities in the US, with a recent focus on QoE monitoring and measurable QoE parameters in the presence of constraints like encryption.
  • SPEC RG QoE (https://research.spec.org): The mission of SPEC’s Research Group (RG) is to promote innovative research in the area of quantitative system evaluation and analysis by serving as a platform for collaborative research efforts fostering the interaction between industry and academia in the field. The SPEC research group on QoE is the starting point for the release of QoE ideas, QoE approaches, QoE measurement tools, and QoE assessment paradigms.
  • QoENet (http://www.qoenet-itn.eu) is a Marie Curie project focused on the analysis, design, optimization and management of QoE in advanced multimedia services, creating a fully integrated and multi-disciplinary network of 12 Early Stage Researchers working in, and seconded by, 7 academic institutions, 3 private companies and 1 standardization institute distributed across 6 European countries and South Korea. The project is thus fulfilling its major objective of training young fellows through research, broadening the knowledge of the new generation of researchers in the field. Significant research results have been achieved in the fields of: QoE for online gaming, social TV and storytelling, and adaptive video streaming; QoE management in collaborative ISP/OTT scenarios; and models for HDR, VR/AR and 3D images and videos.
  • Many QoE-related activities are also happening at a national level. For example, a community of professors and researchers from Spain has organized a yearly workshop entitled “QoS and QoE in Multimedia Communications” since 2015 (URL of its latest edition: https://bit.ly/2LSlb2N). This community is targeted at establishing collaborations, sharing resources, and discussing the latest contributions and open issues. The community is also pursuing the creation of a national network on QoE (like a Spanish Qualinet), and will then involve international researchers in that network.
  • There are several standardization-related activities ongoing e.g. in standardization groups ITU, JPEG, MPEG, VQEG. Their specific interest in QoE will be summarized in one of the upcoming QoE columns.

The first QoE column will discuss how to approach an integrated view of QoE and User Experience. While research on QoE has mostly been carried out in the area of multimedia communications, user experience (UX) has addressed hedonic and pragmatic usage aspects of interactive applications. In the case of QoE, the meaningfulness of the application to the user and the forces driving its use have been largely neglected, while in the UX field, respective research has been carried out but hardly incorporated in a model combining the pragmatic and hedonic aspects. The first column will therefore be dedicated to recent ideas “Toward an integrated view of QoE and User Experience”. To give readers an impression of the expected contents, we foresee the upcoming QoE columns discussing recent activities such as:

  • Point cloud subjective evaluation methodology
  • Complex, interactive narrative design for complexity
  • Large-Scale Visual Quality Assessment Databases
  • Status and upcoming QoE activities in standardization
  • Active Learning and Machine Learning for subjective testing and QoE modeling
  • QoE in 5G: QoE management in softwarized networks with big data analytics
  • Immersive Media Experiences e.g. for VR/AR/360° video applications

Our aim for SIGMM Records is to share insights from the QoE community and to highlight recent developments and new research directions, but also lessons learned and best practices. If you are interested in writing for the QoE column, or would like to know more about something in this area, please do not hesitate to contact the editors. The SIGMM Records editors responsible for QoE are active in different communities and QoE research directions.

The QoE column is edited by Tobias Hoßfeld and Christian Timmerer.

[Qualinet Whitepaper] Qualinet White Paper on Definitions of Quality of Experience (2012). European Network on Quality of Experience in Multimedia Systems and Services (COST Action IC 1003), Patrick Le Callet, Sebastian Möller and Andrew Perkis, eds., Lausanne, Switzerland, Version 1.2, March 2013. Qualinet_QoE_whitepaper_v1.2

[Crowdsourcing Best Practices] Tobias Hoßfeld et al. “Best Practices and Recommendations for Crowdsourced QoE - Lessons Learned from the Qualinet Task Force ‘Crowdsourcing’” (2014). Qualinet_CSLessonsLearned_29Oct2014

Tobias Hoßfeld is a full professor at the University of Würzburg, Chair of Communication Networks, and has been active in QoE research and teaching for more than 10 years. He finished his PhD in 2009 and his professorial thesis (habilitation) “Modeling and Analysis of Internet Applications and Services” in 2013 at the University of Würzburg. From 2014 to 2018, he was head of the Chair “Modeling of Adaptive Systems” at the University of Duisburg-Essen, Germany. He has published more than 100 research papers in major conferences and journals and received the Fred W. Ellersick Prize 2013 (IEEE Communications Society) for one of his articles on QoE. Among others, he is a member of the advisory board of the ITC (International Teletraffic Congress), the editorial board of IEEE Communications Surveys & Tutorials, and of Springer Quality and User Experience.
Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments), both from the Alpen-Adria-Universität (AAU) Klagenfurt. He joined the AAU in 1999 (as a system administrator) and is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communications, streaming, adaptation, Quality of Experience, and Sensory Experience. He was the general chair of WIAMIS 2008, QoMEX 2013, and MMSys 2016 and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as standard editor. In 2012 he cofounded Bitmovin (http://www.bitmovin.com/) to provide professional services around MPEG-DASH, where he holds the position of Chief Innovation Officer (CIO).

Opinion Column: Review Process of ACM Multimedia

 

This quarter, our Community column is dedicated to the review process of ACM Multimedia (MM). We summarize discussions that arose at various points in time after the first round of reviews was returned to authors.

The core part of the discussion focused on how to improve review quality for ACM MM. Some participants pointed out that there have been complaints about the level and usefulness of some reviews in recent editions of ACM Multimedia. The members of our discussion forums (Facebook and LinkedIn) proposed some solutions.

A semi-automated paper assignment. Participants debated about the best way of assigning papers to reviewers. Some suggested that automated assignment, i.e. using TPMS, helps reduce biases at scale: this year MM followed the review model of CVPR, which handled 1,000+ submissions and peer reviews. Other participants observed that automated assignment systems often fail to match papers with the right reviewers. This is mainly due to the diversity of the Multimedia field: even within a single area, there is a lot of diversity in expertise and methodologies. Some participants advocated that the best solution is to have two steps: (1) a bidding period where reviewers choose their favorite papers based on their areas of expertise, or, alternatively, an automated assignment step; (2) an “expert assignment” period where, based on the previous choices, Area Chairs select the right people for a paper: a reviewer pool with relevant complementary expertise.

The author’s advocate. Most participants agreed that the figure of the author’s advocate is crucial for a fair reviewing process, especially for a community as diverse as the Multimedia community, and that the author’s advocate should be provided in all tracks.

Non-anonymity among reviewers. It was observed that revealing the identity of reviewers to the other members of the program committee (e.g. Area Chairs and other reviewers) could encourage responsiveness and commitments during the review and discussion periods.

Quality over quantity. It was pointed out that increasing the number of reviews per paper is not always the right solution. This adds workload on the reviewers, thus potentially decreasing the quality of their reviews.

Less frequent changes in the review process. A few participants discussed the frequency of changes in the review process of ACM MM. In recent years, the conference organizers have tried different review formats, often inspired by other communities. It was observed that this lack of continuity might not allow enough time to evaluate the success of a format, or to measure the quality of the conference overall. Moreover, changes should be communicated and announced to authors and reviewers well before they are implemented (and repeatedly, because people tend to overlook them).

This debate led to a higher-level discussion about the identity of the MM community. Some participants interpreted these frequent changes in the review process as a kind of identity crisis. It was proposed to use empirical evidence (e.g. a community survey) to analyse what the MM community actually is and how it should evaluate itself. The risk of becoming a second-tier conference to CVPR was brought up: not only do authors submit papers rejected from CVPR to MM, but also, at times, reviewers assume that MM papers have to be reviewed as CVPR papers, thus potentially losing a lot of interesting papers for the conference.

We would like to thank all participants for their time and precious thoughts. As a next step for this column, we might consider making short surveys about specific topics, including the ones discussed in this issue of the SIGMM Records opinion column.

We hope this column will foster fruitful discussions during the conference, which will be held in Seoul, Korea, on 22-26 October 2018.

An interview with Assoc. Prof. Ragnhild Eg


Please describe your journey into research from your youth up to the present. What foundational lessons did you learn from this journey? Why were you initially attracted to multimedia?

In high school, I really had no idea what I wanted to study in university. I liked writing, so I first tried out journalism. I soon discovered that I was too timid for this line of work, and the writing was less creative than I had imagined. So I returned to my favourite subject, psychology. I have always been fascinated by how the human mind works, how we can process all the information that surrounds us – and act on it. This fascination led me from a Bachelor in Australia, back to Norway where I started a Master in cognitive and biological psychology. One of my professors (whom I was lucky to have as a supervisor later) was working on a project on speech perception, and I still remember the first example she used to demonstrate how what we see can alter what we hear. I am delighted that I still encounter new examples of how multi-sensory processes can trick us. Most of all, I am interested in how these complex processes happen naturally, beyond our consciousness. And that is also what interests me in multimedia: how is it that we perceive information conveyed by digital systems in much the same way we perceive information from the physical world? And when we do not perceive it in the same way, what is causing the discrepancy?

My personal lessons are not to let a chosen path lead you in a direction you do not want to go. Moreover, not all of us are driven by a grand master plan. I am very much driven by impulses and curiosity, and this has led me to a line of work where curiosity is an asset.


Ragnhild Eg at the beginning of her research career in 2011.

Tell us more about the vision and objectives behind your current roles. What do you hope to accomplish, and how will you bring this about?

I currently work at a university college, where I have the opportunity to combine two passions: teaching and research. I wish to continue with both, so my vision relates to my research progression. My objective is pretty basic, I wish to broaden the scope of my research to include more perspectives on human perception. To do that, I want to start with new collaborations that can lead to long-term projects. As mentioned, I often let curiosity guide me, and I do not intend to stop doing just that.

Can you profile your current research, its challenges, opportunities, and implications?

In recent years, my research scope has extended from perception of multimedia content to human-computer interactions, and further on to individual factors. Although we investigate perceptual processes in the context of computer systems’ limitations, our original approach was to generalise across a population. Yet, the question of how universal perceptual processes can differ so much between individuals has become more and more intriguing.

How would you describe the role of women especially in the field of multimedia?

I have a love-hate relationship when it comes to stereotypes. Not only are they unavoidable, they are essential for us to process information. Moreover, it can be quite amusing to apply characteristics to stereotypes. On the other hand, stereotypes contribute to preserve, and even strengthen, certain conceptions about individuals. On the topic of women in multimedia, I find it important because we are a minority and I believe any community benefits from diversity. However, I find it difficult to describe our role without falling back on stereotypical gender traits.

How would you describe your top innovative achievements in terms of the problems you were trying to solve, your solutions, and the impact it has today and into the future?

The path that led me to multimedia research started with my studies in psychology, so I came into the field with a different outlook. I use my theoretical knowledge about human cognition and perception, and my experience with psychological research methods, to tackle multimedia challenges – for instance, designing behavioural studies with experimental controls and validity checks. Though perhaps not innovative, my first approach to studying the perception of multimedia quality was to avoid addressing quality, and rather control it as an experimental factor. Instead, I explored variations in perceptual integration across different quality levels. Interestingly, I see more and more knowledge introduced from psychology and neuroscience to multimedia research. I regard these cross-overs as an indication that multimedia research has come to be an established field with versatile research methods, and I look forward to seeing what insights come out of it.

Over your distinguished career, what are your top lessons you want to share with the audience?

When I started my PhD, I came into a research environment dominated by computer science. The transition went far smoother than I had imagined, mostly due to open-minded and welcoming colleagues. Yet, working with inter-disciplinary research will lead to encounters where you do not understand the contributions of others, and they may not understand yours. Have respect for the knowledge and expertise others bring with them, and expect the same respect for your own strengths. This type of collaboration can be demanding, but can also bring about the most interesting questions and results.

Another lesson I want to share, is perhaps one that can only come through personal experience. I enjoy collaborating on research projects, but being a researcher also requires a great deal of autonomy. Only at the end of the first year did I realise that no one could tell me what should be the focus of my PhD, even though I was expected to contribute to a larger project. Research is not constrained by clear boundaries, and I believe a researcher must be able to apply their own curiosity even when external forces seem to enforce limits.

Ragnhild Eg in 2018.

Ragnhild Eg in 2018.

If you were conducting this interview, what questions would you ask, and then what would be your answers?

I would ask what is the best joke you know! And my answer would undoubtedly be a knock-knock joke. 
Editor’s note: Officially added to the standard questionnaire!

What is the best joke you know? :)

Knock knock

– Who’s there?

A little old lady

– A little old lady who?

Wow, I had no idea you could yodel! 


Bios

Assoc. Prof. Ragnhild Eg: 

Ragnhild Eg is an associate professor at Kristiania University College, where she combines her background and interests in psychology with research and education. She teaches psychology and ethics, and pursues research interests spanning from perception and the effects of technological constraints to the consequences of online media consumption.

Michael Alexander Riegler: 

Michael is a scientific researcher at Simula Research Laboratory. His research interests are medical multimedia data analysis and understanding, image processing, image retrieval, parallel processing, crowdsourcing, social computing and user intent. 

Multidisciplinary Column: An Interview with Andrew Quitmeyer

 

Picture of Dr. Andrew Quitmeyer

Could you tell us a bit about your background, and what the road to your current position was?

In general, a lot of my background has been motivated by personal independence and trying to find ways to sustain myself. I was a first-generation grad student (which may explain a lot of my skepticism and confusion about academia in general). I moved out of the house at 15 to go to this cool, free, experimental public boarding high school in Illinois. I went to the University of Illinois because it was the nicest school I could go to for free (despite horrible college counselors telling all the students they should take on hundreds of thousands of dollars of debt to go to the “school of their dreams”).  They didn’t have a film-making program, so I created my own degree for it. Thinking I could actually have a film career seemed risky, and I wanted something that would protect my ability to get a job, so I got an engineering degree too.

I was a bit disappointed in the engineering side though, because I felt we never actually got to build anything on our own. I think a lot of people know me as some kind of “hacker” or “maker”, but I didn’t start doing many physical projects until much later in grad school when I met my friend Hannah Perner-Wilson. She helped pioneer a lot of DIY e-textiles (kobakant.at), and what struck me was how beautifully you could document and play with physical projects. Physical computing seemed an attractive combination of my abilities in documentary filmmaking and engineering.

The other big revelation for me roped in the naturalist side of what I do. I have always loved adventuring in nature, and studying wild creatures, but growing up in the midwest USA, this was never presented as a viable career opportunity. In the same way that it was basically taken for granted in midwestern US culture that studying art was a sort of frivolous hobby for richer kids, a career in biology that didn’t feed into engineering work in some specific industry (agriculture, biotech, etc…) was treated as equally flippant. I tried taking as many science electives as I could in undergrad, but it was because they were fun. Again, it was not until grad school when I had a cool job doing computer vision programming with an ant-lab robot-lab collaboration that I realized the potential error of my ways. Some of the ant biologists invited me to go ant hunting out in the desert after our meeting, and it was so fun and interesting I had a sort of existential meltdown. “Oh no! I screwed up, I could have been a field biologist all this time? Like that’s a job?”

So I worked to sculpt my PhD around these revelations. I wanted to join field biologists in exploring the natural world while using and developing novel technology to help us probe and document these creatures in new ways.

I plowed through my PhD as fast as I could because going to school in the US is expensive. You either have to join a lab to help work on someone else’s (already funded) project or take time away from your research to TA classes. After I got out of there, I did some other projects, and eventually got a job as an assistant professor at the National University of Singapore in the communications and new media department. Unfortunately it seems like I came at a pretty chaotic time (80% of my fellow professors are leaving my division, not to be replaced), and so I will actually be leaving at the end of this semester to figure out a new place to continue doing research and teaching others.

How does multidisciplinary work play a role in your research on “Digital Naturalism”?

The work is basically anti-disciplinary. Instead of relying on a specific field of practice, the work simply sets out towards some basic goals and happily uses any means necessary to get there. Currently this includes a blend of naturalistic experimentation, performance art, film making, interaction design, software and hardware engineering, industrial design, ergonomics, illustration, and storytelling.

The more this work spreads into other disciplines the more robust and interesting I think it will become. For instance, I would love to see more video games developed about interacting with wild animals.

Could you name a grand research challenge in your current field of work?

Let’s talk to animals.

I am a big follower of Janet Murray’s work in Digital Media. She sees the grand challenge of computers as forming this amazing new medium that we all have to collaboratively experiment with to figure out how to truly make use of the new affordances it provides us. For me, the coolest new ability of computers is their behavioral nature. Never before have we had a medium that shares the same unique qualities of living creatures in being able to sense stimuli from the world, process this, and create new stimuli in response. Putting together these senses and actions lets computers give us the first truly “behavioral medium.” Intertwining the behaviors of computers with living creatures opens up a new world of dynamic experimentation.

When most people think of talking to animals, they imagine some kind of sci-fi, Dr. Doolittle auto-translator. A bird chirps a song, and we get a text message that says “I am looking for more bird food.” This is quite speciesist of us, and upholds the ingrained assumption that all living creatures strive to somehow become more like us.

Instead, I think this digital, behavioral medium holds more value and potential in bringing us into their world and modes of communication. You can learn what the ants are saying by building your own robot ant that taps antennae with the workers around her. You might learn more about a bird’s communication by capturing its bodily movements and the physical interactions giving context to its thoughts than by trying to brute-force decrypt the sounds it makes.

I find anything we can learn about animals and their environments useful, and the behaviors that computers can enact are a key to bringing us into their world. There is a really long road ahead, though. Facilitating rich, behavioral interactions with other creatures requires advances, experimentation, and refinement of our ability to sense non-human stimuli and provide realistic stimuli back. Meanwhile, I can barely create a sensor that can detect just the presence or absence of an ant on a tree in the wild. Thus we need a lot more development and experimentation, but I imagine future digital naturalists using technology to turn themselves into goat-men like Thomas Thwaites rather than as Star Trek commanders using some kind of universal translator.

You have been starring in a ‘Hacking the Wild’ television series on the Discovery Channel. How did the idea for this series come about? Do you aim to reach out to particular audiences with the show?

Yeah, that was an interesting experience! Some producers had seen some of the work I had been documenting and producing from expeditions I led during my PhD, and contacted me about turning it into a show. A problem in the entertainment industry is that nobody seems to understand why you would ever not want to be in entertainment. They treat it as a given that that’s what everyone is striving for in their lives. This seems to give them license to not treat people great and say whatever it takes to get people to do what they want (even if some of these things turn out to be false). So, for instance, I was first told that my show would be about me working with scientists building technology in the jungle, but then it devolved into a survival genre TV show with just me. The plot became nonsensical (which could have been fun!), but pressure from the industry forced us to keep up the grizzled stereotypes of the genre (“if I don’t find food in the next couple hours… I might not make it out”).

It gave me an interesting chance to insert myself and some of my own ideals into this space though.  One thing that irks me about the survival genre in general is its rhetoric of “conquering nature.” They kept trying to feed me lines about how I would use this device, or this hack to “defeat nature” which is the exact opposite of what I want to do in my work. So I tried to stand my ground and assert that nature is beautiful and fun, and maybe we can use things we build to understand it even better. Many traditional survival audiences didn’t seem to care for it, but I have gotten lots of fan mail from around the world from people who seem to get the real idea of it a bit more – make things outside and use them to play in nature. I remember one nice email from a young kid who would prototype contraptions in their back yard with what they called “electric sticks,” and that was really nice.

You recently organized the first Digital Naturalism Conference (Dinacon), which was quite unlike the types of conferences we would normally encounter. Could you tell a bit more about Dinacon’s setup, and the reasons why you initiated a conference like this?

Dinacon was half a long-term dream and half a reaction to problems in current academic publishing and conferences.

The basic idea of the Digital Naturalism Conference was to gather all the neat people in my network, spanning many different fields and practices, and get them to hang out together in a beautiful, interesting place. For me, this was a direct continuation of my Digital Naturalism work to re-imagine the sites of scientific exploration. In previous events I had tried to explore combining hackathons with biological field expeditions. These “hiking hacks” looked to design the future of how scientific trips might function in tandem with the design of scientific tools. The conference looked to take this to the next stage and re-imagine what the biological field station of the future might look like.

The more specific design of this conference was built as a reaction to a lot of the problems I see in current academic traditions.  The academic conferences I have taken part in generally had these problems:

  • Exploitative – Powered by unpaid laborers (organizing, reviewing, formatting, advertising) who then have to pay to attend themselves
  • Expensive – only rich folks get to attend (generally with money from their institution)
  • Exclusive – generally you have to already be “vetted” with your papers to attend (not knocking Peer review! Just vetted networking)
  • Steer Money in not great directions – e.g. lining the pockets of fancy hotels and publishing companies
  • Restricted Time – Most conferences leave just enough time to get bored waiting for others’ unenthusiastic presentations to finish, and maybe grab a drink before heading back to all the duties one has. I think for good work to be done, and proper connections to be made in research, people need time to live and work together in a relaxing, exciting environment.

[I go into more details about all this in the post about our conference’s philosophy: https://www.dinacon.org/2017/11/01/philosophy/ ]

Based on these problems, I wanted to experiment with alternative methods for gathering people, sharing information, and reviewing the information they create. I wanted to show that these problems were illnesses within the current system and traditions we perpetuate, and that many alternatives not only exist, but are feasible even on a severely reduced budget. (We started with an initial budget of $7,000 USD, self-funding the rental of the place; after the conference was announced, we crowdfunded an additional $11,000 to provide additional amenities and stipends.)

Thus, when creating this conference, we sought to attack each of these challenges. First we made it free to attend and provided free or subsidized housing. We also made it open to absolutely anyone from any discipline or background. Then we tried to direct what money we did have to spend towards community improvements. For instance, we rented out the Diva Andaman for the duration of the conference. This was a tourism ship that was interested in also helping the biology community by serving as a mobile marine science lab. In return for letting us use the facilities and rooms on the ship, we helped develop ideas and tools for its new laboratories. Finally, and perhaps most importantly, we worked to provide time for the participants. They were allowed to stay for 3 days to 3 weeks and encouraged to take time to explore, adjust to the place, interact with each other. 

We tried to also streamline the responsibilities of the participants too by having just 3 official “rules”: 

  1. You must complete something. Aim big, aim small, just figure out a task for yourself that you can commit yourself to and accomplish during your time at the conference. It can be any format you want: sculpture, a movie, a poem, a fingerpainting, a journal article – you just have to finish it!
  2. Document and share it. Everything will be made open-source and publicly accessible!
  3. Provide feedback on two (2) other people’s projects.

The goal of these rules was that, just like at a traditional conference, everyone would leave with a completed work that had been reviewed by their peers. Also, like the reality of any conference, not all of these rules were 100% met. Everyone created something, most documented it, and gave plenty of feedback to each other, but there wasn’t yet as much of an infrastructure in place for them to give this feedback and documentation a bit more formally. These rules functioned with great success, however, as goal posts leading people towards working on interesting things while also collaborating and sharing their work.

Do you feel Dinacon was successful in promoting inclusivity? What further actions can the community undertake towards this as a whole?

I do, and was quite happy with the results, but am excited to build on this aspect even more. We worked hard at reaching out to many communities around the world, especially within groups or demographics that may be overlooked otherwise. This was a big factor in where we decided to locate the conference as well. Thailand was great because many folks from around Southeast Asia could easily come, while people from generally richer nations in the West could also make it. I think this is a super important feature for any international conference: make it easier for the less privileged and more difficult for the more privileged.

I genuinely do not understand why giant expensive conferences just keep being held where the rich people already live. Anytime I am at some expensive conference hotel in Singapore, Japan, or the USA, I think about how all that money could go so much further and have a bigger impact on a community elsewhere. For instance, there has NEVER been a CHI conference held anywhere in Africa, South America, or Southeast Asia. These places also have large hotels where you can hook up computers and show a PowerPoint, so it’s not like they are missing the key infrastructure of these types of conferences.

One of the biggest hurdles is money and logistics. We had folks accepted from every continent except Antarctica, but our friends from Ghana couldn’t make it due to the arduous visa process. We had a couple small micro-travel grants (300-600 bucks of my own money) to help get people over who might not have been able to otherwise, but I wished we could have made our conference entirely free and could cover transportation (instead of just free registration, free food, and free camping housing).

That’s a limitation of a self-funded project, you just try to help as much as you can until you are tapped out. The benefits of it, though, are proving that really many people with a middle class job can actually do this too. Before I got my job, I pledged to put 10% of my earnings towards creating fun, interesting experiences for others. It’s funny that when people spend this amount of money on more established things like cars or religious tithing, people accept it, but when I tell people I am spending $7000USD of my own money putting on a free conference about my research they balk and act like I am nuts. I couldn’t think of anything better to spend 10% of your money on than something that brings you and others joy.

Next year’s conference will likely have a sliding scale registration though to help promote greater inclusivity overall than what we could provide out of our own pockets. Having people who can afford to pay a couple hundred for registration help subsidize those who would have been prevented from coming seems like an equitable solution.

How and in what form do you feel we as academics can be most impactful?

Fighting competitiveness. I think the greatest threat put onto academics is the idea that we are competing with each other. Unfortunately, many institutional policies actually codify this competition into truth. As an academic, your loyalty should be first and foremost to unlocking new ideas about our world that you can share with others. This quality is rarely directly rewarded by any large organization, though. This means that standing up for academic integrity will almost undoubtedly come at a cost. It may cost you your bonus, your grant application, or even your job. In terms of your life and your career, however, I think these will only be short-term expenses, and in fact investments into deeper, more impactful research and experiences.

Academics like to complain about the destructiveness of policies based on pointless metrics and academic cliques, but nothing will change unless you simply stand up against it. Not everyone can afford to stand up against the authorities. Maybe you cannot quit your job because you need the health care, but there are ways for all of us to call out exploitation that we see in institutional or community structures. You need to assess the privileges you do have, and do what you can to help share knowledge and lift up those around you. 

For instance, in my reflection after going to a more traditional conference (during my own conference), I pledged to 

  • no longer help recruit “reviewers” for papers if they are not compensated in some way;
  • avoid reviewing papers for exploitative systems;

and

  • transfer my reviewing time to help conferences and journals with open policies.

(more info here: https://www.dinacon.org/2018/06/19/natural-reflection-andy-quitmeyer/). For now, this pledge excludes me from some of the major conferences in my field, which in turn makes me publish my work in other venues, which many institutions look down on, and this inhibits my hire-ability. I think it’s worth it though to help stop perpetuating these problems onto future generations.

In your opinion, what would make an academic community a healthy community?

I think a healthy academic community would be one where the people are happy, help each other, and help make space for people outside their community to join and share. The only metric I would want to judge the quality of an institution on is how happy its community feels. I don’t care what their output is, especially in baseless numbers of publications or grant money; developing healthy communities is the only way to sustain any kind of long-term research. You need humans of different abilities and generations watching out for each other, helping each other learn new things, and protecting each other.

Some people try to push the idea that competition is necessary to make people work hard and be productive, or else they will be lazy and greedy. In fact, it’s this competition that creates these side effects. When people are cared for, they are curious, constructive, and helpful.

So keep your eyes open for ways in which your peers or students are being exploited and stand up against it. Reach out to find out challenges people around you face, and work on developing opportunities outside the scope of the traditions in your field. I think doing this will help build healthy and productive communities.

 


Bios

Dr. Andrew Quitmeyer is a hacker / adventurer studying intersections between wild animals and computational devices. His academic research in “Digital Naturalism” at the National University of Singapore blends biological fieldwork and DIY digital crafting. This work has taken him through international wildernesses where he’s run workshops with diverse groups of scientists, artists, designers, and engineers.  He runs “Hiking Hacks” around the world where participants build technology entirely in the wild for interacting with nature. His research also inspired a ridiculous spin-off television series he hosted for Discovery Networks called “Hacking the Wild.” He is currently working to establish his own art-science field station fab lab.

Editor Biographies

Cynthia_Liem_2017

Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013-2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests include music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content which would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.

 

jochen_huber

Dr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and the ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com

Interview with Dr. Magda El Zarki and Dr. De-Yu Chen: winners of the Best MMSys’18 Workshop Paper Award

Abstract

The ACM Multimedia Systems conference (MMSys’18) was recently held in Amsterdam from 12 to 15 June 2018. The conference brings together researchers in multimedia systems. Four workshops were co-located with MMSys, namely PV’18, NOSSDAV’18, MMVE’18, and NetGames’18. In this column we interview Magda El Zarki and De-Yu Chen, the authors of the best workshop paper, entitled “Improving the Quality of 3D Immersive Interactive Cloud-Based Services Over Unreliable Network”, which was presented at MMVE’18.

Introduction

The ACM Multimedia Systems Conference (MMSys) (mmsys2018.org) was held from 12 to 15 June in Amsterdam, The Netherlands. The MMSys conference provides a forum for researchers to present and share their latest research findings in multimedia systems. MMSys is a venue for researchers who explore complete multimedia systems that provide a new kind of multimedia experience or whose overall performance improves the state of the art. This touches aspects of many hot topics including but not limited to: adaptive streaming, games, virtual reality, augmented reality, mixed reality, 3D video, Ultra-HD, HDR, immersive systems, plenoptics, 360° video, multimedia IoT, multi- and many-core, GPGPUs, mobile multimedia and 5G, wearable multimedia, P2P, cloud-based multimedia, cyber-physical systems, multi-sensory experiences, smart cities, and QoE.

Four workshops were co-located with MMSys in Amsterdam in June 2018. The paper titled “Improving the Quality of 3D Immersive Interactive Cloud-Based Services Over Unreliable Network” by De-Yu Chen and Magda El Zarki from the University of California, Irvine was awarded the Comcast Best Workshop Paper Award for MMSys 2018, chosen from among papers from the following workshops:

  • MMVE’18 (10th International Workshop on Immersive Mixed and Virtual Environment Systems)
  • NetGames’18 (16th Annual Workshop on Network and Systems Support for Games)
  • NOSSDAV’18 (28th ACM SIGMM Workshop on Network and Operating Systems Support for Digital Audio and Video)
  • PV’18 (23rd Packet Video Workshop)

We approached the authors of the best workshop paper to learn about the research leading up to their paper. 

Could you please give a short summary of the paper that won the MMSys 2018 best workshop paper award?

In this paper we discussed our approach to an adaptive 3D cloud gaming framework. We utilized a collaborative rendering technique to generate part of the content on the client, so that the network bandwidth required for streaming the content can be reduced. We also made use of progressive meshes so the system can dynamically adapt to changing performance requirements and resource availability, including network bandwidth and computing capacity. We conducted experiments focused on system performance under unreliable network connections, i.e., when packets may be lost. Our experimental results show that the proposed framework is more resilient under such conditions, which indicates that the approach has potential advantages, especially for mobile applications.

Does the work presented in the paper form part of some bigger research question / research project? If so, could you perhaps give some detail about the broader research that is being conducted?

A more complete discussion of the proposed framework can be found in our technical report, Improving the Quality and Efficiency of 3D Immersive Interactive Cloud Based Services by Providing an Adaptive Application Framework for Better Service Provisioning, where we discussed the performance trade-offs among video quality, network bandwidth, and local computation on the client. In this report, we also tried to tackle network latency issues by utilizing the 3D image warping technique. In another paper, Impact of information buffering on a flexible cloud gaming system, we further explored the potential performance improvement of our latency reduction approach when more information can be cached and processed.

We received many valuable suggestions and identified a few important future directions. Unfortunately, De-Yu graduated and decided to pursue a career in industry, so he will likely not be able to continue working on this project in the near future.

Where do you see the impact of your research? What do you hope to accomplish?

Cloud gaming is an up-and-coming area. Major players like Microsoft and NVIDIA have already launched their own projects. However, it seems to me that no solution has yet been accepted as good enough by users. By providing an alternative approach, we wanted to demonstrate that there are still many unsolved issues and research opportunities, and hopefully inspire further work in this area.

Describe your journey into multimedia research. Why were you initially attracted to multimedia?

De-Yu: My research interest in cloud gaming systems dates back to 2013, when I worked as a research assistant at Academia Sinica, Taiwan. When I first joined Dr. Kuan-Ta Chen’s lab, my background was in parallel and distributed computing. I joined the lab for a project aimed at providing a tool that helps developers do load balancing in massively multiplayer online video games. Later on, I had the opportunity to participate in the lab’s other project, GamingAnywhere, which aimed to build the world’s first open-source cloud gaming system. Being an enthusiastic gamer myself, having the opportunity to work on such a project was a really enjoyable and valuable experience. That experience became the main reason I continued to work in this area.

Magda El Zarki: I have worked in multimedia research since the 1980s, when my PhD project involved the transmission of data, voice and video over a LAN. It was named MAGNET and was one of the first integrated LANs developed for multimedia transmission. My work continued in that direction with the transmission of video over IP. In conjunction with several PhD students over the past 20-30 years, I have developed several tools for the study of video transmission over IP (MPEGTool) and hold several patents related to video over wireless networks. All the work focused on improving the quality of the video via pre- and post-processing of the signal.

Can you profile your current research, its challenges, opportunities, and implications?

There are quite a few challenges in our research. First of all, our approach is an intrusive method: we need to modify the source code of the interactive applications, e.g. games, to apply it. We found it very hard to find a suitable open-source game whose source code is neat, clean, and easy to modify. Developing our own fully functioning game is not a reasonable approach, alas, due to the complexity involved. We ended up building a 3D virtual environment walkthrough application to demonstrate our idea. Most reviewers have expressed concerns about synchronization issues in a real interactive game, where there may be AI-controlled objects, non-deterministic processes, or even objects controlled by other players. We agree with the reviewers that this is a very important issue, but currently it is very hard for us to address it with our limited resources. Most of the other research work in this area faces a similar problem: the lack of a viable open-source game for researchers to modify. As a result, researchers are forced to build their own prototype applications for performance evaluation purposes. This brings about another challenge: it is very hard to fairly compare the performance of different approaches given that we all use different applications for testing. However, these difficulties can also be seen as opportunities. There are still many unsolved problems. Some of them may require a lot of time, effort, and resources, but even a little progress can mean a lot, since cloud gaming is an area that is gaining more and more attention from industry as a way to distribute games across many platforms.

“3D immersive and interactive services” seems to encompass both massively multi-user online games as well as augmented and virtual reality. What do you see as important problems for these fields? How can multimedia researchers help to address these problems?

When it comes to gaming or similar interactive applications, it all comes down to the user experience. In the case of cloud gaming, there are many performance metrics that can affect user experience, and identifying what matters most to users is one of the important problems. In my opinion, interactive latency is the most difficult problem to solve among all the performance metrics. There is no trivial way to reduce network latency unless you are willing to pay the cost of large bandwidth pipes. Edge computing may effectively reduce network latency, but it comes with a high deployment cost.

As large companies start developing their own systems, it is getting harder and harder for independent researchers with limited funding and resources to make major contributions in this area. Still, we believe there are a couple of ways in which independent researchers can make a difference. First, we can limit the scope of the research by simplifying the system, focusing on just one or a few features or components. Unlike corporations, independent researchers usually do not have the resources to build a fully functional system, but we also do not have the obligation to deliver one. That actually enables us to try out interesting but less realistic ideas. Second, be open to collaboration. Unlike corporations, which need to keep their projects confidential, we have more freedom to share what we are doing and potentially get more feedback from others. To sum up, I believe that in an area that has already attracted a lot of interest from industry, researchers should try to find something that companies cannot or are not willing to do, instead of trying to compete with them.

If you were conducting this interview, what questions would you ask, and then what would be your answers?

The real question is: is cloud gaming viable? It seems to make economic sense to offer it as companies try to reach a broader and more remote audience. However, computing costs are cheaper than bandwidth costs, so maybe throwing computing power at the problem makes more sense: make more powerful end devices that can handle the computing load of a complex game and use the network only for player interactivity.

Biographies of MMSys’18 Best Workshop Paper Authors

Prof. Magda El Zarki (Professor, University of California, Irvine):

Magda El Zarki

Prof. El Zarki’s lab focuses on multimedia transmission over the Internet. The work consists of both theoretical studies and practical implementations to test algorithms and new mechanisms for improving quality of service on the user device. Both wireline and wireless networks and all types of video and audio media are considered. Recent work has shifted to networked games and massively multi-user virtual environments (MMUVE). The focus is mostly on studying the quality of experience of players in applications where precision and time constraints are a major concern for game playability. A new effort also focuses on the development of games and virtual experiences in the arenas of education and digital heritage.

De-Yu Chen (PhD candidate, University of California, Irvine):

De-Yu Chen

De-Yu Chen is a PhD candidate at UC Irvine. He received his M.S. in Computer Science from National Taiwan University in 2009, and his B.B.A. in Business Administration from National Taiwan University in 2006. His research interests include multimedia systems, computer graphics, big data analytics and visualization, parallel and distributed computing, and cloud computing. His most recent research project focuses on improving the quality and flexibility of cloud gaming systems.

Report from ACM MMSys 2018 – by Gwendal Simon

While I was attending the MMSys conference last June in Amsterdam, I tweeted about my personal highlights of the conference, in the hope of sharing them with those who did not have the opportunity to attend. Fortunately, I was chosen as “Best Social Media Reporter” of the conference, a new award given by ACM SIGMM to promote sharing among researchers on social networks. To celebrate this award, here is a more complete report on the conference!

When I first heard that this year’s edition of MMSys would be attended by around 200 people, I was a bit concerned about whether the event would maintain its signature atmosphere. It was not long before I realized that, fortunately, it would. The core group of researchers who were instrumental in the take-off of the conference in the early 2010s is still present, and these scientists keep on being sincerely happy to meet new researchers, to chat about the latest trends in the fast-evolving world of online multimedia, and to make sure everybody feels comfortable talking with each other.

mmsys_1

I attended my first MMSys in 2012 in North Carolina. Although I did not even submit a paper to MMSys’12, I decided to attend because the short welcoming text on the website was astonishingly aligned with my own feelings about the academic research world. I rarely read the usually boring and unpassionate conference welcoming texts, but the day I took the time to read this particular MMSys text changed my research career. Before 2012, I felt like one lost researcher among thousands of others, whose only motivation was to publish more papers, whatever the stakes. I used to publish sometimes in networking venues, sometimes in systems venues, sometimes in multimedia venues… My output was quite inconsistent, and my experiences attending conferences were not especially exciting.

The MMSys community matches my expectations for several reasons:

  • The size of a typical MMSys conference is human-scale: when you meet someone on the first day, you’ll surely meet them again the next day.
  • Informal chat groups are diverse. I have the feeling that anybody can feel comfortable enough to chat with any other attendee regardless of gender, nationality, and seniority.
  • A responsible vision of what an academic event should be. The community is not into showing off in luxury resorts, but rather promotes decently cheap conferences in standard places while maximizing fun and interaction. This sometimes comes at the cost of organizing the conference in university facilities (which necessarily means much more work for organizers and volunteers), but social events have never been neglected.
  • People share a set of “values” in their research activities.

This last point is of course the most significant aspect of MMSys. The main idea behind this conference is that multimedia services are not only about multimedia but also about networks, systems, and experiences. This commitment to a holistic vision of multimedia systems has at least two consequences. First, the typical contributions discussed at this conference have both theoretical and experimental parts, and, to be accepted, papers have to find the right balance between both sides of the problem. It is definitely challenging, but it brings passionate researchers to the conference. Second, the line between industry and academia is very porous. As a matter of fact, many core MMSys researchers are either (past or current) employees of corporate research centers or involved in standards groups and industrial forums. The presence of people involved in the design of products nurtures the academic debate.

While MMSys grows significantly, year after year, I was curious to see whether these “values” would remain. Fortunately, they do. The growing reputation has not changed the spirit.

mmsys_2

The 2018 edition of the MMSys conference was held on the campus of CWI, near downtown Amsterdam. Thanks to the impressive efforts of all volunteers and local organizers, the event went smoothly in the modern facilities near the University of Amsterdam. As can be expected from a conference in the Netherlands, especially in June, biking was obviously the best way to commute to the conference every morning from anywhere in Amsterdam.

mmsys_3

The program contained a fairly high number of inspiring talks, which altogether reflected the “style” of MMSys. We got a mix of entertaining, industry-oriented technological talks discussing the state of the art and beyond. The two main conference keynotes were given by stellar researchers (who unsurprisingly have bright careers in both academia and industry) on the two hottest topics of the conference. First, Philip Chou (8i Labs) introduced holograms. Phil kind of lives in the future, somewhere five years later than now, and from there he was kind enough to give us a glimpse of the anticipatory technologies that will be developed between our now and his. Undoubtedly everybody will remember his flash-forwarding talk. Then Nuria Oliver (Vodafone) discussed the opportunities of combining IoT and multimedia in a talk that was powerful and energizing. The conference also featured so-called overview talks, in which expert researchers present the state of the art in areas that have been especially under the spotlight in the past months. The topics this year were 360-degree videos, 5G networks, and per-title video encoding. The experts were from Tiledmedia, Netflix, Huawei, and the University of Illinois. With such a program, MMSys attendees had the opportunity to catch up on everything they may have missed during the past couple of years.

mmsys_4

mmsys_5

The MMSys conference also has a long history of commitment to open source and demonstrations. This year’s conference reached a peak, with an astonishing 45% of papers awarded a reproducibility badge, which means that the authors of these papers agreed to share their datasets and code and to make sure that their work can be reproduced by other researchers. I am not aware of any other conference reaching such a ratio of reproducible papers. MMSys is all about sharing, and this reproducibility ratio demonstrates that MMSys researchers see their peers as cooperating researchers rather than competitors.

 

mmsys_6

My personal highlights go to two papers. The first is a work by researchers from UT Dallas and Mobiweb. It shows a novel, efficient approach to generating human models (skeletal poses) with a regular Kinect. This paper is a sign that augmented reality and virtual reality will soon be populated by user-generated content: not only synthesized 3D models but also digital captures of real humans. The road toward easy integration of avatars into multimedia scenes is paved, and this work is a good example of it. The second work I would like to highlight in this column is by researchers from Université Côte d’Azur. The paper deals with head movement in 360-degree videos, but instead of trying to predict movements, the authors propose to edit the content to guide user attention so that head movements are reduced. The approach, which is validated by a real prototype and source code sharing, comes from a multi-disciplinary collaboration with designers, engineers, and human interaction experts. Such multi-disciplinary work is also largely encouraged at MMSys conferences.

mmsys_7b

Finally, MMSys is also a full event with several associated workshops. This year, Packet Video (PV) was held with MMSys for the very first time, and it was successful with regard to the number of people who attended. Fortunately, PV did not interfere with NOSSDAV, which is still the main venue for high-quality innovative and provocative studies. In comparison, both MMVE and NetGames were less crowded, but the discussion in these events was intense and lively, as can be expected when so many experts sit in the same room. That is the purpose of workshops, isn’t it?

mmsys_8

A very last word on the social events. The social events of the 2018 edition lived up to the reputation of MMSys: original and friendly. But I won’t say more about them: what happens at MMSys social events stays at MMSys.

mmsys_9

The 2019 edition of MMSys will be held on the East Coast of the US, hosted by the University of Massachusetts Amherst. The multimedia community is at a very exciting time in its history. The attention of researchers is shifting from video delivery to immersion, experience, and attention. More than ever, multimedia systems should be studied from multiple interplaying perspectives (network, computation, interfaces). MMSys is thus a perfect place to discuss research challenges and to present breakthrough proposals.

[1] This means that I also had my bunch of rejected papers at MMSys and affiliated workshops. Reviewer #3, whoever you are, you ruined my life (for a couple of hours).

JPEG Column: 79th JPEG Meeting in La Jolla, California, U.S.A.

The JPEG Committee had its 79th meeting in La Jolla, California, U.S.A., from 9 to 15 April 2018.

During this meeting, JPEG held a final celebration of the 25th anniversary of its first JPEG standard, usually known as JPEG-1. This celebration coincided with two interesting facts. The first was the approval of reference software for JPEG-1, “only” 25 years later. When the first JPEG standard was approved, reference software was not considered, as has become common for recent image standards. However, the JPEG Committee decided that it was still important to provide reference software, as current applications and standards can largely benefit from this specification. The second coincidence was the launch of a call for proposals for a next-generation image coding standard, JPEG XL. This standard will define a new representation format for photographic information that incorporates current technological developments and can become an alternative to the 25-year-old JPEG standard.

An informative two-hour JPEG Technologies Workshop marked the 25th anniversary celebration on Friday, April 13, 2018. The workshop featured presentations by several committee members on current and future JPEG Committee activities, with the following program:

IMG_4560

Touradj Ebrahimi, convenor of JPEG, presenting an overview of JPEG technologies.

  • Overview of JPEG activities, by Touradj Ebrahimi
  • JPEG XS by Antonin Descampe and Thomas Richter
  • HTJ2K by Pierre-Anthony Lemieux
  • JPEG Pleno – Light Field, Point Cloud, Holography by Ioan Tabus, Antonio Pinheiro, Peter Schelkens
  • JPEG Systems – Privacy and Security, 360 by Siegfried Foessel, Frederik Temmermans, Andy Kuzma
  • JPEG XL by Fernando Pereira, Jan De Cock

After the workshop, a social event was organized at which a past JPEG Committee Convenor, Eric Hamilton, was recognized for key contributions to JPEG standardization.

The La Jolla JPEG meeting comprised the following main highlights:

  • Call for proposals of a next generation image coding standard, JPEG XL
  • JPEG XS profiles and levels definition
  • JPEG Systems defines a 360 degree format
  • HTJ2K
  • JPEG Pleno
  • JPEG XT
  • Approval of the JPEG Reference Software

The following summarizes various activities during JPEG’s La Jolla meeting.

JPEG XL

Billions of images are captured, stored and shared on a daily basis demonstrating the self-evident need for efficient image compression. Applications, websites and user interfaces are increasingly relying on images to share experiences, stories, visual information and appealing designs.

User interfaces can target devices with stringent constraints on network connection and/or power consumption in bandwidth-constrained environments. Even though network capacities are improving globally, bandwidth is constrained to levels that inhibit application responsiveness in many situations. User interfaces that utilize images with larger resolutions, higher dynamic ranges, wider color gamuts and higher bit depths further contribute to larger volumes of data in higher-bandwidth environments.

The JPEG Committee has launched a Next Generation Image Coding activity, referred to as JPEG XL. This activity aims to develop a standard for image coding that offers substantially better compression efficiency than existing image formats (e.g. more than 60% improvement when compared to the widely used legacy JPEG format), along with features desirable for web distribution and efficient compression of high-quality images.

To this end, the JPEG Committee has issued a Call for Proposals following its 79th meeting in April 2018, with the objective of seeking technologies that fulfill the objectives and scope of a Next Generation Image Coding. The Call for Proposals (CfP), with all related info, can be found at jpeg.org. The deadline for expression of interest and registration is August 15, 2018, and submissions to the Call are due September 1, 2018. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

This project aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets such as VR/AR/MR applications and autonomous cars. Among the important use cases identified, one can mention in particular video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression in the automotive industry. During the La Jolla meeting, profiles and levels were defined to help implementers accurately size their designs for their specific use cases. Transport of JPEG XS over IP networks or SDI infrastructures is also being specified and will be finalized during the next JPEG meeting in Berlin (July 9-13, 2018). The JPEG Committee therefore invites interested parties, in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions, to contribute to the specification process. Publication of the core coding system as an International Standard is expected in Q4 2018.

 

JPEG Systems – JPEG 360

The JPEG Committee continues to make progress towards its goals to define a common framework and definitions for metadata which will improve the ability to share 360 images and provide the basis to enable new user interaction with images.  At the 79th JPEG meeting in La Jolla, the JPEG committee received responses to a call for proposals it issued for JPEG 360 metadata. As a result, JPEG Systems is readying a committee draft of “JPEG Universal Metadata Box Format (JUMBF)” as ISO/IEC 19566-5, and “JPEG 360” as ISO/IEC 19566-6.  The box structure defined by JUMBF allows JPEG 360 to define a flexible metadata schema and the ability to link JPEG code streams embedded in the file. It also allows keeping unstitched image elements for omnidirectional captures together with the main image and descriptive metadata in a single file.  Furthermore, JUMBF lays the groundwork for a uniform approach to integrate tools satisfying the emerging requirements for privacy and security metadata.
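JUMBF builds on the familiar length-and-type box structure used by JPEG 2000 and ISO base media files: each box starts with a 4-byte big-endian length covering the whole box, followed by a 4-byte ASCII type, with superboxes nesting further boxes in their payload. As a rough, hedged illustration of this container idea (a simplified sketch, not the normative JUMBF syntax; the `jumd` example payload below is hypothetical), a generic box scanner might look like:

```python
import struct

def parse_boxes(data: bytes):
    """Scan a buffer of concatenated boxes.

    Each box begins with a 4-byte big-endian length (covering the
    entire box, header included) followed by a 4-byte ASCII type,
    as in JPEG 2000 / ISO base media file format boxes.
    Returns a list of (type, payload) pairs.
    """
    boxes = []
    offset = 0
    while offset + 8 <= len(data):
        (length,) = struct.unpack_from(">I", data, offset)
        if length < 8:  # malformed box; stop rather than loop forever
            break
        box_type = data[offset + 4:offset + 8].decode("ascii")
        payload = data[offset + 8:offset + length]
        boxes.append((box_type, payload))
        offset += length
    return boxes

# Hypothetical example: a single box with type 'jumd' and a toy payload.
inner = b"hello"
box = struct.pack(">I", 8 + len(inner)) + b"jumd" + inner
print(parse_boxes(box))  # [('jumd', b'hello')]
```

A real JUMBF reader would additionally recurse into superbox payloads and interpret the description box contents per ISO/IEC 19566-5; the sketch only shows the outer framing that makes such flexible metadata schemas possible.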

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector. 

 

HTJ2K

High Throughput JPEG 2000 (HTJ2K) aims to develop an alternate block-coding algorithm that can be used in place of the existing block coding algorithm specified in ISO/IEC 15444-1 (JPEG 2000 Part 1). The objective is to significantly increase the throughput of JPEG 2000, at the expense of a small reduction in coding efficiency, while allowing mathematically lossless transcoding to and from codestreams using the existing block coding algorithm.

As a result of a Call for Proposals issued at its 76th meeting, the JPEG Committee has selected a block-coding algorithm as the basis for Part 15 of the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). The algorithm has demonstrated an average tenfold increase in encoding and decoding throughput compared to the existing block-coding algorithm of JPEG 2000 Part 1. This increase in throughput comes at the cost of less than 15% average loss in coding efficiency, while still allowing mathematically lossless transcoding to and from JPEG 2000 Part 1 codestreams.

A Working Draft of Part 15 to the JPEG 2000 suite of standards is now under development.

 

JPEG Pleno

The JPEG Committee is currently pursuing three activities in the framework of the JPEG Pleno Standardization: Light Field, Point Cloud and Holographic content coding.

JPEG Pleno Light Field finished a third round of core experiments for assessing the impact of individual coding modules and started work on creating software for a verification model. Moreover, additional test data has been studied and approved for use in future core experiments. Working Draft documents for JPEG Pleno specifications Part 1 and Part 2 were updated. A JPEG Pleno Light Field AhG was established with mandates to create a common test conditions document; perform exploration studies on new datasets, quality metrics, and random-access performance indicators; and to update the working draft documents for Part 1 and Part 2.

Furthermore, use cases were studied and are under consideration for JPEG Pleno Point Cloud. A current draft list is under discussion for the next period and will be updated and mapped to the JPEG Pleno requirements. A final document on use cases and requirements for JPEG Pleno Point Cloud is expected at the next meeting.

JPEG Pleno Holography has reviewed the draft of a holography overview document. Moreover, the current databases were classified according to use cases, and plans to analyze numerical reconstruction tools were established.

 

JPEG XT

The JPEG Committee released two corrigenda to JPEG XT Part 1 (core coding system) and JPEG XT Part 8 (lossless extension of JPEG-1). These corrigenda clarify the upsampling procedure for chroma-subsampled images by adopting the centered upsampling in use by JFIF.

 

JPEG Reference Software

The JPEG Committee is pleased to announce that the CD ballot for Reference Software has been issued for the original JPEG-1 standard. This initiative closes a long-standing gap in the legacy JPEG standard by providing two reference implementations for this widely used and popular image coding format.

Final Quote

“The JPEG Committee is hopeful that its recently launched Next Generation Image Coding, JPEG XL, will result in a format that becomes as important for imaging products and services as its predecessor: the widely used and popular legacy JPEG format, which has been in service for a quarter of a century,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 79th JPEG Meeting was held on 9-15 April 2018, in La Jolla, California, USA. The next 80th JPEG Meeting will be held on 7-13 July 2018, in Berlin, Germany.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

 

Future JPEG meetings are planned as follows:

  • No 80, Berlin, Germany, July 7 to 13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018
  • No 82, Lisbon, Portugal, January 19 to 25, 2019

MPEG Column: 122nd MPEG Meeting in San Diego, CA, USA

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

MPEG122 Plenary, San Diego, CA, USA.


The MPEG press release comprises the following topics:

  • Versatile Video Coding (VVC) project starts strongly in the Joint Video Experts Team
  • MPEG issues Call for Proposals on Network-based Media Processing
  • MPEG finalizes 7th edition of MPEG-2 Systems Standard
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with two new features
  • MPEG-G standards reach Draft International Standard for transport and compression technologies

Versatile Video Coding (VVC) – MPEG’s & VCEG’s new video coding project starts strong

The Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16’s VCEG, commenced work on a new video coding standard referred to as Versatile Video Coding (VVC). The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard (i.e., typically twice as much as before); the standard is expected to be completed in 2020. The main target applications and services include, but are not limited to, 360-degree and high-dynamic-range (HDR) video. In total, JVET evaluated responses from 32 organizations using formal subjective tests conducted by independent test labs. Interestingly, some proposals demonstrated compression efficiency gains of typically 40% or more when compared to HEVC. Particular effectiveness was shown on ultra-high-definition (UHD) video test material. Thus, we may expect compression efficiency gains well beyond the targeted 50% for the final standard.

Research aspects: Compression tools and everything around them, including their objective and subjective assessment. The main application areas are clearly 360-degree and HDR video. Watch out for conferences like PCS and ICIP (later this year), which will be full of papers making references to VVC. Interestingly, VVC comes with a first draft, a test model for simulation experiments, and a technology benchmark set, which is useful and important for any developments both inside and outside MPEG as it allows for reproducibility.

MPEG issues Call for Proposals on Network-based Media Processing

This Call for Proposals (CfP) addresses advanced media processing technologies, such as network stitching for VR services, super-resolution for enhanced visual quality, transcoding, and viewport extraction for 360-degree video, within a network environment that allows service providers and end users to describe media processing operations to be performed by the network. The aim of network-based media processing (NBMP) is thus to allow end-user devices to offload certain kinds of processing to the network. To this end, NBMP describes the composition of network-based media processing services based on a set of media processing functions and makes them accessible through Application Programming Interfaces (APIs). Responses to the NBMP CfP will be evaluated on the weekend prior to the 123rd MPEG meeting in July 2018.
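Since the CfP responses were still to be evaluated at the time, no concrete NBMP API existed yet. The following Python sketch only illustrates the underlying idea of describing a chain of network-hosted media processing functions and handing that description to the network; every name in it (the `Workflow` class, `add_task`, the function identifiers) is invented for this illustration and is not part of any NBMP specification.

```python
# Hypothetical sketch of the NBMP idea: a client composes a description of
# media processing functions to be executed in the network. All names here
# are invented for illustration, not taken from the NBMP standard.

class Workflow:
    def __init__(self):
        self.tasks = []

    def add_task(self, function_id, **params):
        """Append one processing step (function id plus its parameters)."""
        self.tasks.append({"function": function_id, "params": params})
        return self  # allow chaining

    def describe(self):
        # In NBMP, a description like this would be submitted to the
        # network via an API rather than merely returned locally.
        return {"workflow": self.tasks}

wf = (Workflow()
      .add_task("360-stitching", projection="ERP")
      .add_task("viewport-extraction", yaw=30, pitch=0)
      .add_task("transcode", codec="HEVC", bitrate_kbps=8000))
print(wf.describe())
```

The point of the sketch is the separation of concerns: the device only declares *what* should happen to the media, while the network decides *where and how* each function runs.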

Research aspects: This project reminds me a lot of what has been done in the past in MPEG-21, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG targets APIs rather than pure metadata formats, which is a step in the right direction, as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

7th edition of MPEG-2 Systems Standard and ISO Base Media File Format (ISOBMFF) with two new features

More than 20 years after its inception, development of MPEG-2 Systems technology (i.e., transport/program stream) continues. New features include support for: (i) JPEG 2000 video with 4K resolution and ultra-low latency, (ii) media orchestration related metadata, (iii) sample variance, and (iv) HEVC tiles.

The partial file format enables the description of an ISOBMFF file partially received over lossy communication channels. It provides tools to describe the received data, transmission information such as received or lost byte ranges and whether the corrupted/lost bytes are present in the file, and repair information such as the location of the source file, possible byte offsets in that source, and the byte stream position at which a parser can try processing a corrupted file. Depending on the communication channel, this information may be set up by the receiver or through out-of-band means.
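The kind of bookkeeping such a format enables can be sketched in a few lines. The following Python helper is purely illustrative of the concept of deriving lost byte ranges from received ones; it does not reflect the actual box syntax of the partial file format.

```python
# Illustrative sketch (not the actual partial file format syntax): record
# which byte ranges of a file arrived and derive the missing ranges that a
# repair step would need to fetch from the source file.

def lost_ranges(file_size, received):
    """Given a file size and a list of received (start, end) byte ranges
    (end exclusive), return the missing ranges in ascending order."""
    missing, pos = [], 0
    for start, end in sorted(received):
        if start > pos:
            missing.append((pos, start))  # gap before this chunk
        pos = max(pos, end)
    if pos < file_size:
        missing.append((pos, file_size))  # tail never arrived
    return missing

# A 1000-byte file of which two chunks arrived over a lossy channel:
print(lost_ranges(1000, [(0, 400), (600, 900)]))  # [(400, 600), (900, 1000)]
```

A receiver can thus persist a partially received file together with this range information and later repair it selectively, instead of re-downloading the whole resource.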

The second new feature, ISOBMFF’s sample variants (2nd edition), is typically used to provide forensic information in the rendered sample data that can, for example, identify the specific Digital Rights Management (DRM) client which has decrypted the content. This variant framework is intended to be fully compatible with MPEG’s Common Encryption (CENC) and agnostic to the particular forensic marking system used.

Research aspects: MPEG systems standards are mainly relevant for multimedia systems research with all its characteristics. The partial file format is specifically interesting as it targets scenarios with lossy communication channels.

MPEG-G standards reach Draft International Standard for transport and compression technologies

MPEG-G provides a set of standards enabling interoperability for applications and services dealing with high-throughput deoxyribonucleic acid (DNA) sequencing. At its 122nd meeting, MPEG promoted its core set of MPEG-G specifications, i.e., transport and compression technologies, to Draft International Standard (DIS) stage. These parts of the standard provide new transport technologies (ISO/IEC 23092-1) and compression technologies (ISO/IEC 23092-2) supporting rich functionality for the access and transport, including streaming, of genomic data by interoperable applications. Reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5) will reach this stage within the next 12 months.

Research aspects: the main focus of this work item is compression; transport is still in its infancy. Therefore, research on the actual delivery of compressed DNA information, as well as its processing, is solicited.

What else happened at MPEG122?

  • Requirements is exploring new video coding tools dealing with low-complexity and process enhancements.
  • The activity around coded representation of neural networks has defined a set of vital use cases and is now soliciting test data until the next meeting.
  • The MP4 registration authority (MP4RA) has an awesome new web site http://mp4ra.org/.
  • MPEG-DASH is finally approving and working on the 3rd edition, comprising a consolidated version of recent amendments and corrigenda.
  • CMAF started an exploration on multi-stream support, which could be relevant for tiled streaming and multi-channel audio.
  • OMAF kicked off its activity towards a 2nd edition enabling support for 3DoF+ and social VR, with the plan of going to committee draft (CD) in Oct’18. Additionally, a test framework has been proposed which allows assessing the performance of various CMAF tools. Its main focus is on video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It could be interesting to see these two frameworks combined in one way or the other.
  • MPEG-I architectures (yes, plural) are becoming mature, and I think this technical report will become available very soon. In terms of video, MPEG-I looks more closely at 3DoF+, defining common test conditions and planning a call for proposals (CfP) for MPEG123 in Ljubljana, Slovenia. Additionally, explorations for 6DoF and for compression of dense representations of light fields are ongoing and have been started, respectively.
  • Finally, point cloud compression (PCC) is in its hot phase of core experiments for various coding tools, resulting in updated versions of the test model and working draft.

Research aspects: In this section I would like to focus on DASH, CMAF, and OMAF. Multi-stream support, as mentioned above, is relevant for tiled streaming and multi-channel audio, which have been studied recently in the literature and are also highly relevant for industry. The efficient storage and streaming of such content within the file format is an important aspect that is often underrepresented in both research and standardization. The goal here is to keep the overhead low while maximizing the utility of the format to enable certain functionalities. OMAF now targets the social VR use case, which has been discussed in the research literature for a while and finally makes its way into standardization. An important aspect here is both user experience and quality of experience, which requires intensive subjective testing.

Finally, on May 10 MPEG will celebrate 30 years, as its first meeting dates back to 1988 in Ottawa, Canada, with around 30 attendees. The 122nd meeting had more than 500 attendees, and MPEG has around 20 active work items. A total of more than 170 standards have been produced (that’s approx. six standards per year), and some standards have up to nine editions, like the HEVC standards. Overall, MPEG is responsible for more than 23% of all JTC 1 standards, and some of them show extraordinary longevity regarding extensions, e.g., MPEG-2 Systems (24 years), MPEG-4 file format (19 years), and AVC (15 years). MPEG standards serve billions of users (e.g., MPEG-1 video, MP2, MP3, AAC, MPEG-2, AVC, ISOBMFF, DASH). Some — more precisely five — standards have received Emmy awards in the past (MPEG-1, MPEG-2, AVC (2x), and HEVC).

Thus, happy birthday MPEG! In today’s society, turning 30 marks the start of one’s high-performance era, basically the time of “compression”: we apply everything we have learnt and live it out to the fullest. A truly optimistic perspective for our generation-X (millennial) standards body!