JPEG Column: 74th JPEG Meeting

The 74th JPEG meeting was held at ITU Headquarters in Geneva, Switzerland, from 15 to 20 January 2017, featuring the following highlights:

  • A final Call for Proposals on JPEG Pleno was issued, focusing on light field coding;
  • A test model for the upcoming JPEG XS standard was created;
  • A draft Call for Proposals for JPEG Privacy & Security was issued;
  • The JPEG AIC technical report on Guidelines for image coding system evaluation was finalized;
  • An AHG was created to investigate evidence for a high-throughput JPEG 2000 standard;
  • An AHG on a next-generation image compression standard was initiated to explore a future image coding format with superior compression efficiency.

 

JPEG Pleno kicks off its activities towards standardization of light field coding

At the 74th JPEG meeting in Geneva, Switzerland, the final Call for Proposals (CfP) on JPEG Pleno was issued, focusing in particular on light field coding. The CfP is available here.

The call encompasses coding technologies for lenslet light field cameras, and content produced by high-density arrays of cameras. In addition, system-level solutions associated with light field coding and processing technologies that have a normative impact are called for. In a later stage, calls for other modalities such as point cloud, holographic and omnidirectional data will be issued, encompassing image representations and new and rich forms of visual data beyond the traditional planar image representations.

JPEG Pleno intends to provide a standard framework to facilitate the capture, representation and exchange of these omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. It aims to define new tools for improved compression while providing advanced functionalities at the system level. Moreover, it aims to support data and metadata manipulation, editing, random access and interaction, protection of privacy and ownership rights, as well as other security mechanisms.

 

JPEG XS

JPEG XS aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used for a wide range of applications, including mezzanine codecs for the broadcast industry and Pro-AV markets. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. After a Call for Proposals issued on March 11th, 2016 and the assessment of the submitted technologies, a test model for the upcoming JPEG XS standard was created during the 73rd JPEG meeting in Chengdu, and the results of a first set of core experiments were reviewed during the 74th JPEG meeting in Geneva. More core experiments are on their way before the standard is finalized: the JPEG committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to the further specification process.

 

JPEG Privacy & Security

JPEG Privacy & Security aims at developing a standard for secure image information sharing that is capable of ensuring privacy, maintaining data integrity, and protecting intellectual property rights (IPR). JPEG Privacy & Security will explore how to design and implement the necessary features without significantly impacting coding performance, while ensuring scalability, interoperability, and forward and backward compatibility with current JPEG standard frameworks.

A draft Call for Proposals for JPEG Privacy & Security has been issued, and the JPEG committee invites interested parties to contribute to this standardisation activity in JPEG Systems. The draft CfP is available here.

The call addresses protection mechanisms and technologies such as handling hierarchical levels of access and multiple protection levels for metadata and image protection, checking the integrity of image data and embedded metadata, and supporting backward and forward compatibility with JPEG coding technologies. Interested parties are encouraged to subscribe to the JPEG Privacy & Security email reflector for further information. A final version of the JPEG Privacy & Security Call for Proposals is expected at the 75th JPEG meeting in Sydney, Australia.

 

JPEG AIC

JPEG AIC provides guidance and standard procedures for advanced image coding evaluation. At this meeting JPEG completed a technical report: TR 29170-1, Guidelines for image coding system evaluation. This report is a compendium of JPEG's best practices in evaluation, drawing on several international standards and recommendations. It discusses the use of objective tools, subjective procedures and computational analysis techniques, and when to use each. Some of the techniques are tried-and-true tools familiar to image compression experts and vision scientists; others address emerging areas where few tools have been available, such as the evaluation of coding systems for high dynamic range content.
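
As a concrete example of the objective tools such reports cover, here is a minimal PSNR computation (an illustrative sketch of the standard formula, not code from the report; consult TR 29170-1 for the recommended procedures):

```python
import math

def psnr(ref, test, max_value=255):
    """Peak signal-to-noise ratio between two equally sized pixel
    sequences, a common objective tool in image coding evaluation."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float('inf')  # identical signals: PSNR is unbounded
    return 10 * math.log10(max_value ** 2 / mse)

# Toy 4-pixel comparison between a reference and a decoded version.
print(round(psnr([52, 55, 61, 59], [52, 57, 60, 60]), 2))
```

Objective metrics like this are cheap to compute but, as the report discusses, must be complemented by subjective procedures for use cases such as high dynamic range content.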

 

High throughput JPEG 2000

The JPEG committee started a new activity on high-throughput JPEG 2000, and an AHG was created to investigate the evidence for such a standard. Experts are invited to participate in this group and to join its mailing list.

 

Final Quote

“JPEG continues to offer standards that redefine imaging products and services, contributing to a better society without borders,” said Prof. Touradj Ebrahimi, Convenor of the JPEG committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission (ISO/IEC JTC 1/SC 29/WG 1), and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR and JPSearch standards and, more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro and Tim Bruylants of the JPEG Communication Subgroup at pr@jpeg.org.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list at https://listserv.uni-stuttgart.de/mailman/listinfo/jpeg-news. Moreover, you can follow the JPEG Twitter account at http://twitter.com/WG1JPEG.

 

Future JPEG meetings are planned as follows:

  • No. 75, Sydney, AU, 26 – 31 March, 2017
  • No. 76, Torino, IT, 17 – 21 July, 2017
  • No. 77, Macau, CN, 23 – 27 October 2017

 

Call for Grand Challenge Problem Proposals

Original page: http://www.acmmm.org/2017/contribute/call-for-multimedia-grand-challenge-proposals/

 

The Multimedia Grand Challenge was first presented as part of ACM Multimedia 2009 and has established itself as a prestigious competition in the multimedia community.  The purpose of the Multimedia Grand Challenge is to engage with the multimedia research community by establishing well-defined and objectively judged challenge problems intended to exercise state-of-the-art techniques and methods and inspire future research directions.

Industry leaders and academic institutions are invited to submit proposals for specific Multimedia Grand Challenges to be included in this year’s program.

A Grand Challenge proposal should include:

  • A brief description motivating why the challenge problem is important and relevant for the multimedia research community, industry, and/or society today and going forward for the next 3-5 years.
  • A description of a specific set of tasks or goals to be accomplished by challenge problem submissions.
  • Links to relevant datasets to be used for experimentation, training, and evaluation as necessary. Full appropriate documentation on any datasets should be provided or made accessible.
  • A description of rigorously defined objective criteria and/or procedures for how submissions will be judged.
  • Contact information of at least two organizers who will be responsible for accepting and judging submissions as described in the proposal.

Grand Challenge proposals will be considered until March 1st and will be evaluated on an on-going basis as they are received. Grand Challenge proposals that are accepted to be part of the ACM Multimedia 2017 program will be posted on the conference website and included in subsequent calls for participation. All material, datasets, and procedures for a Grand Challenge problem should be ready for dissemination no later than March 14th.

While each Grand Challenge is allowed to define an independent timeline for solution evaluation and may allow iterative resubmission and possible feedback (e.g., a publicly posted leaderboard), challenge submissions must be complete and a paper describing the solution and results should be submitted to the conference program committee by July 14, 2017.

Grand Challenge proposals should be sent via email to the Grand Challenge chair, Ketan Mayer-Patel.

Those interested in submitting a Grand Challenge proposal are encouraged to review the problem descriptions from ACM Multimedia 2016 as examples. These are available here: http://www.acmmm.org/2016/?page_id=353

MPEG Column: 117th MPEG Meeting

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The 117th MPEG meeting was held in Geneva, Switzerland and its press release highlights the following aspects:

  • MPEG issues Committee Draft of the Omnidirectional Media Application Format (OMAF)
  • MPEG-H 3D Audio Verification Test Report
  • MPEG Workshop on 5-Year Roadmap Successfully Held in Geneva
  • Call for Proposals (CfP) for Point Cloud Compression (PCC)
  • Preliminary Call for Evidence on video compression with capability beyond HEVC
  • MPEG issues Committee Draft of the Media Orchestration (MORE) Standard
  • Technical Report on HDR/WCG Video Coding

In this article, I’d like to focus on the topics related to multimedia communication starting with OMAF.

Omnidirectional Media Application Format (OMAF)

Real-time entertainment services deployed over the open, unmanaged Internet – streaming audio and video – now account for more than 70% of the evening traffic in North American fixed access networks, and it is assumed that this figure will reach 80% by 2020. More and more such bandwidth-hungry applications and services are pushing onto the market, including immersive media services such as virtual reality and, specifically, 360-degree video. However, the lack of appropriate standards and, consequently, reduced interoperability are becoming an issue. Thus, MPEG has started a project referred to as the Omnidirectional Media Application Format (OMAF). The first milestone of this standard has been reached: the committee draft (CD) was approved at the 117th MPEG meeting. Such application formats “are essentially superformats that combine selected technology components from MPEG (and other) standards to provide greater application interoperability, which helps satisfy users’ growing need for better-integrated multimedia solutions” [MPEG-A]. In the context of OMAF, the following aspects are defined:

  • Equirectangular projection format (note: others might be added in the future)
  • Metadata for interoperable rendering of 360-degree monoscopic and stereoscopic audio-visual data
  • Storage format: ISO base media file format (ISOBMFF)
  • Codecs: High Efficiency Video Coding (HEVC) and MPEG-H 3D audio

OMAF is the first specification defined as part of a bigger project currently referred to as ISO/IEC 23090 – Immersive Media (Coded Representation of Immersive Media). It currently has the acronym MPEG-I; we previously used MPEG-VR, which has now been replaced by MPEG-I (and that still might change in the future). It is expected that the standard will become a Final Draft International Standard (FDIS) by Q4 of 2017. Interestingly, it does not include AVC and AAC, probably the most obvious candidates for video and audio codecs, which have been massively deployed in the last decade and will probably remain a major dominator (and also denominator) in upcoming years. On the other hand, the equirectangular projection format is currently the only one defined, as it is already broadly used in off-the-shelf hardware/software solutions for the creation of omnidirectional/360-degree videos. Finally, the metadata formats enabling the rendering of 360-degree monoscopic and stereoscopic video are highly appreciated. A solution for MPEG-DASH based on AVC/AAC utilizing the equirectangular projection format for both monoscopic and stereoscopic video is shown as part of Bitmovin’s solution for VR and 360-degree video.
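
To make the equirectangular projection concrete, here is a small sketch (my own illustration, not code from the OMAF specification; the function name and conventions are assumptions) mapping a pixel in an equirectangular image to a viewing direction on the unit sphere:

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map pixel (u, v) in an equirectangular image to a unit-sphere
    direction (x, y, z). Longitude spans [-pi, pi] across the image
    width, latitude [-pi/2, pi/2] across its height."""
    lon = (u / width - 0.5) * 2.0 * math.pi   # yaw, 0 at image centre
    lat = (0.5 - v / height) * math.pi        # pitch, 0 at image centre
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

# The centre pixel of a 1920x1080 panorama looks straight ahead (+z).
print(equirect_to_sphere(960, 540, 1920, 1080))
```

The simplicity of this mapping is one reason equirectangular is the format broadly supported by off-the-shelf 360-degree tooling, despite its well-known oversampling near the poles.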

Research aspects related to OMAF can be summarized as follows:

  • HEVC supports tiles, which allow for efficient streaming of omnidirectional video, but HEVC is not as widely deployed as AVC. Thus, it would be interesting to investigate how to mimic such a tile-based streaming approach utilizing AVC.
  • How to efficiently encode and package HEVC tile-based video is an open issue and calls for a tradeoff between tile flexibility and coding efficiency.
  • When combined with MPEG-DASH (or similar), there is a need to update the adaptation logic, as tiles add yet another dimension that needs to be considered in order to provide a good Quality of Experience (QoE).
  • QoE is a big issue here and not well covered in the literature. Various aspects are worth investigating, including a comprehensive dataset to enable reproducibility of research results in this domain. Finally, as omnidirectional video allows for interactivity, the user experience also becomes an issue which needs to be covered by the research community.
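
As a toy illustration of the adaptation-logic point above, here is a sketch of viewport-dependent tile quality selection (entirely hypothetical function names, grid and bitrate numbers, not from any standard; real adaptation logic must also consider throughput estimation, buffer state and head-motion prediction):

```python
def select_tile_qualities(tiles_in_viewport, all_tiles, budget_kbps,
                          hi_kbps, lo_kbps):
    """Assign a high bitrate to tiles inside the viewport and a low
    bitrate elsewhere, falling back to low quality everywhere if the
    resulting total exceeds the bandwidth budget."""
    plan = {t: (hi_kbps if t in tiles_in_viewport else lo_kbps)
            for t in all_tiles}
    if sum(plan.values()) > budget_kbps:
        plan = {t: lo_kbps for t in all_tiles}
    return plan

tiles = [(r, c) for r in range(4) for c in range(4)]   # 4x4 tile grid
viewport = {(1, 1), (1, 2), (2, 1), (2, 2)}            # tiles the user sees
plan = select_tile_qualities(viewport, tiles, budget_kbps=20000,
                             hi_kbps=3000, lo_kbps=500)
```

Even this naive policy shows the extra dimension tiles introduce: the adaptation decision is no longer a single bitrate per segment but a joint allocation across tiles under one budget.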

A second topic I’d like to highlight in this blog post is the preliminary Call for Evidence on video compression with capability beyond HEVC.

Preliminary Call for Evidence on video compression with capability beyond HEVC

A Call for Evidence is issued to see whether sufficient technological potential exists to start a more rigorous phase of standardization. Currently, MPEG together with VCEG has developed a Joint Exploration Model (JEM) algorithm that is already known to provide bit rate reductions in the range of 20-30% for relevant test cases, as well as subjective quality benefits. The goal of this new standard – with a preliminary target date for completion around late 2020 – is to develop technology providing better compression capability than the existing standard, not only for conventional video material but also for other domains such as HDR/WCG or VR/360-degree video. An important aspect in this area is certainly over-the-top video delivery (as with MPEG-DASH), which includes features such as scalability and Quality of Experience (QoE). Scalable video coding has been added to video coding standards since MPEG-2 but never reached widespread adoption. That might change if it becomes a prime-time feature of a new video codec, as scalable video coding clearly shows benefits when doing dynamic adaptive streaming over HTTP. QoE has already found its way into video coding, at least when it comes to evaluating the results: subjective tests are now an integral part of every new video codec developed by MPEG (in addition to the usual PSNR measurements). Therefore, the most interesting research topics from a multimedia communication point of view would be to optimize the DASH-like delivery of such new codecs with respect to scalability and QoE. Note that if you don’t like scalable video coding, feel free to propose something else, as long as it reduces storage and networking costs significantly.

 

MPEG Workshop “Global Media Technology Standards for an Immersive Age”

On January 18, 2017, MPEG successfully held a public workshop on “Global Media Technology Standards for an Immersive Age”, hosting a series of keynotes from Bitmovin, DVB, Orange, Sky Italia, and Technicolor. Stefan Lederer, CEO of Bitmovin, discussed today’s and future challenges with new forms of content like 360°, AR and VR. All slides are available here, and MPEG took this feedback into consideration in an update of its 5-year standardization roadmap. David Wood (EBU) reported on the DVB VR study mission, and Ralf Schaefer (Technicolor) presented a snapshot of VR services. Gilles Teniou (Orange) discussed video formats for VR, pointing out a new opportunity to increase the content value but also raising the question of what is missing today. Finally, Massimo Bertolotti (Sky Italia) introduced his view on the immersive media experience age.

Overall, the workshop was well attended and, as mentioned above, MPEG is currently working on a new standards project related to immersive media. Currently, this project comprises five parts. The first part is a technical report describing the scope (incl. kind of system architecture), use cases, and applications. The second part is OMAF (see above), and the third and fourth parts are related to immersive video and audio, respectively. Part five is about point cloud compression.

For those interested, please check out the slides from industry representatives in this field and draw your own conclusions what could be interesting for your own research. I’m happy to see any reactions, hints, etc. in the comments.

Finally, let’s have a look what happened related to MPEG-DASH, a topic with a long history on this blog.

MPEG-DASH and CMAF: Friend or Foe?

For MPEG-DASH and CMAF it was a meeting “in between” official standardization stages. MPEG-DASH experts are still working on the third edition, which will be a consolidated version of the 2nd edition and various amendments and corrigenda. In the meantime, MPEG issued a white paper on the new features of MPEG-DASH, which I would like to highlight here.

  • Spatial Relationship Description (SRD): allows describing tiles and regions of interest for partial delivery of media presentations. This is highly relevant to OMAF and VR/360-degree video streaming.
  • External MPD linking: this feature allows describing the relationship between a single program/channel and a preview mosaic channel having all channels at once within the MPD.
  • Period continuity: a simple signaling mechanism to indicate whether one period is a continuation of the previous one, which is relevant for ad insertion or live programs.
  • MPD chaining: allows chaining two or more MPDs to each other, e.g., a pre-roll ad when joining a live program.
  • Flexible segment format for broadcast TV: separates the signaling of switching points and random access points in each stream; thus, the content can be encoded with good compression efficiency while allowing a higher number of random access points with a lower frequency of switching points.
  • Server and network-assisted DASH (SAND): enables asynchronous network-to-client and network-to-network communication of quality-related assisting information.
  • DASH with server push and WebSockets: basically addresses issues related to the HTTP/2 push feature and WebSockets.
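
The SRD descriptor mentioned above carries its parameters as a comma-separated value string in the MPD. Here is a minimal parsing sketch (the field order follows my reading of ISO/IEC 23009-1, so treat it as an assumption; the function name is my own):

```python
def parse_srd(value):
    """Parse an SRD value string of the assumed form
    'source_id,x,y,w,h[,W,H[,spatial_set_id]]' into named fields.
    Trailing optional fields may be absent."""
    fields = [int(f) for f in value.split(',')]
    keys = ['source_id', 'object_x', 'object_y', 'object_width',
            'object_height', 'total_width', 'total_height',
            'spatial_set_id']
    # zip() stops at the shorter sequence, so absent optional
    # fields are simply omitted from the result.
    return dict(zip(keys, fields))

# A tile covering the top-middle region of a 3840x2160 source.
srd = parse_srd('0,1280,0,1280,720,3840,2160')
```

With positions and total dimensions recovered like this, a client can decide which spatial parts of a presentation to actually request, which is exactly the hook that makes SRD relevant for tiled 360-degree streaming.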

CMAF issued a study document which captures the current progress, and all national bodies are encouraged to take this into account when commenting on the Committee Draft (CD). To answer the question in the headline above: it looks more and more like DASH and CMAF will become friends – let’s hope that the friendship lasts for a long time.

What else happened at the MPEG meeting?

  • Committee Draft MORE (note: type ‘man more’ in any Unix/Linux/macOS terminal and you’ll get ‘less – opposite of more’): MORE stands for “Media Orchestration” and provides a specification that enables the automated combination of multiple media sources (cameras, microphones) into a coherent multimedia experience. Additionally, it targets use cases where a multimedia experience is rendered on multiple devices simultaneously, again giving a consistent and coherent experience.
  • Technical Report on HDR/WCG Video Coding: This technical report comprises conversion and coding practices for High Dynamic Range (HDR) and Wide Colour Gamut (WCG) video coding (ISO/IEC 23008-14). The purpose of this document is to provide a set of publicly referenceable recommended guidelines for the operation of AVC or HEVC systems adapted for compressing HDR/WCG video for consumer distribution applications.
  • CfP Point Cloud Compression (PCC): This call solicits technologies for the coding of 3D point clouds with associated attributes such as color and material properties. It will be part of the immersive media project introduced above.
  • MPEG-H 3D Audio verification test report: This report presents results of four subjective listening tests that assessed the performance of the Low Complexity Profile of MPEG-H 3D Audio. The tests covered a range of bit rates and a range of “immersive audio” use cases (i.e., from 22.2 down to 2.0 channel presentations). Seven test sites participated in the tests with a total of 288 listeners.

The next MPEG meeting will be held in Hobart, April 3-7, 2017. Feel free to contact us for any questions or comments.

ACM TVX — Call for Volunteer Associate Chairs

CALL FOR VOLUNTEER ASSOCIATE CHAIRS – Applications for Technical Program Committee

ACM TVX 2017 – International Conference on Interactive Experiences for Television and Online Video
June 14-16, 2017, Hilversum, The Netherlands
www.tvx2017.com


We are welcoming applications to become part of the TVX 2017 Technical Program Committee (TPC), as Associate Chair (AC). This involves playing a key role in the submission and review process, including attendance at the TPC meeting (please note that this is not a call for reviewers, but a call for Associate Chairs). We are opening applications to all members of the community, from both industry and academia, who feel they can contribute to this team.

  • This call is open to new Associate Chairs and to those who have been Associate Chairs in previous years and want to be an Associate Chair again for TVX 2017
  • Application form: https://goo.gl/forms/c9gNPHYZbh2m6VhJ3
  • The application deadline is December 12, 2016

Following the success of previous years’ invitations for open applications to join our Technical Program Committee, we again invite applications for Associate Chairs. Successful applicants would be responsible for arranging and coordinating reviews for around 3 or 4 submissions in the main Full and Short Papers track of ACM TVX 2017, and for attending the Technical Program Committee meeting in Delft, The Netherlands, in mid-March 2017 (participation in person is strongly recommended). Our aim is to broaden participation, ensuring a diverse Technical Program Committee, and to help widen the ACM TVX community to include a full range of perspectives.

We welcome applications from academics, industrial practitioners and (where appropriate) senior PhD students, who have expertise in Human Computer Interaction or related fields, and who have an interest in topics related to interactive experiences for television or online video. We would expect all applicants to have ‘top-tier’ publications related to this area. Applicants should have an expertise or interest in at least one or more topics in our call for papers: https://tvx.acm.org/2017/participation/full-and-short-paper-submissions/

After the application deadline, the volunteers will be considered and selected as ACs, and the TPC Chairs will also be free to invite previous ACs or other researchers from the community to join the team. The ultimate goal is to reach a balanced, diverse and inclusive TPC team in terms of fields of expertise, experience and perspectives, from both academia and industry.

To submit, just fill in the application form above!

CONTACT INFORMATION

For up to date information and further details please visit: www.tvx2017.com or get in touch with the Inclusion Chairs:

Teresa Chambel, University of Lisbon, PT; Rob Koenen, TNO, NL
at: inclusion@tvx2017.com

In collaboration with the Program Chairs: Wendy van den Broeck, Vrije Universiteit Brussel, BE; Mike Darnell, Samsung, USA; Roger Zimmermann, NUS, Singapore

MPEG Column: 116th MPEG Meeting

MPEG Workshop on 5-Year Roadmap Successfully Held in Chengdu

Chengdu, China – The 116th MPEG meeting was held in Chengdu, China, from 17 – 21 October 2016.

At its 116th meeting, MPEG successfully organised a workshop on its 5-year standardisation roadmap. Various industry representatives presented their views and reflected on the need for standards for new services and applications, specifically in the area of immersive media. The results of the workshop (roadmap, presentations) and the planned phases for the standardisation of “immersive media” are available at http://mpeg.chiariglione.org/. A follow-up workshop will be held on 18 January 2017 in Geneva, co-located with the 117th MPEG meeting. The workshop is open to all interested parties and free of charge. Details on the program and registration will be available at http://mpeg.chiariglione.org/.

Summary of the “Survey on Virtual Reality”

At its 115th meeting, MPEG established an ad-hoc group on virtual reality which conducted a survey on virtual reality with relevant stakeholders in this domain. The feedback from this survey has been provided as input for the 116th MPEG meeting where the results have been evaluated. Based on these results, MPEG aligned its standardisation timeline with the expected deployment timelines for 360-degree video and virtual reality services. An initial specification for 360-degree video and virtual reality services will be ready by the end of 2017 and is referred to as the Omnidirectional Media Application Format (OMAF; MPEG-A Part 20, ISO/IEC 23000-20). A standard addressing audio and video coding for 6 degrees of freedom where users can freely move around is on MPEG’s 5-year roadmap. The summary of the survey on virtual reality is available at http://mpeg.chiariglione.org/.

MPEG and ISO/TC 276/WG 5 have collected and evaluated the answers to the Genomic Information Compression and Storage joint Call for Proposals

At its 115th meeting, MPEG issued a Call for Proposals (CfP) for Genomic Information Compression and Storage in conjunction with the working group for standardisation of data processing and integration of the ISO Technical Committee for biotechnology standards (ISO/TC 276/WG 5). The call sought submissions of technologies that can provide efficient compression of genomic data and metadata for storage and processing applications. During the 116th MPEG meeting, responses to this CfP were collected and evaluated by a joint ad-hoc group of both working groups, with twelve distinct technologies submitted. An initial assessment of the performance of the best eleven solutions for the different categories reported compression factors ranging from 8 to 58 for the different classes of data.

The twelve submitted technologies show consistent improvements over the results assessed in response to the Call for Evidence in February 2016. Further improvements of the technologies under consideration are expected from the first phase of core experiments defined at the 116th MPEG meeting. The open core experiment process planned for the next 12 months will comprise multiple, independent, directly comparable, rigorous experiments performed by independent entities to determine the specific merit of each technology and their mutual integration into a single solution for standardisation. The core experiment process will consider submitted technologies as well as new solutions in the scope of each specific core experiment. The final inclusion of submitted technologies into the standard will be based on the experimental comparison of performance, as well as on the validation of requirements and the inclusion of essential metadata describing the context of the sequence data, and will be reached by consensus within and across both committees.

Call for Proposals: Internet of Media Things and Wearables (IoMT&W)

At its 116th meeting, MPEG issued a Call for Proposals (CfP) for Internet of Media Things and Wearables (see http://mpeg.chiariglione.org/), motivated by the understanding that more than half of major new business processes and systems will incorporate some element of the Internet of Things (IoT) by 2020. Therefore, the CfP seeks submissions of protocols and data representation enabling dynamic discovery of media things and media wearables. A standard in this space will facilitate the large-scale deployment of complex media systems that can exchange data in an interoperable way between media things and media wearables.

MPEG-DASH Amendment with Media Presentation Description Chaining and Pre-Selection of Adaptation Sets

At the 116th MPEG meeting, a new amendment for MPEG-DASH reached the final stage of Final Draft Amendment (ISO/IEC 23009-1:2014 FDAM 4). This amendment includes several technologies useful for industry practices of adaptive media presentation delivery. For example, the media presentation description (MPD) can be daisy-chained to simplify the implementation of pre-roll ads in cases of targeted dynamic advertising for live linear services. Additionally, this amendment enables support for pre-selection, in order to signal suitable combinations of audio elements that are offered in different adaptation sets. As several amendments and corrigenda have been produced, this amendment will be published as part of the 3rd edition of ISO/IEC 23009-1, together with the amendments and corrigenda approved after the 2nd edition.

How to contact MPEG, learn more, and find other MPEG facts

To learn about MPEG basics, discover how to participate in the committee, or find out more about the array of technologies developed or currently under development by MPEG, visit MPEG’s home page at http://mpeg.chiariglione.org. There you will find information publicly available from MPEG experts past and present including tutorials, white papers, vision documents, and requirements under consideration for new standards efforts. You can also find useful information in many public documents by using the search window.

Examples of tutorials that can be found on the MPEG homepage include tutorials for: High Efficiency Video Coding, Advanced Audio Coding, Universal Speech and Audio Coding, and DASH to name a few. A rich repository of white papers can also be found and continues to grow. You can find these papers and tutorials for many of MPEG’s standards freely available. Press releases from previous MPEG meetings are also available. Journalists that wish to receive MPEG Press Releases by email should contact Dr. Christian Timmerer at christian.timmerer@itec.uni-klu.ac.at or christian.timmerer@bitmovin.com.

Further Information

Future MPEG meetings are planned as follows:
No. 117, Geneva, CH, 16 – 20 January, 2017
No. 118, Hobart, AU, 03 – 07 April, 2017
No. 119, Torino, IT, 17 – 21 July, 2017
No. 120, Macau, CN, 23 – 27 October 2017

For further information about MPEG, please contact:
Dr. Leonardo Chiariglione (Convenor of MPEG, Italy)
Via Borgionera, 103
10040 Villar Dora (TO), Italy
Tel: +39 011 935 04 61
leonardo@chiariglione.org

or

Priv.-Doz. Dr. Christian Timmerer
Alpen-Adria-Universität Klagenfurt | Bitmovin Inc.
9020 Klagenfurt am Wörthersee, Austria, Europe
Tel: +43 463 2700 3621
Email: christian.timmerer@itec.aau.at | christian.timmerer@bitmovin.com

Call for Task Proposals: Multimedia Evaluation 2017

MediaEval 2017 Multimedia Evaluation Benchmark

Call for Task Proposals

Proposal Deadline: 3 December 2016

MediaEval is a benchmarking initiative dedicated to developing and evaluating new algorithms and technologies for multimedia retrieval, access and exploration. It offers tasks to the research community that are related to human and social aspects of multimedia. MediaEval emphasizes the ‘multi’ in multimedia and seeks tasks involving multiple modalities, e.g., audio, visual, textual, and/or contextual.

MediaEval is now calling for proposals for tasks to run in the 2017 benchmarking season. The proposal consists of a description of the motivation for the task and challenges that task participants must address. It provides information on the data and evaluation methodology to be used. The proposal also includes a statement of how the task is related to MediaEval (i.e., its human or social component), and how it extends the state of the art in an area related to multimedia indexing, search or other technologies that support users in accessing multimedia collections.

For more detailed information about the content of the task proposal, please see:
http://www.multimediaeval.org/files/mediaeval2017_taskproposals.html

Task proposal deadline: 3 December 2016

Task proposals are chosen on the basis of their feasibility, their match with the topical focus of MediaEval, and also according to the outcome of a survey circulated to the wider multimedia research community.

The MediaEval 2017 Workshop will be held 13–15 September 2017 in Dublin, Ireland, co-located with CLEF 2017 (http://clef2017.clef-initiative.eu).

For more information about MediaEval, see http://multimediaeval.org or contact Martha Larson at m.a.larson@tudelft.nl.

 

SIGMM Award for Outstanding Ph.D. Thesis in Multimedia Computing, Communications and Applications 2016


The ACM Special Interest Group on Multimedia (SIGMM) is pleased to present the 2016 SIGMM Outstanding Ph.D. Thesis Award to Dr. Christoph Kofler. The award committee considers Dr. Kofler’s dissertation, entitled “User Intent in Online Video Search”, worthy of the recognition, as the thesis is the first to innovatively consider a user’s intent in multimedia search, yielding significantly improved results in satisfying the information need of the user. The work is highly original and is expected to have significant impact, especially in boosting search performance for multimedia data.

Dr. Kofler’s thesis systematically explores a user’s video search intent that is behind a user’s information need in three steps: (1) analyzing a real-world transaction log produced by a large video search engine to understand why searches fail, (2) understanding the possible intents of users behind video search and uploads, and (3) designing an intent-aware video search result optimization approach that re-ranks initial video search results so as to yield the highest potential to satisfy the users’ search intent.

The effectiveness of the framework developed in the thesis has been successfully validated by a thorough range of experiments. The topic itself is highly timely, and the framework makes groundbreaking contributions to our understanding and knowledge in the areas of users’ information seeking, user intent, user satisfaction, and multimedia search engine usability. The publications related to the thesis clearly demonstrate the impact of this work across several research disciplines, including multimedia, web, and information retrieval. Overall, the committee recognizes that the thesis has significant impact and makes considerable contributions to the multimedia community.

Bio of Awardee:

Dr. Christoph Kofler is a software engineer and data scientist at Bloomberg L.P., NY, USA. He holds a Ph.D. degree from Delft University of Technology, The Netherlands, and M.Sc. and B.Sc. degrees from Klagenfurt University, Austria – all in Computer Science. His research interests include the broad fields of multimedia and text-based information retrieval, with a focus on search intent inference and its applications for search result optimization throughout the entire search engine pipeline (indexing, ranking, query formulation). In addition to “what” a user is looking for, Dr. Kofler is particularly interested in the “why” behind the search and in the related opportunities for improving the efficiency and effectiveness of information retrieval systems. He has co-authored more than 20 scientific publications, with a predominant focus on venues such as ACM Multimedia, IEEE Transactions on Multimedia, and ACM Computing Surveys. He has been a task co-organizer of the MediaEval Benchmark initiative, received the Grand Challenge Best Presentation Award at ACM Multimedia and a Best Paper nomination at the European Conference on Information Retrieval, and is a recipient of the Google Doctoral Fellowship in Information Retrieval (Video Search). He has held positions at Microsoft Research, Beijing, China; Columbia University, NY, USA; and Google, NY, USA.

 

The award committee is pleased to present an honorable mention to Dr. Varun Singh for the thesis entitled “Protocols and Algorithms for Adaptive Multimedia Systems”. The thesis develops and presents congestion control algorithms and signaling protocols used in interactive multimedia communications. The committee is impressed by the thorough theoretical and experimental depth of the thesis. Also remarkable are Dr. Singh’s efforts to shepherd his work to real-world adoption, which led him to author four RFCs and several standards-track documents in the IETF and resulted in the incorporation of his work into the production versions of the Chrome and Firefox web browsers. His work has thus already achieved impact in the multimedia community.

Bio of Awardee:

Dr. Varun Singh received his Master’s degree in Electrical Engineering from Helsinki University of Technology, Finland, in 2009, and his Ph.D. degree from Aalto University, Finland, in 2015. His research has led him to make important contributions to different standardization organizations: 3GPP (2008 – 2010), IETF (since 2010), and W3C (since 2014). He is the co-author of the WebRTC Statistics API. Beyond this, his research led him to found and become CEO of callstats.io, a startup that analyzes and optimizes the quality of multimedia in real-time communication (currently, WebRTC).

 

ACM TOMM Special Issues and Special Sections

The ACM TOMM journal has launched a new two-year program of SPECIAL ISSUES and SPECIAL SECTIONS on strategic and emerging topics in Multimedia research. Each Special Issue will also include an extended survey paper on the subject of the issue, prepared by the Guest Editors. This will help to highlight trends and research paths and will position the contributed papers appropriately.

In May, we received 11 proposals and selected 4 for Special Issues and 2 for Special Sections, based on the timeliness and relevance of the topics and the qualifications of the proponents:

SPECIAL ISSUES (8 papers each)

  • “Deep Learning for Mobile Multimedia”
    for publication on April’17. Submission deadline Oct 15, 2016
  • “Delay-Sensitive Video Computing in the Cloud”
    for publication on July’17. Submission deadline Nov. 30, 2016
  • “Representation, Analysis and Recognition of 3D Human”
    for publication on Nov’17. Submission deadline Jan. 15, 2017
  • “QoE Management for Multimedia Services”
    for publication on April’18. Submission deadline May 15, 2017

SPECIAL SECTIONS (4 papers each)

  • “Multimedia Computing and Applications of Socio-Affective Behaviors in the Wild”
    for publication on May ’17. Submission deadline Oct 31, 2016
  • “Multimedia Understanding via Multimodal Analytics”
    for publication on May ’17. Submission deadline Oct 31, 2016

You can visit the news section of the ACM TOMM home page at http://tomm.acm.org for more detailed information. We look forward to your valuable contributions to this initiative.

 

ACM SIGMM Award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications


The 2016 winner of the prestigious ACM Special Interest Group on Multimedia (SIGMM) award for Outstanding Technical Contributions to Multimedia Computing, Communications and Applications is Prof. Dr. Alberto del Bimbo. The award is given in recognition of his outstanding, pioneering and continued research contributions in the areas of multimedia processing, multimedia content analysis, and multimedia applications, his leadership in multimedia education, and his outstanding and continued service to the community.

Prof. del Bimbo was among the very few who pioneered research in image and video content-based retrieval in the late 1980s. Since then, for over 25 years, he has been among the most visionary and influential researchers in this field in Europe and worldwide. His research has influenced several generations of researchers who are now active in some of the most important research centers worldwide. Over the years, he has made significant innovative research contributions.

In the early days of the discipline he explored all the modalities for retrieval by visual similarity of images and video. In his early paper “Visual Image Retrieval by Elastic Matching of User Sketches”, published in IEEE Trans. on Pattern Analysis and Machine Intelligence in 1997, he presented one of the first and best-performing methods for image retrieval by shape similarity from users’ sketches. He also published, in IEEE Trans. on Pattern Analysis and Machine Intelligence and IEEE Trans. on Multimedia, his original research on representations of spatial relationships between image regions based on spatial logic. This ground-breaking research was accompanied by the definition of efficient index structures to permit retrieval from large datasets. He was one of the first to address this large-dataset aspect, which has since become very important for the research community.

Since the early 2000s, with the advancement of 3D imaging technologies and the availability of a new generation of acquisition devices capable of capturing the geometry of 3D objects in three-dimensional physical space, Prof. del Bimbo and his team initiated research in 3D content-based retrieval, which has since become increasingly popular in mainstream research. Again, he was among the very first researchers to open this line of work. In particular, he focused on 3D face recognition, extending the weighted walkthrough representation of spatial relationships between image regions to model the 3D relationships between facial stripes. His solution, 3D Face Recognition Using Iso-geodesic Stripes, scored the best performance at the SHREC Shape Retrieval Contest in 2008 and was published in IEEE Trans. on Pattern Analysis and Machine Intelligence in 2010. At CVPR’15 he presented a novel idea for representing 3D textured mesh manifolds using Local Binary Patterns that is highly effective for 3D face retrieval. This was the first attempt to combine 3D geometry and photometric texture into a single unified representation. In 2016 he co-authored a forward-looking survey on content-based image retrieval in the context of social image platforms, which appeared in ACM Computing Surveys. It includes an extensive treatise of image tag assignment, refinement, and tag-based retrieval, and explores the differences between traditional image retrieval and retrieval with socially generated images.

One very important aspect of his contribution to the community is Professor del Bimbo’s educational impact during his career. He authored the monograph Visual Information Retrieval, published by Morgan Kaufmann in 1999, which became one of the most cited and influential books from the early years of image and video content-based retrieval. Many young researchers have used this book as the main reference in their studies, and their careers have been shaped by the ideas discussed in it. Being the first and only book on the subject in the early days of the discipline, it played a key role in developing content-based retrieval from a research niche into a heavily populated field and in making it central to Multimedia research.

Professor del Bimbo has an extraordinary and long-lasting track record of service to the scientific community over the last 20 years. As General Chair, he organized two of the most successful conferences in Multimedia, namely IEEE ICMCS’99, the Int’l Conf. on Multimedia Computing and Systems (since renamed IEEE ICME), and ACM MULTIMEDIA’10. The quality and success of these conferences were highly influential in attracting new young researchers to the field and in forming the present research community. Since 2016, he has been the Editor-in-Chief of ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM).

Announcement of ACM SIGMM Rising Star Award 2016


The ACM Special Interest Group on Multimedia (SIGMM) is pleased to present this year’s Rising Star Award in multimedia computing, communications and applications to Dr. Bart Thomee for his significant contributions in the areas of geo-multimedia computing, media evaluation, and open research datasets. The ACM SIGMM Rising Star Award recognizes a young researcher who has made outstanding research contributions to the field of multimedia computing, communication and applications during the early part of his or her career.

Dr. Bart Thomee received his Ph.D. from Leiden University in 2010. In his thesis, he focused on multimedia search and exploration, specifically targeting artificial imagination and duplicate detection. On the topic of artificial imagination, he aimed to more rapidly understand the user’s search intent by generating imagery that resembles the ideal image the user is looking for. Using the synthesized images as queries, instead of existing images from the database, boosted the relevance of the image results by up to 23%. On the topic of duplicate detection, he designed descriptors to compactly represent web-scale image collections and to accurately detect transformed versions of the same image. This work led to an Outstanding Paper Citation at the ACM Conference on Multimedia Information Retrieval 2008.

In 2011, he joined Yahoo Labs, where Dr. Thomee’s interests grew into geographic computing in Multimedia. He began characterizing spatiotemporal regions from labeled (e.g., tagged) georeferenced media, for which he devised a technique based on scale-space theory that could process billions of georeferenced labels in a matter of hours. This work was published at WWW 2013 and became a reference example at Yahoo for how to disambiguate multi-language and multi-meaning labels from media with noisy annotations.

He also started to use an overlooked piece of information found in most camera-phone images: compass information. He developed a technique to accurately pinpoint the locations and surface areas of landmarks based solely on the positions and orientations of photos of them, which may have been taken hundreds of yards to miles away.

Dr. Thomee’s recent work on the YFCC100M dataset has had an important impact on the multimedia and SIGMM research community. The dataset’s real-world size and structure have fueled and changed the landscape of research in Multimedia. What started as an initiative to release a geo-Flickr dataset quickly grew as Dr. Thomee saw the broader impact and worked rapidly to scale up its size. He had to push the limits of openness without violating licensing terms, copyright, or privacy, and worked closely with many lawyers to overturn the default, restrictive terms of use, making the dataset also available to non-academics all over the world. He coordinated and led the efforts to share the data with ICSI, LLNL, and Amazon Open Data. The dataset was highlighted in the February 2016 issue of the Communications of the ACM (CACM), has been requested over 1200 times in just a few months, and has been cited many times since launch. Dr. Thomee has continued by releasing expansion packs to the YFCC100M. The dataset is expected to impact Multimedia research significantly in the years to come.

Dr. Thomee has also been an exemplary community member of the Multimedia community. For example, he organized the ImageCLEF photo annotation task (2012-2013) and MediaEval placing task (2013-2016) as well as designed the ACM Grand Challenge on Event Summarization (2015) and on Tag & Caption Prediction (2016).

In summary, Dr. Bart Thomee receives the 2016 ACM SIGMM Rising Star Award for his significant contributions in the areas of geo-multimedia computing, media evaluation, and open datasets for research.