Sharing and Reproducibility in ACM SIGMM

 

This column discusses the efforts of ACM SIGMM towards sharing and reproducibility. Apart from specific sessions dedicated to open source and datasets, ACM Multimedia Systems started providing official ACM badges for articles that make artifacts available last year. This year marked a record, with 45% of the articles acquiring such a badge.


Without data, it is impossible to put theories to the test. Moreover, without running code it is tedious at best to (re)produce and evaluate any results. Yet collecting data and writing code can be a road full of pitfalls, ranging from datasets containing copyrighted materials to algorithms containing bugs. The ideal datasets and software packages are those that are open and transparent for the world to look at, inspect, and use with no or only limited restrictions. Such “artifacts” make it possible to establish public consensus on their correctness, or otherwise to start a dialogue on how to fix any identified problems.

In our interconnected world, storing and sharing information has never been easier. Despite the temptation for researchers to keep datasets and software to themselves, a growing number are willing to share their resources with others. To further promote this sharing behavior, conferences, workshops, publishers, and non-profit and even for-profit companies are increasingly recognizing and supporting these efforts. For example, the ACM Multimedia conference has hosted an open source software competition since 2004, and the ACM Multimedia Systems conference has included an open datasets and software track since 2011. The ACM Digital Library now also hands out badges to papers whose artifacts have been made available and, optionally, reviewed and verified by members of the community. At the same time, organizations such as Zenodo and Amazon host open datasets for free. Sharing ultimately pays off: the citation statistics for ACM Multimedia Systems conferences over the past five years, for example, show that half of the 20 most cited papers shared data and code, even though such papers have represented only a small fraction of all published papers so far.


Good practices are increasingly being adopted. In this year’s edition of the ACM Multimedia Systems conference, 69 works (papers, demos, datasets, software) were accepted, out of which 31 (45%) were awarded an ACM badge. This is a large increase compared to last year, when only 13 out of 42 works (31%) received one. This greatly advances one of the core objectives of both the conference and SIGMM towards open science. At this moment, the ACM Digital Library does not separately index which papers received a badge, making it challenging to find all papers that have one. It further appears that not many other ACM conferences are aware of the badges yet; for example, while ACM Multimedia accepted 16 open source papers in 2016 and 6 in 2017, none applied for a badge. This year at ACM Multimedia Systems, only “artifacts available” badges were awarded. For next year, our intention is to ensure that all dataset and software submissions receive the “artifacts evaluated” badge. This will require several committed community members to spend time working with the authors to get the artifacts running on all major platforms with corresponding detailed documentation.

The accepted artifacts this year are diverse in nature: several submissions focus on releasing artifacts related to quality of experience of (mobile/wireless) streaming video, while others center on making datasets and tools related to images, videos, speech, sensors, and events available; in addition, there are a number of contributions in the medical domain. It is great to see such a range of interests in our community!

MPEG Column: 122nd MPEG Meeting in San Diego, CA, USA

The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects.

MPEG122 Plenary, San Diego, CA, USA.

The MPEG press release comprises the following topics:

  • Versatile Video Coding (VVC) project starts strongly in the Joint Video Experts Team
  • MPEG issues Call for Proposals on Network-based Media Processing
  • MPEG finalizes 7th edition of MPEG-2 Systems Standard
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with two new features
  • MPEG-G standards reach Draft International Standard for transport and compression technologies

Versatile Video Coding (VVC) – MPEG’s and VCEG’s new video coding project starts strong

The Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16’s VCEG, commenced work on a new video coding standard referred to as Versatile Video Coding (VVC). The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard (i.e., typically twice as efficient as before), with completion targeted for 2020. The main target applications and services include, but are not limited to, 360-degree and high-dynamic-range (HDR) video. In total, JVET evaluated responses from 32 organizations using formal subjective tests conducted by independent test labs. Interestingly, some proposals demonstrated compression efficiency gains of typically 40% or more compared to HEVC, with particular effectiveness shown on ultra-high-definition (UHD) video test material. Thus, we may expect compression efficiency gains well beyond the targeted 50% for the final standard.

Research aspects: Compression tools and everything around them, including their objective and subjective assessment. The main application areas are clearly 360-degree and HDR video. Watch out for conferences like PCS and ICIP (later this year), which will be full of papers referencing VVC. Interestingly, VVC comes with a first draft, a test model for simulation experiments, and a technology benchmark set, which is useful and important for developments both inside and outside MPEG as it allows for reproducibility.

MPEG issues Call for Proposals on Network-based Media Processing

This Call for Proposals (CfP) addresses advanced media processing technologies, such as network stitching for VR services, super-resolution for enhanced visual quality, transcoding, and viewport extraction for 360-degree video, within a network environment that allows service providers and end users to describe media processing operations to be performed by the network. The aim of network-based media processing (NBMP) is thus to allow end-user devices to offload certain kinds of processing to the network. To this end, NBMP describes the composition of network-based media processing services from a set of media processing functions and makes them accessible through Application Programming Interfaces (APIs). Responses to the NBMP CfP will be evaluated on the weekend prior to the 123rd MPEG meeting in July 2018.
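To make the offloading idea concrete, the sketch below shows how a client might hand a workflow description to a network-side service through such an API. All endpoint, function, and field names are invented for illustration; the actual NBMP APIs will be defined only after the CfP responses are evaluated.

    # Hypothetical sketch: a client describes a media processing workflow
    # and hands it to a network-side NBMP-like service. Every endpoint,
    # function, and field name below is invented for illustration.
    import requests

    workflow = {
        "input": {"stream": "rtmp://example.com/live/cam360"},
        "functions": [
            {"name": "stitch360", "params": {"projection": "equirectangular"}},
            {"name": "viewport_extract", "params": {"yaw": 40, "pitch": 0}},
        ],
        "output": {"format": "dash", "publish_to": "https://cdn.example.com/vr/"},
    }

    # POST the description; the network executes the functions and returns
    # a handle for monitoring and control.
    resp = requests.post("https://nbmp.example.net/workflows", json=workflow)
    print(resp.json())  # e.g., {"workflow_id": "...", "status": "running"}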

Research aspects: This project reminds me a lot of what was done in MPEG-21 in the past, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG now targets APIs rather than pure metadata formats, which is a step in the right direction, as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

7th edition of MPEG-2 Systems Standard and ISO Base Media File Format (ISOBMFF) with two new features

More than 20 years after its inception, development of MPEG-2 systems technology (i.e., transport/program streams) continues. New features include support for: (i) JPEG 2000 video with 4K resolution and ultra-low latency, (ii) media-orchestration-related metadata, (iii) sample variants, and (iv) HEVC tiles.

The partial file format enables the description of an ISOBMFF file that has been only partially received over a lossy communication channel. The format provides tools to describe the received data, transmission information such as received or lost byte ranges and whether corrupted/lost bytes are present in the file, and repair information such as the location of the source file, possible byte offsets in that source, and the byte stream position at which a parser can try processing a corrupted file. Depending on the communication channel, this information may be set up by the receiver or through out-of-band means.
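As a rough illustration of the bookkeeping this format standardizes, consider the minimal in-memory model below; the field names are our own shorthand, not the normative box names from the specification.

    # Illustrative only: a minimal model of the information the partial
    # file format captures (received vs. lost byte ranges, repair hints).
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ByteRange:
        offset: int      # start offset within the file
        length: int      # number of bytes in the range
        received: bool   # True if these bytes arrived intact

    @dataclass
    class PartialFileInfo:
        ranges: List[ByteRange] = field(default_factory=list)
        source_url: Optional[str] = None     # where an intact copy may be fetched
        resume_offset: Optional[int] = None  # position where a parser may retry

        def lost_ranges(self) -> List[ByteRange]:
            return [r for r in self.ranges if not r.received]

    info = PartialFileInfo(
        ranges=[ByteRange(0, 4096, True), ByteRange(4096, 1024, False)],
        source_url="https://origin.example.com/movie.mp4",
        resume_offset=5120,
    )
    print([(r.offset, r.length) for r in info.lost_ranges()])  # [(4096, 1024)]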

The second new ISOBMFF feature is sample variants (2nd edition), which are typically used to provide forensic information in the rendered sample data that can, for example, identify the specific Digital Rights Management (DRM) client that decrypted the content. This variant framework is intended to be fully compatible with MPEG’s Common Encryption (CENC) and agnostic to the particular forensic marking system used.

Research aspects: MPEG systems standards are mainly relevant for multimedia systems research with all its characteristics. The partial file format is specifically interesting as it targets scenarios with lossy communication channels.

MPEG-G standards reach Draft International Standard for transport and compression technologies

MPEG-G provides a set of standards enabling interoperability for applications and services dealing with high-throughput deoxyribonucleic acid (DNA) sequencing. At its 122nd meeting, MPEG promoted its core set of MPEG-G specifications, i.e., transport and compression technologies, to Draft International Standard (DIS) stage. These parts of the standard provide new transport technologies (ISO/IEC 23092-1) and compression technologies (ISO/IEC 23092-2) supporting rich functionality for the access and transport, including streaming, of genomic data by interoperable applications. Reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5) will reach this stage in the next 12 months.

Research aspects: The main focus of this work item is compression; transport is still in its infancy. Therefore, research on the actual delivery of compressed DNA information, as well as on its processing, is solicited.

What else happened at MPEG122?

  • The Requirements subgroup is exploring new video coding tools dealing with low complexity and process enhancements.
  • The activity around coded representation of neural networks has defined a set of vital use cases and is now soliciting test data until the next meeting.
  • The MP4 registration authority (MP4RA) has a new awesome web site http://mp4ra.org/.
  • MPEG-DASH is finalizing the 3rd edition, comprising a consolidated version of recent amendments and corrigenda.
  • CMAF started an exploration on multi-stream support, which could be relevant for tiled streaming and multi-channel audio.
  • OMAF kicked off its activity towards a 2nd edition enabling support for 3DoF+ and social VR, with the plan to reach committee draft (CD) in Oct’18. Additionally, a test framework has been proposed that allows assessing the performance of various CMAF tools. Its main focus is on video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It would be interesting to see these two frameworks combined in one way or another.
  • MPEG-I architectures (yes, plural) are becoming mature, and I think this technical report will become available very soon. In terms of video, MPEG-I looks more closely at 3DoF+, defining common test conditions and planning a call for proposals (CfP) for MPEG123 in Ljubljana, Slovenia. Additionally, an exploration for 6DoF is ongoing, and one on compression of dense representations of light fields has just started.
  • Finally, point cloud compression (PCC) is in its hot phase of core experiments for various coding tools, resulting in updated versions of the test model and working draft.

Research aspects: In this section I would like to focus on DASH, CMAF, and OMAF. Multi-stream support, as mentioned above, is relevant for tiled streaming and multi-channel audio; it has recently been studied in the literature and is also highly relevant for industry. The efficient storage and streaming of this kind of content within the file format is an important aspect that is often underrepresented in both research and standardization. The goal here is to keep the overhead low while maximizing the utility of the format to enable certain functionalities. OMAF now targets the social VR use case, which has been discussed in the research literature for a while and finally makes its way into standardization. An important aspect here is both the user experience and the quality of experience, which require intensive subjective testing.

Finally, on May 10 MPEG will celebrate its 30th birthday, as its first meeting dates back to 1988 in Ottawa, Canada, with around 30 attendees. The 122nd meeting had more than 500 attendees, and MPEG has around 20 active work items. In total, more than 170 standards have been produced (that’s approximately six standards per year), and some standards have up to nine editions, like the HEVC standard. Overall, MPEG is responsible for more than 23% of all JTC 1 standards, some of them showing extraordinary longevity in terms of extensions, e.g., MPEG-2 systems (24 years), MPEG-4 file format (19 years), and AVC (15 years). MPEG standards serve billions of users (e.g., MPEG-1 video, MP2, MP3, AAC, MPEG-2, AVC, ISOBMFF, DASH). Five standards have received Emmy awards in the past (MPEG-1, MPEG-2, AVC (2x), and HEVC).

Thus, happy birthday MPEG! Turning 30 marks the start of the high-performance era, basically the time of “compression”: applying all that we have learnt and living life to the fullest. A truly optimistic perspective for our generation-X (millennial) standards body!

JPEG Column: 79th JPEG Meeting in La Jolla, California, U.S.A.

The JPEG Committee had its 79th meeting in La Jolla, California, U.S.A., from 9 to 15 April 2018.

During this meeting, JPEG held a final celebration of the 25th anniversary of its first JPEG standard, usually known as JPEG-1. This celebration coincided with two interesting facts. The first was the approval of reference software for JPEG-1, “only” 25 years later. At the time of approval of the first JPEG standard, reference software was not considered, as is common in recent image standards. However, the JPEG Committee decided it was still important to provide reference software, as current applications and standards can largely benefit from this specification. The second coincidence was the launch of a call for proposals for a next-generation image coding standard, JPEG XL. This standard will define a new representation format for photographic information that includes current technological developments and can become an alternative to the 25-year-old JPEG standard.

An informative two-hour JPEG Technologies Workshop marked the 25th anniversary celebration on Friday, April 13, 2018. The workshop featured presentations by several committee members on current and future JPEG Committee activities, with the following program:


Touradj Ebrahimi, convenor of JPEG, presenting an overview of JPEG technologies.

  • Overview of JPEG activities, by Touradj Ebrahimi
  • JPEG XS by Antonin Descampe and Thomas Richter
  • HTJ2K by Pierre-Anthony Lemieux
  • JPEG Pleno – Light Field, Point Cloud, Holography by Ioan Tabus, Antonio Pinheiro, Peter Schelkens
  • JPEG Systems – Privacy and Security, 360 by Siegfried Foessel, Frederik Temmermans, Andy Kuzma
  • JPEG XL by Fernando Pereira, Jan De Cock

After the workshop, a social event was organized, at which past JPEG Committee Convenor Eric Hamilton was recognized for key contributions to JPEG standardization.

The main highlights of the La Jolla JPEG meeting are:

  • Call for proposals of a next generation image coding standard, JPEG XL
  • JPEG XS profiles and levels definition
  • JPEG Systems defines a 360 degree format
  • HTJ2K
  • JPEG Pleno
  • JPEG XT
  • Approval of the JPEG Reference Software

The following summarizes various activities during JPEG’s La Jolla meeting.

JPEG XL

Billions of images are captured, stored and shared on a daily basis demonstrating the self-evident need for efficient image compression. Applications, websites and user interfaces are increasingly relying on images to share experiences, stories, visual information and appealing designs.

User interfaces can target devices with stringent constraints on network connection and/or power consumption in bandwidth-constrained environments. Even though network capacities are improving globally, bandwidth is often constrained to levels that inhibit application responsiveness. User interfaces that rely on images with larger resolutions, higher dynamic ranges, wider color gamuts and higher bit depths further contribute to larger volumes of data, even in higher-bandwidth environments.

The JPEG Committee has launched a Next Generation Image Coding activity, referred to as JPEG XL. This activity aims to develop a standard for image coding that offers substantially better compression efficiency than existing image formats (e.g. more than 60% improvement when compared to the widely used legacy JPEG format), along with features desirable for web distribution and efficient compression of high-quality images.

To this end, the JPEG Committee has issued a Call for Proposals following its 79th meeting in April 2018, with the objective of seeking technologies that fulfill the objectives and scope of a Next Generation Image Coding. The Call for Proposals (CfP), with all related info, can be found at jpeg.org. The deadline for expression of interest and registration is August 15, 2018, and submissions to the Call are due September 1, 2018. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

This project aims at the standardization of a visually lossless, low-latency, lightweight compression scheme that can be used as a mezzanine codec for the broadcast industry, Pro-AV and other markets such as VR/AR/MR applications and autonomous cars. Among the important use cases identified, one can mention video transport over professional video links (SDI, IP, Ethernet), real-time video storage, memory buffers, omnidirectional video capture and rendering, and sensor compression in the automotive industry. During the La Jolla meeting, profiles and levels were defined to help implementers accurately size their designs for specific use cases. Transport of JPEG XS over IP networks or SDI infrastructures is also being specified and will be finalized during the next JPEG meeting in Berlin (July 9-13, 2018). The JPEG Committee therefore invites interested parties, in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions, to contribute to the specification process. Publication of the core coding system as an International Standard is expected in Q4 2018.

 

JPEG Systems – JPEG 360

The JPEG Committee continues to make progress towards its goals to define a common framework and definitions for metadata which will improve the ability to share 360 images and provide the basis to enable new user interaction with images.  At the 79th JPEG meeting in La Jolla, the JPEG committee received responses to a call for proposals it issued for JPEG 360 metadata. As a result, JPEG Systems is readying a committee draft of “JPEG Universal Metadata Box Format (JUMBF)” as ISO/IEC 19566-5, and “JPEG 360” as ISO/IEC 19566-6.  The box structure defined by JUMBF allows JPEG 360 to define a flexible metadata schema and the ability to link JPEG code streams embedded in the file. It also allows keeping unstitched image elements for omnidirectional captures together with the main image and descriptive metadata in a single file.  Furthermore, JUMBF lays the groundwork for a uniform approach to integrate tools satisfying the emerging requirements for privacy and security metadata.

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector. 

 

HTJ2K

High Throughput JPEG 2000 (HTJ2K) aims to develop an alternate block-coding algorithm that can be used in place of the existing block coding algorithm specified in ISO/IEC 15444-1 (JPEG 2000 Part 1). The objective is to significantly increase the throughput of JPEG 2000, at the expense of a small reduction in coding efficiency, while allowing mathematically lossless transcoding to and from codestreams using the existing block coding algorithm.

As a result of a Call for Proposals issued at its 76th meeting, the JPEG Committee has selected a block-coding algorithm as the basis for Part 15 of the JPEG 2000 suite of standards, known as High Throughput JPEG 2000 (HTJ2K). The algorithm has demonstrated an average tenfold increase in encoding and decoding throughput, compared to the algorithms based on JPEG 2000 Part 1. This increase in throughput results in less than 15% average loss in coding efficiency, and allows mathematically lossless transcoding to and from JPEG 2000 Part 1 codestreams.

A Working Draft of Part 15 to the JPEG 2000 suite of standards is now under development.

 

JPEG Pleno

The JPEG Committee is currently pursuing three activities in the framework of the JPEG Pleno Standardization: Light Field, Point Cloud and Holographic content coding.

JPEG Pleno Light Field finished a third round of core experiments for assessing the impact of individual coding modules and started work on creating software for a verification model. Moreover, additional test data has been studied and approved for use in future core experiments. Working Draft documents for JPEG Pleno specifications Part 1 and Part 2 were updated. A JPEG Pleno Light Field AhG was established with mandates to create a common test conditions document; perform exploration studies on new datasets, quality metrics, and random-access performance indicators; and to update the working draft documents for Part 1 and Part 2.

Furthermore, use cases were studied and are under consideration for JPEG Pleno Point Cloud. A current draft list is under discussion for the next period and will be updated and mapped to the JPEG Pleno requirements. A final document on use cases and requirements for JPEG Pleno Point Cloud is expected at the next meeting.

JPEG Pleno Holography has reviewed the draft of a holography overview document. Moreover, the current databases were classified according to use cases, and plans to analyze numerical reconstruction tools were established.

 

JPEG XT

The JPEG Committee released two corrigenda, to JPEG XT Part 1 (core coding system) and JPEG XT Part 8 (lossless extension of JPEG-1). These corrigenda clarify the upsampling procedure for chroma-subsampled images by adopting the centered upsampling in use by JFIF.

 

JPEG Reference Software

The JPEG Committee is pleased to announce that the CD ballot for Reference Software has been issued for the original JPEG-1 standard. This initiative closes a long-standing gap in the legacy JPEG standard by providing two reference implementations for this widely used and popular image coding format.

Final Quote

“The JPEG Committee is hopeful that its recently launched Next Generation Image Coding activity, JPEG XL, will result in a format that becomes as important for imaging products and services as its predecessor: the widely used and popular legacy JPEG format, which has been in service for a quarter of a century,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee nominally meets four times a year, in different world locations. The 79th JPEG Meeting was held on 9-15 April 2018, in La Jolla, California, USA. The next, 80th JPEG Meeting will be held on 7-13 July 2018, in Berlin, Germany.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

 

Future JPEG meetings are planned as follows:

  • No 80, Berlin, Germany, July 7 to 13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018
  • No 82, Lisbon, Portugal, January 19 to 25, 2019

Socially significant music events

Social media sharing platforms (e.g., YouTube, Flickr, Instagram, and SoundCloud) have revolutionized how users access multimedia content online. Most of these platforms provide a variety of ways for the user to interact with different types of media: images, video, music. In addition to watching or listening to the media content, users can also engage with it in different ways, e.g., like, share, tag, or comment. Social media sharing platforms have become an important resource for scientific researchers, who aim to develop new indexing and retrieval algorithms that can improve users’ access to multimedia content, thereby enhancing the experience these platforms provide.

Historically, the multimedia research community has focused on developing multimedia analysis algorithms that combine the visual and text modalities. Less visible is research devoted to algorithms that exploit the audio signal as the main modality. Recently, awareness of the importance of audio has experienced a resurgence. Particularly notable is Google’s release of AudioSet, “a large-scale dataset of manually annotated audio events” [7]. In a similar spirit, we have developed the “Socially Significant Music Events” dataset that supports research on music events [3]. The dataset contains Electronic Dance Music (EDM) tracks with a Creative Commons license that have been collected from SoundCloud. Using this dataset, one can build machine learning algorithms to detect specific events in a given music track.

What are socially significant music events? Within a music track, listeners are able to identify certain acoustic patterns as nameable music events.  We call a music event “socially significant” if it is popular in social media circles, implying that it is readily identifiable and an important part of how listeners experience a certain music track or music genre. For example, listeners might talk about these events in their comments, suggesting that these events are important for the listeners (Figure 1).

Traditional music event detection has only tackled low-level events like music onsets [4] or music auto-tagging [8, 10]. In our dataset, we consider events at a higher abstraction level than low-level musical onsets. In auto-tagging, descriptive tags are associated with 10-second music segments. These tags generally fall into three categories: musical instruments (guitar, drums, etc.), musical genres (pop, electronic, etc.) and mood-based tags (serene, intense, etc.). These tags differ from what we detect in this dataset: the events in our dataset have a particular temporal structure, unlike the categories targeted by auto-tagging. Additionally, we analyze the entire music track and detect the start points of music events, rather than short segments as in auto-tagging.

There are three music events in our Socially Significant Music Event dataset: Drop, Build, and Break. These events can be considered to form the basic set of events used by EDM producers [1, 2]. They have a certain temporal structure internal to themselves, which can be of varying complexity. Their social significance is visible from the large number of timed comments related to these events on SoundCloud (Figures 1 and 2), with listeners often mentioning them explicitly. We define these events as follows [2]:

  1. Drop: A point in the EDM track, where the full bassline is re-introduced and generally follows a recognizable build section
  2. Build: A section in the EDM track, where the intensity continuously increases and generally climaxes towards a drop
  3. Break: A section in an EDM track with a significantly thinner texture, usually marked by the removal of the bass drum

Figure 1. Screenshot from SoundCloud showing a list of timed comments left by listeners on a music track [11].


SoundCloud

SoundCloud is an online music sharing platform that allows users to record, upload, promote and share their self-created music. SoundCloud started out as a platform for amateur musicians, but currently many leading music labels are also represented. One of its interesting features is that it allows “timed comments” on music tracks: comments, left by listeners, associated with a particular time point in the track. Our “Socially Significant Music Events” dataset is inspired by the potential usefulness of these timed comments as ground truth for training music event detectors. Figure 2 contains an example of a timed comment: “That intense buildup tho” (timestamp 00:46), which could potentially be used as a training label to detect a build. In a similar way, listeners also mention the other events in their timed comments, so timed comments can serve as training labels for machine learning algorithms that detect events.

Figure 2. Screenshot from SoundCloud indicating the useful information present in the timed comments [11].

SoundCloud also provides a well-documented API [6] with interfaces for many programming languages: Python, Ruby, JavaScript, etc. Through this API, one can download the music tracks (if allowed by the uploader), the timed comments, and other metadata related to a track. We used this API to collect our dataset. Via the search functionality, we searched for tracks uploaded during 2014 with a Creative Commons license, which yields a list of tracks with unique identification numbers. We then scanned the timed comments of these tracks for the keywords drop, break, and build, kept the tracks whose timed comments contained at least one of these keywords, and discarded the rest.
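The sketch below illustrates this collection procedure. Endpoint and parameter names follow our reading of the public API guide [6] at the time of collection and should be treated as illustrative; the client ID is a placeholder, and the license filter shows only one of several cc-* values.

    # Sketch of the collection procedure using SoundCloud's HTTP API.
    # Endpoints/parameters reflect the public API guide [6] at the time;
    # verify against current documentation before use.
    import requests

    API = "https://api.soundcloud.com"
    CLIENT_ID = "YOUR_CLIENT_ID"  # placeholder: obtained by registering an app
    KEYWORDS = ("drop", "build", "break")

    def search_cc_tracks_2014(limit=200):
        params = {
            "client_id": CLIENT_ID,
            "license": "cc-by",  # one example Creative Commons license filter
            "created_at[from]": "2014-01-01 00:00:00",
            "created_at[to]": "2014-12-31 23:59:59",
            "limit": limit,
        }
        return requests.get(f"{API}/tracks", params=params).json()

    def timed_comments(track_id):
        comments = requests.get(f"{API}/tracks/{track_id}/comments",
                                params={"client_id": CLIENT_ID}).json()
        # Timed comments carry a 'timestamp' (position in ms) besides the text.
        return [c for c in comments if c.get("timestamp") is not None]

    kept = []
    for track in search_cc_tracks_2014():
        comments = timed_comments(track["id"])
        if any(k in c["body"].lower() for c in comments for k in KEYWORDS):
            kept.append(track["id"])  # track mentions drop/build/break
    print(f"kept {len(kept)} tracks")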

Dataset

The dataset contains 402 music tracks with an average duration of 4.9 minutes. Each track is accompanied by timed comments relating to Drop, Build, and Break. It is also accompanied by ground truth labels that mark the true locations of the three events within the tracks. The labels were created by a team of experts. Unlike many other publicly available music datasets that provide only metadata or short previews of music tracks  [9], we provide the entire track for research purposes. The download instructions for the dataset can be found here: [3]. All the music tracks in the dataset are distributed under the Creative Commons license. Some statistics of the dataset are provided in Table 1.  

Table 1. Statistics of the dataset: Number of events, Number of timed comments

Event Name | Total number of events | Events per track | Total timed comments | Timed comments per track
Drop       | 435                    | 1.08             | 604                  | 1.50
Build      | 596                    | 1.48             | 609                  | 1.51
Break      | 372                    | 0.92             | 619                  | 1.54

The main purpose of the dataset is to support training of detectors for the three events of interest (Drop, Build, and Break) in a given music track. These three events can be considered a case study to prove that it is possible to detect socially significant musical events, opening the way for future work on an extended inventory of events. Additionally, the dataset can be used to understand the properties of timed comments related to music events. Specifically, timed comments can be used to reduce the need for manually acquired ground truth, which is expensive and difficult to obtain.

Timed comments present an interesting research challenge: temporal noise. The timed comments and the actual events do not always coincide: a comment may appear at the same position as, before, or after the actual event. For example, in the music track below (Figure 3), there is a timed comment about a drop at 00:40, while the actual drop occurs only at 01:00. Because of this noisy nature, we cannot use timed comments alone as ground truth; we need strategies to handle temporal noise in order to use them for training [1].

Figure 3. Screenshot from SoundCloud indicating the noisy nature of timed comments [11].

In addition to music event detection, our “Socially Significant Music Event” dataset opens up other possibilities for research. Timed comments have the potential to improve users’ access to music and to support them in discovering new music. Specifically, timed comments mention aspects of music that are difficult to derive from the signal and may be useful for calculating the song-to-song similarity needed to improve music recommendation. The fact that the comments are related to a certain time point is important because it allows us to derive continuous information over time from a music track. Timed comments are potentially very helpful for supporting listeners in finding specific points of interest within a track, or in deciding whether they want to listen to a track, since they allow users to jump in and listen to specific moments without listening to the track end-to-end.

State of the art

The detection of music events requires training classifiers that are able to generalize over the variability in the audio signal patterns corresponding to events. In Figure 4, we see that the build-drop combination has a characteristic pattern in the spectral representation of the music signal. The build is a sweep-like structure and is followed by the drop, which we indicate by a red vertical line. More details about the state-of-the-art features useful for music event detection and the strategies to filter the noisy timed comments can be found in our publication [1].

Figure 4. The spectral representation of the musical segment containing a drop. You can observe the sweeping structure indicating the buildup. The red vertical line is the drop.
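For readers who want to inspect such patterns themselves, the sketch below plots a comparable spectral view for a local audio file. It uses librosa as an illustrative tooling choice; the file name and the marked drop position are placeholders, and the actual features used for detection are described in [1].

    # Plot a spectrogram of a 30-second excerpt and mark a (known) drop.
    # "track.mp3" and the drop position at 20 s are placeholders.
    import numpy as np
    import librosa
    import librosa.display
    import matplotlib.pyplot as plt

    y, sr = librosa.load("track.mp3", offset=30.0, duration=30.0)
    S_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

    librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="hz")
    plt.axvline(x=20.0, color="r")  # drop position within the excerpt
    plt.title("Build-drop segment: the sweep precedes the drop")
    plt.tight_layout()
    plt.show()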

The evaluation metric used to measure the performance of a music event detector should be chosen according to the user scenario for that detector. For example, if the music event detector is used for non-linear access (i.e., creating jump-in points along the play bar), it is important that the detected time point of the event falls before, rather than after, the actual event. In this case, we recommend using the “event anticipation distance” (ea_dist) as a metric. The ea_dist is the amount of time by which the predicted event time point precedes the actual event time point; it represents the time the user would have to wait to listen to the actual event. More details about ea_dist can be found in our paper [1].
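To make the metric concrete, here is one plausible formalization in Python; it reflects our reading of the description above, and the authoritative definition is given in [1].

    # A plausible formalization of ea_dist, sketched for illustration;
    # the paper [1] gives the authoritative definition. Times in seconds.
    def ea_dist(predicted_time, actual_times):
        """Time by which a prediction precedes the next actual event,
        i.e., how long the user would wait before the event happens."""
        upcoming = [t for t in sorted(actual_times) if t >= predicted_time]
        if not upcoming:
            return None  # the prediction falls after the last actual event
        return upcoming[0] - predicted_time

    # A drop predicted at 42 s, with actual drops at 60 s and 180 s,
    # yields an ea_dist of 18 s (cf. the baseline result below).
    print(ea_dist(42.0, [60.0, 180.0]))  # 18.0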

In [1], we report the implementation of a baseline music event detector that uses only timed comments as training labels. This detector attains an ea_dist of 18 seconds for a drop. We point out that, from the user’s point of view, this level of performance could already lead to quite useful jump-in points. Note that the typical length of a build-drop combination is between 15 and 20 seconds: if the user is positioned 18 seconds before the drop, the build will already have started, and the user knows that a drop is coming. Using an optimized combination of timed comments and manually acquired ground-truth labels, we are able to achieve an ea_dist of 6 seconds.

Conclusion

Timed comments, on their own, can be used as training labels to train detectors for socially significant events. A detector trained on timed comments performs reasonably well in applications like non-linear access, where the listener wants to jump through different events in the music track without listening to it in its entirety. We hope that the dataset will encourage researchers to explore the usefulness of timed comments for all media. Additionally, we would like to point out that our work has demonstrated that the impact of temporal noise can be overcome and that the contribution of timed comments to video event detection is worth investigating further.

Contact

Should you have any inquiries or questions about the dataset, do not hesitate to contact us via email at: n.k.yadati@tudelft.nl

References

[1] K. Yadati, M. Larson, C. Liem and A. Hanjalic, “Detecting Socially Significant Music Events using Temporally Noisy Labels,” in IEEE Transactions on Multimedia. 2018. http://ieeexplore.ieee.org/document/8279544/

[2] M. Butler, Unlocking the Groove: Rhythm, Meter, and Musical Design in Electronic Dance Music, ser. Profiles in Popular Music. Indiana University Press, 2006 

[3] http://osf.io/eydxk

[4] http://www.music-ir.org/mirex/wiki/2017:Audio_Onset_Detection

[5] https://developers.soundcloud.com/docs/api/guide

[6] https://developers.soundcloud.com/docs/api/guide

[7] https://research.google.com/audioset/

[8] H. Y. Lo, J. C. Wang, H. M. Wang and S. D. Lin, “Cost-Sensitive Multi-Label Learning for Audio Tag Annotation and Retrieval,” in IEEE Transactions on Multimedia, vol. 13, no. 3, pp. 518-529, June 2011. http://ieeexplore.ieee.org/document/5733421/

[9] http://majorminer.org/info/intro

[10] http://www.music-ir.org/mirex/wiki/2016:Audio_Tag_Classification

[11] https://soundcloud.com/spinninrecords/ummet-ozcan-lose-control-original-mix

JPEG Column: 78th JPEG Meeting in Rio de Janeiro, Brazil

The JPEG Committee had its 78th meeting in Rio de Janeiro, Brazil. Relevant to its ongoing standardization efforts in JPEG Privacy and Security, JPEG organized a special session to explore how blockchain and distributed ledger technologies could be supported in past, ongoing and future standards of the JPEG family. The motivation is that, considering the potential impact of such technologies on the future of multimedia, standardization will be required to enable interoperability between different imaging systems and services relying on blockchain and distributed ledger technologies.

Blockchain and distributed ledger technologies are behind the well-known crypto-currencies. These technologies can provide means for content authorship, or intellectual property and rights management control of the multimedia information. New possibilities can be made available, namely support for tracking online use of copyrighted images and ownership of the digital content.


JPEG meeting session.

The main highlights of the Rio de Janeiro JPEG meeting are:

  • JPEG explores blockchain and distributed ledger technologies
  • JPEG 360 Metadata
  • JPEG XL
  • JPEG XS
  • JPEG Pleno
  • JPEG Reference Software
  • 25th anniversary celebration of the first JPEG standard

The following summarizes various activities during JPEG’s Rio de Janeiro meeting.

JPEG explores blockchain and distributed ledger technologies

During the 78th JPEG meeting in Rio de Janeiro, the JPEG Committee organized a special session on blockchain and distributed ledger technologies and their impact on JPEG standards. As a result, the committee decided to explore use cases and standardization needs related to blockchain technology in a multimedia context. Use cases will be explored in relation to the recently launched JPEG Privacy and Security activity, as well as in the broader landscape of imaging and multimedia applications. To that end, the committee created an ad hoc group with the aim of gathering input from experts to define these use cases and to explore the potential needs and advantages of a standardization effort focused on imaging and multimedia applications. To get involved in the discussion, interested parties can register for the ad hoc group’s mailing list; instructions to join are available at http://jpeg-blockchain-list.jpeg.org.

JPEG 360 Metadata

The JPEG Committee notes the increasing use of images from multi-sensor devices, such as 360-degree capturing cameras or dual-camera smartphones available to consumers. Images from these cameras are shown on computers, smartphones, and head-mounted displays (HMDs). JPEG standards are commonly used for image compression and file format. However, because existing JPEG standards do not fully cover these new uses, incompatibilities have reduced the interoperability of their images, thus reducing the ubiquity that consumers have come to expect when using JPEG files. Additionally, new modalities for interacting with images, such as computer-based augmentation, face-tagging, and object classification, require support for metadata that was not part of the original scope of JPEG. A set of such JPEG 360 use cases is described in the JPEG 360 Metadata Use Cases document.

To avoid fragmentation in the market and to ensure wide interoperability, a standard way of interacting with multi-sensor images with richer metadata is desired in JPEG standards. JPEG invites all interested parties, including manufacturers, vendors and users of such devices to submit technology proposals for enabling interactions with multi-sensor images and metadata that fulfill the scope, objectives and requirements that are outlined in the final Call for Proposals, available on the JPEG website.

To stay posted on JPEG 360, please regularly consult our website at jpeg.org and/or subscribe to the JPEG 360 e-mail reflector.

JPEG XL

The Next-Generation Image Compression activity (JPEG XL) has produced a revised draft Call for Proposals and intends to publish a final Call for Proposals (CfP) following its 79th meeting (April 2018), with the objective of seeking technologies that fulfill the objectives and scope of Next-Generation Image Compression. During the 78th meeting, objective and subjective quality assessment methodologies for anchor and proposal evaluations were discussed and analyzed. As an outcome of the meeting, source code for objective quality assessment has been made available.

The draft Call for Proposals, with all related information, can be found on the JPEG website. Comments are welcome and should be submitted as specified in the document. To stay posted on the action plan for JPEG XL, please regularly consult our website at jpeg.org and/or subscribe to our e-mail reflector.

 

JPEG XS

Since the previous 77th meeting, subjective quality evaluations have shown that the initial quality requirement of the JPEG XS Core Coding System has been met, i.e., visually lossless quality at a compression ratio of 6:1 for the large majority of images under test. Several profiles are now under development in JPEG XS, as well as transport and container formats. The JPEG Committee therefore invites interested parties – in particular coding experts, codec providers, system integrators and potential users of the foreseen solutions – to contribute to furthering the specifications in these directions. Publication of the International Standard is expected for Q3 2018.

JPEG Pleno

JPEG Pleno activity is currently divided into Pleno Light Field, Pleno Point Cloud and Pleno Holography. JPEG Pleno Light Field has been preparing a third round of core experiments for assessing the impact of individual coding modules on the overall rate-distortion performance. Moreover, it was decided to continue collecting additional test data and to progress with the preparation of working documents for JPEG Pleno specifications Part 1 and Part 2.

Furthermore, quality modelling studies are under consideration for both JPEG Pleno Point Cloud and JPEG Pleno Holography. In particular, JPEG Pleno Point Cloud is considering a set of new quality metrics provided as contributions to this work item. The new metrics are expected to replace the current state of the art, as they have shown superior correlation with subjective quality as perceived by humans. Moreover, new subjective assessment models have been tested and analysed to better understand the perception of quality for such new types of visual information.

JPEG Reference Software

The JPEG Committee is pleased to announce that its first image coding specification is now augmented by a new part, ISO/IEC 10918-7, which contains reference software. The proposed candidate software implementations have been checked for compliance with ISO/IEC 10918-2. Considering the positive results, this new part of the JPEG standard will continue to evolve quickly.


JPEG meeting room window view during a break.

JPEG 25th anniversary of the first JPEG standard

The third and final celebration of the 25th anniversary of JPEG’s first standard is planned for the next, 79th JPEG meeting, taking place in La Jolla, CA, USA. The anniversary will be marked by a two-hour workshop on Friday, 13 April on current and emerging JPEG technologies, followed by a social event where past JPEG Committee members with relevant contributions will be honored.

Final Quote

“Blockchain and distributed ledger technologies promise a significant impact on the future of many fields. JPEG is committed to providing standard mechanisms to apply blockchain in multimedia applications in general and in imaging in particular,” said Prof. Touradj Ebrahimi, the Convenor of the JPEG Committee.

 

About JPEG

The Joint Photographic Experts Group (JPEG) is a Working Group of ISO/IEC, the International Organisation for Standardization / International Electrotechnical Commission, (ISO/IEC JTC 1/SC 29/WG 1) and of the International Telecommunication Union (ITU-T SG16), responsible for the popular JBIG, JPEG, JPEG 2000, JPEG XR, JPSearch and more recently, the JPEG XT, JPEG XS, JPEG Systems and JPEG Pleno families of imaging standards.

The JPEG Committee meets nominally four times a year, in different world locations. The latest 77th meeting was held from 21st to 27th of October 2017, in Macau, China. The next 79th JPEG Meeting will be held on 9-15 April 2018, in La Jolla, California, USA.

More information about JPEG and its work is available at www.jpeg.org or by contacting Antonio Pinheiro or Frederik Temmermans (pr@jpeg.org) of the JPEG Communication Subgroup.

If you would like to stay posted on JPEG activities, please subscribe to the jpeg-news mailing list on http://jpeg-news-list.jpeg.org.  

Future JPEG meetings are planned as follows:

  • No 79, La Jolla (San Diego), CA, USA, April 9 to 15, 2018
  • No 80, Berlin, Germany, July 7 to13, 2018
  • No 81, Vancouver, Canada, October 13 to 19, 2018

 

 

MPEG Column: 121st MPEG Meeting in Gwangju, Korea

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects.

The MPEG press release comprises the following topics:

  • Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level
  • MPEG-G standards reach Committee Draft for metadata and APIs
  • MPEG issues Calls for Visual Test Material for Immersive Applications
  • Internet of Media Things (IoMT) reaches Committee Draft level
  • MPEG finalizes its Media Orchestration (MORE) standard

At the end I will also briefly summarize what else happened with respect to DASH, CMAF, OMAF as well as discuss future aspects of MPEG.

Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level

The Committee Draft (CD) for CDVA has been approved at the 121st MPEG meeting, which is the first formal step of the ISO/IEC approval process for a new standard. This will become a new part of MPEG-7 to support video search and retrieval applications (ISO/IEC 15938-15).

Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors which can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers, so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. The CDVA standard specifies descriptors that fulfil these needs and includes (i) the components of the CDVA descriptor, (ii) its bitstream representation and (iii) the extraction process. The final standard is expected to be finished in early 2019.

CDVA introduces a new descriptor based on features output by a Deep Neural Network (DNN). CDVA is robust against viewpoint changes and moderate transformations of the video (e.g., re-encoding, overlays), and it supports partial matching and temporal localization of the matching content. The CDVA descriptor has a typical size of 2-4 KB per second of video. For typical test cases, it has been demonstrated to reach a correct matching rate of 88% (at a 1% false matching rate).
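For illustration only, the sketch below shows the general flavor of descriptor-based matching (cosine similarity between L2-normalized per-segment feature vectors); it is not the CDVA pipeline itself, whose components and bitstream are defined by the standard.

    # Flavor-of-the-approach sketch (not the actual CDVA pipeline): match a
    # query video against a database entry using cosine similarity of
    # per-segment DNN feature vectors, assumed extracted beforehand.
    import numpy as np

    def match_score(query_feats: np.ndarray, db_feats: np.ndarray) -> float:
        """query_feats: (m, d), db_feats: (n, d); rows L2-normalized."""
        sims = query_feats @ db_feats.T        # (m, n) pairwise cosine similarities
        return float(sims.max(axis=1).mean())  # average best match per query segment

    rng = np.random.default_rng(0)
    q = rng.normal(size=(10, 512)); q /= np.linalg.norm(q, axis=1, keepdims=True)
    d = rng.normal(size=(50, 512)); d /= np.linalg.norm(d, axis=1, keepdims=True)
    # Random features yield a low score; genuinely matching content
    # would push the score towards 1.0.
    print(match_score(q, d))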

Research aspects: There are probably endless research aspects in the visual descriptor space, ranging from validating the results achieved so far to further improving the descriptors’ informativeness, with the goal of increasing the correct matching rate (and consequently decreasing the false matching rate). In general, however, the question is whether there is still a need for descriptors in an era of over-provisioned bandwidth, storage, and computing, and of the rising use of artificial intelligence techniques such as machine learning and deep learning.

MPEG-G standards reach Committee Draft for metadata and APIs

In my previous report I introduced the MPEG-G standard for compression and transport technologies of genomic data. At the 121st MPEG meeting, metadata and APIs reached CD level. The former provides relevant information associated with genomic data, and the latter allows building interoperable applications capable of manipulating MPEG-G files. Additional standardization plans for MPEG-G include the CDs for reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5), which are planned to be issued at the next, 122nd MPEG meeting with the objective of producing Draft International Standards (DIS) at the end of 2018.

Research aspects: Metadata typically enables certain functionality, which can be tested and evaluated against requirements. APIs allow building applications and services on top of the underlying functions, which could be a driver for research projects that make use of such APIs.

MPEG issues Calls for Visual Test Material for Immersive Applications

I have reported about the Omnidirectional Media Format (OMAF) in my previous report. At the 121st MPEG meeting, MPEG was working on extending OMAF functionalities to allow the modification of viewing positions, e.g., in case of head movements when using a head-mounted display, or for use with other forms of interactive navigation. Unlike OMAF which only provides 3 degrees of freedom (3DoF) for the user to view the content from a perspective looking outwards from the original camera position, the anticipated extension will also support motion parallax within some limited range which is referred to as 3DoF+. In the future with further enhanced technologies, a full 6 degrees of freedom (6DoF) will be achieved with changes of viewing position over a much larger range. To develop technology in these domains, MPEG has issued two Calls for Test Material in the areas of 3DoF+ and 6DoF, asking owners of image and video material to provide such content for use in developing and testing candidate technologies for standardization. Details about these calls can be found at https://mpeg.chiariglione.org/.

Research aspects: The good thing about test material is that it allows for reproducibility, which is an important aspect within the research community. Thus, it is more than appreciated that MPEG issues such calls, and let’s hope that this material will become publicly available. Typically this kind of visual test material targets coding, but it would also be interesting to have such test content for storage and delivery.

Internet of Media Things (IoMT) reaches Committee Draft level

The goal of IoMT is to facilitate the large-scale deployment of distributed media systems with interoperable audio/visual data and metadata exchange. This standard specifies APIs providing media things (i.e., cameras/displays and microphones/loudspeakers, possibly capable of significant processing power) with the capability of being discovered, setting up ad hoc communication protocols, exposing usage conditions, and providing media and metadata as well as services processing them. IoMT APIs encompass a large variety of devices, not just connected cameras and displays but also sophisticated devices such as smart glasses, image/speech analyzers and gesture recognizers. IoMT enables the expression of the economic value of resources (media and metadata) and of associated processing in terms of digital tokens leveraged by the use of blockchain technologies.

Research aspects: The main focus of IoMT is APIs, which provide easy and flexible access to the underlying devices’ functionality and are thus an important factor in enabling research within this interesting domain. For example, using these APIs to enable communication among various media things could bring up new forms of interaction with these technologies.
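To convey the flavor of such APIs, here is a purely hypothetical discovery-and-setup flow; every endpoint and field name below is invented, as the standard itself defines the actual interfaces.

    # Purely illustrative pseudo-flow of what the IoMT APIs standardize
    # (discovery, setup, media/metadata exchange). All names are invented.
    import requests

    REGISTRY = "https://iomt-registry.example.net"  # hypothetical discovery service

    # 1. Discover media things exposing a camera capability.
    cameras = requests.get(f"{REGISTRY}/things",
                           params={"capability": "camera"}).json()

    # 2. Set up a session with the first camera under its stated usage conditions.
    cam = cameras[0]
    session = requests.post(f"{cam['api']}/sessions",
                            json={"purpose": "gesture-analysis",
                                  "conditions": cam["usage_conditions"]}).json()

    # 3. Pull analyzed metadata (e.g., detected gestures) instead of raw video.
    events = requests.get(f"{cam['api']}/sessions/{session['id']}/metadata").json()
    print(events)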

MPEG finalizes its Media Orchestration (MORE) standard

The MPEG “Media Orchestration” (MORE) standard reached Final Draft International Standard (FDIS), the final stage of development before publication by ISO/IEC. The scope of the Media Orchestration standard is as follows:

  • It supports the automated combination of multiple media sources (i.e., cameras, microphones) into a coherent multimedia experience.
  • It supports rendering multimedia experiences on multiple devices simultaneously, again giving a consistent and coherent experience.
  • It contains tools for orchestration in time (synchronization) and space.

MPEG expects the Media Orchestration standard to be especially useful in immersive media settings. This applies notably to social virtual reality (VR) applications, where people share a VR experience and are able to communicate about it. Media Orchestration is expected to allow synchronizing the media experience for all users and giving them a spatially consistent experience, as it is important for a social VR user to be able to understand when other users are looking at them.

Research aspects: This standard enables the social multimedia experience proposed in the literature. Interestingly, the W3C is working on something similar, referred to as the timing object, and it would be interesting to see whether these approaches share commonalities.


What else happened at the MPEG meeting?

DASH is fully in maintenance mode, and we are still waiting for the 3rd edition, which is supposed to be a consolidation of existing corrigenda and amendments. Currently, only minor extensions are proposed, and conformance/reference software is being updated. Similar things can be said about CMAF, where one amendment and one corrigendum are under development. Additionally, MPEG is working on CMAF conformance. OMAF reached FDIS at the last meeting, and MPEG is now working on reference software and conformance as well. It is expected that in the future we will see additional standards and/or technical reports defining/describing how to use CMAF and OMAF in DASH.

Regarding the future video codec, the call for proposals has been out since the last meeting, as announced in my previous report, and responses are due for the next meeting. Thus, the 122nd MPEG meeting is expected to be the place to be in terms of MPEG’s future video codec. Speaking about the future, shortly after the 121st MPEG meeting, Leonardo Chiariglione published a blog post entitled “a crisis, the causes and a solution”, which is related to HEVC licensing, the Alliance for Open Media (AOM), and possible future options. The blog post certainly caused some reactions within the video community at large, and I think this was intended. Let’s hope it will galvanize the video industry — not to push the button — but to start addressing and resolving the issues. As pointed out in another of my blog posts about what to care about in 2018, the upcoming MPEG meeting in April 2018 is certainly a place to be. Additionally, it highlights some conferences related to various aspects also discussed in MPEG, which I’d like to republish here:

  • QoMEX — Int’l Conf. on Quality of Multimedia Experience — will be hosted in Sardinia, Italy, from May 29-31; it is THE conference to attend for QoE of multimedia applications and services. Submission deadline is January 15/22, 2018.
  • MMSys — Multimedia Systems Conf. — and specifically Packet Video, which will be held on June 12 in Amsterdam, The Netherlands. Packet Video is THE adaptive streaming scientific event of 2018. Submission deadline is March 1, 2018.
  • Additionally, you might be interested in ICME (July 23-27, 2018, San Diego, USA), ICIP (October 7-10, 2018, Athens, Greece; specifically in the context of video coding), and PCS (June 24-27, 2018, San Francisco, CA, USA; also in the context of video coding).
  • The DASH-IF academic track hosts special events at MMSys (Excellence in DASH Award) and ICME (DASH Grand Challenge).
  • MIPR — 1st Int’l Conf. on Multimedia Information Processing and Retrieval — will be in Miami, Florida, USA from April 10-12, 2018. It has a broad range of topics including networking for multimedia systems as well as systems and infrastructures.
 

Report from ACM Multimedia 2017 – by Benoit Huet

 

Best #SIGMM Social Media Reporter Award! Me? Really??

This was my reaction after being informed by the SIGMM Social Media Editors that I was one of the two recipients following ACM Multimedia 2017! #ACMMM What a wonderful idea this is to encourage our community to communicate, both internally and to other related communities, about our events, our key research results and all the wonderful things the multimedia community stands for!  I have always been surprised by how limited social media engagement is within the multimedia community. Your initiative has all my support! Let’s disseminate our research interest and activities on social media! @SIGMM #Motivated


The SIGMM flagship conference took place on October 23-27 at the Computer History Museum in Mountain View, California, USA. For its 25th edition, the organizing committee had prepared an attractive program, cleverly mixing expected classics (e.g. the Best Paper session, Grand Challenges, the Open Source Software Competition, etc.) and brand new sessions (such as Fast Forward and Thematic Workshops, Business Idea Venture, and the Novel Topics Track). For this edition, the conference adopted a single paper length, removing the boundary between long and short papers. The TPC Co-Chairs and Area Chairs had the responsibility of directing accepted papers to either an oral session or a thematic workshop.

Thematic workshops took the form of poster presentations. Presenters were asked to provide a short video briefly motivating their work, with the intention of making the videos available online for reference after the conference (possibly with a link to the full paper and the poster!). However, this did not come to pass, as publication permissions were not cleared in time; still, the idea is interesting and should be taken into account for future editions. Fast Forward (or Thematic Workshop pitch) sessions are short, targeted presentations aimed at attracting the audience to the thematic workshop where the papers are presented (in the form of posters in this case). While such short presentations allow conference attendees to efficiently identify which posters are relevant to them, it is crucial for presenters to be well prepared and to concentrate on highlighting one key research idea, as time is very limited. It also gives posters more exposure. I would be in favor of keeping such sessions for future ACM Multimedia editions.

The 25th edition of ACM MM wasn’t short of keynotes: no fewer than six industry keynotes punctuated the conference, one for each half day. The first keynote, by Achin Bhowmik from Starkey, focused on audio as a means of “Enhancing and Augmenting Human Perception with Artificial Intelligence”. Bill Dally from NVidia presented “Efficient Methods and Hardware for Deep Learning”, in short: why we all need GPUs! “Building Multi-Modal Interfaces for Smartphones” was the topic presented by Injong Rhee (Samsung Electronics), while Scott Silver (YouTube) discussed the difficulties in “Bringing a Billion Hours to Life” (referring to the vast quantities of videos uploaded and viewed on the sharing platform, and the long tail). Ed Chang from HTC presented “DeepQ: Advancing Healthcare Through AI and VR” and demonstrated how healthcare is and will be benefiting from AR, VR, and AI. Danny Lange from Unity Technologies highlighted how important machine learning and deep learning are in the game industry in “Bringing Gaming, VR, and AR to Life with Deep Learning”. Personally, I would have preferred a mix of industry/academic keynotes, as I found some of the keynotes not targeting an audience of computer scientists.

Arnold W. M. Smeulders received the SIGMM Technical Achievement Award for his outstanding and pioneering contribution to defining and bridging the semantic gap in content-based image retrieval (his lecture is here: https://youtu.be/n8kLxKNjQ0A). His talk was sharp, enlightening and very well received by the audience.

The @sigmm Rising Star Award went to Dr. Liangliang Cao for his contribution to large-scale multimedia recognition and social media mining.

The conference was noticeably flavored with trendy topics such as AI, human-augmenting technologies, virtual and augmented reality, and machine (deep) learning, as can be seen from the various works rewarded.

The Best Paper award was given to Bokun Wang, Yang Yang, Xing Xu, Alan Hanjalic, Heng Tao Shen for their work on “Adversarial Cross-Modal Retrieval“.

Yuan Tian, Suraj Raghuraman, Thiru Annaswamy, Aleksander Borresen, Klara Nahrstedt, Balakrishnan Prabhakaran received the Best Student Paper award for the paper “H-TIME: Haptic-enabled Tele-Immersive Musculoskeletal Examination“.

The Best demo award went to “NexGenTV: Providing Real-Time Insight during Political Debates in a Second Screen Application” by Olfa Ben Ahmed, Gabriel Sargent, Florian Garnier, Benoit Huet, Vincent Claveau, Laurence Couturier, Raphaël Troncy, Guillaume Gravier, Philémon Bouzy and Fabrice Leménorel.

The Best Open source software award was received by Hao Dong, Akara Supratak, Luo Mai, Fangde Liu, Axel Oehmichen, Simiao Yu, Yike Guo for “TensorLayer: A Versatile Library for Efficient Deep Learning Development“.

The Best Grand Challenge Video Captioning Paper award went to “Knowing Yourself: Improving Video Caption via In-depth Recap“, by Qin Jin, Shizhe Chen, Jia Chen, Alexander Hauptmann.

The Best Grand Challenge Social Media Prediction Paper award went to Chih-Chung Hsu, Ying-Chin Lee, Ping-En Lu, Shian-Shin Lu, Hsiao-Ting Lai, Chihg-Chu Huang, Chun Wang, Yang-Jiun Lin, Weng-Tai Su for “Social Media Prediction Based on Residual Learning and Random Forest“.

Finally, the Best Brave New Idea Paper award was conferred to John R Smith, Dhiraj Joshi, Benoit Huet, Winston Hsu and Zef Cota for the paper “Harnessing A.I. for Augmenting Creativity: Application to Movie Trailer Creation“.

A few years back, the multimedia community was concerned with the lack of truly multimedia publications. In my opinion, those days are behind us. The technical program has evolved into a richer and broader one, let’s keep the momentum!

The location was a wonderful opportunity for many of the attendees to take a stroll down memory lane and see computers and devices (VT100, PC, etc.) from the past, thanks to the complimentary entrance to the museum exhibitions. The “isolated” location of the conference venue meant going out for lunch was out of the question given the duration of the break. As a solution, the organizers catered buffet lunches. This resulted in the majority of the attendees interacting and mixing over the lunch break while eating, which could be an effective way to better integrate new participants and strengthen the community. Both the welcome reception and the banquet were held successfully within the Computer History Museum; both events offered yet another opportunity for new connections to be made and for further interaction between attendees. Indeed, the atmosphere on both occasions was relaxed, lively and joyful.

All in all, ACM MM 2017 was another successful edition of our flagship conference, many thanks to the entire organizing team and see you all in Seoul for ACM MM 2018 http://www.acmmm.org/2018/ and follow @sigmm on Twitter!

Report from ACM Multimedia 2017 – by Conor Keighrey


My name is Conor Keighrey; I’m a PhD candidate at the Athlone Institute of Technology in Athlone, Co. Westmeath, Ireland. The focus of my research is to understand the key influencing factors that affect Quality of Experience (QoE) in emerging immersive multimedia experiences, with a specific focus on applications in the speech and language therapy domain. This research is funded by the Irish Research Council’s Government of Ireland Postgraduate Scholarship Programme. I’m delighted to have been asked to present this report to the SIGMM community as a result of my social media activity at the ACM Multimedia Conference.

Launched in 1993, the ACM Multimedia (ACMMM) Conference held its 25th anniversary event in Mountain View, California. The conference was located in the heart of Silicon Valley, at the inspirational Computer History Museum.

Under five focal themes, the conference called for papers on topics relating to multimedia: Experience, Systems and Applications, Understanding, Novel Topics, and Engagement.

Keynote addresses were delivered by high-profile, industry-leading experts from the field of multimedia. These talks provided insight into active industry developments, delivered by the following experts:

  • Achin Bhowmik (CTO & EVP, Starkey, USA)
  • Bill Dally (Senior Vice President and Chief Scientist, NVidia, USA)
  • Injong Rhee (CTO & EVP, Samsung Electronics, Korea)
  • Edward Y. Chang (President, HTC, Taiwan)
  • Scott Silver (Vice President, Google, USA)
  • Danny Lange (Vice President, Unity Technologies, USA)

Some keynote highlights include Bill Dally’s talk on “Efficient Methods and Hardware for Deep Learning”. Bill provided insight into the work NVidia is doing with neural networks, the hardware which drives them, and the techniques the company is using to make them more efficient. He also highlighted how AI should not be thought of as a mechanism which replaces humans, but one which empowers them, thus allowing us to explore more intellectual activities.

Danny Lange of Unity Technologies discussed the application of the Unity game engine to create scenarios in which machine learning models can be trained. His presentation, entitled “Bringing Gaming, VR, and AR to Life with Deep Learning”, described the capture of data for self-driving cars to prepare them for unexpected occurrences in the real world (e.g. pedestrian activity or other cars behaving in unpredictable ways).

A number of the Keynotes were captured by FXPAL (an ACMMM Platinum Sponsor) and are available here.

With an acceptance rate of 27.63% (684 reviewed, 189 accepted), the main track at ACMMM showcased a diverse collection of research from academic institutes around the globe. An abundance of work was presented in the ever-expanding area of deep/machine learning, virtual/augmented/mixed realities, and the traditional multimedia field.


The importance of gender equality and diversity with respect to advancing the careers of women in STEM has never been greater. Sponsored by SIGMM, the Women/Diversity in MM lunch took place on the first day of ACMMM. Speakers such as Prof. Noel O’Connor discussed the significance of initiatives such as Athena SWAN (Scientific Women’s Academic Network) within Dublin City University (DCU). Katherine Breeden (pictured left), an Assistant Professor in the Department of Computer Science at Harvey Mudd College (HMC), presented a fantastic talk on gender balance at HMC. Katherine’s discussion highlighted the key changes which have occurred, resulting in more women than men graduating with a degree in computer science at the college.

Other highlights from day 1 include a paper presented in the Experience 2 (Perceptual, Affect, and Interaction) session, chaired by Susanne Boll (University of Oldenburg). Researchers from the National University of Singapore presented the results of a multisensory virtual cocktail (Vocktail) experience, which was very well received.

 

Through the stimulation of three sensory modalities, Vocktails aim to create virtual flavor and augment taste experiences through a customizable, interactive drinking utensil. Controlled by a mobile device, participants of the study experienced augmented taste (electrical stimulation of the tongue), smell (micro air pumps), and visual (RGB light projected onto the liquid) stimuli as they used the system. For more information, check out their paper entitled “Vocktail: A Virtual Cocktail for Pairing Digital Taste, Smell, and Color Sensations” on the ACM Digital Library.

Day 3 of the conference included a session entitled Brave New Ideas. The session presented a fantastic variety of work focused on the use of multimedia technologies to enhance or create intelligent systems. Demonstrating AI as an assistive tool and winning the Best Brave New Idea Paper award, the paper entitled “Harnessing A.I. for Augmenting Creativity: Application to Movie Trailer Creation” (ACM Digital Library) describes the first-ever human-machine collaboration for creating a real movie trailer. Through multimodal semantic extraction, including audio-visual and scene analysis and a statistical approach, key moments that characterize horror films were identified. As a result, the AI selected 10 scenes from a feature-length film, which were further developed alongside a professional filmmaker to finalize an exciting movie trailer. Officially released by 20th Century Fox, the complete AI trailer for the horror movie “Morgan” can be viewed here.

A new addition to this edition of ACMMM was the inclusion of thematic workshops. Four individual workshops (as outlined below) provided an opportunity for papers which could not be accommodated within the main track to be presented to the multimedia research community. A total of 495 papers were reviewed, of which 64 were accepted (12.93%). Authors of accepted papers presented their work via on-stage thematic workshop pitches, which were followed by poster presentations on Monday the 23rd and Friday the 27th. The workshop themes were as follows:

  • Experience (Organised by Wanmin Wu)
  • Systems and Applications (Organised by Roger Zimmermann & He Ma)
  • Engagement (Organised by Jianchao Yang)
  • Understanding (Organised by Qi Tian)

Presented as part of the thematic workshop pitches, one of the most fascinating demos at the conference was a body of work carried out by Audrey Ziwei Hu (University of Toronto). Her paper, entitled “Liquid Jets as Logic-Computing Fluid-User-Interfaces”, describes a fluid (water) user interface that is presented as a logic-computing device. Water jets form a medium for tactile interaction and control to create a musical instrument known as a hydraulophone.

Steve Mann (pictured left) from Stanford University, who is regarded as “The Father of Wearable Computing”, provided a fantastic live demonstration of the device. The full paper can be found on the ACM Digital Library, and a live demo can be seen here.

At large-scale events such as ACMMM, the importance of social media reporting/interaction has never been greater. More than 250 social media interactions (tweets, retweets, and likes) were monitored using the #SIGMM and #ACMMM hashtags, as outlined by the SIGMM Records prior to the event. Descriptive (and multimedia-enhanced) social media reports provide those who encounter an unavoidable schedule overlap with an opportunity to gather some insight into the other works presented at the conference.

From my own perspective (as a PhD student), the most important aspect of social media interaction is that reports often serve as a conversation piece. Developing a social presence throughout the many coffee breaks and social events during the conference is key to the success of building a network of contacts within any community. As a newcomer this can often be a daunting task; recognizing other social media reporters offers the perfect ice-breaker, providing an opportunity to discuss and inform each other of the ongoing work within the multimedia community. As a result of my own online reporting, I was recognized numerous times throughout the conference. Staying active on social media often leads to the development of a research audience and a social media presence among peers. Engaging such an audience is key to the success of those who wish to follow a path in academia/research.

Building on my own personal experience, continued attendance at SIGMM conferences (irrespective of paper submission) has many advantages. While the predominant role of a conference is to disseminate work, the informative aspect of attending such events is often overlooked. The area of multimedia research is moving at a fast pace, and thus having the opportunity to engage directly with researchers in your field of expertise is of the utmost importance. Attendance at ACMMM and other SIGMM conferences, such as ACM Multimedia Systems, has inspired me to explore alternative methodologies within my own research. Without a doubt, continued attendance will inspire my research as I move forward.

ACM Multimedia ‘18 (October 22nd-26th) – Seoul, South Korea, with its diverse landscape of modern skyscrapers mixed with traditional Buddhist temples and palaces, will host the 26th annual ACMMM. The 2018 event will without a doubt present a variety of work from the multimedia research community. Regular paper abstracts are due on the 30th of March (full manuscripts are due on the 8th of April). For more information on next year’s ACM Multimedia conference, check out the following link: http://www.acmmm.org/2018

Multidisciplinary Column: An Interview with Emilia Gómez

Could you tell us a bit about your background, and what the road to your current position was?

I have a technical background in engineering (telecommunication engineer specialized in signal processing, PhD in Computer Science), but I also followed formal musical studies at the conservatory since I was a child. So I think I have an interdisciplinary background.

Could you tell us a bit more about how you have encountered multidisciplinarity and interdisciplinarity both in your work on music information retrieval and your current project on human behavior and machine intelligence?

Music Information Retrieval (MIR) is itself a multidisciplinary research area, intended to help humans better make sense of music data. MIR draws from a diverse set of disciplines, including, but by no means limited to, music theory, computer science, psychology, neuroscience, library science, electrical engineering, and machine learning.

In my current project, HUMAINT, at the Joint Research Centre of the European Commission, we try to understand the impact that algorithms will have on humans, including our decision-making and cognitive capabilities. This challenging topic can only be addressed in a holistic way, by incorporating insights from different disciplines. At our kick-off workshop we gathered researchers working in distant fields, from computer science to philosophy, including law, neuroscience and psychology, and we realised the need to engage in scientific discussions from different views and perspectives to address human challenges in a holistic way.

What have, in your personal experience, been the main advantages of multidisciplinarity and interdisciplinarity? Have you also encountered any disadvantages or obstacles?

The main advantage I see is the fact that we can combine distinct methodologies to generate new insights. For researchers, stepping out of a discipline’s comfort zone makes us more creative and innovative.

One disadvantage is that when you work in a multidisciplinary field you seem not to fit into traditional academic standards. In my case, I am perceived as a musician by engineers and as an engineer by musicians.

Beyond the academic community, your work also closely connects to interests by diverse types of stakeholders (e.g. industry, policy-makers). In your opinion, what are the most challenging aspects for an academic to operate in such a diverse stakeholder environment?

The most challenging part of diverse teams is communication, e.g. being able to speak the same language (we might need to create interdisciplinary glossaries!) and to explain our research in an accessible way, so that it is understood by people with diverse backgrounds and areas of expertise.

Regarding your work on music, you often have been speaking about making all music accessible to everyone. What do you consider the grand research challenges regarding this mission?

Many MIR researchers desire that technology can be used to make all music accessible to everyone, i.e. that our algorithms can help people discover new music, develop a varied musical taste, and be open to new music and, at the same time, to new ideas and cultures. We often talk of our desire that MIR algorithms help people discover music in the so-called ‘long tail’, i.e. music that is not so popular or present in the mainstream. I believe the variety of music styles reflects the variety of human beings, e.g. in terms of cultures, personalities and ideas. Through music we can then enrich our culture and understanding.

As the newly elected president of the ISMIR society, are there any specific missions regarding the community you would like to emphasize?

I have had the chance to work with an amazing ISMIR board over the last few years, an incredible group of people willing to contribute to our community with their talent and time. With this team it is very easy to work!

This year, ISMIR is organizing its 19th edition (yes, we are getting old)! There are many challenges at ISMIR that we as a community should address, but at the moment I would like to emphasize some relevant aspects that are now a priority for the board.

The first one is to maintain and expand its scientific excellence, as ISMIR should continue to provide key scientific advancements in our field. In this respect, we have recently launched our open access journal, Transactions of ISMIR, to foster the publication of deeper and more mature research works in our area.

The second one is to promote variety in our community, e.g. in terms of discipline, gender or geographical location, also related to music culture and repertoire. In this respect, and thanks to our members, we have promoted ISMIR taking place at different locations, including editions in Asia (e.g. 2014 in Taipei, Taiwan, and 2017 in Suzhou, China).

Other aspects we value are reproducibility, openness and accessibility. In this sense, our priority is to maintain affordable registration rates, taking advantage of sponsorships from our industrial members, and to devote our membership fees to providing travel funds for students or other members in need to attend ISMIR.

How and in what form do you feel we as academics can be most impactful?

The academic environment gives you a lot of flexibility and freedom to define research roadmaps, although there are always some dependencies on funding. In addition, academia provides time to reflect and go deep into problems that are not directly related to a product in the short term. In the technological field, academia has the potential to advance technologies by focusing on a deeper understanding of why these technologies work well or not, e.g. through theoretical analysis or comprehensive evaluation.

You also have been very engaged in missions surrounding Women in STEM, for example through the Women in MIR initiatives. In discussions on fostering diversity, the importance of role models is frequently mentioned. How can we be good role models?

Yes, I have become more and more concerned about the lack of opportunities that women have in our field with respect to their male colleagues. In this sense, Women in MIR is playing a major role in promoting the role and opportunities of women in our field, including a mentoring program, funding for women to attend ISMIR, and the creation of a public repository of female researchers to make them more visible and present.

I think women are already great role models in their different profiles, but they lack visibility with respect to their male colleagues.


Bios

Dr. Emilia Gómez graduated as a Telecommunication Engineer at Universidad de Sevilla and studied piano performance at the Seville Conservatoire of Music, Spain. She then received a DEA in Acoustics, Signal Processing and Computer Science applied to Music at IRCAM, Paris, and a PhD in Computer Science at Universitat Pompeu Fabra in Barcelona (2006). She has been a visiting researcher at the Royal Institute of Technology, Stockholm (Marie Curie Fellow, 2003), McGill University, Montreal (AGAUR competitive fellowship, 2010), and Queen Mary University of London (José Castillejo competitive fellowship, 2015). After her PhD, she was first a lecturer in Sonology at the Higher School of Music of Catalonia and then joined the Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra in Barcelona, Spain, first as an assistant professor and then as an associate professor (2011) and ICREA Academia fellow (2015). In 2017, she became the first female president of the International Society for Music Information Retrieval, and in January 2018, she joined the Joint Research Centre of the European Commission as Lead Scientist of the HUMAINT project, studying the impact of machine intelligence on human behavior.

Editor Biographies

Cynthia_Liem_2017Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013-2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests consider music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content which would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.

 

 

jochen_huberDr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and ACM International Conference on Interface Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com

An interview with Miriam Redi

Miriam nowadays.

Miriam at the beginning of her research career.

Describe your journey into computing from your youth up to the present. What foundational lessons did you learn from this journey? Why were you initially attracted to multimedia?

I literally grew up with computers all around me. I was born in a little town that grew up around the headquarters of Olivetti, one of the biggest tech companies of the last century: becoming a computer geek, in that place, at that time, was easier than usual! I have always been fascinated by the power of visuals and music to convey ideas, and I loved to learn about history and the world through songs and movies. How to merge my love for computers with my passion for the audiovisual arts? I enrolled in Media Engineering studies where, aside from traditional Computer Engineering knowledge, I had the chance to learn more about media history and design. The main message? Multidisciplinarity is key. We cannot design intelligent multimedia technologies without deeply understanding how a medium is created, perceived and distributed.

Talking about multidisciplinarity, what do you think is the current state of multidisciplinarity in the multimedia community?

My impression is that, due to the inherent multimodality of our research, our community has developed a natural ability to blend techniques and theories from various domains. I believe we can push the boundaries of this multidisciplinarity even further. I am thinking, for example, of the MM subcommunity interested in mining subjective attributes from data, such as mood, sentiment, or beauty. I believe such research works could benefit immensely from collaboration between MM scientists and domain experts in psychology, cognitive science, visual perception, or the visual arts.

Tell us more about the vision and objectives behind your current roles. What do you hope to accomplish, and how will you bring this about?

My dream is to make multimedia science even more useful for society and for collective growth. Multimedia data allows us to easily absorb and communicate knowledge, without language barriers. Producing and generating audiovisual content has never been easier: today, the potential of multimedia for learning and sharing human knowledge is unprecedented! Intelligent multimedia systems could be put in place to support editor communities in making free online encyclopedias like Wikipedia, or collaborative knowledge bases like Wikidata, more “visual” – and therefore less tied to individual languages. By doing so, we could increase the possibility for people around the world to freely access the sum of all knowledge.

I like your approach about making something useful for society. What do you think about the criticism that multimedia research is too applied?

For me, high-quality research means creative research, where ‘creative’ means ‘new and valuable’. The coexistence of breadth and depth in multimedia allows us to create novel and useful applied research works, thus making them, to me, as interesting and inspiring as more theoretical research works.

Can you profile your current research, its challenges, opportunities, and implications?

I work on responsible multimedia algorithms. I love building machines that can classify audiovisual and textual data according to subjective properties – for example, the informativeness of an image with respect to a topic, its epistemic value, the beauty of a photo, the creative degree of a video. Given the inherently subjective nature of these tasks, one of the main challenges of my research is to make such models responsible, namely:
1) Diversity-aware, i.e. reflecting the real subjective perception of people with different cultural backgrounds; this is key to empowering specific cultures, designing AI to grow diversified content and fill the knowledge gaps in online knowledge repositories.
2) Interpretable and unbiased, i.e. not only able to classify content, but also able to say why the content was classified in a certain way (so that we can detect algorithmic bias). Such powerful algorithms can be used to study the visual preferences of users of web and social media platforms, and to retrieve interesting content accordingly.

Do you think that one day we will have algorithms that truly understand human perception of beauty and art? Or will it always depend on the data?

Philosophers have been trying for centuries to understand the true nature of aesthetic perception. In general, I do not believe in absolute truths, and I am not really confident that algorithms will become great philosophers anytime soon.

How would you describe the role of women especially in the field of multimedia?

The role of women in multimedia is the role of any researcher in their scientific community: contribute to scientific development, push the boundaries of what is known, doubt the widely accepted notions, make this world a better place (no pressure!). Maintaining diversity (any kind of diversity – including gender, expertise, race, age) in the scientific discourse is crucial: as opposed to a single mono-culture, a diverse community gathers, elaborates and combines different perspectives, thus forcing a collective creative process of exchange and growth, which is essential to scientific development.

Do you think that female researchers are well represented in the multimedia community? For example, there was no female keynote speaker at ACM MM 2017.

I am not sure about the numbers, so I can’t say for sure what the percentage of women and non-binary persons in the multimedia community is. But I am sure that percentage is greater than 0. When filling positions of high visibility, such as keynotes or committee members, we should always keep in mind that one of our tasks is to inspire younger generations. Generations of young, brilliant, beautifully diverse researchers.

How would you describe your top innovative achievements in terms of the problems you were trying to solve, your solutions, and the impact it has today and in the future?

Since my early days in multimedia, when we were retrieving video shots of airplanes, until today, when we classify creative videos or interesting pictures, I would say that the main contribution of my research has been to “break the boundaries”.
We broke the scientific field boundaries. We designed multimedia algorithms inspired by the visual arts and psychology; we collaborated with experts from philosophy, media history, and sociology; and we were able to deliver creative, interdisciplinary research works contributing to the advancement of multimedia and all the fields involved.

We broke the social network boundaries, with models able to quantify the intrinsic quality of images on a photo-sharing platform. Furthermore, we showed that popularity-driven mechanisms, typical of social networks, fail to promote high-quality content, and that only content-based quality assessment tools could restore meritocracy in online media platforms.

We broke the cultural boundaries: together with an amazing multi-cultural research team, we were able to design computer vision models that can adapt to different cultures and language communities. While the effectiveness of our approaches and the scientific growth are per se a main achievement, the publications resulting from this collaborative effort reached the top-level computer vision, multimedia and social media conferences (with a best paper award at ICWSM and a multimodal best paper award at ICMR), and our work was featured by a number of tech journals and in a TEDx presentation. Together with other scientists, we also started a number of initiatives to gather people from different communities who are interested in this area: a special session at ICMR 2017, a workshop at MM 2017, one at CVPR 2018, and a special issue of ACM TOMM.

What are in your opinion the future topics in multimedia? Where is the community strong, and where could it improve or increase focus?

My feeling is that we should re-discover and empower the ‘multi-’ness of our research field.
I think the beauty of multimedia research is the ability to tell compelling multimodal stories from signals of very diverse nature, with a focus on the positive experience of the user. We are able to process multiple sources of information and use them, for example, to generate multi-sensorial artistic compositions, expose interesting findings about users and their behavior in multiple modalities, or provide tools to explore and align multimodal information, allowing easier knowledge absorption. We should not forget the diversity of modalities we are able to process (e.g. music, social signals, or traditional image data), the types of attributes we can draw from these modalities (e.g. sentiment, appeal, or more binary semantic labels), and the variety of application scenarios we can imagine for our research works (e.g. arts, photography, cooking, or more consolidated use cases such as image search or retrieval). And we should encourage emerging topics and applications towards these ‘multi-’nesses.
Beyond multidisciplinarity and multiple modalities, I would also hope to see more multi-cultural research works: given the beautifully diverse world we are part of, I believe multimedia research works and applications should model and take into account the multiple points of view, diverse perceptual responses, and the cultural and language differences of users around the world.

Miriam nowadays.

Over your distinguished career, what are your top lessons you want to share with the audience?

I am not sure if this is a real lesson; it is more something I deeply believe in. Stereotypes kill ideas. Stereotyping others (colleagues, friends) might make communication, brainstorming, or collective problem solving much harder, because it somehow influences the importance given to other people’s ideas. Also, stereotyping oneself and one’s limits might constrain the possibilities and narrow one’s view of the shapes of possible future paths.

How was it to have a sister working in the same field of research? Is it motivation or pressure? Did you collaborate on some topics?

In one word: inspiring. We never officially collaborated in any research work. Unofficially, we’ve been ‘collaborating’ for 32 years :) (Interview with Judith Redi)