ACM SIGMM Records

Volume 8, Issue 4, December 2016 (ISSN 1947-4598)

Call for Task Proposals: Multimedia Evaluation 2017

MediaEval 2017 Multimedia Evaluation Benchmark

Call for Task Proposals

Proposal Deadline: 3 December 2016

MediaEval is a benchmarking initiative dedicated to developing and evaluating new algorithms and technologies for multimedia retrieval, access and exploration. It offers tasks to the research community that are related to human and social aspects of multimedia. MediaEval emphasizes the ‘multi’ in multimedia and seeks tasks involving multiple modalities, e.g., audio, visual, textual, and/or contextual.

MediaEval is now calling for proposals for tasks to run in the 2017 benchmarking season. The proposal consists of a description of the motivation for the task and challenges that task participants must address. It provides information on the data and evaluation methodology to be used. The proposal also includes a statement of how the task is related to MediaEval (i.e., its human or social component), and how it extends the state of the art in an area related to multimedia indexing, search or other technologies that support users in accessing multimedia collections.

For more detailed information about the content of the task proposal, please see:
http://www.multimediaeval.org/files/mediaeval2017_taskproposals.html

Task proposal deadline: 3 December 2016

Task proposals are chosen on the basis of their feasibility, their match with the topical focus of MediaEval, and also according to the outcome of a survey circulated to the wider multimedia research community.

The MediaEval 2017 Workshop will be held 13-15 September 2017 in Dublin, Ireland, co-located with CLEF 2017 (http://clef2017.clef-initiative.eu).

For more information about MediaEval see http://multimediaeval.org or contact Martha Larson m.a.larson@tudelft.nl

 

MPEG Column: 116th MPEG Meeting

MPEG Workshop on 5-Year Roadmap Successfully Held in Chengdu

Chengdu, China – The 116th MPEG meeting was held in Chengdu, China, from 17 – 21 October 2016

At its 116th meeting, MPEG successfully organised a workshop on its 5-year standardisation roadmap. Various industry representatives presented their views and reflected on the need for standards for new services and applications, specifically in the area of immersive media. The results of the workshop (roadmap, presentations) and the planned phases for the standardisation of “immersive media” are available at http://mpeg.chiariglione.org/. A follow-up workshop will be held on 18 January 2017 in Geneva, co-located with the 117th MPEG meeting. The workshop is open to all interested parties and free of charge. Details on the program and registration will be available at http://mpeg.chiariglione.org/.

Summary of the “Survey on Virtual Reality”

At its 115th meeting, MPEG established an ad-hoc group on virtual reality which conducted a survey on virtual reality with relevant stakeholders in this domain. The feedback from this survey has been provided as input for the 116th MPEG meeting where the results have been evaluated. Based on these results, MPEG aligned its standardisation timeline with the expected deployment timelines for 360-degree video and virtual reality services. An initial specification for 360-degree video and virtual reality services will be ready by the end of 2017 and is referred to as the Omnidirectional Media Application Format (OMAF; MPEG-A Part 20, ISO/IEC 23000-20). A standard addressing audio and video coding for 6 degrees of freedom where users can freely move around is on MPEG’s 5-year roadmap. The summary of the survey on virtual reality is available at http://mpeg.chiariglione.org/.

MPEG and ISO/TC 276/WG 5 Collect and Evaluate Responses to the Joint Call for Proposals on Genomic Information Compression and Storage

At its 115th meeting, MPEG issued a Call for Proposals (CfP) for Genomic Information Compression and Storage in conjunction with the working group for standardisation of data processing and integration of the ISO Technical Committee for biotechnology standards (ISO/TC 276/WG 5). The call sought submissions of technologies that can provide efficient compression of genomic data and metadata for storage and processing applications. During the 116th MPEG meeting, the responses to this CfP, comprising twelve distinct technologies, were collected and evaluated by a joint ad-hoc group of both working groups. An initial assessment of the performance of the best eleven solutions reported compression factors ranging from 8 to 58 across the different classes of data.
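To put the reported compression factors in perspective, here is a deliberately naive baseline (not one of the CfP submissions): packing each DNA base into 2 bits yields a fixed 4x reduction over one-byte-per-base text storage, whereas the submitted, context-adaptive technologies reach factors of 8 to 58.

```python
# Naive baseline: pack each DNA base (A, C, G, T) into 2 bits, giving a
# fixed 4x reduction over 1-byte-per-base text storage. The CfP responses
# reach factors of 8-58 with far more sophisticated, context-adaptive
# models; this sketch only illustrates the available headroom.

BASE_TO_BITS = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def pack(seq: str) -> bytes:
    """Pack a base string into bytes, 4 bases per byte (length kept separately)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | BASE_TO_BITS[base]
        byte <<= 2 * (4 - len(group))  # left-align a trailing partial group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, length: int) -> str:
    """Recover the original base string; `length` trims the padded tail."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS_TO_BASE[(byte >> shift) & 0b11])
    return "".join(bases[:length])

seq = "ACGTACGTTG"
packed = pack(seq)
assert unpack(packed, len(seq)) == seq
print(len(seq) / len(packed))  # ~3.33 for this short string; 4.0 asymptotically
```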

The twelve submitted technologies show consistent improvements over the results assessed in response to the Call for Evidence in February 2016. Further improvements of the technologies under consideration are expected from the first phase of core experiments defined at the 116th MPEG meeting. The open core-experiment process planned for the next 12 months will comprise multiple independent, directly comparable, and rigorous experiments performed by independent entities, to determine the specific merit of each technology and its integration with the others into a single solution for standardisation. The core-experiment process will consider the submitted technologies as well as new solutions within the scope of each specific core experiment. The final inclusion of technologies in the standard will be based on the experimental comparison of performance, on the validation of requirements, and on the inclusion of essential metadata describing the context of the sequence data, and will be reached by consensus within and across both committees.

Call for Proposals: Internet of Media Things and Wearables (IoMT&W)

At its 116th meeting, MPEG issued a Call for Proposals (CfP) for Internet of Media Things and Wearables (see http://mpeg.chiariglione.org/), motivated by the understanding that more than half of major new business processes and systems will incorporate some element of the Internet of Things (IoT) by 2020. Therefore, the CfP seeks submissions of protocols and data representation enabling dynamic discovery of media things and media wearables. A standard in this space will facilitate the large-scale deployment of complex media systems that can exchange data in an interoperable way between media things and media wearables.

MPEG-DASH Amendment with Media Presentation Description Chaining and Pre-Selection of Adaptation Sets

At the 116th MPEG meeting, a new amendment for MPEG-DASH reached the final stage of standardisation, i.e., Final Draft Amendment (ISO/IEC 23009-1:2014 FDAM 4). This amendment includes several technologies useful for industry practices of adaptive media presentation delivery. For example, the media presentation description (MPD) can be daisy-chained to simplify the implementation of pre-roll ads in cases of targeted dynamic advertising for live linear services. Additionally, the amendment enables support for pre-selection in order to signal suitable combinations of audio elements that are offered in different adaptation sets. As several amendments and corrigenda have been produced, this amendment will be published as part of the 3rd edition of ISO/IEC 23009-1, together with the amendments and corrigenda approved after the 2nd edition.
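The daisy-chaining idea can be sketched as a client that, when one presentation ends, loads the follow-up presentation it names. Note that the field names below are illustrative stand-ins, not the normative ISO/IEC 23009-1 syntax.

```python
# Toy sketch of MPD daisy-chaining for pre-roll ads: each presentation can
# name a follow-up MPD that the client loads when the current one ends.
# Field names (e.g. next_mpd_url) are illustrative, not the spec's syntax.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Mpd:
    url: str
    duration_s: float
    next_mpd_url: Optional[str] = None  # chaining hook (illustrative)

def play_chain(mpds: dict, start_url: str, max_hops: int = 10) -> list:
    """Follow the MPD chain (e.g., ad MPD -> live linear MPD); return play order."""
    order, url = [], start_url
    while url is not None and len(order) < max_hops:
        mpd = mpds[url]
        order.append(mpd.url)   # a real client would fetch and play segments here
        url = mpd.next_mpd_url  # daisy-chain to the next presentation
    return order

catalog = {
    "ad.mpd": Mpd("ad.mpd", 30.0, next_mpd_url="live.mpd"),
    "live.mpd": Mpd("live.mpd", float("inf")),
}
print(play_chain(catalog, "ad.mpd"))  # → ['ad.mpd', 'live.mpd']
```

The point of chaining is that the ad and the live service stay in separate MPDs, so the live MPD needs no per-viewer rewriting.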

How to contact MPEG, learn more, and find other MPEG facts

To learn about MPEG basics, discover how to participate in the committee, or find out more about the array of technologies developed or currently under development by MPEG, visit MPEG’s home page at http://mpeg.chiariglione.org. There you will find information publicly available from MPEG experts past and present including tutorials, white papers, vision documents, and requirements under consideration for new standards efforts. You can also find useful information in many public documents by using the search window.

Examples of tutorials that can be found on the MPEG homepage include tutorials for High Efficiency Video Coding, Advanced Audio Coding, Unified Speech and Audio Coding, and DASH, to name a few. A rich repository of white papers can also be found and continues to grow. You can find these papers and tutorials for many of MPEG’s standards freely available. Press releases from previous MPEG meetings are also available. Journalists who wish to receive MPEG Press Releases by email should contact Dr. Christian Timmerer at christian.timmerer@itec.uni-klu.ac.at or christian.timmerer@bitmovin.com.

Further Information

Future MPEG meetings are planned as follows:
No. 117, Geneva, CH, 16 – 20 January, 2017
No. 118, Hobart, AU, 03 – 07 April, 2017
No. 119, Torino, IT, 17 – 21 July, 2017
No. 120, Macau, CN, 23 – 27 October 2017

For further information about MPEG, please contact:
Dr. Leonardo Chiariglione (Convenor of MPEG, Italy)
Via Borgionera, 103
10040 Villar Dora (TO), Italy
Tel: +39 011 935 04 61
leonardo@chiariglione.org

or

Priv.-Doz. Dr. Christian Timmerer
Alpen-Adria-Universität Klagenfurt | Bitmovin Inc.
9020 Klagenfurt am Wörthersee, Austria, Europe
Tel: +43 463 2700 3621
Email: christian.timmerer@itec.aau.at | christian.timmerer@bitmovin.com

ACM TVX — Call for Volunteer Associate Chairs

CALL FOR VOLUNTEER ASSOCIATE CHAIRS – Applications for Technical Program Committee

ACM TVX 2017: International Conference on Interactive Experiences for Television and Online Video
June 14-16, 2017, Hilversum, The Netherlands
www.tvx2017.com


We welcome applications to join the TVX 2017 Technical Program Committee (TPC) as an Associate Chair (AC). The role involves playing a key part in the submission and review process, including attendance at the TPC meeting (please note that this is not a call for reviewers, but a call for Associate Chairs). Applications are open to all members of the community, from both industry and academia, who feel they can contribute to this team.

Following the success of previous years’ open invitations to join our Technical Program Committee, we again invite applications for Associate Chairs. Successful applicants will be responsible for arranging and coordinating reviews for around 3 or 4 submissions in the main Full and Short Papers track of ACM TVX 2017, and for attending the Technical Program Committee meeting in Delft, The Netherlands, in mid-March 2017 (participation in person is strongly recommended). Our aim is to broaden participation, ensuring a diverse Technical Program Committee, and to widen the ACM TVX community to include a full range of perspectives.

We welcome applications from academics, industrial practitioners and (where appropriate) senior PhD students who have expertise in Human Computer Interaction or related fields, and who have an interest in topics related to interactive experiences for television or online video. We expect all applicants to have ‘top-tier’ publications related to this area. Applicants should have expertise or an interest in at least one of the topics in our call for papers: https://tvx.acm.org/2017/participation/full-and-short-paper-submissions/

After the application deadline, the volunteers will be considered and selected as ACs; the TPC Chairs are also free to invite previous ACs or other researchers from the community to join the team. The ultimate goal is a balanced, diverse and inclusive TPC in terms of fields of expertise, experience and perspectives, from both academia and industry.

To submit, just fill in the application form!

CONTACT INFORMATION

For up to date information and further details please visit: www.tvx2017.com or get in touch with the Inclusion Chairs:

Teresa Chambel, University of Lisbon, PT; Rob Koenen, TNO, NL
at: inclusion@tvx2017.com

In collaboration with the Program Chairs: Wendy van den Broeck, Vrije Universiteit Brussel, BE; Mike Darnell, Samsung, USA; Roger Zimmermann, NUS, Singapore

Call for Grand Challenge Problem Proposals

Original page: http://www.acmmm.org/2017/contribute/call-for-multimedia-grand-challenge-proposals/

 

The Multimedia Grand Challenge was first presented as part of ACM Multimedia 2009 and has established itself as a prestigious competition in the multimedia community.  The purpose of the Multimedia Grand Challenge is to engage with the multimedia research community by establishing well-defined and objectively judged challenge problems intended to exercise state-of-the-art techniques and methods and inspire future research directions.

Industry leaders and academic institutions are invited to submit proposals for specific Multimedia Grand Challenges to be included in this year’s program.

A Grand Challenge proposal should include:

Grand Challenge proposals will be considered until March 1st and will be evaluated on an on-going basis as they are received. Grand Challenge proposals that are accepted to be part of the ACM Multimedia 2017 program will be posted on the conference website and included in subsequent calls for participation. All material, datasets, and procedures for a Grand Challenge problem should be ready for dissemination no later than March 14th.

While each Grand Challenge is allowed to define an independent timeline for solution evaluation and may allow iterative resubmission and possible feedback (e.g., a publicly posted leaderboard), challenge submissions must be complete and a paper describing the solution and results should be submitted to the conference program committee by July 14, 2017.

Grand Challenge proposals should be sent via email to the Grand Challenge chair, Ketan Mayer-Patel.

Those interested in submitting a Grand Challenge proposal are encouraged to review the problem descriptions from ACM Multimedia 2016 as examples. These are available here: http://www.acmmm.org/2016/?page_id=353

PhD Thesis Summaries

Amirhossein Habibian

Storytelling Machines for Video Search

Supervisor(s) and Committee member(s): Arnold W.M. Smeulders (promotor), Cees G.M. Snoek (co-promotor)

URL: http://dare.uva.nl/record/1/540787

ISBN: 978-94-6182-715-9

This thesis studies the fundamental question: what vocabulary of concepts is suited for machines to describe video content? The answer involves two annotation steps: first, specifying a list of concepts by which videos are described; second, labelling a set of videos per concept as examples or counterexamples. Subsequently, the vocabulary is constructed as a set of video concept detectors learned from the provided annotations by supervised learning.

Starting from handcrafting the vocabulary by manual annotation, we gradually automate vocabulary construction by concept composition, and by learning from human stories. As a case study, we focus on vocabularies for describing events, such as marriage proposal, graduation ceremony, and changing a vehicle tire, in videos.

As the first step, we rely on an extensive pool of manually specified concepts to study the best practices for handcrafting the vocabulary. From our analysis, we conclude that the vocabulary should encompass thousands of concepts of various types, including object, action, scene, people, animal, and attribute. Moreover, the vocabulary should include detectors for both generic and specific concepts, trained and normalized in an appropriate way.
We alleviate the manual labor of vocabulary construction by addressing the next research question: can a machine learn novel concepts by composition? We propose an algorithm which learns new concepts by composing ground concepts with Boolean logic connectives, e.g., “ride-AND-bike”. We demonstrate that concept composition is an effective way to infer the annotations needed for training new concept detectors, without additional human annotation.
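As an illustration of the composition idea (with toy data, not the thesis’ actual algorithm), labels for a composed concept can be inferred directly from existing per-concept annotations:

```python
# Minimal sketch of concept composition by Boolean connectives: labels for
# a new concept such as "ride-AND-bike" are inferred from existing
# per-concept annotations, so no extra human labeling is needed to train
# the composed detector. Data and concept names are illustrative.

def compose_and(labels_a, labels_b):
    """Positive for the composed concept iff positive for both ground concepts."""
    return [a and b for a, b in zip(labels_a, labels_b)]

def compose_or(labels_a, labels_b):
    """Positive for the composed concept iff positive for either ground concept."""
    return [a or b for a, b in zip(labels_a, labels_b)]

# toy annotations over five videos (1 = positive example, 0 = negative)
ride = [1, 1, 0, 1, 0]
bike = [1, 0, 0, 1, 1]

ride_and_bike = compose_and(ride, bike)  # → [1, 0, 0, 1, 0]
ride_or_bike = compose_or(ride, bike)    # → [1, 1, 0, 1, 1]
# these inferred labels can now train a "ride-AND-bike" detector directly
```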
As a further step towards reducing the manual labor of vocabulary construction, we investigate whether a machine can learn its vocabulary from human stories, i.e., video captions or subtitles. By analyzing the human stories using topic models, we effectively extract the concepts that humans use to describe videos. Moreover, we show that the occurrences of concepts in stories can be effectively used as weak supervision to train concept detectors.
Finally, we address how to learn the vocabulary from human stories. We learn the vocabulary as an embedding from videos into their stories, utilizing the correlations between terms to learn the embedding more effectively. More specifically, we learn similar embeddings for terms which highly co-occur in the stories, as these terms are usually synonyms. Furthermore, we extend our embedding to learn the vocabulary from various video modalities, including audio and motion. This enables us to generate more natural descriptions by incorporating concepts from various modalities, e.g., the laughing and singing concepts from audio, and the jumping and dancing concepts from motion.

Intelligent Sensory Information Systems group

URL: https://ivi.fnwi.uva.nl/isis/

The world is full of digital images and videos. In this deluge of visual information, the grand challenge is to unlock its content. This quest is the central research aim of the Intelligent Sensory Information Systems group. We address the complete knowledge chain of image and video retrieval by machine and human. Topics of study are semantic understanding, image and video mining, interactive picture analytics, and scalability. Our research strives for automation that matches human visual cognition, interaction surpassing man and machine intelligence, visualization blending it all in interfaces giving instant insight, and database architectures for extreme sized visual collections. Our research culminates in state-of-the-art image and video search engines which we evaluate in leading benchmarks, often as the best performer, in user studies, and in challenging applications.

Chien-nan Chen

Semantic-Aware Content Delivery Framework for 3D Tele-Immersion

Supervisor(s) and Committee member(s): Klara Nahrstedt (advisor), Roy Campbell (opponent), Indranil Gupta (opponent), Cheng-Hsin Hsu (opponent)

URL: http://cairo.cs.uiuc.edu/publications/papers/Shannon_Thesis.pdf

3D Tele-immersion (3DTI) technology allows full-body, multimodal interaction among geographically dispersed users, which opens up a variety of possibilities in cyber-collaborative applications such as art performance, exergaming, and physical rehabilitation. However, along with its great potential, the resource and quality demands of 3DTI inevitably rise, especially when advanced applications target resource-limited computing environments with stringent scalability demands. Under these circumstances, the tradeoffs between 1) resource requirements, 2) content complexity, and 3) user satisfaction in the delivery of 3DTI services are magnified.

In this dissertation, we argue that these tradeoffs of 3DTI systems are actually avoidable when the underlying delivery framework of 3DTI takes semantic information into consideration. We introduce the concept of semantic information into 3DTI, encompassing information about three factors: environment, activity, and user role in 3DTI applications. With semantic information, 3DTI systems are able to 1) identify the characteristics of their computing environment so as to allocate computing power and bandwidth to the delivery of prioritized content, 2) pinpoint and discard dispensable content in activity capturing according to the properties of the target application, and 3) differentiate content by its contribution to fulfilling the objectives and expectations of the user’s role in the application, so that the adaptation module can allocate the resource budget accordingly. With these capabilities we can turn the tradeoffs into synergy between resource requirements, content complexity, and user satisfaction.

We implement semantics-aware 3DTI systems to verify the performance gain in the three phases of the 3DTI delivery chain: the capturing phase, the dissemination phase, and the receiving phase. By introducing semantic information into distinct 3DTI systems, the efficiency improvements brought by our semantics-aware content delivery framework are validated under different application requirements, different scalability bottlenecks, and different user and application models.

To sum up, in this dissertation we aim to change the tradeoff between requirements, complexity, and satisfaction in 3DTI services by exploiting semantic information about the computing environment, the activity, and the user role in the underlying delivery systems of 3DTI. The devised mechanisms will enhance the efficiency of 3DTI systems targeting different purposes and 3DTI applications with different computation and scalability requirements.

MONET

URL: http://cairo.cs.uiuc.edu/

The Multimedia Operating Systems and Networking (MONET) Research Group, led by Professor Klara Nahrstedt in the Department of Computer Science at the University of Illinois at Urbana-Champaign, is engaged in research in various areas of distributed multimedia systems.

Masoud Mazloom

In Search of Video Event Semantics

Supervisor(s) and Committee member(s): Arnold W.M. Smeulders (promotor), Cees G.M. Snoek (co-promotor)

URL: http://dare.uva.nl/record/1/430219

ISBN: 978-94-6182-717-3

In this thesis we aim to represent an event in a video using semantic features. We start from a bank of concept detectors for representing events in video.
First, we consider the relevance of concepts to the event inside the video representation. We address the problem of video event classification using a bank of concept detectors. Different from existing work, which simply relies on a bank containing all available detectors, we propose an algorithm that learns from examples which concepts in the bank are most informative per event.
Second, we concentrate on the accuracy of concept detectors. Different from existing work, which obtains a semantic representation by training concepts over entire video clips, we propose an algorithm that learns a set of relevant frames as concept prototypes from web video examples, without the need for frame-level annotations, and uses them to represent an event video.
Third, we consider the problem of searching video events with concepts. We aim at querying web videos for events using only a handful of video query examples, where the standard approach learns a ranker from hundreds of examples. We consider a semantic representation, consisting of off-the-shelf concept detectors, to capture the variance in semantic appearance of events.
Finally, we consider the problem of video event search without semantic concepts. The prevailing solutions in the literature rely on a semantic video representation obtained from thousands of pre-trained concept detectors. Different from these, we propose a new semantic video representation based solely on freely available socially tagged videos, without the need to train any intermediate concept detectors.
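As a rough sketch of few-example event search in a semantic space (illustrative only; the thesis’ actual methods are more involved), one can average the concept-score vectors of the handful of query examples into an event prototype and rank the collection by cosine similarity:

```python
# Toy sketch of few-example event search: query videos are represented by
# concept-detector score vectors, averaged into an event prototype, and the
# collection is ranked by cosine similarity to that prototype.
# Concept names, scores, and video titles are illustrative.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def event_prototype(query_vectors):
    """Average the concept-score vectors of the query examples."""
    n = len(query_vectors)
    return [sum(col) / n for col in zip(*query_vectors)]

def rank(collection, query_vectors):
    """Rank (name, vector) pairs by similarity to the event prototype."""
    proto = event_prototype(query_vectors)
    return sorted(collection, key=lambda item: cosine(item[1], proto), reverse=True)

# toy concept scores over [dog, bike, crowd] for a "bike race" query
queries = [[0.1, 0.9, 0.7], [0.0, 0.8, 0.9]]
videos = [("picnic", [0.6, 0.1, 0.3]),
          ("race",   [0.1, 0.9, 0.8]),
          ("parade", [0.2, 0.4, 0.9])]
print([name for name, _ in rank(videos, queries)])  # 'race' ranks first
```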

Intelligent Sensory Information Systems group

URL: https://ivi.fnwi.uva.nl/isis/

Svetlana Kordumova

Learning to Search for Images without Annotations

Supervisor(s) and Committee member(s): Arnold W.M. Smeulders (promotor), Cees G.M. Snoek (co-promotor)

URL: http://dare.uva.nl/record/1/540788

ISBN: 978-608-4784-15-9

This thesis contributes to teaching machines what is in an image while avoiding direct manual annotation as training data. We either rely on tagged data from social media platforms to recognize concepts, or on object semantics and layout to recognize scenes. We focus our effort on image search.
We first demonstrate that concept detectors can be learned using tagged examples from social media platforms. We show that using tagged images and videos directly as ground truth for learning can be problematic because of the noisy nature of tags. To this end, through extensive experimental analysis, we recommend calculating the relevance of tags and selecting only relevant positive and relevant negative examples for learning. In addition, we present four best practices which led to a winning entry in the TRECVID 2013 benchmark for the semantic indexing with no annotations task. Following the finding that important concepts rarely appear as tags on social media platforms, we propose to use semantic knowledge from an ontology to improve the calculation of tag relevance and to enrich training data for learning concept detectors of rare tags.
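One simple way to estimate tag relevance, sketched here in the spirit of neighbor voting (the thesis’ exact measures may differ), is to count how often a tag occurs among an image’s visual neighbors relative to its overall frequency in the collection; thresholding this score selects relevant positives and negatives:

```python
# Hedged sketch of tag-relevance estimation via neighbor voting: a tag is
# deemed relevant to an image when it appears among the image's visual
# neighbors more often than chance. Tags and data are illustrative.

def tag_relevance(tag, neighbor_tag_sets, collection_tag_sets):
    """Votes from the visual neighbors, minus the expected chance-level count."""
    k = len(neighbor_tag_sets)
    votes = sum(tag in tags for tags in neighbor_tag_sets)
    prior = sum(tag in tags for tags in collection_tag_sets) / len(collection_tag_sets)
    return votes - k * prior  # > 0: tag occurs above chance among neighbors

# toy collection of five tagged images, and the tag sets of the three
# images visually most similar to the image under consideration
collection = [{"dog", "park"}, {"dog"}, {"cat"}, {"party"}, {"dog", "cat"}]
neighbors = [{"dog", "park"}, {"dog"}, {"dog", "cat"}]

print(tag_relevance("dog", neighbors, collection))    # positive → relevant tag
print(tag_relevance("party", neighbors, collection))  # negative → likely noise
```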
When searching for images of a particular scene, instead of using annotated scene images, we show that object classifiers can recognize scenes reasonably well. We exploit 15,000 object classifiers trained with a convolutional neural network. Since not all objects contribute equally to describing a scene, we show that pooling only the 100 most prominent object classifiers per image is good enough to recognize its scene. Furthermore, we go to the extreme of recognizing scenes by removing all object identities. We refer to the most probable positions in images to contain objects as things. We show that the ensemble of thing properties (size, position, aspect ratio and prominent color), and those only, can discriminate scenes. The benefit of removing all object identities is that we also eliminate the learning of object classifiers, and thus demonstrate that scenes can be recognized with no learning at all.
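The pooling step can be sketched as follows (illustrative data; the thesis uses 15,000 classifiers and keeps the top 100):

```python
# Minimal sketch of top-k pooling: keep only the k most prominent
# object-classifier scores per image (zeroing the rest) and use the
# result as the scene representation. Scores here are illustrative.

def topk_pool(object_scores, k=100):
    """Zero out all but the k highest-scoring object responses."""
    if len(object_scores) <= k:
        return list(object_scores)
    threshold = sorted(object_scores, reverse=True)[k - 1]
    pooled, kept = [], 0
    for s in object_scores:
        if s >= threshold and kept < k:
            pooled.append(s)  # one of the k most prominent objects
            kept += 1
        else:
            pooled.append(0.0)
    return pooled

scores = [0.05, 0.9, 0.1, 0.7, 0.02, 0.6]
print(topk_pool(scores, k=3))  # → [0.0, 0.9, 0.0, 0.7, 0.0, 0.6]
```

The pooled vector can then feed any standard scene classifier in place of the full response vector.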
Overall, this thesis presents alternative ways to learn what concept is in an image or what scene it belongs to, without using manually annotated data, for the goal of image search. It investigates new approaches for teaching machines to recognize the visually depicted environment captured in images, all the while dismissing the annotation process.

Intelligent Sensory Information Systems group

URL: https://ivi.fnwi.uva.nl/isis/

Job Opportunities

Post-doc opportunity @ Irisa/Inria Rennes, France – Multimodal Audiovisual Content Analysis

LINKMEDIA is a research team of IRISA and Inria Rennes, France, working on the development of future technology enabling content-based description of and access to multimedia content, combining computer vision and image processing, speech and audio processing, natural language processing, information retrieval and media mining. LINKMEDIA participates in the NexGenTV project, an industry-academia joint venture on the analysis and enrichment of TV content. Television is undergoing a revolution, moving from the TV screen to multiple screens. Today’s user watches TV while exploring the web, searching for complementary information and commenting on social networks. Facing this situation, NexGenTV was conceived to offer new solutions for the creation of rich multiscreen content and applications.

In this context, we are recruiting a post-doctoral researcher specialized in audiovisual content analysis to develop, study and evaluate novel approaches for multimodal person recognition, clustering and linking in TV content. Research activities will take place at IRISA/Inria Rennes, France, within the LINKMEDIA team, in close collaboration with the partners of NexGenTV. Particular interaction with EURECOM is foreseen.

Prospective candidates should hold a PhD degree in a domain close to the research topic, preferably in one of the following specialisms: multimodal modeling, speech and audio processing, speaker recognition, or computer vision.

Employer: CNRS, Irisa, Rennes, France

Expiration date: Saturday, April 1, 2017

More information: https://www-linkmedia.irisa.fr/files/2011/07/Offre-Emploi-Linkmedia-NexGenTV-En.pdf

PostDoc (AreaHead) Position in Ubiquitous Computing

The Telecooperation Lab at TU Darmstadt is looking for a new postdoctoral researcher (area head) for the Smart Proactive Assistance area. The area covers several research fields, ranging from mobile sensing via machine learning to human-computer interaction and persuasive computing.

To apply you must hold a PhD (or be close to its completion) in Computer Science, Data Science, or a related discipline. You should also have demonstrated your research competence through high-quality and high-impact publications in top conferences or journals in one (or more) of the following areas: ubiquitous computing, data science, and/or machine learning.

More information about the post can be found at: https://www.tk.informatik.tu-darmstadt.de/index.php?id=3046

Contact for clarification and informal inquiries:
Christian Meurisch (meurisch@tk.tu-darmstadt.de)

Employer: TU Darmstadt (Telecooperation Lab), Germany

Expiration date: Wednesday, March 1, 2017

More information: https://www.tk.informatik.tu-darmstadt.de/index.php?id=3046

PhD Position in 3D Computer Vision at University of Amsterdam

The Informatics Institute at the University of Amsterdam invites applications for a PhD position for four years, on the topic of 3D Computer Vision. The candidate will be supervised by Thomas Mensink and Arnold Smeulders.

The ultimate goal of this position is to enable 3D reasoning based on a single 2D photo. We aim to estimate the rough 3D geometry by separating the layout of objects in the scene from the global scene layout. While objects have an almost infinite number of possible configurations, the global scene layout is relatively more stable and can be cast in about 20 scene geometry types. The first research question is to define these different types and infer them from a single image alone using deep learning. Next, we focus on the local ordering of objects, to infer out-of-context objects and to describe an image based on this 3D ordering.

Context

The research position is part of a collaboration between SRI Stanford (USA), IDIAP (Martigny, Switzerland) and the University of Amsterdam to automatically infer inconsistencies among the different modalities of a video. To this end, the 3D geometry delivers an important cue to match the visual and audio channels. Within the collaboration, the University of Amsterdam focuses on the visual scene analysis.

Informal inquiries may be directed to: Thomas Mensink (thomas-dot-mensink-at-uva.nl)
Additional information and application procedure:
http://www.uva.nl/en/about-the-uva/working-at-the-uva/vacancies/item/17-046-phd-candidate-in-3d-computer-vision.html?n


hack begin box

Employer: University of Amsterdam

Expiration date: Wednesday, March 15, 2017

More information: http://www.uva.nl/en/about-the-uva/working-at-the-uva/vacancies/item/17-046-phd-candidate-in-3d-computer-vision.html

hack end box

Research Fellows (Postdocs) in Creative Technologies /Visual Computing

Join our new team of 20+ researchers (half postdocs, half PhDs) in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing. We are building a dynamic environment where enthusiastic young scientists with different backgrounds get together to shape the future in fundamental as well as applied research projects. Possible directions include but are not limited to:
• augmented reality (AR),
• virtual reality (VR),
• free viewpoint video (FVV),
• 3D video,
• 360/omni-directional video,
• high dynamic range (HDR),
• wide colour gamut (WCG),
• light-field technologies,
• segmentation/matting,
• 3D reconstruction, etc.

Individual research plans will be designed jointly by the PI, the successful candidates and the team, taking into account individual background, expertise, skills and interests, matching the overall strategy, and exploiting opportunities and inspirations.

The research project “V-Sense – Extending Visual Sensation through Image-Based Visual Computing” is funded by SFI over five years with a substantial budget to cover over 20 researchers. This is part of a strategic investment in Creative Technologies by SFI and Trinity College, which is defined as one of the strategic research themes of the College. V-Sense intends to become an incubator in this context, to stimulate further integration and growth and to impact Creative Industries in Ireland as a whole.

Standard duties and Responsibilities of the Post

• Fundamental and/or applied research in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing
• Scientific publications
• Contribution to prototype and demonstrator development
• Overall contribution to V-SENSE and teamwork
• Supervision of PhD and other students
• Outreach & dissemination

Funding Information

The position is funded through the Science Foundation Ireland V-SENSE project.

Salary
Appointment will be made on the SFI Team Member Budget Postdoctoral Research Fellow Level 2A salary scale at a point in line with Government Pay Policy.

Post Status
Specific Purpose contract, approximately 4 years – Full-time
The successful candidate will be expected to take up the post as soon as possible, preferably in March/April 2017.

Person Specification

Qualifications
• A Ph.D. in Computer Science, Engineering, or a related field in the area of ICT.

Knowledge & Experience
• An established track record of publication in leading journals and/or conferences, in one or more sub-areas of Visual Computing.
• Excellent knowledge of and integration in the related scientific communities.
• The ability to work well in a group, and the ability to mentor junior researchers, such as Ph.D. students.
• Affinity for creative dimensions of visual computing

Skills & Competencies
• Good written and oral proficiency in English (essential).
• Good communication and interpersonal skills both written and verbal.
• Proven aptitude for Programming, System Analysis and Design.
• Proven ability to prioritise workload and work to exacting deadlines.
• Proven track record of publication in high-quality venues.
• Flexible and adaptable in responding to stakeholder needs.
• Strong team player who is able to take responsibility to contribute to the overall success of the team.
• Enthusiastic and structured approach to research and development.
• Excellent problem-solving abilities.
• Desire to learn about new products and technologies and to keep abreast of new technical and research developments.

Contacts and application
Candidates should submit a cover letter together with a full curriculum vitae to include the names and contact details of 2 referees (email addresses if possible) to:
Name: Orla Fox
Title: Research Project Administrator
Email Address: Orla.Fox@SCSS.TCD.ie
Contact Telephone Number: 018968176
Please include the reference code: VS-RF on all correspondence.

hack begin box

Employer: V-SENSE Project, School of Computer Science and Statistics, Trinity College Dublin, the University of Dublin

Expiration date: Tuesday, January 31, 2017

More information: https://www.scss.tcd.ie/vacancies/index.php?id=189

hack end box

PhD Studentship (4 positions) in Creative Technologies /Visual Computing

Join our new team of 20+ researchers (half postdocs, half PhDs) in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing. We are building a dynamic environment where enthusiastic young scientists with different backgrounds get together to shape the future in fundamental as well as applied research projects. Possible directions include but are not limited to:
• augmented reality (AR),
• virtual reality (VR),
• free viewpoint video (FVV),
• 3D video,
• 360/omni-directional video,
• high dynamic range (HDR),
• wide colour gamut (WCG),
• light-field technologies,
• segmentation/matting,
• 3D reconstruction, etc.

Individual research plans will be designed jointly by the PI, the successful candidates and the team, taking into account individual background, expertise, skills and interests, matching the overall strategy, and exploiting opportunities and inspirations.

The research project “V-Sense – Extending Visual Sensation through Image-Based Visual Computing” is funded by SFI over five years with a substantial budget to cover over 20 researchers. This is part of a strategic investment in Creative Technologies by SFI and Trinity College, which is defined as one of the strategic research themes of the College. V-Sense intends to become an incubator in this context, to stimulate further integration and growth and to impact Creative Industries in Ireland as a whole.
The successful candidate will be expected to take up the post as soon as possible, preferably in March 2017.

Standard duties and Responsibilities of the Post

• Fundamental and/or applied research in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing
• Scientific publications
• Contribution to prototype and demonstrator development
• Overall contribution to V-SENSE and teamwork

Funding Information

The position is funded through the Science Foundation Ireland V-SENSE project.
Payment of a tax-free stipend of 18k per annum, plus payment of EU academic fees.
Applicants must have been resident in an EU member state for 3 of the last 5 years to be eligible for EU fees.

Qualifications
The researcher will be expected to have a good primary degree (preferably MSc) in Computer Science, ICT, Electronic Engineering, Mathematics, Statistics, or a related discipline. Good programming skills are essential.
The successful candidate must meet Trinity College Dublin entry requirements for Postgraduate Research Degrees, and also have excellent communication skills.

https://www.tcd.ie/courses/postgraduate/how-to-apply/requirements/index.php

Knowledge & Experience
• Enthusiasm for scientific research
• Strong ambition to learn and to master skills and knowledge to a world leading level
• Background in a sub-area of Visual Computing such as Computer Vision, Computer Graphics, or Media Signal Processing
• Programming experience in larger projects, e.g. in C++, OpenCV, OpenGL, MATLAB, etc.
• Affinity for creative dimensions of visual computing

Skills & Competencies
• Good written and oral proficiency in English (essential).
• Good communication and interpersonal skills both written and verbal.
• Proven aptitude for Programming, System Analysis and Design.
• Proven ability to prioritise workload and work to exacting deadlines.
• Strong team player who is able to take responsibility to contribute to the overall success of the team.
• Enthusiastic and structured approach to research and development.
• Excellent problem-solving abilities.
• Desire to learn about new products and technologies and to keep abreast of new technical and research developments.

Contacts and application
Please apply via email to Orla.Fox@SCSS.TCD.ie and include:
• A targeted cover letter (600-1000 words) expressing your suitability for the position
• A complete CV
Please include the reference code: VS-PhD on all correspondence.
There will be an interview process, and the successful candidate will be invited to apply via the TCD graduate studies admission system.

General enquiries concerning this post can be addressed to Orla.Fox@scss.tcd.ie

hack begin box

Employer: V-SENSE Project, School of Computer Science and Statistics, Trinity College Dublin, the University of Dublin

Expiration date: Tuesday, January 31, 2017

More information: https://www.scss.tcd.ie/vacancies/index.php?id=188

hack end box

Experienced Research Fellow (Postdoc 4+ years) in Creative Technologies /Visual Computing

Join our new team of 20+ researchers (half postdocs, half PhDs) in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing. We are building a dynamic environment where enthusiastic young scientists with different backgrounds get together to shape the future in fundamental as well as applied research projects. Possible directions include but are not limited to:
• augmented reality (AR),
• virtual reality (VR),
• free viewpoint video (FVV),
• 3D video,
• 360/omni-directional video,
• high dynamic range (HDR),
• wide colour gamut (WCG),
• light-field technologies,
• segmentation/matting,
• 3D reconstruction, etc.
Individual research plans will be designed jointly by the PI, the successful candidates and the team, taking into account individual background, expertise, skills and interests, matching the overall strategy, and exploiting opportunities and inspirations.

The Experienced Research Fellow will provide leadership in the team, e.g. as supervisor and/or project leader. Academic career development will be encouraged and supported.

The research project “V-Sense – Extending Visual Sensation through Image-Based Visual Computing” is funded by SFI over five years with a substantial budget to cover over 20 researchers. This is part of a strategic investment in Creative Technologies by SFI and Trinity College, which is defined as one of the strategic research themes of the College. V-Sense intends to become an incubator in this context, to stimulate further integration and growth and to impact Creative Industries in Ireland as a whole.

Standard duties and Responsibilities of the Post
• Fundamental and/or applied research in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing
• Scientific publications
• Contribution to prototype and demonstrator development
• Overall contribution to V-SENSE and teamwork
• Supervision of PhD and other students
• Outreach & dissemination
• Leadership in the team

Funding Information
The position is funded through the Science Foundation Ireland V-SENSE project.

Salary
Appointment will be made on the SFI Team Member Budget Experienced Postdoctoral Research Fellow Level 2B salary scale at a point in line with Government Pay Policy.

Post Status
Specific Purpose contract, approximately 4 years – Full-time
The successful candidate will be expected to take up the post as soon as possible, preferably in March/April 2017.

Person Specification
Qualifications
• A Ph.D. in Computer Science, Engineering, or a related field in the area of ICT.
• A minimum of 4 years of postdoctoral experience.

Knowledge & Experience (Essential & Desirable)
Required
• An established track record of publication in leading journals and/or conferences, in one or more sub-areas of Visual Computing.
• Excellent knowledge of and integration in the related scientific communities.
• The ability to work well in a group, and the ability to mentor junior researchers, such as Ph.D. students.
Desired
• Affinity for creative dimensions of visual computing
• Experience in supervision and project leadership.
• Experience in academic services, e.g. peer reviewing, workshop/conference committee, etc.
• Experience with funding acquisition
• Teaching experience
• Academic track record, e.g. talks, tutorials
• Industry experience or engagement
• Prototype development
• Exhibitions, demos
• Standardization

Skills & Competencies
• Good written and oral proficiency in English (essential).
• Good communication and interpersonal skills both written and verbal.
• Proven aptitude for Programming, System Analysis and Design.
• Proven ability to prioritise workload and work to exacting deadlines.
• Proven track record of publication in high-quality venues.
• Flexible and adaptable in responding to stakeholder needs.
• Strong team player who is able to take responsibility to contribute to the overall success of the team.
• Enthusiastic and structured approach to research and development.
• Excellent problem-solving abilities.
• Desire to learn about new products and technologies and to keep abreast of new technical and research developments.

Contacts and application
Candidates should submit a cover letter together with a full curriculum vitae to include the names and contact details of 2 referees (email addresses if possible) to:
Name: Orla Fox
Title: Research Project Administrator
Email Address: Orla.Fox@SCSS.TCD.ie
Contact Telephone Number: 018968176
Please include the reference code: VS-ERF on all correspondence.

hack begin box

Employer: V-SENSE Project, School of Computer Science and Statistics, Trinity College Dublin, the University of Dublin

Expiration date: Tuesday, January 31, 2017

More information: https://www.scss.tcd.ie/vacancies/index.php?id=184

hack end box

PhD Studentship (4 positions) in Creative Technologies /Visual Computing

Post Title: PhD Studentship

Research Project: V-SENSE Project, School of Computer Science and Statistics, Trinity College Dublin, the University of Dublin

Post Status: Specific Purpose contract-up to 4 years- PhD Studentship in Creative Technologies/Visual Computing

Post Summary:
Join our new team of 20+ researchers (half postdocs, half PhDs) in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing. We are building a dynamic environment where enthusiastic young scientists with different backgrounds get together to shape the future in fundamental as well as applied research projects. Possible directions include but are not limited to:
• augmented reality (AR),
• virtual reality (VR),
• free viewpoint video (FVV),
• 3D video,
• 360/omni-directional video,
• high dynamic range (HDR),
• wide colour gamut (WCG),
• light-field technologies,
• segmentation/matting,
• 3D reconstruction, etc.

Individual research plans will be designed jointly by the PI, the successful candidates and the team, taking into account individual background, expertise, skills and interests, matching the overall strategy, and exploiting opportunities and inspirations.

The research project “V-Sense – Extending Visual Sensation through Image-Based Visual Computing” is funded by SFI over five years with a substantial budget to cover over 20 researchers. This is part of a strategic investment in Creative Technologies by SFI and Trinity College, which is defined as one of the strategic research themes of the College. V-Sense intends to become an incubator in this context, to stimulate further integration and growth and to impact Creative Industries in Ireland as a whole.

Standard duties and Responsibilities of the Post:
• Fundamental and/or applied research in Visual Computing at the intersection of Computer Vision, Computer Graphics and Media Signal Processing
• Scientific publications
• Contribution to prototype and demonstrator development
• Overall contribution to V-SENSE and teamwork

Qualifications:
The researcher will be expected to have a good primary degree (preferably MSc) in Computer Science, ICT, Electronic Engineering, Mathematics, Statistics, or a related discipline. Good programming skills are essential.
The successful candidate must meet Trinity College Dublin entry requirements for Postgraduate Research Degrees, and also have excellent communication skills.

https://www.tcd.ie/courses/postgraduate/how-to-apply/requirements/index.php

Knowledge & Experience:
• Enthusiasm for scientific research
• Strong ambition to learn and to master skills and knowledge to a world leading level
• Background in a sub-area of Visual Computing such as Computer Vision, Computer Graphics, or Media Signal Processing
• Programming experience in larger projects, e.g. in C++, OpenCV, OpenGL, MATLAB, etc.
• Affinity for creative dimensions of visual computing

Skills & Competencies:
• Good written and oral proficiency in English (essential).
• Good communication and interpersonal skills both written and verbal.
• Proven aptitude for Programming, System Analysis and Design.
• Proven ability to prioritise workload and work to exacting deadlines.
• Strong team player who is able to take responsibility to contribute to the overall success of the team.
• Enthusiastic and structured approach to research and development.
• Excellent problem-solving abilities.
• Desire to learn about new products and technologies and to keep abreast of new technical and research developments.

Funding Information: The position is funded through the Science Foundation Ireland V-SENSE project.

Benefits: Payment of a tax-free stipend of 18k per annum. In addition, payment of EU academic fees.
NOTE: Applicants must have been resident in an EU member state for 3 of the last 5 years to be eligible for EU fees.

Closing Date: Open until filled
The successful candidate will be expected to take up the post as soon as possible, preferably in March 2017.

Application Procedure:
Please apply via email to Orla.Fox@SCSS.TCD.ie and include:
• A targeted cover letter (600-1000 words) expressing your suitability for the position
• A complete CV
Please include the reference code: VS-PhD on all correspondence.
There will be an interview process, and the successful candidate will be invited to apply via the TCD graduate studies admission system.
General enquiries concerning this post can be addressed to Orla.Fox@scss.tcd.ie

Trinity College is an equal opportunities employer

hack begin box

Employer: Trinity College Dublin

Expiration date: Tuesday, January 31, 2017

More information: https://www.scss.tcd.ie/vacancies/index.php?id=188

hack end box

Postdocs positions at Intelligent Information Media Laboratory

Postdocs positions at Intelligent Information Media Laboratory (established in October, 2016), Toyota Technological Institute (TTI-Japan)

Positions: Post-doctoral Research Fellow

Number of Positions Available: Three (3)

Research Field
Research on various kinds of human sensing and modeling with multimedia data such as images and videos (e.g., human motion sensing, human pose estimation, human action recognition, facial expression recognition, mental state estimation, surveillance) and basic techniques for these fields (e.g., computer vision, pattern recognition, machine learning such as deep neural networks and unsupervised learning).

Qualifications
PhD in a related field.

Starting Date
At the earliest possible date once the employment contract has been concluded.

Terms of Employment
Yearly contract: renewable up to 3 years if positively evaluated.

Salary
JPY 320,000/month, plus commuting expenses and partial support for housing.

Documents to submit
(1) CV, including the applicant’s photograph, e-mail contact address, and the possible starting date of employment
(2) List of publications
(3) Summary of research achievements (about one page)
(4) Future plan of research and other activities (about one page)
(5) Name, affiliation, and e-mail address of two contact references
– Candidacy will not be considered unless all documents are submitted.
– Interested applicants may submit the documents either by email with the subject line “Postdoc for intelligent information media” (PDF format is preferred) or by postal mail with “Postdoc for intelligent information media” written on the envelope; see the address below.
– The application documents will not be returned.

Deadline for submission
March 31, 2017. Applications will close once the positions are filled, regardless of the deadline.

Inquiry
Professor Norimichi Ukita
Toyota Technological Institute
2-12-1 Hisakata, Tempaku-ku, Nagoya 468-8511, JAPAN
Phone: +81-52-809-1832
e-mail: ukita@toyota-ti.ac.jp
http://www.toyota-ti.ac.jp/Lab/Denshi/iim/ukita/

Toyota Technological Institute is an Equal Opportunity/Affirmative Action Employer.

hack begin box

Employer: Toyota Technological Institute (TTI-Japan)

Expiration date: Friday, March 31, 2017

More information: http://www.toyota-ti.ac.jp/english/employment/2016/10/000321.html

hack end box


Post-doc offer: Social Media Analytics

CSIRO offers PhD graduates an opportunity to launch their scientific careers through our Postdoctoral Fellowships. These fellowships provide experience that will enhance career prospects and facilitate the development of potential leaders for CSIRO.

In this role you will find an attractive balance between theoretical research in social media analytics and its application to the mining sector.  The role also provides a unique opportunity to work in the emerging research area of Social Media analytics, looking at a number of aspects, such as Trust in the Social Web.

You will be part of a supportive, vibrant, multidisciplinary team of world-leading researchers, and contribute your expertise in modelling information flow, attitudes and influence in social media to develop a theoretical model for information flow and influence in social media.

Through this Postdoctoral Fellowship you will gain expertise in the real-time assessment of trust depicted in social media, with strong skills in big data analytics, data modelling, and visualisation.

hack begin box

Employer: CSIRO (Commonwealth Scientific and Industrial Research Organisation), Australia

Expiration date: Monday, October 31, 2016

More information: https://jobs.csiro.au/job/Sydney%2C-NSW-CSIRO-Postdoctoral-Fellowship-Social-Media-Analytics/365517500/

hack end box

Calls for Contribution

CFPs: Sponsored by ACM SIGMM

ACM MM 2017

ACM International Conference on Multimedia

hack begin box

Submission deadline: 07. April 2017

Location: Mountain View, CA, USA
Dates: 23. October 2017 -27. October 2017

More information: http://www.acmmm.org/2017/

Sponsored by ACM SIGMM

hack end box

Call for Regular Papers: ACM Multimedia is the premier conference in multimedia, a research field that discusses emerging computing methods from a perspective in which each medium — e.g. images, text, audio — is a strong component of the complete, integrated exchange of information. The multimedia community has a tradition … Read more

ACM MoVid 2017

ACM Workshop on Mobile Video 2017

hack begin box

Submission deadline: 10. March 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw/movid/

Sponsored by ACM SIGMM

hack end box

ACM MoVid 2017 solicits original and unpublished research achievements in various aspects of mobile video services. The focus of this workshop is to present and discuss recent advances in the broad area of mobile video services. Specifically, the workshop intends to address the following topics: a) Novel mobile video applications … Read more

Demo Track @ ACM MMSys 2017

ACM Multimedia Systems 2017

hack begin box

Submission deadline: 10. February 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw/index.php/demo-track/

Sponsored by ACM SIGMM

hack end box

As in previous years, the demo sessions will promote applied research, application prototypes and systems along with the scientific program. The sessions will not only showcase the applicability of recent results to real-world problems but also trigger ideas exchanges between theory and practice and collaborations between MMSys attendees. Submissions from … Read more

Open Dataset and Software Track @ ACM MMSys 2017

ACM Multimedia Systems 2017

hack begin box

Submission deadline: 10. February 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw/index.php/dataset-track/

Sponsored by ACM SIGMM

hack end box

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems are regularly published in the various proceedings and transactions of the networking, operating system, realtime system, and database communities, MMSys … Read more

MMVE @ ACM MMSys 2017

International Workshop on Massively Multiuser Virtual Environments

hack begin box

Submission deadline: 10. March 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw/mmve/

Sponsored by ACM SIGMM

hack end box

Virtual Environment systems are spatial simulations that provide real-time human interactions with other users or a simulated virtual world. Virtual environments have experienced phenomenal growth in recent years in the form of massively multiplayer online games (MMOGs) such as World of Warcraft and Lineage, and social communities such as Second … Read more

ACM MMSys 2017

ACM Multimedia Systems Conference 2017

hack begin box

Submission deadline: 10. January 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw

Sponsored by ACM SIGMM

hack end box

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research about specific aspects of multimedia systems are regularly published in the various proceedings and transactions of the networking, operating system, realtime system, and database communities, MMSys … Read more

ACM NOSSDAV 2017

The 27th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video

hack begin box

Submission deadline: 10. March 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://mmsys17.iis.sinica.edu.tw/nossdav/

Sponsored by ACM SIGMM

hack end box

NOSSDAV 2017, the 27th ACM SIGMM Workshop on Network and Operating Systems Support for Digital Audio and Video, will be co-located with MMSys 2017 in Taipei, Taiwan on June 20—23, 2017. As in previous years, the workshop will continue to focus on both established and emerging research topics, high-risk high-return … Read more

CFPs: Sponsored by ACM (any SIG)

ACM TVX 2017 WoP

ACM International Conference on Interactive Experiences for Television and Online Video

hack begin box

Submission deadline: 10. March 2017

Location: Hilversum, the Netherlands
Dates: 14. June 2017 -16. June 2017

More information: https://tvx.acm.org/2017/participation/

Sponsored by ACM

hack end box

ACM TVX, the leading international conference for research into online video, TV interaction and user experience, is now calling for Work-in-Progress, Doctoral Consortium, Demo and TVX-in-Industry submissions.

ACM TVX 2017

ACM International Conference on Interactive Experiences for Television and Online Video

hack begin box

Submission deadline: 20. January 2017

Location: Hilversum, The Netherlands
Dates: 14. June 2017 -16. June 2017

More information: http://www.tvx2017.com

Sponsored by ACM

hack end box

ACM TVX is the leading international conference for research into online video, TV interaction and user experience. It is a multi-disciplinary conference and we welcome submissions in a broad range of topics. Our aim is to foster discussions and innovative experiences amongst the academic research community and industry. In particular, … Read more

ACM ICMR 2017

ACM International Conference on Multimedia Retrieval

hack begin box

Submission deadline: 27. January 2017

Location: Bucharest, Romania
Dates: 06. June 2017 -09. June 2017

More information: http://www.icmr2017.ro/

Sponsored by ACM

hack end box

The Annual ACM International Conference on Multimedia Retrieval (ICMR) offers a great opportunity for exchanging leading-edge multimedia retrieval ideas among researchers, practitioners and other potential users of multimedia retrieval systems. This annual conference, which puts together the long-lasting experiences of the former ACM CIVR (International Conference on Image and Video … Read more

CFPs: Sponsored by IEEE (any TC)

IEEE TMM

IEEE Transactions on Multimedia
Video Analytics: Challenges, Algorithms, and Applications

hack begin box

Submission deadline: 15. April 2017

Special issue

Sponsored by IEEE

hack end box

SmartMM2017

The 2017 International Workshop on Smart Multimedia (SmartMM2017)

hack begin box

Submission deadline: 10. March 2017

Location: Hong Kong
Dates: 29. May 2017 -31. May 2017

More information: http://smartmm.org/

Sponsored by IEEE

hack end box

DCER&HPE 2017

Joint Challenge and Workshop on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation

hack begin box

Submission deadline: 24. March 2017

Location: Washington, DC., USA
Dates: 31. May 2017 -31. May 2017

More information: http://icv.tuit.ut.ee/fc2017

Sponsored by IEEE

hack end box

ACII 2017

Affective Computing and Intelligent Interaction

hack begin box

Submission deadline: 02. May 2017

Location: San Antonio, Texas
Dates: 23. October 2017 -26. October 2017

More information: http://www.acii2017.org

Sponsored by IEEE

hack end box

IEEE BigMM 2017

Third IEEE International Conference on Multimedia Big Data

hack begin box

Submission deadline: 05. December 2016

Location: The Hills Hotel, Laguna Hills, California, USA
Dates: 19. April 2017 -21. April 2017

More information: http://www.BigMM.org

Sponsored by IEEE

hack end box

CFPs: Not ACM-/IEEE-sponsored

DMCIT 2017

International Conference on Data Mining, Communications and Information Technology

hack begin box

Submission deadline: 20. February 2017

Location: Phuket, Thailand
Dates: 25. May 2017 -27. May 2017

More information: http://www.dmcit.net/

In cooperation with ACM

hack end box

DIPEWC 2017

The Second International Conference on Digital Information Processing, Electronics, and Wireless Communications (DIPEWC2017)

hack begin box

Submission deadline: 15. August 2017

Location: ISGA (Higher Institute of Engineering and Business - Marrakesh), Marrakesh, Kingdom of Morocco
Dates: 28. September 2017 -30. September 2017

More information: http://sdiwc.net/conferences/2nd-international-conference-on-digital-information-processing-electronics-and-wireless-communications/

hack end box

ISDF 2017

The Third International Conference on Information Security and Digital Forensics (ISDF2017)

Submission deadline: 07. November 2017

Location: Metropolitan College, Thessaloniki, Greece
Dates: 08. December 2017 -10. December 2017

More information: http://sdiwc.net/conferences/3rd-international-information-security-digital-forensics/

ICDIPC 2017

The Seventh International Conference on Digital Information Processing and Communications (ICDIPC2017)

Submission deadline: 11. June 2017

Location: Asia Pacific University of Technology and Innovation (APU), Kuala Lumpur, Malaysia
Dates: 11. July 2017 - 13. July 2017

More information: http://sdiwc.net/conferences/7th-international-conference-digital-information-processing-communications/

ICIAP 2017

19th International Conference on Image Analysis and Processing (ICIAP)

Submission deadline: 31. March 2017

Location: Catania, Italy
Dates: 11. September 2017 -15. September 2017

More information: http://www.iciap2017.com

CEA2017 @ IJCAI2017

The 9th International Workshop on Multimedia for Cooking and Eating Activities

Submission deadline: 01. May 2017

Location: Melbourne, Australia
Dates: 19. August 2017 -21. August 2017

More information: http://www.mm.media.kyoto-u.ac.jp/CEA2017/

MuSIC 2017 @ IEEE ICME 2017

Workshop on Multimedia Streaming in Information-/Content-Centric Networks

Submission deadline: 03. March 2017

Location: Hong Kong
Dates: 10. July 2017 - 14. July 2017

More information: http://music2017.itec.aau.at/

MUST-EH 2017 @ IEEE ICME 2017

7th IEEE ICME International Workshop on Multimedia Services and Technologies for E-health (MUST-EH 2017)

Submission deadline: 27. February 2017

Location: Hong Kong
Dates: 10. July 2017 -14. July 2017

More information: http://www.mcrlab.net/must-eh-workshop/

SparDa @ CBMI 2017

Submission deadline: 28. February 2017

Location: Firenze, Italy
Dates: 19. June 2017 -21. June 2017

More information: http://www.micc.unifi.it/cbmi2017/

In cooperation with ACM SIGMM

DigitalSec2017

The Fourth International Conference on Digital Security and Forensics (DigitalSec2017)

Submission deadline: 11. June 2017

Location: Kuala Lumpur, Malaysia
Dates: 11. July 2017 -13. July 2017

More information: http://sdiwc.net/conferences/4th-conference-digital-security-forensics/

ICESS2017

The Third International Conference on Electronics and Software Science

Submission deadline: 30. June 2017

Location: Takamatsu Sunport Hall Building, Takamatsu, Japan
Dates: 31. July 2017 -02. August 2017

More information: http://sdiwc.net/conferences/3rd-international-electronics-software-science/

CBMI2017

International Workshop on Content-Based Multimedia Indexing

Submission deadline: 28. February 2017

Location: Firenze, Italy
Dates: 19. June 2017 -21. June 2017

More information: https://www.micc.unifi.it/cbmi2017/

NetGames 2017

The 15th Annual Workshop on Network and Systems Support for Games

Submission deadline: 10. February 2017

Location: Taipei, Taiwan
Dates: 20. June 2017 -23. June 2017

More information: http://netgames2017.web.nitech.ac.jp/

In cooperation with ACM SIGMM

INFOSEC 2017

The Third International Conference on Information Security and Cyber Forensics (INFOSEC2017)

Submission deadline: 29. May 2017

Location: Comenius University in Bratislava, Slovakia
Dates: 29. June 2017 -01. July 2017

More information: http://sdiwc.net/conferences/3rd-international-conference-information-security-cyber-forensics/

MVA 2017

The 15th IAPR Conference on Machine Vision Applications

Submission deadline: 05. December 2016

Location: Nagoya, Japan
Dates: 08. May 2017 -12. May 2017

More information: http://www.mva-org.jp/mva2017/

ICETC 2017

The Fourth International Conference on Education, Technologies and Computers

Submission deadline: 01. April 2017

Location: St. Mary's University
Dates: 22. April 2017 -24. April 2017

More information: http://sdiwc.net/conferences/4th-international-education-technologies-computers/

OSBMRM 2017

The Third International Conference on Organizational Strategy, Business Models, and Risk Management

Submission deadline: 26. January 2017

Location: Gulf University
Dates: 26. February 2017 -27. February 2017

More information: http://sdiwc.net/conferences/3rd-international-conference-on-organizational-strategy-business-models-and-risk-management/

CSCESM2017

The Fourth International Conference on Computer Science, Computer Engineering and Social Media (CSCESM2017)

Submission deadline: 16. April 2017

Location: Jadara University
Dates: 16. May 2017 -18. May 2017

More information: http://sdiwc.net/conferences/4th-international-conference-computer-science-computer-engineering-social-media/

Back Matter

Notice to Contributing Authors to SIG Newsletters

By submitting your article for distribution in this Special Interest Group publication, you hereby grant to ACM the following non-exclusive, perpetual, worldwide rights:

- to publish in print on condition of acceptance by the editor
- to digitize and post your article in the electronic version of this publication
- to include the article in the ACM Digital Library and in any Digital Library-related services
- to allow users to make a personal copy of the article for noncommercial purposes

However, as a contributing author, you retain copyright to your article and ACM will refer requests for republication directly to you.

Impressum

Editor-in-Chief

Carsten Griwodz, Simula Research Laboratory

Editors