Chunxi Liu

A unified user preference based framework for video content personalization

Supervisor(s) and Committee member(s): Qingming Huang

Nowadays, users can access video resources in many ways, and the number of videos is growing rapidly. At the same time, users' needs are becoming more diversified and personalized. However, people's capacity to use and manage video data has not kept pace with this growth. The conflict between users' requirements and the available technologies results in an 'intention gap' between users and video data. To meet users' diverse needs and overcome this 'intention gap', video content personalization technologies are required. Compared with traditional video services, a personalization system can better meet the needs of users, improve service quality, and enhance the user experience. Video content personalization technologies have broad applications and strong market demand, which makes this research important.

Traditional personalized recommendation systems, such as online book recommenders, almost all employ collaborative filtering. However, this algorithm considers only the similarity between users and items when making recommendations, and ignores the content of the data itself; it is therefore not well suited to video content personalization. Some work has addressed video content personalization, but due to the diversity and complexity of video data, these approaches remain limited to specific application environments.
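To make the contrast concrete, the classical collaborative filtering the paragraph above refers to can be sketched minimally as follows. The rating matrix, user and item indices, and function names here are hypothetical illustrations, not taken from the thesis; note that the prediction uses only ratings, never the videos' content, which is exactly the limitation the thesis argues against:

```python
import numpy as np

# Hypothetical user-item rating matrix: rows = users, columns = videos.
# 0 means "not yet rated". Collaborative filtering works purely on this
# matrix and never looks at the content of the videos themselves.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(user, item):
    """Predict `user`'s rating for `item` as a similarity-weighted
    average of the ratings other users gave to that item."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue  # skip self and users who have not rated the item
        s = cosine_sim(ratings[user], ratings[other])
        num += s * ratings[other, item]
        den += abs(s)
    return num / den if den else 0.0

# Predict user 1's rating for video 2 (rated only by user 3 here,
# so the prediction equals user 3's rating of 4.0).
score = predict(user=1, item=2)
```

The sketch shows why content plays no role: two videos with identical rating columns are indistinguishable to the algorithm, however different their actual content is.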

This thesis proposes a unified video content personalization model. In the model, the structure of the videos is first analyzed. Then, the contents of the videos are analyzed with respect to the user's requirements. Finally, video content personalization is achieved by ranking the video contents according to the user's preference. To verify the validity and generality of the model, the thesis tests it on three different types of video: news video, online video, and sports video. The experimental results show that the model is valid and generalizes well. The results of the thesis have strong practical value and provide a guideline for the video content personalization domain.
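The final ranking step described above can be sketched abstractly: represent each analyzed video segment and the user's preference as feature vectors, score each segment by its match with the preference, and return segments in descending order of score. The segment names, feature vectors, and dot-product scoring below are a hypothetical illustration of this step, not the thesis's actual model:

```python
import numpy as np

# Hypothetical content feature vectors for analyzed video segments,
# e.g. topic weights produced by the content-analysis stage.
segments = {
    "news_clip":   np.array([0.9, 0.1, 0.0]),
    "sports_clip": np.array([0.0, 0.2, 0.8]),
    "talk_clip":   np.array([0.3, 0.6, 0.1]),
}

def personalize(preference, segments):
    """Rank segments by dot-product match with the user's
    preference vector, most relevant first."""
    scores = {name: float(vec @ preference) for name, vec in segments.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A user whose preference vector leans heavily toward sports content.
user_pref = np.array([0.1, 0.1, 0.9])
ranking = personalize(user_pref, segments)
```

The point of the sketch is the pipeline shape: structure and content analysis produce the feature vectors, and personalization reduces to a ranking over them, which is what lets one model cover news, online, and sports video alike.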



The Joint Research & Development Laboratory for Advanced Computer and Communication Technologies (JDL) is a research unit specializing in multimedia, communication, and intelligent human-computer interaction, aiming at core research in intelligent wide-band network multimedia systems as well as the development of key applications in these fields. JDL was founded in March 1996, originally as a joint laboratory cosponsored by Motorola US and the National Center on Intelligent Computers (NCIC), Institute of Computing Technology, Chinese Academy of Sciences. Since July 2000, it has been run cooperatively by the Institute of Computing Technology and the Graduate School of the Chinese Academy of Sciences. The researchers in the laboratory come from several units: the Research Center for Digital Media of the Graduate School of CAS, the Institute of Computing Technology, the School of Computer Science and Technology of Harbin Institute of Technology, and the College of Computer Science of Beijing University of Technology. There are also visiting researchers from other domestic and foreign institutions and industrial companies.

The main research fields of JDL include audio-video coding technologies, content-based information retrieval from massive multimedia data, biometrics, intelligent human-computer interaction, and applied algorithms. Several projects from the National "973" Program, the National Fund of Sciences, the National Hi-Tech R&D Program of China (863 Program), the Key Technologies R&D Program, and the Knowledge Innovation Program of the Chinese Academy of Sciences are currently under way in the lab.

After years of effort, the lab has achieved many original research results. More than 200 academic papers have been published in domestic and international journals and conference proceedings. We are especially strong in audio-video coding technologies, face detection and recognition, content-based multimedia retrieval, and multi-perception technologies, areas in which many innovative contributions have been made.

JDL also contains four more research units: the workgroup for the standardization of Chinese audio-video coding/decoding technologies, run jointly with the MPEG-China National Body and the China Ministry of Information Industry; the Technical Center of the China-America Digital Academic Library, Graduate School of the Chinese Academy of Sciences; the UNU/NUL Chinese Language Center; and the ICT-YCNC Joint Research & Development Lab for face recognition. JDL maintains close cooperation with domestic and international universities, research units, and IT companies. Cooperation projects concerning face recognition, digital libraries, distance education, digital broadcasting, etc. are warmly welcome.
