MediaMixer partner CERTH has released a video showing a demo of video shot segmentation and visual concept detection applied to lecture videos. All processing was performed using automatic analysis techniques. The results can support the subsequent annotation and re-use of the video material, as in the VideoLecturesMashup use case.

See the video at

MediaMixer co-ordinator Lyndon Nixon presented one possible MediaMixer-enabled future of e-learning at the Internet of Education 2013 conference in Ljubljana, Slovenia (organised by project partner Jozef Stefan Institute). In this future, learning videos are mashed up to generate new learning offers for online learners, and such re-mixes could form the basis of new MOOC offers that are more flexible and personalised to individual learners. Dr Nixon noted that as e-learning materials become increasingly video-based, there are new requirements on how to retrieve relevant video by topic and access it in terms of its parts (fragments), which is especially relevant for learners on the go or on mobile devices.

MediaMixer technology is a solution for this, as shown by our use case with VideoLectures.NET, the VideoLecturesMashup. This use case has been described previously, and the video of the demonstrator is also online.

Our slides on MediaMixing for e-learning are available:

Our colleague Vasileios Mezaris (CERTH) will hold the next live Webinar on 14 November (11:00 CET). It is titled “Fragmenting your Media Assets meaningfully – media analysis for fragment detection and extraction”.

The Webinar will discuss a set of video processing techniques for media fragment creation and annotation. These include techniques for the temporal segmentation of the video into shots and scenes, the re-detection of appearances of specific objects throughout the video, and the detection of concepts that describe the temporal video fragments. Such techniques are the first step towards converting the raw video material into meaningful media fragments.
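To give a flavour of the shot segmentation step, a common baseline is to flag a shot boundary wherever the colour histograms of two consecutive frames differ sharply. The sketch below is a minimal illustration in plain NumPy with synthetic frames and an illustrative threshold; it is not CERTH's actual method, which is considerably more sophisticated:

```python
import numpy as np

def frame_histogram(frame, bins=16):
    """Per-channel intensity histogram, normalised to sum to 1."""
    hist = np.concatenate(
        [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
         for c in range(frame.shape[-1])]
    ).astype(float)
    return hist / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Flag a boundary wherever consecutive frame histograms differ
    strongly (L1 distance) -- a classic hard-cut detection heuristic."""
    hists = [frame_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if np.abs(hists[i] - hists[i - 1]).sum() > threshold]

# Two synthetic "shots": five dark frames followed by five bright ones.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (48, 64, 3)) for _ in range(5)]
bright = [rng.integers(180, 255, (48, 64, 3)) for _ in range(5)]
print(detect_shot_boundaries(dark + bright))  # boundary at index 5
```

Real systems refine this idea with adaptive thresholds and additional features to also catch gradual transitions (fades, dissolves), which a simple histogram difference misses.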

Join the live presentation and discuss with us via Twitter @project_mmixer. Webinars are also recorded and will be made available online afterwards. MediaMixer community members receive regular mails reminding them of MediaMixer activities – joining is free.

The MediaMixer project is pleased to release a demo of semantic technologies facilitating copyright management in the context of User Generated Content.

A key issue for the media industry these days, in addition to unauthorised media reproduction and distribution, is controlling the reuse of media in user generated content (UGC). To address this issue and avoid publishing content that infringes copyright, UGC services like YouTube offer mechanisms to detect the unauthorised reuse of media and give rights holders the choice to monetise its use rather than take down the content. However, the full potential of this new revenue stream is at risk if copyright subtleties are not managed appropriately – for instance, when the same song is owned by different rights holders depending on the territory.

What is required is a scalable decision support system capable of integrating digital rights languages, like DDEX or ODRL, together with contracts or policies, like talent contracts or business policies.
MediaMixer semantic technologies provide a common and expressive framework where all these copyright information sources can be represented together.
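As a rough illustration of how a rights expression language can capture such territory-dependent subtleties, the snippet below builds a minimal ODRL-style policy as JSON-LD: a rights holder permits playing a media fragment only within one territory. All identifiers (party, asset, policy URIs) are illustrative, and the shape follows the W3C ODRL JSON-LD encoding rather than the MediaMixer prototype's internal representation:

```python
import json

# Illustrative ODRL-style policy: "AcmeMusic" permits playing the first
# 30 seconds of a song, constrained to one territory (Germany).
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Offer",
    "uid": "http://example.com/policy/1",
    "permission": [{
        "target": "http://example.com/media/song.mp3#t=0,30",
        "action": "play",
        "assigner": "http://example.com/party/AcmeMusic",
        "constraint": [{
            "leftOperand": "spatial",
            "operator": "eq",
            "rightOperand": "https://www.wikidata.org/wiki/Q183"  # Germany
        }]
    }]
}

print(json.dumps(policy, indent=2))
```

A decision support system can then answer "may this fragment be reused here?" by evaluating such constraints against the context of the reuse request.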

See the demo video.

Test the online prototype.


The MediaMixer project is pleased to announce the release of the recording of our recent Webinar on Describing Media Assets, given by Raphael Troncy of EURECOM. In this talk he explains how semantic descriptions of non-textual media available on the Web can facilitate retrieval, re-use and presentation of media assets.

He first presents the Media Fragment URI specification, a recent W3C recommendation that makes it possible to uniquely identify sub-parts of media assets, in the same way that the fragment identifier in a URI can refer to part of an HTML or XML document. He then describes models and ontologies, illustrated with several real-world applications that use semantic annotations attached to media fragments.
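For example, under the Media Fragment URI specification, http://example.org/video.mp4#t=10,20 identifies seconds 10 to 20 of the video. A minimal Python sketch of parsing such a temporal fragment (an illustrative helper, not part of any MediaMixer tool) could look like this:

```python
from urllib.parse import urlparse, parse_qs

def parse_temporal_fragment(url):
    """Extract (start, end) seconds from a Media Fragment URI's #t=...
    part. Returns None if no temporal fragment is present; an omitted
    end time is returned as None, an omitted start defaults to 0."""
    frag = urlparse(url).fragment        # e.g. "t=npt:10,20"
    params = parse_qs(frag)              # {"t": ["npt:10,20"]}
    if "t" not in params:
        return None
    value = params["t"][0]
    if value.startswith("npt:"):         # "normal play time" is the default scheme
        value = value[len("npt:"):]
    start, _, end = value.partition(",")
    return (float(start) if start else 0.0,
            float(end) if end else None)

print(parse_temporal_fragment("http://example.org/video.mp4#t=10,20"))
```

The specification also defines spatial fragments (e.g. #xywh=160,120,320,240), which select a rectangular region of the frame in the same URI-fragment style.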

This is the second in a series of Webinars on MediaMixer technologies and their benefits for the enterprise; the full schedule and past Webinar recordings are available online.

MediaMixer Webinars are made possible thanks to the Jozef Stefan Institute and VideoLectures.NET.

In today’s Grand Challenge session at ACM Multimedia 2013, two solutions to the MediaMixer / VideoLectures.NET challenge on temporal segmentation of e-learning videos were presented:

1) The MUST-VIS system (IDIAP, Switzerland) offers an appealing visualisation of video-to-video and segment-to-segment links derived from video analysis and tagging. A first demo was presented, which can be seen at

2) The Hasso Plattner Institute (HPI) presented a solution based purely on OCR analysis of the accompanying lecture slides, providing both global and partial segmentation of the video; this proved more accurate than shot segmentation techniques that rely on transitions between video frames. The slide set is available at

While the overall Grand Challenge winners will be announced tonight, the MediaMixer project has already selected its Grand Challenge winner, and congratulates IDIAP Switzerland on an appealing visualisation of video fragment associations, which could be of great usability value in our VideoLecturesMashup demonstrator. The authors win a chance to visit the VideoLectures team in Ljubljana, Slovenia and collaborate with us on a new version of VideoLecturesMashup!


MediaMixer partner CONDAT will present their use case at the ‘Semantic Media Web Innovationsforum’ on 26 September in Berlin, Germany. The two-day event is the primary German-language conference on semantic and media technologies. All aspects of semantic and multimedia technologies will be covered, with a focus on German research and industry outcomes. CONDAT, for example, already works with the German public broadcasters on innovative new solutions for their workflows and data management. In his talk, entitled “Wiederverwendung von Medienfragmenten für den Newsroom in EU-FP7 Projekt MediaMixer” (re-use of media fragments for the newsroom in the EU FP7 project MediaMixer), Rolf Fricke will show how MediaMixer technologies can support TV newsrooms in the timely insertion of related media content into news stories, making use of innovations in Media Fragment annotation and retrieval.

MediaMixer partner University of Lleida gave a presentation at the DATA 2013 conference, which is focused on Data Management Technologies and Applications, entitled “Semantic Copyright Management of Media Fragments”.

DATA was an interesting venue from the point of view of DRM systems, one of the conference topics, and therefore a good place to discuss a semantic approach to DRM. The proposal was well received and seen as an interesting approach to global DRM; in particular, value was perceived in moving beyond “Digital Restrictions Management” to full copyright management embedded in a web of linked media and data. Moreover, as a venue devoted to data, and particularly one with an interest in open data, there was great interest in applying semantic rights management to open data licences.
MediaMixer is developing a demonstrator of semantic rights management for media fragments, to be launched in October (and shown during the ACM Multimedia 2013 conference). Check out the published Rights Intelligence use case and the DATA 2013 conference slides.

Pre-registration now open for the 1st Winter School on Multimedia Processing and Applications!

Masters/PhD students with Multimedia & Semantic Web topics can reserve their place before registration opens in September*

Experience top speakers in Dublin in January 2014 on media analysis, annotation, fragmentation, rights management, broadcasting, digital preservation, and e-learning!

For more information and the pre-registration form see

* Preregistration puts you on the list for places at the school. Your attendance is only guaranteed once you complete the registration process in September.

The first workshop on Media Fragment Creation and Re-mixing took place last week at ICME 2013. Despite being held the day after the main conference ended, with 9 parallel workshops/tutorials taking place, the workshop was well attended, with 20-25 unique attendees over the day. They first heard a keynote by Prof. Noboru Babaguchi of Osaka University on “Example-based Remixing of Multimedia Contents”.

This was followed by scientific talks which covered the workshop topics of fragment creation and remixing.

Topics for fragment creation included video concept detection, visual similarity analysis and object re-detection in video.

Topics for fragment remixing included the use of the Media Fragment URI specification to describe differences between (social) media items, and a remix instrument based on fragment feature analysis.

All presentation slides can be seen at the workshop page.