
In today’s Grand Challenge session at ACM Multimedia 2013, two solutions to the MediaMixer / VideoLectures.NET challenge on temporal segmentation of e-learning videos were presented:

1) The MUST-VIS system (IDIAP, Switzerland) offers an appealing visualisation of video-to-video and segment-to-segment linking based on video analysis, tagging and their associations. A first demo was presented, which can be seen at http://portal.klewel.com/graph

2) The Hasso Plattner Institute (HPI) presented a solution based purely on OCR analysis of the accompanying lecture slides, providing both global and partial segmentation of the video; this proved more accurate than shot segmentation techniques relying on transitions between video frames. The slide set is available at http://www.yanghaojin.com/research/ACM-MM-GC-DEMO/
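The intuition behind slide-text-based segmentation can be illustrated with a minimal sketch (our own toy illustration, not HPI’s actual pipeline): compare the recognised text of consecutive slides and open a new segment whenever the text changes substantially.

```python
from difflib import SequenceMatcher

def segment_boundaries(slide_texts, threshold=0.5):
    """Return indices of slides that start a new segment, based on how
    much each slide's OCR'd text differs from the previous slide's."""
    boundaries = [0]  # the first slide always opens a segment
    for i in range(1, len(slide_texts)):
        similarity = SequenceMatcher(None, slide_texts[i - 1], slide_texts[i]).ratio()
        if similarity < threshold:  # text changed substantially -> new topic
            boundaries.append(i)
    return boundaries

# Toy OCR output for four slides: slides 0-1 share a topic, slides 2-3 another.
slides = [
    "Introduction to Media Fragments",
    "Introduction to Media Fragments (cont.)",
    "Copyright Ontology overview",
    "Copyright Ontology: reasoning",
]
print(segment_boundaries(slides))
```

A real system would of course combine this with layout analysis and slide-to-video synchronisation; the sketch only shows why slide text is a stronger segmentation signal than raw frame transitions.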

While the global Grand Challenge winners will be announced tonight, the MediaMixer project has already selected its Grand Challenge winner, and congratulates IDIAP Switzerland on an appealing visualisation of video fragment associations, which can bring great usability value to our VideoLecturesMashup demonstrator. The authors win a chance to visit the VideoLectures team in Ljubljana, Slovenia and collaborate with us on a new version of VideoLecturesMashup!


At this week’s ACM Multimedia, the solutions to the Multimedia Grand Challenges will be presented. The Multimedia Grand Challenge presents a set of problems and issues from industry leaders, geared to engage the multimedia research community in solving relevant, interesting and challenging questions about the industry’s 3-5 year vision for multimedia. MediaMixer and VideoLectures.NET have introduced a brand-new challenge this year, in line with their vision of fragmenting online media and annotating it to support new services like VideoLecturesMashup.

In a session on Thursday, both shortlisted solutions will be presented and a winner will be announced, who receives a MediaMixer prize of travel to Ljubljana to meet the VideoLectures.NET team:

  • “Multi-factor Segmentation for Topic Visualization and Recommendation: the MUST-VIS System”, by Chidansh Bhatt, Andrei Popescu-Belis, Maryam Habibi, Sandy Ingram, Stefano Masneri, Fergus McInnes, Nikolaos Pappas and Oliver Schreer.
  • “Lecture Video Segmentation by Automatically Analyzing the Synchronized Slides”, by Xiaoyin Che, Haojin Yang and Christoph Meinel.
ACM Multimedia attendees are invited to find out the result in the Grand Challenge session on Thursday. MediaMixer will be tweeting @project_mmixer, and the winner will also be announced via the website.

The partner EURECOM, represented by Raphael Troncy, will keynote at two International Semantic Web Conference 2013 (ISWC 2013) workshops, speaking about Media Fragments and then their annotation (based on the entity extraction technology NERD):

* A talk at the First International Workshop on Semantic Music and Media (SMAM) entitled “Deep-linking into Media Assets at the Fragment Level: Specification, Model and Applications”, see http://semanticmedia.org.uk/smam2013/#programme

* A talk at the NLP & DBpedia Workshop entitled “NERD: an open source platform for extracting and disambiguating named entities in very diverse documents”, see http://nlp-dbpedia2013.blogs.aksw.org/program/

For those not in Sydney, Australia this week, MediaMixer will publish the slides as soon as we have them!

MediaMixer is pleased to show its industry demonstrators during ACM Multimedia 2013, taking place next week in Barcelona, Oct 23-25, 2013. The first public showing of the industry benefits of semantic multimedia fragment technology covers:

  • VideoLecturesMashup, offering online learners the opportunity to explore a topic or subject across parts of different learning materials;
  • Rights Integration and Intelligence for your Media, giving companies an intuitive tool to semantically model the copyright of their media and a reasoner to determine permissions for re-use or purchase;
  • Re-use of Video in the Newsroom, demonstrating a tool for newsrooms to directly browse and include relevant video fragments into their news programming.

MediaMixer’s demos will be shown during conference breaks from Wednesday morning until Friday lunchtime, and visitors who see a demo & leave their business card will also receive a MediaMixer stressball, memo holder or pen*!

Find out more about MediaMixer’s use cases (free registration to access materials)!

See you at #acmmm13!

* Subject to availability!

The MediaMixer project is pleased to announce the release of the recording of our first Webinar, “What is MediaMixing?”, by Lyndon Nixon (MODUL University). Dr Nixon is the coordinator of the MediaMixer project, and in this talk he explains how MediaMixer was set up to respond to enterprise trends of increased media creation and re-use, and why we believe semantic multimedia technology can make media assets more valuable for their owner and more useful for their consumer.

This is the first in a series of Webinars on MediaMixer technology and its benefits for the enterprise; see http://mediamixer.eu/live for the full schedule and past Webinar recordings.

MediaMixer Webinars are made possible thanks to the Jozef Stefan Institute and VideoLectures.NET.

The next LIVE MediaMixer Webinar will take place on Wednesday 2 October at 1400 CET streamed at http://mediamixer.eu/live.

Our colleague Raphael Troncy (EURECOM) will give an introduction to the Media Fragments specification, already a W3C Recommendation, which provides an agreed syntax to refer to parts of media assets using Web-friendly URIs. He will go on to discuss how annotations may be attached to those Media Fragments so that media metadata can become more fine-grained, allowing for retrieval and re-use of media at the fragment level.
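For a flavour of the syntax: the W3C Media Fragments URI 1.0 specification addresses a temporal clip by appending `#t=start,end` (in seconds) to a media URI, and a spatial region with `#xywh=x,y,w,h`. A minimal Python sketch (our own illustration, not project code) for building and parsing temporal fragment URIs:

```python
def temporal_fragment(uri, start, end):
    """Address the clip from `start` to `end` seconds of a media asset,
    per the W3C Media Fragments URI 1.0 temporal syntax (#t=start,end)."""
    return f"{uri}#t={start},{end}"

def parse_temporal_fragment(fragment_uri):
    """Return (base_uri, start, end) for a #t=start,end fragment URI."""
    base, _, frag = fragment_uri.partition("#t=")
    start, _, end = frag.partition(",")
    return base, float(start), float(end)

uri = temporal_fragment("http://example.org/lecture.mp4", 30, 90)
print(uri)                           # http://example.org/lecture.mp4#t=30,90
print(parse_temporal_fragment(uri))  # ('http://example.org/lecture.mp4', 30.0, 90.0)
```

Because the fragment rides on an ordinary URI, any annotation attached to it (e.g. a topic or a speaker) automatically describes just that clip, which is what enables fragment-level retrieval.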

Join the live presentation and discuss with us via Twitter @project_mmixer. Webinars are also recorded and will be available online at http://mediamixer.eu/live later. MediaMixer community members receive regular mails reminding them of MediaMixer activities – join free at http://community.mediamixer.eu 


Calling all interested multimedia practitioners, especially MSc/PhD students with a research topic covering Multimedia Applications or Multimedia Processing: we are pleased to announce that our Winter School is now open for registration!

Attendees will hear about and get hands-on experience with semantic multimedia technologies, including media fragments, analysis, annotation and digital rights, from MediaMixer partner experts. Invited speakers (EU project coordinators!) will give insights into current and future use of the technology in their domains (broadcasting, digital preservation, e-learning…).

Even better: while stand-alone registration for this event is already amazingly low (€150!), thanks to our cooperation with the MMM 2014 conference, conference registrants need only pay €50 more on their registration fee to attend the school!

For more about the school, see http://winterschool.mediamixer.eu

For registration, go to http://mmm2014.org/registration/

MediaMixer partner CONDAT will present their use case during the event ‘Semantic Media Web Innovationsforum’ this September 26 in Berlin, Germany. The two-day event is the primary German-language conference on semantic and media technologies. All aspects of semantic and multimedia technologies will be covered, with a focus on German research and industry outcomes. CONDAT, for example, already works with the German public broadcasters on innovative new solutions for their workflows and data management. In this talk, entitled “Wiederverwendung von Medienfragmenten für den Newsroom in EU-FP7 Projekt MediaMixer” (“Re-use of Media Fragments for the Newsroom in the EU FP7 Project MediaMixer”), Rolf Fricke will show how MediaMixer technologies can support TV newsrooms in the timely insertion of related media content into news stories, making use of innovations in Media Fragment annotation and retrieval.

MediaMixer partner the University of Lleida gave a presentation entitled “Semantic Copyright Management of Media Fragments” at the DATA 2013 conference, which focuses on Data Management Technologies and Applications.

DATA was an interesting venue from the point of view of DRM systems, one of the conference topics, and therefore a good place to discuss a semantic approach to DRM. The proposal was well received and seen as an interesting approach to global DRM: the value was perceived in moving beyond “Digital Restrictions Management” to full copyright management embedded in a web of linked media and data. Moreover, as a venue devoted to data, and particularly with an interest in open data, there was great interest in applying semantic rights management to open data licenses.
MediaMixer is developing a demonstrator of semantic rights management for media fragments, to be launched in October (and shown during the ACM Multimedia 2013 conference). Check out the published Rights Intelligence use case and the DATA 2013 conference slides.

We are delighted to announce that the MediaMixer partner UdL’s presentation entitled “Linked Data: the Entry Point for Worldwide Media Fragments Re-use and Copyright Management?” has been invited to be presented at the 2013 Semantic Technology & Business Conference – NYC.

Roberto García Gonzalez will present this MediaMixer case study on future content management on the conference’s first day, 2 October 2013:

“One of the biggest barriers for the uptake of a Web of Media is the availability of easy ways to reuse media fragments and manage their copyright. Existing proposals provide limited solutions or find it difficult to scale to the Web. MediaMixer contributes state of the art techniques for media fragment detection and semantic annotation.

This is complemented with copyright management integrated into the Web fabric, using Linked Data principles and reasoning based on a Copyright Ontology. Altogether, this makes it possible to navigate the Web retrieving the metadata describing a piece of content to be reused, linked to the agreement about its copyright, the parties that will share the revenue, etc.

A typical MediaMixer demo involves:

* Fragmenting media assets

* Annotating them using semantic descriptions

* Modeling licenses, policies,… using the Copyright Ontology

* Exposing them for fragment level retrieval and re-use, including copyright reasoning”
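The four demo steps above can be caricatured in a few lines of plain Python (a toy sketch under our own assumptions; the real demonstrators use RDF annotations, Media Fragment URIs and reasoning over the Copyright Ontology rather than in-memory dictionaries):

```python
# 1) Fragment a media asset (temporal fragments addressed by #t=start,end).
fragments = [
    {"uri": "http://example.org/talk.mp4#t=0,300",   "topic": "media fragments"},
    {"uri": "http://example.org/talk.mp4#t=300,600", "topic": "copyright ontology"},
]

# 2) Annotate each fragment with a semantic description (here, just a subject).
annotations = {f["uri"]: {"dc:subject": f["topic"]} for f in fragments}

# 3) Model a licence: which actions the rights holder permits on the asset.
licences = {
    "http://example.org/talk.mp4": {"holder": "ACME Media", "permits": {"play", "embed"}},
}

# 4) Fragment-level retrieval with copyright reasoning: find fragments about
#    a topic whose asset's licence permits the requested action.
def find_reusable(topic, action):
    results = []
    for uri, ann in annotations.items():
        if ann["dc:subject"] != topic:
            continue
        asset = uri.split("#")[0]  # the licence attaches to the whole asset
        if action in licences.get(asset, {}).get("permits", set()):
            results.append(uri)
    return results

print(find_reusable("copyright ontology", "embed"))
```

The point of the pipeline is exactly this final query: retrieval and rights checking happen together, at the fragment level, so a re-user only ever sees content they are actually permitted to use.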