Martin Dow, of MediaMixer partner Acuity Unlimited, was asked to give a short talk on the subject of “Web Semantics”, as it might apply to the FOCAL audience, at the FOCAL conference “Metadata and Why it is Important”, held on 10 July in London.

The talk was billed under the “New Developments” section. Its focus was to briefly introduce what “web semantics” refers to, to highlight recent achievements in institutional archival practice through engineering with the web architecture, and to show how the properties of the web might help realise scale and reuse – particularly its ability to enable cooperation without coordination, and the separation of concerns between metadata acquisition, preservation and reuse.

The conference attracted FOCAL professionals: archive owners, managers, technical staff and archive researchers/archive producers. Some, for example from major broadcast archives, were already familiar with semantic web concepts, whilst for others the concepts were of interest but the practice still seemed quite distant from tape- and file-based day-to-day environments. The New Developments section also featured talks from Godfrey Rust, leading data modelling challenges for the Linked Content Coalition and the UK Copyright Hub, an organisationally and technically coordinated “hub” solution to interoperable rights metadata, and from Mark Vermaat of SilverMouse regarding content identifiers. The programme was broad in scope, with sessions from industry experts Carol Owen and Sara Hill of services organisation Prime Focus, case studies from AP and ITV, industry expert Richard Wright on the realities of working with legacy formats in a file-based broadcast environment, Paul Collard on his product’s metadata interface for digitisation work, “data wrangling” at the BBC, and an insightful session on archive research practices with Matthew Butson from Getty Images. Given that the semantic web is capable of generically representing all kinds of structured metadata, current and emergent industry practices are important to MediaMixer’s future understanding of the maturity models required to engage within these industry segments.


MediaMixer partner EURECOM not only contributes directly to future media technology, co-authoring the Media Fragments URI specification and working on media fragments and semantic multimedia implementations, but has been demonstrating the technologies’ value at leading computing conferences, namely the World Wide Web Conference (WWW) 2013 this past May in Brazil as well as the Extended Semantic Web Conference (ESWC) 2013 shortly thereafter in France.

At the Linked Media (LiMe2013) workshop at WWW2013, EURECOM presented “Enriching Media Fragments with Named Entities for Video Classification”. In this work, we propose a framework which classifies video according to textual features, such as named entities spotted in subtitles, and temporal features, such as the duration of the media fragments where entities are spotted. We implement four machine learning algorithms for multi-class classification: Logistic Regression (LR), K-Nearest Neighbour (KNN), Naive Bayes (NB) and Support Vector Machine (SVM). The results show that this approach is promising in the context of online video classification.
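As a rough illustration of the feature design described above, here is a minimal sketch (the entity names, timings and vocabulary are invented for illustration; the actual framework spots entities automatically from subtitles) combining textual and temporal features into a single feature vector that any of the four classifiers could consume:

```python
from collections import Counter

def fragment_features(entities, start, end, vocabulary):
    """Combine textual features (counts of named entities from a fixed
    vocabulary, as spotted in the fragment's subtitles) with a temporal
    feature (the media fragment's duration in seconds)."""
    counts = Counter(entities)
    textual = [counts[e] for e in vocabulary]   # one count per vocabulary entity
    temporal = [end - start]                    # fragment duration in seconds
    return textual + temporal

# Hypothetical fragment: entities spotted between seconds 12.0 and 47.5.
vocab = ["Barack Obama", "NASA", "Mars"]
x = fragment_features(["NASA", "Mars", "NASA"], start=12.0, end=47.5, vocabulary=vocab)
print(x)   # [0, 2, 1, 35.5]
```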

EURECOM also organised a workshop at WWW2013 entitled “Web of Linked Entities (WoLE)”, on transparently connecting the World Wide Web (WWW) and the Giant Global Graph (GGG) using methods from Information Retrieval (IR), Natural Language Processing (NLP) and Database Systems (DB). The workshop attracted 23 submissions from all over the world, from which the programme committee selected 9 papers. It also featured two keynotes: Peter Mika (Yahoo!) on “Entity Search on the Web” and Icaro Medeiros (Globo.com) on “Linked Data at Globo”. Finally, David Graus won the workshop challenge with the SemanticTED demo.

At another workshop, on real-time analysis and mining of social streams (RAMSS), EURECOM gave an invited talk, “MediaFinder: Collect, Enrich and Visualize Media Memes Shared by the Crowd”, introducing the MediaFinder tool, which finds media content shared on social networks and automatically organises it into stories. See also the slides at http://www.slideshare.net/troncy/mediafinder-collect-enrich-and-visualize-media-memes-shared-by-the-crowd

EURECOM also participated in the MSM (Making Sense of Microposts) challenge, sponsored by eBay, and finished 2nd out of the 22 candidates with its NERD tool. See also the presentation at http://www.slideshare.net/giusepperizzo/learning-with-the-web-spotting-named-entities-on-the-intersection-of-nerd-and-machine-learning

At ESWC2013, EURECOM presented the demo paper entitled “Tracking and Analyzing The 2013 Italian Election”, which builds on the MediaFinder tool and makes it possible to repeat the same query over a longer period of time in order to monitor a particular event and its dynamics. You can relive the demo at http://mediafinder.eurecom.fr/story/elezioni2013, where the Italian election is used as a use case for visualising how social media content was shared during the 7 days following election day.


The MediaMixer project is proud to be a sponsor of the ACM Multimedia 2013 conference (ACMMM13). At one of the leading multimedia research events, MediaMixer is actively looking to promote the innovative semantic multimedia and media fragment technology that it believes can help build a new generation of multimedia systems.

During ACMMM13, MediaMixer will announce the winner of its ACM Grand Challenge, highlighting the role of automated segmentation and annotation of video in enabling an industry partner (VideoLectures.NET) to offer a new service around its e-learning video assets (VideoLecturesMashup).

MediaMixer will also be exhibiting its use case demonstrators in the conference industry demos area during the three days of ACMMM13, highlighting how MediaMixer-promoted technology not only enables mashups of e-learning video (VideoLecturesMashup) but also access to media fragments for newsrooms and the negotiation of digital rights for media assets between media owners and media consumers.



The W3C recommendation on the Media Fragment URI specification is an important part of the technological solution promoted by MediaMixer. It provides a standardised manner of referring to spatial and temporal fragments of a media item, with the goal that this can be used across software systems in a media workflow. MediaMixer has now published links on its community portal to two background presentations on Media Fragments.

Material on Media Fragments will also be updated after the summer and will be the subject of a dedicated webinar by MediaMixer – stay tuned!
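For a flavour of the specification in the meantime: a Media Fragment URI appends a fragment such as #t=30,60 (seconds 30 to 60) or #xywh=160,120,320,240 (a pixel region) to the media item’s URI. A minimal sketch, using a hypothetical video URI and handling only the plain #t=start,end temporal form (the specification also allows open-ended intervals, npt/clock time formats and other dimensions):

```python
from urllib.parse import urlsplit, parse_qsl

def temporal_fragment(uri, start, end):
    """Build a Media Fragment URI addressing the interval [start, end) in seconds."""
    return f"{uri}#t={start},{end}"

def parse_temporal(uri):
    """Extract (start, end) seconds from a '#t=start,end' fragment, if present."""
    fragment = urlsplit(uri).fragment            # e.g. "t=30,60"
    for name, value in parse_qsl(fragment):      # fragment is name=value pairs
        if name == "t":
            start, _, end = value.partition(",")
            return float(start), float(end)
    return None

uri = temporal_fragment("http://example.org/video.mp4", 30, 60)
print(uri)                  # http://example.org/video.mp4#t=30,60
print(parse_temporal(uri))  # (30.0, 60.0)
```

The point of the standard is exactly that both ends of a workflow – a server trimming the media, or a client seeking within it – can interpret the same URI without prior coordination.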


The recent Metadata Developer Network Workshop organised by the EBU in Geneva on 5-6 June 2013 included a talk by Roberto García González of the University of Lleida. The subject was “Facilitating media fragment mixing and its rights management using semantic technology” (slides online).

Our finding from the event was that broadcasters are progressively maturing towards applying semantic technology: they perceive opportunities there but are taking slow steps to avoid internal technological disruption. Many are now at the stage of working with structured data in the form of XML, with mappings via XSLT and the like.

MediaMixer will continue to work to encourage gradual adoption of the new technology through providing informative materials, highlighting use cases and preparing proof of concept demonstrators.


In the context of the MediaMixer use case on mashing up e-learning video, the technical partner CERTH and the use case partner JSI (for VideoLectures.net) have produced an online demonstration of technology necessary for the first step: creating fragments of the learning video assets and detecting concepts of relevance in those fragments.

Check the demonstrator at http://160.40.50.201/mediamixer/demonstrator.html and give us your feedback and comments on the technology at our forum on fragment creation and description.

*Owing to the nature of the VideoLectures.NET dataset (lectures), the demo does not show the full potential of the video shot segmentation and concept detection process, but it is still a nice example.
Do you have datasets which could be more suitable for creating media fragments with video analysis techniques? Become our community member and get to know MediaMixer technologies!
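To illustrate what the fragment creation step yields, here is a sketch (the video URI and shot-boundary timings are invented for illustration; the demonstrator detects boundaries automatically) turning detected shot boundaries into temporal Media Fragment URIs, one per shot:

```python
def shots_to_fragment_uris(video_uri, boundaries):
    """Turn a sorted list of shot-boundary times (seconds) into Media
    Fragment URIs, one '#t=start,end' fragment per consecutive shot."""
    return [f"{video_uri}#t={start},{end}"
            for start, end in zip(boundaries, boundaries[1:])]

# Hypothetical shot boundaries detected at 0s, 12.5s, 40s and 71s:
for uri in shots_to_fragment_uris("http://example.org/lecture.mp4", [0, 12.5, 40, 71]):
    print(uri)
# http://example.org/lecture.mp4#t=0,12.5
# http://example.org/lecture.mp4#t=12.5,40
# http://example.org/lecture.mp4#t=40,71
```

Each resulting URI is then a stable address for one shot, ready to be annotated with the concepts detected in it.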


MediaMixer has published two new tutorials on its community portal, free to access by registered members.

They cover techniques and technologies for analysing media assets in order to determine the significant spatial and temporal regions which can be defined as separate media fragments. Each fragment can then be individually annotated with the concepts and topics it represents, using structured annotation models and ontologies to allow for semantic search and machine-automated processing.
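As a toy illustration of the annotation-then-search idea (the fragment and concept URIs below are invented stand-ins; real deployments would use RDF annotation models and ontology terms, as the tutorials describe):

```python
# Fragment URIs mapped to the sets of concept URIs annotated on them.
fragments = {
    "http://example.org/lecture.mp4#t=0,40": {
        "http://example.org/concepts/Linear_Algebra",
    },
    "http://example.org/lecture.mp4#t=40,95": {
        "http://example.org/concepts/Linear_Algebra",
        "http://example.org/concepts/Machine_Learning",
    },
}

def find_fragments(concept):
    """Return all fragment URIs annotated with the given concept URI."""
    return sorted(uri for uri, concepts in fragments.items() if concept in concepts)

print(find_fragments("http://example.org/concepts/Machine_Learning"))
# ['http://example.org/lecture.mp4#t=40,95']
```

Because the annotations target fragment URIs rather than whole videos, a search returns exactly the region of interest, not the full asset.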

The combination of these technologies allows media owners to better prepare their media for retrieval and re-use, both internally and, if desired, by external parties who may have to agree to a specified license and/or pay a fee. Future tutorials by MediaMixer will address the topics of media fragment management, rights and re-use!


MediaMixer as a project is promoting the use of innovative semantic technology for analysing, annotating, managing and re-using media assets at the fragment level to the benefit of both the media owner and the media consumer.

However, any new technology needs explanation: What technology exists for each task? Where can it be found, and how can it be used? What does it support, what does it not, and what experiences have others had with it? MediaMixer supports interested adopters on its community portal with lists of useful software and demonstrators, and can directly help community members via online forums, events and offers of knowledge and technology transfer*.

As part of this, we have just published the first version of the Core Technology Set. This document introduces the different aspects of MediaMixer technology with a state of the art, covering existing technologies and specifications, with links to further information and downloads. We cover:

  • Media Fragment specification, server playout and client playout
  • Media Fragment creation via visual analysis
  • Media Fragment description using annotation models and controlled vocabularies
  • Media Fragment rights management
  • Media Fragment asset management including fragment identification
  • Media Fragment re-use via search and retrieval in IT systems

This technology is in flux and we will update the Core Technology Set when necessary. Any registered community member (registration is free) can access the document, and if you have questions or comments on it, you can start a discussion in our What is MediaMixer? forum.



We are pleased to announce the 1st Summer School on Media Fragment Creation and Remixing in Chania, Crete, Greece on June 3-6, 2013.

The summer school aims to offer participants from all over the world – both PhD/MSc students and young researchers, and media professionals/practitioners – top-level education in the emerging area of media fragment technologies and applications (including topics such as video analysis, video annotation, semantic multimedia, social media, digital rights and multimedia applications). It will combine in-depth lectures with the opportunity to gain hands-on experience with media fragment annotation and re-use technologies, and with the effectiveness, privacy and other issues that may arise. The school will be organised in two tracks, one for academia members (PhD/MSc students or young researchers) and one for industry professionals.

Application Deadline: 31 March 2013

For further information and for applying, please visit this link!