CERTH has just publicly released version 1.2 of its software for the automatic temporal segmentation of videos. This release extends the previous versions by integrating algorithms for video scene segmentation and keyframe selection (up to the 5 most representative keyframes of each scene are identified), on top of the segmentation into shots that the software already performed. The new algorithmic additions do not affect the processing speed, which remains at least 2-3 times faster than real time (depending on the processing capability of the graphics card) for the entire video analysis chain (i.e., shot segmentation, scene segmentation and keyframe selection).
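The released software combines global and local visual descriptors (see the ICASSP 2014 paper below); as a much simpler illustration of the general idea behind shot segmentation, here is a hedged toy sketch that flags a shot boundary wherever the colour-histogram distance between consecutive frames exceeds a threshold. The frame representation and threshold are assumptions made for the example, not part of the CERTH implementation.

```python
# Illustrative sketch only: detects shot boundaries from intensity-histogram
# differences between consecutive frames. A "frame" here is just a flat list
# of pixel intensities (0-255), an assumption for this toy example.

def histogram(frame, bins=8):
    """Quantise pixel intensities into a normalised histogram."""
    counts = [0] * bins
    for px in frame:
        counts[min(px * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def shot_boundaries(frames, threshold=0.5):
    """Return frame indices where the histogram distance exceeds the threshold."""
    boundaries = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        dist = sum(abs(a - b) for a, b in zip(prev, cur)) / 2  # L1/2 distance
        if dist > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

# Two synthetic "shots": five dark frames followed by five bright frames.
frames = [[10] * 100] * 5 + [[240] * 100] * 5
print(shot_boundaries(frames))  # → [5]
```

Real shot detectors also have to cope with gradual transitions (fades, dissolves) and camera motion, which is why the released software uses richer descriptors than a single histogram.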

The development and release of this software was supported by EU research projects LinkedTV (http://www.linkedtv.eu/) and MediaMixer (http://www.mediamixer.eu/).

The software is available for download at http://mklab.iti.gr/project/video-shot-segm

Related technical papers:

– E. Apostolidis, V. Mezaris, “Fast Shot Segmentation Combining Global and Local Visual Descriptors”, Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, May 2014. (available at http://www.iti.gr/~bmezaris/publications/icassp14_preprint.pdf)

– P. Sidiropoulos, V. Mezaris, I. Kompatsiaris, H. Meinedo, M. Bugalho, I. Trancoso, “Temporal video segmentation to scenes using high-level audiovisual features”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 8, pp. 1163-1177, August 2011. (available at http://www.iti.gr/~bmezaris/publications/csvt11_preprint.pdf)

First, an acknowledgement that we successfully held our Winter School on Multimedia Processing and Applications at the beginning of this year, co-located with the MMM2014 conference in Dublin. Thirty international attendees, mostly PhD students and young researchers, were present for a set of technology-centred presentations by MediaMixer experts and industry-centred presentations by research project leaders in areas such as broadcasting, digital preservation and the Internet of Things.

Siriwat Kasamwattanarote, a student from the National Institute of Informatics, Tokyo, Japan, won the school's Best Poster award for work on “Tell me about TV commercials of this product”.

The Winter School talks covered the latest technological developments in the area of multimedia processing (media analysis, media annotation, media rights management) and of emerging multimedia applications (in the Sensor Web, audiovisual archives, TV broadcasting, digital preservation and e-learning domains). We didn’t want to keep this purely for the 30 attending students, so all of the talks are now published online courtesy of VideoLectures.NET.

MediaMixer will hold a half-day tutorial on Re-mixing Media on the Web at this year’s WWW2014 conference on April 7, 2014 in Seoul, South Korea. This tutorial will address the state of the art in online media analysis, annotation and linking, reflecting that a number of Web-based specifications and technologies are now emerging which, in combination, can enable media owners to manage and re-use their online media at the fragment level.

To join, just include the Tutorial in your conference registration at http://www2014.kr/registration/

For more information about the topics and speakers at the tutorial, see http://mediamixer.eu/tutorial

MediaMixer will give its next LIVE Webinar – online at http://mediamixer.eu/live – on February 3rd at 11:30 CET on the topic of Semantic Management of your Media Fragments Rights.

This webinar continues the MediaMixer semantics-based media workflow. Once media has been fragmented and the fragments semantically annotated, it is time to manage them. Digital Asset Management (DAM) solutions empowered by semantic technologies help manage the asset lifecycle at the fragment level, including copyright management to facilitate reuse and exploitation. MediaMixer proposes the use of a copyright ontology based on semantic technologies, which models access control policies and makes it possible to automate licence checks and to filter the available content against its terms of use. Here, semantic technologies make it possible to go beyond Digital Rights Management and, because copyright can be modelled through the whole media value chain, to manage media rights from creation or remix to end-user consumption.
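As a rough illustration of what an automated licence check can do, here is a hedged sketch in which each fragment carries machine-readable terms of use and a query is filtered against them. The rule vocabulary (commercial use, territories) and fragment identifiers are invented for this example; the actual MediaMixer work models such terms in a semantic copyright ontology rather than plain dictionaries.

```python
# Hypothetical sketch: filter media fragments by their terms of use.
# The licence vocabulary here is invented for illustration only.

fragments = [
    {"id": "frag-1", "licence": {"commercial": True,  "territories": {"EU", "US"}}},
    {"id": "frag-2", "licence": {"commercial": False, "territories": {"EU"}}},
    {"id": "frag-3", "licence": {"commercial": True,  "territories": {"US"}}},
]

def usable(fragment, commercial, territory):
    """Check a single fragment's terms against the intended use."""
    terms = fragment["licence"]
    if commercial and not terms["commercial"]:
        return False
    return territory in terms["territories"]

def filter_fragments(fragments, commercial, territory):
    """Return the ids of fragments whose terms permit the intended use."""
    return [f["id"] for f in fragments if usable(f, commercial, territory)]

print(filter_fragments(fragments, commercial=True, territory="EU"))  # → ['frag-1']
```

Encoding the terms as data rather than prose is what makes the check automatable; an ontology-based approach additionally lets the rules follow the rights through derivative works and remixes.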

Join us on February 3rd, 1130 CET, live with Q&A via Twitter and TitanPad at http://mediamixer.eu/live

A MediaMixer submission together with the team at VideoLectures.NET, entitled “Video Lectures Mashup – remixing learning materials for topic-centred learning across collections”, will be presented on April 23-25 at the OCW Global Conference 2014 in Ljubljana, Slovenia.

The OCW Consortium global conference is the annual opportunity for researchers, practitioners, policy makers and educators to deeply explore open education and its impact on global education. Conference participants will hear from global thought leaders in open education and have the opportunity to share ideas and practices and to discuss issues important to the future of education worldwide. Sessions cover new developments in open education, research results, innovative technology, policy implementation, best practices and practical solutions to challenges facing education around the world.

MediaMixer’s presentation will look at using media fragment and semantic media technology to help make it easier for learners to browse topics across video lectures from different collections, and show our prototype with VideoLectures.NET content.

MediaMixer promotes technology for fragmenting media assets and making it possible to annotate those fragments for subsequent retrieval and re-mixing in new contexts. We believe this can help media owners to derive more value from their assets by making those assets more useful for media consumers.

Our last live Webinar addressed how to create fragments out of larger media assets such as videos. Vasileios Mezaris of the Greek research centre CERTH discussed a set of video processing techniques for media fragment creation and annotation. These include techniques for the temporal segmentation of video into shots and scenes, the re-detection of appearances of specific objects throughout the video, and the detection of concepts that describe the temporal video fragments. Such techniques are the first step towards converting raw video material into meaningful media fragments.


MediaMixer partner JSI will present the project demos on e-learning, news and media rights management this week during a fully booked workshop at Online Educa (Berlin, 4-6 December).

The Wednesday afternoon workshop “Artificial Intelligence Methods for Online-Based Education” has as its aims:

  • To present state-of-the-art machine translation methods and tools
  • To present state-of-the-art user profiling and aggregation methods and tools
  • To present state-of-the-art cross lingual knowledge technologies
  • To present success stories from similar domains
  • To discuss future directions and potential projects

As an outcome of the workshop, participants will understand how new technologies offer universities and academic communities new solutions to old and familiar problems. Higher education is trying to catch up with the changes of the digital age and the internet, so the main goal of this half-day event is to clarify how emerging technologies (machine learning, machine translation, text mining, the semantic web, open access, academic video journals, free video libraries, open lecture capture systems, OER and more) can change and help co-create emerging trends in publishing, curricula, designation, filtration, validation and research in European academia and beyond.

For non-attendees, the demo videos can be seen at http://community.mediamixer.eu/demonstrators

MediaMixer co-ordinator Lyndon Nixon presented one possible MediaMixer future of e-learning at the Internet of Education 2013 conference in Ljubljana, Slovenia (organised by project partner Jozef Stefan Institute). In this future, learning videos are mashed up to generate new learning offers for online learners, and such re-mixes could be the basis for new MOOC offers that are more flexible and personalised to individual learners. Dr Nixon noted that as e-learning materials become increasingly video-based, there are new requirements on how to retrieve relevant video by topic and to access it in terms of its parts (fragments), which is especially relevant for learners on the go or on mobile devices.

MediaMixer technology is a solution for this, as shown by our use case with VideoLectures.NET, the VideoLecturesMashup. This use case has been described previously, and the video of the demonstrator is also online.

Our slides on MediaMixing for e-learning are available online.

Project coordinator Lyndon Nixon will speak at the Internet of Education conference in Ljubljana, Slovenia on November 11, 2013 (see http://www.k4all.org/Internet_of_Education/).

The already fully booked event will bring together researchers and policy makers to explore methods for improving the effectiveness of video-based MOOC education.

Dr Nixon will highlight the MediaMixer offer of semantic media and media fragments, concretely demonstrating its benefits in an extension of the VideoLectures.NET platform in which learning videos can be explored by topic, across collections, in the form of sequences of different video fragments which are annotated with the same terms.
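To make the "explore by topic, across collections" idea concrete, here is a hedged sketch of the underlying data structure: fragments from different lecture collections carry topic annotations, and an inverted index over those topics yields a cross-collection playlist for a chosen term. The collection names, topics and fragment timings are invented for illustration; the actual VideoLectures.NET prototype uses richer semantic annotations.

```python
# Hypothetical sketch: an inverted index from topic terms to annotated
# video fragments, enabling topic-centred browsing across collections.
from collections import defaultdict

fragments = [
    {"video": "lectureA", "collection": "CollX", "start": 0,   "end": 120, "topics": {"machine translation"}},
    {"video": "lectureB", "collection": "CollY", "start": 300, "end": 480, "topics": {"machine translation", "text mining"}},
    {"video": "lectureC", "collection": "CollY", "start": 60,  "end": 200, "topics": {"text mining"}},
]

def build_index(fragments):
    """Map each topic term to the fragments annotated with it."""
    index = defaultdict(list)
    for frag in fragments:
        for topic in frag["topics"]:
            index[topic].append(frag)
    return index

index = build_index(fragments)

# A "mashup" for one topic: a sequence of fragments drawn from any collection.
playlist = [(f["video"], f["start"], f["end"]) for f in index["machine translation"]]
print(playlist)  # → [('lectureA', 0, 120), ('lectureB', 300, 480)]
```

The key point is that the unit of retrieval is the annotated fragment, not the whole video, which is what lets a learner jump between collections at the level of a single topic.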

The MediaMixer project is pleased to release a demo of semantic technologies facilitating copyright management in the context of User Generated Content.

A key issue for the media industry these days, in addition to unauthorised media reproduction and distribution, is controlling the reuse of media in user generated content (UGC). To solve this issue and avoid publishing content that infringes copyright, UGC services like YouTube offer mechanisms to detect the unauthorised reuse of media and give rights holders the choice to monetise that use rather than take down the content. However, the full potential of this new revenue stream is at risk if copyright subtleties are not managed appropriately; for instance, the same song may be owned by different rights holders depending on the territory.

What is required is a scalable decision support system capable of integrating digital rights languages, like DDEX or ODRL, together with contracts or policies, like talent contracts or business policies. MediaMixer semantic technologies provide a common and expressive framework in which all these copyright information sources can be represented together.
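As a minimal sketch of the territory problem mentioned above: the same work can have a different rights holder per territory, so any reuse decision must first resolve the holder for the viewer's territory. The work identifiers, holder names and territories below are invented for illustration; the demo itself uses an ontology-based representation rather than a lookup table.

```python
# Hypothetical sketch: territory-dependent rights holder resolution.
# All names and territory codes are invented for this example.

rights = {
    ("song-42", "US"): "LabelA",
    ("song-42", "DE"): "LabelB",
    ("song-42", "FR"): "LabelB",
}

def rights_holder(work, territory):
    """Resolve the rights holder for a work in a given territory, if any."""
    return rights.get((work, territory))

def can_monetise(work, territory):
    # No registered holder means the reuse cannot be cleared automatically.
    return rights_holder(work, territory) is not None

print(rights_holder("song-42", "DE"))  # → LabelB
print(can_monetise("song-42", "JP"))   # → False
```

A semantic representation generalises this: instead of exact-match lookups, licence checks can reason over territory hierarchies, time windows and chains of transferred rights.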

See the demo video.

Test the online prototype.