In the context of the MediaMixer use case on mashing up e-learning video, the technical partner CERTH and the use case partner JSI (for VideoLectures.net) have produced an online demonstration of the technology needed for the first step: segmenting the learning video assets into fragments and detecting relevant concepts in those fragments.
Check out the demonstrator at http://188.8.131.52/mediamixer/demonstrator.html and give us your feedback and comments on the technology in our forum on fragment creation and description.
*Because of the nature of the VideoLectures.NET dataset (lecture recordings), the demo does not show the full potential of the video shot segmentation and concept detection process, but it is still a nice example.
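As a rough illustration of what shot segmentation involves (this is a generic baseline technique, not the demonstrator's actual method): a common approach compares colour histograms of successive frames and marks a candidate shot cut wherever the difference spikes. The histograms and threshold below are synthetic, purely for illustration.

```python
def hist_diff(h1, h2):
    """Sum of absolute bin differences between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_shot_boundaries(histograms, threshold=0.5):
    """Return frame indices where the histogram difference to the
    previous frame exceeds the threshold (a candidate shot cut)."""
    return [i for i in range(1, len(histograms))
            if hist_diff(histograms[i - 1], histograms[i]) > threshold]

# Synthetic per-frame colour histograms: two 'shots' with a cut at frame 3.
frames = [
    [0.8, 0.1, 0.1],  # shot 1
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],  # abrupt change -> shot 2 begins here
    [0.1, 0.2, 0.7],
]
print(detect_shot_boundaries(frames))  # -> [3]
```

In a real pipeline the histograms would be computed from decoded video frames, and each detected segment would then be passed to a concept detector; lecture videos, with their long static shots, yield few such cuts, which is why this dataset is a hard case for the technique.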
Do you have datasets that might be better suited to creating media fragments with video analysis techniques? Become a member of our community and get to know the MediaMixer technologies!