MediaMixer partner CERTH has released a video showing a demo of video shot segmentation and visual concept detection applied to lecture videos. All video processing was performed using automatic analysis techniques. The results can support the subsequent annotation and re-use of the video material, as in the VideoLecturesMashup use case.

See the video at https://www.youtube.com/watch?v=S-Xt2bw3LkA


In the context of the MediaMixer use case on mashing up e-learning video, the technical partner CERTH and the use case partner JSI (for VideoLectures.net) have produced an online demonstration of the technology needed for the first step: segmenting the learning video assets into fragments and detecting concepts of relevance in those fragments.
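Shot segmentation of this kind is commonly based on detecting abrupt changes between consecutive frames, for example by comparing colour histograms. The following is a minimal, illustrative sketch using synthetic histogram data and a hypothetical threshold; it is not CERTH's actual implementation:

```python
# Illustrative shot-boundary detection via colour-histogram differences.
# Frames are represented here as pre-computed, normalised histograms;
# in a real pipeline these would come from decoded video frames.

def hist_diff(h1, h2):
    """Sum of absolute bin-wise differences between two normalised histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_shot_boundaries(histograms, threshold=0.5):
    """Return frame indices at which a new shot is assumed to start."""
    boundaries = []
    for i in range(1, len(histograms)):
        if hist_diff(histograms[i - 1], histograms[i]) > threshold:
            boundaries.append(i)
    return boundaries

# Synthetic example: three visually similar frames, then an abrupt change.
frames = [
    [0.70, 0.20, 0.10],
    [0.68, 0.22, 0.10],
    [0.69, 0.21, 0.10],
    [0.10, 0.20, 0.70],  # abrupt content change -> shot boundary
]
print(detect_shot_boundaries(frames))  # [3]
```

Each detected boundary marks the start of a new fragment, which can then be passed to a concept detector and annotated independently.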

Try the demonstrator at http://160.40.50.201/mediamixer/demonstrator.html and give us your feedback and comments on the technology in our forum on fragment creation and description.

*Owing to the nature of the VideoLectures.NET dataset (lecture recordings), the demo does not show the full potential of the video shot segmentation and concept detection process, but it is still a nice example.
Do you have datasets that might be better suited to creating media fragments with video analysis techniques? Become a member of our community and get to know the MediaMixer technologies!