
1st Multitel Spring Workshop on Video Analysis – June 8, 2006

  • 10:00 Welcome and coffee
  • 10:25 Opening Remarks and Welcome

Paper Session 1: Tracking

  • 10:30 “Robust Active Contour Tracking for Real-Time Applications”, Joanna Olszewska, UCL.

The developed active-contour-based method is efficient and well suited for tracking non-rigid objects in real-time applications. Moreover, some original criteria are introduced to increase its robustness. Results are presented on soccer and video-surveillance sequences.

  • 11:00 “Collaborative Multi-Camera Tracking of Athletes in Team Sports”, Wei Du, ULg. (presentation from CVBASE’06)

I will introduce my work, a collaborative multi-camera tracking algorithm, which was presented at CVBASE’06. The general idea is to combine particle filters and belief propagation in a unified framework: a target is tracked in each camera by a dedicated particle filter, and BP is used to make the particle filters in the different cameras collaborate. In doing so, we provide a systematic approach to integrating multi-camera information, e.g. for dealing with occlusions.

  • 11:30 “Target detection and tracking using feature points”, Tom Mathes, ULg.

I will present my recent work on target detection using only interest points. This approach has two main advantages: it is robust to illumination changes and works with pan-tilt-zoom cameras. Furthermore, I will present my latest ideas and results on robust feature-point-based tracking.

  • 12:00 “Effects of Parameters Variations in Particle Filter Tracking”, Caroline Machy, Multitel.

We propose to evaluate different particle filter methods in people tracking applications. We introduce an objective metric and give results according to different parameter variations. Finally, based on our evaluations, we propose a new particle filter configuration that outperforms other current implementations.
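
As background, the basic particle-filter loop whose parameters such methods vary can be sketched as follows. This is a minimal 1-D bootstrap filter with hypothetical parameter values (`n_particles`, `motion_std`, `obs_std`), not the configuration evaluated in the talk:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              motion_std=1.0, obs_std=2.0, seed=0):
    """Minimal 1-D bootstrap particle filter (illustrative sketch only).

    Each step: predict particles with a random-walk motion model,
    weight them by a Gaussian observation likelihood, then resample.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        # Predict: diffuse particles with the motion model.
        particles = particles + rng.normal(0.0, motion_std, n_particles)
        # Update: Gaussian observation likelihood as importance weight.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(float(np.dot(weights, particles)))
        # Resample: multinomial resampling to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates
```

Varying `n_particles`, the motion noise, or the resampling scheme is exactly the kind of parameter sweep an objective tracking metric can score.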


Paper Session 2: Others

  • 13:30 “Overview of geometrical aspects to handle in the Trictrac project and proposed solutions”, Jean-Bernard Hayet, ULg.

I’ll try to review the different difficulties we have to cope with in the Trictrac project for maintaining camera/scene geometrical relations throughout long video sequences, present my latest work, and point out what I would like to investigate to improve the overall robustness.

  • 14:00 “Content-based Retrieval of Video Surveillance Scenes”, J. Meessen

A novel method for content-based retrieval of surveillance video data will be presented. The solution is based on a new and generic dissimilarity measure for discriminating video surveillance scenes. This weighted compound measure can be interactively adapted during a session in order to capture the user’s subjectivity. Upon this, a key-frame selection and a content-based retrieval system have been developed and tested on several actual surveillance sequences.

  • 15:00 “Performance evaluation of frequent events detection systems”, Xavier Desurmont, Multitel. (presentation for PETS 2006)

We describe a new algorithm that can evaluate a class of detection systems in the case of frequent events, such as people detection in a corridor or cars on a motorway. To do so, we introduce an automatic re-alignment between results and ground truth using dynamic programming. The second part of the paper describes, as an example, a practical implementation of a stand-alone system that counts people in a shopping center. Finally, we evaluate the performance of this system.
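
The re-alignment step can be illustrated with a small dynamic-programming sketch in the style of edit distance. The cost model and the `match_tol` and `gap_cost` parameters below are hypothetical choices, not the paper's actual formulation:

```python
def align_events(detected, ground_truth, match_tol=5, gap_cost=1.0):
    """Align two sorted lists of event timestamps with dynamic programming
    (an edit-distance-style sketch; parameters are hypothetical).

    Matching two events costs their scaled time difference when it is
    within `match_tol`; leaving an event unmatched costs `gap_cost`.
    Returns the list of matched (detected, ground_truth) index pairs.
    """
    n, m = len(detected), len(ground_truth)
    INF = float("inf")
    # cost[i][j]: best cost aligning detected[:i] with ground_truth[:j].
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            # Match detected[i] with ground_truth[j] if close enough in time.
            if i < n and j < m and abs(detected[i] - ground_truth[j]) <= match_tol:
                c = cost[i][j] + abs(detected[i] - ground_truth[j]) / match_tol
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = c
                    back[i + 1][j + 1] = "match"
            # Skip a detection (false alarm) or a ground-truth event (miss).
            if i < n and cost[i][j] + gap_cost < cost[i + 1][j]:
                cost[i + 1][j] = cost[i][j] + gap_cost
                back[i + 1][j] = "del"
            if j < m and cost[i][j] + gap_cost < cost[i][j + 1]:
                cost[i][j + 1] = cost[i][j] + gap_cost
                back[i][j + 1] = "ins"
    # Backtrack to recover the matched pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        move = back[i][j]
        if move == "match":
            i, j = i - 1, j - 1
            pairs.append((i, j))
        elif move == "del":
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))
```

Unmatched detections and unmatched ground-truth events then count directly as false alarms and misses for the evaluation metric.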

  • 15:30 “Crowd video analysis”, Francois-Noel Martin, Multitel.

Crowd surveillance in places such as airports and stations, and during public events in sports stadiums or theatre halls, is a tedious and boring task for security agents, yet a safety-critical one. Security operators therefore call for systems that can provide information, such as an estimate of the crowd density, or detect abnormal motions, such as congestion or scattering away from a point. The majority of existing techniques addressing these problems are based on recovering the motion field of the crowd, by optical flow or block-matching computation. I’ll try to present some of them, along with the objectives we would like to pursue in this direction.
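
As an illustration of the block-matching route to a crowd motion field, a minimal estimator might look as follows; the block and search-window sizes are hypothetical, and real systems add pyramids and regularization on top:

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Estimate a coarse motion field by exhaustive block matching
    (a sketch of the technique, with hypothetical block/search sizes).

    For each `block` x `block` patch of `prev`, find the displacement
    within +/- `search` pixels that minimises the sum of absolute
    differences (SAD) in `curr`.
    """
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = prev[y:y + block, x:x + block].astype(float)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate block falls outside the frame
                    cand = curr[yy:yy + block, xx:xx + block].astype(float)
                    sad = np.abs(patch - cand).sum()
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
            flow[by, bx] = best_dv
    return flow
```

Dense optical-flow methods recover a per-pixel field instead; block matching trades resolution for simplicity, which is often enough for density and congestion cues.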

  • 16:00 “Toward generic intelligent knowledge extraction from video and audio: the EU-funded CARETAKER project”, Cyril Carincote, Multitel. (presentation from ICDP 2006)

The CARETAKER project, a 30-month project that has just kicked off, aims at studying, developing and assessing multimedia knowledge-based content analysis, knowledge extraction components, and metadata management sub-systems in the context of automated situation awareness, diagnosis and decision support. More precisely, CARETAKER will focus on the extraction of structured knowledge from large multimedia collections recorded over networks of cameras and microphones deployed in real sites. Beyond surveillance and safety issues, the produced audio-visual streams could represent a useful source of information, if stored and automatically analyzed, for urban/environment planning, resource optimization, disabled/elderly person monitoring, ...

  • 16:00 Discussion & Closing