
2nd Multitel Spring workshop on video analysis – July 6, 2007

  • 10:00 Welcome and coffee
  • 10:25 Opening Remarks and Welcome

Paper Session 1

  • 10:30 “IRMFocus: a new focusing material for the improvement of MRI image quality”, Agnes Lapeyronnie, Multitel

The purpose of this project is to develop a new material that would improve the quality of magnetic resonance images. The concept of metamaterials will be used for this aim, as these recently introduced materials appear to have very interesting focusing properties. Multitel's participation in the project is threefold. It brings its know-how to the characterization and quantification of the disturbances that appear in MRI images when a metal structure is present. It contributes to the design of the filters and algorithms for the prediction (simulation), correction and calibration of the focusing effects of the metamaterials on the images. Lastly, it will also handle the signal processing associated with the electromagnetic problem of designing a multiple-antenna array, and its effect on the raw image provided by the MRI scanner.

  • 11:00 “Progressive Learning for Interactive Surveillance Scenes Retrieval”, J. Meessen, Multitel (from IEEE VS 2007 during CVPR)

This presentation tackles the challenge of interactively retrieving visual scenes within surveillance sequences acquired with a fixed camera. Contrary to today's solutions, we assume that no a priori knowledge is available, so the system must progressively learn the target scenes through interactive labelling of a few frames by the user. The proposed method is based on very low-cost feature extraction and integrates relevance feedback, multiple-instance SVM classification and active learning. Each of these three steps runs iteratively over the session and takes advantage of the progressively increasing training set. Repeatable experiments on both simulated and real data demonstrate the efficiency of the approach and show how it reaches high retrieval performance.
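
A minimal sketch of the iterative loop described in the abstract, with a standard SVM standing in for the multiple-instance SVM and an uncertainty-based query standing in for the paper's active-learning step; the data, features and all names are illustrative assumptions, not the authors' code:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical low-cost features for 200 surveillance frames (synthetic data).
features = rng.normal(size=(200, 8))
true_labels = (features[:, 0] + features[:, 1] > 0).astype(int)  # stand-in for the user's notion of relevance

# Start from a few user-labelled frames of each class.
labeled = list(np.where(true_labels == 1)[0][:2]) + list(np.where(true_labels == 0)[0][:2])

for round_ in range(5):                                   # one iteration per feedback round
    clf = SVC(kernel="rbf")
    clf.fit(features[labeled], true_labels[labeled])      # retrain on the growing training set

    # Active learning: query the frame the classifier is least certain about.
    margins = np.abs(clf.decision_function(features))
    unlabeled = [i for i in range(len(features)) if i not in labeled]
    query = min(unlabeled, key=lambda i: margins[i])
    labeled.append(query)                                 # simulated relevance feedback by the user

    ranked = np.argsort(-clf.decision_function(features)) # frames ranked by predicted relevance
    print(f"round {round_}: top-5 retrieved frames {ranked[:5]}")
```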

Paper Session 2: Tracking

Model learning and tracking are two important topics in computer vision. While there are many applications where one of them is used to support the other, there are currently only a few where both aid each other simultaneously. In this work, we seek to incrementally learn a graphical model from tracking and to simultaneously use whatever has been learned to improve the tracking in the next frames. The main problem encountered in this situation is that the current intermediate model may be inconsistent with future observations, creating a bias in the tracking results. We propose an "uncertain model" that explicitly accounts for such uncertainties by representing relations by an appropriately weighted sum of informative (parametric) and uninformative (uniform) components. The method is completely unsupervised and operates in real time.
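
The weighted sum of informative and uninformative components can be read as p(x) = w·N(x; μ, σ) + (1 − w)·U(x). A small numeric sketch, where the rule for choosing the weight w from the number of supporting observations is an assumption made for illustration, not the paper's formula:

```python
import numpy as np

def uncertain_likelihood(x, mu, sigma, n_obs, support=10.0, n0=5.0):
    """Mixture of an informative Gaussian and an uninformative uniform term.

    The weight of the informative part grows with the number of observations
    n_obs supporting the learned relation (the rule n/(n+n0) is illustrative).
    """
    w = n_obs / (n_obs + n0)
    gauss = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    uniform = 1.0 / support
    return w * gauss + (1.0 - w) * uniform

# Early in tracking (few observations) the model stays close to uniform,
# so an observation that disagrees with the intermediate model is not heavily penalised.
print(uncertain_likelihood(x=3.0, mu=0.0, sigma=1.0, n_obs=2))
print(uncertain_likelihood(x=3.0, mu=0.0, sigma=1.0, n_obs=200))
```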

  • 12:00 “Multi-Cue Tracking by Interacting Processes with Linked Hidden Markov Models”, Wei Du, ULg.

This paper presents a novel approach to integrating multiple cues in visual tracking. We perform tracking in different cues by interacting processes. Each process is represented by a Hidden Markov Model, and these parallel processes are arranged in a chain topology by undirected links. The resulting Linked Hidden Markov Models naturally allow the use of two inference algorithms, particle filtering and Belief Propagation. By combining them in a unified framework, a target is tracked in each cue by a particle filter, and the particle filters in different cues interact by exchanging messages. Our examples selectively integrate four visual cues: color, edges, motion and contours. The general framework of our approach allows a customized combination of different cues in different situations. We empirically demonstrate that the order of the cues along the chain is inconsequential, and that our approach is superior to other approaches such as Independent Integration and Hierarchical Integration in terms of flexibility and robustness.
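
A drastically simplified 1-D illustration of the idea of per-cue particle filters that exchange messages along a chain; the dynamics, likelihood models and the form of the messages are assumptions for the sketch, not the paper's formulation:

```python
# Each "cue" runs its own particle filter; after its own measurement update,
# a cue re-weights its particles towards the neighbouring cues' estimates,
# which plays the role of the messages exchanged along the chain.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_cues, n_steps = 200, 3, 20   # e.g. color, edges, motion
true_pos = 0.0

particles = np.zeros((n_cues, n_particles))
for t in range(n_steps):
    true_pos += 0.5                                             # target moves
    estimates = particles.mean(axis=1)                          # previous per-cue estimates
    for c in range(n_cues):
        particles[c] += 0.5 + rng.normal(0, 0.3, n_particles)   # prediction step
        meas = true_pos + rng.normal(0, 0.5)                    # noisy cue-specific measurement
        w = np.exp(-0.5 * ((particles[c] - meas) / 0.5) ** 2)   # cue likelihood

        # Message passing along the chain: stay consistent with neighbouring cues.
        for nb in (c - 1, c + 1):
            if 0 <= nb < n_cues:
                w *= np.exp(-0.5 * ((particles[c] - estimates[nb]) / 1.0) ** 2)

        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)    # resampling
        particles[c] = particles[c][idx]
    print(f"t={t:2d} true={true_pos:5.2f} estimates={particles.mean(axis=1).round(2)}")
```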

Lunch

Paper Session 3: Performance Analysis

  • 13:30 “Performance evaluation of tracking systems using Viterbi”, Xavier Desurmont, Multitel. (from SPIE 2007)

We describe a new algorithm for evaluating tracking systems. The basic idea is to find the best interpretation of the results according to the ground truth. We first present the set of all possible interpretations, and then an optimized method to build the best one without testing all the possibilities, which would be computationally too expensive.
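
One way to read this is as a Viterbi-style dynamic program over frames, where each state is a candidate assignment of system tracks to ground-truth tracks and identity switches are penalised, so the best interpretation is found without enumerating all sequences. The sketch below implements such a dynamic program under those assumptions; the scoring and the toy data are not taken from the paper:

```python
import numpy as np

def viterbi_best_interpretation(frame_scores, switch_penalty=1.0):
    """frame_scores[t][s] = score of assignment hypothesis s at frame t."""
    n_frames, n_states = frame_scores.shape
    dp = np.full((n_frames, n_states), -np.inf)
    back = np.zeros((n_frames, n_states), dtype=int)
    dp[0] = frame_scores[0]
    for t in range(1, n_frames):
        for s in range(n_states):
            # Keep the previous hypothesis' score, minus a penalty if it changes.
            trans = dp[t - 1] - switch_penalty * (np.arange(n_states) != s)
            back[t, s] = int(np.argmax(trans))
            dp[t, s] = frame_scores[t, s] + trans[back[t, s]]
    # Backtrack the best sequence of assignments.
    path = [int(np.argmax(dp[-1]))]
    for t in range(n_frames - 1, 0, -1):
        path.append(back[t, path[-1]])
    return list(reversed(path)), float(dp[-1].max())

# Toy example: 2 candidate assignments scored over 5 frames.
scores = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.7], [0.1, 0.9], [0.2, 0.8]])
print(viterbi_best_interpretation(scores))   # -> ([0, 0, 1, 1, 1], 3.1)
```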

  • 14:00 “An Object-based Comparative Methodology for Motion Detection based on the F-Measure”, Neda Lazarevic, INRETS.

The majority of visual surveillance algorithms rely on effective and accurate motion detection. However, most evaluation techniques described in the literature do not address the complexity and range of the issues which underpin the design of a good evaluation methodology. Here we explore the problems associated with both optimising the operating point of a motion detection algorithm and the objective performance comparison of competing algorithms. In particular, we develop an object-based approach based on the F-Measure, a single-valued ROC-like measure which provides a straightforward mechanism for both optimising and comparing motion detection algorithms. Despite the advantages over pixel-based ROC approaches, a number of important issues associated with parameterising the evaluation algorithm need to be addressed.
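
For reference, the F-Measure is the harmonic mean of precision and recall, F = 2PR/(P + R); a minimal object-level computation (the counts below are made up for illustration):

```python
def f_measure(true_positives, false_positives, false_negatives, beta=1.0):
    """F-Measure from object-level counts; beta weights recall vs. precision."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. 40 correctly detected objects, 10 false detections, 5 missed objects
print(f_measure(40, 10, 5))  # ~0.842
```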

Paper Session 4: On-going projects

  • 14:30 “Real-time Agent-based Data Warehousing for Video Monitoring Systems”, Cyril Carincotte, Multitel.

Scientific advances in the development of reliable processing algorithms now allow various embedded vision-based applications in distributed systems. Nevertheless, the efficient integration and management of the analysis results in these distributed environments are currently barely tackled. To address this issue, we propose to use a multi-dimensional database (data warehouse) with distributed features to provide a central view of the data shared by all the sensors of the network. We also propose to enhance the possibilities of this kind of system by delegating the intelligence to autonomous and reactive entities exhibiting highly adaptive behaviours, namely "agents". In this context, agents can be seen as specialized applications, able to move across the network and work on dedicated sets of data related to their core domain. We describe the motivation, design and implementation of such a system; a preliminary evaluation and an illustration with a traffic monitoring application are also provided.
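
A toy illustration of the agent idea, in which a mobile agent visits the data held at several sensor nodes and aggregates only the records of its own domain; the node contents, domains and the aggregation task are invented for the example:

```python
# Hypothetical per-node records, as they might sit in a distributed data warehouse.
records = {
    "camera_1": [{"domain": "traffic", "vehicles": 12}, {"domain": "intrusion", "alarms": 0}],
    "camera_2": [{"domain": "traffic", "vehicles": 7}],
    "camera_3": [{"domain": "traffic", "vehicles": 21}, {"domain": "intrusion", "alarms": 1}],
}

class CountingAgent:
    """An agent specialized in one domain, accumulating a single statistic."""
    def __init__(self, domain, field):
        self.domain, self.field, self.total = domain, field, 0

    def visit(self, node_records):
        # Work only on the subset of data related to the agent's core domain.
        self.total += sum(r[self.field] for r in node_records if r["domain"] == self.domain)

agent = CountingAgent("traffic", "vehicles")
for node, data in records.items():
    agent.visit(data)          # the agent "moves" from node to node across the network
print("vehicles observed network-wide:", agent.total)   # 40
```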

  • 15:00 “Hierarchical Integration of Local 3D Features for Probabilistic Pose Recovery”, Renaud Detry, ULg.

This paper presents a 3D object representation framework. We develop a hierarchical model based on probabilistic correspondences and probabilistic relations between 3D visual features. Features at the bottom of the hierarchy are bound to local observations. Pairs of features that present strong geometric correlation are iteratively grouped into higher-level meta-features that encode probabilistic relative spatial relationships between their children. The model is instantiated by propagating evidence up and down the hierarchy using a Belief Propagation algorithm, which infers the pose of high-level features from local evidence and reinforces local evidence from globally consistent knowledge. We demonstrate how to use our framework to estimate the pose of a known object in an unknown scene, and provide a quantitative performance evaluation on synthetic data.
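
As a rough intuition for the bottom-up step, each observed child feature can "vote" for the pose of its parent meta-feature via its learned relative offset, and the votes are fused according to observation confidence. The 2-D, translation-only sketch below illustrates only that intuition; the paper's actual inference runs Belief Propagation over full poses, and all names and numbers here are assumptions:

```python
import numpy as np

# Learned child-to-parent offsets (hypothetical meta-feature with two children).
child_offsets = {"feature_a": np.array([-1.0, 0.0]),
                 "feature_b": np.array([0.5, 1.0])}

# Local observations: (observed position, confidence).
observations = {"feature_a": (np.array([3.9, 2.1]), 0.8),
                "feature_b": (np.array([5.6, 3.0]), 0.5)}

votes, weights = [], []
for name, (pos, conf) in observations.items():
    votes.append(pos - child_offsets[name])   # each child votes for a parent pose
    weights.append(conf)

# Fuse the votes, weighting by how much each local observation is trusted.
parent_pose = np.average(votes, axis=0, weights=weights)
print("estimated meta-feature pose:", parent_pose)
```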

  • 15:30 “Computer-aided Diagnosis of Pulmonary Embolism in Opacified CT Images”, Raphael Sebbe, FPMs.

Pulmonary embolism (PE) is an extremely common and highly lethal condition that is a leading cause of death in all age groups. Over the past 10 years, computed tomography (CT) scanners have gained acceptance as a minimally invasive method for diagnosing PE. In this manuscript, a framework for computer-aided diagnosis of PE in contrast-enhanced CT images is presented. It combines a method for segmenting the pulmonary arteries (PA), emboli detection methods, and a scheme for evaluating their performance. The segmentation of the PA supports one of the clot detection methods and is carried out through a region growing method that makes use of a priori knowledge of vessel topology. Two different approaches to clot detection are proposed: the first detects clots by analyzing the concavities in the segmentation of the pulmonary arterial tree. It works in a semi-automatic way and enables the detection of thrombi in the larger sections of the PA. The second method does not make use of the PA segmentation and is thus fully automatic, enabling detection of clots farther along the vessels. The combination of these methods provides a robust detection technique that can be used as a safeguard by radiologists, or even as a preliminary computer-aided diagnosis (CAD) tool. The evaluation of the method is also discussed, and a scheme for measuring its performance is proposed, including a practical approach for radiologists to produce reference detection data, or ground truth.
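
A minimal 2-D sketch of the region-growing principle mentioned for the PA segmentation, reduced to an intensity tolerance around a seed with 4-connectivity; the real method is 3-D and exploits a priori vessel topology, so the image, seed and tolerance below are simplifying assumptions:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tolerance=50):
    """Grow a region from a seed pixel, keeping neighbours within an intensity tolerance."""
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    seed_value = image[seed]
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not grown[nr, nc]
                    and abs(int(image[nr, nc]) - int(seed_value)) <= tolerance):
                grown[nr, nc] = True
                queue.append((nr, nc))
    return grown

# Toy "contrast-enhanced" image: a bright, opacified vessel on a dark background.
img = np.full((5, 5), 30, dtype=np.int16)
img[1:4, 2] = 300                      # opacified vessel lumen
print(region_grow(img, seed=(1, 2), tolerance=60).astype(int))
# The growth follows the bright column; an unopacified clot would interrupt it.
```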

  • 16:00 Discussion & Closing