
3rd Multitel Spring workshop on video analysis – May 30, 2008

  • 10:00 Welcome and coffee
  • 10:25 Opening Remarks and Welcome

Paper Session 1: Transport

  • 10:30 “A Multi-Camera System for Video-based Road Traffic Analysis”, Jacek Czyz, Macq Electronique, Q. Houben, J.C. Tocino Diaz, ULB.

Accurate traffic data such as vehicle counts, speeds, vehicle classes, incident detections, etc. are very important in many applications. In this talk, we describe a system for the analysis of multiple video streams for road traffic surveillance. The system consists of several modules working either on a single video stream (vehicle detection) or at a higher level on features extracted from the multiple streams (stereo vision). A specific module is devoted to automatic camera calibration. Information fusion is used at several levels between the modules in order to increase the robustness of the system.
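
To give a concrete feel for a single-stream vehicle detection module, here is a minimal sketch based on OpenCV background subtraction (MOG2). It is purely illustrative and is not the system presented in the talk, which additionally fuses stereo vision and automatic calibration across cameras.

    # Minimal sketch: single-stream vehicle detection via background subtraction.
    # Illustrative only; parameters and the shadow threshold are assumptions.
    import cv2

    bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

    def detect_vehicles(frame_bgr, min_area=800):
        fg = bg.apply(frame_bgr)                     # 255 = foreground, 127 = shadow
        fg = cv2.medianBlur(fg, 5)
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]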

  • 11:00 “MORYNE: city traffic flow data collection through public transport vehicles”, Christophe Parisot, Multitel.

The EU co-funded MORYNE project aims at improving transport efficiency, transport safety and environmental friendliness by adding new tools for efficient traffic management in urban and sub-urban areas. Public transport vehicles, such as buses, are equipped with environmental sensors, e.g. temperature and humidity, and with cameras to monitor road and traffic conditions. To assist traffic management, the collected data are transferred in real time to the traffic management centre. Assuming that a bus is operating on the same road as other vehicles but on a reserved lane, the Multitel traffic scene analysis solution evaluates the traffic flow of the lanes adjacent to the bus and sends the classification of the traffic at regular time intervals to the traffic management module. This video analysis method is intended to provide traffic flow information wherever the bus is operating (at any location of a street, in any street, etc.), thus allowing Traffic Management Centres to get more localized information (in case of long road sections) as well as information in places where it was not available before. Our method focuses on the automatic classification of the traffic conditions on the lanes adjacent to the bus. The algorithm is also able to generate alarms when congestion or bus lane violations are detected. Traffic management operators are able to stream the video from any bus in real time, e.g. to check the situation on the recorded video sequences when congestion has been detected, and to take appropriate actions.
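
As a rough illustration of per-lane traffic classification, the sketch below maps crude lane measurements to a traffic class; the thresholds, class names and inputs are assumptions for illustration, not the MORYNE algorithm.

    # Illustrative sketch only: map per-lane measurements (vehicle density,
    # mean speed) to a traffic class. Thresholds and class names are assumptions.
    def classify_lane(vehicle_count, lane_length_m, mean_speed_kmh):
        density = vehicle_count / max(lane_length_m / 1000.0, 1e-6)   # vehicles per km
        if mean_speed_kmh < 5 and density > 60:
            return "congested"
        if mean_speed_kmh < 30 or density > 30:
            return "dense"
        return "fluid"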

  • 11:30 “Application of Face Recognition for Person Matching in Trains”, Warmon Renaud, Multitel.

Face recognition has made tremendous advances over the last two decades and is still a very active research domain. Many algorithms have been developed and tested on large, popular face databases, with very good results reported. However, the faces in these databases remain far from what we can expect to see in video surveillance applications. We present an application of one of the most successful face recognition algorithms in a realistic environment for person matching in trains. We first perform skin color region segmentation, then use the well-known Haar cascade classifier to extract the faces. The faces are normalized by color detection and described by a local binary pattern (LBP) operator. Face matching is done by a nearest-neighbour classifier applied to a small database generated and updated in real time. Results show the interest of this technique to reinforce the decisions of another person matching system based on other characteristics.
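
For the reader, here is a minimal sketch of a pipeline of this kind, using OpenCV's Haar cascade detector and a uniform-LBP histogram matched by a nearest-neighbour search with a chi-square distance. The cascade file, descriptor parameters and matching threshold are illustrative assumptions, not the system presented in this talk.

    # Sketch: Haar-cascade face detection + LBP histogram + nearest neighbour.
    # Illustrative assumptions throughout (cascade file, 64x64 faces, threshold).
    import cv2
    import numpy as np
    from skimage.feature import local_binary_pattern

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def lbp_signature(gray_face, P=8, R=1):
        """Uniform-LBP histogram used as a compact face descriptor."""
        lbp = local_binary_pattern(gray_face, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def match(face_hist, gallery, threshold=0.5):
        """Nearest neighbour over the gallery with a chi-square distance."""
        best_id, best_d = None, np.inf
        for person_id, ref in gallery.items():
            d = 0.5 * np.sum((face_hist - ref) ** 2 / (face_hist + ref + 1e-10))
            if d < best_d:
                best_id, best_d = person_id, d
        return best_id if best_d < threshold else None

    def process_frame(frame_bgr, gallery):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
            yield (x, y, w, h), match(lbp_signature(face), gallery)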

  • 12:00 “Automatic Information Extraction of Recording Strips”, Caroline Machy, Multitel.

Even if the number of accidents involving the railway system is decreasing thanks to technical progress, the statistics are still too high. For instance, in 2004 in the EU 25, 9309 accidents were reported, including 142 in France [ERA]. For each of these accidents in France, one element can be used as evidence in the eyes of the law: the recording strip and its associated filling-card, or the so-called ATESS file recorded by the digital Juridical Recording Units (JRU) introduced in the mid 80s. The strip contains all the information concerning the journey of the train, speed and time, and all the driving events (such as emergency brakes). The card gives additional information on the train’s driver, departure/arrival stations, train number, etc. These two elements are currently checked manually. The idea of this project is to simplify and speed up the procedure by performing the checking as automatically as possible. This paper therefore presents a whole system for the Automatic Reading of Recording Strips (ARRS). Firstly, we will present the mechanical system for the digitization of both the strip and the card. Then we will introduce the curve tracking and feature matching used successively for the extraction of speed/time and driving events. Finally, a performance evaluation will present the different detection rates of the system.
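
As an illustration of the curve-tracking step, the toy sketch below follows a binarised speed trace column by column, staying close to the previous row; it is an assumption-laden simplification, not the ARRS processing chain.

    # Toy sketch: follow an ink trace across a binarised strip scan,
    # one column at a time. Purely illustrative.
    import numpy as np

    def track_curve(binary_strip, start_row, max_jump=5):
        """binary_strip: 2D boolean array, ink = True. Returns one row per column."""
        rows = [start_row]
        for col in range(1, binary_strip.shape[1]):
            lo = max(rows[-1] - max_jump, 0)
            hi = min(rows[-1] + max_jump + 1, binary_strip.shape[0])
            ink = np.flatnonzero(binary_strip[lo:hi, col])
            if ink.size:
                rows.append(lo + ink[np.argmin(np.abs(lo + ink - rows[-1]))])
            else:
                rows.append(rows[-1])            # bridge small gaps in the trace
        return np.array(rows)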

Lunch

  • 13:30 “Improving public transportation management and planning using Closed Circuit Television cameras analysis”, C. Carincotte, Multitel.

In this paper, we propose to show how the video data available in standard CCTV transportation systems can represent a useful source of information if automatically analyzed, e.g. for planning and resource optimization purposes (i.e. facilitating the understanding of equipment usage and easing diagnostics and planning for system managers). More precisely, we present two algorithms that estimate the number of people in a camera view and measure the platform time-occupancy by trains. First, each algorithm is evaluated on large amounts of data from the Roma underground; the analysis of the algorithm results provides interesting insights regarding general trends in station usage and operation. Second, both the people counting and the train stop information are combined and analysed by an end-user public transportation system manager to provide a better understanding of the station usage.
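
As a hypothetical illustration of how the two outputs could be combined, the sketch below joins a people-count time series with train dwell intervals; the field names and the aggregation are assumptions made for illustration, not the analysis carried out with the end-user.

    # Illustrative sketch: combine people counts with train dwell intervals.
    # Timestamps are assumed to be numeric seconds; field names are assumptions.
    def crowd_per_dwell(counts, dwells):
        """counts: list of (timestamp, people); dwells: list of (t_arrive, t_depart)."""
        report = []
        for t_arrive, t_depart in dwells:
            on_platform = [p for t, p in counts if t_arrive <= t <= t_depart]
            report.append({
                "arrival": t_arrive,
                "dwell_s": t_depart - t_arrive,
                "mean_crowd": sum(on_platform) / len(on_platform) if on_platform else 0,
            })
        return report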

Paper Session 2: Other

  • 14:00 “Progressive Learning for Interactive Surveillance Scenes Retrieval”, J. Meessen, Multitel.

Active learning has proven to improve the performance of interactive image classification, as it tends to maximize the information gain at each labelling round. Various techniques have been proposed over the years, as well as combinations of algorithms. In this talk, we consider an interactive region-based classification problem where target examples are rare compared to the mass of negative examples. We propose a new active learning algorithm adapted to this problem and show how it should be combined with other existing techniques. It was tested on both synthetic data and real-life video surveillance scenes. Our experiments demonstrate that the search for positive examples – “positive thinking” – and the dynamic adaptation of the learning strategy improve the classification performance.
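
A toy sketch of an active-learning loop whose selection score mixes “positive thinking” (prefer likely positives) with classical uncertainty sampling is given below; the SVM back-end, the scoring rule and the 0/1 labels are assumptions for illustration, not the algorithm proposed in the talk.

    # Toy active-learning loop biased toward likely positives.
    # Assumes labels in {0, 1} and a seed set containing both classes.
    import numpy as np
    from sklearn.svm import SVC

    def active_learning(X_pool, oracle, seed_idx, rounds=10, batch=5, alpha=0.7):
        """oracle(i) returns the true label of pool item i (the human annotator)."""
        labelled = {i: oracle(i) for i in seed_idx}
        for _ in range(rounds):
            idx = np.array(sorted(labelled))
            clf = SVC(kernel="rbf", probability=True).fit(
                X_pool[idx], [labelled[i] for i in idx])
            p_pos = clf.predict_proba(X_pool)[:, 1]
            # High score = likely positive (weight alpha) and/or uncertain (1 - alpha).
            score = alpha * p_pos - (1 - alpha) * np.abs(p_pos - 0.5)
            score[list(labelled)] = -np.inf      # never re-query labelled items
            for i in np.argsort(score)[-batch:]:
                labelled[int(i)] = oracle(int(i))
        return clf, labelled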

  • 14:30 “Bundle Adjustment for Homography Computation Using Lines”, Wei Du, ULg.

This paper presents a method for bundle adjustment for homography computation using lines. In contrast to points, bundle adjustment using lines depends on how the lines are parameterized. Two different parameterizations, the normalized and the geometric representations of lines, were tested and compared. The former gives rise to a Jacobian matrix similar to that obtained with points but is sensitive to the setup of the coordinate system. The latter is numerically more stable. An explicit expression of the Jacobian matrix is derived for the latter representation.
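
For context: a homography H that maps points as x' = Hx maps lines as l' ∝ H^{-T} l. The sketch below builds a residual from this relation and refines H with a numerical Levenberg-Marquardt solver; it does not reproduce the paper's parameterizations or its explicit Jacobian.

    # Sketch: refine a homography from line correspondences using the
    # transfer relation l' ~ inv(H).T @ l. Needs at least 3 line pairs.
    import numpy as np
    from scipy.optimize import least_squares

    def transfer_line(H, l):
        lp = np.linalg.inv(H).T @ l
        return lp / np.linalg.norm(lp[:2])       # normalise so a^2 + b^2 = 1

    def residuals(h, lines, lines_prime):
        H = np.append(h, 1.0).reshape(3, 3)      # fix H[2,2] = 1 (8 free parameters)
        res = []
        for l, lp_obs in zip(lines, lines_prime):
            lp = transfer_line(H, l)
            lp_obs = lp_obs / np.linalg.norm(lp_obs[:2])
            if np.dot(lp, lp_obs) < 0:           # lines are defined up to sign
                lp = -lp
            res.extend(lp - lp_obs)
        return np.array(res)

    def refine_homography(H0, lines, lines_prime):
        h0 = (H0 / H0[2, 2]).ravel()[:8]
        sol = least_squares(residuals, h0, args=(lines, lines_prime), method="lm")
        return np.append(sol.x, 1.0).reshape(3, 3)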

Short Break

  • 15:15 “Study and Implementation of a tracking method and 3D reconstruction of objects having a planar movement”, Guillaume Masse, FPMs.

The presentation covers the study of a tracking and recognition method for objects with planar movement. The purpose is to detect, follow and recognize objects such as cars or pedestrians in a video of a car park. To do this, objects are described by SIFT points. SIFT point extraction is a recent method that uses gradients in scale space. Objects are matched, and thus tracked, from one frame to the next using these SIFT points. A signature is then created for each object, regrouping all SIFT points from all views of the object, and stored in a database. A recognition process then uses this database to provide a recognition probability (between 0 and 1) indicating whether the current object matches one of those in the database. This project was done for a master’s thesis (‘TFE’).
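
A minimal sketch of matching a stored SIFT signature against a new frame with OpenCV and Lowe's ratio test is shown below; the score normalisation is an illustrative choice and not the thesis implementation.

    # Sketch: match a stored SIFT "signature" against the current frame.
    # The ratio test and the score normalisation are illustrative choices.
    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def signature(image_gray):
        """Descriptor set describing one view of an object."""
        _, desc = sift.detectAndCompute(image_gray, None)
        return desc

    def match_score(frame_gray, object_signature, ratio=0.75):
        """Fraction of signature descriptors with a confident match (0..1)."""
        desc_frame = signature(frame_gray)
        if desc_frame is None or object_signature is None:
            return 0.0
        good = 0
        for pair in matcher.knnMatch(object_signature, desc_frame, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good += 1                        # passes Lowe's ratio test
        return good / len(object_signature)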

  • 15:45 “An event recognition framework for automated visual surveillance”, Cedric Simon, UCL.

The ARCADE project presents a classifier-based approach to recognize events in video surveillance sequences. The aim of this work is to propose a flexible and generic event recognition system that can be used in a real-world context. The overall system is composed of one supervised learning algorithm that can handle low-level features to classify possibly sophisticated events. An ensemble of decision trees models each event as a sequence of structured activity patterns, without the use of any tracking methods. Our system is tested on simulated events and in a real-world context with the CAVIAR video sequence dataset. Experimental results demonstrate the robustness of the system and its efficiency for event recognition applications in visual surveillance.
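
A toy sketch of the general idea, classifying sliding-window activity features with a forest of decision trees, is given below; the features are assumptions made for illustration, not the ARCADE descriptors.

    # Toy sketch: per-window activity features + a forest of decision trees.
    # Feature choices are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(track, n_bins=8):
        """Summarise a short (x, y) trajectory window as a fixed-length vector."""
        track = np.asarray(track, dtype=float)
        v = np.diff(track, axis=0)               # per-frame displacement
        speed = np.linalg.norm(v, axis=1)
        angle = np.arctan2(v[:, 1], v[:, 0])
        hist, _ = np.histogram(angle, bins=n_bins, range=(-np.pi, np.pi), density=True)
        return np.concatenate([[speed.mean(), speed.std(), speed.max()], hist])

    def train_event_classifier(windows, labels):
        """windows: list of (x, y) trajectories; labels: one event label each."""
        X = np.vstack([window_features(w) for w in windows])
        return RandomForestClassifier(n_estimators=200).fit(X, labels)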

  • 16:15 “Stereovision for people counting & pedestrian activity monitoring”, Damien Delannay, UCL.

We will briefly present recent investigations into the relevance of real-time stereo-based disparity maps for people activity monitoring. The first scenario addresses the design of an accurate people counting system. The second scenario deals with people activity detection on and around pedestrian crossings.
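
A minimal sketch of the disparity-based ingredient, using OpenCV's semi-global block matcher followed by a crude head-level segmentation for counting, is shown below; all parameters are illustrative assumptions, not the system under investigation.

    # Sketch: dense disparity from a rectified stereo pair, then a crude
    # head-level segmentation for counting. All thresholds are illustrative.
    import cv2
    import numpy as np

    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)

    def count_people(left_gray, right_gray, head_disparity=40, min_area=300):
        disp = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0
        mask = (disp > head_disparity).astype(np.uint8) * 255   # close to the camera
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return sum(1 for c in contours if cv2.contourArea(c) > min_area)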

  • 16:45 Discussion & Closing