Foreground motion detection with a single moving camera
Motion cues provide valuable information for tasks such as collision warning and detection in complex traffic environments, where moving objects (e.g. oncoming vehicles and pedestrians) signify a higher risk and should be differentiated from the static background.
In this project we propose a novel approach for non-background motion detection that exploits both geometric constraints and deep-learning-based object recognition. Instead of using a stereo rig, we build our solution solely on the input from a single frontal camera mounted on a moving vehicle whose motion model is unknown.
Our objective is to produce a dense motion map that differentiates moving objects from the static environment.
The aim of our research is to develop a purely vision-based system for detecting moving objects in a highly dynamic traffic scene. The system should be capable of extracting geometric and semantic information from a single moving camera, and of robustly detecting both rigid motions (e.g. other vehicles) and non-rigid motions (e.g. pedestrians) without aid from external odometry sensors (e.g. an IMU).
We experimented with data collected from both pinhole and fisheye camera models. The fisheye camera provides a greater field of view, but its distortion must be corrected through calibration before the geometric constraints can be applied.
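As a minimal sketch of what such a correction involves, the snippet below inverts the equidistant fisheye projection (r_d = f·θ) back to ideal pinhole coordinates. The equidistant model and the parameter names (`f`, `cx`, `cy`) are illustrative assumptions, not the calibration actually used in this project; a deployed system would use the full set of calibrated intrinsics and distortion coefficients.

```python
import numpy as np

def undistort_equidistant(points_px, f, cx, cy):
    """Map pixels from an equidistant fisheye image to ideal pinhole
    coordinates. Assumes the simple model r_d = f * theta, where theta
    is the incidence angle of the ray; real lenses add polynomial terms."""
    pts = np.asarray(points_px, dtype=float) - np.array([cx, cy])
    r_d = np.linalg.norm(pts, axis=1)   # distorted radius from image centre
    theta = r_d / f                     # incidence angle under the model
    r_u = f * np.tan(theta)             # radius an ideal pinhole would see
    scale = np.ones_like(r_d)
    nz = r_d > 1e-9                     # leave the principal point untouched
    scale[nz] = r_u[nz] / r_d[nz]
    return pts * scale[:, None] + np.array([cx, cy])
```

Because tan(θ) ≥ θ, undistortion pushes points away from the image centre, which is why a fisheye frame appears "stretched" at its borders after rectification.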
Recently emerged deep learning techniques are employed to generate accurate object masks and dense optical flow, which allow our algorithm to understand the scene components and cluster motion vectors over each object. We combine this knowledge with geometric constraints to build a probabilistic filter for tracking foreground motions over time, and demonstrate state-of-the-art results for monocular camera-based foreground motion detection.
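One way such a geometric constraint can be combined with per-object flow clustering is sketched below: flow vectors of static points must satisfy the epipolar constraint x2ᵀFx1 ≈ 0 induced by the camera's ego-motion, so aggregating the (Sampson-approximated) epipolar residual over each object mask yields a per-object motion score. This is a simplified illustration, not the project's actual filter; the inputs `mask_ids` and `flow` are hypothetical stand-ins for the segmentation and optical-flow network outputs, and `F` for a fundamental matrix estimated from the background.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order (Sampson) approximation of the epipolar error
    x2^T F x1 for homogeneous points of shape (N, 3)."""
    Fx1 = x1 @ F.T
    Ftx2 = x2 @ F
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

def object_motion_score(F, pts_prev, flow, mask_ids, obj_id):
    """Median epipolar residual of the flow vectors inside one object
    mask. Static objects score near zero; independently moving
    objects violate the constraint and score higher."""
    sel = mask_ids == obj_id
    p1 = pts_prev[sel]
    p2 = p1 + flow[sel]                       # flow endpoints in frame 2
    h1 = np.hstack([p1, np.ones((len(p1), 1))])
    h2 = np.hstack([p2, np.ones((len(p2), 1))])
    return float(np.median(sampson_distance(F, h1, h2)))
```

Taking the median per mask is what makes the per-object clustering useful: individual flow vectors are noisy, but a robust aggregate over a segmented object is a far more stable observation to feed a probabilistic filter over time.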
Our highly skilled team of world class researchers and engineers is open to partnerships and collaborations for research, development, and commercialisation.
Contact us to learn more.