Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/21480
Title: Nonparametric pixel-wise background modelling and segmentation to detect moving object with RGB-D camera
Authors: Dorudian, Navid
Advisors: Lauria, S
Swift, S
Keywords: Background-model Update; Dynamic Environments; GPS-denied Environments; Background Subtraction; Micro Air Vehicle Detection
Issue Date: 2020
Publisher: Brunel University London
Abstract: Moving object detection is one of the fundamental parts of various computer vision applications, particularly real-time object tracking and recognition in automated video surveillance. Although human eyes can easily recognise objects and changes in a scene, automated detection of moving objects in some scenarios is still a challenging task for existing systems. One of the most effective ways to improve the detection rate is to use colour and depth cameras together (RGB-D). Despite the efforts of previous researchers with various sensors, moving object detection remains challenging in scenarios such as dynamic backgrounds, sudden illumination changes, colour and depth camouflage, intermittent motion, out-of-sensor-range objects, bootstrapping, and slow-moving or stationary objects. The aim of this thesis is therefore to improve the accuracy and efficiency of moving object detection by achieving more precise and consistent detection across different challenging scenarios. To attain this, three new robust pixel-wise nonparametric methods for real-time automatic detection of moving objects in indoor environments using an external RGB-D sensor are presented. The methods introduced in chapters 4 and 5 (BSABU, NBM-GA, NBM-HC) are improved versions of the method proposed in chapter 3, called NBMS. These methods can deal with various complex scenarios and different types of moving objects, from high-speed drones to slow-moving or stationary objects such as humans.
The NBMS method first creates two background models by storing a number of observed colour and depth pixels. Each pixel of a new frame is then compared with the stored models to classify it as foreground or background. These models require continuous updating to adapt to changes in the environment. A novel regular update based on pixel distance is proposed, which is applied only to pixels classified as background. In addition, the method blindly updates the models to adapt to sudden changes in the background. This approach is compared with other methods on datasets collected from a drone, a publicly available dataset, and a live application. Results show improvements over current methods.
In chapter 4, an adaptive blind-update policy is added to the method to improve the detection accuracy for stationary moving objects. In particular, the blind-update frequency changes according to the speed of the moving object, or to any other changes in the background, to prevent a stationary moving object from being absorbed into the background models. A new shadow-detection method using the CIE L*a*b* colour space is also added to enhance detection accuracy in cases of shadow and depth camouflage. Results show significant improvement compared to the original method. The method was also evaluated on 32 benchmark datasets, and the results are robust and consistent across different challenging scenarios.
Due to the large number of samples in the model, optimisation algorithms such as Hill Climbing and the Genetic Algorithm (GA) can improve efficiency and accuracy even further. Instead of updating the models pixel by pixel, a fitness function scores each stored sample image and only one image receives an update each time; the GA selects this image using roulette-wheel selection. This updating mechanism gives every pixel in the depth model a chance to be updated, so the system does not become stuck in local optima, which are usually caused by sensor noise. Results indicate improvement in the depth-camouflage scenario and a reduction in computational cost.
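The following is a minimal, illustrative Python sketch of the two ideas summarised in the abstract: pixel-wise classification of a new observation against a bank of stored colour and depth samples, and roulette-wheel selection of the single stored sample image to update. The sample count, distance thresholds, and fitness values below are assumptions chosen for illustration and are not the parameter values or implementation used in the thesis.

import numpy as np

N_SAMPLES = 20          # assumed number of stored samples per pixel
MATCH_THRESHOLD = 2     # assumed minimum matches to label a pixel background
COLOUR_RADIUS = 20.0    # assumed colour distance radius
DEPTH_RADIUS = 0.05     # assumed depth distance radius (metres)

def classify_pixel(colour, depth, colour_samples, depth_samples):
    """Label one pixel as background (True) or foreground (False).

    colour_samples: (N_SAMPLES, 3) stored colour observations for this pixel.
    depth_samples:  (N_SAMPLES,)   stored depth observations for this pixel.
    The pixel is background if enough stored samples lie within the colour
    and depth radii of the new observation.
    """
    colour_dist = np.linalg.norm(colour_samples - colour, axis=1)
    depth_dist = np.abs(depth_samples - depth)
    matches = np.sum((colour_dist < COLOUR_RADIUS) & (depth_dist < DEPTH_RADIUS))
    return matches >= MATCH_THRESHOLD

def roulette_wheel_select(fitness):
    """Pick the index of one stored sample image, with probability
    proportional to its fitness, so every image has a chance to be updated."""
    fitness = np.asarray(fitness, dtype=float)
    probabilities = fitness / fitness.sum()
    return np.random.choice(len(fitness), p=probabilities)

# Example: classify one pixel and choose which stored sample image to refresh.
rng = np.random.default_rng(0)
colour_model = rng.uniform(0, 255, size=(N_SAMPLES, 3))   # stored colour samples
depth_model = rng.uniform(1.0, 1.1, size=N_SAMPLES)        # stored depth samples
is_background = classify_pixel(np.array([120.0, 130.0, 125.0]), 1.05,
                               colour_model, depth_model)
chosen_sample = roulette_wheel_select(fitness=rng.uniform(0.1, 1.0, N_SAMPLES))

Because selection is probabilistic rather than greedy, even low-fitness sample images are occasionally refreshed, which is what keeps the update from settling into noise-induced local optima.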
Description: This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London
URI: http://bura.brunel.ac.uk/handle/2438/21480
Appears in Collections: Computer Science
Dept of Computer Science Theses

Files in This Item:
File                 Size     Format
FulltextThesis.pdf   4.4 MB   Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.