Video object articulation using depth-based content segmentation approaches
Authors: Ntalianis, Klimis; Doulamis, Anastasios; Kollias, Stefanos; Doulamis, Nikolaos
Publisher: IEEE
Issue Date: 10-Dec-2002
Conference: 2002 International Conference on Image Processing (ICIP 2002), 22-25 September 2002, Rochester, New York, USA 
Book: Proceedings of the 2002 International Conference on Image Processing 
Volume: 2
Abstract: 
In this paper, two efficient unsupervised video object segmentation approaches are proposed and extensively compared in terms of computational cost and quality of segmentation results. Both methods exploit depth information: a depth segments map is initially estimated by analyzing a stereoscopic pair of frames and applying a segmentation algorithm. The first approach, "Constrained Fusion of Color Segments" (CFCS), performs video object segmentation by fusing color segments according to a depth similarity criterion (a rough sketch of this idea follows the record below). In the second approach, a dilated version of the boundary of each depth segment is first produced, and several feature points are estimated on this dilated boundary. Then, for each initial point, a normalized Motion Geometric Space (MGS) is created, which determines the only path along which the point is allowed to move. In the last step, each initial point moves along its MGS and stops according to a weighted stop function. Experiments on real-life stereoscopic sequences are presented to demonstrate the speed and accuracy of the proposed schemes.
ISBN: 0-7803-7622-6
ISSN: 1522-4880
DOI: 10.1109/ICIP.2002.1039976
URI: https://uniwacris.uniwa.gr/handle/3000/2815
Type: Conference Paper
Department: Department of Business Administration 
School: School of Administrative, Economics and Social Sciences 
Affiliation: University of West Attica (UNIWA) 
Appears in Collections: Book Chapter
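
The abstract only outlines the CFCS approach, so the following minimal Python sketch illustrates the general idea of fusing color segments by depth similarity. It is not the authors' implementation: the function name, the greedy grouping strategy, and the depth_thresh parameter are all assumptions, and the paper's actual depth similarity criterion (defined against the depth segments map) is replaced here by a simple mean-depth comparison.

```python
import numpy as np

def fuse_color_segments(color_labels, depth_map, depth_thresh=0.1):
    """Hypothetical sketch of depth-based fusion of color segments.

    color_labels : 2-D int array, one label per color segment
    depth_map    : 2-D float array of per-pixel depth estimates
    depth_thresh : assumed similarity threshold (not from the paper)
    """
    segment_ids = np.unique(color_labels)
    # Mean depth of each color segment.
    mean_depth = {s: float(depth_map[color_labels == s].mean())
                  for s in segment_ids}

    # Greedy grouping by depth: a segment whose mean depth lies within
    # depth_thresh of the current group's anchor depth is fused into it.
    groups = []  # each entry: [anchor_depth, [member segment ids]]
    for s in sorted(segment_ids, key=mean_depth.get):
        if groups and abs(mean_depth[s] - groups[-1][0]) < depth_thresh:
            groups[-1][1].append(s)
        else:
            groups.append([mean_depth[s], [s]])

    # One video-object label per fused group of color segments.
    objects = np.zeros_like(color_labels)
    for obj_id, (_, members) in enumerate(groups, start=1):
        objects[np.isin(color_labels, members)] = obj_id
    return objects
```

Under this sketch, a foreground object split into several color segments at similar depth collapses into a single video object, while background segments at a clearly different depth form another.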
