Adaptable neural networks for unsupervised video object segmentation of stereoscopic sequences
Authors: Ntalianis, Klimis; Doulamis, Nikolaos; Doulamis, Anastasios; Kollias, Stefanos
Issue Date: 1-Aug-2001
Conference: International Conference on Artificial Neural Networks (ICANN 2001), 21-25 August 2001, Vienna, Austria 
Book: Artificial Neural Networks - ICANN 2001 
Series: Lecture Notes in Computer Science
Abstract: 
In this paper, an adaptive neural network architecture is proposed for efficient video object segmentation and tracking in stereoscopic video sequences. Object extraction is an important issue addressed by emerging multimedia applications, since it provides a meaningful description of the visual content. The scheme includes: (A) a retraining algorithm that optimally adapts the network weights to the current conditions while minimally degrading the previously acquired knowledge; (B) a semantically meaningful object extraction module for constructing the retraining set of the current conditions; and (C) a decision mechanism that detects the time instances at which network retraining is required. The retraining algorithm reduces to the minimization of a convex function subject to linear constraints. Furthermore, the description of the current conditions is obtained by an appropriate combination of color and depth information. Experimental results on real-life video sequences indicate the promising performance of the proposed adaptive neural network-based scheme.
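
As an illustration of the kind of constrained convex minimization mentioned in the abstract, the sketch below adapts a flattened weight vector w0 by minimizing the squared norm of the weight perturbation (so that previous knowledge is minimally degraded) subject to linear equality constraints A (w0 + dw) = b standing in for the retraining set of the current conditions. The matrices A and b, the problem sizes, and the use of SciPy's SLSQP solver are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: w0 is the previously trained (flattened) weight vector,
# and (A, b) encode linearized constraints requiring that the adapted weights
# reproduce the desired outputs on the new retraining set.
rng = np.random.default_rng(0)
n_weights, n_constraints = 20, 5
w0 = rng.normal(size=n_weights)
A = rng.normal(size=(n_constraints, n_weights))
b = A @ w0 + 0.1 * rng.normal(size=n_constraints)  # targets under current conditions

def objective(dw):
    # Convex objective: keep the weight perturbation as small as possible,
    # i.e. minimally degrade the previously acquired knowledge.
    return float(dw @ dw)

constraints = [{
    "type": "eq",
    # Linear constraints: the adapted weights w0 + dw must satisfy A (w0 + dw) = b.
    "fun": lambda dw: A @ (w0 + dw) - b,
}]

res = minimize(objective, x0=np.zeros(n_weights), method="SLSQP",
               constraints=constraints)
w_adapted = w0 + res.x
print("constraint residual:", np.linalg.norm(A @ w_adapted - b))
print("weight change norm :", np.linalg.norm(res.x))
```

The quadratic objective with linear constraints keeps the problem convex, so the solver returns the smallest weight update that satisfies the new conditions; in the paper's scheme this kind of update would be triggered only when the decision mechanism detects that retraining is required.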
ISBN: 3-540-42486-5
ISSN: 0302-9743
DOI: 10.1007/3-540-44668-0_147
URI: https://uniwacris.uniwa.gr/handle/3000/2748
Type: Conference Paper
Department: Department of Business Administration 
School: School of Administrative, Economics and Social Sciences 
Affiliation: University of West Attica (UNIWA) 
Appears in Collections: Book Chapter / Κεφάλαιο Βιβλίου
