DC Field | Value | Language
dc.contributor.author | Ntalianis, Klimis | -
dc.contributor.author | Doulamis, Anastasios | -
dc.contributor.author | Mastorakis, Nikolaos | -
dc.contributor.author | Doulamis, Nikolaos | -
dc.contributor.author | Drigas, Athanasios | -
dc.date.accessioned | 2024-11-18T08:00:12Z | -
dc.date.available | 2024-11-18T08:00:12Z | -
dc.date.issued | 2015-11-27 | -
dc.identifier | scopus-84938988204 | -
dc.identifier.issn | 1939-8115 | -
dc.identifier.issn | 1939-8018 | -
dc.identifier.other | 84938988204 | -
dc.identifier.uri | https://uniwacris.uniwa.gr/handle/3000/2900 | -
dc.description.abstract | In this paper, two efficient unsupervised video object segmentation approaches are proposed and thoroughly compared. Both methods exploit depth information estimated from stereoscopic pairs. Depth is a more reliable semantic descriptor of visual content, since an object usually lies on a single depth plane. However, depth information fails to accurately represent the contours of an object, mainly due to erroneous disparity estimation and occlusion. For this reason, the first approach projects color segments onto depth information in order to address the limitations of both depth and color segmentation: color segmentation usually over-partitions an object into several regions, while depth fails to precisely represent object contours. Depth information is produced through an occlusion-compensated disparity field, from which a depth map is generated. Color segmentation, in turn, is accomplished by a modified version of the Multiresolution Recursive Shortest Spanning Tree segmentation algorithm (M-RSST). In the first approach, "Constrained Fusion of Color Segments" (CFCS), a color segments map is created by applying the M-RSST to one of the stereoscopic channels; video objects are then extracted by fusing color segments according to depth similarity criteria. The second method also utilizes the depth segments map. In particular, an active contour is automatically initialized onto the boundary of each depth segment, which usually differs from a video object's boundary. Initialization is accomplished by a fitness function that considers different color areas and preserves the shapes of the depth segments' boundaries. For acceleration, each point of the active contour is associated with an "attractive edge" point, and a greedy approach is adopted so that the active contour converges to its final position. Several experiments on real-life stereoscopic sequences are performed, and extensive comparisons in terms of speed and accuracy indicate the promising performance of both methods. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Journal of Signal Processing Systems | en_US
dc.subject | Active contour | en_US
dc.subject | Attractive edge point | en_US
dc.subject | Depth map | en_US
dc.subject | Disparity field | en_US
dc.subject | Greedy approach | en_US
dc.subject | M-RSST | en_US
dc.subject | Performance evaluation | en_US
dc.subject | Unsupervised video object segmentation | en_US
dc.title | Unsupervised segmentation of stereoscopic video objects: constrained segmentation fusion versus greedy active contours | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1007/s11265-014-0921-0 | en_US
dc.identifier.scopus | 2-s2.0-84938988204 | -
dcterms.accessRights | 0 | en_US
dc.relation.dept | Department of Business Administration | en_US
dc.relation.faculty | School of Administrative, Economics and Social Sciences | en_US
dc.relation.volume | 81 | en_US
dc.relation.issue | 2 | en_US
dc.identifier.spage | 153 | en_US
dc.identifier.epage | 181 | en_US
dc.collaboration | University of West Attica (UNIWA) | en_US
dc.journals | Open Access | en_US
dc.publication | Peer Reviewed | en_US
dc.country | Greece | en_US
local.metadatastatus | verified | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
item.openairetype | Article | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
crisitem.author.dept | Department of Business Administration | -
crisitem.author.faculty | School of Administrative, Economics and Social Sciences | -
crisitem.author.parentorg | School of Administrative, Economics and Social Sciences | -
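The "Constrained Fusion of Color Segments" step described in the abstract — fusing color segments into video objects according to depth similarity criteria — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the mean-depth merging criterion, the threshold value, and all function and variable names are hypothetical.

```python
import numpy as np

def fuse_color_segments_by_depth(color_labels, depth_map, depth_thresh=0.1):
    """Merge color segments whose mean depths are similar.

    Hedged CFCS-style sketch: assumes each pixel already carries a
    color-segment label (e.g., from an M-RSST-like segmentation) and
    an estimated depth value; segments on the same depth plane are
    fused into one video object.
    """
    labels = np.unique(color_labels)
    # Mean depth of each color segment (a simple similarity statistic;
    # the paper's actual criteria may differ).
    mean_depth = {l: depth_map[color_labels == l].mean() for l in labels}

    # Union-find over segments: merge pairs with close mean depths.
    parent = {l: l for l in labels}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            if abs(mean_depth[a] - mean_depth[b]) < depth_thresh:
                parent[find(a)] = find(b)

    # Relabel: each depth-coherent group of color segments becomes
    # one video object.
    roots = {l: find(l) for l in labels}
    root_ids = {r: i for i, r in enumerate(sorted(set(roots.values())))}
    object_map = np.zeros_like(color_labels)
    for l in labels:
        object_map[color_labels == l] = root_ids[roots[l]]
    return object_map

# Toy example: color segments 0 and 1 share a depth plane, segment 2
# does not, so fusion yields two video objects.
color = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [2, 2, 2, 2],
                  [2, 2, 2, 2]])
depth = np.array([[0.50, 0.50, 0.52, 0.52],
                  [0.50, 0.50, 0.52, 0.52],
                  [0.90, 0.90, 0.90, 0.90],
                  [0.90, 0.90, 0.90, 0.90]])
objects = fuse_color_segments_by_depth(color, depth, depth_thresh=0.1)
```

The design choice here mirrors the abstract's rationale: color over-partitions an object, so depth similarity is used as the fusion cue, while the color segments supply the precise region boundaries that raw depth lacks.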
Appears in Collections: Articles / Άρθρα

SCOPUS™ Citations: 3 (checked on Nov 19, 2024)
Page view(s): 4 (checked on Nov 23, 2024)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.