Show simple item record

dc.contributor.author    Heinen, Milton Roberto    pt_BR
dc.contributor.author    Engel, Paulo Martins    pt_BR
dc.date.accessioned    2013-06-19T01:43:54Z    pt_BR
dc.date.issued    2009    pt_BR
dc.identifier.issn    0104-6500    pt_BR
dc.identifier.uri    http://hdl.handle.net/10183/72579    pt_BR
dc.description.abstract    Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to some robotic vision systems, such as automatic object recognition and landmark detection. However, these kinds of applications have different requirements from those for which the models were originally proposed. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections, and scales, and it should also select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. This model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known and publicly available visual attention models: NVT and SAFE. Moreover, NLOOK can select more accurate fixations than the other attention models, and it can also select the scales of fixations. Thus, the proposed model is a good tool for use in robot vision systems.    en
dc.format.mimetype    application/pdf    pt_BR
dc.language.iso    eng    pt_BR
dc.relation.ispartof    Journal of the Brazilian Computer Society. Porto Alegre. Vol. 15, n. 3 (2009 Sept.), p. 3-17    pt_BR
dc.rights    Open Access    en
dc.subject    Robot vision    en
dc.subject    Inteligência artificial    pt_BR
dc.subject    Visão computacional    pt_BR
dc.subject    Visual attention    en
dc.subject    Selective attention    en
dc.subject    Focus of attention    en
dc.subject    Biomimetic vision    en
dc.title    NLOOK: a computational attention model for robot vision    pt_BR
dc.type    Artigo de periódico    pt_BR
dc.identifier.nrb    000733068    pt_BR
dc.type.origin    Nacional    pt_BR


This item is licensed under a Creative Commons License