Invited Talks

Christian Balkenius

Full Professor of Cognitive Science at Lund University, Sweden

Christian Balkenius is full professor of Cognitive Science at Lund University. His research aims at the dual goal of (1) understanding development in biological systems and (2) reproducing properties of such systems in robots. This research includes large-scale system-level modelling of brain processes, computer simulations and robot applications of various adaptive behaviours and developmental processes, and experimental studies of animals and humans. Balkenius has published more than 180 peer-reviewed scientific papers and edited twelve conference volumes and a number of special issues.

Invited Talk: Friday, October 2nd | 2:15pm – 2:40pm

Beyond Here and Now

Although attention is usually seen as a mechanism operating on external stimuli, there is much evidence that similar processes operate internally on memory states. In this view, the external environment and internal memory processes work together as a content-addressable memory to support both cognitive and physical operations. Spatial indices play a central role both in externally directed attention and in memory access, and allow information from the two systems to be blended together during perceptual processing and memory recall. A number of computer simulations and robotics experiments have been performed that show how a system built according to these principles can efficiently learn complex goal-directed behaviours.

Frederik Beuth

Senior Researcher, Department of Artificial Intelligence, Chemnitz University of Technology, Germany

Invited Talk: Friday, October 2nd | 2:40pm – 3:05pm

Object Localization with a Neurophysiologically-Precise Model of Visual Attention

The talk explains how the primate brain searches for an object in a scene using visual attention, and outlines a neuro-computational model based on this concept. Recent neuroscience studies suggest a novel view of attention as a cognitive and holistic control process, which tunes the visual system for the current task. We implement such a control process and the neurophysiological mechanisms of attention in a new model. Its evaluation shows promising localization accuracies on a large and realistic data set with 100 objects, 1000 scenes, and three backgrounds (black, white noise, and real-world).
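The idea of a task-dependent control process that "tunes" the visual system is often modelled as multiplicative gain on feature channels. The following toy sketch (not the speaker's actual model; the array shapes, gain values, and planted target are illustrative assumptions) shows how boosting the task-relevant channel makes the priority map peak at the target location:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy feature maps: responses of 4 feature channels over an 8x8 scene.
feature_maps = rng.random((4, 8, 8))
target_location = (5, 2)
feature_maps[1, target_location[0], target_location[1]] = 3.0  # target drives channel 1

def localize(feature_maps, task_gain):
    """Feature-based attention as multiplicative gain: the task sets each
    channel's gain, and the peak of the gain-weighted sum marks the target."""
    priority = np.tensordot(task_gain, feature_maps, axes=1)  # -> (8, 8) map
    return np.unravel_index(np.argmax(priority), priority.shape)

# Searching for a "channel 1" object: boost that channel, suppress the rest.
gain = np.array([0.1, 1.0, 0.1, 0.1])
loc = localize(feature_maps, gain)
```

With the relevant channel boosted, the priority peak coincides with the planted target; with uniform gains, background clutter in the other channels can win instead.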

Ali Borji

Assistant Professor at University of Wisconsin, Milwaukee, USA

Ali Borji received his BS and MS degrees in computer engineering from Petroleum University of Technology, Tehran, Iran, in 2001 and Shiraz University in 2004, respectively. He received his PhD in computational neuroscience from the Institute for Studies in Fundamental Sciences (IPM) in Tehran, Iran, in 2009. After a one-year research position at the University of Bonn, he spent four years as a postdoctoral scholar at iLab, University of Southern California, from 2010 to 2014. He is currently an assistant professor at the University of Wisconsin, Milwaukee. His research interests include visual attention, active learning, object and scene recognition, and cognitive and computational neuroscience.

Invited Talk: Friday, October 2nd | 5:15pm – 5:45pm

Bottom-up Visual Attention: Where are We Now?

The bottom-up component of visual attention has been the subject of extensive cognitive and computational research in psychology and computer science. A multitude of models and theories have addressed this aspect of human behavior. One category of models, known as "visual saliency models," attempts to predict human fixations over natural scenes (static scenes and videos). In this talk, I will discuss recent progress in this field and the hurdles that need to be overcome in order to move forward.
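At their core, many classical saliency models rest on center-surround contrast. As a minimal illustration (a NumPy-only sketch, not any specific published model; the sigma values and test image are arbitrary assumptions), saliency can be approximated as the difference between a fine and a coarse Gaussian blur of the image:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with NumPy only."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")  # reflect padding keeps borders sane
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def saliency(img):
    """Center-surround saliency: difference of a fine and a coarse blur."""
    center = gaussian_blur(img, sigma=2)
    surround = gaussian_blur(img, sigma=8)
    s = np.abs(center - surround)
    return s / (s.max() + 1e-9)  # normalise to [0, 1]

# A bright blob on a dark background should be the most salient region.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
peak = np.unravel_index(np.argmax(saliency(img)), img.shape)
```

Real saliency models extend this idea across multiple feature channels (intensity, colour, orientation, motion) and spatial scales before combining the maps.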

Michael Dorr

Chair of Human-Machine Communication, Technische Universität München, Germany

Michael Dorr studied Computer Science and Artificial Intelligence in Edinburgh and Luebeck. He received his PhD in Engineering from the University of Luebeck in 2010 and then did a post-doc in the Department of Ophthalmology of Harvard Medical School. In 2014, he joined Technische Universität München to head the research group “Visual Efficient Sensing for the Perception-Action Loop” at the Institute for Human-Machine Communication. Michael’s research interests lie at the intersection of neuroscience and
technology: how do humans and other biological organisms cope so easily with the overwhelming complexity of the natural world, and how can we teach machines to mimic this performance?

Invited Talk: Friday, October 2nd | 11:15am – 11:40am

Space-Variant Processing and Attention in Humans and Machines

Humans and other biological organisms seem to perceive and act in complex dynamic environments almost effortlessly. One trick employed by the human visual system in order to reduce complexity is to use eye movements to attend only to relevant parts of a scene. Here, I will review our work on eye movement modelling for dynamic natural scenes, and how results can be used to improve state-of-the-art computer vision algorithms.

Ralf Engbert

Professor of Experimental and Biological Psychology, Universität Potsdam, Germany

Invited Talk: Friday, October 2nd | 11:40am – 12:05pm

Using Spatial Statistics to Understand Attentional Dynamics in Scene Viewing

The control of eye movements is an important example of active perception. During visual fixation, covert attention is the key driving mechanism for the planning of an upcoming saccade. Computational neuroscientists have developed biologically inspired models of visual attention, termed saliency maps, which successfully predict where people fixate. Here we use methods from the theory of spatial point processes to characterize spatial aggregation of gaze positions. We propose a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial scanpath statistics. The model relies on activation dynamics based on a spatially limited attention map and on inhibitory tagging. This theoretical framework links neural dynamics of attention to behavioral gaze data.

Reference: Engbert R, Trukenbrod HA, Barthelmé S, Wichmann FA (2015). Spatial statistics and attentional dynamics in scene viewing. Journal of Vision, 15(1), 14: 1-17. (DOI: 10.1167/15.1.14)
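The interplay of an attention map with inhibitory tagging can be sketched in a few lines. The following toy simulation (a loose illustration of the general mechanism, not the published model; the map, softmax temperature, and decay constants are invented for the example) samples a scanpath where each fixation leaves behind an inhibitory tag that discourages immediate refixation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D attention map: higher values attract fixations.
attention = rng.random((32, 32))
inhibition = np.zeros_like(attention)  # inhibitory tags at visited locations

def next_fixation(attention, inhibition, beta=5.0):
    """Sample the next gaze position from a softmax over activation minus tags."""
    u = beta * (attention - inhibition)
    p = np.exp(u - u.max())
    p /= p.sum()
    idx = rng.choice(p.size, p=p.ravel())
    return np.unravel_index(idx, attention.shape)

def add_tag(inhibition, pos, strength=1.0, decay=0.9):
    """Decay old tags and drop a new one at the fixated location."""
    inhibition *= decay
    inhibition[pos] += strength
    return inhibition

scanpath = []
for _ in range(10):
    pos = next_fixation(attention, inhibition)
    scanpath.append(pos)
    inhibition = add_tag(inhibition, pos)
```

Because tags decay over time, previously visited locations gradually become available again, producing the mix of exploration and refixation seen in empirical scanpaths.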

Jochen Triesch

Vice Chairman, Board of Directors Frankfurt Institute for Advanced Studies, Germany

Invited Talk: Friday, October 2nd | 10:50am – 11:15am

Overt Attention Meets Active Efficient Coding

A central goal of biological and technical vision systems is to extract novel information from the environment and to encode it efficiently. Here we present a principled approach to both problems based on the active efficient coding principle. Our model uses sparse coding to learn a representation of its binocular input. This representation drives overt attention via an information maximizing binocular saliency model and vergence eye movements that maximize the system’s coding efficiency. The interplay of the two leads to rapid learning of active binocular vision.
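The link between vergence and coding efficiency can be made concrete with a toy example. In the sketch below (a deliberately simplified 1-D stand-in for the sparse-coding model described above; the signal, disparity, and cost function are assumptions made for illustration), the vergence command that minimises the residual energy between the two eyes' inputs, i.e. the cheapest input to encode, is exactly the one that cancels the disparity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "scene": left and right eye views of the same signal,
# offset by an unknown disparity the system should verge to cancel.
scene = rng.random(200)
true_disparity = 7
left = scene[10:110]
right = scene[10 + true_disparity : 110 + true_disparity]

def coding_cost(left, right, vergence):
    """Proxy for coding efficiency: residual energy after aligning the eyes.
    A perfectly verged input is maximally redundant and thus cheap to encode."""
    shifted = np.roll(right, vergence)
    return np.mean((left - shifted) ** 2)

# The vergence command that minimises the coding cost recovers the disparity.
candidates = range(-15, 16)
best = min(candidates, key=lambda v: coding_cost(left, right, v))
```

In the full active efficient coding framework, the same principle operates on learned sparse codes of binocular image patches rather than raw pixel differences, so perception and vergence control improve jointly.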

Michael Zillich

Postdoctoral Researcher at Vienna University of Technology and Co-founder of Blue Danube Robotics, Austria

Michael Zillich is a postdoctoral researcher in the field of robot vision, which integrates machine vision methods such as object segmentation, recognition and tracking into (more or less cognitive) robotic control architectures. Michael holds a PhD in Electrical Engineering from Vienna University of Technology (2007) and a diploma in Mechatronics from Johannes Kepler University Linz (1998). He spent six months during his PhD studies at KTH Stockholm in 2000, and was a postdoctoral researcher at the University of Birmingham from 2006 to 2008. Having returned to ACIN/TU Wien, he was principal investigator in the EU project CogX and coordinated the FWF project InSitu. He is currently coordinating the EU project SQUIRREL and is principal investigator in the EU project STRANDS and the national WWTF project RALLI. Michael Zillich is (co-)author of 90 publications and has served on the program committees of a number of international conferences and as a reviewer for international journals. In 2013, together with ACIN colleague Walter Wohlkinger, he founded Blue Danube Robotics, a company dedicated to safe human-robot collaboration in industrial and home environments.

Invited Talk: Friday, October 2nd | 9:30am – 9:55am

Why Robots Need Attention as Much as Humans Do

Despite a steadily growing body of research in visual attention, a lot of current robotics work still falls short of making proper use of it. I am going to give examples of successful applications of attention, including some of our own work, as well as examples where it is clearly lacking (again, including some of our own work …). I will propose some principled approaches and invite discussion of how to move forward.