Highlights of the program

Keynote talks

July 4, 5.45 pm: David C. Burr (University of Florence)

July 5, 5.45 pm: Benedikt Grothe (LMU Munich)

July 6, 5.45 pm: Albrecht Schmidt (LMU Munich)

The talk by Vincent Hayward unfortunately had to be cancelled.

_______________________________________________________

Spotlight talk

July 6, 9.15 am: Daphne Maurer (McMaster University, Hamilton)

_______________________________________________________

Keynote July 4:

David C. Burr
(University of Florence, Italy)

The multimodal number sense: spanning space, time, sensory modality, and action

Abstract. Humans and other animals can rapidly estimate the number of items in a scene, of flashes or tones in a sequence, and of motor actions. Adaptation techniques provide clear evidence in humans for the existence of specialized numerosity mechanisms that make up the number sense. This sense of number is truly general, encoding the numerosity of both spatial arrays and sequential sets, in vision and audition, and interacting strongly with action. The adaptation (cross-sensory and cross-format) acts on sensory mechanisms rather than decisional processes, pointing to a truly general sense.

Bio. David Burr is a world-class leader in visual neuroscience. For most of his career he has worked on human vision, although he has more recently ventured into touch, audition, multi-sensory research, and perceptual decision-making. His research approach is fundamentally multidisciplinary, embracing classical psychophysics, animal electrophysiology, human physiology (including evoked potentials, functional imaging, and clinical studies), and computational modelling. He has made major contributions to most areas of sensory neuroscience, including the perception of space, time, motion, eye movements, colour, and number, as well as their development.

_______________________________________________________ 

Keynote July 5:

Benedikt Grothe (LMU Munich)

The spatial representations of sounds –
A journey through time and along the auditory pathway

Benedikt Grothe, Matthias Gumbert,
Michael H. Myoga
Max Planck Institute of Neurobiology, Martinsried, Germany

Abstract. The perception of a sound’s spatial location must be computed and processed entirely within the brain, as space is not mapped onto the auditory epithelium. Interestingly, and in contrast to birds, mammals compute no topographical map of space along the early canonical auditory hierarchy. It is, however, still debated how auditory space is neuronally represented at higher levels of the mammalian auditory system. On the one hand, the two-channel hypothesis, based primarily on auditory brainstem studies, states that in mammals spatial information is not encoded via labeled-line projections as in the bird brainstem, but rather in the overall firing rates of large, opposing, hemispheric channels. On the other hand, there is circumstantial evidence for cortical neurons responding to frontal locations only, as well as human EEG studies postulating a third (central) channel. To investigate this apparent contradiction, we employed repeated two-photon calcium imaging in the auditory cortex of awake and anesthetized mice and probed the spatial tuning of the same hundreds of neurons over weeks. We found that evoked responses were generally stronger under awake conditions (consistent with previous studies), but also that spatial tuning toward the front of the animal was specifically suppressed under anesthesia. The spatial tuning of individual neurons changed from session to session, but the population as a whole remained largely stable. These findings indicate that the information from the two ears is initially contrasted in the brainstem into a two-channel system, which at the level of the cortex is then differentiated into a dynamic representation of all positions in space. This raises the specific question of how visual and auditory information can be neuronally aligned into a coherent perception of space.

Bio. Prof. Dr. Benedikt Grothe is Chair of Neurobiology at the Ludwig-Maximilians-Universität München (LMU) and a Fellow of the Max Planck Society. He studied biology and completed his PhD at the LMU in Munich. Between 1991 and 1993 he was a postdoc at the University of Texas at Austin with George D. Pollak and at the New York University Center for Neural Science with Dan H. Sanes. In 1994 he became Assistant Professor at the LMU, and in 1999 Research Group Leader at the MPI of Neurobiology in Martinsried. Since 2003 he has been Professor of Neurobiology at the LMU.

_______________________________________________________

Keynote July 6:

Albrecht Schmidt (LMU Munich)

Interacting with Intelligent Systems:
Human-AI Collaboration to Amplify Human Abilities

Abstract. The use and development of tools are strongly linked to human evolution and intelligence. Physical tools, from the wheel to the plane and from knives to production machines, have transformed what people can do and how people live. Currently, we are at the beginning of an even more fundamental transformation fueled by artificial intelligence and autonomous systems: digital tools with built-in intelligence to amplify human abilities. Such digital technologies will provide us with entirely new opportunities to (1) enhance the perceptual and cognitive abilities of humans and (2) delegate intermediate decisions and interact at a different level of granularity. In our research we create novel digital technologies to systematically and empirically explore how to enhance human abilities. We aim to create an efficient and pleasant cooperation between (embodied) intelligent systems driven by artificial intelligence and human actors. If such a cooperation is successful, the resulting human-technology system will outperform both the technical system and the human user. It is exciting to see how these new ubiquitous computing technologies ultimately have the potential to make human actors more powerful. The vision is to overcome fundamental limitations in human perception, action, and cognition and eventually create abilities currently considered superpowers. But this vision comes at a price: we delegate control and strive towards highly optimized systems that apparently work great for the user, but at the same time may end the element of randomness and serendipity in our lives. This opens the question of whether interactive, human-centered artificial intelligence can help keep the user in control, or whether this is just an illusion.

Bio. Albrecht Schmidt is Professor of Human-Centered Ubiquitous Media in the computer science department of the Ludwig-Maximilians-Universität München in Germany. He studied computer science in Ulm and Manchester and received a PhD from Lancaster University, UK, in 2003. He previously held academic positions at several universities, including Stuttgart, Cambridge, Duisburg-Essen, and Bonn, and also worked as a researcher at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and at Microsoft Research in Cambridge. In his research, he investigates the inherent complexity of human-computer interaction in ubiquitous computing environments, particularly in view of increasing computer intelligence and system autonomy. Albrecht has actively contributed to the scientific discourse in human-computer interaction through the development, deployment, and study of functional prototypes of interactive systems and interface technologies in different real-world domains. His early experimental work addressed the use of diverse sensors to recognize situations and interactions, influencing our understanding of context-awareness and situated computing. He proposed the concept of implicit human-computer interaction. Over the years, he has worked on automotive user interfaces, tangible interaction, interactive public display systems, interaction with large high-resolution screens, and physiological interfaces. Most recently, he has focused on how information technology can provide cognitive and perceptual support to amplify the human mind; to investigate this further, he received an ERC grant in 2016. Albrecht has co-chaired several SIGCHI conferences; he is on the editorial board of ACM TOCHI, edits a forum in ACM Interactions and a column on human augmentation in IEEE Pervasive Computing, and formerly edited a column on interaction technologies in IEEE Computer. He co-founded the ACM conferences on Tangible and Embedded Interaction (2007) and on Automotive User Interfaces (2010). In 2018 Albrecht was inducted into the ACM SIGCHI Academy.

_______________________________________________________

Cancelled:

Vincent Hayward
(UPMC Paris & Actronika SAS)


Early Touch

Abstract: Touch begins in the skin, mostly but not exclusively. The intriguing mechanics of the skin and the huge diversity of skin interactions during direct contact, as well as during tool use, raise the question of what “early touch” could be. This notion would designate, by analogy to early vision, the capture, transformation, and coding of tactile information before any cognitive processing. I will attempt to answer this question in the first part of the talk and, in the second, discuss half a dozen frequently encountered misconceptions about early touch.

Short bio: Vincent Hayward joined the Department of Electrical and Computer Engineering at McGill University in 1989, where he was assistant, then associate, and from 2006 full professor. He joined the Université Pierre et Marie Curie in 2008 and took a leave of absence in 2017-2018 to be Professor of Tactile Perception and Technology at the School of Advanced Studies of the University of London, supported by a Leverhulme Trust Fellowship, following a six-year period as an advanced ERC grantee. His main research interests are touch and haptics, robotics, and control. Since 2016, he has spent part of his time contributing to the development of a start-up company in Paris, Actronika SAS, dedicated to the development of haptic technology. He was elected a Fellow of the IEEE in 2008 and a member of the French Academy of Sciences in 2019.

_______________________________________________________

Spotlight talk, July 6

Daphne Maurer
(McMaster University, Hamilton)

Pretty Ugly: Why we like some songs, faces, foods, plays, pictures, poems, etc., and dislike others.

Abstract. Homo sapiens are aesthetic beasts. People have decorated their environments since palaeolithic times. This talk will draw on experimental evidence from human development to explain how such aesthetic preferences are formed, and will show how the same principles apply across sensory modalities. Their origin appears to lie in how the multi-sensory environment interacts with the structure of the nervous system. A baby’s structural biases and limitations constrain attention, making some stimuli easier to process and some of those particularly salient. From these structures and limitations, the mechanism of aesthetic preferences emerges. This is a consilient approach to aesthetics. In this short talk I shall illustrate it for taste preferences, judgments of facial beauty, music, and dance. The talk will draw on my 50 years of laboratory research on the development of perception, plus 30 years of library and field research that went into my recent book with Charles Maurer, Pretty Ugly.

Bio. Daphne Maurer is a Distinguished University Professor in the Department of Psychology, Neuroscience and Behaviour at McMaster University in Canada. She has over 200 publications and is a Fellow of the Royal Society of Canada.