Public lecture

Unlocking the Power of Artificial Intelligence for Real World Challenges

  • 26 April 2022
  • 10:15 am - 4:30 pm
  • Virtual IAS

As part of the IAS Annual Theme 'AI: Facts, Fictions, Futures', this virtual event will bring together a range of academics to discuss 'Unlocking the Power of Artificial Intelligence for Real World Challenges'.

AI is widely used in many different real-world application areas. This widespread use, however, creates its own challenges and raises a number of research questions for the AI community in developing more advanced AI algorithms. In this event, six world-leading researchers will share their research experience on both the fundamentals of AI and its applications. The event covers psychologically and biologically inspired cognitive development for robots; neuromorphic sensors and computing; combining AI, model-based control and embodied intelligence; crossmodal learning and the integration of knowledge and learning; cooperative AI for integration into society; and explainable deep learning.

Convened by: Qinggang Meng, Georgina Cosma, Wen-hua Chen, Haibin Cai, Syeda Fatima

IAS Visiting Fellows in residence:
Plamen Angelov, Lancaster University
Jianwei Zhang, University of Hamburg
Angelo Cangelosi, University of Manchester
Bram Vanderborght, Vrije Universiteit Brussel
Kate Larson, University of Waterloo
Shih-Chii Liu, ETH Zürich

Programme:

10:15

Introductory Remarks

Director of the IAS, Marsha Meskimmon & AI Theme Lead, Qinggang Meng

10:30

Developmental Robotics for Language Learning, Trust and Theory of Mind

Angelo Cangelosi

11:15

Cooperative AI

Kate Larson

12:00

Welcome by the Vice-Chancellor, Professor Nick Jennings, followed by lunch

13:00

Welcome remarks by the Pro Vice-Chancellor for Research (PVCR), Professor Steve Rothberg, and introduction to the afternoon session by Qinggang Meng

13:10

Edge AI with Neuromorphic Spiking Sensors

Shih-Chii Liu

14:00

Crossmodal Learning Approaches to Robust Autonomous Systems

Jianwei Zhang

14:45

I need somebody: combining AI, model-based control and embodied intelligence

Bram Vanderborght

15:30

Exemplar-based Deep Learning

Plamen Angelov

16:15

Concluding remarks and close

Qinggang Meng

Angelo Cangelosi

Developmental Robotics for Language Learning, Trust and Theory of Mind

Growing theoretical and experimental research on action and language processing, and on number learning and gestures, clearly demonstrates the role of embodiment in cognition and language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012). In robotics and AI, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot collaboration, and have led to the new interdisciplinary approach of Developmental Robotics, part of the wider Cognitive Robotics field (Cangelosi & Schlesinger 2015; Cangelosi & Asada 2021). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on embodiment biases in early word acquisition and grammar learning (Morse et al. 2015; Morse & Cangelosi 2017), and on pointing gestures and finger counting for number learning (De La Cruz et al. 2014). We will then present a novel developmental robotics model, and experiments, on Theory of Mind and its use for autonomous trust behaviour in robots (Vinanzi et al. 2019). The implications of such embodied approaches for cognition in AI and the cognitive sciences, and for robot companion applications, will also be discussed.

Kate Larson

Cooperative AI

Problems of cooperation are ubiquitous and important. They can be found at scales ranging from our daily routines (such as driving on highways, scheduling meetings, and working collaboratively) to global challenges (such as peace, commerce, and pandemic preparedness). In a recent paper, colleagues and I argued that since machines powered by artificial intelligence are playing an ever-greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation. That is, AI needs social understanding and cooperative intelligence to integrate well into society. In this talk I will discuss what we mean by this and outline opportunities and challenges that may arise as we put cooperation at the centre of our research in AI.

Shih-Chii Liu

Edge AI with Neuromorphic Spiking Sensors

A fundamental organizing principle of brain computing, enabling its amazing combination of intelligence, quick responsiveness, and low power consumption, is its use of sparse spiking activity to drive computation. The development of higher-performance, more usable neuromorphic spike-event-based visual Dynamic Vision Sensors and auditory Dynamic Audio Sensors, along with versatile hardware such as FPGAs, has stimulated exploration of real-time edge sensor processing for wearable and IoT platforms. These sensors enable "always-on" low-latency system-level response time at lower power than conventional sampled sensors. We will describe event-driven deep neural networks that process the sensor data, and the real-time hardware implementation of event-driven delta networks, together with a bio-inspired audio sensor, for edge audio tasks such as voice activity detection (VAD) and keyword spotting (KWS) that are prevalent on always-on edge audio devices. We will further give a demonstration that combines a bio-inspired spiking cochlea front-end with a deep neural network classifier, showing end-to-end audio inference.
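The event-driven delta networks mentioned above exploit temporal sparsity: a layer recomputes its output only for inputs whose change since their last update exceeds a threshold, so slowly varying signals trigger very little computation. The sketch below illustrates that principle in plain NumPy; the layer size, threshold and ReLU non-linearity are arbitrary assumptions for illustration, and this is not the speaker's hardware implementation.

```python
import numpy as np

class DeltaLayer:
    """Toy delta layer: recompute outputs only for inputs that have
    changed by more than a threshold since they last 'fired'."""

    def __init__(self, weights, threshold=0.1):
        self.W = weights                          # (n_out, n_in) weight matrix
        self.theta = threshold                    # per-input change threshold
        self.x_ref = np.zeros(weights.shape[1])   # last transmitted input values
        self.acc = np.zeros(weights.shape[0])     # accumulated pre-activation

    def step(self, x):
        delta = x - self.x_ref
        fired = np.abs(delta) > self.theta        # sparse set of changed inputs
        self.x_ref[fired] = x[fired]              # update reference for fired inputs only
        # Work is proportional to the number of fired inputs, not the layer size.
        self.acc += self.W[:, fired] @ delta[fired]
        return np.maximum(self.acc, 0.0), int(fired.sum())   # ReLU output, ops proxy

rng = np.random.default_rng(0)
layer = DeltaLayer(rng.standard_normal((4, 8)), threshold=0.1)
x = np.zeros(8)
for t in range(5):
    x = x + 0.05 * rng.standard_normal(8)         # slowly varying "audio feature"
    y, n_fired = layer.step(x)
    print(f"t={t}: {n_fired}/8 inputs triggered an update")
```

Because most feature components change little between consecutive audio frames, only a small fraction of the matrix-vector work is performed at each step, which is where the power savings on always-on devices come from.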

Jianwei Zhang

Crossmodal Learning Approaches to Robust Autonomous Systems

Robot systems are needed to solve real-world challenges by combining machine automation with the realization of cognitive abilities in ICT systems. There has been substantial progress in deep neural networks and AI in terms of individual data-driven benchmarks. However, such existing data-driven systems are not yet crossmodal, and they are not robust in a dynamic and changing world. My talk will first introduce concepts of cognitive systems that allow a robot to better understand multimodal scenarios by integrating knowledge and learning, and then the modules necessary to empower the robot's intelligence. I will then explain how a robot can enhance its model by learning from experience, and how such crossmodal learning methods can be realized in intelligent robots. Finally, I will demonstrate several novel robot systems in potential applications.

Bram Vanderborght

I need somebody: combining AI, model-based control and embodied intelligence

Human-robot collaboration has great potential to address societal challenges (such as an ageing population and the need for better and healthier work) and to create new economic markets. This includes applications in health, such as exoskeletons, prostheses and social robots, and in manufacturing, with cobots, soft grippers and exoskeletons. Moreover, by introducing self-healing materials, soft robots can be produced that heal damage, contributing to more sustainable robots. In this talk we will focus on the combination of embodied intelligence, model-based control and AI techniques to make robots safe and energy efficient, but also useful for human-centred robotics. This research involves extensive collaboration not only with other technical fields, such as engineering, materials science and AI/computer science, but also with the medical, human and social sciences. At the VUB we started the Brussels Human Robotics Research Center, BruBotics, a joint initiative of eight research groups of the Vrije Universiteit Brussel (VUB) sharing a common vision: improving our quality of life through human-centred robotics. We therefore also introduce Homo Roboticus, to keep human values central in a robotised world.

Plamen Angelov

Exemplar-based Deep Learning

Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry, but also of society and policy makers. However, even the most powerful (in terms of accuracy) algorithms, such as deep learning (DL), can give a wrong output, which may be fatal. Due to the cumbersome and opaque models used by DL, some authors have started to talk about a dystopian "black box" society. Despite the success in this area, the way computers learn is still principally different from the way people acquire new knowledge, recognise objects and make decisions. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired exemplars, not by using parametric analytical models. Current ML approaches are focused primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort needed to collect and label training data, and rely on assumptions about the data distribution that are often not satisfied. The ability to detect the unseen and unexpected, and to start learning such new classes in real time with no or very little supervision, is critically important, and is something that no currently existing classifier can offer. The challenge is to fill this gap between a high level of accuracy and semantically meaningful solutions.
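As a rough illustration of the exemplar-based idea described above (classifying by similarity to stored exemplars, and learning an unseen class on the fly instead of forcing a known label), here is a minimal sketch. It assumes plain Euclidean distance and an arbitrary novelty radius, and illustrates the principle only; it is not the speaker's actual method.

```python
import numpy as np

class ExemplarClassifier:
    """Toy exemplar-based classifier: label inputs by their nearest stored
    exemplar, and treat inputs far from all exemplars as a new class."""

    def __init__(self, novelty_radius=1.0):
        self.exemplars = []            # list of (vector, label) pairs
        self.radius = novelty_radius   # beyond this distance, input is 'unseen'

    def add(self, x, label):
        self.exemplars.append((np.asarray(x, dtype=float), label))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - e) for e, _ in self.exemplars]
        nearest = int(np.argmin(dists))
        if dists[nearest] > self.radius:       # unlike every known exemplar
            label = f"new_class_{len(self.exemplars)}"
            self.add(x, label)                 # learn the unseen class immediately
            return label
        return self.exemplars[nearest][1]

clf = ExemplarClassifier(novelty_radius=2.0)
clf.add([0.0, 0.0], "cat")
clf.add([5.0, 5.0], "dog")
print(clf.predict([0.3, -0.2]))   # near the 'cat' exemplar  -> cat
print(clf.predict([9.0, 0.0]))    # far from all exemplars   -> new_class_2
print(clf.predict([9.1, 0.1]))    # near the new exemplar    -> new_class_2
```

A real system would need a principled similarity measure and novelty threshold; the point of the sketch is that the "model" is just the stored exemplars, so it is interpretable by inspection and can grow new classes in real time with no retraining.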

Book now

Contact and booking details

Name
Kieran Teasdale
Email address
K.Teasdale@lboro.ac.uk
Cost
Free
Booking required?
Yes
Booking information
Please register via Zoom to join this webinar using the 'Book Now' link above