Skilled Collaborative Robotics

The Skilled Collaborative Robotics Lab has been set up to investigate and validate research that will enable industrial robots to acquire more human-like skills, carry out increasingly varied tasks, and cope robustly with variation.

The ability to carry out increasingly complex tasks effectively, both individually and collaboratively, is central to adding value in the workplace. Humans have evolved an inherent ability to learn and refine new skills, yet human performance is limited by physical capability and exposure to external risks. Automation and robotics have gradually replaced human labour in dangerous, dirty and difficult tasks since the 1960s. Despite the transformative effect robots have had on manual work, they remain limited to repetitive tasks with well-controlled variation.

The central focus of this lab is on developing robots with ever more capable perception, reasoning and learning (cognition) and acting modalities, and on harnessing these to carry out ever more demanding industrial processes. New methods are being developed for devising strategies and for coping with expected and unexpected variations and difficult-to-predict behaviours, so that robotic systems gain more human-like dexterous skills and adaptation capabilities. Sensing and perception are key to recognising variations and changes, while learning and reasoning enable robots to adapt their behaviour and ultimately to discover more effective strategies for achieving their goals.

Whilst significant progress has been made in perception, cognition and acting in recent years, research in robotics and autonomous systems has focused largely on service robots working in extreme environments of varying severity. This lab focuses on harnessing and evolving robotic skills to meet industrial performance requirements in terms of high speed, accuracy and right-first-time quality. Emerging artificial intelligence, cognition, perception and mechatronic models and technologies are being investigated in order to establish resilient, validatable robotic skills.

This lab adopts a multi-disciplinary systems approach to understanding skill within the context of the wider workplace. Understanding and reasoning about the dynamic state of the environment and of other actors within a workspace is important for improving both the effectiveness and the capability of a human-robot system to carry out ever more demanding tasks. For instance, optimising the synchronisation of interdependent actions between actors can significantly improve the overall performance of the system; this may require both more effective means of communication and an ability to anticipate each other's behaviour. Robots learning from human demonstrations is another form of collaboration, one that is expected to significantly reduce robot programming times and to lead to the emergence of complex robot skills.

Fundamental Research Questions

  • What models and representations will allow us to precisely control highly non-linear manipulators, devices and objects under variation within industrial workspaces?

  • What representations will allow robust and accurate interaction with industrial objects and tools in dynamically changing, semi-structured environments?

  • Can we remove the need for robot programming by allowing industrial robots to learn directly from observations and experience?

  • What transfer methods can robots use to share their experience and learned skills within a wider network of actors?

  • How can we harness new manipulator and end-effector designs to create more human-like manipulation and process-interaction capabilities, e.g. dynamically controlled rigidity during variable force application, or controlled grip during time-dependent process variations and changes in component behaviour?

  • Which multi-modal communication methods and protocols will enable humans and robots to work closely together in shared workspaces and coordinate common tasks more effectively, especially in the presence of noise and uncertainty?

  • How can we create robots capable of building shared plans for interaction with multiple actors operating as a team, and of re-planning, through interaction, in response to expected and unexpected changes during execution?

  • Can robots use enhanced perception to learn to be efficient during multi-actor team interactions by predicting future actions, and thereby acting pre-emptively during shared plan execution?

  • How do we create robots that are capable of acting as efficient team workers whilst remaining completely behaviourally safe, even when exerting high forces and working with dangerous tools in restricted workspaces?

  • How do we determine the inherent risk, usability and acceptance of human-robot shared workplaces which allow high degrees of autonomy and guarantee the safety of all actors?

  • How can we verify and validate that the required performance (e.g. ‘right first time’ quality) will be achieved by a non-deterministic, learning and evolving system?

  • What is the most appropriate level of human-robot collaboration for a given environment and task complexity?


A highly instrumented human-robot workspace has been created in the Centre to allow the rapid deployment and validation of new sensing, reasoning and acting capabilities. A range of commercial off-the-shelf volumetric tracking systems with varying degrees of accuracy is available both to investigate human and robot skills and to validate and benchmark new algorithms and mechatronic solutions. High-end computational facilities allow fast, large-volume data acquisition and analysis in situ.

Our team

Dr Niels Lohse

Centre Director & Reader in Manufacturing Automation and Robotics

Dr Peter Kinnell

Reader in Metrology & High Value Manufacturing Beacon Lead

Dr Thomas Bamber

Lecturer in Robotics & Autonomous Systems

Dr Ali Al-Yacoub

Research Associate