The application of collaborative robots in manufacturing: regulation, governance and ethics

With increasing strides being made in technological development, we are encountering more automation, artificial intelligence (AI) and robots in everyday life. As interactions with these technologies increase, pressing questions emerge as to how they should be governed to ensure they are used in safe and ethical ways, especially when they interact with people – in other words, how we can strike the right balance between innovation and regulation.

Smart robotic systems are envisaged to work in collaboration with humans, as a hybrid work team that combines the best of both: the robot's precision with the human's dexterity and problem-solving skills. This collaboration will offer manufacturers greater flexibility, enhanced process adaptability and optimised production capacity. Unlike traditional industrial robots designed for mass production, which must be isolated behind safety fences to protect workers, current collaborative applications interact more closely with humans.

However, for full collaboration to take place, various social and technical barriers must be overcome to ensure robots’ acceptance at both organisational and societal levels. Chief among these barriers is ensuring that smart collaborative systems are safe and will not harm their human ‘colleagues’. Other concerns relate to privacy and data protection, and to further ethical aspects of human-robot interaction in the workplace.

Despite the potential barriers set out above, AI and smart robotic systems can drive growth in the UK economy, as set out in the recently published Industrial Strategy. They can also improve the speed and efficiency of tasks which currently require a great deal of human resource. This briefing paper will outline research carried out by the Smart Cobotics Centre and offer practical recommendations and actions which should be considered when applying this technology.

Research

Research and engagement activities in the Smart Cobotics Centre focus on the use of collaborative robots and smart robotic systems in manufacturing. Some of the topics identified will also be relevant to their use in other sectors, while others are more specific to the manufacturing context. Our Centre has made significant strides in cutting-edge research, but many collaborative and smart robotic innovations remain confined to the laboratory due to technical and non-technical barriers. As social science researchers at the Smart Cobotics Centre, we have sought to gain a deeper understanding of the non-technical aspects of integrating smart robotic systems into the workplace, while identifying challenges to their deployment and acceptance, focussing on regulatory, governance and ethical frameworks.

For instance, achieving human-robot collaboration necessitates rethinking the role of the future workforce on the shop floor, particularly in relation to human autonomy and agency in interaction with robotic counterparts. While the term ‘collaboration’ may suggest that humans and robots function as teammates, we argue that future smart robotic systems will primarily serve as aids to human workers. In this context, the professional development of the future workforce, particularly through reskilling and upskilling, is crucial. One of our proposals is to place greater emphasis on issues related to meaningful work, fostering a dignified and ethically driven working environment.

Furthermore, the legal and regulatory frameworks which apply to collaborative robots and smart robotic systems need to be clarified. Fundamental principles of health and safety and of data protection apply – and must be safeguarded – in robotic interactions with humans. Developments in the EU, including updates to safety laws and the AI Act, should be taken into account by the UK and possibly followed. More clarity is needed on liability for harm across UK jurisdictions to support the integration of collaborative robots and smart robotic systems in workplaces.

Recommendations

The recommendations set out in the White Paper identify five key themes: Clarity, Safety, Privacy, Equity and Dignity. These are summarised below, and set out in more detail in the White Paper.

  1. More clarity is required on how robotic systems are regulated in the UK, which will require more resources and support for the relevant regulatory bodies. The UK Government (and devolved administrations where appropriate) should ensure there is clarity about how existing legal and regulatory frameworks would apply to collaborative robots.
  2. All stakeholders should work together to ensure that robotic systems can operate safely alongside human workers. Appropriate frameworks should be developed to ensure safety, taking account of physical, psychological and other forms of safety in manufacturing settings. The UK should consider aligning with the new EU safety framework.
  3. The data protection and privacy of humans must be upheld in deployments of robotic systems, so that only necessary data about workers and others is collected.
  4. Robotic systems should be designed in an equitable manner, to prevent discrimination and the deepening of societal divides.
  5. Human dignity should be upheld in human-robot collaboration, which involves ensuring meaningful work for humans. This means recognising the complementary strengths of both human workers and robotic systems, using those strengths to design systems, and ensuring that human workers retain a sense of meaning in their roles.