Expert comment: AI Safety Summit - why it's important and what can be expected

For the next two days, leaders, tech executives, and experts, including Elon Musk, will gather at Bletchley Park for the AI Safety Summit 2023.

Dr John Woodward

The event, taking place 1-2 November, will consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action.

Dr John Woodward, a Reader in Computer Science at Loughborough University, shares his thoughts on why the summit is needed, the key topics and insights expected to emerge, and the critical challenges attendees are set to confront.

Why is the AI summit important? 

"We have seen huge advancements in artificial intelligence in the last year and we are all blown away by the jaw-dropping results. This shock includes many of the experts who lead the field," said Dr Woodward. 

"Just as we talk about the workplace as pre-COVID and post-COVID, we will start talking about technology in terms of pre- and post-ChatGPT eras. That is how significant this period will seem when we look back on it.  

"Humanity has reached an 'elbow point' where this technology is producing results which are sophisticated and human-like. That is what leaves us vulnerable to deepfakes and disinformation.  

"We will not be suddenly 'terminated' by AI, but the risks are more subtle and deeper than the plot of a Hollywood movie. That is the danger."

What do you expect to come out of the summit? 

Dr Woodward said: "The applications of AI are vast, more than we can imagine. Let's consider social media.  

"If we look back at the short history of social media over the past 20 years, we see that it has gone from simple platforms innocently sharing photographs of holidays, food and pets, to inciting racial hatred, encouraging self-harm and even suicide, and spreading other antisocial content and ideologies.  

"AI will have many benefits that we are aware of, but there will also be hidden dangers, just as there have been with the overuse and addictive nature of social media.  

"I do not think we can currently come up with a set of regulations to control the use of artificial intelligence. Regulation will need to be developed as unforeseen applications and uses emerge, just as we have done with other technologies, but with AI we will have to move faster.

"I expect an international task force to be set up; this should include experts in artificial intelligence, but also lawmakers, politicians, and people with a deep understanding of ethics and morality." 

What are the major challenges involved? 

"One of the major challenges of regulating artificial intelligence is obtaining agreement between countries. Of course, each country wants a competitive edge over the others, and we will all see the risks and benefits of artificial intelligence differently," said Dr Woodward.

"Behind closed doors, how will we know how artificial intelligence is actually being used? In some circumstances it will be very difficult to monitor the development of products supported by artificial intelligence.  

"It will also be difficult to measure the 'human factors' and the impact on people, especially their mental well-being.  

"We should not lose sight of the fact that artificial intelligence is a panacea for many, but the downside is that it carries many unforeseen consequences, and these need to be considered.  

"We're all aware of how mechanisation has changed the labour market and how computers have changed our day-to-day work. We can expect a similar shift in working patterns with artificial intelligence, and this will include reskilling the workforce. 

"Just as we have a digital divide between the IT-literate and the IT-resistant, we should be aware of society splitting into AI embracers and AI sceptics or rejectors."