About the lecture
For decades, popular culture has framed new technology, particularly artificial intelligence, as an existential threat. We have been promised autonomous machines ready to turn against their creators. Instead, AI systems are now embedded in everyday life, writing emails, drawing memes and helping to make decisions at scale.
They do not resemble killer robots, but they are reshaping how we experience information, authority and authenticity.
Professor Buckley’s Inaugural Lecture explores what this shift means for digital trust. If machines can imitate human expression, produce convincing misinformation and automate persuasion, how should we think about security? What happens when authenticity becomes difficult to verify and confidence becomes easier to manufacture? The reality is that facing an army of killer robots might be an easier problem to solve.
Drawing on research spanning insider threat, digital identity and generative AI, he argues that the central challenge of contemporary cyber security is not simply preventing technical breaches, but understanding how trust is formed, exploited and calibrated in digital environments.
Rather than focusing on dystopian futures, his lecture considers the everyday realities of AI-enabled systems and asks how we design technologies and societies that remain secure, resilient and worthy of trust.
About the lecturer
Professor Oli Buckley specialises in cyber security, with a particular focus on the human dimensions of security.
His research examines how trust is formed, manipulated or undermined as emerging technologies become embedded in everyday life. His work ranges from technology-first identification methods based on behavioural biometrics to creative approaches for engaging with cyber security concepts.
He has addressed insider threat, digital identity and online deception, exploring how security failures often arise from social and behavioural dynamics rather than purely technical weaknesses. More recently, his research has focused on the implications of generative AI for security, including trust calibration, misinformation and resilience.
He is interested in how security extends beyond technical controls, drawing on design, education and participatory approaches. This has led to the development of innovative methods – including game-based research, graphic novels, picture books and public engagement initiatives – to explore how people understand and respond to technology.
Before joining the University, he held academic positions at Cranfield University and the University of East Anglia. He was previously a postdoctoral researcher at the University of Oxford. His career began in software engineering, following a PhD at the University of Wales (Bangor), where he specialised in computer graphics for surgical simulation.
For further information on this lecture, please contact the Events team.