Bridging Minds: Exploring AI Adoption Through the Royal Society Pairing Scheme


Dr Yang Lu shares her reflections after shadowing the Principal Researcher for AI Opportunities at the Department for Science, Innovation and Technology as part of The Royal Society Pairing Scheme.

What does it take to ensure that artificial intelligence is adopted responsibly and effectively across society? And how can researchers support that journey—not just through innovation, but through collaboration with those shaping the policies that guide adoption?

Last month, I had the opportunity to explore these questions up close through the Royal Society Pairing Scheme, a unique initiative designed to connect scientists with decision-makers at the heart of UK government. I was paired with Maeve Fitzmaurice, Principal Researcher for AI Opportunities at the Department for Science, Innovation and Technology (DSIT). Over an energising week in Westminster, we explored what meaningful AI adoption looks like—from both sides of the evidence-policy interface.

Walking the Corridors of Power

From the outset, it was clear that the scheme offers much more than a shadowing experience. Alongside a cohort of researchers from across disciplines, I took part in a series of workshops, roundtables, and briefings that unveiled the machinery of government. We delved into how science advice flows within Whitehall, how Select Committees and departments operate, and where researchers can most effectively contribute.

A guided tour of Parliament reminded us that policymaking is a human process, influenced by history, relationships, and timing as much as evidence. It was inspiring—and humbling—to consider how technologies like AI will shape future debates in these very halls, and how essential it is that those debates are informed by balanced, timely research.

Translating AI Research into Policy Insight

Through my pairing with Maeve, I gained a clearer appreciation of the civil servant’s role as a translator and facilitator, particularly when it comes to emergent technologies. The adoption of AI across sectors raises not only technical questions but also questions of governance, capability, and public trust. Watching DSIT policy professionals navigate this terrain, I saw how essential it is to align technical innovation with policy realities.

This has deep resonance with my research in trustworthy digital technologies, privacy, and human-centred AI. The experience reframed how I think about engagement: rather than focusing solely on research outputs, I now see the need to shape how, when, and with whom those insights are shared. It’s not just about evidence—it’s about relevance, trust, and timing.

What I’m Taking Forward

The scheme has not only inspired me, but also clarified specific steps I want to take to support responsible AI adoption through research and engagement:

  1. Translate complex findings into accessible insights.
    I will develop short, policy-friendly summaries of my work on AI safety and digital trust, designed for non-technical audiences who need quick, actionable understanding.
  2. Engage earlier and more often.
    I will keep in closer touch with policy cycles—monitoring consultations, calls for evidence, and knowledge exchange opportunities—especially in areas related to AI ethics and data protection.
  3. Nurture long-term connection.
    Policy impact doesn’t come from single papers; it comes from ongoing dialogue. I’ll continue to build partnerships with civil servants, researchers, and interdisciplinary networks working at the intersection of technology and governance.
  4. Champion human-centric AI in policy discussions.
    As AI systems become more embedded in public services, it's essential to include diverse voices—especially those focused on accountability, inclusion, and human behaviour. I’ll advocate for these perspectives in both academic and policy spaces.

Final Reflections

This experience has shifted my view of what impact looks like. It’s not just about publishing papers or influencing outcomes—it's about becoming part of a system where science and policy co-evolve. As researchers, we have a responsibility to inform, not persuade; to listen, as well as to speak; and to make our work legible to those shaping the rules of our digital future.

I’m deeply grateful to Maeve for her openness and generosity, and to DSIT for the support and insights provided throughout the week. I also wish to express my appreciation to the Royal Society for establishing a thoughtful and empowering programme for academics. For anyone working in areas where science meets society—especially in fast-moving fields like AI—I can’t recommend this experience highly enough.

Find out more about the Pairing Scheme here: https://royalsociety.org/grants/training-mentoring-partnership-schemes/pairing-scheme

Loughborough University Policy Unit

Loughborough University’s Policy Unit provides a channel for the University’s research and researchers to realise productive and beneficial impact on public policy at local, national and international levels, by promoting an evidence-based approach to practical, on-the-ground projects responding to public policy challenges.

If you’d like to get in contact with the Policy Unit, please email policy@lboro.ac.uk, or call +44 (0)20 3805 1343.

Sandy Robertson, Policy Communications Officer