Stage 1
Sep 16, 2026
15:00

Centering Humans in Artificial Intelligence-Driven Research and Innovation

What would change if the success of AI-driven research were measured by the risks it prevents, not only by the breakthroughs it produces?

Nakshathra Suresh

About this session

As artificial intelligence (AI) is embedded into research and innovation workflows, most governance discussions remain focused on technical performance, regulatory compliance or competitive advantage. Far less attention is paid to how these systems reshape who carries risk, whose knowledge counts and which harms are rendered invisible in the process of innovation.

My work sits at the intersection of cyber criminology and AI safety, where I regularly examine how the deployment of technologies creates new forms of vulnerability for communities. This perspective offers a different entry point into conversations about AI in research-intensive environments - not from the standpoint of optimisation, but of harm. Across sectors, research and engineering teams are increasingly making upstream decisions that determine downstream human impact. Yet the people most affected by those decisions are rarely present in the design, testing, or evaluation stages. In practice, this means that issues such as surveillance, privacy risks, technology-facilitated gender-based abuse, unintentional discrimination and psychosocial risk are treated as externalities rather than as core research variables.

This talk introduces the idea that human risk must be embedded into the way research is conducted, not added as a compliance layer after deployment. Drawing on research and case work in digital safety, public advocacy, and interdisciplinary AI education, I will show how:

  • Harm is often produced through ordinary design and data decisions rather than malicious intent
  • Marginalised users function as “early warning systems” for systemic risk
  • Continuous consent and community feedback loops can operate as research infrastructure, not just ethical add-ons

For leaders in research-intensive industries, this represents a cultural and organisational challenge. It requires moving from a model where responsibility is located in ethics teams to one where safety is a shared methodological practice across the innovation lifecycle, reinforced by 'tone-from-the-top' commitment.

This session offers a social science reframing: if AI is accelerating the pace of discovery, then the ability to identify and mitigate human harm must also become a core research capability.

Ticket sales for EviDynamics will start soon!