Artificial intelligence is reshaping how research is conducted across universities in Europe, from literature reviews and data analysis to supervision and collaboration. But alongside opportunity comes uncertainty: academic supervisors, doctoral training leaders and research organisations face rapidly evolving AI tools with unclear institutional guidance and emerging ethical challenges.
That’s where the AI Researchers project steps in: a European initiative designed to help higher education institutions navigate responsible AI use in research.
Project Purpose: Clarity, Confidence, and Responsible Practice
The AI Researchers project aims to support academic supervisors, research leaders and institutions as they respond to the widespread adoption of AI tools in research practice. Rather than prescribing rigid rules, the initiative focuses on understanding how AI is actually used, and how it can be used responsibly and ethically.
Across Europe, institutions vary widely in how they approach AI, from welcoming experimentation to raising concerns about research integrity, authorship, and oversight. To respond, AI Researchers is gathering real-world insights into:
- How supervisors and research leaders perceive and interact with AI tools
- What cultural, institutional, and practical barriers hinder adoption
- Where opportunities lie for credible, accountable, and trust-enhancing use of AI in research contexts
Building Practical Tools for Institutions and Educators
Based on this evidence base, the project is developing a set of resources and frameworks designed to make responsible AI adoption practical — not just aspirational:
AI Engagement Framework
This framework will offer realistic interventions and guidance for departments and doctoral schools to build AI literacy into their teaching, supervision and research cultures.
AI Readiness Toolkit
A collection of reusable assets, including case studies, testimonials, and formats for peer learning, the toolkit will support supervisors and research development teams as they integrate AI into everyday research practice.
Together, these outputs aim to bridge the gap between uncertainty and confidence, helping institutions use AI in ways that enhance research quality, transparency and trust.
From Hesitation to Leadership
Rather than treating AI adoption as a risk to be avoided, AI Researchers encourages universities to embrace AI responsibly, with a clear understanding of its benefits and limitations. This means equipping supervisors and research leaders with:
- Evidence-based insights into AI adoption challenges
- Practical strategies for managing ethical and integrity concerns
- Tools that can be immediately applied in research training and supervision contexts
By doing so, the initiative supports quality research in an AI-enabled world, one where innovation and responsibility go hand in hand.