AI Policy for Research & Evaluation 

At Agulhas, we approach artificial intelligence (AI) as a tool to support human expertise in evaluation, research and advisory work. We begin with a simple question: how can AI add value to our work – while our skills and expertise remain the key driver? This question shapes our decisions about when and how to use AI, tailoring our AI approach and tools to the policies and needs of our clients and to the opportunities and challenges identified for each research or advisory task we take on. 

We seek to take advantage of opportunities and efficiencies from using AI for research and evaluation while retaining the methodological rigour and nuanced analysis that established our reputation. Cognisant of limitations and built-in biases as well as opportunities, we see AI as a powerful tool that can augment, but not replace, human expertise. Agulhas’ reports and publications are authored by humans, who remain accountable for interpretations, findings and recommendations. This means choosing when to use and when not to use AI as part of a careful design process.  

Our AI policy is integrated with our ethical research, information security, environmental and GDPR policies and is part of the Agulhas Code of Conduct for staff and external experts. 

The four principles guiding our use of AI 

Our AI policy is organised around four principles that define our commitments. These principles are aligned with recognised international guidance, including the UK Government’s AI Playbook and the OECD AI Principles. 

1. Purposeful and ethical application 

We use AI strategically and selectively to add genuine value to research and evaluation. Decisions about AI use are made in collaboration with clients and partners and are grounded in our wider ethical commitments to do no harm, obtain informed consent, maintain confidentiality, and protect data, in line with our ethical research policy. We leverage AI based on fitness for purpose and our capabilities, rather than novelty, and often combine it with existing tools. 

When AI tools are used in participant-facing activities (for example, for recording and automated transcription in qualitative interviews), we incorporate this into consent and briefing processes to ensure that informed consent remains meaningful. Where participants are children or members of marginalised or at-risk groups, we apply additional safeguards proportionate to the risks involved. Any potential efficiency gains from AI-assisted processing are weighed carefully against ethical obligations to research participants. 

We recognise that AI use carries a significant environmental cost through the energy consumed during model training and use. We therefore support streamlined, efficient use of AI through staff training, shared prompts and team-based collaboration to avoid duplication of effort and to promote consistent, high-quality practice. 

2. Information security by design

At Agulhas, we integrate AI into our established data and knowledge management protocols, developed to routinely handle sensitive confidential information and personal data. Our data handling policy, AI policy, and Code of Conduct categorically prohibit the undisclosed or inappropriate use of off-the-shelf cloud tools on primary data by Agulhas staff and any associates working for us. 

AI use is discussed with clients at the outset of each project and calibrated to the sensitivity of materials, contractual obligations and data-protection requirements. Our teams are supported with practical guidance on how to use AI safely and effectively within these agreed parameters. 

Agulhas meets Cyber Essentials Plus certification standards and uses secure, enterprise-grade platforms for data storage and collaboration. We restrict the use of cloud-based AI tools to materials that are in the public domain or have been explicitly cleared and anonymised, and we opt out of model training wherever possible. 

3. Methodological integrity 

We use AI to strengthen, not complicate, methodological rigour and analytical integrity. Our findings must remain meaningful, evidence-based, and actionable, reflecting the complex challenges our clients and their stakeholders face. Our assessments must withstand scrutiny from a wide range of stakeholders, including government officials, academics, civil society and our peers. Where appropriate, we use innovative applications, such as retrieval-augmented generation (RAG) or AI-supported literature reviews, while remaining grounded in social science ethics and methodological best practice. 

Agulhas applies a structured decision-making process to AI use that begins with sound research design. We assess where AI can add value, incorporate relevant ethical and security standards, and build in human oversight at every stage. No element of our work is fully AI-dependent. 

We are explicit about the limitations of off-the-shelf AI tools for social science research. These include tendencies towards bias, sycophancy, hallucinated outputs, and uneven performance across languages, accents and social groups.  

We continue to value direct engagement with source materials. In some contexts, we deliberately choose not to use AI, or we combine it with researcher-led data analysis – not merely to counteract and control for AI limitations but to build essential capabilities in less experienced staff and maintain our connection to source material, which is also critical for verifying AI-assisted outputs.  

4. Transparent and adaptable practice 

We will be open about when, how and why we use AI. Our approach is guided by client needs and policies, industry standards and compliance requirements, and project-specific methodological, ethical and practical considerations. Transparency is not just about disclosure; it is a foundation for responsible innovation, supported by continuous internal reflection, documentation and learning. 

Our approach is deliberately adaptable. We recognise that client requirements, ethical standards and regulatory frameworks vary across contexts and evolve over time. We are also cognisant of how fast AI tools are changing, and the need to adjust our policy and practice alongside these changes. 

We invest in AI literacy across the organisation, creating an environment in which staff are encouraged to discuss AI use openly and seek guidance when needed.  

Translating principles to practice

Principles alone are not sufficient to guide day-to-day decisions. Our safeguards span the full project cycle, from design and data collection to analysis, reporting and dissemination, and are supported by clear organisational responsibilities. 

The AI landscape is evolving rapidly, as are the tools, regulations and expectations that shape responsible use. Our AI policy is therefore a living document, updated as new developments emerge and as we learn from practice. Through this approach, we aim to use AI thoughtfully, responsibly and transparently, always in service of high-quality, ethical research and evaluation. 

If you need more information about our AI policy and approach, please get in touch.