Responsible AI & Data Governance

Last updated: April 2026

At Strasys, we are committed to the ethical and responsible use of Artificial Intelligence in healthcare decision-making. Our AI-enabled tools, including the Strasys Intelligence Agent (SIA) and the analytics products it powers, support healthcare leaders in making informed, evidence-led decisions. We understand the profound impact that AI can have on patient care, organisational governance, and workforce planning, and we take our responsibility to act with transparency, integrity, and accountability seriously.

This policy outlines our commitment to ethical practices, transparency in AI development, responsible data governance, and the clinical safety of AI systems within healthcare.

1. Our position on AI in healthcare

We believe AI should accelerate insight, not replace judgement. Every AI-enabled capability within the Strasys platform is designed to support human-led decision-making, not to automate clinical or governance decisions.

Our tools sit at the management and governance layer of healthcare organisations. They help leaders see patterns, surface risks, triangulate data, and make better-informed choices. They do not sit on clinical pathways, do not make treatment decisions, and do not interact directly with patients.

2. The Strasys Intelligence Agent (SIA)

SIA is the AI agent architecture that powers intelligent capabilities across our product suite. Its first live deployment is MIA (the Maternity Intelligence Agent) within the Strasys Maternity Index (SMI).

2.1 How SIA works

SIA analyses structured datasets provided by partner organisations, including operational, workforce, financial, and governance data. It identifies patterns, anomalies, and correlations that support decision-making. All outputs are generated using structured data models with defined parameters, not open-ended generative processes.
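As a minimal illustration of what rule-bounded analysis of structured data can look like (this is a generic sketch, not SIA's actual implementation; the function name, threshold, and sample figures are invented for the example), anomaly detection with explicit, defined parameters might be as simple as flagging values whose z-score exceeds a set threshold:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return indices of values whose z-score exceeds the threshold.

    The threshold is an explicit, defined parameter: the same inputs
    always produce the same flags, unlike an open-ended generative
    process.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical monthly incident counts with one outlying month.
counts = [12, 14, 13, 15, 11, 42, 13, 14]
print(flag_anomalies(counts, threshold=2.0))  # flags index 5 (the value 42)
```

The point of the sketch is the governance property, not the statistics: every flag traces back to a stated rule and threshold that a reviewer can interrogate.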

2.2 Human oversight

Every SIA output is designed to be reviewed by a qualified professional before informing any decision. Our tools present findings alongside the underlying evidence, enabling users to interrogate, challenge, and contextualise the analysis. SIA does not make recommendations in isolation. It surfaces evidence for humans to act on.

2.3 Where SIA operates today

Today, SIA's only live deployment is MIA, the Maternity Intelligence Agent, within the Strasys Maternity Index (SMI).

3. Ethical use of AI

We recognise the ethical considerations inherent in using AI in healthcare. Our AI systems are designed to:

4. Transparency and explainability

We believe that AI systems used in healthcare must be understandable to the people who rely on them. Our approach includes:

5. Data governance and sovereignty

We are dedicated to maintaining the highest standards of data privacy, security, and governance. Our approach aligns with UK GDPR and other applicable data protection laws.

5.1 Data security

All data used by our AI models is stored securely under strict access controls and protected against unauthorised access and use. Our technical measures include encryption, access management, and secure infrastructure.

5.2 Data minimisation

We only process data that is necessary for the specific purpose of our analysis. We do not collect more data than required, and we do not use partner data for unrelated purposes.

5.3 Data sovereignty

Partner organisations retain ownership and control of their data at all times. Data provided to Strasys for analysis remains the property of the partner. We process it under agreed terms and return or securely delete it in accordance with the engagement agreement.

5.4 Informed consent and transparency

We ensure that data used for AI analysis is collected and shared with appropriate consent and governance. Partners are fully informed about how their data will be used, stored, and protected.

5.5 Anonymisation and pseudonymisation

Where appropriate, we anonymise or pseudonymise data to safeguard privacy while maintaining the integrity of the insights generated. In published case studies and impact stories, organisations and individuals are anonymised unless explicit permission has been given.
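To illustrate how pseudonymisation can preserve analytical integrity while safeguarding privacy (a minimal sketch, not Strasys's actual pipeline; the identifier format and key handling are invented for the example), a keyed hash maps each direct identifier to a stable token that cannot be reversed without the key:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    HMAC-SHA256 gives a stable mapping: the same identifier always
    yields the same token, so records can still be linked across
    datasets, but the original identifier cannot be recovered
    without the key, which is held separately from the data.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

key = b"example-key-held-separately"
token_a = pseudonymise("NHS-1234567890", key)
token_b = pseudonymise("NHS-1234567890", key)
assert token_a == token_b  # stable mapping preserves analytical linkage
```

The stable mapping is what distinguishes pseudonymisation from full anonymisation: insights that depend on linking a record across time or datasets survive, while the identity behind each token does not travel with the data.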

6. Clinical safety and regulatory alignment

Our products operate at the management and governance layer of healthcare organisations. They do not sit on clinical pathways and do not make or influence individual patient treatment decisions.

6.1 Clinical review

All analytics products are clinically reviewed by qualified healthcare professionals within our team, led by the Chief Medical and Innovation Officer. This review covers the clinical relevance, safety, and appropriateness of the analytical frameworks and outputs.

6.2 Regulatory position

Because our tools support organisational decision-making rather than clinical pathways, they do not fall within the scope of DCB0129/DCB0160 (clinical risk management standards for health IT systems). We apply proportionate governance appropriate to management and governance tools, including structured testing, version control, and clinical oversight.

6.3 Continuous monitoring

We regularly assess the performance and safety of our AI systems against evolving regulatory and clinical standards. Where new guidance emerges from NHS England, the Medicines and Healthcare products Regulatory Agency (MHRA), or other relevant bodies, we review and update our approach accordingly.

7. Accountability and governance

Strasys maintains a governance framework to oversee the ethical use of AI in our products:

8. Limitations of AI

We recognise that AI, while powerful, has limitations. Our approach is built on honest communication about what AI can and cannot do:

9. Continuous improvement

Responsible AI is an ongoing commitment, not a static policy. We are dedicated to the continuous development and improvement of our AI systems:

10. Partnering for responsible AI

Responsible AI in healthcare is achieved through collaboration. We work closely with partner organisations to ensure that AI solutions are deployed in ways that are safe, transparent, and aligned with their own governance requirements.

Our goal is to:

11. Contact

For questions about this policy, our approach to responsible AI, or how we govern data within our analytics products, please contact us.

You can also write to us at:

Strasys Limited
3rd Floor Marlborough House
298 Regents Park Road
Finchley, London, N3 2SZ
Company number: 09396355
