At Strasys, we are committed to the ethical and responsible use of Artificial Intelligence in healthcare decision-making. Our AI-enabled tools, including the Strasys Intelligence Agent (SIA) and the analytics products it powers, support healthcare leaders in making informed, evidence-led decisions. We understand the profound impact that AI can have on patient care, organisational governance, and workforce planning, and we take seriously our responsibility to act with transparency, integrity, and accountability.
This policy outlines our commitment to ethical practices, transparency in AI development, responsible data governance, and the clinical safety of AI systems within healthcare.
1. Our position on AI in healthcare
We believe AI should accelerate insight, not replace judgement. Every AI-enabled capability within the Strasys platform is designed to support human-led decision-making, not to automate clinical or governance decisions.
Our tools sit at the management and governance layer of healthcare organisations. They help leaders see patterns, surface risks, triangulate data, and make better-informed choices. They do not sit on clinical pathways, do not make treatment decisions, and do not interact directly with patients.
2. The Strasys Intelligence Agent (SIA)
SIA is the AI agent architecture that powers intelligent capabilities across our product suite. Its first live deployment is MIA (the Maternity Intelligence Agent) within the Strasys Maternity Index (SMI).
2.1 How SIA works
SIA analyses structured datasets provided by partner organisations, including operational, workforce, financial, and governance data. It identifies patterns, anomalies, and correlations that support decision-making. All outputs are generated using structured data models with defined parameters, not open-ended generative processes.
2.2 Human oversight
Every SIA output is intended to be reviewed by a qualified professional before it informs any decision. Our tools present findings alongside the underlying evidence, enabling users to interrogate, challenge, and contextualise the analysis. SIA does not make recommendations in isolation. It surfaces evidence for humans to act on.
2.3 Where SIA operates today
- Strasys Maternity Index (SMI) — MIA is live, supporting maternity governance and quality improvement through data triangulation.
- Other products — SIA capabilities for Workforce Decision Intelligence (WDI), Board Intelligence, Consultant Workforce Optimisation System (CWOS), and Clinical Service Review (CSR) are in development, following the same governance framework.
3. Ethical use of AI
We recognise the ethical considerations inherent in using AI in healthcare. Our AI systems are designed to:
- Support, not replace, human judgement. Our tools provide insights that enhance decision-making. Final decisions remain with the clinician, board member, or executive.
- Prevent bias. We use representative data and test our models for fairness and equality in their outputs. Where data gaps exist, we are transparent about the limitations they create.
- Provide equitable access. Our solutions are designed to be accessible and useful across different types of healthcare organisations, from large acute trusts to specialist providers and integrated care systems.
4. Transparency and explainability
We believe that AI systems used in healthcare must be understandable to the people who rely on them. Our approach includes:
- Clear explanations. We explain how AI-generated insights are derived in terms that all stakeholders, including healthcare professionals and board members, can understand.
- Model interpretability. Our tools show the patterns, data sources, and features that contribute to each output, not just the conclusion.
- No black boxes. We do not deploy models where the reasoning cannot be explained to the user. If we cannot explain it, we do not ship it.
5. Data governance and sovereignty
We are dedicated to maintaining the highest standards of data privacy, security, and governance. Our approach aligns with the UK GDPR and other applicable data protection laws.
5.1 Data security
All data used by our AI models is stored securely with strict access controls and protected against unauthorised access and use. Our technical measures include encryption, access management, and secure infrastructure.
5.2 Data minimisation
We only process data that is necessary for the specific purpose of our analysis. We do not collect more data than required, and we do not use partner data for unrelated purposes.
5.3 Data sovereignty
Partner organisations retain ownership and control of their data at all times. Data provided to Strasys for analysis remains the property of the partner. We process it under agreed terms and return or securely delete it in accordance with the engagement agreement.
5.4 Informed consent and transparency
We ensure that data used for AI analysis is collected and shared with appropriate consent and governance. Partners are fully informed about how their data will be used, stored, and protected.
5.5 Anonymisation and pseudonymisation
Where appropriate, we anonymise or pseudonymise data to safeguard privacy while maintaining the integrity of the insights generated. In published case studies and impact stories, organisations and individuals are anonymised unless explicit permission has been given.
6. Clinical safety and regulatory alignment
Our products operate at the management and governance layer of healthcare organisations. They do not sit on clinical pathways and do not make or influence individual patient treatment decisions.
6.1 Clinical review
All analytics products are clinically reviewed by qualified healthcare professionals within our team, led by the Chief Medical and Innovation Officer. This review covers the clinical relevance, safety, and appropriateness of the analytical frameworks and outputs.
6.2 Regulatory position
Because our tools support organisational decision-making rather than clinical pathways, they do not fall within the scope of DCB0129/DCB0160 (the clinical risk management standards for health IT systems). We nevertheless apply governance proportionate to management and governance tools, including structured testing, version control, and clinical oversight.
6.3 Continuous monitoring
We regularly assess the performance and safety of our AI systems against evolving regulatory and clinical standards. Where new guidance emerges from NHS England, the Medicines and Healthcare products Regulatory Agency (MHRA), or other relevant bodies, we review and update our approach accordingly.
7. Accountability and governance
Strasys maintains a governance framework to oversee the ethical use of AI in our products:
- Leadership oversight. Our leadership team, including the Chief Medical and Innovation Officer and the Chief Decision Intelligence Officer, plays an active role in overseeing AI governance and ensuring compliance with ethical standards.
- Dedicated accountability. Each AI tool and model has a clear point of accountability within Strasys, so stakeholders know who is responsible for the governance, performance, and safety of each product.
- Version control and audit trail. All models and analytical frameworks are versioned, with changes documented and reviewed before deployment.
8. Limitations of AI
We recognise that AI, while powerful, has limitations. Our approach is built on honest communication about what AI can and cannot do:
- Clear communication of limitations. All users are informed about the boundaries of AI-generated insights and the importance of human oversight in governance and clinical decisions.
- Expert review. AI outputs are reviewed by healthcare and analytics professionals before being used in decision-making, ensuring that insights are validated by human expertise.
- No fabricated certainty. We match the confidence of our outputs to the strength of the underlying evidence. Where data is incomplete or uncertain, we say so.
9. Continuous improvement
Responsible AI is an ongoing commitment, not a static policy. We are dedicated to the continuous development and improvement of our AI systems:
- Regular evaluation. We monitor and assess our AI models to ensure they remain accurate, effective, and compliant with the latest regulatory standards.
- Adapting to new standards. We stay informed of changes in regulatory requirements and best practices for AI governance in healthcare, including guidance from NHS England, the AI Safety Institute, and international bodies.
- Learning from deployment. Each deployment generates learning that feeds back into our product development and governance processes.
10. Partnering for responsible AI
Responsible AI in healthcare is achieved through collaboration. We work closely with partner organisations to ensure that AI solutions are deployed in ways that are safe, transparent, and aligned with their own governance requirements.
Our goals are to:
- Provide transparency in AI use and its potential impact on decisions.
- Foster trust by ensuring that our tools are used ethically and responsibly.
- Collaborate closely with partners to align AI deployment with their clinical, operational, and regulatory requirements.
- Support partners in building their own internal capability to govern and interpret AI-enabled analytics.
11. Contact
For questions about this policy, our approach to responsible AI, or how we govern data within our analytics products, please contact us.
You can also write to us at:
Strasys Limited
3rd Floor Marlborough House
298 Regents Park Road
Finchley, London, N3 2SZ
Company number: 09396355