Responsible AI, built into how we ship.
Healthcare operates at the intersection of high stakes and low tolerance for error. That's where Steer operates too. This page documents the commitments that guide how we build, test, deploy, and govern AI agents inside hospitals, specialty groups, and ambulatory practices.
Steer Health is a member of CHAI.
CHAI, the Coalition for Health AI, is the cross-industry body aligning health systems, payers, vendors, and academic centers on responsible AI development and deployment in clinical and operational settings. It establishes shared frameworks for transparency, fairness, and accountability in healthcare AI.
Our participation means Steer's product decisions are informed by, and accountable to, the same standards the broader industry is rallying around. Not a self-written rulebook, a shared one.
Learn more about CHAI →
Six principles we actually ship against.
These aren't a mission statement. They are product requirements. Every agent, every release, every customer deployment is measured against them before it ships.
Human in the loop, by default.
Every clinical agent defaults to human review. SteerNotes drafts, providers sign. Luna escalates outside configured thresholds. AI Front Door surfaces exceptions to your staff instead of running on autopilot. Customers configure where autonomy starts and ends, not Steer.
Clinical accuracy ahead of speed.
When latency and accuracy trade off, accuracy wins. An agent that books the wrong appointment fast is worse than one that escalates. Our Founding Clinical Advisor reviews every agent behavior before release. Speed is a benefit. Accuracy is the contract.
Transparent about capabilities.
Patients are told when they are speaking with an AI agent. Operators see exactly what each agent can and cannot do before go-live. No hidden autonomy. No capability creep post-deployment without customer sign-off.
Bias tested, quarterly.
We audit outcome parity across age, sex, race and ethnicity, primary language, payer mix, and geography. Booking conversion. Escalation rates. Note accuracy. Findings documented, remediation tracked, results available to customers on request.
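To make "outcome parity" concrete, here is a minimal sketch of how a quarterly parity check on booking conversion could work. The group labels, sample data, and the 0.8 tolerance (the "four-fifths" convention) are illustrative assumptions, not Steer's published audit methodology.

```python
from collections import defaultdict

# Hypothetical interaction records: (group, booked) pairs.
# Groups and outcomes are invented for illustration.
interactions = [
    ("english", True), ("english", True), ("english", False),
    ("spanish", True), ("spanish", False), ("spanish", False),
]

def conversion_by_group(records):
    """Booking conversion rate per group."""
    totals, booked = defaultdict(int), defaultdict(int)
    for group, did_book in records:
        totals[group] += 1
        booked[group] += did_book  # True counts as 1
    return {g: booked[g] / totals[g] for g in totals}

def parity_flags(rates, tolerance=0.8):
    """Flag any group whose rate falls below `tolerance` times
    the best-performing group's rate."""
    best = max(rates.values())
    return {g: (r / best) < tolerance for g, r in rates.items()}

rates = conversion_by_group(interactions)
flags = parity_flags(rates)
```

The same shape applies to the other audited outcomes (escalation rates, note accuracy): compute the metric per group, compare against the best-performing group, and document any group outside tolerance for remediation tracking.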
Minimum data, maximum trust.
Steer processes only the patient data needed to perform the task. No training foundation models on patient conversations. No secondary commercial use without explicit consent. BAA standard. SOC 2 Type II. HIPAA throughout.
Auditable, end to end.
Every agent decision leaves a trail. Who was called, what was said, what was filed, on what authority. Customer compliance teams have read access to the full log. Regulators and auditors can reconstruct any patient interaction on demand.
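To show the shape of such a trail, here is a minimal sketch of an append-only audit record and a reconstruction helper. Field names and values are illustrative assumptions, not Steer's actual schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One immutable entry in an agent's decision trail.
    Field names are illustrative, not a published schema."""
    agent: str        # which agent acted (e.g. "luna")
    action: str       # what was done, said, or filed
    authority: str    # policy or threshold that permitted it
    patient_ref: str  # de-identified patient reference
    detail: str       # human-readable summary
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def reconstruct(trail, patient_ref):
    """Return a patient's interaction history in order, as a
    compliance reviewer or auditor would read it."""
    return [asdict(r) for r in trail if r.patient_ref == patient_ref]

trail = [
    AuditRecord("luna", "call_answered", "cfg:v12", "pt-001", "inbound call"),
    AuditRecord("luna", "escalated", "threshold:pain>7", "pt-001", "handed to RN"),
]
history = reconstruct(trail, "pt-001")
```

Because each record carries the acting agent, the action, and the authority it acted under, "reconstruct any patient interaction on demand" reduces to a filter over an immutable log.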
Every agent behavior passes a physician's eye before release.
Dr. Kishori Yerramreddy, a board-certified physician, leads clinical review of every agent behavior Steer ships. This is not an advisory-board arrangement. It is a product gate. A release does not leave our staging environment without clinical sign-off on intent, escalation thresholds, and edge-case handling.
Her remit covers three things: clinical appropriateness (is the agent doing what a provider would want done?), patient safety (does the agent recognize and escalate red flags?), and trust preservation (would a patient, told this was an AI agent, consent to this interaction?).
Disclosure: Dr. Yerramreddy is married to Steer's Founder & CEO. The relationship is disclosed in all clinical governance contexts to ensure transparency about the reporting structure.
Aligned with the frameworks that matter.
We don't write our own rulebook. We align with the frameworks health system compliance teams, regulators, and procurement leaders already use to evaluate AI vendors.
NIST AI Risk Management Framework.
Steer's governance program maps to the four NIST functions: Govern, Map, Measure, Manage. Applied across data sourcing, model development, agent behavior, and deployment monitoring.
Decision Support Interventions.
Where Steer agents surface predictive or evidence-based decision support, we provide the transparency attributes required by the ONC HTI-1 rule. Source data, intended use, cautions, and fairness attestations are disclosed to the provider.
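As a sketch of what surfacing these attributes to a provider could look like, here is an illustrative disclosure payload. The keys paraphrase the kinds of transparency attributes HTI-1 asks decision support vendors to surface; they are not the regulation's exact attribute names, and the values are invented.

```python
# Illustrative only: keys and values are assumptions, not the
# HTI-1 source attribute list or a real Steer intervention.
dsi_disclosure = {
    "intervention": "no-show risk indicator",
    "source_data": "historical scheduling and attendance records",
    "intended_use": "prioritize outreach for likely no-shows",
    "cautions": "not a clinical judgment; staff review required",
    "fairness": "quarterly outcome-parity audit; report on request",
}

def render_disclosure(payload: dict) -> str:
    """Format the disclosure as plain text for provider display."""
    return "\n".join(f"{k.replace('_', ' ').title()}: {v}"
                     for k, v in payload.items())

text = render_disclosure(dsi_disclosure)
```

The point of the structure is that disclosure ships with the intervention itself, so a provider sees source data, intended use, cautions, and fairness attestations before acting on the output.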
HHS responsible AI framework.
Alignment with HHS responsible AI guidance: safety, effectiveness, equity, patient agency, transparency, accountability. Documented in our annual responsible AI report, available to customers.
Software as a Medical Device.
Steer's current agents operate in administrative and operational contexts, outside SaMD scope. Where future agent behavior enters regulated territory, we will follow FDA's risk-based SaMD pathway, beginning with a pre-submission.
Security and privacy baseline.
SOC 2 Type II controls audited annually. HIPAA across all data flows. BAA standard on every deployment. Coalfire penetration testing. Incident response tested, documented, and reported.
Global readiness, in scope.
For customers operating internationally, Steer tracks EU AI Act requirements for high-risk AI systems in healthcare. Conformity assessment documentation is in active development ahead of enforcement milestones.
What SteerBrain is, plainly.
A lot of AI vendors are vague about what sits underneath the agent. We are not. Here is what SteerBrain is, what it is trained on, what it is not, and how we validate it.
Patients interacting with Steer have the same rights as patients interacting with your front desk.
Every customer deployment is configured to honor these patient rights by default. They are not settings that can be turned off. They are contractual.
- Transfer to human, any time. A patient can request a human at any point in any interaction. The transfer happens without further qualification.
- AI disclosure. Patients are told at the start of every call that they are speaking with an AI assistant. Not buried in a script. Said clearly.
- Decision explanation. Patients can ask how an outcome was reached. Steer provides a plain-language explanation of the logic, not a model output.
- Transcript access. Patients can request a transcript of their conversation with Steer. Delivered through HIPAA-compliant channels.
- No secondary commercial use. Patient conversations are not used to train upstream foundation models. Not used for third-party marketing. Not sold.
- Escalation for clinical concern. If a patient describes symptoms outside operational scope, Steer escalates to a clinical contact immediately, by protocol.
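The escalation right above is, at its core, a routing rule: anything that looks clinical leaves the operational flow immediately. A simplified sketch of that gate follows; the keyword list and return labels are invented for illustration, and any real implementation would use clinically validated criteria, not a toy phrase match.

```python
# Illustrative only: a real escalation gate would use clinically
# validated criteria, not this invented keyword list.
RED_FLAGS = {"chest pain", "shortness of breath", "suicidal"}

def route(utterance: str) -> str:
    """Return the next step for a patient utterance: escalate on
    any red-flag phrase, otherwise continue operational handling."""
    text = utterance.lower()
    if any(flag in text for flag in RED_FLAGS):
        return "escalate_to_clinical_contact"
    return "continue_operational_flow"

step = route("I have chest pain when I climb stairs")
```

The design choice worth noting is the asymmetry: the gate is evaluated before any operational logic runs, so a clinical concern can never be deferred behind a scheduling task.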
Who owns responsible AI at Steer.
Responsible AI cannot live with a single person or sit in a policy doc. Steer operates a standing governance committee that meets on a fixed cadence, with decision authority over model release, agent capabilities, and incident response.
Procurement, security, or compliance review?
Start here.
Our responsible AI package: full SOC 2 Type II report, penetration test summary, BAA, bias audit methodology, governance charter, and framework crosswalk. Available to customers and prospective customers under NDA.