Our Position

AI is a tool. It is never the clinician.

At NOYO, artificial intelligence exists to extend human reach - not to replace human judgment. Every AI interaction on our platform is designed, reviewed, and bounded by forensic psychiatric expertise. Our AI does not diagnose. It does not prescribe. It does not pretend to be human.

What it does is hold space. It listens with consistency, patience, and cultural awareness at a scale no human team could replicate. And when it reaches the boundary of what it can safely do, it says so - and routes to human support.

Our Principles

The six commitments that guide our AI.

01
Transparency over simulation

NOYO's AI always identifies itself as an AI. It will never imply it is human, and it will never deny its limitations when asked. Users deserve to know exactly what they are interacting with.

02
Safety as a non-negotiable

Our ERAS (Enhanced Risk Assessment System) monitors interactions for crisis signals in real time. When risk is detected, safety protocols override everything else - automatically and without exception.
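The override behavior described above can be pictured as a safety check that runs before any other response logic and short-circuits it on a crisis signal. This is a minimal, hypothetical sketch: the names (`check_risk`, `RiskLevel`, `escalate`) and the keyword-matching detector are invented for illustration and are not ERAS's actual mechanism.

```python
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1
    CRISIS = 2

def check_risk(message: str) -> RiskLevel:
    # Stand-in for real-time crisis-signal detection; a production
    # system would use far richer signals than keyword matching.
    crisis_terms = {"hurt myself", "end it"}
    if any(term in message.lower() for term in crisis_terms):
        return RiskLevel.CRISIS
    return RiskLevel.NONE

def escalate(message: str) -> str:
    # Safety pathway: route to human crisis support.
    return "Connecting you with human crisis support."

def normal_reply(message: str) -> str:
    return "I'm here and listening."

def respond(message: str) -> str:
    # The safety check runs first; a crisis result overrides
    # the normal conversational pipeline without exception.
    if check_risk(message) is RiskLevel.CRISIS:
        return escalate(message)
    return normal_reply(message)
```

The design point is the ordering: the risk check is not one input among many but a gate evaluated before any conversational response is generated.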

03
Clinical grounding, not internet scraping

Our AI framework - HAM (Human Attunement Model) - was developed from forensic psychiatric expertise, not scraped from general web data. Every response pattern is clinically reviewed before deployment.

04
Cultural respect by design

NOYO's AI is trained on 20+ languages and regional cultural contexts. It does not apply a Western psychological framework universally. Healing language is adapted to the cultural reality of each user.

05
Data minimisation and user privacy

We collect only what is necessary to deliver care. Conversation data is not used to train third-party models. Users may delete their data at any time. See our Privacy Policy for full details.

06
Continuous human oversight

No AI output in a clinical-adjacent context operates without human oversight. Our clinical team reviews edge cases, flags model drift, and revises training frameworks regularly.

Clarity of Role

What NOYO's AI is - and is not.

What it is
  • A clinically informed emotional support companion
  • A consistent, non-judgmental presence available 24/7
  • A culturally aware listener across 20+ languages
  • A crisis signal detector with escalation protocols
  • A guide through NOYO programmes and resources
  • A system transparent about its nature and limitations
What it is not
  • A licensed therapist or psychiatrist
  • A diagnostic tool
  • A replacement for professional mental health treatment
  • A human being or a simulation of one
  • A system without limits or fallibility
  • A repository for medical decision-making

Proprietary Frameworks

The architecture behind the AI.

HAM - Human Attunement Model
Behavioral Intelligence Architecture

HAM is NOYO's proprietary 108-section behavioral intelligence framework governing how the AI listens, interprets, and responds. It covers psychological decompensation mapping, linguistic signal analysis, violence and threat signal matrices, and the Personal Grounding Theory - a clinical arbitration layer that ensures every AI response is therapeutically appropriate, culturally aware, and safety-anchored.

ERAS - Enhanced Risk Assessment System
Real-Time Crisis Detection

ERAS monitors every interaction for risk indicators across multiple domains simultaneously. It applies weighted scoring to detect escalating crisis states and triggers human review or emergency protocol pathways before a user reaches a point of irreversible harm. ERAS operates silently beneath every conversation - not as a surveillance tool, but as a safety net.
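Weighted scoring across multiple domains, as described above, can be sketched as a weighted sum of per-domain signal intensities compared against an escalation threshold. The domains, weights, and threshold below are illustrative assumptions for this sketch, not ERAS internals.

```python
# Hypothetical domain weights: each domain's signal is a 0..1
# intensity produced by an upstream detector.
DOMAIN_WEIGHTS = {
    "self_harm_language": 3.0,
    "hopelessness": 2.0,
    "agitation": 1.5,
    "withdrawal": 1.0,
}

# Illustrative threshold above which human review is triggered.
ESCALATION_THRESHOLD = 5.0

def risk_score(signals: dict) -> float:
    # Aggregate all domains into a single weighted score.
    return sum(DOMAIN_WEIGHTS.get(domain, 0.0) * intensity
               for domain, intensity in signals.items())

def should_escalate(signals: dict) -> bool:
    # True when the combined score crosses the escalation threshold.
    return risk_score(signals) >= ESCALATION_THRESHOLD
```

Because the score is cumulative across domains, several moderate signals can trigger escalation even when no single domain would on its own, which matches the idea of detecting escalating crisis states rather than isolated keywords.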

Questions or Concerns

We welcome scrutiny.

Responsible AI in mental health requires accountability to the public, not just internal standards. If you have questions about how NOYO's AI works, concerns about a specific interaction, or research enquiries about our clinical frameworks, we want to hear from you.

Contact [email protected] for clinical and AI ethics enquiries.

Last updated: January 2025