Colorado Capitol Coverage
Assembly Required
In Committee · HB26-1139 · 2026 Regular Session

Your Therapist Can't Be a Bot: Inside Colorado's New AI Health Care Bill

Sponsors: Junie Joseph, Sheila Lieder · Health & Human Services


Illustration: Assembly Required

The Bottom Line

Insurance companies are increasingly using artificial intelligence to deny health coverage, and tech companies are rolling out chatbots that act like therapists. This bill pumps the brakes on both trends by requiring a human doctor to review any AI-driven coverage denials and making it illegal for an AI bot to pass itself off as a licensed mental health professional. If you've ever fought an automated insurance denial or worried about your medical data privacy, this is the legislation you need to watch.

What This Bill Actually Does

At its core, HB26-1139 tackles two rapidly growing intersections of technology and health care: how insurance companies use algorithms to approve or deny your medical care, and how tech companies use artificial intelligence to provide mental health support. The legislation establishes strict guardrails to ensure that human judgment remains central to medical decisions.

First, let's look at the insurance side. Section 2 of the bill regulates utilization review—the process insurance carriers, pharmacy benefit managers (PBMs), and managed care entities use to decide if a medical treatment is medically necessary and covered by your plan. Under current industry practices, algorithms can sometimes automatically deny claims based on statistical averages. This bill mandates that any artificial intelligence system used to determine coverage must base its decisions on your specific medical history and individual clinical circumstances, not just group data. Furthermore, while an AI system can be used to fast-track approvals, it cannot issue a denial or delay based on medical necessity. Any adverse decision must be independently reviewed and approved by a human—specifically, a licensed clinician or physician competent to evaluate your specific clinical issues. The algorithms must also be regularly audited to ensure they do not result in algorithmic discrimination.
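The rule described above reduces to a simple routing decision: the algorithm may fast-track an approval, but anything adverse must land in front of a licensed human. A minimal sketch in Python — the function name, labels, and queue name are all hypothetical illustrations, not language from the bill:

```python
# Hypothetical sketch of the review-routing rule described above.
# Labels are illustrative placeholders, not statutory terms.
HUMAN_REVIEW_QUEUE = "licensed-clinician-review"

def route_utilization_decision(ai_recommendation: str) -> str:
    """Route an AI recommendation on a medical necessity claim.

    An algorithm may fast-track an approval, but any denial or
    delay must be independently reviewed and approved by a
    licensed human clinician.
    """
    if ai_recommendation == "approve":
        return "auto-approved"
    # "deny", "delay", or anything else adverse goes to a human.
    return HUMAN_REVIEW_QUEUE
```

The point of the sketch is that under the bill there is no code path where the algorithm alone issues a denial.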

Second, the legislation takes aim at the booming industry of automated therapy. The bill defines a mental health companion chatbot as a generative AI system designed to sustain a one-on-one relationship with a user, mimicking the interactive conversations you would have with a licensed professional. If an AI system explicitly or implicitly acts like a therapist, the state considers that the unauthorized practice of psychotherapy.

To stay legal, companies offering these chatbots must jump through several new hoops. The bot must feature a clear pop-up notification at the beginning of the interaction—and every thirty minutes thereafter—reminding the user that it is not human and not a licensed professional. It must admit it is AI if the user asks. Crucially, the AI must have a built-in protocol to detect suicidal ideation or self-harm and immediately refer the user to a crisis text line or suicide hotline. Finally, companies are strictly prohibited from selling or sharing a user's identifiable mental health data, or forcing users to consent to data sales as a condition of using the application. Furthermore, human therapists who use AI tools in their back-office operations must explicitly disclose that fact to their patients.

What It Means for You

If you have ever had a necessary medical procedure or prescription denied by a faceless algorithm, this bill fundamentally changes how your insurance company operates. By forcing a real, licensed human doctor to review any AI-generated denials, it ensures your claims are evaluated based on your actual medical chart, not just a statistical average of people who look like you. This means fewer automatic rejections and a clearer chain of accountability when your health care is delayed.

For parents and individuals using digital wellness apps, this bill introduces massive privacy and safety upgrades. Teenagers and adults alike are increasingly turning to digital apps and AI companions for mental health support. If you or your kids use these services, the companies behind them will no longer be allowed to quietly sell your most intimate, identifiable mental health data to third-party data brokers. The apps will also be required to constantly remind you that you are talking to a machine, and they must have life-saving guardrails in place to connect you to a real human crisis hotline if you express suicidal thoughts.

Finally, this legislation impacts your traditional therapy sessions. If your human therapist uses an AI tool to transcribe your sessions, generate clinical notes, or assist with a diagnosis, they legally have to tell you, obtain your explicit consent, and explain how your data is protected under HIPAA. Additionally, state programs like Medicaid and the Children's Basic Health Plan are strictly prohibited from paying for therapy sessions conducted by an AI, ensuring public tax dollars only fund real, human care.

What you should do this week:

  • Check your app settings: Look at the privacy agreements on any mental health or wellness apps you or your family use. Understand what data they are currently collecting before these new protections take effect.
  • Contact your lawmakers: This bill is in the Health & Human Services Committee. If you have a personal story about a frustrating algorithmic insurance denial, now is the time to share it with your state representative.

What It Means for Your Business

If you operate in Colorado's health care, insurance, or tech sectors, this bill requires immediate attention before its effective date of January 1, 2027. The compliance burdens here are significant, and they vary heavily depending on your exact industry.

For health insurance carriers, pharmacy benefit managers, and private utilization review organizations, your claims processing infrastructure will need a comprehensive audit. You are required to maintain extensive documentation, audit logs, and model-governance records to prove your algorithms are not engaging in unfair discrimination. You must also submit written disclosures to state agencies detailing your AI oversight. Most importantly, you must ensure you have enough licensed human clinicians on staff to manually review and sign off on every single coverage denial or delay that your AI flags for medical necessity. Relying entirely on automated rejection letters will become a legal liability.

For licensed mental health professionals, Section 4 introduces strict new informed consent rules. If you use AI systems in your practice—even for administrative tasks like AI-generated clinical note-taking or diagnostic assistance—you must disclose this to your clients. Your intake paperwork will need to be updated to include the name of the AI system, whether it is an FDA-approved device, a link to its website, its specific purpose, and an explanation of whether you have a Business Associate Agreement in place to protect their data under HIPAA. Furthermore, you are barred from billing any public or private insurance for psychotherapy sessions, professional supervision, or consultations conducted by an AI.
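To make the disclosure items above concrete, here is a minimal sketch of an intake-form disclosure record. Every field name, the example values, and the rendered wording are hypothetical placeholders, not statutory language — the bill's final text controls what a compliant disclosure must say:

```python
from dataclasses import dataclass

# Hypothetical fields sketching the disclosure items described above;
# the statute's actual wording, not this sketch, defines compliance.
@dataclass
class AIToolDisclosure:
    system_name: str           # name of the AI system
    purpose: str               # what it is used for in the practice
    website: str               # link to the system's website
    fda_approved_device: bool  # whether it is an FDA-approved device
    baa_in_place: bool         # Business Associate Agreement under HIPAA

    def to_intake_text(self) -> str:
        """Render a plain-language paragraph for an intake form."""
        fda = "is" if self.fda_approved_device else "is not"
        baa = ("A Business Associate Agreement is in place to protect "
               "your health information under HIPAA."
               if self.baa_in_place else
               "No Business Associate Agreement is currently in place.")
        return (f"This practice uses {self.system_name} ({self.website}) "
                f"for {self.purpose}. It {fda} an FDA-approved device. "
                f"{baa}")
```

A practice could keep one such record per AI tool and regenerate its intake paperwork whenever a tool is added or dropped.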

For tech developers and startups, the guardrails on mental health chatbots are severe. If you are building consumer-facing wellness apps in Colorado, you cannot monetize identifiable mental health data. You must build "break-glass" protocols for suicidal ideation, which includes annual reporting to the state Office of Suicide Prevention regarding how many times your bot detected self-harm risks. You also must hard-code mandatory 30-minute pop-up disclaimers into your user interface to ensure users know they are interacting with software, not a licensed professional.
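Putting the chatbot guardrails above together, here is a minimal sketch of a compliant session wrapper. The disclaimer text, keyword list, class name, and counter are all hypothetical, and a real product would use a vetted clinical risk-detection model rather than keyword matching:

```python
import time

# Hypothetical strings for illustration only; the bill's actual text,
# not this sketch, defines what a compliant notice must say.
DISCLAIMER = ("Reminder: I am an AI chatbot, not a human and not a "
              "licensed mental health professional.")
CRISIS_REFERRAL = ("If you are thinking about harming yourself, please "
                   "contact a crisis line such as 988 (call or text).")
DISCLOSURE_INTERVAL_SECONDS = 30 * 60  # pop-up every thirty minutes

# Placeholder keyword screen; production systems would use a vetted
# clinical risk-detection model, not a keyword list.
CRISIS_KEYWORDS = ("suicide", "kill myself", "self-harm", "end my life")

class CompliantChatSession:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._last_disclosure = None
        self.detected_risk_count = 0  # feeds hypothetical annual reporting

    def handle_message(self, user_text: str) -> list[str]:
        """Return compliance notices that must accompany the bot's reply."""
        notices = []
        t = self._now()
        # Disclaimer at the start of the interaction and every 30 minutes.
        if (self._last_disclosure is None
                or t - self._last_disclosure >= DISCLOSURE_INTERVAL_SECONDS):
            notices.append(DISCLAIMER)
            self._last_disclosure = t
        lowered = user_text.lower()
        # "Break-glass" check: refer to a crisis line on detected risk.
        if any(k in lowered for k in CRISIS_KEYWORDS):
            self.detected_risk_count += 1
            notices.append(CRISIS_REFERRAL)
        # Admit being AI if the user asks directly.
        if "are you human" in lowered or "are you ai" in lowered:
            notices.append("Yes, I am an AI system.")
        return notices
```

The `detected_risk_count` field hints at how the annual self-harm-detection reporting to the Office of Suicide Prevention might be tallied, though the bill itself specifies the reporting, not the mechanism.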

Action items for business owners:

  • Audit your tech stack: If you run a therapy practice or clinic, make a comprehensive list of every AI tool currently interacting with client data.
  • Revise your intake forms: Start drafting new disclosure agreements that explicitly mention AI usage, data protection, and client consent rights.
  • Check vendor contracts: If your business outsources utilization review, verify that the vendor has a scalable process for human-led denial reviews to ensure you aren't held liable for their automated rejections.

Follow the Money

While the official fiscal note is still pending, we can anticipate several major economic ripple effects. First, state agencies like the Department of Health Care Policy and Financing (which manages Medicaid) and the Behavioral Health Administration will need financial resources to audit their managed care partners to ensure compliance with the new human-review mandates. Second, the Department of Regulatory Agencies (DORA) will likely require additional investigators and funding to handle consumer complaints against tech companies accused of the "unauthorized practice of psychotherapy."

On the private market side, mandating that a licensed clinician review every single AI-flagged medical necessity denial will increase administrative costs for health insurance carriers. Insurers frequently argue that algorithmic automation keeps overhead—and consequently, premiums—down. If they are forced to hire armies of medical directors to manually sign off on denials, business owners and residents might see a slight bump in their monthly insurance premiums. However, on the flip side, the state protects its own budget: by explicitly prohibiting Medicaid and the Children's Basic Health Plan from paying for AI-delivered psychotherapy, Colorado prevents its public coffers from being drained by highly scalable, automated bot-therapy billing mills.

Where This Bill Stands

HB26-1139 was introduced in the House on February 4, 2026, by Representatives Junie Joseph and Sheila Lieder. It has been assigned to the House Health & Human Services Committee, where it currently awaits its first hearing.

Because this bill touches two massive lobbying powerhouses—the health insurance industry and the booming tech sector—expect fierce debate in committee. The insurance industry may push back heavily on the administrative costs of mandating human review for every medical necessity denial, while tech startups will likely fight the stringent restrictions and constant disclaimers required for mental health chatbots. However, the bill's focus on consumer protection and medical accountability makes it highly appealing to patient advocacy groups and professional medical associations. Keep an eye out for committee hearing dates in late February or early March to see if the sponsors agree to any amendments watering down the human-review requirements.

The Opportunity Signal

Where this bill creates practical upside for operators: the opening, the key constraints, and the move to make while the window is still favorable.

  • Specialized Clinical Review Services for Health Payers

    Colorado's HB26-1139 mandates that all AI-driven denials or delays of medical necessity claims by health insurance carriers, pharmacy benefit managers (PBMs), and managed care entities must be reviewed and approved by a licensed human clinician or physician. This creates an immediate and substantial demand for third-party medical review organizations or staffing agencies capable of providing competent, licensed clinical personnel to manage this increased workload. Businesses can capitalize on this compliance burden by offering scalable, expert review services that integrate with existing payer systems, ensuring adherence to the January 1, 2027, deadline and mitigating legal liabilities for automated rejections. Execution risk includes attracting and retaining a sufficient pool of qualified, licensed clinicians across various specialties.

    • Deadline: January 1, 2027, for all AI-driven medical necessity denials to have human clinician review.
    • Target Client: Health insurance carriers, PBMs, and utilization review organizations operating in Colorado.
    • Requirements: Provide licensed clinicians/physicians competent to evaluate specific clinical issues for review; ensure robust audit trails and compliance documentation.
    • Procurement: Opportunities for B2B service contracts and staffing agreements.

    Next move: Develop a service offering outlining the capability to provide licensed clinical review panels and a pricing model, then schedule introductory meetings with Colorado's major health insurance carriers and PBMs by June 2026.

  • AI Compliance & Data Privacy Solutions for Therapy Practices

    The bill introduces strict informed consent and data protection rules for licensed mental health professionals using AI tools, even for administrative tasks like note-taking. This creates a market for specialized software solutions, legal templates, and consulting services that help therapy practices comply. Entrepreneurs can offer HIPAA-compliant AI tools designed for administrative support with built-in disclosure mechanisms, or provide legal and consulting services to help practices revise intake forms, establish Business Associate Agreements (BAAs), and ensure transparent patient communication about AI use. The timing is critical: practices must update their operations ahead of the bill's effective date to avoid non-compliance and potential regulatory scrutiny. A key dependency is clarity on what constitutes an 'FDA-approved device' in this context for disclosure purposes.

    • Compliance: Therapists must disclose AI use, including AI system name, purpose, FDA status (if applicable), and data protection measures (BAA).
    • Prohibition: Public and private insurance will not pay for psychotherapy sessions, supervision, or consultations conducted by an AI.
    • Target Client: Individual licensed therapists, group practices, and clinics across Colorado.
    • Deliverables: Updated intake forms, BAA templates, AI-integrated administrative software solutions.

    Next move: Host a webinar or workshop for Colorado licensed mental health professionals by July 2026, detailing the new AI disclosure requirements and offering compliant template packages for intake forms and BAAs.

  • Crisis Intervention Integration for Mental Health Tech

    Mental health companion chatbots operating in Colorado will be legally required to feature 'break-glass' protocols that detect suicidal ideation or self-harm and immediately refer users to a crisis text line or suicide hotline. This mandate creates an opportunity for developers specializing in AI safety features or existing crisis intervention services to partner with or offer their solutions to tech companies building mental wellness apps. Businesses can provide API integrations, develop custom detection algorithms, or offer consulting on implementing robust crisis referral pathways, including the annual reporting required for the Office of Suicide Prevention. The challenge lies in accurately detecting nuanced language related to self-harm and ensuring seamless, reliable referrals without false positives.

    • Requirement: Chatbots must detect suicidal ideation/self-harm and refer users to crisis hotlines.
    • Reporting: Annual reporting to the state Office of Suicide Prevention on detected self-harm risks.
    • Target Client: Tech developers and startups creating mental health companion chatbots or wellness apps for Colorado users.
    • Timeline: Implement robust protocols before the bill's effective date, impacting product development cycles immediately.

    Next move: Research existing Colorado crisis hotlines/text lines to understand their integration capabilities, then develop a proposal for an 'AI Crisis Referral API' or partnership model to present to local mental health tech startups by August 2026.


Frequently Asked Questions

What does HB26-1139 do?
This bill sets ground rules for how artificial intelligence can be used in your health care and insurance. It makes sure that if your health insurance denies a medical claim, a real human doctor or clinician has to review it, not just a computer algorithm. It also stops AI chatbots from pretending to be licensed human therapists and bans them from selling your sensitive mental health data.
What is the current status of HB26-1139?
HB26-1139 is currently "In Committee" in the 2026 Regular Session. It was introduced by Representatives Junie Joseph and Sheila Lieder and is assigned to the House Health & Human Services Committee.
Who sponsors HB26-1139?
HB26-1139 is sponsored by Representatives Junie Joseph and Sheila Lieder.
How does HB26-1139 affect Colorado businesses?
HB26-1139 imposes new compliance duties on three groups of Colorado businesses. Health insurance carriers, pharmacy benefit managers, and utilization review organizations must have a licensed human clinician review and approve any AI-driven denial or delay of a medical necessity claim by January 1, 2027. Licensed mental health professionals who use AI tools, even for administrative tasks like note-taking, must disclose that use to clients and explain how their data is protected under HIPAA. And developers of mental health companion chatbots must build crisis-referral protocols, display recurring AI disclaimers, and stop monetizing identifiable mental health data. Each mandate also creates business openings, from clinical review staffing services to compliance consulting for therapy practices and crisis-referral integrations for wellness apps.
What committee is reviewing HB26-1139?
HB26-1139 is assigned to the Health & Human Services committee in the Colorado House.
When was HB26-1139 last updated?
The last action on HB26-1139 was "House Committee on Health & Human Services Refer Amended to House Committee of the Whole" on 03/04/2026.
