Each day, thousands of Floridians seek therapy, some for routine mental health check-ins, others confronting serious psychiatric challenges. Technology has been an unmistakable force for good in this arena: telemedicine liberated patients from geographic constraints, and now artificial intelligence promises to extend quality care to populations previously without access.
Yet predictably, the regulatory state has mobilized against innovation. Consider the recent legislative interventions: In June 2025, Nevada became the first state to explicitly prohibit “offering AI systems designed to provide services that constitute the practice of professional mental or behavioral healthcare.” Illinois swiftly followed suit, banning the use of AI “to provide mental health and therapeutic decision-making.” Both states have cloaked their interventions in the language of consumer protection while ignoring the millions of Americans who cannot afford or access mental health care. These restrictions don’t protect vulnerable patients; they simply guarantee that fewer patients receive any help while entrenching the status quo.
Florida’s legislators would be wise to reject this model entirely. The Sunshine State has long understood that the best regulation is often the least regulation, allowing markets to serve consumers in ways that bureaucrats in Carson City and Springfield cannot anticipate. When it comes to expanding access to mental health treatment, Tallahassee should champion the innovators, not join the regulators strangling progress with misguided paternalism.
Much of the confusion driving these bans stems from a fundamental misunderstanding of what AI therapy actually is. Critics picture patients pouring out their troubles to ChatGPT or Claude, general-purpose chatbots that might offer sympathy but lack any clinical foundation.
Legitimate AI therapy tools are purpose-built clinical applications, developed in partnership with licensed mental health professionals and grounded in evidence-based frameworks like cognitive behavioral therapy. They incorporate crisis-detection protocols, clinical guardrails, and structured interventions tailored to specific conditions. These aren’t chatbots moonlighting as therapists; they’re specialized medical tools designed to supplement traditional care, offering support between sessions or reaching patients who might never seek help otherwise.
The distinction matters. Banning AI therapy because someone might misuse ChatGPT is like prohibiting telemedicine because people sometimes get bad medical advice on Reddit. It conflates serious clinical tools with casual conversation and, in doing so, throws out the baby of demonstrably effective treatment with the bathwater of legitimate concern.
The real-world cost of this confusion is staggering, and for Florida, the stakes extend beyond abstract questions of regulatory philosophy. The state faces a crushing shortage of mental health professionals, some 500 practitioners short of what’s needed, according to the Kaiser Family Foundation. The result? An estimated 7 million Floridians live in Mental Health Professional Shortage Areas, where finding a therapist can mean months-long waiting lists or prohibitive costs.
This is precisely the environment where AI therapy could serve as a force multiplier for overwhelmed providers. By handling routine cognitive-behavioral interventions and offering round-the-clock support between appointments, AI systems can extend the reach of existing professionals rather than replace them. Prohibiting this technology wouldn’t protect Floridians from substandard care; it would simply condemn millions to no care at all, while human therapists remain hopelessly overbooked.
The case for AI therapy, moreover, isn’t merely theoretical. The evidence base is growing and impressive. Researchers at Dartmouth College reported in March 2025 that AI-driven interventions produced clinically significant improvements among patients with major depressive disorder or generalized anxiety disorder, and among those at high risk for eating disorders. A separate study documented dramatic symptom reductions: 48 percent decreases in depression and 43 percent in anxiety.
These aren’t marginal improvements. They represent the kind of outcomes that would be celebrated as breakthroughs if delivered through traditional pharmaceutical interventions or talk therapy. Yet when the delivery mechanism is algorithmic rather than human, suddenly the regulatory instinct is to prohibit rather than encourage.
The logic here is perverse. Legislators in Nevada and Illinois have effectively decided that promising, evidence-backed technology should be banned, not because it fails to work, but because it works differently than the traditional model.
The consequences are predictable and severe. By prohibiting AI therapy, these states haven’t merely delayed innovation; they’ve actively denied millions of Americans access to a promising new avenue of care. The irony is rich: in the name of protecting patients, regulators have ensured that countless individuals who might have found relief through AI-driven interventions will instead go untreated.
Florida faces a choice that should be straightforward. Seven million Floridians live in mental health shortage areas, facing months-long waiting lists and prohibitive costs. Evidence-backed technology has been shown to reduce depression symptoms by 48 percent and anxiety by 43 percent.
The answer cannot be to ban technology that works.
Nevada and Illinois have made their choice: preserve the regulatory status quo even if patients suffer untreated. Florida should reject this calculus. When the alternative is no care at all, blocking effective treatment isn’t protection—it’s abandonment.
The patients are waiting. The evidence is clear. Florida should choose wisely.