
Vetting Mental Health Apps in January

Mental health apps promise support and healing, but many collect excessive data, hide behind vague policies, or lack clinical validation. This article compiles insights from privacy experts, clinicians, and digital health researchers to help readers make informed choices about which apps to trust. The guidance covers five key areas: avoiding ad-supported trackers, choosing transparent tools, prioritizing FDA-cleared therapeutics, rejecting manipulative design, and verifying creator credentials.

Select VA Options over Ad-Supported Trackers

As a Nurse Practitioner with over 15 years in mental health, I approach the "New Year, New Me" app rush with caution. When vetting mental health apps for data privacy and clinical validity, I start by looking at the business model. If a mental health app is completely free, I immediately investigate whether it makes money by selling user data to third-party advertisers. To check for clinical validity, I look at the "About" section to see whether the app was developed in partnership with universities or medical centers, rather than just software engineers. I look for specific evidence-based frameworks, such as Cognitive Behavioral Therapy (CBT) or Acceptance and Commitment Therapy (ACT), rather than vague promises to "boost happiness."
I encountered a significant issue with a patient named Leo, who was using a popular mood-tracking app to manage his bipolar disorder. The red flag that led me to withdraw that app was buried in the privacy policy. While the app claimed to be private, the fine print stated that the company reserved the right to share "anonymized" mood data with social media platforms for targeted advertising. Leo had noticed he was getting ads for depression medication in his social media feeds, which made him feel paranoid and watched. This was a massive breach of trust and actually worsened his anxiety.
As for what I recommended instead, I immediately transitioned him to "PTSD Coach" and "CBT-i Coach." These apps were developed by the Department of Veterans Affairs (VA). They are government-funded and available to the public for free, they do not sell data, they do not require an account login, and they are rigorously backed by clinical science. We switched his digital routine to these trusted sources, ensuring his data stayed on his phone and nowhere else.
The concern regarding privacy is well-founded. A study published in JAMA Network Open analyzed top-rated apps for depression and smoking cessation and found that 29 out of 36 apps transmitted data to Facebook or Google, but only 12 of those apps actually disclosed this in their privacy policy. This means many apps are sharing sensitive mental health data without the user ever knowing.

Shebna N Osanmoh, Psychiatric Nurse Practitioner, Savantcare

Choose Transparent, Structured Tools with Minimal Retention

I focus on two things first: data privacy and clinical structure.

For privacy, I look closely at what data is collected, how long it's stored, and whether conversations are reused or shared. If a privacy policy is vague or allows emotional data to be used for training or "improvements" without clear consent, that's a red flag.

For clinical validity, I check whether there's a real evidence-based framework, such as CBT, behind the experience, not just generic encouragement. Responsible apps are also clear about their limits and about when human support is needed.

One app I stopped recommending stored conversations indefinitely and reused them for model training, while delivering responses that sounded supportive but lacked therapeutic depth. Instead, I recommended tools that were transparent, minimized data retention, and followed a clear therapeutic structure, or, in some cases, a human therapist or simple journaling. During the New Year rush, safe and structured always beats flashy.

Ali Yilmaz, Co-founder & CEO, Aitherapy

Prioritize FDA-Cleared Digital Therapeutics

Wellness apps can be helpful for low-risk support such as medication reminders, basic habit tracking, or guided meditation, particularly when there's minimal need to input sensitive personal health information. In those cases, they can complement care without introducing significant privacy or clinical risk.

However, most wellness apps are not regulated the way prescription-based treatments are, and they don't always deliver what they promise. Many rely on engagement metrics rather than clinically meaningful outcomes, offer limited transparency around data use, and blur the line between general wellness tools and medical treatment.

When an app starts making mental health treatment claims, collects symptom data that could influence care decisions, or positions itself as a replacement for proper assessment or diagnosis, I become much more cautious. In those cases, I prioritize FDA-cleared or FDA-authorized, prescription-based digital therapeutics that are supported by clinical evidence, have clearer data-privacy standards, and are designed to be used within an ongoing clinician-patient relationship.

In my day-to-day clinical practice, I use FDA-cleared digital therapies to help treat and manage conditions such as depression, insomnia, and ADHD. These products have undergone formal regulatory review and, in many cases, clinical trials similar to those required for medications. That process gives me greater confidence not only in how patient data are handled, but also in who the product is appropriate for and how it actually works—which is especially important during periods like the New Year self-improvement rush, when patients are often exposed to tools that promise more than they can deliver.

Desiree Matthews, Psychiatric Mental Health Nurse Practitioner | CEO and Founder, Different Mental Health Program

Reject Dark Patterns, Favor Open-Source Mindfulness

I assess a mental health app's clinical validity by examining the development team's credentials. I prefer mental health apps built by licensed psychologists, and I verify that they integrate evidence-based practice rather than relying on simple aesthetic gamification. For privacy, I check whether the app gives users ownership of their data and the right to delete it permanently. It is also critical that the app is SOC 2 Type II compliant, meaning an independent auditor has verified its data-handling controls over time.

An alarming red flag I discovered was the presence of "dark patterns" in a mental health app: design features that manipulate users into continued scrolling so the app can generate advertising revenue, rather than facilitating therapeutic reflection. Dark patterns can induce anxiety in vulnerable users during peak periods like the New Year. I removed this app from my list of recommendations and replaced it with open-source mindfulness tools that support user autonomy, are backed by solid clinical evidence of effectiveness, and involve no predatory marketing strategies.

Verify Credible Creators and Stated Policies

I vet mental health apps by researching creator credentials (academic institutions, licensed clinicians, nonprofits) and reviewing their privacy policies for data sharing. For example, "How We Feel" is an app that helps people identify, track, and understand their moods. It was built by scientists, therapists, designers, and engineers, and it was founded by a science-based nonprofit. The team is led by Professor Marc Brackett from the Yale Child Study Center and his colleagues from the Yale Center for Emotional Intelligence. It's not just another mental health startup led by a bunch of people who haven't studied psychology. How We Feel's privacy policy clearly states they do not sell personal information or use it to sell ads or make money. Transparency is important. This is the kind of app I can get behind.

A red flag I've encountered is when you can't tell who's behind an app or what they're doing with user data. To be honest, most of the apps I recommend I learn about in trainings or from colleagues, and I always use them myself before recommending them to clients. So I've never "withdrawn" apps. Another trusted source is the app iChill, created by the Trauma Resource Institute, where the expertise is crystal clear.

Especially in the age of tech bros, mental health startups, and AI, I think it's critical to understand who is creating the app and for what purpose, and whether they truly value mental health, clinicians, and people's well-being, or whether they're just mining users for data.

