In recent years, the rise of mental health apps has offered unprecedented access to support and resources for individuals seeking assistance with their well-being. However, beneath the surface of these innovative solutions lies a complex interplay of economics and ethics, particularly concerning the collection and exploitation of personal data.

Mental health apps operate within a thriving digital health market, valued at billions of dollars globally. They offer a range of services designed to support emotional well-being and mental health management, often including guided meditation and mindfulness exercises to reduce stress and anxiety, mood tracking to help users monitor their emotional state over time, and cognitive behavioral therapy (CBT) techniques to address negative thought patterns. For example, Headspace provides meditation and mindfulness sessions tailored to various needs, such as improving sleep or managing stress. Another popular app, Talkspace, connects users with licensed therapists for text, audio, or video counseling sessions, offering professional support from the convenience of a smartphone.

The appeal of such apps is clear: accessibility, affordability, and anonymity. Users can access support from the comfort of their own homes, often at a fraction of the cost of traditional therapy sessions. Additionally, the scalability of digital platforms allows app developers to reach a wide audience, further fueling the market's growth.

To monetize their operations, mental health apps can charge their users directly. This is often done through a freemium model, offering basic services for free while charging for premium features or subscriptions. Other revenue streams can come from partnerships with employers, healthcare providers, or insurers, as these entities seek to improve employee well-being and reduce healthcare costs (see, e.g., Strauss, 2019).

Yet another monetization route involves leveraging users’ personal data. As we explain in our book, this strategy presents both opportunities and risks. Here is what we write:

* Opportunities. Other organizations may value the data you collect from your users and from the transactions they conduct on your platform. You may then want to sell direct access to your data to these organizations, either in raw form or in a transformed form (that is, data analytics or insights).

* Risks. First and foremost, you must make sure that you do not run afoul of data protection legislation, such as the European General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA). Second, even if you comply with the legislation, you may still lose the confidence of users who are concerned about their online privacy. You could thus face the same dilemma as with advertising: What you gain by selling your users’ personal data must be balanced against what you lose by driving some users away.
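To make the tradeoff mentioned in the second bullet concrete, it can be written as a stylized back-of-the-envelope condition; the notation below is ours and does not appear in the book:

$$
R_{\text{data}} \;>\; n_{\text{lost}} \times v ,
$$

where $R_{\text{data}}$ is what third parties pay for access to the data, $n_{\text{lost}}$ is the number of users who leave (or never join) because of privacy concerns, and $v$ is the average revenue each of these users would otherwise have generated through subscriptions, premium features, or partnerships. Selling data is only worthwhile when this inequality holds, and for mental health apps the right-hand side is likely to be large precisely because the data is so sensitive.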

Collecting personal data is inherent to these mental health apps, as they rely on user input to provide personalized recommendations and support. Because of the highly sensitive nature of this data, both the monetization potential and the associated risks are higher than for other types of apps.

* Higher opportunities. Third parties – such as advertisers, researchers, and even employers – may be willing to pay hefty sums for personal data, including browsing history, location data, and communication patterns.

* Higher risks. One of the primary ethical issues revolves around informed consent. Users may not fully understand the implications of sharing their sensitive information, especially if it is done under the guise of anonymous support. Moreover, the opaque nature of the algorithms that process this data makes it challenging for users to comprehend how their data is being used to shape their experiences within the app.

Recent events indicate that many apps have chosen to commodify mental health data with little regard for user privacy, leading to serious harm, as evidenced by a 2022 report by the Mozilla Foundation:

When it comes to protecting people’s privacy and security, mental health and prayer apps are worse than any other product category Mozilla researchers have reviewed over the past six years. (…) Mozilla investigated the privacy and security practices of 32 mental health and prayer apps, like Talkspace, Better Help, Calm, and Glorify. 28 of the 32 apps were slapped with a *Privacy Not Included warning label, indicating strong concerns over user data management. And 25 apps failed to meet Mozilla’s Minimum Security Standards, like requiring strong passwords and managing security updates and vulnerabilities.

Worse, as The Guardian reports,

[in 2023] the US Federal Trade Commission handed BetterHelp a $7.8m (£6.1m) fine after the agency found that it had deceived consumers and shared sensitive data with third parties for advertising purposes, despite promising to keep such information private.

Although the market for mental health apps is expected to grow in the coming years, its exact evolution is difficult to predict. However, it is clear that, to endure, platforms will have to balance innovation with accountability and user protection. First and foremost, this means providing clear and accessible privacy policies, obtaining explicit consent for data collection and sharing, and empowering users with options to manage their privacy settings.

(During the preparation of this post, the author used ChatGPT to collect ideas and improve the expression. After using this service, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication.)