Governments and organizations must identify adolescents and young people who are vulnerable to poor mental health at an earlier stage, particularly as they begin to exhibit harmful behaviors that will negatively affect their lives and society at large, in order to aid overall recovery and wellbeing. GeestMEDx intends to use ethical artificial intelligence to monitor mental health and well-being in support of the UN Sustainable Development Goals (SDGs) for 2030. We will use additional data sources to identify people who are at risk and require additional care. Algorithms can produce impressive results, but they will not replace clinicians in diagnosing patients.

We want to use artificial intelligence (AI) and social media interactions to introduce active surveillance into mental healthcare for the Dutch population by identifying citizens with mental health challenges based on their internet behavior.

If dogs can be trained to detect suicidal people, why can't we train AI to detect mentally disturbed people online?
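To make the idea concrete, here is a deliberately naive sketch of what screening posts for distress signals could look like. The phrase list, weights, and threshold are invented purely for illustration and are not clinically validated; any real system would require validated models, informed consent, and the kind of oversight discussed later in this piece. Note that the output is a flag for human review, not a diagnosis, consistent with our position that algorithms will not replace clinicians.

```python
# Illustrative sketch only: a naive keyword-based screen for distress
# signals in short posts. The lexicon, weights, and threshold below are
# invented for illustration, not clinically validated.

# Invented example lexicon: phrase -> weight.
DISTRESS_LEXICON = {
    "can't sleep": 1,
    "no appetite": 1,
    "hopeless": 2,
    "worthless": 2,
    "want to disappear": 3,
}

REFERRAL_THRESHOLD = 3  # assumed cut-off for routing a post to human review


def distress_score(post: str) -> int:
    """Sum the weights of lexicon phrases found in the post (case-insensitive)."""
    text = post.lower()
    return sum(weight for phrase, weight in DISTRESS_LEXICON.items() if phrase in text)


def flag_for_review(post: str) -> bool:
    """True when the score meets the threshold; a human clinician decides next."""
    return distress_score(post) >= REFERRAL_THRESHOLD


print(distress_score("I feel hopeless and can't sleep"))  # 3
print(flag_for_review("I feel hopeless and can't sleep"))  # True
```

In practice such a screen would be one weak signal among many, and the hard problems (multilingual text, irony, bots, consent) are exactly what a production system, unlike this sketch, would have to solve.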


The Social Support Act (WMO) replaced the Welfare Act in 2007 and introduced nine performance fields that provided incentive paradigms aimed at reducing homelessness and indirectly addressing public mental health. The nine performance fields were as follows:

(1) promotion of social cohesion and quality of life
(2) the provision of prevention-focused support to young people
(3) the provision of information, advice and client support
(4) support for informal carers and voluntary workers
(5) promotion of social participation of people with disabilities (including mental health problems)
(6) provision of services to people with disabilities
(7) policies on homeless services, women’s refuges and domestic violence
(8) policies on addiction
(9) the organisation of public mental health care

The gap between the need for large-scale mental health care and the capacity to manage mental health (including its consequences) has grown rapidly over the years. The sharp increase is directly related to our changing lifestyle paradigms: as the internet and remote work or education become more common, the gap widens. In the old paradigm, characterized by a physical, in-person way of life, the existing incentive structures could rely on citizens, family members, or others to report cases, so that people showing signs of mental illness were systematically flagged. As people's interactions become more remote and contactless, those old paradigms must give way to new models of diagnosis and prevention to mitigate risks and minimize damage.

It is said that who you are online is who you are in real life. We are seeing an increase in the number of people, trolls, and bots using words, visuals, and other signals that indicate instability. The Netherlands has an estimated population of 17.15 million citizens and foreign residents. Approximately 96% of them use the internet daily, and 92% use social media for an estimated 1 hour and 24 minutes per day, providing direct and indirect notice of their personal mental health challenges.

Our AI will assist in detecting and identifying signs of mental distress among Dutch citizens and people living on Dutch territory. We believe that mental health rests on four major pillars.


- Mind: How am I thinking?
- Body: Am I moving enough?
- Nutrition: Am I eating well?
- Sleep: Am I recharged?

The congruence of these four factors is what drives mental health. At any given time, the majority of people are out of balance on two or more of them, and they frequently post signals of these imbalances online. We want to use AI to identify them and refer them to the appropriate public and private services.
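The four-pillar congruence check described above can be sketched in a few lines. The pillar names and the "two or more out of balance" rule come from the text; the 0-10 self-report scale and the cut-off of 5 are illustrative assumptions, not a clinical instrument.

```python
# Minimal sketch of the four-pillar congruence check. Pillar names and
# the two-or-more rule follow the text; the 0-10 scale and the cut-off
# of 5 are illustrative assumptions only.

PILLARS = ("mind", "body", "nutrition", "sleep")
OUT_OF_BALANCE_CUTOFF = 5  # assumed: scores below 5 on a 0-10 scale count as out of balance
FLAG_COUNT = 2             # "out of balance on two or more", per the text


def pillars_out_of_balance(scores: dict) -> list:
    """Return the pillars whose score falls below the assumed cut-off."""
    return [p for p in PILLARS if scores.get(p, 0) < OUT_OF_BALANCE_CUTOFF]


def needs_referral(scores: dict) -> bool:
    """Flag for referral when two or more pillars are out of balance."""
    return len(pillars_out_of_balance(scores)) >= FLAG_COUNT


example = {"mind": 7, "body": 3, "nutrition": 4, "sleep": 8}
print(pillars_out_of_balance(example))  # ['body', 'nutrition']
print(needs_referral(example))          # True
```

As with any screen, a referral flag here is an invitation to connect someone with services, not an assessment of the person.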


Artificial intelligence (AI) has enormous potential for improving healthcare and medicine delivery and for helping all countries achieve universal health coverage far faster. However, one barrier to leveraging AI is that it has frequently been approached through the lens of exploiting data and information for consumption and profit, a strong legacy established by the American multinationals (GAFAs). MEDx seeks to investigate how AI design and application could improve if impact, rather than profit, were the primary goal.


AI has the potential to empower patients and communities to take control of their own health care and better understand their changing needs, if used wisely. However, if appropriate measures are not taken, AI may lead to situations in which decisions that should be made by providers and patients are transferred to machines. This undermines human autonomy: humans may not understand how an AI technology arrives at a decision, nor can they negotiate with a technology to reach a shared decision. In the context of artificial intelligence for health, autonomy means that humans should retain complete control over health-care systems and medical decisions.


To have a positive impact on public health and medicine, ethical and human rights considerations must be prioritized in the design, development, and deployment of AI technologies for health. Existing biases in healthcare services and systems based on race, ethnicity, age, and gender, which are encoded in the data used to train algorithms, must be overcome for AI to be used effectively for health. Governments will need to eliminate the pre-existing digital divide: unequal access to information and communication technologies. Such a divide can exclude populations even in rich countries, along lines of gender, geography, culture, religion, language, or age. Many of the world's largest technology companies are investing heavily in data collection (including health data), algorithm development, and AI deployment. The spread of AI could result in healthcare services being delivered in unregulated settings and by unregulated providers, posing challenges for government oversight of health care. Appropriate regulatory oversight mechanisms must therefore be developed to hold the private sector accountable and responsive to those who can benefit from AI products and services, and to ensure the transparency of private sector decision-making and operations.


The six core principles for the ethical use of AI for health identified by the WHO Expert Group are the following:

- Protect autonomy.
- Promote human well-being, human safety, and the public interest.
- Ensure transparency, explainability, and intelligibility.
- Foster responsibility and accountability.
- Ensure inclusiveness and equity.
- Promote AI that is responsive and sustainable.

These principles will be critical to launching any AI solution for mental health in the future.

Author: MEDx Team
