In Kenya, it is believed that one in four people is likely to suffer from a mental disorder, making the country the sixth highest in depression cases in Africa. The Kenya Mental Health Policy 2015-2030 estimates that five in six Kenyans do not receive treatment, a gap the policy attributes to the huge inequity in the distribution of skilled human resources for mental health (psychiatrists, psychiatric nurses, psychologists and social workers).
ARTIFICIAL INTELLIGENCE (AI)
“AI is a constellation of technologies that enable machines to act with higher levels of intelligence and emulate human capabilities to sense, comprehend, and act. These human capabilities are augmented by the ability to learn from experience and adapt over time. In other words, AI enables machines to sense their environment, think, and in some cases learn, to take action in response to the environment and the circumstances underpinning it. AI systems are finding ever-wider application across enterprises as they grow in sophistication.” – Microsoft White Paper (AI in Africa).
AI DIAGNOSTICS AND TREATMENT
Kenya is grappling with a critical shortfall of psychiatrists and other mental health practitioners. As of 2015, the Office of the Auditor-General revealed that the country had only 92 psychiatrists against a requirement of 1,533. Similarly, there were 327 psychiatric nurses instead of 7,666. The globally accepted psychiatrist-to-population ratio is 1:30,000, but in Kenya it stands at about 1:500,000.
AI solutions are emerging at an opportune time. Even though AI adoption in mental health is yet to pick up in Kenya, its advent nonetheless gives hope of reversing the decline in mental wellness among the citizenry. Innovative technologies such as virtual therapists, chatbots and APIs appear, according to those who have tried them, to offer some capability for mental disorder diagnostics and psychotherapy treatment.
The development and adoption of AI tools therefore offers an opportunity to greatly advance mental healthcare services while contributing to a decrease in treatment costs. A feature by the World Economic Forum (WEF) argues that the algorithmic component of care, estimated at 70% of a doctor's workload, should be taken on by technology. This would largely free up the doctor to focus on the remaining 30%, resulting in better, more cost-effective and more enjoyable healthcare delivery.
A report launched by the International Longevity Centre UK on 30 April 2019 argues that through the use of big data, the sharing economy and AI, among others, technology could play a major role in overcoming some of the barriers in healthcare. Mr Farhan Yusuf, a pharmacist, feels that AI would make early detection of mental health conditions easier. Farhan cites the example of an app that can scan people's posts on Instagram for hints of depression. In his view, much work remains to perfect apps like these, but they are a step in the right direction.
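The app Farhan describes is not public, so its actual method is unknown. As a purely illustrative sketch, with an invented `depression_hint_score` function and an arbitrary hand-picked keyword list, a naive screen over post captions might look like this:

```python
import re

# Hypothetical, illustrative keyword list -- a real screening tool would use
# a validated clinical instrument or a trained model, not a hand-picked vocabulary.
HINT_WORDS = {"hopeless", "worthless", "alone", "empty", "numb"}

def depression_hint_score(caption: str) -> int:
    """Count how many distinct hint words appear in a post caption."""
    words = set(re.findall(r"[a-z']+", caption.lower()))
    return len(words & HINT_WORDS)

def flag_posts(captions: list[str], threshold: int = 2) -> list[str]:
    """Return captions whose hint score meets the threshold, for human review."""
    return [c for c in captions if depression_hint_score(c) >= threshold]
```

This only illustrates the idea of mining posts for signals; any real deployment would require clinical validation and, crucially, the user's informed consent.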
With an already overwhelmed mental healthcare workforce in the country, AI tools could be the definitive solution to serving more patients and increasing accessibility to mental healthcare diagnostics, and psychotherapy treatment.
ETHICAL ISSUES OF AI
Although AI holds great promise to improve mental healthcare in the country, its deployment equally poses significant challenges. Privacy is one of the major concerns: AI works because a lot of personal data is fed into it, which raises questions over what happens to the data patients share with AI tools. Additionally, as more health records become digitized, they are becoming prime targets for hackers.
The other issue is algorithmic bias. AI works based on the data fed into it, and can therefore reproduce real-world biases that are not inclusive or representative of all demographic groups. Last but not least, AI poses the risk of harming patients in the event of a misdiagnosis, thereby compromising patient safety.
MEASURES TO CURB ETHICAL ISSUES IN AI
With hacking increasingly becoming a threat to digitized tools and records, AI designers need to build mitigation techniques into their solutions from the outset. These techniques could, for instance, include storing minimal personally identifiable patient data, regularly deleting patient session transcripts once they have been analysed, and encrypting all patient-doctor communication as well as data stored on the server.
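As one hedged illustration of the first two techniques (the class and function names here are invented for the example), patient identifiers can be stored only as salted hashes, and session transcripts purged immediately after analysis:

```python
import hashlib

def pseudonymize(patient_id: str, salt: bytes) -> str:
    """Store only a salted hash of the identifier, never the raw value."""
    return hashlib.sha256(salt + patient_id.encode()).hexdigest()

class SessionStore:
    """Keeps transcripts only until they have been analysed."""

    def __init__(self):
        self._transcripts = {}

    def record(self, pid_hash: str, transcript: str) -> None:
        self._transcripts[pid_hash] = transcript

    def analyse_and_purge(self, pid_hash: str, analyser):
        result = analyser(self._transcripts[pid_hash])
        del self._transcripts[pid_hash]  # delete the transcript once analysed
        return result
```

In production this would sit alongside the third technique, encryption in transit (e.g. TLS) and at rest, which is omitted from this sketch.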
The other measure is enacting data protection legislation that ensures strong, meaningful and comprehensive privacy protections, with redress for violations. The recent approval of the Data Protection Policy and Law by Kenya's cabinet offers hope to the citizenry: it is expected to strengthen the government's commitment to protecting personal data against misuse and to eliminate unwarranted invasion of privacy, a constitutional right enshrined in Article 31 of Kenya's constitution.
Additionally, with Kenya claiming to be the only African country with an AI taskforce in place, AI-specific policies and strategies would be a great leap forward and a shot in the arm for the various industries aspiring to adopt innovative AI tools, especially the health sector.
Despite this progress, Mr Francis Monyango, a tech law and policy researcher, was categorical that AI poses legal liability challenges. Francis stated that if a disruptive technology were to cause harm, for instance, it would be difficult to establish whether blame should fall on the AI programmer or on the owner of the technology. Monyango added that a "sandbox approach" to AI was needed, so as to tackle the legal issues appropriately.
With regard to bias, AI has been alleged to have a tendency to discriminate based on race, gender or demographics. This usually stems from the data fed into it, which can lead AI to display real-world biases that are not inclusive or representative of all demographic groups. AI experts argue that developers should be urged to work with data from several group representations as a measure to test for bias, discrimination and other harms posed by AI. The experts emphasise that developers should incorporate ethics and human rights into their AI design.
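One simple way to run the kind of bias test the experts describe (all names here are illustrative, not from any particular toolkit) is to compare a model's positive-flag rate across demographic groups and raise an alarm when the gap is large:

```python
from collections import defaultdict

def flag_rate_by_group(records, predict):
    """records: iterable of (group, features) pairs.
    Returns the fraction of positive predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, features in records:
        counts[group][0] += int(predict(features))
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def max_rate_gap(rates: dict) -> float:
    """Largest difference in flag rates between any two groups."""
    return max(rates.values()) - min(rates.values())
```

A gap above some agreed threshold (say 0.1) would then trigger a review of the training data for the under- or over-flagged groups; real audits would also examine error rates, not just flag rates.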
On the issue of safety, since strong AI does not yet exist, AI systems in health cannot be fully autonomous in some areas and thus require humans to intervene and oversee that treatment protocol parameters remain safe and effective for patients. This will be key to minimising errors in any given undertaking by an AI system.
AI has the potential to offer vast opportunities to transform Kenya’s healthcare system by helping scarce psychiatrists and other mental health practitioners do more with less. According to a Microsoft white paper, this is likely to speed up initial processing, triage, diagnoses, and post-care follow up, thereby stretching their limited time to serve more patients and increase access to healthcare.
The paradigm shift AI brings to healthcare is likely to improve mental healthcare services while decreasing treatment costs, which could help the country achieve Goal 3 of the global Sustainable Development Goals (SDGs) and the highest standard of mental health by 2030, as envisioned in the Kenya Mental Health Policy.