Technology has always presented numerous opportunities to improve and transform healthcare, including reducing human errors, improving clinical outcomes, facilitating care coordination, improving practice efficiency, and tracking data over time. Machine learning (ML) has already proven effective at disease identification and prediction: it recognizes patterns too subtle for the human eye to detect, guiding physicians toward better-targeted therapies and improved patient outcomes. Researchers have also used ML to recognize signs of depression and suicidality by assessing patients’ voices, picking up changes in speech too subtle for a doctor to notice. Artificial intelligence (AI) and machine learning can expand our approach to mental health.
Mapping Mental Health
Researchers at Massachusetts General Hospital have developed an artificial intelligence model that generates ‘personalized maps’ to guide individuals toward improved mental well-being. In this study, the researchers developed a model based on deep learning, a type of machine learning that uses layered algorithmic architectures to analyze data. The model organizes the data into self-organizing maps; the researchers identified the most depression-prone psychological configurations on these maps and used them to develop an algorithm that helps individuals move away from potentially dangerous mental states.
Shortest Path to Human Happiness
Deep Longevity, in collaboration with Harvard Medical School, offers another deep learning approach to mental health. Researchers have created two digital models of psychology that work together to find a path to happiness.
The first model depicts the trajectories of the human mind as it ages. The second model is a self-organizing map that serves as the foundation for a recommendation engine for mental health applications. This learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and determines the shortest path to mental stability for any individual.
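The “shortest path to mental stability” idea can be pictured as a path search over a map of psychological states. The sketch below is purely illustrative, assuming a toy grid of states with invented depression-risk scores; it is not the study’s actual model. It uses breadth-first search to find the fewest-step route from a current state to a target state while avoiding high-risk states.

```python
from collections import deque

def shortest_path(grid_risk, start, goal, risk_threshold=0.7):
    """BFS over a 2-D map of psychological states.

    grid_risk[r][c] is a depression-risk score in [0, 1] for that state;
    cells at or above risk_threshold are avoided. Returns the list of
    states from start to goal, or None if no safe route exists.
    """
    rows, cols = len(grid_risk), len(grid_risk[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen
                    and grid_risk[nr][nc] < risk_threshold):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

# A toy 3x3 map: high-risk states (0.9) block the direct route,
# so the search detours through lower-risk states.
risk = [
    [0.2, 0.9, 0.1],
    [0.3, 0.9, 0.2],
    [0.1, 0.4, 0.1],
]
route = shortest_path(risk, start=(0, 0), goal=(0, 2))
```

In a real system, the grid and risk scores would come from the trained self-organizing map rather than being hand-written, and edges could be weighted by how feasible each transition is for the individual.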
Combining Technology & Therapy is Key
Anyone with a smartphone can access conversational agent phone apps, also known as chatbots, which are meant to help users cope with the anxieties of daily life. These language processing systems can imitate human discussion by simulating conversations with a therapist via text. They can be a gateway to therapy or can reinforce lessons from in-person sessions. Research has shown that some people prefer interaction with chatbots rather than with real humans.
With the help of AI and machine learning, researchers hope the brain itself can help identify mental health issues. By applying specially designed algorithms to brain scans, labs could identify distinctive features that determine a patient’s optimal treatment. Machine learning could also assist in suicide prevention. Currently, doctors have only a slight advantage over chance in recognizing this risk, but algorithms trained on data that are easily accessible to health care providers can predict attempts with significantly improved accuracy.
Stay Current with Technology Advances
From healthcare to security, machine learning plays a critical role in developing the technology that will determine our future. Covering machine learning models, algorithms, and platforms, Machine Learning: Predictive Analysis for Business Decisions is a five-course program from IEEE.
Connect with an IEEE Content Specialist today to learn more about this program and how to get access to it for your organization.
Interested in the program for yourself? Visit the IEEE Learning Network.
Resources
Deep Longevity LTD. (2 July 2022). Harvard Developed AI Identifies the Shortest Path to Human Happiness. SciTechDaily.
Gavrilova, Yulia. (4 July 2022). AI Chatbots & Mental Healthcare. IOT for All.
Glick, Molly. (1 July 2022). Your Next Therapist Could Be a Chatbot App. Discover.
Kennedy, Shania. (28 June 2022). AI-Generated ‘Maps’ May Help Improve Mental Well-being. Health IT Analytics.
Kesari, Ganes. (24 May 2021). AI Can Now Detect Depression from Just Your Voice. Forbes.
Rutherford, Lucie. (18 February 2022). Medicine Meets Big Data: Clinicians Look to AI For Disease Prediction and Prevention. UVAToday.
Savage, Neil. (25 March 2020). How AI is improving cancer diagnostics. Nature.
When it comes to designing ethical artificial intelligence (AI) systems, developers usually have the best intentions. However, problems often occur when developers fail to follow through on those intentions, a phenomenon dubbed the “intention-action gap.”
To avoid this, a new report from the World Economic Forum and the Markkula Center for Applied Ethics at Santa Clara University, titled “Responsible Use of Technology: The Microsoft Case Study,” recommends developers follow the lessons listed below.
AI Standards Lessons
- Before you can innovate responsibly, you must transform your organization’s culture:
To innovate ethically, you need a company culture that encourages introspection and learning from mistakes. For example, by adopting what Microsoft calls a “hub-and-spoke” model across the various departments that influence product development, Microsoft ensures that security, privacy, and accessibility are embedded into all of its products. The “hub” consists of a trio of internal governance groups: the AI, Ethics, and Effects in Engineering and Research (AETHER) Committee; the Office of Responsible AI (ORA); and the Responsible AI Strategy in Engineering (RAISE) group. Additionally, Microsoft launched the Responsible AI Standard, a series of steps that internal teams have to follow to support the creation of responsible AI systems.
- Use tools and methods that make ethics implementation simple:
With the right technical tools, it is easier to integrate your new ethics model into the many facets of your organization. Microsoft uses several technical tools, including Fairlearn, InterpretML, and Error Analysis, to implement ethics. For example, Fairlearn allows data scientists to analyze and improve the fairness of machine learning models. Each platform offers dashboards that make it easier for workers to visualize performance. Through checklists, role-playing exercises, and stakeholder engagement, these tools also help teams understand the possible consequences of their products and foster more compassion for how underrepresented stakeholders might be affected.
- Create employee accountability by measuring impact:
Make sure your employees are aligned with your company’s ethical values by evaluating their performance against your ethics principles. To do this, Microsoft team members meet with managers for twice-yearly performance evaluations and goal-setting sessions to establish personal goals in line with those of the company.
- Inclusive products are superior products:
By innovating responsibly through the lifecycle of a product, companies will make products that are better and more inclusive. They can do this by creating principles for AI toolkits that set expectations from the outset of product development.
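The kind of fairness analysis that tools like Fairlearn automate can be illustrated by hand. The sketch below computes a demographic parity difference, one of the standard group-fairness metrics (Fairlearn ships an equivalent function); the toy loan-approval data and function names here are invented for the example.

```python
def selection_rate(y_pred, groups, group):
    """Fraction of positive predictions given to one sensitive group."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two groups.
    0.0 means every group is selected at the same rate."""
    rates = [selection_rate(y_pred, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy binary predictions (1 = approved) for two groups, "a" and "b".
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 3/4 of the time, group "b" only 1/4,
# so the gap is 0.5 -- a signal the model deserves scrutiny.
gap = demographic_parity_difference(y_pred, groups)
```

A dashboard like Fairlearn’s surfaces the same per-group rates visually, so teams can spot disparities without writing this kind of code themselves.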
New Healthcare Industry AI Standard Considers Three Areas of Trust
A Consumer Technology Association (CTA) working group of 64 organizations recently created a new standard that identifies the basic requirements for establishing reliable AI solutions. Healthcare organizations involved in the project include AdvaMed, America’s Health Insurance Plans, Ginger, Philips, 98point6, and ResMed.
The standard, released in February 2021 and accredited by the American National Standards Institute, considers three ways to create trustworthy and sustainable AI healthcare solutions:
- Human trust: Consider the way humans interact and how they will interpret the AI solution.
- Technical trust: Address data use, such as data access, privacy, quality, integrity, and issues around bias. Additionally, technical trust considers the technical execution and training of an AI design to provide predictable results.
- Regulatory trust: Ensure compliance with regulatory agencies, federal and state laws, accreditation boards, and global standardization frameworks.
Developing standards for AI applications is difficult, but necessary. By having a plan that integrates ethics throughout your organization, you can better ensure your AI systems are reliable and safe.
Establishing AI Standards for Your Organization
Artificial intelligence continues to spread across industries, including healthcare, manufacturing, transportation, and finance. When leveraging these new digital environments, it is vital to keep in mind rigorous ethical standards designed to protect the end user. AI Standards: Roadmap for Ethical and Responsible Digital Environments is a new five-course program from IEEE that provides instructions for a comprehensive approach to creating ethical and responsible digital ecosystems.
Contact an IEEE Content Specialist to learn more about how this program can benefit your organization.
Interested in getting access for yourself? Visit the IEEE Learning Network (ILN) today!
Resources
Green, Brian and Lim, Daniel. (25 February 2021). 4 lessons on designing responsible, ethical tech: Microsoft case study. World Economic Forum.
Landi, Heather. (18 February 2021). AHIP, tech companies create new healthcare AI standard as industry aims to provide more guardrails. Fierce Healthcare.