About Us

Artificial intelligence (AI) supports several areas of health care, most notably medical imaging. As AI continues to shape the field, its uses have expanded to non-imaging applications as well, though its precise role in these applications remains unclear. Therefore, we have developed a report and an accompanying evidence map to help patients, caregivers, and other stakeholders better understand non-imaging AI applications in health care.

The report identifies the types of AI healthcare applications that are already in use or are expected to be available in the next five years and examines the evidence of their benefits and harms.


View the report

Many Applications in Use and in the Pipeline

The report reviews 109 non-imaging AI applications that are currently in use or are likely to be used soon. Notably, some of these applications require clearance from the US Food and Drug Administration (FDA) before use in clinical practice. The report found that only 20 percent of the applications, whether in use or in the pipeline, have FDA clearance; 40 percent are already in use without FDA clearance, and the remaining 40 percent are in development.

The reviewed non-imaging applications, in use and on the horizon, provide the following types of health information: 61 percent assess an individual’s health, 30 percent give health recommendations, and 9 percent offer information on treatment delivery.

Evaluations of published and grey-literature studies reported only benefits of AI applications and no harms. However, the quality of evidence varied across study designs. Studies validated only the accuracy of the AI algorithms or patient satisfaction; none evaluated health outcomes. Future work needs to address these shortcomings.

Stakeholder Feedback Addresses Bias, Interpretability, and Privacy

Stakeholders who participated in this review recognized issues around bias, interpretability, privacy, and cybersecurity with the widespread adoption of AI in health care.

Stakeholders consistently raised issues of bias and value misalignment (i.e., systems not properly responsive to underlying human interests and values). Bias and value alignment problems occur because AI algorithms tend to be optimized for global accuracy: whereas the general public values equity and is sensitive to how errors are distributed across the population, AI algorithms may not account for where those errors occur. It is encouraging, however, that researchers and developers are working to address fairness in machine learning and are discovering ways to ensure that the needs and attributes of a diverse population are reflected in the design and implementation of AI applications.

Another common concern shared by stakeholders is that AI applications sometimes provide an answer without revealing how it was derived. This “black box” approach can become a barrier to adoption. The interpretability of an AI application depends on the type of machine learning used, which was unknown for 61 percent of the applications reviewed in the report.

Citing concerns about individual privacy protections, stakeholders also discussed safeguarding data and devices from theft and cyberattacks. The report found no information on how developers address safety and privacy, beyond brief mentions of these issues in the evaluated studies.


Artificial Intelligence in Clinical Care: Explore Evidence Map and Visualization

Evidence Map Shows Available Applications across Conditions

The AI in Clinical Care evidence map provides a high-level, visual overview of the available AI applications across nine diseases: Alzheimer’s and dementia, cancer, cardiovascular disease, cerebrovascular disease, diabetes, kidney disease, mental health, respiratory disease, and substance use disorder.

We acknowledge that further work is necessary to evaluate the benefits and risks of AI across a range of applications in health care. Future work has an opportunity to improve the quality of the evidence, develop more accurate algorithms, and conduct rigorous evaluations and reporting.

As more research is conducted in this fast-moving field, we hope the report and evidence map will be useful tools for understanding the current landscape of evidence.
