The promise of so-called “Big Data” for improving the quality, safety, and efficiency of health care is vast. As electronic medical record (EMR) systems become increasingly able to link to external patient registries and to analyze and interpret larger amounts of data, the potential expands for physicians to diagnose illness more accurately and treat it more effectively.

Big data is typically characterized by its “volume, velocity, and variety,” which translates to large amounts of easily and rapidly accessible, heterogeneous patient data that were not previously available for analysis.1 Big data can be used for traditional analytic approaches, but it is now being touted for predictive analytics (PA) applications that use machine learning and artificial intelligence.

Electronic algorithms of care



PA is the use of electronic algorithms for clinical decision support that forecast future events in real time. Proponents argue that this approach will help clinicians make important clinical decisions in the clinic or at the bedside, for example, when deciding which patients will benefit most from ICU-level care or which patients are at greatest risk of postoperative complications. Machine learning algorithms that apply PA to image-based detection of diabetic retinopathy are well validated and have already been demonstrated to be successful.2 With wider application of this technology, however, comes the potential for misuse, bias, and reduced equity if it is not used properly. Using large data sets linked to extensive patient registries creates the possibility of great benefit but also of harm.3 Physicians, who have ethical and professional obligations to represent patients’ best interests, will be on the front lines of ensuring that this emerging technology does not interfere with high-quality, individualized, patient-centered care.
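To make the idea concrete, the sketch below shows roughly what such an algorithm looks like under the hood: a model trained on past patients that returns a risk estimate for a new patient in real time. This is a minimal, hypothetical illustration using Python with numpy and scikit-learn; the features, data, and outcome are entirely synthetic, not a validated clinical tool.

```python
# Illustrative sketch only: a toy PA model that estimates a patient's risk
# of a postoperative complication. All features, data, and coefficients
# here are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: age, BMI, comorbidity count (all synthetic).
X = rng.normal(loc=[65, 28, 2], scale=[10, 4, 1], size=(500, 3))
# Synthetic outcome: 1 = postoperative complication occurred.
y = (X @ np.array([0.04, 0.05, 0.6]) + rng.normal(size=500) > 5.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Real-time" use at the point of care: score a new patient.
new_patient = np.array([[72, 31, 3]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted complication risk: {risk:.1%}")
```

In practice, a model of this kind would be embedded in the EMR and surface its estimate during the encounter; the questions that follow concern whether such an estimate should be trusted and how it should be used.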

What will physicians need to know about a PA algorithm when using it in the care of their patients? In general, physicians should apply the same critical thinking and analysis to evaluating decision support as they already do for assessing the methodology and applicability of a conventional evidence-based study. First, they should ask themselves how the PA algorithm was developed and validated. Different stakeholders have different values. In developing an algorithm, a hospital may be subtly and differentially concerned about cost, a physician about workflow, and a patient about adverse outcomes or quality of life. The more individuals know about how a model was developed, the more confident they can be that it accounts for these diverse considerations.
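As a rough illustration of what development and validation involve, the sketch below performs the simplest kind of internal validation on the same sort of synthetic model: hold out data the model never saw during training, then check discrimination (area under the ROC curve) and a crude measure of calibration. This is only a toy example under the same synthetic-data assumptions as above; real clinical models also require external validation in new patient populations.

```python
# Sketch of simple internal validation: hold out data the model never saw,
# then check discrimination (AUC) and rough calibration. The data and
# features are synthetic, mirroring the hypothetical example above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(loc=[65, 28, 2], scale=[10, 4, 1], size=(500, 3))
y = (X @ np.array([0.04, 0.05, 0.6]) + rng.normal(size=500) > 5.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

test_risk = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, test_risk):.2f}")
# Crude calibration check: predicted risk vs. observed event rate.
print(f"Mean predicted risk {test_risk.mean():.1%} vs. "
      f"observed rate {y_test.mean():.1%}")
```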

Another important point is whether an appropriate sample of patients was included in a model’s development. If a model was designed without sufficient samples of specific populations or if those populations have worse health outcomes due to social factors, the results of the PA algorithm may reinforce existing health disparities and worsen health equity.4
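One way such a problem can be surfaced is a subgroup audit: comparing the model’s performance across patient groups. The sketch below illustrates the basic idea on synthetic data with a hypothetical, underrepresented group label; auditing a real model for equity is considerably more involved than a single metric comparison.

```python
# Sketch of a subgroup audit on synthetic data: compare model performance
# across a hypothetical group indicator. A large gap can signal that the
# model performs worse for, and may potentially harm, an underrepresented
# group. (For brevity this checks in-sample performance only.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(loc=[65, 28, 2], scale=[10, 4, 1], size=(500, 3))
y = (X @ np.array([0.04, 0.05, 0.6]) + rng.normal(size=500) > 5.5).astype(int)
# Hypothetical group label; group 1 is underrepresented (roughly 10%).
group = (rng.random(500) < 0.10).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

for g in (0, 1):
    mask = group == g
    print(f"group {g}: n={mask.sum():3d}  "
          f"AUC={roc_auc_score(y[mask], risk[mask]):.2f}")
```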

What about conflicts of interest in the development of PA for patient care? It is easy to review conflict of interest disclosures in a peer-reviewed study, but these conflicts may not be as transparent in a PA model. Physicians will likely want to know what variables were used in creating the model and whether some stakeholders had a profit incentive that may have interfered with objective model development.

Finally, the results of a PA algorithm may put physicians in a difficult position when helping a patient decide on treatment. If a model recommends against offering a surgical intervention, but the model was designed primarily to improve the overall health of a population or to reduce health care costs, its recommendation may be at odds with an individual patient’s needs and preferences and may conflict with the physician’s obligation to advocate for the best interests of that patient.

Obligation to the patient remains

Some have suggested that PA technology is more hype than substance.5 Although the ability of big data and PA to improve our predictive power will continue to grow, the approach retains some of the same challenges and limitations as evidence-based medicine. Ultimately, physicians are responsible for applying evidence to individual patients and using it as part of a shared decision-making process to identify and advocate for what is best for each patient. PA models will have vastly more data, but they are not yet close to eliminating all uncertainty in clinical care.

All of these considerations raise an important question: Will PA provide additional data for discussion, or will it be used as a definitive clinical answer that is provided to the patient? Regardless of how this technology is used in health care, clinicians will always be needed to discuss and apply information to the patient in the room. This will help ensure that patients’ interests continue to be adequately represented.

David J. Alfandre MD, MSPH, is a health care ethicist for the National Center for Ethics in Health Care (NCEHC) at the Department of Veterans Affairs (VA) and an Associate Professor in the Department of Medicine and the Department of Population Health at the NYU School of Medicine in New York. The views expressed in this article are those of the author and do not necessarily reflect the position or policy of the NCEHC or the VA.

References

1. Price WN 2nd, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019;25:37-43.

2. Gulshan V, Peng L, Coram M, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016;316:2402-2410.

3. Shah ND, Steyerberg EW, Kent DM. Big data and predictive analytics: recalibrating expectations. JAMA. 2018;320:27-28.

4. Cohen IG, Amarasingham R, Shah A, et al. The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Aff. 2014;33(7):1139-1147.

5. Emanuel EJ, Wachter RM. Artificial intelligence in health care: will the value match the hype? JAMA. Published online May 20, 2019. doi:10.1001/jama.2019.4914