Artificial Intelligence: The New Hype in Healthcare

by Jingmei Yang
SECOND PRIZE in OR/MS Tomorrow Student Writing Competition 2019
Jingmei Yang is a Ph.D. student in Industrial Engineering at the University of Texas at Arlington.

The future of the healthcare industry has never looked brighter. Artificial Intelligence (AI) has made remarkable progress across a range of medical applications, including drug discovery, remote patient monitoring, medical diagnostics, risk management, virtual assistants, and hospital management. These gains in accuracy and efficiency are made possible by innovations in deep neural networks and high-end computational resources, combined with the increasing availability of medical data. In this report, we summarize some significant AI innovations in healthcare, followed by a discussion of future challenges and opportunities.

AI is achieving expert-level disease diagnostics. Using deep neural networks for image recognition, AI can now predict and diagnose disease at a level comparable to specialists. One example is helping dermatologists diagnose skin cancer. Roughly 5.4 million cases of skin cancer are reported annually in the United States, and early detection allows medical practitioners and patients to take proactive action in treatment (Rogers et al., 2015). Esteva et al. (2017) at Stanford built a deep convolutional network for automated dermatology. Trained on 129,450 images spanning 2,032 diseases, the system achieved accuracy on par with dermatologists. Deployed on mobile devices, this framework could extend the reach of specialists, widen the scope of primary care practice, and offer a low-cost approach to diagnostic care.

Convolutional network for skin disease detection. Source: Esteva et al. (2017), picture courtesy of the authors.
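To make the approach concrete, here is a minimal PyTorch sketch of the transfer-learning recipe behind such a classifier: fine-tuning an ImageNet-pretrained Inception v3 (the backbone Esteva et al. started from) on a labeled set of lesion images. The directory layout, batch size, and learning rate are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing; Inception v3 expects 299x299 inputs.
preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: one sub-folder of lesion images per disease class.
train_set = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and swap both classification heads so the
# output dimension matches the number of skin disease classes.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
n_classes = len(train_set.classes)
model.fc = nn.Linear(model.fc.in_features, n_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, n_classes)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits, aux_logits = model(images)  # train mode returns an auxiliary head
    loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
    loss.backward()
    optimizer.step()
```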

AI supports the diagnosis of cardiovascular diseases. Cardiovascular risk can be revealed through analysis of retinal fundus images, a non-invasive way to visualize blood vessels. Companies such as Google have dedicated resources to developing models that extract and quantify risk markers in retinal images. Risk factors such as age, gender, smoking status, and systolic blood pressure feed well-established cardiovascular risk calculators such as the Framingham risk score and SCORE (Systematic Coronary Risk Evaluation); until recently, however, these factors could not be extracted from retinal images. In a Nature Biomedical Engineering paper, Poplin et al. (2018) demonstrated how to identify the presence of these risk factors in the retina. Their neural network model predicts cardiovascular risk directly from retinal images and quantifies the risk factors to a degree of precision not achieved before. With the aid of such an AI system, cardiovascular risk can be obtained immediately from non-invasive retinal images.
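Architecturally, this is a multi-task setup: a shared image backbone with a separate output head per risk factor, regression heads for continuous factors such as age and systolic blood pressure, and a classification head for categorical ones such as smoking status. The sketch below illustrates the idea with a ResNet-18 backbone and made-up tensor shapes; Poplin et al. used Inception-style networks, so treat this as an illustration of the pattern, not their model.

```python
import torch
import torch.nn as nn
from torchvision import models

class RetinalRiskNet(nn.Module):
    """Multi-task CNN: shared backbone, one head per risk factor."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        dim = backbone.fc.in_features
        self.age_head = nn.Linear(dim, 1)      # regression: age in years
        self.sbp_head = nn.Linear(dim, 1)      # regression: systolic blood pressure
        self.smoker_head = nn.Linear(dim, 2)   # classification: smoking status

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.age_head(h), self.sbp_head(h), self.smoker_head(h)

model = RetinalRiskNet()
fundus = torch.randn(4, 3, 224, 224)           # hypothetical batch of fundus images
age, sbp, smoker_logits = model(fundus)

# Each head gets its own loss term; the shared backbone learns retinal
# features that are useful for predicting all risk factors at once.
age_true = torch.tensor([[62.], [48.], [70.], [55.]])
loss = nn.functional.mse_loss(age, age_true)   # plus terms for the other heads
```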

AI can be used for retinal disease diagnosis. A research team at DeepMind developed an innovative framework that investigates eye scans from routine clinical practice. In a paper published in Nature Medicine, De Fauw et al. (2018) demonstrated that an AI system is capable of automatically identifying retinal diseases in only a minute. Additionally, the system can classify patients by severity and redirect medical resources to the patients most in need of urgent care. This prioritization attempts to reduce the long delays between scan and treatment that result from the complexity of interpreting Optical Coherence Tomography (OCT) images and the shrinking number of qualified interpreters. The framework can make referral recommendations for over 50 sight-threatening retinal diseases at a level comparable to clinical experts, and it has great potential for preventing sight loss in patients with diabetic retinal disease.

Retinal disease diagnosis and referral suggestion framework. Source: De Fauw et al. (2018), picture courtesy of the authors.
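A notable design choice in this framework is its two-stage structure: a segmentation network first converts the raw OCT scan into a device-independent tissue map, and a second network reads that map to produce diagnoses and a referral urgency, which can then order the patient queue. The sketch below mimics that pipeline shape with stub networks and invented dimensions; De Fauw et al. used 3D U-Net-style segmentation and model ensembles, so this is only an illustration.

```python
import torch
import torch.nn as nn

# Stage 1 (stand-in): map a raw OCT slice to a tissue map with one channel
# per tissue/pathology class. A real implementation would replace this stub.
class SegmentationStub(nn.Module):
    def __init__(self, n_tissues=15):
        super().__init__()
        self.net = nn.Conv2d(1, n_tissues, kernel_size=3, padding=1)
    def forward(self, oct_slice):
        return self.net(oct_slice).softmax(dim=1)

# Stage 2: read the device-independent tissue map and emit a referral
# urgency (observation / routine / semi-urgent / urgent).
class ReferralNet(nn.Module):
    def __init__(self, n_tissues=15, n_urgencies=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_tissues, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_urgencies),
        )
    def forward(self, tissue_map):
        return self.net(tissue_map)

segment, refer = SegmentationStub(), ReferralNet()
scans = torch.randn(8, 1, 128, 128)          # hypothetical batch of OCT slices
urgency = refer(segment(scans)).softmax(dim=1)

# Triage: review patients in descending order of predicted urgent-referral
# probability, so the most severe cases are seen first.
order = urgency[:, -1].argsort(descending=True)
print("suggested review order:", order.tolist())
```

The intermediate tissue map is what decouples the diagnosis network from any particular scanner: when a new OCT device arrives, only the segmentation stage needs retraining.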

AI can support speech synthesis. Edward Chang, a neurosurgeon, and his research team at the University of California, San Francisco successfully mapped brain signals to audible speech using AI (Anumanchipalli et al., 2019). Recurrent neural networks were trained on brain signals collected from five epilepsy patients to predict articulatory movements of the tongue, lips, jaw, and larynx. Next, the networks mapped the estimated movements onto synthesized speech. This pioneering technology has a direct application in speech-decoding devices for people with speech impairments resulting from stroke, traumatic brain injury, or neurological diseases such as multiple sclerosis and Parkinson's disease.

The neural decoding process with AI. Source: Anumanchipalli et al. (2019), picture courtesy of the authors.
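The decoder is itself two-stage: one recurrent network maps neural recordings to articulatory kinematics, and a second maps those kinematics to acoustic features that can be rendered as audio. The sketch below shows the shape of such a pipeline; electrode counts, feature dimensions, and layer sizes are placeholders rather than the published architecture.

```python
import torch
import torch.nn as nn

# Stage 1: decode neural activity into articulatory kinematics
# (tongue, lip, jaw, and larynx trajectories over time).
class BrainToArticulation(nn.Module):
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 128, n_articulators)
    def forward(self, ecog):
        h, _ = self.rnn(ecog)
        return self.out(h)

# Stage 2: map kinematics to acoustic features (e.g., spectrogram
# frames) that a separate vocoder can turn into an audio waveform.
class ArticulationToAcoustics(nn.Module):
    def __init__(self, n_articulators=33, n_mels=80):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 128, n_mels)
    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

decode, synthesize = BrainToArticulation(), ArticulationToAcoustics()
ecog = torch.randn(1, 500, 256)             # hypothetical: 500 steps x 256 electrodes
audio_features = synthesize(decode(ecog))   # (1, 500, 80) spectrogram frames
```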

AI in Radiology. Recently there has been a surge of publications applying deep learning to medical imaging tasks such as denoising, segmentation, and super-resolution. Of particular interest is a research team at the University of Texas Southwestern Medical Center (UTSW) led by Dr. Steve Jiang. With the growing number of treatment modalities in radiation therapy, treatment planning has become complicated and time-consuming for dosimetrists. In an effort to cut planning time while maintaining quality, the UTSW team built a convolutional neural network to predict the radiation therapy dose for prostate cancer patients (Nguyen et al., 2019). By mapping the patient's contours into local and global features, the model can predict a dose distribution with impressive accuracy. Equipped with this dose prediction model in clinical practice, physicians could use the prediction as a preliminary plan and work with dosimetrists to tailor it further, making the planning workflow smooth and efficient.

U-net architecture with additional CNN layers used for dose prediction. Source: Nguyen et al. (2019), picture courtesy of the authors.
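The underlying model is a U-net: an encoder-decoder convolutional network whose skip connections let the decoder combine the global context captured at depth with the fine local detail of early layers, exactly what is needed to map organ contours to a smooth voxel-wise dose map. Below is a deliberately tiny 2D version with invented channel counts and slice sizes; the published model is larger and differs in its inputs and loss details.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class DoseUNet(nn.Module):
    """Tiny U-net: structure contour masks in, voxel-wise dose map out."""
    def __init__(self, n_structures=6):
        super().__init__()
        self.enc1, self.enc2 = block(n_structures, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # one dose value per voxel

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

model = DoseUNet()
# Hypothetical input: binary masks for the target and organs at risk
# on a 128x128 CT slice; target: the clinically delivered dose.
contours = torch.randn(2, 6, 128, 128)
dose_true = torch.rand(2, 1, 128, 128)
loss = nn.functional.mse_loss(model(contours), dose_true)  # voxel-wise dose error
```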

Future challenges. Despite these promising applications, AI has unique limitations when applied to healthcare, such as clinical interpretability, data heterogeneity, and patient privacy.

Deep learning is often treated as a black box; its features and parameters are difficult to understand and interpret in a healthcare setting. Not being able to explain the internal mechanics of a model is a barrier to the broad adoption of AI, since clinical practitioners rely heavily on interpretability to build trust. Researchers have therefore been developing interpretable AI systems, which should make models easier to understand and adopt in practice (a minimal sketch follows this paragraph). Data heterogeneity is another challenge. As shown in the study led by Zech et al. (2018), image data from different hospitals, scanner and imaging-modality vendors, or reconstruction conditions significantly influences model performance. Acquiring training data from diverse settings, nonidentical populations, and multiple institutions helps overcome this problem. Patient privacy is also a concern. Unlike other domains, the healthcare industry handles a great deal of sensitive patient information; balancing full use of the data against the risk of infringing patient privacy requires care and effort when developing models.
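One widely used starting point for interpretability is the saliency map: the gradient of a class score with respect to the input pixels, which highlights the image regions that drove a prediction. A minimal sketch, assuming a generic pretrained classifier and a placeholder image:

```python
import torch
from torchvision import models

# Placeholder model and input; in practice these would be the clinical
# model under scrutiny and a real medical image.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image).max()  # score of the top predicted class
score.backward()            # d(score)/d(pixel) for every input pixel

# Collapse the channel dimension to get one importance value per pixel.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```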

External validation is necessary for AI to prove its promise. AI-based models need to be validated in clinical trials to test their practical value and performance in real-world settings. If clinical performance is validated and model interpretability is enhanced, AI has the potential to positively impact clinical practice with better performance and increased efficiency.

1. Anumanchipalli, G.K., Chartier, J., and E.F. Chang (2019). Speech synthesis from neural decoding of spoken sentences. Nature, 568(7753):493-498.
2. De Fauw, J., Ledsam, J.R., Romera-Paredes, B., Nikolov, S., Tomasev, N., Blackwell, S., Askham, H., Glorot, X., O'Donoghue, B., Visentin, D., et al. (2018). Clinically applicable deep learning for diagnosis and referral in retinal disease. Nature Medicine, 24(9):1342-1350.
3. Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M., and S. Thrun (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639):115-118.
4. Nguyen, D., Long, T., Jia, X., Lu, W., Gu, W., Iqbal, Z., and S. Jiang (2019). A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning. Scientific Reports, 9(1):1076.
 5. Poplin, R., Varadarajan, A.V., Blumer, K., Liu, Y., McConnell, M.V., Corrado, G.S., Peng, L., and D.R. Webster (2018). Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering, 2(3):158-164.
6. Rogers, H.W., Weinstock, M.A., Feldman, S.R., and B.M. Coldiron (2015). Incidence estimate of non-melanoma skin cancer (keratinocyte carcinomas) in the U.S. population, 2012. JAMA Dermatology, 151(10):1081-1086.
7. Zech, J.R., Badgeley, M.A., Liu, M., Costa, A.B., Titano, J.J., and E.K. Oermann (2018). Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLOS Medicine, 15(11):e1002683.