Parkinson’s disease, a neurodegenerative disorder that affects movement, impacts more than ten million people worldwide; roughly 60,000 are diagnosed each year. Early detection can forestall the onset of severe symptoms, but it’s easier said than done — no specific test exists to diagnose Parkinson’s.
Researchers at the Institute for Robotics and Intelligent Systems in Zurich, Switzerland have made encouraging progress, though. In a paper published on the preprint server arXiv.org (“Learning to Diagnose Parkinson’s Disease from Smartphone Data”), they describe an AI system that can diagnose Parkinson’s disease from data collected with a suite of smartphone-based tests.
Their work builds on a prior study by Johns Hopkins University and the University of London, which developed apps — HopkinsPD and CloudUPDRS — to monitor changes in Parkinson’s symptoms throughout the day.
“[M]isdiagnoses [of Parkinson’s disease] are common,” the researchers wrote. “One factor that contributes to misdiagnoses is that the symptoms of Parkinson’s disease may not be prominent at the time the clinical assessment is performed.”
They sourced data collected during the mPower clinical trial, a large-scale, smartphone-based study of Parkinson’s disease in which 1,853 users provided demographic information and any prior professional diagnoses of Parkinson’s disease. It also tasked them with completing a series of tests designed to measure movement, speech, finger dexterity, and spatial memory impairments.
A walking test had them put their phone in their pocket, walk forward, turn around, and retrace their steps. A voice assessment tasked them with saying “aaaah” into their phones’ microphones. A tapping test had them alternately tap two on-screen buttons, and the final test — a memory test — instructed them to repeat a sequence of images illuminated on a grid.
After preprocessing, the team ended up with 300, 250, 25, and 400 samples per record for the walking, voice, memory, and tapping tests, respectively.
The results fed into predictive models: convolutional neural networks for the walking, voice, and tapping tests; a recurrent neural network with bidirectional long short-term memory (BLSTM) for the memory test; and random forest models. Their outputs in turn fed another algorithm — an “evidence aggregation model” (EAM), also a recurrent neural network — which generated a diagnostic score.
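The overall shape of that two-stage pipeline — one model per smartphone test, each producing a piece of evidence that a second model aggregates into a single diagnostic score — can be sketched roughly as follows. The paper's per-test models are CNNs and a BLSTM and its aggregator is recurrent; here plain scikit-learn classifiers on synthetic data stand in, purely to illustrate the structure:

```python
# Sketch of a two-stage setup: per-test models emit probabilities,
# an aggregation model combines them into one diagnostic score.
# All data here is synthetic; this is not the paper's architecture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)  # 0 = control, 1 = Parkinson's (made up)

# Synthetic per-test feature matrices, one per smartphone test.
tests = {name: rng.normal(labels[:, None], 1.0, size=(n, 8))
         for name in ["walking", "voice", "tapping", "memory"]}

# Stage 1: one model per test, each scoring every subject.
per_test_scores = []
for name, X in tests.items():
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    per_test_scores.append(model.predict_proba(X)[:, 1])
evidence = np.column_stack(per_test_scores)  # shape (n, 4): one column per test

# Stage 2: the aggregation model turns the four pieces of
# evidence into a single diagnostic score per subject.
aggregator = LogisticRegression().fit(evidence, labels)
diagnostic_scores = aggregator.predict_proba(evidence)[:, 1]
```

The appeal of the split is that each per-test model can specialize in one data modality, while the aggregator learns how much each test's verdict should count.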
To bring transparency to the EAM’s predictions, the team designed a complementary model — a “neural soft attention mechanism” — that identified which tests and test segments in the data were most important to the model’s output.
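The core idea of soft attention is simple: the model assigns each input a relevance score, normalizes those scores into weights that sum to one, and the largest weights point at the inputs that drove the decision. A minimal numpy sketch (the paper's mechanism sits inside a recurrent network; the scores and relevance values below are invented for illustration):

```python
# Toy soft attention over per-test evidence. Evidence and relevance
# values are hypothetical, not taken from the paper.
import numpy as np

def soft_attention(evidence, relevance):
    """Weight each test's evidence by a softmax over relevance scores."""
    weights = np.exp(relevance - relevance.max())  # subtract max for stability
    weights /= weights.sum()                       # weights now sum to 1
    return weights, float(weights @ evidence)      # weighted aggregate

# Hypothetical per-test scores for one subject, plus attention logits.
evidence = np.array([0.9, 0.4, 0.7, 0.5])   # walking, voice, tapping, memory
relevance = np.array([2.0, 0.1, 1.2, 0.3])  # model's learned relevance

weights, aggregated = soft_attention(evidence, relevance)
print(weights.argmax())  # → 0: the walking test dominated this prediction
```

Because the weights are explicit, a clinician can read off which test the score leaned on — exactly the use case the researchers describe next.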
“Presenting [a] clinician with information about which data the model output is based on can help put the diagnostic score … in perspective and inform the clinician’s further clinical decision-making,” they wrote. “For example, in a patient whose diagnostic prediction focused primarily on motor symptoms, the diagnosing clinician can focus her efforts on ruling out other movement disorders that may cause similar symptoms.”
In the end, the EAM outperformed baseline models that relied strictly on demographic information to diagnose Parkinson’s, with an AUC — a measure of overall test performance — of 0.85. (Roughly speaking, given a random pair of subjects, one with Parkinson’s and one without, the model ranked the Parkinson’s case higher 85 percent of the time; a perfect classifier would score 1.0, and random guessing 0.5.)
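For readers unfamiliar with the metric, AUC (area under the ROC curve) can be computed directly from a model's scores and the true labels. A quick example with made-up scores and labels, using scikit-learn:

```python
# Computing AUC for a toy set of diagnostic scores. Labels and
# scores are synthetic, chosen only to demonstrate the metric.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 0, 1, 1, 1])                 # 1 = Parkinson's
scores = np.array([0.1, 0.4, 0.6, 0.35, 0.8, 0.9])    # model's outputs

auc = roc_auc_score(labels, scores)
# Of the 9 positive/negative pairs, the positive case scores
# higher in 7, so AUC = 7/9 ≈ 0.78.
print(auc)
```

Note that an AUC of 0.85 measures ranking quality across all decision thresholds, which is not the same thing as an 85 percent diagnostic accuracy.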
It’s not a perfect model. The training labels came from professional diagnoses of Parkinson’s disease, which are notoriously fraught with inaccuracies — as many as 25 percent are incorrect, according to some studies. Moreover, because the data was collected on smartphones, its accuracy might have been reduced by random movement, neurological disorders that present similarly to Parkinson’s disease, and other sources of variability.
However, the researchers contend that the system is robust enough to be deployed in the wild.
“Our results confirm that smartphone data collected over extended periods of time could in the future potentially be used as additional evidence for the diagnosis of Parkinson’s disease,” the researchers wrote.