The utopia in which an artificial agent such as Star Trek’s medical hologram can diagnose and cure all of humanity's ills and maladies in an efficient and personal manner is centuries away, and at the moment is only a fanciful idea at home in the imagination of the sci-fi author. This has not stopped researchers and companies from trying to use artificial methods to diagnose and recommend treatment. None of these attempts have come close to the promise of the fiction scribblers, and in some cases the advice given has been dangerous and life-threatening. This does not mean, however, that current AI and machine learning advances cannot dramatically improve health outcomes for patients.
Phrases such as “healthcare is being disrupted by AI”, “big data” and “deep learning” have long since passed into the discourse of non-AI practitioners, and have made many promises that are now on the verge of being kept. Deep learning has dramatically improved the performance of computer vision systems, and this has translated into improved disease detection. For example, imaging systems have shown super-human performance in areas such as breast cancer, prostate cancer and, more recently, COVID-19. Deep learning uses CNNs (Convolutional Neural Networks) in a supervised learning pipeline, in which an oracle (usually a human expert) labels a set of images, some showing people with the target disease and some without. From these images the system generalises what a healthy image should look like and what a diseased one should look like, and then assigns a label to unseen examples as either having the disease or not. This is called image classification. It does not highlight where on the image the tumour or infected area is.
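To make the pipeline concrete, the sketch below shows the forward pass of a tiny toy CNN classifier in plain numpy: a convolution, a ReLU, global average pooling, then a sigmoid that outputs a probability of disease. It is purely illustrative (real systems use frameworks such as PyTorch or TensorFlow, many layers, and learned weights), and all names here are our own.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def classify(image, kernel, weight, bias):
    """Toy CNN forward pass: conv -> ReLU -> global average pool -> sigmoid.

    Returns a value in (0, 1) interpreted as P(disease). In a real
    pipeline the kernel, weight and bias are learned from labelled images.
    """
    feature_map = np.maximum(0, conv2d(image, kernel))  # ReLU
    pooled = feature_map.mean()                         # global average pooling
    return sigmoid(weight * pooled + bias)
```

Training would adjust the kernel and head weights so that labelled “disease” images score near 1 and healthy ones near 0; this sketch only shows how a single image flows through to a single probability, i.e. a label with no localisation.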
Systems that identify the diseased area are often said to perform object detection. In object detection the model highlights the diseased area with a box, similar to the way consumer-grade cameras highlight the faces of the subjects. An example is shown below.
As you can see in the last image, the tumour is highlighted by a red bounding box. This is obviously more informative than a label alone. The technique has been used in screening for common cancers such as cervical cancer and lung cancer.
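One simple (if dated) way to picture object detection is a sliding window: score every patch of the image with a classifier and return the bounding box of the best-scoring patch. The sketch below does exactly that on a synthetic scan; the mean-intensity `score_fn` is a stand-in for a trained classifier, and everything here is an illustrative assumption rather than a production detector.

```python
import numpy as np

def detect(image, window=8, stride=4, score_fn=None):
    """Sliding-window sketch of object detection.

    Scores every window-sized patch and returns the bounding box
    (x, y, width, height) of the highest-scoring one. score_fn is a
    placeholder for a trained disease classifier.
    """
    if score_fn is None:
        score_fn = lambda patch: patch.mean()  # toy score: brightness
    best_score, best_box = -np.inf, None
    h, w = image.shape
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            s = score_fn(image[y:y + window, x:x + window])
            if s > best_score:
                best_score, best_box = s, (x, y, window, window)
    return best_box, best_score

# Synthetic scan with a bright "lesion" region.
img = np.zeros((32, 32))
img[20:28, 20:28] = 1.0
box, score = detect(img)
```

Modern detectors (Faster R-CNN, YOLO and friends) replace this brute-force scan with learned region proposals, but the output is the same kind of box the red rectangle in the figure shows.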
An extension of object detection is semantic pixel labelling, where each pixel is assigned a label. In the image above, semantic pixel labelling would typically assign one colour to healthy lung tissue and a different colour to the diseased lung material. The image below shows the output of simple semantic segmentation, where the lungs are marked in green and the heart in red. Although the image is simple and lacks the detail needed for disease recognition, it clearly holds more information than the equivalent X-ray or CT scan.
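Mechanically, a segmentation network outputs one score map per class and each pixel takes the highest-scoring class; colouring the result gives images like the one described. The snippet below sketches that final step on made-up scores for a tiny 4x4 scan, with an assumed palette (lungs green, heart red) matching the figure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-pixel class scores for a 4x4 scan:
# channel 0 = background, 1 = lung, 2 = heart.
scores = rng.random((3, 4, 4))

# Each pixel is labelled with its highest-scoring class.
label_map = scores.argmax(axis=0)

# Colour each class as in the figure: lungs green, heart red.
palette = {0: (0, 0, 0), 1: (0, 255, 0), 2: (255, 0, 0)}
coloured = np.zeros((4, 4, 3), dtype=np.uint8)
for cls, rgb in palette.items():
    coloured[label_map == cls] = rgb
```

In a real system the score maps come from a trained network such as a U-Net, and there would be one class per tissue or disease type of interest.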
Semantic pixel labelling has been used to assist the detection of breast cancer and COVID-19.
Training deep learning models from scratch can take many thousands of images, which often must be labelled by hand, consuming many hours of manual labour. However, once a model is trained it can be repurposed for other diseases. This repurposing uses a family of techniques called transfer learning, which adapts a model from one domain to another. It is possible to transfer learning from non-medical images to imaging tasks as diverse as chest radiography, mammography and dermatology.
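The most common flavour of transfer learning freezes a pretrained network and trains only a small new classification head on the target (here, medical) data. The toy sketch below makes the idea concrete in numpy: a random projection stands in for the frozen pretrained feature extractor, and only the logistic-regression head is trained. All of the data and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(images):
    """Stand-in for a network pretrained on non-medical images:
    its weights are frozen, so we only reuse its outputs."""
    W_frozen = rng.standard_normal((64, 8))  # frozen weights, never updated
    return np.maximum(0, images @ W_frozen)  # ReLU features

def train_new_head(X, y, lr=0.1, steps=200):
    """Train only a small logistic-regression head for the new task."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                        # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy "medical" dataset: 100 flattened 8x8 images with binary labels.
images = rng.standard_normal((100, 64))
labels = (images.sum(axis=1) > 0).astype(float)

X = pretrained_features(images)   # the base model stays frozen
w, b = train_new_head(X, labels)  # only the new head is trained
```

Because only the small head is trained, far fewer labelled medical images are needed than when training the whole network from scratch, which is exactly the labour saving transfer learning offers.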
Despite the large advances deep learning has brought to image processing, these systems should be seen as an assistant to a medical professional, not a replacement. And, as is common with current AI technologies, they can exhibit bias against non-white patients. Deep learning has pushed the envelope in medical imaging, but there is a long way to go before it can be used reliably without human intervention.
Deep learning is not limited to image processing; it can also be used to predict health outcomes such as heart attacks, diabetes and mental health conditions. These tasks take time series data from which an inference can be made. Again, most attempts use supervised learning, where data from known cases are used to train a model. Unlike image processing, earlier observations in the series affect the next prediction, and traditional learners such as CNNs do not remember state. Learners like LSTMs (Long Short-Term Memory) and GRUs (Gated Recurrent Units) do remember state and can learn long-range dependencies, which is why they are used to tackle health outcome problems. With the rise of wearable tech such as smartwatches, it is now possible with deep learning to monitor for possible future health conditions such as heart attacks, and to deduce other medical information such as respiration rate. The time is not far off when paid services could constantly monitor an at-risk individual's health and arrange early intervention before the condition or event takes hold.
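To show what “remembering state” means, the sketch below implements a minimal GRU cell in numpy and feeds it a toy heart-rate series one reading at a time. The hidden state `h` is what carries information from earlier readings forward; the weights here are random and untrained, so this illustrates the mechanism only, not a working risk model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: the hidden state h carries information
    from earlier time steps forward through the sequence."""
    def __init__(self, n_in, n_hidden):
        shape = (n_hidden, n_in + n_hidden)
        self.Wz = rng.standard_normal(shape) * 0.1  # update gate
        self.Wr = rng.standard_normal(shape) * 0.1  # reset gate
        self.Wh = rng.standard_normal(shape) * 0.1  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)   # how much to update the state
        r = sigmoid(self.Wr @ xh)   # how much of the past to keep
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_cand

# Run a toy heart-rate series through the cell, one reading at a time.
cell = GRUCell(n_in=1, n_hidden=4)
h = np.zeros(4)
for reading in [72.0, 74.0, 71.0, 120.0]:       # invented sensor values
    h = cell.step(np.array([reading / 100]), h)  # h summarises the series so far
```

After the loop, `h` is a fixed-size summary of the whole series; a trained model would add a final layer mapping `h` to, say, a heart-attack risk score, and the gate weights would be learned from labelled patient data rather than drawn at random.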
The final application of deep learning in the medical domain is natural language processing, where deep learning is used to analyse text and make inferences about it. Mental health conditions can be detected through writing; depression, for example, is associated with a specific writing style or with changes in writing style. Deep learning for text can therefore be used to perform large-scale mental health screening by monitoring social media. And, as with sensor monitoring, it is quite feasible that at-risk patients could be monitored and early intervention arranged when there are indications of mental health problems such as psychosis. Natural language processing is an under-exploited area in medicine, so we can expect a growth of applications in this area.
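As a flavour of what “detecting a writing style” means, the sketch below computes two simple linguistic markers that research has linked to depressive writing (elevated rates of first-person singular pronouns and of absolutist words) and applies a crude threshold rule. The word lists, threshold and rule are all illustrative assumptions; this is in no way a clinical instrument. A real system would feed features like these, or learned embeddings, into a trained model.

```python
import re

# Illustrative marker word lists, not a validated clinical lexicon.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
ABSOLUTIST = {"always", "never", "completely", "nothing", "everything"}

def screening_features(text):
    """Rate of first-person and absolutist words per token."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / n,
        "absolutist_rate": sum(w in ABSOLUTIST for w in words) / n,
    }

def flag_for_review(text, threshold=0.15):
    """Crude rule: flag text whose combined marker rate exceeds a
    threshold, so a human professional can review it."""
    f = screening_features(text)
    return f["first_person_rate"] + f["absolutist_rate"] > threshold
```

The important design point is the last line of the docstring: at this level of reliability, the system's role is to surface candidates for human review at scale, not to diagnose.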
"The future is here, but it is unevenly distributed" is a truism for deep learning in medicine. Currently there are applications for mass remote health screening and for assisting doctors and medical professionals, but the dream of automated medicine is a long way off. In the next blog post we will look at the near-future applications of deep learning in medicine.
If you would like some help to find out what AI can do to help you and to find out what we have already built in the field of AI healthcare, have a look at our data science consulting services.
From finance to healthcare, from market research to media monitoring, we can help your people make better decisions. We work alongside companies like yours to help deliver successful AI and ML projects - making a real business impact.
The challenge: Deloitte’s partners and account managers found they were drowning in news from sales support teams, and unable to react quickly to market changes
The solution: Deeper Insights built a prototype Automated Insights app allowing them to have better conversations with clients and close more business
The outcome: Account managers at Deloitte close more business thanks to actionable insights delivered straight to their phones
Client said: "There are a number of gems we’ve found that are far better than the standard services we use" - Dimitar Milanov, Partner, Deloitte
The challenge: Help the sales and marketing teams know more about their customers to enable them to drive deeper customer engagement in sales meetings
The solution: Deeper Insights developed a CMS that scraped the web and automatically identified and summarised customer events relating to key accounts at JLL
The outcome: The whole previously manual process was automated, identifying 60% more news stories than the manual process and enabling JLL to have better and more informed conversations with their clients
Client said: "We have lots of researchers and people who generate insights for our clients, Deeper Insights™ (formerly Skim Technologies) helped us improve the speed at which we get insights and have better conversations with our clients." - Chris Zissis, CIO, Jones Lang LaSalle
The challenge: In the UK, the total number of Total Knee Replacements (TKAs) per year has increased from 13,546 in 2003 to 98,147 in 2019, costing the NHS an estimated £585m per year. The average cost of a TKA in the UK is £12,000; however, post-surgical complications, e.g. surgical site infection, increase this cost by between £1,618 and £2,398 per patient.
The solution: Our consortium, comprising Smith&Nephew Ltd, Deeper Insights and Imperial College London, won Innovate UK funding to carry out an ambitious and innovative project focussed on developing markerless, automated registration and tracking of the patient's limbs, tailored for robotic-assisted orthopaedic procedures. It uses structured light technology assisted by deep learning to continuously capture the patient's anatomy during surgery.
This new platform will be integrated within S&N's commercially available robotic platform "NAVIO," which was previously supported by I-UK funding, and will obviate the need for percutaneous markers, reducing set-up time, cost and complexity during surgery.
Discuss your AI project with us and let's see if we can help. We can dive into the data you have and the data we can gather from the web and other sources, explore how we can manipulate that data for you, and output it in a dashboard that your business can actually use.
Our Data Science experts are recognised globally, with over 500 citations and patents
We have a combined experience of over 40 years in developing cutting-edge, innovative Artificial Intelligence in both academia and industry. We are specialists in Computational Linguistics, Natural Language Processing, Machine Learning, Deep Learning and Data Analytics
Dr Márcia Oliveira,
PhD Network Science
Dr Claudio Sa,
PhD Deep Learning
Dr Catarina Carvalho,
PhD Image Processing