We’ve written articles about technology and dermatology in the past, such as 3D Skin Printing and the potential of teledermatology. This time, we look at a recent study, published in Nature, which claims that machine learning using neural networks can now identify skin cancers on par with a certified dermatologist.
Why Dermatology and Why Skin Cancer?
Dermatology can be a pioneer in the practical application of technology, in this case, machine intelligence. This is because many of the symptoms are visual – right on the skin. Unlike areas such as internal medicine, there are fewer complicating factors involving hidden information, where judgment and experience are, at least for now, paramount and irreplaceable. When most of the information is contained in the visual elements, and cell phones make it easy to take and send images quickly, the potential and practicality of image-based diagnosis can’t be overstated. We wrote an article on teledermatology before and believe that technological advances will create waves in the field of dermatology very quickly – likely leading the way for many other medical specialties.
Skin cancer ticks all the boxes that make it a high priority for significant advancements in detection and identification: it affects many people, it can cause severe and sometimes lethal damage, and the payoff for early detection is extremely high. Skin cancer affects over 5 million Americans every year, and about 1 in 3 cancers diagnosed is a skin cancer.1 For melanoma, the prognosis and survival rates vary significantly with the stage at which the disease is detected.2 It isn’t an exaggeration to say that improvements in skin cancer detection could be the most critical and impactful advancement in dermatology in a very long time in terms of total lives saved.
Background: Practical Implications
When it comes to diagnosing skin cancer early, there are numerous practical challenges. The responsibility of being the first responder in identifying skin cancer falls on the patient – and patients are not experts at identifying skin cancer. While educating patients on how to look for skin cancer is crucial and of immense value, there will always be a big gap between a patient’s knowledge and an expert’s knowledge. Another problem is that it’s always tempting to ignore a potential problem, given that most of the time it’s nothing – and going to the doctor is time-consuming, often unpleasant, and sometimes costly. This is why teledermatology is so important for patients in remote locations, where the various costs of seeing a doctor are highest. Machine intelligence has the potential to take this a step further by identifying patients who are at risk. If those patients see a doctor and receive early treatment, it can make an enormous difference in survival rates and health care costs. This type of technology is especially relevant in the United States, where uninsured patients pay a high price for suspecting skin cancer and being wrong.3
If we take it a step further and allow technology to help patients identify potential skin cancer with greater certainty, at-risk patients will see the doctor at a much higher rate, and that will be a significant game-changer for skin cancer survival rates. If we can process medical images and reliably distinguish dangerous conditions like skin cancer from benign skin growths like moles, we can save lives, fundamentally change how a doctor’s time is used, and potentially even change health care costs.
The Study
The study trained a machine using convolutional neural networks (CNNs) to identify and analyze skin cancer images. CNNs are known mostly for their ability to identify and analyze visual imagery – search engines like Google have an obvious interest in this kind of pixel-level image classification, and CNNs have already produced countless “dreamed” interpretations of images by freehanding art4 – so this isn’t science fiction. They can be used to identify pictures of animals or recognize human expressions.5 They are not perfect yet – they are still prone to error from time to time and can be exploited as well.6 There is no way to imagine the possible future uses of artificial intelligence (AI) at this point, as its scope is open-ended. This project, however, is more precise, narrow, and practical: how accurately can a machine intelligence pick out skin cancer images from images that merely resemble or are often confused with skin cancer, like moles or seborrheic keratoses? A minimal sketch of how such a classifier can be built appears below.
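To make the idea concrete, here is a minimal sketch of how a CNN can be fine-tuned to separate malignant from benign lesion images. This is not the study’s actual model or data: the folder path “lesions/train”, the ResNet-18 backbone, and all training settings are illustrative assumptions, chosen only to show the general transfer-learning recipe of starting from a network pretrained on everyday photos and retraining its final layer.

```python
# Illustrative sketch only: hypothetical folder layout "lesions/train/{benign,malignant}/...".
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing for lesion photos.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: one subfolder per class (benign, malignant).
train_data = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday images, then replace the final
# layer so it outputs two classes instead of ImageNet's 1000.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One pass over the (hypothetical) training images.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice, a clinically useful system is trained on far larger labeled image sets and validated against biopsy-confirmed diagnoses, but the overall recipe looks much like this sketch.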
How Reliable Are They? Will They Replace Doctors?
In terms of reliability, the short answer is: about as good as a certified dermatologist. Unlike dermatologists, of course, this machine can do only one particular task. For patients concerned specifically about skin cancer – and over 5 million Americans develop skin cancer every year – this is significant. If this diagnostic tool were developed into an app that could be downloaded onto a phone and integrated with health care systems, it could already have a significant impact. Additionally, since machine learning systems can improve very quickly, they may surpass even the most trained dermatologists at identifying skin cancer images within a few years.
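For readers curious what “about as good as a certified dermatologist” means operationally, one common way to make the comparison is to trace the algorithm’s sensitivity/specificity trade-off (its ROC curve) and check whether a clinician’s sensitivity/specificity point falls at or below it. The sketch below uses made-up labels, probabilities, and a fictional dermatologist’s numbers purely to illustrate the calculation; it is not data from the study.

```python
# Toy illustration of comparing a classifier against a clinician's
# sensitivity/specificity point. All numbers are invented.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical test set: 1 = malignant, 0 = benign, plus the model's
# predicted probabilities of malignancy.
y_true = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])
y_prob = np.array([0.91, 0.12, 0.35, 0.80, 0.66, 0.05, 0.74, 0.40, 0.22, 0.58])

auc = roc_auc_score(y_true, y_prob)
fpr, tpr, thresholds = roc_curve(y_true, y_prob)

# A fictional dermatologist's performance on the same images, expressed as a
# single sensitivity/specificity point. The model is "on par" if some point
# on its ROC curve matches or exceeds both values at once.
derm_sensitivity, derm_specificity = 0.80, 0.85
matches_derm = any(
    tpr_i >= derm_sensitivity and (1 - fpr_i) >= derm_specificity
    for fpr_i, tpr_i in zip(fpr, tpr)
)
print(f"Model AUC: {auc:.2f}; matches dermatologist's point: {matches_derm}")
```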
Will programs like these replace doctors? Extremely unlikely. A human doctor’s intelligence – or even a dog’s – is far more general: it can apply concepts learned in one situation to many other situations and tasks. It’s this generality of intelligence, the ability to apply intelligence to any problem, that we haven’t reached yet – and may never reach, depending on who you ask. The idea of strong AI in the future is still fraught with difficult challenges like consciousness, sentience, and mind, which pose both practical and philosophical problems. A medical doctor, for example, absorbs a plethora of information throughout their long and arduous journey through medical school and in their clinical experience and practice. We could conceive of a program, or even an encyclopedia, that stores more medical information than any one medical doctor could know – in theory, we could store all such medical information. Still, that encyclopedic knowledge would never be able to replace the doctor. Humans are adept at numerous tasks and can use knowledge flexibly across different domains; the doctor exercises judgment in deciding what is relevant and what is not. A much more challenging task is to make machines adept at general tasks, although some ambitious projects already have this as a goalpost.7
Use #AskDermLetter to ask us skincare questions on Twitter. Follow us @SkinExpertsTalks for daily tips and articles on skincare.
1http://www.canadianskincancerfoundation.com/about-skin-cancer.html
2https://www.healthline.com/health/melanoma-prognosis-and-survival-rates
3In Canada, the risk of having a false positive is low from the patient’s perspective. You waste time out of your day and perhaps miss some time off work. If you are uninsured in the United States, this changes the risk/reward calculus significantly, as the cost of being cautious can be high.
4http://www.iflscience.com/technology/artificial-intelligence-dreams/
5This is less of a concern with identifying skin cancer – there is little or no motivation for patients to manipulate images to decrease the chances of a correct diagnosis.
6https://www.wired.com/story/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter/
7https://www.theverge.com/2017/7/19/15998610/ai-neuroscience-machine-learning-deepmind-demis-hassabis-interview