Precise brain tumor localization depends on the correct interpretation of a patient’s brain scans across modalities (MRI, PET, etc.), which enables timely confirmation of a diagnosis. While healthcare is undoubtedly grounded in human proficiency and experience, technology now appears advanced enough to back up health specialists’ decisions with complex mathematical algorithms.
Major advances in artificial intelligence and machine learning promise to automate medical image analysis, reducing the time spent on image interpretation and precisely localizing tumors and their subregions for treatment prescription and surgery planning.
However, there are many doubts about the reliability of automated brain tumor localization, given the multiple challenges along the way. We’ve decided to put these challenges together, outline ways to increase the accuracy of automatic brain tumor diagnosis, and take a look at the state of the art in this field.
In automated brain tumor analysis, segmentation and registration appear to be the most challenging steps. Some of the following issues remain unsolved, while others can be addressed by taking a different approach to earlier steps (e.g., pre-processing).
Segmentation identifies the tumor area, including its sub-compartments and surrounding tissues. Segmenting potentially tumorous brain images is challenging on several levels:
In brain tumor localization, the aim of registration is to enable either simultaneous analysis of different modalities at the diagnostic stage or further monitoring of tumor growth.
A few challenges arise on the way to seamless registration:
While a range of methods has been suggested to overcome the issues above, most of them require significant computational time and capacity. This can become one of the major pitfalls on the way to solving segmentation and registration challenges.
To improve tumor localization accuracy in MRI image analysis, several sequences can be registered together. The differences between the imaging outcomes of T1 and T2 sequences can ensure precise automatic detection of lesions and their subregions.
For example, a T1-weighted image makes it possible to segment, and therefore detect, the active tumor and necrosis regions, while the edema region can be segmented from a registered T2-weighted image. When these two sequences are fused, the image analysis software can form a complete overview of the tumor with all the affected areas.
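The fusion step above can be sketched with NumPy. This is a minimal illustration, not a production pipeline: it assumes the two sequences are already registered, and the masks are synthetic stand-ins for the output of a real segmentation model.

```python
import numpy as np

# Hypothetical binary masks derived from two already-registered sequences
# (values are illustrative, not real segmentation output).
t1_core = np.zeros((8, 8), dtype=bool)   # active tumor + necrosis, from T1
t1_core[3:5, 3:5] = True
t2_edema = np.zeros((8, 8), dtype=bool)  # edema, from T2
t2_edema[2:6, 2:6] = True

# Fuse into a single label map: 0 = healthy, 1 = edema, 2 = tumor core.
# Core takes precedence where the two masks overlap.
label_map = np.zeros((8, 8), dtype=np.uint8)
label_map[t2_edema] = 1
label_map[t1_core] = 2

print(np.unique(label_map))  # all three tissue classes present
```

The precedence order (core overwrites edema) encodes the clinical reading of the fused image: the necrotic core sits inside the edema, so overlapping voxels belong to the core.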
Combined with the MRI scan, PET metabolic data (blood flow, oxygen and glucose metabolism) makes it possible to build a precise picture of how the tumor looks, how it is outlined, and how it affects surrounding tissue, separating the abnormality itself from edema and necrosis.

With high-grade gliomas, for example, the affected area can look misleadingly vast. But when the MRI and PET images are fused, the true division into subregions becomes apparent.
Currently, there is a variety of approaches to segmentation and registration as the primary steps in automating brain cancer diagnostics. However, these tend to be isolated methods, each of which can be partially, semi-, or fully automated.
In their survey, S. Bauer et al. summarize various approaches to the segmentation and registration of brain MRI images. Some of these can be automated, such as:
Segmentation: fuzzy clustering plus knowledge-based techniques, SVM classification, difference image for volumetric tumor assessment, decision forests for tissue-specific segmentation, and more.
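One of the segmentation approaches above, SVM classification, treats segmentation as voxel-wise classification on intensity features. The sketch below uses scikit-learn on synthetic data; the feature choice (one intensity per registered sequence) and the class distributions are assumptions for illustration, not the pipeline from the survey.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic per-voxel features: intensities from two registered sequences
# (T1, T2). Tumor voxels are simulated as brighter on T2.
healthy = rng.normal(loc=[0.4, 0.3], scale=0.05, size=(200, 2))
tumor = rng.normal(loc=[0.5, 0.8], scale=0.05, size=(200, 2))
X = np.vstack([healthy, tumor])
y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = tumor

clf = SVC(kernel="rbf").fit(X, y)

# Classify new voxels by their (T1, T2) intensity pair.
pred = clf.predict([[0.4, 0.3], [0.5, 0.8]])
print(pred)  # [0 1]
```

In a real system the feature vector would also carry texture and neighborhood statistics, and the labels would come from expert-annotated training scans.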
Registration: non-rigid registration to capture brain shift, geometric metamorphosis, differential analysis for tumor growth quantification, registration with EM algorithm and diffusion modeling, and more.
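Multimodal registration methods like those above typically optimize a similarity metric between the two images; mutual information is a common choice because it does not assume the modalities share an intensity scale. Below is a minimal NumPy sketch of mutual information computed from a joint histogram; it illustrates the metric only, not any of the full registration algorithms listed above.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via a joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
aligned = img * 2.0 + 1.0        # same anatomy, different intensity mapping
unrelated = rng.random((64, 64)) # no shared structure

# An aligned pair scores higher than an unrelated pair, even though
# the intensity scales differ -- the property that makes MI useful
# for cross-modality registration.
print(mutual_information(img, aligned) > mutual_information(img, unrelated))
```

A registration optimizer would repeatedly transform one image and keep the transform that maximizes this score.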
But these are only building blocks; they do not add up to an end-to-end system for automatic lesion localization.
A recent paper (March 2016) from Stanford University presents a new algorithm for fully automatic brain tumor segmentation based on 3D convolutional neural networks (CNNs). The authors (C. Elamri and T. de Planque) claim that the algorithm achieves 89% accuracy on whole-tumor segmentation. They also compared their 3D CNN method against human radiologists (85%) and the leading methods of 2013 (75–82%) and 2014 (83–88%), obtaining the highest Dice score.
The benefit of their approach is that it uses data analytics and machine learning to precisely spot the tumor area, edema, and enhancing and non-enhancing lesions. The authors also designed their algorithms to process 3D images natively, rather than adapting 2D-oriented algorithms to a 3D environment.
This approach keeps spatial information accurate and improves robustness. Moreover, CNNs learn over time, which should allow accuracy to increase even further. Yet their algorithm does not address registration at all, which makes automatic tumor localization impossible when two or more modalities or sequences are involved.
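The Dice score used in the comparison above measures the overlap between a predicted segmentation and a reference (ground-truth) mask. A minimal sketch, with synthetic masks for illustration:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient: 2|A & B| / (|A| + |B|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True          # reference tumor mask (16 voxels)
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 3:7] = True           # prediction shifted by one voxel

# Overlap is 9 voxels: 2*9 / (16+16) = 0.5625
print(round(dice_score(pred, truth), 4))  # 0.5625
```

A score of 1.0 means perfect overlap; the 75–89% figures quoted above are Dice scores in this sense, computed against expert delineations.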
A lot of effort has gone into developing algorithms for automatic segmentation in brain tumor localization. Still, the main goal is to bring diagnostic support software up to the level of widespread clinical application, and that remains challenging: radiologists and oncologists will continue to rely on manual brain tumor delineation until there is a full-cycle option, software that performs end-to-end automatic image analysis. And, importantly, software that can be used by clinicians, not only by researchers.
Such a solution should be able to tell whether a patient has a tumor, what its subregions are, and where they are located. Later on, healthcare specialists will also need the ability to track tumor growth and analyze treatment progress (e.g., after surgery or chemotherapy).
Feel free to share your thoughts in the comments.