Improving radiotherapy using image analysis and machine learning
Date: 29/11/2016
Author: Montgomery, Dean
Abstract
With ever-increasing advancements in imaging, there is a growing abundance of images
being acquired in the clinical environment. However, this increase in information can be a
burden as well as a blessing, as significant amounts of time may be required to interpret the
information contained in these images. Computer-assisted evaluation is one way in which better
use could be made of these images. This thesis presents the combination of texture analysis of
images acquired during the treatment of cancer with machine learning in order to improve
radiotherapy. The first application is the prediction of radiation-induced pneumonitis. In 13-37%
of cases, lung cancer patients treated with radiotherapy develop radiation-induced lung
disease, such as radiation-induced pneumonitis. Three-dimensional texture analysis, combined
with patient-specific clinical parameters, was used to compute unique features. On radiotherapy
planning CT data of 57 patients (14 symptomatic, 43 asymptomatic), a Support Vector
Machine (SVM) obtained an area under the receiver operating characteristic curve (AUROC) of 0.873, with
sensitivity, specificity and accuracy of 92%, 72% and 87% respectively. Furthermore, it was
demonstrated that a Decision Tree classifier was capable of a similar level of performance
using sub-regions of the lung volume. The second application is related to prostate cancer
identification.
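As an illustration of the evaluation metrics reported for the first application, the following is a minimal sketch, not the thesis code, of training an SVM classifier and computing AUROC, sensitivity and specificity with scikit-learn. The feature matrix here is synthetic stand-in data; the patient counts merely mirror the study's class balance.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for per-patient texture + clinical feature vectors:
# 57 "patients", 10 features, imbalanced binary pneumonitis labels
# (14 symptomatic, 43 asymptomatic, as in the study).
rng = np.random.default_rng(0)
X = rng.normal(size=(57, 10))
y = np.array([1] * 14 + [0] * 43)
X[y == 1] += 0.8  # shift symptomatic cases so the classes are separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]  # probability of the symptomatic class

auroc = roc_auc_score(y_te, scores)
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUROC={auroc:.3f}  sens={sensitivity:.2f}  spec={specificity:.2f}")
```

On real data the features would come from three-dimensional texture analysis of the planning CT rather than a random generator; the metric calculations are unchanged.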
T2 MRI scans are used in the diagnosis of prostate cancer and in the identification of the
primary cancer within the prostate gland. The manual identification of the cancer relies on
the assessment of multiple scans and the integration of clinical information by a clinician.
This requires considerable experience and time. As MRI becomes more integrated within the
radiotherapy workflow, and as adaptive radiotherapy (where the treatment plan is modified
based on multi-modality image information acquired during or between RT fractions) matures,
it is timely to develop automatic segmentation techniques for reliably identifying cancerous
regions. In this work, a number of texture features were coupled with a supervised learning
model for the automatic segmentation of the main cancerous focus in the prostate: the focal
lesion. A mean AUROC of 0.713 was demonstrated with a 10-fold stratified cross-validation
strategy on an aggregate data set. On a leave-one-case-out basis, a mean AUROC of 0.60 was
achieved, which resulted in a mean Dice coefficient of 0.710. These results showed that it was
possible to delineate the focal lesion in the majority (11) of the 14 cases used in the study.
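For reference, the Dice coefficient quoted above measures the overlap between a predicted segmentation and the manual delineation. A minimal sketch on toy binary masks (illustrative only, not the thesis pipeline):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D "lesion" masks: truth is a 4x4 square, the prediction overlaps it partially.
truth = np.zeros((10, 10), dtype=bool)
truth[3:7, 3:7] = True   # 16 voxels
pred = np.zeros((10, 10), dtype=bool)
pred[4:8, 4:8] = True    # 16 voxels, 9 of which overlap the truth
print(dice(pred, truth))  # 2*9 / (16+16) = 0.5625
```

A Dice value of 1.0 indicates perfect overlap and 0.0 indicates none; in the study the masks would be the predicted and manually delineated focal-lesion volumes.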