Edinburgh Research Archive

Machine learning for retinal image analysis

dc.contributor.advisor
Bernabeu, Miguel O.
dc.contributor.advisor
Storkey, Amos
dc.contributor.author
Engelmann, Justin
dc.date.accessioned
2024-09-17T14:58:05Z
dc.date.available
2024-09-17T14:58:05Z
dc.date.issued
2024-09-17
dc.description.abstract
Retinal images, images of the retina at the back of our eyes, are an important part of modern ophthalmology and further capture the retinal vasculature and nerves, which could allow insight into cardio- and neurovascular disease. This is especially promising as retinal images are non-invasive, fast to acquire, and low-cost compared to other types of medical imaging such as brain magnetic resonance imaging. A variety of retinal imaging modalities exist, most importantly traditional colour fundus photography (CFP) and optical coherence tomography (OCT). CFP is the most widespread type of retinal imaging and captures a true colour en-face image of the retina, typically with a field of view of around 45 degrees. OCT imaging captures the retina in depth and thus allows assessment of individual layers of the retina and – with modern methods such as Enhanced Depth Imaging – even captures the choroid, a dense vascular tissue beneath the retina. More recent modalities include OCT angiography, which uses repeated OCT images to estimate blood flow, and ultra-widefield fundus imaging, which captures most of the retina with a field of view of around 200 degrees. Retinal imaging is already widespread and continuously proliferating: lower-cost handheld devices or smartphone add-ons make CFP available in lower-resource settings, while once cutting-edge OCT can now be found at high-street opticians in the UK. Retinal images provide a wealth of information but are complex to analyse, in part due to variations in image quality, anatomy, and retinal pathology that make traditional development of handcrafted analysis pipelines challenging. The recent decade saw great advances in machine learning methods, particularly deep learning for computer vision. Instead of manually designing a pipeline, a machine learning model is a parameterised pipeline that can be fit to training data to approximate the mapping from inputs to outputs.
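The idea of a parameterised pipeline fit to training data can be illustrated with a toy example. The sketch below fits a one-dimensional linear model by gradient descent; it is purely illustrative (deep learning models have millions of parameters and non-linear layers, but follow the same fit-to-data principle), and all names in it are hypothetical:

```python
# Toy illustration of a "parameterised pipeline": a linear model y = w*x + b
# whose parameters w, b are fit to training data by gradient descent,
# approximating the mapping from inputs to outputs.

def fit_linear(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0  # parameters, initialised arbitrarily
    n = len(xs)
    for _ in range(steps):
        # gradients of the mean-squared-error loss w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data generated from y = 3x + 1; fitting should approximately
# recover the underlying mapping.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3 * x + 1 for x in xs]
w, b = fit_linear(xs, ys)
print(round(w, 2), round(b, 2))  # → 3.0 1.0
```

The same principle scales up: a convolutional network for retinal images is a far larger parameterised function, but it too is fit by gradient-based optimisation on labelled training data.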
This approach is highly effective for many vision tasks, including classification, regression and segmentation. In this thesis, I present three themes of work using machine learning for retinal image analysis. First, using machine learning for retinal disease detection. Second, using machine learning for developing efficient and robust automated analysis pipelines for retinal imaging. And third, validating and applying these tools. For the first theme, I developed a deep learning model that can detect seven key retinal diseases in ultra-widefield pseudo-colour retinal images with very promising performance, and investigated which regions of the ultra-widefield images are important for automated disease detection in a data-driven way. For the second theme, I developed three tools. First, deep approximation of retinal traits, or DART for short, which computes retinal fractal dimension (FD), a metric relating to the complexity of the blood vessels in CFPs, orders of magnitude faster and more robustly than traditional methods. Second, jointly with a colleague, I developed a tool initially for segmenting the choroid region in OCT, called DeepGPET. Next, we developed Choroidalyzer, which segments the choroid and the choroidal vasculature while also identifying the location of the fovea. This allows for fully automated computation of choroidal thickness, area, and vascular index in a fovea-centred region of interest. Third, I developed QuickQual, an efficient and easy-to-use method for CFP quality assessment that obtains state-of-the-art performance on a commonly used quality assessment dataset. Finally, for the third theme, I applied DART to real-world, primary care data and found a significant association between lower FD and prevalent systemic health conditions.
Furthermore, I compared the repeatability and robustness of DART to AutoMorph, a method that follows the traditional paradigm for computing FD, finding that DART was not only more robust to image quality issues but also more repeatable, even for high-quality images. In my opinion, this thesis exemplifies the potential of machine learning for retinal image analysis. I hope that my work will – eventually and incrementally – advance the field of retinal image analysis and one day make a positive difference for clinical practice.
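The retinal fractal dimension referred to in the abstract is classically estimated by box counting on a binary vessel segmentation. The sketch below is a minimal, generic box-counting estimator, not the DART or AutoMorph implementation; the function name and box sizes are illustrative assumptions:

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary mask by box counting:
    count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) against log(1/s)."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        # crop so the grid of s-by-s boxes divides the image evenly,
        # then count boxes containing at least one foreground pixel
        sub = mask[: h - h % s, : w - w % s]
        boxes = sub.reshape(sub.shape[0] // s, s, sub.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    log_inv_s = np.log(1.0 / np.array(box_sizes))
    log_n = np.log(np.array(counts, dtype=float))
    slope, _ = np.polyfit(log_inv_s, log_n, 1)
    return slope

# Sanity check: a completely filled region should have dimension close to 2,
# while a sparse vessel-like tree falls between 1 and 2.
square = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(square), 2))  # → 2.0
```

DART's contribution, as described above, is to bypass this multi-stage pipeline (segmentation followed by box counting) by directly approximating the final trait from the raw image, which is what makes it faster and less sensitive to segmentation failures on poor-quality images.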
en
dc.identifier.uri
https://hdl.handle.net/1842/42182
dc.identifier.uri
http://dx.doi.org/10.7488/era/4903
dc.language.iso
en
en
dc.publisher
The University of Edinburgh
en
dc.relation.hasversion
Burke, J., Engelmann, J., Hamid, C., Moukaddem, D., Pugh, D., Dhaun, N., Storkey, A., Strang, N., King, S., MacGillivray, T., Bernabeu, M. O., and MacCormick, I. J. C. (2024). Domain-specific augmentations with resolution agnostic self-attention mechanism improves choroid segmentation in optical coherence tomography images. arXiv preprint arXiv:2405.14453
en
dc.relation.hasversion
Burke, J., Engelmann, J., Hamid, C., Reid-Schachter, M., Pearson, T., Pugh, D., Dhaun, N., Storkey, A., King, S., MacGillivray, T. J., Bernabeu, M. O., and MacCormick, I. J. C. (2023a). An Open-Source Deep Learning Algorithm for Efficient and Fully Automatic Analysis of the Choroid in Optical Coherence Tomography. Translational Vision Science & Technology, 12(11):27–27
en
dc.relation.hasversion
Engelmann, J. and Bernabeu, M. O. (2024). Training a high-performance retinal foundation model with half-the-data and 400 times less compute. arXiv preprint arXiv:2405.00117
en
dc.relation.hasversion
Engelmann, J., Burke, J., Hamid, C., Reid-Schachter, M., Pugh, D., Dhaun, N., Moukaddem, D., Gray, L., Strang, N., McGraw, P., Storkey, A., Steptoe, P. J., King, S., MacGillivray, T., Bernabeu, M. O., and MacCormick, I. J. C. (2024a). Choroidalyzer: An Open-Source, End-to-End Pipeline for Choroidal Analysis in Optical Coherence Tomography. Investigative Ophthalmology & Visual Science, 65(6):6–6
en
dc.relation.hasversion
Engelmann, J., Kearney, S., McTrusty, A., McKinlay, G., Bernabeu, M. O., and Strang, N. (2024b). Retinal fractal dimension is a potential biomarker for systemic health—evidence from a mixed-age, primary-care population. Translational Vision Science & Technology, 13(4):19–19
en
dc.relation.hasversion
Engelmann, J., McTrusty, A. D., MacCormick, I. J. C., Pead, E., Storkey, A., and Bernabeu, M. O. (2022a). Detecting multiple retinal diseases in ultra-widefield fundus imaging and data-driven identification of informative regions with deep learning. Nature Machine Intelligence, 4(12):1143–1154. Publisher: Nature Publishing Group
en
dc.relation.hasversion
Engelmann, J., Moukaddem, D., Gago, L., Strang, N., and Bernabeu, M. O. (2024c). Applicability of Oculomics for Individual Risk Prediction: Repeatability and Robustness of Retinal Fractal Dimension Using DART and AutoMorph. Investigative Ophthalmology & Visual Science, 65(6)
en
dc.relation.hasversion
Engelmann, J., Storkey, A., and Bernabeu, M. O. (2021). Global explainability in aligned image modalities. arXiv preprint arXiv:2112.09591
en
dc.relation.hasversion
Engelmann, J., Storkey, A., and Bernabeu, M. O. (2023a). QuickQual: Lightweight, Convenient Retinal Image Quality Scoring with Off-the-Shelf Pretrained Models. In Antony, B., Chen, H., Fang, H., Fu, H., Lee, C. S., and Zheng, Y., editors, Ophthalmic Medical Image Analysis, Lecture Notes in Computer Science, pages 32–41, Cham. Springer Nature Switzerland
en
dc.relation.hasversion
Engelmann, J., Storkey, A., and LLinares, M. B. (2023b). Exclusion of poor quality fundus images biases health research linking retinal traits and systemic health. Investigative Ophthalmology & Visual Science, 64(8):2922–2922. ISSN: 1552-5783. Publisher: The Association for Research in Vision and Ophthalmology
en
dc.relation.hasversion
Engelmann, J., Villaplana-Velasco, A., Storkey, A., and Bernabeu, M. O. (2022b). Robust and efficient computation of retinal fractal dimension through deep approximation. In International Workshop on Ophthalmic Medical Image Analysis, pages 84–93. Springer
en
dc.relation.hasversion
Tabuchi, H., Engelmann, J., Maeda, F., Nishikawa, R., Nagasawa, T., Yamauchi, T., Tanabe, M., Akada, M., Kihara, K., Nakae, Y., et al. (2024). Using artificial intelligence to improve human performance: efficient retinal disease detection training with synthetic images. British Journal of Ophthalmology
en
dc.relation.hasversion
Villaplana-Velasco, A., Pigeyre, M., Engelmann, J., Rawlik, K., Canela-Xandri, O., Tochel, C., Lona-Durazo, F., Mookiah, M. R. K., Doney, A., Parra, E. J., Trucco, E., MacGillivray, T., Rannikmae, K., Tenesa, A., Pairo-Castineira, E., and Bernabeu, M. O. (2023). Fine-mapping of retinal vascular complexity loci identifies Notch regulation as a shared mechanism with myocardial infarction outcomes. Communications Biology, 6(1):1–13. Publisher: Nature Publishing Group
en
dc.subject
machine learning
en
dc.subject
retinal imaging
en
dc.subject
retinal image analysis
en
dc.subject
medical image analysis
en
dc.title
Machine learning for retinal image analysis
en
dc.type
Thesis or Dissertation
en
dc.type.qualificationlevel
Doctoral
en
dc.type.qualificationname
PhD Doctor of Philosophy
en

Files

Original bundle

Name:
EngelmannJ_2024.pdf
Size:
18.26 MB
Format:
Adobe Portable Document Format