Edinburgh Research Archive

Compressive MRI with deep convolutional and attentive models

Authors

Liu, Jingshuai

Abstract

Since its advent in the last century, Magnetic Resonance Imaging (MRI) has had a significant impact on modern medicine and spectroscopy and has seen widespread use in medical imaging and clinical practice, owing to its flexibility and excellent ability to visualize anatomical structures. Although it provides a non-invasive, ionizing-radiation-free tool for imaging the anatomy of the human body, the long data acquisition process hinders its adoption in time-critical applications. To shorten scanning time and reduce patient discomfort, the sampling process can be accelerated by omitting a portion of the sampling steps and reconstructing the image from a subset of measurements. However, images created from under-sampled signals can suffer from strong aliasing artifacts, which adversely affect the quality of diagnosis and treatment. Compressed sensing (CS) methods were introduced to alleviate these artifacts by reconstructing an image from the observed measurements via model-based optimization algorithms. Despite their success, the sparsity prior assumed by CS methods may not hold in real-world practice and struggles to capture complex anatomical structures. Moreover, the iterative optimization algorithms are often computationally expensive and time-consuming, conflicting with the speed demands of modern MRI. These factors limit the quality of reconstructed images and restrict the achievable acceleration rates. This thesis focuses on developing deep learning-based methods for MRI reconstruction, specifically using modern over-parametrized models, leveraging their powerful learning ability and representation capacity. Firstly, we introduce an attentive selection generative adversarial network that achieves fine-grained reconstruction by integrating large-field contextual information through an attention selection mechanism.
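The under-sampling and aliasing described above can be illustrated with a minimal NumPy sketch (not code from the thesis): a toy image is transformed to k-space, most phase-encode lines are discarded, and the zero-filled inverse transform exhibits the reconstruction error that aliasing introduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "anatomy": a bright square on a dark background.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0

# Full k-space via the 2D FFT.
kspace = np.fft.fft2(img)

# Cartesian under-sampling: keep only a random subset of phase-encode
# lines (rows of k-space), simulating an accelerated acquisition.
mask = np.zeros(64, dtype=bool)
mask[rng.choice(64, size=16, replace=False)] = True
mask[:4] = True  # always keep a few low-frequency lines (unshifted layout)
under_kspace = kspace * mask[:, None]

# Zero-filled reconstruction: inverse FFT of the masked k-space.
# The missing lines produce the aliasing artifacts described above.
zero_filled = np.abs(np.fft.ifft2(under_kspace))

err = np.linalg.norm(zero_filled - img) / np.linalg.norm(img)
print(f"relative error of zero-filled recon: {err:.3f}")
```

The nonzero relative error of this naive zero-filled reconstruction is exactly what CS methods and the learned models in this thesis aim to reduce.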
To incorporate domain-specific knowledge into the reconstruction procedure, an optimization-inspired deep cascaded framework is proposed, featuring a novel deep data consistency block that leverages domain-specific knowledge and an adaptive spatial attention selection module that captures correlations among high-resolution features, aiming to enhance the quality of recovered images. To efficiently exploit the contextual information hidden in the spatial dimensions, a novel region-guided channel-wise attention network is introduced that incorporates spatial semantics into a channel-based attention mechanism, offering a lightweight and flexible design with improved reconstruction performance. Secondly, a coil-agnostic reconstruction framework is introduced to address the unknown forward process in parallel MRI reconstruction. To avoid estimating sensitivity maps, a novel data aggregation consistency block is proposed that approximately enforces data consistency without resorting to coil sensitivity information. A locality-aware spatial attention module is devised and embedded into the reconstruction pipeline to enhance model performance by capturing spatial contextual information via data-adaptive kernel prediction. Experiments demonstrate that the proposed coil-agnostic method is robust to different machine configurations and outperforms other sensitivity-estimation-based methods. Finally, research on dynamic MRI reconstruction is presented. We introduce an optimization-inspired deep cascaded framework that recovers a sequence of MRI images, using a novel mask-guided motion feature incorporation method to explicitly extract motion information and incorporate it into the reconstruction iterations, which is shown to better preserve dynamic content.
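For intuition on what a data consistency block enforces, here is a sketch of the classical "hard" data consistency step that the learned blocks above generalize (the thesis's deep and aggregation-based variants are learned and differ in detail): in k-space, the network prediction is kept where no data were acquired and overwritten by the measured samples where data exist.

```python
import numpy as np

def hard_data_consistency(x_pred, measured_kspace, mask):
    """Classical hard data consistency: keep the prediction's k-space
    where un-sampled, substitute the acquired measurements where sampled."""
    k_pred = np.fft.fft2(x_pred)
    k_dc = np.where(mask, measured_kspace, k_pred)
    return np.fft.ifft2(k_dc)

# Toy usage: after the step, any prediction agrees exactly with the
# measurements at the sampled k-space locations.
rng = np.random.default_rng(1)
x_true = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.3            # ~30% of k-space sampled
measured = np.fft.fft2(x_true) * mask        # acquired measurements

x_pred = rng.standard_normal((32, 32))       # stand-in for a CNN output
x_dc = hard_data_consistency(x_pred, measured, mask)

residual = np.abs(np.fft.fft2(x_dc) * mask - measured).max()
print(f"max residual at sampled locations: {residual:.2e}")
```

In a cascaded network such a step (or a learned relaxation of it) is interleaved with the convolutional/attention stages at every iteration, tying the reconstruction to the physics of the acquisition.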
A spatio-temporal Fourier neural block is proposed and embedded into the network to improve performance by efficiently retrieving useful information in both the spatial and temporal domains. Experiments demonstrate that the devised framework surpasses competing methods and generalizes well to other reconstruction models and unseen data, validating its transferability and generalization capacity.
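The general idea of a spectral block operating jointly over space and time can be sketched as follows, in the spirit of Fourier-neural-operator-style layers; the weight shapes and mode truncation here are hypothetical choices for illustration, and the actual block in the thesis may differ.

```python
import numpy as np

def spatio_temporal_fourier_block(x, weights, modes):
    """Illustrative spectral layer over a dynamic sequence x of shape
    (T, H, W): transform to the joint temporal/spatial frequency domain,
    scale a truncated set of low-frequency modes by (learnable) weights,
    and transform back to the image domain."""
    k = np.fft.fftn(x, axes=(0, 1, 2))
    out = np.zeros_like(k)                    # complex, zero outside kept modes
    mt, mh, mw = modes
    out[:mt, :mh, :mw] = k[:mt, :mh, :mw] * weights
    return np.fft.ifftn(out, axes=(0, 1, 2)).real

rng = np.random.default_rng(2)
seq = rng.standard_normal((8, 16, 16))        # T=8 frames of 16x16 images
w = rng.standard_normal((4, 6, 6))            # hypothetical per-mode weights
y = spatio_temporal_fourier_block(seq, w, modes=(4, 6, 6))
print(y.shape)                                # same (T, H, W) as the input
```

Because each retained frequency mode mixes information from every frame and every pixel at once, a single such layer has a global receptive field in both the spatial and temporal dimensions, which is the efficiency argument for spectral blocks in dynamic reconstruction.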