
Deep neural networks (DNNs) have achieved outstanding performance and broad adoption in computer vision tasks such as classification, denoising, segmentation, and image synthesis. However, DNN-based models and algorithms have seen limited adoption and development within radiomics, which aims to improve the diagnosis and prognosis of cancer. Traditionally, medical practitioners have relied on expert-derived features such as intensity, shape, and texture. We hypothesize that, despite the potential of DNNs to improve oncological classification performance in radiomics, the lack of interpretability of such models limits their broad utilization, performance, and generalizability. Therefore, the INFORM consortium proposes to investigate explainable artificial intelligence (XAI) with the dual aim of building high-performance DNN-based classifiers and developing novel interpretability techniques for radiomics. First, to overcome the limited data typically available in radiomic studies, we will investigate Monte Carlo methods and generative adversarial networks (GANs) for realistic simulation that can aid the design and training of DNN architectures. Second, we will tackle the interpretability of DNN-based feature engineering and latent variable modeling through novel developments of saliency maps and related visualization techniques. Both supervised and unsupervised learning will be used to generate features that can be interpreted in terms of input pixels and expert-derived features. Third, we propose to build explainable AI models that incorporate both expert-derived and DNN-based features. By quantitatively understanding the interplay between expert-derived and DNN-based features, our models will be readily understood and translated into medical applications. Fourth, evaluation will be carried out by clinical collaborators with a focus on lung, cervical, and rectal cancers. These DNN models, specifically developed to reveal their inner workings, will build on the robustness and trustworthiness of expert-derived features that medical practitioners are familiar with, while providing quantitative and visual feedback. Overall, our methodological research will advance the interpretability of feature engineering, generative models, and DNN classifiers, with applications in radiomics and medical imaging more broadly.
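To give a concrete sense of the interpretability techniques referred to in the second aim, below is a minimal sketch of a gradient-based saliency map for an image classifier, written in PyTorch. The TinyCNN architecture and the vanilla_saliency helper are hypothetical stand-ins for illustration only, not the consortium's actual models or data; the sketch only shows how a class score can be attributed back to input pixels.

import torch
import torch.nn as nn

# Minimal CNN standing in for a radiomics image classifier (hypothetical architecture).
class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.classifier(z)

def vanilla_saliency(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Absolute gradient of the target-class score with respect to the input pixels."""
    model.eval()
    x = image.clone().requires_grad_(True)   # shape (1, 1, H, W)
    score = model(x)[0, target_class]        # scalar class score
    score.backward()                          # populates x.grad
    return x.grad.abs().squeeze()             # (H, W) pixel-wise saliency map

# Usage with a random stand-in for a CT/MR slice (no real patient data involved).
model = TinyCNN()
fake_slice = torch.randn(1, 1, 64, 64)
saliency = vanilla_saliency(model, fake_slice, target_class=1)
print(saliency.shape)  # torch.Size([64, 64])

The resulting map can be overlaid on the input image so that clinicians can relate the classifier's decision to regions of the scan, in the same spirit as the visual feedback described above.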

With this project, we aim to maximize the impact of ML and DL techniques on patient management by developing novel methods that facilitate the training of decision-aid systems for the optimization of clinical treatment strategies. The methodological approaches we propose will play a major role in fostering the acceptance of DL-based decision-aid systems that rely on medical imaging for oncology. The predictive models validated for various cancer types within this project may subsequently be used to drive prospective clinical studies in which patients are offered alternative treatment strategies based on the results of these models. This clinical and societal potential is further strengthened by the public-private collaboration proposed in this project, through which the developed methodologies will find their way into products.

The multidisciplinarity of INFORM is key to meeting the target challenges and achieving the proposed goals. All partners bring individual world-leading qualifications and complementary scientific expertise, providing all the prerequisites for the efficient implementation of INFORM’s approach. Successful implementation of this project will have a large and lasting impact on predictive radiomics modeling in both the Medical/Oncology and Computing/Artificial Intelligence fields, and the same methodology could be extended to other diagnostic and therapeutic medical applications.

Call Topic: Explainable Machine Learning-based Artificial Intelligence (XAI), Call 2019
Start date: (36 months)
Funding support: 701 222 €

Publications in Open Access