MIDL 2021

International Conference on
Medical Imaging with Deep Learning
July 5-7, 2021

Project KI-LAB Lübeck

As part of the project "KI-LAB Lübeck", a new AI laboratory is being set up on campus. Its goal is to provide an even more powerful environment for research with AI algorithms. Machine learning enables IT systems to recognize patterns and regularities in existing data and to develop solutions for a wide range of problems. Teaching will also benefit from the new AI focus: starting in the winter semester 2020/2021, a two-year pilot phase of a part-time, English-language Master's programme in Artificial Intelligence is planned at the Universität zu Lübeck. The project is funded by the German Federal Ministry of Education and Research (BMBF) from 2019 to 2022.

The Institute of Medical Informatics is actively involved as a project partner. It contributes to the development of the AI infrastructure for intelligent image analysis and to the exemplary implementation of use cases such as the deep-learning-based analysis of OCT images of the eye (OCT: optical coherence tomography). The institute is also involved in establishing the new part-time Master's programme in Artificial Intelligence.

IMI Project Team:

M.Sc. Julia Andresen
Prof. Dr. Heinz Handels


Further information on this project, which is carried jointly by the institutes of the Section of Computer Science/Engineering of the Universität zu Lübeck, can be found at



Created at July 13, 2020 - 9:36am by Kulbe.

KI-SIGS project iAuge: Homecare eye diagnostics and intelligent image analysis in ophthalmology

iAuge is a sub-project of the transregional joint project "KI-Space für intelligente Gesundheitssysteme" (KI-SIGS, English: AI Space for Intelligent Health Systems). In cooperation with partners at the Universities of Lübeck, Kiel and Bremen, the University Medical Center Schleswig-Holstein (UKSH), Visotec GmbH and the UniTransferKlinik in Lübeck, new AI-based solutions for ophthalmic diagnostics will be developed. The main goal of the work at the Institute of Medical Informatics (IMI) is the development of optimized deep neural networks for the automatic AI-based evaluation of three-dimensional OCT images (OCT: optical coherence tomography) for the improved treatment of patients with eye diseases. The project is funded by the Federal Ministry of Economic Affairs and Energy from 2020 to 2023.

Within the subproject iAuge, an AI platform will be established to support the integrated treatment of patients with eye diseases such as the common age-related macular degeneration (AMD) and retinopathia centralis serosa (RCS). In addition to AI support of multimodal image analysis for ophthalmologists, a deep-learning-based automated data analysis for a novel homecare OCT application will be realized and incorporated into the KI-SIGS platform. This will enable patients to monitor their disease at home, which is expected to lead to a significant improvement in therapy, especially for AMD patients. Continuous monitoring at home using homecare OCT images will automatically detect deterioration of the eye condition using an AI system developed at the IMI and determine individually optimal treatment time points.

A technologically innovative, mobile OCT scanner for the homecare sector is being developed by Visotec GmbH in cooperation with the Medizinisches Laserzentrum Lübeck GmbH (Fig. 1). Together with the University Eye Clinic Kiel, this technology was validated on different patient cohorts and compared with already established, high-resolution OCT scanners. Visotec GmbH will provide image data of patients for the development of the algorithms and aims to commercialize the system. The large amount of daily acquired three-dimensional image data requires new intelligent and efficient deep-learning-based evaluation algorithms. For this purpose, the IMI designs and develops problem-optimized deep learning networks as well as image processing algorithms that quantitatively capture relevant AMD biomarkers (Fig. 2) in images of the homecare OCT and quantify their temporal changes and relevance for therapy control. In addition, existing OCT images acquired in the clinic will also be incorporated into the development process. At the same time, reconstruction methods are being further developed at the Institute for Biomedical Optics (BMO) in order to compensate for motion and to improve the quality of relevant structures in the image. Due to its compact and cost-efficient design and its independent use by mostly elderly patients with reduced visual acuity, the homecare application places special demands on the operation of the device and the evaluation of the data. An optimized user interface for the homecare OCT device is being developed in cooperation with Prof. Schöning (University of Bremen). Furthermore, in cooperation with the UniTransferKlinik Lübeck, the practical use of the system will be supported via a central AI platform and a demonstrator for the developed AI methods will be established.

Fig. 1: Handheld OCT scanner.

Fig. 2: Comparison of two retinal scans of the same AMD patient with PED between Spectralis OCT from Heidelberg Engineering (top) and SELF-OCT (bottom).

Project Team:

M. Sc. Timo Kepp
Prof. Dr. Heinz Handels

Cooperation Partners:

Prof. Dr. Gereon Hüttmann
Institute for Biomedical Optics (BMO) of the University of Lübeck

Prof. Dr. Johann Roider, Dr. Claus von der Burchard
Department for Ophthalmology, UKSH Campus Kiel (UKSH Kiel)

Prof. Dr. Johannes Schöning
Human-Computer Interaction Group, University of Bremen (Uni Bremen)

Prof. Dr. Reinhard Koch, Monty Santarossa
Multimedia Information Processing Group, University of Kiel (Uni Kiel)

Prof. Dr. Martin Leucker, Dr. Tim Suthau
UniTransferKlinik, Lübeck (UTK)

Helge Sudkamp
Visotec GmbH, Lübeck

Created at June 8, 2020 - 12:41pm by Kulbe. Last modified at October 9, 2020 - 11:27am by Kulbe.

Deep Learning-based Generative Models for Unsupervised Anomaly Detection in Medical Images

Deep learning methods for image analysis have been shown to excel at many tasks such as segmentation, detection and classification. However, they usually require a large amount of annotated images to learn a particular task. For example, if the aim is to segment brain tumors in MRI images, a deep learning algorithm would require hundreds or thousands of brain MRIs with expert annotations of the tumor tissue. Such datasets are rarely available in the medical field, since the annotation of images is a complicated and time-consuming process.

In this project, the focus lies on detecting pathologies and anomalies in medical images without using ground-truth segmentations for training, so-called unsupervised learning. The main idea is to learn and model the variability of healthy tissue appearance in a certain data domain, so that pathologies are recognized as deviations from the learned norm. Deep-learning-based generative models such as variational autoencoders or GANs are the tools of choice in this project, since they enable the learning of complex representations and distributions.
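The scoring step of this idea can be illustrated without any deep learning machinery: given a reconstruction produced by a generative model trained only on healthy images, the pixel-wise reconstruction error acts as the anomaly heat map (cf. Fig. 1), and a threshold yields an anomaly mask. The following numpy sketch is purely illustrative; the constant "reconstruction", the toy lesion and the quantile threshold are assumptions standing in for the project's actual models:

```python
import numpy as np

def anomaly_heatmap(image, reconstruction):
    """Pixel-wise squared reconstruction error as anomaly score."""
    return (image - reconstruction) ** 2

def detect_anomalies(image, reconstruction, quantile=0.95):
    """Binary anomaly mask: pixels whose error exceeds a quantile threshold."""
    heat = anomaly_heatmap(image, reconstruction)
    return heat > np.quantile(heat, quantile)

# Toy example: healthy background is reconstructed well, a simulated
# bright lesion is not, so its residual dominates the heat map.
rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.01, size=(64, 64))
image[20:30, 20:30] += 0.5                  # simulated pathology
reconstruction = np.full((64, 64), 0.5)     # "model" only knows healthy tissue
mask = detect_anomalies(image, reconstruction)
```

In practice the reconstruction would come from the trained generative model, and the heat map would be post-processed rather than hard-thresholded.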



Fig. 1: Example pathological images (left) and unsupervised localization of the pathologies (right) shown as heat maps. From top to bottom: thorax CT with a large tumor; brain MRI with contrast-enhanced glioblastoma tumor tissue; retinal OCT with macular edema.

Selected Publications:

  1. Uzunova H., Handels H., Ehrhardt J.
    Unsupervised Pathology Detection in Medical Images using Learning-based Methods
    In: Maier A., Deserno T.M., Handels H., Maier-Hein K.H., Palm C., Tolxdorff T. (eds.), Bildverarbeitung für die Medizin 2018, Erlangen, Informatik aktuell, Springer Vieweg, Berlin Heidelberg, 61-66, 2018
  2. Uzunova H., Ehrhardt J., Kepp T., Handels H.
    Interpretable Explanations of Black Box Classifiers Applied on Medical Images by Meaningful Perturbations using Variational Autoencoders
    In: SPIE 10949, Medical Imaging 2019: Image Processing, San Diego, USA, 10949, 1094911-1-1094911-8, 2019


M.Sc. Hristina Uzunova
Dr. rer. nat. Jan Ehrhardt
Prof. Dr. rer. nat. habil. Heinz Handels


Created at May 9, 2019 - 1:11pm by Kulbe. Last modified at June 8, 2020 - 1:07pm by Kulbe.

Automatic Patient Individual Contrast Agent Dose Optimization (iQ-CM)

Contrast agents are used in medical imaging to visualize anatomical structures (Fig. 1) and to make physiological parameters such as perfusion measurable. They usually consist of substances whose toxicity can be mitigated biochemically; still, side effects occur in numerous cases, some of them severe. Most radiologists use universal contrast agent doses for contrast-enhanced imaging, which may exceed the actual requirements in some cases. The aim of this project is to develop methods and algorithms that optimize patient-individual contrast agent doses for contrast-enhanced computed tomography (CT) examinations, thereby reducing the average dose required.

The focus of our work lies in the development and evaluation of multi-parametric models on the basis of patient-individual parameters such as age, gender, blood pressure and pulse rate, as well as examination-specific parameters such as the injection speed of the bolus. Model-based and machine learning methods will be used to analyze and describe complex, multi-parametric correlations in order to perform a patient-specific optimized prediction of contrast agent requirements.
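As a toy illustration of such a multi-parametric model, the following numpy sketch fits a linear dose predictor by least squares on synthetic data. The parameter selection, value ranges, coefficients and the linear model form are all illustrative assumptions, not results of the project:

```python
import numpy as np

# Hypothetical patient and examination parameters mapped to a contrast agent
# dose; a linear model fitted by least squares stands in for the
# multi-parametric models described above.
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(20, 90, n),    # age [years]
    rng.uniform(50, 110, n),   # body weight [kg]
    rng.uniform(50, 100, n),   # pulse rate [1/min]
    rng.uniform(2, 6, n),      # injection speed [ml/s]
])
true_w = np.array([0.1, 0.8, 0.05, -2.0])          # synthetic ground truth
y = X @ true_w + 20 + rng.normal(0.0, 1.0, n)      # dose [ml] with noise

# Fit: append an intercept column and solve the least-squares problem.
A = np.column_stack([X, np.ones(n)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_dose(age, weight, pulse, speed):
    """Predicted patient-individual dose for one examination."""
    return float(np.array([age, weight, pulse, speed, 1.0]) @ w)
```

The actual project would compare such model-based fits against nonlinear machine learning regressors on clinical study data.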

The predictive power of the model-based methods and the machine learning methods will be evaluated on the basis of a clinical study that will be carried out at the Clinic for Radiology and Nuclear Medicine in the context of this project.

Fig.1: CT slices with different contrast agent enhancements. With a sufficient contrast agent dose, anatomical structures, such as the aorta in the right image (arrow), become clearly visible.

The project is realized in collaboration with the Clinic for Radiology and Nuclear Medicine of the University Medical Center Schleswig-Holstein, Campus Lübeck, and the company IMAGE Information Systems Europe GmbH and is funded by the German Federal Ministry of Education and Research (BMBF).

Project team:

M.Sc. Kira Leane Soika
Prof. Dr. rer. nat. Heinz Handels

Cooperation Partners:

Prof. Dr. med. J. Barkhausen, PD Dr. med. P. Hunold and M. Schürmann
Clinic for Radiology and Nuclear Medicine
Universitätsklinikum Schleswig-Holstein, Campus Lübeck

Dr. med. A. Bischof, Dr. rer. nat. C. Godemann, H. Marien, E. Virtel and S. Schülke
IMAGE Information Systems Europe GmbH

Created at September 24, 2018 - 1:48pm.

Automatic Analysis and Recognition of Image Structures in 3D/4D OCT Image Data Using Machine Learning Methods

Optical coherence tomography (OCT) is a non-invasive imaging technique based on light of low coherence length. OCT works on the same principle as ultrasound imaging, but uses light instead of sound waves. Interferometry is used to measure time-of-flight differences between the light backscattered from the sample beam and the light of the reference beam, which makes three-dimensional volume acquisitions at micrometer resolution possible.

Deep Learning Methods for Improved Monitoring of Age-related Macular Degeneration in Spatio-temporal 4D OCT Image Sequences

In this project, deep-learning-based methods for improved monitoring of age-related macular degeneration (AMD) are being developed. The project is carried out in cooperation with the Department of Ophthalmology of the University Medical Center Schleswig-Holstein in Kiel and the Institute of Biomedical Optics of the Universität zu Lübeck.

Age-related macular degeneration (AMD) is the most common cause of blindness in people over 60 in the Western world. In the wet, exudative form of AMD, edema forms under the retina. Due to an insufficient oxygen supply to the photoreceptors, messenger substances such as vascular endothelial growth factor (VEGF) are produced, leading to choroidal neovascularization. The newly formed vessels are very fragile and cause unwanted bleeding in the macula. To counteract the uncontrolled blood vessel growth, VEGF antibodies can be injected into the vitreous cavity. However, the therapeutic effect lasts only a few weeks, so a re-injection is required after this period. For improved diagnosis and therapy of AMD, new deep learning methods for the computer-aided analysis and recognition of biomarkers (Fig. 1) in spatio-temporal 4D OCT image data are being developed, implemented and validated within the project. The developed methods are intended to detect and classify the biomarkers in the OCT image data so that a recurrence can be recognized at an early stage.
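Segmentation quality in such a validation is commonly summarized per biomarker class with the Dice overlap between automatic and expert segmentations. A minimal numpy sketch; the integer label codes for the fluid classes of Fig. 1 are illustrative:

```python
import numpy as np

LABELS = {1: "IRF", 2: "SRF", 3: "PED"}   # fluid biomarkers, cf. Fig. 1

def dice(pred, truth, label):
    """Dice overlap for one label; 1.0 means perfect agreement.
    Empty-vs-empty is scored as 1.0 by convention."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy 2-D example: the predicted IRF region misses one row of the truth.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[3:6, 2:6] = 1
score = dice(pred, truth, 1)   # 2*12 / (12+16) = 24/28
```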

Fig. 1: OCT slice (B-scan) of the macula of an AMD patient. The typical AMD pathologies are segmented: intraretinal fluid (IRF) in blue, subretinal fluid (SRF) in red, and pigment epithelial detachment (PED) in green.

Machine Learning Methods for the Segmentation and Analysis of the Subcutaneous Fat Layer of Mouse Skin in 3D/4D OCT Image Data

The goal of this project is the automatic segmentation and quantitative analysis of the subcutaneous fat tissue of mice in 3D OCT image data using machine learning methods, which are needed for the evaluation of the cryolipolysis procedure. The project is carried out in cooperation with the Cutaneous Biology Research Center at Massachusetts General Hospital in Boston.

Cryolipolysis is a cosmetic procedure for non-invasive fat reduction co-developed by the cooperation partner Dr. Dieter Manstein. A special controlled cooling technique selectively destroys subcutaneous fat cells. A mouse model is to be tested for the future evaluation of the procedure. OCT can be used to create 3D images of the mouse skin at micrometer resolution (Fig. 1). This high resolution results in a large amount of image data whose manual evaluation is practically infeasible due to the time required. The goal of this project is therefore to develop a computer-aided method that automatically segments the subcutaneous fat layer in 3D OCT image data (Fig. 2). To this end, learning-based methods with high accuracy and robustness are to be developed.
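The publications below use random forests for this task, i.e. each voxel is classified from local appearance features. The following numpy sketch shows one plausible feature-extraction step (raw intensity plus local mean and variance over a cubic neighborhood); the concrete feature set is an illustrative assumption, not the project's exact pipeline:

```python
import numpy as np

def local_stats(vol, radius=1):
    """Local mean and variance over a (2r+1)^3 neighborhood, computed by
    stacking shifted copies of the volume (edge voxels are cropped)."""
    r = radius
    Z, Y, X = vol.shape
    shifts = []
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                shifts.append(vol[r+dz:Z-r+dz, r+dy:Y-r+dy, r+dx:X-r+dx])
    stack = np.stack(shifts)
    return stack.mean(axis=0), stack.var(axis=0)

def voxel_features(vol, radius=1):
    """Feature matrix (n_voxels x 3): intensity, local mean, local variance;
    the kind of appearance features a random-forest voxel classifier consumes."""
    r = radius
    mean, var = local_stats(vol, radius)
    inner = vol[r:-r, r:-r, r:-r]
    return np.column_stack([inner.ravel(), mean.ravel(), var.ravel()])

# Toy volume: a bright "fat layer" occupying half the volume along one axis.
vol = np.zeros((6, 6, 6))
vol[:, 3:, :] = 1.0
F = voxel_features(vol)
```

A trained forest would then map each feature row to a fat/non-fat label, producing the segmentations shown in Fig. 2 and Fig. 3.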

Fig. 2: View of the mouse skin in the groin region. Left: example of a single B-scan of the mouse skin. Right: overview image during acquisition with the marked scan region (red rectangle). The dashed line indicates the position of the B-scan.

Fig. 3: Color-coded map of the thickness of the segmented subcutaneous fat.

Selected Publications:

  1. Kepp T., Droigk C., Casper M., Evers M., Salma N., Manstein D., Handels H.
    Segmentation of Subcutaneous Fat within Mouse Skin in 3D OCT Image Data using Random Forests
    In: Proc. SPIE 10574, Medical Imaging 2018: Image Processing, 1057426, 2018, doi: 10.1117/12.2290085
  2. Kepp T., Droigk C., Casper M., Evers M., Salma N., Manstein D., Handels H.
    Abstract: Random-Forest-basierte Segmentierung der subkutanen Fettschicht der Mäusehaut in 3D-OCT-Bilddaten, In: Maier A., Deserno T.M., Handels H., Maier-Hein K.H., Palm C., Tolxdorff T. (eds.), Bildverarbeitung für die Medizin 2018, Erlangen, Informatik aktuell, Springer Vieweg, Berlin Heidelberg, 203, 2018


M.Sc. Timo Kepp
Prof. Dr. rer. nat. habil. Heinz Handels (Head)


PD Dr. rer. nat. Gereon Hüttmann
Institute of Biomedical Optics
Universität zu Lübeck

Dr. med. Claus von der Burchard
Department of Ophthalmology
Universitätsklinikum Schleswig-Holstein, Kiel

Prof. Dr. med. Johann Roider
Department of Ophthalmology
Universitätsklinikum Schleswig-Holstein, Kiel

Dr. Dieter Manstein
Cutaneous Biology Research Center
Massachusetts General Hospital, Boston, Massachusetts


Created at April 4, 2018 - 8:00am by Kepp. Last modified at June 8, 2020 - 1:00pm by Kulbe.

Decision Forest Variants for Brain Lesion Segmentation

Many diseases, such as ischemic stroke and multiple sclerosis (MS), cause focal alterations of the brain tissue, so-called lesions, which are visible in magnetic resonance (MR) imaging sequences (see Fig. 1). Reliable diagnosis and informed treatment decisions require a quantitative assessment of these lesions in time and space, which can only be obtained through further analysis of the imaging data. Brain lesion segmentation methods that meet the high requirements on accuracy, reproducibility and robustness set by clinical and research standards are therefore in high demand.

In this project, new methods for brain lesion segmentation based on decision forests (DF) were developed and investigated.

Fig. 1: Example of ischemic stroke lesion appearance in different MR sequences


Brain lesion segmentation in multi-spectral MR images with decision forests

By intelligently combining a chain of MR preprocessing methods, a set of carefully handcrafted image features and robust DF classifiers, a novel automatic method for the exact and robust segmentation of brain lesions from multi-spectral MR images was developed. The solution is tailored towards use in clinical practice and research scenarios and as such poses minimal requirements on the quality of the input data. Since it follows the machine learning paradigm, the method can be adapted to various types of brain-lesion-causing diseases by simple re-training and has accordingly been evaluated on stroke (see Fig. 2), MS and glioma cases. The approach achieved high ranks and won multiple awards in international benchmarks on brain lesion segmentation (ISBIMS2015, ISLES2015, BRATS2015, ISLES2016), proving its outstanding segmentation performance and high adaptability to new diseases.

Fig. 2: Automatically generated segmentation result (blue) compared to manually delineated expert ground truth (green)


Local problem forests

The segmentation of brain lesions from MR images involves a number of particularly challenging areas (see Fig. 3). To target these spatially local problems, a methodological extension of the DF model was developed. The training of the forest's trees is preceded by an unsupervised spectral clustering step based on image patches, which creates a topology of local segmentation problems. Instead of using bagging, trees are placed in the topology's areas of high training-patch accumulation and trained on the proximal training samples with a normally distributed fuzzy catchment area. Thus, trees specializing in particular subproblems are trained and selectively applied to new cases, significantly increasing the segmentation accuracy for stroke segmentation from mono-spectral MR images.
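The fuzzy catchment idea can be sketched compactly: each tree sits at one cluster centre of the learned topology and weights training samples by a Gaussian of their feature-space distance to that centre, instead of uniform bagging. A minimal numpy sketch with toy centres (the Gaussian form follows the description above; the data and bandwidth are illustrative):

```python
import numpy as np

def catchment_weights(samples, centers, sigma=1.0):
    """Per-tree training weights: tree k is placed at cluster centre k
    (found e.g. by spectral clustering of image patches) and each sample
    contributes with a normally distributed (fuzzy) catchment weight
    around that centre. Returns an array of shape (n_trees, n_samples)."""
    d2 = ((samples[None, :, :] - centers[:, None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy topology: two cluster centres in a 2-D patch-feature space.
samples = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
W = catchment_weights(samples, centers, sigma=1.0)
```

Row k of W would then weight (or sample) the training set of tree k, so each tree specializes on its local sub-problem.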

Fig. 3: Particularly challenging local areas for brain lesion segmentation from MR images


Semi-supervised forests

In particular for MS, the lesion dissemination in time is as important as in space, and MR scans are regularly conducted to monitor disease progression. Based on the manual segmentation of one time point's scans, the previously developed classification method can be trained and used to segment the subsequent time points automatically. However, following the semi-supervised segmentation paradigm, it is potentially beneficial to incorporate the unlabeled testing samples from the time point to be segmented into the training process driven by the labeled samples from previous time points.

To this end, the DF model was methodologically extended to allow training with partially labeled data. While standard DFs optimize the information gain via the Shannon entropy to determine the best split at each tree node, the novel approach balances the labeled term against a term based on the differential entropy of the unlabeled samples. Thus, each split is set at the equilibrium between label purity and cluster density in feature space.
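A minimal numpy sketch of such a split criterion, with a 1-D Gaussian estimate standing in for the differential-entropy term (the exact estimator, dimensionality and weighting used in the project may differ):

```python
import numpy as np

def shannon_entropy(labels):
    """Shannon entropy (bits) of a discrete label set."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def gaussian_differential_entropy(x):
    """Differential entropy (bits) of a 1-D Gaussian fitted to the samples:
    0.5 * log2(2*pi*e*var)."""
    return float(0.5 * np.log2(2.0 * np.pi * np.e * (np.var(x) + 1e-12)))

def split_impurity(feat, labels, labeled, threshold, alpha=0.5):
    """Semi-supervised impurity of a candidate split: weighted sum over both
    children of the labeled (Shannon) and unlabeled (differential) entropies.
    Lower is better; alpha balances label purity against cluster density."""
    score = 0.0
    for side in (feat <= threshold, feat > threshold):
        lab = labels[side & labeled]
        unl = feat[side & ~labeled]
        h_lab = shannon_entropy(lab) if lab.size else 0.0
        h_unl = gaussian_differential_entropy(unl) if unl.size > 1 else 0.0
        score += side.mean() * (alpha * h_lab + (1.0 - alpha) * h_unl)
    return score

# Two well-separated clusters; only every second sample carries a label.
rng = np.random.default_rng(0)
feat = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(10.0, 0.1, 50)])
labels = np.concatenate([np.zeros(50, int), np.ones(50, int)])
labeled = np.arange(100) % 2 == 0
good = split_impurity(feat, labels, labeled, threshold=5.0)   # between clusters
bad = split_impurity(feat, labels, labeled, threshold=0.0)    # inside a cluster
```

The split between the clusters scores lower (better) than a split through one cluster: it is pure in label space and dense in feature space on both sides.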

Applied to longitudinal MS segmentation, the proposed semi-supervised forest led to a significant improvement in segmentation accuracy compared to classical supervised DFs.


In the scope of this project, a robust and accurate brain lesion segmentation method was developed and shown to outperform the state of the art. Additional methodological and algorithmic improvements designed for specific use-cases furthermore improved the segmentation accuracy. All methods were evaluated on brain MR data of clinical relevance in direct international benchmarks and are currently employed in research settings.

The project was carried out in close cooperation with experts from the cognitive neuroscience group at the Department of Neurology, University Medical Center Schleswig-Holstein, Germany. Oskar Maier is a member of the Graduate School for Computing in Medicine and Life Science, Universität zu Lübeck, Germany.


MedPy – Medical Image Processing in Python
      package (PyPI) / source code (GitHub)

sklearnef – Extension module providing un- and semi-supervised decision forests for scikit-learn
      source code (GitHub)

DynStatCov - Cython library for fast dynamic statistical covariance update
      package (PyPI) / source code (GitHub)

albo – Automatic Lesion to Brain region Overlap (by Lennart Weckeck)
      source code (GitHub)

Selected Publications

  1. Maier O., Menze B.H., von der Gablentz J., Häni L., Heinrich M.P., Liebrand M., Winzeck S., Basit A., Bentley P., Chen L. et al.
    ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
    Medical Image Analysis, 35, 250-269, 2017

  2. Maier O., Schröder C., Forkert N.D., Martinetz T., Handels H.
    Classifiers for Ischemic Stroke Lesion Segmentation: A Comparison Study
    PLOS ONE, 10, 12, e0145118, 2015

  3. Maier O., Wilms M., von der Gablentz J., Krämer U.M., Münte T.F., Handels H.
    Extra Tree forests for sub-acute ischemic stroke lesion segmentation in MR sequences
    Journal of Neuroscience Methods, 240, 89-100, 2015

  4. Maier O., Handels H.
    Local Problem Forests: Classifier Training for Locally Limited Sub-Problems Using Spectral Clustering
    In: 2015 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, ISBI 2015, New York, IEEE Proceedings, The Printing House, 806-809, 2015

  5. Crimi A., Menze B., Maier O., Reyes M., Handels H. (eds.)
    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries
    First International Workshop, Brainles 2015, Held in Conjunction with MICCAI 2015
    Springer International Publishing, München, 2016

Further activities


Organization of a stroke lesion segmentation challenge at the MICCAI 2015


Organization of a stroke outcome prediction challenge at the MICCAI 2016


Project Team

M.Sc. Oskar Maier
Prof. Dr. Heinz Handels

Cooperation Partners

Prof. Dr. rer. nat. Ulrike M. Krämer, Prof. Dr. med. habil. Thomas F. Münte and Dr. med. Matthias Liebrand
Cognitive Neuroscience Group, Department of Neurology
University Medical Center Schleswig-Holstein, Lübeck, Germany


Created at February 21, 2017 - 1:55pm by Maier. Last modified at February 23, 2017 - 4:57pm.

Learning Contrast-invariant Contextual Local Descriptors and Similarity Metrics for Multi-modal Image Registration

Deformable image registration is a key component of clinical imaging applications involving multi-modal image fusion, estimation of local deformations and image-guided interventions. A particular challenge for establishing correspondences between scans from different modalities (magnetic resonance imaging (MRI), computed tomography (CT) or ultrasound) is the definition of image similarity. Relying directly on intensity differences is not sufficient for most clinical images, which exhibit non-uniform changes in contrast, image noise, intensity distortions, artefacts and, for different modalities, globally non-linear intensity relations.

In this project, algorithms with increased robustness for medical image registration will be developed. We will improve on current state-of-the-art similarity measures by combining a larger number of versatile image features using simple local patch or histogram distances. Contrast invariance and strong discrimination between corresponding and non-matching regions will be reached by capturing contextual information through pair-wise comparisons within an extended spatial neighbourhood of each voxel. Recent advances in machine learning will be used to learn problem-specific binary descriptors in a semi-supervised manner that can improve upon hand-crafted features by including a priori knowledge. Metric learning and higher-order mutual information will be employed for finding mappings between feature vectors across scans in order to reveal new relations among feature dimensions. Employing binary descriptors and sparse feature selection will improve computational efficiency (because it enables the use of the Hamming distance), while maintaining the robustness of the proposed methods.
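The principle behind such binary contextual descriptors and their cheap Hamming matching can be sketched in a few lines of numpy. The offset pattern and test images below are illustrative, not the learned descriptors proposed in the project:

```python
import numpy as np

def binary_context_descriptor(img, points, offset_pairs):
    """Binary contextual descriptor: one bit per pair-wise intensity
    comparison between two offset locations in the spatial neighbourhood
    of each point. Encoding only intensity orderings makes the descriptor
    invariant to monotonic contrast changes."""
    bits = []
    for (dy1, dx1), (dy2, dx2) in offset_pairs:
        a = img[points[:, 0] + dy1, points[:, 1] + dx1]
        b = img[points[:, 0] + dy2, points[:, 1] + dx2]
        bits.append(a < b)
    return np.stack(bits, axis=1)          # (n_points, n_bits), boolean

def hamming(d1, d2):
    """Hamming distance between descriptors: the cheap XOR-style matching
    cost that makes dense similarity evaluation efficient."""
    return np.count_nonzero(d1 != d2, axis=1)

# Toy check: a strictly monotonic intensity change (a crude stand-in for a
# modality change that preserves structure) leaves all bits unchanged.
rng = np.random.default_rng(3)
img = rng.uniform(0.0, 1.0, (32, 32))
warped = 3.0 * img + 7.0
pairs = [((-2, 0), (2, 0)), ((0, -2), (0, 2)), ((-1, -1), (1, 1)), ((0, 0), (2, 2))]
pts = np.array([[10, 10], [16, 20], [5, 25]])
d1 = binary_context_descriptor(img, pts, pairs)
d2 = binary_context_descriptor(warped, pts, pairs)
```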

A deeper understanding of models for image similarity will be reached during the course of this project. The development of new methods for currently challenging (multi-modal) medical image registration problems will open new perspectives of computer-aided applications in clinical practice, including multi-modal diagnosis, modality synthesis, and image-guided interventions or radiotherapy.

Fig. 1: Overview of project plan for learning multi-modal metrics using correspondences in aligned training data.

The project is funded by Deutsche Forschungsgemeinschaft (DFG) (HE 7364/2-1).

Selected Publications:

  1. Heinrich M.P., Blendowski M.
    Multi-Organ Segmentation using Vantage Point Forests and Binary Context Features
    MICCAI 2016

  2. Blendowski M., Heinrich M.P.
    Kombination binärer Kontextfeatures mit Vantage Point Forests zur Multi-Organ-Segmentierung
    BVM 2017

  3. Heinrich M.P., Jenkinson M., Bhushan M., Matin T., Gleeson F.V., Brady S.M., Schnabel J.A.
    MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.
    Medical Image Analysis 2012

  4. Heinrich M.P., Jenkinson M., Papiez B.W., Brady S.M., Schnabel J.A.
    Towards realtime multimodal fusion for image-guided interventions using self-similarities
    MICCAI 2013

Project Team:

M.Sc. Max Blendowski
Jun.-Prof. Dr. Mattias P. Heinrich


Created at January 23, 2017 - 11:40am by Heinrich. Last modified at February 23, 2017 - 4:56pm.

Towards Realtime MR-guided Motion Compensation using Model-based Registration without Fiducial Markers

Physiological patient motion is an important problem for accurate dose delivery during radiotherapy. Accurate realtime motion compensation based on image guidance could be realised in a combined MR-radiotherapy treatment setup. The objective of this project is to develop algorithms that can estimate intra-fraction motion reliably without implanted fiducial markers and that improve on state-of-the-art techniques in terms of accuracy and computational speed.

In contrast to previous work, which predominantly used template matching to achieve realtime speed, we propose to incorporate prior knowledge as well as patient-specific information about plausible deformations during motion estimation. Superior motion estimation, especially for peripheral organs at risk, will be achieved using these models. To account for motion variability, MR images with high temporal resolution acquired during a short setup phase under free breathing could be incorporated for patient-specific training. Building upon previous work, a motion model based on principal component analysis or a Bayesian framework can be robustly trained using highly efficient deformable registration. Multiple distributed keypoints at discriminative geometric locations will be extracted automatically using machine learning techniques to avoid the need for invasive implantation of fiducial markers. Robust and accurate realtime motion estimation will be performed within a computationally efficient optimisation framework that incorporates the training model for plausible regularised motion estimation and avoids tracking errors by sampling a large space of potential motion vectors. The algorithms will be validated on retrospective clinical 4D MRI scans using manually annotated landmarks to demonstrate their suitability and advances over state-of-the-art methods.
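The PCA-based regularisation step can be sketched as follows: fit a PCA motion model to training deformation fields from the setup phase and project raw motion estimates onto its subspace, so that implausible deformation components are discarded. A numpy sketch on synthetic fields; the flattened data layout and toy modes are illustrative assumptions:

```python
import numpy as np

def fit_motion_model(fields, n_modes=2):
    """PCA motion model from training deformation fields: mean field plus
    the dominant modes. `fields` is (n_phases, n_params), one flattened
    deformation field per breathing phase of the setup scan."""
    mean = fields.mean(axis=0)
    U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
    return mean, Vt[:n_modes]               # modes are orthonormal rows

def regularise(raw_field, mean, modes):
    """Project a raw motion estimate onto the model subspace, so only
    plausible combinations of the training modes survive."""
    coeff = (raw_field - mean) @ modes.T
    return mean + coeff @ modes

# Synthetic training fields spanned by two orthogonal modes.
rng = np.random.default_rng(7)
m1 = np.array([1.0, 0, 0, 1.0, 0, 0]); m1 /= np.linalg.norm(m1)
m2 = np.array([0, 1.0, 0, 0, -1.0, 0]); m2 /= np.linalg.norm(m2)
coeffs = rng.normal(0.0, 1.0, (5, 2))
fields = coeffs[:, :1] * m1 + coeffs[:, 1:] * m2
mean, modes = fit_motion_model(fields, n_modes=2)

# A training field is reproduced exactly; an estimate corrupted by an
# out-of-model component is pulled back onto the plausible subspace.
rec = regularise(fields[0], mean, modes)
noisy = fields[0] + np.array([0, 0, 0.5, 0, 0, 0.5])
reg = regularise(noisy, mean, modes)
```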

Fig. 1: Overview of project plan for motion estimation in 4D-MRI using model-based regularisation

The project is funded by Deutsche Forschungsgemeinschaft (DFG) (HE 7364/1-1).

Selected Publications:

  1. Wilms M., Ha I.Y., Handels H., Heinrich M.P.
    Model-based Regularisation for Respiratory Motion Estimation with Sparse Features in Image-guided Interventions
    MICCAI 2016

  2. Ha I.Y., Wilms M., Heinrich M.P.
    Multi-object segmentation in chest X-ray using cascaded regression ferns
    BVM 2017

  3. Heinrich M.P., Papiez B.W., Schnabel J., Handels H.
    Non-Parametric Discrete Registration with Convex Optimisation
    WBIR 2014

Project Team:

M.Sc. In Young Ha
M.Sc. Matthias Wilms
Jun.-Prof. Dr. Mattias P. Heinrich


Created at January 23, 2017 - 11:40am by Heinrich. Last modified at February 23, 2017 - 4:53pm.

MRI-based pseudo-CT synthesis for attenuation correction in PET-MRI and Linac-MRI

Combining positron emission tomography (PET) and magnetic resonance imaging (MRI) within the same scanner has recently evolved into a research topic of great interest, since this new multimodal imaging technique enables improved tumor localization and delineation from healthy tissue compared to conventional PET-CT. Another important emerging research area is the integration of MRI into radiotherapy treatment delivery systems (e.g. linear accelerators, Linacs) to improve the planning of dose delivery and also the tracking and correction of tumor motion. Despite its potential advantages, there are a number of challenging problems when replacing CT completely by MRI in radiotherapeutic treatments and PET imaging, in particular the lack of correlation between the measured MRI intensities and the attenuation-related mass densities. However, since in both cases information about the attenuation behavior of the tissue is required, synthetic CT scans (so-called pseudo-CTs) need to be derived from the acquired MRI scans.

Our previous work on modality-independent neighbourhood descriptors is considered state-of-the-art for multimodal deformable image registration, which is a vital step for multi-atlas-based CT synthesis. Our first published results for image synthesis include a local formulation of the canonical correlation analysis (CCA) for MRI synthesis based on local histograms, and multi-atlas-registration-based pseudo-CT synthesis from MRI. In this project, we will advance registration-based techniques and combine them with machine learning algorithms to investigate their potential for image synthesis. For this purpose, various multivariate statistical methods, such as the non-linear kernel CCA, will be analysed. To overcome the lack of a functional relationship between MRI and CT intensities, the consideration of rich contextual image descriptors will be studied. Learning algorithms for detecting and correcting errors in the generation of pseudo-CT scans could also be explored. Following image synthesis, attenuation maps can be created directly from the synthesized pseudo-CT images, and their influence on PET image reconstruction may be evaluated. Since previous work on the generation of attenuation maps was mainly limited to the head, a particular focus of this project is the development of new methods for MRI-based attenuation correction for the whole body.
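One simple instance of a local atlas fusion strategy can be sketched in numpy: each registered atlas votes with its CT value, weighted per voxel by the local MR similarity to the target. The Gaussian weighting and the toy data are illustrative assumptions, not the project's actual fusion rule:

```python
import numpy as np

def fuse_pseudo_ct(target_mr, atlas_mrs, atlas_cts, beta=50.0):
    """Local multi-atlas fusion: per voxel, every registered atlas
    contributes its CT value, weighted by a Gaussian of the local MR
    intensity difference to the target MR."""
    w = np.exp(-beta * (atlas_mrs - target_mr[None]) ** 2)
    return (w * atlas_cts).sum(axis=0) / w.sum(axis=0)

# Toy example: two registered atlases; atlas 0 matches the target MR
# exactly, atlas 1 is a poor local match, so atlas 0 dominates the vote.
target_mr = np.array([[0.2, 0.8], [0.5, 0.1]])
atlas_mrs = np.stack([target_mr, target_mr + 0.6])
atlas_cts = np.stack([np.full((2, 2), 40.0), np.full((2, 2), 1000.0)])
pseudo_ct = fuse_pseudo_ct(target_mr, atlas_mrs, atlas_cts)
```

In a real pipeline the MR similarity would be computed over local patches or descriptors rather than single intensities, and the atlases would first be deformably registered to the target.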

Fig. 1: Exemplary view of a combined MR-PET scan and a pseudo-CT image synthesised using atlas-based registration.

The project is funded by the Graduate School for Computing in Medicine & Life Sciences.

Selected Publications:

  1. Degen J., Heinrich M.P.
    Multi-Atlas Based Pseudo-CT Synthesis using Multimodal Image Registration and Local Atlas Fusion Strategies

  2. Degen J., Modersitzki J., Heinrich M.P.
    Dimensionality Reduction of Medical Image Descriptors for Multimodal Image Registration
    Current Directions in Biomedical Engineering, 2015

  3. Heinrich M.P., Schnabel J., Papiez B.W., Handels H.
    Multispectral Image Similarity Based on Local Canonical Correlation Analysis
    MICCAI 2014

Project Team:

M.Sc. Johanna Degen
Jun.-Prof. Dr. Mattias P. Heinrich

Cooperation Partners:

Prof. Dr. Jörg Barkhausen, Department of Radiology and Nuclear Medicine, Universitätsklinikum Schleswig-Holstein, Lübeck
Prof. Dr. Magdalena Rafecas, Institute of Medical Engineering, Universität zu Lübeck


Created at January 23, 2017 - 11:40am by Heinrich. Last modified at February 23, 2017 - 4:54pm.

Learning to Predict Stroke Outcome based on Multivariate CT Images

The treatment of acute stroke requires very careful decisions within a very narrow time window. Shortly after patients are admitted to the hospital and scans have been acquired, clinicians need to decide which treatment path offers the best chances of survival and avoidance of brain damage. These decisions have to be based on a multitude of 3D tomographic image data (CT perfusion maps and potentially thermal imaging) and other clinical indicators (patient age, NIHSS, etc.).

Novel image processing techniques, mathematical models and machine learning algorithms, which can deal with the challenges of real clinical data, will be devised, implemented and tested during this research project. A large dataset of retrospective multispectral images of stroke patients is employed for the development and training of new models.

The new algorithms will be used to automatically predict a pixel-wise map of the tissue that is likely to recover if a certain treatment (in particular vessel recanalisation) is performed, and how urgent this intervention is. The project is being carried out in close collaboration with the neuroradiology department.
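As a deliberately simple stand-in for such pixel-wise outcome models, the following numpy sketch trains a small logistic-regression voxel classifier on synthetic "perfusion" features and reshapes its probabilities into an image-space map. The features, labels and model choice are illustrative assumptions, not the project's methods:

```python
import numpy as np

def train_voxel_classifier(X, y, lr=0.5, steps=500):
    """Logistic regression by gradient descent: maps per-voxel features
    (e.g. CT perfusion values) to a probability of tissue recovery."""
    A = np.column_stack([X, np.ones(len(X))])   # append intercept
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w -= lr * A.T @ (p - y) / len(y)        # mean log-loss gradient
    return w

def probability_map(X, w, shape):
    """Pixel-wise outcome probabilities reshaped into image space."""
    A = np.column_stack([X, np.ones(len(X))])
    return (1.0 / (1.0 + np.exp(-A @ w))).reshape(shape)

# Synthetic voxels: recovery depends on one "perfusion" feature.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (400, 2))
y = (X[:, 0] > 0.5).astype(float)
w = train_voxel_classifier(X, y)
prob = probability_map(X, w, (20, 20))
acc = float(((prob.ravel() > 0.5) == (y > 0.5)).mean())
```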

Fig. 1: Multivariate CT scans of a stroke patient together with an automatically generated outcome prediction.

Funded by the Lübeck Medical School within the TRAVE Stroke Project.

Selected Publications:

  1. Lucas C., Maier O., Heinrich M.P.
    Shallow fully-connected neural networks for ischemic stroke-lesion segmentation in MRI
    BVM 2017

  2. Maier O., Menze B.H., von der Gablentz J., Häni L., Heinrich M.P., Liebrand M., Winzeck S., Basit A., Bentley P., Chen L. et al.
    ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI
    Medical Image Analysis 2017

  3. Kemmling A., Flottmann F., Forkert N.D. et al.
    Multivariate dynamic prediction of ischemic infarction and tissue salvage as a function of time and degree of recanalization
    Journal of Cerebral Blood Flow & Metabolism 2015

Project Team:

M.Sc. Christian Lucas
Jun.-Prof. Dr. Mattias P. Heinrich

Cooperation Partners:

Dr. André Kemmling, Department of Neuroradiology, Universitätsklinikum Schleswig-Holstein, Lübeck
Dr. Amir Madany-Mamlouk, Institute for Neuro- and Bioinformatics, Universität zu Lübeck


Created at January 23, 2017 - 11:40am by Heinrich. Last modified at February 24, 2017 - 11:08am.

