AI Lab

Diagnostic radiology is on the cusp of a revolution driven by machine learning (ML) and artificial intelligence (AI). This powerful technology is transforming how medical images are analyzed, improving diagnostic accuracy and workflow efficiency, and ultimately holds promise for better patient care.

We are developing and test-bedding ML/AI tools in diagnostic radiology to:

· Streamline workflows: Automate tasks such as prioritizing studies and generating preliminary reports, freeing radiologists from the burden of uncomplicated screening cases so they can focus on complex ones.

· Automate lesion detection and classification: Consistently detect abnormalities in medical images and classify them as benign or malignant based on imaging features, potentially leading to earlier, more accurate diagnoses.

· Boost diagnostic confidence: Automatically provide quantitative imaging biomarkers that assist radiologists' analysis and offer consistent reference points, reducing inter-reader variability and improving overall diagnostic accuracy.

· Reduce turnaround times: Shorten the time to notification by automating the diagnostic workflow and differentiating screening studies with abnormalities from those without, so that significant findings are triaged for immediate medical attention and treatment decisions.

Artificial intelligence has transformed medical imaging and the wider healthcare industry. Our radiologists work hand in hand with partners from SingHealth HSRC, A*STAR IHPC, NTU and NUS on the following AI projects:


Automated classification of intracranial haemorrhage on head CT scans – Prof Chan Ling Ling




Intracranial haemorrhage (ICH) is a time-sensitive medical emergency in which bleeding occurs within the intracranial space. It can be caused by a ruptured blood vessel or trauma and leads to a rapid rise in pressure within the skull. Early diagnosis of ICH and appropriate medical or surgical intervention are critical for optimal patient outcomes. Delays can have devastating consequences, including raised intracranial pressure, brain herniation and death.

An artificial intelligence (AI)-powered tool that offers rapid and reliable ICH identification holds immense potential to transform emergency room care by enabling swifter treatment decisions and improving patient prognoses. This project aims to improve emergency room efficiency, reduce clinical burden and optimize case triaging with an AI tool capable of automatically detecting and classifying ICH on head CT scans.

Our team has curated a large dataset of ICH cases from the Singapore General Hospital (SGH) database, with ground-truth annotations, including subtype classifications, labelled at the slice level by expert neuroradiologists. This rich, expertly labelled dataset was used to train and validate an AI model designed to meet the stringent performance standards required for clinical use.
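As a rough illustration of what slice-level training on such labels can look like, the sketch below trains a small convolutional classifier on individual CT slices. The tiny network, the five-subtype label set and the multi-label loss are illustrative assumptions, not our actual model or pipeline.

import torch
import torch.nn as nn

# Common ICH subtypes; used here only as an illustrative label set.
SUBTYPES = ["epidural", "subdural", "subarachnoid", "intraparenchymal", "intraventricular"]

class SliceClassifier(nn.Module):
    """Tiny CNN that scores each ICH subtype for a single CT slice."""
    def __init__(self, n_classes=len(SUBTYPES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, H, W) preprocessed CT slices
        return self.head(self.features(x).flatten(1))  # per-subtype logits

def train_step(model, optimizer, slices, labels):
    """One optimisation step; labels are multi-hot vectors (a slice may show several subtypes)."""
    criterion = nn.BCEWithLogitsLoss()  # multi-label: each subtype scored independently
    optimizer.zero_grad()
    loss = criterion(model(slices), labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()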

We have test-bedded a robust prototype in a simulated clinical environment with good performance results. Further in-house testing of the deployment of this home-grown prototype within our clinical workflow interface, complete with integrated pipelines, is pending. The success of the latter will be crucial in determining the real-world applicability of this tool in emergency room settings.

Contact person:


Using artificial intelligence (AI) to aid in the diagnosis of ring-enhancing brain lesions – Prof Chan Ling Ling

Ring-enhancing brain lesions (REBLs), areas of low density or signal surrounded by a bright rim on neuroimaging, present a diagnostic challenge as they may be due to infective or non-infective etiologies. At Singapore General Hospital, where we see many patients who are immunocompromised through disease and/or treatment, the differentials are even broader and include various opportunistic infections such as Nocardia brain abscesses and toxoplasmosis. As these patients can deteriorate rapidly, early diagnosis and appropriate treatment are crucial for a good clinical outcome. However, current diagnostic approaches have limitations: clinical presentation may be atypical, neuroimaging features can overlap between etiologies, and brain biopsy, the gold standard for diagnosis, may not be feasible due to unacceptable surgical risks. There is an urgent need for a non-invasive method of diagnosing REBLs.

In recent years, artificial intelligence, and radiomics in particular, has been increasingly utilised in clinical radiology for the detection, characterisation and monitoring of lesions. Our study aims to investigate the role of radiomics in differentiating infective from neoplastic REBLs.
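As a brief illustration of how radiomics features are obtained, the snippet below extracts quantitative features from an MRI volume and its lesion mask using the open-source pyradiomics package; the file names and default settings are placeholders, not the study's actual configuration.

from radiomics import featureextractor  # pyradiomics

# Default settings compute shape, first-order intensity and texture features.
extractor = featureextractor.RadiomicsFeatureExtractor()

# Image and segmentation mask in any format readable by SimpleITK (e.g. NIfTI); placeholder file names.
features = extractor.execute("patient01_t1_post.nii.gz", "patient01_lesion_mask.nii.gz")

# Keep only numeric feature values (entries prefixed "diagnostics_" are extraction metadata).
radiomic_values = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(f"{len(radiomic_values)} radiomics features extracted")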

Radiological reports of all patients who underwent CT or MRI brain scans at SGH between 1 Nov 2013 and 31 Oct 2017 were filtered for search terms indicative of REBLs (supplementary data 1). Two board-certified infectious diseases physicians independently reviewed the electronic medical records of the identified patients to verify the final diagnoses, which were made at least three years after first presentation. Diagnoses were defined as “definite” if an infectious agent or a neoplasm was detected in brain tissue or CSF, and as “probable” if an infectious agent or a neoplasm was detected in blood or extra-cranial tissue and the clinical presentation was consistent with the diagnosis. Patients whose diagnoses did not fulfil the criteria for either “definite” or “probable” were excluded from the study. Relevant clinico-demographic and radiological data were collected for each patient through a comprehensive electronic medical record review. The REBLs were manually annotated using bounding boxes on axial 2D and coronal 3D scans, under the supervision of a neuroradiologist, using the XNAT software. Quantitative 3D MRI features were extracted, and we evaluated six machine learning classifiers to discern patterns within the radiomics features. Area under the receiver operating characteristic curve (AUROC), precision and recall were used as performance metrics.
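The sketch below shows, under assumptions, how such a classifier comparison can be run once the radiomics features are arranged in a feature matrix X (one row per lesion) with binary labels y (infective vs neoplastic). The three classifiers shown are an illustrative subset rather than the six evaluated in the study.

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, precision_score, recall_score

def evaluate_classifiers(X, y, n_splits=5, seed=42):
    """Cross-validated AUROC, precision and recall for a set of candidate models."""
    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=300, random_state=seed),
        "svm_rbf": SVC(kernel="rbf", probability=True, random_state=seed),
    }
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    results = {}
    for name, model in models.items():
        # Standardise features inside the pipeline so scaling is fit per training fold.
        pipe = make_pipeline(StandardScaler(), model)
        proba = cross_val_predict(pipe, X, y, cv=cv, method="predict_proba")[:, 1]
        pred = (proba >= 0.5).astype(int)
        results[name] = {
            "auroc": roc_auc_score(y, proba),
            "precision": precision_score(y, pred),
            "recall": recall_score(y, pred),
        }
    return results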

Contact person:


Federated Learning for Postoperative Segmentation of Treated Glioblastoma (FL-PoST) – Dr Lim Kheng Choon

The SGH Department of Neuroradiology, along with the NCC Division of Oncological Imaging, is proud to be involved in the Federated Learning for Postoperative Segmentation of Treated Glioblastoma (FL-PoST) study. Led by Duke University, Indiana University and the Response Assessment in Neuro-Oncology (RANO) team, this is a multinational federation of healthcare institutions collaborating to develop open-source software that provides simple, automated, accurate tumor segmentation free from inter-reader variability. It builds on previous work on Federated Tumor Segmentation (FeTS), extending it to post-treatment glioblastoma MRI; hence FL-PoST is also known as FeTS 2.0.
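A key point of such a federation is that patient images never leave each participating institution; only model parameters are exchanged and combined. The snippet below is a generic illustration of the federated-averaging (FedAvg) aggregation step, not the actual FL-PoST/FeTS implementation.

import numpy as np

def federated_average(site_params, site_sizes):
    """Average per-site model parameters, weighted by each site's number of cases.

    site_params: one list of parameter arrays per site (all sites share the same layout).
    site_sizes: number of local training cases at each site.
    """
    total = float(sum(site_sizes))
    averaged = []
    for layer in zip(*site_params):  # iterate layer by layer across sites
        averaged.append(sum(w * (n / total) for w, n in zip(layer, site_sizes)))
    return averaged

# Example: three sites with differently sized local datasets; only weights are shared.
sites = [[np.ones((2, 2)) * k] for k in (1.0, 2.0, 3.0)]
print(federated_average(sites, site_sizes=[100, 50, 50]))  # weighted toward the largest site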

Glioblastoma is the most common malignant brain tumor in adults, with an extremely poor prognosis and limited treatment options. Imaging assessment of glioblastoma on MRI is inherently challenging due to its infiltrative, heterogeneous nature. Besides the enhancing component of the tumor, which is often heterogeneous with areas of haemorrhage and necrosis, there is often multifocal non-enhancing disease, which makes quantifying tumor burden, delineating tumor margins for treatment and assessing treatment response difficult on the routine brain MRI currently used in clinical practice. Post-treatment changes from surgery and chemotherapy further complicate the imaging assessment, as the complex, heterogeneous post-treatment tumor microenvironment is often admixed with treatment change and viable tumor.

The study aims to enable quantitative longitudinal volumetric assessment of glioblastoma subregions on MRI in the post-treatment follow-up setting, the most common setting in which brain MRI is obtained in these patients, and to validate the FL-PoST model using clinical trial data with expert consensus RANO assessments.
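As a simple illustration of what such volumetric assessment involves, the sketch below derives per-subregion volumes from a labelled segmentation mask. The label values follow a common BraTS-style convention and are an assumption here; this is not the FL-PoST codebase.

import numpy as np
import nibabel as nib

# Assumed BraTS-style label convention for the segmentation mask (illustrative only).
SUBREGION_LABELS = {"non_enhancing_or_necrotic_core": 1, "edema": 2, "enhancing_tumor": 3}

def subregion_volumes_ml(seg_path):
    """Return the volume of each labelled subregion in millilitres."""
    img = nib.load(seg_path)
    seg = np.asarray(img.dataobj)
    # Voxel volume in mm^3 from the header: zooms are the voxel spacings in mm.
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return {
        name: float((seg == label).sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL
        for name, label in SUBREGION_LABELS.items()
    }

# Comparing these volumes across follow-up scans gives a longitudinal measure of each subregion.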

Besides contributing imaging datasets, we were involved in the segmentation of glioblastoma MRIs and in federated training on these datasets.


Contact person:

Contact

Please reach out to the respective Principal Investigators above if you are interested in being part of our exciting projects or would like more information.

Publications:

  1. Zhu L, Chan LL, Ng TK, Yang K, Zhang M, Ooi BC. Semi-Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation. 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2021
  2. Zhang AY, Lam SSW, Ong MEH, Tang PH, Chan LL. Explainable AI: Classification of MRI Brain Scans for Quality Improvement. BDCAT '19: Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies. Dec 2019;95-102. https://doi.org/10.1145/3365109.3368791 
  3. Zhang AY, Lam SSW, Liu N, Pang Y, Chan LL, Tang PH. Development of a Radiology Decision Support System for the Classification of MRI Brain Scans. Proceedings of the 5th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT 2018), pp. 107-115.