Enhanced brain tumor segmentation in medical imaging using multi-modal multi-scale contextual aggregation and attention fusion.

Accurate segmentation of brain tumors from multi-modal MRI scans is critical for diagnosis, treatment planning, and disease monitoring. However, tumor heterogeneity and inter-image variability across MRI sequences pose significant challenges to state-of-the-art segmentation models. This paper presents a novel Multi-Modal Multi-Scale Contextual Aggregation with Attention Fusion (MM-MSCA-AF) framework that leverages multi-modal MRI images (T1, T2, FLAIR, and T1-CE) to enhance segmentation performance. The model employs multi-scale contextual aggregation to capture both global context and fine-grained spatial features, and gated attention fusion to selectively refine informative feature representations while suppressing noise. Evaluated on the BRATS 2020 dataset, MM-MSCA-AF achieves a Dice score of 0.8158 for necrotic tumor regions and an overall Dice score of 0.8589, outperforming state-of-the-art architectures such as U-Net, nnU-Net, and Attention U-Net. These results demonstrate the effectiveness of MM-MSCA-AF in handling complex tumor shapes and improving segmentation accuracy. The proposed approach offers significant clinical value as a more accurate, automated solution for brain tumor segmentation in medical imaging.
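The abstract does not specify the implementation of the gated attention fusion step, but the general idea of combining feature maps from two MRI modalities through a learned sigmoid gate can be illustrated with a minimal NumPy sketch. All function names, parameter shapes, and the gating formulation below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_attention_fusion(feat_a, feat_b, w_gate, b_gate):
    """Fuse two modality feature maps with a learned sigmoid gate.

    feat_a, feat_b: (H, W, C) feature maps, e.g. from T1 and FLAIR branches.
    w_gate: (2C, C) gate weights, b_gate: (C,) bias -- hypothetical learned
    parameters standing in for the paper's (unspecified) fusion layer.
    """
    stacked = np.concatenate([feat_a, feat_b], axis=-1)   # (H, W, 2C)
    gate = sigmoid(stacked @ w_gate + b_gate)             # (H, W, C), values in (0, 1)
    # Convex combination: the gate decides, per voxel and channel,
    # how much each modality contributes to the fused representation.
    return gate * feat_a + (1.0 - gate) * feat_b

# Toy usage: two 4x4 feature maps with 8 channels.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
fa = rng.standard_normal((H, W, C))
fb = rng.standard_normal((H, W, C))
wg = rng.standard_normal((2 * C, C)) * 0.1
bg = np.zeros(C)
fused = gated_attention_fusion(fa, fb, wg, bg)
print(fused.shape)  # (4, 4, 8)
```

Because the gate output lies in (0, 1), each fused value is an elementwise convex combination of the two inputs, which is one common way such fusion modules let the network weight reliable modality features over noisy ones.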

Authors

Aslam Aslam, Hussain Hussain, Aslam Aslam, Jan Jan, Riaz Riaz, Iqbal Iqbal, Arif Arif, Khan Khan