Accurate, simultaneous classification and segmentation of brain tumors are critical for personalized treatment planning, yet remain a significant challenge due to the heterogeneous nature of tumor subregions and the inconsistent alignment between structural (MRI) and functional (PET) data. Existing deep learning models largely treat classification and segmentation as separate tasks and fail to exploit the complementary strengths of multimodal imaging; consequently, they perform poorly in uncertain or anatomically ambiguous regions.
To address this gap, we propose a novel Cross-Attention Convolutional Neural Network (CA-CNN) framework for simultaneous classification and segmentation of brain tumors by fusing MRI and PET modalities. The model introduces five analytical innovations: (1) a Dual-Modality Self-Supervised Contrastive Consistency (DM-SSCC) loss function, which aligns MRI and PET feature spaces for cross-modality robustness; (2) an Uncertainty-Guided Adaptive Sampling Module (UGASM), which refines high-uncertainty regions; (3) a Multi-Resolution Fourier Attention Fusion (MFAF) layer; (4) a Semi-Supervised Knowledge Graph Propagation (SS-KGP) engine; and (5) a Cross-Domain Biometric Embedding Calibration (CBEC) module.
This integrated architecture yields substantial improvements over state-of-the-art baselines: a 6.2% increase in classification F1-score, a 0.06 gain in segmentation Dice score, and a 63% reduction in calibration error. With strong, clinically grounded performance and high computational efficiency, the proposed model sets a new benchmark for multimodal brain tumor analysis in real-world clinical settings.
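To make the core fusion idea concrete, the following is a minimal NumPy sketch of generic cross-attention between MRI and PET feature tokens, where MRI features act as queries over PET features. All function names, shapes, and projection matrices here are illustrative assumptions for exposition, not the paper's actual CA-CNN implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(mri_feats, pet_feats, Wq, Wk, Wv):
    """Generic cross-attention: MRI tokens query PET tokens.
    attended = softmax(Q K^T / sqrt(d)) V, with Q from MRI, K/V from PET."""
    Q = mri_feats @ Wq          # queries from the structural modality
    K = pet_feats @ Wk          # keys from the functional modality
    V = pet_feats @ Wv          # values from the functional modality
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_mri_tokens, n_pet_tokens)
    return attn @ V             # PET information reweighted per MRI token

# Toy example with random features (shapes are assumptions).
rng = np.random.default_rng(0)
n_tokens, dim = 16, 32
mri = rng.normal(size=(n_tokens, dim))
pet = rng.normal(size=(n_tokens, dim))
Wq, Wk, Wv = (rng.normal(size=(dim, dim)) * 0.1 for _ in range(3))
fused = cross_attention(mri, pet, Wq, Wk, Wv)
print(fused.shape)  # (16, 32)
```

In a full model, such a block would sit between convolutional encoders for each modality, so each spatial location in the MRI feature map attends to the most relevant PET locations before the shared classification and segmentation heads.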
Keywords: Brain Tumor, Multimodal Fusion, Cross-Attention CNN, Segmentation, Medical Imaging, Deep Learning.
How to cite this article: Lakhanpal A, Baghela VS, Tiwari S. Design of an iterative model using cross attention frequency guided fusion with anatomical reasoning for simultaneous brain tumor classification using multimodal sources. Int J Drug Deliv Technol. 2026;16(3s): 842-853; DOI: 10.25258/ijddt.16.3s.102
Source of support: Nil.
Conflict of interest: None.