International Journal of Drug Delivery Technology
Volume 16, Issue 16s, 2026

Riemannian Manifold Learning for Cross-Modal Feature Alignment in Bioinformatics-Oriented Multimodal Medical Imaging

Dr. Janarthanam Selvarasu1*, Ms. Sangeetha Periyasamy2, Ms. Sandhya Billahalli Gangadharappa3, Prof. Raja Sarath Kumar Boddu4, Dr. Balachandramohan Manavalan5

1Post-Doctoral Researcher, Lincoln University College, Malaysia. Associate Professor, School of Science and Computer Studies, CMR University, Bengaluru, India. Email: professorjana@gmail.com. ORCID: https://orcid.org/0000-0002-8676-8998

2Ph.D. Researcher, Department of Physics, Erode Arts and Science College, Bharathiar University. Lecturer, Sri Ramakrishna Polytechnic College, Coimbatore. Email: sange.samy@gmail.com. ORCID: https://orcid.org/0009-0009-0335-7092

3Research Scholar, Department of Studies in Mathematics, Davangere University, Tholahunase, India. Email: sandhyascholar2023@gmail.com. ORCID: https://orcid.org/0009-0000-8666-9394

4Professor and Head, Department of Artificial Intelligence and Machine Learning, Raghu Engineering College, Visakhapatnam, India. Email: rajaboddu@lincoln.edu.my. ORCID: https://orcid.org/0000-0002-2508-6715

5Associate Professor, Department of Physics, Erode Arts and Science College (Autonomous), Erode. Email: m.balachandramohan@easc.ac.in. ORCID: https://orcid.org/0000-0001-6462-9017


ABSTRACT

Integrating medical images from diverse modalities—such as MRI, PET, and whole-slide histopathology—remains challenging because each technique generates data at fundamentally different scales and resolutions, governed by distinct physical principles. This study introduces a Riemannian manifold framework that models each modality as residing on its own curved geometric manifold. By computing geodesic alignments, the approach maps all modalities into a unified latent space while preserving local neighborhoods and global topology. A geometry-aware attention mechanism dynamically weights each modality's contribution according to its geodesic consistency with the others, enabling adaptive fusion. Features are then transferred to a common tangent space via parallel transport, ensuring distortion-free integration. Evaluated on the BraTS 2021, ADNI, and TCGA-GBM benchmarks, the method surpassed leading baselines, delivering a 4.7% Dice score gain over the Cross-Modal Transformer (BraTS 2021), 6.2% higher accuracy than the Multimodal GCN (ADNI), and a 5.8% improvement in concordance index over the Hypergraph NN (TCGA-GBM). Ablation studies confirmed the contribution of geodesic alignment, and the learned attention weights closely mirror the established clinical importance of the individual sequences. These findings demonstrate that Riemannian geometry offers a mathematically principled foundation for robust multimodal fusion, paving the way for more reliable and interpretable AI-driven diagnostics.
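To make the geometric operations referenced in the abstract concrete, the sketch below implements the Riemannian logarithm map (projecting a manifold point into the tangent space at a reference point), the exponential map (its inverse), and geodesic distance on the manifold of symmetric positive-definite (SPD) matrices under the affine-invariant metric. This is a minimal, illustrative example of the general machinery—tangent-space mapping and geodesic distance—not the authors' implementation; the choice of the SPD manifold and the function names are assumptions made for the sake of a self-contained demonstration.

```python
import numpy as np

def _sym_pow(P, p):
    """Matrix power of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(P)
    return (V * w**p) @ V.T

def log_map(P, Q):
    """Riemannian log map at P: sends SPD matrix Q to a tangent vector at P
    (affine-invariant metric)."""
    P_half, P_inv_half = _sym_pow(P, 0.5), _sym_pow(P, -0.5)
    M = P_inv_half @ Q @ P_inv_half
    w, V = np.linalg.eigh(M)
    logM = (V * np.log(w)) @ V.T          # matrix logarithm of M
    return P_half @ logM @ P_half

def exp_map(P, V_tan):
    """Riemannian exp map at P: sends a tangent vector back to the manifold."""
    P_half, P_inv_half = _sym_pow(P, 0.5), _sym_pow(P, -0.5)
    M = P_inv_half @ V_tan @ P_inv_half
    w, U = np.linalg.eigh(M)
    expM = (U * np.exp(w)) @ U.T          # matrix exponential of M
    return P_half @ expM @ P_half

def geodesic_dist(P, Q):
    """Geodesic distance between SPD matrices under the affine-invariant metric."""
    w = np.linalg.eigvalsh(_sym_pow(P, -0.5) @ Q @ _sym_pow(P, -0.5))
    return np.sqrt(np.sum(np.log(w) ** 2))
```

In a cross-modal setting, per-modality features (e.g., covariance descriptors) would be projected via `log_map` into the tangent space at a common reference point, where ordinary Euclidean fusion applies; `exp_map` inverts the projection, and `geodesic_dist` supplies the consistency score that a geometry-aware attention mechanism could weight by.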

Keywords: Cross-modal feature alignment, Diagnostic classification, Geodesic mapping, Geometric deep learning, Multimodal medical imaging, Riemannian manifold learning

How to cite this article: Selvarasu J, Periyasamy S, Gangadharappa SB, Boddu RSK, Manavalan B. Riemannian Manifold Learning for Cross-Modal Feature Alignment in Bioinformatics-Oriented Multimodal Medical Imaging. Int J Drug Deliv Technol. 2026;16(16s): 191-202. DOI: 10.25258/ijddt.16.16s.20

Source of support: Nil.

Conflict of interest: None