Multimodal Medical Image Fusion Techniques using CNN


Pranita Sudhakarrao Chaudhary, Sharda Chhabria

Abstract

In medical research, combining multiple medical images improves the diagnosis of illness and reveals the intricate relationships between them. Image fusion is the process of extracting relevant visual information from two or more images and merging it into a single fused image. Image fusion is crucial across a wide range of image-processing domains, from remote sensing to medical applications, and a variety of techniques have been employed for it to date.
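
To make the definition concrete, the sketch below shows the simplest possible pixel-level fusion rule, a plain average of two registered grayscale images. It is a baseline for illustration only and is not the method proposed in this paper.

    # Minimal pixel-level fusion baseline: simple averaging of two
    # spatially registered, same-size grayscale images.
    import numpy as np

    def average_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Fuse two registered grayscale images by pixel-wise averaging."""
        assert img_a.shape == img_b.shape, "inputs must be spatially registered"
        # Work in float to avoid integer overflow, then blend equally.
        return 0.5 * img_a.astype(np.float32) + 0.5 * img_b.astype(np.float32)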


With the prevalence of health problems rising rapidly, it is imperative to interpret human MRI and CT images accurately and promptly in order to diagnose medical disorders. Computed tomography (CT) and magnetic resonance imaging (MRI) are two common modalities in medical image processing, used to capture information about hard and soft tissues, respectively. However, it is extremely difficult to extract the pathological features needed to detect suspicious tissue details from a single modality alone. Over the last few decades, a number of medical image fusion techniques have sought to overcome this problem by combining the complementary information in MRI and CT.


However, variations in a patient's MRI and CT data arising from factors such as location, ethnicity, and age can cause the accuracy of image fusion to vary. In this work, we propose a "Generalized Solution" for the prompt diagnosis and early identification of health concerns through the fusion of CT and MRI images. The approach employs convolutional neural networks (CNNs), which extract the key features of the images without requiring training on patient-specific data, so the resulting fusion strategy generalizes to a wide range of patients.
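
As an illustration of such a training-free CNN fusion strategy, the sketch below uses a shallow slice of a pretrained VGG19 from torchvision (an assumed backbone; the abstract does not name a specific network) to derive a per-pixel activity map from each modality, then blends the registered CT and MRI slices with soft-max weights. It is a minimal sketch under these assumptions, not the paper's exact pipeline.

    # Training-free CNN fusion sketch (assumption: VGG19 backbone; the
    # pretrained network is used only as a fixed feature extractor).
    import torch
    from torchvision.models import vgg19, VGG19_Weights

    @torch.no_grad()
    def fuse_ct_mri(ct: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        """Fuse two registered grayscale slices of shape (H, W), values in [0, 1]."""
        # A shallow slice (through relu1_2) keeps full spatial resolution,
        # so activity maps align pixel-for-pixel with the inputs.
        # ImageNet normalization is omitted for brevity in this sketch.
        extractor = vgg19(weights=VGG19_Weights.DEFAULT).features[:4].eval()

        def activity(img: torch.Tensor) -> torch.Tensor:
            x = img.expand(1, 3, *img.shape)             # replicate gray channel to RGB
            feats = extractor(x)                          # (1, 64, H, W) feature maps
            return feats.abs().sum(dim=1, keepdim=True)   # l1-norm activity map

        a_ct, a_mri = activity(ct), activity(mri)
        # Soft-max weighting: pixels where one modality responds more
        # strongly contribute more to the fused result.
        w = torch.softmax(torch.cat([a_ct, a_mri], dim=1), dim=1)
        fused = w[:, 0:1] * ct + w[:, 1:2] * mri
        return fused.squeeze()

Because the network is only run forward as a fixed feature extractor, no patient-specific training is needed, which is what allows the same fusion rule to be applied across diverse patient populations.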
