The incidence of melanoma, a highly aggressive form of skin cancer, continues to rise globally. Early detection significantly improves survival rates, as melanoma is visible on the skin’s surface in its initial stages. Recent advances in automated systems, particularly deep learning, have enhanced non-invasive melanoma detection, reducing the need for biopsies and conserving healthcare resources. In this study, we present MelDetect, a multimodal deep learning system that integrates dermoscopic images—captured with a dermoscope, a magnifying device that illuminates and visualizes subsurface skin structures—with clinical metadata to improve diagnostic accuracy. Using the HAM10000 dataset, our approach achieves a test accuracy of 81.83% and a macro-average AUC of 0.95. MelDetect shows high sensitivity for key lesion classes, achieving 89.63% recall for melanocytic nevi (Class 5) and 92.86% recall for vascular lesions (Class 6), and its confusion matrix reveals clinically plausible misclassifications between visually similar benign and malignant lesions. These results highlight MelDetect’s potential as a reliable, non-invasive tool for early melanoma detection and clinical decision support.
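To make the multimodal idea concrete, the sketch below illustrates one common way such a system can fuse the two inputs: an image branch produces a feature vector, a metadata branch encodes clinical attributes, and the concatenated features feed a softmax classifier over the seven HAM10000 lesion classes. This is a minimal, illustrative sketch only; the branch architectures, feature sizes, and random weights here are assumptions, not MelDetect's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(image):
    """Stand-in for a CNN encoder: pool channels, project to 64-d features.

    A real system would use a trained convolutional network here; this
    placeholder only demonstrates the shape of the interface.
    """
    pooled = image.mean(axis=(0, 1))             # (3,) per-channel means
    W = rng.normal(size=(3, 64)) * 0.1           # illustrative random weights
    return np.maximum(pooled @ W, 0)             # ReLU features, shape (64,)

def metadata_branch(age, sex, site_onehot):
    """Encode clinical metadata (age, sex, lesion site) as a dense vector."""
    raw = np.concatenate([[age / 100.0, sex], site_onehot])
    W = rng.normal(size=(raw.size, 16)) * 0.1
    return np.maximum(raw @ W, 0)                # shape (16,)

def fuse_and_classify(img_feat, meta_feat, n_classes=7):
    """Concatenate both modalities and apply a softmax classifier."""
    fused = np.concatenate([img_feat, meta_feat])    # shape (80,)
    W = rng.normal(size=(fused.size, n_classes)) * 0.1
    logits = fused @ W
    exp = np.exp(logits - logits.max())              # stable softmax
    return exp / exp.sum()                           # class probabilities

image = rng.random((224, 224, 3))                # dummy dermoscopic image
site = np.zeros(5)                               # hypothetical 5-way
site[2] = 1.0                                    # one-hot body-site encoding
probs = fuse_and_classify(image_branch(image),
                          metadata_branch(age=55, sex=1, site_onehot=site))
print(probs.shape)                               # (7,)
```

This late-fusion layout is one standard design choice for image-plus-tabular problems; whether MelDetect fuses early, late, or via attention is not specified in the abstract.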



