Generative Adversarial Networks for Forensic Image Synthesis and Identification
Abstract
In forensic investigations and identity verification, manual facial sketching remains a time-consuming and subjective process. This paper proposes a two-phase automated system that integrates generative and deep metric learning techniques to overcome the limitations of traditional sketch-based recognition. In the first phase, facial sketches are synthesized from detailed textual descriptions using Stable Diffusion conditioned with ControlNet, translating semantic facial attributes into visual representations. In the second phase, a deep metric learning approach based on FaceNet extracts embeddings from both the generated sketches and the CelebA dataset, and cosine similarity against the precomputed embedding database is used to retrieve the top-matching faces. The system accurately retrieves visually similar faces from sketch inputs, offering potential applications in criminal investigations, surveillance, and identity verification. Experimental results demonstrate the effectiveness and scalability of the proposed approach.
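To illustrate the retrieval step of the second phase, the minimal sketch below shows how top-k cosine-similarity matching over precomputed embeddings could be implemented. It is an assumption-based illustration, not the paper's implementation: the function name, the file names, and the 512-dimensional embedding size (typical of FaceNet) are hypothetical.

```python
import numpy as np

def top_k_matches(query_embedding: np.ndarray,
                  gallery_embeddings: np.ndarray,
                  k: int = 5) -> np.ndarray:
    """Return indices of the k gallery faces most similar to the query sketch.

    query_embedding: (d,) embedding of the generated sketch (e.g., from FaceNet).
    gallery_embeddings: (N, d) precomputed embeddings of the gallery images.
    """
    # L2-normalise so that a dot product equals cosine similarity.
    q = query_embedding / np.linalg.norm(query_embedding)
    g = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)

    similarities = g @ q                  # cosine similarity per gallery image
    return np.argsort(-similarities)[:k]  # indices of the top-k matches

# Hypothetical usage with precomputed embedding files:
# gallery = np.load("celeba_facenet_embeddings.npy")  # shape (N, 512)
# query = np.load("sketch_embedding.npy")             # shape (512,)
# print(top_k_matches(query, gallery, k=5))
```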