Abstract: This project presents an advanced deep learning framework for generating realistic human faces and synthesizing images from textual descriptions. The system leverages a Deep Convolutional Generative Adversarial Network (DCGAN), consisting of a generator and a discriminator, to produce high-quality human face images. The generator creates synthetic images from learned patterns, while the discriminator evaluates their authenticity by distinguishing between real and generated images. Additionally, the project integrates a hybrid CNN-LSTM architecture for text-to-image generation, in which the LSTM extracts semantic features from the textual input and the CNN uses these features to generate corresponding images. The model is trained on a limited dataset of approximately 8,000 images, enabling it to generate contextually relevant outputs for predefined text inputs. The system comprises several modules, including user authentication, DCGAN training and loading, face generation, and text-to-image synthesis, all accessible through a web-based interface. Experimental results demonstrate 96% accuracy in distinguishing real from generated images, with progressive improvement observed across training epochs. This project highlights the potential of combining generative models and sequence-learning techniques for creative AI applications, while also addressing challenges related to limited data and computational constraints.
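The generator/discriminator pairing described above can be illustrated with a minimal DCGAN sketch in PyTorch. This is a hypothetical reconstruction for illustration only, not the paper's actual architecture: the layer widths, 100-dimensional latent vector, and 64x64 output resolution are all assumptions. The generator upsamples a latent vector into an RGB image via transposed convolutions; the discriminator downsamples an image into a single real/fake logit.

```python
# Minimal DCGAN sketch (illustrative assumptions throughout: latent size 100,
# 64x64 RGB output, layer widths chosen arbitrarily).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector z to a synthetic 3x64x64 image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> 256 x 8 x 8
            nn.ConvTranspose2d(latent_dim, 256, 8, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            # 256 x 8 x 8 -> 128 x 16 x 16
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            # 128 x 16 x 16 -> 64 x 32 x 32
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            # 64 x 32 x 32 -> 3 x 64 x 64; tanh keeps pixels in [-1, 1]
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Maps a 3x64x64 image to one real/fake logit per sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # 3 x 64 x 64 -> 64 x 32 x 32
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # 64 x 32 x 32 -> 128 x 16 x 16
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            # 128 x 16 x 16 -> 1 x 1 x 1 logit
            nn.Conv2d(128, 1, 16, 1, 0, bias=False),
        )

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(4, 100, 1, 1)   # batch of 4 latent vectors
fake = Generator()(z)           # synthetic images, shape (4, 3, 64, 64)
score = Discriminator()(fake)   # real/fake logits, shape (4,)
```

In adversarial training, the discriminator's logits would feed a binary cross-entropy loss, with the two networks updated in alternation; the text-to-image path described in the abstract would additionally condition the generator on LSTM-extracted text features.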
Published: 08-04-2026. Issue: Vol. 26 No. 4 (2026). Pages: 1807-1813. Section: Articles. License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.