SPEAKSIGNS: REAL-TIME GESTURE RECOGNITION AND SPEECH SYNTHESIS FOR SIGN LANGUAGE USERS

ID: 2607

Abstract: This paper presents a real-time sign language recognition (SLR) framework capable of processing both static images and live video streams to generate corresponding textual representations and synthesized speech. The proposed system integrates MediaPipe-based hand and pose tracking for robust landmark extraction, followed by a hybrid deep learning architecture that combines convolutional neural networks (CNNs) for spatial feature learning with bidirectional long short-term memory (Bi-LSTM) networks for temporal sequence modeling. The framework supports both isolated sign recognition and continuous sign sentence translation, enabling efficient handling of dynamic gesture sequences. Additionally, a text-to-speech (TTS) module converts the recognized text into naturalistic speech output, thereby facilitating real-time, bidirectional communication between deaf and hearing individuals. Extensive experimental evaluations on multiple benchmark datasets demonstrate the effectiveness of the proposed approach, achieving up to 99% recognition accuracy along with high responsiveness and fluency. The results indicate the strong potential of the proposed framework for deployment in real-world assistive communication and human–computer interaction applications.
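The front end of the pipeline described above turns per-frame landmarks into fixed-length feature vectors and groups them into overlapping windows for the temporal model. A minimal sketch of that preprocessing step is given below, assuming the standard MediaPipe landmark counts (21 landmarks per hand, 33 for the body pose, each with x, y, z coordinates); the function names, window length, and stride are illustrative choices, not the authors' exact configuration.

```python
import numpy as np

# MediaPipe Hands yields 21 landmarks per hand; MediaPipe Pose yields 33
# body landmarks. Each landmark carries (x, y, z) coordinates.
HAND_LANDMARKS = 21
POSE_LANDMARKS = 33
FEATURES_PER_FRAME = (2 * HAND_LANDMARKS + POSE_LANDMARKS) * 3  # 225

def frame_features(left_hand, right_hand, pose):
    """Flatten one frame's landmarks into a single feature vector.

    Each argument is an (N, 3) array of (x, y, z) coordinates;
    a hand that is not detected can be passed as zeros.
    """
    return np.concatenate([left_hand.ravel(), right_hand.ravel(), pose.ravel()])

def sliding_windows(frames, window=30, stride=10):
    """Stack per-frame features into overlapping windows for a Bi-LSTM.

    Returns an array of shape (num_windows, window, features).
    """
    chunks = [frames[i:i + window]
              for i in range(0, len(frames) - window + 1, stride)]
    return np.stack(chunks) if chunks else np.empty((0, window, frames.shape[1]))

# Example with synthetic landmarks for a 60-frame clip:
rng = np.random.default_rng(0)
frames = np.stack([
    frame_features(rng.random((HAND_LANDMARKS, 3)),
                   rng.random((HAND_LANDMARKS, 3)),
                   rng.random((POSE_LANDMARKS, 3)))
    for _ in range(60)
])
windows = sliding_windows(frames)
print(frames.shape)   # (60, 225)
print(windows.shape)  # (4, 30, 225)
```

Each window of shape (30, 225) is then a ready-made input sequence for a recurrent classifier; overlapping windows let a continuous-signing system emit predictions at a steady rate rather than once per clip.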
Published: 09-4-1-2026
Issue: Vol. 26 No. 4-1 (2026)
Page Nos: 250-256
Section: Articles
License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
How to Cite: Dr. T. Srinivasa Rao, Sayani Harika, Yerninti Lowshik Teja, Vanabathina Saritha, Ustela Hemanth Naga Kumar, SPEAKSIGNS: Real-Time Gesture Recognition and Speech Synthesis for Sign Language Users, 2026, International Journal of Engineering Sciences and Advanced Technology, 26(4-1), pp. 250-256, ISSN: 2250-3676.