The Recognition of American Sign Language Using CNN with Hand Keypoint

Authors

  • Muhamad Asep Ridwan, Universitas Siliwangi
  • Aradea
  • Husni Mubarok

Keywords:

American Sign Language, Convolutional Neural Network, Hand Keypoint, Massey dataset

Abstract

Sign language is a method of communication used by the deaf community. In line with advances in deep learning, researchers have widely applied neural networks to sign language recognition in recent years. Many models and hardware setups have been developed to achieve high recognition accuracy, but accuracy remains a concern, and the recognition of American Sign Language (ASL) in particular still requires further research. This paper discusses a method to improve ASL recognition accuracy using a Convolutional Neural Network (CNN) with hand keypoints. A pre-trained keypoint detector is used to generate hand keypoints on the Massey dataset, which serve as the input for classification by the CNN model. The results show that the proposed method outperforms previous studies, obtaining an accuracy of 99.1% in recognizing the 26 static signs of the ASL alphabet.
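
The article page does not include source code. As an illustration only, the sketch below shows one way the pipeline described in the abstract might be realized, assuming MediaPipe Hands as the pre-trained hand-keypoint detector and a small Keras 1D CNN over the 21 extracted landmarks; the authors' actual keypoint detector, CNN architecture, and input encoding may differ.

```python
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26    # static ASL alphabet signs A-Z
NUM_KEYPOINTS = 21  # MediaPipe Hands returns 21 landmarks per detected hand

def extract_keypoints(image_bgr, hands):
    """Return a (21, 2) array of normalized (x, y) hand keypoints, or None."""
    results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y] for p in landmarks], dtype=np.float32)

def build_classifier():
    """Small 1D CNN over the ordered keypoint sequence (one possible design)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_KEYPOINTS, 2)),
        tf.keras.layers.Conv1D(64, 3, activation="relu"),
        tf.keras.layers.Conv1D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

if __name__ == "__main__":
    detector = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)
    model = build_classifier()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Training would call model.fit() on keypoint arrays extracted from the
    # Massey dataset images; file paths and labels are dataset-specific.
```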

Published

2023-12-16

How to Cite

Ridwan, M. A., Aradea, & Mubarok, H. (2023). The Recognition of American Sign Language Using CNN with Hand Keypoint. International Journal on Information and Communication Technology (IJoICT), 9(2), 86–95. Retrieved from https://socjs.telkomuniversity.ac.id/ojs/index.php/ijoict/article/view/845

Issue

Vol. 9 No. 2 (2023)

Section

Intelligence System