Research Fellow
Homepage: https://manojramanathan.github.io/
LinkedIn: www.linkedin.com/in/manoj-ramanathan-19935557

PhD - School of Electrical and Electronics Engineering, Nanyang Technological University, 2016 - 2017
B. Tech (Bachelor of Technology) - Instrumentation and Control Engineering, National Institute of Technology, Tiruchirapalli, 2009


Dr. Manoj Ramanathan is an AI researcher whose primary research interests include computer vision, full-stack robot platforms, social robotics, virtual character embodiments, and human-computer interaction (HCI). He is currently working on the development of a vision-based perception module for a lower-limb exoskeleton to be used by patients.

He graduated from the School of Electrical and Electronics Engineering, NTU, Singapore with a PhD specializing in computer vision. After graduation, he worked as a research fellow at IMI, NTU, Singapore on the development and operation of the AI and full-stack platform of the Nadine social humanoid robot. He then gained industrial experience at DEX-Lab AI, a Singapore startup, where he worked as an AI Research Scientist developing virtual humans and humanoid robots.

Research Interests
  • Computer Vision
  • Artificial Intelligence – Deep Learning and Machine Learning
  • Human-Robot Interaction / Human-Computer Interaction
  • Rehabilitative and Assistive Robotics
  • Mobile and Social Robotics
  • Virtual Human Embodiments


Research Projects
  • Sensor Fusion with Intelligent Shared Control for Wearable Lower Limb Assistive Robotic Devices
  • Semi-Autonomous and Shared Control Wheelchair



Journal Publications

  • Ramanathan, M., Yau, W.-Y., Teoh, E. K., & Thalmann, N. M. (2019, March). Mutually Reinforcing Motion-Pose Framework for Pose-Invariant Action Recognition. Intl. Journal of Biometrics, 11(2), 113–147.
  • Ramanathan, M., Kochanowicz, J., & Thalmann, N. M. (2019, February). Combining Pose-Invariant Kinematic Features and Object Context Features for RGB-D Action Recognition. Intl. Journal of Machine Learning and Computing, 9(1), 44–50.
  • Ramanathan, M., Yau, W.-Y., & Teoh, E. K. (2014, October). Human Action Recognition with Video Data: Research and Evaluation Challenges. IEEE Trans. on Human Machine Systems, 44(5), 650–663.

Book Chapter

  • Ramanathan, M., Satapathy, R., & Thalmann, N. M. (2021). Survey of Speechless Interaction Techniques in Social Robotics. In N. M. Thalmann, J. J. Zhang, M. Ramanathan, & D. Thalmann (Eds.), Intelligent Scene Modelling and Human-Computer Interaction, Human Computer Interaction Series (pp. 241–257). Springer, Cham. doi:10.1007/978-3-030-71002-6_14


Conference Publications

  • Ramanathan, M., Singh, A., Suresh, A., Thalmann, D., & Magnenat-Thalmann, N. (2022). Virtual Safety Assistant: An Efficient Tool for Ensuring Safety During the COVID-19 Pandemic. In M. Kurosu (Ed.), Human-Computer Interaction: User Experience and Behavior (pp. 546–565). Cham: Springer International Publishing.
  • Mishra, N., Ramanathan, M., Satapathy, R., Cambria, E., & Thalmann, N. M. (2019, October). Can a Humanoid Robot be part of the Organizational Work Force? A User Study leveraging on Sentiment Analysis. In 28th IEEE Intl. Conf. on Robot and Human Interactive Communication (RO-MAN). IEEE.
  • Ramanathan, M., Mishra, N., & Thalmann, N. M. (2019, June). Nadine Humanoid Social Robotics Platform. In M. Gavrilova, J. Zhang, N. M. Thalmann, E. Hitzer, & H. Ishikawa (Eds.), Computer Graphics International (CGI) (Vol. 11542, pp. 490–496). Advances in Computer Graphics, Part of LNCS book series. Springer, Cham.
  • Ramanathan, M., Yau, W.-Y., Teoh, E. K., & Thalmann, N. M. (2017, December). Pose-Invariant Kinematic Features for Action Recognition. In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (pp. 292–299). IEEE.
  • Thalmann, D., Thalmann, N. M., & Ramanathan, M. (2017). Real Humans with Virtual Humans and Social Robots Interactions (HCI). In SIGGRAPH Asia 2017 Courses (15:1–15:221). SA ’17. Bangkok, Thailand: ACM. doi:10.1145/3134472.3134513
  • Ramanathan, M., Yau, W.-Y., & Teoh, E. K. (2016, November). Improving Human Body Part Detection Using Deep Learning and Motion Consistency. In Intl. Conf. on Control, Automation, Robotics and Vision (pp. 1–5). IEEE.