Hao Tang



Professor
Computer Information Systems

Email: htang@bmcc.cuny.edu

Office: F-930N

Office Hours:

Phone: +1 (212) 220-1479

Dr. Tang is interested in researching augmented and accessible learning for people with special needs, especially people who are blind, have low vision, or have Autism Spectrum Disorders. His lab is working on cutting-edge research in virtual reality, augmented reality, artificial intelligence, and geospatial information science.

Dr. Tang has encouraged students to participate in his research projects and has guided them in presenting their findings on artificial intelligence and assistive technology; many students have continued working on research with him after transferring to other colleges. Some have become software developers at top tech companies, including Apple, Amazon, and Microsoft; at fintech companies such as JPMorgan Chase; and at federal agencies such as the Environmental Protection Agency and the Department of Homeland Security.

Dr. Tang is also a member of the CUNY Computational Vision and Convergence Laboratory (http://ccvcl.org). He is looking for research assistants with scholarships: prospective students (undergraduate, master's, and doctoral) will work on cutting-edge research. Please send your resume and a brief introduction of your research experience and interests to htang@bmcc.cuny.edu.

Expertise

Virtual and augmented reality, crowdsourcing, artificial intelligence, and mobile computer vision, and their applications in security, surveillance, assistive technology, and education.

Degrees

Ph.D. in Computer Science

Courses Taught

Research and Projects

An NSF REU research assistant position is available; please contact Dr. Tang if you are interested.

Dr. Tang’s most recent research projects include:

  1. Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane
  2. Building an Accessible Storefront Open Source Map Using Crowdsourcing and Deep Learning
  3. Sidewalk Material Classification on Multimodal Data using Deep Learning
  4. Integrating AR and VR for Mobile Remote Collaboration
  5. Assistive Navigation using Mobile App
  6. Virtual Reality Mobile Physics Lab App

Dr. Tang’s funded research projects include:

  1. National Science Foundation Research Grant (#2131186), “CISE-MSI, Training a Virtual Guide Dog for Visually Impaired People to Learn Safe Routes Using Crowdsourcing Multimodal Data”, PI, 2021-2024.
  2. CUNY C.C. Research Grant – track 2, Mentored Undergraduate Research, “Exploring Virtual Environments by Visually Impaired using a Mixed Reality Cane without Visual Feedback”, Single-PI, 1/2021-12/2021
  3. National Science Foundation Research Grant, “PFI-RP: Smart and Accessible Transportation Hub for Assistive Navigation and Facility Management”, BMCC PI, collaboration with faculty at CCNY, Rutgers University and Lighthouse Guild, 2018-2021.
  4. National Science Foundation Research Grant, “SCC-Planning: Integrative Research and Community Engagement for Smart and Accessible Transportation Hub (SAT-Hub)”, Senior Personnel, with faculty in CCNY and Rutgers University, 2017-2018.
  5. Department of Homeland Security Research Grant, “Verification of Crowd Behavior Simulation by Video Analysis”, Single-PI, 3/2016-12/2017
  6. Faculty Development Grant, “Accurate Indoor 3D Model Generation by Integrating Architectural Floor Plan and RGBD Images”, PI, 4/2016-4/2017
  7. PSC-CUNY Research Awards Enhanced, Single-PI, 2022
  8. PSC-CUNY Research Awards Track B, Single-PI, 2013, 2014, 2015, 2017, 2018, 2020
  9. CUNY C.C. Research Grant – track 2, Mentored Undergraduate Research, “Mobile Indoor Navigation for the Blind”, Single-PI, 9/2016-9/2017
  10. CUNY Innovations in Language Education (ILE) Grants, “Microlearning Based Mobile Game for Mandarin Learning and Assessment”, Co-PI, 2016-2017

Publications

Research Book Chapters (2012-present):

  1. F. Hu, H. Tang, T. Alexander, Z. Zhu, “Computer Vision Techniques to Assist Visually Impaired People to Navigate in an Indoor Environment”, Computer Vision for Assistive Healthcare, Elsevier
  2. Edgardo Molina, Wai Khoo, Hao Tang, and Zhigang Zhu, “Registration of Video Images”, Theory and Applications of Image Registration, http://www.wiley.com/WileyCDA/WileyTitle/productCd-1119171717.html, Wiley Press

Peer-Reviewed Journal Papers (2012-present):

  1. X. Wang, J. Liu, H. Tang, Z. Zhu, and W. Seiple. An AI-enabled Annotation Platform for Storefront Accessibility and Localization, Journal on Technology and Persons with Disabilities, 2023
  2. J. Liu, H. Tang, W. Seiple, Z. Zhu. Annotating Storefront Accessibility Data Using Crowdsourcing, Journal on Technology and Persons with Disabilities, v. 10, 2022, Project website
  3. G. Olmschenk, X. Wang, H. Tang and Z. Zhu, Impact of Labeling Schemes on Dense Crowd Counting Using Convolutional Neural Networks with Multiscale Upsampling. International Journal of Pattern Recognition and Artificial Intelligence, Special Issue for VISAPP, Vol. 35, No. 13, October 2021
  4. Zhigang Zhu, Jin Chen, Lei Zhang, Yaohua Chang, Tyler Franklin, Hao Tang, Arber Ruci, “iASSIST: An iPhone-Based Multimedia Information System for Indoor Assistive Navigation”, accepted by International Journal of Multimedia Data Engineering and Management, 2020.
  5. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Generalizing semi-supervised generative adversarial networks to regression using feature contrasting”, Computer Vision and Image Understanding, V. 186, September, 2019
  6. Feng Hu, Zhigang Zhu, Jeury Mejia, Hao Tang and Jianting Zhang, “Real-time indoor assistive localization with mobile omnidirectional vision and cloud GPU acceleration”, ASM EE Journal, V1 (1), Dec. 2017
  7. Hao Tang, Tayo Amuneke, Juan Lantigua, Huang Zou, William Seiple and Zhigang Zhu. “Indoor Map Learning for the Visually Impaired”, Journal on Technology and Persons with Disabilities, V5, June 2017.
  8. Hao Tang, Norbu Tsering, Feng Hu, and Zhigang Zhu. “Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan”, Journal on Technology and Persons with Disabilities, V4, Oct. 2016
  9. Feng Hu, Norbu Tsering, Hao Tang, and Zhigang Zhu. “Indoor Localization for the Visually Impaired Using a 3D Sensor”. Journal on Technology and Persons with Disabilities, V4, Oct. 2016
  10. Maggie Vincent, Hao Tang, Wai Khoo, Zhigang Zhu and Tony Ro, “Shape Discrimination using the Tongue: Feasibility of a Visual to Tongue Stimulation Substitution Device”, Journal of Multisensory Research, 2016 29, 773-798.
  11. Hao Tang, and Zhigang Zhu, “Content-Based 3D Mosaics for Representing Videos of Dynamic Urban Scenes”, IEEE Transactions on Circuits and Systems for Video Technology, 22(2), 2012, 295-308

Peer-Reviewed Conference Papers (2012-present):

  1. Xuan Wang, Jiajun Chen, Hao Tang and Zhigang Zhu. “MultiCLU: Multi-stage Context Learning and Utilization for Storefront Accessibility Detection and Evaluation”, ACM International Conference on Multimedia Retrieval, Newark, NJ, USA, June 27-30, 2022, pages 304–312.
  2. Lei Zhang, Kelvin Wu, Bin Yang, Hao Tang, and Zhigang Zhu. “Exploring Virtual Environments by Visually Impaired Using a Mixed Reality Cane Without Visual Feedback”, ISMAR 2020 – International Symposium on Mixed and Augmented Reality, November 9-13, 2020. Video Demo 
  3. Yaohua Chang, Jin Chen, Tyler Franklin, Lei Zhang, Arber Ruci, Hao Tang and Zhigang Zhu. “Multimodal Information Integration for Indoor Navigation Using a Smartphone”. IRI2020 -The 21st IEEE International Conference on Information Reuse and Integration for Data Science, August 11-13, 2020 (Full Regular Paper for Oral Presentation, 28% acceptance rate)
  4. Zhigang Zhu, Jie Gong, Cecilia Feeley, Huy Vo, Hao Tang, Arber Ruci, William Seiple and Zhengyi Wu. “SAT-Hub: Smart and Accessible Transportation Hub for Assistive Navigation and Facility Management”. Harvard CRCS Workshop on AI for Social Good, July 20-21, 2020
  5. Greg Olmschenk, Hao Tang, and Zhigang Zhu. “Improving Dense Crowd Counting Convolutional Neural Networks using Inverse k-Nearest Neighbor Maps and Multiscale Upsampling”. VISAPP 2020, the 15th International Conference on Computer Vision Theory and Applications.
  6. Hao Tang, Xuan Wang, Greg Olmschenk, Cecilia Feeley, Zhigang Zhu. “Assistive Navigation and Interaction with Mobile & VR Apps for People with ASD”. The 35th CSUN Assistive Technology Conference, March 9-13, 2020.
  7. Huang Zou, Hao Tang, “Remote Collaboration in a Complex Environment”, Proceedings of the International Conference on Artificial Intelligence and Computer Vision, March 2020
  8. Jeremy Venerella, Lakpa Sherpa, Tyler Franklin, Hao Tang, Zhigang Zhu. “Integrating AR and VR for Mobile Remote Collaboration”, In: Proceedings of the International Symposium on Mixed and Augmented Reality, Oct 2019. Video Demo
  9. Greg Olmschenk, Hao Tang, Jin Chen and Zhigang Zhu, “Dense Crowd Counting Convolutional Neural Networks with Minimal Data using Semi-Supervised Dual-Goal Generative Adversarial Networks”, CVPR Workshop on Weakly Supervised Learning for Real-World Computer Vision Applications, Long Beach, CA, 2019.
  10. Jeremy Venerella, Lakpa Sherpa, Hao Tang, Zhigang Zhu, “A Lightweight Mobile Remote Collaboration Using Mixed Reality”, CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Long Beach, CA, 2019.
  11. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Crowd Counting with Minimal Data Using Generative Adversarial Networks for Multiple Target Regression”, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 1151-1159, Lake Tahoe, NV, 2018
  12. Jie Gong, Cecilia Feeley, Hao Tang, Greg Olmschenk, Vishnu Nair, Zhixiang Zhou, Yi Yu, Ken Yamamoto and Zhigang Zhu. “Building Smart Transportation Hubs with Internet of Things to Improve Services to People with Special Needs”, Transportation Research Board (TRB) 96th Annual Meeting, January 8-12, 2017
  13. Greg Olmschenk, Hao Tang, and Zhigang Zhu, “Pitch and Roll Camera Orientation from a Single 2D Image Using Convolutional Neural Networks”. Proceedings of the 14th Conference on Computer and Robot Vision, Edmonton, Alberta, May 17-19, 2017
  14. Feng Hu, Norbu Tsering, Hao Tang, Zhigang Zhu, “RGB-D Sensor Based Indoor Localization for the Visually Impaired”, 31st Annual International Technology and Persons with Disabilities Conference, March 21-26, 2016
  15. Hao Tang, Norbu Tsering and Feng Hu, “Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan”, 31st Annual International Technology and Persons with Disabilities Conference, March 21-26, 2016
  16. Zhigang Zhu, Wai L. Khoo, Camille Santistevan, Yuying Gosser, Edgardo Molina, Hao Tang, Tony Ro and Yingli Tian, “EFRI-REM at CCNY: Research Experience and Mentoring for Underrepresented Groups in Cross-disciplinary Research on Assistive Technology”. The 6th IEEE Integrated STEM Education Conference (ISEC), March 6, 2016, Princeton, New Jersey (one of the 5 H. Robert Schroeder Best Paper Award Nominees among 50 oral papers).
  17. Hao Tang, Tony Ro, Zhigang Zhu. “Smart Sampling and Transducing 3D Scenes for the Visually Impaired”. IEEE International Conference on Multimedia and Expo (ICME), 2013 (oral). The paper was a Best Paper Award Nominee (nomination rate: 2.4%).
  18. Hao Tang, Maggie Vincent, Tony Ro, Zhigang Zhu. “From RGB-D to Low-Resolution Tactile: Smart Sampling and Early Testing”. IEEE Workshop on Multimodal and Alternative Perception for Visually Impaired People, ICME 2013

Honors, Awards and Affiliations

  1. Best Paper Award Nominee, the 15th International Conference on Computer Vision Theory and Applications, Malta, February 2020.
  2. “CUNY-American Dream Machine”, featured in the New York Post and on the MTA, 2016-2017
  3. DHS S&T Research Grant, U.S. Department of Homeland Security, 2016
  4. Best Paper Award Nominee, The 6th IEEE Integrated STEM Education Conference (ISEC), March 6, 2016, Princeton, New Jersey.
  5. Summer Research Team Award, U.S. Department of Homeland Security, 2015
  6. Best Paper Award Finalist, IEEE International Conference on Multimedia and Expo (ICME), 2013

Additional Information

Former Research Assistants:

  1. Benjamin Rosado, Cybersecurity using Virtual Reality, 2021-2022, Now a Data Analyst
  2. Jeremy Venerella, Remote Collaboration via Mixed Reality, NSF-PFI and PSC-CUNY, 2018-2021, Now a Software Engineer at Capital One Bank.
  3. Erii Sugimoto, Indoor Navigation for Visually Impaired, 2016-2018, CUNY Collaborative Research Grant and BFF, Now a Software Engineer at Apple Inc.
  4. Ben Adame, Crowd Counting from Video Footage, 2018, DHS S&T Research Grant, Now a Data Analyst
  5. Sihan Lin, Indoor Navigation for Visually Impaired, 2016-2018, MEISP, Now a Software Engineer at JPMorgan Chase.
  6. Tayo Amuneke, Pre-journey Mobile App for the Visually Impaired, 2015-2017, LSAMP, Now a Software Engineer at Microsoft Inc.
  7. Sanou Wourohire Laurent, Language-based Learning Mobile App, 2015-2016, LSAMP, Now a Software Engineer at JPMorgan Chase.
  8. Juan Lantigua, Pre-journey Mobile App for the Visually Impaired, 2015-2017, MEISP, Now a Software Engineer at JPMorgan Chase.
  9. Norbu Tsering, Automatic Pre-Journey Indoor Map Generation Using AutoCAD Floor Plan, 2015-2017, NSF-REM, Now a Software Development Engineer at Amazon Web Services
  10. Huang Zou, Remote Collaboration in a Complex Environment, 2014-2017, CRSP, Now a Software Development Engineer at Velan Studios, Inc (Video Game Development)
  11. Jeury Mejia, Real-time indoor assistive localization, 2014-2016, NSF-REM, Now a Software Engineer at Jopwell
  12. Jiayi An, Accessible Game for Blind People, 2015-2016, NSF-REM, Now a Software Engineer at US Environmental Protection Agency
  13. Olesya Medvedeva, Machine Learning Algorithm for Speaker Recognition and Emotion Detection, 2014-2016, Transfer to Columbia U. Now a Software Engineer at MLB Advanced Media, L.P
  14. Rodny Perez, Detect MTA Door on a Mobile Phone, 2013-2014, LSAMP, Now a Software Engineer at Amazon.

Acknowledgments:

  • DataCamp, a learning platform for data science