Haoxiang (Steven) Yu, 于浩翔

Research Scientist @ Meta Reality Labs

Curriculum vitae (CV)

Redwood City, CA, USA

Building the future of Multimodal & Contextual AI

I am a Research Scientist at Meta Reality Labs, where I lead the design and development of multimodal AI systems for next-generation AR/VR devices and smart wearables. My work spans on-device machine learning, computer vision, and multimodal LLMs (mLLMs), with a focus on creating scalable, efficient AI for Human-Computer Interaction (HCI).

My research centers on pervasive/ubiquitous computing and machine learning, specifically the collaboration among machine learning models (e.g., federated/collaborative learning), the interaction between models and their environment (e.g., edge AI, context-aware systems), and the synergy between models and humans (e.g., human-in-the-loop learning).

Previously, I earned my PhD in Electrical and Computer Engineering from the University of Texas at Austin under the guidance of Dr. Christine Julien, focusing on integrating machine learning with ubiquitous computing and edge AI. I have also worked at Toyota Research Lab on vehicle and mobility systems.

Outside of my research pursuits, I enjoy cooking :curry:, baking :cake:, and participating in outdoor activities :deciduous_tree:. I love bringing the world to my kitchen by experimenting with diverse dishes from around the globe.

news

Aug 15, 2025 Ph.D. secured! I graduated from UT Austin.
May 27, 2025 Joined Meta Reality Labs as a Research Scientist working on wearable contextual AI.