Haoxiang (Steven) Yu, 于浩翔
Research Scientist @ Meta Reality Labs
Building the future of Multimodality & Contextual AI
I sit at the intersection of cutting-edge research and product engineering, building scalable AI systems that redefine Human-Computer Interaction (HCI) through deep learning and Multimodal LLMs. I don’t just theorize; I ship. In this role, I bridge the gap between concept and implementation, leading the design and development of “0 to 1” features—including some of the most critical product capabilities in next-generation AR devices.
I specialize in multimodal AI models for Augmented Reality (AR), Virtual Reality (VR), and smart wearables. My expertise spans on-device, resource-constrained machine learning, computer vision, and multimodal LLMs, with a focus on creating scalable and efficient AI systems for HCI.
My research focuses on pervasive/ubiquitous computing and machine learning — specifically, the collaboration among machine learning models, the interaction between models and their environment, and the synergy between models and humans.
Previously, I earned my PhD in Electrical and Computer Engineering from the University of Texas at Austin under the guidance of Dr. Christine Julien, where my thesis and research focused on integrating machine learning with ubiquitous computing and edge AI. I have also worked at Toyota Research Lab, where I applied my research to real-world challenges in vehicle and mobility systems.
My passion lies in bridging the gap between AI theory/research and real-world applications, developing solutions that are both technically rigorous and impactful at scale—from academic research to industry-grade systems.
Outside of my research pursuits, I enjoy cooking, baking, and participating in outdoor activities. I love bringing the world to my kitchen by experimenting with diverse dishes from around the globe.
news
| Date | News |
|---|---|
| Aug 15, 2025 | Ph.D. secured! I graduated from UT Austin. |
| May 27, 2025 | Joined Meta Reality Labs as a Research Scientist working on wearable contextual AI. |