Quang-Huy Nguyen
PhD Student | The Ohio State University
Reliable Machine Learning under Imperfect Data

About Me

I am a second-year Ph.D. student in Computer Science at The Ohio State University, advised by Prof. Wei-Lun Chao. My research focuses on reliable machine learning under imperfect data and distribution shift.

I received my bachelor’s degree in Computer Engineering from Vietnam National University Ho Chi Minh City in 2020. From 2022 to 2024, I worked as a research assistant at the VinUni–Illinois Smart Health Center (VISHC) at VinUniversity and as an AI research resident at the FPT Software AI Residency, mentored by Prof. Dung Le.

Email: nguyen.2959@osu.edu


I am currently seeking a Summer 2026 research internship.
Recommendations or referrals are greatly appreciated.

Research Interests

My research focuses on three directions: (1) learning from imperfect data (e.g., limited, noisy, long-tail, or imbalanced data), (2) quantifying uncertainty and detecting the unknown, and (3) adapting models to novel distributions and environments, with applications in medical imaging and animal behavior analysis.

More details about my research can be found here.

Publications and Preprints

Detecting Out-of-Distribution Objects through Class-Conditioned Inpainting
Quang-Huy Nguyen*, Jin Zhou*, Zhenzhen Liu*, Huyen Bui, Kilian Q. Weinberger, Wei-Lun Chao, Dung D. Le
WACV, 2026

We address OOD object detection by leveraging the inconsistency between generative and discriminative model outputs. We employ an off-the-shelf generative model as an auxiliary to the object detector and introduce a triplet similarity metric that captures both semantic and visual differences, enabling effective OOD object detection in a zero-shot manner.


Revisiting Semi-Supervised Learning in the Era of Foundation Models
Zheda Mai*, Ping Zhang*, Quang-Huy Nguyen, Wei-Lun Chao
NeurIPS, 2025

We present a comprehensive study on Semi-Supervised Learning (SSL) using Vision Foundation Models (VFMs) and propose a simple yet effective baseline that leverages diverse predictions from multiple Parameter-Efficient Fine-Tuning (PEFT) strategies to enhance SSL performance.


Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition
Zheda Mai, Ping Zhang, Cheng-Hao Tu, Hong-You Chen, Quang-Huy Nguyen, Li Zhang, Wei-Lun Chao
CVPR, 2025 (Highlight, top 2.98%)

We present a unified empirical study of Parameter-Efficient Fine-Tuning (PEFT) methods in visual recognition, offering complementary perspectives to deeply understand their behaviors under different regimes (low-shot, many-shot, domain shift), and highlight their complementary predictions and robustness trade-offs.


Improving Pareto Set Learning for Expensive Multi-objective Optimization via Stein Variational Hypernetworks
Minh-Duc Nguyen, Phuong Mai Dinh, Quang-Huy Nguyen, Long P. Hoang, Dung D. Le
AAAI, 2025

We investigate Expensive Multi-Objective Optimization by introducing the Stein Variational Hypernetwork for Pareto Set Learning, which alleviates fragmented and uncertain regions in surrogate models while preserving the diversity of learned solutions, demonstrating strong performance on expensive multi-objective optimization problems.


Controllable Expensive Multi-objective Learning with Warm-starting Bayesian Optimization
Quang-Huy Nguyen*, Long P. Hoang*, Hoang V. Vu, Dung D. Le
Preprint, 2024

We explore Multi-Objective Black-Box Optimization through Pareto Front Learning, aligning trade-off preferences with their corresponding optimal solutions across conflicting objectives. To achieve this, we warm-start the Gaussian Process to obtain an accurate initial approximation of the Pareto front, and reinitialize the Pareto Set Model during optimization steps to stabilize the learning process.


Enhancing Few-shot Image Classification with Cosine Transformer
Quang-Huy Nguyen, Cuong Q. Nguyen, Dung D. Le, Hieu H. Pham
IEEE Access, 2023

We explore few-shot image classification by proposing a new cross-attention mechanism based on cosine similarity, without softmax, which further emphasizes the correlation between labeled support and unlabeled query representations, thus enhancing ViT-based few-shot algorithms across various settings and scenarios compared to the conventional attention mechanism.

News