
I am broadly interested in learning with imperfect data (e.g., limited, noisy, or imbalanced data), uncertainty estimation and adaptation (e.g., continual learning, transfer learning, out-of-distribution detection, domain adaptation), and black-box optimization (e.g., Bayesian optimization). My long-term research goal is to develop machine learning algorithms that learn continuously with minimal supervision and adapt effectively to new domains beyond the training distribution, paving the way toward efficient open-world machine learning.

In particular, I focus on addressing the limitations of imperfect data in machine learning and on applying these insights to real-world scientific problems such as medical imaging, biological quantification, and animal behavior analysis, domains that often suffer from data scarcity, noise, and imbalance. My research is therefore directed at answering the following key questions:

  1. How can models learn effectively from small and noisy datasets?
  2. How can we identify and estimate data distributions unseen during training?
  3. How can these challenges be addressed in specific applied machine learning settings?

A complementary research direction I pursue is black-box and multi-objective optimization. Viewing the optimization of unknown objective functions under various conditions (constrained evaluations, conflicting objectives, or distribution shifts) through the lens of imperfect data and domain adaptation allows us to design flexible and robust optimization algorithms. These methods have broad implications for both fundamental and applied scientific research, including experimental design, scientific discovery, and behavioral analysis.
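As a concrete illustration of this black-box setting, the sketch below shows a minimal Bayesian optimization loop: a Gaussian-process surrogate with an RBF kernel and an expected-improvement acquisition function, maximizing a toy 1-D objective. All function names, hyperparameters, and the objective here are hypothetical illustrations, not any specific method from my work.

```python
# Minimal Bayesian optimization sketch (numpy only; all names hypothetical).
import numpy as np
from math import erf, exp, pi, sqrt

def rbf_kernel(a, b, length_scale=0.5):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-5):
    # Standard GP regression posterior mean and std at the query points.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # diag of the RBF prior is 1
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f - best, 0)] under the GP posterior.
    ei = np.zeros_like(mu)
    for i in range(len(mu)):
        if sigma[i] > 0:
            z = (mu[i] - best) / sigma[i]
            cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
            pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
            ei[i] = (mu[i] - best) * cdf + sigma[i] * pdf
    return ei

def bayes_opt(f, bounds=(0.0, 4.0), n_init=3, n_iter=10, seed=0):
    # Sequentially evaluate f where expected improvement is largest.
    rng = np.random.default_rng(seed)
    x = rng.uniform(bounds[0], bounds[1], size=n_init)
    y = np.array([f(v) for v in x])
    grid = np.linspace(bounds[0], bounds[1], 200)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(x, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    return x[np.argmax(y)], y.max()

# Toy objective with its maximum at x = 2.
best_x, best_y = bayes_opt(lambda v: -(v - 2.0) ** 2)
```

In practice the acquisition function would be optimized continuously rather than over a fixed grid, and kernel hyperparameters would be fit by marginal likelihood, but the loop above captures the core surrogate-then-acquire structure.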