Wooseok Seo
Hi! I am a first-year PhD student at MIRLAB, part of the School of Computing at Yonsei University, advised by Prof. Youngjae Yu.
My research interest lies in building trustworthy and interpretable AI agents. Recently, I have been particularly interested in:
- Evaluating and Mitigating Hallucinations of Language Models: Is hallucination inevitable for LLMs? How can we develop truthful models and correctly evaluate them?
- Interpreting and Monitoring Model Reasoning: Are language models producing faithful reasoning? If so, how can we leverage this to better understand and improve them? If not, how should we build more faithful models?
I am also interested in leveraging models to evaluate or improve other models, and in using them to augment human capabilities. Please reach out via email to chat about research! 🤗
CV / Email / GitHub / Google Scholar / LinkedIn / Twitter
2025.07: One paper on studying fact verifiers is accepted at COLM 2025!
2025.06: One paper on video diffusion distillation via preference learning is accepted at ICCV 2025!
Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers
Wooseok Seo*, Seungju Han*, Jaehun Jung, Benjamin Newman, Seungwon Lim, Seungbeen Lee, Ximing Lu, Yejin Choi, Youngjae Yu
COLM, 2025
arxiv / code / bibtex
V.I.P.: Iterative Online Preference Distillation for Efficient Video Diffusion Models
Jisoo Kim, Wooseok Seo, Junwan Kim, Seungho Park, Sooyeon Park, Youngjae Yu
ICCV, 2025
arxiv
Layout-and-Retouch: A Dual-stage Framework for Improving Diversity in Personalized Image Generation
Kangyeol Kim*, Wooseok Seo*, Sehyun Nam, Bodam Kim, Suhyeon Jeong, Wonwoo Cho, Jaegul Choo, Youngjae Yu
Under Review, 2024
arxiv / bibtex