News
2026.03
I will be joining Microsoft as a Research Intern, working on deep research agents!
2025.10
I am attending COLM 2025! I will be in Montreal from 10/5 to 10/11, so please reach out to have a chat ☕
2025.09
I will be joining LG AI Research as a Research Intern, working on foundational language models!
2025.07
One paper on fact verifiers was accepted at COLM 2025!
2025.06
One paper on video diffusion distillation via preference learning was accepted at ICCV 2025!
Research Experience
Microsoft, Copilot Team
Research Intern
Redmond, WA (Remote)
Mar 2026 ~ Jun 2026 (Exp.)
LG AI Research, EXAONE Lab
Research Intern
Seoul, South Korea
Sep 2025 ~ Feb 2026
Research
K-EXAONE Technical Report
LG AI Research
Technical Report, 2026
We present K-EXAONE-236B-A23B, the best-performing model developed in Korea. I contributed as a member of the post-training team, working on synthetic data for reasoning.
arxiv
/
code
Verifying the Verifiers: Unveiling Pitfalls and Potentials in Fact Verifiers
Wooseok Seo*, Seungju Han*, Jaehun Jung, Benjamin Newman, Seungwon Lim, Seungbeen Lee, Ximing Lu, Yejin Choi, Youngjae Yu
COLM, 2025
We systematically detect ambiguous and mislabeled examples in fact-verification benchmarks and introduce Clearfacts and Grayfacts, along with a SOTA 8B fact verifier and insights on building better fact verifiers.
arxiv
/
code
/
bibtex
V.I.P. : Iterative Online Preference Distillation for Efficient Video Diffusion Models
Jisoo Kim, Wooseok Seo, Junwan Kim, Seungho Park, Sooyeon Park, Youngjae Yu
ICCV, 2025
We integrate DPO and SFT losses for distillation to build an efficient video diffusion model with an automatic pair-curation pipeline, outperforming the teacher using only synthetic data generated by the teacher itself.
arxiv
/
bibtex
Layout-and-Retouch: A Dual-stage Framework for Improving Diversity in Personalized Image Generation
Kangyeol Kim*, Wooseok Seo*, Sehyun Nam, Bodam Kim, Suhyeon Jeong, Wonwoo Cho, Jaegul Choo, Youngjae Yu
Under Review, 2024
We propose a two-stage approach for personalized T2I generation: first drawing the context with step-blended denoising, then enhancing it with multi-source attention swapping.
arxiv
/
bibtex
Academic Services
Reviewer