Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Portfolio

Publications

Can Language Models Laugh at YouTube Short-form Videos?

Published in EMNLP, 2023

A video humor explanation benchmark, built with a multimodal filtering pipeline, for evaluating LLMs' understanding of complex multimodal tasks.

Recommended citation: Dayoon Ko, Sangho Lee, Gunhee Kim. (2023). "Can Language Models Laugh at YouTube Short-form Videos?" EMNLP 2023.
Download Paper

GrowOVER: How Can LLMs Adapt to Growing Real-World Knowledge?

Published in ACL, 2024

Continuously updated QA and dialogue benchmarks that assess whether LLMs can handle evolving knowledge, enabling RAG systems to adapt without retraining.

Recommended citation: Dayoon Ko, Jinyoung Kim, Hahyeon Choi, Gunhee Kim. (2024). "GrowOVER: How Can LLMs Adapt to Growing Real-World Knowledge?" ACL 2024.
Download Paper

DynamicER: Resolving Emerging Mentions to Dynamic Entities for RAG

Published in EMNLP, 2024

This work addresses challenges in resolving temporally evolving mentions to entities, improving retrieval and enhancing RAG accuracy in dynamic environments.

Recommended citation: Jinyoung Kim, Dayoon Ko, Gunhee Kim. (2024). "DynamicER: Resolving Emerging Mentions to Dynamic Entities for RAG." EMNLP 2024.
Download Paper

When Should Dense Retrievers Be Updated in Evolving Corpora? Detecting Out-of-Distribution Corpora Using GradNormIR

Published in Findings of ACL, 2025

A method to detect when dense retrievers need updating in evolving corpora using gradient norms, enabling efficient adaptation to distribution shifts.

Recommended citation: Dayoon Ko, Jinyoung Kim, Sohyeon Kim, Jinhyuk Kim, Jaehoon Lee, Seonghak Song, Minyoung Lee, Gunhee Kim. (2025). "When Should Dense Retrievers Be Updated in Evolving Corpora? Detecting Out-of-Distribution Corpora Using GradNormIR." Findings of ACL 2025.
Download Paper

Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of Pre-trained Multimodal Representation via Text Updates

Published in ACL, 2025

MAC, a benchmark for evaluating the adversarial compositionality of pre-trained multimodal representations, revealing the vulnerability of vision-language models such as CLIP to text-based adversarial attacks.

Recommended citation: Jaewoo Ahn, Heeseung Yun, Dayoon Ko, Gunhee Kim. (2025). "Can LLMs Deceive CLIP? Benchmarking Adversarial Compositionality of Pre-trained Multimodal Representation via Text Updates." ACL 2025.
Download Paper

Talks

Teaching