McGill University | Mila - Quebec AI Institute

Dun Yuan

I work on large language models for technical domains, with a focus on post-training and knowledge-grounded generation, applied to telecommunications, networked systems, and distributed AI.

Portrait of Dun Yuan
Ph.D. Candidate, School of Computer Science, McGill University
Affiliation
Mila - Quebec AI Institute
Current focus
Stable LLM fine-tuning and knowledge-grounded telecom systems
Also
Student Researcher, Samsung Research America

About

Language models, networks, and trustworthy systems.

I am a Ph.D. candidate in Computer Science at McGill University, supervised by Prof. Xue Liu, and affiliated with Mila - Quebec AI Institute. I study how language models can be trained, grounded, and deployed in technical environments where accuracy and reliability matter.

Recent projects include contraction-aware reinforcement learning for language model fine-tuning, knowledge-graph-enhanced retrieval for telecommunications, semantic alignment in agent communication protocols, and optimization for wireless and data-center systems.

Research Areas

01

LLMs and post-training

RLHF, PPO variants, alignment, test-time reasoning, adaptive inference, long-context modeling, and domain-specific fine-tuning.

02

Knowledge-centric AI

Retrieval-augmented generation, knowledge graphs, provenance-aware question answering, information extraction, and grounded generation.

03

AI for telecom and systems

LLMs for wireless networks, telecom question answering, digital twins, multi-agent reinforcement learning, and edge intelligence.

04

Trustworthy distributed AI

Web3, decentralized systems, cryptographic protocols, secure data infrastructures, and privacy-aware distributed computing.

Featured Project

Beyond Message Passing

Project page for my survey on agent communication protocols, organized around communication, syntactic, and semantic layers.

Open Survey

Recent Publications

2026

Escaping Policy Contraction: Contraction-Aware PPO (CaPPO) for Stable Language Model Fine-Tuning

Dun Yuan, Di Wu, Xue Liu. ICLR 2026 Poster.

OpenReview
2026

Beyond Message Passing: A Semantic View of Agent Communication Protocols

Dun Yuan, Fuyuan Lyu, Ye Yuan, Weixu Zhang, et al. arXiv:2604.02369.

Project page
2025

Enhancing Large Language Models for Telecommunications using Knowledge Graphs and Retrieval-Augmented Generation

Dun Yuan, Hao Zhou, Di Wu, Xue Liu, Hao Chen, Yan Xin, Jianzhong Charlie Zhang. ICC Workshops 2025.

DBLP

For the complete publication list, see Google Scholar.