Changhun Lee
Ph.D. Student, POSTECH

I am a Ph.D. candidate in the Department of Convergence IT Engineering at POSTECH, where I am a member of the Efficient Computing Lab advised by Prof. Eunhyeok Park.
I received my B.S. in Creative IT Engineering from POSTECH.
My research focuses on improving the efficiency and performance of large language models (LLMs). Recently, I have been working on LLM quantization, efficient fine-tuning, and enhancing long-context capabilities. I am also interested in neural network quantization more broadly, Binary Neural Networks (BNNs), and efficient hardware accelerators, all of which I have actively researched.
I am currently seeking internship and job opportunities! If you’re interested, please feel free to contact me.
For more information about me, please visit the CV page.
news
Mar 02, 2025: I am pleased to share that the paper “PTQ4VM: Post-Training Quantization for Visual Mamba”, on which I am a co-first author, has been accepted as an Oral paper at WACV 2025! 😄
Jan 16, 2024: The paper “OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models” has been accepted as an Oral paper at AAAI 2024! 😄
selected publications
- arXiv: “SEAL: Scaling to Emphasize Attention for Long-Context Retrieval”, arXiv preprint, 2025
- WACV Oral: “PTQ4VM: Post-Training Quantization for Visual Mamba”, in Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025
- EMNLP Findings: “QEFT: Quantization for Efficient Fine-Tuning of LLMs”, in Findings of the Association for Computational Linguistics: EMNLP 2024, 2024
- AAAI Oral: “OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models”, in Proceedings of the AAAI Conference on Artificial Intelligence, 2024
- ICCV: “INSTA-BNN: Binary Neural Network with Instance-Aware Threshold”, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023