People

Edward Hu

1995–, Computer scientist

Also known as: Edward J. Hu

Edward J. Hu is a Canadian computer scientist. His 2021 Microsoft Research paper LoRA: Low-Rank Adaptation of Large Language Models (with Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen) introduced LoRA, a parameter-efficient fine-tuning method that freezes a pre-trained model's weights and injects trainable low-rank decomposition matrices alongside them. By cutting the number of trainable parameters by orders of magnitude, LoRA makes it feasible to fine-tune multi-billion-parameter LLMs on consumer GPUs.
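The core idea can be sketched in a few lines: a frozen weight matrix W is augmented with a trainable update B·A of rank r, with B initialised to zero so the adapted model starts out identical to the original. This is a minimal NumPy illustration, not the paper's implementation; the dimensions, scaling, and initialisation details here are simplified assumptions.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA sketch: frozen weight W plus trainable low-rank update B @ A."""

    def __init__(self, d_out, d_in, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))    # pre-trained weight, frozen
        self.A = rng.standard_normal((r, d_in)) * 0.01 # trainable, small random init
        self.B = np.zeros((d_out, r))                  # trainable, zero init
        self.scale = alpha / r                         # scaling factor alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x ; only A and B would receive gradients
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

    def merged_weight(self):
        # After training, the update folds into W, so inference has no extra cost
        return self.W + self.scale * (self.B @ self.A)
```

Because B starts at zero, training begins from the unmodified pre-trained model, and only r·(d_in + d_out) parameters are trained per layer instead of d_in·d_out.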

LoRA is now the dominant fine-tuning method for open-source LLM adaptation. The QLoRA extension (Dettmers et al., 2023) combines LoRA with 4-bit quantisation, enabling fine-tuning of 65B-parameter models on a single 48GB GPU. The Hugging Face PEFT library makes LoRA-style adapters trivial to deploy in practice.
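A back-of-the-envelope calculation shows why 4-bit quantisation is what brings a 65B-parameter model within reach of a single 48GB GPU. This sketch counts weight storage only, ignoring activations, adapter weights, and optimiser state (which QLoRA addresses separately, e.g. with paged optimisers).

```python
# Rough weight-memory estimate for a 65B-parameter model.
params = 65_000_000_000
gb = 1024 ** 3

fp16_gb = params * 2 / gb    # 16-bit weights: 2 bytes per parameter
int4_gb = params * 0.5 / gb  # 4-bit quantised weights: 0.5 bytes per parameter

print(f"fp16: {fp16_gb:.1f} GB, 4-bit: {int4_gb:.1f} GB")
```

At 16-bit precision the weights alone exceed 48GB several times over, while at 4 bits they fit with headroom for the LoRA adapters and activations.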

Related people: Tri Dao
