References

QLoRA: Efficient Finetuning of Quantized LLMs

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, & Luke Zettlemoyer (2023)

arXiv.

DOI: https://doi.org/10.48550/arXiv.2305.14314

Abstract. QLoRA combines 4-bit quantisation of a frozen base model with trainable LoRA adapters, making it possible to fine-tune a 65-billion-parameter language model on a single 48 GB GPU while preserving full 16-bit fine-tuning performance.

Tags: efficiency fine-tuning quantisation lora
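
Conceptually, the recipe is: quantise the pretrained weights to 4-bit NormalFloat (NF4), freeze them, and train small low-rank adapter matrices in higher precision. Below is a minimal sketch of that setup using the Hugging Face transformers, peft, and bitsandbytes libraries; the model checkpoint, LoRA rank, and target modules are illustrative assumptions, not values taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantisation of the frozen base model (NF4 plus double
# quantisation, the two ingredients described in the paper).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # 4-bit NormalFloat data type
    bnb_4bit_use_double_quant=True,        # quantise the quantisation constants too
    bnb_4bit_compute_dtype=torch.bfloat16, # de-quantised compute precision
)

# Checkpoint name is an illustrative placeholder, not from the paper.
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank LoRA adapters on top of the frozen 4-bit weights.
# Rank and target modules here are example values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Only the adapter parameters receive gradients; the 4-bit base weights stay frozen, which is what keeps the memory footprint low enough for a single GPU.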
