References

RoFormer: Enhanced Transformer with Rotary Position Embedding

Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, & Yunfeng Liu (2021)

arXiv preprint arXiv:2104.09864.

DOI: https://doi.org/10.48550/arXiv.2104.09864

Abstract. Introduces Rotary Position Embedding (RoPE), which encodes positional information by rotating query and key vectors by angles proportional to their token positions. Because the attention dot product between two rotated vectors depends only on the difference of their rotation angles, RoPE captures relative position directly; it has been adopted by LLaMA, PaLM, and most modern large language models for this clean handling of relative positions. A minimal code sketch follows the entry below.

Tags: transformer positional-encoding rope
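
A minimal sketch of the rotation RoPE applies, assuming the standard formulation from the paper (per-pair frequencies theta_i = 10000^(-2i/d)); the function name rotary_embed and the NumPy phrasing are illustrative, not taken from the authors' code:

```python
# Illustrative RoPE sketch; names and structure are assumptions, not the paper's code.
import numpy as np

def rotary_embed(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate pairs of features in x by position-dependent angles.

    x: (seq_len, dim) query or key vectors; dim must be even.
    positions: (seq_len,) integer token positions.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "feature dimension must be even"
    # Per-pair rotation frequencies, theta_i = base^(-2i/dim), as in the paper.
    freqs = base ** (-np.arange(0, dim, 2) / dim)      # (dim/2,)
    angles = positions[:, None] * freqs[None, :]       # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # even/odd feature pairs
    # Apply a 2-D rotation to each (x1, x2) pair.
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Because each pair at position m is rotated by m * theta_i, the dot product
# q_m . k_n depends only on the relative offset m - n.
q = rotary_embed(np.random.randn(8, 64), np.arange(8))
k = rotary_embed(np.random.randn(8, 64), np.arange(8))
scores = q @ k.T  # attention logits now encode relative position
```

The relative-position property can be checked numerically: shifting both positions arrays by the same constant leaves scores unchanged up to floating-point error.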
