Authors: Cheng, Emily; Doimo, Diego; Kervadec, Corentin; Macocco, Iuri; Yu, Lei; Laio, Alessandro; Baroni, Marco
Date issued: 2025 (deposited 2025-06-11)
Citation: Cheng E, Doimo D, Kervadec C, Macocco I, Yu L, Laio A, Baroni M. Emergence of a high-dimensional abstraction phase in language transformers. In: 13th International Conference on Learning Representations (ICLR 2025); 2025 Apr 24-28; Singapore, Republic of Singapore. Appleton: ICLR; 2025. 26 p.
Handle: http://hdl.handle.net/10230/70669

Abstract: A language model (LM) is a mapping from a linguistic context to an output token. However, much remains unknown about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures.

Format: application/pdf
Language: English (eng)
Rights: © The authors. This work is licensed under CC BY 4.0.
Title: Emergence of a high-dimensional abstraction phase in language transformers
Type: conference object (info:eu-repo/semantics/conferenceObject)
Keywords: Interpretability; Intrinsic dimension; Large language models
Access: open access (info:eu-repo/semantics/openAccess)
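The abstract's central quantity is the intrinsic dimensionality (ID) of layer representations. As a hedged illustration only, not the paper's own pipeline, the sketch below implements the TwoNN estimator of Facco et al. (2017), a standard ID estimator from this literature (co-developed by Laio), and verifies it on synthetic data whose true ID is known; the synthetic setup and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017).

    X: (n_points, n_features) array, e.g. hidden states from one layer.
    For each point, the ratio mu = r2/r1 of second- to first-nearest-
    neighbour distances follows a Pareto law with exponent equal to the
    intrinsic dimension d; the MLE is d = N / sum(log mu).
    """
    dists, _ = cKDTree(X).query(X, k=3)   # dists[:, 0] is the self-distance (0)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = r2 / r1
    return len(mu) / np.sum(np.log(mu))

# Sanity check (illustrative data, not from the paper): points on a
# 5-dimensional linear manifold embedded in a 100-dimensional space.
rng = np.random.default_rng(0)
latent = rng.normal(size=(2000, 5))
proj = rng.normal(size=(5, 100))
X = latent @ proj
print(f"estimated ID: {twonn_id(X):.2f}")  # should be close to 5
```

In a study like the one described here, one would apply such an estimator to the token representations of each transformer layer and plot ID against layer depth; the abstract reports that this profile exhibits a distinct high-ID phase whose onset correlates with language-modelling quality.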