Prediction hubs are context-informed frequent tokens in LLMs

Partial item record

  • dc.contributor.author Nielsen, Beatrix MG
  • dc.contributor.author Macocco, Iuri
  • dc.contributor.author Baroni, Marco
  • dc.date.accessioned 2025-08-01T06:37:56Z
  • dc.date.available 2025-08-01T06:37:56Z
  • dc.date.issued 2025
  • dc.description.abstract Hubness, the tendency for a few points to be among the nearest neighbours of a disproportionate number of other points, commonly arises when applying standard distance measures to high-dimensional data, often negatively impacting distance-based analysis. As autoregressive large language models (LLMs) operate on high-dimensional representations, we ask whether they are also affected by hubness. We first prove that the only large-scale representation comparison operation performed by LLMs, namely that between context and unembedding vectors to determine continuation probabilities, is not characterized by the concentration of distances phenomenon that typically causes the appearance of nuisance hubness. We then empirically show that this comparison still leads to a high degree of hubness, but the hubs in this case do not constitute a disturbance. They are rather the result of context-modulated frequent tokens often appearing in the pool of likely candidates for next token prediction. However, when other distances are used to compare LLM representations, we do not have the same theoretical guarantees, and, indeed, we see nuisance hubs appear. There are two main takeaways. First, hubness, while omnipresent in high-dimensional spaces, is not a negative property that needs to be mitigated when LLMs are being used for next token prediction. Second, when comparing representations from LLMs using Euclidean or cosine distance, there is a high risk of nuisance hubs and practitioners should use mitigation techniques if relevant. (An illustrative sketch of the k-occurrence hubness diagnostic follows this record.)
  • dc.description.sponsorship We thank Santiago Acevedo, Luca Moschella, the members of the COLT group at Universitat Pompeu Fabra and the ARR reviewers for feedback and advice. Beatrix M. G. Nielsen was supported by the Danish Pioneer Centre for AI, DNRF grant number P1. Iuri Macocco and Marco Baroni received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 101019291) and from the Catalan government (AGAUR grant SGR 2021 00470). This paper reflects the authors’ view only, and the funding agencies are not responsible for any use that may be made of the information it contains.
  • dc.format.mimetype application/pdf
  • dc.identifier.citation Nielsen BMG, Macocco I, Baroni M. Prediction hubs are context-informed frequent tokens in LLMs. In: Che W, Nabende J, Shutova E, Taher Pilehvar M, editors. 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025); 2025 July 27 - August 1; Vienna, Austria. Kerrville: Association for Computational Linguistics; 2025. p. 23715-45.
  • dc.identifier.uri http://hdl.handle.net/10230/71063
  • dc.language.iso eng
  • dc.publisher ACL (Association for Computational Linguistics)
  • dc.relation.ispartof Che W, Nabende J, Shutova E, Taher Pilehvar M, editors. 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025); 2025 July 27 - August 1; Vienna, Austria. Kerrville: Association for Computational Linguistics; 2025.
  • dc.relation.projectID info:eu-repo/grantAgreement/EC/H2020/101019291
  • dc.rights © ACL, Creative Commons Attribution 4.0 License
  • dc.rights.accessRights info:eu-repo/semantics/openAccess
  • dc.rights.uri http://creativecommons.org/licenses/by/4.0/
  • dc.subject.keyword Computation and language
  • dc.subject.keyword Artificial intelligence
  • dc.title Prediction hubs are context-informed frequent tokens in LLMs
  • dc.type info:eu-repo/semantics/conferenceObject
  • dc.type.version info:eu-repo/semantics/publishedVersion
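The abstract above relies on the standard k-occurrence notion of hubness: a point is a hub if it appears in the k-nearest-neighbour lists of unusually many other points. As a rough illustration only (this is not the authors' code; the function and variable names below are assumptions made for the sketch, and only standard NumPy/SciPy calls are used), the following sketch computes the k-occurrence counts N_k under cosine distance, and their skewness, a common hubness score, for random high-dimensional data:

```python
# Minimal sketch of the k-occurrence hubness diagnostic (illustrative,
# not taken from the paper): a point is a "hub" if it appears among the
# k nearest neighbours of many other points.
import numpy as np
from scipy.stats import skew

def k_occurrence(X: np.ndarray, k: int = 10) -> np.ndarray:
    """Count, for each row of X, how often it is among the k nearest
    neighbours (cosine distance) of the other rows."""
    # Normalize rows so that cosine distance = 1 - dot product.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    dist = 1.0 - Xn @ Xn.T
    np.fill_diagonal(dist, np.inf)          # a point is not its own neighbour
    knn = np.argsort(dist, axis=1)[:, :k]   # k nearest neighbours per row
    return np.bincount(knn.ravel(), minlength=X.shape[0])

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 512))        # toy high-dimensional points
Nk = k_occurrence(X, k=10)
print("k-occurrence skewness:", skew(Nk))   # right skew signals hubness
print("largest hub appears in", Nk.max(), "neighbour lists")
```

A strongly right-skewed N_k distribution is the usual signature of hubness. The paper's claim, per the abstract, is that for the softmax comparison between context and unembedding vectors such hubs are context-informed frequent tokens rather than a nuisance, whereas Euclidean or cosine comparisons of LLM representations can produce genuine nuisance hubs that may warrant mitigation.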