Psycholinguistic probing of language models' internal layers

Description

  • Abstract

    This study investigates how transformer-based large language models (LLMs) resolve subject–verb agreement across syntactic structures of varying complexity and attractor conditions. We hypothesized that LLMs would succeed in simple configurations but struggle under deeper embedding and number mismatches. Using a controlled dataset adapted from psycholinguistic research, we analyze model behavior across six sentence structures, two attractor conditions (mismatch vs. no mismatch), and four lexical variants. With the Pythia 6.9B model, we apply three evaluation metrics—accuracy, prediction depth, and the Tuned Lens interpretability method—to track how agreement resolution evolves across layers. Results confirm our hypothesis: the model performs reliably in simple structures but fails in deeply embedded object-relative clauses. Prediction depth shows early resolution in simple cases and delayed or failed resolution in complex ones. These findings clarify LLM limitations in syntactic processing and highlight the importance of linguistically informed evaluation methods for understanding model behavior across structural configurations.
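    As a simplified illustration of the prediction-depth idea described above, the sketch below computes the earliest layer at which the correct verb form stably outranks its number-mismatched competitor. The function name, the toy per-layer scores, and the two-word vocabulary are our own illustrative assumptions, not the thesis's actual code or data:

    ```python
    def prediction_depth(layer_logits, correct_id, incorrect_id):
        """Return the earliest layer index at which the correct verb form
        outranks the number-mismatched attractor form and stays ahead
        through the final layer; return None if never resolved."""
        ahead = [layer[correct_id] > layer[incorrect_id] for layer in layer_logits]
        for i in range(len(ahead)):
            if all(ahead[i:]):  # resolved at layer i and stable thereafter
                return i
        return None

    # Toy per-layer scores over a 2-word vocabulary: index 0 = "is", 1 = "are".
    # Hypothetical simple structure: "is" wins from layer 1 onward.
    simple_case = [[0.1, 0.3], [0.9, 0.2], [1.5, 0.1]]
    # Hypothetical complex structure with an attractor: never resolved.
    complex_case = [[0.2, 0.5], [0.3, 0.8], [0.1, 0.9]]

    print(prediction_depth(simple_case, 0, 1))   # resolved at layer 1
    print(prediction_depth(complex_case, 0, 1))  # None: agreement fails
    ```

    In the study itself, per-layer predictions come from the Tuned Lens applied to Pythia 6.9B's intermediate representations rather than from toy scores; the metric's logic, however, is the same.
    
    
    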
  • Description

    Master's thesis in Theoretical and Applied Linguistics
    Supervisors: Dr. Iria de Dios Flores and Dr. Corentin Kervadec