Exploring different facets of human trust in LLMs

Day - Time: 06 November 2025, 11:00
Place: Area della Ricerca CNR di Pisa - Room: C-29
Speaker
  • Edoardo Sebastiano De Duro (Università di Trento, Dipartimento di Psicologia e Scienze Cognitive)
Referent

Giulio Rossetti

Abstract

The Trust-In-LLMs Index (TILLMI) is a new framework for measuring human trust in Large Language Models (LLMs) by adapting McAllister's cognitive and affective trust dimensions [1] to human-LLM interactions. We developed TILLMI as a psychometric scale using a novel validation protocol we call "LLM-simulated validity." Our approach began with pilot testing a preliminary scale on an LLM (GPT-4), building on recent studies showing that language models and embeddings can effectively estimate psychometric measurements that traditionally require empirical data collection [2]. Exploratory Factor Analysis (EFA) on this AI-generated dataset revealed clusters with good internal consistency, suggesting that the items within each identified factor measure related constructs. Following this initial validation, we administered the scale to a sample of 1,000 U.S. respondents for human-based validation.
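To make the EFA step concrete, the following is a minimal Python sketch (not the authors' code) of fitting a two-factor exploratory model to item responses and checking internal consistency; the file name and item layout are assumptions for illustration.

```python
# Illustrative EFA on Likert-type item responses (hypothetical data;
# the study's own pipeline is not published here).
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical (n_respondents x n_items) table of item scores.
responses = pd.read_csv("llm_pilot_responses.csv")

# Oblique rotation, since affective and cognitive trust may correlate.
efa = FactorAnalyzer(n_factors=2, rotation="oblimin")
efa.fit(responses)

loadings = pd.DataFrame(efa.loadings_,
                        index=responses.columns, columns=["F1", "F2"])
print(loadings.round(2))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Classical Cronbach's alpha for one cluster of items."""
    k = items.shape[1]
    item_var = items.var(ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```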

We used Exploratory Graph Analysis (EGA; [3]), a recent technique that infers the empirical number of factors in complex psychometric datasets by leveraging network analysis to identify communities in the items' correlation structure [3, 4]. EGA yielded a 2-factor structure (see Figure 1). Traditional Exploratory Factor Analysis (EFA) likewise identified a 2-factor structure, leading to the removal of two redundant items (Items 1 and 5) and a final 6-item scale. Confirmatory Factor Analysis on a separate subsample demonstrated a strong model fit (CFI = .995, TLI = .991, RMSEA = .046, p(χ²) > .05).
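As a rough illustration of the EGA idea (the study itself used the EGAnet R package [4]), the Python sketch below estimates a regularized partial-correlation network over the items and reads factors off its communities. GraphicalLassoCV and Louvain are stand-ins for the EBICglasso estimation and Walktrap detection described in [3], so results will differ in detail, and the file name is hypothetical.

```python
# EGA-style dimensionality check (illustrative; not the EGAnet implementation).
import networkx as nx
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLassoCV

responses = pd.read_csv("tillmi_human_sample.csv")  # hypothetical file
X = (responses - responses.mean()) / responses.std(ddof=1)

# Sparse inverse covariance approximates a regularized
# partial-correlation network between items.
prec = GraphicalLassoCV().fit(X.values).precision_
d = np.sqrt(np.diag(prec))
partial = -prec / np.outer(d, d)
np.fill_diagonal(partial, 0.0)

# Communities of the weighted item network are the estimated factors.
G = nx.from_numpy_array(np.abs(partial))
communities = nx.community.louvain_communities(G, seed=0)
print(f"Estimated number of factors: {len(communities)}")
for i, members in enumerate(communities, start=1):
    print(f"Factor {i}:", [responses.columns[j] for j in sorted(members)])
```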

Convergent validity analysis showed that trust in LLMs correlated positively with psychological measures such as openness to experience, extraversion, and cognitive flexibility, and negatively with neuroticism. Based on these results, we named TILLMI's factors "closeness with LLMs" (affective trust) and "reliance on LLMs" (cognitive trust). We then carried out a regression analysis of the aggregated trust measure (affective plus cognitive) against gender and age. We found that younger males reported greater closeness with and reliance on LLMs than older women, and that individuals without direct LLM experience exhibited lower trust than users.
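For the flavor of that last step, here is a hedged Python sketch of such a regression with statsmodels; the column names and the interaction specification are assumptions, not the authors' exact model.

```python
# Illustrative OLS of aggregated TILLMI trust on demographics (hypothetical
# column names; the paper's exact specification may differ).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tillmi_scores.csv")  # hypothetical: trust, age, gender, llm_user

# An age-by-gender interaction captures "younger males vs. older women";
# llm_user contrasts respondents with and without direct LLM experience.
fit = smf.ols("trust ~ age * C(gender) + C(llm_user)", data=df).fit()
print(fit.summary())
```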

These findings provide a new empirical foundation for measuring trust in AI-driven communication, supporting responsible design and fostering balanced human-AI collaboration.

[1] McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24-59.

[2] Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7), 597-600.

[3] Golino, H. F., & Epskamp, S. (2017). Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research. PLoS ONE, 12(6), e0174035.

[4] Golino, H., & Christensen, A. P. (2025). EGAnet: Exploratory Graph Analysis – A framework for estimating the number of dimensions in multivariate data using network psychometrics. https://doi.org/10.32614/CRAN.package.EGAnet