Research Article · Psychoactive Triggers Series · v1.0 · CC BY 4.0
PersonaMatrix Research Journal·Volume 1 (2026)·ISSN: Pending·DOI: 10.69635/mssl.2026.2.1.31

Psychoactive Triggers as a Stimulus Battery for Measuring Large Language Models (LLMs): A Bridge Between Psychometrics, Clinical Psychology, and LLM Engineering

Authors
Anatoliy Drobakha (Corresponding Author)
Independent Researcher, Author of the PersonaMatrix Project, Florida, USA
M. Kalitkin
Independent Researcher
K. Klymenko
Independent Researcher
R. Nayda
Independent Researcher
L. Lahuta
NGO Institute of Psychological Maturity, Florida, USA
O. Kostenko
Independent Researcher
Received: 2025-11-20
Accepted: 2025-12-28
Published: 2026-01-15
License: CC BY 4.0

Abstract

This paper presents psychoactive triggers as a stimulus battery for measuring large language models (LLMs), bridging psychometrics, clinical psychology, and LLM engineering. The study introduces a novel framework for evaluating LLM behavioral consistency, emotional responsiveness, and safety boundaries using structured psychological inputs derived from clinical practice. Results demonstrate measurable behavioral drift across models when exposed to cognitively loaded prompts, with implications for AI safety and evaluation methodology.
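The abstract describes measuring behavioral drift when models are exposed to cognitively loaded prompts. The paper's actual metric is not given here, so the following is only a minimal illustrative sketch: it scores drift as the mean pairwise dissimilarity (token-level Jaccard, a stand-in for whatever similarity measure the study uses) between a model's responses to neutral baseline prompts and its responses to trigger-loaded variants. The function names and the toy data are hypothetical, not from the paper.

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two response strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def drift_score(baseline_responses: list[str], loaded_responses: list[str]) -> float:
    """Mean pairwise dissimilarity between baseline and trigger-loaded responses.

    0.0 means the loaded prompts left behavior unchanged;
    1.0 means the responses share no tokens at all.
    """
    pairs = [(b, l) for b in baseline_responses for l in loaded_responses]
    return 1.0 - sum(jaccard(b, l) for b, l in pairs) / len(pairs)

# Toy example: responses to a neutral prompt vs. a cognitively loaded variant
baseline = ["The capital of France is Paris.", "Paris is the capital of France."]
loaded = ["I sense you may be distressed; the capital of France is Paris."]
print(round(drift_score(baseline, loaded), 3))
```

A real evaluation would swap the Jaccard stand-in for an embedding-based or rubric-based similarity and aggregate drift across many trigger categories and repeated trials per model.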

Keywords

psychoactive triggers, LLM evaluation, psychometrics, AI safety, stimulus battery, behavioral measurement

Citation

Drobakha, A., Kalitkin, M., Klymenko, K., Nayda, R., Lahuta, L., & Kostenko, O. (2026). Psychoactive triggers as a stimulus battery for measuring large language models (LLMs): A bridge between psychometrics, clinical psychology, and LLM engineering. PersonaMatrix Research Journal (PMRJ).
