AI-Augmented Content Validation in Behavioral Research: Development and Evaluation of the RATER System. Journal Article

Overview

abstract

  • Content validation is an essential aspect of the scale development process that ensures that measurement instruments capture their intended constructs. However, researchers rarely undertake this core step in behavioral research because it requires costly data collection and specialized expertise. We present RATER (Replicable Approach to Expert Ratings), a free web-based system (www.contval.org) that can help the broader research community (scientists, reviewers, students) gain quick and reliable insights into the content validity of measurement instruments. Guided by psychometric measurement theory, RATER evaluates whether a scale's items correspond to their intended construct, remain distinct from other constructs, and adequately represent all aspects of the construct's content domain. The system employs two unique artificial intelligence models, RATER-C and RATER-D, which leverage psychometric scales from 2,443 journal articles spanning eight disciplines and two state-of-the-art large language model architectures (i.e., BERT and GPT). A set of six complementary studies confirms the RATER system's accuracy, reliability, and usefulness. We find RATER can augment the scale development and validation process, increasing the validity of findings in behavioral research.
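The item-construct correspondence check the abstract describes can be illustrated with a minimal sketch: score each scale item against candidate construct definitions and verify that the intended construct scores highest. Everything here is an illustrative assumption, not the RATER implementation: the construct names, definitions, and sample item are invented, and a simple bag-of-words cosine similarity stands in for the BERT- and GPT-based models the actual system uses.

```python
# Toy sketch of item-construct correspondence scoring.
# NOTE: illustrative only; RATER itself uses BERT/GPT-based models,
# not bag-of-words cosine similarity.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercase bag-of-words term counts for a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def best_construct(item: str, constructs: dict[str, str]) -> str:
    """Return the construct whose definition is most similar to the item."""
    item_vec = vectorize(item)
    return max(constructs, key=lambda c: cosine(item_vec, vectorize(constructs[c])))

# Hypothetical construct definitions and scale item.
constructs = {
    "perceived usefulness": "using the system improves my job performance and effectiveness",
    "perceived ease of use": "the system is easy to use and learning it is effortless",
}
item = "Using this system makes my job performance better."
print(best_construct(item, constructs))  # → perceived usefulness
```

A distinctiveness check in the same spirit would compare an item's similarity to its intended construct against its similarity to every other construct in the pool, flagging items whose highest-scoring construct is not the intended one.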

publication date

  • June 23, 2025

Date in CU Experts

  • November 1, 2025 4:06 AM

Full Author List

  • Pillet J-C; Larsen KR; Dobolyi D; Queiroz M; Handler A; Arnulf JK; Sharma R

author count

  • 7

International Standard Serial Number (ISSN)

  • 0276-7783