Journal of Educational Measurement

Papers
(The TQCC of the Journal of Educational Measurement is 1. The table below lists the papers above that threshold, based on CrossRef citation counts (max. 250 papers). It covers publications from the past four years, i.e., from 2020-04-01 to 2024-04-01.)
Article | Citations
Using Retest Data to Evaluate and Improve Effort‐Moderated Scoring | 26
Model‐Based Treatment of Rapid Guessing | 23
A Response Time Process Model for Not‐Reached and Omitted Items | 14
Optimizing Implementation of Artificial‐Intelligence‐Based Automated Scoring: An Evidence Centered Design Approach for Designing Assessments for AI‐based Scoring | 7
Random Responders in the TIMSS 2015 Student Questionnaire: A Threat to Validity? | 7
Using Eye‐Tracking Data as Part of the Validity Argument for Multiple‐Choice Questions: A Demonstration | 6
Variation in Respondent Speed and its Implications: Evidence from an Adaptive Testing Scenario | 5
Score Comparability between Online Proctored and In‐Person Credentialing Exams | 5
Examining the Impacts of Ignoring Rater Effects in Mixed‐Format Tests | 5
Linking and Comparability across Conditions of Measurement: Established Frameworks and Proposed Updates | 5
An Unsupervised‐Learning‐Based Approach to Compromised Items Detection | 4
A Residual‐Based Differential Item Functioning Detection Framework in Item Response Theory | 4
Score Comparability Issues with At‐Home Testing and How to Address Them | 4
Detecting Differential Item Functioning Using Posterior Predictive Model Checking: A Comparison of Discrepancy Statistics | 4
Generating Models for Item Preknowledge | 4
Toward Argument‐Based Fairness with an Application to AI‐Enhanced Educational Assessments | 4
Psychometric Methods to Evaluate Measurement and Algorithmic Bias in Automated Scoring | 4
A Novel Partial Credit Extension Using Varying Thresholds to Account for Response Tendencies | 4
Exploring the Impact of Random Guessing in Distractor Analysis | 4
The Impact of Cheating on Score Comparability via Pool‐Based IRT Pre‐equating | 4
On Joining a Signal Detection Choice Model with Response Time Models | 3
Standard Errors of Variance Components, Measurement Errors and Generalizability Coefficients for Crossed Designs | 3
Using Item Scores and Distractors in Person‐Fit Assessment | 3
A Unified Comparison of IRT‐Based Effect Sizes for DIF Investigations | 3
Multiple‐Group Joint Modeling of Item Responses, Response Times, and Action Counts with the Conway‐Maxwell‐Poisson Distribution | 3
Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration | 3
Robust Estimation for Response Time Modeling | 3
A Recursion‐Based Analytical Approach to Evaluate the Performance of MST | 2
Validity Arguments Meet Artificial Intelligence in Innovative Educational Assessment | 2
The Automated Test Assembly and Routing Rule for Multistage Adaptive Testing with Multidimensional Item Response Theory | 2
Historical Perspectives on Score Comparability Issues Raised by Innovations in Testing | 2
On the Positive Correlation between DIF and Difficulty: A New Theory on the Correlation as Methodological Artifact | 1
Using Linkage Sets to Improve Connectedness in Rater Response Model Estimation | 1
Introduction to the Special Issue Maintaining Score Comparability: Recent Challenges and Some Possible Solutions | 1
Simultaneous Constrained Adaptive Item Selection for Group‐Based Testing | 1
Several Variations of Simple‐Structure MIRT Equating | 1
Anchoring Validity Evidence for Automated Essay Scoring | 1
[title missing] | 1
Assessing the Impact of Equating Error on Group Means and Group Mean Differences | 1
Measuring the Impact of Peer Interaction in Group Oral Assessments with an Extended Many‐Facet Rasch Model | 1
Detecting Multidimensional DIF in Polytomous Items with IRT Methods and Estimation Approaches | 1
Pretest Item Calibration in Computerized Multistage Adaptive Testing | 1
Validating Performance Standards via Latent Class Analysis | 1
Using a Projection IRT Method for Vertical Scaling When Construct Shift Is Present | 1
Measuring the Uncertainty of Imputed Scores | 1
DIF Detection for Multiple Groups: Comparing Three‐Level GLMMs and Multiple‐Group IRT Models | 1
Estimating Classification Accuracy and Consistency Indices for Multiple Measures with the Simple Structure MIRT Model | 1
An Exponentially Weighted Moving Average Procedure for Detecting Back Random Responding Behavior | 1
Explanatory Cognitive Diagnostic Modeling Incorporating Response Times | 1
Specifying the Three Ws in Educational Measurement: Who Uses Which Scores for What Purpose? | 1
A Statistical Test for the Detection of Item Compromise Combining Responses and Response Times | 1
NCME Presidential Address 2022: Turning the Page to the Next Chapter of Educational Measurement | 1
Classical Item Analysis from a Signal Detection Perspective | 1
Cognitive Diagnostic Multistage Testing by Partitioning Hierarchically Structured Attributes | 1
Robust Estimation of Ability and Mental Speed Employing the Hierarchical Model for Responses and Response Times | 1