Educational Measurement: Issues and Practice

Papers
(The TQCC of Educational Measurement: Issues and Practice is 2. The table below lists the papers at or above that threshold, based on CrossRef citation counts, up to a maximum of 250 papers. It covers publications from the past four years, i.e., from 2020-11-01 to 2024-11-01.)
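The selection rule described above can be summarized in a minimal Python sketch, assuming the paper list is already available in memory. The `papers` structure, its field names, and the example publication date are illustrative assumptions, not the site's actual pipeline or the CrossRef API.

    # Sketch of the listing's selection rule: keep papers whose CrossRef
    # citation count meets the TQCC threshold of 2, published within the
    # four-year window, capped at 250 entries and sorted by citations.
    # The `papers` records are hypothetical; only the first entry's title
    # and count come from the table below (its date is a placeholder).
    from datetime import date

    TQCC_THRESHOLD = 2
    WINDOW_START, WINDOW_END = date(2020, 11, 1), date(2024, 11, 1)
    MAX_PAPERS = 250

    papers = [
        {"title": "College Admission Tests and Social Responsibility",
         "published": date(2021, 1, 1),  # placeholder date
         "citations": 17},
        # ... one record per paper, counts taken from CrossRef ...
    ]

    selected = sorted(
        (p for p in papers
         if p["citations"] >= TQCC_THRESHOLD
         and WINDOW_START <= p["published"] <= WINDOW_END),
        key=lambda p: p["citations"],
        reverse=True,
    )[:MAX_PAPERS]

    for p in selected:
        print(f"{p['title']} | {p['citations']}")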
Article | Citations
“Color‐Neutral” Is Not a Thing: Redefining Construct Definition and Representation through a Justice‐Oriented Critical Antiracist Lens | 69
The Impact of COVID‐19‐Related School Closures on Student Achievement—A Meta‐Analysis | 60
NCME Presidential Address 2020: Valuing Educational Measurement | 19
College Admission Tests and Social Responsibility | 17
Evaluating Item Fit Statistic Thresholds in PISA: Analysis of Cross‐Country Comparability of Cognitive Items | 15
Learning Theory, Classroom Assessment, and Equity | 15
Changing Educational Assessments in the Post‐COVID‐19 Era: From Assessment of Learning (AoL) to Assessment as Learning (AaL) | 12
Foundational Competencies in Educational Measurement | 11
Commentary: From Construct to Consequences: Extending the Notion of Social Responsibility | 8
A Longitudinal Diagnostic Model with Hierarchical Learning Trajectories | 8
Using Active Learning Methods to Strategically Select Essays for Automated Scoring | 8
Investigating the Split‐Attention Effect in Computer‐Based Assessment: Spatial Integration and Interactive Signaling Approaches | 7
The Multidimensionality of Measurement Bias in High‐Stakes Testing: Using Machine Learning to Evaluate Complex Sources of Differential Item Functioning | 7
Coconstructing a Meaningful Online Environment: Faculty–Student Rapport in the English as a Foreign Language College Classroom | 7
Machine Learning and Small Data | 7
The Ramifications of COVID‐19 in Education: Beyond the Extension of the Kano Model and Unipolar View of Satisfaction and Dissatisfaction in the Field of Blended Learning | 5
An Exploration and Critical Examination of How “Intelligent Classroom Technologies” Can Improve Specific Uses of Direct Student Behavior Observation Methods | 5
Commentary: Evolution of Equity Perspectives on Higher Education Admissions Testing: A Call for Increased Critical Consciousness | 5
How Did Students Engage with a Remote Educational Assessment? A Case Study | 5
The Effect of Item Preknowledge on Response Time: Analysis of Two Datasets Using the Multiple‐Group Lognormal Response Time Model with a Gating Mechanism | 5
Measuring Comparison Effects: A Critical View on the Internal/External Frame of Reference Model | 5
An Ecological Framework for Item Responding Within the Context of a Youth Risk and Needs Assessment | 5
A Rubric for the Detection of Students in Crisis | 4
The Good Side of COVID‐19 | 4
Bilevel Topic Model‐Based Multitask Learning for Constructed‐Responses Multidimensional Automated Scoring and Interpretation | 4
Estimating Classification Decisions for Incomplete Tests | 4
Are Fourth‐Year College Students Better Critical Thinkers than Their First‐Year Peers? Not So Much, and College Major and Ethnicity Matter | 4
A Probabilistic Filtering Approach to Non‐Effortful Responding | 4
Defining Test‐Score Interpretation, Use, and Claims: Delphi Study for the Validity Argument | 4
Considerations for Future Online Testing and Assessment in Colleges and Universities | 4
Inflection Point: The Role of Testing in Admissions Decisions in a Postpandemic Environment | 4
Measuring Digital Literacy during the COVID‐19 Pandemic: Experiences with Remote Assessment in Hong Kong | 4
Development and Validation of an Automatic Item Generation System for English Idioms | 4
To Score or Not to Score: Factors Influencing Performance and Feasibility of Automatic Content Scoring of Text Responses | 3
Balancing Trade‐Offs in the Detection of Primary Schools at Risk | 3
Introduction to the Focal Article, Commentaries, and Response to the Commentaries | 3
Gender‐Based EFL Writing Error Analysis Using Human and Computer‐Aided Approaches | 3
Development of a New Learning Progression Verification Method based on the Hierarchical Diagnostic Classification Model: Taking Grade 5 Students’ Fractional Operations as an Example | 3
The Role of Response Style Adjustments in Cross‐Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire | 3
An Evaluation of Automatic Item Generation: A Case Study of Weak Theory Approach | 3
Average Rank and Adjusted Rank Are Better Measures of College Student Success than GPA | 3
Boolean Analysis of Interobserver Agreement: Formal and Functional Evidence Sampling in Complex Coding Endeavors | 3
Reporting Pass–Fail Decisions to Examinees with Incomplete Data: A Commentary on Feinberg (2021) | 2
Combining Process Information and Item Response Modeling to Estimate Problem‐Solving Ability | 2
Commentary: Design Tests with a Learning Purpose | 2
Commentary: Social Responsibility, Fairness, and College Admissions Tests | 2
A Machine Learning Approach for the Simultaneous Detection of Preknowledge in Examinees and Items When Both Are Unknown | 2
Digital Module 21: Results Reporting for Large‐Scale Assessments | 2
College Admission Tests and Social Responsibility: A Response to the Commentaries | 2
On the Cover: Time Spent on Multiple‐Choice Items | 2
Personalizing Large‐Scale Assessment in Practice | 2
Using Classification Tree Models to Determine Course Placement | 2
Commentary: Social Responsibility in College Admissions Requires a Reimagining of Standardized Testing | 2
ITEMS Corner Update: The New ITEMS Module Development Process | 2
Examining Comparability across CAT Assessments | 2
Are There Distinctive Profiles in Examinee Essay‐Writing Processes? | 2
Mode Effects in College Admissions Testing and Differential Speededness as a Possible Explanation | 2
Commentary: Comment on College Admissions Tests and Social Responsibility | 2
Transforming Assessment: The Impacts and Implications of Large Language Models and Generative AI | 2
Assessing the Impact of a Test Question: Evidence from the “Underground Railroad” Controversy | 2
Commentary: The Future of College Admissions Tests | 2
Clarifying the Terminology of Validity and the Investigative Stages of Validation | 2
Commentary: Achieving Educational Equity Requires a Communal Effort | 2
Using OpenAI GPT to Generate Reading Comprehension Items | 2