Educational Measurement: Issues and Practice

Papers
(The median citation count of Educational Measurement: Issues and Practice is 0. The table below lists papers at or above that threshold based on CrossRef citation counts [max. 250 papers]. It covers publications from the past four years, i.e., from 2022-01-01 to 2026-01-01.)
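For readers who want to reproduce this kind of listing from their own citation data, a minimal sketch of the selection rule described above follows. The record structure, field layout, and the `papers` input are hypothetical illustrations; only the rule itself (keep papers published between 2022-01-01 and 2026-01-01 whose CrossRef citation count is at or above the journal median, capped at 250 entries) comes from the note above.

    from datetime import date
    from statistics import median

    # Hypothetical records: (title, publication_date, crossref_citation_count).
    papers = [
        ("On the Cover: Turning the Page", date(2022, 3, 1), 91),
        ("Issue Information", date(2023, 6, 1), 0),
        # ... remaining journal records would go here
    ]

    START, END = date(2022, 1, 1), date(2026, 1, 1)
    MAX_PAPERS = 250

    # Restrict to the four-year publication window first.
    recent = [p for p in papers if START <= p[1] < END]

    # The threshold is the journal's median citation count (0 in this case).
    threshold = median(count for _, _, count in recent)

    # Keep papers at or above the threshold, sort by citations (descending),
    # and cap the listing at 250 entries.
    listed = sorted(
        (p for p in recent if p[2] >= threshold),
        key=lambda p: p[2],
        reverse=True,
    )[:MAX_PAPERS]

    for title, _, count in listed:
        print(f"{title}\t{count}")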
Article | Citations
On the Cover: Turning the Page91
Digital Module 31: Testing Accommodations for Students with Disabilities43
Blending Strategic Expertise and Technology: A Case Study for Practice Analysis23
Visualizing Distributions Across Grades18
ITEMS Corner Update: The Initial Steps in the ITEMS Development Process17
Issue Information12
Applying a Mixture Rasch Model‐Based Approach to Standard Setting10
Editorial10
Issue Cover8
Evolving Educational Testing to Meet Students’ Needs: Design‐in‐Real‐Time Assessment8
Issue Information7
Item Selection Algorithm Based on Collaborative Filtering for Item Exposure Control7
Growth across Grades and Common Item Grade Alignment in Vertical Scaling Using the Rasch Model7
Editorial7
ITEMS Corner Update: The New ITEMS Module Development Process7
Issue Information6
A Probabilistic Filtering Approach to Non‐Effortful Responding6
Commentary: A Data‐Driven Analysis of Recent Job Posts to Evaluate the Foundational Competencies6
What Makes Measurement Important for Education?6
Digital Module 38: Differential Item Functioning by Multiple Variables Using Moderated Nonlinear Factor Analysis5
The Impact of the COVID‐19 Pandemic on American Board of Surgery's Oral Certifying Exams5
Guesses and Slips as Proficiency‐Related Phenomena and Impacts on Parameter Invariance4
Digital Module 36: Applying Intersectionality Theory to Educational Measurement4
Cheating Detection of Test Collusion: A Study on Machine Learning Techniques and Feature Representation4
The 2024 EM:IP Cover Graphic/Data Visualization Competition4
The University of California Was Wrong to Abolish the SAT: Admissions When Affirmative Action Was Banned4
Commentary: Perspectives of Early Career Professionals on Enhancing Cultural Responsiveness in Educational Measurement4
Digital Module 28: Unusual Things That Usually Occur in a Credentialing Testing Program4
Foundational Competencies in Educational Measurement4
Editorial3
Issue Information3
Applications and Modeling of Keystroke Logs in Writing Assessments3
Editorial3
An Evaluation of Automatic Item Generation: A Case Study of Weak Theory Approach3
Exploration of Latent Structure in Test Revision and Review Log Data3
Issue Cover3
Detecting Aberrant Test‐Taking Behaviors in Computer‐Based Testing Using One‐Dimensional Convolutional Neural Networks3
Commentary: What Is the Breadth of “Educational Measurement?”3
Foundational Competencies in Educational Measurement: A Rejoinder3
ITEMS Corner Update: High Traffic to the ITEMS Portal on the NCME Website3
Commentary: Past, Present, and Future of Educational Measurement2
What Should Psychometricians Know about the History of Testing and Testing Policy?2
Instruction‐Tuned Large‐Language Models for Quality Control in Automatic Item Generation: A Feasibility Study2
Issue Cover2
Ronald K. Hambleton (1943–2022): Setting the Standard for Measurement Excellence2
On the Cover: Exploring Artificial Intelligence in Education2
ITEMS Corner Update: Recording Audio and Adding an Editorial Polish to an ITEMS Module2
An Investigation of the Nature and Consequence of the Relationship between IRT Difficulty and Discrimination2
Commentary: How Research and Testing Companies can Support Early‐Career Measurement Professionals2
Issue Cover2
Measurement Reflections2
Issue Cover2
Transforming Assessment: The Impacts and Implications of Large Language Models and Generative AI2
Current Psychometric Models and Some Uses of Technology in Educational Testing2
ITEMS Corner Update: The Final Three Steps in the Development Process2
Measuring Variability in Proctor Decision Making on High‐Stakes Assessments: Improving Test Security in the Digital Age2
Issue Cover2
Comparative Analysis of Psychometric Frameworks and Properties of Scores from Autogenerated Test Forms2
Issue Cover2
Investigating Approaches to Controlling Item Position Effects in Computerized Adaptive Tests2
Validation as Evaluating Desired and Undesired Effects: Insights From Cross‐Classified Mixed Effects Model2
Development of a New Learning Progression Verification Method based on the Hierarchical Diagnostic Classification Model: Taking Grade 5 Students’ Fractional Operations as an Example2
Digital Module 34: Introduction to Multilevel Measurement Modeling2
On the Cover: Sequential Progression and Item Review in Timed Tests: Patterns in Process Data1
Issue Information1
On the Cover: Person Infit Density Contour1
Issue Cover1
Changing Educational Assessments in the Post‐COVID‐19 Era: From Assessment of Learning (AoL) to Assessment as Learning (AaL)1
Reached or Not Reached: A Tale of Two Data Sources1
Exploring the Effect of Human Error When Using Expert Judgments to Train an Automated Scoring System1
You Win Some, You Lose Some1
On the Cover: Illustrating Collusion Networks with Graph Theory1
AI: Can You Help Address This Issue?1
Supporting the Interpretive Validity of Student‐Level Claims in Science Assessment with Tiered Claim Structures1
Issue Information1
Editorial1
Issue Information1
Issue Cover1
Issue Cover1
NCME Presidential Address 2021: Assessment Research and Practice in the Post‐COVID‐19 Era1
Digital Module 33: Fairness in Classroom Assessment: Dimensions and Tensions1
Introduction to the Special Section “Issues and Practice in Applying Machine Learning in Educational Measurement”1
Diving Into Students’ Transcripts: High School Course‐Taking Sequences and Postsecondary Enrollment1
On the Cover: Tell‐Tale Triangles of Subscore Value1
Still Interested in Multidimensional Item Response Theory Modeling? Here Are Some Thoughts on How to Make It Work in Practice1
Modeling Slipping Effects in a Large‐Scale Assessment with Innovative Item Formats0
Using OpenAI GPT to Generate Reading Comprehension Items0
Evaluating Population Invariance of Test Equating During the COVID‐19 Pandemic0
There Is No Right Way: A Reply to Sinharay (2022)0
Revisiting the Usage of Alpha in Scale Evaluation: Effects of Scale Length and Sample Size0
On the Cover: Indicators for Item Preknowledge0
MxML (Exploring the Relationship between Measurement and Machine Learning): Current State of the Field0
Improving Instructional Decision‐Making Using Diagnostic Classification Models0
To Score or Not to Score: Factors Influencing Performance and Feasibility of Automatic Content Scoring of Text Responses0
Measuring Digital Literacy during the COVID‐19 Pandemic: Experiences with Remote Assessment in Hong Kong0
The Impact of COVID‐19‐Related School Closures on Student Achievement—A Meta‐Analysis0
Issue Information0
On the Cover: The Increasing Impact of EM:IP0
Does It Matter How the Rigor of High School Coursework Is Measured? Gaps in Coursework Among Students and Across Grades0
Weighing the Value of Complex Growth Estimation Methods to Evaluate Individual Student Response to Instruction0
Educational Measurement: Models, Methods, and Theory0
Editorial0
Measurement Must Be Qualitative, then Quantitative, then Qualitative Again0
Measurement Invariance for Multilingual Learners Using Item Response and Response Time in PISA 20180
Examining Gender Differences in TIMSS 2019 Using a Multiple‐Group Hierarchical Speed‐Accuracy‐Revisits Model0
Issue Information0
The 2025 EM:IP Cover Graphic/Data Visualization Competition0
Editorial0
Introduction to the Special Section “Lingering Impact of COVID‐19 on Educational Measurement”0
Reconceptualization of Coefficient Alpha Reliability for Test Summed and Scaled Scores0
Communicating Measurement Outcomes with (Better) Graphics0
On the Cover: Unraveling Reading Recognition Trajectories: Classifying Student Development through Growth Mixture Modeling0
Commentary: Modernizing Educational Assessment Training for Changing Job Markets0
Editorial0
Reframing Research and Assessment Practices: Advancing an Antiracist and Anti‐Ableist Research Agenda0
Personalizing Assessment: Dream or Nightmare?0
Machine Learning–Based Profiling in Test Cheating Detection0
Bilevel Topic Model‐Based Multitask Learning for Constructed‐Responses Multidimensional Automated Scoring and Interpretation0
Editorial0
An Application of Text Embeddings to Support Alignment of Educational Content Standards0
On the Cover: Key Specifications for a Large‐Scale Medical Exam0
On the Cover: High School Coursetaking Sequence Clusters and Postsecondary Enrollment0
ITEMS Corner: Educating the Educational Measurement Community0
Editorial0
Disrupted Data: Using Longitudinal Assessment Systems to Monitor Test Score Quality0
A Machine Learning Approach for the Simultaneous Detection of Preknowledge in Examinees and Items When Both Are Unknown0
Digital Module 39: Introduction to Generalizability Theory0
Introduction to the Special Section on the Past, Present, and Future of Educational Measurement0
Knowledge Integration in Science Learning: Tracking Students' Knowledge Development and Skill Acquisition with Cognitive Diagnosis Models0
Expected Classification Accuracy for Categorical Growth Models0
A Special Case of Brennan's Index for Tests That Aim to Select a Limited Number of Students: A Monte Carlo Simulation Study0
Issue Cover0
Editorial0
What Mathematics Content Do Teachers Teach? Optimizing Measurement of Opportunities to Learn in the Classroom0
Call for Papers: Leveraging Measurement for Better Decisions0
From Mandated to Test‐Optional College Admissions Testing: Where Do We Go from Here?0
Digital Module 29: Multidimensional Item Response Theory Equating0
Reporting Pass–Fail Decisions to Examinees with Incomplete Data: A Commentary on Feinberg (2021)0
Digital Module 32: Understanding and Mitigating the Impact of Low Effort on Common Uses of Test and Survey Scores0
Commentary: What Is Truly Foundational?0
Examining the Psychometric Impact of Targeted and Random Double‐Scoring in Mixed‐Format Assessments0
Comparing Large‐Scale Assessments in Two Proctoring Modalities with Interactive Log Data Analysis0
Personalizing Large‐Scale Assessment in Practice0
Editorial0
On the Cover: Predicted Racial‐Ethnic Composition of Educational Measurement Publications0
Digital Module 37: Introduction to Item Response Tree (IRTree) Models0
Hierarchical Agglomerative Clustering to Detect Test Collusion on Computer‐Based Tests0
Machine Learning Literacy for Measurement Professionals: A Practical Tutorial0
Editorial0
The Multidimensionality of Measurement Bias in High‐Stakes Testing: Using Machine Learning to Evaluate Complex Sources of Differential Item Functioning0
Generalizability Theory Approach to Analyzing Automated‐Item Generated Test Forms0
Digital Module 35: Through‐Year Assessment0
2024 NCME Presidential Address: Challenging Traditional Views of Measurement0
Defining Test‐Score Interpretation, Use, and Claims: Delphi Study for the Validity Argument0
Commentary: Where Does Classroom Assessment Fit in Educational Measurement?0
Call for Papers: Issues and Practice in Applying Machine Learning in Educational Measurement0
Issue Cover0
Admission Testing in Higher Education: Changing Landscape and Outcomes from Test‐Optional Policies0
Digital Module 30: Validity and Educational Testing: Purposes and Uses of Educational Tests0
Issue Information0
Investigating the Split‐Attention Effect in Computer‐Based Assessment: Spatial Integration and Interactive Signaling Approaches0
Issue Information0
Issue Information0
Investigating Elements of Culturally Responsive Assessments in the Context of the National Assessment of Educational Progress: An Initial Exploration0
ITEMS Corner Update: Two Years of Changes to ITEMS0
Item Response Theory Models for Polytomous Multidimensional Forced‐Choice Items to Measure Construct Differentiation0
Issue Information0
Universal by Design: Unveiling the Effectiveness of Accommodations and Universal Design Features through Process Data0
Automated Scoring in Learning Progression‐Based Assessment: A Comparison of Researcher and Machine Interpretations0
The Role of Response Style Adjustments in Cross‐Country Comparisons—A Case Study Using Data from the PISA 2015 Questionnaire0
Inflection Point: The Role of Testing in Admissions Decisions in a Postpandemic Environment0
The Good Side of COVID‐190
Issue Cover0
ITEMS Corner Update: Announcing Two Significant Changes to ITEMS0
Psychometric Evaluation of the Preschool Early Numeracy Skills Test–Brief Version Within the Item Response Theory Framework0
ITEMS Corner: Next Chapter of ITEMS0
Issue Information0
Using Active Learning Methods to Strategically Select Essays for Automated Scoring0
On the Cover: Distractor Cascade Analysis0
Mode Effects in College Admissions Testing and Differential Speededness as a Possible Explanation0
Weighting Content Specifications for the National Medical Licensing Examination via Group Analytic Hierarchy Process0
Average Rank and Adjusted Rank Are Better Measures of College Student Success than GPA0
Do Subject Matter Experts’ Judgments of Multiple‐Choice Format Suitability Predict Item Quality?0
A Workflow for Minimizing Errors in Template‐Based Automated Item‐Generation Development0
Issue Cover0
Linking Unlinkable Tests: A Step Forward0
The 2023 EM:IP Cover Graphic/Data Visualization Competition0
On the Cover: Gendered Trajectories of Digital Literacy Development: Insights from a Longitudinal Cohort Study0
The Past, Present, and Future of Large‐Scale Assessment Consortia0
Adjusting for Ability Differences of Equating Samples When Randomization Is Suboptimal0
Issue Information0
Causal Inference and COVID: Contrasting Methods for Evaluating Pandemic Impacts Using State Assessments0
Considerations for Future Online Testing and Assessment in Colleges and Universities0
Issue Information0
Issue Cover0
Issue Cover0
Editorial0
Deriving Decisions from Disrupted Data0
2023 NCME Presidential Address: Some Musings on Comparable Scores0
Using Process Data to Evaluate the Impact of Shortening Allotted Case Time in a Simulation‐Based Assessment0
Leading ITEMS: A Retrospective on Progress and Future Goals0
A Case for Reimagining Universal Design of Assessment Systems0
Editorial0
Demystifying Adequate Growth Percentiles0
In the beginning, there was an item…0
An Automated Item Pool Assembly Framework for Maximizing Item Utilization for CAT0
Achievement and Growth on English Language Proficiency and Content Assessments for English Learners in Elementary Grades0
Editorial0
Measurement Efficiency for Technology‐Enhanced and Multiple‐Choice Items in a K–12 Mathematics Accountability Assessment0