Project 10 - Proper Use of Assessment Results from Common Core State Standards

Since the Common Core State Standards (CCSS) were released in 2010, nearly every U.S. state has formally adopted the standards in mathematics and English language arts (ELA), and many have joined one of two consortia to develop and implement common tests. Given the high stakes associated with the use of scores from the Common Core assessments, as well as the need for proper diagnosis of student learning, this report discusses three fundamental concerns: (1) score comparability and differential item functioning (DIF) across multiple groups, (2) the selection of software packages for multiple-group IRT analysis, and (3) cognitive diagnostic models (CDMs).

Executive Summary

 

Project 9 - Score Comparability and Differential Item Functioning

Under the Common Core State Standards (CCSS), the tests developed by each consortium are based on the same common core standards; however, states within a consortium may adopt different curricula and instruction, and student populations can differ considerably across states. As acknowledged by PARCC, test score comparability across states is an important issue to be addressed. In this report, we briefly discuss methods for detecting DIF items across multiple groups, as well as multiple-group IRT models for handling DIF.
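To make one such method concrete, the sketch below illustrates the Mantel-Haenszel procedure, a widely used DIF screen, comparing a reference and a focal group on a single dichotomous item stratified by total score. The data, variable names, and interpretation thresholds are hypothetical illustrations, not output from any consortium analysis; multiple-group extensions (e.g., the generalized Mantel-Haenszel statistic) build on the same stratification idea.

```python
# Minimal sketch: Mantel-Haenszel DIF screen for one dichotomous item,
# reference vs. focal group, stratified by total test score.
# All data and values here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
total = rng.integers(0, 21, n)           # total score used as the stratifying variable
p = 1 / (1 + np.exp(-(total - 10) / 3))  # hypothetical response model (no true DIF)
item = rng.binomial(1, p)                # simulated item responses

num, den = 0.0, 0.0
for s in np.unique(total):
    m = total == s
    a = np.sum(m & (group == 0) & (item == 1))  # reference, correct
    b = np.sum(m & (group == 0) & (item == 0))  # reference, incorrect
    c = np.sum(m & (group == 1) & (item == 1))  # focal, correct
    d = np.sum(m & (group == 1) & (item == 0))  # focal, incorrect
    t = a + b + c + d
    if t == 0:
        continue
    num += a * d / t
    den += b * c / t

alpha_mh = num / den                 # common odds ratio across score strata
delta_mh = -2.35 * np.log(alpha_mh)  # ETS delta scale; values near 0 suggest negligible DIF
print(alpha_mh, delta_mh)
```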

Executive Summary

 

Project 8 - Software Packages for Multiple-Group IRT Analysis and Accuracy of Parameter Estimates

In this report, we compare several IRT software packages for multiple-group analysis, including BILOG-MG, MULTILOG, IRTPRO, flexMIRT, Mplus, BMIRT, and FLIRT (an R package). Because different programs employ different defaults and options for model identification and commonality, the report provides information on both issues. The review focuses on the use of these programs for multiple-group IRT analysis in the context where the same test form is administered to different groups.
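To illustrate why every package must impose identification constraints, the sketch below demonstrates the latent-scale indeterminacy of the 2PL model: rescaling ability while counter-transforming the item parameters leaves the response probabilities unchanged, so the data alone cannot fix the scale. Programs typically resolve this by fixing the reference group's ability distribution to N(0, 1) or by constraining item parameters. All parameter values below are hypothetical.

```python
# Minimal sketch of the latent-scale indeterminacy in (multiple-group) IRT.
# All parameter values are hypothetical.
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function."""
    return 1 / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)   # abilities on the original scale
a, b = 1.2, 0.5                 # discrimination, difficulty

# Arbitrary linear rescaling of the latent trait: theta* = s*theta + m
s, m = 2.0, -1.0
theta_star = s * theta + m
a_star = a / s                  # counter-transformed discrimination
b_star = s * b + m              # counter-transformed difficulty

# Identical probabilities: the data cannot distinguish the two scalings,
# which is why software must anchor the scale (e.g., reference group N(0, 1)).
print(np.allclose(p_2pl(theta, a, b), p_2pl(theta_star, a_star, b_star)))  # True
```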

Executive Summary

 

Project 7 - Cognitive Diagnostic Models

The purpose of this document is to provide theoretical background on cognitive diagnostic models (CDMs) by first explaining some technical terminology and then giving an overview of the models that could be used in practice. The aim is to pave the way for further analyses of these kinds of diagnostic assessments, as part of a broader effort to provide background on commonly studied diagnostic models for the stakeholders of such tests, particularly in the state of Maryland. Ultimately, we hope this review will shed light on the models most useful for giving students and teachers accurate, actionable information through the formative assessments proposed by the CCSSO, PARCC, and Maryland public schools.
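As a concrete example of the kind of model the report surveys, the sketch below implements the item response function of the DINA model, one of the most commonly studied CDMs. The Q-matrix, slip and guess values, and attribute profile are hypothetical.

```python
# Minimal sketch of the DINA ("deterministic inputs, noisy and") model.
# Under DINA, an examinee who has mastered all attributes an item requires
# answers correctly with probability 1 - slip; otherwise with probability guess.
# Q-matrix, slip/guess values, and the attribute profile are hypothetical.
import numpy as np

# Q-matrix: rows = items, columns = attributes (1 = attribute required)
Q = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
slip  = np.array([0.10, 0.15, 0.20])   # P(incorrect | all required attributes mastered)
guess = np.array([0.20, 0.10, 0.05])   # P(correct | some required attribute missing)

def dina_prob(alpha):
    """P(correct) for each item, given attribute-mastery profile alpha."""
    eta = np.all(Q <= alpha, axis=1)   # True iff all required attributes are mastered
    return np.where(eta, 1 - slip, guess)

# Examinee mastering attributes 1 and 2 but not 3:
print(dina_prob(np.array([1, 1, 0])))  # [0.9, 0.85, 0.05]
```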

Executive Summary

 

Project 6 - Issues and Considerations Regarding Linking Between Old and New Assessments

Beginning in the fall of 2014, in accordance with the new standards set forth by the Common Core State Standards (CCSS) initiative, the state of Maryland, as a member of the Partnership for Assessment of Readiness for College and Careers (PARCC), will replace the Maryland School Assessment (MSA) with the PARCC assessment, which will differ in content coverage, scope and sequence, and psychometric properties, among other features. Such discrepancies between the new and old assessments require a careful linking study in order to compare the two assessments and correctly measure progress. This outline addresses these problems and suggests different linking designs.
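As one simple example of a procedure such a linking study might use, the sketch below applies mean-sigma linking to place item parameters from a new form onto an old form's IRT scale via common (anchor) items. All parameter values are hypothetical, and this is only one of several linking methods the outline considers.

```python
# Minimal sketch of mean-sigma IRT linking: use anchor items calibrated on
# both the old and new assessments to find the linear transformation
# theta_old = A * theta_new + B, then rescale the new form's parameters.
# All parameter values are hypothetical.
import numpy as np

# Anchor-item difficulties estimated separately on each assessment's scale
b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_new = np.array([-0.9, -0.2, 0.3, 0.9, 1.6])

A = b_old.std() / b_new.std()        # slope: ratio of difficulty SDs
B = b_old.mean() - A * b_new.mean()  # intercept: aligns difficulty means

# Transform a (hypothetical) non-anchor item from the new form onto the old scale
a_new, b_new_item = 1.1, 0.5
b_linked = A * b_new_item + B        # difficulty on the old scale
a_linked = a_new / A                 # discrimination on the old scale
print(A, B, b_linked, a_linked)
```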

Executive Summary

 

Project 5 - Issues and Considerations Regarding Standard Setting

Standard setting is the process of establishing cut scores on a test; a cut score indicates whether a student has achieved an established level of proficiency. As new summative assessments for the Common Core standards will be implemented in the 2014-2015 school year by two consortia (PARCC and Smarter Balanced), standard setting for the new assessments will need to be conducted. Because there are many distinct differences between the two consortia's assessments, their processes for setting performance standards will not be the same. This outline provides a general description of standard setting, details how the two consortia's procedures differ, and discusses the related issues and considerations.
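For illustration, the sketch below shows the item-mapping computation behind bookmark-style standard setting, one common family of methods: each 2PL item is placed at the ability level where the probability of a correct response reaches a chosen response-probability (RP) criterion, which determines the ordering panelists page through before placing the bookmark. The item parameters and RP value are hypothetical, and neither consortium's actual procedure is implied.

```python
# Minimal sketch of the item mapping behind bookmark-style standard setting.
# Each 2PL item is placed at the theta where P(correct) equals the RP criterion;
# panelists page through items in this order and place the "bookmark" cut.
# Item parameters and the RP value are hypothetical.
import numpy as np

a = np.array([0.8, 1.5, 1.0, 1.2])    # discriminations
b = np.array([-0.5, 0.0, 0.6, 1.1])   # difficulties
rp = 0.67                             # RP67, a common criterion

# Solve P(theta) = rp for the 2PL: theta = b + ln(rp / (1 - rp)) / a
theta_map = b + np.log(rp / (1 - rp)) / a

order = np.argsort(theta_map)         # ordered-item-booklet ordering
print(theta_map.round(2), order)
```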

Executive Summary

 

Project 4 - Issues and Technical Considerations Related to Transitioning to a New Test According to Common Core State Standards (CCSS)

This executive summary identifies the issues, and corresponding technical suggestions, related to Maryland's adoption of the new national testing effort. The issues identified fall into four broad categories: psychometric (e.g., scaling, linking, and DIF), technology-related (e.g., readiness and security), implementation-related (e.g., test delivery, scoring, and reporting), and policy-related (e.g., student growth and evaluation of teacher effectiveness). For each issue identified, potential technical considerations are provided.

Executive Summary

 

Project 3 - Context Effect on Item Parameter Invariance

Context effects occur when item parameters are influenced by item location, order effects, or the characteristics of other items in a test. Although a large body of research on context effects has shown that changes in item position can substantially affect both item parameter estimates and subsequent equating results, findings are not uniform: context effects did not always significantly affect item difficulty or item discrimination. Based on a thorough literature review, this project summarizes the research findings on item parameter invariance, as well as on equating under the influence of context effects. Recommendations from the literature on test construction and development are also provided.
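To illustrate how such invariance is often checked, the sketch below compares difficulty estimates for the same items calibrated in two different positions, removes the overall scale difference with a mean-sigma adjustment, and flags items whose displacement exceeds a threshold. All values, including the 0.3-logit flagging rule, are hypothetical.

```python
# Minimal sketch of a position (context) effect check on item difficulties.
# All values, including the 0.3-logit threshold, are hypothetical.
import numpy as np

b_pos1 = np.array([-1.0, -0.3, 0.2, 0.9, 1.4])  # difficulties, original position
b_pos2 = np.array([-0.9, -0.2, 0.7, 1.0, 1.5])  # difficulties, shifted position

# Put position-2 estimates on the position-1 scale (mean-sigma adjustment)
A = b_pos1.std() / b_pos2.std()
B = b_pos1.mean() - A * b_pos2.mean()
b_pos2_linked = A * b_pos2 + B

displacement = b_pos2_linked - b_pos1
flagged = np.abs(displacement) > 0.3            # items showing a possible context effect
print(displacement.round(2), flagged)
```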

Executive Summary

 

Project 2 - The Use of Student Learning Objectives (SLOs) in Teacher Evaluation

Student achievement data have increasingly been used to assist in evaluating teacher effectiveness. Relying only on state test data, however, means that a large part of the teaching force cannot be included in such evaluations. The Student Learning Objective (SLO) is one option for incorporating student performance results into the evaluation of teachers in subjects and grades not assessed by state standardized tests. This executive summary investigates how SLOs are used for teacher evaluation at both the state and the district level and suggests considerations that states and districts need to address as they evaluate teacher performance using SLOs. Examples of pilot uses of SLOs in teacher evaluation at both levels are provided.

Executive Summary

 

Project 1 - Student Characteristics and CBT Performance: An Overview of the Literature

One major change in education and assessment under the influence of modern technology is the transition from paper-based to computer-based assessment. Computer-based testing (CBT) is gaining popularity over the traditional paper-and-pencil test (PPT) because of the many advantages that computer-based assessment provides. Meanwhile, more and more educators and researchers have become interested in the factors that influence students' CBT performance. That is, for whom is CBT best suited? What student characteristics matter for effective use of CBT? The objective of this project was to examine the relationship between student characteristics and CBT performance, compared with PPT performance. In the literature, student characteristics such as demographic attributes, learning style, computer familiarity, and test anxiety were found to relate somewhat differently to CBT performance than to PPT performance.

Student Characteristics and CBT Performance / Annotation - Abstract and Key Points / References for the Literature Review

 

Last Modified: August 2014

 