
Common Core Implementation

We've outsourced this post on Common Core State Standards to guest contributor Christina Hank. Christina is a Curriculum Coordinator for Medina City Schools. You can read more of her work at turnonyourbrain.wordpress.com, and you should definitely follow her on Twitter at @ChristinaHank.

There’s been a lot of confusion around what’s happening to curriculum in Ohio education. Let’s break it down into two pieces: standards and assessments.

STANDARDS

Standards are the platform for everything that is taught in a school district; we go above and beyond the standards to address all the needs of children, such as social and emotional growth. By themselves, standards do not impact anything in our classrooms; they are documents that sit on shelves. It is in how we implement the standards and integrate their intent into our teaching practices that they come to play any role in teaching and learning.

So, what are the standards in Ohio?

Common Core State Standards (CCSS)—The CCSS are a set of standards for kindergarten through grade twelve in English language arts and mathematics. Many states have adopted them as a common set of standards.

Are included in…

Ohio’s New Learning Standards—Ohio’s New Learning Standards is the title given to all of Ohio’s standards in all content areas (including the CCSS in English language arts and math).

ASSESSMENTS

Standards are not the same as their assessments, even though “Common Core” is being used interchangeably with everything that is happening right now. Though the assessments of our new learning standards are rooted in the standards and attempt to assess their intent, the assessments are a separate piece of educational reform.

Partnership for Assessment of Readiness for College and Careers (PARCC)—PARCC is one of two national testing consortia developing assessments for the CCSS in English language arts and math. Ohio and 21 other states belong to this consortium, which means Ohio’s students will be taking the same tests as students in all of those other states (unlike Ohio’s current assessments, which are taken only by students in this state). In each subject (English language arts and math), the testing is structured as two optional tests in the fall (these may not be ready by 2014-2015) and two tests in the spring. The first of the spring tests in each subject will be given around March and will be a performance-based assessment. The second will be given in May and will be an End of Year test.

Are included in…

Next Generation Assessments—This all-encompassing term includes both Ohio-developed tests in social studies and science as well as the PARCC tests in English language arts and math.

TIMELINE

As it is almost the start of the 2013-2014 school year (where is the summer going?!), we’re entering the final year of Ohio Achievement/Graduation Assessments and getting ready for our first year of Next Generation Assessments in 2014-2015.

Assessments

English 2014-2015:
  • MS: PARCC tests for grades 3-8 (national)
  • HS: PARCC End of Course exams (national)

Mathematics 2014-2015:
  • MS: PARCC tests for grades 3-8 (national)
  • HS: PARCC End of Course exams in Algebra 1, Geometry, and Algebra 2 OR Math 1, Math 2, and Math 3, depending on student track (national)

Social Studies 2013-2014:
  • MS: continue with the OAA
  • HS only: our self-created End of Course assessment in U.S. History and Government

Social Studies 2014-2015:
  • MS: grades 4 and 6, grade-level tests (not cumulative); the new social studies tests will be "Next Generation Assessments" reflective of the PARCC tests
  • HS: state-created End of Course exam in U.S. History and Government

Science 2014-2015:
  • MS: grades 5 and 8, grade-level tests (not cumulative); the new science tests will be "Next Generation Assessments" reflective of the PARCC tests
  • HS: state-created End of Course exams in Biology and Physical Science

10 reasons why VAM is harmful to students

[...]No one is asking how value-added assessments may affect the very students that this evaluation system is intended to help. By my count, there are at least ten separate ways in which value-added assessment either does not accurately measure the needs of a student or is actually harmful to a child’s education. Until these flaws are addressed, value-added assessment will be nothing more than a toy for politicians and headline writers, not a serious tool for improving learning.

1. The premise of value-added assessment is that standardized tests are an accurate and decisive measure of student learning. In fact, standardized testing is neither definitive nor especially reliable. City and state exams are snapshots, not in-depth diagnostic tools.

2. Value-added assessments will ultimately require all students to take standardized exams, whether or not such examinations are developmentally appropriate. Kindergarteners and first graders will be subjected to the same pressures of high-stakes testing as older children.

3. Value-added assessments will dramatically increase the number of standardized tests for each student. Children will need to take exams in subjects such as art, music, and physical education in order to evaluate the teachers of those subjects.

4. The most successful students will get less enrichment work and more test prep. It is actually more difficult to improve the scores of gifted students since they have already done so well on standardized exams.

5. Teachers will need to avoid necessary remediation in order to attain short-term gains in test scores. Most standardized English tests require students to demonstrate higher-order thinking skills, yet a growing body of academic research indicates that many children—especially those growing up in poverty—require huge boosts of vocabulary to function well in school. Teachers may be forced to forgo a vocabulary-rich curriculum that would have the most long-term benefits for their children. Instead, they will have to focus on the skills that might help students gain an extra point or two on this year's tests.

[readon2 url="http://www.dailykos.com/story/2013/03/11/1193372/-Ten-Reasons-Why-Value-Added-Assessments-are-Harmful-to-a-Child-s-Education"]Continue reading...[/readon2]

Misconceptions and Realities about Teacher Evaluations

A letter signed by 88 educational researchers from 16 universities was recently sent to the Mayor of Chicago regarding his plans to implement a teacher evaluation system. Because of the similarities between the Chicago plan and Ohio's, we thought we would reprint the letter here.

In what follows, we draw on research to describe three significant concerns with this plan.

Concern #1: CPS is not ready to implement a teacher-evaluation system that is based on significant use of “student growth.” For Type I or Type II assessments, CPS must identify the assessments to be used, decide how to measure student growth on those assessments, and translate student growth into teacher-evaluation ratings. They must determine how certain student characteristics such as placement in special education, limited English-language proficiency, and residence in low-income households will be taken into consideration. They have to make sure that the necessary technology is available and usable, guarantee that they can correctly match teachers to their actual students, and determine that the tests are aligned to the new Common Core State Standards (CCSS).

In addition, teachers, principals, and other school administrators have to be trained on the use of student assessments for teacher evaluation. This training is on top of training already planned about CCSS and the Charlotte Danielson Framework for Teaching, used for the “teacher practice” part of evaluation.

For most teachers, a Type I or II assessment does not exist for their subject or grade level, so most teachers will need a Type III assessment. While work is being done nationally to develop what are commonly called assessments for “non-tested” subjects, this work is in its infancy. CPS must identify at least one Type III assessment for every grade and every subject, determine how student growth will be measured on these assessments, and translate the student growth from these different assessments into teacher-evaluation ratings in an equitable manner.

If CPS insists on implementing a teacher-evaluation system that incorporates student growth in September 2012, we can expect to see a widely flawed system that overwhelms principals and teachers and causes students to suffer.

Concern #2: Educational research and researchers strongly caution against teacher-evaluation approaches that use Value-Added Models (VAMs).

Chicago already uses a VAM statistical model to determine which schools are put on probation, closed, or turned around. For the new teacher-evaluation system, student growth on Type I or Type II assessments will be measured with VAMs or similar models. Yet, ten prominent researchers of assessment, teaching, and learning recently wrote an open letter that included some of the following concerns about using student test scores to evaluate educators[1]:

a. Value-added models (VAMs) of teacher effectiveness do not produce stable ratings of teachers. For example, different statistical models (all based on reasonable assumptions) can yield different effectiveness scores. [2] Researchers have found that how a teacher is rated changes from class to class, from year to year, and even from test to test. [3]

b. There is no evidence that evaluation systems that incorporate student test scores produce gains in student achievement. In order to determine if there is a relationship, researchers recommend small-scale pilot testing of such systems. Student test scores have not been found to be a strong predictor of the quality of teaching as measured by other instruments or approaches. [4]

c. Assessments designed to evaluate student learning are not necessarily valid for measuring teacher effectiveness or student learning growth. [5] Using them to measure the latter is akin to using a meter stick to weigh a person: you might be able to develop a formula that links height and weight, but there will be plenty of error in your calculations.
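As a concrete (and deliberately oversimplified) illustration of point (a), the sketch below fits two defensible value-added specifications to the same simulated students and ranks the same five teachers each way. Everything in it is hypothetical: the data are random, and neither specification is the model Chicago, Ohio, or PARCC actually uses. With realistically noisy scores and small classes, the two rankings will often disagree, which is exactly the instability the researchers describe.

```python
# Toy illustration of value-added instability -- hypothetical data and
# simplified models, NOT the model any district or state actually uses.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, per_class = 5, 30

# One class of simulated students per teacher: a prior-year score, a poverty
# indicator, and a current-year score driven by both plus a small true
# "teacher effect" and classroom noise.
teacher = np.repeat(np.arange(n_teachers), per_class)
prior = rng.normal(50, 10, size=teacher.size)
poverty = rng.binomial(1, 0.4, size=teacher.size)
true_effect = rng.normal(0, 2, size=n_teachers)
current = (5 + 0.9 * prior - 3 * poverty
           + true_effect[teacher] + rng.normal(0, 8, size=teacher.size))

def teacher_ranking(adjust_for_poverty):
    """Simple covariate-adjustment VAM: regress current scores on prior scores
    (and optionally poverty), then rank teachers by their mean residual."""
    cols = [np.ones_like(prior), prior]
    if adjust_for_poverty:
        cols.append(poverty.astype(float))
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, current, rcond=None)
    residuals = current - X @ beta
    effects = np.array([residuals[teacher == t].mean() for t in range(n_teachers)])
    return effects.argsort()[::-1]  # teacher ids, ordered best to worst

print("Ranking (prior score only):     ", teacher_ranking(False))
print("Ranking (prior score + poverty):", teacher_ranking(True))
```

Both specifications are "reasonable," yet a teacher's place in the ordering can change simply because of which student characteristics the analyst chooses to adjust for.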

Concern #3: Students will be adversely affected by the implementation of this new teacher-evaluation system.

When a teacher’s livelihood is directly impacted by his or her students’ scores on an end-of-year examination, test scores move front and center. The nurturing relationship between teacher and student changes for the worse, including in the following ways:

a. With a focus on end-of-year testing, there inevitably will be a narrowing of the curriculum as teachers focus more on test preparation and skill-and-drill teaching. [6] Enrichment activities in the arts, music, civics, and other non-tested areas will diminish.

b. Teachers will subtly but surely be incentivized to avoid students with health issues, students with disabilities, students who are English Language Learners, or students suffering from emotional issues. Research has shown that no model yet developed can adequately account for all of these ongoing factors. [7]

c. The dynamic between students and teacher will change. Instead of “teacher and student versus the exam,” it will be “teacher versus students’ performance on the exam.”

d. Collaboration among teachers will be replaced by competition. With a “value-added” system, a 5th grade teacher has little incentive to make sure that his or her incoming students score well on the 4th grade exams, because incoming students with high scores would make his or her job more challenging.

e. When competition replaces collaboration, every student loses.

You can read the whole letter below.

Misconceptions and Realities about Teacher and Principal Evaluation

Parents Agree – Better Assessments, Less High-Stakes Testing

Educators aren’t alone in being fed up with narrow, punitive student accountability measures. Parents also want well-designed, timely assessments that monitor individual student performance and progress across a range of subjects and skills. That’s one of the key findings in a new study by the Northwest Evaluation Association (NWEA).

NWEA, a non-profit educational services organization headquartered in Portland, set out to find how the views of parents – often ignored in the debate over the direction of public education – stacked up against those of teachers and administrators.

After conducting online surveys of more than 1,000 respondents, NWEA found that these stakeholders essentially want the same thing. Large majorities say that, although year-end tests might provide some sort of useful snapshot, they strongly prefer more timely formative assessments to track student progress and provide educators with the flexibility to adjust their instruction during the school year.

“The research reinforces the notion that no one assessment can provide the breadth and depth of information needed to help students succeed,” explained Matt Chapman, president and CEO of NWEA. “For every child we need multiple measures of performance.”

As the reauthorization of the Elementary and Secondary Education Act (ESEA) slowly moves forward on Capitol Hill, redefining how student progress is measured will be a key debate. The National Education Association believes it is time to move beyond the No Child Left Behind law (the 2001 revision of ESEA), scrap the obsession with high-stakes testing, and enter a new phase of education accountability.

“Well-designed assessment systems do have a critical role in student success,” said NEA President Dennis Van Roekel. “We should use assessments to help students evaluate their own strengths and needs, and help teachers improve their practice and provide extra help to the students who need it.”

“I use different types of assessments because all students are different,” explained Krista Vega, a middle school teacher in Maryland and NEA member who participated in the NWEA survey. “I use quizzes, games, teacher-made tests, computerized tests, portfolios, and alternative assignments.”

“What I’m looking for is, first, are they mastering the skill I’m trying to teach, or did they not master the skill? I’m looking to see if there is an area of weakness. I’m looking to see if they have background knowledge sometimes. There’s just a whole range of things that I’m looking for,” Vega said.

Source: Northwest Evaluation Association and Grunwald Associates

According to the survey, it is the types of formative assessments Vega identifies, such as quizzes, portfolios, homework, and end-of-unit tests, that provide timely data about individual student growth and achievement. Respondents cited these types of assessments as giving educators the information they need to pace instruction and ensure students learn fundamental skills.

Parents are also worried about the narrowing of the curriculum. Large majorities believe it is important to measure students in math and English/language arts but also say it is important to measure performance in science, history, government and civics, and environmental literacy.

The students who are often hurt the most by a restricted curriculum are those who don’t have the opportunities, because of their socioeconomic background, to diversify their learning outside the classroom.

Beyond subject matter, parents and educators believe so-called “higher order” thinking skills such as creativity, communication, problem-solving, and collaboration – so critical in the modern economy and workplace – aren’t being properly measured by current assessment systems.

“It is really, really important,” Vega says, “that we prepare students for when they enter the workforce to compete in the 21st century.”

Read the NWEA Report “For Every Child, Multiple Measures”

More on NEA’s Position on Student Assessments (Word Document)

Mapping State Proficiency Standards Onto NAEP Scales

The National Assessment of Educational Progress (NAEP) has just published its report "Mapping State Proficiency Standards Onto NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005-2009".

This research looked at the following issues:

How do states’ 2009 standards for proficient performance compare with one another when mapped onto the NAEP scale?
  • There is wide variation among state proficiency standards.
  • Most states’ proficiency standards are at or below NAEP’s definition of Basic performance.

How do the 2009 NAEP scale equivalents of state standards compare with those estimated for 2007 and 2005?
  • For states that made substantive changes in their assessments between 2007 and 2009, most moved toward more rigorous standards as measured by NAEP.
  • For states that made substantive changes in their assessments between 2005 and 2009, changes in the rigor of their standards as measured by NAEP were mixed, but showed more decreases than increases.

Does NAEP corroborate a state’s changes in the proportion of students meeting the state’s standard for proficiency from 2007 to 2009? From 2005 to 2009?
  • Changes in the proportion of students meeting states’ standards for proficiency between 2007 and 2009 are not corroborated by the proportion of students meeting proficiency, as measured by NAEP, in at least half of the states in the comparison sample.
  • Comparisons of changes between 2005 and 2009 were mixed.
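The mapping behind these findings is essentially an equipercentile placement: the NAEP scale equivalent of a state's proficiency standard is the NAEP score at which the share of that state's NAEP-tested students scoring at or above it matches the share who met the state's own standard. The snippet below sketches that idea with simulated scores and a made-up percentage; the report's actual procedure works school by school with sampling weights, none of which is modeled here.

```python
# Rough sketch (our simplification, simulated data) of the equipercentile idea
# behind mapping a state proficiency cut onto the NAEP scale.
import numpy as np

rng = np.random.default_rng(1)

# Simulated NAEP reading scores for the NAEP-tested students in one state.
naep_scores = rng.normal(220, 35, size=5000)

# Suppose the state reports that 78% of those same students met its own
# "proficient" cut on the state test (a made-up figure for illustration).
share_meeting_state_standard = 0.78

# The NAEP scale equivalent of the state standard is the NAEP score with that
# same share of students at or above it, i.e. the (1 - 0.78) quantile.
naep_equivalent = np.quantile(naep_scores, 1 - share_meeting_state_standard)

print(f"Estimated NAEP scale equivalent of the state's standard: {naep_equivalent:.0f}")
# That equivalent can then be compared with NAEP's own Basic and Proficient
# cut scores, which is how the report concludes that most state standards
# fall at or below NAEP Basic.
```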

The full report can be found here (PDF). We've pulled out some of the graphs that show Ohio's performance vs the rest of the country for each of the 4th and 8th grade reading and math achievement levels.

4th grade reading

8th grade reading

4th grade math

8th grade math

Common Core Cooperation?

Terry Ryan of the Fordham Institute had a sit-down with the new Ohio Superintendent, Stan Heffner, and discussed the development of Ohio's common core academic standards. Heffner revealed to Ryan that he believed teachers' input would be crucial to success.

Heffner argued to me (and previously had written in a February 2011 paper for the Council of Chief State School Officers) that the successful implementation of the Common Core, in any state, will come down to teacher involvement and ultimate buy-in. He believes that teachers should be involved in the implementation process in five significant ways:
  • They must have a significant presence in the development of the new common assessments.
  • They will have to change their instructional practices in critical ways if the Common Core is to ultimately lead to higher levels of student achievement.
  • They will need model curricula – either generated by states themselves or by SBAC or PARCC in partnership with states – to help them understand and embrace the rigor and expectations of the Common Core standards.
  • They must be involved in the development of the model curricula.
  • They will need significant amounts of professional development in order to change their established practices and culture in favor of a new design that the Common Core standards and common assessments will demand.

We can only hope that cooperation breaks out, so that Ohio education policy can take a turn for the better.