Friday, October 3, 2008

Leonie on Teachers to Be Measured Based on Students' Standardized Test Scores



"Using a complicated statistical formula, the report computes a 'predicted gain' for each teacher's class, then compares it to the students' actual improvements on the test. The result is a snapshot analysis of how much the teacher contributed to student growth."
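The actual DOE formula has not been published (a point raised below), but the basic "predicted gain vs. actual gain" mechanic the report describes can be sketched as a simple regression. Everything in this snippet -- the covariates, the toy numbers, the use of ordinary least squares -- is an illustrative assumption, not the real model:

```python
import numpy as np

# Hypothetical sketch only: the real NYC DOE formula is not public.
# Idea: regress each classroom's mean score gain on classroom-level
# covariates (here, class size and % free lunch), then treat the
# residual -- actual gain minus predicted gain -- as the "teacher effect".

# Toy data: one row per classroom -> [class_size, pct_free_lunch]
X = np.array([
    [20.0, 0.8],
    [28.0, 0.5],
    [24.0, 0.9],
    [31.0, 0.3],
    [22.0, 0.6],
])
actual_gain = np.array([4.1, 5.0, 3.2, 6.1, 4.8])  # mean gain per class

# Fit: predicted_gain = b0 + b1 * class_size + b2 * pct_free_lunch
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, actual_gain, rcond=None)
predicted_gain = A @ coef

# "Value added": how far each class out- or under-performed its prediction
value_added = actual_gain - predicted_gain
for i, resid in enumerate(value_added):
    print(f"class {i}: actual {actual_gain[i]:.1f}, "
          f"predicted {predicted_gain[i]:.1f}, residual {resid:+.2f}")
```

With only a handful of classrooms (or, in reality, the 20-odd students in one class), these residuals are dominated by noise -- which is exactly the reliability concern the email below raises.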

Leonie Haimson writes to her listserve:

What factor did they use in terms of improvements -- one year's gains or losses in test scores? Such a small number of students as are included in one class would likely lead to an even less reliable measurement than the progress category at the school level, which culminated in the highly unreliable school grades.

How does such a highly erratic and variable measure get teachers "comfortable with the data, in a positive, affirming way," as Chris Cerf asserts? How exactly does it "help teachers identify their strengths and weaknesses" as Randi writes?

Moreover, according to the "performance predictor" chart above -- the formula was supposed to control for class size at the classroom and school level. Did it?

It appears so. "The teacher data report balances the progress students make on state tests and their absences with factors that include whether they receive special-education services or qualify for free lunch, as well as the size, race and gender breakdown of the teacher's class."

In an op-ed about evaluating teacher performance in the Daily News in April, Klein wrote: "Nor should test scores be used without controlling for things like where students start academically, class size and demographics."

http://www.nydailynews.com/opinions/2008/04/08/2008-04-08_beware_the_teacher_tenure_trap.html

Will we ever get to see the formula? How much of a factor did they attribute to class size?

I'd like Eduwonkette and other statistical experts to be able to analyze it.
