Thursday, April 17, 2014

Why NYC Will Never Be Syracuse

News this week that the Syracuse Teachers' Union has sued the state's Education Department over the new teacher evaluations came with a bit of a thud and then just sort of vanished.  At the heart of the lawsuit is this allegation:

The evaluation system led to about 35 percent of Syracuse teachers getting "developing" or "ineffective" ratings in 2012-13 after appeals were decided. Those are the two lowest ratings in the four-tier system.
About 5 percent of teachers across the state got those ratings. In Onondaga County outside of Syracuse, only 1.8 percent of teachers were rated in the lowest two categories.
Specifically, the STA said the Education Department failed to recognize the full impacts of poverty on students when it set the standards on student improvement on the state's fourth- through eighth-grade math and English language arts tests.

The decision to sue NYSED over teacher evals should, by rights, be a pretty big deal. This is, after all, the first major lawsuit SED has faced over the APPR system that has been put into place in districts across the state. The reality, however, is that the lawsuit brought by the Syracuse union won't much affect districts in other parts of the state. That's because the method Syracuse chose for evaluating student test scores differs from the one most other New York State districts chose.

In selecting how student test scores would be used to measure teachers, districts across the state had a choice. They could select Option A and have NYSED set target scores for each student. Under this target method, districts would send their students' biographic data up to NYSED. SED would examine the background of each student (as well as how he or she had performed on previous exams), then return target goals for each student to the district. Teachers were held responsible for whether or not their students reached those targets. Option B, however, used a method called SGP, or student growth percentiles. Under this option, each student would be compared with other students who:

1) Scored the same on the state assessment
2) Had scored the same on the previous, baseline assessment
3) Had similar biographic information (such as ethnicity and socio-economic background)
4) Had been with the teacher for roughly the same period of time and
5) Had roughly the same attendance in school during roughly the same period of time.

I know, it sounds confusing (you can see NYSED's video, which I posted here), so let's boil it down to this: under Option B, SED is comparing similar students to one another, then placing the performance of all of those "similar students" on a large bell curve. So if your low-income, beginner-level ESL student performed better than 65% of the other *beginner-level ESL students*, that reflects well on you. The same goes for the affluent boy who sits next to the low-income, beginner-level ESL student. If he performed better than 55% of other affluent male students across the state, then that reflects well on you too.

After your students are measured against similar students on a bell curve, your Mean Growth Percentile (MGP) is calculated. You, as a teacher, are then placed on a bell curve alongside other teachers across the state whose students took the same assessment.
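To make the mechanics concrete, here's a minimal sketch of the growth-percentile idea in Python. This is my own illustration, not NYSED's actual model; the function names and the toy peer scores are invented for the example.

```python
# Hypothetical sketch of Option B (student growth percentiles), not
# NYSED's actual model: rank each student's current score against a
# peer group of similar students, then average those percentiles to
# get the teacher's Mean Growth Percentile (MGP).

def growth_percentile(student_score, peer_scores):
    """Percent of comparable peers the student outscored."""
    beaten = sum(1 for s in peer_scores if s < student_score)
    return 100.0 * beaten / len(peer_scores)

def mean_growth_percentile(class_results):
    """Average the growth percentiles across a teacher's students.

    class_results: list of (student_score, peer_scores) pairs, where
    peer_scores are current-year scores of similar students (same
    baseline score, background, attendance, and so on).
    """
    percentiles = [growth_percentile(score, peers)
                   for score, peers in class_results]
    return sum(percentiles) / len(percentiles)

# The beginner-level ESL student from the example, with ten
# invented peer scores:
esl_peers = [60, 62, 64, 66, 68, 70, 72, 74, 76, 78]
print(growth_percentile(71, esl_peers))  # 60.0: outscored 6 of 10 peers
```

The point of the sketch is that every number here is relative: a student's percentile depends only on where similar students landed, not on any fixed target score.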

If you're confused by all that, don't worry! Because the only thing you need to understand is the bell curve. I'm no math genius, you see, but when ratings are handed out by rank on a bell curve, it is not possible for over one third of a group to land at the bottom of it. If 35% of Syracuse teachers were rated in the lower two categories of the APPR, then it's clear that no bell curve was used.
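The arithmetic behind that claim can be sketched in a few lines of Python. This is my own illustration, not SED's formula: when ratings are assigned by percentile rank, the share of teachers falling below any cutoff is fixed by construction.

```python
# Illustration (mine, not SED's formula): when teachers are rated by
# percentile rank among all teachers, the fraction falling below any
# cutoff is pinned down in advance -- it cannot balloon to 35% unless
# the cutoff itself is set at the 35th percentile.

def bottom_share(percentile_ranks, cutoff):
    """Fraction of teachers whose percentile rank falls below cutoff."""
    below = sum(1 for p in percentile_ranks if p < cutoff)
    return below / len(percentile_ranks)

# 1,000 teachers, evenly ranked from the 0th to the 99.9th percentile:
ranks = [i / 10 for i in range(1000)]

# With the cutoff at the 5th percentile, exactly 5% land in the bottom
# band, no matter how the underlying raw scores are distributed.
print(bottom_share(ranks, 5))  # 0.05
```

That's the structural difference between the two options: under the growth model the bottom share is set by the cutoff, while under the target model it depends entirely on whether the targets were realistic.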

And if a bell curve wasn't used, it means Syracuse let the state set the targets for its students. This is why
About 5 percent of teachers across the state got those ratings. In Onondaga County outside of Syracuse, only 1.8 percent of teachers were rated in the lowest two categories.
It's because the overwhelming majority of those districts chose the option that uses the bell curve. Say what you want about this APPR system. But if it uses multiple measures (including measuring the performance of each student in multiple ways), then each and every measure has a bell curve. And in order to be rated "SUCKS EGGS," you would have to be unfortunate enough to find yourself at the bottom of the majority of the bell curves you're being placed on.

Your luck would have to be so bad as to be in a plane crash and be run over by a car on the same day (that's like Jack Bauer bad luck).

NYC offered its schools' MOSL committees the option of going with "TARGET" scores (the option Syracuse went with) or "GROWTH" scores (the option I described as Option B here). Many schools in New York City chose to go with growth scores.

Don't get me wrong; I think Syracuse has a pretty strong case. 40% of Syracuse students live in poverty. If SED set targets that 35% of the teachers didn't 'measure up' to, then it's plain that King & Co. weren't setting realistic goals. They should pay for that.

I'm just saying that sticking with the bell curve gives SED less of a chance to screw up teachers' careers. That's what many schools did here in New York City. And that's why what happened in Syracuse will never happen here.

