
Student progress added to teacher evals

In February 2014, the Illinois State Board of Education (ISBE) formally mandated that public schools in Illinois incorporate a “student growth component” as part of teachers’ evaluations by Sept. 1 of the 2016-2017 school year. 

This most recent mandate comes four years after legislation former Illinois Gov. Pat Quinn signed concerning teacher evaluations. PERA, the Performance Evaluation Reform Act (Senate Bill 315; Public Act 96-0861), was passed by the Illinois General Assembly in 2010 and has been the standard for teachers’ performance reviews statewide.

The reforms set forth in the 2010 PERA legislation currently have schools following the Danielson Model, a framework for evaluations that District 225 Superintendent Dr. Michael Riggle said the district has featured as one segment of teachers’ evaluations since approximately 2002. Charlotte Danielson’s techniques, detailed in the Framework for Teaching Evaluation Instrument, have evolved since 1996 to include giving individual teachers overall ratings of unsatisfactory, needs improvement, proficient and excellent. The state adopted this model as well and used the same four categories as criteria to rate teacher performance in PERA. Now, the Danielson Model will be used in combination with student improvement, among other criteria.

“By the time we get to the 2016-2017 school year, we are supposed to have [a] student-based growth component be [25] percent of the teachers’ evaluations,” Riggle said. “[The ISBE has] outlined…how that growth has been measured, and they leave a lot of options for the district to reach compliance when we get to the ’16-’17 school year.”

Rosanne Williamson, assistant superintendent for educational services, detailed the process of setting some of those standards, noting that some of the decisions are made by district personnel in coordination with the state’s requirements, and gave longer-term estimates of how much weight the student growth component will carry in teachers’ evaluations. That weight is set to increase over time.

“PERA requires that the district and the Glenbrook Education Association convene a group called the Joint Committee that will determine the percentage of teachers’ evaluations that will be comprised by student growth,” Williamson said. “The law requires that at a minimum this will be 25 percent in the first two years of implementation, with the minimum changing to 30 percent in the third year and up to a maximum of 50 percent.”

Riggle added that there are a handful of ways to reach compliance with the new standards by the 2016-2017 school year. Given the haziness around some of the requirements, which are still slightly more than a year away from officially taking effect, certain teachers and departments in both schools will be “piloting” the new assessment tools to give the district a better sense of the potential variables in measuring growth before growth becomes a significant component of teachers’ performance reviews.

Not only are the conditions of the new standards ambiguous, but so, too, is the term “student growth,” Riggle noted. A mere reference to the term raises the question of how student progress is actually assessed. Once again, there are multiple ways, three to be specific, in which student growth can be tracked. The requirements listed in the ISBE Student Growth Component Guidebook state that “the evaluation plan shall include at least one Type I or Type II assessment and at least one Type III assessment.”

A Type I assessment is a test scored outside of the district and used widely beyond the state, such as the NWEA tests many students take through eighth grade, the ACT or the recently introduced PARCC tests. Type II assessments, which the district has elected not to use, are district-wide and could include collaboratively developed curriculum tests or tests designed by textbook publishers and then adopted by the schools. Lastly, a Type III assessment is perhaps the tool with which students and teachers are already most familiar; these tests are “(a) rigorous, (b) aligned with the course’s curriculum, and (c) determined by the evaluator and teacher to measure student learning,” according to ISBE. Teachers will often be responsible for creating these assessments.

The district’s primary focus for professional development is working with teachers on how best to develop Type III assessments, according to Williamson. Type III assessments are “more relevant to what they teach” and more specific to teachers’ individual interests, Riggle said. During the first semester of the 2015-16 school year, all teachers will be required to use at least one Type III assessment, Williamson said.

“Philosophically, I am not in agreement with a student growth component as part of the teachers’ evaluations,” Riggle said. “I think student growth is hard to define. It is difficult to quantify. I don’t believe it’s fair. […] When you are dealing with someone’s evaluation, there are many things you’re looking at, and I think [a teacher’s rating] can either be inflated for a teacher because they are working with some outstanding students, [making the teacher seem] a little bit better than they actually are. Or, [student growth] can do the opposite; it can take a really great teacher and make them look very average.”

Principal Dr. Brian Wegley didn’t take as hard a line against student growth measures becoming part of teacher evaluations. He said he was more interested in the vast amount of information about students the tests would yield, which could help put individual results in the context of the larger South student body.

“This isn’t unique to GBS,” Wegley said. “Every school is working on figuring out what’s going to be meaningful to them. I have a bias…I think we can look at growth. We can. I’m not partial to it belonging in teacher evaluations, but I am partial to us sharing collective information that makes sure our students are growing compared to one another, and, more importantly, that all of…the professionals are talking about the implications of that.”

According to Riggle, much of the reasoning behind the current system comes from national issues in education. He explained that when President Barack Obama came into office, the administration started issuing waivers for schools that were unable to meet or exceed the near-impossible No Child Left Behind (NCLB) standards remaining from the preceding George W. Bush presidency. In order to receive a waiver, states were required to begin measuring student growth as a means of gauging schoolwide progress and individual teacher performance.

“This is where PERA came into being in Illinois,” Riggle said. “There were a lot of negotiations, and then there was basically a law that was put together and adopted.”

While there may be a consensus that student growth is the goal of teaching, Riggle feels there are a couple of variables that could throw off the accuracy of some of the student growth measurements and, in turn, the teachers’ evaluations. He pointed to the student side of the equation, noting that there are some tests students decide are worth their time and others they decide simply aren’t. That kind of prioritization continues long after students’ formal educations end and isn’t anything new. The difference going forward is that, while poor performance may not hurt students much in the grand scheme, a lack of effort may affect the scores teachers receive.

Still, Riggle believes that even though folding student growth into teachers’ reviews isn’t really fair, the high levels of student growth he expects to see will mean a relatively minimal impact on the ratings teachers will receive starting in 2016-17.

“Because [student growth] is 30 percent of their evaluation, I think most of our teachers will do very well on that portion,” Riggle said. “The other 70 percent is based on their professionalism, their communication skills, their lesson planning, knowledge of curriculum, methodologies they use, all [of which are determined] by formal observation and self-reflection. But when it comes down to the bottom line, I don’t think the student growth component is going to have a great impact on how they are being rated.”
