“We, the undersigned, have been empowered by the Constitution of the State of New York and appointed by the New York State Legislature to serve as the policy makers and guardians of educational goals for the residents of New York State. As Regents, we are obligated to determine the best contemporary approaches to meeting the educational needs of the state’s three million P-12 students as well as all students enrolled in our post-secondary schools and the entire community of participants who use and value our cultural institutions” (Comments by seven members of the Board of Regents)
The New York State Board of Regents is the oldest state education policy board in the nation, established in 1784 and enshrined in the state constitution. The current 17-member board is "elected" by a joint meeting of the state legislature. Selection as a Regent was long an honorific, a reward for extended service to the state. Members served term after term in anonymity, met monthly from September to August, selected a commissioner, and set policy, though the policy generally originated with the commissioner. Typically the commissioner was a state superintendent with long and distinguished service.
All that changed with the election of Merryl Tisch to replace Bob Bennett as chancellor. The quiescent board became an activist board under Tisch's leadership. An outsider, David Steiner, was selected as commissioner; a report was commissioned that exposed problems with the state testing program; and, with the resignation of Steiner and the appointment of John King, the board pursued aggressive policies championed by the new commissioner. In line with the Arne Duncan game plan, one major initiative after another was approved by the Regents, frequently over the objections of Regents Cashin and Rosa.
In order to qualify for a Race to the Top grant, the state entered into long negotiations with the state teacher and principal unions, and an agreement was reached, called the Annual Professional Performance Review (APPR): teachers and principals would be assessed annually using a combination of student progress on standardized tests, locally negotiated measures of student achievement, and supervisory observations.
In spite of trepidation, the APPR plan proved benign: with exceptions in a few districts, the overwhelming percentage of teachers received "highly effective" or "effective" ratings, and only 1% received an "ineffective" rating.
The governor was displeased. Last spring he had accepted millions in charter school campaign contributions and forced New York City either to co-locate charter schools in public school buildings or to pay the rent for non-school space. Teacher anger grew; the teachers union made no endorsement for governor, and it was clear that many teachers voted in the Democratic primary for Zephyr Teachout, an unknown law professor from Fordham. While Cuomo won the primary handily, it was not the landslide he anticipated, and in the November general election he won again, though not by the landslide he had clearly expected.
On December 18th, barely six weeks after the election, Jim Malatras, the governor's chief of operations, penned an accusatory letter to Chancellor Tisch and Commissioner King. The letter listed a series of condemnatory questions, demanded responses, and threatened to take over the policy-making role of the board.
As you know, Governor Cuomo has little direct power over education, which is governed by the Board of Regents; the governor's power lies in the budget process, and he intended to push his reforms through during that process.
Read the full text of the Malatras letter here.
A few weeks later Chancellor Tisch and Acting Commissioner Berlin responded meekly, largely agreeing with the accusations Cuomo made through Malatras:
Differentiation is a necessary component of any evaluation system intended to support professional development and growth. However, as the Governor has previously indicated, changes in State law are necessary in order to achieve better differentiation and to fulfill the goal of a Statewide evaluation system that identifies those who are excelling so that they can be mentors for their colleagues, identifies those who are struggling so they can get support to improve, and informs high-quality professional development for all educators.
Read the full text of the Tisch-Berlin response here.
During the last hours of the budget process, the governor did exactly what he had threatened: he forced through a totally new teacher evaluation system based on a Massachusetts model known as the "Matrix."
The board convened an "Education Learning Summit," an all-day series of speakers, experts, and stakeholders invited to express opinions. Three of the four experts were critical of the use of student growth data for high-stakes teacher assessment.
At the May Regents meeting, acting commissioner Ken Wagner outlined a 56-page proposal to implement the new teacher evaluation law; the reconstituted board, however, was much less sanguine: the four newly appointed Regents, along with the original critics, expressed displeasure with the new proposals.
Seven members of the board (let's use Diane Ravitch's term and call them the dissidents) objected to the role of the governor and outlined an alternative plan (read Ravitch's blog post here).
Read the entire position paper at the end of the post.
On Monday the board convenes in Albany; under the current law, the board must turn the statute into regulations. The guidelines introduced by the "dissident" seven directly challenge the December 18th Malatras letter.
The law clearly gives the board substantial leeway to establish regulations; will the remainder of the board support the dissident seven, reject their assertions, or find another path?
The Board of Regents, for the first time in anyone's memory, is asserting itself and reminding the governor that the state constitution grants the power to determine educational policy to the Regents, not the governor.
Unfortunately, the P-12 Committee meeting, at which the action will take place, is not webcast. I will be tweeting (@edintheapple) as fast as my fingers can "tweet."
Maybe, just maybe, a brave group of board members will listen to their constituents and return education to parents and educators.
Position Paper: Amendments to Current APPR Proposed Regulations
By the signatories below, June 2, 2015
We, the undersigned, have been empowered by the Constitution of the State of New York and appointed by the New York State Legislature to serve as the policy makers and guardians of educational goals for the residents of New York State. As Regents, we are obligated to determine the best contemporary approaches to meeting the educational needs of the state’s three million P-12 students as well as all students enrolled in our post-secondary schools and the entire community of participants who use and value our cultural institutions.
We hold ourselves accountable to the public for the trust they place in our ability to represent them and to educate them about the outcomes of our actions, which requires that we engage in ongoing evaluation of our efforts. The results of our efforts must be transparent and invite public comment.
We recognize that we must strengthen the accountability systems intended to ensure our students benefit from the most effective teaching practices identified in research.
After extensive deliberation that included a review of research and information gained from listening tours, we have determined that the current proposed amendments to the APPR system are based on an incomplete and inadequate understanding of how to address the task of continuously improving our educational system.
Therefore, we have determined that the following amendments are essential, and thus required, in the proposed emergency regulations to remedy the current malfunctioning APPR system.
What we seek is a well-thought-out, comprehensive evaluation plan that sets the framework for establishing a sound professional learning community for educators. To that end we offer these carefully considered amendments to the emergency regulations.
I. Delay implementation of district APPR plans based on April 1, 2015 legislative action until September 1, 2016.
A system that has integrity, fidelity and reliability cannot be developed absent time to review research on best practices. We must have in place a process for evaluating the evaluation system. There is insufficient evidence to support using test measures that were never meant to be used to evaluate teacher performance.
We need a large-scale study that collects rigorous evidence of fairness and reliability, and the results need to be published annually. The current system should not simply be repeated with a greater emphasis on a single test score. We do not understand and do not support the elimination of the instructional evidence that defines the teaching, learning, and achievement process as an element of the observation process.
Revise the submission date. Allow all districts to submit, by November 15, 2015, a letter of intent regarding how they will utilize the time to review and revise their current APPR plan.
II. A. Base the teacher evaluation process on student standardized test scores, consistent with research; the scores will account for no more than 20% on the matrix.
B. Base 80% of the teacher evaluation on student performance, leaving the following options for local school districts to select from: keeping the current local measures; generating new assessments with performance-driven student activities (performance assessments, portfolios, scientific experiments, research projects); utilizing options like the NYC Measures of Student Learning; and corresponding student growth measures.
C. Base the teacher observation category on the NYSUT and UFT scoring ranges, using their rounding-up process rather than the percentage process.
III. Base no more than 10% of the teacher observation score on the work of external/peer evaluators, an option to be decided at the local district level, where decisions as to what training is needed will also be made.
IV. Develop weighting algorithms that accommodate the developmental stages of English Language Learners (ELLs) and students with disabilities (SWDs). Testing of ELL students who have had fewer than three years of English language instruction should be prohibited.
V. Establish a work group that includes respected experts and practitioners who are to be charged with constructing an accountability system that reflects research and identifies the most effective practices. In addition, the committee will be charged with identifying rubrics and a guide for assessing our progress annually against expected outcomes.
Our recommendations should provide flexibility, allowing school systems to submit locally developed accountability plans that offer evidence of rigor, validity, and a theory of action that defines the system.
VI. Establish a work group to analyze the elements of the Common Core Learning Standards and Assessments to determine levels of validity, reliability, rigor and appropriateness of the developmental aspiration levels embedded in the assessment items.
No one argues against the notion of a rigorous, fair accountability system. We disagree with the implied theory of action that frames its tenets, such as firing educators instead of promoting a professional learning community that attracts and retains talented educators, one committed to ensuring that our educational goals include preparing students to be contributing members of society, committed to sustaining and improving the standards that represent a democratic society.
We find it important to note that researchers who often represent opposing views about the characteristics that define effective teaching do agree on the dangers of using the VAM student growth model to measure teacher effectiveness. They agree that effectiveness can depend on a number of variables that are not constant from school year to school year. Raj Chetty, a Harvard University professor often quoted as an expert in the interpretation of VAM, along with co-researchers Friedman and Rockoff, offers the following two cautions: "First, using VAM for high-stakes evaluation could lead to unproductive responses such as teaching to the test or cheating; to date, there is insufficient evidence to assess the importance of this concern. Second, other measures of teacher performance, such as principal evaluations, student ratings, or classroom observations, may ultimately prove to be better predictors of teachers' long-term impacts on students than VAMs. While we have learned much about VAM through statistical research, further work is needed to understand how VAM estimates should (or should not) be combined with other metrics to identify and retain effective teachers."i Linda Darling-Hammond agrees; in a March 2012 Phi Delta Kappan article she cautions that "none of the assumptions for the use of VAM to measure teacher effectiveness are well supported by evidence."ii
We recommend that, while the system is under review, we minimize the disruption to local school districts for the 2015-16 school year and allow for a continuation of approved plans in light of the phase-in of the amended regulations.
Last year, Vicki Phillips, Executive Director for the Gates Foundation, cautioned districts to move slowly in the rollout of an accountability system based on the Common Core and advised a two-year moratorium before using the system for high-stakes outcomes. Her cautions were endorsed by Bill Gates.
We, the undersigned, wish to reach a collaborative solution to the many issues before us, specifically at this moment, the revisions to APPR. However, as we struggle with the limitations of the new law, we also wish to state that we are unwilling to forsake the ethics we value, thus this list of amendments.
Kathleen Cashin
Judith Chin
Catherine Collins
*Josephine Finn
Judith Johnson
Beverly L. Ouderkirk
Betty A. Rosa
*Regent Josephine Finn said: "I support the intent of the position paper."
i Raj Chetty, John Friedman, and Jonah Rockoff, "Discussion of the American Statistical Association's Statement (2014) on Using Value-Added Models for Educational Assessment," May 2014, retrieved from: http://obs.rc.fas.harvard.edu/chetty/value_added.html. The American Statistical Association (ASA) concurs with Chetty et al. (2014): "It is unknown how full implementation of an accountability system incorporating test-based indicators, such as those derived from VAMs, will affect the actions and dispositions of teachers, principals and other educators. Perceptions of transparency, fairness and credibility will be crucial in determining the degree of success of the system as a whole in achieving its goals of improving the quality of teaching. Given the unpredictability of such complex interacting forces, it is difficult to anticipate how the education system as a whole will be affected and how the educator labor market will respond. We know from experience with other quality improvement undertakings that changes in evaluation strategy have unintended consequences. A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. Certain schools may be hard to staff if there is a perception that it is harder for teachers to achieve good VAM scores when working in them. Overreliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole." David Morganstein and Ron Wasserstein, "ASA Statement on Using Value-Added Models for Educational Assessment," published with license by the American Statistical Association, April 8, 2014, published online November 7, 2014: http://amstat.tandfonline.com/doi/abs/10.1080/2330443X.2014.956906. Bacher-Hicks, Kane, and Staiger (2014) likewise admit, "we know very little about how the validity of the value-added estimates may change when they are put to high stakes use. All of the available studies have relied primarily on data drawn from periods when there were no stakes attached to the teacher value-added measures." Andrew Bacher-Hicks, Thomas J. Kane, and Douglas O. Staiger, "Validating Teacher Effect Estimates Using Changes in Teacher Assignments in Los Angeles," NBER Working Paper No. 20657, November 2014, 24-25: http://www.nber.org/papers/w20657.
ii Linda Darling-Hammond, "Can Value Added Add Value to Teacher Evaluation?" Educational Researcher 44 (March 2015): 132-137: http://edr.sagepub.com/content/44/2/132.full.pdf+html?ijkey=jEZWtoEsiWg92&keytype=ref&siteid=spedr.