Tag Archives: Charlotte Danielson

“I Hate Being Observed! It’s a Waste of Time and too frequently is Harassment.”  (A view commonly held by teachers) Can teacher observations lead to constructive conversations?

A decade ago The New Teacher Project (TNTP) issued a report, “The Widget Effect,” that concluded:

* All teachers are rated good or great

* Professional development is inadequate

* Novice teachers are neglected, and

* Poor performance goes unaddressed

The report has had enormous, and toxic, impacts. The federal government and the states moved to assess teachers using student outcomes on standardized tests through value-added measurement (VAM), a dense algorithm understood only by psychometricians.

For decades teachers were observed once or twice a year, or not at all; the observation was a mechanical process, a compliance chore. Teachers resented, or feared, being observed; supervisors found it burdensome. If you were lucky you were in a school in which the observation process was part of an ongoing discussion of the teaching/learning process.

New York State adopted an Annual Professional Performance Review (APPR) scenario; each school district in the state negotiated a process with the union within strict regulations. In New York City the system was imposed by the state commissioner. The process included VAM scores and observations using a rubric (Danielson, Marshall, Marzano, etc.). The pushback from the unions and parents grew, teachers in high poverty schools received lower VAM scores, and the critics of the VAM methodologies multiplied; finally the Board of Regents declared a four-year moratorium on the use of student test scores, and has just announced a one-year extension to create a new teacher evaluation tool.

While VAM scores are scorned, teacher observations by supervisors are equally flawed. Different supervisors rate the same lesson differently; there is no consensus. The use of a single rubric, in New York City the Danielson Frameworks, simply became another compliance task, a checklist. All observations are entered into a computerized database, ADVANCE, and principals who fall behind in their observations are dunned.

A principal related to me: all the principals in a district were divided into teams and observed classes in a school. The facilitators asked the principals how they would rate the lesson. One principal asked: shouldn’t we be discussing how we would handle the post-observation conference? The facilitator demurred: no, we’re only here to assess the lesson according to Danielson.

Danielson is not the Holy Grail, and, following Danielson to the letter does not guarantee successful student outcomes.

Early in the Danielson era I was at her presentation, at the end I asked,

“Supreme Court Justice Potter Stewart wrote he couldn’t define pornography; however, he knew it when he saw it, isn’t it the same with effective instruction?”

Charlotte disagreed.

She’s incorrect; after watching many hundreds of lessons you can “feel” a good lesson. Different classes of students require different instructional strategies; effective teaching is varying teaching techniques to suit the kids in front of you.

Attempting to use student test scores to assess teacher performance was disastrous, and emphasizing the summative assessment rather than the formative assessment is racing down another wrong path; the light at the end of the tunnel is an oncoming locomotive.

An irony: the other Danielson book, “Talk About Teaching! Leading Professional Conversations” (2009), should be required reading for supervisors.

Danielson writes,

An important mechanism to promote teacher learning …. is that of conversation. Through focused and occasionally structured conversations, teachers are encouraged to think deeply about their work, to reflect on their approaches and student responses. And yet conducting such conversations requires skill. Many teachers assume that if their principal or supervisor wants to discuss the events in a classroom it means there is something wrong … by neglecting to engage in professional conversations with teachers, educational leaders decline to take advantage of one of the most powerful approaches at their disposal to promote teacher learning.

Conducting a post-observation conference is a skill; it should not be a burdensome compliance chore for the observer and the observed.

Post-observation conferences might use the Socratic method, engaging the teacher in a dialogue; or a few teachers might observe colleagues and jointly discuss the lesson among themselves with a facilitator. In my school the principal allowed us to substitute a peer observation system in lieu of traditional supervisory observations. In triads, Teacher A observed B, B observed C and C observed A, all in the same week, each teaching a lesson on a similar topic; the teachers then met and engaged in a facilitated conversation around a template of questions, and the “notes” became the observation report. The participating teachers had never watched a colleague teach, and reflected deeply on their own practice.

The just-approved New York City teacher contract contains two changes to the teacher evaluation section; the number of observations is reduced.

… the contract approved this week also significantly cuts back how often teachers need to be observed under the city’s evaluation system. Top-rated teachers will receive only two classroom visits — down from three or four. For new teachers or those with low marks, observations are cut from a high of six to a low of three.

 And, new professional learning teams will support “school-based professional development committees to align PD to the observations conducted throughout the school year.”

  Professional development on evaluation

  • A professional learning team consisting of UFT and DOE representatives will plan and conduct annual training sessions on the implementation of the evaluation system by the last Friday in October. 
  • The professional learning team will also ensure that teacher development tools and resources will be developed and distributed, including resources regarding evaluation of specific school settings such as co-teaching, special education settings, ENL and physical education.
  • The professional learning team will provide support to school-based professional development committees to align PD to the observations conducted throughout the year. 

Is this meaningful change?

The union took a risk, convincing teachers that formative assessment, conversations, will make them better teachers. Maybe they will jump on board; maybe they will continue to close their doors and do what they do. Maybe the union is alienating members, or maybe it is changing compliance-driven cultures into collaborative school cultures.

Unions are demeaned; the “right-wing” establishment spent years getting the Janus case before the court and maneuvered to get the “right” justices. So far, Janus seems to have motivated unions: teacher strikes in non-collective bargaining states, the public supporting teachers, and a voucher plan in Arizona soundly defeated.

Teachers can continue to win over the public by continuing to improve, as professionals, and by improving the end product, student outcomes.

A friend always reminds staffs that the solution is in the room. Changing school cultures never begins with edicts from superintendents; it begins in teacher lunch rooms, in teacher rooms, from the ground up. Yes, superintendents must seed the fields, must move from para-military attitudes to supporting collaborative cultures.

The union president and the chancellor took a risk: risk-taking can be the path to positive embedded change.

Can Education Technology Narrow the Achievement Gap? A Panel at the Hunter College/Johns Hopkins Policy Forums

“In the early days of television, there was no shortage of predictions that the medium would have a major positive impact on student learning. Today, we find the same optimism among some education reformers with regard to such technologies as digital tablets, data crunching, personalized learning, and adaptive testing. Some research suggests that, carefully used, the application of educational technology brings real gains in student learning. Other research summaries are far more pessimistic…

Our country spends more than $10 billion on K-12 education technology. What can we say with any confidence about the promise and possibilities of such investments? Are there clear conclusions to be drawn about what, where and when the use of technology is beneficial, and for which students? What are the key challenges to be met in maximizing the potential contribution of technology to raise students’ achievement overall and to accelerate the learning of our most underprivileged students?”

From time to time the Hunter/Johns Hopkins Education Policy folks take deep dives into controversial education issues. The format never varies: a brief presentation by each presenter, and then the presenters are grilled by David Steiner. Steiner is a Charlie Rose-like interviewer, asking pointed, probing questions, controlling the interview while allowing sufficient time for audience questions.

Julia Freeland Fisher is the lead researcher at the Clayton Christensen Institute for Disruptive Innovation. Fisher argues that technology is the innovation that can disrupt the current standard pattern of education through blended learning; with the infusion of technology, education can be personalized down to the student-by-student level. Fisher, a lawyer by training, writes a regular blog at the Institute. In New York City the credit recovery scandal has made us highly suspicious of “disruptive” innovations.

Jamie Stewart is the co-Head of School and Lead Educator at AltSchool in Brooklyn Heights. AltSchool is a micro school, a number of very small private schools designed for personalized learning, with tuition of about $30,000 a year.

Kevin Wenzel is the Specialist for Blended Learning in the District of Columbia Public Schools; the DC public schools have 49,000 kids in 112 schools, and DC leads the nation in increases in 4th grade NAEP scores in the TUDA results.

DC Public Schools (DCPS) students grew by eight points in 4th grade reading over the 2013 test, representing the biggest increase of any school district and the largest increase in the history of the 4th-grade reading test. DCPS students also saw a four-point increase in 4th grade math scores, no change in 8th grade reading scores, and a two-point drop in 8th grade math scores.

The growth of blended learning and rotational teams of students has been widely praised, especially by the conservative side of the educational debate.

Steiner began by asking Fisher whether she was aware of any double-blind studies showing that technology-based instruction has better results than traditional or constructivist classrooms. Fisher responded that it was the wrong question; Steiner insisted, and her answer was no. Steiner asked Wenzel why the 8th grade scores were flat in English and declined in Math; Wenzel didn’t know.

The Q & A was fascinating.  Were the panelists aware that the revolution against testing was exploding around the nation, and, the same suspicion of technology replacing traditional instruction was also growing?  Sort of …  a weak yes…

Were the panelists aware that parents and teachers saw the move to technology as a way of replacing teachers and saving money? Again, sort of….

Were the panelists aware that many educators saw the technology revolution as the private sector ripping off the public sector for education dollars? Fisher responded that innovation comes from the private sector and the public sector should be open to new technologies, avoiding the essence of the question.

The use of technology is widespread in schools across the nation; ClassDojo is an easy to use and popular communications/classroom management tool. The net offers an endless array of lesson plans and classroom materials by grade and subject. Teachers create Facebook pages for classes, kids write blogs; the use of cyber tools is widespread.

Integrating cyber tools into the instructional fabric of a classroom is a challenge, and using cyber tools to replace instructional techniques may or may not improve outcomes. The Charlotte Danielson instructional frameworks describe a “highly developed” lesson as one with a deep level of classroom discussion among students: “… research in cognitive psychology has confirmed, namely, that students learn through active intellectual engagement with content.” Is tapping away on a tablet or iPad the equivalent of “active intellectual engagement”? I think not.

A Window: The Regents and the Commissioner Have an Opportunity to Craft Student Tests and Teacher Evaluation Plans That Are Meaningful to Families and Staffs

In New York State parents opted one in five students, 225,000 students, out of the grades 3-8 English and Math exams that are required by federal law. Other parents considered opting out but, fearing some negative impact on their children, decided not to opt out this year. As it turns out these exams are not “high stakes” for children; in fact, they are “no stakes” for children. The exams exist to rank the state, school districts, schools and teachers. By federal statute and regulation the state must determine “priority,” “focus” and “out-of-time” schools and require intervention plans, with the ultimate threat of school closings. Part of a teacher’s “score” is based on student progress on state tests as determined by a complex algorithm usually referred to as Value-Added Modeling (VAM). The teacher “score” can be used as the basis of dismissal procedures.

The opt out parents are part of a rejection of a stumbling political system; the political parties spar, attack each other, and fail to pass what appear to be “no-brainer” ideas. The popularity of Trump is a rejection of everyday politics; voters seem to be rejecting incumbency, seeking a new crop of candidates who promise to listen to the concerns of voters.

We are fourteen months away from a presidential election as well as the election of the 150 members of the Assembly and the 63 members of the Senate.

The opt out parents are among the worst fears of electeds, they cross party lines, they are passionate, they are single issue voters and the issue can’t be reduced to a meaningful single vote on a piece of legislation.

The new state commissioner, MaryEllen Elia, arrived in early July and immediately began dripping gasoline on the embers of opt out: parents have a right to opt out; however, I support state tests; superintendents must do everything possible to convince parents not to opt out; districts may lose funding; whoops, no they won’t lose funding … as she stumbled from comment to comment the opt out parents saw her as yet another bureaucrat looking to test and punish their children. Perhaps unintentionally, her first message was the wrong message.

Let’s take a deep breath; there is a window for the Board of Regents to explore a major course correction. At the September meeting the regents will give final approval, with opposing votes, to the new, controversial, Cuomo-imposed principal-teacher evaluation plan – the state will move from the current plan (3012-c: read the 166-page SED Guidance document here) to the new plan, referred to as the “matrix” (3012-d: read links to guidance here).

The new law, acknowledging the complexity of designing new plans within brief timelines, allows districts to ask for waivers (Read Guidance here), delaying the date of completing a plan from November 15th to March 15th and effectively delaying the implementation of 3012-d for a year.

Districts/BOCES that are facing hardships and are therefore unable to have an APPR plan consistent with §3012-d approved by the Department by the November 15, 2015 deadline must submit a Hardship Waiver application in order to maintain their eligibility for a State aid increase.

Chancellor Tisch, to her credit, has made it clear that the regents will look favorably on applications for waivers.

Five hours down I-95 the Congress will be considering the much-delayed reauthorization of No Child Left Behind. While the bills that passed the House and the Senate contain substantial differences there is a good chance that the conference will craft a final bill, a bill that the president will have to sign or veto. While it is difficult to know with certainty a bill might be on the president’s desk later this year or early in 2016.

I provided a civics lesson on How a Bill Becomes a Law earlier in the year: https://mets2006.wordpress.com/2015/01/23/civics-101-the-struggle-over-the-reauthorization-of-nclbesea-as-a-teaching-tool/

Education Week has written extensively about the differences in the House and Senate bills; however, both bills give far more authority to the states on issues of school accountability.

Pending ESEA Reauthorization
Under both House and Senate bills, states would have to stick with the NCLB law’s testing schedule. But they could decide how much weight to give those tests in gauging school performance and could set their own goals for student achievement. There would be no requirement that states identify a certain percentage of schools as low-performing, or use any specific turnaround techniques. Both bills would also open the door to some sort of local assessment, although the House bill goes further than the Senate measure.

The regents and the commissioner, in a transparent climate, should begin to discuss changes in the state testing and principal-teacher assessment laws and regulations, which may be possible under a new NCLB.

While the new NCLB will require annual testing will it require the testing of every child or will the law allow using sampling techniques that are used by the National Assessment of Educational Progress – NAEP – referred to as the nation’s report card?

Since NAEP assessments are administered uniformly using the same sets of test booklets across the nation, NAEP results serve as a common metric for all states and selected urban districts. The assessment stays essentially the same from year to year, with only carefully documented changes. This permits NAEP to provide a clear picture of student academic progress over time.

NAEP does not test every subject every year; NAEP uses sampling methods,

In state assessments (mathematics, reading, science, and writing), a sample of schools and students is selected to represent each participating state. In an average state, 2,500 students in approximately 100 public schools are assessed per grade, for each subject assessed. The selection process for schools uses stratified random sampling within categories of schools with similar characteristics.

Could New York State use the same stratified random sampling processes to assess student performance across the state?
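The NAEP approach quoted above can be sketched in a few lines: group schools into strata by shared characteristics, then sample randomly within each stratum in proportion to its size. The school list, strata and counts below are hypothetical, and proportional allocation is only one of several allocation schemes a real design would consider.

```python
# A minimal sketch of stratified random sampling, NAEP-style.
# All school names, strata and counts are hypothetical.
import random

def stratified_sample(schools, strata_key, total):
    """schools: list of dicts; strata_key: the attribute to stratify on."""
    strata = {}
    for s in schools:
        strata.setdefault(s[strata_key], []).append(s)
    sample = []
    for members in strata.values():
        # Proportional allocation: each stratum contributes its share of the total.
        k = max(1, round(total * len(members) / len(schools)))
        sample.extend(random.sample(members, min(k, len(members))))
    return sample

schools = ([{"name": f"urban-{i}", "type": "urban"} for i in range(60)]
           + [{"name": f"rural-{i}", "type": "rural"} for i in range(40)])
picked = stratified_sample(schools, "type", total=10)
print(len(picked))  # 10 schools: 6 urban, 4 rural
```

Because every stratum is represented in proportion, the sampled schools mirror the state’s mix without testing every child in every school.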

I admit this is a complex process, and it may not be permitted under the yet-to-be-negotiated new NCLB; however, a NAEP-type sampling, if possible, would remove the stigma of testing and still provide the state, the localities and the public with the data required to assess our progress.

If we move away from testing every student every year how can we assess teacher performance?

The two assessment plans in New York State, 3012-c and the new “matrix,” 3012-d, rely on highly questionable algorithms with substantial errors of measurement and on supervisory observations using state-approved rubrics such as the Danielson Frameworks.

Supervisory observation of lessons has an inherent flaw: will all supervisors view lessons through the same lens? While the lens may be the Danielson Frameworks, a supervisor in an inner city high poverty school may “score” a teacher quite differently than a supervisor in a high achieving suburban school. In the last round of teacher assessments (APPR) there were districts in which virtually every teacher received a maximum or near maximum score; every teacher was “highly effective.” Charlotte Danielson demurs: at a meeting I attended she responded to a principal who proudly proclaimed that in her school every teacher would be highly effective. Danielson interrupted, “We’re lucky if a teacher occasionally visits highly effective.”

Inter-rater reliability is a complex and core issue that has been the subject of considerable research: read a few of the studies,

“Inter-rater reliability Measuring and Promoting Inter-Rater Agreement of Teacher and Principal Performance Ratings” http://files.eric.ed.gov/fulltext/ED532068.pdf

“Evaluating Inter-rater Reliability of a National Assessment Model for Teacher Performance” http://ijep.icpres.org/2011/v5n2/jmporter_djelinek.pdf

The new law, 3012-d, addressed the issue by requiring “outside evaluators.” Well-intentioned; however, why would the outside observer be any more reliable than the in-school observer? The New York City system, called ADVANCE, does try to address the reliability issue; how successfully only time will tell.
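The research above measures inter-rater agreement with statistics such as Cohen’s kappa, which discounts the agreement two raters would reach by chance alone. A minimal sketch, with hypothetical lesson ratings from two observers:

```python
# Illustrative computation of Cohen's kappa for two raters.
# The rating labels and data below are hypothetical.

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    # Observed agreement: fraction of lessons where the raters match.
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal proportions per label.
    pe = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)

r1 = ["effective", "effective", "developing", "highly", "effective", "developing"]
r2 = ["effective", "developing", "developing", "effective", "effective", "developing"]
print(round(cohens_kappa(r1, r2), 2))  # 0.43
```

A kappa near 1.0 means the two supervisors see the same lesson the same way; values in the 0.4 range, as here, signal exactly the reliability problem the studies describe.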

Unfortunately, the teacher observation reliability problem is separate and apart from the teacher improvement conundrum. Does the teacher observation/feedback process actually impact teacher performance? Charlotte Danielson’s other book, “Talk About Teaching! Leading Professional Conversations” (2009), explains that conversations that have nothing to do with assessment are the key to improving practice.

Another process to investigate is the Inspectorate System that is commonplace in Europe. Trained and well-respected “inspectors” make in-depth visits to schools, not unlike the Schools Under Registration Review (SURR) teams that visited low-performing schools and wrote detailed “findings-recommendations” reports based on a public set of standards.

I wrote about the Inspectorate Systems: https://mets2006.wordpress.com/2014/05/14/flawed-evaluation-systems-how-should-we-assess-schoolteacher-performance-who-will-have-the-cojones-to-admit-their-errors-and-choose-a-validreliablestable-system/

With a new reauthorized NCLB in the wings and with waivers postponing the requirement to produce 3012-d plans, the regents and the commissioner have a window: an opportunity to craft a new approach that would relieve families and students of the burden of sitting for meaningless tests, and time to create a plan that both assesses principal and teacher performance and assists all educators in improving their practice.

The failure to find “fixes” could lead to many hundreds of thousands of opt out families, and angry voter-parents seeking elected scalps in the September 2016 primaries and the November 2016 general election.

We don’t have a lot of time; the regents and the commissioner should begin a review process, a public, transparent process, as soon as possible, with a goal of producing proposed legislation for the new legislative session.

Hot Potato: The NYS Regents May Have to Decide How Teachers Are Assessed (in 60 days), Any Ideas?

Ever play “hot potato”?

♫ One potato
Two potatoes
Three potatoes
Four potatoes
Five potatoes
Six potatoes
Seven potatoes

Maybe it’s appropriate that the state legislature and the governor are playing a children’s game.

The governor’s blustering and threatening resulted in a cyclonic backlash, from teachers to parents to electeds, with his approval rating in free fall. Other new plans, a “matrix” as convoluted as the movie, a committee: each suggestion was met with suspicion. The “hot potato” bounced from the governor to the Senate to the Assembly; no one wanted it, it was too politically hot.

News reports and the rumor mill claim the budget process will return teacher evaluation to the Board of Regents to craft a plan and return to the legislature for action by June 1. Let me underline, I have not seen a bill, just reporting based on news reports.

The entire teacher evaluation catastrophe seems beyond redemption. The heart of the argument is tying teacher effectiveness to test scores, using a dense algorithm so that teachers teaching similar students are compared to each other; the statistical term is value-added modeling (VAM). All the experts agree that while the data is interesting, especially over time, it should not be used for high stakes decisions, like firing and promotion: it’s too unstable. However, the simple answer to a complex issue, arguing that teachers are totally responsible for test scores, has swept from state to state.
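The core idea of VAM can be sketched in drastically simplified form: predict each student’s score from prior achievement, then credit each teacher with the average amount by which his or her students beat or miss the prediction. The model, teacher labels and scores below are hypothetical and illustrative only; real VAM models add demographic controls, statistical shrinkage and multi-year data, which is where much of the instability and controversy lives.

```python
# A toy value-added model: one prior-score predictor, no controls.
# All data are hypothetical; real VAM is far more elaborate.

def simple_vam(students):
    """students: list of (teacher, prior_score, current_score) tuples."""
    # Fit current = a + b * prior across all students (ordinary least squares).
    n = len(students)
    xs = [s[1] for s in students]
    ys = [s[2] for s in students]
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    # A teacher's "value added" is the mean residual of that teacher's students:
    # how far they out- or under-performed the statewide prediction.
    residuals = {}
    for teacher, prior, current in students:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(r) / len(r) for t, r in residuals.items()}

data = [("A", 60, 68), ("A", 70, 75), ("B", 60, 62), ("B", 70, 71)]
print(simple_vam(data))  # teacher A about +2.5 points, teacher B about -2.5
```

Even this toy version shows the fragility: swap a few students, or change the prediction model, and the teacher “effects” move, which is exactly why the experts warn against high stakes uses.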

The Race to the Top (RttT) application required a teacher evaluation plan, a multiple measures plan incorporating student test scores (VAM).

The pushback has been unabated, with examples of teachers rated highly effective by the principal and ineffective by the VAM algorithm, and a few the reverse.

To further confuse matters, about 75% of teachers teach non-tested subjects or classes; how do you use student data to assess the 75%? The Measures of Student Learning (MOSL) vary from school to school and from school district to school district.

The New York State plan calls for 20% assessment by student test scores, 20% by a locally negotiated metric and 60% by supervisory observations using one of six approved rubrics.
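The 20/20/60 split amounts to a weighted sum of three subcomponent scores. The component scores below are hypothetical and this sketch illustrates only the arithmetic of the weights, not the state’s actual scoring bands and conversion charts:

```python
# Illustrative 20/20/60 composite; component scores are hypothetical
# and the 0-100 scale is a simplification of the state's scoring bands.

WEIGHTS = {"state_tests": 0.20, "local_measures": 0.20, "observations": 0.60}

def composite_score(components):
    """Weighted sum of the three APPR subcomponents."""
    return sum(WEIGHTS[k] * v for k, v in components.items())

teacher = {"state_tests": 70, "local_measures": 80, "observations": 90}
print(round(composite_score(teacher), 2))  # 84.0
```

The weighting makes the stakes visible: the observation score dominates, which is why inflated observation ratings swamp the composite.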

When the dust settled after year one 51% of teachers were rated “highly effective” and 1% “ineffective.”

In the days before Charlotte Danielson became an iconic name, I met with Charlotte and about twenty principals. At the end of the session one principal proudly proclaimed, “In my school every teacher will be highly effective.” Danielson shook her head, “You’re lucky if a teacher is highly effective occasionally during a single lesson.”

BTW, Danielson emphasized that her Frameworks were a professional development tool not an evaluative tool and trashed the use of student test scores as an evaluative tool.

A closer look at the scores across the state is disturbing. In many districts every teacher received very high observation scores: can all 200 teachers in a district be highly effective? Teachers teaching special education, English language learners, and very high poverty kids tended to get lower scores. Do we attract less competent teachers, or is the algorithm flawed? Some teachers (art, music, physical education) are rated based upon the school-wide ELA and/or Math scores; does that make any sense?

How are the seventeen members of the Board of Regents going to “correct” the current system in 60 days?

In a handful of schools teachers play a role in assessing colleagues: peer assessment is commonplace in other professions. Should we include teachers, colleagues in the same school, on teams with principals? Should we use teachers from other schools?

A paper from the Chicago Consortium on School Research published in Education Next, “Does Better Observation Make Better Teachers?” (November 2014), assesses a teacher observation experiment in Chicago,

The principals’ role evolved from pure evaluation to a dual role in which, by incorporating instructional coaching, the principal served as both evaluator and formative assessor of a teacher’s instructional practice. It seems reasonable to expect that more-able principals could make this transition more effectively than less-able principals. A very similar argument can be made for the demands that the new evaluation process placed on teachers. More-capable teachers are likely more able to incorporate principal feedback and assessment into their instructional practice.

Our results indicate that while the pilot evaluation system led to large short-term, positive effects on school reading performance, these effects were concentrated in schools that, on average, served higher-achieving and less-disadvantaged students. For high-poverty schools, the effect of the pilot is basically zero.

In another study, “Teacher Dismissal Under New Evaluation System” (Grover Whitehurst and Katherine Lindquist), also published in Education Next, the authors see “troublesome” flaws in observational teacher evaluation systems,

… we identified flaws in the evaluation systems that need correction. The most troublesome of these is a strong bias in classroom observations that leads to teachers who are assigned more able students receiving better observation scores. The classroom observation systems capture not only what the teacher is doing, but also how students are responding. This makes the teacher’s classroom performance look better to an observer when the teacher has academically well-prepared students than when she doesn’t.

We are a long way from a system that clearly differentiates effectiveness among teachers. There is no question that the variation from school to school, from school district to school district, is significant.

European countries use teacher inspectorates, teams that visit schools and assess both school and teacher quality.

I wish the Board of Regents luck.

Any ideas?

UPDATE: Just Out!! General outline of education initiatives in the budget here

Teaching Academic Tenacity: Why the SAT, Pearson and PARCC tests Are Poor Predictors of College/Career Readiness and Why Non-Cognitive Skills Trump Faulty Exams.

We are obsessed with judging teacher quality by measuring student achievement. To make it even more complex we are measuring student achievement by a brand new yardstick, the Common Core State Standards.

Parents, educators and the New York State governor are confused: two-thirds of students scored “below proficient” on the latest tests, which the State Education Department now defines as “approaching proficiency” (smile), while half of all teachers scored “highly effective” and less than 1% scored “ineffective” on the extremely complex APPR teacher evaluation metric.

The governor asks: if two-thirds of kids are failing state tests, how can teachers score so highly on the teacher evaluation tool? How can principals give teachers high grades on the 60% lesson assessment section of the teacher evaluation tool when so many kids are doing so poorly on the tests?

Unfortunately we are using the wrong tools to measure the wrong outcomes.

We base a range of decisions on a test, a few hours of bubbling in answers and writing an essay; however, the SAT and the ACT, which also use bubble sheets and essays, are poor predictors of college success. The best predictor is class standing as measured by the student’s GPA. It should not be surprising; the GPA is determined by numerous tests over four years of high school, reflecting the judgment of many teachers.

The largest study of students at colleges that do not require SAT or ACT scores has found that there is “virtually no difference” in the academic performance (measured in grades or graduation rates) of those who do and don’t submit scores.

The study — involving 123,000 students at 33 colleges and universities of varying types — found that high school grades do predict student success. And this extends to those who do better or worse than expected on standardized exams. So those students with low high school grades but high test scores generally receive low college grades, while those with high grades in high school, but low test scores, generally receive high grades in college.

This is not an isolated piece of research; a 2005 study explains,

… researchers examined differences in the predictive strength of high school grades and standardized test scores for student involvement, academic achievement, retention, and satisfaction. Findings indicate that high school grades are stronger predictors of success than standardized test scores for both racial and religious minority students.

In another study, the Council for Aid to Education and NYU support the findings of the research above.

In spite of the evidence that the SAT does not achieve its purposes, the folks at the College Board are rolling out a new exam in the spring of 2016, a test that reflects the Common Core standard competencies; at the same time more and more colleges are abandoning the SAT.

If tests, be they the SAT, the Pearson-produced grades 3-8 state tests or the PARCC exams, are not accurate predictors of college success or of teacher competence, how should we assess teacher performance and student achievement?

The answer may be in a Gates-funded study, Academic Tenacity: Mindsets and Skills that Promote Long Term Learning, (Carol Dweck and others, Stanford University). The introduction is exceptionally important,

In a nationwide survey of high school dropouts, 69% said that school had not motivated or inspired them to work hard. In fact, many of the students who remain in school are not motivated or inspired either, and the more time students spend in K-12 education the worse it gets. What prevents students from working hard in school? Is it something about them, or is it something about school? Is there a solution to this problem?

Most education reform focuses on curriculum and pedagogy – what material is taught and how it is taught. However, curriculum and pedagogy have often been narrowly defined as the academic content and students’ intellectual processing of that material. Research shows that this is insufficient. In our pursuit of education reform, something has been missing: the psychology of the student. Psychological factors – often called motivational or non-cognitive factors – can matter even more than cognitive factors for student academic performance …

Academic tenacity is about the mindsets and skills that allow students to:

* Look beyond short-term concerns to higher order goals, and

* Withstand challenges and setbacks and persevere toward these goals.

Dweck and her co-authors make it clear: it’s not the “right” curriculum or the “right” pedagogy; there are many paths to the same ends. The “solution” is not the Common Core, nor is it the Charlotte Danielson frameworks; without a teaching/learning environment that supports academic tenacity too many students, too many high-poverty students and students of color, will be left behind.

The authors specifically identify “key characteristics and behaviors” that can be defined and taught,

Key Characteristics and Behaviors of Academically Tenacious Students

* Belong academically and socially
* See school as relevant to their future
* Work hard and postpone immediate pleasures
* Not derailed by intellectual and social difficulties
* Seek out challenges
* Remain engaged over the long haul

Scientific American affirms the research findings and links to a range of research findings (Check out here)

For academic achievement, ability is not enough. What’s also needed are mindsets and strategies for overcoming obstacles, staying on task, and learning and growing over the long-term … academic tenacity is not about being smart, but learning smart.

I was visiting a middle school in one of the poorest neighborhoods in the city, a neighborhood at the top of the list for handgun violence and homicides. As I walked toward the office a student “introduced” himself: “My name is xx, can I help you?” Each classroom displayed the banner of a college and the advisory rooms had names, the name of a college. No one was yelling at kids; when a student talked loudly a teacher simply put his finger to his lips. The school leader took me into a classroom and asked, “What are we learning today?” The kids all raised their hands, eager to tell me all about the lesson.

The middle school downstairs was chaos.

Danielson frameworks are a guide and set a standard; however, students in screened schools or schools with more middle class students are far more likely to reach the “highly effective” category, as evidenced by the teacher grades on the APPR, the state teacher evaluation metric.

Challenging content, rigorous curriculum and pedagogy, combined with the teaching skills that promote academic tenacity, are the path to creating successful schools and college- and/or career-ready students.

Are schools of education and school-based professional development emphasizing the teaching of Academic Tenacity? I fear not. Hopefully research will trump the current faulty teaching and learning trends.

Can New/Revised Rules for English Language Learners Improve Student Outcomes? or Does Change Begin in Schools and Classrooms? How Do We Encourage “Bottom Up” Reform?

Until now I don’t think I’ve agreed with an editorial in the NY Post since Dorothy Schiff sold the paper to Rupert Murdoch.

A NY Post editorial includes comments made by Chancellor Farina’s newly appointed chief for “English-language learners,” Milady Baez, a returnee from retirement; the Post writes,

[The Department] plans to help schools with kids struggling because of poor English by “increasing bilingual program options for ELLs,” “strategically using ELL density enrollment data,” “collaborating with a broad range of partners,” “strengthening the specialized skill sets necessary to effectively address the academic and linguistic needs of the diverse ELL population,” etc.

The problem is that the Department leaders of programs for English language learners could have written the same sentences in 2004 or 1994 or 1984.

The Post reports a 2011 study,

• Of English learners who were in first grade in 2003, 36 percent failed the English proficiency test seven years in a row.
•  Only 30 percent passed within three years. The average kid took more than five.
•  Almost 70 percent of kids who failed for six or more years were born in America — meaning US citizens, not immigrants.

And, the editorial concludes,

In New York, we even reward schools for this failure, because they get money for each foreign-language speaker they have. In any language, that should be a recipe for change — not more of the same.

The unanimous 1974 Lau v. Nichols Supreme Court decision required school districts to provide specialized instruction to children deficient in English skills; the court wrote,

The failure of the San Francisco school system to provide English language instruction to … students of Chinese ancestry who do not speak English, or to provide them with other adequate instructional procedures, denies them a meaningful opportunity to participate in the public educational program. Quoting Senator Humphrey, the court averred,

“Simple justice requires that public funds, to which all taxpayers of all races contribute, not be spent in any fashion which encourages, entrenches, subsidizes, or results in racial discrimination.”

For forty years New York City, and more recently New York State, have struggled with how to adequately provide this particular type of education to children whose primary language is not English.

Under the wave of 1970-2002 reform, community school districts were fully empowered; in the poorest districts, with the least successful students, jobs came before education. In a South Bronx school district the superintendent told the principals they must create at least one bilingual class on every grade in every school. When a principal complained he didn’t have enough kids the superintendent snapped back, “OK, but the school board has teachers who need jobs; form the classes.”

The Supreme Court decision, rather than ensuring targeted instruction for English language learners, simply became a vehicle to provide jobs.

The battle over whether to create bilingual classes or English as a Second Language (ESL) classes echoed across the city – with bilingual classes as the default unless the parent opted out. While I’m sure there are “highly effective” bilingual teachers, unfortunately we don’t see the expected gains in classrooms.

New York State responded to the Lau decision by doing what the state does: it wrote dense regulations that required school districts to develop a system to identify English language learners, required minutes of instruction tied to the level of the student’s English competency, and a system for deciding whether the student had “scored out” of the program – compliance rules. The thirty-year-old rules are referred to as “Part 154.” (See regs here).

For the last three years the state and a “committee of practitioners” have been dueling over revisions to the rules, and, finally, made a number of changes. (See revised regs here and excellent power point here).

While the changes to the regulations are an improvement they are far, far from a solution – they are still compliance rules written by lawyers.

If a school used the correct procedures for identifying English language learners, provided the appropriate minutes of instruction and followed the other rules, all is fine – the regulations ignore student progress; a prime example of “…the operation was a success but the patient died.”

The number of children who qualify for English language learner services continues to increase, and to increase rapidly outside of New York City.

NYC: 151,000
Brentwood: 5,100
Buffalo: 4,100
Rochester: 3,500
Yonkers: 3,000

That’s right; the city with the second largest number of ELLs is Brentwood on Long Island. School districts outside of New York City are struggling with increasing numbers of students who require ELL instruction.

Complying with state regulations costs additional dollars – hiring appropriately certified teachers, class sizes, training, materials, etc. Who pays the additional costs? The state funding formula does not provide additional dollars for English language learners (New York City does provide additional funding per student). As Commissioner King explained, school districts will have to make difficult choices – it may be necessary to dump popular programs, maybe an advanced placement class or a sports team, to create English language learner classes and services. In the era of the 2% property tax cap these will be difficult and potentially politically toxic decisions.

The core questions are not confronted in state regulations: what is working, why is it working, can successful practices be transferred to other schools?

And, BTW, there are a number of highly successful schools.

Twenty-five years ago the International High School at LaGuardia Community College was opened – a high school that only admitted students who had been in the country four or fewer years. The principal, Eric Nadelstern, was innovative, irascible and a thorn in the side of the bureaucracy. The state approved his plans to assess students by portfolio instead of Regents exams; he worked with the union to create a different kind of teacher transfer program and created a model for peer evaluation. The number of International High Schools increased; the Internationals Network for Public Schools, a 501(c)(3) not-for-profit, supports the growing number of schools – fifteen in New York City and a number of others across the country. The student results are at or above the results for all students (See student results here).

Newcomer High School in Queens accepts students “new to the nation” and receives superb marks under the department’s rigid accountability rules (See School Progress Report here)

What can we learn?

* School leadership and school district supports are crucial … only alchemists can change dross to gold and you can’t change mediocrity into model leadership – collections of college credits do not a school leader make, and I’ve yet to meet an alchemist. There is an alarming shortage of effective school leaders.

* Sadly, colleges accept almost anyone into education programs; too many students attain certification but lack the skills – the Council for the Accreditation of Educator Preparation (CAEP) may force sweeping changes in teacher preparation; there will be considerable pushback.

* Collaboration: school leader to school leader, school leader to staff, collaboration among staff members, among students, a top to bottom collaborative environment. The vast majority of schools are top down management models and teachers primarily work alone in classrooms only occasionally interacting with colleagues.

How many school leaders tell a teacher, watch me, I’m going to teach a mini-lesson in your class … and we can talk about it. How many school leaders are capable of engaging teachers and staffs in meaningful discussions about practice? (See Charlotte Danielson, Talk About Teaching! Leading Professional Conversations)

How many schools are designed to facilitate teacher collaboration – teachers working together, discussing actual kids, jointly creating lessons and rubrics, seeing student work from other teachers’ classrooms, watching colleagues teach classes and engaging in discussions, etc.?

Press releases, memoranda, ukases, “programs,” rarely change what happens within schools and classrooms: to change outcomes for children with limited or absent English skills schools have to change practice not simply comply with the rules. Skilled teachers, skilled teachers working with other skilled teachers, “cultural awareness,” socio-emotional supports for children and caregivers, change is complex and difficult, we inherently look at calls for change as punishment.

In spite of the clarion calls from Gracie Mansion and Tweed, change starts in schools and classrooms. I don’t see a commitment to change schools, only pleas to hug more, which is not a bad thing; however, hugs alone don’t make kids better speakers of English or writers or readers or mathematicians, or, maybe more importantly, better coders (See www.code.org)

“I’ll Show You Mine If You Show Me Yours … I Promise Not to Tell Anyone,” (Teacher Evaluation Scores are Released to Teachers/Principals)

Teachers flocked back to school on the Tuesday after Labor Day and aside from greeting colleagues each teacher and principal received their score under the state teacher/principal evaluation plan.

The system, called Advance, is described by the department,

Advance, New York City’s new system of teacher evaluation and development, was designed to provide the City’s teachers with accurate feedback on their performance and the support necessary to improve their practice, with the goal of improved student outcomes to ensure all students graduate college and career ready.

Though Advance was formally established on June 1, 2013 in alignment with the New York State Education Department’s education law 3012-c on teacher and school leader performance reviews, its design was informed by three years of pilot work in New York City’s schools. Advance uses multiple measures – including observations of classroom practice, review of teachers’ artifacts, student outcome data, and student feedback – to provide teachers, school leaders, and families with a more accurate understanding of teacher effectiveness than ever before.

As reported by Chalkbeat,

Ninety-eight percent of teachers statewide received top ratings, “effective” or “highly effective,” on the 60 percent of their evaluations made up primarily of observations, the data shows. Less than 1 percent of teachers earned the lowest rating on their observations.

Nearly nine times as many teachers, or about 4 percent, received low ratings on the 40 percent of their evaluations that use a combination of state and local tests.

Under the former “S” or “U” (satisfactory or unsatisfactory) system, 2.7% of teachers received a “U” rating for the 12-13 school year.

The percent of teachers in New York City rated “unsatisfactory/ineffective” dropped to 1% for the 2013-14 school year.

There is a new two-level system of appeals of “ineffective” (“U”) ratings in New York City.

There are two different types of appeals in the new evaluation system: chancellor’s appeals and panel appeals. All teachers are entitled to a chancellor’s appeal. After talking to you and reviewing your forms and supporting documentation, the UFT will determine whether your case may be appropriate for a panel appeal.

Chancellor’s appeals

A hearing office from the DOE’s Office of Appeals and Review, the same office that hears U rating appeals, will hear your case. Unlike the U rating appeals process, which can drag on for months, the DOE hearing officer has 30 days to issue a decision in a chancellor’s appeal.

Panel appeals

The union can identify up to 13 percent of all Ineffective ratings each year to challenge on grounds of harassment or reasons not related to job performance.

These cases will be heard by a three-member panel comprised of a person selected by the DOE, a person selected by the UFT, and a neutral arbitrator.

While the number of teachers rated unsatisfactory exceeded 2,000 in the 12-13 school year, the number of teachers who faced dismissal charges for incompetence was under 100.

It is baffling that the Bloomberg administration did not vigorously pursue charges of incompetence against teachers; in fact, department lawyers discouraged principals.

The new law (State Education Law 3012-c) sets forth a process in which, after two consecutive ineffective ratings, and if the year-two independent validator agrees, the school district may bring dismissal charges,

If a teacher receives an ineffective rating for a school year in which the teacher is in year two status and the independent validator agrees, the district may bring a proceeding pursuant to sections three thousand twenty and three thousand twenty-a of this article based on a pattern of ineffective teaching or performance. In such proceeding, the charges shall allege that the employing board has developed and substantially implemented a teacher improvement plan in accordance with subdivision four of this section for the employee following the evaluation made for the year in which the employee was in year one status and was rated ineffective. The pattern of ineffective teaching or performance shall give rise to a rebuttable presumption of incompetence and if the presumption is not successfully rebutted, the finding, absent extraordinary circumstances, shall be just cause for removal.

One of the major criticisms of the new system is the instability of the scores. The swings in individual teacher scores can vary significantly from year to year – the bottom line: the 1% of teachers who received “ineffective” ratings in the 12-13 school year may NOT be the same teachers who received an “ineffective” rating for the 13-14 school year.

Since the students change every year and some teachers change grades taught, the supposed impact of teachers on students can vary widely. If a teacher receives an “ineffective” rating due to low student scores on state tests, a legal challenge may be sustained. Additionally, an unanswerable question: is there consistency in scoring among supervisory observers? Yes, all supervisors in New York City use the Danielson Frameworks; but do they see lessons through the same lens? We don’t know.

The core question: does the evaluation score assist the teacher in improving their practice? The answer is a resounding “no.” Hopefully the principal meets with the teacher after every observation and informally during the school year and coaches the teacher; for example, does the lesson foster higher-order thinking skills? Do questions move up the ladder from recall to analysis to comparison to inference to evaluation?

However, the grades on student test scores (20%) and the local measures (20%) are baffling; neither teachers nor supervisors can tell a teacher why they got their grade or how the grade can be used for improvement.
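As a rough sketch of how a multiple-measures composite works: the 60/20/20 weights come from the system described above, but the 0-100 subscores and the HEDI cut points below are hypothetical, not the actual state conversion charts.

```python
# Hypothetical sketch of a multiple-measures composite rating.
# The 60/20/20 weights reflect the Advance design described in the text;
# the 0-100 subscores and the HEDI cut points below are illustrative
# assumptions, NOT the actual state conversion charts.
WEIGHTS = {"observations": 0.60, "state_tests": 0.20, "local_measures": 0.20}

def composite(subscores: dict) -> float:
    """Weighted sum of 0-100 subscores."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

def hedi_band(score: float) -> str:
    """Map a composite score to a HEDI rating (cut points made up)."""
    if score >= 91:
        return "highly effective"
    if score >= 75:
        return "effective"
    if score >= 65:
        return "developing"
    return "ineffective"

score = composite({"observations": 85, "state_tests": 60, "local_measures": 70})
print(f"{score:.1f} -> {hedi_band(score)}")
```

The sketch makes the complaint in the text concrete: the arithmetic is trivial, but the single number it produces tells the teacher nothing about why the 20% test-based subscores came out as they did.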

Perhaps the 35% of New York State Democratic voters who cast ballots for Zephyr Teachout will impact the Governor’s education policy … perhaps a teacher evaluation plan that both assesses practice and assists teachers in getting better.

Will Electoral Politics Force Governor Cuomo to Modify the NYS Teacher Evaluation Law?

I wish they did a teacher draft: “He has a good understanding of the Common Core, but lacks the measurables.” (From a principal on the day of the NFL draft)

The lesson could have been taught in the fanciest private school in the city; it was taking place in a high school in the heart of Harlem. Mr. M projected “God is Dead” on the board and facilitated a discussion about Nietzsche. He listened, he prodded, he provoked, he facilitated a discussion among the students, had them jot down ideas … it was a magnificent lesson. I thanked him for inviting us into his classroom and commented on the excellence of the lesson. He replied, “Come back tomorrow to my first period Global Studies class.”

The students, many of whom were repeating the class, dribbled in after the late bell. In spite of the efforts of the teacher the kids were morose, almost hostile, disengaged, and skipped out of the room the moment the class ended.

Mr. M approached me, “Well, am I an exemplary teacher or a bum?”


State Education Department Schools Under Registration Review (SURR) teams spent four days in the lowest performing schools in the state and wrote a “findings and recommendations” report based on a 21-item template. We arrived at the middle school Monday morning; the principal was busy and we twiddled our thumbs until he finished what he was doing. He apologized, “I have three vacancies and four absent teachers, and I have to figure out the class coverages every morning.”

One of the team members began with a “softball” question – “What criteria do you use to assess teacher effectiveness?”

The principal blurted, “They come every day and blood doesn’t run out from under the door.”


A couple of years ago, before Charlotte Danielson became a very rich “rock star,” I attended a meeting of a 25-principal network – Danielson made her standard presentation.

At the end of the presentation I asked, “Would you agree with Justice Potter Stewart? He could never succeed in intelligibly defining pornography but said ‘I know it when I see it’ … doesn’t the same apply to good teaching?”

Charlotte demurred, rather vigorously.


Principal A: “I rate most teachers ‘highly effective,’ it raises morale, dissuades teachers from leaving and it reflects positively on my evaluation.”

Principal B: “I haven’t written a single ‘highly effective’ observation this year – the ‘highly effective’ standard is extremely high. Once in a while I’ll see a lesson with ‘highly effective’ elements, not an entire ‘highly effective’ lesson.”


Early in the fall, teachers of English and mathematics in grades 3-8 (about 20% of all teachers) receive a Teacher Data Report (TDR) – the teacher’s percentile standing using a Value-Added Metric (VAM). Principals and teachers have no idea what the TDR score means; two principals, Carol Burris and Liz Phillips, skewer the entire evaluation system in “Why APPR Must Be Changed.”

The fatal flaw is that VAM teacher scores are unstable.

United States Department of Education: Value-added estimates for teacher-level analyses are subject to a considerable degree of random error when based on the amount of data that are typically used in practice for estimation.

Di Carlo: A recent analysis of VAM scores in New York City shows that the average error margin is plus or minus 30 percentile points. That puts the “true score” (which we can’t know) of a 50th percentile teacher at somewhere between the 20th and 80th percentile – an incredible 60 point spread.

Economic Policy Institute: VAM estimates have proven to be unstable across statistical models, years, and classes that teachers teach. One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%.
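The rank churn these quotes describe can be reproduced with a toy simulation. The noise level below (measurement error with standard deviation 1.5 against a true-effect standard deviation of 1.0) is an assumption chosen for illustration, not an empirical estimate of real VAM error.

```python
import random

# Toy illustration of VAM rank instability: each teacher has a FIXED true
# effect, but each year's estimate adds fresh measurement noise. The
# noise-to-signal ratio is an illustrative assumption.
random.seed(0)
N = 10_000

true_effect = [random.gauss(0, 1.0) for _ in range(N)]
year1 = [t + random.gauss(0, 1.5) for t in true_effect]  # noisy estimate, year 1
year2 = [t + random.gauss(0, 1.5) for t in true_effect]  # noisy estimate, year 2

def top_quintile(scores):
    """Indices of the top 20% of scores."""
    cutoff = sorted(scores, reverse=True)[N // 5 - 1]
    return {i for i, s in enumerate(scores) if s >= cutoff}

top1, top2 = top_quintile(year1), top_quintile(year2)
stay = len(top1 & top2) / len(top1)
print(f"Share of year-1 top-20% teachers still top-20% in year 2: {stay:.0%}")
```

Even though every teacher’s underlying effectiveness is constant by construction, noise alone is enough to produce the “fewer than a third stay on top” pattern the Economic Policy Institute reports.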

The principal evaluation section – worth 60% of a teacher’s APPR – is totally dependent on the principal, and we know that principal assessments vary from school to school: principals only see teachers in their own schools, and principals are concerned, rightly or wrongly, that their assessments reflect on their own ability.

The VAM-constructed scores emanating from student test scores are highly unstable.

The teacher APPR scores for the 12-13 school year in New York State:

51% highly effective
40% effective
8% developing
1% ineffective

The instability of the scores – the 20% to 30% swing from year to year – assures that the 1% rated ineffective next year will not be the same teachers as the 1% rated ineffective last year.

If the current system is deeply flawed what should replace it?

Linda Darling-Hammond in the spring edition of the American Educator suggests,

Although there is widespread consensus that teacher evaluation in the United States needs serious attention, simply changing on-the-job evaluation will not, by itself, transform the quality of teaching. For all of the attention focused on identifying and removing poor teachers, we will not improve the quality of the profession if we do not also cultivate an excellent supply of good teachers who are well prepared and committed to career-long learning. And teachers’ ongoing learning, in turn, depends on the construction of a strong professional development system and useful career development approaches that can help spread expertise. Finally, improving the skills of individual teachers will not be enough: we need to create and sustain productive, collegial working conditions that allow teachers to work collectively in an environment that supports learning for them and their students.

In short, what this country really needs is a conception of teacher evaluation as part of a teaching and learning system that supports continuous improvement, both for individual teachers and for the profession as a whole. Such a system should enhance teacher learning and skill, while at the same time ensuring that teachers who are retained and tenured can effectively support student learning throughout their careers.

Unfortunately we do not live in an environment where science and rationality rules, we live in a political world, a world in which politics rules.

Governor Cuomo chose to jump on the charter school bandwagon, probably to deprive his opponent of charter school dollars; he saw what $5 million could buy in the charter operators’ attack on Mayor de Blasio. Now he has to win back teachers, and amending, easing or modifying the current teacher evaluation plan would mollify teachers and their union.

I think a dog-eared copy of The Prince sits on Governor Cuomo’s night stand, with the following underlined,

Politics have no relation to morals.

Byte-ing Teachers: Is Teaching an Art or a Science? Does VAM Help Improve Practice?

Billy Beane, the general manager of the Oakland Athletics, changed the method of evaluating baseball players – instead of cigar-chomping baseball scouts, Beane used a data-driven approach, memorialized in Michael Lewis’ Moneyball (2003). The analysis and manipulation of large, really large, sets of data has become the sine qua non, the “standard” for decision-making, in baseball and in healthcare as well as in education. In an Atlantic article, “Can the Government Do Moneyball,” the authors aver,

The moneyball formula in baseball—replacing scouts’ traditional beliefs and biases about players with data-intensive studies of what skills actually contribute most to winning—is just as applicable to the battle against out-of-control health-care costs. According to the Institute of Medicine, more than half of treatments provided to patients lack clear evidence that they’re effective. If we could stop ineffective treatments, and swap out expensive treatments for ones that are less expensive but just as effective, we would achieve better outcomes for patients and save money.

The field of education is no different – we now have the ability to parse large data sets to assist educators to fine tune, to individualize instruction to students,

“The important thing with the data as we see it is this: How does it improve instruction in the classroom? … The trick is to be able to combine what I call ‘autopsy data’ of what has happened with the child, with what goes on currently in class, with formative and summative evaluations on an ongoing basis.”

In addition to longitudinal data, scores from online work or assessments scanned in from offline work go immediately into the platform, and its predictive analytics engines go to work to develop recommendations to help the student get up to speed …. Teachers don’t continue to teach things their students already know. It gives just-in-time feedback of what to pay attention to now, before students get so far behind that they can’t catch up.”

One thing is certain: As education becomes more Big Data-driven, educators and IT leaders must remember that human judgment matters too. “You have to pay attention to whether the data resonates with what the teachers know to be true about a student’s performance, … There’s no substitute for authentic analysis.”

We can receive real-time feedback on suggested approaches to remediating student errors. Of course, the process does not explain why a student gets a wrong answer and does not involve critical thinking skills; it can tell us that, for example, 26% of Afro-American seventh graders who are eligible for Title 1 services cannot successfully divide fractions 80% of the time. How can the school district use the data? Why are 74% of students succeeding? Are the teachers of the 74% consistently successful year to year? Are the textbooks the same? Is the race/gender/experience level of the teachers a significant factor?
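A sketch of the kind of subgroup slicing described above. The records, success rates and group labels are fabricated for illustration; the point is that this analysis can report that a gap exists, not why.

```python
from collections import defaultdict
import random

# Fabricated item-level records, for illustration only: which students
# "mastered" dividing fractions, sliced by Title 1 eligibility. The rates
# are made up; real district data would come from a student data system.
random.seed(1)
records = [
    {"grade": 7,
     "title1": random.random() < 0.6,
     "divides_fractions": random.random() < 0.7}
    for _ in range(1000)
]

tally = defaultdict(lambda: [0, 0])  # group -> [mastered, total]
for r in records:
    group = "Title 1" if r["title1"] else "non-Title 1"
    tally[group][0] += r["divides_fractions"]
    tally[group][1] += 1

for group, (mastered, total) in sorted(tally.items()):
    print(f"{group}: {mastered / total:.0%} can divide fractions (n={total})")
```

The aggregation answers “what” and “how many” within a statistical range; the questions the paragraph above asks – why, and what to do about it – require human judgment that no slicing of the data provides.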

The use of “big data” can, within a statistical range, answer the questions; however, the “answer” does not tell us why…

The Gates-funded Measures of Effective Teaching (MET) study identifies more effective teachers using test scores to define effectiveness. What the MET study does not do is tell us why some teachers are more effective than others. Are they more effective at teaching particular skills, or more effective at motivating students, or some combination? Highly effective teachers have no idea why they are more or less effective from year to year.

Paul Tough, in “How Children Succeed,” challenges “…the cognitive hypothesis, the belief ‘that success today depends primarily on cognitive skills — the kind of intelligence that gets measured on I.Q. tests, including the abilities to recognize letters and words, to calculate, to detect patterns — and that the best way to develop these skills is to practice them as much as possible, beginning as early as possible.” … Tough sets out to replace this assumption with what might be called the character hypothesis: the notion that noncognitive skills, like persistence, self-control, curiosity, conscientiousness, grit and self-confidence, are more crucial than sheer brainpower to achieving success.”

The teacher who ignites noncognitive skills may be more effective than the teacher who uses the proper teaching techniques, as “measured” by the Danielson Frameworks.

The policy wonks, the decision-makers, are seduced by data: the right data set, the right algorithm, the right combination of variables can result in attributing a numerical score to a teacher. Once we’ve identified the most effective teachers we can use the “score” to drive decisions: who gets tenure, who gets fired, who gets a raise or a promotion. What used to be a decision solely made by the principal is now part of a multiple measures rubric with a value-added measurement (VAM) counting for 20% to 50% of the score.

The Educational Testing Service (ETS) warns that, given the instability and unreliability of VAM algorithms, VAM scores should not be used for decisions that impact careers.

Edward H. Haertel (March, 2013) in “Reliability and Validity of Inferences About Teachers Based on Student Test Scores,” warns,

Teacher value-added scores are unreliable … that means that teachers whose students show the biggest gains one year are often not the same whose students show the largest gains the next year…

The goal for VAM is to strip away just those student differences that are outside of the current teachers control … those things the teacher should not be held accountable for…

Teacher VAM scores should emphatically not be included as a substantial factor with a fixed weight in consequential teacher personnel decisions … It is not just that the information is noisy … the scores may be systemically biased for some teachers and against others.

In spite of the evidence, the US Department of Education steadfastly hews to the VAM line. Data is the answer: if you can create the right mathematical equation, if the mountain of data is large enough, you can solve everything.

The use of data-driven decision-making presumes that teaching is a science; it presumes that with the proper mix of "chemicals" you can "create" a desired outcome.

Is the process of teaching a science that can be measured, or is teaching an art? Can you assign numerical values to every Danielson element and VAM growth score and assign a teacher a grade, or was Justice Potter Stewart correct when he wrote that he could not define pornography but he "knew it when he saw it"?

The teacher evaluation law in New York State is incredibly dense (see website here). The State Education Department approved 700 locally negotiated plans, collected gigabytes of data, spun the computers, and, (roll of drums!!)

51% of teachers are “highly effective”
40% of teachers are “effective”
8% of teachers are “developing”
1% of teachers are “ineffective.”

In June 2012, 2.7% of New York City teachers received an "unsatisfactory" rating. The dense formula identified fewer ineffective teachers than the old system did.

Data has become an addiction, the “meth” of the world of education.

Perhaps it would be more cost effective if we poured the dollars into a genome project – is there a “teaching” gene? Are “highly effective” teachers the product of “nature” or “nurture”?

What does a numerical score tell a teacher? What can they learn from a VAM score?

The byte-ing of teaching is a failure.

Networks versus Districts: Moving from a Structure of Compliance to a Culture of Collaboration.

Gotham Schools reports,

The next mayor should “reconsider” the current system of school-support networks, State Board of Regents Chancellor Merryl Tisch said Monday, adding her voice to a chorus of critics – including mayoral frontrunner Bill de Blasio – who have questioned the signature Bloomberg education policy.

“Me, if I were going to take over the school system, I would look heavily to change the networks,” Tisch said …

"I think the networks have basically failed children who are [English-language learners]," added Tisch. "… They have failed children who have special needs."

The 32 geographic Community School Districts were not paragons of service to children. A few middle class districts were high functioning, with a high level of community participation and support; the districts serving the poorest children were dysfunctional, and too many districts were patronage pits for local electeds. A well-regarded superintendent of a deeply poor district told me, "I had to dance between the competing factions on the board, satisfying them with jobs for friends and still providing the best possible education under the circumstances … I feel I'm now fully qualified to take on the Israeli-Palestinian talks." District offices were richly staffed and too often distant from schools.

Eric Nadelstern, a former deputy schools chancellor who led the design of the networks, forcefully defended the system in an interview, saying it was at the “center of the reforms” under Bloomberg that raised the graduation rate by 30 percent.

“It’s wonderful that people in authority offer opinions that aren’t aligned with the data,” he quipped when told of Tisch’s comments.

Nadelstern said the networks stamped out the corruption of the district system – where politicians would dole out jobs and school seats as gifts – while also slashing costs, since each network employs about 15 people, compared to some 120 staffers in the old district offices, he said.

The networks range from aloof to deeply engaged. In one network, rated near the top of the 55 networks and serving poor kids, many of them English language learners, teachers were deeply engaged in the Common Core and the schools were highly collaborative: a model for the remainder of the networks. A few networks worked, with the right leadership.

Unfortunately the network system has morphed into a top down compliance system totally driven by ukases from the leadership at Tweed. What was envisioned as like-minded schools working together with a network leadership is now an endless series of requirements, data collection and compliance checklists.

Are superintendents and network leaders capable of leading? Can they engage principals and teachers? Or can they simply review data and check off boxes on checklists? Conducting Quality Reviews is not leading.

Ernest Logan, president of the city principals union, said that oversight of schools’ budgets and personnel should be returned from networks to superintendents, which would provide clarity to principals.

“People need a boss,” he said.

Logan is incorrect, and his view reflects why schools struggle and teachers feel abandoned and unappreciated. With rare exceptions, 110 Livingston Street, and now the Tweed Courthouse, "lead" by issuing edicts, regulation after regulation imposing whatever the chancellor or superintendent of the moment espouses. The role of the principal was simply to comply with whatever "flavor of the day" was passed down. School leaders never had to build capacity, never had to create teams of teachers, never had to explore and struggle and learn; as long as the checklist was satisfied, all was fine.

Yes, there were extraordinary superintendents, very few, who worked with teachers and principals and communities; for most it was the old paradigm, the paramilitary structure: issue orders, monitor compliance, threaten, collect data. It's the numbers that rule.

I was invited to a School Leadership Team meeting; after a long discussion (I forget the topic) the principal said, "I don't think it can work. You're all passionate; you have my vote. Show me it can work." Scattered around the city there are principals with the skills to work together, to create the synergy that is the essence of an effective school.

One network leader routinely attended school faculty meetings and engaged staffs, challenging them to ask questions and to offer ideas; his team lived in schools, interacting with staffs. He was the exception.

Charlotte Danielson’s other book, “Talk About Teaching: Leading Professional Conversations” (2009), in my judgment, is more important than her Frameworks tome.

…if formal school leaders … have forged consensus on big ideas underlying practice, there is transparency in what a visitor could expect to observe in a classroom. That is, if everyone in a school accepts that students learn through their own intellectual engagement with content (asking questions, making connections, analyzing information, etc.), then an observer would expect to see students engaged in such activities. When consensus on such big ideas has been established, then it is understood that the implications of such ideas are always on the table for discussion…

Unfortunately the Department of Education Instructional Expectations 2013-14 document is construed at the network and school level as a compliance document, a checklist. How many principals actually engage teachers and staffs? How many superintendents, network leaders and principals engage kids on a daily basis? Actually engage with teachers?

Sadly, too few.

The essence of leadership, in districts, networks and schools, is, to quote Danielson, "…forg[ing] consensus on big ideas underlying practice"; leaders are coaches, working with their "players" to improve their performance.

In one building, a not uncommon arrangement, there are four schools: three public schools, all in different networks, and a charter school. A simple question, how to resolve the HR and salary issues so that the schools can share related-service providers, becomes an enormously complex task.

We should return to geographic districts for most schools, not all. For example, transfer high schools, schools serving only English language learners and alternative high schools should remain clustered in network structures.

School and District Leadership Teams, required by the state, should be reinvigorated. School district leaders, teacher and parent leaders must engage and play core roles in establishing goals and assisting school teams.

Testing must be delinked from instruction. The current be-all and end-all of school instructional programs is satisfying the checklist to raise scores on the standardized test. We haven't agreed to staple computer chips into earlobes, not yet.

Returning to a geographic structure and retaining the original goals of networks must be the goal of the new administration.

During the Autonomy Zone days I attended a professional development session on a Saturday at the Julia Richman Complex – a series of workshops taught by teachers for teachers on topics selected by teachers. (I ran a session on School-Based Options in Article 8 of the Agreement.)

It’s time to erase the words now chiseled (at least in teacher minds) over the Tweed Courthouse (Department of Education headquarters),

Lasciate ogni speranza, voi ch’entrate. ("Abandon all hope, ye who enter here.")