Friday, September 8, 2017

Data Driven Into the Weeds

Having a data-driven school has been all the rage for a while now, because when you express your ideas, thoughts, and biases in numbers, they qualify as "facts," whereas judgment expressed in words obviously lacks data-rich factiness, and so should be ignored. Yes, the fact that I am 100% an English teacher may make me about 62% bitter about the implied valuing of numbers over words; I'd say I'm at about 7 on the 11-point Bitterness Scale, and that's a fact.

Pretty sure the rest of the vehicle is around here somewhere.


Being data-driven (which usually means test-result-driven) is a bad idea for several reasons.

Data vs. Standards

Mind you, I am not and never was a fan of nationalized standards like the Common Core [Insert New Name Here] Standards. But at some point lots of folks quietly switched from standards-aligned to data-driven curriculum management, and that matters a great deal. Almost an 8 on the 10-point Great Deal scale.

It matters because tests ignore many of the standards, starting with non-starters like the speaking and listening standards. No standardized test will address the cooperative standards, nor can writing or research be measured in any meaningful way on a standardized multiple-choice test. And no-- critical thinking cannot be measured on a standardized test any more than creativity can be measured by a multiple-choice question.

In other words, the moment we switch from standards-aligned to data-driven, we significantly and dramatically narrow the curriculum to a handful of math and reading standards that can be most easily addressed with a narrow standardized test. The Curriculum Breadth Index moves from a 20 down to a 3.

Remember GIGO

Because the instrument we use for gathering our data is a single standardized test that, in many states, carries no significant stakes for the students, we are essentially trying to gather jello with a pitchfork.

The very first hurdle we have to clear is that students mostly don't care how they do on the test. In some cases, states have tried to clear that hurdle by installing moronically disproportionate stakes, such as the states where third graders who are A students can still find themselves failing for the year because of a single test. But if you imagine my juniors approach the Big Standardized Test thinking, "Golly, I must try to do my very best because researchers and policy makers are really depending on this data to make informed decisions, and my own school district really needs me to do my very, very bestest work so that the data will help the school leaders,"-- well, if that's what you imagine, then you must rank around 98% on the Never Met An Actual Human Teenager scale.

That's before we even address the question of whether the test does a good job of measuring what it claims to measure-- and there is no reason to believe that it does. Of course, it's "unethical" for teachers to actually look at the test, but apparently I and many of my colleagues are ethically impaired, so we've peeked. As it turns out, many of the questions are junk. I would talk about some specific examples, but the last time I and other bloggers tried that, we got cranky letters and our hosting platforms put our posts in Time Out. Seriously. I have a post that discusses specific PARCC questions in fairly general ways, but Blogger took it down. So you will have to simply accept my word when I say that in my professional opinion, BS Test questions are about 65% bunk.

For a testing instrument to gather good data, the questions have to be good, valid, reliable questions that accurately measure the skill or knowledge area they purport to measure. Then the students have to make a sincere, full-effort attempt to do their best.

The tests being used to generate data fail both measures. Letting this data drive your school is like letting your very drunk uncle drive your car.

Inside the Black Box

When I collect my own data for driving my own instruction, I create an instrument based on what I've been teaching, I give it to students, and I look at the results. I look for patterns, like finding many students flubbing the same task, and then I look at the question or task, so that I can figure out exactly what they don't get.
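(For the curious, here's a bare-bones sketch of what I mean by pattern-hunting -- purely illustrative, in Python, with made-up question IDs and a made-up helper name, not anybody's official tool: tally which items most of the class missed, then go back and stare at those items.)

```python
# Illustrative only: a minimal item-analysis sketch for a quiz I wrote myself.
# Assumes each student's results are a dict mapping question IDs to True/False
# (correct/incorrect). Names and data are invented for the example.
from collections import Counter

def flag_trouble_spots(results, threshold=0.5):
    """Return question IDs that more than `threshold` of students missed."""
    miss_counts = Counter()
    for student_answers in results:
        for question_id, correct in student_answers.items():
            if not correct:
                miss_counts[question_id] += 1
    total = len(results)
    return [q for q, misses in miss_counts.items() if misses / total > threshold]

# Example: question 3 jumps out, so that's the task I go back and re-examine.
class_results = [
    {"q1": True, "q2": True, "q3": False},
    {"q1": True, "q2": False, "q3": False},
    {"q1": True, "q2": True, "q3": False},
]
print(flag_trouble_spots(class_results))  # ['q3']
```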

The BS Test is backwards. First, it was designed with no knowledge of or attention to what I taught. So what is required here is not testing what we teach, but teaching to the test.

Except that we all know that teaching to the standardized test is Bad and Wrong, so we have to pretend not to do that. On top of that, we have installed a system that puts the proprietary rights and fiscal interests of test manufacturers ahead of the educational needs of our students, with the end result that teachers are not allowed to look at the test.

So to be data-driven, we must first be data-inventors, trying to figure out exactly what our students did poorly on the BS Test. We may eventually be given result breakdowns that tell us the student got Score X on Some Number of Questions that were collectively meant to assess This Batch of standards. But as far as a neat, simple "here's the list and text of questions that this student answered incorrectly," no such animal exists. This is particularly frustrating in the case of a multiple-choice test, since to really track where our students are going wrong, we need to see the wrong answers they selected, which are our only clues to the hitch in their thinking about the standard. In short, we have 32% of the actual information needed to inform instruction.
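(Here's the kind of thing we could do if we ever saw that data -- a purely hypothetical sketch with invented question IDs, choices, and helper names, since the actual test hands over nothing of the sort: tally which wrong choice the students who missed an item actually picked.)

```python
# Hypothetical sketch: the kind of distractor analysis we *can't* do,
# because we never see which wrong answer each student chose.
from collections import Counter, defaultdict

def tally_distractors(responses, answer_key):
    """For each question, count how often each wrong choice was picked."""
    picks = defaultdict(Counter)
    for student in responses:
        for question_id, choice in student.items():
            if choice != answer_key[question_id]:
                picks[question_id][choice] += 1
    return picks

answer_key = {"q7": "B"}
responses = [{"q7": "C"}, {"q7": "C"}, {"q7": "A"}]
# Most students who missed q7 chose C -- that shared wrong answer is the clue
# to the hitch in their thinking. The BS Test never shows us any of this.
print(dict(tally_distractors(responses, answer_key)))  # {'q7': Counter({'C': 2, 'A': 1})}
```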

We are supposed to teach to the test with our eyes blindfolded and our fingers duct-taped together.

Put Them All Together

Consider all of these factors, and I have to conclude that data-driven instruction is a snare and a delusion. Or, rather, 87% snare and 92% delusion, with a score of 8 on the ten-point Not Really Helping scale. And I think the weeds measure about 6'7".

 

15 comments:

  1. I'm not a teacher. I'm a professional journalist, and I own and do most of the editing for a book publishing company. I had an opportunity to review a few test questions from an elementary-level standardized test, and I concur—they're crap. Not only are they pointless, the writing is appalling. I believe I still have my edited copy in storage somewhere.

    ReplyDelete
  2. I remember getting the C&D letter and my post getting taken down too. I put it right back up again, got another letter maybe a month later. The third time I took out specific book title names and left the questions and the blogger's impressions/discussion, and that one is still up if anyone wants to see it: http://crunchyprogressiveparentingtwo.blogspot.com/2016/05/the-parcc-test-exposed.html

    ReplyDelete
  3. As a tool of evaluating either students or teachers, standardized tests are atrocious.

    I can see a standard based on teacher evaluation of student skills (maybe a report card, perhaps) to evaluate students, but what would you use to evaluate teachers? My wife is a 2nd grade teacher, so we've talked about how she would prefer to be evaluated.

    In the private sector, you would be evaluated subjectively by your manager. But some of her principals over the years have exemplified the saying "you shall rise to the level of your own incompetence." Considering the lack of incentives, managerial skills, and teaching skills of many principals, neither of us thinks a subjective principal evaluation would be either fair or effective.

    Since you've thought a lot about this, do you have any ideas for what sort of teacher evaluation system you think might work?

    ReplyDelete
    Replies
    1. You might want to check the archive. Pretty sure Peter has addressed that question.

      Delete
    2. Might I suggest Peter that you place a link on your banner next to the topics of tenure and such to your posts on teacher evaluation?

      Delete
    3. Peter absolutely HAS addressed it; been around the block w/Peter Cunningham about this not long ago, actually.

      Delete
  4. I have to admit I've peeked as well. My fellow teacher and proctor told me to stop it. So there's that. :)
    Funny how they have to hide it. I'd love to tell a parent they couldn't see the test their child took in my class that I wrote.
    And I'm beyond sick of the pearl-clutching as I hear reports every year here in SC that the ACT scores might have fallen (they have to take it, but of course they get no grade for it...and yet there are those out there that think there *aren't* students who circle C straight down their papers and take a nap? Like you said, evidently people who develop these policies have never met a teenager).

    ReplyDelete
    Replies
    1. I had a nephew who made designs while filling in the bubbles on such tests.

      Delete
  5. Your first sentence says it all. And your injection of "factiness" into the article - clever, clever.

    ReplyDelete
  6. It's 3 minutes into the live, 4-network broadcast on the Super High School of the future and I'm about to toss my dinner - and cookies. Please do a follow-up post on this crapfest.

    ReplyDelete
  7. For years now the whole apocalypse has reminded me of a class in Asian Studies in which I was the special education consultant. The cultural revolution had this highly poignant practice of demanding that farmers produce larger and larger vegetables. This without any support--no supplies, no fertilizer, no training, no nothing. Just do it! Of course, eventually pumpkins (or some other vegetable) the size of houses were "documented." Then heads would roll when the "deception" was uncovered. Shame on America's school reformsters!! We thought we were better than that.

    ReplyDelete
  8. 25 minutes in and my thoughts are getting suicidal. AAAAAAAAAAARRRRRRRRRRRRGGGGGGGGGGGGGG

    ReplyDelete
  9. "Except that we all know that teaching to the standardized test is Bad and Wrong."
    Why is teaching to the test wrong? Look, I get the point that standardized tests can narrow the curriculum. At least on the college admissions side of things, the test makers have tried to address this. See, for example, the writing test (including writing sample) on the SATs used for about the last 15 years.

    In any event, I concede the point. The tests are a subset of what students are expected to learn. But why does that make it a bad thing? A doctor takes your temperature, blood pressure, and weight. It's intended to be a first-order measure of your health - not perfect.

    But you fail to recognize the main benefit of such tests ... They are typically uniform across the state. This allows school leaders and elected officials to further investigate whether there are differences between similar students across the state. How would you do so in your approach?

    ReplyDelete
    Replies
    1. I'd first look for an answer to the question, "Why do I need to do it at all?" If we don't know what we're trying to find out, exactly, we can't know how best to assess it. Which is why teaching to the test is backwards-- teach first, then assess to see how students did. Don't create the test before the students have even set foot in the classroom.

      Delete
    2. Mr. Backman, one would think you had never read any of Peter's posts, as he has answered all these questions many times over.

      Delete