Data is one of the many controversies that dog education. We argue over what data should be collected and what should not, who should collect it, and what they should be permitted to do with it.
Teachers need data to check our classroom observations against. Traditionally, we gathered it by reviewing student work (I thought Gladys understood, but her work shows me she is confused about what to do) and reflecting on our own assessments.
We use that data to make instructional decisions, which is appropriate since it is teacher-generated and understood.
A problem comes with data that cannot be understood, for example, state tests. Teachers don’t see the questions or student answers. Shoot, we don’t even get a breakdown by standard. All we see is an overall score and a breakdown into broad categories. That tells us nothing about what the student knows.
Because state tests are not designed to measure teacher or school performance, they do not yield useful, understandable, or accurate results for those purposes. In short, data from state tests is worthless.
Then there are the privacy concerns. Online educational platforms can be beneficial when wielded by teachers with the expertise to assign and modify what students do based on student work and professional judgment. But teachers are usually denied that opportunity. Instead, they are relegated to stalking the room to suppress misbehavior as students fail to be enthralled by sitting at a machine for hours with next to no interaction with other live human beings.
Despite FERPA protections, online providers of educational platforms reserve the right to use (SELL) student data as they wish. Don't get me wrong: for a platform to work, it has to gather data. How else would an educational professional know what a student accomplished?
But it doesn’t stop there. Data is big bucks and the platforms are eager to sell.
To recap, data is problematic for reasons well and often discussed: it is misinterpreted, it doesn't provide the insight it pretends to provide, and student privacy rights are disregarded because the data is too valuable to protect.
BUT! One thing I never see discussed is what happens if the data is wrong.
Commercial packages and state tests are worth huge profits to the providers. They have a vested interest in making sure that the results are scored accurately.
But there are other data sources. Districts fret over what those state tests will reveal and do their own testing: mimic tests during the year to see how students might do, and unit tests to provide periodic reports on how well progress is being made.
It is those tests I have in mind. My district does a mid-year scrimmage and exhorts us to inflict district-staff-created unit tests upon students. Whenever I review these tests, I find errors. Forget that some questions are confusing, not up to par, never field-tested, and seldom revised; the district answer key is many times flat-out wrong.
The tests are scored incorrectly.
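To see how small a flaw it takes, here is a minimal sketch in Python using made-up data and a made-up scoring routine (nothing here is any district's or vendor's actual tool). It assumes a ten-question test where the key miskeys a single item, question 7, and shows how that one bad entry changes the proficiency numbers an administrator would see.

```python
# Hypothetical 10-question test. The true answer to question 7 is "C",
# but the district's key says "B", so correct answers get marked wrong.
CORRECT_KEY = {1: "A", 2: "C", 3: "B", 4: "D", 5: "C",
               6: "A", 7: "C", 8: "B", 9: "D", 10: "A"}
FLAWED_KEY = {**CORRECT_KEY, 7: "B"}  # one bad entry in the answer key

# A made-up class of 25 students. Everyone answers question 7 correctly;
# each student misses 0, 1, or 2 of the other items.
students = []
for i in range(25):
    paper = dict(CORRECT_KEY)        # start from a perfect paper
    for q in list(paper)[:i % 3]:    # spoil 0-2 of the early questions
        paper[q] = "X"               # a wrong answer
    students.append(paper)

def percent_correct(responses, key):
    """Score one student's responses against an answer key."""
    right = sum(1 for q, ans in responses.items() if key[q] == ans)
    return 100 * right / len(key)

def proficient(responses, key, cut=80):
    """Did this student clear the (made-up) 80% proficiency cut?"""
    return percent_correct(responses, key) >= cut

actual = sum(proficient(s, CORRECT_KEY) for s in students)
reported = sum(proficient(s, FLAWED_KEY) for s in students)

print(f"Actually proficient: {actual}/25")    # 25/25
print(f"Reported proficient: {reported}/25")  # 17/25 -- garbage in, garbage out
```

In this sketch, eight students who actually cleared the bar get reported as failing it, and nobody downstream of the spreadsheet can tell.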
What happens when the data is wrong?
What happens when teachers are called in because the results are not what administration and the district want, but the data is wrong because the answer key was flawed?
What then?
What happens when a superintendent interprets results that are nothing more than garbage in, garbage out (GIGO)?
What happens when a Board of Education gets the report?
At best, nothing. At worst, personnel changes and good people get canned.
What happens when the data is wrong?