Data Use, Technology, and Educational Leadership
One of the core ideas behind standards-based reform is that alignment between state curriculum standards and state tests promotes instructional coherence. After all, if the tests are aligned to the curriculum, then policymakers, district leaders, and school-level educators have more information about how kids are doing according to the standards. But what if state standards and state tests are not well aligned? Doing what the state wants (teaching the curriculum) may be contrary to doing what the state actually uses to make decisions about you (the state test).
I just finished re-reading Polikoff, Porter, and Smithson’s “How Well Aligned Are State Assessments of Student Achievement with State Content Standards?” One of the key findings in this article is that state assessments and state content standards are not always so well aligned. The researchers examined the match between state tests and standards in terms of both the content and the cognitive rigor targeted. They did so across several states, grade levels, and content areas. The degree of misalignment depended upon whether you think about alignment in terms of content, in terms of rigor, or both. It also depended upon subject area.
Problems in alignment included: under- or over-testing content relative to its proportion in the standards; not testing certain standards at all (especially in ELAR); and testing content that is not at all in the standards for that grade level. The authors put these issues into perspective with more exact figures and details. For example, roughly 20% to 30% of test content is on the right standard, but at the wrong level of cognitive demand. The authors also provide recommendations for test designers and policymakers. It’s worth reading the actual paper for all of these, but I especially like their point that as student achievement tests get rolled into decisions about hiring and firing, “pay for performance” (merit pay), and curriculum revision, we need to be better about making sure that the tests actually reflect student learning. Consequences associated with testing ought to at least be relevant to expectations about the things that should’ve been taught.
The authors pay less attention to the everyday school that is attempting to improve its data use. That’s unfortunate, because state test data were some of the first things that administrators and teachers were asked to use in becoming more “data-driven.” Another problem is that most schools are probably unprepared to do this sort of analysis themselves (matching up each test item to each grade level and curriculum area’s content and cognitive rigor). At the same time, many schools that resist “teaching to the test” have assumed that teaching the actual breadth and rigor of the curriculum would both be good for kids and result in test achievement.
Without improvements to test design or policy, providing answers for school-level educators is hard to do. My gut reaction is to re-emphasize that educational decisions for students should be based on a variety of information sources, and that we’ve always kind of known that state test data are typically stale and not so good at identifying issues for individual students. So, it may be that state test data should be read as providing information about overall issues in the school (e.g., equitable outcomes for economically disadvantaged students), but as only a so-so indicator of curriculum implementation itself. In order to do what is “best for kids,” it may be necessary for schools and districts to have conversations about what might be better data (formal, informal, curricular, or not) toward those ends.
My lingering problem: Does that mean that schools should do this at the risk of lower state test scores and rankings? I really dislike the idea that in order to do well on the test, teachers ought to teach to the test, even if that test is wrong. Anyone have solutions or comments? Calls for rebellion?