Every spring thousands upon thousands of Texas students take the State of Texas Assessments of Academic Readiness (STAAR for short). It’s a one-day snapshot meant to evaluate a year of learning within a subject area. Even though many disagree with one-time events as assessments of learning, the fact of the matter is that they are a reality for us and our students. Because these assessments carry so much weight, we pore over the data they generate, often looking for standards where our students performed poorly so we can identify what to focus on in our instruction and intervention.
But what if I told you this well-intentioned practice may be sending us in unproductive directions? Rather than focusing on what our students really need, we may be spending time on topics and/or skills that are not the priority.
Let me illustrate what I mean with a story. I was working with a 4th grade team after a district benchmark we call STAAR Ready. Every spring in my district we give our students a released STAAR to gauge readiness for the actual STAAR coming up in May. Afterward, teams analyze the data to determine which topics to revisit and which students to put into intervention groups.
As I met with this 4th grade team, they showed me a list of the low-performing TEKS (side note: this is what we call our standards in Texas – the Texas Essential Knowledge and Skills, or TEKS for short) they had identified after analyzing the STAAR Ready data. One of the TEKS jumped out at me immediately because I was familiar with the test:
TEKS 4.4A add and subtract whole numbers and decimals to the hundredths place using the standard algorithm;
I asked them to tell me more, and the team told me they had identified students who performed poorly on the questions correlated to this standard. They created an intervention group with these students to work on adding and subtracting whole numbers and decimals to make sure they could do these computations accurately.
I followed up with a question, “Have you looked at the actual questions correlated to these TEKS?” Because they were looking at so much data and so many standards, they hadn’t gotten back into the test. Instead they’d just been identifying high-priority TEKS based on student performance on the questions.
I pulled up the test and showed them this question that had immediately come to mind when they told me they were making a group focused on TEKS 4.4A:

Source: Texas Education Agency, STAAR Math, Grade 4, Item 34
Take a moment and analyze the question.
- Can you see how it involves adding and/or subtracting with whole numbers and/or decimals?
- But what other skills are involved in answering this question correctly?
- What features of the problem might have made it more difficult for the students to answer correctly?
As it turns out, this was an incredibly difficult problem for students! When it was given to students on the actual STAAR in spring 2016, only 43% of students across the state of Texas were able to answer correctly. That means 57% of Texas 4th graders, or roughly 209,390 students, couldn’t find the total cost of three items in a shopping basket. That’s…concerning.
In my own school district, we used the 2016 released STAAR as our STAAR Ready in spring 2017. This allowed me to collect data Texas doesn’t make available to everyone. When we gave the test in spring 2017, the problem was nearly as difficult for our students: about 48% of students in my district answered it correctly. I was also able to determine that this was the 6th most difficult of the test’s 48 questions!
What’s going on? A lot, actually, for such a short question. For starters, key information is spread across two sentences. The first sentence of the problem indicates the quantities of items purchased – 1 hat and 2 skirts. The second sentence indicates their prices. This is subtle, but separating that information across two sentences upped the level of difficulty significantly for 9- and 10-year-olds. Students who are not reading closely can quickly jump to the conclusion that they only need to add the two prices shown, without realizing that one of those prices needs to be used twice.
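To make the two answers concrete, here’s a tiny sketch with hypothetical prices (the released item’s actual values aren’t reproduced here), contrasting the correct total with the predictable wrong answer a hasty reader produces:

```python
# Hypothetical prices for illustration only; the actual STAAR item uses different values.
hat = 7.95
skirt = 12.50

correct_total = hat + 2 * skirt  # 1 hat and 2 skirts: one price used twice
common_error = hat + skirt       # adding only the two prices printed in the problem

print(round(correct_total, 2))   # 32.95
print(round(common_error, 2))    # 20.45
```

The two results differ by exactly one skirt, which is what makes the error pattern so easy to spot once you look at actual student answers.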
The second feature of this problem that ups the difficulty is the fact that it is an open response question, not multiple choice. On this kind of question, a student’s answer has to be absolutely 100% accurate. If they’re off by even 1 penny, the answer is marked wrong. No pressure, kids!
I was curious which feature made the problem more difficult for the students in my district, so I dove into the data. One thing I had available that Texas doesn’t release is the actual answers every student submitted for this problem. I was able to analyze roughly 3,600 answers to see what students were doing. Here’s what I found out.
While only 48% of students got this question correct, there was a chunk of students whose answers were in the ballpark. These are kids who likely made a small calculation error. Unfortunately, if I calculate the percent of students who got it right or reasonably close, that only brings it up to 51% of our 4th graders. That’s not terribly impressive.
So what was everyone else doing? Here’s where it gets interesting. I predicted that these students only found the cost of 1 hat and 1 skirt, and it turns out that’s exactly what 33% of students in my district did. Nearly 1,200 students failed to comprehend that the total cost is composed of a hat, a skirt, and another skirt.
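The analysis above can be sketched in a few lines of code. This is a minimal, hypothetical version: it assumes student answers are available as numbers, uses made-up prices, and buckets each answer as correct, a near miss (small calculation error), the predicted hat-plus-one-skirt mistake, or something else entirely:

```python
from collections import Counter

def categorize(answer, correct, hat, skirt, tolerance=0.50):
    """Bucket a submitted answer (hypothetical categories and tolerance)."""
    if abs(answer - correct) < 0.005:
        return "correct"
    if abs(answer - correct) <= tolerance:
        return "near miss"          # likely a small calculation error
    if abs(answer - (hat + skirt)) < 0.005:
        return "hat + 1 skirt"      # missed that one price is used twice
    return "other"

# Hypothetical prices and sample student answers for illustration
hat, skirt = 7.95, 12.50
correct = hat + 2 * skirt           # 32.95

answers = [32.95, 32.85, 20.45, 19.00]
counts = Counter(categorize(a, correct, hat, skirt) for a in answers)
print(counts)
```

With a few thousand real answers in place of the sample list, tallies like these are what surfaced the 33% who computed the cost of only one hat and one skirt.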
Going back to the team I was working with, I asked, “So now that we’ve analyzed this question, do you think the issue is that your students are struggling with adding and subtracting whole numbers and decimals?” We talked about it and they agreed that the bigger issue is how their students read and comprehend word problems.
Looking just at the standards is a very limited way of analyzing data. There are often many different ways to assess a standard, and if we don’t take the time to look at the exact questions our students interact with, we might be missing critical information. Had this team done an intervention on pure addition and subtraction of whole numbers and decimals, their kids would have gotten better at those skills for sure. But is that really what they needed?
Over the past year, I’ve been analyzing assessment data differently than in the past. In follow-up posts I’d like to share some of that with you. In the meantime, please dive into your assessments and analyze those questions, not just the standards. You’ll hopefully come away with a truer picture of what’s challenging your students, so that you can more accurately target what to work on and how to support them.