
Origin Story

This summer I was invited to give an Ignite Talk at my school district’s Teaching & Learning SummeRR Camp.

SummeRR Camp

It’s a talk I’m proud of because in five minutes I was able to share why it’s so urgent to me that we ensure sense making is the focus of our work with students. Students deserve to develop positive relationships with what they’re learning now. It’s a disservice to assume they’ll learn to like it later. We never know what doors are closed to students because they learned to hate a subject or grow up thinking they’re not smart enough.

It’s the Great (Big) Pumpkin, Charlie Brown!

At this point it’s become an annual tradition that I make a batch of numberless word problems based on the results of the Safeway World Championship Pumpkin Weigh-Off. I’ve collected the problems I’ve written based on the results from 2016, 2017, and now 2018 in this folder. [Update: Thanks to a request from my daughter, I’ve added some primary level pumpkin problems as well.] As with all the other files I share, you are welcome to edit them if you want to tweak them for your students. All you have to do is make a copy if you have a Google account or download the file in an editable format like PowerPoint. You will have full editing rights to your copy.

If you’re looking for some more mathematical inspiration as Halloween approaches, check out these three blog posts I wrote which include lots of photos and ideas for how to use them to spark mathematical conversations with your students.

Enjoy!

Areas of Celebration and Exploration

After a brief interlude, it’s time to get back to the blog series I started recently about analyzing assessments.

  • In the first post, I shared the importance of digging into the questions, not just the standards they’re correlated to.
  • In the second post, I talked about how understanding how a test is designed can help us better understand the results we get.
  • In the third post, I shared how I learned to organize assessment data by item difficulty and the implications for supporting our students.
  • In this post, I’d like to talk about another way to look at assessment data to uncover areas of celebration and areas of exploration.

Let’s get started!


In my previous post I shared the order of questions based on item difficulty for the 2018 5th grade STAAR for the entire state of Texas. Here it is again:

2018-G5-Item-Difficulty-Sort

According to this ordering, question 9 was the most difficult item on the test, followed by question 18, question 8, and so on down to question 10 as the least difficult item (tied with questions 2 and 4).

Here’s my question: What is the likelihood that any given campus across the state would have the exact same order if they analyzed the item difficulty just for their students?

Hopefully you’re like me and you’re thinking, “Not very likely.” Let’s check to see. Here’s the item difficulty of the state of Texas compared to the item difficulty at just one campus with about 80 students. What do you notice? What do you wonder?

2018-G5-Texas-vs-Campus

Some of my noticings:

  • Questions 8, 9, 18, and 21 were some of the most difficult items for both the state and for this particular campus.
  • Question 5 was not particularly difficult for the state of Texas as a whole (it’s about midway down the list), but it was surprisingly difficult for this particular campus.
  • Question 22 was one of the most difficult items for the state of Texas as a whole, but it was not particularly difficult for this campus (it’s almost halfway down the list).
  • Questions 1, 2, 10, 25, and 36 were some of the least difficult items for both the state and for this particular campus.
  • Question 4 was tied with questions 2 and 10 for being the least difficult item for the state, but for this particular campus it didn’t crack the top 5 list of least difficult items.
  • There were more questions tied for being the most difficult items for the state and more questions tied for being the least difficult items for this particular campus.

My takeaway?

What is difficult for the state as a whole might not be difficult for the students at a particular school. Likewise, what is not very difficult for the state as a whole might have been more difficult than expected for the students at a particular school.

But is there an easier way to identify these differences than looking at an item on one list and then hunting it down on the second list? There is!

This image shows the item difficulty rank for each question for Texas and for the campus. The final column shows the difference between these rankings.

2018-G5-Rank-Order

 

Just in case you’re having trouble making sense of it, let’s just look at question 9.

2018-G5-Rank-Order-Q9

As you can see, this was the number 1 most difficult item for the state of Texas, but it was number 3 on the same list for this campus. As a result, the rank difference is 2 because this question was 2 questions less difficult for the campus. However, that’s a pretty small difference, which I interpret to mean that this question was generally about as difficult for this campus as it was for the state as a whole. What I’m curious about and interested in finding are the notable differences.
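If you’d like to play with this yourself, here’s a rough Python sketch of the rank-difference calculation. The percent-correct values below are made up for illustration, not the actual STAAR data:

```python
# Rank questions by difficulty and compare a campus against the state.
# Rank 1 = most difficult (lowest percent of students answering correctly).

def difficulty_ranks(pct_correct):
    """Map each question to its difficulty rank, 1 = most difficult."""
    ordered = sorted(pct_correct, key=pct_correct.get)
    return {q: i + 1 for i, q in enumerate(ordered)}

# Hypothetical percent-correct values for four questions.
state  = {"Q5": 62, "Q9": 31, "Q22": 38, "Q10": 90}
campus = {"Q5": 40, "Q9": 35, "Q22": 70, "Q10": 92}

state_ranks  = difficulty_ranks(state)
campus_ranks = difficulty_ranks(campus)

for q in state:
    diff = campus_ranks[q] - state_ranks[q]
    # Negative: more difficult for the campus (area of exploration).
    # Positive: less difficult for the campus (area of celebration).
    print(q, state_ranks[q], campus_ranks[q], diff)
```

With the real data loaded, the large negative differences surface the areas of exploration and the large positive differences surface the areas of celebration.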

Let’s look at another example, question 5.

2018-G5-Rank-Order-Q5

This is interesting! This question was number 18 in the item difficulty for Texas, where 1 is the most difficult and 36 is the least difficult. However, this same question was number 5 in the list of questions for the campus. The rank difference is -13 because this question was 13 questions more difficult for the campus. That’s a huge difference! I call questions like this areas of exploration. These questions are worth exploring because they buck the trend. If instruction at the campus were like the rest of Texas, this question should have been just as difficult for the campus as for the rest of the state…but it wasn’t. That’s a big red flag that makes me want to start digging to uncover why this question was so much more difficult. There are lots of reasons this could be the case, such as:

  • It includes a model the teachers never introduced their students to.
  • Teacher(s) at the campus didn’t know how to teach this particular concept well.
  • The question included terminology the students hadn’t been exposed to.
  • Teacher(s) at the campus skipped this content for one reason or another, or they quickly glossed over it.

In case you’re curious, here’s question 5 so you can see for yourself. Since you weren’t at the school that got this data, your guesses are even more hypothetical than theirs, but it is interesting to wonder.

2018-G5-Q5

Let me be clear. Exploring this question isn’t about placing blame. It’s about uncovering, learning what can be learned, and making a plan for future instruction so students at this campus hopefully don’t find questions like this so difficult in the future.

Let’s look at one more question from the rank order list, question 22.

2018-G5-Rank-Order-Q22

This is sort of the reverse of the previous question. Question 22 was much more difficult for the state as a whole than it was for this campus. So much so that it was 7 questions less difficult for this campus than it was for the state. Whereas question 5 is an area of exploration, I consider question 22 an area of celebration! Something going on at that campus made it so that this particular question was a lot less difficult for the students there.

  • Maybe the teachers taught that unit really well and student understanding was solid.
  • Maybe the students had encountered some problems very similar to question 22.
  • Maybe students were very familiar with the context of the problem.
  • Maybe the teachers were especially comfortable with the content from this question.

Again, in case you’re curious, here’s question 22 to get you wondering.

2018-G5-Q22

 

In Texas this is called a griddable question. Rather than being multiple choice, students have to grid their answer like this on their answer sheet:

2018-G5-Q22-Grid

Griddable items are usually some of the most difficult items on STAAR because of their demand for accuracy. That makes it even more interesting that this item was less difficult at this particular campus.

We can never know exactly why a question was significantly more or less difficult at a particular campus, but analyzing and comparing the rank orders of item difficulty does bring to the surface unexpected, and sometimes tantalizing, differences that are well worth exploring and celebrating.

Just this week I met with teams at a campus in my district to go over their own campus rank order data compared to our district data. They very quickly generated thoughtful hypotheses about why certain questions were more difficult and others were less so based on their memories of last year’s instruction. In meeting with their 5th grade team, for example, we were surprised to find that many of the questions that were much more difficult for their students involved incorrect answers that were most likely caused by calculation errors, especially if decimals were involved. That was very eye opening and got us brainstorming ideas of what we can work on together this year.


This post wraps up my series on analyzing assessment data. I might follow up with some posts specifically about the 2018 STAAR for grades 3-5 to share my analysis of questions from those assessments. At this point, however, I’ve shared the big lessons I’ve learned about how to look at assessments in new ways, particularly with regards to test design and item difficulty.

Before I go, I owe a big thank you to Dr. David Osman, Director of Research and Evaluation at Round Rock ISD, for his help and support with this work. And I also want to thank you for reading. I hope you’ve come away with some new ideas you can try in your own work!

By Design

(Note: This post is a follow up to my previous post, Misplaced Priorities.)

When you teach a unit on, say, multiplication, what are you hoping your students will score on an end-of-unit assessment? If you’re like me, you’re probably hoping that most, if not all, of your students will score between 90% and 100%. Considering all the backward designing, the intentional lesson planning, and the re-teaching and support provided to students, it’s not unreasonable to expect that everyone should succeed on that final assessment, right?

So what message does it send to teachers and parents in Texas that STAAR has the following passing rates as an end-of-year assessment?

  • 3rd grade – Students only need to score 50% to pass
  • 4th grade – Students only need to score 50% to pass
  • 5th grade – Students only need to score approximately 47% to pass

Wow! We’ve got really low expectations for Texas students! They can earn an F and still pass the test. How terrible!

Comments like this are what I often hear from teachers, parents, administrators, and other curriculum specialists. I used to believe the same thing and echo these sentiments myself, but not anymore.

Last year, our district’s Teaching & Learning department attended a provocative session hosted by Dr. Kevin Barlow, Executive Director of Research and Accountability in Arlington ISD. He challenged our assumptions about how we interpret passing standards and changed the way I analyze assessments, including STAAR.

The first thing he challenged is this grading scheme as the universal default in schools:

  • A = 90% and up
  • B = 80-89%
  • C = 70-79%
  • D = 60-69%
  • F = Below 60%

The question he posed to us is, “Who decided 70% is passing? Where did that come from?” He admitted that he’s looked into it, and yet he hasn’t found any evidence for why 70% is the universal benchmark for passing in schools. According to Dr. Barlow, percentages are relative to a given situation and our desired outcome(s):

  • Let’s say you’re evaluating an airline pilot. What percentage of flights would you expect the pilot to land safely to be considered a good pilot? Hopefully something in the high 90s like 99.99%!
  • Let’s say you’re evaluating a baseball player. What percentage of at-bats would you expect a batter to turn into hits to be considered a great baseball player? According to current MLB batting stats, we’re looking at around 34%.

It’s all relative.

Let’s say you’re a 5th grade teacher and your goal, according to state standards, is to ensure your students can multiply up to a three-digit number by a two-digit number. And here’s the assessment you’ve been given for your students to take. How many questions on this assessment would you expect your students to answer correctly to meet the goal you have for them?

  1. 2 × 3
  2. 8 × 4
  3. 23 × 5
  4. 59 × 37
  5. 481 × 26
  6. 195 × 148
  7. 2,843 × 183
  8. 7,395 × 6,929
  9. 23,948 × 8,321
  10. 93,872 × 93,842

If my students could answer questions 1 through 5 correctly, I would say they’ve met the goal. They have demonstrated they can multiply up to a three-digit number by a two-digit number.

  1. 2 × 3 (Meets my goal)
  2. 8 × 4 (Meets my goal)
  3. 23 × 5 (Meets my goal)
  4. 59 × 37 (Meets my goal)
  5. 481 × 26 (Meets my goal)
  6. 195 × 148 (Beyond my goal)
  7. 2,843 × 183 (Beyond my goal)
  8. 7,395 × 6,929 (Beyond my goal)
  9. 23,948 × 8,321 (Beyond my goal)
  10. 93,872 × 93,842 (Beyond my goal)

Questions 6 through 10 might be possible for some of my students, but I wouldn’t want to require students to get those questions correct. As a result, my passing rate on this assessment is only 50%. Shouldn’t I think that’s terrible? Isn’t 70% the magic number for passing? But given the assessment, I’m perfectly happy with saying 50% is passing. Expecting an arbitrary 70% on this assessment would mean expecting students to demonstrate proficiency above grade level. That’s not fair to my students.
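That 50% falls straight out of how the items were written. If you wanted to automate the annotation above, a quick check of digit counts does the trick (a hypothetical sketch, not anything I’d hand to students):

```python
# Classify each item against the goal "multiply up to a three-digit
# number by a two-digit number" by checking the factors' digit counts.

def meets_goal(a, b):
    """True if the item is at most a three-digit by two-digit multiplication."""
    digits = sorted((len(str(a)), len(str(b))))
    return digits[0] <= 2 and digits[1] <= 3

items = [(2, 3), (8, 4), (23, 5), (59, 37), (481, 26),
         (195, 148), (2843, 183), (7395, 6929), (23948, 8321), (93872, 93842)]

in_goal = sum(meets_goal(a, b) for a, b in items)
print(f"{in_goal} of {len(items)} items meet the goal "
      f"-> a passing score of {100 * in_goal // len(items)}%")
```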

Some of you might be thinking, “I would never give my students this assessment because questions 6 through 10 are a waste of time because they’re above grade level.” In that case, your assessment might look like this instead:

  1. 2 × 3
  2. 8 × 4
  3. 23 × 5
  4. 59 × 37
  5. 481 × 26

It hasn’t changed the expectation of what students have to do to demonstrate proficiency, and yet, to pass this assessment, I would expect students to earn a score of 100%, rather than 50%. Again, I would be unhappy with the arbitrary passing standard of 70%. That would mean it’s okay for students to miss questions that I think they should be able to answer. On this assessment, requiring a score of 100% makes sense because I would expect 5th graders to get all of these problems correct. If they don’t, then they aren’t meeting the goal I’ve set for them.

So why not just give the second assessment where students should all earn 100%? If that’s the expectation, then why bother with the extra questions?

This is exactly the issue going on with STAAR and its perceived low passing rate.

When you have an assessment where 100% of students can answer 100% of the questions correctly, all you learn is that everyone can get all the questions right. It masks the fact that some students actually know more than their peers. In terms of uncovering what our learners actually know, it’s just not very useful data.

More useful (and interesting) is an assessment where we can tell who knows more (or less) and by how much.

STAAR is designed to do this. The assessment is constructed in such a way that we can differentiate between learners to get a better sense of what they know relative to one another. In order to do this, however, it requires constructing an assessment similar to that 10-item multiplication assessment.

Just like how questions 1 through 5 on the multiplication assessment were aligned with the goal for multiplication in 5th grade, about half the questions on STAAR (16 or 17 questions, depending on the grade level) are aligned with Texas’ base level expectations of what students in 3rd, 4th, and 5th grade should be able to do. That half of the assessment is what we expect all of our students to answer correctly, just like we would expect all 5th graders to answer questions 1 through 5 correctly on the 10-item multiplication assessment.

So how do Texas students fare in reality? Here are the numbers of students at each grade level who answered at least half of the questions correctly on STAAR in spring 2018:

  • Grade 3 – 77% passed with at least half the questions correct (299,275 students out of 386,467 total students)
  • Grade 4 – 78% passed with at least half the questions correct (308,760 students out of 397,924 total students)
  • Grade 5 – 84% passed with about half of the questions correct (337,891 students out of 400,664 total students)

Not bad! More than three quarters of the students at each grade level demonstrated that they can answer at least half of the questions correctly. These students are meeting, if not exceeding, the base level expectations of their respective grade levels. (Side note: Texas actually says students earning a 50% are Approaching grade level and a higher percentage is called Meets grade level. I’m not going to play with the semantics here. For all intents and purposes, earning a 50% means a student has passed regardless of what you want to call it.) But we’re left with some questions:
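Those percentages are easy to double-check:

```python
# Recompute the spring 2018 passing rates quoted above
# (students answering at least half of the questions correctly).
results = {
    "Grade 3": (299_275, 386_467),
    "Grade 4": (308_760, 397_924),
    "Grade 5": (337_891, 400_664),
}
for grade, (passed, total) in results.items():
    print(f"{grade}: {100 * passed / total:.0f}% passed")
```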

  • How many of these roughly 300,000 students at each grade level performed just barely above the base level expectations?
  • How deep is any given student’s understanding?
  • How many of these students exhibited mastery of all the content assessed?

Good news! Because of how the assessment is designed, we have another set of 16 or 17 questions to help us differentiate further among the nearly 300,000 students at each grade level who passed. This other half of the questions on STAAR incrementally ramps up the difficulty beyond that base level of understanding. The more questions students get correct beyond that first half of the assessment, the better we’re able to distinguish not only who knows more but also by how much.

Since percents are relative and 70% is our culturally accepted passing standard, why isn’t the STAAR designed to use that passing standard instead? It would definitely remove the criticisms people have about how students in Texas pass with an F.

Here are two rough draft graphs I created to attempt to illustrate the issue. Both graphs represent the 3rd grade STAAR, which has a total of 32 questions. The top graph shows a hypothetical passing standard of 70% and the bottom graph shows the actual passing standard of 50%.

20180902_171915

The first graph represents a 3rd grade STAAR where 70% is designed to be the passing standard. This means 22 questions are needed to represent the base level of understanding (assuming this assessment also has a total of 32 items). Since we’re not changing the level of understanding required to pass, presumably 300,000 students would pass this version of the assessment as well. That leaves only 10 questions to help us differentiate among those 300,000 students who passed to see by how much they’ve exceeded the base level. That’s not a lot of wiggle room.

The second graph represents the current 3rd grade STAAR where 50% is designed to be the passing standard. This means 16 questions are needed to represent the base level of understanding, but now we have another 16 questions to help us differentiate among the 300,000 students who passed. Because there are a number of high performing students in our state, this still won’t let us differentiate completely, but there’s definitely more room for it with the 50% passing standard than the 70% passing standard.
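To put numbers on the difference between the two graphs, here’s the quick arithmetic (assuming the base-level item count is just the passing standard times 32, rounded):

```python
# How many of the 32 items represent base-level expectations vs. how many
# are left over for differentiating among students who pass, under each
# possible passing standard.
total_items = 32
for standard in (0.70, 0.50):
    base = round(standard * total_items)   # items at base-level expectations
    headroom = total_items - base          # items left for differentiation
    print(f"{standard:.0%} standard: {base} base items, "
          f"{headroom} left to differentiate")
```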

Some points I want to make clear at this point in the post:

  • There are definitely issues with an assessment where half of it is by design more difficult than the expectations of the grade level. We have roughly a quarter of students in Texas who can’t even answer the base level questions correctly (half the assessment). Unfortunately, they’re subjected to the full assessment and the base level questions are interspersed throughout. There are a lot of issues around working memory, motivation, and identity that could be considered and discussed here. That’s not what I’m trying to do in this post, however. As I mentioned in my previous post, regardless of how I feel, this is the reality for our teachers and students. I want to understand that reality as best I can because I still have to live and work in it. I can simultaneously try to effect changes around it, but at the end of the day my job requires supporting the teachers and students in my district with STAAR as it is currently designed.
  • STAAR is trying to provide a mechanism for differentiating among students…in general. However, having analyzed this data at the campus level (and thanks to the expertise of my colleague Dr. David Osman) it’s clear that STAAR is too difficult for some campuses and too easy for other campuses. In those extremes, it’s not helping those campuses differentiate very well because too many students are either getting questions wrong or right.
  • This post is specifically about how STAAR is designed. I can’t make any claims about assessments in other states. However, I hope this post might inspire you to dig more deeply into how your state assessment is constructed.
  • I’m not trying to claim that every assessment should be designed this way. I’m sharing what I learned specifically about how STAAR is designed. Teachers and school districts have to make their own decisions about how they want to design their unit assessments and benchmark assessments based around their own goals.

In my next post I’m going to dive into the ways I’ve been analyzing assessment data differently this past year. I wanted to write this post first because this information I learned from Dr. Barlow has completely re-framed how I think about STAAR. I no longer believe that the 50% passing rate is a sign that we have low expectations for Texas students. Rather, STAAR is an assessment that is designed to not only tell us who has met grade level expectations, but also by how much many of our students have exceeded them. With that in mind, we can start to look at our data in interesting and productive ways.

Disconnect

At the end of June, I had the pleasure of spending a week learning from Kathy Richardson at the Math Perspectives Leadership Institute in Hutto, Texas. I’ve been a fan of Kathy Richardson ever since my first week on the job as elementary math curriculum coordinator in Round Rock ISD. That week I sat in on a summer PD session on early numeracy led by Mary Beth Cordon, one of our district instructional coaches. She had us read a little out of Kathy Richardson’s book How Children Learn Number Concepts: A Guide to the Critical Learning Phases. I was hooked from the little I read, so I asked if I could borrow the book.

I devoured it in a couple of days.

Since then I’ve purchased multiple copies for all 34 elementary campuses, led campus and district PD sessions on the critical learning phases, and led a book study with over a hundred math interventionists. The book is so eye opening because it makes tangible and explicit just how rigorous it is for young children to grapple with and learn counting concepts that are second nature to us as adults.

I was so excited for the opportunity to learn from Kathy Richardson in person this summer, and she didn’t disappoint. If you’d like to see what I learned from the institute, check out this collection of tweets I put together. It’s a gold mine, full of nuggets of wisdom. I’ll probably be referring back to it regularly going forward.

As happy as I am for the opportunity I had to learn with her, I also left the institute in a bit of a crisis. There is a HUGE disconnect between what her experience says students are ready to learn in grades K-2 and what our state standards expect students to learn in those grades. I’ve been trying to reconcile this disconnect ever since, and I can tell it’s not going to be easy. I wanted to share about it in this blog post, and I’ll also be thinking about it and talking to folks a lot about it throughout our next school year.

So what’s the disconnect?

Here’s a (very) basic K-2 trajectory laid out by Kathy Richardson:

  • Kindergarten
    • Throughout the year, students learn to count increasingly larger collections of objects. Students might start the year counting collections less than 10 and end the year counting collections of 30 or more.
    • Students work on learning that there are numbers within numbers. Depending on their readiness and the experiences they’re provided, they may get this insight in Kindergarten or they might not. If students don’t have this idea by the end of Kindergarten, it needs to be developed immediately in 1st grade because this is a necessary idea before students can start working on number relationships, addition, and subtraction.
  • 1st Grade
    • Students begin to develop an understanding of number relationships. After a year of work, Kathy Richardson says that typical 1st graders end the year internalizing numbers combinations for numbers up to around 6 or 7. For example, the number combinations for 6 are 1 & 5, 2 & 4, 3 & 3, 4 & 2, and 5 & 1. Students can solve addition and subtraction problems beyond this, but they will most likely be counting all or counting on to find these sums or differences rather than having internalized them.
    • Students can just begin building the idea of unitizing as they work with teen numbers. Students can begin to see teen numbers as composed of 1 group of ten and some ones, extending the idea that teen numbers are composed of 10 and some more.
  • 2nd Grade
    • Students are finally ready to learn about place value, specifically unitizing groups of ten to make 2-digit numbers. Kathy Richardson says teachers should spend as much time as possible on 2-digit place value throughout 2nd grade.
    • Students apply what they learn about place value to add and subtract 2-digit numbers. By the end of the year, students typically are at a point where they need to practice this skill – which needs to happen in 3rd grade. It is typically not mastered by the end of 2nd grade.
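As a quick aside, the number combinations Kathy Richardson mentions for 1st grade are easy to enumerate:

```python
# List the whole-number combinations for a target number,
# like the 1st grade example of the combinations for 6 above.
def combinations_for(n):
    return [(a, n - a) for a in range(1, n)]

print(combinations_for(6))  # -> [(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)]
```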

And here’s what’s expected by the Texas math standards:

  • Kindergarten
    • Lots of number concepts within 20. Most of these aren’t too bad. The biggest offender that Kathy Richardson doesn’t think typical Kindergarten students are ready for is K.2I: compose and decompose numbers up to 10 with objects and pictures. If students don’t yet grasp that there are numbers within numbers, then they are not ready for this standard.
    • One way to tell if a student is ready is to ask them to change one number into another and see how they react. For example, put 5 cubes in front of a student and say, “Change this to 8 cubes.” If the student is able to add on more cubes to make it 8, then they demonstrate an understanding that there are numbers within numbers. If, on the other hand, the student removes all 5 cubes and counts out 8 more, or if the student just adds 8 more cubes to the pile of 5, then they do not yet see that there are numbers within numbers.
    • My biggest revelation with the Kindergarten standards is that students are going to be all over the map regarding what they’re ready to learn and what they actually learn during the year. Age is a huge factor at the primary grades. A Kindergarten student with a birthday in September is going to be in a much different place than a Kindergarten student with a birthday in May. It’s only a difference of 8 months, but when you’ve only been alive 60 months and you’re going through a period of life involving lots of growth and development, that difference matters. It makes me want to gather some data on what our Kindergarten students truly understand at the end of Kindergarten compared to what our standards expect them to learn.
  • 1st Grade
    • Our standards want students to do a lot of adding and subtracting within 20. Kathy Richardson believes this is possible. Students can get answers to addition and subtraction problems within 20, but this doesn’t tell us what they understand about number relationships. If we have students adding and subtracting before they understand that there are numbers within numbers, then it’s likely to be just a counting exercise to them. These students are not going to be anywhere near ready to develop strategies related to addition and subtraction. And then there’s that typical threshold where most 1st graders don’t internalize number combinations past 6 or 7. So despite working on combinations to 20 all year, many students aren’t even internalizing combinations for half the numbers required by the standards.
    • The bigger issue is place value. The 1st grade standards require students to learn 2-digit place value, something Kathy Richardson says students aren’t really ready for until 2nd grade. And yet our standards want students to:
      • compose and decompose numbers to 120 in more than one way as so many hundreds, so many tens, and so many ones;
      • use objects, pictures, and expanded and standard forms to represent numbers up to 120;
      • generate a number that is greater than or less than a given whole number up to 120;
      • use place value to compare whole numbers up to 120 using comparing language; and
      • order whole numbers up to 120 using place value and open number lines.
    • I’m at a loss for how to reconcile her experience that students in 1st grade are ready to start putting their toes into the water of unitizing as they work with teen numbers and our Texas standards that expect not only facility with 2-digit place value but also numbers up to 120.
  • 2nd Grade
    • And then there’s second grade where students have to do all of the same things they did in 1st grade, but now with numbers up to 1,200! Thankfully 2-digit addition and subtraction isn’t introduced until 2nd grade, which is where Kathy Richardson said students should work on it, but they also have to add and subtract 3-digit numbers according to our standards. Kathy Richardson brought up numerous times how 2nd grade is the year students are ready to begin learning about place value with 2-digit numbers, and she kept emphasizing that she felt like as much of the year as possible should be spent on 2-digit place value. If the disconnect in 1st grade was difficult to reconcile, the disconnect in 2nd grade feels downright impossible to bridge.

I’m very conflicted right now. I’ve got two very different trajectories in front of me. One is based on years upon years of experience of a woman working with actual young children and the other is based on a set of standards created by committee to create a direct path from Kindergarten to College and Career Ready. Why are they so different, especially the pacing of what students are expected to learn each year? It’s one thing to demand high expectations and it’s another to provide reasonable expectations.

And what do these different trajectories imply about what it means to learn mathematics? Kathy Richardson is all about insight and understanding. Students are not ready to see…until they are. “We’re not in control of student learning. All we can do is stimulate learning.”

Our standards on the other hand are all about getting answers and going at a pace that is likely too fast for many of our students. We end up with classrooms where many students are just imitating procedures or saying words they do not really understand. How long before these students find themselves in intervention? We blame the students (and they likely blame themselves) and put the burden on teachers down the road to try to build the foundation because we never gave it the time it deserved.

But how to provide that time? That’s the question I need to explore going forward. If you were hoping for any answers in this post, I don’t have them. Rather, if you have any advice or insights, I’d love to hear them, and if I learn anything interesting along the way, I’ll be sure to share on my blog.


Moving On Before It’s Over (3rd Grade)

If you’re just joining us, I’ve been writing a series of posts as I embark on my spring curriculum work to prepare for the 2018-19 school year. I’m sharing how our scope and sequence has evolved over time, rationales for why things are the way they are, and thoughts on what changes I might make for next school year. If you’d like to back up and read about an earlier grade level, here are the previous posts in this series:

Today I’ll be talking about our 3rd grade scope and sequence. Here they are for the past three school years. What do you notice? What do you wonder?

3rd Grade – School Year 2015-16

3rd15-16

3rd Grade – School Year 2016-17

3rd16-17

3rd Grade – School Year 2017-18

3rd17-18

Remember back in my first post in this series when I said, “Now that I’ve been doing this for a few years – and I’m starting to feel like I actually know what I’m doing…“? Yeah, 3rd grade is a prime example of how I have learned a lot over the past few years. I’m a little (maybe a lot) embarrassed to show you what it used to look like back in 2015. I had good reasons for what I attempted to do, but this was just a tough nut to crack.

So what was going on several years ago when I put our 3rd grade teachers through the wringer with 18 units in one school year? If you look at the 2015-16 scope and sequence closely, you’ll notice that one topic appears waaaaay more frequently than the others – multiplication and division. There were a total of 7 units just on multiplying and dividing.

This was very intentional. Just like I have specific numeracy goals in the previous grade levels, my goal in 3rd grade is to ensure students leave the school year as strong as possible in their understanding of multiplication and division. Specifically, I want to ensure students have the chance to develop mental strategies for multiplication and division.

Before I became the Curriculum Coordinator in my district, a team of folks analyzed fluency programs and ultimately decided that ORIGO’s Book of Facts is the one we would purchase for our entire district. After that decision, but still before I started working in this role, our district went through the adoption process for a new math instructional resource. Teachers selected ORIGO’s Stepping Stones program.

This turned out to be a wonderful fit because the mental strategies from the Book of Facts are baked into the lessons in Stepping Stones. (If you want to learn more about these mental strategies, check out these awesome 1-minute videos from ORIGO.) I didn’t want to rush students through the strategies, so I followed the Stepping Stones sequence of multiplication and division lessons. This gave each strategy its due, but it also resulted in 7 units on just this one topic.

Unfortunately, this meant squeezing in everything else in between all of those multiplication and division units. To my credit, I did share this scope and sequence with a team of six or eight 3rd grade teachers to get their feedback before putting it in place. I must be a good salesman because they thought it made sense and wanted to give it a try.

As I’m sure you can imagine, it was a tough year. Just as teachers started a unit, it felt like it was ending. This also happened to be the year that our district started requiring teachers to give a district common assessment at the end of every unit. That decision was made after I’d already made all of my scope and sequences; otherwise I might have thought twice… maybe. The teachers felt like they were rushing through unit after unit and assessing their kids constantly. It was too much.

The next year we tightened things up quite a bit. We were able to reconfigure concepts to end up with five fewer units than the year before. Without sacrificing my ultimate goal, I do feel like we ended up with a scope and sequence that has a reasonable amount of breathing room.

A major change that happened between last year and this year is that we removed the 10-day STAAR Review unit. We took 5 of those days and gave them to teachers at the beginning of the year to kick off with a Week of Inspirational Math from YouCubed. We took the other 5 days and gave them to units that needed more time. My rationale is that teachers often tell me they don’t have enough time to teach topics the first go round. If that’s the case, then I can’t justify spending 10 days at the end of the year for review. Those days should be made available earlier in the year to ensure there’s enough time for first instruction. If you’re interested, I shared additional reasons for this change along with an alternative to the traditional test prep review unit in this post on my district blog.

As embarrassed as I am to share the scope and sequence I inflicted on our 3rd grade teachers for an entire school year, looking at it now, I am proud of what we attempted and proud of the revisions we’ve been able to make over time. It’s finally a wieldy scope and sequence!

My reason for sharing this is to let people know this work isn’t easy, especially people who are in the same boat as me or considering moving into this kind of role. There are a lot of moving parts within and across years, and you’re bound to make some mistakes. The important thing is to always have an eye for continuous improvement, because there is always something that could use improving. And if you can enlist the help of great teachers to provide their expertise and feedback, even better. This is not work that should be undertaken solo.

3rd Grade – School Year 2018-19

So what’s the plan for next school year? One area that’s been nagging me is addition and subtraction. If you read the 2nd and 3rd grade standards on this topic, you’ll notice the first half of each standard is identical except for one word: fluency.

  • Second grade
    • 2.4C Solve one-step and multi-step word problems involving addition and subtraction within 1,000 using a variety of strategies based on place value, including algorithms
  • Third grade
    • 3.4A Solve with fluency one-step and two-step problems involving addition and subtraction within 1,000 using strategies based on place value, properties of operations, and the relationship between addition and subtraction

One of the 8 effective teaching practices from NCTM’s Principles to Actions is that we should build procedural fluency from conceptual understanding. I see this happening in our 2nd grade curriculum:

  • We build conceptual understanding of multi-digit addition and subtraction across 60 days in 3 units
  • And this helps us build fluency of 2-digit addition and subtraction in our computational fluency component across up to 97 days in 6 units

What about in 3rd grade? We kick off the year reconnecting with 2-digit addition and subtraction in our computational fluency component for 30 days in Units 1 and 2. This overlaps with our efforts to reconnect with the conceptual understanding of adding and subtracting 3-digit numbers in Unit 2.

Starting in Unit 3, our goal becomes moving students toward fluency. We strive to achieve this by keeping addition and subtraction as a computational fluency topic for up to 64 days across 4 units. Problem solving with addition and subtraction, and later with all four operations, also appears throughout the year in 41 days of spiral review in 3 units.

3rdAAGFall3rdAAGSpring

When I write it all out like that, I feel pretty good about it, but I do wonder if it’s enough. I hear from 3rd grade teachers, especially in the fall, that their students are having a really difficult time with addition and subtraction, a much harder time than with multiplication and division.

I’m not sure I want to make a change to 3rd grade’s scope and sequence though. They have enough on their plate. I want their kids to begin building multiplicative thinking, build a strong understanding of how multiplication and division are related, and, oh yeah, build fluency with all of their multiplication and division facts. That’s a lot to accomplish!

What I really want to do is look at how our 2nd and 3rd grade teachers are teaching addition and subtraction. My gut tells me the problems I’m hearing about have something to do with the standard US algorithms for addition and subtraction.

In case you’re wondering, the phrase “standard algorithm” does not appear in our addition and subtraction TEKS until 4th grade. And that makes sense. Adding or subtracting 2- and 3-digit numbers can be done fluently in your head, given practice. However, once you hit 4th grade and start adding 6-, 7-, and 8-digit numbers, you’re going to want to pull out a calculat…er…I mean algorithm.

Despite my best efforts, I know there are some 2nd and 3rd grade students being taught the standard US algorithms which might be causing some of the issues I’m hearing about. As I like to say in this sentence I just made up, “When standard algorithms are in play, number sense goes away.” If teachers are still teaching standard algorithms despite everything in our curriculum pointing to the contrary, then I’ve got some work to do to shift some practices, including providing professional development. Thankfully I’ve already got some lined up this summer! I also need to work more with our instructional coaches on this topic so they’re better equipped to support the teachers on their campuses.

Got a question about our scope and sequence? Wondering what in the world I’m thinking about planning things this way? Ask in the comments. I’ll continue with 4th grade’s scope and sequence in my next post.


Our Venn Diagrams are One Circle

This past week my work life and my daughter’s school life came crashing together in the most wonderful way.

I.

On the way home from school on Thursday, she asked if we could practice “take away.” At first we practiced numerical problems like “What is 3 take away 1?” and “What is 5 take away 2?” Eventually I asked her if I could tell my problems in a story. The rest of the ride home we told “take away” stories. I told a few, and then she wanted it to be her turn:

  • “This one is sad. There were 2 cats and 1 of them died.”
  • “There were 6 oranges on the counter. A girl ate 2 of them and they died in her mouth.”
  • “There were 8 trees, and 3 of them got cut down.”
  • “There were 6 roads, and 2 of them fell down.” (I was able to figure out she was referring to overpasses because that’s what we were driving under at the time.)

Slightly morbid, but she’s 6 years old, so I rolled with it, especially since she isn’t usually this chatty about anything related to school.

Anyway, as we were getting closer to home, I remembered that the math unit she’s currently in in school uses some numberless word problems, so I asked, “Have you ever had a problem about some geese and some of them stop to rest?”

(Stunned silence)

“How did you know that?!”

“What about a problem about a boy who checks out some books from the library and returns only some of them?”

(Stunned silence)

“Yes! How did you know that one!”

“Because I wrote them.”

“What do you mean?!”

“I’m the author of the take away stories you’ve been working on in math class.”

And thus our two worlds – my work and her school – came crashing together for the first time ever.

I’ve mentioned to her before that I work with and help teachers, but it’s always been in the abstract. Finding out that I was the author of specific problems she’s encountered in her classroom just blew her mind. She wanted to see some of them when she got home. Knowing she probably won’t always be this interested in my work, I was only too happy to oblige.

II.

As I was scrolling through the suggested unit plan to find the numberless word problems, I asked her about other tasks in the unit to see which ones she remembered. I asked about Bag-O-Chips, a 3 Act Task from Graham Fletcher that was planned for the day after the numberless word problems, but she said she’d never seen it before. I have no idea how closely her teacher follows the unit plan, but lo and behold, the next day in the car when I asked what she did at school she said, “We did the bags of chips!”

We talked a little bit about the task in the car, and a little later as we finished up dinner I showed her the Act 1 video. Her eyes lit up. “That’s the video!”

We kept going back and forth between the image of what came in the bag and the image of what should have come in the bag. She happily used her fingers to figure out how many missing bags there were of each flavor.

I thoroughly enjoyed talking through the task with her, and what a pleasant surprise when she wanted to do another.

III.

I’m not one to pass up an opportunity to talk about math with my daughter, so I quickly scanned Graham’s list of 3 Act Tasks to find one I knew we didn’t include in our suggested unit plans. I settled on Peas in a Pod.

Peas01

Source: https://gfletchy.com/peas-in-a-pod/

First, we watched the video and estimated how many peas would be in each of the pods.

“I think there are 3 in this one, 4 in this one, and 10 in this one. No, 13 in this one.” (She estimated from right to left in case you’re wondering.)

“Hmm,” I said, “I think 3 is a good guess for the first one. I think there might be 4 or 5 in the second one, and I’m going to agree with your first guess of 10 for the third one.”

Estimation is a new skill for Kindergarten students. I talk about guessing, and she talks about being right. She thinks the goal is to be the person who guesses the correct (exact) amount. I’m going to keep talking about being close and reasonable, because I know that over time her understanding of estimation will develop and refine.

Then we watched the reveal video.

Peas02

Source: https://gfletchy.com/peas-in-a-pod/

“I wasn’t right and you weren’t right!” she exclaimed.

“That’s okay. All of our guesses were pretty close, even though none of them matched the exact number of peas. I was surprised that this one only had 2 peas in it. I thought for sure there were more in there.”

“Me, too.”

“Hmm, I have another question for you. How many peas are there altogether?”

“Let me count.”

“I want to see if you can do it without counting on the picture. How many peas were in each pod?”

“8 and 7…and 2.”

“So how could you figure out the total?”

At first she tried using her fingers. She counted out 8 fingers, and then continued counting from there. I couldn’t really tell what she was doing, but at one point, after lots of ups and downs of fingers, she said, “18.”

Pretty close!

I didn’t say that though. Instead I said, “Hmm, I wonder if that’s the right amount. What other tool could we use to check your answer?”

She decided to get her Math Rack to check, and as a complete surprise to me she said, “Can you make a video of me?” Make a video of you solving a math problem? Why, of course!

Watching her first attempt, it was fascinating to see her trying to keep track of two separate counts: (1) counting on from 8, “…9, 10, 11, 12, 13, 14,…” and (2) counting the 7 she was combining with the 8, “1, 2, 3, 4, 5,…”

It seems like she abandoned the double counting when she was so close to being done. I wonder if she sort of gave up and just continued counting to 18 since that’s what she had thought the answer was before.

I had a split second to think about how to respond. I didn’t want to confirm whether the answer was correct, and I wanted to see if she would be willing to try combining the three quantities again.

There was definitely a lot more accuracy when she separately modeled each quantity! I was impressed with the double counting she was attempting earlier, but in the end she was more successful when she could show each quantity separately and then count all.

It was a proud dad moment when she didn’t just accept 17 as the correct answer. She decided we should look at the picture of all the open pea pods to check. And, sure enough, when I held up the phone with the image of all the open pea pods, she was able to count all and verify that there were in fact 17 peas.

All in all, I’m over the moon. All year long I’ve asked her about school (and math), but up until now her answers have been fairly vague. (“I’m so surprised,” said no parent ever.) The most I’d gotten out of her before was that they did Counting Collections.

But now we’ve actually had a full-blown conversation about the work she’s been doing in school, specifically activities I wrote or helped plan for our Kindergarten units. I’ve always loved talking about counting and shapes and patterns with my daughter since before she ever started school, but to have our worlds collide like this was really special. I enjoyed getting to share and talk about my work with a very different, and more personal, audience than I’m used to.