Going Gradeless vs. Traditional Grading: What’s the difference?

I am often confronted by curious people asking why traditional grading using points is a worse evaluation method than the scoring in the Going Gradeless approach. And I say: it's NOT worse. It's just not as useful. It doesn't provide as much information, to the teacher or to the students, about the strengths and weaknesses of the students. So what's different about how I am doing things in my class?

In this post, I'm going to focus on the middle-range students. These are kids who can range from challenged to gifted depending on where their math levels, work ethic, and personal motivations fall. I have about 40 of them this year, spread over 3 classes. (For reference, this is a greatly reduced number, since we have smaller class sizes due to CDC social-distancing mandates under the Covid restrictions. What a strange year!)

During the last week in February, I scored the end-of-unit assessments: a lab, a test, and a project. (If you recall from my earlier posts, we do approximately 1 lab, a checkpoint (aka quiz), and a project check-in each week, but only the ones at the end of the unit are used for reporting progress.) This third unit, on Newton's Laws and their applications, began on 1/4/21, so we spent a good 9 weeks with the material. My goal was to move them solidly out of the Beginning level, with all students moving into Developing and Proficient on each of the 10 skills.

I enter all of the students' scores into an Excel spreadsheet at the end of the unit. I use this as a record, but also to analyze patterns across classes and for individuals. In order to quantitatively analyze the scores, I assign each level a numerical score, where "Not Enough Evidence" is equal to zero, "Beginning" 1, "Developing" 2, etc. Then I calculate the average for all 38 students taking this course with me. Here were the average scores for each standard for each of the three completed units:

| Unit | Pro.1 – Experimental Design | Pro.2 – Data Analysis | Pro.3 – Arguing a Scientific Claim | Pro.4 – Using Feedback | Pro.5 – Creating Explanations & Making Predictions | Pro.6 – Problem Solving | Pro.7 – Graph Interpretation | Pro.8 – Graph Creation | Pro.9 – Engaging with Content | Pro.10 – Engineering Design Cycle |
|------|------|------|------|------|------|------|------|------|------|------|
| 1 | 1.2 | 1.2 | 1.2 | xxx | 1.6 | 2.1 | 1.4 | 1.3 | 1.2 | 0.9 |
| 2 | 2.3 | 1.8 | 1.7 | 1.3 | 2.2 | 2.5 | 1.7 | 1.5 | 1.0 | 1.5 |
| 3 | 3.2 | 2.3 | 2.5 | 1.8 | 2.8 | 3.0 | 1.8 | 2.8 | 0.8 | 2.0 |
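For the curious, the level-to-number conversion and averaging can be sketched in a few lines of Python. The level names match the rubric described above; the sample scores and the function name are made up for illustration, not taken from my actual spreadsheet.

```python
# Sketch of the conversion and averaging described above.
# The level labels are from the rubric; the sample data is hypothetical.
from statistics import mean

# Numeric value assigned to each performance level.
LEVELS = {
    "Not Enough Evidence": 0,
    "Beginning": 1,
    "Developing": 2,
    "Proficient": 3,
    "Advanced": 4,
}

def average_score(labels):
    """Convert a list of level labels to numbers and average them."""
    return round(mean(LEVELS[label] for label in labels), 1)

# Hypothetical scores for one standard across a handful of students.
sample = ["Beginning", "Developing", "Developing", "Proficient", "Not Enough Evidence"]
print(average_score(sample))  # 1.6
```

In a real spreadsheet workflow, the same mapping can be applied per standard and per unit to produce a table of averages like the one above.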


There are clear increases across the board, except for Pro. 9, Engaging with Content. Put into a graph, the pattern is striking:

Graph of Average Scores across three units

The gray lines are the most recent performance levels, which, in 9 out of 10 cases, outstrip those of the first and second units. In many cases, such as Experimental Design, Arguing a Claim, Creating Explanations, and Graph Creation, there was an overall improvement of more than 1 level. This is great news! Honestly, I didn't even realize the extent of their improvement until I saw it displayed in this way. When grading, I often feel disappointed when the students respond incorrectly. I wouldn't even notice what they did right if I didn't use this type of feedback and if I didn't analyze it. Restating this more emphatically: it is the combination of this type of feedback and the analysis of the results that allows me to acknowledge student growth.

Going gradeless is important for MY state of mind, knowing that I am making a difference. And if I didn't notice the subtle changes in their performance, they wouldn't notice them either. In addition, if I don't overtly acknowledge their improvement, they will continue thinking that they aren't doing well because they make mistakes and aren't perfect. It is important for STUDENTS' states of mind, knowing that they are making progress, learning, and succeeding. It allows them to push forward, even when the material is challenging, when the circumstances are less than ideal, when it's mid-March and they are sick of the bad weather, social distancing, and working from home, interacting via computer.

A few other things that I thought about after considering the results:

  • I believe that the improvement on 6 (Problem Solving) is even more dramatic than it looks on the graph, because the problem on this unit's test was significantly more challenging than the one in the previous unit. The problem on the previous unit test was only one step, and pretty simple; this current one required multiple steps, including material from the previous unit. Most students did a terrific job setting it up and producing the correct evidence using the correct process, but made calculation errors or minor omissions, which kept them from reaching the Advanced level.
  • The only standard in which there has been no improvement is 9 (Engaging with Content). The first two levels on this standard have to do with performance on the Content Mastery Checkpoint, on which students must get 100% on a simple matching and multiple-choice quiz, which is entirely focused on vocabulary, terminology, and conceptual understanding. They get to retake (a new version of) this 4-5 times over the course of the unit… but even for my best students, I don't see much improvement. I have been wrestling with this issue for years, and have yet to successfully address it. I am going to try spending time after each attempt going through the various questions and issues that pop up, like any other checkpoint. This is what Dave does with his classes, and he says it helps but takes time. I'll have to plan for it and see if that helps.
  • I also noticed that the number of "Not Enough Evidence" scores is dropping, from 39 to 31 to 26. The number of students moving up out of this level has increased each unit, from 15 to 19 to 24. In addition, half of the current 26 NEEs come from just 3 students who have 4 or more each; the remaining 13 are scattered amongst 10 students. They are working. They are progressing. I am reaching them!
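This kind of per-student tally is easy to pull out of the spreadsheet data. Here is a minimal sketch in Python; the student names and score lists are entirely made up for illustration, not real class data.

```python
# Hypothetical per-student score sheets: student -> list of level labels
# for the 10 standards. Names and scores are invented for illustration.
scores = {
    "Student A": ["Not Enough Evidence"] * 4 + ["Beginning"] * 6,
    "Student B": ["Developing"] * 9 + ["Not Enough Evidence"],
    "Student C": ["Proficient"] * 10,
}

# Tally how many "Not Enough Evidence" marks each student has.
nee_counts = {name: labels.count("Not Enough Evidence")
              for name, labels in scores.items()}

total_nee = sum(nee_counts.values())
# Students with 4 or more NEEs account for a large share of the total.
heavy = {name: count for name, count in nee_counts.items() if count >= 4}

print(total_nee)  # 5 in this toy data
print(heavy)      # {'Student A': 4}
```

With real data, this makes it quick to see whether the NEEs are spread thinly across the class or concentrated in a few students who need targeted support.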

So now that I've come to these conclusions, what is the next step? I need to make some changes, set new benchmarks for Unit 4, and have a directed conversation with the students. More on that next time.