By Brian Foster
I’ve been working on ways to move beyond teaching with my gut. When I started teaching a few years ago, I just used my best guess: “Hey students, this stuff is cool! Watch me and then do it yourself!” As the classes have gone on, I’ve gotten IDEA forms back and attended every professional development class I can, and I feel that my classes are quite good. However, I haven’t reached my three-to-five-year “good enough” full-time teaching plateau, so I must get better! I also feel I’m doing my students a disservice by trusting in intuition alone. With six classes, half of which I’m rewriting every semester, intuition is getting jumbled and less effective, and I’m not getting the gains I used to. So I have decided to take a more structured, research-driven approach to my pedagogy.
Formative-assessment tools are excellent! Some, like “write down the muddiest point and throw it at me,” engage the students and create a bond of interaction between students and professor. Others, like “write down the most important point and sticky-note it to the door on your way out,” leave qualitative data for you to incorporate into the next lesson.
But I want a way to know if my students are quantitatively getting better.
In my programming class, after several weeks of new material, I could tell that we needed a day of recap because most of my students were completely lost at the beginning of my lecture. So I tasked them with a few problems covering things from previous lessons and had them get started on solving them. I then went around the room and noted which students
- Were totally lost
- Sort of understood
- Totally understood
Then I sorted them into four small groups, making sure that each group had someone who totally understood and someone who was totally lost. After asking them to learn each other’s names and why they were in the class, I asked them to rate themselves as a 1, 2, or 3 on that scale and then report back how many of each were in their group (no need for me or the other groups to know who was which). I was pleasantly surprised (I was making this up as I went along) to find that their self-reported numbers were very close to my own! They of course wanted to categorize themselves as 1.5 or 2.25, and that didn’t matter to me; I’m glad they felt a little bit above “totally lost” or “sort of understood.”
I then proceeded to put questions on the board for them to solve and gave every team one or two minutes to come up with answers. The 1’s and 2’s asked the questions and the 3’s further cemented their own ideas as they explained and worked towards the answers with the others in the group. I also made them draw pictures of each concept as a way to look at their answer from another angle. Synthesizing a concept into a drawing is good higher-level thinking.
My students all felt much better about their competence with programming after that! But unless I teach the same thing next week and ask them how they feel about it, how will I know whether any of the 1’s became 2’s, or 2’s became 3’s? There are two ways to judge this data. If your students cover the learning material before class starts, have them gauge their confidence at both the beginning and the end of the lecture. If instead they’re coming to each week’s information for the first time in your class, you will have to watch the trends week to week: do you have the same number of 1’s, 2’s, and 3’s as the semester progresses? You’ll want to track this regardless, but until you stop introducing new material and start recapping previous lessons, you won’t be able to watch these numbers improve, because you’ll be teaching material the students aren’t comfortable with yet.
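If you collect the group tallies each week, the bookkeeping is simple enough to automate. Here is a minimal sketch of that week-to-week tracking, with made-up ratings purely for illustration (the weeks and numbers are hypothetical, not my actual class data):

```python
from collections import Counter

# Hypothetical self-ratings on the 1-3 scale
# (1 = totally lost, 2 = sort of understood, 3 = totally understood).
weekly_ratings = {
    "week 1": [1, 1, 2, 2, 2, 3],
    "week 2": [1, 2, 2, 2, 3, 3],
    "week 3": [2, 2, 2, 3, 3, 3],
}

def rating_counts(ratings):
    """Tally how many 1's, 2's, and 3's were reported."""
    counts = Counter(ratings)
    return {level: counts.get(level, 0) for level in (1, 2, 3)}

# Print the trend so you can see whether 1's are becoming 2's and 3's.
for week, ratings in weekly_ratings.items():
    print(week, rating_counts(ratings))
```

A spreadsheet does the same job, of course; the point is just to keep the counts per week in one place so the trend is visible at a glance.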
Ideally you want everyone in your class at a 2 or 2.5: challenged by the material they learn each week. If they are interested in the subject, their skill will grow, and the difficulty of the subject can increase as well. If the spread of students narrows over the course of my class, I will be quite happy with that quantitative assessment.