I recently wrote about a cheat sheet and quiz wrapper strategy. Students are allowed to prepare one side of a 3×5 index card to be used during a quiz that’s worth 10% of the course grade.
I intentionally refer to the assessment as a quiz, not an exam. I stopped calling it a test when I sensed that the names “exam” and “test” increase anxiety in unproductive ways. I also wanted to make a clear distinction on the syllabus. Learning in this course is assessed in a variety of ways: homework, classwork, essay exams, a multiple-choice quiz, a paper, and a project.
The multiple-choice format is also an intentional decision. The quiz assesses students’ ability to perform computations, interpret the results and related graphs, and understand the implications of a specific set of characteristics. In other words, it assesses learning at the lower levels of Bloom’s taxonomy. Understanding the concepts and performing the computations are the foundation for more advanced thinking and analysis in the second part of the unit. Unlike the essay exams, where I want students drawing graphs and explaining in writing, the mechanical basics addressed in this part of the course can be assessed conveniently with bubble sheets.
I introduced the cheat-sheet index card policy a couple of years ago as a way to reduce stress during the quiz and promote active study strategies. The goal was to get students thinking about the material earlier and differently, preparing more effectively, and performing better on the quiz.
Unfortunately the strategy is, as my kids would say, an Epic Fail.
Calling it a quiz has not reduced anxiety. Because this is the first and only multiple-choice assessment, students don’t know what to expect. Lack of familiarity increases apprehension, regardless of what I call the assessment.
To give students an idea of what to expect, I provide a “practice quiz.” Perhaps it should be called “sample questions.” The practice quiz has the unintended consequence of limiting the scope of material studied. At least one student noted they didn’t study anything that wasn’t in the practice quiz.
Calling this assessment a quiz may actually produce more harm than good. “Quiz” may be less stressful than “test” or “exam.” But some stress is good. An unintended consequence of the name change: students may study less when it’s “only” a quiz.
Because quizzes may be seen as less important than exams or tests, some students may conclude the “cheat sheet” notecard is unnecessary and so aren’t motivated to prepare one. One student noted they forgot about the quiz. Another prepared a card but neglected to bring it. Overall, about 10% of the class didn’t have one.
To be effective in promoting learning and improving scores, the card needs to be prepared in advance. Unfortunately, I noticed some students writing on theirs in the few minutes before the quiz. Others turned in cards that were incomplete or disorganized. I haven’t been able to analyze the data yet, but the early signs are clear: many students aren’t preparing the cards or studying for the quiz as I intended.
From the student perspective, the purpose of a cheat sheet card is to improve their quiz score. Unfortunately, anecdotal evidence (which is all I have at this point) doesn’t bear that out. Grades on the quizzes aren’t much, if at all, higher than before I allowed the cards.
That’s at least partly due to the observations above. I tried to convince students that the act of preparing the card promotes learning. The value of the card during the quiz is directly related to the quality of time and effort that went into preparing it. The message didn’t get through. Worse, feedback suggests students may have shifted from thinking about concept interrelationships toward putting basic definitions on the card.
Readers of this blog may be surprised (disappointed?) by this post’s focus on an ineffective strategy. But there is much you and I can learn from epic fails. Here are two quick takeaways:
- Wrappers or other mid-semester feedback is vital to understanding our students. I’m gaining valuable insight from the honest admissions about study time and strategies used. A lot of it depressed me (more about that soon). But I can’t improve instruction or enhance learning if I’m unaware of where my students are as learners.
- We learn a lot from mistakes. Be brave. Pick one instructional strategy and critically examine the intended and unintended consequences. What are your assumptions? What are your intentions? What evidence can you gather to test how well they are or aren’t being met?
I’d appreciate learning from your “epic fail.” Please share what you learned from a strategy or policy that didn’t work. Let’s learn from and with each other.
The Feb 2nd post (Easy A?) ended with a series of questions about grades, learning, and instructional strategies. I fully intended to begin addressing them here. But as I dug into the literature, I realized other issues needed exploring first.
When students report my courses are “hard,” my first instinct is to dismiss the comments as whining. Then I look at grade distributions and review the number and types of assessments to try to discredit the feedback. I usually succeed, but nagging questions remain. What do students mean when they say my course is hard? What if our definitions are different? Does it matter?
What makes a course hard? Draeger, del Prado Hill, and Mahler (2015) find that “faculty perceived learning to be most rigorous when students are actively learning meaningful content with higher-order thinking at the appropriate level of expectation within a given context” (p. 216). Interactive, collaborative, engaging, synthesizing, interpreting, predicting, and increasing levels of challenge are a small sample of the ways faculty describe rigor. In contrast, “students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals” (p. 215) and judged course quality as “a function of their ability to meet reasonable faculty expectations rather than as a function of mastery of learning outcomes” (p. 216). Their findings are consistent with previous research, match my views of what makes a course challenging, and reflect the comments my students made.
We are not on the same page about what makes a course rigorous.
Does it matter? I think it does for two reasons. Clearly, if you’re concerned about course evaluations, the scores will be lower if students’ and teachers’ definitions and aims are not aligned. Beyond the ratings, the mismatched definitions, expectations, and criteria have significant implications for learning. Consider this analogy.
Monique wants to lose weight. She plans to eat fewer calories and exercise more. She hires a personal trainer to set up a cardio program. Monique isn’t very knowledgeable about weight loss physiology; she thinks less food and more cardio are all she needs. And for the short term, she has a point. Thus, she’s surprised when the trainer starts the session with ten minutes of cardio and then tells her to head over to the weight machines. Monique, despite her limited background in exercise science, says she’s only interested in cardio: treadmill, elliptical, climber, and spinning. The trainer persists and Monique begrudgingly complies. But, Monique’s enthusiasm for the program is diminished and she leaves without knowing why weight training is a hard but necessary component of the trainer’s plan.
Many students are like Monique. She’s paid good money for the trainer’s services. She knows she’s going to sweat on the cardio machines. She’s willing to work. But her expectations and understanding about exercise are incomplete. Because of this, she may not realize the trainer’s program will do more to help achieve her goals in the short- and long-term than cardio alone. Or she might comprehend what the trainer is trying to help her achieve but only care about the short-term fix. She may not have the time (or may not value time at the gym enough) to devote an hour when 20 minutes of cardio would seem to be enough, at least for now. Monique’s goals and understanding of the process do not match the trainer’s.
Similarly, many teachers are like the trainer. The trainer assumed Monique would accept, on faith, that she has her client’s best interest in mind. The trainer believes she knows what’s best for her client. The trainer assumed Monique knew what a comprehensive exercise program looks like, so she didn’t take time to explain why weight training is necessary. Notice that the story discusses the trainer’s plan, not a plan they developed together. Notice this too: the trainer is thinking like an expert, forgetting that novices see and approach things very differently.
As long as the trainer/trainee and teacher/student hold different definitions and expectations, the working relationship will produce less than optimal results and “satisfaction surveys” will reflect the mismatched priorities.
What can we do about it? Martin et al. (2008) investigate students’ perceptions of hard and easy courses across engineering programs. Two of their strategies have broad application.
- Consider student characteristics. Differences in semester standing, level of academic preparation, major, and whether the course is in-major or general education all affect perceptions of course difficulty. The better we know our students, the better equipped we are to determine where they are in the learning maturation process. “The key is determining what an appropriate challenge is for a course and for a particular group of students. The more an instructor interacts with students, the more likely the instructor is to notice the overwhelmed or bored students” (p. 112).
- Emphasize content connections. Applicability of content is an important filter students use to gauge course rigor. “Real” and “relevant” are the levers that push students to work harder and longer. Content needs to matter to students personally or professionally. Teachers need to keep that in mind.
The more I read and think about what makes a course “hard,” the more it feels like we’re trying to nail jello to the wall. When we meet the needs of some, the rest may feel squished. It may not be possible to get it right, all the time, for every student. But I do believe, and the research on learning bears this out, there is value in initiating conversations with students about learning. We can’t dispel misperceptions if we’re unaware. The goal of the conversations isn’t to negotiate watering down the course, making grading easier, or lowering expectations. It’s to give students a voice and share ownership so that learning becomes more than a series of assignments reflecting only the teacher’s goals.
Draeger, J., del Prado Hill, P., & Mahler, R. (2015). Developing a Student Concept of Academic Rigor. Innovative Higher Education, 40: 215-228.
Martin, J.H., Hands, K.B., Lancaster, S.M., Trytten, D.A., & Murphy, T.J. (2008). Hard But Not Too Hard: Challenging Courses and Engineering Students. College Teaching, 56(2): 107-113.
Last year, I got a bicycle for my birthday. I’m sure you’ve seen cyclists wearing those clingy bike shorts, looking like they’re training for an Ironman. I’m not one of them. When I got the bike, I wasn’t in good shape or even sure how much I’d enjoy riding. So I got a “comfort bike.” The seat is bigger and you ride upright, which is more comfortable for shorter distances. Since I didn’t want to always ride alone, my husband got a mountain bike. It has more gears, disc brakes, and a skinny saddle. Mine is turquoise; the very snazzy “highlighter on wheels” is his.
I recently brought the bikes in for spring tune-ups. The technician took one look at mine and said, “Wow! You’ve logged a lot of miles. Way more than your husband. We’ll need to replace the chain. Good for you!”
That got me wondering how many miles I traveled last season. Our first ride was April 18, and we rode about 4 miles. I know this because I used Map My Ride, an app that tracks route, distance, pace, and time. I rode about 80 miles between April and October. Average speed increased from 4 mph to over 7 mph, and since most rides were about an hour, average distance rose from 4 to 7 miles. Not bad for a casual rider and non-athlete. Before the bike shop visit, I would not have estimated that I rode that far, nor did I realize just how much my pace had improved. Receiving a hearty acknowledgement from an expert made me feel great. I never thought of myself as a cyclist before!
Teachers, by virtue of our position, can do for learners what the bike technician’s comment did for me. We can influence perceptions about our discipline and shape students’ understanding of themselves as learners. What systems and practices help students identify and celebrate their growth as learners?
Gather & Examine Data. Learning management systems collect data about time on task and other metrics, and that information can provide insights into study behavior. Teachers can help students identify strengths and weaknesses revealed by the patterns, and the data can be the basis for recommendations about the timing, duration, and frequency of study. This kind of information is most often examined when teaching online, but it can be used in face-to-face courses as well. When students know this kind of information is being tracked and used by teachers, it adds a layer of accountability; that can motivate students to work more consistently or increase time on task.
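As one illustration, here is a minimal sketch of what examining that data might look like, assuming a hypothetical LMS export: a CSV named lms_activity.csv with student, login_time, and minutes_on_task columns. The file name and column names are placeholders rather than any particular LMS’s format, so adapt them to whatever your system actually provides.

```python
import csv
from collections import defaultdict
from datetime import datetime

def summarize_study_patterns(path="lms_activity.csv"):
    """Summarize per-student study sessions from a hypothetical LMS export."""
    sessions = defaultdict(list)  # student -> list of (timestamp, minutes) tuples
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            when = datetime.fromisoformat(row["login_time"])
            sessions[row["student"]].append((when, float(row["minutes_on_task"])))

    for student, visits in sorted(sessions.items()):
        visits.sort()  # order sessions chronologically
        total = sum(minutes for _, minutes in visits)
        average = total / len(visits)
        # Days between first and last session: a rough hint of spacing vs. cramming.
        span_days = max((visits[-1][0] - visits[0][0]).days, 1)
        print(f"{student}: {len(visits)} sessions, {total:.0f} min total, "
              f"{average:.0f} min/session, spread over {span_days} days")

if __name__ == "__main__":
    summarize_study_patterns()
```

A summary like this makes it easy to spot students whose sessions are few, short, or crammed into the day before the quiz, which is exactly the kind of pattern worth raising with them.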
Attend to Process, not just Product. Grades reflect content mastery, not intellectual development. Grades focus on product, not process. Teachers frequently lament students’ grades-over-learning perspective. Are teachers partly to blame? One way to shift attention toward learning is to provide data and feedback about process improvement in addition to grades. Be explicit about how students are advancing their understanding. In a large class this can be done by saying things like, “When we began this unit, I had to scaffold the entire process. I no longer need to do that.” In smaller classes, comments on papers can recognize qualitative improvement. Acknowledge an insightful comment during discussion. Recognize student effort. Informal and formative feedback can have a significant impact on motivation to learn, particularly for students who aren’t getting top grades.
Reflection. Kitsantas & Zimmerman (2009) developed a series of reflection questions in a document they call the SELF (Self-Efficacy for Learning) form. It asks students about a variety of learning issues:
- When you discover that your homework assignments are much longer than expected, can you change your priorities to have enough time for studying?
- When you have to take a test in a subject you dislike, can you find a way to motivate yourself to study and learn?
- When you are struggling to remember technical details for a test, can you find a way to associate them together to help you remember?
Teachers can promote self-efficacy and metacognition while teaching content by integrating a reflective component into some assignments. The reflections can be fairly short, and teachers don’t have to read or grade all of them. The purpose is to get students thinking differently: about how they’re learning, not just what they’re learning.
Collectively, data from the app and the expert’s positive feedback made me proud of what I accomplished last year. I’m motivated to ride more and work harder this season. Learning content is the destination. It’s important. But if we want to develop self-directed, life-long learners, we need to provide opportunities to practice and offer feedback about the learning process, not just grades. Because learning, like biking, isn’t really about the destination, it’s about the ride.
Reference: Kitsantas, A., & Zimmerman, B.J. (2009). College students’ homework and academic achievement: The mediating role of self-regulatory beliefs. Metacognition and Learning, 4: 97-119.