I recently wrote about a cheat sheet and quiz wrapper strategy. Students are allowed to prepare one side of a 3×5 index card to be used during a quiz that’s worth 10% of the course grade.
I intentionally refer to the assessment as a quiz, not an exam. I stopped calling it a test when I sensed the names “exam” and “test” increase anxiety in unproductive ways. I also wanted to make a clear distinction on the syllabus. Learning in this course is assessed in a variety of ways: homework, classwork, essay exams, multiple-choice quiz, paper, and project.
The multiple-choice format is also an intentional decision. The quiz assesses students’ ability to perform computations, interpret them, interpret related graphs, and understand the implications of a specific set of characteristics. The quiz assesses lower-level Bloom’s taxonomy learning. Understanding the concepts and preparing computations is the foundation for more advanced thinking and analysis in the second part of the unit. Unlike the essay exams, where I want students drawing graphs and explaining in writing, the mechanical basics addressed in this part of the course can be assessed conveniently with bubble sheets.
I introduced the cheat-sheet index card policy a couple of years ago, as a way to reduce stress during the quiz and promote active study strategies. The goal was to get students thinking about the material earlier and differently, prepare more effectively, and perform better on the quiz.
Unfortunately the strategy is, as my kids would say, an Epic Fail.
Calling it a quiz has not reduced anxiety. Because this is the first and only multiple-choice assessment, students don’t know what to expect. Lack of familiarity increases apprehension, regardless of what I call the assessment.
To give students an idea of what to expect, I provide a “practice quiz.” Perhaps it should be called “sample questions.” The practice quiz has the unintended consequence of limiting the scope of material studied. At least one student noted they didn’t study anything that wasn’t in the practice quiz.
Calling this assessment a quiz may actually produce more harm than good. “Quiz” may be less stressful than “test” or “exam.” But some stress is good. An unintended consequence of the name change: students may study less when it’s “only” a quiz.
Because quizzes may be seen as less important than exams or tests, some students may conclude the “cheat sheet” notecard is unnecessary. Thus, some students were insufficiently motivated to prepare one. In one case, a student noted they forgot about the quiz. Another prepared a card but neglected to bring it. Overall, about 10% of the class didn’t have one.
To be effective in promoting learning and improving scores, the card needs to be prepared in advance. Unfortunately, I noticed some students writing on theirs in the few minutes before the quiz. Others turned in cards that were incomplete or disorganized. I haven’t been able to analyze the data yet, but early impressions are clear: many students aren’t preparing the cards or studying for the quiz as I intended.
From the student perspective, the purpose of a cheat sheet card is to improve their quiz score. Unfortunately, anecdotal evidence (which is all I have at this point) doesn’t bear that out. Grades on the quizzes aren’t much, if at all, higher than before I allowed the cards.
That’s at least partly due to the observations above. I tried to convince students that the act of preparing the card promotes learning. The value of the card during the quiz is directly related to the quality of time and effort that went into preparing it. The message didn’t get through. Worse, feedback suggests students may have shifted from thinking about concept interrelationships toward putting basic definitions on the card.
Readers of this blog may be surprised (disappointed?) by this post’s focus on an ineffective strategy. But there is much you and I can learn from epic fails. Here are two quick takeaways:
- Wrappers or other mid-semester feedback is vital to understanding our students. I’m gaining valuable insight from the honest admissions about study time and strategies used. A lot of it depressed me (more about that soon). But I can’t improve instruction or enhance learning if I’m unaware of where my students are as learners.
- We learn a lot from mistakes. Be brave. Pick one instructional strategy and critically examine the intended and unintended consequences. What are your assumptions? What are your intentions? What evidence can you gather to test how well they are or aren’t being met?
I’d appreciate learning from your “epic fail.” Please share what you learned from a strategy or policy that didn’t work. Let’s learn from and with each other.
The Feb 2nd post (Easy A?) ended with a series of questions about grades, learning and instructional strategies. I fully intended to begin addressing them here. But as I dug into the literature I realized other issues need exploring.
When students report my courses are “hard,” my first instinct is to write them off as whining complaints. Then I look at grade distributions and review the number and type of assessments to try to discredit the feedback. I usually succeed, but nagging questions remain. What do students mean when they say my course is hard? What if our definitions are different? Does it matter?
What makes a course hard? Draeger, del Prado Hill & Mahler (2015) find “faculty perceived learning to be most rigorous when students are actively learning meaningful content with higher-order thinking at the appropriate level of expectation within a given context” (p. 216). Interactive, collaborative, engaging, synthesizing, interpreting, predicting, and increasing levels of challenge are a small sample of the ways faculty describe rigor. In contrast, “students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals” (p. 215), and course quality is “a function of their ability to meet reasonable faculty expectations rather than as a function of mastery of learning outcomes” (p. 216). Their findings are consistent with previous research, match my views of what makes a course challenging, and reflect the comments my students made.
We are not on the same page about what makes a course rigorous.
Does it matter? I think it does for two reasons. Clearly, if you’re concerned about course evaluations, the scores will be lower if students’ and teachers’ definitions and aims are not aligned. Beyond the ratings, the mismatched definitions, expectations, and criteria have significant implications for learning. Consider this analogy.
Monique wants to lose weight. She plans to eat fewer calories and exercise more. She hires a personal trainer to set up a cardio program. Monique isn’t very knowledgeable about weight loss physiology; she thinks less food and more cardio are all she needs. And for the short term, she has a point. Thus, she’s surprised when the trainer starts the session with ten minutes of cardio and then tells her to head over to the weight machines. Monique, despite her limited background in exercise science, says she’s only interested in cardio: treadmill, elliptical, climber, and spinning. The trainer persists and Monique begrudgingly complies. But, Monique’s enthusiasm for the program is diminished and she leaves without knowing why weight training is a hard but necessary component of the trainer’s plan.
Many students are like Monique. She’s paid good money for the trainer’s services. She knows she’s going to sweat on the cardio machines. She’s willing to work. But her expectations and understanding about exercise are incomplete. Because of this, she may not realize the trainer’s program will do more to help achieve her goals in the short- and long-term than cardio alone. Or, she might comprehend what the trainer is trying to help her achieve, but Monique may only care about the short-term fix. Monique may not have the time (or may not value time at the gym enough) to devote an hour when 20 minutes of cardio would seem to be enough, at least for now. Monique’s goals and understanding of the process do not match the trainer’s.
Similarly, many teachers are like the trainer. The trainer assumed Monique would accept, on faith, that she has her client’s best interest in mind. The trainer believes she knows what’s best for her client. The trainer assumes Monique knows what a comprehensive exercise program looks like, so she didn’t take the time to explain why weight training is necessary. Notice that the story discusses the trainer’s plan, not a plan they developed together. Notice this, too: the trainer is thinking like an expert, forgetting that novices see and approach things very differently.
As long as the trainer/trainee and teacher/student hold different definitions and expectations, the working relationship will produce less than optimal results and “satisfaction surveys” will reflect the mismatched priorities.
What can we do about it? Martin et al. (2008) investigate students’ perceptions of hard and easy courses across engineering programs. Two of their strategies have broad application.
- Consider student characteristics. Student differences with respect to semester standing, level of academic preparation, in-major v. general education course, and student major affect perceptions of course difficulty. The more teachers know their students, the better equipped we are to determine where students are in the learning maturation process. “The key is determining what an appropriate challenge is for a course and for a particular group of students. The more an instructor interacts with students, the more likely the instructor is to notice the overwhelmed or bored students” (p. 112).
- Emphasize content connections. Applicability of content is an important filter students use to gauge course rigor. “Real” and “relevant” are the levers that push students to work harder and longer. Content needs to matter to students personally or professionally. Teachers need to keep that in mind.
The more I read and think about what makes a course “hard,” the more it feels like we’re trying to nail jello to the wall. When we meet the needs of some, the rest may feel squished. It may not be possible to get it right, all the time, for every student. But I do believe, and the research on learning bears this out, there is value in initiating conversations with students about learning. We can’t dispel misperceptions if we’re unaware. The goal of the conversations isn’t to negotiate watering down the course, making grading easier, or lowering expectations. It’s to give students a voice and share ownership so that learning becomes more than a series of assignments reflecting only the teacher’s goals.
Draeger, J., del Prado Hill, P., & Mahler, R. (2015). Developing a Student Concept of Academic Rigor. Innovative Higher Education, 40: 215-228.
Martin, J.H., Hands, K.B., Lancaster, S.M., Trytten, D.A., & Murphy, T.J. (2008). Hard But Not Too Hard: Challenging Courses and Engineering Students, College Teaching, 56(2): 107-113.
Jan 17’s post discussed a bold student question: “Is this course an easy A?” Asked at the start of the new semester, the query led to speculation about student motivation and students’ beliefs about learning and grades. Then I received my fall course evaluations.
“If you want to learn about Economics she teaches it.. if you want to get a good grade take it with someone else.”
“While Dr. Paff is a nice and a good teacher for accounting and economic students, it is unnecessarily difficult. The exams and projects add up to a course that is much, much harder from her than it is for the other professors. I would advice (sic) students in an engineering major or technology-related major to avoid Dr. Paff’s section. It is not for you. She teaches well. But, to get a good grade, based on what I have heard, the other professors are marginally easier.”
“Class is not easy, be prepared to spend some time doing projects and learning concepts. The class was informative but I do not think it needed to be as hard as it was for the concepts.”
“If you want to learn material take Paff. If you [want to] make a good grade take someone else.”
My students answered the “easy A” question, and their feedback got me asking more questions. This (limited) sample suggests that, for some students: grades and learning are unrelated, easy is better than hard, and learning and easy generally don’t go together.
Grades v. Learning. I can’t blame students for focusing on grades. They affect career, graduate school, scholarships, etc. But these statements show why Alfie Kohn argues so compellingly that an emphasis on grades reduces student motivation. Note the dichotomy. The choice is between learning or a good grade. In their view, grades are not integrated with, or a reflection of, learning. Yikes! Clearly that’s not my intent. How can I do a better job integrating and making explicit the connection between grades and learning?
Easy v. Hard. What makes a course “hard”? Is it the number of assignments? The type of assignment? How much it counts? How it’s graded? How long it takes to complete? How much mental energy is required? Something else?
I don’t plan to change the number of assessments. Each one is designed to help students learn a new concept or apply what they’ve learned. But I do need to reconsider how I am helping students make connections between assignments/assessments and their learning.
Learning isn’t easy. This is a golden nugget buried in the comments. Deep down, students know learning is hard. Some want to learn and are willing to make the effort and take the risk of pushing themselves into new territories. Others would prefer to go through the motions or do only what’s necessary. (We can say the same of faculty!) Why do some students prefer easy? Are they insecure about their ability to learn? Are they worried the effort won’t be worth it? Have I made a strong case for content relevance and the value of learning?
It’s easy to write off student comments like these as uninformed complaints. But I’d argue they offer a perspective on student beliefs and attitudes many teachers suspect students hold. More important, these issues lie within our sphere of influence to examine with students and address. The next few posts will explore student assumptions and beliefs about hard and easy courses along these lines:
- What instructional strategies integrate and make explicit the connection between grades and learning?
- How can teachers help students see the connections between the assignments/assessments and their learning?
- What practices build a strong case for content relevance?
- What strategies help students see their efforts to learn as worthwhile?
What other questions would you ask? Please share your thoughts, strategies, and suggestions.