My recent Epic Fail led to lots of reading in some unfamiliar areas of the teaching and learning literature. You may be familiar with the work of John Biggs; I was not. His 1985 article, “The Role of Metalearning in Study Processes,” provides a lengthy survey teachers can use to explore how students approach learning. Specifically, the paper considers how motivation, locus of control, and students’ experiences in and out of school affect their attitudes toward learning and their use of surface- or deep-learning strategies.
The 43-item questionnaire is a bit unwieldy; fortunately, Biggs, Kember, and Leung (2001) developed a more focused instrument in their paper, “The revised two-factor Study Process Questionnaire: R-SPQ-2F.” Students rate twenty statistically reliable statements on a 5-point Likert scale (“never or rarely true of me” = 1; “always or almost always true of me” = 5).
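For readers curious about the mechanics, here is a minimal scoring sketch. It assumes the item ordering described in Biggs, Kember, and Leung (2001), where the twenty items cycle through Deep Motive, Deep Strategy, Surface Motive, Surface Strategy; the function name and dictionary keys are my own.

```python
# Hypothetical scoring sketch for the R-SPQ-2F (20 items, each rated 1-5).
# Assumption: items cycle Deep Motive, Deep Strategy, Surface Motive,
# Surface Strategy, so deep items fall at positions 1,2 (mod 4) and
# surface items at positions 3,0 (mod 4), using 1-based item numbers.

def score_rspq2f(responses):
    """responses: list of 20 integers (1-5) in item order."""
    if len(responses) != 20 or any(r not in range(1, 6) for r in responses):
        raise ValueError("Expected 20 Likert responses between 1 and 5.")
    deep = sum(r for i, r in enumerate(responses, start=1) if i % 4 in (1, 2))
    surface = sum(r for i, r in enumerate(responses, start=1) if i % 4 in (3, 0))
    return {"deep": deep, "surface": surface}  # each subscale ranges 10-50
```

A student answering “sometimes true of me” (3) to every item would score 30 on each subscale; higher deep scores relative to surface scores suggest a deep approach in that teaching context.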
In light of my epic fail, five items piqued my interest:
- I only study seriously what’s given out in class or in the course outlines.
- I find I can get by in most assessments by memorising key sections rather than trying to understand them.
- I generally restrict my study to what is specifically set as I think it is unnecessary to do anything extra.
- I find it is not helpful to study topics in depth. It confuses and wastes time, when all you need is a passing acquaintance with topics.
- I find the best way to pass examinations is to try to remember answers to likely questions. (p.148)
My observations of student behavior and recent comments on quiz wrappers reflect agreement with these statements, all or most of the time. Essentially, the feedback suggests students are using a lot of surface learning strategies.
Relief! My students are surface learners!
Maybe I was too hard on myself?
“Please note that the R-SPQ-2F is designed to reflect students’ approaches in their current teaching context, so it is an instrument to evaluate teaching (my emphasis) rather than one that characterizes students as ‘surface learners’ or ‘deep learners’. The earlier instrument has been used also to label students (he is a surface learner and she is a deep learner) but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument for labeling students, that is as an independent variable, but it should not be so used; it provides a set of independent variables that may be used for assessing teaching (my emphasis).” [http://www.johnbiggs.com.au/academic/students-approaches-to-learning/ Accessed: 11/6/17]
If I were looking for a bit of vindication in the literature, it isn’t here.
But wait, there’s more:
“A particularly depressing finding is that most students in most undergraduate courses become increasingly surface and decreasingly deep in their orientation to learning…One might call it the ‘institutionalisation’ of learning, whereby students tend to pick up the tricks that get you by…” (Biggs, Kember, & Leung, p.138).
The takeaway: Teacher and students share responsibility. Surface strategies are used because they work. Students will be forced to change if the expectations, work and deliverables of learning require drilling down instead of scratching the surface.
Here are a few of the authors’ suggestions for appropriate uses of the surveys:
- Monitoring day-to-day teaching; conducting action research; or structuring long-term pedagogical research.
- Diagnosing study problems: comparing individuals’ deep and surface scores to others in the same cohort.
- Examining the relationship of approaches to learning with other curriculum variables with a view to fine tuning curricula.
Before I gathered data and dug into the literature, I was firmly convinced the assessments in my course require more than surface strategies. Now I’m not so sure. If students are unaccustomed to deeper learning, how can I help them develop or expand their learning strategy repertoire? What can I do to move our expectations and beliefs about learning closer together?
These issues connect closely with previous posts: students’ learning misperceptions, passive study strategies, and the divergent views teachers and students have about what makes a course “hard.” I strive to grow and improve as a teacher, but the introductory courses I teach are just one touch point in students’ academic journey. Unless we coordinate across courses and programs, ad hoc efforts merely scratch the surface of students’ learning potential.
Biggs, J.B., Kember, D., & Leung, D.Y.P. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71, 133–149.
Biggs, J.B. (1985). The role of metalearning in study processes. British Journal of Educational Psychology, 55, 185–212.
John Biggs’ website provides the article and survey instrument: http://www.johnbiggs.com.au/academic/students-approaches-to-learning/
I recently wrote about a cheat sheet and quiz wrapper strategy. Students are allowed to prepare one side of a 3×5 index card to be used during a quiz that’s worth 10% of the course grade.
I intentionally refer to the assessment as a quiz, not an exam. I stopped calling it a test when I sensed the names “exam” and “test” increase anxiety in unproductive ways. I also wanted to make a clear distinction on the syllabus. Learning in this course is assessed in a variety of ways: homework, classwork, essay exams, multiple-choice quiz, paper, and project.
The multiple-choice format is also an intentional decision. The quiz assesses students’ ability to perform computations, interpret them, interpret related graphs, and understand the implications of a specific set of characteristics. It targets the lower levels of Bloom’s taxonomy. Understanding the concepts and performing the computations is the foundation for more advanced thinking and analysis in the second part of the unit. Unlike the essay exams, where I want students drawing graphs and explaining in writing, the mechanical basics addressed in this part of the course can be assessed conveniently with bubble sheets.
I introduced the cheat-sheet index card policy a couple of years ago, as a way to reduce stress during the quiz and promote active study strategies. The goal was to get students thinking about the material earlier and differently, prepare more effectively, and perform better on the quiz.
Unfortunately, the strategy is,
as my kids would say,
an Epic Fail.
Calling it a quiz has not reduced anxiety. Because this is the first and only multiple-choice assessment, students don’t know what to expect. Lack of familiarity increases apprehension, regardless of what I call the assessment.
To give students an idea of what to expect, I provide a “practice quiz.” Perhaps it should be called “sample questions.” The practice quiz has the unintended consequence of limiting the scope of material studied. At least one student noted they didn’t study anything that wasn’t in the practice quiz.
Calling this assessment a quiz may actually produce more harm than good. “Quiz” may be less stressful than “test” or “exam.” But some stress is good. An unintended consequence of the name change: students may study less when it’s “only” a quiz.
Because quizzes may be seen as less important than exams or tests, some students may conclude the “cheat sheet” notecard is unnecessary. Thus, some students were insufficiently motivated to prepare one. In one case, a student noted they forgot about the quiz. Another prepared a card but neglected to bring it. Overall, about 10% of the class didn’t have one.
To be effective in promoting learning and improving scores, the card needs to be prepared in advance. Unfortunately, I noticed some students writing on theirs in the few minutes before the quiz. Others turned in cards that were incomplete or disorganized. I haven’t been able to analyze the data yet, but early findings are clear: many students aren’t preparing the cards or studying for the quiz as I intended.
From the student perspective, the purpose of a cheat sheet card is to improve their quiz score. Unfortunately, anecdotal evidence (which is all I have at this point) doesn’t bear that out. Grades on the quizzes aren’t much, if at all, higher than before I allowed the cards.
That’s at least partly due to the observations above. I tried to convince students that the act of preparing the card promotes learning. The value of the card during the quiz is directly related to the quality of time and effort that went into preparing it. The message didn’t get through. Worse, feedback suggests students may have shifted from thinking about concept interrelationships toward putting basic definitions on the card.
Readers of this blog may be surprised (disappointed?) by this post’s focus on an ineffective strategy. But there is much you and I can learn from epic fails. Here are two quick takeaways:
- Wrappers or other mid-semester feedback is vital to understanding our students. I’m gaining valuable insight from the honest admissions about study time and strategies used. A lot of it depressed me (more about that soon). But I can’t improve instruction or enhance learning if I’m unaware of where my students are as learners.
- We learn a lot from mistakes. Be brave. Pick one instructional strategy and critically examine the intended and unintended consequences. What are your assumptions? What are your intentions? What evidence can you gather to test how well they are or aren’t being met?
I’d appreciate learning from your “epic fail.” Please share what you learned from a strategy or policy that didn’t work. Let’s learn from and with each other.
For the past year or so I’ve been allowing students to use a note card, essentially a “cheat sheet,” when completing the single in-class “exam” in my microeconomics course. I put exam in quotes because I refer to the assessment as a quiz. It’s essentially an exam; it carries the same weight as a test. But, like calling an assignment a warm-up instead of homework, naming it a quiz makes it somewhat less scary.
Students appreciate having their card during the quiz. From the learning standpoint, I see it as an incentivized study activity. To be most helpful during the exam, students have to really think about the content. What do they understand best? What areas are most confusing? How can they use the card to clarify understanding? How should the content be organized? What are the priorities? What are the interrelationships between and among concepts?
Song, Guo, and Thuente (2016) report a positive correlation between the quality of the card, in terms of organization and content presentation, and exam performance. My students’ quiz scores track similarly with their card quality. The key questions for me are: Do better students already know how to study and prepare the cards? Or can weaker students be coached on card preparation as a study strategy, to improve learning and academic performance?
Focusing on the students who have been less academically successful, I’ve become very intentional about discussing study strategies during class this semester. I handed out the cards this week, describing their preparation as a learning activity, not just an aid during the quiz.
I set aside just a few minutes of class time to talk about how and when to prepare them. Preparing the card is best done after the student has invested some time in the material, discovering potential problem spots. We discussed a timeline for gapped study. I reminded them of practice questions, resources and active learning strategies.
Of course, I also reviewed the “rules” for the cards.
- The card must only have information on one side.
- Their name goes on the other side.
- Cards are turned in with their exam.
- Content on the card must be handwritten. No multiple reductions of cut-and-pasted content.
Another twist I’m adding this term is a post-quiz wrapper. Here are the details.
This activity is designed to give you a chance to reflect on your quiz performance and, more importantly, on the effectiveness of your preparation. Please answer the questions sincerely. Your responses will be collected to inform instruction; they have no impact on your grade.
1. Approximately how much time did you spend preparing for the quiz? _______
2. What percentage (%) of your time was spent on each of the following?
   - Reading or re-reading the textbook
   - Self-testing / reciting
   - Reviewing homework & classwork solutions
   - Reworking problems & in-class practice
   - Watching the screencasts
   - Reviewing your own notes
   - Preparing the note card
   - Other? Please explain.
3. What areas do you think were most challenging on the quiz?
   - Trouble with the computations
   - Unclear about vocabulary
   - Confusion about the graphs
   - Lack of understanding of concepts
   - Careless mistakes
   - Other? Please explain.
4. Did you feel prepared for the quiz? Are you surprised by your grade? Please explain.
5. What advice would you give to future students preparing for the quiz?
I’ll share the takeaways in a future post. What are your thoughts or experiences with cheat sheets? Have you asked students to reflect on how they used them or what they discovered about themselves as learners from preparing them? Please share!
Reference: Song, Y., Guo, Y., & Thuente, D. (2016). A quantitative case study on students’ strategy for using authorized cheat-sheets. Proceedings of the IEEE Frontiers in Education Conference, 1–9. doi:10.1109/FIE.2016.7757656