Over the years I’ve unsystematically made changes to the items on the end-of-course evaluation form. Typically the changes were hastily considered, requested right at the deadline, driven by a desire to capture feedback on the instructional strategies I used that term. Note the key words here: “hasty” and “unsystematic.”
I needed to be more intentional. Thus began an experiment last semester. Instead of thinking about the evaluation criteria near the end of the course, I chose questions before the semester started. As I prepared to teach and throughout the term, pedagogical decisions were made with these course evaluation criteria in mind:
- Relating course material to real life situations
- Making class sessions intellectually stimulating
- Helping students answer their own questions
- Encouraging students to apply concepts to demonstrate understanding
- Emphasizing learning rather than tests and grades
- Guiding students to be more self-directed in their learning
See Feedback Questions as Course Scaffold for additional background.
Student-created Practice Problems: I created several blank templates that allow students to set variable amounts and prepare the related analyses and entries. Sometimes the template was an in-class learning activity. In other cases, I offered a little extra credit. While learning accounting is the official goal of these templates (accounting colleagues, contact me if you’d like to discuss or obtain copies), they accomplish so much more. Students engaged in an active study strategy, instead of “looking over” notes. Peer evaluation fostered collaboration and community. Prepare-pair-share led to discipline-based interaction as students discussed the variables, solved each other’s problems and corrected mistakes.
Suggested Study Timeline: For the first exam, we mapped out a suggested timeline for study. This was done during class. Then, over the following five days, I posted encouraging (and humorous) reminders and announcements in the LMS. Later in the term, students remarked how helpful that was. They also asked if I would map that out for them again. My response: Now that we’ve developed a study plan together in class, will you create a plan of your own? Will you use this strategy in other classes? While it’s tempting to “just do it” with or for them before each test, students sometimes need the teacher to step back and not fill the gap.
Textbook Reading Notes: One of the homework assignments (toward the middle of the term) asked students to take notes on the chapter reading. This led them to a wondrous discovery: Class time makes so much more sense when you read the chapter beforehand! Students got so much from this homework they suggested it be routinely assigned. My response: Now that you’ve seen how beneficial it is to read before class, will you continue to do it, even if it’s not for credit? Will you do it for the sake of learning?
During mid-semester informal feedback, some students acknowledged they could be doing more to own their learning. That is a distinct shift from prior semesters. Instead of primarily looking to the teacher to teach, students clearly recognized their own contributions to collective learning during class, as well as the kinds of activities and effort learning requires. This closely aligns with USC’s recent initiatives in the area of student evaluations of teaching:
Umbrella questions such as, “How would you rate your professor?” and “How would you rate this course?” — which Clark called “popularity contest” questions — are now out. In are questions on course design, course impact and instructional, inclusive and assessment practices. Did the assignments make sense? Do students feel they learned something? Students also are now asked about what they brought to a course. How many hours did they spend on coursework outside of class? How many times did they contact the professor? What study strategies did they use? While such questions help professors gauge how their students learn, Clark said, they also signal to students that “your learning in this class depends as much on your input as your professor’s work.” [emphasis added] [Source: Teaching Eval Shake-Up, InsideHigherEd, May 22, 2018]
When asked to describe what the instructor did to facilitate learning, one student put it this way: “Dr. Paff had a great method of having students read the chapter and use the screencasts that she prepared to grasp some fundamental concept before class…. This style of class is effective because it drives students learning by themselves and rewards students for being good students.”
With this framework, I spent less time picking out homework problems from the textbook (a standard practice in accounting) and more time devising strategies that help students answer their own questions, promote self-directed learning, and make class time intellectually stimulating. Students still learned accounting, but this time they learned about themselves as learners too.
Jan 17’s post discussed a bold student question: “Is this course an easy A?” Asked at the start of the new semester, the query led to speculation about student motivation and their beliefs about learning and grades. Then I received my fall course evaluations.
“If you want to learn about Economics she teaches it.. if you want to get a good grade take it with someone else.”
“While Dr. Paff is a nice and a good teacher for accounting and economic students, it is unnecessarily difficult. The exams and projects add up to a course that is much, much harder from her than it is for the other professors. I would advice (sic) students in an engineering major or technology-related major to avoid Dr. Paff’s section. It is not for you. She teaches well. But, to get a good grade, based on what I have heard, the other professors are marginally easier.”
“Class is not easy, be prepared to spend some time doing projects and learning concepts. The class was informative but I do not think it needed to be as hard as it was for the concepts.”
“If you want to learn material take Paff. If you [want to] make a good grade take someone else.”
My students answered the “easy A” question and their feedback got me asking more questions. This (limited) sample suggests for some students: grades and learning are unrelated, easy is better than hard, and learning and easy generally don’t go together.
Grades v. Learning. I can’t blame students for focusing on grades. They affect career, graduate school, scholarships, etc. But these statements illustrate Alfie Kohn’s compelling argument that an emphasis on grades reduces student motivation. Note the dichotomy: the choice is between learning and a good grade. In their view, grades are not integrated with or a reflection of learning. Yikes! Clearly that’s not my intent. How can I do a better job of integrating and making explicit the connection between grades and learning?
Easy v. Hard. What makes a course “hard”? Is it the number of assignments? The type of assignment? How much it counts? How it’s graded? How long it takes to complete? How much mental energy is required? Something else?
I don’t plan to change the number of assessments. Each one is designed to help students learn a new concept or apply what they’ve learned. But I do need to reconsider how I am helping students make connections between assignments/assessments and their learning.
Learning isn’t easy. This is a golden nugget buried in the comments. Deep down, students know learning is hard. Some want to learn and are willing to make the effort and take the risk of pushing themselves into new territories. Others would prefer to go through the motions or do only what’s necessary. (We can say the same of faculty!) Why do some students prefer easy? Are they insecure about their ability to learn? Are they worried the effort won’t be worth it? Have I made a strong case for content relevance and the value of learning?
It’s easy to write off student comments like these as uninformed complaints. But I’d argue they offer a perspective on student beliefs and attitudes many teachers suspect students hold. More important, these issues lie within our sphere of influence to examine with students and address. The next few posts will explore student assumptions and beliefs about hard and easy courses along these lines:
- What instructional strategies integrate and make explicit the connection between grades and learning?
- How can teachers help students see the connections between the assignments/assessments and their learning?
- What practices build a strong case for content relevance?
- What strategies help students see their efforts to learn as worthwhile?
What other questions would you ask? Please share your thoughts, strategies, and suggestions.
Do you ever feel like you’re teaching in the movie Groundhog Day, where the same thing happens every semester? I’m feeling a bit of that right now.
It’s first-exam Groundhog Day in accounting. We worked through the foundational material and students completed the first test last week. Each term I try different strategies to make content clearer, improve access to resources, provide more practice, and enhance opportunities to learn. In addition to the usual strategies and resources, I assigned practice questions as homework prior to the test. Solutions were provided. Office hours were increased and shifted to before the test.
I gave the exam… graded the papers… and good morning, Groundhog Day! I find myself scratching my head after one of my students revealed the following:
I sat down with someone that took your class previously
and they showed me where everything is….
I now found all the helpful stuff…
Even though you went over this stuff in class I didn’t follow…
Past variations of this include “I didn’t know what would be on the test” and “I thought I understood the material.” Every term there are students who ignore advice, skip the learning resources, underestimate the challenges, overestimate understanding, and study insufficiently. Every first accounting exam triggers Groundhog Day, where some students (this time about 12%) have fallen into (some might say they dug) a hole that’ll require extra effort to escape.
Recently, Maryellen Weimer blogged about Five Ways to Improve Exam Review Sessions. She provided several before-the-exam strategies. It’s a helpful piece. Before the first test I used a number of the practices she describes.
What about after the exam?
What post-exam strategies reduce the chances students dig a hole at the beginning of their courses in the future?
Offer Help. I write a personal note to each student who did poorly (received a D or F). I ask them to stop by during office hours or see me during the break to make an appointment. I promise no blaming or shaming. It’s water under the bridge. The point is to talk about how they prepared to discern what will work better next time. Sometimes it’s about working smarter, not harder. I also make a point of reminding students they can recover from this misstep by stressing one important truth: for the outcome to change, behavior must change.
Debrief. “Let’s go over the exam.” [Insert YAWN here] For students who did well, this is a complete waste of time. For the students who didn’t perform well, a straight review of the answers won’t advance learning much. But an exam debrief can be about much more than the answers. Here’s where I reinforce the message that behavior must change for the outcome to improve next time. Examples of behavior-changing information: absences and homework completion v. test scores. I let the data speak for itself. I limit the post-exam debrief to conceptual issues and grading philosophy. Since I want students to visit me, I don’t review each item in class.
Peer Advice. Sometimes I’ll ask students who did very well on the test to privately (on a notecard or in an email) share their best practices. These strategies are provided to the class before the next test. Students tend to take advice more willingly from peers than the teacher.
Exam Wrappers. After the first exam is a good time to provide an exam wrapper. Essentially, it’s a form wrapped around the test. Wrappers convey the message that exams are more than just an assessment of content learning; they are also a means of teaching students how to learn. If you’re unfamiliar with them, search “exam wrapper” and Google will provide over 1000 hits linking to valuable resources provided by teaching and learning centers. I like the explanations, examples, and resources at Carnegie Mellon, Purdue, and Duquesne. My favorite wrapper question asks students to assign percentages to the amount of time they spent on different kinds of exam preparation behaviors: preparing notecards, rereading the chapter, practicing problems, reviewing notes, etc. I find this diagnostic to be particularly insightful when helping students “learn to learn” in accounting.
Allow a Resubmit. Sometimes I allow students to earn some of the points they missed by resubmitting part of the exam, though this is generally more appropriate in economics than accounting. Some might disagree with this, and it’s probably not appropriate in all settings. But because my economics exams are take home essays, if a large number of students miss points, that means the class didn’t learn the material and/or I asked unclear questions. In those cases, my priority is learning, not assigning grades. The possibility of earning a portion of missed points motivates students to go back and rethink their answers to improve understanding.
Grade-Estimator. Usually after the first exam I post a spreadsheet I developed that helps students predict their course grade. Many LMSs track grades, but I am unaware of any that allow students to conduct “what-if” analysis. The spreadsheet is set up to reflect course grade percentages. Students enter current or predicted grades for the various components and then see what their grade will be. Here’s a snapshot:
Some students use the estimator to answer the age-old question: How badly can I mess up the final and still get ____? I use the estimator as a diagnostic tool during office visits. I ask the student about their grades. Some cannot report them, meaning they are unsure how consistently they have completed homework, have forgotten exam scores, etc. This suggests they may not feel responsible for or “own” their learning. When a student admits “I didn’t realize I missed so many assignments,” they are taking an important step toward self-directed learning. Improvement hinges on knowing where you stand.
Sometimes students consider dropping the course when the situation doesn’t warrant it. Other times they should consider dropping the course, and the estimator provides objective information to help them make an informed decision.
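The “what-if” arithmetic behind an estimator like this is just a weighted average of category scores. Here is a minimal sketch of the idea; the category names and weights are hypothetical examples, not my actual syllabus:

```python
# Minimal sketch of a "what-if" course-grade estimator.
# Category weights below are hypothetical, not an actual syllabus.
WEIGHTS = {"homework": 0.20, "exams": 0.50, "projects": 0.30}

def estimate_course_grade(scores):
    """Weighted average of percentage scores (0-100), one per category."""
    assert set(scores) == set(WEIGHTS), "enter a score for every category"
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# What-if: given current homework and project scores, what exam average
# is needed to finish with an 80 overall?
current = {"homework": 95.0, "projects": 85.0}
locked_in = sum(WEIGHTS[cat] * current[cat] for cat in current)
needed_exam_avg = (80.0 - locked_in) / WEIGHTS["exams"]
```

Plugging in a predicted exam score alongside the current homework and project scores then shows the resulting course grade, which is exactly the kind of scenario-testing the spreadsheet supports.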
Most of my students are in their first- or second-year of college. That probably means first-exam Groundhog Day is part of the territory. But after our time together, these post-exam strategies should advance their understanding of themselves as learners. They may never record another debit or credit again, but if they learned about learning and use that insight going forward, that would be a wonderful Groundhog Day for them to repeat.
If you’re interested in reading more, I recommend: Susan A. Ambrose, Michael W. Bridges, Michele DiPietro, Marsha C. Lovett, Marie K. Norman, Richard E. Mayer. 2010. How Learning Works: Seven Research-Based Principles for Smart Teaching, San Francisco: Jossey-Bass.
Photo credit: Janet Morse Church, Your Shot; http://animals.nationalgeographic.com/animals/mammals/groundhog/