Blog Archives

Easy A? Perspectives from Course Evaluations



Jan 17’s post discussed a bold student question: “Is this course an easy A?” Asked at the start of the new semester, the query led to speculation about student motivation and students’ beliefs about learning and grades. Then I received my fall course evaluations.

“If you want to learn about Economics she teaches it.. if you want to get a good grade take it with someone else.”

“While Dr. Paff is a nice and a good teacher for accounting and economic students, it is unnecessarily difficult. The exams and projects add up to a course that is much, much harder from her than it is for the other professors. I would advice (sic) students in an engineering major or technology-related major to avoid Dr. Paff’s section. It is not for you. She teaches well. But, to get a good grade, based on what I have heard, the other professors are marginally easier.”

“Class is not easy, be prepared to spend some time doing projects and learning concepts. The class was informative but I do not think it needed to be as hard as it was for the concepts.”

“If you want to learn material take Paff. If you [want to] make a good grade take someone else.”

My students answered the “easy A” question, and their feedback got me asking more questions. This (limited) sample suggests that, for some students, grades and learning are unrelated, easy is better than hard, and learning and easy generally don’t go together.

Grades v. Learning. I can’t blame students for focusing on grades. They affect careers, graduate school, scholarships, etc. But these statements illustrate Alfie Kohn’s compelling argument that an emphasis on grades reduces student motivation. Note the dichotomy. The choice is between learning and a good grade. In their view, grades are not integrated with or a reflection of learning. Yikes! Clearly that’s not my intent. How can I do a better job of integrating grades and learning and making the connection between them explicit?

Easy v. Hard. What makes a course “hard”? Is it the number of assignments? The type of assignment? How much it counts? How it’s graded? How long it takes to complete? How much mental energy is required? Something else?

I don’t plan to change the number of assessments. Each one is designed to help students learn a new concept or apply what they’ve learned. But I do need to reconsider how I am helping students make connections between assignments/assessments and their learning.

Learning isn’t easy. This is a golden nugget buried in the comments. Deep down, students know learning is hard. Some want to learn and are willing to make the effort and take the risk of pushing themselves into new territories. Others would prefer to go through the motions or do only what’s necessary. (We can say the same of faculty!) Why do some students prefer easy? Are they insecure about their ability to learn? Are they worried the effort won’t be worth it? Have I made a strong case for content relevance and the value of learning?

It’s easy to write off student comments like these as uninformed complaints. But I’d argue they offer a perspective on student beliefs and attitudes that many teachers suspect students hold. More importantly, these issues lie within our sphere of influence to examine with students and address. The next few posts will explore student assumptions and beliefs about hard and easy courses along these lines:

  • What instructional strategies integrate and make explicit the connection between grades and learning?
  • How can teachers help students see the connections between the assignments/assessments and their learning?
  • What practices build a strong case for content relevance?
  • What strategies help students see their efforts to learn as worthwhile?

What other questions would you ask? Please share your thoughts, strategies, and suggestions.




Groundhog Day

Do you ever feel like you’re teaching in the movie Groundhog Day? Where the same thing happens every semester? I’m feeling a bit of that right now.

It’s first-exam Groundhog Day in accounting. We worked through the foundational material and students completed the first test last week. Each term I try different strategies to make content clearer, improve access to resources, provide more practice, and enhance opportunities to learn. In addition to the usual strategies and resources, I assigned practice questions as homework prior to the test. Solutions were provided. Office hours were increased and shifted to before the test.

I gave the exam… graded the tests… and good morning, Groundhog Day! I found myself scratching my head after one of my students revealed the following:

I sat down with someone that took your class previously

and they showed me where everything is….

I now found all the helpful stuff…

Even though you went over this stuff in class I didn’t follow…

Past variations of this include “I didn’t know what would be on the test” and “I thought I understood the material.” Every term there are students who ignore advice, skip the learning resources, underestimate the challenges, overestimate their understanding, and study insufficiently. Every first accounting exam triggers Groundhog Day, where students (this time about 12%) have fallen into (some might say they dug) a hole that’ll require extra effort to escape.

Recently, Maryellen Weimer blogged about Five Ways to Improve Exam Review Sessions. She provided several before-the-exam strategies. It’s a helpful piece. Before the first test I used a number of the practices she describes.

What about after the exam?

What post-exam strategies reduce the chances students dig a hole at the beginning of their courses in the future?

Offer Help. I write a personal note to each student who did poorly (received a D or F). I ask them to stop by during office hours or see me during the break to make an appointment. I promise no blaming or shaming.  It’s water under the bridge. The point is to talk about how they prepared to discern what will work better next time. Sometimes it’s about working smarter, not harder. I also make a point of reminding students they can recover from this misstep by stressing one important truth: for the outcome to change, behavior must change.

Debrief. “Let’s go over the exam.” [Insert YAWN here] For students who did well, this is a complete waste of time. For students who didn’t perform well, a straight review of the answers won’t advance learning much. But an exam debrief can be about much more than the answers. Here’s where I reinforce the message that behavior must change for the outcome to improve next time. Examples of behavior-changing information: absences and homework completion v. test scores. I let the data speak for itself. I limit the post-exam debrief to conceptual issues and grading philosophy. Since I want students to visit me, I don’t review each item in class.

Peer Advice. Sometimes I’ll ask students who did very well on the test to privately (on a notecard or in an email) share their best practices. These strategies are provided to the class before the next test. Students tend to take advice more willingly from peers than the teacher.

Exam Wrappers. After the first exam is a good time to provide an exam wrapper. Essentially, it’s a form wrapped around the test. Wrappers convey the message that exams are more than just an assessment of content learning; they are also a means of teaching students how to learn. If you’re unfamiliar with them, search “exam wrapper” and Google will provide over 1000 hits linking to valuable resources provided by teaching and learning centers. I like the explanations, examples, and resources at Carnegie Mellon, Purdue, and Duquesne. My favorite wrapper question asks students to assign percentages to the amount of time they spent on different kinds of exam preparation behaviors: preparing notecards, rereading the chapter, practicing problems, reviewing notes, etc. I find this diagnostic to be particularly insightful when helping students “learn to learn” in accounting.

Allow a Resubmit. Sometimes I allow students to earn some of the points they missed by resubmitting part of the exam, though this is generally more appropriate in economics than accounting. Some might disagree with this, and it’s probably not appropriate in all settings. But because my economics exams are take home essays, if a large number of students miss points, that means the class didn’t learn the material and/or I asked unclear questions. In those cases, my priority is learning, not assigning grades. The possibility of earning a portion of missed points motivates students to go back and rethink their answers to improve understanding.

Grade-Estimator. Usually after the first exam I post a spreadsheet I developed that helps students predict their course grade. Many LMS track grades, but I am unaware of any that allow students to conduct “what-if” analysis. The spreadsheet is set up to reflect course grade percentages. Students enter current or predicted grades for the various components and then see what their grade will be. Here’s a snapshot:

[Screenshot: the grade-estimator spreadsheet]

Some students use the estimator to answer the age-old question: How badly can I mess up the final and still get ____? I use the estimator as a diagnostic tool during office visits. I ask students about their grades. Some cannot report them, meaning they are unsure how consistently they have completed homework, have forgotten exam scores, etc. This suggests they may not feel responsible for or “own” their learning. When a student admits “I didn’t realize I missed so many assignments,” they are taking an important step toward self-directed learning. Improvement hinges on knowing where you stand.
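For the curious, the “what-if” arithmetic behind the spreadsheet is just a weighted average, which also answers the “how badly can I mess up the final” question when solved in reverse. Here’s a minimal sketch in Python; the component names and weights are illustrative placeholders, not my actual syllabus breakdown:

```python
# Hypothetical component weights (must sum to 1.0); yours will differ.
WEIGHTS = {"homework": 0.20, "projects": 0.20,
           "exam1": 0.15, "exam2": 0.15, "final": 0.30}

def course_grade(scores):
    """Weighted average of component scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def needed_on_final(scores, target):
    """Minimum final-exam score needed to reach a target course grade,
    given the scores earned on every other component."""
    earned = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS if c != "final")
    return (target - earned) / WEIGHTS["final"]

# What-if: enter current or predicted scores and see the course grade.
scores = {"homework": 90, "projects": 85, "exam1": 70, "exam2": 75, "final": 80}
print(course_grade(scores))        # predicted course grade
print(needed_on_final(scores, 80)) # final-exam score needed for an 80
```

In a spreadsheet, the same logic is a single SUMPRODUCT of the weights row and the scores row; students simply type hypothetical scores into the cells and watch the grade update.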

Sometimes students consider dropping the course when the situation doesn’t warrant it. Other times they should consider dropping the course, and the estimator provides objective information to help them make an informed decision.

Most of my students are in their first or second year of college. That probably means first-exam Groundhog Day is part of the territory. But after our time together, these post-exam strategies should advance their understanding of themselves as learners. They may never record another debit or credit again, but if they learned about learning and use that insight going forward, that would be a wonderful Groundhog Day for them to repeat.

If you’re interested in reading more, I recommend: Susan A. Ambrose, Michael W. Bridges, Michele DiPietro, Marsha C. Lovett, Marie K. Norman, Richard E. Mayer. 2010. How Learning Works: Seven Research-Based Principles for Smart Teaching, San Francisco: Jossey-Bass.


Photo credit: Janet Morse Church, Your Shot.

Glass Half Full

Are you excited about the possibilities a new term and year brings? Or is this just a routine [yawn] start to another spring semester?

I am filled with hope and anticipation. I’m looking forward to meeting my students, discovering what they already know, identifying what their interests are, and exploring accounting with them.

Lest you think I am naïve, or a Pollyanna, I recognize there are lots of issues that could (some might say should) discourage me. Each term also brings trials and difficulties. Some students aren’t very motivated. Teachers compete with technology for students’ attention. The list goes on and on.

But overall, I’m an optimistic, glass-half-full gal. I’ve always liked this quote from Churchill: A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.  Optimism is a major reason why my teaching remains fresh, more than 20 years into my academic career. Let’s consider a couple of opportunities others may view as difficulties.

Student motivation. Assigned readings go unread. We ask students to participate in discussions and they sit mutely, avoiding eye contact with the teacher. They don’t take notes well, study enough, follow directions, etc.

I could throw my hands in the air and groan about “students these days.” I could wish my institution admitted better quality students. I could resign myself to another “same old, same old” semester.

I could, but I don’t. Many students are curious. They want to learn. An optimist doesn’t sell students short. Each term is a chance to test a new method, tweak an old strategy, and learn from experience. It begins with examining what’s been tried, evaluating effectiveness, identifying alternatives from the pedagogical literature, gathering advice from skilled teachers, and seeking student feedback. An optimistic teacher is motivated by the opportunity to ignite student interest, recognizing that what worked one term may not work as well with another cohort, so it’s important to develop a set of alternatives.

One strategy I like is partnering with students; shifting some power to them increases motivation. One way to signal partnership is by ceding some control. I also like to ask students about their interests so I can connect them to the content. [Recent followers of this blog might want to read Setting the Table for Learning, Big Questions, and Student Disinterest- Who’s Responsible? for additional examples and insights.]

Of course the devil is in the details. For a solid primer on student motivation, I recommend Pintrich’s (2003) A Motivational Science Perspective on the Role of Student Motivation in Learning and Teaching Contexts, Journal of Educational Psychology 95(4): 667-686. He explores “What do students want?” and “What motivates students in classrooms?” It’s a well-researched and thought-provoking piece.

Technology Distractions. Some teachers respond to the technology distraction issue with a “no-screens” policy. Others incorporate technology in class, putting screens to productive use. Some adopt an “I don’t care” approach, while others must adhere to their institutional policy.

Each new term is an opportunity to explore (or reexamine) students’ relationship to technology, its implications for learning, the ramifications for teaching and the ways it is changing content. One of my favorite articles about technology & teaching is Mishra and Koehler’s (2006), Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6): 1017-1054. The authors use a Venn diagram to consider the paired and triad relationships between/among teaching, learning, content and technology in a thoughtful and practical way. They ask the reader to:

  • Identify examples of technological content in your discipline.
  • Explore the ways your teaching has responded to the creation of new technological content knowledge.
  • Characterize how technology has changed / is changing pedagogy in your discipline.

Answering these questions shifts the focus away from “technology as distraction” and toward technology’s potential to advance learning. That shift leads to more strategic use of technology by teachers and students alike.

Perhaps I’ll reread this post in six weeks and wonder what I was thinking. But that’s the beauty of the academic cycle: each semester presents new opportunities and different students. It’s never the same, and each lasts only fifteen weeks. When optimism fails and difficulties grow so large they block sight of the opportunities, I rely on this variant of the serenity prayer:

God, give me coffee to change the things I can,

and wine to get over the things I can’t.

~ Unknown

