Blog Archives

Goal Shift: Starting with the End in Mind

Over the years I’ve made unsystematic changes to the items on the end-of-course evaluation form. Typically the changes were hastily considered and requested right at the deadline. My motivation was a desire to capture feedback on the instructional strategies I used that term. Note the key words here: “hasty” and “unsystematic.”

I needed to be more intentional. Thus began an experiment last semester. Instead of waiting until near the end of the course to think about the evaluation criteria, I chose the questions before the semester started. As I prepared to teach, and throughout the term, I made pedagogical decisions with these course evaluation criteria in mind:

  • Relating course material to real life situations
  • Making class sessions intellectually stimulating
  • Helping students answer their own questions
  • Encouraging students to apply concepts to demonstrate understanding
  • Emphasizing learning rather than tests and grades
  • Guiding students to be more self-directed in their learning

See “Feedback Questions as Course Scaffold” for additional background.

Student-created Practice Problems: I created several blank templates that allow students to set variable amounts and prepare the related analyses and entries. Sometimes the template served as an in-class learning activity. In other cases, I offered a little extra credit. While learning accounting is the official goal of these templates (accounting colleagues, contact me if you’d like to discuss or obtain copies), they accomplish so much more. Students engaged in an active study strategy instead of “looking over” notes. Peer evaluation fostered collaboration and community. Prepare-pair-share led to discipline-based interaction as students discussed the variables, solved each other’s problems, and corrected mistakes.

Suggested Study Timeline: For the first exam, we mapped out a suggested study timeline during class. Then, over the following five days, I posted encouraging (and humorous) reminders as announcements in the LMS. Later in the term, students remarked how helpful that was. They also asked if I would map it out for them again. My response: Now that we’ve developed a study plan together in class, will you create a plan of your own? Will you use this strategy in other classes? While it’s tempting to “just do it” with or for them before each test, students sometimes need the teacher to step back and not fill the gap.

Textbook Reading Notes: One of the homework assignments (toward the middle of the term) asked students to take notes on the chapter reading. This led them to a wondrous discovery: class time makes so much more sense when you read the chapter beforehand! Students got so much from this homework that they suggested it be routinely assigned. My response: Now that you’ve seen how beneficial it is to read before class, will you continue to do it, even if it’s not for credit? Will you do it for the sake of learning?

During mid-semester informal feedback, some students acknowledged they could be doing more to own their learning. That is a distinct shift from prior semesters. Instead of looking primarily to the teacher to teach, students clearly recognized their own contributions to the class’s collective learning, as well as the kinds of activities and effort that learning requires. This closely aligns with USC’s recent initiatives in the area of student evaluations of teaching:

Umbrella questions such as, “How would you rate your professor?” and “How would you rate this course?” — which Clark called “popularity contest” questions — are now out. In are questions on course design, course impact and instructional, inclusive and assessment practices. Did the assignments make sense? Do students feel they learned something? Students also are now asked about what they brought to a course. How many hours did they spend on coursework outside of class? How many times did they contact the professor? What study strategies did they use? While such questions help professors gauge how their students learn, Clark said, they also signal to students that “your learning in this class depends as much on your input as your professor’s work.” [emphasis added] [Source: Teaching Eval Shake-Up, Inside Higher Ed, May 22, 2018]

When asked to describe what the instructor did to facilitate learning, one student put it this way: “Dr. Paff had a great method of having students read the chapter and use the screencasts that she prepared to grasp some fundamental concept before class…. This style of class is effective because it drives students learning by themselves and rewards students for being good students.”

With this framework, I spent less time picking out homework problems from the textbook (a standard practice in accounting) and more time devising strategies that help students answer their own questions, promote self-directed learning, and make class time intellectually stimulating. Students still learned accounting, but this time they learned about themselves as learners too.

Good Intentions & My “Epic Fail”

I recently wrote about a cheat sheet and quiz wrapper strategy. Students are allowed to prepare one side of a 3×5 index card to be used during a quiz that’s worth 10% of the course grade.

Good Intentions

I intentionally refer to the assessment as a quiz, not an exam. I stopped calling it a test when I sensed that the names “exam” and “test” increased anxiety in unproductive ways. I also wanted to make a clear distinction on the syllabus: learning in this course is assessed in a variety of ways, including homework, classwork, essay exams, a multiple-choice quiz, a paper, and a project.

The multiple-choice format is also an intentional decision. The quiz assesses students’ ability to perform computations, interpret the results and related graphs, and understand the implications of a specific set of characteristics. In other words, it assesses learning at the lower levels of Bloom’s taxonomy. Understanding the concepts and preparing the computations form the foundation for more advanced thinking and analysis in the second part of the unit. Unlike the essay exams, where I want students drawing graphs and explaining in writing, the mechanical basics addressed in this part of the course can be assessed conveniently with bubble sheets.

I introduced the cheat-sheet index card policy a couple of years ago as a way to reduce stress during the quiz and promote active study strategies. The goal was to get students thinking about the material earlier and differently, preparing more effectively, and performing better on the quiz.

Unfortunately, the strategy is,

as my kids would say,

an Epic Fail.

Calling it a quiz has not reduced anxiety. Because this is the first and only multiple-choice assessment, students don’t know what to expect. Lack of familiarity increases apprehension, regardless of what I call the assessment.

To give students an idea of what to expect, I provide a “practice quiz.” Perhaps it should be called “sample questions.” The practice quiz has the unintended consequence of limiting the scope of material studied. At least one student noted they didn’t study anything that wasn’t in the practice quiz.

Calling this assessment a quiz may actually produce more harm than good. “Quiz” may be less stressful than “test” or “exam.” But some stress is good. An unintended consequence of the name change: students may study less when it’s “only” a quiz.

Because quizzes may be seen as less important than exams or tests, some students may conclude the “cheat sheet” notecard is unnecessary; some were insufficiently motivated to prepare one. In one case, a student noted they forgot about the quiz. Another prepared a card but neglected to bring it. Overall, about 10% of the class didn’t have one.

To be effective in promoting learning and improving scores, the card needs to be prepared in advance. Unfortunately, I noticed some students writing on theirs in the few minutes before the quiz. Others turned in cards that were incomplete or disorganized. I haven’t been able to analyze the data yet, but the early signs are clear: many students aren’t preparing the cards or studying for the quiz as I intended.

From the student perspective, the purpose of a cheat-sheet card is to improve their quiz score. Unfortunately, anecdotal evidence (which is all I have at this point) doesn’t bear that out. Grades on the quizzes aren’t much, if at all, higher than before I allowed the cards.

That’s at least partly due to the observations above. I tried to convince students that the act of preparing the card promotes learning: the value of the card during the quiz is directly related to the quality of the time and effort that went into preparing it. The message didn’t get through. Worse, feedback suggests students may have shifted from thinking about how concepts interrelate toward simply copying basic definitions onto the card.

Readers of this blog may be surprised (disappointed?) by this post’s focus on an ineffective strategy. But there is much you and I can learn from epic fails. Here are two quick takeaways:

  1. Wrappers and other mid-semester feedback are vital to understanding our students. I’m gaining valuable insight from the honest admissions about study time and the strategies used. A lot of it depressed me (more about that soon). But I can’t improve instruction or enhance learning if I’m unaware of where my students are as learners.
  2. We learn a lot from mistakes. Be brave. Pick one instructional strategy and critically examine its intended and unintended consequences. What are your assumptions? What are your intentions? What evidence can you gather to test how well they are or aren’t being met?

I’d appreciate learning from your “epic fail.” Please share what you learned from a strategy or policy that didn’t work. Let’s learn from and with each other.

What makes a course hard?

The Feb 2nd post (Easy A?) ended with a series of questions about grades, learning, and instructional strategies. I fully intended to begin addressing them here. But as I dug into the literature, I realized other issues needed exploring first.

When students report my courses are “hard,” my first instinct is to write the comments off as whining. Then I look at grade distributions and review the number and type of assessments to try to discredit the feedback. I usually succeed, but nagging questions remain. What do students mean when they say my course is hard? What if our definitions are different? Does it matter?

What makes a course hard? Draeger, del Prado Hill, and Mahler (2015) find that “faculty perceived learning to be most rigorous when students are actively learning meaningful content with higher-order thinking at the appropriate level of expectation within a given context” (p. 216). Interactive, collaborative, engaging, synthesizing, interpreting, predicting, and increasing levels of challenge are a small sample of the ways faculty describe rigor. In contrast, “students explained academic rigor in terms of workload, grading standards, level of difficulty, level of interest, and perceived relevance to future goals” (p. 215) and saw course quality as “a function of their ability to meet reasonable faculty expectations rather than as a function of mastery of learning outcomes” (p. 216). Their findings are consistent with previous research, match my views of what makes a course challenging, and reflect the comments my students made.

It’s clear.

We are not on the same page about what makes a course rigorous.

Does it matter? I think it does, for two reasons. First, if you’re concerned about course evaluations, the scores will be lower when students’ and teachers’ definitions and aims are not aligned. Second, and beyond the ratings, the mismatched definitions, expectations, and criteria have significant implications for learning. Consider this analogy.

Monique wants to lose weight. She plans to eat fewer calories and exercise more, and she hires a personal trainer to set up a cardio program. Monique isn’t very knowledgeable about weight-loss physiology; she thinks less food and more cardio are all she needs. And for the short term, she has a point. Thus, she’s surprised when the trainer starts the session with ten minutes of cardio and then tells her to head over to the weight machines. Monique, despite her limited background in exercise science, says she’s only interested in cardio: treadmill, elliptical, climber, and spinning. The trainer persists, and Monique begrudgingly complies. But Monique’s enthusiasm for the program is diminished, and she leaves without knowing why weight training is a hard but necessary component of the trainer’s plan.

Many students are like Monique. She’s paid good money for the trainer’s services. She knows she’s going to sweat on the cardio machines. She’s willing to work. But her expectations and understanding of exercise are incomplete. Because of this, she may not realize the trainer’s program will do more than cardio alone to help achieve her goals in the short and long term. Or she might understand what the trainer is trying to help her achieve but care only about the short-term fix. She may not have the time (or may not value time at the gym enough) to devote an hour when 20 minutes of cardio would seem to be enough, at least for now. Monique’s goals and understanding of the process do not match the trainer’s.

Similarly, many teachers are like the trainer. The trainer assumes Monique will accept, on faith, that she has her client’s best interests in mind. The trainer believes she knows what’s best for her client. She assumes Monique knows what a comprehensive exercise program looks like, so she doesn’t take time to explain why weight training is necessary. Notice that the story describes the trainer’s plan, not a plan they developed together. Notice this too: the trainer is thinking like an expert, forgetting that novices see and approach things very differently.

As long as the trainer/trainee and teacher/student hold different definitions and expectations, the working relationship will produce less than optimal results and “satisfaction surveys” will reflect the mismatched priorities.

What can we do about it? Martin et al. (2008) investigate students’ perceptions of hard and easy courses across engineering programs. Two of their strategies have broad application.

  • Consider student characteristics. Differences in semester standing, level of academic preparation, in-major versus general education status, and student major all affect perceptions of course difficulty. The more we know our students, the better equipped we are to determine where they are in the learning maturation process. “The key is determining what an appropriate challenge is for a course and for a particular group of students. The more an instructor interacts with students, the more likely the instructor is to notice the overwhelmed or bored students” (p. 112).
  • Emphasize content connections. Applicability of content is an important filter students use to gauge course rigor. “Real” and “relevant” are the levers that push students to work harder and longer. Content needs to matter to students personally or professionally, and teachers need to keep that in mind.

The more I read and think about what makes a course “hard,” the more it feels like we’re trying to nail jello to the wall. When we meet the needs of some students, the rest may feel squished. It may not be possible to get it right all the time, for every student. But I do believe, and the research on learning bears this out, that there is value in initiating conversations with students about learning. We can’t dispel misperceptions we’re unaware of. The goal of the conversations isn’t to negotiate a watered-down course, easier grading, or lowered expectations. It’s to give students a voice and share ownership so that learning becomes more than a series of assignments reflecting only the teacher’s goals.

References:

Draeger, J., del Prado Hill, P., & Mahler, R. (2015). Developing a Student Conception of Academic Rigor. Innovative Higher Education, 40, 215-228.

Martin, J. H., Hands, K. B., Lancaster, S. M., Trytten, D. A., & Murphy, T. J. (2008). Hard But Not Too Hard: Challenging Courses and Engineering Students. College Teaching, 56(2), 107-113.