The semester is winding down. Soon we’ll face the sometimes anticipated but often dreaded task of reviewing students’ course feedback. While many schools refer to them as student evaluations of teaching (SETs) or student ratings of teaching effectiveness (SRTEs), they aren’t really meant to be evaluations and shouldn’t be viewed as ratings. The instruments are only one source of feedback. The unfortunate truth is they’re often assigned undeserved weight and meaning.
I’ve spent a few months reading the course feedback literature. Much of it is conflicting, and I’m not confident enough in my understanding to write about best practices yet. So for now, let’s put aside the hot-button issues related to ratings differences and biases across major, gender, student level, discipline, and other concerns.
Instead, I’d like to focus on the questions. Do you have the authority to set or change some or all of the questions on your feedback instrument? If so, when was the last time you did? Do you ask the same questions in different courses? Why or why not? I recently examined my forms, reviewed our university’s question pool, and made significant changes. It was interesting and eye-opening. I highly recommend going through this exercise during a break, not mid-semester (speaking from experience).
Like many schools, our form specifies some questions. Beyond those mandated by the university, college, and program, I have a lot of choices. Here are a few that capture my instructional goals. Rate the instructor’s skill/effectiveness in:
- Relating course material to real life situations
- Making class sessions intellectually stimulating
- Helping students answer their own questions
- Encouraging students to apply concepts to demonstrate understanding
- Emphasizing learning rather than tests and grades
- Guiding students to be more self-directed in their learning
These items reflect instruction involving more than content coverage. I want to develop students as learners while they master accounting or economics.
The questions asked in prior semesters primarily focused on the administrative aspects of instruction such as my availability during office hours, preparation for class, skill in stressing important material, and presentation clarity. These are important, but they present a very narrow interpretation of teaching. So much more should be happening in vibrant, engaging learning spaces; course feedback should reflect that.
Unfortunately, I cannot predict how students will react. I may, as a dear friend suggested, need to sip a good glass of wine when reviewing them. Students, unfamiliar with these questions, may not know how to respond. They may have never thought about the educational process in these terms. This made me realize how the questions can and should be used for more than obtaining feedback at the end of the semester.
Teachers can use them as a framework for scaffolding instruction. Here’s an example to illustrate. On the wall opposite my desk at home is a sheet of paper titled The Far Goal. It lists four questions that drove my scholarly activity over the past several years, a period when I shifted away from economics scholarship toward faculty development, mentoring, and consulting.
- What publishing, writing, and thinking positions me for the far goal?
- What publishing projects will best support writing a book?
- What publishing projects will open more consulting opportunities?
- What publishing projects will continue the quest to be the best teacher I can become?
Each time I sat at my desk, the Far Goals were there, a visible reminder of what I wanted to achieve and a framework for evaluating consulting and research opportunities and projects.
Similarly, course feedback questions can scaffold instructional decisions in many ways: choosing how and when to cede control; selecting formative and summative assessments; balancing student- vs. teacher-selected topics; and making instructional practice decisions, like developing a pool of response strategies aimed at helping students answer questions for themselves.
Teachers can also share their questions with students at the start of the course. They can be part of the syllabus and class discussion. Building on this process may lead to asking students to select items from a pool of choices, thereby sharing ownership, responsibility, and control of learning even more. If you haven’t thought about course evaluation questions or the syllabus in this way before, please consider reading Learner Centered Syllabi for details about my syllabi and beliefs about sharing control with students. Reservations & Setting the Table for Learning considers how tone and underlying messages can detract from learning even before the course starts.
I also plan to reframe the questions as Instructional Goals to be prominently posted in my school office. The Instructional Goals will orient my choices and remind me of my teaching/learning priorities before I head to the classroom each day.
Does your course evaluation incorporate particularly good questions? Are you using them in unexpected ways? Please share!
We’ll examine the results and their implications in January. Until then, thank you for sharing your ideas and feedback in the comments and through email this year. I appreciate learning from and with you. Happy Holidays!
I’m currently reading Duffy & Jones (1995) Teaching Within the Rhythms of the Semester. I just finished the chapter titled, The Interim Weeks: Beating the Doldrums. Quite a coincidence! The reading matches my reality. Dictionary.com defines doldrums as “a dull, listless, depressed mood; low spirits.” It’s an apt description of post-Thanksgiving break letdown. Are you experiencing it too? Are your students?
Duffy & Jones describe the doldrums’ implications for learning this way:
“During a class period, professors and students interact intellectually and emotionally; they rely on each other to move the course forward. From an emotional point of view, the response of students can influence a professor both positively or negatively. Students who are attentive and focused in class provide the necessary spark for a professor who is feeling tired or discouraged, and a student with a good sense of humor can shift the mood of a class, redirecting potentially negative feelings into positive interactions… In contrast, the apathy of inattentive students is one of the greatest challenges. The ways in which professors acknowledge the doldrums and the techniques that they use to cope with them are critical, for it is the professors’ responses that will determine whether the semester ends with a bang or a whimper” (pp. 162-163).
What happens when teachers overlook or disregard the significance of the emotional ebbs during the semester? Student energy wanes and class preparation falls. Unprepared students can’t engage effectively, resulting in weak class discussions. Flipped instruction becomes less effective when students aren’t motivated or prepared. A common instructional response is to resort to lecturing, which may further decrease students’ energy and motivation.
Some teachers may feel powerless to reenergize the class. Others may not believe it’s their responsibility. A few may blame “students these days.” Absences increase and for some teachers that can lead to resentment and frustration.
What can teachers do to minimize the doldrums?
Reconsider WHAT & WHEN. Some faculty teach material in the order it’s presented in the textbook. That may not be the most pedagogically effective sequence in light of the doldrums. If the most challenging course content coincides with a period of listlessness, learning will be negatively impacted. To address systemic doldrums, like the period after a break, course planning should be designed with a focus on these questions:
- What topics are typically most interesting, from the students’ perspective?
- When should the most and least interesting material be taught and learned?
Save fascinating topics for the emotional low points of the semester. Front-load challenging material whenever possible to take advantage of the enthusiasm that exists at the start of a new term.
Reconsider HOW. If altering the content or sequence is impractical or pedagogically unsound, consider introducing a fresh approach when the doldrums occur. Novelty is a powerful force in regaining students’ interest, attention and focus. Do something unexpected. Introduce alternative media, allow different assignment formats or integrate some student choice. Bring in props. Incorporate humor. Integrate activities that encourage interaction and collaboration. Anything that breaks the routine can be the spark that moves learning forward.
The National Oceanic and Atmospheric Administration (NOAA) provides a slightly different and visually apt definition of the doldrums: a nautical term referring to a belt around the Earth, near the equator, where sailing ships sometimes get stuck on windless waters (this can endure for weeks!). Without action, the doldrums can persist in the classroom as well. Thus, teachers bear some responsibility for restoring progress. To do so, teachers should acknowledge the doldrums explicitly with their classes. It’s important to explain the reasoning behind the strategies we use to reinvigorate learning. And we should ask students about their effectiveness so that the doldrums are temporary lulls, not permanent fixtures along the learning journey.
What strategies do you employ to lift your students’ sails during the doldrums? Please share in the comments.
Duffy, D.K. & Jones, J.W. (1995). Teaching Within the Rhythms of the Semester. San Francisco: Jossey-Bass.
My recent Epic Fail led to lots of reading in some unfamiliar areas of the teaching and learning literature. You may be familiar with the work of John Biggs; I was not. His 1985 article, “The Role of Metalearning in Study Processes” provides a lengthy survey teachers can use to explore how students approach learning. Specifically, the paper considers how motivation, locus of control, and students’ experiences in and out of school affect their attitudes toward learning and use of surface- or deep-learning strategies.
The 43-item questionnaire is a bit unwieldy; fortunately, Biggs, Kember & Leung (2001) developed a more focused instrument in their paper, “The revised two-factor Study Process Questionnaire: R-SPQ-2F.” Students evaluate a list of twenty statistically reliable statements using a 5-point Likert scale (“never or rarely true of me” = 1; “always or almost always true of me” = 5).
In light of my epic fail, five of the items piqued my interest:
- I only study seriously what’s given out in class or in the course outlines.
- I find I can get by in most assessments by memorising key sections rather than trying to understand them.
- I generally restrict my study to what is specifically set as I think it is unnecessary to do anything extra.
- I find it is not helpful to study topics in depth. It confuses and wastes time, when all you need is a passing acquaintance with topics.
- I find the best way to pass examinations is to try to remember answers to likely questions. (p.148)
My observations of student behavior and recent comments on quiz wrappers reflect agreement with these statements, all or most of the time. Essentially, the feedback suggests students are using a lot of surface learning strategies.
Relief! My students are surface learners!
Maybe I was too hard on myself?
“Please note that the R-SPQ-2F is designed to reflect students’ approaches in their current teaching context, so it is an instrument to evaluate teaching (my emphasis) rather than one that characterizes students as “surface learners” or “deep learners”. The earlier instrument has been used also to label students (he is a surface learner and she is a deep learner) but I now think that is inappropriate. I have had a lot of correspondence from researchers who want to use the instrument for labeling students, that is as an independent variable, but it should not be so used; it provides a set of independent variables that may be used for assessing teaching (my emphasis); ” [http://www.johnbiggs.com.au/academic/students-approaches-to-learning/ Accessed: 11/6/17]
If I were looking for a bit of vindication in the literature, it isn’t here.
But wait, there’s more:
“A particularly depressing finding is that most students in most undergraduate courses become increasingly surface and decreasingly deep in their orientation to learning…One might call it the ‘institutionalisation’ of learning, whereby students tend to pick up the tricks that get you by…” (Biggs, Kember, & Leung, p.138).
The takeaway: Teacher and students share responsibility. Surface strategies are used because they work. Students will be forced to change if the expectations, work and deliverables of learning require drilling down instead of scratching the surface.
Here are a few of the authors’ suggestions for appropriate uses of the surveys:
- Monitoring day-to-day teaching; conducting action research; or structuring long-term pedagogical research.
- Diagnosing study problems: comparing individuals’ deep and surface scores to others in the same cohort.
- Examining the relationship of approaches to learning with other curriculum variables with a view to fine tuning curricula.
Before I gathered data and dug into the literature, I was firmly convinced the assessments in my course require more than surface strategies. Now I’m not so sure. If students are unaccustomed to deeper learning, how can I help them develop or expand their learning strategy repertoire? What can I do to move our expectations and beliefs about learning closer together?
These issues connect closely with previous posts: students’ learning misperceptions, passive study strategies, and the divergent views teachers and students have about what makes a course “hard.” I strive to grow and improve as a teacher, but the introductory courses I teach are just one touch point in students’ academic journey. Unless we coordinate across courses and programs, ad hoc efforts merely scratch the surface of students’ learning potential.
Biggs, J.B., Kember, D., Leung, D.Y.P. (2001). The revised two-factor Study Process Questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71: 133-149.
Biggs, J.B. (1985). The Role of Metalearning in Study Processes. British Journal of Educational Psychology, 55:185-212.
John Biggs’ website provides the article and survey instrument: http://www.johnbiggs.com.au/academic/students-approaches-to-learning/