Student Evaluations of Teaching I: The Good, The Bad, and The Ugly
TAPP Radio Episode 84
Episode | Quick Take
Student evaluations of teaching (SETs) are problematic in many ways—but perhaps useful in other ways. Host Kevin Patton discusses the good, the bad, and the ugly. What are the issues and what’s behind those issues?
- 00:47 | Student Evaluation of Teaching (intro)
- 02:28 | Share the Fun: Refer & Earn
- 05:37 | The Good
- 08:39 | Sponsored by AAA
- 10:12 | The Bad
- 26:10 | Sponsored by HAPI
- 28:13 | The Ugly
- 44:15 | Sponsored by HAPS
- 45:26 | Staying Connected
Episode | Listen Now
Episode | Show Notes
Good teaching cannot be reduced to technique; good teaching comes from the identity and integrity of the teacher. (Parker Palmer)
Student Evaluation of Teaching (intro)
A brief intro to this discussion of student evaluation of teaching. This is the first of two planned episodes on this subject.
Share the Fun: Refer & Earn
You can earn cash rewards—up to $25 for referring other A&P faculty, teaching assistants, and grad students to this podcast. Just go to theAPprofessor.org/refer to get your personal referral URL.
Student Evaluation of Teaching: The Good
There is useful, actionable information that can be obtained from valid and fair student evaluations of teaching. When they work.
Sponsored by AAA
A searchable transcript for this episode, as well as the captioned audiogram of this episode, are sponsored by the American Association for Anatomy (AAA) at anatomy.org.
Don’t forget—HAPS members get a deep discount on AAA membership!
Student Evaluation of Teaching: The Bad
A lot can go wrong with student evaluations of teaching. In this segment, Kevin uses a recent research article demonstrating the unfairness of even valid evaluations as a launching point for discussion.
- Unbiased, reliable, and valid student evaluations can still be unfair (journal article) my-ap.us/38baMg3
- Even ‘Valid’ Student Evaluations Are ‘Unfair’ (online article) my-ap.us/34eyAhG
- Actual Learning vs. Feeling of Learning | Journal Club Episode | TAPP 83 (previous episode mentioned in this discussion)
Sponsored by HAPI Online Graduate Program
The Master of Science in Human Anatomy & Physiology Instruction—the MS-HAPI—is a graduate program for A&P teachers, especially for those who already have a graduate/professional degree. A combination of science courses (enough to qualify you to teach at the college level) and courses in contemporary instructional practice, this program helps you be your best in both on-campus and remote teaching. Kevin Patton is a faculty member in this program. Check it out!
Student Evaluation of Teaching: The Ugly
Kevin turns his attention to a few of the potential ugly issues concerning student evaluations of faculty.
- The 20 Meanest Teacher Evaluations of All Time (an informal list of anecdotes) my-ap.us/3r6WANE
- Prof Evaluations PART 3 – The Ugly | Evaluations can bring out the least attractive aspects of human nature (online essay) my-ap.us/3p9QFFW
- Teaching Evals: Bias and Tenure (online essay) my-ap.us/3asoH43
- The Frequency of “Brilliant” and “Genius” in Teaching Evaluations Predicts the Representation of Women and African Americans across Fields (research article on bias in online professor-evaluation sites) my-ap.us/3h1r9jc
Sponsored by HAPS
The Human Anatomy & Physiology Society (HAPS) is a sponsor of this podcast. You can show your appreciation for their support by clicking the link below and checking out the many resources and benefits found there. Watch for virtual town hall meetings and upcoming regional meetings!
Need help accessing resources locked behind a paywall?
Check out this advice from Episode 32 to get what you need!
Episode | Transcript
The A&P Professor podcast (TAPP radio) episodes are made for listening, not reading. This transcript is provided for your convenience, but hey, it’s just not possible to capture the emphasis and dramatic delivery of the audio version. Or the cool theme music. Or laughs and snorts. And because it’s generated by a combo of machine and human transcription, it may not be exactly right. So I strongly recommend listening by clicking the audio player provided.
This searchable transcript is supported by the
American Association for Anatomy.
I'm a member—maybe you should be one, too!
In his book, The Courage to Teach, Parker Palmer wrote, “Good teaching cannot be reduced to technique. Good teaching comes from the identity and integrity of the teacher.”
Welcome to The A&P Professor, a few minutes to focus on teaching Human Anatomy & Physiology with a veteran educator and teaching mentor, your host, Kevin Patton.
In this episode, I discuss the good, the bad and the ugly of student evaluations of teaching.
Student Evaluation of Teaching (intro)
Student evaluations of teaching. What goes through your mind when you hear that phrase? Yeah, I know, all kinds of things go through my mind when I think about student evaluations of teaching, good, bad, and ugly. So that’s what I’m going to talk about right now. Think of this as a starter discussion, because there’s such a big set of things to discuss that I can’t possibly hit all of them in this episode. So I’ll make a start now, then I’ll continue the discussion with a sequel in the next episode. Maybe I’ll call it Return to the Planet of Student Evaluations of Teaching. Nah, that doesn’t work.
I’m clearly going to have to work on a better title. And then after that next episode, who knows, maybe I’ll return to the topic again sometime down the road. Of course, you’re always invited to call in to the podcast hotline and give your take on things, or send an email. Besides insights and tips, or perhaps a mini editorial, you might consider sharing a story or two, good or bad, with or without sound effects, about your experiences with student evaluations of teaching. In this episode, I’m going to start with a few ideas about the good, the bad, and the ugly of student evaluations of teaching.
Share the Fun: Refer & Earn
Before we get started, I want to chat for a moment about something important. If you listen to a lot of podcasts or read blogs or both, you’ve probably run across those “buy me a coffee” buttons, where you click the link and sign up to give a $5 donation, maybe on a regular basis, to help support the creator of that podcast or blog or whatever. I bring this up because I want to offer you a kind of reversal of that scenario. Even though I do have expenses in producing this podcast, I’d rather have more listeners. So I’m going to reverse the buy-me-a-coffee thing by offering to buy you a coffee. Or since I’m being contrarian about this, I’ll buy you a tea or a chocolate rather than a coffee…
I tell you what, I’ll just give you cash, okay? So go ahead and get that coffee or tea or chocolate or whatever on me, if you find new listeners for this podcast. Here’s how it works: go to theAPprofessor.org/refer and get your referral URL. Copy that personal link and share it with other anatomy and physiology faculty. Maybe include a sentence or two telling them something about what you get out of listening to this podcast. Then when your friend clicks your link, they’ll be introduced to The A&P Professor podcast and can subscribe in the platform of their choice. Once you get two or more new subscribers, you’ll automatically receive $5 in cash for tea or chocolate or coffee. Once you rack up 10 or more subscribers, you’ll automatically receive $25 cash. Not kidding, up to $25 cash, simply for sending a few emails or tweets or posts or DMs to other A&P faculty, inviting them to listen to a podcast that you know they might enjoy, or at least get some benefit from if they don’t exactly enjoy it.
They benefit, you benefit, the cafe benefits, where you buy your tea, chocolate or coffee. It’s win-win-win right? I know you were planning to send me a big holiday gift this year, but I’d much rather do this for you instead. Just go to theAPprofessor.org/refer, get your personal URL and send it to a few friends and then sit back and wait for the cash to roll in. Once again, that’s theAPprofessor.org/refer.
Student Evaluation of Teaching: The Good
Let’s start with the good. Yeah, there are lots of good things about student evaluations of teaching when they work, when they’re both valid and fair. Sometimes we find things that we’ve been doing forever, things everybody else does in their courses all the time, things we thought were working, but they’re not really; maybe they’re actually harming learning in some way. Or we might find out that we’re exhibiting behaviors that are impeding learning in some way.
Like facing the whiteboard while talking, or using jargon that students aren’t familiar with yet, or using cultural references that students just don’t get. Or we might have forgotten to be transparent and explain why we’ve constructed our course the way we have, and students have then made the wrong assumptions. For example, we may find out that a significant number of students incorrectly concluded that having small groups each teach the whole class a different concept was simply a way for the instructor to get out of some work. Now, clearly that’s not why we do that. We do it because it enhances learning. But from a student perspective, if they don’t know that’s why we’re doing it, they may see it as a way for the teacher to take a day off, or maybe a whole week or two off.
Sometimes student evaluations confirm whether new experimental strategies we’re trying out are really working or not. I often describe myself as an experimental teacher. That means that I love to try new things that seem promising, perhaps something I learned from a colleague or something I read about in HAPS Educator or the Anatomical Sciences Education journal. If they’re working, and I’ll find that out by using student evaluations, then we’ll continue to use them, right? And if they’re not working, or they’re creating some kind of obstacle, then we know we need to tweak them or perhaps reject them entirely. We might even get enough information to know exactly what to fix and how. I consider it to be a good result of student evaluations when I find things I’m doing wrong or could be doing better.
That’s what the whole exercise is about, right? It gives me information to work with, to help me be a better educator and build a better course. That’s the good side of student evaluations of teaching.
Sponsored by AAA
Back in the olden days, I got the impression that the American Association for Anatomy (AAA) was an organization of, and for, people actively doing research in anatomy. That their conferences were research conferences. That their journals were all research journals. So I kind of stayed away, because I had stopped doing biological research and had become focused on teaching. But you know what? AAA has evolved.
Yeah, they have a fresh, updated name and a fresh, updated logo. And most importantly, a fresh, updated mission that reflects their growing interest in and support of teaching. Not only graduate and clinical teaching, but undergraduate teaching too. They have a teaching track at their conferences, an awesome teaching journal called Anatomical Sciences Education, and, well, all kinds of other stuff. Evidence of their interest in A&P teaching is their support of this podcast by sponsoring the searchable transcript and captioned audiogram of this and every episode. Why not take a quick moment to check out AAA, the American Association for Anatomy, at anatomy.org?
Student Evaluation of Teaching: The Bad
And then there’s the bad side of student evaluations of teaching, when they don’t work. Now, a couple of things have gotten me thinking about student evaluations recently, which is why I’m talking about them in this and the next episode. One is, I just got a batch of student evaluations back from a course, and unfortunately there weren’t enough responses to really draw any kind of conclusions. That’s always disappointing when it happens. And around the same time, I ran across a recent journal article that really got me thinking more deeply about student evaluations. The article is entitled Unbiased, Reliable, and Valid Student Evaluations Can Still Be Unfair. It’s written by Justin Esarey and Natalie Valdes. And it demonstrates some important things that don’t surprise me, nor do I suspect that they’re going to surprise you.
But before I lay out those results, I want to mention that the authors used a computational simulation that started by assuming the most optimistic conditions for student evaluations of teaching, or SETs as they call them. SET, for student evaluation of teaching. They assumed that SETs are moderately correlated with teaching quality, which they describe as student learning and instructional best practices. And they assumed that SETs are highly reliable and that they don’t discriminate on any instructionally irrelevant basis. In other words, they looked at an ideal situation that, in my opinion, is far more generous than reality in many, if not most, cases. But what did their computational simulation and their analysis find?
Well, they found that a large difference in SET scores fails to reliably identify the best teacher in a pairwise comparison. In other words, if you put two faculty members side by side and you see a big difference in SET score, you would assume that you could tell which of those two is the better teacher. And they showed you can’t do that. It doesn’t work. Something else they found is that more than a quarter of faculty with evaluations at or below the 20th percentile in their SET score are above the median in instructional quality. So again, that’s an indication that the SET isn’t really showing what we think it’s showing. They attributed these problems to imprecision in the relationship between SETs and instructor quality, even when the two are moderately correlated.
What that means is that even when there is a decent correlation between evaluations and instructor quality in general, that correlation isn’t precise enough to say that this score for this instructor means exactly this. Which is kind of the problem with statistics in general, right? They identify trends and groupings, which is great for getting a ballpark idea of something, but they may not be precise enough to apply to every individual in a group. So what can we do if student evaluations are generally reliable, but potentially unfair when applied to individual faculty? The authors suggest that using multiple imperfect measures, including student evaluations of teaching but not limited to SETs alone, might be more fair and more useful than using student evaluations by themselves.
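The flavor of that imprecision argument can be sketched with a quick simulation. To be clear, this is a hypothetical back-of-the-envelope model, not the Esarey and Valdes computation itself: the correlation value, the noise model, and the “large difference” threshold here are all made-up illustration values.

```python
import random

random.seed(1)  # reproducible illustration

def simulate(pairs=10_000, r=0.4, gap=1.0):
    """Hypothetical sketch (not the article's model): how often does a
    'large' SET-score gap point at the *worse* teacher in a pair?"""
    wrong = big_gap = 0
    for _ in range(pairs):
        # True teaching quality for two instructors
        qa, qb = random.gauss(0, 1), random.gauss(0, 1)
        # SET score = moderately correlated signal (r) plus noise
        sa = r * qa + (1 - r**2) ** 0.5 * random.gauss(0, 1)
        sb = r * qb + (1 - r**2) ** 0.5 * random.gauss(0, 1)
        if abs(sa - sb) > gap:              # a "large difference" in SETs
            big_gap += 1
            if (sa > sb) != (qa > qb):      # SET winner isn't the better teacher
                wrong += 1
    return wrong / big_gap

print(f"{simulate():.0%} of large SET gaps pick the wrong teacher")
```

The exact percentage depends on the assumed correlation and threshold, so don’t read the printed number as the paper’s result; the point is only that it stays well above zero even under generous assumptions, which is why the authors warn against pairwise comparisons.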
And that makes sense, right? I think we all believe that, or we all would like to see that. Now, I don’t think I’m the only one who has seen student evaluations used as the sole or primary assessment of teacher quality. And I’m going to be completely open with you: I’ve done that myself. I’ve looked at those evaluations and let them take on more precision than they’re capable of. And that leads me to give them more importance than they deserve. I think academics are generally very trusting of measurements, especially the kinds of measurements that have been around so long and have been so widely trusted for generations. So we just kind of go there, even though I’ve grown to realize we probably shouldn’t.
Yeah, sometimes they’re accurate. In general, a really good evaluation tool under ideal conditions is probably accurate a lot of the time; in general, on average, shooting from the hip, they’re accurate. But by golly, each of us faculty is an individual, and many of us individual faculty, including you and me, care passionately about our teaching. So we want some precision when measuring how well we’re doing, because that’s how we grow. But we want it to be fair also, because we want others, including future students and our supervisors, to judge us accurately. So even when conditions are ideal, I think we can say that student evaluations cannot be relied upon to give an accurate picture of any one faculty member, at least not when used by themselves.
Okay, so that’s when student evaluations of teaching are moderately reliable. What about student evaluations of teaching that are not very reliable? For example, I can’t be the only one who keeps running into badly constructed survey items. The better I’ve gotten at making well-constructed, clear, and reliable test items in my courses, the more I recognize how many badly constructed, unclear, and therefore unreliable items show up in the student evaluations given to my students, even those made by companies or organizations or consultants that claim their instruments are awesome. Yeah, of course nothing is perfect, but by golly, shouldn’t we still strive for perfection? Especially considering what’s at stake here.
Not only keeping or losing our position, or gaining tenure or not, or getting that promotion or not, or whatever, but possibly more importantly, the quality of learning among our students. Part of the problem, I think, is that most of the survey tools we’re asked to use are one-size-fits-all, and our courses, even those that follow a department or course template, are not all exactly the same form and fit. I happened to be looking at the results of my latest student evaluation as I prepared the outline for this discussion, and besides getting such a low number of responses that the results are pretty much meaningless, there are items on the survey that ask about the classroom environment. But I don’t have a classroom; it’s a completely online course.
Yeah, sometimes students can think of my course as having an online or virtual classroom, but I can’t rely on that interpretation if it’s not stated clearly in the survey. I don’t know what the students are thinking when they try to answer that question. Not only that, there are some things that simply don’t apply to a virtual classroom and therefore should be left blank or answered with “does not apply.” But for these, I got a range of answers. So here’s an item that students shouldn’t even be answering, but they’re answering it, and the answers are all over the place. Well, no wonder, because it’s not really a valid question for my course. And yet it still goes into my total score, and that’s compared with the rest of my department, some of whom are teaching in classrooms. So that part’s broken, at least.
And if I spend time to really look at my results, I can tell that. But I don’t know that the people looking at my results are going to spend that time and really think about what my course is like when they’re interpreting them. A perennial issue for me is giving surveys designed for lecture classes to students in a lab course. With the rise of active learning, sometimes these disparities also distinguish one lecture section or course from another. And yet students are going to answer these items without us being clear about how they might be interpreting the ones that don’t really apply to our courses. And yes, those scores get tallied into our combined overall results and compared to department and institutional averages.
Another issue, one that we discussed in the previous episode, that is Episode 83: we learned that students in general are not good at assessing their own learning. That is, their feeling of learning after a learning experience doesn’t always match their actual learning. I know that for me, when I’m in, and just after, a particularly difficult course with a particularly demanding instructor, I don’t have a lot of good things to say about that course as a student in it. But I have plenty of complaints. Maybe I think there are too many assignments, assignments that repeat content from earlier courses or earlier modules. Maybe I’ll complain that there are too many quizzes and tests. Maybe I’ll complain that the test questions are difficult. Of course, now that I know a bit more about how learning works most effectively, I realize it’s all those things that make a course a good one.
All those things that I usually complained about, and did probably complain about when I was an undergraduate student. And it’s those things that not only make that course a good course; it’s those things that make an instructor a good instructor. And when I look back at those really difficult courses, the ones I probably complained the most about, those were certainly some of my best courses. Those are the ones that keep coming back to mind when I need that information or that concept or that insight or that approach. So I’m sure I was unreasonably harsh in doing some of the student evaluations of those courses. And if I was overly harsh, wouldn’t that do the opposite of their intent, by pushing back against the very things that made those courses effective?
My point here is that I think we’re not being realistic when we think that student evaluators of our courses are giving informed and unbiased opinions in their student evaluation responses. And the more we demand of our students, the worse it could potentially be for us when it comes time for student evaluations. Related to students not being good at assessing their own learning in a course: I think even if they are fairly good at assessing their own learning, many, perhaps most, students are also not competent in giving evaluations. So those are two different things. One is, are they competent at really understanding how much they’re learning in a course? And then there’s this other thing: are they really competent in how to give an evaluation properly?
I don’t think they really understand the purpose of these evaluations. They don’t know enough about evidence-based teaching practices to evaluate a course. They know little or nothing about course or curriculum design. They don’t know what’s coming up in their next courses, or what they really need to know for their professional program or professional career. Even if they were competent in all of that, which would be an unreasonable expectation in my opinion, they’re not likely to be competent in evaluating all that, nor in reporting on that evaluation in a meaningful, helpful manner. Yet another issue with student evaluations is the number of students sampled. As scientists, you and I know that reliable data often depend on having a large n, a large number of samples. If I have a large number of students, but a small number of evaluations coming in, then I have to be careful about how reliable that information is.
If I have a small number of students in a course, even if all of them respond to student evaluations, then I still have a small sample size, and there can be a lot of fluctuation within that small group, where one or two outliers can skew everything. That makes it possible for one student who makes a mistake on the scale, thinking one is good and five is bad, the opposite of the direction given, to blow your average, just for a simple mistake. So a mistake, a disgruntled or anxious student, a biased student, any number of things can make a good course or a good teacher look not as good as they are. Or, on the other hand, the opposite could occur. They give me a false sense that something I tried in the course worked much better than it did, when really the tally was skewed by two students who really like me and gave me high marks for everything no matter what, and probably didn’t even read half the items, if that many.
So if I’m really wanting to get some reliable, actionable results from student evaluations, I think I’m more likely to get them from big numbers rather than small numbers. But I don’t always get big numbers and that’s bad, I think.
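The small-n arithmetic is easy to see with made-up numbers. In this sketch (the ratings are entirely hypothetical), one student reverses the 1-to-5 scale in a small class and in a large class; the same single mistake drags the small class’s average well down while barely touching the large one.

```python
def mean(xs):
    """Plain average of a list of ratings."""
    return sum(xs) / len(xs)

# Made-up ratings on a 1-5 scale. In each class, one student reverses
# the scale and enters a 1 while intending a 5.
small = [5, 5, 4, 5, 5, 1]           # 6 responses
large = [5] * 50 + [4] * 45 + [1]    # 96 responses

print(round(mean(small), 2))  # 4.17 (would be ~4.83 without the mistake)
print(round(mean(large), 2))  # 4.49 (would be ~4.53 without the mistake)
```

Same error, same scale, but the small class’s average drops by about two-thirds of a point while the large class’s drops by only a few hundredths, which is the whole case for wanting a big n before acting on these numbers.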
Sponsored by HAPI
A Twitter thread that I was part of recently talked about an issue that has always seemed odd to me. Think about this: educators in K-12 institutions have to be certified by taking courses in how to teach and prove themselves in a teaching practicum, besides taking content courses in their subject area. College educators have no such requirements other than content courses, many of which may be focused on a specialized area rather than being general enough to cover all the topics of the courses that we actually teach.
And there are very few opportunities to get either the general content or the teaching training once they’re out there teaching at the college level and have realized the great value of it. Well, my friends, I’m going to tell you about such an opportunity. It’s the Master of Science in Human Anatomy & Physiology Instruction, also known as the HAPI degree. Besides providing a thorough review of all the core concepts of both anatomy and physiology, the HAPI program also provides comprehensive training in contemporary teaching practice, plus a teaching practicum that allows you to try out your new skills in a safe, supportive environment. If you already have a degree in your specialized academic or clinical expertise, but would like to add some practical training in how to teach A&P, and I mean really teach it, check out the HAPI program at nycc.edu/hapi, that’s H-A-P-I, or click the link in the show notes or episode page. The free distribution of this podcast is sponsored by the online HAPI program.
Student Evaluation of Teaching: The Ugly
Student evaluations of teaching can get ugly. What do we do with the results of student evaluations of teaching? Do we glance at them and file them away? Or do we really look at them closely and try to tease out their meanings? Maybe even have a discussion with a peer to help us see beyond our own biases? If we’re not paying close attention to them, I think we’re missing some opportunities. And besides that, if we haven’t considered them carefully, how are we going to competently discuss them if asked about them by a course director, a department chair, a dean, or a promotion committee? That could get ugly sometimes if we’re not really thoroughly familiar with what’s in there and why. But even uglier is when you’re called on the carpet and challenged with a low score, whether it’s an overall low score or a lower-than-average score in a particular area.
I’ve seen student evaluations used as justification for denying tenure or a continuing contract, for firing a faculty member, or for other ugly business. Sometimes student evaluations are used appropriately in these situations, but it becomes ugly when they’re not. In my opinion, it serves everyone well, the college as a whole, the students, and the individual faculty member, when the results are used formatively. What I mean by that is when they’re used for improvement, not for some bottom-line evaluation of the entire career of that faculty member. For example, a high score in one area may help me see where my strengths are and help me figure out how I can apply that success to all areas of teaching, or even just make that area even better than it is.
A lower-than-average score gives me some information on areas that students perceive to be lacking. I can then tease out whether I’m not transparent enough with them, or they’re misunderstanding me or some part of my course, or whether my execution or implementation in that area needs improvement. Ideally, it’ll give me clues on what to fix and how to fix it, or at least where to start looking for those kinds of solutions. A way-low score is an even bigger opportunity to look at my course and at myself. I might need to buckle down and do some serious self-learning in that area. After all, very few higher-ed faculty have any formal training in teaching, so it’s a wonder that any of us ever pass muster in student evaluations. Or I may need some coaching or mentoring, whether it’s peer coaching from within my institution or an outside mentor or consultant.
My dear friend, the late Joe Griswold, was for many years an A&P professor who taught classes in a large lecture hall. He told the story of one day, after a particularly good lecture, when one of the attendees approached him and complimented him on the lecture. But then she told him that she’d been sent by the university to help him fix his low success rate in the course. Ouch. Yeah, he was taken aback, and I’m sure that did not feel good. But you had to know Joe. He immediately saw opportunity and worked closely with that coach who had been sent to him. And he not only turned his class around, he became an advocate for active learning and gave workshops all across the country for the rest of his career. There’s an outcome that’s not ugly, but beautiful.
Man, I miss Joe. But often the results are not beautiful. Why? Because student evaluations are not typically used in this formative way. They’re used as records of success or failure, period. And when they seem to be records of failure, that can lead to contracts not being renewed, tenure being denied, all those ugly things I already talked about. Another ugly side of student evaluations comes from the anonymity of student responses. Yeah, okay, I get that anonymous surveys are standard because anonymity allows respondents to be more honest and forthcoming in their responses, and therefore the results can be more useful. But anonymity also allows respondents to be mean and ugly without consequence. I think the rise of social media, and the ugly behavior exhibited by some of our cultural and social icons, has made this worse in recent years.
So I’m beginning to question the usefulness of anonymity in student evaluations of teaching. And as I question it, I wonder about the imbalance of it. After all, my feedback to my students is not anonymous. I have to be nice to them because they know it’s me, even when my gut reaction is not the nice one. And that keeps me in a professional lane, and in an ethical and moral lane. And it helps me be a better person. And honestly, it helps me give better, more useful feedback to my students. Why don’t we treat student evaluations of teaching as professional instruments that have a secondary purpose of training our students to do professional evaluations? After all, that’s a skill they’ll probably need in their careers, right?
I have a feeling that when they become a supervisor or practicum mentor or a team lead in their job, they’re not going to be very effective if they provide mean-spirited Twitter storms as feedback to the folks they’re supervising. A lot of that meanness, enabled by the anonymity of student evaluations of teaching, produces personal attacks based on the ugliness of racism, sexism, homophobia, xenophobia, and all kinds of other isms and phobias. As a straight, white, professional, cisgender, middle-class male, you won’t be surprised that I’ve not experienced much of that kind of takedown by students. But it wasn’t long into my teaching career, and this goes back decades, when I learned that my female colleagues frequently got comments, and even low scores, because of their appearance.
One colleague with salt-and-pepper hair like mine was frequently cited for not coloring her hair. I’ve never once gotten that comment or anything like it. It’s sexist and it’s mean, and it has nothing to do with teaching effectiveness. But that’s just the tip of the iceberg. When I first started teaching, a colleague was moved out of the classroom because his students commented on their evaluations that he, quote, acted gay. Well, he was gay, but so what? Whether he acts gay, whatever that means, or is gay or trans or whatever, that is not what’s being evaluated, and therefore should not be mentioned, especially in a derogatory way. It’s just wrong in so many ways. And this ugliness often reaches many other corners of our identities as individuals. Immigrants and other people with accents, including those with pronounced rural or Southern drawls, or strong urban dialects, or even stutters or other speech issues, are often demeaned, even when they’re easily understood by everyone.
I’ve worked with a number of professors, especially adjunct faculty, who can’t afford a big or stylish wardrobe and are insulted for that in their evaluations by students. I don’t want this to become more of a rant than it already is by continuing that list; you get my point. And here’s another point: this ugliness harms students too. It’s not just that they’re not learning professional behavior. I think the habit of being judgmental in a biased, unfair, and frankly rotten way reinforces the rottenness of their thinking and their behavior. And by accepting it on anonymous surveys, we’re allowing this behavior to grow and to fester, and therefore our students become less, not more. Is that what we want for them? I want my students to be better coming out of my classes, not worse.
Okay. I almost had to take a tearful break when talking about Joe Griswold, and I feel it coming on again. So let’s move on to my last example of ugliness. Yes, you guessed it: public online platforms that rate professors. I’m not going to mention any by name, to avoid legal consequences, but you know which ones I’m talking about. If not, ask your colleagues in any discipline, and be ready for a rant, because not everybody has a good opinion of these things. There’s a lot of ugliness there. Now, these online platforms are also anonymous, and they allow students, or even just people posing as students, to rate you and comment on you as a teacher. It’s all the good, the bad, and the ugly about student evaluations of teaching taken to another level.
One issue I have with these platforms is that they often ask questions that sort of help students avoid the really good courses and good instructors. Like a rating for level of difficulty. Really, isn’t the course with a high score for level of difficulty the best course to take, or at least potentially the best course to take? One where you have to work really hard? Or these platforms might ask whether a textbook is required. Now, I get that textbooks cost a lot, but skipping classes with textbooks may steer you away from some really good courses. Another question they ask is, is attendance mandatory? Well, yeah, if you want to learn something by attending, reading your textbook, and doing a lot of difficult work. See where I’m going with this? They’re asking questions that really aren’t necessarily the most useful questions.
As a matter of fact, they might be steering students in the wrong direction when they’re trying to decide which section of a course to take, or which course to choose among several options. I don’t want to give a long list of examples of all the bad and ugly. And after all, many of these sites now allow for faculty rebuttals, which helps a little bit. But think about this regarding public evaluation sites: when you’re accused of something bad, and then you deny it or explain that it’s not bad, that just wipes that bad first impression out of everyone’s mind, right? Yeah, no. My point in bringing this up is to show where our worship of anonymous student evaluations of teaching has gotten us. It’s now a public thing. Like in-house evaluations, that has some good aspects and some bad aspects.
But there are serious risks, I think. Now, before I wrap things up for this episode, you may be wondering: did Kevin just get a really bad evaluation, or does he often get really bad evaluation reports? Is he grinding the proverbial axe here? Well, let me tell you, that’s none of your business. Well, okay. That’s true. It isn’t any of your business, but I don’t mind telling you that I’ve consistently done pretty well on my student evaluations. Even those crazy public ones, where sometimes the comments are so wrong I’m thinking, were they thinking of their history professor or something? I mean, have I ever met this person? Because sometimes those responses are just so off base, I’m thinking they’re not on the same planet that I am. So I kind of have an interest in keeping the status quo, because I tend to get pretty good student evaluations.
But I’ve seen the damage the status quo can do, and it’s just not right. And I have a podcast, so, well, I have a platform from which I can say something. And I’m saying it. Okay, this episode has gone on long enough. Even though I started off with the good, I spent a lot of time on the bad and the ugly, I realize. And I don’t want to just leave it there. Not with this being the holiday season and all. In all this darkness, a light will come in the next episode. In that episode, I’ll talk about some proactive and reactive things that we can do to make this all work at least a little bit better, maybe a lot better, for us and for our students. And yes, there might even be a bit of magic in that next episode that will make our spirits bright. In the meantime, all those thoughts you’re having, and I know you’re having thoughts, why don’t you clean them up a bit and call them in to the podcast hotline? Really, I want to hear your thoughts, filtered for a professional audience, of course.
Sponsored by HAPS
I talk about HAPS, the Human Anatomy & Physiology Society, a lot on this podcast. And there’s a reason for it. It’s a big part of my professional life. I’ve been a member since it was first formed, and I participate regularly. Why? Because it feeds me. It sustains me as an A&P teacher. It’s also important to me personally; some of my very best friends are people I met in HAPS or through some connection in HAPS. It’s a very friendly, welcoming organization. It’s kind of hard to describe unless you try it. Heck, they’re so nice that they provide marketing support for this podcast so that you and your friends can learn about this podcast and enjoy it. Want to know more? Go visit HAPS at theAPprofessor.org/haps. That’s H-A-P-S.
As I mentioned earlier, there’s an easy way to share this podcast with friends and acquaintances, and also earn yourself a bit of cash. Simply go to theAPprofessor.org/refer to get a personalized share link that will not only get your friend all set up in a podcast player of their choice, but will also get you on your way to earning a cash reward. Oh, and I always give you links to other resources on topics I cover in each episode. If you don’t see links in your podcast player, go to the show notes at the episode page at theAPprofessor.org/84, where you can explore any ideas mentioned in this podcast.
There’s another episode about student evaluations of teaching coming soon. And I’d love, love, love, love to get, that’s four loves. Wow. That’s really high on my scale. I’d love, love, love, love to get your questions, comments, and ideas at the podcast hotline. That’s 1-833-LION-DEN, or +1 833-546-6336, or send a recording or written message to podcast@theAPprofessor.org. You’re more than welcome to join The A&P Professor Community, way off the social grid on its own private platform. To do that, just go to theAPprofessor.org/community. And… I’ll see you down the road.
The A&P Professor is hosted by Dr. Kevin Patton, an award-winning professor and textbook author in Human Anatomy & Physiology.
Do not listen to any podcast to which you have had previous life-threatening reactions.
Episode | Captioned Audiogram
This podcast is sponsored by the
Human Anatomy & Physiology Society
This podcast is sponsored by the
Master of Science in
Human Anatomy & Physiology Instruction
Transcripts & captions supported by
The American Association for Anatomy.
The easiest way to keep up with new episodes is with the free mobile app:
Or wherever you listen to audio!
Click here to be notified by email when new episodes become available (make sure The A&P Professor option is checked).
Record your question or share an idea and I may use it in a future podcast!
Toll-free: 1·833·LION·DEN (1·833·546·6336)
Please click the orange share button at the bottom left corner of the screen to share this page!