But I’ve got a blank space, baby / And I’ll write your name

I have, in some way, been involved in higher education for most of my life. I entered the freshman class of Northeastern University in 1992, and since then I have been either a student or a teacher at at least one college or university (sometimes both at the same time). In my years at universities in MA, CT, and NY, I have seen a variety of fads come and go.* Anthologies and readers have gone through several editions. And the ratio of adjuncts to tenure-track faculty has grown to frightening proportions.**

But one constant has been student evaluations. Every year I was a student, I filled them out. And I have administered them for every class I have ever taught, as a graduate student, an adjunct, and a tenure-track professor. Similarly, the basic methodology of student evaluations has not changed: the first portion involves filling in bubbles, and the second portion allows for written comments. There is always some combination of questions asking students how much they have learned, how available the instructor was, how knowledgeable the instructor seemed, etc.

Those who know me know that I find this all to be a waste of time. This is because student evaluations, on the whole, are a worthless endeavor whose practice has nothing to do with improving the quality of education.

Don’t believe me? Check this out. (And feel free to find and read any of the large number of reports, studies, and stories that basically say the exact same thing. Go ahead; I’ll wait.) If you have been looking around, you probably also discovered that student evaluations are sexist and racist, too. Able-bodied, straight, white men who speak without an accent do shockingly well on student evaluations.***

So why do we continue with this farce? Because student evaluations produce data, and we are a data-driven business. Faculty turn in copies of student evaluations as proof of their quality in the classroom (and similarly, poor student evaluations are used against faculty to prove their lack of worth in the classroom). Administrators use evaluations to reward some faculty and punish others. This is particularly the case for contingent faculty, for whom bad evaluations can mean unemployment at the end of the semester.

Student evaluations provide concrete data. It’s limited data. It’s biased data. It’s unreliable data. It’s bad data. But it’s data, and so it can be assessed.

And the thing is, everyone knows it. And most people don’t really care enough to do anything about it.

Every semester, when I mechanically administer my student evaluations (and await the eventual lavishing of praise that is my due as an able-bodied, straight, white male), I wonder why we don’t administer evaluations for administrators.**** I wonder how the day-to-day life of my university might change if faculty – or students – evaluated administrators. I mean, if student evaluations are a reliable means of determining the worth of some employees, why not use them for all employees? If the data collected really is valuable, why wouldn’t we want more of it?

I suspect the reason we don’t collect that kind of data is because everyone recognizes that the data is worthless. However, evaluations are still a wonderful tool for keeping faculty in check. Their power, in other words, lies not in the data they collect but in their use for keeping faculty compliant.^ And it works. Untenured faculty, contingent faculty, and graduate students (whose funding can be tied to student evaluations) have long found simple and effective ways to bribe their students. And while I find it loathsome to bring in baked goods, or hold class outside, or offer extra credit to students who show up to fill out evaluations, I understand the impulse. When your evaluations are used – without any context – to determine your merit, only a fool wouldn’t do what he/she could to make them sparkle. And unlike in professional sports, nobody in higher education cares if you “juice” your evaluations.

Personally, I would be fine if we just stopped administering evaluations.

But that isn’t ideal, either. Students should have some mechanism by which they can comment on their education. We shouldn’t treat them like customers; however, we also shouldn’t treat them like houseplants.

There are ways we could improve student evaluations, and ways we could use that data to improve the quality of the education students receive. It would take some doing, but the people who need to be doing this have all attained graduate degrees, so this shouldn’t be all that hard, right?^^

First, we need to change when we ask students to complete evaluations.

This is key. Right now, students are asked to complete evaluations near or at the end of the semester, for courses they are taking that semester. This is idiotic. To begin with, students are still too close to the class. They are in the midst of studying for their finals, working on final projects, or are otherwise stressed out. Stressed-out people just don’t give the kind of thoughtful feedback we are looking for. But just as importantly, students still doing the work for the class have no idea yet how valuable that class has been. One common question is some form of “how much have you learned?” This is impossible to answer while still learning in the course. To give but one specific example: every Freshman Composition teacher I have ever met has had at least one student come to them later in their academic career to thank them for teaching the student how to write. Sometimes, these students also find some way of apologizing for having been a difficult student who didn’t get just how important the class would be. Students need time to reflect, to process, and to test-drive all that fancy learning. Filling out evaluations during the semester one is enrolled in those classes doesn’t allow for any of that.

If we really wanted to know how useful a class is, we would ask students later in their careers how useful that class has been. This might not be a perfect solution, but it would give the students a chance to better reflect on their courses. To give another example: I am constantly listening to freshmen and sophomores gripe about how pointless their gen-ed requirements are; many of those students come back to me as seniors and express surprise, telling me that they didn’t realize until later on how useful those courses – and the skills they teach – were.

Second, we need to change what kind of data we collect.

Often, evaluations are completed on some standardized sheet (on paper or online); sometimes, faculty design their own evaluations that are in some way tailored to that faculty member’s key interests. But even those evaluations are limited by being given too early. Changing the questions might make for some more focused data, but it’s still – at best – incomplete data.

If we really wanted to see how much students have learned, we would track those students through their academic careers. We already do this in many ways (for student athletes, for at-risk students, for honors students, etc.); so why not expand the scope of this work to collect the data we really want? Asking a student how much he/she learned in Freshman Composition (or Calculus I, or Introduction to Psychology, etc.) will only tell you how much the student thinks he/she has learned. And the student has to answer this question before ever having to use that knowledge and/or skill set. If we really wanted to know how much the student learned, we would track that student in upper-division writing classes (or Calculus II, or upper-division Psychology classes, etc.). Again, this is an imperfect assessment method, but it provides more data, and more useful data. That is, students who cannot perform well in writing-based assignments simply did not learn much in their Freshman Composition class, no matter what they claimed on their evaluations.

However, this leads to a much more important point…

Third, we need to separate student evaluations from faculty performance.

My hypothetical student above could have done very poorly in Freshman Composition, but still rated the faculty member very highly. The “data” thus shows the instructor performing very well, despite the student not performing well. Similarly, the student could have given the instructor terrible evaluations, despite learning a great deal and performing well in later classes. My point here is that student evaluations, without any other context, do not give a clear picture of the instructor’s performance.

Additionally, just as there is no way to measure or account for the biases that are in play^^^, there is no way to account for students using evaluations to take out their aggressions. I’ve heard students coming out of classes commenting about how they “really fucked” their professor, etc. Did the professor deserve it? I have no idea. But the point is, some students use evaluations as a chance to assert some sort of power over their faculty members (who, students sometimes believe, are exerting their own power over students). Again, the evaluations serve only one function: the exertion of power, and the chance to keep people in check.

If we really wanted to evaluate faculty performance, we would track students as noted above, but we would also include other kinds of evaluation. The classroom visit – whereby a colleague sits in for a day or two to write a review – is common, but it also provides incomplete information. Anybody can spruce up and do a good job for one day; similarly, anyone can have a terrible day and drop the ball. And neither day is – or can be – an accurate reflection of that colleague’s worth in the classroom. We could do a better job of putting colleagues together in the classroom – through mentoring programs, team teaching, etc. – so that we spend more time engaged with each other’s craft. For more suggestions, feel free to check out any of the various programs devoted to the craft of teaching (and there are a great many out there).

In that regard, we would also provide the time and space – and funding – for improvement. If the goal of evaluations is to measure the quality of the faculty, why wouldn’t we also put into place systems that work to improve faculty performance? We hire faculty to do a job; let’s give them the opportunity to do the best job they can. The goal should not be evaluating faculty, but training them.

These are just my early thoughts, and if anyone has any suggestions to add, please leave them in the comments.

We all know the problem. So when do we solve it, and how?

*For instance, I have seen three different waves of moving college classes online. I once even tried to teach an all-online freshman composition course. It was a disaster. This most recent push looks like it’s here to stay, if only because many universities see it as a cheap way to provide educational content without having to cut the pay of much-needed administrators.

**This is completely anecdotal, but in all my time at Northeastern working on my undergraduate studies, only one of my professors was an adjunct. I studied with a fair number of graduate students (in a variety of fields), but only one adjunct. I can’t imagine earning a degree from that university – or any other major research university – with such a record these days.

***Trust me. I get amazing student evaluations. I even tried one semester to get bad evaluations, but it didn’t work. I’m good, but I’m not that good. Nobody is.

****I’m told that some institutions do evaluate their administrators. I’d love to see examples of these evaluations, as well as a chain of custody regarding who gets to see them and how they are used in administrator performance reviews.

^In my own department, I have seen bad evaluations used against one faculty member whom most of the department wanted to deny tenure to. However, when most of the department wishes to grant tenure, bad evaluations never come up, or are explained away. How you use them depends entirely on how you already feel about the colleague. I’ve seen bad evaluations offered up as proof of one colleague’s poor teaching, while bad evaluations (that made pretty much the same claims) were singled out as examples of biased students. I’ve even had one administrator tell me flat out that “students complain,” and so we shouldn’t take anything they say in evaluations seriously.

^^Please don’t actually answer this question. I can’t bear the truth.

^^^Except for those obvious instances where students write things like, “I think she was too hormonal” in their evaluations. (Yes, this is a real example, from one of my colleagues.)


13 responses to “But I’ve got a blank space, baby / And I’ll write your name”

  1. We produce a lot of other data besides student evaluations. Essays, oral presentations, test scores, discussions, blogs – all of it is data. But admin don’t want to read qualitative data. It’s quantitative data they want, because then they don’t have to do any work interpreting the results. Student evals are simply the laziest way they could conceive to measure success at teaching. Written comments are rarely read by anyone other than the professor.

    • True, we do generate a great deal of data. And it’s all very useful for the faculty members, and largely ignored by administration.

    • And the qualitative data analysis really should be the assessment work of a department. It can take a lot of political willpower for a department to push back against the desire for easy quantitative data that comes from above (my own institution wanted to use a multiple-choice test to evaluate students’ critical thinking skills), but it is possible, at least on some issues. I’m in the middle of wrapping up my own assessment report of one of our pilot programs, and it’s all based on qualitative data. We’ve translated some of it into numbers for the purposes of the upper admin (especially on issues we want to pursue but can’t without admin support, like cutting back on the number of students who get AP credit for composition courses), but used the qualitative data to make decisions about what issues we’re going to work on collaboratively as a program (like how we teach process).

      • One of my fears is that, if we do convince administration to spend more time on collecting and evaluating the myriad forms of data available, all we will end up with is another administrative office with more paperwork, and no real changes in the classroom. But maybe I’m overly pessimistic right now.

      • I don’t think that we should try and convince the administration to do this kind of assessment. Instead, we need to convince them to let us do it, for a couple of reasons. 1) they’re not really qualified; non-experts generally can’t effectively evaluate the work in an expert’s classroom. When they try, they tend to evaluate secondary issues that are more accessible to them. 2) It’s still cheaper for faculty to do this than to open another office, and we’ll actually be able to “close the loop” (to use accreditation/assessment language) back to the classroom, which an outside office will never be able to do (I think this gets at your primary objection).

        I have found my time doing assessment this semester and last year to be some of the most valuable teaching development time I’ve ever had (only thing better? Doing the planning for UConn’s orientation). That’s because our assessment involved sitting down with another faculty member to read student work (from yet another instructor’s classroom), evaluate it, and then discuss it. It led to conversations about what makes an assignment effective or ineffective, about teaching strategies, and to a broader sense of what we could ask our students to do.

        The hard part is that this is time-intensive (and faculty don’t really have time, not with their teaching, research, and other service demands). But it’s necessary work, IMO.

  2. Many departments do similar things for internal reports, but what do we do about administration? I think we are talking about two different things here. The first is, what do we do to better evaluate our faculty? What you suggest makes perfect sense, and while it’s time-consuming, it’s likely worth it. But the bigger concern is, what do we do about administration’s use of student evaluations? This is a pressing concern for the largest segment of the faculty workforce, who are the ones who have the most to lose. (And who are, coincidentally, likely not going to be part of the solution you propose, unless they get paid to do the work.)

    • Yeah, I’ve got less of an answer for what to do about the administration use of student evaluations.

      Part of me wants some statistician to take the data that’s been collected on the bias issues and student learning and figure out how to “correct” the student evals in some sort of systematic way. If women have, on average, a half point lower score than men in certain categories, then we add a half point onto all the women’s scores in those categories–or something like that. Dunno if that’s really feasible, though.

  3. robin.curtis@hhu.de

    In Germany, a central body for professors, the Deutsche Hochschulverband, compiles ratings of chancellors. You are only allowed to rate your own chancellor/president. It is a great thing!

  4. Lisa

    I add questions and do my own evaluations for that reason. I like to ask specific questions about the activities and projects I do, for example. I also think a big problem with judging student performance in place of evaluations is that a great teacher can have an awful student and vice versa…some of this is on the student too! The first batch of questions I add to my evals is designed to make students think about their responsibility for what they learned/their grades (I ask how many classes they missed, how much time they spent studying each week). I also especially hate online evals since no one does them, making them even more worthless. Oh, and you can’t add questions, and they don’t want you to distribute your own evals. Total nonsense.

    • I have found that students wildly overestimate how much time they spend on their classes, but that may have been the students I was asking (back when I asked). To me, that’s like the Nielsen ratings (for TV), where people overestimated how much educational programming they watched.

  5. Faculty eval data are easy data, and like most things that are easy, they are worthless. They are easy to collect, aggregate, and disaggregate. They don’t really mean anything beyond sound and fury.

    • Agreed. And yet, we continue to use them and give them value. It’s baffling, because it flies in the face of the “critical thinking” skills we claim to be teaching our students.
