How do we evaluate college teaching…

And what do a couple of Research 1 schools have to teach those of us at liberal arts schools?

I’ve been trying to update our department’s annual Individual Faculty Report (IFR) to make it easier to understand, and the process has forced me to think more about what makes for an effective teacher. Yes, there are lots of books out there about higher education pedagogy, and I’ll get to some of those in future posts. The report has to be both simple to understand and simple to complete, since it passes through several different levels of departmental and college review. Our report has typically involved lots of math (not that there’s anything wrong with that :)), but how we’ve derived the numbers hasn’t exactly been easy for people to figure out. And in some cases, the numbers come from very small sample sizes (in one case, n=2). So at the same time that our state’s university system has called for increased rigor in post-tenure review, I’ve been thinking about how we can better help each other evaluate our teaching, and about how to gather better data so that I can be more articulate about why my colleagues are effective teachers.

We know that various evaluation systems are biased in ways that harm young faculty, women, and other groups, and some have even argued that they have little or no correlation with actual learning. My former colleagues articulated their criticisms of IDEA forms over many years, with strong justification. But it’s easy to critique and much harder to come up with something better, and I’ve spent quite a bit of time puzzling over our current documents and trying to improve them.

So as I was trying to think through some more changes before upcoming faculty meetings, it was time to be efficient and multitask. When the going gets tough, the tough try to reduce their email inboxes! 🙂 I still have a bit of Bethel email to purge, and some of it has to do with teaching (old emails from the Chronicle of Higher Education and the Magna Group). I had saved it to look at later; now it’s time to see what I can find! Today’s post deals with one email I found during the purge.

I ran across an old Chronicle Vitae email with a link to a James Lang article. Lang taught and directed teaching centers for quite a few years before his recent early retirement. He wrote an article for the Chronicle back in 2019 entitled “We Don’t Trust Course Evaluations, But Are Peer Observations of Teaching Much Better?” I don’t remember the article from when it first came out, but it and its links offer quite a bit to think about. In particular, he cites a National Science Foundation grant awarded to the Universities of Texas, Colorado, Kansas and Massachusetts to improve thought and reflection about teaching.

I’ve noticed that some of my new colleagues have raised the question of why we do peer evaluations of teaching. I get the points behind the question: on one hand, our departmental colleagues are uniquely equipped to understand the content we teach, better than colleagues from other disciplines; on the other hand, there’s a reluctance to be too negative or critical of a departmental colleague. And there’s something to be said for naive evaluation; after all, our students don’t have the same scholarly background we do. So how do we make the process more concrete, more rigorous, and more beneficial all at the same time? I think the TEVAL project has some answers.

Most of the time, we use peer review as part of summative review, as in, “Does this person deserve tenure/promotion/merit awards?” But we tend to forget about the formative side of peer review; just as we have formative and summative assessments in our classes, we should be using the formative aspects of peer review as well. That’s something I’m thinking about including as part of the IFR.

I also plan to have conversations with the instructor before and after the class session when I evaluate teaching. In reading through some of the documents TEVAL links to, I realize that I could have been far more proactive in my previous evaluations. Our colleagues in K-12 education are often quite rigorous about going through lesson plans, class discussions, and behaviors before and after teaching evaluations; we tend not to do as much at the college level because of the strong autonomy of the college instructor. What I’d like to do is help instructors increase their autonomy through these discussions and evaluations. By showing how they uniquely draw on their background, experience, teaching philosophies, and stories to help students learn, we’ll be better able to articulate, both inside and outside the institution, why what we do is so important, and that we’re not just indoctrinating students. (As someone posted in a meme, if we were so good at control, our students would always read the syllabus!)

A second concern of TEVAL is the actual syllabus. My current department requires that at least one syllabus per semester be reviewed as part of the IFR; however, some of our criteria have been a bit fuzzy (what does a “neat” syllabus look like, anyway?). A future post will discuss some of the ideas and rubrics other universities, such as Kansas, have come up with, and I’ll say up front that they look quite promising.

There will certainly be more to write about TEVAL and evaluating college teaching. If anything, this post is the preface to the really hard work of developing a system that is fair, rigorous, and better articulates how we as professors work toward meeting our course objectives. For now, I’m starting simple and encouraging my colleagues to do the same: adding each of our actual course objectives to the course evaluations as additional questions, and asking our students to discuss how we did or didn’t meet those objectives.
