Wednesday, November 08, 2006

Measuring teaching

I read an article about how to measure how much teaching faculty are doing. Apparently the old way was to ask professors what percent of their time they spent on teaching, research, and service. The problem with that is that it doesn't tell you how much teaching work actually gets done. According to the article, the new, improved way of measuring teaching productivity is to count the number of credit hours taught and multiply by the number of students taking the course (so a 3-credit course with 30 students would count as 90 student credit hours).

To me, this is still not good enough. It is a measure of how much teaching gets done, but it gives no indication of the amount the students learn, and student learning is the point of teaching.

Student learning is difficult to measure. You could look at student grades on the assumption that good grades are an indication of mastering the material. However, anyone with any familiarity with education knows that student grades will vary from professor to professor depending on how a particular professor grades.

Another way is to use some sort of standardized test. All calculus classes in the university/state/country/world could have students take the same test at the end. The students' scores on this test would then not be determined by the professor's grading practices. The drawback of this approach is that standardized tests have a way of not entirely getting at what is important. Standardized tests may test memorization of specific items. A good education should also help students improve in things like critical thinking and creativity. Also, one size does not fit all. It may be beneficial for students in different regions or different disciplines or from different backgrounds to approach the subject matter in a different way.

Perhaps theoretically in any subject there is a defined body of knowledge which all students should learn. And perhaps theoretically one can devise tests of critical thinking skill as well as tests of content knowledge. However, developing a test which is appropriate for classes in different schools in different locations is so challenging as to be impossible for all practical purposes. ETS puts a lot of effort into it, but I don't think the SAT and GRE are very good measures.

Another idea is to use course evaluations. However, course evaluations may tend to be better for teachers who make the class fun and/or easy, and these factors do not always result in better student learning. Certainly a teacher who gets the material across and makes it seem fun and easy is a good teacher. But the class could also seem fun and easy because the material is not being covered fully.

To overcome the limitations of course evaluations done at the end of the semester, one could survey students a year or two later, when they have taken more advanced courses which use the material they were supposed to have learned in the course being evaluated, or when they have needed to use the material in the workplace. Perhaps at this time, students will have a better perspective on how much they really learned in the course.

I think none of these approaches is perfect, but combining them can give a picture of how well courses are succeeding in bringing about student learning.