A.A.U.P. Fall 2000 Forum
"Effective Teaching: What Is It? and How Can We Assess It?"
Thursday, November 2, 2000
Moderator: Barry Burkhart (Psychology)
Panelists
Paula Backscheider, Philpott-WestPoint Stevens Eminent Scholar in English
Jeffrey Fergus, Associate Professor, Mechanical Engineering and Chair of the Teaching Effectiveness Committee
Philip Lewis, Professor, Psychology
Philip Shevlin, Mosley Professor of Sciences and Chemistry
Introduction
What follows is a transcript of an audiotape of the Fall 2000 Forum sponsored by the Auburn Chapter of the American Association of University Professors called "Effective Teaching: What Is It? And How Can We Assess It?" The impetus for this forum came from recent articles in AAUP's journal Academe:
"Teaching Portfolios: A Positive Appraisal," Academe, v. 86, no. 1, January-February 2000: pp. 37-44.
"Teaching Portfolios: Another Perspective," Academe, v. 86, no. 1, January-February 2000: pp. 44-47.
"Flunking the Test: The Dismal Record of Student Evaluations," Academe, v. 86, no. 4, July-August 2000: pp. 58-61.
Please note that the invited speakers have not had an opportunity to review and edit their comments; they have not been able to modify or improve their remarks for publication here. The speakers' comments and the discussion that followed are presented here as best as the quality of the audiotape would allow (we regret that we were unable to transcribe some brief, inaudible passages; we were also unable to identify some of the members of the audience who asked questions). In any case, we hope that the remarks will be helpful to you, especially for those of you who were not able to attend the session.
Transcript of Forum Discussion
GEORGE CRANDELL (A.A.U.P. Chapter President): The AAUP is a group of people that defends the values of academic freedom and shared governance, and we welcome you here today to this forum on "Effective Teaching: What Is It? and How Can We Assess It?" prompted by an article in Academe, the Bulletin of the American Association of University Professors, an article by Paul Trout, who questioned the value of teaching effectiveness surveys. His article neglects to consider some research that is available about what teaching effectiveness is, and so there is a spirited debate about the value of teaching effectiveness surveys as well as the question of what effective teaching is. We've got a panel today, whom Barry Burkhart will introduce, who are all experts--though they don't claim to be experts on this topic--but they do have some interesting ideas to share on these two questions. Thanks for coming today.
BARRY BURKHART: I'd like to open with my amendment to the title--"Effective Teaching: What Is It and How Can We Assess It?"--if it is an it, which it probably ain't. I'd like to introduce our panel. Jeffrey Fergus, who is Associate Professor of Mechanical Engineering and current Chair of the Teaching Effectiveness Committee, and he's going to speak on the work of the Teaching Effectiveness Committee; Paula Backscheider, who is the Philpott-WestPoint Stevens Eminent Scholar in English, who has spent five years chairing the student retention committee; Philip Shevlin, who is the Mosley Professor of Sciences and Chemistry, who emailed me that he was worried because he was going to be on a committee with people who knew something, and he didn't know anything, so I should, as chair, exercise the privilege of removing him. Recognizing that academic ploy, I, of course, said "No," that he had to come and do his thing. And Philip Lewis, who is Professor of Psychology, a developmental and cognitive psychologist, who will talk about the difficulty of assessing critical thinking.
My central duty is sergeant-at-arms. I've asked people to speak from six to eight minutes, each of them, and then we will have a dialogue with the audience. And first, we'll ask Jeffrey to speak.
JEFFREY FERGUS: As Barry said, I'm the current chair of the Teaching Effectiveness Committee, and about a year ago we were asked to evaluate the Teaching Effectiveness Form. The first thing that we did--we recognized that there are two or three parts to that. One is--and I should say this in light of the fact that in going to semesters the form had to be changed anyway because of the format--so there is the form itself, the questions, how it was administered, and how it was used. And, of course, that "how it was used" is probably what is going to get the most discussion and be the most controversial, but we haven't gotten to that yet. I'm not sure how much we will. What I will talk about is what we've done in terms of revising the questions.
So what we did first of all--we had the form, and we made the changes for the semester, and we realized we didn't want to make changes in the questions too quickly, so we changed the format, and then, over the last year, we've done some things to try to change the questions.
The first thing we did was to send out a survey to all faculty. You may remember this--I guess it was probably winter of last year--asking faculty what they thought of the current questions, which ones should be kept and which should not, and also asking for general comments.
And, basically, the two questions which most people objected to were those related to motivation and to stimulating thought. And I'm going to pass around some surveys for you to fill out or to take with you; they have the current questions on them and the revised questions, which I'll get to in a little while, towards the end.
And so the first thing was that those two questions were the most objectionable. Most people thought that it's important, but that students really should already be motivated, and that it's not the faculty's responsibility to motivate students. There were other comments relating to having a question on overall effectiveness, and a number of other comments.
And what we did, based on those results, was make a few changes. One is we removed those two questions. Another is we changed some of the wording: we thought some of it was too wordy, and we got rid of some of the qualifiers. Another comment was that organization and preparedness were really redundant, so we combined those two questions. And we added some questions about the course, because one of the concerns is whether, when evaluating the instructor, students are mixing up what they think about the course with what they think about the instructor, as opposed to rating just the instructor. And the other thing, we had a couple of questions for measuring both overall effectiveness and whether students would recommend the instructor to a friend.
And this gave us twelve questions, and we did a pilot survey over the summer: we had over 600 students in about 30 courses take both that twelve-question survey and the current survey. And one of the things we found, first of all, is that on our new questions the overall score was lower; however, if you just took the questions related to the instructor and averaged those, the score was essentially the same as on the current survey, even removing those two objectionable questions, which always had the lowest scores.
Another change we made was to the question related to verbal delivery, worded "spoke audibly and clearly." We changed that to "did the instructor communicate effectively," to avoid what we thought might be a bias against TAs or faculty who have accents--students complaining about that--and to broaden it to "communicate effectively." That score was the only one that was significantly lower among the questions that we made minor changes to.
The other thing was that we took the average for the course--all students--and, consistently, I think all but one course rated the course lower than the instructor. That was quite consistent.
The other thing that we did was check the trend, and, of course, the instructor's score does go up with the course score, so there is some interrelation, and that is something to think about in terms of interpretation if you normalize to what the students think of the course. Basically, the idea is that if students are taking a course they don't like, they may take it out on the instructor, and you could normalize for that. The other thing, related to the "organized and prepared" question, is that we found there was a significant difference in the scores between whether the instructor was organized and whether the instructor was prepared. It was consistent: the students always rated the instructor lower on organization than on preparation. So maybe it wasn't valid to combine those two questions. But based on those and on some other results, we decided to make some changes after the pilot. One is, as I said, we took those organized and prepared questions back apart. We did leave two questions on the course content.
We had four on the pilot to test some different things. We do think--not that the student necessarily knows the value of the course at that point--but that it's a good indicator of what the student thinks of the course, which bears on how they evaluate the instructor. And so what we ended up with are the nine questions which are on that sheet which I passed out, and as I indicated, if you could take time to tell me what you think about those, fill this out. Any other comments you have related to that, I'd appreciate. I figure this is a good chance to get some input. So that's where we stand right now. One of the things we're talking about is using the Breeden grants to try and solicit some work on evaluating these teaching effectiveness forms, and, as I say, we're not sure. We have not gotten to the really messy part in terms of how this is used, and that's something that we need to talk about as to what our role will be in that.
BARRY BURKHART: Let's hold questions to the end.
PAULA BACKSCHEIDER: Well, I'm going to read you a couple of lists, because I thought that this way I could present more information, perhaps, and then, in the question period, if any of it seems particularly interesting or important to you, you can pick it up there.
I want to begin by thanking a lot of you who worked with me on the Retention Committee, and particularly the History and Chemistry departments, who took on a year-long project on student learning and on teaching.
Well, when I first started thinking about this question, "What is Effective Teaching?" the answer I came up with was "whatever works," and for a long time that was my total talk. I can see you all looking happy about that. But anyway.
If you think about it, what works for one teacher and one student may not work at all for another teacher and another student. And we've all seen people who use methods very, very different from our own who get as good or better results, and so I guess I begin with a hail to diversity, but that also leads me to the recognition of some very hard truths, and that's my first list.
The first is that teachers can revel in their diversity. They can exercise it responsibly or irresponsibly, but students are largely trapped.
And secondly, few faculty members are truly analytical about their teaching and its effectiveness. They have a style, and like comfortable old clothes or a bottom-sprung chair, they just sink into it.
Thirdly, the environment is more suspicious and even hostile than proactive. We fear grade inflation when we should be rewarding teachers who are committed to teach them, teach them, and teach them again until they get it. We accuse faculty who give high grades of being easy, when in fact they may be stimulating and inspiring. One of the things we did with the Chemistry Department was check up on so-called grade-inflators, and what we found was that "A" students tended to get "A"s in all the courses that they were taking, and sometimes you're just really lucky and get a really good class, or sometimes you're really unlucky, and you give a "C" and that's what that student is getting from everybody else. So we often evaluate people's teaching in terms of whether they're grade-inflators or not, without knowing anything about the students or even thinking about their philosophy of "teach them, teach them again."
There are well-established, well-tested, and well-known truths about effective teaching, and many faculty don't know them, and many who do make no effort to incorporate them into their teaching styles and form new habits.
And what I brought with me is a handout, something that the Retention Committee hands out every year. We give it to deans, and we ask them to share it with all their faculty, and some of them do and some of them don't. And this is really a list of some well-known, well-established, well-tested truths about teaching. And there are a couple of them that I just want to mention very briefly.
The first is to learn about the academic and psychological support services on campus and watch for students who may need referrals. Faculty are likely to be the very first people who see a student beginning to withdraw--someone who moves to the back of the room, someone who starts biting their fingernails--and so what I brought with me also is a handy little card, and you can keep this in your desk drawer, and as you watch a student unravel before you, you can slide your desk drawer open and say, "Why don't you try calling 4-5972 and getting help in note-taking or time management?" Anyway. Most faculty really want to help students, but they have no idea what services are offered and where students might go to get them.
The second hard truth on this list that I want to call to your attention is way down. And it is: remember that there are several types of learners, and explain important concepts for at least two types every time. The most common learner that we see in college, and who succeeds, is the visual learner. They are the kind of people who can read books. And some students are naturally that way. Some of them have been socialized to do it. Unfortunately, not all students are like that.
There are also aural learners. And interestingly, faculty often hate them, because they sit in your class and they don't take notes. And the reason is they learn so much better by what they hear, and taking notes actually distracts them.
The third type of learner, and the one who does poorly in college and whom we don't serve very well, is what's known as the kinesthetic learner. Those learners have to imagine something, do it, see it, touch it, and we don't really turn a lot of the things we do into an experience like that for that kind of learner, and faculty, most of them, don't even know that there are those types of learners, and they don't really think about how they might change their teaching style to reach them. But if you think about the blackboard, the old-fashioned blackboard, you can see how you could reach those students. For instance, you say it, and so you hit the aural learners; you write the key words on the board, and you hit the visual learners; and then--this is where my students always laugh at me--you draw a picture and tell a little story about it, like precipitation or evaporation or the medieval humours, where the fumes come from various body organs, so all of a sudden your kinesthetic learner can make this into a narrative. It becomes more physical for them. So these are examples of the well-known truths that faculty either don't have any access to or, if they do, don't put into practice.
OK. My second list. Good teaching and learning, if we are effective, is expensive. And very few people on this campus or anywhere else are paying for it. For instance, it takes time to write good lesson plans and lectures with new materials and new strategies, and where do we get that time? Secondly, it takes time to take advantage of instructional technology and, for instance, to write interactive computer assignments, which we know work much better than straight lectures. It takes time to spend with students; even encouraging email dialogue is a new drain on faculty time, and so people's teaching styles, the size of their classes, and the demands of the courses they are teaching should be factored into the assignment of teaching load. Someone with 200 freshmen that they need to teach how to take notes should have a lower teaching load than someone who has upperclassmen who can pretty much get along on their own. But see, that's really expensive.
There are things that we do now--and this relates to the teaching evaluation surveys. They are basically worthless unless you have data giving us such things as the average scores for small required courses, large required courses, small courses for majors, and large courses for majors. What does a 4.2 mean when you have nothing but your own score to look at? What are all the other required Calculus II classes of between 25 and 40 students scoring? How do you measure up against that? But that's expensive, and it seems to me that, until we have that, the teaching evaluation results are basically meaningless and uninterpretable.
Final list. How do we know if we are effective? And this is also something that I don't think we are paying for. Obviously, our best way of telling if we are effective is what the student can do at the end that he or she couldn't do at the beginning. If you are teaching a chemistry course and your students can do a titration at the end and do it well, they've learned something. If, at the end, they can write a good argument essay with a beginning, middle, and end, and persuasive evidence, they've learned something. And most of us know what we've actually managed to teach our students.
There are other ways to judge effectiveness. And the rest are fairly expensive. For instance, there's performance in the next related course. For instance, how does a student who takes Chemistry 102 do in Chemistry 103? In History 101 and History 102? How does a student who has had Freshman Composition manage to succeed in a course where they have to write essay exams or term papers? But who's going to pay for doing a longitudinal study of those students in sequential courses?
Third, another measure of effective teaching is sustained interest and perseverance in a field. In other words, how many students remain math majors? How many take an English course and become majors? How many take philosophy and then, when they get an elective in their senior year, come back and take it? Another tactic that many colleges are using today--and I say colleges, not universities--is two-year-later course evaluations. I once taught at a school that did that. And it's pretty interesting. There are some very dramatic flip-flops. Teachers who got fairly low scores on being personable and charismatic usually got very high scores two years later, and people would write on the bottom, "I didn't realize how much I was learning," or "This teacher challenged me and I wish more had done that." And so by taking a selected group of students two years later and having them evaluate History 102 or Freshman Composition or math, perhaps, you get a much more realistic and helpful evaluation of the course.
Another is the regular compilation of success/non-success rates of students, with drops figured in, coupled with profiles of the make-up of a class. Unpopular as this statement has made me: if more than 35% of Auburn students are failing your course, there's something wrong with the course, the teaching, the atmosphere, or even the group of students that you're taking into the course. Maybe there should be a pre-requisite. Maybe the students should be more mature. But that figure should be a warning. 35% non-success is a whole lot. And it's expensive. Those students are coming back. And you've got to teach them, make spaces for them, every year.
So at the end of all this about how expensive it is, you come back to a cost-benefit analysis. Would we, for instance, revamp a core course based on results from any of the things above? Would we insist that a faculty member have a mentor or do some serious reworking of their content and method based on the results of any of these things above? Would we give that teacher time to rework his or her course? Would we provide more GTAs so that there was a more personal attachment between the courses, the instructors, and the students themselves? If we are not going to pay for it, then why do we pretend to do it?
PHILIP SHEVLIN: OK. Well, I just viewed this panel as having a broad area of assessing teaching effectiveness and then a more specific area of the role played by teaching evaluations. I know there's been a lot of research on teaching effectiveness, and I don't really know that much about it, so what I thought I would do is address this problem of teaching evaluations, because I'm a big fan of teaching evaluations, and I didn't agree with too much in Professor Trout's article--so all I really want to do is tell you sort of my vision of where I think we ought to be going with teaching evaluations.
And first of all, I could summarize Professor Trout's article. He made several points, in my view. He says, first of all, "If we don't know what effective teaching is, how can we evaluate it?"
Well, we may not know, but I think, as Paula says, we can recognize it. I think it's kind of like the Supreme Court and pornography: we'll know it when we see it. And if you think about it--at least as I think about it--the most effective teachers that I remember, of course, were those that I had when I was an undergraduate, and I think, looking back, that I'm well-qualified to evaluate their effectiveness. And I think that this is why student evaluations are important. The best person to evaluate the effectiveness of a teacher, I believe, is the student. And you know, I've sat in on some of my peers' lectures in chemistry, and it's very difficult sometimes, as long as he's articulate, for me to evaluate, because I know what he's trying to say. I'm not hearing the material for the first time. I know exactly what he's trying to say, and so I think we need to pay attention to student evaluations.
Trout says the evaluations contain questions that are ambiguous or unclear, and certainly they do--you guys are addressing that problem--and we all know that many of the responses reflect some confusion on the part of the students. My favorite is one that Kurt Ward in our department got. He said that the student wrote, "This course is so difficult that it's impossible for the average student to get an 'A'." That's always been one of the [inaudible] around chemistry.
Trout says they're dumbing down undergraduate education. Well, again, I think you have to buy into this idea that the students are getting to be of worse and worse quality. And we all say that--I say that, everybody says it--but I'm always cautioned by remembering that they've probably been saying this since the time of Socrates. Students have always been declining. You know, where are we? Where were we when we went to college? So I think you have to be careful about this.
But I think the most damaging thing about this article is that he implies that students are just not responsible enough to participate in evaluation, and I think that's the problem. If you treat the students as if they are not responsible, they won't be responsible. The impression is that the students are paying $10-20,000 per year or more to go to university and then engage in a widespread conspiracy to learn as little as possible. And I just don't think that that's valid reasoning, but that's sort of the impression I got from reading it.
Of course, we all worry about a quantitative scale to evaluate faculty effectiveness in anything--publications, grants, anything--and I think that's always going to be a problem--how these things are used--and I really don't know what to do about that. But I believe that the goal of evaluation should be mainly to increase effectiveness, and there are just a couple of things that I think we should do--my view of the evaluation process.
First of all, I think we should convince the students that they're part of a process whose goal is to improve the university, and I think that many students have a very, very fuzzy idea of student evaluations. I think during orientation, during the advising process, they ought to be told how important these are. They ought to be told how important they are to improving the quality of education, and what effect they have on professors' careers, and that these are things that they should take seriously. I believe we should remove the evaluation process from filling out the forms in the classroom. I know that many do this on the last or next-to-last day of class, when we have a lot of stuff to cover, and it's a hassle, and the students are stressed out. It seems to me we have the perfect mechanism to do that with the development of all this information technology on campus. Why can't we do these evaluations on the web after the class is over? I think we can easily think of a completely anonymous way to do this. The student could log on with his ID number and anonymously fill out whatever evaluation we had. The problem, of course, is how you are going to motivate the students to do this. And we could do this either in a positive or a negative way. We could say you can't enroll for the next quarter unless you do this--which I would not favor--or perhaps we could give them positive reinforcement. Maybe people who did this would get priority during the registration process or something.
And I really think that students have a very different view of the class right when they're studying for three or four different finals than the next week, when they feel like they've worked hard and the work has paid off. And I think we might get better comments.
Again, I think we should recognize that the major purpose of student evaluation is to improve the quality of instruction, and the way I think we should do that is to develop a dialogue between the faculty and the students. I know personally that the things I find most valuable about student evaluations are the comments that the students write. What I would love to see in my classes--we do have email dialogues with our students, and I actually disagree with Paula: I think that's an efficient way to do things, because I usually email the questions to the rest of the class, and a lot of them do have questions; they all do--but anyway, what I would like to see is an anonymous way the students could write to you--again, logging in, and they would not be every student on campus but those in the class--so they would be able to give you some feedback, and I know that they are afraid to criticize. I'd like to see that throughout the quarter, and perhaps it could even be a two-way street.
So, I think that's what I would like to see. I would like the major purpose of evaluation to be to improve quality by developing a dialogue between the faculty and the students, and I think these things have to continually evolve. Now, we might ask questions--we might want to know whether the students thought that computer-aided instruction was valuable or not. I mean, a lot of us are going a lot of different ways, and I'd like some feedback. I do a lot of computer-aided instruction in class, and I'm still worried about whether that's the best way to do it or not, and I don't get much feedback. So those are basically my personal views of the way I would like to see the process evolve.
PHILIP LEWIS: Well, let me just say a couple of things and just take a minute or two. I, as Barry said, think of myself as a cognitive-developmental psychologist, so I have a somewhat different take on student learning. Paula talked about individual differences and learning styles. I think you can also think about the fact that many students come to college with a fairly primitive orientation to learning, where they think what learning is all about is taking in information and assimilating it so that they can give it back to the professor and retain it. And, of course, where we want them to get to is where they become able to generate information, evaluate information, and actively construct ways of thinking about and working with what they are learning. And that is a developmental process, and one of the challenges for us as teachers is to figure out how to get the ones who are not there yet through that process. And then another challenge is that you have both kinds of students in your classes, or students at different points in this developmental process, and this can be a challenge to the instructor.
Then, the last thing that I would like to say is that evaluating where students are, and evaluating change, from this developmental perspective is a very difficult proposition, because you have to have them generate material that you can then look at and evaluate to see how they are processing it. Any form that they just put a check mark on is never going to come anywhere close to allowing you to evaluate how students are processing, constructing, and making sense of the experience that they are having in the class.
I'm going to stop there so that we can have a discussion.
BARRY BURKHART: You get the star. Short and sweet. OK. Please feel free to ask questions.
[UNIDENTIFIED SPEAKER (History)]: I actually think I like, you know, the tendency of what you did with the questions. I got appointed to head the history department evaluation team by yelling and screaming at a meeting about our current form--it's quite long, and it asks a lot of what I call touchy-feely questions, which measure, it seems to me, or elicit students' responses having to do with their comfort level--and that speaks to what Paula and other people said--what makes students comfortable in the middle of the class may have nothing to do with the effectiveness of the course. And later they look at it very differently. It's a complicated subject, but it seems to me these things like "motivation" and "organized the class well" are versions of those kinds of questions. Motivation is a strange thing. But I come back to that very problem with the two questions you have on the evaluation reports. Number eight I like: "The course content was interesting." That seems like a very nice question to ask. If the professor can't make the course, the material, interesting, then there is a problem. On the other hand, "The course content was valuable"--that one worries me. For instance, if you are teaching core courses that students are required to take, their view of what's valuable or beneficial at the moment may not be very realistic. I bet you're inviting low scores on that.
JEFFREY FERGUS: Exactly. The idea is not that you're trying to find out from the students if it is valuable, but whether they think it is valuable. The reason we were thinking of two is because we looked at the difference, for example, between English Composition versus Great Books. The Comp was a little bit higher--the questions weren't exactly the same--on the relevancy/value, whereas the Great Books was a little bit higher on the interest. So what we were trying to get at is to find out where the student is coming from. Not to find out from them if it is valuable, but whether they think it is valuable at that point. If an engineer is taking calculus and says it's not valuable, obviously, they don't know what's coming. But we want to find out what they think at that point.
PAULA BACKSCHEIDER: Can I ask? Are you going to finally give us--these are machine-scored, so it would be quite possible to actually give overall averages? Because it seems to me that a student could answer "course content was valuable" and they would be voting against requiring, say, that geology course that a lot of them take as part of the core. So do you plan to provide people with that, so that the person teaching that, or freshman composition, wouldn't be compared to the engineer you've just given as an example?
JEFFREY FERGUS: Well, you would get all the scores. And we're also looking at different ways of presenting them, so, for example, you'd maybe get a separate average for the course and a separate average for the instructor.
PAULA BACKSCHEIDER: So would you get the average for that kind of course?
JEFFREY FERGUS: Well, that's at the higher level. We haven't really talked about that. But that's certainly a possibility--having that information. And again, that's the reason for having the context of how the student feels about the course and how that might affect the rating. If you're taking a course that you don't think is valuable and you don't think is interesting, the instructor's score may be lower accordingly. So it's really just to give a context.
PAULA BACKSCHEIDER: But that would also help. If you had that breakdown, you would have a way to judge effectiveness, because if that teacher has made Great Books seem valuable and they get, say, a 4.7, but the average for Great Books in general is 3.2, then you've got a reason to reward that teacher; but if you don't get the category presented with the individual score, you're never really using science to judge effectiveness.
JEFFREY FERGUS: Yes. And other things, such as grades--the grade for the course. There are a lot of correlations that would help to better use those numbers.
[UNIDENTIFIED SPEAKER]: When we use these at a university-wide level, the evaluations--they just don't reveal information. You know, somebody sits back and utilizes them, one person for one purpose or another, but it does have an effect, depending on how they are used, on, you know, a lot of things--raises, promotions. They are used in a lot of ways, so what I was suggesting--of course, it doesn't tell us the value of a course, but we ought to be very careful not to use questions that, in some cases, will almost surely get lower ratings than they probably should. It's a very subjective kind of judgment for people who are being required to take courses that, you know, at that age and level of maturity, they don't really see the value of. Do you see what I'm saying?
JEFFREY FERGUS: Right. Again, we're looking at a context for the evaluation, because the evaluation of the instructor is really what the instructors should be most concerned about--the course, to some extent, is outside, in some cases at least somewhat outside, of your control. You have to teach this course, and with that course it is very difficult to get some students to see that it is valuable.
PHILIP SHEVLIN: Yeah, but Paula, I think, addressed that when she said we would compare it to the mean in that course: somebody may do poorly with respect to all the other classes, but if he's doing better than the mean in that course, he's doing a good job.
BARRY BURKHART: I'd like to spread it around and let other people talk too.
DENNIS RYGIEL (English): I have a few very brief specific questions and one suggestion. Specific question: Is the form set now? Is it at the point that it will be implemented in the fall? And, if so, are the forms available?
JEFFREY FERGUS: The new questions? No. We're still using the current questions. We only changed the format in terms of the things that had to be changed for the semester system, you know, having four digits or letters for courses.
DENNIS RYGIEL (English): So these are still in draft form?
JEFFREY FERGUS: These are still in draft. We're hoping that we can propose recommendations in the spring. We've gone through a couple of stages. We're trying to get some student input now.
DENNIS RYGIEL (English): I have one suggestion based on something Paula said. I agree that it would be very useful to have comparative data. As it is now, when departments submit teaching effectiveness surveys, we are required to submit them as a package. They all go in, and then we get the averages, but they are for every course and every instructor. They are not specific to English Composition I, for example--English Composition I, 128 sections. It's very difficult for us at the department to come up with comparative statistics by hand when it's the kind of thing that could be handled very easily. And my specific suggestion is--I don't think they'd accept this suggestion, but if they don't want to write a program to split them out, let us submit them in packets: here's the packet for Composition I, here's the packet for Composition II, here's the packet for Great Books I, and here's the packet for Great Books II, and then we'll take the averages.
JEFFREY FERGUS: Guys, I think the computing division is willing to present it in different ways, but they don't want three hundred different ways of presenting data. So I think that's possible. I think what we need to do is to find out what would be useful for various units and come up with a consensus that would be helpful to everybody.
LARRY GERBER (History): Twenty years ago, when I was teaching at the University of Arizona, computers had to have been more primitive than they are now, and we got university-wide student evaluations. We got a college-wide average and a university-wide average. I mean, it should not be that difficult to do that. It would certainly be useful, even in terms of talking about merit raises--for a department head or chair, you might be low in your department, but if you were substantially above university norms, you wouldn't want to penalize somebody by saying you're the poorest teacher in this department but you're [inaudible].
PAULA BACKSCHEIDER: I think it's grossly unfair the way the system is now. Because if a teacher has a year in which they teach a lot of service courses, like that big chemistry or physics course, they can really get whacked. And then you've got people sitting around teaching, maybe, a lot of upper-level courses where the majors are really happy, and you're a good teacher in your field; and yet that poor person who's teaching this big core course--the student sees that question and they say, "I don't believe in requirements," and so they check "not valuable." It's just not fair to compare a person in a required core course to the teacher of a 400-level major course. It's just not fair.
HERB ROTFELD (Marketing & Transportation): I'm extremely bothered by part of the tone here, because you keep talking about averages from the scores. This is not ratio data. This is not the sort of data that lends itself to averages in any manner, shape, or form. At its best, it is nominal data, and you're taking averages and comparing them. One member of the Promotion and Tenure Committee once said at one of their reviews, "4.2 is average teaching, and that sort of person shouldn't be a full professor, because 4.2 is an average teacher the way a B is an average grade." And there are a lot of people on this campus that think this way.
And when you have quantitative statements, including overall effectiveness of an instructor, administrators who like to rank everybody 1, 2, 3, 4, 5 will do this despite the fact that, as a friend of mine always likes to say, "half of all astronauts are below average." At the risk of saying something more pragmatic here, instead of just asking how can we assess it, I think the question also can be "Why should we assess it?" And the bottom line is, you say, for assessment, because we want better teachers, and my question is: every time the teaching effectiveness committee takes on its job, they say let's revise the form. And no one ever asks, "Does this form lead to somebody being a better teacher?" And I don't see this in these questions.
And I'll take one in particular, about the instructor being well-organized, or questions about how well-prepared they are. And I'll speak of myself, but I have a lot of other people who tell me these things personally. When I was in graduate school, I defended a project in the morning and went to lunch with my advisors and was gone for two hours, and I was plastered by the time I was lecturing to the class. This student came up and said, "Boy, this was the most prepared you ever were." They can't tell. And what they tell me about how prepared I am is not going to change whether I prepare more or less. So the question is, "Does this form lead to better teachers?"
DON CUNNINGHAM (English): I'm old enough to have taught in the '60s at a university that was very large--it closed down after Kent State. A feisty bunch of students on that campus. One of the things that student government did was they made--I don't know how they did it--they had some sort of arrangement with the central administration--they published the results of teaching effectiveness--of everybody. This was at Southern Illinois University. That was one of the things that got everybody's attention, and so there was a big move to assess the teaching midway through the semester. That made a big difference in the way a lot of teachers changed their teaching methods. It's not that we don't constantly try to monitor and sense how well we're teaching--I get feedback every day. I'm wondering if there's not a possibility of considering--not publishing the results of the survey--but having an interim mid-term evaluation. For one thing, it will get away from loading everything onto that stressful last day or last two days of class, and you've not sacrificed a whole semester to learn how bad a job or how good a job you did. You get some kind of assessment.
HERB ROTFELD (Marketing & Transportation): Can I say something on that? I used to give my own midterm evaluation, an open-ended form that was designed with a group trying to improve my teaching at another school. I was directed to stop by an administrator in my college, because he had student complaints that said they can't evaluate me until they know--until they're closer to knowing--their final grades.
PHILIP SHEVLIN: You know, when I first came to Auburn, the SGA ran the evaluations, and they published them.
BARRY BURKHART: At a lot of schools, the student government or some parallel body does evaluations, and they post them on the web, and you can find them.
[UNIDENTIFIED SPEAKER]: [Inaudible] And I did my own evaluations. [inaudible] every dynamic. Every class is different. You get some who love the topic. You get some who will talk. Every classroom has a personality of its own. There are different ways, perhaps, in the make-up of who teaches, who learns, and in what kind of manner. But I find it very effective. And what you find, or what I have found, and more importantly, is a kind of message from the students that you care, and, really, that is a very big part of this effectiveness--whether you learn someone's name. You know, it's all marketing, and, you know, whether I'm a good teacher or not, I can say that I honestly tried, and that's about the best we can expect of anyone if they have the intention--so maybe a couple of questions on the intention of the instructor to be effective. Maybe that would be a good question.
Maybe another good question could be an evaluation of the course: Do you like this kind of subject? For instance, if you had me taking a Greek course, you know, I probably wouldn't be as excited about it as if I were taking something else. So maybe that would be another kind of backup on "Do you think this course is valuable/beneficial?" Do you even like the topic? That would be another route to take. You can get into the complexity of interactive relationships and all that other stuff, but, maybe, we're trying too hard. Maybe we need to be just real basic. Maybe we need not try to be so elaborate and try to weed out information. Maybe we should have a few basic questions. Basically, are we on the right track? Are we being pretty effective? Maybe we ought to be a little bit more simple and not too complex.
PAULA BACKSCHEIDER: One of the things that always fascinates me about these discussions is how, when we talk about how we assess teaching effectiveness, somehow we always get back to the form, and we begin to talk more and more about the details of the questions. And I think that your comment suggests another way of doing that. Suppose people, as part of their end-of-semester, end-of-year annual review, had to talk about how they assessed their effectiveness; and what people have said about getting feedback at mid-term is a good idea.
One of the things the Retention Committee tried really hard to sell was getting faculty members to use student notes on a lecture as a quiz instead of giving a quiz--make the kids photocopy their notes from the last lecture. And I was particularly interested in that because we were very concerned with the very developmental issues that he brought up, and there is nothing that tells you where your students are like their notes. They tell you whether you are being clear, whether you are being interesting. They tell you whether the student is even able to take notes. They tell you some real interesting developmental things about the student.
And there are lots of ways you can assess your own effectiveness, and you see, that's part of being analytical about teaching, and in a way the teaching evaluation form is a distraction. Because if you thought, "OK, I've got everybody's first set of notes and everybody's last, or everybody's first quiz, and then I ask them the same question again--did they solve that problem in a more complex way, a more efficient, economical way?"--that's the kind of thing that would engage departments and groups in assessing effectiveness. You know, I noticed, for instance, that they took out "the instructor was actively helpful." Well, number two on the Retention Committee's list was do everything you can to keep the student from feeling like a number. And you're talking about being actively helpful. And yet that disappears from the form.
JEFFREY FERGUS: Actually, it was just changed. It's still number six.
[UNIDENTIFIED SPEAKER]: I have a question for Dr. Backscheider. You mentioned that there are three types of learners, and I'm assuming that no one learns just one way, but I'm wondering if there's any data that suggests which of those works best?
PAULA BACKSCHEIDER: There actually is. And, in fact, I omitted tactile learners; I kind of squished them in with kinesthetic learners. And I'm sure you could do a better job of this. But people do learn to learn in other ways. For instance, almost everyone who's admitted into college is socialized into, or adapts into being, a visual learner, but there's some really interesting data on who succeeds in different fields because of the way they learn. For instance, kinesthetic learners often make good engineers. They may be more three-dimensional.
There's some real interesting data that very creative people, if they survive the university system, are kinesthetic learners, and so when you get an answer to an essay question that just--wow! Where did this come from?--that student may be a kinesthetic learner, because they've translated it into some sort of narrative or some sort of experience. So most students learn to learn in a variety of ways, but they have to adapt very rapidly to being aural learners. Think about how few high school students ever hear a lecture, and that's why the failure rate in lecture courses for freshmen is as high as it is. These kids really have to learn very quickly to be that kind of learner. It's a real interesting question. There's some data that suggest that students at universities that have remedial programs are very heavily tactile and kinesthetic learners.
[UNIDENTIFIED SPEAKER]: I read somewhere--sorry, I don't remember where--that teaching evaluations are always meaningless unless they're combined with student and peer evaluations. And my question is, has anybody in the group had any experience with peer evaluations? I wonder if that's ever been considered here at Auburn.
LARRY GERBER (History): It's actually required in the Faculty Handbook.
[UNIDENTIFIED SPEAKER]: Oh! I didn't realize that.
PAULA BACKSCHEIDER: Our [the English] Department does it routinely.
PHILIP SHEVLIN: It's in all the tenure packets. Everybody has them.
[UNIDENTIFIED SPEAKER]: So how did it happen that I've never had the experience of being evaluated by peers or evaluating peers?
HERB ROTFELD (Marketing & Transportation): It's not done that commonly.
BARRY BURKHART: Because you were tenured way back when.
HERB ROTFELD (Marketing & Transportation): It's still not done in some colleges.
[UNIDENTIFIED SPEAKER (History)]: The History Department decided years ago to be quite suspicious of the results, although they serve us well. We get good results, but we are very suspicious. You see a lot of anomalous results. You know, they think the speaker wasn't [inaudible]. And so we decided long ago to have administrator evaluations and peer evaluations--a professor will ask for a peer evaluation, ask someone to come in, or, you know, it's mediated in various ways--and then that report was simply written up in verbal fashion and turned in to the departmental chairman. I mean, there are different ways that one could do it. But I think what I would like to see is very, very simple questions, and I agree with the tendency, since we do need to generate quantitative data that's accepted everywhere, to go to a variety of other means to temper the results, whatever those are.
[UNIDENTIFIED SPEAKER]: I'd like to pose a question to any and all of the panel about [inaudible]. This document is called a course evaluation, and teaching is one component of many [inaudible]--choice of texts, kinds of meetings, atmosphere of the classroom, the course itself in terms of distribution and the quality and the whole shape of it. The questions that departments themselves submit have to go back into the process [inaudible] in addition to the teaching situation. And when I came to Auburn, I realized that this was called a teaching effectiveness survey, and I wondered what was going on with my students when, near the end of the term, they were suddenly meeting this kind of reverse of reality. All term long they had been told that this was their responsibility: You're going to have to make the grade. You're going to have to do this. It's going to be up to you what you get out of this class. And you have to help create this experience. And now on the last day it occurs to them: Oh! It's all about my teacher. Oh, she should have been doing x number of things for me, and now I feel like the questions just sort of target her and her abilities, and all at one time I'm a cheated consumer, or I have to reassess what I should have been doing and what all she had done for me. I wonder what you think would happen to this particular teaching evaluation component if we were to rename this document "the course evaluation" and to even out the different areas of evaluation, and to begin by putting questions 8 and 9 first, or more than two, with the teacher sort of coming, as she really does, as I certainly do, underneath this rubric over which I have little control.
JEFFREY FERGUS: Actually, in some of the discussions, we talked about this--it should really be learning effectiveness as opposed to teaching effectiveness, and that has come up. The other thing related to that--and it's fairly true in engineering, and I think that's true in SACS and so forth--is that there's more and more emphasis on evaluating what the students can do, the learning objectives, as opposed to the input, which is the teacher, so I think there are movements in that respect. This particular instrument--it would be difficult to do that in just a few questions. And it would be different for different programs.
[UNIDENTIFIED SPEAKER]: [inaudible] two or three just about the actual classroom experience.
JEFFREY FERGUS: One of the things is, we didn't want to make it longer. Because I know personally, and Philip also mentioned it, the most valuable thing is the comments, and if you have to fill out forty questions before you ever get to the comments--as it is, even with eight questions, most of the students don't seem to make comments. So we didn't want to make it too long.
[UNIDENTIFIED SPEAKER (Pharmacy)]: I just rotated off after three years on the teaching effectiveness committee, and I'm afraid I'm going to have to disagree with Jeff. I don't necessarily think that simpler is better. In the School of Pharmacy, we pretty much take the evaluation process in house. We do course evaluations that incorporate teacher evaluation. The reason is because, in our program, we're going more and more to team teaching, even inter-disciplinary team teaching. The faculty essentially generate the questions on the evaluations themselves. There's some difficulty now because we are trying to standardize the process, and we have much more complex evaluations, but we also get much better feedback, including many, many comments. The difference is that the students believe the process is meaningful. I would disagree with Philip. One of the problems is not that students are naive--though they probably are in the first year or two--but that after that they get increasingly cynical about this whole business. Faculty are cynical because they don't think the process works at all. And they're right. And students are cynical because they never see any impact from anything they say on evaluations--any actual change. And they are right about that, too. They can, or hopefully they will, be able to see our programs change on the basis of their comments and their recommendations.
So, I think it's right that you should have a university-wide system, a process, that you don't necessarily have to use, but I'm very concerned about that. I think it may work when you're talking about the difference between Great Books or an English course, but when you start getting into major programmatic differences, I'm not sure that this one-size-fits-all approach is helpful at all.
PAULA BACKSCHEIDER: It sounds like you all are making what I think is really an AAUP-issue kind of shift--that we're calling this "teaching effectiveness" and we're talking about that, but we know that these forms and most of the teaching effectiveness tools that we have here are really evaluation tools, and all of you who have mentioned anything about their use have talked about merit raises, and promotion and tenure, and things like that.
And interestingly enough, this project that we ran with History and Chemistry was called "Change the Culture." And if somebody really used these forms for teaching effectiveness, they might be using them in a way where people would sit down and discuss a course and say, "Everybody says, or a large number of students say, this about this course. How do we rework it?" Or, "Year after year people say on your evaluations they wish there were more time for discussion. How can you build that in? What do you think of that?" So I don't know if it is possible. I used to work at a school where the Dean was into teaching improvement and not very much into evaluation, and I see how the forms are used here, almost like clubs to hold over faculty instead of ways to say, "Gee, you're doing this really well. Maybe you could mentor this teacher who's feeling pretty uncomfortable about having a discussion in their section or group." So maybe this is an AAUP kind of issue. Can we make this a truly teaching-effectiveness climate instead of a let's-figure-out-a-perfect-way-to--somebody used the word "rank"--the teachers in the exact order?
BARRY BURKHART: Last question ...
DON CUNNINGHAM (English): I have a question. I guess it goes under the rubric of permutations or something about teaching. In almost every discussion that I have with colleagues about teaching effectiveness, the point always comes up that what's really being assessed here is rapport, nothing else. What does that imply? What are the implications of that? Another is that if I teach an effective and challenging [inaudible] and dislodge some cobwebs out of students' heads, I'm going to pay for it with a significant number of these students, because I've challenged their value systems, I've challenged whether learning in courses in their major, where they all share exactly the same kinds of thought processes, [inaudible] I'm asking, sort of trying to de-center, reorient that. Another suspicion is that the lecture course is privileged on questions about whether the course is well-organized or not, rather than a course that is much more interactive and perhaps inductively introduces topics, where you don't really have a conspiracy with students until later, and this causes students to work and to think apparently more accurately than usual. There's also--I'll just mention one more--and that is what the central administration... [end of tape].