The problems with quantifying "good teaching" are myriad. Clearly the first, and foremost, issue is that the purpose of 99 44/100% of all such efforts is to increase efficiency and enhance the ability of administrators to discipline their labor force. The second problem is that our classes are filled with individuals with wildly divergent histories, interestedness, and academic capacities, classically defined. Third, not only are our classes filled with students of wildly divergent background preparation/interestedness, but these same students also have wildly divergent "lifestyles" - defined most clearly by the gap between those who never work for pay during their years in college and those who work multiple jobs while seeking to complete their schooling. Fourth, teachers in many institutions have things like master syllabi or mandated textbooks that constrain their ability to teach as they teach best. Fifth, operationalizing "higher"-order learning (did I say screw Bloom and his taxonomy?!) - genuinely contextual, creative, and critical thinking - within a 10-15 week quarter/semester verges on the farcical.

Many of my best students didn't test all that well and, committed to working through the material, often struggled on take-home exams because they were making a genuine effort to engage the material intensely - an effort not completed by the time the assignment was due, which meant that the assignment was the stimulus for learning rather than a product indicative of completed learning. However, that stimulus to engage/learn generates percolations and iterative distillations that play themselves out over periods of time lasting far past undergraduate coursework.
On the one hand, I am fairly sympathetic to the idea of learning objectives and even some kinds of pre- and post-testing to see whether the objectives intended to be met were met. On the other hand, the mere generation and collection of this kind of data serves to refocus teaching/learning towards producing learning objectives most students can meet and then teaching to those objectives in such a manner as to feed the nightmare that is instrumental accreditation fever, with a bit of "just tell me what the fuck's going to be on the test" flu on the side. My sociology classes are as historical as they are sociological, as geographic as they are cultural, and as provocative as they are focused on making the taken-for-granted a problem, and not necessarily a resolvable one. All the best courses I ever took - in the natural sciences, social sciences, and humanities - were critically anti-disciplinary in this way... and the priorities embedded in every attempt to quantify successful teaching/learning all work against this mode of classroom engagement.
So there, probably doesn't help, sorry... feeling cantankerous (which is probably why I top-posted).
A
On Wed, Jan 11, 2012 at 7:21 PM, Jeffrey Fisher <jeff.jfisher at gmail.com> wrote:
> I only thought you might have seen something. Alan, I thought, might do
> some stats on it.
>
> I like studies. but in fact the long running argument I'm in is about my
> resistance to the idea of quantifying good teaching, or possibly even
> identifying it. I don't know if we have any really good evidence that such
> a thing as good teaching exists, even though I think I am a good teacher
> and that I've been subjected to a lot of good teaching. maybe we need to be
> much more specific than "good teaching" to say anything meaningful?
>
> but then I keep coming back to that study last year using data from air
> force academy math and physics classes. and that study flies in the face of
> the focus on standardized testing. well, more specifically, the focus on
> short term results.
>
> sent from phone. please excuse typos or bad autocorrect.
>
> On Jan 11, 2012, at 7:03 PM, 123hop at comcast.net wrote:
>
> > I'm of that generation who never studied statistics. I know the diff
> > between average and mean, and that's it.
> >
> > If I wanted to become a journalist or statistician, I'd take a class.
> >
> > Also, I believe that the way you evaluate a teacher is that parents,
> > students, and other teachers get together and evaluate. I don't see what
> > numbers have to do with it.
> >
> > The reason why the bureaucrats want numbers is because they understand
> > nothing about teaching; but if they get numbers then they get power 'cause
> > they know how to spin numbers.
> >
> >
> > Joanna
> >
> >
> >
> > ----- Original Message -----
> >
> >
> > On Jan 10, 2012, at 6:08 PM, David Green <davegreen84 at yahoo.com> wrote:
> >
> >> There are enormous ideological and statistical problems in the
> >> NYT-referenced study,
> >
> > this is precisely what I want to hear about: the statistical problems.
> > it's where I'm the weakest in training and experience. everything else
> > you've talked about again seems more or less straightforwardly the case.
> > but it doesn't help me see the statistical flaws. I just got served with it
> > as part of a long running argument about the possibility of measuring good
> > teaching. and I don't, I'm sorry, have the statistical chops to just look
> > at the article and know what's wrong with the analysis in the study. that
> > is where, I think, I need some concrete assistance. but I won't keep
> > whining about it. I just thought Joanna or Alan or someone might have
> > something to hand.
> >
> > j
>
> ___________________________________
> http://mailman.lbo-talk.org/mailman/listinfo/lbo-talk
>
--
*********************************************************
Alan P. Rudy
Assistant Professor
Sociology, Anthropology and Social Work
Central Michigan University
124 Anspach Hall
Mt Pleasant, MI 48858
517-881-6319