
Weighing Scholarship

Randall Stephens

On this blog we've looked at the issues of assessment, standards, and weighing scholarship here, here, and here. But I'm willing to bet that nothing we've posted will come close to stirring the kind of controversy and debate that Mark Bauerlein's essay in the Chronicle will likely provoke ("The Research Bust," December 4). The amount of time that literary studies scholars spend on articles and books, he says, just isn't paying off. One major problem: overproduction.

"However much they certify their authors as professionals and win them jobs and tenure, essays and books of high scholarly merit in literary studies suffer the same inattention all the time," observes Bauerlein. He goes on:

Why? Because after four decades of mountainous publication, literary studies has reached a saturation point, the cascade of research having exhausted most of the subfields and overwhelmed the capacity of individuals to absorb the annual output. Who can read all of the 80 items of scholarship that are published on George Eliot each year? After 5,000 studies of Melville since 1960, what can the 5,001st say that will have anything but a microscopic audience of interested readers?*

He knows that it's a controversial point. (See the lively comments section.) He uses Google Scholar to track citations, and doubters will point out, writes Bauerlein, that this is a flat-footed approach, one that fails to capture the larger contribution of scholarship. Some will say that research makes scholars into better teachers. And others will point out that we need lots of work on subjects that will not draw major attention; that does not mean the work is useless or can be tossed aside. Still, Bauerlein counters, these objections hardly justify a college or university paying a third of a salary for work that doesn't have a significant impact.

Could this same sort of assessment be on the table for historians? (Get ready to figure out how to amp up your Google Scholar stats.) How should administrators and reformers measure impact or influence? Should they be doing so at all?

History’s Tests

Chris Beneke

Testing, it seems, brings out the anti-Whig in all of us. The declension model was back in fashion last week as the American public was reminded of how little history it knows. The National Assessment of Educational Progress’s report on U.S. history revealed that American eighth graders have difficulty enumerating colonial advantages over the British in the Revolutionary War, fourth graders have trouble explaining Abe Lincoln’s importance, and twelfth graders often fail to grasp who was allied with whom during the Korean War.

Here’s a quick summary of the academic fallout:

Several historians suggested that we un-knot our knickers. Sam Wineburg reminds us that we’ve been wringing our hands over our historical ignorance for about a century now and assures us that common-knowledge questions are not included in these assessments. In other words, graduating seniors probably do appreciate the significance of Harriet Tubman, Rosa Parks, the bombing of Hiroshima, and Auschwitz. They just aren’t asked about stuff that we know they know. Paul Burke echoes Wineburg’s claim that the much-bemoaned results may simply reflect the test’s design, rather than the United States’ descent into barbarity. James Grossman adds that, whatever the use of the individual questions, they may not have been asked of the right students. “[I]n many states children don’t study much U.S. history until fifth grade.” “Next year,” he quips, “let’s give fourteen-year-olds a test on their driving skills.”

Very little history until the fifth grade? Linda Salvucci argues that this is precisely the problem. Indiana elementary students, for example, get a grand total of twelve minutes of history instruction per week. Salvucci says that “parents … really ought to be mobilizing to demand that public officials get serious about adequately funding history education in the schools. History must not be allowed to become some optional or occasional add-on to the ‘real’ curriculum.” Her conclusion: “We need a STEM-like (Science, Technology, Engineering and Math) initiative for history.”

James Livingston lays some blame for the alleged poor performance at the feet of professional historians and their affinity for anti-glorious counter narrative. Viewing the matter from the perspective of a 12th grader, he writes: “If you tell me the past doesn’t matter because it’s a record of broken promises, systematic cruelty, and failed dreams, or because it’s an irretrievable moment of eccentric deviations from a norm of appalling complacency, fine, f--- it. If I can’t use it to think about the present, why should I bother? Thanks, Doc, you convinced me that I don’t have to.”

Happily, the NAEP flare-up coincided almost exactly with the publication of Historically Speaking’s roundtable on historical thinking at the K-12 level. Fritz Fischer suggests that “[w]hen it comes time to write the guidelines for how history is taught in the classroom, historical thinking [as opposed to the digestion of content] needs to become the guide.” Bruce Lesh agrees and details how he focuses student attention on a series of provocative questions and engages them in interpreting primary sources. Robert Bain draws this lesson from the teaching of world history—teachers need to keep the overarching economic and social forces in mind “while attending to what their students are thinking and learning.” The two goals, he points out, are not easily reconciled: while you may be thinking about the geopolitical forces that propelled the European conquest of America, your students are thinking about “Columbus’s desire for personal wealth and glory” (or something along those lines . . . ). Linda Salvucci wraps up the forum with a call for nudging the public toward better history. “[W]e need to grab, define, and educate the audience,” she writes. We need to offer history that is both accessible and edifying.

A Hotline for Teachers

Heather Cox Richardson

It hasn’t been a great week for history teachers. News media made headlines out of a new report that only 13% of high school seniors are proficient in American history. Students perform worse in history than in the other subjects routinely tested by the NAEP, the National Assessment of Educational Progress.

Who is to blame for the appalling condition of our historical knowledge? Most of us could make a pretty decent list of things that make it difficult to teach history today, but according to Rick Santorum, the problem is liberal teachers. In Ames, Iowa, hot on the presidential trail, the Republican former Senator from Pennsylvania ignored the Texas textbook controversy, the Virginia textbook controversy, and the rewriting of history by David Barton, Sarah Palin, Michele Bachmann, and Mike Huckabee. Instead, Santorum declared:

We don’t even know our own history. There was a report that just came out last week that the worst subject of children in American schools is — not math and science — its history. It’s the worst subject. How can we be a free people. How can we be a people that fight for America if we don’t know who America is or what we’re all about. This is, in my opinion, a conscious effort on the part of the left who has a huge influence on our curriculum, to desensitize America to what American values are so they are more pliable to the new values that they would like to impose on America.

The bad news for teachers continued. How does Congress propose to combat this deficit in history? By slashing, or perhaps eliminating altogether, the funding for the Teaching American History program.

So for all the disheartened teachers out there, I offer a ray of hope. Months ago, I mentioned a high school sophomore who had never heard of the Cold War hotline between the U.S. and the U.S.S.R. Just yesterday, the student sent me a copy of her final project for National History Day: an interactive website about the history of the hotline. There was a strict word limit and an equally strict limit for images, so the project does not take long to read. But I highly recommend you take some time to click through it (my favorite is the section on pop culture). It shows that, without a doubt, at least some students are learning and some teachers are teaching well.

The student is a minor, so I’m not going to give her name, but hats off to both her and to her teacher, Mr. Christopher Kurhajetz!

Nice work, both of you. You give us hope.

Some Teaching Ideas

Heather Cox Richardson

Next year, I’m going to have the opportunity to teach the American Civil War and Reconstruction to a small group of students at a new university. It seems like a perfect time to try out some new approaches. I’ve been trolling through syllabi and teaching discussions on the internet, and have unearthed a couple of new ideas I think are worth entertaining:

First of all, I always organize classes backward, using my own version of the Understanding by Design method explored by Grant Wiggins and Jay McTighe. This method requires tight organization of material to illuminate a larger point: the “take away” students should learn from a course. Every reading, every lecture, supports this larger theme (sometimes by challenging it).

A number of people use diagnostic essays early in the semester to gauge the level of student writing, but professor Lendol Calder of Augustana College in Illinois has put an interesting quirk in an early assignment that could be used to make the diagnostic essay do double duty. Calder asks students in the survey to write a two-page essay explaining what they know about American history. His goal is less to gauge their writing than it is to see what “story” they use to organize their knowledge. Have they learned “the glory story, the people versus the elites,” or that the story of America is that of “high ideals/mixed results”? This twist on the diagnostic essay would fit particularly well with a course on the Civil War, since so many students come to such a course with fervent views of the conflict, but it seems like it would work well with a number of courses.

And then there’s this gem. I’m afraid I might have to break down and switch to a Mac to be able to use this timeline. Constructed carefully—and yes, it will take ages, I’m sure—this would free up a great deal of time I used to spend lecturing, enabling us to spend more time analyzing primary documents.

What about assessments? I’m still not sure about them, or rather, I’m even less sure about them than I am about the other aspects of the course. For an upper-division course on American history, a long paper based on primary research is a no-brainer. Americanists have the great luxury of being able to send students to almost limitless primary sources in a language they speak. There’s no reason not to let them experience the excitement of doing their own research and the fun of writing it up. (There will soon be similar reason to celebrate for people studying British history, too.)

But what about exams? I’ve written before about team research projects, and I do like the idea of assigning teams of people to research a topic. In this era, knowing how to find information, weigh it, and assimilate it into an argument seems crucial. But for a topic as widely covered as the Civil War, is it possible to come up with interesting assignments that will really require significant teamwork?

The same friend who has tried cooperative work has also tried exams that place the student directly in the era being studied, as in: “You are a 25-year-old man from southern Ohio, visiting New York City on July 12, 1863. How will you spend the day?” In this example, a student would have to understand the history that made southern Ohioans tend to be Democratic and sympathize with the South, and would have to realize that the young man would arrive in New York City in the middle of the Draft Riots that pitted Democrats against Republicans, local government against the national government, and workers against African Americans. I have not yet heard how the experiment went when my friend tried it. It seems to me to have great potential, but also the chance of getting some utter fantasies that would be incredibly irritating.

It’s a wonderful thing to have so many new tools for teaching. Perhaps most of all, it’s wonderful to have the intellectual space to try new approaches. If anyone else has been kicking around new ideas, do let the rest of us know.

Key Performance Indicators and the Heightening Contradictions of Academia

Chris Beneke

Of what value is your scholarship? Historians in Britain are receiving unsettlingly precise answers to that question. In case you hadn’t heard the news, British academics are now locked into a quality-control regime that forces them to measure up against “Key Performance Indicators” over a six-to-seven-year span. The measures are largely determined by government officials, though the actual measuring is done by historians.

Better minds have already suggested that the quantification of humanities scholarship through such mechanisms will dampen creativity, discourage ambitious long-term projects, and lower scholarly quality, while sucking much of the joy out of professional historical work. Randall blogged about funding-driven assessment tools a couple of weeks ago, Anthony Grafton’s AHA President’s column mentioned them in January, and Simon Head recently wrote a more extended analysis for the New York Review of Books. (I did some hand-wringing myself in November.) Nonetheless, one paragraph in Head’s account of the system struck me as especially noteworthy:

In the humanities the RAE (Research Assessment Exercise) bias also works in favor of the 180–200-page monograph, hyperspecialized, cautious and incremental in its findings, with few prospects for sale as a bound book but again with a good chance of being completed and peer-reviewed in time for the RAE deadline. A bookseller at Blackwell’s, the leading Oxford bookstore, told me that he dreaded the influx of such books as the RAE deadline approached.

In other words, the new British accountability system seems perfectly designed to heighten the contradictions in the academic world, ensuring that the “research output” that scholars are dourly incentivized to produce is less accessible to the larger public and therefore less likely to contribute to the informed consideration of things that the public worries about—like, for instance, big social, political, and ethical problems. The argument can surely be made that rigorous research measures drive scholars toward more focused and more readily publishable research, which will ultimately make a greater indirect contribution to the world. But I doubt that it would be convincing, especially when it comes to historical research.

Indeed, the RAE seems well designed to thwart scholars who believe that they should write engaging volumes that people who don’t care a whit about the distinction between social history and cultural history (the common term for them is “general readers”) might actually desire to read. Despite the precarious state of academic publishing, historians have enjoyed the indulgence of academic presses in recent years because of their faith that we will eventually write such books—if not the first time around, then the second, or third. It’s hard to imagine that system holding up if we’re flooding bookstore shelves with carefully calibrated units of research output, rather than, you know, books.

Rarely is the Question Asked: Is Our Professors Teaching? Part III

Randall Stephens

Guess what? Many college students do not learn analytical and writing skills during the four years they spend in college. Students don't study. Courses are not demanding. Collaborative learning does not work as professors think or hope it does. . .

Or so argues a new book, Academically Adrift: Limited Learning on College Campuses (University of Chicago Press), by Richard Arum and Josipa Roksa. More and more students--or, more likely, their parents--are throwing down the cash for college. But the authors ask: "are undergraduates really learning anything once they get there?"

Last week the Chronicle highlighted Academically Adrift and the authors' controversial findings. (David Glenn, "New Book Lays Failure to Learn on Colleges' Doorsteps," Chronicle, January 18, 2011.) Arum and Roksa tracked 2,000 students at 24 four-year colleges. Thirty-six percent of these students who took the Collegiate Learning Assessment essay test showed no significant improvement from their freshman to senior year.

Arum and Roksa certainly have their critics. The study asked too few questions about collaborative learning, say some. Others say that the study, limited in scope, should not challenge the whole undergraduate enterprise.

But, overall, the findings should give us pause. "Mr. Arum and Ms. Roksa don't see any simple remedies for the problems they have identified," writes David Glenn in the Chronicle. "They discovered more variation in CLA-score gains within institutions than across institutions, and they say there are no simple lessons to draw about effective and ineffective colleges." Still, Glenn points out that business and education programs in Texas colleges require that students "take only a small number of writing-intensive courses." The path of least resistance.

Are students today less likely to major in history when the workload is high and the perceived payoff is so low? ("So I'm going to spend all this time reading primary and secondary works just so I can be unemployed after four years of reading, writing, and reading some more?") Five years ago Robert Townsend noted in Perspectives that "Information from the latest Department of Education (DoE) report (pertaining to the years 1997–98 to 2001–02) suggests that in the competition for students, history lost ground while the total number of undergraduate students at colleges and universities grew quite quickly." I haven't seen more recent data, but I can't help but think that there are fewer majors today than there were 20 years ago.

Perhaps history departments could do a better job of emphasizing the portable skills students learn in the major. Why not stress in clear terms that history trains students to think critically and to write clearly? I have my students read Peter Stearns's excellent essay, "Why Study History?," for this very reason. They learn that history students gain: "The Ability to Assess Evidence. . . . The Ability to Assess Conflicting Interpretations. . . . Experience in Assessing Past Examples of Change." Stearns ably shows that "Work in history also improves basic writing and speaking skills and is directly relevant to many of the analytical requirements in the public and private sectors, where the capacity to identify, assess, and explain trends is essential."

I've also had students read Heather's excellent post on this subject from our blog. She noted: "History is the study of how and why things happen. What creates change in human society? What stops it? Why do people act in certain ways? Are there patterns in human behavior? What makes a society successful? . . . . When you study history, you’re not just studying the history of, for example, colonial America. You’ll learn a great deal about the specifics of colonial America in such a class, of course, but you’ll also learn about the role of economics in the establishment of human societies and about how class and racial divisions can either weaken the stability of a government or be used to shore it up."

Sounds like a cure for the "I-learned-little-in-four-years-of-college" blues.

Rarely is the Question Asked: Is Our Professors Teaching? Part II

Heather Cox Richardson

Randall asked a good question in his post, wondering whether college and university professors are encouraged to improve their teaching. He has inspired me to blog about teaching issues in a more systematic way than I have before.

Today the topic that is consuming me is assessment. This is not a new obsession, either on my part or on that of the profession. We’ve talked about assessment for years. . . but what have we learned?

What, exactly, do we want our students to learn in our classes? Long ago, I figured out I should design my courses backward, identifying one key theme and several key developments that were students’ “takeaway” from a course. That seems to have worked (and I’ll write more on it in future).

But I’m still trying to figure out how to use assessments, especially exams, more intelligently than I do now. My brother, himself an educator who specializes in assessments, recently showed me this video (below), which—aside from being entertaining—tears apart the idea that traditional midterms and finals do anything useful in today’s world.

Shortly after watching the video, I happened to talk separately with two professors who use collaborative assignments and collaborative, open-book, take-home exams. They do this to emphasize that students should be learning the real-world skills of research and cooperation just as much—or more—than they learn facts. As one said,
facts in today’s world are at anyone’s fingertips . . . but people must know how to find them, and to use them intelligently. This is a skill we can teach more deliberately than we currently do.

These two people are from different universities and are in different fields, but both thought their experiment had generally worked well. One pointed out—as the video does—that the real world is not about isolation and memorization; it’s about cooperation to achieve a good result.

The other said she had had doubts about the exercise because she had worried that all the students would get an “A.” Then she realized that it would, in fact, be excellent news if all her students had mastered the skills she thought were important. When she actually gave the take-home, collaborative assignment, though, she was surprised—and chagrined—to discover the same grade spread she had always seen on traditional exams. She also saw that some of her student groups had no idea how to answer some very basic questions, and that she would have to go back over the idea that history was not just dates, but was about significance and change.

And that is maybe the most important lesson. The collaborative exam revealed that there were major concepts that a number of students simply weren’t getting. So she can now go back and reiterate them.

I’m still mulling this over, but I do think I’ll experiment with collaborative assessment techniques. Historians have some advantages doing this that teachers in other fields don’t. We can ask students to identify the significance of certain events, to write essays, and to analyze problems. With the huge amount of good—and bad—information on the web in our field, though, we could also ask students to research a topic, then judge their ability to distinguish between legitimate and illegitimate sources (something that might have helped Joy Masoff when she was writing her Virginia history textbook).

As I’ve been thinking this over, a third colleague has inadvertently weighed in on it. He discovered students had cheated on a take-home exam, working together and then slightly changing each essay to make them look original. At least an assigned collaboration would eliminate the problem of unapproved collaboration!

Rarely is the Question Asked: Is Our Professors Teaching?

Randall Stephens

Academics are, by nature, hand-wringers. We worry about the decline in the humanities. We worry about grade inflation. We worry about the troubles of academic presses. Once in a while we worry about the state of teaching. Or, to paraphrase our former president, "rarely is the question asked: is our professors teaching?"

Quite often the appraisal of teaching is negative, though academics and non-academics offer different points of view. In the popular imagination, the old stereotypes persist, as Anthony Grafton points out, with tongue firmly in cheek:

We don’t teach undergraduates at all, even though we shamelessly charge them hundreds of dollars for an hour of our time. Mostly we leave them to the graduate students and adjuncts. Yet that may not be such a bad thing. For on the rare occasions when we do enter a classroom, we don’t offer students close encounters with powerful forms of knowledge, new or old. Rather, we make them master our “theories”—systems of interpretation as complicated and mechanical as sausage machines. However rich and varied the ingredients that go in the hopper, what comes out looks and tastes the same: philosophy and poetry, history and oratory, each is deconstructed and revealed to be Eurocentric, logocentric and all the other centrics an academic mind might concoct.*

Across the water, historian and filmmaker Tariq Ali and Harvard historian and telly don Niall Ferguson speak to the BBC about what they see as the abysmal state of history teaching. (Hat tip to the AHA.) Students stop pursuing history in England at an early age, says Ferguson. And what history is taught is "too fragmentary." Ali agrees, saying that what is presented is, basically, "worthless," hobbled by a chasing after so-called relevance. They both argue that the old anachronistic, triumphalist island history of Britain should be avoided, but that students need a larger narrative. "It could hardly be worse than what is going on in schools today," concludes Ferguson.

How does history teaching fare in America's colleges and universities? Are teaching awards more than a feather in the cap? Do promotion and tenure committees value persistently good evaluations and commend teaching effectiveness in the same way that they reward scholarship? Do peers sit in on classes and make assessments? Do departments do anything when a professor continues to receive poor teaching evaluations one semester after another?

Nearly ten years ago, Daniel Bernstein and Richard Edwards proposed in the Chronicle that we need more peer review of teaching. "[I]f educators are going to sustain the progress made, we will need to move toward a more rigorous and objective form of review," they wrote. "The goal of peer review has been to provide the same level of support, consultation, and evaluation for teaching that universities now provide for research." I can't imagine what the results of such efforts have been. Certainly, peer evaluation can turn into a messy, political business.

Does graduate training in history prepare men and women for classroom success? Budding historians spend far more time in graduate school working on research, parsing theory, and getting the historiography down. Less time is devoted to developing teaching skills and, at least as it was in my case, there is not much mentoring on teaching. (Most grad students I encountered came prewired with an interest in teaching. So, that was a plus.) Could graduate training be better oriented to prepare good history teachers? What would that look like?