4.28.2010

Assessing Assessment: the Multiple-Choice Exam, the Impromptu Essay, and the Portfolio

This week’s readings for Theories of Assessment trace the relatively recent (owing largely to the field’s short history) interest in how a reviewer should go about assessing student writing. The “reviewer” is itself a subject of controversy in these articles, as scholars debate whether educators, students, or test raters are best qualified to assess student writing. Nonetheless, it is clear that theories of assessment have changed dramatically over the twenty years this sample of articles portrays, and stability for assessment seems nowhere near at hand.

Huot begins with an introduction to writing assessment and its academic research as it was known and studied from about 1975 to 1990. In general, Huot’s work outlines the rise of direct writing assessment research during this period and briefly attempts to situate it alongside its counterpart, indirect writing assessment, which flourished until the mid-1960s. Describing indirect writing assessment as the evaluation of student writing ability based on “examinations on grammar and usage” devoid of critical review by “independent readers” (237), Huot goes on to list the three main procedures for direct writing assessment: primary trait, analytic, and holistic. Not surprisingly, the holistic procedure is named the most popular because of its cost-effectiveness (as I suspect it still is today), even though Huot warns it is not always the most appropriate.

The bulk of Huot’s article then reviews three main interests for direct writing assessment research, argues for or against each interest’s findings, and speculates about where writing assessment will head next. These three interests are topic development and task selection, text and writing quality, and influences on rater judgment of writing quality. On topic development and task selection, Huot concludes that “structure, wording, and overall presentation of a writing assignment can sometimes have important consequences within particular writing contexts” (246) and that very little has been done to explore these influences. On text and writing quality, he suggests that recent (to 1990) findings in linguistics and discourse analysis have prompted a shift from interest in syntax to interest in “global-level language features” (250), speculating that the future of writing quality assessment lies in discourse-level research. Finally, on influences on rater judgment of writing quality, Huot draws very interesting conclusions about the inconsistency of the research and the need for further study in the area. Though he does note that a majority of the literature addresses “content and organization” (256) as important factors in rater judgment, Huot predicts that the rising popularity of portfolio writing will begin to change the way scholarship studies these influences.

Picking up almost exactly where Huot leaves off, White brings us to the relative present-day understanding of assessment as it operates in portfolio scoring. Drawing attention to flaws in the current interest in holistic scoring of student portfolios, White suggests a new (“Phase 2”) method of scoring portfolios that sounds to me quite similar to Huot’s description of primary trait assessment. This method adds two things to the portfolio requirement: a clear statement of goals from the faculty for each sample of writing, and a cover letter, submitted on top of the portfolio, in which the student rhetorically argues why the attached portfolio meets or does not meet that statement of goals. White believes this gives raters a more practical, cost-effective role: rather than reevaluating work that faculty members have already graded, raters critique how effectively the student’s cover letter argues that the portfolio fulfills the goals set for it.

Following Huot and White’s rather dense and informative articles, Royer and Gilles take us back to the 1990s with an enjoyable anecdote about another radical idea stemming from assessment theory: allowing students to choose their own placement in freshman writing classes. Royer and Gilles suggest this method, called “directed self-placement,” pleases every party involved (student, teacher, administrator, etc.) and puts the burden of assessing a student’s abilities squarely in the student’s own hands. Personally, I found the idea incredible. I think it could be potentially dangerous, though, to have nothing distinguishing basic writing from the typical freshman writing course apart from a student’s choice. If a director could rhetorically convince students who need basic writing to enroll in those classes, I very much see the benefits (a more relaxed sense of belonging in basic writing, a student’s feeling of personal empowerment in the classroom, and the like); however, unless the students we are certain need basic instruction actually take it, we may be toying with the unstable feelings of young adults.

The NCTE report on the SAT and ACT writing tests provides an eye-opening (but not really surprising) account of the tests’ ineffectiveness in assessing student writing. Harking back to Huot’s “early days of assessment,” the NCTE Task Force concludes not only that the SAT encourages superficial, formulaic, non-critical writing on the test itself but also that it risks encouraging the same vapid writing in the writing classroom. The Task Force also found that these tests continue to equate conventional “correctness” with “good” writing, favor certain ethnic groups over others, and influence placement in university programs, a use in direct conflict with the tests’ purpose of measuring student ability for college admission, not college performance.

White ends the week’s readings by rebutting NCTE’s attack on single-sitting essay writing, attempting to expose it as a means of promoting the “currently” popular portfolio assessment method taking hold at universities that can afford it. Notice that White’s “rebuttal” comes in 1995, while the NCTE report on the “new and improved” SAT/ACT structure comes in 2005. Clearly the NCTE has stood firm on this issue for over a decade, and it is unfortunate that nobody seems to be listening. White argues again and again that the prevalence, cost-effectiveness, and ease of use of the multiple-choice test and the single-sitting essay in college assessment point to them being the tools to use; however, we learn as early as kindergarten that something being popular doesn’t automatically make it right. White’s logic is dangerously faulty in this piece; he seems to settle for essay writing simply because it is better than multiple-choice tests and cheaper than portfolio assessment. I don’t agree with this argument at all, and I certainly hope that he is wrong. Portfolio writing may not be the best method of assessment, but it is a step forward, not one or two steps back.

1 comment:

  1. I'm glad to see you take a stand against White in this reflection. Then again, we do not know White's stance on the SAT/ACT essay test. (There is a professional listserv for the Council of Writing Program Administrators; we might be able to search the archives to see if White commented on it back then.) I don't know; my guess would be that White would side as he does in his article, holding that the timed essay is better than indirect measures, but worrying that the results would be used for things like placement without any other measures being considered.

    Regardless, assessment is a subject where composition researchers have, ironically, had little say. This is another area where the field's decision to pursue qualitative, humanistic research methods may have cost it some credibility among outside educational researchers who rely on more quantitative, statistical evidence. Writing assessment and writing program administration, like technology, are further "growth areas" for composition studies.
