Making Assessments of Writing More Authentic, Educative, and Rhetorical
I had the right idea, but I was missing key knowledge. Based on my experience of the impact of grades and standardized tests, I had concluded that assessment was bad for the teaching of writing, and that assessment was therefore my enemy. Brian Huot observes that this self-defeating view is widely held among writing instructors: “assessment has often been seen as a negative, disruptive feature for the teaching of writing” (9). It took me a few years to realize what now seems obvious: that assessment strategies can either help or hurt teaching and learning in the writing classroom, depending on which strategies we are talking about. With the help of scholars like Ed White, Peter Elbow, Michael M. Williamson, Richard Haswell, Brian Huot, Grant Wiggins, and Pamela Moss, I realized (with some relief, though I admit also with a mild pang of regret) that I did not have to spend my professional life riding around slashing Z-shaped tears in test-makers’ shirts. I could stand for something; I could advocate particular assessment strategies instead of merely opposing all of them.
What I, following others, would come to advocate was assessment that I believed supported best practices in teaching and learning composition. Huot consistently sounds this theme: “I am specifically interested in neutralizing assessment's more negative influences and accentuating its more positive effects for teaching and learning” (7). Practices in evaluating writing with such positive effects go variously by the names authentic, educative, and rhetorical.
I’m going to blithely assume that we subscribers and contributors to the Teaching Composition List share a fairly strong and clear consensus regarding what “best practices” in the teaching of writing might include, such as: substantive choices for student-authors, writing for multiple and real rhetorical situations, peer and instructor response while writing is in process, research, deep revision, proofreading, and publication. This minor fantasy of pedagogical consensus allows me to proceed directly to the two questions I hope you will take up in discussion on the list:
1. Which of our writing assessment practices (both within and outside of the classroom) best support our most cherished theoretical beliefs about composition and our most productive pedagogical practices?
2. Which evaluative techniques should we shift, tweak, adopt, or throw out to better serve the rhetorical learning we are trying to promote?
Grant Wiggins used to call such pedagogically beneficial approaches to assessment authentic (1993). Personally, I liked the polemic edge to that term, because it correctly implied that many traditional approaches to assessment (which Wiggins liked to summarize as “teach, test, and hope for the best”) lack legitimacy in relation to the world outside the walls of schools, colleges, and universities. By challenging the authenticity of our evaluative practices, Wiggins provoked us to critically question the closeness of fit between what we want our students to learn and how we assess that learning. He also wanted us to check the fit between what we are teaching and assessing in our classrooms and what our students need to know and be able to do in the world beyond our classrooms. Authenticity of assessment depends on the strength of these two correspondences: between teaching and assessment, and between classroom and world.
A few years later (1998), Wiggins shifted to calling such assessment educative. This term has the advantage of being less obviously critical of, and therefore less alienating to, people whose assessment practices we may be challenging. “Educative assessment” also focuses our attention on the importance of scrutinizing what our assessments teach. My only critique of this phrase is that it may mistakenly imply that some assessments teach while others fail to teach. To the contrary, every assessment teaches. The only question is what we teach our students through our evaluative choices and designs. From the standpoint of educative assessment, the key thing to ask ourselves is: “Do my assessment practices teach my students what I want them to learn?”
Perhaps the single most powerful technology/ideology of writing assessment in the past twenty years to support teachers’ rhetorical visions is the writing portfolio. Portfolios encourage robust writing processes by allowing students to revise over time. By giving students significant choices among topics, audiences, purposes, genres, forums, and other rhetorical elements, portfolios set the stage for writing that students care about instead of writing that students dutifully crank out only to fulfill teachers’ assignments. Portfolios also nurture revision and the collaborative and social aspects of writing by making room for peer response and instructor response while projects are still in process. And the standard “portfolio preface” invites students to self-assess and reflect on their writing processes and products. So portfolios are the classic instance of assessment design that supports our hopes for students’ rhetorical development. They help close the gap between our ideals and our practices, as well as between our classrooms and most rhetorical situations in which our students are likely to find themselves in the outside world.
Last fall, Richard Haswell raised the issue on this list of how state-mandated writing tests for students in primary and secondary education affect our work teaching composition in colleges and universities. I am currently working with a group of eight secondary English teachers in
In addition to improved support for the teaching of writing in schools, we anticipate that ISPAW would bring great benefits in professional development to groups of teachers from across the state who would gather to articulate and negotiate their standards and criteria for evaluation. This value points to another authentic, educative, and rhetorical practice: communal writing assessment.
Just as portfolios provide more valid assessment of students’ writing abilities because they show students working at different genres, topics, audiences, and purposes, communal or shared writing assessment boosts the validity (i.e., persuasiveness) of our judgments of students’ writing by grounding those judgments in multiple rhetorical perspectives. The theoretical principle that portfolios and communal assessment share is complementarity (Alford). The principle of complementarity (first articulated by nuclear physicist and theoretician Niels Bohr) asserts that any phenomenon can be most fully and usefully understood when studied from varied perspectives and by various methods. Because each reader brings to the evaluative act distinctive and positioned rhetorical abilities, expectations, and sensitivities, two or three readers can provide a much richer and more informative reading than one. And when you get those two or three readers talking with each other about what they value in their students’ work and why, you unleash the most powerful professional development most teachers of writing ever experience. So portfolios and shared evaluation help close the fissures between what we want to teach our students about rhetoric and textuality and what our assessments actually teach them.
Recently, I’ve become interested in another kind of gap between teaching and assessing writing. I’m talking about the difference between what writing instructors care about in their day-to-day interactions with students (both during class discussions and in instructors’ responses to students’ writing) vs. how instructors grade their students’ work—or how their evaluation rubrics say they grade. Both for individual classroom instructors and for writing programs as a whole, I came to believe that grading rubrics or scoring guides were too brief, simple, rigid, and de-contextualized to do justice to the rhetorical and pedagogical richness of writing classrooms and writing programs. My book What We Really Value: Beyond Rubrics in Teaching and Assessing Writing explores ways instructors and writing program administrators can bring their written accounts of the criteria and standards by which writing is judged into alignment with the criteria and standards at work in their actual writing classrooms.
In the case of all three of these innovations in writing assessment practice (portfolios, communal writing assessment, and moving beyond rubrics), writing teachers identified assessment practices that interfered with, distracted from, or distorted the rhetorical learning for which they and their students strove. I invite those on this discussion list to further the project of spotting the gaps or points of friction and dissonance in our teaching. Having identified those gaps, we will know where and how to invest our creative and critical energies to further boost the integrity of our professional practice as teachers of writing.
Here I pull together questions already posed or implied in the above discussion.