Making Our Assessments of Writing More Authentic,
Educative, and Rhetorical

Bob Broad
Illinois State University

Like a lot of other people researching the assessment of writing, I entered this part of our field as a guerrilla fighter.  While teaching secondary English during the 1980s, I had felt discouraged and disgusted by what many assessment practices inside and outside of my classroom did to my teaching of writing and to my students’ developing rhetorical abilities.  After reading David Owen’s heroic and devastating critique of the Educational Testing Service and its Scholastic Aptitude Test in None of the Above: Behind the Myth of Scholastic Aptitude (1985), I dedicated myself to liberating the teaching of writing from destructive assessments.  For a few years there I felt, and acted, a little like my childhood hero, Zorro: riding across the countryside in a black cape and mask, intimidating bad guys, protecting the weak. 

I had the right idea, but I was missing key knowledge.  Based on my experience of the impact of grades and standardized tests, I had concluded that assessment was bad for the teaching of writing, and that assessment was therefore my enemy.  Brian Huot observes that this self-defeating view is widely held among writing instructors: “assessment has often been seen as a negative, disruptive feature for the teaching of writing” (9).  It took me a few years to realize what now seems obvious: that assessment strategies can either help or hurt teaching and learning in the writing classroom, depending on which strategies we are talking about.  With the help of scholars like Ed White, Peter Elbow, Michael M. Williamson, Richard Haswell, Brian Huot, Grant Wiggins, and Pamela Moss, I realized (with some relief, though I admit also with a mild pang of regret) that I did not have to spend my professional life riding around slashing Z-shaped tears in test-makers’ shirts.  I could stand for something; I could advocate particular assessment strategies instead of merely opposing all of them. 

What I, following others, would come to advocate was assessment that I believed supported best practices in teaching and learning composition.  Huot consistently sounds this theme:  “I am specifically interested in neutralizing assessment's more negative influences and accentuating its more positive effects for teaching and learning” (7).  Practices in evaluating writing with such positive effects go variously by the names authentic, educative, and rhetorical.

I’m going to blithely assume that we subscribers and contributors to the Teaching Composition List share a fairly strong and clear consensus regarding what “best practices” in the teaching of writing might include, such as:  substantive choices for student-authors, writing for multiple and real rhetorical situations, peer and instructor response while writing is in process, research, deep revision, proofreading, and publication.  This minor fantasy of pedagogical consensus allows me to proceed directly to the two questions I hope you will take up in discussion on the list: 

1. Which of our writing assessment practices (both within and outside of the classroom) best support our most cherished theoretical beliefs about composition and our most productive pedagogical practices? 

2. Which evaluative techniques should we shift, tweak, adopt, or throw out to better serve the rhetorical learning we are trying to promote? 

A little theory and terminology for authentic and educative assessment

Grant Wiggins used to call such pedagogically beneficial approaches to assessment authentic (1993).  Personally, I liked the polemic edge to that term, because it correctly implied that many traditional approaches to assessment (which Wiggins likes to summarize as “teach, test, and hope for the best”) lack legitimacy in relation to the world outside the walls of schools, colleges, and universities.  By challenging the authenticity of our evaluative practices, Wiggins provoked us to critically question the closeness of fit between what we want our students to learn and how we assess that learning.  He also wanted us to check the fit between what we are teaching/assessing in our classrooms and what our students need to know and be able to do in the world beyond our classrooms.  Authenticity of assessment depends on the strength of these two correspondences:  between teaching and assessment, between classroom and world. 

A few years later (1998), Wiggins shifted to calling such assessment educative.  This term has the advantage of being less obviously critical of, and therefore less alienating to, people whose assessment practices we may be challenging.  “Educative assessment” also focuses our attention on the importance of scrutinizing what our assessments teach.  My only critique of this phrase is that it may mistakenly imply that some assessments teach while others fail to teach.  To the contrary, every assessment teaches.  The only question is what we teach our students through our evaluative choices and designs.  From the standpoint of educative assessment, the key thing to ask ourselves is:  “Do my assessment practices teach my students what I want them to learn?” 

Examples of rhetorical assessment practices

Portfolios

Perhaps the single most powerful technology/ideology of writing assessment in the past twenty years to support teachers’ rhetorical visions is the writing portfolio.  Portfolios encourage robust writing processes by allowing students to revise over time.  By giving students significant choices among topics, audiences, purposes, genres, forums, and other rhetorical elements, portfolios set the stage for writing that students care about instead of writing that students dutifully crank out only to fulfill teachers’ assignments.  Portfolios also nurture revision and the collaborative and social aspects of writing by making room for peer response and instructor response while projects are still in process.  And the standard “portfolio preface” invites students to self-assess and reflect on their writing processes and products.  So portfolios are the classic instance of assessment design that supports our hopes for students’ rhetorical development.  They help close the gap between our ideals and our practices, as well as between our classrooms and most rhetorical situations in which our students are likely to find themselves in the outside world. 

Last fall, Richard Haswell raised the issue on this list of how state-mandated writing tests for students in primary and secondary education affect our work teaching composition in colleges and universities.  I am currently working with a group of eight secondary English teachers in Illinois to change our state’s 40-minute, one-shot writing test to a portfolio system we call the Illinois State Portfolio Assessment of Writing, or “ISPAW.”  Portfolios would move us from judging students’ abilities by looking at a single, brief, formulaic, unrevised writing sample written on an assigned topic (about which students typically know and care nothing) to judging proficiency based on a varied collection of projects written on topics, and for purposes and audiences, of students’ choice, and developed over time through invention, drafting, collaboration, research, revision, proofreading/editing, and publication.  Our group has felt powerfully supported and helpfully guided in its work by George Hillocks, Jr.’s recent book The Testing Trap. 

In addition to improved support for the teaching of writing in schools, we anticipate that ISPAW would bring great benefits in professional development to groups of teachers from across the state who would gather to articulate and negotiate their standards and criteria for evaluation.  This value points to another authentic, educative, and rhetorical practice:  communal writing assessment. 

Communal Writing Assessment

Just as portfolios provide more valid assessment of students’ writing abilities because they show students working at different genres, topics, audiences, and purposes, communal or shared writing assessment boosts the validity (i.e., persuasiveness) of our judgments of students’ writing by grounding those judgments in multiple rhetorical perspectives.  The theoretical principle that portfolios and communal assessment share is complementarity (Alford).  The principle of complementarity (first articulated by nuclear physicist and theoretician Niels Bohr) asserts that any phenomenon can be most fully and usefully understood when studied from varied perspectives and by various methods.  Because each reader brings to the evaluative act distinctive and positioned rhetorical abilities, expectations, and sensitivities, two or three readers can provide a much richer and more informative reading than one.  And when you get those two or three readers talking with each other about what they value in their students’ work and why, you unleash the most powerful professional development most teachers of writing ever experience.  So portfolios and shared evaluation help close the fissures between what we want to teach our students about rhetoric and textuality and what our assessments teach them. 

Beyond Rubrics

Recently, I’ve become interested in another kind of gap between teaching and assessing writing.  I’m talking about the difference between what writing instructors care about in their day-to-day interactions with students (both during class discussions and in instructors’ responses to students’ writing) vs. how instructors grade their students’ work—or how their evaluation rubrics say they grade.  Both for individual classroom instructors and for writing programs as a whole, I came to believe that grading rubrics or scoring guides were too brief, simple, rigid, and de-contextualized to do justice to the rhetorical and pedagogical richness of writing classrooms and writing programs.  My book What We Really Value: Beyond Rubrics in Teaching and Assessing Writing explores ways instructors and writing program administrators can bring their written accounts of the criteria and standards by which writing is judged into alignment with the criteria and standards at work in their actual writing classrooms. 

Find the Gaps and Bridge Them

In the case of all three of these innovations in writing assessment practice (portfolios, communal writing assessment, and moving beyond rubrics), writing teachers identified assessment practices that interfered with, distracted from, or distorted the rhetorical learning for which they and their students strove.  I invite those on this discussion list to further the project of spotting the gaps or points of friction and dissonance in our teaching.  Having identified those gaps, we will know where and how to invest our creative and critical energies to further boost the integrity of our professional practice as teachers of writing. 

Suggested Topics and Questions for Discussion

Here I pull together questions already posed or implied in the above discussion.

  1. How do you assess your students’ writing?  Conduct a brief inventory of your assessment practices.  In addition to obvious evaluative tools (portfolios, tests, quizzes, journals, etc.), consider your informal and implicit acts of assessment, such as which readings you assign, how you shape writing assignments, what you respond to in your students’ writing, and values you articulate during class discussions and activities. 

  2. For what rhetorical situations, and with what rhetorical abilities, are you trying to prepare your students?  Consider the rhetorical demands of citizenship in democratic society, of professional life, and of personal life. 

  3. Which of your writing assessment practices (both within and outside of the classroom) best support your most cherished theoretical beliefs about composition and your most productive pedagogical practices?  That is, which of your evaluations seem most authentic, educative, and rhetorical?  (“In what ways do my assessment practices teach my students what I want them to learn?”)
  4. Which evaluative techniques should you shift, tweak, adopt, or throw out to better serve the rhetorical learning you are trying to promote? 



Copyright © 2003 The McGraw-Hill Companies. All rights reserved. Any use is subject to the Terms of Use and Privacy Policy.