judgment, she said, until she got more
information from the MET project.
In other words, the administrator was
more comfortable counting indicators
than she was discussing the values
evident in the classroom, and she was
waiting for the MET project to tell her
what is “true and genuine” about the
behaviors she observed.
We reject the idea that the best we can
hope for is a continuous search for objectivity in measures of student learning
and teacher effectiveness. Instead, we
argue that the best we should hope for is
authenticity in the tasks we ask students
to engage in and the assessments we
use to understand their progress. The
questions that deserve million-dollar
price tags should be those that we pose
as educators every day: Are students
experiencing the education we hope for
them? How do we know? If some are
not, how can we help?
If your school values creating a
democratic citizenry, supporting children’s socioemotional needs, or helping
students read the world (Freire &
Macedo, 1987), not just the word (or
nonsense word, in the case of some
current progress monitoring), then we
may need another $45 million. Neither
lists of indicators nor the so-called gold
standard of value-added data measure
those things. Moreover, a set of multiple
measures designed to correlate with test
scores doesn’t keep such goals in sight.
Beyond Letters and Numbers
As Alfred Tatum (2007) wrote about
increased evaluation of students,
“Pressure to meet adequate yearly
progress (AYP) has led to overlooking
young people (OYP)” (p. 83). Let’s not
let teacher evaluation do the same. EL
Allington, R., & Johnston, P. (2002). Reading
to learn: Lessons from exemplary 4th grade
classrooms. New York: Guilford Press.
Bond, G., & Dykstra, R. (1967). The
cooperative research program in first-grade reading instruction. Reading
Research Quarterly, 2, 5–142.
Coy, P. (2002, April 1). Economic trends:
When hospitals get graded: There’s a
downside to rankings. Business Week, 6.
Darling-Hammond, L. (1990). Teacher
evaluation in transition: Emerging roles
and evolving methods. In J. Millman &
L. Darling-Hammond (Eds.), The new
handbook of teacher evaluation: Assessing
elementary and secondary school teachers
(pp. 17–34). Thousand Oaks, CA:
Duke, N. K., & Pearson, P. D. (2002).
Effective practices for developing reading
comprehension. In A. E. Farstrup &
S. J. Samuels (Eds.), What research has
to say about reading instruction (3rd ed.,
pp. 205–242). Newark, DE: International Reading Association.
Ewing, D. (2011, April 5). Leading mathematician debunks “value-added” [blog
post]. Retrieved from The Answer Sheet at
The Washington Post at www.washington
Freire, P., & Macedo, D. (1987). Literacy:
Reading the word and the world. South
Hadley, MA: Bergin and Garvey.
Gabriel, R. (2012, April). Constructions of
value-added measurement and teacher
effectiveness in the Los Angeles Times: A
discourse analysis of the talk surrounding
measures of teacher effectiveness. Paper presented at the conference of the American
Educational Research Association, Vancouver, BC, Canada.
Gabriel, R., & Lester, J. (2010, December
15). Public displays of teacher effec-
tiveness. Education Week. Retrieved
Gates, B., & Gates, M. (2011). Grading
the teachers: Schools have a lot to learn
from business about how to improve
performance, say Bill and Melinda Gates.
Wall Street Journal. Retrieved from http://
Johnston, P. (2005). Literacy assessment and
the future. The Reading Teacher, 58(7),
Kane, T. J. (2012, March 28). Measuring
effective teaching with a team of superheroes
[blog post]. Retrieved from Voices
in Education at www.hepg.org/blog/74
Rachael Gabriel (firstname.lastname@example.org) is an assistant professor of literacy
education at the Neag School of Education, University of Connecticut, Storrs.
Richard Allington ( email@example.com) is
professor of education at the University
of Tennessee, Knoxville.