Educational Indicators


ERIC Identifier: ED457536
Publication Date: 2001-08-00
Author: Lashway, Larry
Source: ERIC Clearinghouse on Educational Management, Eugene, OR.

"The great tragedy of science: the slaying of a beautiful theory by an ugly fact." That wry observation by the great British scientist T. H. Huxley applies equally well to educational practice. Like all professionals, educators use informal theories and assumptions to guide their actions, yet often fail to evaluate these beliefs (Donald Schon 1987).

The hectic pace of school life makes it difficult for teachers and administrators to step back and objectively assess the validity of their operating assumptions. In addition, educators tend to judge success anecdotally rather than through formal assessment. A small sign of progress from a recalcitrant student may outweigh months of low performance. Although these victories may be highly satisfying in human terms, today's accountability environment demands that educators collect and analyze objective data before making decisions.

Schools collect a large amount of data, but much of it is simply filed and forgotten (Theodore Creighton 2001). In recent years, policymakers and school officials have begun to recognize that these numbers can be turned into "performance indicators" that not only satisfy the demands of accountability but serve as a tool for school improvement.

This Digest examines the nature and purpose of educational-indicator systems and discusses how schools can design report cards that inform the public of their performance.

WHAT ARE EDUCATIONAL INDICATORS?

An indicator is any statistic that casts light on the conditions and performance of schools. The recent push for accountability has emphasized test scores, but Linda Darling-Hammond and Carol Ascher (1991) have suggested that a comprehensive indicator system should provide a wide range of information.

Some indicators, such as teacher turnover or student mobility, can signal problems that need attention. Others provide information geared to current policy issues; for example, data on course-taking will help policymakers who want students to take more academic courses.

Still other indicators focus on context: student demographics, teacher workload, financial resources, and teacher qualifications. Such information can help schools interpret the sometimes ambiguous statistics that come from test scores and other outcome measures. Although contextual factors do not provide the bottom-line measure of success that policymakers seek, they do affect student learning and can help explain a school's performance.

Currently, forty-five states require schools or districts to issue "school report cards" that include a wide range of information. Twenty-seven states also provide comparative ratings of schools (Ulrich Boser 2001). Alaska, for example, plans a four-grade ranking: "distinguished," "successful," "deficient," and "in crisis."

Given the wide range of data available, policymakers and school leaders should choose their key indicators by asking three questions: Why is this information important? How much effort is required to track the data? How will we use this information when we get it? (Larry Lashway 2001).

HOW DO INDICATORS SUPPORT SCHOOL IMPROVEMENT?

Indicators play a central role in today's accountability systems by focusing attention on results, especially the school's performance on standards-driven assessments. Policymakers believe that publicity has a motivational effect: "Ratings raise awareness, provide focus and energize schools and communities to work to improve student achievement. At their best, ratings can provide momentum, measure schools' progress and show parents, the public and policymakers that schools can improve" (Southern Regional Education Board 2000).

This attention-getting effect is even stronger when indicators trigger incentives, giving practitioners personal as well as professional reasons to focus on the target.

However, attention does not always lead to positive action. Educators may attempt to explain away poor results rather than act on them, while parents and community members often report being uncertain how to lobby for improvement. Teachers in high-need schools, struggling to educate large numbers of underprepared students with limited resources, may simply be demoralized by repeated public embarrassment (Lashway).

The more lasting value of indicators is their role in the school-improvement process. Used thoughtfully and systematically, they allow schools to take charge of their own assessment by identifying strengths and weaknesses and pinpointing which improvement strategies are working (Karen Levesque and colleagues 1998). Ideally, a school's indicator system will not be merely a grudging reaction to state mandates but will reflect a school's commitment to "an ethic of continuous improvement" (Annenberg Institute for School Reform). Used this way, indicators are merely an extension of what thoughtful professionals always try to do.

HOW ARE INDICATORS MISUSED?

Although indicators hold out the promise of improved decision-making, they can easily lead schools astray.

  1. One danger is collecting data indiscriminately. This not only costs effort and money but also swamps decision-makers in a sea of numbers, making it difficult to distinguish the significant from the trivial (Lashway).

  2. Raw numbers never speak for themselves; they require careful interpretation (Darling-Hammond and Ascher). For example, a rise in fourth-grade scores may reflect improved instruction, or it may simply reflect differences in ability between last year's students and this year's.

  3. Overreliance on data may have unintended but perverse effects, particularly when those data are high-stakes test scores. Faced with the need to get the numbers up, educators may be tempted to replace curricular content with test-prep activities, exclude special-education students from testing, or even cheat. Recently, some school leaders have reported difficulty staffing fourth-grade classrooms because teachers do not want the pressure of the testing often done at this level (Abby Goodnough 2001).

Darling-Hammond and Ascher note that indicators by themselves do not constitute an accountability system; they merely provide information for the system. No matter how sophisticated the data collected, they will never substitute for informed human judgment.

HOW IS EDUCATIONAL PERFORMANCE REPORTED TO THE PUBLIC?

In many cases, states mandate the content and form of "school report cards," often aiming for a scorecard format that permits comparisons. Some districts go beyond these state mandates by creating and publicizing their own local report cards, which they believe portray their work more accurately.

Designing effective report cards poses a considerable challenge that goes beyond transcribing and sharing data. What parents and taxpayers want from report cards does not always match what policymakers have in mind. According to some surveys, the information most desired by parents and other citizens is data on school safety and teacher qualifications, followed by average class size, graduation rates, and dropout rates. Student-performance data are considered important, but not the highest priority (Richard Brown 1999).

Report cards need a clear sense of purpose. Why have these indicators been chosen? How do they relate to the school's goals? Providing a context for the data is vital; the numbers alone have little meaning for the public. Instead, they should be woven into a narrative that explains what the school is trying to accomplish, what progress has been made, and what steps will be taken next (Lashway).

Presentation and dissemination of the report are also key. Length, format, readability, and appearance will determine readership. Beyond the usual dissemination through local papers and district newsletters, schools can get further mileage from the report by using it as the basis for "accountability dialogues" with stakeholders (Kate Jamentz 1998).

HOW DO SCHOOLS BECOME DATA-DRIVEN?

Tracking and reporting selected indicators will satisfy the minimum demands of accountability, but significant improvement will come only when the data are used systematically and intelligently. For example, a Philadelphia middle school serving students with high poverty, low academic performance, and frequent behavior problems created a behavior database that eventually revealed that many students were coming to school simply not knowing how to behave properly. After increasing supervision, the school reduced inappropriate behavior by 95 percent (Lorraine Keeney 1998).

The Annenberg Institute for School Reform has outlined a six-part "inquiry cycle" that puts indicators to work. The first step is to identify the desired outcomes, which in turn generates questions about how well students are accomplishing those objectives (step two). Step three consists of selecting and organizing data that will help answer the school's questions. The fourth step is to interpret the collected data, followed by appropriate actions (step five). Finally, assessment of those actions marks the beginning of the next inquiry cycle (Keeney).

Similar processes are recommended by Levesque and colleagues as well as Penny Noyce and colleagues (2000). Underlying all three strategies is a willingness to face the fact that reality (as revealed in the data) falls short of the ideal (as embodied in the mission and goals). Only by confronting that reality can schools move toward their ideal.


This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract No. ED-99-C0-0011. The ideas and opinions expressed in this Digest do not necessarily reflect the positions or policies of OERI, ED, or the Clearinghouse. This Digest is in the public domain and may be freely reproduced.


RESOURCES

Annenberg Institute for School Reform. "A Framework for Accountability." No Date. http://www.aisr.brown.edu/accountability/framework/pgone.html

Boser, Ulrich. "Pressure Without Support." Quality Counts 2001: A Better Balance: Standards, Tests, and the Tools To Succeed. Education Week (January 10, 2001).

Brown, Richard. "Creating School Accountability Reports." School Administrator 56, 10 (November 1999): 12-14, 16-17. EJ 597 033.

Creighton, Theodore B. "Data Analysis in Administrators' Hands: An Oxymoron?" School Administrator 58, 4 (April 2001): 6-11.

Darling-Hammond, Linda, and Carol Ascher. Creating Accountability in Big City School Systems. Urban Diversity Series # 102. New York: ERIC Clearinghouse on Urban Education, 1991. 48 pages. ED 334 339.

Goodnough, Abby. "Strain of Fourth-Grade Tests Drives Off Veteran Teachers." New York Times, June 14, 2001.

Jamentz, Kate. "Authentic Accountability." Thrust for Educational Leadership (January 1998).

Keeney, Lorraine. Using Data for School Improvement. Providence, Rhode Island: Annenberg Institute for School Reform, 1998.

Lashway, Larry. The New Standards and Accountability: Will Rewards and Sanctions Motivate America's Schools to Peak Performance? Eugene, Oregon: ERIC Clearinghouse on Educational Management, University of Oregon, 2001.

Levesque, Karen; Denise Bradby; Kristi Rossi; and Peter Teitelbaum. At Your Fingertips: Using Everyday Data To Improve Schools. Berkeley, California: MPR Associates, 1998. 297 pages. ED 419 571.

Noyce, Penny; David Perda; and Rob Traver. "Creating Data-Driven Schools." Educational Leadership 57, 5 (February 2000): 52-56. EJ 609 608.

Schon, Donald. Educating the Reflective Practitioner. San Francisco: Jossey-Bass, 1987. 365 pages. ED 295 518.

Southern Regional Education Board. Getting Results with Accountability: Rating Schools, Assisting Schools, Improving Schools. A Fresh Look at School Accountability. Atlanta: Southern Regional Education Board, 2000.


