
PREFACE

Since the National Assessment began in the late sixties, considerable energy has been devoted to methodological and technical aspects of assessment design, item development, sampling procedures and analytic strategies. During the seventies, many states created their own assessments and turned to NAEP for technical advice, assessment materials and networks such as the Large Scale Assessment Conference, where they could share technical concerns and expertise. Procedural and methodological matters understandably overshadowed utility questions until only a few years ago. But questions about how NAEP information -- or any assessment information -- is used or might be used by a variety of education actors and decision makers have grown increasingly pressing as the project's importance and visibility have grown.


Of particular interest -- because it affects school participation in the assessment, and participation affects the accuracy of assessment results -- has been the utility of NAEP results for schools and school districts. How can national and regional data -- necessarily abstract and removed from day-to-day realities of any particular school -- have any significant impact upon classroom practices?


This special study addresses that question. Unlike other special studies (e.g., "Students from Homes in Which a Language Other Than English Dominates"), this paper does not showcase a secondary analysis of NAEP data. Rather, it represents a "repackaging" of existing data to answer some common questions about educational achievement, to relate the findings to past and future educational policy decisions and to raise questions for future research efforts.

The purpose of the paper is to provide a busy education leader with a "skeleton key" to the NAEP data base: a short overview of intriguing findings organized around topics likely to come up in speeches, articles or briefings. The arbitrary selection of findings and the interpretive judgments about them are the responsibility of the author only. Readers are encouraged to form their own opinions about the significance of these findings and are given specific references to the reports from which the findings derive, in case they want to pursue something to a deeper level of detail. We expect this paper to grow as additional findings are offered by other readers.

Many of the answers to questions and the recommendations for action in this paper could be buttressed through reference to research findings about learning in reading, writing, mathematics and science, as well as research on effective teaching and effective schools. As a next step, we will be indexing findings and interpretations to appropriate research, looking for convergence and divergence of views. Having done that, we intend to extrapolate those findings of greatest utility to two policy groups with great influence upon the classroom: curriculum guide writers and textbook selectors. We have already begun to learn exactly how and when each group uses various kinds of information, and we will be working with them during 1983 to perfect a model that satisfies their needs.

In the meantime, here is a paper that offers easy access to a huge data base, some food for thought and some recommendations for action. We invite reader response and welcome any insights or observations that would improve the effectiveness of this kind of effort.

Beverly Anderson
Director

National Assessment of
Educational Progress

NATIONAL ASSESSMENT FINDINGS AND

EDUCATIONAL POLICY QUESTIONS

This paper is intended to provide a bird's eye view of the National Assessment of Educational Progress's (NAEP) huge data base and to stimulate some thought about what 13 years of data collected about over 1,000,000 students can tell us about American education. From the thousands and thousands of assessment results already published, we have selected 63 findings from five assessment areas. We could have selected many more findings from all 10 areas and the four probes NAEP has administered, but the purpose of this paper is not to be comprehensive. The paper is meant to illustrate the fact that long-term assessment findings constitute a rich source of information, ideas and hypotheses about what is happening in American education. They should be used more widely than they are, especially in policy analysis and debate about the quality and the future of our educational system.

Readers can proceed in one of two ways. They can read the discussion part of this paper, which follows immediately, and then examine the findings referred to throughout the discussion. Or, they can peruse the findings first and then come back and read the discussion. Since the paper is designed to facilitate speech writing or quick review for a briefing, it makes no difference which way the reader proceeds.

The discussion section is organized around questions which staff at the Assessment and the Education Commission of the States (ECS) have often been asked. Readers may think of other questions, of course, and the findings may help answer them. Note that each finding is indexed to the report in which it appeared so that anyone interested in getting deeper into the data can easily do so. We would suggest that articles written about these findings be based upon the original publications instead of this overview, since the original publications contain all the appropriate methodological information and caveats.

The discussion here is by no means definitive. These are one writer's judgments, hunches and educated guesses, and readers will have to decide for themselves whether they make sense. They are offered in the interest of starting a dialogue that includes, as a key feature, interpretive use of assessment data. The paper will grow as we add to the data base, index findings to other research studies and gather feedback from readers with many different perspectives on education. And as it grows, we intend to package and repackage the data to address very specific policy activities such as selecting textbooks or creating curriculum guidelines.

QUESTIONS

"Do National Assessment data shed any light on the effectiveness of federal or state education policy during the seventies?"

Many of NAEP's findings separately and collectively suggest that federal and state equal educational opportunity policies, reading programs such as Right to Read and federal/state programs concentrating upon elementary education may well have had a positive impact upon student achievement. Consider the following points:

• Black students' performance improved over the decade at a faster rate than white students' in reading, writing and mathematics (findings 1, 4, 17, 46).


• Reading performance remained relatively stable over the decade for teenagers and showed an improvement for 9-year-olds (findings 1, 3, 6).

• While reading remained stable, there were declines among teenagers in science and mathematics (findings 1, 3, 6, 42, 43, 55, 56, 57).

• Elementary students showed progress on language arts assessments, while older students did not (findings 1, 3, 6, 14-16).

Given the facts that, during the sixties and seventies, black students were special policy targets, reading programs received more attention and resources than other programs and elementary schools were the primary beneficiaries of Title I and other monies, NAEP data provide strong circumstantial evidence that focused policy attention and focused resources may have had measurable impact upon educational achievement. At the very least, one can observe that the target groups and areas of federal policy did not show declines, whereas groups and areas ignored or downplayed by federal and state policies did show declines.

"Was there a need during the seventies to go back to the basics?"

National Assessment results suggest that, in general, the vast majority of students had mastered the basics, i.e., low-level reading, writing and arithmetic skills, even before the "back to the basics" movement (Brown 1981). In addition, many of the findings listed here indicate stability or progress through the seventies (especially the late seventies) in literal comprehension, writing mechanics and computational skills (findings 1, 3, 6, 18, 45). To be sure, there were always students with problems in the basics (see 20, 30-34), but the NAEP data do not support the claim that there was some new problem between 1969 and 1980 that required schools to go "back to the basics."

"If students were not doing badly on the basics, why were test scores going down throughout the seventies?"

Many, but not all, test scores were declining during the seventies, but the phenomenon was not necessarily caused by problems with the basic skills. In fact, findings 3, 6, 15, 16, 35-37, 44, 45 and 55-57 all point to problems with higher-order skills such as inference, analysis, interpretation or problem solving. Post hoc analyses of the Scholastic Aptitude Test (SAT) reveal that the greatest declines in that test also occurred among the items testing higher-level skills, and state assessments (e.g., Illinois) have been finding the same pattern.

"What kinds of students accounted for the declines and improvements during the seventies?"

We have already noted that black students and some disadvantaged groups of students improved during the decade (e.g., finding 1). In addition, findings 2, 8, 9, 47 and 58 establish that the improvements were largely among the students in the lowest achievement class -- that is, those students who constitute the lowest 25 percent of the students in each assessment.

On the other hand, the largest declines occurred in the highest achievement class -- that is, among the best students in each assessment.

"What is the level of literacy in America today?"

It would appear from the reading and writing findings, and from NAEP's functional literacy assessments in the mid-seventies (NAEP 1976), that the vast majority of America's students are literate readers (close to 90 percent) and literate writers (probably close to 75 percent). However, the fact that declines have occurred in inferential comprehension and in more difficult writing tasks should be cause for concern. It would appear that a standard of literacy which was perfectly acceptable 10 to 15 years ago is rapidly becoming obsolete. To the extent that analytic, interpretive and evaluative literacy skills are increasingly demanded by an "Information Society," NAEP findings suggest that there is a growing illiteracy (see Gisi, Forbes 1982).

"Are bilingual programs helpful to students who are not literate in English?"

National Assessment findings do not bear directly on the efficacy of bilingual programs, but two studies provide some data which, in conjunction with other data, may help answer this question. Finding 1 indicates that Hispanic 9-year-olds improved
