DURING the past 30 years there has been influential opposition to all forms of testing in Australian schools. Teachers' unions, in particular, have objected to basic skills tests in English and Mathematics, General Knowledge tests in the core subjects of English, Maths, Science, and Social Studies, and end-of-year examinations in these same subjects, especially in Years 10 and 12. Throughout the country, but especially in Victoria, powerful lobbies have been pressing for more internal assessment of student work and ignoring arguments about the need for reliable external measures of pupil achievement and school accountability.
Nevertheless, testing is being conducted throughout Australia. Indeed, the Australian Education Council suggests that despite pressures to do away with testing, more and more people in education are interested in external measures of pupil achievement. (1) Tests devised by the Australian Council of Educational Research (ACER) have been of particular interest across the country.
There are, of course, good and familiar reasons for the external assessment of school pupils. Well-designed examinations promote equity, allowing all students of whatever background an identical chance to demonstrate their knowledge of the same assigned material, guarding against unwitting teacher bias, and eliminating the cheating that is so hard to prove in work completed at home. They also serve a diagnostic purpose, enabling teachers and parents to see more clearly what the children in their charge can and cannot do in central curricular areas. By revealing large patterns of achievement across a State, they help schools to meet the repeated accusation that they are insufficiently accountable to the public that funds them.
As well, sound exams confer seriousness upon all study; promote, through revision, the reinforcement of knowledge; and underline the value of intellectual accomplishment. They help to motivate lethargic students, many of whom are unwilling to extend themselves fully on other assigned school work. Because they force pupils to decide for themselves what is most important as they revise great blocks of material, they build discernment and self-discipline. On the big day, they encourage the display of skills such as conciseness, fluency, and the logical ordering of ideas; and their inbuilt time limitations enable students with a tendency to procrastinate or tie themselves in knots over essays and projects to reveal on the spot, in clear and uncluttered form, what they can do.
For all these reasons, examinations have an important place in school life. Despite their limitations -- the most obvious one being that they cannot test all a student's essential knowledge in even one key segment of a major subject -- they are fairer to more pupils than any other known form of monitoring long-range accomplishment.
BASIC SKILLS TESTS
OPPOSITION TO THE TESTS
In an editorial published on 6 September 1990, The Sydney Morning Herald supported the use of Statewide literacy and numeracy tests in New South Wales on the grounds that they provide parents "with an informed account of the basic skills of their children" so that they are "in a position to remedy any weaknesses their children may have." Yet there is still opposition to testing of this kind.
The reasons for opposition are varied. It is said that standardised tests reveal too little about what pupils can do, and almost nothing that their teachers don't already know about them. It is also claimed that the multiple choice format normally used for the tests is intellectually inadequate; that the results of the tests are often misused; and that parents are rarely given helpful information about them. Allegedly, tests cause pupils undue stress, penalise minority groups, particularly recently arrived migrant children, (2) disadvantage those who aren't quick, promote guesswork rather than thought, and fail to disclose trustworthy information about literacy or numeracy.
Despite arguments of this kind, however, parents, employers, and teachers themselves have intensified their demands for basic skills tests in English and Mathematics. Even people who are sceptical about what tests can do have expressed an interest in the statistical bank thus far available, and a willingness to let testing be tried for a longer period to see what it accomplishes.
RECENT PRACTICE IN AUSTRALIAN STATES
In Tasmania, since 1976, 10- and 14-year-olds have been given tests in basic literacy and numeracy on prescribed dates. During a set period annually, the Northern Territory tests literacy and numeracy in Years 5 and 7, and from Year 5 upwards in Aboriginal schools. In New South Wales, also on prescribed dates, basic skills tests prepared by ACER, measuring literacy and numeracy in Years 3 and 6, have been successfully used by state schools since 1988 and recently bought by independent ones. In Victoria, Maths and English tests devised by ACER but used by local primary schools at their own discretion have been widely purchased. The Victorian Government has promised to introduce "literacy profiles" for all students in Years 3, 6 and 9, although it has refused to introduce standardised Statewide testing of basic skills and general knowledge. Queensland pupils have been tested in Reading, Writing, and Mathematics in Years 5, 7, and 9; and samples of students in Western Australia in Years 3, 7, and 10 have been similarly monitored. The recently introduced South Australian Writing and Reading Assessment Project (WRAP) measures samples of students in Years 6 and 10.
Throughout Australia, local efforts have been made to inform parents about the performance of their own children; but not all schools have managed to explain satisfactorily either the nature or the purpose of a basic skills testing program. At the State level, policy-makers have disagreed about whether average results calculated for individual schools should be made available to the public as well as to school heads and teachers. In New South Wales, no problems have arisen as a result of the release of comparative school figures to individual principals. But a broader public release of data has not been attempted. (3)
Those in favour of the release of school results have argued that comparisons between schools are not invidious if socio-economic, ethnic, and language profiles of school intake accompany the publication of each school's results. But administrators who fear that schools will be praised or blamed for determinants of educational performance beyond their control, and that teachers in disadvantaged regions will be vilified no matter what, continue to oppose the release of individual school figures.
THE CASE FOR STANDARDISED BASIC SKILLS TESTING
An argument regularly voiced by the more strident opponents of skills testing in Australia -- namely, that a basic skills testing program here couldn't possibly work because overseas tests have been such a failure -- is, in the light of ACER's success in New South Wales, Tasmania, and Victoria, manifestly false. It also runs counter to the view expressed by the head of the American Federation of Teachers, Albert Shanker, that even though poorly constructed tests in the US have had the unfortunate effect of encouraging weak teachers to teach to the lowest common denominator, it would be foolish to stop basic skills testing.
Although educational spokesmen from abroad like Shanker are wary of testing programs which discourage imaginative curriculum design and concentrate on test "results" at the expense of genuine thought and discovery, many continue to support the construction of better skills tests. Especially, they argue for tests of the kind ACER has designed, using "skill bands" which include higher order thinking tasks, and which encourage teachers to expect more of pupils than early US testing programs did. (4)
Overseas proponents of basic skills tests (5) contend that testing which serves a clear diagnostic purpose and gives parents comparative information about their school's performance is essential -- even if the easily graded multiple choice format has obvious limitations. Like test organisers in Tasmania, the Northern Territory, Queensland, and New South Wales, they believe in publishing results in annual reports so that essential comparisons can be made; and they are convinced that the setting of clear, achievable standards of competence raises the levels of accomplishment in schools.
Although nobody, here or overseas, claims that basic skills tests measure all the significant learning taking place in classrooms, tests can disclose cognitive strengths and weaknesses in children which don't show up in ordinary classroom activities. The facts provided by well-constructed tests supplement information already possessed by classroom teachers, and enable schools to develop more suitable programs of remediation and acceleration. Especially if they are administered early enough, tests identify weaknesses in learning-disabled six- and seven-year-olds which, untreated, cause years of school failure. (6)
According to members of ACER's staff involved in basic skills testing, (7) more and more Australian teachers and administrators have been expressing an interest in standardised literacy and numeracy tests. Although teachers as a whole are divided on their usefulness, many have been prepared to examine a range of skills tests to find out for themselves about the testing being done here and overseas, whether particular tests are likely to be well received by pupils, how results can be used in remedial programs, and where their own students stand in relation to children of the same age and of similar socio-economic background in other parts of Australia and the world.
In short, increasing numbers of teachers believe that the external checks on their teaching provided by skills testing can give them essential information about their successes and failures, and point them in directions which will facilitate instructional change and improvement.
BRITAIN'S S.A.T.s
Some of the most innovative tests for young children measuring higher order thinking skills have been devised by researchers working for Britain's National Foundation for Educational Research (NFER). Their performance-based Standard Assessment Tasks (SATs) have been tied directly to the curriculum teachers have chosen to use, and incorporated stimulating and imaginative oral and written work across the major core disciplines of English, Maths, and Science. Some of the tests are administered to groups of four to six pupils; the majority are given to an entire class.
With the co-operation of school staffs, NFER has developed tests which allow pupils to answer questions orally in a normal class setting, to work out written problems in their own time without the pressure of having to complete work in a set period, and to combine problem-solving, creative writing, ordinary measuring and calculating, and a host of other skills in exercises which are indistinguishable from those regularly set in ordinary classrooms. The process is meant to be completed in a week.
The SAT testing program has shown that interesting and informative tests can find an accepted place in ordinary classrooms without the presence of fear or stress -- indeed, often, without any consciousness on the children's part that testing is taking place. Particularly when questions are closely connected with classroom work, pupils show pride in their ability to manage competently, and they display an interest in the problems set for them very similar to their normal interest in stimulating activity. So far, the achievement of one out of three children has surpassed teachers' expectations. (8)
According to reports published in the past year by The Times Educational Supplement, parents are very happy with the tests, but teachers' unions and lobbyists have been calling them a waste of time -- far too demanding on classroom teachers to be worth the effort it takes to conduct them. While many capable teachers, reportedly, are happy with the program, others feel excessively burdened by the need to keep many pupils quietly busy and productive while small groups are being tested, by the sheer volume of testing and grading time required, and by the knowledge and skill levels required to administer the tests well. How to give teachers needed support during testing periods is, therefore, a pressing question.
In Australia, ACER has already shown an interest in the British SATs, and has begun devising SATs of its own for use in Victorian schools. Even after only one round of SAT testing in Science in Years 5 and 9 in 1990, it was clear to ACER researchers that testing of this type, in both its performance-testing and multiple-choice forms, can reveal important facts about students' conceptual understanding which are not available in more conventionally conceived basic skills tests. (9)
RECOMMENDATIONS: BASIC SKILLS TESTS

Basic standardised skills tests in literacy and numeracy, of the kind now being devised by ACER and used at local and State levels, should be given throughout Australia at key stages in pupils' schooling -- preferably, in Years 3, 6, and 9 -- to ensure that acceptable standards in English and Mathematics are being attained, and to identify strengths and weaknesses at the individual, school, and systemic levels.

Individual results of the basic skills tests should be clarified and made available only to parents and the school principal. School results should be made available to parents and the general public, but they should be accompanied by a socio-economic, ethnic, and language profile of each school's intake to discourage hasty, unfair comparisons. Schools should not claim praise or receive blame for social considerations beyond their control; and the public must be fully informed on this important subject.

Basic skills testing programs which encourage higher order thinking in natural classroom settings, such as NFER's in Great Britain and the more recently devised ACER science testing programs in Victoria, have a clear place in our own schools; but there is a need to provide support for teachers in administering the more demanding performance-based tests.
GENERAL KNOWLEDGE TESTING
AUSTRALIA'S RECENT HISTORY
Much less information is available about Australian children's basic factual knowledge of the core subjects of English, Mathematics, Science, and Social Studies -- at local, State, national, or international levels -- than about that of children in overseas countries, for the obvious reason that we have done very little general knowledge testing. Australia participates in international tests in Science, but not in other core disciplines. (10)
To date, there are no national tests in Australia which correspond to the well-designed multiple-choice tests in History and Literature prepared in the United States by the National Assessment of Educational Progress (NAEP) at the behest of the National Endowment for the Humanities. Although many educators suspect that our pupils know as little about key events in the cultural history of the West and of their own country as the American students described by Chester Finn and Diane Ravitch in their widely publicised study What Do Our Seventeen-Year-Olds Know? (see diagram), we have no hard evidence either way.
NAEP SAMPLE QUESTIONS ON HISTORY AND LITERATURE
Towards the end of 1990, a series of articles in The Australian entitled "The Great Education Debate" focused public attention on gaps in the general knowledge of selected junior secondary students. As well as discussing their appalling performance in the general knowledge test described in this report's Introduction, The Australian highlighted the poor results of a general knowledge test in Social Studies commissioned in 1989 and conducted by ACER's Dr Malcolm Rosier and educational consultant Peter McGregor with 232 Year 10 students in government and non-government schools in Victoria (see diagrams). (11) Like Donald Horne's recent general knowledge test, both tests underline the need for greater school accountability in the area of cultural literacy.
Australia is one of the few advanced countries in the world not involved in national and international testing of general knowledge in all the major core subjects. In sport, Australia has long participated proudly in international competition. The globalisation of financial and other markets is forcing us to focus on inefficiencies in our economy. In education, it is high time that we shed our aversion to international comparisons, and the fears implicit in it. Until problems are faced, they cannot be solved; and unless important features of our school programs are identified, we cannot determine our own needs.
One section of the General Knowledge Test was a matching exercise: on the left, a list of books; on the right, in no particular order, a list of their authors. Students were asked to match them by writing the correct author's surname after each book title.
THE SOCIAL STUDIES SURVEY

Percentage of correct answers to sample questions asked in the survey on (1) Civics, (2) Politics, (3) History, (4) Economics/Industry, and (5) Geography.
RECOMMENDATIONS: GENERAL KNOWLEDGE TESTS

Australia should become involved, as most of the world's educationally advanced nations are, in national and international programs testing knowledge of every core subject. Our recent involvement in international science testing is to be applauded. International tests provide essential diagnostic information about how students' general knowledge compares with that of pupils in the same age group in other States and nations. They also provide important checks on the school curriculum, and its provision in the broad area of cultural literacy.
EXAMINATIONS IN CORE SUBJECTS
RECENT AUSTRALIAN EXAM HISTORY
In 1987, in a report called In the National Interest, the Commonwealth Schools Commission urged that all public examinations be phased out. It claimed that they led to bad educational practices, encouraging "a narrowing of teaching and learning" and promoting rivalry between academic and non-academic pupils.
Since the 1950s there have been no formal examinations in Australia at the end of primary schooling. By the end of the 1960s formal exams in core subjects about half-way through high school, at the minimum leaving age of 15 years, were abandoned almost everywhere. In the 1970s the HSC exam was dropped in Queensland and the ACT in favour of the Australian Scholastic Aptitude Test (ASAT), and significantly altered in the other States by being coupled with school-based, internal assessment. In the early 1990s in Victoria, the required HSC examination was replaced by Common Assessment Tasks (CATs) in each subject, marked on a scale of 1-10 and monitored by regional "panels".
In New South Wales, the Year 10 School Certificate has been awarded on the basis of exams -- now called Reference Tests -- since 1965. In 1990 a test in Science, as well as English and Mathematics, was introduced. Substantial criticisms of questions asked in these tests, (12) and of the methods adopted for scoring them, have been made. (13) Nevertheless, there is widespread support for the testing program as a measure of accountability.
GENERAL ARGUMENTS AGAINST EXAMS
Some opponents of exams in core subjects are clearly anti-intellectual -- eager to avoid any form of mental reckoning, and bent on allowing almost any species of "work" to pass through the system. Others believe in objective measuring of student performance, but claim that examinations do a worse job than essays or portfolios and encourage "cramming". Still others argue that although exams aren't objectionable in principle, in practice they encourage students to memorise great chunks of information which mean little to them, and which they very rapidly forget. A minority assert that only oral exams examine pupils adequately, since they enable teachers to distinguish between the glib and the truly thoughtful and informed.
For over 30 years in Australia, influential community groups have argued that other kinds of external checks on pupil performance work as well as, if not better than, exams. Final examinations, they claim, create undue stress in pupils because they "count" so much. Bad performances on a single day can affect job opportunities to an unjust degree. Instead of rewarding continuing effort, as proper teacher assessment does, exams create division among students and foster the success of "élites" whose sole claim to fame is that they are good at taking tests.
THE PUSH FOR CUMULATIVE ASSESSMENT
Those who oppose end-of-year or end-of-term examinations in core subjects often argue that the written work which pupils complete themselves, at their own speed, day by day and week by week, should be the "cumulative" means of assessing them. What children can actually do, they assert, is much more clearly shown by the essays or problem-solving tasks they complete in ordinary circumstances than by exam answers produced under pressure.
Leaving aside the unverified claim that all or even most children do better work on assignments than on exams, the fact is that complete or near-complete reliance on cumulative (sometimes called "continuous") assessment imposes a heavy burden on teachers. In subjects like Mathematics, where this species of assessment often involves complex problem-solving, cheating is not always easy to detect; so it is not easy for teachers to know whether the understanding of individual students is adequate, or whether they have actually done the bulk of the assigned work.
In Humanities subjects such as English or History, where the staple of cumulative assessment is the essay or project, teachers must comment on and grade a huge volume of work on a wide variety of topics. Doing this well presupposes an extremely solid knowledge base (including familiarity with secondary sources likely to be used inappropriately by weak pupils) and more time and energy than is available in an ordinary 50-hour working week.
For the most conscientious teachers, therefore, the paper work imposed by cumulative assessment is exhausting in the extreme. Tests, regularly given, save teachers time and energy because the number of topics treated is limited; and their comments on representative strengths and weaknesses in pupil performance can be made to the entire class. Any additional student queries can be taken up individually during class after tests are returned, or afterwards if more time is needed.
One common result of the burden imposed on teachers by cumulative assessment is that large numbers of students -- especially those whose teachers lack necessary knowledge, dedication, time, or energy -- receive scanty feedback on their completed tasks. They encounter single phrases like "Well done" or "A good effort" or "Needs to be more carefully researched" but little or no commentary on the substance of their work. Many pupils who routinely copy huge slabs of material from encyclopædias or other reference books in order to complete "projects" are never caught, and therefore never have to demonstrate either an awareness of the difference between plagiarism and summary or a real understanding of their subject.
Particularly neglected in cumulative assessment programs is systematic instruction in the mechanics of writing and research. Countless pupils are not told whether their use of English in assessment tasks is acceptable. Their teachers do not comment on such essential matters as where argument breaks down, why paragraphs don't cohere, how to cull key facts from secondary sources, what kinds of information should be included to give the assigned piece of work greater weight, or the vocabulary suited to particular tasks.
PRO-EXAM ARGUMENTS
Public examinations in the senior years of high school were introduced in Australia for reasons of equity. The school system favoured those already favoured by their parents' recognition of the value of education and their ability to pay for it. External examinations gave students who lacked the advantage of attending a prestigious school the opportunity to be judged according to their ability rather than their social origins. These exams still do this, because they enable all pupils, regardless of their background, to reveal their own prowess and nobody else's on an identical form of assessment.
In workable examination systems, students actually learn more than they do by relying entirely on cumulatively assessed tasks; for by preparing for their exams, reviewing texts and notes, consolidating material, and deciding without anyone else's help where they need to concentrate as they go over set work, pupils absorb what cannot be absorbed as effectively in other ways. At the examination itself, the very fact of having to make good use of their time -- to take in the essentials covered by a question, and to write logically, clearly, and coherently about topics which cannot be tackled with prepared answers -- helps students to acquire greater confidence, fluency, and intellectual proficiency.
The usual bugbears of internal assessment -- especially, teacher bias, lack of parity among schools, the impossibility of eradicating cheating, and teacher burn-out caused by the sheer volume of work they must comment on in writing -- do not obtain during Statewide examinations. Employers cannot "write off" whole schools and everyone in them, on the grounds that teachers have skewed marks, without getting caught. Everyone has an equal chance to do well. Hence, even though exams should not be relied on as the sole means of evaluating student performance, their place in all programs of instruction is essential.
RECOMMENDATIONS: ASSESSMENT IN CORE SUBJECTS

A wide range of assessment tools should be used in primary and secondary schools. Because they perform different functions, exams and tests (both written and oral), essays, participation in class discussion, long-range assignments, and portfolios should all be used extensively to monitor pupils' work and give students thorough feedback.

Exams in English, Mathematics, and Science should be given to all students at the end of Years 9 or 10, not simply to those in New South Wales, to ensure that their work meets international standards of basic competence.

Required written examinations in core subjects for all students leaving school in Year 12 should be used in conjunction with other methods of assessment (e.g., essays, portfolios). Written exams ensure parity among pupils from different schools and regions; they measure knowledge and thinking capacity in ways cumulative assessment tasks, by themselves, cannot; they contain reliable safeguards against cheating; and they provide an annual check on the quality of the school curriculum and its implementation.
NEW ASSESSMENT PROCEDURES IN VICTORIA: THE V.C.E.
The most influential current opponents of examinations believe competitive tests like the traditional HSC should be replaced by other, less threatening checks on pupil performance. Disagreeing with those who believe exams are the best protection against injustice for all pupils, particularly those who have been economically disadvantaged, they argue that internal school assessment complemented by external project evaluation by panels of teachers is equally, if not more, equitable. But in Victoria, where an alternative Year 11-12 assessment scheme -- the Victorian Certificate of Education -- involving "verification panels" and relatively little external examining has recently been instituted, major unresolved problems have plagued pupils, teachers, and schools.
HOW THE V.C.E. WORKS
To obtain the VCE, students must complete 16 satisfactory units of work, with all work in the units graded satisfactory or non-satisfactory. But unless students complete Common Assessment Tasks (CATs), which in most subjects include one recommended examination (worth 25 per cent of the final mark) as part of their unit work, they are unlikely to gain tertiary entrance. The CATs are graded from A to E and scrutinised outside the school by regional verification panels composed of senior teachers. The A-E grades are converted, for university entrance, to marks from 1-10. Students who complete four CATs for every major subject studied will therefore receive a grade out of 40 which will make up the score used for tertiary selection.
MAJOR PROBLEMS CREATED BY THE V.C.E.
Of the many problems created by this complex system, the major ones have involved the extremely vague and vacuous nature of the syllabus "guides"; the lack of incentives generated by an inadequate Satisfactory/Unsatisfactory (S or N) grading procedure for all units; and the work of the "verification panels". Even teachers who agree that exams are not the only valid external means of monitoring the work of senior pupils and of ensuring equity across the State are calling the plan in practice a bureaucratic nightmare.
Not only are the administrative procedures teachers must follow to assess CATs time-consuming and enervating in the extreme; but the need for teachers to be away from school on verification panels is reducing essential teaching time. It is estimated that by 1992 more than 1,200 people, all of them requiring training, will have been appointed to run local, regional, and State panels in 44 fields of study -- a much higher figure than was originally envisaged. (14) And because authentication is a much more complex problem than preventing straight-out cheating, the work of the panels is likely to prove unmanageable.
As the Caulley Report has made clear, (15) students new to an area are not capable of original research and must rely heavily on the work of others. The question, "How much 'help' can parents and teachers legitimately give to a student?" is therefore very difficult to answer. Under the HSC, adults could reasonably help children with their work, knowing that their knowledge would ultimately be tested under exam conditions. But under the VCE, the boundaries between guiding pupils and ensuring that their work is genuine are not clear. For teachers, who are expected to decide whether student work can be presented as their own, the difficulty of making a final pronouncement can be acute.
A further worry about VCE assessment is that respected experts in core subject fields such as English have disagreed among themselves about the quality of new course designs. Although many students report that the new courses are encouraging them to think, and not merely to memorise big chunks of material, large numbers complain that the work being set -- especially in the Australian Studies course -- is so hollow, boring, and patchy that being in class is a waste of time, even though attendance is an assessment requirement.
A final difficulty with the VCE is that the CATs (including the external exam) fail to discriminate sufficiently among senior students seeking admission to university, both because they include some tasks and questions too general to be assessed meaningfully, and because the grades they confer are too imprecise. A major problem, particularly striking in the Maths course until its recent revision, has resulted from the requirement that all pupils study the same components of each subject. This requirement has had the unfortunate effect of watering down essential content and encouraging designers of the recommended but not obligatory exam to set extremely broad questions which encourage reliance upon prepared answers.
RECOMMENDATIONS: THE VICTORIAN CERTIFICATE OF EDUCATION (V.C.E.)

To restore public confidence in work completed by senior pupils, while maintaining diverse course offerings which cater for the varied aptitudes and aspirations of the expanded Year 11-12 student population, the more cumbersome and unworkable elements in the VCE must be altered. What is needed is a valid system of awarding a certificate, based on (1) recorded distinctions between subjects (comparable to Group 1 and Group 2 distinctions formerly operating in Victoria), acknowledging their relative difficulty; (2) syllabuses with clearly-defined content (to save teachers work by clarifying what they need to do); (3) a grading system which provides precise, accurate information about student performance in each unit (not simply S and N grades); and (4) assessment which minimises the problem of "authenticating" student work by requiring externally set and graded examinations worth at least 50 per cent of the final mark awarded for all subjects. (16)
ENDNOTES
1. National Report on Schooling in Australia 1989. The Australian Education Council consists of the Federal Minister for Education and all of the State Education Ministers.
2. Yet an editorial in The Sydney Morning Herald on 15 August indicates that in 1991 students from non-English-speaking backgrounds (about a fifth of those tested) improved their literacy scores on the 1990 results and showed the biggest improvement in aspects of numeracy of any group tested.
3. According to one of Australia's most respected testing experts, Dr Geoff Masters of ACER, author of a discussion paper prepared in late 1991 for the National Industry Education Forum, Assessing Achievement in Australian Schools, there is no evidence that the NSW testing program has led to "invalid or unfair comparisons between schools or regions." Indeed, evidence suggests the reverse.
4. Geoff Masters, ibid., reports that tests designed from 1988 onwards for use in Victoria, Western Australia, New South Wales, and Queensland have attracted worldwide attention.
5. Among the overseas defenders of basic skills testing whose views have been widely publicised is Barbara Lerner. Her March 1991 Commentary article, "Good News About American Education", argues persuasively that the Minimum Competence Movement in the US, which, since 1976, has required high school graduates to pass basic skills tests, has significantly raised the level of achievement of disadvantaged children because the goals set for pupils have been clearly defined, demonstrably important, and achievable.
6. According to Geoff Masters, op. cit., the Basic Skills Testing program in New South Wales has already disclosed previously unidentified features of student learning. Without skills tests given by schools, the parents of dyslexic children are forced to pay hundreds of dollars to acquire the basic facts needed for their children's proper remediation -- facts otherwise available only through testing programs run by metropolitan teaching hospitals or private (often expensive) testing services. Clearly, our schools should be engaged in testing of this kind and in the development of appropriate remedial programs for children with specific learning difficulties. Currently, such children are casualties of the system. See further on this subject, Susan Moore, "Learning-Disabled Children: The Most Neglected Pupils in NSW", Education Monitor, Spring 1990. On reading tests in particular, and the kinds of information they alone provide, Dr Byron Harrison's research (referred to in Chapter 2) is particularly useful.
7. Particularly helpful in conversations were John King and Geoff Masters.
8. According to The Times Educational Supplement, 20 September 1991, this finding was reported by Britain's Education Secretary, Kenneth Clarke, at a July 1991 conference of the Professional Association of Teachers.
9. See Geoff Masters, op. cit., pp. 11-15.
10. See Malcolm Rosier, "International Science Tests: How Australia Performs", Education Monitor, Winter 1991.
11. For a more detailed look at the series in The Australian, consult that newspaper beginning on 27 October 1990, and continuing until mid-November. A report of the Rosier/McGregor survey is in Education Monitor, Winter 1990.
12. See, for example, a "States' Survey" item on the 1989 English reference test in Education Monitor, Spring 1989.
13. The exact results of Year 10 exams, unlike those for Year 12, are not given to teachers, parents, or students. Instead, each school ranks its Year 10 pupils (1 to 25, or 1 to 45, or however many there are) before they sit the exams, on the basis of their performance throughout the year and their teachers' predictions of how they will perform on the tests. Each school is then awarded a certain number of As, Bs, Cs, and so on, depending upon how its pupils actually perform on the exam; but the school is not told which students received which grades. A represents the top 10 per cent; B the next 20 per cent; C the next 40 per cent; D the next 20 per cent; and E the bottom 10 per cent. The school must distribute the grades it receives (for example, 4 As, 3 Bs, 10 Cs, 7 Ds, 6 Es) according to the ranking assigned to the students before the exam was taken. So a student who has been working poorly all year, but who nonetheless manages to do very well on the exam, will almost certainly receive an exam grade lower than his performance warrants, because the grade he gets depends on his pre-exam ranking. That is why debate on the justice of the system is still raging.
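Assuming the allocation works exactly as described above, its mechanics can be sketched in a few lines of Python. The school, the pupil names, and the quota numbers are invented purely for illustration:

```python
# Hypothetical sketch of the Year 10 grade-allocation scheme described in
# endnote 13: the school's exam results earn it a quota of grades, but the
# grades are handed out according to the pre-exam teacher ranking.

def allocate_grades(ranked_students, grade_quota):
    """Award grades to students (best-ranked first) according to a quota.

    ranked_students: pupil names, ordered by pre-exam teacher ranking.
    grade_quota: mapping of grade -> number of that grade the school received.
    """
    grades = {}
    order = iter(ranked_students)
    for grade in ["A", "B", "C", "D", "E"]:
        for _ in range(grade_quota.get(grade, 0)):
            grades[next(order)] = grade
    return grades

# An invented school of ten pupils, ranked before the exam is taken.
ranking = ["Kim", "Lee", "Sam", "Pat", "Jo",
           "Max", "Alex", "Robin", "Chris", "Drew"]
# Suppose the pupils' actual exam performance earns the school this quota.
quota = {"A": 1, "B": 2, "C": 4, "D": 2, "E": 1}

result = allocate_grades(ranking, quota)
# Drew, ranked last beforehand, receives the E regardless of how well
# he actually sat the exam -- the injustice the endnote complains of.
print(result)
```

The sketch makes the complaint concrete: the exam determines only how many of each grade the school receives, never which pupil receives which grade.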
14. According to the McGaw Report, Assessment in the Victorian Certificate of Education, April 1990, the time and resources required to make the process of verification work properly were grossly underestimated.
15. See An Evaluation of Common Assessment Tasks and their Trials, May 1990, published by a La Trobe University research team led by Dr Darrel Caulley.
16. For further discussion see Ken Baker, What's Wrong with the VCE?, Education Study Paper No. 24.