[Journal of the Simplified Spelling Society, J25, 1999/1 pp20-23]

Aspects of Spelling Standards among English 16-year-olds in the 1980s and 1990s.

Jennifer Chew.

Jennifer Chew started teaching English in South Africa in 1962 and has taught the subject up to Advanced Level in England since 1978. She has written articles and pamphlets on spelling among sixth-formers and on the initial teaching of literacy in primary schools.

0. Abstract.

A survey of standards of written English carried out by the University of Cambridge Local Examinations Syndicate (UCLES, 1996) gave evidence of a sharp fall in spelling standards from 1980 to 1994, though the researchers had reservations about the representativeness of the available data. The present article suggests that data collected at one college annually from 1984 until the late 1990s may be regarded as supplementing and confirming the Cambridge findings. Apparent discrepancies between the two studies are explained, reasons for declining spelling standards are offered, and remedies are suggested.

1. The UCLES survey.

In JSSS J22, Chris Upward (1997) reviewed a report from UCLES by Massey & Elliott (1996) which showed a steep decline in spelling standards between 1980 and 1994. The authors of that report were cautious about drawing firm conclusions:
This paper ... cannot say conclusively if grading standards in English have risen or fallen in recent years. But it does present some rare comparative data concerning features of the writing of pupils awarded ostensibly 'equivalent' 16+ examination grades between 1980 and 1994, which are interesting and worth public consideration. (p 5)
Massey & Elliott looked at scripts produced under examination conditions in 1980, 1993 and 1994 and analysed the fourth sentence in 60 scripts (30 from girls and 30 from boys) for each grade (A to E in the O Level year (1980) and A to G in the later GCSE years). Although sentence-length, range of vocabulary and punctuation were among the aspects of writing studied, it is the findings on spelling which will be of greatest interest to JSSS readers.

Massey & Elliott were aware of certain factors which might have affected the reliability of their comparisons: in particular, the scripts they had for 1980 were originally selected for a different purpose and were unrepresentative of the weaker end of the ability-range, and the scripts they had for 1993 did not represent candidates who had been assessed entirely by coursework. The present article suggests that a survey carried out at a sixth-form college in south-east England may go at least some way towards filling the gaps.

The UCLES study was undertaken as a result of public concern about a possible decline in standards of written English following a major change in the public examination system in 1988. Until 1987, there had been a two-tier examination system in England: abler sixteen-year-olds had taken the General Certificate of Education Ordinary Level (O Level) examination and the less able had taken the Certificate of Secondary Education (CSE) examination. From 1988, all students took a common examination, the General Certificate of Secondary Education (GCSE). The rapid rise in the pass-rate suggested to many people that grade-inflation had occurred, that is, higher grades were being awarded without higher standards being achieved.

2. Sixth-form college study: background and method.

Sixth-form colleges cater for students aged 16 and over who wish to continue their education after the period of compulsory schooling. The college study discussed here arose out of the need for a quick screening process to identify, among students entering the college after taking their GCSE exams at age 16, those with literacy difficulties, so that they could be given appropriate help without delay.

From September 1984 onwards, a spelling test was administered to all new entrants to the college. The test used has always been the same: Fred Schonell's Graded Word Spelling Test B (Schonell 1950), minus the first 30 words, which were considered by Schonell himself to be suitable for children up to the age of eight. Scores have always been recorded together with the students' grades in the 16+ English examination taken about three months earlier. Examination grades are given a numerical value according to a nationally-accepted formula (7 for an A, 6 for a B, 5 for a C, etc), which makes it possible to calculate an average English examination score as well as an average spelling score for the college intake each year. A starred A grade (A*) was introduced in 1994 and assigned a numerical value of 8 nationally, but the college survey continued to count A* as 7: it was a subdivision of the old A grade, and counting it as 8 would have made average GCSE grades from 1994 onwards look even more inflated than they were already suspected to be. The implications of changing levels of performance over the years are set out in Fig. 1.

Fig. 1: Grade inflation at sixth-form college, 1984-1998: exam scores rise, spelling scores fall. [Chart not reproduced; it plots rising exam scores, falling spelling scores and student numbers year by year.]
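As an illustration of the scoring convention just described, the short Python sketch below computes an average examination score and an average spelling score for an intake. The code and the sample data are hypothetical, not part of the original study, and the values for grades D to G are extrapolated from the '7 for an A, 6 for a B, 5 for a C' formula.

    # Hypothetical sketch of the college's scoring convention.
    # Grade values follow the national formula (A = 7, B = 6, C = 5, ...);
    # A* is deliberately counted as 7, as in the college survey.
    GRADE_VALUES = {
        "A*": 7,  # nationally 8, but treated as a subdivision of the old A grade
        "A": 7, "B": 6, "C": 5, "D": 4, "E": 3, "F": 2, "G": 1,
    }

    def average_exam_score(grades):
        """Mean numerical value of a list of GCSE English grades."""
        return sum(GRADE_VALUES[g] for g in grades) / len(grades)

    def average_spelling_score(scores):
        """Mean Schonell spelling score (out of 70) for an intake."""
        return sum(scores) / len(scores)

    # An invented miniature intake, for illustration only.
    grades = ["A*", "A", "B", "B", "C", "C", "D", "E"]
    spelling = [68, 65, 60, 58, 55, 54, 48, 42]
    print(f"{average_exam_score(grades):.2f}")        # 5.38
    print(f"{average_spelling_score(spelling):.2f}")  # 56.25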

3. Comparability of the UCLES data from the three years.

The UCLES researchers recognized that the available scripts might not provide a fully reliable basis for comparison across the years. Two problems are particularly relevant to the present article, one affecting comparability of the one O Level year with the two GCSE years, and the other affecting the comparability of the two GCSE years with each other:

3.1. For 1980, O Level scripts were available but not CSE scripts, which meant that a less able group which was represented in 1993 and 1994 was not represented in 1980.

3.2. The 1994 GCSE scripts came from the full cohort of GCSE English candidates, whereas the 1993 scripts came from only about 20% of the cohort. The reason for this was that in 1993 it was still possible for candidates to be assessed entirely by coursework, and about 80% of candidates were entered for this option. By 1994, the 100% coursework option had been withdrawn and all candidates had to take an external examination for the first time since the inception of GCSE six years earlier. This was therefore the first GCSE year in which examiners saw scripts from all candidates.

The use of grade-for-grade comparisons by the UCLES researchers should have meant that neither of these points affected the comparability of the data: educationists had always assured the public that the standard of each grade was being maintained, regardless of the name of the examination or the amount of coursework involved. The credibility of these claims was, however, rendered rather suspect by the study, which showed seriously declining spelling accuracy within the 'unchanged' grades.

4. Comparability of sixth-form college data from year to year.

The college study included data from the groups which were missing from the UCLES study. In the first four years of the project (1984-7), the spelling test was taken not only by students who had taken O Level English but also by some students who had taken CSE. In the six years from 1988 to 1993, the test was taken not only by students entered for an external examination but also by large numbers of students who had taken the 100% coursework option. The key point is that grade-for-grade comparisons suggest declining spelling standards whether the CSE and 100% coursework candidates are included or excluded.

5. Findings common to both studies.

Both studies found that candidates in GCSE years (ie, 1988 onward) made more spelling errors than those in earlier O Level/CSE years. Grades A (and A* after 1994) to E should have represented the same standard in O Level and GCSE English examinations, but the UCLES researchers found that '1994's writing samples had about two to three times the error rate of their 1980 equivalent' (Massey & Elliott, 1996, p2). At the college, the average spelling score of the whole intake dropped from 57.83 out of 70 in 1984 (the first year of the project) to 52.51 in 1993 (1994 is dealt with in section 6 below). The same grade-for-grade discrepancies were noted in the college study as in the UCLES study: for example, the average spelling score of candidates with grades A to C in the last four years of O Level/CSE (with a CSE grade 1 counted as an O Level grade C, according to the convention of the time) varied only slightly, between 60.19 and 59.84, but by 1993 it had dropped to 55.51. This meant that the top end of the ability-range in 1993 (the A to C candidates) made, on average, not only more errors on the spelling test than candidates with ostensibly 'equivalent' grades in 1984-87 but also more errors than the whole college intake, many of whom had English grades below C, had made in 1984.

6. Apparent discrepancy between the studies.

The only point at which the college findings appear to diverge from the UCLES findings is in the comparison between 1993 and 1994. The UCLES researchers found more spelling errors in the writing of the 1994 candidates than in the writing of the 1993 candidates, whereas the college study showed 1994 entrants to be slightly better spellers than 1993 entrants: the average score rose from 52.51 in 1993 to 53.04 in 1994.

The difference is almost certainly explained by the first-time inclusion, in the 1994 UCLES data, of candidates from the 80% of schools which had formerly favoured the 100% coursework option. The UCLES researchers suspected that this might be the case:
Might schools which had formerly used the 100% coursework option (who formed the majority of those examined in 1994) have placed less emphasis on the necessity of accurate spelling? No other explanation comes readily to hand. (Massey & Elliott 1996, p26)
At the college, the spelling of these students had always been seen in the annual spelling test. Occasional checks on students whose English grades seemed surprisingly good in relation to their spelling ability indicated that they had usually done the 100% coursework option. More objective checks were made in 1992 and 1993: students were asked to state on their spelling scripts whether or not they had done 100% coursework, and separate averages were calculated for the two groups. In 1992, the average spelling score of students with grades A, B, C, E and below for GCSE English was lower if they had done 100% coursework than if their assessment had included an external examination; only at grade D did the 100% coursework candidates have a slightly higher average mark on the spelling test than the 'examination' candidates. In 1993, the difference was more marked: the two groups had the same average spelling score at grade A, but at every other grade, the average spelling score of the 100% coursework candidates was about 2 marks below that of the 'examination' candidates.
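The per-grade comparison just described amounts to grouping spelling scores by grade and by assessment route, then averaging within each group. The following minimal Python sketch (with invented figures; it is not the study's actual procedure or dataset) makes the calculation explicit:

    # Hypothetical sketch: average Schonell spelling score per
    # (GCSE grade, assessment route) group.
    from collections import defaultdict

    # Each record: (GCSE English grade, assessment route, spelling score /70).
    # The figures below are invented for illustration.
    records = [
        ("A", "coursework", 63), ("A", "exam", 63),
        ("B", "coursework", 57), ("B", "exam", 59),
        ("C", "coursework", 53), ("C", "exam", 55),
    ]

    totals = defaultdict(lambda: [0, 0])  # (grade, route) -> [sum, count]
    for grade, route, score in records:
        totals[(grade, route)][0] += score
        totals[(grade, route)][1] += 1

    for (grade, route), (total, count) in sorted(totals.items()):
        print(f"grade {grade}, {route}: average {total / count:.2f}")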

This pattern seemed consistent with the great emphasis placed, by teachers who favoured 100% coursework, on the correction of spelling during redrafting rather than on first-time accuracy. It seemed likely that the 100% coursework candidates had not had much incentive to internalize correct spelling. It was certainly true that teachers favouring 100% coursework had often said that first-time accuracy was less important than accuracy achieved in redrafting, and it is arguable that this had left their students inexperienced in coping with examinations and spelling tests where redrafting, at least with the aid of a dictionary, was not possible. In 1994, when an examination became compulsory, these schools no doubt made an effort to encourage first-time accuracy, but were still inexperienced at doing so. In the college survey, which had always included these schools, it was the effort which showed in the improved test results, whereas in the UCLES survey, which had not previously included these schools, it was the inexperience which showed in the far worse GCSE scripts. To put it slightly differently: in a situation where the work of inexperienced examinees was being seen for the first time, it was the gap between them and the experienced examinees which was striking, whereas in a situation where both groups had always been seen, it was the slight closing of the gap between them which was striking. The discrepancy between the UCLES findings and the college findings was apparent rather than real.

The slight improvement in the average spelling score of the whole college intake in 1994 suggested that the prospect of a compulsory external examination might have made first-time accuracy a higher priority for teachers and students in schools that previously used the 100% coursework option. A further improvement occurred in 1995, followed by a slight decline in 1996 and then another improvement in 1997. Unfortunately, however, a major decline occurred in 1998, taking the average spelling score of the whole college intake down to its lowest-ever level (52.22). This was disappointing, but as the average GCSE English grade also dropped slightly it could be argued that examiners were at least not over-rewarding poor spellers as much as they had seemed to do in the past.

7. Other factors affecting spelling standards.

It is likely that the fluctuations noted in the UCLES and college studies were largely, if not entirely, the result of developments at secondary level: changes in assessment methods at 16+ influenced the amount of emphasis secondary-school teachers placed on first-time spelling accuracy.

It is also likely, however, that developments at secondary level have only a modest effect on spelling habits acquired during the seven primary-school years. Students who turned 16 in 1993 and 1994 had started primary school in the early 1980s. During this period, there was a well-documented move towards a type of early literacy teaching which played down the need to teach beginners the letters of the alphabet and their relationship to speech sounds (phonics). In Britain, the approach was called 'real books' or the 'apprenticeship approach'; in North America, it was called 'Whole Language'. The American term highlights the movement's preoccupation with keeping language 'whole', ie, not breaking it up into little bits for teaching purposes. Two of its leading proponents are Frank Smith and Kenneth Goodman. Both are hostile to phonics teaching because, in their view, its focus on graphemes and phonemes breaks language up into small and meaningless units. A famous dictum of Smith's is 'We learn to read by reading' (Smith, 1978), and Goodman has called reading a 'psycholinguistic guessing game' (Goodman, 1967). Their theories dominated teacher training throughout the English-speaking world from the 1970s onwards. In the UK, Dr Tom Gorman, of the National Foundation for Educational Research, surveyed the reading lists issued to students by teacher-trainers, and made the following statements:
The majority of the books in the lists cited espouse an approach to the teaching of reading which is now sufficiently widely accepted to be considered orthodox, sometimes referred to as the 'apprenticeship' approach ... The approach, as it is frequently expounded, tends to underplay the amount of knowledge teachers need to have about the sound system and the written system of English ... I concluded from this enquiry, therefore, that it is likely that many teachers in training are not being provided with the information that they need to provide information to beginning readers. (Gorman, 1989)
The sixteen-year-olds taking GCSE from 1988 onwards had been 'beginning readers' during a period when teachers had been inadequately trained in how to teach 'the sound system and the written system of English', and it is difficult to avoid the conclusion that the spelling of these students had suffered as a result.

8. The way forward.

Large-scale spelling reform may not be necessary to reverse this decline in spelling accuracy. Checks made at the college suggest that even the weaker sixteen-year-olds (those with grades D and below in GCSE English) misspell, on average, only about 3% of words in their normal writing. The proposals of some spelling reformers would alter the spelling of a far higher percentage of words: the first 100 words of Chris Upward's (1997) article on the UCLES report (ritn in Cut Spelng), for example, contain 46 spellings which diverge from traditional orthography (TO). This will surely seem like overkill to anyone who is familiar with the writing of average youngsters. When spelling standards change over a period as short as that covered by the UCLES and college studies, the orthography cannot be to blame. Changes in teaching and assessment methods over the same period, however, can evidently have a noticeable impact. The best hope for an improvement in spelling standards seems to lie with good teaching at primary level followed by high expectations from teachers and examiners at secondary level.

The logical place to start teaching beginners to read and spell in a language with an alphabetic writing-system is with the simplest letter-sound correspondences. Even in countries with much more straightforward orthographies than English, teachers start with the shortest words and delay the introduction of digraphs and other complications (for example, umlauts in German). In English-speaking countries, by contrast, such ideas were increasingly rejected from the 1960s until the mid-1990s: it was considered more important that beginners' reading books should have 'natural' vocabulary (ie, vocabulary which is not controlled for orthographic simplicity), and that 'invented' or 'emergent' spelling should be encouraged (children make their own attempts to spell words which they want to use in their writing, and misspellings are not corrected).

If teaching methods for English beginners were governed by the same principles as they are for beginners in non-English-speaking countries, the first stages of learning to read and spell in English should be no more difficult than the first stages in other languages, provided the vocabulary is controlled. It is only later that English traditional orthography makes greater demands on learners, but the evidence suggests that a good phonics start makes it relatively easy for children to go on to master the more complex aspects of English spelling. In South Africa, where systematic phonics teaching was at the time routine in primary schools, several hundred English-speaking sixteen-year-olds tested in 1987 had a much higher average score on the Schonell spelling test than the college students (Chew 1990). There were signs that it was the phonics teaching which had produced this result: the weakest South Africans were better at matching symbol to sound, producing (for 'equipped') attempts such as *equipt and *equiped, compared with the weakest college students' *equit, *quipet, *epitt and *accipt.

A change in teaching methods would be well supported by a very large body of empirical research and would have massive public support. It would therefore be easier to justify than spelling reform as a first step in raising spelling standards.

References.

Chew, J (1990) Spelling Standards and Examination Results among Sixth Formers, 1984-1990. York: Campaign for Real Education.

Goodman, K (1967) Reading: A psycholinguistic guessing game. Journal of the Reading Specialist (May 1967), pp126-135.

Gorman, T (1989) What Teachers in Training Read about Reading. Slough: Centre for Research in Language and Communication, National Foundation for Educational Research.

Massey, A J & Elliott, G L (1996) Aspects of Writing in English Examinations at 16+ between 1980 and 1994. Cambridge: University of Cambridge Local Examinations Syndicate (UCLES).

Smith, F (1978) Reading. Cambridge: Cambridge University Press.

Upward, C (1997) Alarm Bels Ring for Fonics and/or Spelng Reform. Journal of the Simplified Spelling Society J22 1997/2, pp26-32.

