Why is it important to test all four skills?

Listening, reading, speaking and writing: these, as every language teacher knows, are the “four skills”. I expect that most language teachers, if asked, would say that they are all equally important. To be sure, there is a great deal of variation between different learners’ skills profiles – their relative strengths in the respective skills, and the effort it takes to acquire them – but most general language courses these days assume that the skills are best learned in parallel, and that they should all receive roughly equal amounts of attention.


And yet some General English language exams claim to measure someone’s proficiency in the language without assessing all four skills. I want to consider the likely reasons for this practice and its implications.

The four skills haven’t always been considered of equal importance, of course. Until about the middle of the last century, language teaching in Europe was heavily biased towards reading and writing. It followed the tradition of teaching languages that are no longer spoken – Ancient Greek and Latin. The main reason for learning a language, it was assumed, was to be able to read literary texts, and grammar was considered a worthy subject of study in its own right. To some extent this also reflected the real needs of language users. People travelled less; international trade and the exchange of ideas were conducted largely through written correspondence and publications.


Even today there are some specialized language needs for which some skills are more important than others. For air traffic controllers, for example, highly accurate listening and clear pronunciation are essential, but reading and writing are marginal. But for general language courses, where we don’t know precisely what each learner’s future language needs will be, there is no basis for such a selective approach. Whatever the sphere of activity in which the learner will later operate – international business, tourism and hospitality, academia, employment with a multinational company, or simply travel – we do know that he or she will have to function in a globalized environment in which a great variety of channels of communication are used. With increased mobility, face-to-face communication skills (speaking and listening) are essential, but reading and writing have not gone away. Indeed, with the increasing use of email and the advent of text messaging, they are as important as ever.


If you asked most non-specialists which skill is the most important, they would probably say speaking. This is partly because speaking is popularly thought of as the primordial language skill. After all, when we want to ask someone about their proficiency in a language we say “Do you speak English?”, not “Do you write Greek?” or “Do you listen to Japanese?”. It may also be partly a reaction to the neglect of the oral/aural skills in the past. Among my generation it is common to hear people say “I learned French for five years at school but I can hardly speak a word!”.


It is surprising, then, that where tests don’t cover all four skills, speaking is the one they tend to leave out, or to treat as an optional extra. The most likely reason for this has to do with the practicalities of large-scale testing. Generally speaking, tests of the literate skills – reading and writing – are easier to administer than tests of the oral/aural skills, because they can be delivered on paper in the same way as exams in most other school subjects. You simply print the requisite number of papers and then provide rooms in which large numbers of candidates can sit the test at once, with suitable supervision. With speaking, however, it is not possible to test a room full of people at once, and listening requires, in addition to the paper, an audio recording and a means of playing it.


When it comes to marking tests, the receptive skills – reading and listening – are generally easier to score than the productive ones. Answers to comprehension questions can be scored quickly and cheaply, by electronic scanning in the case of multiple-choice items. Written and spoken productions, however, are a different matter. In recent years great progress has been made in the automated scoring of written and spoken responses using computers programmed with sophisticated Artificial Intelligence software; the speaking and writing components of Pearson’s PTE Academic are scored entirely by this means. But where these resources are not available, the marking of spoken and written responses requires expert judgment. Added to the effort put in by the expert examiners themselves is the work of training, standardization and monitoring, to ensure that, as far as possible, different examiners mark to the same standard. For speaking this process is especially complicated, as the responses have to be distributed in the form of audio recordings which must be listened to in real time.


All this costs money. It is no wonder, then, that of the four skills, testers find that speaking is the most difficult to assess. But the fact that it is difficult is not a reason to avoid it. In fact it places more responsibility on testing organisations to use their resources and expertise for this purpose. The variation in skills profiles which I referred to at the beginning of this article means that you cannot make a valid prediction of someone’s ability in speaking – or in any skill for that matter – from their scores in the other three skills. Employers and other test score users are increasingly aware of this.


In PTE General we test speaking by means of a face-to-face interaction with a trained interlocutor, with a separate examiner present; the whole test is recorded for monitoring and standardization purposes. For practical reasons this is done separately from the testing of the other three skills, which can be carried out with groups of test takers in a single session. But the scores from the speaking test are combined with those of the other three skills and given equal weight, and it is not possible to obtain a certificate without being assessed for speaking. In this way we can assure employers and universities that our evaluation of test takers’ language proficiency is based on real evidence of their performance in all four skills. If they want to know how well a candidate can speak English, they can refer to a score obtained by hearing them speak, not by drawing an unreliable inference from their scores for other skills. •

Author

ELT News