Language tests play a powerful role in many people’s lives, acting as gateways at important transitional moments in education, in employment, and in moving from one country to another. Since language tests are devices for the institutional control of individuals, it is clearly important that they be understood and subjected to scrutiny. Many people work with language tests in their professional lives as teachers, test developers, raters, administrators and so on. Teachers who prepare students for a test rely on information from tests to make decisions: which of the numerous tests available is most suitable to the learning styles and abilities of their candidates.
Thus an understanding of language testing is relevant both for those actually involved in creating language tests, and also more generally for those involved in using tests.
Not all language tests are of the same kind. They differ with respect to how they are designed and what they are for; in other words, with respect to test method and purpose.
In terms of method, we can broadly distinguish traditional paper-and-pencil language tests from performance tests.
Paper-and-pencil tests take the form of the familiar examination question paper and are typically used for assessing either separate components of language (grammar, vocabulary, etc.) or receptive skills (listening and reading comprehension).
Test items in standardized tests are often in fixed-response format, in which a number of possible responses is presented and the candidate is required to choose among them. The most common type is the multiple-choice format, in which only one of the presented alternatives is correct; the others (distractors) are included to tempt candidates who are unsure of the right answer. Scoring can then be done automatically by machine.
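The machine scoring just described is mechanically simple: each chosen option is compared against an answer key and correct matches are counted. A minimal sketch in Python, with a hypothetical answer key and candidate response set invented purely for illustration:

```python
# Minimal sketch of automatic scoring for fixed-response (multiple-choice) items.
# The item keys, answer key, and candidate responses below are hypothetical.

def score_fixed_response(key, responses):
    """Award one point per item where the chosen option matches the key;
    unanswered items score zero."""
    return sum(1 for item, correct in key.items()
               if responses.get(item) == correct)

key = {"q1": "B", "q2": "D", "q3": "A"}        # one correct option per item
candidate = {"q1": "B", "q2": "C", "q3": "A"}  # a distractor was chosen for q2

print(score_fixed_response(key, candidate))    # prints 2
```

Because the comparison is purely mechanical, no rater judgement enters the score, which is precisely what distinguishes this format from the performance-based tests discussed next.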
In performance-based tests, language skills are assessed in an act of communication. Performance tests are most commonly tests of speaking and writing, in which a more or less extended sample of speech or writing is elicited from the test-taker and judged by one or more trained raters using an agreed rating procedure.
Language tests also differ according to their purpose. The most familiar distinction in terms of test purpose is that between achievement and proficiency tests.
Achievement tests accumulate evidence during the course of study in order to see whether and where progress has been made. Achievement tests should support the teaching to which they relate.
In Greece very little attention has been paid to achievement tests. Students’ work is usually marked numerically, and no detailed feedback is provided explaining where the student currently stands, where they need to go, and how to get there. Classroom-based assessment is not done systematically. The majority of state and private school teachers test and rate using the 1–20 scale, unaware of alternative assessment approaches and methods which, among other things, help in evaluating the suitability and effectiveness of the curriculum, the teaching methodology, and the instructional materials. This is hardly surprising, because classroom-based assessment, if included at all in teacher training programmes, is often treated as an afterthought.
Proficiency tests, on the other hand, measure current language ability without reference to any previous process of teaching. Proficiency tests are like snapshots: they depict what a candidate can do on the exam day. If the same candidate took the same test two months later, the picture might well be different.
Materials and Tasks
Even if materials and tasks in language testing appear relatively realistic, they can never be real. Most test situations allow only a very brief sampling of candidate behaviour, usually a couple of hours at most; oral tests may last only a few minutes. Is this enough to provide evidence of candidates’ ‘proficiency’, or ‘readiness’ for communicative roles in the real world?
I am afraid not.