The Implementation of a First Year English Placement System
Anthony Mills, Lillian Swain and Robert Weschler
For the past three years, the Comprehensive English Language Test (CELT) has been administered to all incoming first year students in the Faculty of International Studies at Kyoritsu Women's University. It has been used as a tool for separating the approximately 300 students into three broad levels of proficiency and placing them into appropriate classes. The purpose of this paper is to explain why it was initially decided to separate the students into levels, how that placement was achieved primarily through the use of the CELT test, and to report some observations based on the results of that test.
Part I: Background
In years prior to implementing the CELT as a placement test, students were assigned to first year English classes in a fairly random manner, based primarily on their student numbers. It soon became apparent that within the freshman class there were students with a broad range of proficiency and experience in the English language. The result was that in any given class, the teacher was faced with a dilemma: teach to the level of the more proficient students and the rest of the class would feel lost and frustrated; teach to the lower-level students, and the others would feel unchallenged and bored; teach to the middle, and nobody would be satisfied.
This problem is particularly pronounced in Japan where it has often been noted that, "the nail that sticks out gets hammered down." Thus, the better students feel hesitant to display their knowledge in front of the group for fear of appearing egotistical while the poorer students are reluctant to display their ignorance for fear of appearing stupid. Furthermore, few students are trained to express their dissatisfaction and needs, and thus teachers are deprived of the feedback necessary to make the appropriate adjustments. This is a problem particularly in speaking classes, where so much depends on student interaction. The net result is that nobody learns very well. Clearly, the problem demanded a solution.
In choosing to administer a test and place students by ability, there are advantages for both the teacher and the student. Most obviously, by knowing beforehand the approximate level of the class, the teacher can adjust the course to the particular students (rather than try to do the opposite, as is so often the case). Perhaps first and foremost, the teacher can determine appropriate curricular goals for the class. Once this is done, the teacher can choose a suitable text in terms of content and level of difficulty.
Since implementing this program, we have found that as the course progresses, it is much easier to adjust the teaching methodology appropriately, based on the clearer, more homogeneous response of our students. For example, one can easily choose from among lectures, pairwork, group activities, etc. We can adjust the pace of the class and pay more attention to individual students who might otherwise feel too embarrassed to ask for help. Disciplinary problems, which usually stem from the extremes of boredom or frustration, are less likely to arise and disrupt the class. Evaluation is made more accurate and efficient, since tests given throughout the term can be more finely calibrated, neither too difficult nor too easy. Classroom management is simplified, a more congenial atmosphere pervades the room and ultimately, we have been freed to create a true learning environment.
Perhaps the most noticeable difference can be found in the students' motivation to learn English. We believe that, from the student's point of view, the fact that she knows those around her are all roughly on the same level places her greatly at ease. She is neither intimidated by those above nor bored by those below. Thus, she feels freer to express herself (and make mistakes) without fear of embarrassment. She no longer worries that she might be wasting the time of the other students. She takes confidence in the fact that the materials have been specifically chosen to suit her, that the content will be personally relevant, challenging and yet within her grasp if she merely applies herself. Suddenly, she has a reason to focus her attention and study hard.
At the lower level, the student is pleased to learn at the slower pace which allows her the precious time to absorb the new material. She can benefit from the increased use of fundamental, controlled practice activities. These might include more traditional substitution games and drills, clearly focused task-based activities, information-gaps and situational dialogues. She can feel freer to practice the basic mechanics of grammar, listening comprehension and pronunciation, knowing that those around her are all doing the same.
Meanwhile, in higher level classes, students can be challenged with a greater emphasis on open-ended practice, chunking of larger fragments of language and freer discussion. The content of the course can be broadened and deepened to include higher level grammatical points, academic topics, and sociolinguistic factors such as speech register and appropriacy. But perhaps most importantly, the content can be drawn more from the students' own personal experiences. We have found that those students who have acquired their proficiency through special experiences abroad or in Japan are a rich source of knowledge to their peers. Such students, when grouped together, are actively interested in hearing what their classmates have to say, and in turn are less inhibited in sharing their own experiences. There is a sense of challenge in trying to learn what others have to offer which adds excitement to the class. Finally, since the teacher has to spend less time explaining basic terms in such classes, the pace of the class can be accelerated and thus the students can be kept stimulated.
With all these advantages to the correct placement of students, it is hard to think of reasons not to do so. However, an argument can be made that such separation leads to a hierarchical structure within the school, with the higher-level groups feeling elitist and the lower groups feeling inferior. If in fact this is a problem, we have tried to minimize it by merely not telling the students what groups they are in. While they may guess, our experience has been that they are simply too busy enjoying the learning experience to worry about their placement relative to other groups.
It has also been argued that placement has two other negative consequences not found in heterogeneous classrooms. On the one hand, it deprives the lower level students of the chance to learn from better ones, while on the other hand, it deprives the better students of the opportunity to learn by helping those less able. While this logic may hold in theory, our experience has been that especially in Japan, students in practice simply don't interact that way, perhaps because of the greater cultural value placed on saving face. And even if they did actively seek a form of peer instruction, there is still enough variation left within each class to allow for that type of interaction.
If one accepts the need to place students into appropriate classes based on their English proficiency, then the question becomes what kind of test to use to accomplish that end. At this juncture, one first has to decide whether to try to create the school's own test, or to use a pre-existing test. This question will be addressed and the process of student placement will be described in the next section.
Part II : The Placement Process
The second part of this paper describes how the placement of students into different ability levels is accomplished. It explains how decisions were made about the form the placement test was to take, and goes on to describe how the test was administered.
As a first step, salient features of the testing situation were considered. A clear statement of what we hoped to achieve by conducting the test was required. In addition, any physical, pedagogical, temporal or other constraints pertaining to test administration had to be taken into account.
The test was to be used for placing the approximately 300 first year students in the Faculty of International Studies into streamed classes for their required Eigo 1 course. (Editor's Note: "Eigo 1" is Japanese for "English 1".) Eigo 1 concentrates primarily on developing the students' abilities in listening to and speaking English.
The test needed to be capable of discriminating among the students with sufficient accuracy so that they could be divided into three main ability levels (lower, middle and upper), plus one advanced class comprised mostly of students who had lived abroad or who for other reasons had had an unusually high level of exposure to English. It was felt that three main ability levels would divide the majority of the students into classes of sufficient homogeneity so that they would benefit from the advantages of streaming.
The rationale for creating an advanced class in addition to the upper, middle and lower ability levels was that in any one year there appeared to be a significant number of returnee students and others whose English language ability was considerably higher than the norm. It was felt that these students needed a challenge which was commensurate with their ability. Moreover, before the placement test was used, students whose English language ability far exceeded that of their peers often seemed to find it hard to settle in a 'standard' class. It was felt that requiring these students to participate in such a class was potentially detrimental for the students themselves, for other class members and for the teaching situation. It was also felt that the creation of an advanced class would give students who had returned from living abroad the chance to compare similar educational and life experiences.
For administrative reasons, first year students' schedules for all of their courses have to be fixed before the placement test is conducted in Orientation Week. The only way that it is possible to assign each student to an appropriate Eigo 1 class without changing her schedule, therefore, is to have all three ability levels offered during the same class period. (Please refer to Table 1.) The students are first assigned to a particular time slot for Eigo 1 by the university administration. Then they can be put into the appropriate class within that time period after the placement test has been conducted. In each year there are twelve Eigo 1 classes. Eleven of these classes are distributed among four main time slots. Three of these time slots each contain three classes, one at each of the ability levels. The students assigned to the fourth time slot are divided into two, rather than three, levels: upper-middle and middle-lower. The twelfth class is set aside as the advanced class. This class is placed at a relatively 'unpopular' point on the timetable (such as at the end of an afternoon session) in order to avoid clashes with other courses. Students who achieve particularly high scores on the test and/or show evidence of having extensive experience in English as shown on a separate Student Information Sheet (to be discussed later) are placed in this advanced class.
Table 1: Eigo 1 Class Distribution (1995)
| Day / Period | 1-2 | 3-4 | 5-6 | 7-8 |
|---|---|---|---|---|
| Wednesday | U, M, L | U, M, L | | |
| Thursday | U, M, L | U-M, M-L | A | |

A = Advanced; U = Upper; M = Middle; L = Lower; U-M = Upper-Middle; M-L = Middle-Lower
It should be pointed out that the organisation of this relatively complex timetabling system for Eigo 1 classes has entailed a considerable amount of liaison both between administrative staff and teachers, and among teachers themselves. Part-time teachers have agreed to arrange their own timetables so as to fit in with the system and members of the administration have been extremely cooperative in helping to put the timetable together.
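The mechanics described above — fixed time slots, three levels within each slot, and a separate advanced class — can be sketched as a simple procedure. The following is an illustrative sketch only: the advanced cut-off of 129 follows the low scores reported for the Advanced Class in Table 4, the two-level fourth slot is omitted for brevity, and the real process at Kyoritsu also drew on the Student Information Sheet.

```python
# Hypothetical sketch of the placement step. Students arrive already
# assigned to a time slot by the administration; within each slot they
# are ranked by placement-test score and split into Upper / Middle /
# Lower thirds. Scores at or above the advanced cut-off move the
# student to the separate advanced class instead.

ADVANCED_CUTOFF = 129  # illustrative; taken from the Table 4 low scores

def place_students(slots):
    """slots: {slot_name: [(student, score), ...]} -> {class_name: [students]}"""
    placement = {"Advanced": []}
    for slot, students in slots.items():
        regular = []
        for student, score in students:
            if score >= ADVANCED_CUTOFF:
                placement["Advanced"].append(student)
            else:
                regular.append((student, score))
        # Rank the remainder and split into three roughly equal levels.
        regular.sort(key=lambda pair: pair[1], reverse=True)
        third = -(-len(regular) // 3)  # ceiling division
        for level, chunk in zip(("Upper", "Middle", "Lower"),
                                (regular[:third],
                                 regular[third:2 * third],
                                 regular[2 * third:])):
            placement[f"{slot} {level}"] = [s for s, _ in chunk]
    return placement
```

In practice the boundaries were adjusted case by case, as discussed below, rather than applied mechanically.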
The decision also had to be made whether to create a test especially for Kyoritsu, or whether to use a pre-existing commercially developed one. The fact is that to create a good test which is valid (measures what you intend to measure) and reliable (whose results can be closely duplicated) requires a great investment in time, energy, expertise and money. For these reasons, it was eventually decided to use a commercially produced test. It was recognised that, ideally, a placement test should "fit the group for which it was designed"1, and if the one which is used at Kyoritsu is ever required in the future to do more than simply divide students into three broad groups, the desirability of writing our own test has been noted.
Having decided to use a commercially produced test, we next had to decide which test among the many available would be appropriate for our situation. Alderson, Krahnke and Stansfield2 provide a wide range of evaluative information on English language tests which are available commercially, and we used this information as a basis for our decision making. The test which we were looking for needed to reliably and effectively divide the students into three groups. It also needed to be reasonably priced, re-usable (i.e. usable every year), easy to administer and easy to grade. We also decided that the test should include a listening component. The Comprehensive English Language Test (CELT) was eventually adopted because it most closely fulfilled all of the above criteria. In addition to being pedagogically effective, the cost is not excessive in comparison to other commercial tests, the test booklets can be re-used many times and administration is relatively simple.
The test comprises 200 discrete-point, multiple-choice questions grouped into three sections. Fifty questions are in the listening section, seventy-five in the structure section and seventy-five in the vocabulary section. The test can be graded by hand, using a simple template. The absence of any integrative questions means that the problem of inter-rater reliability does not occur, because all responses are either right or wrong and there is no room for subjective interpretation of the answers which students provide. (It should be noted, however, that some commentators3 have said that multiple-choice questions, which only require recognition of a correct form by the candidate, rather than production of the same form, are to be avoided in most testing situations. It was decided that in our case, with a fairly large population of test candidates, a relatively small number of test graders, and little time available for grading, multiple-choice was at present the most satisfactory alternative.) The CELT also contains a substantial listening section, which was appropriate in this situation. In addition to the advantages listed above, according to the statistics for one sample of 200 test candidates, the test has an overall reliability of .98 and a standard error of measurement of 7.05.4 These figures are quite good, and indicate that we can be fairly sure that a test candidate's result on this test is close to a reflection of her real ability.
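The practical meaning of these figures can be illustrated with a short calculation. Under the standard interpretation, a candidate's true score falls within one standard error of measurement of her observed score roughly 68% of the time, and the SEM relates to the score spread via SEM = SD × √(1 − reliability). The observed score below is hypothetical; the SEM and reliability are those quoted above.

```python
# Illustration of the reliability figures quoted from the CELT manual.
import math

SEM = 7.05          # standard error of measurement (CELT manual)
RELIABILITY = 0.98  # overall reliability (CELT manual)

def score_band(observed, n_sems=1):
    """Band around an observed score: the true score falls inside a
    one-SEM band roughly 68% of the time (standard interpretation)."""
    return (observed - n_sems * SEM, observed + n_sems * SEM)

# Hypothetical candidate scoring 102 (near the cohort average in Table 3):
low, high = score_band(102)

# The same two figures imply the score spread of the norming sample,
# since SEM = SD * sqrt(1 - reliability):
implied_sd = SEM / math.sqrt(1 - RELIABILITY)  # roughly 50 points
```

A one-SEM band of about ±7 points out of 200 is narrow enough for placement into three broad groups, which is all the test is asked to do here.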
The full CELT takes two and a quarter hours to administer. As only ninety minutes were available for test administration, it was decided to omit the vocabulary section, which brought total administration time down to eighty-five minutes. Although we were at first worried that omitting the vocabulary section would adversely skew the results, it would seem from statistics published by the testing company that using only the listening and the structure sections is sufficiently valid for our purposes. Although the manual states that the three sections of the test measure "somewhat different aspects of language competence," it also says that "each section is of sufficient length to yield reliable results, and therefore individual sections may be administered if conditions warrant."5 Correlation coefficients among the various sections of the test are fairly high (between .51 and .71 for one test group and between .45 and .66 for another; see Table 2). These figures indicate that there is considerable overlap in the language skills which are tested by the different parts of the CELT, or possibly that each of the different parts of the test in fact ascertains some kind of basic linguistic competence.
Table 2: CELT Intersectional Correlations (from CELT Technical Manual)
Group A (N=155)

| | Listening | Structure | Vocab. |
|---|---|---|---|
| Listening | -- | .69 | .51 |
| Structure | .69 | -- | .71 |
| Vocab. | .51 | .71 | -- |

Group B (N=60)

| | Listening | Structure | Vocab. |
|---|---|---|---|
| Listening | -- | .57 | .45 |
| Structure | .57 | -- | .66 |
| Vocab. | .45 | .66 | -- |
Overall it was felt that the decision to adopt the listening and structure sections was the right one, particularly as Eigo 1 is principally a listening and speaking course, and students would therefore be asked to use similar skills on the test as they would during classes.
An additional advantage of using the CELT is that there are two versions of the test available (Form A and Form B). The CELT technical manual describes the development of Form B as follows:
In the construction of Form B of CELT, care was taken that the new form would be parallel - and hence interchangeable - with the earlier Form A. Material for Form B was drawn from a pool of new items which had been pre-tested on large numbers of students similar to those used in the preparation of Form A.6
The advantages of having two forms of the same test to choose from are considerable. For us, the main potential advantage is that students who have taken Form A of the test at the beginning of the first year could possibly be asked to take Form B at a later stage. This would provide useful information about the students' English language development during their time at Kyoritsu. This information also has the potential to be used when decisions are made about possible changes to the language program.
In order to provide a correlative measure to the results which were obtained from the CELT, it was also decided to ask each of the incoming first year students to complete a Student Information Sheet, a questionnaire which asked for information about a student's English language learning experience to date and her feelings about studying the language. These information sheets were subsequently used to provide a confirmation of students' test scores. They were particularly useful in cases when a student's score placed her on the borderline between two ability levels. They were also useful in those cases where a student scored very high on one section of the test, but very low on the other. This situation most frequently arose when a student had spent a considerable time abroad, but had received relatively little formal English language instruction. In these cases, CELT scores tended to be high for the listening section and not so high for the structure section. Class placement of students in this or similar situations was slightly more problematical than in the majority of cases, and was achieved on an individual basis, often as a result of a private interview with the student concerned.
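The borderline check described above lends itself to a simple rule of thumb: any student whose score falls within one SEM of a level boundary is a candidate for review against her Student Information Sheet (or an interview). The sketch below is illustrative only; the boundary values are approximated from the gaps between adjacent 1994 levels in Table 4, not the rule actually applied.

```python
# Hypothetical borderline check. A score within one standard error of
# measurement of a boundary could plausibly belong to either level, so
# the Student Information Sheet is consulted before placing the student.

SEM = 7.05  # standard error of measurement (CELT manual)

# Approximate 1994 boundaries, midway between adjacent levels in Table 4
# (Upper low 108 / Middle high 107; Middle low 92 / Lower high 91):
CUTOFFS = {"Upper/Middle": 107.5, "Middle/Lower": 91.5}

def borderline(score, cutoffs=CUTOFFS, margin=SEM):
    """Return the names of any level boundaries within one SEM of a score."""
    return [name for name, cut in cutoffs.items()
            if abs(score - cut) <= margin]
```

A student scoring 105, for example, would be flagged against the Upper/Middle boundary, while one scoring 99 sits safely inside the middle band.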
Part III: Results and Observations
The most important outcomes of the placement testing process at Kyoritsu are in the classroom. As noted above, the rewards to both the teachers and students can be seen in the greater degree of satisfaction with the courses. There is general agreement among the teachers of Eigo 1 that students are more at ease and more engaged, and that classroom management is both simpler and more effective in terms of lesson planning, text choice, pacing, and testing.
At the same time, the information which the placement test provides can be useful not only for the teachers of Eigo 1, but for the entire faculty, to get an overall view of the needs and abilities of the student body as a whole. This has the potential to give a useful new perspective to curriculum planning, program assessment, and goal-setting for the Faculty of International Studies.
Of course, the placement test scores of an incoming class can be used to gauge the overall level of English of the class. Also, scores can be tracked from year to year to monitor possible changes in the needs and abilities of the student body. Because the same test is given every year, a meaningful comparison can be made between one year and the next. The results of the placement test for the academic years 1994-1995 and 1995-1996 are given below in Tables 3 and 4. Test scores for each section of the CELT are expressed as the percent of items answered correctly. A perfect score for the listening and structure sections combined is 200.
Table 3: Results of Placement Tests, 1994 and 1995
| | 1994 | 1995 |
|---|---|---|
| Total No. of Students | 301 | 269 |
| High Score | 186 | 189 |
| Average Score | 102.7 | 101.8 |
| Low Score | 45 | 51 |
Table 4: Results of Placement Test by Level, 1994 and 1995
| Level | Score | 1994 | 1995 |
|---|---|---|---|
| Advanced Class | High | 186 | 189 |
| | Average | 143.5 | 143.1 |
| | Low | 129 | 129 |
| Upper Group | High | 128 | 129 |
| | Average | 116.5 | 114 |
| | Low | 108 | 104 |
| Middle Group | High | 107 | 104 |
| | Average | 101 | 99.2 |
| | Low | 92 | 89 |
| Lower Group | High | 91 | 89 |
| | Average | 77.4 | 79.3 |
| | Low | 45 | 51 |
It can be seen from the above tables that there is a remarkable similarity in the scores of the two years. This applies both to the scores of the total student body (Table 3) and to the scores within each level (Table 4). For English language teachers this is significant in that they can be assured that the English curriculum in place has been appropriate from one year to the next. Choices made in 1994-1995 regarding textbook, pacing, activities, etc., were valid for the 1995-1996 school year as well.
For the Faculty as a whole, it appears that we have continued to attract a similar caliber of student this year as last, at least with regard to English language ability. At this point, only figures for these two years are available, but over a longer period of time, this kind of information can provide a growing body of data which could be useful for documenting the developing character of the student body.
The placement test results can also offer useful information about the students within each Area Studies course. The scores of the placement test that this year's second year students took in their first year are broken down below by the Area Studies course that they chose at the end of their first year.
Table 5: 1994 placement test scores by Area Studies course.
Course | No. of Students | Average Score | High | Low | Median |
---|---|---|---|---|---|
American | 83 | 110.3 | 186 | 71 | 108 |
European | 94 | 106.6 | 170 | 66* | 106 |
Chinese | 50 | 94.8 | 131 | 63 | 94.5 |
Japanese | 69 | 93.7 | 155 | 45 | 95 |
*One student scored a 0 on the placement test, attaching a note to her answer sheet stating that she had not studied English at all. Her score is not included.
The incoming first year students who were later to choose American Studies scored, on average, the highest on the placement test, followed by those who were to choose European Studies, Chinese Studies, and Japanese Studies. It is not surprising that those who had the greatest proficiency in English were drawn to American Studies and, to a slightly lesser extent, European Studies. Whether their abilities in English led them toward their choice of Area Studies, or their interest in American and European culture encouraged their study of English, is a difficult question and beyond the scope of this paper. At any rate, the faculties of American and European Studies may wish to take their students' higher proficiency, as evidenced by their placement test scores, into account in ensuring that they are sufficiently challenged and their abilities nurtured and strengthened.
For Chinese and Japanese Studies, the information gleaned from the placement test might provide a useful opportunity to assess their students' English proficiency, relative to their other capabilities. They might conclude that the students' present level is sufficient for their goals. On the other hand, it is possible to argue that Chinese and Japanese Studies in the context of international studies requires greater emphasis on English.
Finally, another useful perspective from which to view the placement test scores is in relation to the failure rate for Eigo I. No amount of curriculum planning can guarantee a 100% success rate, and the English language courses have their share of failures. We wondered if it would be possible to identify potential problems in the future by looking at the placement test scores of those who failed Eigo I in their first year at Kyoritsu. The results for academic year 1994 - 1995 are shown below, again by Area Studies course.
Table 6: 1994 - 1995 Eigo I Failures and Placement Test Scores
Area Studies | No. of Failures | Average Score on Placement Test |
---|---|---|
American | 4 | 99.6 |
European | 4 | 101.7 |
Chinese | 0 | --- |
Japanese | 3 | 92.6 |
Total | 11 | 97.9 |
The average placement test score of the students who went on to fail Eigo I in the academic year 1994-1995 was only slightly less than the average placement test score of all the students in the freshman class (102.7; see Table 3). These figures indicate that there is no "cut-off point" on the placement test below which a student may be in danger of failing Eigo I. In other words, the student's ability in English when entering Kyoritsu should not affect her chances of successfully completing her first year English requirement. This may be attributable to the fact that the students are placed in classes appropriate to their needs and abilities. Other factors besides entering level of English proficiency which may contribute to a student's failure or success should be identified, and appropriate strategies developed to deal with them to the greatest possible extent.
Part IV : Discussion
Having accomplished our aims in implementing a placement test, our next step is to look to the future and to consider any further uses for or improvements to the system we are using at present.
One of the most obvious developments for the system, and one which would be very simple to implement, would be to use Form B of the CELT to retest the students at some time during their career at Kyoritsu. This retesting could be arranged for the end of the first year, the end of the second year, or any other time which was considered appropriate. It could involve the whole student body, or it could be restricted to those students who continue to study English in the European Studies or American Studies courses. Information gleaned from a second test administration would potentially be very useful for anybody involved in English teaching at Kyoritsu. It could also be used as a motivational tool to encourage students to monitor their own progress over the time they are studying at this university. In this way, the effects which the test has on students and their attitude to their language studies might be expected to be positive.
Another area in which the CELT scores are potentially useful is curriculum development. If it is noted that the general standard of English among students who are entering the university is either rising or falling, appropriate changes can be made to the curriculum. Also, if a componential analysis is made of the scores which students obtain on the different sections of the CELT, it would be possible to make claims as to which areas of the English language curriculum need strengthening. Similarly, students' entry scores on the CELT in comparison with their scores at the second administration could be used to ascertain the need for curricular changes.
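A componential analysis of the kind suggested here could be as simple as averaging each section's percent-correct scores separately and comparing the results across cohorts or administrations. The following sketch is hypothetical; the section names follow the two sections of the CELT actually administered at Kyoritsu.

```python
# Hypothetical componential analysis: average each section separately
# so that a weak area (e.g. structure lagging behind listening) shows
# up in the cohort summary rather than being hidden in the total score.

def section_averages(results):
    """results: list of per-student dicts of percent-correct scores,
    e.g. {"listening": 62.0, "structure": 48.0}.
    Returns the mean percent-correct for each section."""
    totals = {}
    for result in results:
        for section, pct in result.items():
            totals.setdefault(section, []).append(pct)
    return {section: sum(v) / len(v) for section, v in totals.items()}
```

Comparing these per-section means between the entry test and a later Form B administration would show which skills the curriculum is actually moving.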
A further use for the CELT scores which we obtain each year would be to compare them with scores which students achieve on the university entrance exam. If such a comparison is made, and especially if it is made on a regular basis, this could provide an interesting new perspective on both the entrance exam system and on the English language program.
Finally, there is certainly the potential for students' CELT scores to be used to place them in other English courses in addition to Eigo 1. It has been the experience of the teachers of Eigo 1 classes that the improvements to class atmosphere and student attitude, and the degree to which preparation and teaching is facilitated and made more rewarding and enjoyable, far outweigh the negative aspects of having to administer the test and organise student and teacher timetables appropriately.
This research has so far yielded some valuable results which can be applied to our teaching, and as can be seen from the above discussion, there are many questions which might be explored in the future.
Notes
1 Brown, J.D. (1995) Developing Norm-Referenced Language Tests for Program-Level Decision Making. In J.D. Brown and Sayoko Okada Yamashita (Eds.) Language Testing in Japan (JALT Applied Materials) pp. 40-47. Tokyo: The Japan Association for Language Teaching.
2 Alderson, J.C., K.J. Krahnke, and C.W. Stansfield, (Eds.) (1987) Reviews of English Language Proficiency Tests Washington DC: Teachers of English to Speakers of Other Languages (TESOL.)
3 Hughes, A. (1989) Testing For Language Teachers Cambridge: Cambridge University Press.
4 Harris, D.P. and L.A. Palmer. (1986) CELT: Examiner's Instructions and Technical Manual. McGraw-Hill. p. 18
5 Ibid. p. 11
6 Ibid. p. 20
The Internet TESL Journal, Vol. II, No. 11, November 1996
http://iteslj.org/
http://iteslj.org/Articles/Mills-Placement.html