Who are our script raters?
Script raters are experienced English language teachers chosen from a pool of applicants by the Ministry of Education after a successful initial screening interview. Most script raters have also been KPG oral examiners. As new levels of language competence were introduced into the KPG exam battery, the need for a larger number of script raters arose. Thus, in subsequent phases of the programme, after more recent applications had been screened, ELT professionals with postgraduate studies in Applied Linguistics and with experience in marking scripts for other language examinations were invited to join the KPG script rater programme.
What are the aims of the script rater training programme?
The aims of the
English script rater training
programme include the development
of:
- a body of 300 script raters who have been fully trained to assess all levels of the writing module offered by the KPG exam battery and whose performance has been evaluated on the job,
- a comprehensive and fully updated database of trained script raters which the Ministry of Education can draw on to make appointments for every exam period,
- comprehensive training handbooks for script raters, accompanied by samples of candidates’ written production for self-training and awareness-raising purposes.
The stages of the script rating programme
The script rater
training programme consists of a
series of stages which all script
raters are required to go
through.
Preparatory stage 1: Immediately after test administration
Members of the KPG test development team record the expected language output for each activity at each level of the exam. Expectations are phrased in terms of the purpose and function of each writing activity, the expected genre and its characteristics in terms of layout, organization, register, cohesion/coherence and linguistic features.
Preparatory stage 2: One week after the exam administration
- Immediately after the examination, members of the test development team prepare the expected outcomes of each writing task for every exam level in terms of (a) the genre, communicative purpose and register/style, (b) expectations regarding coherence and cohesion and (c) lexicogrammatical choices.
- At the same time, the committees at the Examination Centres around the country pack the English scripts and send them to the Rating Centre in Athens. There, the Rating Centre committee randomly selects 100 scripts for each level’s writing task; the selected scripts have been produced by candidates who took the exam in cities and towns in different parts of the country.
- Once the 100 scripts per level have been gathered, 20 experts from the KPG English team, who are well aware of what the writing tasks aim to test and what the expected outcomes of each writing task are (as these have already been recorded), meet at the Rating Centre and, divided into groups according to test level, use the rating scale to evaluate the scripts. Each script is evaluated and rated by two “expert” raters, just as in the regular rating process (a minimal sketch of this sampling and double-rating step is given after this list).
- A detailed discussion of candidates’ performance on the particular tasks follows, with the purpose of (a) assessing task validity, (b) finalizing the expected outcomes, (c) fine-tuning the rating scale, (d) selecting the scripts which best exemplify satisfactory and unsatisfactory performance and (e) conferring about scripts that have produced rating discrepancies between experts.
- On the basis of this discussion, members of the KPG test development team update, refine and revise the Script Rater Guide, which includes information on the nature, structure and main features of the written test for all levels, the criteria for the assessment of candidates’ written production, and the writing tasks of each recently administered exam together with detailed information on the expected output for each activity. Sample candidate scripts are included with the marks assigned by the test development team and brief commentaries justifying the assigned mark. Finally, sample candidate scripts are also provided without marks or commentaries, to be evaluated by script raters during the seminar.
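As a purely illustrative aid, the fragment below sketches the sampling and double-rating step described in this list: 100 scripts are drawn at random for each level and each selected script is assigned to two different expert raters. The data structures, identifiers and level names are assumptions made for the example; they are not part of the KPG documentation.

```python
import random

# Minimal sketch (not the actual KPG tooling): draw 100 scripts per level
# and assign each selected script to two distinct expert raters.

def select_benchmark_scripts(scripts_by_level, sample_size=100, seed=None):
    """Randomly pick up to `sample_size` script IDs for each exam level."""
    rng = random.Random(seed)
    return {
        level: rng.sample(scripts, min(sample_size, len(scripts)))
        for level, scripts in scripts_by_level.items()
    }

def assign_double_rating(selected, experts_by_level, seed=None):
    """Pair every selected script with two different expert raters."""
    rng = random.Random(seed)
    assignments = []
    for level, script_ids in selected.items():
        experts = experts_by_level[level]
        for script_id in script_ids:
            first, second = rng.sample(experts, 2)  # two distinct experts
            assignments.append((level, script_id, first, second))
    return assignments

# Hypothetical usage with made-up identifiers:
# selected = select_benchmark_scripts({"B2": [f"B2-{i:04d}" for i in range(1, 2501)]})
# pairs = assign_double_rating(selected, {"B2": ["expert_01", "expert_02", "expert_03"]})
```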
Stage 3: Two weeks after the exam administration: the script rater seminar
During this seminar, trainees are provided with a detailed script rater information pack and are informed about the theory of language underlying the writing test, the content and structure of the test for each level, the expectations for written language production at each level and the assessment criteria. The information pack also contains samples of candidate scripts from past examination periods, which the trainee script raters evaluate by applying the assessment criteria. New script raters are then invited to the script rater training seminars which take place after each exam administration and are requested to participate in all the tasks assigned during the seminar. New script raters are required to attend at least two script rater seminars (apart from the induction seminar) before they are allowed to take part in the marking process.
To download the May 2014 script rater booklet, click here.
To download the November 2013 script rater booklet, click here.
To download the May 2013 script rater booklet, click here.
To download the November 2012 script rater booklet, click here.
To download the May 2012 script rater booklet, click here.
To download the November 2011 script rater booklet, click here.
To download the May 2010 script rater booklet, click here.
Stage 4: Individualised Training at the Marking Centre
The training of script raters continues at the marking centre on an individual basis. Centre coordinators closely monitor the marking process and offer on-the-spot advice, help and training to script raters. More specifically, our body of script rater coordinators consists of highly qualified and experienced associates, most of whom are members of the KPG development team. Coordinators are present at the marking centre throughout the whole period of the marking process. They work at the centre in predetermined shifts (two per day, except on Sundays, when there is only one), ensuring that there are at least 2-3 coordinators at the marking centre at any one time. The number of coordinators needed for each marking period depends on the number of (a) candidates in the particular exam period and (b) script raters involved in the process. The duties of the coordinators are briefly listed below:
- They advise script raters whenever the latter face a problem with the application of the assessment criteria.
- They monitor raters’ individual performance during the rating of a certain number of scripts (at least three in every packet of 25) at all levels of the exam. This procedure is repeated each time raters are required to move on to the next level of the exam.
- They monitor the script raters’ application of the assessment criteria in each of these scripts and keep records of their performance by filling in two different statistical sheets. The first includes more general comments, while the second is more detailed and asks for the coordinators’ justified evaluation of individual script raters.
- Additionally, the coordinators monitor raters’ performance through randomly chosen, already-marked scripts for which the raters are asked to justify their assigned marks. The coordinators discuss the application of the rating scale and keep records of the whole procedure. After the rating period has ended, these records are analyzed and details of the raters’ actual performance are kept for further reference and for the evaluation of both the individual raters and the process itself.
- Whenever coordinators find that particular script raters need more training in the correct application of the rating scale, they keep working with these raters until it is decided that no further assistance is needed.
Stage 5: The end of the script rating
At the end of each rating period, a statistical analysis of all rating discrepancies between raters follows, which offers additional data for evaluating both the rating process and the performance of the participating raters. The data are also compared with those produced in previous rating periods, so that the results can be used to develop a comprehensive and fully updated database of trained script raters. Additionally, coordinators’ and script raters’ feedback forms offer suggestions for improving the rating procedure itself.
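To give a concrete, if simplified, picture of what such a discrepancy analysis might compute, the sketch below summarises the gap between the two marks awarded to each script: the rate of exact agreement, the mean absolute difference and the scripts whose discrepancy exceeds a chosen threshold. The record format, field names and threshold are assumptions made for the example, not the actual KPG analysis.

```python
from statistics import mean

# Hypothetical records of double-rated scripts:
# (script_id, mark_by_first_rater, mark_by_second_rater)
ratings = [
    ("B2-0001", 14, 15),
    ("B2-0002", 18, 12),
    ("B2-0003", 9, 9),
]

def discrepancy_summary(ratings, threshold=3):
    """Summarise rating discrepancies between the two raters of each script."""
    diffs = [abs(m1 - m2) for _, m1, m2 in ratings]
    flagged = [sid for (sid, _, _), d in zip(ratings, diffs) if d >= threshold]
    return {
        "scripts": len(ratings),
        "exact_agreement_rate": sum(d == 0 for d in diffs) / len(ratings),
        "mean_absolute_difference": mean(diffs),
        "flagged_for_review": flagged,  # scripts with a large gap between the two marks
    }

print(discrepancy_summary(ratings))
```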