KPG's concern to be a 'fair'
testing system led the
University of Athens team preparing the exams in English to embark on a large-scale research project regarding listening
comprehension, the assessment of
which is a rather neglected area
of investigation. At the same time, the listening comprehension component of all well-known exam systems is what candidates complain about the most. They often claim that the activities were too difficult, that they couldn't hear the speakers well, that the speakers didn't speak clearly enough, or that there was too much noise in the exam room which prevented them from making out what was being said,
and so on. Whether or not these complaints correspond to reality, though, the fact
remains that the listening test
is often the most
difficult section for many
candidates. This is the case
with our exams in English
(as our systematic analysis
reveals) and, thus, we have set
out to investigate the most
important factors involved in
making our listening items 'easy'
or 'difficult'.
There are several studies which
attempt listening comprehension
task analysis and investigate
the linguistic, pragmatic and
cognitive factors which
contribute to task difficulty
(e.g., Ur 1984, Anderson and
Lynch 1988, Rost 1990, Conrad
1985, Buck 2001). These studies
often point to the learner's lack of language skills as the main cause of listening comprehension difficulty. They also point out that the very nature of spoken language makes it much more difficult to understand, due to such characteristics as elision, speech rate, accent variation, stress and intonation, hesitation, redundancy, etc.
Furthermore, these and other
studies point to the cognitive
factors involved in listening
comprehension and attempt to
show that understanding is
invariably linked to the
listener's prior knowledge,
experiences and expectations.
Anderson and Lynch (1988), for
example, argue that
understanding is not something
that happens because of what
the speaker says, but that the
listener has a crucial part to
play in the process. S/he
activates various forms of
knowledge and by applying what
s/he knows to what s/he hears,
s/he ultimately understands the
message conveyed. Buck (2001)
agrees that the cognitive aspect
of listening comprehension is
very significant, and views
listening comprehension as an
inferential process which moves
beyond the knowledge of discrete
elements of language, such as
phonology, vocabulary and
syntax. According to him,
“meaning is not something in the
text that the listener has to
extract, but is constructed by
the listener in an active
process of inferencing and
hypothesis building” (ibid: 29).
Though we are in full agreement
that linguistic and cognitive
factors as well as learner
skills are all responsible for
successful or unsuccessful
listening comprehension, in testing situations there are additional factors which may impede comprehension. One of these
factors is the testing
environment itself: the
acoustics in the exam room, the
quality of sound in the
recordings (especially, when
speech is not studio recorded),
technical problems with the
audio equipment, intentional or
unintentional background sounds
and noise inside or outside the
exam room may seriously affect
comprehension. Of course, test performance is also contingent upon the skills and characteristics of individual candidates, and these
are not always related to their
language ability and knowledge.
They have to do with how well
different candidates have
learnt to retain information,
how anxious they get during an
exam situation and whether they
have developed test-taking
skills such as speed in
responding, self-evaluation and
self-corrections, etc. But
candidates' individual
characteristics are just as
important as group
characteristics. From the
research we are carrying out, it
seems that there are rather
significant differences between
how different ethnic, social and
age groups perform when assessed
for listening comprehension,
due to a series of factors.
Older candidates respond differently from younger candidates to the same listening texts and tasks, as do males and females, highly literate and less literate candidates, and so on.
The knowledge and experiences of
these groups play a crucial
role in how/what they understand
and what responses they select.
Naturally, the candidate is
responsible for the success or
failure to understand an oral
message, yet s/he is not the
only one to blame. The end
result has a lot to do with the
language of the text, how the
text is delivered (spoken), and
the nature of the task. Now,
choices of texts and tasks are
directly related to the approach
to language and to the language
testing aims of each exam
battery. This means that
research which is not
candidate, oral text and task
specific has very little to tell
us about listening
comprehension difficulties in
testing situations.
We have seen very few studies in the literature that draw upon actual data and report findings related to one particular testing system. This is the kind of study the KPG English team aspires to complete with the project it began in 2007. In
order to investigate candidate
response to particular texts
and tasks, a variety of tools
have been used (questionnaires,
interviews and verbal
protocols). Systematic task analysis is being carried out, and post-administration item analysis is being used, not
only to investigate difficulty
but also to assess the listening
comprehension test and
ultimately take any measures
necessary to secure the
reliability of the exam and the
candidates' scores.
Using item response analysis (Bachman 2004, McNamara 1996), item analysis is conducted for both the listening and the reading comprehension test papers after each administration. These analyses provide the English test development team with useful information regarding a) the internal consistency or reliability of the exam, b) item difficulty (i.e., the proportion of candidates who get an item right), c) distractor analysis (i.e., the
frequency with which each option
of a particular test question
is chosen) and d) discrimination
efficiency (i.e., how well an
item succeeds in distinguishing
highly competent from less
competent candidates).
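To make these statistics concrete, the sketch below shows in Python how such indices are typically computed under classical test theory: item difficulty (facility) as the proportion of correct responses, distractor frequencies as simple option counts, and a rough discrimination index contrasting the top and bottom halves of scorers. The response matrix, answer key and upper-lower method are illustrative assumptions, not the KPG team's actual data format or procedure.

from collections import Counter

# Hypothetical responses: one row per candidate, one chosen option per item.
responses = [
    ["a", "c", "b"],
    ["a", "b", "b"],
    ["c", "c", "b"],
    ["a", "c", "a"],
]
answer_key = ["a", "c", "b"]  # assumed correct options for items 1-3

def facility(item):
    # Item difficulty: proportion of candidates who get the item right.
    correct = sum(1 for r in responses if r[item] == answer_key[item])
    return correct / len(responses)

def distractor_frequencies(item):
    # Distractor analysis: how often each option of the item was chosen.
    return Counter(r[item] for r in responses)

def discrimination(item):
    # Rough discrimination index: facility among the top half of total
    # scorers minus facility among the bottom half (upper-lower method).
    totals = [sum(a == k for a, k in zip(r, answer_key)) for r in responses]
    ranked = sorted(range(len(responses)), key=lambda i: totals[i], reverse=True)
    half = len(ranked) // 2
    def group_facility(group):
        return sum(responses[i][item] == answer_key[item] for i in group) / len(group)
    return group_facility(ranked[:half]) - group_facility(ranked[-half:])

for i in range(len(answer_key)):
    print(i + 1, round(facility(i), 2),
          dict(distractor_frequencies(i)), round(discrimination(i), 2))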
Any test item that item analysis shows to have an index of difficulty above 0.80 or below 0.50 is considered too easy or too difficult, respectively, for the exam level (since the normal difficulty values for a test item should range between 0.50 and 0.80); such items are then analyzed further so that conclusions can be drawn as to what features make them difficult or easy for the specific group of candidates.
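As a minimal illustration of this flagging step, and assuming facility values have already been computed as above, out-of-range items might be marked as follows (the item labels and values below are invented):

facility_values = {"item_1": 0.87, "item_2": 0.63, "item_3": 0.42}

for item, p in facility_values.items():
    if p > 0.80:
        verdict = "flagged: too easy for the exam level"
    elif p < 0.50:
        verdict = "flagged: too difficult for the exam level"
    else:
        verdict = "within the expected 0.50-0.80 range"
    print(f"{item}: facility {p:.2f} - {verdict}")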
This investigation is then
complemented with a systematic
examination of the texts from
which the tasks originate, in
an attempt to find the
relationship between text
variables and item difficulty.
The analysis concerns (a) linguistic features of the text, especially lexical appropriacy to the exam level, information structure and information density, and (b) paralinguistic features (i.e., accent, speech rate, background noise, visual support, number of speakers involved). All these
can, of course, have a serious
impact on the level of
difficulty of the relevant test
items.
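By way of illustration, the sketch below computes two rough proxies for text variables of this kind: speech rate in words per minute (given a transcript and its duration) and a stopword-based approximation of lexical density. The transcript, duration and stopword list are invented examples; the actual KPG text analysis is considerably richer than this.

# Very small stopword list used only to approximate 'content' words.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "to", "of", "in", "on",
             "is", "are", "was", "were", "it", "that", "this", "from",
             "every", "at"}

def speech_rate(transcript, duration_seconds):
    # Words per minute of the recording.
    return len(transcript.split()) / (duration_seconds / 60)

def lexical_density(transcript):
    # Proportion of approximate content words: a crude density measure.
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    content = [w for w in words if w and w not in STOPWORDS]
    return len(content) / len(words) if words else 0.0

sample = "The train to the airport leaves every twenty minutes from platform two."
print(round(speech_rate(sample, duration_seconds=5.0), 1), "words per minute")
print(round(lexical_density(sample), 2), "approximate lexical density")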
Although this part of the
research is still at an initial
stage, some first conclusions
have been drawn as to what task
and text-related factors can be
associated with item
difficulty. For example, it has
been determined that even a
single unfamiliar word or
phrase either in the stem or in
the distractors of a
multiple-choice item may throw
some candidates completely off,
while, on the other hand, some
candidates seem to do poorly
when they must interpret or make
an inference rather than get
straightforward information from
the text. The role of distractors has proven to be an
important factor in item
difficulty too, since, in a
large number of items examined,
candidates' failure to select the correct response can be attributed to how 'easy' or 'difficult' the distractors are.
Distractors can also play a
significant role in making
items too easy. This is mostly the case when the distractors themselves are irrelevant to the content of the aural message, thus making the correct response far too obvious. An additional factor that makes some items too easy has to do with the way the right option is articulated. For example, an item whose correct option uses some of the wording of the text is easier than one in which synonyms are supplied.
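One crude way to quantify this last observation is to measure the word overlap between each option and the transcript: the correct option of an 'easy' item tends to share more wording with the recording than options phrased with synonyms do. The sketch below is purely illustrative; the transcript and options are invented.

def wording_overlap(option, transcript):
    # Proportion of the option's words that also occur in the transcript.
    opt_words = {w.strip(".,").lower() for w in option.split()}
    text_words = {w.strip(".,").lower() for w in transcript.split()}
    return len(opt_words & text_words) / len(opt_words) if opt_words else 0.0

transcript = "The museum stays open until nine on Friday evenings."
options = {
    "a": "The museum stays open until nine on Fridays",     # reuses the wording
    "b": "The gallery closes late at the end of the week",  # synonyms only
}

for key, text in options.items():
    print(key, round(wording_overlap(text, transcript), 2))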
The research project is also
yielding interesting information
about what makes a listening
comprehension text difficult or
easy, but limited space does not allow us to report findings
in detail.
As the project develops,
reports and papers will be
published through the RCeL
publications. The outcomes of this research are bound to be extremely useful to KPG item writers and test developers. Most importantly, however, they will be a valuable source of information for candidates and the teachers preparing them. Once we determine what is difficult for whom, it will be possible to teach different groups of learners/candidates how to overcome these difficulties by providing test-taking strategies which will prove helpful in particular listening comprehension testing situations.
References
Anderson, A. and Lynch, T.
(1988) Listening. Oxford:
Oxford University Press.
Bachman, L. F. (2004)
Statistical Analyses for
Language Assessment. Cambridge:
Cambridge University Press.
Buck, G. (2001) Assessing
Listening. Cambridge: Cambridge
University Press.
Conrad, L. (1985) “Semantic
versus syntactic cues in
listening comprehension”.
Studies in Second Language
Acquisition, 7, 1, 59-72.
McNamara, T. (1996) Measuring Second Language Performance. London and New York: Longman.
Rost, M. (1990) Listening in
Language Learning. London and
New York: Longman.
Ur, P. (1984) Teaching
Listening Comprehension.
Cambridge: Cambridge University
Press.
Elizabeth Apostolou & Bessie
Dendrinos
Research regarding factors that
affect text comprehensibility,
based on KPG data, is being
carried out by different
researchers of the English team.
Jenny Liontou, under the
supervision of Bessie Dendrinos,
is doing systematic research on
factors that affect reading and
listening text difficulty. The
RCeL is also making available
data to Elizabeth Apostolou, who
is beginning to look
systematically into KPG
listening task difficulty, and to Eleni Charalambopoulou, who is
investigating KPG listening
test-taking strategies. Both
these young scholars are
working under the supervision of
Kia Karavas.
The Research Centre for English
Language Teaching, Testing and
Assessment (RCeL) is a unit of
the Faculty of English Studies,
University of Athens (http://rcel.enl.uoa.gr/).