Evaluation of the effectiveness and predictive validity of English language assessment in two colleges of applied sciences in Oman
Abstract
This thesis investigates the effectiveness of English language assessment in the
Foundation Programme (FP) and its predictive validity for academic achievement in
the First Year (FY) at two Colleges of Applied Sciences (CAS) in Oman.
The objectives of this study are threefold:
(1) Identify how well the FP assessment has met its stated and unstated
objectives and evaluate its intended and unintended outcomes using impact
evaluation approaches.
(2) Study the predictive validity of FP assessment and analyse the linguistic
needs of FY academic courses and assessment.
(3) Investigate how FP assessment and its impact are perceived by students and
teachers.
The research design was influenced by Messick's (1989; 1994; 1996) unitary concept
of validity, by Norris's (2006; 2008; 2009) views on validity evaluation, and by
Owen's (2007) ideas on impact evaluation. The study was conducted in two phases
using five different methods: questionnaires, focus groups, interviews, document
analysis and a correlational study. In the first phase, 184 students completed a
questionnaire and 106 of these participated in 12 focus groups, whilst 27 teachers
completed a different questionnaire and 19 of these were interviewed. The aim of
this phase was to explore the perceptions of the students and teachers on the FP
assessment instruments in terms of their validity and reliability, structure, and
political and social impact. The findings indicated a generally positive perception of
the instruments, though more so for the Academic English Skills (AES) course than
the General English Skills (GES) course. There were also calls to increase the
quantity and quality of the assessment instruments. The political impact of the
English language FP assessment was strongly felt by the participants.
In the second phase, 176 students completed a questionnaire and 83 of them
participated in 15 focus groups; 29 teachers completed a different questionnaire and
23 of these were interviewed. The main focus was on students' and teachers'
perceptions of FP assessment, and on how language accuracy should be weighted in
marking written academic coursework. One finding was that most FY students tended
to face difficulties not only in English but also in what could be called 'study skills';
some of these difficulties were attributed to the leniency of the FP assessment exit criteria.
Throughout the two phases, 118 documents on FP assessment at CAS were
thematically analysed. The objective was to understand the official procedures
prescribed for writing and using assessment instruments in FP and compare them
against actual test papers and classroom practices. The findings revealed the use of
norm-referenced rather than criterion-referenced assessment, incompatibility between
what was assessed and what was taught, inconsistency in applying assessment criteria,
and unhelpful verbatim replication of national assessment standards.
The predictive validity studies generally found a low overall correlation between
students' scores on the English language assessment instruments and their scores in
academic courses. The findings of this study are in line with most, but not all,
previous studies. The strength of predictive validity depended on a number of
variables, especially the students' specializations and their self-evaluations of their
own English language levels. Some recommendations are offered for reforming
entry requirements in Omani higher education.