This month’s topic “The challenge of assessing competences in schools – what does the future look like?” certainly caught the interest of the Learnovate membership. Professor Barry O’Sullivan from the British Council spoke to a full room on Friday 21st of February.
Barry’s specific domain of interest is language assessment. By his own admission he has a history as a ‘hard-core tester’ in the traditional sense, but he acknowledges that current means of assessment are failing to keep up with advances in learning and teaching. Barry pointed out that the ‘reliability’ of an assessment score is often interpreted as telling us how accomplished a person is, when in fact reliability refers only to the consistency of the result; it has no necessary link to whether the test measures what actually needs to be measured.
While Barry does not claim to have cracked the problem of assessment for the next generation, his research in the area leads him to believe that meaningful assessment must contain three connected facets: curriculum, delivery and assessment. A paper Barry wrote proposing the interaction of these three facets was rejected by a major journal in the United States as being too radical, possibly because assessment is currently regarded in the US as separate from learning.
In relation to technology and how it can assist in assessment, Barry believes the more realistic near-term goal is to localise assessment rather than personalise it. Research in localised assessment must answer three questions: what is the construct being assessed? How do we measure that construct? And, in terms of localising, what population is our measurement suitable for? While personalised assessment is the ideal, generalising to a population is the more achievable challenge. Barry talked about the British Council’s Aptis assessment service but admitted that, while it is an excellent tool for localising assessment, it is still more aligned to traditional assessment. Personalisation will require more significant advances in test design and delivery.
As most of us are aware, the search for better methods of assessment is not an easy one, and the contributions from the members in attendance raised interesting questions: what are we assessing for? If we do not assess via a number, what do we measure? Are certain skills simply not open to being measured?
The debate was lively, and it was obvious that opinion is divided on the topic. With so many questions raised, I think we may convene another session in the future to brainstorm approaches to a solution. Barry summed up his assessment nirvana in two words: “No Tests”!