Matrics’ Cass marks higher than final exam marks

Published Dec 16, 2015

The marks teachers awarded to pupils during the course of the matric year (continuous assessment marks) are higher than pupils’ final exam marks, making these classroom-based assessments a poor predictor of how pupils fare in the final exams.

Almost one-quarter of matrics who scored a continuous assessment (Cass) mark of 50% or more for a subject ended up achieving less than 30% in the final exam, a study comparing matrics’ Cass marks with their final exam marks shows.

In a policy brief released last week, study authors Professor Servaas van der Berg and Debra Shepherd suggest that the Department of Basic Education (DBE) use Cass and matric exam data to identify schools that are particularly lenient, and train teachers on how to properly assess pupils’ performance.

Continuous assessment marks count for 25% of pupils’ final marks and are likely to affect how hard pupils prepare for the matric exam.

Van der Berg and Shepherd are both with the Research on Socio-Economic Policy (Resep) unit in the economics department of Stellenbosch University.

Van der Berg is the South African research chair in the economics of social policy.

“It is clear from the results of this study, and others, that many teachers simply do not have the expertise to carry out school-based assessment competently,” Van der Berg and Shepherd say.

Increasing the Cass weighting beyond 25% while assessment remains unreliable and invalid could prove “disastrous”, they warn, and make matrics’ final marks even less reliable.

“We recommend that the DBE undertake the relatively straightforward task of comparing Cass and matric marks for each school, and identifying schools where the average difference is in excess of 10 percentage points and 20 percentage points. The DBE can notify these schools of their irregular or over-lenient marking, target assessment training and support to these schools, and monitor trends in assessment validity and reliability,” their policy brief recommends.
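The comparison the brief recommends is simple to sketch. A minimal illustration in plain Python is below, using invented marks (the school names and figures are hypothetical, not from the study): average the Cass-minus-exam gap per school and flag schools whose mean gap exceeds the 10 and 20 percentage-point thresholds the authors propose.

```python
from collections import defaultdict

# (school, cass_mark, exam_mark) per pupil per subject -- hypothetical data.
records = [
    ("School A", 62, 55), ("School A", 58, 49),
    ("School B", 70, 44), ("School B", 65, 42),
    ("School C", 55, 28), ("School C", 60, 31),
]

# Collect each school's Cass-minus-exam gaps in percentage points.
gaps = defaultdict(list)
for school, cass, exam in records:
    gaps[school].append(cass - exam)

# Flag schools against the 10- and 20-point thresholds from the brief.
for school, diffs in sorted(gaps.items()):
    mean_gap = sum(diffs) / len(diffs)
    if mean_gap > 20:
        flag = "over-lenient (gap above 20 points)"
    elif mean_gap > 10:
        flag = "lenient (gap above 10 points)"
    else:
        flag = "within range"
    print(f"{school}: mean Cass-exam gap {mean_gap:.1f} points -> {flag}")
```

With the invented data above, School A's gap averages 8 points (within range), while Schools B and C average roughly 25 and 28 points and would be flagged for assessment training and monitoring.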

The DBE was unable to comment on Tuesday.
