6.1.1: University Policy on the Quality Assurance of Examination and Assessment
Outline of University Policy
1. The overall purposes of the University's quality assurance mechanisms within the examinations and assessment process are:
a. to guarantee that departments apply their own and the University's agreed marking criteria appropriately across the range of modules they teach;
b. to guarantee that departments/schools maintain an overall consistency of standards across their various modules;
c. to protect candidates against bias, conscious or otherwise, on the part of examiners (this statement should be read in the context of the University's clear statement in General Regulation VII that 'matters of academic judgment cannot be appealed'; see paragraph 2 of https://www.dur.ac.uk/resources/university.calendar/volumei/current/regs.appeals.pdf).
2. The University policy is therefore that all departments/schools should have robust mechanisms for marking and for the moderation of marks. The role of the external examiner is especially significant in assuring the quality of the assessment process, in respect of the standards of the awards made and the integrity of the assessment process. The following additional mechanisms must also be used by all departments/schools throughout the University:
a. anonymous marking of examination scripts (see paragraph 6 below);
b. anonymous classification of degree results (see paragraphs 10-11 below);
c. the use of a mark proforma for all examination scripts (see paragraph 28 below).
3. In addition departments/schools should adopt a range of mechanisms appropriate for their own subject area. Such mechanisms might include some or all of:
a. anonymous marking (see paragraphs 7-9 below);
b. double marking (see paragraphs 12-20 below);
c. the use of a mark proforma for summatively assessed coursework (see paragraph 28 below);
d. marking to a template (see paragraph 29 below);
e. objective marking (see paragraph 30 below);
f. the statistical moderation of examination marks (see paragraph 22 below).
4. However, departmental policies may include alternative procedures for which an argument acceptable to Education Committee has been advanced. Departments/schools should draw up assessment policies, in consultation with their external examiners, in which they specify which mechanisms they will adopt, and why, and how and where they will apply them. This policy must be made known to students.
5. Education Committee will be responsible for approving departmental/school policy statements and will monitor their implementation through periodic review.
Mechanisms for the quality assurance of assessment: anonymous marking
Anonymous marking of examination scripts
6. Anonymous marking is an important element in the University's strategy for the quality assurance of the assessment process. The rationale for anonymity is the protection of candidates against the possibility of bias in assessment. All University examinations must be sat and marked anonymously.
Anonymous Marking of Coursework
7. Anonymous marking, although highly desirable, can conflict with the need to give feedback, especially in the case of coursework contributing to summative assessment. Feedback cannot be given on coursework before the end of the module if anonymity is to be preserved: the administrative complexities of using a separate code to mark each assignment anonymously are such as to make this an unrealistic option.
8. It is University policy to ensure that feedback to students on assessed coursework is a priority (Learning and Teaching Handbook 6.1.5). This is because Education Committee judges the advantage to students in terms of their learning experience through receiving feedback to outweigh any disadvantage resulting from removing anonymity. Feedback on coursework will normally be given before the end of the module to support learning within the module. Therefore there is no requirement that coursework be marked anonymously, although departments/schools wishing to include this in their policies are free to do so. (See paragraphs 12-26 below on double marking for cases in which anonymity has not been used.)
Anonymous marking of major projects and dissertations
9. Anonymity is probably the most secure quality assurance process against bias and prejudice. However, in the case of major projects and dissertations it is sometimes inevitable that the supervisor will also be the first marker: this is often a consequence of maintaining the link between research expertise and this kind of project work. In such situations it is impossible to maintain anonymity within the marking process. Therefore:
a. project/dissertation titles need not be approved by boards of examiners anonymously: there is no reason to include anonymous codes in the list of titles submitted for approval, and a list of project titles and supervisors is sufficient;
b. wherever possible major projects and dissertations should be marked under full anonymity;
c. where anonymity is impossible to attain alternative strategies must be employed. In these cases:
i. full double marking or an approved equivalent such as multiple moderation of marking, is essential;
ii. if possible the second (and any subsequent) marking should be anonymous. To facilitate this, the work is submitted using a code even though the first marker may know the identity of the student.
Anonymous classification of degrees
10. The classification of degrees must be carried out anonymously. This means that boards of examiners must have anonymised mark sheets and only the chair, secretary or other designated member(s) of the Board should have access to medical and other evidence of mitigating circumstances naming the student(s) concerned. He/she should communicate the necessary information to the board using the anonymous code. The minutes of the board of examiners should also refer to students by code, and have appended to them a table 'translating' the codes into the student names.
Anonymous consideration by other boards of examiners
11. All meetings of boards of examiners, including those considering Level 1 marks, should be carried out with the students under consideration remaining anonymous. See the guidelines in paragraph 10 above.
Mechanisms for the quality assurance of assessment: double marking
12. Double-marking and moderation are two distinct mechanisms which may be used as part of the process of quality assurance of assessment. The University encourages departments to clearly distinguish between the use of double-marking (as applied to all scripts in a run) and moderation (of a sample of scripts) in their assessment policies as each serves a different purpose and imposes different actions upon examiners.
13. The purpose of double-marking is to provide quality assurance in regard to the marks assigned to individual scripts or coursework by providing combined academic judgement through the agreement of marks between first and second examiners and the resolution of problematic cases. In the case of work which is not marked anonymously it also provides some assurance against conscious or unconscious bias on the part of the first marker (who may have been a student’s tutor or supervisor).
14. As double-marking concerns the quality assurance of individual marks it must be applied to all scripts in a run. Individual marks awarded by the first marker cannot be changed unless the second marker has also marked all scripts in a run, as not all students would have been considered equally. Where double-marking is used on a sample of the total number of scripts this is ‘Moderation’ and the guidance under paragraphs 19-26 must be followed.
15. There is no University requirement that double-marking must be carried out blind or unseen (where the first marker's marks and the rationale for them are not communicated to the second marker until after they have completed their marking). Education Committee notes that the views of external examiners on this differ, and has judged that it is not practicable to require double-blind marking of, for example, specialised modules for which a department has only one expert marker. In such cases a second examiner could act as a moderator, examining the consistency of marking, but could not comment on the details of the material, and must be guided by the first marker in regard to individual marks.
16. Where double-marking is employed there should be a clear procedure in place for the agreement of marks between markers and for the resolution of any differences. An example of such a procedure might be the following:
a. a discrepancy of less than 5% in the mark for the module as a whole which does not span a classification border is to be resolved by taking the average of the two marks;
b. a discrepancy of 5% or more in the mark for the module as a whole, or one spanning a classification border, is to be resolved by discussion between the markers to reach an agreed mark if possible;
c. if agreement cannot be reached, the work is to be referred to a third party (who may be an external examiner).
17. Wherever double-marking is used there should be a clear 'audit trail' showing the rationale for the mark reached by each marker, and the communication between them to reach an agreed mark. One means of achieving this is by the use of a mark proforma (see paragraph 28 below). Raw marks, as well as reconciled marks, should be made available to external examiners.
18. Double-marking is to be applied to all dissertations and major projects.
Moderation of marks
19. Moderation differs from double-marking in that it seeks exclusively to identify systematic defects in the first-marking process. Moderation focuses on the marks awarded to the full set of assessed work for a task, module or programme in the context of the academic standards for the award. It is therefore separate from the question of how differences in marks between two or more markers are resolved, and is not about making changes to an individual student's marks. For this reason moderation may be carried out on a sample of assessed work. Guidance on appropriate samples is given in paragraphs 23-24 below.
20. The role of the moderator is to ensure that the scale, range and standards of first-marking are appropriate, with any recommendations for change based upon the identification of systematic issues with the first marking, and resolutions being applied systematically to the whole run of scripts. If moderation reveals a pattern of excessively generous or punitive marking, omission or over-emphasis of some element of answers, large fluctuations in marks, or use of an excessively narrow range of marks, then this should be rectified by an appropriate systematic review of the marks. This may involve double-marking all work, but could also be a review of lesser scope, for instance of the marks for one question on an exam paper or within a particular mark range. It is expected that all scripts displaying the same general issue(s) will be double-marked by the second marker. Unless the second marker has seen all such scripts, the first marker’s mark should not be altered.
21. Marks identified by the moderator as being anomalous, as opposed to being part of a wider pattern, should be referred for discussion and/or further investigation by a third party. A change of mark should not be recommended until the anomaly has been confirmed by a full systematic check.
22. Statistical tools and techniques may be used to moderate assessment practices by numerical analysis. These can enable departments/schools to identify:
a. modules where marking profiles are out of line with departmental norms;
b. individual candidate performance on particular modules which appears to be out of line with their overall performance, so that the underlying causes can be addressed.
23. Where second marking for the purpose of moderation is the basic quality assurance mechanism Education Committee has specified the following acceptable minimum proportions of examination papers and assessed coursework to be second marked, in order that the department and the University should have confidence in the robustness of the moderation procedure without laying undue burdens on departments:
a. for all examination scripts and summatively assessed coursework, a minimum of 10% of each piece of assessed work contributing to the final module mark should be second marked for moderation (subject to a minimum sample size of 10);
b. Where there is more than one first marker for a piece of assessment (i.e. where the first marking of a run of examination scripts or coursework assessments is divided between two or more markers), 10% of the scripts/assessments first marked by each marker must be moderated;
c. for major projects and dissertations, the sample size must be 100%, i.e. all major projects and dissertations should be double-marked in full. (As a 100% sample is used, the guidance in paragraph 16 should be followed for agreeing marks and resolving differences, and an audit trail of this process must be maintained as stated in paragraph 17.)
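The minimum sample under (a) and (b) reduces to a small calculation. This sketch assumes, as the policy leaves implicit, that a run of fewer than 10 scripts is moderated in full; the function names are illustrative:

```python
import math

def moderation_sample_size(run_size: int) -> int:
    """Minimum scripts to second-mark for moderation: 10% of the run,
    subject to a minimum sample of 10. Runs smaller than 10 are assumed
    (the policy leaves this case implicit) to be moderated in full."""
    return min(run_size, max(math.ceil(0.10 * run_size), 10))

def sample_sizes_per_marker(runs: dict[str, int]) -> dict[str, int]:
    """Apply the rule separately to each first marker's run of scripts."""
    return {marker: moderation_sample_size(n) for marker, n in runs.items()}
```

So a run of 200 scripts needs at least 20 in the moderation sample, while a run of 40 still needs 10 because of the minimum sample size.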
24. Where departments rely on second marking a sample of the scripts of a module for moderation, it is not satisfactory for the selection of scripts to depend entirely on the first marker's identification of work which seems to them to be problematic. In addition to any such scripts, a sample of at least the size stipulated above, drawn either at random or in equal portions from the top, middle and bottom of the marking range, must also be scrutinised.
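One hedged sketch of drawing such a sample, using the stratified variant (the equal-thirds banding, data shapes and names are illustrative assumptions, not a prescribed method):

```python
import random

def draw_moderation_sample(marked: list[tuple[str, float]],
                           size: int) -> list[str]:
    """Draw a moderation sample in roughly equal portions from the bottom,
    middle and top of the marking range. `marked` pairs an anonymous
    candidate code with the first marker's mark."""
    ordered = sorted(marked, key=lambda item: item[1])
    n = len(ordered)
    bands = (ordered[:n // 3], ordered[n // 3:2 * n // 3], ordered[2 * n // 3:])
    per_band = max(1, round(size / 3))
    sample = []
    for band in bands:
        chosen = random.sample(band, min(per_band, len(band)))
        sample.extend(code for code, _ in chosen)
    return sample
```

Scripts the first marker has already flagged as problematic would be moderated in addition to, not instead of, this sample.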
25. A clear audit trail must be maintained where moderation has taken place. This should demonstrate the samples that have been considered, any systematic issues identified, and details of the actions taken to rectify these. One means of achieving this is by the use of a mark proforma (see paragraph 28 below).
26. A department may consider it appropriate to use both double-marking and moderation as mechanisms of quality assurance for the same exam/assignment (for instance, double-marking all fails, borderlines and problematic scripts to provide a combined academic judgement, and moderating a sample of all other scripts to monitor the quality and consistency of marking). In such cases it is important that the departmental assessment policy distinguishes between the purposes of each mechanism, clearly setting out where each is to be applied and the action required from the second examiner (e.g. that for the moderated sample marks must not be altered as not all students would have been treated the same way). This is particularly important where the same second examiner will operate in both roles (i.e. as second marker and as moderator).
27. No marks or judgmental comments are to be written on examination scripts. It is, however, permitted to make factual annotations where these assist the marking process, for example in marking a language exercise or a mathematical problem. Marks or judgmental comments may be written on summative coursework, in order to support the provision of effective feedback to students.
28. A mark proforma is a separate sheet on which the mark itself and the rationale for the mark awarded are recorded. The proforma should:
a. reflect the agreed level descriptors and assessment criteria for the work concerned;
b. include a brief statement by the marker of the rationale for the mark awarded (consistent with the assessment criteria);
c. include, where appropriate, evidence of communication between markers and the rationale for the agreed mark reached or for failure to reach agreement and, in such an event, the steps then taken (e.g. to refer the work to the external examiner);
d. be retained for a minimum of one year or the duration of the period for which the script/assignment/dissertation is retained (if longer) and be kept with the examination script or assessed work (see also Learning and Teaching Handbook 6.1.4).
29. Marking to a template involves marking to a specified set of answers with marks clearly allocated for each element of the work. This sort of marking may be carried out in some circumstances by postgraduates, provided that their results are moderated by an academic staff member.
30. Objective marking is closely related to marking to a template but may include also marking by computer (e.g. of a multiple choice question using an optical mark reader).
Checking of assessment material
31. Departments should have in place clear procedures for the checking of assessment material. This should ensure that all pages/questions have been marked (by both markers) and that marks have been totalled correctly and there are no arithmetical or other errors in the marking process.
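A minimal sketch of the arithmetic part of this check (the data shapes and names are assumed for illustration only):

```python
def find_totalling_errors(per_question: dict[str, list[float]],
                          recorded_totals: dict[str, float]) -> list[str]:
    """Return the anonymous codes of candidates whose recorded total does
    not equal the sum of their per-question marks, or for whom no total
    has been recorded at all. Purely an illustrative check."""
    return [code for code, marks in per_question.items()
            if code not in recorded_totals
            or abs(sum(marks) - recorded_totals[code]) > 1e-9]
```

Checking that every page and question has actually been marked by both markers remains a manual step that such a tally cannot replace.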