Comparison of Assessments
In a workshop it is common for the same piece of work to be assessed by both the teacher and the students. If examples are used, the teacher assesses them first and the students then assess a selection of them. The students' work may well be assessed by the teacher, at least in part, and very possibly by a number of students. A workshop allows the teacher to award a proportion of the grade for the student's assessments; the remainder of the grade is allocated to the assessments of the work itself. (The proportions of the grade given to these two areas are set towards the end of the workshop.) A student's assessments are graded on how well they match the corresponding assessments made by the teacher. (In the absence of a teacher assessment, the average of the peer assessments is used.)
The degree of agreement between the student's and the teacher's assessments is based on the differences between the scores given to the individual elements (in fact, the squared differences are used). The mean of these differences must be converted into a meaningful grade. The "Comparison of Assessments" option gives the teacher a degree of control over how these comparisons are converted into grades.
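The comparison described above can be sketched as follows. This is an illustrative example of the mean-of-squared-differences measure, not Moodle's actual code; the function and variable names are invented for this sketch.

```python
def mean_squared_difference(teacher_scores, student_scores):
    """Mean of the squared per-element differences between two assessments."""
    diffs = [(t - s) ** 2 for t, s in zip(teacher_scores, student_scores)]
    return sum(diffs) / len(diffs)

# Example: ten Yes/No elements, scored 1 (Yes) or 0 (No).
teacher = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
student = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]  # disagrees on two elements

print(mean_squared_difference(teacher, student))  # → 0.2
```

The smaller this mean, the closer the student's assessment is to the teacher's; the option below controls how the mean is then mapped onto a grade.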
To get some idea of the effect of this option, take the (fairly simple) case of an assessment with ten Yes/No questions. For example, the assessment might ask "Is the chart correctly formatted?", "Is the calculated profit $100.66?", and so on. When the "Very Lax" setting is chosen, perfect agreement between the student's and teacher's assessments gives a grade of 100%; one mismatched question gives a grade of 90%, two disagreements give 80%, three disagreements 70%, etc. That might seem very reasonable, and you might wonder why this option is called a "Very Lax" comparison. Well, consider a student doing a completely random assessment, simply guessing the answers to the ten questions. On average, five of the ten questions would match, so this "monkey's" assessment would get a grade of around 50%. The situation becomes a little more sensible with the "Lax" option, where a random assessment gets around 20%. When the "Fair" option is chosen, random guessing results in a zero grade most of the time. At this level, a grade of 50% is given when the two assessments agree on eight of the ten questions; if three questions are in disagreement, the grade is 25%. With the option set to "Strict", two questions out of sync give a grade of 40%. Moving into "Very Strict" territory, a disagreement in just two questions drops the grade to 35% and a single disagreement gives a grade of 65%.
This example is slightly artificial, as most assessments have elements with a range of values rather than just Yes or No. In those cases the comparison is likely to result in somewhat higher grades than the values indicated above. The various levels (Very Lax, Lax, Fair...) allow the teacher to fine-tune the comparisons. If the grades being given for assessments seem too low, this option should be moved towards the "Lax" or even "Very Lax" settings. Conversely, if the grades for the students' assessments are generally felt to be too high, this option should be moved to the "Strict" or "Very Strict" settings. It is really a matter of trial and error, with the best starting point being the "Fair" option.
During the course of the workshop the teacher may feel that the grades given to the student assessments, shown on the workshop's Administration page, are either too high or too low. In that case, the teacher can change the setting of this option and re-calculate the student assessment grades (the "grading grades"). The re-calculation is done by clicking the "Re-grade Student Assessments" link found on the administration page of the workshop. This can be safely performed at any time in the workshop.