
Author: Steve Loddington

Editor: Nicola Wilkinson

Introduction

Background to WebPA and the WebPA project

WebPA is an automated online system which facilitates peer moderated marking of group work. Peer moderated marking is where students carry out a group task set by the tutor and then assess the performance of themselves and their peer group (team) in relation to that task or series of tasks. WebPA is easy to use and is appreciated by students as a fair way of assessing group work activities, and by academics for saving time on marking. WebPA is freely available for anyone to use within their teaching.

The WebPA project is a JISC funded research and development study focusing on peer moderated marking as described above. The project runs between October 2006 and March 2009 and is led by the Faculty of Engineering, Loughborough University. Project partners include the eServices Integration Team at the University of Hull and the Higher Education Academy Engineering and Physical Sciences Subject Centres.

A number of iterations of WebPA have been used at Loughborough University since 1998, and the project aims to embed WebPA within other institutions so that others can benefit from the work carried out over the last 15 years. WebPA was awarded an IMS Global Learning Impact Award in 2008; see
http://www.imsglobal.org/learningimpact2008/2008LIAwinners.html.

Methods

This document is informed by case studies developed from face-to-face interviews. Interviews were carried out with 11 academic WebPA users at Loughborough University, and a case study template was completed for each one. Three other WebPA users completed the case study templates themselves. In total, therefore, 14 case studies were created across eight departments at Loughborough University, as shown in Table 1.

Table 1: Case Studies across faculties and departments.

Faculty / Department                                         #
Faculty of Engineering                                       7
    Aeronautical and Automotive Engineering                  3
    Civil and Building Engineering                           3
    Mechanical and Manufacturing Engineering                 1
Faculty of Science                                           3
    Information Science                                      1
    Mathematics                                              2
Faculty of Social Sciences and Humanities                    4
    Business School                                          1
    English and Drama                                        2
    Politics, International Relations and European Studies   1

The aim was to identify how WebPA was being used across different disciplines and year groups.

The case study methodology was chosen as a way of capturing detailed accounts of individual use that could be presented in a common format. The JISC case study template was used as the skeleton, and questions were added to tailor it to this exercise and to make explicit what was being asked.

In the academic year 2007/2008, WebPA was used by 55 tutors across 106 modules within 15 departments. The 14 case studies therefore represent over one-quarter (25.4%) of the tutors using WebPA at Loughborough University and over one-half (53.3%) of the departments. WebPA was used on a range of modules with cohort sizes ranging from 20 to 297 students. On average, the tutors within the case studies had used WebPA three to four times.

Purpose of this document

This document provides information for new and existing users of WebPA. The information and advice was collated from research carried out by the WebPA project. It fulfils a need, identified from experiences of running a variety of face-to-face sessions (workshops, staff development sessions, taster events and conference presentations), for a best practice booklet for people wanting to use WebPA and for those already using the tool. Earlier research (Loddington, 2008; Loddington et al., 2008) identified that academics use peer moderated marking for many different group work activities and situations. This research identifies how WebPA is used at Loughborough University and highlights good practice. The intention of these guidelines is not to replicate the existing help mechanism within the WebPA tool, but to provide examples of good practice from real cases of WebPA use.

Benefits of WebPA

From those who use WebPA as a method of peer moderated marking, the following personal incentives and reasons were identified:

Potential benefits for students included:

Identified advantages for institutions were:

Generic principles

The case studies highlighted that peer moderated marking was used in many different ways, which can make general recommendations difficult. Nevertheless, generic principles have been distilled from the discussions with tutors; these promote effective practice for those already using WebPA and those new to it.

When carrying out peer moderated marking, ensure that you are:

Comfortable
It is crucial that tutors using peer assessment are comfortable with the fundamental elements of the assessment. Example questions to consider:

Practical
This applies to the prescribed task and the assessment. Example questions to consider:

Reasonable
Think about the reasoning behind the task and the assessment. Example questions to consider:

Sensible
Maintain a sensible approach to peer assessment. Example questions to consider:

The Preparation

Preparing yourself for the assessment

What do you want to achieve by using peer moderated marking as a way of assessing students on group projects? Is it because you want to give students individual scores for group work activities, or simply to save time? There may be a mixture of personal reasons as well as other benefits, some of which have been outlined already. Writing down your reasons for using peer assessment can be a useful exercise for later reflection on whether the method achieved its goal. Below are some of the aims that other academic tutors set out to achieve by using online peer assessment:

There are a number of decisions to be made and issues to think about in relation to the assessment, some of which you may not have thought of previously. Questions you may need to consider include:

Preparing students for the assessment

One of the recommendations made by those who currently carry out peer moderated marking is the importance of briefing students on the assessment. Make sure that students are aware of as many of the details of the assessment as possible, such as the criteria, the intended assessment schedule and the link to the WebPA system.

Explaining the assessment to students at an early stage is important. This avoids rushing through the logistics of the assessment near the end of a module or semester, when there is often little time to do so.

It was clear from the case studies that even those with considerable experience of peer moderated marking identified ways to improve their assessment practices. Many adjust their practices from time to time, depending on what works best. Some introduce new features or procedures from year to year to see whether they make a positive difference to the overall assessment process. The literature advises those new to peer moderated marking to start small and do what they are comfortable with in the first instance.

Key things to tell students about the peer moderated marking assessment:

Differences in cultures, disciplines and year groups

The case studies show that WebPA was used across a wide range of years including foundation, undergraduate years 1-4 and taught postgraduate programmes. This involved a variety of group work within the classroom, within industry and within other countries. A number of important factors were identified in relation to cultures, disciplines and year groups.

Cultures

There were several important outcomes in relation to peer moderated marking and some overseas students.

Disciplines

Different disciplines clearly have different characteristics; however, these can have a direct impact upon the design and implementation of peer moderated marking assessments.

Year groups

When researching differences in behaviour between year groups, there were visible characteristics distinguishing first year students from third year students.

First year students

Third/final year students

The assessment

Group selection

Random

Students are randomly assigned to groups by the tutor.

Advantages

Disadvantages

Seeded

High-achieving students are distributed evenly between groups by the tutor. For example, the tutor decides how many groups are required, let's say 25 groups of four students. The 25 students who scored highest in previous assignments are then made the ‘seeds’, one per group. The rest of the students then sign up to a group of their choice.
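As an illustrative sketch only (the function and variable names below are hypothetical and not part of WebPA), the seeding step could be written in Python as follows:

    # Hypothetical sketch of seeded group allocation; not WebPA code.
    def seed_groups(students, previous_scores, num_groups):
        # Rank students by their scores in previous assignments.
        ranked = sorted(students, key=lambda s: previous_scores[s], reverse=True)
        # The top scorers become the 'seeds', one per group.
        seeds, rest = ranked[:num_groups], ranked[num_groups:]
        groups = [[seed] for seed in seeds]
        # Students in 'rest' then sign up to a group of their choice.
        return groups, rest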

Advantages

Disadvantages

Self-select

Students are free to choose their own groups.

Advantages

Disadvantages

Group size

When carrying out any method of group work, it is unlikely that all groups will contain exactly the same number of students; there are often groups with one student more or fewer than the others.

Table 2 shows the group sizes used within the case study examples.

Table 2: Group sizes

Number of students per group    3    4    5    6    7    8    9    10+
Number of academics             1    7    3    1    1    0    0    1

Exactly one-half of the academics used a group size of four. The mean number of students per group was 4.4.

The number of students per group can differ depending on the task being set; however, groups of four to five students are recommended. The case studies suggest that groups of three students are too small and groups of more than six are too large.

In principle, from the research carried out, the following rule applies in relation to group size:

The smaller the group, the more likely it is that a higher deviation will be seen between group members' marks within that group.

There are other factors that can influence the number of students per group which need to be taken into consideration:

Criteria

There are a number of things to consider in relation to the assessment criteria. These include:

The number of criteria

Table 3 shows the number of criteria used by the academics within the case studies.

Table 3: Number of criteria used within the assessments

Number of criteria     1    2    3    4    5    6    7    8+
Number of academics    2    0    2    3    4    2    0    1

The results show that the number of criteria used by academics varied. It is recommended that between three and six criteria are used within an assessment. Too many criteria may make the assessment take too long, and students may rush the final questions in order to finish. In contrast, too few criteria may produce skewed results, and students may not see the point of the assessment.

Scoring range
The scoring ranges used by academics were not recorded within the case studies. However, from the example criteria shown in the Appendix and from other investigations into the use of WebPA, a high majority of users use a scoring range of 0-5 or 1-5. Although the software allows different scoring ranges for different questions within an assessment, it is recommended that a constant range is used throughout.

One argument against using a scoring range of 1-5 is that students then have the option of choosing three as a neutral mark. On the other side of the argument, it may be a good idea to give students the option to be neutral, as they may not be able to assess every student's contribution within the group. This is particularly so within large groups, where it is more difficult for students to know each and every person's contribution to the group task.

Criteria phrasing
It is important to be explicit when wording the criteria, so that students know what they are being asked to assess against. It sounds an obvious tip, but the one thing that existing users of WebPA stressed most in the case studies was the importance of creating clear criteria.

You may find that the criteria need modifying year on year, especially if feedback is received from students. One academic reported that a student had complained that they could not understand the criteria. While this was an isolated case, and no marking criteria will ever satisfy every student, it is something that needs careful consideration. If colleagues already use WebPA, ask them for their criteria, or for any criteria that they feel work particularly well.

Linking criteria to the Intended Learning Outcomes
Many academics believe that the assessment criteria should be linked directly to the Intended Learning Outcomes (ILOs) of the module or task, to ensure that students are assessing in line with the aims of the module. It may therefore help, when creating criteria, to start from the ILOs and consider whether the criteria can be built around them. Some believe that criteria for peer moderated marking should always be linked to the ILOs.

WebPA Weighting

Table 4 shows the WebPA weightings used by the 14 academics from the case studies. The results show that 11 of the 14 (78.6%) used a WebPA weighting of 50% or less. Interestingly, the three remaining academics used a 100% weighting.

Table 4: WebPA weightings used by users of WebPA

WebPA weighting    0-10%   11-20%   21-30%   31-40%   41-50%   51-60%   61-70%   71-80%   81-90%   91-100%
Number of users    1       3        2        0        5        0        0        0        0        3

It is recommended that a weighting of 50% or less is used when first using WebPA.

Within the software it is possible to create a number of mark sheets for the same set of results. For example, it is possible to create one mark sheet with a 50% tutor mark / 50% WebPA weighting and another with a 25% tutor mark / 75% WebPA weighting. These can then be compared to show how much difference decreasing or increasing the WebPA weighting makes to students' grades.
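As a rough illustration of that comparison (the function below and its names are assumed for this sketch; the WebPA score it uses is explained in the next section):

    # Hypothetical sketch: final mark under different WebPA weightings.
    def final_mark(tutor_mark, webpa_score, webpa_weight):
        tutor_weight = 1.0 - webpa_weight
        return tutor_weight * tutor_mark + webpa_weight * tutor_mark * webpa_score

    # The same student (tutor mark 65, WebPA score 1.15) under two weightings:
    print(final_mark(65, 1.15, 0.50))  # 69.875  (50% WebPA weighting)
    print(final_mark(65, 1.15, 0.75))  # 72.3125 (75% WebPA weighting)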

The WebPA Algorithm and WebPA Score

It is important to explain the WebPA algorithm to students when using WebPA as a method of peer assessment. This will help them understand how their marks are calculated within the system and, hopefully, reduce complaints. If the students all gave each other full marks, then they would all receive the tutor mark, because the WebPA algorithm normalises scores within the group.
Example: If a student gives every member of the group (including themselves) the highest mark – let's say 5 – then that person has awarded a total of 20 marks (5 marks x 4 people).

An individual's WebPA score is then calculated from these marks: the mark each assessor awarded them (5) is divided by the total marks that assessor awarded (20), giving a fraction of 0.25, and the fractions received from all four assessors are summed. The result is 1.

This process is repeated for every person for every question.
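Expressed compactly (the notation below is introduced here for illustration and is not from the WebPA documentation): if m_{ji} is the mark that student j awards student i in a group of n students, then student i's WebPA score is

    S_i = \sum_{j=1}^{n} \frac{m_{ji}}{\sum_{k=1}^{n} m_{jk}}

Because each assessor's fractions sum to 1, the WebPA scores within a group always average to 1, which is why uniform marking gives every member a score of exactly 1.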

Let us say that in this instance the tutor used a 50% tutor mark / 50% WebPA split to determine the overall scores, and marked the group output at 65%.

If students receive a WebPA score of 1 then they will receive the tutor mark:

50% of 65 + 50% of 65 x 1 = 65

If students had not given themselves and each other full or identical marks, then their WebPA scores would be above or below 1 and differentiation between marks occurs.

If students receive a WebPA score above 1 (e.g. 1.15), they receive a higher mark than the tutor mark:

50% of 65 + 50% of 65 x 1.15 = 69.88 => 70%

If students receive a WebPA score below 1 (e.g. 0.85), they receive a lower mark than the tutor mark:

50% of 65 + 50% of 65 x 0.85 = 60.13 => 60%
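Putting the algorithm and the weighting together, here is a minimal Python sketch of the whole calculation (illustrative names only; WebPA itself remains the authoritative implementation):

    # Illustrative sketch of the WebPA calculation; not WebPA's own code.
    def webpa_scores(marks):
        # marks[j][i] is the mark student j awarded student i,
        # including j == i (the self-assessment).
        n = len(marks)
        scores = [0.0] * n
        for j in range(n):
            total = sum(marks[j])  # all marks awarded by student j
            for i in range(n):
                scores[i] += marks[j][i] / total  # normalised fraction
        return scores

    def final_marks(tutor_mark, scores, webpa_weight=0.5):
        return [(1 - webpa_weight) * tutor_mark
                + webpa_weight * tutor_mark * s
                for s in scores]

    # Everyone awards everyone (including themselves) 5 marks:
    uniform = [[5, 5, 5, 5] for _ in range(4)]
    print(webpa_scores(uniform))                   # [1.0, 1.0, 1.0, 1.0]
    print(final_marks(65, webpa_scores(uniform)))  # [65.0, 65.0, 65.0, 65.0]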

From the case studies, some academics reassure their students that although the WebPA system provides a final mark, the tutor can and will moderate marks as they see fit. It is the academic tutor's duty to check marks and groups for anomalies, such as very high or very low scores, and to adjust the marks accordingly. Particular attention is needed when one or more students within a group fail to submit their marks, as this can have an impact on the other students' scores within that group.

Non-completion penalty

The majority of tutors within the case studies did not impose a non-completion penalty on those who failed to take an assessment. The main reason for introducing a non-completion penalty would be to increase the number of students who complete the assessment by penalising non-completers. Encouragingly, in the majority of cases a high percentage of students carried out the assessment without a non-completion penalty in place.

Further recommended practices

The case studies captured what academics think are the most important aspects of running an effective online peer moderated marking assessment, and some potential solutions were identified.

Ensuring the criteria are understood by the students is crucial. This is the most popular recommendation by those who have used WebPA.

Potential solutions:

Present very clear instructions to students. This is especially important when dealing with students from different backgrounds and cultures to ensure that every student understands what is being asked of them.

Potential solution: Give students a variety of opportunities to obtain help if they do not understand what is being asked of them or what the assessment will entail.

Make sure students know how to access WebPA from an early stage in the group work process. This will give students no excuse to say that they are unaware of how to access the assessment.

Giving students enough criteria to assess others against provides a good basis for a successful assessment.

It is important to make the student groupings within the system match the actual student groupings; this can save a huge amount of time. Problems occur if students are allocated to the wrong groups or move groups without telling the academic tutor.

Potential solution: Email all students one or two weeks before the assessment is due to begin, asking them to log into WebPA and check their groups and report any errors (e.g. if students are in the wrong groups or if someone has left).

Obtaining support, where possible, can help you manage the assessment effectively.

It is recommended that the right number of questions is set for the task in hand.

In some cases large groups do not keep students busy enough, and this can cause problems within groups. Groups of six or more can end up splitting into sub-groups of three or four to do certain tasks, and other members of the group may then not know how well people have performed within those sub-groups.

Obtaining student participation.

Make adjustments to assessment practice year on year.

Potential solution: It is recommended that the assessment practice is reviewed every year and adjustments made for any areas that did not work well. However, if you feel you have found an effective way of carrying out peer assessment, there is little need to change something that works particularly well.