Loughborough's support for the WebPA Project has now ended. Archived documentation can be found on the WebPA GitHub pages.
Community support can still be found through the JISC mailing list.

Effective Practice using WebPA

Author: Steve Loddington

Editor: Nicola Wilkinson

Introduction

Background to WebPA and the WebPA project

WebPA is an online automated system which facilitates peer moderated marking of group work. Peer moderated marking is where students carry out a group task set by the tutor and then assess the performance of themselves and their peer group (team) in relation to a task or series of tasks. WebPA is very easy to use and is appreciated by students as a fair way of assessing group work activities, and by academics for the time it saves on marking. WebPA is freely available for everyone to use within their teaching.

The WebPA project is a JISC funded research and development study focusing on peer moderated marking as described above. The project runs between October 2006 and March 2009 and is led by the Faculty of Engineering, Loughborough University. Project partners include the eServices Integration Team at the University of Hull and the Higher Education Academy Engineering and Physical Sciences Subject Centres.

A number of iterations of WebPA have been used at Loughborough University since 1998, and the project attempts to embed WebPA within other institutions so that others can benefit from the work already carried out over the intervening years. WebPA was awarded an IMS Global Learning Impact Award in 2008; see
http://www.imsglobal.org/learningimpact2008/2008LIAwinners.html.

Methods

This document is informed by case studies developed from face-to-face interviews. Interviews were carried out with 11 academic WebPA users at Loughborough University and a case study template was completed for each one. Three other WebPA users completed the case study templates themselves. In total, therefore, 14 case studies were created across eight departments at Loughborough University, as shown in Table 1.

Table 1: Case studies across faculties and departments (number of case studies in brackets).

Faculty of Engineering (7)
  Aeronautical and Automotive Engineering (3)
  Civil and Building Engineering (3)
  Mechanical and Manufacturing Engineering (1)
Faculty of Science (3)
  Information Science (1)
  Mathematics (2)
Faculty of Social Sciences and Humanities (4)
  Business School (1)
  English and Drama (2)
  Politics, International Relations and European Studies (1)

The aim was to identify how WebPA was being used across different disciplines and year groups, and to discover:

  • module and assessment information (average group size, number of students per group, number of criteria used etc.),
  • the benefits of peer assessment for academics and students,
  • the aims and incentives for using WebPA,
  • barriers or challenges that were faced,
  • and key aspects of effective peer assessment practice.

The case study methodology was chosen as a way of capturing detailed accounts of individual use that could be displayed in a similar format to one another. The JISC case study template was used as the skeleton, and questions were added to tailor it to this exercise and to make more explicit what was being asked.

In the academic year 2007/2008, WebPA was used by 55 tutors across 106 modules within 15 departments. The 14 case studies represent over one-quarter (25.4%) of the total users at Loughborough University and over one-half (53.3%) of departments. WebPA was used on a range of modules with differing cohort sizes, ranging from 20 to 297 students. On average, the tutors within the case studies had used WebPA three to four times.

Purpose of this document

This document provides information for new and existing users of WebPA. The information and advice was collated from research carried out by the WebPA project. It fulfils a need, identified from experiences of running a variety of face-to-face sessions (workshops, staff development sessions, taster events and conference presentations), for a best practice booklet for people new to WebPA and those already using the tool. Earlier research (Loddington, 2008; Loddington et al. 2008) identified that academics use peer moderated marking for many different group work activities and situations. This research builds on that by identifying how WebPA is used at Loughborough University and highlighting good practice. The intention of these guidelines is not to replicate the existing help mechanism within the WebPA tool, but to provide examples of good practice from real cases of WebPA use.

Benefits of WebPA

From those who use WebPA (as a method for peer moderated marking), the following personal incentives and reasons were identified:

  • Saved time / reduced workload.
  • Useful for obtaining a picture of what happens within groups.
  • Confidence (uniform assessments) / feeling that the process is fair.
  • Saved budget / greener.
  • Fewer calculations to make as WebPA automates individual scores for students.
  • A reduction in the number of complaints to deal with.
  • Permanence of records stored centrally rather than on PCs.
  • Provides a sensible deviation of marks for group work projects.
  • A good aid when providing students with feedback on their assessment.

Potential benefits for students included:

  • Allowed academic tutors to provide students with timely feedback on their assessment through the peer assessment tool.
  • Provided an opportunity to reflect upon the group work process.
  • Enhanced key employability and interpersonal skills (e.g. communication, reflection, evaluation and team work skills).
  • Provided an opportunity to reward those that worked hard and penalise those who contributed less.
  • Positively impacted upon some students’ behaviour because they knew they were going to be assessed by their peers.
  • Peer assessed marks tended to be consistent with tutor perceptions of individual contributions.
  • An opportunity for students to air their views within a secure environment.

Identified advantages for institutions were:

  • The majority of students liked it.
  • Records are stored centrally rather than on individual and often disparate PCs/systems.
  • It contributes positively towards quality assurance procedures.
  • It can be used to provide a variety of assessment processes for students.
  • The system is made available to all academics at the institution.

Generic principles

The case studies highlighted that peer moderated marking was used in many different ways, which can make it difficult to give blanket recommendations. The generic principles below have been distilled from the discussions carried out with tutors and promote effective practice for those already using, or new to, WebPA.

When carrying out peer moderated marking, ensure that you are:

Comfortable
It is crucial that tutors using peer assessment are comfortable with the fundamental elements of the assessment. Example questions to consider:

  • Am I comfortable with the WebPA weighting that I set?
  • Am I happy with the non-completion penalty?
  • Am I happy with the assessment as a whole?

Practical
This applies to the prescribed task and the assessment. Example questions to consider:

  • Is the workload of the prescribed task balanced against the number of people in the group?
  • Is it practical to have very small groups (e.g. groups of 2) or very large groups (e.g. groups of 20)?

Reasonable
Think about the reasoning behind the task and the assessment. Example questions to consider:

  • Are the scheduled opening and closing times of the assessment reasonable?
  • Is it reasonable to expect students to have to justify their marks?
  • What is a reasonable adjustment to a group’s marks when students provide evidence of discrimination against a group member(s)?
  • Do I allow any students that have missed the deadline to complete the assessment?
  • Have I reasonably justified the use of peer assessment to students?

Sensible
Maintain a sensible approach to peer assessment. Example questions to consider:

  • Is the WebPA percentage a sensible one?
  • Is it sensible to divulge information about individual marks to the students?
  • Do the students fully understand peer assessment and the assessment process?

The Preparation

Preparing yourself for the assessment

What do you want to achieve by using peer moderated marking as a way of assessing students for group projects? Is it because you want to give students individual scores for group work activities, or simply an attempt to save time? There may be a mixture of personal reasons as well as other benefits, some of which have been outlined already. Setting out a reason for using peer assessment can be a useful exercise for later reflection on whether the method has achieved a certain goal. Below are some aims that other academic tutors set out to achieve by using online peer assessment:

  • to provide a means for students to differentiate marks within group projects,
  • to encourage participation in group project work,
  • to get students to accept that they have to carry out group work,
  • to obtain a 100% assessment completion rate,
  • to automate a process that I was already doing,
  • to satisfy departmental policy.

There are a number of decisions to be made and issues to think about in relation to the assessment, some of which you may not have thought of previously. Some questions that you may need to think about include:

  • What task will be peer assessed?
  • What criteria are students going to be assessed against?
  • When will the assessment be scheduled for? What do I have to do before then?
  • How are you going to tell students about peer moderated marking?

Preparing students for the assessment

One of the recommendations made by those who currently carry out peer moderated marking is the importance of briefing students on the assessment. Make sure that students are aware of as many of the details of the assessment as possible, such as the criteria, the intended assessment schedule and the link to the WebPA system.

Explaining the assessment to students at an early stage is important. This will eliminate rushing through the logistics of the assessment near the end of a module or semester, when there is often little time to do so.

It was clear from the case studies that even those with a good amount of experience in using peer moderated marking identified ways to improve their assessment practices. Many adjust their assessment practices from time to time, depending on what works best. Some introduce new features or procedures from year to year, to see if they make a positive difference to the overall assessment process. The literature advises those new to peer moderated marking to start out small, and with what they are comfortable, in the first instance.

Key things to tell students about the peer moderated marking assessment:

  • It’s a confidential assessment and collusion is not permitted (unless you specifically want them to collude).
  • A description of how the algorithm works, e.g. if they give each other full marks they will not be gaining anything and will receive the tutor mark (see 'The WebPA Algorithm and WebPA Score' below for more details).
  • Give an explanation as to why peer moderated marking is being used (if appropriate, refer to the reasons that you set out to achieve).
  • Highlight the benefits for students if they engage with the task and the assessment as directed by you.
  • Provide students with the exact details of the assessment as soon as possible (the timing, the location of WebPA, the groups, the criteria).

Differences in cultures, disciplines and year groups

The case studies show that WebPA was used across a wide range of years including foundation, undergraduate years 1-4 and taught postgraduate programmes. This involved a variety of group work within the classroom, within industry and within other countries. A number of important factors were identified in relation to cultures, disciplines and year groups.

Cultures

There were several important outcomes in relation to peer moderated marking and some overseas students.

  • Some cultures hold that you should not judge others, and students may therefore have problems engaging with the assessment. This was reported particularly of some students from East Asian backgrounds. It is important to take such cultural beliefs into consideration when explaining the process to students.
  • Some students may not have been exposed to carrying out group work or evaluating the performance of others in their previous educational environment and therefore may feel uncomfortable when having to do so. Group work and peer assessment may be a new concept and therefore it requires a detailed explanation.

Disciplines

It is obvious to say that different disciplines have different characteristics; however, these characteristics can have a direct impact upon the design and implementation of peer moderated marking assessments.

  • For disciplines where more dialogue between students about their contribution to a group project is expected (e.g. English and Drama, Art), students should be told about the importance of the confidentiality of the assessment.
  • Some disciplines and activities (e.g. Politics, International Relations and European Studies) require students to air their views in groups and to debate/discuss issues which are based more upon opinion than theory. It is therefore important to give students some time to think about what you are asking them to do.
  • For disciplines where there is little group work (e.g. Mathematics), students tend to be less familiar and comfortable with group work projects. It is advised that extra time is taken to explain the reasoning behind the use of peer moderated marking, and the difference between this method and traditional methods where students receive the same mark for group projects regardless of individual contribution.

Year groups

When researching differences in behaviour between year groups, distinct characteristics of first year and third year students emerged.

First year students

  • It is more likely that there is a lower deviation between marks.
  • It is more likely that students will drop out, which can affect the results.
  • There are generally fewer complaints.
  • They need the process explained to them in more detail, as they will most likely never have been subject to peer moderated marking or have used WebPA before.

Third/final year students

  • There tends to be a higher deviation between marks.
  • There are generally more complaints, because the scores matter more than in the first and second years.
  • Students who have used the system several times previously are more likely to try to play it, so it is more important to check for anomalies.
  • In disputes tutors tend to give the benefit of the doubt to third years as it can have a larger impact upon their overall degree qualification.

The assessment

Group selection

Random

Students are randomly assigned to groups by the tutor.

Advantages

  • More realistic: as in industry, you can't always choose who you work with.
  • Students tend to learn more from the exercise as they are taken out of their comfort zone.

Disadvantages

  • The least popular selection method for students.
  • Random group selection still requires some tweaking. For example, as good practice many academics ensure that groups have an even spread of males and females, and of students from different courses.

Seeded

High achieving students are distributed evenly between groups selected by the tutor. For example, the tutor decides how many groups are required, let's say 25 groups of four students. The 25 students who scored the highest in previous assignments are then made the 'seeds', one per group. The rest of the students then sign up to a group of their choice.
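
As a rough illustration, the seeding step itself can be sketched in a few lines of code. This is a hypothetical helper, not a feature of WebPA: it ranks students by a previous mark, reserves the top n as seeds, one per group, and leaves everyone else to sign up to a group of their choice.

    # Sketch of seeded group selection (illustration only; not part of WebPA).
    def seed_groups(past_marks, n_groups):
        """past_marks maps each student to a previous assignment mark."""
        # Rank students by their previous mark, highest first.
        ranked = sorted(past_marks, key=past_marks.get, reverse=True)
        seeds, rest = ranked[:n_groups], ranked[n_groups:]
        groups = [[seed] for seed in seeds]  # one high achiever per group
        return groups, rest                  # 'rest' choose their own group

    # e.g. for 100 students in 25 groups of four:
    # groups, rest = seed_groups(marks_from_last_term, 25)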

Advantages

  • A relatively quick procedure.
  • Even distribution of ability.
  • Can provide the majority of students with an element of choice.
  • Few complaints.

Disadvantages

  • The seeds may feel that the process is unfair (if they are told that they have been selected based upon past performance).
  • This method is not used by many academics.

Self-select

Students are free to choose their own groups.

Advantages

  • The students' favourite group selection method.
  • Students are able to choose convenient groups (e.g. housemates, friends).

Disadvantages

  • The majority of students are not forced to work outside of their comfort zones or friendship groups.
  • Friendship loyalties can influence the assessment scores.

Group size

When carrying out any method of group work, it is unlikely that all groups will contain exactly the same number of students. There are often groups with one student more or fewer than the other groups.

Table 2 shows the average number of students per group from the case study examples.

Table 2: Group sizes

Number of students per group   3   4   5   6   7   8   9   10+
Number of academics            1   7   3   1   1   0   0   1

Exactly one-half of the academics used a group size of four. The mean number of students per group was 4.4.

The number of students per group can differ depending on the task that is being set; however, groups of four to five are recommended. Groups of three students were identified as too small, and groups of more than six as too large.

In principle, from the research carried out, the rule below applies in relation to the size of the group:

The smaller the group, the more likely it is that a higher deviation will be seen between group members' marks within that group.

There are other factors that can influence the number of students per group which need to be taken into consideration:

  • Students drop out. A group of three would then become a group of two, which could involve a lot of work for the two remaining students. The WebPA algorithm does not work very well with two students and can give skewed results.
  • Students may request to change groups for a number of reasons, for example because they do not get on with other members of the group. In this case, there could then be a group of three and a group of five.

Criteria

There are a number of things to consider in relation to the assessment criteria. These include:

  • The number of criteria to use,
  • The scoring range for the criteria,
  • The criteria phrasing,
  • Linking criteria to the Intended Learning Outcomes (ILOs).

The number of criteria

Table 3 shows the number of criteria used by the academics within the case studies.

Table 3: Number of criteria used within the assessments

Number of criteria     1   2   3   4   5   6   7   8+
Number of academics    2   0   2   3   4   2   0   1

The results show that the number of criteria used by academics varied. It is recommended that between three and six criteria are used within an assessment. Asking students to assess against too many criteria may make the assessment take too long, and students may rush at the end to complete it. In contrast, having too few criteria may produce skewed results, and students may not see the point of the assessment.

Scoring range
The scoring ranges used by academics were not recorded within the case studies. However, from the example criteria shown in the Appendix and from other investigations into the use of WebPA, a high majority of users use a scoring range of 0-5 or 1-5. Although the software does allow different scoring ranges for different questions within an assessment, it is recommended that a constant range is used throughout.

One argument against using a scoring range of 1-5 is that students then have the option of choosing three as neutral. The other side of the argument is that it may be a good idea to give students the option to be neutral, as they may not be able to assess every student's contribution within the group. This may be particularly so within large groups, where it is more difficult for students to know each and every person's contribution to the group task.

Criteria phrasing
It is important to be explicit when wording the criteria, so that students know what they are being asked to assess against. It sounds an obvious tip, but the one thing that existing users of WebPA stressed the most in the case studies was the importance of creating clear criteria.

You may find that the criteria need modifying year on year, especially if feedback is received from the students. One academic stated that a student had complained that they could not understand the criteria. Whilst this is a minority case, and no marking criteria will ever satisfy 100% of students, it is something that needs careful consideration. If colleagues already use WebPA, ask them for their criteria or any criteria that they feel work particularly well.

Linking criteria to the Intended Learning Outcomes
Many academics believe that the assessment criteria should be linked directly to the Intended Learning Outcomes (ILOs) of the module or task, to ensure that students are assessing in line with the aims of the module. Therefore, when creating criteria it may help to think about the ILOs and whether the criteria can be built around them. Some believe that criteria for peer moderated marking should always be linked to the ILOs.

WebPA Weighting

Table 4 shows the WebPA weightings used by the 14 academics from the case studies. The results show that 11/14 (78.6%) used a WebPA weighting of 50% or less. Interestingly, the three remaining academics used a 100% weighting.

Table 4: WebPA weightings used by users of WebPA

WebPA weighting   0-10%  11-20%  21-30%  31-40%  41-50%  51-60%  61-70%  71-80%  81-90%  91-100%
Number of users       1       3       2       0       5       0       0       0       0        3

It is recommended that a weighting of 50% or less is used when first using WebPA.

Within the software it is possible to create a number of mark sheets for the same set of results. For example, it is possible to create one mark sheet with a 50% tutor mark / 50% WebPA weighting and another with a 25% tutor mark / 75% WebPA weighting. These can then be compared to show how much of a difference decreasing or increasing the WebPA weighting makes to students' grades (illustrated in the sketch in the next section).

The WebPA Algorithm and WebPA Score

It is important to explain the WebPA algorithm to students when using WebPA as a method of peer assessment. This will help them understand how their marks are calculated within the system and, hopefully, reduce complaints. If the students all gave each other full marks, they would all receive the tutor mark, because the WebPA algorithm normalises scores within the group.
Example: If a student gives every member of the group (including themselves) the highest mark, let's say 5, then that person would have awarded a total of 20 marks (5 marks x 4 people).

Each assessor's marks are normalised: the mark they awarded to an individual (5) is divided by the total marks they awarded (20), giving 0.25. An individual's WebPA score is then the sum of the normalised fractions they receive from every member of the group (4 people x 0.25). The result is 1.

This process is repeated for every person for every question.
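
As a rough illustration, this calculation can be sketched as follows. This is a minimal sketch, not the WebPA source code, assuming a single criterion and that every group member (including themselves) is marked by every assessor.

    # Minimal sketch of the WebPA score calculation described above
    # (an illustration, not the WebPA source code).
    def webpa_scores(marks):
        """marks[assessor][assessee] = mark awarded by assessor to assessee.

        Each assessor's marks are normalised to sum to 1, then each
        student's normalised fractions are summed across assessors.
        """
        scores = {student: 0.0 for student in marks}
        for assessor, awarded in marks.items():
            total = sum(awarded.values())         # e.g. 5 + 5 + 5 + 5 = 20
            for assessee, mark in awarded.items():
                scores[assessee] += mark / total  # e.g. 5 / 20 = 0.25 each
        return scores

    # Everyone awards everyone (including themselves) full marks of 5:
    group = ["A", "B", "C", "D"]
    marks = {a: {b: 5 for b in group} for a in group}
    print(webpa_scores(marks))  # each score is 4 x 0.25 = 1.0

With uneven marks, the scores spread above and below 1, which is what drives the differentiation described below.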

Let us say that in this instance the tutor used a 50% tutor mark and 50% WebPA split to determine the overall scores, and that the tutor marked the group output at 65%.

If students receive a WebPA score of 1 then they will receive the tutor mark:

(50% of 65) + (50% of 65 x 1) = 32.5 + 32.5 = 65

If students do not give themselves and each other full or identical marks, then WebPA scores will be above or below 1 and differentiation between marks occurs.

If students receive a WebPA score of above 1 (e.g. 1.15) then they receive a higher mark than the tutor mark:

(50% of 65) + (50% of 65 x 1.15) = 32.5 + 37.375 = 69.875 => 70%

If students receive a WebPA score of below 1 (e.g. 0.85) then they receive a lower mark than the tutor mark:

(50% of 65) + (50% of 65 x 0.85) = 32.5 + 27.625 = 60.125 => 60%
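
The arithmetic above is easy to reproduce. The short sketch below (again an illustration with a hypothetical helper, not WebPA's own code) combines a tutor mark with a WebPA score at a given weighting, and shows how the 25%/75% mark sheet mentioned earlier would spread the marks further.

    # Sketch of the final-mark arithmetic shown above (illustration only).
    def final_mark(tutor_mark, webpa_score, webpa_weighting):
        """webpa_weighting is the WebPA fraction, e.g. 0.5 for 50%."""
        tutor_part = (1 - webpa_weighting) * tutor_mark
        webpa_part = webpa_weighting * tutor_mark * webpa_score
        return tutor_part + webpa_part

    for score in (1.0, 1.15, 0.85):
        print(final_mark(65, score, 0.5))   # 65.0, 69.875, 60.125

    # The same scores on a 75% WebPA weighting spread the marks further:
    for score in (1.0, 1.15, 0.85):
        print(final_mark(65, score, 0.75))  # 65.0, 72.3125, 57.6875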

From the case studies, some academics reassure their students that although the WebPA system provides a final mark, the tutor can and will moderate marks as they see fit. It is the academic tutor's duty to check marks and groups for anomalies, such as very high or very low scores, and to adjust the marks accordingly. Particular attention is needed when one or more students within a group fail to submit their marks, as this can have an impact on the scores of other students in that group.

Non-completion penalty

The majority of tutors within the case studies did not impose a non-completion penalty on those that failed to take an assessment. The main reason behind introducing a non-completion penalty within an assessment would be to attempt to increase the number of students who complete an assessment and penalise the non-completers. Encouragingly, in the majority of cases, a high percentage of students carried out the assessment without a non-completion penalty in place.

Further recommended practices

The case studies captured what academics think are the most important aspects of achieving effective online peer moderated marking assessments. Some potential solutions have also been identified.

Ensuring the criteria are understood by the students is crucial. This is the most popular recommendation by those who have used WebPA.

Potential solutions:

  • Hand out the criteria with the assignment so that students can familiarise themselves with them and know what they are going to be assessed against. This may eliminate complaints about the criteria, as they won't just be sprung upon the students and they will have more time to understand them.
  • Be unambiguous and provide examples of what behaviour is expected. For example, give an example of the behaviour associated with 'engaging with the group task' and 'contributing to discussion' etc.

Present very clear instructions to students. This is especially important when dealing with students from different backgrounds and cultures to ensure that every student understands what is being asked of them.

Potential solution: Give students a variety of opportunities to obtain help if they do not understand what is being asked of them or what the assessment will entail.

Make sure students know how to access WebPA from an early stage in the group work process. This will give students no excuse to say that they are unaware of how to access the assessment.

Providing students with enough criteria to assess others against will provide a good basis for a successful assessment.

It is important that the groupings of students within the system match the actual student groupings. This can save a huge amount of time. Problems occur if students are allocated to the wrong groups or move groups without telling the academic tutor.

Potential solution: Email all students one or two weeks before the assessment is due to begin, asking them to log into WebPA and check their groups and report any errors (e.g. if students are in the wrong groups or if someone has left).

Obtaining support, where possible, can help you manage the assessment effectively.

It is recommended that the right number of questions is set for the task in question.

In some cases large groups do not keep students busy enough, and this can cause problems within groups. Groups of six or more can end up splitting into sub-groups of three or four to do certain tasks, and then other members of the group may not know how well people have performed within those sub-groups.

Obtaining student participation.

Make adjustments to assessment practice year on year.

Potential solution: It is recommended that the assessment practice is reviewed every year to make adjustments for any areas that did not work well. However, if you feel that you have found an effective way of carrying out peer assessment, then there is little need to change something that works particularly well.

