Pima Tech Guides

Perusall: Determining Annotation Quality

About

Perusall uses machine learning algorithms with natural language processing capabilities to imitate a human instructor and generate predicted scores. The quality of students' annotations (comments) may be factored into the Perusall score, though this isn't mandatory; in some courses, annotations simply count as participation points.

Check with your instructor for their specific annotation quality requirements.

For more information on annotation quality, see this article for example scoring criteria: Perusall Scoring Examples

1. Here's an example of a high-quality annotation.

A response such as:

"Particles in close proximity are either attracted to or partially repelled by other particles..."

is better than:

  • Wow, this makes sense.
  • This opened my perception.
  • That's great.

Writing paragraph-length responses with concrete details will earn you a higher annotation score when annotations are graded on quality.

[Image: A Perusall document assignment with the conversation panel open on the right. An arrow highlights three descriptive, nearly paragraph-length comments, which are higher quality than shorter, less detailed comments.]

The instructor can:

  • Adjust when the score is released to students.
  • Choose how they want Perusall's algorithm to interact with the course.
  • Set scoring criteria.

 
