Community Moderation Process

Catalyst flags

All reviews provided by Community Reviewers will be subject to an automated review process using Catalyst Flags. Catalyst Flags* are pieces of code, known as scripts, that are run to analyze every review submitted by Community Reviewers. The Catalyst scripts will raise flags in case of:

  • Similarity ~ reviews which are identical or almost identical to another review

  • Profanity ~ reviews that contain profanities or abusive content or language

  • Use of AI ~ reviews which do not reflect the opinion of the Community Reviewer, or which are provided by others or by an AI agent acting on their behalf.

* Length requirements have already been applied automatically (reviews containing fewer than 150 characters in the impact, feasibility, and value for money criteria).
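
The following is a minimal sketch of what such a flagging script could look like. The similarity threshold, the profanity word list, and the AI-detection stub are assumptions for illustration only; they are not the actual Catalyst scripts.

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.9          # assumed cut-off for "almost identical"
PROFANITY_WORDS = {"exampleword"}   # placeholder word list, not the real one

def flag_review(review: str, other_reviews: list[str]) -> list[str]:
    """Return the list of flags raised for a single review."""
    flags = []

    # Similarity: identical or almost identical to another review.
    for other in other_reviews:
        if SequenceMatcher(None, review, other).ratio() >= SIMILARITY_THRESHOLD:
            flags.append("similarity")
            break

    # Profanity: the review contains profanity or abusive language.
    words = {w.strip(".,!?").lower() for w in review.split()}
    if words & PROFANITY_WORDS:
        flags.append("profanity")

    # Use of AI: a stub showing where an AI-detection check would sit;
    # the document does not describe how this check is implemented.
    if looks_ai_generated(review):
        flags.append("use_of_ai")

    return flags

def looks_ai_generated(review: str) -> bool:
    # Placeholder only -- always returns False in this sketch.
    return False
```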

All reviews that have been flagged will go through a moderation process involving Community Moderators (LV2).

  • Community Moderators will be randomly allocated flagged reviews for further assessment. In addition, between 1% and 5% of reviews not flagged by scripts will be allocated to LV2 Community Reviewers for a spot check (depending on the total number of reviews received); a minimal allocation sketch follows this list.

  • Community Moderators will be provided with a secure login for their moderation function in the Moderation Module.

  • Community Moderators can see which flags have been raised by scripts about the review/reviewer.

  • After reading the review and its flags, Community Moderators will submit their moderations, marking reviews they believe were legitimately flagged and providing a reason why the review should be removed. For example, where a flag is raised because of potential AI involvement (e.g. ChatGPT) or plagiarism, the Moderator must make a judgment on whether they believe the review is not the reviewer's own opinion. If the Moderator agrees with the flag, they mark it as REMOVE and give a rationale.

  • The rationale should be a short written statement, accompanied by a selection from a menu of reasons. For example, the moderator would select from options such as:

    1. There is no evident issue with the review

    2. I believe this is not the person’s own opinion

    3. There is profanity

    4. There is too much similarity with another review

    5. Rationale doesn’t correspond with the score given

    6. Rationale contains a false statement

  • If Moderators disagree with the flag and believe the review should be included for voters to consider, they mark it as VALID and give a rationale.

  • Moderators should only discard a review if there is a valid reason why a voter shouldn't see it. Unless there is an obvious reason for removal, such as the flags described above or another clear problem (spamming, for instance), the review should be included.
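
As a rough illustration of the allocation step described above, the sketch below randomly assigns every flagged review to a moderator and adds a spot-check sample of unflagged reviews. The 5% default rate, the function name, and the data shapes are assumptions, not the actual Catalyst implementation.

```python
import random

def allocate_for_moderation(flagged_ids, unflagged_ids, moderators,
                            spot_check_rate=0.05):
    """Map review IDs to the moderator allocated to check them."""
    assignments = {}

    # Every flagged review is randomly allocated to a Community Moderator.
    for review_id in flagged_ids:
        assignments[review_id] = random.choice(moderators)

    # A spot-check sample (assumed 1-5%) of unflagged reviews is also allocated.
    sample_size = min(len(unflagged_ids),
                      max(1, int(len(unflagged_ids) * spot_check_rate)))
    for review_id in random.sample(unflagged_ids, sample_size):
        assignments[review_id] = random.choice(moderators)

    return assignments
```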

Rules to be followed by Community Moderators (LV2)

  • All reviews should be provided to voters unless a Community Moderator has good reason not to include them.

  • Community Moderators are anticipated to approve many flagged reviews, which will then be retained for voters to consider.

  • Moderators are there to provide a sanity check, verifying whether the flag(s) raise a valid concern.

  • Moderators should use their best judgment and give a reason if they agree the flag is valid, meaning the review should not be included.

  • Most reviews will be displayed in the voting application, meaning only those legitimately flagged and marked for removal by LV2 Moderators will not be included.

  • A simple majority forms the consensus on whether or not to include the review. Where two moderators (or an equal number of moderators on each side of the decision) do not form a consensus about including the review, the benefit of the doubt will be given and the review included (see the sketch after this list).

    • In a scenario where only one moderator checks a review, the Catalyst team will fulfill the second necessary moderation.

    • In a scenario where flagged or spot-checked reviews go without moderation by either of the LV2 moderators allocated to check them, the Catalyst team will fulfill the moderation duty.
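
The consensus rule above can be summarized with a short sketch. The function name and the VALID/REMOVE string labels are illustrative assumptions; the rule itself (a strict majority to remove, with a tie resolving to inclusion) follows the description above.

```python
def include_review(moderations: list[str]) -> bool:
    """Decide whether a review is shown to voters.

    Each item in `moderations` is either "VALID" or "REMOVE".
    """
    remove_votes = sum(1 for m in moderations if m == "REMOVE")
    valid_votes = sum(1 for m in moderations if m == "VALID")
    # The review is removed only when a strict majority marks it REMOVE;
    # a tie (or no moderations) gives the benefit of the doubt.
    return remove_votes <= valid_votes

# Examples:
# include_review(["VALID", "REMOVE"])            -> True  (tie: included)
# include_review(["REMOVE", "REMOVE", "VALID"])  -> False (majority to remove)
```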
