I made a robot moderator. It models trust flow through a network built from voting patterns, and detects users and posts/comments that are accumulating a large amount of “negative trust,” so to speak.
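For anyone curious about the mechanics: the core idea is trust propagating along vote edges, in the spirit of PageRank. Here’s a minimal sketch of that idea in Python; it is not the bot’s actual code, and the damping value and starting trust are invented for illustration:

```python
# Minimal sketch of trust propagation over a vote graph (illustrative only).
# Users are nodes; each vote is an edge from voter to author, weighted +1
# for an upvote and -1 for a downvote. Trust flows from voters to authors.
from collections import defaultdict

def propagate_trust(votes, iterations=20, damping=0.85):
    """votes: iterable of (voter, author, weight) tuples, weight in {+1, -1}."""
    outgoing = defaultdict(list)
    for voter, author, weight in votes:
        outgoing[voter].append((author, weight))

    trust = defaultdict(lambda: 1.0)  # everyone starts at neutral trust
    for _ in range(iterations):
        incoming = defaultdict(float)
        for voter, edges in outgoing.items():
            share = trust[voter] / len(edges)  # split the voter's trust across their votes
            for author, weight in edges:
                incoming[author] += weight * share
        for user in set(trust) | set(incoming):
            # PageRank-style update: a small constant baseline plus damped
            # incoming trust; downvotes drag an author's score down
            trust[user] = (1 - damping) + damping * incoming[user]
    return trust

# Toy example: bob is upvoted by two people, mallory is downvoted by both
votes = [("alice", "bob", 1), ("carol", "bob", 1),
         ("alice", "mallory", -1), ("carol", "mallory", -1)]
scores = propagate_trust(votes)
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```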

In its current form, it is supposed to run autonomously. In practice, I have to step in and fix some of its boo-boos when it makes them, which happens sometimes but not very often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about making it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident that at this point it can ease moderation load without causing many problems.
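In rough pseudocode, the change is small: detection stays exactly the same, and only the action taken changes. All the names below are hypothetical stand-ins, not a real API:

```python
# Hypothetical sketch of the proposed assistant mode; the function names
# here are placeholders, not a real Lemmy API.
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"  # current behavior: the bot removes comments itself
    ASSISTANT = "assistant"    # proposed: the bot only files reports for human mods

def delete_comment(comment_id: str) -> None:
    print(f"removed {comment_id}")  # stand-in for a real moderation call

def report_comment(comment_id: str, reason: str) -> None:
    print(f"reported {comment_id}: {reason}")  # stand-in for a real report call

def handle_suspect(comment_id: str, mode: Mode) -> None:
    # Detection is unchanged; only the action taken differs by mode.
    if mode is Mode.AUTONOMOUS:
        delete_comment(comment_id)
    else:
        report_comment(comment_id, "flagged by trust model")

handle_suspect("comment-123", Mode.ASSISTANT)
```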

!santabot@slrpnk.net

  • AlDente@sh.itjust.works · 4 points · 30 days ago

    I must say I don’t like the idea of a social-credit-score bot.

    Regarding your implementation, I saw the summary of your own comments elsewhere in this post and I noticed all the annotations were on upvoted/blue segments. Other summaries you posted focused more on negative/red segments. Would it be possible to enforce a minimum of 1 or 2 from both categories?

    Also, would you be kind enough to read my tea leaves? Am I an acceptable citizen of the Lemmy community?

    • Draconic NEO@lemmy.world · 2 points · 29 days ago

      I’m in agreement. The social credit idea really doesn’t bode well. Karma restrictions are one of the bad parts of Reddit, and I for one am glad they’re not a thing here.

    • auk@slrpnk.net (OP) · 1 up, 2 down · 29 days ago

      You’re fine. Why would you not be? You left 15 comments in the last month, and they were all upvoted. The bot doesn’t have much to go on to rank you, but your rank is positive: nowhere near 0, let alone far enough into the negatives to be greylisted.

      99% of Lemmy is made of acceptable citizens. That’s a real number: only 1% of the users it evaluates, themselves only a tiny fraction of the total Lemmy population, ever get blacklisted. You have to be very obnoxious before it starts targeting you. I can understand the worry that this will arbitrarily start attacking people because of some vague AI bot decision, but that’s not what is happening.
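      To make the cutoffs concrete, the logic is roughly this; the numbers here are invented for illustration, not the bot’s real thresholds:

      ```python
      # Rough sketch of how a trust rank maps to a status. The cutoff values
      # are made up for illustration; only the shape of the logic matters.
      def classify(rank: float) -> str:
          if rank < -2.0:
              return "blacklisted"  # the ~1% case: persistently, heavily downvoted
          if rank < -0.5:
              return "greylisted"   # negative enough to watch, not enough to act on
          return "acceptable"       # the ~99% case, including anyone near or above 0

      for rank in (1.3, -0.8, -3.1):
          print(rank, classify(rank))
      ```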

      The visualization of someone’s social credit score just picks the 5 most impactful posts; it doesn’t discriminate between positive and negative. If you want to see what the red corresponds to on my graph, the most negative things I have done within the time window are:

      They both contributed some red to the graph, I think. The red at the far right end is comments within this post that people are taking exception to.
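      For what it’s worth, the selection rule is as simple as it sounds: something like this sketch, which ranks by absolute impact so positive and negative segments compete on equal footing:

      ```python
      # Sketch of the visualization's pick-the-top-5 rule: sort by absolute
      # impact, so strongly upvoted and strongly downvoted items both qualify.
      def most_impactful(posts, n=5):
          """posts: list of (post_id, impact); impact may be positive or negative."""
          return sorted(posts, key=lambda p: abs(p[1]), reverse=True)[:n]

      posts = [("a", 4.2), ("b", -3.9), ("c", 0.3), ("d", -5.1), ("e", 1.0), ("f", 2.2)]
      print(most_impactful(posts))  # d, a, b, f, e: magnitude decides, not sign
      ```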