• givesomefucks@lemmy.world · ↑26/↓2 · 2 months ago

    A lot of the reason not to use AI is that it just makes up random shit…

    But even with that, it’s probably more accurate than what a fucking cop would write.

    What I’m worried about is cops making sure the AI says what they want (lies), and then, when questioned, blaming the AI to escape consequences.

    • catloaf@lemm.ee · ↑15 · edited · 2 months ago
      I’ve signed a lot of forms that say something like “I certify that the information I have provided is true and accurate”. Using ChatGPT doesn’t absolve me of that. It shouldn’t for them either (but we all know they’re held to a different standard).

  • _cnt0@sh.itjust.works · ↑8 · 2 months ago

    What is more racist, though? The average cop or the average LLM? I’d wager it’s the average cop. So it would still be a net benefit.

  • fine_sandy_bottom@lemmy.federate.cc · ↑3/↓2 · 2 months ago

    Devil’s advocate: in the case of a monthly report, an LLM is often used like “take these current statistics and update last month’s report to include them.”

    As in… the LLM isn’t developing an opinion; it’s just presenting the numbers.

    Monthly reporting is usually very formulaic. There’s no scope for “I propose forming a lynch mob comprised of vigilantes”.
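
    To illustrate, the pattern I mean looks roughly like the sketch below. It’s hypothetical, not anyone’s actual tooling: the file name, the numbers, and the model name are all made-up placeholders, and it assumes the OpenAI Python SDK with an API key in the environment.

    ```python
    # Hypothetical sketch: the LLM is only asked to slot the new numbers
    # into last month's template, not to form any opinion of its own.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    last_month = open("report_2024_07.txt").read()  # placeholder file name
    stats = {"traffic_stops": 412, "arrests": 37, "citations": 150}  # placeholder numbers

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Update the report below with the new statistics. "
                        "Change nothing else and add no commentary."},
            {"role": "user",
             "content": f"Report:\n{last_month}\n\nNew statistics:\n{stats}"},
        ],
    )
    print(response.choices[0].message.content)
    ```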

    • gAlienLifeform@lemmy.world (OP) · ↑9/↓1 · 2 months ago

      This isn’t about using LLMs for monthly reports; it’s about using them for individual incident reports:

      Pulling from all the sounds and radio chatter picked up by the microphone attached to Gilbert’s body camera, the AI tool churned out a report in eight seconds. …

      Oklahoma City’s police department is one of a handful to experiment with AI chatbots to produce the first drafts of incident reports. Police officers who’ve tried it are enthused about the time-saving technology, while some prosecutors, police watchdogs and legal scholars have concerns about how it could alter a fundamental document in the criminal justice system that plays a role in who gets prosecuted or imprisoned.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · ↑7 · 2 months ago

      “take these current statistics and update last month’s report to include them.”

      That is literally the worst use case for an LLM. It’s something a simple script could do, but it’s hard, dry data that the LLM is free to hallucinate, and people are too lazy to check it over manually.

      Also, LLMs can’t math.
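
      For contrast, here’s a minimal sketch of the “simple script” alternative: plain templating with no LLM in the loop, so the numbers can’t be hallucinated. Field names and figures are made-up placeholders.

      ```python
      # Deterministic report fill-in: the output contains exactly the
      # numbers we put in, nothing invented.
      from string import Template

      TEMPLATE = Template(
          "Monthly report\n"
          "Traffic stops: $traffic_stops\n"
          "Arrests: $arrests\n"
          "Citations: $citations\n"
      )

      stats = {"traffic_stops": 412, "arrests": 37, "citations": 150}
      print(TEMPLATE.substitute(stats))
      ```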

  • Media Bias Fact Checker@lemmy.world (bot) · ↑3/↓8 · 2 months ago
    TheGrio - News Source Context

    MBFC: Left - Credibility: High - Factual Reporting: Mostly Factual - United States of America

    Internet Archive - News Source Context

    MBFC: Left-Center - Credibility: High - Factual Reporting: Mostly Factual - United States of America

    https://web.archive.org/web/20240828120602/https://thegrio.com/2024/08/27/police-officers-are-starting-to-use-ai-chatbots-to-write-crime-reports-despite-concerns-over-racial-bias-in-ai-technology/
    https://thegrio.com/2024/08/27/police-officers-are-starting-to-use-ai-chatbots-to-write-crime-reports-despite-concerns-over-racial-bias-in-ai-technology/

  • Skullgrid@lemmy.world · ↑3/↓8 · 2 months ago

    The AI doesn’t have a fucking racial bias; humans, and the content they produce that gets fed into the AI, have a racist bias.

    • wjrii@lemmy.world · ↑8 · 2 months ago

      No need to split hairs here. The product that people use and call “AI” is what is relevant.

        • Buffalox@lemmy.world · ↑4 · edited · 2 months ago

          I never saw a knife capable of writing a police report, or of doing anything else we normally attribute only to brains.
          The problem with AI is that it seems to become racist by “choosing” racist stigma over the alternative, despite racist stigma NOT being the majority of the info it is based on.