• ContrarianTrail@lemm.ee

    I disagree. Information can be factual independent of who or what said it. If it’s false, then point to the errors in it, not to the source.

    • Feathercrown@lemmy.world

      You’re correct, but why are you trusting the output by default? Why ask us to debunk output from something that’s well known to be easy to steer toward whatever answer you want, and that doesn’t actually understand what it’s saying?

      • ContrarianTrail@lemm.ee

        But I’m not trusting it by default and I’m not asking you to debunk anything. I’m simply stating that ad hominem is not a valid counter-argument even in the case of LLMs.

        • Feathercrown@lemmy.world

          You’re saying ad hominem isn’t valid as a counterargument, which implies you think there’s an argument here in the first place. But dismissing the LLM’s output isn’t a counterargument at all, because the LLM’s claim is not an argument.

          ETA: And it wouldn’t be ad hominem anyway, since a claim about the reliability of the entity making an argument is directly relevant to what’s being discussed. Ad hominem only applies when the attack on the source is irrelevant to the argument.

          • ContrarianTrail@lemm.ee

            Dismissing something an AI has ‘said’ not because of the content, but because it came from an LLM, is a choice any individual is free to make. However, that doesn’t serve as evidence against the validity of the content itself. To me, all the mental gymnastics about AI outputs being just meaningless nonsense or mere copying of others is a cop-out answer.

            • Feathercrown@lemmy.world

              Ok, but if you aren’t assuming it’s valid, there doesn’t need to be evidence of invalidity. If you’re demanding evidence of invalidity, you’re claiming it’s valid in the first place, which you said you aren’t doing. In short: there is no need to disprove something that was never proved in the first place. It was claimed without any evidence besides the LLM’s output, so it can be dismissed without any evidence. (For the record, I do think Google engages in monopolistic practices; I just disagree that the LLM’s claim to that effect is a valid argument.)

              To me, all the mental gymnastics about AI outputs being just meaningless nonsense or mere copying of others is a cop-out answer.

              How much do you know about how LLMs work? Their outputs aren’t nonsense or direct copies of others; what they do is emulate the patterns of how we speak. That also results in them emulating the arguments we make, the opinions we hold, and so on, because those are part of what we say. But they aren’t reasoning. They don’t know they’re making an argument, and they frequently “make mistakes” in doing so. They will easily say something like… I don’t know, A=B, B=C, and D=E, so A=E, without realizing they’ve missed the critical step of C=D. It’s not a cop-out to say they’re unreliable; it’s reality.
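
              To make that missing-step point concrete, here’s a toy sketch (purely my own illustration in Python, not anything an LLM actually runs): a little checker that only accepts a conclusion if the chain of equalities actually links the two ends. The hypothetical “A=B, B=C, D=E, so A=E” fails it, because C=D was never established.

```python
# Toy illustration only -- not how an LLM works internally.
# An LLM predicts likely-sounding text; it does not run a check like this.
def chain_connects(pairs, start, end):
    """Return True if the claimed equalities transitively link start to end."""
    linked = {start}
    changed = True
    while changed:
        changed = False
        for a, b in pairs:
            if a in linked and b not in linked:
                linked.add(b)
                changed = True
            elif b in linked and a not in linked:
                linked.add(a)
                changed = True
    return end in linked

# The "argument" above: A=B, B=C, D=E, therefore A=E?
print(chain_connects([("A", "B"), ("B", "C"), ("D", "E")], "A", "E"))  # False: C=D is missing
print(chain_connects([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")], "A", "E"))  # True once C=D is added
```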

              • ContrarianTrail@lemm.ee

                I get the concerns about the reliability of LLM-generated content, and it’s true that LLMs can produce errors because they don’t actually understand what they’re saying. But this isn’t an issue unique to LLMs. Humans also make mistakes, are biased, and get things wrong.

                The main point I’m trying to make is that dismissing information just because it came from an LLM is still an ad hominem fallacy. It’s rejecting the content based on the source, not the merits of the argument itself. Whether the information comes from a human or an LLM, it should be judged on its content, not dismissed out of hand. The standard should be the same for any source: evaluate the validity of the information based on evidence and reasoning, not just where it came from.

                • Feathercrown@lemmy.world

                  Ok, I get what you’re saying, but I really don’t know how to say this differently for the third time: that’s not what ad hominem means.

                  • ContrarianTrail@lemm.ee

                    It’s a form of ad hominem fallacy. That’s at least how I see it. I don’t know a better way to describe it. I guess we’ll just have to agree to disagree on that one.

                  • ContrarianTrail@lemm.ee

                    As a side note, I’d like to thank you for the polite, good-faith exchange. If more people adopted your conversational style, I’d definitely enjoy my time here a lot more.