• MartianSands@sh.itjust.works · 39 points · 3 days ago

    If that’s their solution, then they have absolutely no understanding of the systems they’re using.

    ChatGPT isn’t prone to hallucination because it’s ChatGPT; it’s prone to it because it’s an LLM. That’s a fundamental problem common to all LLMs.

    • spechter@lemmy.ml · 24 points · 3 days ago

      Plus I don’t want some random-ass server to crunch through a couple hundred watt-hours when scanning the barcode and checking it against a database would not just suffice, but also be more accurate.
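
      For illustration, here’s roughly what that lookup amounts to: a minimal Python sketch against the public Open Food Facts API. The field names and the example barcode come from their documentation; whether a given product is covered depends on their database.

      ```python
      # Minimal sketch: resolve a scanned barcode to an origin via the
      # public Open Food Facts API -- one small HTTP request, no LLM.
      import requests

      def product_origin(barcode: str) -> str | None:
          url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
          resp = requests.get(url, timeout=10)
          resp.raise_for_status()
          data = resp.json()
          if data.get("status") != 1:  # status 0 means product not found
              return None
          product = data["product"]
          # Prefer the explicit origin field; fall back to countries of sale.
          return product.get("origins") or product.get("countries")

      print(product_origin("737628064502"))  # example barcode from the OFF docs
      ```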

      • jaybone@lemmy.world · 14 points · 3 days ago

        More accurate, efficient, environmentally friendly. Why are we trying to solve all of this with LLMs?

          • AceStructor@feddit.org · 1 point · 2 days ago

            Exactly. Developers can’t just conjure up a complete database of every product in existence and where it comes from, whereas LLMs are already trained on basically all the data available on the Internet, with the additional ability to browse the web if necessary. That makes this a reasonable approach.
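
            A hypothetical sketch of that approach, using the OpenAI Python client purely as a stand-in; the model name and prompt are illustrative assumptions, not any app’s actual implementation:

            ```python
            # Hypothetical sketch of the LLM-based lookup described above.
            # Note the answer comes back as unverified free text, which is
            # the accuracy concern raised upthread.
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            def guess_origin(product_name: str) -> str:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",  # illustrative model choice
                    messages=[
                        {"role": "system",
                         "content": "Name the product's country of origin, "
                                    "or say 'unknown' if you are not sure."},
                        {"role": "user", "content": product_name},
                    ],
                )
                return resp.choices[0].message.content

            print(guess_origin("Coca-Cola 330 ml can"))
            ```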

    • DavidGarcia@feddit.nl · 1 up / 1 down · 3 days ago

      Phi-4 is the only one I’m aware of that was deliberately trained to refuse instead of hallucinating. It’s mind-blowing to me that that isn’t standard; everyone is trying to maximize benchmark scores at all costs.
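
      The benchmark incentive is easy to see with a toy expected-score calculation (numbers made up for illustration): on a plain accuracy benchmark, a wrong answer and a refusal both score 0, so guessing never scores worse than refusing.

      ```python
      # Toy illustration (made-up numbers): why accuracy-only benchmarks
      # reward guessing over refusing. A refusal always scores 0, so a
      # guess is never worse unless wrong answers carry a penalty.
      def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
          """Expected score of guessing when a wrong answer costs
          `wrong_penalty` (0 on a plain accuracy benchmark)."""
          return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

      for p in (0.9, 0.5, 0.1):
          print(f"p(correct)={p}: guessing scores {expected_score(p):+.2f} "
                f"on plain accuracy, {expected_score(p, 1.0):+.2f} with a "
                f"-1 penalty for wrong answers; refusing scores +0.00")
      ```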

      I wonder if diffusion LLMs will hallucinate less, since they inherently have error correction built into their inference process.
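
      A rough sketch of the inference loop I mean; the “model” here is a random stub, where a real masked-diffusion LM would supply actual token probabilities:

      ```python
      # Toy sketch of masked-diffusion LM inference: start fully masked,
      # predict all masked positions, commit only the most confident
      # predictions, then re-mask the rest and predict again. Uncertain
      # guesses get revisited with more context each step, which is the
      # built-in error correction mentioned above.
      import random

      VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
      MASK = "[MASK]"

      def stub_model(tokens):
          """Stand-in for a diffusion LM: proposes (token, confidence)
          for every masked position."""
          return {i: (random.choice(VOCAB), random.random())
                  for i, tok in enumerate(tokens) if tok == MASK}

      def diffusion_decode(length: int = 6, steps: int = 4):
          tokens = [MASK] * length
          for step in range(steps):
              proposals = stub_model(tokens)
              if not proposals:
                  break
              # Commit the top half of proposals by confidence; the rest
              # stay masked and are re-predicted in the next step.
              ranked = sorted(proposals, key=lambda i: proposals[i][1],
                              reverse=True)
              for i in ranked[:max(1, len(ranked) // 2)]:
                  tokens[i] = proposals[i][0]
              print(f"step {step}: {' '.join(tokens)}")
          return tokens

      diffusion_decode()
      ```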

      • MartianSands@sh.itjust.works · 5 up / 1 down · 3 days ago

        Even that won’t be truly effective. It’s all marketing at this point.

        The problem of hallucination really is fundamental to the technology. If there’s a way to prevent it, it won’t be as simple as training the model differently.