  • Fleur_@aussie.zone · ↑3 · 5 hours ago

    In large groups, yes. It’s just a statistics thing. For example, I can’t tell whether any given flipped coin will be heads or tails, but I can tell you that of 100 million flipped coins, about 50 million will be tails.
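    Roughly, in code (a minimal Python sketch; scaled down from 100 million so it runs fast):

    ```python
    import random

    # Flip n fair coins and report the fraction of tails. Any single flip
    # is unpredictable, but the aggregate concentrates tightly around 0.5.
    def tails_fraction(n: int) -> float:
        return sum(random.random() < 0.5 for _ in range(n)) / n

    for n in (10, 1_000, 1_000_000):
        print(n, tails_fraction(n))
    # Typical output: 10 -> 0.3, 1000 -> 0.513, 1000000 -> 0.49988
    # The bigger the group, the tighter the spread around 50%.
    ```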

  • Zos_Kia@lemmynsfw.com · ↑6 · 7 hours ago

    I think what’s important is to understand that these things work because they operate at a certain scale. Algorithms are notoriously bad at predicting individual behaviour, which is why recommendation engines are a specialization that is far from solved. But when you have large amounts of traffic, the law of large numbers lets you predict group behaviour with some accuracy.

    So you can’t follow a user around and predict their next move and show them the right ad at the right time. But you can take 50 000 middle-aged males, and bet that at least 10 of them will buy a motorbike if you randomly show them a picture of a guy riding in the sunset. Once you have a good volume of this kind of data you can do some casino math to tilt all your bets slightly in your favour, and start betting 24/7.
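    The “casino math” is just expected value at scale. A minimal sketch with made-up numbers (the conversion rate, commission, and ad cost below are illustrative, not real figures):

    ```python
    # Hypothetical figures for illustration: 10 buyers per 50,000
    # impressions, $200 commission per sale, $0.002 cost per ad shown.
    conversion_rate = 10 / 50_000
    commission = 200.0
    cost_per_impression = 0.002

    # Expected profit per impression: tiny, but positive.
    ev = conversion_rate * commission - cost_per_impression
    print(f"EV per impression: ${ev:.4f}")  # $0.0380

    # Per user it's nearly always a miss; across millions of impressions
    # the law of large numbers turns the slight tilt into steady profit.
    print(f"Expected profit on 10M ads: ${ev * 10_000_000:,.0f}")  # $380,000
    ```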

    It’s really just cold reading, like they do in those mentalist shows. It’s a lot dumber than it looks, but it’s way more effective than you think.

  • Plesiohedron@lemmy.cafe · ↑10 · 1 day ago

    First: the algorithm predicts that our behavior today will be like our behavior yesterday. Which makes sense.

    Second: what you eat determines how you poop. And they do control what we eat. So that makes sense too.

    So both work together.

  • benni@lemmy.world · ↑10 · 1 day ago

    The success of algorithmic feeds does not imply that humans are predictable in general. It just means that humans are predictable in terms of what content will keep them scrolling/watching/listening a little longer.

  • AbouBenAdhem@lemmy.world · ↑68 · edited · 2 days ago

    Fun fact: LLMs that strictly generate the most predictable output are seen as boring and vacuous by human readers, so programmers add a bit of randomization they call “temperature”.

    It’s that unpredictable element that makes LLMs seem humanlike—not the predictable part that’s just functioning as a carrier signal.
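    For the curious, “temperature” is just a divisor applied to the model’s token scores before sampling. A minimal sketch with made-up scores (not any real model’s API):

    ```python
    import math
    import random

    def sample(logits: dict[str, float], temperature: float) -> str:
        # Scale scores by 1/temperature, then softmax into probabilities.
        # Low temperature sharpens toward the most likely token (boring);
        # high temperature flattens the distribution (surprising).
        scaled = {tok: score / temperature for tok, score in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    # Made-up scores for the word after "The cat sat on the".
    logits = {"mat": 3.0, "sofa": 2.0, "keyboard": 0.5}
    print(sample(logits, 0.2))  # almost always "mat"
    print(sample(logits, 1.5))  # occasionally "keyboard"
    ```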

    • EchoSnail@lemmy.zip · ↑3 · 10 hours ago

      You just ruined the magic of ChatGPT for me lol. Fuck. I knew the illusion would break eventually but damn bro it’s fuckin 6 in the morning.

    • snooggums@lemmy.world · ↑48 ↓1 · 2 days ago

      The unpredictable element is also why they absolutely suck at being the reliable sources of accurate information they’re advertised to be.

      Yeah, humans are wrong a lot of the time, but AI forced into everything should be more reliable than the average human.

      • rhombus@sh.itjust.works · ↑23 · 2 days ago

        That’s not it. Even without any added variability they would still be wrong all the time. The issue is inherent to LLMs; they don’t actually understand your questions or even their own responses. It’s just the most probable jumble of words that would follow the question.
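        A contrived toy version of that idea (word-frequency statistics, vastly simpler than a real LLM, but the “no truth, only probability” property is the same):

        ```python
        from collections import Counter, defaultdict

        # Toy "model": record which word follows which, then always emit
        # the most frequent continuation. Nothing anywhere checks truth.
        corpus = ("the moon is bright . the moon is made of rock . "
                  "the moon is made of cheese . the moon is made of cheese .").split()

        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def continue_from(word: str, steps: int) -> str:
            out = [word]
            for _ in range(steps):
                out.append(follows[out[-1]].most_common(1)[0][0])
            return " ".join(out)

        print(continue_from("moon", 4))
        # -> "moon is made of cheese": the most *probable* continuation
        #    of this corpus, not the most *true* one.
        ```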

        • gandalf_der_12te@discuss.tchncs.de · ↑2 ↓1 · 8 hours ago

          First of all, it doesn’t matter whether you think that AI can replace human workers. It only matters whether companies think that AI can replace human workers.

          Secondly, you’re assuming that humans typically understand the question at stake. You’ve clearly never met, or been, an underpaid, overworked employee who doesn’t give a flying fuck about the daily bullshit.

        • jacksilver@lemmy.world · ↑1 · 1 day ago

          Yeah, they aren’t trained to produce “correct” responses, just reasonable-looking ones; they aren’t truth systems. However, I’m not sure what a truth system would even look like. At a certain point truth/fact becomes subjective, meaning that we probably have a fundamental problem with how we think about and evaluate these systems.

          I mean, it’s the whole reason programming languages were created: natural language is ambiguous.
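          A contrived example of that ambiguity: “double the numbers and add five” has two defensible readings, and code forces you to commit to one:

          ```python
          nums = [1, 2, 3]

          # Reading 1: add five to each doubled number.
          per_element = [2 * n + 5 for n in nums]        # [7, 9, 11]

          # Reading 2: double everything, then add five to the total.
          on_the_total = sum(2 * n for n in nums) + 5    # 17

          # Both are faithful to the English sentence; only the code is unambiguous.
          ```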

          • Yeah, solipsism existing drives the point about truth home. Thing is, LLMs outright lie without knowing they’re lying, because there’s no understanding there. It’s statistics at the token level.

            AI is not my field, so I don’t know, either.

        • snooggums@lemmy.world · ↑2 · 2 days ago

          If there are 800 sentences/whatever chunks of information it uses about what color a ball is, using the average can result in a sentence using red when it should be blue based on the current question, or it could add information about balls of a different type because it doesn’t understand what kind of ball it is talking about. It might be randomness, it might be using an average, or a combination of both.

          Like if asked ‘what color is a basketball’ and the training set includes a lot of custom color combinations from each team, it might return a combination of colors that doesn’t match any team, like brown (default leather) and yellow. This could also be the answer if you asked for an example of a basketball that matched team colors, because it might keep the default color from a ball that just has a team logo.

          If someone doesn’t know the training set, it would probably look like it made something up. Even to someone who does know it, it’s impossible to tell whether the result is random, due to a lack of understanding of what it is talking about, or whether some other less obvious connection combined the two and led to the brown and yellow result.

      • masterspace@lemmy.ca · ↑6 ↓1 · edited · 2 days ago

        I’m not saying I agree with AI being shoehorned into everything (I’m seeing it pushed into places it shouldn’t be, first hand), but strictly speaking, things don’t have to be more reliable if they’re fast enough.

        Quantum computers are inherently noisy, but you can perform the same calculation multiple times and average the result / discard the outliers, and for certain problems it will still be faster than a classical computer.
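        The repetition trick works whenever a single run is right more often than it is wrong. A toy sketch (the 20% error rate and the doubling function are made up for illustration):

        ```python
        import random
        from collections import Counter

        def noisy_double(x: int, error_rate: float = 0.2) -> int:
            # Stand-in for an unreliable computation: usually returns x * 2,
            # but 20% of runs return garbage.
            if random.random() < error_rate:
                return random.randint(0, 1000)
            return 2 * x

        def reliable_double(x: int, runs: int = 15) -> int:
            # Repeat and take the most common answer; with independent errors,
            # the chance the majority is wrong shrinks rapidly as runs grow.
            votes = Counter(noisy_double(x) for _ in range(runs))
            return votes.most_common(1)[0][0]

        print(reliable_double(21))  # 42, with overwhelming probability
        ```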

        Same as back in grade school, when teachers would say not to trust internet sources and to make sure to look everything up in a physical book / encyclopedia because a book is more reliable. Yes, it is, but it also takes me 100x as long to look things up, so starting at Wikipedia is going to get me to the right answer faster the vast majority of the time, even if it’s not 100% accurate or reliable (this was nearer Wikipedia’s original launch).

        • snooggums@lemmy.world · ↑4 · 2 days ago

          Quantum computers are inherently noisy, but you can perform the same calculation multiple times and average the result / discard the outliers, and for certain problems it will still be faster than a classical computer.

          That works for pattern matching, but you don’t want to do that for accurate calculations. There is no reason to average the AI-run calculation of 12345 x 54321 because that can be done with a tiny calculator with a solar cell the size of a pencil eraser. Doing calculations like that multiple times adds up fast and will always be less reliable than just doing it right in the first place. Same with reporting historical facts.

          There is a validation step that AI doesn’t do. If you feed it 1000 posts from unreliable sources like Reddit, and don’t add even more context about whether the ‘fact’ is a joke, a baseless rumor, or from a reliable source, you get the current AI.

          Yes, doing multiple calculations efficiently and taking averages has a lot of uses, mainly in complex systems where this provides opportunities to test chaotic systems with wildly different starting states. There are a ton of great uses for AI!

          But the AI that is being forced down our throats is worse than Wikipedia because it averages content from ALL of Reddit, Facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.

          • masterspace@lemmy.ca · ↑1 ↓3 · edited · 2 days ago

            That works for pattern matching, but you don’t want to do that for accurate calculations. There is no reason to average the AI-run calculation of 12345 x 54321 because that can be done with a tiny calculator with a solar cell the size of a pencil eraser. Doing calculations like that multiple times adds up fast and will always be less reliable than just doing it right in the first place.

            I agree.

            Same with reporting historical facts.

            I disagree. Those are not remotely the same problem. Both in how they’re technically executed, and in what the user expects out of them.

            But the AI that is being forced down our throats is worse than Wikipedia because it averages content from ALL of Reddit, Facebook, and other massive sites where crackpots are given the same weight as informed individuals and there are no guardrails.

            No, it’s just different. Is it wrong sometimes? Yes. But it can also get you the right answer to a normal human question orders of magnitude faster than a series of traditional searches and documentation readings.

            Does that information still need to be vetted afterwards? Yeah, but it’s a lot easier to say “copilot, I’m looking at a crossover circuit and I’ve got one giant wire coil, three white rectangles and a capacitor, what is each of them doing and what kind of meter will I need to test them”, than it is to individually search for each component and then for what type of meter you need to test it. Do you still need to verify that info after? Yeah, but it’s a lot easier to verify once you know what to actually search for.

            Basically any time one human query needs to synthesize information from multiple different sources, an AI search is going to be significantly faster.

        • Appoxo@lemmy.dbzer0.com · ↑2 · 2 days ago

          In later years our teachers just told us not to blindly believe what we read on Wikipedia, but to cross-reference it with other sources like public newspapers or (as you said) books.

  • JASN_DE@feddit.org · ↑29 · 2 days ago

    Humans overall are extremely predictable. Other factors might aggravate this, but even without any tech involved it’s not looking good.

  • zarathustra0@lemmy.world · ↑10 ↓1 · edited · 2 days ago

    LLMs: high-speed stochastic bureaucracy.

    Subtly categorising people into bureaucratically compatible holes since 2021.

  • ArgumentativeMonotheist@lemmy.world · ↑1 ↓1 · edited · 1 day ago

    We are, but only the truly simple-minded can be thoroughly swayed and changed into an antisocial beast of propaganda, tasked with toil and consumption. So there’s no need to vilify “the algorithms” or their results… there’s nothing wrong with YouTube recommending me a Japanese “Careless Whisper” cover from the 80s based on my previous input. 😅

    • gandalf_der_12te@discuss.tchncs.de · ↑3 · edited · 7 hours ago

      Oh, you are so mistaken. Propaganda, which is essentially advertisement for political stances, takes a toll on us all. You just don’t notice it because modern propaganda targets the subconscious more than the conscious, and many people have poorer defenses around their subconscious than around their conscious mind.

      On top of that, you’re vastly underestimating how pliable the human mind is. When presented with one credible idea, something like a viral infection takes place: the idea can grow exponentially, up to a certain size.

      Yet you are right that we must not give up confronting ourselves with these kinds of messages, in order to find truth. Dialogue is the essential foundation of democracy. Only dialogue can reveal the truth.