
General Discussion

In reply to the discussion: Bernie vs. Claude [View all]

HesNotHere

(19 posts)
11. No, it doesn't.
Tue Mar 24, 2026, 08:24 PM

The source code for many of these is public. You can walk through the generation loop and see that LLMs are, strictly speaking, non-deterministic number generators. Example: https://github.com/ggml-org/llama.cpp
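To make the "non-deterministic number generator" point concrete, here is a minimal sketch (hypothetical code, not taken from llama.cpp) of temperature sampling, which is the step where randomness enters: the model's output scores (logits) are turned into a probability distribution, and one token id is drawn at random from it.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Draw one token id from a list of logits, the way LLM samplers do."""
    # Scale logits by temperature, then softmax into probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token id according to the probabilities. This random
    # draw is why two runs with the same prompt can produce different text.
    r = rng.random()
    cum = 0.0
    for token_id, p in enumerate(probs):
        cum += p
        if r < cum:
            return token_id
    return len(probs) - 1
```

Lower temperatures sharpen the distribution (more repeatable output); higher temperatures flatten it (more varied output). Real samplers add refinements like top-k and top-p filtering, but the random draw at the center is the same.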

You can build your own non-deterministic number generator too by following: https://karpathy.ai/zero-to-hero.html

Now, you may argue that there is the appearance of "emergent behavior" because the non-deterministic sequences it produces look very, very similar to the (human-generated) data the weights were trained on. So a human may mistake such output for sensible or even factual content. It may in fact read as factual. But that isn't the intent of an LLM. It has no intent beyond generating a sequence of numbers until a length limit or an end-of-sequence token is hit.
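That stopping condition can be sketched in a few lines. This is a toy illustration, not any real model's code: the model stand-in here just returns a random token id, but the loop structure — keep generating until a length limit or an end-of-sequence token — is the same shape real inference code has.

```python
import random

EOS_ID = 0           # hypothetical end-of-sequence token id
MAX_NEW_TOKENS = 16  # hard limit on how many tokens to generate

def toy_model(context):
    # Stand-in for a real LLM forward pass. A real model would compute
    # a probability distribution over its vocabulary from the context;
    # here we just pick a random token id from a tiny vocabulary.
    return random.randint(0, 9)

def generate(prompt_ids):
    """Generate tokens until the limit is reached or EOS is produced."""
    ids = list(prompt_ids)
    for _ in range(MAX_NEW_TOKENS):
        next_id = toy_model(ids)
        if next_id == EOS_ID:  # the "stop signal" mentioned above
            break
        ids.append(next_id)
    return ids
```

Nothing in the loop checks whether the output is true or sensible; it only checks for the limit and the stop token.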

It is important for people to understand this fundamentally before they post their "conversations" (number-generator sessions) with them, or make wild-eyed claims about what they know or can do. LLMs are a very, very good parlor trick based on math.


Bernie vs. Claude [View all] Quixote1818 Yesterday OP
Don't like a thing about this. Tells him what he wants to hear. An Infamous MAGA get same answers to same questions? IA8IT Yesterday #1
Bernie, like most everyone here, understands the implications and societal costs of AI. Gaugamela Yesterday #3
The Waterboy was released in 1998 IA8IT Yesterday #4
How would 2 people.... RussBLib Yesterday #5
Large Language Models are really non-deterministic (semi-random) number generators HesNotHere 20 hrs ago #8
bernie is asking the right questions rampartd Yesterday #2
Thank you Bernie, for taking on this subject. AI is being pushed down our throats and Marie Marie 22 hrs ago #6
I got more specific answers for some reason, and I think it's interesting. scipan 21 hrs ago #7
LLMs are non-deterministic HesNotHere 20 hrs ago #9
I agree that it doesn't "know" anything, but it does alot more than just predicting the next token. scipan 20 hrs ago #10
No, it doesn't. HesNotHere 18 hrs ago #11
Number generator? Even parallel processors work in binary numbers, don't they? scipan 17 hrs ago #12
I disagree about its intent. scipan 16 hrs ago #13
Training is not programming. HesNotHere 16 hrs ago #14
It's a form of programming. Training is probably a better word. scipan 13 hrs ago #17
Training is not programming HesNotHere 13 hrs ago #20
BTW, if people can come to a real understanding of what is happening under the hood... HesNotHere 16 hrs ago #15
I know, it's scary nt scipan 13 hrs ago #18
Last thing....if you use your cellphone to ask ChatGPT what the solution is Fermi's Paradox... HesNotHere 16 hrs ago #16
Also no warp drive or Crucible scipan 13 hrs ago #19
I play with it and work with it too HesNotHere 13 hrs ago #21
Hell, give it an arm, and a goat, and maybe a few more updates. scipan 13 hrs ago #23
Side note, warp drive misses the point of the sad joke of the universe HesNotHere 13 hrs ago #22
Yeah it's not looking good. nt scipan 13 hrs ago #24