
General Discussion

In reply to the discussion: Bernie vs. Claude [View all]

HesNotHere

(19 posts)
9. LLMs are non-deterministic
Tue Mar 24, 2026, 06:01 PM

They do this by sampling: each next token is picked at random, weighted by probability. For example, consider "The dog is ____".

The model may determine that "running", "barking" and "brown" are the only candidates whose cumulative probability meets a threshold (top_p). It may also be told to consider only the two most likely tokens (top_k), leaving "running" and "barking". Now let's say "running" has probability .75 and "barking" .25: it will randomly pick one of the two, weighted by those probabilities.
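The filtering and weighted draw described above can be sketched in a few lines of Python. The probabilities are the made-up numbers from the example, not real model output:

```python
import random

# Toy next-token distribution for "The dog is ____"
# (illustrative numbers, not from any real model).
probs = {"running": 0.70, "barking": 0.20, "brown": 0.10}

def sample_next_token(probs, top_k=2, top_p=0.95):
    # Keep only the top_k most likely candidates.
    candidates = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Nucleus (top_p) filter: keep the smallest set of candidates
    # whose cumulative probability reaches top_p.
    kept, cum = [], 0.0
    for tok, p in candidates:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    tokens, weights = zip(*kept)
    # The weighted random choice: this is where the randomness comes in.
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it a few times: mostly "running", sometimes "barking",
# never "brown" (it was cut by top_k).
print([sample_next_token(probs) for _ in range(5)])
```

Real inference engines apply these filters to tens of thousands of candidate tokens at once, but the principle is the same: narrow the field, then roll the dice.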

Then it goes again. Maybe it picked "running". Now perhaps the most likely next tokens are ".", "fast" or "away"...

Every time a token is added, the context changes, and the probabilities for the next token shift. The longer the response and the more variability allowed, the more likely two responses are to differ.
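That token-by-token loop can be sketched the same way: a toy autoregressive generator with invented probabilities. Run it twice with the same prompt and the outputs routinely differ:

```python
import random

# Toy "model": for each context word, a made-up distribution over
# next tokens (purely illustrative, not real model output).
model = {
    "is":      {"running": 0.75, "barking": 0.25},
    "running": {".": 0.5, "fast": 0.3, "away": 0.2},
    "barking": {".": 0.6, "loudly": 0.4},
    "fast":    {".": 1.0},
    "away":    {".": 1.0},
    "loudly":  {".": 1.0},
}

def generate(prompt="The dog is"):
    tokens = prompt.split()
    # Keep sampling one token at a time until the model emits a period.
    while tokens[-1] != ".":
        dist = model[tokens[-1]]
        tokens.append(random.choices(list(dist), weights=dist.values())[0])
    return " ".join(tokens)

# Same prompt, two independent runs: because each step is a random
# draw, the two completions can (and often do) come out different.
print(generate())
print(generate())
```

A real model conditions on the entire context, not just the last word, so the space of possible continuations is vastly larger than this toy's handful of sentences.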

IOW, LLMs do not know anything. They predict token sequences based on their training data, and they do it with a random component. That's why you got such different answers. The number of possible responses to a complicated question is effectively infinite.

