Hacker News

> Going from C to assembly is not deterministic in a sense because different compilers can produce different output.

That’s a weird definition of determinism.



Why is this a weird definition of determinism? Could you please define what you mean when you say deterministic?

A C program does not identify a single assembly program. It identifies a set of assembly programs. This fits the pretty standard definition of non-determinism.

A difference between natural language and C code is that natural language does not have a formal semantics. Having no formal semantics is a very different problem from having a semantics that admits a well-defined set of interpretations.


Determinism implies that the same input will result in the same output.

I agree with you that for a single C program there's a set of assembly programs that satisfy it. But by choosing a compiler, an architecture, a set of flags, and so on, you will always get the same assembly code. If you randomize those choices, then you can no longer guarantee a specific result, but you can still guarantee a set of results, which is the definition of non-determinism.
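To make the distinction concrete, here is a toy Python sketch (every program string, compiler name, and assembly snippet below is made up for illustration, not real toolchain output): the C program denotes a *set* of valid assembly outputs, while fixing a configuration deterministically selects one element of that set.

```python
# Toy model: a C program identifies a set of valid assembly programs
# (non-determinism at the language level), but a fixed configuration of
# (compiler, arch, flags) always yields the same single output.
# All entries below are illustrative, not real compiler output.

VALID_OUTPUTS = {
    "int main(){return 0;}": {
        ("gcc", "x86_64", "-O0"): "xor eax, eax\nret",
        ("gcc", "x86_64", "-O2"): "xor eax, eax\nret",
        ("clang", "arm64", "-O2"): "mov w0, #0\nret",
    },
}

def compile_(source, compiler, arch, flags):
    """Deterministic: the same configuration always yields the same assembly."""
    return VALID_OUTPUTS[source][(compiler, arch, flags)]

def denotation(source):
    """The set of assembly programs the C program identifies."""
    return set(VALID_OUTPUTS[source].values())

src = "int main(){return 0;}"
# Fixed configuration: repeatable result.
assert compile_(src, "gcc", "x86_64", "-O0") == compile_(src, "gcc", "x86_64", "-O0")
# But the program as such admits more than one valid output.
assert len(denotation(src)) > 1
```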

Formalism is orthogonal, as it's about having well-defined sets and transformations. LLMs are formal because they're a finite set of weights and tokens, and the operations are well defined. But the prompt -> tokens -> tokens -> code transformation is non-deterministic in most tools (Claude, ChatGPT). And the relation between the input and the output is a mathematical one, not a semantic one.


I see. Then we’re on the same page. My follow up question is: why do we care if the LLM is deterministic?

Hypothetically, if we could guarantee a semantic relationship between the input and output, we wouldn't care whether the LLM was deterministic. For instance, if I give the LLM a Lean theorem and it instantiates a program and a mechanical proof that the program conforms to the theorem, I just don't care about determinism. Edit: this is equivalent to me not caring very much about which particular conformant C compiler I pick.
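A minimal Lean 4 sketch of that workflow, with a made-up spec and program (`double` and `double_spec` are illustrative, not from the thread): if the kernel accepts the proof, the path the model took to produce the code is irrelevant.

```lean
-- Hypothetical example: a spec (the theorem statement), a candidate
-- program, and a machine-checked proof that the program meets the spec.
-- If an LLM emitted `double` and the proof, and Lean accepts them, we get
-- the semantic guarantee regardless of how (non)deterministic the LLM was.
def double (n : Nat) : Nat := n + n

theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```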

And my understanding of LLMs is that they actually are functions and the observed randomness is an artifact of how we use them. If you had the weights and the hardware, you could run the frontier models deterministically. But I don’t think you’d be satisfied even if you could do that. Edit: this is maybe analogous to picking a particular C compiler that does not promise conformance

There are valid concerns with LLMs but I’m not convinced non-determinism is the thing we should care about.


Non-determinism is not what people really care about.

If you remove randomness (temperature) from an LLM, it's going to be deterministic. But the relation between inputs and outputs is inscrutable (too many parameters), and there's no practical way to prove the relation between a certain prompt and the output unless you run it.
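As a sketch of that point (a toy "model" whose per-step logits are just fixed lists, nothing like a real transformer): greedy decoding at temperature 0 is a pure function of the input, while sampling at temperature > 0 depends on the RNG state.

```python
import math
import random

def softmax(logits, temperature):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits_per_step, temperature=0.0, rng=None):
    """Greedy decoding (temperature == 0) is a pure function of the logits;
    sampling (temperature > 0) additionally depends on the RNG state."""
    tokens = []
    for logits in logits_per_step:
        if temperature == 0.0:
            tokens.append(max(range(len(logits)), key=lambda i: logits[i]))
        else:
            probs = softmax(logits, temperature)
            tokens.append(rng.choices(range(len(logits)), weights=probs)[0])
    return tokens

steps = [[0.1, 2.0, 0.3], [1.5, 0.2, 0.9]]
assert decode(steps) == decode(steps)  # greedy: same input, same output
```

Note that even the sampled path is reproducible if you fix the seed (e.g. `decode(steps, temperature=1.0, rng=random.Random(42))`), which is the sense in which the observed randomness is an artifact of how the model is run rather than of the function itself.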

Then you add randomness on top of that and the whole thing is a chaotic mess. Because code is formal, I believe generated code has a high probability of being syntactically correct, and generic patterns can be replicated easily. But the higher-level concerns (the domain) and nebulous qualities like maintainability, security, and so on are harder to replicate. Also, logical correctness is hard to verify when you're unfamiliar with the code.



