Reading this has only further convinced me that Haskell is unsuitable for production due to the difficulties laziness poses to reasoning correctly about the performance characteristics of code. Idris is not only strict by default, but has an even more advanced type system, and because of that it will probably overtake Haskell eventually.
> Reading this has only further convinced me that Haskell is unsuitable for production due to the difficulties laziness poses to reasoning correctly about the performance characteristics of code.
What are those difficulties? Here are a couple of sentences from the start of the article:
> I knew just based on common sense that m only gets evaluated once (when it is first needed, no sooner, no later) but I realised I didn't know the exact mechanics behind that.
> What I'm about to describe is completely useless to learn how to write Haskell, but if you're like me and like poking at things under the hood, by all means join in.
That sounds just like any other language to me:
- We know what the behaviour will be "based on common sense" (i.e. the high-level spec and our experience)
- We don't really care about the exact implementation
- Some novices don't understand the language yet
- Trying to follow a real implementation is complicated by the masses of optimisation it's had, and the heavy dose of hooks it's gained for debugging, instrumentation, portability, etc.
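The "evaluated once" behaviour the article pokes at can be observed directly without reading GHC internals. A minimal sketch using Debug.Trace: the trace message fires only when the thunk is actually forced, and sharing means it fires exactly once even though `m` is used twice:

```haskell
import Debug.Trace (trace)

main :: IO ()
main = do
  let m = trace "m evaluated" (sum [1 .. 1000 :: Int])
  -- At this point m is an unevaluated thunk; nothing has printed.
  print (m + 1)  -- forces m: "m evaluated" appears (on stderr), then 500501
  print (m + 2)  -- the shared result is reused: no second trace, just 500502
```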
> Idris is not only strict by default, but has an even more advanced type system, and because of that it will probably overtake Haskell eventually.
Whilst Idris is pretty cool, I think it's a long way from production use. The language itself is still evolving (eg. type-based laziness annotations, affine types, etc.) and many of those changes lead to large upheavals in the standard library. I've written some Idris, but I wouldn't count on it still working after a version bump.
Haskell seems to be the de facto functional language at the moment. Whilst Idris usage may increase, I doubt it will make up for Haskell's head start. More likely, something better will emerge and eclipse both, although that may take decades.
For what it's worth, IMVU has used Haskell in production for years and it's been great. Lots of people bring up the problem of laziness space leaks, but I think they caused a problem once? maybe twice?
It got to the point where, in the Haskell training classes I gave, I simply stopped bringing up laziness. It just doesn't matter much in practice, especially if you use data structures like Data.HashMap.Strict.
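For instance, here's a sketch of the kind of code that stays well-behaved with the strict variant (assuming the `unordered-containers` package; `wordCounts` is an illustrative name, not a library function):

```haskell
import Data.List (foldl')
import qualified Data.HashMap.Strict as HM

-- Word-frequency count. Because this uses the Strict variant of the map,
-- each (+) is forced when the entry is written, so the map never
-- accumulates chains of unevaluated (+1) thunks the way the Lazy
-- variant can on a long input.
wordCounts :: [String] -> HM.HashMap String Int
wordCounts = foldl' (\m w -> HM.insertWith (+) w 1 m) HM.empty
```

So `wordCounts ["a", "b", "a"]` holds fully evaluated counts, not pending additions.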
Feels like my new role is to help stop the spread of misinformation about Haskell by people who've never actually used it in production.
Did you keep in mind that for most people's code, laziness is actually a performance-improving feature? Most production code doesn't need that kind of reliability and isn't maintained by developers of that calibre. E.g., whether MS Word takes 35 or 33 seconds to load is not critical.
In most software, having no laziness means that many objects are created and many calculations performed whose results will never be used.
So, I bet that laziness is a feature to gain the masses, not to lose them.
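A small sketch of that point: under lazy evaluation, only the results that are actually demanded ever get computed, even from a conceptually infinite pipeline:

```haskell
-- A stand-in for some expensive per-element computation.
expensive :: Int -> Int
expensive x = x * x

-- map over an infinite list, but take 5: laziness means expensive
-- runs only five times. A strict language would need generators,
-- iterators, or explicit thunks to express the same thing.
firstFive :: [Int]
firstFive = take 5 (map expensive [1 ..])

main :: IO ()
main = print firstFive  -- [1,4,9,16,25]
```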
But then again I don't live in circles where Haskell is considered mainstream. ;)
Laziness doesn't take away determinism. I agree you may have some surprises, but they all make sense once you understand them. I've only ever had one really baffling problem, due to unevaluated thunks building up, and it did not take long to find the root of it using the profiler.
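The classic instance of that thunk-buildup problem is a left fold over a large list; a minimal sketch of both the leak and the fix:

```haskell
import Data.List (foldl')

-- foldl (+) 0 [1..n] builds n nested thunks ((((0+1)+2)+3)...) and only
-- collapses them when the final result is demanded; on a large enough
-- list that balloons the heap (this is the space leak the profiler
-- makes obvious).
leaky :: Int -> Int
leaky n = foldl (+) 0 [1 .. n]

-- foldl' forces the accumulator at every step, so it runs in
-- constant space.
strictSum :: Int -> Int
strictSum n = foldl' (+) 0 [1 .. n]
```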
What becomes non-deterministic with async is the order in which operations are executed (depending on how long the previous processes took) possibly creating race conditions that can sometimes become very hard to debug.
However, if you work with immutable objects, as Scala and Haskell encourage you to do, this shouldn't become much of a problem.
I think laziness-by-default is now mostly seen as a sub-optimal choice for general-purpose programming languages, and the complicated cost model is one of the reasons. As far as I'm aware, even SPJ would agree with this (please correct me if I'm wrong). Eager evaluation as the default with lazy evaluation as an option is a good compromise. (Scala offers this.)
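For comparison, Haskell offers the mirror-image compromise: lazy by default, with strictness as an opt-in. A sketch using bang patterns:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Opting into strictness per binding: the bang on acc forces the
-- accumulator at each step, giving the same constant-space behaviour
-- that foldl' provides.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```

(GHC also offers `seq`, and the `StrictData`/`Strict` extensions, for flipping the default per-module.)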
There's a rather extensive series of blog posts by Edward Yang, a GHC contributor, that cutely and obliquely describes the STG machine, which gives the operational semantics for GHC's intermediate code. Highly recommended for anyone who found this interesting!
Also, C-- has diverged from its spec, as the author noted, and as far as I'm aware there aren't actually any updated specs or documentation (perhaps a few papers?). C-- was originally pitched as a purpose-designed intermediate language for compiling functional programs, but it never quite took off as far as I understand. My understanding is that GHC is also moving toward using LLVM instead.
There is a functional language called Mercury. I don't care about the language, but on the website there are incredibly insightful documents, and reduction order is explained in a very clear way: https://mercurylang.org/documentation/papers.html
(The oldest stuff)