I feel this article genuinely represents the gold standard for all such posts in this current genre and zeitgeist. It really does feel like a Spirit Bomb that Goku asked the entire planet to lend their energy to build, a collection of all the hopes and dreams against a mind-rending and incalculable opponent that seeks to eradicate the last remaining few shreds of the craft of software development that feel noble and decent.
I find myself wondering what to do with this article and the incredibly well-condensed collection of sentiments therein. Do I drop a furtive link in that #ai-enthusiasts Slack channel, in full view of everybody, including that CTO of mine who is REALLY excited about this stuff, and isn't quite ready to issue an AI usage mandate? Or do I keep this secreted away, like a bible in my left jacket pocket in case one of "their" sharpshooters attempts to assassinate me with an AI silver bullet, as though I were some wild and reckless heathen that seeks to abandon all good sense in the selfish pursuit of personal pride?
I don't mean to be hyperbolic, but I never envisioned this moment of feeling bizarre, feckless and iconoclastic for wanting to do what feels like the reasonable thing. The circular finances propping this stuff up don't add up. Do I really want to be left dependent on a thing that only really feels like it just flashed into precarious existence, that will almost without a doubt be propped up, classed as "too big to fail" by one particular nation-state, whose actual intent and interests are to subjugate its subjects into a quivering oblivion when it's finally acknowledged the whole thing is in arrears? No, goddammit, I shouldn't have to accept this barrage against and wholesale weakening of my strongest assets: my mind and my discipline. At least that's how I feel about it anyway.
This really resonated with me, thank you for writing it <3
> Companies value velocity and new launches and shipping first at all costs because of course they do; it’s table stakes. Speed of delivery is basically the number one corporate value of every organization whether they admit to it or not.
Yeah this one is again one of the causes of where we are today (alongside profit extraction, or perhaps because of it). It used to be the case that you would find companies that would offer quality at a slightly higher price, and people would be more than willing to pay for it. Now the feeling is that this is all marketing driven and there is no 'higher quality' because everyone gave up and went after speed of delivery. And well, as the old saying goes, that's valuable when you're catching fleas.
I think this essay illustrates pretty well the value of indulging an experience not just for its own sake but to try and truly know it emotionally. And perhaps, given some of the responses, it rightly counterbalances a lack of appreciation and understanding for anyone doing just that.
I do wonder about the prospects of any Etsy-like outcome for largely hand-crafted software though. While you can personally find stylistic expression in the craft, I'm not sure how apparent the nuances of crafting code are to users of the product beyond the requirements of a UX design and vision. It's hard not to imagine generation industrializing a lot of this part of the craft of making software.
For me, the important thing not to lose sight of as we use generation more and more in software is our care for the workpiece. It feels like care and deep understanding are set up to become ever more valuable rarities as we become less and less intimately involved, and we will have to be intentional in order to keep them.
I feel like there are some parallels here to industrial designers and their desire to hold on to obsessing about and understanding the details in the face of using industrialized tooling, while being very much removed from the intimate feeling of crafting every millimetre. Deeply caring is still meaningful and valuable even if it isn't minimally required.
> I do wonder the prospects of any etsy-like outcome for largely hand crafted software though. While you can personally find stylistic expression in the craft i'm not sure how apparent the nuances of crafting code is to users of the product beyond the requirements of a UX design and vision.
The immediate example of something where good code DOES stand out to me is (one of my favorite games of all time) Factorio. There are lots of times over the years of playing where I have been amazed at the ability of the game engine to handle computationally complex operations at a really large scale. Coupled with a bunch of dev blogs explaining the little optimizations, it's given me a ton of respect for Factorio as a piece of software.
That said, I am not sure it strictly invalidates your point. That’s the only example I can come up with, and it requires knowledge of the game’s design via those devblogs which the average user of a messaging app or something won’t have.
I think there’s probably a market for high performance consumer code, but the vast majority of what makes it to end users will just be good enough.
Maybe it’s just me, but I feel that same kind of treachery when somebody tries to pass off a piece of AI-generated work as if it were their own voice.
There's a flaw in the Milli Vanilli argument. The band had no input into their songs. They 'performed' them by lip-syncing on stage, but all of the music and lyrics were someone else's. Milli Vanilli had no part in the creative process.
That's not technically true of AI content. There's some tiny little seed of a creative starting point in the form of a prompt needed for AI. When someone makes something with Claude or Nano Banana it's based on their idea, with their prompt, and their taste selecting whether the output is an acceptable artefact of what they wanted to make. I don't think you can just disregard that. They might not have wielded the IDE or camera or whatever, and you might believe that prompting and selecting which output you like has no value, but you can't claim there's no input or creativity required from the author. There is.
I'd challenge that assertion. LLMs still produce very bad results with greenfield work, so that seed was generated by people who had both creativity and skill a thousand times before. Having a glimmer of an idea that you've probably seen more or less intact somewhere else and getting an AI to take it from that point is much closer to Milli Vanilli than any actual creative work.
I also believe the Milli Vanilli argument to be flawed, but the other way around: music videos were all the rage back then and the two supposed singers were actually just performers for the cameras. Does this mean they had no part in the success of the music? I don't believe that.
That's not to say they were right in misleading the public and their fans, but it seems to me that Milli Vanilli was a fruitful combination of the public-facing performers and the musical process behind them. Everyone is fine with ghostwriters, why is this so different? The entertainment industry is fake through and through, but nobody is actually taking offense from this fact.
I often wonder if a similar project could find success if it were presented differently, as a cooperation of musicians and performers.
Actually, I am not fine with ghostwriters. I am fine with speechwriters, because public speeches are a shared product, and so is music. Performing someone else's music is normal and probably has been done ever since music existed. In that sense I would also not mind a performance of a song if that song was originally created using AI. If the song and the performance are good, it is no different from performing a traditional song. If you don't like AI music, that's fine; I also don't like every traditional song, but that's not the fault of the performer (beyond choosing a song you don't like). The problem with Milli Vanilli is that they violated the expectation that we know who the singers are. Milli Vanilli were dancers, not singers, and if that had been properly communicated, it would have been fine.
> hey chatgpt give me a snarky response to this comment that would wittily refute the argument, make it funny and interesting, concise and to the point
Ah yes, the “tiny little seed” defense — because if I hum three notes and Quincy Jones writes the symphony, clearly we co-composed it.
Sure, prompting involves taste and direction. So does ordering at a restaurant. But if I tell the chef “spicy, but make it fusion” and then Instagram the plate as my culinary creation, I’m not suddenly Gordon Ramsay.
Nobody’s saying there’s zero input. We’re saying input isn’t authorship. A seed isn’t a forest — and picking your favorite output isn’t the same as growing it.
Yeah sorry, I can claim there's none, you can't stop me.
I could claim there's even less creativity than lip syncing, and I will.
And if there was any creativity, the use it is being put to is to do violence to artists. If you think you deserve someone else's work as your own, you better be prepared for the fact that you won't even really understand who you're ripping off, but someone else sure as shit will and they're going to be pissed as hell.
At least Milli Vanilli themselves a) knew what they were doing, b) paid the real singers, I presume, and c) presented real art created by real people.
But still everyone was mad at the lie of it, at being asked to venerate an imposter. And being asked to believe that in the future "impostor" will be the most venerable role. No. Just no.
That reference though! I never thought I'd ever read on HN a blog mentioning Girl you know it's true from Milli Vanilli. Wow the throwback in (exceptionally cheesy) time.
I see a lot of arguing over whether this is "good" or not. This seems like a subjective question. Some people enjoy it, if you don't, it wasn't written for you--don't read it.
Maybe the arguing is really over whether it's higher-status to enjoy longform content, or to criticize it for not being more efficient? By identifying the argument, I've revealed it as silly, and clearly proven myself to be higher status than either side. The arguing may stop now. You're welcome.
I accidentally clicked on the article instead of the comments link for this one, a rare mistake as I usually glance at the comments before deciding to read, but I'm glad I did in this case.
I read it all, and found myself engaged throughout. Not to say that it was all riveting; there were certainly drier spots than others, but it felt 'real'. Maybe they did use AI (I somehow doubt that given the content), but even if they did, they went over everything in a way that retained a voice that felt authentic.
I hate that many of the articles I read now all feel like they have the same half-hearted attempt at grabbing your attention without ever actually clearly saying what they mean.
As for the content, I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it. I do think AI has value to add, but I don't think it's something that should be forced and so this article resonated with me.
It's a long read, and not for everyone, but I recommend it as a way of hearing another human's opinion and deciding for yourself if it has value.
>I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it.
I hear this and FWIW, if there aren't very specific things being asked of you, using AI as a stack overflow replacement as the OP admits to doing is as "AI fluent" as anything else in my book.
"I’m not arguing that this technology should be unilaterally destroyed; I am arguing that we are collectively using it in the dumbest possible way, causing the most self-inflicted injury, and maximizing the amount of angst and suffering we’ll all have to contend with. I am angry at generative AI because it seems to be making us think and act like complete idiots."
> It is a miracle of human ingenuity that we can etch 100 billion transistors onto a piece of rock we dug out of the ground.
I know this is probably a deliberate simplification as part of a rhetorical flourish, but one of my favourite parts about semiconductors is the fact that we don't dig up the rocks, we grow them to order. The thought fills me with childlike wonder...
It’s a very principled and well-reasoned stance. I think it underestimates the relentlessness of progress and capitalism though. Short of those who are independently wealthy and can do artisanal things for the sake of it, I suspect most people shortly won’t have a choice.
> There were entire classes of Hacker News submissions that I refused to read the comments on. Including the comments about this article, should such comments ever materialize.
The author has made the correct call. There's a pretty deep irony that all the top-level comments at the time of this writing are about how the article is too long. It's quite clearly not trying to succinctly convince you of a point, it's meant to be a piece of genuinely human writing, and enjoyed (or not!) on the basis of that.
Author writes an interesting, nuanced, wide-reaching essay about AI and society, with a main theme being about AI and its impact on our humanity.
All the other top-level comments offer AI summaries that miss all of the interesting, nuanced, wide-reaching topics about AI and its impact on our humanity, and complain it was too long to read.
No it's not. Even if the guy has valid points, it's shit writing. The first paragraph of any piece of text should be a summary so that the reader can decide whether to move forward or fuck off. Here the author says "if you follow me like apostles followed Jesus then please stay, if you don't then fuck off, I have zero interest in convincing you to stay." A'ight, I bounce.
Reminds me of literature lessons in high school where the teacher would explain why a given book is exceptionally important while for me it was exceptionally boring, but I had to take part in this theatre where I needed to pretend the book was indeed flawless.
The true gem of irony is that the author could really benefit from an LLM which could review his text before publishing. It's not 1920 where people read everything they have access to multiple times over and over because text on its own is rare. It's 2026 and before I engage with your work, you need to convince me that it's not slop.
I am not here to tell you what to like or not, but doing my English Literature 'O' and 'A' levels was among my favourite parts of all my schooling, and even the books and plays and poems that I forced myself to wade through and hated have informed me for the rest of my life. Poems I hated at 15 I realised I loved deeply 30 or 40 years later.
And I really loved this essay. It's the single best piece of writing on "AI" I have read yet.
Design can go a long way when reading long-form text. If someone here is in contact with the author, please tell them to improve the typography; most notably, smaller and justified text for mobile phones. Other designers could probably weigh in. I’m not an expert, but well-designed text goes a long way towards comfortable reading.
Apart from that, content wise a preliminary abstract is nice to have. I do like how the author provides a table of contents.
> then I become a little pissed off at having my time and attention wasted by somebody who didn’t care enough about what they were doing to actually do that thing.
I remember people saying this about emails vs postal mail.
Either they actually wrote all that on their own, or they had an LLM spew it. Either way, why? They had a valid point; you don't have to use LLMs to write your stuff. Why bury that point in this insane pile of verbiage?
But thanks for saving the rest of us. This is why I read the comments first.
Because it was, even if you disagree with it, beautifully written, emotionally resonant, full of funny jokes and cute stories and metaphors, and states well — and encapsulates — all of the nuances and sub-arguments of its side of the argument?
...because reading and writing well-written prose is meaningful and enjoyable?
It feels like half the people here do not read or write in their free time, which would be understandable if this were not primarily a site for software engineers who write (sorta) as a job
It is funny how that's basically one of the core points the article makes -- and in fact the article paints Hacker News commenters specifically as people who don't see that kind of inherent value in craft and artistry -- but the AI-generated summaries those people are relying on have missed it completely.
I actually disagreed with that particular point made in the article, because I don't really see myself as somebody who sees value in craft and artistry, I just want effective code that works (which imho LLMs cannot create).
But after reading this comment section... I mean if enjoying well written prose counts as enjoying craft and artistry I guess I do then? Damn.
> because reading and writing well-written prose is meaningful and enjoyable?
This is not prose, it is exposition. It is perfectly valid to critique any expository essay, especially one of this length, for its density (or lack thereof) of substantive information.
Sometimes writing can both contain information and be beautiful? This article is charming and thoughtful. Its style may not be for everyone, but for me it really hit, I am thoroughly enjoying reading it. Its style gives me no problem calling it prose.
A person writing an essay on their own site doesn't need to have the information density of a bus timetable.
I somewhat disagree that this is not prose? This didn't seem like a purely expository piece. If it were just a straightforward technical piece, then yeah, it's way too long; it could have been a few sentences.
But this seemed like it bridges the gap between prose and an expository essay; it was doing both.
> prose and an expository essay -it was doing both.
Putting prose in an essay means there are more valid criticisms of a piece of writing, not fewer. If somebody is breakdancing and reciting the periodic table at the same time it’s ok if somebody notices if they skipped the lanthanides and actinides.
I’m a fan of blending the two! It’s just really really hard to do both well at the same time. My most recent example is Malcolm Harris’ history of Palo Alto, it is incredibly well-done.
That’s kind of the point that I was making. When you mash the two together, both lenses are valid critiques.
It’s an exponentially more difficult way to accomplish either goal because one reader will see it and think “this is a sixteen thousand word essay that says very little” and another will see it and think “what a wonderful story” and there’s nobody to adjudicate who is correct.
Like I posted “this is sixteen thousand words about how the author doesn’t really use language models but might one day” and some folks’ rebuttal is that they enjoyed reading it. Those are two completely unrelated things! It’s like if folks saw the cover of The Hobbit and thought “Hell yeah!” and then when they read “there and back again” thought “whoever wrote that was being unnecessarily reductive”
>Either they actually wrote all that on their own, or they had an LLM spew it. Either way, why?
I mostly skimmed it. It’s entirely feasible that the author buried a confession about getting away with manslaughter or whatever that I missed somewhere in a few sentences in the middle of that novella though. It does begin with several paragraphs essentially telling you not to read the post and has a lot of completely unnecessary exposition (for example the section on Luddites)
Edit: I want to point out that I went over the post with my own eyeballs and brain
This was so wordy I had to ask an LLM to tell me what the point is.
So you don't have to:
"you don’t have to embrace a trend, tool, or narrative simply because others say you should — especially if it doesn’t resonate with you or align with your values"
An important new twist to add to the great AI versus NO AI discussion.
>The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush...
>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.
Most people simply do not have the patience to spend 30 minutes reading something anymore. It's why magazines like The New Yorker are on life support. So, yes. "Had to."