
They’ve built processes around it and don’t feel like / can’t afford to / don’t know how to change them.


I guess we’ll see how that shakes out.

Because models are getting much better every couple months, I wonder if getting too attached to a process built around one in particular is a bad idea.


I would agree if Windows 2000 had the exact same APIs as the next version, but it doesn't. LLMs are text in -> text out, and you can drop in a new LLM and replace them without changing anything else. If anything, newer LLMs will just have more capabilities.


> LLMs are text in -> text out, and you can drop in a new LLM and replace them without changing anything else. If anything, newer LLMs will just have more capabilities.

I don't mean to be too pointed here, but it doesn't sound like you have built anything at scale with LLMs. They are absolutely not plug n play from a behavior perspective. Yes, there is API compatibility (text in, text out), but that is not what matters.

Even frontier SOTA models have their own quirks and specialties.
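A toy sketch of this distinction (stand-in functions, no real API or provider is assumed): both "models" share the same text-in -> text-out signature, so swapping one for the other is trivial at the call site, but downstream code tuned to one model's output style silently depends on its quirks.

```python
def model_a(prompt: str) -> str:
    # Older, more verbose stand-in model: wraps answers in prose.
    return "The answer is: 4"

def model_b(prompt: str) -> str:
    # Newer, terser stand-in model: better instruction-following,
    # drops the prose wrapper.
    return "4"

def extract_answer(reply: str) -> str:
    # Pipeline code written against model_a's output format.
    return reply.split("The answer is: ")[1]

prompt = "What is 2+2? Respond with only the number."

extract_answer(model_a(prompt))  # works
# extract_answer(model_b(prompt)) raises IndexError:
# API-compatible, behavior-incompatible.
```

The interface never changed; the behavior did, and that is where the process built around the old model breaks.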


When I've built things with them I've mostly considered the quirks defects.

Kind of like how httpds have quirks, but those aren't really a good thing, and they're still more or less plug and play.


What kind of quirks have you seen that the next model wasn't better at?


A simple example: as models get better at following instructions, the frantic, somewhat insane-sounding exhortations required to get the crappier model to do what you want can cause the stronger model to be a bit too literal and inflexible.



