Dear Google Cloud: Your Deprecation Policy Is Killing You (medium.com/steve.yegge)
662 points by bigiain on Aug 15, 2020 | hide | past | favorite | 403 comments


I think Google couldn't do better, even if they wanted to.

The reason is the engineering culture and the promotion system. People expect to switch to new projects and work towards a promotion within 2-3 years. This shows in all products, and is presumably also the reason for the deprecations. New people come onto a project, want and need to do a major overhaul, and have neither the resources nor the incentives to support the old stuff.

The other half is that most projects are done efficiently by relatively small groups of engineers. I remember that Android Market was still a few people in MTV, with no payment possibilities outside of the US, when Apple already had significant revenue from apps. And that shows everywhere. I'd argue Google is incapable of tackling projects that require more than a few hundred people for more than 2-3 years.

But that does not work for a billion-dollar company. Throwing money at obvious new billion-dollar opportunities, like Microsoft did with gaming and cloud, and following through over 5 years is not possible for Google.


My armchair thought: what if the CEO said "wow, Yegge is right, we've gotten really off trail here. You need to show leadership in maintaining a 3+ year old system at Google for promotion"? Could that happen, and would it work?


The promotion system isn't the issue; the promotion system was the issue 10 years ago. For the last 10 years, Google has been hiring, promoting, and keeping people who prioritize shiny new things over maintaining old things.

If you start rewarding maintaining old things you aren't going to suddenly get what you want, you're just going to lose a tonne of people because they don't want to do what you're telling them and you won't be able to hire good people with your priorities because no one is going to believe they can have a good career at Google by prioritizing stability and long term support. It's all downstream of culture, and culture change is very difficult.

Of course, if you do buy into the idea that this is the issue, then GCP has already lost. It would be years before they could make a dent in that culture, and more years after that for external customers to know and trust Google's new culture. By that time there's going to be a dominant player anyway. So maybe it's actually best to stick with the current culture: try to win on your strengths rather than try to fix your deficiencies.


It might be even deeper than that. I don't know if it's representative, but the few Googlers I talked to were there only for the money. Their view of the company was actually quite negative.

Keeping backward compatibility requires people who care, who have enough pride in their work to counterbalance the grind.


the few Googlers I talked to were there only for the money. Their view of the company was actually quite negative.

This is true for most people working for large multinational corporations. Good things can still be developed without passion.


Seems like this is probably true for the vast majority of people. I don't know why we circle-jerk about passion; it's not very healthy. Sure, I love to write code and most days I don't hate my job, but if I didn't need the money I'd rather write code for myself or open-source projects, or do other things.

There's nothing wrong with that, and when I'm at work, I do my best to do a good job and make reliable, maintainable systems. Those two things aren't in conflict.


Obviously having people that care about the work they do is a positive thing, but there are plenty of people in plenty of industries managing to do a good job despite only being in it for the money.

It might not be a good business decision for Google, but surely if they create enough positions that are solely focussed on the boring stuff but are paid better to make up for it, there is a price point at which they'd get enough interested people.


The fact that most people are there mostly for the money (which I'm pretty sure is the case) actually works in their favor if they wanted to prioritize maintenance more. Currently people don't do it because they're not rewarded for it, not because they don't care. It's actually far easier to steer people towards certain behaviors if you have lots of money and all you need to do is give more or fewer RSUs or bonus $$s.


And that's assuming people cannot find satisfaction in maintaining things. Personally I do find it satisfying to be "polishing" the same software for a long time. Iron out bugs, make performance improvements here and there, update to newer frameworks/APIs. Code is like a living thing and keeping it alive can be very rewarding work.


That would be career suicide: becoming a support engineer like this. 4 years later, when his stock grants dry up, he'd have to switch companies, and he'd need to tell a convincing story about why the new company should pay him the new market rate. And "I supported legacy code" is not such a story. There's always the option to go to Microsoft and support legacy stuff for life, but beware that MS pays peanuts (relatively speaking), and on MS pay you'd be priced out of the housing market.


If you have an understanding of the product, it is not difficult to come up with a convincing story.

MS pay is actually competitive for external hires, although internal raises aren't usually very high.


To show that you deserve whatever pay you need to show impact and complexity. I don't think you would have a problem showing either when talking about maintaining Google scale products...


Now we are talking. The thing is, only a few work on those big projects. The rest are doing god knows what and have to jump ship often. If your resume says you spent 5 years maintaining a smallish no-name project, your career is at risk. I don't believe that Google keeps all of its 100k employees busy with complex and high-impact projects.


Periodically I'm jealous of Googlers who get to try new stuff. Our company started ~1975 and comes with a good bit of crappy legacy code that, as good business- and customer-driven people, we maintain while we deprecate and replace components. At times it's boring, an operational pain in the butt, and frustrating. I'm working on my second major deprecate-and-replace project now. But when I'm done I'm gonna move to green fields. We are sometimes too conservative.


If they're in it for the money then better maintenance and support of projects should be easily solved with a few paychecks.


Maybe... Just maybe they need to let those "shiny things" engineers leave if they don't like keeping things running and continually improving them after v1.0. If you only hire people who get bored with a project after a year or egotists who want to start from a green field on every single project, then this is the effect it will have on your products.

I think that's only part of it though. I think that the only way a company can let products wither for years without a single new feature or improvement is if they don't have a product manager to represent the users, and bring a vision to the engineering team.


> For the last 10 years Google has been hiring and promoting and keeping people who prioritize shiny new things more than maintaining old things.

This is the same reason why it is near impossible to reform a police department. It would take firing everyone and starting over.


> you're just going to lose a tonne of people because they don't want to do what you're telling them and you won't be able to hire good people with your priorities because no one is going to believe they can have a good career at Google by prioritizing stability and long term support

I don't follow. Wouldn't ridding Google of its reputation for inadequate maintenance make it a more prestigious employer? Its reputation for paying well wouldn't change. How would this lead to an exodus?


Not among potential employees I would guess.

Obviously generalizing here, but most engineers (especially fresh graduates) would rather work on a new thing than do maintenance, given the choice. Google offers a pretty good value proposition: lots of money, you get to work on exciting new projects, and if you don't like it you can change to something you like. There's obviously more nuance to this, but that seems like a pretty good deal for many people.


Reputation is a lagging indicator. You have to deal with the fact you're trying to recruit people who care about backwards compatibility into a company with a strong reputation for not caring about that. You're going to have the bad reputation long after the underlying problem is fixed.


Culture changes real quickly with the right leadership at the top. Bezos and his "everything as a service" mandate didn't take long to implement and worked just fine. What you need is empowered leadership and the willingness to take short-term pain for long-term gain, the thing Yahoo couldn't do and ended up dying from.


> The promotion system isn't the issue, the promotion system was the issue 10 years ago. For the last 10 years Google has been hiring and promoting and keeping people who prioritize shiny new things more than maintaining old things.

Isn't that inherent to their interview process?


Yes, but there might be other downsides... Suddenly every developer turns into a 'maintenance man', just refactoring existing code to make it neater without adding much.


The solution to this is more qualitative performance evaluations. People are really good at gaming quantitative systems, so once it becomes a rule that more releases == more promotions, people will prioritize that over actual value creation. Same for if maintenance work becomes the metric.

You need the people evaluating performance to really understand the work, and make informed, discretionary decisions about who to promote to maximize real value creation.


> You need the people evaluating performance to really understand the work, and make informed, discretionary decisions about who to promote to maximize real value creation.

I agree, but for everyone working at or running a company without a money-printing machine, it is worth noting that this is really expensive. You need to take someone who's a good engineer (one with judgment and a modicum of people skills) and give them mostly thankless, stressful work.


Increasing performance and cutting down infrastructure costs?

Looks like someone deserves a bonus


Those are seen as launches at Google and are rewarded.


Direct OKRs for stability. "99.9% of customers unaffected by any change this quarter"


That could be fixed in 1 day if the CEO simply said "hire people to maintain this stuff, and don't require them to get promoted." Google's a mega-corp now, not a startup. And they pay quite well at entry level. Most people don't care about promo unless forced (up or out).


You grossly underestimate how long it takes to change the culture of a company, let alone one the size of Google. A CEO can’t snap his/her fingers and direct a change of this kind into existence.


> Let’s say hypothetically that Apple was dumb enough to pull a Guido van Rossum, and declare that Swift 6.0 is backwards-incompatible with Swift 5.0, much in the way that Python 3 is incompatible with Python 2.

Damn, I was waiting for this. The whole essay was a setup for this paragraph. It reframes the argument using a shared traumatic experience everyone has had. This humanizes the effect: we all know someone that lost someone to the Python 2/3 transition. I can't count the number of Python friends that turned into Gophers, joining some cult of middle-class bearded tech dads with side businesses distilling pear brandy.

Best Yegge Essay Yet! Welcome back Steve.

edit, finished the essay. He smoked GCP in this one, damn damn hard. Hickory.


Apple already pulled the Python 3 stunt twice with Swift. Swift 1 to Swift 2 to Swift 3 were all very much incompatible.

Of course the language wasn’t as broadly used back then and Apple is known for quicker churn than Python so the need to write code compatible across Swift versions was much less pronounced


I hadn't used Swift heavily until recently, but didn't they say from the beginning that there would be lots of breaking changes in the first couple of versions? That doesn't make it less painful, but they were at least up front about it.


They still don't have async-await. Once that arrives, a lot of code gets redesigned again.


Async/await currently looks like it'll be a non-breaking change. Obviously you'll have to redesign a whole bunch of code to actually take advantage of it, but you won't have to adopt it just to upgrade to the newest version of Swift. If you decide it's not worth the effort, you'll have the option to just not redesign anything.


Yes, hopefully not a breaking change, but a lot of APIs and libraries will have to be redesigned completely in order to allow people to take advantage of it. And a lot of old code will look inconsistent for quite a long while.

I'm a bit surprised that language support for such a key concurrency feature comes so late in the game.

It's not the same issue that Yegge describes though.


“So late in the game” is relative, no? Java doesn’t have it, Rust stabilised it last year after ~9 years (or around the same time period if we’re comparing 1.0 releases).

Async/await does not need to be a breaking change - C# managed fine without it being so. The breaking changes in Rust were around the standardisation of std::futures::Future rather than async and await.

That said async/await makes a world of difference to the usability of Rust for async code and I’d imagine it will for Swift too.


The only breaking change was that async became a keyword; the Futures stuff didn't cause any breaking changes. There were three major versions of the library before it moved into the stdlib, though.


Right, by “breaking changes” I mean across the ecosystem rather than in the Stdlib. There’s still a decent amount of code based on futures 0.1 and many libraries that depended on it required rework as a result.


Swift can be very painful from this perspective. I just recently had to do surgery on a project because one of my dependencies changed the tools version.

The back-compat promise is one of the reasons I'm investing in Rust for new projects, even if I find the actual programming much more pleasant in Swift.

But I'm a bit less optimistic about Rust now given the recent layoffs at Mozilla.


Rust is not that dependent on Mozilla


I don’t think so either, but it does breed confidence when there is a large organization backing a project


I don't get the Python 3 backwards-incompatibility hate. Getting rid of cruft is literally the only way to get rid of cruft. It's literally Pythonic to not have cruft.

I dread the day Python becomes as bloated as C++ or Java, neither of which dares to remove things.


Probably less to do with backwards compat than with how it was handled. The first few versions of Python 3 didn't really provide any compelling benefit (indeed, things like performance were worse, not better), and most people didn't _care_ about the cruft it removed. The "print" statement is a great example. Was it kind of weird? Sure. Did that weirdness have any consequences at all? Not really. Was it worth it to force people to replace thousands of lines of code for something that was essentially a minor aesthetic issue, if that? No way.

Also the python 2 string type was more in line with how other unix utilities work, so while I do think the 3 string type is generally better for the web, it wasn't an obvious improvement for the other large usage of python. While it's a very versatile language, I would say the major niches of python are: web dev, shell scripting, and scientific computing. The changes in python 3 were (somewhat) helpful for web dev, but really were inconsequential and/or actively harmful for the other uses.

Mix that with how condescending the Python dev team acted about it, and how it really wasn't until about Python 3.6 that there was any compelling reason to move to Python 3, and you can see where most of the hate comes from.

And by the way, even if Python is very popular today, I do think that rift really hurt the language. I worked at least at one company where Python was considered for a project, but eventually shelved because of management's confusion over the 2/3 issue.


Agree with everything you said here, but let me riff on this

> The "print" statement being a great example. Was it kind of weird? Sure.

I am reminded of the Emerson quip "A foolish consistency is the hobgoblin of little minds". The Py2 print statement is special/weird because it is a very important and special behavior, especially for newcomers. It's not randomly special, it's special because it models something that deserves special treatment.

Py3 is more consistent, but less humane. I use it over Py2 only because the market has moved on, but every time I type print with parentheses I curse Guido.


The funny thing is, if you type print with a space, it recognizes what you meant and still won't do it, raising an exception instead. It would've cost them literally nothing to support both ways of writing print, since the parser has to detect the statement form of print anyway.
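For illustration, you can see this from Python 3 itself: the parser special-cases the old statement form, but only to reject it (the friendlier "missing parentheses" hint was added in later 3.x releases, if I remember right):

```python
# Python 3's parser recognizes the old print statement form,
# but only to reject it with a SyntaxError rather than run it.
try:
    compile('print "hello"', '<example>', 'exec')
except SyntaxError as e:
    print("rejected:", e.msg)
```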


Yes, but it would be a different way to do the same thing. It'd be anti-Pythonic.


> It's not randomly special, it's special because it models something that deserves special treatment.

Can you expand on this, why is this more humane? And why should print be a special function?


Two simple reasons. First, "hello world" requires less explanation

  print "hello world"
vs

  print("hello world")
Second, printing a line to stdout is an extremely common and special activity (especially during debugging). There is almost no programmer-defined function you could write that is as fundamental as "print".

Larry Wall of Perl thought about these things in terms of Huffman Coding, i.e. the most common thing you might want to write should have the syntax elide any easily understood context.

In the case of "print", we know we're not talking about any random function here: it's the function for getting output to the terminal, one of the most fundamental operations in all of programming. So I think it's totally fine for it to have a special representation in the language.


Interesting, thanks.

One nit I would call out is that most people don't put a space between the print and the ("hello world"), so idiomatically, when learning, print is 'just another function'. And since Hello World is usually the first thing people learn, I wouldn't want them to think print is different from any other function.

Also, the first line of The Zen of Python is "Beautiful is better than ugly", so I can understand why consistency was a priority here.

We could get into a whole other discussion about whether or not a programming language should have and adhere to a "culture", but fundamentally I agree with the arguments against Python making this change. It just wasn't worth the effort, and like others have said, they might've lost a lot of momentum to JS.


Most of the hate originates because people question whether it actually was cruft, and whether the new solutions are really better. And even more of the hate criticizes the way the transition was handled. Indeed, that some removed features were later added back speaks to a certain lack of insight when the decisions were made.

That the transition took nearly a decade (and for some is still going) and several minor versions is another hint at serious problems in planning. In the end we got something better, but the road there was just far too painful for what we got. And this fuels the hate to this day.


Hey! Leave Java out of this, Java didn't do anything to you!

Joking mostly aside, Python is way, way closer to C++ in terms of language-feature bloat, whereas Java is notoriously spartan and slow-moving (more so historically).

Of course, I know you're also referring to standard-library gunk, but that's relatively less impactful.

The big issue with the Python 3 shift was that when you touch something as substantial, subtle, and pervasive as strings, you're going to cause problems. I won't speak to whether it was worth it.


Python3's intentions were good, but the execution was abysmal. It was just a bunch of extra work on top of what should have been a pretty straightforward migration.


That's great for new programmers. It's terrible for people in a mature ecosystem of lame duck software. Some production critical python code is 20 years old.


What cruft did py3 remove from py2?


Unintuitive string types, the weird print statement, unintuitive division, to name a few. I'm sure there are more things I can't recall at the moment.
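For the string types, here's a quick Python 3 sketch of the new split between text and bytes (the contrast with Python 2's silent coercion is from memory):

```python
# Python 3 separates text (str) from raw bytes (bytes); mixing
# the two is an explicit error rather than a silent coercion.
s = "héllo"
b = s.encode("utf-8")

assert isinstance(s, str) and isinstance(b, bytes)
assert b.decode("utf-8") == s

# Concatenating the two types now fails loudly:
try:
    s + b
except TypeError:
    print("str + bytes raises TypeError in Python 3")
```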


> unintuitive division

Arguably worth it. Dividing 1 by 2 with "1 / 2" and getting 0 is a real 'wat' for novice programmers. And the workaround is a 'wat' for experts: instead of using a floating point division operator, cast one of your parameters to float.
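To illustrate the change (Python 3 shown; the Python 2 result is from memory):

```python
# Python 3: / is true division, // is floor division.
assert 1 / 2 == 0.5    # Python 2 gave 0 here
assert 1 // 2 == 0     # the old truncating behaviour, now explicit
assert 7 / 2 == 3.5
assert 7 // 2 == 3
assert -7 // 2 == -4   # floor division rounds toward negative infinity
```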


> we all know someone that lost someone to the Python2/3 transition

This feels like a very strange perspective. Why is it a problem to "lose" someone from a language? You're an engineer (a presumption on my side); casting yourself into a subsection of a "Python engineer" seems like a move with all the downsides and no upsides.


You don't lose; you're still an engineer. The platform loses a user. And that's bad for everyone, as explained in the essay.


Apple is AFAICT the least concerned of all the major platform vendors about backwards compatibility. Programs routinely stop working on new releases of macOS or iOS, sometimes in ways that are impossible to fix. On Mac they’ve already changed processor architectures twice and look poised to do so again. That’s much more disruptive than the python 2 to 3 migration.


> That’s much more disruptive than the python 2 to 3 migration.

Yet Python 2 to 3 started in 2008 - 12 years ago, and is still not “done” in any meaningful sense.

If I think back to 12 years after the intel OSX transition - 2018 - I can’t think of a single outstanding issue, whereas I bash into python 2 vs 3 literally every time I touch it (and therefore mostly choose not to).


I am slightly disappointed that folks choose to focus on Swift in this quote. The essay was pointing out how GCP deprecates products and makes users rewrite running applications. This is akin to forcing a Python 2/3 split with nearly every deprecation.


this is a really nitpicky thing to debate, but for me, since this essay is basically a sequel to the infamous Yegge Platforms Rant, the platforms rant is still better, because it is much older and still correct. he could have directly copy-pasted paragraphs into this essay and it would still be relevant


I was surprised he called out the Percona folks by name by giving a link to a quick-start button.


Apple already ~mandated moving apps to Swift. I don't see why they couldn't do it again? Purely because of the existence of Flutter?


Apple has not mandated moving apps to Swift. Obj-c is still fully supported and even continuing to be improved. They have not announced any plans to deprecate obj-c either.


Good luck on deprecating Objective-C and the C++ toolchain.

As an example core of Unity is written with C++ with Objective-C bindings for APIs such as Metal.

If Apple mandated Swift, then things like Unity would just cross-compile into Swift, causing perf regressions for users, all for no benefit to Apple.


I don't think they'd ever deprecate Objective-C when so much of their internal software and the biggest apps are still 100% Objective-C. Objective-C support is a good counter-example to Apple supposedly deprecating things.


Do you have a source?

Flutter and React Native both work on iOS. And obviously Objective C still works


When did Apple “mandate” apps move to Swift? The majority of code out there is still Objective C.

The problem isn’t making an incompatible “v2”. It’s getting rid of “v1”.

If I can’t take advantage of new functionality without moving to a new version that’s a headache. But it doesn’t cause me to throw away all of my existing investment. Especially if you can combine the old and the new.


It's odd to see an article emphasising backwards compatibility without a single mention of Microsoft (I even Ctrl+F'd the page source to check!). They've regressed a bit in recent years, but I'd still consider MS the gold standard for back-compat. I have Win32 binaries I last modified over 15 years ago, small (or perhaps tiny, size measured in KBs) utilities that I use daily. They worked perfectly on Win95, and still do on Win10.

I wonder when/if the software industry will ever stabilise. It's possibly the only industry where frequent and disruptive change continues to happen, and is even welcomed by many in it (not me).


You missed this:

> I’ll also give a shout-out to our friends in the Operating Systems business: Windows, Linux, NOT APPLE FUCK YOU APPLE, FreeBSD, and so on, for doing such a great job of backwards compatibility on their successful platforms.

I share your desire for stability. I don't necessarily mind frequent updates for security reasons, but I'm getting to the point where if things are changing in big ways often, I'm just not going to use it --- I no longer find enjoyment in running on the upgrade treadmill, it's just work.

That said, I don't see a lot of hope for our outlook. Most people don't seem to have a concept of completable software. It would be nice if it were bug free, but getting to the point where the cost to fix bugs compares to the cost of leaving bugs as known issues is doable.


Linux code might be backwards compatible, but nothing like Windows. I can run pbrush.exe taken from Windows 95 on Windows 10, and it works fine. Want to run a 10-year-old binary on Linux? It's easier to set up a VM with a 10-year-old Linux release on it. You've got the source code and you can compile it? Well, tough, the build tools have changed, and some autoconf script has been removed, even from autoconf-archive.


Actually, I've kept an ancient statically compiled Mathematica (linux) binary from 2000 and it actually still runs. I had to make some symlinks since X11 font paths have changed. That's a full GUI app with a sophisticated internal kernel.

This should be generally true of any well-compiled static binary on linux from 20 years ago.


This is a bad take; there are immeasurable numbers of binaries from even Windows 98 or Windows 7 that don't run on Windows 10.


OK, but he talks about how Android OS has had backward compat for over ten years, while Windows has been doing it far longer, with a much bigger OS and a more complex hardware base.


That's because he has much more detailed knowledge of Android's backward compat story than he does of Windows's.


"completable software". that's the ticket. the maturity to say no more features will be accepted; it is done. for the past 20 years we have been teaching the agile way, which points to the constant churn of requirements, the pivots, and the idea that there is always another sprint


The absolute worst in my small experience with it is anything touching the node ecosystem. Try to follow any guide or use any library not updated in the last week and there will be breaking API changes and deprecation warnings everywhere.


I'm running into this problem more and more. It seems like Node is fundamentally unusable for anything but the nimblest teams, ones that don't mind updating their dependencies automatically every week and that, when something breaks, can fix it within hours.

I really like the language side of Node (especially with TS), but Node as an ecosystem feels like a very hard fit for 90%+ of corporations.


I think the culture of Node comes from the earlier culture of in-browser JavaScript. When running on the browser platform you have to make sure your libraries are up to date and maintained, otherwise you're at the mercy of browser updates breaking things from under your feet (this is less bad now, but it was awful in the days of IE<9, and on android before they started shipping auto-updating Chrome webview)

Consequently, JavaScript developers got used to the burden of maintenance and rapidly updating to the latest versions of libraries, and this culture carried through to Node (partly because a lot of libraries are shared between node and the web).

Having said that, if you pick your libraries well, it's not too bad these days. When I upgraded from Node 12 to Node 14 earlier this year, I had to upgrade the `pg` package to a newer version that supported Node 14 (there was one available), but I didn't have to make any code changes. And other than that I've had no forced version upgrades in a long time.

I guess if you're looking at 5-10 year timescales with literally no maintenance, then this would be a different matter, though.


I strongly feel that you must have end-to-end tests when using Node, because of the dependency hell. Not even knowing whether an upgrade of a dependency breaks your system is just hard. Also, testing by hand is just not maintainable.


> Not even knowing if a upgrade of a dependency breaks your system is just hard.

This is true for any language/platform.


There are massive differences in how much of a problem this is across different platforms.

With Clojure, I can think of 2 times ever when a dependency caused an issue. It was extremely obvious since the issue was "won't compile", and the fixes were simple.

With PHP, I expect any change to potentially break something. Bump your AWS SDK which uses a different minor version of guzzle? Fatal error.

There's a world of difference between breakage being an everyday thing and a true rarity.


Strong typing does partially help in this situation though.


Only for the most trivial bugs, though (as non-dependent type systems can't express anything but trivial properties). Still, it's better than nothing.


This is only a problem in some parts of the ecosystem - but they are the parts that most 'tutorials' are written about, because they look shiny and fancy. If I'm blunt, they're the parts of the ecosystem that HN loves to gloat about in their "how I saved my company thousands of dollars by building a todo app with GlamorousLibraryJS" type posts.

The short answer: if you avoid the shiny tools that claim to do everything, you will not have this problem. And as a special case, avoid Gulp, which is mismanaged.

The single-responsibility libraries that have a well-defined scope, on the other hand, have often been sitting on npm unchanged for 5-6 years, because they are simply done, and they do what they need to. They will very likely never deprecate anything, as there is simply nothing to change.

I would argue that these libraries are actually doing a better job of stability than their counterparts in other languages.

Edit: An additional factor is that it's easy to write a tutorial about an extensive framework with a large scope; there's plenty of stuff to write about. Writing a tutorial about "this is how you make a function call to this library to parse a geo URI", on the other hand, probably isn't going to happen.

So if you follow third-party tutorials, you are naturally going to end up at the packages that are most prone to deprecations.


The sad part about the JS ecosystem is that even the biggest corps like Google can't keep their own JS ecosystem stable on the latest version. I have a small Angular app with a Firebase backend, and every time I come back to it to update, the libs are out of sync.


Google is notoriously bad at maintaining (or even designing) their open-source stuff. I wouldn't say "even the biggest corps like Google" -- Google does considerably worse at this than many hobbyist maintainers! I just don't use Google libraries anymore, if I can at all avoid it.

See also my other comment[1] - Angular is an example of such a magical does-everything framework.

[1] https://news.ycombinator.com/item?id=24168279


We've abandoned an app because of the inability to keep Firebase working on it (JS libraries). Between out-of-sync TypeScript definitions, broken APIs, broken interfaces, and "this version of some other dependency works with this version of Firebase thanks to some random transitive dependency", it just became too much. We won't use the JS libraries at all anymore.

Angular is the only one that has remained even remotely usable, but certainly not stable. It's never as simple as running the upgrade utilities, changing the version, and being done. It ALWAYS takes at least a day to find all the "little" things they didn't see fit to mention in the upgrade docs.

It's aggravating. I genuinely don't like using Google software, at all.

I've felt for a long time that Google is coasting on the momentum of the early web and the awe people felt for what they built early on. That hasn't been Google for a long, long time.


Tensorflow is another example of this.


Google was better when they were smaller. Java Guava rules.


Google aren't very good at this, but Facebook are. Maybe give React a try.


Try using four libraries and chances are they require two different versions of a build tool installed globally


Anything Node, JS, or web-development related seems to churn much more than everything else.


I went googling for some Raymond Chen examples of the great and terrible things Windows does in the name of back compat, and came up with this old thread: https://news.ycombinator.com/item?id=13450160


> I'd still consider MS the gold standard for back-compat

No way! IBM mainframe (zSeries) is the gold standard for backward compatibility in IT.


When IBM says they are going to deprecate something on the mainframe, it will actually be deprecated in probably 10-12 years' time. There will be 3 versions in between, and each version will have a much better migration path. Corporations that don't want to take risks will mostly be the last ones to migrate; after confirming and consulting all the IBM tech champions, they start a migration process that lasts a year before they install the new version.


I think NetBSD, or any other BSD generally, is also pretty good with backwards compatibility. I run programs well over 15 yrs old using the latest kernels. If I need to, I can build any of the older kernels, too, because all the code is still readily available (=gold standard). Earliest one predates Win95.

Your Win32 binaries sound better than whatever software is being distributed by the companies today.


Microsoft bends over backwards for app compatibility, but keeps breaking UI compatibility every few releases for no good reason.


There was a short shout out to Windows.


The article talks about Windows, which is produced by Microsoft.


The popularity of Go surprises me when even relatively simple programs weigh in at 10-20 MB or more. It is like Java. Is it even possible to write Go programs measured in KB?


This very long article doesn't describe even a single instance of anything GCP has deprecated.

It's a long rant with a bunch of f-words, and the claim that he gets deprecation e-mails "about once a month".

Can someone who is informed point to actual GCP services that have been, or are being, deprecated?

From the article, I have no idea whatsoever if this is some huge actual problem with core services... or pulling support for ancient versions of Linux... or sunsetting experimental/beta services that never had guarantees to begin with.

I'd find it pretty surprising for a platform as relatively new as GCP to already be deprecating anything meaningful, so I'd really appreciate if anyone has actual facts here.

(Also I know a lot of people who use GCP, and I've never heard anyone complaining about services being deprecated, so to hear someone complaining about it happening monthly is pretty surprising.)


Can’t blame you for missing it given the length of the article, but there’s this paragraph towards the end:

> I know I haven’t gone into a lot of specific details about GCP’s deprecations. I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to… I dunno, everything, has forced me to rewrite it all after at most 2–3 years, and they never automate it for you, and often there is no documented migration path at all. It’s just crickets.


I just created a new Firebase database for a personal project a couple months ago. They recommend using Firestore now, but as far as I can tell there is nothing stopping you from using Firebase, and it still works fine. This seems to be the type of deprecation he is advocating for.

Similarly, there was tooling released to upgrade Cloud SQL versions:

> In 2016, we started offering Second Generation instances to Cloud SQL customers. Second Generation instances offer improved performance, availability, and storage capacity. In October, 2018, we released a tool for upgrading First Generation instances to Second Generation. On January 29, 2019, we deprecated First Generation instances in favor of Second Generation.

I think maybe Steve's past experiences from within Google are causing him to project the same behaviors onto GCP products, when it doesn't seem to be the case. Though, admittedly I did get bit by the Python 2 issue as well.


From what I can remember: Datastore -> Firestore (and Firestore is a PITA because you can only have one per project, even though when interacting with the APIs you pass in a database name. Go figure)

Remote Build Execution - deprecated. Our live systems went down because of it, and they didn't even send an email saying they were killing it. Some person just pressed the kill switch and even support was unaware.

Their gcloud stuff asks for upgrades all the time. Older gcloud sometimes breaks. Sometimes upgrading to newer gcloud breaks existing code that interfaces with it.

It’s not that bad, but yeah, GCP doesn’t give a shit about backwards compat the way the other players do.

You get the feeling they don’t dogfood their own shit. Like, their pricing calculator is unusable on iOS. It baffles me that their support says “there is a feedback button” when there is no feedback button. They seem to only care about Chrome with their cloud.google.com interface.

It really makes me think “what were they thinking?”


FYI, I use Cloud Datastore, and there was a breaking change about 2 years ago going from beta to "release", but that's it. You don't need to upgrade your current Datastore to Firestore; at least I haven't gotten any "it will break" emails about it.

However, I do generally agree with the rant. While Datastore still works, I can't upgrade my Node.js app because the newer Node bindings for GCP break the APIs, and I can't upgrade my app's Node version either, because they decided to ship static WebRTC binaries for the Node GCP packages and those aren't available for Node 12.x (for the old package version I'm using)


Google AppEngine recently stopped support for Go 1.9. My app ran for four years without my attention which was great. But I also didn’t want to give it any more attention. So Google pulled the plug after I didn’t upload a recompiled version. Yes, they gave half a year notice or so, but I really didn’t feel like crawling for the source code and re-compiling it just to satisfy some Google internal deprecation. When the time came, I redirected my users to my new app that is similar (and hosted on a 4€ Hetzner Cloud instance that I fully control).


It's more than just GCP. It seems to be a general Google mindset.

https://killedbygoogle.com/


Python2 on AppEngine is a huge, painful Google GCP deprecation.

I have a friend who built an AppEngine app within the first year of the platform's release, and he's spent the better part of the past 9 months doing almost nothing but managing his company's migration to Python3 AppEngine.

It wasn't just migrating from Python2 to Python3 syntax. Google used it as an opportunity to also turn down tons of APIs and services they no longer felt like maintaining.

One big difference was the Users API. Python2 AppEngine had a pretty simple way to authenticate users that you could set up in an hour.[0] In Python3, they've dumped that entirely and tell you that you have to move to a totally different solution, and they offer no help with that migration.[1]

In Python2 AppEngine, you essentially got a memcache for free. The ndb library[2] allowed applications to read/write to Cloud Datastore, but it also automagically managed the cache for you so that you didn't have to set up a separate cache service. In Python3, they've dumped that and force you to maintain your own cache service and handle all the caching logic yourself.[3]
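The read-through caching that the old ndb library handled transparently is exactly the kind of logic you now have to write yourself. A minimal sketch of what that looks like, with a plain dict standing in for memcache and a hypothetical `fetch_from_datastore` placeholder for the real Datastore read:

```python
# Read-through cache sketch. Old GAE ndb did this for you automatically;
# with Cloud Datastore + python-ndb's defaults you manage it yourself.
# `fetch_from_datastore` and `cache` are illustrative stand-ins, not real APIs.

cache = {}  # in production this would be memcache/Redis, not a dict

def fetch_from_datastore(key):
    # Placeholder for an actual Datastore lookup.
    return {"key": key, "value": "from-datastore"}

def get_entity(key):
    if key in cache:
        return cache[key]          # cache hit: skip the Datastore round trip
    entity = fetch_from_datastore(key)
    cache[key] = entity            # populate cache for subsequent reads
    return entity

def put_entity(key, entity):
    # Write-through: keep the cache consistent with the datastore,
    # then do the actual Datastore write (omitted here).
    cache[key] = entity
```

Trivial on its own, but multiplied across every model in an 8-KLOC app, plus invalidation and TTLs, it's real migration work.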

It looks like the ndb library is pretty complete at this point,[4] but when I was looking at a migration earlier this year, the ndb library wasn't even out of beta. Meanwhile, they were breathing down their customers' necks about moving off the Python2 version.

I had a fairly small (~8 KLOC) Python2 AppEngine app that I wrote in 2018, but when I realized how much of a hassle it would be to migrate to Python3, I just rewrote it using Gridsome and moved off of AppEngine entirely.[5]

[0] https://cloud.google.com/appengine/docs/standard/python/user...

[1] https://cloud.google.com/appengine/docs/standard/python/migr...

[2] https://cloud.google.com/appengine/docs/standard/python/ndb

[3] https://cloud.google.com/appengine/docs/standard/python3/mig...

[4] https://googleapis.dev/python/python-ndb/latest/index.html

[5] https://whatgotdone.com/michael/2020-04-17


But Python 2 on App Engine is not officially deprecated yet (e.g. not listed here: https://cloud.google.com/appengine/docs/deprecations) and they have not sent the "[Action Required]" email (which OP talks about) for it yet, so I'm not sure it counts. They are "just" recommending migration.

I have a ~60 KLOC AppEngine Python 2 app (started in 2011), but I'll probably migrate that over only when the deprecation actually occurs, as:

- that gives the python-ndb library more time to mature (performance and bugs),

- easier migration paths for some of the non-py3 services I use (e.g. Images, Users, Memcache) might appear when deprecation actually occurs, either from Google or 3rd parties,

- I'm in no hurry,

- I will be surprised if Google gives less than 1 year of deprecation notice (Python 2.5 got 4 years from deprecation to termination).

(If the APIs I use were available for py3, I would've probably already migrated.)


We're in that boat as well and are rewriting in Go: https://blog.khanacademy.org/go-services-one-goliath-project...


> [Google] contracted with a company called Bitnami to create a bunch of “click to deploy” solutions, or perhaps I should write “solutions”, because they don’t solve a fucking thing. They’re just there as checkboxes, as marketing filler, and Google never gave a shit whether any of them worked from Day One.

I've had similar experiences with broken Bitnami images that make it seem like they aren't even tested before being released [1][2]. Their WordPress images do work, but the way file permissions are configured doesn't play well with the way they have WordPress set up [3]. Have I just had bad luck, or is this a common experience?

[1] https://community.bitnami.com/t/cant-authenticate-to-fauxton... [2] https://community.bitnami.com/t/couchdb-server-log-gives-wro... [3] https://github.com/WP2Static/wp2static/issues/598


Bitnami broke their MongoDB Helm chart. We're careful to pin versions, and hadn't anticipated someone would re-write history and break published code.

Can't get too upset, we don't pay them anything. Not in a rush to use them again though.


Have you reached out to (community) support? My colleagues there tend to do their best to help the community.

EDIT Disclaimer: I work for Bitnami; not in that team though. I reached out internally to see if this is a known issue, but it's Saturday...

EDIT 2: does the downvote mean bitnami community support is so bad or did my wording offend anybody?


There are lots of random downvotes for no real reason. Sometimes perfectly ordinary posts are dead. I try to upvote gray posts if they're not worthless.

That said, my take (and that of siblings) is that I'm much better off building my own containers than serving as unpaid Q&A for Bitnami. It would be smart to use containers built by people more expert than I am. Containers that are broken by default do not meet that criterion. So, "reach out to community support" is not helpful to people who have already decided to ditch Bitnami.

> We're careful to pin versions, and hadn't anticipated someone would re-write history and break published code.

This is exactly the kind of thing that will make me ditch a company forever and never look back.


It's not clear what happened though.

We use immutable tags like "4.2.8-debian-10-r50" which we never overwrite.

Then we have "semantic versioned" moving tags, like "4.2.8", "4.2", ... which resolve to the latest image matching that prefix.

Moving tags are what they are; I'm personally not a fan, but all major popular images have them (https://hub.docker.com/_/golang, https://hub.docker.com/_/python, ...)

You can read more about Bitnami tagging scheme at: https://docs.bitnami.com/tutorials/understand-rolling-tags-c...

If you want to pin your image you should either use the most specific tag or just use the digest:

    $ crane digest bitnami/mongodb:4.2
    sha256:8650c2d92eea97732eae359a140ee86ee3923a2a19b19443e1dc01ec20d5387d
    $ docker run bitnami/mongodb:4.2@sha256:8650c2d92eea97732eae359a140ee86ee3923a2a19b19443e1dc01ec20d5387d

Now, we might have introduced a regression between some version of a container and the next minor version; shit happens; it's hard to tell without more specific information though.

> > We're careful to pin versions, and hadn't anticipated someone would re-write history and break published code.

> This is exactly the kind of thing that will make me ditch a company forever and never look back.

On the other hand, assuming that instead of a misunderstanding, what you saw was actually the image behind an immutable tag such as 4.2.8-debian-10-r50 being replaced, that is a serious security issue; somebody could have hacked Docker Hub, or crafted a valid certificate for Docker Hub and MitM'd you.

I'd also ditch a company forever and never look back if they honestly don't care about that problem, which I assure you it's not the case.

We'd greatly appreciate reporting such cases to security@bitnami.com / security@vmware.com .


I've used the Bitnami images to set up WordPress a few times and won't again. It takes less time to do it from scratch once you know how, because of the exact issues you mention: you keep getting bitten by details that end up taking longer than just starting from scratch.

In fact at this point it's easier to use AWS Lightsail.


Reminds me of when we were implementing Bitnami's OpenEdX and it was crashing due to a Python file importing newrelic (that product's reporting SDK) when it was not installed. Seems they didn't test that the release ran at all.


Google is an absolutely fantastic company from which to get free stuff. Gmail, Maps, Docs, Youtube? Mmm-mm good!

Buy things from Google, though? You mean, give Google money and in exchange expect them to adhere to some kind of standard of behavior, support, and customer service? Go get a coffee because you're clearly not awake yet.


There are plenty of counterexamples among free stuff: Hangouts, Plus, etc.

https://en.wikipedia.org/wiki/Category:Discontinued_Google_s...


This is the exact opposite of the usual complaint: that if you don't pay for support you can expect Google to arbitrarily shut your account and scoop up all your data - however paying them actually allows you some control.


Many of the frequent horror stories on here about account suspensions and completely unresponsive support are coming from people or companies with paid accounts.

I do believe that there is a level of spending where you can actually get a hold of someone competent who can look into things. But that level appears to be rather high.


About 5 years ago, Hewlett-Packard tried being a paid GSuite customer, paying for tens of thousands of seats, with a path to onboarding the entire company. We could not get support, or even a TAM who would answer their phone.

After a year, HP dumped GSuite, and went to O365 instead. We got a responsive TAM, full multi-tier support, and white glove handholding for migration. And, I think it was cheaper, too.

There is NO level of spending as a customer at which Google actually gives a shit.


> I’m not actively developing on AWS, so I don’t have as much of a sense for how often they sunset APIs that they have previously dangled alluringly before unwitting developers.

SimpleDB still works, it's not even deprecated or grandfathered to not accept new users. There is really nothing else you need to know about how AWS treats deprecation than that single example.

Deprecation is practically unheard of in AWS. If it does happen you can be damn sure it takes years to actually remove obsolete functionality and there is a fairly easy migration path to something of equivalent functionality.


Another example: There was a change to how objects within S3 could be addressed by http(s): https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-...

Initially the (justified) change gave you ~18 months to implement necessary changes. Seems feedback was negative regardless, so they modified it in a way so current usage remains unaffected.
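For context, that announcement was about path-style vs. virtual-hosted-style request addressing: the same object is reachable at two differently shaped URLs, and the path-style form was the one slated for retirement (for new buckets). A quick illustration with a made-up bucket and key:

```python
# The two S3 URL styles the deprecation announcement was about.
# Path-style (slated for retirement for new buckets):
#   https://s3.amazonaws.com/<bucket>/<key>
# Virtual-hosted-style (the going-forward form):
#   https://<bucket>.s3.amazonaws.com/<key>
# "my-bucket" / "photo.jpg" are hypothetical examples.

def path_style(bucket: str, key: str) -> str:
    return f"https://s3.amazonaws.com/{bucket}/{key}"

def virtual_hosted_style(bucket: str, key: str) -> str:
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(path_style("my-bucket", "photo.jpg"))
# https://s3.amazonaws.com/my-bucket/photo.jpg
print(virtual_hosted_style("my-bucket", "photo.jpg"))
# https://my-bucket.s3.amazonaws.com/photo.jpg
```

The virtual-hosted form puts the bucket in the hostname, which is what lets AWS route and scale per-bucket; the path-style form is what tons of old SDKs and hardcoded URLs used, hence the backlash.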


They didn't have a choice; their internal teams weren't ready for S3 SigV4. For example, EMR was not ready for SigV4, which means a ton of customers couldn't have used EMR.


One more example: on Dec 2019, AWS Cloudfront finally decided to drop support for Flash [1]. Yes, Flash. And anyone still using Flash still has until Dec 31, 2020 to migrate.

[1] https://forums.aws.amazon.com/ann.jspa?annID=7356


EC2 vs EC2-classic. I got badly burned (mentally) because I had to learn about creating EC2 instances using the company AWS account created before 2014, but then needed to create instances on a client account created in 2014.

Also: classic vs application load balancers - especially the health checks. That one drove me nuts for weeks!

AWS never seems to "deprecate" anything, but they do enjoy adding services which are very similar to other services but guaranteed to trip people (ie: me) up when it comes to the buried-in-the-documentation details.


the main thing here though is the older things you’re moving away from in those cases are still around. AWS didn’t remove classic load balancers and force you onto application load balancers.


Agreed. But they did deprecate EC2-Classic.

"The EC2-Classic platform was introduced in the original release of Amazon EC2. If you created your AWS account after 2013-12-04, it does not support EC2-Classic."

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-clas...


Yeah but this is the customer-friendly way to deprecate - let existing users keep using the old method indefinitely, new users have to use the new way.


> after 2013-12-04

That was 4 years after VPC was launched. So yes, while they do deprecate stuff on the rarest of occasions, they still give you much, much more time than GCP ever did.

Also, EC2-Classic is still available if you reach out to your TAM with a really good reason.


It is a total hassle to keep up with Googlers changing everything constantly. It's not just GCP it's every platform they control. Try keeping a website on the right side of Chrome policies, a G Suite app up, a Chrome extension running.

Thousands of engineers chasing promotions by dropping support for live code. If it was their code they wouldn't do it. The org is broken. If you want to see what mature software support looks like, check out Microsoft. Win32 binaries I wrote in college still run on Win 10. Google looks unimpressive by comparison. But they all got promoted!


I'm reminded of a blog post I once read, which annoyingly I now cannot find, on how the job of a library maintainer is to not break the API. If you break the API, you've failed at your job. This mindset makes more sense than treating API breakages as a routine case-by-case decision. As the article puts it, breaking changes are breaking the library's commitments. (The library/service distinction doesn't matter much here.)

I remember that same old blog post also lamented overhearing a conversation in which a developer bragged about how their idea for a change was so good that everyone agreed to break the library API to implement it. This reminds me of how a startup being 'highly disruptive' is sometimes fetishised as an end in itself, but of course it's even worse than that: they're celebrating undermining the library's dependability.


sounds kind of like this famous linus rant: https://lkml.org/lkml/2012/12/23/75


Linus is 100% correct here.

Too often it’s “a user error”... no, it’s a platform/developer/provider error, i.e. a lower-level error or a mark of laziness.


Is he always like that? That's toxic af



Complaints about "toxic" are a screen to hide the fact that you can't say "incorrect".


He has toned it down significantly in the last few years.


"Yes."


Agreed, that is probably why you wind up with a lot of versioned Microsoft C++ runtime installers on your computer, but damn it, at least the dependent software works as intended.


A library being dependably bad is not good. Sometimes changes really are good enough to break an api. It does take a significant amount of communication to do this without pissing people off, though.


Microsoft is definitely treating their developers well.

I was exposed to this firsthand yesterday when I cloned the [ShareX](https://github.com/ShareX/ShareX) project to take a look at a problem I was encountering.

Literally all I had to do was install Visual Studio Community, click the “Open in Visual Studio” button on Github, wait a few minutes to install the C# runtime that Visual Studio automatically prompted me to install, and click “Run”.

5 minutes was all it took to be writing code for that project and it makes me want to get into the .Net ecosystem more.


Chiming in: I just did the same with Docu, an HTML documentation generator built from C# XML docs: https://github.com/jagregory/docu

Project code did not change much since 2010. It even has bits of IE 9 support.


Try to keep up with Android development and Play Store policies...

I stopped publishing my own apps on the Play Store, and unpublished all the ones that I had, because of the stress and the time spent just following the damn policies.


As an Android engineer, I'm not sure what you're talking about. There was a push to get people to move to the new permission model, but I believe that's still optional so long as you're willing to suffer with old SDKs


They're making it mandatory to constantly bump your targetSDK.

They're adding new requirements one has to follow in order to get access to certain permissions.

They're updating their content rules which make anything with user content very very difficult and might just kill your app one day.

They're updating graphics and icon requirements rather often.

It's a rather big amount of work if your app isn't super simple and relies on functionality they change.


> Win32 binaries I wrote in college still run on Win 10.

Hell they said they were dropping support for VB6 in Windows 8 if I remember correctly, but I'm pretty damn sure I can still install the IDE, I know VB6 apps aren't broken for darn sure since I've re-downloaded some old ones I used to use.


Not only that, I had to go and update some zombie app I've left running on App Engine that I haven't touched in years, because I got some email saying "we have changed something, and your app is running Python blah blah and that runtime will blah blah soon. Please log into the Cloud Console..."

I guess you can't just set it and forget it on Google Cloud either even though everything was legit fine at time of deployment; I mean come on, it's a simple Python Rest API, what you want me to do 7 years later? xD


Unfortunately I understand that Azure is not exactly what one would call "mature software". I have a colleague who's been using it for a few months, and he complains about it constantly. He's had to submit a couple of pull requests for somewhat silly issues like string escaping not working properly.


I don't have the same issues as your colleague. I've been using Azure for 8 years and rarely run into any issues. Maybe his issue isn't with Azure.


I just started using it, and ARM templates just silently ignore misplaced resources (wrong JSON section). I spent 2 days debugging that.


I have heard similar from more than one person. Maybe it depends on the services you use.


Or, perhaps, the particular SDK you use to access their API?


Same, 8 year user. No real issues to complain about.


used azure to host an aspnet website, azure sql, and azure redis. everything worked pretty nicely, we made money and had a quality product reliably on the web with a small team. this was 2 years ago.


Microsoft under Nadella is infinitely more interesting than Google under its current scourge of bean counters and MBA graduates who seem to be running the place...

It's not entirely comparable, but Google is basically in its "Steve Ballmer" phase.

Hopefully they can eventually get someone with competence and actual domain knowledge, and not some asshole who's been taught they can lead whatever because they've been "educated" to lead.


That's because product management at Google is non-existent. Yeah there's a lot of people called "product manager" but what they do ain't it.


People have to realize that software is not done when written.

Software makes up the fabric between everything in our lives. It has brought us tons of innovation; it is one of the biggest enablers, innovation drivers, and tools we have. It is one of the most complex and cheapest tools we have.

You know what happens with software that stops being worked on?

Very old, inflexible COBOL software on mainframe systems in banking. Guess why your bank is so anti-innovation?

Security holes

'Legacy hell'

We just need to write less software, and better software.


People realize this, especially on HN. The post is referring to deprecations. You can patch security vulnerabilities and move a product forward without deprecations. Google does it well with Go.


Deprecation is too kind of a word. Traditionally deprecated means that it is no longer a best practice/canonical. It's nice to see deprecated flags when building new code because you know you found an out of date bit of documentation and you should look at what may be a better approach.

Deprecated APIs, functions, etc., in many orgs hang around forever. They might not be ideal, but they still work for code written before (or even for code written after, if there's some perverse reason).

Google (outside of Android) doesn't deprecate, they rip it out. It makes things easier for them, as this post details at length, but it makes it harder for the rest of the universe.


Until Go is deprecated in favor of Kotlin (as Kotlin is experimenting with server-side)


It's the same thing.

Versioning and backwards compatibility costs a ton of money and blocks your innovation.

Legacy hell comes often enough also from dependencies you can't get rid of anymore...


> I have been inexorably pushed away from GCP, towards cloud agnosticism.

On a side note I think this is what all of us should be aiming for. Every time someone on a team I'm working for starts trying to use some very complex tool that a SaaS/cloud has built for us I always urged them to look for a simpler solution.

Do you really need AWS Lambda/kinesis/etc or can you write your code and deploy it onto our cluster as a normal service/api/control loop?

If you make it really easy to write code, deploy it, and monitor it, you'll often find it's way less devops time to write your software as apis/control loops than it is to use a large amalgamation of cloud services. I also haven't found a good way to unit test cloud services and their interlinking.


On the contrary, every time someone comes to me with a suggestion to build something in-house or use an open-source solution when an AWS service exists, I say the hell with that. I don't want to maintain it, build features into it, or operate it.

I want full-on vendor lock-in; it saves an immense amount of time.

You just need to pick a vendor that gives a shit about its customers to be locked in to, and that's simply not Google.


It's tough to avoid the time savings that vendor lock-in buys you these days. I have had to grudgingly accept that, despite the fact that I absolutely hate giving in to it. Look at what Apple has done with lock-in (or Google for that matter in other areas of the ecosystem) and you can see the reasons why.

However you can take steps that even when you give in to lock in, you structure your code/projects so that switching to another standard is straightforward: include non-vendor build scripts that would work on any Docker/VPS/baremetal server, isolate the locked-in config and scripts from the rest of the app as much as possible (even if it means running separate external scripts to execute them), etc.

Once you've done it a couple times you build up a suite of small tools and it gets easier.

I got bit hard by Zeit/Vercel, and then by GCP, and then by AWS. All of them do one thing or another to manage to find a way to extract more money. My goal is to make it so that if one day I decide we simply need to leave the platform, we can at a minimum of effort.

Because this actually does happen 1-2 times a year, thanks to exactly the issues raised by OP, it's paid dividends to put in the small time it takes to keep things mostly agnostic on the code side.


Ten years ago it would've been inconceivable to me, but I have to admit that my default assumption is that Google cannot follow through on any of its projects. It is no longer a "live player"[1] and lumbers along purely on its draining intellectual endowment and incumbency.

Network effects and increasing anti-competitive practices will keep Google at the top for years to come, but without a cultural reinvention it cannot possibly regain the prestige it once held as an institution.

[1] https://medium.com/@samo.burja/live-versus-dead-players-2b24...


Can't help wondering if some of the fall-off in Rails usage is due to a similar attitude toward deprecation of formerly supported APIs, which has made upgrading large Rails apps to a new major release a pretty significant project.

Things they've removed have included the entire original ActiveRecord query API and the RJS templating system for JavaScript. Those two required rewrites of significant portions of any app that tried to bridge the transition using only supported APIs. And every new point release (say, 5.1 to 5.2) brings enough minor deprecations to make an upgrade not a task but a project. That's forcing a lot of significant rework, and every one of those gives people a chance to just take their project elsewhere.

(There's actually a third alternative. The release after these two major components were removed from core Rails, they were still semi-supported as add-on libraries. In both cases, core support for those was ultimately cut off -- but people with old apps could try to keep at least portions of them working to minimize the damage. I've done that to help avoid rewrites in components of an older Rails app. I didn't support all of either old API -- for example, rewriting uses of deprecated options to `belongs_to` was easier than keeping them on life support. But on balance, it was easier than the rewrites.)


I have personally used a large number of cloud compute services: AWS, packet, Alibaba, Digital Ocean, nimbix, Azure, Oracle, ... I still use all of these services from time to time. If you carefully look at my list of cloud vendors you realize that there is only one of the big cloud vendors missing: GCP. Why?

When GCP came out I wanted to switch from AWS to GCP because AWS was costing me a lot of money and GCP was marketed as being cheaper. So I went to their website and signed up. Or at least I tried to sign up, because GCP did not care about individuals, you had to be a company! No other cloud vendor I tested had this requirement! So up to this date I have never tried to sign up with GCP again.


Let me guess, you're based somewhere in Europe? I don't remember the specifics, but the "business only in Europe" thing was 100% the fault of the EU's (tax?) laws. Not sure if that ever got resolved, actually.


Yes, I am based in Europe. If this was EU's fault then why didn't any of the other cloud vendors have this requirement?


I guess it must have been resolved, as I'm based in the EU and was signed up for a personal GCP account at one point.


That was silly, because they only asked for a company name; you could've entered "Individual" in the box and they wouldn't have cared!


That's a bit risky. Google is well known to close accounts and have no humans you can contact.


This is one of the many things which prevents me from building on Google.

1) I don't want them closing my account. Doing development is the sort of thing which looks A LOT like suspicious activity, for Google's buggy anomalous behavior detectors.

2) They close accounts on working businesses all the time.


They also asked for the VAT number of the company...


The mishandled Python 3 transition sure created a lot of pain (I experienced some of it, even though I only started with Python in 2016), but I never realized how much it hurt the language’s momentum. This must be one of the reasons why JavaScript is so popular today. JS benefits here from being run in browsers, which the author points out are fantastic at back compat, even as the dev culture surrounding the language doesn’t value it at all.
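For anyone who missed the transition, a minimal sketch of the kind of change that caused the pain: Python 3 split text and bytes into separate types (nothing here is specific to any particular codebase).

```python
# Illustrative sketch of the str/bytes split, one of the transition's
# main breaking changes.
s = "café"             # Python 3 str: a sequence of Unicode code points
b = s.encode("utf-8")  # bytes: encoding must now be explicit

assert b.decode("utf-8") == s
print(len(s), len(b))  # 4 code points, 5 bytes

# Python 2 let str (raw bytes) and unicode mix via implicit ASCII
# coercion, so the same bugs only surfaced at runtime, on non-ASCII data.
```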


I never realized how much it hurt the language’s momentum.

Yes. The combination of a botched transition with a high arrogance level was hell. Here's something I wrote in 2015 after porting a medium-sized system from Python 2 to Python 3.[1] I listed a number of package problems encountered during the conversion. Ones that indicated those libraries weren't being used much in production. I discovered that the SQL connector broke if you loaded a database of tens of thousands of records, for example. And oh, did I get hate mail. You can see the comments below mine.

That's what led me to Go. Go is a mediocre language. That's a strength when you just need to get something done. It comes with a set of libraries which are heavily used internally within Google. So, if you're doing something that is server-side for some web-related task, the libraries for that are probably both present and well-debugged. And they don't seem to be deprecated rapidly. I'm not using them for any Google-specific services, so I can't speak to that.

[1] https://lists.archive.carbon60.com/python/python/1187081/?pa...


Yeah I have the same thoughts about go, which I've been using every day for the last four years. It's super rare for an upgrade to break anything (has happened though). When it comes to running stuff in prod, boring is good.


Also: functional programming feels better in JS than in Python. On the other hand, it's nicer to write for-loops in Python.


I really love the array.map( e => e.doStuff() ) syntax. You can do similar in Java and other JVM languages of course. Not sure whether you can do that in Python.


You can do that in most languages.

But map() (in either language) has the semantics of applying a transformation to each item in a collection to produce a new collection, so you wouldn’t really want to use map() to call something like doStuff(). That's what normal loops and forEach() are for. If you aren’t transforming the collection into a new collection, map() is the wrong choice; you’re telling anybody reading it that you’re doing something that you aren’t.

But assuming doStuff() just has a bad name and is actually transformational, then the most direct translation would be to use map() and a lambda:

    new_array = list(map(lambda e: e.doStuff(), array))
However it’s more idiomatic to use a list comprehension:

    new_array = [e.doStuff() for e in array]
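And for completeness, the non-transformational case described above is just a plain loop in Python. A quick sketch (the `Item` class and both methods are hypothetical, standing in for `doStuff`):

```python
class Item:
    def __init__(self, n):
        self.n = n

    def doStuff(self):   # hypothetical: returns a new value (transformational)
        return self.n * 2

    def logStuff(self):  # hypothetical: pure side effect
        print(self.n)

items = [Item(1), Item(2)]

# Side effects only: a plain loop, not map()
for e in items:
    e.logStuff()

# Transformation: a comprehension (or map) builds a new collection
doubled = [e.doStuff() for e in items]  # [2, 4]
```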


There is map in Python, but nowhere near as nice as in JavaScript IMO.


I've been hiring dev ops engineers lately, and the consensus seems to be in the candidates I've talked to that all of the cloud providers are somewhat painful to work with.

I've worked the most with AWS so I'll use it as an example, but it seems like they sell on "new services", so they are incentivized to release quickly, which leads to services with a lot of holes and poor documentation, making it almost essential to get paid support to work around these issues.

It makes me wonder what it would look like to build a minimal cloud product which focused on a few core services with an emphasis on reliability, performance and developer ergonomics.


It’s a true difference of kind with Google though. Yes they are annoying, but other providers try much harder to have customer service, planned deprecation.

Google flagrantly, with absolutely the “fuuuck you” vibe Yegge writes about, does not care. It fundamentally doesn’t support you. It promises X, delivers incompatible Y, deprecates Y for Z, doesn’t document anything along the way, has no useful support docs, crams upselling ads into all its docs, it’s bonkers.

AWS can be hard to use and confusing. That’s just not the same thing. Poor ergonomics I can forgive, especially if there’s docs and support.


The only option I can see as an alternative "minimal" cloud is one that runs Kubernetes on one of the "mid-tier" hosting platforms.

DigitalOcean, Linode, heck even OVH now have K8s offerings with ready-to-go (and free) control node/panel instances.

Of course K8s isn't shallow but at least when you master their way of doing things you're free to go and do whatever you want.

It's cheaper too.


One small note of caution there is that using managed k8s solutions has some consequences for flexibility of what you can do.

For one example, in most managed k8s setups, you can't change the admission controllers used on a cluster unless the cloud provider exposes this via an interface. Some cloud providers do more of this than others.

So with a minimal k8s managed option you might get stuck if your cloud provider doesn't support an admission controller or other API server option you need.


That sounds like DigitalOcean’s approach.


Idk, DO is adding a lot of managed services lately so it doesn't feel like they're going for the simple, well implemented commodity play for the long run.


Managed services are where the money is. :)


Managed services are where the lock-in is; or perhaps more generously, where the "increased friction to change providers" arises.

If your deployment is nothing more than scp from the post-build products on Github, you can use literally any server anywhere with SSH access. No lock in.

But if your deployment is a button click from inside a DO managed service, well... you're SOL if you want to move. You have to build all the stuff you avoided building before (the time savings and therefore money savings).

I really feel like some combination of the two should be possible. I've not yet seen a host come along that offers it. No large company would because they want reliable revenue generated due to lock in.

I've thought about building it. The older I get the more it interests me. I've got all the basic scripts to make it work. Maybe I'll dive in one of these days..


I have heard that at Google, you get promoted/rewarded for launches. To me, that seems like an obvious perverse incentive to continuously deprecate things, so new things can be "launched" to replace them. There are no "launches" if you're just maintaining backwards compatibility!

Maybe GCP should take a hard look at their internal incentives?


That's one of the points of the essay. Steve also argues that it's not a GCP problem, but a Google-wide problem (excluding Android and Chrome).


Indeed; I suspect it's easier to get promoted for a "successful" deprecation than doing maintenance.


"Fuck yooooouuuuuuuu. Fuck you, fuck you, Fuck You. Drop whatever you are doing because it’s not important. What is important is OUR time."

Most NPM packages.


> “[...] Drop whatever you are doing because it’s not important. What is important is OUR time”

> Most NPM packages

NPM packages tend to be free and open source so morally they don’t owe us anything. So yes what is important is the time of the authors of the packages. I am grateful someone took the trouble to release them and many popular ones are actually well maintained.

But cloud services that you are paying for are a totally different ball game. The company does need to maintain backwards compatibility for a good amount time and they need to value your time.


I recently returned to dev work after years in a niche of systems administration in banking and healthcare. The existing product is built on npm, and after a month in, I'm simply disappointed that such a fragile set of software is this popular.

Almost every day there is an automated PR to bump one of the thousands of dependencies. LTS is ending for Angular 8, but one of the main component dependencies doesn't support 9 yet. I had to put typescript in an unsupported state so one of the dependency upgrades wouldn't break something -- the solution is an ugly hack not supported by anyone. Another required a bump to lodash, which suddenly started failing. This led me to the following comment in a github thread:

> The recursive implementation of cloneDeep was by design, with the shortcoming that in some rare cases it could run into stack-size-exceeded issues. I'm not [sure] how cloneDeep is being used, but chances are a more specialized clone tailored to their scenario will be a better option.

So, I agree with the OP: the messaging for support in the npm world can be distilled to "fuck you".

If I had ever proposed to a customer that we would build infrastructure for them written by thousands of pseudo-anonymous entities who may or may not break everything because they felt like it, I would have correctly been fired.

This leads me to believe that supportability, reliability, and security aren't important to the npm ecosystem because ultimately the products built with npm don't matter. A bank or a utility company may use them to build a customer facing application, but for the stuff that's important, a competent technology manager would never let npm through the door.

(edit: I replaced "tools" with "software" in the first paragraph, because tools don't change interfaces and effects on a whim).
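For what it's worth, the cloneDeep caveat quoted above isn't unique to lodash; any naively recursive deep clone has the same failure mode. A quick illustration using Python's standard-library `copy.deepcopy`, which is also recursive:

```python
import copy

# Build a pathologically nested structure, far deeper than the default
# recursion limit (usually 1000 frames) allows deepcopy to traverse.
deep = [42]
for _ in range(5000):
    deep = [deep]

try:
    copy.deepcopy(deep)
    print("cloned")
except RecursionError:
    # Same failure mode the lodash maintainer describes: recursive
    # clones blow the stack on rare, deeply nested inputs.
    print("stack limit exceeded")
```

An iterative clone with an explicit work stack avoids this, at the cost of more complex code, which is presumably the trade-off the maintainer was weighing.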


I don't care about moral debts or obligations. I care about whether stuff works and I can get shit done.

Other ecosystems (like Python) don't have the same culture as NPM. There are clean, well-documented, maintained packages.

Google doesn't have a moral obligation to support GCE customers either. It's just that everyone I know who has used GCE would never use it in production, and loudly advises everyone they know not to do so either. They'll be bleeding money at some point, not to mention they're already bleeding reputation. Google went from smartest-people-in-the-room to smug-incompetent-arrogant-douchebags. That's hard to get over. That's about good business, not some kind of moral obligation.


Same for Java.


"GNU Emacs, which is a sort of hybrid between Windows Notepad, a monolithic-kernel operating system, and the International Space Station."

The article was worth a read for that quote alone.


I'd be interested in some concrete examples of things GCP have depreciated. I'm an (admittedly fairly light) user of GCP and haven't come across any myself.


Personal experience: I have to support an internal tool for a company in App Engine. The contract is part time and I have to prioritize what issues to fix. The answer is usually "those caused by Google deprecating something". The NoSQL DB some hyped individual chose years ago: "deprecated, we don't bother with solving bugs there". Email service: "it is in python2, but not in python3, seek management approval to use an external service". File storage: "this library is deprecated, use whatever service is fancy now". Task queues: "nope, we are changing the API and calling it Cloud Tasks now, so please refactor all of your files to use the new semantics".

The list goes on and on.


Dataflow SDK libraries have updated their minor version from 2.4 to something like 2.16 in just two years. Guess what, everything below 2.14 is deprecated and not supported.

And yes. Despite these being minor version updates, stuff will break if you go from, say, version 2.9 to version 2.11.


I saw that he mentioned a few of them in the article. He writes:

I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to... I dunno, _everything_, has forced me to rewrite it all after at most 2–3 years.


I work on Firebase. His characterization of Firestore is incorrect and lazy. It did not replace Realtime Database (which is the product he’s calling Firebase) they are two products which live side by side and both are being actively improved.

People assumed that because we made a new database we were killing the older one, but that was never true.


Well, that’s a novel take on the customer is always right


"the customer is always right" doesn't mean every individual customer is always correct and you should never disagree with any of them, it means the customer, in aggregate, is correct and you need to react to customer opinions.

But also, idk what else you'd want them to say; "the thing you're talking shit about isn't deprecated" seems like an important response to "you mishandled this deprecation".


I haven’t been totally happy with Google’s stewardship of Firebase, but that line in the article bugged me too. I didn’t think RTDB was ever deprecated.


I wonder what the complaints are regarding App Engine. I've been using it for over five years without any issue.

Come to think of it, the Channels API shut down was a bit of a nuisance. But it never worked all that well, and deprecation seemed like a reasonable move to me.

Here's a detailed list[1].

[1]: https://cloud.google.com/appengine/docs/deprecations


Thanks for taking the time to list them out, didn't spot those in the article.

Personally, I think most of them seem quite sensible. Having a 'support everything forever' approach is obviously going to impose a huge burden on the teams who maintain this stuff, which is then going to limit the ability to make anything better. The deprecation notices generally seem pretty good (12-15 months' notice by the looks of it, sometimes followed by degraded functionality rather than complete removal of the feature).


that’s the whole point of the article though. Why should the “huge burden on the teams who maintain the stuff” be shifted to the customer who has less resources than Google to manage that burden. 12-15 months is not a long time. If you have to rebuild things that already work every year, you’re wasting a lot of time that could be spent building value for your customers.


> Having a 'support everything forever' approach is obviously going to impose a huge burden on the teams who maintain this stuff which is then going to limit the ability to make anything better

And yet AWS is doing just that while innovating at the same time.


AWS depreciate things too, though. For example https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-...


Deprecate [1] doesn't mean remove though. While the old style paths are officially deprecated, they will be supported indefinitely. GCP's problem is that they deprecate and remove features.

[1] "Depreciate" is something else.


In relation to this, you then have AWS who basically support everything forever until their customers stop using it. They _still_ support people on EC2-Classic even though they would rather not. Every TAM I've come into contact with brags about how they won't deprecate until the customer deprecates.

There's a reason that people trust Amazon with their compute, and that's because in regards to their technology they're trustworthy.


Search each product's docs for the "Deprecate", "Deprecated" or "Deprecations" page (the naming isn't consistent across product lines); they list every piece of functionality they've deprecated, and when. And yeah, they've done a fair bit.

Example: https://cloud.google.com/stackdriver/docs/deprecations


"depreciated"? In what sense do you mean? Taxes?


Google thinks it can operate the same in paid user bases as it does in free user bases.

You aren't a customer, you're a user who paid to bypass a rate limit. There is a difference.


So much ranting, yet not a single example of a GCP product that Google has actually killed. Got examples?

I mean, there's stuff like Python 2 support being sunset on managed services, but as Yegge himself points out that's on Guido, not GCP.


No, the difference between python2 and python3 in the managed services goes much beyond unicode in the Google universe. The most obvious example is that in App Engine standard with Python 2 you can send and receive emails, while you need a third-party service in Python 3. In my other comment I've mentioned some other issues when Google starts neglecting stuff.


Yes; exactly this. I'm contracted to Google, and so much of my Googler colleagues' time has been wasted in the last year porting projects off of the Python 2 APIs which are about to be shut down.

Are they porting to the AppEngine Python 3 APIs? Hell no: those are totally useless (no BigTable, no user authentication, etc.). If you have to rewrite everything to replace all the functionality that used to be "batteries included", you might as well build something that can run on any old bare VM instead, and so—like any sensible external developer—my colleagues are doing everything they can to avoid depending on any Google-specific APIs, libraries or platforms.

I really thought AppEngine and PaaS were the future, but evidently that future will not be written at Google.


> Are they porting to the AppEngine Python 3 APIs? Hell no: those are totally useless (no BigTable, no user authentication, etc.).

That's hilarious(ly bad): BigTable was like the main selling point of OG AppEngine, and they practically forced their user authentication scheme on you too.


I think you mean Datastore, not Bigtable?


The old Firebase API. No longer possible to create new databases that can be used from the old API.

Which in practice meant that "migrating to a new database under a different account" translated into "rewriting half a codebase due to a cascade of breaking changes in the surrounding tooling".


Are you talking about the Firebase RTDB? You can’t create new ones anymore? That’s news to me.


He gave a few in the article..

> > I know I haven’t gone into a lot of specific details about GCP’s deprecations. I can tell you that virtually everything I’ve used, from networking (legacy to VPC) to storage (Cloud SQL v1 to v2) to Firebase (now Firestore with a totally different API) to App Engine (don’t even get me started) to Cloud Endpoints to… I dunno, everything, has forced me to rewrite it all after at most 2–3 years, and they never automate it for you, and often there is no documented migration path at all. It’s just crickets.


I really want to like google cloud. Every time I use it things seem better set up than in AWS. I feel like I can't even bring it up at work though because of the issues with deprecation.


I hear you. At the end of the day though, a little more effort on up-front setup is infinitely better than the time wasted later on when you have to rebuild everything every time Google decides to get rid of another product / service / platform / setting. Burned by that one too many times and moved on.


Scathing. Well written. Extremely hard to argue with.

GCP will fail, I think everyone knows it, and even if they don't, they fear it.


I had Google cloud break twice this week. First their "AI Notebooks" images broke some dependency, then some IAM change broke the cloud console interface.

Though our engineering team is on it, pointless hours of productivity were lost :-(


I think this is the key point of the article:

it actually winds up being less DevOps work, on average, to support open-source systems running on bare VMs, than to try to keep up with Google’s deprecation treadmill.


I honestly feel like this is a better idea in general, at least in data science stuff.

For context, I tend to get dumped with undocumented poorly written untested DS code bases every time I move to a new job.

The difference between setups where there's a bare VM/server and it runs using standard tools (command line scripts) or at least some open-source orchestration and any kind of proprietary technology tied to infrastructure is massive.

Additionally, the server stuff can be maintained and debugged by a far larger number of people, thus reducing bus-factor risk.

Like, maybe in the web dev world there are things that everyone knows that are AWS services (maybe S3 counts, I guess) but in the DS world, I just know that I'm in for a whole lot more pain if they haven't just used bare servers.


this is the classic advantage of Free Software and Open Source over non-free tools. you often get a mix of commercial services and inhouse custom development that is no better documented or supported than FOSS alternatives.


Is it true though?

Keeping your stuff up and running, patched with the latest security updates, takes time. And open-source systems also break compatibility at times. Sometimes the installation/upgrade process changes over time and you need to learn how to do proper upgrades even if the software technically can be made to work with the rest of the system.

It's hard to quantify, but I don't think it's so clear-cut as Yegge makes it look.


well, being forced to adapt your system because a service provider is not allowing you to keep it running can be a significant drain on your resources. especially when you can't control when you do it.

sure FOSS needs to be upgraded too, and sometimes there is change that you need to adapt to, but you are in control of the schedule, and if you are busy you can delay the upgrade work to when you have time for it. a service being shut down won't give you that flexibility


The timeframe is often quite different:

Google gives you 12 months notice before turning off the lights.

A FOSS tool "forces" you to upgrade when the old version you're using has a serious security issue that is not backported. Which means that unless the supported version is fully backward compatible with your software, you have a quite tight schedule to deal with.

Sure, you always have the "option" of running insecure systems. Unfortunately a lot of people choose that "option" and severely underestimate its cost...


yes but,

an upgrade with some incompatible changes is still a world of difference compared to potentially being forced to rewrite a whole bunch of code because the system you used is completely discontinued and the alternative that replaces it isn't even remotely similar


Has AWS ever sunsetted a service?

I know they deprecated SimpleDB but I don't think they ever actually shut it down for existing customers, that might have changed in the last few years though.


As you mention, my understanding is that there’s one service no longer offered to new customers, but they are still supporting it for those that were using it. This is a big difference between AWS’s and Google’s approach to customers. A number of friends’ companies got hit really hard when Google suddenly decided they no longer wanted to do something (regardless of what their customers thought). That list is big and growing: killedbygoogle.com


There are always trade-offs though.

1) If you're busy maintaining backward compat, you're not busy building innovative new things -- Microsoft. 'nuf said.

2) You have a built-in excuse to not make your new stuff equal in functionality or greater to the old stuff, because hey, customers can just use the old stuff right? I built some software that talks to O365 Sharepoint. Ok, I have two choices of API: the older Sharepoint API or the newer Graph API. They recommend Graph, ok. Build the product and put it into production. Get a new customer requirement for fine-grained auth, and then find out that Graph doesn't handle fine-grained auth -- only the Sharepoint API does that. Oops. Ask Microsoft when that capability is coming to Graph? No timeline, because the Sharepoint API isn't deprecated.

3) Keeping old stuff working may be easier, but building new stuff is harder because the landscape is muddier -- how many times have you looked at Microsoft docs and found multiple ways to do things with no idea if the docs you're looking at actually apply to the approach you're using now?

I think there are ways to have (mostly) the best of both worlds. Linux does it by bringing as much stuff in-tree as they can -- they break compat all the time for out-of-tree stuff. Ever try to keep a proprietary VMware module building without errors? In-tree KVM never breaks because they're super careful with the ABI. Some languages have explicit mechanisms by which they attempt to keep things current while making it easier on users -- Kotlin is an example of this [1], but it's a young language so time will tell whether they can actually thread this needle. My experience so far is yes, I think they can -- I've updated code from Kotlin 1.2 to 1.3 to 1.4 with relatively little pain.

[1] https://kotlinlang.org/docs/reference/evolution/kotlin-evolu...


I can hear cries from tenderfoot developers whining about supporting multiple options and backwards compatibility...

"But that's sooooo muuuuch too teeessstt!! Whaaaaaaaaaaaaaaaa!!!!!!!!!!"

But that's your job dude. So what if it costs more money and time. Fucking pay for it anyway.

I feel like startup software dev needs a swift kick in the ass or three with that phrase.... "do it anyway!"

Who fucking cares if these founder assholes only make an 800% return instead of 900%? Do it right.


You didn't read the article, did you? Startup developers don't mind supporting multiple options and backwards compatibility. Often they have to because they don't have the time and the money to rewrite everything, so it's common in startups to run and support multiple versions of APIs, clients etc.

It's Google who doesn't provide backwards compatibility, upgrade paths and migration options.

> But that's your job dude. So what if it costs more money and time. Fucking pay for it anyway.

So why doesn't Google pay for it anyway?

> I feel like startup software dev needs a swift kick in the ass or three with that phrase.... "do it anyway!"

You mean Google devs.


I read the whole article. My reaction is such because I feel that much of software dev is plagued by the same bullshit that the article describes about Google

I've been on the web since the mid 90s. The last 15 years have had more cool things taken away from me because of sunset bullshit than the first half of that time period. Software (even web apps) should be eternal; there should never be "features removed" or "EOL" announcements. If they don't want to run something they should open source it and let the community take over. What the hell is the point of hoarding all this wonderful IP if it serves no one?


And then there is AWS, which has deprecated exactly one service, SDB, but only after they created a much better replacement (DynamoDB) and offered to help migrate all of their large customers, so they could get the usage percentage so low that almost no one was affected by the shutdown.


Disclosure: AWS SDE

The focus on consistent behavior and the once-launched, never-deprecated practice has caused many headaches; I personally know of control planes specifically written to accommodate old behaviors that a few customers relied on years ago, even when the entire service has since been updated. But IMO this is still much better than deprecating production services and behaviors on the fly.
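A toy sketch of the kind of compatibility shim being described (all names are hypothetical, not any real AWS API): the control plane translates a legacy request shape at the boundary, so old callers never see the new behavior.

```python
# Hypothetical sketch: keep an old request field working forever by
# normalizing it at the service boundary instead of breaking callers.
def normalize_request(req: dict) -> dict:
    req = dict(req)  # don't mutate the caller's dict
    # Suppose v1 callers sent "instance_type"; v2 renamed it to
    # "machine_class". Accept both, forever.
    if "instance_type" in req and "machine_class" not in req:
        req["machine_class"] = req.pop("instance_type")
    return req

# A years-old caller keeps working, unmodified:
print(normalize_request({"instance_type": "m1.small"}))
# {'machine_class': 'm1.small'}
```

The cost is exactly what the parent describes: every such translation lives in the control plane indefinitely, but the breakage becomes the provider's problem rather than the customer's.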


But by making it Amazon's problem, the customer never experiences breakage, and that's a huge win.


Which, I would argue, is an important lesson for Google to learn if they ever want to be #2.


This puts into words what I have suspected for a long time. When we were considering which cloud platform to move onto, we considered Google Cloud for about five seconds before someone said “How long until they kill it?”. And all of us knew exactly what they meant. It feels like Google’s main mode of operation is killing projects. We ended up going with AWS.


Besides AWS and Azure what's the third major cloud provider he's referring to? (I can think of others, just haven't really thought any were close to Amazon/Microsoft/Google.)


Was wondering the same thing. Alibaba? IBM?


Last I checked Alibaba was 3rd in the international market and Oracle cloud was growing. Steve might be referring to Oracle getting 3rd place.


“[...] it’s well-known that they refuse to host (as a managed service) any third-party software until after AWS has already done the same thing and built a successful business around it, at which point their customers hold them at gunpoint. But that’s the bar, to get Google to support something.”

This isn’t true, of course. They supported managed Kafka before AWS did: https://cloud.google.com/confluent


GCP doesn’t have a first-party Kafka offering. Confluent Cloud is still third-party, just with some nice integrations with GCP.

If you have an issue, it’s not GCP support’s or GCP engineering’s job to fix the cluster. There are real service-uptime implications of this.


People are confusing new versions of things, new APIs, new products, or things that are merely no longer recommended, with actually removing products. From this thread, people are complaining about:

- Datastore -> Firestore in Datastore Mode: This was a seamless transition with no API change, Google changed the underlying tech.

- Firebase -> Firestore: Never happened. They are different products. The Firebase Realtime Database still exists.

- gcloud CLI updates: It is a CLI tool, it gets new versions.

- App Engine with Python 2.7: It still exists, you can still use it. There is a new App Engine v2 that supports Python 3.

- Cloud SQL v1 to v2: They had a tool to automatically upgrade, it set up replication and switched over.

AWS has new versions too. RDS Aurora v1 (MySQL 5.6) can't be upgraded in-place to RDS Aurora v2 (MySQL 5.7)[1]. AWS Lambda runtimes get deprecated[2].

[1] https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...

[2] https://docs.aws.amazon.com/lambda/latest/dg/runtime-support...


I had a few laughs with this, but (maybe because I have been using Python since 1.2) I just can't relate to the Python bits.

I migrated all my "working set" stuff to Python 3 piecemeal over a couple of weeks and never looked back (except when I needed to pick up some older project and take an hour or so to refactor it).

The "There Will Come Soft Rains" reference, however, was brilliant. I hadn't thought of that story for _decades_ and it hit home.


> I just can't relate to the Python bits.

I work in visual effects/animation, and 2020 is literally the first year anyone is even trying to use Python 3, and Python 2.x is still supported everywhere. Prior to 2020, Python 3 couldn't be used at all (and I mean that literally: could not be used outside of isolated toy examples on alpha software that wasn't actually running in production).

We (as an industry) have written thousands upon thousands of business-critical scripts and libraries in Python 2.x that work across commercial tools, so literally every tool has to be updated to Python 3 at the same time, in tandem. It's so bad that the VFX Platform[0] project was created in 2014 specifically to address how badly Python 3 had fucked everyone in my industry.

It's all make-work too: literally none of us actually need Python 3's new features.

JavaScript's "use strict" is the way to do backwards compatibility if you really, really, really need to change the semantics and keep existing code working as-is.

[0] https://vfxplatform.com/


>Businesses are taking a mercenary’s accounting of their dual mobile teams (iOS and Android), and starting to realize that those phony-sounding dog-and-pony-show cross-platform development systems like Flutter and React Native have real teeth, and using them could cut their mobile team sizes in half, or alternately, make them twice as productive.

Is there much Flutter and React Native dev going on at big companies outside of Google and Facebook?

I know plenty of smaller companies are using them because they don't have what sometimes feels like near-infinite money to throw at mobile engineers, but I can't think of a single Big Name On Blind with iOS or Android position listings that mention Flutter or RN. They're all native.

Even at Google and Facebook, usage of either of their respective frameworks seems to be an exception and not the norm, and they're still just components in a larger native project. I wonder if the goal is to eventually re-write everything? But that seems impossible given the size of these applications' source.


I do hope the intermediate solution he mentions, tooling that upgrades your code for you, starts to be used more. Go did this in the early days before 1.0 (with gofix), but it is still fairly rare outside Google's internal tools. It is obviously a big chunk of work, but it saves a huge amount of downstream work and lets everyone move forward.


Right? This is a strategy/trade-off that depends on migration tooling that exists and is user friendly.


While I agree that GCP's deprecation policy is painful, I don't think he's painting a compelling picture that the deprecation policy, specifically, is the reason GCP isn't ranked higher.

Taking two of the examples of "bad platform moves": Python 3 and Apple.

Where I work, we are simultaneously dealing with Python 3 and GCP deprecations[1], and it sucks. But Python is more popular than ever![2] Yes, some people (including us, at least for our website code) left Python, but lots and lots of people have moved to Python 3.

I've seen lots of people say "oh, the move from 2 to 3 wasn't that bad because of the tooling." That's probably true, and we probably wouldn't be moving to Go were it not for the fact that we had API changes _and_ Python's changes to deal with (and Go gives us other benefits).

And then there's Apple. Apple has a long history of maintaining compatibility for a time, and then getting rid of the bits that are holding them back. There are tradeoffs.

Maybe Android doesn't break things for developers, but Apple makes it so that iPhone users can keep updating their phones to the latest OS (and apps) for much longer than the typical Android phone. Because of deprecations and a general desire to make apps that fit the platform, developers of popular apps on iOS put the time into updating them, and these apps are really nice to use as a result.

I think fit and finish on non-first-party native Mac apps tends to be better than those on average on Windows, though that's just my opinion.

Carrying backwards compatibility comes with tradeoffs, and it's possible to be successful by making different tradeoffs than maximizing backward compatibility.

[1]: https://blog.khanacademy.org/go-services-one-goliath-project... [2]: https://redmonk.com/sogrady/2020/07/27/language-rankings-6-2...


One thing that I don't understand about all this deprecation talk, as pointed out here:

> So when it comes to shouldering the burden of compatibility, you need to pay for it. Not us.

Surely if there are shoppers/customers affected by a deprecated service on GCP (or any cloud provider), that indicates a paying customer behind it. That payment should at least cover the costs of running the service (infra, development, software, and maintenance). In that case, why deprecate at all? Version the API and continue running it as long as it makes economic sense.


TLDR of the article:

> In the Emacs world [...], when they make an API obsolete, they are basically saying: “You really shouldn’t use this approach, because even though it works, it suffers from various deficiencies which we enumerate here. But in the end it’s your call.”

> Whereas in the Google world, deprecation means: “We are breaking our commitments to you.” [...] It means they are going to force you to do some work, possibly a large amount of rework, on a regular basis

It's fine if they want to go hunting for their Perfect API v10™, but just leave the old APIs alone - build abstractions under them if needed. Don't push that work onto your customers.


Two things this essay made me think about.

First: Why don't more programs or services go the `make-obsolete` route and allow people to use deprecated APIs forever? If maintaining compatibility is the goal, then I can't see how removing a function in a new API version will ever work. Maybe it's because companies expect people to be dumb and call support when they use a deprecated API that was specifically designated as unsupported. "But I can use it," they'd say, and still complain when it breaks. Maybe that workload is what they're trying to avoid. But maybe those users would silently move away instead if continuing to use the platform meant rewriting everything.

Second: Why don't people usually have automated API refactoring tools? When I was trying to port a Minecraft mod to a later version, I found a bunch of things had changed in the engine. Okay, go to MCP and look at what changed. Guess what? It's all a bunch of Markdown. They painstakingly went through the process of labeling every single API change and method renaming, and they put it all in Markdown.

There was so much missed potential there. Imagine if you had a policy encouraging developers to record every API change in a machine-readable, schematized format that could be printed to a webpage or Markdown, but could also be fed to a static analysis tool to at least print out the parts that need changing, if not refactor them, just like Google's internal tooling. If developers don't notate those changes, that information is permanently confined to unparseable changelogs and obscure IRC conversations. That's the kind of nuance a semver increment can never capture. Why guess what broke, or manually parse changelogs based on a major version bump, when you can have a computer do the work for you?
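To illustrate, here's a hypothetical sketch of what such a schematized changelog could look like (the schema, method names, and APIs below are all invented for illustration): each change is a structured record, and even a trivial tool can then apply renames mechanically.

```python
import json
import re

# A hypothetical schematized changelog: each entry describes one API change
# as a machine-readable record instead of free-form Markdown.
CHANGELOG = json.loads("""
[
  {"kind": "rename", "from": "world.getBlockAt", "to": "world.blockAt"},
  {"kind": "rename", "from": "entity.getPos",    "to": "entity.position"}
]
""")

def apply_renames(source: str, changes) -> str:
    """Mechanically apply 'rename' entries to source text."""
    for change in changes:
        if change["kind"] == "rename":
            source = re.sub(re.escape(change["from"]), change["to"], source)
    return source

old = "pos = entity.getPos(); block = world.getBlockAt(pos)"
print(apply_renames(old, CHANGELOG))
# pos = entity.position(); block = world.blockAt(pos)
```

A real tool would operate on a syntax tree rather than regexes over text, but the key enabler is the structured change records, not the sophistication of the rewriter.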

Maybe it's because a changelog is "information to throw away," that you'd have the painful time trying to read through the moment things break and then can forget about a couple of days later when the refactoring is done. But the changelogs and migration steps are essentially a bridge closing a chasm, with your entire userbase on one end trying to get to the other, and if it's not made easy enough for them to cross they will either give up and leave or stay on the other side forever.


> One fun bit of trivia about Bigtable is that they had these internal control-plane entities (as part of the implementation) called tablet servers, which had large indexes, and at some point they became a scaling bottleneck.

Anyone understand the technical jargon here? I'm a bit lost.


The Bigtable paper is available here: https://research.google/pubs/pub27898/

> Bigtable maintains data in lexicographic order by row key. The row range for a table is dynamically partitioned. Each row range is called a tablet, which is the unit of distribution and load balancing.

And later...

> The master is responsible for assigning tablets to tablet servers, detecting the addition and expiration of tablet servers, balancing tablet-server load, and garbage collection of files in GFS. In addition, it handles schema changes such as table and column family creations. Each tablet server manages a set of tablets (typically we have somewhere between ten to a thousand tablets per tablet server). The tablet server handles read and write requests to the tablets that it has loaded, and also splits tablets that have grown too large.
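To make the jargon concrete, here is a toy model (not Bigtable's actual implementation; the boundaries and server names are invented): the sorted row-key space is split into contiguous ranges ("tablets"), each assigned to a tablet server, so a request routes by finding the range containing the row key.

```python
import bisect

# Toy model: tablets are contiguous row-key ranges; each range is assigned
# to one tablet server. Real Bigtable keeps this mapping in a METADATA table.
split_points = ["g", "n", "t"]                      # sorted tablet boundaries
tablet_servers = ["ts-0", "ts-1", "ts-2", "ts-3"]   # one server per range

def server_for_row(row_key: str) -> str:
    """Route a row key to the tablet server owning its key range."""
    return tablet_servers[bisect.bisect_right(split_points, row_key)]

print(server_for_row("apple"))   # ts-0  (keys before "g")
print(server_for_row("orange"))  # ts-2  (keys in ["n", "t"))
```

The "scaling bottleneck" Yegge alludes to would be in the indexes this routing layer maintains, since every request has to consult them.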


Check out the original Bigtable paper and it’ll make more sense. I think the author may be misremembering some technical details here, though.


Maybe he's talking about the way that Bigtable's metadata is itself hosted on Bigtable? The statements don't make much sense.


Clearly the author is making things up - there were no gummy sharks in the micro kitchens. Sergey spoke at length about this.

> I considered using Google Cloud Bigtable for my online game, but it costs an estimated $16,000/year for an empty Bigtable on GCP. I’m not saying they’re gouging you

Ok, but they are



Cool, so I can build a three-node CockroachDB/TiDB/TiKV cluster that will have much higher reliability with the same performance characteristics for less money. Sounds like they're still gouging you.


There are other tradeoffs between cloud providers that are important to keep in mind here. In my experience, GCP has been easier to get started with, and has offerings that are by-and-large more comprehensible. One thing I really appreciate is that a lot of their client libs have automatic auth within GCP (last time I used AWS this wasn't the case or I didn't know about it).

As someone who hasn't been burned by it personally, it is hard to quantify the actual risk and maintenance cost of GCP's deprecation policy, but I know decision makers at larger organizations that are rightfully afraid of it.


Easier to get started? Their UI, their IAM architecture, and Firebase being kind of separate are so confusing.

AWS used to be simpler. While they are screwing it up now (the recent Route 53 upgrade was terrible; a single-click action has become 4-5 clicks), they are still better than GCP.

Azure really has the best interface (outside of DO); it is easy to get to related and nested resources in one freaking app. Both Google and AWS end up forcing you to open three or four tabs to set up interconnected apps.

DO's interface is really, really good; however, they don't have the complexity of features that the big three do, so it's not a fair comparison.


That's just been my experience – I remember having much more pain trying to get AWS IAM roles and permissions to work, and I really like that they give you suggestions like "this machine could be smaller" or "this role never actually uses all these permissions."

Also I haven't worked _that_ much with firebase, but it seems like a great example of the benefit of using GCP. Firebase is a cohesive and accessible solution to a lot of what can be fairly nightmarish technical problems. This kind of thing will always depend on the project/team/team size, but I'm just trying to say that there are significant benefits to GCP that should be considered.


Google is not competing with DO for developer mindshare. For large enterprises, stability matters a lot more than new features, and enterprise support matters too; for startups serving enterprises, Google doesn't do much either. What's left is consumer-focused companies like Snapchat that can thrive on just innovative new tech.

I can get Microsoft on a call anytime. I have account managers who talk to me at least once a month; I can reach their product teams, get preview access, get the MS account manager of my customer to help with a deal, put them in front of my customer, get help with compliance, even get their sales guys to recommend my product. To a lesser extent I can do a lot of that with AWS too; even with sub-$100k/year spends they will still assign you an account manager. I am not sure any of this was possible with GCP for most customers.

It is extremely hard to get a human from Google to talk to you, even at $100k+/year GCP spends. Google has contracted a lot of partners to do all the heavy lifting in support for them so they don't have to do the hard work. It does not work: the partners cannot do much beyond what is available on the portal, or clarify anything beyond the documentation.

Azure serves enterprises really well, has made a real effort with developers, and its support for startups is fantastic. AWS is not as good yet; however, it feels like they are really trying, their tech's popularity works in their favour, and they care a lot about backward compatibility: the S3 API from 2006 still works. With Google and GCP it does not even look as though they are trying at all.


>S3 API from 2006 still works

An interesting note is that Amazon actually planned to EOL part of this (specifically, path-style object addressing) and then walked it back for buckets created before a certain date: https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-... I think the behavior is still considered "old style" if not explicitly "deprecated", but it is "supported".
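For reference, the two addressing styles at issue look like this (the bucket and key names are made up):

```python
bucket, key = "my-bucket", "photos/cat.jpg"

# Path-style: the 2006-era form AWS planned to deprecate, then kept
# supporting for existing buckets.
path_style = f"https://s3.amazonaws.com/{bucket}/{key}"

# Virtual-hosted-style: the currently recommended form, with the bucket
# name in the hostname.
virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{key}"

print(path_style)      # https://s3.amazonaws.com/my-bucket/photos/cat.jpg
print(virtual_hosted)  # https://my-bucket.s3.amazonaws.com/photos/cat.jpg
```

Supporting path-style forever is costly partly because the bucket name only appears in the path, so requests can't be routed by DNS alone.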


Yes. That decision to continue support is costing them a ton of money every day; the older API is very expensive. This is also why Backblaze originally did not add an S3 compatibility layer.[1]

This is exactly the kind of commitment I expect from Microsoft or Amazon, it is why enterprises pay premium for a product.

In a similar situation I imagine Google would have just sent a note and, with a window of a few months, shut down the old API. Great for innovation and keeping your tech cutting edge; not so much for customers who are not as agile and can only move slowly, if at all.

[1] https://www.backblaze.com/blog/design-thinking-b2-apis-the-h...


I would like to offer some kind words for the author of this post but in all honesty I can't.

Why is that guy still giving money to GCP if it's so bad?

Go back to AWS. There's a reason AWS is still the market leader.


Google messed up when it sold Boston Dynamics (where's my cooking, dishwashing, and laundry-folding robot?), didn't buy GitHub, and sold Motorola (I just bought my wife a Motorola Edge).


So what is the 4th best cloud platform that he thinks might overtake GCP?


Oracle, or globally, Alibaba.


This is why Microsoft, for all its faults, has huge mindshare in the enterprise: they don't give up on anything, and when they do, they give you years to make the transition.


Well, hmmm, if only there were one or two more FUs, then I would have agreed with the OP. (Sarcasm.)

A PhD quit my employer and went to Google. Last week we were discussing the relative differences. He pointed out that searching for X really has no wrong results, and even if you don't like the results, who are you gonna call? Google is probably a little siloed from customers in a way many other businesses are not. Our IT systems deal in money; there's nowhere to run and hide from customers.


What is the fourth cloud offering that may be number 3 soon?

I've used AWS and Azure; my current gig uses GCP (don't get me started on that). But what is the fourth?


Weren't GCP users running around telling everyone "Google only shuts down consumer products and not its cloud services!!"


We’ve been in GCP for 5 years and this really hasn’t been an issue for us. Good price, solid performance and reliability, some fancy stuff you can’t get elsewhere.


Android is the top example of this by far; everything is half-finished and deprecated.


Am I in the minority being a GCP fan? Articles like these leave me conflicted because on the one hand he's right but on the other hand I actually really like GCP and Firebase is a godsend. It would really suck if GCP goes under.


I've been saying this about Google for a decade... everyone thinks I'm nuts.


One thing I was left curious about after reading this is: what is that fourth cloud that Google could fall behind? I know they're third after AWS and Azure, but there are several candidates for that fourth place.


Alibaba, I think. Probably the 'natural' choice for Chinese-facing developers, but I think Azure has a region there as well.


A perfect, spot on analysis.

The only thing that’s weird about this is that it’s about 3 years late.

If this had been written 3 years ago it would have been topical.

But really it’s old news that Google deprecates everything and leaves its developers in the lurch, and that as a result no one wants to use their stuff.

It’s not surprising to hear this from Steve Yegge because he’s a super switched-on guy, but it is surprising to hear it from him NOW.

Google’s cloud ship sailed years ago, they just haven’t yet got to shutting the whole thing down, but the writing is on the wall.

Google seems totally oblivious to how much developers distrust them, probably because dump trucks turn up every hour or so and dump cash onto everything, which makes everyone feel that everything they do is awesome, even if it isn’t.


I mean maybe I’m not their dream customer or whatever, but we’re happy with google cloud for our purposes - GKE, CloudSQL, Redis.

I really enjoyed the essay and there is surely some truth to it, but these products have all been pretty good for us.


No doubt what is there works and works well.

The issue is that developer confidence is very low due to google being in a never ending relentless drive to cancel stuff.


With 3 services, and Google randomly disabling accounts perhaps 5% of the time for running customers, you have a 15% chance of having your business taken out by a buggy Google algorithm each year. That gives your business a half-life of 4.25 years, approximately, before you get the Google hammer-of-death.

Of course, those might count as one product, in which case, it's 13 years. I'm not sure how Google's algorithms work. Does it detect suspicious behavior per-service or per-cloud-account?
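The parent's arithmetic can be reproduced directly: assuming an independent ban probability p per year, survival after t years is (1-p)^t, so the half-life is ln(0.5)/ln(1-p). (The 15% figure treats the three 5% risks additively, as the parent does.)

```python
import math

def half_life_years(annual_ban_prob: float) -> float:
    """Years until survival probability drops to 50%, given an
    independent chance of being banned each year."""
    return math.log(0.5) / math.log(1.0 - annual_ban_prob)

print(round(half_life_years(0.15), 2))  # ~4.27: three services at ~5% each
print(round(half_life_years(0.05), 2))  # ~13.51: counted as one product
```

This confirms the parent's ballpark figures of roughly 4.25 and 13 years.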

Ironically, so few people use Google in production that successful production use IS anomalous behavior.


You think Google Cloud is gonna shut down?


Well, that’s what they said would happen if they don’t get to #2.

Just search Google for "Google Cloud shutdown"; many speculate about it.

https://www.theinformation.com/articles/google-brass-set-202...


I used to work with Steve. He’s a good storyteller. But a storyteller tells good stories by having a fairly casual relationship with facts and the truth.

For example, Emacs Lisp suffers from bitrot. Some core things haven’t changed in forever, but you also can’t pick up an elisp config from 15 years ago and have it work fine today.

All live systems evolve and change. In this case, Emacs just changes so slowly that Steve thinks it doesn’t.


Nobody pays Emacs a few million a year and runs their core business through it.


My elisp config file dates back to 1986. It's almost entirely unchanged. It still works. I have never had to remove anything because it stopped working. I just occasionally add stuff to it every few years.


I don't know why people are downvoting you. I think you're right. I agree with his general point (I expanded a bit on that here [1]), and I also agree that people often overlook that general point since it is so convenient to do so. But otherwise, the specific examples he wrote (and the other assertions he made) seem to me to be incorrect.

But, as you said, he is a good storyteller. Like Taleb. Getting the moral across is more important, even if sometimes the wolf eats Little Red Riding Hood in malformed renderings.

[1]: https://notgpt.com/2020/08/16/long-lived-systems/


That's the price you pay for being lazy, using closed, proprietary technologies, and acting on their marketing. As if GCP and AWS were the only clouds out there. As if you had to have everything hosted by them under the "planet-scale" buzzword.


Honestly I just use GCP for everything only because AWS console logs me out all the time and 2faing with TOTP is a pain.


The answer to why this is, I’m beginning to realize after talking to dozens of Googlers (acquaintances, friends, former coworkers, HN commenters, and Twitter personalities), is that Google has an active disrespect for you if you’re not part of their in-group. If you haven’t passed their tests, you’re basically an object of derision: a mark used for incrementing a CPM metric, or an incompetent, low-class buffoon who doesn’t deserve their jet-setting luxury lifestyle. And unfortunately, everything trickles down from that.


I have worked at a few orgs, and I've heard "because we are Google" so many times as the explanation for why a product will be used. I've never understood that mentality; I've never understood people basically being proud of a product somebody else built 20 years ago.

I do believe leadership is failing Google, and their lack of vision is starting to affect the company. There was quite a bit of inertia, but I fear search might drop in the next 5 years and there will be nothing ready to replace ads.


NLP is rapidly improving; a time may come when someone else can create good search just because NLP tech has gotten so good.


(warning rambling rant)

I feel like this was super true in the pre Facebook days but they were humbled A LOT by the massive flop of Google plus.

I honestly don't feel like I get the "my s*** doesn't stink" vibe from them anymore.

Especially because everyone knows their Achilles heel now.

But for their ad revenue what would they be?

Everything else material came from an acquisition or is infra stuff.

It's cool, but they also had to learn the hard lesson of Hadoop.

Yeah, they built Bigtable... but the rest of the industry standardized on Hadoop (at the time) because Google never released an implementation, and so they started losing out on good hires who didn't want to get locked into proprietary Google infra.

I think you see them recognizing this with initiatives like all of the stuff around Kubernetes.

But hey, I wouldn't build anything on their (non-open) stuff, and I get more scared by the day over my dependence on El Goog, because they've shown time and again how little they care for us common plebs.

But I feel like they get better when they lose but they've only started losing more recently

(Rant over)


> Yeah they built big table... But the rest of the industry standardized on Hadoop (at the time) because they never released an implementation and so they started losing out on good hires because they didn't want to get locked into proprietary Google infra.

Hadoop and Bigtable are not the same thing. Bigtable is a NoSQL database; Hadoop is a big-data processing framework. Hadoop is actually an open-source implementation of MapReduce, which was developed at Google and which has since been replaced internally by Flume.


Hadoop is a collection of things: HDFS (filesystem), HBase (key-value database), MapReduce (data processing), and YARN (resource scheduling and job management).

HBase was an open-source implementation of Bigtable, while HDFS was a copy of the Google File System. The divergence appeared with other processing frameworks like Spark, but now there's Apache Beam to unify it all.


On the other hand, Hadoop does have HBase, which is its version of Bigtable. It was, for a number of years, a very popular columnar K/V store.


Lol, the Stadia launch was anything but humble. An absolute farce compared to GeForce Now, with no compelling titles or experiences to give it a leg up.


I think this is really common. I've met quite a few Google engineers whose whole identity is defined by their employer. It's striking, and a testament to the malleability of the human mind, even if the individual is "smart".


The joke I always used to have when I worked near their Dublin office was: how do you know someone is a Googler? Don't worry, they'll tell you :)

This was prompted by me noticing that most of the people who still had their badges visible after work appeared to have Google badges.


From a brief look at job openings at Google Dublin, this mentality would be particularly mind boggling as it looks like the Dublin office houses the support staff (not that there's anything wrong with working support).


So, in general, Dublin Big Tech jobs are in:

- customer support

- sales

- finance (explains itself really)

- SRE (mostly because of favourable time zones)

Most of the core engineering is in London/Zurich. Dunno what's gonna happen post Brexit, but anecdotally, the FAANG which I am most familiar with has had real problems hiring in London post-2016.


This hasn't been my experience, and I've worked there. There might've been a couple of dicks like that, but it's definitely not the norm.


I don’t know about other locations, but in Seattle the Googlers and SpaceX people walk around with a lot of swagger. And you don’t have to talk to them long before workplaces get brought up and a small jab at Amazon or Microsoft is made. Which, idk, maybe is fair if they used to work at Amazon/Microsoft. But still, I wish we weren’t so intent on maintaining a hierarchy. Maybe we all just need to develop more of a life outside of work.


Maybe you just don't notice the googlers who don't go around with a lot of swagger?


Good point, I’m only noticing the people wearing workplace swag/bringing up their workplace. I don’t notice people who don’t bring it up.


The counter-case would be, "Do you also notice people from other companies who walk around with swagger?"

Not living in that area anymore, I can't say. I haven't lived there for 10 years.


It's worth noting the size differences of the various companies in terms of employee count. Part of this is simple statistics.


This sounds so tiring. I don't care who you work for, tell me what you're working on and I may be impressed.


A "couple dicks"? It's endemic, dude. You can't be given everything you could ever dream of and _not_ end up like this.

By comparison, I have nothing, zilch, for my hard work. Neither do most people working on core tech at AWS. That's why they empathize with their customers (I'm guessing).


It is not endemic. I think you've met a few people you didn't like, read some tweets, and extrapolated poorly. And I don't think people working on GCP are really treated that much better than their counterparts at AWS. The fact that you think the difference in product deprecation policy between them is caused by people getting "everything they could dream of" is funny, though.


It's totally endemic. If you work at Google and can't recognize the smug, superior, yet incompetent culture, odds are you've been enculturated.

It's not unique to Google. Lots of organizations intentionally build a culture of elitism. However, it means I would never, ever rely on Google for anything. Free products like search are a-okay. My email is on gmail because legacy.

Build a business on Google platforms? Nope.


Trips to Europe or to random conferences in first class? Trips to _any office_ for "face time"? Massages? Bonuses of any sort, including peer bonuses and holiday gifts? Refreshers? Society thinking that you're "smart" for passing the loop? You're rewarded just for being alive at Google, but I get nothing but scorn and condescension.

Fun fact for you - if I get promoted this year at Amazon to L5, I'll _still_ make less than a new grad at Google. It's extremely depressing to the extent that I can't get out of bed in the morning, but I'd wager that inferiority complex probably helps on the product side to empathize with customers.


The saying goes that if you want a raise, you have to negotiate for it at your new job. Internally promoted employees typically make less than an outsider coming in for the same position. Some corps say that pay bumps have a max, so internal employees hit that max when getting promoted, even though that max is still below what a new hire would be brought in at. I've run into that personally, and I fixed it by getting a new job at a new company. When you negotiate that new job's salary, you have to make sure you're going to be okay with it for the next few years while the cycle starts over. And people wonder why job stints are so short today.


Stop comparing yourself to others and work on yourself. Look inward, not outward.

You make six figures at a top tech company and you’re bitterly complaining because you think you’re entitled to more, and in the same breath you’re chastising Google’s pretentious culture? There’s a lot of irony there.

Focus on improving yourself and don’t get caught up in comparing yourself to others.


The only thing to see looking inward is a void.

I see too many people talking about their Tahoe Google offsites (jbd@ on twitter claims a bunch of her coworkers just work from Tahoe during ski season) or their Colorado skiing weekends.

Anyways, hypocrisy doesn't mean I'm wrong, and I think its valid to be mad that I'm seen as inferior daily.


> Fun fact for you - if I get promoted this year at Amazon to L5, I'll _still_ make less than a new grad at Google.

Yet you still make more than what the vast majority of engineers in Western Europe will ever make in their whole careers. And that's only looking at the most developed countries.

The job market isn't fair. No use getting burnt over that.


He makes far more than the vast majority of developers in the US.


I know several people making $350k at companies like Lyft two years out of undergrad.


That doesn’t negate my point that you still make more than most developers in the US outside of the HN/SV bubble, writing CRUD apps or bespoke internal apps that will never see the light of day outside the company. Go to salary.com and pick any major city not on the West Coast.


Give me a break. Lots of teams in Google have restrictive travel policies and never see the pointy end of a plane. Massages cost money. Holiday gifts ceased to exist years ago.

I'm also quite surprised by your wage comment: a common aphorism is that Amazon cheaps out on everything except real estate and compensation.


> Lots of teams in Google have restrictive travel policies and never see the pointy end of a plane.

Which ones do and don't? Because all of the ones I've heard of have pretty loose ones compared to my organization.

And yes, Amazon doesn't cheap out on real estate because we own so much of it. It's honestly remarkable how it's done. But I was under the impression everyone knows they pay less.


Do you not realize how much AMZN appreciated over the last 5 or 10 years compared to FB or GOOG? Many Amazon engineers came out way ahead financially compared to their FANG peers.


FB and GOOG provide more stock and more refreshers. I think its hard to say AMZN engineers have come out ahead consistently.

I only have about ~40 stocks up from about 35 before the refresh cycle (unvested) and that's unlikely to go up if I get promoted (if you have too many they don't give you more). Googler and FB refresher values are significantly higher.


> Fun fact for you - if I get promoted this year at Amazon to L5, I'll _still_ make less than a new grad at Google.

This doesn't appear to be close to true based on levels.fyi, unless you're severely underpaid for an Amazon L5.


Levels.fyi is good, but it's still polluted with TCs from multiple regions in the US (California and NYC have base and TC higher by 15% due to taxes). In addition, promos always happen at the low band and aren't negotiated. There's very likely some intersection between high-band L4 and low-band L5, and I'd be there when/if I promo. The only way to get out of the band is to get a big raise during evaluation season, but this maxes out at 10% or so for "top tier" performance, evaluated only once a year.

The net effect is "pay for performance" isn't really a thing.

The only way for me to do a significant compensation bump is to do what's called a "dive and save", but this is very rarely done for L5's and is significantly more common for L6's. I got an offer for about $200k a year ago at a hedge fund, but that wasn't eligible for dive and save because of my level and I'm (clearly) too mentally defective to pass a Facebook or Google loop.


Dive and save is one option you have, as admittedly being a long time at a company does not usually end up in a great salary. Related: https://randsinrepose.com/archives/the-diving-save/

The other, more popular approach is to "play the game" as I think of it, i.e. every 18-24 months you switch to a new job with a substantial increase in pay as a result.

I wish this wasn't the case, but I guess it is what it is.


Huh. At this point I'm quite curious if I know you.


Haha, you don’t but I do know you from the Pythonista list.


Then why don’t you move to google if you know they pay better?


I've studied hundreds of hours, failed HC for full time once and I've concluded that I'm mentally incapable of doing so (I blame my parents due to the slight heritability of IQ).

"Just move" is something someone without any empathy for other people's circumstances would say - most people aren't capable of "just moving" as if the only concern is choosing an employer.


Man you really shouldn't be so hard on yourself. These coding interviews aren't IQ tests. If it's any help, a good recommendation from someone in a company would probably be a stronger signal than a leetcode type question.


I think G's ones are supposed to be. They claim themselves to be the top 1% of intelligence.

No disagreement on the good recommendation as a signal, but I doubt that'll matter to the HC if I can't pass 3/5 rounds at the minimum.


My feeling is that getting a job is like dating: a mysterious mix of luck, skill, chemistry, etc. There are some rules, but you can follow the rules precisely and still end up losing just because of some odd misfortune. However, if you try long enough and put in a solid good-faith effort, long enough being unspecified and possibly after you die or go broke, you generally end up with something decent.

That long enough part is the kicker and that terrifies me in every job search (to the extent I have sometimes jumped at the first half decent opportunity, and agreed to salaries below my potential). I find it does help to apply often and everywhere and not be invested in any particular opportunity.

With dating they say every failed relationship is a step toward finding the right one, and I think that same principle applies to job interviews.

Anyways, that’s my two cents


I worked at Facebook for years and did a lot of interviews, and the hiring process has inconsistent results for all but the best and worst candidates. The number of times a candidate was rejected because of one interviewer with questionable rationale was surprising.


I once had a really good candidate rejected for wearing a T-shirt to a VC interview.

Mind you, he would probably have turned us down anyway to go do a PhD, so he probably dodged a bullet.


Are you really complaining that you only make probably around a quarter million a year?

I don’t even work on the “core tech”, I am on the consulting side, probably make less than you do (albeit in a much lower cost of living area) and I am probably older. But, I am not throwing a pity party on HN


You're $100k off - I make $150k a year. I'd be more satisfied with a quarter million a year to be clear.


Something doesn’t add up. When you say you’re working on the “core tech”, are you a developer located in the US working at AWS? Is that your total comp or just your base? Are you not including RSUs/signing bonus?

I’m not even here to shame $150K. That’s what I was making as a CRUD developer pre-Covid at 45 years old (long story) and I also graduated from a no-name state school - in the 90s. $150k still means you’re making more than roughly 80-85% of American households.


It's total compensation including everything. My base is only around $109k or so. I'm not in California or NYC and I joined out of college so base isn't as good as industry hires.

I fully understand shaming $150k, new grads at Google make $180-$210k.


I specifically said that I was not shaming $150k.

You graduated from college and are now making more than most developers in the US and you work at a big tech company. You have the opportunity to leverage that and either get promoted from within or change jobs.

Are you really struggling making $150K straight out of college? You graduated from a state school (as did I) you probably don’t have that much debt.

For context, I’ve been out of college almost a quarter century and I am just now making a little more than a college grad at Google. I’m not complaining and I only have a good 10-15 years to take advantage of the opportunity. You have your whole life ahead of you.

Stop complaining, get on the grind, and do what you need to do. No one owes you a quarter million just because you walked across the stage with a CS degree.


>You have the opportunity to leverage that and either get promoted from within or change jobs.

I think the real problem is I'm not capable of doing so, at least according to the fair labor market.

> Are you really struggling making $150K straight out of college? You graduated from a state school (as did I) you probably don’t have that much debt.

I don't have any debt, I just have a really pathetic savings rate and a really low net worth compared to nearly everyone I know, including folks that didn't do "the grind" that I did. I'll probably not be able to retire or own a car, much less own a house someday.


I spend a lot of time in Ukraine and talk with people. A friend of mine earns $140/mo ($1680/yr) as a skilled machinist. He has probably as much accumulated skill as me and sweats at his work. I sit around and type out some JS and drink coffee, and make an order of magnitude more money than him. I drove a $500 car (on vegetable oil - cheaper). I live in a one-bedroom apartment with a Hungarian family. Outside the window are people shooting themselves up. I worked for a couple of years out of a McDonald's with earplugs so the music didn't distract me. I maintain a complex software stack that serves nearly a million users with software they love. I don't want to be doing anything else.

I don't like the IQ/talent fetish that seems to pervade Anglophone culture. Some people have a stronger natural predisposition to sit and stare at a screen and figure out puzzles, and that happens to be sought after on the labor market. Doesn't mean others have less inherent value as human beings.

Would be nice to have a small apartment one day. I just try to offer some perspective.


>> I don't have any debt, I just have a really pathetic savings rate and a really low net worth compared to nearly everyone I know, including folks that didn't do "the grind" that I did. I'll probably not be able to retire or own a car, much less own a house someday.

Dude, please seek mental help. You have so many working years ahead of you. Stop worrying about owning a house. Start budgeting using something like YNAB and get your savings rate under control. Also get all the basic investment vehicles going: 401k, after tax Roth, backdoor Roth etc. You’ll do just fine.


> Stop worrying about owning a house

A lot of my friends are already putting money down on houses and are making hundreds of thousands in options trading on principal that I _don't have_. I'm falling behind permanently and even maxing out my 401k (which I didn't do last year because I don't anticipate living to 65 but decided to this year) won't get me closer with a low principal.


I’m really not being facetious, and this is going to come across that way in writing. You really need to talk to a mental health professional.

I hate that we as a society can say that if you have signs of cancer go see an oncologist and it’s not okay to say that if you show signs of mental health issues go see a professional.

But everything about your comments and your user name points to it.


On the other hand, when someone expresses an opinion like this, you never know if they were contacted at some point by a Google recruiter and ended up with a case of sour grapes.

I had a phone conversation with a hiring manager once, and obviously I was not the right type or class or caste or something, but it wasn't a technical screen at all, so it's a mystery to me forever what exactly determined the "in group" as you put it.

As such, I know I'm biased towards them, but I can't really assess the overall company.


I've never been hired or approached by Google.

However, as a University lecturer I know a lot of people Google have employed. In my opinion, they have rejected some of the greatest students I have ever taught, and accepted some idiots who know how to speak well.

While they have employed some good people, I believe they purposefully target the type of people who think working at Google makes you a fundamentally better person than anyone else, rather than the best programmers/researchers/AI/whatever.


As a counterargument, I don't hold much confidence in my former professors' ability to rank the real-world competence of their students.


You may be right, I can only give my point of view.

Also, even if I am right, real-world competence may involve more bluffing, less high quality knowledge of algorithms, than I would like.


Maybe the ability to get along with people is more important than some people are willing to admit?

I’ve met plenty of “smart people” who couldn’t get things done to save their lives partially because they couldn’t communicate well or play well with others.


Google don't want knowledge of algorithms, Google want fluency in creating algorithms. As a lecturer you likely missed a lot of gems, since college testing doesn't test people's ability to create algorithms and instead mostly tests their ability to apply known algorithms. A lot of people who are great at applying known algorithms actually suck at algorithms in practice.


> In my opinion, they have rejected some of the greatest students I have ever taught, and accepted some idiots who know how to speak well.

This has been a source of angst for friends of mine at google for over 15 years, which is almost 3/4 of google’s existence. The hiring process is just random.

Part of it is due to measures put in to avoid certain unconscious biases (hire your friends, regardless of how good they are; hire only people like yourself, etc). So I have some sympathy.

But only some.


> In my opinion, they have rejected some of the greatest students I have ever taught

There's no question that Google is happy to have false-negatives in interviews.

> I believe they purposefully taret the type of people who think working at Google makes you a fundamentally better person than anyone else

why would they do this? And, if they do this, how are they generally speaking so successful? (this is also an amusing comment because the only group I find more critical of Google than HN is Googlers)


> And, if they do this, how are they generally speaking so successful?

When a company has a lot of money coming in, they can be successful despite decision X, rather than because of decision X.

Microsoft can interview people asking about filling airliners with golf balls. Valve can just not bother with Half-Life 3. Google can have zero support and a self-driving car division that keeps avoiding chances to release anything.

That the companies are successful doesn't mean all their actions are smart - sometimes it's that their successes are big enough the occasional bad decision doesn't hurt them.


> And, if they do this, how are they generally speaking so successful?

They have almost complete monopoly on online search, web ads, online video, on phone OS, on browsers. And they are not afraid to abuse those to get more, and are getting away with this.

You don't need to do anything right when you rent-seek most of the online world.


How is this rent seeking and not just profit seeking?

https://en.wikipedia.org/wiki/Rent-seeking


In real estate, the value of a property is often tied to the value of the rental income it could bring in, even if it's never been offered for rent.

'Rent seeking' doesn't refer to the same kind of 'rent', of course, but I think the analogy holds up pretty well. All Google has to do is turn a few screws, and the rest of us will have no choice but to sing whatever tune they call.


Because they can raise the rents anytime and everyone else just has to pay. See Google Maps pricing fiasco for a rather public example.


Profit-seeking with market power is rent-seeking.


I think it's very destructive of meaningful discussion to redefine a term of jargon in order to exploit the stigma of the original definition while talking about something else.


No, if you have market/pricing (empirical monopoly) power, profit-maximizing behavior is inherently rent-seeking by the standard definition.


And, if they do this, how are they generally speaking so successful?

The usual recipe for success in tech: competitors who are even less competent.

See also: how Microsoft dominated personal computing back in the day.


Then this would imply that selecting for people who want to work at Google, instead of the "best candidate" is a valid strategy because it leads to better outcomes.

Again I think this whole line of thinking is nonsense, but even if you take it at face value, it still doesn't make sense.


>Microsoft dominated personal computing back in the day.

By making illegal deals with PC sellers, and by vandalizing apps running on their OS.


Of course. I'm well aware that I'm considered mentally defective (or maybe just garden variety low IQ) by the hiring committee. That doesn't really change my observation here.


You pop up in every post about Google or Facebook and make the conversation about yourself, wallowing in self pity because you didn't land a job there straight out of college. This might sound harsh, but you need to get over it and move on.


^^ what this guy said. You named yourself lowiqengineer because your SAT score was “only” 2200. That’s in the 98th percentile - if it were an IQ test, it would get you into Mensa, the organization for high-IQ people!

You talk as if people from FANG (your invented acronym to exclude Amazon) look down on all others, but it seems that you are the one looking down on all others: everyone who wasn’t lucky enough to get into G/FB straight out of college while also building up not just 1, but 2 or 300k of net worth.

These companies you aspire to do behavioral interviews, and I’m guessing you’re failing them.

Take a step back. You won’t “fail” life because you “only” got a job at one FAANG and not another, or because your net worth is only 100k straight out of college... even the sentence I just wrote sounds completely ridiculous.

Your problems are stemming from how you view yourself and the world, which is full of false assumptions. A lot of smart people on HN have told you this - you should listen to them.


> These companies you aspire to do behavioral interviews, and I’m guessing you’re failing them.

Don't know why people say this. I'm usually pretty good at behavioral interviews. The problem is the algorithmic interviews (again, this is where my IQ fixation comes from).

IDK man, compared to most everyone else I seem to have failed life at this point given that I have zero accomplishments and little financial security.


> compared to most everyone else I seem to have failed life

Please realize how offensive and insulting this is.

You were hired by one of the most successful tech companies, which most people could never aspire to. You claim that Amazon pays you 80% of what (you assume) Facebook or Google would pay you, which puts you financially ahead of the vast majority of people. By calling yourself a failure, you’re calling nearly everyone on Earth an even bigger failure, which is offensive and makes you look extremely entitled and detached from reality.

> Don’t know why people say this.

It’s because you come off in your posts as entitled, obsessive, elitist, insensitive, and bitter. These are the only sides of yourself that you convey here, so that’s why people assume you’re failing your behavioral interviews.

Sorry again if I’m being harsh, but you’ve been posting the same stuff here for months, and every discussion gets derailed by people consoling you or advising you, which you invariably reject. Let’s stop going in circles.


Serious question: How do you know you're good at behavioral interviews? Have you ever seen the feedback written on a behavioral interview? The companies you're describing don't usually give out feedback to applicants, so you can't know, right?

It sounds to me like you have anxiety. I'm saying this because I have anxiety, so I know what it's like. One hallmark of anxiety disorders is that you think things are true about the world, even though you don't actually have evidence this is the case. Or, you fixate on some pieces of information while ignoring evidence to the contrary, which again is irrational thinking. Or you hold assumptions about how the world works that aren't proven. You should look into cognitive behavioral therapy as a potential solution.

You say "compared to everyone else". But, you have to acknowledge that the net worth and income numbers you named put you into the 99th percentile, right? So, you actually mean "compared to a very small subset of people I have failed", right? And this is just a completely irrational argument. I mean according to this vein of thinking, then since I don't have as much money as Jeff Bezos, I've failed. And the implication then is the whole world has failed. Right?


I've always passed interviews that focused on behavioral aspects, I've frequently struggled in algorithm focused loops (bombed 2/4 in my Google onsite because, again, I'm intellectually inferior). Besides, until recently G didn't include any behavioral loops.

> "compared to a very small subset of people I have failed"

Compared to nearly everyone at elite universities I have absolutely failed. I'm sure some of them will become the Jeff Bezos of 2050 as well - it's irrelevant that it's still a small number. I don't want to be compared against somebody that works at Cisco or IBM for instance for the work and mental anguish I've put in.


Even here you have many false assumptions though. For instance, the Harvard median new grad salary is 69k [1]. So you’re wrong in your assertion that compared to nearly everyone at elite universities you have failed. By your own definition of success, compared to most (by definition of median) people at elite universities, you have succeeded, and it’s they who have failed.

You also have an implicit assumption that people who work at Cisco or IBM haven’t put in work or mental anguish, and you’re wrong. I know many people who worked and studied hard to get jobs at these places.

You’re artificially restricting the set of successful outcomes so that you can say you’ve failed because you don’t fall into that arbitrary set. Not to mention that your entire definition of success as defined by things like money and perks is wrong.

[1] https://www.google.com/amp/s/www.cnbc.com/amp/2018/09/14/sta...


The perspective is inverted. From the perspective of a Harvard/Stanford/CMU/MIT undergrad working at Amazon is failing, which is nothing to say of IBM or Cisco (this is what I've understood after talking to several). I'd wager that the median new grad TC for Harvard CS students is at least twice that.


That's completely unscientific content marketing "research".


There are billions of people worse off than pretty much anyone posting on HN.

Let me repeat: that is thousands of millions of people who can barely even conceptualize the life you're able to live.


I was about to disagree with what seemed like a personal attack, but then, without even looking at the posting history of the poster you replied to, the username alone made me agree.

On another note, I’m in my mid-40s and have spent most of my career bouncing from one CRUD job to another (not complaining, it’s paid for a decent lifestyle in my relatively low-cost-of-living area), and I’m always encouraging fresh CS grads to go for broke and take the r/cscareerquestions route of “grind LeetCode and work for a FAANG”, knowing they will make more fresh out of college than I did until a few months ago.

Someone else’s success is not my failure.


Including the most critical group of all - customers. It’s astonishing how little Google cares about its paying customers, at least when it comes to AdWords, GSuite, and GCP.

I think the Pixel division is the only exception, though I don’t own one so I can’t comment personally.


After 3 years of updates, Google drops Pixel support like a hot potato.

As far as I'm informed, every Pixel version so far has had battery degradation issues. After a year or so the battery seems to have only a fraction of its previous capacity.


So does every cell phone maker. I have a pile of old-but-working cell phones without security updates anymore in a drawer somewhere.


Sad. I want to like Google but the author is right.


Yep, Google’s stuff looks nice, but I can’t afford to use it. The time budget simply doesn’t allow for rewriting things more than once a decade, and even that’s a pain.


I want to argue with this but I don’t think I can.


It inherits primarily from the idiotic interview loop, and an entire cottage industry which has grown up around it.

I am reminded of this funny Feynman quote (paraphrasing): "The whole purpose of the existence of the Mensa club was to decide who else was eligible to join the club". Now, in all fairness, this is true of all the big tech companies to varying degrees, but boy does Google Cloud in particular take the cake on this one!

Of course, Big Tech does produce very good products, and at scale, but that is being built by about 10% of their engineers. The rest of them work there just to keep the interview loop as exclusive as possible. :-) After all, otherwise, one of them would have actually solved long-running customer problems.

Plus the rest of the world is now coming to an agreement that a lot of Big Tech's growth is coming from a lot of unfair advantages - such as massive data collection before it was understood as a bad thing, ability to lobby local politicians at a much bigger scale, ability to influence regulations that hurt them somewhat but entirely kill the competition (GDPR for example), complete lack of accountability for past malpractices (e.g. Facebook's friendly fraud case, for which no one went to jail) because they are now too powerful to actually be sent to jail.


I wonder if this is a result of Google not having users/customers but data points?


I recently migrated off of digital ocean into aws.

I was originally planning to go to GCP for the native k8s experience.

Then I read somewhere online that AWS doesn't sunset services whereas GCP does.

I was halfway through the migration process to GCP and bailed on a dime.


What I really want is a cloud provider that has AWS’s “we don't kill anything anyone is still using” and Google's probability of the documentation being where you expect, having what you need, and being current and accurate (and, yes, I know Google is far from perfect on that, but dear God is AWS utterly horrible, and that's just for the services that they are actively pushing).


The main problem of Google Cloud is simply that no amount of money can pay off a 10-year-late start in a race against the #1 incumbent.

Microsoft still wins on the desktop. The only way to beat them was to compete in new emerging markets.



