Grepping for symbols like function names and class names feels so anemic compared to using a tool that has a syntactic understanding of the code. Just "go to definition" and "find usages" alone reduce the need for text search enormously.
For the past decade-plus I have mostly only searched for user-facing strings. Those have the advantage of being longer, so they are more easily searched.
Honestly, posts like this sound like the author needs to invest some time in learning about better tools for his language. A good IDE alone will save you so much time.
Scenarios where an IDE with full syntactic understanding is better:
- It's your day to day project and you expect to be working in it for a long time.
Scenarios where grepping is more useful:
- Your language has #ifdef or equivalent syntax which does conditional compilation, making syntactic tools incomplete.
- You just opened the project for the first time.
- It's in a language you don't daily drive (you write backend but have to delve into frontend code, it's a 3rd party library, it's configuration files, random json/xml files or data)
- You're editing or searching through documentation.
- You haven't even downloaded the project and are checking things out in github (or some similar site for your project).
- You're providing remote assistance to someone and you are not at your main development machine.
- You're remoting via SSH and have access to code there (say it's a python server).
Yes, an IDE will save you time daily driving. But there's no reason to sabotage all the other use cases.
Further important (to me) scenarios that also argue for greppability:
- greppability does not preclude IDE or language server tooling; there are often special cases where only certain (e.g. context-dependent) usages matter, and sometimes grep is the easiest way to find those.
- projects that include multiple languages, such as for instance the fairly common setup of HTML, JS, CSS, SQL, and some server-side language.
- performance in scenarios with huge amounts of code, or where you're searching very often (e.g. in each git commit for some amount of history)
- ease of use across repositories (e.g. a client app, a spec, and a server app in separate repos).
I treat greppability as an almost universal default. I'd much rather have code in a "weird" naming style in some language but with consistent identifiers across languages, than have normal style-guide identifiers within each language but differing identifiers across languages. If code "looks weird", that's often actually a _benefit_ in such cases, not a downside - most serialization libraries I use for this kind of stuff tend to do a lot of automagic mapping that can break in ways that are sometimes hard to detect at compile time if somebody renames something, or sometimes even just changes a casing or a type. Having a hint of this fragility visible at a glance, even in dynamically typed languages, is sometimes a nice side effect. Very speculatively, I wouldn't be surprised if AI coding tools deal with consistent names better than context-dependent ones too; greppability is likely not merely about the tool grep itself.
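To make the serialization point concrete, here's a minimal, made-up Python sketch (the names Shipment, delivery_eta_utc, and to_wire are invented for illustration): the wire key is kept identical to the attribute and column name rather than remapped, so a single grep for it finds the model, the serializer, and every consumer.

```python
# Hypothetical sketch: keep the wire name identical to the attribute and
# column name, even where the consuming side would normally prefer camelCase.
from dataclasses import asdict, dataclass
import json

@dataclass
class Shipment:
    delivery_eta_utc: str  # same spelling in the SQL column, JSON key, and JS code

def to_wire(shipment: Shipment) -> str:
    # No automagic snake_case -> camelCase remapping: grep for
    # "delivery_eta_utc" finds every producer and consumer of the field.
    return json.dumps(asdict(shipment))

print(to_wire(Shipment(delivery_eta_utc="2024-01-01T00:00:00Z")))
```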
And the best part is that there's almost no downside; it's not like you need to pick either a language server, IDE or grep - just use whatever is most convenient for each task.
Grep is also useful when IDE indexing isn't feasible for the entire project. At past employers I worked in monorepos where the sheer size of the index caused multiple seconds of delay in intellisense and UI stuttering; our devex team's preferred approach was to better integrate our IDE experience with the build system such that only symbols in scope of the module you were working on would be loaded. This was usually fine, and it works especially well for product teams, but it's a headache when you're doing cross-cutting work (e.g. for infrastructure projects/overhauls).
We also had a livegrep instance that we could use to grep any corporate repo, regardless of where it was hosted. That was extremely useful for investigating failures in build scripts that spanned multiple repositories (e.g. building a Go sidecar that relies on a service config in the Java monorepo).
As someone who runs into that daily, I'm surprised I never heard of this before.
I seem to have found the 64-bit mode under "Tools > Options" then "Text Editor > C/C++ > IntelliSense". The top option is [] Enable 64-bit IntelliSense.
But I can't seem to find the RAM limit you mentioned, and searching for it just keeps bringing up stuff related to VSCode. Do you know where it is off the top of your head, or a page that might describe it?
Thanks for that. Searching Google for it only led me to VSCode's IntelliSense settings, and searching for an "IntelliSense Memory Limit" setting in Visual Studio didn't lead me right to the result, but it did give me a whole settings page that "matched". I found the setting in Visual Studio is "IntelliSense Process Memory Limit", which is under "Text Editor > C/C++ > Advanced", under the "IntelliSense" header towards the bottom of the section.
> It's your day to day project and you expect to be working in it for a long time.
I don't think we need to restrict the benefits quite that much—if it's a project that isn't my day-to-day but is in a language I already have set up in my IDE, I'd much prefer to open it up in my IDE and use jump to definition and friends than to try to grep and hope that the developers made it grepable.
Going further, I'd equally rather have plugins ready to go for every language my company works in and use them for exploring a foreign codebase. The navigation tools all work more or less the same, so it's not like I need to invest effort learning a new tool in order to benefit from navigation.
> Yes, an IDE will save you time daily driving. But there's no reason to sabotage all the other usecases.
Certainly don't sabotage, but some of these suggestions are bad for other reasons that aren't about grep.
For example: breaking the naming conventions of your language in order to avoid remapping is questionable at best. Operating like that binds your business logic way too tightly to the database representation, and while "just return the db object" sounds like a good optimization in theory, I've never not regretted having frontend code that assumes it's operating directly on database objects.
> if it's a project that isn't my day-to-day but is in a language I already have set up in my IDE, I'd much prefer to open it up in my IDE and use jump to definition and friends than to try to grep and hope that the developers made it grepable.
It's funny, because my preference and actual use is the exact opposite: for a project that isn't my day-to-day, I'm much more likely to try to grep through it rather than open it in an IDE.
Another overlooked advantage of greppability is being able to do fuzzy searches, or to discover related code that wasn't directly linked to what you were looking for.
For instance, if you were hunting for the method updating a `foo_bar` instance, grepping for it will also give you instances of `generic_foo_bar` and `shim_foo_bar`. That can be noise, but it can also be stuff you wouldn't have seen otherwise that saves your bacon. If you're not familiar with a project I think it's quite an advantage.
I used to pipe things through black for that. (a script that imported black, not just black on the command line.)
I also had `j2p` and `p2j` that would convert between python (formatted via black) and json (formatted via jq), and the `j2p_clip`/`p2j_clip` versions that would pipe from clipboard and back into clipboards.
It's worth taking the time to build a few simple scripts for things you do a lot. I used to open up the repl and import json to convert between json and python dicts multiple times a day, so spending a few minutes throwing together a simple script to do it was well worth the effort.
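The scripts themselves aren't shown above, so purely as a guess at the shape of such a thing, a minimal j2p/p2j pair might look like this (assumes black is installed; the clipboard variants are omitted):

```python
import ast
import json

import black  # third-party formatter, imported rather than shelled out to

def j2p(text: str) -> str:
    """JSON text -> Python literal, formatted by black."""
    return black.format_str(repr(json.loads(text)), mode=black.Mode())

def p2j(text: str) -> str:
    """Python literal text -> indented JSON."""
    return json.dumps(ast.literal_eval(text), indent=2)

print(j2p('{"name": "widget", "tags": ["a", "b"], "active": true}'))
print(p2j("{'name': 'widget', 'tags': ['a', 'b'], 'active': True}"))
```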
> If a data structure literal (tuple, list, set, dict) or a line of “from” imports cannot fit in the allotted length, it’s always split into one element per line.
i'm not interested in minimizing diffs. i'm interested in being able to see all the fields of one record on one screen—moreover, i'd like to be able to see more than one record at a time so i can compare what's the same and what's different
black seems to be designed for the kind of person who always eats at mcdonald's when they travel because they value predictability over quality
My understanding of black is that it solves bikeshedding by making everyone a little unhappy.
For aligned column readability and other scenarios, # fmt: off and # fmt: on become crucial. The problem is that like # type: ignore, those start spreading if you're not careful.
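For anyone who hasn't used the markers, a small invented example: black leaves everything between them exactly as written, which is also why they tend to spread once a team discovers them.

```python
# fmt: off
RATE_TABLE = [
    # weight_kg, zone,   price_eur
    (0.5,        "EU",   4.90),
    (2.0,        "EU",   7.40),
    (0.5,        "INTL", 12.90),
]
# fmt: on
```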
My only complaint with black is that it only splits long definitions into one element per line if they exceed the length limit. That's probably configurable, now that I write it down.
Other than that, I actually quite like its formatting choices.
i kind of get the vibe from the black documentation that it's written by the kind of person who thinks we're bad people for wanting that, and perhaps that everyone should wear the same uniform because vanity is sinful and aesthetics are frivolous
I honestly suspect that the amount of time spent dealing with the issues monorepos cause is net-larger than the gains most get from what a monorepo offers. It's just harder to measure because it tends to degrade slowly, happen to things you didn't realize you were relying on (until you need them), and without clear ways to point fingers at the cause.
Plus it means your engs don't learn how to deal with open source code concerns, e.g. libraries, forking, dependency management. Which gradually screws over the whole ecosystem.
If you're willing to put Google-scale effort into building your tooling, sure. Every problem is solvable. Only Google does that though, everyone else is getting by with a tiny fraction of the resources and doesn't already have a solid foundation to reduce those maintenance costs.
Sure. But those are far from the only massive codebases out there, and many of the biggest are monorepos because sorta by definition they are the size of multiple projects.
IMO the biggest problem with big projects is finding the right file and finding the right lines of code, so greppability is even more helpful with big repos.
clangd works fine for me with the linux kernel. For best results build the kernel with clang by setting LLVM=1 and KERNEL_LLVM=1 in the build environment and run ./scripts/clang-tools/gen_compile_commands.py after building.
Ok, but now every time you switch commits you have to wait for clangd to reindex. Grepping in the kernel is just as fast and you can do it without running a 4GB+ process that takes 10+ minutes to index
Sure. I wasn't intending to claim that there is no reason to care about greppability. Just providing some tips about getting clangd to work with linux for those who might find that useful.
>It's your day to day project and you expect to be working in it for a long time.
Bold of everyone here to assume that everyone has a day-to-day project. If you're a consultant, or for other reasons are switching projects on a month-to-month basis, greppability is probably the top metric, second only to UT coverage.
They said the scenario in which that would be useful was IF: "It's your day to day project and you expect to be working in it for a long time". The implication being that if neither of those hold then skip to the next section.
I don't think anyone is assuming anything here. I've contracted for most of my career and this didn't seem like an outlandish statement.
Also, if you're working in a project for a month, odds are you could set up an IDE in the first few hours. Not sure how any of this rises to the level of being "bold".
No IDE will resolve this if the same code is preprocessed more than once to produce different files; or if you often build with different and conflicting preprocessing values; or if your build uses a tool your IDE doesn't know about; or if some of the preprocessing and compilation occurs at runtime.
> Go grab a coffee
So, you're saying "wait".
> Jetbrains all products pack, baby.
JetBrains CLion won't even try to properly index C/C++ files which aren't officially part of the project.
Plus, if you have errors in some places - which occurs while you're editing your code - then that breaks very badly with JetBrains IDEs. e.g. missing closing paren or #endif
> - Your language has #ifdef or equivalent syntax which does conditional compilation making syntactic tools incomplete.
LSP-based tools are fine with this, generally (as long as compile_commands.json or equivalent is available). A syntactic understanding is an incomplete solution; I suspect GP meant LSP.
Many of those other caveats are non-issues once LSPs are widespread. Even GitHub has LSP-like go-to-def/go-to-ref, though it's not perfect.
> Your language has #ifdef or equivalent syntax which does conditional compilation making syntactic tools incomplete.
Your other points make sense, but in this case, at least for C/C++, you can generate a compile_commands.json that will let clangd interpret your code accurately.
If building with make just do `bear -- make` instead of `make`. If building with cmake pass `-DCMAKE_EXPORT_COMPILE_COMMANDS=1`.
The macros I see in the real world seem to usually work fine. I’m sure it’s not perfect and you can construct a macro that would confuse it, but it’s a lot better than not having a compilation db at all.
I abandoned VSCode and went back to vim + ctags + ripgrep after a year with the most popular IDE. I miss some features but it didn’t give me a 10x or even 1.5x improvement in my own work along any dimension.
I attribute that mostly to my several decades of experience with vi(m) and command line tools, not to anything inherently bad about VSCode.
What counts as “better” tools has a lot of subjectivity and circumstances implied. No one set of tools works for everyone. I very often have to work over ssh on servers that don’t allow installing anything, much less Node and npm for VSCode, so I invest my time in the tools that always work everywhere, for the work I do.
The main project I've worked on for the last few years has a little less than 500,000 lines of code. VSCode's LSP fairly often takes a few seconds to maintain its indexes. Running ctags over the same code takes about a second, and I can control when that happens. vim has no delays at all, and ripgrep can search all of the files in a second or two.
I have similar feelings... I still use IntelliJ IDEA for JVM languages, but for C, Rust, Go, Python, etc., I've been using vim for years (decades?), and that's just how I prefer to write code in those languages. I do have LSP plugins installed in vim for the languages I work in, and do have a key sequence mapped for jump-to-definition... but I still find myself (rip)grepping through the source at least as often as I j-t-d, maybe more often.
Did you consider Neovim? You get the benefit of vim while also being able to mix in as much LSP tooling as you like. The tradeoff is that it takes some time to set up, although that is getting easier.
That won’t make LSP go any faster though. There’s still something interesting in the fact that a ripgrep of every line in the codebase can still be faster than a dedicated tool.
Considered it and have tried repeatedly to get it to work with mixed success. As you wrote, it takes "some time" to set up. In my case it would only offer marginal improvements over plain vim, since I'm not that interested in the LSP integration (and vim has that too, through a plugin).
In the environments I often work in I can't install anything or run processes like node. I ssh into a server and have to use whatever came with the Linux distro, which means sticking with the tools I will find everywhere. I can't copy the code from the server either. If I get lucky they used version control. I know not everyone works with those constraints. I specialize in working on abandoned and legacy code.
Yes, ok. And legacy code might be a good example of where grep works well, if it's fair to assume a greater propensity for things like preprocessors, older languages, and custom builds that don't play as well with semantic-level tools, let alone were written with modern tooling in mind.
Lol, I'm not working with COBOL or Fortran. Legacy code in my world means the original developers have left, not that it dates from the 1970s. Mostly I work with PHP, shell scripts, various flavors of SQL, Python, sometimes Rails or other stuff. All things modern LSPs can handle.
can you not upload executables over ssh, say for policy reasons or disk-space reasons? how about shell scripts?
i mean, i doubt i'm going to come up with some brilliant breakthrough that makes your life easier that you've somehow overlooked, but i'd like to understand what kinds of constraints people like you often confront
I don't have to use TeamViewer, though I very occasionally have to use Windows RDP.
You can transfer any kind of file over ssh. scp, sftp, rsync will all copy binaries. Mainly the issues come down to policy and billable time. Many of my customers simply don't allow installing anything on their servers without a tedious approval process. Even if I can install things I might spin my wheels trying to get it to work in an environment I don't have root privileges on, with no one willing to help, and I can't bill for that time. I don't work for free to get an editor installed. I use the tools I know I can find on any Linux/BSD server.
With some customers I have root privileges and manage the server for them. With others their IT dept has rules I have to follow (I freelance) if I want to keep a good relationship. Since I juggle multiple customers and environments I find it simpler not having to manage different editors and environments, so I mostly stick with the defaults. I do have a .profile and .vimrc I copy around if allowed to, that's about it.
I can't lose time/money and possibly goodwill whining about not having everything just-so for me. I recently worked on a server over ssh that didn't have tmux installed. Fortunately it did have screen, and I can use that too, no big deal. I spent less than 60 seconds figuring that out and getting to work rather than wasting hours of non-billable time annoying someone about how I needed tmux installed.
I used the word "install" but the usual rule says I can't install, upload, or execute any non-approved software. Usually that just gets stated as a policy, but I have seen Linux home directories on noexec partitions -- government agencies and big corporations can get very strict about that. So copying a self-contained binary up and running it would violate the policy.
I pretty much live in ssh. Remote Desktop means a lot of clicking and watching a GUI visibly repaint. Not efficient. Every so often I have customers using applications that only run on Windows, no API, no command line, so they will enable RDP to that, usually through a VPN.
my cousin wrote a vt52 emulator in bash, and i was looking at a macro assembler written in bash the other day: https://github.com/jhswartz/mle-amd64/blob/master/amd64. i haven't seen a cscope written in bash, but you probably remember how the first versions of ctags were written in sh (or csh?) and ed. so there's not much limit to how far shell functions can go in augmenting your programming environment
if awk, python, or perl is accepted, the possibilities expand further
Sure, but this is taking things to a bit of an absurd extreme. If I worked in a restrictive environment where I couldn't install my own tools, I don't think I would be in a position to burn a ton of my employer's time building sophisticated development tools in bash.
(One-off small scripts for things, sure. But I'm not going to implement something like ctags or cscope or a LSP server in bash.)
certainly it's absurd! nobody would deny that. on the other hand, the problem to solve is also an absurd problem
and i wasn't suggesting trying to bill for doing it, but rather, if you were frequently in this situation, it might be reasonable to spend non-billable time between clients doing it
I guess I don’t see the problem as absurd. As a freelancer I need to focus on the problems the customer will pay for. I don’t write code for free or in my spare time anymore, I used to years ago. i feel comfortable working with the constraints imposed, I think of that as a valuable skill, not a handicap.
I looked at Helix but since I dream in vim motions at this point (vi user since it came out) I'd have to see a 10x improvement to switch. VSCode didn't give me a 10X improvement, I doubt Helix would.
Helix certainly won't give you a 10x improvement. It tends to convert a lot of people moving "up" from VS Code, and a decent chunk, though certainly fewer, of neovim users moving "down".
Advantages of Helix are pretty straightforward:
1. Very little configuration bullshit to deal with. There's not even a plugin system yet! You just paste your favorite config file and language/LSP config file and you're good to go. For anything else, submit a pull request.
2. Built in LSP support for basically anything an LSP exists for.
3. There's a bit of a new generation command line IDE forming itself around zellij (tmux that doesn't suck) + helix + yazi (basically nnn or mc on crack, highly recommended).
That whole zellij+helix+yazi environment is frankly a joy to work in, and might be the 2-3x improvement over neovim that makes the switch worth it.
Like I wrote, I looked at Helix. Seems cool but not enough for me to switch. And I would have to install it on the machines I work on, which very often I can't do because of company policies, or can't waste the non-billable time on.
I only recently moved from screen to tmux, and I still have to fall back to screen sometimes because tmux doesn't come with every Linux distro. I expect I will retire before I think tmux (or screen, for that matter) "sucks" to the point I would look at something else. And again I very often can't install things on customer servers anyway.
It conflicts with the clipboard and a bunch of hotkeys, and configuring it never works because they have breaking changes in how their config file works every 6 months or so.
These days I only use it to launch a long-running job over ssh, detach the session it's in, and leave.
That’s more or less what I use it for — keeping sessions alive. I don’t use 90% of the features. vim does splits, and there’s ctrl-Z to background it and get a shell.
I know I could get more out of tmux but haven’t really needed to. I use it with the default config. I have learned from experience that the less I try to customize my environment the less non-billable time I waste trying to get that working and maintaining it.
You can use a tool like ALE (the Asynchronous Lint Engine) to run LSPs in normal Vim; I've been doing it for years and have no complaints! It's rapid.
VSCode is not an IDE, it's an extensible text editor. IDEs are integrated (it's in the name) and get developed as a whole. I'm 99% certain that if you were forced to spend a couple of months in a real IDE (like IDEA or Rider), you would not want to go back to vim, or any other text editor. Speaking as a long time user of both.
I get your point, but VSCode does far more than text editing. The line between an advanced editor and an IDE gets blurry. If you look at the Wikipedia page about IDEs[1] you see that VSCode ticks off more boxes than not. It has integration with source code control, refactoring, a debugger, etc. With the right combination of extensions it gets really close to an IDE as strictly defined. These days advanced text editor vs. "real" IDE seems more like a distinction without much of a difference.
You may feel 99% certain, but you got it wrong. I have quite a bit of experience with IDEs, you shouldn't assume I use vim out of ignorance. I have worked as a programmer for 40+ years, with development tools (integrated or not) that I have forgotten the names of. That includes "real" IDEs like Visual Studio, Metrowerks CodeWarrior, Symantec Think C, MPW, Oracle SQL Developer, Turbo Pascal, XCode, etc. and so on. When I started programming every mainframe and minicomputer came with an IDE for the platform. Unix came along with the tools broken out after I had worked for several years. In high school I learned programming on an HP-2000 BASIC minicomputer -- an IDE.
So I have spent more than "a couple of months in real IDEs" and I still use vim day to day. If I went back to C++ or C# for Windows I would use Visual Studio, but I don't do that anymore. For the kind of work I do now vim + ctags + ripgrep (and awk, sed, bash, etc.) get my work done. At my very first real job I used PWB/Unix[2] -- PWB means Programmer's Work Bench -- an IDE of sorts. I still use the same tools (on Linux) because they work and I can always count on finding a large subset of them on any server I have to work with.
I don't dislike or mean to crap on IDEs. I have used my share of IDEs and would again if the work called for that. I get what I need from the tools I've chosen, other people make different choices, no perfect language, editor, IDE, what have you exists.
The HP 2000 [1] had a timeshared BASIC system that the school district made available to schools, over ASR-33 teletypes with dial-up modems. The BASIC system could edit, run (translate to byte code and execute), and manage files. No version control or debuggers back then. The HP 2000 had another layer of the operating system accessible to administrators (the A000 account if I remember right) but it was the same timeshared BASIC system with some additional commands for managing user accounts and files.
No one familiar with modern IDEs would recognize the HP 2000 BASIC system as an IDE, but it was self-contained and fully integrated around writing BASIC programs. HP also offered FORTRAN for it but not under the timeshared BASIC system. A friend wrote an assembler (in BASIC!) and taking advantage of a glitch in the bytecode interpreter we could load and run programs written in assembly language.
After high school I got a job as night computer operator with the Multnomah County ESD (school district) so I had admin access to the HP 2000, and their two HP 3000 systems, and an IBM computer they used for crunching class registrations. Good times.
Someone had an emulator online for a while, accessible over telnet, but I can't find it now.
i think it's very reasonable to describe time-shared basic systems like that as ides. the paradigmatic example of an 'ide' is probably turbo pascal 1.0, and of the features that separated turbo pascal from 'unintegrated' editor/compiler/assembler/linker/debugger setups, i think the dartmouth timesharing system falls mostly on the 'ide' side of the line. you could stop your program at any point and inspect its variables, change them, evaluate expressions, change the source code, and continue execution. runtime errors would also pop you into the interactive basic prompt where you could do all those things. i believe the hp 2000 timesharing basic had all these features, too
At the time, in the context of other software development environments (like submitting decks of punch cards) the HP 2000 timeshared BASIC environment would count as an IDE. Compared to Turbo Pascal or any modern IDE it falls short.
HP TSB did not have a REPL. If your program crashed or stopped you could not examine variables from the terminal. You could not peek or poke memory locations as you could with microcomputer BASICs (which didn't support multiple users, so didn't have the security concern). You had to insert PRINT statements to debug the code. TSB BASIC didn't have compile/link steps; it tokenized the code as you entered the lines, and the interpreter amounted to a big switch statement on the tokens. P. J. Brown's book Writing Interactive Compilers and Interpreters (1981) describes how TSB works. Eventually I got the source code to TSB (written in assembler) and figured it out for myself.
Other BASIC implementations that popped up around the same time had richer feature sets. In my senior year at high school I got (unauthorized) access to a couple of Unix systems in Portland, ordered the Bell Labs Technical Journal issues that described Unix and C, and taught myself from those. I didn't get paid to work on a Unix system until several years later (detours into RSTS-11, TOPS-20, VMS, Microdata, Pr1me, others) but I caught the Unix and C bugs young and I still work with those tools every day.
Some programmer friends and more than a few colleagues over the years have made fun of my continued use of what they call obsolete and arcane tools. I don't mind; I have never felt like I did less or sloppier work than my co-workers, and my freelance customers don't care what I use as long as I can solve their problems. Most of the work in programming doesn't happen at the keyboard anyway. I do pay attention and experiment with all kinds of tools but I usually end up going back to the Unix tools I have long familiarity with. That said, I did spend many years in Visual Studio, MPW, and CodeWarrior writing C and C++ code, and I do think those tools (well, maybe not MPW) offered a lot of benefits over coding with vim and grep, for the projects I did back then.
Maybe ironically I use an iPad Pro, I don't have a desktop or laptop anymore. So I have the most modern hardware and a touch-based (soon) AI-assisted operating system that runs a terminal emulator during my work time.
thank you! i didn't realize that it lacked the features of the microcomputer basics; i have the impression that they were copying the same dartmouth timesharing system that hp was copying, but of course i've never used dtss myself
what kind of obsolete and arcane tools do you use? vim seems to be pretty popular among the youngsters these days. a friend of mine a bit younger than you prefers ex
My usual programming toolbox includes vim, ctags, bash, rg (or grep if I have to), sed, awk, tmux (or screen), git, and usually a CLI for one of MySQL, PostgreSQL, SQLite, Redis. The only vim plugins I like to have: ctrlp (file fuzzy finder) and advanced text motions.
On the iPad I use Blink shell, the Github client, Apple Notes. I always have a paper notebook and pen, my short-term memory gets less reliable every year.
I have noticed a lot of younger people using or at least learning vim and CLI tools, maybe retreating from the complexity of modern software development setups. Maybe just a retro fad, I don’t know.
I also see terminal setups described online that try to reproduce the GUI experience — lots of colors and panes and widgets. Neovim and zellij both look like that to me — a lot of extraneous functionality and visual clutter that mimics VSCode. I prefer a more minimalist environment. I don't even use syntax coloring.
Everyone has to find the tools that work for them, that takes time. I think most programmers figure out at some point in their career that continually experimenting and polishing their tools doesn’t always help get the work done, and when you freelance you get more conscious of billable vs. non-billable time.
i think you're mostly more modern than i am, despite being much older. i generally use grep rather than rg, screen rather than tmux, emacs rather than vim for programming (though i use vim on my phone and for writing email), no redis, no fuzzy find, and no ipad. the only differences in the opposite direction are that i usually use python or perl rather than sed and awk, and i do use syntax coloring—for me, terminal colors were one of the big pluses of moving from sunos 4 to linux
i don't think younger people using vim is a fad, but we'll see. vim to me, especially neovim, seems mostly like emacs in vi clothing
I think you're arguing semantics here in a way that's not particularly productive. VSCode can be set up in a way that is nearly as featureful as an IDE like IntelliJ IDEA or Eclipse, and the default configuration and OOB experience pushes you hard in that direction. VSCode is designed for software development, not as a general text editor; I would never open up VSCode to edit a configuration file or type up a text file of notes, for example.
Something like vim is designed as a general text-editing tool. Sure, you can load it up with plugins and scripts that give you a bunch of features you'd find in an IDE, but the experience is not the same, and the "integrated" bit of "IDE" is still just not there.
(And I say this as someone who does most of his coding in vim, with LSP plugins installed, only reaching for a "proper" IDE for Java and Scala.)
One metric I would use: if I can sit down at a random co-worker's desk and feel more or less at home in their editor of choice, then it's probably an IDE that has reasonable defaults and is geared for software development. IDEA and VSCode would qualify... vim would certainly not.
A good IDE can be so much better iff it understands the code. However, this requires the IDE to understand the project structure, dependencies, etc., which can take considerable effort. In a codebase with many projects employing several different languages, it becomes hard to reach, and maintain, the state where the IDE understands everything.
And an IDE would also fail to find references for most of the cases described in the article: name composition/manipulation, naming consistency across language barriers, and flat namespaces in serialization. And file/path folder naming seems to be irrelevant to the smart IDE argument. "Naming things is hard"
And especially in large monorepos anything that understands the code can become quite sluggish. While ripgrep remains fast.
A kind of in-between I've found for some search and replace action is comby (https://comby.dev/). Having a matching braces feature is a godsend for doing some kind of replacements properly.
I think the first sentence of the author counters your comment.
What you described works best in a familiar codebase where the organizing principles have been maintained well and are familiar to the reader and the tools are just the extension of those organizing principles. Even then a deviation from those rules might produce gaps in understanding of what the codebase does.
And grep cuts right through that in a pretty universal way. What the post describes are just ways to not work against grep to optimize for something ephemeral.
Go to definition and find usages only work one symbol at a time. I use both, but I still use global find/replace for groups of symbols sharing the same concept.
For example if I want to rename all “Dog” (DogModel, DogView, DogController) symbols to “Wolf”, find/replace is much better at that because it will tell me about symbols I had forgotten about.
For that use case I think you can use treesitter[1]: you can find Dog.* but only where it is a variable name, for example, avoiding replacement inside of, say, literals.
I would actually love a regexp search-and-replace assisted by either TreeSitter or LSP.
Something that lets me say that I want to replace “Dog\(.*\)” with “Wolf\1”, but where each substitution is performed only within single “symbols” as identified by TS or LSP.
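Not TS/LSP, but a rough single-file approximation of that idea using Python's tokenize module: the regex is applied only to NAME tokens, so matches inside string literals and comments are left alone. The pattern and sample source are made up.

```python
import io
import re
import tokenize

PATTERN = re.compile(r"\bDog(\w*)")

def rename_symbols(source: str) -> str:
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            # Substitute only inside identifiers, never inside strings/comments.
            tok = tok._replace(string=PATTERN.sub(r"Wolf\1", tok.string))
        out.append(tok)
    return tokenize.untokenize(out)

src = 'dog_hint = "DogView"\nclass DogController: pass\n'
print(rename_symbols(src))
# DogController becomes WolfController; the string "DogView" is untouched.
```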
That gets to the core of the issue doesn’t it? There are two cultures: Do you prefer to refactor DogView into Dog.View, or do you prefer to refactor Dog.View into DogView.
Personally I value uniqueness/canonicalness over conciseness. I would rather have DogView because then there is one name for the symbol regardless of where I am in the codebase. If the same symbol is used with differently qualified names it is confusing - I want the least qualified name to be more descriptive than “View”.
The other culture is to lean heavily on namespaces and to not worry about uniqueness. In this case you have View and Dog.View that may be used interchangeably in different files. This is the dominant culture in Java and C#.
Not everything you need to look for is a language identifier. I often grep for configuration option names in the code to see what the option actually does - sometimes it is easy to grep, sometimes there are too many matches, sometimes they cannot be found because the option name is composed in the code from separate parts that each produce too many matches on their own. It's not hard to make config options greppable but some coders just don't care about this property.
strongly disagree here. This works if
- your IDE/language server is performant
- all the tools are fully set up
- you know how to query the specific semantic entity you're looking for (remembering shortcuts)
- you are only interested in a single specific semantic entity - mixing entities is rarely supported
I don't map out projects in terms of semantics, I map out projects in files and code. That makes querying intuitive, and I can easily compose queries that match the specificity of what I care about (e.g. I might want to find a `Server` but I want to show classes, interfaces and abstract classes alike).
For the specific toolchain I'm using - typescript - the symbol search is also unusable once it hits a certain project size; it's just way too slow for it to be part of my core workflow.
Only thing I can recommend is using C# (obviously not always possible). Never had an issue with these functions in Visual Studio proper no matter how big the project.
On the flipside, IDEs can turn you into a lazy, inefficient programmer by doing all the hand-holding for you.
If your feelings are anemic when tasked with doing a grep, it's because you have lost a very valuable skill by delegating it to a computer. There are some things the IDE is never going to be able to find - lest it become the development environment - so keeping your grep fu sharpened is wise beyond the decades.
(Disclaimer: 40 years of software development, and vim+cscope+grep/silversearcher are all I really need, next to my compiler..)
Since when was that a bad thing? Since time immemorial, it has been hailed as a universal good for programmers to be lazy. I'm pretty sure Larry Wall has lots of jokes about this on Usenet.
Also, I can clearly remember switching from vim/emacs to Microsoft Visual Studio (please, don't throw your tomatoes just yet!). I was blown away by IntelliSense. Suddenly, I was focusing more on writing business logic, and less time searching for APIs.
Command line tools like grep are force multipliers for programmers. GUIs come with the risk of never learning how to leverage this power. In the end, that often leads to more manual work.
And today, bash is a lingua franca that you can bring with you almost everywhere. Even Windows "speaks" bash these days, with WSL.
In itself, there's nothing wrong with using the built-in features of a GUI. Right-clicking a method (or using a keyboard shortcut) to find the definition in a given code base IS nice for that particular operation.
But by knowing grep/awk/find/git command line and so on, combined with bash scripting and advanced regular expressions, you open up a new world of possibilities.
All those things CAN be done using Python/C#/Java or whatever your language is. But a 1-liner in bash can be 10-100 lines of C#.
Where does this stupid notion come from that using powerful tools means you can't handle the less powerful ones anymore? Did your skills with a hand screwdriver atrophy when you learned how to use a powered screwdriver? Come on.
I use grep multiple times a day. I write bash scripts quite often. I'm not speaking from a position of ignorance of these tools. They have their place as a lowest common denominator of programming tools. But settling for the lowest common denominator is not a path to productivity.
Doesn't mean you should forget your skills, but it does mean you should investigate better tools. And leverage them. A lot.
> But a 1-liner in bash can be 10-100 lines of C#.
Yes. And the reverse is also true. bash is fast and easy if there's an existing tool you can leverage, and slow and hard when there's not.
Every person, developers included, has some constraints on what they're able to learn and use effectively. Those limits vary a lot from person to person, though.
For developers who learn technology a bit slowly (compared to some other developers, not the general population), some of these tools may not be worth the effort.
Also, these developers aren't necessarily low tier in terms of business value. They may have talents when it comes to understanding and communicating business requirements with other stakeholders in their organization, and their technical skills may be secondary to those skills and abilities.
BUT: For the general audience at HN, technical capability is central to their identity. Most people here have some capacity to learn technologies that go somewhat beyond the minimum skills required for a tech job. And for this audience, being confident on the linux/unix command line is generally worth the effort.
I count the IDE and stuff like LSP as natural extensions of the compiler. For sure I grep (or equivalent) for stuff, but I highly prefer statically typed languages/ecosystems.
At the end of the day, I'm here to solve problems, and there's no end to them -- might as well get a head start.
I'm not feeling anemic. The tool is anemic, as in, underpowered. It returns crap you don't want, and doesn't return stuff you do want.
My grep-fu is fine. It's a perfectly good tool if you have nothing better. But usually you do have something better.
Using the wrong tool to make yourself feel cool is stupid. Using the wrong tool because a good tool could make you lazy shows a lack of respect for the end result.
Huh? I have an old hand-powered drill from my Grandpa in my workshop. I used it once for fun. For all other tasks I use a powered drill.
Same for IDEs.
They help you refactor and reason about code - both properties I value.
Sure, I could print it out and use a highlighter, but I'm not Grandpa.
Knowing the bash ecosystem translates better to knowing how to use a knife in the kitchen.
Sure you can replace most uses of a knife with power tools, but there is a reason why most top chefs still rely on that knife for most of those tasks.
A hand powered drill is more like a hand powered meatgrinder. It has the same limitations as the powered versions, and is simply a more primitive version.
The basis of this article (and its forebear "Too DRY - The Grep Test"[1]) is that grep is fragile. It's just fragile in a way that's different from the way that IDEs are fragile.
Even with IDEs, I find that I grep through source trees fairly often.
Sometimes it's because I don't completely trust the IDE to find everything I'm interested in (justifiably; sometimes it doesn't). Sometimes it's because I'm not looking to dive into the code and do serious work on it; I'm just doing a quick drive-by check/lookup for something. Sometimes it's because I'm ssh'd into another machine and I don't have the ability to easily open the sources in an IDE.
I've come to really like language servers for big personal and work projects where I already have my tools configured and tuned for efficiently working with it.
But being able to grep is really nice when trying to figure something out about a source tree that I don't yet have set up to compile, nor am I a developer of. I.e., I've downloaded the source for a tool I've been using pre-built binaries of and am now trying to trace why I might be getting a particular error.
posts like this sound like the author routinely solves harder problems than you are, because the solutions you suggest don't work in the cases the post is about. we've had 'go to definition' since 01978 and 'find usages' since 01980, and you should definitely use them for the cases where they work
- dynamically built identifiers: 100% correct, never do this (see the sketch below). Breaks both text search and symbol search, and results in complete garbage code. I had to deal with bugs in early versions of docker-compose because of this.
- same name for things across the stack? Shouldn't matter, just use find usages on `getAddressById`. Also easy way to bait yourself because database fields aren't 1:1 with front-end fields in anything but the simplest of CRUD webshit.
- translation example: the fundamental problem is using strings as keys when they should be symbols. Flat vs nested is irrelevant here because you should be using neither.
- react component example: As I mentioned in another comment, trivially managed with Find Usages.
Nothing in here strikes me as "routinely solves harder problems," it's just standard web dev.
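(For the first point, a made-up Python illustration of a dynamically built identifier: neither grep nor "find usages" connects the handler to its call site, because the full name only exists at runtime.)

```python
class Dispatcher:
    def handle_payment_failed(self, event):
        print("refunding", event)

def dispatch(dispatcher, event_type, event):
    # Grepping for handle_payment_failed finds the definition but not this
    # call site, and "find usages" typically reports the method as unused.
    handler = getattr(dispatcher, "handle_" + event_type)
    handler(event)

dispatch(Dispatcher(), "payment_failed", {"order": 42})
```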
yes, i agree that standard web dev is full of these problems, which can't be solved with go-to-definition and find-usages. it's a mess. i wasn't claiming that these messy, hard problems where grep is more helpful than etags are exotic; they are in fact very common. they are harder than the problems lucumo is evidently accustomed to dealing with because they don't have correct, complete solutions, so we have to make do with heuristics
advice to the effect of 'you should not make a mess' is obviously correct but also, in many situations, unhelpful. sometimes i'm not smart enough to figure out how to solve a problem without making a mess, and sometimes i inherit other people's messes. in those situations that advice decays into 'you should not try to solve hard problems'
> they are harder than the problems lucumo is evidently accustomed to dealing with because they don't have correct, complete solutions, so we have to make do with heuristics
Funny.
But since you asked. The hardest problems I've solved haven't been technical problems for years. Not that I stopped solving technical problems, or that I started solving only the easier problems. I just learned to solve people problems more.
People problems are much harder than technical problems.
The author showed a simple people problem: someone who needs to know about better tooling. If we were working together, showing them some tricks wouldn't take much time and would improve their productivity.
An example of a harder problem is when someone tries to play aggressive little word games with you. For example, trying to put you down by loudly making assumptions about your career and skills. One way to deal with that is to just laugh it off. Maybe even make a self-deprecating joke. And then continuing as if nothing happened.
But that assumes you want or have to continue working productively with them. If you don't, it can be quite enjoyable to just laugh in their face. After all, it's never the sharpest tool in the shed, or the brightest light that does that. In fact, it's usually the least useful person around, who is just trying to hide that fact. Of course, once you realize that, it becomes hard to laugh, because it's no longer funny. Just sad and pitiful.
> look! i already told you! i deal with the god damned customers so the engineers don't have to! i have people skills! i am good at dealing with people! can't you understand that? what the hell is wrong with you people?
look, lucumo, i'm sure you have excellent people skills. which is why you're writing five-paragraph power-trip-fantasy comments on hn about laughing in people's faces as you demonstrate your undeniable dominance over them, then take pity on them. but i'm not sure those comments really represent a contribution to the conversation about code greppability; they're just ego defense. you probably should not have posted them
(edited to remove things that could be interpreted as a personal attack, since i've gotten feedback that my previous phrasing was too inflammatory)
you aren't the first person i've seen expressing the transparently nonsensical sentiment that 'people problems are much harder than technical problems'. i've seen it over and over again for decades, but i've never seen a clear and convincing explanation of why it's nonsense; i think this is worth discussing in some depth
an obvious thing about both people problems and technical problems is that they both cover a full spectrum of difficulty from trivial to impossible. a trivial people problem is buying a soft drink at a convenience store†. a trivial technical problem is tying your shoes. an impossible people problem is ending poverty. an impossible technical problem might be finding a polynomial-time decision procedure for an np-complete problem, or perhaps building a perpetual-motion machine, or a black-hole generator. both kinds of problems have every degree of difficulty in between, too. stable blue leds seemed like an impossible technical problem until shuji nakamura figured out how to make them. conquering asia seemed like an impossible people problem until genghis khan did it
even within the ambit of modifying a software system, figuring out what parts of the code are affected by a possible change, there are trivial technical problems and problems that far exceed current human capacities. nobody knows how to write a bug-free web browser or how to maintain the linux kernel without introducing new bugs
given this obvious fact, what are we to make of someone saying, 'people problems are much harder than technical problems'? obviously it isn't the case that all people problems are much harder than all technical problems, given that some people problems are easy, and some technical problems are impossible. and if we interpret it as meaning that some people problems are much harder than some technical problems, it's a trivial tautology which would be just as true if we reversed the terms to say '[some] technical problems are much harder than [some] people problems'. so nobody would bother making the effort to say it unless they thought someone was asserting the equally ridiculous position that all people problems were easier than technical problems
the most plausible interpretation is that it means that the people problems the speaker is most familiar with, and therefore considers typical, are much harder than the technical problems the speaker is most familiar with. it's not a statement about the world; it's a statement about the author and the environment they're familiar with
we can immediately deduce from this that you are not andrew wiles, who spent six years working alone on a technical problem which had eluded the world's leading mathematicians for some 350 years, for the solution of which he was appointed a knight commander of the order of the british empire and awarded the abel prize, along with a long list of other prizes. you give the appearance of being so unfamiliar with such difficult technical problems that you cannot imagine that they even exist, though surely with a little thought you can see that they do. in any case, for a very long time, you have not been working on any technical problems that seem impossible to you. i believe you that it's not that you started solving only the easier problems; that means that all the problems you ever solved were the easier problems
or, more briefly, you aren't accustomed to dealing with difficult technical problems
perhaps we can also infer that you frequently handle very difficult people problems—perhaps you are a politician or a clinical psychologist in a mental institution, or you have family members with serious mental illness. however, other aspects of your comment make that seem relatively unlikely
______
† if you have no money or don't speak the local language, this people problem becomes less trivial
"you aren't the first person i've seen expressing the transparently nonsensical sentiment that 'people problems are much harder than technical problems'."
No, it's not "transparently nonsensical" -- it expresses a common human experience that techies who are at least somewhat self-aware (obviously that excludes you) have had at work. Their education gave them a toolbox for approaching technical problems, but no training in the people problems.
It's not remotely saying that all technical problems are easy.
with all due respect, it sounds like you have the privilege of working in some relatively tidy codebases (and I'm jealous!)
with a legacy codebase, or a fork of a dependency that had to be patched which uses an incompatible buildsystem, or any C/C++/obj-c/etc that heavily uses the preprocessor or nonstandard build practices, or codebases that mix lots of different languages over awkward FFI boundaries and so on and so forth -- there are so many situations where sometimes an IDE just can't get you 100% of the way there and you have to revert to grepping to do any real work
that being said, I don't fully support the idea of handcuffing your code in the name of greppability, but I think dismissing it as a metric under the premise that IDEs make grepping "obsolete" is a little bit hasty
> with all due respect, it sounds like you have the privilege of working in some relatively tidy codebases (and I'm jealous!)
I wish, but no. I've found people will make a mess of everything. Which is why I don't trust solutions that rely on humans having more discipline, like what this article advocates.
In any situation where grep is your last saviour, you cannot rely on the greppability of the code. You'll have to check and double check everything, and still accept the risk of errors.
Working on a 32MLOC project, text search is still the quickest way to find a hook that gets you to the deeper investigation. From there, finding definitions/usage definitely matters.
You can maybe skip the greppability if the code base is of a size that you can hold the rough shape and names in your head, but a "get a list of things that sound like they might be related to my problem" operation is still extremely helpful. And it's also worth keeping in mind that greppability matters to onboarding.
Does that mean it should be an overriding design concern? No. But it does mean that if it's cheap to build greppable, you probably should, because it's a net positive.
Sure, if you have the luxury of having a functional IDE for all of your code.
You can't imagine how much faster I was than everybody else at answering questions about a large codebase just because I knew how to use ripgrep (on Windows). "Knowing how to grep" is a superpower.
A bit on the other side of the argument, I use grep plus find plus some shell work to do source code analysis for security reviews. grep doesn't really understand the syntax of languages, and that is mostly OK.
I've used this technique on auditing many code bases including the C family, perl, Visual Basic, C# and SQL.
With this sort of tool, I don't need to look for language-particular parsers--so long as the source is in a text file, this works well.
IDEs are cool and all, but there is no way I'm gonna let VSCode index my 80GB yocto tmp directory. Ctags can crunch the whole thing in a few minutes, and so can grep.
Plus there are cases where grep is really what you need, for example after updating a particular command line tool whose output changed, I was able to find all scripts which grepped the output of the tool in a way that was broken.
It seems like the law of diminishing returns; while I'm sure in a few cases this characteristic of a code writing style is extremely useful, it cuts into other things such as readability and conciseness. Fewer lines can mean fewer bugs, within reason, if you aren't in lisp and are using more than 3 parentheses, you might want to split it up because the compiler/JIT/interpreter is going to anyway.
Interface-heavy languages break IDEs. In .NET at least, "go to definition" jumps you to the interface definition which you probably aren't interested in (vs. the specific implementation you are trying to dig into). Also with .NET specifically XAML breaks IDE traceability as well.
I tried a good IDE recently: Jetbrains IntelliJ and Webstorm. Considered the topdog of IDEs. Was working on a typescript project which uses npm link to symlink another local project into the node_modules of current project.
The great IDEs IntelliJ and Webstorm stopped autosuggesting completions from the symlinked project.
Open up Sublime Text again. Worked perfectly. That is why Jetbrains and their behemoth IDEs are utter shite.
Write your code to have symmetry and make it easy to grep.
By not using literals everywhere.
All literals are defined somewhere (start of function, class, etc.) as enums or vars and reused from there.
Just because I have 20 usages of 'shipping_address' doesn't mean I'll have that string 20 times in different places.
Grep has its place and I often need to grep code base which have been written without much thoughts towards DX. But writing it nicely allows LSP to take over.
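A minimal sketch of that idea (names invented): define the literal once and reuse the symbol, so both grep and an LSP's "find usages" see every place the key is touched instead of 20 scattered string literals.

```python
SHIPPING_ADDRESS_FIELD = "shipping_address"  # single definition of the literal

def extract_shipping_address(order: dict) -> str:
    return order[SHIPPING_ADDRESS_FIELD]

def set_shipping_address(order: dict, value: str) -> None:
    order[SHIPPING_ADDRESS_FIELD] = value
```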
This is what the article starts with: "Even in projects exclusively written by myself, I have to search a lot: function names, error messages, class names, that kind of thing."
All of that is trivial to search for with a tool that understands the language.
> All of that is trivial to search for with a tool that understands the language.
Isn't string search, or grepping for patterns, even more trivial? So what is your argument? You found an alternative method, good, but how is it any better?
In my own case, I wrote a library that we used in many projects, and I often wanted to know where and how functions from my lib were used in those projects. For example, to be able to tell how much of an effort it would be for the users to refactor when I changed something. However, your method of choice, at least with my IDE (WebStorm), only worked locally within the project. Only string search would let me reliably and easily search all projects.
I actually experimented with creating a "meta" project of all projects, but while it worked, it led to too many problems, and the main method to find anything was still string search (the CTRL-SHIFT-F Find dialog in IDEA IDEs is string search, and it's a wonderful dialog in that IDE family). I also had to open that meta project. Instead, I created a gitignored folder with symlinks to the sources of all the other projects and created a search scope for that folder, in which the search dialog let me string-search all projects' sources at once right from within the library project, while still using the excellent Find dialog.
In addition, I found that sometimes the IDE would not find a usage even within the project. I only noticed because I used both methods, and string search showed me one or two places more than the method that relied on the underlying code parsing. Unfortunately IDEs have bugs, and the method you suggest relies on much more work by the IDE in parsing and indexing compared to the much more mundane string or string pattern search.
> Isn't string search, or grepping for patterns, even more trivial?
It's not trivial when you're looking for symbols in context.
> the method you suggest relies on much more work by the IDE in parsing and indexing compared to
...compared to parsing and indexing you have to do manually because a full-text search (especially in a large codebase) will return a lot of irrelevant info?
Funnily enough, I also have a personal anecdote. We had a huge PHP code base built on Symfony, and we were in the middle of a huge refactoring spree. I saw my colleagues switch from vim/emacs to IDEA/WebStorm after watching how easily I found symbols in the code base, found their usages, refactored them, etc., compared to the full-text search they were always stuck with.
This was 5-6 years ago, before LSP became ubiquitous.
Did you miss the comparison? The "more trivial"? The context of my response?
Please read the parent comment I responded to; treating my comment as standalone and adding some new meaning makes no sense.
String search is more trivial than a search that involves an interpretation of the code structure and meaning. I have no idea why you wish to start a discussion about such a trivial statement.
> because a full-text search (especially in a large codebase) will return a lot of irrelevant info?
It doesn't do that for me but instead works very well. I don't know what you do with your symbol names, but I have barely any generic function names; the vast majority of them are pretty unique.
No idea how you use search, but I'm never looking for "doSomething(", it's always "doSomethingVerySpecific()", or some equally specific string constant.
I don't have the problems you tell me I should have, and my use case was the subject of my comment, as should be clear, as well as my comment being a response to a specific point made by the parent comment.
> All of that is trivial to search for with a tool that understands the language.
Some literal in a log message may come from the code, or it might be remapped in some config file outside the language the LSP is looking at, or an environment variable, etc. I just go back and forth between grep and IDE tools; both have different tradeoffs.
The thing is, so many people are weirdly obsessed with never using any other tools besides full-text search. As if using useful tools somehow makes them a lesser programmer or something :)
I actually don't think there's a tool that handles usages when you're using PHP variable variables, or with the article's first example, which chooses a table name parametrically.
When you string-interpolate to build the name, you lose searchability.
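A rough TypeScript sketch of why (the table names here are hypothetical, not taken from the article):

```typescript
// Built by interpolation: grepping for "user_archive" finds nothing,
// because the full table name never appears anywhere in the source.
function tableFor(kind: string): string {
  return `user_${kind}`;
}

// Spelled out explicitly: every table name is a literal you can grep
// for, and the mapping lives in one place.
const TABLES = {
  active: "user_active",
  archive: "user_archive",
} as const;

console.log(tableFor("archive")); // "user_archive", invisible to text search
console.log(TABLES.archive);      // "user_archive", greppable
```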
Yes, full-text search is a great fallback when everything else fails. But in the use cases listed at the beginning of the article it's usually not needed if you have proper tools.
> Honestly, posts like this sound like the author needs to invest some time in learning about better tools for his language. A good IDE alone will save you so much time.
Completely agreed. The React component example in the article is trivially solvable with any modern IDE: right-click the class name, "Find Usages" (or use the appropriate hotkey, of course). Trying to grep for a class name when you could just do that is insane.
I mainly see this from juniors who don't know any better, but as seen in this thread and the article, there are also experienced engineers who are stubborn and refuse to use tools made after 1990 for some reason.
That's a problem of code organisation though. Large codebases should be split into multiple repos. At the end of the day code structure is not something to be decided only by compilation strategy, but by developer ergonomics as well. A massive repo is a massive burden on productivity.
> experienced engineers who are stubborn and refuse to use tools made after 1990 for some reason.
Before calling people stubborn or assuming they got left behind out of ignorance, consider your assumptions. 40+ years experience, senior in both experience and age at this point. Long-term vim + command line tools user.
Do you have any evidence that shows "A good IDE alone will save you so much time?" Have you seen studies comparing productivity or code quality or any metric written by people using IDEs vs those using a plain editor with grep?
By "so much faster" what do you mean exactly? I have decades of experience with vim + ctags + grep (rg these days, because I don't want to get called a stubborn stick in the mud). I can find and change things in large codebases pretty fast. I used VSCode for a year on the same codebases and I didn't feel "so much faster," and I committed to it and watched numerous how-to videos and learned the tool well enough to train other programmers on it. No 10x improvement, not even 1.5x. For most tasks I would call it close to the same in terms of time taken to write code. After getting burned a couple times with "Replace symbol" in VSCode I stopped trusting it. After noticing the LSP failed to find some references I trusted it less. I know grep/ack/rg/ctags aren't perfect, but I also know their weaknesses and how to work with them to get them to do what I want. After a year I went back to vim + ctags + rg.
We might have more productive (and friendly) interactions as programmers if we remembered that not everyone works the same way, or on the same kind of code and projects. What we call "best practices" or "modern tools" largely come down to familiarity, received wisdom, opinion, and fashion -- almost never from rigorous metrics and testing. You like your IDE? Great! I like my tools too. Would either of us get "so much faster" using a different set of tools? Probably not. Trying to find the silver bullet that reduces accidental complexity in software development presents an ongoing challenge, but history shows that editors and IDEs don't do much because if they did programmers today would outperform old guys like me by 10x in a measurable way.
At the last full-time job I had, at an educational software company with 30+ programmers, everyone used Eclipse. My first day I got a new desktop with two big monitors, Eclipse installed, ready to go. I installed vim and the CLI subversion client and some other stuff and worked from the command line, as I usually do. I left one of the monitors off, I don't need that much screen space, and I don't have Twitter and Facebook and other junk running on a second monitor all day like most of the other people did. I got made fun of, old man using old tools. Then once a week, like clockwork, Eclipse would auto-install some updates and everyone came to a halt trying to resolve plugin version conflicts, getting the team in sync. Hours and hours wasted regularly just getting the IDE to work. That didn't affect me, I never opened Eclipse. Watching the other programmers it seemed really slow. So just maybe Eclipse could jump to a definition faster than vim + ctags (I doubt it), but amortized over a month Eclipse all by itself wasted more time than anyone possibly saved with the more powerful tool. Anecdote, I know, but I've seen this play out in similar ways at more than one shop.
Just last year a new hire at a place I freelance for spent days trying to get Jetbrains PHPStorm working on a shared remote dev server. Like VSCode it runs a heavy process on the server (including the LSP). Unlike VSCode, PHPStorm can actually kill the whole server, wasting everyone's time and maybe losing work. I have never seen vim or grep bring a whole server down. I could add up how much "faster" PHPStorm might turn out compared to vim, but it will have to recoup the days lost trying to get it to work at all first.
I'm not arguing that a modern IDE is superior to vim + ctags; I'm arguing that working with strings rather than symbols is only done by either the naive or the stubborn. I basically have a checklist that I work through with new people:
1. do you have the application working locally (for some context-specific meaning of locally, as this heavily depends on what you're doing);
2. can you run the tests, or at least some meaningful subset of tests;
3. does the source code not report errors (it is INSANE how many juniors will ignore big red squiggly lines from imported libraries which haven't been set up properly);
4. can you attach a debugger;
5. can you navigate the source code (i.e. symbols, go to definition, find usages).
If you can do all this, it doesn't matter if you're using vim or VSCode or JetBrains. The OP of the article failed at least number 5 and probably others as well because they haven't set up their development environment properly and are resorting to string-based tooling to get around that. Trying to enforce nonstandard naming conventions to get around this (per the JavaScript example) shows a lack of experience.
I read the original article differently. Writing code in a way that makes it easier to search doesn’t mean not using an IDE, tests, etc. It doesn’t mean not using an LSP or ctags to find symbols. It just adds to the overall consistency and ease of reading and searching the code.
I work with legacy code that very often doesn’t have a development environment or unit tests, often not even version control. Searching the code for strings and symbols (a subset of strings) gets more important in an unfamiliar code base. So does jumping to symbols and debugging, but nothing the OP wrote implies they only use grep.
> I read the original article differently. Writing code in a way that makes it easier to search doesn’t mean not using an IDE, tests, etc. It doesn’t mean not using an LSP or ctags to find symbols. It just adds to the overall consistency and ease of reading and searching the code.
Consistent and searchable code is great, but the article author picked awful ways to achieve that. Adding verbosity and nonstandard field names (which will require a bunch of custom linter rules...) to support someone who doesn't have a proper development environment set up is silly. It reminds me of the arguments from 00s Java developers who insisted that having to type out class names on the left, i.e. `EnterpriseBeanFactoryServerImpl foo = new EnterpriseBeanFactoryServerImpl()` made code more readable, but it's really because they didn't have a dev environment with working type inspection set up. It's a real problem, but other than their point about not using dynamically generated identifiers, the author presents totally the wrong solution.
> I work with legacy code that very often doesn’t have a development environment or unit tests, often not even version control. Searching the code for strings and symbols (a subset of strings) gets more important in an unfamiliar code base. So does jumping to symbols and debugging, but nothing the OP wrote implies they only use grep
Very true, but IMO if you're working with legacy code where tests don't run, there's no source control, and you can't get a LSP working, worrying about whether a class is called `SpecificAttributeAlertDialog` or just `Dialog` is plugging holes in the Titanic at that point.
We can agree that some code bases and developer environments seem better than others. We don’t always have control over that.
The OP described naming things consistently so you don’t have Invoice and customer_inv etc. scattered around referring to the same thing. I think most programmers understand that as a good practice. And the OP mentioned the problem with constructing symbols dynamically. I agree with that. Your examples address different problems.
I took issue with your statement about “experienced engineers who are stubborn and refuse to use tools made after 1990 for some reason.” If the tools made after 1990 have clear and measurable advantages, then I would agree that not using them might come from stubbornness. But you didn’t offer any examples of that, or metrics and experiments, just your opinions and preferences. I’m skeptical of such opinions and prescriptions: the “clean code” type rules presented as scripture without supporting evidence. OP wrote about some practical considerations that make sense to me.