"But it has comparatively few features and is unlikely to add
more. For instance, it has no implicit numeric conversions, no constructors or destructors, no
operator overloading, no default parameter values, no inheritance, no generics, no exceptions,
no macros, no function annotations, and no thread-local storage."
Hearing this leaves a bad taste in my mouth, because one of the (mis)features I've found in Go is its implicit default values in struct initialization. In Go, you don't get a compile error when you miss some fields while initializing a struct. For example, given:
    type T struct {
        a int
        b string
        c float64
    }

    t := T{c: 1.5}
will happily set `t.a` to 0 and `t.b` to an empty string. While this is useful for maintaining backward compatibility (adding more fields to a struct doesn't break upstream code), it also hinders discovering genuine mistakes at compile time. So typical Go programs end up using 0 or an empty string as a sentinel value. This is probably what made me feel Go's dynamic nature the most. It really is pretty close to a dynamic language.
I should explicitly state I seem to be in the minority in the Go community here, but: You don't get a compiler error if you initialize by name. You do get a compiler error when you initialize by position. In my minority opinion, you can and should use that to your advantage whenever possible. Some structs are clearly "configuration-like", for instance, and you don't want an error if a new option shows up, which will probably default to whatever you had before anyhow. Some structs are clearly data structures, and you'd really like to know if your two-dimensional point suddenly grew a third parameter. Of course it's not a bright shining line, but it's often pretty easy to tell which you have, or which thing you want, and use the correct initialization.
In this case, if you used:
    T{4, "hello", 3.5}
most future type changes to the T struct will become compiler errors. (They won't be when the types are compatible: changing the first field from int to float64, for instance, would still yield a legal literal, since the untyped constant 4 satisfies either type. If you have richer types in play, that is less of an issue.)
golint will then complain at you, but you can pass a command-line switch to turn that off.
(This, amusingly, puts me in the rare position of siding against the Go community, on the side of the language designers. Bet you may not have known there is such a position to take. :) )
One non-obvious downside is that the Go 1 compatibility guarantee doesn't apply to struct literals that don't use field names. (I suspect you're aware of this, but other readers might not be.)
So it's possible that a future version of Go could add a field to some struct you're using and your code will stop compiling when you upgrade. It's an easy fix, of course, so it's not that big of a deal, but it's worth realizing.
The point is that if I'm using struct literals, I want the compiler to stop me for those structs.
I'm explicitly rejecting the idea that all struct changes should be possible without producing compiler errors. A compiler error when a guarantee your code is based on changes is a feature, not a bug.
I just don't get this attitude. I'm asking for the compiler to break my code if something I depended on changes. The alternative is the risky one! This is the safe alternative.
Compiler errors aren't evil. They're a tool. They work best when there is a one-to-one correspondence between problems and errors. That's not possible in the general case, but the closer we get, the better. And the worst case is not when I get a spurious error. That's easy to deal with. The worst case is when I don't get an error I should have. If you're going to worry about "riskiness", that's the risk that should keep you up at night. Not compiler errors for things that turn out to be no big deal, and can quite likely be fixed with one quick gofmt -s.
In all other cases I'd want the compiler to break my code. The problem here is that this technique is very fallible: the chance of false negatives, errors that go undetected, is high. It's risky because there are a ton of cases where you won't get a compiler error. It makes you feel unduly safe, which is not a good thing in my opinion. That's mostly why I think it's risky: you feel safe when you shouldn't.
With keyed fields, the worst case is that you have uninitialized fields, which typically doesn't cause many problems and gets caught quickly where it matters. With unkeyed fields, you might have code that compiles but sets unexpected fields. Things that would otherwise panic now just keep working without you noticing, until strange things happen and you have to review all initializations and remember the struct layout every time you see the struct being created.
Personally, I don't like either technique anyway; both are too error-prone. I prefer writing a small constructor where I handle initialization deliberately. It's not super Go-ish, but at least it centralizes all the issues surrounding struct initialization in one place: the constructor. Then when I change what fields go in the struct, I change the function signature, and the compiler breaks instead of letting things fall through silently.
I agree. But the industry is hurtling down a tunnel of weak typing and runtime checking. So compiler features are diminishing in relevance at a geometric rate.
I see the exact opposite trend happening. Weak typing is plateauing. It's the last moment of apparent strength before long, slow, but inevitable collapse. Most interesting work is being done on the static side right now, partially because there's no more work to be done on the dynamic side. (A great deal of being dynamic is precisely throwing away all the structure you might build further features on.)
You can also see this in how all the dynamic languages are working on adding "optional" or "gradual" static typing. Static languages, by contrast, generally create one dynamic type, stick it in a library somewhere, and let the small handful of people who really need it use it. Few, if any, are adding dynamic features. The motion trends are clear.
So I get to go back and look at the struct to see the order every time I initialize an instance? Or watch everything break when the noob on the team alphabetizes the struct fields? Yeah, that's a great solution.
If you're doing this, it is, by definition, on structs you choose to do it on. If you lack the judgment ability to decide when you want that, fine, never do it.
And the noob that is so noobish that they change code and don't even compile it to check to see whether it works is a menace well beyond this issue. That's an overpowerful argument; the real problem is the noob that isn't even running the compiler. The noob doesn't "break struct initializations" specially, they break everything.
> If you lack the judgment ability to decide when you want that, fine, never do it.
The choice in question is whether I want code that breaks silently when I add a field to a struct (named fields in initializers), or code that breaks silently when I swap fields of the same type in a struct (positional fields in initializers). Please tell me more about how "judgment ability" makes this anything other than a choice between brittle code and brittle code.
> And the noob that is so noobish that they change code and don't even compile it to check to see whether it works is a menace well beyond this issue. That's an overpowerful argument; the real problem is the noob that isn't even running the compiler. The noob doesn't "break struct initializations" specially, they break everything.
Compilation will not catch all situations where struct fields are reordered. Consider the rather common case where two fields on a struct are of the same type. If a noob swaps the order of these fields, it will compile just fine using your method of struct initialization. It's even quite possible that if unit tests initialize the structs in the same way, this could get past unit tests as well.
This is a pretty obvious case, and the fact that I have to explain it to you is yet another example of having to dumb things down for Go users who don't know the first thing about programming language design.
"Consider the rather common case where two fields on a struct are of the same type."
Or even the subtler case I already mentioned upthread: an int literal will still happily initialize a float.
"Consider the rather common case where two fields on a struct are of the same type. If a noob swaps the order of these fields, it will compile just fine using your method of struct initialization. It's even quite possible that if unit tests initialize the structs in the same way, this could get past unit tests as well.... dumb things down for Go users"
What does any of this have to do with Go? All languages with structs have these "problems"! Even Haskell will have the exact same problems (even before you turn on OverloadedStrings). You're reaching so hard to be dismissive of some sort of stereotypical programmer that only exists in your head that you've completely surrendered reason. You should reconsider whether that's really who you want to be.
I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. We're discussing ways of initializing Go structs.
> All languages with structs have these "problems"!
This is completely false, and exactly why I'm dismissing you: we wouldn't be having this conversation if you knew anything about other languages. There are plenty of languages which will warn you when you fail to initialize a struct field when initializing fields by name.
(Obviously, this looks like a local struct, so there's possibly no need for forward declaration. But you might have a snippet to generate this sort of thing for you. Or maybe it's just force of habit. And so on.)
I'm guilty of this when writing C. I really have no desire to learn the language properly (it's hard enough to fit C++ in my brain), so I'll just follow the patterns others have set.
When doing this in your own libraries, be sure to document how to generate the struct tag name from the typedef name. (MS don't do this - but they're not consistent about it anyway.) Then when people see a typedef'd struct used somewhere in a header, they'll know how to forward declare it in their own headers.
It's one of those features that seems mad until and unless one runs into the situation that justifies it.
Go provides default values to avoid the C error-factory of random undefined behavior resulting from re-use of whatever is in a memory address; that much is clear. But the reason Go lets you partially instantiate an object (and separates out construction from state) is to make it easier to write unit tests, where the common case is that you want to circumvent the "main line" object construction pathways.
I have long felt that floats should default to NaN, so that any attempt to perform operations with them before they're initialized results in an error.
> From the preface: "achieving maximum effect with minimum means."
Wouldn't be out of place at a marketing agency, with about the same level of truth too.
> Sort of the anti-Perl?
In what sense? The one thing you can say about Perl is that it's a huge language, so the anti-Perl would be a very small language. Go isn't a very small language (like Forth), and it isn't even a small one (like Smalltalk or Self); it's about the same size/complexity as an early Java. Somewhat bigger in some ways (more magical builtins and constructs), somewhat smaller in others (simpler visibility rules, no synchronized methods/blocks), but at best it's a wash.
It takes more than reversing the order of parameters and using known-braindead ideas like codified tabs-are-good syntax to make contrarian ideas valuable.
Just because you change green lights to mean stop and red lights to mean continue doesn't make contrarian suddenly better than the way things were.
> While I think your comment is unconstructive, I do have to say that I don't quite understand why Go decided to force hard tabs.
Since you're using gofmt, which imposes a strict discipline, tabs-for-indent plus spaces-for-alignment lets you configure the tab width however you want locally without imposing it on other collaborators. The usual issue with the idea is doing it consistently and getting people to configure their editors properly (if the editor supports tab-indent plus space-align at all); when the code is being hard-reformatted by a tool, that's not an issue.
> Even more confusingly to me, I really don't understand why they seem to standardize on tabs expanding to 8 spaces rather than 4.
8 is the historical default tab width on Unices (unconfigurable environments generally assume a tab width of 8), so using hard tabs but defaulting to anything else would be odd. And since it's tabs, you can configure your environment to whichever width you prefer, like 3 or 6 (in theory even fractional widths, though I've not seen editor support for half-spaces or pixels). Technically the CSS tab-size property supports arbitrary <length> values, but only Chrome >= 42 does; the rest support only an <integer> number of spaces, except IE, which has no support whatsoever.
A claimed benefit of an 8-wide tab is also that rightward drift becomes a problem extremely early; the tab width thus acts as a check against over-nesting. That's inconvenient in languages with significant "natural drift" like C# (where your code lives in a method in a class in a namespace, so you're already three indents deep before you've written anything; class-in-file languages tend toward a tab width of 4 or even 2, probably for that reason), but IIRC Go only has a single "natural indent" and the rest is all yours, so a tab width of 8 serves as a check against nesting code too much.
A lot of coding is reading examples online these days. Trying to read Go code on GitHub is awful since three forced tab indents feels like you're 50% across the screen already (and forget trying to read it on mobile).
Browsers don't really have a "set tab width" option that I've found (and forget trying to set user options on mobile browsers).
> a check against nesting code too much.
For expert programmers coding for long-term correctness, then yes. But beginners and lean "we just gotta ship this shit" startups will just create 9 levels of unreadable cruft.
GitHub allows you to set the tab size to <n> when viewing code by adding "?ts=<n>" to the end of the URL. I don't know if there is a way to set it for an account.
> Browsers don't really have a "set tab width" option that I've found.
The `tab-size` CSS property is supported by all browsers except MSIE, though only for an integer number of spaces (aside from Chrome 42+, which supports arbitrary widths). In most desktop browsers you can set up a user stylesheet to set it.
> For expert programmers coding for long-term correctness, then yes. But beginners and lean "we just gotta ship this shit" startups will just create 9 levels of unreadable cruft.
Would their unreadable cruft be any more readable with a tabwidth of 4 or (god forbid) 2?
I am vastly in favor of hard tabs, as they don't enforce a tab size. Question: why do you say they standardize on tabs as 8 spaces? I've done all my Go programming with a tab width of 4.
Hard tabs make "pretty/readable indent" formatting difficult too.
If you want to line up certain arguments across lines, you just can't because you're forced to an unknown width of alignment chosen by the reader. So, all your code will just be indents that ignore the specific visual alignment intentions of the author, and that reduces readability and understandability in multi-person teams (and programming is a team sport, not a one-person-does-it-all game).
But, that's arguing two points, right? That's like saying ASCII has built-in non-printing field separators (FS, GS, RS, US), so people should use those instead of CSV/TSV for text tables.
Sure, it's technically the right distinction, but it's not practical in any reality in which we live.
Trying to say "alignment" is distinct from "indent" and that tabs and spaces can be mixed depending on your intention is just crazy talk.
The only place tabs should be used is in Makefiles, and Makefiles should be autogenerated by CMake these days, never written by hand.
> Sure, it's technically the right distinction, but it's not practical in any reality in which we live.
It's not practical to do by hand (because most people can't be arsed to configure their editor to do it, or their editor is incapable of it in the first place), why would it not be practical when a tool takes care of it for you and everybody uses that tool?
> Trying to say "alignment" is distinct from "indent" and that tabs and spaces can be mixed depending on your intention is just crazy talk.
And yet gofmt seems to work.
> The only place tabs should be used is in Makefiles, and Makefiles should be autogenerated by CMake these days, never written by hand.
Why? If the distinction between indentation and alignment can be made and can be made correctly, it means anyone can pick the tabwidth they prefer and things will just look right for everybody, that's strictly superior to either tabs or spaces. That's been advocated for decades, it just doesn't work when you leave it to people, which gofmt doesn't.
I'm quite far from a go fan, but achieving the ideal of "tabs for indentation, spaces for alignment" is definitely praiseworthy, whatever you think of other formatting rules.
I've worked on a (C++) codebase which mixes tabs and spaces for this reason. It works okay - in particular, since my editor is configured for 4-space tabs while patches are reviewed in GitHub with 8-space tabs, any mistakes are likely to be caught in review. But it's not that hard to avoid making them in the first place if you remember to use the space bar to line things up.
gofmt works freaking beautifully for this though. Sure, if I tried to do it myself, I'd screw it up. That's why I don't do things that the computer can do better, easier, and faster than I can. I just hit save, and my editor routes the file through gofmt and refreshes it. It's to the point where I'll hit ctrl-s after moving a brace just to have gofmt reformat, CAUSE IT DOES IT FASTER THAN I COULD.
A significant fraction of the Go core engineers use proportional fonts when programming. On their screens, hard tabs are the only tabs that work. Spaces on proportional fonts are too tiny to be useful for moving code around.
In this message I'm trying to make the argument I think they would make (I personally use monospaced fonts).
It's not that Acme can't use monospaced fonts, it's that Rob/Russ/others don't want to use them. Proportional fonts are better fonts, so why not use them instead? One possible reason is that existing code formatting conventions assume that text is lined up in columns, but we have a tab key that magically lines things up: it's the whole job of the tab key. So, why not forget about space-based alignment, use the tab key for the job it was built to do, and get the advantage of using pretty fonts?
I used Emacs from 1993-2004, and switched to Acme for the 11 years to present. I don't miss trying to memorize all the key combinations from Emacs. I like that Acme presents a clean, simple, and direct Unicode interface to what I work with: mostly editing shell scripts, and running shell commands, as a build engineer. It takes a while to get used to mouse-button chording, but I don't even think about it now.

I constantly use guide files, in many directories, to store and modify commonly used commands to highlight and run, so I make many fewer typos now, and don't forget which commands to run or how I run them. I can also switch contexts a lot faster, both because commands are laid out in the directories where I use them, and because the Dump and Load commands store and retrieve sets of files in the tiled editor subwindows.

When I had to work on Windows I enjoyed having a pared-down unixy userland that I could write scripts in, to use also in my Linux Inferno instance (mostly communicated from one instance to the other through a github repo for backup and version control). The biggest drawback to me with Inferno is that so few other people run it that I have to compile it myself on any new platform I run it on (there are not really rpms/debs/etc. available to just install). But your experience with Plan 9 Acme might be better; I just prefer also working with the Inferno OS improvements, such as bind, /env, sh, etc.
I love two spaces. I used to use four but switched a few years back and now anything more than two looks strange to me. It's just a preference, but it does keep your line length shorter, which is nice if you like to adhere to a maximum line length throughout your code.
This would seriously bum me out, as I find the easiest way to extend the functionality of an existing Python function is to add a new parameter with a default value. That way, regardless of whether the existing code base calls the new or the old version of the function, it performs the same way it always has.
Since Go is statically-typed and compiled, it's much easier to refactor a function compared to Python. Change it and fix everywhere the compiler complains.
That's fine for internal functions, but a big problem when publishing any kind of interface. It's really nice to be able to extend an interface without breaking stuff or adding cruft.
No, the authors specifically said that they would like to have generic features but couldn't figure out a way to implement it without unacceptable performance problems.
I'm pretty sure the feature will show up in the next few minor version increments.
> I'm pretty sure the feature will show up in the next few minor version increments.
No it won't. People need to stop expecting generics, because it will never happen. Not that I agree with this, but it was made pretty clear on the go-nuts mailing list that the Go team wants to keep Go's type system "simple".
This is one of the things I like about Go: it's "done." In exchange for passing on extensions that might make certain use cases easier, we'll avoid the bloat and have decades of backward compatibility.
We just came out of a decade of nifty language mania. What I learned is that languages are boring but problems are interesting. Algorithms and solutions are interesting. A great solution to a challenging problem is really interesting even if it's in the most boring language ever.
I have code on my machine written in C in the 70s because C is largely "done." People today continue to write interesting stuff in C. Neuromancer was written in the same language as Lord of the Rings and Moby Dick, too.
C is clearly not done, actually. Both C99 and C11 added, deprecated, and even removed features. You can run C from the 70s not because the language didn't change, but because it retained backward compatibility. It would be pretty surprising if Go avoids even the level of change C has gone through.
Basically they dismissed every implemented approach, despite generics obviously working in other systems. (And it's hardly a "new" feature unless we're counting in multiples of decades.)
> Basically they dismissed every implemented approach, despite generics obviously working in other systems.
But working at a price. And Go isn't willing to pay the price (especially in terms of compile time). If Go ever adds generics, it will be with a new approach that doesn't blow up compile times.
C# compiles and JITs incredibly fast and has generics. F# has the same generics and compiles far slower. The reason isn't compile times. (Especially since one impl of generics is just generating code, which is cheap.)
IIRC the reasons on the list were the standard tradeoffs of memory space and so on. Do they emit specialized versions for each function, or not, and so on. Again, stuff that's working fine in other platforms.
A few years ago, Andrei Alexandrescu showed that the dmd compiler was actually faster than the gc compiler, and D supports templates. Recently, with 1.5, Go has shown that it is ready to take a hit on compile-time speed. I don't think the argument of compilation speed holds for generics. To me it sounds much more like a culture thing: generics, for better or for worse, add an extra thing to think about, and I think the Go authors and many Go developers are just not interested.
I find that to be a very odd statement. Usually, the developer waits for the compiler in order to find out if the code compiles and executes properly. That is, every minute of compiler time costs a minute of developer time.
Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.
> I find that to be a very odd statement. Usually, the developer waits for the compiler in order to find out if the code compiles and executes properly. That is, every minute of compiler time costs a minute of developer time.
If your business starts to hit a wall on compile times you can buy a computer that can compile twice as fast. It's much harder to buy a developer who can think twice as fast. And every year the computers get faster and the developers stay the same.
> Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.
No, the cost of being unable to abstract increases exponentially as your system grows. If using language X lets you cut 500 lines from a 1000-line Go project, then when you have a 2000-line Go project, in language X you'd be able to cut 500 lines from each half of it considering each half in isolation - and then you'd be able to cut some more because of things that were common between the two halves - so you'd end up with just 750 lines of language X. And you pay the cost of extra lines every time you read or debug, year after year.
> If your business starts to hit a wall on compile times you can buy a computer that can compile twice as fast.
Buying a fast machine only gets you so far. Large C++ projects take minutes to compile even on the fastest machines available. Plus, you'd need to buy one for every developer.
> when you have a 2000-line Go project, in language X you'd be able to cut 500 lines from each half of it considering each half in isolation - and then you'd be able to cut some more because of things that were common between the two halves - so you'd end up with just 750 lines of language X.
While this is true in theory, in practice I think the effect is not quite as large. As the project grows, developers take ownership of certain parts of the code and become ignorant of other parts. This is the whole point of abstraction. Under these conditions it will take a heavy investment of time and effort to find and replace the things in common between the two halves. So you might cut 250 lines in common between two 1000-line halves, but you're not going to cut 25,000 lines in common between two 100,000-line halves without a serious amount of work.
I think Go's design shows awareness of this effect. The Go literature does not preach the battle against code duplication as strongly as, say, Java. The goal is to make it easy to understand the other team's 100,000 lines, even if that comes at the expense of some code duplication.
Note: I am not a Go programmer, but I do think that optimizing for "code entropy" (lack of duplicated code) over all else is a mistake.
> Under these conditions it will take a heavy investment of time and effort to find and replace the things in common between the two halves.
Maybe. I find the same patterns tend to show up in a lot of code, so very high-level libraries like scalaz or recursion-schemes (that you can't even think without a powerful type system) turn out to save code virtually everywhere.
> Note: I am not a Go programmer, but I do think that optimizing for "code entropy" (lack of duplicated code) over all else is a mistake.
Intuitively it does seem like other things should be more important, but I've become more attached to that measure through experience. Even seemingly innocuous duplication tends to go wrong over time.
It depends on the use case... C# is a wonderful language; since the addition of generics and lambdas, it's downright beautiful to work with. But this does come with a cost... even a simple hello-world console app has a pretty significant spin-up time compared to Go, or anything that is truly compiled.
In some cases, if you have long-lived services, then Java and .Net make sense... You can get farther with the code in place. If you are running one-off executable handlers, that need to start and finish quickly, then you probably would favor go.
It's entirely possible for different options to be part of a larger solution.. and while I agree, the lack of generics is truly painful... I remember C# before, and feel that Java's generics are a horrible implementation... I'd rather wait for a nice implementation just the same.
I cannot find any reason to believe that lambdas have anything, at all, to do with spin-up time. A hello-world console app wouldn't even be using them much (closures are just objects, so...).
And I doubt generics make a significant difference in runtime but I don't have a CLR v1.1 around to test it out. For comparison, a C# hello world takes about 10ms longer than a C one (both compiled with optimizations; .NET 4.6 C# 6 / MSVC 19) on my i5 Broadwell laptop. Timing as measured by "time" in bash (~25ms vs ~35ms).
I'm guessing you're talking about JIT in general and of course have a point there. I doubt it's significant for any significant values of significant.
I wasn't saying lambdas are a reason that it's slower.. only that it was a feature that made it really nice to work with.
I remember the difference being a bit more than that, on the order of half a second, but that was around the .NET 1.0 timeframe. I still used it for a lot of things because it didn't matter to me, but a couple of things I wanted to use it for at the time had too much lag for starting an EXE and getting output at the command prompt. Running as a service was a different story.
A 1.2Ghz early Athlon was a lot slower than what we have today as well... even so, depending on what you need, even 10ms can make a difference.
I don't think it has to be a tradeoff though. Look at OCaml or possibly D - decent type system, fast compilation and fast runtime performance. And I'd expect Rust to do even better.
> Worse, the developer time you spend due to lack of a feature, you spend while writing some code that would benefit from the feature. The compiler time you pay every time you compile - year after year, for some projects.
Except that this is a false dichotomy: you don't have to have a lack of features to get a fast compile. Incremental compiles have existed for a very long time in various language ecosystems and will achieve very acceptable results.
In addition, SSDs and multicore CPUs can be leveraged to decrease compile times, and these things are only getting better.
I imagine not that much in practice. It is not like someone is going to manually write out identical functions twenty times for each type they want to support. That's precisely what computers are good at doing and there are countless tools to do it painlessly.
The bigger problem is that Go doesn't have type inheritance or similar. Meaning, there's no great way to say that this generic function will only work with number types, for example. You leave the burden on the programmer to ensure their generalized function is applied only to types which it is intended to be used with.
While that is less than ideal, I cannot see that increasing developer time by a significant margin.
I am afraid I am not entirely sure of what you are trying to get across here. Your mere mention of go generate indicates to me that you do understand my point about computers being able to free the programmer from doing the drudgery of implementing the same generalized function twenty times. And since you are familiar with go generate, I expect you also realize there are seemingly endless tools that exist to solve this specific problem.
I _think_ what you are trying to say is that templates in Go are less convenient than in some other languages. That is a completely fair assertion. But the idea that having to type `go generate` occasionally adds significant man-hours to a project seems a little far-fetched. You could even:
alias go='go generate && go'
I completely understand the appeal of templates/generics being a first-class language feature. Not even the Go authors themselves discount the usefulness. I don't understand why the lack of them is adding so many more man-hours to your projects? The overhead of working around the lack of them should not be that significant, even if less pleasant.
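For readers who haven't used it: `go generate` is driven by magic comments in the source. A minimal sketch is below; the `stringer` invocation is illustrative (it's a real tool from golang.org/x/tools, but here the method it would generate is written by hand so the example runs on its own):

```go
package main

import "fmt"

// A //go:generate directive tells `go generate` which command to run for
// this file. For example:
//
//go:generate stringer -type=Color
//
// Running `go generate` would produce a color_string.go containing a
// String() method for Color. Until then, the hand-written equivalent:

type Color int

const (
	Red Color = iota
	Green
	Blue
)

func (c Color) String() string {
	return [...]string{"Red", "Green", "Blue"}[c]
}

func main() {
	fmt.Println(Green) // the Stringer makes enum values print by name
}
```

The directive is inert during a normal `go build`; the extra step only runs when you invoke `go generate`, which is the workflow cost being debated here.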
A fantastic idea: even slower compiles (since you need to parse extra source files), a more complex workflow, and extra files which developers have to remember to ignore.
This is exactly the case where the problems of a language are solved by tooling. That attitude is easy to find in the Java ecosystem, where the answer to every question is "use an IDE". Interestingly, Java recently abandoned the idea of feature stagnation and started improving the language. I'm curious when this will happen to Go.
I'm not even aware of go compiling most of the time (it's typically under 1 second). go's build system handles these sort of things without a more complex workflow.
> I'm not even aware of go compiling most of the time (it's typically under 1 second).
That's beside the point. Codegen from extra on-disk files can only be slower than codegen without them, so "generics are slow to compile" and "Go has go generate" don't make sense together, yet they are used together to assert that generics are bad and that anyway Go has a replacement.
> go's build system handles these sort of things without a more complex workflow.
No, it doesn't. If you're using go generate you have to run go generate. That's a strictly more complex workflow than not having to run it.
I didn't say generics were bad. I said compilation is so fast that I didn't notice it.
While I agree that your assessment is "strictly" correct (it is more complex), I think you're being a bit literal; it's just another command. Most of my builds have hundreds of commands.
Incremental compiles have existed for quite some time.
Go is only unique in that it can do a complete recompile in very attractive times. The only hitch, of course, is that you sacrifice features known and loved in other languages for over a decade.
- What is Go well suited for other than network programming?
- Why might one decide to write the backend API of their web app in Go, compared to say Grails, Python etc.
Other than some quite significant performance gains, I'd say the main upside for the case of a backend API would be channels. Concurrency is dead simple in Go - and if you want to do 5 different requests to ElasticSearch in parallel and merge the results when all of them are finished (like we do for Universal Search), that's just a few lines of very readable Go. Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.
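To make the "few lines of very readable Go" claim concrete, here is a minimal sketch of the fan-out pattern; `query` is a stand-in for a real ElasticSearch call, not an actual client API:

```go
package main

import (
	"fmt"
	"sync"
)

// query is a stand-in for one ElasticSearch request (illustrative only).
func query(shard int) string {
	return fmt.Sprintf("result-%d", shard)
}

func main() {
	results := make([]string, 5)
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = query(i) // each goroutine writes only its own slot
		}(i)
	}
	wg.Wait() // block until all five requests have finished
	fmt.Println(results)
}
```

Each goroutine owns a distinct slice index, so no locking is needed; `wg.Wait()` is the "merge the results when all of them are finished" step.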
Moreover, I love static deploys and cross compilation with Go. Compile your app (even to Windows!), copy it to a server, simply run it as a single binary. No dependency management, no apt-get & easy_install & pip, it just runs.
While Go does provide channels, I'd argue that they are not dead simple. I'm not saying this to bash Go, and I have willingly used it to solve problems. But I think it needs to be made more clear that this often-praised aspect of Go may disappoint those who are familiar with alternative techniques available in mainstream (read: not Haskell) languages.
For instance, look at the "Go Concurrency Patterns: Pipelines and cancellation" article: https://blog.golang.org/pipelines You'll notice the line "We introduce a new function, merge, to fan in the results" and after that, you will see how you have to write merge() yourself for every different data type that you use. Yes, you will have to repeat this same type of code, over and over, any time you want to pipeline, fanout, or merge, a new data type (unless you resort to interface{}). Furthermore, you will have to use the Go race detector to make sure you didn't actually mess something up.
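For reference, this is roughly the `merge` the article has you write, specialized to `int`. The point being made above is that nothing here generalizes: merging `chan string` means writing this again verbatim with the type changed.

```go
package main

import (
	"fmt"
	"sync"
)

// merge fans in any number of int channels into one. The element type is
// baked in; supporting another type requires a copy-pasted variant.
func merge(cs ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	wg.Add(len(cs))
	for _, c := range cs {
		go func(c <-chan int) {
			defer wg.Done()
			for v := range c {
				out <- v
			}
		}(c)
	}
	go func() {
		wg.Wait() // close out once every input channel is drained
		close(out)
	}()
	return out
}

// gen turns a list of ints into a closed channel, for demonstration.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

func main() {
	sum := 0
	for v := range merge(gen(1, 2), gen(3, 4)) {
		sum += v
	}
	fmt.Println(sum) // arrival order is nondeterministic, but the sum is 10
}
```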
I can't speak for Python or Ruby, but if you are using Node.js you can use a library like Bluebird which provides promise combinators. Then it's very easy to perform 5 requests and to handle errors and cancellation on one or more requests. You can do this and more on any arbitrary data type without writing merge() and nesting coroutine returning functions repeatedly.
So for handling async operations like dealing with APIs, I personally prefer tools like promise combinators or reactive programming (see Reactive Extensions for Javascript, also available in many other languages, or supplies in Perl 6 if you're crazy like me) over the significantly more manual approach of using typed channels in Go. I'm sure there are tasks where tight manual control of channels is important, but for the type of work I've been doing Go is simply too low level.
Generics can be nice, but they're clearly not "necessary" to any of the languages that don't support them. Besides, the cost that generics would exact in terms of syntax complexity, compile time, and startup time is something that many Go-detractors dismiss as irrelevant.
Myself, I think enforced tab-based formatting is utterly insane, but ultimately it doesn't matter. Part of the philosophy behind Go is to keep certain things very simple, and leaving generics out is a big part of that. I don't think that they're going to change their mind because non-Go programmers advocate for them. If that makes it the wrong language for your project, then there are so many others to consider.
But Go already has generics: channels, maps, make(), len(), etc. all work on generic types. You couldn't have typed channels without generics!
Go just doesn't have any syntactical way of declaring generic types. But given the above, nobody can argue that generics isn't extremely useful, even in the context of Go. "Oh, but the official library is special because it's part of the language", someone might argue. But one would have to be fairly damn obtuse to claim that Go would be better without the built-in generics, or that for some reason those benefits wouldn't extend to the language as a whole, if implemented.
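To illustrate the point: the built-ins are fully type-parameterized, while user-defined code falls back to `interface{}`. A small sketch (the `asString` helper is made up for illustration):

```go
package main

import "fmt"

// asString is the interface{} workaround in function form (illustrative):
// it accepts anything and recovers the type at runtime, trading a
// compile-time error for a runtime check.
func asString(v interface{}) (string, bool) {
	s, ok := v.(string)
	return s, ok
}

func main() {
	// The built-ins are parameterized over their element types; the
	// compiler type-checks each instantiation, which is what generics do.
	m := map[string][]int{"a": {1, 2}}
	c := make(chan float64, 1)
	c <- 3.14
	fmt.Println(len(m), <-c) // 1 3.14

	// User code gets no such mechanism and falls back to interface{}.
	s, ok := asString("hello")
	fmt.Println(s, ok) // hello true
}
```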
I'm really surprised the authors of Go decided to allow generics only for the official library, because they've had to jump through some serious hoops to avoid it. If you look at "reflect", "builtin" and "sort", to pick a few, those packages are a graveyard of typing awkwardness. Look at the sorry state of the sort package, which even has a special function to sort strings. It goes on and on; every time I work with Go code, I end up implementing functions like min() and max() and cmp(). Why is "range" special? Why isn't there a way to generate iterators over any value? Etc.
Go is "simple", sure, but ends up being rather complicated as a result, with tons of the same code having to be written over and over for different types, and tons of typecasting between interface{} and real types, and so on. Nobody (as far as I can see) is asking for Haskell-style typeclasses or operator overloading or type traits or higher-kinded types or any of that. Several up-and-coming languages (such as Nim) implement generics without going overboard with complexity; quite the opposite, generics makes those languages simpler.
>Go is "simple", sure, but ends up being rather complicated as a result, with tons of the same code having to be written over and over for different types, and tons of typecasting between interface{} and real types, and so on. Nobody (as far as I can see) is asking for Haskell-style typeclasses or operator overloading or type traits or higher-kinded types or any of that.
Exactly. We're not asking for much here, just the ability to do the most basic kind of type parameterization.
The only benefit afforded by missing generics is simplicity in the compiler. It does nothing or worse than nothing when it comes to simplicity in actual Go code.
Even if generics never come to Go, my hope is that at least common pipelining tasks like merge() will become part of the stdlib and have magical generic support... but I haven't heard anything to indicate that such a thing will happen. :(
Python has good co-routine support. In Python 3.5 we have async/await and the concurrent.futures module. For I/O bound tasks like running REST APIs or MapReduce jobs I don't see the GIL being much of a problem; you spend eons waiting to do a few milliseconds of work then wait some more.
> While Go does provide channels, I'd argue that they are not dead simple.
I'd agree.
In my experience it's easier to explain a solution to a fundamental problem than to an abstract one. Channels, futures, promises... all very abstract concepts; useful to the cognoscenti but none are satisfactory at solving the fundamental problem of parallel execution. Hence everyone in their camps about which is right for which tasks.
So even with channels parallel programming is still difficult. You just have the added burden of a different, unique abstraction. Every language ecosystem either has their own community-adopted one or a plethora of them.
I think I'll reserve, "dead simple," for when we have a universal language of parallel execution. Until then... we don't know how to compute!
> Concurrency is dead simple in Go - and if you want to do 5 different requests to ElasticSearch in parallel and merge the results when all of them are finished (like we do for Universal Search), that's just a few lines of very readable Go. Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.
This example is terrible. Waiting for DB responses, ElasticSearch queries, and long running IO is one place where Python and Ruby multi-threading with a GIL works great. A GIL means multiple threads can't execute Python code at the same time. All of those tasks are by definition NOT running Python code, they're sitting around blocked waiting for responses.
A better example would be something like image processing, where an image is loaded into memory, broken into multiple independent chunks, and each chunk is processed at the same time in multiple threads. In Go that should work just fine, but in Python and Ruby each thread will spend a lot of time waiting for the GIL.
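A rough sketch of that image-processing shape in Go, with `brighten` standing in for real per-pixel work (names and the byte-slice "image" are illustrative, not a real imaging API):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// brighten is a stand-in for per-pixel work on one chunk of an image.
func brighten(pixels []byte) {
	for i := range pixels {
		if pixels[i] < 245 {
			pixels[i] += 10
		}
	}
}

func main() {
	img := make([]byte, 1<<20) // pretend this is decoded image data
	workers := runtime.NumCPU()
	chunk := len(img) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if w == workers-1 {
			hi = len(img) // last worker takes any remainder
		}
		wg.Add(1)
		go func(part []byte) {
			defer wg.Done()
			brighten(part) // each goroutine owns a disjoint slice: no locks
		}(img[lo:hi])
	}
	wg.Wait()
	fmt.Println(img[0]) // 10
}
```

In Go these goroutines run on all cores; the CPython/MRI equivalent with threads would serialize on the GIL for exactly this kind of CPU-bound loop.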
>Other than some quite significant performance gains
I am aware that other languages perform better in benchmarks than, say, Python does. But in my experience, I've not ever found the speed of the language to be a bottleneck when I'm benchmarking and optimizing for scalability in a web app.
It's always something else. The database interface, the network, a crappy web framework, whatever. It always seems to be something other than the fundamental language that bogs things down.
I'll openly admit I might be missing something or that perhaps I haven't tried to scale high enough. I just don't get how it's relevant that x language is y times faster than Python when Python hasn't ever been the problem. There's always just so much more low-hanging fruit than the language.
> Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read
Actually that example does tend to be simple and readable in Python and Ruby. I've used a pattern similar to this to parallelize calls to a few back-end services from a Ruby app and it worked out great. The code I used looked something like this: https://gist.github.com/kevinmcconnell/8365521, which I think is quite readable.
I've definitely found Go's approach to concurrency to be very helpful in other situations; I just think that example could be a bit misleading.
> Try that with a GIL, sure, multithreaded Python and Ruby is possible, but it's not for the faint hearted and not as easy to read.
As others have noted, the example use case is one where multithreading with a GIL/GVL isn't particularly problematic. Moreover, both Python and Ruby have GIL/GVL-free implementations (in Python's case, Python 2 only in Jython/IronPython; in Ruby's case, much more current in the language level supported, since current JRuby is Ruby 2.2-compatible.)
I'm confused: a backend for a web app's API is a "networked" program.
With Go (as compared to Python), it's super easy to define some structs, serialize and deserialize them as JSON that you check, and send them to the client quickly and efficiently.
The more annoying question for web apps (and productivity in general) is the static (+-strong) vs dynamic (+-weak) typing. Go, like Scala, does a great job inferring types (IMO) letting you feel fairly productive, but some people will always yearn for the "anything goes" dynamism (sadly combined with the lack of compile-time checking that would save you on larger projects).
Apart from the already mentioned reasons, another one would be maintainability.
Go was designed with this in mind and some of its shortcomings come from this approach. It is a relatively small language, with one way of doing a certain thing and a mandate of using standard libraries and strict code formatting.
It sounds limiting but it is productive and makes every Go developer write in the same style.
So whether it's you returning to your project after a few months or a new developer joining, you'll be able to (re)engage with the code quickly.
This is one of the reasons I'm pushing Go at work.
I've had to maintain our current apps which have received very little maintenance work during the years and I wish Go would have been present when they were made.
> What is Go well suited for other than network programming? Why might one decide to write the backend API of their web app in Go, compared to say Grails, Python etc.
You'll get a lot of different opinions on this. I'll just speak based on my experience, since Python was my primary language before coming to Go.
Everything I used to use Python for, I can do faster in Go. The notable exception to this is statistical analysis, as Go does not have any FORTRAN bindings[0], whereas Python does (through Numpy & co.). I still use a combination of Python, R, and other languages for this.
I came to Go for the static typing and native concurrency[1], but I stayed because I'm more productive in it. Python and Go are about equally fun to write, but because I can build things much faster in Go, it ends up feeling more rewarding overall, given that my time is limited.
If you don't really have a pressing need for anything else, you can stick with Python. But I found Python to be slow to develop in and cumbersome, and after trying Go out, I realized I was 100x more productive in it.
[0] apparently this may no longer be true; if so, that's exciting news!
[1] this was also before Python had built-in async. I still prefer Go's concurrency model and syntax, but at least Python has this as an option now.
Everything I used to use Python for, I can do faster in Go. The notable exception to this is statistical analysis, as Go does not have any FORTRAN bindings, whereas Python does (through Numpy & co.).
Coming back to the main question. I use Go as my primary language these days. E.g., my dependency parser and neural net dependency parser (which uses the aforementioned BLAS binding) are written in Go:
What I like about Go: it's C-like without the unsafety of C nor the complexity of C++. Moreover, I've found that working in Go is generally as productive as Python (short compile times, good tooling, completion in vim, lightweight package system), while being much faster and better-fit for large projects.
What I dislike about Go: it's a cliché, but the lack of parametric polymorphism is jarring.
Assuming you need float64, BLAS and LAPACK are most easily accessed through the wrapper packages godoc.org/github.com/gonum/blas/blas64 and godoc.org/github.com/gonum/lapack/lapack64. This way code can be written to either use the native go implementations or the assembly/c/fortran ones.
100x? Even 10x is very significant. Why is it that you feel so much more productive in Go? Is it ease of refactoring? Lack of frustrating bindings bugs? Something else?
Yeah, I know 100x is a huge multiplier, but I've actually tracked myself to the extent possible, and that really is the right order of magnitude for me.
There are a number of reasons. I find refactoring is way, way simpler. Refactoring in Python feels painful enough that I always put it off until it gets absolutely necessary. With Go, I find it takes way less time, and also less mental energy.
The static typing and strict compiler (enforcement of imports and lvalues, etc.) works well for my writing style. I write what's in my head, without worrying about the small details (only the high level logic), and I can be confident that it'll be easy to fix the small details (syntax, typos) afterwards.
It's kind of like how writers often work - they dump words on a page, focusing on getting the main points across, and it's the editor's job to make sure they fix the grammar and style. I end up having a conversation with the compiler and when it stops telling me anything, the code usually does what I want it to.
I agree that static typing at least for me gives massive reduction in time in medium to large projects. And despite what many think about Java I am able to be a multiplier faster than Python with it (I would say as much as 4x at times).
What I don't understand is all these people claiming that the language is slowing them down by massive factors. I guess I'm either incredibly stupid or people on HN are incredibly smart (probably both) but I just can't even think fast enough for the syntax of the modern languages to really slow me down (ignoring copy n' paste exceptions and build time issues). For example to create the mental model of a dumb video game I'm making is taking more time than the actual coding.
In fact I would say if anything slows me down its bugs and/or features missing in immature open source libraries, crappy tooling, and lack of documentation and most importantly not fully understanding the problem (or what to create in terms of video game). And I can say this definitively about Rust... the language is awesome but I'm slow as crap in the language because of random stuff breaking and lack of good libraries.
I'm not saying Go has the above issues (on the contrary it now is rather mature) but I have hard time believing the 100x (and that is my opinion :) ) of going from problem to fully coded solution.
The static typing and strict compiler (enforcement of imports and lvalues, etc.) works well for my writing style.
It seems you more enjoy just "Not Python" than Go itself.
Go is still pretty awful and designed without really consulting what would help users. It's just designed for what the creators want, but now millions of people are trying to use it—not just 8 people inside the Nation of Google. It's getting worse for the average developer as time goes on. But, one thing developers love doing is understanding broken things, so in a way, the more difficult a system, the more nerds like it, because it gives them accomplishment and the ability to exclude non-understanders. (Plus, Tabs? Tabs? In 2015? Is the Go development process run by monkeys living in Antarctica?)
> It seems you more enjoy just "Not Python" than Go itself.
No, I have experience with lots of languages. I'm comparing to Python here because that's what OP asked for.
I don't think Go is "just designed for what the creators want", but even if it were, I don't care, because that's what I want as well.
I'm talking about my personal experience of Go based on my own experience writing Go full-time for over three years, which is as long as the language has had a stable release. With all due respect, it's highly unlikely that a review from someone using it for four days is somehow going to change how productive I've already found the language makes me.
With all due respect, it's highly unlikely that a review from someone using it for four days
But, the review goes into a rant about how the Go community just thinks it's the greatest and refuses to listen to outside opinions due to a sense of inbred and ungrounded superiority... kinda like what you just did there.
> the Go community just thinks it's the greatest and refuses to listen to outside opinions due to a sense of inbred and ungrounded superiority... kinda like what you just did there.
It's not 'inbred superiority' to think that my own experience is a better predictor of what works for me than what someone else thinks. I never said "Go is the greatest". I said that, based on my experience with a wide range of languages, Go is the best for me. OP asked "why might someone decide to use Go", and I answered by explaining why I decided to use Go.
From your comments throughout this thread, it seems that you really dislike Go. That's fine, don't use it. But why try and pick fights with those of us who do use it?
my own experience is a better predictor of what works for me
blub paradox. We only know what we know. Outside opinions are very valuable to show us better things exist.
But why try and pick fights with those of us who do use it?
Because the language is bad. It's not conducive to reading code or writing code. It's code for code's sake. It's a bad platform. The more it grows the more programmer minds it corrupts. The more it grows the less easy it becomes to avoid in general.
Programming is important. Our programs will outlive us. We can't afford to have the entire system run on what a small group of isolated people feel is right. Systems have to be powerfully expressive and powerfully legible without succumbing to failure-to-understand errors due to typography or mass indirection ("magic") in too many places.
Programming isn't H&M fast fashion. It's The Golden Gate Bridge. If you screw it up in 2015, you're at risk of killing people, ongoing, in the future, in perpetuity. (also see: flash, android, java, the unmaintained openssl debacle, ...)
A very significant reason why I love Go is that it fits in my head. I can use all the keywords, constructs, all the builtin functions effortlessly, while the standard library is a breeze to work with.
If you don't like LOC, then what sort of interpretation are you giving to "100x more productive"? I'm not so sure LOC is way off.
But in any case, if a person says he is "100x more productive" in one language than another, then I expect he can complete in 1 to 7 days what would take 100 to 700 days (i.e., 3 months to 2 years) in the other language.
Strange that I have to even highlight the obvious, but anyone who says they're 100x more productive in Go than Python is exaggerating to an extent that I find it hard to trust anything they say.
Again, "lines of code" is a shitty metric. Functionality might be one, customer support might be another... but even if you choose LOC, 100x is not impossible or even unlikely. When learning Go, I went back to undergraduate coursework, and picked some example problems from an advanced programming class. I solved these problems with Go.
Seventeen years ago, I dropped that class, because it was taught in Java. According to pure LOC, I'm something like 10,000,000 times more productive in Go than in Java, because Java was so gross I chose to withdraw from the course rather than waste my time rolling in mud.
Perhaps, setting LOC aside, a given programming languages matches a developer's thought habits better. In such circumstances, not only does she finish the nominal task quicker, she is able to respond more quickly to QA feedback, or to changing requirements, or to other business considerations. Perhaps a different paradigm enables her to foresee shortcomings in an existing design. These things not only improve one developer's productivity; they make the entire team more productive.
There is so much more to programming than "how many lines of text did you shit out today" and it is incredibly naive to behave as though LOC is the prime metric. We, as an industry, have been aware of this for at least forty years now. It's time to stop.
It's difficult to write a program without writing lines of code.
There are dumb measures, smart measures, and dumb ways of applying smart measures. We could extend a little intellectual generosity and assume intentions were well-meant, not that a 50-year-old used-car salesperson turned programming manager was irrationally demanding X lines of code per day.
- The language is simple. No need to lookup strange keywords or try and figure out too much cleverness in the language itself. It promotes readable code.
- A basic and good type system. I don't have to worry much about that int turning into a string.
- Strong default libraries. No need to rely on third party libraries to do a lot of the common API things.
I've used Go for neural networks and machine learning with great success. Fast, safe, fairly straightforward. Much easier to parallelize than with Python.
I wrote some scientific code in it (some parsing, some processing). It was much easier to ensure that the code was secure (important when you parse possibly evil files), compared to C++, and it was much faster than Python and pretty easy to maintain.
Situations where you want a modern, strongly-typed language with easy C bindings.
IO-heavy operations, especially where it's primarily IO bound but there is a decent computational aspect (eg, working with an embedded database or doing lots of flatfile munging).
I've found the place where Go crashes and burns is if you're trying to implement / use lots of sophisticated custom data structures. It really really wants you to use the builtins, and the lack of generics is irksome.
We use golang for a lot of our ops stuff. Earlier it was a repo of a mash of different python/ruby scripts. The runtime dependencies were hard to keep track of and they had to be installed on all the machines depending on the script being run.
With golang it's just one static binary that is scp'ed across the nodes. This ease of deployment isn't talked about much.
We were also able to go further and put a RESTful API on top of these tools using negroni and gorilla/mux.
If you are interested in an approachable API for CSP, Go is good, along with Clojure's core.async. If you want to write asynchronous code, they are both great leaders in that area right now. If you don't need that API, I personally find the reasons harder to find in favor of Go.
I think Go is a great language for sophisticated scientific computing. Go scales well to complex code bases, and thus it is a great language for building an ecosystem. The tools are not all there yet, but the foundations are being built github.com/gonum
"Typeset by the authors in Minion Pro, Lato, and Consolas, using Go, groff, ghostscript, and a host of other open-source Unix tools. Figures were created in Google Drawings."
groff still going strong... although it seems like Kernighan got tired of drawing using the 'pic' language...
Interesting to see that the direct lineage from Pike's prior languages and CSP experiments is reaffirmed. I wrote about this here earlier, with some notable disagreements in response: https://news.ycombinator.com/item?id=9711639
Yes it absolutely makes sense to buy this book to learn go; judging by the contents and first chapter it will be comprehensive and a very clear introduction to the core concepts of the language.
It will probably end up being the definitive reference.
^ This! It is one of the reasons I like Go. Effective Go is all you need to understand the language and be able to read even the most complex Go programs.
This book hasn't come out yet. Without having read it, I'd probably recommend it when it does, but for the time being, the online Go tutorial[0] is a great introduction.
I found "Go Blueprints" a good introduction for someone who wants to get into practical projects quickly, and doesn't need an introduction to programming. If I were to write a book about Go it would be exactly that, demonstrating how useful Go can be for day to day tasks, and for cli tools and servers.
You can easily learn go by going through all the pages under "Learning Go" here: https://golang.org/doc/ and probably some practice projects.
This book might be a good alternative but we can't know yet.
This is much better than the previous Go documentation, particularly in the concurrency area. The previous Go documentation introduced goroutines and channels, stated the mantra "share memory by communicating; don't communicate by sharing memory", and then gave examples with variables shared between goroutines.
It now seems to be recognized that, in Go, if you want to lock shared data, use the lock primitives. Don't try to construct locking primitives from channels; that's error-prone and hard to read. This new manual seems to recognize this. When they want a shared counter, they use a shared counter with traditional locks.
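The shared-counter-with-a-lock shape being described looks roughly like this:

```go
package main

import (
	"fmt"
	"sync"
)

// counter guards its state with a plain mutex: boring, explicit, and easy
// to audit, rather than a locking scheme improvised out of channels.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

func (c *counter) value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.value()) // 1000
}
```

Without the mutex, the `c.n++` reads and writes would race; the lock makes each increment atomic, which is exactly the job the 2009 examples tried to do with a channel-as-lock.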
See the very first example of "channels" in that 2009 version:
"In the previous section we launched a sort in the background. A channel can allow the launching goroutine to wait for the sort to complete." ...
c <- 1; // Send a signal; value does not matter.
That's using a channel as a lock on shared data. Not seeing that in the new book. This is a step forward.
(What Go really needs is Rust's borrow checker and move semantics, so that when you communicate on a channel, the compiler checks that you're not sharing too much.)
Of course, if you care about speed at all, you will stay away from Go's mutexes. Time your own code; it is shocking how slow they are. I haven't published my own test times, but with a quick search, here's an example of a 10ms loop taking 2s with mutexes: http://www.arkxu.com/post/58998283664/performance-benchmark-...
A mutex held for up to 10ms on each iteration, over 500 iterations, averaging out to 2s doesn't seem surprising at all. In fact, this is quite expected. It would take considerably longer if the random sleep hit 10ms every time.
If you modify the code to release the lock before sleeping...
...and reacquiring it after, it still finishes in 10ms on my machine, as you'd expect. As you can see, the problem isn't so much that locks are inherently slow in Go, but rather that the mutex, as the acquisition function name implies, is doing what it is supposed to do: lock. What is true is that you have to be careful to use them properly.
This is a good place to ask this (because Go posts attract a lot of commenters, even those who dislike Go and like some other language):
Which language/framework would you choose today for writing web services? Preferably with the following characteristics: static typing (or at least static analysis), easy deployment (e.g., generates a single binary like Go), supports concurrency very well, is small/simple, has good tooling and debug support, and is fun to write. Go (except for good debug support)? Elixir (dunno how good it is with deployment and debugging)?
Java. Static types (less sophisticated than state of the art, more sophisticated than Go), fairly easy deployment (you can easily make a package with everything except the JVM, so it's your one binary + /usr/bin/java + libjli + lib{c,dl,pthread,z}), concurrency is state of the art (whether old-school threads-and-locks with java.util.concurrent, actors with Akka, fibres with Quasar, whatever you like), is a quite small and simple language with a standard library that admittedly rivals any European capital for size, complexity, fascinating details, layers of history, and alleyways that stink of piss, tooling and debugging is state of the art, and fun, well, Java's really more golf than base jumping.
Documentation, performance, and community are varied but often excellent.
Libraries and frameworks abound (some are even quite good). Perhaps surprisingly, everyone seems to have settled on JAX-RS/Jersey as the way to write REST APIs. It's what's in the EE spec, so it's what guys in polyester suits in banks are writing, and it's what's in Dropwizard, so it's what gals with bright blue hair in startups are writing. The only real alternative is Spring MVC, which is really very similar, but requires that you buy into Spring.
Probably not the answer you wanted. But not bad for a language that's a day older than Braveheart.
If going down this route, I'd probably look into using Kotlin with dropwizard, rather than Java. Maybe it's just me, but the amount of code you have to write in Java that really should be auto-generated (and often is, by an IDE) is absurd.
I've not toyed extensively with Kotlin yet, but so far it does look like a "modern Java done right". Most of the code you don't actually need to write, or read -- like getters and setters that just get and set.
And it's close enough to plain Java that there's little overhead, and not a whole new language -- like with Scala, or Clojure.
Yes! Kotlin and Ceylon are both really interesting, as attempts to judiciously add and remove features to/from Java, to produce something recognisable, and easy for the unwashed masses (ie me) to pick up, but still much better.
My only concern is that they don't have much depth of community or history yet. That can be a very interesting and rewarding time to pick up a language, but it can also be a drag on actually getting things done.
True. Then again, that code that your IDE generated for 10 year old Java doesn't really have a community that refreshes it either, and it sits there in your VCS, and takes up space in code reviews etc.
I would choose Go. It comes with an excellent built-in package, net/http (https://godoc.org/net/http), that quickly allows you to set up a web service. And Go's mechanism of implicit interface satisfaction allows you to do some really cool things. One of the common gripes about Go is error handling: your code ends up with a whole bunch of 'if err != nil { stuff }'.
Matt Silverlock (http://elithrar.github.io/article/http-handler-error-handlin...) talks a bit about how you can use some of Go's strengths to create web services without any frameworks or packages that aren't built in.
Go's net/http is very powerful and lets you do a great deal without needing any external packages.
If it was up to me, I would write the service in Go and run it in a container somewhere. But the container is not required :)
Yes, just using built-in packages, and not depending on too many third-party packages is definitely attractive to me. The code full of "if err != nil" is an issue that I've also noticed. However, I'm hopeful that this may actually remind me to be more exhaustive in error handling.
Why did you mention the container? What kind of benefits would you get from a container for Go?
A container is just convenient. Given Go's static binaries, you could also just ship the binary over to a server and be up and running. But it's nice to have a container with all the bits/config inside that you can just copy over. The benefits of a container are the same as for any language.
Elixir is what I would choose. I think of deployment as a one-time cost. So the cost of exrm+docker solves your build problem, and then your deployment is like a "single binary" (except it's a single Docker container, which is sufficient for me, maybe not for you).
As I understand it, the Go runtime must pause to resolve atomic locks across all the running goroutines. So when you're running a dozen goroutines in your app it's not a problem, but you couldn't do thousands of them. Meanwhile, Elixir can handle millions of processes without blocking like that, and that, plus the strength of immutable memory (vs. shared memory in Go), makes me like Elixir, especially when concurrency is important. (I don't consider Go to be a concurrent language in this regard.)
Good tooling and debug support: I can't compare the two languages in this regard; Go seems to have good library support. Elixir has excellent tooling and debug support (REPL mainly, and really nice error messages), and I think Elixir has good library support too.
Fun to write: So far for me, Elixir wins this hands down. Go is running neck and neck with Erlang in the "not very fun to write" range for me.
Of course this is all personal opinion.
The only objective thing is: if you need real concurrency and to build a distributed system (Go doesn't even have language-level support for nodes), then Elixir is the way to go.
If, on the other hand, you need maximum compiled speed on a single node, then Go wins hands down.
There's no such thing as Elixir processes. They're Erlang processes, because they're constructs of the EVM (the Erlang VM). Whether or not you consider the latter fun to write is irrelevant.
Thanks! The point about immutability is important. Based on the comments I've been seeing on HN, I had a feeling that Docker brings its own set of problems. But, it may be worth checking it out.
> I think Elixir has good library support too.
I guess the support will be even better if you consider using Erlang libraries.
I learned Erlang first, years ago, and absolutely fell in love with it. Somewhere along the way, when I wasn't writing much Erlang code, it crusted over, and now the Erlang syntax bothers me. While Elixir is more than a syntax, yes, that is part of its value for me.
To me, symbolically, Elixir and Erlang are the same language; the concepts I'm using most of the time are representable in both. I just find Elixir more fun and easier to write quickly.
Scala/Spray. Wonderful full-fledged type system, complete with a strongly typed model of HTTP (so that e.g. ContentType is well-typed, and takes a well-typed model of a MIME type, not just a String). Excellent concurrency support, very fun, all the JVM tooling/debug support. Deployment needs a tiny bit of fiddling - I recommend building with maven and using the maven shade plugin, then you get a standalone jar that you just need a JVM to run.
The unique selling point is that your routing is defined in a DSL that's almost as nice as the config file most systems would use - but it's ordinary Scala, well-typed and refactorable in the normal way because everything's just code. And because the type system has higher-kinded types you can use typed wrappers to represent cross-cutting concerns, with the lovely for/yield sugar for composing them. E.g. you can have database-session-in-view but in a principled way: you have a wrapper type that represents an operation that needs to happen in a database transaction, and if you want to compose two or more such operations you use the for/yield sugar, and it's all refactor-safe and you can tell which unit tests need a database (fake or real) because it's right there in the type. And at the route level you either have a marshaller that handles your wrapper completely transparently, or your own directive that works exactly like the built in ones because again, it's all just code. You can click through to the source of the directives and see how they're implemented, not just magic annotations like with JAX-RS. If you want to get super mind blowing you can even use for/yield to compose together directives to make a custom directive.
I've been partial to Go for the past few years. My toolkit mainly comprises Martini for the web framework, GORM for database handling (it also includes auto-migrations, which is awesome!), and Heroku for hosting (they have native Go support now), or even Dokku on your own VPS.
The software I write is primarily used in advertising, and Go allows me to write the software quickly and is able to handle a large load as you can see here: http://imgur.com/zpkjwlh
There is some overhead with Martini and GORM, but the ease and speed of development more than make up for the performance loss.
Negroni and Mux are great to use, and typically I will go back and refactor the code to squeeze out any extra performance if it's needed and the project has proven its worth; but Martini has middleware that is hard to beat and is easy to set up for getting proofs of concept out the door.
I think it is not so much about "good debug support", or "easy deployment".
It is really about what YOU, as an individual (without any regard to who wrote which language reimplementing its slice of CLisp), can do with a given language.
You're a rockstar in C? go for it, and just dockerize the result.
You're a Ruby ninja? fine, and Capistrano the result (or just dockerize it).
You're a Go ninja? why not, and then scp the result + restart the process via ssh.
You're a CLisp expert? fuck yeah, and then reinject the new code directly in the running process. Just because you can. Who will stop you?
I'm currently reading it, as I started programming in Go only a month ago. I'd never heard of goimports; it's a nice tool! Mix it with GoSublime and it does a really good job: http://michaelwhatcott.com/gosublime-goimports/
The book is well written and it looks like it covers a lot of common topics, I think I'm gonna buy it.
Just as K&R introduced us to "Hello, World," I'm amused they adapted their first program to a Unicode world: "Hello, 世界." Seems like a great first chapter, covering computer graphics and web server/byte fetching to boot.
I noticed that the lissajous program in 1.4, as included, generates non-random lissajous figures since the random number generator is not seeded. I couldn't find any reference to this in the text and this could be confusing to beginning readers. Is there a recommended way to submit errata?
We’ll discuss these topics only briefly here, pushing most details off to later chapters, since the primary goal right now is to give you an idea of what Go looks like...
It's probably clearer to use the default random source.
Good point, but I also think it will be confusing to beginners who compile and run the program only to find that it always generates the same figure without any explanation as to why. Both the package documentation and text say that the generated figures will be random.
The fix is just a couple lines and, I would argue, should be included in the source to eliminate the surprising behavior. http://play.golang.org/p/1WlhOdJ1pk
To run the examples, you will need at least version 1.5 of Go.
$ go version
go version go1.5 linux/amd64
Follow the instructions at https://golang.org/doc/install if the go tool on your computer is older or missing.
Would Go provide a viable alternative to C++ for numeric and computer graphics 'kind of stuff'? I have no problem with C++ but the better-than-python proclamations got me intrigued.
I am curious about the limitations around memory layout. Go provides a fair amount of control to the programmer: fields appear in the order declared in a struct (although potentially padded); structs declared as values, not pointers, are laid out in memory inside the declaring struct; and arrays and slices of non-pointer structs are arranged contiguously in memory.
I am not a C/C++ programmer; are there more powerful facilities provided in those languages?
D is a viable alternative to C++; you still get to enjoy near-native performance and a cleaner FFI compared to Go, without giving up the features of a modern programming language.
Also, Nim would be a good fit for those domains. Macros and custom operators allow one to write DSLs for numeric and scientific programming, and you have complete control over memory layout. There is a GC, but you can tune it very precisely.
I would guess Julia (or D) today, perhaps Rust in the not so distant future. I'm not sure if Go will ever be a good fit if you need really high and/or deterministic performance?
Stick to Python: Numba is a Python compiler that compiles numerical code to CUDA and LLVM and is faster than Julia (single-threaded and multi-threaded).
This is going to be the standard text for the language. The quality of writing is exceptionally good. It's high time the AW professional computing series had another hit.
It's a little odd, given that Kernighan and Pike have written a couple of excellent programming books together, and given Rob Pike's leadership in the Go project, that this one isn't K&P.
Can people post their opinions on static linking of Go binaries? Doesn't it result in larger binaries compared to those generated by C/C++?
The link is to an image of the front cover of K&R2 (Kernighan & Ritchie, "The C Programming Language", 2nd edition). You probably meant the image to show up in your comment, but it didn't.
I still prefer forced data abstraction, i.e., classes as a core construct. Go does not force that. Data abstraction, when properly done, maps the code more closely onto the problem domain to be solved. In the long run that makes the code more maintainable and extensible.
I am really not sure this is true. In my opinion, classes force a premature taxonomy almost all the time. I like that you can simply attach methods to structs when you decide you want object-like behavior.
I understand both points of view but lean strongly toward why-el's. If there's one thing Java has taught us it's that "premature taxonomy" (love that term) can be a huge unnecessary tax on small to medium sized projects.
It's safe to say that C++ had an influence on the Go language, if only as an example of what not to do. The emphasis on language simplicity and compile speed seems to be a result of the fairly large C++ code bases at Google.
I have no idea what they will specifically say regarding calling C from Go, but the table of contents says that part is only going to be about five pages.
I really wish they would beef up this portion of the book.
The fact that it is a bit non-intuitive to call C from Go is exactly why there are so many 'pure Go' libraries. I consider this a short-term hurdle with a long-term benefit.
There are lots of libraries in C (and Java) but the question is which ones are essential for whatever you're trying to do? There are lots of Go programs you can write just using the standard library. That appeals to me since I'm generally in favor of avoiding unnecessary dependencies. But if you need it, you need it.
It really depends on what you're doing, but even in common obvious stuff like networking, there can be functionality in libcurl or OpenSSL that you really need but isn't replicated in Go. Several years ago I remember trying to do SSL certificate verification with Python and finding the standard library lacking.
Also bear in mind that good compatibility with C also means good compatibility with just about every other compiled language, including Rust.