Warming up to Go

Posted on August 28, 2015

Lately, as I’ve had more experience with it, I’ve started to warm up to the language Go (aka “golang”).

Complaints

A year ago, I had a lot of complaints about Go.

I complained that, while you can’t use it as a true systems language (in the write-an-OS-in-it sense), you still get some of the annoyances of systems languages, like needing to think about whether or not to use a pointer. The type system sat in an unproductive valley between being strong enough to be automatic (like in Haskell) and weak enough to stay out of the way (like in Python). I was struck by gotchas like the fact that slicing a Unicode string slices at byte boundaries, not characters. I called it out for not following its own advice of simplicity, since it had many special-case features like make(). And of course, I complained about the oft-repeated if err != nil {} blocks.
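
To make that slicing gotcha concrete, here is a minimal sketch (plain Go, nothing beyond the fmt package) of how a slice expression cuts a multi-byte character in half, and how converting to []rune sidesteps it:

    package main

    import "fmt"

    func main() {
        s := "héllo" // "é" is two bytes in UTF-8

        // Slicing indexes bytes, not characters (runes):
        // s[:2] cuts "é" in half, leaving an invalid byte sequence.
        fmt.Println(s[:2]) // "h" followed by a garbled byte

        // Slicing by character means converting to []rune first.
        r := []rune(s)
        fmt.Println(string(r[:2])) // "hé"
    }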

In short, I wasn’t a fan.

In a technical sense, I feel that most of my complaints are still valid. It is true that the type system could have been better designed, and that pointers could have been abstracted a little better. But I’m starting to think that some (not all, but some) of those issues I had were beside the point. I was complaining about the forest on account of the color of a few leaves. In day-to-day use of the language, most of the things I had worried over never became issues.

I’ve never seen a bug due to slicing into a string incorrectly. In real-world code, you don’t actually end up casting interface{} to something else that often. Calling make() in one place and new() in another isn’t a big deal.
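
For concreteness, a minimal sketch of both of those in practice – the make()/new() split and pulling a concrete value back out of an interface{} (what Go calls a type assertion):

    package main

    import "fmt"

    func main() {
        // make initializes slices, maps, and channels.
        scores := make(map[string]int)
        scores["go"] = 1

        // new just allocates zeroed memory and returns a pointer to it.
        count := new(int)
        *count = 42

        // Getting a concrete value back out of an interface{} is a type assertion.
        var v interface{} = "hello"
        if s, ok := v.(string); ok {
            fmt.Println(s, scores["go"], *count)
        }
    }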

Some of the language choices I complained the loudest about were the ones that made it difficult to create abstractions. You can’t create data structures that are drop-in replacements for the built-in ones. With no generics, you are discouraged from making big, overly-general abstractions. This might be on purpose.
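
As a sketch of what that looks like in practice: without generics, a “general” container has to traffic in interface{}, and every caller pays with a type assertion, so the plain, type-specific version usually wins. (The Stack here is purely illustrative.)

    package main

    import "fmt"

    // A "general" stack without generics: it loses static type
    // safety and forces a type assertion on every Pop.
    type Stack struct {
        items []interface{}
    }

    func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

    func (s *Stack) Pop() interface{} {
        v := s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v
    }

    func main() {
        var s Stack
        s.Push(10)
        n := s.Pop().(int) // the caller must assert the type back
        fmt.Println(n + 1)
    }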

Episode IV: A New Hope (for Go)

I have a new way of thinking about Go. It’s not a systems language, it’s not a dynamic language, it’s not even necessarily a web language. It’s an anti-abstraction language.

Whenever I think of Java, the first thing that comes to mind is giant, bloated systems caused by extreme examples of premature generalization and a love of creating object structures for the sake of creating object structures. Yes, the abstractions are very, very powerful. They let you do a ton with very little code. The problem is that they are horribly complex.

When the interface – just the interface, not the whole implementation – of an abstraction is too big to fit in your head at once, the abstraction is making the task of programming harder, not easier. The number of lines of code used to solve a problem is a very poor metric for how hard it was to solve (never mind for how hard the problem necessarily is to solve). Writing 2x or 10x as much code with a simpler abstraction (or with no abstraction at all) might, in the end, have been simpler and easier. Watch Simple Made Easy by Rich Hickey for a great talk on the important difference between simple and easy.

For someone new to the codebase who knows nothing of the giant abstraction, which do you think is easier to grasp: 10 lines of code that only make sense if you know the abstraction, or 20 or 100 lines of code that can be understood on their own?

This is one area where Go tends to shine. By pushing you away from abstractions (at the cost of a few extra lines of code), it keeps the logic and meaning of a program localized to digestible pieces. This generally comes at a cost to the programmer initially writing the code. Assuming you have already put in the up-front effort to learn a big abstraction, you will be less productive when forced to use smaller abstractions. You’ll have to implement more of the logic yourself.
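
As a small illustration of what implementing the logic yourself tends to mean: where another language might reach for a generic filter abstraction, Go (as of this writing) gives you a plain loop – a few lines longer, but understandable on its own:

    package main

    import "fmt"

    func main() {
        nums := []int{3, 14, 1, 59, 26}

        // There is no generic Filter(pred, xs) helper to reach for;
        // the loop is written inline. More typing, but the logic is
        // right in front of the reader.
        var big []int
        for _, n := range nums {
            if n > 10 {
                big = append(big, n)
            }
        }

        fmt.Println(big) // [14 59 26]
    }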

In many cases, that up-front cost is paid back many fold later, when other programmers (or you yourself, after you’ve forgotten the details of the code) come and try to read and work with the code. Go optimizes more for understanding the code later than for building big things quickly.

Reading

That’s the right thing to optimize for. Code that isn’t immediately deleted will always be read more often than it is written. Isaac Asimov famously coined the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

I think we need a similar Three Laws of Programs:

  1. A program must not be hard for a human to understand, nor through lack of clarity allow a misunderstanding to take place.
  2. A program must be fast and concise, unless it conflicts with the first law.
  3. A program must be easy for the computer to understand, unless it conflicts with the first two laws.

Others have stated this better: as Abelson and Sussman put it, “programs must be written for people to read, and only incidentally for machines to execute.”

Often these rules are applied in the reverse order. First, the language design makes the program text easy to digest for the computer. Then, working within that constraint, the program is made as fast and concise as possible. As an afterthought, we are given a few minor aids in understanding the source code.

Go gets this order close to correct. It first makes programs easy for humans to understand, then easy for computers to understand (it cares a lot about compiler speed), and lastly fast and concise. Which is not to say that Go programs are slow (Go is a pretty speedy language), nor that they can’t be concise. Go is perhaps a little verbose, but within reason.

Putting rule #2 first is called “premature optimization,” or its less-mentioned cousin, “premature generalization.” Both are the root of much unwitting evil in programming. Putting rule #3 first gives you machine instructions in binary.

Go takes the bold stance of caring more about the programmer reading the code than the one writing it. I think I just had to become the one reading it to appreciate this.

A year ago

When I initially wrote down my reactions to Go, I didn’t know about another new systems language: Rust.

I think that if I had seen Rust then (at least in its current incarnation; a year ago Rust might have been a little rough around the edges), I would have vastly preferred it over Go. It has it all: proper type inference, generics, automatically derived traits, syntactic macros, all manner of fancy compile-time checking. You can even ask the compiler to check that return values of certain types are never ignored!

Rust answers nearly every complaint I had about Go. So why would I still reach for Go when starting a new project?

In Rust, you work with powerful abstractions. You can do a lot with a little code – sometimes, no code at all. This of course comes at the cost of learning the abstractions. Since it’s a new language, much of that up-front cost remains to be paid.

Whenever I work in Rust, I find myself having a good time mucking around with the abstractions, but not really getting anything done toward the problem I’m trying to solve. In Go, I start chipping away at the problem right from the start.

Size matters

That said, for some projects I’d prefer Rust. Specifically, for very large projects, a heavier reliance on abstractions starts to make more sense. Rust’s more powerful systems allow for more safeguards in a large (six-digit line count) codebase. Even with the abstractions, the Rust code may not be more concise, because it trades some conciseness for more powerful tools (e.g. generics).

Size matters, but it’s not the size of the code in lines that matters. It’s the size of the scope you must think about in order to understand what is going on. For small- to medium-sized programs, Go does a good job of minimizing that scope. Sure, it may involve extra typing. But typing is not what makes programming difficult.