Comparing Go and Java, Part 2 – Performance (boundary.com)
70 points by spahl on Sept 18, 2012 | hide | past | favorite | 82 comments


Please, if you're writing req/resp benchmarks, please include 98/99/99.9% latencies. These are essentially the only numbers that actually matter.

At the mean, at concurrency 50, we have things like 37.5ms vs. 12ms. To a user, that is "instant" vs. "instant". Sure, if we stack a bunch up on a page, maybe we'll start to care, but...

Much more directly, in real-world scaling scenarios, it matters far more whether the 98% number is 2s or 248ms than whether the mean is 37ms. A 2s 98th-percentile latency means that for 2 out of every 100 requests (and likely > 2% of page views), the user experience will suck. And all it takes is 6 requests to ensure that 25% of users experience this (your homepage alone probably requires 6 requests).

I'm not just raising this to be pedantic--I've seen plenty of systems that have been tuned to do very well on things like req/s or even mean latency but do very poorly for 1 in 100 or 1 in 1000 (vs. being a very fair scheduler at the cost of overall throughput). We reward these systems by measuring and praising the wrong thing. (e.g. mongodb vs. riak)

edit: (btw, not intended as a specific defense of either go or java wrt the article, just a general statement about benchmarking these systems)


The article mentioned DropWizard (covered more in part 1). DropWizard is a very nice (the best?) way to write RESTful web services in Java. Coda Hale, the author, has glued together some of the best Java libraries with very sensible configurations, almost to the point of saying "look stupid, this is how you're supposed to do it (in Java)". The application configuration patterns (ugh... I said "patterns"), clear separation of resources, built-in metrics, and health checks are worth studying (at least for journeymen like me).

Check it out: http://dropwizard.codahale.com/


How does this compare to using Spring application development framework (with Maven, of course)? My understanding is that the Spring framework is how you're "supposed to do it" in Java, and it certainly is very popular.

http://www.springsource.org/


Spring and Dropwizard are really not comparable. Spring is a large, invasive framework which covers many different facets of enterprise application design. Dropwizard is a set of commonly-used libraries (jetty, jersey, jackson, slf4j, a couple others) with a little bit of glue to stitch them together.


Very well stated. You said it in half the words I did. I especially like your point about Spring being "invasive". For me, Spring MVC was so invasive I couldn't justify using it... the LDAP, JDBC, and Security components remain usable without too much interference though.


As I understand it Spring offers a complete enterprise application development framework, while DropWizard focuses on delivering production quality RESTful web services with minimal developer work. I've used Spring a little, it's nice... very full featured; in fact I still have production code using Spring's JDBC and LDAP libraries and Spring Security... they've worked out very well for me. I think Spring is the Rails to DropWizard's Sinatra. They're both very good, but if you need less and DropWizard can cover what you need it's a lot easier to use and probably uses the libraries you'd want to use anyway (unless you know a lot of Spring or JEE already).


I've done a fair bit with Spring in Java. Unfortunately the promises seem to fall short of delivering as your application scales up in complexity. I've experienced a number of problems with annoying little bugs and some odd brick walls with transaction management (between JMS/Hibernate).

I'd rather build something with Java EE6 if I had the choice now, or ASP.Net MVC+WCF+NHibernate if I had the choice of platform.


> I'd rather build something with Java EE6 if I had the choice now

Don't, it's a nightmare.


Thanks for the heads up :) anything in particular that kills it for you?


1) WELD CDI has no good testing story. The only option that allows CDI to work is to use Arquillian, which deploys and undeploys an entire WAR for every test class; in other words, it takes hours to run integration tests. Unit tests are livable using Mockito and its ability to mock injections.

2) JSF + WELD have holes between their specs. CDI can manage JSF session and request scoped beans, and you can use conversation scoped beans in JSF. However, they never got around to figuring out view scoped beans (which IMO is one of the most useful things in JSF). As a result you have to use some third party to make them work together. There are a number of other spec holes, but I'll use this one as an example.

This means using SEAM 3 or CODI CDI extensions to fill the gaps. I went with SEAM 3 as it theoretically had a good lineage, given that a lot of what was done in SEAM 2 ended up in the EE6 spec. What the SEAM 3 project did, however, was write a half-assed version of all their extensions, then summarily announce they were all ditching the project to go work on Apache Deltaspike, which is basically the same extensions with a different name. Currently the SEAM 3 extensions are in various states of broken depending on the extension and your use case, and of course Deltaspike isn't ready yet (still in the Apache Incubator). I've removed most of SEAM 3 from my app at this point; the remaining module is SEAM Faces, which is unfortunately plugging the hole in JSF View Scope working with WELD. Also unfortunate is that there is and has been a bug in SEAM Faces for months now that causes the data JSF stores in the session to be unserializable. So I'm stuck with sticky sessions.

SEAM modules also fill in the hole with Hibernate's stupid session management, i.e. opening a transaction in the RENDER_RESPONSE phase so lazy loading works.

Several times now I've run into bugs, gone to research them and found they weren't fixed because people from the involved modules were arguing over the EE6 spec.

I could probably yap for a while here, but it may be easier to say this: I started my current product last April (2011), and it started as a "pure" EE6 app. I ran into so many EE6 bugs and just pure stupidity in places that I've been moving away from it ASAP. I've replaced Hibernate with Ebean, ditched 90% of SEAM, set up an API with Jersey/Jackson, and we're now pushing all front end stuff to Rails or Backbone apps which just talk to the API.

I'd never write another webapp with EE6. Jetty/Jersey/Jackson is great for web services, especially with groovy. If you need to handle the front end tasks, use play or grails.


> I'd never write another webapp with EE6. Jetty/Jersey/Jackson is great for web services, especially with groovy. If you need to handle the front end tasks, use play or grails.

Play or Grails might be just as bad as EE6. Do you also have similarly intensive experience with Play and/or Grails since April 2011 that you can compare with?


To be honest, if I were starting again today I would not have used Java except for performance-limited jobs exposed as web services. Even those I would be tempted to write in Go. I may have used it for interfacing with some of the archaic XML APIs I have to interface with; JAXB does a good job of dealing with shenanigans in the format, I've found.

That said to get to your question, I don't have experience with either at the same scale as the EE6 app. I have fiddled with both some to get a feel as I was deciding what the path away from the EE6 stuff I would take. I have ditched hibernate for Ebean (Ebean is the ORM that comes with Play). I've also started writing most of the controllers in Groovy, which comes from messing with grails. Both these changes have been awesome and significantly simplified the project.


Oddly I was just at a JUG last night about Arquillian. To your point #1, we were led to believe that the deployment was a micro deployment and you only deployed the bits that you were testing, so that this was "quick". Not so?


In theory you're right, you put whatever you want in the WAR it deploys.

The entire point of Arquillian is to test in a live container, which means using it for things that will interact with the container services. They love to put up examples of stuffing 3 classes into a war and testing it, to which I say, why? It makes sense if you're testing a CDI extension (whose authors incidentally seem to be the only people using Arquillian). If I wanted to test 3 classes from my app I'd mock the injects with Mockito. It's far faster and easier to use Mockito to inject instances, especially since Mockito can instrument those injections, allowing you to assert method call information.

Where this all falls apart is where Arquillian should shine: integration tests. E.g. put up a functional JSF controller/view and fire requests at it with JSFUnit. You now have to package up enough stuff into your war to get a functional JSF environment running. Arquillian doesn't help you at all to figure out what dependencies you need to include to just get JSF running. As a result you end up playing games trying to get the WAR to include what it needs to run, which can be much harder than it sounds due to the way much of the EE6 stack is layered and intertwined. You end up having to include a ludicrous amount of stuff to get JSF running. Or you just say 'fuck it' and tell Arquillian to include your entire pom and get a huge deploy. On top of this is managing your own dependencies. Is your view modularized and using 5 different sub views and supporting controllers (and associated helpers)? It's all up to you to track and manage this. It's like being thrown back into a world without Maven for every single test case you try to set up. I found I spent more time trying to figure out what I needed to deploy than I did writing tests.

Also, you're completely screwed if you try to do a service-layer-down-to-database integration test and you're using Hibernate. Waiting for Hibernate to start up on every test class is brutal. Getting rid of Hibernate makes testing and many other things so much easier. If you stop to think about how much crap exists in the stack just to deal with Hibernate's session lifecycle and transaction requirements, it's amazing. In my current app I have about ~65 entities and ~70 tables; switching from Hibernate to Ebean and ditching the libraries I no longer needed to deal with Hibernate's session management dropped my final war size by ~15MB (40%), cut startup time in half, and allowed me to remove over a thousand lines of code.


Thank you for your time on this response. It's a lot to digest!


Thanks for the detailed response. Much appreciated.

I think I will avoid as well then.


There is never just one answer on what libraries/frameworks to choose, and anyone who tells you that there is just one way you are "supposed to do it" probably isn't worth listening to :)

With that said, I am a big fan of Spring and I don't consider it "intrusive" as some other commenters do. But if you aren't interested in using it as a DI container then you may find it easier to just get closer to the other libraries used (i.e. Jersey etc) and avoid any helper code.



No surprises here. Google did a benchmark on go, java, scala, and c++ that's worth reading. http://www.readwriteweb.com/hack/2011/06/cpp-go-java-scala-p...

Here's my personal experience. Simple Go code is actually comparable to C, even for non-IO-bound tasks. I was very surprised when OpenSSL's AES implementation and Go's AES implementation performed similarly in my microbenchmarks. The JVM actually performs very well (go figure that over a decade of optimization in running enterprise workloads would result in a fast runtime). I've inspected the generated assembly from Go code and compared it with that from the equivalent C code, and there's no doubt Go isn't the most efficient language ever created.

Use Go if you want something that's (pretty darn) fast, productive, and has a standard library written by some of the most respected names in the field. Don't use Go if you need the most mature and performant runtime and libraries. I have no doubt that they will get there eventually.


> No surprises here. Google did a benchmark on go, java, scala, and c++ that's worth reading.

No, it is not worth reading; it is misleading at best and has been thoroughly debunked:

http://blog.golang.org/2011/06/profiling-go-programs.html

Not to mention it used an ancient version of Go, even Go 1 is dramatically faster than that, and since Go 1 there have been even more dramatic performance improvements, but the main issue is that the guy that wrote the benchmarks really had no idea what he was doing (there were similar criticisms from outside the Go communities about the quality of the benchmark).


Honestly, I'm happy to hear that. I'm one of those rare individuals who gets to write Go at work. I'm really happy with Go's performance and memory usage. Like really, really happy. The difference between the CPU usage of a Go program using protocol buffers and a Python program using protocol buffers is pretty dramatic (Go is the clear winner).

Most folks would assume a paper coming out of google involving go has some go experts involved. Clearly this is not the case.


Wow, thanks for sharing.


Every time I read something from Russ Cox he impresses me a bit more. That guy is amazing.


I'd make a guess in absence of code.

Since none of the implementations were IO bound (unsaturated link), I'd bet on memory management as the deciding factor (it's hard to assume that an authentication service will be CPU bound).

Then it's not difficult to see why Java won. When it comes to garbage collectors, JVM has the best (production) implementation of GC bar none.

Go runtime has a lot of catching up to do.


First paragraph: "In Part 1, we looked at the code of two web services that implement an authentication web service. One written in Java, and one written in Go. It’s time to beat up on them a little bit."

http://boundary.com/blog/2012/09/13/comparing-go-and-java/



I would like to see Ulterior Reference Counting, or age-based hybrid GC for both.

http://www.powershow.com/view/14655a-MGM5M/Ulterior_Referenc...


Your guess is probably quite accurate; it's also probably mostly benchmarking the quality of the PostgreSQL drivers. The Go ones are good, but I'm sure there has been much more work put into optimizing the Java ones.


Some notes from looking at the Go code. I don't know how important those are for performance; they're more like style nits.

Java version doesn't do type conversion: https://github.com/collinvandyck/go-and-java/blob/master/jav...

   rs.getString("id")
   rs.getBoolean("admin")
Go version does conversion at runtime, as row.Scan() accepts interface values.

Also, the usage of the sql interface is not optimal (at least with regard to code length) -- since the query is for one row, why not use QueryRow instead of Query?

Authorization header decoding is strange -- it creates a new base64 decoder, then a string reader, then decodes. Why not just use base64.StdEncoding.DecodeString() and get rid of a few lines of code and a few allocations?

https://github.com/collinvandyck/go-and-java/blob/master/go/...

Similarly, JSON marshals into a newly allocated slice instead of creating an encoder and marshaling directly into the ResponseWriter.

https://github.com/collinvandyck/go-and-java/blob/master/go/...

Constants for HTTP error codes that are already declared in net/http are, for some reason, redeclared

https://github.com/collinvandyck/go-and-java/blob/master/go/...

- - -

I sometimes wonder why various benchmarks include colorful charts, but fail to include a few lines from a simple run of profiler. It's so easy to do, and yet nobody bothers to learn where the cycles are actually spent!


Also, somebody on the gonuts list noted that both versions were using the database differently; if you fix the Go version to use prepared statements, it doubles throughput.


Does anyone else find 4 lines of error-checking (log and panic) boilerplate code for every line of functional code a bit tedious?


It's annoying, the only thing it has going for it is it's better (in my subjective opinion) than the alternative.

In go you can happily ignore an exception by assigning it to _. That's probably a bad idea for a larger piece of code, but for little scripts, go for it.

I may be permanently damaged by Go already. Every time a function can return an error I have to stop and ask myself: "self, what should you do if this happens?". I'm starting to think it results in better code. Granted, not every shell script/utility benefits from this level of introspection, but that's what Python's for, I guess.

p.s. Anyone who's had to deal with checked exceptions in Java puts up with a crazy amount of boilerplate too.

p.p.s. Disclaimer I have been professionally employed writing all languages mentioned above, so hopefully I'm relatively unbiased.


I have invented a remarkable new programming tool that wires up to your chair, keyboard, and 110v AC. It gives you an electric shock every time you complete a line of code, reminding you to stop and think about it. Think of the productivity!

Checked exceptions are indeed obnoxious and a major language design failure, which is why pretty much every modern language since just has plain old (non-checked) exceptions. And even in Java, you can work around the brain damage by wrapping checked exceptions with runtime equivalents in API facades. Exceptions are still incredibly useful, and lack thereof is my biggest complaint about Go.

I'm also disappointed by the convention of capitalizing/lowercasing names to export them or not. Realize that you want to export an existing private method? What's that, your IDE doesn't support refactoring? Get typing, you have a lot of method calls to update.


> I'm also disappointed by the convention of capitalizing/lowercasing names to export them or not.

I think it's fine, personally. It's not overly obnoxious, it gives shape to the code, it avoids the redundancy of an explicit export list (although that also means it's harder to see at a glance what's exported from a module I guess) and it makes sense within Go's habit of mandating formatting, there's no reason not to leverage this mandate.

Coding conventions on steroids, if you will.


Exceptions make it harder to reason about your code's execution path. If you raise an exception, it is often not obvious who in the call stack is ultimately going to catch and handle it. The logic for that can live pretty much anywhere. This is not to mention the try/catch/finally pyramids you get from trying to cope with nested failure cases.

Go uses a well-understood mechanism: return. Control reverts to the caller. It's simple, which was an explicit design goal of Go.

IMHO, if you have a choice between exceptions or not, they just aren't worth the value they deliver. As Josh Bloch says, use them only to indicate truly exceptional circumstances, such as catastrophic errors.

Re: refactoring: http://golang.org/cmd/gofmt/. Check out the -r option.

In practice, is this really a huge deal? Modulo go fmt, what editor doesn't support multi-file S&R with regex? Isn't this what a compiler is for? All in all, this sounds like bikeshedding about syntax. We're all entitled to our opinions but it's awfully hard to say anything interesting about syntax which has not already been said a bajillion times.


"It gives you an electric shock every time you complete a line of code"

Is that on kickstarter? Put me down for 10!


> It's annoying, the only thing it has going for it is it's better (in my subjective opinion) than the alternative.

Why? There's nothing necessarily wrong with letting it crash, and letting a layer above report the error cleanly. Hell, in Erlang the usage is even to let another process entirely handle the error.

In fact, my opinion would be the complete opposite of yours: checking every single return value (if only to return it to the caller unaltered) is fine for little script, but it's a bad pattern to need for large pieces of code, it's verbose, redundant and unhelpful.


>There's nothing necessarily wrong about letting it crash

Then do that. You don't have to handle errors, you can ignore them just like you would ignore an exception. The difference is that with an error return value, you are explicitly choosing to ignore it. With exceptions, it is easy to accidentally ignore one when you didn't want to.

>but it's a bad pattern to need for large pieces of code, it's verbose, redundant and unhelpful.

Which is an argument for better error handling, not an argument for exceptions. If go had Maybe and Either, there would be no problem.


> Then do that. You don't have to handle errors, you can ignore them just like you would ignore an exception.

No, if I ignore an exception it bubbles up the stack and will either stop the program or find somebody handling it. If I ignore a return value, the program gets into a completely undefined state and will crash later in a completely different place.

Unless there's a way for go to do the same thing as the Erlang pattern:

    {ok, Value} = call(SomeArg, SomeOtherArg).
is there?


I'm not sure what you mean, that is the normal way you do it in go? Multiple return args, one being the one you use, the other being the error condition, which you can ignore by either not checking it, or just outright assigning it to _.


> I'm not sure what you mean, that is the normal way you do it in go?

No, that is the normal way I do it in Erlang (hence the note that this is an Erlang pattern), where there are exceptions (and nobody says there aren't) but most functions tend not to use it and to return tagged tuples: `{ok, Value}` if the call succeeded (or just `ok` if there's no value to return) or `{error, Reason}` if the call failed. Note: lowercase words in Erlang are atoms, you can think of them as interned strings. Words which start with a capital are "variables" (which can't vary, but close enough).

Now the caller can unpack the result:

    case some_call() of
        {ok, Value} -> %% code to execute if the call succeeded;
        {error, Reason} -> %% code to execute of the call failed
    end
this uses pattern matching (on the value being a tuple and having the right atom as its first element) to dispatch each case to the right branch.

But in this sub-thread, we don't want to ignore the error. In Erlang, "ignore the error" is written:

    {ok, Value} = some_call()
this doesn't really ignore the error (and let the function keep running), it asserts that the result of some_call() matches the tuple `{ok, Value}` and faults if that's not correct. The equivalent Go code is what is used in TFA, namely:

    result, err := SomeCall()
    if err != nil {
        panic(err)
    }
and is also equivalent to not catching the exception in Java: it does not let the code keep running.

And my question was thus: is there a way (shorter than the one used in TFAA) to do this, not handle the error but have the error prevent the code from running?

> which you can ignore by either not checking it, or just outright assigning it to _.

No, that leaves the code running in an unknown and corrupted state, I don't consider this acceptable.


>No, that is the normal way I do it in Erlang

It is also the normal way you do it in go. Read the examples, that's exactly why go has multiple return values.


Annoying as hell, that's what you get when you don't support exceptions.


> that's what you get when you don't support exceptions.

That's what you get when you refuse to use them anyway.

Go has exceptions, there's just a dogma about never using them.


You can't really use panics as exceptions. If you do your code will be strange and hard to work with. This strangeness is fine, because they aren't meant to be used like that.

A good rule of thumb is to use panics when it's a programmer error indicating a bug.


Not dogma, just common sense based on experience.

Panic() is for truly irrecoverable exceptional situations where you do not expect the caller to catch it.



You can ignore errors if you want to crash, just like Java.


Yes, exactly. If you find the boilerplate tedious, feel free to ignore errors completely. :) For test code it doesn't matter. It is the equivalent of providing no exception handler, or catching Exception or Throwable and doing nothing.


That is not optimal. It's not going to crash at the location of the first error. It will plough on a bit and crash somewhere else, where the bad object/pointer was actually used to perform an illegal operation.

Something like a 'die on error' option might be useful for trivial scripts and applications.


When prototyping, it is trivial to have a fail(err) function that panics in case err is not nil.

I sometimes do this, and then remove the fail() function to force myself to properly handle the errors. (Still, it is often best to handle the errors as soon as you write the code anyway.)

The key is that unlike with exceptions, the fact that you are ignoring the error is explicitly stated in the code, is not something that magically might happen.

And most importantly, errors are part of the documented API, with exceptions it is rarely documented what exceptions a function might throw, much less what exceptions the functions called by that function might throw.

Yes, Go error handling is a bit verbose, but that is a sign of how much better it is than exceptions, without falling into the 'checked exceptions' insanity.


Which is similar to the scenario where you ignore all exceptions and your test script got into a bad state. :)

Much as you could write this in Java:

    catch (Throwable t) {
      throw new RuntimeException("oh noes"); // or whatever
    }
You could write this in Go:

    if err != nil {
      panic("oh noes")
    }


But I can do the former for a whole block of code but have to do the latter for each significant line of code.

They are not the same.


That seems reasonable to me - Java's had decades of optimisation, but Go's a fairly new language and runtime.

I've just started playing around with Go, I'm quite liking it. It's a different approach to things, which is always fun.


Agree. I've been playing with Go a bit, and though I still lean towards python and scala, Go definitely shows promise. The interfaces model and channel synchronization primitives are compelling enough to keep me interested in its progress.


It seems like it might be a fun project to compile Go to Java bytecode. GCC might actually have all the relevant bits---it includes gcj, a Java compiler including bytecode generator, and a Go frontend.

Knowing this, someone's already done it---anyone aware of such a thing?


The big pull of Go for me is AOT compilation to native code (which translates to fast start time for command line utilities), close integration with the OS, support for value types, and stack allocation.

These are not currently available on the JVM (although this may change as of JDK 8).


I think Java supports escape analysis on loops and will actually allocate stuff on the stack if it's in a tight loop. Your other points are valid; this is more a curiosity than anything else.



I'm aware of real-time JVMs, but they seem to target smaller-scale mission-critical applications (I'd imagine embedded or SCADA?)

None of these are free (or "open source" in the OpenJDK sense -- e.g., where most of the system with the exception of a few libraries is open), however, with the exception of the last -- where the first revision was available as part of Jikes RVM.

The only commercial JVM that I believe has some production server-side deployments -- Oracle's JRockit -- still doesn't support value types and has (in many cases) actually performed worse than HotSpot with large heaps.

So to put it bluntly none of these fit "I'm willing to deploy this to thousands of nodes in production" criteria. To me HotSpot (due to its stability, performance, and support for concurrent GC) VM is more of a reason _to_ use Java/JVM languages -- otherwise I'd much rather use C#, F#, D, Go, or OCaml.

Finally, I should add that it's quite possible to do manual memory allocation, bypass bounds checking, and control memory layout using direct byte buffers and Unsafe in HotSpot/OpenJDK. The problem (and I've mentioned it) is that this incurs serialization costs from copying objects to these byte arrays. Of course this may be acceptable for many kinds of applications, especially if only a _small_ part actually needs this (which I'd imagine describes many -- if not most -- kinds of applications).


I usually work with Fortune 500 companies, so there the price of these JVMs is a water drop in the type of budgets we work with.

But for the scenarios you described, I agree with you.


Cost is an issue if you're deploying on large scale. The fact the source is not available is an even bigger issue.

OTOH if using a proprietary solution ($$ and closed source) was fine, I'd personally go with C# or F#. If you're fine using a closed-source runtime, why not just use a better language (not to mention Microsoft's products are free for startups via BizSpark)?


Speaking about our own projects.

Because Mono is not a solution when you need highly scalable servers running on commercial UNIXes, z/OS, and similar systems? These types of solutions always get to use C++ or Java in our projects.

My current project is actually done in C#, but it is a 100% Microsoft stack.


http://code.google.com/p/jgo/ but I have no idea how complete it is, I only know it exists.


This could be cool, but it'll be tough to simulate Go features like lightweight threads and cache-efficient object layout in RAM.


Java came out in '96. Go has been out for a couple of years. The difference makes it far from decades.


So Java has had 1.6 decades of optimization. He didn't compare the difference, just what Java has.


Decade and a bit then?


Is this right? Java really has better latency than Go right now? And with 2 processors, Go's latency is 25ms for 50 concurrent requests. Anybody with better experience care to comment on this?


There's still this lingering belief that Java is slow, maybe related to the terrible start-up times of the JVM. But once it's going, Java is fast - really fast. And since its niche is basically enterprise webapps, a lot of attention has been paid to scalability and concurrency.


A little-known secret is that most mainstream languages have nice libraries to handle multi-core programming that go beyond basic thread handling, similar to what Go offers.

You just have to know where to look, plus you don't need to throw away mature languages and tooling.


For the JVM, Akka is pretty sweet. It comes with both Java and Scala bindings.

http://akka.io


1 - There has been a lot of man years put into the JVM (threading and garbage collection), and Java memory model. I'm somewhat surprised by the magnitude of the difference. Java and JVM are very mature technologies.

2 - Go's CSP paradigm is not necessarily more performant than the preemptive threading model of Java, and I don't believe that has ever been claimed for CSP. It is claimed that it is "easier" to write concurrent code using the (CSP/Go) message passing paradigm, but in my experience, you either get concurrency (and can do both variants OK) or you don't, in which case any real world concurrent system will hardly be "easy". That said, for the concurrency novice, Go is a far less intimidating experience given the language level support for goroutines (fibers), channels, selectors, etc. (think Java NIO ...)


I have no idea if it is "right" since I'm not going to duplicate this exact benchmark, but it seems likely. Raw performance hasn't been a huge focus for Go, though this has been changing. The tip version produces much faster code than the last official release that seems to have been used here. Also you often get more efficient Go from the gccgo frontend than from the Google compiler since it can take advantage of a lot of front-end-neutral optimizations that gcc already has (though this gap is closing as the Google compiler gets improved). For the runtime, work continues on the goroutine scheduling, gc, etc.

Go is getting a lot faster and pretty quickly.


It'd be fun to throw Node.js and Erlang into the mix.


You can compare against Erlang at http://shootout.alioth.debian.org/u64q/benchmark.php?test=al... to get an idea. Go is based on Newsqueak (launched after Erlang; a different perspective on implementing CSP).


You chose not to compare against the faster Erlang HiPE measurements because...? ;-)


Sure, but this benchmark was more about concurrency, whereas those benchmarks are more generic. Erlang is not the fastest language out there, but it's supposed to be good at this concurrency stuff...


Erlang is supposed to be good at distributed, reliable, soft real-time concurrent systems.

http://www.erlang.org/faq/introduction.html#id49480


Somehow, it seems that when Erlang was getting a bit of hype, they abandoned the field for an even smaller niche, leaving the "real time web" type things to other languages. Granted, the language was created for that smaller niche, but if that's the only bit of terrain they want to defend... I see other languages squeezing them out over time.

I don't know if that makes sense, but languages need somewhat large communities to really thrive, in my opinion, and defining yourself into a small niche isn't a good way to get them.


For a short time some people hyped Erlang to be the solution for all things concurrent and chatterers didn't read the FAQ.


Here's an insightful comment from the blog page, about cpu and memory usage:

I downloaded patrick's fork and got 4300r/s with maxprocs 1 with the go version, out of the box.

Recompiling and running with gomaxprocs=100, i got 4400r/s.

mvn deploy and running auth-1.0.jar on JDK 1.7 on my box similarly peaked out at 3100r/s.

It's worth noting, though, that I was using patrick's httperf bench.sh modification, and it appears that httperf was cpu bound in both cases, with the kernel and postgres taking about half a core between them.

Using wrk, by contrast, spun main (the go program) up to 2.5 cores, and 6000r/s. Java under wrk lit all cores for a time, then hit `Exception in thread "async-log-appender-0" java.lang.OutOfMemoryError: Java heap space` `Caused by: ! org.postgresql.util.PSQLException: FATAL: sorry, too many clients already`

A bit more investigation and tuning later, using 10 clients `wrk -c 10 -r 100000 -t 4 -H 'Authorization: basic apikey_value' http://localhost:8080/authenticate` Java was spinning out 10kr/s. Go was limited to 6kr/s. It's worth noting some more details however: Go only ever spun up two postgres forks, whereas java spun up 10. There's scope for optimization there. Go used 8mb of ram, whereas java was sitting on 130mb after a few runs. The Java version, cranking out at 10kr/s was maxing the kernel out on one core, so that's probably approaching the practical limit for single machine tests.

I suspect putting a load balancer in front of a couple of instances of the go program would allow you to totally smash the java performance, given that it lit 2 cores at 6kr/s, and java lit 8 at 10kr/s. The memory usage tradeoff is significant - JVM sitting at 130mb and Go sitting at 8mb. Clearly everyone needs to draw their own conclusions on this, for their own purposes. The JVM solution is carrying a stats server, a ton of other tooling and so on. The Go system has some - pprof was included, but it's limited by comparison. Arguably gdb and so on can actually be of real use in the Go case, but that's also an exercise for the reader.

Interesting any which way you look at it. There are a lot of other interesting side effects (that need working out) in both programs as evidenced by this simple testing. Postgres also needs some tuning if you really want to slam this with anything remotely resembling a real world scale test.

My tests were done on OS X 10.8 12B19 on a 2.3GHz i7 with 8GB of DDR3 at 1333MHz, an intel 510 SSD, totally untuned postgres. The machine is a Macbook from whatever year that is.

Here's wrk, for anyone looking for it: https://github.com/wg/wrk

