In my experience with GC’d languages - latency > throughput. And it has nothing to do with Go. The reason for this is quite simple - it’s easier to reason about overall performance when latency is predictable. Yes, there is no free lunch, and we pay for everything in raw application speed. But at least I have a stable picture of how my application behaves under load. No spikes or sudden drops.
As for Java’s GC - nobody is saying that it is bad. On the contrary - it is one of the best (if not the best) collectors in the world. The JVM memory model, on the other hand, is… bad. Even with a top-notch GC, abusing the heap like that is just plain wrong.
When you create a runtime for lightweight threads (as I have), what matters isn’t what the programmer sees but what the runtime sees. POSIX threads and lightweight threads appear the same to the programmer, but behave very differently from the runtime’s perspective, and require very different memory management (and very different scheduling) algorithms. What matters to the runtime is: how big each stack is, whether it’s constant in size or can grow and shrink, and how often it’s allocated and deallocated. It’s those parameters that make the difference, not the user abstraction. The point of my example was to clarify that the stack abstraction can be used in different ways. Yes, POSIX thread stacks are also managed in some heap, but their usage parameters make them very different from lightweight thread stacks; the latter behave like a plain object would.
In Erlang, Go, and Quasar, the lightweight thread stacks are managed very differently from how the OS manages heavyweight thread stacks, and this is because the two behave very differently despite exposing the same abstraction. You keep repeating that lightweight threads and heavyweight threads are the very same abstraction, but what matters is the usage parameters, and that’s why their implementations are so different.