About PayPal's Node vs Java “fight”


So far I have held back from writing this blog post… but today in my email inbox I saw the following:

[screenshot of the email]

Yep, somebody is pimping their book because of PayPal’s switch from Java to Node.js.

Let’s make one thing clear up front… namely:

Which is the “faster” virtual machine…

I like the JVM. I think it is a superb piece of engineering. When it first came out, Java was dog slow… but by 2006 it was quite fast. At the time, as I was using Java for writing web applications, my brother asked for some help playing with different protein folding modelling algorithms. I knocked a couple of them out in Java and started running them while working on hand tuning the algorithms in C. Back in 2006, once the first 10,000 iterations had passed, the JVM’s full steam optimisations kicked in. My best hand tuned C version of the algorithms was at least 20% slower than the JVM, so my immediate thought was “I must be useless at hand tuning C”. I then implemented a standard protein folding algorithm in Java and pitted it against the best of breed native code version that my brother’s supervisor had spent grant money getting optimised… the JVM was still faster in server mode, after the compilation threshold had kicked in, by somewhere between 10 and 15%.

Of course the reason is that the JVM can optimise for the exact CPU architecture that it is running on and can benefit from call flow analysis as well as other things. That was Java 6.
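To make the warm-up effect concrete, here is a minimal sketch, assuming HotSpot’s then-default server-mode compile threshold of 10,000 invocations. The `energy` method and its name are purely illustrative stand-ins, not the actual folding code; the point is that timing a method before the loop has crossed the threshold measures the interpreter, not the JIT:

```java
public class WarmupSketch {
    // Stand-in for a hot numeric kernel; illustrative only.
    static double energy(double x) {
        double sum = 0;
        for (int i = 1; i <= 100; i++) {
            sum += Math.sin(x * i) / i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        double first = energy(0.5);              // first call: interpreted
        long coldNanos = System.nanoTime() - t0;

        double sink = 0;
        for (int i = 0; i < 20_000; i++) {       // cross the (then) default
            sink += energy(0.5);                 // -XX:CompileThreshold=10000
        }

        long t1 = System.nanoTime();
        double warm = energy(0.5);               // now very likely JIT-compiled
        long hotNanos = System.nanoTime() - t1;

        // The JIT must not change the answer, only the speed.
        System.out.println(first == warm);
        System.out.println("cold ns: " + coldNanos + ", warm ns: " + hotNanos
                + " (sink=" + sink + ")");
    }
}
```

This is not a rigorous benchmark (a single timed call is noisy), but it shows why any JVM measurement taken in the first few thousand iterations undersells steady-state performance.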

Node.js runs on the V8 JavaScript virtual machine. That is a very fast virtual machine for JavaScript. When you are dealing with JavaScript you have one advantage: because JavaScript is single threaded, you can make all sorts of optimisations that you cannot achieve with the JVM, which has to handle potentially multi-threaded code. The downside of V8 is that it is dealing with JavaScript, a language which provides far fewer hints to the virtual machine. Type information has to be inferred, and some JavaScript programming patterns make such type inference almost impossible.

So which is the faster virtual machine, V8 or the JVM? My belief is that when you are coding in the most basic building blocks of either virtual machine (i.e. JavaScript vs JVM byte code), the JVM will win out every time. If you start to compare higher up the stack, you may end up comparing apples with oranges and drawing false conclusions. Consider, for example, this comparison of V8 vs JVM performance at mathematical calculations. That blog post tells us that if we relax our specifications we can calculate things faster. Here is the specification that V8 must implement for Math.pow, and here is the specification that the JVM must implement for Math.pow. Notice that the JavaScript specification allows for an “implementation-dependent approximation” (of unspecified accuracy), while the JVM version adds the requirement that

The computed result must be within 1 ulp of the exact result. Results must be semi-monotonic.

And there are additional restrictions about when numbers can be considered integers. V8 has a faster version of Math.pow because the specification that it is implementing allows for a faster version. If we throw off the shackles of the JVM runtime specification we can (and do, if you read the blog post) get an equivalently fast result… and if it turns out that we don’t even need the accuracy of V8’s implementation, we can make more trade-offs and get even faster performance.
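The same trade can be made on the JVM, if you are willing to abandon Math.pow’s guarantees. A sketch using exponentiation by squaring for non-negative integer exponents (`powInt` is a made-up name, not a JDK method; it ignores the 1-ulp and semi-monotonicity requirements that the real Math.pow must honour):

```java
public class FastPow {
    // Exponentiation by squaring for non-negative integer exponents.
    // Faster in principle because it skips the accuracy guarantees the
    // JVM's Math.pow specification imposes.
    static double powInt(double base, int exp) {
        double result = 1.0;
        double b = base;
        int e = exp;
        while (e > 0) {
            if ((e & 1) == 1) result *= b;  // multiply in the current bit
            b *= b;                          // square for the next bit
            e >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // Close to, but not necessarily bit-identical with, the fully
        // specified Math.pow result.
        System.out.println(powInt(1.0001, 1000));
        System.out.println(Math.pow(1.0001, 1000));
    }
}
```

The two printed values can legitimately differ in the last few bits, which is exactly the kind of difference a relaxed specification permits and a micro-benchmark silently trades away.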

My point is this:

You will see people throw out micro-benchmarks showing that the JVM is faster than V8, or that V8 is faster than the JVM. Unless those benchmarks are comparing like for like, the innate specification differences between the two virtual machines will likely render such comparisons useless. A valid comparison would be between, say, Nashorn or DynJS and V8. At least then we are comparing on top of the same specification…

What are PayPal comparing?

Here is what we know about PayPal’s original Java application:

  • It uses their internal framework based on Spring
  • Under minimum load the best page rendering time was 233ms
  • It doesn’t scale very well, reaching a plateau at about 11 requests per second.

Here is what we know about PayPal’s Node.js application:

  • It uses their internal kraken.js framework
  • Under minimum load the best page rendering time was 249ms
  • It scales better than the Java application but still doesn’t scale very well.

So we are comparing two crappy applications in terms of scalability and concluding that because the Node.js one scales slightly better, then Node.js is better than Java.
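A quick sanity check on those figures, using Little’s Law (in-flight requests = throughput × latency) and assuming the quoted numbers are steady-state: an 11 requests/second plateau at roughly 233 ms per page implies only about 2.6 requests in flight, which points at a serialised bottleneck somewhere behind the application tier rather than at raw virtual machine speed:

```java
public class LittlesLaw {
    // Little's Law: L = lambda * W
    // (in-flight requests = throughput * latency).
    static double inFlight(double requestsPerSecond, double latencySeconds) {
        return requestsPerSecond * latencySeconds;
    }

    public static void main(String[] args) {
        // PayPal's quoted Java figures: ~11 req/s plateau, ~233 ms render time.
        System.out.printf("Implied concurrency: %.1f requests in flight%n",
                inFlight(11.0, 0.233));
    }
}
```

A back-of-envelope estimate only, but it illustrates why neither application’s numbers say much about Java or Node themselves.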

I can only say one thing…

[screenshot]

What we can conclude is that the internal Spring-based framework is overly complex for the task at hand. As Baron Schwartz says:

really? 1.8 pages/sec for a single user in Java, and 3.3 in Node.js? That’s just insanely, absurdly low if that amount of latency is really blamed on the application and the container running it. If the API calls that it depends on aren’t slow, I’d like to see many hundreds of pages per second, if not thousands or even more. Someone needs to explain this much more thoroughly.

Who is to say what performance they would have been able to achieve if they had built their Java application on a more modern framework? Spring brings a lot of functionality to the table, likely far too much functionality. Most people are moving away from monolithic applications and towards smaller, more lightweight frameworks… but if you have a corporate mandated framework that you must use when developing Java applications in-house… well, you may not have much choice. On the other hand, if you move to a different technology stack, there may be no corporate framework that you have to use.

Now we come to the second “benefit”, namely faster development.

We are told in the PayPal blog post that at the comparison point both applications had the same set of functionality…

Are we sure? How much functionality was the in house Spring based framework bringing to the table “for free” (or more correctly for a performance cost)?

I am not defending the in-house Spring framework, in part because I find Spring to be an over-baked framework to start with, but I do find it a stretch to believe that the two applications were delivering the entirety of equivalent functionality. I do believe that the context specific functional tests were passed by both applications, so the user will not see a difference between the two. But what about logging requests, transactions, etc.? What about scalability and load reporting? Potentially the in-house framework is bringing a lot more to the table. If we threw all that extra “goodness” out, would the Java developers have been able to develop the application faster? If we asked the Node.js developers to add all that extra “goodness”, would they have been able to deliver as fast?

It is likely that we will not know the answers to these questions. What we do know is that the extra “goodness” the in-house framework adds appears to be a waste of time, as they are happy to go into production without it.

In other words, the in-house framework sounds a bit like one of these (at least from the point of view of somebody writing this specific application):


So it would not surprise me to hear that you can develop an application, when released from the shackles of an in-house framework, in 33% fewer lines of code and with 40% fewer files…

  • Spring as a base framework loves to have many, many small files with lots of boilerplate
  • Even with annotations Spring can be rather XML heavy

If you were using a more modern Java based framework, likely you would not have had the same restrictions. For example, I like using Jersey as my base framework; I find that it needs very little boilerplate and helps you keep clear of the multi-threaded cargo cults that a lot of developers fall into. Node.js also helps you keep clear of the multi-threaded cargo cults… by forcing you to live in a single-threaded world.

OK, so the in-house framework is over-baked and delivers very bad performance, so all we are left with in terms of benefits is that the Node.js version was

Built almost twice as fast with fewer people

Well, first off, two people can develop faster than a team of five when you are working in a close-knit codebase. The application itself has three routes. If you have a team of up to three developers, you give each one a route and let them code that route out. If you have more than three developers, you will have more than one developer per route, which means they will either end up pair-programming or stepping on each other’s toes. Add on top of that the unclear difference in delivered specification (i.e. the added “goodness” of the in-house framework, which will require hooking up before you even get out the gate), and all we can really say is that this is at best an unfair comparison and at worst an apples to oranges one.

So what can we conclude?  

The above was my original thought when I read the PayPal blog post. 

  • I think that, in the scope of this application, the in-house framework was over-engineered on top of the over-baked Spring framework; it probably does not bring much real value to the table and only costs a significant performance hit.
  • Any solution built on top of the JVM could technically have been “integrated” with the in-house framework.
  • The only political route to avoid the in-house framework was to ditch the JVM.
  • Node.js is simultaneously “just cool enough” and “just serious enough” to be a valid non-JVM candidate (you could try Ruby, but that’s been around long enough that there is likely an in-house framework for that too… and anyway you can run Ruby on the JVM… so it may not be the escape you need)

My take-home for you, the persistent reader who has read all my ramblings in this post…

Don’t build your app on top of a pile of crap in-house framework.

PayPal may have ditched one pile of crap framework based on Spring. What is not clear is whether the scalability limits in their Node.js in-house framework (i.e. 28 simultaneous clients with 64 instances) is a limit of their new framework or a limit of Node.js itself when used with the backing APIs that they have to call through to.

Time will tell, but don’t jump from one platform to another just because apples are greener than oranges.


Just to be clear, this post is not intended as a criticism of PayPal, PayPal’s internal frameworks, or their decision to switch from Java to Node.js.

The intention of this post is to criticise anyone who cites a performance gain from 1.8 pages per second to 3.3 pages per second in what cannot be a CPU bound web application as being the primary reason to switch from Java to Node.js.

Similarly anyone citing PayPal’s blog as evidence that Java web development is harder than Node.js is mis-using the evidence. The only evidence on ease of development from PayPal’s blog is that their internal Node.js framework is easier to develop for than their internal Spring-based framework.

My personal opinion is that there were other, non-performance related reasons for the switch. For example, the reactive programming style enforced by Node.js’s single threaded model may suit the application layer in which the switch was made better than the Spring-based framework the Java version was written in. Similarly, it may be that the responsible architect analysed the requirements of this tier and concluded that a lot of what the internal framework brings to the table is just not required here. It is a pity that such detail was not provided in the blog post announcing the switch; without it, that post is being incorrectly used by others to draw conclusions that are just not supported by the data presented. Hopefully PayPal’s development team will provide some of this additional information and analysis that was unfortunately lacking in their first blog post.

Finally, we should always remember that premature optimization is a major root of bugs and performance issues in software engineering. If the application tier they are developing in Node.js is not the bottleneck (in fact, until it is proven to be the bottleneck), there is no need to worry about whether it is written in the most performant language or framework. What is most important for the elements that are not the bottleneck is that they be written in the simplest form, so that if they do become the bottleneck later on (due to optimization of the current slowest moving part) it will be easy to rework them.

For a tier with just three routes that is the front end and likely calling through to multiple APIs, my gut tells me that a reactive framework such as Node.js or Vert.x will give you a very simple expression of the required logic without becoming the bottleneck. Perhaps that was the real reason why Node.js was considered as an experiment for this tier.
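The “few routes fanning out to multiple backend APIs” shape can be expressed reactively on the JVM too. A minimal sketch using the JDK’s own CompletableFuture; the `fetchUser`/`fetchBalance` names are hypothetical stand-ins for whatever backend calls the real tier makes:

```java
import java.util.concurrent.CompletableFuture;

public class FanOut {
    // Hypothetical backend calls; in a real tier these would be
    // asynchronous HTTP/API clients.
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "user");
    }

    static CompletableFuture<String> fetchBalance() {
        return CompletableFuture.supplyAsync(() -> "balance");
    }

    // Fan out to both backends concurrently, then combine the results,
    // without dedicating a blocked thread per request -- the same shape
    // Node.js callbacks express.
    static CompletableFuture<String> renderPage() {
        return fetchUser().thenCombine(fetchBalance(), (u, b) -> u + "+" + b);
    }

    public static void main(String[] args) {
        System.out.println(renderPage().join());
    }
}
```

The design point is the composition style, not the specific API: Vert.x, Play, or Node would each express the same fan-out/combine in their own idiom.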


Stephen Connolly
Elite Developer and Architect



Sorry for asking, but I read that you are using Jersey. What are you using for RDBMS access? I would like something close to SQL/JDBC, like Spring’s RowMapper. How do you connect to your DB? DBCP? BoneCP? Thanks for a little info!

Just as a point to make, nobody should ever choose Node for computation performance, like your protein folding thingy. Choose it for an easy way to handle communications, and outsource the computation to a separate process.

Thanks for your post. What you are saying is often missing in the discussion about framework A vs framework B, or language A vs language B... Each one usually has its own benefits that cannot be directly compared. Sometimes the switch is beneficial for one team because of gaining new features, but that need not be the case for everyone. Thanks for taking the time to share your thoughts.

I think it has nothing to do with any technology: if you free yourself from clearly problematic constraints, you could write it in a bash script and it would work better (stop laughing, those of you who know I do this). There may be cases where a server's job in life is to wait on other servers; then an evented/reactive/callback-driven tool can work better. But that didn't even seem to be the case here.

I can't stop laughing at the PayPal people, and I've got numbers to prove them wrong! http://blog.creapptives.com/post/9677133069/node-on-nails — it also makes me wonder if these guys even tried Vert.x for this. I mean, this overrated "A is better than B" stuff... I bet they don't have more traffic than Twitter, yet Twitter moved from Ruby to the JVM! Here is the deal, to make it simple: KNOW YOUR F**KING TOOL and you can do anything with it (Facebook, Twitter, you name it!). Node.js is good for a few things and should only be used for them; declaring it FAST for something you haven't investigated, without exploring all your options, is plain dumb!

I find it amusing you rationalize speed increases between node and java with the logic in this article, yet you don't seem to make the connection between your C and Java example. If your C application runs slower than your Java application, you're needlessly abstracting somewhere, or you are just doing things wrong. I don't see how it could ever follow that a C program could be outperformed by a virtual machine abstraction layer. It goes against all logic. You could say the JVM has instructions which invokes native code better than your team's variation of that code - but that just means Java's internal VM did right what you did wrong. It doesn't make Java inherently faster.

Why is it against all logic? What is a C compiler doing? Translating an abstract language into concrete CPU instructions. What is the VM doing? It's translating portable byte code into concrete CPU instructions. And because of Java's strongly typed nature the JVM can translate it into very concrete and optimised instructions. Both C and Java are low level languages. Also you should not forget that the performance on the C code depends on the quality of the compiler. For example, older versions of GCC are known to be slow. Microsoft's or Intel's C/C++ compiler produce much better code for the x86 platform. And the modern JVM is a damn good compiler, too. So logically C can be slower than Java code if your C compiler isn't super optimised for performance.

Apply the same logic to assembly and C. You sure can write faster assembly than the C compiler produces, but most developers can't, or don't have the time to. The C compiler knows a lot about the chips it will run on and can make various optimisations accordingly. What it can't do, however, is notice things at runtime and adapt. If you go here, http://www.oracle.com/technetwork/java/whitepaper-135217.html#3, there's more info on what Java does. Basically you'll get method inlining, reflective code compiled to 'usual code', loop unrolling, etc. For multi-threaded code you also get various locking optimisations, such as biasing the lock to the thread that created it, and it can also eliminate lock use completely.

The comparisons with assembly vs C from other replies are valid, but let's not forget about the other advantages of the JVM: JIT is able to re-adapt the compilation decisions based on runtime metrics, undo a previous optimisation decision to change to better strategies as its understanding of the actual execution improves. No matter how good your C compiler can be, it's limited to static decisions with no information on how the code is going to be used.

You might get lucky with jOOQ (http://www.jooq.org). Here's an example of how to use it with Jersey: http://blog.jooq.org/2013/11/28/using-jooq-with-jax-rs-to-build-a-simple-license-server/

I really agree with almost everything you just said. I'm a Node.js developer and I have used a bit of Java, Spring specifically, and some of the articles about this matter I've seen are overly zealous. Node.js IS faster in some respects than stuff running on the JVM; as you say, it has its advantages. But it will never be as good for heavy computational stuff. Finally, optimizing from 1.8 to 3.3 req/s is great — almost double — but having 2 or 3 req/s is slow in any book. I bet you could get a similar increase with PHP, with Python, even with some Java frameworks. I can attribute the PayPal gain to Node being great at connecting other networked apps; that's where it shines. This is what I had originally read: they're not changing the entire stack, they're just going to replace front-facing services with Node, which makes sense. On the other hand, I find that with whatever you use, there's much more boilerplate in Java than in Node.js. Given an average app to be built (including the PayPal kind), I'll put my money on 2 Node.js devs vs 2 Java devs any day to be faster in getting it out there. That Spring gives you stuff for free doesn't matter if you're not using that stuff.


The high response time is introduced by API/service calls; that is, the underlying API/service latency.
Stephen Connolly:

I don't mind that PayPal have moved some stuff from being written in Java to being written in Node. I object to others spinning their blog as something that it isn't. The reality is that the layer they have switched is not the bottleneck. Likely, in order to remove the current bottleneck, they will have to keep modifying this top layer, so you want that layer in a form that is simple to modify and express. Any reactive framework would work well for such a layer; Node, Vert.x, Play, etc. would all be fairly good in terms of expressiveness. Once the bottleneck has shifted back to this top layer, only then do you need to worry about tuning it... at that point it *may* be time to revisit whether Node is the best fit or whether some other reactive framework is more appropriate... time will tell... and you cannot even be sure that you will ever shift the bottleneck back to the top layer.

The point of my post is that this context was missing from the PayPal blog. When you understand this context, you would not be one of the crowd of third parties shouting "Oh look, PayPal got a 33% performance increase moving from Java to Node"... "Oh look, PayPal got a 40% reduction in the amount of boilerplate code they had to write moving from Java to Node." Neither of those claims was being made by the PayPal blog (though you have to read it carefully to see that).

In the case of the performance "improvement": if your top performance is less than 10 requests per second for one client, then you must not be the bottleneck, and any "performance" improvement you see is just accident. Heck, you should be able to get a cgi-bin bash script up to 10 requests per second. In the case of reduced boilerplate: they are comparing their old in-house framework with their new in-house framework. There is almost zero chance they will open source their old in-house framework... so we probably don't need to care how difficult it is to develop for.

If they have smart people, likely they will take the lessons they have learned from the reactive framework and bring those back to their JVM based codebases.

I'd also suggest that the quality of developer is a factor that isn't accounted for either. These sort of technology transitions are pushed by highly skilled developers who have the ability to quickly turn out code that is functional likely regardless of what language/framework/library/platform they're building on. This also then plays into the developer scaling and communication bottleneck.

"I sat on a panel with LinkedIn's head of mobile, the head of engineering at Google. The head of infra at eBay. And earlier with Eran Hammer, head arch at Walmart Labs (mobile). Additionally, Dav Glass who leads node arch at Yahoo! All of us have seen the same thing. In real web production, in real large scale systems, in real internet companies, node is outperforming the java stack in every case. ~ Bill Scott, PayPal" So... engineering heads at PayPal, LinkedIn, Walmart, Google, and Yahoo are promoting node and its performance. And you're telling me that performance gain is illusionary. Hmmm... Who to believe?
Stephen Connolly:

@mlong: then you are misreading my post. What I said is that the evidence they presented does not indicate a performance difference. While I also said that I believe the JVM can probably deliver better performance than V8, I do not believe that heavyweight frameworks, such as Spring, can deliver that performance. We also have to ask whether we are comparing apples with apples. And at the end of the day, 3.3 pages per second (the Node performance that was being raved about) is not the performance bottleneck.

Is Node faster than the JVM? It depends on the use case, the functional requirements, the skill of the developer team, and more besides. I think that Node makes it harder to write an application with completely shitty scaling properties (if you follow such patterns, the performance blows up as soon as you add the second client). The JVM, being multi-threaded, will let you use threads and locks and other primitives such that the scaling issues only start to hit beyond the scale that can be replicated on your laptop, so you only see the problems in your design when you start putting real load on the system. Does that make the JVM less performant, or is it a question of "with great power comes great responsibility"?

Ultimately my point is that you should believe "evidence". Comparing 1.8 pages per second with 3.3 pages per second is not "evidence". (I do have to point out that we don't know whether they adjusted the server JVM's compile threshold from the default of 10,000, or whether they waited the 10,000/1.8/60 ≈ 93 minutes of load needed for the JVM to warm up before measuring its performance. Or, to put it another way: was 1.8 pages per second the bytecode interpreter, and not even the full set of optimizations that the JVM is capable of?) The post I linked to was not evidence of a performance difference... show me the evidence and I will be happy to back whatever claims it can support.
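The warm-up arithmetic in that parenthetical, spelled out (10,000 being HotSpot's then-default server-mode compile threshold, and 1.8 pages/second the quoted Java throughput):

```java
public class WarmupTime {
    // Minutes for a single hot method to cross HotSpot's compile threshold
    // when it is invoked once per rendered page.
    static double warmupMinutes(int compileThreshold, double pagesPerSecond) {
        return compileThreshold / pagesPerSecond / 60.0;
    }

    public static void main(String[] args) {
        System.out.printf("%.0f minutes%n", warmupMinutes(10_000, 1.8));
    }
}
```

At 1.8 pages per second it takes roughly an hour and a half of sustained load before the JVM is even running fully optimised code, which is why an unwarmed measurement may not reflect JIT-compiled performance at all.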
Stephen Connolly:

@mlong: http://www.cubrid.org/blog/dev-platform/inside-vertx-comparison-with-nodejs/ is just one example of measuring relative performance of similar programming models between NodeJS and the JVM (i.e. Vert.X). Have a read of that evidence and see what you think.

I heard Azul's Zing is one of the fastest JVMs available (http://javarevisited.blogspot.sg/2011/12/jre-jvm-jdk-jit-in-java-programming.html) and performs better for applications with large heap space.

Yes, the Java framework at that time was built upon an older version of Spring, an old JVM, and an ancient app container. A half-dead, one-legged donkey would have been faster than that framework. The good news is that a new in-house Java framework has replaced that older framework (still Spring based), along with a newer JVM and an updated container. The bad news is that this new framework now has (yet again) a million dependencies. It is a philosophical difference: app developers ask for the minimum off-the-shelf framework along with company 'customized' dependencies delivered as modules — call it an opt-in approach to dependency management. Instead we get, "How about we give you every dependency under the sun, either because somebody might have asked for the feature or... we know better." Then when you include some other off-the-shelf tool you can almost guarantee a transitive dependency collision. It is unfortunate that Node was sold into the company with so much hand wringing and hyperbole. It did nothing to help the company, as the elephant in the room was not the 'Java framework'. At the time, 90% of the UI and business logic was going through an entirely different path implemented in a language other than Java. Call this other stack late-1990s technology.

Not quite true: gcc/g++ can compile using runtime information gathered from previous profiled runs (profile-guided optimisation).