
Ice vs. JNI throughput performance?

Hi again,

Wondering if anybody has done any throughput performance testing between a C++ server and a Java client using both Ice and JNI, for example based on the demo/Ice/throughput demo.

How would you expect Ice to perform as compared to JNI?

Thanks,

Brian

Comments

  • benoit (Rennes, France)
    Hi Brian,

    Ice is a general-purpose middleware, whereas JNI is an interface that lets a Java application call assembly/C/C++ code from within the JVM. Is your intent to compare calling C++ code from a Java program with Ice and with JNI?

    With JNI, calls from the Java application to the C or C++ library are direct and stay in the same process; there's no marshalling (afaik). With Ice, since the Java and C++ applications are in different processes, the two processes communicate over TCP/IP sockets, and Ice marshals the request on the sending side and unmarshals it on the receiving side.

    So I would expect the throughput between the Java code and the C++ code to be better if you're using JNI. But then you're comparing quite different technologies... Ice does a lot more than JNI ;)
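    Just to illustrate the difference between the two call paths, here is a rough Java sketch. The names are placeholders, and the Ice part assumes the demo's Slice-generated Throughput classes, so it may not match your tree exactly:

        // JNI: the call stays in the same process; the JVM jumps directly into
        // native code, so there is no marshalling and no socket involved.
        class NativeThroughput {
            static { System.loadLibrary("throughput"); } // hypothetical native library
            public native void sendByteSeq(byte[] seq);  // implemented in C/C++
        }

        // Ice: the call goes through a generated proxy; the runtime marshals the
        // arguments, sends them over TCP/IP, and the C++ server unmarshals them
        // before dispatching to the servant.
        class IceThroughputCaller {
            static void send(Ice.Communicator communicator, byte[] seq) {
                ThroughputPrx throughput = ThroughputPrxHelper.checkedCast(
                    communicator.stringToProxy("throughput:default -p 10000"));
                throughput.sendByteSeq(seq);
            }
        }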

    Benoit.
  • Benoit,

    Thanks for the reply. I modified the throughput demo in the installation to not only run against the Ice server, but also to create a JNI layer between the C++ client application and the Java ThroughputI.java class. JNI is certainly faster, as you say, when both client and server are running on the same machine. However, I noticed that in our application, when I changed from twoway to oneway invocations and put our client on a different machine, Ice throughput was better, perhaps because the unmarshalling was no longer competing for the same CPU...? Dunno.

    Another question I have is about Ice::Current. I noticed that it is 56 bytes in size. When we use Ice, are we sending an extra 56 bytes with every message sent to a proxy?

    Thanks again,

    Brian
  • marc (Florida)
    Ice::Current only reflects certain information about the call, such as the identity of the Ice object, the facet name, the operation name, and the request context. You do not transfer the full Ice::Current struct, as the data on the wire is more compact than the representation in the struct.

    For example, if you have an object with the identity "foo", an operation "bar", no facet, and no context, then the overhead is 4 + 4 + 1 + 1 = 10 bytes.
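    To make the arithmetic concrete (assuming a short string marshals as a 1-byte size followed by its characters, and that an empty facet and an empty context each marshal as a single size byte), here is the same breakdown written out:

        class RequestOverhead {
            public static void main(String[] args) {
                int identity  = 1 + "foo".length(); // size byte + "foo"     = 4
                int operation = 1 + "bar".length(); // size byte + "bar"     = 4
                int facet     = 1;                  // empty facet, size 0   = 1
                int context   = 1;                  // empty context, size 0 = 1
                // total per-request overhead
                System.out.println(identity + operation + facet + context); // 10
            }
        }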

    This has nothing to do with Ice::Current per se; it's simply that the protocol message must identify which operation is to be called on which object.

    For more information, please have a look at the Ice protocol chapter in the manual.

    As for the oneway performance: oneways are faster because no reply messages need to be sent. And the throughput with two machines is higher because the two processes no longer compete for the same CPU cycles.
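    In case it's useful, a minimal sketch of switching to a oneway proxy in the Java client (again assuming the demo's generated Throughput types; "twoway" and "seq" are just placeholders for your existing proxy and payload):

        class OnewayExample {
            static void send(ThroughputPrx twoway, byte[] seq) {
                // A oneway proxy is derived from the existing proxy; invocations on it
                // return as soon as the request is handed to the transport, with no reply.
                ThroughputPrx oneway = ThroughputPrxHelper.uncheckedCast(twoway.ice_oneway());
                oneway.sendByteSeq(seq);
            }
        }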
  • All makes sense to me.

    Thanks Marc and Benoit.