
Ice performance?

Hello,

I have just been informed that TAO is faster than Ice. Apparently there is a comment about this in the forum, but the search tool refuses to search for words shorter than 4 letters.

Are there any benchmarks comparing Ice with other ORBs? From a recent study in our project, it appeared that omniORB might be the fastest CORBA ORB.

Comments

  • marc (Marc Laukien, ZeroC Staff; Organization: ZeroC, Inc.; Project: The Internet Communications Engine)
    Have a look at the following thread:

    http://www.zeroc.com/vbulletin/showthread.php?threadid=27

    As you can see from these independent benchmark figures, Ice is faster in some areas, and TAO in others. Also note that different concurrency models are compared in some tests.

    The upcoming Ice 1.2 has further performance improvements. We believe that Ice will then be faster than TAO in basically all areas, provided that the tests use equivalent concurrency models for Ice and TAO.

    Also note that most tests assume high-bandwidth, low-latency connections. If you have remote internet users with relatively low bandwidth, then Ice will definitely be faster than any CORBA ORB, because of its more compact encoding and the ability to compress data before it hits the wire.
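For the slow-link case, the gain from compressing before the wire is easy to see in miniature. A minimal sketch (Python, with an invented, repetitive payload; nothing here is Ice's actual protocol):

```python
import zlib

# An invented, repetitive request payload standing in for marshalled data.
payload = b"employee-record:" + b"John Smith;Engineering;" * 200

# Compress before it "hits the wire"; a slow link now carries far fewer bytes.
compressed = zlib.compress(payload, 6)

print(len(payload), len(compressed))
assert zlib.decompress(compressed) == payload  # lossless round trip
```

On a fast LAN the compression time can outweigh the saved bytes, which is consistent with the point that this matters mainly for low-bandwidth users.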
  • Hello,

    Well, according to the study made for our project, marshalling time is apparently only a small fraction of the overall transaction time over a 100 Mbit/s link.

    Besides, omniORB outperforms TAO in many respects. Some anecdotal figures, the size of the source tree for instance:
    TAO 329 MB and omniORB 17 MB.

    Note however that omniORB does not support passing objects by value or portable interceptors (I don't know what those might be).

    The base libraries are 7.2 MB for TAO and 1.7 MB for omniORB.
    The installation size is 32 MB for TAO and 11 MB for omniORB.

    For basic calls, like a void method call without parameters, omniORB is apparently nearly twice as fast as TAO!
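As a point of reference for what such void-call benchmarks measure, here is a minimal round-trip timer over a local socket pair (Python; the one-byte protocol is invented, and a real ORB adds marshalling, demultiplexing and dispatch on top of this floor):

```python
import socket, threading, time

def server(sock, n):
    # Answer n one-byte "void call" requests with a one-byte reply.
    for _ in range(n):
        sock.recv(1)
        sock.sendall(b"!")

a, b = socket.socketpair()
N = 1000
t = threading.Thread(target=server, args=(b, N))
t.start()

start = time.perf_counter()
for _ in range(N):
    a.sendall(b"?")  # invoke
    a.recv(1)        # block until the void reply arrives
elapsed = time.perf_counter() - start
t.join()

print(f"{N} calls, {elapsed / N * 1e6:.1f} us per round trip")
```
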

    So one should probably also consider omniORB as a performance reference.

    What I don't understand is where this performance difference comes from. It is huge for MICO, for instance, which was the slowest in all tests.

    When I read the forum thread showing that Ice was slower than TAO, I was puzzled, because it means Ice may be even slower than omniORB. Where could this come from?

    I only see two likely sources of performance loss: data copying and memory allocation. In multithreaded applications, thread synchronisation can also be a source of delay.
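Of those two suspects, data copying is the easier one to illustrate. A small sketch of the difference between copying a buffer slice and merely referencing it (Python's memoryview stands in for the zero-copy unmarshalling an ORB might do):

```python
buf = bytes(1_000_000)  # stand-in for a received network buffer

copy = buf[16:]              # allocates and copies ~1 MB past a 16-byte header
view = memoryview(buf)[16:]  # no copy: just an offset into the same buffer

assert len(copy) == len(view) == len(buf) - 16
assert view.obj is buf  # the view still points at the original allocation
```
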

    Unfortunately I can't publish the report. It is still a draft and an internal study of our organisation. Still, I would be interested to hear some comments on the reported information.
  • marc (Marc Laukien, ZeroC Staff)
    I used to implement a high-speed ORB that was faster than omniORB. However, comparing this ORB (or omniORB, for that matter) with TAO or Ice is comparing apples with oranges.

    You can get latency down drastically if you choose simpler concurrency models. For example, if you use a blocking client-side concurrency model, latency *will* go down. However, the drawback is that you cannot do nested method calls.
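A blocking client-side model can be sketched in a few lines: the invoking thread writes the request and then blocks in the read itself (Python; the wire format is invented). While it sits in that read, nothing else, and in particular no nested call back into the client, can be serviced on this connection.

```python
import socket, threading

def echo_server(conn):
    # One request, one reply; stand-in for a remote servant.
    data = conn.recv(1024)
    conn.sendall(b"reply:" + data)

client, server = socket.socketpair()
threading.Thread(target=echo_server, args=(server,)).start()

def blocking_invoke(sock, request):
    sock.sendall(request)
    return sock.recv(1024)  # the calling thread blocks here for the reply

resp = blocking_invoke(client, b"ping")
print(resp)  # → b'reply:ping'
```
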

    Similarly, there are server-side thread models that are faster than our leader-follower thread pool model, for example thread-per-connection. As far as I know, omniORB uses a blocking client-side concurrency model and thread-per-connection as its server-side concurrency model. (At least that used to be true in the past; I haven't followed omniORB development closely.)
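Thread-per-connection is simple to sketch: the accept loop hands each new connection its own dedicated thread (Python; the uppercasing "servant" is invented). There is no per-request hand-off between threads, which is where the latency win comes from, at the cost of one thread per client.

```python
import socket, threading

def handle(conn):
    # Dedicated thread: services every request on this one connection.
    with conn:
        while data := conn.recv(1024):
            conn.sendall(data.upper())

def serve(listener):
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.create_connection(listener.getsockname()) as c:
    c.sendall(b"hello")
    reply = c.recv(1024)
print(reply)  # → b'HELLO'
```
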

    So if you compare performance, in particular latency, you must make sure that you have comparable concurrency models. Everything else is comparing apples with oranges.

    As for the difference between TAO and Ice, there are two main reasons:
    • First of all, we didn't optimize some marshalling code in Ice 1.0 and Ice 1.1. This code is much better optimized in the upcoming 1.2. So expect better performance numbers in 1.2.
    • The test was done using different concurrency models. We could easily add a blocking client-side model, which would make a big difference, just to look better in performance tests. However, since such a model is of limited use, we would rather not do this.
  • Hello,

    I have looked at omniORB documentation. The first chapter of the user documentation gives a clear picture of the situation.

    You are right about the multithreading model. The omniORB performance reported in our study may be explained by this thread-per-connection model. It requires multiple TCP connections between two processes if concurrency is needed, but they reportedly cache connections, so the number of parallel connections should be no more than the number of concurrent transactions. In version 4.0 they also support a thread pool model, which removes the scalability problem that, as you know, is my obsession. ;)
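The thread pool model mentioned above is straightforward to sketch with a fixed pool dispatching requests from any number of sources (Python's stdlib pool; the dispatch function is a made-up stand-in for unmarshalling plus servant invocation):

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch(request_id):
    # Stand-in for unmarshalling the request and invoking the servant.
    return f"done:{request_id}"

# Four worker threads serve 100 requests: the thread count no longer
# grows with the number of connections, which is the scalability win.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(dispatch, range(100)))

print(results[0], results[-1])  # → done:0 done:99
```
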

    Good news for Ice and ORBs using the more general multithreading model: a change in Linux kernel 2.6 should strongly reduce the thread context switch overhead. A patch with the required kernel changes is already available for 2.4.x; it is named something like "low latency". I can get a precise reference on this if desired. There is also an article on it in a Linux expert journal.

    So the performance difference between Ice and other ORBs that is due to thread context switching will probably shrink in the near future.

    You wrote that you implemented an ORB faster than omniORB. How would Ice compare to it? My understanding is that besides thread latencies, it is the encoding that would make the performance difference. What are the performance differences between Ice and CORBA regarding encoding?
  • marc (Marc Laukien, ZeroC Staff)
    Originally posted by ChMeessen

    You are right about the multithreading model. The omniORB performance reported in our study may be explained by this thread-per-connection model. It requires multiple TCP connections between two processes if concurrency is needed, but they reportedly cache connections, so the number of parallel connections should be no more than the number of concurrent transactions. In version 4.0 they also support a thread pool model, which removes the scalability problem that, as you know, is my obsession. ;)

    Even more impact comes from the client-side concurrency model, i.e., how the client side receives responses. The simplest model is also the most efficient: the thread that sends a request also receives the response. This is a blocking client-side concurrency model. However, since the thread locks the connection while it blocks, no concurrent calls are possible over the same connection, and thus no nesting. You must use more than one connection if you need concurrent or nested calls.
    Originally posted by ChMeessen

    You wrote that you implemented an ORB faster than omniORB. How would Ice compare to it? My understanding is that besides thread latencies, it is the encoding that would make the performance difference. What are the performance differences between Ice and CORBA regarding encoding?

    The CORBA ORB I co-authored had lower latency than both Ice and OmniORB. However, this is comparing apples with oranges: This ORB had the simpler concurrency model, and also didn't have features such as collocation optimization (which meant all calls go over the wire). For some domains, which require such super-low latency, this is a reasonable tradeoff. For other domains, you cannot work with such a concurrency model.

    As for the encoding, I don't think it matters at all if you measure latency. For short messages, the encoding is not very important, because there is not much to encode.

    As for long messages, I don't think the encoding has a lot of impact on very fast networks, because message size doesn't matter that much on such networks. However, if you use slower connections, like DSL or even modem, the more compact encoding of Ice will definitely be an advantage.
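The size difference is easy to demonstrate in miniature. CORBA's CDR encoding aligns each primitive to its natural boundary, inserting padding bytes; an encoding without alignment writes fields back to back. Python's struct module can show the effect for a hypothetical struct of one octet, one double, and one short (the exact padded size depends on the platform's alignment rules):

```python
import struct

# Hypothetical struct: { octet flag; double value; short count; }
aligned = struct.calcsize("@Bdh")  # native alignment, CDR-like padding
packed  = struct.calcsize("=Bdh")  # no padding, fields back to back

print(aligned, packed)  # e.g. 18 vs 11 on x86-64
assert packed <= aligned
```
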