
Ice client thread pool size with Tomcat

In our current setup, Tomcat contacts Ice servers for each request. We specify a server-side thread pool size on the Ice servers. We do not use AMI or AMD (we only use synchronous calls).

We use multiple Tomcat threads on the client side, say 10. We wanted to confirm that there is some sort of threading mechanism on the client side. Does this effectively create a client-side thread pool of size 10?

Thanks, Shiv

Comments

  • dwayne (St. John's, Newfoundland)
    No, making invocations from multiple threads does not increase the size of the client thread pool, whose default size is 1.

    What exactly is your concern? If it is that making Ice calls from multiple threads in parallel may not be safe, that is not an issue: Ice is thread-safe in this regard.
  • The concern is latency. If we send each Ice request one by one, the responses will be slower. Is there a way to change the default client thread pool size for synchronous calls? The manual only talks about changing this for asynchronous calls.

    What we would like is for Ice to send client-side requests using multiple threads so that we do not incur additional latency.

    Thanks, Shiv
  • The client-side thread pool does not affect synchronous calls in any way. Instead, it exists so that replies to asynchronous calls can be processed concurrently.

    If you want to send concurrent synchronous requests, simply fork the number of threads you want and make calls from each of these threads. The calls will be sent concurrently and will be dispatched and processed concurrently, provided that the server-side thread pool size can accommodate that many calls. (See the sketch below this thread.)

    Cheers,

    Michi.
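
Below is a minimal sketch of the approach described above, assuming the Ice 3.7 Java mapping (com.zeroc.Ice) and a hypothetical proxy string; the object identity and endpoint would need to be replaced with those of your own deployment. Each thread makes its own synchronous invocation on a shared proxy, which is safe because the communicator and proxies are thread-safe.

```java
import com.zeroc.Ice.Communicator;
import com.zeroc.Ice.ObjectPrx;
import com.zeroc.Ice.Util;

public class ConcurrentSyncCalls {
    public static void main(String[] args) throws InterruptedException {
        // The communicator is thread-safe and is shared by all calling threads.
        try (Communicator communicator = Util.initialize(args)) {
            // Hypothetical object identity and endpoint; replace with your own.
            ObjectPrx proxy = communicator.stringToProxy("hello:default -h somehost -p 10000");

            // Ten threads, mirroring ten Tomcat worker threads, each making a
            // synchronous call on the same proxy. The calls are sent concurrently;
            // the client-side thread pool is not involved in synchronous invocations.
            Thread[] threads = new Thread[10];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    // ice_ping is a built-in synchronous operation available on every
                    // proxy; a real application would call an operation from its
                    // own Slice interface instead.
                    proxy.ice_ping();
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
        }
    }
}
```

For the calls to also be dispatched concurrently, the server-side thread pool must have enough threads, for example by setting Ice.ThreadPool.Server.Size=10 in the server's configuration.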