Understanding the Ice threading model

I'm new to Ice and I'm trying to understand the Ice threading model as described in the manual, in order to decide the size of my communicator thread pools. However, I have trouble understanding how many threads from the pools of the caller and the callee are actually involved. Is the following true for simple twoway invocations and for AMI+AMD invocations?

Simple twoway invocation:
Threads involved:
  • threadA (main thread of the caller)
  • threadB (a thread in the client thread pool of the caller)
  • threadC (a thread in the server thread pool of the callee)
Thread control:
  1. invocation on twoway proxy in threadA
  2. threadA wakes up threadB (something to send) and waits
  3. threadB wakes up threadC (send invocation to server) and waits
  4. threadC does some work, wakes up threadB and is suspended
  5. threadB wakes up threadA and is suspended
  6. threadA does something with the result
So that:
  • This ties up threadA, threadB and threadC while the work is being done.
  • It involves one thread from the caller's client pool and one thread from the callee's server pool.
  • A concurrent invocation would need at least Ice.ThreadPool.Client.SizeMax=2 for the caller and Ice.ThreadPool.Server.SizeMax=2 for the callee.
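
For reference, here is a minimal sketch of what I mean by a simple twoway invocation. The Hello interface, its sayHello operation and the endpoint are made-up examples:

    // Caller side: a plain synchronous twoway invocation.
    // threadA blocks in sayHello() until the response arrives.
    #include <Ice/Ice.h>
    #include <Hello.h> // generated from a hypothetical Hello Slice interface

    int main(int argc, char* argv[])
    {
        Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);
        HelloPrx hello = HelloPrx::checkedCast(
            ic->stringToProxy("hello:tcp -h somehost -p 10000"));
        hello->sayHello(); // twoway call: ties up the calling thread
        ic->destroy();
        return 0;
    }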


AMI (with callback object) + AMD twoway invocation:
Threads involved:
  • threadA (main thread of the caller)
  • threadB (a thread in the client thread pool of the caller)
  • threadC (a thread in the server thread pool of the callee)
  • threadD (a thread of the callee)
  • threadE (a thread in the client thread pool of the callee)

Thread control:
  1. invocation on twoway proxy in threadA
  2. threadA wakes up threadB (something to send)
  3. threadB wakes up threadC (sends the invocation to the server) and is suspended
  4. concurrently, threadA does some work and then waits (calls waitForCompleted on the AsyncResultPtr; see the sketch after this list)
  5. threadC adds a job to a queue, wakes up threadD (which is a waiting queue reader) and is suspended
  6. threadD does some work, wakes up threadE and waits (queue reader)
  7. threadE wakes up threadB and is suspended
  8. threadB does something with the result, wakes up threadA (which was waiting) and is suspended
  9. threadA continues
So that:
  • Nothing is tied up (except for threadA and threadD, but that is by choice).
  • It involves one thread from the caller's client pool and two threads of the callee (one client pool + one server pool).
  • A concurrent invocation would need at least Ice.ThreadPool.Client.SizeMax=2 for the caller and Ice.ThreadPool.Client.SizeMax=2 and Ice.ThreadPool.Server.SizeMax=2 for the callee.
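
To make step 4 concrete, this is the calling pattern I have in mind, again using the hypothetical Hello interface (begin_/end_ are the names slice2cpp generates for AMI):

    // Caller side: AMI invocation. threadA keeps running after begin_
    // returns and may wait for the result whenever it chooses.
    Ice::AsyncResultPtr r = hello->begin_sayHello();
    // ... threadA does some work here ...
    r->waitForCompleted();   // threadA decides to wait for the response
    hello->end_sayHello(r);  // collect the result (throws on failure)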

Comments

  • benoit (Rennes, France)
    Hi,

    It's not quite correct.

    Simple two-way invocation:
    • invocation on the twoway proxy in threadA
    • threadA from the client sends the invocation over the connection and waits.
    • threadC from the server receives the invocation and does some work. It sends back the response to the client.
    • threadB from the client receives the response and wakes up threadA
    • threadA does something with the result

    ThreadA from the client is tied up while waiting for the server to send the response. To allow concurrent invocations, you need to set Ice.ThreadPool.Server.SizeMax=n, where n > 1. There's no need to increase the client thread pool size: threadB is only used to read the response from the connection and notify threadA; this is non-blocking, and a single thread can process many responses.
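
    For example, to allow up to four concurrent dispatches on the server (a sketch; the property could just as well be set in a configuration file):

        // Server side: size the server thread pool before creating
        // the communicator.
        Ice::InitializationData id;
        id.properties = Ice::createProperties();
        id.properties->setProperty("Ice.ThreadPool.Server.SizeMax", "4");
        Ice::CommunicatorPtr ic = Ice::initialize(id);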

    For AMI/AMD:
    • AMI invocation on the twoway proxy in threadA; threadA sends the invocation.
      • If the invocation can be sent without blocking, all the data is sent from threadA.
      • Otherwise, control is returned to threadA, and threadB sends the remainder of the request in the background.
      Once the AMI call returns, we don't care anymore about what threadA is doing.
    • threadC from the server receives the request and dispatches it to the servant.
      • The servant either sends the response from the dispatch using the AMD callback object or decides to delay the response.
      • If the servant decides to delay the response, threadD from the server will eventually send the response.
    • threadB from the client receives the response and eventually calls the AMI callback, if one was specified.

    A single thread from the client thread pool is involved here, for two different purposes: sending the request in the background if it can't be done from the main thread, and receiving the response. A single thread from the server thread pool is also involved.
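
    To illustrate the delayed-response case, here is a sketch of an AMD servant. It assumes a hypothetical Hello::sayHello operation marked with ["amd"] metadata (slice2cpp then generates sayHello_async and the AMD_Hello_sayHello callback) and a made-up WorkQueue whose worker thread plays the role of threadD:

        // Server side: AMD dispatch. The dispatching pool thread returns
        // from sayHello_async right away; a worker thread completes the
        // request later by calling ice_response() on the stored callback.
        class HelloI : public Hello
        {
        public:
            HelloI(const WorkQueuePtr& queue) : _queue(queue)
            {
            }

            virtual void
            sayHello_async(const AMD_Hello_sayHelloPtr& cb, const Ice::Current&)
            {
                _queue->add(cb); // a worker later calls cb->ice_response()
            }

        private:
            WorkQueuePtr _queue; // hypothetical work queue
        };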

    It's fine to have a single thread in the client thread pool for concurrent invocations, but as in the synchronous case, you need multiple threads in the server thread pool for concurrent dispatch.

    Increasing the number of threads in the client thread pool can be useful when using AMI if you want the AMI callbacks to be processed concurrently.
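
    This mirrors the server-side setting above (again a sketch; a configuration file works equally well):

        // Client side: let up to four AMI callbacks run concurrently.
        Ice::InitializationData id;
        id.properties = Ice::createProperties();
        id.properties->setProperty("Ice.ThreadPool.Client.SizeMax", "4");
        Ice::CommunicatorPtr ic = Ice::initialize(id);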

    Let us know if this is still not quite clear!

    Cheers,
    Benoit.
  • Great, thanks! I was mainly confused by the "The client thread pool services outgoing connections" statement in the manual. This led me to think that threads from the client thread pool are always involved in outgoing requests, while this is only the case when the outgoing request is an AMI call (and even then only if the calling thread can't send it without blocking). Well, they are always involved since they handle the reply (at least for twoway invocations), but that has nothing to do with sending the request.

    Then there is one issue left: bidirectional connections. Suppose app1 establishes the connection to app2. Both the simple and AMI+AMD invocations work exactly the same, except when app2 takes the role of the client. In that case, client and server thread pools switch roles and concurrent calls are not possible. Correct?
  • benoit (Rennes, France)
    Yes, threads from the client thread pool are not always involved in sending an invocation; they may or may not be, depending on the size of the invocation and on whether other data is pending for sending on the connection. In any case, you should see this as an implementation detail. The only thing you need to keep in mind is that there must be at least one thread in the client thread pool available to receive responses or send data in the background (when necessary). To ensure this, you must make sure that your AMI callbacks (or servant dispatches, for bi-directional connections) don't keep those threads busy for too long.

    Regarding your question about bi-directional connections: concurrent calls should be possible as long as there are enough threads to dispatch calls concurrently. Whether or not you use bi-directional connections isn't really relevant here.
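
    For reference, the usual bi-directional setup looks roughly like this (a sketch; CallbackReceiver, addClient and the identity handling are made up for the example):

        // app1 (establishes the connection): create an adapter that is
        // configured without endpoints, attach it to the outgoing
        // connection, and tell app2 which identity to call back on.
        Ice::ObjectAdapterPtr adapter = ic->createObjectAdapter("CallbackAdapter");
        Ice::Identity ident;
        ident.name = IceUtil::generateUUID();
        adapter->add(new CallbackReceiverI, ident);
        adapter->activate();
        serverPrx->ice_getConnection()->setAdapter(adapter);
        serverPrx->addClient(ident); // application-defined way to pass the identity

        // app2 (accepted the connection): inside a dispatch, build a proxy
        // that goes back over the same incoming connection. Invocations on
        // it are dispatched by app1's *client* thread pool.
        CallbackReceiverPrx cb =
            CallbackReceiverPrx::uncheckedCast(current.con->createProxy(ident));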

    Cheers,
    Benoit.
  • Thanks Benoit, that clears things up.