This forum has been archived.

Number of threads/Connections used when submitting multiple AMI methods

If my programming/interaction model is as follows:

1. The client application may interact with multiple services (which may share, but don't have to, the same Communicator or object adapter).
2. At each "interaction" the client sends multiple AMI requests (to one or more services) and then waits and collects all responses, or any completed responses, up to a certain timeout.
3. A client can have multiple threads working concurrently in the mode described above.

My understanding is as follows:
1. On the service side, each AMI call will occupy a thread (belonging to the communicator or adapter thread pool) for the duration of the call, unless AMD is used (but then the programmer is responsible for executing the call and returning the results from a different thread).

2. On the client side, each AMI call will occupy a thread until the AMI callback runs.

3. I assume that two connections are needed per adapter (we are not using bidirectional connections) and that all concurrent client requests are multiplexed over the same connection (and the same applies to service responses).

Is the above description correct? If so, is there anything I can do to reduce the number of concurrent threads? Any advice on handling client-side response timeouts (a connection timeout looks too harsh for this case)?

Thanks,
Arie.

Comments

  • benoit (Rennes, France)
    Hi,
    aozarov wrote: »
    If my programming/interaction model is as follows:

    1. The client application may interact with multiple services (which may share, but don't have to, the same Communicator or object adapter).
    2. At each "interaction" the client sends multiple AMI requests (to one or more services) and then waits and collects all responses, or any completed responses, up to a certain timeout.
    3. A client can have multiple threads working concurrently in the mode described above.

    My understanding is as follows:
    1. On the service side, each AMI call will occupy a thread (belonging to the communicator or adapter thread pool) for the duration of the call, unless AMD is used (but then the programmer is responsible for executing the call and returning the results from a different thread).

    Correct.

    2. On the client side, each AMI call will occupy a thread until the AMI callback runs.

    No, this is not correct. An AMI request doesn't consume a thread while waiting for the response. Responses to AMI requests are handled by the client thread pool: when a response is received, a client thread pool thread executes the AMI callback. See 33.3.4 Concurrency Issues in the Ice manual for more information.
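    This behavior can be illustrated outside Ice. In the sketch below (plain Python, using `concurrent.futures` merely as a stand-in for the Ice client thread pool; none of these names come from the Ice API), ten asynchronous requests are in flight at once, yet only the two pool threads are ever occupied, and only while a callback actually runs:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

# A small "client thread pool" (2 threads) services the callbacks for all
# ten in-flight requests; the issuing thread never blocks per request.
pool = ThreadPoolExecutor(max_workers=2)
results = []
lock = threading.Lock()
done = threading.Event()

def on_reply(future):
    # Runs once the "request" completes, like an AMI callback.
    with lock:
        results.append(future.result())
        if len(results) == 10:
            done.set()

for i in range(10):
    pool.submit(lambda i=i: i * i).add_done_callback(on_reply)

done.wait(5)  # the caller blocks only here, and only by choice
pool.shutdown()
```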
    3. I assume that two connections are needed per adapter (we are not using bidirectional connections) and that all concurrent client requests are multiplexed over the same connection (and the same applies to service responses).

    No, a single connection is sufficient. The response to an AMI request is sent over the connection used to receive the request. An AMI request is no different from a regular twoway request in this respect.
    Is the above description correct? If so, is there anything I can do to reduce the number of concurrent threads? Any advice on handling client-side response timeouts (a connection timeout looks too harsh for this case)?

    You could have a thread waiting for the responses of all your AMI requests and set a flag when the timeout occurs. The AMI callbacks would check this timeout flag and discard the response if it is set.
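    This timeout-flag pattern can be sketched with plain Python threading, independent of the Ice API (`ResponseCollector` and its method names are invented for illustration):

```python
import threading

class ResponseCollector:
    """Collects callback-style responses until a deadline; late ones are dropped."""

    def __init__(self, expected):
        self.expected = expected
        self.responses = []
        self.timed_out = threading.Event()   # the "timeout flag"
        self.done = threading.Event()
        self.lock = threading.Lock()

    def on_response(self, value):
        # Called from the client thread pool when a response arrives.
        with self.lock:
            if self.timed_out.is_set():
                return                       # timeout flag set: drop late response
            self.responses.append(value)
            if len(self.responses) == self.expected:
                self.done.set()

    def wait(self, timeout):
        # The waiting thread: blocks until all responses arrive or timeout.
        if not self.done.wait(timeout):
            self.timed_out.set()             # callbacks now ignore further responses
        return list(self.responses)
```

    A collector is created per "interaction"; the waiting thread calls `wait(timeout)` and works with whatever responses were collected in time, while straggler callbacks see the flag and return immediately.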

    Cheers,
    Benoit.
  • Thanks!

    As for 3, I was probably confused by section 37.7 of the manual and thought that the response to a method call comes back over a different connection (I now assume that section is about callbacks, not standard method calls).

    Is there a way to "flag" an AMI call as cancelled, so that any waiting tasks (not yet executed, still queued in the thread pool) are skipped and any completed task's response is dropped?
  • benoit (Rennes, France)
    No, there's no way to cancel an AMI request.

    If you're executing long-running tasks on the server, it's probably better to implement a work-queue mechanism where you invoke a method to queue or start a new task and pass an Ice callback object to receive the result. This gives you better control over the execution of the tasks and even allows cancelling them.

    Cheers,
    Benoit.
  • Thank you very much.
    In my case they are not long-running tasks; I was more concerned about the volume of concurrent requests and wanted to optimize resources (CPU and network bandwidth). A response is not going to be used if it arrives later than N seconds, and I would prefer not to compute it, send it back, and drop it on the client side. I guess I can use AMD and propagate a "max response time": when my "execution" thread takes a request from the queue, it will consider this value and return null (or an exception) if the timeout has been exceeded. I assume returning null is lighter than returning a full payload or propagating an exception.
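    The "max response time" idea can be sketched as follows (plain Python, no Ice; the deadline tuple and shutdown sentinel are conventions assumed for illustration). The worker pops each queued task, checks its deadline, and answers with a null instead of running the computation when the deadline has already passed:

```python
import queue
import threading
import time

def worker(tasks):
    """AMD-style worker: each queued item carries a deadline; if the
    deadline has passed, reply with a cheap null instead of doing the work."""
    while True:
        item = tasks.get()
        if item is None:              # shutdown sentinel
            return
        deadline, compute, respond = item
        if time.monotonic() > deadline:
            respond(None)             # too late: skip the computation entirely
        else:
            respond(compute())

tasks = queue.Queue()
replies = []
t = threading.Thread(target=worker, args=(tasks,))
t.start()
tasks.put((time.monotonic() + 10.0, lambda: "payload", replies.append))
tasks.put((time.monotonic() - 1.0, lambda: "payload", replies.append))  # expired
tasks.put(None)
t.join()
```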