
Understanding AMI/AMD and threadpool settings

In the Ice manual AMI is well explained. You make a call with AMI and, as soon as the request has either been sent or queued to be sent to the server, your call returns. The true or false return value tells you whether it was sent or only queued to be sent.
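
For reference, here is roughly what I have in mind (just a sketch: the Hello interface, its sayHello(int) operation returning a string, and the callback class are all made up, and it assumes the callback-style C++ AMI mapping where *_async returns the true/false mentioned above):

    // Hypothetical AMI callback for ["ami"] string sayHello(int delay);
    // its methods run on an Ice.ThreadPool.Client thread.
    class AMI_Hello_sayHelloI : public AMI_Hello_sayHello
    {
    public:
        virtual void ice_response(const std::string& result)
        {
            // The reply arrived from the server.
        }

        virtual void ice_exception(const Ice::Exception& ex)
        {
            // The invocation failed.
        }
    };

    // hello is a HelloPrx proxy; the call returns as soon as the request has
    // been sent, or queued to be sent (true = sent, false = queued).
    bool sent = hello->sayHello_async(new AMI_Hello_sayHelloI, 5);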

a) Now the server gets the request and consumes a communicator/adapter thread (the number of these available is set by ThreadPool.Server.Size). It processes the request on that thread and then returns the value.

Back on your client, ThreadPool.Client.Size kicks in and determines the number of async responses that can be handled.
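
The properties in question can be set like this (a minimal sketch in the C++ mapping; the values are arbitrary):

    Ice::InitializationData initData;
    initData.properties = Ice::createProperties();

    // Threads that dispatch incoming requests in the server; AMD and
    // non-AMD requests alike are dispatched from this pool.
    initData.properties->setProperty("Ice.ThreadPool.Server.Size", "4");

    // Threads that handle incoming replies (e.g. AMI responses) in the client.
    initData.properties->setProperty("Ice.ThreadPool.Client.Size", "2");

    Ice::CommunicatorPtr communicator = Ice::initialize(initData);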

However I don't quite get AMD.

b) The request arrives from a client on an Ice-internal thread and is placed on a queue; when a server-side thread is free, the request is processed. Is the number of threads available for AMD still set by ThreadPool.Server.Size?

So the only difference between (a) and (b), a server without AMD and one with AMD, is that the thread that reads off the underlying transport is not the one that ends up processing the request. The server therefore scales better, since it can build up this queue of pending work.

I would really appreciate a comparison of what happens in a server without AMD and what happens in one with it. Does ThreadPool.Server.Size only have an effect when AMD is selected?

Comments

  • dwayne
    dwayne St. John's, Newfoundland
    Whether or not you use AMD, the request is still processed by one of the server threads, the number of which is controlled by the Ice.ThreadPool.Server.Size property. With AMD, Ice itself does not offload the processing of requests onto another set of threads separate from those that process non-AMD requests. What AMD does is allow you to do the processing in another thread, instead of tying up the Ice server thread for the duration of the request, so that Ice can continue dispatching requests once the *_async() method returns.

    If you just do all of your processing in the *_async() method, then you will be using one of the server threads, the same as if you did not use AMD at all. Please take a look at the Ice/async demo for an example of AMD being used with a work queue to handle the processing of long requests while still allowing short requests to be dispatched without delay; a rough sketch of that pattern is shown below.
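
    To give you an idea, here is that pattern in outline. The Slice operation ["amd"] string sayHello(int delay); and the WorkQueue class are made up for illustration, and it assumes the classic C++ AMD mapping:

        // Dispatched on an Ice.ThreadPool.Server thread.
        void
        HelloI::sayHello_async(const AMD_Hello_sayHelloPtr& cb,
                               Ice::Int delay,
                               const Ice::Current&)
        {
            // Hand the AMD callback to the work queue and return immediately,
            // freeing this dispatch thread for the next request.
            _workQueue->add(cb, delay);
        }

        // Later, on the work queue's own thread:
        void
        WorkQueue::doWork(const AMD_Hello_sayHelloPtr& cb, Ice::Int delay)
        {
            // ... the long-running processing goes here ...
            cb->ice_response("Hello, World!"); // send the reply to the client
            // On failure, call cb->ice_exception(...) instead.
        }

    Nothing here is specific to Ice: the work queue is just your own thread(s) holding on to the AMD callbacks until the results are ready.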
  • matthew
    matthew NL, Canada
    With respect to dispatching the request, the semantics of AMD and non-AMD requests are identical: a thread from the Ice thread pool is used to dispatch the request.

    The primary difference between an AMD and non-AMD request is that with an AMD request you can choose to return from the method dispatch prior to sending the reply to the request.

    This is useful if you can somehow offload the processing. Typical examples: you dispatch an AMI request to another server and send the reply from the AMI callback, or you have an internal work queue, separate from the Ice thread pool, which processes the request and then sends the reply (a sketch of the first case is included at the end of this post).

    non-AMD request:

        Ice thread pool thread -> dispatch request -> processing -> send reply

    AMD request:

        Ice thread pool thread -> dispatch request -> *_async() returns (no reply sent yet)

        ... some time later ...

        some other thread -> processing -> send reply (by calling on the AMD callback object)

    (hopefully you get the idea!)
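
    To make the first case concrete, here is a rough sketch of forwarding a request to another server and replying from the AMI callback. The Slice operation ["ami", "amd"] string sayHello(int delay); and all the names are invented, and it assumes the callback-style C++ AMI/AMD mappings:

        // AMI callback that holds on to the AMD callback of the original request.
        class ForwardCB : public AMI_Hello_sayHello
        {
        public:
            ForwardCB(const AMD_Hello_sayHelloPtr& cb) : _cb(cb) {}

            virtual void ice_response(const std::string& result)
            {
                _cb->ice_response(result); // reply to the original client
            }

            virtual void ice_exception(const Ice::Exception& ex)
            {
                _cb->ice_exception(ex); // propagate the failure
            }

        private:
            AMD_Hello_sayHelloPtr _cb;
        };

        void
        HelloI::sayHello_async(const AMD_Hello_sayHelloPtr& cb,
                               Ice::Int delay,
                               const Ice::Current&)
        {
            // The dispatch thread only starts the AMI call and then returns;
            // the reply is sent later from ForwardCB::ice_response().
            _backend->sayHello_async(new ForwardCB(cb), delay);
        }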
  • Thanks for your answers.

    I somehow thought that AMD was being clever; however, all it does is give you the opportunity to be clever.

    You can take the AMD callback and place it onto a queue to be serviced elsewhere, and let the async method complete, thus freeing the thread.

    What one of my developers has done is to call ice_response at the beginning and then continue processing in the method.

    I take it this still hogs an Ice server thread.
  • dwayne
    dwayne St. John's, Newfoundland
    chamate01 wrote: »
    Thanks for your answers.

    I somehow thought that AMD was being clever; however, all it does is give you the opportunity to be clever.

    You can take the AMD callback and place it onto a queue to be serviced elsewhere, and let the async method complete, thus freeing the thread.

    What one of my developers has done is to call ice_response at the beginning and then continue processing in the method.

    I take it this still hogs an Ice server thread.

    Right, the thread is not free to process more requests until the *_async() function returns, whether or not ice_response has been called yet.

    Also, if you are able to call ice_response() without having to wait for the processing to complete, then you really do not have to use AMD at all. That is, if the method response does not depend on the result of the processing, you could just use a regular call that offloads the processing and then returns (sketched below). If the response does depend on the processing result, then AMD should be used, and ice_response() should be called when the processing is complete.
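
    For that first case, a plain (non-AMD) dispatch is all you need (again, just a sketch with made-up names):

        // Regular servant method for void logEvent(string msg);
        void
        HelloI::logEvent(const std::string& msg, const Ice::Current&)
        {
            _workQueue->add(msg); // hand the work off ...
            // ... and return; Ice sends the (empty) reply right away,
            // before the processing has actually happened.
        }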
  • Thanks for all your answers. Most helpful and prompt. :)
  • matthew
    matthew NL, Canada
    chamate01 wrote: »
    ...
    What one of my developers has done is to call ice_response at the beginning and then continue processing in the method.
    ...

    This, however, might have a side effect that you did not consider: it allows the client-side call to continue before all server-side processing has completed. You could take advantage of the Ice thread pool (most likely in conjunction with dynamic resizing) to provide your worker threads in this case.