Archived

This forum has been archived. Please start a new discussion on GitHub.

Is ICE::AMI useful?

Hi,

[what I thought]
AMI is useful when the client wants "xxx_async" to return immediately. Suppose we set two timers: timer 1 stops when the "xxx_async" call returns, and timer 2 stops when all responses have been received. Timer 1 should be small, and timer 2 should depend on the server. AMI is very useful if the server performs I/O-heavy operations.

That is:
timer 1 = round_trip
timer 2 = server_payload + round_trip

Timer 1 + timer 2 may be longer than a normal synchronous call, but we get a small timer 1. If timer 2 depends on a blocking operation (such as the network or a database), AMD + AMI is very useful.

[what the test said]
timer 1 = round_trip + factor * server_payload (where factor is almost equal to 1).
timer 2 = ???

[test result and source code]
Attached below.

[and more]
I also tested AMD (modified as in the manual, with 3 worker threads checking a work queue). No luck.

[and more]
I tested under Windows XP SP2, on localhost, with the precompiled Ice 2.1.1, and I call ::timeBeginPeriod(1) and ::timeEndPeriod(1).

Comments

  • benoit
    benoit Rennes, France
    Hi,

    What you're observing is expected. The time measured by timer 1 includes the time spent sending the request "over the wire": when you call sayHello_async(), Ice marshals the request and sends it on the network. Since you're sending many of these requests, the network buffers quickly fill up. When the buffers are full, the client's sayHello_async call will block. When the server eventually processes more incoming requests, the buffers empty and the client can send more requests.

    In short, if your server is slow to process incoming requests, the client will quickly block and must wait for the server to catch up before it can send more requests.

    AMI calls are similar to oneway calls with respect to blocking. I suggest you take a look at the FAQ "Why can oneway requests block?" in Issue 2 of the Ice newsletter (see http://www.zeroc.com/newsletter) for more information.

    Let me know if this isn't clear enough!

    Benoit.
  • bernard
    bernard Jupiter, FL
    On the client side, you need to use separate AMI callback objects for each concurrent request (see the AMI section in the manual). And if you want to receive callbacks concurrently, you need to increase your client's client thread pool size.

    Cheers,
    Bernard
  • marc
    marc Florida
    A common misconception about AMI calls is that they send requests asynchronously; they do not. AMI calls receive responses asynchronously. That is, AMI decouples sending the request from receiving the response, so that your application doesn't have to wait until the server finishes dispatching a request. But Ice does not employ a special sender thread to send AMI calls in the background. If you need something like this, you have to create your own sender thread.
  • zigzag
    benoit wrote:

    In short, if your server is slow to process incoming requests, the client will quickly block and must wait for the server to catch up before it can send more requests.

    Yes, I tested AMD too (the source code is not included); it pushes requests onto a queue, and 3 worker threads process the queue. But there was no difference. Please note that I care more about timer 1.
    bernard wrote:
    On the client side, you need to use separate AMI callback objects for each concurrent request (see the AMI section in the manual). And if you want to receive callbacks concurrently, you need to increase your client's client thread pool size.

    I adjusted the thread pools (client and server) from the default setting to 4, with no difference. You can check the test result.

    conclusion:
    After a quick test, I have decided not to use Ice in this project (of course, TAO, MICO, and omniORB are not suitable for it either). But I really like Ice's features, such as the C++ mapping, ease of use (CORBA is not C++-friendly), the documentation, and the very friendly support. Thanks.
  • matthew
    matthew NL, Canada
    zigzag wrote:
    ...
    conclusion:
    After a quick test, I have decided not to use Ice in this project (of course, TAO, MICO, and omniORB are not suitable for it either). But I really like Ice's features, such as the C++ mapping, ease of use (CORBA is not C++-friendly), the documentation, and the very friendly support. Thanks.

    It's not really clear to me what you want. If you can explain in more detail, perhaps Ice can already do what you want. For example, if you want to isolate the client from the backend server being temporarily busy, you can use Glacier2 in buffered mode. In this case, Glacier2 buffers all requests and your client will not block (as long as the blocking is caused by the backend server being temporarily busy and not by network problems). Of course, if your backend server is permanently busy then this doesn't solve the problem, but in that case you have bigger problems anyway :)
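    For reference, buffered mode is controlled by Glacier2 configuration properties; a minimal sketch (the property names below are as documented for later Ice releases, so check the manual for the exact names in your version):

```
# Forward requests through a dedicated Glacier2 thread that queues
# them, so the client does not block while the backend server is busy.
Glacier2.Client.Buffered=1
Glacier2.Server.Buffered=1
```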


    Regards, Matthew
  • oh, Glacier2

    Thanks for your advice. I skipped some chapters when reading the manual. I'll read it now.
  • hi, benoit,
    When I searched the forum, I came across this old post again by chance, and I am confused by it:
    Why is timer 1 = round_trip? Since the xxx_async call returns immediately, as soon as the message is copied from the application buffer to the socket stack, where does the round_trip come from?
    In your post you said it is expected; that makes me a bit puzzled.


    Cheers
    OrNot
  • benoit
    benoit Rennes, France
    Hi,

    Sorry, I don't know why the original poster used the term "round_trip" for timer 1. If I remember correctly, timer 1 measured the time taken by the async call to return. What zigzag observed was indeed expected, for the reasons I explained (see the attachments in his original post for details on the test case). This FAQ entry also explains why oneway calls can block (this also applies to async calls).

    Cheers,
    Benoit.