Inaccurate AMI call timeout

Hi,
I have encountered a problem with Ice AMI calls, and I made a demo to demonstrate it. There is a client and a server in the demo.
	// Slice definition
	module test
	{
		class timeout
		{
			["ami"] void call();
		};
	};
On the server side of this demo, the servant simply sleeps for 200 seconds on every call.
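A minimal sketch of what such a servant could look like, assuming the Ice C++ mapping of the Slice above; the class name timeoutI and the generated header name timeout.h are assumptions, not taken from the demo:
	#include <Ice/Ice.h>
	#include <IceUtil/IceUtil.h>
	#include <timeout.h> // header generated from the Slice file above; the file name is assumed

	class timeoutI : public test::timeout
	{
	public:
		virtual void call(const Ice::Current&)
		{
			// Block for 200 seconds so that every invocation exceeds the
			// client's 10-second proxy timeout.
			IceUtil::ThreadControl::sleep(IceUtil::Time::seconds(200));
		}
	};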
For the client side of this demo, please refer to this source code:
	prx = test::timeoutPrx::checkedCast(
		communicator()->stringToProxy("TimeoutServer:default -p 12345")->ice_timeout(10 * 1000));
	// get the proxy to the server's servant and set its timeout to 10s
	prx->call_async(new AMICallback("CALL1"));
	// make an AMI call
	Sleep(5000);
	// pause for 5s
	prx->call_async(new AMICallback("CALL2"));
	// make another AMI call
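For completeness, a minimal sketch of the AMICallback class used above, assuming the Ice 3.3 old-style AMI mapping, in which the generated callback base class for the call operation is test::AMI_timeout_call (the logging is just for illustration):
	#include <Ice/Ice.h>
	#include <iostream>
	#include <string>

	class AMICallback : public test::AMI_timeout_call
	{
	public:
		AMICallback(const std::string& name) : _name(name)
		{
		}

		virtual void ice_response()
		{
			std::cout << "[" << _name << "] call completed" << std::endl;
		}

		virtual void ice_exception(const Ice::Exception& ex)
		{
			// Ice::TimeoutException is delivered here when the proxy's
			// 10-second timeout expires.
			std::cout << "[" << _name << "] caught exception:[" << ex << "]" << std::endl;
		}

	private:
		std::string _name;
	};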
I set the Ice property Ice.MonitorConnections to 1 when I start the demo application.
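For illustration only (the demo may set it differently), the property can be passed as --Ice.MonitorConnections=1 on the command line or set programmatically before the communicator is created:
	Ice::InitializationData initData;
	initData.properties = Ice::createProperties();
	// Run the connection monitor every second so timed-out requests are
	// detected promptly.
	initData.properties->setProperty("Ice.MonitorConnections", "1");
	Ice::CommunicatorPtr ic = Ice::initialize(initData);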
I expected the second AMI call to time out 5 seconds after the first AMI call timed out, but the test result shows that the two calls timed out at the same time.
I am not sure whether I am using AMI correctly.
Could anyone please help me with this?
Thank you very much.

PS:
test result
prx:TimeoutServer -t:tcp -p 12345 -t 10000
[10/27/09 22:12:12.421] [CALL1] start calling
[10/27/09 22:12:17.421] [CALL2] start calling
[10/27/09 22:12:24.625] [CALL1]caught exception:[Ice::TimeoutException]
[10/27/09 22:12:24.625] [CALL2]caught exception:[Ice::TimeoutException]
// The two AMI calls timed out at the same time: 10/27/09 22:12:24.625

Comments

  • dwayne (St. John's, Newfoundland)
    When a request times out, all other outstanding requests using the same connection also time out, and the connection is closed. The requests may then be retried on a new connection, as long as at-most-once semantics are not violated. In your example, however, even if they are retried they will simply time out again, so all requests end up with a timeout exception. See section 37.3.5 of the manual.
  • dwayne (St. John's, Newfoundland)
    You will also find more information on timeouts and retries in this FAQ.
  • suds
    I believe most operations marked "idempotent" are lightweight calls, which would most likely be made as synchronous calls; this TimeoutException seems more unexpected for async calls.

    >> If a request times out, all other outstanding requests on the same connection also time out
    Is there any way to prevent the "other" requests from being killed by the timeout, or at least to lower the probability? For example, could the program be given a way to find out how many pending requests are bound to the same connection, so that it can force the use of a different connection when it detects that the existing connection is in a questionable state?
  • dwayne (St. John's, Newfoundland)
    suds wrote:
    I believe most operations marked "idempotent" are lightweight calls, which would most likely be made as synchronous calls; this TimeoutException seems more unexpected for async calls.

    Idempotent means that calling the operation multiple times has the same effect as calling it just once. It is not related to the time that the call takes to complete or whether async is used.
    suds wrote:
    Is there any way to prevent the "other" requests from being killed by the timeout, or at least to lower the probability? For example, could the program be given a way to find out how many pending requests are bound to the same connection, so that it can force the use of a different connection when it detects that the existing connection is in a questionable state?

    There is no way to query that sort of information from the connection, although you could probably track the number of outstanding AMI calls per connection in your application. The only way to completely prevent other requests from possibly being killed by a timeout is to use a different connection per request. Of course, this can lead to resource and performance issues, because the number of connections could get very high and because of the time involved in creating new connections.

    You might instead be able to use a limited number of connections, whereby all requests that would not normally generate timeouts use a single connection, and requests that can be expected to sometimes time out use separate connections.
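    A minimal sketch of that approach, reusing the proxy and callback from the demo above (the connection ID string "slow-calls" is just an example); proxies created with ice_connectionId use their own connection, separate from proxies with a different connection ID:
	// Keep the potentially slow call on its own connection so that its
	// timeout cannot take down the other outstanding requests.
	test::timeoutPrx slowPrx = test::timeoutPrx::uncheckedCast(prx->ice_connectionId("slow-calls"));
	slowPrx->call_async(new AMICallback("SLOW")); // may time out; only this connection is affected
	prx->call_async(new AMICallback("FAST"));     // keeps using the original connection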