Archived

This forum has been archived. Please start a new discussion on GitHub.

AMI and timeout

I've read the Ice (3.2.1) manual, chapter 33.3.5.

With the default Ice.ACM.Client and Ice.ACM.Server, Ice.MonitorConnections should be 60 seconds. Am I right? I can see it in my application's log: every 60 seconds, a thread throws several timeout exceptions in my application.

I set Ice.Override.Timeout=3000 in my config file. A task/job was reported finished at 20:12:59 in my application, the monitor thread ran at 20:13:00, and a timeout exception was thrown. That's just one second after the task/job finished.

So I'm a little confused about timeouts in AMI.
Which configuration value should I set: Ice.ACM.Client, Ice.ACM.Server, Ice.MonitorConnections, or Ice.Override.Timeout? And what is the exact relationship between these values?

Comments

  • matthew
    matthew NL, Canada
The ACM properties control the active connection management timeouts. This has nothing to do with AMI timeouts.

    You use the regular Ice timeout mechanism to set the timeout associated with the requests.

    For example:

    Ice::ObjectPrx prx = ...;
    Ice::ObjectPrx prxWithTimeout = prx->ice_timeout(10000);

    This sets a ten second timeout on the proxy. See my articles in issue 23 and issue 22 on proxies, connections and timeouts for more detail. By using a timeout override you set that timeout on every proxy, no matter what is set explicitly (so with an override of 3000, as in your example, the timeout in the code above would be 3 seconds, not 10).

    Where AMI differs is how the timeout is checked. With a regular blocking invocation, the thread that makes the invocation also checks the timeout. However, with an AMI invocation this is not possible, since that thread has already been returned to application control. Therefore a dedicated thread is used to check these timeouts. This thread checks at regular intervals (controlled by the configuration property Ice.MonitorConnections), which means that in effect your timeouts are much coarser. If you set a timeout of 5 seconds and a check period of 10 seconds, you may have to wait up to 10 seconds for the timeout to occur.
  • Thanks!
    And I can see this in the Ice manual:

    The default value of Ice.MonitorConnections depends on Ice.ACM.Client and Ice.ACM.Server (Ice manual, page 1670). With all configuration values at their defaults, what is the default value of Ice.MonitorConnections?

    I set Ice.Override.Timeout=3000 in my application, so the "check" thread reports timeout exceptions at an interval of 60 seconds, not 3 seconds.

    But as I said, a thread in my application completed its AMI invocation at 20:12:59, and one second later the Ice "check" thread ran and a timeout exception occurred. That's not 3 seconds; it's just 1 second.

    Does the AMI invocation timeout depend on Ice.Override.Timeout? And how can the situation in my application be explained?
  • matthew
    matthew NL, Canada
    fanson wrote: »
    Thanks!
    And I can see this in the Ice manual:

    The default value of Ice.MonitorConnections depends on Ice.ACM.Client and Ice.ACM.Server (Ice manual, page 1670). With all configuration values at their defaults, what is the default value of Ice.MonitorConnections?

    The default value will be 60 seconds.
    I set Ice.Override.Timeout=3000 in my application, so the "check" thread reports timeout exceptions at an interval of 60 seconds, not 3 seconds.

    To be more precise, you may have to wait up to 60 seconds to get a timeout. For example, consider:

    - At time 0 the check thread starts with a check interval of 60 seconds.
    - You start a request at time 40 with a timeout of 3 seconds.
    - You will get the timeout at 60 (effectively you got a timeout of 20 seconds).

    If you start the request at time 50, you will get the timeout at time 60 (an effective timeout of 10 seconds).

    If your operations are idempotent, you also have to account for retries.
    But as I said, a thread in my application completed its AMI invocation at 20:12:59, and one second later the Ice "check" thread ran and a timeout exception occurred. That's not 3 seconds; it's just 1 second.

    Does the AMI invocation timeout depend on Ice.Override.Timeout? And how can the situation in my application be explained?

    If you have set the override to 3000 then the timeout set on the proxy will be 3 seconds. Perhaps you are confused about what you are seeing? I just tried all this with the hello world demo (demo/Ice/hello). I modified the demo to send AMI requests and then traced the various timeouts and everything worked as expected. I recommend doing this yourself and if you are still confused please let me know.
  • Now I've done a test in demo\Ice\async.
    The modified files are in the attachment (config.client & Client.cpp).

    Then you can see what I did in my case.

    I set Ice.Override.Timeout=10000 in config.client, and I invoke sayHello_async like this:

    hello->sayHello_async(new AMI_Hello_sayHelloI, 60000);
    Sleep(5000); // sleep(5) under *nix

    I do this in a loop 15 times, so 5*15 = 75 > 60 (the default value of Ice.MonitorConnections).

    To run this program: start the server; start the client; input 'd', then 'enter'.

    When the invocation in the for loop has run 12 times, that's about 60 seconds, so the 'check' thread should start, and then 12 timeout exceptions occur. But you can see clearly that some of those 12 invocations should not be reported as timed out, since I set the timeout to 10 seconds in my config file.

    What's the problem? And how can this be explained?

    thanks
  • matthew
    matthew NL, Canada
    You'd be better off making the application interactive and perhaps putting some trace in src/Ice/ConnectionMonitor.cpp if you want to visualize what is occurring.

    At any rate, what is going on is that due to retries it can take twice as long to timeout as you expect. If you make the sayHello method non-idempotent then you'll reduce the retries. Set Ice.Trace.Retry=1 and Ice.Trace.Protocol=1 to get more details on exactly what is going on.
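
    For reference, a sketch of the config.client additions (property names as mentioned above; the values are the usual trace levels):

```
# Diagnose timeouts and retries
Ice.Trace.Retry=1      # show each automatic retry
Ice.Trace.Protocol=1   # show protocol messages (requests, replies, connection close)
```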
  • matthew wrote: »
    You'd be better off making the application interactive and perhaps putting some trace in src/Ice/ConnectionMonitor.cpp if you want to visualize what is occurring.

    At any rate, what is going on is that due to retries it can take twice as long to timeout as you expect. If you make the sayHello method non-idempotent then you'll reduce the retries. Set Ice.Trace.Retry=1 and Ice.Trace.Protocol=1 to get more details on exactly what is going on.

    Thanks, I did as you suggested, but I could not get enough useful information.

    I read some related code in the Ice 3.2.1 source.

    Ice closes a connection, stopping all read/write activity on it, when an asynchronous request times out. Is that intentional? Maybe that's why I see so many timeout exceptions once the first one occurs.

    So if there are several asynchronous requests on a single proxy, how should I set the timeout? All I see in my test is that Ice closes the connection when one timeout exception occurs, and all the subsequent requests are also reported as timed out.

    I also tried obtaining a new proxy for each asynchronous invocation:
    ...
    //in a loop
    HelloPrx hello = HelloPrx::checkedCast(communicator()->propertyToProxy("Hello.Proxy"));
    hello->sayHello_async(new AMI_Hello_sayHelloI, 60000);
    ...
    
    Ice's internal connection manager may give me the same connection for that new proxy. Is that possible? If so, timeout exceptions will also be reported just as in the previous case.


    thanks!
  • matthew
    matthew NL, Canada
    Yes, as described in my article a timeout exception closes the connection. This is because Ice considers a timeout to be a hard error. If you have several proxies all using the same connection then they'll all report a timeout. If you want to have a unique timeout per proxy then you can request a unique connection per proxy using connection ids.
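
    A sketch of the connection-id approach in C++ (a fragment, not a complete program; the proxy type and property name are borrowed from the demo discussed above):

```cpp
// Fragment: one connection per proxy via ice_connectionId(), so a timeout
// on one proxy's connection does not close the connection the others use.
Ice::ObjectPrx base = communicator()->propertyToProxy("Hello.Proxy");
HelloPrx h1 = HelloPrx::uncheckedCast(
    base->ice_connectionId("conn-1")->ice_timeout(10000));
HelloPrx h2 = HelloPrx::uncheckedCast(
    base->ice_connectionId("conn-2")->ice_timeout(10000));
// If a request on h1 times out, Ice closes only the "conn-1" connection;
// requests pending on h2's "conn-2" connection are unaffected.
```

    Each distinct connection ID forces Ice to open a separate connection for that proxy rather than reusing an existing one.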

    Starting with Ice 3.3, AMI timeouts will work the same way as regular synchronous invocation timeouts. However, their status as a hard error will not change.