
Ice Recognition of Physical LAN Connection?

I am trying to resolve a communication issue between a client and a server, but I do not know whether it is related to Ice behavior. My setup verifies that communication between client and server exists by having each side send regular “pulses” (one-way invocations via proxy) to the other, once per second.

On the server side, the algorithm performs the following 3 steps in a try-catch block that loops continuously (a rough sketch of the assembled loop appears a little further below):

1) Lock a monitor instantiated with a non-recursive mutex and suspend the calling thread for 1 second (the “pulse rate”):
IceUtil::Monitor<IceUtil::Mutex>::Lock lock( m_monitor );
m_monitor.timedWait( m_pulseRate );

2) Perform the one-way invocation to the client from an instantiated server object (what we call “sender”):
m_senderImpl.m_receiverOnewayPrx->sync();

3) The “sync” method on the client side, which is invoked via proxy in the step above, is just a wrapper that invokes another method with 3 sub-steps. When this part of the loop is reached, it always succeeds:

a. Lock the calling thread on the client side:
IceUtil::Mutex::Lock lock( m_mutex );

b. Store the current time for evaluation in other methods:
m_lastSyncPulse = IceUtil::Time::now();

c. Maintain the state of a Boolean that is checked in other methods to verify the server is still communicating with the client:
m_isActive = true;

The client side sends its “pulses” to a “sync” method on the server side in a similar fashion.
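
Putting the pieces together, the server-side pulse loop looks roughly like this (a sketch assembled from the snippets above; the surrounding thread function, the stop flag and handleConnectionLoss() are illustrative, not our actual code):

// Pulse-sender loop on the server side, running in its own thread.
void Sender::run()
{
    while( !m_stopped ) // illustrative stop flag
    {
        try
        {
            // 1) Wait one pulse period (m_pulseRate is 1 second).
            IceUtil::Monitor<IceUtil::Mutex>::Lock lock( m_monitor );
            m_monitor.timedWait( m_pulseRate );

            // 2) Oneway "pulse" to the client; this returns as soon as the
            //    message is handed to the local TCP/IP stack.
            m_senderImpl.m_receiverOnewayPrx->sync();
        }
        catch( const Ice::Exception& ex )
        {
            // e.g. Ice::ConnectionLostException, once the failure is detected
            handleConnectionLoss( ex ); // illustrative
        }
    }
}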

The problem is that our architecture is designed so that if the physical connection (the LAN line) between client and server is lost but re-established within a timeout period, the “pulses” should continue as they previously did. This, however, is not the case.

What happens is that on the server side, when the LAN connection is broken, the client’s regular invocations of the server’s “sync” method stop right away (expected). However, the server appears to continue invoking the “sync” method on the client side, via proxy, for a short amount of time. In other words, the server logs show that invocations of the server’s “sync” method by the client via proxy stop at the moment the LAN line is disconnected, but they also show the “m_senderImpl.m_receiverOnewayPrx->sync();” statement continuing to complete a number of times.

The odd thing is that after the LAN line is reconnected, instead of the sync pulses eventually returning to normal, the server continues to make about 20 of these one-way proxy invocations on the client side until an Ice connection lost exception is thrown. Again, this is after the LAN line has been re-established (about 40 seconds later):

TransceiverI.cpp:772: Ice::ConnectionLostException

The result after the exception is something I cannot explain. Instead of the bidirectional communication between client and server ever returning, the communication becomes one-way from client to server. In other words, the client is able to make one-way invocations of the server’s “sync” method via proxy, but the reverse is no longer possible.

Is there something in the way that Ice detects physical connectivity between clients and servers that I need to be mindful of to resolve this?

Any advice would be most helpful.

-Jim


Addendum:

Something I thought I should add but I’m not sure if it is relevant.

If I reconnect the LAN line within 3 seconds or less, the bidirectional communication between client and server resumes as expected after about 57 seconds. If I take longer than 3 seconds to reconnect the LAN line, I run into the issue just discussed.

-Jim

Comments

  • benoit (Rennes, France)
    Hi Jim,

    I believe this behavior can be explained by several things:
    • the use of oneway invocations
    • the TCP/IP connection buffering
    • how connection loss is detected on each side of the connection

    An Ice oneway invocation returns as soon as the TCP/IP stack puts it in the buffer of the TCP/IP connection (see Oneway Invocations). The oneway call returning doesn't mean the oneway message has been sent, nor that it has been received by the peer. If there's network congestion or a network failure, the oneway message might remain in the TCP/IP connection buffer for some time. TCP/IP buffers typically hold several hundred kilobytes of data (and as a result many "sync" messages).

    How quickly a connection loss is detected is system dependent and varies greatly depending on the cause. Typically, if you pull the ethernet cable out of the machine where the Ice application is sending the "sync" messages, the application will likely detect the connection loss quickly (the network card detects that no cable is connected anymore and can relay this information quickly to the TCP/IP stack implementation). On the receiver side, however, it might take time until the TCP/IP implementation of the OS detects the connection loss (especially if the 2 machines are separated by multiple pieces of network equipment: routers, firewalls, etc.).

    I think you should instead use twoway requests for your sync requests. When a twoway request returns, you know that the peer received the request (it sent a reply). As soon as there's a TCP/IP connection problem, the twoway request will "hang" until the reply is received or until the TCP/IP stack closes the connection because of a connection loss.

    One strategy could be to send a "sync" twoway request every second using AMI. If the AMI response or exception callback isn't called within 1s, your application could consider that there's a connectivity issue with the peer. A sync() request would only be sent if there's no other pending sync() request. For this to work, you must have the guarantee that the other side always replies promptly, or otherwise you might get false "alarms" (in that case you could increase the "wait" duration).
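
    A rough sketch of this idea, assuming the Ice 3.4/3.5 C++ AMI mapping where slice2cpp generates begin_sync/end_sync on the proxy (the ReceiverPrx type, the member names and handleConnectionLoss() are illustrative):

    // Assumed members (illustrative names):
    //   ReceiverPrx         m_receiverPrx; // twoway proxy to the peer
    //   Ice::AsyncResultPtr m_pending;     // result of the last begin_sync(), if any
    //   IceUtil::Mutex      m_mutex;

    // Called once per second from the pulse loop.
    void Sender::pulse()
    {
        IceUtil::Mutex::Lock lock( m_mutex );
        if( m_pending ) // a previous sync() is still outstanding
        {
            if( !m_pending->isCompleted() )
            {
                // No reply within the pulse period: assume a connectivity issue.
                handleConnectionLoss();
                return;
            }
            Ice::AsyncResultPtr completed = m_pending;
            m_pending = 0;
            try
            {
                m_receiverPrx->end_sync( completed ); // surfaces any Ice exception
            }
            catch( const Ice::Exception& )
            {
                handleConnectionLoss();
                return;
            }
        }
        m_pending = m_receiverPrx->begin_sync(); // returns an Ice::AsyncResultPtr
    }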

    Let us know if you need more information on this.

    Cheers,
    Benoit.
  • Thank you Benoit.

    I like this approach you suggest and will look at incorporating it into our existing design.

    Regards,

    -Jim
  • Hi Benoit,

    Something that came up in our team discussion was how using a two-way invocation in the way you described, although a good approach, would solve our specific issue. In particular, since the problem involves a failure of one-way invocations between client and server (and the reverse), but only if it takes longer than 3 seconds to reconnect the LAN line between the two, could you clarify your thoughts on how you believe a two-way invocation will help?

    Our current approach sends one-way sync pulses once per second, with our application shutting down after not receiving any pulses for 30 seconds. The new approach would use a two-way invocation and shut down our application if, after 30 seconds, we receive no return value (or if an exception is thrown at any time, of course).

    Since we only observe a problem if the LAN line is reconnected after a disconnect of more than 3 seconds, does this imply that the connection has a greater chance of re-establishing itself because, at the time of reconnection, there is only one two-way invocation pending instead of several one-way invocations, as in the previous case?

    Please let me know.

    Regards,

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    The suggestion to use a two-way invocation was to solve the problem where on one side you get a prompt detection of the failure and on the other side the connection loss is only detected after a much longer time (40s).

    That said, I think I better understand your question now, and keeping oneway requests might be fine as well, as long as both peers send sync() requests to each other. Your algorithm considers that there's a connection issue if it doesn't receive a sync() request within N seconds; it's a "passive" form of connection loss detection.

    However, as explained in my previous email, it might take time for the sender of the sync() oneway requests to detect a connection loss (up to 40s), and as a result it can also take time for the sync() pulses to start again over a new connection. One way to ensure the sync() restarts promptly after you detect a connection loss (i.e. after you didn't receive a sync() within N seconds) is to force the connection closure before sending a sync() request again; this ensures that you don't send sync() requests on a "soon to be dead" connection.

    You could, for example, close the connection when you switch the m_isActive flag from true to false. You can obtain the Ice connection object used by the proxy to send requests with the proxy method ice_getCachedConnection(). Make sure to check whether it's null before calling close(true) on the connection to forcefully close it.
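
    A minimal sketch of that forced closure (the proxy member is the one from your original post; everything else here is illustrative):

    // Forcefully close the connection currently cached by the proxy, if any.
    // The next invocation on the proxy transparently establishes a new connection.
    Ice::ConnectionPtr con = m_senderImpl.m_receiverOnewayPrx->ice_getCachedConnection();
    if( con )
    {
        con->close( true ); // true = forceful close; queued requests are dropped
    }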

    Cheers,
    Benoit.
  • Hi Benoit,

    Thank you for the details on closing the connection. I will definitely put that to good use.

    Just to clarify, though: the actual issue we need to tackle is somewhat different from the way you describe it. Both sides (client and server) react to a LAN disconnection pretty quickly (they generate alerts) when sync pulses stop (about a 3-second delay), and yes, both client and server are sending sync pulses to each other at the time this occurs.

    The challenge we have is after re-connection of the LAN line (not at or before disconnection). The sync pulses are never successfully transmitted again if it takes longer than 3 seconds to re-connect the LAN line. The odd behavior is that this occurs even if the timeout for taking action, after seeing no sync pulses in the client or in the server, is set as high as 3 minutes.

    Note that “seeing no sync pulses” means that the method targeted by the one-way invocation on the recipient side is never invoked again, and “taking action” means discontinuing sending sync pulses and placing the application in a hibernation state.

    For example, reconnect the LAN after 3 seconds of disconnect time and the sync pulses continue (after a short delay of about 57 seconds); however, reconnect the LAN after 5 seconds, and the sync pulses never resume, even though the algorithm continues to send (and look for) sync pulses, since the 3-minute timeout has not yet expired. Make sense?

    I was assuming that even if the 40-second delay you referenced were longer due to network latencies, a 3-minute timeout would account for that, but I have difficulty understanding why re-connecting the LAN at 3 seconds results in successful sync pulse transfer while 5 seconds does not…

    But I do see your premise here. My question is: will this series of steps address the anomaly observed with LAN line re-connection after disconnect periods greater than 3 seconds (but less than our timeout period)?

    If I understand you correctly:

    1) We observe a lack of sync pulses, or a lack of response to a two-way invocation (either on the client or server side), after our timeout period (say 30 seconds) expires.
    2) As you propose, force the connection closure.
    3) Since the physical LAN line was reconnected before the timeout, if we immediately resume sending sync pulses after the forced closure, is it safe to assume we would observe normal bidirectional communication, and that the forced closure had the effect of “flushing out” any pending sync requests (one-way or two-way) in the TCP/IP buffer? Is that correct?

    Please let me know.

    Regards,

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    Did you try to enable network tracing to see what occurs at the Ice network level? You can also add protocol tracing to see when the sync requests are sent. To enable the tracing, you can set the following properties: Ice.Trace.Network=2 and Ice.Trace.Protocol=1.
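
    For example, you can add these two lines to both the client and server configuration files (or pass them on the command line as --Ice.Trace.Network=2 --Ice.Trace.Protocol=1):

    Ice.Trace.Network=2
    Ice.Trace.Protocol=1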

    Also, are you using a single Ice bi-directional connection to send the sync requests in both directions and do you set Ice connection timeouts? Or are you just using regular proxies and as a result establish one connection in each direction between the 2 peers?

    If I understand it correctly, the sync requests continue to be sent successfully over the "dead" connection if you disconnect the LAN for longer than 3s and it appears that the connection actually never fails. If you reconnect the LAN line in less than 3s the connection failure is detected in less than 60s and the sync requests eventually resume (presumably Ice re-established a new connection under the hood after the OS TCP/IP stack reported the connection failure).

    I can't explain why the TCP/IP connection would continue to accept messages indefinitely after the LAN line has been disconnected for longer than 3s. I suspect it would eventually fail after some time. It could be interesting to let the client run for a little while and see if it ends up failing. Using twoway here would solve the problem because the response to the twoway request provides an acknowledgement: if the response isn't received after N seconds, it implies that something is wrong and the connection times out (assuming you use Ice connection timeouts).

    If you forcefully close the connection once you detect "sync inactivity", all the requests that were queued on the TCP/IP connection will be dropped and a new TCP/IP connection will be established. So yes, the sync should resume normally after the connection closure. I recommend enabling network tracing; it will give you a better idea of what's occurring under the hood at the network level. I also recommend reading the Ice connection management chapter in the manual for a better understanding of how Ice manages connections.

    Cheers,
    Benoit.
  • benoit (Rennes, France)
    Btw, if you are using an Ice bi-directional connection for sending sync requests in both directions, it is expected that requests on the bi-directional proxy always fail after the connection failure. The peer that accepted the connection and created the bi-directional proxy with Ice::Connection::createProxy can't automatically re-establish a new connection.

    The bi-dir proxy's lifetime is bound to the connection's lifetime, and when the connection is closed you shouldn't continue using the proxy. Instead, you should create a new bi-directional proxy after receiving a sync() request over a new Ice connection.
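
    A minimal sketch of that re-creation, done inside the server-side dispatch of the client's sync() call (the servant type, the ReceiverPrx proxy type and the member names are illustrative; Ice::Current::con and Connection::createProxy are the relevant Ice APIs):

    void SyncI::sync( const Ice::Current& current )
    {
        IceUtil::Mutex::Lock lock( m_mutex );
        if( m_connection != current.con )
        {
            // The client (re-)connected: rebuild the bi-directional callback
            // proxy from the new incoming connection.
            m_connection = current.con;
            m_receiverOnewayPrx = ReceiverPrx::uncheckedCast(
                current.con->createProxy( m_receiverIdentity )->ice_oneway() );
        }
        m_lastSyncPulse = IceUtil::Time::now();
        m_isActive = true;
    }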

    Cheers,
    Benoit.
  • Hi Benoit,

    I thought I would include my responses to your questions interlaced with your original post for ease of reading below:

    Hi Jim,

    Did you try to enable network tracing to see what occurs at the Ice network level?

    // *******************************************
    Response:
    Not yet. I will try that.
    // *******************************************

    You can also add protocol tracing to see when the sync requests are sent. To enable the tracing, you can set the following properties: Ice.Trace.Network=2 and Ice.Trace.Protocol=1.

    Also, are you using a single Ice bi-directional connection to send the sync requests in both directions and do you set Ice connection timeouts? Or are you just using regular proxies and as a result establish one connection in each direction between the 2 peers?

    // *******************************************
    Response:
    We use regular proxies. One proxy on the client side is used to establish a single one-way connection to invoke a sync( ) method on the server, and another proxy on the server side is used to establish a single one-way connection to invoke a sync( ) method on the client. Due to the need for ordered processing of communication over the server-client link, we use serialization and try to avoid things like too many bi-directional connections with callbacks nested in two-way invocations. We actually explored that with one of your colleagues on a different issue previously, but decided against it due to the serialization requirement, its restrictions, the desire not to refactor our interfaces, etc. We do set all our Ice connection timeouts, currently to 90 seconds.
    // *******************************************

    If I understand it correctly, the sync requests continue to be sent successfully over the "dead" connection if you disconnect the LAN for longer than 3s and it appears that the connection actually never fails.

    // *******************************************
    Response:
    Not exactly.

    If the LAN line stays disconnected longer than 3 seconds, the back-and-forth one-way connections described above never return to their original state at all. Each side (client and server) continues to transmit sync pulses, but once the “Ice::ConnectionLostException” occurs on the server side, only the server is able to have its “sync( )” method invoked by the client’s incoming proxy-based calls. In other words, the one-way invocation of a server method via proxy from the client works after the “Ice::ConnectionLostException”, but the reverse (server-to-client) fails. I am at a loss to understand why.

    Even if the LAN line is physically reconnected long before the Ice connection timeout of 90 seconds is up (say, at 25 seconds), we would expect the client and server to start receiving each other’s sync pulses again, but they do not.
    // *******************************************

    If you reconnect the LAN line in less than 3s the connection failure is detected in less than 60s and the sync requests eventually resume (presumably Ice re-established a new connection under the hood after the OS TCP/IP stack reported the connection failure).

    // *******************************************
    Response:
    That is correct.

    Maybe the network & protocol tracing will give insight into why re-connecting after more than 3 seconds fails?
    // *******************************************

    I can't explain why the TCP/IP connection would continue to accept messages indefinitely after the LAN line has been disconnected for longer than 3s. I suspect it would eventually fail after some time. It could be interesting to let the client run for a little while and see if it ends up failing.

    // *******************************************
    Response:
    That is a good idea.

    Unfortunately, I have usually discontinued my observations and manually shut down our client and server once they enter a hibernation state (after the server throws the exception I previously mentioned).

    It would be good to know if there is a time limit on filling up the TCP/IP buffer in this case.
    // *******************************************

    Using twoway here would solve the problem because the response to the twoway request provides an acknowledgement: if the response isn't received after N seconds, it implies that something is wrong and the connection times out (assuming you use Ice connection timeouts).

    If you forcefully close the connection once you detect "sync inactivity", all the requests that were queued on the TCP/IP connection will be dropped and a new TCP/IP connection will be established. So yes, the sync should resume normally after the connection closure. I recommend enabling network tracing; it will give you a better idea of what's occurring under the hood at the network level. I also recommend reading the Ice connection management chapter in the manual for a better understanding of how Ice manages connections.

    // *******************************************
    Response:
    Ok. It sounds like there are 3 parts to this solution here:

    1) Use a two-way invocation for proxy-based “sync( )” requests (client-to-server and server-to-client).
    2) The two-way invocation will provide an immediate acknowledgement of connection state, and we can then act on that state to decide when to force the connection closed in the manner you previously described (if we don’t get a response on either the client or server side within the designated Ice timeout period; for example 90 seconds).
    3) Enable network tracing and protocol tracing to get a better understanding of what is happening at a lower level.

    Is that correct?
    // *******************************************

    Cheers,
    Benoit.


    Btw, if you are using an Ice bi-directional connection for sending sync requests in both directions, it is expected that requests on the bi-directional proxy always fail after the connection failure. The peer that accepted the connection and created the bi-directional proxy with Ice::Connection::createProxy can't automatically re-establish a new connection.

    The bi-dir proxy's lifetime is bound to the connection's lifetime, and when the connection is closed you shouldn't continue using the proxy. Instead, you should create a new bi-directional proxy after receiving a sync() request over a new Ice connection.

    // *******************************************
    Response:
    Sounds feasible. There is some desire not to change our interfaces from one way to two way, but it could be worth exploring if successful.
    // *******************************************
  • Hi Benoit,

    I just made a critical observation which I wish I had seen sooner, but the configuration for our proxies is many levels deep. I just discovered that we are using Ice bidirectional connections to send our sync( ) pulses as one-way invocations from client to server via proxy, and then back as one-way invocations from server to client via a callback proxy on the same connection (Ice Identity) set up by the client.

    Given this new information, would that change the explanation for why the callback sync( ) pulses (from server to client) fail to resume after re-connecting the LAN line once it has been disconnected for more than 3 seconds?

    Please let me know.

    Regards,

    -Jim
  • Benoit,

    I thought I would ask this as well just to be on the safe side. Would the solution you proposed still be viable in this case?

    I.e., does our existing approach of a one-way invocation from client to server that initiates a callback from the server back to the client replace the need for a two-way invocation on either side?

    Can that approach be managed via timeouts, so that I can take action based on some sort of timeout exception?

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    If you are using a bi-directional proxy for sending the sync() from the server to the client, then yes, it is expected that after a connection failure the sync() will never resume: the bi-directional proxy is bound to the connection and can't re-establish a new connection in this case. Instead, you need to create a new proxy when the client re-connects to the server.

    Using twoway calls won't provide additional benefits over oneway as long as you take appropriate action upon detecting the lack of pulses after a given timeout period. Such action should be:
    • On the client side: close the connection so that the next sync() call re-establishes a new connection.
    • On the server side: close the connection, then wait for the sync() pulses from the client to resume and create a new bi-dir proxy to the client using the new connection.

    I've attached a small demo that demonstrates this. If either the client or the server doesn't receive a sync() call for 5s, it drops the connection and considers the peer disconnected.

    Could you try it and see if it resolves the issues you've experienced upon network failures?

    To compile the demo, you can unzip it in the Ice-3.5.0-demos/demo/Ice directory from your Ice demo source distribution and build it using make.

    Cheers,
    Benoit.
  • Thank you Benoit. I will try this.

    Do you happen to know why the connection does re-establish itself if the LAN line is disconnected for only 3 seconds or less?

    This anomaly only presents itself if I leave the LAN line disconnected for longer than 3 seconds.

    Any ideas?

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    It's hard to say without seeing the network traces, but I suspect Ice is totally unaware of the disconnection in the case where it lasts less than 3 seconds. The OS probably keeps the TCP/IP connection open even though the line gets disconnected, provided the disconnection doesn't last more than 3 seconds.

    If you enable network tracing with the demo I provided (Ice.Trace.Network=2), you should be able to see whether or not Ice notices the network disconnection.

    Cheers,
    Benoit.
  • Hi Benoit,

    That sounds reasonable. I'll let you know what I discover...

    -Jim
  • Benoit,

    I think I am missing something in my use of “make” for Windows. I assumed for example that the server was the target of make but this seems not to work when using it with your example (quotes around the path to Server.cpp have no effect):

    C:\Program Files (x86)\GnuWin32\bin>make C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server
    g++ C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server.cpp -o C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server
    process_begin: CreateProcess(NULL, g++ C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server.cpp -o C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server, ...) failed.
    make (e=2): The system cannot find the file specified.
    make: *** [C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse\Server] Error 2

    Then again, if you have a pre-made Microsoft Visual Studio 2008 solution file for this, that would be faster too…

    Any advice?

    -Jim

    Well,

    An easy fix was setting my path variable for make.exe and running make from your example directory. However, "Make.rules" is missing from the "Ice-3.4.2-demos\config" directory:

    C:\AB_ZeroC\Ice-3.4.2-demos\demo\Ice\pulse>make client
    makefile:30: ../../../config/Make.rules: No such file or directory
    make: *** No rule to make target `../../../config/Make.rules'. Stop.

    Only these files are there:
    build.properties
    common.xml
    Make.common.rules.mak
    Make.rules.bcc
    Make.rules.mak
    Make.rules.mak.php
    Make.rules.msvc

    Any thoughts?

    -Jim
  • Benoit I was curious,

    Does this demo require compiling the “PulseReceiver.ice” file manually first (using “slice2cpp.exe”)?

    I did not know if using “make” would create the auto-generated “PulseReceiver.cpp/.h” files as well…

    -Jim
  • benoit (Rennes, France)
    Hi Jim

    You need to build the demos from a Visual Studio 2008 or 2010 command prompt. You can then use "nmake /f Makefile.mak" to build the demo.

    You can find instructions to build the demos in Ice-3.4.2-demos/README.txt. Note that I didn't provide Visual Studio projects but only an nmake Makefile, so you need to refer to the instructions for building with Visual Studio Express in the README.txt file.

    In theory, PulseReceiver.ice is compiled by nmake; you don't need to compile it manually.

    Cheers,
    Benoit.
  • benoit (Rennes, France)
    Jim,

    Please find attached to this post a version that compiles with Ice 3.4.2; the one I provided in my previous post only compiles with Ice 3.5.0.

    Cheers,
    Benoit.
  • Thank you Benoit,

    I was able to get the client and server from your demo up and running. Something I would like to try quickly, though, before integrating this solution into our design, is running your demo with the client on one machine and the server on another, so I can perform a test where I disconnect and re-connect the physical LAN line between them.

    I just started glancing at the code, but I assume that modifying the server to have an object adapter with endpoints, as I’ve done before, would allow me to do this?

    Please let me know.

    Regards,

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    You shouldn't need to modify the code to run the demo on separate machines; just edit the config.server and config.client files and update the Pulse.Endpoint and Pulse.Proxy properties, respectively.

    The -h localhost option should be replaced with the IP address or hostname of the server machine.
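
    For example (the address, port and object identity shown here are placeholders; keep the values already present in the demo's config files and only change the host):

    # config.server
    Pulse.Endpoint=tcp -h 192.168.1.10 -p 10000

    # config.client
    Pulse.Proxy=pulse:tcp -h 192.168.1.10 -p 10000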

    Cheers,
    Benoit.
  • Thank you Benoit.

    This is very useful. The connection resumes normally when the LAN line is reconnected.

    Just a quick question, though. Our interface has many invocations (operations) going over the same connection. In other words, think of your demo’s interface, “PulseReceiver”, with a “sync( )” method, an “updateImage( )” method, etc., all running when they need to over a single connection. In our case they run one right after the other, due to our enabling of serialization.

    I’m guessing I need some sort of global control over each of these invocations so that closing their shared connection causes them all (not just “sync( )”) to cease and restart gracefully (when and if they are operating). With that said, if I need to set timeouts on the object adapter on the server side and on proxies on the client side, can I choose different timeouts on a per-proxy basis without issue, even if those proxies are invoked from a single interface?

    I.e., could I use 60 seconds on proxy 1, 20 seconds on proxy 2, 2 minutes on proxy 3, etc., and just set the timeout for the object adapter to the largest value chosen (2 minutes in this case)?

    This is one idea I had to facilitate closing a connection that has multiple operations on it. Please let me know any thoughts you may have.

    Regards,

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    With Ice, timeouts are bound to the connection, so using different timeouts implies distinct TCP/IP network connections. You will lose the serialization of the requests if they are sent over different network connections, and you will need to keep track of all the connections if you want to be able to close them when you detect a network issue.
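
    To illustrate (basePrx stands for any existing proxy; the timeout values are arbitrary):

    // Proxies with different timeouts (in milliseconds) do not share a connection.
    Ice::ObjectPrx prx60 = basePrx->ice_timeout( 60000 ); // 60s timeout
    Ice::ObjectPrx prx20 = basePrx->ice_timeout( 20000 ); // 20s timeout

    // prx60->ice_getConnection() and prx20->ice_getConnection() return two
    // different connection objects, so requests sent on these proxies are no
    // longer serialized with respect to each other.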

    I'm afraid it's a bit difficult to provide advice without knowing more about your application and the reasons why you need different timeouts, serialization and bi-directional connections. Can you explain a bit more why you want to use different timeouts?

    Cheers,
    Benoit.
  • Hi Benoit,

    We definitely can only use a single connection. Multiple connections aren’t something we would pursue.

    My thought with different timeouts for different proxies invoked from the same interface was to (possibly) accommodate the different lengths of time that each method called on that proxy might need to complete its operation, so I may incorporate something similar to your demo into our implementation.

    In reality, a single timeout that accounts for the duration of the longest running method would work fine also. It was more for efficiency than anything else. If not feasible, that is fine. It’s more important to find what will solve the problem.

    With that said, our focus is to deal with what I mentioned in my previous post: your demo works fine when there is only one method being invoked on the interface over the single connection. If I have multiple methods as I previously described (running via serialization, etc.), do you have any general comments about issues I should be aware of when closing a connection (and eventually opening it back up) in the manner conveyed in your demo, so that all the methods running on that connection stop and restart smoothly?

    Please let me know.

    -Jim
  • benoit (Rennes, France)
    Hi Jim,

    The Ice client-side runtime might or might not retry the invocation transparently; see Automatic Retries in the Ice manual for more information on automatic retries.

    Note that the demo I provided is just a demo to help figure out the cause of your issue... I don't think it can be re-used as-is in your application, especially since you have some additional constraints such as the use of serialization.

    For example, imagine that your client sends two updateImage() requests in succession, which take 10s each to complete, and then a sync() request... the sync() request will only be dispatched after 20s (after the 2 updateImage() calls have completed). If the pulse timeout is set to 5s, the server will wrongly close the connection because it didn't receive a sync() call within the 5s window...

    A solution would be to update the "last pulse" timestamp each time a request is dispatched. The pulse timeout should also be larger than the maximum dispatch duration ... so if updateImage() can take up to 10s to complete, a pulse timeout of 15s to 20s should probably be fine as long as you also update the pulse timestamp on each updateImage() call.
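
    A minimal sketch of that idea, reusing the m_lastSyncPulse / m_isActive members from the original post (the PulseReceiverI servant, the updateImage() operation and its helpers are illustrative):

    // Shared helper: record that the peer is alive. Call it from every dispatch.
    void PulseReceiverI::markPulse()
    {
        IceUtil::Mutex::Lock lock( m_mutex );
        m_lastSyncPulse = IceUtil::Time::now();
        m_isActive = true;
    }

    void PulseReceiverI::sync( const Ice::Current& )
    {
        markPulse();
    }

    void PulseReceiverI::updateImage( const ImageData& data, const Ice::Current& )
    {
        markPulse();          // a long-running dispatch also counts as a pulse
        processImage( data ); // may take up to 10s
    }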

    Cheers,
    Benoit.
  • Thank you Benoit.

    That is a good approach.

    Regards,

    -Jim