Interactive communication - application-level flow control

Hello,

My goal is to achieve interactive communication in my application with as little delay as possible, even on slow network links. One of the factors that influence interactivity is the queuing of messages in the sender's buffers, which results from a lack of flow control (the application produces data faster than it can be transmitted).
This is why I aim to implement flow control at the application level: I'd like to decide myself which messages to transmit and which to drop, so as to keep the transmit buffers nearly empty.

Is it possible to query the current fill level of the buffer, or at least statically change its default size? I've read the http://www.zeroc.com/forums/showthread.php?t=1562 post; has anything changed in Ice since then?
Another option would be to measure the RTT during transmission, but that is not feasible for oneway communication.

With best regards,
Lukasz

Comments

  • This is a perfect time to use AMD and AMI. Each time you make an asynchronous call, simply increment a "pending requests" counter. Each time an asynchronous call completes, decrement the counter. Your application can read the number of pending requests at any time and decide what to do next based on the total number of outstanding requests (see the sketch at the end of this thread). No fuss, no muss :)
  • benoit (Rennes, France)
    Hi Lukasz,
    luke wrote:
    Hello,
    Is it possible to query the current fill level of the buffer, or at least statically change its default size? I've read the http://www.zeroc.com/forums/showthread.php?t=1562 post; has anything changed in Ice since then?
    Another option would be to measure the RTT during transmission, but that is not feasible for oneway communication.

    No, it's still not possible to configure the TCP/IP send/recv buffer sizes. So far, I believe we've only had one request for this (the one you mention above). It's not clear to me how this would help you implement what you're suggesting. Perhaps you could explain in a bit more detail what you are trying to do.

    Cheers,
    Benoit.
  • Hi Benoit,

    If I knew that there was a lot of data (oneway messages) waiting in the send buffer, I could slow down the generation of new messages (by dropping less important ones at the application level) and let the queue drain until it is (almost) empty. Some messages are more critical for me than others, but the delay is always critical.

    Even just being able to lower the size of the buffer would satisfy me - there would be no room for additional, unwanted delays. A buffer of 64 kB or so can accumulate a few hundred short messages, and there is no feedback to the client application - a oneway invocation doesn't block. For the needs of my application it would be desirable to have such feedback and a way to react to it. I have no idea how this would affect the performance of the communication; maybe it is worth spending some time on.

    From the Ice manual, section 32.13 (Oneway Invocations):
    "On the client-side, the Ice run time will block if the client’s transport buffers fill up, so the client-side application code cannot overrun its local transport.
    Consequently, oneway invocations normally do not block the client-side application code and return immediately, provided that the client does not consistently generate messages faster than the server can process them. If the rate at which the client invokes operations exceeds the rate at which the server can process them, the client-side application code will eventually block in an operation invocation until sufficient room is available in the client’s transport buffers to accept the invocation."

    Best regards, many thanks for fast answers,
    Lukasz
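
A minimal sketch of the "pending requests" counter suggested in the first comment, combined with the drop-by-importance idea discussed above. Everything here is hypothetical: FlowControlledSender, trySend and the sendAsync callable are illustrative names, and sendAsync merely stands in for whatever asynchronous (AMI) invocation the generated proxy provides; the only assumption is that the asynchronous call offers some completion hook that fires exactly once.

    #include <atomic>
    #include <functional>
    #include <string>
    #include <utility>

    // Hypothetical helper illustrating the counter idea: increment before
    // each asynchronous send, decrement when it completes, and drop
    // messages while too many are still outstanding.
    class FlowControlledSender
    {
    public:
        // sendAsync wraps the actual AMI invocation; it must call the
        // completion callback it is given exactly once, when the request
        // has completed (or failed).
        FlowControlledSender(std::function<void(const std::string&, std::function<void()>)> sendAsync,
                             int maxPending)
            : _sendAsync(std::move(sendAsync)), _maxPending(maxPending)
        {
        }

        // Returns true if the message was handed to sendAsync, false if it
        // was dropped because too many requests are still in flight. The
        // check and the increment are not one atomic step, so the limit is
        // a soft one - good enough for throttling a message producer.
        bool trySend(const std::string& payload, bool critical)
        {
            // Less important messages are dropped as soon as the pipeline
            // starts to fill; critical ones only at the hard limit.
            const int limit = critical ? _maxPending : _maxPending / 2;
            if(_pending.load() >= limit)
            {
                return false;
            }

            ++_pending;
            _sendAsync(payload, [this] { --_pending; }); // decrement on completion
            return true;
        }

        int pending() const { return _pending.load(); }

    private:
        std::function<void(const std::string&, std::function<void()>)> _sendAsync;
        const int _maxPending;
        std::atomic<int> _pending{0};
    };

The producer then calls trySend() for each new message instead of invoking the proxy directly, and can inspect pending() to adapt its generation rate. Depending on the Ice release, sendAsync would wrap either the callback-based AMI mapping or the newer *Async methods; in both cases the completion hook is the natural place to run the decrement, and for a oneway proxy that hook should fire once the request has been passed to the local transport (check the documentation of your Ice version) - which is exactly the feedback asked for in this thread.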