
Determine available bandwidth or data backlog

gmh (Greg Hughes, Hewlett-Packard, Remote Graphics Software)
Hello,

Our application can adjust the size of its data stream to try to maintain an image update rate. Is there an easy way, using Ice 3.3.1, to query the available network bandwidth, or to query the amount of data that is queued waiting on the network?

Regards,
Greg

Comments

  • andreynech (Andrey Nechypurenko, Veterobot.com, robotics vehicle for researchers and makers)
    Hi Greg,
    gmh wrote: »
    Our application can adjust the size of its data stream to try to maintain an image update rate. Is there an easy way, using Ice 3.3.1, to query the available network bandwidth, or to query the amount of data that is queued waiting on the network?

    I am currently working on a similar problem and have found the new AMI API good enough for this purpose: it makes it easy to monitor the size of the queue of pending requests. Ice 3.3.1 also lets you monitor the amount of queued data, but it is not as flexible as the new AMI API.
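
    The bookkeeping itself is simple. Below is a rough sketch of what I mean in plain C++; the two methods are placeholders that you would call from wherever you issue requests and from the corresponding sent/completion callback, they are not part of any Ice API.

        #include <atomic>
        #include <cstddef>

        // Tracks how much data has been handed to the runtime but not yet
        // reported as sent. requestQueued()/requestSent() are placeholder
        // names: call them when you issue a request and when the
        // corresponding sent/completion callback fires.
        class SendBacklog
        {
        public:
            void requestQueued(std::size_t bytes)
            {
                ++_pendingRequests;
                _pendingBytes += bytes;
            }

            void requestSent(std::size_t bytes)
            {
                --_pendingRequests;
                _pendingBytes -= bytes;
            }

            int pendingRequests() const { return _pendingRequests.load(); }
            std::size_t pendingBytes() const { return _pendingBytes.load(); }

        private:
            std::atomic<int> _pendingRequests{0};
            std::atomic<std::size_t> _pendingBytes{0};
        };

    If the pending counts keep growing, the network cannot keep up with the rate at which you queue image updates.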

    Regards,
    Andrey.
  • mes (Mark Spruiell, ZeroC, Inc., ZeroC staff)
    Hi Greg,

    Ice doesn't provide a way to query the available bandwidth, but as Andrey mentioned, your application can manually track the status of all AMI requests to maintain a fairly accurate picture of the amount of data waiting to be sent. Please refer to the "Flow Control" section here (for Ice 3.3), and let us know if you need more information.
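
    Once you have that estimate, the adaptation itself can be as simple as checking it before you produce the next update. The sketch below uses made-up names and thresholds (nextUpdateBudget is not an Ice API); it only illustrates the idea:

        #include <cstddef>

        // Decide how much data the next image update may produce, based on
        // the current estimate of bytes still waiting to be sent. The
        // threshold and the halving policy are arbitrary tuning choices.
        std::size_t nextUpdateBudget(std::size_t backlogBytes, std::size_t normalBudget)
        {
            const std::size_t highWater = 256 * 1024; // example threshold
            if(backlogBytes > highWater)
            {
                return normalBudget / 2; // network is behind, send less data
            }
            return normalBudget;         // backlog is small, keep full quality
        }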

    Regards,
    Mark
  • gmh (Greg Hughes, Hewlett-Packard, Remote Graphics Software)
    Thanks for the pointers.

    We use batched requests, and I have been experimenting with ice_flushBatchRequests_async using a callback. This shows some promise, although I only see a few queued requests before the system reaches some sort of equilibrium (which could be an accidental throttle in our code) in which steady-state operation never reports that data is being queued.

    I have disabled autoflush. Would you expect non-AMI batched requests used in conjunction with the _async version of flush to queue data?

    Regards,
    Greg
  • benoit (Benoit Foucher, ZeroC, Inc., ZeroC staff)
    Hi,

    Batch requests are queued in a dedicated buffer of the connection (the connection associated with the proxy). Flushing the batch requests queued in this buffer is pretty much equivalent to sending a oneway request: the connection prepares a message containing the batched requests and sends it immediately if no other messages are queued for sending. If other messages are waiting to be sent, the connection adds the batch message to its sending queue.

    If I understand correctly, you are using the sent callback in conjunction with ice_flushBatchRequests_async to monitor the number of pending batch requests waiting to be sent, is that right? It is generally expected that requests get queued initially, because it takes time for the connection to be established.
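
    If you also want a rough throughput figure, you can time how long each flushed batch takes before it is reported as sent. The sketch below is plain C++ and shows only the measurement side; you would call the two member functions from your own code around ice_flushBatchRequests_async and from its sent callback (the class and all of its names are mine, not part of Ice):

        #include <chrono>
        #include <cstddef>
        #include <mutex>

        // Records when a flush was started and how many bytes it carried,
        // then derives a rough throughput estimate once the sent callback
        // fires. Handles one outstanding flush at a time, for simplicity.
        class FlushTimer
        {
        public:
            void flushStarted(std::size_t bytes)
            {
                std::lock_guard<std::mutex> lock(_mutex);
                _bytes = bytes;
                _start = std::chrono::steady_clock::now();
            }

            void flushSent()
            {
                std::lock_guard<std::mutex> lock(_mutex);
                const double elapsed = std::chrono::duration<double>(
                    std::chrono::steady_clock::now() - _start).count();
                if(elapsed > 0)
                {
                    _bytesPerSecond = _bytes / elapsed; // rough estimate
                }
            }

            double bytesPerSecond() const
            {
                std::lock_guard<std::mutex> lock(_mutex);
                return _bytesPerSecond;
            }

        private:
            mutable std::mutex _mutex;
            std::chrono::steady_clock::time_point _start;
            std::size_t _bytes = 0;
            double _bytesPerSecond = 0;
        };

    Keep in mind that the measured time includes waiting behind other queued messages and, for the first samples, connection establishment, so the early numbers will look pessimistic.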

    Cheers,
    Benoit.