memory leak in async invoke?

Hi All.

My server node has a function that sends data to another node at a regular interval (every 0.4 seconds). I implemented this function with Python 2.6.6 and Ice 3.4.1.

However, when I tried to use begin_ asynchronous invocations (what Ice 3.4.1 calls the new style of async invoke), I found that memory consumption kept growing until a memory error was eventually raised. Using the plain synchronous method is fine, with no problem at all.
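
Roughly, the sending code looks like the sketch below (simplified; the proxy, the sendData operation, and make_batch are placeholders rather than my real names):

    import time

    # Simplified sketch of the periodic sender. "proxy" is assumed to be a
    # checked-cast proxy for my Slice interface; each cycle may issue many
    # invocations.
    def sender_loop(proxy, make_batch):
        while True:
            for data in make_batch():
                # New-style AMI call; the real code handles the response and
                # exception instead of ignoring them.
                proxy.begin_sendData(data,
                                     _response=lambda: None,
                                     _ex=lambda ex: None)
            time.sleep(0.4)  # one cycle every 0.4 seconds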

I want to know whether there is a limitation on async invocations, or whether I am simply using them in an incorrect manner.

BTW: it only happens on the Ice client side when invoking; the server side is running flawlessly.

Thanks.

Comments

  • mes California
    Hi,

    Thanks, we'll take a look at this. Which operating system(s) are you using?

    Mark
  • Hi mes

    I am using Windows 7 64-bit (the Ice 3.4.1 library is 32-bit). Today I discovered more about this: as long as I keep the number of invocations at a reasonable level (it is hard to determine the exact threshold, but it is about 1,600 async invocations per second), the memory error does not happen.
  • mes California
    I can't reproduce this problem.

    If your Python script is sending asynchronous requests at a faster rate than the server can dispatch them, it is quite likely that memory usage will grow over time. If you are in this situation, you might consider using the flow-control API that Ice provides to limit the number of queued requests.

    If you can post a small example that reproduces the problem, I'd be happy to take a look at it.

    Regards,
    Mark
  • Hi mes

    I guess my server is unable to handle that many requests, which results in the memory growth on the client side. About the flow control you mentioned, could you please give me an example of how to use it? I have read the documentation but couldn't find an example of it. (The doc only mentions that there is a callback slot named _sent where a callback can be supplied, nothing more.)

    Thanks

    gelin yan
  • mes California
    We do provide one example in our discussion of polling for completion (see Section 23.15.3 here). This example shows how to use polling to maximize throughput for file transfers, but it also demonstrates how you can limit the number of outstanding asynchronous requests.

    Note that this example calls waitForSent on the AsyncResult object. This method can potentially block indefinitely, for example if there is a problem in the server that prevents it from reading more data from the socket. If you want to avoid blocking, you'll need to use the sent callback instead. A rough sketch of the flow-control idea is included below.

    Regards,
    Mark
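
    Here is a minimal sketch of that flow-control idea (not the manual's example; the proxy and the sendData operation are placeholders, and the limit is arbitrary):

        # Cap the number of asynchronous requests that are still queued
        # locally, i.e., not yet written to the connection.
        MAX_QUEUED = 100     # arbitrary limit; tune it for your application
        pending = []         # AsyncResult objects that have not been sent yet

        def send(proxy, data):
            # Real code would also supply _response/_ex callbacks (or call
            # end_sendData) to consume the results.
            r = proxy.begin_sendData(data)
            if not r.sentSynchronously():
                pending.append(r)
            # Drop requests that have since been written to the socket.
            pending[:] = [p for p in pending if not p.isSent()]
            # Too many requests still queued: block until the oldest one has
            # been sent. waitForSent() can block indefinitely; use the _sent
            # callback instead if you must avoid blocking.
            while len(pending) >= MAX_QUEUED:
                pending[0].waitForSent()
                pending.pop(0)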