OutOfMemoryException occurred while allocating a ByteBuffer

1) We are seeing the following call stack

1d6da7133f97 EventType->InstrumentRelatedDataChangedEvent Processor->EventProcessor t:tcp -h LDNPSM027397 -p 18000 Conversation Status>Loading Message Status->Ready
Ice.MarshalException
reason = "OutOfMemoryException occurred while allocating a ByteBuffer"
at IceInternal.Buffer.reserve(Int32 n) in c:\src.vc8\ice\cs\src\Ice\Buffer.cs:line 149
at IceInternal.Buffer.resize(Int32 n, Boolean reading) in c:\src.vc8\ice\cs\src\Ice\Buffer.cs:line 67
at IceInternal.Buffer.expand(Int32 n) in c:\src.vc8\ice\cs\src\Ice\Buffer.cs:line 55
at IceInternal.BasicStream.expand(Int32 n) in c:\src.vc8\ice\cs\src\Ice\BasicStream.cs:line 2703
at IceInternal.BasicStream.writeSize(Int32 v) in c:\src.vc8\ice\cs\src\Ice\BasicStream.cs:line 623

My guess is that we are sending messages faster than a slow server can process them, so the underlying transport is full and the messages are being queued in an Ice runtime buffer, which we are filling up. Is there any way to set the size of this buffer? What is its default size?

2) I have also seen references to Ice.CacheMessageBuffers=0. Would this be of any use here, and what does this setting actually control? Would it also prevent .NET garbage collection of messages sent over Ice? If so, unless the server explicitly sets its references to received messages to null, the server could potentially run out of memory as well.
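
For reference, I assume the property is set like any other Ice property, either in the configuration file (Ice.CacheMessageBuffers=0) or programmatically before the communicator is created:

    Ice.InitializationData initData = new Ice.InitializationData();
    initData.properties = Ice.Util.createProperties();
    initData.properties.setProperty("Ice.CacheMessageBuffers", "0");
    Ice.Communicator communicator = Ice.Util.initialize(initData);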

Regards

Mark

Comments

  • benoit (Rennes, France)
    Hi,

    I assume you are using AMI to send the requests: with regular synchronous requests it's unlikely that you would run into this memory error condition unless you send a lot of data with each request (possibly from multiple threads). Please correct me if I'm wrong, however.

    The Ice runtime ensures that AMI requests never block, so it queues the requests with the Ice connection. This comes at the expense of flow control: as you discovered, if the server isn't fast enough to process the requests, they start piling up (waiting to be sent) in the client, which can eventually lead to an OutOfMemoryException.

    Ice doesn't provide a way to automatically limit the number of requests queued with the Ice connection. It does, however, provide the APIs you need to implement such a limit yourself.

    With the Ice 3.3 AMI mapping, calling an AMI method returns a boolean that indicates whether the invocation was sent immediately or queued. If it was queued, you can receive a notification when the invocation is eventually sent by implementing the Ice::AMISentCallback interface. This enables your application to count the number of outstanding AMI requests waiting to be sent and implement its own flow control. For more information, see the asynchronous method invocation (AMI) section of the Ice 3.3 manual.
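
    For example, here's a rough sketch of the counting (untested, and the names are invented for illustration: it assumes a Slice interface Processor with a single operation void process(string data), plus a small FlowControl helper):

        using System.Threading;

        // Tracks AMI requests that are queued in the client but not yet
        // written to the transport.
        class FlowControl
        {
            private readonly object _mutex = new object();
            private readonly int _max;
            private int _queued;

            public FlowControl(int max) { _max = max; }

            // Block the sender while too many requests are waiting to be sent.
            public void waitForRoom()
            {
                lock(_mutex)
                {
                    while(_queued >= _max)
                    {
                        Monitor.Wait(_mutex);
                    }
                }
            }

            public void increment() { lock(_mutex) { ++_queued; } }

            public void decrement()
            {
                lock(_mutex)
                {
                    --_queued;
                    Monitor.PulseAll(_mutex);
                }
            }
        }

        // AMI callback that also implements Ice.AMISentCallback: the run time
        // calls ice_sent() once a queued invocation has actually been sent.
        class AMI_Processor_processI : AMI_Processor_process, Ice.AMISentCallback
        {
            private readonly FlowControl _fc;

            public AMI_Processor_processI(FlowControl fc) { _fc = fc; }

            public override void ice_response() { }
            public override void ice_exception(Ice.Exception ex) { /* log it */ }
            public void ice_sent() { _fc.decrement(); } // left the send queue
        }

    The sending side would then look like this (process_async() returns true if the request was sent immediately and false if it was queued):

        fc.waitForRoom();
        fc.increment();
        if(proxy.process_async(new AMI_Processor_processI(fc), data))
        {
            fc.decrement(); // sent right away, nothing was queued
        }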

    The new Ice 3.4 AMI mapping provides similar APIs that let your application implement flow control; see the Ice 3.4 documentation for more information (the new AMI mapping is described in each language mapping chapter).
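
    For example, the same idea with the new mapping might look like this (again only a sketch, reusing the hypothetical Processor interface and FlowControl helper from above):

        fc.waitForRoom();
        fc.increment();
        Ice.AsyncResult r = proxy.begin_process(data);
        if(r.sentSynchronously())
        {
            fc.decrement(); // written to the transport immediately
        }
        else
        {
            // The callback fires once the queued request is actually sent.
            r.whenSent((bool sentSynchronously) => fc.decrement());
        }
        // Response/exception handling with whenCompleted() omitted for brevity.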

    The Ice.CacheMessageBuffers property wouldn't help here. It's an optimization that avoids creating too much garbage by re-using previously created message buffers. It only applies to synchronous requests, however, not to AMI requests.

    Cheers,
    Benoit.
  • You are correct, we are using AMI.

    The client (a server process) pushes messages to multiple servers (processors). One of these processors is slow, and I guess the pushed messages are building up.

    Is the buffer associated with the proxy (connection)? If so, when one receiver is slow I could block sending to just that one until its unsent count drops, while still sending to all the other processors through their own proxies on different connections.

    All the proxies are created from the same communicator. I'm guessing per-proxy throttling should still be viable, as it would seem odd to have a single buffer per communicator.

    Regards

    Mark
  • benoit (Rennes, France)
    Hi,

    The requests are queued with the connection associated with the proxy once the connection is established; if the connection is not yet established, the requests are queued with the proxy itself.

    So yes, you can limit the number of queued requests on a per-proxy basis and continue sending requests to other servers even if the limit has been reached for some of them.
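
    For example (sketch only, reusing the hypothetical ProcessorPrx and FlowControl helper from my earlier post; note that waitForRoom() followed by increment() is only safe like this if a single thread sends to each processor):

        using System.Collections.Generic;

        class Pusher
        {
            // One flow-control counter per processor proxy.
            private readonly Dictionary<ProcessorPrx, FlowControl> _limits =
                new Dictionary<ProcessorPrx, FlowControl>();

            public void add(ProcessorPrx proxy, int maxQueued)
            {
                _limits[proxy] = new FlowControl(maxQueued);
            }

            // Blocks only while this processor's queue is full; the other
            // processors can still be sent to in the meantime.
            public void push(ProcessorPrx proxy, string data)
            {
                FlowControl fc = _limits[proxy];
                fc.waitForRoom();
                fc.increment();
                Ice.AsyncResult r = proxy.begin_process(data);
                if(r.sentSynchronously())
                {
                    fc.decrement();
                }
                else
                {
                    r.whenSent((bool sentSynchronously) => fc.decrement());
                }
            }
        }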

    Cheers,
    Benoit.