
Orphaned callback object after MemoryLimitException

Andrew S (Andrew Solodovnikov, Moscow State Technical University; project: simple grid-like system)
Hello!

I'm trying to send a large amount of data in one chunk with an AMI call (via a data callback object). When the chunk is too large, I get Ice::MemoryLimitException in the callback object, and from that object I make another AMI call (SendItemError) to report the error. After this, the data callback object is never deleted. A data callback object is created for each request.
	struct AgentTaskEvent: public ::agent::AMI_IAgentTaskEvent_OnItemComplete
	{
		AgentTaskEvent()
		{
			LOG_ERROR() << "AgentTaskEvent ctor" << this;
		}
		~AgentTaskEvent()
		{
			LOG_ERROR() << "AgentTaskEvent dtor" << this;
		}
		virtual void ice_response()
		{
		}
		virtual void ice_exception(const ::Ice::Exception& ex)
		{
			LOG_A(llERR) << "Exception while item completion: " << ice_helpers::RootExPrint(ex);
			try 
			{
				ex.ice_throw();
			}
			catch(const Ice::MemoryLimitException &)
			{
				SendItemError(E_OUTOFMEMORY, ex.what());
			}
			catch(...)
			{
			}
			LOG_A(llERR) << "Exception exit";
		}
	};

Here is some log records:
10 Feb 2009 14:02:24.403 [Error] AgentTaskEvent ctor0041FD48
10 Feb 2009 14:02:24.403 [Error] AgentTaskEvent dtor0041FD48
10 Feb 2009 14:22:25.184 [Error] AgentTaskEvent ctor0041FD48
10 Feb 2009 14:22:26.543 [Error] Exception while item completion: ..\..\include\Ice/BasicStream.h:112: Ice::MemoryLimitException:
10 Feb 2009 14:22:26.572 [Error] Exception exit
10 Feb 2009 14:24:43.996 [Error] AgentTaskEvent ctor0041FCA8
10 Feb 2009 14:24:43.996 [Error] AgentTaskEvent dtor0041FCA8
10 Feb 2009 14:24:44.199 [Error] AgentTaskEvent ctor0041FCA8
10 Feb 2009 14:24:44.199 [Error] AgentTaskEvent dtor0041FCA8
10 Feb 2009 14:24:44.324 [Error] AgentTaskEvent ctor0041FCA8
10 Feb 2009 14:24:44.324 [Error] AgentTaskEvent dtor0041FCA8
10 Feb 2009 14:24:44.621 [Error] AgentTaskEvent ctor0041FCA8
10 Feb 2009 14:24:44.621 [Error] AgentTaskEvent dtor0041FCA8
10 Feb 2009 14:24:45.090 [Error] AgentTaskEvent ctor0041FCA8
10 Feb 2009 14:24:45.106 [Error] AgentTaskEvent dtor0041FCA8

As you can see, all event objects before the exception were allocated at the same address (0041FD48), but after the exception we get a new one (0041FCA8), so the object at 0041FD48 is a living dead: its destructor will never be called :)
When there is no MemoryLimitException, all callback objects are allocated at the same address (callbacks are not rare).
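This lifetime pattern can be mimicked in a standalone sketch. To be clear, this is not Ice code: it is a hypothetical modern-C++ mock (std::shared_ptr standing in for Ice's reference-counted callback handle, and PoolThread::workItem standing in for a pool thread's work-item slot) of one way such a log can arise. If the last reference to the callback is parked in a per-thread slot that is only overwritten on the next dispatch, the destructor fires late, or never if the slot is not reused.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Records ctor/dtor events so the destruction order is observable.
struct Callback {
    static std::vector<std::string>* log;
    Callback()  { log->push_back("ctor"); }
    ~Callback() { log->push_back("dtor"); }
};
std::vector<std::string>* Callback::log = nullptr;

// Mock of a thread-pool thread that keeps its current work item
// in a member handle between dispatches.
struct PoolThread {
    std::shared_ptr<Callback> workItem;

    void dispatch(std::shared_ptr<Callback> cb, bool resetAfter) {
        workItem = std::move(cb);
        // ... the callback's ice_exception() would run here ...
        if (resetAfter)
            workItem.reset();  // drop the reference as soon as dispatch is done
        // otherwise the callback survives until this slot is overwritten
    }
};

// Runs one dispatch, then destroys the pool thread; returns the event log.
std::vector<std::string> run(bool resetAfter) {
    std::vector<std::string> log;
    Callback::log = &log;
    {
        PoolThread t;
        t.dispatch(std::make_shared<Callback>(), resetAfter);
        log.push_back("dispatch returned");
    }  // pool thread (and any lingering workItem) destroyed here
    log.push_back("pool gone");
    return log;
}
```

With resetAfter true the destructor runs inside dispatch; with it false the destructor only runs when the pool thread itself goes away, which matches the "living dead" object described above.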

Environment:
Ice 3.3, Windows XP, VC 9.0.
<IceProperties>
<Item>--Ice.ThreadPool.Server.Size=4</Item>
<Item>--Ice.ThreadPool.Client.Size=4</Item>
<Item>--Ice.ACM.Client=0</Item>
</IceProperties>
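For context: in Ice 3.3, Ice::MemoryLimitException is raised when a message exceeds Ice.MessageSizeMax (specified in kilobytes, default 1024). Raising it in both client and server configurations lets larger chunks through, though that only sidesteps the exception and is unrelated to the orphaned-callback problem itself:

```
Ice.MessageSizeMax=65536
```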

Thanks!

Comments

  • benoit (Benoit Foucher, ZeroC, Inc.; Rennes, France; ZeroC Staff)
    Hi Andrew,

    I was unfortunately unable to reproduce this with a similar configuration. I modified the cpp\demo\Ice\async demo to send a large byte sequence in the sayHello operation. The AMI request fails with Ice::MemoryLimitException and the AMI callback object is correctly destroyed.

    Could you perhaps try to reproduce this in a small self-compilable test case?

    Cheers,
    Benoit.
  • Andrew S
    Hi, Benoit.

    Thanks for the answer. I'll try to reproduce this with a small demo. Our application reliably fails with a large amount of data (for example, 60 MB); with a smaller amount (for example, 5-10 MB) it works fine, which is really strange.
    I have also tried to find the source of the problem, but with no luck...
  • Andrew S
    Hi, Benoit.

    I was able to reproduce the bug with the async sample... Just add
    Ice.ThreadPool.Server.Size=4
    Ice.ThreadPool.Client.Size=4
    Ice.ACM.Client=0
    

    lines to the client and server configs and you'll catch it...
    Sample code and binaries are attached. I'm sorry, but it seems that there is a serious bug with the thread pool and async requests...

    http://www.zeroc.com/forums/attachment.php?attachmentid=649&stc=1&d=1234436766
  • benoit
    Hi Andrew,

    Thanks for the test case! I indeed forgot to modify the thread pool size the first time I tried. The destructors are called, but not right away after the ice_exception dispatch returns: the AMI callback is only destroyed when the thread-pool thread that executed the ice_exception call is used again (to dispatch another call).

    You can fix this by adding:
        workItem = 0;
    

    at line 417 of Ice-3.3.0/cpp/src/Ice/ThreadPool.cpp (after the execute() call). This should ensure the callback is destroyed right away after ice_exception is called.

    We'll fix this for the next release!

    Cheers,
    Benoit.
  • Andrew S
    Hi, Benoit. Thanks for the fix, I'll try it.
    FYI, in my case the destructors were never called - the objects just hung around until the thread pool was destroyed. We discovered the problem when we noticed objects that had been hanging for days...