Core dump in Python servant locator
I have some Python code that uses the servant locator, and it consistently
core dumps on CentOS 4.1 and Mac OS X 10.4. Here are the two stack traces:
Linux:
#0  PyErr_Fetch (p_type=0xc75eac, p_value=0xc75eac, p_traceback=0xc75eac) at Python/errors.c:210
210         *p_type = tstate->curexc_type;
(gdb) bt
#0  PyErr_Fetch (p_type=0xc75eac, p_value=0xc75eac, p_traceback=0xc75eac) at Python/errors.c:210
#1  0x00b9a8a2 in instance_dealloc (inst=0xb78cf76c) at Objects/classobject.c:641
#2  0x00bbc638 in dict_dealloc (mp=0xb7b9c824) at Objects/dictobject.c:728
#3  0x00bcff43 in subtype_dealloc (self=0xb78cf70c) at Objects/typeobject.c:691
#4  0x0064064e in IcePy::ServantWrapper::~ServantWrapper$delete () from /usr/local/ice/3.2.0-gcc34/lib/python/IcePy.so
#5  0x0108d191 in IceInternal::GCShared::__decRef () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#6  0x010fad2b in IceInternal::decRef () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#7  0x01065c49 in Ice::ConnectionI::invokeAll () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#8  0x0106a7ac in Ice::ConnectionI::message () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#9  0x0117d511 in IceInternal::ThreadPool::run () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#10 0x0117e849 in IceInternal::ThreadPool::EventHandlerThread::run () from /usr/local/ice/3.2.0-gcc34/lib/libIce.so.32
#11 0x00d180d9 in startHook () from /usr/local/ice/3.2.0-gcc34/lib/libIceUtil.so.32
#12 0x009e4371 in start_thread () from /lib/tls/libpthread.so.0
#13 0x0084bffe in clone () from /lib/tls/libc.so.6
Mac OS X:
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_PROTECTION_FAILURE at address: 0x00000028
[Switching to process 344 thread 0x3803]
PyErr_Fetch (p_type=0xb0c987ec, p_value=0xb0c987e8, p_traceback=0xb0c987e4) at Python/errors.c:210
210     Python/errors.c: No such file or directory.
        in Python/errors.c
(gdb) bt
#0  PyErr_Fetch (p_type=0xb0c987ec, p_value=0xb0c987e8, p_traceback=0xb0c987e4) at Python/errors.c:210
#1  0x002177a8 in instance_dealloc (inst=0x24245a8) at Objects/classobject.c:654
#2  0x00239c90 in dict_dealloc (mp=0x2ccc420) at Objects/dictobject.c:766
#3  0x0024f64b in subtype_dealloc (self=0x5c5830) at Objects/typeobject.c:691
#4  0x026f2076 in IcePy::ServantWrapper::~ServantWrapper ()
#5  0x027c4680 in IceInternal::GCShared::__decRef ()
#6  0x027a1069 in Ice::ConnectionI::invokeAll ()
#7  0x027a9be4 in Ice::ConnectionI::message ()
#8  0x02878643 in IceInternal::ThreadPool::run ()
#9  0x02879a26 in IceInternal::ThreadPool::EventHandlerThread::run ()
#10 0x02bb0637 in startHook ()
#11 0x90023d87 in _pthread_body ()
The fix was to change this code by adding the AdoptThread declaration:
IcePy::ServantWrapper::~ServantWrapper()
{
    AdoptThread adoptThread; // adopt this thread so it can safely call into the interpreter
    Py_DECREF(_servant);
}
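For reference, AdoptThread is a small RAII guard in IcePy that attaches the calling thread to the Python interpreter. I haven't studied its exact implementation, but my understanding is that it amounts to something like the following sketch built on the CPython GIL-state API (the class name is reused here purely for illustration):

#include <Python.h>

// Sketch of an AdoptThread-style guard (assumed shape, not the actual
// IcePy source). PyGILState_Ensure() attaches the calling thread to the
// interpreter and acquires the GIL; PyGILState_Release() restores the
// previous state when the guard goes out of scope.
class AdoptThread
{
public:
    AdoptThread() :
        _state(PyGILState_Ensure())
    {
    }

    ~AdoptThread()
    {
        PyGILState_Release(_state);
    }

private:
    PyGILState_STATE _state;
};

That would explain the crash: ~ServantWrapper can run on an Ice thread-pool thread that has no Python thread state, so calling Py_DECREF there without the guard ends up dereferencing a null thread state inside PyErr_Fetch, exactly as the traces above show.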
I'm not an expert on Python internals, but it may be good to review all
the other places where this could be needed, such as the following. Shouldn't
IcePy::ServantLocatorWrapper::~ServantLocatorWrapper() call
Py_DECREF(_locator) to balance the Py_INCREF(_locator) in the constructor?
IcePy::ServantLocatorWrapper::ServantLocatorWrapper(PyObject* locator) :
    _locator(locator)
{
    Py_INCREF(_locator);
    _objectType = lookupType("Ice.Object");
}

IcePy::ServantLocatorWrapper::~ServantLocatorWrapper()
{
}
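If the same approach applies here, the patched destructor would presumably look something like the following. This is only a sketch on my part; I haven't tested it, and whether the guard is needed depends on which thread ends up destroying the wrapper:

// Hypothetical fix, mirroring the ~ServantWrapper change above: adopt
// the thread, then balance the constructor's Py_INCREF(_locator).
IcePy::ServantLocatorWrapper::~ServantLocatorWrapper()
{
    AdoptThread adoptThread;
    Py_DECREF(_locator);
}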
Regards,
Blair
Comments
Hi Blair,
Thanks for the patch. This problem was first reported back in May, and the fix will be included in the next patch release.
Take care,
- Mark
Hi Mark,
Thanks for the quick reply. Nice to hear that it'll be fixed in
the next release.
Regards,
Blair