This forum has been archived. Please start a new discussion on GitHub.

Deadlock on Evictor Shutdown


Found a problem where a Freeze Evictor process won't shut down when Ctrl-C is pressed. Perhaps I pressed Ctrl-C more than once before the saving thread could finish; I'm not sure. The evictor is large and there were thousands of objects needing saving. I thought that perhaps a lock in my code wasn't being released, but that doesn't look like what is being waited for in saveNowNoSync().

(gdb) where
#0 0x000000389c9088da in pthread_cond_wait@@GLIBC_2.3.2 ()
from /lib64/tls/
#1 0x0000002a9599539c in Freeze::EvictorI::saveNowNoSync ()
from /network/Ice-3.0.1-newbdb/lib64/
#2 0x0000002a9599580d in Freeze::EvictorI::deactivate ()
from /network/Ice-3.0.1-newbdb/lib64/
#3 0x0000002a956b2cfa in IceInternal::ServantManager::destroy ()
from /network/Ice-3.0.1-newbdb/lib64/
#4 0x0000002a95661e11 in Ice::ObjectAdapterI::waitForDeactivate ()
from /network/Ice-3.0.1-newbdb/lib64/
#5 0x0000002a9565a90c in std::for_each<std::_Rb_tree_iterator<std::pair<std::string const, IceUtil::Handle<Ice::ObjectAdapterI> > >, IceUtilInternal::SecondVoidMemFun<std::string const, Ice::ObjectAdapterI, IceUtil::Handle<Ice::ObjectAdapterI> > > () from /network/Ice-3.0.1-newbdb/lib64/
#6 0x0000002a9565969e in IceInternal::ObjectAdapterFactory::waitForShutdown ()
from /network/Ice-3.0.1-newbdb/lib64/
#7 0x0000002a955e4c95 in Ice::CommunicatorI::waitForShutdown ()
from /network/Ice-3.0.1-newbdb/lib64/
#8 0x0000002a956b6a6b in Ice::Service::waitForShutdown ()
from /network/Ice-3.0.1-newbdb/lib64/
#9 0x0000002a956b982d in Ice::Service::run ()
from /network/Ice-3.0.1-newbdb/lib64/
#10 0x000000000043778e in main (argc=1, argv=0x7fbffff828) at



  • Ignore This

    Sorry to have bugged anyone,

    I think this is just really slow NFS.

  • matthew
    matthew NL, Canada
    If you are storing BerkeleyDB files on NFS file systems you should be careful. Please read this:
  • Thanks, Aware of the Potential NFS Issue

    Thanks for this pointer. We use NFS for development -- local disk for production.

    The "problem" was simply that the saving thread couldn't keep up with the rest of the system, so the "main" thread was correctly blocked waiting for it. I just hadn't seen a flush take this long (30 min.) before.