IceBox threads and CPU Affinity

CPU: E5-2687W v2 @ 3.40GHz
OS: CentOS 7.3
Kernel: 3.10.0-514.21.2
ZeroC Version: 3.6.1

We have an issue wherein our IceStorm service is experiencing TCP Zero Window conditions. We suspect it has something to do with thread scheduling: the service runs on a box with 32 cores, yet the IceBox thread pool (with a minsize of 1 and a maxsize of 55) seems to keep most of its threads (between 28 and 38 of them) on core 0. Thread affinity for the main IceBox process is set to all cores.
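
For reference, the settings in question look roughly like the following. This is only a sketch: the property names assume the pool being described is the server thread pool of the IceBox communicator, and setting affinity via taskset is likewise an assumption.

    # Sketch of the thread pool described above (server pool of the IceBox communicator)
    Ice.ThreadPool.Server.Size=1        # minimum number of threads ("minsize")
    Ice.ThreadPool.Server.SizeMax=55    # maximum number of threads ("maxsize")

    # Affinity for the IceBox process spanning all 32 cores, e.g.:
    #   taskset -c 0-31 icebox --Ice.Config=icebox.config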

Watching the threads with top -H often shows about 5 or 6 IceBox threads spring into action when work arrives, and often all of them are shown as running on core 0, which means they are context switching among each other when each could have its own core and run simultaneously.
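
A quick way to snapshot which core each thread last ran on is the PSR column of ps; the process name below is an assumption.

    # TID = thread id, PSR = processor the thread last ran on
    ps -Lo tid,psr,pcpu,comm -p $(pidof icebox)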

Digging deeper with perf sched record -C 0 -- sleep 120 and then perf sched latency, I get clear evidence of heavy scheduling overhead among multiple IceBox threads on core 0, which is clearly unnecessary given the number of cores available on this box. Over the 120 seconds during which I tracked scheduling on core 0, the total runtime for all the IceBox threads on that core was 973ms, yet they had been context-switched off a total of 21,329 times. The core with the next-highest number of IceBox threads showed a total runtime of 674ms across its IceBox threads, with only 3,804 context switches (roughly 6x fewer).
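
For reference, the measurement amounted to the following; sorting the latency report by switch count is an addition here, not necessarily what was run.

    # Record scheduler events on core 0 for two minutes, then summarize
    # per-task runtime and context-switch counts:
    perf sched record -C 0 -- sleep 120
    perf sched latency -s switch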

In all, it seems like IceBox only ever uses 10 to 12 of the cores on the box despite there being some 50 IceBox threads on a 32-core machine. Is there anything we can do configuration-wise to make the IceBox thread pool use all of the cores rather than piling more than half of its threads onto core 0?

Comments

  • I should also add that the ThreadPool for this application has 'Serialize' set to 1.
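    In property form that would presumably be the following (again assuming the server thread pool of this communicator):

        Ice.ThreadPool.Server.Serialize=1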

  • Disregard. Upon further research, this turns out to be an issue with the default policy specified via numactl.
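
    For anyone hitting the same thing, the policy can be inspected and, if needed, overridden when launching the process. The commands below are only a sketch; interleaving across all nodes is an example, not necessarily the fix applied here.

        numactl --hardware    # show the NUMA nodes and which CPUs belong to each
        numactl --show        # show the policy the current shell would inherit
        numactl --interleave=all icebox --Ice.Config=icebox.config    # example override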

  • bernard (Jupiter, FL)

    Hi Mark,

    Glad to see you figured out this issue. Also, keep in mind that with IceBox, the communicator for IceBox itself is typically not used much, and as a result doesn't need big thread pools (I'd keep the default settings).

    It's the communicator for your IceBox service(s) (for example IceStorm) that may need larger thread pools. You would also "serialize" the IceStorm service's thread pools, not the IceBox server thread pools. See:
    https://doc.zeroc.com/ice/3.7/icebox/configuring-icebox-services
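
    As an illustration only (the entry point string and file names below are assumptions, not taken from your setup), the service gets its own property set, and the thread pool settings go there:

        # IceBox configuration: load IceStorm with its own config file
        IceBox.Service.IceStorm=IceStormService,36:createIceStorm --Ice.Config=icestorm.config

        # icestorm.config: server thread pool of the IceStorm service's communicator
        Ice.ThreadPool.Server.SizeMax=55
        Ice.ThreadPool.Server.Serialize=1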

    All the best,
    Bernard