This forum has been archived. Please start a new discussion on GitHub.

ThreadPools and Concurrency


I am a novice with ice and had a few questions related to Server side ThreadPools.

I can see that one can increase the size of the server thread pool so that multiple clients can connect at once and be serviced concurrently. However, looking at the CPU utilization on my 4-core machine, I only see one CPU pegged. The server performs a long-running calculation and returns a number to the client, so I would have expected all of the CPUs to be fully pegged, but that does not appear to be the case. I don't have any locking in my server-side code, as all of the variables are local to the server-side method.

As an FYI, my environment is Mac OS X Leopard running on a 4-core Mac Pro, and I use Ice 3.2.1 with a recompiled version of IcePy (the default IcePy build does not work on Leopard).




  • benoit
    benoit Rennes, France

    The server should indeed use multiple threads if you configure its thread pool with more than one thread, for example with the Ice.ThreadPool.Server.Size=4 configuration property.

    To try this out, you can modify the hello world demo (from the demo/Ice/hello directory) by changing the implementation of the sayHello method to loop for a large number of iterations, and start the server with --Ice.ThreadPool.Server.Size=4. If you launch multiple clients to invoke the sayHello method, the server should use up to 4 threads (I tried this on a MacBook Pro with Mac OS X 10.5.1 and it worked as expected).
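    The server-side thread pool behaves like a fixed-size pool of worker threads servicing incoming requests. Here is a minimal stdlib-only analogue of that setup (plain Python, not Ice itself; `say_hello` is a hypothetical stand-in for the demo's sayHello servant method):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def say_hello(n):
        # Stand-in for a long-running sayHello: a CPU-bound loop.
        total = 0
        for i in range(n):
            total += i
        return total

    # Four workers, mirroring --Ice.ThreadPool.Server.Size=4:
    # up to four "requests" are dispatched concurrently.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(say_hello, [1_000_000] * 4))

    print(results)
    ```

    Note that this only demonstrates concurrent dispatch; as discussed further down the thread, CPython's interpreter lock still keeps CPU-bound threads from running on multiple cores at once.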

  • Thank you, Benoit, for the prompt response.

    Indeed, I set the thread pool size using the configuration property, and it does look like the threads are being allocated properly; for instance, the server now processes multiple clients at once.

    However, I also notice that the OS is not distributing these threads across multiple processor cores. I set the thread pool size to 10 and have 4 clients connected to the server, yet the load average is only about 2 on a 4-core box. Perhaps this is beyond Ice and is more a Mac OS X or Python behavior?

  • It appears to be related to Python. The C++ version of the server behaves as expected (the more clients you connect, the more CPU cores get used).

    The Python version of the server does allocate more threads; however, it does not appear to schedule them across cores. I am assuming there is some sort of locking at the Python<=>C++ boundary that prevents the threads from running simultaneously.


  • mes
    mes California

    As you've discovered, the Python interpreter is effectively single-threaded: its Global Interpreter Lock (GIL) allows only one thread at a time to execute Python bytecode. Although the Ice thread pool might grow to contain multiple threads, only one thread at a time can be active in the interpreter. This is not a restriction in the Ice extension but in the interpreter itself, and it appears the next generation of Python ("Python 3000") will retain this restriction.

    The alternatives are to run multiple instances of your Python server (for example, one per CPU core) or to use a different language.

    Take care,
    - Mark
  • I don't think that is true. It is likely that it will be possible to compile a version of Python 3000 without the GIL, but there will probably be an overall performance hit for single-threaded programs. There is already a version by Adam Olsen that removes the GIL; see this thread and look for Adam's post. He says he will release the code after Python 3.0 is out, so it's possible it could make it into the 3.1 release.

    EDIT: You should also be able to use the C# version with IronPython. IronPython has no such thing as a GIL because it's built on top of the CLR, which is already multi-threaded. (Jython and the Java version would work too.)