ThreadPools and Concurrency

Hello,
I am a novice with Ice and had a few questions related to server-side thread pools.
I can see that one can increase the size of the server thread pool so that multiple clients can connect at once and be serviced by the server. However, looking at the CPU utilization on my 4-core machine, I only see one CPU pegged. The server performs a long-running calculation and returns a number back to the client, so I would have expected all of the CPUs to be fully pegged, but that does not appear to be the case. I don't have any locking going on in my server-side code, as all the variables are local to the server-side method.
As an FYI, my environment consists of Mac OS X Leopard running on a 4-core Mac Pro, and I use Ice-3.2.1 with a re-compiled version of IcePy (the default version of IcePy does not work on Leopard).
Thanks,
S.r.
Comments
It should indeed use multiple threads if you configure the server with multiple threads, for example, with the Ice.ThreadPool.Server.Size=4 configuration property.
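For reference, the same property can also be set in the server's configuration file rather than on the command line. A minimal fragment (the file name is illustrative):

```
# config.server (illustrative name)
# Fixed-size server thread pool of 4 threads, so up to 4
# requests can be dispatched concurrently.
Ice.ThreadPool.Server.Size=4
```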
To try this out, you can modify the hello world demo (from the demo/Ice/hello directory) by changing the implementation of the sayHello method to loop for a large number of iterations, and start the server with --Ice.ThreadPool.Server.Size=4. If you launch multiple clients to invoke the sayHello method, the server should use up to 4 threads (I tried this on a MacBook Pro with Mac OS X 10.5.1 and it worked as expected).
Cheers,
Benoit.
Indeed, I set the threadpool size using the configuration item and it does look like the threads are getting allocated properly. For instance, the server is processing multiple clients at once.
However, I also notice that the OS is not distributing these threads across multiple processor cores. I set my thread pool size to 10 and I have 4 clients connected to the server, yet I have a load average of about 2 on a 4-core box. Perhaps this is beyond Ice and is more a matter of Mac OS X or Python behavior?
S.r.
The Python version of the server does allocate more threads; however, it does not appear to schedule them appropriately. I am assuming there is some sort of locking going on at the Python<=>C++ layer that is preventing the threads from running simultaneously.
Thanks,
S.r.
As you've discovered, the Python interpreter is single threaded. Although the Ice thread pool might grow to contain multiple threads, only one thread at a time can be active in the interpreter. This is not a restriction in the Ice extension, but in the interpreter itself, and it appears the next generation of Python ("Python 3000") will continue to have this restriction.
Alternatives are to run multiple instances of your Python server (such as one for each CPU), or use a different language.
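The first alternative can be sketched with Python's standard multiprocessing module; the function name, worker count, and workload below are illustrative stand-ins for the actual server calculation, not Ice API:

```python
# Sketch: sidestepping the GIL by using processes instead of threads.
# cpu_bound is a hypothetical stand-in for the server's long-running
# calculation; the worker count of 4 matches a 4-core machine.
import multiprocessing

def cpu_bound(n):
    # Pure-Python CPU-bound work: sum of squares below n.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    inputs = [200_000] * 4
    # Each worker is a separate interpreter with its own GIL,
    # so the four calculations can run on four cores in parallel.
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(cpu_bound, inputs)
    print(results)
```

The same idea applies to an Ice server: start one server process per core (each with its own endpoint or behind a load-balancing mechanism) rather than one process with many threads.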
Take care,
- Mark
EDIT: You should also be able to use the C# version with IronPython. IronPython has no such thing as the GIL because it's built on top of the CLR, which is already multi-threaded. (Jython and the Java version would work too.)