IcePy and the global interpreter lock

As we know, Python has a global interpreter lock (GIL): only one thread can actually be running in the Python virtual machine at any point in time. A Python thread must acquire the single shared lock (the GIL) before it can run, and it may be swapped out after executing a set number of virtual-machine instructions. So Python threads have a bad reputation for performance, because they cannot truly run concurrently.
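
The effect is easy to see with a purely CPU-bound function: two threads take roughly as long as doing the work twice in one thread, because only one of them holds the GIL at a time. The snippet below is just an illustrative sketch (the function and the loop count are made up, and timings depend on the machine):

import threading
import time

def count_down(n):
    # Pure-Python, CPU-bound loop: the thread holds the GIL while it runs.
    while n > 0:
        n -= 1

N = 10000000

start = time.time()
count_down(N)
count_down(N)
print("sequential : %.2f s" % (time.time() - start))

start = time.time()
threads = [threading.Thread(target=count_down, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("two threads: %.2f s" % (time.time() - start))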

For IcePy, its threads and thread pools come from Ice.so, which is written in C++. Are they subject to the constraint of the GIL?

For example, this is a simple slice file:
module Demo
{
    interface Printer
    {
        void printString(string s);
    };
};

This is the server implementation:
import Demo

class PrinterI(Demo.Printer):
    def printString(self, s, current=None):
        print(s)
        # there is a lot of other computation to do here
        # ...
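
For reference, a minimal bootstrap that hosts this servant with a five-thread server pool might look like the sketch below (the endpoint is just an example, and helpers such as Ice.stringToIdentity have varied slightly between Ice releases):

import sys
import Ice
# PrinterI and the generated Demo module are assumed to be defined as above.

props = Ice.createProperties(sys.argv)
props.setProperty("Ice.ThreadPool.Server.Size", "5")

initData = Ice.InitializationData()
initData.properties = props

communicator = Ice.initialize(initData)
adapter = communicator.createObjectAdapterWithEndpoints("PrinterAdapter", "default -p 10000")
adapter.add(PrinterI(), Ice.stringToIdentity("printer"))
adapter.activate()
communicator.waitForShutdown()
communicator.destroy()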

Suppose Ice.ThreadPool.Server.Size=5 and five clients send requests to the server simultaneously. How does the GIL come into play? Are there any general principles?

Comments

  • mes (California)
    Hi,

    The GIL affects server-side dispatch too. The C++ Ice run time does as much as it can in the available native threads, such as accepting new connections, unmarshaling requests and sending replies. However, eventually it must call into the Python interpreter to dispatch the request to the servant, and only one thread at a time can be active in the interpreter. If this poses a serious problem, you should write your server in another language.

    Take care,
    - Mark
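
A practical consequence of that dispatch model is that a servant can avoid serializing the whole server on the GIL by pushing its CPU-heavy work out of the interpreter, for example into a worker process. The sketch below only illustrates the idea; the process pool and the cpu_heavy helper are hypothetical and not part of Ice:

import multiprocessing

import Demo

def cpu_heavy(s):
    # Stand-in for the expensive, CPU-bound part of the request.
    return s.upper()

# Created once at startup (on Windows this belongs under an
# if __name__ == "__main__" guard).
pool = multiprocessing.Pool(processes=5)

class PrinterI(Demo.Printer):
    def printString(self, s, current=None):
        print(s)
        # While this dispatch thread waits for the worker process, it releases
        # the GIL, so the other dispatch threads can keep running Python code.
        result = pool.apply(cpu_heavy, (s,))
        print(result)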