
Ice thread pool model under Windows

Hi,
This thread mainly concerns the Ice thread pool model under Windows.

On Windows, Ice uses select() together with a leader/follower thread pool to handle multiple socket events concurrently.
Similarly, on Unix/Linux systems, Ice can be configured to use epoll() with the same leader/follower thread pool.

Essentially, six socket I/O models are available to Winsock applications: blocking, select, WSAAsyncSelect, WSAEventSelect, overlapped I/O, and completion ports. The Winsock documentation claims that the completion port model offers the best system performance when an application has to manage many sockets at once.
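For concreteness, the select-plus-leader/follower combination mentioned above can be sketched as follows. This is a toy illustration of the general technique (the class and its parameters are made up for this post), not Ice's actual implementation:

```python
import select
import socket
import threading

# Hypothetical minimal leader/follower pool driven by select(), over a
# single socket for simplicity. Exactly one thread -- the current leader --
# waits in select() and reads the ready event; it then releases leadership
# so a follower can wait while the old leader processes the event.
class LeaderFollowerPool:
    def __init__(self, sock, n_threads, handler):
        self.sock = sock
        self.handler = handler
        self.running = True
        # Held only by the current leader while it demultiplexes events.
        self.leader_lock = threading.Lock()
        self.threads = [threading.Thread(target=self._run, daemon=True)
                        for _ in range(n_threads)]

    def start(self):
        for t in self.threads:
            t.start()

    def stop(self):
        self.running = False
        for t in self.threads:
            t.join()

    def _run(self):
        while True:
            data = None
            with self.leader_lock:                 # become the leader
                if not self.running:
                    return
                ready, _, _ = select.select([self.sock], [], [], 0.1)
                if ready:
                    data = self.sock.recv(4096)
            # Lock released: a follower is promoted to leader and may enter
            # select() while this thread dispatches the event it just read.
            if data:
                self.handler(data)
```

A completion-port design would invert this: threads block in GetQueuedCompletionStatus() and the kernel hands each completed I/O to one of them, with no user-level demultiplexing step.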

I'd like to know why Ice chose the select I/O model on Windows rather than the completion port model that is claimed to perform best.

Thanks!

Comments

  • benoit (Rennes, France)
    Hi,

    We used select() on Windows because that's what we used originally on Unix and it was supported by Windows 95, 98, etc. (unlike completion ports). Now that we no longer support these old Windows versions, we could indeed use completion ports in the thread pool. However, at this time this has rather low priority on our TODO list, since no customer has ever asked for it.

    Just out of curiosity, do you have a concrete use case where you need Ice completion port support?

    Keep in mind that in practice, a single Ice server rarely needs to scale to several thousand clients. Ice isn't about writing client-server applications where you run a "monolithic" server on a single and often costly machine. Ice is about writing distributed applications where you can eventually distribute the load over several servers on multiple (and often cheaper) machines, so you can always handle more clients :)

    Typically, applications which need to handle a lot of clients are written following this pattern:
    • Clients connect to frontend servers. Each frontend server handles a few hundred clients and often presents a simpler view of the application to the client (using the facade design pattern). These frontend servers act as connection concentrators and forward requests to the backend servers.
    • Backend servers implement the logic of the application and are accessed only by a few frontend servers. They don't have to worry about handling several thousand clients.
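    As a rough illustration of the connection-concentrator effect described above (all figures below are invented for the sake of the arithmetic, not measurements of any real deployment):

```python
# Hypothetical back-of-the-envelope figures for a concentrator topology.
clients = 10_000            # clients connecting to the application overall
frontends = 20              # front-end servers acting as concentrators
pooled_per_frontend = 4     # cached connections each front end keeps to a back end

# Each front end only services a few hundred client connections...
client_conns_per_frontend = clients // frontends

# ...while the back end sees only the pooled front-end connections,
# not one connection per client. Note this reduces *connections*, not
# the total number of requests the back end must process.
backend_conns = frontends * pooled_per_frontend

print(client_conns_per_frontend)   # 500
print(backend_conns)               # 80
```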

    This is, I think, a much better design than the client/"monolithic server" design, as you can scale the application more easily by adding more hardware.

    Cheers,
    Benoit.
  • Thanks a lot for giving such a concrete and helpful reply

    Hi Benoit, thanks a lot for giving such a concrete and helpful reply.
    Indeed, I don't have a concrete use case so far. The question is mostly out of curiosity: we are all excited about Ice, which is an excellent alternative among modern object middleware thanks to its simplicity and efficiency. Naturally, I am curious about the thread pool model in Ice, which acts as its workhorse. Thanks for your suggestions concerning the "front-end/back-end server" architecture. The architecture does sound very promising; however, there are a few points I don't fully understand:
    1. What is the magic behind the scalability?
    As you mentioned, front-end servers act as connection concentrators and forward requests to the backend servers, which means that, as a net result, the back-end servers still handle as many requests as in the "monolithic" server pattern.
    Does the magic lie in the front-end servers collecting requests from hundreds of clients (connections) and forwarding them sequentially over a small number of connections to the back-end servers (far fewer connections than there are incoming client connections, and perhaps already cached)? Consequently, the number of connections directed to the back-end servers is substantially reduced, and the number of I/O calls is reduced as well?

    2. What are the preconditions for the scalability?
    From point 1, it seems that the "front-end/back-end server" pattern only improves the time spent on "messages sent from client to server". That is, scalability is only guaranteed when a substantial amount of time is spent on message communication; when most of the time is spent processing the concrete logic of the application, does the scalability decay substantially?


    Best regards!
  • benoit (Rennes, France)
    There's no magic. This doesn't reduce the number of requests or make your application less CPU intensive overall. This simply allows you to spread the load over several processes which might be deployed on different machines.

    If you have a monolithic server and it turns out that the machine can't handle the load anymore, your only option is to replace the machine with one that will hopefully be able to handle the load for some time. But if you design your application to distribute the load over several processes, you can easily deploy more machines to handle the increasing load -- this gives you more flexibility and lets you scale more easily.

    Of course, there might still be back-end services which can't easily be distributed, this is for example often the case with database servers. However, the good thing is that with a distributed application design you can deploy your database server on a dedicated machine (which doesn't have to support the load of other services of your application...).

    Cheers,
    Benoit.