Ice is huge now, though the network layer still lacks robustness

Is it possible to bypass the network routines provided by Ice and use my own socket server framework instead? I've read the source with that question in mind, but the result was quite frustrating. Socket operations are driven by a thread pool, the thread pool itself is not well optimized under Win32, and the code is hard to read due to VERY::VERY::LONG::FUNCTION::NAMES. So the challenge seems big for any Ice developer looking for good socket performance.

I haven't run any benchmarks against Ice to check its socket performance, so what I say might be inaccurate, but reading the code suggests some basic conclusions: I wouldn't expect it to handle more than 5K connections under Win32, and things might be similar on Unix, since epoll doesn't appear to be implemented either. Before I tried to do anything with Ice's network modules, I thought it might be ZeroC's intention to ship a basic socket implementation as an example and encourage users to do their own homework, but after reading Network.cpp, the TCPxxx.cpp files, and so on, I concluded I was wrong.

It might be harsh for a new user of Ice to comment on such a big topic after reading the code and manual for just a day, and I might not say everything correctly, but judging from my previous experience with network programming, I think Ice's network module is quite weak: at the very least, no modern socket model is being used.

Ice might not be the only project suffering from this problem: in the ambition of covering every platform, it ends up shipping basic socket wrappers that, although workable, would struggle in any 10K-connection situation (which is quite common in today's internet environment). That's the same reason I wrote my own IOCP server that handles 65535+ connections and never use Qt's socket wrapper. Things are harder with Ice because its network layer is hard to bypass.

I've actually searched other users' comments throughout the forum, and the expectation of a modern socket model is high, so I may not be the only one :). I also noticed a suggestion about using Glacier or something similar to act as a proxy. That is feasible in principle, but since Glacier2 itself uses Network.cpp, rewriting everything is the only option, and there are all kinds of details still unknown to me.

Ice is a nice engine, the Slice compiler is handy, and the 1800+ page documentation suggests a promising company. I only wish Ice offered a clean framework that wasn't so huge; when a product tries to include everything, it becomes hard to keep everything state of the art. I actually expected Ice to end when Mutable Realms went down in 2004, but Ice is still alive and getting better, so I expect more now. If someone could convince me, I might eventually become a licensee.

Comments

  • marc (Florida)
    I think you have to be a little bit more concrete about your criticism of our use of sockets, and explain a bit better what you mean by "modern socket model".

    As for a comparison of Ice vs. raw sockets, I recommend reading Matthew Newhook's excellent article "Optimizing Performance of File Transfers" in issue 20 of our newsletter Connections.
  • marc (Florida)
    BTW, you are wrong about Ice not using epoll; it is used in Ice version 3.2. You might be looking at an older version of Ice.
  • Hi Marc,

    My last post was more a newbie's comment than offensive criticism :cool:, but since Ice wraps both the distributed-object layer and the network communication layer, it is harder for an end user to swap one out. A side-by-side example: ACE itself is a network communication layer, while the distributed-object logic is done by TAO. In my last post I described Ice's network module as LESS MODERN; by that I mean I could not find a socket model that could handle more than 10K concurrent users. This comment is reasonable because the expectation of how many concurrent users a single server box can handle has been rewritten remarkably in recent years: a well-written IOCP server running under Windows 2003 can now handle almost 100K users before non-paged pool resources run out.

    To put it simply, the rule that separates a modern socket model from an old-fashioned one is: "don't repeatedly check whether there is a new incoming socket event; that's a pure waste of horsepower" (a minimal epoll sketch of what I mean follows at the end of this post). ACE's reactor model is well known as fairly modern, although I think it still shows less-than-optimal performance on all the platforms it supports; it is really showing its age.

    The choice of socket model isn't trivial: a bad one will drop 99% of connections, use up 100% of the CPU, and leave an unresponsive server full of threads, while a good one handles tens of thousands of busy connections with modest CPU usage and 4-5 threads.

    How many concurrent users matter really depends on the sort of application we are talking about, but since there are already plenty of good examples out there serving 50K+ connections, compromising for something less robust is quite hard. That's why I tried to isolate Ice's good framework from its "less-capable" network layer.
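
    To make the "wait for events, don't poll for them" point concrete, here is a minimal sketch of an epoll-based accept/read loop. It is Linux-only, the listen_fd and buffer size are illustrative, error handling is trimmed, and it is of course not Ice's code:

    ```cpp
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void event_loop(int listen_fd) {
        int ep = epoll_create1(0);

        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        epoll_event ready[64];
        for (;;) {
            // Blocks until the kernel reports activity: no busy-checking, 0% CPU while idle.
            int n = epoll_wait(ep, ready, 64, -1);
            for (int i = 0; i < n; ++i) {
                int fd = ready[i].data.fd;
                if (fd == listen_fd) {
                    // New connection: register it and go back to waiting.
                    int client = accept(listen_fd, nullptr, nullptr);
                    epoll_event cev{};
                    cev.events = EPOLLIN;
                    cev.data.fd = client;
                    epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                } else {
                    // Data (or EOF) on an existing connection.
                    char buf[4096];
                    ssize_t len = read(fd, buf, sizeof(buf));
                    if (len <= 0) close(fd);
                    // else: hand buf/len to the protocol layer
                }
            }
        }
    }
    ```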
  • marc (Florida)
    I'm sorry, but you still have to be more concrete in your criticism. Why do you think that Ice wastes CPU cycles to check that there is a new network event? It does not. If there are no network events, Ice uses exactly 0% CPU. In other words, Ice does not do any form of polling, nor does it have any kind of busy-loops.

    Again, please point me to the concrete code in question, and explain your alternatives. Otherwise I cannot give you any meaningful response. General criticism like "less modern" or "less capable" without providing clear technical details about what exactly you consider "more modern" or "more capable" is not very helpful.

    As for reactive concurrency models (with a reactor, as ACE has one), I've worked with such models extensively for single-threaded middleware (including developing a popular single-threaded CORBA ORB). I fail to see why you think what Ice uses is "less modern" than a reactor. Quite to the contrary: a reactor is typically used in "less modern" single threaded systems, while thread pools are state-of-the-art for modern, multi-threaded systems.
  • It's hard to point to a single line of code in Ice and replace it with my proposed code, because things are more complicated in the world of sockets.

    As far as I can tell, Ice is built on the COMMON SUBSET of the Unix and Win32 worlds. The two do intersect in socket programming, but that intersection is the bad old days of Berkeley-style BSD sockets that Winsock used to be based on, and Winsock2 changed things a lot. In Ice it looks exactly as if someone were writing a server targeting the Win95 era: no overlapped I/O, no I/O completion ports, no optimized use of WSABUF buffers (a minimal IOCP sketch follows at the end of this post). Yes, a modern Winsock2 socket engine isn't trivial to write, more than one man-year to do well, and it certainly takes more than converting a few send() calls to WSASend() to bring a magical performance boost.

    I've been wondering whether my comments are biased, or even paranoid, since Ice might simply be targeting Unix as the server platform and leaving the Win32 code as it is to meet the basic needs of client programming. If that turned out to be true, I'd shut up :)
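
    To show what I mean by overlapped I/O plus a completion port, here is a minimal sketch of the pattern. It is Windows-only, assumes an already-connected SOCKET, trims all error handling, and is only an illustration of the technique, not a proposal for Ice's actual code:

    ```cpp
    #include <winsock2.h>
    #include <windows.h>
    // link against ws2_32.lib; WSAStartup() is assumed to have been called already

    struct PerIoData {
        WSAOVERLAPPED ov;     // must be first so the completed OVERLAPPED* maps back to us
        WSABUF        wsabuf;
        char          buffer[4096];
    };

    void iocp_sketch(SOCKET sock) {
        // One completion port can service thousands of sockets with a handful of threads.
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
        CreateIoCompletionPort(reinterpret_cast<HANDLE>(sock), iocp,
                               static_cast<ULONG_PTR>(sock), 0);

        // Post an asynchronous receive; the call returns immediately.
        PerIoData* io = new PerIoData{};
        io->wsabuf.buf = io->buffer;
        io->wsabuf.len = sizeof(io->buffer);
        DWORD flags = 0;
        WSARecv(sock, &io->wsabuf, 1, nullptr, &flags, &io->ov, nullptr);

        // Worker-thread loop: sleeps in the kernel until some overlapped operation completes.
        for (;;) {
            DWORD bytes = 0;
            ULONG_PTR key = 0;
            LPOVERLAPPED ov = nullptr;
            if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE) || ov == nullptr)
                continue;
            PerIoData* done = reinterpret_cast<PerIoData*>(ov);
            // 'bytes' bytes of data are now in done->buffer for the socket stored in 'key';
            // process them, then re-post the receive to keep the pipeline full.
            DWORD f = 0;
            WSARecv(static_cast<SOCKET>(key), &done->wsabuf, 1, nullptr, &f, &done->ov, nullptr);
        }
    }
    ```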
  • marc (Florida)
    Ice uses different mechanisms on Windows and Unix, where appropriate. For example, on Unix we use epoll(), while on Windows we use select(), but with special knowledge about how select() works on Windows to increase performance. We do use the same API calls where special Windows or Unix APIs do not provide any benefit. Just because Windows has a "completion ports" mechanism doesn't mean that it is useful in the context of Ice.

    I'm sorry, but you are simply making uninformed claims. First you make generic claims like "less modern", then you claim Ice would waste CPU cycles waiting for network events, now you claim that we would not make use of operating-system specific APIs where appropriate. You even try to make the claim that Ice wouldn't really target WIN32 as a server platform! This is complete nonsense, given that probably around 50% of our users use Ice on Windows!

    I'm not sure what your next claim will be, but I will not continue to respond to such posts anymore. This is all noise and no substance.
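
    To illustrate one well-known Windows-specific property of select() (only an illustration of the kind of platform detail involved, not a description of Ice's internals): on Winsock, fd_set is a counted array of SOCKET handles rather than a fixed bitmask, so FD_SETSIZE can be raised before including winsock2.h to wait on far more than the default 64 sockets.

    ```cpp
    #define FD_SETSIZE 2048        // must be defined before <winsock2.h>
    #include <winsock2.h>

    // Blocks until at least one of the given sockets is readable; no timeout, so no polling.
    int wait_for_readable(const SOCKET* socks, int count) {
        fd_set readers;
        FD_ZERO(&readers);
        for (int i = 0; i < count; ++i)
            FD_SET(socks[i], &readers);
        return select(0, &readers, nullptr, nullptr, nullptr);  // first argument is ignored on Windows
    }
    ```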
  • Marc, that's simply ignorant...

    Maybe you think Ice's socket performance is good enough on both Unix and Win32 and that I'm just making noise here. That's fine, if you can give evidence of how many concurrent users Ice can actually serve on Windows 2003... A tricky select() is neither an art nor a best practice... Well, I will say no more; you don't like criticism and you ignore any comment about improvement...

    Before I leave this post unchallenged... I recommend you read "Network Programming for Microsoft Windows, 2nd Edition" by Anthony Jones, or some other book focused on scalable Winsock programming. I might not be productive enough myself, but I still hope to use Ice to serve more users...
  • Linux is my current OS of choice, so I'm very satisfied with Ice 3.2 (aside from a few bugs already reported) :)

    I guess English is not Kasulty's native language; he does come across as vague, arrogant, and annoying. I think he's trying to say that for applications that require a large number of concurrent connections (online chat or games, for example), select() doesn't scale well (it scans the fd or I/O-object list linearly), which is probably why 3.2 moved to epoll on Linux 2.6.

    As a fan of the Boost libraries (boost.org) in general, I recommend that everyone take a look at Boost.Asio (asio.sf.net) as a potential network layer for Ice. It's heavily scrutinized by many domain experts and has the potential to become the C++ standard network library. At this stage it already uses epoll on Linux 2.6, IOCP on WinNT/2k/XP+, and kqueue on Mac OS X, all unified behind a consistent, well-thought-out interface (a small echo-server sketch follows at the end of this post). You also get IPv6, Cygwin, and QNX support (as requested by many) almost for free.

    Boost license is compatible with commercial licenses as well.

    I really hope that Ice flourishes and that we'll have major web browsers with builtin support for Ice and that the world wide web becomes a better place :)
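
    To give a feel for the interface, here is a minimal sketch of an asynchronous echo server with Asio (sketched with C++11 lambdas for brevity; the port number is just an example):

    ```cpp
    #include <boost/asio.hpp>
    #include <memory>

    using boost::asio::ip::tcp;

    // One connection: read some bytes, echo them back, repeat.
    class Session : public std::enable_shared_from_this<Session> {
    public:
        explicit Session(boost::asio::io_service& io) : socket_(io) {}
        tcp::socket& socket() { return socket_; }
        void start() { read(); }
    private:
        void read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(data_),
                [self](const boost::system::error_code& ec, std::size_t n) {
                    if (!ec) self->write(n);
                });
        }
        void write(std::size_t n) {
            auto self = shared_from_this();
            boost::asio::async_write(socket_, boost::asio::buffer(data_, n),
                [self](const boost::system::error_code& ec, std::size_t) {
                    if (!ec) self->read();
                });
        }
        tcp::socket socket_;
        char data_[4096];
    };

    void accept_one(boost::asio::io_service& io, tcp::acceptor& acceptor) {
        auto session = std::make_shared<Session>(io);
        acceptor.async_accept(session->socket(),
            [&io, &acceptor, session](const boost::system::error_code& ec) {
                if (!ec) session->start();
                accept_one(io, acceptor);        // keep accepting
            });
    }

    int main() {
        boost::asio::io_service io;              // epoll / kqueue / IOCP underneath
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9999));  // port is illustrative
        accept_one(io, acceptor);
        io.run();                                // one thread drives every connection
    }
    ```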
  • Lukelu's opinion came in time to clean up this mess, and I'd like to add something as a supplement:

    As a matter of fact, I did not expect Marc to react so defensively to a technical discussion. It is true that most of my opinions are biased towards the Win32 side of Ice; since my knowledge of Unix systems is limited, I can only comment on Ice's performance within a Win32 environment, which is at least less than perfect as far as we can tell from the source code. But just as Lukelu said, choosing Linux for Ice would be a smart choice, since Ice introduces epoll in version 3.2, and my suspicion about whether Win32 is an ideal platform to deploy Ice came from that reason too. The performance of Ice on the various platforms it supports is also not laid out very clearly in the user manual: the performance figures on ZeroC's website target the per-connection case rather than the concurrent case, which led to my vague comments about whether Win32 is a good choice of server platform for Ice. It is as if the linguistic context was lost completely in the exchanges above, and my questioning attitude was treated as purely annoying lashing out.

    Criticism never comes across pleasantly, and since I am obviously not a native speaker of an alphabetic language, that made things worse. I apologize for the offence it caused. My posts carried less cheer and more frustration, not because Ice is bad, but simply because I failed to integrate Ice into my own network engine, which I believe to be more efficient on Win32 under high-traffic circumstances; so my previous comments weren't carefully planned, and they lacked rigor and civility. I apologize to all ZeroC staff for such a low-quality post, and if it is appropriate, deleting this thread entirely would do me a huge favor.

    Also, I understand that I am certainly not the only person who has questioned the lack of IOCP, so tolerance for such repeated requests has dropped to the ground over the years, and my annoying manner provoked more temper than a newbie's innocent query otherwise would.

    Again, as I mentioned in my first reply to Marc's defensive response, all my opinions were more a newbie's comments than offensive criticism. I hope these words clean up the mess a little; if not, it is better to leave this post alone. Nonetheless I still hope for better Win32 support (if possible) in the future, since it would be a blessing to have the convenience of VS2005 in hand together with the power the Ice framework gives us.

    To wrap up, my apologies go to ZeroC and to everyone who was offended. I DIDN'T MEAN TO. And no more noise...
  • I am still evaluating whether or not to use Ice, but I am leaning heavily towards it. My target platforms are OS X and Linux. I took a look at Boost.Asio plus custom message structures as an alternative, but Ice provides much more out of the box.

    That said, 90% of the time a programmer cannot guess where the bottleneck in an application is. I think you should put a benchmark where your mouth is and test how many concurrent connections Ice can handle on the same hardware under Linux and Windows (a rough sketch of such a test follows at the end of this post). That would show which platform is more efficient, and allow other people to run the test as well.

    After all, you can nitpick others' code to death, second-guessing their memory model, class organization, comment style, function naming style, and so on forever, and it gets you nowhere. Performance and reliability are what matter, and to argue about those you need more than talk: you need benchmarks.
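
    As a starting point, here is a rough sketch of what the client side of such a test could look like: open as many concurrent TCP connections as possible to a server and report how many were held open. The host, port, and target count are placeholders, and measuring per-connection request latency would be the obvious next step.

    ```cpp
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <vector>

    int main() {
        const char* host   = "127.0.0.1";   // placeholder: address of the server under test
        const int   port   = 10000;         // placeholder port
        const int   target = 20000;         // connections to attempt

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, host, &addr.sin_addr);

        std::vector<int> fds;
        for (int i = 0; i < target; ++i) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0 || connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
                if (fd >= 0) close(fd);
                break;                      // local or remote limit reached
            }
            fds.push_back(fd);
        }
        std::printf("held %zu concurrent connections\n", fds.size());
        for (int fd : fds) close(fd);
    }
    ```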