Archived

This forum has been archived. Please start a new discussion on GitHub.

Ice concurrency load handling...

Hi all,

I'm sorry if this post sounds basic, but I'm a newbie, so please bear with me.

I've done some C++ server and client side tests with Ice and it works great, and I'm currently building an Ice Java server that I'll start testing as well. The thing is, I've developed servers before, and for handling large loads we've had to use connection concurrency patterns like thread-per-connection, async I/O, the reactor pattern, the proactor pattern, etc. So how do I handle large numbers of concurrent calls with Ice?

My app only serves atomic operations; that is, it doesn't need any special thread synchronization, resources shared between threads, or anything complex like that.

I've searched the forum and I've read that Ice uses thread pools, but I haven't found any details on these two points:

1.- What is the actual request capacity of an Ice server? I know this depends on hardware, OS, etc., but when programming a server, do I need to handle my own thread pool, async I/O, etc.?

2.- There is so much documentation about Ice that my head hurts; I haven't found practical guidance on handling large user loads.

Do I need to build my own load-handling strategy, or does Ice automatically handle a thread pool so I don't have to worry about load handling?

How do I actually build, let's say, an Ice Java server that can handle lots of user requests? To give a more concrete example: in Java you have containers that handle the load, requests, threads, etc. for you, like GlassFish for EJBs. Is there a way with Ice to develop only the actual business logic and let Ice handle all the low-level thread pooling, connection handling, etc. for me?

I'm starting to check out IceBox and IceGrid, but there is so much in the documentation that I'm confused about what to actually use.

Thanks!

Comments

  • xdm (La Coruña, Spain)
    Hi Raul,
    1.- What is the actual request capacity of an Ice server? I know this depends on hardware, OS, etc., but when programming a server, do I need to handle my own thread pool, async I/O, etc.?

    See our performance white paper.

    2.- There is so much documentation about Ice that my head hurts; I haven't found practical guidance on handling large user loads.

    Do I need to build my own load-handling strategy, or does Ice automatically handle a thread pool so I don't have to worry about load handling?

    A simple Ice server will be able to handle large user loads. Ice manages the thread pool for you, but you can configure it to better suit your needs; see 32.10 The Ice Threading Model.

    If you need to scale beyond a single server, you can use IceGrid.
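    If you do tune it, that's just configuration, not code. A sketch of the relevant Ice thread pool properties (the sizes below are illustrative, not recommendations):

```
# Start the server-side dispatch pool with 4 threads
Ice.ThreadPool.Server.Size=4
# Allow the pool to grow under load, up to 10 threads
Ice.ThreadPool.Server.SizeMax=10
# Log a warning when 8 threads are busy at once
Ice.ThreadPool.Server.SizeWarn=8
```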
  • A simple Ice server will be able to handle large user loads. Ice manages the thread pool for you

    So by just writing my Slice definitions, generating the code, implementing the servants, and adding them to an adapter, that's it? And if the load gets too big, I just configure the thread pool size to fit my needs? Cool!

    That's exactly what I wanted to read. I've read white papers, the Ice book, and a lot of posts, but it's so much information, with so many ways of implementing the same thing, that I just didn't know how to work with Ice in practice.

    I guess I can use IceBox and IceGrid to distribute the services, say in Java for ease of cross-platform compiling and deployment, without worrying about handling the connections myself, and build clients in C++ to consume those services.
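    Just to check my understanding, the whole server would then look roughly like this? (A sketch only, assuming a Slice interface Hello with its generated skeleton and my servant class HelloI; the names and the endpoint are made up.)

```
public class Server {
    public static void main(String[] args) {
        Ice.Communicator ic = null;
        try {
            // initialize() picks up Ice.* properties (e.g. thread pool size)
            // from the command line or a config file -- no threading code here
            ic = Ice.Util.initialize(args);
            Ice.ObjectAdapter adapter =
                ic.createObjectAdapterWithEndpoints("HelloAdapter", "default -p 10000");
            adapter.add(new HelloI(), ic.stringToIdentity("hello")); // HelloI: my servant
            adapter.activate();
            ic.waitForShutdown(); // Ice dispatches requests on its own thread pool
        } finally {
            if (ic != null) {
                ic.destroy();
            }
        }
    }
}
```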

    Thanks a lot, that clears things up a bit better.
  • benoit (Rennes, France)
    Hi,

    Yes, you don't have much to do besides possibly configuring the thread pool size.

    To figure out the right thread pool size, you need to determine whether your servant implementations don't block and are always "runnable" (they mostly do computation), or whether, on the contrary, they often block (waiting for resources, waiting to acquire mutexes, performing blocking disk I/O, making blocking Ice requests, etc.).

    If your servants don't perform any blocking operations, it's generally best to configure a number of threads equal to the number of CPU cores on your machine (more threads won't help much). If your servants do perform blocking operations, you will need more threads if you want operations to be dispatched while other dispatched operations are blocked (and therefore occupying Ice server thread pool threads).
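    To make this concrete, here is a small plain-Java sketch (nothing Ice-specific; it only illustrates the sizing rule of thumb, and the 4x factor for blocking workloads is an arbitrary illustration, not a recommendation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolSizing {
    // Rule of thumb: CPU-bound work -> one thread per core;
    // blocking work -> oversubscribe (4x here is just for illustration).
    static int poolSize(boolean cpuBound) {
        int cores = Runtime.getRuntime().availableProcessors();
        return cpuBound ? cores : cores * 4;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize(true));
        // A stand-in for a servant dispatch: pure computation, never blocks
        Future<Integer> result = pool.submit(() -> 6 * 7);
        System.out.println("result=" + result.get()); // prints result=42
        pool.shutdown();
    }
}
```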

    Cheers,
    Benoit.