Archived

This forum has been archived. Please start a new discussion on GitHub.

Resilience

I am evaluating Ice for use in a distributed application, and in particular how it would cope with a denial-of-service type situation where many simultaneous client requests are made.

What happens if there is a flood of client requests to an ICE server? I assume that at some point connections would fail. At what limit would that happen? What error would the client see?

In an ideal world the Ice client would implement a thread pool to use for client calls, thus limiting the number of concurrent requests. This would be analogous to the server threads Ice provides for asynchronous method dispatch. Does Ice do that, or would I have to implement it myself in my application?

My alternative is to use some kind of messaging middleware (e.g. Spread), which would simply queue all the requests. However, this throws all the marshalling work back onto my application.

Comments welcome!

Comments

  • Re: Resilience
    Originally posted by fitzharrys
    What happens if there is a flood of client requests to an ICE server? I assume that at some point connections would fail. At what limit would that happen? What error would the client see?

    It depends on the exact circumstances. Basically, the server-side run time monitors the sockets on which it listens for incoming connections (as well as the sockets for which it has established connections) via select(). When a connection request comes in, the select thread calls accept() and adds the connection to the set of monitored connections; when select() indicates that a connection has pending data, the server reads the request off the wire and processes it. (The reading and processing are done by the same thread (leader-follower pattern).) The server maintains a pool of threads (configurable in size) to handle concurrent requests. The maximum number of concurrent requests in the server is therefore limited by the number of threads in the pool.
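
    The dispatch model described here can be sketched with plain Python sockets (this is only an illustration of the pattern, not Ice's actual implementation): one loop monitors the listening socket and the established connections with select(), and hands each request to a fixed-size thread pool, so at most pool_size requests are processed concurrently.

```python
import select
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

def handle(conn, data):
    # Stand-in for real request processing: echo the payload upper-cased.
    conn.sendall(data.upper())

def run_server(port_out, stop, pool_size=4):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(64)
    port_out.append(listener.getsockname()[1])  # publish the chosen port
    pool = ThreadPoolExecutor(max_workers=pool_size)  # bounded concurrency
    monitored = [listener]
    while not stop.is_set():
        readable, _, _ = select.select(monitored, [], [], 0.1)
        for sock in readable:
            if sock is listener:
                conn, _ = sock.accept()      # incoming connection request
                monitored.append(conn)       # add it to the select() set
            else:
                data = sock.recv(4096)       # connection has pending data
                if data:
                    pool.submit(handle, sock, data)
                else:
                    monitored.remove(sock)   # peer closed the connection
                    sock.close()
    for sock in monitored:
        sock.close()
    pool.shutdown(wait=False)
```

    With pool_size workers, a flood of requests does not spawn unbounded threads; excess requests simply wait their turn, which is the limiting behaviour described above.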

    If lots of clients are trying to reach the server in parallel, some of them will typically get a connect timeout exception: the server's kernel may not be able to queue all the incoming connection requests.

    On the other hand, a client may have an already established connection to the server. In that case, what happens can vary. The client may write to its local socket and the local (or remote) TCP/IP stack may have sufficient memory to buffer the data, in which case the call will simply be slow -- the client blocks until the server finally gets around to processing and replying to the request. On the other hand, the server may be busy and, moreover, the client could keep writing data to the connection (for example, using async calls). In that case, the client-side TCP/IP buffers will eventually fill up. That causes the client side to be suspended until room is available again to buffer the data, that is, the client will block.
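    The "buffers fill up" case is easy to demonstrate with a plain socket pair (again, just an illustration, not Ice code). Here the sender is made non-blocking so the kernel reports, rather than blocks on, the moment the send buffer is full; with a blocking socket, this is exactly the point at which the client would be suspended.

```python
import socket

def bytes_until_send_would_block():
    """Return how many bytes fit in the kernel buffers before a sender
    whose peer never reads would be forced to block."""
    sender, receiver = socket.socketpair()
    sender.setblocking(False)
    total = 0
    chunk = b"x" * 65536
    try:
        while True:
            total += sender.send(chunk)  # fills the kernel send buffer
    except BlockingIOError:
        pass  # buffer full: a blocking socket would now suspend the caller
    finally:
        sender.close()
        receiver.close()
    return total
```
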
    In an ideal world the ICE client would implement a thread-pool to use for client calls, thus limiting the number of concurrent requests.

    That really wouldn't help. If you are concerned about overloading the server, whatever we do on the client side won't help. For one, we cannot control the number of clients that may be trying to reach the server. And, second (and more importantly), the clients may not be using the Ice run time to reach the server, but could be written to use sockets directly. So, any protection has to happen on the server side, not the client side.

    Basically, an Ice server is as robust against high traffic rates as TCP/IP: as the load on the server increases, requests take longer and longer to process (from each client's perspective) until, at some point, clients see connection timeouts.
    Analogous to the server threads ICE provides for asynchronous method dispatch. Does ICE do that, or would I have to implement it myself in my application?

    I'm not sure I understand the question. Ice provides both synchronous and asynchronous requests on the client side, and it provides synchronous and asynchronous dispatch on the server side. (AMD for the server side allows you to have more requests in progress than there are threads in the server. In essence, AMD permits the server to queue incoming requests and then process them later, instead of dedicating a separate thread to each request for the duration of that request.)
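    The AMD idea can be sketched in plain Python (this is a rough analogy, not Ice's AMD API): requests are queued on arrival and completed later via a callback, so far more requests can be in progress than there are dispatch threads.

```python
import queue
import threading

class AmdLikeDispatcher:
    """Queue incoming requests and complete them later from a small
    worker pool, instead of tying up one thread per in-progress request."""

    def __init__(self, num_threads=4):
        self._requests = queue.Queue()
        for _ in range(num_threads):
            threading.Thread(target=self._worker, daemon=True).start()

    def dispatch(self, payload, on_done):
        # Returns immediately; no thread is dedicated to this request
        # for its lifetime -- on_done is invoked when the work completes.
        self._requests.put((payload, on_done))

    def wait_idle(self):
        self._requests.join()

    def _worker(self):
        while True:
            payload, on_done = self._requests.get()
            on_done(payload * 2)  # stand-in for the real work + reply
            self._requests.task_done()
```

    With, say, 2 worker threads, 100 dispatched requests can all be "in progress" at once; they are simply queued and drained as threads become free.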
    My alternative is to use some kind of messaging middleware (eg spread) which would simply queue all the requests. However this throws all the work on marshalling back on to my application.

    I don't know enough about your application to comment. But, generally, unless you have a requirement to store messages persistently, so the client or server can shut down while messages are still queued, you won't need messaging middleware, and Ice will do the job for you more easily and with less programming effort.

    Cheers,

    Michi.
  • Thanks Michi.

    I've just been playing around with connections between multiple clients and a single server, and using netstat at the server to see what connections there are.

    It seems as if there are initially two connections, one TCP and one UDP, listening.

    When a client connects and creates a remote object proxy, two TCP connections appear. If the client does nothing for a while, they disappear again. If the client process makes further remote object proxies, no new connections are made; it seems to reuse the existing ones. However, if new processes are created, and objects within them, new connections appear.

    Is that what you'd expect? That is, that multiple objects within the same client process share the same TCP connections, but new processes require new connections?

    John
  • Originally posted by fitzharrys
    I've just been playing around with connections between multiple clients and a single server, and using netstat at the server to see what connections there are.

    It seems as if there are initially two connections, one TCP and one UDP, listening.

    If you have configured the adapter with a UDP endpoint as well as a TCP endpoint then, yes, you will see two ports being used for listening. Otherwise (and more typically), you will configure just a TCP port, so only a single port is listened on.
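
    For illustration, an adapter configured with both endpoint types might look like this (the adapter name and addresses here are made up; the colon-separated endpoint syntax is Ice's, to the best of my recollection). With both endpoints, netstat shows one TCP and one UDP listening port; with only the first, a single TCP port is listened on.

```
# Hypothetical adapter configuration with a TCP and a UDP endpoint:
MyAdapter.Endpoints=tcp -h 127.0.0.1 -p 10000:udp -h 127.0.0.1 -p 10000
```
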

    When a client connects and creates a remote object proxy, two TCP connections appear.

    Two? I would expect only one. (Do you have an activated object adapter in the client? If so, you will see that adapter listen on a port.) At any rate, when a client first issues a request to an object, the client opens a connection to the server. Thereafter, that same connection is used to dispatch requests.
    If the client does nothing for a while, they disappear again. If the client process makes further remote object proxies, no new connections are made; it seems to reuse the existing ones. However, if new processes are created, and objects within them, new connections appear.

    Yes. Ice reaps idle connections after a while. If the client accesses multiple objects in the same server, it sends all requests on the already-open single connection. Obviously, if new processes (clients) are started, each of those clients has to open its own connection because connections are a per-process resource.
    Is that what you'd expect? ie that multiple objects within the same client process share the same TCP connections, but new processes require new connections?

    Yes. Ice multiplexes requests onto as few connections as possible.
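
    The behaviour observed with netstat can be sketched as a per-process connection cache (plain sockets, not Ice internals): requests to the same endpoint share one TCP connection, while each new process necessarily opens its own.

```python
import socket
import threading

class ConnectionCache:
    """Hand out one shared TCP connection per (host, port) endpoint,
    so all proxies in this process reuse the same connection."""

    def __init__(self):
        self._lock = threading.Lock()
        self._connections = {}

    def get(self, host, port):
        with self._lock:
            key = (host, port)
            if key not in self._connections:
                # First request for this endpoint: open the one connection.
                self._connections[key] = socket.create_connection(key)
            return self._connections[key]
```

    A second ConnectionCache instance (standing in for a second client process) has its own dictionary, so it must open a new connection to the same endpoint -- connections are a per-process resource.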

    Cheers,

    Michi.