
Best practices for implementing asynchronous bi-directional communications

Spiderkeys (Charles Cross) · Member · Organization: Langley NASA Research Center · Project: Autonomous entity oversight & scheduling
I am currently working on an application for autonomous entity oversight and goal scheduling, which can be represented at its simplest level by the model below:

Attachment not found.

I am rather new to Ice, but I have made my way through most of the training labs and read a good bit of the manual. I've been able to implement a skeleton of the model above by making the clients also behave as servers (setting up object adapters on them, so the main server can send data to them and invoke commands on them), but I am not sure about the best way to flesh out the rest of the functionality (the networking components using Ice, not the actual logic to be employed).

What I would like to do is this:

- Give the controller five threads:
  1. Dedicated solely to performing computations and deciding what commands should be sent out to the entities, maintaining the state machine and the list of connected entities (logic thread)
  2. Dedicated solely to sending data to the UI application asynchronously
  3. Dedicated solely to receiving data from the UI application asynchronously
  4. Dedicated solely to sending data to all entities asynchronously
  5. Dedicated solely to receiving data from all entities asynchronously
- Give each entity three threads (sending, receiving, and logic)
- Store received messages in queues for the logic threads to consume
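The receive-thread/queue/logic-thread layout described above can be sketched with plain Python threads and queues (the Ice pieces are omitted; the message contents and thread bodies are purely illustrative):

```python
import queue
import threading

# Messages received from entities are enqueued by the receive
# thread; the logic thread is the only consumer.
entity_inbox = queue.Queue()

def receive_thread(messages):
    # Stand-in for a thread that receives data asynchronously;
    # here it just enqueues a fixed list of messages.
    for msg in messages:
        entity_inbox.put(msg)

def logic_thread(n, results):
    # Consumes exactly n messages, then stops.
    for _ in range(n):
        msg = entity_inbox.get()
        results.append(f"processed {msg}")
        entity_inbox.task_done()

results = []
rx = threading.Thread(target=receive_thread, args=(["ping", "pose"],))
lx = threading.Thread(target=logic_thread, args=(2, results))
rx.start(); lx.start()
rx.join(); lx.join()
print(results)  # ['processed ping', 'processed pose']
```

Because the queue is the only shared state, the logic thread never blocks on the network, and the receive threads never touch application state directly.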

My first question is: is setting up the clients to also be servers, as I have done, the proper way of achieving what I want? It seems to work, but I don't know if that is the best way to utilize Ice's capabilities. It seemed correct to me, as I will pretty much never want to block a thread that is sending data to wait for a remote method invocation (though I do want the acknowledgement via TCP that the transaction was successful). Is it kosher to allow the server to asynchronously contact clients in a separate thread by opening a proxy to the client's MessageQueue and adding a command to it?

I imagine that this sort of task is common, such as in the case of a chat program:

- Client A sends a message to the server
- The server sends the message to Clients A, B, and C

or for a more physical case with a controller and robot

- Client A sends input data to the server from a joystick
- The server forwards the input commands to Client B (a robot or something)
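The chat-style fan-out in both examples can be modeled with one outgoing queue per connected client; this is a minimal sketch (in a real Ice server the loop body would be an asynchronous invocation on each client's callback proxy, not a queue put):

```python
import queue

class Relay:
    """Sketch of the server in the chat example: each connected
    client gets its own outgoing queue, and a message from one
    client is fanned out to every client."""
    def __init__(self):
        self.outboxes = {}

    def connect(self, name):
        self.outboxes[name] = queue.Queue()

    def send(self, sender, text):
        # Fan the message out to all connected clients,
        # including the sender.
        for outbox in self.outboxes.values():
            outbox.put((sender, text))

relay = Relay()
for client in ("A", "B", "C"):
    relay.connect(client)

relay.send("A", "hello")
print(relay.outboxes["B"].get())  # ('A', 'hello')
```

Per-client queues keep one slow consumer from delaying delivery to the others, which mirrors the non-blocking goal stated above.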

Any guidance on the best way to structure relationships such as Client->Server->Clients and Server->Clients in Ice would be much appreciated.

My next question is: from what I have read of the threading model in Ice, am I correct in my understanding that the object adapter controls the number of threads available for incoming and outgoing messages (allocated from its thread pool)? How can I ensure that my threads handle only what I want them to handle? For my application I want one thread to specifically handle sending to the UI, one to listen for UI messages, one to send to entities, and one to receive from entities. Or perhaps this isn't something I should really worry about, and I should just leave it to Ice to figure out? I just want to make sure that if the entity transmission threads get bogged down for some reason, that traffic doesn't get shifted over to another thread automatically, which might interfere with UI communications being handled reliably.

Thanks for taking the time to read this and I definitely appreciate any insight from more experienced users on any architecture techniques that might help me achieve my goals.

Comments

  • Spiderkeys (Charles Cross) · Member · Organization: Langley NASA Research Center · Project: Autonomous entity oversight & scheduling
    Working at this a bit more, I've come up with the following model for a mock interface between the controller and a client that displays a map. In this example, the logic thread on the controller simply generates a random (x, y) coordinate and sends it to the map client, which plots it. The client also has a separate thread which sends a ping every 10 seconds to say it is alive. If the controller doesn't pick up a ping message from the map client within 100 seconds, it prints a message.

    Attachment not found.

    Is this a sound method for implementing non-blocking, bi-directional message passing in Ice? I didn't extend it to directly model the full complexity of the project, as I wrote up in my last post, but if this is correct, it should work to satisfy the entity<->server interface as well. Any advice, tips, tricks, etc. are greatly appreciated.
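The ping/watchdog scheme described in the post above (ping every 10 seconds, declare the peer lost after 100 seconds) boils down to tracking the time of the last ping; a minimal sketch with the timeout scaled down for illustration:

```python
import time

class Watchdog:
    """Tracks the last ping time and reports whether the peer is
    considered alive. The timeout is scaled down to fractions of
    a second for illustration; the post uses 100 seconds."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_ping = time.monotonic()

    def ping(self):
        # Called whenever a ping message arrives from the peer.
        self.last_ping = time.monotonic()

    def alive(self):
        return time.monotonic() - self.last_ping < self.timeout

wd = Watchdog(timeout=0.2)
assert wd.alive()       # just constructed, counts as pinged
time.sleep(0.3)
assert not wd.alive()   # missed the deadline
wd.ping()
assert wd.alive()       # recovered after a fresh ping
```

Using `time.monotonic()` rather than wall-clock time keeps the watchdog immune to system clock adjustments.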
  • mes (Mark Spruiell) · ZeroC Staff, Administrators · California · Organization: ZeroC, Inc. · Project: Ice Developer
    Hi,

    Welcome to the forum.
    My first question is, is setting up the clients to also be servers, as I have done, the proper way of achieving what I want? It seems to work, but I don't know if that is the best way to utilize ICE's capabilities.
    There are essentially two different ways to implement bidirectional peers with Ice: by adding traditional server capabilities to a client (as you have done), or by using bidirectional connections. The difference between the two might seem subtle at first. Both require that a client create an object adapter, but the former accepts new incoming connections like any traditional server while the latter reuses an existing outgoing connection (from client to server) for receiving server-to-client callbacks.

    Using bidirectional connections makes life a lot easier when you have firewalls between the peers. If that's not a concern, your current approach is fine.
    My next question is, from what I have read of the threading model in ICE, am I correct in my understanding that the Object Adapter is in control of the number of threads that are available for incoming and outgoing messages (allocated from its thread pool)?
    The Ice communicator maintains two thread pools for client- and server-related activities. All of the communicator's object adapters share its server-side thread pool by default, but you can also configure an object adapter with its own (private) thread pool.
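    For reference, both pools are controlled through Ice configuration properties; a fragment along these lines sizes the shared server-side pool and gives one adapter its own private pool (the adapter name `EntityAdapter` is just an example):

```
# Shared server-side thread pool for the communicator
Ice.ThreadPool.Server.Size=4
Ice.ThreadPool.Server.SizeMax=8

# Private thread pool for one object adapter (adapter name is illustrative)
EntityAdapter.ThreadPool.Size=2
```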

    In a server context, the object adapter's threads (whether they come from the shared pool or its own pool) are only used to handle incoming requests. This means a thread reads a message, decodes it, dispatches the invocation to a servant, marshals any results, and sends the reply. So in that sense, your understanding is correct.

    Note however that the server-side thread pool doesn't normally have anything to do with outgoing requests sent in a client context.
    How can I ensure that my threads handle only what I want them to handle?
    We definitely recommend that you use asynchronous invocations in your UI applications to avoid blocking. During an asynchronous invocation, the calling thread is used to marshal the parameters and attempt to write the message to the socket. If the write could block, it's deferred and control of the calling thread returns to the application. Meanwhile, a thread from the client-side thread pool eventually completes the write operation.

    In this situation, you can control how many threads are in the communicator's client-side thread pool, but the application has no control over the creation or execution of these threads. In other words, an asynchronous invocation is "fire and forget". Eventually, a thread pool thread will also read the reply from the server and dispatch the results to your callback.

    It's possible that the server could get backlogged and cause any number of pending messages to get queued up in Ice's internal buffers in the client. For this reason, we also provide "sent" callbacks that allow you to implement your own flow control if desired.
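    The flow-control idea behind "sent" callbacks can be sketched with a counting semaphore that bounds the number of un-flushed messages; `message_sent()` below plays the role of the sent callback (names and the transmit hook are purely illustrative):

```python
import threading

class FlowControl:
    """Sketch of sender-side flow control: allow at most
    max_in_flight messages that have not yet been reported as
    sent; message_sent() plays the role of the "sent" callback."""
    def __init__(self, max_in_flight):
        self._slots = threading.Semaphore(max_in_flight)

    def send(self, msg, transmit):
        # Blocks the caller if too many messages are pending,
        # instead of letting them pile up in internal buffers.
        self._slots.acquire()
        transmit(msg)

    def message_sent(self):
        # Invoked when the runtime reports the message was sent.
        self._slots.release()

sent = []
fc = FlowControl(max_in_flight=2)
fc.send("a", sent.append)
fc.send("b", sent.append)
fc.message_sent()   # one message flushed, a slot frees up
fc.send("c", sent.append)
print(sent)  # ['a', 'b', 'c']
```

A variant that queues instead of blocking works just as well; the point is that the sent notification, not the invocation itself, releases the slot.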

    In general, if your goal is to avoid blocking, you should use asynchronous invocations in your client contexts as I've already mentioned. Whether you also need to use asynchronous dispatch in a server context depends on the situation. For example, suppose that you're maintaining your own queue of tasks to be processed, and each invocation on the server adds a task to that queue. The implementation of the servant in this case doesn't need to use asynchronous dispatch if there's no possibility of blocking. This assumes of course that the client's invocation is considered to have "succeeded" once the task has been added to the queue.

    On the other hand, if the client's invocation does not "complete" until that queued task has actually been executed, then this would probably be a good situation to use asynchronous dispatch.
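    The distinction drawn in the two paragraphs above can be modeled with a task queue and futures: one entry point completes as soon as the task is queued, the other completes only when a worker has executed it (all names here are hypothetical stand-ins, not Ice API):

```python
import queue
import threading
from concurrent.futures import Future

tasks = queue.Queue()

def enqueue_and_return(task):
    # "Succeeds on enqueue" style: no asynchronous dispatch
    # needed, since adding to the queue cannot block.
    tasks.put((task, None))
    return "queued"

def enqueue_and_complete_later(task):
    # Asynchronous-dispatch style: the caller gets a Future that
    # completes only when the worker actually runs the task.
    fut = Future()
    tasks.put((task, fut))
    return fut

def worker():
    while True:
        task, fut = tasks.get()
        if task is None:        # sentinel: shut down
            break
        result = task()
        if fut is not None:
            fut.set_result(result)

threading.Thread(target=worker, daemon=True).start()

print(enqueue_and_return(lambda: 1))        # queued
fut = enqueue_and_complete_later(lambda: 41 + 1)
print(fut.result())                          # 42
tasks.put((None, None))
```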
    How can I ensure that my threads handle only what I want them to handle? I.e., for my application I want one thread to specifically handle sending to the UI, one to listen for UI messages, one to send to entities, and one to receive from entities.
    One approach would be to set up independent worker threads that simply wait for new tasks to be queued by Ice-related activities. Note that it would probably be overkill to dedicate a thread exclusively to making outgoing Ice requests, since Ice already guarantees that asynchronous invocations won't block. It's safe to make asynchronous Ice invocations from any thread, including the "main" or UI thread.

    I should also mention that Ice supports a "dispatcher" feature that would be handy for your UI applications. For example, this would allow you to dispatch all incoming Ice invocations and AMI callbacks from the UI thread, so that your code could safely interact directly with UI objects. Without a dispatcher, your code would be called by an Ice thread pool thread, and then you would likely have to postpone or queue any UI updates.
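    The essence of that dispatcher idea is a queue of callables that only the UI thread drains, so every callback runs on the UI thread; a minimal sketch (the class and method names are illustrative, not the actual Ice dispatcher API):

```python
import queue

class UiDispatcher:
    """Sketch of the dispatcher pattern: network threads hand
    callables to dispatch(), and the UI thread drains them so
    all callbacks execute on the UI thread."""
    def __init__(self):
        self._pending = queue.Queue()

    def dispatch(self, call):
        # Called from a network (Ice thread pool) thread.
        self._pending.put(call)

    def drain(self):
        # Called periodically from the UI event loop.
        while not self._pending.empty():
            self._pending.get()()

updates = []
ui = UiDispatcher()
ui.dispatch(lambda: updates.append("plot (3, 4)"))
ui.dispatch(lambda: updates.append("status: alive"))
ui.drain()  # pretend this runs on the UI thread
print(updates)
```

Most GUI toolkits provide an equivalent hook (an "invoke on UI thread" mechanism) that the dispatcher can forward to directly.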

    Anyway, as you can see, there's a lot to consider here. I recommend that you approach all such design decisions by asking yourself "What thread will be executing this code, and can it block?" Once you become more familiar with Ice semantics, the available implementation strategies will be more readily apparent.

    Regards,
    Mark
  • mes (Mark Spruiell) · ZeroC Staff, Administrators · California · Organization: ZeroC, Inc. · Project: Ice Developer
    Spiderkeys wrote: »
    Is this a sound method for implementing non-blocking, bi-directional messaging passing in ice?
    Yes, that looks reasonable.

    Having one peer "ping" another is part of a commonly used strategy that we usually refer to as a "session". Our sample programs include a simple example of sessions.

    Mark
  • Spiderkeys (Charles Cross) · Member · Organization: Langley NASA Research Center · Project: Autonomous entity oversight & scheduling
    Thanks, Mark, for the very in-depth response; glad to see I have been on the right track. I'm very impressed with Ice so far and think you guys have built a wonderful product here. It has been very straightforward to pick up, and the sample programs, documentation, and training materials have made it a mostly painless process! I'll give the session sample a look through, and then hopefully I can start building the first draft of my application.

    Cheers,
    Charles
  • Spiderkeys (Charles Cross) · Member · Organization: Langley NASA Research Center · Project: Autonomous entity oversight & scheduling
    Just in case anyone else is curious about implementing this sort of architecture in their applications, I'll upload my "final rough draft" diagram here.

    Attachment not found.

    Note: there may still be some improvements to make. I'm not sure whether I need to rely on the message handler for managing connection info; I haven't yet read up on Ice's built-in connection facilities. I also think it will be best to change my server's incoming request strategy to use the session pattern, as Mark mentioned; that just requires making the message queue thread-safe.

    Cheers :)