Archived

This forum has been archived. Please start a new discussion on GitHub.

Bidirectional Socket Clarification

I have the following scenario: three object adapters, one in process A and two in process B -- ObjAdpA, ObjAdpB1, ObjAdpB2. Each object adapter has one server thread and one client thread, all using bidirectional sockets. The following sequence occurs:

Invocation 1: ObjAdpB1 -> twoway_invocation -> ObjAdpA
Invocation 2: ObjAdpA -> oneway_invocation -> ObjAdpB2

My question is this: Is Invocation 2 (made from within Invocation 1) dispatched by the server thread of ObjAdpB1 or ObjAdpB2? The Ice documentation says that bidirectional sockets use the initiating adapter's server thread to dispatch calls back into the client, but this is counter-intuitive given the use of a oneway invocation into an object owned by another object adapter. In other words, inside the twoway invocation (Invocation 1) to ObjAdpA, I would still expect to be able to make oneway calls into ObjAdpB2 -- but I have a deadlock condition suggesting otherwise.

Any feedback/clarification greatly appreciated.

Comments

  • marc
    marc Florida
    Invocation 2 is dispatched by the client-side thread pool. The server-side thread pool or per-OA thread pools have no meaning for callbacks received over bi-directional connections. When the connection is initially established from the client to the server, it is associated with the client side thread pool, and this association does not change over the lifetime of the connection.

    To avoid the deadlock, you can increase the number of threads in the client-side thread pool. By default it's just one thread.

    Whether or not you are using oneways or twoways does not matter. The dispatching mechanisms are exactly the same, except that there is no response being sent back for oneways.
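    The starvation pattern Marc describes can be sketched without Ice at all. The following Python sketch is my own illustration -- `demo`, `twoway`, and `oneway_callback` are made-up names, and a `ThreadPoolExecutor` merely stands in for the client-side thread pool -- showing why a single pool thread stalls while two do not:

    ```python
    # Ice-free sketch of the deadlock discussed above. The pool plays the
    # role of the client-side thread pool; the "twoway" cannot complete
    # until the "oneway callback" has been dispatched by that same pool.
    from concurrent.futures import ThreadPoolExecutor
    import threading

    def demo(pool_size):
        pool = ThreadPoolExecutor(max_workers=pool_size)
        callback_done = threading.Event()

        def oneway_callback():            # the callback arriving over the bi-dir connection
            callback_done.set()

        def twoway():                     # a twoway whose completion depends on the callback
            pool.submit(oneway_callback)  # the "server" sends the callback back
            return callback_done.wait(timeout=1.0)

        result = pool.submit(twoway).result()
        pool.shutdown(wait=True)
        return result

    print(demo(1))  # False: the only pool thread is stuck in twoway, the callback starves
    print(demo(2))  # True:  a second thread (cf. Ice.ThreadPool.Client.Size=2) dispatches it
    ```

    In the one-thread case only the `Event` timeout breaks the stall; a real Ice client in the same situation simply hangs, which matches the deadlock reported above.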
  • Hi Marc, thanks for the speedy reply. It sounds as though Invocation 2 is dispatched by the client thread of ObjAdpB1 while that same thread is waiting for the completion of the twoway invocation. Since I hold a lock around the twoway invocation, I end up hanging because the thread can't process the twoway completion until the callback has completed.

    Is there any way to turn on protocol tracing that identifies which thread is handling dispatch and which OA that thread belongs to?
  • marc
    marc Florida
    gsalazar wrote:
    Hi Marc, thanks for the speedy reply. It sounds as though Invocation 2 is dispatched by the client thread of ObjAdpB1 while that same thread is waiting for the completion of the twoway invocation. Since I hold a lock around the twoway invocation, I end up hanging because the thread can't process the twoway completion until the callback has completed.

    This is correct. As mentioned in my other posting, the solution is to add another thread to the client-side thread pool, for example:

    Ice.ThreadPool.Client.Size=2
    gsalazar wrote:
    Is there any way to turn on protocol tracing that identifies which thread is handling dispatch and which OA that thread belongs to?

    Sorry, we don't have such tracing. But it is certainly a good idea for a future addition to Ice.
  • Okay, so two questions.

    1. Above, you mention that a connection is associated with the client-side thread pool -- is that at least on a per-OA basis? For example, if you created an object adapter with its own thread pool and then assigned that adapter to a connection, are you saying its thread pool is no longer used?

    2. I would have expected the initial twoway invocation to block the processing of any incoming oneway callbacks. You are saying that the client thread does not block while waiting for the twoway to complete before processing the incoming oneway callback, right? Isn't the client thread busy waiting for the twoway to complete?
  • marc
    marc Florida
    gsalazar wrote:
    Okay, so two questions.

    1. Above, you mention that a connection is associated with the client-side thread pool -- is that at least on a per-OA basis? For example, if you created an object adapter with its own thread pool and then assigned that adapter to a connection, are you saying its thread pool is no longer used?

    No, client-side thread pools are not on a per-OA basis. There is only one per communicator.

    The OA thread pool is used for requests that are received over regular (i.e., non-bidirectional) connections. Bi-directional connections in the client are always associated with the client-side thread pool, and requests that arrive over such a connection are dispatched using this client-side thread pool. That's because the client-side thread pool is used for outgoing connections.

    You have a similar situation on the server side: If the server sends a request over a bi-directional connection, then it uses the server-side thread pool (or the per-OA thread pool) to receive the responses, and not the client side thread pool.

    So to summarize: Outgoing connections (initiated by the client) are always associated with the client side thread pool, and incoming connections (accepted by the server) are always associated with either the server-side thread pool, or the per-OA thread pool.
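    For reference, the three pools summarized above are configured with separate properties; the adapter name and sizes below are examples, not taken from the original discussion:

    ```
    Ice.ThreadPool.Client.Size=2    # client-side pool: outgoing connections, incl. bi-dir callbacks
    Ice.ThreadPool.Server.Size=4    # server-side pool: incoming connections
    MyAdapter.ThreadPool.Size=3     # per-OA pool for the adapter named "MyAdapter"
    ```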
    gsalazar wrote:
    2. I would have expected the initial twoway invocation to block the processing of any incoming oneway callbacks. You are saying that the client thread does not block while waiting for the twoway to complete before processing the incoming oneway callback, right? Isn't the client thread busy waiting for the twoway to complete?

    I'm not sure I understand the question. In any case, if you send a twoway over a bi-directional connection with a client-side thread pool containing a single thread, and this twoway triggers a oneway callback over the same connection, then this does not block. The sequence would be something like this:
    1. The client main thread sends the twoway request and waits until it is notified from the client-side thread pool that the response has arrived.
    2. The server dispatches the twoway request, and sends a oneway request to the client during dispatch.
    3. If the oneway is small and fits into the TCP/IP buffer, dispatching can finish immediately.
    4. If the oneway is large and does not fit into the TCP/IP buffer, then the thread from the client thread pool will pick up this oneway and dispatch it. After the data of the large oneway has been picked up by the client, the twoway request dispatching can finish.
    5. In either case, after dispatching is complete, the server sends back the twoway response to the client.
    6. The client will have a thread available to receive the response as soon as the oneway dispatch has finished.
    7. When the thread is available, the response is received, and the main thread is notified about the return value of the twoway call.
  • ice_connection()->setAdapter

    It would appear as though Ice::Connection::setAdapter requires a one-to-one mapping of connection to OA. I have an application creating two separate objects, putting them in separate adapters, and then calling prx->ice_connection()->setAdapter. I can turn on network tracing and see the requests come in for both objects, but only the requests for the adapter on which setAdapter was last called are actually delivered to their servants.

    I have a feeling that this is the documented behavior, due to connection re-use. When I establish two connections to the same server, only one client sees the requests (or callbacks, due to bi-dir connections).

    Is there any way to bypass this behavior? Is it behaving as expected?

    --Gabe
  • to bypass

    For the situation I am looking at, using one OA would probably be fine.

    However, I would still like to have the streams processed in parallel. Is a multiple-communicator configuration my only option? How about Ice.ThreadPerConnection? Will that enable per-connection (or per-OA, in this case) dispatch for bi-dir connections as well? Or am I still limited to dispatch happening in the communicator's client thread pool?
  • marc
    marc Florida
    You can only have one object adapter handling requests from one connection. If you call setAdapter(), any previously set object adapters will be overridden.

    I'm afraid I don't understand the second part of your question, about connection re-use and two connections from two different clients.
  • marc
    marc Florida
    gsalazar wrote:
    For the situation I am looking at, using one OA would probably be fine.

    However, I would still like to have the streams processed in parallel. Is a multiple-communicator configuration my only option? How about Ice.ThreadPerConnection? Will that enable per-connection (or per-OA, in this case) dispatch for bi-dir connections as well? Or am I still limited to dispatch happening in the communicator's client thread pool?

    You can use Ice.ThreadPerConnection=1. Then there are no thread pools; instead, every connection has a single dedicated thread.

    If you want requests to be processed in parallel, you can also simply add more threads to the thread pool. You can even make the thread pool dynamic, so that it grows to whatever number of threads is required and later shrinks when the load is reduced.
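    A dynamic pool of the kind just mentioned could be configured as follows (the property names are from the Ice manual; the sizes are arbitrary examples):

    ```
    Ice.ThreadPool.Server.Size=4      # initial number of threads
    Ice.ThreadPool.Server.SizeMax=20  # the pool may grow to at most 20 threads under load
    ```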

    With thread-per-connection, if you use multiple OAs, and every client uses a separate OA, and only opens a single connection to this OA, then thread-per-connection effectively becomes thread-per-OA. But why would you want to do this? What would be the advantage? I only see additional overhead for creating all these OAs (each of them would need to listen to a separate port, have a dedicated thread to accept connections on this port, etc).

    If you use thread pools, then the client-side thread pool is responsible for dispatching bi-dir callbacks (dispatched with OAs that are assigned to outgoing connections), as explained in a previous post. For thread-per-connection, there are no thread pools, but dedicated threads per connection, so none of this applies.
  • Okay, forget about my per-OA comments.

    I would just like to understand Ice.ThreadPerConnection better (the Ice documentation for this option is very minimal). Say I am processing two streams in a client via callbacks from two separate connections -- graphics and audio. I would like these to be processed in parallel, but for data within each stream to stay serialized. This is why I originally had each stream in an OA, thinking that having one server and one client thread for each OA would keep each stream serialized but allow for parallel processing of the two streams.

    If I enable Ice.ThreadPerConnection with bi-dir connections, what is the expected behavior? Is it still the client-side thread pool that is responsible for dispatching requests (callbacks), or does the per-connection thread end up handling this?
  • marc
    marc Florida
    gsalazar wrote:
    Okay, forget about my per-OA comments.

    I would just like to understand Ice.ThreadPerConnection better (the Ice documentation for this option is very minimal). Say I am processing two streams in a client via callbacks from two separate connections -- graphics and audio. I would like these to be processed in parallel, but for data within each stream to stay serialized. This is why I originally had each stream in an OA, thinking that having one server and one client thread for each OA would keep each stream serialized but allow for parallel processing of the two streams.

    If I enable Ice.ThreadPerConnection with bi-dir connections, what is the expected behavior? Is it still the client-side thread pool that is responsible for dispatching requests (callbacks), or does the per-connection thread end up handling this?

    The client side thread pool won't even exist in this case. Thread per connection means that there are no thread pools. Instead, each connection has one dedicated thread. This thread handles everything that arrives over the connection. It handles responses for requests, AMI callbacks, and it also dispatches requests.

    One problem you might face is that there is currently no way to force a new connection to be established, i.e., Ice will always reuse connections if it can.

    There is one work-around, though: timeouts are connection-specific. This means that if you have two proxies that have the same endpoint except for different timeouts, then the two proxies will use different connections. So you could change the timeout of your audio stream proxy slightly before using it, and you would get a new connection, which would then be processed in a separate thread by the server. (Yes, I know that's a bit hacky, and we will add a clean API to force new connections in a future version of Ice :) )
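    The timeout workaround can be expressed directly in proxy configuration. In this hypothetical fragment the two proxies share the same endpoint but differ in the -t (timeout) option, so Ice gives each one its own connection:

    ```
    # Same host and port, different timeouts => two separate connections
    Audio.Proxy=audioStream:tcp -h server.example.com -p 10000 -t 5000
    Video.Proxy=videoStream:tcp -h server.example.com -p 10000 -t 5001
    ```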
  • Hi Marc, thanks for your speedy reply. This is good information to have!

    I think I'll experiment with thread per connection to see what performance is like.
  • marc
    marc Florida
    BTW, I'm a bit puzzled by your remark that you need data streams to be serialized but processed in parallel. If you use regular twoway requests to transfer data, then your data stream will be properly serialized, even if you have multiple threads in the dispatching thread pool. You just have to use separate mutexes for your audio and video streams.

    You only get a problem if you use oneways, because then requests could be dispatched out of order. Please see the FAQ titled "How can oneway requests arrive out of order?" in our newsletter "Connections", issue #2 (http://www.zeroc.com/newsletter/).
  • reusing connections
    marc wrote:
    One problem you might face is that there is currently no way to force a new connection to be established, i.e., Ice will always reuse connections if it can.

    There is one work-around, though: timeouts are connection-specific. This means that if you have two proxies that have the same endpoint except for different timeouts, then the two proxies will use different connections. So you could change the timeout of your audio stream proxy slightly before using it, and you would get a new connection, which would then be processed in a separate thread by the server. (Yes, I know that's a bit hacky, and we will add a clean API to force new connections in a future version of Ice :) )

    Hi Marc,
    I originally read this thread to reaffirm what I thought Ice.ThreadPerConnection meant. Then I read your post about the issue I just quoted. It seems that this is a shortcoming which you will rectify in a future version of the API. Since I didn't dive deeply into this matter, and I don't want to miss any piece of information, could you please explain in detail what problems and implications are associated with reusing connections? Maybe I already know the answer, but maybe there is more we can learn.

    regards:
    zoltan
  • marc
    marc Florida
    The only implication is for the thread-per-connection model: Since threads are associated with connections, you will get one thread to dispatch requests in the server for each connection from the client. This means that if for example you want to dispatch two requests in parallel, you must make sure that your client opens two separate connections to the server for these two requests.

    Of course this doesn't apply to the default model, which is thread pool. With the thread pool model, the server can dispatch the client requests in parallel independent of the number of connections, up to the configured number of threads in the thread pool.
  • Has ZeroC committed on a release timeframe for adding manual connection opening to the Ice API (instead of the hacky different timeout method)?
  • marc
    marc Florida
    We will add this to Ice 2.2, but I don't have a firm release date yet.
  • Hi Marc, I am just looking for clarification. I read the 'Threading Considerations' section from Ice 3.2.0 (the posts here referred to Ice 2.x), and I wanted to make sure that the threading model for bidirectional connections is the same in Ice 3.2 as discussed for Ice 2.x in these posts.

    Thanks, --Gabe
  • marc
    marc Florida
    Yes, it's the same with Ice 3.2. Note that there will be some significant changes in Ice 3.3 though. We will release a list of changes for Ice 3.3 shortly.