How to Use Bidirectional 2-Way Nested Invocations AND Serialize Property

Hello,

I have a C++ based single client-server pair using a bidirectional connection. I created a proxy on the client side to call a method in the server (via the server’s servant & object adapter), which, in turn, performs a nested 2-way invocation there. This nested 2-way method uses a proxy created on the server side (“serverProx2CallClient”) to perform a callback to a method on the client side (via the client’s servant & object adapter). That method (on the client side) then returns a callback response to the server.

This all works just fine, until I assert the “serialize” property for the server’s thread pool:

“Ice.ThreadPool.Server.Serialize=1”

Without serialization, one of the additional server threads I have available is able to receive and dispatch the callback response from the client. However, when the server-side serialize property is active, the callback response from the client is never handled by the server, causing it to deadlock. We need serialization for other reasons not mentioned here.

Based on my observations, I am assuming that the original server method which started the nested 2-way invocation must complete before any incoming request to the server (such as the callback response from the client) can begin to be handled.

I have tried creating an additional object adapter on the server side just for the purpose of receiving callback responses from the client, but serialization seems to prevent that as well. The Ice manual, however, seems to state that serialization applies only on a per-connection basis:

“Setting this property to a value greater than zero forces the thread pool to serialize all messages received over a connection.”

I read that as meaning a “single” connection (the proxy-servant pair in this case), unless I am misinterpreting the statement.

With that said, it seems odd that the extra object adapter had no effect, although I could be incorrectly implementing the fix that links the second adapter (on the server side) to the existing connection:

“serverProx2CallClient->ice_getConnection()->setAdapter(serverAdapter02);”

We need this functionality but must also maintain serialization. I have thought about trying AMD and AMI, but does anyone have other suggestions?

Regards,

-Jim

Comments

  • mes (California)
    Hi Jim,

    It sounds like you have a pretty good handle on things.

    Note that a connection is not necessarily equivalent to a "proxy-servant pair", because the same connection can be shared by multiple unrelated proxies targeting any servant in the same object adapter.

    You're correct that the serialization is causing the deadlock, and using a separate object adapter is a reasonable solution. The setting for Ice.ThreadPool.Server.Serialize affects every object adapter that shares the communicator's default server-side thread pool. However, using a secondary object adapter should still work around the deadlock because the client would establish a separate connection to this object adapter.

    I suspect the callback proxy that the server provides to the client is not being created correctly. The servant in your server should be doing something like this:
    void initialOperation(const Identity& clientIdent, const Current& curr)
    {
        // Create the bidirectional proxy to the client.
        Server2ClientPrx prx = Server2ClientPrx::uncheckedCast(
            curr.con->createProxy(clientIdent));
    
        // Servant already knows the secondary object adapter.
        // Assume we need to add a new servant for each client.
        CallbackPrx cb = CallbackPrx::uncheckedCast(
            _secondaryOA->addWithUUID(new CallbackServant));
    
        // Make the bidirectional callback, passing the proxy for the
        // callback object that the client will invoke.
        prx->server2ClientOperation(cb);
    }
    

    As you mentioned, another possibility would be for the server's servant to invoke the bidirectional callback via AMI. This would eliminate the deadlock issue and allow you to use only one object adapter. If you decide to go this route, make sure you have a very clear understanding of the concurrency behavior.

    Hope that helps,
    Mark
  • Thank you, Mark.

    This is very helpful. Just a couple of questions, though.

    In the module of my Slice file I have two interfaces, one for the client and one for the server. In the client-side implementation, I created the first proxy in your example ("Server2ClientPrx"), just as you did, using the proxy handle that was automatically generated when the Slice compiler processed my client’s interface (“clientIntfPrx” in my case).

    That proxy was then used (in the server-side implementation) to call a method on the client side, just as you did at the end of your example, in this line:

    “prx->server2ClientOperation(cb);”

    The mistake I made, however, was not using an additional proxy (“cb”) for the callback. My question is this: to use that additional proxy, how do I modify my Slice file so that another proxy handle (“CallbackPrx”) is generated correctly?

    Since this second proxy handle does not appear to produce a proxy that calls any methods I implement (it is only used with the built-in uncheckedCast method), would the Slice file simply need an empty additional interface like this to generate the second proxy handle?

    interface Callback { };

    Also, once my method on the client side has been passed this new callback proxy, how is it used in that method to provide the callback response to the server? Right now I am just using a return value from the method, so, applying your example, it would be expressed like this:

    “std::string retVal = prx->server2ClientOperation(cb);”

    Since I am passing the new callback proxy to this method on the client side, I am assuming my approach is not quite correct.

    Can you advise me?

    Regards,

    -Jim
  • mes (California)
    The way I understood it from your original post, there are three separate "interactions" where the nesting looks like this:
    client ---- request op#1 ----> server
      client <---- request op#2 ---- server
        client ---- request op#3 ----> server
        client <---- reply op#3 ---- server
      client ---- reply op#2 ----> server
    client <---- reply op#1 ---- server
    
    This assumes all the operations are invoked using synchronous twoway semantics.

    To avoid any confusion, it would be best to have Slice definitions that we can talk about, even if they're just placeholders for your real application:
    interface ClientToServer1
    {
        void op1(Ice::Identity ident);
    };
    
    interface ClientToServer2
    {
        void op3();
    };
    
    interface ServerToClient
    {
        void op2(ClientToServer2* cb);
    };
    
    Feel free to post your own version with less abstract interface and operation names.

    Note that if you intend to use bidirectional connections, the client cannot pass a proxy to the server. Instead, the client must pass an identity, and the server must create a proxy using the client's connection, as I showed earlier. See the example in demo/Ice/bidir for more concrete details.

    Mark
  • Hello Mark.

    My setup is a little different from the “3-Op” approach you mentioned; it actually uses only 2 “Ops”. I also looked at the “demo/Ice/bidir” example, which got me started in the right direction, but what I need to do is somewhat different from that example as well.

    The client-server interactions are like this:

    client ---- Using a Server proxy ---- request Op#1 ----> server
    client <---- request Op#2 ---- Using a Client proxy ---- server
    client ---- reply to Op#2 ----> server    // Fails with serialize.
    client <---- reply to Op#1 ---- server

    I assumed this is possible using a single bidirectional connection, but please correct me if I am wrong about that. The Slice definition I am using is quite simple, with the 1st interface for the client and the 2nd for the server:

    module testCliServCallBk
    {
        interface IReceiverClientIntf
        {
            string nestedCallBackToClient();
        };

        interface ISenderServerIntf
        {
            string addClientNprintString(Ice::Identity ident, string s);
        };
    };


    My design is broken down as follows:


    On the server side, I create a communicator and initialize its properties with 4 threads for both the client and server thread pools, also activating the “Serialize” property for both. I create a single object adapter from that communicator, with endpoints, and a servant to register with it (which inherits the “ISenderServerIntf” interface from the Slice definition):

    serverAdapter01 // Endpoints are “default -p 10000”
    serverServantObject01

    Finally I activate the adapter:

    serverAdapter01->add(serverServantObject01,
        ic->stringToIdentity("serverServantObject01"));
    serverAdapter01->activate();
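
    Roughly, that communicator and adapter setup looks like this (a sketch only; the property names are the real Ice properties, while the variable names just follow my description above):

    Ice::InitializationData initData;
    initData.properties = Ice::createProperties();
    // Four threads and serialization for both thread pools, per the description.
    initData.properties->setProperty("Ice.ThreadPool.Client.Size", "4");
    initData.properties->setProperty("Ice.ThreadPool.Client.Serialize", "1");
    initData.properties->setProperty("Ice.ThreadPool.Server.Size", "4");
    initData.properties->setProperty("Ice.ThreadPool.Server.Serialize", "1");
    Ice::CommunicatorPtr ic = Ice::initialize(initData);

    // Object adapter with fixed endpoints, then the registration shown above.
    Ice::ObjectAdapterPtr serverAdapter01 =
        ic->createObjectAdapterWithEndpoints("serverAdapter01", "default -p 10000");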


    On the client side, I create another communicator, but initialize its properties with the Ice defaults. I then create a proxy from that communicator using the endpoints of the server’s object adapter, like so:

    ObjectPrx base = ic->stringToProxy("serverServantObject01:default -p 10000");
    ISenderServerIntfPrx clientProx2CallServer = ISenderServerIntfPrx::checkedCast(base);

    Similar to the server side, I create a single object adapter from the communicator, but without a name or endpoints, since it only receives callback requests from the server. The callback servant I register with that object adapter inherits from the “IReceiverClientIntf” interface in the Slice definition; the adapter and servant are named as follows:

    clientAdapter01
    clientCallbackServantObject01

    After activating the adapter, I use the proxy to obtain a connection and associate the object adapter with it. I assume this step is what allows callback requests from the server, discussed below, to be dispatched by the client?

    clientProx2CallServer->ice_getConnection()->setAdapter(clientAdapter01);

    Now, I start the communication circle by using the server proxy created on the client side to call the nested 2-way invocation “addClientNprintString( )” back on the server side, using the client object adapter’s identity:

    status = clientProx2CallServer->addClientNprintString(
        ic->stringToIdentity("clientAdapter01"), "\nThe Client says Hello.");

    Once inside that method, back on the server side, I create a client proxy there, using the current connection and the client object adapter’s identity:

    IReceiverClientIntfPrx serverProx2CallClient =
        IReceiverClientIntfPrx::uncheckedCast(curr.con->createProxy(ident));

    I close the “circle” by using that proxy to perform the callback for a method back on the client side:

    callbackResponseMsg = serverProx2CallClient->nestedCallBackToClient();

    Everything works up to this point with or without serialization. The “hang” with serialization occurs when “nestedCallBackToClient()” tries to return its callback response value from the client back to the server.

    Given that we would like to maintain serialization and use bidirectional connectivity, what is the best approach? I tried using a second object adapter on the server side, in the “addClientNprintString( )” method, right before my call to “nestedCallBackToClient()”, using this approach:

    CommunicatorPtr ic = curr.adapter->getCommunicator();

    ObjectAdapterPtr serverAdapter02 = ic->createObjectAdapter(""); // No endpoints reqd.?
    ISenderServerIntfPtr serverServantObject02 = new serverServant;

    serverAdapter02->add(serverServantObject02, ic->stringToIdentity("serverAdapter02"));
    serverAdapter02->activate();

    serverProx2CallClient->ice_getConnection()->setAdapter(serverAdapter02);

    However, that had no effect. Any recommendations you may have would be most helpful.

    Regards,

    -Jim
  • mes (California)
    Hi Jim,

    Thanks for the explanation.

    Nested synchronous twoway invocations cannot work over bidirectional connections when serialization is enabled. After receiving the client's initial request, the Ice run time in the server essentially stops listening to the socket until the request completes. This ensures that invocations are executed in the order received. However, it also means that a reply to a nested invocation will not be processed. Using your example, here's what happens:
    client ---- Using a Server proxy ---- request Op#1 ----> server
    client <---- request Op#2 ---- Using a Client proxy ---- server
    client ----- reply to Op#2 --------------------> XXX  server // Nested reply not processed
    client <-------------------------------- reply to Op#1 ---- server // Reply not sent
    
    From the server's perspective, the inner invocation never completes, which also prevents the outer invocation from completing. It doesn't matter how many threads you have in the server-side thread pool, this scenario will never succeed. A second object adapter also won't help here, as you've seen.

    There are a couple of alternatives. First, you could invoke the client callback with oneway semantics. Of course, this assumes that completion of the outer invocation does not depend on the completion of the inner invocation.
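
    For illustration, a rough sketch of the oneway variant (note this assumes nestedCallBackToClient is redeclared as void in Slice, since a oneway invocation cannot return a value):

    // Sketch only: invoke the callback via a oneway proxy on the same
    // bidirectional connection. No reply is expected, so there is nothing
    // for the serialized connection to deadlock on.
    IReceiverClientIntfPrx onewayPrx = IReceiverClientIntfPrx::uncheckedCast(
        serverProx2CallClient->ice_oneway());
    onewayPrx->nestedCallBackToClient(); // returns once the request is sent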

    Alternatively, if you must maintain the synchronous semantics, I recommend implementing the server-side operation with AMD and invoking the client callback with AMI. You can then chain the callbacks to achieve the desired semantics. For example:
    class AMICallback : public IceUtil::Shared
    {
    public:
    
        AMICallback(const AMD_ISenderServerIntf_addClientNprintStringPtr& cb) :
            _cb(cb)
        {
        }
    
        void completed(const string& callbackResponseMsg)
        {
            _cb->ice_response(callbackResponseMsg);
        }
    
        void exception(const Ice::Exception& ex)
        {
            cout << ex << endl;
            _cb->ice_exception(ex);
        }
    
    private:
    
        AMD_ISenderServerIntf_addClientNprintStringPtr _cb;
    };
    typedef IceUtil::Handle<AMICallback> AMICallbackPtr;
    
    ...
    
    void ISenderServerIntfI::addClientNprintString_async(
        const AMD_ISenderServerIntf_addClientNprintStringPtr& cb,
        const Ice::Identity& ident,
        const string& s,
        const Ice::Current& curr)
    {
        IReceiverClientIntfPrx serverProx2CallClient = ...;
        AMICallbackPtr amiCB = new AMICallback(cb);
        serverProx2CallClient->begin_nestedCallBackToClient(
            newCallback_IReceiverClientIntf_nestedCallBackToClient(amiCB,
                &AMICallback::completed, &AMICallback::exception));
    }
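
    One detail the code above assumes: the operation must be declared with the ["amd"] metadata so that slice2cpp generates the AMD mapping, for example:

    ["amd"] string addClientNprintString(Ice::Identity ident, string s);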
    
    Admittedly this is more complex than simple synchronous twoway invocations, but it achieves the desired effect: nested twoway invocations with synchronous semantics. In the code above, the AMI callback keeps a reference to the AMD callback; when the reply for the inner AMI invocation is received, it completes the outer invocation by calling the AMD callback.

    Mark
  • Hi Mark.

    Thank you for looking at this. I thought this might be the case. AMD and AMI were going to be my next choice. At least it’s good to know we have options.

    Just out of curiosity, is it possible to use AMD without AMI? We were trying to avoid an interface change on the client side. If absolutely necessary, we can go down that route, but we would prefer not to if possible.

    Please let me know when you can.

    Regards,

    -Jim
  • mes (California)
    I'm not sure what you mean by "interface change on the client side". Using AMI does not require any changes to your Slice interfaces.

    To answer your question, yes, you could use AMD without AMI. However, to avoid the deadlock, you would need to complete the AMD request before invoking the client callback, which may not give you the semantics you need.
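
    As a rough sketch of that ordering (reusing the generated AMD types from my earlier example, and assuming the operation carries the ["amd"] metadata):

    // Sketch only: AMD without AMI. The outer request is completed *before*
    // the nested synchronous callback, so the serialized connection is free
    // to read the callback's reply. Note the changed semantics: the reply to
    // addClientNprintString can no longer carry the callback's response.
    void ISenderServerIntfI::addClientNprintString_async(
        const AMD_ISenderServerIntf_addClientNprintStringPtr& cb,
        const Ice::Identity& ident,
        const string& s,
        const Ice::Current& curr)
    {
        IReceiverClientIntfPrx serverProx2CallClient =
            IReceiverClientIntfPrx::uncheckedCast(curr.con->createProxy(ident));

        cb->ice_response(s); // complete the outer request first

        string msg = serverProx2CallClient->nestedCallBackToClient();
    }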

    Mark
  • Hi Mark.

    My apologies for the confusion. Correct: we won’t need to change our Slice definition, just our client application itself (a fairly straightforward change) to use AMI. It sounds like that route may be needed anyway, to avoid the deadlock that remains with AMD alone.

    Thank you for all the help.

    Regards,

    -Jim
  • mes (California)
    Hi,

    For the scenario described above, using AMI in the client doesn't help to avoid the deadlock; AMD & AMI must be used in the server instead.

    Mark
  • Hi Mark,

    Can you give an example? I was under the impression that AMI was only applicable for clients.

    Regards,

    -Jim
  • mes (California)
    I used AMI in the example servant code above to avoid making a blocking twoway callback to the client. Together with the use of AMD, this allows the server-side dispatch thread to be released back to the Ice run time so that another message can be dispatched on the serialized connection.

    Mark
  • Hi Mark,

    I am intrigued by this approach. When using AMD and AMI in the server as you describe, is there any concern about a client being unable to receive the callback because its firewall blocks it?

    In other words, one of the benefits of the bidirectional connection is having a "single" connection set up by the client so callbacks are returned from the server over the same "line".

    Would I be able to maintain that benefit using AMD and AMI in the server as you just referenced?

    Please let me know.

    Regards,

    -Jim
  • mes (California)
    You can also use AMD & AMI over a bidirectional connection. The example code I posted uses this technique.
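
    In the servant, the fixed proxy created from the connection is invoked asynchronously just like any other proxy (a sketch, reusing names from the earlier example):

    // The proxy is bound to the client's bidirectional connection, so the
    // AMI request and its reply travel over that same connection.
    IReceiverClientIntfPrx bidirPrx = IReceiverClientIntfPrx::uncheckedCast(
        curr.con->createProxy(ident));
    bidirPrx->begin_nestedCallBackToClient(
        newCallback_IReceiverClientIntf_nestedCallBackToClient(
            amiCB, &AMICallback::completed, &AMICallback::exception));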

    Mark
  • Excellent.

    Thank you for your help Mark.

    Regards,

    -Jim