Archived

This forum has been archived. Please start a new discussion on GitHub.

A question about ChatRoom in Connections Issue #2

The following passages are quoted from Connections Issue #2:
1) The server's invocation of message is delivered in two steps: the server sends the invocation to the Glacier2 router, and then the router sends the invocation to the chat client.

2) The recommended configuration for the chat server is to use a twoway message to the Glacier2 router. This provides maximum reliability and does not cause problems for the message implementation because Glacier2 is always well-behaved and does not block the server for any length of time. Ideally we should not hold the lock inside message for the duration of the RPC (to allow for better concurrency), but we will address this in a future article.

3) For the communication between Glacier2 and the chat clients, we'll use oneway messages. This provides maximum throughput and avoids problems due to network latency or malicious clients. In addition, the Glacier2 router must use buffered mode for server-to-client calls.

4) The client's local TCP/IP stack has a limited amount of buffer space to accept data. If a oneway request is too large to fit into the remaining TCP/IP buffer space, the kernel suspends the caller in its write system call on the socket until enough buffer space becomes available. The remaining buffer space can be consumed by previously buffered requests or even by a single request, if it is large enough.

My question: if the oneway message from Glacier2 to the client is blocked, then the twoway message from the Server to Glacier2 has to wait, so I think this is also a problem.

Comments

  • benoit
    benoit Rennes, France
    Hi,

    No, this can't happen if you use Glacier2's buffered mode: even if the oneway call blocks, the twoway call to the router won't block. Please see the manual for more information.

    Benoit.
  • Thank you. I have read the chapter about Glacier2, but I still think that buffered mode just caches the twoway call from the Server, and that the calling thread in the Server will be blocked until the response comes back from Glacier2, which in turn depends on the oneway call from Glacier2 to the Client. Could you explain a little more? Or do you mean that once Glacier2 has cached the twoway request from the Server, it returns a response to the Server immediately? If so, that's of course OK.
  • marc
    marc Florida
    Yes, the response to the server will be given immediately, because the call to the client is oneway.
  • benoit
    benoit Rennes, France
    That's correct, the twoway call will return as soon as the call is queued in Glacier2. This is of course possible only because the call is forwarded to the client as a oneway call and there's no need to wait for a response from the client.

    Benoit.
  • Thanks, I understand now. So if the invocation from Glacier2 to the client is a twoway one, then even if Glacier2 uses buffered mode, the calling thread in the Server will still be blocked until the invocation from Glacier2 to the client has returned. Is that right?

    If so, another question :) . Please check these two cases first:
    Case 1: 5 threads in the Client send twoway requests to the Server simultaneously. The Server uses the thread-pool dispatch mode (and the pool is large enough), so on the Server side there are also 5 threads serving these requests simultaneously.

    Case 2: 5 threads in the Client send twoway requests to the Server through Glacier2. Suppose the requests from the Client to Glacier2 and from Glacier2 to the Server are both twoway. Glacier2 is in buffered mode. According to the Ice manual:
    In buffered mode, the router queues incoming requests and creates an extra thread (or two) dedicated to processing the request queue of each connected client.
    According to this quote, there are two threads in Glacier2 serving the Client: one for receiving requests from the Client and caching them, and the other for taking a request out of the queue and sending it to the Server. As a result, there is only one thread at a time in the Server processing the requests.

    So, if we want 5 threads to process the Client's requests simultaneously, what should we do?
  • benoit
    benoit Rennes, France
    You don't need to do anything, there will be 5 threads in your server (assuming it's properly configured) to process the 5 client requests.

    Without getting too much into the details... Glacier2 uses AMI/AMD to forward twoway invocations: the thread that forwards the request to the server doesn't need to wait for the answer before processing another request. I'm afraid I can't give you many more details on the implementation of Glacier2; this is a bit out of the scope of the support we can give on the forums. I recommend taking a look at the Glacier2 source code for more details.

    Benoit.
  • benoit wrote:
    ...I'm afraid I can't give you many more details on the implementation of Glacier2; this is a bit out of the scope of the support we can give on the forums. I recommend taking a look at the Glacier2 source code for more details.
    Thank you! I will look at Glacier2's source code tomorrow.

    BTW, I think readers cannot find the answers to the above questions just by reading the Ice manual. Could you improve it?
  • matthew
    matthew NL, Canada
    I'm not sure what you want to improve in the manual... if you send a oneway request then the server does not need to wait for a reply. That's the whole idea, after all :)

    Regards, Matthew
  • benoit wrote:
    You don't need to do anything, there will be 5 threads in your server (assuming it's properly configured) to process the 5 client requests.

    Without getting too much into details... Glacier2 is using AMI/AMD to forward twoway invocations: the thread that forwards the request to the server doesn't need to wait for the answer to process another request.
    I am sorry to bother you again. I have read part of Glacier2's source code, but I don't think that Glacier2 uses AMI to forward requests to the Server:
    -- src/Glacier2/RequestQueue.cpp
    void
    Glacier2::Request::invoke()
    {
        bool ok;
        ByteSeq outParams;

        try
        {
            if(_forwardContext)
            {
                // This is not an AMI invocation.
                ok = _proxy->ice_invoke(_current.operation, _current.mode, _inParams, outParams, _current.ctx);
            }
            else
            {
                ok = _proxy->ice_invoke(_current.operation, _current.mode, _inParams, outParams);
            }

            if(_proxy->ice_isTwoway())
            {
                _amdCB->ice_response(ok, outParams);
            }
        }
        catch(const LocalException& ex)
        {
            if(_proxy->ice_isTwoway())
            {
                _amdCB->ice_exception(ex);
            }
        }
    }
    
  • marc
    marc Florida
    Congratulations, you have discovered a bug :)

    We have also recently discovered this bug, and it is fixed for 2.1.2. The bug only shows up if the server sends two callbacks at the same time, and one of them blocks (for example, because the first callback triggers the second callback and waits for its completion, i.e., nested callbacks).
    namespace Glacier2
    {

    class AMI_Object_ice_invokeI : public AMI_Object_ice_invoke
    {
    public:

        AMI_Object_ice_invokeI(const AMD_Object_ice_invokePtr& amdCB) :
            _amdCB(amdCB)
        {
            assert(_amdCB);
        }

        virtual void
        ice_response(bool ok, const std::vector<Byte>& outParams)
        {
            _amdCB->ice_response(ok, outParams);
        }

        virtual void
        ice_exception(const Exception& ex)
        {
            _amdCB->ice_exception(ex);
        }

    private:

        const AMD_Object_ice_invokePtr _amdCB;
    };

    }

    // ...

    void
    Glacier2::Request::invoke()
    {
        if(_proxy->ice_isTwoway())
        {
            AMI_Object_ice_invokePtr cb = new AMI_Object_ice_invokeI(_amdCB);
            if(_forwardContext)
            {
                _proxy->ice_invoke_async(cb, _current.operation, _current.mode, _inParams, _current.ctx);
            }
            else
            {
                _proxy->ice_invoke_async(cb, _current.operation, _current.mode, _inParams);
            }
        }
        else
        {
            try
            {
                ByteSeq outParams;
                if(_forwardContext)
                {
                    _proxy->ice_invoke(_current.operation, _current.mode, _inParams, outParams, _current.ctx);
                }
                else
                {
                    _proxy->ice_invoke(_current.operation, _current.mode, _inParams, outParams);
                }
            }
            catch(const LocalException&)
            {
            }
        }
    }
    
  • Another question: a serverProxy is created in the following code. I know that its Identity's category serves as a flag, so different serverProxys must have different categories, but this algorithm cannot guarantee that. Why not use something like generateUUID() instead? Thanks.
    Glacier2::RouterI::RouterI(const ObjectAdapterPtr& clientAdapter, const ObjectAdapterPtr& serverAdapter,
                               const ConnectionPtr& connection, const string& userId, const SessionPrx& session) :
    //...
    {
        //...

        if(serverAdapter)
        {
            ObjectPrx& serverProxy = const_cast<ObjectPrx&>(_serverProxy);
            Identity ident;
            ident.name = "dummy";
            ident.category.resize(20);
            for(string::iterator p = ident.category.begin(); p != ident.category.end(); ++p)
            {
                *p = static_cast<char>(33 + rand() % (127-33)); // We use ASCII 33-126 (from ! to ~, w/o space).
            }
            serverProxy = serverAdapter->createProxy(ident);

            ServerBlobjectPtr& serverBlobject = const_cast<ServerBlobjectPtr&>(_serverBlobject);
            serverBlobject = new ServerBlobject(_communicator, _connection);
        }
    }
    
  • marc
    marc Florida
    The algorithm creates a 20-character random string. The probability of two identical strings being created within the same Glacier2 instance is so small that it's safe to say these strings are guaranteed to be unique. It is certainly no less safe than using a UUID.
  • marc wrote:
    Congratulations, you have discovered a bug :)

    We have also recently discovered this bug, and it is fixed for 2.1.2. The bug only shows up if the server sends two callbacks at the same time, and one of them blocks (for example, because the first callback triggers the second callback and waits for its completion, i.e., nested callbacks).

    Can AMI calls also block, just like oneway calls, if the Server runs very slowly? If so, there may be some other problem in the Glacier2 model.
  • marc
    marc Florida
    I'm not sure I understand the question. All calls can block if the server doesn't respond; this is nothing specific to Glacier2. The same would happen if you called the server directly. To avoid this, you typically set timeouts.
  • marc wrote:
    I'm not sure I understand the question. All calls can block if the server doesn't respond; this is nothing specific to Glacier2. The same would happen if you called the server directly. To avoid this, you typically set timeouts.

    I mean that if one Server behind Glacier2 runs very slowly, then other Servers will be affected in some circumstances.

    Please have a look at the attached file first. This is my explanation:
    Suppose there are many threads in each Client sending requests to different Servers (Server1/Server2) simultaneously. We know that in Glacier2 there is a RequestQueue1 corresponding to Client1, and that all requests from Client1 are cached in RequestQueue1.

    Suppose there are 3 requests in RequestQueue1 now: request-a (to Server1), request-b (to Server2), request-c (to Server2). If Server1 runs very slowly for some reason and has no time to receive requests from the wire, the AMI call (request-a) to Server1 in RequestQueue1 will be blocked, and the other requests (request-b/request-c) to the other Server (Server2) will have to wait. That is, the slow Server1 has a bad effect on the other Servers (Server2). That's terrible.

    Now every Client has a corresponding RequestQueue in Glacier2, and all requests from the same Client to different Servers are mixed in one RequestQueue. I think this is the core of the problem. Maybe putting all requests to the same Server from different Clients in one RequestQueue, instead of putting all requests from the same Client to different Servers in one RequestQueue, would solve the problem.
  • Now every Client has a corresponding RequestQueue in Glacier2, and all requests from the same Client to different Servers are mixed in one RequestQueue. I think this is the core of the problem. Maybe putting all requests to the same Server from different Clients in one RequestQueue, instead of putting all requests from the same Client to different Servers in one RequestQueue, would solve the problem.
    Maybe the best solution is to set a timeout on the AMI calls in the RequestQueue. If an AMI call times out, just put the request at the back of the RequestQueue and retry it next time. Maybe 2 new properties could control this:
    Glacier2.AMI.timeout=xxx
    Glacier2.AMI.retrys=yyy
  • matthew
    matthew NL, Canada
    What you say is technically correct; it's not a scenario that Glacier2 is designed to protect against. Glacier2 isn't meant to protect clients from misbehaving servers, it is meant to protect servers against misbehaving clients. In real-world applications clients can be expected to misbehave for a whole host of reasons. Servers in a protected environment should be well behaved. If they are not, then they should be fixed.

    Best Regards, Matthew
  • matthew wrote:
    What you say is technically correct; it's not a scenario that Glacier2 is designed to protect against. Glacier2 isn't meant to protect clients from misbehaving servers, it is meant to protect servers against misbehaving clients. In real-world applications clients can be expected to misbehave for a whole host of reasons. Servers in a protected environment should be well behaved. If they are not, then they should be fixed.

    Best Regards, Matthew

    I don't agree with you :).
    There are all kinds of servers behind Glacier2. It's perfectly normal that some run very fast while others run slowly, because they may do more work per request than other servers do. Neither kind is a misbehaving server.
  • matthew
    matthew NL, Canada
    If the server cannot keep up with the client then it's misbehaving, and you should either deploy more of them or architect them differently. In our view it is not Glacier2's job to protect the client against this situation.

    Regards, Matthew
  • matthew wrote:
    If the server cannot keep up with the client then it's misbehaving, and you should either deploy more of them or architect them differently. In our view it is not Glacier2's job to protect the client against this situation.

    Regards, Matthew

    I am sorry, but I still cannot agree with you, and I cannot convince you either. All I can do is hold on to my opinion.
  • matthew
    matthew NL, Canada
    There are lots of solutions to your issue :D
    - Since you have the source to Glacier2 itself, you may make whatever changes you feel are necessary.
    - If you don't want to modify Glacier2, you can write a custom service to handle the back-end message queuing for the slow servers.

    Best Regards, Matthew
  • matthew wrote:
    There are lots of solutions to your issue :D
    - Since you have the source to Glacier2 itself, you may make whatever changes you feel are necessary.
    - If you don't want to modify Glacier2, you can write a custom service to handle the back-end message queuing for the slow servers.

    Best Regards, Matthew

    Thank you, Matthew. I am just a fan of Ice; up to now I haven't used Ice in my projects because I only learned about it last autumn. For now I am reading Ice's documentation and source code and raising questions, and I hope this can be helpful to Ice.