Archived

This forum has been archived. Please start a new discussion on GitHub.

static Ice::CommunicatorPtr

Hello!

I have a large project using Ice. It works with a lot of connections: about 10 connections for each of 100 hosts. My question: what if I keep one static Ice::CommunicatorPtr and use it for all connections? Right now I have one communicator per host, and I think that's wrong.

Comments

  • xdm
    xdm La Coruña, Spain
    Hi,

    A communicator per host is a waste of resources. It is fine to have a static reference to a communicator; just initialize it before first use and destroy it before main() exits.
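    To make that lifetime concrete, here is a minimal, library-free RAII sketch. The held value stands in for the communicator and the injected destroy action stands in for communicator->destroy(); with Ice you would construct the holder at the top of main() from the result of Ice::initialize(argc, argv).

```cpp
#include <functional>
#include <utility>

// Generic RAII holder illustrating the recommended lifetime:
// create the resource at the top of main(), destroy it before
// main() returns. With Ice, T would be Ice::CommunicatorPtr and
// the destroy action would call communicator->destroy().
template<typename T>
class Holder
{
public:
    Holder(T value, std::function<void(T&)> destroy)
        : value_(std::move(value)), destroy_(std::move(destroy))
    {
    }

    ~Holder()
    {
        destroy_(value_); // runs before the enclosing scope (e.g. main) exits
    }

    T& get() { return value_; }

private:
    T value_;
    std::function<void(T&)> destroy_;
};
```

    The same holder can back a static accessor if the rest of the application needs a single shared instance.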
  • Thanks, xdm!

    I have another question.
    Suppose the server runs an Ice application that sends information to subscribed clients. On the client side, there is a class that connects to the application on the server, and many instances of this class can be created. In effect all instances have "the same" Ice connection, but it is created anew each time. How can I avoid this and share the connection? Currently I use a singleton-like pattern: there is one object per host, obtained via a static function instance(string host).
  • xdm
    xdm La Coruña, Spain
    Hi,

    Ice reuses and caches connections: the client just uses a proxy to connect to a server, and Ice automatically reuses an existing connection if possible. Note that separate communicators will not share connections.


    see Connection Management - Ice 3.5 - ZeroC
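    As an illustration of the idea (a sketch of the concept, not Ice's actual implementation), connection caching amounts to a map keyed by endpoint, where repeated requests for the same endpoint return the same shared object:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

// Stand-in for Ice's internal connection type.
struct Connection
{
    std::string endpoint;
};

// Conceptual sketch of a connection cache: the first request for an
// endpoint "opens" a connection and caches it; later requests for the
// same endpoint return the same shared connection.
class ConnectionCache
{
public:
    std::shared_ptr<Connection> get(const std::string& endpoint)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = cache_.find(endpoint);
        if(it != cache_.end())
        {
            return it->second; // reuse the existing connection
        }
        auto conn = std::make_shared<Connection>(Connection{endpoint});
        cache_[endpoint] = conn; // first request: open and cache
        return conn;
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::shared_ptr<Connection>> cache_;
};
```

    This is why a proxy per IceClient instance is cheap: the proxies are lightweight, and the underlying connection to a given endpoint exists only once per communicator.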
  • Thanks a lot! It's exactly what I need.
  • Hello again!

    I read the article about caching connections, but I still don't understand how it works in practice.
    Suppose there is the following simple class representing an Ice client:
    class IceClient
    {
        enum State { Disconnected, Connected };
    public:
        IceClient(string host, int port) : host_(host), port_(port) {}
        void connect()
        {
            try
            {
                // generate proxy string from host_ and port_
                // ...
                proxy_ = communicator->stringToProxy(strproxy);
                server_ = InterfacePrx::checkedCast(proxy_);
                if(!server_) state_ = Disconnected;
                else state_ = Connected;
            }
            catch(const Ice::Exception&)
            {
                state_ = Disconnected;
            }
        }

        void invokeMethod()
        {
            if(state_ != Connected) return;
            try
            {
                server_->method();
            }
            catch(const Ice::Exception&)
            {}
        }
    
        State state() const { return state_; }
    
    private:
        InterfacePrx server_;
        ObjectPrx proxy_;
        State state_;
        string host_;
        int port_;
    };
    

    The following code can be invoked hundreds of times for the same host:
    IceClient* client = new IceClient(host, port);
    client->connect();
    client->invokeMethod();
    

    And it will invoke the slow checkedCast for the same host hundreds of times. How can I avoid that?
  • benoit
    benoit Rennes, France
    Hi,

    What is the goal of the connect() method and of maintaining the connection state in this class?

    Can't you just implement the following instead:
    class IceClient
    {
    public:
    
        IceClient(string host, int port)
        {
            // generate proxy string from host_ and port_
            Ice::ObjectPrx prx = communicator->stringToProxy(strproxy);
            server_ = InterfacePrx::uncheckedCast(prx);
        }
            
        void invokeMethod()
        {
            try
            {
                server_->method();
            }
            catch(const Ice::Exception&)
            {
                 // TODO: handle the exception
            }
        }
    
    private:
    
        InterfacePrx server_;
    };
    

    In the code above, we use an uncheckedCast instead of a checkedCast. A checkedCast can still be useful, however, if the Ice object might not implement the requested interface.

    If you need to monitor the state of the server, you could implement a separate dedicated class which has a thread that regularly pings the server to ensure it's still alive. An instance of this class could then be used by your IceClient class to check the server state before invoking on the server (it will just provide an indication of the server state, however... the server could still go down right before you make the invocation and before the monitoring class notices it).
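    A minimal sketch of such a monitoring class, assuming a generic ping callable injected by the caller. In real code the callable would wrap proxy->ice_ping(), which throws an Ice::Exception on failure; here it is kept library-free so the idea stands on its own.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// Background monitor: a worker thread periodically calls 'ping'
// (expected to throw on failure, like proxy->ice_ping()) and
// publishes the last observed state via alive(). The condition
// variable lets stop() interrupt the wait between pings promptly.
class ServerMonitor
{
public:
    ServerMonitor(std::function<void()> ping, std::chrono::milliseconds interval)
        : ping_(std::move(ping)), interval_(interval),
          alive_(false), stopped_(false),
          thread_([this]{ run(); })
    {
    }

    ~ServerMonitor() { stop(); }

    bool alive() const { return alive_; }

    void stop()
    {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            if(stopped_) return;
            stopped_ = true;
        }
        cv_.notify_all();
        thread_.join();
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock(mutex_);
        while(!stopped_)
        {
            lock.unlock();
            bool ok = true;
            try { ping_(); } catch(...) { ok = false; }
            alive_ = ok;
            lock.lock();
            // Sleep for 'interval_', but wake immediately if stop() is called.
            cv_.wait_for(lock, interval_, [this]{ return stopped_; });
        }
    }

    std::function<void()> ping_;
    std::chrono::milliseconds interval_;
    std::atomic<bool> alive_;
    bool stopped_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::thread thread_;
};
```

    IceClient::invokeMethod() could then consult monitor.alive() instead of maintaining its own Connected/Disconnected state, keeping in mind the caveat above that the answer is only an indication.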

    Let us know if you need more information on this.

    Cheers,
    Benoit.
  • Ok, thank you, I'll try it.

    Is there any difference between calling ice_ping() on an Ice::ObjectPrx and on an InterfacePrx?
    Ice::ObjectPrx prx = communicator->stringToProxy(strproxy);
    server_ = InterfacePrx::uncheckedCast(prx);
    // variant 1
    prx->ice_ping();
    // variant 2
    server_->ice_ping();
    

    And another question.
    I use bidirectional connections and set an object adapter on the connection to receive callbacks (like the example in the documentation). Are there any problems if I set an object adapter on what is in fact the same (cached) connection many times?
    IceClient(string host, int port) : host_(host), port_(port)
    {
        // generate proxy string from host_ and port_
        Ice::ObjectPrx prx = communicator->stringToProxy(strproxy);
        server_ = InterfacePrx::uncheckedCast(prx);
        SubscriberPtr subscriber = new ISubscriber;
        Ice::ObjectAdapterPtr adapter = communicator->createObjectAdapter("");
        adapter->add(subscriber, identity);
        adapter->activate();
        server_->ice_getConnection()->setAdapter(adapter);
    }
    

    And what if the connection has not been established after the uncheckedCast, but is then established by a later method call? Will I still get callbacks via the adapter?
  • renzo wrote: »
    And what if the connection has not been established after the uncheckedCast, but is then established by a later method call? Will I still get callbacks via the adapter?

    It seems like setAdapter() will throw an exception about an existing adapter with the same "name", and only the first added SubscriberPtr will receive callbacks.
  • benoit
    benoit Rennes, France
    Hi,

    Invoking ice_ping on an Ice::ObjectPrx or InterfacePrx proxy is the same.

    While it's fine to call Ice::Connection::setAdapter multiple times on the connection, you should consider designing your application so that it's only necessary to call it once.

    Calling proxy->ice_getConnection() does establish the connection with the server so you don't necessarily need to make a remote call such as checkedCast to first establish it.

    You can't create multiple object adapters with the same name. You either need to use different names or if the object adapters are designed for only receiving requests over a bi-directional connection, you can create the object adapters with an empty name: communicator->createObjectAdapter(""). It's not totally clear to me why you would need multiple object adapters however.

    Cheers,
    Benoit.
  • Thank you, Benoit!
    I have redesigned the application a little bit and now it works fine and fast.

    I have one last question. There is a class that monitors the connection to the server in a separate thread by calling ice_ping() on the proxy. Because the connection is unreliable, I set the proxy timeout to 5 seconds. If the application exits just after ice_ping() starts, it hangs for up to 5 seconds. Is there any way to interrupt calls such as ice_ping(), checkedCast(), etc. (maybe by calling some function on the same proxy)?
  • benoit
    benoit Rennes, France
    Hi,

    You can't interrupt a synchronous call such as ice_ping. If you don't want to block the thread while the call is in progress, you can however use AMI and the begin_ice_ping/end_ice_ping methods. See the Ice manual for more information on how to use AMI with your favorite language mapping.
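    As a library-free illustration of the AMI idea (a sketch of the pattern, not Ice's actual begin_ice_ping/end_ice_ping API): run the blocking call on a detached background thread that publishes its result through shared state, so neither the caller nor process shutdown ever waits on it.

```cpp
#include <atomic>
#include <functional>
#include <memory>
#include <thread>

// Fire-and-forget asynchronous ping: the blocking call runs on a
// detached worker thread. The caller polls the shared state instead
// of blocking, so shutdown is never held up by an in-flight ping.
// 'ping' stands in for a blocking call such as proxy->ice_ping().
class AsyncPing
{
public:
    struct State
    {
        std::atomic<bool> done{false}; // the call has completed
        std::atomic<bool> ok{false};   // the call completed without throwing
    };

    static std::shared_ptr<State> start(std::function<void()> ping)
    {
        auto state = std::make_shared<State>();
        std::thread([state, ping]()
        {
            try
            {
                ping();
                state->ok = true;
            }
            catch(...)
            {
                state->ok = false;
            }
            state->done = true; // set last, after ok is final
        }).detach();
        return state;
    }
};
```

    With real Ice code the begin_ice_ping/end_ice_ping AMI calls from the manual are preferable, since they reuse Ice's own thread pool rather than spawning a thread per ping.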

    Cheers,
    Benoit.