Archived

This forum has been archived. Please start a new discussion on GitHub.

When will the object associated with a proxy be destroyed?

I'm creating an Ice object and a proxy pointing to it in my C++ server. I then return the created proxy by value to the caller (in my case a PHP client).

What I don't quite understand is at which point in time Ice destroys the Ice object.

I would have expected the proxy to be reference counted: when I create the object in my C++ server the reference count is 1; when I pass it by value to the PHP client, Ice increments it to 2; when the stack proxy in my server method is destroyed, the count drops back to 1; and finally, when my PHP client has done its work, the count drops to 0 and Ice destroys the associated Ice object.

My observation, though, is that while the proxies are destroyed correctly, the associated Ice object is only destroyed when the adapter it belongs to is shut down.

What am I doing wrong?

I'm also a bit puzzled about how I could keep an Ice object alive in PHP. In ASP I could simply store the proxy as a session variable, but how can this be achieved in PHP?

Comments

  • marc (Florida)
    The short answer is, Ice never destroys your Ice object. Your code does, for example, with a destroy() operation which your code must implement.

    The life cycles of an Ice object, its servants, and its proxies are completely independent. I'm afraid a detailed explanation is beyond the scope of a forum post; however, you'll find all the details in the Ice manual and the many demos, including the ones published in our newsletter Connections. A good starting point is this FAQ:

    http://www.zeroc.com/faq/objectsServantsProxies.html
  • mes (California)
    PeteH wrote:
    My observation shows though that while the proxies are destroyed correctly, the associated Ice object is only destroyed when the adapter it belongs to is shut down.

    What am I doing wrong?
    You're not doing anything wrong, that is exactly how it is expected to work. Proxies, objects, and servants have lifecycles that are independent from one another. See this FAQ for more information.
    I'm also a bit puzzled how I could keep an ICE object alive in PHP. In ASP I could simple store the proxy as a session variable, but how can this be achieved in PHP?
    PHP also has a session capability. If you need to store a proxy in a session's state, you should use the stringified form of the proxy.

    Take care,
    - Mark
  • You may also want to check out the article The Grim Reaper, which explains how to use sessions to get rid of abandoned objects in the server.

    Cheers,

    Michi.
  • Many thanks for all your help. I'm still a bit confused, worried or perhaps just not quite educated enough...

    The examples I've seen all use the same pattern:
    1. Client requests a proxy
    2. Server creates a servant
    3. Server creates a proxy
    4. Server registers the two with a communicator's adapter
    5. Server returns proxy

    (2, 3 and 4 are often done before 1 particularly for factory type servants)

    When the client is done and destroys the proxy, nothing happens on the server side. Absolutely nothing! This is in contrast to COM and a little confusing or even concerning (although this may easily have a lot to do with the fact that I'm still learning ICE).

    What I would have expected was this type of destruction mechanism:
    • Client deletes client-side Proxy
    • Ice notifies Server to delete matching server-side Proxy
    • The delete process decrements the ref count on the servant
    • If the servant's ref count drops to 0, the servant is unregistered/deleted

    I'm not 100% certain that this is the way COM works, but I certainly used to design COM based client/server apps based on this assumption and never experienced any problems with excessive memory use or leaks.

    Let's assume I have a linked list of (ICE) objects. Now a client may request a proxy to the first object, so the server creates a proxy, registers the proxy/first object pair with the adapter and returns the proxy. The client might now call the next() method on the returned proxy. What I see now happening is that on the client side, the proxy objects get correctly allocated/deallocated but on the server side, I end up with as many new proxy objects as I made calls to next(). If I don't implement something along the lines discussed in the grim reaper I end up with substantial memory being used that is no longer needed/reachable, particularly if I'm dealing with large numbers of objects.

    So instead of doing something along these lines on the client side:
    IceObjectPrx prxListElement;
    
    prxListElement = prxAnother->head();
    
    while (prxListElement)
    {
      // Work with proxy
      prxListElement = prxListElement->next();
    }
    

    I really need to do something along these lines instead:
    IceObjectPrx prxListElement;
    IceObjectPrx prxPrevListElement;
    
    prxListElement = prxAnother->head();
    
    while (prxListElement)
    {
      prxPrevListElement = prxListElement;
    
      // Work with prxPrevListElement
      prxListElement = prxListElement->next();
      prxPrevListElement->discard();
    }
    

    Am I missing the point?
  • matthew (NL, Canada)
    Ice doesn't have distributed reference counting as DCOM does -- the primary reason being that this does not scale to large distributed systems.

    With Ice, if you don't take steps to destroy the Ice object then it is not destroyed -- and if you choose a one-to-one correspondence between servant and Ice object, then you will have a memory leak as you describe.

    As to your linked list example ... this represents very bad distributed systems design.

    Without knowing your system in more detail, it's hard to provide good advice about the interface design. I would recommend that you contact sales@zeroc.com if you would like some help in this matter.
  • matthew wrote:
    Ice doesn't have distributed reference counting as DCOM does -- the primary reason being that this does not scale to large distributed systems.

    Point taken.

    BTW: I didn't say I was using a 1:1 mapping of proxy/servant which is precisely why I mentioned the expected procedure resulting from a client side proxy destruction.
    matthew wrote:
    As to your linked list example ... this represents very bad distributed systems design.

    And why is that?

    Let's assume you're implementing an application with a class whose instances each represent a single record from a table in a database, and that making all records from the database resident is not feasible. You provide functionality so that a client can access a specific object: if a client requests one, the program either locates and returns the object from its cache, or creates the object, populates it from the database, and adds it to the cache before returning it. Instead of returning the actual object, you'd return a smart pointer to the object and maintain a reference count on it. This ensures that two clients "see" the same object and that only objects referenced by clients are resident in the cache. If a client needs to interact with an object repeatedly, it can retain a reference and "hang on" to the object, improving efficiency.

    Arguably there is plenty of room for improvement but I'm sure that generally you'd agree with me that such a design is sound, right?

    Semantically, a proxy indirection is synonymous with a smart-pointer indirection, with the difference that the former works across address spaces.

    Now you seem to say that implementing the same mechanism but using client and server side proxies would be very bad design. Can you explain why?
  • matthew (NL, Canada)
    It depends on how fine-grained the model is. The disadvantage of modelling it directly as a linked list, as you appear to have done, is that running through a large number of objects in the database means many RPC calls -- which are very expensive. In addition, if you have to access lots of different fields of the resulting object, this can mean lots of further RPC calls. A better model may be to represent the database content as classes or structures and pass the data across the wire in batches (i.e., cache the result of a query, then return the first 500 results in the first call, the next 500 in the second, and so on).

    That being said if you have a database backing the object model then you could choose a server side model such as the Freeze evictor where only a limited number of active servants are in memory at any one time and at any point the servant itself may be evicted and the state reloaded when necessary from the database. This doesn't require any sort of distributed garbage collection and keeps as much state in memory as you feel necessary (through tuning the size of the evictor).