
'Transparent' Object Migration Granularity

I thought I had once seen documentation that said transparent object migration can happen at the granularity of a single object.

From what I'm reading now, it seems that indirect references always require the adapter name. So, to transparently migrate 'session' objects behind the client's back and let Ice automatically handle server renegotiation, I'd have to group sessions into object adapters. The adapters, with their objects, could then be deactivated and reinstantiated somewhere else.

Does this sound correct? Short of creating an OA for every object, that is.


Thanks,

Comments

  • benoit
    benoit Rennes, France
    Take a look at sections 33.3 and 33.4 of the Ice manual. Indirect proxies don't require an adapter id; you can also have an indirect proxy with just the object identity. So it's possible to migrate objects individually.

    Benoit.
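
    To make the two forms concrete: a stringified indirect proxy is either a bare object identity or an identity qualified with an adapter id. Here is a minimal, purely illustrative Python sketch of the distinction (this is not the Ice API; the identity and adapter names are made up):

    ```python
    # Illustrative sketch only -- not the Ice API. A stringified indirect
    # proxy is either a bare object identity ("session42") or an identity
    # qualified with an adapter id ("session42@SessionAdapter").

    def parse_indirect_proxy(proxy_str):
        """Split a stringified indirect proxy into (identity, adapter_id)."""
        if "@" in proxy_str:
            identity, adapter_id = proxy_str.split("@", 1)
            return identity, adapter_id   # adapter-id form: resolved per adapter
        # Identity-only form: the locator resolves the object identity
        # directly, so the object can migrate between adapters individually.
        return proxy_str, None

    print(parse_indirect_proxy("session42@SessionAdapter"))
    print(parse_indirect_proxy("session42"))
    ```

    The identity-only form is what makes per-object migration possible: nothing in the proxy pins the object to a particular adapter.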
  • Scalability

    Thanks, Benoit.

    Do you have any hard information on how well the object identity method alone scales?

    I'm interested in session objects with an upper limit of a couple thousand instances. Would this produce too much overhead? From what I understand, the overhead is ONLY incurred when retrieving the initial instance or when the proxy fails [object being migrated, server fails] and the locator facility must be recontacted.

    Both of these situations seem acceptable for a latency hit because they are infrequent. Am I missing something critical?

    Thanks

    BTW, a follow-up question on implementation. Is there a standard use case for session-type object migration? I was thinking of implementing the sessions as classes and passing them from server to server (using an object factory, correct?). The session would be deregistered from the locator, the object would be passed, reinitialized, and then re-registered with the locator. Hopefully the client would never be the wiser.
  • benoit
    benoit Rennes, France
    Thanks, Benoit.

    Do you have any hard information on how well the object identity method alone scales?

    Sorry, I don't have any hard evidence. However, it was designed to be scalable: the Ice runtime uses a cache to minimize calls on the Ice locator.
    I'm interested in session objects with an upper limit of a couple thousand instances. Would this produce too much overhead? From what I understand, the overhead is ONLY incurred when retrieving the initial instance or when the proxy fails [object being migrated, server fails] and the locator facility must be recontacted.

    That's correct. The Ice runtime contacts the locator only once per indirect proxy to retrieve the associated endpoints (for the adapter id if the indirect proxy contains one, or for the object identity if it doesn't). The endpoints are then cached by the Ice runtime. The cache entry for the adapter id or object identity is cleared only if an invocation on the indirect proxy fails with an Ice::LocalException.
    Both of these situations seem acceptable for a latency hit because they are infrequent. Am I missing something critical?

    No, I think you got it right ;).
    Thanks

    BTW, a follow-up question on implementation. Is there a standard use case for session-type object migration? I was thinking of implementing the sessions as classes and passing them from server to server (using an object factory, correct?). The session would be deregistered from the locator, the object would be passed, reinitialized, and then re-registered with the locator. Hopefully the client would never be the wiser.

    I'm not aware of any standard use case for this. You'll have to be careful when migrating your object: you don't want to allow requests on the old incarnation of your object after you have migrated it to the new server.

    Also, I believe there's currently one issue with respect to how the client will retry after the object has been migrated. The Ice runtime currently doesn't retry upon receiving an Ice::ObjectNotExistException exception. This means that your client would have to catch this exception and retry the invocation (the Ice runtime will then contact the locator again to get the new object endpoints). We are currently discussing this issue internally... stay tuned!

    Benoit.
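
    The handoff being discussed (deregister, transfer state, re-register, and never let the old incarnation serve a request) can be sketched with a toy in-memory locator. Every name here is hypothetical; in real Ice, registration goes through the object adapter and the locator registry:

    ```python
    # Toy sketch of the migration handoff discussed above. The locator is
    # modeled as a plain dict mapping object identity -> endpoint; real Ice
    # registration happens via the object adapter and the locator registry.

    class ObjectNotExistError(Exception):
        """Stand-in for Ice::ObjectNotExistException."""

    locator = {}  # identity -> endpoint of the server currently hosting it

    class Server:
        def __init__(self, endpoint):
            self.endpoint = endpoint
            self.objects = {}  # identity -> session state

        def activate(self, identity, state):
            self.objects[identity] = state
            locator[identity] = self.endpoint  # (re-)register with locator

        def deactivate(self, identity):
            # Remove from the adapter *and* the locator, so the old
            # incarnation can never serve a request again.
            state = self.objects.pop(identity)
            if locator.get(identity) == self.endpoint:
                del locator[identity]
            return state

        def invoke(self, identity):
            if identity not in self.objects:
                raise ObjectNotExistError(identity)
            return self.objects[identity]

    def migrate(identity, old, new):
        state = old.deactivate(identity)  # 1. deregister and remove
        new.activate(identity, state)     # 2. transfer, reinit, re-register

    a = Server("tcp -h hostA")
    b = Server("tcp -h hostB")
    a.activate("session42", {"user": "seth"})
    migrate("session42", a, b)
    ```

    After `migrate`, the locator resolves `session42` to hostB, and invoking on hostA raises the stand-in `ObjectNotExistError`, which mirrors why the client has to retry after migration.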
  • wonderful

    thanks, benoit. that was very helpful. as for the old object being hit, that should be taken care of by deregistering the object with the locator facility and removing it from the adapter. is this correct? there's nothing vestigial left over, is there? all i have to do next is add it to the new server's oa and register it again.

    also, i guess i can see both sides of why the runtime would not automatically retry. but it would be nice. a retry with a set limit, maybe with exponential backoff before throwing the exception. that would give an automated system time to start a new server and reinstantiate the objects. was this a scalability choice so that many clients wouldn't flood the servers with retry requests?

    take care,

    seth
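
    The bounded retry with backoff suggested above could look like this on the client side. This is a sketch, not the Ice runtime: the exception class is a stand-in for Ice::ObjectNotExistException, and a real client would also clear the proxy's endpoint cache between attempts so the locator is consulted again:

    ```python
    import time

    class ObjectNotExistError(Exception):
        """Stand-in for Ice::ObjectNotExistException."""

    def invoke_with_retry(call, max_attempts=5, base_delay=0.1, sleep=time.sleep):
        """Retry `call` with exponential backoff, rethrowing after the limit.

        `sleep` is injectable so the backoff schedule can be observed in tests.
        """
        for attempt in range(max_attempts):
            try:
                return call()
            except ObjectNotExistError:
                if attempt == max_attempts - 1:
                    raise  # limit reached: the object really is gone
                sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
    ```

    The set limit keeps a crowd of clients from hammering the locator forever, while the growing delay gives an automated system time to bring the migrated object back up.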