Archived

This forum has been archived. Please start a new discussion on GitHub.

ServantLocator implementation

hi guys,

i'd like to implement a server using a ServantLocator implementation, which
runs against a database to fetch the data needed by the servant.
[the standard approach described in the manual]

to encapsulate the database, we plan to hide it behind
an Ice-based server as well.

so i need to make a call to an Ice proxy inside the ServantLocator implementation.

for scalability and robustness, all proxies shall use multiple endpoints.

do you see any trouble with this approach?

i'd like to know your opinion.

thanks a lot

tom

Comments

  • mes
    mes California
    Hi Tom,

    Unless you're planning to do some sort of caching (such as using an evictor-style locator), this approach could have some serious performance problems. Specifically, latency is going to take a big hit if the locator makes a remote request to the database server for every incoming request.

    However, if the requests on the database server are relatively infrequent, then it sounds like a reasonable approach.

    Take care,
    - Mark
  • bernard
    bernard Jupiter, FL
    Hi Tom,

    You can make remote calls from your servant locator, so I can't see anything wrong from an Ice point of view.
    However, I am curious how you plan to update the databases and ensure they are all in sync. Or is it just read-only access?

    For your servant locator implementation, you may want to have a look at the IceUtil::Cache C++ template and the IceUtil.Cache class: they can help you write a servant locator that doesn't lock everything while (slowly) loading an object from its database.
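To illustrate the idea, here is a hypothetical sketch (not the real IceUtil::Cache, just the pattern it implements): the map lock is held only briefly, the slow database load runs with the lock released, and only threads asking for the *same* object wait for each other.

```cpp
#include <condition_variable>
#include <functional>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <utility>

// Sketch of a cache that does not serialize all loads behind one lock.
// The loader stands in for the (slow) database fetch; error handling
// on a failed load is omitted for brevity.
class ServantCache {
public:
    using Loader = std::function<std::string(const std::string&)>;

    explicit ServantCache(Loader loader) : _loader(std::move(loader)) {}

    std::string get(const std::string& id) {
        std::unique_lock<std::mutex> lock(_mutex);
        for (;;) {
            auto it = _entries.find(id);
            if (it == _entries.end()) break;          // not cached: we load it below
            if (it->second->ready) return it->second->value;
            it->second->cond.wait(lock);              // another thread is loading this id
        }
        auto entry = std::make_shared<Entry>();
        _entries[id] = entry;                         // mark "load in progress"
        lock.unlock();
        std::string value = _loader(id);              // slow load, lock released:
        lock.lock();                                  // other ids stay available
        entry->value = value;
        entry->ready = true;
        entry->cond.notify_all();
        return value;
    }

private:
    struct Entry {
        bool ready = false;
        std::string value;
        std::condition_variable cond;
    };
    std::mutex _mutex;
    std::map<std::string, std::shared_ptr<Entry>> _entries;
    Loader _loader;
};
```

Each object is loaded at most once; repeated lookups are served from the map without touching the database again.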

    Cheers,
    Bernard
  • hi Mark, hi Bernard,

    thanks for your replies.

    the servant implementation will be based on the evictor.

    for database consistency we plan to use the replication feature of the
    SQL server, or we'll run it on a Windows 2003 server cluster to get the
    desired fault tolerance. we are working on that.

    i'll have a look at the Cache class. thanks for the hint.

    Gentlemen, thanks a lot! It's always a pleasure talking to you.


    Take care,

    Tom
  • hi again,

    i sketched a small UML diagram to show the workflow.

    another question came to my mind while doing so:

    what is the event/reason on proxy side to switch to another endpoint?

    is it only - as shown in my UML model - when the call to the currently used
    endpoint fails?

    what i want to avoid is the existence of the same session on two servers.


    thx again

    tom
  • benoit
    benoit Rennes, France
    The Ice runtime might switch to another endpoint if the connection associated to the proxy is closed and a new connection needs to be established. The connection can be closed for various reasons: a network problem, active connection management (ACM), timeouts, etc.

    Benoit.
  • hi again,

    this wouldn't work that way, i guess:
    the returned session proxy comes with one endpoint only,
    so it has no knowledge of the other servers that are capable of hosting
    the session as well.

    could the client add additional endpoints to the returned proxy?

    by doing so the mechanism could still work - but does it make sense?!

    regards,

    tom
  • benoit
    benoit Rennes, France
    You could add the endpoints in the server or in the client. This requires stringifying the proxy (with the communicator->proxyToString() method), adding the endpoints, and converting the stringified proxy back to a proxy (with stringToProxy()).
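A hypothetical sketch of the string manipulation in the middle step (the helper name and the tcp endpoint parameters are made up for illustration; the real code would wrap this between proxyToString() and stringToProxy()). In Ice's stringified-proxy syntax, additional endpoints are appended with ':' separators:

```cpp
#include <string>

// Append one more tcp endpoint to an already stringified proxy, e.g.
// "Session42:tcp -h srv1 -p 10000" -> "Session42:tcp -h srv1 -p 10000:tcp -h srv2 -p 10000".
// Hypothetical helper; input validation and other endpoint types omitted.
std::string addEndpoint(const std::string& proxyString, const std::string& host, int port) {
    return proxyString + ":tcp -h " + host + " -p " + std::to_string(port);
}
```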

    Another option (which isn't available yet but will be available with IceGrid) would be to use an indirect proxy with a replicated object adapter (see the article on IceGrid in the second issue of the Ice newsletter for more information).

    I think the main issue with your approach is that it requires having no more than one instance of a session object at any point in time. If the connection associated with the session proxy is closed, the Ice runtime will eventually establish a new connection on another endpoint. How do you make sure that the previous instance of the session object is destroyed?

    Benoit.
  • hi benoit,

    thanks for your comments.
    benoit wrote:
    How do you make sure that the previous instance of the session object is destroyed?

    this is exactly the problem i'm facing.

    i think i'll take care of switching servers myself in the client implementation.

    a proxy can have routers and locators. could i use one of them to plug in
    my server-switching approach?

    the only thing i'd like to achieve is a set of servers capable of hosting the
    same objects, where in case of a server breakdown the client switches to one of
    the other servers.
    because the hosted objects have no state in RAM, only in the common shared
    database(s), this should work.

    the only thing that has to be ensured is that endpoint switching only occurs
    on server failure.

    take care

    tom
  • benoit
    benoit Rennes, France
    DeepDiver wrote:
    a proxy can have routers and locators. could i use one of them to plug in
    my server-switching approach?

    Yes, it's quite possible that the location of the session object could be handled by a specific locator implementation. However, this might not be trivial to implement, especially if you want the locator implementation to be highly available as well ;).
    DeepDiver wrote:
    the only thing i'd like to achieve is a set of servers capable of hosting the
    same objects, where in case of a server breakdown the client switches to one of
    the other servers.
    because the hosted objects have no state in RAM, only in the common shared
    database(s), this should work.

    the only thing that has to be ensured is that endpoint switching only occurs on server failure.

    If the hosted objects have no state, why do you need to ensure that there's always only one instance of such an object? Couldn't you instead allow several replicas of the object on the different servers?

    From your diagram, it actually looks like your session objects cache the state retrieved from the database, so they aren't really stateless :). If that's the case, it seems it's indeed better to handle the failures in the client directly: if a session object becomes unavailable, the client re-creates a session object on another server and forgets about the previous one.
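A minimal sketch of that client-side failover loop, assuming the client holds a list of candidate servers. Here createSession is a stand-in for whatever call returns the session proxy in your application; it is not a real Ice API:

```cpp
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Try each server in turn; on failure, forget the old session entirely and
// create a fresh one on the next server. The string return value stands in
// for the session proxy the real call would return.
std::string connectWithFailover(
    const std::vector<std::string>& servers,
    const std::function<std::string(const std::string&)>& createSession) {
    for (const auto& server : servers) {
        try {
            return createSession(server);   // fresh session on this server
        } catch (const std::exception&) {
            // server unreachable: drop any notion of the old session, move on
        }
    }
    throw std::runtime_error("no server available");
}
```

Because a new session is always created rather than an old proxy silently reconnecting, the "same session on two servers" situation cannot arise from endpoint switching.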

    Benoit.
  • hi benoit,

    maybe naming the object 'session' isn't very descriptive.
    the session has the responsibility to interact with a 3rd-party system, which
    includes monitoring tasks. results of the monitoring are propagated to the client
    - who started the session - and need to be stored in the database.

    from a logical view the session is not stateless. from an implementation view
    it has no state stored in memory that could be lost in case of a server fault.

    i think i'll handle the situation in the client myself.

    thanks a lot for spending your time on this.

    cu tom