Load Balancing

Hello,

in the past I was attracted by the possibility of implementing load
balancing in CORBA with a ServantLocator plus interceptors, using the
ForwardException mechanism, as described by Douglas Schmidt in various
papers.

Is it possible to implement a similar feature with Ice?
I mean, is it possible to implement it at user level, not inside the Ice internals?

Thanks in advance,
Guido.

Comments

  • marc (Florida)
    You can implement your own load-balancing service similar to IcePack by implementing the Slice interfaces Ice::Locator and Ice::LocatorRegistry. See the file slice/Ice/Locator.ice for details.

    However, other than the comments in the Slice file, we do not yet have documentation about these interfaces. But you can look at how IcePack implements these interfaces, or even use IcePack as a basis for your own implementation.
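
    For illustration, the core of such a service boils down to a table that the LocatorRegistry side fills and the Locator side queries. A rough sketch of that table in Java (the class and method names here are made up, not part of Ice; the real operation signatures are in slice/Ice/Locator.ice):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical backing store shared by a custom Locator/LocatorRegistry pair:
    // servers register their object adapter proxies, clients resolve them.
    class AdapterDirectory
    {
        private final Map adapters = new HashMap(); // adapter id -> Ice.ObjectPrx

        // Called from the LocatorRegistry servant (setAdapterDirectProxy).
        synchronized void setDirectProxy(String adapterId, Ice.ObjectPrx proxy)
        {
            adapters.put(adapterId, proxy);
        }

        // Called from the Locator servant (findAdapterById); a null result
        // would be reported to the client as Ice.AdapterNotFoundException.
        synchronized Ice.ObjectPrx findAdapterById(String adapterId)
        {
            return (Ice.ObjectPrx)adapters.get(adapterId);
        }
    }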
  • Originally posted by marc
    You can implement your own load-balancing service similar to IcePack by implementing the Slice interfaces Ice::Locator and Ice::LocatorRegistry. See the file slice/Ice/Locator.ice for details.

    However, other than the comments in the Slice file, we do not yet have documentation about these interfaces. But you can look at how IcePack implements these interfaces, or even use IcePack as a basis for your own implementation.

    Hi Marc,
    I have played with the Locator implementation from the location test,
    because I cannot find the IcePack source in the Java distribution.
    Anyway, I have seen the nice behaviour of the client silently
    recontacting the locator to get a new endpoint when the implementation
    disconnects.
    But it seems that once a reference is resolved, the endpoint is cached
    and reused for any subsequent proxy (ReferenceFactory).
    The problem arises if the client is a servlet: the whole web application
    will always use the same implementation.
    Maybe being able to configure, or even disable (too hard?), the cache
    feature could help...

    Just another question.
    I find the explicit Current and Context parameters a little
    uncomfortable.
    Maybe it's a matter of taste, but I would prefer an Interceptor-like
    approach for the Context and a ThreadLocal-like approach for the Current.
    In particular, the explicit approach gives the invoker the
    responsibility of filling the Context: it is up to the caller, rather
    than the service, to set the current transaction context.
    Is it possible to know the reasons behind your choice?

    Ah, last but not least: Ice plays great, both for speed and
    ease of use.

    Regards,
    Guido.
  • benoit (Rennes, France)
    Hi,

    We could certainly add a configuration property to disable the cache for the next release; this shouldn't be too difficult. Another option would be to modify your locator implementation to return a direct proxy containing all the endpoints of your replicated services. For example:

    dummy:tcp -h host1 -p 12345:tcp -h host2 -p 12345:tcp -h host3 -p 12345

    Each time Ice needs to establish a connection to your indirect proxy, it will retrieve these endpoints from the cache and will try to establish the connection with one of them (selected randomly).
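
    On the client side such a multi-endpoint direct proxy can also be created directly with stringToProxy; a minimal sketch (the "dummy" identity, the hosts, and the port are just placeholders, as above):

    public class MultiEndpointClient
    {
        public static void main(String[] args)
        {
            Ice.Communicator communicator = Ice.Util.initialize(args);
            try
            {
                // One direct proxy carrying the endpoints of all three replicas;
                // Ice picks one of them when it establishes the connection.
                Ice.ObjectPrx proxy = communicator.stringToProxy(
                    "dummy:tcp -h host1 -p 12345:tcp -h host2 -p 12345:tcp -h host3 -p 12345");
                proxy.ice_ping(); // triggers connection establishment
            }
            finally
            {
                communicator.destroy();
            }
        }
    }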

    Benoit.
  • Originally posted by benoit
    Hi,

    We could certainly add a configuration property to disable the cache for the next release; this shouldn't be too difficult. Another option would be to modify your locator implementation to return a direct proxy containing all the endpoints of your replicated services. For example:

    dummy:tcp -h host1 -p 12345:tcp -h host2 -p 12345:tcp -h host3 -p 12345

    Each time Ice needs to establish a connection to your indirect proxy, it will retrieve these endpoints from the cache and will try to establish the connection with one of them (selected randomly).

    Benoit.


    Hello,
    thanks for the answer.

    I think it would be nice to be able to configure which references can
    be cached, say *@LoadBalancingAdapter, ftxengine@FtxAdapter, etc.

    About the solution of returning a proxy with multiple endpoints, I
    don't understand what event triggers a new connection when the
    ReferenceFactory cache already contains an equal Reference that was
    previously connected to a certain endpoint.

    I think a solution would be some way to force the Ice core to invoke
    findXXX on the locator.
    It would then be up to the application to evaluate whether the extra
    call overhead is acceptable, on a per-proxy or per-call basis.

    Well, this post should probably be in the Comments forum...

    Thanks again,
    Guido.
  • benoit (Rennes, France)
    A new connection is established only when there's no connection already established to the server. So the load balancing would only happen at connection establishment, not for each request (a connection is established on the first proxy invocation and is closed when there are no more proxies using it -- but this is really an implementation detail; your application shouldn't rely on it since it might change).

    Even with a way to disable the endpoint cache, the endpoint lookup with the locator would still only occur on connection establishment (without the cache the lookup would be done each time Ice needs to establish a new connection -- right now it's done only once, until the endpoints are invalidated by a communication failure with the server).

    What kind of load balancing do you need? Per-request load balancing? Establishing a new connection for each request could be quite expensive.

    Let us know if this is not clear enough!

    Benoit.
  • marc (Florida)
    Originally posted by ganzuoni

    Just another question.
    I find the explicit Current and Context parameters a little
    uncomfortable.
    Maybe it's a matter of taste, but I would prefer an Interceptor-like
    approach for the Context and a ThreadLocal-like approach for the Current.
    In particular, the explicit approach gives the invoker the
    responsibility of filling the Context: it is up to the caller, rather
    than the service, to set the current transaction context.
    Is it possible to know the reasons behind your choice?

    As for Context, I agree with you, the explicit passing is too cumbersome. We will therefore add a per-proxy default context in a future version of Ice. Then you don't always have to pass the context explicitly.

    As for Current, we decided against thread-specific storage because of the bad experience we had with it in former products. It would make Ice more complicated, and probably also slower. To us, the explicit Current parameter seems cleaner than thread-specific storage, but as you say, in the end it's mostly a matter of taste.
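
    To illustrate both points: with the current mapping every generated proxy operation has an overload that takes a trailing context map, and the servant gets that context back through its Current parameter. A small sketch (the Hello Slice interface and the HelloPrx proxy here are hypothetical, just for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class ContextExample
    {
        // HelloPrx would be generated from a hypothetical Hello Slice interface.
        static void invokeWithContext(HelloPrx hello)
        {
            Map ctx = new HashMap();
            ctx.put("transaction", "tx-42");
            hello.sayHello(ctx); // the context map is marshaled with this request
            // On the server side the implementation reads it back from its
            // Current parameter, e.g. (String)current.ctx.get("transaction").
        }
    }
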
    Originally posted by ganzuoni

    Ah, last but not least: Ice plays great, both for speed and
    ease of use.

    Thanks a lot, I'm glad you like it!
  • Originally posted by benoit
    A new connection is established only when there's no connection already established to the server. So the load balancing would only happen at connection establishment, not for each request (a connection is established on the first proxy invocation and is closed when there are no more proxies using it -- but this is really an implementation detail; your application shouldn't rely on it since it might change).

    Even with a way to disable the endpoint cache, the endpoint lookup with the locator would still only occur on connection establishment (without the cache the lookup would be done each time Ice needs to establish a new connection -- right now it's done only once, until the endpoints are invalidated by a communication failure with the server).

    What kind of load balancing do you need? Per-request load balancing? Establishing a new connection for each request could be quite expensive.

    Let us know if this is not clear enough!

    Benoit.



    OK, let me clarify: it's not a matter of physical connections but of
    endpoint selection.
    Suppose a Locator manages object groups, say LoadBAdapter.
    When an adapter with the same name is registered in the registry, its
    proxy is *added* to a list bound to that name.
    When a client asks for the adapter, the registry can apply different
    strategies to select the right proxy: round-robin, random, etc.
    The problem arises, as I said in my previous post, when the client is
    a servlet.
    Once the Reference has been put in the ReferenceFactory cache, there is
    no way to force the Ice core to call the Locator again to re-fetch the
    adapter proxy, unless a failure occurs.
    In this situation, the adapter proxy selection strategy is triggered
    only if I have different Communicators in the servlet engine.
    The physical connection management is unimportant in this case.
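
    Roughly what I have in mind on the registry side, as a sketch (the
    group class is hypothetical; only Ice.ObjectPrx is a real Ice type):

    import java.util.ArrayList;
    import java.util.List;

    // One object group per adapter name, e.g. "LoadBAdapter": registration
    // *adds* a proxy to the group, resolution picks the next member
    // round-robin (a random strategy would work the same way).
    class AdapterGroup
    {
        private final List members = new ArrayList(); // Ice.ObjectPrx entries
        private int next = 0;

        synchronized void add(Ice.ObjectPrx proxy)
        {
            members.add(proxy);
        }

        synchronized Ice.ObjectPrx select()
        {
            if(members.isEmpty())
            {
                return null; // the locator would raise AdapterNotFoundException
            }
            Ice.ObjectPrx p = (Ice.ObjectPrx)members.get(next);
            next = (next + 1) % members.size();
            return p;
        }
    }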

    Regards,
    Guido.
  • Originally posted by marc
    As for Context, I agree with you, the explicit passing is too cumbersome. We will therefore add a per-proxy default context in a future version of Ice. Then you don't always have to pass the context explicitly.

    As for Current, we decided against thread-specific storage because of the bad experience we had with it in former products. It would make Ice more complicated, and probably also slower. To us, the explicit Current parameter seems cleaner than thread-specific storage, but as you say, in the end it's mostly a matter of taste.



    Thanks a lot, I'm glad you like it!

    If it doesn't irritate you, I would suggest a pre-invoker, called to
    fill the Context, on a per-proxy or per-Communicator basis.

    As for the Current, I agree, but if the implementation of an operation
    requires calls to internal methods, you are forced to declare all those
    methods with a Current parameter; still, it is possible to survive.
    On the other hand, a TransactionCurrent concept could be a little hard
    to implement.
    Maybe a pre-dispatch hook is necessary to allow installed services to
    transfer data from the request context to a thread context, but I think
    this would take you very far... (you would have to ensure that the
    pre-dispatch is called in the same thread that dispatches the operation
    to the implementation, or use a CORBA-like ServerRequestInfo; and what
    about the AMD upcall style? :confused:)
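
    Just to show the idea of transferring from the request context to a
    thread context, a sketch (the hook itself is hypothetical; nothing here
    is part of the Ice API):

    import java.util.Map;

    // A hypothetical pre-dispatch hook would copy the per-request context
    // into a ThreadLocal so that internal methods can read it without an
    // explicit Current parameter; a post-dispatch hook would clear it.
    final class CallContext
    {
        private static final ThreadLocal current = new ThreadLocal();

        static void set(Map requestCtx) { current.set(requestCtx); }   // pre-dispatch
        static Map get()                { return (Map)current.get(); } // anywhere in the impl
        static void clear()             { current.set(null); }         // post-dispatch
    }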

    Well, it's quite late here in Italy; have a nice weekend.

    Regards,
    Guido.