3.3 IceStorm and SubscriberPool.Size

Question #1
Section 45.3.8 of the Ice 3.2.1 documentation, The Subscriber Pool, clarified many things regarding IceStorm's threading behavior when propagating messages to subscribers. I noticed that this section is absent from the Ice 3.3 documentation, as are the IceStorm.SubscriberPool.Size, SizeMax and SizeWarn properties. Are these properties still present in 3.3 IceStorm? If not, what are the threading semantics for Topic message propagation to its subscriber set? How can these semantics be configured - i.e. what replaces IceStorm.SubscriberPool.Size? I saw no information in the CHANGES.txt which came with the Ice 3.3 installation.

Question #2
I noticed this in CHANGES.txt:
Added a new Quality of Service (QoS) parameter, retryCount, to
control when subscribers are removed. IceStorm automatically
removes a subscriber after the specified number of unsuccessful
event deliveries. The default value of retryCount is 0, meaning
the subscriber is removed immediately upon any failure. A
subscriber is always removed on a hard failure, which is defined
as the occurrence of ObjectNotExistException or
NotRegisteredException.

I assume that this does NOT apply to Topic federation, as I don't see a QoS parameter to the Topic.Link(Topic* linkTo, int cost) method. Thus Topic links are only broken when a message to the linked-to Topic results in a hard failure, just as in Ice 3.2.1?

Thanks

Dirk

Comments

  • matthew (NL, Canada)
    dhogan wrote: »
    Question #1
    Section 45.3.8 of the Ice 3.2.1 documentation, The Subscriber Pool, clarified many things regarding IceStorm's threading behavior when propagating messages to subscribers. I noticed that this section is absent from the Ice 3.3 documentation, as are the IceStorm.SubscriberPool.Size, SizeMax and SizeWarn properties. Are these properties still present in 3.3 IceStorm? If not, what are the threading semantics for Topic message propagation to its subscriber set? How can these semantics be configured - i.e. what replaces IceStorm.SubscriberPool.Size? I saw no information in the CHANGES.txt which came with the Ice 3.3 installation.

    The subscriber pool was only necessary because prior to Ice 3.3 there was no way to avoid blocking when making a remote invocation. With 3.3 this restriction was removed, and therefore the subscriber pool was also removed. Messages are sent directly to the subscriber from the servant dispatch thread (assuming no messages are queued), and the queue is further processed in the AMI callback. This is a much simpler and more scalable model.
    Question #2
    I noticed this in CHANGES.txt:
    Added a new Quality of Service (QoS) parameter, retryCount, to
    control when subscribers are removed. IceStorm automatically
    removes a subscriber after the specified number of unsuccessful
    event deliveries. The default value of retryCount is 0, meaning
    the subscriber is removed immediately upon any failure. A
    subscriber is always removed on a hard failure, which is defined
    as the occurrence of ObjectNotExistException or
    NotRegisteredException.

    I assume that this does NOT apply to Topic federation, as I don't see a QoS parameter to the Topic.Link(Topic* linkTo, int cost) method. Thus Topic links are only broken when a message to the linked-to Topic results in a hard failure, just as in Ice 3.2.1?

    Thanks

    Dirk

    That is correct. Topic links are effectively subscribed with a retryCount of -1, which is documented as: "A value of -1 means IceStorm retries forever and never automatically removes a subscriber unless a hard failure occurs."
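    For an ordinary subscriber, retryCount is supplied in the QoS dictionary when the subscription is created. A minimal, hedged C++ sketch; the topic name, the manager and subscriber proxies, and the retry value are placeholders, not anything IceStorm mandates:
    // Hedged sketch: subscribing with a retryCount QoS (IceStorm 3.3).
    // "manager" is an IceStorm::TopicManagerPrx and "subscriberPrx" is the
    // subscriber's object proxy; both are assumed to exist already.
    IceStorm::TopicPrx topic = manager->retrieve("Events");
    IceStorm::QoS qos;
    qos["retryCount"] = "3"; // remove this subscriber after 3 failed deliveries
    Ice::ObjectPrx publisher = topic->subscribeAndGetPublisher(qos, subscriberPrx);
    
    Linked topics take no QoS dictionary at all, which is why they behave as if retryCount were -1.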
  • Section 41.3.3 of the Ice 3.3.0 documentation states that "IceStorm messages have oneway semantics". I don't quite understand this statement, as the article on HA-IceStorm in Connections Issue 28 states that message semantics are a function of the publisher and subscriber proxies. Please clarify.

    I also did not find any information about what sort of message semantics are used to forward messages to linked topics (oneway, batched oneway, twoway, etc).

    Finally, I am not clear on the threading semantics of IceStorm 3.3 in the context of the background I/O deployed in Ice 3.3. Do idle threads from the Publish ThreadPool forward messages to subscribers? And if any of these messages block, I assume a background thread will transparently take over. What limits the number of background threads which take over? The size of the publish pool?

    Thanks

    Dirk
  • matthew (NL, Canada)
    dhogan wrote: »
    Section 41.3.3 of the Ice 3.3.0 documentation states that "IceStorm messages have oneway semantics". I don't quite understand this statement, as the article on HA-IceStorm in Connections Issue 28 states that message semantics are a function of the publisher and subscriber proxies. Please clarify.

    What this means is that any messages forwarded through IceStorm cannot have return values, nor throw any user exceptions.
    I also did not find any information about what sort of message semantics are used to forward messages to linked topics (oneway, batched oneway, twoway, etc).

    IceStorm uses a twoway proxy to forward messages. The messages are batched together so that as many messages as possible are sent in the same RPC.
    Finally, I am not clear on the threading semantics of IceStorm 3.3 in the context of the background I/O deployed in Ice 3.3. Do idle threads from the Publish ThreadPool forward messages to subscribers? And if any of these messages block, I assume a background thread will transparently take over. What limits the number of background threads which take over? The size of the publish pool?

    Thanks

    Dirk

    You can find some detailed information on background I/O in http://www.zeroc.com/newsletter/issue28.pdf.

    There are no background threads. When a new message arrives, IceStorm forwards the message to all subscribers from a server-side dispatch thread. This forwarding either queues the message for later sending (if the subscriber is currently processing an outgoing message) or sends the message asynchronously. The AMI response callbacks, which come from the client-side thread pool, are then used to send any queued messages. If a write blocks, the selector thread sends the remainder of the message.

    The publish thread pool, which performs the initial queue or send, has a default size of 1, as does the client-side thread pool (used for processing the AMI callbacks).
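    If you need more concurrency, both pools can be sized with the usual Ice thread pool properties. A hedged configuration sketch; the IceStorm.* prefix assumes the IceStorm service is named "IceStorm" in your deployment, so adjust it to your service name:
    # Assumption: the IceStorm service (and therefore its publish object adapter)
    # is named "IceStorm"; the properties follow the standard
    # <adapter>.ThreadPool.Size pattern.
    IceStorm.Publish.ThreadPool.Size=4
    # Client-side thread pool, which runs the AMI callbacks:
    Ice.ThreadPool.Client.Size=4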
  • I think I understand you. My confusion seems to stem from oneway message semantics and twoway proxies. I previously believed that these were mutually exclusive. My understanding is as follows:
    A twoway proxy can have oneway message semantics, but only with AMI. Oneway message semantics means that an invocation returns as soon as it is written to the local network interface; when implemented in the context of a twoway proxy, it means that successful receipt on the remote side is realized via callbacks issuing from the remote side's client thread pool (i.e. AMI). In the case of linked topics, the callbacks issue from the client thread pool in the linked-to Topic's Communicator.

    The default twoway proxy simply uses synchronous RPC semantics, in which the call returns when the remote side has finished receiving the invocation.

    In Ice 3.3, messages are forwarded to linked topics serially, but each forward operation is limited to writing the message to the local network interface, and any blocking which occurs in the process of handling this write (like resolving the remote endpoint via DNS) will take place in a background thread courtesy of the Ice 3.3 Background I/O.

    Please correct as necessary.

    Thanks

    Dirk
  • matthew (NL, Canada)
    I'm not sure of the exact source of confusion :) If it arises from the earlier quoted section of the Ice manual, then as I tried to explain, that section of the manual isn't really talking about oneway semantics in the context of proxies (oneway, twoway or otherwise). It is talking about the semantics of messages sent through IceStorm being oneway in nature, because a call through IceStorm cannot return values (and even if it could, since a single published message is delivered to many subscribers, which subscriber's return value would you pick?).

    To be more concrete, this is legal:
    interface Foo
    {
       void acall(string s);
    };
    

    whereas this is not, because the call returns a string.
    interface Foo
    {
       string acall(string s);
    };
    
    It is clear that oneway invocations cannot return values. It is also clear that Ice's terminology and documentation are not particularly clear - i.e. the term oneway is overloaded between message semantics and proxy types.

    I would appreciate a confirmation/correction of my previous post - i.e.

    In Ice 3.3, messages are forwarded to linked topics serially, but each forward operation is limited to writing the message to the local network interface, and any blocking which occurs in the process of handling this write (like resolving the remote endpoint via DNS) will take place in a background thread courtesy of the Ice 3.3 Background I/O.

    And also confirmation/correction of the description of a twoway proxy in the context of oneway message semantics being possible only via AMI. And that invocations against the default twoway Ice proxy imply blocking semantics in which an invocation does not return until the receiving side has ACKed the sender's last octet and/or ACKed the sender's FIN or sent a FIN itself.

    Dirk
  • matthew (NL, Canada)
    dhogan wrote: »
    In Ice 3.3, messages are forwarded to linked topics serially, but each forward operation is limited to writing the message to the local network interface, and any blocking which occurs in the process of handling this write (like resolving the remote endpoint via DNS) will take place in a background thread courtesy of the Ice 3.3 Background I/O.

    That is correct.
    And also confirmation/correction of the description of a twoway proxy in the context of oneway message semantics being possible only via AMI.
    And that invocations against the default twoway Ice proxy imply blocking semantics in which an invocation does not return until the receiving side has ACKed the sender's last octet and/or ACKed the sender's FIN or sent a FIN itself.

    Dirk

    At the risk of sounding pedantic, I don't think it is ever correct to describe twoway proxies as having oneway semantics. What I think you are describing here is blocking vs. non-blocking. Synchronous calls on Ice proxies, whatever the mode of the proxy (oneway, twoway, etc.), always have the possibility of blocking. Starting with Ice 3.3, asynchronous calls on Ice proxies, whatever the mode, will never block the caller.
  • I cannot find a specification of twoway proxies in the Ice documentation. Oneway proxies and oneway message semantics seem to be overloaded and ambiguous. I will try to describe the differences, and you can correct.

    The 'Oneway Invocations' section of the 'Ice Run Time in Detail' chapter of both the Ice-3.3.0.pdf and Ice-3.2.1.pdf describes oneway invocations as returning immediately after the invocation is written to the local network interface. These invocation semantics are present only in a oneway proxy. So oneway invocations and oneway proxies seem to be the same. In Ice 3.2.1 these could potentially block; in Ice 3.3.0, they cannot. In either case, they are synchronous to the act of writing the message to the local network interface. In Ice 3.2.1, this act includes DNS name resolution, and can thus block for a while; in Ice 3.3.0, this is no longer the case.

    I have looked in vain for the definition of a twoway proxy. Presumably this means that it will involve reading from the socket associated with the RPC peer, and thus support method return values and/or out parameters. Is this Ice's definition?

    Orthogonal to oneway and twoway proxies is the notion of AMI, which gets into synchronous vs. asynchronous calls. AMI calls are asynchronous; non-AMI calls are synchronous. This distinction is somewhat specious, though - you could say that AMI calls in Ice 3.3 are truly asynchronous, whereas in Ice 3.2 they usually were.

    I am still confused about how you configure a twoway proxy in which invocations return after being written to the local network interface (i.e. the way in which IceStorm forwards messages across links) - does this simply happen whenever you use a twoway proxy against an AMI method which has a return value?

    Dirk
  • matthew (NL, Canada)
    dhogan wrote: »
    I cannot find a specification of twoway proxies in the Ice documentation. Oneway proxies and oneway message semantics seem to be overloaded and ambiguous. I will try to describe the differences, and you can correct.

    The definition in short is that oneway proxies are those proxies used to make oneway invocations. Twoway proxies are those proxies used to make twoway invocations.
    The 'Oneway Invocations' section of the 'Ice Run Time in Detail' chapter of both the Ice-3.3.0.pdf and Ice-3.2.1.pdf describe oneway invocations as returning immediately after the invocation is written to the local network interface. These invocation semantics are present only in a oneway proxy. So oneway invocations and oneway proxies seem to be the same.

    The primary difference between a oneway invocation and a twoway invocation is that the server does not reply to a oneway invocation, and the client does not expect any reply. This means that with oneway invocations you lose certain guarantees, such as message ordering and error reporting.
    In Ice 3.2.1 these could potentially block; in Ice 3.3.0, they cannot. In either case, they are synchronous to the act of writing the message to the local network interface. In Ice 3.2.1, this act includes DNS name resolution, and can thus block for a while; in Ice 3.3.0, this is no longer the case.

    That is incorrect. With Ice 3.3 any synchronous invocation can block the caller, period. It doesn't matter whether the proxy is oneway or twoway; if the invocation is synchronous, it has the possibility of blocking.
    I have looked in vain for the definition of a twoway proxy. Presumably this means that it will involve reading from the socket associated with the RPC peer, and thus support method return values and/or out parameters. Is this Ice's definition?

    As I said earlier, twoway proxies are those proxies used to make twoway invocations.

    Unlike oneway invocations, twoway invocations always receive a reply from the server, and the client always reads and delivers those replies. This gives the caller a variety of possibilities that are not available with oneway invocations, such as return values and user exceptions, message ordering, and reliable error reporting.
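    For reference, the invocation mode is a property of the proxy itself. A hedged C++ sketch, reusing the Foo interface from earlier in this thread; the proxy string and variable names are placeholders:
    // Hedged sketch: the same object reached through twoway and oneway proxies.
    Ice::ObjectPrx base = communicator->stringToProxy("foo:tcp -h somehost -p 10000");
    FooPrx twoway = FooPrx::checkedCast(base);                 // proxies are twoway by default
    FooPrx oneway = FooPrx::uncheckedCast(base->ice_oneway()); // same object, oneway mode
    twoway->acall("hi"); // synchronous: blocks until the reply arrives (or a failure occurs)
    oneway->acall("hi"); // synchronous: no reply expected, but the write itself can still block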
    Orthogonal to oneway and twoway proxies is the notion of AMI, which gets into synchronous vs. asynchronous calls. AMI calls are asynchronous; non-AMI calls are synchronous. This distinction is somewhat specious, though - you could say that AMI calls in Ice 3.3 are truly asynchronous, whereas in Ice 3.2 they usually were.

    Without getting into an argument over the definition of "asynchronous", with Ice 3.3 a synchronous invocation blocks the caller until the result of the invocation is available. This means:
    - For a oneway invocation, the data has been written to the network interface (or a failure has occurred)
    - For a twoway invocation, a reply has been received from the server (or a failure has occurred).

    In contrast, asynchronous invocations guarantee:

    - An asynchronous call on a proxy, no matter the mode, will never block the caller.
    - A reply to the invocation will be delivered to the AMI callback object at some point in the future. For oneway invocations, you are guaranteed that ice_sent() is called when the message has been successfully written to the network interface, or ice_exception() if the oneway call failed. For twoway invocations, you are guaranteed that ice_sent() is called when the message has been successfully written to the network interface, followed by either ice_response() or ice_exception() (see the sketch below).
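    To make that concrete, here is a hedged sketch of what such a callback might look like with the Ice 3.3 C++ AMI mapping, assuming acall carries ["ami"] metadata in its Slice definition; the class and variable names are placeholders, and the exact generated base class names should be checked against the slice2cpp output:
    // Callback for the asynchronous version of Foo::acall.
    class AcallCB : public AMI_Foo_acall, public Ice::AMISentCallback
    {
    public:
        virtual void ice_sent()                              { /* request written to the transport */ }
        virtual void ice_response()                          { /* twoway reply received */ }
        virtual void ice_exception(const Ice::Exception& ex) { /* delivery failed */ }
    };
    
    // "proxy" is a FooPrx obtained elsewhere; the caller never blocks, whatever the proxy mode:
    proxy->acall_async(new AcallCB, "hello");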
    I am still confused about how you configure a twoway proxy in which invocations return after being written to the local network interface (i.e. the way in which IceStorm forwards messages across links)

    Note that this is technically incorrect. With Ice 3.3 AMI invocations guarantee not to block the caller. This doesn't mean that the data has been written to the local network interface; for example, if a DNS lookup needs to occur, the data will be written at a later point, since the DNS lookup can block.
    does this simply happen whenever you use a twoway proxy against an AMI method which has a return value?

    Dirk

    Whether or not the method has a return value is irrelevant. All AMI invocations are non-blocking; twoway proxies are no different. So, in short, yes, if you make an AMI invocation on a twoway proxy the call will be non-blocking.

    IceStorm itself always uses asynchronous invocations to forward messages to subscribers. Whether it uses a oneway or twoway proxy to forward the message depends on the requested quality of service. For linked topics (which are really treated internally as a special class of subscribers), IceStorm uses an asynchronous twoway invocation to forward batches of messages.
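    For completeness, a link is created through the Topic interface's link operation, which takes only the linked-to topic and a cost - there is no QoS dictionary, which is why the retryCount discussion above does not apply. A hedged C++ sketch; the topic names and the manager proxy are placeholders:
    // Federation sketch: messages published on "AllEvents" are also forwarded
    // to "Events". No QoS can be supplied on the link.
    IceStorm::TopicPrx parent = manager->retrieve("AllEvents");
    IceStorm::TopicPrx child = manager->retrieve("Events");
    parent->link(child, 0); // second argument is the link cost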