
Glacier2 with multiple object adapters?

I have a scenario where a single application creates multiple object adapters dynamically. I would like to use Glacier2. Is it fair to state that the two strategies are:

1. Create a unique Glacier2 router for each object adapter and use ice_router on each proxy to specify the association between proxies and routers. Note: since I dynamically create object adapters, this doesn't seem like a reasonable alternative, since it would require a new Glacier2 instance for each object adapter.

2. Create a separate communicator, specify the Ice.Default.Router property for each communicator, and then create the object adapters. In this case I would end up creating a new communicator prior to creating each new object adapter, but I would be able to share the same Glacier2 router (see the sketch below).
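
A rough sketch of strategy 2 (the router endpoint, adapter name, and configuration are made up, and the exact initialization calls may differ between Ice versions):

    // Each dynamically created object adapter gets its own communicator. Every
    // communicator is configured with the same Glacier2 router, e.g. via
    //
    //     Ice.Default.Router=Glacier2/router:tcp -h glacierhost -p 4063
    //
    Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);   // picks up the config above
    Ice::ObjectAdapterPtr adapter = ic->createObjectAdapter("AudioCallback");
    // ... add the stream servant, create its proxy, and pass it to the server.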

Is there ever a scenario where you can create one communicator, multiple object adapters, and only one Glacier2 router?

The reason I'm asking is that the example in Figure 38.3 shows a scenario where the "Callback Client" has both a server and a callback interface, on the private and public network respectively. So I'm wondering if the two strategies identified above (and in your documentation) are also necessary in the case where the "Callback Client" has multiple "callback" object adapters, but no other server interfaces.

Regards --Roland

Comments

  • You are correct in your analysis. It is not possible to have multiple object adapters with just one router.

    Fundamentally, if routers are used, the connection from the client to the router is also used to receive callbacks. Therefore any existing connection has to be associated with exactly one object adapter. If there were more than one, then the Ice core wouldn't know which object adapter should receive a request that is sent back to the client over the existing connection. That is, each object adapter needs its own set of connections, which cannot be shared with other object adapters.

    If you use multiple routers, then the client has different connections to these routers, and therefore you can have different object adapters. Similarly, if you use multiple communicators, then the client opens a separate connection to the router for each communicator, and each of these connections can have a separate object adapter.

    I guess what we could do is to force the client to open a new connection, even if only a single communicator is used, and then associate this new connection with a new object adapter.

    However, what is the reason for your application to have multiple object adapters in the first place? Typically, multiple object adapters are used to have different sets of endpoints for different Ice objects. For example, you might want to have one set of Ice objects that is only reachable from the internal network, and another set that is reachable from the external network. Or some Ice objects that are only reachable over SSL, and others that are only reachable over plain TCP/IP. But this of course does not apply if routers are used, because then all requests are handed down over an existing connection that was established from the client to the router.
  • Your idea of forcing the client to open a new connection even if a single communicator is used, and associating this connection with an object adapter, sounds interesting.

    The reason for using multiple object adapters is to create a separate socket with its own server thread pool (size = 1) for each media stream. For example, let's say we have two media streams, audio and graphics. It is possible to process these media streams in parallel, but each stream must be kept serialized with respect to itself. We don't really want audio to be serialized behind graphics operations. There are several options that we've considered.

    1. Use a single object adapter, with a server thread pool > 1 and then handle the serialization in our methods through a combination of sequence numbers and mutexes or build our own internal thread queue.

    2. Use multiple object adapters, one for each media stream, with a server thread pool of size 1 for each object adapter. This allows audio and graphics to be processed in parallel while keeping each media stream sequential with respect to itself. This was the direction we were considering (see the sketch below).
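
    A minimal sketch of option 2 (adapter names and properties are made up; it assumes each adapter gets its own thread pool through its ThreadPool.Size property, and ic is the communicator):

        // Configuration, e.g. in the application's config file:
        //
        //     AudioAdapter.ThreadPool.Size=1
        //     GraphicsAdapter.ThreadPool.Size=1
        //
        Ice::ObjectAdapterPtr audio = ic->createObjectAdapter("AudioAdapter");
        Ice::ObjectAdapterPtr graphics = ic->createObjectAdapter("GraphicsAdapter");
        // Each adapter then dispatches its stream's requests on its own single thread.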

    It might have been useful to have a thread pool for each object in our system. This might have allowed us to use a single object adapter and provide an object for each stream with its own object thread pool. I'm not sure that this is possible in Ice. I guess we could probably re-create something similar to this, but I was trying to use as many built-in Ice capabilities as possible. Using an object adapter for each media stream is easy/trivial to implement.

    I realize that using Glacier2 for establishing a single bi-directional connection will force all media streams down the same socket/object adapter, thus defeating the reasons for having multiple object adapters to begin with. I'm expecting that our application will have a NAT traversal enable/disable feature, so we wouldn't be using Glacier2 all the time, only when NAT traversal is required, and when it is enabled performance/quality might be impacted. If NAT traversal is enabled we could just create a single object adapter, and if it isn't, go ahead and create multiple object adapters (we would need to design our system to handle this), but then there is option 1 above too.

    I've often wondered why Ice doesn't support bi-directional sockets without requiring the use of Glacier. Is this covered in the Ice documentation? The lack of bi-directional sockets has been one of the biggest issues to work through, as networking is so complicated with NAT, firewalls, name resolution, VPNs, multi-homed systems, and so on. It usually can be resolved, but sometimes not easily. It would have been so much easier if bi-directional sockets were possible, since outbound connections are usually easily established, but there are many complexities around establishing incoming connections. So I'm really hopeful about Glacier2.

    Regards --Roland
  • I might have a solution for you, which does not require multiple object adapters, but still makes sure that each client has exactly one thread.

    Ice 2.0.0 has a new feature, which is still undocumented: It supports thread-per-connection. With this feature, no thread pools are used. Instead, each connection has exactly one thread assigned, which is used exclusively for that connection.

    The initial reason for this feature was Glacier2. Glacier2 uses the thread-per-connection model internally, which ensures that it is not vulnerable to certain attacks, such as one client sending many requests, but never picking up the responses. If thread pools were used, such an attacking client could easily cause thread starvation in Glacier2, resulting in Glacier2 not being able to respond to requests from other clients. With the thread-per-connection model this is not an issue, because each client has a dedicated thread in Glacier2. (Note that this was not an issue for Glacier1, because Glacier1 started a new router for each connecting client.)

    You can try this by setting the property Ice.ThreadPerConnection=1. We haven't documented this feature yet, because we first wanted to do more testing. However, we have now run extensive tests with this feature, and it's safe to say that it is stable. With Ice 2.1.0, it will be officially documented. Also note that this feature is currently only implemented in Ice for C++ (and derived products, e.g., Ice for Python or PHP). It will be implemented in the other language mappings in future versions.

    Thread-per-connection has limitations with respect to nested callbacks. Since there is only one thread per connection, you can at most have one level of nesting for callbacks. For example, A->B->A will work, but A->B->A->B will not work.

    The main problem with general bi-directional connections is that the server does not know which incoming connection to use in order to call back a client when it is given a proxy to an Ice object in that client. This is particularly difficult if NAT firewalls are used, because then the server sees an IP address and port that differ from the real client IP address and port.
  • So let me see if I understand this. The algorithm using ThreadPerConnection=1 for my scenario would be:

    1. Create a single object adapter.

    2. For each media stream:
    2.a Create an object for that media stream.
    2.b Add the object to the object adapter.
    2.c Create a proxy for the object just added to the object adapter. This represents the callback that will be sent to the server.

    3. Invoke the server, passing in the appropriate proxy (callback). For example, serverProxy->startAudio(audioProxy) would invoke a method in the server, passing in the audioProxy callback created in step 2.c above.

    4. When the server starts to use the audioProxy, the connection established from the server back to the client will have its own thread.

    At the end of this we have a single object adapter with multiple objects in it. Each connection that is established using a proxy (callback) from the server to the client gets its own thread. So everything will be serialized with respect to a connection, and every connection will have its own thread, so the streams will run in parallel.

    If this is accurate then I believe that this is exactly what I was looking for.

    Regards --Roland
  • Hmm... no, I'm afraid I was wrong :( This won't work, because there is only one single connection. So you will not have a connection+thread per Ice object, but only one connection and thread for all Ice objects.

    I got confused with a scenario where you have several clients sending data to the server, and you want the server to allocate exactly one thread for each client (or each connection from a client).

    I'm still not sure whether any of this is necessary at all. You wrote:
    The reason for using multiple object adapters is to create a separate socket with its own server thread pool (size = 1) for each media stream. For example, let's say we have two media streams, audio and graphics. It is possible to process these media streams in parallel, but each stream must be kept serialized with respect to itself. We don't really want audio to be serialized behind graphics operations. There are several options that we've considered.

    Let's assume you have multiple threads in the thread pool (the client-side thread pool, in case you use Glacier); then the processing of the different streams will not be serialized. The same stream, however, will still be serialized, since you have one Ice object per stream. All you have to do is add mutex protection to the Ice object that represents a stream. This way, different streams are processed in parallel, but segments of a given stream are serialized.

    There is also no performance penalty, as it doesn't matter how many connections are used. All that matters is that the maximum available bandwidth is used. This is assured by the thread pool, which will immediately supply a new thread for reading data as soon as one thread has finished reading a request and starts dispatching it. (This assumes that your bottleneck is the network and not the CPU; otherwise having two CPUs sending data on two sockets would indeed be faster.)
  • If I create a server thread pool > 1 and create a mutex for each Ice object (media stream), then this will protect the critical section, but I don't think a mutex alone is sufficient to guarantee ordering, so I would have to handle the ordering separately from the mutex. For example, if the client invokes

    proxy->playAudio(buffer1);
    proxy->playAudio(buffer2);

    and playAudio is as follows:

    void
    Audio::playAudio(const AudioBuffer& buffer, const Ice::Current&)
    {
        IceUtil::Mutex::Lock lock(mutex); // protects play(), but does not enforce ordering
        play(buffer);
    }

    wouldn't it be possible for buffer2 to play before buffer1 with a server thread pool > 1? I'm thinking that the Ice runtime will definitely dispatch playAudio(buffer1) prior to playAudio(buffer2), but right after playAudio(buffer1) is dispatched, that thread could be pre-empted by the thread processing playAudio(buffer2), which could grab the lock prior to buffer1 being processed.
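
    Something like the following sketch is what I have in mind for handling the ordering ourselves (just an illustration; it assumes we add a sequence number parameter to the Slice operation, and it needs IceUtil/Monitor.h):

    // servant members: IceUtil::Monitor<IceUtil::Mutex> monitor; Ice::Long next; // starts at 0
    void
    Audio::playAudio(const AudioBuffer& buffer, Ice::Long seq, const Ice::Current&)
    {
        IceUtil::Monitor<IceUtil::Mutex>::Lock lock(monitor);
        while(seq != next)       // an out-of-order dispatch waits for its turn
        {
            monitor.wait();
        }
        play(buffer);
        ++next;
        monitor.notifyAll();     // wake up any later buffers waiting in other threads
    }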

    Regards --Roland
  • You are right again. I think I'll stop giving advice for today, I'm not thinking clearly... :(
  • I gave this problem some more thought. I think I know a solution that will do exactly what you want, without the need for Glacier or routers at all. Here is how it would work:
    1. We add the ability for proxies to explicitly request a new connection instead of using an existing one.
    2. We add an operation "addConnection()" to object adapters, which allows you to explicitly add a pre-established connection to an object adapter. (This part is a bit tricky, because it means that the connection has to be unregistered with the client side thread pool, and registered with the object adapter's thread pool. But it is doable.)
    3. The server can already access connections through Ice::Current (another not yet documented feature). So an operation to register a client to receive streams from the server could look like this:
      void
      StreamI::addClient(const Ice::Identity& id, const Ice::Current& current)
      {
          // Create a proxy that calls back over the connection this request arrived on.
          Ice::ObjectPrx newClientProxy = current.connection->createProxy(id);
          //...
      }
      

    With the last step, your server would have a proxy to the client with the correct identity, which uses (a) an existing connection that has been established from the client to the server (so no NAT problems), and (b) a connection that the client handles in a separate thread pool (important for your serialization requirements).

    I believe this would solve your problems in a simple and efficient manner. The solution also does not suffer from the problems of generic bi-directional connection proxies, because the proxies used to call back over existing connections are only valid within the server process that holds the connection.
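
    Just to make the proposal concrete, here is a purely hypothetical sketch of the client side. Neither the proxy call to request a new connection nor addConnection() exist today; the names below are only placeholders for steps 1 and 2, and ic is the client's communicator:

    // A callback-only object adapter; it would not need endpoints of its own.
    Ice::ObjectAdapterPtr adapter = ic->createObjectAdapter("StreamCallbacks");
    Ice::Identity ident;
    ident.name = "audioStream";
    adapter->add(new AudioCallbackI, ident);
    adapter->activate();

    // Step 1 (hypothetical): force a dedicated connection for this proxy.
    StreamPrx stream = StreamPrx::uncheckedCast(
        ic->stringToProxy("stream:tcp -h serverhost -p 10000")->ice_newConnection());

    // Step 2 (hypothetical): hand that connection to the callback adapter.
    adapter->addConnection(stream->ice_connection());

    // The server then builds its callback proxy from the connection, as shown above.
    stream->addClient(ident);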
  • Marc, I'm wondering what you think about the following strategy. We could possibly introduce a NAT traversal mode in our application. This mode would be used when both ends are not on the same network.

    If not in NAT traversal mode then create an object adapter for each object (media stream).

    If in NAT traversal mode then create only one object adapter and use Glacier2. In NAT traversal mode we would suffer some performance issues, but at least we would have NAT traversal.

    Another variation on the above would be to create a Glacier router for each media stream and use a separate object adapter for each media stream. When our server application is installed we could just start several Glacier routers to handle all of our media streams. I'm not sure we want to go this route yet.

    Incidentally, the reason for me asking the questions around Glacier2 was driven by another case involving VPNs, where the proxy that was supplied to the server as a callback was created on the wrong interface. Our current strategy for coping with this issue is to create a test socket, not using Ice. After the socket is connected, we get the peer IP address (the IP address on the client side) for the socket we just created. This should be the public interface. Then we supply this IP address when we create the Ice object adapter. If I had been able to get the peer IP address of the proxy, this could have substituted for creating our own test socket/probe, but I don't see this capability in Ice.
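
    For reference, the probe is roughly the following (a sketch with error handling omitted; names are just for illustration):

    // POSIX sockets: connect a throwaway socket to the server and ask the OS
    // which local address it picked for it, i.e. the interface the server can
    // reach us on.
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>
    #include <string>

    std::string
    probeLocalAddress(const sockaddr_in& server)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        connect(fd, reinterpret_cast<const sockaddr*>(&server), sizeof(server));

        sockaddr_in local;
        socklen_t len = sizeof(local);
        getsockname(fd, reinterpret_cast<sockaddr*>(&local), &len);
        close(fd);

        return inet_ntoa(local.sin_addr); // address to publish in the object adapter
    }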

    This test socket approach has worked in all the VPN cases that I have seen so far. But now we have a case where this strategy is failing, and I don't know all the details yet. I suspect that the test socket was being port-forwarded on a loopback interface, so the address for the proxy was still a private address, 192.168.X.X in this case. So the server couldn't establish the callback connection back to the client, even though our test socket did work. The port-forwarding scenario is very similar to NAT, hence my questions on Glacier2. The other VPNs that I've worked with create virtual ethernet devices, and those do work with our test socket.

    Getting back to your earlier quote:

    "The main problem with general bi-directional connections is that the server does not know which incoming connection to use in order to call back a client when it is given a proxy to an Ice object in that client. This is in particular difficult if NAT firewalls are used, because then the server sees an IP address and port which differs from the real client IP address and port."

    I might be trying to get too low-level on this, but I would really love to understand this further. Currently in Ice, you create an object adapter (which creates a listening socket on the client), add an object to the adapter, and then get a proxy to the object. For callbacks, the proxy is then sent to the server. The proxy that the server uses is associated with the IP address and port back on the client. The server then uses the proxy, which establishes the connection back to the client (now a server too).

    Wouldn't it have been theoretically possible, and I'm speaking in general terms and not about what Ice is currently capable of, to create a new object adapter using the same IP address and port as the one that the client used to connect to the server? In my naive viewpoint, this means creating an object adapter using the peer address of an existing proxy and connection. The reason why I'm asking this is that applications use bi-directional sockets all the time to send messages above and beyond simple paired requests and responses. So I'm trying to understand some of the architectural decisions and motivation in Ice. For example, do bi-directional sockets fail under certain network conditions or in the general remote-object area, or are there security issues? Another question would be: since Glacier2 supports bi-directional sockets using Ice, why couldn't this capability have been built directly into Ice rather than into a separate service?

    If there are any pointers or references on this, that would be great. I'm just trying to understand further and have been wondering about this for a while, since the lack of bi-directional sockets has certainly created some headaches for our software in areas where NAT isn't even being used, such as software firewalls and VPNs. Glacier2 will definitely resolve these issues, but of course Glacier2 introduces another system and level of indirection into our system. Please don't feel too compelled to answer this. I imagine that this could get complicated.

    Regards --Roland
  • Originally posted by rhochmuth
    Marc, I'm wondering what you think about the following strategy. We could possibly introduce a NAT traversal mode in our application. This mode would be used when both ends are not on the same network.

    If not in NAT traversal mode then create an object adapter for each object (media stream).

    If in NAT traversal mode then create only one object adapter and use Glacier2. In NAT traversal mode we would suffer some performance issues, but at least we would have NAT traversal.

    Sure, this would work. I was simply trying to find a solution that does not require any special NAT traversal mode, and also wouldn't have any performance penalties. It would also be simpler, because no Glacier would be involved at all.
    Originally posted by rhochmuth
    Another variation on the above would be to create a Glacier router for each media stream and use a separate object adapter for each media stream. When our server application is installed we could just start several Glacier routers to handle all of our media streams. I'm not sure we want to go this route yet.

    Yes, this would also work. Again, I was suggesting the other solution because of its simplicity and because it does not require Glacier. Glacier is really more than just NAT traversal, it's a firewall and session management solution.
    Originally posted by rhochmuth
    Incidentally, the reason for me asking the questions around Glacier2 was driven by another case involving VPNs, where the proxy that was supplied to the server as a callback was created on the wrong interface. Our current strategy for coping with this issue is to create a test socket, not using Ice. After the socket is connected, we get the peer IP address (the IP address on the client side) for the socket we just created. This should be the public interface. Then we supply this IP address when we create the Ice object adapter. If I had been able to get the peer IP address of the proxy, this could have substituted for creating our own test socket/probe, but I don't see this capability in Ice.

    Note that with Ice 2.0, the IP address that you publish in a proxy can differ from the IP address that is actually used. Have a look at the description of the object adapter "PublishedEndpoints" property for more information. (In the Ice manual in chapter 29.3.6.) I'm not really sure if this is helpful to your particular application, but it might be worth a look.
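
    For example (adapter name and addresses are made up), the adapter can listen on its private address while advertising the address that is reachable from the outside:

    StreamAdapter.Endpoints=tcp -h 192.168.1.5 -p 10001
    StreamAdapter.PublishedEndpoints=tcp -h 203.0.113.10 -p 10001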

    You can get the peer address of a proxy, but it's a bit of a hack. Get the connection from a proxy, then get a string representation of the connection, and finally parse this string to get the peer address:
    Ice::ConnectionPtr con = theProxy->ice_connection();
    std::string conInfo = con->toString();
    // Parse conInfo to get the peer address...
    
    Originally posted by rhochmuth
    This test socket approach has worked in all the VPN cases that I have seen so far. But now we have a case where this strategy is failing, and I don't know all the details yet. I suspect that the test socket was being port-forwarded on a loopback interface, so the address for the proxy was still a private address, 192.168.X.X in this case. So the server couldn't establish the callback connection back to the client, even though our test socket did work. The port-forwarding scenario is very similar to NAT, hence my questions on Glacier2. The other VPNs that I've worked with create virtual ethernet devices, and those do work with our test socket.

    Perhaps the physical/published endpoints that I mentioned above can help in this scenario. Published endpoints that differ from the physical ones have been introduced to allow servers to operate behind port-forwarding firewalls.
    Originally posted by rhochmuth
    Getting back to your earlier quote:

    "The main problem with general bi-directional connections is that the server does not know which incoming connection to use in order to call back a client when it is given a proxy to an Ice object in that client. This is in particular difficult if NAT firewalls are used, because then the server sees an IP address and port which differs from the real client IP address and port."

    I might be trying to get too low-level on this, but I would really love to understand this further. Currently in Ice, you create an object adapter (which creates a listening socket on the client), add an object to the adapter, and then get a proxy to the object. For callbacks, the proxy is then sent to the server. The proxy that the server uses is associated with the IP address and port back on the client. The server then uses the proxy, which establishes the connection back to the client (now a server too).

    Wouldn't it have been theoretically possible, and I'm speaking in general terms and not about what Ice is currently capable of, to create a new object adapter using the same IP address and port as the one that the client used to connect to the server? In my naive viewpoint, this means creating an object adapter using the peer address of an existing proxy and connection.

    No, this wouldn't work. An endpoint must point to a socket that listens to connection requests, and establishes new connections upon request (i.e., a socket in listen state). It cannot point to an existing established connection.

    Even if we somehow added a special case for this scenario, it would still not work, because the peer address and the published address might differ if NAT firewalls are used (a problem similar to the one you are trying to solve with the test socket).

    Even if we added yet more special handling so that proxies could get their published IP address from such test sockets, the solution would still break as soon as we did anything with the proxy outside of the server process that holds the connection.

    These are too many special cases with too many side effects to make this a worthwhile solution :)
    Originally posted by rhochmuth
    The reason why I'm asking this is that applications use bi-directional sockets all the time to send messages above and beyond simple paired requests and responses. So I'm trying to understand some of the architectural decisions and motivation in Ice. For example, do bi-directional sockets fail under certain network conditions or in the general remote-object area, or are there security issues? Another question would be: since Glacier2 supports bi-directional sockets using Ice, why couldn't this capability have been built directly into Ice rather than into a separate service?

    Glacier2 is a special application that has been designed specifically to work with bi-directional connections and their limitations.
    Originally posted by rhochmuth
    If there are any pointers or references on this, that would be great. I'm just trying to understand further and have been wondering about this for a while, since the lack of bi-directional sockets has certainly created some headaches for our software in areas where NAT isn't even being used, such as software firewalls and VPNs. Glacier2 will definitely resolve these issues, but of course Glacier2 introduces another system and level of indirection into our system. Please don't feel too compelled to answer this. I imagine that this could get complicated.

    It is indeed very complicated :) But I can also understand your motivation. In more abstract terms, the current Ice object model knows nothing about connections, but only about Ice objects, and proxies that can be used to invoke on Ice objects. This model is incompatible with bi-directional connections. In fact, it is incompatible with any concept of connections :)

    However, I realize that while this model works nicely for many applications, there are some applications that can't ignore the physical reality of connections and special network configurations. That's why we started to introduce an explicit connection concept with Ice 2.0, as an extension to the existing general Ice object model. This is not really finished yet, but it's a starting point. We will continue to refine it further with future versions of Ice.