ICE C++ compression lost after update from version 3.4 to 3.6

bohne
edited May 2016 in Help Center

We have an Ice (version 3.4) connection between two C++ applications over a 50 Mbit/s line with ~200 ms latency.

While updating the server side of the connection to Ice 3.6, we noticed that compression on that connection was disabled.

ICE Properties on Client:
Ice.Override.Compress = 1

ICE Properties on Server:
Ice.Override.Compress = 1
Ice.Default.EncodingVersion = 1.0
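
(For reference, a minimal sketch of how a communicator could be initialized with the server-side properties above; this programmatic setup is just illustrative, not our actual initialization code.)

    #include <Ice/Ice.h>

    int main(int argc, char* argv[])
    {
        Ice::InitializationData initData;
        initData.properties = Ice::createProperties(argc, argv);
        // Same properties as listed above: force compression for outgoing
        // requests and pin the 1.0 encoding.
        initData.properties->setProperty("Ice.Override.Compress", "1");
        initData.properties->setProperty("Ice.Default.EncodingVersion", "1.0");
        Ice::CommunicatorPtr communicator = Ice::initialize(initData);
        // ... create object adapters / servants here ...
        communicator->destroy();
        return 0;
    }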

What caused the compression to be disabled?
How can we re-enable it?

Hint: We're using Red Hat Linux 6

Thank you, Andre

Comments

  • benoit
    benoit Rennes, France

    Hi Andre,

    I'm not able to reproduce this using the Ice/throughput C++ demo client/server (running a 3.4 client and 3.6 server).

    How did you diagnose the issue? Did you enable protocol tracing (with --Ice.Trace.Protocol=2) to check the compression status of the requests and replies?
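
    (For example, when the property is passed on the command line, it is picked up during communicator initialization; a minimal sketch assuming the usual argc/argv setup:)

    #include <Ice/Ice.h>

    int main(int argc, char* argv[])
    {
        // --Ice.Trace.Protocol=2 (or any other --Ice.* option) passed on the
        // command line is parsed into the communicator's properties here.
        Ice::CommunicatorPtr communicator = Ice::initialize(argc, argv);
        // ... run the client or server as usual ...
        communicator->destroy();
        return 0;
    }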

    Cheers,
    Benoit.

  • bohne
    edited June 2016

    We checked using Wireshark. The compression flag on the wire is 0 AND there is much more data transferred than with the previous build.

    By the way:

    • We only observed this with batch requests. For non-batch requests (synchronous, with a return value) we saw that compression was activated.
    • I also recompiled the client with 3.6; still the same behaviour: no compression for batch requests.
    • We did some debugging (recompiled 3.6.1 with debug symbols and used gdb):
      -- compression seems to be generally activated for the adapter (_compress = true)
  • benoit
    benoit Rennes, France

    Hi,

    Compression is a client-side setting. You can set compression on object adapter endpoints but it's only to enable the compression flag in proxies created by the object adapter. It does not disable or enable compression on the object adapter.
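
    (To illustrate with a minimal sketch; the adapter name and port are made up here:)

    #include <Ice/Ice.h>

    int main(int argc, char* argv[])
    {
        Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);
        // "-z" marks the endpoint as compressed. The flag is copied into proxies
        // created by the adapter; it does not turn compression on or off in the
        // adapter itself.
        ic->getProperties()->setProperty("MyAdapter.Endpoints", "tcp -p 10000 -z");
        Ice::ObjectAdapterPtr adapter = ic->createObjectAdapter("MyAdapter");
        adapter->activate();
        Ice::ObjectPrx prx = adapter->createProxy(ic->stringToIdentity("someObject"));
        // Clients that use "prx" will send compressed requests (unless they
        // override it), because the proxy's endpoint carries -z.
        ic->destroy();
        return 0;
    }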

    I can't explain why a 3.4 client would stop compressing the batch requests when talking to a 3.6 server. I tried to reproduce this with the hello demo (3.4 client sending at least 20 sayHello batch requests to a 3.6 server). When the batch is flushed, I verified with protocol tracing (Ice.Trace.Protocol=2) that it's compressed. Compression is enabled on the client with Ice.Override.Compress=1.

    Are you able to reproduce this with our demos? What size is the batch? Note that compression is only used if the message size is over 100 bytes.

    Can you also tell us more about your environment? Which operating system do you use for your client/server?

    Cheers,
    Benoit.

  • bohne
    edited June 2016

    Ok, did some testing with "Ice.Trace.Protocol=2":

    • Tracing is from the server
    • Always the same client (Ice 3.4, Windows)

    Server with library version 3.6.1

    -- 06/02/16 15:48:18.535 Protocol: sending asynchronous request
       message type = 1 (batch request)
       compression status = 0 (not compressed; do not compress response, if any)
       message size = 1800018
       number of requests = 5000
       request #0:
         identity = b3497006-9dc0-4d85-b296-6781e6937971
         facet =
         operation = onInstrumentEvent
         mode = 0 (normal)
         context =
       request #1:
         identity = b3497006-9dc0-4d85-b296-6781e6937971
         facet =
         operation = onInstrumentEvent
         mode = 0 (normal)
       [...]

    Server with library version 3.4

    -- 06/02/16 15:58:51.938 Protocol: sending asynchronous request
       message type = 1 (batch request)
       compression status = 2 (compressed; compress response, if any)
       message size = 27734
       number of requests = 5000
       request #0:
         identity = 2b425c83-33fe-4dac-849b-f8f2101103fa
         facet =
         operation = onInstrumentEvent
         mode = 0 (normal)
         context =
       request #1:
         identity = 2b425c83-33fe-4dac-849b-f8f2101103fa
       [...]

    Environment:

    • Client: Windows 64Bit - Ice 3.4
    • Server: CentOS 6.4 - Ice 3.4 / Ice 3.6.1 (compiled from source with gcc 5.2.1)

    There were no code changes related to our Ice interface, except setting "Ice.Default.EncodingVersion = 1.0" on the server.

    I hope that helps

  • benoit
    benoit Rennes, France

    Can you tell us a bit more about the batch invocations performed in your server?

    How do you create the proxies? Are you explicitly flushing the batch with the proxy ice_flushBatchRequests() method? Or are you using bi-directional proxies and flushing the batch with the connection flushBatchRequests method?

    If you use the proxy method, does the proxy have the compression flag enabled?
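
    (For reference, a rough sketch of the two flushing variants, with somePrx standing in for whichever proxy queued the batch:)

    #include <Ice/Ice.h>

    // somePrx is a placeholder for whatever proxy the batch requests were queued on.
    void flushBatch(const Ice::ObjectPrx& somePrx)
    {
        // Variant 1: flush through the proxy; the proxy's settings
        // (including its compression flag) apply to the flushed message.
        somePrx->ice_flushBatchRequests();

        // Variant 2: flush through the connection, e.g. for bidirectional
        // or fixed proxies bound to that connection.
        somePrx->ice_getConnection()->flushBatchRequests();
    }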

    Cheers,
    Benoit.

  • bohne

    Hi,

    We are using communicator->begin_flushBatchRequests() (on the Ice::CommunicatorPtr) instead of flushBatchRequests() on the connection, to avoid blocking network calls.

    We get a call from outside that registers a client proxy with the application, and then we call that proxy.

    In simplified form:

    void registerPublishEventsClient(const Ice::Identity& identity, const Ice::Current& current)
    {
        RPC::PublishEventsPrx proxy =
            RPC::PublishEventsPrx::uncheckedCast(current.con->createProxy(identity));
        RPC::PublishEventsPrx batchProxy =
            RPC::PublishEventsPrx::uncheckedCast(proxy->ice_batchOneway());
        [...]
        batchProxy->SomeRemoteFunction(largePayload); // largePayload is a Payload
        current.con->begin_flushBatchRequests();
    }

    I hope I didn't simplify too much.

    Unfortunately I don't know how to get the compression flag for a proxy.

    Thank you,
    Andre

  • benoit
    benoit Rennes, France

    Hi,

    Ok, I better understand now.

    We have changed how batch requests are handled in Ice 3.6 by introducing batch request queues. See here for the details.

    Since this change, compression no longer works if you use Ice::Communicator::flushBatchRequests or Ice::Connection::flushBatchRequests to flush batch requests that were queued on the Ice connection through "fixed proxies" (proxies created with and bound to the connection), which is what you're doing.

    We'll consider adding this back. In the meantime, you will need to flush the connection's batch requests with ice_flushBatchRequests on a fixed proxy bound to the connection. In your example above, instead of using current.con->begin_flushBatchRequests(), you could use batchProxy->ice_compress(true)->begin_ice_flushBatchRequests() to flush the batch requests with compression enabled.
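
    (Applied to your simplified example above, as a sketch only, reusing the same names:)

    void registerPublishEventsClient(const Ice::Identity& identity, const Ice::Current& current)
    {
        RPC::PublishEventsPrx proxy =
            RPC::PublishEventsPrx::uncheckedCast(current.con->createProxy(identity));
        // Enable the compression flag on the batch proxy itself; the extra
        // uncheckedCast is needed because proxy factory methods return Ice::ObjectPrx.
        RPC::PublishEventsPrx batchProxy =
            RPC::PublishEventsPrx::uncheckedCast(proxy->ice_batchOneway()->ice_compress(true));

        batchProxy->SomeRemoteFunction(largePayload);

        // Flush through the fixed proxy instead of current.con so the batch
        // is sent compressed.
        batchProxy->begin_ice_flushBatchRequests();
    }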

    Cheers,
    Benoit.

  • bohne

    Wohey!

    That helped! Now compression is back in place and we can keep communication inside our timeout limits.

    Thank you very much for your help! ... Maybe add a warning to the documentation that compression is not applied when batch requests are flushed via the connection or the communicator.

    Regards,
    Andre