ICE 3.4.1 and Qt 4.7

Hello All,

Is there any 'official' paper from the ICE team about running ICE 3.4.1 with Qt 4.7, more recent than Matthew's old article in issue 15, "Integrating Ice with a GUI", and mwilson's "New Ice 3.4.0 AMI and Qt 4.5 library"?

Regards,
Thierry

Comments

  • benoit (Rennes, France)
    Hi Thierry,

    Sorry, we do not have any new material covering Qt 4.7. Matthew's article is out of date, as it doesn't cover the new functionality introduced with Ice 3.3 and 3.4, namely non-blocking AMI calls and the Ice::Dispatcher interface.

    Together they significantly reduce the complexity of making Ice calls from the GUI event loop thread: with AMI you can safely make Ice invocations from the GUI thread, and you can receive the AMI callbacks directly in the GUI thread.

    Alex's QtDispatcher implementation in this thread should be a good starting point for using the Ice dispatcher interface with Qt.
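
    For reference, here is a rough sketch of what such a dispatcher can look like (an illustration only, not Alex's exact code; the QtDispatcher and DispatchEvent names are just placeholders). It posts every Ice callback to the Qt event loop as a custom event, so AMI callbacks and request dispatches end up running in the GUI thread:
    #include <Ice/Ice.h>
    #include <QtCore/QCoreApplication>
    #include <QtCore/QEvent>
    #include <QtCore/QObject>
    
    // Event carrying an Ice dispatcher call into the Qt event loop.
    class DispatchEvent : public QEvent
    {
    public:
        DispatchEvent(const Ice::DispatcherCallPtr& call) :
            QEvent(QEvent::User), call(call)
        {
        }
    
        const Ice::DispatcherCallPtr call;
    };
    
    // Dispatcher that forwards every Ice callback to the GUI thread.
    class QtDispatcher : public QObject, public Ice::Dispatcher
    {
    public:
        virtual void dispatch(const Ice::DispatcherCallPtr& call, const Ice::ConnectionPtr&)
        {
            // postEvent() is thread-safe; the event is delivered to customEvent()
            // in the thread that owns this QObject (the GUI thread).
            QCoreApplication::postEvent(this, new DispatchEvent(call));
        }
    
    protected:
        virtual void customEvent(QEvent* event)
        {
            if(event->type() == QEvent::User)
            {
                static_cast<DispatchEvent*>(event)->call->run();
            }
        }
    };
    

    You then install the dispatcher when creating the communicator; the QtDispatcher instance must be created in the GUI thread so that its events are delivered there:
    Ice::InitializationData initData;
    initData.dispatcher = new QtDispatcher();
    Ice::CommunicatorPtr communicator = Ice::initialize(argc, argv, initData);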

    You might also want to check out the MFC demo in the C++ demo/Ice/MFC directory, which demonstrates the integration of MFC with Ice in a very similar way, by implementing an Ice dispatcher that dispatches AMI callbacks and requests in the MFC event loop.

    Let us know if you need more information on this!

    Cheers,
    Benoit.
  • Thanks for the links, Benoit.

    I tried to start from the DispatchEvent sample given by Alex Makarenko, which is not far from the official MFC example, but customEvent() is never called and my app freezes...

    Thierry
  • Be sure you are not using any synchronous calls in your GUI thread; that will hang your process for sure. Use AMI calls instead. For instance, if you have a Slice file:
    module X
    {
        interface Y
        {
            ["ami"] void doSomething();
        };
    };
    

    In your GUI you would call "begin_doSomething()". This guarantees your GUI will not hang. There is good documentation on using AMI at ZeroC - White Papers and Articles for C++, Java, C#, and Python.
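
    Here is a minimal sketch of what the caller can look like with the Ice 3.4 C++ mapping (the MyWidget class and its method names are only illustrative; with Ice 3.4 the begin_/end_ methods are generated for every operation, even without the ["ami"] metadata):
    #include <Ice/Ice.h>
    #include <QtGui/QWidget>
    #include <Y.h> // header generated from the Slice file above (name assumed)
    
    class MyWidget : public QWidget
    {
    public:
        MyWidget(const X::YPrx& proxy) : _proxy(proxy)
        {
        }
    
        void invokeDoSomething()
        {
            // begin_doSomething() returns immediately; the GUI thread never
            // blocks waiting for the server's response.
            _proxy->begin_doSomething(
                X::newCallback_Y_doSomething(this, &MyWidget::doSomethingOK,
                                             &MyWidget::doSomethingFailed));
        }
    
    private:
        void doSomethingOK()
        {
            // With a Qt dispatcher installed, this runs in the GUI thread,
            // so it is safe to update widgets here.
        }
    
        void doSomethingFailed(const Ice::Exception& ex)
        {
            // Report the failure, e.g. in a status bar or a message box.
        }
    
        X::YPrx _proxy;
    };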
  • Thanks Marc,

    Things were not clear to me; I thought that only mixing synchronous and asynchronous calls was prohibited.

    Anyway, the freeze appears in a Qt GUI server app when calling
    topicManagerProxy = IceStorm::TopicManagerPrx::checkedCast(topicProxyObject);
    
    or, after the init step, each time I use the proxy object to send my IceStorm feed.

    Maybe it's because the IceStorm calls are not in AMD mode, but does it make sense to make an AMD call for the IceStorm publish function?

    Thierry
  • benoit (Rennes, France)
    Hi Thierry,

    Under the hood, the checkedCast call performs a synchronous call to the IceStorm server to verify that the Ice object actually implements the TopicManager interface. As Marc mentioned, you can't make synchronous calls from the Qt thread if you use the Ice dispatcher: the synchronous call waits for a response, but that response can never be dispatched because the Qt thread is stuck waiting.

    So here, you should use an uncheckedCast instead to avoid making the synchronous call to IceStorm:
    topicManagerProxy = IceStorm::TopicManagerPrx::uncheckedCast(topicProxyObject);
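
    If you do want to keep the type check without blocking the GUI thread, one option (just a sketch, not code from this thread; MyClass and its callback methods are assumed) is to run the ice_isA check asynchronously and perform the uncheckedCast from the callback:
    // In your initialization code:
    topicProxyObject->begin_ice_isA(IceStorm::TopicManager::ice_staticId(),
        Ice::newCallback_Object_ice_isA(this, &MyClass::topicManagerChecked,
                                        &MyClass::topicManagerFailed));
    
    // Callback, dispatched in the GUI thread when a Qt dispatcher is installed:
    void
    MyClass::topicManagerChecked(bool isTopicManager)
    {
        if(isTopicManager)
        {
            topicManagerProxy = IceStorm::TopicManagerPrx::uncheckedCast(topicProxyObject);
        }
    }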
    

    Cheers,
    Benoit.
  • Everything is working fine now when the publisher, IceStorm and consumer binaries all run locally on the same machine.

    I then tried to run the publisher binary on another machine while keeping IceStorm and the consumer on the same machine.

    I experience some random freezes in the publisher component: all the initialization seems to be OK (no exception), but during the first call to the proxy method (in the Qt thread), the call randomly freezes (until timeout) and then runs fine.

    I set Ice.Trace.Network=3 in config.pub, and here is what I saw:
    -- 06/13/11 14:22:38.482 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: attempting to bind to tcp socket 192.168.1.5:10010
    -- 06/13/11 14:22:38.482 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: accepting tcp connections at 192.168.1.5:10010
    -- 06/13/11 14:22:38.482 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: published endpoints for object adapter `General':
       tcp -h 192.168.1.5 -p 10010
    -- 06/13/11 14:22:38.482 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: trying to establish tcp connection to 192.168.1.2:10000
    -- 06/13/11 14:22:38.483 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: tcp connection established
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.483 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.484 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 88 of 88 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.484 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.484 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 12 of 12 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.485 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 72 of 72 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.485 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.486 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 130 of 130 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.486 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 69 of 69 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.486 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.487 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 132 of 132 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.487 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 69 of 69 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.487 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.487 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 127 of 127 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.487 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 66 of 66 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.488 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.488 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 129 of 129 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.488 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 68 of 68 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.488 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.488 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 126 of 126 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    dispatch() 
    -- 06/13/11 14:22:38.489 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: sent 65 of 65 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.489 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 14 of 14 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    -- 06/13/11 14:22:38.489 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: received 128 of 128 bytes via tcp
       local address = 192.168.1.5:49678
       remote address = 192.168.1.2:10000
    


    My diagnosis is that everything seems to be OK, and connections are established between both machines (192.168.1.2 and 192.168.1.5).

    Then I call my proxy method, and the trace reveals this:
    -- 06/13/11 14:22:41.688 /Users/quadbyte/dev/Heliox/flux/flux-build-desktop/flux.app/Contents/MacOS/flux: Network: trying to establish tcp connection to 172.16.6.1:10001
    

    Why does the network layer try to establish a connection to 172.16.6.1, which was never mentioned in the trace before?

    It is not always 172.16.6.1 in each debug session I launch, but that could explain why the calls freeze until the timeout.

    Does anybody have an idea?

    Here is my config files:

    config.pub
    TopicManager.Proxy=RTSafirService/TopicManager:default -h 192.168.1.2 -p 10000
    General.Endpoints=tcp -h 192.168.1.5 -p 10010
    Ice.Trace.Network=3
    

    config.icebox
    IceBox.ServiceManager.Endpoints=tcp -p 9998
    
    #
    # The IceStorm service. The service is configured using a separate
    # configuration file (see config.service).
    #
    IceBox.Service.IceStorm=IceStormService,34:createIceStorm --Ice.Config=config.service
    

    config.service
    IceStorm.InstanceName=RTSafirService
    
    #
    # This property defines the endpoints on which the IceStorm
    # TopicManager listens.
    #
    IceStorm.TopicManager.Endpoints=default -p 10000
    
    #
    # This property defines the endpoints on which the topic
    # publisher objects listen. If you want to federate
    # IceStorm instances this must run on a fixed port (or use
    # IceGrid).
    #
    IceStorm.Publish.Endpoints=tcp -p 10001:udp -p 10001
    

    Thierry
  • Do you have more than one network card on your server?
  • Physically no, but logically I have VirtualBox, VMware and others. Here is the ifconfig result on my server:
    minix:~ quadbyte$ ifconfig
    lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
    	inet6 ::1 prefixlen 128 
    	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 
    	inet 127.0.0.1 netmask 0xff000000 
    gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
    stf0: flags=0<> mtu 1280
    en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	ether 00:25:4b:9e:e1:7c 
    	inet6 fe80::225:4bff:fe9e:e17c%en0 prefixlen 64 scopeid 0x4 
    	inet 192.168.1.5 netmask 0xffffff00 broadcast 192.168.1.255
    	media: 1000baseT <full-duplex,flow-control>
    	status: active
    en1: flags=8823<UP,BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
    	ether 00:24:36:f1:bf:7b 
    	media: autoselect (<unknown type>)
    	status: inactive
    fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078
    	lladdr 00:25:4b:ff:fe:9e:e1:7c 
    	media: autoselect <full-duplex>
    	status: inactive
    vboxnet0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	ether 0a:00:27:00:00:00 
    vmnet1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	ether 00:50:56:c0:00:01 
    	inet 192.168.182.1 netmask 0xffffff00 broadcast 192.168.182.255
    vmnet8: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
    	ether 00:50:56:c0:00:08 
    	inet 172.16.16.1 netmask 0xffffff00 broadcast 172.16.16.255
    

    I think I found the problem. As described in config.service, there is no 'connection' between the location of the IceStorm.TopicManager service and the location of IceStorm.Publish. To me it made sense that both were located on the same machine, so if the TopicManager connection was OK, the Publish connection would be on the same machine too.

    If I specify the host on IceStorm.Publish this way:
    IceStorm.Publish.Endpoints=tcp -p 10001 -h 192.168.1.2
    

    the problems seem to disappear... but it is quite strange that some connections worked fine; the endpoint list does not seem to be ordered the same way in each session.

    Thanks Marc for putting me on the right track.

    Thierry
  • benoit (Rennes, France)
    Hi Thierry,

    When you don't specify the -h option for the endpoints property, the IceStorm service listens on all available network interfaces, and the proxies it creates include the endpoints of each interface.

    When your publisher invokes on a proxy with all those endpoints and the connection isn't established yet, it randomly picks one endpoint and tries to establish a connection to it. The fact that the endpoint is picked randomly explains why it sometimes hung and sometimes didn't. If you had configured timeouts, you would have seen the connection establishment eventually time out, and the Ice runtime would then have tried to establish the connection using another endpoint.

    For more details on how connection establishment works with Ice, I recommend reading Chapter 36, Connection Management.
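
    If you want to bound how long a bad endpoint can block things while you experiment, you could for example restrict the publisher endpoints to the interface the publisher can reach and add a connect timeout (the values below are only examples):
    # config.service: publish only on the reachable interface
    IceStorm.Publish.Endpoints=tcp -h 192.168.1.2 -p 10001
    
    # config.pub: give up on connection establishment after 5 seconds
    Ice.Override.ConnectTimeout=5000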

    Cheers,
    Benoit.