Archived

This forum has been archived. Please start a new discussion on GitHub.

Questions about AMI and AMD with multi-threaded pool?

In Ice 3.3, are the statements below correct?

1. AMD with a multi-threaded, serialized server pool (Ice.ThreadPool.Server.Serialize=1, Ice.ThreadPool.Server.Size=2) means the Ice run time dispatches requests from different connections concurrently but serializes requests from the same connection, whether oneway or twoway.


2. AMD with a multi-threaded, non-serialized server pool means the Ice run time may not dispatch requests from the same connection in the order received, whether oneway or twoway.

3. AMI with a multi-threaded, serialized client pool (Ice.ThreadPool.Client.Serialize=1, Ice.ThreadPool.Client.Size=2) means the Ice run time processes replies from the same connection serially, whether the requests were oneway or twoway.

4. AMI with a multi-threaded, non-serialized client pool means the Ice run time may not process replies from the same connection in the order received, whether the requests were oneway or twoway.

5. For example, if I want to transfer large files block by block between PCs with dual-core CPUs, should I set both the client to AMI with a multi-threaded, serialized client pool and the server to AMD with a multi-threaded, serialized server pool? Or is it enough to set only the client (server) to AMI (AMD) with a multi-threaded, serialized pool?
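For reference, these are the property settings under discussion as they would appear in a configuration file (the pool size of 2 is only an example):

```
# Server side: dispatch concurrently across connections, but serialize
# requests from the same connection
Ice.ThreadPool.Server.Size=2
Ice.ThreadPool.Server.Serialize=1

# Client side: process replies from the same connection in order
Ice.ThreadPool.Client.Size=2
Ice.ThreadPool.Client.Serialize=1
```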

To be more specific, see the code from the file transfer example in issue 20.
Could the client receive the blocks of the file in the order the requests were sent, if the client uses only AMI with a multi-threaded, serialized client pool while the server uses AMD with a multi-threaded, non-serialized server pool?
#include <Ice/BuiltinSequences.ice>

module Demo
{

exception FileAccessException
{
    string reason;
};

interface FileStore
{
    ["ami", "cpp:array"] Ice::ByteSeq read(string name, int offset, int num)
	throws FileAccessException;
};

};

If I change the code above (read from the server and write to the client) to the following (read from the client and write to the server), could the server dispatch the requests carrying the blocks of the file in the order received, if the client uses only AMI with a multi-threaded, non-serialized client pool while the server uses AMD with a multi-threaded, serialized server pool?
interface FileStore
{
    ["ami"] void write(string name, int offset, Ice::ByteSeq bytes)
	throws FileAccessException;
};


Besides, the Ice 3.3 manual, p. 811, says:
"If the server must keep track of the order of client requests, a better solution would be to use serialization in conjunction with (see Section 29.4) asynchronous dispatch to queue the incoming requests for execution by other threads."
What does "execution by other threads" mean?
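Presumably it means the servant queues each incoming AMD callback, and separate worker threads take requests off the queue and send the responses; with a single worker draining a FIFO queue, responses go out in arrival order even while dispatch threads run concurrently. A minimal sketch of that pattern with standard threads, no Ice types (`WorkQueue` and `dispatchSerialized` are illustrative names; the `int` stands in for the AMD callback):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A FIFO queue drained by one worker thread, so queued requests are
// "responded to" in the order they were added.
class WorkQueue {
public:
    void add(int order) {                        // called by dispatch threads
        std::lock_guard<std::mutex> lock(_mutex);
        _queue.push(order);
        _cond.notify_one();
    }
    void stop() {
        std::lock_guard<std::mutex> lock(_mutex);
        _done = true;
        _cond.notify_one();
    }
    // Runs on the worker thread; appends each completed "response".
    void run(std::vector<int>& completed) {
        std::unique_lock<std::mutex> lock(_mutex);
        while (true) {
            _cond.wait(lock, [this] { return _done || !_queue.empty(); });
            if (_queue.empty())
                return;                          // stopped and fully drained
            completed.push_back(_queue.front()); // stand-in for cb->ice_response(order)
            _queue.pop();
        }
    }
private:
    std::mutex _mutex;
    std::condition_variable _cond;
    std::queue<int> _queue;
    bool _done = false;
};

// Queue n requests and let the single worker answer them in order.
std::vector<int> dispatchSerialized(int n) {
    WorkQueue wq;
    std::vector<int> completed;
    std::thread worker([&] { wq.run(completed); });
    for (int i = 1; i <= n; ++i)
        wq.add(i);                               // what sayHello_async does for delayed requests
    wq.stop();
    worker.join();
    return completed;
}
```

Because there is exactly one consumer and the queue is FIFO, the completion order matches the enqueue order regardless of how many threads dispatched the requests.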

Comments

  • I modified the async demo under ice-3.3.0\demo\Ice to test the questions above. The complete modified async code compiles under Windows XP SP2 with Visual Studio 2003 SP1 on a dual-core Intel PC. See the attachment for the code.

    Following is Hello.ice:
    module Demo
    {
    
    exception RequestCanceledException
    {
    };
    
    interface Hello
    {
        ["ami", "amd"] int sayHello(int delay, int order)
            throws RequestCanceledException;
    
        void shutdown();
    };
    
    };
    

    Client sends multiple greetings through AMI in a loop like this:
    for (int i = 1; i < 17; ++i)
        hello->sayHello_async(new AMI_Hello_sayHelloI, 0, i);
    


    Client processes replies like this:
    class AMI_Hello_sayHelloI : public AMI_Hello_sayHello, public IceUtil::Monitor<IceUtil::RecMutex>
    {
    public:
    
        virtual void ice_response(::Ice::Int retOrder)
        {
            Lock sync(*this);
            cout << "retOrder " << retOrder << endl;
        }
    ...
    

    Server dispatches requests like this:
    void
    HelloI::sayHello_async(const Demo::AMD_Hello_sayHelloPtr& cb, int delay, int order,const Ice::Current&)
    {
        Lock sync(*this);
        if(delay == 0)
        {
            cout << "Hello World!" <<order<< endl;
            cb->ice_response(order);
        }
        else
        {
            _workQueue->add(cb, delay);
        }
    }
    ...
    

    Combining the Ice thread pool properties, I obtained the following results:

    1. Client uses AMI with a single-threaded pool
    1.1 Server uses AMD with a single-threaded pool
    1.2 Server uses AMD with a multi-threaded pool, Ice.ThreadPool.Server.Serialize=1
    Result: both are OK; order is assured from dispatch through reply processing.
    1.3 Server uses AMD with a multi-threaded, non-serialized pool
    Result: the server may dispatch out of order. The larger Ice.ThreadPool.Server.Size, the more disorder.

    2. Client uses AMI with a multi-threaded pool, Ice.ThreadPool.Client.Serialize=1
    2.1 Server uses AMD with a single-threaded pool
    2.2 Server uses AMD with a multi-threaded pool, Ice.ThreadPool.Server.Serialize=1
    Result: both are OK; order is assured from dispatch through reply processing.
    2.3 Server uses AMD with a multi-threaded, non-serialized pool
    Result: the server may dispatch out of order; the client processes replies concurrently as expected.

    3. Client uses AMI with a multi-threaded, non-serialized pool
    3.1 Server uses AMD with a single-threaded pool
    3.2 Server uses AMD with a multi-threaded, serialized pool
    Result: the server dispatches in order, but the client's reply output may be garbled, even with "Lock sync(*this)". Why?
    3.3 Server uses AMD with a multi-threaded, non-serialized pool
    Result: the server may dispatch out of order, and the client's reply output may be garbled, even with "Lock sync(*this)". Why? The client may print something like:
    retOrder 2
    retOrder 3
    retOrder 4retOrder
    5retOrder
    6
    retOrder 1
    retOrder 8
    retOrder 9
    retOrder 10
    ...
    If the client processed replies concurrently but without garbling, it would print like this:
    retOrder 2
    retOrder 3
    retOrder 4
    retOrder 5
    retOrder 6
    retOrder 1
    retOrder 8
    retOrder 9
    retOrder 10
    ...


    To conclude, are the following answers correct?
    1. AMD with a multi-threaded, serialized server pool (Ice.ThreadPool.Server.Serialize=1, Ice.ThreadPool.Server.Size=2) means the Ice run time dispatches requests from different connections concurrently but serializes requests from the same connection, whether oneway or twoway.
    1 is correct!
    2. AMD with a multi-threaded, non-serialized server pool means the Ice run time may not dispatch requests from the same connection in the order received, whether oneway or twoway.
    2 is correct!
    3. AMI with a multi-threaded, serialized client pool (Ice.ThreadPool.Client.Serialize=1, Ice.ThreadPool.Client.Size=2) means the Ice run time processes replies from the same connection serially, whether the requests were oneway or twoway.
    3 is correct!
    4. AMI with a multi-threaded, non-serialized client pool means the Ice run time may not process replies from the same connection in the order received, whether the requests were oneway or twoway.
    4 is wrong! If the server uses a single-threaded pool, order is assured!
    5. For example, if I want to transfer large files block by block between PCs with dual-core CPUs, should I set both the client to AMI with a multi-threaded, serialized client pool and the server to AMD with a multi-threaded, serialized server pool? Or is it enough to set only the client (server) to AMI (AMD) with a multi-threaded, serialized pool?
    You must ensure the file is written sequentially!
    If you write the file on the client, you must keep reply processing in order, which means the client uses a serialized multi-threaded pool (or a single-threaded pool) and the server uses a serialized multi-threaded pool (or a single-threaded pool).
    If you write the file on the server, you must keep dispatching in order, which means the server uses a serialized multi-threaded pool (or a single-threaded pool).
    A new question arises: when the client uses AMI with a multi-threaded, non-serialized pool and the server uses a multi-threaded pool, why may the client's reply output be garbled, even with "Lock sync(*this)"?
  • A new question arose: when the client uses AMI with a multi-threaded, non-serialized pool and the server uses a multi-threaded pool, why may the client's reply output be garbled, even with "Lock sync(*this)"?

    Because I used a different callback object ("new AMI_Hello_sayHelloI") for each call in
    hello->sayHello_async(new AMI_Hello_sayHelloI, 0, i);

    each "Lock sync(*this)" locks a different monitor, so the lock serializes nothing across replies, and the client may process them in an unexpected, interleaved way!
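To make the diagnosis concrete: since every reply gets its own callback object, its own monitor is locked, and the two `cout <<` operations of different replies can interleave mid-line. A hedged sketch of the fix (plain std::thread, no Ice; `iceResponse` and `runCallbacks` are illustrative names): one mutex shared by all callbacks, with each line formatted first and appended atomically.

```cpp
#include <mutex>
#include <set>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

std::mutex coutMutex;   // ONE mutex shared by every callback object
std::string output;     // stands in for cout

// What each AMI_Hello_sayHelloI::ice_response would do, but locking the
// shared mutex instead of the callback's own monitor.
void iceResponse(int retOrder) {
    std::ostringstream os;
    os << "retOrder " << retOrder << '\n';   // format the whole line first
    std::lock_guard<std::mutex> lock(coutMutex);
    output += os.str();                      // append it atomically
}

// Run n "replies" concurrently; return the orders found as complete,
// un-interleaved lines (an empty set would signal garbled output).
std::set<int> runCallbacks(int n) {
    std::vector<std::thread> threads;
    for (int i = 1; i <= n; ++i)
        threads.emplace_back(iceResponse, i);
    for (auto& t : threads)
        t.join();
    std::set<int> seen;
    std::istringstream is(output);
    std::string word;
    int order;
    while (is >> word >> order) {
        if (word != "retOrder")
            return std::set<int>();          // a broken line means interleaving
        seen.insert(order);
    }
    return seen;
}
```

The replies still complete in arbitrary order (that is expected with a non-serialized pool), but every line stays intact.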

    Is the answer to question 5 correct?
    5. For example, if I want to transfer large files block by block between PCs with dual-core CPUs, should I set both the client to AMI with a multi-threaded, serialized client pool and the server to AMD with a multi-threaded, serialized server pool? Or is it enough to set only the client (server) to AMI (AMD) with a multi-threaded, serialized pool?

    You must ensure the file is written sequentially!
    If you write the file on the client, you must keep reply processing in order, which means the client uses a serialized multi-threaded pool (or a single-threaded pool) and the server uses a serialized multi-threaded pool (or a single-threaded pool).
    If you write the file on the server, you must keep dispatching in order, which means the server uses a serialized multi-threaded pool (or a single-threaded pool).
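One observation not raised in the thread: because read()/write() carry an explicit offset, the file can also be assembled correctly with no ordering guarantee at all; whichever side writes the file simply places each block at its offset. A minimal sketch (`assembleFile` is a hypothetical helper; a string stands in for the file):

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <utility>
#include <vector>

// Assemble a file of `total` bytes from (offset, data) blocks that may
// arrive in any order. Each AMI reply already knows which offset it
// requested, so delivery order does not matter.
std::string assembleFile(std::size_t total,
                         const std::vector<std::pair<std::size_t, std::string> >& blocks) {
    std::string file(total, '\0');
    for (std::size_t i = 0; i < blocks.size(); ++i)
        std::memcpy(&file[blocks[i].first],       // seek to the block's offset
                    blocks[i].second.data(),
                    blocks[i].second.size());
    return file;
}
```

With this approach, serialized pools are only needed if something else (for example, progress tracking) depends on ordering.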