
My opinion about AMD

Suppose I want to implement an AMD operation. I have to write at least the following code:
---------code1 (necessary)
module Demo {
	sequence<float> Row;
	sequence<Row> Grid;
	exception RangeError {};
	interface Model {
		["amd"] Grid interpolate(Grid data, float factor)
			throws RangeError;
	};
};

---------code2(necessary)
class ModelI : virtual public Demo::Model,
		virtual public IceUtil::Mutex {
public:
	virtual void interpolate_async(
		const Demo::AMD_Model_interpolatePtr &,
		const Demo::Grid &,
		Ice::Float,
		const Ice::Current &);
private:
	std::list<JobPtr> _jobs;
};
---------code3
void ModelI::interpolate_async(
		const Demo::AMD_Model_interpolatePtr & cb,
		const Demo::Grid & data,
		Ice::Float factor,
		const Ice::Current & current)
{
	IceUtil::Mutex::Lock sync(*this);
	JobPtr job = new interpolateJob(cb, data, factor);
	_jobs.push_back(job);
}

----------code4
class interpolateJob : public IceUtil::Shared {
public:
	interpolateJob(const Demo::AMD_Model_interpolatePtr & cb,
		const Demo::Grid & grid,
		Ice::Float factor) :
		_cb(cb), _grid(grid), _factor(factor) {}

	void execute();
private:
	bool interpolateGrid();

	Demo::AMD_Model_interpolatePtr _cb;
	Demo::Grid _grid;
	Ice::Float _factor;
};
typedef IceUtil::Handle<interpolateJob> JobPtr;

----------code5
void interpolateJob::execute()
{
	if(!interpolateGrid()) {
		_cb->ice_exception(Demo::RangeError());
		return;
	}

	_cb->ice_response(_grid);
}

----------code6(necessary)
bool interpolateJob::interpolateGrid()
{
	//Business logic (this is what we really want to write)
	return true;
}

----------code7
class interpolateJobThread : public IceUtil::Thread 
{
	virtual void run() 
	{
		while(1)
		{
			_jobMutex.lock();
			
			if(_jobs.size()!=0)
			{
				JobPtr firstJob=(JobPtr)_jobs.front();
				_jobs.pop_front();
				firstJob->execute();
			}
			
			_jobMutex.unlock();
			Sleep(100);
		}
	}
};

----------code8
int main()
{
	//...
	//we can also start a thread pool to ...
	IceUtil::ThreadPtr t = new interpolateJobThread;	
	IceUtil::ThreadControl tc = t->start();
	//...
}


Oh... that is a lot of code (code3/code4/code5/code7/code8) to write for just one AMD operation. Worse, if we add another AMD operation, we have to write similar code all over again! But what we really want to write is just code6!

Another problem: for each AMD operation we have to start a separate thread or thread pool! These threads can neither use the Ice run time's thread pool nor share a separate thread pool of their own! This makes managing the server-side threads too difficult!

All of the above flaws decrease AMD's appeal, I think. Can it be improved? The simpler, the better!

Comments

  • hi,

    just a quick idea on your comment:
    Worse, if we add another AMD operation, we have to write similar code all over again! But what we really want to write is just code6!

    Since you are using C++, you could wrap the reused code (the job queue, etc.) in template classes and pass the job type as a template argument.
    Plenty of job-queue implementations already exist out there. Search on Google, e.g.:
    e.g.: http://www.codeproject.com/threads/Queue_Manager.asp
    http://www.codeproject.com/threads/#Threads
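
    For example, a minimal sketch of such a template wrapper (assuming your Job type with an execute() method, held through a reference-counted smart pointer):

    //A reusable, templated job queue: each new AMD operation only supplies
    //its own job type; the locking and queueing code is shared.
    #include <IceUtil/Mutex.h>
    #include <list>
    
    template<typename JobPtrT>
    class JobQueue : public IceUtil::Mutex
    {
    public:
        void push(const JobPtrT& job)
        {
            IceUtil::Mutex::Lock sync(*this);
            _jobs.push_back(job);
        }
    
        bool pop(JobPtrT& job)
        {
            IceUtil::Mutex::Lock sync(*this);
            if(_jobs.empty())
            {
                return false;
            }
            job = _jobs.front();
            _jobs.pop_front();
            return true;
        }
    
    private:
        std::list<JobPtrT> _jobs;
    };
    
    //Usage: JobQueue<JobPtr> queue; queue.push(new Job(cb, data, factor));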
    Another problem: for each AMD operation we have to start a separate thread or thread pool! These threads can neither use the Ice run time's thread pool nor share a separate thread pool of their own! This makes managing the server-side threads too difficult!

    Why not process multiple job queues in one thread?
    Why not push different jobs onto the same queue, handled within one thread?
    (This is what we did in one of our servers.)
    ----------code5
    void interpolateJob::execute()
    {
    	if(!interpolateGrid()) {
    		_cb->ice_exception(Demo::RangeError());
    		return;
    	}
    
    	_cb->ice_response(_grid);
    }
    

    If you do it this way you can skip your AMD approach:
    you allow the AMD call to return only after processing;
    this behaviour is almost identical to a standard synchronous method invocation.

    You should call ice_response after you have accepted the request.


    take care


    tom
  • to DeepDiver:
    1. About your comment 1 (...you could wrap the reused code...) and comment 2 (...why not process multiple job queues in one thread...):
    Yes, I agree with you! But if programmers can simplify this themselves, it should be incorporated into the code that slice2cpp generates. Don't you think so?

    2. About your comment 3 (...you should call ice_response after you did accept the request...):
    I almost agree with you! The programmer's code should call ice_exception() or ice_response() once and only once. The request has already been accepted; it is just cached and then brought out for processing later.



    BTW, I think Ice can be improved in the following directions:
    1. Programmers should have to write as little code as possible (code1/code2/code6 is acceptable).

    2. All requests on AMD operations should share a separate thread pool, distinct from IceInternal::Instance::serverThreadPool.

    3. Further, it would be better if an AMD request could carry a priority factor.
  • marc (Florida)
    I do not agree with you. AMD is a tool that can be used for many purposes. Having requests processed by a separate thread or a separate thread pool, while giving back the thread that received the request to the Ice core, is only one of many possible uses. To name just one example, having such a processing thread would be completely inappropriate for chained AMI/AMD calls, as we use them in Glacier2 and IcePack, and many other applications that forward requests.

    Something like this does not belong in the Ice core, it is way too specialized. It would belong in an Ice utility library, but even for that, it is hard to find one single method that is appropriate for all applications that require processing of requests by dedicated threads. I think the best we can do for this is to provide examples and explain design patterns, like the example from our Connections newsletter or the Ice manual.
  • marc wrote:
    I do not agree with you. AMD is a tool that can be used for many purposes. Having requests processed by a separate thread or a separate thread pool, while giving back the thread that received the request to the Ice core, is only one of many possible uses. To name just one example, having such a processing thread would be completely inappropriate for chained AMI/AMD calls, as we use them in Glacier2 and IcePack, and many other applications that forward requests.

    Something like this does not belong in the Ice core, it is way too specialized. It would belong in an Ice utility library, but even for that, it is hard to find one single method that is appropriate for all applications that require processing of requests by dedicated threads. I think the best we can do for this is to provide examples and explain design patterns, like the example from our Connections newsletter or the Ice manual.

    Sorry, I cannot understand you! Can you give some demo code to explain it? Just code snippets would be enough.
  • Do you mean like this?
    ----------code6(necessary)
    bool interpolateJob::interpolateGrid()
    {
            //Business logic (this is what we really want)
            ...
            call an AMI method that belongs to a remote object
            ...
    }
    
  • marc (Florida)
    First of all, in my posting above, I wrote IcePack as an example for message forwarding, but of course I meant IceStorm.

    I'm afraid I cannot post code examples here. First, there are many uses of AMD, so one single code example wouldn't be enough. Second, each of these examples would require thorough explanation, something that I cannot do here in this newsgroup. That's what we have the articles in our newsletter "Connections" for. (I think there will be an article about using AMI and AMD in the next issue.)
  • marc (Florida)
    rc_hz wrote:
    Do you mean like this?
    ----------code6(necessary)
    bool interpolateJob::interpolateGrid()
    {
            //Business logic (this is what we really want)
            ...
            call an AMI method that belongs to a remote object
            ...
    }
    

    Sorry, but it is impossible for me to say what the intent of the code above is.

    Again, I cannot write a paper about the various uses of AMD in this newsgroup. This is out of the scope of the support we can give here on this message board. I'm afraid the only way for you to learn more about this is to thoroughly study the manual and how AMI and AMD are used in the Ice services, and to read upcoming articles about this topic in our newsletter.

    If you would like us to provide you with consulting services to explain how to best use AMD for your specific application's needs, then we could certainly do this. In this case, please contact us at info@zeroc.com.
  • marc wrote:
    ...It would belong in an Ice utility library, but even for that, it is hard to find one single method that is appropriate for all applications that require processing of requests by dedicated threads...

    Maybe the template method design pattern can help.
    marc wrote:
    ...That's what we have the articles in our newsletter "Connections" for. (I think there will be an article about using AMI and AMD in the next issue.)

    Oh, that means I should wait for almost a month :p
  • matthew (NL, Canada)
    rc_hz wrote:
    Maybe the template method design pattern can help.



    Oh, that means I should wait for almost a month :p

    AMD/AMI chaining means that you forward the request to another object as an AMI operation, and then forward the result back to the caller. For example, something like this:
    // slice
    interface bar
    {
       ["ami"] void bar1();
    };
    
    interface foo
    {
       ["amd"] void foo1();
    };
    
    // C++
    class AMI_bar_bar1I : public AMI_bar_bar1
    {
    public:
       AMI_bar_bar1I(const AMD_foo_foo1Ptr& cb) : _cb(cb) {}
    
       virtual void ice_response() { _cb->ice_response(); }
       virtual void ice_exception(const Ice::Exception& ex) { _cb->ice_exception(ex); }
    
    private:
       const AMD_foo_foo1Ptr _cb;
    };
    
    void
    fooI::foo1_async(const AMD_foo_foo1Ptr& cb, const Ice::Current&)
    {
        _barPrx->bar1_async(new AMI_bar_bar1I(cb));
    }
    

    For services such as Glacier2 or IceStorm that forward messages (and, in the case of Glacier2, their replies) to other servers, this is very useful.

    It is definitely not appropriate to have a work queue in the core for this type of application!

    The work queue you have written is very specific to your use-case. It's not hard to imagine ways (Tom pointed out one) to make this into something that is much more flexible and more reusable. Your code, btw, has a big flaw... You do this:
    class interpolateJobThread : public IceUtil::Thread 
    {
            virtual void run() 
            {
                    while(1)
                    {
                            _jobMutex.lock();
                            
                            if(_jobs.size()!=0)
                            {
                                    JobPtr firstJob=(JobPtr)_jobs.front();
                                    _jobs.pop_front();
                                    firstJob->execute();
                            }
                            
                            _jobMutex.unlock();
                            Sleep(100);
                    }
            }
    };
    

    Although the code is not complete, more than likely you should not hold the mutex while processing the job; otherwise pushing a new job has to wait until the queue is unlocked, which means that you cannot release the server thread immediately, which after all is your goal in using AMD. You should probably use a monitor and wait for jobs to become available instead of using Sleep. See the workqueue demo for an example of how to do this (demo/IceUtil/workqueue).
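
    For example, something along these lines (a sketch in the spirit of that demo, not copied from it; JobPtr is the smart pointer from your code):

    //A worker that sleeps on a monitor until work arrives, and releases the
    //lock before executing the job.
    #include <IceUtil/Monitor.h>
    #include <IceUtil/Mutex.h>
    #include <IceUtil/Thread.h>
    #include <list>
    
    class WorkerThread : public IceUtil::Thread
    {
    public:
        void add(const JobPtr& job)
        {
            IceUtil::Monitor<IceUtil::Mutex>::Lock lock(_monitor);
            _jobs.push_back(job);
            _monitor.notify(); //wake up the worker
        }
    
        virtual void run()
        {
            while(true)
            {
                JobPtr job;
                {
                    IceUtil::Monitor<IceUtil::Mutex>::Lock lock(_monitor);
                    while(_jobs.empty())
                    {
                        _monitor.wait(); //no busy waiting, no Sleep()
                    }
                    job = _jobs.front();
                    _jobs.pop_front();
                } //the lock is released here...
                job->execute(); //...so execute() runs without holding the mutex
            }
        }
    
    private:
        IceUtil::Monitor<IceUtil::Mutex> _monitor;
        std::list<JobPtr> _jobs;
    };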

    What isn't really clear to me when looking at your example is exactly what purpose you have in using AMD in this case. Perhaps you could explain your use-case a little more and we could provide better advice. For example, if you are processing the interpolation data all within the same server, I'm not sure I see the point in using AMD at all.

    And finally, what do you mean that AMD should have a priority factor?

    Regards, Matthew
  • matthew wrote:
    Although the code is not complete, more than likely you should not hold the mutex while processing the job

    Yes, I agree. Holding the lock only while manipulating the queue (push/pop) is enough!

    matthew wrote:
    What isn't really clear to me when looking at your example is exactly what purpose you have in using AMD in this case.
    My example is just a normal case in a real-world application: the client sends a request and the server processes it (pure business logic that just does a calculation or a select/insert/update on a database, e.g. telecommunication applications). Because synchronous method dispatch is limited by the size of the server-side thread pool, I want to use AMD for every operation.

    We have to agree that AMD can be used for many purposes! AMD/AMI chaining is of course the most difficult of them. However, the usage in my example is the most common one; it probably covers 80% of cases.
    matthew wrote:
    And finally, what do you mean that AMD should have a priority factor?
    Given my poor English, I had better give an example :)

    Suppose most AMD operations (excluding AMD/AMI chaining) share a thread pool on the server side. The thread pool is busy now and all of its threads are processing requests. 100 additional AMD requests have been cached/delayed and are waiting to be processed! Some time later, one thread in the pool finishes its work, so it can choose one of the 100 cached AMD requests by the request's priority factor. Of course, we could control the priority strategy in all kinds of ways.
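
    For example, a freed pool thread could pick the most urgent cached request like this (just a sketch; PrioritizedJob and the priority convention are made up, and JobPtr is the smart pointer from my earlier posts):

    #include <queue>
    #include <vector>
    
    //Hypothetical wrapper: a cached request plus the priority factor it carried.
    struct PrioritizedJob
    {
        int priority; //higher value = more urgent (assumed convention)
        JobPtr job;
    };
    
    struct ByPriority
    {
        bool operator()(const PrioritizedJob& a, const PrioritizedJob& b) const
        {
            return a.priority < b.priority; //max-heap: highest priority on top
        }
    };
    
    std::priority_queue<PrioritizedJob, std::vector<PrioritizedJob>, ByPriority> pending;
    
    //A thread that has finished its work pops the most urgent cached request:
    //  PrioritizedJob next = pending.top(); pending.pop(); next.job->execute();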
  • This is what slice2cpp does now:
    ------------code0(PrinterAMD.ice)
    module Demo
    {
        interface PrinterAMD
        {
            //the operation has one input parameter, one output parameter, and one return value
            int printString(int id, out string name);
        };
    };
    
    ------------code1(PrinterAMD.h)
    namespace IceAsync
    {
    
    namespace Demo
    {
    
    class AMD_PrinterAMD_printString : public ::Demo::AMD_PrinterAMD_printString, public ::IceInternal::IncomingAsync
    {
    public:
    
        AMD_PrinterAMD_printString(::IceInternal::Incoming&);
    
        virtual void ice_response(::Ice::Int, const ::std::string&);
        virtual void ice_exception(const ::Ice::Exception&);
        virtual void ice_exception(const ::std::exception&);
        virtual void ice_exception();
    };
    
    }
    
    }
    
    
    namespace Demo
    {
    
    class PrinterAMD : virtual public ::Ice::Object
    {
    public:
        //...
        virtual void printString_async(const ::Demo::AMD_PrinterAMD_printStringPtr&, ::Ice::Int, const ::Ice::Current& = ::Ice::Current()) = 0;
        //...
    };
    
    }
    
    ------------code2(PrinterAMD.cpp)
    ::IceInternal::DispatchStatus
    Demo::PrinterAMD::___printString(::IceInternal::Incoming& __in, const ::Ice::Current& __current)
    {
        ::IceInternal::BasicStream* __is = __in.is();
        ::Ice::Int id;
        __is->read(id);
        ::Demo::AMD_PrinterAMD_printStringPtr __cb = new IceAsync::Demo::AMD_PrinterAMD_printString(__in);
        try
        {
    		printString_async(__cb, id, __current);
        }
        catch(...)
        {
    		//...
        }
        return ::IceInternal::DispatchAsync;
    }
    
  • However, if slice2cpp did it like this:
    ------------code1(Suppose it is placed in IncomingAsync.h)
    namespace IceInternal
    {
    class AMDThreadPool
    {
    	//Logic:
    	//	1.The AMDThreadPool has a work queue of AMDJobs and runs in Leader-Follower mode.
    	//	2.A single thread just gets one AMDJob off the queue, then calls its AMDJob::execute() method.
    };
    
    class AMDJob
    {
    public:
    	void registerWithAMDThreadPool()
    	{
    		//Logic:
    		//	1.put self into the AMDThreadPool's work queue
    	}
    	
    	virtual void execute() = 0;
    };
    
    }
    ------------code2(PrinterAMD.h)
    namespace IceAsync
    {
    
    namespace Demo
    {
    
    class AMD_PrinterAMD_printString : public ::Demo::AMD_PrinterAMD_printString, public ::IceInternal::IncomingAsync,
    			public ::IceInternal::AMDJob
    {
    public:
    
        AMD_PrinterAMD_printString(::IceInternal::Incoming&);
    
        virtual void ice_response(::Ice::Int, const ::std::string&);
        virtual void ice_exception(const ::Ice::Exception&);
        virtual void ice_exception(const ::std::exception&);
        virtual void ice_exception();
    
    	void cacheData(Ice::Int id, const Ice::Current& current, PrinterAMD* ptr)
    	{
    		//1.cache the data
    		__id = id;
    		__current = current;
    		__ptr = ptr;
    		
    		//2.register with the AMD thread pool
    		registerWithAMDThreadPool();
    	}
    	
    	virtual void execute()
    	{
    		try
    		{
    			//call the real business logic
    			__ptr->printString_async(this, __id, __ret, __name, __current);
    		}
    		catch(const ::Ice::Exception& ex)
    		{
    			ice_exception(ex);	//Send the exception to the client
    			return;
    		}
    		
    		ice_response(__ret, __name);	//Send a normal response to the client
    	}
    protected:
    	//These members correspond to the parameters of the printString operation.
    	//Input parameters
    	Ice::Int		__id;
    	Ice::Current	__current;
    	PrinterAMD*		__ptr;
    	
    	//Output parameter and return value
    	::std::string	__name;
    	::Ice::Int		__ret;
    };
    
    }
    
    }
    
    ------------code3(PrinterAMD.h)
    namespace Demo
    {
    
    class PrinterAMD : virtual public ::Ice::Object
    {
    public:
    	//...
        virtual void printString_async(const ::Demo::AMD_PrinterAMD_printStringPtr&, ::Ice::Int,
        					::Ice::Int&, ::std::string&,
        					const ::Ice::Current& = ::Ice::Current()) = 0;
        //...
    };
    
    }
    
    
    ------------code4(PrinterAMD.cpp)
    ::IceInternal::DispatchStatus
    Demo::PrinterAMD::___printString(::IceInternal::Incoming& __in, const ::Ice::Current& __current)
    {
        ::IceInternal::BasicStream* __is = __in.is();
        ::Ice::Int id;
        __is->read(id);
        ::Demo::AMD_PrinterAMD_printStringPtr __cb = new IceAsync::Demo::AMD_PrinterAMD_printString(__in);
        try
        {
    		//printString_async(__cb, id, __current);
    		__cb->cacheData(id, __current, this);
        }
        catch(const ::Ice::Exception& __ex)
        {
    		//...
        }
        return ::IceInternal::DispatchAsync;
    }
    
    ------------code5(PrinterAMDI.cpp; programmers' code)
    Programmers can just write a PrinterAMDI implementation class as usual like this:
    
    class PrinterAMDI : public PrinterAMD
    {
    public:
        virtual void printString_async(const ::Demo::AMD_PrinterAMD_printStringPtr& cb, ::Ice::Int id,
        					::Ice::Int& ret, ::std::string& name,
        					const ::Ice::Current& = ::Ice::Current())
        {
        	//we can write the business logic here as usual.
        }
    };
    
    
    If slice2cpp generated code like the above, programmers would only need to write code5. This is very simple.
  • matthew (NL, Canada)
    I'm having trouble understanding your use case.

    What you appear to want to do is shift the processing of the request data onto another thread pool in the SAME server. If this is all you have done, what have you accomplished? Sure, you free up the thread in the Ice dispatch pool more quickly, but why do you want to do that?

    Case 1:

    Your interface supports both long-running and short-running requests. For example:
    interface foo
    {
       ["amd"] somedata longRunningMethod();
       int shortRunningMethod();
    };
    

    Case 2:

    You want to queue up the requests internally in the server for processing to avoid blocking clients. This case can be handled by using the Glacier2 router in buffered mode and letting it do the queuing.
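
    For reference, a sketch of the relevant router configuration (property values are illustrative; see the Glacier2 documentation for details):

    # Glacier2 router configuration: buffered mode queues forwarded
    # requests (and replies) inside the router.
    Glacier2.Client.Endpoints=tcp -h router -p 4063
    Glacier2.Client.Buffered=1
    Glacier2.Server.Buffered=1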

    Case 3:

    You want to do special-case processing on the queue, as you have suggested with the priority. However, this case cannot be handled in such a generic way by the Ice runtime, so you would have to write this special-case code yourself.

    Summary:

    In short, the only use-case that I think makes your suggestion somewhat useful is case 1. However, I suspect that this isn't all that common! Furthermore, you can do exactly this today by using two object adapters in your server with separate thread pools: one for long-running requests and one for short-running requests. Granted, this isn't that convenient, but it is one of several techniques that can handle this use-case nicely without adding more burden to the Ice core and code generators. A sketch follows below.
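
    For example (a sketch; the adapter names, property values, and servant classes are made up for the illustration):

    //Configuration, e.g. in the server's config file:
    //  LongOps.Endpoints=tcp -p 10001
    //  LongOps.ThreadPool.Size=10   //dedicated threads for long-running requests
    //  ShortOps.Endpoints=tcp -p 10002
    //  ShortOps.ThreadPool.Size=2   //a small pool suffices for short requests
    Ice::ObjectAdapterPtr longAdapter =
        communicator->createObjectAdapter("LongOps");
    Ice::ObjectAdapterPtr shortAdapter =
        communicator->createObjectAdapter("ShortOps");
    
    longAdapter->add(new LongFooI, communicator->stringToIdentity("longFoo"));
    shortAdapter->add(new ShortFooI, communicator->stringToIdentity("shortFoo"));
    
    longAdapter->activate();
    shortAdapter->activate();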

    Regards, Matthew
  • Matthew wrote:
    What you appear to want to do is shift the processing of the request data onto another thread pool in the SAME server. If this is all you have done, what have you accomplished? Sure, you free up the thread in the Ice dispatch pool more quickly, but why do you want to do that?
    Yes, this is what I want to do. It has the following advantages:
    1) Programmers only need to write code5 (in my post 13) and don't need to care about thread management.
    2) If the server has many AMD operations, they can all share a single AMD thread pool, whose size could be controlled by a property such as Ice.AMDThreadPool.Size=10.
    Matthew wrote:
    ...Sure, you free up the thread in the Ice dispatch pool more quickly, but why do you want to do that?
    This is the whole point of AMD. By caching requests and freeing up threads in the Ice dispatch pool as quickly as possible, the server can handle 100 requests with only 10 threads.
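
    For example, the pool size could be read like this (a sketch; MyApp.AMDThreadPool.Size is a made-up application property, not a real Ice property, and WorkerThread is the monitor-based worker sketched earlier in the thread):

    //Read the pool size from configuration, defaulting to 10.
    int poolSize = communicator->getProperties()->
        getPropertyAsIntWithDefault("MyApp.AMDThreadPool.Size", 10);
    
    std::vector<IceUtil::ThreadPtr> workers;
    for(int i = 0; i < poolSize; ++i)
    {
        IceUtil::ThreadPtr t = new WorkerThread;
        t->start();
        workers.push_back(t);
    }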
  • Matthew wrote:
    Case 2:

    You want to queue up the requests internally in the server for processing to avoid blocking clients. This case can be handled by using the Glacier2 router in buffered mode and letting it do the queuing.
    From the point of view of Ice's users, the simpler, the better. If a server and a client are enough, why should a Glacier2 router be deployed?
    Matthew wrote:
    Case 3:

    You want to do special-case processing on the queue, as you have suggested with the priority. However, this case cannot be handled in such a generic way by the Ice runtime, so you would have to write this special-case code yourself.

    No, this case can be handled in a generic way by the Ice runtime. It could be handled in the AMDThreadPool.
    Matthew wrote:
    ...without adding more burden to the Ice core and code generators...
    The burden on slice2cpp in my post 13 is equal to the burden on slice2cpp in my post 12.
  • In general, AMD is more efficient and scalable than synchronous method dispatch.
    I think, if the AMD operation does not call any other AMI operation, the server can work like this:

    1) One or more threads just take requests off the wire and cache them in a queue.
    (Many further things can be done there, such as flow control, priority control, timeout control...)
    (And it can solve this question: http://www.zeroc.com/vbulletin/showthread.php?t=1305&highlight=AMI)

    2) The AMDThreadPool's threads take a request from the queue, process it, and send the response back to the client.
  • matthew (NL, Canada)
    rc_hz wrote:
    Yes, this is what I want to do. It has the following advantages:
    1) Programmers only need to write code5 (in my post 13) and don't need to care about thread management.
    2) If the server has many AMD operations, they can all share a single AMD thread pool, whose size could be controlled by a property such as Ice.AMDThreadPool.Size=10.


    This is the whole point of AMD. By caching requests and freeing up threads in the Ice dispatch pool as quickly as possible, the server can handle 100 requests with only 10 threads.

    What do you mean by "handle"? If the server performs long-running operations for all of the requests, the server is not handling the load any better than it would just by using the thread pool that the Ice runtime already provides. In fact, you would make the performance worse due to all of the context switching that you've introduced.
    In general, AMD is more efficient and scalable than synchronous method dispatch.
    I think, if the AMD operation does not call any other AMI operation, the server can work like this:

    1) One or more threads just take requests off the wire and cache them in a queue.
    (Many further things can be done there, such as flow control, priority control, timeout control...)
    (And it can solve this question: http://www.zeroc.com/vbulletin/showthread.php?t=1305&highlight=AMI)

    2) The AMDThreadPool's threads take a request from the queue, process it, and send the response back to the client.

    AMD is certainly more flexible than using regular synchronous requests. However, that flexibility comes at a cost, as you've pointed out.
    From the point of view of Ice's users, the simpler, the better. If a server and a client are enough, why should a Glacier2 router be deployed?

    To handle security issues, and to allow buffering to absorb network blocking due to a busy client or server. If you want to see all of the benefits of using Glacier2, I recommend reading the articles I wrote in Connections 1 & 2.

    Consider the code that you pasted. If the AMD callbacks ice_response() or ice_exception() block due to a network problem, you've just gone and tied up a thread in your processing thread pool... If you use Glacier2 (in buffered mode), this problem is avoided.
    No, this case can be handled in a generic way by the Ice runtime. It could be handled in the AMDThreadPool.

    Quote:
    Originally Posted by Matthew
    ...without adding more burden to the Ice core and code generators...

    The burden on slice2cpp in my post 13 is equal to the burden on slice2cpp in my post 12.

    The case cannot be handled in a generic way. As you've rightfully pointed out, you may want to do very sophisticated priority management on the processing queue. How do you propose that we think of everything that a user might want to do with the queue items? The current implementation allows you to manage the queue exactly as you see fit. Anything we provide will fall short, IMO.

    Regards, Matthew
  • Matthew wrote:
    What do you mean by "handle"? If the server performs long-running operations for all of the requests, the server is not handling the load any better than it would just by using the thread pool that the Ice runtime already provides.
    "handle" just means that server can cached 100 requests with only 10 thread. If using synchonized method dispatch, other 90 requests have to be waiting. This is especially useful for AMI clients, I think!

    Matthew wrote:
    In fact, you would make the performance worse due to all of the context switching that you've introduced.
    I think AMD has very little impact on the performance of the server. The additional work is just locking/unlocking the queue.
    Matthew wrote:
    To handle security issues, and to allow buffering to absorb network blocking due to a busy client or server. If you want to see all of the benefits of using Glacier2, I recommend reading the articles I wrote in Connections 1 & 2.

    Consider the code that you pasted. If the AMD callbacks ice_response() or ice_exception() block due to a network problem, you've just gone and tied up a thread in your processing thread pool... If you use Glacier2 (in buffered mode), this problem is avoided.
    I agree that Glacier2 can solve the problem. However, as I have mentioned in previous posts, Glacier2 is another process besides the server and the client. It would be better if we could solve all these problems in the server process without deploying another one. I think simplicity is one of Ice's most important design philosophies: programmers do little and get much.
    Matthew wrote:
    The case cannot be handled in a generic way. As you've rightfully pointed out, you may want to do very sophisticated priority management on the processing queue. How do you propose that we think of everything that a user might want to do with the queue items? The current implementation allows you to manage the queue exactly as you see fit. Anything we provide will fall short, IMO.
    I think Ice could provide a default implementation of priority management in the AMDThreadPool, and programmers could override this implementation to meet their special purposes (see the sketch below).
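
    For example (a sketch; all names are hypothetical, nothing like this exists in Ice today):

    #include <list>
    
    //Template method pattern: the pool ships a default FIFO policy, and an
    //application can override selectNext() to implement its own priority
    //strategy. JobPtr stands for the pool's queued-request smart pointer.
    class AMDThreadPool
    {
    public:
        virtual ~AMDThreadPool() {}
    
    protected:
        virtual JobPtr selectNext(std::list<JobPtr>& jobs)
        {
            JobPtr next = jobs.front(); //default policy: plain FIFO
            jobs.pop_front();
            return next;
        }
    };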