
QoS/Fairness with AMD?

VoidPointer (Lars Oppermann), Member | Organization: Personal | Project: Digital asset management
Hello,

I have another conceptual question that came up during my evaluation of Ice...

Is it possible to implement some sort of QoS or Fairness queuing policy with asynchronous method dispatch? The specific scenario I would like to support goes like this:

Suppose I have a service object and I'm running 8 worker threads (on an 8-core server). The service is invoked as a result of application users starting various jobs in the front-end application. Sometimes users start batch jobs that result in a large number of service requests, "flooding" the queue so that requests from other users, submitted after the batch, wait in the queue for a very long time.

With JMS, I would send the subsequent requests of a batch to the queue with decreasing priority, so that single requests from other users could skip ahead.

It is clear that the application layer would need to hand some indication of a request's batch status down to the request dispatch layer. However, I have not found anything in Ice that could use this information to implement the kind of throttling behavior I would like to get. Is something like this possible in Ice itself, or would I need to implement the throttling before I submit the request to Ice?

Best,
Lars

Comments

  • benoit (Benoit Foucher), Rennes, France | ZeroC Staff, Administrators | Organization: ZeroC, Inc. | Project: Ice
    Hi Lars,

    You can implement this on the server side in a number of different ways.

    One simple way to do this is to create your own thread pool that executes jobs submitted by clients according to a given priority (you can use the request's Ice::Context to pass the priority). Your servants would use AMD and would simply queue new jobs with your thread pool. Once a job has executed, it would invoke the AMD callback to send the response.
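    A minimal sketch of that first approach in plain Java, assuming the priority has already been pulled out of the request's Ice::Context (the `Job` and `PriorityJobQueue` names are hypothetical, and the `Runnable` stands in for the work plus the AMD callback invocation that a real servant would perform):

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// One queued request. The priority would come from the request's
// Ice::Context; a lower value means more urgent. The sequence
// number keeps FIFO order among jobs of equal priority.
class Job implements Comparable<Job> {
    private static final AtomicLong SEQ = new AtomicLong();
    final int priority;
    final long seq = SEQ.getAndIncrement();
    final Runnable work;

    Job(int priority, Runnable work) {
        this.priority = priority;
        this.work = work;
    }

    @Override
    public int compareTo(Job other) {
        int c = Integer.compare(priority, other.priority);
        return c != 0 ? c : Long.compare(seq, other.seq);
    }
}

// A custom pool whose workers always take the highest-priority job
// next. An AMD dispatch method would just call submit() and return
// immediately; the job itself would invoke the AMD callback when done.
class PriorityJobQueue {
    private final PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<>();
    private final int nThreads;

    PriorityJobQueue(int nThreads) {
        this.nThreads = nThreads;
    }

    void submit(int priority, Runnable work) {
        queue.add(new Job(priority, work));
    }

    // Workers are started separately here only so that the example
    // below is deterministic; a real pool would start immediately.
    void start() {
        for (int i = 0; i < nThreads; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        queue.take().work.run();
                    }
                } catch (InterruptedException e) {
                    // pool shutdown
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }
}
```

    With this in place, a batch submitter would tag its follow-up requests with a large (low-urgency) priority value, and a single interactive request tagged with a small value jumps ahead of the queued batch.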

    Another more generic way to do this would be to intercept incoming requests with a routing service such as Glacier2. The routing service would queue and forward requests to the backend servers according to their priorities. You could also consider just using blobjects to intercept the requests and queue them with your own custom thread pool.
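    As a sketch of what the scheduling inside such an interceptor could look like, here is a plain-Java fair dispatcher (the `FairDispatcher` class is hypothetical, not part of Ice or Glacier2) that round-robins over per-user queues, so one user's batch can never starve another user's single request:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Round-robin fair queuing over per-user request queues. An
// interceptor would enqueue each intercepted request under the
// submitting user's id and have worker threads drain via next().
class FairDispatcher<T> {
    // Insertion-ordered map: the front entry is the next user served.
    private final LinkedHashMap<String, Deque<T>> perUser = new LinkedHashMap<>();

    synchronized void enqueue(String user, T request) {
        perUser.computeIfAbsent(user, k -> new ArrayDeque<>()).add(request);
    }

    // Returns the next request to forward, or null if all queues are
    // empty. After a user is served, that user rotates to the back.
    synchronized T next() {
        Iterator<Map.Entry<String, Deque<T>>> it = perUser.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Deque<T>> e = it.next();
            T request = e.getValue().poll();
            it.remove(); // unlink this user; re-add below if it still has work
            if (request != null) {
                if (!e.getValue().isEmpty()) {
                    perUser.put(e.getKey(), e.getValue()); // rotate to back
                }
                return request;
            }
        }
        return null;
    }
}
```

    Whether such a dispatcher sits in a Glacier2-style router or behind a blobject's dispatch is an implementation choice; the fairness policy itself is independent of where the requests are intercepted.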

    Such advanced techniques were presented in Issue #14 of the Ice Connections newsletter. Although the subject is a bit different, it should give you a better idea of how this could work. Note, however, that the article is partly out of date: since Ice 3.3, AMI calls are guaranteed not to block, so these techniques are no longer necessary just to perform non-blocking Ice invocations.

    Cheers,
    Benoit.