
Operation Queue

I need to distribute a service that performs some very long-running operations (a.k.a. jobs), and I am evaluating IceGrid for this purpose. I am looking for some advice or further information on implementing a job queueing and monitoring service. For example, there may be only 2 machines able to perform the jobs, and each can execute only one job at a time. The scheduler may queue 50 requests, but only 2 can execute at once. There is also a need for some sort of prioritization (e.g., move this job to the head of the queue).

After reading through the documentation, it seems to me that the queue/scheduler could be implemented as a Blobject. It could then queue and forward operations to the available resources in the IceGrid. IceStorm could then be used for job monitoring.
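
As a very rough illustration, the forwarding half of such a Blobject is small. The sketch below assumes the classic Ice 3.x C++ mapping and a single, already-known worker proxy; worker selection, the actual queueing, and error handling are all omitted:

    #include <Ice/Ice.h>
    #include <vector>

    // Sketch of a queue/scheduler front end: a Blobject receives every
    // invocation in marshaled form and can forward it unchanged to a worker.
    class JobForwarder : public Ice::Blobject
    {
    public:

        JobForwarder(const Ice::ObjectPrx& worker) : _worker(worker)
        {
        }

        virtual bool
        ice_invoke(const std::vector<Ice::Byte>& inParams,
                   std::vector<Ice::Byte>& outParams,
                   const Ice::Current& current)
        {
            // A real scheduler would queue the request here and pick an
            // idle worker; this sketch just forwards straight through.
            return _worker->ice_invoke(current.operation, current.mode,
                                       inParams, outParams, current.ctx);
        }

    private:

        const Ice::ObjectPrx _worker;
    };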

Does this make sense? Is this the appropriate direction to go or is there a better method to handle this sort of functionality?

Comments

  • matthew (NL, Canada)
    I recommend reading over the series of articles I wrote on this topic in issues 16, 17 and 18 of Connections. http://www.zeroc.com/newsletter/index.html
  • I had read through those articles, and that is a slick mechanism. However, unless I missed something, there are a couple of issues for my needs:

    1. Prioritization. I would like to be able to assign a priority to resource allocation. There is often a need to put one job ahead of another for one reason or another. Is it possible to put a priority on allocating an MP3 encoder factory? Note that this is all inside the LAN, so the client can specify the priority (i.e. "put me at the front of the line").

    2. Disconnect. What happens if the client computer crashes during hour 2 of the job? The job needs to continue and the client needs to be able to reconnect to it. Note that the job is, basically, start only. There should be no need to interact with it while it is running.

    3. Status. Is it possible to view the queued jobs?

    #2 and #3 are probably not that important since this is an internal service, but #1 is a requirement.
  • matthew (NL, Canada)
    IceGrid doesn't provide a mechanism to do that. One idea for adding a priority-based queue is to interject a queuing service between the server-side session object and IceGrid: it would take the IceGrid session to which the allocation belongs, along with the priority of the request, and itself implement the queue of pending jobs.

    For disconnect, you would have to return some type of job object to the client, which it can query to find out whether the job has completed. For status, you could use the above queuing service.
  • I'm not sure I really understand what you mean by interjecting a service between the server-side session and IceGrid. Do you have any information on implementing that? Any direction would be appreciated.

    Thank you.
  • matthew (NL, Canada)
    What I mean is that you create a service, which your client applications call to do the object allocation. This service maintains a priority queue of pending allocations. The service would then itself allocate the object from IceGrid on behalf of the caller.

    Something like this would likely do:
    interface PriorityAllocationService
    {
        // Allocate the object with the given identity on behalf of the caller's
        // IceGrid session, queueing the request according to its priority.
        Object* allocateObjectById(IceGrid::Session* session, int priority, Ice::Identity id)
            throws IceGrid::ObjectNotRegisteredException, IceGrid::AllocationException;
    };
    
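    As a rough illustration of the client side (C++ sketch; the proxy and session parameters, the "EncoderFactory" identity, and the generated header name are all placeholders):

    #include <Ice/Ice.h>
    #include <IceGrid/IceGrid.h>
    #include <PriorityAllocationService.h> // generated from the Slice above; name assumed

    // The caller hands over its own IceGrid session so the service can
    // allocate on its behalf. The priority convention is up to you; here a
    // larger number is assumed to mean "more urgent".
    Ice::ObjectPrx
    requestWorker(const PriorityAllocationServicePrx& svc,
                  const IceGrid::SessionPrx& session,
                  const Ice::CommunicatorPtr& communicator)
    {
        Ice::Identity id = communicator->stringToIdentity("EncoderFactory");
        return svc->allocateObjectById(session, 10, id); // 10 = high priority
    }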

    An alternative would be to add appropriate functionality to IceGrid. If you are interested in sponsoring such additions, please contact sales@zeroc.com.
  • Ok, that is interesting. It seems like this could work, but I'm still not entirely sure how to accomplish it.

    So, for example, suppose I have 2 servers available for allocation and both are already allocated. Then a request comes in to allocate an object at normal priority, followed by a request to allocate an object at high priority. Both should be queued and wait for a server to become available, and I want to make sure that the next server to become available goes to the high-priority requester.

    I'm not entirely sure how to accomplish that. The only thing I can think of is to have an allocator thread that checks whether there are objects available to allocate and then calls session.allocateObjectById using the session object from the high-priority requester. Is this the way to do it, or is there a better way to accomplish this? Is there a way to know when an object becomes available for allocation, or a way to have the service allocate an object and then "attach" it to the session as if the session had allocated it? Or am I just completely missing the point here (it's been known to happen)? :)
  • matthew (NL, Canada)
    You can almost certainly assume that your service knows the following:

    - How many objects of each type can be allocated.
    - How many objects of each type have been allocated (since your service should allocate all objects of a given type).

    If you know that, you can implement the queue quite simply: allocate requests immediately as long as there are free workers; once you start to queue, queue in priority order; and when a worker is returned to the pool, allocate the job at the head of the queue.
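
    For what it's worth, here is a rough C++ sketch of that scheme, assuming the Slice interface shown earlier and a single worker type. The servant class name, the released() hook, and the "larger number means more urgent" priority convention are all made up for the example, and a real implementation would probably use AMD so that queued requests don't tie up server-side dispatch threads:

    #include <Ice/Ice.h>
    #include <IceGrid/IceGrid.h>
    #include <IceUtil/Monitor.h>
    #include <IceUtil/Mutex.h>
    #include <queue>
    #include <vector>
    #include <PriorityAllocationService.h> // generated from the Slice above; name assumed

    // One queued allocation request. The most urgent request sorts highest;
    // among equal priorities, the earliest request (smallest ticket) wins.
    struct Pending
    {
        int priority;
        long ticket;

        bool operator<(const Pending& other) const
        {
            if(priority != other.priority)
            {
                return priority < other.priority; // assumption: larger = more urgent
            }
            return ticket > other.ticket;
        }
    };

    class PriorityAllocationServiceI : public PriorityAllocationService,
                                       public IceUtil::Monitor<IceUtil::Mutex>
    {
    public:

        // poolSize: how many workers of this type exist. Because this service
        // performs all allocations of this type, its count stays accurate.
        PriorityAllocationServiceI(int poolSize) : _free(poolSize), _nextTicket(0)
        {
        }

        virtual Ice::ObjectPrx
        allocateObjectById(const IceGrid::SessionPrx& session, Ice::Int priority,
                           const Ice::Identity& id, const Ice::Current&)
        {
            {
                IceUtil::Monitor<IceUtil::Mutex>::Lock lock(*this);
                Pending me;
                me.priority = priority;
                me.ticket = _nextTicket++;
                _waiting.push(me);

                // Block until a worker is free and this request is the most
                // urgent one still waiting.
                while(_free == 0 || _waiting.top().ticket != me.ticket)
                {
                    wait();
                }
                _waiting.pop();
                --_free;
                notifyAll(); // let the next waiter re-check if workers remain
            }

            try
            {
                // Allocate through the caller's own session, so IceGrid treats
                // the worker as allocated to that client.
                return session->allocateObjectById(id);
            }
            catch(...)
            {
                released(); // give the slot back if the allocation failed
                throw;
            }
        }

        // Hypothetical hook, not part of the Slice above: call this when a
        // worker returns to the pool (i.e. after the client releases it).
        void
        released()
        {
            IceUtil::Monitor<IceUtil::Mutex>::Lock lock(*this);
            ++_free;
            notifyAll();
        }

    private:

        int _free;
        long _nextTicket;
        std::priority_queue<Pending> _waiting;
    };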