Archived

This forum has been archived. Please start a new discussion on GitHub.

AMI: timeouts / destroying

Hi there!

I have two questions about implementing and calling AMI methods:
1) Is it possible to set a defined timeout so that an AMI call throws an exception if it hasn't received a response after that period of time?
This would be somewhat similar to the ice_timeout() casting, but intended to stop operations that run too long.

2) How do I correctly work with AMI callback object pointers? I mean, is it OK to make the pointer a member of the parent class, i.e.

class MyParent
{
    AMI_MyFunction* m_pAMIPtr;
};

with

if (!m_pAMIPtr)
    m_pAMIPtr = new AMI_MyFunction;
else
{
    // something
}

(deleting m_pAMIPtr after the response has arrived obviously doesn't work, though)

or is it better to create a local variable and not store its value, i.e.

AMI_MyFunction* pFunc = new AMI_MyFunction;
m_pMyObject->MyFunction_async(pFunc);

// don't store pFunc since there's no use of it...


This is probably just a design issue where one solution has more advantages than the other.

Comments welcome!

regs,

Stephan

Comments

  • marc (Florida)
    Timeouts for AMI calls work just like non-AMI calls, i.e., use ice_timeout() on the proxy, or set the property Ice.Override.Timeout. You must also set the property Ice.MonitorConnections (see the reference documentation for more details). The timeout must not be shorter than the value of Ice.MonitorConnections.

    As for your 2nd question, you can store smart pointers (Ptr types) to AMI callback objects in any way you like. You must never explicitly delete callback objects, just as you may never explicitly delete any other reference-counted Ice object.
  • Originally posted by marc
    As for your 2nd question, you can store smart pointers (Ptr types) to AMI callback objects in any way you like. You must never explicitly delete callback objects, just as you may never explicitly delete any other reference-counted Ice object.

    OK, sure. The only remaining question is whether it's possible to "stop" such a callback object during execution, or whether I have to pass a flag to the existing object to tell it that its result is no longer of interest:


    void MyObject::ice_response(...)
    {
        if (!m_bValid)
        {
            return;
        }
        ...
    }



    Is this the cleanest possible solution?

    regs,

    Stephan
  • mes (California)
    Stephan,

    If the request times out, the Ice run-time invokes the ice_exception method on the callback object, with an argument of type Ice::TimeoutException. The Ice run-time guarantees that only one method will be invoked on a callback object for an operation (i.e., either success or an exception). Once ice_exception is invoked, you don't need to worry about subsequent invocations on the callback object for the same operation.

    Does that answer your question?

    - Mark
  • Originally posted by mes
    Does that answer your question?

    Not really, sorry. What I wanted to know is whether it's possible to stop the asynchronous execution because I don't need it any more. A compromise is to prevent ice_response() from being executed, so I'm using the function as previously stated. That's basically OK :)

    Sorry, but there's yet another question about AMI. I'm also planning to use it for transferring file blocks to a certain number of clients. I cannot use oneway invocations because I want to keep track of whether the transfer was successful or not.
    Since numClients > size(ThreadPool) :) I wanted to ask whether you have an idea for handling this. Would it make sense to create an AMI pool that keeps track of how many threads are still available and fills those slots in the thread pool by issuing the next asynchronous invocations?
    Or is there a more elegant solution?

    regs,

    Stephan
  • Originally posted by stephan
    Not really, sorry. What I wanted to know is whether it's possible to stop the asynchronous execution because I don't need it any more. A compromise is to prevent ice_response() from being executed, so I'm using the function as previously stated. That's basically OK :)

    Hmmm... No, not really. The problem is that the thread of control on the server side is in the hands of the application while the operation is executing. There is no way for the Ice run time to simply stop an executing thread. Even if we were to use something drastic, such as thread cancellation (which is not supported by many threading packages), there would still be the issue of getting the server program back into a defined state. (Arbitrarily cancelling a thread results in all sorts of leak problems because destructors don't get a chance to run.)

    To do what you suggest, I could see the following approach (but even that would require changes to Ice): the client, once it receives the timeout exception, could invoke another method on the proxy, such as ice_abort(). That would send a message back to the server, informing it that the operation in question is no longer wanted by the client. The method implementation in the object would then do whatever is necessary to clean up and return. The server-side run time then would not marshal any response back to the client for that operation.

    Now, having said all this, I am not at all convinced that this is really a worthwhile thing to do. For one, the feature is rarely wanted or needed. (For example, CORBA has an IIOP Cancel Request that was never implemented, and no-one ever missed it.) It would also require quite intrusive changes to the Ice run time. (In particular, the protocol would have to be versioned.)

    Also, I think you can implement the required semantics yourself without too much difficulty. Suppose the long-running async operation is called takesAWhile(). You can write your interface as:
    interface Example {
        ResultType takesAWhile();
        void stopDoingIt();
    };
    

    The client invokes takesAWhile() and, after some time, gets a timeout exception. At that point, the callback for the timeout invokes stopDoingIt(), which sends another message to the server. The implementation of stopDoingIt() then can set a semaphore or some such that takesAWhile() periodically checks to see whether it should continue to run or not. If takesAWhile() finds the semaphore set, it cleans up and then terminates with a timeout exception.

    This approach should work, provided that you have a spare thread in the server to receive the stopDoingIt() call, and that you can always uniquely associate a call to stopDoingIt() with a particular invocation of takesAWhile(). (That is, if you have multiple calls of takesAWhile() in progress concurrently, you must have a separate instance of the Example interface for each such invocation, so you can know *which* invocation of takesAWhile() should be cancelled by a particular call to stopDoingIt().)

    Alternatively, you can write the interface as:
    interface Example {
        ResultType takesAWhile(string id) throws IdInUse;
        void stopDoingIt(string id) throws NoSuchId;
    };
    

    With that approach, you pass an ID to takesAWhile() that identifies this particular invocation, and you use the same ID to indicate which concurrent invocation of takesAWhile() you want to cancel. That way, you can do it with a single instance of Example instead of requiring a separate instance for each concurrent invocation.

    Cheers,

    Michi.
  • marc (Florida)
    Note that with a protocol cancel message, there would always be a race condition as well, because the cancel could be sent while the server runtime is already sending back the response.

    In any case, I believe what Stephan wants is to avoid AMI callbacks being invoked once the result has become "irrelevant". This can be done in user code, just as Stephan suggests: add a flag to the AMI callback object that determines whether the AMI callback should proceed. Check this flag in ice_response and ice_exception; if it is set, just return. Note that this flag must be mutex-protected.
    Originally posted by stephan
    Sorry, but there's yet another question about AMI. I'm also planning to use it for transferring file blocks to a certain number of clients. I cannot use oneway invocations because I want to keep track of whether the transfer was successful or not.
    Since numClients > size(ThreadPool) I wanted to ask whether you have an idea for handling this. Would it make sense to create an AMI pool that keeps track of how many threads are still available and fills those slots in the thread pool by issuing the next asynchronous invocations?
    Or is there a more elegant solution?

    I'm afraid I don't understand this question. What does this have to do with AMI (and what is an AMI pool?), and what is the problem with number of clients > size of the thread pool? If a server receives a request, and no thread is available, the request will simply be processed as soon as a thread becomes available.
  • First of all, thanks for your brief answers! That's really helpful!

    Originally posted by marc
    I'm afraid I don't understand this question. What does this have to do with AMI (and what is an AMI pool?), and what is the problem with number of clients > size of the thread pool? If a server receives a request, and no thread is available, the request will simply be processed as soon as a thread becomes available.

    OK, I understand. I wasn't aware of that and thought an exception would be raised if no thread were available in the thread pool.

    What I'll do now is fire my dozens of asynchronous requests and simply not worry about the pool, etc. That's really cool!

    Thanks!!

    Stephan