Question about AMD

Hi there,

In section 30.3.4 of the manual, the concurrency issue of AMI is mentioned. Now my question is about the concurrency issue of AMD.
On page 787, the demo code is:

void ModelI::interpolate_async(const Demo::AMD_Model_interpolatePtr& cb,
                               const Demo::Grid& data,
                               Ice::Float factor,
                               const Ice::Current& current)
{
    IceUtil::Mutex::Lock sync(*this);
    JobPtr job = new Job(cb, data, factor);
    _jobs.push_back(job);
}

I want to know if this is correct:
..................................
{
    IceUtil::Mutex::Lock sync(*this);

    JobPtr job1 = new Job(cb, data, factor);
    _jobs.push_back(job1);

    JobPtr job2 = new Job(cb, data, factor);
    _jobs.push_back(job2);
}

That is, can the cb be shared? If it can't, how do I implement this goal? Like this:

{
    IceUtil::Mutex::Lock sync(*this);

    JobPtr job1 = new Job(cb, data, factor);
    _jobs.push_back(job1);

    Demo::AMD_Model_interpolatePtr cb2 = cb->clone(); // is something like this possible?

    JobPtr job2 = new Job(cb2, data, factor);
    _jobs.push_back(job2);
}



TIA---OrNot

Comments

  • benoit (Rennes, France)
    Hi,

    An AMD callback is used to send the reply of an incoming request. It can only be used once: if you call ice_response() or ice_exception() more than once, it will result in undefined behavior. So I believe the answer to your question is "no" since it would break this rule. Your code needs to ensure that you don't send the response twice.
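
    For illustration, here is a minimal sketch of the kind of Job class such a server might use (the member names and the assumption that interpolate returns the interpolated Grid are mine, not taken from the manual): each Job owns one callback and completes it exactly once from its execute() method.

     // C++ (sketch; names and the Grid return value are assumptions)
     class Job : public IceUtil::Shared
     {
     public:

          Job(const Demo::AMD_Model_interpolatePtr& cb,
              const Demo::Grid& data,
              Ice::Float factor) :
              _cb(cb), _grid(data), _factor(factor)
          {
          }

          void
          execute()
          {
              // ... interpolate _grid using _factor ...
              _cb->ice_response(_grid); // the reply is sent here, exactly once
          }

     private:

          const Demo::AMD_Model_interpolatePtr _cb;
          Demo::Grid _grid;
          const Ice::Float _factor;
     };
     typedef IceUtil::Handle<Job> JobPtr;

    If two jobs held the same callback, each would eventually call ice_response() on it, which is exactly the "more than once" case above.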

    Hope this helps!

    Benoit.
  • Thank you Benoit.

    Actually, my question came from the following scenario:

    I am implementing a router server very similar to the glacier2router.

    void
    Glacier2::ClientBlobject::ice_invoke_async(const Ice::AMD_Object_ice_invokePtr& amdCB,
                                               const ByteSeq& inParams,
                                               const Current& current)
    {
        assert(_routingTable); // Destroyed?

        /* My pseudocode: what I want to do is invoke the same method, with the same parameters, on a group of proxies. */

        vector<proxy> proxies = _routingTable->get(current.id);

        for(each proxy in proxies)
        {
            invoke(proxy, amdCB, inParams, current);
        }
    }

    I agree with your reply. This scenario is impossible, isn't it? Do you have any better ideas?

    OrNot
  • benoit (Rennes, France)
    I see, so your incoming request is sent to multiple servers. What happens if the request has output parameters (return value or "out" arguments)? Do you want to send the reply of the first/last/random server that answers?

    In any case, to ensure that the response is only sent once, you could use a small wrapper and send the reply of the incoming request through that wrapper instead of calling the AMD callback directly.

    Something like the following, for example (here the first call to "response" or "exception" answers the incoming request; subsequent calls are ignored):
     // C++
     class AMDCallbackWrapper : public IceUtil::Mutex, public IceUtil::Shared
     {
     public:

          AMDCallbackWrapper(const AMD_Object_ice_invokePtr& cb) : _cb(cb), _sent(false)
          {
          }

          void
          response(bool ok, const ByteSeq& outParams)
          {
              Lock sync(*this);
              if(!_sent)
              {
                  _cb->ice_response(ok, outParams); // forward the marshaled results
                  _sent = true; // Response sent!
              }
          }

          void
          exception(const Exception& ex)
          {
              Lock sync(*this);
              if(!_sent)
              {
                  _cb->ice_exception(ex);
                  _sent = true; // Response sent!
              }
          }

     private:

          AMD_Object_ice_invokePtr _cb;
          bool _sent;
     };
     typedef IceUtil::Handle<AMDCallbackWrapper> AMDCallbackWrapperPtr;
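
    In your blobject you would then create one wrapper per incoming request and hand it to every forwarded call; whichever forwarded call completes first answers the client through the wrapper. A rough sketch based on your pseudocode (MyRouterBlobject and forwardAsync() are placeholders, not Ice APIs; forwardAsync() stands for whatever AMI forwarding you use, and its reply handler would call wrapper->response() or wrapper->exception()):

     // C++ (sketch; MyRouterBlobject and forwardAsync() are placeholders)
     void
     MyRouterBlobject::ice_invoke_async(const Ice::AMD_Object_ice_invokePtr& amdCB,
                                        const ByteSeq& inParams,
                                        const Ice::Current& current)
     {
          AMDCallbackWrapperPtr wrapper = new AMDCallbackWrapper(amdCB); // one wrapper per request

          std::vector<Ice::ObjectPrx> proxies = _routingTable->get(current.id);
          for(std::vector<Ice::ObjectPrx>::const_iterator p = proxies.begin(); p != proxies.end(); ++p)
          {
               // Forward the request to *p asynchronously. When the forwarded call completes,
               // its reply handler invokes wrapper->response(...) or wrapper->exception(...);
               // the wrapper guarantees that only the first of these answers the client.
               forwardAsync(*p, wrapper, inParams, current);
          }
     }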
    

    Hope this helps!

    Benoit.
  • Thank you Benoit.
    I will study your code carefully.