
IceJ 1.2.0 AMD/AMI-related question

Hi all,

I'm looking for an explanation of how to use the AMD/AMI scheme in IceJ. It looks like I'm missing some locking on the client side when I send several messages to the same object. I even tried not sending the next message until the callback was called for the previous one. Below is what I'm trying to use.

Sorry if I've simply missed something in the documentation.

---cut-here---
// Test.ice
interface Test {
    ["amd", "ami"] byte[] echo(byte[] data);
};

---cut-here---
// Client.java
...
TestPrx test = TestPrxHelper.checkedCast(communicator.stringToProxy(proxy));
Callback cb = new Callback();
byte[] b = new byte[size];
for (int i = 0; i < count; i++) {
    test.echo_async(cb, b);
}
...
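
For completeness, here is roughly what my Callback class looks like (a sketch from memory; it assumes the Slice compiler generates an AMI_Test_echo base class with ice_response/ice_exception for this mapping):

// Callback.java -- minimal sketch; AMI_Test_echo and the exact signatures
// of ice_response/ice_exception are assumed from the generated code.
class Callback extends AMI_Test_echo {
    private boolean _called = false;

    public synchronized void ice_response(byte[] r) {
        _called = true;
        notify();
    }

    public synchronized void ice_exception(Ice.LocalException ex) {
        _called = true;
        notify();
    }

    // Block until the callback has fired, then reset for the next invocation.
    public synchronized void check() {
        while (!_called) {
            try {
                wait();
            } catch (InterruptedException ex) {
            }
        }
        _called = false;
    }
}

Calling cb.check() after each echo_async in the loop above is how I tried to avoid sending the next message before the previous callback fired.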

Comments

  • mes (California)
    Hi,

    I'm afraid we need more information from you. Is your program not working the way you expect it to? If so, what exactly is happening?

    Note that the test case in test/Ice/operations uses AMI/AMD.

    Take care,
    - Mark
  • Thanks for the reply!

    I'll take a look at that first; if it's still unclear to me, I'll send you the details.
  • Well, try running test/Ice/operations in such a way that some test runs more than once.

    1) in TwowaysAMI.Callback.check(), add "_called = false;" just before the return, in order to prepare it for the next iteration (a sketch follows the stack trace below)

    2) run some test more than once:

    for (int i = 0; i < 10000; i++) {
        p.opByteS_async(cb, bsi1, bsi2);
        test(cb.check());
    }

    In that case you have to repeat the test quite a few times, but if you increase the size of the transferred data to 40k, it fails within about 10-20 iterations with the following diagnostics:

    tests with regular server.
    starting server... ok
    starting client... ok
    testing stringToProxy... ok
    testing checked cast... ok
    testing twoway operations... ok
    Client: error: unknown exception in `Ice.ThreadPool.Client' thread Client-Ice.ThreadPool.Client-1:
    java.lang.NullPointerException
    at IceInternal.OutgoingAsync.warning(OutgoingAsync.java:282)
    at IceInternal.OutgoingAsync.__finished(OutgoingAsync.java:212)
    at IceInternal.Connection.message(Connection.java:1030)
    at IceInternal.ThreadPool.run(ThreadPool.java:681)
    at IceInternal.ThreadPool.access$100(ThreadPool.java:17)
    at IceInternal.ThreadPool$EventHandlerThread.run(ThreadPool.java:1063)
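
    To make step 1) concrete, the modified check() would look roughly like this (the surrounding code is paraphrased from memory, not copied verbatim from the test):

    public synchronized boolean check() {
        while (!_called) {
            try {
                wait(5000);
            } catch (InterruptedException ex) {
            }
            if (!_called) {
                return false; // timed out
            }
        }
        _called = false; // the added line: reset so the callback can be reused
        return true;
    }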
  • Looks like I've found the source of the problem.
    We shouldn't synchronize on callback.ice_response() but rather on something _after_ OutgoingAsync.destroy(); otherwise there is a chance that a new __invoke() is called between the previous callback.ice_response() and OutgoingAsync.destroy().

    OutgoingAsync

    1) __setup()
    2) __invoke() called
    ...
    3) __finished() calls __response()
       -- at this point a new invocation can already start!
    4) destroy() sets _os and _is to null

    I suggest registering some Object with OutgoingAsync to synchronize on (a client-side usage sketch follows the class below):

    class OutgoingAsync {
        Object sem;

        protected OutgoingAsync() {
            this(null);
        }

        protected OutgoingAsync(Object sem) {
            this.sem = sem;
            ...
        }

        private void destroy() {
            ...
            if (sem != null) {
                synchronized (sem) {
                    sem.notify();
                }
            }
        }
        ...
    }
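
    On the client side, the application could then wait on the same monitor that destroy() notifies (hypothetical wiring: it assumes the Callback constructor forwards sem to the OutgoingAsync constructor above):

    Object sem = new Object();
    Callback cb = new Callback(sem); // hypothetical: forwards sem to OutgoingAsync
    byte[] b = new byte[size];
    for (int i = 0; i < count; i++) {
        synchronized (sem) {
            test.echo_async(cb, b);
            try {
                sem.wait(); // woken by the notify() in destroy()
            } catch (InterruptedException ex) {
            }
        }
    }

    Because echo_async is called while holding sem's monitor, destroy() cannot reach its notify() before the client is waiting, so the notification is never lost.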
  • mes (California)
    We're looking into this. Thanks!

    - Mark
  • Maybe it's even better to synchronize on the OutgoingAsync itself:

    private void destroy() {
        ...
        synchronized (this) {
            notify();
        }
    }
  • mes (California)
    Hi,

    One solution is to synchronize the methods in OutgoingAsync (__setup, __invoke, __finished). We're still considering other alternatives, but if you'd like a workaround until the next release, you can use that approach (sketched below).
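
    In outline (the signatures here are simplified placeholders, not the real ones):

    class OutgoingAsync {
        public synchronized void __setup(/* ... */) {
            // prepare _os and _is for the invocation
        }

        public synchronized void __invoke(/* ... */) {
            // send the request
        }

        public synchronized void __finished(/* ... */) {
            // unmarshal, dispatch the callback, then destroy()
        }
    }

    With all three synchronized on the same object, a new __setup() cannot interleave with a __finished() that has not yet run destroy().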

    Thanks again,
    - Mark
  • IMHO, synchronizing all these methods is too expensive.
    I commented out the call to destroy() from __finished(); for now, all destroy() does is release the buffers, so the next __setup() should create new ones instead of resetting the existing ones.
  • mes (California)
    But there may not be another call to __setup, and then the buffers would be leaked.

    Synchronizing the methods isn't my first choice either, but it's the easiest temporary fix...

    - Mark
  • Well, let's look at it from the application's point of view.
    The only reason this memory is still allocated is that I still hold a reference to the callback and intend to use it again. Otherwise, I would just release the callback, and all the buffers would be released along with it.
    However, I agree that some __invoke() could allocate "unusually" big buffers. Maybe OutgoingAsync should just shrink the buffers to some default or reasonable size instead of destroying them, or gather statistics on the "average" buffer size and try to keep them in shape.
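
    Something along these lines, for example (MAX_RETAINED and the use of java.nio.ByteBuffer are just illustrations, not the actual internals):

    // Cap a retained buffer at a reasonable size instead of freeing it, so a
    // reused OutgoingAsync does not pin an unusually large allocation.
    private static final int MAX_RETAINED = 64 * 1024;

    private java.nio.ByteBuffer shrinkIfOversized(java.nio.ByteBuffer buf) {
        if (buf != null && buf.capacity() > MAX_RETAINED) {
            return java.nio.ByteBuffer.allocate(MAX_RETAINED);
        }
        return buf; // keep the existing buffer for reuse
    }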