Archived

This forum has been archived. Please start a new discussion on GitHub.

Ice callback problem - need help

Hi,

There is a code fragment in the Ice manual:
Ice::ObjectAdapterPtr adapter = communicator->createObjectAdapter("");
Ice::Identity ident;
ident.name = IceUtil::generateUUID();
ident.category = "";
CallbackPtr cb = new CallbackI;
adapter->add(cb, ident);
adapter->activate();
proxy->ice_getConnection()->setAdapter(adapter);
proxy->addClient(ident);

Regarding the above code:

How many "adapter" and "cb" instances do I need if I have 100 simultaneous requests?

Is adapter : cb = 1:100 or 100:100?

We have run into deadlocks many times on Windows.

Thanks,
Gu.

Comments

  • In our project, we do it as:

    Adapter : cb = 1:100.

    Thanks,
    Gu.
  • mes
    mes California
    Hi,

    For most applications, one object adapter is sufficient.

    The example code you showed is for callbacks over bidirectional connections. In this case, it's not possible to use multiple object adapters with a single connection.

    The limiting factor for simultaneous callbacks over bidirectional connections is the number of threads in the client-side thread pool. Increasing this setting would allow more concurrency.
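    The client-side thread pool size is set through Ice properties, e.g. in the client's configuration file. A minimal sketch (the values here are illustrative, not a recommendation):

```
# Number of threads available to dispatch callbacks over
# bidirectional connections in the client
Ice.ThreadPool.Client.Size=4
# Allow the pool to grow under load
Ice.ThreadPool.Client.SizeMax=10
```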

    Another strategy for eliminating deadlocks or thread starvation is using AMD.

    Hope that helps,
    Mark
  • Thanks,
    mes

    There is a central node in our project. It needs to handle more than 1000 simultaneous requests from one client.

    Because these requests come from one process, they all use the same connection, and deadlocks often occur. :(

    :confused:
    Can we spread these requests over more than one connection? And if we can, how do we do it?

    Gu.:)
  • mes
    mes California
    Hi,

    It is possible for a client to force the Ice run time to establish multiple connections, but I wouldn't recommend that as a solution for your deadlock issues. I think the better solution would be to use asynchronous dispatch (AMD) in the server, where the server maintains its own queue of "tasks" that are processed by its own pool of worker threads. We give a simple example of this in the manual.

    Our general recommendation is to configure a server's thread pool with the same number of threads as there are processing cores. You could try to configure the server-side thread pool with 1000 threads to avoid deadlocks due to thread starvation, but doing that would consume a lot of resources and probably won't perform very well.

    The primary advantage of using AMD is that it releases the Ice thread pool thread quickly, such that you don't need to have a lot of threads in your server-side thread pool.

    Anyway, without knowing more about your application's architecture, all we can do is make some general suggestions.

    Mark
  • Mark,
    thanks very much.

    We have already used AMD and set a large thread pool size to avoid deadlocks, but the deadlocks still occur as before. :(

    So,
    1. We will continue to check whether we are using AMD and the other methods correctly.

    2. We would like more methods to try. We hope you can tell us how to use more connections. :)

    3. If you could help us analyze our log file, we would be very happy. :)
    Most importantly, what does "Ice::AsyncResult::__wait" wait for? :confused:

    Attachment not found.
    The two files contain the same data; "outdata.txt" is the grouped version.


    Gu.:)
  • mes
    mes California
    AsyncResult::__wait is called internally by a proxy end_XXX method. __wait will block if the reply hasn't been received yet.

    To force a client to open a new connection to the server, you can configure the proxy with a new "connection ID". See the "Influencing Connection Reuse" section of this page in the manual for more information.
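    A sketch of the connection-ID approach (using the Ice C++ API; `ServerPrx` and `base` are placeholder names, and this fragment needs the Ice headers and a running server, so treat it as illustration only):

```
// Proxies with different connection IDs never share a connection,
// so each of these talks to the same server over its own connection.
ServerPrx p1 = ServerPrx::uncheckedCast(base->ice_connectionId("conn1"));
ServerPrx p2 = ServerPrx::uncheckedCast(base->ice_connectionId("conn2"));
```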

    Regards,
    Mark
  • Mark,
    thanks very much.

    We will have a try in our project.


    :)
    Gu.