
Scalability issue with sync and async patterns

spsoni (Sury P Soni), Member. Organization: Next Digital. Project: Unified Messaging Platform for Inhouse Products
Hi There,

System Description:

I have implemented a queueing system using Ice on FreeBSD.

Incoming traffic (puts from clients) on the queueing server is asynchronous, and outgoing traffic (gets from clients) is synchronous.

To test the scalability of the queueing server, for puts (async) I send ice_response() immediately, whereas for gets (sync) I return a 3-character string as data.
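For context, the setup above corresponds to an AMD ("amd" metadata) put operation and a plain synchronous get. A minimal Slice sketch of such an interface might look like this; the module, interface, and operation names are assumptions, not taken from the actual test code:

```slice
// Hypothetical Slice definition for the mock queue.
// ["amd"] makes put dispatch asynchronously on the server,
// so the servant can call ice_response() immediately.
module Queueing
{
    interface Queue
    {
        ["amd"] void put(string msg);
        string get();   // synchronous dispatch; the test returns a 3-character string
    };
};
```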

Test Setup:

Now, to actually test the scalability of the overall system, I gradually increase the number of servers on the server machine, each having 1 adapter and listening on 1 port (per server application). The clients send their requests to the server(s) in round-robin fashion (since each server is just a mock server, and I am not at the moment interested in the order of messages).
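The client-side distribution described above can be sketched in a few lines; in the real test the targets would be Ice proxies, one per server/adapter/port, but plain strings illustrate the rotation (the endpoint strings below are made up):

```python
import itertools

# Hypothetical per-server endpoints; in the real test each entry would be
# an Ice proxy to one queueing server (one adapter, one port).
endpoints = ["Queue:tcp -p 10001", "Queue:tcp -p 10002", "Queue:tcp -p 10003"]

def round_robin(targets):
    """Yield targets forever in round-robin order, as the clients do here."""
    return itertools.cycle(targets)

rr = round_robin(endpoints)
first_six = [next(rr) for _ in range(6)]
# Each server is hit once per cycle; note that message order across
# servers is not preserved, which matches the "mock server" assumption.
```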


Performance of the put clients (async) sending 100,000 messages gradually decreases (it occasionally increases in the middle, but decreases overall) with every increase in the number of queueing servers on the server machine.

But the get clients' (sync) performance degrades much faster and stops scaling after adding just 2-3 server applications.

Therefore, my conclusion was that we cannot use sync get clients to scale our overall application. Correct me if I am wrong!

When I tested my get clients as async, performance degraded smoothly, compared to the abrupt and poor degradation in the sync case.

The problem with async get clients is that we will lose the order of messages. And once we start ordering the async responses (by holding the next request until we receive the callback), I believe performance will come down to the same level as sync calls. What do you say?
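One common way to order async responses without holding the next request is a client-side reorder buffer: requests stay pipelined (many in flight), each tagged with a sequence number, and responses are released in send order as they become contiguous. This is a sketch of that idea in plain Python, not code from the thread; class and method names are illustrative:

```python
# Reordering AMI responses with sequence numbers instead of serializing
# (one request at a time). Requests stay pipelined; responses are buffered
# and delivered in send order.

class ReorderBuffer:
    def __init__(self):
        self.next_seq = 0    # next sequence number expected for delivery
        self.pending = {}    # out-of-order responses, keyed by sequence number
        self.delivered = []  # responses released in send order

    def on_response(self, seq, data):
        """Called from the async callback with the request's sequence number."""
        self.pending[seq] = data
        # Release every response that is now contiguous with the last delivered one.
        while self.next_seq in self.pending:
            self.delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1

buf = ReorderBuffer()
# Callbacks may fire out of order, but delivery stays in send order.
for seq, data in [(1, "b"), (0, "a"), (3, "d"), (2, "c")]:
    buf.on_response(seq, data)
# buf.delivered is now ["a", "b", "c", "d"]
```

The point of this design is that ordering only delays delivery, not sending, so throughput need not collapse to the one-request-at-a-time (sync) level.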

I have more questions regarding this behaviour, but first, can someone explain what is happening in such a scaling environment, and whether my assumptions/conclusions are wrong?



  • benoit (Benoit Foucher), ZeroC Staff. Rennes, France. Organization: ZeroC, Inc. Project: Ice

    I'm not sure I understand your application and what you're trying to measure. For instance, it's not clear to me why the performance of your "put" clients would get worse as you increase the number of queueing servers. How many queueing servers do you deploy: 10, 100, 1000?

    Using AMI for the "get" invocations will allow the client to send more "get" requests: it doesn't have to wait for the response to one request before sending another. However, you will indeed have to take care of ordering the responses if you require ordering.

    The best would be to post the code of your test; this will give us a better idea of what you're trying to do and measure.
