Archived
This forum has been archived. Please start a new discussion on GitHub.
64kBytes limit?
Hi
I have created a small ice-server (ICE 3.2.0) with the following interface:
module PerfTestICE
{
    sequence<byte> ByteSeq;

    interface IComponent
    {
        ["cpp:const"] void TestLatency();
        ["cpp:const"] void TestVectorTransfer(ByteSeq bs);
    };
};
I then call it from a client with std::vectors of sizes 2, 4, 8, 16, ..., 128 KB.
It works perfectly on a single computer, but when running distributed (client on one Windows PC and server on another Windows PC) the client program stalls at some point once the 64 KB vector size is reached. I do not know whether it is the first call or call 999 that stalls (I make 1000 calls and then compute a mean time).
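For reference, the test loop looks roughly like the following sketch, assuming the Ice 3.2 C++ mapping. The endpoint string, object identity ("component"), host, and port are made-up placeholders, and PerfTestICE.h is the header generated from the Slice definition above:

```cpp
#include <Ice/Ice.h>
#include <PerfTestICE.h> // generated from the Slice definition above
#include <ctime>
#include <iostream>

int main(int argc, char* argv[])
{
    Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);

    // Hypothetical endpoint; adjust host/port/identity to your server.
    PerfTestICE::IComponentPrx comp = PerfTestICE::IComponentPrx::uncheckedCast(
        ic->stringToProxy("component:tcp -h server-host -p 10000"));

    for(size_t size = 2; size <= 128 * 1024; size *= 2)
    {
        PerfTestICE::ByteSeq bs(size, 0); // std::vector<Ice::Byte> of `size` zeros

        std::clock_t start = std::clock();
        for(int i = 0; i < 1000; ++i)
        {
            comp->TestVectorTransfer(bs);
        }
        double elapsed = double(std::clock() - start) / CLOCKS_PER_SEC;
        std::cout << size << " bytes: " << elapsed << " s for 1000 calls\n";
    }

    ic->destroy();
    return 0;
}
```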
Is it some sort of configuration I'm missing, or is it a small problem with Ice? (I guess it is the first. :-)
Best Regards,
Kristian Lippert
Comments
Hi,
There shouldn't be a problem with 64 KB messages. We routinely run tests across networks with larger requests.
Have you tried the Ice throughput and latency demos across your network? If they work fine then the issue is probably specific to your test code. If the Ice demos don't work, then perhaps your network/operating environment is causing the trouble. If further assistance is required, please be sure to include some more details about your environment such as the compiler you are using (e.g. VC 6.0), your compiler settings (you should always build with optimizations enabled when perf testing), third party libraries and extensions you are using, the version of Windows, a general description of your test network, etc.
Since you're interested in Ice performance, I recommend reading Matthew's article "Optimizing Performance of File Transfers" in issue 20 of the Ice Newsletter.
Cheers,
Brent
Strange spikes
Hi
I did some additional testing, and it seems I was just a little impatient. The code did terminate.
The test data from the program shows odd behaviour on my system (two Windows XP laptops, no firewall, 3Com router).
There is a tremendous peak at 64 KB. The output data looks like:
Trips Size Time
1000 0 0,0368971
1000 1 0,0326874
1000 2 0,0285955
1000 4 0,0513968
1000 8 0,0288277
1000 16 0,0218444
1000 32 0,0255851
1000 64 0,0253102
1000 128 0,0340493
1000 256 0,0352522
1000 512 0,0496233
1000 1024 0,0904662
1000 2048 0,178775
1000 4096 0,348922
1000 8192 0,690463
1000 16384 1,38792
1000 32768 2,76744
1000 65536 197,059
1000 131072 56,0493
The time is in seconds for the 1000 calls.
There is also a funny spike at size = 4 bytes
I tried to run a similar program on our current RMI system and it did not show this peak on the same network!
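To put numbers on the drop, the per-call mean and the implied throughput can be derived from the Size and Time columns. A minimal sketch; the Mbit/s formula (decimal megabits, payload bytes only, counting all 1000 calls) is an assumption about how such a figure would be computed:

```cpp
#include <cassert>
#include <cmath>

// Per-call mean latency in seconds, given the total time for `calls` round trips.
double meanSeconds(double totalSeconds, int calls)
{
    return totalSeconds / calls;
}

// Payload throughput in Mbit/s: bytes per call * 8 bits * number of calls,
// divided by the total time. Decimal megabits and payload-only accounting
// are assumptions for illustration.
double throughputMbits(int bytes, double totalSeconds, int calls)
{
    return bytes * 8.0 * calls / totalSeconds / 1.0e6;
}
```

By this formula the 32768-byte row (2.76744 s) works out to roughly 95 Mbit/s, close to wire speed on 100 Mbit Ethernet, while the 65536-byte row (197.059 s) collapses to under 3 Mbit/s.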
Any ideas?
Best Regards,
Kristian
Hi Kristian,
This of course shouldn't occur. Let me try to reproduce it. I'll get back to you as soon as I know more.
Cheers,
Benoit.
I can send the VC8 project to you if you want it?
Best Regards,
Kristian0 -
In the meantime, I tried to reproduce your problem with a modified throughput demo and an Ice 3.2.0-VC60 release build. I haven't been able to reproduce what you're seeing so far.
Here's what I got:
time for 10000 sequences (1024 bytes): 410ms
time per sequence: 0.041ms
time for 10000 sequences (16384 bytes): 2243ms
time per sequence: 0.2243ms
time for 10000 sequences (65536 bytes): 6229ms
time per sequence: 0.6229ms
time for 10000 sequences (131072 bytes): 12048ms
time per sequence: 1.2048ms
time for 10000 sequences (262144 bytes): 23794ms
time per sequence: 2.3794ms
The client and server are running on different Windows XP machines. Both machines are connected to a small Linksys 100 Mbit Ethernet hub. I'll now try with a VC80 build to see if it makes a difference.
I'm also attaching to this post the source code of the throughput client that I used (you can just replace demo\Ice\throughput\Client.cpp with it after removing the .txt extension if you want to try it out).
Btw, you could also try to enable network tracing with --Ice.Trace.Network=3 on both the client and the server when transmitting the 64KB byte sequence. It might give us some clues on where the delays occur.
Cheers,
Benoit.
FYI, I still can't reproduce with a VC80 build.
After more investigation, I think I might actually have found one possible explanation here. You might be running into this specific problem.
Ice used to limit the size of the packets it sends, because sending big packets led to poor performance (i.e., the problem reported in the KB article). We removed this limiter some time ago, since we couldn't reproduce the issue on newer Windows versions and the limiter itself negatively affected performance. For some reason, however, it sounds like you might still need it in your environment.
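For illustration only (this is not the actual Ice source), the limiter described above amounts to something like the following chunked send loop, which never hands the transport more than a fixed-size chunk at a time. The 16 KB cap is an arbitrary assumption:

```cpp
#include <algorithm>
#include <cstddef>

// Stand-in for a socket send() that may accept fewer bytes than requested.
typedef size_t (*SendFn)(const char* buf, size_t len);

// Write `len` bytes, but never pass the transport more than `maxPacket`
// bytes per call -- the kind of workaround Ice used to apply for the
// Windows TCP behaviour described in the KB article.
size_t sendLimited(SendFn sendSome, const char* buf, size_t len)
{
    const size_t maxPacket = 16 * 1024; // arbitrary cap for illustration
    size_t sent = 0;
    while(sent < len)
    {
        size_t chunk = std::min(maxPacket, len - sent);
        sent += sendSome(buf + sent, chunk);
    }
    return sent;
}
```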
To figure out if this is really the problem, I suggest you try either:
- Method 3 of the Microsoft Knowledge Base article, i.e. set the registry value TcpAckFrequency to 1. Or,
- Apply the patch attached to this post to an Ice source distribution and rebuild Ice.
Here are some instructions to apply the patch to the Ice source distribution:
C:\> unzip Ice-3.2.0.zip
C:\> cd Ice-3.2.0
C:\> patch -p0 < patch.txt
Cheers,
Benoit.
Hi,
I've posted a patch for this issue [thread=3174]here[/thread]. Thanks for reporting this problem!
Cheers,
Benoit.
Patch works fine
Hi
Thanks for the good work. The patch (in binary form, directly from Benoit) fixed this problem.
My first set of data looked like (before the patch):
Size is in bytes
Time is in seconds
Mean is the meantime for a single call
Var is the variance
MBits is the calculated throughput
RoundTrips Size Time Mean Var Mbits
1000 0 0,0368971 3,69E-05 -1,36E-06 0
1000 1 0,0326874 3,27E-05 -1,07E-06 0,239006467
1000 2 0,0285955 2,86E-05 -8,11E-07 0,546414646
1000 4 0,0513968 5,14E-05 -1,93E-06 0,608014507
1000 8 0,0288277 2,88E-05 -7,78E-07 2,168053643
1000 16 0,0218444 2,18E-05 -4,72E-07 5,722290381
1000 32 0,0255851 2,56E-05 -6,45E-07 9,771312209
1000 64 0,0253102 2,53E-05 -6,21E-07 19,75488143
1000 128 0,0340493 3,40E-05 -1,13E-06 29,3691794
1000 256 0,0352522 3,53E-05 -1,21E-06 56,7340478
1000 512 0,0496233 4,96E-05 -2,45E-06 80,60729536
1000 1024 0,0904662 9,05E-05 -8,17E-06 88,43081726
1000 2048 0,178775 0,000178775 -3,19E-05 89,49797231
1000 4096 0,348922 0,000348922 -0,000121609 91,71104144
1000 8192 0,690463 0,000690463 -0,000476254 92,6914259
1000 16384 1,38792 0,00138792 -0,00192439 92,2243357
1000 32768 2,76744 0,00276744 -0,00765105 92,50426387
1000 65536 197,059 0,197059 -38,7927 2,598206628
1000 131072 56,0493 0,0560493 -3,13297 18,26963049
As seen, there is a tremendous drop in throughput at 64 KB and 128 KB.
After the patch had been applied the data looks like:
RoundTrips Size Time Mean Var Mbits
1000 0 0,0368768 3,69E-05 -1,36E-06 0
1000 1 0,0322178 3,22E-05 -1,04E-06 0,242490176
1000 2 0,0263997 2,64E-05 -6,96E-07 0,591862786
1000 4 0,0522829 5,23E-05 -1,98E-06 0,597709767
1000 8 0,0545044 5,45E-05 -2,01E-06 1,146696414
1000 16 0,0380551 3,81E-05 -1,18E-06 3,284710853
1000 32 0,0243305 2,43E-05 -5,84E-07 10,27516903
1000 64 0,0254678 2,55E-05 -6,31E-07 19,63263415
1000 128 0,0335657 3,36E-05 -1,09E-06 29,79231775
1000 256 0,0349455 3,49E-05 -1,20E-06 57,2319755
1000 512 0,0494289 4,94E-05 -2,43E-06 80,92431756
1000 1024 0,0908794 9,09E-05 -8,24E-06 88,02875019
1000 2048 0,17845 0,00017845 -3,18E-05 89,66096946
1000 4096 0,348865 0,000348865 -0,00012157 91,72602583
1000 8192 0,695443 0,000695443 -0,000483154 92,02767157
1000 16384 1,388 0,001388 -0,00192461 92,21902017
1000 32768 2,77739 0,00277739 -0,00770613 92,17286733
1000 65536 5,52608 0,00552608 -0,030507 92,65157218
1000 131072 11,0572 0,0110572 -0,12214 92,60934052
I want to thank the ZeroC staff for the fast reply and handling!
Best Regards,
Kristian Lippert