Archived
This forum has been archived. Please start a new discussion on GitHub.
latency result
Comments
-
It's difficult to precisely answer your question without more information on how the test is written. Could you perhaps post the code for your client?
To answer 1), it's indeed possible for the latency to be high if the connection needs to be established first. The best approach is to invoke an operation on the proxy (ice_ping, for example) to establish the connection before starting any measurements (see the Ice throughput and latency demos for examples).
To answer 2), we would need to know if you're sending the sequences with oneway or twoway requests.
Also, are you only sending the sequence 20 times and measuring the time of each invocation? If that's the case, I would instead recommend measuring the time it takes to send the sequence N times (with N large enough to get reproducible results) and computing the average. Sending a single small sequence is quite fast, and I wouldn't be surprised by strange results if the operating system also executes other tasks while you're running the tests. The resolution of the clock might also not be good enough to accurately measure such a small duration.
Cheers,
Benoit.
-
Thanks Benoit.
Here is my client code:
ThroughputPrx throughputOneway = ThroughputPrx::uncheckedCast(throughput->ice_oneway());
int seqSize;
double min, max, temp, temp2, smalltotal = 0, total = 0;
ByteSeq byteSeq(seqSize, 0);
pair<const Ice::Byte*, const Ice::Byte*> byteArr;
byteArr.first = &byteSeq[0];
byteArr.second = byteArr.first + byteSeq.size();
throughput->ice_ping(); // Initial ping to set up the connection.
IceUtil::Time tm;
const int repetitions = 20;
for(int k = 0; k < 6; k++)
{
    min = max = temp = temp2 = smalltotal = total = 0;
    if(k == 0) seqSize = 1024;
    else if(k == 1) seqSize = 5120;
    else if(k == 2) seqSize = 10240;
    else if(k == 3) seqSize = 15360;
    else if(k == 4) seqSize = 20480;
    else if(k == 5) seqSize = 25600;
    printf("using byte sequences\nsending %d byte sequences of size %d\n", repetitions, seqSize);
    for(int i = 0; i < repetitions; ++i)
    {
        // count latency 10 times, then get the average
        for(int j = 0; j < 10; j++)
        {
            tm = IceUtil::Time::now();
            throughput->sendByteSeq(byteArr);
            tm = IceUtil::Time::now() - tm;
            temp = tm.toMilliSecondsDouble();
            smalltotal = smalltotal + temp;
        }
        smalltotal = smalltotal / 10;
        total = total + smalltotal;
        // print the latency for each ping
        printf("time for %d : %fms\n", i + 1, smalltotal);
        // compare and set the min and max value
        if(i == 0)
        {
            min = max = smalltotal;
        }
        else if(smalltotal < min)
        {
            min = smalltotal;
        }
        else if(smalltotal > max)
        {
            max = smalltotal;
        }
    }
    // print total, min, max, average, throughput
    double mbit = repetitions * seqSize * 8.0 / total;
    printf("====================================\n");
    printf("total time for %d sequences = %fms\n", repetitions, total);
    printf("min = %fms\nmax = %fms\n", min, max);
    printf("average = %fms\n", total / repetitions);
    printf("throughput: %.5fMbps\n", mbit);
    printf("====================================\n");
}
I ran the test averaging 10 runs for each measurement (20 measurements for each seqSize), and the results come out even more weird.
I am running the test on 2 different computers connected with a switch.
-
Hi,
Your client is wrong; you don't initialize the size of the vector:

int seqSize;
double min, max, temp, temp2, smalltotal = 0, total = 0;
ByteSeq byteSeq(seqSize, 0); // *** seqSize isn't initialized here! ***
You should move the byteSeq declaration below the initialization of seqSize in the first loop.
Also, as explained in my last post, I wouldn't measure the time of each invocation. Instead, you should measure the total time:

tm = IceUtil::Time::now();
for(int j = 0; j < 10; j++)
{
    throughput->sendByteSeq(byteArr);
}
tm = IceUtil::Time::now() - tm;
smalltotal = tm.toMilliSecondsDouble() / 10;
I'm getting consistent results with these changes on my Linux box (P4, 2.4GHz with Fedora Core 4). If you're not getting consistent results, I would recommend increasing the number of times the sequence is sent (from 10 to 100 or 1000, for example).
Cheers,
Benoit.
-
Thanks Benoit.
I tested it with 1000 and now the data is more stable, but there are still some weird data points. This is the graph I get from the twoway latency test.
Each data size with 20 tests, and each test with 1000 runs.
In the throughput demo code, what is the difference between oneway, twoway, echo and receive?
-
-
See section 30.12 in the Ice manual for information on oneway invocations. In the "receive" test, it's the server which sends the data to the client whereas in the "echo" test, the client sends some data to the server and the server sends back this data to the client. See the Slice definitions for each operation in the demo/Ice/throughput/Throughput.ice file.
Cheers,
Benoit.
-
Thanks Benoit.
Oneway is sending data to the server without caring whether the server receives it or not. Then how about twoway?
What is the difference between twoway and echo?
-
A twoway invocation will wait for the server reply. The reply is sent by the server when it's done dispatching the invocation. See the Ice manual for more information.
Note that oneway/twoway are two different invocation modes whereas send/echo/receive are simply method names. For the difference between the send and echo methods, please see their signatures in the Throughput.ice Slice file.
Cheers,
Benoit.
-
Why is Ice's latency so much better than TAO's? Is it an architecture problem?
-
Here is the screenshot of my graph for the Ice and TAO twoway latency.
-
Ice-E is designed for size and speed. TAO is not, and never has been, a fast CORBA implementation. If you need a very fast CORBA ORB, then you should try omniORB instead.
Note that with respect to latency, there are no significant advantages of the Ice protocol compared to CORBA's IIOP. So it all comes down to "quality of implementation".