Archived
This forum has been archived. Please start a new discussion on GitHub.
Stress test
We are conducting a stress test of UDP and TCP connections using the Ice library.
On the server side we are going to have one communicator for TCP and one for UDP within the one application.
I am uncertain how to configure the client side to simulate multiple clients trying to connect to the server.
Do I need to run a number of executables simultaneously, or can I create multiple client objects within the same application?
I have experimented with creating multiple clients within the one application; pseudo code as follows:
loop 1
create interface proxy for each client
loop 2
execute an interface function on every second client's proxy (we only want half the connections to be active)
Does this result in each client instance getting a different port assigned, or is there a better way to set this up?
Thanks in advance.
Comments
-
You can create multiple communicators in your client, and create proxies with these different communicators. This will force separate connections to be used for these proxies, instead of the connection sharing that would happen if you used only one communicator.
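A minimal sketch of that approach, under the assumption that `StressTCP` is the Slice interface from the posted server code (the generated header name and the endpoint are taken from this thread, not verified): each proxy is created by its own communicator, so each one gets its own TCP connection, and only every second proxy is driven, matching the pseudo code above.

```cpp
#include <Ice/Ice.h>
#include <StressTCP.h> // generated from the thread's (assumed) StressTCP Slice definition
#include <vector>

int main(int argc, char* argv[])
{
    std::vector<Ice::CommunicatorPtr> communicators;
    std::vector<StressTCPPrx> proxies;
    const int numClients = 10; // illustrative; raise this for a real stress test

    for(int i = 0; i < numClients; ++i)
    {
        // One communicator per simulated client forces a separate connection.
        Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);
        communicators.push_back(ic);
        Ice::ObjectPrx base = ic->stringToProxy("StressTCP:tcp -p 11111");
        // uncheckedCast avoids a remote call here; the connection is only
        // established on the first actual invocation.
        proxies.push_back(StressTCPPrx::uncheckedCast(base));
    }

    // Invoke on every second proxy, so only half the connections become active.
    for(std::vector<StressTCPPrx>::size_type i = 0; i < proxies.size(); i += 2)
    {
        proxies[i]->ice_ping(); // opens and exercises that client's connection
    }

    for(std::vector<Ice::CommunicatorPtr>::size_type i = 0; i < communicators.size(); ++i)
    {
        communicators[i]->destroy();
    }
    return 0;
}
```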
-
Client Hanging
Thanks for the ideas for the stress test. I was able to get everything up and running.
We have experienced an intermittent problem with the client hanging.
The client hangs on the first checkedCast call that it makes: it goes into an endless loop and never connects to the server. The code currently works on my machine, but when another developer tries to build it from our CVS it hangs at the checkedCast point. Both of us are running Windows XP.
Here is the code from server & client :
Client :
Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);
Ice::ObjectPrx base = ic->stringToProxy("StressTCP:tcp -p 11111");
StressTCPPrx proxy = StressTCPPrx::checkedCast(base);
Server:
Ice::CommunicatorPtr ic;
SummaryStats::instance().Init();
Ice::StatsPtr statsptr = new IceStats;
try
{
    ic = Ice::initialize(argc, argv);
    ic->setStats(statsptr);

    // Create TCP adapter
    Ice::ObjectAdapterPtr adapter_tcp = ic->createObjectAdapterWithEndpoints("TCPAdapter", "tcp -p 11111");
    Ice::ObjectPtr object_tcp = new StressTCPI;
    adapter_tcp->add(object_tcp, Ice::stringToIdentity("StressTCP"));
    adapter_tcp->activate();

    // Create UDP adapter
    Ice::ObjectAdapterPtr adapter_udp = ic->createObjectAdapterWithEndpoints("UDPAdapter", "udp -p 11112");
    Ice::ObjectPtr object_udp = new StressUDPI;
    adapter_udp->add(object_udp, Ice::stringToIdentity("StressUDP"));
    adapter_udp->activate();

    ic->waitForShutdown();
}
catch (const Ice::Exception& e)
{
    cerr << e << endl;
}
catch (const char* msg)
{
    cerr << msg << endl;
}
if (ic)
    ic->destroy();
-
There is definitely no bug with checkedCast in Ice. All the tests and demos use checkedCast, and there is no endless loop anywhere.
Since it works for you but not on the other machine, I suspect that the other machine's build environment is broken.
-
Call Stack
I have been able to reproduce the problem on my work laptop so can provide the call stack :
KERNEL32! 7c57e592()
MSVCRTD! _CxxThrowException@8 + 57 bytes
Ice::ConnectionRefusedException::ice_throw() line 522
IceInternal::ProxyFactory::checkRetryAfterException(const Ice::LocalException & {...}, int & 2) line 118 + 13 bytes
IceProxy::Ice::Object::__handleException(const Ice::LocalException & {...}, int & 2) line 631
IceProxy::Ice::Object::ice_isA(const _STL::basic_string<char,_STL::char_traits<char>,_STL::allocator<char> > & {...}, ...) line 126
IceProxy::Ice::Object::ice_isA(const _STL::basic_string<char,_STL::char_traits<char>,_STL::allocator<char> > & {...}) line 105
IceInternal::checkedCastImpl(const IceInternal::ProxyHandle<IceProxy::Ice::Object> & {...}) line 291 + 45 bytes
IceInternal::checkedCastHelper(const IceInternal::ProxyHandle<IceProxy::Ice::Object> & {...}, void * 0x00000000) line 76 + 13 bytes
IceInternal::ProxyHandle<IceProxy::StressTCP>::checkedCast(const IceInternal::ProxyHandle<IceProxy::Ice::Object> & {...}) line 238 + 17 bytes
main(int 1, char * * 0x00516210) line 119 + 13 bytes
mainCRTStartup() line 338 + 17 bytes
KERNEL32! 7c581af6()
-
This means that the server is not running, or is not listening on the host/port that is specified.
You should add a catch clause for Ice exceptions and print them on standard output, like in the demos.
-
IP Address
I have a feeling the problem has something to do with the ip address of the server. A couple of questions ...
(i) Is there a way I can find out what IP address the server is listening on?
(ii) What is the format for specifying the host address when the client is connecting?
-
Re: IP Address
Originally posted by tony_h:
(i) Is there a way I can find out what IP address the server is listening on?
(ii) What is the format for specifying the host address when the client is connecting?
identity:protocol -h host_or_IP -p port etc.
You can use a hostname or an IP address as the argument to the -h option.
If the client is using a proxy that it received from the server, then the proxy contains the endpoints of the server's object adapter used to create the proxy. You must make sure that the object adapter's endpoints are accessible to the client.
Also note that the object adapter does not automatically listen on all available network interfaces. If you specify an endpoint of tcp -h 127.0.0.1 -p 10000, then the object adapter will only listen on the interface for 127.0.0.1, meaning no other hosts will be able to connect to it on that endpoint. If you want the adapter to listen on multiple endpoints, you must specify each one individually.
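As a concrete sketch of the above (the IP address here is purely illustrative, not from this thread): the server's object adapter can be given multiple explicit endpoints, separated by a colon, and the client names the same host in its stringified proxy.

```
# Server configuration: TCPAdapter listens on two interfaces explicitly.
# 192.168.1.10 is a hypothetical LAN address; replace with your server's.
TCPAdapter.Endpoints=tcp -h 192.168.1.10 -p 11111:tcp -h 127.0.0.1 -p 11111

# Client: stringified proxy naming the host explicitly.
StressTCP:tcp -h 192.168.1.10 -p 11111
```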
See the manual for more information.
Take care,
- Mark
-
Fixed !
I finally tracked it down to a missing STLport preprocessor directive: my project was linking against the Ice debug library, which in turn links against the STLport debug DLL, while my application was linking against the STLport release DLL.
I added _STLP_DEBUG as a preprocessor directive and everything worked.
Thanks for your support on this.
-
Further to my original post ...
Is it possible to have multiple client connections (and client ports) within the same thread?
-
Yes, use proxies that have been created with different communicators. But why do you want to have separate connections?
-
This is not a realistic requirement for a real app, but merely for the purposes of our stress test.
We want to simulate a large number (1000, for example) of open connections on the server.
At the moment I am opening a new communicator for each connection on the client side, which tops out at about 350 because the client runs out of threads. There is also overhead with so many threads open.
I wanted to have more connections with fewer threads open.
-
There is no need to create so many threads if all you want to do is test the number of connections a server can handle. Simply create all the communicators in one thread, create a proxy with each of them, and send a request using each proxy. Every request will then use a separate connection.
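A hedged sketch of that single-threaded approach (the proxy string and `StressTCP` interface are assumed from the code posted earlier in this thread): one thread owns every communicator, and each proxy's first invocation opens its own connection.

```cpp
#include <Ice/Ice.h>
#include <vector>

// All communicators live in the calling thread; no user threads are spawned.
// Each communicator's proxy uses its own connection to the server.
void stressConnections(int argc, char* argv[], int numConnections)
{
    std::vector<Ice::CommunicatorPtr> communicators;
    communicators.reserve(numConnections);

    for(int i = 0; i < numConnections; ++i)
    {
        Ice::CommunicatorPtr ic = Ice::initialize(argc, argv);
        communicators.push_back(ic);
        Ice::ObjectPrx base = ic->stringToProxy("StressTCP:tcp -p 11111");
        base->ice_ping(); // first invocation opens a dedicated connection
    }

    // Connections stay open (until ACM closes idle ones) while the
    // communicators are alive; destroying them closes everything.
    for(std::vector<Ice::CommunicatorPtr>::size_type i = 0; i < communicators.size(); ++i)
    {
        communicators[i]->destroy();
    }
}
```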
By the way, I do not believe that the result will be meaningful. There is no problem for an Ice server to handle a large number of connections. Also note that Ice automatically closes connections that have been idle for a certain timeout period. Such connections are then transparently reestablished when a new request is sent. (That's called Active Connection Management.)
-
What I wasn't sure of was how to force each communicator to be in the same thread each time. I thought that spawning a new thread for each communicator was the default behaviour.
-
Yes, each new communicator internally spawns threads (two by default). There is nothing you can do about this. For real-world applications this is not a problem, because you would never create so many communicators. The point is that you don't need to spawn threads in your own code just to create separate communicators.
I still don't see the point of this test. Do you want to test whether a server can accept a few thousand connections, of which most are then idle? It sure can! Do you want to test whether a server can handle heavy simultaneous traffic on all these connections? Only if you have a very high-end computer that can handle all the processing! The limits are not set by Ice, but by the time it takes to process each request in your server code.