problems ... freeze with 10,000,000 records

I use Freeze to store 10,000,000 records:
struct MD {
    long n0;
    long n1;
};
struct IFileID {
    short domain;
    MD key;
};
struct FileInfo {
    int size;
    byte count;
    StringSequence header;
};
slice2freeze --dict Demo::FileMap,Demo::IFileID,Demo::FileInfo --dict-index Demo::FileMap,size
I use the simple test case in Ice/demo/Freeze/bench with the struct-struct map.
With 10,000 to 1,000,000 records everything works fine, but with 10,000,000 records something goes wrong while writing the data into the map.
The test application exits without any message; there is no core dump (even when built with -g), and the failure point varies: sometimes the application dies after 5,000,000 records, sometimes after 7,000,000.
I'm using Linux 2.6.9-42.0.10.ELsmp (RHEL 4), and I installed Ice 3.2.0 from source.
So I need your help...
thanks!
Comments
Before we look further into this, could you upgrade to Ice 3.2.1 (we only provide free support on the forums for the latest Ice version) and could you also specify which BerkeleyDB version you used to build Ice?
Thanks.
Cheers,
Benoit.
I will upgrade to 3.2.1.
I use the BDB source you provide in ThirdParty-Source, db-4.5.20.NC (with the patches applied).
The main part of my test application:
//write test
//write 100 batches of _repetitions/100 records each (100,000 per batch for 10,000,000 records), printing timing info
for(int k = 0; k < 100; ++k)
{
    _watch.start();
    {
        for(i = k * _repetitions / 100; i < (k + 1) * _repetitions / 100; ++i)
        {
            md.n0 = 123456;
            md.n1 = 654321;
            s1.domain = i;
            s1.key = md;
            s2.size = i;
            s2.count = 2;
            vector<string> head;
            head.push_back("test"); // fixed: was loc.push_back("test"), but 'loc' is never declared
            s2.header = head;
#if defined(__BCPLUSPLUS__) || (defined(_MSC_VER) && (_MSC_VER < 1310))
            m.put(T::value_type(s1, s2));
#else
            m.put(typename T::value_type(s1, s2));
#endif
        }
    }
    total = _watch.stop();
    perRecord = (total / _repetitions) * 100;
    cout << "\t[" << k + 1 << "]" << "time for " << _repetitions / 100 << " writes: " << total * 1000 << "ms";
    cout << "\ttime per write: " << perRecord * 1000 << "ms" << endl;
}
//read test
_watch.start();
for(i = 0; i < _repetitions; ++i)
{
    md.n0 = 123456;
    md.n1 = 654321;
    s1.domain = i;
    s1.key = md; // fixed: was 's1.key = md5', but 'md5' is never declared
    typename T::iterator p = m.findBySize(i);
    test(p != m.end());
    test(p->first.domain == i);
}
total = _watch.stop();
perRecord = total / _repetitions;
cout << "\ttime for " << _repetitions << " reads: " << total * 1000 << "ms" << endl;
cout << "\ttime per read: " << perRecord * 1000 << "ms" << endl;
And there is something bothering me... the data stored in the db directory stops increasing once there are about 100,000 records.
Also, since you're using RHEL4, did you try to run your test using the Ice 3.2.1 binary distribution?
Best regards,
Bernard
I just rewrote some code in demo/Freeze/bench/; the full test case is in the attachments.
Please compile and run the test case: ./client 10000000
Thanks...
I was unable to reproduce this problem. On a fairly fast dual-core RHEL4.4 32-bit desktop, your test runs fine ... it just takes several hours to complete:
That's using an Ice 3.2.1 debug build and a Berkeley DB release build. The last log file was log.0000000281.
Of course, it will be very difficult to help if we can't reproduce the problem.
Also, I don't see the relationship with the bench demo; it would be clearer to write a standalone test.
This test takes a long time because every single write is performed in its own transaction. If you want to speed it up, you could group several writes in the same transaction.
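A rough sketch of that batching, assuming the slice2freeze-generated Demo::FileMap and an already-open Freeze connection (the generated header name and the setup code are assumptions, not taken from the attached test case):

```cpp
// Sketch only: requires an Ice/Freeze installation to compile.
#include <Freeze/Freeze.h>
#include <FileMap.h> // slice2freeze-generated header (name assumed)

void writeBatched(const Freeze::ConnectionPtr& connection,
                  Demo::FileMap& m, int begin, int end)
{
    // One transaction for the whole batch instead of one per put().
    Freeze::TransactionHolder tx(connection);
    for(int i = begin; i < end; ++i)
    {
        Demo::IFileID id;
        id.domain = i;
        id.key.n0 = 123456;
        id.key.n1 = 654321;

        Demo::FileInfo info;
        info.size = i;
        info.count = 2;
        info.header.push_back("test");

        m.put(Demo::FileMap::value_type(id, info));
    }
    tx.commit(); // the holder rolls back automatically if commit() is never reached
}
```

Batching like this also reduces the amount of transaction logging, which should shrink the pile of log.########## files the test leaves behind.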
Cheers,
Bernard
It seems that Freeze performs very well.
I also ran the same test under Ice 3.2.0, but the final read-test section did not produce any output.
And there is something more I have to mention. If the data type of IFileID.domain is 'short' (it is actually 'int' in my test case in source.zip), then assigning an integer from 1 to 10,000,000 to IFileID.domain overflows. The test application does not report this at all; it may simply fail in this situation, with no warning and no core dump, and the data in the 'db' directory stops growing at some point even while the test application is still running.
So where is the problem, Ice or BDB? Maybe the developer has to guard against this himself/herself.
With great thanks!
Your test case does not show any problem in Ice, Freeze or Berkeley DB. As far as I can tell, there is no problem.
Naturally, debugging your test-case or helping you write a good test case is beyond the free support we offer on these forums. For further assistance, you should consider subscribing to our commercial support; please contact [email protected] for details.
Best regards,
Bernard