Archived

This forum has been archived. Please start a new discussion on GitHub.

Problems ... Freeze with 10,000,000 records

I use Freeze to store 10,000,000 records:


struct MD {
    long n0;
    long n1;
};

struct IFileID {
    short domain;
    MD key;
};

struct FileInfo {
    int size;
    byte count;
    StringSequence header;
};


slice2freeze --dict Demo::FileMap,Demo::IFileID,Demo::FileInfo --dict-index Demo::FileMap,size
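
Roughly, a map generated by this slice2freeze command is opened through a Freeze connection and used like this. This is only a minimal sketch; the environment name "db" and database name "fileMap" are illustrative, not taken from the actual test case:

```cpp
// Sketch: opening and using the generated Demo::FileMap.
// "db" (environment) and "fileMap" (database name) are assumed names.
#include <Freeze/Freeze.h>
#include <FileMap.h> // generated by slice2freeze

void populate(const Ice::CommunicatorPtr& communicator)
{
    Freeze::ConnectionPtr connection = Freeze::createConnection(communicator, "db");
    Demo::FileMap m(connection, "fileMap");

    Demo::IFileID id;
    id.domain = 1;
    id.key.n0 = 123456;
    id.key.n1 = 654321;

    Demo::FileInfo info;
    info.size = 1;
    info.count = 2;
    info.header.push_back("test");

    m.put(Demo::FileMap::value_type(id, info)); // one implicit transaction per put
}
```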

I use the simple test case in Ice/demo/Freeze/bench with the struct-struct map.

With 10,000 to 1,000,000 records everything works fine, but with 10,000,000 records something goes wrong while writing the data into the map.

The test application exits without any output. There is no core dump (even when built with -g), and the failure occurs at no fixed point: sometimes the application exits with 5,000,000 records in the map, sometimes with 7,000,000.

I'm using Linux 2.6.9-42.0.10.ELsmp (RHEL 4), and I installed Ice 3.2.0 from source.

So I need your help...

thanks!

Comments

  • benoit
    benoit Rennes, France
    Hi and welcome to the forums!

    Before we look further into this, could you upgrade to Ice 3.2.1 (we only provide free support on the forums for the latest Ice version) and could you also specify which BerkeleyDB version you used to build Ice?

    Thanks.

    Cheers,
    Benoit.
  • Thanks...

    I will upgrade to 3.2.1.

    I use the BDB source you provide in the ThirdParty-Sources package, db-4.5.20.NC (with patches).


    The main part of my test application:

    // write test: 100 batches of _repetitions/100 records each,
    // printing timing info after every batch
    for(int k = 0; k < 100; ++k)
    {
        _watch.start();
        for(i = k * _repetitions / 100; i < (k + 1) * _repetitions / 100; ++i)
        {
            md.n0 = 123456;
            md.n1 = 654321;
            s1.domain = i;
            s1.key = md;

            s2.size = i;
            s2.count = 2;
            vector<string> head;
            head.push_back("test");
            s2.header = head;

    #if defined(__BCPLUSPLUS__) || (defined(_MSC_VER) && (_MSC_VER < 1310))
            m.put(T::value_type(s1, s2));
    #else
            m.put(typename T::value_type(s1, s2));
    #endif
        }
        total = _watch.stop();
        perRecord = (total / _repetitions) * 100;

        cout << "\t[" << k + 1 << "]" << "time for " << _repetitions / 100
             << " writes: " << total * 1000 << "ms";
        cout << "\ttime per write: " << perRecord * 1000 << "ms" << endl;
    }




    // read test
    _watch.start();
    for(i = 0; i < _repetitions; ++i)
    {
        md.n0 = 123456;
        md.n1 = 654321;
        s1.domain = i;
        s1.key = md;
        typename T::iterator p = m.findBySize(i);
        test(p != m.end());
        test(p->first.domain == i);
    }
    total = _watch.stop();
    perRecord = total / _repetitions;

    cout << "\ttime for " << _repetitions << " reads: " << total * 1000 << "ms" << endl;
    cout << "\ttime per read: " << perRecord * 1000 << "ms" << endl;



    And there is something that bothers me: the data stored in the db directory stops growing once the record count reaches about 100,000.
  • bernard
    bernard Jupiter, FL
    If you could post the full test-case as an attachment, it would help us reproduce and investigate this issue.

    Also, since you're using RHEL4, did you try to run your test using the Ice 3.2.1 binary distribution?

    Best regards,
    Bernard
  • I installed Ice 3.2.1 from source (including the ThirdParty-Sources you provide). I ran the test case on RHEL 4 and Ubuntu 7.04.

    I just rewrote some code in demo/Freeze/bench/; the full test case is in the attachments.

    Please compile and run the test case: ./client 10000000

    Thanks...
  • bernard
    bernard Jupiter, FL
    Hi Haiyang,

    I was unable to reproduce this problem. On a fairly fast dual-core RHEL4.4 32-bit desktop, your test runs fine ... it just takes several hours to complete:
    $ ./client 10000000
    FileMap
            [1]time for 100000 writes: 95640.5ms    time per write: 0.9ms
            [2]time for 100000 writes: 122229ms     time per write: 1.2ms
            [3]time for 100000 writes: 112198ms     time per write: 1.1ms
            [4]time for 100000 writes: 105386ms     time per write: 1ms
            [5]time for 100000 writes: 102824ms     time per write: 1ms
            [6]time for 100000 writes: 123572ms     time per write: 1.2ms
            [7]time for 100000 writes: 79224.2ms    time per write: 0.7ms
            [8]time for 100000 writes: 84006.3ms    time per write: 0.8ms
            [9]time for 100000 writes: 84684.3ms    time per write: 0.8ms
            [10]time for 100000 writes: 108496ms    time per write: 1ms
            [11]time for 100000 writes: 143470ms    time per write: 1.4ms
            [12]time for 100000 writes: 103452ms    time per write: 1ms
            [13]time for 100000 writes: 89245.7ms   time per write: 0.8ms
            [14]time for 100000 writes: 95315.8ms   time per write: 0.9ms
            [15]time for 100000 writes: 89170.1ms   time per write: 0.8ms
            [16]time for 100000 writes: 92926.7ms   time per write: 0.9ms
            [17]time for 100000 writes: 87934.8ms   time per write: 0.8ms
            [18]time for 100000 writes: 94055ms     time per write: 0.9ms
            [19]time for 100000 writes: 117217ms    time per write: 1.1ms
            [20]time for 100000 writes: 153953ms    time per write: 1.5ms
            [21]time for 100000 writes: 130970ms    time per write: 1.3ms
            [22]time for 100000 writes: 97541.9ms   time per write: 0.9ms
            [23]time for 100000 writes: 98134.5ms   time per write: 0.9ms
            [24]time for 100000 writes: 97088ms     time per write: 0.9ms
            [25]time for 100000 writes: 107418ms    time per write: 1ms
            [26]time for 100000 writes: 154542ms    time per write: 1.5ms
            [27]time for 100000 writes: 103892ms    time per write: 1ms
            [28]time for 100000 writes: 106588ms    time per write: 1ms
            [29]time for 100000 writes: 106366ms    time per write: 1ms
            [30]time for 100000 writes: 101491ms    time per write: 1ms
            [31]time for 100000 writes: 98943.6ms   time per write: 0.9ms
            [32]time for 100000 writes: 100250ms    time per write: 1ms
            [33]time for 100000 writes: 135876ms    time per write: 1.3ms
            [34]time for 100000 writes: 188748ms    time per write: 1.8ms
            [35]time for 100000 writes: 113660ms    time per write: 1.1ms
            [36]time for 100000 writes: 108372ms    time per write: 1ms
            [37]time for 100000 writes: 107840ms    time per write: 1ms
            [38]time for 100000 writes: 107030ms    time per write: 1ms
            [39]time for 100000 writes: 107895ms    time per write: 1ms
            [40]time for 100000 writes: 108511ms    time per write: 1ms
            [41]time for 100000 writes: 106806ms    time per write: 1ms
            [42]time for 100000 writes: 106417ms    time per write: 1ms
            [43]time for 100000 writes: 107994ms    time per write: 1ms
            [44]time for 100000 writes: 106845ms    time per write: 1ms
            [45]time for 100000 writes: 109347ms    time per write: 1ms
            [46]time for 100000 writes: 111735ms    time per write: 1.1ms
            [47]time for 100000 writes: 144379ms    time per write: 1.4ms
            [48]time for 100000 writes: 180830ms    time per write: 1.8ms
            [49]time for 100000 writes: 120183ms    time per write: 1.2ms
            [50]time for 100000 writes: 95540.9ms   time per write: 0.9ms
            [51]time for 100000 writes: 94959.1ms   time per write: 0.9ms
            [52]time for 100000 writes: 94476.7ms   time per write: 0.9ms
            [53]time for 100000 writes: 224981ms    time per write: 2.2ms
            [54]time for 100000 writes: 99717.1ms   time per write: 0.9ms
            [55]time for 100000 writes: 94236.6ms   time per write: 0.9ms
            [56]time for 100000 writes: 97778.7ms   time per write: 0.9ms
            [57]time for 100000 writes: 97368.5ms   time per write: 0.9ms
            [58]time for 100000 writes: 82871.6ms   time per write: 0.8ms
            [59]time for 100000 writes: 88659.1ms   time per write: 0.8ms
            [60]time for 100000 writes: 113776ms    time per write: 1.1ms
            [61]time for 100000 writes: 161972ms    time per write: 1.6ms
            [62]time for 100000 writes: 205758ms    time per write: 2ms
            [63]time for 100000 writes: 85202.2ms   time per write: 0.8ms
            [64]time for 100000 writes: 70074.4ms   time per write: 0.7ms
            [65]time for 100000 writes: 67672.7ms   time per write: 0.6ms
            [66]time for 100000 writes: 67809.4ms   time per write: 0.6ms
            [67]time for 100000 writes: 70033.9ms   time per write: 0.7ms
            [68]time for 100000 writes: 69062.5ms   time per write: 0.6ms
            [69]time for 100000 writes: 65333.3ms   time per write: 0.6ms
            [70]time for 100000 writes: 71338.4ms   time per write: 0.7ms
            [71]time for 100000 writes: 70211.6ms   time per write: 0.7ms
            [72]time for 100000 writes: 67599ms     time per write: 0.6ms
            [73]time for 100000 writes: 72965.4ms   time per write: 0.7ms
            [74]time for 100000 writes: 72155.8ms   time per write: 0.7ms
            [75]time for 100000 writes: 144469ms    time per write: 1.4ms
            [76]time for 100000 writes: 126610ms    time per write: 1.2ms
            [77]time for 100000 writes: 67470.7ms   time per write: 0.6ms
            [78]time for 100000 writes: 66399.1ms   time per write: 0.6ms
            [79]time for 100000 writes: 67217.7ms   time per write: 0.6ms
            [80]time for 100000 writes: 75657.4ms   time per write: 0.7ms
            [81]time for 100000 writes: 195339ms    time per write: 1.9ms
            [82]time for 100000 writes: 71958.1ms   time per write: 0.7ms
            [83]time for 100000 writes: 74345.5ms   time per write: 0.7ms
            [84]time for 100000 writes: 79868.6ms   time per write: 0.7ms
            [85]time for 100000 writes: 71308.9ms   time per write: 0.7ms
            [86]time for 100000 writes: 66099.7ms   time per write: 0.6ms
            [87]time for 100000 writes: 71065.9ms   time per write: 0.7ms
            [88]time for 100000 writes: 127952ms    time per write: 1.2ms
            [89]time for 100000 writes: 209962ms    time per write: 2ms
            [90]time for 100000 writes: 121628ms    time per write: 1.2ms
            [91]time for 100000 writes: 63299ms     time per write: 0.6ms
            [92]time for 100000 writes: 63918.2ms   time per write: 0.6ms
            [93]time for 100000 writes: 63108.8ms   time per write: 0.6ms
            [94]time for 100000 writes: 64297.2ms   time per write: 0.6ms
            [95]time for 100000 writes: 61223.5ms   time per write: 0.6ms
            [96]time for 100000 writes: 63367.3ms   time per write: 0.6ms
            [97]time for 100000 writes: 63694.9ms   time per write: 0.6ms
            [98]time for 100000 writes: 64327.6ms   time per write: 0.6ms
            [99]time for 100000 writes: 63038.6ms   time per write: 0.6ms
            [100]time for 100000 writes: 61240.5ms  time per write: 0.6ms
            time for 10000000 reads: 666027ms
            time per read: 0.066ms
    

    That's using an Ice 3.2.1 debug build and a Berkeley DB release build. The last log file was log.0000000281.

    Of course, it will be very difficult to help if we can't reproduce the problem.

    Also, I don't see the relationship with the bench demo; it would be clearer to write a standalone test.

    This test takes a long time because every single write is performed in its own transaction. If you want to speed it up, you could group several writes in the same transaction.
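
    For example, writes can be batched with Freeze's TransactionHolder so that one transaction covers many puts. This is only a sketch; batchSize and the variable names are illustrative, reusing the objects from the bench write loop:

    ```cpp
    // Sketch: grouping writes into explicit transactions instead of one
    // implicit transaction per put(). "connection", "m", "s1" and "s2" are
    // the bench test's objects; batchSize is an illustrative choice.
    const Ice::Int batchSize = 1000;

    for(Ice::Int start = 0; start < totalRecords; start += batchSize)
    {
        Freeze::TransactionHolder tx(connection); // begins a transaction
        for(Ice::Int i = start; i < start + batchSize && i < totalRecords; ++i)
        {
            // ... fill s1 and s2 exactly as in the write loop above ...
            m.put(FileMap::value_type(s1, s2));
        }
        tx.commit(); // without commit(), the holder rolls back on destruction
    }
    ```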

    Cheers,
    Bernard
  • Thanks for your help...

    It seems that Freeze performs very well.

    I also ran the same test under Ice 3.2.0, but the final read-test section didn't produce any output.

    And there is something more I have to mention. If the data type of IFileID.domain is 'short' (it is actually 'int' in my test case in source.zip), there is an overflow problem when I assign an integer from 1 to 10,000,000 to IFileID.domain. The test application does not report this problem in any way; it may simply fail in this situation, without any warning or core dump, and the data in the db directory stops growing at some point even while the test application is still running.
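
    The silent truncation described here can be reproduced in isolation, independent of Ice and Freeze. A minimal standalone sketch:

    ```cpp
    #include <cassert>
    #include <iostream>

    int main()
    {
        // Assigning an out-of-range int to a 16-bit short silently truncates
        // (two's-complement wrap-around on common platforms): no warning,
        // no exception, no crash.
        int i = 100000;                   // record index above SHRT_MAX (32767)
        short domain = static_cast<short>(i);
        std::cout << domain << std::endl; // -31072 with the usual 16-bit short

        assert(domain != i);              // the key no longer matches the index

        // Worse, two different indices collapse onto the same short key,
        // so later records silently overwrite earlier ones in the map.
        assert(static_cast<short>(1) == static_cast<short>(65537)); // 65537 % 65536 == 1
        return 0;
    }
    ```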

    So where is the problem? Ice or BDB? Maybe the developer has to do more himself/herself.

    with great thanks!
  • I intend to put every single write operation in its own transaction.
  • bernard
    bernard Jupiter, FL
    So where is the problem? Ice or BDB?

    Your test case does not show any problem in Ice, Freeze or Berkeley DB. As far as I can tell, there is no problem.

    Naturally, debugging your test-case or helping you write a good test case is beyond the free support we offer on these forums. For further assistance, you should consider subscribing to our commercial support; please contact info@zeroc.com for details.

    Best regards,
    Bernard