Hi there,
I encountered problems when handling large datasets with FastBit.
The volume datasets are divided into bricks so that each brick fits into memory.
The way I handle it is:

ibis::tablex* tblx = ibis::tablex::create();
for (each brick) {
    for (each dataset in the brick) {
        // read the dataset into memory
        if (first_time)
            tblx->addColumn(name, type);
        tblx->append(name, 0, size, data);
    }
    tblx->write(strFullDir);   // write this brick's rows out
    tblx->clearData();         // release the in-memory rows
}
delete tblx;

I found that tablex::write() appends to files that already exist, but I am 
not sure whether my usage of tablex::append() is correct, since the starting 
position is always 0. If I increment the starting position instead, FastBit 
seems to append the entire table to the files in each iteration... Since I 
don't want to use up all the memory, I would like to write only the newly 
added rows to disk in each iteration and then release the data.
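
To make the question concrete, here is a cut-down version of what I am 
doing. The column name, output directory, and brick size below are just 
placeholders for this example, and I am assuming the begin/end arguments of 
append() refer to positions within the in-memory buffer:

#include "ibis.h"     // main FastBit header
#include <cstdint>
#include <vector>

int main() {
    ibis::init();                                 // initialize FastBit

    const char* outdir = "volume-data";           // placeholder output directory
    const uint64_t brickSize = 64 * 64 * 64;      // placeholder brick size

    ibis::tablex* tblx = ibis::tablex::create();
    tblx->addColumn("density", ibis::FLOAT);      // declare the column once

    for (int b = 0; b < 4 * 4 * 4; ++b) {         // loop over all bricks
        std::vector<float> data(brickSize);
        // ... read this brick's values into 'data' ...

        // rows occupy [0, brickSize) of the in-memory buffer, because
        // clearData() emptied the buffer at the end of the last iteration
        tblx->append("density", 0, brickSize, data.data());

        tblx->write(outdir);    // append this brick's rows to the files on disk
        tblx->clearData();      // release the in-memory rows
    }
    delete tblx;
    return 0;
}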
I tried a dataset with 1 * 1 * 2 bricks and it seems to work well. But when 
I experiment with a dataset with 4 * 4 * 4 bricks, the index files are not 
created. Any help would be appreciated. Thanks!
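
If it helps, would building the indexes explicitly after all the data are 
written be the right approach? Something like the following is what I had in 
mind; ibis::table::create() and buildIndexes() are my guesses at the right 
calls from reading table.h, so please correct me if there is a better way:

#include "ibis.h"

int main() {
    ibis::init();

    // open the directory produced by tablex::write()
    // ("volume-data" is the placeholder directory from the sketch above)
    ibis::table* tbl = ibis::table::create("volume-data");
    if (tbl != 0) {
        tbl->buildIndexes(0);   // build indexes on all columns, default options
        delete tbl;
    }
    return 0;
}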

Regards,
Liang
