Hi All

I seem to get a memory error when running ibis on a 54 GB, 53-million-row FastBit db.

# ibis -d . -q "SELECT sum(IN_BYTES),IPV4_SRC_ADDR,IPV4_DST_ADDR where IN_BYTES > 0"

Warning -- fileManager::unload timed-out while waiting for 281,791,252 bytes (totalBytes=1,799,361,864, maxBytes=2,076,628,992)
Error -- bundles::ctor received an exception, start cleaning up
Warning -- table::select absorbed a bad_alloc exception (storage::ctor(copy memory) failed), will return a nil pointer
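From the warning it looks like FastBit's file manager caps its in-memory cache at roughly 2 GB (maxBytes=2,076,628,992). My guess, and I have not verified the parameter name, is that this limit comes from the RC file and could be raised to something closer to the machine's RAM, e.g.

--snip--
# ibis.rc (parameter name is my guess from the warning text)
fileManager.maxBytes = 8000000000
--snip--

and passed to ibis with -c, if I am reading the usage right, e.g. "ibis -c ibis.rc -d . -q ...".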

Is there something on the OS side, or in FastBit itself, that I can tune to be able to query larger datasets? I'm running Ubuntu Linux (patty).

I am writing about 480,000 records every 5 minutes into 5-minute partitions.

This is the header of the -part.txt file, showing how I create the records.

--snip--
BEGIN HEADER
Name = "00"
Description = "/usr/local/bin//ardea -d 
/usr/local/iris/data/flows/flowdb/2011/08/11/01/00 -m FIRST_SWITCHED:uint 
LAST_SWITCHED:uint IPV4_SRC_ADDR:uint IPV4_DST_ADDR:uint IPV4_SRC_PORT:uint 
IPV4_DST_PORT:uint PROTOCOL:key SRC_TOS:uint IN_PKTS:uint IN_BYTES:uint 
OUT_PKTS:uint OUT_BYTES:uint INPUT_SNMP:key OUTPUT_SNMP:key SRC_AS:uint 
DST_AS:uint IPV4_SRC_MASK:uint IPV4_DST_MASK:uint DST_TOS:uint 
IPV4_NEXTHOP:uint EXADDR:uint AS_PATH_1:uint AS_PATH_2:uint AS_PATH_3:uint 
AS_PATH_4:uint AS_PATH_5:uint -t /dev/stdin"
Number_of_columns = 26
Number_of_rows = 453879
Timestamp = 1313017647
State = 1
END HEADER
--snip--
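
For what it's worth, each 5-minute partition sits in its own directory (year/month/day/hour/minute, as in the path above), so if I understand -d correctly the same query can be pointed at a single partition, e.g.

# ibis -d /usr/local/iris/data/flows/flowdb/2011/08/11/01/00 -q "SELECT sum(IN_BYTES),IPV4_SRC_ADDR,IPV4_DST_ADDR where IN_BYTES > 0"

but I would like to run the aggregation over the whole 54 GB set in one go.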

thanks

--
Alan Kemp
email: [email protected]
mobile: +27 83 257 5970
three6five systems