Hi Tobi,

You want 10k-20k DB files, or 10k-20k tables inside a single DB file?
Plus, how many rows per table do you want?

Cheers,
Igor

Tobias Oetiker wrote:
> Hi Igor,
>
> the constant size is good news ...
>
> for a realistic simulation you have to have 10-20k rrd file
> equivalents ... since the caching effect is a rather important
> part of the equation.
>
> cheers
> tobi
>
> Today Sfiligoi Igor wrote:
>
>> kevin brintnall wrote:
>>> On Mon, Nov 17, 2008 at 12:14:04PM -0600, Sfiligoi Igor wrote:
>>>> Running a simple open/update/close loop, I get ~9 updates per second:
>>> Igor, what kind of rates can you get with RRD update on the same hardware?
>>>
>> I get ~350 updates per second using plain rrdtool update invocations:
>>
>> bash-3.2$ rrdtool create t1.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:100
>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
>> Mon Nov 17 12:40:20 CST 2008
>> Mon Nov 17 12:40:48 CST 2008
>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
>> Mon Nov 17 12:41:00 CST 2008
>> Mon Nov 17 12:41:28 CST 2008
>>
>> bash-3.2$ rrdtool create t2.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:2000
>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
>> Mon Nov 17 12:41:35 CST 2008
>> Mon Nov 17 12:42:03 CST 2008
>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
>> Mon Nov 17 12:42:08 CST 2008
>> Mon Nov 17 12:42:37 CST 2008
>>
>> Indeed, the sqlite approach seems to be viable only when grouping many
>> updates into a single transaction:
>> 1 row update/transaction = ~9 Hz
>> 10 row updates/transaction = ~85 Hz
>> 100 row updates/transaction = ~800 Hz
>>
>> Igor
>>
>
> _______________________________________________
> rrd-developers mailing list
> [email protected]
> https://lists.oetiker.ch/cgi-bin/listinfo/rrd-developers
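P.S. For reference, the transaction-batching effect measured in the quoted
thread (1 update/txn = ~9 Hz vs. 100 updates/txn = ~800 Hz) can be sketched
with a small Python/sqlite3 harness. This is not code from the thread: the
table name, schema, and batch sizes are invented for illustration. The point
it demonstrates is that SQLite issues one commit (and hence one fsync of the
journal) per transaction, so grouping N updates into one transaction
amortizes that cost roughly N-fold.

```python
import random
import sqlite3
import time

# Hypothetical schema: one table standing in for one rrd-file equivalent,
# with one row per data-source slot.
conn = sqlite3.connect("bench.db")
conn.execute("CREATE TABLE IF NOT EXISTS t1 (slot INTEGER PRIMARY KEY, val REAL)")
conn.executemany("INSERT OR REPLACE INTO t1 VALUES (?, 0)",
                 [(i,) for i in range(100)])
conn.commit()

def bench(batch_size, total=300):
    """Return updates/second when `batch_size` updates share one transaction."""
    start = time.time()
    done = 0
    while done < total:
        with conn:  # context manager = one transaction (one commit/fsync) per batch
            for _ in range(batch_size):
                conn.execute("UPDATE t1 SET val = ? WHERE slot = ?",
                             (random.random(), done % 100))
                done += 1
    return done / (time.time() - start)

for size in (1, 10, 100):
    print(f"{size:3d} updates/transaction: ~{bench(size):.0f} Hz")
```

The absolute numbers depend entirely on the disk and journal mode (and an
autocommit UPDATE, as in the 1-per-transaction case, pays the full commit
cost every time), but the relative gap between batch sizes should mirror
what Igor observed.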
