Hi Tobi.

I created a DB file with 50k tables, each with 10k rows... the resulting file is ~4GB.
Speed tests were quite dismaying. To start, it took me ~12h to create the 50k
tables! That translates to ~1 table per second. Then I tested the updates...
here the results are even worse. SQLite does not seem to handle big DB files
very well: updating a single table takes ~2s! Updating all 50k tables in a
single transaction is better, ~700s, but that is still only ~70Hz.

Igor

Tobias Oetiker wrote:
> Today Sfiligoi Igor wrote:
>
>> Hi Tobi.
>>
>> Do you want
>> 10k-20k DB files, or
>> 10k-20k tables inside a single DB file?
>>
>> Plus, how many rows per table do you want?
>
> I can only speak for the rrdtool side; there we see a dramatic
> fall in performance once not everything is in cache anymore ...
> (if you do not call rrdtool on the command line but use the
> language bindings, you can do up to 20k updates per second as long
> as the files are in memory/cached). Real-world applications normally
> cannot keep them in memory, though; rrdtool tries to give the OS
> hints on what to keep in order to optimize performance.
>
> As for the setup (one file vs. many), you should do whatever works
> well for sqlite. I guess one of the advantages of SQLite would be
> that more RRD structures can be held in a single file, which should
> improve performance.
>
> cheers
> tobi
>
>
>> Cheers,
>> Igor
>>
>> Tobias Oetiker wrote:
>>> Hi Igor,
>>>
>>> the constant size is good news ...
>>>
>>> for a realistic simulation you have to have 10-20k rrd file
>>> equivalents ... since the caching effect is a rather important
>>> part of the equation.
>>>
>>> cheers
>>> tobi
>>>
>>> Today Sfiligoi Igor wrote:
>>>
>>>> kevin brintnall wrote:
>>>>> On Mon, Nov 17, 2008 at 12:14:04PM -0600, Sfiligoi Igor wrote:
>>>>>> Running a simple open/update/close loop, I get ~9 updates per second:
>>>>> Igor, what kind of rates can you get with RRD update on the same hardware?
>>>>>
>>>> I get ~350 updates per second using plain rrdtool update invocations:
>>>>
>>>> bash-3.2$ rrdtool create t1.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:100
>>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
>>>> Mon Nov 17 12:40:20 CST 2008
>>>> Mon Nov 17 12:40:48 CST 2008
>>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
>>>> Mon Nov 17 12:41:00 CST 2008
>>>> Mon Nov 17 12:41:28 CST 2008
>>>>
>>>> bash-3.2$ rrdtool create t2.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:2000
>>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
>>>> Mon Nov 17 12:41:35 CST 2008
>>>> Mon Nov 17 12:42:03 CST 2008
>>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
>>>> Mon Nov 17 12:42:08 CST 2008
>>>> Mon Nov 17 12:42:37 CST 2008
>>>>
>>>> Indeed, the sqlite approach seems to be viable only when grouping
>>>> many updates into a single transaction:
>>>> 1 row update/transaction    = ~9Hz
>>>> 10 row updates/transaction  = ~85Hz
>>>> 100 row updates/transaction = ~800Hz
>>>>
>>>> Igor
>>>>
>>>>
>>
>
_______________________________________________
rrd-developers mailing list
[email protected]
https://lists.oetiker.ch/cgi-bin/listinfo/rrd-developers
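[Editor's note] The transaction-batching effect Igor measured (one commit per
update vs. many updates per commit) can be sketched with Python's built-in
sqlite3 module. This is not Igor's actual test harness; the table and column
names are made up for illustration, and an in-memory DB stands in for his
4GB file:

```python
import sqlite3

# Hypothetical stand-in for one of Igor's 50k tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (ts INTEGER PRIMARY KEY, val REAL)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)",
                 [(i, 0.0) for i in range(100)])
conn.commit()

def update_batch(conn, rows_per_txn):
    """Group rows_per_txn UPDATEs into a single transaction.

    Without the explicit BEGIN/COMMIT pair, each UPDATE would be durably
    synced on its own, which is what limited Igor's test to ~9Hz.
    """
    cur = conn.cursor()
    cur.execute("BEGIN")
    for i in range(rows_per_txn):
        cur.execute("UPDATE t1 SET val = val + 1 WHERE ts = ?", (i,))
    conn.commit()  # one fsync for the whole batch

update_batch(conn, 100)
print(conn.execute("SELECT val FROM t1 WHERE ts = 0").fetchone()[0])  # prints 1.0
```

The same idea applies to the table-creation phase: wrapping the 50k CREATE
TABLE statements in one transaction should avoid the one-commit-per-table
cost behind the ~12h creation time.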
