Igor, these numbers are for the case where all files fit in the cache and you use the Perl bindings with mmap I/O ... there is a benchmark application in the example directory ...
cheers
tobi

Today Sfiligoi Igor wrote:

> Uhm... Tobi, I have a question about your "20k updates per second" claim.
>
> Do you get these numbers by opening and closing the file every time, or
> are these continuous updates on an open file?
>
> Igor
>
> Tobias Oetiker wrote:
> > Today Sfiligoi Igor wrote:
> >
> >> Hi Tobi.
> >>
> >> Do you want 10k-20k DB files, or 10k-20k tables inside a single DB file?
> >>
> >> Plus, how many rows per table do you want?
> >
> > I can only speak from the rrdtool side; there we see a dramatic
> > fall in performance once not everything is in cache anymore ...
> > (if you do not call rrdtool on the command line but use the
> > language bindings, you can do up to 20k updates per second as long
> > as the files are in memory/cached). Real-world applications normally
> > cannot keep them in memory, though; rrdtool tries to give the OS
> > hints on what to keep in order to optimize performance.
> >
> > As for the setup (one file vs. many), you should use whatever works
> > well for sqlite. I guess one of the advantages of SQLite would be
> > that more RRD structures can be held in a single file, which should
> > improve performance.
> >
> > cheers
> > tobi
> >
> >> Cheers,
> >> Igor
> >>
> >> Tobias Oetiker wrote:
> >>> Hi Igor,
> >>>
> >>> the constant size is good news ...
> >>>
> >>> for a realistic simulation you have to have 10-20k rrd file
> >>> equivalents ... since the caching effect is a rather important
> >>> part of the equation.
> >>>
> >>> cheers
> >>> tobi
> >>>
> >>> Today Sfiligoi Igor wrote:
> >>>
> >>>> kevin brintnall wrote:
> >>>>> On Mon, Nov 17, 2008 at 12:14:04PM -0600, Sfiligoi Igor wrote:
> >>>>>> Running a simple open/update/close loop, I get ~9 updates per second:
> >>>>> Igor, what kind of rates can you get with RRD update on the same
> >>>>> hardware?
> >>>>>
> >>>> I get ~350 updates per second using plain rrdtool update invocations:
> >>>>
> >>>> bash-3.2$ rrdtool create t1.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:100
> >>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
> >>>> Mon Nov 17 12:40:20 CST 2008
> >>>> Mon Nov 17 12:40:48 CST 2008
> >>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t1.rrd N:$RANDOM; done; date
> >>>> Mon Nov 17 12:41:00 CST 2008
> >>>> Mon Nov 17 12:41:28 CST 2008
> >>>>
> >>>> bash-3.2$ rrdtool create t2.rrd DS:val:GAUGE:300:0:200000 RRA:LAST:0.9:1:2000
> >>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
> >>>> Mon Nov 17 12:41:35 CST 2008
> >>>> Mon Nov 17 12:42:03 CST 2008
> >>>> bash-3.2$ date; for ((i=0; $i<10000; i++)); do rrdtool update t2.rrd N:$RANDOM; done; date
> >>>> Mon Nov 17 12:42:08 CST 2008
> >>>> Mon Nov 17 12:42:37 CST 2008
> >>>>
> >>>> Indeed, the sqlite approach seems to be viable only when grouping
> >>>> many updates into a single transaction:
> >>>>
> >>>> 1 row update/transaction = ~9Hz
> >>>> 10 row updates/transaction = ~85Hz
> >>>> 100 row updates/transaction = ~800Hz
> >>>>
> >>>> Igor

--
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch  [EMAIL PROTECTED]  ++41 62 775 9902 / sb: -9900

_______________________________________________
rrd-developers mailing list
[email protected]
https://lists.oetiker.ch/cgi-bin/listinfo/rrd-developers
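The transaction-batching effect Igor measures above (1 row/txn ≈ 9 Hz vs. 100 rows/txn ≈ 800 Hz) can be sketched with Python's sqlite3 module. This is a minimal illustration, not Igor's actual benchmark; the table name, schema, and row counts are made up for the example:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical sketch: committing many row inserts per transaction
# amortizes the per-commit fsync, so throughput rises with batch size.
# Schema and counts are illustrative only.

def timed_inserts(conn, n_rows, batch_size):
    """Insert n_rows rows, committing once every batch_size rows."""
    start = time.time()
    for i in range(0, n_rows, batch_size):
        with conn:  # one transaction (one commit/fsync) per batch
            for j in range(i, min(i + batch_size, n_rows)):
                conn.execute("INSERT INTO samples VALUES (?, ?)", (j, j * 2.0))
    return time.time() - start

db_path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE samples (ts INTEGER, val REAL)")
conn.commit()

t_single = timed_inserts(conn, 500, 1)     # one row per transaction
t_batched = timed_inserts(conn, 500, 100)  # 100 rows per transaction
print("1 row/txn: %.2fs, 100 rows/txn: %.2fs" % (t_single, t_batched))
```

On a disk-backed database with default journaling, the single-row-per-transaction loop pays one durable commit per row, which is why batching by 100 should land roughly in the range Igor reports.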
