Alex, just making sure... you're running the optimized build, right?  The
debug build (which I believe is the default) is way slower than the
optimized one.
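
If you're building with CMake, forcing the optimized build should just be
a matter of something like this (standard CMake convention; the source
path is whatever your checkout is):

  # configure a release (optimized) build instead of the default debug one
  cmake -DCMAKE_BUILD_TYPE=Release ~/src/hypertable
  make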

Josh


On Mon, Oct 20, 2008 at 6:48 PM, Alex <[EMAIL PROTECTED]> wrote:

>
> Luke,
>
> I tried setting the IN_MEMORY access group option for the RandomTest table.
>
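> For reference, the schema change was roughly the following (HQL from
> memory, and the column name is just a placeholder, so double-check it
> against the HQL docs):
>
>   CREATE TABLE RandomTest (
>     Field,
>     ACCESS GROUP default IN_MEMORY ( Field )
>   );
>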
> The result for the random read test improved by ~5x:
>
> random_read_test 1000000000
>
> 0%   10   20   30   40   50   60   70   80   90   100%
> |----|----|----|----|----|----|----|----|----|----|
> ***************************************************
>  Elapsed time:  1971.06 s
>  Total scanned:  1000000
>    Throughput:  513430.49 bytes/s
>    Throughput:  507.34 scanned cells/s
>
> However, it is still ~2x worse than the number from the BigTable paper
> for non-memory tables and ~20x worse than the memtable number.
>
> Thanks,
> Alex
>
> On Oct 19, 2:18 pm, Luke <[EMAIL PROTECTED]> wrote:
> > The random_read_test used to score 4k qps on a comparable benchmark
> > (vs. 1.2k qps in the BigTable paper; note that the 10k qps number is
> > for a memtable, which is very different from a regular table. In
> > Hypertable you can use the IN_MEMORY access group option to get
> > memtable behavior.) The regular table scanner needs to do a merge scan
> > over the cell cache and the cell stores, so it is much more expensive
> > than the memtable scanner, which just scans the cell cache, regardless
> > of the table's size.
> >
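> > To make that cost difference concrete, here is a rough illustrative
> > sketch of the two lookup paths (plain C++, not the actual Hypertable
> > scanner classes; the real scanner does a heap merge across all sources,
> > and each cell store probe can be a disk read):
> >
> >   #include <map>
> >   #include <string>
> >   #include <vector>
> >
> >   using Cells = std::map<std::string, std::string>;  // key -> value
> >
> >   // Memtable-style lookup: one in-memory search of the cell cache.
> >   const std::string *cache_lookup(const Cells &c, const std::string &k) {
> >     auto it = c.find(k);
> >     return it == c.end() ? nullptr : &it->second;
> >   }
> >
> >   // Regular-table lookup: consult the cell cache *and* each cell store
> >   // (newest first); every store probe is potentially a disk read.
> >   const std::string *merge_lookup(const Cells &cache,
> >                                   const std::vector<Cells> &stores,
> >                                   const std::string &k) {
> >     if (const std::string *v = cache_lookup(cache, k))
> >       return v;  // newest data wins
> >     for (const Cells &store : stores)
> >       if (const std::string *v = cache_lookup(store, k))
> >         return v;
> >     return nullptr;  // not found in the cache or any store
> >   }
> >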
> > Before he went on vacation, Doug pushed out .11, which contains major
> > changes to the way the cell cache and compaction work. There might be
> > a performance regression in recent releases. Thanks for the note;
> > we'll look into it.
> >
> > --Luke
> >
> > On Oct 16, 9:59 pm, Alex <[EMAIL PROTECTED]> wrote:
> >
> > > Hi All,
> >
> > > Could somebody explain/describe the CellCache allocation/eviction policy?
> >
> > > After running random read/write tests, I came to the conclusion that
> > > CellCache operation is very different from what is described in the
> > > Google BigTable paper; i.e., the CellCache works as a write buffer
> > > rather than a cache. The CellCache seems to help a lot with writes,
> > > but it doesn't help reads.
> >
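> > > In other words, my mental model is something like the following
> > > (illustrative C++ only, not actual Hypertable code):
> >
> > >   #include <map>
> > >   #include <string>
> >
> > >   using CellCache = std::map<std::string, std::string>;
> >
> > >   // Write path: an insert only touches the in-memory CellCache; the
> > >   // data reaches the on-disk cell stores later, during compaction.
> > >   void insert(CellCache &cache, const std::string &key,
> > >               const std::string &value) {
> > >     cache[key] = value;  // memory only, so random writes are fast
> > >   }
> >
> > >   // Read path: once a compaction has flushed the CellCache, a random
> > >   // read misses it and falls through to the on-disk cell stores,
> > >   // i.e. there is no read-side caching like BigTable's Scan Cache.
> >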
> > > Here are the results:
> >
> > > random_write_test 100000000
> >
> > >   Elapsed time:  8.16 s
> > >  Total inserts:  100000
> > >     Throughput:  12403559.87 bytes/s
> > >     Throughput:  12256.48 inserts/s
> >
> > > random_read_test 100000000
> >
> > >   Elapsed time:  1038.47 s
> > >  Total scanned:  100000
> > >     Throughput:  97451.43 bytes/s
> > >     Throughput:  96.30 scanned cells/s
> >
> > > Random read speed is ~100x slower than the result in the Google
> > > BigTable paper for the random read test that fits in memory. In this
> > > case the data set size should be ~100MB and should comfortably fit
> > > in DRAM (8GB).
> >
> > > Also, tcmalloc heap profiling shows that memory usage actually
> > > decreases to ~50MB during the random read test, while it is >700MB
> > > during the random write test (although top shows an increase instead).
> >
> > > I apologize if I am missing something very basic; I have very little
> > > experience in this area.
> >
> > > Thanks,
> > > Alex
> >
>
