On Wed, Mar 28, 2012 at 02:25:39PM +0000, Danny Piccirillo wrote:
> The case has been made on Phoronix for F-Trees: they make use of hard
> drive transfer speeds, not (relatively slow) access times; beat SSDs;
> and scale perfectly across multiple cores with hundreds of millions of
> entries.
On Wed, Mar 28, 2012 at 02:42:04PM -0400, Josef Bacik wrote:
> [...] but let's say O(log N/2), where N is the number of elements in
> the row. So in the situation I describe you are looking at having to
> do a minimum of 29 reads, one for each row.

Hmm.

Levels are powers of two and are either full or empty. So the total item
count tells you which levels are full or empty.
On Wed, Mar 28, 2012 at 03:50:07PM -0400, Zach Brown wrote:
> Levels are powers of two and are either full or empty. So the total
> item count tells you which levels are full or empty.

I imagine there is, but based on what little information they've shown I
don't see how it's a hands-down win against b-trees. If anything, we're
talking about having to solve really complex problems in order to get
any sort of good performance out of this thing.
Oh, absolutely. Tack on COW [...]
You are still going to have to have at least 29 levels to accommodate 1
billion objects, though they won't all be full (sorry, I missed the
must-be-full-or-empty bit). So it looks like we'll have to actually
search, what, 13 rows, right? So still more rows than a b-tree, and
again you are [...]
On Wed, Mar 28, 2012 at 10:44:03PM +0200, Niels de Carpentier wrote:
> I'd like to see how they do that. The fact is you are still going to
> get random seeks, since you have to binary search the blocks in an
> entire row, and there is no way you can read a several-thousand-block
> row into memory to search it, so once your rows get pretty big you are
> doing just as [...]