Re: [PATCHES] hash index improving v3

2008-09-04 Thread Zdenek Kotala

Alex Hunsaker napsal(a):

On Thu, Sep 4, 2008 at 9:48 PM, Alex Hunsaker <[EMAIL PROTECTED]> wrote:

Ok here are the results:

(data generated from the c program before)
select count(1) from test_hash;
   count
---
 10011

create index test_hash_num_idx on test_hash using hash (num);
CVS: Time: 698065.180 ms
patch: Time: 565982.099 ms

./pgbench -c 1 -t 10 -n -f bench.sql
bench.sql
select count(1) from test_hash where num = 110034304728896610;

CVS: tps = 7232.375875 (excluding connections establishing)
patch: tps = 7913.700150 (excluding connections establishing)

EXPLAIN ANALYZE select count(1) from test_hash where num = 110034304728896610;
                                       QUERY PLAN
-----------------------------------------------------------------------------------------
 Aggregate  (cost=29.24..29.25 rows=1 width=0) (actual time=0.066..0.067 rows=1 loops=1)
   ->  Index Scan using test_hash_num_idx on test_hash  (cost=0.00..29.24 rows=1 width=0) (actual time=0.051..0.054 rows=1 loops=1)
         Index Cond: (num = 110034304728896610::bigint)
 Total runtime: 0.153 ms


Oddly, the index sizes were the same (4096 MB); is that to be expected?


I think yes, because the hash key is a uint32. You save space only if you use
a hash index on, for example, a varchar attribute.


Zdenek

--
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] hash index improving v3

2008-09-04 Thread Alex Hunsaker
On Thu, Sep 4, 2008 at 9:48 PM, Alex Hunsaker <[EMAIL PROTECTED]> wrote:

Ok here are the results:

(data generated from the c program before)
select count(1) from test_hash;
   count
---
 10011

create index test_hash_num_idx on test_hash using hash (num);
CVS: Time: 698065.180 ms
patch: Time: 565982.099 ms

./pgbench -c 1 -t 10 -n -f bench.sql
bench.sql
select count(1) from test_hash where num = 110034304728896610;

CVS: tps = 7232.375875 (excluding connections establishing)
patch: tps = 7913.700150 (excluding connections establishing)

EXPLAIN ANALYZE select count(1) from test_hash where num = 110034304728896610;
                                       QUERY PLAN
-----------------------------------------------------------------------------------------
 Aggregate  (cost=29.24..29.25 rows=1 width=0) (actual time=0.066..0.067 rows=1 loops=1)
   ->  Index Scan using test_hash_num_idx on test_hash  (cost=0.00..29.24 rows=1 width=0) (actual time=0.051..0.054 rows=1 loops=1)
         Index Cond: (num = 110034304728896610::bigint)
 Total runtime: 0.153 ms


Oddly, the index sizes were the same (4096 MB); is that to be expected?

Here is the change I made to hashint8
--- a/src/backend/access/hash/hashfunc.c
+++ b/src/backend/access/hash/hashfunc.c
@@ -61,12 +61,14 @@ hashint8(PG_FUNCTION_ARGS)
 */
 #ifndef INT64_IS_BUSTED
int64   val = PG_GETARG_INT64(0);
-   uint32  lohalf = (uint32) val;
+/* uint32  lohalf = (uint32) val;
uint32  hihalf = (uint32) (val >> 32);

lohalf ^= (val >= 0) ? hihalf : ~hihalf;

return hash_uint32(lohalf);
+*/
+   return val % 4294967296;
 #else
/* here if we can't count on "x >> 32" to work sanely */
return hash_uint32((int32) PG_GETARG_INT64(0));



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Alex Hunsaker
On Thu, Sep 4, 2008 at 8:17 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> I guess one thing we could do for testing purposes is lobotomize one of
> the datatype-specific hash functions.  For instance, make int8_hash
> return the input mod 2^32, ignoring the upper bytes.  Then it'd be easy
> to compute different int8s that hash to the same thing.


Heh, OK, I'm slowly getting there... So we lobotomize hashint8 and then
time how long it takes to build an index on a table... something like:
create table test_hash(num int8);

(obviously on a 64 bit machine)
#include <stdio.h>
#include <limits.h>

int main(void)
{
unsigned long y = 0;
unsigned cnt = 0;

printf("insert into test_hash (num) values ");

//while(cnt != LONG_MAX/UINT_MAX)
while(cnt < 1000)
{
y += UINT_MAX;

printf("(%lu), ", y);

cnt++;
}

printf("(0);\n");
return 0;
}

./a.out | psql

pgbench -c 1 -t1000 -n -f test.sql

test.sql:
create index test_hash_num_idx on test_hash using hash (num);
drop index test_hash_num_idx;

For both pre- and post-patch, just to make sure the post-patch build is not
worse than the pre-patch one?

If I'm still way off and it's not too much trouble, want to give me a test
case to run =)?

Or maybe, because hash collisions should be fairly rare, it's not
something to really worry about?



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
"Alex Hunsaker" <[EMAIL PROTECTED]> writes:
> On Thu, Sep 4, 2008 at 7:13 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> * check that the queries actually use the indexes (not sure that the
>> proposed switch settings ensure this, not to mention you didn't create
>> the indexes)

> Well I was assuming I could just test the speed of a hash join...

Uh, no, hash joins have nearly zip to do with hash indexes.  They rely
on the same per-datatype support functions but that's the end of the
commonality.

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
"Alex Hunsaker" <[EMAIL PROTECTED]> writes:
> On Thu, Sep 4, 2008 at 7:45 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> So what we need for testing is a few different key values that hash to
>> the same code.  Not sure about an easy way to find such.

> Hrm, well I have not really looked at the hash algorithm but I assume
> we could just reduce the number of buckets?

No, we need fully equal hash keys, else the code won't visit the heap.

I guess one thing we could do for testing purposes is lobotomize one of
the datatype-specific hash functions.  For instance, make int8_hash
return the input mod 2^32, ignoring the upper bytes.  Then it'd be easy
to compute different int8s that hash to the same thing.

regards, tom lane



Re: [PATCHES] [HACKERS] TODO item: Implement Boyer-Moore searching (First time hacker)

2008-09-04 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> After reading the wikipedia article on Boyer-Moore search algorithm, it 
> looks to me like this patch actually implements the simpler 
> Boyer-Moore-Horspool algorithm that only uses one lookup table. That's 
> probably fine, as it ought to be faster on small needles and haystacks 
> because it requires less effort to build the lookup tables, even though 
> the worst-case performance is worse. It should still be faster than what 
> we have now.

Hmm.  B-M-H has worst case search speed O(M*N) (where M = length of
pattern, N = length of search string); whereas full B-M is O(N).
Maybe we should build the second table when M is large?  Although
realistically that is probably gilding the lily, since frankly there
haven't been many real-world complaints about the speed of these
functions anyway ...

> The skip table really should be constructed only once in 
> text_position_start and stored in TextPositionState. That would make a 
> big difference to the performance of those functions that call 
> text_position_next repeatedly: replace_text, split_text and text_to_array.

+1.  The heuristic about how big a skip table to make may need some
adjustment as well, since it seems to be considering only a single
search.

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Alex Hunsaker
On Thu, Sep 4, 2008 at 7:45 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> So what we need for testing is a few different key values that hash to
> the same code.  Not sure about an easy way to find such.

Hrm, well I have not really looked at the hash algorithm but I assume
we could just reduce the number of buckets?



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Alex Hunsaker
On Thu, Sep 4, 2008 at 7:13 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Alex Hunsaker" <[EMAIL PROTECTED]> writes:
>> Ok let me know if this is too naive of an approach or not hitting the
>> right cases you want tested.
>
> You have the unique-versus-not dimension, but I'm also wondering about
> narrow vs wide index keys (say about 8 bytes vs 50-100 or so).  In the
> former case we're not saving any index space by storing only the hash
> code, so these could be expected to have different performance
> behaviors.

Arg, yes... I just read the last part of your mail in this thread.  I
think it was the one on -hackers that talked about narrow vs wide...
so I figured I would just try to do what the thread where you posted
the patch talked about, namely the below:

>So my thinking right now is that we should just test this patch as-is.
>If it doesn't show really horrid performance when there are lots of
>hash key collisions, we should forget the store-both-things idea and
>just go with this.

So I thought, let's try to generate lots of hash collisions... obviously
though, using the same key won't do that... Not sure what I was thinking.

> As for specifics of the suggested scripts:
>
> * might be better to do select count(*) not select 1, so that client
> communication is minimized

Yar.

> * check that the queries actually use the indexes (not sure that the
> proposed switch settings ensure this, not to mention you didn't create
> the indexes)

Well I was assuming I could just test the speed of a hash join...

> * make sure the pgbench transaction counts are large enough to ensure
> significant runtime
> * the specific table sizes suggested are surely not large enough

Ok



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
I wrote:
> You have the unique-versus-not dimension,

On second thought, actually not.  What we want to look at is the penalty
for false matches due to *distinct* key values that happen to have the
same hash codes.  Your test case for all-the-same is using all the same
key values, which means it'll hit the heap a lot, but none of those will
be wasted trips.

So what we need for testing is a few different key values that hash to
the same code.  Not sure about an easy way to find such.

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
"Alex Hunsaker" <[EMAIL PROTECTED]> writes:
> Ok let me know if this is too naive of an approach or not hitting the
> right cases you want tested.

You have the unique-versus-not dimension, but I'm also wondering about
narrow vs wide index keys (say about 8 bytes vs 50-100 or so).  In the
former case we're not saving any index space by storing only the hash
code, so these could be expected to have different performance
behaviors.

As for specifics of the suggested scripts:

* might be better to do select count(*) not select 1, so that client
communication is minimized

* check that the queries actually use the indexes (not sure that the
proposed switch settings ensure this, not to mention you didn't create
the indexes)

* make sure the pgbench transaction counts are large enough to ensure
significant runtime

* the specific table sizes suggested are surely not large enough

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Alex Hunsaker
On Thu, Sep 4, 2008 at 5:11 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> So my thinking right now is that we should just test this patch as-is.
> If it doesn't show really horrid performance when there are lots of
> hash key collisions, we should forget the store-both-things idea and
> just go with this.

Ok let me know if this is too naive of an approach or not hitting the
right cases you want tested.

create table hash_a (same text, uniq text);
insert into hash_a (same, uniq)  select 'same', n from
generate_series(0, 5000) as n;

create table hash_b (uniq text);
insert into hash_b (uniq)  select n  from generate_series(5000, 1) as n;

pgbench -c 1 -t 100 -n -f of the following

hash_same.sql:
set enable_seqscan to off;
set enable_mergejoin to off;
select 1 from hash_a as a inner join hash_a as aa on aa.same = a.same;

hash_uniq.sql:
set enable_seqscan to off;
set enable_mergejoin to off;
select 1 from hash_a as a inner join hash_b as b on b.uniq = a.uniq;



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
Here is an updated patch incorporating Zdenek's review, my own
observation that we should make the index tupledesc tell the truth,
and some other fixes/improvements such as making backwards scans
work as expected.

The main thing lacking before this could be committed, from a code
standpoint, is a cleaner solution to the problem of adjusting the
index tupledesc (see the ugly hack in catalog/index.c).  However,
that complaint is irrelevant for functionality or performance testing,
so I'm throwing this back out there in hopes someone will do some...

I thought a little bit about how to extend this to store both hashcode
and original index key, and realized that the desire to have a truthful
index tupledesc makes that a *whole* lot harder.  The planner, and
really even the pg_index catalog representation, assume that the visible
columns of an index are one-for-one with the index keys.  We can slide
through with the attached patch because this is still true ---
effectively we're just using a "storage type" different from the indexed
column's type for hash indexes, as already works for GIST and GIN.
But having two visible columns would bollix up quite a lot of stuff.
So I think if we actually want to do that, we'd need to revert to the
concept of cheating on the tupledesc.  Aside from the various uglinesses
that I was able to remove from the original patch by not having that,
I'm still quite concerned that we'd find something else wrong with
doing that, further down the road.

So my thinking right now is that we should just test this patch as-is.
If it doesn't show really horrid performance when there are lots of
hash key collisions, we should forget the store-both-things idea and
just go with this.

regards, tom lane




binNKAVnsTxLq.bin
Description: hash-v4.patch.gz



Re: [PATCHES] [HACKERS] TODO item: Implement Boyer-Moore searching (First time hacker)

2008-09-04 Thread Heikki Linnakangas
After reading the wikipedia article on Boyer-Moore search algorithm, it 
looks to me like this patch actually implements the simpler 
Boyer-Moore-Horspool algorithm that only uses one lookup table. That's 
probably fine, as it ought to be faster on small needles and haystacks 
because it requires less effort to build the lookup tables, even though 
the worst-case performance is worse. It should still be faster than what 
we have now.


The skip table really should be constructed only once in 
text_position_start and stored in TextPositionState. That would make a 
big difference to the performance of those functions that call 
text_position_next repeatedly: replace_text, split_text and text_to_array.


David Rowley wrote:

I've done some more revisions to the patch. This has mostly just involved
tuning the skip table size based on the size of the search string, which
meant lots of benchmarks with different strings and calculating the best
table size to use. The reason for this is to maintain fast searches for
smaller strings: the overhead of initialising a 256 element array would
probably outweigh the cost of the search if this were not done. The size of
the skip table increases with longer strings, or rather the size that is
utilised does.

Performance:
For smaller searches performance of the patch and existing version are very
similar. The patched version starts to out perform the existing version when
the needle and haystack become larger.

The patch wins hands down on searches that lead the existing function
into dead ends, for example:

SELECT STRPOS('A AA AAA  A AA AAA','AAA');

When searching for very small strings, say just a single character in a
sentence, the existing function marginally beats the patched version.

Outside of Postgres I've done benchmarks where I've searched for every
substring of the search string within the search string itself, like:

test | t  |
test | te |
test | tes|
test | test   |
test | e  |
test | es |
test | est|
test | s  |
test | st |
test | t  |


I felt this was fair for both versions. The patched version beat the
unpatched version. The test I carried out was a string of 934 characters.



--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> pgsql/src/backend/access/hash/hashutil.c
> 

>  It would be better to remove the #define from hash.h and set it up
> there directly.

Actually, I don't like this aspect of the patch one bit: it means that
the system catalogs are lying about what is stored in the index, which
seems likely to break something somewhere, either now or down the road.
I think the correct way to handle this is to make the pg_attribute entries
(and hence the index's relation descriptor) accurately match what is
stored in the index.  For testing purposes I propose this crude hack
in catalog/index.c's ConstructTupleDescriptor():

*** src/backend/catalog/index.c.origMon Aug 25 18:42:32 2008
--- src/backend/catalog/index.c Thu Sep  4 16:20:12 2008
***
*** 133,138 
--- 133,139 
Form_pg_attribute to = indexTupDesc->attrs[i];
HeapTuple   tuple;
Form_pg_type typeTup;
+   Form_pg_opclass opclassTup;
Oid keyType;
  
if (atnum != 0)
***
*** 240,246 
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for opclass %u",
 classObjectId[i]);
!   keyType = ((Form_pg_opclass) GETSTRUCT(tuple))->opckeytype;
ReleaseSysCache(tuple);
  
if (OidIsValid(keyType) && keyType != to->atttypid)
--- 241,252 
if (!HeapTupleIsValid(tuple))
elog(ERROR, "cache lookup failed for opclass %u",
 classObjectId[i]);
!   opclassTup = (Form_pg_opclass) GETSTRUCT(tuple);
!   /* HACK: make hash always use int4 as storage (really it's uint32) */
!   if (opclassTup->opcmethod == HASH_AM_OID)
!   keyType = INT4OID;
!   else
!   keyType = opclassTup->opckeytype;
ReleaseSysCache(tuple);
  
if (OidIsValid(keyType) && keyType != to->atttypid)


Assuming the patch gets accepted, we should devise some cleaner way
of letting index AMs adjust their indexes' reldescs; maybe declare a
new entry point column in pg_am that lets the AM modify the tupledesc
constructed by this function before it gets used to create the index.
But that is irrelevant to performance testing, so I'm not going to do
it right now.

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Tom Lane
Zdenek Kotala <[EMAIL PROTECTED]> writes:
> I performed code review and see my comments.

Thanks for the comments.  I've incorporated all of these into an updated
patch that I'm preparing, except for

>  Why not define new datatype for example HashKey instead of uint32?

This seems like a good idea, but I think we should do it as a separate,
cosmetic-cleanup patch.  It'll touch a lot of parts of access/hash/ that
the current patch doesn't need to change, and thus complicate reviewing.

regards, tom lane



Re: [PATCHES] hash index improving v3

2008-09-04 Thread Zdenek Kotala

I performed a code review; see my comments below.


pgsql/src/backend/access/hash/hashpage.c


Use sizeof() or something better than the hard-coded 4.



pgsql/src/backend/access/hash/hashpage.c


Remove the new empty line.



pgsql/src/backend/access/hash/hashutil.c


It would be better to remove the #define from hash.h and set it up
there directly.




pgsql/src/backend/access/hash/hashutil.c


Why not return uint32 directly?



pgsql/src/backend/access/hash/hashutil.c


Recast to the correct return type.



pgsql/src/backend/access/hash/hashutil.c


What about null values?



pgsql/src/backend/access/hash/hashutil.c


I'm not sure the modification of values is safe. Please recheck.



pgsql/src/backend/access/hash/hashutil.c


The return value is not very clear. I would prefer to return InvalidOffset
when no record is found. However, it seems you also use the result for
PageAddItem to put the item in the correct ordered position. A better
explanation would help readers understand how it works.




pgsql/src/backend/access/hash/hashutil.c


It could return FirstOffsetNumber in the case when nothing interesting
is on the page.




pgsql/src/include/access/hash.h


Why not define new datatype for example HashKey instead of uint32?



pgsql/src/include/access/hash.h


This is not a good place for it. See my other comment.


--
You also forgot to bump the hash index version in the metapage.

Zdenek



