Re: [HACKERS] Cube Index Size

2011-06-04 Thread Nick Raj
2011/6/2 Teodor Sigaev teo...@sigaev.ru

  Can we figure out some information about the index, i.e., what is the
 height of the index tree, and how many values are placed in one leaf node
 and one non-leaf node?


 http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/


To improve space utilization when a node is split, we have to assign the
entries to two groups. Once one group reaches some threshold (m), the
remaining entries are inserted into the other group.

Can you suggest a way to choose 'm' (because cube is stored as NDBOX, which
is variable-length), or provide some guidance with the code?
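
For concreteness, here is a minimal standalone sketch of the minimum-fill rule
in question. It is illustrative only, not contrib/cube code: the 0.3 ratio is
an assumed value, and the simple byte-balancing choice stands in for a real
penalty-based choice. Since NDBOX is variable-length, the threshold 'm' is
expressed in bytes rather than as an entry count.

#include <stdio.h>
#include <stddef.h>

#define LIMIT_RATIO 0.3     /* assumed minimum fill: 30% of total bytes */

static void
distribute(const size_t *sizes, int n, int *group)
{
    size_t total = 0, rest, a0 = 0, a1 = 0;
    int i;

    for (i = 0; i < n; i++)
        total += sizes[i];
    rest = total;

    for (i = 0; i < n; i++)
    {
        /*
         * If a group plus all still-unassigned bytes can at best just
         * reach the minimum fill, every remaining entry must go to that
         * group; this is Guttman's guarantee, restated in bytes.
         */
        if (a0 + rest <= (size_t) (LIMIT_RATIO * total))
            group[i] = 0;
        else if (a1 + rest <= (size_t) (LIMIT_RATIO * total))
            group[i] = 1;
        else
            group[i] = (a0 <= a1) ? 0 : 1;  /* stand-in for the penalty */

        if (group[i] == 0)
            a0 += sizes[i];
        else
            a1 += sizes[i];
        rest -= sizes[i];
    }
}

int
main(void)
{
    size_t sizes[] = {40, 16, 96, 24, 64, 16, 32, 80};
    int group[8], i;

    distribute(sizes, 8, group);
    for (i = 0; i < 8; i++)
        printf("entry %d (%zu bytes) -> group %d\n", i, sizes[i], group[i]);
    return 0;
}

With a byte-based 'm', the guarantee holds no matter how large the individual
NDBOX values are.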

Thanks


 --
 Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Cube Index Size

2011-06-02 Thread Teodor Sigaev

Can we figure out some information about the index, i.e., what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?


http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/
--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Cube Index Size

2011-06-01 Thread Heikki Linnakangas

On 01.06.2011 10:48, Nick Raj wrote:

On Tue, May 31, 2011 at 12:46 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

If not, please post a self-contained test case to create and populate the
table, so that others can easily try to reproduce it.



   I have attached a .sql file containing 20,000 tuples.
   Table creation - create table cubtest(c cube);
   Index creation - create index t on cubtest using gist(c);


Ok, I can reproduce the issue with that. The index is only 4MB in size 
when I populate it with random data (vs. 15 MB with your data). The 
command I used is:


INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1, 20000);


My guess is that the picksplit algorithm performs poorly with that data. 
Unfortunately, I have no idea how to improve that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Cube Index Size

2011-06-01 Thread Alexander Korotkov
On Wed, Jun 1, 2011 at 3:37 PM, Heikki Linnakangas 
heikki.linnakan...@enterprisedb.com wrote:

 My guess is that the picksplit algorithm performs poorly with that data.
 Unfortunately, I have no idea how to improve that.


The current cube picksplit function has no storage utilization guarantees,
while Guttman's original picksplit does (if one group's size reaches some
threshold, then all remaining entries go to the other group). Also, the
current picksplit is a mix of Guttman's linear and quadratic algorithms: it
picks seeds quadratically, but distributes entries linearly.
I see the following ways of solving the picksplit problem for cube:
1) Add storage utilization guarantees to the current picksplit. This may
increase overlap, but should decrease the index size.
2) Add storage utilization guarantees to the current picksplit and replace
the entry distribution algorithm with the quadratic one (see the sketch
below). Picksplit will take more time, but should give a more stable and
predictable result.
3) I have experimented with my own picksplit algorithm, which showed pretty
good results in the tests I've run, but the current implementation is dirty
and requires more testing.
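
To illustrate option 2, here is a standalone sketch of Guttman's quadratic
distribution. It is illustrative only: one-dimensional intervals stand in for
NDBOX, the two seeds are assumed to be already chosen, and the storage
utilization guarantee is omitted.

#include <stdio.h>
#include <stdbool.h>

typedef struct { double lo, hi; } Ival;

/* enlargement of a group's bounding interval when entry e is added */
static double
penalty(Ival g, Ival e)
{
    double lo = e.lo < g.lo ? e.lo : g.lo;
    double hi = e.hi > g.hi ? e.hi : g.hi;

    return (hi - lo) - (g.hi - g.lo);
}

static void
add_to(Ival *g, Ival e)
{
    if (e.lo < g->lo) g->lo = e.lo;
    if (e.hi > g->hi) g->hi = e.hi;
}

int
main(void)
{
    Ival entries[] = {{0, 1}, {9, 10}, {0.2, 0.7}, {8.5, 9.5}, {4, 5}, {0.1, 0.4}};
    int n = 6, assigned = 2, group[6] = {0, 1}, i;
    bool done[6] = {true, true};        /* entries 0 and 1 act as the seeds */
    Ival g0 = entries[0], g1 = entries[1];

    while (assigned < n)
    {
        int best = -1;
        double bestdiff = -1.0;

        /*
         * The quadratic part: every round rescans all unassigned entries
         * and picks the one whose cost difference between the two groups
         * is largest, i.e. the most "decided" entry.
         */
        for (i = 0; i < n; i++)
        {
            double d;

            if (done[i])
                continue;
            d = penalty(g0, entries[i]) - penalty(g1, entries[i]);
            if (d < 0)
                d = -d;
            if (d > bestdiff)
            {
                bestdiff = d;
                best = i;
            }
        }
        if (penalty(g0, entries[best]) <= penalty(g1, entries[best]))
        {
            group[best] = 0;
            add_to(&g0, entries[best]);
        }
        else
        {
            group[best] = 1;
            add_to(&g1, entries[best]);
        }
        done[best] = true;
        assigned++;
    }

    for (i = 0; i < n; i++)
        printf("[%g,%g] -> group %d\n", entries[i].lo, entries[i].hi, group[i]);
    return 0;
}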

 --
With best regards,
Alexander Korotkov.


Re: [HACKERS] Cube Index Size

2011-06-01 Thread Teodor Sigaev

Ok, I can reproduce the issue with that. The index is only 4MB in size
when I populate it with random data (vs. 15 MB with your data). The
command I used is:

INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1, 20000);

My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.


One idea is to add sorting of the Datums to be split by insertion cost; this
is implemented in the intarray/tsearch GiST indexes (see the sketch below).


I'm not sure that it will help here, but our research on Guttman's picksplit
algorithm showed significant improvements.
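
A minimal standalone sketch of that sorting-based distribution (illustrative
only, not the actual intarray/tsearch code; one-dimensional intervals stand in
for the real keys):

#include <stdio.h>
#include <stdlib.h>

typedef struct { double lo, hi; } Ival;
typedef struct { Ival e; double diff; } Item;

/* enlargement of a group's bounding interval when entry e is added */
static double
penalty(Ival g, Ival e)
{
    double lo = e.lo < g.lo ? e.lo : g.lo;
    double hi = e.hi > g.hi ? e.hi : g.hi;

    return (hi - lo) - (g.hi - g.lo);
}

/* sort descending by |cost difference| so the most "decided" entries are
 * placed while the groups still resemble the original seeds */
static int
cmp_item(const void *a, const void *b)
{
    double da = ((const Item *) a)->diff;
    double db = ((const Item *) b)->diff;

    return (da < db) - (da > db);
}

int
main(void)
{
    Ival g0 = {0, 1}, g1 = {9, 10};     /* the two seeds, already chosen */
    Item items[] = {{{0.2, 0.7}, 0}, {{8.5, 9.5}, 0}, {{4, 5}, 0}, {{0.1, 0.4}, 0}};
    int n = 4, i;

    /* each entry's insertion cost difference is computed once, against the
     * seeds only, instead of rescanning all entries on every round */
    for (i = 0; i < n; i++)
    {
        double d = penalty(g0, items[i].e) - penalty(g1, items[i].e);

        items[i].diff = d < 0 ? -d : d;
    }
    qsort(items, n, sizeof(Item), cmp_item);

    for (i = 0; i < n; i++)
    {
        Ival *g = penalty(g0, items[i].e) <= penalty(g1, items[i].e) ? &g0 : &g1;

        if (items[i].e.lo < g->lo) g->lo = items[i].e.lo;
        if (items[i].e.hi > g->hi) g->hi = items[i].e.hi;
        printf("[%g,%g] -> group %d\n", items[i].e.lo, items[i].e.hi,
               g == &g0 ? 0 : 1);
    }
    return 0;
}

This keeps the distribution at O(n log n) rather than the quadratic
algorithm's O(n^2) rescans, which is the compromise being discussed.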

--
Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW: http://www.sigaev.ru/



Re: [HACKERS] Cube Index Size

2011-06-01 Thread Alexander Korotkov
2011/6/1 Teodor Sigaev teo...@sigaev.ru

 One idea is to add sorting of the Datums to be split by insertion cost;
 this is implemented in the intarray/tsearch GiST indexes.


Yes, it's a good compromise between the linear and quadratic entry
distribution algorithms. In the quadratic algorithm, the entry with the
maximal difference of insertion cost is inserted at each step. The quadratic
algorithm runs more slowly than the sorting one, but in my tests it showed
slightly better results.

 --
With best regards,
Alexander Korotkov.


Re: [HACKERS] Cube Index Size

2011-06-01 Thread Nick Raj
2011/6/1 Alexander Korotkov aekorot...@gmail.com

 2011/6/1 Teodor Sigaev teo...@sigaev.ru

 One idea is to add sorting of the Datums to be split by insertion cost;
 this is implemented in the intarray/tsearch GiST indexes.


 Yes, it's a good compromise between the linear and quadratic entry
 distribution algorithms. In the quadratic algorithm, the entry with the
 maximal difference of insertion cost is inserted at each step. The quadratic
 algorithm runs more slowly than the sorting one, but in my tests it showed
 slightly better results.


Can we figure out some information about the index, i.e., what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?

Regards,
Nick


Re: [HACKERS] Cube Index Size

2011-05-31 Thread Heikki Linnakangas

On 30.05.2011 21:51, Nick Raj wrote:

Hi,

The cube code is provided in the postgres contrib folder. It uses the NDBOX
structure. On creating an index, its size increases at a high rate.

The behaviour after inserting some tuples and creating an index is shown below.

1. When there is only one tuple
select pg_size_pretty(pg_relation_size('cubtest'));   -- table size without index
 pg_size_pretty
----------------
 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); -- table size with index
 pg_size_pretty
----------------
 16 kB
(1 row)

i.e. the index size is nearly 8 kB

2. When there are 20,000 tuples

Table size without index - 1.6 MB
Table size with index - 11 MB
i.e. the index size is nearly 9.4 MB

3. When there are 5 lakh tuples

Table size without index - 40 MB
Table size with index - 2117 MB
i.e. the index size is nearly 2077 MB ~ 2 GB
It takes nearly 20-25 min to create the index for 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?


Which version of PostgreSQL are you using? I wonder if this could be due 
to the bug in cube's picksplit algorithm that was fixed a while ago:


http://archives.postgresql.org/message-id/aanlktimc8w6guhpwjewdjqa6wgovh-7qg9ar4pem2...@mail.gmail.com

If not, please post a self-contained test case to create and populate 
the table, so that others can easily try to reproduce it.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Cube Index Size

2011-05-31 Thread Nicolas Barbier
2011/5/30, Nick Raj nickrajj...@gmail.com:

 3. When tuples are 5 lakh

For the benefit of the others: 5 lakh seems to mean 500,000.

URL:http://en.wikipedia.org/wiki/Lakh

Nicolas

-- 
A. Because it breaks the logical sequence of discussion.
Q. Why is top posting bad?



[HACKERS] Cube Index Size

2011-05-30 Thread Nick Raj
Hi,

The cube code is provided in the postgres contrib folder. It uses the NDBOX
structure. On creating an index, its size increases at a high rate.

The behaviour after inserting some tuples and creating an index is shown below.

1. When there is only one tuple
select pg_size_pretty(pg_relation_size('cubtest'));   -- table size without index
 pg_size_pretty
----------------
 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest')); -- table size with index
 pg_size_pretty
----------------
 16 kB
(1 row)

i.e. the index size is nearly 8 kB

2. When there are 20,000 tuples

Table size without index - 1.6 MB
Table size with index - 11 MB
i.e. the index size is nearly 9.4 MB

3. When there are 5 lakh tuples

Table size without index - 40 MB
Table size with index - 2117 MB
i.e. the index size is nearly 2077 MB ~ 2 GB
It takes nearly 20-25 min to create the index for 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Thanks
Nick

