[HACKERS] Different execution time for same plan

2011-06-06 Thread Nick Raj
Hi,

I am using PostgreSQL 8.4.6. I have built an index on my data type and it is
working fine; the output is correct.

When I execute a query for the first time it takes quite a long time, but a
second execution of the same query takes much less time, even though the
execution plan is the same.

This is my first execution of the query:

explain analyze select * from vehicle_st1 where '(116.3,39.3,2007/06/11
11:11:11),(117.2,39.8,2007/09/13 11:11:11)' @ stpoint;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using st1 on vehicle_st1  (cost=0.00..8226.36 rows=2096
 width=66) (actual time=65.962..1587.627 rows=9069 loops=1)
   Index Cond: ('(116.30,39.30,2007-06-11
 11:11:11+05:30),(117.20,39.80,2007-09-13 11:11:11+05:30)'::ndpoint
 @ stpoint)
 Total runtime: 1594.446 ms
(3 rows)

The second time:

explain analyze select * from vehicle_st1 where '(116.3,39.3,2007/06/11
11:11:11),(117.2,39.8,2007/09/13 11:11:11)' @ stpoint;

                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using st1 on vehicle_st1  (cost=0.00..8226.36 rows=2096
 width=66) (actual time=0.156..14.316 rows=9069 loops=1)
   Index Cond: ('(116.30,39.30,2007-06-11
 11:11:11+05:30),(117.20,39.80,2007-09-13 11:11:11+05:30)'::ndpoint
 @ stpoint)
 Total runtime: 19.525 ms
(3 rows)

Third time:
it gives 17.148 ms.

Fourth time:
it gives 25.102 ms.

My postgresql.conf has the original settings; I haven't modified anything:

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

shared_buffers = 28MB			# min 128kB
					# (change requires restart)
#temp_buffers = 8MB			# min 800kB
#max_prepared_transactions = 0		# zero disables the feature
					# (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 1MB				# min 64kB
#maintenance_work_mem = 16MB		# min 1MB
#max_stack_depth = 2MB			# min 100kB



Why does the same plan give different execution times? (The reason may be
that the data gets buffered/cached for the second execution.) Why is the
difference so large?
If I want to know the representative execution time of this query, which
option is correct?
1. The first run (1594 ms), when the application has just started and all
buffers are empty. But in a practical situation they are never fully empty.
2. Take the stable execution time (19-21 ms).
3. Average the four execution times.

Which option is right?

Thanks
Nick


Re: [HACKERS] Cube Index Size

2011-06-04 Thread Nick Raj
2011/6/2 Teodor Sigaev teo...@sigaev.ru

  Can we figure out some information about the index, i.e. what is the
 height of the index tree, and how many values are placed in one leaf node
 and one non-leaf node?


 http://www.sigaev.ru/cvsweb/cvsweb.cgi/gevel/


For improving space utilization: when a node is split, we have to assign the
entries to two groups. Once one group reaches some threshold (m), the
remaining entries are inserted into the other group.

Can you suggest a way to choose 'm' (because a cube is stored as an NDBOX,
which has variable length), or point me to some guide or code?

Thanks


 --
 Teodor Sigaev   E-mail: teo...@sigaev.ru
   WWW:
 http://www.sigaev.ru/



Re: [HACKERS] Cube Index Size

2011-06-01 Thread Nick Raj
2011/6/1 Alexander Korotkov aekorot...@gmail.com

 2011/6/1 Teodor Sigaev teo...@sigaev.ru

 One of the ideas is to add sorting of the Datums to be split by insertion
 cost. It's implemented in the intarray/tsearch GiST indexes.


 Yes, it's a good compromise between the linear and quadratic entry
 distribution algorithms. In the quadratic algorithm, the entry with the
 maximal difference of insertion cost is inserted each time. The quadratic
 algorithm runs more slowly than the sorting one, but in my tests it shows
 slightly better results.


 Can we figure out some information about the index, i.e. what is the height
of the index tree, and how many values are placed in one leaf node and one
non-leaf node?

Regards,
Nick


[HACKERS] Cube Index Size

2011-05-30 Thread Nick Raj
Hi,

The cube code is provided in the postgres contrib folder and uses the NDBOX
structure. On creating an index, its size increases at a high rate.

The behaviour after inserting some tuples and creating the index is shown below.

1. When there is only one tuple:

select pg_size_pretty(pg_relation_size('cubtest'));   -- table size without index
 pg_size_pretty
----------------
 8192 bytes
(1 row)

select pg_size_pretty(pg_total_relation_size('cubtest'));   -- table size with index
 pg_size_pretty
----------------
 16 kB
(1 row)

i.e. the index size is nearly 8 kB.

2. With 20,000 tuples:

Table size without index: 1.6 MB
Table size with index: 11 MB
i.e. the index size is nearly 9.4 MB.

3. With 5 lakh (500,000) tuples:

Table size without index: 40 MB
Table size with index: 2117 MB
i.e. the index size is nearly 2077 MB, about 2 GB.
It takes nearly 20-25 minutes to create the index for 5 lakh tuples.

Can someone tell me why the index is becoming so large?
How can I compress or reduce its size?

Thanks
Nick


[HACKERS] Logfile

2011-05-22 Thread Nick Raj
Hi,
I am using the contrib/cube code. When I build a GiST index on the cube data
type, it leads to a very large log file (nearly 220 MB for only 12k records),
while creating a GiST index on a geometry field gives a 1 kB log file for
17 lakh (1.7 million) records.

Can someone please tell me how to stop postgres from logging so much data in
the case of cube?

Thanks
Nick


Re: [HACKERS] Logfile

2011-05-22 Thread Nick Raj
Sorry; actually it was because of one printf statement I had added. That is
what caused this. My mistake.

On Mon, May 23, 2011 at 9:06 AM, Robert Haas robertmh...@gmail.com wrote:

 On Sun, May 22, 2011 at 6:42 AM, Nick Raj nickrajj...@gmail.com wrote:
  I am using the contrib/cube code. When I build a GiST index on the cube
  data type, it leads to a very large log file (nearly 220 MB for only
  12k records), while creating a GiST index on a geometry field gives a
  1 kB log file for 17 lakh records.
 
  Can someone please tell me how to stop postgres from logging so much
  data in the case of cube?

 Well, I don't have a clue why it would be writing that many log
 messages.  Maybe you should look in the log file and see what those
 messages are specifically... with a few more details, someone might be
 able to help you out.

 Also, this question would be better directed to pgsql-general or
 pgsql-admin.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



[HACKERS] DETOAST Datum

2011-05-16 Thread Nick Raj
Hi,
I have defined some functions and also use the NDBOX structure, which has
variable length.

typedef struct NDBOX
{
	int32		vl_len_;	/* varlena length */
	unsigned int dim;
	double		x[1];
} NDBOX;

When I call my function, the NDBOX argument comes out NULL. On debugging, I
found that FunctionInvokeCall invokes an fmgr_oldstyle function; to fetch
each argument it does:

    if (fnextra->arg_toastable[i])   /* this returns false, so the
                                      * argument is never fetched */
        fcinfo->arg[i] =
            PointerGetDatum(PG_DETOAST_DATUM(fcinfo->arg[i]));
}

How do I make the arguments toastable? Also, my table's
pg_class.reltoastrelid entry is zero.
Do I have to request toasting explicitly?

If I comment out that if condition, it gets stuck below:

struct varlena *
pg_detoast_datum(struct varlena * datum)
{
	if (VARATT_IS_EXTENDED(datum))	/* my code gets stuck here */
		return heap_tuple_untoast_attr(datum);
	else
		return datum;
}

Can anyone tell me what VARATT_IS_EXTENDED(datum) means?

Thanks


Re: [HACKERS] DETOAST Datum

2011-05-16 Thread Nick Raj
On Mon, May 16, 2011 at 7:52 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Robert Haas robertmh...@gmail.com writes:
  On Mon, May 16, 2011 at 3:41 AM, Nick Raj nickrajj...@gmail.com wrote:
  How to get arguments toastable?? and even my table
 pg_class.reltoastrelid
  entry is zero.

  It's pretty hard to guess what's going wrong here from the information
  you've provided.  But reltoastrelid should not be 0 if you're using a
  variable-length data type.

 It could be if the type is not marked toastable (ie, has storage class
 PLAIN, which I think is the default even for varlena types).

 But in any case I suspect that 90% of the problem here is an incorrect
 declaration of the *function* not the type.  We haven't seen the C
 function declaration, nor the SQL CREATE FUNCTION command, but if it's
 going through fmgr_oldstyle then there isn't a PG_FUNCTION_INFO_V1
 marker, which maybe is the problem.

Thanks for your replies.

Tom, you are correct. I had missed the PG_FUNCTION_INFO_V1 marker. After
adding it, my code works fine.
Thanks a lot.





regards, tom lane



Re: [HACKERS] Debug contrib/cube code

2011-05-13 Thread Nick Raj
Sorry, I don't know what AFAICS means.
Yes, I want to modify the cube code, and for that I want to go through it in
detail with a debugger.
Can you shed some light on how to debug the contrib/cube code?

On Fri, May 6, 2011 at 9:59 PM, Euler Taveira de Oliveira
eu...@timbira.com wrote:

 On 06-05-2011 02:14, Nick Raj wrote:

  I am using postgresql-8.4.6. I want to debug the contrib/cube code. Is
  it possible to debug the cube code? There is no ./configure file to
  enable debugging. Is there any way to change the Makefile to enable it?

 What do you want to debug? AFAICS you need to change the code to achieve
 what you want.


 --
  Euler Taveira de Oliveira - Timbira   http://www.timbira.com.br/
  PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



[HACKERS] Debug contrib/cube code

2011-05-05 Thread Nick Raj
Hi,
I am using postgresql-8.4.6. I want to debug the contrib/cube code. Is it
possible to debug the cube code? There is no ./configure file to enable
debugging. Is there any way to change the Makefile to enable it?

Thanks


[HACKERS] Defining input function for new datatype

2011-04-21 Thread Nick Raj
Hi,
I am defining a new data type called mpoint
i.e.
typedef struct mpoint
{
	Point		p;
	Timestamp	t;
} mpoint;

For defining the input/output functions:

1  Datum mpoint_in(PG_FUNCTION_ARGS)
2  {
3
4      mpoint *result;
5      char *pnt = (char *) malloc(sizeof(20));
6      char *ts = (char *) malloc(sizeof(20));
7      result = (mpoint *) palloc(sizeof(mpoint));
8      char *st = PG_GETARG_CSTRING(0);
9      mpoint_decode(st, pnt, ts);
       /* st breaks down into pnt (the Point part) and ts (the Timestamp part) */
10
11     result->p = point_in(PointerGetDatum(pnt));     /* point_in: input function
                                                          for point that assigns x, y */
12     result->t = timestamp_in(PointerGetDatum(ts));  /* similarly for timestamp */
13
14     PG_RETURN_MPOINT_P(result);
15 }

line 11 warning: passing argument 1 of 'point_in' makes pointer from integer
without a cast
    ../../../include/utils/geo_decls.h:191: note: expected
'FunctionCallInfo' but argument is of type 'unsigned int'
line 11 error: incompatible types when assigning to type 'Point' from
type 'Datum'
line 12 warning: passing argument 1 of 'timestamp_in' makes pointer from
integer without a cast
    ../../../include/utils/timestamp.h:205: note: expected
'FunctionCallInfo' but argument is of type 'unsigned int'

Can anybody figure out what kind of mistake I am making?
Also, why is it related to 'FunctionCallInfo'?

Thanks
Nick


[HACKERS] Global variables in plpgsql

2011-04-11 Thread Nick Raj
Hi,
Does anyone know how to define a global variable in PL/pgSQL?
Thanks

Regards,
Raj


Re: [HACKERS] Composite Index Structure

2011-03-07 Thread Nick Raj
Hi,

Thanks for your suggestion, Heikki. The btree_gist module provides GiST
implementations for various data types, equivalent to btree. Correct me if I
am wrong: create index ... using gist (coordinates, time) will create an
index on both columns, but I think there would be no linkage between them.
I want a linkage between the location tree and the time tree: for each
location I want a B-tree, like

              Rtree root
               /     \
          non-leaf    ...
          /    \    \
       leaf    ..    leaf
        |             |
   Btree root     Btree root
    /    \         /    \
  non-leaf
    /
  leaf

How can I create this kind of structure?
What modifications are required for this?

Thanks
Raj
On Mon, Mar 7, 2011 at 12:33 PM, Heikki Linnakangas 
heikki.linnakan...@enterprisedb.com wrote:

 On 07.03.2011 08:07, Nick Raj wrote:

  I want to construct a composite index structure, i.e. a combination of
  GiST and B-tree.
  What I am thinking is first creating an R-tree structure that points to
  another B-tree structure.
  For example, suppose I want to find vehicles between 2 and 4 pm on
  14/2/2011 on road X.

  I am thinking of creating an R-tree structure for the road network and
  then a B-tree for time. To reach road X I use the R-tree, and from there
  the B-tree begins, i.e. a leaf node of the R-tree contains the pointer to
  the root node of a B-tree (in this way I have all times belonging to
  road X).

  My question is: how do I implement this composite index structure in
  postgres?

  Let us suppose I create a mygist index; do I then have to write my own
  operator class?
  Or can I use the gist index as it is and the btree as it is (I mean their
  operator classes and their GiST methods), but then how do I establish a
  linkage between them?


 It sounds like a use case for a multi-column gist index. See btree_gist
 contrib module. You'll want something like:

 CREATE INDEX ... USING gist (coordinates, time)

 --
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] Composite Index Structure

2011-03-06 Thread Nick Raj
Hi all,

I want to construct a composite index structure, i.e. a combination of GiST
and B-tree.
What I am thinking is first creating an R-tree structure that points to
another B-tree structure.
For example, suppose I want to find vehicles between 2 and 4 pm on 14/2/2011
on road X.

I am thinking of creating an R-tree structure for the road network and then
a B-tree for time. To reach road X I use the R-tree, and from there the
B-tree begins, i.e. a leaf node of the R-tree contains the pointer to the
root node of a B-tree (in this way I have all times belonging to road X).

My question is: how do I implement this composite index structure in
postgres?

Let us suppose I create a mygist index; do I then have to write my own
operator class?
Or can I use the gist index as it is and the btree as it is (I mean their
operator classes and their GiST methods), but then how do I establish a
linkage between them?

Any ideas?

Thanks
Raj


[HACKERS] Spatio-Temporal Functions

2011-02-26 Thread Nick Raj
Hi,
I am writing some spatio-temporal functions in PostGIS,
like ST_Enters(obj_geom, boundary_geom, time t1, time t2).

For example: does a vehicle enter ABC between time t1 and t2?
Suppose I take only one object geometry at a time and compare it; then I
cannot produce the output, because Enters is a continuous function of time,
i.e. Enters is made up of Disjoint -> Touches -> Inside.
So, to create this function, I need all of the object's geometries before
going into the function, so that it can be called only once.

My first problem is this:
create or replace function ST_Enters(, geometry, timestamp, timestamp)
returns boolean ...

Which data type should be used? I have to receive more than 1000 rows, and
geometry[] doesn't work for that.
Can anyone tell me which data type can be used to receive these?

Second:
How can I use this function in SQL? Suppose I write a query like
select ST_Enters(obj_geom, (select boundary_geom from boundary),
'25/2/2011 12:23:32', '26/2/2011') from vehicledata where vehicleid = 'XYZ';
then again only one object geometry at a time is matched against the
boundary geometry.
If I store the in-between data in some temp table, that creates a lot of
computation around the table, and if processing stops partway through, the
table may contain wrong data.
So, how do I use this function in SQL in a continuous manner?

Thanks in advance.