Re: [HACKERS] Modular Type Libraries: was A real currency type

2006-03-22 Thread Martijn van Oosterhout
On Tue, Mar 21, 2006 at 10:39:03PM -0500, Tom Lane wrote:
 AFAICS, the main part of the type system that isn't modular is the
 support for type parameters (a/k/a typmod), such as the maximum length
 for varchar or the precision/scale for numeric.  We could certainly
 invent an API for interpreting such parameters.  But in the current state
 of the system the types that have such parameters have to be hard-wired
 into the SQL grammar, and I don't see how to get rid of that hard-wiring
 without breaking a whole lot of stuff.  Any bison gurus hanging about?

We've been here before:

http://archives.postgresql.org/pgsql-hackers/2005-08/msg01142.php

Turns out it isn't too hard, except there are a number of limitations.
Unfortunately I seem to have deleted the patch I created then. :(

Summarising the thread then:

- It made some changes to what was and wasn't a reserved word.
- Character sets for CHAR types still fall outside the system.
- Some other changes in behaviour

I'm not sure if much has changed since that discussion.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


signature.asc
Description: Digital signature


Re: [PATCHES] [HACKERS] Automatically setting work_mem

2006-03-22 Thread Simon Riggs
On Wed, 2006-03-22 at 07:48 +, Simon Riggs wrote:
 On Tue, 2006-03-21 at 17:47 -0500, Tom Lane wrote:
 
  I'm fairly unconvinced about Simon's underlying premise --- that we
  can't make good use of work_mem in sorting after the run building phase
  --- anyway.  
 
 We can make good use of memory, but there does come a point in final
 merging where too much is of no further benefit. That point seems to be
 at about 256 blocks per tape; patch enclosed for testing. (256 blocks
 per tape roughly doubles performance over 32 blocks at that stage).
 
 That is never the case during run building - more is always better.
 
  If we cut back our memory usage 
 Simon inserts the words: too far
  then we'll be forcing a
  significantly more-random access pattern to the temp file(s) during
  merging, because we won't be able to pre-read as much at a time.
 
 Yes, that's right.
 
 If we have 512MB of memory, that gives us enough for ~2000 tapes, yet
 run building might only produce a few runs. There's just no way that all
 512MB of memory is needed to optimise the performance of reading in a
 few tapes at the time of the final merge.
 
 I'm suggesting we always keep 2MB per active tape, or the full
 allocation, whichever is lower. In the above example that could release
 over 500MB of memory, which more importantly can be reused by subsequent
 sorts if/when they occur.
 
 
 I enclose two patches:
 1. mergebuffers.patch allows measurement of the effects of different
 merge buffer sizes, current default=32
 
 2. reassign2.patch which implements the two kinds of resource
 deallocation/reassignment proposed.

Missed a couple of minor points in the patch: reassign3.patch attached to
completely replace reassign2.patch.

Recent test results show that with a 512MB test sort we can reclaim 97%
of memory during final merge with only a noise-level (+2%) increase in
overall elapsed time. (That's just an example; your mileage may vary.) So
a large query would use and keep about 536MB of memory rather than 1536MB.
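Simon's figures can be sanity-checked with a quick back-of-the-envelope sketch (a toy Python calculation, not PostgreSQL code; the 8 active tapes are a hypothetical example, and the per-tape constants are the defaults quoted in the patch below):

```python
# Back-of-the-envelope check of the figures in this thread.
# Assumes BLCKSZ = 8192, MERGE_BUFFER_SIZE = 32 blocks and
# TAPE_BUFFER_OVERHEAD = 3 blocks, as in tuplesort.c.
MB = 1024 * 1024
BLCKSZ = 8192

work_mem = 512 * MB
per_tape_merge = 32 * BLCKSZ       # OPTIMAL_MERGE_BUFFER_SIZE
per_tape_overhead = 3 * BLCKSZ     # TAPE_BUFFER_OVERHEAD

# "512MB of memory gives us enough for ~2000 tapes":
max_tapes = work_mem // (per_tape_merge + per_tape_overhead)
print(max_tapes)                   # 1872, i.e. roughly 2000 tapes

# Proposed rule: keep min(even split, 2MB) per *active* tape.
active_tapes = 8                   # hypothetical small final merge
keep_cap = 2 * MB
kept = active_tapes * min(work_mem // active_tapes, keep_cap)
reclaimed = work_mem - kept
print(kept // MB, reclaimed // MB)         # 16 496
print(round(100 * reclaimed / work_mem))   # 97 (% reclaimed)
```

which matches the ~97% reclaim reported above.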

Best Regards, Simon Riggs
Index: src/backend/utils/sort/tuplesort.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/utils/sort/tuplesort.c,v
retrieving revision 1.65
diff -c -r1.65 tuplesort.c
*** src/backend/utils/sort/tuplesort.c	10 Mar 2006 23:19:00 -	1.65
--- src/backend/utils/sort/tuplesort.c	22 Mar 2006 09:34:58 -
***
*** 179,186 
   */
  #define MINORDER		6		/* minimum merge order */
  #define TAPE_BUFFER_OVERHEAD		(BLCKSZ * 3)
! #define MERGE_BUFFER_SIZE			(BLCKSZ * 32)
! 
  /*
   * Private state of a Tuplesort operation.
   */
--- 179,187 
   */
  #define MINORDER		6		/* minimum merge order */
  #define TAPE_BUFFER_OVERHEAD		(BLCKSZ * 3)
! #define OPTIMAL_MERGE_BUFFER_SIZE	(BLCKSZ * 32)
! #define PREFERRED_MERGE_BUFFER_SIZE (BLCKSZ * 256)
! #define REUSE_SPACE_LIMIT   RELSEG_SIZE
  /*
   * Private state of a Tuplesort operation.
   */
***
*** 255,260 
--- 256,270 
  	 */
  	int			currentRun;
  
+ /*
+  * These variables are used during final merge to reassign resources
+  * as they become available for each tape
+  */
+ int lastPrereadTape;/* last tape preread from */
+ int numPrereads;/* num times last tape has been selected */
+ int reassignableSlots;  /* how many slots can be reassigned */
+ longreassignableMem;/* how much memory can be reassigned */
+ 
  	/*
  	 * Unless otherwise noted, all pointer variables below are pointers
  	 * to arrays of length maxTapes, holding per-tape data.
***
*** 294,299 
--- 304,310 
  	int		   *tp_runs;		/* # of real runs on each tape */
  	int		   *tp_dummy;		/* # of dummy runs for each tape (D[]) */
  	int		   *tp_tapenum;		/* Actual tape numbers (TAPE[]) */
+ 
  	int			activeTapes;	/* # of active input tapes in merge pass */
  
  	/*
***
*** 398,408 
--- 409,423 
  
  static Tuplesortstate *tuplesort_begin_common(int workMem, bool randomAccess);
  static void puttuple_common(Tuplesortstate *state, SortTuple *tuple);
+ static void grow_memtuples(Tuplesortstate *state);
+ static void shrink_memtuples(Tuplesortstate *state);
  static void inittapes(Tuplesortstate *state);
  static void selectnewtape(Tuplesortstate *state);
  static void mergeruns(Tuplesortstate *state);
  static void mergeonerun(Tuplesortstate *state);
  static void beginmerge(Tuplesortstate *state);
+ static void assignResourcesUniformly(Tuplesortstate *state, bool initialAssignment);
+ static void reassignresources(Tuplesortstate *state, int srcTape);
  static void mergepreread(Tuplesortstate *state);
  static void mergeprereadone(Tuplesortstate *state, int srcTape);
  static void dumptuples(Tuplesortstate *state, bool alltuples);
***
*** 727,733 
   * moves around with tuple addition/removal, this might result in thrashing.
   * Small increases in the array size are 

[HACKERS] Request from Tom--offlist

2006-03-22 Thread Jonah H. Harris
Hey Tom,

In regards to your last email to me, I'd like your help and
advice. In the past, I'd heard that you block a ton of emails, so
I'm not sure whether you received my response or not. If you
would, please reply to me privately with info. Thanks!

Sorry for posting this to the list, guys!

-- 
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
732.331.1324


[HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Thomas Hallgren

Imagine the following scenario:

Function 'A' returns SETOF 'x'. It will issue a query using SPI that 
calls function 'B'. This function returns SETOF 'y'.

Each tuple of 'x' is formed from some data in 'y'.
There will be millions of tuples so building a set of 'y' in memory is 
not an option.


What would the recommended use of MemoryContexts in an SRF function be 
in order to make this work? The SPI_connect must be issued during the 
SRF_IS_FIRST_CALL() phase. The Portal that it creates must remain alive 
until it's time for the SRF_RETURN_DONE(). What would the recommended 
approach be to accomplish this efficiently (and without introducing a 
major memory leak)?


The problem I'm trying to solve is a generic one. It's very possible 
that the recursion is of arbitrary depth.


Regards,
Thomas Hallgren


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Patch Submission Guidelines

2006-03-22 Thread Mark Wong
On Tue, 14 Feb 2006 21:54:12 +
Simon Riggs [EMAIL PROTECTED] wrote:

 On Tue, 2006-02-14 at 16:17 -0500, Andrew Dunstan wrote:
 
  If I had enough time there are all sorts of things like this I'd love to 
  set up. A fetchable url that says try these experimental CVS branches 
  or something like that would be great.
 
 How much time would you need? I think having every patch built before
 anyone even looks at the code would sort out most of the issues I
 mentioned. 

Sorry I've gotten into this late.  The PLM developed at OSDL might be
useful here.  We're still grabbing daily snapshots from CVS and patches
could be submitted against those to see if they apply cleanly.  Sparse
is also run but no compiling is done, although that could be easily
arranged.

Here's a link:
http://plm.osdl.org/plm-cgi/plm

Mark



Re: [HACKERS] Modular Type Libraries: was A real currency type

2006-03-22 Thread William ZHANG

Tom Lane [EMAIL PROTECTED]
 Timestamps and numerics are definitely in the spec, geometric and
 network types are definitely not.  IIRC, bitstring types are in SQL99
 but for some reason are deprecated in SQL2003 (if anyone knows the
 reasoning behind the SQL committee's about-face on that, please clue
 us in).

There is a standard data type called BIT in ODBC and JDBC,
but it is something like the SQL standard's BOOLEAN, not BIT. It seems that
some DBMSs implement BIT as BOOLEAN in the backend. Maybe the standards
committee thinks that bit strings are of little use and easily cause confusion?

Regards,
William ZHANG





Re: [HACKERS] Automatically setting work_mem

2006-03-22 Thread Luke Lonergan
Tom,

On 3/21/06 3:06 PM, Tom Lane [EMAIL PROTECTED] wrote:

 The real problem we are facing with a whole lot of our optimization
 issues (not only sorting) is that it's not all that trivial to get
 credible experimental results that we can expect will hold up across
 a range of usage scenarios.

As proven by the qsort tests - point taken.

- Luke





[HACKERS] How to put back??

2006-03-22 Thread Dhanaraj M - Sun Microsystems

Hi all,

I have recently joined and am working on Postgres. I fixed a bug that I 
saw on the mailing list. I ran the regression test that is available in 
Postgres. It was successful, and now I need the following details:


1) Test suites that I could use to test my code
2) How can I put back my fix into the Postgres source code? (or) What is the 
procedure that I should follow at this stage?


Waiting for your suggestions,
Thanks
Dhanaraj



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Martijn van Oosterhout
On Wed, Mar 22, 2006 at 03:31:59PM +0100, Thomas Hallgren wrote:
 Imagine the following scenario:
 
 Function 'A' returns SETOF 'x'. It will issue a query using SPI that 
 calls function 'B'. This function returns SETOF 'y'.
 Each tuple of 'x' is formed from some data in 'y'.
 There will be millions of tuples so building a set of 'y' in memory is 
 not an option.

I think you're running into a small limitation of set functions here.
If you look at nodeFunctionScan.c that handles this, you can see that
the code is written in such a way as to collect all the tuples first
before returning anything. Not sure why it does that, probably to
handle mark/restore, though that isn't stated anywhere in the code.

 What would the recommended use of MemoryContexts in an SRF function be 
 in order to make this work? The SPI_connect must be issued during the 
 SRF_IS_FIRST_CALL() phase. The Portal that it creates must remain alive 
 until it's time for the SRF_RETURN_DONE(). What would the recommended 
 approach be to accomplish this efficiently (and without introducing a 
 major memory leak)?

Well, I think this is done the normal way. The function returning
values allocates them in its own context and does a RETURN NEXT. Once
it has returned them it can free them, or reset the context if it
prefers. The caller is always responsible for copying (since it isn't
often needed).

Have you read the executor/README ?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] How to put back??

2006-03-22 Thread Martijn van Oosterhout
On Wed, Mar 22, 2006 at 12:22:04PM +0530, Dhanaraj M - Sun Microsystems wrote:
 Hi all,
 
 I have recently joined and am working on Postgres. I fixed a bug that I 
 saw on the mailing list. I ran the regression test that is available in 
 Postgres. It was successful, and now I need the following details:
 
 1) Test suites that I could use to test my code
 2) How can I put back my fix into the Postgres source code? (or) What is the 
 procedure that I should follow at this stage?

Post the patch to pgsql-patches. The regression test is the test suite.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Tom Lane
Martijn van Oosterhout kleptog@svana.org writes:
 I think you're running into a small limitation of set functions here.
 If you look at nodeFunctionScan.c that handles this, you can see that
 the code is written in such a way as to collect all the tuples first
 before returning anything.

I don't think Thomas intended to go through nodeFunctionScan, so this
needn't apply to him.

 Not sure why it does that,

plpgsql and similar languages will return a tuplestore anyway, so it has
to handle that case, and it was convenient to make all the cases look
alike for starters.  Nobody's yet gone back to improve it for the case
of languages that return a tuple per call.

regards, tom lane



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Andrew Dunstan

Tom Lane wrote:



plpgsql and similar languages will return a tuplestore anyway, so it has
to handle that case, and it was convenient to make all the cases look
alike for starters.  Nobody's yet gone back to improve it for the case
of languages that return a tuple per call.

 





This would be hard to do in the plperl case, at least, and I would be 
surprised if it weren't in most others too. So what plperl does is 
fetch the whole set on the first call and then fudge all the other 
calls to get the next element from the result set. We save out the 
intermediate tuple store on each call and restore it afterwards, so I 
think recursion shouldn't be a difficulty.


cheers

andrew



Re: [HACKERS] How to put back??

2006-03-22 Thread Adrian Maier
On 3/22/06, Dhanaraj M - Sun Microsystems [EMAIL PROTECTED] wrote:
 Hi all,

  I have recently joined and am working on Postgres. I fixed a bug that I
 saw on the mailing list. I ran the regression test that is available in
 Postgres. It was successful, and now I need the following details:

 1) Test suites that I could use to test my code
 2) How can I put back my fix into the Postgres source code? (or) What is the
 procedure that I should follow at this stage?

You might find the following document useful:

http://www.postgresql.org/docs/faqs.FAQ_DEV.html


Cheers,
Adrian Maier



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Thomas Hallgren
Recursive calls work in PL/Java. No problem there. But the larger the 
set, the more memory it consumes. Do I read your answers correctly if I 
conclude that this is a known limitation when SPI is used? I.e. there's no 
way to stream one row at a time without ever building the full set?


Regards,
Thomas Hallgren


Andrew Dunstan wrote:

Tom Lane wrote:



plpgsql and similar languages will return a tuplestore anyway, so it has
to handle that case, and it was convenient to make all the cases look
alike for starters.  Nobody's yet gone back to improve it for the case
of languages that return a tuple per call.

 





This would be hard to do in the plperl case, at least, and I would be 
surprised if it weren't in most others too. So what plperl does is 
fetch the whole set on the first call and then fudge all the other 
calls to get the next element from the result set. We save out the 
intermediate tuple store on each call and restore it afterwards, so I 
think recursion shouldn't be a difficulty.


cheers

andrew





Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Alvaro Herrera
Thomas Hallgren wrote:
 Recursive calls work in PL/Java. No problem there. But the larger the 
 set, the more memory it consumes. Do I read your answers correctly if I 
 conclude that this is a known limitation when SPI is used? I.e. there's no 
 way to stream one row at a time without ever building the full set?

Hmm, are you using a tuplestore?  The PL/php code for return_next looks
like this:

ZEND_FUNCTION(return_next)
{
	... some stuff ...

	/* Use the per-query context so that the tuplestore survives */
	oldcxt = MemoryContextSwitchTo(rsi->econtext->ecxt_per_query_memory);

	/* Form the tuple */
	tup = plphp_srf_htup_from_zval(param, current_attinmeta, current_memcxt);

	/* First call?  Create the tuplestore. */
	if (!current_tuplestore)
		current_tuplestore = tuplestore_begin_heap(true, false, work_mem);

	/* Save the tuple and clean up */
	tuplestore_puttuple(current_tuplestore, tup);
	heap_freetuple(tup);

	MemoryContextSwitchTo(oldcxt);
}

-- 
Alvaro Herrera   http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Tom Lane
Thomas Hallgren [EMAIL PROTECTED] writes:
 Recursive calls work in PL/Java. No problem there. But the larger the 
 set, the more memory it consumes. Do I read your answers correctly if I 
 conclude that this is a known limitation when SPI is used? I.e. there's no 
 way to stream one row at a time without ever building the full set?

By no means.  The point is that there are some callers of SRFs that are
going to materialize the result set, as well as some SRFs that are going
to hand back a materialized result set anyway.  The interface can handle
a tuple-per-call but that's not the way everybody chooses to use it.
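Tom's distinction (a per-call interface that some callers nevertheless materialize) can be illustrated with a schematic sketch in Python; the names are purely illustrative, not PostgreSQL API:

```python
# A value-per-call producer: yields one row at a time.
def srf_rows(n):
    for i in range(n):
        yield ("row", i)          # one tuple handed back per call

# Caller A streams: memory stays bounded no matter how large n is.
total = 0
for row in srf_rows(1_000_000):
    total += 1

# Caller B materializes: same interface, but the whole set is
# collected up front (as nodeFunctionScan does with its tuplestore).
materialized = list(srf_rows(3))
print(total, materialized)
# 1000000 [('row', 0), ('row', 1), ('row', 2)]
```

The interface supports either consumption style; it is the caller, not the producer, that decides whether the full set ever exists in memory.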

regards, tom lane



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Andrew Dunstan

Thomas Hallgren wrote:

Recursive calls work in PL/Java. No problem there. But the larger the 
set, the more memory it consumes. Do I read your answers correctly if 
I conclude that this is a known limitation when SPI is used? I.e. there's 
no way to stream one row at a time without ever building the full set?



plperl stashes the results in a tuplestore object, which spills to disk. 
So memory use is not unbounded. Before 8.1 we had no return_next and no 
intermediate tuplestore, so we had serious memory problems with 
returning large sets.


As for SPI calls, we also had problems there but now we provide a cursor 
interface that works much more nicely.


cheers

andrew






Re: [HACKERS] 8.2 planning features

2006-03-22 Thread Robert Treat
On Tuesday 21 March 2006 18:59, satoshi nagayasu wrote:
 Tom Lane wrote:
  In particular, asking for a list of features that will be done in
  particular future releases shows a complete lack of understanding
  of the process ...

 I completely understand.

 However, we also need to understand why business people want
 to know about the future plan. For business people,
 a roadmap is used to judge whether the software fits
 their (growing) business, not only now but in the future.

 A roadmap can be changed, but a roadmap is still necessary
 for some kinds of users.


I guess we need all talks like this to have a special slide that points out 
that if someone doesn't see the feature they need on the list, they can have 
it in the next version as long as they're willing to put some resources 
behind it.   :-)

-- 
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL



Re: [HACKERS] 8.2 planning features

2006-03-22 Thread Andrew Dunstan
Robert Treat wrote:

On Tuesday 21 March 2006 18:59, satoshi nagayasu wrote:
  

Tom Lane wrote:


In particular, asking for a list of features that will be done in
particular future releases shows a complete lack of understanding
of the process ...
  

I completely understand.

However, we also need to understand why business people want
to know about the future plan. For business people,
a roadmap is used to judge whether the software fits
their (growing) business, not only now but in the future.

A roadmap can be changed, but a roadmap is still necessary
for some kinds of users.




I guess we need all talks like this to have a special slide that points out 
that if someone doesn't see the feature they need on the list, they can have 
it in the next version as long as they're willing to put some resources 
behind it.   :-)

  


But it isn't true. Patches get rejected. We won't just automatically
adopt someone's idea of a cool new feature.

My idea of a roadmap would indicate a set of features that had at least
in principle general acceptance and which somebody had undertaken to try
to deliver. Unfortunately, our TODO list doesn't necessarily meet either
of these criteria. It could also be extended to include some sort of
prioritisation, e.g. "really, really want to have", "would like to have",
"would be nice if we can get it in".

The roadmap would be indicative, not prescriptive, i.e. nothing would
stop somebody from delivering a patch for some cool new feature, and
having it considered. And the absence of some feature would not
necessarily inhibit a release. But it would be nice to have some
document that indicated what was likely to be coming down the track
(e.g. I didn't realise that Jonah was going to be doing hierarchical
queries, which will be a very cool thing to have indeed, and one which
many users will drool over.)

cheers

andrew



Re: [HACKERS] How to put back??

2006-03-22 Thread Darcy Buskermolen
On Tuesday 21 March 2006 22:52, Dhanaraj M - Sun Microsystems wrote:
 Hi all,

  I have recently joined and am working on Postgres. I fixed a bug that I
 saw on the mailing list. I ran the regression test that is available in
 Postgres. It was successful, and now I need the following details:

 1) Test suites that I could use to test my code
Regression tests are #1; you can create your own tests if it is new behaviour.
You can also try testing against pgbench, php-tpcw, the OSDL database tests 
etc.

 2) How can I put back my fix into the Postgres source code? (or) What is the
 procedure that I should follow at this stage?

Create a context diff and send it to the pgsql-patches@postgresql.org list.


 Waiting for your suggestions,
 Thanks
 Dhanaraj


-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759



Re: [HACKERS] How to put back??

2006-03-22 Thread Chris Browne
[EMAIL PROTECTED] (Dhanaraj M - Sun Microsystems) writes:
 Hi all,

  I have recently joined and am working on Postgres. I fixed a bug that I
 saw on the mailing list. I ran the regression test that is available
 in Postgres. It was successful, and now I need the following details:

 1) Test suites that I could use to test my code

There is an existing test suite in the source code in src/test/regress.

You might check in there to see where the tests relevant to the
feature you modified reside, and see if you can add relevant tests
that break in the unpatched system and work fine after the change.

 2) How can I put back my fix into the Postgres source code? (or) What is the
 procedure that I should follow at this stage?

Once you have a patch, there's a patches list where you can submit it...
-- 
output = (cbbrowne @ cbbrowne.com)
http://cbbrowne.com/info/spreadsheets.html
Rules of the Evil Overlord #32. I will not fly into a rage and kill a
messenger who brings me bad news  just to illustrate how evil I really
am. Good messengers are hard to come by.
http://www.eviloverlord.com/



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Thomas Hallgren

Tom Lane wrote:

Thomas Hallgren [EMAIL PROTECTED] writes:
  
Recursive calls work in PL/Java. No problem there. But the larger the 
set, the more memory it consumes. Do I read your answers correctly if I 
conclude that this is a known limitation when SPI is used? I.e. there's no 
way to stream one row at a time without ever building the full set?



By no means.  The point is that there are some callers of SRFs that are
going to materialize the result set, as well as some SRFs that are going
to hand back a materialized result set anyway.  The interface can handle
a tuple-per-call but that's not the way everybody chooses to use it.

  
OK. I've managed to get rid of my last memory leak (I hope). I followed 
Martijn's suggestion to create the returned tuple in my own context. Now 
even the nastiest recursive chains using huge sets of data seem to 
behave OK :-)


There's one thing that's still a bit fuzzy to me. If I don't use SPI, 
the context that is current when my SRF function is called seems to be 
reset between each call. I can palloc stuff in it as much as I like. I 
can even create the tuple that I return using this context. No memory 
leak. But if I, during the SRF_IS_FIRST_CALL phase, do an SPI_connect 
(done when the 'multi_call_memory_ctx' is current), then the leak seems 
to occur immediately. Will that connect somehow alter the durability of 
the context that is current on each call to my SRF?


Regards,
Thomas Hallgren




Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Martijn van Oosterhout
On Wed, Mar 22, 2006 at 09:09:34PM +0100, Thomas Hallgren wrote:
 There's one thing that's still a bit fuzzy to me. If I don't use SPI, 
 the context that is current when my SRF function is called seems to be 
 reset between each call. I can palloc stuff in it as much as I like. I 
 can even create the tuple that I return using this context. No memory 
 leak. But if I, during the SRF_IS_FIRST_CALL phase, do an SPI_connect 
 (done when the 'multi_call_memory_ctx' is current), then the leak seems 
 to occur immediately. Will that connect somehow alter the durability of 
 the context that is current on each call to my SRF?

Ok, I'm not sure if I understand the reasoning but I think it's like
this:

- When the results of an SRF are accumulated by
ExecMakeTableFunctionResult, that function is resetting your context
each time.

- When you call SPI_connect it creates a new context and switches to
it. It switches back on SPI_finish. SPI_finish switches to the context
active at SPI_connect; maybe this is not what you expect?

ExecMakeTableFunctionResult only resets the one context, the one
provided when your function starts; anything created in other contexts
is Somebody Else's Problem. So the question is: which context are you
allocating in?

Hope this clarifies it,
 

-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




[HACKERS] Accessing schema data in information schema

2006-03-22 Thread Peter Eisentraut
I'm updating the information schema for SQL:2003.  I'm having some 
difficulties with the sequences view.  It should look approximately 
like this (uninteresting stuff omitted):

CREATE VIEW sequences AS
  SELECT CAST(current_database() AS sql_identifier) AS sequence_catalog,
 CAST(nc.nspname AS sql_identifier) AS sequence_schema,
 CAST(c.relname AS sql_identifier) AS sequence_name,
 CAST(null AS cardinal_number) AS maximum_value, -- FIXME
 CAST(null AS cardinal_number) AS minimum_value, -- FIXME
 CAST(null AS cardinal_number) AS increment, -- FIXME
 CAST(null AS character_data) AS cycle_option-- FIXME
  FROM pg_namespace nc, pg_class c
  WHERE c.relnamespace = nc.oid
    AND c.relkind = 'S';

How does one get at the missing fields? The only way I know is 
selecting from the sequence, but how does one work this into this 
query? Somehow it seems that these things should be stored in a real 
system catalog.

Ideas (short of using PERFORM in PL/pgSQL)?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 How does one get at the missing fields? The only way I know is 
 selecting from the sequence, but how does one work this into this 
 query? Somehow it seems that these things should be stored in a real 
 system catalog.

Yeah.  I've occasionally toyed with the idea that sequences should be
rows in a single catalog instead of independent tables as they are now.
This would make for a much smaller disk footprint (with consequent I/O
savings) and would solve problems like the one you have.  Unfortunately
the backward-compatibility issues seem a bit daunting :-(.  It's
probably not completely impossible, but how do we preserve the existing
behavior that you can SELECT * FROM seqname and get the parameters?

Ideally I'd like
SELECT * FROM seqname;  -- gets params of one sequence
SELECT * FROM pg_sequence;  -- gets params of all sequences

One possible kluge is to make all the sequences be child tables of a
pg_sequence catalog that exists only to be their inheritance parent.
This seems pretty ugly from a performance point of view though.
Selecting from pg_sequence would be really expensive if you have a lot
of sequences, and there wouldn't be any opportunity for reducing the
disk footprint.

(Thinks a bit...)  Maybe it would work for pg_sequence to be a real
catalog with a row per sequence, and we also create a view named after
the sequence that simply selects from pg_sequence with an appropriate
WHERE condition.

Plan C would be to say that we don't need to preserve SELECT * FROM
seqname, but I'll bet there would be some hollering.

regards, tom lane



Re: [HACKERS] Recursive calls to functions that return sets

2006-03-22 Thread Tom Lane
Thomas Hallgren [EMAIL PROTECTED] writes:
 But if I, during the SPI_IS_FIRST_CALL phase, do an SPI_connect 
 (done when the 'multi_call_memory_ctx' is current), then the leak seem 
 to occur immediately. Will that connect somehow alter the durability for 
 the context that is current on each call to my SRF?

Are you remembering to SPI_finish when you're done?  Maybe what you are
leaking is the SPI state information.

You could look at the MemoryContextStats printout for clues --- easiest
way is to deliberately run the backend out of memory, and after the out
of memory error occurs, look in the postmaster log.

regards, tom lane



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Hannu Krosing
On Wed, 2006-03-22 at 16:11, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  How does one get at the missing fields.  The only way I know is 
  selecting from the sequence, but how does one work this into this 
  query?  Somehow it seems that these things should be stored in a real 
  system catalog.
 
 Yeah.  I've occasionally toyed with the idea that sequences should be
 rows in a single catalog instead of independent tables as they are now.
 This would make for a much smaller disk footprint (with consequent I/O
 savings) and would solve problems like the one you have. 

Would it not make page locking problems much worse with all get_next()'s
competing to update the same page?

At least unless you reserve one page for each sequence.

-
Hannu




Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Tom Lane
Hannu Krosing [EMAIL PROTECTED] writes:
 On Wed, 2006-03-22 at 16:11, Tom Lane wrote:
 Yeah.  I've occasionally toyed with the idea that sequences should be
 rows in a single catalog instead of independent tables as they are now.
 This would make for a much smaller disk footprint (with consequent I/O
 savings) and would solve problems like the one you have. 

 Would it not make page locking problems much worse with all get_next()'s
 competeing to update the same page? 

Well, there'd be at most about 80 sequences per page (ballpark estimate
remembering that we'd still want to store a sequence name) and the
reduction in demand for shared buffers might outweigh the increased
contention for any one buffer.  I haven't seen any examples where get_next
is the key source of contention anyhow.  A last point is that in simple
cases where the contention is all on one sequence, you're going to have
that problem anyway.
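Tom's per-page estimate can be sanity-checked with rough arithmetic; the header and payload sizes below are assumed round numbers for illustration, not exact PostgreSQL figures:

```python
# Back-of-the-envelope tuples-per-page estimate (all sizes are assumptions).
PAGE_SIZE = 8192      # default PostgreSQL block size
PAGE_HEADER = 24      # assumed page header size
ITEM_POINTER = 4      # per-tuple line pointer
TUPLE_HEADER = 28     # assumed heap tuple header size
# assumed payload: six 8-byte counters/flags plus a 64-byte sequence name
SEQ_PAYLOAD = 6 * 8 + 64

def tuples_per_page(payload):
    per_tuple = ITEM_POINTER + TUPLE_HEADER + payload
    return (PAGE_SIZE - PAGE_HEADER) // per_tuple

estimate = tuples_per_page(SEQ_PAYLOAD)  # lands in the same ballpark: dozens of tuples
```

Padding the tuples just means inflating the payload until the function returns whatever tuples/page ratio is wanted.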

 At least unless you reserve one page for each sequence.

Which is exactly what I don't want.  But we could imagine padding the
tuples to achieve any particular tuples/page ratio we want, if 80 proves
to be uncomfortably many.

regards, tom lane



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Hannu Krosing
On Wed, 2006-03-22 at 17:29, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  On Wed, 2006-03-22 at 16:11, Tom Lane wrote:
  Yeah.  I've occasionally toyed with the idea that sequences should be
  rows in a single catalog instead of independent tables as they are now.
  This would make for a much smaller disk footprint (with consequent I/O
  savings) and would solve problems like the one you have. 
 
  Would it not make page locking problems much worse with all get_next()'s
  competeing to update the same page? 
 
 Well, there'd be at most about 80 sequences per page (ballpark estimate
 remembering that we'd still want to store a sequence name) and the
 reduction in demand for shared buffers might outweigh the increased
 contention for any one buffer. I haven't seen any examples where get_next
 is the key source of contention anyhow.  

Probably true. I can't think of one right now either. And we have
caching to solve these cases.

 A last point is that in simple
 cases where the contention is all on one sequence, you're going to have
 that problem anyway.
 
  At least unless you reserve one page for each sequence.
 
 Which is exactly what I don't want.  But we could imagine padding the
 tuples to achieve any particular tuples/page ratio we want, if 80 proves
 to be uncomfortably many.

I guess we can't easily start locking some subarea of a page, say a
256-byte subpage, or just the tuple.

OTOH it may be possible as we don't need to lock page header for
sequences as the tuple is updated in place and will not change in size.

OTOOH, I'm afraid we still need to WAL the whole page, so the savings
will be marginal.


Hannu





Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Alvaro Herrera
Hannu Krosing wrote:

 I guess we can't easily start locking some subarea of a page, say 256
 byte subpage, or just the tuple.

 OTOH it may be possible as we don't need to lock page header for
 sequences as the tuple is updated in place and will not change in size.

Huh, we _can_ lock individual tuples, using LockTuple() (or rather,
heap_lock_tuple).  Since the tuple is modified in place, there's no need
to lock the whole page.

 OTOOH, I'm afraid we still need to WAL the whole page, so the savings
 will be marginal.

Huh, why?  We can just keep the current WAL logging for sequences, or
something very similar, can't we?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Hannu Krosing wrote:
 I guess we can't easily start locking some subarea of a page, say 256
 byte subpage, or just the tuple.

 Huh, we _can_ lock individual tuples, using LockTuple() (or rather,
 heap_lock_tuple).  Since the tuple is modified in place, there's no need
 to lock the whole page.

But heap_lock_tuple is pretty expensive and subject to deadlocks.  I
think getting the buffer content lock on the page will still be the
right thing.

 OTOOH, I'm afraid we still need to WAL the whole page, so the savings
 will be marginal.

 Huh, why?  We can just keep the current WAL logging for sequences, or
 something very similar, can't we?

In the case of the first touch of a sequence page after checkpoint, we'd
need to WAL the whole page image to defend against page breaks during
write.  After that though the WAL entries would be *smaller* than they
are now, since there'd be no need to log the entire content of the
changed tuple; we'd know we only need to log the counter advance.

It's hard to say whether this'd be a win, loss, or wash without testing.
It'd probably depend on how many nextval's per checkpoint you want to
assume.
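The win/loss/wash question can be framed as a toy model. Every size below is an invented illustrative number (real xlog record sizes differ), so this shows the shape of the trade-off, not the actual break-even point:

```python
# Toy WAL-volume model per checkpoint interval; all sizes are invented.
FULL_PAGE_IMAGE = 8192   # first touch of a page after a checkpoint
OLD_RECORD = 200         # pretend size: logging the whole sequence tuple
NEW_RECORD = 40          # pretend size: logging only the counter advance
SEQS_PER_PAGE = 80       # ballpark from earlier in the thread

def wal_separate_pages(active_seqs, nextvals_per_seq):
    # today: every sequence lives on its own page
    return active_seqs * (FULL_PAGE_IMAGE + nextvals_per_seq * OLD_RECORD)

def wal_shared_catalog(active_seqs, nextvals_per_seq):
    # proposed: sequences packed into shared pg_sequence pages
    pages = (active_seqs + SEQS_PER_PAGE - 1) // SEQS_PER_PAGE
    return pages * FULL_PAGE_IMAGE + active_seqs * nextvals_per_seq * NEW_RECORD
```

With these assumed numbers the shared catalog wins easily; whether it does in reality depends on the true record sizes and on how many distinct sequences are touched per checkpoint.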

regards, tom lane



[HACKERS] create type error message

2006-03-22 Thread Jim Buttafuoco

# select version();
                                          version
--------------------------------------------------------------------------------------------
 PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
(1 row)

simple example:
# create type a as (a text,b int);
CREATE TYPE
# create type a as (a text,b int);
ERROR:  relation "a" already exists

seems like

ERROR:  type "a" already exists

would be better.

Thanks
Jim




[HACKERS] Where does the time go?

2006-03-22 Thread Kevin Grittner
I have some odd results from timing two versions of an update query, and
was hoping to get a better handle on how to interpret this.  The query
does an update of one table.  One version does three NOT IN tests
against three related tables.  The other version does the logically
equivalent NOT EXISTS tests.  Here are the timings:

(a)  NOT IN EXPLAIN ANALYZE reports: 150 ms.

(b)  Difference between SELECT CURRENT_TIMESTAMP values before and
after the NOT IN EXPLAIN ANALYZE: 171 ms.

(c)  Difference between SELECT CURRENT_TIMESTAMP values before and
after NOT IN query without EXPLAIN ANALYZE: 140 ms.

(d)  NOT EXISTS EXPLAIN ANALYZE reports: 9 ms.

(e)  Difference between SELECT CURRENT_TIMESTAMP values before and
after the NOT EXISTS EXPLAIN ANALYZE: 62 ms.

(f)  Difference between SELECT CURRENT_TIMESTAMP values before and
after NOT EXISTS query without EXPLAIN ANALYZE: 62 ms.

I repeated these tests many times.  After each test I updated the table
back to its original state and ran CLUSTER and VACUUM ANALYZE, then
allowed a minute of settling time.  The timings are remarkably
consistent from one try to the next with an occasional outlier on the
high side.

It seems to me that these results indicate that EXPLAIN ANALYZE
distorts the plan for the IN query (b - c = 31 ms), but not the plan for
the EXISTS query (e == f).  The time to bracket the UPDATE with the
SELECT CURRENT_TIMESTAMP queries can't exceed 21 ms (b - a).  That seems
like a long time, though, so I suspect that some of that time is going
to the same place that most of the 53 ms (e - d) goes on the NOT EXISTS
query.  Where would that be?  Parse and plan phases?  Is there a way to
pin that down better?
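Spelling out the deltas in the reasoning above:

```python
# Kevin's measurements in milliseconds, labeled (a)..(f) as in the message.
a = 150  # NOT IN, time reported by EXPLAIN ANALYZE
b = 171  # NOT IN, wall clock around the EXPLAIN ANALYZE
c = 140  # NOT IN, wall clock without EXPLAIN ANALYZE
d = 9    # NOT EXISTS, time reported by EXPLAIN ANALYZE
e = 62   # NOT EXISTS, wall clock around the EXPLAIN ANALYZE
f = 62   # NOT EXISTS, wall clock without EXPLAIN ANALYZE

explain_overhead_in = b - c    # instrumentation cost on the NOT IN plan
wrapper_upper_bound = b - a    # at most this much for the bracketing SELECTs
unaccounted_exists = e - d     # time outside execution: parse/plan/etc.?
```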

I was planning on posting the queries with plans and timings to
illustrate a costing issue, but when I saw these timings I figured I'd
ask about this first.  It raises what seems like a more fundamental
question, which would have an impact on collecting all of the
interesting information.  If execution time is 9 ms, but there is
something else adding several times that, perhaps that something else is
interesting in itself.

Thanks for any info.

-Kevin




Re: [HACKERS] Modular Type Libraries: was A real currency type

2006-03-22 Thread Trent Shipley
On Wednesday 2006-03-22 08:53, William ZHANG wrote:
 Tom Lane [EMAIL PROTECTED]

  Timestamps and numerics are definitely in the spec, geometric and
  network types are definitely not.  IIRC, bitstring types are in SQL99
  but for some reason are deprecated in SQL2003 (if anyone knows the
  reasoning behind the SQL committee's about-face on that, please clue
  us in).

 There is a standard data type called BIT in ODBC and JDBC,
 but it is something like the SQL standard's BOOLEAN, not BIT. It seems that
 some DBMSs implement BIT as BOOLEAN in the backend. Maybe the standards
 committee thinks that bit strings are useless and easy to cause confusion?

In or out of the standard, bitstring can be a nice type to have.



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Darcy Buskermolen
On Wednesday 22 March 2006 13:11, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  How does one get at the missing fields.  The only way I know is
  selecting from the sequence, but how does one work this into this
  query?  Somehow it seems that these things should be stored in a real
  system catalog.

 Yeah.  I've occasionally toyed with the idea that sequences should be
 rows in a single catalog instead of independent tables as they are now.
 This would make for a much smaller disk footprint (with consequent I/O
 savings) and would solve problems like the one you have.  Unfortunately
 the backward-compatibility issues seem a bit daunting :-(.  It's
 probably not completely impossible, but how do we preserve the existing
 behavior that you can SELECT * FROM seqname and get the parameters?

 Ideally I'd like
   SELECT * FROM seqname;  -- gets params of one sequence
   SELECT * FROM pg_sequence;  -- gets params of all sequences

 One possible kluge is to make all the sequences be child tables of a
 pg_sequence catalog that exists only to be their inheritance parent.
 This seems pretty ugly from a performance point of view though.
 Selecting from pg_sequence would be really expensive if you have a lot
 of sequences, and there wouldn't be any opportunity for reducing the
 disk footprint.

 (Thinks a bit...)  Maybe it would work for pg_sequence to be a real
 catalog with a row per sequence, and we also create a view named after
 the sequence that simply selects from pg_sequence with an appropriate
 WHERE condition.

I'd think that would be a workable solution, with documentation notes that 
this will be deprecated in favor of information_schema in an upcoming 
release?


 Plan C would be to say that we don't need to preserve SELECT * FROM
 seqname, but I'll bet there would be some hollering.


   regards, tom lane


-- 
Darcy Buskermolen
Wavefire Technologies Corp.

http://www.wavefire.com
ph: 250.717.0200
fx: 250.763.1759



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Tom Lane
Darcy Buskermolen [EMAIL PROTECTED] writes:
 On Wednesday 22 March 2006 13:11, Tom Lane wrote:
 (Thinks a bit...)  Maybe it would work for pg_sequence to be a real
 catalog with a row per sequence, and we also create a view named after
 the sequence that simply selects from pg_sequence with an appropriate
 WHERE condition.

 I'd think that would be a workable solution, with documentation notes that 
 this will be deprecated in favor of information_schema  in an upcoming 
 release ?

Yeah, we could consider the views a transitional thing, and get rid of
them after a release or two.  Tell people to change over to either look
in the pg_sequence catalog, or use the information_schema view.  Does
that view expose everything that there is, though, or will we have
proprietary extensions that are not in SQL2003?

regards, tom lane



Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Mark Kirkwood

Christopher Kings-Lynne wrote:

To the GP, adding -lncurses (or rather the static equivalent) to your
link line should solve it. But if you include any other libraries like
ssl or kerberos be prepared to add a lot more.



With -lncurses or -lcurses I still can't get this to work.  I add it to 
the ${CC} line, right?




This is what I used (current 8.2 sources FreeBSD 6.0):


*** Makefile.orig   Thu Mar 23 14:37:37 2006
--- MakefileThu Mar 23 14:40:46 2006
***
*** 27,32 
--- 27,34 

  FLEXFLAGS = -Cfe

+ CFLAGS += -static
+ LIBS += -lcurses

  all: submake-libpq submake-libpgport submake-backend psql




[HACKERS] On vacation

2006-03-22 Thread Bruce Momjian
I am heading on vacation starting tomorrow/Thursday, and return the
following Thursday, March 30th.  I will be in Florida with my family.

-- 
  Bruce Momjian   http://candle.pha.pa.us
  SRA OSS, Inc.   http://www.sraoss.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Where does the time go?

2006-03-22 Thread Tom Lane
Kevin Grittner [EMAIL PROTECTED] writes:
 I have some odd results from timing two versions of an update query, and
 was hoping to get a better handle on how to interpret this.

You didn't show us the explain analyze results, but I'm betting that a
big part of your issue is that the EXPLAIN ANALYZE instrumentation
overhead is (1) significant and (2) different for the two query plans.
The instrumentation overhead should be about the same for any one plan
node execution, but the two plans could involve very different numbers
of plan node executions ...
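Tom's point can be illustrated with a toy model; the per-node-execution overhead and the execution counts below are invented numbers, not measurements:

```python
# Toy model: measured time = true run time + per-node-execution overhead.
OVERHEAD_MS = 0.002  # invented instrumentation cost per plan-node execution

def measured_ms(true_ms, node_executions):
    return true_ms + node_executions * OVERHEAD_MS

# A NOT IN plan that rescans a subplan for every outer row executes far
# more plan nodes than a NOT EXISTS plan probing a hash or an index.
not_in_measured = measured_ms(140.0, 15000)   # inflated by 30 ms
not_exists_measured = measured_ms(9.0, 500)   # inflated by only 1 ms
```

The same per-node cost thus distorts one plan's EXPLAIN ANALYZE total far more than the other's.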

regards, tom lane



Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Andrew Dunstan
Mark Kirkwood said:
 Christopher Kings-Lynne wrote:
 To the GP, adding -lncurses (or rather the static equivalent) to your
 link line should solve it. But if you include any other libraries
 like ssl or kerberos be prepared to add a lot more.


 With -lncurses or -lcurses I still can't get this to work.  I add it
 to  the ${CC} line, right?


 This is what I used (current 8.2 sources FreeBSD 6.0):


 *** Makefile.orig   Thu Mar 23 14:37:37 2006
 --- MakefileThu Mar 23 14:40:46 2006
 ***
 *** 27,32 
 --- 27,34 

   FLEXFLAGS = -Cfe

 + CFLAGS += -static
 + LIBS += -lcurses

   all: submake-libpq submake-libpgport submake-backend psql


That might work on FBSD but it doesn't work everywhere - when I tried it on
Linux I got nasty link errors.

cheers

andrew





Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Andrew Dunstan
Tom Lane said:
 Darcy Buskermolen [EMAIL PROTECTED] writes:
 On Wednesday 22 March 2006 13:11, Tom Lane wrote:
 (Thinks a bit...)  Maybe it would work for pg_sequence to be a real
 catalog with a row per sequence, and we also create a view named
 after the sequence that simply selects from pg_sequence with an
 appropriate WHERE condition.

 I'd think that would be a workable solution, with documentation notes
 that  this will be deprecated in favor of information_schema  in an
 upcoming  release ?

 Yeah, we could consider the views a transitional thing, and get rid of
 them after a release or two.  Tell people to change over to either look
 in the pg_sequence catalog, or use the information_schema view.  Does
 that view expose everything that there is, though, or will we have
 proprietary extensions that are not in SQL2003?


What happens to sequence ACLs?

cheers

andrew





Re: [HACKERS] On vacation

2006-03-22 Thread Michael Glaesemann


On Mar 23, 2006, at 11:43 , Bruce Momjian wrote:


I am heading on vacation starting tomorrow/Thursday, and return the
following Thursday, March 30th.  I will be in Florida with my family.


Have a great trip!

Michael Glaesemann
grzm myrealbox com






Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Mark Kirkwood

Andrew Dunstan wrote:

Mark Kirkwood said:


Christopher Kings-Lynne wrote:


To the GP, adding -lncurses (or rather the static equivalent) to your
link line should solve it. But if you include any other libraries
like ssl or kerberos be prepared to add a lot more.



With -lncurses or -lcurses I still can't get this to work.  I add it
to  the ${CC} line, right?



This is what I used (current 8.2 sources FreeBSD 6.0):


*** Makefile.orig   Thu Mar 23 14:37:37 2006
--- MakefileThu Mar 23 14:40:46 2006
***
*** 27,32 
--- 27,34 

 FLEXFLAGS = -Cfe

+ CFLAGS += -static
+ LIBS += -lcurses

 all: submake-libpq submake-libpgport submake-backend psql




That might work on FBSD but it doesn't work everywhere  - when I tried it on
Linux I got nasty link errors.



It does for me (2.6.15-gentoo-r5) - note that my previous mail is way 
too vague about which Makefile to patch (sorry): src/bin/psql/Makefile.


Cheers

Mark






Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Andrew Dunstan
Mark Kirkwood said:
 Andrew Dunstan wrote:
 Mark Kirkwood said:

Christopher Kings-Lynne wrote:

To the GP, adding -lncurses (or rather the static equivalent) to
your link line should solve it. But if you include any other
libraries like ssl or kerberos be prepared to add a lot more.


With -lncurses or -lcurses I still can't get this to work.  I add it
to  the ${CC} line, right?


This is what I used (current 8.2 sources FreeBSD 6.0):


*** Makefile.orig   Thu Mar 23 14:37:37 2006
--- MakefileThu Mar 23 14:40:46 2006
***
*** 27,32 
--- 27,34 

  FLEXFLAGS = -Cfe

+ CFLAGS += -static
+ LIBS += -lcurses

  all: submake-libpq submake-libpgport submake-backend psql



 That might work on FBSD but it doesn't work everywhere  - when I tried
 it on Linux I got nasty link errors.


 It does for me (2.6.15-gentoo-r5) - note that my previous mail is way
 too vague about which Makefile to patch (sorry): src/bin/psql/Makefile.

FC3:

/home/andrew/pglive/pgsql.plperl-pq/src/interfaces/libpq/ip.c:79: warning:
Using 'getaddrinfo' in statically linked applications requires at runtime the
shared libraries from the glibc version used for linking
collect2: ld returned 1 exit status
make: *** [psql] Error 1



cheers

andrew





Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Mark Kirkwood

Andrew Dunstan wrote:

Mark Kirkwood said:


Andrew Dunstan wrote:


Mark Kirkwood said:



Christopher Kings-Lynne wrote:



To the GP, adding -lncurses (or rather the static equivalent) to
your link line should solve it. But if you include any other
libraries like ssl or kerberos be prepared to add a lot more.



With -lncurses or -lcurses I still can't get this to work.  I add it
to  the ${CC} line, right?



This is what I used (current 8.2 sources FreeBSD 6.0):


*** Makefile.orig   Thu Mar 23 14:37:37 2006
--- MakefileThu Mar 23 14:40:46 2006
***
*** 27,32 
--- 27,34 

FLEXFLAGS = -Cfe

+ CFLAGS += -static
+ LIBS += -lcurses

all: submake-libpq submake-libpgport submake-backend psql




That might work on FBSD but it doesn't work everywhere  - when I tried
it on Linux I got nasty link errors.



It does for me (2.6.15-gentoo-r5) - note that my previous mail is way
too vague about which Makefile to patch (sorry): src/bin/psql/Makefile.



FC3:

/home/andrew/pglive/pgsql.plperl-pq/src/interfaces/libpq/ip.c:79: warning:
Using 'getaddrinfo' in statically linked applications requires at runtime the
shared libraries from the glibc version used for linking
collect2: ld returned 1 exit status
make: *** [psql] Error 1




Is that after patching only the psql Makefile? Interesting - you 
wouldn't think FC3 would be *that* different 


Cheers

Mark



Re: [HACKERS] Static build of psql with readline support

2006-03-22 Thread Andrew Dunstan
Mark Kirkwood said:


 FC3:

 /home/andrew/pglive/pgsql.plperl-pq/src/interfaces/libpq/ip.c:79:
 warning: Using 'getaddrinfo' in statically linked applications
 requires at runtime the shared libraries from the glibc version used
 for linking
 collect2: ld returned 1 exit status
 make: *** [psql] Error 1



 Is that after patching only the psql Makefile? Interesting - you
 wouldn't think FC3 would be *that* different 


Yes. I just patched that Makefile, and then in the psql directory did make
clean; make.

cheers

andrew





Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Tom Lane
Andrew Dunstan [EMAIL PROTECTED] writes:
 What happens to sequence ACLs?

Hm, good point.  We could put 'em in pg_sequence, except that most of
the operations on pg_sequence rows will be nontransactional, and that
doesn't seem to square nicely with transactional updates on ACLs.
Maybe we need two catalogs just to separate the transactional and
nontransactional data for a sequence?  Ugh.

regards, tom lane



Re: [HACKERS] Accessing schema data in information schema

2006-03-22 Thread Christopher Kings-Lynne

Hm, good point.  We could put 'em in pg_sequence, except that most of
the operations on pg_sequence rows will be nontransactional, and that
doesn't seem to square nicely with transactional updates on ACLs.
Maybe we need two catalogs just to separate the transactional and
nontransactional data for a sequence?  Ugh.


Is it possible to have an SRF that can peek into the lastval data and 
present it, and make no changes to our catalogs at all?


Or can't we use in the schema view something like:

CREATE VIEW sequences AS
  SELECT CAST(current_database() AS sql_identifier) AS sequence_catalog,
         CAST(nc.nspname AS sql_identifier) AS sequence_schema,
         CAST(c.relname AS sql_identifier) AS sequence_name,
         seq_info(c.relname, 'max') AS maximum_value,
         seq_info(c.relname, 'min') AS minimum_value,
         seq_info(c.relname, 'inc') AS increment,
         seq_info(c.relname, 'cycle') AS cycle_option
  FROM pg_namespace nc, pg_class c
  WHERE c.relnamespace = nc.oid
    AND c.relkind = 'S';

Chris




Re: [HACKERS] create type error message

2006-03-22 Thread Tom Lane
Jim Buttafuoco [EMAIL PROTECTED] writes:
 # create type a as (a text,b int);
 CREATE TYPE
 # create type a as (a text,b int);
 ERROR:  relation a already exists

 seems like  
 ERROR: type a already exists
 would be better.

It's not really all that easy, because tables and composite types both
have entries in both pg_class and pg_type.  Consider

regression=# create table tt (a text,b int);
CREATE TABLE
regression=# create type tt as (a text,b int);
ERROR:  relation "tt" already exists

regression=# create type zz as (a text,b int);
CREATE TYPE
regression=# create table zz (a text,b int);
ERROR:  relation "zz" already exists

In all three cases the error is actually detected inside
heap_create_with_catalog, because we choose to make the pg_class entry
first.  We could possibly alter the error report based on the relkind
of the entry we are about to make, but I'm unconvinced it'd be an
improvement...

regards, tom lane



[HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Enver ALTIN

Hi,

We have been having a short talk with Devrim this evening about the
pgnixInstaller[1] project he started[2] a while ago.

I have briefly read over the thread following the project announce, 
particularly comments on issues about package systems of distributions 
and integration with them.


I believe the project will add some value to the PostgreSQL community, 
so I'd like to join and add my grain of salt.


The main goal of the project appears to be making installation and 
configuration of a PostgreSQL server a no-brainer, not merely easier. Any 
moron you can get to the /mouse/[3] should be able to get PostgreSQL up and 
running in a short time, and even if they do it in some fashion that's not 
so good, the installation should still work.


Fortunately we're not alone in the universe; several other projects have 
suffered from the very same problem, including the much larger 
OpenOffice.org project. I think some of you have installed it on your 
favorite Unix-alike OS using its click-click-and-go installer, back when 
distributions did not have packages of it -- ancient times.


We'll be doing that for PostgreSQL; I think it's a fantastic way for me 
to get involved.


In order to achieve this *properly*, I intend to make PostgreSQL 
relocatable; that is, PostgreSQL should be able to run if you `copy` 
its binaries somewhere else -- no matter where you `./configure 
--prefix`ed it. I took a very quick look at some parts of the codebase 
and it seems doable to me.


Apparently we'll be avoiding package systems integration and 
development-packages-requirement issues mentioned in the thread.


Next step would be a simple, possibly shell-script and xdialog/zenity[4] 
based prototype installer. I personally can't afford the Python thing; 
Solaris and possibly others don't have it yet, and embedding Python in a 
shell script is not the brightest idea, obviously.


Now the question: would you accept patches for this, or are we going to 
maintain a separate set of patches?


If you have suggestions about any other direction, I'd be more than 
happy to hear them.


Cheers,

[1] http://pgfoundry.org/projects/pgnixinstaller/
[2] http://archives.postgresql.org/pgsql-hackers/2006-01/msg00943.php
[3] http://www.joelonsoftware.com/articles/fog24.html
--
Enver



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Enver ALTIN

Hey,

Enver ALTIN wrote:
Next step would be a simple, possibly shell-script and xdialog/zenity[4] 
based prototype installer. I personally can't afford the Python thing, 
Solaris and possibly others don't have it yet and embedding Python in a 
shell script is not the brightest idea obviously.


[4] http://cvs.gnome.org/viewcvs/zenity/

:)
--
Enver



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Tom Lane
Enver ALTIN [EMAIL PROTECTED] writes:
 In order to achieve this *properly*, I intend to make PostgreSQL 
 relocatable, that is, PostgreSQL should be able to run if you `copy` 
 it's binaries somewhere else -- no matter where you `./configure 
 --prefix`ed it. I took a very quick look at some parts of the codebase 
 and it seems doable to me.

s/doable/already done/, no?

regards, tom lane



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Enver ALTIN

Hi Tom,

Tom Lane wrote:

Enver ALTIN [EMAIL PROTECTED] writes:
In order to achieve this *properly*, I intend to make PostgreSQL 
relocatable, that is, PostgreSQL should be able to run if you `copy` 
it's binaries somewhere else -- no matter where you `./configure 
--prefix`ed it. I took a very quick look at some parts of the codebase 
and it seems doable to me.


s/doable/already done/, no?


I did a Google search on relocatable PostgreSQL and the first 
result[1] said it's already done. Apparently it didn't work for me on 
Solaris 8 (linking problems). I'll be looking at it sometime today.


Thanks for your immediate response :)

[1] http://www.postgresql.org/docs/faqs.TODO.html
--
Enver



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Tom Lane
Enver ALTIN [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 s/doable/already done/, no?

 I did a Google search on relocatable PostgreSQL and the first 
 result[1] said it's already done. Apparently it didn't work for me on 
 Solaris 8 (linking problems). I'll be looking at it sometime today.

OK ... it's supposed to work to shift the whole installation tree to
a new root, i.e., paths to places like the /share and /lib directories
are determined relative to where the backend executable actually is.
If this is not working on Solaris then for sure we want to know ...

regards, tom lane
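The relative-path scheme Tom describes can be sketched roughly as follows. This is a Python illustration only (the actual mechanism is C code inside PostgreSQL, and the paths used here are made-up examples): the share and lib directories are derived from where the executable actually lives, not from the configure-time prefix.

```python
import os

def installation_paths(executable):
    # Resolve sibling directories relative to where the binary actually
    # lives, instead of hard-coding the configure-time --prefix.
    bindir = os.path.dirname(os.path.abspath(executable))
    prefix = os.path.dirname(bindir)  # bin/ sits one level below the root
    return {
        "bin": bindir,
        "lib": os.path.join(prefix, "lib"),
        "share": os.path.join(prefix, "share"),
    }

# Copying the whole tree moves every derived path consistently:
paths = installation_paths("/opt/pgsql/bin/postgres")
moved = installation_paths("/home/me/pg-copy/bin/postgres")
```

Because everything is computed from the binary's real location, copying the whole tree (e.g. `cp -r /opt/pgsql /home/me/pg-copy`) leaves a self-consistent installation, which is exactly the property the Solaris report suggests is broken there.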



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Josh Berkus
Enver,

 Fortunately we're not alone in the universe, several other projects
 suffered from the very same problem. Including the much larger
 OpenOffice.org project.

Are you crossing over from OpenOffice.org?  Hi!  I'm former OOo, now PG, and 
likely to get involved with OOo again very soon ...

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



[HACKERS] Get explain output of postgresql in Tables

2006-03-22 Thread Akshat Nair
Hi,

I read a post in the archives about storing EXPLAIN output directly into
tables. Is this feature present in Postgres now?

I have a piece of software in which I need to display the EXPLAIN output in
a tree format, for which I need to parse the textual plan and extract the
relevant information. I have a parser written in Java which does some of the
work, but it is not completely working. Can I get the grammar for the
EXPLAIN output? Or if someone has another idea, please let me know.

Thanks,
-Akshat
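In the absence of a published grammar, one common workaround is to nest plan nodes by the indentation of the `->` markers in the textual output. A rough sketch in Python rather than Java, for illustration only; the format is not documented as stable, and detail lines (costs, conditions, sort keys) are simply skipped here:

```python
def parse_explain(text):
    # Build a nested tree from textual EXPLAIN output by treating the
    # column of each '->' marker as that node's depth.  This is only a
    # heuristic: the output format has no stable, documented grammar.
    root = None
    stack = []  # (indent, node) pairs from root to current node
    for line in text.splitlines():
        stripped = line.lstrip()
        if not stripped:
            continue
        if stripped.startswith("->"):
            indent = len(line) - len(stripped)
            node = {"plan": stripped[2:].strip(), "children": []}
            # Pop back up to this node's parent, then attach.
            while stack and stack[-1][0] >= indent:
                stack.pop()
            stack[-1][1]["children"].append(node)
            stack.append((indent, node))
        elif root is None:
            # The first non-blank line is the top plan node.
            root = {"plan": stripped, "children": []}
            stack = [(-1, root)]
        # Other lines (filters, hash conditions, ...) are ignored here.
    return root

plan = """Hash Join  (cost=1.05..2.14 rows=4 width=8)
  Hash Cond: (a.id = b.id)
  ->  Seq Scan on a  (cost=0.00..1.04 rows=4 width=4)
  ->  Hash  (cost=1.04..1.04 rows=4 width=4)
        ->  Seq Scan on b  (cost=0.00..1.04 rows=4 width=4)
"""
tree = parse_explain(plan)
```

This yields the Hash Join at the root with two children (the Seq Scan and the Hash, the latter with its own Seq Scan child), which maps directly onto a tree widget. It will break whenever the text format changes, which is the point of Tom's caveat.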



Re: [HACKERS] Get explain output of postgresql in Tables

2006-03-22 Thread Tom Lane
Akshat Nair [EMAIL PROTECTED] writes:
 Can I get the grammar for the explain output?

There isn't one, it's just text and subject to change at a moment's
notice :-(.  The past proposals that we format it a bit more rigidly
have so far foundered for lack of a workable definition of what the
structure should be.  It's still an open problem to devise that
definition.

regards, tom lane



Re: [HACKERS] pgNixInstaller: Making PostgreSQL relocatable

2006-03-22 Thread Greg Stark
Tom Lane [EMAIL PROTECTED] writes:

 Enver ALTIN [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  s/doable/already done/, no?
 
  I did a Google search on relocatable PostgreSQL and the first 
  result[1] said it's already done. Apparently it didn't work for me on 
  Solaris 8 (linking problems). I'll be looking at it sometime today.
 
 OK ... it's supposed to work to shift the whole installation tree to
 a new root, ie, paths to places like the /share and /lib directories
 are determined relative to where the backend executable actually is.
 If this is not working on Solaris then for sure we want to know ...

I'm sure this isn't the only possible gotcha, but I do seem to recall that on
Solaris there's no such thing as a default LD_LIBRARY_PATH. Every binary
stores absolute paths to every shared library it's linked against.

-- 
greg
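One way to check whether a binary carries such baked-in absolute library paths is to look for RPATH/RUNPATH entries in its dynamic section. A small sketch that filters the output of GNU binutils' `readelf -d` (on Solaris the native equivalents are `elfdump -d` or `dump -Lv`; the sample output below is made up for illustration):

```python
def rpath_entries(readelf_output):
    # Pick out RPATH/RUNPATH lines from `readelf -d` output.  An absolute
    # library path recorded here travels with the binary wherever it is
    # copied, which is what makes an install non-relocatable.
    return [line.strip() for line in readelf_output.splitlines()
            if "(RPATH)" in line or "(RUNPATH)" in line]

# Hypothetical dynamic-section dump for a postgres binary:
sample = """Dynamic section at offset 0x2d98 contains 27 entries:
  Tag        Type                         Name/Value
 0x0000000000000001 (NEEDED)             Shared library: [libc.so.1]
 0x000000000000000f (RPATH)              Library rpath: [/usr/local/pgsql/lib]
"""
```

An entry like the `/usr/local/pgsql/lib` rpath above pins the binary to its original install location; moving the tree then breaks linking at startup, matching the Solaris symptom reported earlier in the thread.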

