Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Michael Paesold

Tom Lane wrote:

To my mind the main rationale for continuing to support 7.2 is that it
was the last pre-schema release, and so people whose apps haven't yet
been fixed to cope with schemas will be on their own once we drop it.
While each release has some portability gotchas, I don't think there
have been any quite that big since then.  If we drop support for 7.2,
it wouldn't be out of the question for us to drop 7.3 and 7.4 too (at
least not from where I sit ... I'm sure some will differ).


Yes. For one of our applications that makes massive use of triggers, the 
transition from 7.2 to 7.3 was less painful than from 7.4 to 8.0. Many of 
the triggers were written with the assumption that foreign key checks were 
effectively delayed until the end of the calling statement.


Just another opinion. Nevertheless, if it's feasible to support 7.4 for a 
while, I would recommend it. Also from a marketing standpoint -- having 
back branches supported for a visible amount of time increases people's 
confidence in PostgreSQL and its stability.


Best Regards,
Michael Paesold

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Steve Atkins
On Mon, Sep 26, 2005 at 09:54:46PM -0700, Joshua D. Drake wrote:
 [ raised eyebrow... ]  Has bizgres obtained a crystal ball from
 somewhere?  There is *no* way anyone could provide you anything that
 has any legitimate claim on the name PG 8.2 three months from now.
 
 I **think** what he meant was that depending on what Bizgres has done in 
 the next three months, they may not run 8.1 but run Bizgres instead.

Partly that, yes. Once 8.1 has stabilized we'll look at 8.1, at bizgres
as it stands at that point, and at the 8.2 crystal ball, and decide where
the sweet spot is for our needs.

The only crystal ball involved is the assumption that if bizgres has
Neat Stuff(tm) that would be of widespread use in its development
tree at that point then the odds are good that it, or something
functionally equivalent to it, will appear in the 8.2 development
tree. If the neat stuff is neat enough we may push off an update from
7.4 and develop using some of that neat stuff, with the plan of
deploying either to bizgres or to 8.2.

(My best guess right now is that 8.1 will be the right thing to move to,
 but I'm delaying that decision until next year.)

Cheers,
  Steve


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Dave Page
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
 Sent: 27 September 2005 01:13
 To: Peter Eisentraut
 Cc: pgsql-hackers@postgresql.org
 Subject: [HACKERS] Making pgxs builds work with a relocated 
 installation
 
 The other open issue in this area was that on Windows, pg_config needs
 to return space-free path names to avoid breaking the 
 makefiles.  It was
 suggested that this could be handled by passing pg_config's 
 result path
 names through GetShortPathName() on that platform.  That 
 sounds OK to me
 but I'm not in a position to write or test such a patch; can someone
 take care of that?

At the risk of getting squished in the rush to help, if no one else
volunteers I'll try to find some time.

:-)

/D

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Magnus Hagander
  The other open issue in this area was that on Windows, 
 pg_config needs 
  to return space-free path names to avoid breaking the 
 makefiles.  It 
  was suggested that this could be handled by passing 
 pg_config's result 
  path names through GetShortPathName() on that platform.  
 That sounds 
  OK to me but I'm not in a position to write or test such a 
 patch; can 
  someone take care of that?
 
 At the risk of getting squished in the rush to help, if no one 
 else volunteers I'll try to find some time.

Missed this one when it came through the first time. Note: Not
volunteering, just commenting on the how-fix :P

Using GetShortPathName() will break on any system that has disabled
short filename generation, which IIRC is a recommended best practice
both for performance and for security in legacy apps. I don't know what
it does, but probably it will just return the same long path again.

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Dave Page
 

 -Original Message-
 From: Magnus Hagander [mailto:[EMAIL PROTECTED] 
 Sent: 27 September 2005 08:58
 To: Dave Page; Tom Lane; Peter Eisentraut
 Cc: pgsql-hackers@postgresql.org
 Subject: RE: [HACKERS] Making pgxs builds work with a 
 relocated installation
 
   The other open issue in this area was that on Windows, 
  pg_config needs 
   to return space-free path names to avoid breaking the 
  makefiles.  It 
   was suggested that this could be handled by passing 
  pg_config's result 
   path names through GetShortPathName() on that platform.  
  That sounds 
   OK to me but I'm not in a position to write or test such a 
  patch; can 
   someone take care of that?
  
  At the risk of getting squished in the rush to help, if noone 
  else volunteers I'll try to find some time.
 
 Missed this one when it came through the first time. Note: Not
 volunteering, just commenting on the how-fix :P
 
 Using GetShortPathName() will break on any system that has disabled
 short filename generation, which IIRC is a recommended best practice
 both for performance and for security in legacy apps. I don't 
 know what
 it does, but probably it will just return the same long path again.

If it's disabled, then they aren't going to be able to use short names
anyway, therefore we can't do much about it. If you see what I mean!

If GetShortPathName() just returns what was passed to it in such cases,
then at least we won't be any worse off than we are now.
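
For illustration, a minimal sketch of such a wrapper (the helper name is
hypothetical; GetShortPathNameA() returns 0 on failure, or the required
buffer size if the buffer was too small):

#ifdef WIN32
#include <windows.h>
#include <stdio.h>

/*
 * Try to convert a path to its 8.3 short form so it contains no
 * spaces.  If the call fails -- or short-name generation is disabled
 * and nothing shorter comes back -- keep the original path unchanged,
 * leaving us no worse off than before.
 */
static void
make_space_free_path(char *path, size_t pathlen)
{
    char  shortpath[MAX_PATH];
    DWORD len = GetShortPathNameA(path, shortpath, MAX_PATH);

    if (len > 0 && len < MAX_PATH)
        snprintf(path, pathlen, "%s", shortpath);
}
#endif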

Regards, Dave.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Dave Page
 

 -Original Message-
 From: Magnus Hagander [mailto:[EMAIL PROTECTED] 
 Sent: 27 September 2005 09:19
 To: Dave Page; Tom Lane; Peter Eisentraut
 Cc: pgsql-hackers@postgresql.org
 Subject: RE: [HACKERS] Making pgxs builds work with a 
 relocated installation
 
  If it's disabled, then they aren't going to be able to use 
  short names anyway, therefore we can't do much about it. If 
  you see what I mean!
 
 Well, are we sure there are no other ways? Either some funky 
 quoting or
 backslash-escaping spaces or something like that?

Tried quoting and '\ '. Neither works.

  If GetShortPathName() just returns what was passed to it in 
  such cases, then at least we won't be any worse off than we are now.
 
 Right. That'll have to be checked though, the API docs don't seem to
 talk about that. Could be it gives you NULL or something 
 equally evil..

Well, knowing you you have a VM setup in this way so testing should be
easy enough :-p

/D

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Magnus Hagander
The other open issue in this area was that on Windows,
   pg_config needs
to return space-free path names to avoid breaking the
   makefiles.  It
was suggested that this could be handled by passing
   pg_config's result
path names through GetShortPathName() on that platform.  
   That sounds
OK to me but I'm not in a position to write or test such a
   patch; can
someone take care of that?
   
   At the risk of getting squished in the rush to help, if 
  no one else 
   volunteers I'll try to find some time.
  
  Missed this one when it came through the first time. Note: Not 
  volunteering, just commenting on the how-fix :P
  
  Using GetShortPathName() will break on any system that has disabled 
  short filename generation, which IIRC is a recommended best 
 practice 
  both for performance and for security in legacy apps. I don't know 
  what it does, but probably it will just return the same long path 
  again.
 
 If it's disabled, then they aren't going to be able to use 
 short names anyway, therefore we can't do much about it. If 
 you see what I mean!

Well, are we sure there are no other ways? Either some funky quoting or
backslash-escaping spaces or something like that?


 If GetShortPathName() just returns what was passed to it in 
 such cases, then at least we won't be any worse off than we are now.

Right. That'll have to be checked though, the API docs don't seem to
talk about that. Could be it gives you NULL or something equally evil..

//Magnus

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[HACKERS] Strange, very strange

2005-09-27 Thread Andrey Repko
Hello pgsql-hackers,

 Two tables, t1 and t2.
 t2 has an fk (no action) to t1.
 Two triggers: one on t1 before delete that deletes all referencing
 rows in t2, and one on t2 after delete that updates t1.

 When we delete from t1, we end up updating the very record in t1 that
 is being deleted. Of course the logic is not correct, but... PostgreSQL
 deletes the records from t2, updates the record in t1, and leaves it
 alive. But why?
--
new_db=# SELECT * FROM t1;
 id | suma
+--
 10 |5
 11 |6
 12 |6
(3 rows)

new_db=# SELECT * FROM t2;
 id | fk_t1 | suma2
+---+---
 12 |10 | 6
 13 |10 | 6
(2 rows)

new_db=# DELETE FROM t1 WHERE id=10;
DELETE 0

:(
new_db=# SELECT * FROM t1;
 id | suma
+--
 11 |6
 12 |6
 10 |   -7
 ^^^ why?
(3 rows)

new_db=# SELECT * FROM t2;
 id | fk_t1 | suma2
+---+---
(0 rows)

new_db=# DELETE FROM t1 WHERE id=10;
DELETE 1

Test finished.
Script:
--
CREATE TABLE t1 (
id bigserial NOT NULL,
suma bigint
);

CREATE TABLE t2 (
id bigserial NOT NULL,
fk_t1 bigint,
suma2 bigint
);

INSERT INTO t1 (id,suma) VALUES (10,5);
INSERT INTO t1 (id,suma) VALUES (11,6);
INSERT INTO t1 (id,suma) VALUES (12,6);

INSERT INTO t2 (id,fk_t1,suma2) VALUES (12,10,6);
INSERT INTO t2 (id,fk_t1,suma2) VALUES (13,10,6);

CREATE FUNCTION test2_t2() RETURNS trigger
AS $$
begin
  /* Function body */
  UPDATE t1 SET suma=suma-old.suma2 WHERE t1.id=old.fk_t1;
  return null;
end;
$$ LANGUAGE plpgsql;

CREATE FUNCTION test_t1() RETURNS trigger
AS $$
begin
  /* Function body */
  DELETE FROM t2 WHERE fk_t1=old.id;
  return old;
end;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_tr
BEFORE DELETE ON t1
FOR EACH ROW
EXECUTE PROCEDURE test_t1();

CREATE TRIGGER t2_tr
AFTER DELETE ON t2
FOR EACH ROW
EXECUTE PROCEDURE test2_t2();

ALTER TABLE ONLY t1
ADD CONSTRAINT t1_pkey PRIMARY KEY (id);


ALTER TABLE ONLY t2
ADD CONSTRAINT t2_fk FOREIGN KEY (fk_t1) REFERENCES t1(id) ON
UPDATE NO ACTION ON DELETE NO ACTION;

--

-- 
With best regards,
  Andrey Vladimirovich Repko   mailto:[EMAIL PROTECTED]


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[HACKERS] prepared queries in plperl

2005-09-27 Thread Dmitry Karasik

Hello,

I've posted before a patch that enables prepared queries in plperl, and
now, after a while, as it seems to be working as expected, I'd like to ask
if it would be a reasonable thing to commit it into -devel? The patch
against the latest cvs is at http://www.karasik.eu.org/misc/plperl.diff .

The idea behind the patch is to make it possible to pass non-interpolated
parameters to the query:

CREATE OR REPLACE FUNCTION plus_one(INTEGER) RETURNS INTEGER AS $$
  my $x = spi_prepare('SELECT $1 AS datum', 'INT4');
  my $rv = spi_exec_prepared($x, $q + 1);
  spi_freeplan($x);
  return $rv->{rows}[0]->{datum};
$$ LANGUAGE plperl;

-- 
Thanks,
Dmitry Karasik





---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] prepared queries in plperl

2005-09-27 Thread Andrew Dunstan
Dmitry Karasik said:

 Hello,

 I've posted before a patch that enables prepared queries in plperl, and
 now, after a while, as it seems to be working as expected, I'd like to
 ask if it would be a reasonable thing to commit it into -devel? The patch
 against the latest cvs is at http://www.karasik.eu.org/misc/plperl.diff
 .

 The idea behind the patch is to make it possible to pass
 non-interpolated parameters to the query:

 CREATE OR REPLACE FUNCTION plus_one(INTEGER) RETURNS INTEGER AS $$
 my $x = spi_prepare('SELECT $1 AS datum', 'INT4');
  my $rv = spi_exec_prepared($x, $q + 1);
  spi_freeplan($x);
  return $rv->{rows}[0]->{datum};
 $$ LANGUAGE plperl;



The development branch is currently in feature freeze pending release of
version 8.1.

Meanwhile, I will observe that this very desirable feature needs an
interface with spi_fetchrow() - fetching large gobs of data all at once into
perl memory is just not nice.

We need to examine the names being used on our spi functions - I'm not sure
there is sufficient consistency about them.

cheers

andrew



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] unchecked malloc

2005-09-27 Thread Sibtay Abbas
This discussion reminds me of a possible memory leak in plpgsql's code,
in case you are interested in it:

In pl_comp.c, plpgsql_build_variable takes a pointer to a PLpgSQL_type
structure, which is always a malloc'ed instance (since we always use the
plpgsql_build_datatype function). The switch statement in
plpgsql_build_variable shows that its reference is only kept in case the
type structure represents a PLPGSQL_TTYPE_SCALAR; otherwise it is neither
kept nor needed, in case it is either PLPGSQL_TTYPE_ROW or
PLPGSQL_TTYPE_REC.

So is it intentional or a memory leak?

Thank you

On 9/27/05, Tom Lane [EMAIL PROTECTED] wrote:
 Qingqing Zhou [EMAIL PROTECTED] writes:
  Tom Lane [EMAIL PROTECTED] wrote:
   No, because you're thinking in terms of the backend environment, and
   generally in the backend the answer to when to use malloc directly
   is never.

  Well, except before the MemoryContext mechanism is set up? For example,
  the functions (e.g., GUC, vfd) used during bootstrap.

 I think you need to take another look at the startup sequences.  Those
 modules are not run before MemoryContextInit.  In any case, the odds
 of running out of memory before we get to MemoryContextInit are so small
 that I don't have a problem with crashing if it happens.

 regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Database file compatability

2005-09-27 Thread Jeffrey W. Baker
On Mon, 2005-09-26 at 17:27 -0500, Jim C. Nasby wrote:
 If a database is created with a 64 bit version of initdb, would a 32bit
 backend be able to talk to it? Likewise, would a backend compiled by a
 different compiler be able to?
 
 If there was some kind of incompatability, would the backend just refuse
 to start, or would it start and start silently trashing data?

I plugged a storage array that was initialized with ia32 and attached it
to an amd64 machine.  The postmaster complained at startup that
such-and-such magic value was incorrect and refused to start.  However
it was implied on the mailing list that for certain unlucky magic values
the postmaster may have started anyway and eaten the database.

-jwb

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] PostgreSQL overall design

2005-09-27 Thread Jonah H. Harris
Were you looking for a call graph?

On 9/27/05, Abhijit Menon-Sen [EMAIL PROTECTED] wrote:
 At 2005-09-27 15:20:05 +0530, [EMAIL PROTECTED] wrote:
  Can anyone please tell/point me where I can get the postgresql system
  layout (I've an interest to contribute).

 http://www.postgresql.org/developer/coding

 And, in particular:

 http://www.postgresql.org/docs/faqs.FAQ_DEV.html

 -- ams

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

--
Respectfully,
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
http://www.enterprisedb.com/


Re: [HACKERS] Questions about proper newline handling in psql output

2005-09-27 Thread Peter Eisentraut
On Sunday, 25 September 2005 22:45, Martijn van Oosterhout wrote:
 1. To be able to control the spacing, psql now has to be very careful
 about its output. eg \r is printed as \r, ascii control characters are
 output as \x00 style and other control chars as \u. This is a
 change from previous behaviour, yet you're pretty much forced to if you
 want to control the output.

I don't think this is necessary.  If you put control characters into your 
text, then you clearly don't care about aligned output, so you don't get any.  
About \r, I think that needs to be figured into the alignment calculation, to 
work well on Windows and Mac.

 2. Currently I've changed the aligned outputs but not the unaligned
 ones. Given you're not worrying about alignment there anyway, why do
 the work? Also, we recommend unaligned output for script users so I
 don't feel right changing it.

I think this could be a separate feature.

 3. How to show that a value is continued? As you can see below I use
 ':' before columns that have data. This obviously doesn't work for
 first column if there's no outer border. If your border style is 0
 you're totally out of luck.

I think you need to keep the normal delimiter and need some extra mark within 
the table cells.  (Think about how it would have to look in an HTML table.)

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Chris Browne
[EMAIL PROTECTED] (Steve Atkins) writes:
 We started our upgrade from 7.2 to 7.4 about 20 months ago and finished it
 about 10 months ago, skipping 7.3 entirely.

We did something similar; there was only one system deployed in a timeframe
where 7.3 was relevant, and the big systems skipped over 7.3 much as
you did.

 We've only just today hit our first problem in 7.4, and it's fixed by
 upgrading to 7.4.current, rather than the 7.4.something we originally
 upgraded to from 7.2.something.

Ditto, to a degree; we hit a pretty funky index update thing that was
resolved in 7.4.8.

 We'll be skipping 8.0 completely and the next step will probably be to
 8.1.something (or possibly 8.2.something, depending on how bizgres
 looks in 3 months time). We'd probably consider upgrading our
 customers more often, but a dump and restore is extremely painful.

We're strategizing somewhat similarly, save for bizgres not being on
our roadmap.

Dump and restore isn't strictly necessary; we did the 7.2 to 7.4
upgrade via eRServer, and made sure that Slony-I was designed to
support upgrades.

I recently did an application upgrade (not a PG version change) using
Slony-I; replicated to a node that wasn't otherwise busy, and used
that node as the base where various tables were transformed into
their new forms.  Replicated the new forms everywhere.  On the
flag day, a MOVE SET shifted mastery, MERGE SET pasted things
together, and EXECUTE SCRIPT finished the transformation.

We're starting to look at 8.1, and would *certainly* use Slony-I to
perform that upgrade.  The "on the cheap" method would involve
replacing nodes one at a time with 8.1 versions, though we'd more
likely have a bunch of 7.4 nodes running parallel with a corresponding
set of 8.1 nodes, and, once done, drop all the 7.4 ones at once...

 Just a view from the pg-based-enterprise-application world.

 A nice pg_upgrade utility would make a big difference. Clearly an
 in-place upgrade is possible, but maintaining is hard. There are two
 broad ways of running a pg_upgrade project - one that is entirely
 independent of the main codebase and one that puts requirements on
 the main codebase developers (if you change $foo you provide code
 to translate old $foo to new $foo). Any feel for the relative
 difficulty of the two approaches? And how much push-back there'd be
 on the latter?

This strikes me as being only marginally easier than the proverbial
desires for tools to convert ext2 to XFS or ReiserFS.

The conversion tool would have to encode a lot of hairy details, and
would require that the likes of Tom Lane and Bruce Momjian spend a lot
of their time writing the conversion tool instead of working on new
features.

With filesystems, it seems easier and cheaper to buy an extra disk
drive (what, $200?) and use something like rsync/unison to relatively
efficiently replicate the filesystem.

Slony-I is the PostgreSQL equivalent to rsync/unison, in this case.

Or you could look at Mammoth Replicator, if you prefer...

The replication approach allows Tom and Bruce to work on sexy new
features instead of forcing them into the data conversion mould...
-- 
select 'cbbrowne' || '@' || 'cbbrowne.com';
http://cbbrowne.com/info/slony.html
It's always darkest just before it gets pitch black.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] PostgreSQL overall design

2005-09-27 Thread Andrew Dunstan


[ -performance removed ]

Gnanavel S wrote:

 Need a clarification in copydir.c file of src/port directory. In the
 following snippet the destination directory is created first, then the
 source directory is read. Suppose I don't have permission to read
 the source; even then the destination directory would be created.

 I just want to know whether there is any reason for doing so?

Under what circumstances do you imagine this will happen, since the 
postmaster user owns all the files and directories?


cheers

andrew

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Bruce Momjian
Magnus Hagander wrote:
  Changes
  ---
  Win32 signal handling patch (Magnus)
 
 Unless someone else steps up to doing this one, please remove it from
 the list. I will not have time to dig into this patch before 8.1.

OK, what should the TODO item be?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Peter Eisentraut
On Tuesday, 27 September 2005 02:12, Tom Lane wrote:
 What I propose we do about this is change the path setup section
 of Makefile.global to look like (for each path variable)

 ifdef PGXS
 pkglibdir = $(shell pg_config --pkglibdir)
 else
 # existing code to set up pkglibdir
 endif

That looks right.

 Not all of the path variables set up in Makefile.global are currently
 available from pg_config; the missing ones are

 prefix
 exec_prefix
 sbindir
 mandir
 localedir
 libexecdir
 datadir
 sysconfdir
 pkgincludedir
 docdir

 The first three of these don't seem to be directly referenced anywhere
 in the Makefiles, so I propose just removing them from Makefile.global.

I see

prefix := /usr/local/pgsql
exec_prefix := ${prefix}

bindir := ${exec_prefix}/bin
sbindir := ${exec_prefix}/sbin

in the default configuration, so we need to keep the first two at least.  We 
don't need to expose them through pgxs, though.

 The other ones will need to be added to pg_config's repertoire, unless
 someone can make a pretty good case that no pgxs-using module would ever
 need to install into that directory.

pgxs only needs to expose the currently exposed variables plus sysconfdir, as 
previously discussed.  Unless someone can make a case that they need to 
access the other directories.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Questions about proper newline handling in psql output

2005-09-27 Thread Martijn van Oosterhout
On Tue, Sep 27, 2005 at 03:02:29PM +0200, Peter Eisentraut wrote:
 I don't think this is necessary.  If you put control characters into your 
 text, then you clearly don't care about aligned output, so you don't get any. 
  
 About \r, I think that needs to be figured into the alignment calculation, to 
 work well on Windows and Mac.

Good point. On UNIX however, outputting a \r will jump back to the
beginning of the line, overwriting whatever was there. But you raise a
good question: should the output of psql be console-specific? I.e., if
you insert a value with unix newlines, then on a Mac the output won't
have linebreaks.

However, part of this discussion was because we were considering
changing the output of \df to display more info using multiple lines.
And if so we want it to display consistently on all platforms, right?

My argument for the other control characters is: given that making this
work for \r, \t, and \n needs all this anyway, why not just fix it
for *all* control characters in one go and be completely rid of the
problem, for now and forever. If someone embeds the control characters
to change the title of your xterm, change the font, clear the screen,
etc., should psql just blat that out? I realise it does it now and that
it's not a strong argument, but since we're here already...
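
As a sketch of that rule (the function name is hypothetical; \n and \t
are assumed to be handled by the alignment code itself):

#include <stdio.h>
#include <ctype.h>

/*
 * Print one cell character: pass ordinary characters through, show \r
 * symbolically, and emit any other control character as a \xNN escape.
 * Newlines and tabs are left for the alignment code to deal with.
 */
static void
emit_cell_char(FILE *out, unsigned char c)
{
    if (c == '\r')
        fputs("\\r", out);
    else if (c == '\n' || c == '\t')
        fputc(c, out);
    else if (iscntrl(c))
        fprintf(out, "\\x%02x", c);
    else
        fputc(c, out);
}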

  3. How to show that a value is continued? As you can see below I use
  ':' before columns that have data. This obviously doesn't work for
  first column if there's no outer border. If your border style is 0
  you're totally out of luck.
 
 I think you need to keep the normal delimiter and need some extra mark within 
 the table cells.  (Think about how it would have to look in an HTML table.)

Umm, I wasn't thinking of changing the HTML output at all, it doesn't
need it since whatever displays the HTML will take care of alignment.
Same for troff and CSV. Straight aligned text output is the only one we
care about AFAICS (and the only one that requires work to make it
happen).

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpkWnlydcyP5.pgp
Description: PGP signature


Re: [HACKERS] Questions about proper newline handling in psql output

2005-09-27 Thread Martijn van Oosterhout
On Tue, Sep 27, 2005 at 12:12:15AM -, Greg Sabino Mullane wrote:
  4. Some system output like pg_views has really really long strings,
  would it be acceptable to change the output there to add newlines at
  various places to make it output nicer with this change?
 
 I'd say no, until you propose a more concrete set of rules on how and when.

Well, I was thinking before the keywords SELECT, FROM, WHERE, AND,
ORDER BY, GROUP BY and HAVING. For bonus points, indent subqueries also.
But I'm not too fussed, it was just a thought. In fact, we already do it
for the output of \d for views, so maybe take a hint from there...

  Query is: select oid, prosrc as "HdrLine1
  HdrLine2", proacl from pg_proc limit 1;
 
 Some smaller samples with a third column (and a large 2nd one) might be nice.

Ok, multiline strings in the pg_catalog are thin on the ground, I was
hoping to use real data rather than stuff I made up. (Real data has a
habit of showing weaknesses far better than things you dream up). But
I'll give it a shot.
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.


pgpJ5nCy3zGIr.pgp
Description: PGP signature


Re: [HACKERS] [PERFORM] PostgreSQL overall design

2005-09-27 Thread Bruce Momjian

Have you read the developers FAQ?

---

Gnanavel S wrote:
 Hi All,
 
 Can anyone please tell/point me where I can get the postgresql system layout
 (I've an interest to contribute). I would also like to know the files
 involved in performing each task (for eg when doing a select operation,
 what exactly is happening in postgres along with the files).
 
 I was wandering inside the source for a while and I couldn't get a start
 point to go with.
 
 Need a clarification in copydir.c file of src/port directory. In the
 following snippet the destination directory is created first, then the
 source directory is read. Suppose I don't have permission to read the
 source; even then the destination directory would be created.
 I just want to know whether there is any reason for doing so?
 
 if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)
 ereport(ERROR,
 (errcode_for_file_access(),
 errmsg("could not create directory \"%s\": %m", todir)));
 
 xldir = AllocateDir(fromdir);
 if (xldir == NULL)
 ereport(ERROR,
 (errcode_for_file_access(),
 errmsg("could not open directory \"%s\": %m", fromdir)));
 
 
 
 --
 with thanks & regards,
 S.Gnanavel
 Satyam Computer Services Ltd.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] prepared queries in plperl

2005-09-27 Thread Dmitry Karasik

 Andrew We need to examine the names being used on our spi functions - I'm
 Andrew not sure there is sufficient consistency about them.

Yes, but I'd rather see the patch committed before that, because
otherwise I would be forced to rework the code again after the
namespace is finally stabilized.

 Andrew Meanwhile, I will observe that this very desirable feature needs
 Andrew an interface with spi_fetchrow() - fetching large gobs of data all
 Andrew at once into perl memory is just not nice.

Agreed. I think that just as there is the pair spi_exec_query/spi_query,
there should be a corresponding spi_exec_prepared/spi_whatever/
spi_fetchrow_prepared. If I have time, I'll look into implementing these,
but again, I'd rather wait until the patch is committed. When is 8.1
scheduled for release?

-- 
Sincerely,
Dmitry Karasik


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Tom Lane
Peter Eisentraut [EMAIL PROTECTED] writes:
 On Tuesday, 27 September 2005 02:12, Tom Lane wrote:
 Not all of the path variables set up in Makefile.global are currently
 available from pg_config; the missing ones are
 
 prefix
 exec_prefix
 sbindir
 mandir
 localedir
 libexecdir
 datadir
 sysconfdir
 pkgincludedir
 docdir
 
 The first three of these don't seem to be directly referenced anywhere
 in the Makefiles, so I propose just removing them from Makefile.global.

 I see

 prefix := /usr/local/pgsql
 exec_prefix := ${prefix}

 bindir := ${exec_prefix}/bin
 sbindir := ${exec_prefix}/sbin

 in the default configuration, so we need to keep the first two at least.  We 
 don't need to expose them through pgxs, though.

I stand corrected on those.

 The other ones will need to be added to pg_config's repertoire, unless
 someone can make a pretty good case that no pgxs-using module would ever
 need to install into that directory.

 pgxs only needs to expose the currently exposed variables plus sysconfdir, as
 previously discussed.  Unless someone can make a case that they need to 
 access the other directories.

pgxs.mk itself requires access to datadir and docdir, so I don't
see how you can maintain that those aren't necessary.  The only reason
it doesn't also reference mandir and localedir is that none of our
current contrib modules have any man pages or locale support, but that
hardly seems far-fetched as a requirement for external modules.  Also,
pkgincludedir *must* be supported else we cannot set up the -I options
for includedir_server and includedir_internal.

On second look, libexecdir isn't used anywhere, so we might as well just
remove it entirely.  But all the others seem necessary to me.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Magnus Hagander
   Changes
   ---
   Win32 signal handling patch (Magnus)
  
  Unless someone else steps up to doing this one, please 
 remove it from 
  the list. I will not have time to dig into this patch before 8.1.
 
 OK, what should the TODO item be?

A link to the mail should be there, I guess (it's somewhere in the
archives). "Investigate different ways of handling signals on win32",
with a link, perhaps.

Note - we need to investigate, I'm not convinced that doing it is worth
it at all (I asked for opinions on that earlier, but no other win32
hacker was available for comment). And then if it is, the patch itself
should be reviewed.

//Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Gaetano Mendola
Gaetano Mendola wrote:
 Alvaro Herrera wrote:
 On Mon, Sep 26, 2005 at 05:41:24PM +0200, Gaetano Mendola wrote:
 Joshua D. Drake wrote:
 Autovacuum is integrated into the backend for 8.1
 Can I set the autovacuum parameter per table instead of per
 engine ?
 Yes.
 

Reading the 8.1 release notes I found:


Move /contrib/pg_autovacuum into the main server (Alvaro)

Integrating autovacuum into the server allows it to be automatically started
and stopped in sync with the database server, and allows autovacuum to be
configured from postgresql.conf.


Maybe it would be useful to mention that it was not exactly pg_autovacuum
that was moved, because, for example, you can now set parameters per
table, which pg_autovacuum did not support.


Regards
Gaetano Mendola



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Bruce Momjian
Magnus Hagander wrote:
Changes
---
Win32 signal handling patch (Magnus)
   
   Unless someone else steps up to doing this one, please 
  remove it from 
   the list. I will not have time to dig into this patch before 8.1.
  
  OK, what should the TODO item be?
 
 A link to the mail should be there, I guess (it's somewhere in the
 archives). Investigate different way of handling signals on win32 with
 a link perhaps.
 
 Note - we need to investigate, I'm not convinced that doing it is worth
 it at all (I asked for opinions on that earlier, but no other win32
 hacker was available for comment). And then if it is, the patch itself
 should be reviewed.

Added to TODO:

o Improve signal handling,
  http://archives.postgresql.org/pgsql-patches/2005-06/msg00027.php

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Database file compatability

2005-09-27 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 07:05:28PM -0400, Tom Lane wrote:
 Qingqing Zhou [EMAIL PROTECTED] writes:
  Jim C. Nasby [EMAIL PROTECTED] wrote
  If a database is created with a 64 bit version of initdb, would a 32bit
  backend be able to talk to it? Likewise, would a backend compiled by a
  different compiler be able to?
 
  The key problem I believe is the serials of ALIGNOF macros. Especially for 
  MAX_ALIGNOF. Different Hardware/OS/compiler will have different 
  understanding of it.
 
 Yeah.  It might be worth adding MAX_ALIGNOF to the set of configuration
 data stored in pg_control, just to be sure you couldn't shoot yourself
 in the foot that way.

PLEASE. :)

ISTM that 64 bit is becoming much more common, so I think the odds of
someone going from a 32 to 64 bit (or vice-versa) version of PostgreSQL
on the same machine are much larger now than they have been in the past.
I think we really need to protect against this as much as possible. This
isn't so much a foot-gun as a foot-nuclear-weapon.
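
Presumably the check would look much like the existing pg_control
version checks -- a sketch only, and the field name maxAlign is my
assumption:

    /*
     * Compare the alignment this binary was compiled with against the
     * value recorded in pg_control at initdb time; refuse to start on
     * a mismatch.  (maxAlign is a hypothetical pg_control field.)
     */
    if (ControlFile->maxAlign != MAXIMUM_ALIGNOF)
        ereport(FATAL,
                (errmsg("database files are incompatible with server"),
                 errdetail("The database cluster was initialized with MAXALIGN %d,"
                           " but the server was compiled with MAXALIGN %d.",
                           ControlFile->maxAlign, MAXIMUM_ALIGNOF)));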

Would I be correct in assuming that doing this for 8.1 would require
another initdb? :/ For something as minor as this, would it be
reasonable to ship a utility to avoid the initdb? I'd very much like to
see this in 8.1...
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Peter Eisentraut
Tom Lane wrote:
 pgxs.mk itself requires access to datadir and docdir, so I don't
 see how you can maintain that those aren't necessary.  The only
 reason it doesn't also reference mandir and localedir is that none of
 our current contrib modules have any man pages or locale support,

Well, I don't support the notion that pgxs installs things there by 
default, but if it does, then I guess we have to fix it to do so 
correctly.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Josh Berkus

Steve,


The only crystal ball involved is the assumption that if bizgres has
Neat Stuff(tm) that would be of widespread use in its development
tree at that point then the odds are good that it, or something
functionally equivalent to it, will appear in the 8.2 development
tree. 


It certainly won't be because it hasn't been submitted.  All features 
for Bizgres PostgreSQL (as opposed to the proprietary MPP) will be sent 
to -patches.   The main difference will be the release schedule.


--Josh

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] [BUGS] BUG #1883: Renaming a schema leaves inconsistent

2005-09-27 Thread Bruce Momjian

With the following errors caused by ALTER SCHEMA RENAME, I recommend we
remove this feature for 8.1 and revisit it for 8.2.  I would just remove
the grammar construct for it and the documentation.

To fix this, we would need to redesign the way we store DEFAULT sequence
assignments, and I don't think that is a good thing to do during beta. 
I see people wanting bitmapped scans ASAP, not renaming of schemas.  Our
beta time is better spent on other things than getting this to work now.

---

Bruce Momjian wrote:
 Tom Lane wrote:
  Bruce Momjian pgman@candle.pha.pa.us writes:
   This item has been added to the 8.1 bugs list:
 http://momjian.postgresql.org/cgi-bin/pgbugs
  
  This isn't going to be fixed for 8.1.  I think it's really a variant of
  the TODO item
  o %Have ALTER TABLE RENAME rename SERIAL sequence names
 
 Well, it might be a variant, but its failure is much worse.  For a table
 rename, you just get a strange \d display:
   
   test=> CREATE TABLE test (x SERIAL);
   NOTICE:  CREATE TABLE will create implicit sequence "test_x_seq" for 
 serial column "test.x"
   CREATE TABLE
   test=> ALTER TABLE test RENAME TO test2;
   ALTER TABLE
   test=> INSERT INTO test2 VALUES (DEFAULT);
   INSERT 0 1
   test=> \d test2
 Table "public.test2"
   Column |  Type   |  Modifiers
   +-+-
    x  | integer | not null default nextval('public.test_x_seq'::text)
 
 The insert into the table still works.  For the schema rename, the
 insert into the table doesn't work anymore.  The odds that a schema
 rename is going to have _no_ sequence dependencies in the same schema
 seem pretty unlikely, meaning schema rename is almost guaranteed to
 create some broken table defaults.  With this behavior, if we can't fix
 it in 8.1, I am wondering if we should just disable the feature:
 
   test=> CREATE SCHEMA aa;
   CREATE SCHEMA
   test=> CREATE TABLE aa.test (x SERIAL);
   NOTICE:  CREATE TABLE will create implicit sequence "test_x_seq" for 
 serial column "test.x"
   CREATE TABLE
   test=> ALTER SCHEMA aa RENAME TO bb;
   ALTER SCHEMA
   test=> INSERT INTO bb.test VALUES (DEFAULT);
   ERROR:  schema "aa" does not exist
   test=> \d bb.test
 Table "bb.test"
   Column |  Type   |    Modifiers
   +-+-
    x  | integer | not null default nextval('aa.test_x_seq'::text)
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   pgman@candle.pha.pa.us   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Bruce Momjian
The open items list has been reduced nicely:

   PostgreSQL 8.1 Open Items
   =

Current version at http://candle.pha.pa.us/cgi-bin/pgopenitems or
from http://www.postgresql.org/developer/beta.

Changes
---
fix pg_dump --clean for roles
foreign trigger timing issue
fix ALTER SCHEMA RENAME for sequence dependency, or remove feature
spinlock performance
fix semantic issues of granted permissions in roles
fix pgxs for Win32 paths

Questions
-
consider O_SYNC as default when O_DIRECT exists
/contrib move to pgfoundry
bump major library version number?
pgindent, when?
make sure bitmap scan optimizer settings are reasonable

Documentation
-
document control over partial page writes

Fixed Since Last Beta
-




-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [PERFORM] A Better External Sort?

2005-09-27 Thread Jeffrey W. Baker
On Tue, 2005-09-27 at 13:15 -0400, Ron Peacetree wrote:

 That Btree can be used to generate a physical reordering of the data
 in one pass, but that's the weakest use for it.  The more powerful
 uses involve allowing the Btree to persist and using it for more
 efficient re-searches or combining it with other such Btrees (either as
 a step in task distribution across multiple CPUs or as a more efficient
 way to do things like joins by manipulating these Btrees rather than
 the actual records.)

Maybe you could describe some concrete use cases.  I can see what you
are getting at, and I can imagine some advantageous uses, but I'd like
to know what you are thinking.

Specifically I'd like to see some cases where this would beat sequential
scan.  I'm thinking that in your example of a terabyte table with a
column having only two values, all the queries I can think of would be
better served with a sequential scan.

Perhaps I believe this because you can now buy as much sequential I/O as
you want.  Random I/O is the only real savings.

-jwb



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[HACKERS] Install pg_regress script to support PGXS?

2005-09-27 Thread Tom Lane
While testing the recent pgxs patch, I noticed that you can build and
install contrib with PGXS:
cd contrib
gmake USE_PGXS=1 all
gmake USE_PGXS=1 install
It seems that it ought to work to run installcheck too:
gmake USE_PGXS=1 installcheck
but this does not quite work because the pg_regress script isn't
included in the installation tree.  (If you copy it to where it'd
need to be, installcheck works.)

Is it worth including pg_regress in the installation to make this
work?  Seems like it might be handy for external modules to be able
to run self-tests.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Using GetShortPathName() will break on any system that has disabled
 short filename generatino, which IIRC is a recommended best practice
 both for performance and for security in legacy apps. I don't know what
 it does, but probably it will just return the same long path again.

Yuck.  Anyone have another idea on coping with space-containing
pathnames?  I suppose we could try to quote the path variables properly
in all the makefiles, but that sure seems like a painful proposition.

Meanwhile, I believe I've fixed the relocatability issue per se.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] PostgreSQL overall design

2005-09-27 Thread Abhijit Menon-Sen
At 2005-09-27 15:20:05 +0530, [EMAIL PROTECTED] wrote:

 Can anyone please tell/point me where I can get the postgresql system
 layout (I've an interest to contribute).

http://www.postgresql.org/developer/coding

And, in particular:

http://www.postgresql.org/docs/faqs.FAQ_DEV.html

-- ams

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] [PERFORM] A Better External Sort?

2005-09-27 Thread Ron Peacetree
From: Dann Corbit [EMAIL PROTECTED]
Sent: Sep 26, 2005 5:13 PM
To: Ron Peacetree [EMAIL PROTECTED], pgsql-hackers@postgresql.org, 
   pgsql-performance@postgresql.org
Subject: RE: [HACKERS] [PERFORM] A Better External Sort?

I think that the btrees are going to be O(n*log(n)) in construction of
the indexes in disk access unless you memory map them [which means you
would need stupendous memory volume] and so I cannot say that I really
understand your idea yet.

Traditional algorithms for the construction of Btree variants (B, B+, B*, ...)
don't require O(nlgn) HD accesses.  These shouldn't either.

Let's start by assuming that an element is <= in size to a cache line and a
node fits into L1 DCache.  To make the discussion more concrete, I'll use a
64KB L1 cache + a 1MB L2 cache only as an example.

Simplest case: the Key has few enough distinct values that all Keys or
KeyPrefixes fit into L1 DCache (for a 64KB cache with 64B lines, that's
 <= 1000 different values.  More if we can fit more than 1 element into
each cache line.).

As we scan the data set coming in from HD, we compare the Key or KeyPrefix
to the sorted list of Key values in the node.  This can be done in O(lgn) using
Binary Search or O(lglgn) using a variation of Interpolation Search.  
If the Key value exists, we append this RID to the list of RIDs having the
same Key:
  If the RAM buffer of this list of RIDs is full, we append it and the
  current RID to the HD list of these RIDs.
Else we insert this new Key value into its proper place in the sorted list
of Key values in the node and start a new RID list for this Key value.

We allocate room for a CPU write buffer so we can schedule RAM writes to
the RAM lists of RIDs so as to minimize the randomness of them.

When we are finished scanning the data set from HD, the sorted node with
RID lists for each Key value contains the sort order for the whole data set.
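
(A toy sketch of the single-node bookkeeping just described -- all names
hypothetical, and a real version would buffer RID lists and spill them to
HD rather than realloc in RAM:)

#include <stdlib.h>
#include <string.h>

#define MAX_KEYS 1000           /* the simplest-case bound from above */

typedef struct
{
    unsigned  key;
    unsigned *rids;             /* RIDs seen so far for this Key value */
    size_t    nrids, cap;
} KeyEntry;

static KeyEntry node[MAX_KEYS]; /* one node: sorted array of distinct Keys */
static size_t   nkeys = 0;

/* Binary-search the sorted node; insert the Key if absent, append the RID.
   Assumes the number of distinct Keys stays within MAX_KEYS. */
static void
add_rid(unsigned key, unsigned rid)
{
    size_t lo = 0, hi = nkeys;

    while (lo < hi)
    {
        size_t mid = (lo + hi) / 2;

        if (node[mid].key < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    if (lo == nkeys || node[lo].key != key)
    {
        /* new Key value: shift the tail up and insert a fresh entry */
        memmove(&node[lo + 1], &node[lo], (nkeys - lo) * sizeof(KeyEntry));
        node[lo].key = key;
        node[lo].rids = NULL;
        node[lo].nrids = node[lo].cap = 0;
        nkeys++;
    }
    if (node[lo].nrids == node[lo].cap)
    {
        /* grow this Key's RID list; a real version would spill to HD
           and check the realloc result */
        node[lo].cap = node[lo].cap ? node[lo].cap * 2 : 8;
        node[lo].rids = realloc(node[lo].rids,
                                node[lo].cap * sizeof(unsigned));
    }
    node[lo].rids[node[lo].nrids++] = rid;
}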

Notice that almost all of the random data access is occurring within the
CPU rather than in RAM or HD, and that we are accessing RAM or HD only
when absolutely needed.

Next simplest case: Multiple nodes, but they all fit in the CPU cache(s).
In the given example CPU, we will be able to fit at least 1000 elements per
node and 2^20/2^16 = up to 16 such nodes in this CPU.  We use a node's
worth of space as a RAM write buffer, so we end up with room for 15 such
nodes in this CPU.  This is enough for a 2 level index to at least 15,000
distinct Key value lists.

All of the traditional tricks for splitting a Btree node and redistributing
elements within them during insertion or splitting for maximum node
utilization can be used here.

The most general case: There are too many nodes to fit within the CPU
cache(s).  The root node now points to a maximum of at least 1000 nodes
since each element in the root node points to another node.  A full 2 level
index is now enough to point to at least 10^6 distinct Key value lists, and
3 levels will index more distinct Key values than is possible in our 1TB, 
500M record example.

We can use some sort of node use prediction algorithm like LFU to decide
which node should be moved out of CPU when we have to replace one of
the nodes in the CPU.  The nodes in RAM or on HD can be arranged to
maximize streaming IO behavior and minimize random access IO
behavior.

As you can see, both the RAM and HD IO are as minimized as possible,
and what such IO there is has been optimized for streaming behavior.

 
Can you draw a picture of it for me?  (I am dyslexic and understand things
far better when I can visualize it).

Not much for pictures.  Hopefully the explanation helps?

Ron

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[HACKERS] PostgreSQL overall design

2005-09-27 Thread Gnanavel S
Hi All,

Can anyone please tell/point me where I can get the postgresql system
layout (I've an interest to contribute). I would also like to know the
files involved in performing each task (for eg when doing a select
operation, what exactly is happening in postgres, along with the files).

I was wandering inside the source for a while and I couldn't get a
start point to go with.

Need a clarification in copydir.c file of src/port directory: in the
following snippet the destination directory is created first, then the
source directory is read. Suppose I don't have permission to read the
source; even then the destination directory would be created.
I just want to know whether there is any reason for doing so?

 if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)
     ereport(ERROR,
             (errcode_for_file_access(),
              errmsg("could not create directory \"%s\": %m", todir)));

 xldir = AllocateDir(fromdir);
 if (xldir == NULL)
     ereport(ERROR,
             (errcode_for_file_access(),
              errmsg("could not open directory \"%s\": %m", fromdir)));

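
(Purely for illustration, the swapped ordering my question implies --
the same two calls with the source checked first:)

 /*
  * Hypothetical reordering: verify the source directory is readable
  * before creating the destination, so a failed copy leaves no empty
  * destination directory behind.
  */
 xldir = AllocateDir(fromdir);
 if (xldir == NULL)
     ereport(ERROR,
             (errcode_for_file_access(),
              errmsg("could not open directory \"%s\": %m", fromdir)));

 if (mkdir(todir, S_IRUSR | S_IWUSR | S_IXUSR) != 0)
     ereport(ERROR,
             (errcode_for_file_access(),
              errmsg("could not create directory \"%s\": %m", todir)));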

--
with thanks & regards,
S.Gnanavel
Satyam Computer Services Ltd.


Re: [HACKERS] [PERFORM] A Better External Sort?

2005-09-27 Thread Ron Peacetree
SECOND ATTEMPT AT POST.  Web mailer appears to have
eaten first one.  I apologize in advance if anyone gets two
versions of this post.
=r

From: Tom Lane [EMAIL PROTECTED]
Sent: Sep 26, 2005 9:42 PM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort? 

So far, you've blithely assumed that you know the size of a cache line,
the sizes of L1 and L2 cache,

NO.  I used exact values only as examples.  Realistic examples drawn
from an extensive survey of past, present, and what I could find out
about future systems; but only examples nonetheless.  For instance,
Hennessy and Patterson 3ed points out that 64B cache lines are
optimally performing for caches between 16KB and 256KB.  The same
source as well as sources specifically on CPU memory hierarchy
design points out that we are not likely to see L1 caches larger than
256KB in the foreseeable future.

The important point was the idea of an efficient Key, rather than
Record, sort using a CPU cache friendly data structure with provably
good space and IO characteristics based on a reasonable model of
current and likely future single box computer architecture (although
it would be fairly easy to extend it to include the effects of
networking.)

No apriori exact or known values are required for the method to work.


and that you are working with sort keys that you can efficiently pack
into cache lines.

Not "pack": "map".  n items cannot take on more than n values.  n
values can be represented in lgn bits.  Less efficient mappings can
also work.  Either way I demonstrated that we have plenty of space in
a likely and common cache line size.  Creating a mapping function
to represent m values in lgm bits is a well known hack, and if we keep
track of minimum and maximum values for fields during insert and
delete operations, we can even create mapping functions fairly easily.
(IIRC, Oracle does keep track of minimum and maximum field
values.)


And that you know the relative access speeds of the caches and
memory so that you can schedule transfers,

Again, no.  I created a reasonable model of a computer system that
holds remarkably well over a _very_ wide range of examples.  I
don't need the numbers to be exactly right to justify my approach
to this problem or understand why other approaches may have
downsides.  I just have to get the relative performance of the
system components and the relative performance gap between them
reasonably correct.  The stated model does that very well.

Please don't take my word for it.  Go grab some random box:
laptop, desktop, unix server, etc and try it for yourself.  Part of the
reason I published the model was so that others could examine it.
 

and that the hardware lets you get at that transfer timing.

Never said anything about this, and in fact I do not need any such.


And that the number of distinct key values isn't very large.

Quite the opposite in fact.  I went out of my way to show that the
method still works well even if every Key is distinct.  It is _more
efficient_ when the number of distinct keys is small compared to
the number of data items, but it works as well as any other Btree
would when all n of the Keys are distinct.  This is just a CPU cache
and more IO friendly Btree, not some magical and unheard of
technique.  It's just as general purpose as Btrees usually are.

I'm simply looking at the current and likely future state of computer
systems architecture and coming up with a slight twist on how to use
already well known and characterized techniques, not trying to start
a revolution.


I'm trying very hard NOT to waste anyone's time around here,
including my own.
Ron

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [PERFORM] A Better External Sort?

2005-09-27 Thread Jonah H. Harris
Ron,

Again, if you feel strongly enough about the theory to argue it, I
recommend that you spend your time constructively; create an
implementation of it. Citing academics is cool and all, but code
speaks louder than theory in this case. As Tom mentioned, this
has to be portable. Making assumptions about computing
architectures (especially those in the future) is fine for theory, but
not practical for something that needs to be maintained in the
real world. Go forth and write thy code.

-Jonah

On 9/27/05, Ron Peacetree [EMAIL PROTECTED] wrote:
 [quoted text of Ron's previous message, reproduced in full above, trimmed]

--
Respectfully,
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
http://www.enterprisedb.com/


[HACKERS] effective SELECT from child tables

2005-09-27 Thread Ilia Kantor

Let table A be inherited by A1, A2, A3.

How to select from A records where the actual relations are A1, A2?

I found a way somewhere; it looks like SELECT ... WHERE tableoid IN
(a1.oid, a2.oid), but tableoid checks actually do a seq scan.

Like: SELECT * FROM sometable WHERE tableoid = anything will do a seq
scan on sometable.

So this way seems very inefficient: it seq scans and then filters the
records.



Maybe the new constraint_exclusion stuff could help to exclude
non-matching tables from an inheritance query?


Re: [HACKERS] effective SELECT from child tables

2005-09-27 Thread Josh Berkus
Ilia,

 Maybe the new constraint_exclusion stuff could help to exclude non-matching
 tables from an inheritance query?

Yes, that's exactly what it's for.  Your testing is welcome.  Download 8.1 
and try it today.
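
For anyone who wants to try it, a minimal sketch (names invented;
constraint exclusion needs CHECK constraints it can compare against
the query's WHERE clause):

SET constraint_exclusion = on;
CREATE TABLE a (k int);
CREATE TABLE a1 (CHECK (k < 100)) INHERITS (a);
CREATE TABLE a2 (CHECK (k >= 100)) INHERITS (a);
-- children whose CHECK contradicts the WHERE clause are
-- excluded from the plan:
EXPLAIN SELECT * FROM a WHERE k < 50;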

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [BUGS] BUG #1883: Renaming a schema leaves inconsistent

2005-09-27 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 With the following errors caused by ALTER SCHEMA RENAME, I recommend we
 remove this feature for 8.1 and revisit it for 8.2.

It's a mistake to blame ALTER SCHEMA RENAME for this problem, as you can
cause it equally well by renaming the sequence itself, or by moving it
into another schema with ALTER TABLE SET SCHEMA.  Will you also insist
on disabling the latter new feature?
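
To make the failure mode concrete, here is a minimal reproduction
sketch (names invented); the stored default keeps the old text
spelling of the sequence name:

CREATE TABLE t (id serial);
ALTER TABLE t_id_seq RENAME TO renamed_seq;
INSERT INTO t DEFAULT VALUES;
-- fails: the default still calls nextval('t_id_seq'),
-- which no longer resolves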

I experimented a little bit with defining nextval() and friends as
taking regclass instead of text, and it seems like that works pretty
nicely for these problems, once you've got the literal in the form of
regclass (ie, internally an OID).  For actual SERIAL columns that
doesn't seem like a big deal, because the default expression doesn't
appear literally in dumps (at least not dumps made with a recent version
of pg_dump).  All we'd have to do is tweak the parser to generate a call
to nextval(regclass) instead of nextval(text) when expanding SERIAL.

For dumps that contain explicit calls, like
keycol int default nextval('foo'::text)
I really don't think there is anything much we can do :-( except to
recommend that people update the default expressions.  You'd need to
change it to
keycol int default nextval('foo'::regclass)
to be safe against renamings of 'foo', and I don't see any very good
way to force that to happen automatically.

I think that a reasonable answer for 8.1 would be to add
nextval(regclass) (and I guess parallel versions of currval and setval,
too), leaving the existing text-based functions available, and modifying
the parser to use nextval(regclass) instead of nextval(text) in SERIAL
defaults.
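
Under that proposal the difference shows up in a small sketch (names
invented; assumes the new nextval(regclass) variant is installed):

CREATE SEQUENCE foo;
-- text form: the name is looked up again on every call, so a later
-- rename of foo breaks the default
CREATE TABLE t1 (keycol int DEFAULT nextval('foo'::text));
-- regclass form: 'foo' is resolved to the sequence's OID when the
-- default is stored, so it follows the sequence across renames
CREATE TABLE t2 (keycol int DEFAULT nextval('foo'::regclass));
ALTER TABLE foo RENAME TO bar;
INSERT INTO t2 DEFAULT VALUES;  -- still works
INSERT INTO t1 DEFAULT VALUES;  -- fails: relation foo no longer exists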

In the long run it would be nice to deprecate and eventually remove
the text-based functions, but I don't see how to do that in the short
run without introducing an implicit text-to-regclass cast for
compatibility purposes.  That seems a bit risky.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] effective SELECT from child tables

2005-09-27 Thread Tom Lane
Ilia Kantor [EMAIL PROTECTED] writes:
 Let table A be inherited by A1, A2, A3.

 How to select from A records where actual relations are A1, A2 ?

Why not just select directly from the child tables?  I can't get excited
about optimizing the case you propose.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Jan Wieck

On 9/24/2005 8:17 PM, Jim C. Nasby wrote:


Would it be difficult to vacuum as part of a dump? The reasoning behind
this is that you have to read the table to do the dump anyway, 


I think aside from what's been said so far, it would be rather difficult 
anyway. pg_dump relies on MVCC and needs to run in one transaction to 
see a consistent snapshot, while vacuum jiggles around with transactions 
in some rather non-standard way.



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Peter Eisentraut
Bruce Momjian wrote:
 bump major library version number?

Were there any incompatible interface changes?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Bruce Momjian
Peter Eisentraut wrote:
 Bruce Momjian wrote:
  bump major library version number?
 
 Were there any incompatible interface changes?

No, I don't _think_ so, but we have been bitten by this before, not
because of an API change but because of libpgport functions called
by libpq in one release but not in a later one.  What happened was that
apps pulled pgport functions from libpq and not from libpgport, and when
the calls were removed from libpq, the old apps didn't work anymore. 
This hit us in 8.0.X.

Makefile.global has this now:

# Force clients to pull symbols from the non-shared library libpgport
# rather than pulling some libpgport symbols from libpq just because
# libpq uses those functions too.  This makes applications less
# dependent on changes in libpq's usage of pgport.  To do this we link to
# pgport before libpq.  This does cause duplicate -lpgport's to appear
# on client link lines.
ifdef PGXS
libpq_pgport = -L$(libdir) -lpgport $(libpq)
else
libpq_pgport = -L$(top_builddir)/src/port -lpgport $(libpq)
endif

so I think we are OK going forward, but it is something I wanted to keep
an eye out for.  In older releases we actually had reports of failures,
and just told people to recompile, not realizing the magnitude of the
problem (it was assumed to be an old CVS build issue rather than a
backward-compatibility issue).  I am going to remove the open item about
this because I think if we had a problem, we would have heard about it
by now.

It is an interesting story, because it does highlight that the libpq API
is not the only possible cause of a major version bump.


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] [BUGS] BUG #1883: Renaming a schema leaves inconsistent

2005-09-27 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  With the following errors caused by ALTER SCHEMA RENAME, I recommend we
  remove this feature for 8.1 and revisit it for 8.2.
 
 It's a mistake to blame ALTER SCHEMA RENAME for this problem, as you can
 cause it equally well by renaming the sequence itself, or by moving it
 into another schema with ALTER TABLE SET SCHEMA.  Will you also insist
 on disabling the latter new feature?

Sure.  (Silly question, silly answer.)

Seriously, you have to consider the likelihood of breakage and the
surprise factor.  If someone moves a sequence to another schema or
renames it, they would expect breakage, but moving all tables/sequences
together would not suggest sequence breakage.

So, below, you are saying that once 8.0.X dumps are loaded into 8.1,
the renaming of those schemas would succeed, at least for SERIAL,
but not for manual sequence defaults.  That seems OK, I guess, in the
hope that people who are creating defaults manually based on sequences
are going to know how to fix things.

---

 I experimented a little bit with defining nextval() and friends as
 taking regclass instead of text, and it seems like that works pretty
 nicely for these problems, once you've got the literal in the form of
 regclass (ie, internally an OID).  For actual SERIAL columns that
 doesn't seem like a big deal, because the default expression doesn't
 appear literally in dumps (at least not dumps made with a recent version
 of pg_dump).  All we'd have to do is tweak the parser to generate a call
 to nextval(regclass) instead of nextval(text) when expanding SERIAL.
 
 For dumps that contain explicit calls, like
   keycol int default nextval('foo'::text)
 I really don't think there is anything much we can do :-( except to
 recommend that people update the default expressions.  You'd need to
 change it to
   keycol int default nextval('foo'::regclass)
 to be safe against renamings of 'foo', and I don't see any very good
 way to force that to happen automatically.
 
 I think that a reasonable answer for 8.1 would be to add
 nextval(regclass) (and I guess parallel versions of currval and setval,
 too), leaving the existing text-based functions available, and modifying
 the parser to use nextval(regclass) instead of nextval(text) in SERIAL
 defaults.
 
 In the long run it would be nice to deprecate and eventually remove
 the text-based functions, but I don't see how to do that in the short
 run without introducing an implicit text-to-regclass cast for
 compatibility purposes.  That seems a bit risky.
 
   regards, tom lane
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] [PERFORM] A Better External Sort?

2005-09-27 Thread Josh Berkus

Ron,

I've somehow missed part of this thread, which is a shame since this is 
an area of primary concern for me.


Your suggested algorithm seems to be designed to relieve I/O load by 
making more use of the CPU (if I followed it correctly).  However, 
that's not PostgreSQL's problem; currently, for us, external sort is a 
*CPU-bound* operation, half of which is value comparisons.  (oprofiles 
available if anyone cares.)


So we need to look, instead, at algorithms which make better use of 
work_mem to lower CPU activity, possibly even at the expense of I/O.


--Josh Berkus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Re-run query on automatic reconnect

2005-09-27 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 02:52:02PM -0600, Michael Fuhr wrote:
 On Mon, Sep 26, 2005 at 01:23:03PM -0500, Jim C. Nasby wrote:
  Is there any particular reason why psql doesn't re-run a query that
  failed due to disconnect from the server after re-connecting? I've
  thought maybe it's because it might somehow be dangerous, but I can't
  think of any case where that's actually true.
 
 What if the query itself resulted in the disconnect by causing the
 backend to crash?  Re-running such a query automatically would be
 a bad idea.  Or did I misunderstand what you're asking?

The case I was thinking of was...

psql runs some command...

psql returns from call

user does something else; meanwhile, server forced shutdown/crash

user comes back, tries to run some query and gets automatic reconnect

In any case, Tom brought up a bunch of cases where this would be a bad
idea, so my question's answered.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] Gerbil build farm failure

2005-09-27 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 06:58:16PM -0400, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
  Gerbil's looking better lately:
  http://pgbuildfarm.org/cgi-bin/show_history.pl?nm=gerbilbr=REL8_0_STABLE
 
 Yeah.  We've been poking at it off-list, and it seems that the problem
 was a local build failure due to not having a clean copy of the
 repository (ye olde junk-in-the-supposedly-clean-vpath-tree problem).

Well, just to be clear, I first logged into that box after the problem
started. It's possible that someone else had mucked with the install,
but unlikely. I suspect that there was a real build issue of some kind
to start with. Since it's working now I guess it doesn't matter, but I'd
still suspect code from back when the problem started.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] \d on database with a lot of tables is slow

2005-09-27 Thread Jim C. Nasby
On Mon, Sep 26, 2005 at 03:00:44PM -0400, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  I have a client with a database that contains 4000 relations according
  to vacuum verbose, and \d in psql is painfully slow. In particular...
 
 It's presumably mostly in the pg_table_is_visible() calls.  Not sure if
 we can do much to speed those up, but: how many schemas in your search
 path?  What's the distribution of pg_class entries among the schemas?

db=# show search_path;
                      search_path
--------------------------------------------------------
 decibel, pg_sysviews, information_schema, rrs, public

db=# select schema_name, count(*) from pg_user_tables group by schema_name;
  schema_name   | count
----------------+-------
 public         |   764
 ledger         |     6
 state          |     2
 _track_replica |    10
 repair         |     3
 summarized     |   586
 orders         |   512
 snapshot       |  1012
 acl            |    10

db=# 

Also, do you have any idea on the 'Did not find relation named
table-thats-there' error? table-thats-there isn't a temp table, and I don't
believe they're even using temp tables, so I don't think that's the issue.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [HACKERS] \d on database with a lot of tables is slow

2005-09-27 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 On Mon, Sep 26, 2005 at 03:00:44PM -0400, Tom Lane wrote:
 It's presumably mostly in the pg_table_is_visible() calls.  Not sure if
 we can do much to speed those up, but: how many schemas in your search
 path?  What's the distribution of pg_class entries among the schemas?

 db=# show search_path;
  decibel, pg_sysviews, information_schema, rrs, public

 db=# select schema_name, count(*) from pg_user_tables group by schema_name;
  public |   764
  ledger | 6
  state  | 2
  _track_replica |10
  repair | 3
  summarized |   586
  orders |   512
  snapshot   |  1012
  acl|10

Hmm, so lots and lots of tables that aren't visible at all ... that's
definitely the slowest case for pg_table_is_visible.  I'll think about
whether we can improve it.
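
For context, psql's \d filtering comes from a catalog query of roughly
this shape (abridged from memory), so pg_table_is_visible() is
evaluated once per pg_class row:

SELECT n.nspname, c.relname
FROM pg_catalog.pg_class c
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE pg_catalog.pg_table_is_visible(c.oid);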

 Also, do you have any idea on the 'Did not find relation named
 table-thats-there' error? table-thats-there isn't a temp table, and I don't
 believe they're even using temp tables, so I don't think that's the issue.

Uh, what's the exact error message again?  (If it's a backend message,
the verbose form would be useful.)

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Jim C. Nasby
On Tue, Sep 27, 2005 at 02:47:46PM -0400, Jan Wieck wrote:
 On 9/24/2005 8:17 PM, Jim C. Nasby wrote:
 
 Would it be difficult to vacuum as part of a dump? The reasoning behind
 this is that you have to read the table to do the dump anyway, 
 
 I think aside from what's been said so far, it would be rather difficult 
 anyway. pg_dump relies on MVCC and requires to run in one transaction to 
 see a consistent snapshot while vacuum jiggles around with transactions 
 in some rather non-standard way.

Is this true even if they were in different connections?

My (vague) understanding of the vacuum process is that it first vacuums
indexes, and then vacuums the heap. Since we don't dump indexes, there's
nothing for backup to do while those are vacuumed, so my idea is:

pg_dump:
foreach (table)
spawn vacuum
wait for vacuum to hit heap
start copy
wait for analyze to finish
next;

dump_vacuum (table):
foreach (index on table)
vacuum index
next;

notify pg_dump we're going to start vacuum of heap
vacuum heap

if we should analyze {
analyze table
}

notify pg_dump analyze is done
exit

AFAIK, this should allow both to run in separate transactions. Granted,
it would slow down the dump, since it would have to wait while indexes
were being vacuumed, but it would win when it came to the heap.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Tom Lane
Jim C. Nasby [EMAIL PROTECTED] writes:
 AFAIK, this should allow both to run in separate transactions.

... and pretty much destroy any synchronization between the two scans,
which was sort of the point, wasn't it?

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Tom Lane
Bruce Momjian pgman@candle.pha.pa.us writes:
 fix ALTER SCHEMA RENAME for sequence dependency, or remove feature

I've posted a proposed patch to fix this.  The patch requires an initdb
(to add new sequence functions), so if we do that we may as well also
fix the 32/64bit risk mentioned here:
http://archives.postgresql.org/pgsql-hackers/2005-09/msg01241.php

Also, the floor seems open to discuss whether or not to revert the file
access functions to their pre-beta2 APIs.  I've got mixed feelings about
that myself, but you can certainly make a case that the current
definitions are not enough cleaner than what was there before to justify
changing.  This seems particularly true for pg_cancel_backend(), which
already was in the core in 8.0.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Marc G. Fournier

On Tue, 27 Sep 2005, Tom Lane wrote:


Bruce Momjian pgman@candle.pha.pa.us writes:

fix ALTER SCHEMA RENAME for sequence dependency, or remove feature


I've posted a proposed patch to fix this.  The patch requires an initdb
(to add new sequence functions), so if we do that we may as well also
fix the 32/64bit risk mentioned here:
http://archives.postgresql.org/pgsql-hackers/2005-09/msg01241.php

Also, the floor seems open to discuss whether or not to revert the file
access functions to their pre-beta2 APIs.  I've got mixed feelings about
that myself, but you can certainly make a case that the current
definitions are not enough cleaner than what was there before to justify
changing.  This seems particularly true for pg_cancel_backend(), which
already was in the core in 8.0.


IMHO, changes like this *should not* have been allowed during beta, period 
... even during feature freeze, it would have been questionable :(



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Marc G. Fournier

On Mon, 26 Sep 2005, Josh Berkus wrote:


Tom,


Or, as you say, we could take the viewpoint that there are commercial
companies willing to take on the burden of supporting back releases, and
the development community ought not spend its limited resources on doing
that.  I'm hesitant to push that idea very hard myself, because it would
look too much like I'm pushing the interests of my employer Red Hat
... but certainly there's a reasonable case to be made there.


Well, I think you know my opinion on this.  Since there *are* commercial 
companies available, I think we should use them to reduce back-patching 
effort.   I suggest that our policy should be:  the community will patch two 
old releases, and beyond that if it's convenient, but no promises. In other 
words, when 8.1 comes out we'd be telling 7.3 users We'll be patching this 
only where we can apply 7.4 patches.  Otherwise, better get a support 
contract.


Of course, a lot of this is up to individual initiative; if someone fixes a 
patch so it applies back to 7.2, there's no reason not to make it available. 
However, there's no reason *you* should make it a priority.


Agreed ... if it's convenient/easy to back-patch, cool ... but don't go out 
of your way to do it ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [HACKERS] \d on database with a lot of tables is slow

2005-09-27 Thread Jim C. Nasby
On Tue, Sep 27, 2005 at 06:53:13PM -0400, Tom Lane wrote:
  Also, do you have any idea on the 'Did not find relation named
  table-thats-there' error? table-thats-there isn't a temp table, and I 
  don't
  believe they're even using temp tables, so I don't think that's the issue.
 
 Uh, what's the exact error message again?  (If it's a backend message,
 the verbose form would be useful.)

Sorry, remembered it wrong. It's 'Did not find any relation named',
which appears to be in bin/psql/describe.c. It does occur when trying to
do a \d on a specific table.
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Vacuum questions...

2005-09-27 Thread Jim C. Nasby
On Tue, Sep 27, 2005 at 07:12:21PM -0400, Tom Lane wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
  AFAIK, this should allow both to run in seperate transactions.
 
 ... and pretty much destroy any synchronization between the two scans,
 which was sort of the point wasn't it?

Aren't there ways to sync them outside of a transaction? My theory is
that you don't need to synchronize them at the tuple level, since
whichever one gets ahead reading the heap will be pulling the data off
the drive, while the one that's behind will just grab it out of the
buffer (or at worst, the kernel's cache). So all you should need to do
is start both scans at about (as in within a few seconds) the same time.

Heck, if vacuum were made to put more verbose info in its process status,
then it could be as simple as having pg_dump start a vacuum of a table
in a separate connection and just watch for the status to indicate it
had started vacuuming the table.

I *think* this shouldn't be too hard to test, which is good since it's
all theory right now. :)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.comwork: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [HACKERS] Open items list for 8.1

2005-09-27 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian pgman@candle.pha.pa.us writes:
  fix ALTER SCHEMA RENAME for sequence dependency, or remove feature
 
 I've posted a proposed patch to fix this.  The patch requires an initdb
 (to add new sequence functions), so if we do that we may as well also
 fix the 32/64bit risk mentioned here:
 http://archives.postgresql.org/pgsql-hackers/2005-09/msg01241.php
 
 Also, the floor seems open to discuss whether or not to revert the file
 access functions to their pre-beta2 APIs.  I've got mixed feelings about
 that myself, but you can certainly make a case that the current
 definitions are not enough cleaner than what was there before to justify
 changing.  This seems particularly true for pg_cancel_backend(), which
 already was in the core in 8.0.

I am thinking we should keep things as they are now.  

I remember two changes of significance.  First, pg_cancel_backend()'s
return value was changed to boolean.  I think the compelling argument
here is that we are adding other functions that _should_ return boolean,
and to keep pg_cancel_backend() as 1/0 was kind of strange.  Also, I
assume pg_cancel_backend() is not a general use function and therefore
is more of an admin function that we can adjust as needed to improve the
API.  We have always allowed rare/admin functions to be improved without
as much concern for backward compatibility as a more mainstream feature.

The other change was the rename of pg_complete_relation_size() to
pg_total_relation_size().  While there was a huge (exhausting)
discussion that finalized on pg_complete_relation_size(), a number of
people felt pg_total_relation_size() was better.
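
For readers following along, usage of the renamed function is simply
(table name invented):

SELECT pg_total_relation_size('some_table');
-- total on-disk size: heap plus indexes plus TOAST, in bytes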

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] Database file compatability

2005-09-27 Thread Qingqing Zhou

Jim C. Nasby [EMAIL PROTECTED] wrote
 Yeah.  It might be worth adding MAX_ALIGNOF to the set of configuration
 data stored in pg_control, just to be sure you couldn't shoot yourself
 in the foot that way.

 PLEASE. :)


I am coming up with a patch for it.  I think all the ALIGNOF macros should 
be checked.

Regards,
Qingqing 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Database file compatability

2005-09-27 Thread Tom Lane
Qingqing Zhou [EMAIL PROTECTED] writes:
 I think all ALIGNOF macros should be checked.

There are no platforms for which ALIGNOF_SHORT is different from 2.
I don't think there are any platforms we care about where ALIGNOF_INT
is different from 4.  The cases of interest are ALIGNOF_DOUBLE,
ALIGNOF_LONG, ALIGNOF_LONG_LONG_INT (note that MAXIMUM_ALIGNOF is
just the largest of these).  In practice long int is the same type
as either int or long long int, so ALIGNOF_LONG isn't a distinct
case either.  What it comes down to is that MAXIMUM_ALIGNOF is
sufficient to tell the difference between the platforms we need to
deal with.  If you have a counterexample, tell us about it.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [HACKERS] State of support for back PG branches

2005-09-27 Thread Chris Browne
[EMAIL PROTECTED] (Marc G. Fournier) writes:
 On Mon, 26 Sep 2005, Josh Berkus wrote:

 Tom,

 Or, as you say, we could take the viewpoint that there are commercial
 companies willing to take on the burden of supporting back releases, and
 the development community ought not spend its limited resources on doing
 that.  I'm hesitant to push that idea very hard myself, because it would
 look too much like I'm pushing the interests of my employer Red Hat
 ... but certainly there's a reasonable case to be made there.

 Well, I think you know my opinion on this.  Since there *are*
 commercial companies available, I think we should use them to reduce
 back-patching effort.   I suggest that our policy should be:  the
 community will patch two old releases, and beyond that if it's
 convenient, but no promises. In other words, when 8.1 comes out we'd
 be telling 7.3 users We'll be patching this only where we can apply
 7.4 patches.  Otherwise, better get a support contract.

 Of course, a lot of this is up to individual initiative; if someone
 fixes a patch so it applies back to 7.2, there's no reason not to
 make it available. However, there's no reason *you* should make it a
 priority.

 Agreed ... if its convient/easy to back patch, cool ... but don't go
 out of your way to do it ...

We're looking at Slony-I the same way.

The earliest version it ever did support was 7.3.4.

Some effort has had to go into making sure it continues to support
7.3.x, and, as of today's check-ins, there is *some* functionality
which is lost if you aren't running at least 7.4.

At some point, it will make sense to drop 7.3 support, but since
Slony-I has, as a common use-case, assisting upgrades to newer
versions, I'm loath to drop it arbitrarily.

One happy part of that is that it doesn't mean that 7.3 becomes
*totally* unsupported, as major releases such as 1.0.5 and 1.1.0 *do*
support it, and I wouldn't feel horribly badly if direct support
ceased in 1.2 as long as this left people with old databases the
option of using Slony-I 1.1 to upgrade from PG 7.3 to 8.1, at which
point they could get Newer, Better Slony-I 1.3 stuff via upgrading
just on the 8.1 instances.

Of course, nothing *SO* substantial has changed yet that it has become
tempting enough to drop 7.3 support.  There have
occasionally been suggestions to add some 8.0-specific functionality;
when plenty of people are still using 7.4, that just doesn't tempt
:-).
-- 
let name=cbbrowne and tld=cbbrowne.com in String.concat @ [name;tld];;
http://www3.sympatico.ca/cbbrowne/linux.html
There is   nothing in the world  more  helpless and irresponsible and
depraved than a man in the depths of an ether binge.
-- Dr. Hunter S. Thompson

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[HACKERS] enhanced pgbench

2005-09-27 Thread Tatsuo Ishii
Hi all,

We have enhanced pgbench so that it accepts a series of SQL commands
in a file (see attached patches against 8.0.3). This makes it
possible to test various sets of SQL commands. The file may also
contain meta commands. Currently only the \setrandom meta
command is supported, which sets a specified random number into a
variable. For example,

\setrandom aid 1 100000

will set a random number into variable aid between 1 and 100000.

A variable can be referred to in an SQL command by adding : in front
of the variable name.

Here is an example SQL command file.

\setrandom aid 1 100000
\setrandom bid 1 1
\setrandom tid 1 10
\setrandom delta 1 1
BEGIN
UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid
SELECT abalance FROM accounts WHERE aid = :aid
UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid
UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid
INSERT INTO history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, 'now')
END

This will execute virtually the same SQL commands as the built-in pgbench transaction.

To use the SQL command file, you can use the -f option:

pgbench -f /foo/bar/sqlfile
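
Combined with the existing switches, an invocation might look like
(database name invented):

pgbench -c 10 -t 1000 -f /foo/bar/sqlfile mydb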

I think the enhanced pgbench is quite useful and I would like to
include it in 8.1. Or should I keep it for 8.2?
--
SRA OSS, Inc. Japan
Tatsuo Ishii
*** pgbench/pgbench.c   2004-11-09 15:09:31.0 +0900
--- pgbench-new/pgbench.c   2005-09-27 14:31:34.0 +0900
***
*** 41,46 
--- 41,49 
  #include <sys/resource.h>
  #endif   /* ! WIN32 */
  
+ #include <ctype.h>
+ #include <search.h>
+ 
  extern char *optarg;
  extern int    optind;
  
***
*** 72,77 
--- 75,83 
  #define ntellers  10
  #define naccounts 100000
  
+ #define SQL_COMMAND   1
+ #define META_COMMAND  2
+ 
  FILE *LOGFILE = NULL;
  
  bool  use_log;/* log transaction latencies to 
a file */
***
*** 91,96 
--- 97,108 
  
  typedef struct
  {
+   char   *name;
+   char   *value;
+ } Variable;
+ 
+ typedef struct
+ {
PGconn *con;/* connection handle to DB */
int id; /* client No. */
int state;  /* state No. */
***
*** 103,115 
int tid;/* teller id for this 
transaction */
int delta;
int abalance;
struct timeval txn_begin;   /* used for measuring latencies */
  } CState;
  
  static void
  usage(void)
  {
!   fprintf(stderr, "usage: pgbench [-h hostname][-p port][-c nclients][-t ntransactions][-s scaling_factor][-n][-C][-v][-S][-N][-l][-U login][-P password][-d][dbname]\n");
    fprintf(stderr, "(initialize mode): pgbench -i [-h hostname][-p port][-s scaling_factor][-U login][-P password][-d][dbname]\n");
  }
  
--- 115,137 
int tid;/* teller id for this 
transaction */
int delta;
int abalance;
+   void   *variables;
struct timeval txn_begin;   /* used for measuring latencies */
  } CState;
  
+ typedef struct
+ {
+   int type;
+   int argc;
+   char  **argv;
+ } Command;
+ 
+ Command **commands = NULL;
+ 
  static void
  usage(void)
  {
!   fprintf(stderr, "usage: pgbench [-h hostname][-p port][-c nclients][-t ntransactions][-s scaling_factor][-n][-C][-v][-S][-N][-f filename][-l][-U login][-P password][-d][dbname]\n");
    fprintf(stderr, "(initialize mode): pgbench -i [-h hostname][-p port][-s scaling_factor][-U login][-P password][-d][dbname]\n");
  }
  
***
*** 190,195 
--- 212,326 
return (0); /* OK */
  }
  
+ static int
+ compareVariables(const void *v1, const void *v2)
+ {
+   return strcmp(((Variable *) v1)->name, ((Variable *) v2)->name);
+ }
+ 
+ static char *
+ getVariable(CState * st, char *name)
+ {
+   Variable    key = { name }, *var;
+ 
+   var = tfind(&key, &st->variables, compareVariables);
+   if (var != NULL)
+       return (*(Variable **) var)->value;
+   else
+       return NULL;
+ }
+ 
+ static int
+ putVariable(CState * st, char *name, char *value)
+ {
+   Variable    key = { name }, *var;
+ 
+   var = tfind(&key, &st->variables, compareVariables);
+   if (var == NULL)
+   {
+       if ((var = malloc(sizeof(Variable))) == NULL)
+           return false;
+ 
+       var->name = NULL;
+       var->value = NULL;
+ 
+       if ((var->name = strdup(name)) == NULL
+           || (var->value = strdup(value)) == NULL
+           || tsearch(var, &st->variables, compareVariables) == NULL)
+ 

Re: [HACKERS] Making pgxs builds work with a relocated installation

2005-09-27 Thread Peter Eisentraut
Tom Lane wrote:
 Yuck.  Anyone have another idea on coping with space-containing
 pathnames?

Switch to scons.  You heard it here first!

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster