Re: [GENERAL] too many trigger records found for relation item -

2007-01-26 Thread Csaba Nagy
 [snip] I believe this is fixed as of 8.2 --- can you duplicate it
 there?  (No, backpatching the fix is not practical.)

No, I was not able to duplicate it on 8.2, so I think it's fixed (given
that on 8.1 the errors are triggered almost 100% of the runs).

 How sure are you about that uninterlocked getChildTableName() thing?
 It's possible to get a failure complaining about duplicate type name
 instead of duplicate relation name during CREATE TABLE, if the timing
 is just right.

Not sure at all (I left it deliberately unsynchronized to go as fast as
it can even if it errors sometimes on duplicate tables), so that might
be an explanation. The error is a bit misleading, though, or rather
inconsistent: if I had to detect the duplicate-table error condition in
my code so that I could take corrective steps, I would need to look for
two error types instead of one - if I even knew that I had to.

And BTW, I have seen something similar while creating temporary tables
which should not conflict even with the same table name I think...

Cheers,
Csaba.



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Thu, Jan 25, 2007 at 04:29:58PM -0500, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Jeremy Haile wrote:
  If anyone else is experiencing similar problems, please post your
  situation.
 
  All the Windows buildfarm machines are, apparently.
 
 Can't anyone with a debugger duplicate this and get a stack trace for
 us?  If the stats collector is indeed freezing up, a stack trace showing
 where it's stuck would be exceedingly helpful.

Done some checking here. What happens is that suddenly the pgstats
socket stops receiving data. select() (pgstat.c line 1802) returns after
timeout, so got_data is always zero.

Interesting note: I just ran the serial regression tests, and they pass
fine. With the parallel tests, it always stops receiving data somewhere
during the first parallel group.
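The receive loop in question can be sketched in Python (a simplified stand-in for the C code around pgstat.c line 1802, not the actual implementation; socket setup and names are illustrative): select() waits with a timeout, and got_data stays zero whenever select() never reports the socket readable.

```python
import select
import socket

def poll_stats_socket(sock, timeout_s=2.0):
    """Wait up to timeout_s for a datagram, mimicking the pgstat loop:
    select() with a timeout, recv() only if the socket is readable."""
    readable, _, _ = select.select([sock], [], [], timeout_s)
    got_data = bool(readable)        # stays zero in the broken case
    if not got_data:
        return None                  # timed out: no stats message arrived
    return sock.recv(65536)

if __name__ == "__main__":
    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(("127.0.0.1", 0))     # ephemeral port, like the stats socket
    send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_sock.sendto(b"tabstat", recv_sock.getsockname())
    print(poll_stats_socket(recv_sock))  # normally prints b'tabstat'
```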

//Magnus



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Thu, Jan 25, 2007 at 04:29:58PM -0500, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Jeremy Haile wrote:
  If anyone else is experiencing similar problems, please post your
  situation.
 
  All the Windows buildfarm machines are, apparently.
 
 Can't anyone with a debugger duplicate this and get a stack trace for
 us?  If the stats collector is indeed freezing up, a stack trace showing
 where it's stuck would be exceedingly helpful.

Another update - what broke it was not the enabling of autovacuum, it
was the enabling of row level stats. If I disable stats_row_level,
parallel tests pass again.

That doesn't actually tell us *why* it's broken, I think, but it does
tell us why the autovac patch caused it.

I don't have 8.1 or 8.2 around on win32, but it'd be interesting to see if the
same issue happens if you run the tests on that with stats_row_level
enabled. Most likely the same thing happens.

//Magnus



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Thu, Jan 25, 2007 at 04:29:58PM -0500, Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  Jeremy Haile wrote:
  If anyone else is experiencing similar problems, please post your
  situation.
 
  All the Windows buildfarm machines are, apparently.
 
 Can't anyone with a debugger duplicate this and get a stack trace for
 us?  If the stats collector is indeed freezing up, a stack trace showing
 where it's stuck would be exceedingly helpful.

Maybe I should finish testing before I send my emails.

Apparently there is a bug lurking somewhere in pgwin32_select(), because
if I put a #undef select right before the select() call in pgstat.c, the
regression tests pass.

I guess the bug is shown because with row level stats we simply have
more data to process. And it appears only to happen on UDP sockets from
what I can tell.

Now, what I would *like* to do is re-implement that part of the code
using the Win32 APIs directly instead of going through select(), since
it's very isolated code. I'm going to try that and see how invasive it
is, then see if it'll get accepted :-)

(This would of course give us better performance in general in that
codepath, since all the emulation stuff wouldn't be needed, so there's a
point to doing that other than finding the obscure UDP-related bug in
pgwin32_select.)

//Magnus



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Teodor Sigaev

Apparently there is a bug lurking somewhere in pgwin32_select(). Because
if I put a #undef select right before the select in pgstat.c, the
regression tests pass. 


Maybe the problem is related to the recently fixed bug in
pgwin32_waitforsinglesocket()? WaitForMultipleObjectsEx might sleep
indefinitely while waiting for a socket to become writable, so maybe
there is a symmetrical problem with reads? Or is pgwin32_select() used
for waiting on writes too?




--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] Duplicate key violation

2007-01-26 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of Brian Wipf
Sent: donderdag 25 januari 2007 22:42
To: pgsql-general@postgresql.org
Subject: [GENERAL] Duplicate key violation

I got a duplicate key violation when the following query was performed:

INSERT INTO category_product_visible (category_id, product_id)
    SELECT  cp.category_id, cp.product_id
    FROM    category_product cp
    WHERE   cp.product_id = $1 AND
            not exists (
                select  'x'
                from    category_product_visible cpv
                where   cpv.product_id = cp.product_id and
                        cpv.category_id = cp.category_id
            );

This is despite the fact the insert is written to only insert 
rows that do not already exist. The second time the same query 
was run it went through okay. This makes me think there is 
some kind of race condition, which I didn't think was possible 
with PostgreSQL's MVCC implementation. I'm unable to duplicate 
the problem now and the error only occurred once in weeks of 
use. This is on PostgreSQL 8.2.1 running on openSUSE Linux 
10.2. Slony-I 1.2.6 is being used for replication to a single 
slave database.

[snip]

This section is relevant:
http://www.postgresql.org/docs/8.2/interactive/transaction-iso.html

In the default isolation level, Read Committed, you are protected
against dirty reads. You are not protected against nonrepeatable reads
and phantom reads.

In fact, if you start a transaction now, others are not prevented from
inserting records. This can result in a situation where you did not find
the record because someone else inserted it just after your transaction
started.

This is not a race condition, but a side-effect.
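The window described above - two sessions each passing the NOT EXISTS check before either has committed its INSERT - can be sketched with threads standing in for concurrent transactions (names are illustrative; in PostgreSQL the unique index is what ultimately enforces this atomicity):

```python
import threading

table = {}               # (category_id, product_id) -> row; stands in for the table
lock = threading.Lock()  # stands in for the unique index's atomicity guarantee

def insert_if_absent(key):
    """Atomic check-then-insert. Without the lock, two threads could both
    pass the 'key not in table' test and both insert -- the same window two
    READ COMMITTED transactions have between NOT EXISTS and INSERT."""
    with lock:
        if key in table:      # the NOT EXISTS subquery
            return False
        table[key] = True     # the INSERT
        return True

threads = [threading.Thread(target=insert_if_absent, args=((1, 42),))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(table))             # 1 -- only one "row" survives
```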

- Joris



Re: [GENERAL] triggers vs b-tree

2007-01-26 Thread Joris Dobbelsteen
I believe you should design it in a slightly different way:

-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of 
gustavo halperin
Sent: donderdag 25 januari 2007 21:34
To: pgsql-general@postgresql.org
Subject: [GENERAL] triggers vs b-tree

Hello I have a design question:

  I have a table representing Families, and a table 
representing Persons.

 So my question is: If most of the time I need to find all the 
persons for one asked family what is the best way to do that?
 I think about two methods:
 1-  Making a b-tree index in ficha_person with the 
rows family_id and person_id.
 2 - Adding an array in the table ficha_family 
containing the persons of this family. And creating a Trigger 
that update this array for each  person insert/delete in the 
table ficha_family.

It seems you are quite new to (or unfamiliar with) databases.
Are you familiar with Database Normalization?

Read this:
http://databases.about.com/od/specificproducts/a/normalization.htm
(Handles all you need)
http://www.phlonx.com/resources/nf3/
(Gives better examples, up to 3rd normal form)
http://en.wikipedia.org/wiki/Database_normalization
(Wikipedia does not seem to be too good, but it works)

After reading this carefully you should know what option you should
take. The answer is right in there, but there is a lot more useful
information too.

- Joris



[GENERAL] DBI support for pg native arrays?

2007-01-26 Thread Louis-David Mitterrand
Hello,

For a "select array(...) as col1, col2, col3 from table" query, I'd like
the DBI driver to output col1 as a Perl array instead of the scalar
{res1,res2,etc.} representation of it.

Is that somehow possible? I looked at the docs without finding anything.
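(Assuming the driver really does hand back the raw text form, one workaround is to parse the {res1,res2,...} literal yourself. A sketch of such a parser, in Python for illustration - the same logic ports directly to Perl; it handles quoting, escapes and NULL but not nested arrays:)

```python
def parse_pg_array(text):
    """Parse a one-dimensional PostgreSQL array literal such as
    {res1,"res2, with comma",NULL} into a Python list (None for SQL NULL)."""
    assert text.startswith("{") and text.endswith("}")
    body = text[1:-1]
    if not body:
        return []

    def finish(buf, was_quoted):
        s = "".join(buf)
        # only an unquoted NULL means SQL NULL; "NULL" is the string
        return None if (not was_quoted and s == "NULL") else s

    items, buf, was_quoted, in_quotes, i = [], [], False, False, 0
    while i < len(body):
        ch = body[i]
        if in_quotes:
            if ch == "\\":              # backslash escape inside quotes
                buf.append(body[i + 1])
                i += 2
                continue
            if ch == '"':
                in_quotes = False
            else:
                buf.append(ch)
        elif ch == '"':
            in_quotes = True
            was_quoted = True
        elif ch == ",":
            items.append(finish(buf, was_quoted))
            buf, was_quoted = [], False
        else:
            buf.append(ch)
        i += 1
    items.append(finish(buf, was_quoted))
    return items

print(parse_pg_array('{res1,"res2, with comma",NULL}'))
# → ['res1', 'res2, with comma', None]
```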

Thanks,



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Fri, Jan 26, 2007 at 01:11:00PM +0300, Teodor Sigaev wrote:
 Apparently there is a bug lurking somewhere in pgwin32_select(). Because
 if I put a #undef select right before the select in pgstat.c, the
 regression tests pass. 
 
 Maybe the problem is related to the recently fixed bug in pgwin32_waitforsinglesocket()?
 WaitForMultipleObjectsEx might sleep indefinitely while waiting for a socket to
 become writable, so maybe there is a symmetrical problem with reads? Or is
 pgwin32_select() used for waiting on writes too?


pgwin32_waitforsinglesocket() appears to work fine. And we only use
FD_READ, so it's not affected by your patch from what I can tell.

I've got it passing the tests with select replaced with waitforsinglesocket
- now I just need to implement a timeout in that one :-)

//Magnus



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Fri, Jan 26, 2007 at 12:26:58PM +0100, Magnus Hagander wrote:
 On Fri, Jan 26, 2007 at 01:11:00PM +0300, Teodor Sigaev wrote:
  Apparently there is a bug lurking somewhere in pgwin32_select(). Because
  if I put a #undef select right before the select in pgstat.c, the
  regression tests pass. 
  
  Maybe the problem is related to the recently fixed bug in pgwin32_waitforsinglesocket()?
  WaitForMultipleObjectsEx might sleep indefinitely while waiting for a socket to
  become writable, so maybe there is a symmetrical problem with reads? Or is
  pgwin32_select() used for waiting on writes too?
 
 
 pgwin32_waitforsinglesocket() appears to work fine. And we only use
 FD_READ, so it's not affected by your patch from what I can tell.
 
  I've got it passing tests with select replaced with waitforsinglesocket
  - now I just need to implement timeout in that one :-)

Attached patch seems to solve the problem on my machine at least. Uses
pgwin32_waitforsinglesocket() instead of pgwin32_select(). Changes
pgwin32_waitforsinglesocket() to accept the timeout as a parameter (this
is why it touches files outside of the stats area).

//Magnus
Index: src/backend/libpq/be-secure.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/libpq/be-secure.c,v
retrieving revision 1.75
diff -c -r1.75 be-secure.c
*** src/backend/libpq/be-secure.c	5 Jan 2007 22:19:29 -0000	1.75
--- src/backend/libpq/be-secure.c	26 Jan 2007 11:32:07 -0000
***************
*** 275,281 ****
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE : FD_WRITE | FD_CLOSE);
  #endif
  			goto rloop;
  		case SSL_ERROR_SYSCALL:
--- 275,281 ----
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE : FD_WRITE | FD_CLOSE, INFINITE);
  #endif
  			goto rloop;
  		case SSL_ERROR_SYSCALL:
***************
*** 374,380 ****
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE : FD_WRITE | FD_CLOSE);
  #endif
  			goto wloop;
  		case SSL_ERROR_SYSCALL:
--- 374,380 ----
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE : FD_WRITE | FD_CLOSE, INFINITE);
  #endif
  			goto wloop;
  		case SSL_ERROR_SYSCALL:
***************
*** 889,895 ****
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE | FD_ACCEPT : FD_WRITE | FD_CLOSE);
  #endif
  			goto aloop;
  		case SSL_ERROR_SYSCALL:
--- 889,895 ----
  #ifdef WIN32
  			pgwin32_waitforsinglesocket(SSL_get_fd(port->ssl),
  										(err == SSL_ERROR_WANT_READ) ?
! 										FD_READ | FD_CLOSE | FD_ACCEPT : FD_WRITE | FD_CLOSE, INFINITE);
  #endif
  			goto aloop;
  		case SSL_ERROR_SYSCALL:
Index: src/backend/port/win32/socket.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/port/win32/socket.c,v
retrieving revision 1.16
diff -c -r1.16 socket.c
*** src/backend/port/win32/socket.c	5 Jan 2007 22:19:35 -0000	1.16
--- src/backend/port/win32/socket.c	26 Jan 2007 11:35:16 -0000
***************
*** 114,120 ****
  }
  
  int
! pgwin32_waitforsinglesocket(SOCKET s, int what)
  {
  	static HANDLE waitevent = INVALID_HANDLE_VALUE;
  	static SOCKET current_socket = -1;
--- 114,120 ----
  }
  
  int
! pgwin32_waitforsinglesocket(SOCKET s, int what, int timeout)
  {
  	static HANDLE waitevent = INVALID_HANDLE_VALUE;
  	static SOCKET current_socket = -1;
***
*** 

[GENERAL] Rollback using WAL files?

2007-01-26 Thread M.A. Oude Kotte
Hi all!

First of all I'm new to this list, please be gentle :-) Next I'd like to
mention that I've already searched the documentation and the archives,
but couldn't find the answer to my question.

I'm running a production/development database using PostgreSQL 8.1 on a
Debian server. Due to some bad code in one of our applications that uses
this database, some of the data was modified incorrectly over the last
few days. The idea is that I would like to restore the entire database
as far as possible, meaning I would like to undo all transactions that
were performed on it.

Now I've found the WAL files in the pg_xlog directory, and started
browsing around for documentation on what I can do with those. But all I
can find is that you can use them to restore your database after a crash
or a custom backup. But I would like to do it the other way around (not
use them to restore a database, but to roll it back entirely a few
days). I have 4 WAL files, from last Tuesday to today. Can I use these
files to ROLLBACK the current database, so that it's restored to the
situation it was in on Tuesday?

Thanks a lot for any help,


Regards,

Marc





Re: [GENERAL] [Fwd: [PORTS] M$ SQL server DTS package equivalent in Postgres]

2007-01-26 Thread Tomi N/A

2007/1/23, Paul Lambert [EMAIL PROTECTED]:


G'day,

Is there an equivalent in Postgres to the DTS Packages available in M$
SQL server.

I use these in SQL server to pre-load data from CSV files prior to
enabling replication from my primary application. Any pointers on where
best to go for this would be appreciated. I'm reading about something
called EMS, is that the way to go?

Sample of one of the files I use:

DEALER_ID^DATE_CHANGED^TIME_CHANGED^BILLING_CODE_ID^DES^BILLING_CODE_TYPE^LABOUR_RATE^LABOUR_SALES_GROUP^CUSTOMER_NO^PARTS_SALES_GROUP^COMEBACK^WORKSHOP^FRANCHISE^LOCATION^DELETEFLAG
F65^23-Jan-2007^10:13^AA^ADVERSITING ADMIN^I^45^40^2196^18^^0^BLANK^0^
F65^23-Jan-2007^10:13^AN^ADV NEW^I^45^40^1636^18^^0^BLANK^0^
F65^23-Jan-2007^10:13^AP^ADV PARTS^I^45^40^1919^18^^0^BLANK^0^
F65^23-Jan-2007^10:13^AS^ADV SERV^I^45^40^2057^18^^0^BLANK^0^
F65^23-Jan-2007^10:13^AU^ADV USED^I^45^40^1775^18^^0^BLANK^0^N




Paul,
what you're looking for exists in a number of variations.
You can use a good text editor with the postgres' COPY command for simple
bulk .csv loading, but in the long run, you might want to consider a
full-fledged ETL tool (http://en.wikipedia.org/wiki/Extract,_transform,_load)
like the Pentaho Data Integration suite. It allows you to use a wide
variety of data sources (anything with a jdbc driver, .xls, .csv, .xml for
starters), chain data transformations, store them and execute them on a
regular basis or on-demand from a java app etc. A number of such tools are
available, but I suggested Pentaho Data Integration as I have experience
with it and because it is under an open source licence. If anyone can
suggest a better tool for the job, please say so: I for one would like to
have my horizons broadened. :)

Cheers,
t.n.a.


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Jeremy Haile
   Apparently there is a bug lurking somewhere in pgwin32_select(). Because
   if I put a #undef select right before the select in pgstat.c, the
   regression tests pass. 
   
   Maybe the problem is related to the recently fixed bug in pgwin32_waitforsinglesocket()?
   WaitForMultipleObjectsEx might sleep indefinitely while waiting for a socket to
   become writable, so maybe there is a symmetrical problem with reads? Or is
   pgwin32_select() used for waiting on writes too?
  
  
  pgwin32_waitforsinglesocket() appears to work fine. And we only use
  FD_READ, so it's not affected by your patch from what I can tell.
  
  I've got it passing tests with select replaced with waitforsinglesocket
  - now I just need to implement timeout in that one :-)
 
 Attached patch seems to solve the problem on my machine at least. Uses
 pgwin32_waitforsinglesocket() instead of pgwin32_select(). Changes
 pgwin32_waitforsinglesocket() to accept the timeout as a parameter (this
 is why it touches files outside of the stats area).

Magnus - thanks for your investigation and work!  Any chance I could get
a patched exe for win32 and test on my servers?



Re: [GENERAL] trigger question

2007-01-26 Thread Furesz Peter

Hello, maybe I have found a better solution.
In PostgreSQL 8.2 the current solution does not work properly, because I
got a too many triggers on table tablename error. That is the first
thing. The second problem is that if something goes wrong between
disabling and re-enabling the trigger, the trigger will stay in the
disabled state, so we wanted to find a way to disable the trigger for
the current session only.
You have to put this line in postgresql.conf:

custom_variable_classes = 'general'

This defines a session variable. I check it at trigger fire time: if it
is FALSE, I allow the trigger body to run; if it is TRUE, I skip it. If
it is not set, I set it to FALSE in the trigger's first lines. If
something goes wrong, the trigger stays disabled for the current session
only.
For example:

CREATE OR REPLACE FUNCTION public.tr_logolas () RETURNS trigger AS
$body$
DECLARE
   v_tmp RECORD;
   a_trigger_disable BOOLEAN;
BEGIN
   SELECT NULLIF(current_setting('general.trigger_tmp'), '') AS trigger_tmp
   INTO v_tmp;
   IF NOT FOUND OR v_tmp.trigger_tmp IS NULL THEN
   EXECUTE 'SET SESSION general.trigger_tmp=FALSE';
   a_trigger_disable := FALSE;
   ELSE
   a_trigger_disable := v_tmp.trigger_tmp;
   END IF;
   --  END OF SETTING UP TRIGGER **
   IF TG_OP='DELETE' THEN
   --disabling trigger for this session
   EXECUTE 'SET SESSION general.trigger_tmp=TRUE';

   UPDATE sulyozas
   SET torolve = TRUE
   WHERE sulyozas_id = OLD.sulyozas_id;
   --enabling trigger again
   EXECUTE 'SET SESSION general.trigger_tmp=FALSE';
   RETURN NULL;  -- suppress the original DELETE
   END IF;

   IF TG_OP='UPDATE' AND a_trigger_disable IS FALSE THEN
   NULL;  --Do something here
   END IF;
   RETURN NEW;
END;
$body$ LANGUAGE plpgsql;
- Original Message - 
From: Lenorovitz, Joel [EMAIL PROTECTED]

To: Furesz Peter [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Friday, January 19, 2007 5:57 PM
Subject: RE: trigger question


I ran into a similar problem and the solution I came up with (which
admittedly feels like a kludge) was to temporarily disable the triggers
on the table being modified while an update was made and then
re-enabling them immediately after the update.  I am sure there is
potential for problems with this approach and I too would like to find a
better one, but right now this works as I am still in the development
stage and not dealing with any critical data.  Anyway, this is
essentially the code I use (applied now to table foobar) and maybe
sharing it will help inspire a better solution.  Please keep the list
and me informed if you have oneThanks, Joel

Code excerpt from within on delete trigger function for foobar.

-- Disable triggers on table foobar
UPDATE pg_catalog.pg_class SET reltriggers = 0 WHERE oid =
'foobar'::pg_catalog.regclass;
-- Perform update
UPDATE foobar SET is_deleted=TRUE WHERE foobar_id=OLD.foobar_id;
-- Re-enable triggers on table foobar
UPDATE pg_catalog.pg_class SET reltriggers = 1 WHERE oid =
'foobar'::pg_catalog.regclass;


-Original Message-
From: Furesz Peter [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 16, 2007 10:36 AM
To: postgres levlista
Subject: trigger question

Hello,

I have a table named foobar and I don't want to allow DELETE or UPDATE
on its rows.

I have a table as described below:
foobar(foobar_id, value, is_deleted);

I don't want to allow directly delete or modify the table's rows. I plan
to make an on before update or delete trigger and on delete action I
update the actual row is_deleted flag, on UPDATE action I also update
the is_deleted flag and I insert a new row with the new values.
Everything is ok, but when I capture the delete action I execute an
update, which fires the trigger again, and I get an unwanted row.

CREATE TRIGGER tr_foobar BEFORE UPDATE OR DELETE  ON public.foobar
FOR EACH ROW EXECUTE PROCEDURE public.tr_foobar_func();

BEGIN
 IF TG_OP='DELETE' THEN
UPDATE foobar SET is_deleted=TRUE WHERE foobar_id=OLD.foobar_id;
RETURN NULL;
 ELSEIF TG_OP='UPDATE' THEN
INSERT INTO foobar(value) VALUES(NEW.value);
NEW.is_deleted=TRUE;
NEW.value=OLD.value;
RETURN NEW;
 END IF;
END;

What is the right solution for this situation. Thank you for the help!





Re: [GENERAL] too many trigger records found for relation item -

2007-01-26 Thread Furesz Peter
I had the same problem yesterday. I got this error when I tried to
disable the trigger via pg_catalog:


UPDATE pg_catalog.pg_class SET reltriggers = 0 WHERE oid =
'foobar'::pg_catalog.regclass;

But if I disable the trigger using this syntax:

ALTER TABLE tablename DISABLE TRIGGER triggername;

everything is ok.

- Original Message - 
From: Csaba Nagy [EMAIL PROTECTED]

To: Postgres general mailing list pgsql-general@postgresql.org
Sent: Friday, January 26, 2007 10:06 AM
Subject: Re: [GENERAL] too many trigger records found for relation item -



[snip] I believe this is fixed as of 8.2 --- can you duplicate it
there?  (No, backpatching the fix is not practical.)


No, I was not able to duplicate it on 8.2, so I think it's fixed (given
that on 8.1 the errors are triggered almost 100% of the runs).


How sure are you about that uninterlocked getChildTableName() thing?
It's possible to get a failure complaining about duplicate type name
instead of duplicate relation name during CREATE TABLE, if the timing
is just right.


Not sure at all (I left it deliberately unsynchronized to go as fast as
it can even if it errors sometimes on duplicate tables), so that might
be an explanation. The error is a bit misleading, though, or rather
inconsistent: if I had to detect the duplicate-table error condition in
my code so that I could take corrective steps, I would need to look for
two error types instead of one - if I even knew that I had to.

And BTW, I have seen something similar while creating temporary tables
which should not conflict even with the same table name I think...

Cheers,
Csaba.








Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Alvaro Herrera
Jeremy Haile wrote:

 Magnus - thanks for your investigation and work!  Any chance I could get
 a patched exe for win32 and test on my servers?

In the meantime, could you please confirm that turning stats_row_level
off makes the regression test pass?  And whatever tests you were doing?
(Note that by doing that, autovacuum won't work at all, because it needs
those stats.)

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] no unpinned buffers available ? why? (hstore and plperl involved)

2007-01-26 Thread Dave Cramer

Tom,

I've also got a customer getting this error message.

The OS is OS X 10.3, they are using plpgsql, and shared_buffers is set
very low:

shared_buffers = 16

Dave

On 3-Jan-07, at 10:19 AM, Tom Lane wrote:


hubert depesz lubaczewski [EMAIL PROTECTED] writes:
 and - after some time of this create table, postmaster process eats all
 the memory (over 1.8g), and dies with:
 psql:133.sql:125: ERROR:  error from Perl function: no unpinned buffers
 available at line 5.


Could you reduce this to a self-contained example please?  Your
functions depend on a bunch of tables that you have not provided
definitions or data for ...

regards, tom lane







[GENERAL] large document multiple regex

2007-01-26 Thread Merlin Moncure

Hello,

I am receiving a large (300K+) document from an external agent and
need to extract a few interesting bits of data from the document, on
an insert trigger, into separate fields.

regex seems one way to handle this but is there any way to avoid
rescanning the document for each regex.  One solution I am kicking
around is some C hackery but then I lose the expressive power of
regex.  Ideally, I need to be able to scan some text and return a
comma delimited string of values extracted from it.  Does anybody know
if this is possible or have any other suggestions?
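(One way to avoid rescanning, short of C hackery, is to union the patterns into a single alternation with named groups, so the engine makes one pass and tags which pattern matched each hit. A Python sketch of the idea - the field names and patterns here are invented for illustration, not from the original document:)

```python
import re

# Hypothetical "interesting bits" to pull from a large document in one scan.
FIELDS = {
    "invoice":  r"Invoice\s+#(?P<invoice>\d+)",
    "total":    r"Total:\s+\$(?P<total>[\d.]+)",
    "customer": r"Customer:\s+(?P<customer>\w+)",
}

# One combined pattern: the engine scans the document a single time.
SCANNER = re.compile("|".join(FIELDS.values()))

def extract(doc):
    """Return the extracted values as a comma-delimited string."""
    found = {}
    for m in SCANNER.finditer(doc):   # single pass over the text
        found.update({k: v for k, v in m.groupdict().items() if v is not None})
    return ",".join(found.get(k, "") for k in FIELDS)

doc = "Customer: acme ... Invoice #1042 ... Total: $99.50"
print(extract(doc))   # → 1042,99.50,acme
```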

merlin



Re: [GENERAL] [Fwd: [PORTS] M$ SQL server DTS package equivalent in Postgres]

2007-01-26 Thread Merlin Moncure

On 1/26/07, Tomi N/A [EMAIL PROTECTED] wrote:

2007/1/23, Paul Lambert [EMAIL PROTECTED]:
 Is there an equivalent in Postgres to the DTS Packages available in M$
 SQL server.



what you're looking for exists in a number of variations.
You can use a good text editor with the postgres' COPY command for simple
bulk .csv loading, but in the long run, you might want to consider a


In my opinion, if your input data is in well-formed csv, you don't
really need much of anything.  Make a table(s) with all text columns
which will accept the csv data from the copy statement.  After that,
write queries to insert...select data from your import tables into the
actual tables holding the data doing all the appropriate casting
in-query.

Besides being easy to schedule and very flexible, manipulating data
with queries is extremely powerful and fairly easy to maintain
assuming you know a little SQL - thanks to postgresql's huge array of
built-in string manipulation functions.  Your skills learned here will
pay off using the database as well for other things.

Not only that, but this approach will be fast since it is declarative
and handles entire tables at once as opposed to DTS-ish solutions
which tend to do processing record by record.  Not to mention they are
overcomplicated and tend to suck.  (DTS does have the ability to read
from any ODBC source which is nice...but that does not apply here).
In fact, my favorite use for DTS is to convert databases out of
Microsoft SQL server and (ugh!) Access, a task which it excels
at...but the real magic here is in the ODBC driver, not DTS.

Worst case scenario is you have to do some preprocessing in C or perl
on the csv document if it is not completely well formed and blows up
postgresql's copy statement.  In other words, you don't need a data
processor, PostgreSQL *is* a data processor.
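The staging-then-cast pattern described above can be mimicked outside the database too. A Python sketch (column names and types are invented for illustration, loosely following the sample file earlier in the thread) of loading everything as text first and casting in a second step, which is exactly what the all-text staging table plus insert...select achieves:

```python
import csv
import io
from datetime import datetime

RAW = """F65^23-Jan-2007^10:13^AA^ADVERSITING ADMIN^45
F65^23-Jan-2007^10:13^AN^ADV NEW^45"""

# Step 1: the "COPY" into an all-text staging area -- no casting yet.
staging = list(csv.reader(io.StringIO(RAW), delimiter="^"))

# Step 2: the insert...select with in-query casts, done here in Python.
def cast_row(row):
    dealer, date_s, time_s, code, desc, rate = row
    return {
        "dealer_id":    dealer,
        "changed_at":   datetime.strptime(f"{date_s} {time_s}", "%d-%b-%Y %H:%M"),
        "billing_code": code,
        "description":  desc,
        "labour_rate":  float(rate),   # text -> numeric, as a cast would do
    }

rows = [cast_row(r) for r in staging]
print(rows[0]["labour_rate"], rows[0]["changed_at"].year)   # → 45.0 2007
```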

merlin



Re: [GENERAL] Rollback using WAL files?

2007-01-26 Thread Merlin Moncure

On 1/26/07, M.A. Oude Kotte [EMAIL PROTECTED] wrote:

Now I've found the WAL files in the pg_xlog directory, and started
browsing around for documentation on what I can do with those. But all I
can find is that you can use them to restore your database after a crash
or a custom backup. But I would like to do it the other way around (not
use them to restore a database, but to roll it back entirely a few
days). I have 4 WAL files, from last Tuesday to today. Can I use these
files to ROLLBACK the current database, so that it's restored to the
situation it was in on Tuesday?


Probably not.  While such things are possible with WAL files, it is
only for a relatively short duration, unless your server is extremely
inactive or you planned for this contingency in advance - by setting
up a PITR base backup and archiving as many days' WAL files back as
you would like to be able to go back in time.  I'm guessing you didn't
do this.
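For next time: the contingency mentioned above has to be configured before the incident. A minimal sketch of the settings involved (the archive directory and target time are illustrative examples, not values from this thread) - WAL archiving in postgresql.conf, and a recovery.conf used at restore time to replay a base backup up to a chosen point:

```
# postgresql.conf -- archive every completed WAL segment
# (on 8.1, setting archive_command is what enables archiving)
archive_command = 'cp "%p" /mnt/wal_archive/"%f"'

# recovery.conf -- placed in the data directory of a restored base backup;
# replays archived WAL up to just before the bad transactions
restore_command = 'cp /mnt/wal_archive/%f "%p"'
recovery_target_time = '2007-01-23 12:00:00'
```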

Next we look at standard dumps (pg_dump) and possibly logs (are you
logging statements?) to recover your data.  If those don't produce
results, and you have no other way of recovering your information,
this has officially become a learning experience :(.

merlin



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Apparently there is a bug lurking somewhere in pgwin32_select(). Because
 if I put a #undef select right before the select in pgstat.c, the
 regression tests pass. 
 I guess the bug is shown because with row level stats we simply have
 more data to process. And it appears only to happen on UDP sockets from
 what I can tell.

Hmm ... if this theory is correct, then statistics collection has
never worked at all on Windows, at least not under more than the most
marginal load; and hence neither has autovacuum.

Does that conclusion agree with reality?  You'd think we'd have heard
a whole lot of complaints about it, not just Jeremy's; and I don't
remember it being a sore point.  (But then again I just woke up.)
What seems somewhat more likely is that we broke pgwin32_select
recently, in which case we oughta find out why.  Or else remove it
entirely (does your patch make that possible?).

Keep in mind also that we have seen the stats-test failure on
non-Windows machines, so we still need to explain that ...

regards, tom lane



Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
On Fri, Jan 26, 2007 at 09:55:39AM -0500, Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
  Apparently there is a bug lurking somewhere in pgwin32_select(). Because
  if I put a #undef select right before the select in pgstat.c, the
  regression tests pass. 
  I guess the bug is shown because with row level stats we simply have
  more data to process. And it appears only to happen on UDP sockets from
  what I can tell.
 
 Hmm ... if this theory is correct, then statistics collection has
 never worked at all on Windows, at least not under more than the most
 marginal load; and hence neither has autovacuum.

We have had lots of reports of issues with the stats collector on
Windows. Some were definitely fixed by the patch by OT, but I don't
think all.
The thing is, since it didn't give any error messages at all, most users
wouldn't notice. Other than their tables getting bloated, in which case
they would do a manual vacuum and conclude autovacuum wasn't good
enough. Or something.


 Does that conclusion agree with reality?  You'd think we'd have heard
 a whole lot of complaints about it, not just Jeremy's; and I don't
 remember it being a sore point.  (But then again I just woke up.)
 What seems somewhat more likely is that we broke pgwin32_select
 recently, in which case we oughta find out why.  Or else remove it
 entirely (does your patch make that possible?).

AFAIK, it only affects UDP connections, and this patch takes
pgwin32_select out of the loop for all UDP stuff.
But if we get this in, pgwin32_select is only used in the postmaster
accept-new-connections loop (from what I can tell by a quick look), so
I'd definitely want to rewrite that one as well to use a better way than
select-emulation. Then it could go away completely.


 Keep in mind also that we have seen the stats-test failure on
 non-Windows machines, so we still need to explain that ...

Yeah. But it *could* be two different stats issues lurking. Perhaps the
issue we've seen on non-windows can be fixed by the settings Alvaro had
me try (increasing autovacuum_vacuum_cost_delay or the delay in the
regression test).

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Jeremy Haile
 We have had lots of reports of issues with the stats collector on
 Windows. Some were definitely fixed by the patch by OT, but I don't
 think all.

Here were a couple of other reports I found:
http://archives.postgresql.org/pgsql-hackers/2006-09/msg00415.php
http://archives.postgresql.org/pgsql-hackers/2006-04/msg00127.php


 The thing is, since it didn't give any error messages at all, most users
 wouldn't notice. Other than their tables getting bloated, in which case
 they would do a manual vacuum and conclude autovacuum wasn't good
 enough. Or something.

This is indeed what I assumed at first.  I started running vacuum
analyze hourly and turned off autovacuum.  Later, I decided to try the
autovacuum route again and investigated why it wasn't working well.

Magnus - could you send me a patched exe to try in my environment? 
Would it be compatible with 8.2.1?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] no unpinned buffers available ? why? (hstore and plperl involved)

2007-01-26 Thread Tom Lane
Dave Cramer [EMAIL PROTECTED] writes:
 I've also got a customer getting this error message.
 the OS is OSX 10.3, they are using plpgsql, and shared_buffers is set
 very low
 shared_buffers = 16   

Well, the answer to that is if it hurts, don't do that.  You couldn't
expect to process more than a very small number of very simple queries
with so few buffers.  (Example: a simple INSERT involving a btree index
will require at least four concurrently pinned buffers if there's a need
for a btree page split; a join query would require at least one buffer
per table and index involved, etc.)  Hubert was using a reasonably large
number of buffers, so his case sounds more like an actual bug, but I'd
call the above just pilot error.
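
For comparison, a less starved configuration might look like the fragment below (the value is illustrative, not a recommendation from the thread; in 8.1-era releases shared_buffers is counted in 8 kB pages):

```ini
# postgresql.conf fragment (illustrative)
shared_buffers = 1000    # 1000 x 8 kB pages = ~8 MB, versus 16 pages = 128 kB
```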

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] [Fwd: [PORTS] M$ SQL server DTS package equivalent in Postgres]

2007-01-26 Thread Tomi N/A

Besides being easy to schedule and very flexible, manipulating data
with queries is extremely powerful and fairly easy to maintain
assuming you know a little SQL -- thanks to postgresql's huge array of
built in string manipulation functions.  Your skills learned here will
pay off using the database as well for other things.

Not only that, but this approach will be fast since it is declarative
and handles entire tables at once as opposed to DTS-ish solutions
which tend to do processing record by record.  Not to mention they are
overcomplicated and tend to suck. (DTS does have the ability to read
from any ODBC source which is nice...but that does not apply here).


Different strokes for different folks, it seems.
I'd argue that COPY followed by a barrage of plpgsql statements can't
be used for anything but the most trivial data migration cases (where
it's invaluable), where you have line-organized data input for a
handful of tables at most.
In my experience (which is probably very different from anyone
else's), most real world situations include data from a number of very
different sources, ranging from the simplest (.csv and, arguably,
.xml) to the relatively complex (a couple of proprietary databases,
lots of tables, on-the fly row merging, splitting or generating
primary keys, date format problems and general pseudo-structured,
messed up information).
Once you've got your data in your target database (say, pgsql), using
SQL to manipulate the data makes sense, but it is only the _final_
step of an average, real world data transformation.

Cheers,
t.n.a.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] [Fwd: [PORTS] M$ SQL server DTS package equivalent in

2007-01-26 Thread Jeremy Haile
I've also used Pentaho Data Integration (previously known as Kettle)
quite extensively, and can recommend it.  It supports many different
databases and has fairly good documentation (although thin in some
areas).  It has a GUI drag-and-drop tool that can be used to configure
transformations and is very flexible.  It also has an active community
that responds when you have issues.

I use it as part of a regular job that runs every 5 minutes and hourly
to copy and transform data from a SQL Server DB to a PostgreSQL DB.  I
use COPY when I can simply select data into a CSV and load it into
another DB - but as Tomi said, when you have to do primary key
generation, row merging, data cleanup, and data transformations - I
would use some sort of ETL tool over just SQL.
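
As a rough illustration of the COPY-then-transform pattern described above (all table, file, and sequence names here are hypothetical, not from the thread):

```sql
-- Load the raw CSV into a staging table first.
COPY staging_orders FROM '/tmp/orders.csv' WITH CSV HEADER;

-- Then do key generation and cleanup in plain SQL while moving
-- the rows into the real table.
INSERT INTO orders (order_id, customer_id, total)
SELECT nextval('orders_id_seq'), customer_id, total
FROM staging_orders
WHERE total IS NOT NULL;
```

Anything beyond this (merging rows from several sources, reconciling formats) is where a dedicated ETL tool starts to pay off.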

My 2 cents,
Jeremy Haile


On Fri, 26 Jan 2007 15:14:22 +, Tomi N/A [EMAIL PROTECTED] said:
  Besides being easy to schedule and very flexible, manipulating data
  with queries is extremely powerful and fairly easy to maintain
  assuming you know a little SQL -- thanks to postgresql's huge array of
  built in string manipulation functions.  Your skills learned here will
  pay off using the database as well for other things.
 
  Not only that, but this approach will be fast since it is declarative
  and handles entire tables at once as opposed to DTS-ish solutions
  which tend to do processing record by record.  Not to mention they are
  overcomplicated and tend to suck. (DTS does have the ability to read
  from any ODBC source which is nice...but that does not apply here).
 
 Different strokes for different folks, it seems.
 I'd argue that COPY followed by a barrage of plpgsql statements can't
 be used for anything but the most trivial data migration cases (where
 it's invaluable) where you have line-organized data input for a
 handful of tables at most.
 In my experience (which is probably very different from anyone
 else's), most real world situations include data from a number of very
 different sources, ranging from the simplest (.csv and, arguably,
 .xml) to the relatively complex (a couple of proprietary databases,
 lots of tables, on-the fly row merging, splitting or generating
 primary keys, date format problems and general pseudo-structured,
 messed up information).
 Once you've got your data in your target database (say, pgsql), using
 SQL to manipulate the data makes sense, but it is only the _final_
 step of an average, real world data transformation.
 
 Cheers,
 t.n.a.
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] triggers vs b-tree

2007-01-26 Thread Tomas Vondra
 Hello, I have a design question:
 
  I have a table representing Families, and a table representing Persons.
 The table Family has a column family_id as its primary key.
 The table Person has a column person_id as its primary key and also
 contains a column family_id.
 As you can see, the column family_id in the table ficha_person is not
 unique; it is the same for all the persons of one family.
 
 So my question is: if most of the time I need to find all the persons
 of one given family, what is the best way to do that?
 I can think of two methods:
 1 - Making a b-tree index on ficha_person over the columns
 family_id and person_id.
 2 - Adding an array to the table ficha_family containing the
 persons of this family, and creating a trigger that updates this array
 for each person insert/delete in the table ficha_person.
 
 So ..., what do you think? Is there a better solution, or which of the
 two above is better?
 
  Thank you in advance,
    Gustavo.
 

The trigger/array approach seems to me like a premature optimization - if
you are not sure the index is 'too slow' (and there's no other way to
speed it up), don't do it.

You should always have a foreign key on Person(family_id) referencing
Family(family_id), as you need to enforce data integrity between
these two tables, and the rule of thumb is to have indexes on
foreign keys in large tables. The reason is pretty simple -
PostgreSQL runs a simple query when checking the foreign key.

So if the Person table is 'small' (less than, for example, 1000 rows) and
it will not grow too much, there's no need for an index (as it won't
be used for small tables anyway) and the queries to get all the family
members will be very fast too.

On the other hand, if the Person table is 'large' (say more than 10,000
rows), then there should be an index on Person(family_id). Then it
depends on your requirements - the most important things to consider are
these:

1) Will the application be mostly used to select or update?

   The trigger adds (small) overhead to modifications, but if you do
   mostly selects this may not be a problem.

2) Do you need only IDs of the family members, or do you need all the
   data from Person table?

   The trigger/array solution gives you only IDs and you'll have to
   fetch the data in a separate query (most probably). The array simply
   complicates the queries.

Anyway, I would try to stick with the foreign key / index solution as
long as possible. If you are not happy with the speed do some benchmarks
with the trigger / array solution and compare them to the foreign key /
index. Try to do some other optimizations too (for example cluster the
Person table along the family_id column - that usually means a serious
performance boost).
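
A minimal sketch of the foreign key / index approach described above (table and index names are illustrative; the CLUSTER line uses the 8.1-era syntax):

```sql
CREATE TABLE family (
    family_id integer PRIMARY KEY
);

CREATE TABLE person (
    person_id integer PRIMARY KEY,
    family_id integer NOT NULL REFERENCES family (family_id)
);

-- Index the foreign key column; it serves both the FK checks and
-- the "all members of one family" lookups on large tables.
CREATE INDEX person_family_id_idx ON person (family_id);

-- Fetch all members of one family.
SELECT * FROM person WHERE family_id = 42;

-- Optional: physically order the table along family_id.
CLUSTER person_family_id_idx ON person;
```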

Tomas

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] too many trigger records found for relation item -

2007-01-26 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 And BTW, I have seen something similar while creating temporary tables
 which should not conflict even with the same table name I think...

I've heard reports of that, but never been able to duplicate it ...

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Rollback using WAL files?

2007-01-26 Thread Florian Weimer
* M. A. Oude Kotte:

 I'm running a production/development database using PostgreSQL 8.1 on a
 Debian server. Due to some bad code in one of our applications who use
 this database, some of the data was modified incorrectly the last few
 days. The idea is that I would like to restore the entire database as
 much as possible, meaning I would like to undo all transactions that
 were performed on it.

In theory, this should be possible (especially if you haven't switched
off full page writes).  But I don't know a ready-made solution for
this kind of task.

-- 
Florian Weimer[EMAIL PROTECTED]
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Postgresql 8.1: plperl code works with LATIN1, fails with UTF8

2007-01-26 Thread Philippe Lang
Hi,

I've got plperl code that works just fine when the database is encoded using 
LATIN1, but fails as soon as I switch to UTF8.

I've been testing PG 8.1.4 under Linux and PG 8.1.6 under FreeBSD; both behave 
exactly the same.

I'm sorry I'm not able to strip down the code and show you a small test, but 
if anyone needs the full script, feel free to ask me per email.

The code is made up of plperl routines, all structured in the same way, but 
only one of them fails in UTF8. It is:


#
CREATE OR REPLACE FUNCTION public.volets_fiche_fab_1 (
IN  id_commande     int4,

OUT pos             int4,
OUT quant           int4,
OUT nbre_vtx        int4,
OUT nbre_vtx_total  int4,
OUT larg_maconnerie int4,
OUT haut_maconnerie int4,
OUT larg_vtx        varchar(20),
OUT haut_vtx        int4,
OUT ouv             int4,
OUT couvre_joints   text,
OUT coupe_verticale text,
OUT vide_interieur  varchar(20),
OUT typ             varchar(20)
)
RETURNS SETOF record
AS

$$

BEGIN { strict->import(); }


#---------------------------------------------------------------
#-- Lexical variables
#---------------------------------------------------------------
my @i;
my @io;
my @o;
my $i;
my $io;
my $o;
my %input;
my %output;
my $fab;
my $fab_nrows;
my $lignes_query;
my $lignes;
my $lignes_nrows;
my $lignes_rn;
my $c;
my $j;
my $key;
my $value;
my $ordre;
my $vtxg;
my $vtxd;


#---------------------------------------------------------------
#-- Helper functions
#---------------------------------------------------------------
my $init = sub
{
    $c = 0;
    foreach $i (@i)   {$input{$i} = $_[$c++]};
    foreach $io (@io) {$input{$io} = $_[$c]; $output{$io} = $_[$c++]};
    foreach $o (@o)   {$output{$o} = $_[$c++]};
};

my $start_sub = sub
{
$init(@_);
};

my $end_sub = sub
{
return undef;
};

my $ret = sub
{
    while (($key, $value) = each %output) {
        if (!defined($value)) {elog(ERROR, 'Valeur indéfinie pour ' . $key)}
    };
    return_next \%output;
    $init(@_);
};


#---------------------------------------------------------------
#-- Function parameter configuration
#---------------------------------------------------------------
@i = ( 'id_commande'
 );

@io = ();

@o = ( 'pos',
   'quant', 
   'nbre_vtx',
   'nbre_vtx_total',
   'larg_maconnerie', 
   'haut_maconnerie',
   'larg_vtx', 
   'haut_vtx',
   'ouv',
   'couvre_joints',
   'coupe_verticale',
   'vide_interieur',
   'typ'
 );


#---------------------------------------------------------------
#-- Function parameter setup
#---------------------------------------------------------------
$start_sub(@_);


#---------------------------------------------------------------
#-- Build the fabrication sheet
#---------------------------------------------------------------
$lignes_query = 'SELECT * FROM lignes WHERE id_commande = ' .
                $input{'id_commande'} . ' ORDER BY pos;';
$lignes = spi_exec_query($lignes_query);
$lignes_nrows = $lignes->{processed};
foreach $lignes_rn (0 .. $lignes_nrows - 1)
{
    # Fabrication data for this line
    $fab = spi_exec_query('SELECT * FROM volets_fab(' .
                          $lignes->{rows}[$lignes_rn]->{'id'} . ');');
    $fab_nrows = $fab->{processed};

    # Look up the left and right leaf widths, if present
    for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Largeur de vantail gauche')
                 and ($j < $fab_nrows); $j = $j + 1) {};
    if ($j < $fab_nrows) { $vtxg = $fab->{rows}[$j]->{'larg'}; }
    for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Largeur de vantail droite')
                 and ($j < $fab_nrows); $j = $j + 1) {};
    if ($j < $fab_nrows) { $vtxd = $fab->{rows}[$j]->{'larg'}; }

    # Position
    $output{'pos'} = $lignes->{rows}[$lignes_rn]->{'pos'};

    # Quantity
    $output{'quant'} = $lignes->{rows}[$lignes_rn]->{'quant'};

    # Number of leaves
    $output{'nbre_vtx'} = $lignes->{rows}[$lignes_rn]->{'nbre_vtx'};

    # Total number of leaves
    $output{'nbre_vtx_total'} = $lignes->{rows}[$lignes_rn]->{'nbre_vtx'} *
                                $lignes->{rows}[$lignes_rn]->{'quant'};

    # Masonry width
    for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Largeur de maçonnerie')
                 and ($j < $fab_nrows); $j = $j + 1) {};
    if ($j < $fab_nrows)

[GENERAL] Can you specify the pg_xlog location from a config file?

2007-01-26 Thread Karen Hill
Windows doesn't support symlinks.  Is it possible instead for there to
be a config file that lets one set where the pg_xlog directory will sit?


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Can you specify the pg_xlog location from a config file?

2007-01-26 Thread Roman Neuhauser
# [EMAIL PROTECTED] / 2007-01-26 09:21:27 -0800:
 Windows doesn't support symlinks.  Is it possible instead for there to
 be a config file that lets one set where the pg_xlog directory will sit?

Windows has junction points.

-- 
How many Vietnam vets does it take to screw in a light bulb?
You don't know, man.  You don't KNOW.
Cause you weren't THERE. http://bash.org/?255991

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Installation on Web Server

2007-01-26 Thread codeWarrior
In theory -- yes. In practice -- no. And yes, you are correct --
PostgreSQL needs to be installed before PHP. You will find this to be the case
with ANY dependencies in PHP, including things like jpeg support, curl, etc. So
this is NOT a PostgreSQL problem, not really anyway.

This is really a PHP ./configure --with-pgsql problem -- the problem
being that you don't have the PostgreSQL shared libraries installed because
you did not install PostgreSQL on the web server. PHP can't really be
compiled with PostgreSQL support without being able to link against the
.so's [shared objects].

Your best bet: the easiest, most reliable solution is to do a full install
of PostgreSQL on your web server, then rebuild PHP. You don't have to run
PostgreSQL on the web server -- the added benefit is that you have a readily
available PostgreSQL server.

The medium-level alternative -- I think you might be able to use the
Pear::DB libs in PHP and not necessarily have to compile / build PHP against
the PostgreSQL libs. I am not positive on this -- I don't do things this
way.

The hard way -- you would have to identify the PostgreSQL shared libs that
PHP needs to link against, retrieve the PostgreSQL source code, modify the
build script for PostgreSQL so you build just the shared libs, then build /
install those [the shared libs].




Richard Hayward [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 I'm wanting to install PHP on a Apache/Linux server with support for
 PostgreSQL.

 It seems that I need to install PostgreSQL on the machine first,
 before the PHP installation will work.

 The way I've done this previously is simply to install PostgreSQL.
 However, the database server is in fact another machine, so another
 full install of it on the web server seems redundant. All the
 instructions I've found assume that web and database servers are
 running on the same host.

 Is there some (easy !) way to install just the minimum parts of
 PostgreSQL I need on the web server?

 Regards

 Richard 



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Can you specify the pg_xlog location from a config

2007-01-26 Thread Dave Page
Karen Hill wrote:
 Windows doesn't support symlinks.  Is it possible instead for there to
 be a config file that lets one set where the pg_xlog directory will sit?


You can use a junction point for this instead of a symlink. Google for
utilities to create them.

Regards, Dave.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Can you specify the pg_xlog location from a config file?

2007-01-26 Thread Tom Lane
Karen Hill [EMAIL PROTECTED] writes:
 Windows doesn't support symlinks.

Yes it does, at least in reasonably modern versions.  They're called
junctions or something like that.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Rollback using WAL files?

2007-01-26 Thread Tom Lane
Florian Weimer [EMAIL PROTECTED] writes:
 In theory, this should be possible (especially if you haven't switched
 off full page writes).

Not really --- the WAL records are not designed to carry full
information about the preceding state of the page, so you can't use them
to undo.  (Example: a DELETE record says which tuple was deleted, but
not what was in it.)

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problem with result ordering

2007-01-26 Thread Thorsten Körner
Hi Ted,

On Thursday, 25 January 2007 19:53, Ted Byers wrote:
 The question I'd ask before offering a solution is, Does the order of the
 id data matter, or is it a question of having all the results for a given
 id together before proceeding to the next id?  The answer to this will
 determine whether or not adding either a group by clause or an order by
 clause will help.

 Is there a reason you client app doesn't submit a simple select for each of
 the required ids?  You'd have to do some checking to see whether it pays to
 have the ordering or grouping operation handled on the server or client.
 Other options to consider, perhaps affecting performance and security,
 would be parameterized queries or stored procedures.

Yes, the reason is that a large list of ids is generated from a user's 
request, coming from outside our application (for example as an Excel sheet), 
and the output must be ordered by the ordering in this list.
Surely we can handle this in our client application (Java code), but I think 
it would be more sexy to have this capability in the database logic, since 
our client application should not do much more than retrieve data from the db 
and then present it.
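
One way to keep that ordering inside the database (a sketch, assuming PostgreSQL 8.2's VALUES lists; table and column names are illustrative) is to join against the ID list together with its position:

```sql
-- ids(id, ord) carries the externally supplied ordering.
SELECT t.*
FROM (VALUES (101, 1), (57, 2), (33, 3)) AS ids (id, ord)
JOIN items t ON t.id = ids.id
ORDER BY ids.ord;
```

On older servers the same effect can be had with a temporary table holding (id, position).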

Thanks for your comments
Thorsten

-- 
CappuccinoSoft Business Systems
Hamburg

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Predicted lifespan of different PostgreSQL branches

2007-01-26 Thread Bill Moran

I spent some time googling this and searching the postgresql.org site, but
I'm either not good enough with the search strings, or it's not to be found.

I'm trying to plan upgrades so that we don't upgrade needlessly, but also
don't get caught using stuff that nobody's supporting any more.
The FreeBSD project keeps this schedule:
http://www.freebsd.org/security/#adv
which is _really_ nice when talking to managers and similar people about
when upgrades need to be scheduled.

Does the PostgreSQL project have any similar policy about EoLs?  Even just
a simple statement like, it is our goal to support major branches for 2
years after release or some such?

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Rollback using WAL files?

2007-01-26 Thread Karen Hill

On Jan 26, 9:45 am, [EMAIL PROTECTED] (Tom Lane) wrote:
 Florian Weimer [EMAIL PROTECTED] writes:
  In theory, this should be possible (especially if you haven't switched
  off full page writes).

 Not really --- the WAL records are not designed to carry full
 information about the preceding state of the page, so you can't use them
 to undo.  (Example: a DELETE record says which tuple was deleted, but
 not what was in it.)

It would be really useful if one had the option of allowing the WAL
records to keep track of what was in a tuple, as evidenced here.  I use
triggers on every production table to record every change to log tables
(which have rules to prevent deleting and updating). Allowing the
option of having the WAL do this seems like a good idea...
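
A sketch of the kind of trigger-based change log described here (all names are hypothetical; the log table is assumed to mirror the audited table exactly):

```sql
-- Log table with the same columns as the audited table.
CREATE TABLE items_log (LIKE items);

CREATE OR REPLACE FUNCTION items_audit() RETURNS trigger AS $$
BEGIN
    -- Record the pre-change row image, which WAL alone does not keep.
    INSERT INTO items_log SELECT OLD.*;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_audit_trg
    AFTER UPDATE OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_audit();
```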

regards,
karen


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Installation on Web Server

2007-01-26 Thread Alan Hodgson
  Is there some (easy !) way to install just the minimum parts of
  PostgreSQL I need on the web server?

In a sane distribution, you would just do something like yum install 
php php-pgsql, and that would pull in whatever bits of PostgreSQL are 
required (shared libraries).

-- 
It's not a war on drugs, it's a war on personal freedom. Keep that in 
mind at all times.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Can you specify the pg_xlog location from a config file?

2007-01-26 Thread Jeremy Haile
This utility is useful for creating junctions in Windows:
http://www.microsoft.com/technet/sysinternals/FileAndDisk/Junction.mspx

I am using this to symlink my pg_xlog directory to another disk and it
works great.

Jeremy Haile


On Fri, 26 Jan 2007 18:27:04 +, Roman Neuhauser
[EMAIL PROTECTED] said:
 # [EMAIL PROTECTED] / 2007-01-26 09:21:27 -0800:
  Windows doesn't support symlinks.  Is it possible instead for there to
  be a config file that lets one set where the pg_xlog directory will sit?
 
 Windows has junction points.
 
 -- 
 How many Vietnam vets does it take to screw in a light bulb?
 You don't know, man.  You don't KNOW.
 Cause you weren't THERE. http://bash.org/?255991
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Problem with result ordering

2007-01-26 Thread Ted Byers

Hi Ted,



Hi Thorsten,



On Thursday, 25 January 2007 19:53, Ted Byers wrote:
The question I'd ask before offering a solution is, Does the order of 
the

id data matter, or is it a question of having all the results for a given
id together before proceeding to the next id?  The answer to this will
determine whether or not adding either a group by clause or an order by
clause will help.

Is there a reason you client app doesn't submit a simple select for each 
of
the required ids?  You'd have to do some checking to see whether it pays 
to

have the ordering or grouping operation handled on the server or client.
Other options to consider, perhaps affecting performance and security,
would be parameterized queries or stored procedures.


Yes, the reason is, that a large list of ids are generated from a users
request, coming from outside our application (for example as an EXCEL 
sheet),

and the output must be ordered by the ordering in this list.
Surely we can handle this in our client application (java code), but I 
think
it would be more sexy to have this possibility in the database logic, 
since
our client-application should not do much more than retrieve data from the 
db

and then present it.



To be honest, your rationale here makes no sense to me, business or 
otherwise.  I think I'd be remiss if I didn't tell you this.  Of course, 
what you do is up to you, but I never concern myself with what is 'more 
sexy' when designing a distributed application.  I can see a number of 
situations in which your approach would result in terrible performance.  If 
you have a lot of users, and you're putting all the workload on your data 
server, all the client apps will end up spending a lot of time waiting for 
the server to do its thing.  Ordering the display of data, while it can be 
helped by the database, is really a presentation issue and IMHO the best 
place for that, by default, is the client app (i.e. do it server side only 
if there is a very good reason to do so).


If I understand you correctly, you have a java application as the client 
app, and it receives your users' ID data, from whatever source.  I'd assume, 
and hope, that you have built code to read the IDs from sources like your 
Excel spreadsheet, but that doesn't matter that much.  For the purpose of 
this exchange, it wouldn't matter if you made your clients enter the data 
manually (though IMHO it would be sadistic to make users manually enter a 
long list of values when you can as easily have the program read them from a 
file of whatever format).  The point is that you say our client-application 
should not do much more than retrieve data from the db and then present it, 
and this implies you get the IDs into your client application.  You say 
you're getting a large list of ids coming from outside our application. 
If your database is large, and your list of IDs is long, you may be taking a 
huge performance hit by making the database perform an ordering or a 
grouping, both of which would be unnecessary if you constructed a series of 
simple parameterized queries in your client app and executed them in the 
order you desire.  Whether or not this matters will depend on just how large 
your large is, how many simultaneous users there'd be, and how powerful the 
server hardware is (but I'd be quite upset if one of my developers made me 
pay more for a faster server just because he or she thought it would be sexy 
to do all the work on the server rather than the client).


Given what you've just said, if I were in your place, I'd be addressing the 
ordering issues in the client java application first, and then look at doing 
it in the database only if doing this work in the client app presented 
problems that could be addressed by doing the work on the server.  Equally 
importantly, if there is a reason to not take the obvious and simple option, 
I'd be doing some performance evaluation based on enough test data to 
reasonably simulate real world usage so that I'd have hard data on which 
option is to be preferred.



Thanks for your comments
Thorsten



You're welcome.  I hope you find this useful.

Ted

--
CappuccinoSoft Business Systems
Hamburg

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Installation on Web Server

2007-01-26 Thread Joshua D. Drake
Alan Hodgson wrote:
 Is there some (easy !) way to install just the minimum parts of
 PostgreSQL I need on the web server?
 
 In a sane distribution, you would just do something like yum install 
 php php-pgsql, and that would pull in whatever bits of PostgreSQL are 
 required (shared libraries).

I think you mean apt-get install; no sane distribution uses yum.

/me sends barbs for a distribution war out to the net.

Joshua D. Drake




-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Can you specify the pg_xlog location from a config

2007-01-26 Thread Adam Rich

You can also mount a partition as a directory under an existing drive letter, 
using the disk management utility built into Windows.  It's not as granular 
as the junction method, but there's a built-in GUI for it, and it's more widely 
known and used.



-Original Message-
From: Jeremy Haile [EMAIL PROTECTED]
To: Roman Neuhauser [EMAIL PROTECTED]; Karen Hill [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: 1/26/2007 12:43 PM
Subject: Re: [GENERAL] Can you specify the pg_xlog location from a config file?

This utility is useful for creating junctions in Windows:
http://www.microsoft.com/technet/sysinternals/FileAndDisk/Junction.mspx

I am using this to symlink my pg_xlog directory to another disk and it
works great.

Jeremy Haile


On Fri, 26 Jan 2007 18:27:04 +, Roman Neuhauser
[EMAIL PROTECTED] said:
 # [EMAIL PROTECTED] / 2007-01-26 09:21:27 -0800:
  Windows doesn't support symlinks.  Is it possible instead for there to
  be a config file that lets one set where the pg_xlog directory will sit?
 
 Windows has junction points.
 
 -- 
 How many Vietnam vets does it take to screw in a light bulb?
 You don't know, man.  You don't KNOW.
 Cause you weren't THERE. http://bash.org/?255991
 
 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] column limit

2007-01-26 Thread Bruno Wolff III
On Thu, Jan 25, 2007 at 10:47:50 -0700,
  Isaac Ben [EMAIL PROTECTED] wrote:
 
 The data is gene expression data with 20,000 dimensions. Part of the
 project I'm working on is to discover what dimensions are truly
 independent.   But to start with I need to have
 all of the data available in a master table to do analysis on.  After
 the analysis I hope to derive subsets of much lower dimensionality.

Are you actually planning to do the analysis in Postgres? This doesn't seem
like a real good fit for that kind of task. (Though I haven't played with
the R stuff, and that might be good for doing that kind of analysis.)

If you do put this in postgres, it seems the two most natural things are
to use arrays to store the dimension values or to have table with a key
of the gene and the dimension and have another column with the value of
that dimension for that gene.
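Either layout suggested above can be sketched in SQL; the table and column names here are illustrative, not from the original post.

```sql
-- Option 1: one row per gene, all expression values in an array.
-- Element i of dim_value holds the value for dimension i.
CREATE TABLE expression_array (
    gene_id   integer PRIMARY KEY,
    dim_value int2[]
);

-- Option 2: one row per (gene, dimension) pair.
CREATE TABLE expression_kv (
    gene_id   integer,
    dimension integer,
    dim_value int2,
    PRIMARY KEY (gene_id, dimension)
);
```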

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] column limit

2007-01-26 Thread Isaac Ben

On 1/26/07, Bruno Wolff III [EMAIL PROTECTED] wrote:

On Thu, Jan 25, 2007 at 10:47:50 -0700,
  Isaac Ben [EMAIL PROTECTED] wrote:

 The data is gene expression data with 20,000 dimensions. Part of the
 project I'm working on is to discover what dimensions are truly
 independent.   But to start with I need to have
 all of the data available in a master table to do analysis on.  After
 the analysis I hope to derive subsets of much lower dimensionality.

Are you actually planning to do the analysis in Postgres? This doesn't seem
like a real good fit for that kind of task. (Though I haven't played with
the R stuff, and that might be good for doing that kind of analysis.)


I plan on accessing the data in Postgres via Python and R. The main
reason for putting the data in Postgres is that Postgres handles large
data sets well and will allow me to pull subsets easily, if slowly.



If you do put this in postgres, it seems the two most natural things are
to use arrays to store the dimension values or to have table with a key
of the gene and the dimension and have another column with the value of
that dimension for that gene.


Yeah, I received a tip from someone regarding the use of arrays, and I
think that I will be using that.  Thanks for the tips.

IB

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] column limit

2007-01-26 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/26/07 13:37, Isaac Ben wrote:
 On 1/26/07, Bruno Wolff III [EMAIL PROTECTED] wrote:
 On Thu, Jan 25, 2007 at 10:47:50 -0700, Isaac Ben
 [EMAIL PROTECTED] wrote:
[snip]
 
 I plan on accessing the data with postgres via python and R. The
 main reason for putting the data in postgres is that postgres
 handles large data sets well and it will allow me to pull subsets
 easily if slowly.

I wonder if sed/grep/awk (or, just perl) could rapidly do your row
and column pre-filtering?

 If you do put this in postgres, it seems the two most natural
 things are to use arrays to store the dimension values or to
 have table with a key of the gene and the dimension and have
 another column with the value of that dimension for that gene.
 
 Yeah, I received a tip from someone regarding the use of arrays,
 and I think that I will be using that.  Thanks for the tips.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFulz7S9HxQb37XmcRArBQAKCZc1Eusg/HtsdMKs8A6z8MTT6FgACg1GuU
yOjqrCxi8CIPX3rCjrDcX6U=
=0xY2
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Attached patch seems to solve the problem on my machine at least. Uses
 pgwin32_waitforsinglesocket() instead of pgwin32_select(). Changes
 pgwin32_waitforsinglesocket() to accept the timeout as a parameter (this
 is why it touches files outside of the stats area).

Applied to HEAD and 8.2 --- assuming the Windows buildfarm machines go
green, we should probably consider back-porting this to 8.1 and 8.0.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Problem with result ordering

2007-01-26 Thread Filip Rembiałkowski

2007/1/25, Thorsten Körner [EMAIL PROTECTED]:

Hi,

when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);

I will get the same results in the same order as in the next query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in
(11042,42197,672089,26250,16279);

I wonder how it is possible to retrieve the results in the same order as
queried in the list. The listed IDs come from an application outside the
database.

Version is PostgreSQL 8.2.1

Does anyone have an idea how to do this, given that PostgreSQL knows nothing
about hints, like Oracle does?


The obvious solution is to create a temporary table like
create temp table tmp ( id serial, key integer );
then populate it with your list in order,
and then join it with your source table.

This will require some extra coding, though, either in your app or in a PL
set-returning function.

F.
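A minimal sketch of that approach, reusing the table and columns from the question (untested; the multi-row VALUES form requires 8.2):

```sql
CREATE TEMP TABLE tmp (id serial, key integer);

-- Insert the keys in the order the application wants the rows back
INSERT INTO tmp (key) VALUES (11042), (42197), (672089), (26250), (16279);

-- The serial column preserves the insertion order for the final sort
SELECT t.m_id, t.m_u_id, t.m_title, t.m_rating
FROM tablename t
JOIN tmp ON tmp.key = t.m_id
ORDER BY tmp.id;
```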

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] column limit

2007-01-26 Thread Isaac Ben

Hi,
I'm trying to create a table with 20,000 columns of type int2, but I
keep getting the error message that the limit is 1600.  According to
this message http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
it can be increased, but only up to about 6400.  Can anyone tell me
how to get 20,000 columns?

Thanks,
IB

Isaac Ben Jeppsen

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] relationship in a table

2007-01-26 Thread Kai Schlamp
Hello!

What is the best way to find out the relationship between two columns in a
table? I want to know whether it is 1:1, 1:N, or M:N. Is there an easy way
to do this with a SQL statement?

Best regards,
Kai


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Postgresql Backup

2007-01-26 Thread Alan Hodgson
On Wednesday 24 January 2007 20:57, bala [EMAIL PROTECTED] 
wrote:
 'script.sh'   contains

PGUSER=postgres  pg_dumpall > /path/to/backup_date.sql

 Running the script using crond creates the 0 byte files.

 But , If i run the script in console , it creates the file with
 content.

 Any suggestion pls?

Include the full path to pg_dumpall in the script.  
Make sure the postgres user can connect without a password or has an 
appropriate .pgpass file in its home directory.
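Putting that advice together, script.sh would look something like this; the pg_dumpall path and backup location are examples only, so adjust them for your installation:

```sh
#!/bin/sh
# Use an absolute path to pg_dumpall, since cron runs with a minimal PATH.
# Assumes the postgres user can connect without a password (.pgpass or trust).
PGUSER=postgres
export PGUSER
/usr/local/pgsql/bin/pg_dumpall > /path/to/backup_$(date +%Y%m%d).sql
```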

-- 
No matter how disastrously some government policy has turned out, anyone 
who criticizes it can expect to hear: 'But what would you replace it 
with?'  When you put out a fire, what do you replace it with?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Loop plpgsql recordset

2007-01-26 Thread Furesz Peter
Hello,

How can I loop over a PL/pgSQL recordset variable? The example:

DECLARE
v_tmp_regi RECORD;
v_tmp RECORD;
BEGIN
  SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf WHERE 
sf.termekfajta_id=
  a_termekfajta_id AND sf.marka_id=a_marka_id;

DELETE FROM sulyozas_futamido;

FOR v_tmp IN v_tmp_regi LOOP
--I would like to work here with the old recordset!
END LOOP;
^^
   -- This is not working !!!

END;


Re: [GENERAL] too many trigger records found for relation item -

2007-01-26 Thread Martijn van Oosterhout
On Fri, Jan 26, 2007 at 02:33:05PM +0100, Furesz Peter wrote:
 I had the same problem yesterday. I got this error when I tried to disable 
 the trigger in pg_catalog:
 
 UPDATE pg_catalog.pg_class SET reltriggers = 0 WHERE oid =
 'foobar'::pg_catalog.regclass;

Well duh. The error is precisely complaining about the fact that the
reltriggers field doesn't match the number of actual triggers. What
this tells you is that this is the wrong way to disable triggers.

 But if I disabling the trigger using this syntax:
 
 ALTER TABLE tablename DISABLE TRIGGER triggername
 
 everything is fine.

And this is the right way...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[GENERAL] Postgresql Backup

2007-01-26 Thread bala
Hello friends!

 I am using a PostgreSQL database, with crond for daily backups.

Following is the crontab entry,

29 17 * * 1-6  postgres  /path/to/script.sh

'script.sh'   contains

PGUSER=postgres  pg_dumpall > /path/to/backup_date.sql

Running the script via crond creates 0-byte files.

But if I run the script from a console, it creates the file with
content.

Any suggestions, please?

Regards

Bala


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Jeremy Haile
 Applied to HEAD and 8.2 --- assuming the Windows buildfarm machines go
 green, we should probably consider back-porting this to 8.1 and 8.0.

Not trying to be a nuisance, but I'd really like to try this out in my
environment and see if my problems disappear.  Is there anyone out there
who could provide me with a patched exe for Win32?  If not, I could try
to get my system setup to build for Windows, but I'm not sure what all
that would involve.

Thanks,
Jeremy Haile

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Magnus Hagander
Jeremy Haile wrote:
 Applied to HEAD and 8.2 --- assuming the Windows buildfarm machines go
 green, we should probably consider back-porting this to 8.1 and 8.0.
 
 Not trying to be a nuisance, but I'd really like to try this out in my
 environment and see if my problems disappear.  Is there anyone out there
 who could provide me with a patched exe for Win32?  If not, I could try
 to get my system setup to build for Windows, but I'm not sure what all
 that would involve.

I'll see if I can build you something tomorrow. You're on 8.2, right? Do
you use any features like Kerberos, SSL or NLS? I don't think I have
them set up properly in my mingw build env, so it'd be easier if I could
build without them.

Unless beaten by someone who has a complete env ;-)

//Magnus

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DBI support for pg native arrays?

2007-01-26 Thread Martijn van Oosterhout
On Fri, Jan 26, 2007 at 12:00:42PM +0100, Louis-David Mitterrand wrote:
 Hello,
 
 For a select array(...) as col1, col2, col3 from table query, I'd like 
 the DBI driver to return col1 as a Perl array instead of the scalar 
 {res1,res2,etc.} representation of it.
 
 Is that somehow possible? I looked at the docs without finding anything.

That would involve the DBI driver converting it, since it's just
returning what the server returns...

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


[GENERAL] Default fillfactor question (index types other than btree)

2007-01-26 Thread Tony Caduto
Does anyone know what the default fillfactor is for index types other 
than btree?


I found in the docs that the default for btree is 90, but can't seem to 
find what it is for the other index types.


Thanks in advance,

Tony

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Stats collector frozen?

2007-01-26 Thread Jeremy Haile
Using standard build (none of the things you mentioned) on 8.2.1
currently.

I really appreciate it!


On Fri, 26 Jan 2007 21:24:09 +0100, Magnus Hagander
[EMAIL PROTECTED] said:
 Jeremy Haile wrote:
  Applied to HEAD and 8.2 --- assuming the Windows buildfarm machines go
  green, we should probably consider back-porting this to 8.1 and 8.0.
  
  Not trying to be a nuisance, but I'd really like to try this out in my
  environment and see if my problems disappear.  Is there anyone out there
  who could provide me with a patched exe for Win32?  If not, I could try
  to get my system setup to build for Windows, but I'm not sure what all
  that would involve.
 
 I'll see if I can build you something tomorrow. You're on 8.2, right? Do
 you use any features like Kerberos, SSL or NLS? I don't think I have
 them set up properly in my mingw build env, so it'd be easier if I could
 build without them.
 
 Unless beaten by someone who has a complete env ;-)
 
 //Magnus

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Ayuda sobre Indices

2007-01-26 Thread julio . caicedo
Good day.

If you know how, please tell me how I can drop an index, but ONLY if it
exists. How do I check whether or not the index exists, so that I can then
drop it?

Thanks.


---
Please, I need to drop an index, but ONLY if that index exists. Thanks.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Installation on Web Server

2007-01-26 Thread Richard Hayward
I'm wanting to install PHP on a Apache/Linux server with support for
PostgreSQL.

It seems that I need to install PostgreSQL on the machine first,
before the PHP installation will work.

The way I've done this previously is simply to install PostgreSQL.
However, the database server is in fact another machine, so another
full install of it on the web server seems redundant. All the
instructions I've found assume that web and database servers are
running on the same host.

Is there some (easy !) way to install just the minimum parts of
PostgreSQL I need on the web server?

Regards

Richard

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Record not returned until index is rebuilt

2007-01-26 Thread Nik
I have the following table definition in postgres 8.0:

CREATE TABLE userInfo
(
  userID character varying(32) NOT NULL,
  password character varying(32) NOT NULL,
  salt character varying(16) NOT NULL,
  CONSTRAINT userInfo_PK PRIMARY KEY (userID)
) ;

Administrators are able to add new users to this table via PHP web
interface. It works fine most of the time. Every once in a while, we
get a problem where a newly added user does not show up in the list of
all users after being added.

For example, I will add a user named "test" to the database. When I do
SELECT * FROM userInfo
I will see that record in the database. However if I do
SELECT * FROM userInfo WHERE userID='test'
no records are returned.

This record will not show up in a query where it is referenced in the
WHERE clause until I REINDEX and VACUUM ANALYZE the table. After that,
the record shows up in both queries. Newly added users will then show up
for a while, until the same problem recurs.

Why is it that the record is visible when bulk selected, but not when
selected as a part of the WHERE clause, and why is it that the index
rebuild and vacuum fixes the problem? Is it possible that the primary
key index is not being updated properly?
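For reference, the workaround described above amounts to the following, using the table from the post. Note this only masks the symptom; it does not fix whatever is corrupting the index.

```sql
-- Rebuild all indexes on the table, then refresh planner statistics
REINDEX TABLE userInfo;
VACUUM ANALYZE userInfo;
```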

Thanks.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Sample C++ code using libpqxx/Postgresql sought

2007-01-26 Thread usenet
I am looking for some sample code using the libpqxx (C++) API for
Postgresql.  I have found some tutorials

(http://thaiopensource.org/devprojects/libpqxx/doc/2.6.8/html/Tutorial/
http://www.cs.wisc.edu/~ajkunen/libpqxx-2.4.2/Tutorial/)

and some references

(http://thaiopensource.org/devprojects/libpqxx/doc/2.6.8/html/Reference/
http://www.postgresql.org/files/documentation/pdf/7.1/programmer.pdf
http://www.postgresql.org/files/documentation/pdf/7.3/programmer-7.3.2-US.pdf)

but they all give bits and pieces of information.  I am the sort
of person who works from pre-existing sample code rather than a
programmer's manual.  Any pointers to sample code using libpqxx will be
appreciated.

Choi


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] PostgreSQL data loss

2007-01-26 Thread BluDes

Hi everyone,
 I have a problem with one of my customers.
I made a program that uses a PostgreSQL (win32) database to save its data.
My customer claims that he lost lots of data regarding his own clients 
and that those data had surely been saved on the database.
My first guess is that he is the one who deleted the data but wants to 
blame someone else, obviously I can't prove it.


Could it be possible for PostgreSQL to lose its data? Maybe with a file 
corruption? Could it be possible to restore these data?


My program does not modify or delete data, since it's more like a log that 
only adds information. It is obviously possible to delete these logs but 
it requires to answer yes to 2 different warnings, so the data can't 
be deleted accidentally.


I have other customers with even 10 times the amount of data of the one 
who claimed the loss but no problems with them.
He obviously made no backups (and claims we never told him to do them 
so we are responsible even for this) though the program has a dedicated 
Backup-section.


Any suggestion?

Daniele

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Installation on Web Server

2007-01-26 Thread Richard Hayward
Thanks for the advice, folks.
I've  just installed postgresql in its entirety, but won't run it.

Regards,

Richard

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Can you specify the pg_xlog location from a config file?

2007-01-26 Thread Bill Bartlett
The Windows Server 2003 Resource Kit and WinXP Resource Kit (both free
to download) both also provide the LINKD utility -- we use LINKD to
create junctions on most of our Windows servers (to mimic the equivalent
hard links we use on our Linux servers).

- Bill

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Jeremy Haile
 Sent: Friday, January 26, 2007 1:44 PM
 To: Roman Neuhauser; Karen Hill
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Can you specify the pg_xlog location 
 from a config file?
 
 
 This utility is useful for creating junctions in Windows: 
 http://www.microsoft.com/technet/sysinternals/FileAndDisk/Junction.mspx

I am using this to symlink my pg_xlog directory to another disk and it
works great.

Jeremy Haile


On Fri, 26 Jan 2007 18:27:04 +, Roman Neuhauser
[EMAIL PROTECTED] said:
 # [EMAIL PROTECTED] / 2007-01-26 09:21:27 -0800:
  Windows doesn't support symlinks.  Is it possible instead for there 
  to be a config file that lets one set where the pg_xlog directory 
  will sit?
 
 Windows has junction points.
 
 --
 How many Vietnam vets does it take to screw in a light bulb?
 You don't know, man.  You don't KNOW.
 Cause you weren't THERE. http://bash.org/?255991
 
 ---(end of 
 broadcast)---
 TIP 2: Don't 'kill -9' the postmaster

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL data loss

2007-01-26 Thread Bill Moran
In response to BluDes [EMAIL PROTECTED]:

 Hi everyone,
   I have a problem with one of my customers.
 I made a program that uses a PostgreSQL (win32) database to save its data.
 My customer claims that he lost lots of data regarding his own clients 
 and that those data had surely been saved on the database.
 My first guess is that he is the one who deleted the data but wants to 
 blame someone else, obviously I can't prove it.

No, you can't.  Your contract should contain language about you not
being responsible for data loss, to protect you from jerks like this.

 Could it be possible for PostgreSQL to lose its data? Maybe with a file 
 corruption? Could it be possible to restore these data?

It's possible for any program to lose data: if the hardware fails, or if the
user edits files that they shouldn't.  If the user has admin access to the
PostgreSQL box, they can cause data loss.

 My program does not modify or delete data, since it's more like a log that 
 only adds information. It is obviously possible to delete these logs but 
 it requires to answer yes to 2 different warnings, so the data can't 
 be deleted accidentally.

I've actually seen people accidentally hit yes twice when they didn't
want to.  Tell him to lay off the coffee.

 I have other customers with even 10 times the amount of data of the one 
 who claimed the loss but no problems with them.
 He obviously made no backups (and claims we never told him to do them 
 so we are responsible even for this) though the program has a dedicated 
 Backup-section.
 
 Any suggestion?

Yes.  Call your lawyer first and see what the laws in your area say
about this.  Then talk to your lawyer about making sure your
boilerplate contract covers this kind of thing and protects you from
future incidents.  Take your lawyer's advice on how to handle it.

In any event, refuse to ever do any business with him again.  In my
experience, these kinds of customers aren't worth the pennies they pay
you.  Also, refuse to give in.  If you give him anything for free, he'll
never leave you alone.  I have personal experience with these types.

I am not a lawyer ... I just play one on the Internet.

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Sample C++ code using libpqxx/Postgresql sought

2007-01-26 Thread Reid Thompson
On Fri, 2007-01-26 at 12:16 -0800, [EMAIL PROTECTED] wrote:
 I am looking for some sample code using the libpqxx (C++) API for
 Postgresql.  I have found some tutorials
 
 (http://thaiopensource.org/devprojects/libpqxx/doc/2.6.8/html/Tutorial/
 http://www.cs.wisc.edu/~ajkunen/libpqxx-2.4.2/Tutorial/)
 
 and some references
 
 (http://thaiopensource.org/devprojects/libpqxx/doc/2.6.8/html/Reference/
 http://www.postgresql.org/files/documentation/pdf/7.1/programmer.pdf
 http://www.postgresql.org/files/documentation/pdf/7.3/programmer-7.3.2-US.pdf)
 
 but they  are all giving bits and pieces of information.  I am the sort
 of person who operates on pre-existing sample code, rather than a
 programmer's manual.  Any pointers to sample code using libpqxx will be
 appreciated.
 
 Choi
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/

you might try looking through the libpqxx test cases
http://thaiopensource.org/development/libpqxx/browser/trunk/test/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Record not returned until index is rebuilt

2007-01-26 Thread Tom Lane
Nik [EMAIL PROTECTED] writes:
 I have the following table definition in postgres 8.0:

PG 8.0.what, exactly?

 Why is it that the record is visible when bulk selected, but not when
 selected as a part of the WHERE clause, and why is it that the index
 rebuild and vacuum fixes the problem?

Sounds like a corrupted index.  It seems pretty odd that you would be
getting recurring corruptions like that --- we've recently fixed some
corner case bugs causing index corruption, but they're not all that easy
to trigger (and most of the ones I remember had other symptoms than just
search misses).  How much faith have you got in your hardware platform?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL data loss

2007-01-26 Thread Scott Marlowe
On Fri, 2007-01-26 at 15:06, Bill Moran wrote:
 In response to BluDes [EMAIL PROTECTED]:

  
  Any suggestion?
 

 In any event, refuse to ever do any business with him again.  In my
 experience, these kinds of customers aren't worth the pennies they pay
 you.  Also, refuse to give in.  If you give him anything for free, he'll
 never leave you alone.  I have personal experience with these types.

What Bill said, ++


Re: [GENERAL] PostgreSQL data loss

2007-01-26 Thread Joshua D. Drake
Scott Marlowe wrote:
 On Fri, 2007-01-26 at 15:06, Bill Moran wrote:
 In response to BluDes [EMAIL PROTECTED]:
 
 Any suggestion?
 
 In any event, refuse to ever do any business with him again.  In my
 experience, these kinds of customers aren't worth the pennies they pay
 you.  Also, refuse to give in.  If you give him anything for free, he'll
 never leave you alone.  I have personal experience with these types.
 
 What Bill said, ++
To follow this up from a PostgreSQL company :).

Be direct. Tell him that any loss of data should have been covered
by backups and that such losses of data typically happen either by user
error or hardware failure.

For customer service reasons, offer him 1 hour of diagnostics for free
(assuming this is a good customer). Then you can tell him what your
findings are.

If the customer is difficult after this offering I would suggest firing
the customer.

Sincerely,

Joshua D. Drake
Command Prompt, Inc.



-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] [Fwd: [PORTS] M$ SQL server DTS package equivalent

2007-01-26 Thread Paul Lambert

Thanks all for your tips and pointers.

Looking at COPY, I think it may do just what I need. The tables I load 
the data into have the same columns in the same order as those in the 
CSV file. Loading data in this manner is going to be a rare occurrence - 
just when we install a new customer site and need to do an initial 
transfer of data from the main system before we switch on my real-time 
replication program. The programs that extract these CSV files already 
take care of duplicate key checking and so forth, so there shouldn't be 
any issues as far as data integrity checking goes, i.e. there's no 
actual data transformation, row merging or the like.


Thanks again to everyone who's offered some advice, much appreciated.

Regards,
Paul.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Predicted lifespan of different PostgreSQL branches

2007-01-26 Thread Shane Ambler

Bill Moran wrote:

I spent some time googling this and searching the postgresql.org site, but
I'm either not good enough with the search strings, or it's not to be found.

I'm trying to plan upgrades so that we don't upgrade needlessly, but also
don't get caught using stuff that nobody's supporting any more.
The FreeBSD project keeps this schedule:
http://www.freebsd.org/security/#adv
which is _really_ nice when talking to managers and similar people about
when upgrades need to be scheduled.

Does the PostgreSQL project have any similar policy about EoLs?  Even just
a simple statement like, it is our goal to support major branches for 2
years after release or some such?



There is no set time frame planned that I know of.

It is more a matter of users keeping the old versions alive. Some users with 
large datasets on busy servers that can't allocate enough downtime to 
upgrade tend to keep the older versions running.


As far as I know, there are some companies that sponsor the back-porting of 
security fixes to the 7.x releases; this is the only reason those releases 
still get updates and are still listed on the site. There is some developer 
desire to drop 7.x altogether.


v8.0 has been available for 2 years now, and a common first answer to 
support questions about anything older is to upgrade.
Anyone running PostgreSQL on Windows should be using at least 8.1 
and be encouraged to stay more up to date, as the Windows 
version is still young and less tested, and is getting more improvements 
with each release.


I would suggest that no client use anything less than 8.0, with 8.1 
preferred and 8.2 for new installs.


But as the old saying goes, if it ain't broke, don't fix it. If the 
version they have runs fine and fulfills their needs, then leave it be.



Upgrading at the same time as hardware can be a good way to go if you 
aren't interested in always having the newest version.




--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/26/07 17:28, Shane Ambler wrote:
 Bill Moran wrote:
 I spend some time googling this and searching the Postgresql.org site,
 but
 I'm either not good enough with the search strings, or it's not to be
 found.

 I'm trying to plan upgrades so that we don't upgrade needlessly, but also
 don't get caught using stuff that nobody's supporting any more.
 The FreeBSD project keeps this schedule:
 http://www.freebsd.org/security/#adv
 which is _really_ nice when talking to managers and similar people about
 when upgrades need to be scheduled.

 Does the PostgreSQL project have any similar policy about EoLs?  Even
 just
 a simple statement like, it is our goal to support major branches for 2
 years after release or some such?

 
 There is no set time frame planned that I know of.
 
 It is more a matter of users that keep the old versions alive. Some with
 large datasets on busy servers that can't allocate enough downtime to
 upgrade tend to be keeping the older versions running.

How much does the on-disk structure of *existing* tables and indexes
change between x.y versions?

Between, for example, 8.0 and 8.2?




Re: [GENERAL] Loop plpgsql recordset

2007-01-26 Thread George Weaver
On Thursday, January 25 Furesz Peter wrote:

How can I loop over a PL/pgSQL recordset variable? The example:

DECLARE
v_tmp_regi RECORD;
v_tmp RECORD;
BEGIN
  SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf WHERE 
sf.termekfajta_id=
  a_termekfajta_id AND sf.marka_id=a_marka_id;

DELETE FROM sulyozas_futamido;

FOR v_tmp IN v_tmp_regi LOOP
--I would like to work here with the old recordset!
END LOOP;
^^
   -- This is not working !!!

END;

It's difficult to determine what you're trying to accomplish in the loop, but 
you may want to refer to 37.7.4, Looping Through Query Results, in 
http://www.postgresql.org/docs/8.2/interactive/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING

Note that DELETE FROM sulyozas_futamido; will delete ALL records in 
sulyozas_futamido!

Perhaps:

FOR v_tmp IN SELECT * FROM sulyozas_futamido sf
    WHERE sf.termekfajta_id = a_termekfajta_id
      AND sf.marka_id = a_marka_id
LOOP

    DELETE FROM sulyozas_futamido WHERE (some condition related to v_tmp???);

    -- work with the old record, now in v_tmp

END LOOP;
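If the per-row work only needs the old rows before they disappear, another way to arrange the same pattern is to do the work first and issue a single DELETE afterwards. A minimal sketch, not tested against the original schema: the function name and the RAISE line are illustrative, and only the table and column names come from the original post.

```sql
CREATE OR REPLACE FUNCTION rework_sulyozas(a_termekfajta_id integer,
                                           a_marka_id integer)
RETURNS void AS $$
DECLARE
    v_tmp RECORD;
BEGIN
    -- Visit each matching row while it still exists ...
    FOR v_tmp IN SELECT * FROM sulyozas_futamido sf
                 WHERE sf.termekfajta_id = a_termekfajta_id
                   AND sf.marka_id = a_marka_id
    LOOP
        -- ... and work with the old row here.
        RAISE NOTICE 'old marka_id: %', v_tmp.marka_id;
    END LOOP;

    -- Then remove only the rows that were just processed, rather
    -- than the whole table.
    DELETE FROM sulyozas_futamido
    WHERE termekfajta_id = a_termekfajta_id
      AND marka_id = a_marka_id;
END;
$$ LANGUAGE plpgsql;
```

A plain RECORD variable holds only one row at a time, which is why the original `FOR v_tmp IN v_tmp_regi` cannot work: the loop source must be a query, not a record.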

Regards,
George



Re: [GENERAL] Ayuda sobre Indices

2007-01-26 Thread Jaime Casanova

If you want to write in Spanish, please write to
[EMAIL PROTECTED]
instead; this is an English-language list. My answer is below...

On 1/25/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Good day.

If you know how, I would like to know how to drop an index, but ONLY
if it exists. How do I check whether the index exists before dropping
it?

Gracias.



From 8.2 on you can do:

DROP INDEX IF EXISTS name;
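On servers older than 8.2, where IF EXISTS is not available, one workaround is to check the system catalogs first and only issue the DROP from the application when a row comes back. A sketch; the index name `idx_foo` and the `public` schema are hypothetical placeholders.

```sql
-- 8.2 and later: drop only if present
DROP INDEX IF EXISTS idx_foo;

-- Before 8.2: see whether the index exists ...
SELECT 1
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'i'              -- 'i' = index
  AND c.relname = 'idx_foo'
  AND n.nspname = 'public';
-- ... and issue a plain DROP INDEX idx_foo; only when a row is returned.
```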

--
regards,
Jaime Casanova

Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying
to produce bigger and better idiots.
So far, the universe is winning.
  Richard Cook



Re: [GENERAL] Speaking of upgrades...

2007-01-26 Thread Douglas McNaught
Ron Johnson [EMAIL PROTECTED] writes:

 How much does the on-disk structure of *existing* tables and indexes
 change between x.y versions?

 Between, for example, 8.0 and 8.2?

Enough to require a dump/reload in order to upgrade.  :)

Within major versions (8.2.0, 8.2.1, etc) the files are compatible so
you can do in-place upgrades (except in extraordinary circumstances).
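The dump/reload dance can be sketched as follows. The ports and the temporary file path are assumptions for illustration; the new cluster is assumed to have been initdb'd with the 8.2 binaries and started on a second port so both servers can coexist during the migration.

```shell
# Dump everything (databases, roles, tablespaces) from the old server,
# still running on its usual port:
pg_dumpall -p 5432 > /tmp/all.sql

# After installing 8.2 and starting the fresh cluster on port 5433,
# restore the dump into it:
psql -p 5433 -d postgres -f /tmp/all.sql
```

Using the *new* version's pg_dumpall against the old server is generally recommended, since newer dump tools know how to emit output the newer server will accept.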

-Doug




Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Bruce Momjian
Ron Johnson wrote:
  There is no set time frame planned that I know of.
  
  It is more a matter of users that keep the old versions alive. Some with
  large datasets on busy servers that can't allocate enough downtime to
  upgrade tend to be keeping the older versions running.
 
 How much does the on-disk structure of *existing* tables and indexes
 change between x.y versions?
 
 Between, for example, 8.0 and 8.2?

Yes:

http://www.postgresql.org/docs/8.2/static/migration.html

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Ron Johnson

On 01/26/07 20:12, Bruce Momjian wrote:
 Ron Johnson wrote:
 There is no set time frame planned that I know of.

 It is more a matter of users that keep the old versions alive. Some with
 large datasets on busy servers that can't allocate enough downtime to
 upgrade tend to be keeping the older versions running.

 How much does the on-disk structure of *existing* tables and indexes
 change between x.y versions?

 Between, for example, 8.0 and 8.2?
 
 Yes:
 
   http://www.postgresql.org/docs/8.2/static/migration.html

I was thinking of something like the release notes, but a bit more
targeted.  (I know.  diff the source.)

http://www.postgresql.org/docs/8.2/interactive/release-8-2.html

For example, I've read these release notes, and there are some index
modifications, but there don't *seem* to be any *table* structure changes.

So, in an upgrade from 8.1 to 8.2, what's really preventing pg from
letting the user:

1. Cleanly shutdown pg.
2. Install v8.2.
3. Start pg.
4. psql -c 'REINDEX DATABASE' some_db




Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Bruce Momjian
Ron Johnson wrote:
  Yes:
  
  http://www.postgresql.org/docs/8.2/static/migration.html
 
 I was thinking of something like the release notes, but a bit more
 targeted.  (I know.  diff the source.)
 
 http://www.postgresql.org/docs/8.2/interactive/release-8-2.html
 
 For example, I've read these release notes, and there are some index
 modifications, but don't *seem* to be *table* structure changes.
 
 So, in an upgrade from 8.1 to 8.2, what's really preventing pg from
 letting the user:
 
 1. Cleanly shutdown pg.
 2. Install v8.2.
 3. Start pg.
 4. psql -c 'REINDEX DATABASE' some_db

The changes are usually very subtle, like changes in the storage format
for certain data types, or bit changes in the row headers --- it isn't
really of interest to anyone except developers.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Ron Johnson

On 01/26/07 21:48, Bruce Momjian wrote:
 Ron Johnson wrote:
 Yes:

 http://www.postgresql.org/docs/8.2/static/migration.html
 I was thinking of something like the release notes, but a bit more
 targeted.  (I know.  diff the source.)

 http://www.postgresql.org/docs/8.2/interactive/release-8-2.html

 For example, I've read these release notes, and there are some index
 modifications, but don't *seem* to be *table* structure changes.

 So, in an upgrade from 8.1 to 8.2, what's really preventing pg from
 letting the user:

 1. Cleanly shutdown pg.
 2. Install v8.2.
 3. Start pg.
 4. psql -c 'REINDEX DATABASE' some_db
 
 The changes are usually very subtle, like changes in the storage format
 for certain data types, or bit changes in the row headers --- it isn't
 something really of interest except to developers.

Ah, too low-level to make it into the relnotes.

What are your plans for reducing the number of resources needed to
upgrade databases?





Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Joshua D. Drake

 1. Cleanly shutdown pg.
 2. Install v8.2.
 3. Start pg.
 4. psql -c 'REINDEX DATABASE' some_db
 The changes are usually very subtle, like changes in the storage format
 for certain data types, or bit changes in the row headers --- it isn't
 something really of interest except to developers.
 
 Ah, too low-level to make it into the relnotes.
 
 What are your plans for reducing the number of resources needed to
 upgrade databases?

There will come a time when in-place upgrades are available. In fact,
there were very few changes from 8.1 to 8.2 that made a dump/reload
necessary.

However, that being said, the best way to get it done is to either start
a project to do it, join a project to do it, or help sponsor a project
to do it.

The reality is that, as much as a few like to shout that in-place
upgrades are something that must happen, it is amazing how little work
those same people do to make it happen.

Please note that I am not speaking about you directly, just that I find
that if something needs to happen, it will. In-place upgrades haven't
happened yet.

Sincerely,

Joshua D. Drake



-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/




Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Martijn van Oosterhout
On Fri, Jan 26, 2007 at 10:16:39PM -0600, Ron Johnson wrote:
 What are your plans for reducing the number of resources needed to
 upgrade databases?

As noted, the table structure changes only slightly between versions,
if at all. What does change significantly is the catalog. Even now
there have been significant underlying changes that make the catalogs
totally incompatible. There has been work on a pg_upgrade tool which
would create a new database with the new version and then copy the data
tables over from the old version and rebuild the indexes. The idea being
that the data is large, but the underlying system catalog is fairly small.

In theory it could work, but I don't know about the status. There has
been some support from core that *if* such a pg_upgrade tool
existed, *then* they would provide some backward compatibility for the
datatypes.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] Predicted lifespan of different PostgreSQL branches

2007-01-26 Thread Tom Lane
Shane Ambler [EMAIL PROTECTED] writes:
 Bill Moran wrote:
 Does the PostgreSQL project have any similar policy about EoLs?

 There is no set time frame planned that I know of.

No, there's no agreed-on policy.  So far there's really only been one
release that we've actively decided to decommission, and that was 7.2.
Before about 7.1 or 7.2, to be frank, the code base was not solid enough
that anyone would expect long-term support; nor did we have the manpower
to consider back-patching any more than the latest release version.
So it was simply not a consideration before that.  We dropped 7.2 when
we decided it was unfixably broken --- I don't recall the specific
motivation anymore, but it was a we-can't-fix-this-without-initdb kind
of problem, and if you're gonna initdb you might as well move to a newer
release branch.

 It is more a matter of users that keep the old versions alive.

Even more to the point, a matter of developers being willing to take
the time to ensure that critical fixes are back-ported to old branches.
Right now I think the driving force here is that Red Hat is paying me
to make critical fixes for versions that are in their supported RHEL
releases, namely PG 7.3 and 7.4.  The EOLs for those RHEL versions are
still depressingly far away :-(.  The rest of core does not care at all
about 7.x, but they're willing to humor me to the extent of wrapping
tarballs as long as I keep putting the bug fixes into CVS.

There's been some idle discussion on the lists about establishing an
official project policy, perhaps five years from release, but I don't
see that as meaning anything, because in the end this is still all
driven by developers scratching their own itch (or their company's itch).
Way-back releases are going to get supported for exactly as long as
someone's willing to do the work.  And future occurrences of the 7.2
"this is unfixable" decision are certainly not impossible, and would
throw a monkey wrench into any such plan anyway.

regards, tom lane



Re: Speaking of upgrades... (was Re: [GENERAL] Predicted ...)

2007-01-26 Thread Ron Johnson

On 01/27/07 00:19, Martijn van Oosterhout wrote:
 On Fri, Jan 26, 2007 at 10:16:39PM -0600, Ron Johnson wrote:
 What are your plans for reducing the number of resources needed to
 upgrade databases?
 
 As noted, the table structure changes only slightly between versions,
 if at all. What does change significantly is the catalog. Even now
 there have been significant underlying changes to make the catalog
 totally incompatable. There has been work on a pg_upgrade tool which
 would create a new database with the new version and then copy the data
 tables from the old version and rebuild the indexes. The idea being
 that the data is large, but the underlying system is fairly small.

You mean copy just the system catalog?

 In theory it could work, but I don't know about the status. There has
 been some support from the core that *if* such a pg_upgrade tool
 existed, *then* they would provide some backward compatability for he
 datatypes.

That's how the system I use at work does upgrades.  The existing
user data formats don't change, but the system catalog does.  So,
when you convert a DB, it makes a fresh copy of the catalog, and
then migrates the old catalog to the new catalog.  15 year old
systems can be upgraded that way, with no harm to the user data.
