Re: [GENERAL] Postgresql Backup

2007-01-25 Thread Hannes Dorbath

On 25.01.2007 05:57, bala wrote:

But if I run the script in a console, it creates the file with
content.


Define $PATH, or use /usr/local/bin/pg_dumpall (or wherever it is).

--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] libeay32.dll ssleay32.dll in system PATH

2007-01-25 Thread Steven De Vriendt

Hi,

When I try to install PostgreSQL 8.2 I get this error message:

Incompatible version of OpenSSL detected in system path. When you remove
libeay32.dll & ssleay32.dll in your
system path postgres will install a newer version...

I've looked in my PATH but I can't seem to find it, can anyone help me out?


Re: [GENERAL] libeay32.dll ssleay32.dll in system PATH

2007-01-25 Thread Magnus Hagander
Check your system, system32 and winnt directories.

/Magnus


--- Original message ---
From: Steven De Vriendt [EMAIL PROTECTED]
Sent: 1-25-'07,  11:37

 Hi,
 
 When I try to install PostgreSQL 8.2 I get this error message:
 
 Incompatible version of OpenSSL detected in system path. When you remove 
 libeay32.dll & ssleay32.dll in your
 system path postgres will install a newer version...
 
 I've looked in my PATH but I can't seem to find it, can anyone help me out?
 


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] libeay32.dll ssleay32.dll in system PATH

2007-01-25 Thread Steven De Vriendt

solved :-)
thx

On 1/25/07, Magnus Hagander [EMAIL PROTECTED] wrote:


Check your system, system32 and winnt directories.

/Magnus


--- Original message ---
From: Steven De Vriendt [EMAIL PROTECTED]
Sent: 1-25-'07,  11:37

 Hi,

 When I try to install PostgreSQL 8.2 I get this error message:

 Incompatible version of OpenSSL detected in system path. When you
remove libeay32.dll & ssleay32.dll in your
 system path postgres will install a newer version...

 I've looked in my PATH but I can't seem to find it, can anyone help me
out?





[GENERAL] Postgresql 64bit question

2007-01-25 Thread Tony Caduto

Hi,
I was just wondering: if a 32-bit client connected to a 64-bit server, 
would it be possible for the 64-bit server to return an OID that was over 
4 billion to the 32-bit
client, and possibly cause a range error if the OID value was used in an 
unsigned 32-bit integer var?


Thanks,

--
Tony 



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Postgresql 64bit question

2007-01-25 Thread Peter Eisentraut
Tony Caduto wrote:
 I was just wondering: if a 32-bit client connected to a 64-bit server,
 would it be possible for the 64-bit server to return an OID that was
 over 4 billion to the 32-bit
 client, and possibly cause a range error if the OID value was used in
 an unsigned 32-bit integer var?

OIDs are always 32 bit.
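
A quick way to see this from the catalogs (a standard pg_type query, not from the original reply):

select typname, typlen from pg_type where typname = 'oid';
-- typlen is 4 (bytes) on 32-bit and 64-bit builds alike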

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] upgrading pl/pgsql triggers from 7.4 to 8.2

2007-01-25 Thread Louis-David Mitterrand
Hello,

We tried upgrading a 7.4 database to 8.2 and found many issues with the 
triggers. What are the main changes in pl/pgsql syntax or constraint 
checking between these two versions?

Thanks,

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] upgrading pl/pgsql triggers from 7.4 to 8.2

2007-01-25 Thread Terry Lee Tucker
On Thursday 25 January 2007 10:02 am, Louis-David Mitterrand 
[EMAIL PROTECTED] thus communicated:
  Hello,

  We tried upgrading a 7.4 database to 8.2 and found many issues with the
  triggers. What are the main changes in pl/pgsql syntax or constraint
  checking between these two versions?

  Thanks,

  ---(end of broadcast)---
  TIP 2: Don't 'kill -9' the postmaster

We have to do this as well so your question is of great interest to me. I hope 
you/we get lots of answers.
-- 
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812  Fax: (336) 372-6812  Cell: (336) 363-4719
[EMAIL PROTECTED]
www.turbocorp.com


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] too many trigger records found for relation item -

2007-01-25 Thread Csaba Nagy
[Update: the post didn't make it to the list, probably due to the attachment, so 
I resend it inline... and I was not able to trigger the same behavior on 8.2, 
so it might have already been fixed.]

[snip]
 Well, if you can show a reproducible test case, I'd like to look at it.

OK, I have a test case which has an ~90% success rate in triggering the
issue on my box. It is written in Java; I hope you can run it, and in any case
you'll get the idea of how to reproduce the issue.

The code is attached, and I list here some typical output run against an
8.1.3 postgres installation. The first exception is strange on its own;
it was produced after a few runs, and might be caused by another issue with
creating/dropping tables (I think I have seen this too some time ago).

I'll go and run it against 8.2 and see if the issue is still there. My
problems on the integration box turned out to be postgres logging set to
too high a level and running out of disk space due to the volume of logs...

Cheers,
Csaba.


Error executing sql: CREATE TABLE test_child_0 (a bigint primary key
references test_parent(a))
org.postgresql.util.PSQLException: ERROR: duplicate key violates unique
constraint pg_type_typname_nsp_index
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
at
com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
at
com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:91)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_0
com.domeus.trials.TestChildTableCreationIndependent$MissingTableException
at
com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:158)
at
com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_251
org.postgresql.util.PSQLException: ERROR: 2 trigger record(s) not found
for relation test_parent
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
at
com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
at
com.domeus.trials.TestChildTableCreationIndependent.access$200(TestChildTableCreationIndependent.java:12)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.doWork1(TestChildTableCreationIndependent.java:98)
at
com.domeus.trials.TestChildTableCreationIndependent$WorkerThread.run(TestChildTableCreationIndependent.java:76)
Error executing sql: DROP TABLE test_child_258
org.postgresql.util.PSQLException: ERROR: too many trigger records found
for relation test_parent
at
org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1548)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1316)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:191)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:337)
at
org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:329)
at
com.domeus.trials.TestChildTableCreationIndependent.executeQuery(TestChildTableCreationIndependent.java:155)
at

[GENERAL] SQL Newbie Question

2007-01-25 Thread Inoqulath

Hello Folks

Have a look at this Table:

CREATE TABLE foo(
id serial,
a_name text,
CONSTRAINT un_name UNIQUE (a_name));

Obviously, inserting a string twice results in an error (as one would 
expect). But: is there any known possibility to ignore an erroneous 
INSERT, like SQLite's conflict algorithm (SQLite: INSERT OR 
[IGNORE|ABORT] INTO foo [...])?
I tried to use a trigger before the INSERT takes place, but it seems that 
the constraints are checked before the trigger fires...
Background: I'd like to INSERT a few thousand lines in one transaction, 
where some values will appear twice.


thx in Advance

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] column limit

2007-01-25 Thread Isaac Ben

Hi,
I'm trying to create a table with 20,000 columns of type int2, but I
keep getting the error message that the limit is 1600.  According to
this message http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
it can be increased, but only up to about 6400.  Can anyone tell me
how to get 20,000 columns?

Thanks,
IB

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Subject: Postgres processes have a burst of CPU usage

2007-01-25 Thread Jim C. Nasby
On Tue, Jan 23, 2007 at 07:47:26AM -0800, Subramaniam Aiylam wrote:
 Hello all,
 
   I have a setup in which four client machines access
 a Postgres database (8.1.1) running on a Linux box.
 So, there are connections from each machine to the
 database; hence, the Linux box has about 2 postgres
 processes associated with each machine.
 
   I am using the JDBC driver
 (postgresql-8.1-404.jdbc3.jar) to talk to the
 database. I am also using the Spring framework(1.2.2)
 and Hibernate (3.0.5) on top of JDBC. I use Apache's

Beware of Hibernate... it likes to abuse the RDBMS...

 DBCP database connection pool (1.2.1).
 
   Now, there is one particular update that I make from
 one of the client machines - this involves a
 reasonably large object graph (from the Java point of
 view). It deletes a bunch of rows (around 20 rows in
 all) in 4-5 tables and inserts another bunch into the
 same tables.
 
   When I do this, I see a big spike in the CPU usage
 of postgres processes that are associated with ALL the
 client machines, not just the one I executed the
 delete/insert operation on. The spike seems to happen
 a second or two AFTER the original update completes
 and last for a few seconds.
 
   Is it that this operation is forcibly clearing some
 client cache on ALL the postgres processes? Why is
 there such an interdependency? Can I set some
 parameter to turn this off?

No, it wouldn't be clearing anything. Would the other connections be
updating the same data? If so, they could be waiting on locks that the
graph-updating code has acquired.

I suggest you turn on query logging and see what commands are actually
being sent to the database.
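
A quick way to check the lock theory while the spike is happening (a sketch against the stock system views; nothing here is from the original exchange):

select l.pid, l.relation::regclass as relation, l.mode, l.granted
from pg_locks l
where not l.granted;

Any rows here mean those backends are blocked waiting on a lock, not busy working.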
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] SQL Newbie Question

2007-01-25 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/25/07 09:30, Inoqulath wrote:
 Hello Folks
 
 Have a look at this Table:
 
 CREATE TABLE foo(
 id serial,
 a_name text,
 CONSTRAINT un_name UNIQUE (a_name));
 
 Obviously, inserting a string twice results in an error (as one would
 expect). But: is there any known possibility to ignore an erroneous
 INSERT, like SQLite's conflict algorithm (SQLite: INSERT OR
 [IGNORE|ABORT] INTO foo [...])?
 I tried to use a trigger before the INSERT takes place, but it seems that
 the constraints are checked before the trigger fires...
 Background: I'd like to INSERT a few thousand lines in one transaction,
 where some values will appear twice.

No.

  Unique \U*nique\, a. [F. unique; cf. It. unico; from L. unicus,
 from unus one. See {One}.]
 Being without a like or equal; unmatched; unequaled;
 unparalleled; single in kind or excellence; sole. --
 {U*niquely}, adv. -- {U*niqueness}, n.
 [1913 Webster]


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFuNAMS9HxQb37XmcRAh4XAJ99ebAGyuHTFc9+bLiuW5ewPJkIYgCgguLP
1UDAUlXSGnZrKQb4Czoqp5w=
=Wm9P
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Problem with result ordering

2007-01-25 Thread Thorsten Körner
Hi,

when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250, 
11042, 16279, 42197, 672089);

I get the same results, in the same order, as with the next query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in 
(11042,42197,672089,26250,16279);

I wonder how it is possible to retrieve the results in the same order as 
queried in the list. The listed IDs are from an application outside the 
database.

Version is PostgreSQL 8.2.1

Does anyone have an idea how to do this, given that PostgreSQL knows 
nothing about hints like Oracle does?

THX,
Thorsten

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] column limit

2007-01-25 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/25/07 09:34, Isaac Ben wrote:
 Hi,
 I'm trying to create a table with 20,000 columns of type int2, but I
 keep getting the error message that the limit is 1600.  According to
 this message
 http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
 it can be increased, but only up to about 6400.  Can anyone tell me
 how to get 20,000 columns?

Why the heck do you need 20 *thousand* columns?

Assuming, though, that you know what you're doing, and the design
isn't horribly botched, then arrays might be what you want.

Normalizing the table might be better, and vertically partitioning
it would be a big performance win if you don't need all 20
*thousand* columns at the same time.
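
A sketch of the normalized layout (all names are made up, not from the thread):

create table sample (
    sample_id serial primary key
);

create table sample_value (
    sample_id integer not null references sample,
    dimension smallint not null,   -- 1 .. 20000
    value     int2     not null,
    primary key (sample_id, dimension)
);

One row per (sample, dimension) pair sidesteps the column limit entirely, at the cost of wider joins.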

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFuNE0S9HxQb37XmcRAoxxAKCLrX0WaekNH8N8ghAzMkhgMmZ43ACg7F0K
0pqcprs/suZ/1xmK73PAdOE=
=WLdL
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] SQL Newbie Question

2007-01-25 Thread btober

Ron Johnson wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/25/07 09:30, Inoqulath wrote:
  

Hello Folks

Have a look at this Table:

CREATE TABLE foo(
id serial,
a_name text,
CONSTRAINT un_name UNIQUE (a_name));

Obviously, inserting a string twice results in an error ...is there any conflict 
algorithm (SQLite:INSERT OR
[IGNORE|ABORT] INTO foo [...])?...


No.

  Unique \U*nique\, a. [F. unique; cf. It. unico; from L. unicus,
 from unus one. See {One}.]
 Being without a like or equal; ...
  


I think he is not asking "How do I insert duplicate rows into a 
unique-constrained column?", but rather saying that he wants to have the insert 
transaction proceed successfully, ignoring the duplicates, i.e., the 
resulting inserted rows will number fewer than the original source rows 
by exactly the subset of duplicate source rows.


My suggestion would be to load the data into an unconstrained temporary 
table, then select distinct from that for insertion into your actual 
working table.
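
A minimal sketch of that approach against the foo table from the original post (the staging table name and file path are made up):

create temp table foo_load (a_name text);
copy foo_load from '/tmp/names.txt';             -- raw rows, duplicates and all
insert into foo (a_name)
    select distinct a_name from foo_load
    where a_name not in (select a_name from foo);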





---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/25/07 09:45, Thorsten Körner wrote:
 Hi,
 
 when I fire the following query:
 select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250, 
 11042, 16279, 42197, 672089);
 
 I get the same results, in the same order, as with the next query:
 select m_id, m_u_id, m_title, m_rating from tablename where m_id in 
 (11042,42197,672089,26250,16279);
 
 I wonder how it is possible to retrieve the results in the same order as 
 queried in the list. The listed IDs are from an application outside the 
 database.
 
 Version is PostgreSQL 8.2.1
 
 Does anyone have an idea how to do this, given that PostgreSQL knows 
 nothing about hints like Oracle does?

What do you mean "same order"?  The order in which they are listed in
the IN() clause?

I doubt it.  SQL is, by definition, set-oriented and the only ways
to guarantee a certain output sequence are ORDER BY and GROUP BY,
and they use collating sequences.
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFuNLmS9HxQb37XmcRAmTSAJ9mbcf8AptR4YsjdG7xBocasldfdgCdEGSz
MNjSxmx3KBP79LXRzTgQ2Qk=
=nif4
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] SQL Newbie Question

2007-01-25 Thread Ron Johnson
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/25/07 09:54, [EMAIL PROTECTED] wrote:
 Ron Johnson wrote:
 On 01/25/07 09:30, Inoqulath wrote:
[snip]
 I think he is not asking "How do I insert duplicate rows into a
 unique-constrained column?", but rather saying that he wants to have the insert
 transaction proceed successfully, ignoring the duplicates, i.e., the
 resulting inserted rows will number fewer than the original source rows
 by exactly the subset of duplicate source rows.

Ah, ok.

 My suggestion would be to load the data into an unconstrained temporary
 table, then select distinct from that for insertion into your actual
 working table.

That works on tables of a few thousand rows.  Even a few million
rows.  Doesn't scale, though.

Savepoints might be a workaround, also.

Still, pg *really* needs a "not rollback on error" mode.  Throw the
exception, let the app handle it, and keep going.

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFFuNSWS9HxQb37XmcRAvoUAJ4r7RIxj+JH9gcZNadQrQFaI/NTnwCeM6Al
ZdpFvGuV4AemAYTXbY+Vgaw=
=GxBV
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] SQL Newbie Question

2007-01-25 Thread Inoqulath

Good hint. I think that should work for me.
Thanks

(At last, now I know what unique means ;-)  )

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread Tom Lane
Thorsten Körner [EMAIL PROTECTED] writes:
 select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250, 
 11042, 16279, 42197, 672089);

 I wonder how it is possible to retrieve the results in the same order as 
 queried in the list.

You could rewrite the query as

select ... from tablename where m_id = 26250
union all
select ... from tablename where m_id = 11042
union all
select ... from tablename where m_id = 16279
union all
select ... from tablename where m_id = 42197
union all
select ... from tablename where m_id = 672089

This isn't guaranteed by the SQL spec to produce the results in any
particular order either; but there's no good reason for PG to rearrange
the order of the UNION arms, whereas there are plenty of good reasons to
try to optimize fetching of individual rows.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread A. Kretschmer
On Thu, 25 Jan 2007 at 16:45:23 +0100, Thorsten Körner wrote:
 Hi,
 
 when I fire the following query:
 select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250, 
 11042, 16279, 42197, 672089);
 
 I get the same results, in the same order, as with the next query:
 select m_id, m_u_id, m_title, m_rating from tablename where m_id in 
 (11042,42197,672089,26250,16279);
 
 I wonder how it is possible to retrieve the results in the same order as 
 queried in the list. The listed IDs are from an application outside the 
 database.

a little trick:

store your order definition and the where-condition in a separate table.

Our table:
test=*# select * from foo;
 id |  val
----+-------
  1 | text1
  2 | text2
  3 | text3
  4 | text4
  5 | text5
(5 rows)

Our sort-order:
-- col a: the sort-order
-- col b: the where-condition
test=*# select * from o;
 a | b
---+---
 1 | 3
 2 | 5
 3 | 1
(3 rows)

test=*# select foo.id, foo.val from foo join o on foo.id=o.b order by o.a;
 id |  val
----+-------
  3 | text3
  5 | text5
  1 | text1
(3 rows)

You can try it without this table, only with generate_series or so.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Jeremy Haile
Tom,

Did this information shed any light on what the problem might be?  Any
solution or workaround?

Thanks!
Jeremy Haile

On Wed, 24 Jan 2007 14:19:05 -0500, Jeremy Haile [EMAIL PROTECTED]
said:
 pgstat.stat was last updated 1/22 12:25pm - there is no pgstat.tmp.  
 
 Coincidentally (I think not) - the last auto-analyze was performed at
 2007-01-22 12:24:11.424-05.
 
 The logs for 1/22 are empty - so no errors or anything like that to give
 clues...
 
 Thanks!
 Jeremy Haile
 
 
 On Wed, 24 Jan 2007 14:00:52 -0500, Tom Lane [EMAIL PROTECTED] said:
  Jeremy Haile [EMAIL PROTECTED] writes:
   The stats collector appears to still be running, since I can see a
   postgres.exe process with -forkcol.  However, I never notice it using
   I/O or CPU usage.  Also, querying the pg_stat_user_tables view shows no
   change in table stats even for tables that change very frequently.
  
  Is $PGDATA/global/pgstat.stat getting updated?  (watch the file mod time
  at least)  Do you see any pgstat.tmp file in there?
  
  regards, tom lane
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] sequence increment jumps?

2007-01-25 Thread John Smith

guys,
i inserted 1 record into my database (default
nextval('sequencename'::regclass) where (start 1 increment 1)). then i
tried to insert 1 other record twice, but both those inserts failed
because of a domain check (ERROR: value too long for type character
varying(X)). when i was finally able to insert that record the
sequence had jumped 2 places. seems like it counted the failed inserts?

i had insert errors yesterday (ERROR: invalid input syntax for
integer; ERROR: column 'columnname' is of type date but expression is
of type integer) but they didn't cause any increment jumps. and when
i insert a record now the sequence increments just fine.

bug or nature of the beast? how to reset? btw using 8.1, got no other
sessions, no record deletes, no triggers, no rules.
jzs

http://www.postgresql.org/docs/8.1/interactive/sql-createsequence.html
http://archives.postgresql.org/pgsql-general/2001-11/msg01004.php
http://archives.postgresql.org/pgsql-admin/2002-02/msg00335.php

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Tom Lane
Jeremy Haile [EMAIL PROTECTED] writes:
 Did this information shed any light on what the problem might be?

It seems to buttress Magnus' theory that the intermittent (or not so
intermittent) stats-test buildfarm failures we've been seeing have to
do with the stats collector actually freezing up, rather than just
not reacting fast enough as most of us (or me anyway) thought.  But
why that is happening remains anyone's guess.  I don't suppose you
have debugging tools that would let you get a stack trace from the
collector process?

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Jeremy Haile
Unfortunately I don't have any debugging tools installed that would work
against postgres - although I'd be glad to do something if you could
tell me the steps involved.  I can reproduce the issue quite easily on
two different Windows machines (one is XP, the other is 2003).

Please let me know if there is anything else I can do to help debug this
problem.  

Do you know of any workaround other than restarting the whole server? 
Can the collector be restarted individually? 

Thanks,
Jeremy Haile


On Thu, 25 Jan 2007 12:42:11 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  Did this information shed any light on what the problem might be?
 
 It seems to buttress Magnus' theory that the intermittent (or not so
 intermittent) stats-test buildfarm failures we've been seeing have to
 do with the stats collector actually freezing up, rather than just
 not reacting fast enough as most of us (or me anyway) thought.  But
 why that is happening remains anyone's guess.  I don't suppose you
 have debugging tools that would let you get a stack trace from the
 collector process?
 
   regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] column limit

2007-01-25 Thread Isaac Ben

Hi,
Sorry, I forgot to post back to the list instead of just replying to
individual responders.

The data is gene expression data with 20,000 dimensions. Part of the
project I'm working on is to discover which dimensions are truly
independent.  But to start with I need to have
all of the data available in a master table to do analysis on.  After
the analysis I hope to derive subsets of much lower dimensionality.

IB

Isaac Ben Jeppsen

On 1/25/07, David Brain [EMAIL PROTECTED] wrote:

Hi,

Seeing as how no one has asked this question yet - I have to ask: why do
you need 20,000 columns? I'm sure I'm not the only one who is curious.

It's hard to think of a situation where this couldn't be solved by using
a schema with parent/child tables to have 20,000 rows per record rather
than 20,000 columns.

David.

Isaac Ben wrote:
 Hi,
 I'm trying to create a table with 20,000 columns of type int2, but I
 keep getting the error message that the limit is 1600.  According to
 this message
 http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
 it can be increased, but only up to about 6400.  Can anyone tell me
 how to get 20,000 columns?

 Thanks,
 IB

 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


--
David Brain - bandwidth.com
[EMAIL PROTECTED]
919.297.1078



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Tom Lane
Jeremy Haile [EMAIL PROTECTED] writes:
 Unfortunately I don't have any debugging tools installed that would work
 against postgres - although I'd be glad to do something if you could
 tell me the steps involved.  I can reproduce the issue quite easily on
 two different Windows machines (one is XP, the other is 2003).

Sorry, I don't know anything about Windows debugging either.  Can you
put together a test case that would let one of the Windows-using hackers
reproduce it?

 Do you know of any workaround other than restarting the whole server? 
 Can the collector be restarted individually? 

On Unix you can just 'kill -TERM' the collector process and the postmaster
will start a new one without engaging in a database panic cycle.  Dunno
what the equivalent is on Windows but it's probably possible.

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] column limit

2007-01-25 Thread Bruno Wolff III
On Thu, Jan 25, 2007 at 08:34:08 -0700,
  Isaac Ben [EMAIL PROTECTED] wrote:
 Hi,
 I'm trying to create a table with 20,000 columns of type int2, but I
 keep getting the error message that the limit is 1600.  According to
 this message http://archives.postgresql.org/pgsql-admin/2001-01/msg00199.php
 it can be increased, but only up to about 6400.  Can anyone tell me
 how to get 20,000 columns?

Can you explain what you are really trying to do? It is unlikely that using
20,000 columns is the best way to solve your problem. If we know what you are
really trying to do, we may be able to make some other suggestions.
One thing you might start looking at is using an array or arrays.
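
A sketch of that array layout (table and column names made up):

create table gene_expression (
    sample_id integer primary key,
    expr      int2[]              -- one 20,000-element vector per sample
);
insert into gene_expression values (1, array[12, 7, 42]::int2[]);
select expr[2] from gene_expression where sample_id = 1;   -- one dimension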

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Douglas McNaught
John Smith [EMAIL PROTECTED] writes:

 i had insert errors yesterday (ERROR: invalid input syntax for
 integer ERROR: column 'columnname' is of type date but expression is
 of type integer) but they didn't cause any increment jumps. and when
 i insert a record now the sequence increments just fine.

 bug or nature of the beast? how to reset? btw using 8.1, got no other
 sessions, no record deletes, no triggers, no rules.

Nature of the beast.  Sequence increments aren't rolled back on
transaction abort (for performance and concurrency reasons), so you
should expect gaps.
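
A tiny demonstration of the effect (a throwaway table, not from the original thread):

create table t (id serial, v varchar(1));
insert into t (v) values ('a');    -- gets id 1
insert into t (v) values ('xx');   -- fails the length check...
insert into t (v) values ('b');    -- ...but this row gets id 3, not 2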

-Doug

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Bruno Wolff III
On Thu, Jan 25, 2007 at 12:33:51 -0500,
  John Smith [EMAIL PROTECTED] wrote:
 guys,
 i inserted 1 record into my database (default
 nextval('sequencename'::regclass) where (start 1 increment 1)). then i
 tried to insert 1 other record twice but both those inserts failed
 because of a domain check (ERROR: value too long for type character
 varying(X)). when i was finally able to insert that record the
 sequence jumped 2 places. seems like it counted the failed inserts?

That is how sequences work. All you are guaranteed globally is that they
are unique. You can't rely on getting a sequence without gaps. Within a single
session you can get a guarantee that the values increase monotonically if you
disallow wraparound for the sequence.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread Thorsten Körner
Hi Filip,

thanks for your hint; I have tested it on a development database, and it 
worked well. 
Is there any experience of how this will affect performance on a large 
database with very high traffic?
Is it recommended to use temp tables in such an environment?

THX in advance
Thorsten

On Thursday, 25 January 2007 at 17:02, Filip Rembiałkowski wrote:
 2007/1/25, Thorsten Körner [EMAIL PROTECTED]:
  Hi,
 
  when I fire the following query:
  select m_id, m_u_id, m_title, m_rating from tablename where m_id in
  (26250, 11042, 16279, 42197, 672089);
 
  I get the same results, in the same order, as with the next query:
  select m_id, m_u_id, m_title, m_rating from tablename where m_id in
  (11042,42197,672089,26250,16279);
 
  I wonder how it is possible to retrieve the results in the same order
  as queried in the list. The listed IDs are from an application outside
  the database.
 
  Version is PostgreSQL 8.2.1
 
  Does anyone have an idea how to do this, given that PostgreSQL knows
  nothing about hints like Oracle does?

 the obvious solution is to create a temporary table like
 create temp table tmp ( id serial, key integer );
 then populate it with your list in order,
 and then join it with your source table.

 but it will require some extra coding, either in your app or in a PL
 set-returning function

 F.

-- 
CappuccinoSoft Business Systems
Hamburg

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Merlin Moncure

On 1/25/07, John Smith [EMAIL PROTECTED] wrote:

guys,
i inserted 1 record into my database (default
nextval('sequencename'::regclass) where (start 1 increment 1)). then i
tried to insert 1 other record twice but both those inserts failed
because of a domain check (ERROR: value too long for type character
varying(X)). when i was finally able to insert that record the
sequence jumped 2 places. seems like it counted the failed inserts?


if you absolutely must have gapless identifiers in your database,
follow this procedure:
http://www.varlena.com/GeneralBits/130.php

as others have stated, sequences are (much) faster than rule/trigger
based solutions and have better concurrency.
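
A sketch of the basic shape of such a scheme (table name made up; the linked article has the full treatment):

create table invoice_counter (last_value bigint not null);
insert into invoice_counter values (0);

-- inside the inserting transaction:
begin;
select last_value + 1 as next_id from invoice_counter for update;
update invoice_counter set last_value = last_value + 1;
-- use next_id for the new row; a rollback undoes the counter bump too
commit;

The FOR UPDATE row lock is what makes it gapless, and also what serializes concurrent inserts.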

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Magnus Hagander
Tom Lane wrote:
 Jeremy Haile [EMAIL PROTECTED] writes:
 Unfortunately I don't have any debugging tools installed that would work
 against postgres - although I'd be glad to do something if you could
 tell me the steps involved.  I can reproduce the issue quite easily on
 two different Windows machines (one is XP, the other is 2003).
 
 Sorry, I don't know anything about Windows debugging either.  Can you
 put together a test case that would let one of the Windows-using hackers
 reproduce it?

That would help a lot.


 Do you know of any workaround other than restarting the whole server? 
 Can the collector be restarted individually? 
 
 On Unix you can just 'kill -TERM' the collector process and the postmaster
 will start a new one without engaging in a database panic cycle.  Dunno
 what the equivalent is on Windows but it's probably possible.

You can use pg_ctl to send the INT signal. If it's completely hung, that
may not work. In that case you can kill it from Task Manager, but that's
the equivalent of a kill -9, which means that the postmaster will restart all
backends.

//Magnus

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Jeremy Haile [EMAIL PROTECTED] writes:
 Do you know of any workaround other than restarting the whole server? 
 Can the collector be restarted individually? 

 You can use pg_ctl to send the int signal. If it's completely hung, that
 may not work. In that case you can kill it from task manager, but that's
 equiv of a kill -9, which means that the postmaster will restart all
 backends.

No, the postmaster does not care how badly the stats collector croaks,
because the collector's not connected to shared memory, so there's no
risk of collateral damage.  It'll just start a new one without bothering
the backends.  So you can do whatever you have to do to kill the stuck
collector --- I was just not sure how to do that under Windows.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Jeremy Haile
 Then just pick it up in Task Manager or Process Explorer or whatever and
 kill it off. Just make sure you pick the right process.

I mentioned earlier that killing off the collector didn't work - however
I was wrong.  I just wasn't giving it enough time.  If I kill the
postgres.exe -forkcol process, it does get restarted, although
sometimes it takes a minute.

Since it only seems to update pgstat.stat once after restarting, I'd
need to kill it once a minute to keep my statistics up to date =)  So,
unfortunately it's not a great workaround to my problem.


Jeremy Haile

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread Ted Byers
The question I'd ask before offering a solution is: does the order of the 
id data matter, or is it a question of having all the results for a given id 
together before proceeding to the next id?  The answer to this will 
determine whether or not adding either a group by clause or an order by 
clause will help.


Is there a reason your client app doesn't submit a simple select for each of 
the required ids?  You'd have to do some checking to see whether it pays to 
have the ordering or grouping operation handled on the server or the client. 
Other options to consider, perhaps affecting performance and security, would 
be parameterized queries or stored procedures.


Much depends on the design and implementation of your client app.  I know, 
e.g., that in ASP.NET 2, and later, you can handle multiple resultsets from 
a single datasource, so a trivially simple SQL script that consists of the 
simplest SELECT statements might be a viable option.  But it is hard to 
advise since you don't say if you have access to or control over the source 
code for the client app or what it is written in.


In my experience, I always have to run some benchmarks for a given 
distributed application to figure out how best to distribute the workload, 
and there are always plenty of different ways to do things, with often big 
differences in performance and security.  It seems never to be trivial to 
figure this out without some testing before a final decision.  I can never 
just assume that it is best to do all the processing in the RDBMS backend to 
my apps.


HTH

Ted

- Original Message - 
From: Thorsten Körner [EMAIL PROTECTED]

To: pgsql-general@postgresql.org
Sent: Thursday, January 25, 2007 10:45 AM
Subject: [GENERAL] Problem with result ordering



Hi,

when I fire the following query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089);

I get the same results, in the same order, as with the next query:
select m_id, m_u_id, m_title, m_rating from tablename where m_id in
(11042,42197,672089,26250,16279);

I wonder how it is possible to retrieve the results in the same order as
queried in the list. The listed IDs are from an application outside the
database.

Version is PostgreSQL 8.2.1

Does anyone have an idea how to do this, given that PostgreSQL knows nothing
about hints like Oracle does?

THX,
Thorsten

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq





---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Benjamin Smith
On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
 Nature of the beast.  Sequence increments aren't rolled back on
 transaction abort (for performance and concurrency reasons), so you
 should expect gaps.

Behavior long ago noted and accounted for. But I've always wondered why this 
is so. Is there a specific reason for this behavior? 

-Ben 

-- 
The best way to predict the future is to invent it.
- XEROX PARC slogan, circa 1978

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Converting 7.x to 8.x

2007-01-25 Thread Benjamin Smith
On Tuesday 23 January 2007 13:55, Carlos wrote:
 What would be the fastest way to convert a 7.4.x database into an 8.x
 database?  A dump of the database takes over 20 hours, so we want to convert
 the database without having to do a dump and restore.

You've probably already accounted for this, but make sure you've tried your 
options for loading the database. Using the long (INSERT) form vs. COPY can make 
a *huge* performance difference. 

(Hours vs seconds, in some cases!) 
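
In sketch form, the two load styles being contrasted (table and file names made up):

insert into items values (1, 'first');   -- one statement per row:
insert into items values (2, 'second');  -- parsed and executed row by row

copy items from '/tmp/items.tsv';        -- one command streams the whole file

pg_dump can emit either form; COPY (its default) is by far the faster one to restore.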

-Ben 
-- 
The best way to predict the future is to invent it.
- XEROX PARC slogan, circa 1978

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Douglas McNaught
Benjamin Smith [EMAIL PROTECTED] writes:

 On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
 Nature of the beast.  Sequence increments aren't rolled back on
 transaction abort (for performance and concurrency reasons), so you
 should expect gaps.

 Behavior long ago noted and accounted for. But I've always wondered why this 
 was so? Is there a specific reason for this behavior? 

Being able to roll back a sequence increment would require locking the
sequence for the duration of the transaction, which would kill
concurrency.

-Doug

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Jeremy Haile
I'll try to put together a test case for hackers, although I'm not sure
what exactly causes it.  

Basically, when I fire up PostgreSQL - after about a minute the stats
collector runs once (pgstat.stat is updated, autovacuum fires up, etc.)
- and then the collector seems to hang.  If I watch its performance
information, it does not read or write to disk again and pgstat.stat is
never updated again.  It never updates pgstat.stat more than once after
a restart.  There are no errors in the log.

I tried killing the collector a variety of ways on Windows, but it never
seems to terminate.  I don't see a kill program for Windows that
lets me specify the signal to use.  So other than restarting PostgreSQL,
I'm not sure how to work around this problem.

If anyone else is experiencing similar problems, please post your
situation.

On Thu, 25 Jan 2007 12:51:31 -0500, Tom Lane [EMAIL PROTECTED] said:
 Jeremy Haile [EMAIL PROTECTED] writes:
  Unfortunately I don't have any debugging tools installed that would work
  against postgres - although I'd be glad to do something if you could
  tell me the steps involved.  I can reproduce the issue quite easily on
  two different Windows machines (one is XP, the other is 2003).
 
 Sorry, I don't know anything about Windows debugging either.  Can you
 put together a test case that would let one of the Windows-using hackers
 reproduce it?
 
  Do you know of any workaround other than restarting the whole server? 
  Can the collector be restarted individually? 
 
 On Unix you can just 'kill -TERM' the collector process and the
 postmaster
 will start a new one without engaging in a database panic cycle.  Dunno
 what the equivalent is on Windows but it's probably possible.
 
   regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] encode, lower and 0x8a

2007-01-25 Thread Michael Artz

Perhaps my understanding of the 'encode' function is incorrect, but I
was under the impression that I could do something like:

SELECT lower(encode(bytes, 'escape')) FROM mytable;

as it sounded (from the manual) like 'encode' would return valid
ASCII, with all the non-ASCII bytes escaped.  When I have the byte
0x8a, however, I get the error:

ERROR:  invalid byte sequence for encoding UTF8: 0x8a

I have the sneaking suspicion that I am missing something, so please
correct me if I am wrong.  If I am wrong, is there a better way to
lowercase all the ASCII characters in a bytea string?

Here is a simple way to recreate this:

CREATE TABLE mytable (bytes BYTEA);
INSERT INTO mytable VALUES (E'\\212');
SELECT lower(encode(bytes, 'escape')) FROM mytable;
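
One way around the encoding trap is to lowercase the bytes directly and skip text entirely. A sketch using the standard get_byte/set_byte bytea functions (the function name lower_ascii is made up, not from this thread):

CREATE OR REPLACE FUNCTION lower_ascii(b bytea) RETURNS bytea AS $$
DECLARE
    result bytea := b;
    c int;
BEGIN
    FOR i IN 0 .. length(result) - 1 LOOP
        c := get_byte(result, i);
        IF c BETWEEN 65 AND 90 THEN                  -- ASCII 'A' .. 'Z'
            result := set_byte(result, i, c + 32);   -- shift to 'a' .. 'z'
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

SELECT lower_ascii(bytes) FROM mytable;   -- non-ASCII bytes pass through untouched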

Thanks
-Mike

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] sequence increment jumps?

2007-01-25 Thread Bruce Momjian
Douglas McNaught wrote:
 Benjamin Smith [EMAIL PROTECTED] writes:
 
  On Thursday 25 January 2007 09:53, Douglas McNaught wrote:
  Nature of the beast.  Sequence increments aren't rolled back on
  transaction abort (for performance and concurrency reasons), so you
  should expect gaps.
 
  Behavior long ago noted and accounted for. But I've always wondered why 
  this 
  is so. Is there a specific reason for this behavior? 
 
 Being able to roll back a sequence increment would require locking the
 sequence for the duration of the transaction, which would kill
 concurrency.

This is an FAQ.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Problem with result ordering

2007-01-25 Thread Tommy Gildseth

Tom Lane wrote:

Thorsten Körner [EMAIL PROTECTED] writes:
  
select m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250, 
11042, 16279, 42197, 672089);


You could rewrite the query as

select ... from tablename where m_id = 26250
union all
select ... from tablename where m_id = 11042
union all
select ... from tablename where m_id = 16279
union all
select ... from tablename where m_id = 42197
union all
select ... from tablename where m_id = 672089

This isn't guaranteed by the SQL spec to produce the results in any
particular order either; but there's no good reason for PG to rearrange
the order of the UNION arms, whereas there are plenty of good reasons to
try to optimize fetching of individual rows.
  



Or a variant of this:
SELECT m_id, m_u_id, m_title, m_rating from tablename where m_id in (26250,
11042, 16279, 42197, 672089) ORDER BY m_id=26250 DESC, m_id=11042 DESC, 
m_id=16279 DESC, m_id=42197 DESC, m_id=672089 DESC;
(DESC, because boolean false sorts before true.)



--
Tommy Gildseth
http://www.gildseth.com/

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] loop plpgsql recordset variable

2007-01-25 Thread Furesz Peter

Hello,

How can I loop over a PL/pgSQL recordset variable? The example:

    DECLARE
        v_tmp_regi RECORD;
        v_tmp RECORD;
    BEGIN
        SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf
            WHERE sf.termekfajta_id = a_termekfajta_id
              AND sf.marka_id = a_marka_id;

        DELETE FROM sulyozas_futamido;

        FOR v_tmp IN v_tmp_regi LOOP
            -- I would like to work here with the old recordset!
        END LOOP;
        ^^
        -- This is not working !!!
    END;

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] replication choices

2007-01-25 Thread Ben
Hi guys. I've inherited a system that I'm looking to add replication to. 
It already has some custom replication code, but it's being nice to say 
that code is less than good. I'm hoping there's an existing project out there 
that will work much better. Unfortunately, I'm not seeing anything that 
obviously fits my needs, so maybe somebody here can suggest something.


I've got a single cluster in the datacenter and dozens of remote sites. 
Many of these sites are on unreliable connections to the internet, and 
while they're online more often than not, when their network will go down 
isn't known, and even when it's up, the network isn't that fast.


A vast majority of activity occurs at these remote sites, with very little 
at the datacenter cluster. That said, the datacenter cluster needs to keep 
pretty good copies of most (but not all) of the data at each site. 
Obviously the network unreliability puts a limit on how up to date the 
datacenter can be, but losing data is considered Bad. So, for instance, 
restoring the daily backup of each site at the datacenter is too 
infrequent.


Each site will replicate to its own schema in the datacenter cluster, so I 
don't *think* I need a multi-master solution but at the same time, 
because data will be coming from multiple sites, simply replaying WAL 
files at the datacenter won't work.


In addition, there will be some data changes made at the datacenter that 
will need to replicate to all of the remote sites as soon as they're 
online. It's ok if data being replicated from the datacenter ends up in a 
different schema at the remote sites than the schema which holds the data 
that will be replicated back to the datacenter.


My current best guess of what to do is create a global schema at every 
database, a local schema at each site, and a schema for each site at the 
datacenter. Then I can use Slony to replicate the global schema from the 
datacenter to each site, and again use Slony to replicate the local schema 
from each site to that site's schema in the datacenter. But I'm not too 
familiar with Slony, and from what I understand, using Slony with bad 
networks leads to bad problems. I'm also not sure that Slony supports 
replicating from multiple sources to the same postgres install, even if 
each replication process is writing to a different schema.


Are there any better options? Or is my Slony idea not so bad?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] replication choices

2007-01-25 Thread Bruce Momjian

Have you read the 8.2 documentation about this:

http://www.postgresql.org/docs/8.2/static/high-availability.html

---

Ben wrote:
 Hi guys. I've inherited a system that I'm looking to add replication to. 
 It already has some custom replication code, but it's being nice to say 
 that code is less than good. I'm hoping there's an existing project out there 
 that will work much better. Unfortunately, I'm not seeing anything that 
 obviously fits my needs, so maybe somebody here can suggest something.
 
 I've got a single cluster in the datacenter and dozens of remote sites. 
 Many of these sites are on unreliable connections to the internet, and 
 while they're online more often than not, when their network will go down 
 isn't known, and even when it's up, the network isn't that fast.
 
 A vast majority of activity occurs at these remote sites, with very little 
 at the datacenter cluster. That said, the datacenter cluster needs to keep 
 pretty good copies of most (but not all) of the data at each site. 
 Obviously the network unreliability puts a limit on how up to date the 
 datacenter can be, but losing data is considered Bad. So, for instance, 
 restoring the daily backup of each site at the datacenter is too 
 infrequent.
 
 Each site will replicate to its own schema in the datacenter cluster, so I 
 don't *think* I need a multi-master solution but at the same time, 
 because data will be coming from multiple sites, simply replaying WAL 
 files at the datacenter won't work.
 
 In addition, there will be some data changes made at the datacenter that 
 will need to replicate to all of the remote sites as soon as they're 
 online. It's ok if data being replicated from the datacenter ends up in a 
 different schema at the remote sites than the schema which holds the data 
 that will be replicated back to the datacenter.
 
 My current best guess of what to do is create a global schema at every 
 database, a local schema at each site, and a schema for each site at the 
 datacenter. Then I can use Slony to replicate the global schema from the 
 datacenter to each site, and again use Slony to replicate the local schema 
 from each site to that site's schema in the datacenter. But I'm not too 
 familiar with Slony, and from what I understand, using Slony with bad 
 networks leads to bad problems. I'm also not sure that Slony supports 
 replicating from multiple sources to the same postgres install, even if 
 each replication process is writing to a different schema.
 
 Are there any better options? Or is my Slony idea not so bad?
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] loop plpgsql recordset variable

2007-01-25 Thread Merlin Moncure

On 1/25/07, Furesz Peter [EMAIL PROTECTED] wrote:

How can I loop over a PL/pgSQL recordset variable? The example:

DECLARE
    v_tmp_regi RECORD;
    v_tmp RECORD;
BEGIN
    SELECT * INTO v_tmp_regi FROM sulyozas_futamido sf
        WHERE sf.termekfajta_id = a_termekfajta_id
          AND sf.marka_id = a_marka_id;

    DELETE FROM sulyozas_futamido;

    FOR v_tmp IN v_tmp_regi LOOP
        -- I would like to work here with the old recordset!
    END LOOP;
    ^^
    -- This is not working !!!
END;


I think what you want to do is this: a RECORD variable holds only a single
row, not a whole result set, so loop over the old rows first and delete
afterwards:

BEGIN
    FOR v_tmp IN SELECT * FROM sulyozas_futamido sf
        WHERE sf.termekfajta_id = a_termekfajta_id
          AND sf.marka_id = a_marka_id
    LOOP
        -- work here with each row of the old recordset
    END LOOP;

    DELETE FROM sulyozas_futamido;
END;

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Postgresql best practices

2007-01-25 Thread Robert Boone

Hello,
I was wondering if anyone could point me to any documentation on  
setting up PostgreSQL in a web hosting environment. Things like  
account management, access host management, and privilege management  
for users that are resellers and will need to administer their own  
users in PostgreSQL. Not sure if anything like this exists, but I  
would like to see it if it does.


Thanks,
Robert

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Alvaro Herrera
Jeremy Haile wrote:
 I'll try to put together a test case for hackers, although I'm not sure
 what exactly causes it.  
 
 Basically, when I fire up PostgreSQL - after about a minute the stats
 collector runs once (pgstat.stat is updated, autovacuum fires up, etc.)
  - and then the collector seems to hang.  If I watch its performance
  information, it does not read or write to disk again and pgstat.stat is
  never updated again.  It never updates pgstat.stat more than once after
  a restart.  There are no errors in the log.
 
  I tried killing the collector a variety of ways on Windows, but it never
  seems to terminate.  I don't see a kill program for Windows that
  lets me specify the signal to use.  So other than restarting PostgreSQL,
  I'm not sure how to work around this problem.
 
 If anyone else is experiencing similar problems, please post your
 situation.

All the Windows buildfarm machines are, apparently.

We verified this with Magnus.  He found that the tenk2 table does not
seem to get stat updates -- the numbers are all zero, at all times.  I
thought I had blogged about this ... oh yeah, it's here

http://www.advogato.org/person/alvherre/diary.html?start=11

AFAIR (Magnus can surely confirm) there were some other tables that
weren't showing stats as all zeros -- but there's no way to know whether
those numbers were put there before the collector had frozen (if
that's really what's happening).

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Problem loading pg_dump file

2007-01-25 Thread Mason Hale

Hello --

I'm having a problem loading a recent pg_dump of our production database.

In our environment we take a monthly snapshot of our production server and
copy that to our development server so that we have a recent batch of data
to work with.

However, when trying to load the file for this month's snapshot, we are (for
the first time) seeing a slew of errors, such as:

invalid command \N
invalid command \N
ERROR:  syntax error at or near /\n  img alt= style= at character 1
LINE 1: /\n  img alt= style=border: 0;
   ^
ERROR:  syntax error at or near padding at character 1
LINE 1: padding: 8px 0 0 0;
   ^
ERROR:  syntax error at or near height at character 1
LINE 1: height: 2px;
   ^
ERROR:  syntax error at or near font at character 1
LINE 1: font-size: 1px;
   ^
ERROR:  syntax error at or near border at character 1
LINE 1: border: 0;
   ^
ERROR:  syntax error at or near margin at character 1
LINE 1: margin: 0;
   ^
ERROR:  syntax error at or near padding at character 1
LINE 1: padding: 0;
   ^
invalid command \N
invalid command \N

The commands I'm using to create and load the dump are:

on production:
pg_dump bduprod_2 | gzip > bdu_01_21_07.gz

(transfer .gz file to development server)

on dev:
createdb -T template0 -D disk1 -e -O lss -U postgres bdu_01_21_07
gunzip -c bduprod_2-01-21-07.gz | psql bdu_01_21_07

I also unzipped the .gz file and tried to load it directly via psql:
psql bdu_01_21_07 < bduprod_2-01-21-07
with the same results.

I think I might be running into the UTF8 encoding issue mentioned in this
message:
http://archives.postgresql.org/pgsql-bugs/2006-10/msg00246.php

Both the production and dev servers are UTF8.

Obviously, beyond our monthly dev snapshots, I'm concerned about the
reliability of our production server backups.

Anyone have any ideas what the problem is? Is there a way for me to tell if
it is the UTF8 encoding problem mentioned above?
Any work arounds?

thanks in advance,
Mason


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Jeremy Haile wrote:
 If anyone else is experiencing similar problems, please post your
 situation.

 All the Windows buildfarm machines are, apparently.

Can't anyone with a debugger duplicate this and get a stack trace for
us?  If the stats collector is indeed freezing up, a stack trace showing
where it's stuck would be exceedingly helpful.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Controlling Database Growth

2007-01-25 Thread Mark Drago

Hello,

I'm using PostgreSQL to log web traffic leaving our network.  This
results in the database growing to a fairly large size.  This machine
will be left unattended and basically unmaintained for long stretches of
time so I need a way to limit the disk space that Postgres uses.  So far
I have come up with two basic ideas:

1. Routinely run 'du' in the directory containing the PostgreSQL data,
which in my case is /var/postgresql/data and when it gets to a certain
size remove a whole bunch of the old data from the database, and run
'vacuum full; reindex database db_name; analyze;'.

The problem with this is that the vacuum could take nearly an hour to
run in some cases and there will be data that needs to get logged during
this hour.  Also, the vacuum process could use disk space above what the
database is currently using and that disk space may not be available.

2. Use pgstattuple() to determine how much space is being used at any
given time and delete a bunch of old rows from the database when it is
approaching a limit.

The nice thing about this is that 'vacuum full;' does not have to be
executed in order to see the space get reclaimed.  The downside is that
running pgstattuple() is much more expensive than running 'du', so the
disk space checks can't happen as often, and they can not be run at all
during the day.

I am curious to know if anyone has any other ideas as to how I can limit
the disk space that PostgreSQL uses to say 5GB.  I have not looked in to
pg_autovacuum yet, but from what I have read about it it does not seem
to be the answer to this problem.  Has anyone else had to do such a
thing before?  Does anyone have any ideas on how to do this better?

Thanks,
Mark Drago

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Jeremy Haile
 AFAIR (Magnus can surely confirm) there were some other tables that
 weren't showing stats as all zeros -- but there's no way to know whether
 those numbers were put there before the collector had frozen (if
 that's really what's happening).

Yeah - I have numbers that updated before the stats collector started
freezing.  Do you know which version of PG this started with?  I have
upgraded 8.1.3, 8.1.4, 8.2, and 8.2.1 in the past months and I didn't
have the collector enabled until 8.2.1 - so I'm not sure how long this
has been a problem.  

I might try rolling back to a previous version - it's either that or
set up a scheduled vacuum analyze until we figure out this problem.  I'm
having to manually run it every day now... =)  I think this is a pretty
critical problem since it cripples autovacuum on Windows.

Are you guys in a position to debug the collector process and see where
it is freezing (ala Tom's earlier comment)?  Anything I can do to help
debug this problem faster?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] replication choices

2007-01-25 Thread Ben
Yes, but unless I'm missing something, it doesn't look like any of those 
options perfectly fit my situation, except perhaps Slony, which is why I'm 
leaning that direction now despite my concerns.


Is there a section of this page I should be re-reading?

On Thu, 25 Jan 2007, Bruce Momjian wrote:



Have you read the 8.2 documentation about this:

http://www.postgresql.org/docs/8.2/static/high-availability.html

---

Ben wrote:

Hi guys. I've inherited a system that I'm looking to add replication to.
It already has some custom replication code, but it's being nice to say
that code is less than good. I'm hoping there's an existing project out there
that will work much better. Unfortunately, I'm not seeing anything that
obviously fits my needs, so maybe somebody here can suggest something.

I've got a single cluster in the datacenter and dozens of remote sites.
Many of these sites are on unreliable connections to the internet, and
while they're online more often then not, when their network will go down
isn't known, and even when it's up, the network isn't that fast.

A vast majority of activity occurs at these remote sites, with very little
at the datacenter cluster. That said, the datacenter cluster needs to keep
pretty good copies of most (but not all) of the data at each site.
Obviously the network unreliability puts a limit on how up to date the
datacenter can be, but losing data is considered Bad. So, for instance,
restoring the daily backup of each site at the datacenter is too
infrequent.

Each site will replicate to its own schema in the datacenter cluster, so I
don't *think* I need a multi-master solution but at the same time,
because data will be coming from multiple sites, simply replaying WAL
files at the datacenter won't work.

In addition, there will be some data changes made at the datacenter that
will need to replicate to all of the remote sites as soon as they're
online. It's ok if data being replicated from the datacenter ends up in a
different schema at the remote sites than the schema which holds the data
that will be replicated back to the datacenter.

My current best guess of what to do is create a global schema at every
database, a local schema at each site, and a schema for each site at the
datacenter. Then I can use Slony to replicate the global schema from the
datacenter to each site, and again use Slony to replicate the local schema
from each site to that site's schema in the datacenter. But I'm not too
familiar with Slony, and from what I understand, using Slony with bad
networks leads to bad problems. I'm also not sure that Slony supports
replicating from multiple sources to the same postgres install, even if
each replication process is writing to a different schema.

Are there any better options? Or is my Slony idea not so bad?

---(end of broadcast)---
TIP 6: explain analyze is your friend


--
 Bruce Momjian   [EMAIL PROTECTED]
 EnterpriseDBhttp://www.enterprisedb.com

 + If your life is a hard drive, Christ can be your backup. +



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Stats collector frozen?

2007-01-25 Thread Magnus Hagander
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
 Jeremy Haile wrote:
 If anyone else is experiencing similar problems, please post your
 situation.
 
 All the Windows buildfarm machines are, apparently.
 
 Can't anyone with a debugger duplicate this and get a stack trace for
 us?  If the stats collector is indeed freezing up, a stack trace showing
 where it's stuck would be exceedingly helpful.

Must've been asleep when reading and writing in this thread. Didn't
realize it was the same issue as the  buildfarm-killer. Will do the
debugger+stacktrace tomorrow on my VC install.

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Controlling Database Growth

2007-01-25 Thread Bill Moran
In response to Mark Drago [EMAIL PROTECTED]:
 
 I'm using PostgreSQL to log web traffic leaving our network.  This
 results in the database growing to a fairly large size.  This machine
 will be left unattended and basically unmaintained for long stretches of
 time so I need a way to limit the disk space that Postgres uses.  So far
 I have come up with two basic ideas:
 
 1. Routinely run 'du' in the directory containing the PostgreSQL data,
 which in my case is /var/postgresql/data and when it gets to a certain
 size remove a whole bunch of the old data from the database, and run
 'vacuum full; reindex database db_name; analyze;'.
 
 The problem with this is that the vacuum could take nearly an hour to
 run in some cases and there will be data that needs to get logged during
 this hour.  Also, the vacuum process could use disk space above what the
 database is currently using and that disk space may not be available.
 
 2. Use pgstattuple() to determine how much space is being used at any
 given time and delete a bunch of old rows from the database when it is
 approaching a limit.
 
 The nice thing about this is that 'vacuum full;' does not have to be
 executed in order to see the space get reclaimed.  The downside is that
 running pgstattuple() is much more expensive than running 'du', so the
 disk space checks can't happen as often, and they can not be run at all
 during the day.
 
 I am curious to know if anyone has any other ideas as to how I can limit
 the disk space that PostgreSQL uses to say 5GB.  I have not looked in to
 pg_autovacuum yet, but from what I have read about it it does not seem
 to be the answer to this problem.  Has anyone else had to do such a
 thing before?  Does anyone have any ideas on how to do this better?

I don't think either of those are good ideas, because they both rely on
disk limits to trigger drastic changes in database size, which will then
require drastic maintenance operations (vacuum full, reindex) to clean
up.

Personally, I think you'd be a lot better off estimating how much new
data comes in each day, and scheduling a daily delete of old data combined
with regular vacuum (either via cron or using autovacuum) and an occasional
reindex.

Add to that some system monitoring via snmp traps -- maybe even graphing
with mrtg -- to keep an eye on things in case you need to adjust the
frequency or amount of stuff that's done, and you should see the database
stabilize at a manageable size.
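
For example, a minimal sketch of such a nightly job (the table name,
column name, and retention period here are all assumptions):

-- nightly_prune.sql, run from cron, e.g.:
--   15 3 * * *  psql -d weblog -f /usr/local/etc/nightly_prune.sql
DELETE FROM access_log
WHERE logged_at < now() - interval '30 days';
VACUUM ANALYZE access_log;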

-- 
Bill Moran
Collaborative Fusion Inc.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] column limit

2007-01-25 Thread Martijn van Oosterhout
On Thu, Jan 25, 2007 at 10:47:50AM -0700, Isaac Ben wrote:
 The data is gene expression data with 20,000 dimensions. Part of the
 project I'm working on is to discover what dimensions are truly
 independent.   But to start with I need to have
 all of the data available in a master table to do analysis on.  After
 the analysis I hope to derive subsets of much lower dimensionality.

Even if you managed to hack the server enough to make that work (which
is debatable) performance is going to suck. The system is simply not
set up for that kind of thing. Use either arrays, or normalize the data
into a separate table.
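
For what it's worth, a minimal sketch of the normalized layout (all
table and column names below are assumptions):

CREATE TABLE sample (
    sample_id serial PRIMARY KEY
);

CREATE TABLE expression (
    sample_id integer NOT NULL REFERENCES sample(sample_id),
    dimension integer NOT NULL,   -- 1 .. 20000
    value     double precision,
    PRIMARY KEY (sample_id, dimension)
);

-- one sample's full vector:
--   SELECT dimension, value FROM expression
--   WHERE sample_id = 1 ORDER BY dimension;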

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




[GENERAL] Duplicate key violation

2007-01-25 Thread Brian Wipf

I got a duplicate key violation when the following query was performed:

INSERT INTO category_product_visible (category_id, product_id)
SELECT  cp.category_id, cp.product_id
FROMcategory_product cp
WHERE   cp.product_id = $1 AND
not exists (
select  'x'
fromcategory_product_visible cpv
where   cpv.product_id = cp.product_id and
cpv.category_id = cp.category_id
);

This is despite the fact the insert is written to only insert rows  
that do not already exist. The second time the same query was run it  
went through okay. This makes me think there is some kind of race  
condition, which I didn't think was possible with PostgreSQL's MVCC  
implementation. I'm unable to duplicate the problem now and the error  
only occurred once in weeks of use. This is on PostgreSQL 8.2.1  
running on openSUSE Linux 10.2. Slony-I 1.2.6 is being used for  
replication to a single slave database.


I'll monitor the problem and if it recurs, I'll rebuild the primary  
key index. Perhaps the category_product_visible_pkey index was/is  
corrupted in some way.


Brian Wipf
[EMAIL PROTECTED]


The exact error was:
select process_pending_changes(); FAILED!!! Message: ERROR: duplicate
key violates unique constraint "category_product_visible_pkey"
CONTEXT: SQL statement "INSERT INTO category_product_visible
(category_id, product_id) SELECT cp.category_id, cp.product_id FROM
category_product cp WHERE cp.product_id = $1 AND not exists ( select
'x' from category_product_visible cpv where cpv.product_id =
cp.product_id and cpv.category_id = cp.category_id)" PL/pgSQL
function "insert_cpv" line 3 at SQL statement PL/pgSQL function
"process_mp_change" line 15 at assignment PL/pgSQL function
"process_pending_changes" line 13 at assignment


The insert_cpv(...) function and table definitions follow. I can  
provide any other information required.


CREATE FUNCTION insert_cpv(
my_product_id   int
) RETURNS boolean AS $$
DECLARE
BEGIN
INSERT INTO category_product_visible (category_id, product_id)
SELECT  cp.category_id, cp.product_id
FROMcategory_product cp
WHERE   cp.product_id = $1 AND
not exists (
select  'x'
fromcategory_product_visible cpv
where   cpv.product_id = cp.product_id and
cpv.category_id = cp.category_id
);
return found;
END;
$$ LANGUAGE plpgSQL;

\d category_product
     Table "public.category_product"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 category_id | integer | not null
 product_id  | integer | not null
Indexes:
    "x_category_product_pk" PRIMARY KEY, btree (category_id, product_id)
    "x_category_product__category_id_fk_idx" btree (category_id)
    "x_category_product__product_id_fk_idx" btree (product_id)
Foreign-key constraints:
    "x_category_product_category_fk" FOREIGN KEY (category_id) REFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED
    "x_category_product_product_fk" FOREIGN KEY (product_id) REFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED
Triggers:
    _ssprod_replication_cluster_logtrigger_17 AFTER INSERT OR DELETE OR UPDATE ON category_product FOR EACH ROW EXECUTE PROCEDURE _ssprod_replication_cluster.logtrigger('_ssprod_replication_cluster', '17', 'kk')
    category_product_trigger BEFORE INSERT OR DELETE ON category_product FOR EACH ROW EXECUTE PROCEDURE category_product_trigger()

\d category_product_visible
 Table "public.category_product_visible"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 category_id | integer | not null
 product_id  | integer | not null
Indexes:
    "category_product_visible_pkey" PRIMARY KEY, btree (category_id, product_id)
    "category_product_visible__product_id_fk_idx" btree (product_id)
Triggers:
    _ssprod_replication_cluster_logtrigger_18 AFTER INSERT OR DELETE OR UPDATE ON category_product_visible FOR EACH ROW EXECUTE PROCEDURE _ssprod_replication_cluster.logtrigger('_ssprod_replication_cluster', '18', 'kvkvv')



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Controlling Database Growth

2007-01-25 Thread Ron Johnson

On 01/25/07 15:43, Bill Moran wrote:
 In response to Mark Drago [EMAIL PROTECTED]:
[snip]
 I don't think either of those are good ideas, because they both
 rely on disk limits to trigger drastic changes in database size,
 which will then require drastic maintenance operations (vacuum
 full, reindex) to clean up.
 
 Personally, I think you'd be a lot better off estimating how much
 new data comes in each day, and scheduling a daily delete of old
 data combined with regular vacuum (either via cron or using
 autovacuum) and an occasional reindex.
 
 Add to that some system monitoring via snmp traps -- maybe even
 graphing with mrtg -- to keep an eye on things in case you need
 to adjust the frequency or amount of stuff that's done, and you
 should see the database stabilize at a manageable size.

Agree totally with this.

You could even partition the table (by month, probably) either using
a view of a UNION ALL or with PostgreSQL's built-in partitioning.
Dropping the oldest month would then be a rapid process.
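
A rough sketch of the inheritance-based variant, reusing the assumed
logging table and column from earlier in the thread:

CREATE TABLE access_log_2007_01 (
    CHECK (logged_at >= DATE '2007-01-01'
       AND logged_at <  DATE '2007-02-01')
) INHERITS (access_log);

-- reclaiming a month's worth of disk space is then nearly instant:
DROP TABLE access_log_2006_12;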

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Duplicate key violation

2007-01-25 Thread Tom Lane
Brian Wipf [EMAIL PROTECTED] writes:
 I got a duplicate key violation when the following query was performed:
 INSERT INTO category_product_visible (category_id, product_id)
   SELECT  cp.category_id, cp.product_id
   FROMcategory_product cp
   WHERE   cp.product_id = $1 AND
   not exists (
   select  'x'
   fromcategory_product_visible cpv
   where   cpv.product_id = cp.product_id and
   cpv.category_id = cp.category_id
   );

 This is despite the fact the insert is written to only insert rows  
 that do not already exist. The second time the same query was run it  
 went through okay. This makes me think there is some kind of race  
 condition, which I didn't think was possible with PostgreSQL's MVCC  
 implementation.

If you're doing more than one of these concurrently, then of course
there's a race condition: the NOT EXISTS is testing for nonexistence
as of the query snapshot.  If two sessions do this concurrently then
they'll try to insert the same rows and one of them is gonna fail.
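
One way to tolerate that race is to trap the error inside the function,
along these lines (a sketch, assuming plpgsql on 8.0 or later):

BEGIN
    INSERT INTO category_product_visible (category_id, product_id)
    SELECT  cp.category_id, cp.product_id
    FROM    category_product cp
    WHERE   cp.product_id = my_product_id AND
            not exists (
                select  'x'
                from    category_product_visible cpv
                where   cpv.product_id = cp.product_id and
                        cpv.category_id = cp.category_id
            );
EXCEPTION WHEN unique_violation THEN
    NULL;  -- a concurrent session inserted the same rows first
END;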

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] triggers vs b-tree

2007-01-25 Thread gustavo halperin

Hello, I have a design question:

I have a table representing Families and a table representing Persons.
The table Family has a column family_id as primary key.
The table Person has a column person_id as primary key and also contains
a column family_id.
As you can understand, the column family_id in the table ficha_person is
not unique; it is the same for all of a family's persons.

So my question is: if most of the time I need to find all the persons
of a given family, what is the best way to do that?

I can think of two methods:
1 - Making a b-tree index on ficha_person over the columns
family_id and person_id.
2 - Adding an array to the table ficha_family containing the
persons of that family, and creating a trigger that updates this array
for each person insert/delete in the table ficha_person.

So ..., what do you think? Is there a better solution, or which of the
above solutions is better?


 Thank you in advance,
  Gustavo.

Tables:

CREATE SEQUENCE ficha_person_id_seq;
CREATE TABLE ficha_person (
    person_id integer DEFAULT nextval('ficha_person_id_seq')
        CONSTRAINT the_person_id PRIMARY KEY,
    family_id integer,  -- CONSTRAINT the_family_id
    ...
) WITH OIDS;

CREATE SEQUENCE ficha_family_id_seq;
CREATE TABLE ficha_family (
    family_id integer DEFAULT nextval('ficha_family_id_seq')
        CONSTRAINT the_family_id PRIMARY KEY,
    person_id integer[],  -- optionally, instead of using a b-tree index
    ...
) WITH OIDS;
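
For reference, method 1 would look something like this (the index name
is made up):

CREATE INDEX ficha_person_family_idx
    ON ficha_person (family_id, person_id);

-- all persons of one family is then a plain index scan:
SELECT person_id FROM ficha_person WHERE family_id = 42;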



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Linking a Postgres table on Linux to Ms Access

2007-01-25 Thread carter ck

Hi all,

Is there any way that I can synchronize a table in Postgres on Linux with 
another table in Ms Access?


The requirement of the assignment is as follows:

In postgres, there is a table called message_received. Whenever we insert, 
update or edit this table, the table in Ms Access should also be updated.


Advices and suggestions are all welcomed and appreciated.

Thanks

_
Get an advanced look at the new version of Windows Live Messenger. 
http://get.live.com/messenger/overview



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Linking a Postgres table on Linux to Ms Access

2007-01-25 Thread codeWarrior
go in the other direction... Convert your table in MS Access to use a 
pass-through query to the PostgreSQL table. Connect your MS Access 
pass-through table to PostgreSQL using ODBC.

Even better: Drop MS Access completely and just use postgreSQL. Access is a 
totally inferior technology.


carter ck [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Hi all,

 Is there any way that I can synchronize a table in Postgres on Linux with 
 another table in Ms Access?

 The requirement of the assignment is as following:

 In postgres, there is a table called message_received. Whenever we insert, 
 update or edit this table, the table in Ms Access should also be updated.

 Advices and suggestions are all welcomed and appreciated.

 Thanks

 _
 Get an advanced look at the new version of Windows Live Messenger. 
 http://get.live.com/messenger/overview


 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match
 



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Linking a Postgres table on Linux to Ms Access

2007-01-25 Thread Joshua D. Drake
codeWarrior wrote:
 go in the other direction... Convert your table in MS Access to use a 
 pass-through query to the postgreSQL table. Connect yoour MS Access pass 
 through table to postgreSQL using OBBC.
 
 Even better: Drop MS Access completely and just use postgreSQL. Access is a 
 totally inferior technology.

Not for a front end it isn't :)

Link Table is what this guy is looking for.

Joshua D. Drake


 
 
 carter ck [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 Hi all,

 Is there any way that I can synchronize a table in Postgres on Linux with 
 another table in Ms Access?

 The requirement of the assignment is as following:

 In postgres, there is a table called message_received. Whenever we insert, 
 update or edit this table, the table in Ms Access should also be updated.

 Advices and suggestions are all welcomed and appreciated.

 Thanks

 _
 Get an advanced look at the new version of Windows Live Messenger. 
 http://get.live.com/messenger/overview


 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match

 
 
 
 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?
 
http://archives.postgresql.org/
 


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Duplicate key violation

2007-01-25 Thread Adam Rich

Sounds like you'll either need an explicit LOCK TABLE
command, set your transaction isolation to serializable,
or use advisory locking.

http://www.postgresql.org/docs/8.2/interactive/explicit-locking.html#LOCKING-TABLES
http://www.postgresql.org/docs/8.2/interactive/transaction-iso.html#XACT-SERIALIZABLE
http://www.postgresql.org/docs/8.2/interactive/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS
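
For instance, a sketch of the advisory-lock route inside a plpgsql
function like the original insert_cpv (keying the lock on the product
id is just one possible choice):

PERFORM pg_advisory_lock(my_product_id);
INSERT INTO category_product_visible (category_id, product_id)
SELECT  cp.category_id, cp.product_id
FROM    category_product cp
WHERE   cp.product_id = my_product_id AND
        not exists (
            select  'x'
            from    category_product_visible cpv
            where   cpv.product_id = cp.product_id and
                    cpv.category_id = cp.category_id
        );
PERFORM pg_advisory_unlock(my_product_id);

Note that an error between the two calls leaves the lock held until the
session ends, so the error-trapping approach may be simpler.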



-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane
Sent: Thursday, January 25, 2007 6:21 PM
To: Brian Wipf
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Duplicate key violation 


Brian Wipf [EMAIL PROTECTED] writes:
 I got a duplicate key violation when the following query was
performed:
 INSERT INTO category_product_visible (category_id, product_id)
   SELECT  cp.category_id, cp.product_id
   FROMcategory_product cp
   WHERE   cp.product_id = $1 AND
   not exists (
   select  'x'
   fromcategory_product_visible cpv
   where   cpv.product_id = cp.product_id
and
   cpv.category_id = cp.category_id
   );

 This is despite the fact the insert is written to only insert rows  
 that do not already exist. The second time the same query was run it  
 went through okay. This makes me think there is some kind of race  
 condition, which I didn't think was possible with PostgreSQL's MVCC  
 implementation.

If you're doing more than one of these concurrently, then of course
there's a race condition: the NOT EXISTS is testing for nonexistence
as of the query snapshot.  If two sessions do this concurrently then
they'll try to insert the same rows and one of them is gonna fail.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] bytea performance issue

2007-01-25 Thread brian stone
I have to store binary data in a table, ranging from 512K - 1M.  I am getting 
very poor performance when inserting this data.

create table my_stuff (data bytea);

I then try to insert 10 1M blobs into this table using PQexecParams from C.  It 
takes ~10 seconds to insert the 10 records. 

The test is being performed locally so this is not a network issue.  If I 
change the data I send from the client to 1M worth of text, and change the 
'my_stuff.data' to TEXT, the same test takes ~2 seconds.  Has anyone else seen 
this performance issue with bytea?

I looked at large objects but that is limited to an Oid and I will need more 
than 4 billion unique identifiers.  Maybe not 4 billion objects at once, but I 
will burn through them quickly.

I looked at the byteain and byteaout functions in the postgresql source and 
there is quite a bit of processing going on - escape stuff.  Although, that is 
only for the text-to-internal conversion, correct?  If I use PQexecParams, setting 
the format to binary, the backend should use the recv/send functions ... right?

I don't need this escaping; this information never needs to be in text form.  
Is it possible to create a user-defined type that's only binary?  It looks like 
I am forced to defined an input and output function.  The input function takes 
a CString.

Any other solutions for getting binary data into postgresql?  Suggestions?

skye


 

Re: [GENERAL] Problem loading pg_dump file

2007-01-25 Thread Tom Lane
Mason Hale [EMAIL PROTECTED] writes:
 I'm having a problem loading a recent pg_dump of our production database.

 However, when trying to load the file for this month's snapshot, we are (for
 the first time) seeing a slew of errors, such as:

 invalid command \N
 invalid command \N
 ERROR:  syntax error at or near /\n  img alt= style= at character 1
 LINE 1: /\n  img alt= style=border: 0;
 ^

You need to look at the very first error, and ignore the slew following
it.  What seems to have happened here is that an error in the COPY command
caused psql to fall out of copy mode (or perhaps never enter it in the
first place) and start trying to treat lines of COPY data as SQL
commands.  So, tons of noise.  What was the first error?
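
With psql driving the reload as shown earlier, passing -v ON_ERROR_STOP=1
makes psql quit at the first failing statement, so the first real error
is also the last thing printed:

gunzip -c bduprod_2-01-21-07.gz | psql -v ON_ERROR_STOP=1 bdu_01_21_07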

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] DBLink contrib used for replication

2007-01-25 Thread Iannsp

hello all,
I'd like to know what you think about using dblink to construct serious 
synchronous and asynchronous replication.
I'm working with this idea only as a test, and I think it is possible or 
almost possible. I don't know the performance over long distances, but within 
the same network, like inside one company, the performance is good.


thanks in advance,

--
Ivo Nascimento
Iann tech - Developing solutions with performance and security
http://www.ianntech.com.br


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] bytea performance issue

2007-01-25 Thread Tom Lane
brian stone [EMAIL PROTECTED] writes:
 I have to store binary data in a table, ranging from 512K - 1M.  I am getting 
 very poor performance when inserting this data.
 create table my_stuff (data bytea);
 I then try to insert 10 1M blobs into this table using PQexecParams from C.  
 It takes ~10 seconds to insert the 10 records. 
 The test is being performed locally so this is not a network issue.  If I 
 change the data I send from the client to 1M worth of text, and change the 
 'my_stuff.data' to TEXT, the same test takes ~2 seconds.  Has anyone else 
 seen this performance issue with bytea?

How are you transmitting the data exactly?  Have you tried using
oprofile or some such to identify the culprit?

It does sound like escaping could be the issue, except that if you're
sending binary parameters as your message suggests (but doesn't actually
say) then there shouldn't be any escape processing going on.

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] bytea performance issue

2007-01-25 Thread brian stone
I have not tried profiling yet; I am no pro at that.

output of SELECT version()
PostgreSQL 8.2rc1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 
20060404 (Red Hat 3.4.6-3)

This is the test program.  I run it on the same machine as the postmaster.  I 
am not sure, but I would assume it uses Unix sockets rather than TCP.

// CREATE TABLE my_stuff (data bytea);

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
  int i;
  PGconn *conn;
  const char *data[1];
  int datal[1] = { 1024*1024 };   /* parameter length, in bytes */
  int data_format[1] = { 1 };     /* 1 = binary parameter format */
  PGresult *res;

  conn = PQsetdb(NULL, NULL, NULL, NULL, "testdb");
  if (PQstatus(conn) != CONNECTION_OK)
  {
    printf("failed to connect to 'testdb'\n");
    return 1;
  }

  data[0] = malloc(datal[0]);
  for (i = 0; i < 10; i++)
  {
    res = PQexecParams(
      conn,
      "INSERT INTO my_stuff (data) VALUES ($1)",
      1,            /* one parameter */
      NULL,         /* let the server infer the parameter type */
      data,         /* parameter values */
      datal,        /* parameter lengths */
      data_format,  /* parameter formats */
      1);           /* request binary results */

    if (res)
    {
      printf("%s\n", PQresultErrorMessage(res));
      PQclear(res);
    }
  }

  PQfinish(conn);
  return 0;
}

gcc -I/usr/local/pgsql/include -o bytea_test bytea_test.c -lpq -lcrypt

Output of `time ./bytea_test`:
Error: 
Error: 
Error: 
Error: 
Error: 
Error: 
Error: 
Error: 
Error: 
Error: 

real0m9.300s
user0m0.013s
sys 0m0.010s

Thanks,
skye


Tom Lane [EMAIL PROTECTED] wrote: brian stone  writes:
 I have to store binary data in a table, ranging from 512K - 1M.  I am getting 
 very poor performance when inserting this data.
 create table my_stuff (data bytea);
 I then try to insert 10 1M blobs into this table using PQexecParams from C.  
 It takes ~10 seconds to insert the 10 records. 
 The test is being performed locally so this is not a network issue.  If I 
 change the data I send from the client to 1M worth of text, and change the 
 'my_stuff.data' to TEXT, the same test takes ~2 seconds.  Has anyone else 
 seen this performance issue with bytea?

How are you transmitting the data exactly?  Have you tried using
oprofile or some such to identify the culprit?

It does sound like escaping could be the issue, except that if you're
sending binary parameters as your message suggests (but doesn't actually
say) then there shouldn't be any escape processing going on.

   regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


 
-
8:00? 8:25? 8:40?  Find a flick in no time
 with theYahoo! Search movie showtime shortcut.

Re: [GENERAL] bytea performance issue

2007-01-25 Thread brian stone
So there is no confusion as to why my output has 10 lines that say "Error: ", 
the pg error printf line should read:

printf("Error: %s\n", PQresultErrorMessage(res));

skye



Re: [GENERAL] DBLink contrib used for replication

2007-01-25 Thread Tom Lane
Iannsp [EMAIL PROTECTED] writes:
 I like to know what you think about using dblink to construct serious 
 syncronous and asyncronous replication.

I think it'd be a lot of work and at the end of the day you'd pretty
much have reinvented Slony-I ... why not just use slony?

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Converting 7.x to 8.x

2007-01-25 Thread Jim Nasby

On Jan 25, 2007, at 12:47 PM, Benjamin Smith wrote:

On Tuesday 23 January 2007 13:55, Carlos wrote:

What would be the faster way to convert a 7.4.x database into an 8.x
database?  A dump of the database takes over 20 hours so we want  
to convert

the database without having to do a dump and restore.


You've probably already accounted for this, but make sure you've  
tried your
options for loading the database. Using long (insert) form vs  
copy can make

a *huge* performance difference.


In case no one's mentioned it already, you can also perform this  
migration using Slony, by making the 7.4 database the master, and  
replicating to an 8.x database.

--
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] too many trigger records found for relation item -

2007-01-25 Thread Tom Lane
Csaba Nagy [EMAIL PROTECTED] writes:
 Well, if you can show a reproducible test case, I'd like to look at it.

 OK, I have a test case which has ~ 90% success rate in triggering the
 issue on my box. It is written in Java, hope you can run it, in any case
 you'll get the idea how to reproduce the issue.

Hm, well the trigger-related complaints are pretty obviously from a
known race condition: pre-8.2 we'd read the pg_class row for a table
before obtaining any lock on the table.  So if someone else was
concurrently adding or deleting triggers then the value of
pg_class.reltriggers could be wrong by the time we'd managed to acquire
any lock.  I believe this is fixed as of 8.2 --- can you duplicate it
there?  (No, backpatching the fix is not practical.)

 The code is attached, and I list here some typical output run against an
 8.1.3 postgres installation. The first exception is strange on its own;
 it was produced after a few runs, and might be caused by another issue with
 creating/dropping tables (I think I have seen this too some time ago).

How sure are you about that uninterlocked getChildTableName() thing?
It's possible to get a failure complaining about duplicate type name
instead of duplicate relation name during CREATE TABLE, if the timing
is just right.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match