Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread Albe Laurenz
 As per postgres docs, 'Select for update' is used to obtain row-level 
 locks, whereas 'lock table' is used to obtain table-level locks.
 
 Under the serializable isolation level, select for update gives an error if 
 the rows selected have been modified concurrently,
 but 'lock table' does not give such an error even though some of the rows
 in the table are modified by a concurrent transaction.
 
 Is this the expected behavior?

LOCK TABLE should never give you an error, except for a deadlock
resolution error.

LOCK TABLE will just wait until there is no lock on the table that is
incompatible with the requested lock, then it will obtain the lock and
return.

LOCK TABLE does not modify tables or rows and so you cannot get a
serialization error, which is only issued when you run serializable
and try to modify a row that is newer than your transaction begin time.

On the other hand, LOCK TABLE will not necessarily prevent you from
subsequently receiving serialization errors if you do not request
an exclusive lock on the table.
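
For illustration, a minimal sketch (table and column names are invented):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE account IN ACCESS EXCLUSIVE MODE;  -- waits for concurrent writers
UPDATE account SET bal = bal - 100 WHERE accno = 129;
COMMIT;

With a weaker lock mode (say ROW SHARE), a concurrent writer could still
commit a newer version of the row first, and the UPDATE could then fail
with a serialization error.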

Does that answer your questions?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgresql 8.2.4 crash with tsearch2

2007-05-22 Thread Philippe Amelant
hi,
OK, it's my mistake: I forgot to add -i for the gendict config.sh.


regards

On Monday, 21 May 2007 at 19:32 +0200, Philippe Amelant wrote:
 hi,
 I have compiled postgresql 8.2.4 on a Debian etch with the French snowball
 stemmer.
 I applied the latest patch sent by Teodor Sigaev
 (http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/tsearch_snowball_82-20070504.gz)
 
 and my backend still crashes.
 I tested on 2 different servers, both with etch, one with i386 the other
 with amd64.
 The first crashes on ts_vector when the parameter string is longer than
 200 characters.
 The second crashes on lexize.
 If you have another patch to try, I will do it :)
 
 If for you there is no bug, I don't understand what's happening on my
 servers. Any idea?
 
 Thanks
 
 Regards
 
 
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Postgresql 8.2.4 crash with tsearch2

2007-05-22 Thread Philippe Amelant
Hmm, not enough tests before sending this mail. The first request

select lexize('fr','chose');

works, but the server crashes on the second request (the same one).


On Tuesday, 22 May 2007 at 12:16 +0200, Philippe Amelant wrote:
 hi,
 OK, it's my mistake: I forgot to add -i for the gendict config.sh.
 
 
 regards
 
 On Monday, 21 May 2007 at 19:32 +0200, Philippe Amelant wrote:
  hi,
  I have compiled postgresql 8.2.4 on a Debian etch with the French snowball
  stemmer.
  I applied the latest patch sent by Teodor Sigaev
  (http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/tsearch_snowball_82-20070504.gz)
  
  and my backend still crashes.
  I tested on 2 different servers, both with etch, one with i386 the other
  with amd64.
  The first crashes on ts_vector when the parameter string is longer than
  200 characters.
  The second crashes on lexize.
  If you have another patch to try, I will do it :)
  
  If for you there is no bug, I don't understand what's happening on my
  servers. Any idea?
  
  Thanks
  
  Regards
  
  
  
  ---(end of broadcast)---
  TIP 9: In versions below 8.0, the planner will ignore your desire to
 choose an index scan if your joining column's datatypes do not
 match
 
 ---(end of broadcast)---
 TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread sudhir

Thanks Laurenz for quick reply.

If this is the expected behavior, then isn't 'Lock table' just an extra 
performance penalty that achieves nothing under the serializable isolation level?


The serializable isolation level in postgres is in fact snapshot isolation.
Suppose a transaction T is using 'lock table' on table A and then 
querying it.

Here T will be blocked until all conflicting locks on A are released.
When there are no conflicting locks on A, T will go ahead and read data 
from the snapshot taken at T's start.


So, in short, 'Lock Table' just delays transaction T's query.


LOCK TABLE should never give you an error, except for a deadlock
resolution error.

LOCK TABLE will just wait until there is no lock on the table that is
incompatible with the requested lock, then it will obtain the lock and
return.

LOCK TABLE does not modify tables or rows and so you cannot get a
serialization error, which is only issued when you run serializable
and try to modify a row that is newer than your transaction begin time.

On the other hand, LOCK TABLE will not necessarily prevent you from
subsequently receiving serialization errors if you do not request
an exclusive lock on the table.

Does that answer your questions?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 6: explain analyze is your friend

  


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] default db

2007-05-22 Thread Sandro Dentella
Hi,

  I see that the default encoding for a db is taken from template1. 

  I'm using debian and I don't understand how to create template1 on
  installation of postgresql with a preferred encoding.

  Is it safe to just dump/restore template1 with different encoding?

  TIA
  *:-)


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] default db

2007-05-22 Thread John D. Burger

Sandro Dentella wrote:


  I'm using debian and I don't understand how to create template1 on
  installation of postgresql with a preferred encoding.

  Is it safe to just dump/restore template1 with different encoding?


Or drop template1, and then recreate it from template0 with the  
defaults you want.  I gather that's the point of template0.  See the  
Fine Manual:


http://www.postgresql.org/docs/8.1/interactive/manage-ag-templatedbs.html
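
A sketch of that recipe (run as a superuser while connected to some other
database, e.g. postgres; the pg_database updates are needed because a
database flagged as a template cannot be dropped):

UPDATE pg_database SET datistemplate = false WHERE datname = 'template1';
DROP DATABASE template1;
CREATE DATABASE template1 WITH TEMPLATE template0 ENCODING 'UTF8';  -- your preferred encoding
UPDATE pg_database SET datistemplate = true WHERE datname = 'template1';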


- John D. Burger
  MITRE



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] default db

2007-05-22 Thread Hannes Dorbath

On 22.05.2007 14:02, Sandro Dentella wrote:
  I see that the default encoding for a db is taken from template1. 


  I'm using debian and I don't understand how to create template1 on
  installation of postgresql with a preferred encoding.

  Is it safe to just dump/restore template1 with different encoding?


Default encoding is taken from the cluster locale. Debian sure has some 
fancy wrapper scripts for that, but


initdb -D $PGDATA --locale='en_US.utf8'

is the command to create a new cluster with another locale. Choose one 
from locale -a.



--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread Albe Laurenz
sudhir wrote:
 LOCK TABLE should never give you an error, except for a deadlock
 resolution error.

 LOCK TABLE will just wait until there is no lock on the table that is
 incompatible with the requested lock, then it will obtain the lock and
 return.

 LOCK TABLE does not modify tables or rows and so you cannot get a
 serialization error, which is only issued when you run serializable
 and try to modify a row that is newer than your transaction 
 begin time.

 On the other hand, LOCK TABLE will not necessarily prevent you from
 subsequently receiving serialization errors if you do not request
 an exclusive lock on the table.

 If this is the expected behavior, then isn't 'Lock table' just an extra
 performance penalty that achieves nothing under the serializable
 isolation level?
 
 The serializable isolation level in postgres is in fact snapshot
 isolation.
 Suppose a transaction T is using 'lock table' on table A and then 
 querying it.
 Here T will be blocked until all conflicting locks on A are released.
 When there are no conflicting locks on A, T will go ahead and read
 data from the snapshot taken at T's start.
 
 So, in short, 'Lock Table' just delays transaction T's query.

I think that you still do not understand it completely.

Consider these two cases:

Case a)

Session 1 starts a serializable transaction T.
The first statement in transaction T will mark the time at which
the 'snapshot' that you mention above is 'taken'. Let's call this
time t1.

At a time t2 > t1, Session 2 updates a row on table r.

At t3 > t2, Session 1 tries to update the same row in table r.
Session 1 will fail with a serialization error.

Case b)

Session 1 starts a serializable transaction T.
The first statement in transaction T is 'LOCK TABLE r'. The statement
returns at time t1 which is the 'snapshot' time for transaction T.

At time t2 > t1, Session 2 tries to modify a row in table r.
Session 2 will have to wait until transaction T is completed, because
it cannot get a shared lock on the table.

At any time > t1, Session 1 can update the same row in table r
without receiving an error.


You see, there is a difference. In case a) the serializable transaction
will very likely fail if there are many concurrent changes on the table.
In case b), the serializable transaction will always succeed, while
all concurrent updates must wait.
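
As a sketch of case b) in SQL (table r as above; column names invented):

-- Session 1:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE r IN ACCESS EXCLUSIVE MODE;      -- returns at t1

-- Session 2, at t2 > t1:
UPDATE r SET val = val + 1 WHERE id = 1;    -- blocks: the table lock is held

-- Session 1, at any time > t1:
UPDATE r SET val = val - 1 WHERE id = 1;    -- succeeds, no serialization error
COMMIT;                                     -- only now does Session 2 proceed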

Does that make sense to you?

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Problem with pg_dump

2007-05-22 Thread znahor-news

Hi!

I've got a problem with dumping the database on the slave server.

Situation:

Master:
Running CentOS release 4.4 (Final) with PostgreSQL 8.2.4.

Slave:
Running CentOS release 4.3 (Final) with PostgreSQL 8.2.4 and slon 
version 1.2.9


pg_dump on master works fine. On the slave I've got an error at the start of 
dumping:


-bash-3.00$ pg_dump -d -D my_database
pg_dump: failed sanity check, parent table OID 225893092 of 
pg_rewrite entry OID 225893271 not found


Any ideas to solve this problem?

Best regards,

Piotr Konieczny


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Problem with pg_dump

2007-05-22 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hi!

I've got a problem with dumping the database on the slave server.

Situation:

Master:
Running CentOS release 4.4 (Final) with PostgreSQL 8.2.4.

Slave:
Running CentOS release 4.3 (Final) with PostgreSQL 8.2.4 and slon 
version 1.2.9


pg_dump on master works fine. On the slave I've got an error at the start of dumping:

-bash-3.00$ pg_dump -d -D my_database
pg_dump: failed sanity check, parent table OID 225893092 of pg_rewrite 
entry OID 225893271 not found


Any ideas to solve this problem?


Do you have such a table, and what does the rule (pg_rewrite) say?

SELECT OID,* FROM pg_class WHERE OID = ...
SELECT OID,* FROM pg_rewrite WHERE OID = ...

Have you made any schema changes that slony might not have handled? You 
didn't apply schema changes without putting them through slony, did you?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread Tom Lane
Albe Laurenz [EMAIL PROTECTED] writes:
 You see, there is a difference. In case a) the serializable transaction
 will very likely fail if there are many concurrent changes on the table.
 In case b), the serializable transaction will always succeed, while
 all concurrent updates must wait.

The critical point here is that LOCK TABLE commands at the start of a
serializable transaction are performed *before* taking the transaction's
snapshot (the snap happens at the first regular DML command).  They not
only protect against post-snap changes as shown by Albe's example, but
against uncommitted changes that were made before transaction start
(by making the serializable xact wait until those changes are committed
or aborted before it takes its snap).
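
A sketch of that ordering (using a hypothetical account table):

-- Session 1 starts first and leaves an uncommitted change:
BEGIN;
UPDATE account SET bal = bal - 100 WHERE accno = 129;

-- Session 2:
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE account;    -- waits here, *before* any snapshot is taken

-- Session 1:
COMMIT;

-- Session 2 resumes; its snapshot is taken at its first regular DML command,
-- so it sees Session 1's committed change:
SELECT bal FROM account WHERE accno = 129;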

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
I posted an issue I was having with a migration from MS SQL server to
pgsql over the weekend. Tom's suggestion for the query I posted was
right on, I made the appropriate updates to column types by dumping,
changing and restoring the database. I then analyze'd the db and my
query performance was equal to that of MS SQL. This is my first
migration and also my first time trying to spot performance issues,
mainly because all the previous db's I've worked with were built from
scratch, never an issue with performance, but never worked with so much
data either (not sure if that has anything to do with my issues).

I have developed a view in pgsql that takes over 160K ms to execute, but
when copied into MS SQL against the old database (with syntax mods of
course), runs in a few seconds. Seems the issues are with tblcontactinfo
and tblclientactivitytag. Only if I remove all references to *both*
tables do I get good performance from the query. Thanks for any help!

SELECT tblclientmaster.fldclientnumber, tblclientmaster.fldclientname, 
tblclientmaster.fldclienttype, tblclientmaster.fldbuyingstatus, 
tblclientmaster.fldsellingstatus, tblclientproductpreference.fldfullservice, 
tblclientproductpreference.fldlimitedservice, 
tblclientproductpreference.fldallsuite, tblclientproductpreference.fldbudget, 
tblclientproductpreference.fldconference, tblclientproductpreference.fldresort, 
tblclientproductpreference.flddailyfee, 
tblclientproductpreference.fldsemiprivate, 
tblclientproductpreference.fldprivate, tblclientproductpreference.fldmunicipal, 
tblclientroomsize.fldsize149, tblclientroomsize.fldsize299, 
tblclientroomsize.fldsize449, tblclientroomsize.fldsize599, 
tblclientroomsize.fldsize600, tblgeopreference.fldsw, tblgeopreference.fldnw, 
tblgeopreference.fldmw, tblgeopreference.fldw, tblgeopreference.fldma, 
tblgeopreference.fldse, tblgeopreference.flds, tblgeopreference.fldne, 
tblproductmaster.fldproductname, tblproductmaster.fldproductcode, 
tblcontactinfo.fldcontactnumber, tblcontactinfo.fldcontactfirstname, 
tblcontactinfo.fldcontactlastname, (tblcontactinfo.fldcontactaddress1::text || 
' '::text) || tblcontactinfo.fldcontactaddress2::text AS fldcontactaddress, 
tblcontactinfo.fldcontactcity, tblcontactinfo.fldcontactstate, 
tblcontactinfo.fldcontactzipcode, tblcontactinfo.fldcontacttitle, 
tblcontactinfo.fldcontactphone2_type, tblcontactinfo.fldcontactphone2_num, 
tblcontactinfo.fldcontactphone3_num, tblcontactinfo.fldcontactphone4_num, 
tblcontactinfo.fldcontactphone5_num, tblcontactinfo.fldcontactemail, 
tblcontactinfo.fldenable, tblcontactinfo.fldcontactphone1_num, 
tblcontactinfo.fldperscomments, tblclientactivitytag.fldcontactactivitytag
   FROM tblclientmaster
   LEFT JOIN tblclientproductpreference ON tblclientmaster.fldclientnumber = 
tblclientproductpreference.fldclientnumber
   LEFT JOIN tblclientroomsize ON tblclientmaster.fldclientnumber = 
tblclientroomsize.fldclientnumber
   LEFT JOIN tblgeopreference ON tblclientmaster.fldclientnumber = 
tblgeopreference.fldclientnumber
   LEFT JOIN tblclientproductrelation ON tblclientmaster.fldclientnumber = 
tblclientproductrelation.fldclientnumber
   JOIN tblproductmaster ON tblclientproductrelation.fldproductnumber = 
tblproductmaster.fldproductnumber
   LEFT JOIN tblcontactinfo ON tblclientmaster.fldclientnumber = 
tblcontactinfo.fldclientnumber
   LEFT JOIN tblclientactivitytag ON tblclientmaster.fldclientnumber = 
tblclientactivitytag.fldclientnumber
  ORDER BY tblclientmaster.fldclientnumber, tblclientmaster.fldclientname, 
tblclientmaster.fldclienttype, tblclientmaster.fldbuyingstatus, 
tblclientmaster.fldsellingstatus, tblclientproductpreference.fldfullservice, 
tblclientproductpreference.fldlimitedservice, 
tblclientproductpreference.fldallsuite, tblclientproductpreference.fldbudget, 
tblclientproductpreference.fldconference, tblclientproductpreference.fldresort, 
tblclientproductpreference.flddailyfee, 
tblclientproductpreference.fldsemiprivate, 
tblclientproductpreference.fldprivate, tblclientproductpreference.fldmunicipal, 
tblclientroomsize.fldsize149, tblclientroomsize.fldsize299, 
tblclientroomsize.fldsize449, tblclientroomsize.fldsize599, 
tblclientroomsize.fldsize600, tblgeopreference.fldsw, tblgeopreference.fldnw, 
tblgeopreference.fldmw, tblgeopreference.fldw, tblgeopreference.fldma, 
tblgeopreference.fldse, tblgeopreference.flds, tblgeopreference.fldne, 
tblproductmaster.fldproductname, tblproductmaster.fldproductcode, 
tblcontactinfo.fldcontactnumber, tblcontactinfo.fldcontactfirstname, 
tblcontactinfo.fldcontactlastname, (tblcontactinfo.fldcontactaddress1::text || 
' '::text) || tblcontactinfo.fldcontactaddress2::text, 
tblcontactinfo.fldcontactcity, tblcontactinfo.fldcontactstate, 
tblcontactinfo.fldcontactzipcode, tblcontactinfo.fldcontacttitle, 
tblcontactinfo.fldcontactphone2_type, tblcontactinfo.fldcontactphone2_num, 
tblcontactinfo.fldcontactphone3_num, tblcontactinfo.fldcontactphone4_num, 
tblcontactinfo.fldcontactphone5_num, 

Re: [GENERAL] Problem with pg_dump

2007-05-22 Thread Andrew Sullivan
On Tue, May 22, 2007 at 04:36:49PM +0200, [EMAIL PROTECTED] wrote:
 pg_dump on master works fine. On the slave I've got an error at the start of 
 dumping:
 
 -bash-3.00$ pg_dump -d -D my_database
 pg_dump: failed sanity check, parent table OID 225893092 of 
 pg_rewrite entry OID 225893271 not found

You're using Slony, right?  You can't use pg_dump on a database where
_any_ table is Slony-replicated right now.  Yes, this is due to the
catalogue corruption Slony introduces.  There's an alternative in the
Slony tools/ directory.  

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
Everything that happens in the world happens at some place.
--Jane Jacobs 

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Problem with pg_dump

2007-05-22 Thread Joshua D. Drake

Andrew Sullivan wrote:

On Tue, May 22, 2007 at 04:36:49PM +0200, [EMAIL PROTECTED] wrote:
pg_dump on master works fine. On the slave I've got an error at the start of 
dumping:


-bash-3.00$ pg_dump -d -D my_database
pg_dump: failed sanity check, parent table OID 225893092 of 
pg_rewrite entry OID 225893271 not found


You're using Slony, right?  You can't use pg_dump on a database where
_any_ table is Slony-replicated right now.  Yes, this is due to the
catalogue corruption Slony introduces.  There's an alternative in the
Slony tools/ directory.  


*cough*

catalog features Slony adds.


Joshua D. Drake




A




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Problem with pg_dump

2007-05-22 Thread Richard Huxton

Andrew Sullivan wrote:

On Tue, May 22, 2007 at 04:36:49PM +0200, [EMAIL PROTECTED] wrote:
pg_dump on master works fine. On the slave I've got an error at the start of 
dumping:


-bash-3.00$ pg_dump -d -D my_database
pg_dump: failed sanity check, parent table OID 225893092 of 
pg_rewrite entry OID 225893271 not found


You're using Slony, right?  You can't use pg_dump on a database where
_any_ table is Slony-replicated right now.  Yes, this is due to the
catalogue corruption Slony introduces.  There's an alternative in the
Slony tools/ directory.  


I'll be! I've obviously never tried pg_dump-ing a slave node. It does 
say it here in black and white though:


This represents something of a corruption of the data dictionary, and 
is why you should not directly use pg_dump to dump schemas on subscribers.

http://slony.info/adminguide/slony1-1.2.6/doc/adminguide/slonyintro.html

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Richard Huxton

Robert Fitzpatrick wrote:

I have developed a view in pgsql that takes over 160K ms to execute, but
when copied into MS SQL against the old database (with syntax mods of
course), runs in a few seconds. 


Your query seems to produce 41.8 million rows. Are you sure MS-SQL is 
returning that many rows in a few seconds?



Merge Left Join  (cost=35366.42..57565.28 rows=565261 width=297) (actual 
time=1000.457..148111.905 rows=41866801 loops=1)


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 17:21 +0100, Richard Huxton wrote:
 Robert Fitzpatrick wrote:
  I have developed a view in pgsql that takes over 160K ms to execute, but
  when copied into MS SQL against the old database (with syntax mods of
  course), runs in a few seconds. 
 
 Your query seems to produce 41.8 million rows. Are you sure MS-SQL is 
 returning that many rows in a few seconds?
 
I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
PERCENT' after SELECT in the query. The Enterprise Manager does not
indicate how many rows come back. I save it as a VIEW in MS SQL and do a
'select count(*)...' and, yes, it comes back 42164877 records.

Just to be sure MS SQL hasn't done something to the structure (I noticed
dbo prefixes, etc.), I pasted back into pgadmin, took off 'top 100
percent'. Then saved as a view and did a count(*) in pgsql, got
41866801. The difference in the number of records could vary due to the
dataset age I'm using in pgsql for migration. The MS SQL db is still
actively used. I hope I'm just missing something here... these are the
queries for mssql and pgsql, respectively...

SELECT TOP 100 PERCENT dbo.tblClientMaster.fldClientNumber, 
dbo.tblClientMaster.fldClientName, dbo.tblClientMaster.fldClientType, 
  dbo.tblClientMaster.fldBuyingStatus, 
dbo.tblClientMaster.fldSellingStatus, 
dbo.tblClientProductPreference.fldFullService, 
  dbo.tblClientProductPreference.fldLimitedService, 
dbo.tblClientProductPreference.fldAllSuite, 
dbo.tblClientProductPreference.fldBudget, 
  dbo.tblClientProductPreference.fldConference, 
dbo.tblClientProductPreference.fldResort, 
dbo.tblClientProductPreference.fldDailyFee, 
  dbo.tblClientProductPreference.fldSemiPrivate, 
dbo.tblClientProductPreference.fldPrivate, 
dbo.tblClientProductPreference.fldMunicipal, 
  dbo.tblClientRoomSize.fldSize149, 
dbo.tblClientRoomSize.fldSize299, dbo.tblClientRoomSize.fldSize449, 
dbo.tblClientRoomSize.fldSize599, 
  dbo.tblClientRoomSize.fldSize600, 
dbo.tblGeoPreference.fldSW, dbo.tblGeoPreference.fldNW, 
dbo.tblGeoPreference.fldMW, 
  dbo.tblGeoPreference.fldW, dbo.tblGeoPreference.fldMA, 
dbo.tblGeoPreference.fldSE, dbo.tblGeoPreference.fldS, 
dbo.tblGeoPreference.fldNE, 
  dbo.tblProductMaster.fldProductName, 
dbo.tblProductMaster.fldProductCode, dbo.tblContactInfo.fldContactNumber, 
  dbo.tblContactInfo.fldContactFirstName, 
dbo.tblContactInfo.fldContactLastName, dbo.tblContactInfo.fldContactCity, 
dbo.tblContactInfo.fldContactState, 
  dbo.tblContactInfo.fldContactZipCode, 
dbo.tblContactInfo.fldContactTitle, dbo.tblContactInfo.fldContactPhone2_Type, 
  dbo.tblContactInfo.fldContactPhone2_Num, 
dbo.tblContactInfo.fldContactPhone3_Num, 
dbo.tblContactInfo.fldContactPhone4_Num, 
  dbo.tblContactInfo.fldContactPhone5_Num, 
dbo.tblContactInfo.fldContactEMail, dbo.tblContactInfo.fldEnable, 
dbo.tblContactInfo.fldContactPhone1_Num, 
  dbo.tblContactInfo.fldPersComments, 
dbo.tblClientActivityTag.fldContactActivityTag
FROM dbo.tblClientMaster LEFT OUTER JOIN
  dbo.tblClientProductPreference ON 
dbo.tblClientMaster.fldClientNumber = 
dbo.tblClientProductPreference.fldClientNumber LEFT OUTER JOIN
  dbo.tblClientRoomSize ON 
dbo.tblClientMaster.fldClientNumber = dbo.tblClientRoomSize.fldClientNumber 
LEFT OUTER JOIN
  dbo.tblGeoPreference ON 
dbo.tblClientMaster.fldClientNumber = dbo.tblGeoPreference.fldClientNumber LEFT 
OUTER JOIN
  dbo.tblClientProductRelation ON 
dbo.tblClientMaster.fldClientNumber = 
dbo.tblClientProductRelation.fldClientNumber INNER JOIN
  dbo.tblProductMaster ON 
dbo.tblClientProductRelation.fldProductNumber = 
dbo.tblProductMaster.fldProductNumber LEFT OUTER JOIN
  dbo.tblContactInfo ON dbo.tblClientMaster.fldClientNumber 
= dbo.tblContactInfo.fldClientNumber LEFT OUTER JOIN
  dbo.tblClientActivityTag ON 
dbo.tblClientMaster.fldClientNumber = dbo.tblClientActivityTag.fldClientNumber

 SELECT tblclientmaster.fldclientnumber, tblclientmaster.fldclientname, 
tblclientmaster.fldclienttype, tblclientmaster.fldbuyingstatus, 
tblclientmaster.fldsellingstatus, tblclientproductpreference.fldfullservice, 
tblclientproductpreference.fldlimitedservice, 
tblclientproductpreference.fldallsuite, tblclientproductpreference.fldbudget, 
tblclientproductpreference.fldconference, tblclientproductpreference.fldresort, 
tblclientproductpreference.flddailyfee, 
tblclientproductpreference.fldsemiprivate, 
tblclientproductpreference.fldprivate, tblclientproductpreference.fldmunicipal, 
tblclientroomsize.fldsize149, tblclientroomsize.fldsize299, 
tblclientroomsize.fldsize449, tblclientroomsize.fldsize599, 

Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread PFC



I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
PERCENT' after SELECT in the query. The Enterprise Manager does not
indicate how many rows come back. I save it as a VIEW in MS SQL and do a
'select count(*)...' and, yes, it comes back 42164877 records.


	No, it comes back 1 record with the count in it, the ORDER BY is useless  
for a count(*), etc.


	What is it that you are trying to do exactly?

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Rounding datetimes

2007-05-22 Thread jws
Is there a way to round an interval to the nearest minute or do I need
to create a function for this?


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Tom Lane
Robert Fitzpatrick [EMAIL PROTECTED] writes:
 On Tue, 2007-05-22 at 17:21 +0100, Richard Huxton wrote:
 Your query seems to produce 41.8 million rows. Are you sure MS-SQL is
 returning that many rows in a few seconds?

 I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
 PERCENT' after SELECT in the query. The Enterprise Manager does not
 indicate how many rows come back. I save it as a VIEW in MS SQL and do a
 'select count(*)...' and, yes, it comes back 42164877 records.

 Just to be sure MS SQL hasn't done something to the structure (I noticed
 dbo prefixes, etc.), I pasted back into pgadmin, took off 'top 100
 percent'. Then saved as a view and did a count(*) in pgsql, got
 41866801.

How much time do the two select count(*) operations take?  That would be
a reasonably fair comparison of the query engines, as opposed to
whatever might be happening on the client side (in particular, I wonder
whether the MS client is actually fetching all the rows or just the
first few).
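
In psql, \timing makes that comparison easy (the view name is hypothetical):

\timing
SELECT count(*) FROM myview;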

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] swap storm created by 8.2.3

2007-05-22 Thread Joseph Shraibman

I'm running:

PostgreSQL 8.2.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 
3.4.6 20060404 (Red Hat 3.4.6-3)


My memory settings are:

work_mem = 64MB
shared_buffers = 128MB
temp_buffers = 32MB

I ran a query that was SELECT field, count(*) INTO TEMP temptable and 
it grew to be 10gig (as reported by top) and brought the whole machine 
to its knees.  How do I keep this from happening again?


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 19:04 +0200, PFC wrote:
  I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
  PERCENT' after SELECT in the query. The Enterprise Manager does not
  indicate how many rows come back. I save it as a VIEW in MS SQL and do a
  'select count(*)...' and, yes, it comes back 42164877 records.
 
   No, it comes back 1 record with the count in it, the ORDER BY is useless 
 for a count(*), etc.
 
   What is it that you are trying to do exactly ?

Yes, one record indicating over 42 million records available from the
view, correct? I realized this after my first post, there is no 'ORDER
BY' in my last post with the two query examples.

I'm trying to get my query in pgsql to return a result in the same amount of
time (approx.) as it does in the existing mssql db. The query comes
back with results using MS SQL Enterprise Manager in seconds and the
same query in pgadmin takes super long. I just tried running the query
now and it is still going with over 200K ms clocked. If I stop the
query, remove all references to tblcontactinfo and tblactivitytag, the
query comes back in less than 6000 ms.

-- 
Robert


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Rounding datetimes

2007-05-22 Thread Peter Childs

On 22 May 2007 10:08:24 -0700, jws [EMAIL PROTECTED] wrote:


Is there a way to round an interval to the nearest minute or do I need
to create a function for this?


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings



date_trunc('minute',interval)

see

http://www.postgresql.org/docs/8.2/interactive/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC

Peter Childs


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread sudhir
OK. In your example the lock table command is used to avoid rollbacks due 
to concurrent transactions.

So LOCK TABLE is useful in this situation.

I have one last doubt:
why is there a difference between the behavior of 'select for update' and 
'lock table'?

One causes a serialization error and the other does not
(even though both are variations of the locking mechanism).

case 1)

T1# BEGIN -- snapshot taken
T1# Set transaction isolation level serializable;
T2# BEGIN -- snapshot taken
T2# Set transaction isolation level serializable;
T1# Update account set bal=bal-100 where accno=129;
T2# lock table account;  -- *blocked*
T1# commit;
T2# -- lock obtained


case 2)

T1# BEGIN -- snapshot taken
T1# Set transaction isolation level serializable;
T2# BEGIN -- snapshot taken
T2# Set transaction isolation level serializable;
T1# Update account set bal=bal-100 where accno=129;
T2# select * from account where accno=129 for update; -- *blocked*
T1# commit;
T2# -- serialization error








Consider these two cases:

Case a)

Session 1 starts a serializable transaction T.
The first statement in transaction T will mark the time at which
the 'snapshot' that you mention above is 'taken'. Let's call this
time t1.

At a time t2 > t1, Session 2 updates a row on table r.

At t3 > t2, Session 1 tries to update the same row in table r.
Session 1 will fail with a serialization error.

Case b)

Session 1 starts a serializable transaction T.
The first statement in transaction T is 'LOCK TABLE r'. The statement
returns at time t1 which is the 'snapshot' time for transaction T.

At time t2 > t1, Session 2 tries to modify a row in table r.
Session 2 will have to wait until transaction T is completed, because
it cannot get a shared lock on the table.

At any time > t1, Session 1 can update the same row in table r
without receiving an error.


You see, there is a difference. In case a) the serializable transaction
will very likely fail if there are many concurrent changes on the table.
In case b), the serializable transaction will always succeed, while
all concurrent updates must wait.
  



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Rounding datetimes

2007-05-22 Thread Steve Crawford
jws wrote:
 Is there a way to round an interval to the nearest minute...

Yes. See date_trunc function. Of course this truncates down to the
specified unit. If you want to round up/down to the nearest minute I
suppose you could just add '30 seconds'::interval before truncating.

select
now(),
date_trunc('minute', now()),
date_trunc('minute', now()+'30 seconds'::interval);

-[ RECORD 1 ]-
now| 2007-05-22 10:25:37.706279-07
date_trunc | 2007-05-22 10:25:00-07
date_trunc | 2007-05-22 10:26:00-07
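
The same trick works directly on an interval, which was the original question
(a quick sketch):

select date_trunc('minute', interval '5 minutes 42 seconds' + interval '30 seconds');
-- 00:06:00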


Cheers,
Steve

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 13:10 -0400, Tom Lane wrote:
 Robert Fitzpatrick [EMAIL PROTECTED] writes:
  On Tue, 2007-05-22 at 17:21 +0100, Richard Huxton wrote:
  Your query seems to produce 41.8 million rows. Are you sure MS-SQL is
  returning that many rows in a few seconds?
 
  I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
  PERCENT' after SELECT in the query. The Enterprise Manager does not
  indicate how many rows come back. I save it as a VIEW in MS SQL and do a
  'select count(*)...' and, yes, it comes back 42164877 records.
 
  Just to be sure MS SQL hasn't done something to the structure (I noticed
  dbo prefixes, etc.), I pasted back into pgadmin, took off 'top 100
  percent'. Then saved as a view and did a count(*) in pgsql, got
  41866801.
 
 How much time do the two select count(*) operations take?  That would be
 a reasonably fair comparison of the query engines, as opposed to
 whatever might be happening on the client side (in particular, I wonder
 whether the MS client is actually fetching all the rows or just the
 first few).

Takes 25K ms in pgsql, don't see a timer in MS Ent Manager, but only 5
seconds clocked. Maybe I should put together a php script to operate on
each to be using the exact same client. I am doing all this all on the
same server with PostgreSQL 8.2 loaded in Windows Server 2003 also with
MS SQL server 2000.

-- 
Robert


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread sudhir

It is not necessary that LOCK TABLE will be the first statement.
(assuming the serializable isolation level is snapshot isolation in postgres)
For a serializable transaction, the snapshot should be taken when the 'BEGIN' 
statement is executed, and not when LOCK TABLE succeeds.

Hence, uncommitted changes should be invisible to the serializable transaction.



The critical point here is that LOCK TABLE commands at the start of a
serializable transaction are performed *before* taking the transaction's
snapshot (the snap happens at the first regular DML command).  They not
only protect against post-snap changes as shown by Albe's example, but
against uncommitted changes that were made before transaction start
(by making the serializable xact wait until those changes are committed
or aborted before it takes its snap).

  



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] How to move tables to another physical place

2007-05-22 Thread Michelle Konzack
Hello,

I am using Debian GNU/Linux 4.0 with postgresql 8.2.

Now I have split my Monster-Table of 580 GByte (historical timeline)
into one table per year (= several 100 now) and use table partitioning.
(Cool for pg_dump, since I can dump single tables and burn them now on CD
or DVD.) Then I have reorganized my server and now I have several Raid-1
volumes to use table partitioning.

The tables are now working and my Database is over 80 times faster
because most searches are not over the whole table (580 GByte) but over
a particular time span (mostly 20-40 years).

What I want now is to move tables to another partition.

How can I do this? (best and fastest way)

Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSN LinuxMichi
0033/6/6192519367100 Strasbourg/France   IRC #Debian (irc.icq.com)




Re: [GENERAL] swap storm created by 8.2.3

2007-05-22 Thread Ron Johnson

On 05/22/07 12:17, Joseph Shraibman wrote:
 I'm running:
 
 PostgreSQL 8.2.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC)
 3.4.6 20060404 (Red Hat 3.4.6-3)
 
 My memory settings are:
 
 work_mem = 64MB
 shared_buffers = 128MB
 temp_buffers = 32MB
 
 I ran a query that was SELECT field, count(*) INTO TEMP temptable and
 it grew to be 10gig (as reported by top) and brought the whole machine
 to its knees.  How do I keep this from happening again?

It looks to me as if TEMP tables are in-memory structures, not
file-backed objects like regular tables.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] How to move tables to another physical place

2007-05-22 Thread Joshua D. Drake

Michelle Konzack wrote:

Hello,

I am using Debian GNU/Linux 4.0 with postgresql 8.2.

Now I have split my Monster-Table of 580 GByte (historical timeline)
into one table per year (= several 100 now) and use table partitioning.
(Cool for pg_dump, since I can dump single tables and burn them now on CD
or DVD.) Then I have reorganized my server and now I have several Raid-1
volumes to use table partitioning.

The tables are now working and my Database is over 80 times faster
because most searches are not over the whole table (580 GByte) but over
a particular time span (mostly 20-40 years).

What I want now is to move tables to another partition.

How can I do this? (best and fastest way)



You are looking for tablespaces.

http://www.postgresql.org/docs/8.0/interactive/manage-ag-tablespaces.html



Greetings
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant





--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] How to move tables to another physical place

2007-05-22 Thread Ron Johnson

On 05/22/07 12:01, Michelle Konzack wrote:
 Hello,
 
 I am using Debian GNU/Linux 4.0 with postgresql 8.2.
 
 Now I have split my Monster-Table of 580 GByte (historical timeline)
 into one table per year (= several 100 now) and use table partitioning.
 (Cool for pg_dump, since I can dump single tables and burn them now on CD
 or DVD.) Then I have reorganized my server and now I have several Raid-1
 volumes to use table partitioning.
 
 The tables are now working and my Database is over 80 times faster
 because most searches are not over the whole table (580 GByte) but over
 a particular time span (mostly 20-40 years).
 
 What I want now is to move tables to another partition.
 
 How can I do this? (best and fastest way)

Your subject says physical place, but then you ask about moving a
table to another partition.

Since partitions are sub-sections of tables, I don't understand what
you are asking.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] short ciruit logic of plpgsql in 8.2

2007-05-22 Thread Ian Harding

This keeps biting me.  In a trigger function for INSERT OR UPDATE if you try

IF TG_OP = 'UPDATE' AND OLD.foo = 'bar' THEN
...

it will blow up on inserts because there is no OLD.  I always expect
this to short circuit and am always disappointed.  Easy fix, of
course...

IF TG_OP = 'UPDATE' THEN
   IF OLD.foo = 'bar' THEN
...

In case this was what the OP was asking about in particular, it still
happens to me in 8.2
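
Spelled out as a complete function, a sketch of the workaround (the function
name and RAISE text are invented; foo/'bar' are from the fragment above):

CREATE OR REPLACE FUNCTION guard_foo() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        IF OLD.foo = 'bar' THEN
            RAISE EXCEPTION 'foo may not change once it is bar';
        END IF;
    END IF;
    RETURN NEW;   -- reached on INSERT, and on UPDATE when foo was not 'bar'
END;
$$ LANGUAGE plpgsql;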

- Ian


On 5/17/07, Tom Lane [EMAIL PROTECTED] wrote:

Richard Broersma Jr [EMAIL PROTECTED] writes:
 Does anyone know if logic operations in plpgsql are short circuited in 8.2?

They are often optimized, but if you expect short circuiting following
exactly the rules of, say, C, you are likely to be disappointed.  See
the manual:

http://www.postgresql.org/docs/8.2/static/sql-expressions.html#SYNTAX-EXPRESS-EVAL

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] How to move tables to another physical place

2007-05-22 Thread Joshua D. Drake

Ron Johnson wrote:


On 05/22/07 12:01, Michelle Konzack wrote:

Hello,

I am using Debian GNU/Linux 4.0 with postgresql 8.2.

Now I have split my Monster-Table of 580 GByte (historical timeline)
into one table per year (= several 100 now) and use table partitioning.
(Cool for pg_dump, since I can dump single tables and burn them now on CD
or DVD.) Then I have reorganized my server and now I have several Raid-1
volumes to use table partitioning.

The tables are now working and my Database is over 80 times faster
because most searches are not over the whole table (580 GByte) but over
a particular time span (mostly 20-40 years).

What I want now is to move tables to another partition.

How can I do this? (best and fastest way)


Your subject says physical place, but then you ask about moving a
table to another partition.

Since partitions are sub-sections of tables, I don't understand what
you are asking.



Hard drive partitions.

Joshua D. Drake



--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match




--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] How to move tables to another physical place

2007-05-22 Thread Michelle Konzack
On 2007-05-22 12:45:23, Ron Johnson wrote:
 Your subject says physical place, but then you ask about moving a
 table to another partition.
 
 Since partitions are sub-sections of tables, I don't understand what
 you are asking.

:-)

INSIDE postgresql I use table partitioning, but the whole database
is on ONE PHYSICAL partition (/dev/sdb1) at the moment.

I know about tablespaces and want to move parts of the Maintable (which is
cut down into many smaller parts) onto another PHYSICAL partition on
/dev/sdc1, /dev/sdd1 and /dev/sde1 to reduce the seeks on the drives.

But since the database IS on ONE PHYSICAL partition I do not know
HOW TO MOVE some of the smaller tables (without stopping PostgreSQL
from operating, since it is needed 24/7).

Thanks, Greetings and nice Day
Michelle Konzack
Systemadministrator
Tamay Dogan Network
Debian GNU/Linux Consultant


-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
# Debian GNU/Linux Consultant #
Michelle Konzack   Apt. 917  ICQ #328449886
   50, rue de Soultz MSN LinuxMichi
0033/6/6192519367100 Strasbourg/France   IRC #Debian (irc.icq.com)




Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Richard Huxton

Robert Fitzpatrick wrote:

On Tue, 2007-05-22 at 13:10 -0400, Tom Lane wrote:

Robert Fitzpatrick [EMAIL PROTECTED] writes:

On Tue, 2007-05-22 at 17:21 +0100, Richard Huxton wrote:

Your query seems to produce 41.8 million rows. Are you sure MS-SQL is
returning that many rows in a few seconds?

I thought the same thing. While I'm not a MS SQL guru, I put 'TOP 100
PERCENT' after SELECT in the query. The Enterprise Manager does not
indicate how many rows come back. I save it as a VIEW in MS SQL and do a
'select count(*)...' and, yes, it comes back 42164877 records.
Just to be sure MS SQL hasn't done something to the structure (I noticed
dbo prefixes, etc.), I pasted back into pgadmin, took off 'top 100
percent'. Then saved as a view and did a count(*) in pgsql, got
41866801.

How much time do the two select count(*) operations take?  That would be
a reasonably fair comparison of the query engines, as opposed to
whatever might be happening on the client side (in particular, I wonder
whether the MS client is actually fetching all the rows or just the
first few).


Takes 25K ms in pgsql, don't see a timer in MS Ent Manager, but only 5
seconds clocked. Maybe I should put together a php script to operate on
each to be using the exact same client. I am doing all this all on the
same server with PostgreSQL 8.2 loaded in Windows Server 2003 also with
MS SQL server 2000.


1. Be aware that with the real query, PG is giving you the time to fetch 
the *last* row, whereas MS is probably the *first* row. I'm reasonably 
sure of this because 41.8 million rows * (say) 256 bytes would be over 10GB of 
data to return, which in a few seconds seems optimistic.


2. If you don't want all the rows in one go, explicitly declare a cursor 
with PG.


3. The EXPLAIN ANALYSE will not have the 
format-and-transfer-data-to-client costs, but I think it does the rest of 
the query.


4. We're still 5 x slower than MS-SQL (with the count). That might well 
be down to having to check visibility on each row with our MVCC rather 
than just going to the index.


Hmm... How much of your machine is PG getting to use vs. MS-SQL? What 
are your shared_buffers, work_mem, effective_cache_size (and how much 
RAM on this box)?


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] swap storm created by 8.2.3

2007-05-22 Thread Richard Huxton

Joseph Shraibman wrote:

I'm running:

PostgreSQL 8.2.3 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 
3.4.6 20060404 (Red Hat 3.4.6-3)


My memory settings are:

work_mem = 64MB
shared_buffers = 128MB
temp_buffers = 32MB

I ran a query that was SELECT field, count(*) INTO TEMP temptable and 
it grew to be 10gig (as reported by top)


What was the real query?
How many rows are we talking about?

 and brought the whole machine

to its knees.  How do I keep this from happening again?


Set your per-user limits (man ulimit or man bash) to restrict PG's 
overall memory consumption.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] How to move tables to another physical place

2007-05-22 Thread Ron Johnson

On 05/22/07 12:58, Michelle Konzack wrote:
 On 2007-05-22 12:45:23, Ron Johnson wrote:
 Your subject says physical place, but then you ask about moving a
 table to another partition.

 Since partitions are sub-sections of tables, I don't understand what
 you are asking.
 
 :-)
 
 INSIDE postgresql I use table partitioning, but the whole database
 is on ONE PHYSICAL partition (/dev/sdb1) at the moment.
 
 I know about tablespaces and want to move parts of the Maintable (which is
 cut down into many smaller parts) onto another PHYSICAL partition on
 /dev/sdc1, /dev/sdd1 and /dev/sde1 to reduce the seeks on the drives.
 
 But since the database IS on ONE PHYSICAL partition I do not know
 HOW TO MOVE some of the smaller tables (without stopping PostgreSQL
 from operating, since it is needed 24/7).

Since PostgreSQL doesn't use raw partitions, your database can't
own it.  (Although, obviously, Postgres files might be the only
ones on sdb1.)

Is Maintable composed of smaller tables?

Anyway, this is what you need:
http://www.postgresql.org/docs/8.2/interactive/sql-altertable.html
ALTER TABLE foo SET TABLESPACE bar;
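
Putting it together, a sketch (path and table name invented; CREATE TABLESPACE
needs a superuser and an empty directory owned by the postgres user):

CREATE TABLESPACE disk_c LOCATION '/mnt/sdc1/pgdata';
ALTER TABLE maintable_1997 SET TABLESPACE disk_c;
-- note: SET TABLESPACE copies the files and holds an exclusive lock on that
-- table while it runs, so move the partitions one at a time.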


--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 19:03 +0100, Richard Huxton wrote:
 4. We're still 5 x slower than MS-SQL (with the count). That might well 
 be down to having to check visibility on each row with our MVCC rather 
 than just going to the index.

Tips? I'd love to know how to see inside MVCC. I really appreciate the
help!

 
 Hmm... How much of your machine is PG getting to use vs. MS-SQL? What 
 are your shared_buffers, work_mem, effective_cache_size (and how much 
 RAM on this box)? 

3.5GB of RAM on a Xeon 2.8GHz server. I have default shared_buffers
32MB, no defaults changed except listen_addresses. How can I check
work_mem and effective_cache_size? 

-- 
Robert


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 14:30 -0400, Robert Fitzpatrick wrote:
 On Tue, 2007-05-22 at 19:03 +0100, Richard Huxton wrote:
  
  Hmm... How much of your machine is PG getting to use vs. MS-SQL? What 
  are your shared_buffers, work_mem, effective_cache_size (and how much 
  RAM on this box)? 
 
 3.5GB of RAM on a Xeon 2.8GHz server. I have default shared_buffers
 32MB, no defaults changed except listen_addresses. How can I check
 work_mem and effective_cache_size? 

I did some googling and came up with some ideas; I have it now with
these settings, and after restarting PG, no help.

work_mem = 5MB
shared_buffers = 128MB
effective_cache_size = 800MB

-- 
Robert


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Richard Huxton

Robert Fitzpatrick wrote:

On Tue, 2007-05-22 at 19:03 +0100, Richard Huxton wrote:

4. We're still 5 x slower than MS-SQL (with the count). That might
well 
be down to having to check visibility on each row with our MVCC
rather 
than just going to the index.


Tips? I'd love to know how to see inside MVCC. I really appreciate the
help!


The main thing is that PostgreSQL's implementation of MVCC means that
1. (In many cases) writers need not block readers.
2. An update is effectively a delete and an insert.
3. VACUUM is needed to mark space from deleted rows for re-use.
4. The indexes don't carry visibility information, which means we need 
to go to the actual row on-disk to see if the current transaction can 
actually see the row.


This last point is a problem for things like count(*) where we can't 
just count entries in the index because we don't know if some of the 
rows they point to might be deleted. The reason we don't store 
visibility info with the index is that it makes the index larger, so 
using up valuable RAM more quickly.
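
As a sketch of the consequence (plan shape from memory; costs omitted), even a
bare count over an indexed table tends to plan a sequential scan on 8.2:

EXPLAIN SELECT count(*) FROM tblcontactinfo;
--  Aggregate
--    ->  Seq Scan on tblcontactinfo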


For more info, see Internals in the manuals for a start. Then have a 
quick look around these for some more bits & pieces. There are some 
presentation slides somewhere. Note - I am *not* a developer, just a 
long-term user.


http://www.postgresql.org/developer/
http://www.postgresql.org/docs/faqs.FAQ_DEV.html
http://www.postgresql.org/docs/techdocs

Hmm... How much of your machine is PG getting to use vs. MS-SQL? What 
are your shared_buffers, work_mem, effective_cache_size (and how much 
RAM on this box)? 


3.5GB of RAM on a Xeon 2.8GHz server. I have default shared_buffers
32MB, no defaults changed except listen_addresses. How can I check
work_mem and effective_cache_size? 


Ah - I bet MS-SQL is squatting on a gig of RAM or some-such too. Read 
through this - it's a little old, but still good advice.

  http://www.powerpostgresql.com/PerfList/
You'll probably find increasing work_mem (by a lot) for this one query 
will help you out.

  SET work_mem = '512MB';   -- something large, just for this query
  SELECT ...;
  RESET work_mem;           -- back to the configured default

--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Richard Huxton

Robert Fitzpatrick wrote:

I did some googling and came up with some ideas, I have it now with
these settings and after restarting PG, no help.

work_mem = 5MB


My last post missed yours - you're ahead of my previous reply :-)

Try 32MB, then 64MB, then perhaps 128MB. If it looks like you're not 
using stupid amounts of RAM on this one sort go higher still.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Permance issues with migrated db

2007-05-22 Thread Robert Fitzpatrick
On Tue, 2007-05-22 at 20:13 +0100, Richard Huxton wrote:
 Robert Fitzpatrick wrote:
  I did some googling and came up with some ideas, I have it now with
  these settings and after restarting PG, no help.
  
  work_mem = 5MB
 
 My last post missed yours - you're ahead of my previous reply :-)
 
 Try 32MB, then 64MB, then perhaps 128MB. If it looks like you're not 
 using stupid amounts of RAM on this one sort go higher still.
 

This really has me perplexed now :\

I closed pgadmin, set it to 128MB work_mem and restarted PG, went back
into pgadmin and created a script from my saved view (again, this view is
identical/copied from the same mssql view with only syntax changes). I
started the query and then started looking over that doc you sent me
(again, since I had just read through it from a google search)... after over
30 ms, still going, not even finishing. I'm beginning to worry that
something is wrong with the dataset migrated from mssql. I can migrate
again tonight, last done about a month ago. Since then it has been
dumped from a linux server and loaded on this Windows server without
issue.

-- 
Robert


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] autoReconnect with PostGres

2007-05-22 Thread Ranjan Kumar Baisak

Hello Group,
 I am using postgres in my web application. The number 
of physical DB connections just keeps increasing with each 
re-deployment of my web application. I generally don't shut down my 
webserver and only re-deploy my web application. I heard that 
autoReconnect = true reuses idle connections across redeployments. I 
am not sure whether this can help me keep the number of physical DB 
connections constant.

My configuration parameters:

<Resource
  name="jdbc/"
  scope="Shareable"
  type="javax.sql.DataSource"
  url="jdbc:postgresql://mydb.test.com/micms"
  validationQuery="select tablename from pg_tables where schemaname = 'gpdb'"
  maxIdle="10"
  maxActive="20"
  maxWait="-1"
  driverClassName="org.postgresql.Driver"
  removeAbandoned="true"
  username=""
  logAbandoned="true"
  removeAbandonedTimeout="120"
  password="" />


Your help would be highly appreciated.

regards,
Ranjan

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] partial word matching

2007-05-22 Thread Mage

 Hello,

as far as I know tsearch2 doesn't support partial word matching. Last 
time I checked it didn't. (I also googled before asking this.)


Am I wrong? If not, we would like to implement some FTI, something like 
the old and deprecated FTI contrib. Where should we start reading 
tutorials? I googled for this question too, but most results were about 
tsearch2 or the old FTI contrib. I would like to read something advanced 
about this topic - something that must be read.
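
(For what it's worth, one low-tech sketch of prefix matching without 
tsearch2, assuming a hypothetical table docs(word text) - a btree index 
built with text_pattern_ops can serve left-anchored LIKE queries even in 
non-C locales:)

  CREATE INDEX docs_word_idx ON docs (word text_pattern_ops);
  SELECT * FROM docs WHERE word LIKE 'post%';  -- left-anchored, so the index is usable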


Thank you.

(Before asking I searched my last 30k e-mails from this mailing 
list. Sadly they were not full-text indexed.)


  Mage

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[Re] Re: [Re] Re: [GENERAL] Winsock error 10035 while trying to upgrade from 8.0 to 8.2

2007-05-22 Thread Cyril VELTER
 Cyril VELTER wrote:
  No I'm not. It's not even compiled in the server nor in the pg_dump 
binary.
  
  The server is built on windows using MSYS simply with
  ./configure && make all && make install
  
  
  I've been able to reproduce the problem 6 times (at random points in 
  the process, but it never completes successfully). Is there any test I 
  can do to help investigate the problem?
 
 Sorry I haven't gotten back to you for a while.
 
 Yeah, if you can attach a debugger to the backend (assuming you have a
 predictable backend it happens to - but if you're loading, you are using
 a single session, I assume?), add a breakpoint around the area of the
 problem, and get a backtrace from exactly where it shows up, that would
 help.


Thanks for your reply. I'll try to do this. I've installed gdb on the 
problematic machine and recompiled postgres with debug symbols (configure 
--enable-debug).

I'm not very familiar with gdb. Could you give me some direction on 
setting the breakpoint? After running gdb on the postgres.exe file, I'm 
not able to set the breakpoint (b socket.c:574 gives me an error).

Searching the source files, it seems the error message is generated in 
port/win32/socket.c, line 594.
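
(A rough sketch of such a session - one common reason a file:line 
breakpoint fails is that gdb was launched against the binary rather than 
attached to the running backend; the PID below is hypothetical:)

  gdb postgres.exe 1234        # attach to the running backend with PID 1234
  (gdb) break socket.c:594     # the line where the error message is raised
  (gdb) continue               # let it run until the breakpoint is hit
  (gdb) backtrace              # then capture the stack trace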

Thanks,

cyril


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Command line export or copy utility?

2007-05-22 Thread Scott Ribe
Use psql; it provides all the command-line options you need.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Command line export or copy utility?

2007-05-22 Thread Brent Wood

Francisco Reyes wrote:

Does anyone know of any export or copy utility that runs on FreeBSD?
I basically need a program that will connect to one database, do a 
select and copy the result to a second database.


There are a few ways, from memory (so I might have the odd syntax error):


To replicate a table, run pg_dump on one machine pointing at the host/db 
to export & pipe the output to psql with the host & name of the 
target db:


pg_dump -h host0 -t table ... db0 | psql -h host1 -d db1


You can do similar data streams from one db to another (if the 
target table exists) with:


psql ... -c "COPY table TO STDOUT" | psql ... -c "COPY table FROM 
STDIN"



To do this with the results of a query (to subset the data) you'll need 
to pre-build the target table, but you can do:


psql -h host0 -d db0 -F'|' -Atc "SELECT ...;" | psql -h host1 -d db1 -c 
"COPY table FROM STDIN WITH DELIMITER '|'"




Cheers,

 Brent Wood

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Command line export or copy utility?

2007-05-22 Thread Reece Hart
On Tue, 2007-05-22 at 18:07 -0400, Francisco Reyes wrote:
 Does anyone know of any export or copy utility that runs on FreeBSD?
 I basically need a program that will connect to one database, do a
 select and copy the result to a second database. 

Two options:

1) if you want a whole table or schema, a pipe works nicely:
eg$ pg_dump -t table | psql


2) As of 8.2, you can formulate COPY commands with subqueries. For
example:
eg$ psql -c 'COPY (SELECT origin_id,origin FROM origin
WHERE is_public ORDER BY 1) TO STDOUT'

eg$ psql -c 'COPY (SELECT x FROM a WHERE x%2=1) TO STDOUT' \
| psql -c 'COPY a FROM STDIN;'


The only wrinkle is what to do when you need the DDL for the table
itself (say, when you want to create the same table with a subset of the
rows). The way I do this is to pg_dump the schema (-s) in the custom
format (-Fc). Then, I generate a table of contents with pg_restore -l,
edit the TOC to include only the entries I want, and then rerun
pg_restore with -L.
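
(That workflow in sketch form, with hypothetical file and database names:)

  pg_dump -s -Fc db0 > schema.dump          # schema only, custom format
  pg_restore -l schema.dump > toc.list      # write out the table of contents
  # edit toc.list: comment out unwanted entries with ';'
  pg_restore -L toc.list -d db1 schema.dump # restore only the listed entries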

Good luck,
Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of sudhir
Sent: Tuesday 22 May 2007 19:28
To: Tom Lane
Cc: Albe Laurenz; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Lock table, Select for update and 
Serialization error

[note: text reordered]

 The critical point here is that LOCK TABLE commands at the start of a
 serializable transaction are performed *before* taking the
 transaction's snapshot (the snap happens at the first regular DML
 command).  They not only protect against post-snap changes as shown by
 Albe's example, but against uncommitted changes that were made before
 transaction start (by making the serializable xact wait until those
 changes are committed or aborted before it takes its snap).
[end reorder]
It is not necessary that LOCK TABLE be the first statement.
(Assuming the serializable isolation level is snapshot isolation in
postgres.) For a serializable transaction, the snapshot should be taken
when the 'BEGIN' statement is executed, and not when LOCK TABLE
succeeds.

Tom is correct: the snapshot is taken at the first DML statement, NOT at
transaction start (the BEGIN statement). Test it yourself.
Your 'should' may be the expected behaviour, but it is not the
implemented behaviour.

As Tom points out, LOCK TABLE as the first statement is there to prevent
serialization errors from happening.
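
(A minimal sketch of that pattern, with a hypothetical table t:)

  BEGIN ISOLATION LEVEL SERIALIZABLE;
  LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;  -- no snapshot yet; waits out in-flight writers
  SELECT * FROM t;                           -- the snapshot is taken here, at the first DML command
  COMMIT;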

Hence, uncommitted changes should be invisible to a serializable 
transaction.

Uncommitted changes are at all times visible only to the transaction
that made them. No other transaction, of any isolation level, can see
uncommitted changes from another transaction. Remember, postgres uses
the MVCC model.

- Joris Dobbelsteen


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Lock table, Select for update and Serialization error

2007-05-22 Thread Joris Dobbelsteen
-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of sudhir
Sent: Tuesday 22 May 2007 19:21
To: Albe Laurenz
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Lock table, Select for update and 
Serialization error

OK. In your example the LOCK TABLE command is used to avoid 
rollbacks due to concurrent transactions.
So LOCK TABLE is useful in this situation.

I have one last doubt:
why is there a difference between the behaviour of 'select for 
update' and 'lock table'?
One causes a serialization error and the other does not
(even though both are variations of the locking mechanism).

The two work at very different levels, and you have to consider the
implications of each:

The LOCK statement prevents other transactions from accessing the
table. This is a high-level lock with very low overhead to take. The
disadvantage is obviously the performance impact it has, as it is highly
likely to block other transactions.
The mechanism is very useful for getting some guarantees about what will
happen with the data in the table. This allows modifications to be
synchronized between different transactions.

The select for update has two uses:
1) Ensure the data is current and remains so, for a small subset of a
table.
2) Prevent deadlocks caused by lock escalation.
What I didn't put explicitly is that select for update indicates
that a tuple will be updated.
For serializable transactions it implies that the version you see must
still be current.

Obviously there is a common need for something with the concurrency
benefit of select for update but with relaxed requirements. The
postgres developers envisioned this, and for this purpose there is
select for share.
Select for share only does:
1) Ensure the data is current and remains so, for a small subset of the
table.

Summarizing:
* Lock table - high-level: executes fast, but concurrency problems.
Guarantees about future changes.
* Select for update - low-level, concurrent; ensures data validity and
indicates the row will be modified shortly.
* Select for share - low-level, concurrent; ensures data validity.
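
(A small illustration of the two row-level variants, assuming a 
hypothetical accounts table:)

  BEGIN;
  SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- we intend to modify this row
  UPDATE accounts SET balance = balance - 10 WHERE id = 1;
  COMMIT;

  BEGIN;
  SELECT balance FROM accounts WHERE id = 1 FOR SHARE;   -- just keep the row stable while we read
  COMMIT;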

Hopefully this clears it up a bit.

- Joris Dobbelsteen

[snip]


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Using a trigger with an object-relational manager

2007-05-22 Thread Rick Schumeyer
I'm using Ruby on Rails and have two tables, A and B.  Every row in A 
needs a corresponding row in B.  A also contains a FK pointing to B.


I created a before insert trigger on A that inserts a new row in B, and 
sets the FK in A.  This seems to be running fine.


The problem is that the new A object that rails gets does not see the 
effect of the trigger.  If I call the reload method, then everything 
is ok.


I'm not sure if this is a rails specific problem, or if there is an 
adjustment I can make to my SQL to avoid the need for the reload.


Can I somehow put the creation of A, B and setting the FK in a 
transaction so that rails receives the row from A after the trigger 
completes?  If so, I'm not sure the best way to do that.


Thanks for any suggestions!

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Integrity on large sites

2007-05-22 Thread Naz Gassiep
I'm working in a project at the moment that is using MySQL, and people
keep making assertions like this one:

*Really* big sites don't ever have referential integrity. Or in the few
spots where they do (like with financial transactions), it's implemented
at the application level (via, say, optimistic locking), never the
database level.

This sounds to me like MySQLish. A large DB working with no RI would
give me nightmares. Is it really true that large sites turn RI off to
improve performance, or is that just a MySQL thing where it gets turned
off just because MySQL allows you to turn it off and improve
performance? Can you even turn RI off in PostgreSQL? Does Oracle, DB2 or
MSSQL allow you to turn it off? Am I just being naive in thinking that
everyone runs their DBs with RI in production?

- Naz

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Integrity on large sites

2007-05-22 Thread Stuart Cooper

*Really* big sites don't ever have referential integrity. Or in the few
spots where they do (like with financial transactions), it's implemented
at the application level (via, say, optimistic locking), never the
database level.


Some large sites don't even use data types!

http://www.thestar.com/News/article/189175

"...in some cases the field for the social insurance number was instead
filled in with a birth date."

(search the archives for OT: Canadian Tax Database)

Cheers,
Stuart.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Integrity on large sites

2007-05-22 Thread Richard P. Welty

Naz Gassiep wrote:

I'm working in a project at the moment that is using MySQL, and people
keep making assertions like this one:

*Really* big sites don't ever have referential integrity. Or in the few
spots where they do (like with financial transactions), it's implemented
at the application level (via, say, optimistic locking), never the
database level.

This sounds to me like MySQLish. A large DB working with no RI would
give me nightmares. Is it really true that large sites turn RI off to
improve performance,

I know from having worked in the shop that handles it that the databases
used in processing NYS Personal Income Tax (Informix) most assuredly
use referential integrity.

Anything else would be suicide.

Certain shops do turn it off for large databases. That doesn't make it a
good idea, or the industry norm.
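
(For the record, PostgreSQL has no global RI switch; referential 
integrity is declared - and dropped - one constraint at a time. A sketch 
with hypothetical tables:)

  ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
      FOREIGN KEY (customer_id) REFERENCES customers (id);
  -- and, should you want to turn it off:
  ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;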

richard


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Integrity on large sites

2007-05-22 Thread Ron Johnson

On 05/22/07 21:12, Naz Gassiep wrote:
 I'm working in a project at the moment that is using MySQL, and people
 keep making assertions like this one:
 
 *Really* big sites don't ever have referential integrity. Or in the few
 spots where they do (like with financial transactions), it's implemented
 at the application level (via, say, optimistic locking), never the
 database level.
 
 This sounds to me like MySQLish. A large DB working with no RI would
 give me nightmares. Is it really true that large sites turn RI off to
 improve performance, or is that just a MySQL thing where it gets turned
 off just because MySQL allows you to turn it off and improve
 performance? Can you even turn RI off in PostgreSQL? Does Oracle, DB2 or
 MSSQL allow you to turn it off? Am I just being naive in thinking that
 everyone runs their DBs with RI in production?

Allow you to turn it off???

RI as in foreign keys or RI as in primary keys?

FKs are not implemented on our big transactional systems that use
Rdb/VMS.  Originally this was because the extra load would slow down
a system that needed every ounce of speed back on late 1990s technology.

Now we have (some) faster hardware, but even higher posting volumes.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Using a trigger with an object-relational manager

2007-05-22 Thread Michael Glaesemann


On May 22, 2007, at 21:21 , Rick Schumeyer wrote:

The problem is that the new A object that rails gets does not see
the effect of the trigger. If I call the reload method, then
everything is ok.


As ActiveRecord (which I'm assuming you're using with Rails) has no
idea of what is going on in the database (ActiveRecord doesn't have a
concept of triggers), there's no way for it to see the effect of the
trigger without querying the database for the latest information
(which is what you're doing with reload).
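
(One SQL-level alternative, for what it's worth: on 8.2 and later, 
INSERT ... RETURNING hands back the row as actually stored, including 
columns set by BEFORE triggers - whether ActiveRecord can be persuaded 
to issue it is another question. A sketch with hypothetical tables a 
and b:)

  INSERT INTO a (name) VALUES ('example') RETURNING id, b_id;
  -- b_id reflects the value the BEFORE INSERT trigger assigned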


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Integrity on large sites

2007-05-22 Thread Ben
Not using foreign keys makes sense for some applications: WORM
applications where you know you are loading accurate data, for
example, or times when it doesn't matter if an application bug
corrupts your data.

But if you care about your data and you can't trust your client to
edit it correctly, you'd better have referential integrity. Size is
irrelevant to that equation.


On May 22, 2007, at 7:12 PM, Naz Gassiep wrote:


I'm working in a project at the moment that is using MySQL, and people
keep making assertions like this one:

*Really* big sites don't ever have referential integrity. Or in the few
spots where they do (like with financial transactions), it's implemented
at the application level (via, say, optimistic locking), never the
database level.

This sounds to me like MySQLish. A large DB working with no RI would
give me nightmares. Is it really true that large sites turn RI off to
improve performance, or is that just a MySQL thing where it gets turned
off just because MySQL allows you to turn it off and improve
performance? Can you even turn RI off in PostgreSQL? Does Oracle, DB2 or
MSSQL allow you to turn it off? Am I just being naive in thinking that
everyone runs their DBs with RI in production?

- Naz

---(end of broadcast)---

TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings