Re: [GENERAL] slony over LAN and VPN
On 15.07.2007 18:03, angga erwina wrote: i would like to replicate my database over LAN and VPN, could you help me please, give a tutorial or URL that I can follow step-by-step, especially about the configuration in admin conninfo... Read the manual of the VPN software you are planning to use, read the Slony documentation. What is not clear to you? -- Regards, Hannes Dorbath ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Force SSL / username combination
On Fri, 2007-07-13 at 06:53 -0400, Robert Treat wrote: Well if you do the popular technique of doing everything through stored procedures (in our case plpgsql functions) then you can have those functions check. I don't like that approach myself though. Right. This approach always seemed too late to me, since the user was already connected at that point. I agree with both of you: I would rather have the client use its certificate to log in to PostgreSQL, as some other databases are capable of. However, given that this is not (currently) possible (as far as I know), I have to consider workarounds. And one of those is to have the client use a certificate for the underlying connection, restrict access to the database to stored procedures, and have those stored procedures use sslinfo to obtain the certificate data. On my system, I want different people to access the same database, but I have to make sure they cannot access other people's data. If I just let them use certificates for the underlying connection, without further checks at the database level, I do not think I can make sure user A doesn't get user B's data, if he/she wants to. My initial design of the system included a front-end on the server to which all clients connect, which would translate/check the queries. I then thought it would be easier if this were all done by the database server, using stored procedures. Basically, that means I'm changing my front-end from a separate application to a set of stored procedures. Note that, for my case, the set of different queries a client can do is pretty limited, so I can easily write a stored procedure for each query in the set. If there is some alternative that is easier to implement, please let me know! Koen
[GENERAL] Weird error message in SQL function call
Hi all. I've got this error message while running an SQL function: BEGIN psql:filtb.sql:1029: ERROR: COMMIT is not allowed in a SQL function CONTEXT: SQL function f_relazione during startup funzione SQL f_relazione istruzione 1 END Of course, I have no COMMIT command either in the function itself or in any other function called by f_relazione(). In the syslog I see that this is labelled as error #0A000 Feature not supported. Is there any hint? -- Vincenzo Romano -- Maybe Computer will never become as intelligent as Humans. For sure they won't ever become so stupid. [VR-1988]
Re: [GENERAL] Weird error message in SQL function call
Vincenzo Romano wrote: Hi all. I've got this error message while running an SQL function: BEGIN psql:filtb.sql:1029: ERROR: COMMIT is not allowed in a SQL function CONTEXT: SQL function f_relazione during startup funzione SQL f_relazione istruzione 1 END Of course, I have no COMMIT command either in the function itself or in any other function called by f_relazione(). Seeing as you are reading in a large .sql file, I'd look for a quoting error - if the closing quote for the function-body isn't correct it'll run on, and then encounter a COMMIT further down the file. -- Richard Huxton Archonet Ltd
Re: [GENERAL] Weird error message in SQL function call
On Monday 16 July 2007 12:18:23 Richard Huxton wrote: Vincenzo Romano wrote: Hi all. I've got this error message while running an SQL function: BEGIN psql:filtb.sql:1029: ERROR: COMMIT is not allowed in a SQL function CONTEXT: SQL function f_relazione during startup funzione SQL f_relazione istruzione 1 END Of course, I have no COMMIT command either in the function itself or in any other function called by f_relazione(). Seeing as you are reading in a large .sql file, I'd look for a quoting error - if the closing quote for the function-body isn't correct it'll run on, and then encounter a COMMIT further down the file. You hit the spot! The actual problem was in a deeper function body that used to be PL/pgSQL and is now SQL. A spurious END; at the end of the body was being interpreted as a COMMIT; Thanks. -- Vincenzo Romano -- Maybe Computer will never become as intelligent as Humans. For sure they won't ever become so stupid. [VR-1988]
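[Editor's note] The quoting pitfall described above is easier to avoid with dollar-quoting (available since 8.0), which makes the function body's boundaries unambiguous so a stray quote or END; cannot spill into the rest of the file. A minimal sketch, with a hypothetical function name:

```sql
-- With a single-quoted body, an unbalanced quote ends the function early
-- and the remainder of the .sql file runs as top-level statements.
-- Dollar-quoting sidesteps the problem:
CREATE OR REPLACE FUNCTION f_example() RETURNS integer AS $body$
    SELECT 42;
$body$ LANGUAGE sql;
```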
Re: [GENERAL] count the number of bits set to 1 in a bit string field
On Jul 15, 2007, at 7:20 PM, Ragnar wrote: On sun, 2007-07-15 at 15:35 -0400, Rajarshi Guha wrote: Hi, is there a built in function that will give me the number of bits that are set to 1 in a bit string field? no, but it should be trivial to do with pl/pgsql Thanks for the pointer --- Rajarshi Guha [EMAIL PROTECTED] GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE --- Gravity brings me down.
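[Editor's note] A sketch of the pl/pgsql approach Ragnar suggests; the function name and style are illustrative, not from the thread:

```sql
CREATE OR REPLACE FUNCTION count_set_bits(b bit varying) RETURNS integer AS $$
DECLARE
    n integer := 0;
    i integer;
BEGIN
    -- walk the bit string and count the positions that are 1
    FOR i IN 1 .. length(b) LOOP
        IF substring(b FROM i FOR 1) = B'1' THEN
            n := n + 1;
        END IF;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

SELECT count_set_bits(B'101101');  -- 4
```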
Re: [GENERAL] Limit number connections by IP
Thanks Stefan On 13 jul, 12:25, [EMAIL PROTECTED] (Stefan Kaltenbrunner) wrote: tabai wrote: Hi I know that I can limit the total number of connections in postgresql.conf with max_connections, but... can I limit the max number of connections from a specific IP? For example I'd like a total max of 40 connections but the same IP can't have more than 5 connections open. Is it possible? no - you can limit the maximum number of connections on a per database and also a per role base. If you really need a per source address limitation look into using whatever firewall solution is available on your OS. Stefan
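[Editor's note] The per-role and per-database limits Stefan mentions look like this on 8.1 and later; the role and database names here are hypothetical. Per-source-IP limits must still be enforced outside PostgreSQL, e.g. by a firewall rule:

```sql
ALTER ROLE app_user CONNECTION LIMIT 5;     -- cap one role at 5 connections
ALTER DATABASE app_db CONNECTION LIMIT 40;  -- cap the whole database at 40
```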
[GENERAL] creating indexes on large tables and int8
Hi, I'm trying to populate a database. I dropped all indexes on the target table to speed up the copy. It works fine. After this I create the index and it took 10 hours just for one index (the primary key). I have 100,000,000 rows with one PK (int8), two integer data values, and two FKs (int8). Are there other options than maintenance_work_mem to speed up index creation? How do I find the optimal value for maintenance_work_mem? At the moment I have 160MB of maintenance_work_mem. Related questions: I use int8 types in most PK or FK columns. I could change my Java code to use integer instead of Long (I don't know why I took Long in the first place). a) Would int4 instead of int8 speed up creation of the index? b) It will reduce the size of the table, of course. Would this reduce the size of the index, too? By the same amount? c) How much speed-up will I gain on queries? The PostgreSQL docs mention it in the data types section without saying how much speed-up I'd gain. Please, I just want to know if it's worth it. Is it more like 0.1%, 1%, 10% or 50%? Any help on speeding this up is very appreciated. kind regards, janning
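[Editor's note] A sketch of raising maintenance_work_mem for the index build only; the value and object names are assumptions, and on pre-8.2 servers the setting is an integer in kB:

```sql
-- Raise sort memory for this session only, build the index, then reset.
SET maintenance_work_mem = 524288;      -- 512MB, expressed in kB
ALTER TABLE mytable ADD PRIMARY KEY (id);
RESET maintenance_work_mem;
```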
[GENERAL] How to Cluster an Index live?
Hi, How can I have my table clustered as records are being inserted/updated (per transaction)? I tried doing so by creating an index using pgAdmin III, and it generated the following SQL: CREATE INDEX someindex ON mytable (m2); ALTER TABLE mytable CLUSTER ON someindex; Now, the above DID NOT cluster my table and I had to run PostgreSQL's own CLUSTER command to have it clustered. According to my db-savvy colleagues (who use SQL Server), a live cluster is a must-have on a high-traffic large db and they are questioning this feature of PostgreSQL. I'd appreciate comments here, Thanks, Mike
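[Editor's note] For reference, the one-off reorder the poster had to run uses the pre-8.3 syntax shown below (table/index names are from the post); note it takes an exclusive lock on the table while it runs:

```sql
CLUSTER someindex ON mytable;  -- pre-8.3 syntax: index first, then table
-- Once ALTER TABLE ... CLUSTER ON has recorded the index, later runs can be:
CLUSTER mytable;
```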
Re: [GENERAL] How to Cluster an Index live?
According to the PostgreSQL.org TODO, under CLUSTER:
* -Make CLUSTER preserve recently-dead tuples per MVCC requirements
* Automatically maintain clustering on a table. This might require some background daemon to maintain clustering during periods of low usage. It might also require tables to be only partially filled for easier reorganization. Another idea would be to create a merged heap/index data file so an index lookup would automatically access the heap data too. A third idea would be to store heap rows in hashed groups, perhaps using a user-supplied hash function. http://archives.postgresql.org/pgsql-performance/2004-08/msg00349.php
* %Add default clustering to system tables. To do this, determine the ideal cluster index for each system table and set the cluster setting during initdb.
* %Add VERBOSE option to report tables as they are processed, like VACUUM VERBOSE
* -Add more logical syntax: CLUSTER table USING index; support current syntax for backward compatibility
The question is when these todo items are going to get addressed.
Re: [GENERAL] How to Cluster an Index live?
Look for it in 8.3: http://archives.postgresql.org/pgsql-patches/2006-08/msg00124.php
Re: [GENERAL] PostGreSQL Replication
Sounds like something you'd want to handle within the application. I believe I will try to follow this path.
[GENERAL] SMTP
Can I have a straw poll on the best way to glue SMTP to Postgres? Is exim the only option? Cheers
Re: [GENERAL] restore dump to 8.1.9
On Saturday 14 July 2007 00:04:08 Jim Nasby wrote: On Jul 13, 2007, at 2:11 PM, [EMAIL PROTECTED] wrote: i tried to restore a dump from version 8.1.8 to 8.1.9 and i had in one table a value 1.7383389519587511e-310 i got the following error message: pg_restore: ERROR: type double precision value out of range: underflow CONTEXT: COPY gesamtpunktecache, line 925001, column gc_gesamtsiege: 1.7383389519587511e-310 pg_restore: [archiver (db)] error returned by PQendcopy: ERROR: type double precision value out of range: underflow Is this on *identical* hardware, and were the binaries built the same? Floating point stuff is very hardware dependent, and may have some dependencies on compiler, etc as well. No, it was different hardware and maybe a different compiler. Hmm, I do understand why computers are not masters of floating point (though I don't understand how they can fly airplanes), but a dump should be something which can be used across different hardware, shouldn't it? kind regards, janning
[GENERAL] unconvertible characters
My 8.0.1 database is using ISO_8859_8 encoding. When I select specific fields I get a warning: WARNING: ignoring unconvertible ISO_8859_8 character 0x00c2 I now want to upgrade my database to 8.2.4 and change the encoding to UTF-8. When the restore is done, I get the following errors: pg_restore: restoring data for table manufacturers_old pg_restore: [archiver (db)] Error from TOC entry 4836; 0 9479397 TABLE DATA manufacturers postgres pg_restore: [archiver (db)] COPY failed: ERROR: character 0xc2 of encoding ISO_8859_8 has no equivalent in UTF8 CONTEXT: COPY manufacturers_old, line 331 And no data is put into the table. Is there a function I can use to replace the unconvertible characters with blanks? Such as: update manufacturers set manufacturername=replace(manufacturername,0x00c2,'') (that query doesn't work.) Or is there another way of doing it so that I just get rid of any characters that are not convertible? Thank You Sim
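[Editor's note] If the goal really is to blank out the byte (rather than reinterpret the encoding, which replies in this thread suggest checking first), a sketch of the UPDATE: chr(194) produces byte 0xC2 in a single-byte database encoding. Table and column names are taken from the post; the approach is an assumption and untested on 8.0:

```sql
-- Replace the 0xC2 byte with nothing; chr() takes the decimal code
UPDATE manufacturers
   SET manufacturername = replace(manufacturername, chr(194), '');
```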
Re: [GENERAL] pg_dump vs schemas
Tom Lane writes: Shouldn't the public. be left out? I'm hesitant to remove the schema spec for fear we'd end up with underspecified output in some corner case or other. Totally understand. How about making it an option? Just like the --no-owner option. There are options that one may rarely use, but that can be very useful for certain types of specialized restores.
Re: [GENERAL] pg_dump without blobs
Sébastien Boutté writes: I try with options : -n public (without -b) but it's always the same, the dump is huge. According to the man page that should have done it: Note: Non-schema objects such as blobs are not dumped when -n is specified. You can add blobs back to the dump with the --blobs switch. I guess the next question is 'what does postgresql consider a blob'? bytea fields? How about a large text field with megabytes worth of data?
Re: [GENERAL] SMTP
I wrote a plpython function to have my database send email through an SMTP server. If that is of interest I can post my function. paddy carroll wrote: Can I have a straw poll on the best way to glue smtp to postgres, is exim the only option? Cheers
[GENERAL] encrypting a query string
Hi all, Is it possible to take a string (ie: a user's password) and have postgres encrypt the string before performing the query? At the moment, I am using postgresql + postfix for email. I need to save the passwords in clear text in the DB and I don't feel safe doing that. I'd like to save the passwords as an SHA1 hash. Then when postfix checks the password it uses a query that converts the passed password into an SHA1 hash then performs the comparison. So a pseudo code of what I'd like is: SELECT foo FROM table WHERE passwd=sha1($password); I did a search on postgres's website and the only reference to SHA1 I saw was in the connection to psql. I hope this doesn't mean it's not possible. I'm much less familiar with postfix and am hoping to avoid mucking around with it. :P Thanks!! Madison
Re: [GENERAL] unconvertible characters
On Mon, Jul 16, 2007 at 04:20:22PM +0300, Sim Zacks wrote: My 8.0.1 database is using ISO_8859_8 encoding. When I select specific fields I get a warning: WARNING: ignoring unconvertible ISO_8859_8 character 0x00c2 Did any of the data originate on Windows? Might the data be in Windows-1255 or some encoding other than ISO-8859-8? In Windows-1255 0xc2 represents U+05B2 HEBREW POINT HATAF PATAH -- does that character seem correct in the context of the data? http://en.wikipedia.org/wiki/Windows-1255 I now want to upgrade my database to 8.2.4 and change the encoding to UTF-8. When the restore is done, I get the following errors: pg_restore: restoring data for table manufacturers_old pg_restore: [archiver (db)] Error from TOC entry 4836; 0 9479397 TABLE DATA manufacturers postgres pg_restore: [archiver (db)] COPY failed: ERROR: character 0xc2 of encoding ISO_8859_8 has no equivalent in UTF8 CONTEXT: COPY manufacturers_old, line 331 And no data is put into the table. Is there a function I can use to replace the unconvertible characters with blanks? If the data is in an encoding other than ISO-8859-8 then you could redirect the output of pg_restore to a file or pipe it through a filter and change the SET client_encoding line to whatever the encoding really is. For example, if the data is Windows-1255 then you'd use the following: SET client_encoding TO win1255; Another possibility would be to use a command like iconv to convert the data to UTF-8 and strip unconvertible characters; on many systems you could do that with iconv -f iso8859-8 -t utf-8 -c. If you convert to UTF-8 then you'd need to change client_encoding accordingly. -- Michael Fuhr
Re: [GENERAL] creating indexes on large tables and int8
Janning Vygen [EMAIL PROTECTED] writes: After this i create the index and it took 10 hours just for one index (primary key). I have 100.000.000 rows with one PK (int8), two integer data values, and two FK (int8) What PG version is this? We did a fair amount of work on sort speed for 8.2. regards, tom lane
Re: [GENERAL] unconvertible characters
Michael, I have been manually debugging and each symbol is different, though they each give the same error code. For example, in one it was a pound sign, though when I did an update and put in the pound sign it worked. Another time it was the degree symbol. I'm going to look at iconv as that sounds like the best possibility. Sim Michael Fuhr wrote: On Mon, Jul 16, 2007 at 04:20:22PM +0300, Sim Zacks wrote: My 8.0.1 database is using ISO_8859_8 encoding. When I select specific fields I get a warning: WARNING: ignoring unconvertible ISO_8859_8 character 0x00c2 Did any of the data originate on Windows? Might the data be in Windows-1255 or some encoding other than ISO-8859-8? In Windows-1255 0xc2 represents U+05B2 HEBREW POINT HATAF PATAH -- does that character seem correct in the context of the data? http://en.wikipedia.org/wiki/Windows-1255 I now want to upgrade my database to 8.2.4 and change the encoding to UTF-8. When the restore is done, I get the following errors: pg_restore: restoring data for table manufacturers_old pg_restore: [archiver (db)] Error from TOC entry 4836; 0 9479397 TABLE DATA manufacturers postgres pg_restore: [archiver (db)] COPY failed: ERROR: character 0xc2 of encoding ISO_8859_8 has no equivalent in UTF8 CONTEXT: COPY manufacturers_old, line 331 And no data is put into the table. Is there a function I can use to replace the unconvertable charachters to blanks? If the data is in an encoding other than ISO-8859-8 then you could redirect the output of pg_restore to a file or pipe it through a filter and change the SET client_encoding line to whatever the encoding really is. For example, if the data is Windows-1255 then you'd use the following: SET client_encoding TO win1255; Another possibility would be to use a command like iconv to convert the data to UTF-8 and strip unconvertible characters; on many systems you could do that with iconv -f iso8859-8 -t utf-8 -c. If you convert to UTF-8 then you'd need to change client_encoding accordingly. 
Re: [GENERAL] pg_dump without blobs
On Jul 16, 2007, at 9:26 AM, Francisco Reyes wrote: I guess the next question is 'what does postgresql consider a blob'? bytea fields? How about a large text with megabytes worth of data? bytea and text fields are NOT blobs; blobs are what you access via the 'large object' functions.
Re: [GENERAL] pg_dump without blobs
Vivek Khera wrote: On Jul 16, 2007, at 9:26 AM, Francisco Reyes wrote: I guess the next question is 'what does postgresql consider a blob'? bytea fields? How about a large text with megabytes worth of data? bytea and text fields are NOT blobs; blobs are what you access via the 'large object' functions. To follow up on this: in Oracle, large text (CLOB) and binary (BLOB) objects are synonymous (I believe) with PostgreSQL TEXT and BYTEA. PostgreSQL also supports a non-standard, and frankly better, implementation called lo for binary data, which also uses BYTEA data but breaks it up to make it more efficient per row. There is no way to not dump your TEXT and BYTEA data from a particular column if you are dumping the whole table. One option would be to use CREATE TEMP TABLE AS SELECT... and then dump that temp table. Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/
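[Editor's note] A sketch of the CREATE TEMP TABLE AS workaround; the table and column names are hypothetical. Since a temp table is session-local and invisible to a separate pg_dump session, export it from the same session with COPY instead:

```sql
CREATE TEMP TABLE my_table_slim AS
    SELECT id, name FROM my_table;  -- everything except the large bytea/text column
COPY my_table_slim TO '/tmp/my_table_slim.copy';  -- path is read by the server process
```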
[GENERAL] Postgres Geometry
I have developed a PostgreSQL database c/w a Delphi interface with which to input data. I would like to display this data graphically. Following is a small example of the type of graphic display I am seeking (attached image: clip_image001.gif). Can anyone tell me if Postgres Geo is a suitable vehicle for this application? If so, are there any tools that may assist me in developing this graphic interface? Bob Pawley
Re: [GENERAL] encrypting a query string
Madison Kelly wrote: Hi all, Is it possible to take a string (ie: a user's password) and have postgres encrypt the string before performing the query? At the moment, I am using postgresql + postfix for email. I need to save the passwords in clear text in the DB and I don't feel safe doing that. I'd like to save the passwords as an SHA1 hash. Then when postfix checks the password it uses a query that converts the passed password into an SHA1 hash then performs the comparison. So a pseudo code of what I'd like is: SELECT foo FROM table WHERE passwd=sha1($password); I did a search on postgres's website and the only reference to SHA1 I saw was in the connection to psql. I hope this doesn't mean it's not possible. I'm much less familiar with postfix and am hoping to avoid mucking around with it. :P You can use contrib/pgcrypto: http://developer.postgresql.org/pgdocs/pgsql/contrib/pgcrypto/ contrib is often installed along with postgres and contains extensions not (yet) included in core. http://developer.postgresql.org/pgdocs/pgsql/contrib/pgcrypto/sql/sha1.sql shows example usage. HTH Tino Wildenhain
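[Editor's note] To make the question's pseudo-code concrete with pgcrypto, assuming the module is installed and passwords are stored as hex SHA1 digests; "table_name" and the column names stand in for the poster's actual schema:

```sql
-- digest() returns bytea; encode(..., 'hex') yields the familiar hex form
SELECT encode(digest('secret', 'sha1'), 'hex');

-- The original pseudo-code, rewritten:
SELECT foo FROM table_name
 WHERE passwd = encode(digest('supplied-password', 'sha1'), 'hex');
```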
Re: [GENERAL] How to Cluster an Index live?
Mike [EMAIL PROTECTED] writes: Look for it in 8.3: http://archives.postgresql.org/pgsql-patches/2006-08/msg00124.php What he's looking for isn't actually included in that todo list. I'm sorry but nobody has been looking at an online cluster command. Clustered tables of the type you're imagining aren't really supported in Postgres at all. What Postgres does is reorder the table in place but the index is still stored separately. The patch you refer to here would help keep the table in order as updates and inserts happen which doesn't currently happen. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
Re: [GENERAL] Postgres Geometry
--- Bob Pawley [EMAIL PROTECTED] wrote: I have developed a PostgreSQL database c/w a Delphi interface with which to input data. If so, are there any tools that may assist me in developing this graphic interface? This link seemed interesting to me. http://www.netbeans.org/download/flash/netbeans_60/jl_preso_vislib/player.html
Re: [GENERAL] SMTP
On Sun, Jul 15, 2007 at 08:53:54AM +0100, paddy carroll wrote: Can I have a straw poll on the best way to glue smtp to postgres, is exim the only option? What do you mean, glue smtp to postgres? You want to authenticate from Pg, or store the mail in there, or send mail from the database, or. . .? A -- Andrew Sullivan | [EMAIL PROTECTED] Users never remark, Wow, this software may be buggy and hard to use, but at least there is a lot of code underneath. --Damien Katz
Re: [GENERAL] pg_dump vs schemas
On Fri, 2007-07-13 at 20:06 -0500, Michael Glaesemann wrote: On Jul 13, 2007, at 19:10 , Francisco Reyes wrote: Alternatively is there any easy way to take all data in one schema and load it into a target DB and a different schema? You might try using the -n flag with pg_dump, replace schema1 with schema2 in the dump file, and loading the altered dump into the new database. There may also be some tricks you can play with pg_restore (on a dump file created with pg_dump -Fc), though I don't know specifically what offhand. I would find it helpful if there were an easy way to rename objects (specifically schemas) during the restore process. Let's say I have a development database, and I want to copy the entire schema myapp_dev1 to schema myapp_dev2 on the same database. Currently, it's fairly awkward to do that. How do other people do that? Is it worth trying to add a way for pg_restore to rename objects? Or what about an option so pg_restore will not emit the schema name at all, and the user who restores can just set their search_path to where they want all the objects to go? Regards, Jeff Davis
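[Editor's note] One workaround for the same-database copy described above, assuming a fresh pg_dump -n myapp_dev1 dump file is at hand: rename the live schema, then restore the dump, which recreates myapp_dev1 next to the copy. A sketch:

```sql
ALTER SCHEMA myapp_dev1 RENAME TO myapp_dev2;
-- now restore the dump; it recreates myapp_dev1 alongside myapp_dev2
```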
[GENERAL] Moved postgres, now won't start
Hi all, I've created a small 2-node (Debian Etch, PgSQL 8.1) cluster using a (shared) DRBD8 partition formatted as ext3 running in Primary/Secondary mode. I shut down postgresql-8.1, moved '/etc/postgresql' and '/etc/postgresql-common' to '/ha/etc' (where '/ha' is the DRBD partition's mount point). Then I created symlinks to the directories under '/ha' and then restarted PostgreSQL. Everything *seemed* okay, until I tried to connect to a database (ie: 'template1' as 'postgres'). Then I get the error: $ psql template1 psql: FATAL: could not open file global/pg_database: No such file or directory When I tried connecting to another DB as a user with a (md5) password it recognizes if the password is right or not. Also, the file: # cat /var/lib/postgresql/8.1/main/global/pg_database postgres 10793 1663 499 499 template1 1 1663 499 499 template0 10792 1663 499 499 Exists, and is readable as you can see. Any idea what's wrong? Does it not like the '/var/lib/postgres' -> '/ha/var/lib/postgres' symlink? Thanks! Madison
Re: [GENERAL] Moved postgres, now won't start
Madison Kelly [EMAIL PROTECTED] writes: I've created a small 2-node (Debian Etch, PgSQL8.1) cluster using a (shared) DRBD8 partition formatted as ext3 running in Primary/Secondary mode. I shut down postgresql-8.1, moved '/etc/postgresql' and '/etc/postgres-commin' to '/ha/etc' (where '/ha' is the DRBD partitions mount point). Then I created symlinks to the directories under '/ha' and then restarted PostgreSQL. Everything *seemed* okay, until I tried to connect to a database (ie: 'template1' as 'postgres'). Then I get the error: $ psql template1 psql: FATAL: could not open file global/pg_database: No such file or directory I think that's the first actual file access that happens during the connect sequence (everything before that is done with in-memory caches in the postmaster). So what I'm wondering is whether you *really* shut down and restarted the postmaster, or whether you are trying to connect to the same old postmaster process that has now had all its files deleted out from under it. regards, tom lane
Re: [GENERAL] SMTP
Store mail, send mail, receive mail, filter mail. I need a mail firewall for a set of bespoke applications in a secure environment. I will probably use postfix. On 16 Jul 2007, at 17:54, Andrew Sullivan wrote: On Sun, Jul 15, 2007 at 08:53:54AM +0100, paddy carroll wrote: Can I have a straw poll on the best way to glue smtp to postgres, is exim the only option? What do you mean, glue smtp to postgres? You want to authenticate from Pg, or store the mail in there, or send mail from the database, or. . .? A -- Andrew Sullivan | [EMAIL PROTECTED] Users never remark, Wow, this software may be buggy and hard to use, but at least there is a lot of code underneath. --Damien Katz
Re: [GENERAL] Moved postgres, now won't start
Hi, Madison Kelly wrote: Hi all, I've created a small 2-node (Debian Etch, PgSQL8.1) cluster using a (shared) DRBD8 partition formatted as ext3 running in Primary/Secondary mode. I shut down postgresql-8.1, moved '/etc/postgresql' and '/etc/postgres-commin' to '/ha/etc' (where '/ha' is the DRBD partitions mount point). Then I created symlinks to the directories under '/ha' and then restarted PostgreSQL. Everything *seemed* okay, until I tried to connect to a database (ie: 'template1' as 'postgres'). Then I get the error: $ psql template1 psql: FATAL: could not open file global/pg_database: No such file or directory When I tried connecting to another DB as a user with a (md5) password it recognizes if the password is right or not. Also, the file: # cat /var/lib/postgresql/8.1/main/global/pg_database postgres 10793 1663 499 499 template1 1 1663 499 499 template0 10792 1663 499 499 Exists, and is readable as you can see. Any idea what's wrong? Does it not like that '/var/lib/postgres' -> '/ha/var/lib/postgres'? Thanks! Madison Do you use SELinux? Look for avc denied messages in the logs to see if it's the case. -- -- Zoltán Böszörményi Cybertec Geschwinde Schönig GmbH http://www.postgresql.at/
Re: [GENERAL] Moved postgres, now won't start
Tom Lane wrote: I think that's the first actual file access that happens during the connect sequence (everything before that is done with in-memory caches in the postmaster). So what I'm wondering is whether you *really* shut down and restarted the postmaster, or whether you are trying to connect to the same old postmaster process that has now had all its files deleted out from under it. regards, tom lane Thank you for your reply! Before the move: # /etc/init.d/postgresql-8.1 status Version Cluster Port Status Owner Data directory Log file 8.1 main 5432 online postgres /var/lib/postgresql/8.1/main /var/log/postgresql/postgresql-8.1-main.log # /etc/init.d/postgresql-8.1 stop Stopping PostgreSQL 8.1 database server: main. nicole:/etc/postgresql/8.1/main# /etc/init.d/postgresql-8.1 status Version Cluster Port Status Owner Data directory Log file 8.1 main 5432 down postgres /var/lib/postgresql/8.1/main /var/log/postgresql/postgresql-8.1-main.log I hope that doesn't get too mangled. Unless I am misunderstanding 'stop', I think it was stopped. I made the move/symlinks mentioned in my first post, then restarted. For double certainty, I switched to the slave node after shutting down postgres on the master node and double-checked that it was still 'down' as well. Madison
Re: [GENERAL] Moved postgres, now won't start
Zoltan Boszormenyi wrote: Do you use SELinux? Look for "avc denied" messages in the logs to see if it's the case.

No, I don't (unless I missed it and Debian Etch uses it by default now). To be sure, I checked the log files and only saw this:

2007-07-16 13:58:03 EDT LOG: incomplete startup packet
2007-07-16 13:58:04 EDT LOG: could not open temporary statistics file "global/pgstat.tmp": No such file or directory
2007-07-16 13:59:03 EDT FATAL: could not open file "global/pg_database": No such file or directory
2007-07-16 13:59:04 EDT LOG: could not open temporary statistics file "global/pgstat.tmp": No such file or directory
2007-07-16 14:00:03 EDT FATAL: could not open file "global/pg_database": No such file or directory

Over and over again. I tried shutting down postgresql again and got this at the shell:

# /etc/init.d/postgresql-8.1 stop
Stopping PostgreSQL 8.1 database server: main*
pg_ctl: postmaster does not shut down (does not shutdown gracefully, now stopping immediately)
pg_ctl: could not send stop signal (PID: 19958): No such process
Insecure dependency in kill while running with -T switch at /usr/bin/pg_ctlcluster line 370. (does not shutdown, killing the process) failed!

And this in the logs:

2007-07-16 14:28:00 EDT LOG: received fast shutdown request
2007-07-16 14:28:00 EDT LOG: shutting down
2007-07-16 14:28:00 EDT PANIC: could not open control file "global/pg_control": No such file or directory
2007-07-16 14:28:00 EDT LOG: background writer process (PID 19960) was terminated by signal 6
2007-07-16 14:28:00 EDT LOG: terminating any other active server processes
2007-07-16 14:28:00 EDT LOG: all server processes terminated; reinitializing
2007-07-16 14:28:00 EDT LOG: could not open file "postmaster.pid": No such file or directory
2007-07-16 14:28:00 EDT PANIC: could not open control file "global/pg_control": No such file or directory
2007-07-16 14:28:00 EDT LOG: could not open temporary statistics file "global/pgstat.tmp": No such file or directory

Lastly, to be very sure, I tried grep'ing for that string with no results:

nicole:/var/log# grep "avc denied" * -Rni
nicole:/var/log#

Thanks for the reply! Madison ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] What's the logical counterpart of the to_hex function?
I have data (from an external source) which is in text format as a hex number (it's actually an IP address, but that's probably not relevant). to_hex gets from integer to hex, and I can cast a hex constant to integer (e.g. x'12a7'::int), but how do I get a database column from string/hex to integer? (If this is referenced in the documentation, I'm not finding it.) -- Mike Nolan
Re: [GENERAL] Moved postgres, now won't start
Tom Lane wrote: Madison Kelly [EMAIL PROTECTED] writes: I've created a small 2-node (Debian Etch, PgSQL8.1) cluster using a (shared) DRBD8 partition formatted as ext3 running in Primary/Secondary mode. I shut down postgresql-8.1, moved '/etc/postgresql' and '/etc/postgres-commin' to '/ha/etc' (where '/ha' is the DRBD partitions mount point). Then I created symlinks to the directories under '/ha' and then restarted PostgreSQL. Everything *seemed* okay, until I tried to connect to a database (ie: 'template1' as 'postgres'). Then I get the error: $ psql template1 psql: FATAL: could not open file global/pg_database: No such file or directory I think that's the first actual file access that happens during the connect sequence (everything before that is done with in-memory caches in the postmaster). So what I'm wondering is whether you *really* shut down and restarted the postmaster, or whether you are trying to connect to the same old postmaster process that has now had all its files deleted out from under it. To test your idea, I rebooted both cluster nodes and it works now. How could I have done this without requiring a reboot? Is there a way to tell postgres to create an entirely new connection? Thanks!! Madison ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] pg_dump vs schemas
Jeff Davis writes: Let's say I have a development database, and I want to copy the entire schema myapp_dev1 to schema myapp_dev2 on the same database. Currently, it's fairly awkward to do that. One possible way may be: dump the source schema, remove references to the schema name in the pg_dump file, change search_path and restore the schema. Not sure how this would work with the special -Fc format; it should work with the plain text format. How do other people do that? Is it worth trying to add a way for pg_restore to rename objects? In the particular case that you mentioned, if pg_dump could be made to not include the schema name anywhere, that should make the process easier. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Moved postgres, now won't start
Madison Kelly [EMAIL PROTECTED] writes:

Over and over again. I tried shutting down postgresql again and got this at the shell:

# /etc/init.d/postgresql-8.1 stop
Stopping PostgreSQL 8.1 database server: main*
pg_ctl: postmaster does not shut down (does not shutdown gracefully, now stopping immediately)
pg_ctl: could not send stop signal (PID: 19958): No such process
Insecure dependency in kill while running with -T switch at /usr/bin/pg_ctlcluster line 370. (does not shutdown, killing the process) failed!

And this in the logs:

2007-07-16 14:28:00 EDT LOG: received fast shutdown request
2007-07-16 14:28:00 EDT LOG: shutting down
2007-07-16 14:28:00 EDT PANIC: could not open control file "global/pg_control": No such file or directory
2007-07-16 14:28:00 EDT LOG: background writer process (PID 19960) was terminated by signal 6
2007-07-16 14:28:00 EDT LOG: terminating any other active server processes
2007-07-16 14:28:00 EDT LOG: all server processes terminated; reinitializing
2007-07-16 14:28:00 EDT LOG: could not open file "postmaster.pid": No such file or directory
2007-07-16 14:28:00 EDT PANIC: could not open control file "global/pg_control": No such file or directory
2007-07-16 14:28:00 EDT LOG: could not open temporary statistics file "global/pgstat.tmp": No such file or directory

I think this proves my theory --- that all looks like leftover processes trying to work in an installation that isn't there anymore. (Except I have no idea what the "insecure dependency" bit is about.) What I suspect happened is that you moved the directories before you actually shut down the old postmaster, and then the initscript's stop command would have failed because it couldn't find the postmaster.pid file. You could get rid of the old postmaster by doing "ps auxww | grep post" to determine its PID and then "kill -QUIT postmaster_pid".

The real problem you're likely to have is that if you moved the directories while anything was happening, you'll have an inconsistent snapshot of the database files, probably meaning database corruption. There isn't anything much you can do about that at this stage (although REINDEXing your more active tables might not be a bad idea, once you've got the thing talking to you again). I hope you have a reasonably recent backup to resort to, in case it emerges that things are hopelessly messed up. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pg_dump vs schemas
On Mon, 2007-07-16 at 14:58 -0400, Francisco Reyes wrote: One possible way may be: Dump the source schema. Remove references to the schema name in the pg_dump file. Change search_path and restore schema. That's what I currently do. It seems a little flimsy though: there are too many objects to really remove the references by hand, so we do a global search-and-replace. As long as the schema name is unique enough, I suppose it's alright for a development (non-production) database. How do other people do that? Is it worth trying to add a way for pg_restore to rename object? In the particular case that you mentioned, if pg_dump could be made to not include the schema name anywhere that should make the process easier. That's what I was thinking. It might be better placed in pg_restore though, so that way you can decide after you've already made the backup. Regards, Jeff Davis ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] What's the logical counterpart of the to_hex function?
On Jul 16, 2007, at 11:36 AM, Michael Nolan wrote: I have data (from an external source) which is in text format as a hex number (it's actually an IP address, but that's probably not relevant.) It likely is relevant, as it means it's a 32 bit unsigned integer, which isn't something postgresql supports. Depending on what you need you might want to look at using 32 bit signed with a 2^31 offset, 64 bit signed, inet, or ip4 (which is on pgfoundry, not built-in). to_hex gets from integer to hex, I can cast a hex constant to integer (e.g. x'12a7'::int), but how do I get a database column from string/hex to integer? Last time I checked, the suggested way to do this was with a plpgsql function that dynamically creates the cast and does it with EXECUTE. Google for postgresql and hex and you can likely find sample code. Cheers, Steve ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
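One workaround that avoids dynamic SQL is the bit-string cast trick: prepend 'x' to the hex text and cast through bit(32). This is a sketch, not from the thread; hex_col and some_table are hypothetical names, and it assumes the text-to-bit cast behaves as it does in later PostgreSQL releases.

```sql
-- Left-pad to 8 hex digits so the bit(32) cast lines up, then cast to int.
-- Caveat: values at or above 0x80000000 come out negative, since int4 is signed.
SELECT ('x' || lpad(hex_col, 8, '0'))::bit(32)::int AS as_integer
FROM some_table;
-- e.g. for hex_col = '12a7' this yields 4775
```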
[GENERAL] deferred check constraints
Right now, it would be nice if I could get a check constraint to be deferred. It's a long story. I want a circular constraint. The way things are set up right now, it would be easy if I could defer my check constraint. I'm doing a polymorphic relation. One direction is a simple reference to a fixed table. The other direction is a reference to a table that changes based upon the type of the item. I can do this check in a function, which implies it is a check constraint. I may figure out how to flip everything around so that the simple reference constraint could be deferred. That is one option. The other option is to add deferred check constraints to PostgreSQL. I've never looked at the PostgreSQL code but I like parsers, etc. How hard would it be to add this to PostgreSQL, and is it something of general interest or am I somewhat lost in the woods? Thank you, Perry ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] deferred check constraints
Perry Smith [EMAIL PROTECTED] writes: Right now, it would be nice if I could get a check constraint to be deferred. Possibly you could use a deferred constraint trigger and do the check inside that, but I think you will find there are all sorts of race conditions in this design. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
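Tom's suggestion can be sketched as follows. This is an illustration only, with made-up table names a and b (b standing in for whichever table the polymorphic reference targets); the check runs at commit time rather than at insert time, and the race conditions Tom mentions still apply.

```sql
-- A trigger function that verifies the matching row exists in b.
CREATE OR REPLACE FUNCTION check_a_has_b() RETURNS trigger AS $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = NEW.id) THEN
        RAISE EXCEPTION 'no matching row in b for a.id = %', NEW.id;
    END IF;
    RETURN NULL;  -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

-- Deferred, so it fires at COMMIT, after both halves of the pair are inserted.
CREATE CONSTRAINT TRIGGER a_requires_b
    AFTER INSERT OR UPDATE ON a
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_a_has_b();
```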
Re: [GENERAL] deferred check constraints
Perry Smith [EMAIL PROTECTED] writes: Right now, it would be nice if I could get a check constraint to be deferred. Its a long story. I want a circular constraint. The way things are set up right now, it would be easy if I could defer my check constraint. I'm doing a polymorphic relation. One direction is a simple reference a fixed table. The other direction is a reference to table that changes based upon the type of the item. I can do this check in a function which implies it is a check constraint. The main problem with this is that check constraints which refer to other tables don't really work. Not to the degree of rigour that referential integrity checks maintain. Consider what happens if someone updates the record you're targeting but hasn't committed yet. Your check constraint will see the old version and pass even though it really shouldn't. It'll even pass if the update has committed but your query started before it did so. The other option is to add deferred check constraints to PostgreSQL. I've never looked at the PostgreSQL code but I like parsers, etc. How hard would it be to add this to PostgreSQL and is it something of general interest or am I somewhat lost in the woods? I suspect the reason they don't exist is precisely as above that they don't really make a lot of sense. If your check constraint can't usefully include queries on other tables then there's no reason to defer it. Your record isn't going to become acceptable later if it isn't now. But many people do put queries in check constraints with the caveats understood -- it's still useful if the referred-to data is basically static. So perhaps it would be useful. There is something in the standard called Assertions which I think are supposed to address this issue. But they're hard and I don't know if any database supports them. I wonder if we did whether anyone would find them useful. 
-- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] pg_dump without blobs
I understand that I cannot do a dump of a database without bytea values. I hope that this feature will be present in the next version of pg_dump, as we can store large binary values in these sorts of fields and pg_dump takes a lot of time dumping the database. For the moment, I will patch my database to transform my bytea field into a lo (oid) field. I would like to do something like this: update table set new_field = oldfield, but I have a problem with the cast from bytea to oid. Is there an efficient way of doing this update? Thank you, Sébastien Boutté On 7/16/07, Joshua D. Drake [EMAIL PROTECTED] wrote: Vivek Khera wrote: On Jul 16, 2007, at 9:26 AM, Francisco Reyes wrote: I guess the next question is 'what does postgresql consider a blob'? bytea fields? How about a large text with megabytes worth of data? bytea and text fields are NOT blobs. Blobs are what you access via the 'large object' functions. To follow up on this: in Oracle, large text CLOB and binary BLOB objects are synonymous (I believe) with PostgreSQL TEXT and BYTEA. PostgreSQL also supports a non-standard, and frankly better, implementation called lo for binary data, which also uses BYTEA data but breaks it up to make it more efficient per row. There is no way to not dump your TEXT and BYTEA data from a particular column if you are dumping the whole table. One option would be to use CREATE TEMP TABLE AS SELECT... and then dump that temp table. Joshua D. Drake ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq -- === The PostgreSQL Company: Command Prompt, Inc. 
=== Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] deferred check constraints
On Jul 16, 2007, at 3:08 PM, Gregory Stark wrote: Perry Smith [EMAIL PROTECTED] writes: Right now, it would be nice if I could get a check constraint to be deferred. Its a long story. I want a circular constraint. The way things are set up right now, it would be easy if I could defer my check constraint. I'm doing a polymorphic relation. One direction is a simple reference a fixed table. The other direction is a reference to table that changes based upon the type of the item. I can do this check in a function which implies it is a check constraint. The main problem with this is that check constraints which refer to other tables don't really work. Not to the degree of rigour that referential integrity checks maintain. Consider what happens if someone updates the record you're targeting but hasn't committed yet. Your check constraint will see the old version and pass even though it really shouldn't. It'll even pass if the update has committed but your query started before it did so. This brings up a point that I have wondered about. I think I need a nice clear concise explanation of how the magic of a relational database transactions are done. I'll go see if I can find one. If anyone has a pointer to one, that will help me the most right now. The other option is to add deferred check constraints to PostgreSQL. I've never looked at the PostgreSQL code but I like parsers, etc. How hard would it be to add this to PostgreSQL and is it something of general interest or am I somewhat lost in the woods? I suspect the reason they don't exist is precisely as above that they don't really make a lot of sense. If your check constraint can't usefully include queries on other tables then there's no reason to defer it. Your record isn't going to become acceptable later if it isn't now. The constraint will be valid before the transaction completes (is what I am thinking). I need to add an element to table A and an element to table B that reference each other. 
The polymorphic gunk comes up because table B is not the same table each time. I just want something that will fire after the inserts but before the transaction ends that will make sure that A -> B and B -> A. Thank you for your help, Perry ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] pg_dump without blobs
Sébastien Boutté writes: I understand that I cannot do a dump of a database without bytea values. What version of postgresql? If 8.2, you can do a COPY TO with a subselect: copy (select the fields except bytea) to '/somepath'; ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
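The COPY-with-subselect form Francisco mentions looks like this in 8.2. A hedged sketch; the table, columns, and output path are placeholders, not from the thread:

```sql
-- Dump every column except the bytea one; COPY (query) TO requires 8.2+,
-- and the path is written by the server process, so it must be server-side.
COPY (SELECT id, name, created_at FROM documents)
    TO '/tmp/documents_no_blobs.copy';
```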
Re: [GENERAL] SMTP
paddy carroll writes: store mail , send mail, receive mail, filter mail. I need a mail firewall for a set of bespoke applications in a secure environment. I will probably use postfix Postfix works well with Postgresql. For greylisting Sqlgrey works well with Postgresql. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] deferred check constraints
On Jul 16, 2007, at 3:29 PM, Perry Smith wrote: On Jul 16, 2007, at 3:08 PM, Gregory Stark wrote: Perry Smith [EMAIL PROTECTED] writes: Right now, it would be nice if I could get a check constraint to be deferred. Its a long story. I want a circular constraint. The way things are set up right now, it would be easy if I could defer my check constraint. I'm doing a polymorphic relation. One direction is a simple reference a fixed table. The other direction is a reference to table that changes based upon the type of the item. I can do this check in a function which implies it is a check constraint. The main problem with this is that check constraints which refer to other tables don't really work. Not to the degree of rigour that referential integrity checks maintain. Consider what happens if someone updates the record you're targeting but hasn't committed yet. Your check constraint will see the old version and pass even though it really shouldn't. It'll even pass if the update has committed but your query started before it did so. This brings up a point that I have wondered about. I think I need a nice clear concise explanation of how the magic of relational database transactions is done. I'll go see if I can find one. If anyone has a pointer to one, that will help me the most right now. The postgres docs are great: http://www.postgresql.org/docs/8.2/interactive/mvcc.html Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Capturing return value of a function
Hi, I have a function like this:

CREATE OR REPLACE FUNCTION sp_insert(text, text, text, text)
RETURNS text AS
$BODY$
BEGIN
  INSERT INTO tbl(a,b,c,d) VALUES ($1, $2, $3, $4);
  RETURN 'success';
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

I was wondering what the function would return if the insert fails. I want it to return 'success' upon a successful insert and 'failure' if the insert fails. Would the following implementation work for that?

CREATE OR REPLACE FUNCTION sp_insert(text, text, text, text)
RETURNS text AS
$BODY$
BEGIN
  BEGIN
    INSERT INTO tbl(a,b,c,d) VALUES ($1, $2, $3, $4);
    RETURN 'success';
  END;
  RETURN 'failure';
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;
Re: [GENERAL] SMTP
[EMAIL PROTECTED] (paddy carroll) writes: store mail , send mail, receive mail, filter mail. I need a mail firewall for a set of bespoke applications in a secure environment. I will probably use postfix It is possible to configure various parts of postfix (e.g. - aliases and other such) via queries against PostgreSQL tables, so this is something that PostgreSQL could help with. -- cbbrowne,@,linuxdatabases.info http://linuxfinances.info/info/linuxxian.html The problem with the current Lisp Machine system is that nothing ever calls anything anymore. -- KMP ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
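Postfix reads such tables through its pgsql lookup type. A minimal sketch, assuming Postfix was built with pgsql support; the map file name, credentials, table, and columns are all made up for illustration:

```
# /etc/postfix/pgsql-aliases.cf (hypothetical)
hosts = localhost
user = postfix
password = secret
dbname = mail
query = SELECT goto FROM aliases WHERE address = '%s'

# main.cf
virtual_alias_maps = pgsql:/etc/postfix/pgsql-aliases.cf
```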
Re: [GENERAL] deferred check constraints
Perry Smith [EMAIL PROTECTED] writes: I need to add an element to table A and an element to table B that reference each other. The polymorphic gunk comes up because table B is not the same table each time. Not that I haven't done similar things in the past but you should think hard about whether you can't normalize your data further to avoid this. It will make querying your data later much easier. (Think of what your joins are going to look like.) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] pg_dump without blobs
I use version 8.2.4, since last weekend, in my production database. I don't want to do a per-table dump (I have more than 300 tables). It's not maintainable as the schema evolves regularly. On 7/16/07, Francisco Reyes [EMAIL PROTECTED] wrote: Sébastien Boutté writes: I understand that i cannot do a dump of a database without bytea values. What version of postgresql? If 8.2 you can do a copy to with a subselect copy (select with fields except bytea) to '/somepath'; ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Capturing return value of a function
On Jul 16, 2007, at 15:55, Jasbinder Singh Bali wrote: I was wondering what would the function return if insert fails. I want it to return 'success' upon a successful insert and 'failure' if insert fails. Depends on what you mean by failed. Do you mean an error was raised? Then you'll just get a normal error message. To catch this, check the error trapping section of the PL/pgSQL documentation. Do you mean nothing was inserted? If so, you won't catch this in your above code. You can check the FOUND special variable after the INSERT to see if anything was inserted and set your result message based on that. Would the following implementation work for that? No. BEGIN/END blocks in PL/pgSQL are not equivalent to SQL BEGIN/COMMIT transactions. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
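The FOUND check Michael describes would look like this inside the poster's function. A sketch only; it catches "nothing was inserted", not errors (an error still aborts the function unless trapped separately):

```sql
CREATE OR REPLACE FUNCTION sp_insert(text, text, text, text)
RETURNS text AS $BODY$
BEGIN
    INSERT INTO tbl(a,b,c,d) VALUES ($1, $2, $3, $4);
    IF FOUND THEN        -- true if at least one row was inserted
        RETURN 'success';
    END IF;
    RETURN 'failure';    -- e.g. a rule or trigger suppressed the row
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
```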
Re: [GENERAL] Capturing return value of a function
Jasbinder Singh Bali [EMAIL PROTECTED] writes: I was wondering what would the function return if insert fails. It would abort immediately and abort your transaction as well, unless something higher up catches the error. I want it to return 'success' upon a successful insert and 'failure' if insert fails. You need an EXCEPTION WHEN clause, see: http://www.postgresql.org/docs/8.2/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
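Trapping an actual error takes an EXCEPTION clause, per the page Gregory links. A sketch of the poster's function rewritten that way; note that a block with an EXCEPTION clause runs in a subtransaction, which adds some overhead:

```sql
CREATE OR REPLACE FUNCTION sp_insert(text, text, text, text)
RETURNS text AS $BODY$
BEGIN
    INSERT INTO tbl(a,b,c,d) VALUES ($1, $2, $3, $4);
    RETURN 'success';
EXCEPTION
    WHEN OTHERS THEN
        -- The failed INSERT is rolled back; the outer transaction survives.
        RETURN 'failure';
END;
$BODY$ LANGUAGE plpgsql VOLATILE;
```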
[GENERAL] psql error while restoring database: unrecognized node type: 655
I'm attempting to dump and restore an 8.2.4 database to another (same architecture) machine also running 8.2.4 with a freshly initialized database. I'm using this script to generate the pg_dump file:

#!/bin/sh
DATE=`date +%Y%m%d%H%M%S`
#dump the live wykids database
/usr/local/bin/pg_dumpall -c -h localhost \
  > /home/_postgresql/wykids$DATE.sql
#slony
/usr/local/bin/dropdb -h slony.internal wykids
#recreate the development wykids database from the dump file we just made
#first we connect to template1 and set template0 to accept connections
/usr/local/bin/psql -U _postgresql -h slony.internal -t -c \
  "update pg_database set datallowconn = 't' where datname = 'template0';" template1
#template0 is stock database--no additions whatsoever
/usr/local/bin/psql -h slony.internal template0 -f \
  /home/_postgresql/wykids$DATE.sql
#now we connect to template1 again and set template0 to not accept connections
/usr/local/bin/psql -U _postgresql -h slony.internal -t -c \
  "update pg_database set datallowconn = 'f' where datname = 'template0';" template1
#vacuum analyze
/usr/local/bin/psql -U _postgresql -h slony.internal -t -c \
  "vacuum analyze;" wykids

When restoring to slony, psql begins spewing errors and eventually stops. I've narrowed the problem to a specific table and tried doing a pg_dump on just that table. Using psql to load that one table gives me the same error, which comes at the last line of the create table statement. 
Here is the first part of the dump file: File: /tmp/people.sql -- -- PostgreSQL database dump -- SET client_encoding = 'SQL_ASCII'; SET standard_conforming_strings = off; SET check_function_bodies = false; SET client_min_messages = warning; SET escape_string_warning = off; SET search_path = public, pg_catalog; SET default_tablespace = ''; SET default_with_oids = true; -- -- Name: people; Type: TABLE; Schema: public; Owner: _postgresql; Tablespace: -- CREATE TABLE people ( pp_id integer NOT NULL, pp_stars_id integer, pp_mentor_id numeric, pp_trainer_id numeric, pp_director_id numeric, pp_apprentice_id numeric, pp_first_name character varying(255) NOT NULL, pp_last_name character varying(255) NOT NULL, pp_address character varying(255), pp_city character varying(50), pp_state character varying(3), pp_zip character varying(10), pp_county character varying(255), pp_home_phone character varying(10), pp_work_phone character varying(10), pp_work_phone_extension character varying(6), pp_cell_phone character varying(10), pp_fax character varying(10), pp_dob date, pp_gender character varying(20), pp_race_native_american boolean, pp_race_hispanic boolean, pp_race_african_american boolean, pp_race_asian boolean, pp_race_caucasian boolean, pp_race_pacific_islander boolean, pp_email character varying(60), pp_setup_date date DEFAULT ('now'::text)::timestamp(6) with time zone, pp_last_updated_date date DEFAULT ('now'::text)::timestamp(6) with time zone, pp_education_level character varying(255), pp_associates character varying(255), pp_bachelors character varying(255), pp_cda character varying(255), pp_masters character varying(255), pp_doctorate character varying(255), pp_certifications text, pp_prof_memberships text, pp_job_title character varying(255), pp_employer character varying(255), pp_hourly_wage numeric(5,2), pp_username character varying(25), pp_password character varying(25), pp_password_question character varying(255), pp_password_answer character varying(255), 
pp_notes text, pp_last_updated_by character varying(50) DEFAULT "current_user"() NOT NULL, pp_provisional_p boolean DEFAULT false NOT NULL, pp_ethnicity character varying(25), pp_race_other_p boolean, pp_race_other character varying(255), CONSTRAINT pp_cell_phone_ck CHECK (((pp_cell_phone IS NULL) OR ((pp_cell_phone)::text ~ similar_escape('[2-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'::text, NULL::text)))), CONSTRAINT pp_fax_ck CHECK (((pp_fax IS NULL) OR ((pp_fax)::text ~ similar_escape('[2-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'::text, NULL::text)))), CONSTRAINT pp_home_phone_ck CHECK (((pp_home_phone IS NULL) OR ((pp_home_phone)::text ~ similar_escape('[2-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'::text, NULL::text)))), CONSTRAINT pp_work_phone_ck CHECK (((pp_work_phone IS NULL) OR ((pp_work_phone)::text ~ similar_escape('[2-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'::text, NULL::text)))), CONSTRAINT pp_work_phone_extension_ck CHECK (((pp_work_phone_extension IS NULL) OR ((pp_work_phone_extension)::text ~ similar_escape('[0-9]{1,6}'::text, NULL::text)))), CONSTRAINT pp_wyoming_county_required_ck CHECK ((((pp_state)::text <> 'WY'::text) OR (pp_county IS NOT
Re: [GENERAL] Moved postgres, now won't start
Tom Lane wrote: I think this proves my theory --- that all looks like leftover processes trying to work in an installation that isn't there anymore. (Except I have no idea what the insecure dependency bit is about.) "Insecure dependency" is about Perl's taint mode (which pg_ctlcluster is written in). -- Alvaro Herrera http://www.PlanetPostgreSQL.org/ Tulio: "Oh, what could this button be for, Juan Carlos?" Policarpo: "No, get back, don't touch the console!" Juan Carlos: "I'll press it again and again." ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] psql error while restoring database: unrecognized node type: 655
Jeff Ross [EMAIL PROTECTED] writes: When psql begins loading this file, it throws the error: psql:/tmp/people.sql:79: ERROR: unrecognized node type: 655 Hmm. Can you try that with \set VERBOSITY verbose so we can determine where the error is being thrown from? The interesting thing is that this exact process works just fine to a third machine also running 8.2.4 The example doesn't fail for me, either. We've occasionally seen messages of this type from poorly-done local modifications to the backend. How did you come by the postgres executables you're using on the problem machine? regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] psql error while restoring database: unrecognized node type: 655
Tom Lane wrote: Jeff Ross [EMAIL PROTECTED] writes: When psql begins loading this file, it throws the error: psql:/tmp/people.sql:79: ERROR: unrecognized node type: 655 Hmm. Can you try that with \set VERBOSITY verbose so we can determine where the error is being thrown from?

Using the same dump file as before, but with verbosity on:

psql:/tmp/people.sql:79: ERROR: XX000: unrecognized node type: 655
LOCATION: _outValue, outfuncs.c:1707
psql:/tmp/people.sql:82: ERROR: 42P01: relation "public.people" does not exist
LOCATION: RangeVarGetRelid, namespace.c:216

The interesting thing is that this exact process works just fine on a third machine also running 8.2.4. The example doesn't fail for me, either. We've occasionally seen messages of this type from poorly-done local modifications to the backend. How did you come by the postgres executables you're using on the problem machine? I installed the latest package from the OpenBSD snapshots. The other machines are also using OpenBSD package installations. regards, tom lane Thanks, Jeff ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Capturing return value of a function
- Original Message - From: Jasbinder Singh Bali To: pgsql-general@postgresql.org Sent: Monday, July 16, 2007 3:55 PM Subject: [GENERAL] Capturing return value of a function snip I was wondering what would the function return if insert fails. I want it to return 'success' upon a successful insert and 'failure' if insert fails. You can find out whether the insert was successful or not using the tools here: http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS snip Regards, George ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] psql error while restoring database: unrecognized node type: 655
Tom Lane wrote: Jeff Ross [EMAIL PROTECTED] writes: Tom Lane wrote: Hmm. Can you try that with \set VERBOSITY verbose so we can determine where the error is being thrown from? psql:/tmp/people.sql:79: ERROR: XX000: unrecognized node type: 655 LOCATION: _outValue, outfuncs.c:1707 Hmm [ pokes around a bit... ] Do you perhaps have a higher debug verbosity level on this machine than the others? I can't immediately think of a reason why anything would be trying to print an untransformed NULL constant, but it sort of looks like that's what's happening. It seems that was it. When I installed postgres on the new machine, I uncommented and enabled debug_print_parse, something I did not do on the others. Now the entire dump/restore process runs error free. If you could get a stack trace from the point of the errfinish call it would be helpful. Would ktrace work for this? This is my development box, so I can break it again. regards, tom lane Thank you, Tom. Jeff ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] createing indexes on large tables and int8
Tom Lane wrote: Janning Vygen [EMAIL PROTECTED] writes: After this I create the index and it took 10 hours just for one index (primary key). I have 100.000.000 rows with one PK (int8), two integer data values, and two FK (int8) What PG version is this? We did a fair amount of work on sort speed for 8.2. yeah - back when I tested that during the 8.2 development cycle I got a 5-6x speedup with the external sort improvements. ie sorting 1.8B rows (integer) went down from over 12h to about 2h10min - but 10h sounds like a lot for only 100M rows - I wonder what kind of hardware that is and how much concurrent activity is going on ... Stefan ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] psql error while restoring database: unrecognized node type: 655
Jeff Ross [EMAIL PROTECTED] writes: Tom Lane wrote: Hmm [ pokes around a bit... ] Do you perhaps have a higher debug verbosity level on this machine than the others? I can't immediately think of a reason why anything would be trying to print an untransformed NULL constant, but it sort of looks like that's what's happening. It seems that was it. When I installed postgres on the new machine, I uncommented and enabled debug_print_parse, something I did not do on the others. Now the entire dump/restore process runs error free. Hah. So it's just that _outValue has been missing that case (since the beginning of time, looks like :-(). We don't dump raw parse trees often enough for anyone to have noticed. Will fix, thanks for the report. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
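For anyone trying to reproduce the condition Jeff hit, the parse-tree dump can also be enabled per session rather than in postgresql.conf. A rough sketch using the standard GUC names (the SELECT is only a guess at a statement containing an untransformed NULL constant, the kind of node _outValue was missing a case for):

```sql
-- Turn on parse-tree printing for this session only
SET debug_print_parse = on;
SET client_min_messages = log;  -- surface the LOG: parse tree output in psql

-- A statement with a bare NULL constant in its raw parse tree;
-- on an affected 8.2 build with the setting above, printing the
-- tree is what raised "unrecognized node type"
SELECT NULL;
```

Setting it per session keeps the noisy parse-tree logging out of the server's normal operation while still exercising the same code path.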
Re: [GENERAL] why postgresql over other RDBMS
Added to TODO: * Allow multiple indexes to be created concurrently, ideally via a single heap scan, and have a restore of a pg_dump somehow use it http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php --- Ron Johnson wrote: On 06/01/07 11:22, Bruce Momjian wrote: PFC wrote: On Thu, 31 May 2007 22:20:09 +0200, Vivek Khera [EMAIL PROTECTED] wrote: On May 25, 2007, at 5:28 PM, Tom Lane wrote: That's true at the level of DDL operations, but AFAIK we could parallelize table-loading and index-creation steps pretty effectively --- and that's where all the time goes. I would be happy with parallel builds of the indexes of a given table. That way you have just one scan of the whole table to build all its indexes. Will the synchronized seq scan patch be able to do this by issuing all the CREATE INDEX commands at the same time from several different database connections ? No, but it could someday. Or would a CREATE MANY INDEXES (where in one statement you specify all the indexes on a single table) command be easier to implement? This way also the process reads the table once, building separate sortwork files on-the-fly. Too bad child processes can't inherit transaction state. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good! ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/ -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] why postgresql over other RDBMS
Bruce Momjian wrote: Added to TODO: * Allow multiple indexes to be created concurrently, ideally via a single heap scan, and have a restore of a pg_dump somehow use it http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php Would it not also make sense to use this ability for a non-index-specific REINDEX command? ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Issues with PL/PGSQL function..
Hi All, Running into a small issue with a PL/PGSQL function under PostgreSQL 8.0.11... epassembly=# select version(); version - PostgreSQL 8.0.11 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE Linux) (1 row) The issue is most likely related to a DROP TABLE call I have at the end of the stored procedure. The relevant chunks of the code are (I apologize in advance for culling as much as I have from the queries, but I am unable to release those details): BEGIN SELECT INTO ttl ''40''::integer AS id_days_ttl; CREATE TEMPORARY TABLE tmp1 AS SELECT ...; CREATE TEMPORARY TABLE tmp2 AS SELECT * FROM tmp1 ...; CREATE TEMPORARY TABLE tmp3 AS SELECT * FROM ... WHERE ... IN (SELECT ... FROM tmp2); ... DROP TABLE tmp3; DROP TABLE tmp2; DROP TABLE tmp1; END The function runs the first time without issue, but (and again I think it's because of the explicit drops in the function) I get this error on any subsequent runs: NOTICE: Creating TEMPORARY table tmp1... NOTICE: Creating TEMPORARY table tmp2... ERROR: relation with OID 38699 does not exist I believe it's telling me that it cannot find the OID of tmp1, but I am unsure if it is looking for the first run's value or the second run's value for that particular table... Does anyone have any additional suggestions that I can use to track down more details of what is actually causing the issue? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Issues with PL/PGSQL function..
Try using it with 'execute' as that might help... OR: CREATE TEMP TABLE tblname WITH (OIDS) ON COMMIT DROP AS select * from someothertbl; that means the temporary table will be dropped at the end of the current transaction block. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 7/16/07, Chris Bowlby [EMAIL PROTECTED] wrote: Hi All, Running into a small issue with a PL/PGSQL function under PostgreSQL 8.0.11... snip ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] why postgresql over other RDBMS
Matthew T. O'Connor wrote: Bruce Momjian wrote: Added to TODO: * Allow multiple indexes to be created concurrently, ideally via a single heap scan, and have a restore of a pg_dump somehow use it http://archives.postgresql.org/pgsql-general/2007-05/msg01274.php Would it not also make sense to use this ability for a non-index-specific REINDEX command? Not sure, but I suppose. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Issues with PL/PGSQL function..
Shoaib Mir [EMAIL PROTECTED] writes: Try using it with 'execute' as that might help... In current releases you need EXECUTE, else the thing will try to cache a query plan using the OID of the first instance of the temp table, and that won't work for subsequent instances. OR: CREATE TEMP TABLE tblname WITH (OIDS) ON COMMIT DROP AS select * from someothertbl; ON COMMIT DROP won't help, but maybe you could have just one temp table per session, created with ON COMMIT DELETE ROWS? Or try TRUNCATE'ing the table when it already exists. This will be all better in 8.3, FWIW. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
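The EXECUTE workaround Tom and Shoaib describe can be sketched as follows. This is an illustrative rewrite, not Chris's actual function: building the statements as strings forces them to be re-planned each call, so plpgsql never caches a plan tied to the OID of the first run's temp table. source_tbl is a placeholder name.

```sql
-- Sketch: dynamic SQL avoids the cached temp-table OID on releases
-- before 8.3. source_tbl is a placeholder for the real source query.
CREATE OR REPLACE FUNCTION build_report() RETURNS void AS $$
BEGIN
    EXECUTE 'CREATE TEMPORARY TABLE tmp1 AS SELECT * FROM source_tbl';
    -- ... any statements touching tmp1 must also go through EXECUTE
    -- (or FOR rec IN EXECUTE '...' LOOP for row-by-row processing) ...
    EXECUTE 'DROP TABLE tmp1';
END;
$$ LANGUAGE plpgsql;
```

The alternative Tom mentions, one permanent-per-session temp table created with ON COMMIT DELETE ROWS (or TRUNCATE'd between runs), sidesteps the problem differently: the table keeps the same OID across calls, so the cached plan stays valid.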
Re: [GENERAL] why postgresql over other RDBMS
Bruce Momjian [EMAIL PROTECTED] writes: Matthew T. O'Connor wrote: Bruce Momjian wrote: * Allow multiple indexes to be created concurrently, ideally via a single heap scan, and have a restore of a pg_dump somehow use it Actually, the sync scan patch ought to make this more or less happen magically. If you start a bunch of concurrent index builds they will try to scan the table together. There's no useful way for pg_dump to make use of this since it only has one backend. And you still need to generate n copies of the data for sorting. And performing n sorts in parallel won't be as cache efficient as doing them one after the other. So there's still a use case for the TODO But the hole is not nearly as urgent as before. You can get most of the benefit if you really need it by rolling your own. And the cool thing is some people already have rolled their own and they'll just magically see an improvement. They don't have to do anything they weren't doing already to turn it on. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] why postgresql over other RDBMS
Gregory Stark wrote: Bruce Momjian [EMAIL PROTECTED] writes: Matthew T. O'Connor wrote: Bruce Momjian wrote: * Allow multiple indexes to be created concurrently, ideally via a single heap scan, and have a restore of a pg_dump somehow use it Actually, the sync scan patch ought to make this more or less happen magically. If you start a bunch of concurrent index builds they will try to scan the table together. There's no useful way for pg_dump to make use of this since it only has one backend. And you still need to generate n copies of the data for sorting. And performing n sorts in parallel won't be as cache efficient as doing them one after the other. So there's still a use case for the TODO But the hole is not nearly as urgent as before. You can get most of the benefit if you really need it by rolling your own. And the cool thing is some people already have rolled their own and they'll just magically see an improvement. They don't have to do anything they weren't doing already to turn it on. They could roll their own a lot easier if you had finished the psql concurrent patch. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Concurrency Question
I'm trying to clearly understand how foreign key constraints work. I still need some help. The PostgreSQL documentation says: ROW EXCLUSIVE Conflicts with the SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. The commands UPDATE, DELETE, and INSERT acquire this lock mode on the target table (in addition to ACCESS SHARE locks on any other referenced tables). In general, this lock mode will be acquired by any command that modifies the data in a table. So if my foreign key constraint is: table A b_id references b(id) and if table B already has an entry for id = 5 and I do an insert into table A with b_id of 5, how does the database ensure that the entry in table B will still be there by the time the transaction ends? e.g. if there is an insert into A and a delete from B of id = 5, if the delete happens first, then the insert should fail. If the insert happens first, then the delete should fail. But how is this accomplished? Looking at the documentation above, I would expect the insert into A to get a ROW EXCLUSIVE lock on table A. And, I'm guessing it would get an ACCESS SHARE lock on table B. But this would not prevent the delete from B from happening at the same time (if I am reading this correctly). Can someone help me out here? Thank you, Perry
Re: [GENERAL] why postgresql over other RDBMS
Bruce Momjian [EMAIL PROTECTED] writes: They could roll their own a lot easier if you had finished the psql concurrent patch. I did. But you decided you didn't want it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] why postgresql over other RDBMS
Gregory Stark wrote: Bruce Momjian [EMAIL PROTECTED] writes: They could roll their own a lot easier if you had finished the psql concurrent patch. I did. But you decided you didn't want it. As far as I know, we asked for a libpq API change and you ignored multiple requests. You want the URLs? -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] why postgresql over other RDBMS
Bruce Momjian wrote: Gregory Stark wrote: Bruce Momjian [EMAIL PROTECTED] writes: They could roll their own a lot easier if you had finished the psql concurrent patch. I did. But you decided you didn't want it. As far as I know, we asked for a libpq API change and you ignored multiple requests. You want the URLs? Never mind. You ignored requests. I am not going to address this further. -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] why postgresql over other RDBMS
Bruce Momjian [EMAIL PROTECTED] writes: As far as I know, we asked for a libpq API change and you ignored multiple requests. You want the URLs? No. One suggestion was made regarding an internal technical issue (polling versus using select/poll on the sockets). I've long ago made that change though I had questions about the best way to do it which were never answered so even the way I made that change might not be acceptable. In any case this was just one suggestion made based on discussion which happened to turn up on list without actually reading the rest of the code. If you're interested in reviewing the patch I'm sure you would have dozens of issues. I would be happy to rework it along whatever lines you want. But I would prefer to see people focus on reviewing major features like HOT, clustered indexes, GII (which I would suggest calling index organized tables since that's effectively what they are). -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Concurrency Question
Perry Smith [EMAIL PROTECTED] writes: Looking at the documentation above, I would expect the insert into A to get a Row exclusive lock for table A. And, I'm guessing it would get an ACCESS SHARE lock for table B. But this would not prevent the delete from B from happening at the same time (if I am reading this correctly). The bit you quoted was for tables. The RI trigger does indeed take a share lock on the referenced record in table B which prevents it from being deleted. (In older versions it used to take an exclusive lock because there were no share locks on records.) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
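The row-level share lock Greg describes can be seen directly with two sessions. This is a rough illustration under the thread's schema (tables a and b); the FOR SHARE query shown in the comment approximates what the RI trigger issues internally:

```sql
-- Setup, following the schema in Perry's question:
CREATE TABLE b (id integer PRIMARY KEY);
CREATE TABLE a (b_id integer REFERENCES b(id));
INSERT INTO b VALUES (5);

-- Session 1:
BEGIN;
INSERT INTO a VALUES (5);
-- the RI trigger effectively runs
--   SELECT 1 FROM b WHERE id = 5 FOR SHARE;
-- taking a share lock on row id = 5 in b

-- Session 2, concurrently:
DELETE FROM b WHERE id = 5;
-- blocks until session 1 commits, then fails with a
-- foreign-key violation because a now references the row
```

So the ordering Perry asks about is resolved at the row level, not the table level: the ACCESS SHARE table lock on b never conflicts with the DELETE, but the share lock on the individual referenced row does.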