Re: [GENERAL] OID Usage
Tom Lane wrote:

> The thing you have to worry about is the possibility of duplicate OIDs once your DB has been running long enough for the OID counter to wrap around (2^32 OIDs). You should make sure that index is specifically declared as UNIQUE, so that any attempt to insert a duplicate OID will fail. That might be enough for you, or you might want to add logic to your application to retry automatically after such a failure.

Ahh, yes ... this was what I thought might have been the problem. Not that 2^32 is a small number, but as time goes by on a busy system, this will happen one day. A unique index is a good plan: it will raise an error, but no data will be harmed!

How does PG itself handle a search on a duplicated OID without an index ... return two rows?

Will there be a future substitute for PQoidValue that is more reliable, like a rowid?

Thanks anyway!

/BL

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
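A minimal sketch of the safeguard Tom describes, with a hypothetical table name (the oid column only exists if the table was created WITH OIDS):

```sql
-- Enforce OID uniqueness so a wrapped-around counter cannot
-- silently produce two rows with the same OID.
CREATE UNIQUE INDEX my_table_oid_idx ON my_table (oid);

-- After wraparound, an INSERT that draws an already-used OID now
-- fails with a unique-constraint error; the application can catch
-- that error and simply retry the INSERT to get a fresh OID.
```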
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 07:39:16PM +0100, Bo Lorentsen wrote:

> But, does this mean that the oid solution I have described (and implemented) has some unknown problems, or will oids become obsolete in the near future?

The PostgreSQL documentation discourages the use of OIDs for primary keys. For example, the Object Identifier Types section in the Data Types chapter says:

> The oid type is currently implemented as an unsigned four-byte integer. Therefore, it is not large enough to provide database-wide uniqueness in large databases, or even in large individual tables. So, using a user-created table's OID column as a primary key is discouraged. OIDs are best used only for references to system tables.

The System Columns section of the Data Definition chapter says:

> OIDs are 32-bit quantities and are assigned from a single cluster-wide counter. In a large or long-lived database, it is possible for the counter to wrap around. Hence, it is bad practice to assume that OIDs are unique, unless you take steps to ensure that they are unique.

The CREATE TABLE documentation in the Reference part says:

> Once the counter wraps around, uniqueness of OIDs can no longer be assumed, which considerably reduces their usefulness. Specifying WITHOUT OIDS also reduces the space required to store the table on disk by 4 bytes per row of the table, thereby improving performance.

The 8.0 Release Notes say the following under Deprecated Features:

> By default, tables in PostgreSQL 8.0 and earlier are created with OIDs. In the next release, this will _not_ be the case: to create a table that contains OIDs, the WITH OIDS clause must be specified or the default_with_oids configuration parameter must be enabled.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
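Following the documentation quoted above, the usual alternative looks something like this (table and column names are hypothetical):

```sql
-- Skip per-row OIDs and let a sequence-backed column be the key.
CREATE TABLE customer (
    customer_id serial PRIMARY KEY,   -- database-enforced unique key
    name        varchar(80) NOT NULL
) WITHOUT OIDS;
```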
Re: [GENERAL] ntfs for windows port rc5-2
On Fri, Jan 14, 2005 at 08:39:28AM -0800, J. Greenlees wrote:

> why? since an app that I'm working on would be useless for 60% of potential clients, using postgresql with the requirement for ms' corrupted ntfs means postgresql isn't going to work for it.

I think what you are referring to is the installer refusing to install on anything but an NTFS partition. From the FAQ: http://pginstaller.projects.postgresql.org/FAQ_windows.html

> 2.4) Can I install PostgreSQL on a FAT partition?
>
> PostgreSQL's number one priority is the integrity of your data. FAT and FAT32 filesystems simply do not offer the reliability required to allow this. In addition, the lack of security features offered by FAT makes it impossible to secure the raw data files from unauthorised modification. Finally, PostgreSQL utilises a feature called 'reparse points' to implement tablespaces. This feature is not available on FAT partitions.
>
> [snip]
>
> It is recognised however, that on some systems such as developers' PCs, FAT partitions may be the only choice. In such cases, you can simply install PostgreSQL as normal, but without initialising the database cluster. When the installation has finished, manually run the 'initdb.exe' program on the FAT partition. Security and reliability will be compromised however, and any attempts to create tablespaces will fail.

> since ms does not include a compiler, and the source for 8.0 won't cross compile from linux. ( gcc 3.3.0 )

To compile the native port on Windows you need MinGW. And there's always the Cygwin port still. See: http://www.postgresql.org/files/documentation/faqs/text/FAQ_MINGW

Hope this helps,

-- 
Martijn van Oosterhout
kleptog@svana.org
http://svana.org/kleptog/

Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
[GENERAL] Parsing of backslash in statements via ODBC
We are trying to use a suite of third party applications, which run on Windows 2000, that can make use of a database backend via ODBC. Most of the apps work just great with Postgres. One of the components (a key visualization application) does not. Note that this all works fine, as is, using Oracle or MS-SQL as the backend database.

The vendor was kind enough to provide to us the SQL statements that the application makes (there aren't many) and we were able to identify the problem by manually tinkering with the data. A column, called tagname, is a VARCHAR(40) and typical values are:

Foo\Bar
Baz\Blah\You\Get\The\Drift

These values are inserted into the database just fine from other components in this suite (also via ODBC from a Windows client). When we get rid of any backslashes in the values, everything works just fine. Unfortunately, this is not an option. The statements are:

1) SELECT tn,ti,tt,tdt FROM t WHERE tn = 'mytag01'
2) SELECT dat,mt,ti,val,s,m FROM fl WHERE (ti = @1) AND dat BETWEEN @3 AND @5 ORDER BY dat DESC, mt DESC

where @1 is the result of statement (1). Since MS-SQL and Oracle (both via ODBC) work fine with this app, I would suspect the issue is that Postgres is interpreting backslashes as escape characters in situations where the supported databases do not.

Has anyone run across something like this, and if so, can anything be done given that we don't have the source code of the application in question?

Alex
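For reference, this is the parsing difference being described: PostgreSQL treats a backslash in a string literal as introducing an escape, so the value stored is not the value the application sent. A sketch:

```sql
-- PostgreSQL eats the backslash ('\B' is not a recognized escape,
-- so it collapses to a plain 'B'):
SELECT 'Foo\Bar';        -- yields FooBar, not Foo\Bar

-- To store a literal backslash, it would have to be doubled:
INSERT INTO t (tagname) VALUES ('Foo\\Bar');   -- stores Foo\Bar
```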
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 07:39:16PM +0100, Bo Lorentsen wrote:

> But, does this mean that the oid solution I have described (and implemented) has some unknown problems, or will oids become obsolete in the near future?

It means using OIDs as you described has very well known problems, and they will break on you eventually. You can mitigate the damage by creating a UNIQUE index on the oid column, but you'd better be sure your application can handle the side-effects. OIDs won't become obsolete, but they'll probably no longer be enabled by default at some stage.

Hope this helps,

-- 
Martijn van Oosterhout
kleptog@svana.org
http://svana.org/kleptog/
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 07:44:18PM +0100, Bo Lorentsen wrote:

> Alvaro Herrera wrote:
>> You can create a function to get the sequence name attached to a table. Of course, you should take into account the fact that there could be more than one (two serial fields in a table are rare but not impossible), but if your tables have only one sequence you should be OK.
>
> Is there a way to find and test whether it is a primary key?

pg_index has an indisprimary column.

> Hmm, I need to play around more with the pg_ system tables. Are they all well documented, or do I need to do some guessing?

See the System Catalogs chapter in the documentation. If you run psql -E you'll see the queries that psql executes for commands like \d foo. Those commands query the system catalogs.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
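A sketch of the kind of catalog query this enables, using pg_index.indisprimary (the table name is hypothetical):

```sql
-- List the primary-key index, if any, for a given table.
SELECT t.relname AS table_name, i.relname AS pk_index
FROM pg_class t
JOIN pg_index x ON x.indrelid = t.oid AND x.indisprimary
JOIN pg_class i ON i.oid = x.indexrelid
WHERE t.relname = 'my_table';
```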
[GENERAL] MOVE
Hello,

Here I'm implementing session management, which has a connections table partitioned between active and archived connections. A connection represents a connection between a user and a chatroom. I use partitioning for performance reasons.

The active table contains all the data for the active session: user_id, chatroom_id, session start time, and other information. The archive table contains just the user_id, chatroom_id, and session start and end time, for logging purposes, and for displaying on the site which user was logged in to which chatroom and from when to when.

Thus, when a user disconnects from a chatroom, I must move one row from the active to the archive table. This poses no problem as there is a UNIQUE index on (user_id, chatroom_id), so I select the row FOR UPDATE, insert it in the archive table, then delete it.

Now, when a user logs out from the site, or when his session is purged by the auto-expiration cron job, I must also expire ALL his open chatroom connections:

INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
DELETE FROM active WHERE user_id = ...;

Now, if the user inserts a connection between the two queries above, the thing will fail (the connection will just be deleted). I know that there are many ways to do it right:

- LOCK the table in exclusive mode
- use an additional primary key on the active table which is not related to the user_id and the chatroom_id, select the ids of the sessions to expire into a temporary table, and use that
- use an extra field in the table to mark that the rows are being processed
- use transaction isolation level SERIALIZABLE

However, all these methods somehow don't feel right, and as this is an often encountered problem, I'd really like to have an SQL command, say MOVE, or SELECT AND DELETE, whatever, which acts like a SELECT, returning the rows, but deleting them as well. Then I'd just do:

INSERT INTO archive (...) SELECT ... AND DELETE FROM active WHERE user_id = ...;

which would have the following advantages:

- No worries about locks:
  - less chance of bugs
  - higher performance, because locks have to be waited on, by definition
- No need to do the request twice (so, it is twice as fast!)
- Simplicity and elegance

There would be a hidden bonus: if you acquire locks, you had better COMMIT the transaction as soon as possible to release them, whereas here, you can happily continue in the transaction.

I think this command would make a nice cousin to the also very popular INSERT ... OR UPDATE, which tries to insert a row, and if it exists, UPDATES it instead of inserting it!

What do you think?
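For comparison, the first workaround in the list above (an exclusive table lock) might be sketched like this; it blocks concurrent writers for the duration of the move:

```sql
BEGIN;
LOCK TABLE active IN EXCLUSIVE MODE;  -- no new connections can be inserted
                                      -- until this transaction commits
INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
DELETE FROM active WHERE user_id = ...;
COMMIT;
```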
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 12:20:50PM -0700, Michael Fuhr wrote:

> On Fri, Jan 14, 2005 at 07:44:18PM +0100, Bo Lorentsen wrote:
>> Is there a way to find and test whether it is a primary key?
>
> pg_index has an indisprimary column.

Yeah, though things get hairy that way, because you have to peek at pg_attribute to match the objsubid in pg_depend, and self-join pg_class to get to the index itself. Not sure if it can all be done in a single query.

> If you run psql -E you'll see the queries that psql executes for commands like \d foo. Those commands query the system catalogs.

Sadly, there's hardly anything there that uses pg_depend.

-- 
I dream about dreams about dreams, sang the nightingale under the pale moon (Sandman)
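A sketch of the pg_depend/pg_attribute dance being described, here finding the sequence owned by a table column (names hypothetical; whether everything fits in one query is exactly the open question above):

```sql
-- Find the sequence that a serial column depends on.
SELECT s.relname AS sequence_name
FROM pg_depend d
JOIN pg_class s ON s.oid = d.objid AND s.relkind = 'S'   -- the sequence
JOIN pg_class t ON t.oid = d.refobjid                    -- the table
JOIN pg_attribute a ON a.attrelid = t.oid
                   AND a.attnum = d.refobjsubid          -- the column
WHERE t.relname = 'my_table' AND a.attname = 'id';
```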
[GENERAL] postmaster listening on specified addresses
Hello,

I was wondering if there's a correct method for running postmaster with the option of listening on a select group of addresses. Does postmaster accept multiple -h hostname options on the command-line, or alternatively a comma-separated list of hostnames or addresses? What if you have a server with multiple network interfaces and addresses assigned to each, and you only want postmaster to listen on a specific subset of those addresses in addition to localhost? Does anyone know if there is a supported method for doing this?

The documentation for the -h option only states:

> Specifies the TCP/IP host name or address on which the postmaster is to listen for connections from client applications. Defaults to listening on all configured addresses (including localhost).

Clearly the server is capable of listening on multiple addresses, since the default is all of them, but the -h option is described only for use with a single address.

Thanks,
Brian Carp
[GENERAL] is there anyway to get the backends IP address from the PID?
Hi,

Does anyone know if there is a way to get the backend's IP address from the PID? I am using the view pg_stat_activity, and it would be nice if it would also display the IP address along with the PID. I can see the IP address when I do a ps -ef, but it would be nice to be able to get it via an SQL command.

Thanks,
Tony Caduto
http://www.amsoftwaredesign.com
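(For what it's worth, later PostgreSQL releases added exactly this; on a release whose pg_stat_activity exposes a client_addr column, it is a one-liner:)

```sql
-- Assumes a newer release than the one discussed here; works only
-- where pg_stat_activity includes the client address columns.
SELECT procpid, client_addr, client_port FROM pg_stat_activity;
```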
Re: [GENERAL] MOVE
PFC wrote:

> I'd really like to have a sql command, say MOVE, or SELECT AND DELETE, whatever, which acts like a SELECT, returning the rows, but deleting them as well.

Oracle implements this with the syntax DELETE FROM ... RETURNING ...; there is also UPDATE ... RETURNING ..., where the affected rows are likewise returned. This is non-standard SQL, but there is a precedent. It is efficient because it saves at least one round trip from the database to the client. I don't use them, to stay portable.

Rick

> [snip -- original message quoted in full]
Re: [GENERAL] postmaster listening on specified addresses
Administrator [EMAIL PROTECTED] writes:

> Hello, I was wondering if there's a correct method for running postmaster with the option of listening on a select group of addresses. Does postmaster accept multiple -h hostname options on the command-line, or alternatively a comma-separated list of hostnames or addresses? What if you have a server with multiple network interfaces and addresses assigned to each, and you only want postmaster to listen on a specific subset of those addresses in addition to localhost? Does anyone know if there is a supported method for doing this?

It would require changes to the existing code (see below).

> Clearly the server is capable of listening on multiple addresses, since the default is all of them, but the -h option is described only for use with a single address.

Actually, in the sockets API, to listen on all configured addresses you specify the wildcard address (0.0.0.0). There is no call to listen on a list of addresses. What you are looking for could be done, but it would require multiple listening sockets, one for each address, which (as far as I know) the code doesn't currently do.

-Doug
Re: [GENERAL] Problem Dropping a Database with users connected to it
On Fri, 2005-01-14 at 10:58 -0600, Bruno Wolff III wrote:

> On Fri, Jan 14, 2005 at 11:16:16 -0500, Eric Dorland [EMAIL PROTECTED] wrote:
>> * Disconnecting all other users before dropping the db, but that doesn't seem possible (I could start and stop the db, but that doesn't stop any clients from just reconnecting right away).
>
> You could use an altered pg_hba.conf file while doing the drop.
>
>> * Just drop all the tables, etc. instead of dropping the db. There doesn't seem to be a good way to do this except doing an explicit DROP TABLE foo on all 200 tables. Is there a good recipe for this, an easy way to get a listing of all the tables in a db?
>
> If all of the tables are in the public schema, dropping that schema should cascade to dropping all of the tables.

I had not considered dropping the schema, but that sounds like a good solution. I'll give it a shot.

-- 
Eric Dorland [EMAIL PROTECTED]
WCG 514.398-5023 ext. 09562
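The schema-drop approach from above, assuming all 200 tables live in the default public schema:

```sql
DROP SCHEMA public CASCADE;   -- drops every table, view, etc. in the schema
CREATE SCHEMA public;         -- recreate the now-empty schema
```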
Re: [GENERAL] postmaster listening on specified addresses
On Fri, Jan 14, 2005 at 03:28:24PM -0500, Administrator wrote:

> I was wondering if there's a correct method for running postmaster with the option of listening on a select group of addresses.

PostgreSQL 8.0 will allow this -- it'll replace the virtual_host configuration variable with listen_addresses, and postmaster's -h option will be equivalent to listen_addresses.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/
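In 8.0 terms, the setting might look like this in postgresql.conf (the addresses are hypothetical):

```
# Listen on localhost plus a specific subset of the machine's addresses.
listen_addresses = 'localhost, 192.168.1.10, 10.0.0.5'
```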
Re: [GENERAL] C locale + unicode
Thanks for the info - to the point and much appreciated!

John Sidney-Woollett

Tom Lane wrote:

> John Sidney-Woollett [EMAIL PROTECTED] writes:
>> Do upper() and lower() only work correctly for postgres v8 UTF-8 encoded databases? (They don't seem to work on characters outside standard ASCII on my 7.4.6 db.) Is this a locale or encoding specific issue?
>
> Before 8.0, they don't work on multibyte characters, period. In 8.0 they work according to your locale setting.
>
>> Is there likely to be a significant difference in speed between a database using a UTF-8 locale and the C locale (if you don't care about the small issues you detailed below)?
>
> I'd expect the C locale to be materially faster for text sorting. Don't have a number offhand.
>
> regards, tom lane
Re: [GENERAL] MOVE
Use an after insert trigger.

On Fri, 2005-01-14 at 15:38, [EMAIL PROTECTED] wrote:

> Oracle implements this with the syntax DELETE FROM ... RETURNING ...; there is also UPDATE ... RETURNING ...
>
> [snip -- rest of message and original PFC post quoted in full]
Re: [GENERAL] MOVE
On Fri, Jan 14, 2005 at 08:49:24PM +0100, PFC wrote:

> ... when his session is purged by the auto-expiration cron job, I must also expire ALL his open chatroom connections:
>
> INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
> DELETE FROM active WHERE user_id = ...;
>
> Now, if the user inserts a connection between the two queries above, the thing will fail (the connection will just be deleted). I know that there are many ways to do it right ...

Why not just do it in a single transaction? I don't think you need to use SERIALIZABLE at all; I think normal read-committed mode will do what you want, no?

BEGIN;
INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
DELETE FROM active WHERE user_id = ...;
COMMIT;

The DELETE can only delete the rows returned by the SELECT; that's the whole point of transactions...

Hope this helps,

-- 
Martijn van Oosterhout
kleptog@svana.org
http://svana.org/kleptog/
Re: [GENERAL] ntfs for windows port rc5-2
Martijn van Oosterhout wrote:

> I think what you are referring to is the installer refusing to install on anything but an NTFS partition. From the FAQ: http://pginstaller.projects.postgresql.org/FAQ_windows.html
>
> [snip -- FAQ entry 2.4 quoted in full]
>
> To compile the native port on Windows you need MinGW. And there's always the Cygwin port still.

The rc5-2 msi will not install at all on a fat32 filesystem, even without initialising the database.

Sorry, but the whole purpose of putting it on a windows box was to make a db app for a 250,000 person client base, with some still using win95, some win98, some winme, all of which do not have ntfs support. Since the app will not be world accessible, only through localhost, the lack of security isn't a major concern.

-- 
only plain text format email accepted.
smaller file size, no virus transfer
no proprietary file formats.
Re: [GENERAL] Parsing of backslash in statements via ODBC
aboster [EMAIL PROTECTED] writes:

> Since MS-SQL and Oracle (both via ODBC) work fine with this app, I would suspect the issue is that Postgres is interpreting backslashes as escape characters in situations where the supported databases do not.

Postgres definitely considers backslashes to be escape characters in string literals.

> Has anyone run across something like this, and if so, can anything be done given that we don't have the source code of the application in question?

Without the app source code you may be kinda stuck :-(. It's possible that you could hack something at the ODBC level, though. Try asking on the pgsql-odbc list.

regards, tom lane
Re: [GENERAL] Parsing of backslash in statements via ODBC
Given that this issue is a violation of SQL compatibility, shouldn't there really be an option to turn off interpretation of backslash characters in string literals as escapes? Maybe as a session variable of some kind, with a default being set in postgresql.conf?

On Jan 14, 2005, at 4:43 PM, Tom Lane wrote:

> Postgres definitely considers backslashes to be escape characters in string literals.
>
> [snip]

--- 
Frank D. Engel, Jr. [EMAIL PROTECTED]

$ ln -s /usr/share/kjvbible /usr/manual
$ true | cat /usr/manual | grep "John 3:16"
John 3:16 For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life.
$
Re: [GENERAL] MOVE
Martijn van Oosterhout kleptog@svana.org writes:

> Why not just do it in a single transaction? I don't think you need to use SERIALIZABLE at all, I think normal read-committed mode will do what you want, no?
>
> BEGIN;
> INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
> DELETE FROM active WHERE user_id = ...;
> COMMIT;

No, that's exactly wrong: in read-committed mode the DELETE could delete rows that were not seen by the SELECT. It would work in serializable mode, though.

regards, tom lane
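Tom's correction as a concrete sketch: run the pair under SERIALIZABLE so both statements work from the same snapshot, and be prepared to retry on a serialization failure:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;
DELETE FROM active WHERE user_id = ...;
COMMIT;  -- if the server reports a serialization failure, ROLLBACK and retry
```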
Re: [GENERAL] ntfs for windows port rc5-2
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 You may wish to consider a different database for your project. SQLite may be a better choice, for example, depending on the project's specific needs (www.sqlite.org). Win95/98/ME is poor technology, no matter how many users it still has. It's probably about time for them to upgrade or switch to another OS (of course, I think Windows in general is a poor technology, but that's for another list...). OTOH, does anyone know if the cygwin version of postgresql enforces the NTFS requirement? That may be another option... On Jan 14, 2005, at 4:39 PM, J. Greenlees wrote: Martijn van Oosterhout wrote: On Fri, Jan 14, 2005 at 08:39:28AM -0800, J. Greenlees wrote: why? since an app that I'm working on would be useless for 60% of potential clients, using posgresql with the requirement for ms' corrupted ntfs means postgresql isn't going to work for it. I think what you are referring to is the installer refusing to install on a NTFS partition. From the FAQ: http://pginstaller.projects.postgresql.org/FAQ_windows.html 2.4) Can I install PostgreSQL on a FAT partition? PostgreSQL's number one priority is the integrity of your data. FAT and FAT32 filesystems simply do not offer the reliabilty required to allow this. In addition, the lack of security features offered by FAT make it impossible to secure the raw data files from unauthorised modification. Finally, PostgreSQL utilises a feature called 'reparse points' to implement tablespaces. This feature is not available on FAT partitions. snip It is recognised however, that on some systems such as developer's PCs, FAT partitions may be the only choice. In such cases, you can simply install PostgreSQL as normal, but without initialising the database cluster. When the installation has finished, manually run the 'initdb.exe' program on the FAT partition. Security and reliability will be compromised however, and any attempts to create tablespaces will fail. 
since ms does not include a compiler, and the source for 8.0 won't cross compile from linux. ( gcc 3.3.0 ) To compile the native port on Windows you need MinGW. And there's always the Cygwin port still. See: http://www.postgresql.org/files/documentation/faqs/text/FAQ_MINGW Hope this helps, rc5-2 msi will not install at all on a fat32 filesystem even without initialising the database. sorry, but the whole purpose of putting it on a windows box was to make a db app for a 250,000 person client base. with some still using win95, some win 98, some winme. all of which do not have ntfs support. since the app will not be world accessible, only through localhost, the lack of security isn't a major concern. -- only plain text format email accepted. smaller file size, no virus transfer no proprietary file formats. - --- Frank D. Engel, Jr. [EMAIL PROTECTED] $ ln -s /usr/share/kjvbible /usr/manual $ true | cat /usr/manual | grep John 3:16 John 3:16 For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life. $ -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (Darwin) iD8DBQFB6EOs7aqtWrR9cZoRAtGcAKCDdfxAWPzNw23+hJ/t9xObxoP+kACfTz1T eD6NOkOnIcok1U3iSGnjxyo= =P26l -END PGP SIGNATURE- ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Parsing of backslash in statements via ODBC
On Fri, Jan 14, 2005 at 04:59:57PM -0500, Frank D. Engel, Jr. wrote: Given that this issue is a violation of SQL compatibility, shouldn't there really be an option to turn off interpretation of backslash characters in string literals as escapes? Maybe as a session variable of some kind, with a default being set in postgresql.conf? I'm not totally sure about how ODBC works, but if it's anything like Perl DBI, surely it's the responsibility of the ODBC layer to escape the backslashes? Maybe it depends on whether they're using placeholders or not... Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. pgpFU6eEzwiH9.pgp Description: PGP signature
Re: [GENERAL] 7.4.6 FC2 MUCH slower from 2.6.9-1.11 to 2.6.10-1.8
Just tried 2.6.10-1.9_FC2 with the same bad results. Am I the only one experiencing this? What would be special? The hardware? It has nothing special, a popular Asus motherboard, 1GB memory, a modern Maxtor HD and nothing else. Clodoaldo --- Clodoaldo Pinto [EMAIL PROTECTED] escreveu: 31 minutes in 2.6.10-1.8: select kstime(), update_ranking_usuarios(), kstime(); kstime| update_ranking_usuarios | kstime -+-+- 2005-01-13 20:27:56 | | 2005-01-13 20:58:46 (1 row) 5 minutes in 2.6.9-1.11: select kstime(), update_ranking_usuarios(), kstime(); kstime| update_ranking_usuarios | kstime -+-+- 2005-01-13 23:24:30 | | 2005-01-13 23:29:26 (1 row) The function: CREATE OR REPLACE FUNCTION update_ranking_usuarios() RETURNS void AS 'declare linha record; rank integer; begin rank := 0; for linha in select usuario from usuarios_producao where not anonymous --and n_time != 446 order by pontos_0 desc loop rank := rank + 1; update usuarios_producao set rank_0 = rank where usuario = linha.usuario ; end loop; -- -- rank := 0; for linha in select usuario from usuarios_producao where not anonymous --and n_time != 446 order by pontos_0 + (( pontos_0 - pontos_7) / 7) desc loop rank := rank + 1; update usuarios_producao set rank_24 = rank where usuario = linha.usuario ; end loop; -- -- rank := 0; for linha in select usuario from usuarios_producao where not anonymous --and n_time != 446 order by pontos_0 + pontos_0 - pontos_7 desc loop rank := rank + 1; update usuarios_producao set rank_7 = rank where usuario = linha.usuario ; end loop; -- -- rank := 0; for linha in select usuario from usuarios_producao where not anonymous --and n_time != 446 order by pontos_0 + (( pontos_0 - pontos_7) * 30 / 7) desc loop rank := rank + 1; update usuarios_producao set rank_30 = rank where usuario = linha.usuario ; end loop; return; end;' LANGUAGE 'plpgsql' STABLE; Part of postgresql.conf: shared_buffers = 3000 # min 16, at least max_connections*2, 8KB each sort_mem = 49152# min 64, size in KB vacuum_mem = 
32768 # min 1024, size in KB max_fsm_pages = 30 # min max_fsm_relations*16, 6 bytes each fsync = false wal_buffers = 256 checkpoint_segments = 32 # in logfile segments, min 1, 16MB each The log file shows only a long list of: LOG: recycled transaction log file 010A00D0 LOG: recycled transaction log file 010A00CF LOG: recycled transaction log file 010A00C9 ... Regards, Clodoaldo Pinto ---(end of broadcast)--- TIP 8: explain analyze is your friend ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] developers map
hi, maybe this is a bit off-topic, but is the map on the developer page dynamic? if so, is it taking data from a postgresql database? where can i find info for doing something like that? what language is used for doing that? regards, Jaime Casanova ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] ntfs for windows port rc5-2
Frank D. Engel, Jr. wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 You may wish to consider a different database for your project. SQLite may be a better choice, for example, depending on the project's specific needs (www.sqlite.org). Win95/98/ME is poor technology, no matter how many users it still has. It's probably about time for them to upgrade or switch to another OS (of course, I think Windows in general is a poor technology, but that's for another list...). OTOH, does anyone know if the cygwin version of postgresql enforces the NTFS requirement? That may be another option... I'll check SQLite out, thanks for the tip on it. not sure about the cygwin, but don't really want to cause clients to have to install and run extra services that shouldn't be needed. I agree about windows, not worth using at all. -- only plain text format email accepted. smaller file size, no virus transfer no proprietary file formats. smime.p7s Description: S/MIME Cryptographic Signature
Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
On 1/14/05 12:47 PM, Frank D. Engel, Jr. [EMAIL PROTECTED] wrote: It's probably too messy to be worthwhile this way, though. More trouble than it would be worth. It would be rather useful if there was a way to get a reasonably accurate count (better than analyze provides) in a very short period. When you've got a relatively wide table that has hundreds of millions to over a billion rows, and you need to report on how many rows in the table, that can take a long time. Wes ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] OID Usage
Martijn van Oosterhout wrote: It means using OIDs as you described has very well known problems and they will break on you eventually. You can mitigate the damage by creating a UNIQUE index on the oid column but you'd better be sure your application can handle the side-effects. Ok, Tom told me about the same :-( But why are OIDs still in PG, and what are they good for? Will there be a real unique row id, like there is in Oracle, or will this be kept as an internal value only? OIDs won't become obsolete, but they'll probably no longer be enabled by default at some stage. Is this because some old applications use OIDs on somewhat small datasets? Hope this helps, It did, thanks. /BL ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] OID Usage
Michael Fuhr wrote: The PostgreSQL documentation discourages the use of OIDs for primary keys. For example, the Object Identifier Types section in the Data Types chapter says: ... Thanks for taking the time to snip this together. I think I will try to find a way to find the proper primary key (using pg_* tables), and if it uses nextval, I may be able to retrieve the currently inserted row by using currval. The 8.0 Release Notes say the following under Deprecated Features: Why has this not happened before? PGoidValue needs to be deprecated too. And why is it not substituted with something else? /BL ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
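The nextval/currval approach Bo describes can be sketched like this (the table and column names are illustrative, not from the thread):

```sql
-- assumes a table with a serial primary key; names are hypothetical
CREATE TABLE orders (
    id   serial PRIMARY KEY,
    note text
);

INSERT INTO orders (note) VALUES ('first');

-- currval() returns the value nextval() just produced in this session,
-- so it identifies the row we inserted even under concurrent inserts
SELECT currval('orders_id_seq');
```

Because currval is session-local, other sessions inserting at the same time cannot disturb the value it returns.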
Re: [GENERAL] OID Usage
Alvaro Herrera wrote: Yeah, though things get hairy that way because you have to peek at pg_attribute to match the objsubid in pg_depend; and self-join pg_class to get to the index itself. Not sure if it all can be done in a single query. Sounds like my task, to make an OID-free insert/select, is going to be very interesting :-) /BL ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] OID Usage
Michael Fuhr wrote: See the System Catalogs chapter in the documentation. Ok, I think I will compile all the information given in this thread to make a new, OID-free solution, as the dataset I manage is going to grow quite a lot :-) If you run psql -E you'll see the queries that psql executes for commands like \d foo. Those commands query the system catalogs. This may be very useful, thanks. /BL ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
A cardinality estimate function might be nice. SELECT cardinality_estimate(table_name) If it is off by 25% then no big deal. It would be useful for the PostgreSQL query planner also, I imagine. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Wes Sent: Friday, January 14, 2005 2:59 PM To: Postgres general mailing list Subject: Re: [GENERAL] [HACKERS] Much Ado About COUNT(*) On 1/14/05 12:47 PM, Frank D. Engel, Jr. [EMAIL PROTECTED] wrote: It's probably too messy to be worthwhile this way, though. More trouble than it would be worth. It would be rather useful if there was a way to get a reasonably accurate count (better than analyze provides) in a very short period. When you've got a relatively wide table that has hundreds of millions to over a billion rows, and you need to report on how many rows in the table, that can take a long time. Wes ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] ntfs for windows port rc5-2
I don't think Windows is entirely worthless; some versions, such as XP, are quite stable for most purposes. By no means am I saying to go put a production database server like Postgres or Oracle on it. SMBs (small-to-medium businesses) may benefit from Windows 2000 if they aren't able to get somebody who can manage Linux/Unix in their environment and don't have a heavy load. Cheers, Aly. I agree about windows, not worth using at all. -- Aly Dharshi [EMAIL PROTECTED] A good speech is like a good dress that's short enough to be interesting and long enough to cover the subject ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] OID Usage
On Sat, Jan 15, 2005 at 12:06:41AM +0100, Bo Lorentsen wrote: Martijn van Oosterhout wrote: It means using OIDs as you described has very well known problems and they will break on you eventually. You can mitigate the damage by creating a UNIQUE index on the oid column but you'd better be sure your application can handle the side-effects. Ok, Tom told me about the same :-( But why are OIDs still in PG, and what are they good for? Will there be a real unique row id, like there is in Oracle, or will this be kept as an internal value only? They're still there because the system tables use them. But for user tables you use sequences, which go up to 64 bits and have protection against wraparound. Is this because some old applications use OIDs on somewhat small datasets? Basically, OIDs are not useful in user tables: they're not unique and have no special privileges w.r.t. other columns. All they do is take up extra storage space. The use of OIDs has been discouraged for a long time now, and it's finally getting to the stage where they'll be disabled on user tables by default. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. pgpPQoQyQG9Jw.pgp Description: PGP signature
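The sequence-based alternative Martijn points to can be sketched in one statement (table and column names are illustrative):

```sql
-- instead of relying on OIDs, use a 64-bit surrogate key:
-- bigserial is backed by a sequence, so ids stay unique for the
-- life of the table rather than wrapping like the 32-bit OID counter
CREATE TABLE items (
    id      bigserial PRIMARY KEY,
    payload text
);
```

The PRIMARY KEY constraint gives the unique index that the thread recommends adding manually when OIDs are used.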
Re: [GENERAL] OID Usage
On Sat, Jan 15, 2005 at 12:06:41AM +0100, Bo Lorentsen wrote: Ok, Tom told me about the same :-( But why are OIDs still in PG, and what are they good for? Will there be a real unique row id, like there is in Oracle, or will this be kept as an internal value only? Most system catalogs use OIDs as primary keys. So they cannot just disappear. But on user tables, there's not a lot of use for them IMHO. There's no internal row id in Postgres; having one would mean more storage requirements. If you want one, you know where to get it ... if not, you may as well save the space. -- Alvaro Herrera ([EMAIL PROTECTED]) No hay cielo posible sin hundir nuestras raíces en la profundidad de la tierra (Malucha Pinto) ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Parsing of backslash in statements via ODBC
Frank D. Engel, Jr. [EMAIL PROTECTED] writes: Given that this issue is a violation of SQL compatibility, shouldn't there really be an option to turn off interpretation of backslash characters in string literals as escapes? Maybe as a session variable of some kind, with a default being set in postgresql.conf? That has about as much chance of getting in as a session variable to change the identifier-case-folding behavior, and for the same reason: any such variable is certain to break tons of existing client-side code that doesn't know about it. We already learned this lesson with respect to autocommit :-(. It is irritating that we can't easily support exactly-spec-compliant applications, and I'd like to find a solution. But a GUC variable ain't it. See past discussions about case-folding behavior for some of the ramifications. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
On Fri, Jan 14, 2005 at 03:11:32PM -0800, Dann Corbit wrote: A cardinality estimate function might be nice. SELECT cardinality_estimate(table_name) If it is off by 25% then no big deal. It would be useful for the PostgreSQL query planner also, I imagine. If that's all you want, what about the row estimate from pg_class? It has the number of rows active at last vacuum... For really large tables I imagine it'd be easily close enough... Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them. pgpjUZPxiXyGu.pgp Description: PGP signature
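The pg_class row estimate Martijn mentions can be read directly (a sketch; 'mytable' is a placeholder):

```sql
-- reltuples holds the planner's row-count estimate as of the last
-- VACUUM or ANALYZE; it is a float, so cast it for a whole number
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'mytable';
```

This is instantaneous regardless of table size, but it is only as fresh as the last VACUUM/ANALYZE, which is why Wes objects below that it is not accurate enough for day-to-day deltas.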
Re: [GENERAL] OID Usage
It's not very hard to do. I just got rid of them. It took me about a day. Our application is an X-Windows front end written in C. I wrote a function to return the next value of the serial key for any table. Here is the select statement built with sprintf: SELECT relname FROM pg_class WHERE relkind = \'S\' AND relname = \'%s_recid_seq\'. All our sequences are called recid, and since the naming convention is table_name_recid_seq, it's easy to get the name of the right sequence. You might as well go ahead and do it. You'll feel better after you do ;o) On Friday 14 January 2005 06:13 pm, Bo Lorentsen saith: Alvaro Herrera wrote: Yeah, though things get hairy that way because you have to peek at pg_attribute to match the objsubid in pg_depend; and self-join pg_class to get to the index itself. Not sure if it all can be done in a single query. Sounds like my task, to make an OID-free insert/select, is going to be very interesting :-) /BL ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings -- Work: 1-336-372-6812 Cell: 1-336-363-4719 email: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
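The same convention-based lookup could be done server-side; this is a hedged sketch (the `_recid_seq` naming convention follows the post above, but the function name and everything else is illustrative):

```sql
-- returns the next value of a table's recid sequence, assuming the
-- table_name_recid_seq naming convention described above
CREATE OR REPLACE FUNCTION next_recid(tbl text) RETURNS bigint AS $$
    SELECT nextval(quote_ident($1 || '_recid_seq'));
$$ LANGUAGE sql;

-- usage: SELECT next_recid('customers');
```

quote_ident guards against tables whose names need quoting; a C client would then call this one function instead of building the sequence name with sprintf.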
Re: [GENERAL] Parsing of backslash in statements via ODBC
Martijn van Oosterhout kleptog@svana.org writes: I'm not totally sure about how ODBC works, but if it's anything like Perl DBI, surely it's the responsibility of the ODBC layer to escape the baackslashes? Maybe it depends on whether they're using placeholders or not... I suppose they are not using placeholders, or we'd not be having this discussion ... regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] MOVE
BEGIN; INSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...; DELETE FROM active WHERE user_id = ...; COMMIT; The DELETE can only delete the rows returned by the select, that's the whole point of transactions... Well, in the case of having a unique index on user_id, and if no-one updates the row between the insert and the delete, it will work ;) And if someone updates it in between, well, the update is not archived, so you have to LOCK your row FOR UPDATE. And if this procedure is called twice at the same time, the rows will be historized twice... etc... Which is precisely why I don't like this approach ! As a side note, I've installed 8.0 rc, and wow. The slow queries feel a lot faster on the command prompt, and my small queries have become faster too... very good work ! In the end, I've implemented it with an AFTER DELETE trigger on the 'live' table, which, after the row has been deleted, inserts it into the history table, using the magic variable OLD. This works because the row is already deleted, and thus can't be concurrently updated by another transaction (because a transaction trying to update a row will wait on the lock acquired by the DELETE, and vice versa). So, for ONE row at a time, in a trigger, it works beautifully, thank you postgres ! I find the solution very elegant : when a session expires, it is deleted from the live session table, then the trigger catches it just in time to shove it into the sessions history table, then some other tables like user-to-chatroom connections, which happen to have a user_id referencing into the live sessions table, get the ON DELETE CASCADE and are also purged and historized automatically. I am very happy with this solution BUT it's done one row at a time, so it's slower than I'd like ! The key is to insert the row deleted from the live table into the history table AFTER it has been deleted, to avoid all funky lock problems.
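A minimal sketch of the AFTER DELETE trigger approach described above (the table names and plpgsql details are illustrative assumptions, not the poster's actual code):

```sql
-- archive each deleted live-session row; by the time this fires the row
-- is already deleted, so no other transaction can still be updating it
CREATE OR REPLACE FUNCTION archive_session() RETURNS trigger AS $$
BEGIN
    INSERT INTO sessions_history VALUES (OLD.*);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sessions_archive
    AFTER DELETE ON sessions
    FOR EACH ROW EXECUTE PROCEDURE archive_session();
```

This presumes sessions_history has the same column layout as sessions, so the deleted row can be copied wholesale via OLD.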
Now consider the following : you must DELETE several items at the same time and historize them. If you INSERT then DELETE: - records can be inserted, updated or deleted between the two. The inserted ones will be historized but not deleted (duplicates !), the deleted ones will be lost forever, unhistorized, and the updated ones won't have their updates historized. Not very good for concurrency ! You can LOCK FOR UPDATE before, which solves the UPDATE and DELETE problem, but not the INSERT problem. You can, of course, lock the entire table, but well, it reminds me too much of the MySQL way. You can also use SERIALIZABLE mode, which solves all the problems, but if something goes wrong, everything fails and you have to retry the whole transaction, whereas a proper lock would be waited on... If there is a primary key in the 'live' table you can SELECT FOR UPDATE into a temporary table, then delete using the pkeys in the temp table, then insert from the temp table... ugly ! That's why I bother you to have the possibility of DELETE returning the DELETE'd rows ;) It's not very useful if you process one row, but when you process several at a time, it would be really great, because instead of 2*N queries (DELETE+INSERT hidden in a trigger) you'd just do one (INSERT ... DELETE AND SELECT ... FROM ...). Today, if you don't want to do it in a trigger, you have to have a unique index, SELECT FOR UPDATE, INSERT, DELETE; that's three queries per row. In a perfect world, this would then be used in an ON DELETE RULE which would replace the DELETEs with deletes inserting the rows into the history table. Also, I've thought about some other interesting applications: if DELETE returns rows, why not UPDATE or even INSERT ? Many applications use INSERT... then SELECT currval(sequence). I also like to set defaults in the database, like for instance some rows which have timestamp fields defaulting to now() or things like that.
I have a tree table with a ltree field which is generated by a trigger from the parent's path and the current row's id. Some other fields are also inherited from the parent. Why not do INSERT INTO ... AND SELECT ... which would return the sequence field, and any other fields which have been initialized by ON INSERT triggers... this would be neat... instead of INSERT, SELECT currval, SELECT .. FROM table WHERE id=... Same thing for on update triggers. You could replace some plpgsql procedures with one query, and what's more important, not worry about locking headaches. Anyway, my problem is solved now with triggers, but I like the idea very much (and Oracle has it) (and Tom once said a DELETE was just more or less like a SELECT)... so ... Regards ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
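For the record, later PostgreSQL releases granted exactly this wish: DELETE (and UPDATE and INSERT) gained a RETURNING clause in 8.2, and 9.1 added writable common table expressions, so the move-and-archive becomes a single statement (a sketch; table names follow the earlier example and 42 is an illustrative id):

```sql
-- atomically move rows from the live table to the archive:
-- the DELETE's RETURNING output feeds the INSERT in one statement
WITH moved AS (
    DELETE FROM active
    WHERE user_id = 42
    RETURNING *
)
INSERT INTO archive
SELECT * FROM moved;
```

No row can be inserted, updated, or deleted "between the two" because there is only one statement, which is precisely the concurrency property argued for above.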
Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
On 1/14/05 5:37 PM, Martijn van Oosterhout kleptog@svana.org wrote: If that's all you want, what about the row estimate from pg_class? It has the number of rows active at last vacuum... For really large tables I imagine it'd be easily close enough... For showing the changes in a given day (or even week), that isn't accurate enough. Wes ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] MOVE
Use an after insert trigger. Well, I did the reverse: an after delete trigger on the live table which inserts the deleted row into the history table, and it works very well. Thanks. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 05:10:10PM -0300, Alvaro Herrera wrote: On Fri, Jan 14, 2005 at 12:20:50PM -0700, Michael Fuhr wrote: On Fri, Jan 14, 2005 at 07:44:18PM +0100, Bo Lorentsen wrote: Alvaro Herrera wrote: You can create a function to get the sequence name attached to a table. Of course, you should take into account the fact that there could be more than one (two serial fields in a table are rare but not impossible), but if your tables have only one sequence you should be OK. Are there a way to find and test if it is a primary key ? pg_index has an indisprimary column. Yeah, though things get hairy that way because you have to peek at pg_attribute to match the objsubid in pg_depend; and self-join pg_class to get to the index itself. Not sure if it all can be done in a single query. If you do manage to write a function that will do this I hope you can share it with the community. IMHO PostgreSQL could do with more functions for querying the system catalogs. -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: Where do you want to go today? Linux: Where do you want to go tomorrow? FreeBSD: Are you guys coming, or what? ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
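A starting point for such a catalog query might look like this (a hedged sketch using the pg_index.indisprimary column mentioned above; 'mytable' is a placeholder, and the ANY(indkey) comparison may need adjustment on older releases):

```sql
-- list the column(s) making up mytable's primary key
SELECT a.attname
FROM pg_index i
JOIN pg_attribute a
  ON a.attrelid = i.indrelid
 AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'mytable'::regclass
  AND i.indisprimary;
```

From the primary key column one can then chase pg_depend to the owning sequence, which is the hairy part Alvaro describes.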
[GENERAL] serial increments on failed insert
I've noticed what seems to be an odd effect in psql 7.3. It works like this: 1 Create a table: CREATE TABLE foo ( sval serial, uval int UNIQUE ); 2 Run 3 inserts, the second of which fails because it violates the unique constraint: INSERT INTO foo VALUES (DEFAULT,1); INSERT INTO foo VALUES (DEFAULT,1); --- This fails INSERT INTO foo VALUES (DEFAULT,2); 3 Look at the table: SELECT * FROM foo; sval | uval --+-- 1 |1 3 |2 --- look here (2 rows) Notice that even though the second insert failed, it still incremented the serial value. This seems counterintuitive to the way that serial should work. Is this truly a bug, or is there a good workaround? Thanks, Dave ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] serial increments on failed insert
David Kammer [EMAIL PROTECTED] writes: Notice that even though the second insert failed, it still incremented the serial value. This seems counter intuative to the way that serial should work. Is this truly a bug, No. nextval() calls never roll back; see the documentation. is there a good work around? Don't assume that a serial column is without gaps. It's only intended to be unique. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] serial increments on failed insert
On Fri, Jan 14, 2005 at 04:57:19PM -0800, David Kammer wrote: Notice that even though the second insert failed, it still incremented the serial value. This seems counter intuative to the way that serial should work. Is this truly a bug, or is there a good work around? That's correct, documented behaviour. A serial column is mostly just a sequence in disguise. A sequence is guaranteed to give unique, increasing values, but in many cases may miss a value (for several reasons - in this case because once a sequence value is used, it's used, even if the transaction it was used in is rolled back). Do you really need that column to increase one at a time? Or just to increase and be unique? You could look at the maximum value in the column and use the maximum value plus one (and be prepared to retry if there's an index on that column to guarantee uniqueness). Cheers, Steve ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] serial increments on failed insert
On Fri, Jan 14, 2005 at 04:57:19PM -0800, David Kammer wrote: Notice that even though the second insert failed, it still incremented the serial value. This seems counter intuative to the way that serial should work. Is this truly a bug, or is there a good work around? See the Sequence Manipulation Functions section in the Functions and Operators chapter of the documentation. The page contains the following note: Important: To avoid blocking of concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions may leave unused holes in the sequence of assigned values. setval operations are never rolled back, either. Sequences are for obtaining numbers guaranteed to be unique; other assumptions about their behavior are probably unwarranted. -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 8: explain analyze is your friend
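The documented behavior can be demonstrated directly (a sketch against the foo table from the original post; the sequence name assumes the default serial naming):

```sql
BEGIN;
SELECT nextval('foo_sval_seq');  -- fetches a value, say 2
ROLLBACK;                        -- the rollback does NOT give the value back

-- the next call skips the consumed value, leaving a gap,
-- exactly as in the original post's 1, 3 result
SELECT nextval('foo_sval_seq');
```

This is the trade-off that lets concurrent transactions draw from the same sequence without blocking each other.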
Re: [GENERAL] pgpool
John Cunningham wrote: concerned that if I drop the number of connections to less than the number of databases I have, that pgpool would open the limit of connections, hold them open and not allow any connections to the remaining databases. Is this a concern? If I set up pgpool will I have to have the same number of connections as I have databases? That depends on how you configure pgpool. pgpool is not aware of the connection limit count in the PostgreSQL server, so it will happily open connections until there are no more slots available. pgpool will require max_pool * num_init_children connection slots. max_pool should be the number of database/user combinations you use (300 in your case, assuming only one database user account), Not really. If a user connects to pgpool and all connection slots are already full, then pgpool will release the oldest connection slot and reuse it for the new connection. So even if there are 300 database/user combinations, it's ok to set max_pool as low as, for example, 4. Of course this has the unwanted side effect that connection caches are not kept very well. -- Tatsuo Ishii and num_init_children should be on the order of how many concurrent connections you expect to each combination (several in your case). So you should have at least 300 * several PostgreSQL connection slots, which is probably more than the 1000 or so that is the default. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] PYTHON, ODBC
we won't be using any tools aside from psycopg; no gui application frameworks, i.e. we're not using zope, ruby on rails, etc. The front end will only work for this particular database. This has come about due to the complicated nature of the database, the inability of zope to do what we need, and the problems with overhead we've been experiencing with rails due to the size of the database. Hope this clears things up a bit. thanks, matt On Sun, 09 Jan 2005 14:56:57 +0100, Pierre-Frédéric Caillaud [EMAIL PROTECTED] wrote: completely proprietary front end written in python. Any help finding useful What does a completely proprietary front-end in python mean ? ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] PostgreSQL 8 on windows very slow
Hi, I'm currently testing several databases for an application written in Delphi 7. I use the zeos lib to access PostgreSQL 8.0-RC1 on MS Windows 2000 SP4. PostgreSQL is extremely slow, with a lot of disk access on INSERT requests. Have you seen this problem ? Maybe some parameters should be adjusted. What should I check ? Thanks ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] PYTHON, ODBC
thanks for the advice, matt On Sun, 09 Jan 2005 20:47:08 +0100, Pierre-Frédéric Caillaud [EMAIL PROTECTED] wrote: due to the complicated nature of the database, and inability of zope Well, I've found that Zope is very good at doing a few things, and very bad at the rest. to do what we need, and the problems with overhead we've been experiencing with rails due to the size of the database. Hope this I like psycopg because selects with a lot of rows are processed efficiently. Look at the pydo library in the skunkweb project (google !) I'd advise you to start writing a base class for your database objects (class DbObject) with a set of methods for setting instance attributes from query results. SELECT *, then use cursor.dictfetchall(), which preserves the field names ! Add methods for inserting and updating. Add a mapping of fields to type converters (which are functions) to convert non-standard types like arrays to python lists if you need them. Then derive your DbObject class for each table, with class variables containing the field names and type converters, which will be used by the base class. This way you can have a derived class with almost no code. Enjoy ! ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Books for experienced DB developer
Tino, Multiple recordsets means returning multiple sets of results, not just one. As an example, in an SQL Server stored procedure you can have the following:

create proc getdata as
select * from table1
select * from table2
go

and it will return 2 result sets. This is not possible in PostgreSQL today. Tino Wildenhain [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] On Fri, 2005-01-07 at 11:56 +0100, Patrick FICHE wrote: I'm afraid this is still a problem. From my knowledge, a Postgres function is able to return a single result set, not multiple. I may have missed some facility... rtfm_please see http://techdocs.postgresql.org/guides/SetReturningFunctions rtfm_please or http://www.postgresql.org/docs/current/static/xfunc-sql.html#AEN28835 (fresh from the freenode IRC #postgresql channel) Or is it not what you mean? Regards Tino ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] PostgreSQL 8 on windows very slow
I use zeoslib 6.1.5 for Delphi 7. I have also tried 6.5.1-alpha. It's faster (especially with firebird). I will take a look at dbExpress. Thank you. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] StartTransact back at Debug2 in v8?
Dear Gurus, Please point me to the right way to find the answer if it's already answered, and forward the mail to the right list if I missed. Also, please CC: answers to me, since I'm unsure if I'm subscribed to the list right now... ;) In v7.3, our logfiles got bloated by Start/CommitTransactionCommand. In v7.4, this could be eliminated by a loglevel of DEBUG2, which showed user-level debug output but not these Start/CommitTransact DEBUG messages. In 8.0.0rc4 I'm sad to see that not only are they back at the same DEBUG level (DEBUG2), but the output is even more verbose. Is there a way to configure the log differently, or is there an easy way to modify the source to do so (e.g. degrading these annoying messages to DEBUG3)? Below is part of my postgresql.conf and a log piece. TIA, -- G.

----- begin postgresql.conf snippet -----
# - When to Log -
#log_min_messages = notice       # Values, in order of decreasing detail:
log_min_messages = debug2
#log_min_error_statement = panic # Values in order of increasing severity:
log_min_error_statement = error
# - What to Log -
#log_connections = false
log_connections = true
#log_line_prefix = ''            # e.g. '%u%%%d '
log_line_prefix = '%t [%p] [EMAIL PROTECTED] '
----- end of postgresql.conf snippet -----

----- begin log snippet -----
2005-01-11 13:52:25 CET [19185] [EMAIL PROTECTED] DEBUG: StartTransaction
2005-01-11 13:52:25 CET [19185] [EMAIL PROTECTED] DEBUG: name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 2081/1/0, nestlvl: 1, children:
2005-01-11 13:52:25 CET [19185] [EMAIL PROTECTED] DEBUG: CommitTransaction
----- end of log snippet -----

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] serial increments on failed insert
I've noticed what seems to be an odd effect in psql 7.3. It works like this:

1. Create a table:

CREATE TABLE foo (
    sval serial,
    uval int UNIQUE
);

2. Run 3 inserts, the second of which fails because it violates the unique constraint:

INSERT INTO foo VALUES (DEFAULT, 1);
INSERT INTO foo VALUES (DEFAULT, 1);  -- This fails
INSERT INTO foo VALUES (DEFAULT, 2);

3. Look at the table:

SELECT * FROM foo;
 sval | uval
------+------
    1 |    1
    3 |    2   <-- look here
(2 rows)

Notice that even though the second insert failed, it still incremented the serial value. This seems counterintuitive to the way serial should work. Is this truly a bug, or is there a good workaround? Thanks, Dave ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
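The gap above is expected: sequence values handed out by nextval() are never given back, even when the statement that consumed them fails, so a failed insert leaves a hole. A toy model of that behavior (plain Python, no database; the Sequence class and insert helper are illustrative only) might look like:

```python
# Toy model of why a failed INSERT still consumes a serial value:
# nextval() on a sequence is non-transactional, so a rollback does not
# return the value to the sequence.

class Sequence:
    def __init__(self):
        self.value = 0
    def nextval(self):
        self.value += 1         # advances immediately, outside any transaction
        return self.value

seq = Sequence()
table = {}                      # uval -> sval, with uval acting as UNIQUE

def insert(uval):
    sval = seq.nextval()        # value is consumed here...
    if uval in table:
        raise ValueError("duplicate key violates unique constraint")
    table[uval] = sval          # ...and only kept if the insert succeeds

insert(1)
try:
    insert(1)                   # fails the unique check; sval 2 is lost
except ValueError:
    pass
insert(2)
print(sorted(table.items()))    # [(1, 1), (2, 3)] -- sval 2 is a gap
```

This is a deliberate design trade-off: making sequences transactional would serialize all inserters on the sequence, so serial columns guarantee uniqueness but not gaplessness.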
[GENERAL] PostgreSQL v7.3.6
Hi Could you provide me with a link to where I can download v7.3.6 of PostgreSQL for Win32? This is for some testing of mine. Thanks in advance. Jason
[GENERAL] speaks psql unicode?
Hi, if I try to import data via SQL INSERTs (exported with pgManager) from a UTF-8 file, I always get parse errors in the first line. After switching to ASCII and using SET client_encoding TO 'latin1' I can import all lines, but some Unicode characters are, as expected, damaged. Now my question: can psql process Unicode files? Thanks for any tip, thomas ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
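The kind of damage described here is easy to reproduce outside psql. A small Python illustration of what happens when UTF-8 bytes are read under a latin1 client encoding:

```python
# UTF-8 encodes 'é' as two bytes (0xC3 0xA9); read back as latin1, each
# byte becomes its own character -- the classic "damaged" mojibake output.
original = "héllo"
utf8_bytes = original.encode("utf-8")
misread = utf8_bytes.decode("latin-1")
print(misread)                  # hÃ©llo
assert misread != original
```

Matching the client encoding to the file's actual encoding (e.g. via \encoding, as suggested downthread) avoids exactly this mismatch.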
[GENERAL] Benchmarks
Is there any benchmark comparing Postgres 8.0 with previous versions and other DBMSs? Bruno
Re: [GENERAL] Function for retreiving datatype
Brendan, I have had similar problems, and the way I resolve it is by running the SQL statement directly in pgAdmin; the result set tells you what the field types are. For example:

create or replace function test1(vara int, varb int) returns setof record as $$
declare
    row record;
begin
    for row in select * from table1 where field1 = vara and field2 = varb loop
        return next row;
    end loop;
    return;
end;
$$ language 'plpgsql';

I copy the select statement and either make up values for vara and varb or leave the where clause out completely. The result set then has: field1 (int), field2 (varchar). I don't see how a function would help you in the middle of the code, because you need to already know the field type before you call the function. Also, the field type can change dynamically if you are concatenating or applying other functions to the fields. For example, with field xyz as a varchar and abc as text, xyz || abc stores the result as text. Good Luck Sim Brendan Jurd [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Michael Fuhr wrote: On Tue, Jan 11, 2005 at 03:28:08AM +1100, Brendan Jurd wrote: Does postgres have a function to determine the data type of an argument? In what context? What problem are you trying to solve? Well, I solved the original problem in a different way, but I'd still like to know whether such a function exists. The original problem had to do with querying a row-returning function. I had an SQL function that returned SETOF record, and I was trying to use it in the FROM clause of a query. To do so, you need to provide a list of column definitions. I was getting the error about the returned row types not matching my column defs. In the end it was a simple mistake -- I had specified 'text' where I should have specified 'varchar'. I had thought to use some kind of gettype function to find out exactly what data types my query was returning. 
On that note, it might be helpful to increase the verbosity of the returned row types error message, so that it actually explains the mismatch it encountered. Something like Returned column 3 is varchar(15) but column definition is text would have made debugging a whole lot easier. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] PostgreSQL 8 on windows very slow
I have tried RC4, but there's no difference. My results using my configuration are: MySQL 4 is 6 times faster than pgSQL; Firebird 1.5 is 3 times faster than pgSQL. Are these results coherent? Maybe the problem comes more from ZeosLib than from pgSQL 8. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] PostGreSQL on Access Report
I'm nearing the end of my PostgreSQL migration from Access/Access to Access/PostgreSQL and I wanted to let everyone know how it is going.

1) ADO and Access do not go well together, at least not in Access 2000. Recordsets are not updatable, and the internal Access sort and filter functions work sporadically and weirdly.

2) I had initially converted those forms that didn't need to be updatable to ADO and kept the rest DAO, but that didn't work out so well, so now I have taken a step backwards and changed everything to DAO. I still managed to get a huge performance gain over native Access.

3) Access does not like functions as tables, such as select * from f_name(p1,p2...); it says it's a bad FROM statement when you try to sort. I solved this by writing a little workaround where my function first puts the parameters in a table along with the function name and IP address of the client. Then on the server I have another function that reads the parameters from the table and gets the data using a "for row in Execute..." statement. Finally I have a view that calls the new function. That way access to a function looks like a standard table.

4) I imported the citext case-insensitive type; thank you Demolish for that. I originally changed all my varchars and text fields to citext and changed the ODBC setting to "Unknowns as LongVarChar", but that made them all memos, which I thought was good. Then I found out you can't sort memos in Access. So I changed all the texts back to text and removed the ODBC setting, so the unknowns came in as an Access text (a DB varchar), and now the memos are not case insensitive, but that's life.

5) One trick I learned is that Access does not refresh any ODBC properties except the server name when you "Refresh Linked Tables" and choose a different location. To change other settings you have to delete the linked table and relink it.

6) Subforms lose their field linkage if the recordset is changed, even if the fields in the new recordset are the same. 
If you check the properties it will still say they are set, but they really aren't. 7) Subforms, I think it's actually only subdatasheets, can't have a passthrough query as the recordsource, but views can be linked as tables, so that is a good way of handling them. That's all I can think of at the moment. Hopefully this information will stop at least one person from pulling out all his hair as he tries to figure out how this actually works. Sim ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] to_char(interval, text) deprecated in future - how do we get consistent interval output without it?
I saw the note in the docs that to_char(interval, text) is deprecated, and will be removed. I searched the archives and saw more mentions of this, but no real explanation as to how it is planned for us to get consistent output formatting when querying a column containing interval data. For example, if you have data in an interval column, and just SELECT it (without special formatting), you get things like:

00:05:00
3 days
1 day 04:00:00
3 days 03:10:00

Currently, if I use something like to_char(t.estimated_time, 'DD HH24:MI'), I seem to get consistent results that my program can easily deal with:

00 00:05
00 00:10
01 04:00
00 01:05
03 03:10

Can someone please explain if there is something else I should be using, or what the plans actually are for handling this in future? Should I just store the value in seconds as an integer and handle everything myself? Thanks, Susan Cassidy -- See our award-winning line of tape and disk-based backup recovery solutions at http://www.overlandstorage.com -- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
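If the store-seconds-as-an-integer route mentioned at the end is taken, formatting on the client side is straightforward. A sketch in Python (the function name is made up; the padding is chosen to match the 'DD HH24:MI' examples above):

```python
def format_interval(total_seconds):
    """Render a duration in seconds as 'DD HH24:MI', zero-padded."""
    days, rest = divmod(total_seconds, 86400)
    hours, rest = divmod(rest, 3600)
    minutes = rest // 60
    return "%02d %02d:%02d" % (days, hours, minutes)

print(format_interval(300))                         # 00 00:05
print(format_interval(1 * 86400 + 4 * 3600))        # 01 04:00
print(format_interval(3 * 86400 + 3 * 3600 + 600))  # 03 03:10
```

Storing plain seconds also makes sorting and arithmetic trivial on any client, at the cost of losing the server-side interval semantics.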
Re: [GENERAL] PostgreSQL 8 on windows very slow
What queries are you running? What sort of a machine are the database systems running on? -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of lol Sent: Tuesday, January 11, 2005 6:47 AM To: pgsql-general@postgresql.org Subject: Re: [GENERAL] PostgreSQL 8 on windows very slow I have tried RC4, but there's no differences My results using my configuration are : MySQL 4 is 6 times faster than pgSQL Firebird 1.5 is 3 times faster than pgSQL Are these results coherent ? May be the problem comes more from ZeosLib than pgSQL8 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Open Source Database Opportunity
Hello all, I am looking for a survey which compares the functionality of the 3 following databases: Oracle, MySQL and PostgreSQL. Could somebody send me a link or some documentation about this? Thanks Regards Christophe TOUBLANC Architecture Technique Groupe SFR Cegetel - Direction Générale Technique Groupe Cegetel Système d'Information - Direction de l'urbanisation et du pilotage Immeuble Acacia - bur. 427 Tel : +33 01 71 01 47 59 - Fax : +33 01 71 01 62 17
Re: [GENERAL] Benchmarks
You might find something useful here: http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/ From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bruno Tenorio Avila Sent: Monday, January 10, 2005 9:09 PM To: pgsql-general@postgresql.org Subject: [GENERAL] Benchmarks Is there any benchmark of Postgres 8.00 comparing with previous versions and others DBMS? Bruno
[GENERAL] Problems with a trigger
Hi! I have a problem with an AFTER INSERT row-based trigger. It returns "record new is not assigned yet". Any ideas? Thanks! Valentin Militaru [EMAIL PROTECTED] SC Telcor Communications SRL Tel. fix: 0316900015 Fax: 031691 Telefon mobil: 0741168267
Re: [GENERAL] Open Source Database Opportunity
A web search turned this stuff up: http://www.linuxlinks.com/local/business/databases.shtml http://www.3asoft.com/article/10210.html http://www.eweek.com/article2/0,1759,1433850,00.asp I am sure you can find plenty more with a few well-directed Google inquiries. From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of TOUBLANC Christophe Sent: Friday, January 14, 2005 6:05 AM To: 'pgsql-general@postgresql.org' Cc: SOLTANI Roelof Subject: [GENERAL] Open Source Database Opportunity Hello all I am looking for a survey which compares functionalities of the 3 following databases : Oracle, Mysql and PostreSQL Do somebody could send me some link or some documentation about this Thanks Regards Christophe TOUBLANC Architecture Technique Groupe SFR Cegetel -Direction Générale Technique Groupe Cegetel Système d'Information - Direction de l'urbanisation et du pilotage Immeuble Acacia - bur. 427 Tel : +33 01 71 01 47 59 - Fax : +33 01 71 01 62 17
Re: [GENERAL] Open Source Database Opportunity
You might try searching the mailing list archives. On Fri, Jan 14, 2005 at 03:05:07PM +0100, TOUBLANC Christophe wrote: Hello all I am looking for a survey which compares functionalities of the 3 following databases : Oracle, Mysql and PostreSQL Do somebody could send me some link or some documentation about this Thanks Regards Christophe TOUBLANC Architecture Technique Groupe SFR Cegetel -Direction Générale Technique Groupe Cegetel Système d'Information - Direction de l'urbanisation et du pilotage Immeuble Acacia - bur. 427 Tel : +33 01 71 01 47 59 - Fax : +33 01 71 01 62 17 -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: Where do you want to go today? Linux: Where do you want to go tomorrow? FreeBSD: Are you guys coming, or what? ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Problems with a trigger
Perhaps you need to do a FOR EACH ROW trigger? If that doesn't solve it, I need more details. Regards, Jeff On Thu, 2005-01-13 at 15:03 +0200, Valentin Militaru wrote: Hi! I have a problem with an AFTER INSERT row-based trigger. It returns record new is not assigned yet. Any ideas? Thanks! Valentin Militaru [EMAIL PROTECTED] SC Telcor Communications SRL Tel. fix: 0316900015 Fax: 031691 Telefon mobil: 0741168267 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] [HACKERS] Much Ado About COUNT(*)
Frank D. Engel, Jr. [EMAIL PROTECTED] writes: Yep, that could cause problems. Okay, now I'm joining the program. The only thing I can see that would fix this ... There are well understood mechanisms to fix this. It's a SMOP or simple matter of programming. What you would do is insert into a summary table a record that indicates how many records you've inserted into the master table. Periodically you have some daemon collect up those records and replace them with a single record. But this can be done already by hand and it's not clear having the database do it automatically is necessarily a good idea. It would impose a cost on every insert when most of the time it wouldn't be useful. Moreover this is just a special case of a general problem called materialized views. If it were added to the database it would probably be more worthwhile implementing a more general feature that could handle other aggregate functions besides count(*) as well as other types of queries besides simple unqualified aggregates. -- greg ---(end of broadcast)--- TIP 8: explain analyze is your friend
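The delta-record scheme Greg describes can be modeled without a database. A toy Python version (all names are illustrative; in a real implementation the "summary" would be a table, the deltas would be rows inserted alongside each master-table insert, and compact() would be the periodic daemon pass):

```python
# Toy model of the count(*) summary-table scheme: every insert appends a
# small delta record; a periodic job collapses the deltas into one row.
# The total is always the sum of the deltas, so a reader never needs to
# scan the master table.

summary = []                 # list of delta records, e.g. one per insert

def record_insert(n=1):
    summary.append(n)        # cheap: no contention on a single counter row

def compact():
    """The 'daemon' pass: replace many delta rows with a single one."""
    total = sum(summary)
    summary.clear()
    summary.append(total)

def fast_count():
    return sum(summary)      # reads the small summary, not the master table

for _ in range(1000):
    record_insert()
compact()
record_insert(5)
print(len(summary), fast_count())   # 2 1005
```

Appending a fresh delta per insert rather than updating one counter row is the key design choice: it avoids every writer contending on (and, under MVCC, bloating) a single summary row, at the cost of the periodic compaction.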
Re: [GENERAL] speaks psql unicode?
On Jan 10, 2005, at 10:20 AM, Thomas Chille wrote: if i try to import data via SQL-Inserts (exported with pgManager) out from an utf-8 file i get always parse errors in the first line. After switching to ascii and using of SET client_encoding TO 'latin1' i can import all lines, but some unicode-characters are, like expected, damaged. Now my question: Can psql process Unicodefiles? Sure. Try this at the top of your file: \encoding UNICODE I think you could also set the ENCODING variable on the command line with the -v option. John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Bogus subscription instructions on website
Found at http://archives.postgresql.org/pgsql-general/ If you hate getting many mail messages per day then you should consider a digest (where you receive multiple messages to the list as one message to you). To subscribe or unsubscribe from the digested list, send mail to [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]. The body of the message should contain the single line subscribe-digest pgsql-general or unsubscribe-digest pgsql-general Good advice, bad instruction. If you send the aforementioned command to [EMAIL PROTECTED], it is rejected. The correct way to get the digest is: 1. Subscribe to the list normally. 2. Once subscription is complete, send majordomo the command set LISTNAME digest It's also worth doing a help subscribe and a lists-full LISTNAME, so that you can see what options are available for the digest delivery. Anyone who doesn't know how to use majordomo's help function is likely to become very frustrated by the fact that the website's instructions don't work. BJ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 06:39:25PM -0600, Jim C. Nasby wrote: If you do manage to write a function that will do this I hope you can share it with the community. IMHO PostgreSQL could do with more functions for querying the system catalogs. Here's a first attempt at a view that shows tables and their primary key columns and sequences. I chose a view instead of a function because a view shows everything in the database with a single query, which simplifies visual examination of the results. Modify it or convert it to a function as needed. The view assumes single-column primary keys defined as SERIAL types. Properly handling other situations would be a desirable enhancement. I've done only trivial testing, so if anybody finds a situation where the view fails (taking the above assumption into account) then please describe it.

CREATE OR REPLACE VIEW pk_sequence AS
SELECT n.nspname  AS tableschema,
       c.relname  AS tablename,
       a.attname  AS pkcol,
       n2.nspname AS seqschema,
       c2.relname AS seqname
FROM pg_class AS c
JOIN pg_namespace AS n  ON n.oid = c.relnamespace
JOIN pg_index     AS i  ON i.indrelid = c.oid AND i.indisprimary IS TRUE
JOIN pg_attribute AS a  ON a.attrelid = c.oid AND a.attnum = i.indkey[0]
JOIN pg_depend    AS d  ON d.refobjid = c.oid AND d.refobjsubid = i.indkey[0]
JOIN pg_class     AS c2 ON c2.oid = d.objid AND c2.relkind = 'S'
JOIN pg_namespace AS n2 ON n2.oid = c2.relnamespace;

-- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] OID Usage
On Jan 14, 2005, at 16:03, Bo Lorentsen wrote: Now, are there any danger in using this method ? And if there is, how can I do this trick without knowing the layout of the table I insert into ? You can use currval() to get the sequence value that was pulled from your insert. You can check the documentation for usage, as well as searching the archives for discussions of using OIDs as part of your database logic. Hope this helps. Michael Glaesemann grzm myrealbox com ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] OID Usage
Michael Glaesemann wrote: You can use currval() to get the sequence value that was pulled from your insert. You can check the documentation for usage, as well as searching the archives for discussions of using OIDs as part of your database logic. I know this, but I'd like not to have to know anything about the metadata of the table I use. Basically I'm using the same functionality as given in MySQL by mysql_insert_id, as I use the same low-level code for both DBs (until my boss gives in totally to PG :-)). /BL ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Functions that return RECORD type
Craig Bryden wrote: When I run select * from GetAccountInfo(100) I get the following error message: ERROR: a column definition list is required for functions returning "record" please can someone explain to me how to create a column definition list.

CREATE FUNCTION foo() RETURNS SETOF RECORD AS
  'SELECT 1::int, 2::int, ''A''::text;'
LANGUAGE sql;

SELECT * FROM foo() AS (a int, b int, c text);
 a | b | c
---+---+---
 1 | 2 | A
(1 row)

The other way (which I prefer) is to define a type and change the function definition:

CREATE TYPE foo_res_type AS (a int, b int, c text);
CREATE FUNCTION foo() RETURNS SETOF foo_res_type ...

-- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] OID Usage
Hi, On Fri, 14 Jan 2005, Bo Lorentsen wrote: Michael Glaesemann wrote: You can use currval() to get the sequence value that was pulled from your insert. You can check the documentation for usage, as well as searching the archives for discussions of using OIDs as part of your database logic. I know this, but i like not to know anything about the metadata of the table i use. Basicly using the same functionality, as given in mysql in the mysql_insert_id, as I use the same low level code for both DB's (until my boss give in totally to PG :-)). Why should your application not want to know about the metadata of its own tables? That sounds quite strange when you think about it. If you name your sequences in a generic way you can always construct the name of the sequence from the name of the table and the id column. We use this in our PHP framework:

function insert_id() {
    global $pg_conn;
    if (isset($pg_conn)) {
        $query = sprintf("SELECT currval('%s_%s_seq') AS id", $this->table, $this->id_column);
        $result = @pg_query($pg_conn, $query);
        $row = pg_fetch_assoc($result);
        return strval($row["id"]);
    } else {
        return 0;
    }
}

Greetings Christian -- Christian Kratzer [EMAIL PROTECTED] CK Software GmbH http://www.cksoft.de/ Phone: +49 7452 889 135 Fax: +49 7452 889 136 ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
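The same convention-based lookup translates directly to other languages. A Python sketch (function and table names are made up; note it assumes the default 'table_column_seq' naming, which — as pointed out downthread — breaks if the table or column is later renamed):

```python
def currval_query(table, id_column):
    """Build the currval() query for a conventionally named serial sequence."""
    return "SELECT currval('%s_%s_seq') AS id" % (table, id_column)

print(currval_query("orders", "order_id"))
# SELECT currval('orders_order_id_seq') AS id
```

currval() is also safe against concurrent inserters, since it returns the value most recently obtained by nextval() in the current session only.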
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 10:32:18AM +0100, Christian Kratzer wrote: $query = sprintf("SELECT currval('%s_%s_seq') AS id", $this->table, $this->id_column); PostgreSQL 8.0 will have a pg_get_serial_sequence() function that returns the sequence name for a particular column so you don't have to construct it. This is useful when a table or column has been renamed, in which case the above will probably break.

CREATE TABLE foo (fooid serial);
ALTER TABLE foo RENAME TO bar;
ALTER TABLE bar RENAME fooid TO barid;

\d bar
       Table "public.bar"
 Column |  Type   | Modifiers
--------+---------+--------------------------------------------------------
 barid  | integer | not null default nextval('public.foo_fooid_seq'::text)

SELECT pg_get_serial_sequence('bar', 'barid');
 pg_get_serial_sequence
------------------------
 public.foo_fooid_seq
(1 row)

INSERT INTO bar VALUES (DEFAULT);
SELECT currval(pg_get_serial_sequence('bar', 'barid'));
 currval
---------
       1
(1 row)

-- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] OID Usage
Christian Kratzer wrote: Why should your application not want to know about the metadata of its own tables? That sounds quite strange when you think about it. Well, the idea is to be compatible with MySQL at the same level in the code. This works nicely, as I have described, but I am concerned whether there are any strings attached to this method. It is all found in: http://lue.dk/prj/dbc/index.html If you name your sequences in a generic way you can always construct the name of the sequence from the name of the table and the id column. We use this in our PHP framework:

function insert_id() {
    global $pg_conn;
    if (isset($pg_conn)) {
        $query = sprintf("SELECT currval('%s_%s_seq') AS id", $this->table, $this->id_column);
        $result = @pg_query($pg_conn, $query);
        $row = pg_fetch_assoc($result);
        return strval($row["id"]);
    } else {
        return 0;
    }
}

Thanks, but this demands that you have the table and id_column names at hand, and I don't right now. Also... the currval function is specific to PostgreSQL, and there is nothing like it in MySQL that guarantees getting the row for newly inserted data. You can access autoincrement values in MySQL, but no guarantees are given about the value (someone else may have inserted a new row in the same table). But thanks for your interest, anyway. /BL ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] OID Usage
Michael Fuhr wrote: PostgreSQL 8.0 will have a pg_get_serial_sequence() function that returns the sequence name for a particular column so you don't have to construct it. This is useful when a table or column has been renamed, in which case the above will probably break. Quite nice but not what I need, as I still need to know the id column name. But thanks anyway. /BL ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Invalid input for numeric problem
I have the following view (output from psql \d) Column | Type | Modifiers +---+--- code | character varying(15) | subhead| text | sc_description | character varying(60) | Grant | numeric | adjustments| numeric | expenditure| numeric | balance| numeric | head | integer | period | text | View definition: SELECT vw_expend.code, CASE WHEN left(vw_ac.ac_code::text, 2) = 'SA'::text THEN 'Salary Costs'::text WHEN left(vw_ac.ac_code::text, 2) = 'SC'::text THEN 'Startup Costs'::text WHEN left(vw_ac.ac_code::text, 2) = 'RC'::text THEN 'Running Costs'::text WHEN left(vw_ac.ac_code::text, 2) = 'TC'::text THEN 'Training Costs'::text ELSE NULL::text END AS subhead, vw_ac.sc_description, vw_expend.Grant, vw_expend.sum AS adjustments, vw_expend.expenditure, vw_expend.balance, CASE WHEN left(vw_ac.ac_code::text, 2) = 'SA'::text THEN 1 WHEN left(vw_ac.ac_code::text, 2) = 'SC'::text THEN 2 WHEN left(vw_ac.ac_code::text, 2) = 'RC'::text THEN 3 WHEN left(vw_ac.ac_code::text, 2) = 'TC'::text THEN 4 ELSE NULL::integer END AS head, CASE WHEN to_number(vw_expend.code::text, '999'::text) 194::numeric THEN '3'::text WHEN to_number(vw_expend.code::text, '999'::text) 195::numeric AND to_number(vw_expend.code::text, '999'::text) 50::numeric THEN '1'::text WHEN to_number(vw_expend.code::text, '999'::text) 50::numeric THEN '2'::text ELSE '0'::text END AS period FROM vw_expend JOIN vw_ac ON vw_expend.code::text = vw_ac.id::text ORDER BY to_number(vw_expend.code::text, '999'::text); When I do for example SELECT * FROM vw_budget WHERE period = '1'; I get the following error ERROR: invalid input syntax for type numeric: this is with rc3 anyone any idea what is going on here? ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[GENERAL] PQexecParams and CURSOR
Hello, I could not achieve using a CURSOR with PQexecParams! How do you do it? Here is what I've done:

1)
paramValues[0] = "2"; // This is the parameter for the query
res = PQexec(conn, "DECLARE MY_CURSOR FOR SELECT * FROM GRGL.RANGE_MODIFIER WHERE WEAPON_ID = $1");
=> ERROR: DECLARE failed: ERROR: There is no parameter 1

2) then
paramValues[0] = "2"; // This is the parameter for the query
res = PQexecParams(conn, "DECLARE MY_CURSOR FOR SELECT * FROM GRGL.RANGE_MODIFIER WHERE WEAPON_ID = $1", 1, NULL, paramValues, NULL, NULL, 0);
res = PQexec(conn, "FETCH 1 FROM MY_CURSOR");
=> Error: FETCH failed: ERROR: no value found for parameter 1

3) then
paramValues[0] = "2"; // This is the parameter for the query
res = PQexecParams(conn, "DECLARE MY_CURSOR FOR SELECT * FROM GRGL.RANGE_MODIFIER WHERE WEAPON_ID = $1", 1, NULL, paramValues, NULL, NULL, 0);
res = PQexecParams(conn, "FETCH 1 FROM MY_CURSOR", 1, NULL, paramValues, NULL, NULL, 0);
=> Error: FETCH failed: ERROR: bind message supplies 1 parameters, but prepared statement requires 0

PLEASE, COULD YOU HELP? THANKS. Laurent Marzullo ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] pl/pgsql trigger: syntax error at or near ELSEIF
Hello, what is the parser trying to tell me? (7.4.2 if it matters)

test=# CREATE OR REPLACE FUNCTION SYNC_COUPLECOUNT()
test-# RETURNS TRIGGER
test-# AS '
test'# BEGIN
test'# IF TG_OP = ''INSERT'' THEN
test'# UPDATE _calls
test'# SET
test'# realcouplecount = realcouplecount + 1
test'# WHERE
test'# id = NEW.callid;
test'# ELSEIF TG_OP = ''DELETE'' THEN
test'# UPDATE _calls
test'# SET
test'# realcouplecount = realcouplecount - 1
test'# WHERE
test'# id = NEW.callid;
test'# END IF;
test'# RETURN NEW;
test'# END;
test'# '
test-# LANGUAGE plpgsql;
CREATE FUNCTION
test=# CREATE TRIGGER triginsdel
test-# AFTER INSERT OR DELETE ON _couples
test-# FOR EACH ROW EXECUTE PROCEDURE SYNC_COUPLECOUNT();
CREATE TRIGGER
test=# insert into _couples (id, callid) values (get_next_coupleid(), 1);
ERROR: syntax error at or near "ELSEIF" at character 1
CONTEXT: PL/pgSQL function "sync_couplecount" line 8 at SQL statement

On a related note: if I replace NEW with OLD in the second UPDATE (in the ELSEIF branch), I get:

test=# insert into _couples (id, callid) values (get_next_coupleid(), 1);
ERROR: record "old" is not assigned yet
DETAIL: The tuple structure of a not-yet-assigned record is indeterminate.
CONTEXT: PL/pgSQL function "sync_couplecount" line 8 at SQL statement

What's wrong with that? -- FreeBSD 4.10-STABLE 12:48PM up 1 day, 6:53, 2 users, load averages: 0.05, 0.05, 0.00 ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Insufficient system resources for PostgreSQL 7.4.x?
Hello, I changed the DB server assigned for developing PHP applications from MySQL to PostgreSQL.

- Celeron 566MHz
- 128MB DDR 266MHz
- HDD 20GB Seagate 7200rpm
- Apache 2.0.52
- PHP 4.3.10
- PostgreSQL 7.4.6

But a problem occurred. PHP scripts end due to fatal errors: Fatal error: Maximum execution time of 30 seconds exceeded. When I used MySQL there were no problems with similar queries. These queries work with 20-record tables, so I think the problem is somewhere else. Is it necessary to upgrade the hardware, or should I try to downgrade Apache, PHP and PostgreSQL? I don't know if that is a solution... Thank you very much for your replies. -- Jiri Nemec ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Insufficient system resources for PostgreSQL 7.4.x?
Jiri Nemec wrote:
> But a problem occurred: PHP scripts end with fatal errors:
> Fatal error: Maximum execution time of 30 seconds exceeded
> When I used...

Looks like you got yourself an endless loop there.

Alban Hertroys.

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] pl/pgsql trigger: syntax error at or near ELSEIF
Hello, try 35.7.2.4. IF-THEN-ELSIF-ELSE:

IF boolean-expression THEN
    statements
[ ELSIF boolean-expression THEN
    statements
[ ELSIF boolean-expression THEN
    statements
    ...]]
[ ELSE
    statements ]
END IF;

regards Pavel

---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] pl/pgsql trigger: syntax error at or near ELSEIF
Roman Neuhauser wrote:
> Hello, what is the parser trying to tell me? (7.4.2 if it matters)
> test'# ELSEIF TG_OP = ''DELETE'' THEN

You typed ELSEIF; the parser doesn't know what that means (it's either ELSIF or ELSE IF).

> On a related note: if I replace NEW with OLD in the second UPDATE (in the ELSEIF branch), I get:
> What's wrong with that?

OLD doesn't exist in INSERT triggers, only in UPDATE and DELETE triggers. A record in an INSERT trigger is by definition a new record; an old record wouldn't make sense.

IMHO, it would be better to write two or three different triggers (one on INSERT, another on DELETE, and maybe a third on UPDATE) than to write a single one and try to figure out which of those three events fired inside the trigger.

Alban Hertroys.

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
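As a rough illustration of the two-trigger approach suggested above, something along these lines could work (a sketch only: it reuses the table and column names from the original post, is untested, and uses the pre-8.0 quoted-string function body style):

```sql
-- Separate functions per event, so each one only touches the record
-- (NEW or OLD) that actually exists for that event.
CREATE OR REPLACE FUNCTION sync_couplecount_ins() RETURNS TRIGGER AS '
BEGIN
    UPDATE _calls
       SET realcouplecount = realcouplecount + 1
     WHERE id = NEW.callid;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION sync_couplecount_del() RETURNS TRIGGER AS '
BEGIN
    UPDATE _calls
       SET realcouplecount = realcouplecount - 1
     WHERE id = OLD.callid;  -- only OLD is defined in a DELETE trigger
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trigins AFTER INSERT ON _couples
    FOR EACH ROW EXECUTE PROCEDURE sync_couplecount_ins();
CREATE TRIGGER trigdel AFTER DELETE ON _couples
    FOR EACH ROW EXECUTE PROCEDURE sync_couplecount_del();
```

Note that the DELETE function returns OLD rather than NEW, since NEW is not assigned in a DELETE trigger (the mirror image of the error in the original post).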
Re: [GENERAL] Cursor bug?
On Thu, Jan 13, 2005 at 04:35:04PM -0600, Mike G. wrote:
> thank you. I use the cursor because I really do an update against a different table based on a value from the select in the original table. I am eagerly awaiting 8.0 and the ability to issue an Update Table1 Set Table1.col = Table2.col Using (Select y from Table2)

What's this USING keyword you're referring to? I can't find it in the documentation anywhere; what does it do?

Besides, it seems to me your entire function could be replaced by a single query:

insert into temp_tables.delete_me
select name, file_number, question_to_change, new_result, email
from source_table
where question_to_change = 'consultant'
and file_number is not NULL;

or something like that; I didn't check the schema. But this is probably a cut-down version. Tom's suggestion to use a FOR IN SELECT loop is a good one.

Hope this helps,

-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
Re: [GENERAL] Insufficient system resources for PostgreSQL 7.4.x?
> I changed the DB server assigned for developing PHP applications from MySQL to PostgreSQL. - Celeron 566MHz - 128MB DDR 266MHz - HDD 20GB Seagate 7200rpm - Apache 2.0.52 - PHP 4.3.10 - PostgreSQL 7.4.6 But a problem occurred: PHP scripts end with fatal errors: Fatal error: Maximum execution time of 30 seconds exceeded When I used MySQL there were no problems with similar queries. These queries work on tables of about 20 records, so I think the problem is somewhere else. Is it necessary to upgrade the hardware, or should I try downgrading Apache, PHP and PostgreSQL? I don't know if that would be a solution...

It really depends on the queries: whether they use the right indexes (which can speed a query up from 2 hours to 1 minute), the amount of data stored, and other things. Your hardware/software shouldn't be the problem.

Dan

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
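Per the list tip ("explain analyze is your friend"), the usual first step here is to look at the query plan. A hedged sketch with made-up table and column names (not from the thread):

```sql
-- Run the slow query under EXPLAIN ANALYZE to see the actual plan and
-- timings.  A Seq Scan on a large table, or a nested loop with a huge
-- row estimate, usually points at the culprit.
EXPLAIN ANALYZE
SELECT * FROM some_table WHERE some_column = 42;

-- If the plan shows a sequential scan where an index lookup was
-- expected, adding an index (and running ANALYZE) may help:
CREATE INDEX some_table_some_column_idx ON some_table (some_column);
ANALYZE some_table;
```

With only ~20-row tables, though, a plan problem is unlikely; an endless loop in the PHP code (as suggested elsewhere in this thread) is the more plausible cause.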
Re: [GENERAL] Functions returning RECORD
This was very helpful. Thank you, Craig

On Thu, Jan 13, 2005 at 07:58:33PM +0200, Craig Bryden wrote:
> When I run select * from GetAccountInfo(100) I get the following error message: ERROR: a column definition list is required for functions returning record

If the function will always return the same row type, then create a composite type with CREATE TYPE and return that instead of RECORD.

> please can someone explain to me how to create a column definition list.

See the documentation for SELECT in the Reference part of the PostgreSQL documentation; the Examples section shows a query with a column definition list. But you won't need a column definition list if you return a known type instead of RECORD.

-- Michael Fuhr http://www.fuhr.org/~mfuhr/

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
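To make the two options above concrete, here is a hedged sketch; the column names and types are invented for illustration, since the thread doesn't show GetAccountInfo's actual output:

```sql
-- Option 1: supply a column definition list at call time.  This is
-- required whenever the function is declared RETURNS RECORD.
SELECT * FROM GetAccountInfo(100)
    AS t(account_id integer, account_name text, balance numeric);

-- Option 2: declare a composite type once, redeclare the function as
-- RETURNS account_info, and then no definition list is needed:
CREATE TYPE account_info AS (
    account_id   integer,
    account_name text,
    balance      numeric
);

SELECT * FROM GetAccountInfo(100);
```

Option 2 is usually preferable when the row shape is fixed: the type is checked once at function creation instead of being restated (and possibly mistyped) in every query.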
[GENERAL] Problem running native windows Postgres 8 pg_dump etc on cygwin from bash
I have many database build scripts written for cygwin bash. These scripts use the cygwin port of postgres 7.4 and its client tools like psql, pg_dump etc. I have modified my bash scripts to use the Postgres 8 native windows tools. Unfortunately these bash scripts don't seem to work. Some example resutls of native windows Postgres 8 commands run in cygwin bash: pg_dump --version - spools back a database dump as if pg_dump is running using default (no) parameters. The same command in a DOS window works fine. psql -- version - does not return and a ^C required to kill psql - returns the default interactive session Can anyone throw any light on this as I really don't want to have to rewrite all my scripts into DOS batch files Any help would be much appreciated. Laurie :. CONFIDENTIALITY : This e-mail and any attachments are confidential and may be privileged. If you are not a named recipient, please notify the sender immediately and do not disclose the contents to another person, use it for any purpose or store or copy the information in any medium. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] OID Usage
On Fri, Jan 14, 2005 at 11:39:54AM +0100, Bo Lorentsen wrote:
> Thanks, but this demands you to have the table and id_column name in your hand, and I don't right now.

You can create a function to get the sequence name attached to a table. Of course, you should take into account the fact that there could be more than one (two serial fields in a table are rare but not impossible), but if your tables have only one sequence you should be OK. Something like:

select relname, relkind
from pg_depend join pg_class on (oid = objid)
where pg_depend.refobjid = 'foo'::regclass and relkind = 'S';

(only lightly tested). Then you can use that to construct your argument to the nextval() function.

> Also ... the currval function is specific to PostgreSQL, and there is nothing like it in MySQL that can guarantee getting the row for newly inserted data. You can access autoincrement values in MySQL, but no guarantees are given about the value (someone else may have inserted a new row in the same table).

This doesn't happen with sequences on Postgres. The value you get is guaranteed to be the one the sequence generated for you.

-- Alvaro Herrera ([EMAIL PROTECTED]) Those who use electric razors are infidels destined to burn in hell while we drink from rivers of beer, download free vids and mingle with naked well shaved babes. (http://slashdot.org/comments.pl?sid=44793cid=4647152)

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
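The session-local guarantee of currval() is the key point here. A small sketch (the table "foo" is hypothetical; by default a SERIAL column "id" on table "foo" gets an implicit sequence named foo_id_seq):

```sql
CREATE TABLE foo (
    id   SERIAL PRIMARY KEY,
    name text
);

INSERT INTO foo (name) VALUES ('example');

-- currval() returns the value the *current session's* last nextval()
-- produced for this sequence, so concurrent inserts by other sessions
-- cannot change the answer.
SELECT currval('foo_id_seq');

-- Fetch the row that was just inserted:
SELECT * FROM foo WHERE id = currval('foo_id_seq');
```

Calling currval() before any nextval() in the same session raises an error, so it should only be used right after an insert that touched the sequence.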
Re: [GENERAL] Version Control Software for Database Objects
PG Lightning Admin has version control (with a diff viewer) for functions built in. It wouldn't be that difficult to add other objects after I release 1.0. Here is a screen shot: http://www.amsoftwaredesign.com/downloads/pg_screenshots/function_version_control.PNG It creates a table in the public schema to hold the version information, and it keeps track of who did what by pulling in the userid from the OS, not from PG. On an NT domain or Active Directory it will also look up the full name of the user. I will be releasing the program as shareware for $29.99 (a pretty good deal). It also has a function editor and query editor with code completion and param hinting for built-in as well as user-created functions. Here are some more screen shots: http://www.amsoftwaredesign.com/pg_ss.asp.asp You could also beta test and receive it for free :-) Let me know if you would like to participate. Thanks, Tony Caduto AM Software Design Milwaukee WI. http://www.amsoftwaredesign.com ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Problem running native windows Postgres 8 pg_dump etc on cygwin from bash
You should run the Cygwin tools from Cygwin. They work fine against a native backend.

//Magnus

-Original Message-
> I have many database build scripts written for cygwin bash. These scripts use the cygwin port of postgres 7.4 and its client tools like psql, pg_dump etc. I have modified my bash scripts to use the Postgres 8 native windows tools. Unfortunately these bash scripts don't seem to work. Some example results of native windows Postgres 8 commands run in cygwin bash: pg_dump --version - spools back a database dump as if pg_dump is running with default (no) parameters. The same command in a DOS window works fine. psql -- version - does not return, and a ^C is required to kill psql - returns the default interactive session. Can anyone throw any light on this as I really don't want to have to rewrite all my scripts into DOS batch files. Any help would be much appreciated. Laurie

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] OID Usage
Bo Lorentsen [EMAIL PROTECTED] writes:
> I use normal table IDs (SERIAL and BIGSERIAL) all over the place for FK constraints, but I use OID in one special situation. When I insert a single row into a table, I like my low-level code to be compatible with mysql (mysql_insert_id), and fetch the row that I just inserted. This I do by using the PGoidValue function, and then selecting the row by the oid. This works quite nicely, but when a table gets large it becomes a big search (seq scan), so I have added an index on oids on the table where I use this trick, and this has helped :-)

The thing you have to worry about is the possibility of duplicate OIDs once your DB has been running long enough for the OID counter to wrap around (2^32 OIDs). You should make sure that index is specifically declared as UNIQUE, so that any attempt to insert a duplicate OID will fail. That might be enough for you, or you might want to add logic to your application to retry automatically after such a failure.

regards, tom lane

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
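A minimal sketch of the suggestion above (the table name is a placeholder; the oid system column can be indexed like any user column):

```sql
-- Replace the plain index with a UNIQUE one, so that an insert whose
-- wrapped-around OID collides with an existing row fails with a
-- unique-violation error instead of silently creating a duplicate.
CREATE UNIQUE INDEX mytable_oid_idx ON mytable (oid);
```

The application would then treat a unique-violation error on insert as "retry the insert": the new attempt draws a fresh OID from the counter, which will almost certainly not collide again.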
Re: [GENERAL] Problem running native windows Postgres 8 pg_dump etc on
Magnus wrote:
> You should run the Cygwin tools from Cygwin. They work fine against a native backend.

Thanks. I'll have to wait for the port of 8.0.0 rc4 to cygwin, as the current cygwin port of pg_dump is 8.0.0 B2 and does not work correctly because of the error 'column nsptablespace does not exist', which is fixed in later pg_dump versions. Still intrigued as to why the native ports don't work, though.

Regards Laurie

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Invalid input for numeric problem
mike wrote: WHEN to_number(vw_expend.code::text, '999'::text) 50::numeric THEN '2'::text When I do for example SELECT * FROM vw_budget WHERE period = '1'; I get the following error ERROR: invalid input syntax for type numeric: Probably vw_expend.code contains a space in one or more rows, and to_number() is failing the conversion. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
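If stray whitespace in vw_expend.code is indeed the cause, one hedged sketch of a fix (btrim() and the regex guard are my additions, not from the thread; the view and column names are from the original post):

```sql
-- Strip surrounding whitespace before converting, and only attempt the
-- conversion on values that are purely numeric, so non-numeric codes
-- can never reach to_number() and blow up the whole query.
SELECT to_number(btrim(vw_expend.code), '999') AS code_num
  FROM vw_expend
 WHERE btrim(vw_expend.code) ~ '^[0-9]+$';
```

The same guard can be folded into the view's CASE expression, so the comparison against 50 is only evaluated for rows whose code actually converts.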
Re: [GENERAL] Problem running native windows Postgres 8 pg_dump etc on
You should run the Cygwin tools from Cygwin. They work fine against a native backend. Thanks, I'll have to wait for the port of P8.0.0 rc4 to cygwin as the current cygwin port of pg_dump is 8.0.0 B2 and does not work correctly for reason of the error 'column nsptablespace does not exist' which is fixed in later pg_dump versions. Still intrigued as to why the native ports don't work though. Cygwin has a different handling of console input/output than the windows command prompt. I've seen similar weird errors with other native windows tools on the cygwin shell. //Magnus ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Problem Dropping a Database with users connected to it
Hi, I'm basically trying to do what the subject says, through various means with no success. The basic situation is that every night we recreate our development database with a complete copy of our live data. The problem is some of the developers (well, me especially) leave open connections to the DB at night, so the database drop fails. Now that's ok, but I need some sort of alternative... I thought of:

* Disconnecting all other users before dropping the db, but that doesn't seem possible (I could start and stop the db, but that doesn't stop any clients from just reconnecting right away).
* Some sort of idle-time disconnection. Postgres doesn't seem to support this.
* Just dropping all the tables, etc. instead of dropping the db. There doesn't seem to be a good way to do this except doing an explicit DROP TABLE foo on all 200 tables. Is there a good recipe for this, an easy way to get a listing of all the tables in a db?

Can anyone suggest an approach I missed or enlighten me on how to accomplish one of the above? Thanks very much.

-- Eric Dorland [EMAIL PROTECTED] WCG 514.398-5023 ext. 09562

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
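For the third option in the list above (an easy listing of all tables), one hedged approach is to let the catalog generate the DROP statements; the schema name and CASCADE behavior are assumptions about the poster's setup:

```sql
-- Emit one DROP TABLE statement per user table in the public schema.
-- CASCADE takes out dependent views and FK constraints too, so the
-- drops don't fail on ordering.
SELECT 'DROP TABLE ' || tablename || ' CASCADE;'
  FROM pg_tables
 WHERE schemaname = 'public';
```

The output can be fed straight back into psql, e.g. `psql -At -c "..." devdb | psql devdb`, giving a one-liner "empty the database" step without dropping the database itself (and therefore without fighting the open connections).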
Re: [GENERAL] pl/pgsql trigger: syntax error at or near ELSEIF
# [EMAIL PROTECTED] / 2005-01-14 13:27:24 +0100:
> Roman Neuhauser wrote: Hello, what is the parser trying to tell me? (7.4.2 if it matters) test'# ELSEIF TG_OP = ''DELETE'' THEN
> You typed ELSEIF, the parser doesn't know what that means (It's either ELSIF or ELSE IF).

Thanks to everyone who pointed this out for me.

> On a related note: if I replace NEW with OLD in the second UPDATE (in the ELSEIF branch), I get: What's wrong with that?
> OLD doesn't exist in INSERT triggers, only in UPDATE and DELETE triggers. A record in an INSERT trigger is by definition a new record, an old record wouldn't make sense. IMHO, it would be better to write two or three different triggers (one on INSERT, the other on DELETE, and maybe a third on UPDATE) than writing a single and trying to figure out which of those three it was in the trigger.

Thanks, going that route indeed solved the problem.

-- If you cc me or remove the list(s) completely I'll most likely ignore your message. See http://www.eyrie.org./~eagle/faqs/questions.html

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]