Re: [GENERAL] Build universal binary on Mac OS X 10.6?
On Wed, Dec 2, 2009 at 1:14 AM, Tom Lane t...@sss.pgh.pa.us wrote: Israel Brewster isr...@frontierflying.com writes: I am trying to create a static, universal (x86 and ppc at least - 64 bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. I think what you're missing is that the header files generated by configure are arch-sensitive. You need to take steps to make sure that each build reads the appropriate header declarations. Check the pgsql-hackers archives for previous discussions of building universal binaries on Darwin --- I think the last one was mid-2008 or so. I have tried both with PostgreSQL version 8.4.1 and 8.3.8 with the same result. My recollection is that you want 8.4 for this, previous releases would have had even more hoops to jump through to get there. If you get it to work Israel, please report back to the list. I tried briefly and failed completely at building a 3 way binary on one machine. My next attempt was going to involve multiple build machines and lipo... -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
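[Editor's note] Tom's point about arch-sensitive headers is why the usual single-pass `CFLAGS="-arch i386 -arch ppc"` build tends to fail: configure writes one set of generated headers (pg_config.h etc.) that can only be correct for one architecture. The per-arch-build-then-lipo approach Dave alludes to might look roughly like this. This is a non-runnable sketch; prefixes, flags, and paths are illustrative assumptions, not a tested recipe.

```sh
# Build each architecture against its own configure-generated headers,
# then merge the results with lipo.  Illustrative only.
CFLAGS="-arch i386" ./configure --prefix=/tmp/pg-i386 && make && make install
make distclean
CFLAGS="-arch ppc"  ./configure --prefix=/tmp/pg-ppc  && make && make install

# Glue the per-arch static libraries into one universal library:
lipo -create /tmp/pg-i386/lib/libpq.a /tmp/pg-ppc/lib/libpq.a \
     -output libpq-universal.a
lipo -info libpq-universal.a   # should report both architectures
```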
[GENERAL] Postgres performance on Veritas VxVM
Hi, I'm running Postgres 8.3.7 on Solaris 10 x64, using VxFS/VxVM 5.0 MP3. I'm trying to determine the best storage configuration for my workload, so I ran some tests of Postgres performance with various combinations of VM and filesystem. What I noticed was that when all other factors were the same, running on VxVM was nearly twice as slow as running on a plain disk. My configuration was a VxFS filesystem mounted at /sql, 'noatime,cio', and another mounted at /sql/pg_xlog, 'noatime,cio,mincache=direct,convosync=direct'. This forced direct I/O for the WAL. Without VxVM, these filesystems were on plain disk slices. With VxVM, I added the single disk to a diskgroup and created two volumes on it for the filesystems, layout=concat, with the default options. The backend storage is a 14-disk RAID10 using an internal RAID card (Sun StorageTek STK-RAID-INT), with 256MB battery-backed write cache. The system has 32GB RAM and nothing apart from Postgres was running on it during the test. The same postgresql.conf was used for both tests; I configured 256MB of shared_buffers and 20 checkpoint_segments. The benchmark I'm using is a database import tool called osm2pgsql, with a subset of our actual workload. Without VxVM, the import completed in 2 hours, 34 minutes. With VxVM, it took 5 hours 7 minutes. Has anyone else tested Postgres on VxVM and noticed such a large performance hit? - river.
Re: [GENERAL] how to install just client libraries on windows?
On 2/12/2009 12:37 PM, Sachin Srivastava wrote: We have updated the installer to avoid configuring ld.so.conf and instead use rpath linking. Great - hopefully that'll save you time and hassle with the installer in the long run. Thanks for listening - I for one do appreciate the work you folks do maintaining the installers, and don't mean my comments on any flaws in them to detract from that. I've had enough fun building and distributing binaries to know what a thankless task it can be. -- Craig Ringer
Re: [GENERAL] how to install just client libraries on windows?
On Wed, Dec 2, 2009 at 1:29 AM, Craig Ringer cr...@postnewspapers.com.au wrote: EnterpriseDB are currently handling the creation of the binary installers for PostgreSQL. I've been talking with them about some other installer issues (where the installer interferes with system libraries on Linux) too, so hopefully they'll be doing some work on their installers soon. Maybe they can look at splitting the client library installer into a separate package too. There is virtually no call for a client library installer - I can recall maybe half a dozen requests for one since 8.0, and most of those people actually wanted psql etc. Libpq is not like an ODBC or JDBC driver, for example, which may be plugged into any number of generic apps; it must be specifically linked with each application, so application vendors will ship it themselves with their apps. The guys that are doing that will generally need more than just the client libraries - for example, they'll need headers and import libraries as well. The closest thing we have to a client installer is probably pgAdmin, which will install libpq and its dependencies, as well as psql, pg_dump/pg_dumpall/pg_restore, and of course, pgAdmin. Dave: Are there sources available for the EnterpriseDB installer/packaging system? I'd like to get into the Linux library path patching problem myself, since nobody at EDB seemed to be interested in the issues when they were discussed on -general last week. As Sachin mentioned, a patch for that was committed late last night and should move into QA today. The source is periodically pushed to git.postgresql.org though. I'll also ask the team to acknowledge any future issues on-list, so it's clear they are being worked on. -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com
Re: [GENERAL] Postgres performance on Veritas VxVM
River Tarnell wrote: My configuration was a VxFS filesystem mounted at /sql, 'noatime,cio', and another mounted at /sql/pg_xlog, 'noatime,cio,mincache=direct,convosync=direct'. This forced direct I/O for the WAL. Without VxVM, these filesystems were on plain disk slices. With VxVM, I added the single disk to a diskgroup and created two volumes on it for the filesystems, layout=concat, with the default options. That convosync trick only does what you want here if you also change wal_sync_method=open_sync (or open_datasync). As it is, you're still calling fsync all the time despite what you did with the mount options, and you're not getting direct writes because you're not writing synchronously to trigger the conversion. What you should do is the following: postgresql.conf: wal_sync_method = open_datasync /sql/pg_xlog: 'noatime,cio,mincache=direct,convosync=direct'. That should work quite well. See http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm for a lot more details about the various options for VxFS tuning of WAL writes. Note that I only recommend open_sync for VxFS there because that will work on Linux too. Since you're on Solaris, you should be able to get the smaller open_datasync writes and the improvements from using direct writes to stack on top of one another. -- Greg Smith 2ndQuadrant Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.com
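[Editor's note] Greg's recommendation, collected in one place. This is a sketch: the GUC value is as he suggests above, and the mount options are the ones River already described; the vfstab path is an assumption.

```
# postgresql.conf -- make WAL writes synchronous, so VxFS's
# convosync=direct conversion actually kicks in:
wal_sync_method = open_datasync

# VxFS mount options (e.g. in /etc/vfstab) for the WAL filesystem:
#   /sql/pg_xlog   noatime,cio,mincache=direct,convosync=direct
```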
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
On Tue, Dec 1, 2009 at 11:43 PM, Israel Brewster isr...@frontierflying.com wrote: I am trying to create a static, universal (x86 and ppc at least - 64 bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. The command line I am using is as follows: 10.6 is Intel only, so why do you even bother with PPC? -- GJ
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
2009/12/2 Grzegorz Jaśkiewicz gryz...@gmail.com: On Tue, Dec 1, 2009 at 11:43 PM, Israel Brewster isr...@frontierflying.com wrote: I am trying to create a static, universal (x86 and ppc at least - 64 bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. The command line I am using is as follows: 10.6 is Intel only, so why do you even bother with PPC? 10.6 supports building PPC binaries (in theory at least) for use on older OSes. -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com
Re: [GENERAL] how to install just client libraries on windows?
Dave, The closest thing we have to a client installer is probably pgAdmin, which will install libpq and its dependencies, as well as psql, pg_dump/pg_dumpall/pg_restore, and of course, pgAdmin. Maybe we should promote this information to the public? Calling pgAdmin the client installer would surely fit the needs of many, many Windows users who just need the PostgreSQL client on their machines. Harald -- GHUM Harald Massa persuadere et programmare Harald Armin Massa Spielberger Straße 49 70435 Stuttgart 0173/9409607 no fx, no carrier pigeon - %s is too gigantic of an industry to bend to the whims of reality
Re: [GENERAL] Postgres Dump out of order
On 2/12/2009 3:41 PM, silly wrote: pg_dump dumps data first and then the constraints (including FK) so there shouldn't be any problems when you import the dump. ... assuming you're using a sufficiently recent version of pg_dump. Wasn't that added fairly recently? -- Craig Ringer
Re: [GENERAL] Postgres Dump out of order
In response to Helio Campos Mello de Andrade: Hi guys, - I'm having a problem when I want to make a backup of my system. - The Postgres-generated dump was created out of foreign key order, and I run into trouble when I try to recreate my database structures, data and functions. Does someone have this same issue? Does someone know of some app that does the job? Which PG version? Wild guess: not an 8.x, right? And yes, please provide more information, for instance the table definition and the error message you got. Regards, Andreas -- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header) GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
Re: [GENERAL] Storing images in database for web applications
On 2009-11-27, Thom Brown thombr...@gmail.com wrote: Hi all, I'm wondering if anyone has experience of storing and getting images to and from a database? We currently have the problem of images being uploaded to a single gateway used by many companies, most of which run several websites. As it stands, once they upload the image, it then has to be synced to the appropriate servers (3-way in some cases) or accessed through our image proxy. But now we wish to redesign our various image upload systems into one system which will also make images available across various sites/companies and therefore servers. So one solution is to store images in a database. What I'm wondering is if this is an appropriate solution? What are the downsides? Could we expect severe performance hits? To help mitigate the issue of many sites continuously querying a database for images, we'd implement a caching system, so images would only be queried from the database probably around once an hour. The benefit I can see is that images are managed in one place and easily accessible by all servers. The problem is putting everything in one place (all eggs in one basket), so if the server goes down, all sites lose their images once their cache has expired... unless we implemented a system that falls back to cache if the connection fails, even if the cache has expired. Any suggestions? Alternatives? Some sort of broadcasting to share the images as they are uploaded, perhaps something based on NNTP or email.
[GENERAL] quote string exactly as it is
Hello, How can I quote a string *exactly* as it is? I tried using quote_literal() but it doesn't return what I need in some cases. E.g. If my string is: ss\\\ss And I do: select quote_literal('ss\\\ss'); I get: E'ss\\ss' -- My string now has E'' added and one backslash has been removed! What I want to do is to pass a string to a custom made function. Since the string can contain various metacharacters I need some way to pass this string exactly as it is. (For those who might be interested, the strings I'm working with are FASTQ quality scores.) I'm using PostgreSQL 8.3 on Windows XP. Thanks a lot! Dario
Re: [GENERAL] quote string exactly as it is
On Wed, Dec 2, 2009 at 12:10 PM, dario@libero.it dario@libero.it wrote: Hello, How can I quote a string *exactly* as it is? I tried using quote_literal() but it doesn't return what I need in some cases. E.g. If my string is: ss\\\ss And I do: select quote_literal('ss\\\ss'); I get: E'ss\\ss' -- My string now has E'' added and one backslash has been removed! http://www.postgresql.org/docs/8.3/static/sql-syntax-lexical.html That E tells Postgres that the string uses escape processing. -- GJ
[GENERAL] Auto Vacuum Daemon
Hello All, I am using Postgres 8.3.5 on Windows and was using manual VACUUM and ANALYZE commands in my scripts, but now I have turned on the autovacuum daemon by changing the postgresql.conf file. Is it fine to do that, or should we manually perform vacuum and analyze? Will it automatically take care of vacuuming the tables and analyzing them? Any suggestions will be appreciated. Thanks, Ambarish Bhattacharya
Re: [GENERAL] quote string exactly as it is
On Wed, Dec 2, 2009 at 7:10 AM, dario@libero.it dario@libero.it wrote: Hello, How can I quote a string *exactly* as it is? I tried using quote_literal() but it doesn't return what I need in some cases. E.g. If my string is: ss\\\ss dollar quote it: select $abc$ss\\\ss$abc$; merlin
R: Re: [GENERAL] quote string exactly as it is
Thanks a lot for the replies! Dollar quoting is what I needed, but I couldn't find it! Dario
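[Editor's note] For client code that builds SQL by hand, the dollar-quoting trick Merlin shows can be automated: pick a tag that does not occur anywhere in the payload, so the contents pass through completely unescaped. A small illustrative helper (hypothetical, not part of any driver; parameterized queries remain the safer general mechanism):

```python
def dollar_quote(s):
    """Wrap s in PostgreSQL dollar quoting, choosing a tag that does not
    occur inside s so the string is passed through verbatim."""
    tag = ""
    n = 0
    # "$tag$" must not appear in the payload, or the literal would end early.
    while "$%s$" % tag in s:
        tag = "q%d" % n
        n += 1
    return "$%s$%s$%s$" % (tag, s, tag)

print(dollar_quote(r"ss\\\ss"))   # -> $$ss\\\ss$$  (backslashes untouched)
```

Note this only solves the quoting itself; if the server is reachable, passing the string as an out-of-line parameter (PQexecParams or the driver's placeholder mechanism) avoids quoting entirely.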
Re: [GENERAL] Unexpected EOF on client connection
Francisco Figueiredo Jr. wrote: Presumably I should never get this error if I use Npgsql? That's correct. But there may be a bug in the Npgsql code itself. That's why I asked you to try raising MinPoolSize, to check if you see fewer errors. I hope it helps. Thanks Francisco - I currently have MinPoolSize set to 3 (I have a lot of databases on this cluster). I think this copes with 90% of the cases, but I shall set it to 10 and see what happens.
[GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
At present, a role may have a password but unless it's a login role the docs say that password doesn't get used. I currently have an app where it'd be handy to be able to: SET ROLE rolename WITH PASSWORD 'blah'; to switch to role `rolename' only if the password `blah' is correct for that role. `rolename' might or might not be a login role, though in most of the cases I'm looking at it'd make more sense for it to be a login role. Why would this be useful? The app relies on PostgreSQL for basic user management and authentication. User rights are controlled by role assignments, and are enforced at the database level by appropriate table, column and function permissions plus selective use of SECURITY DEFINER functions and triggers. The app only uses role memberships to decide what UI to hide, to avoid confusing the user with permission errors from the DB. This works extremely well, _except_ that occasionally it's desirable to override a user's rights by intervention of a supervisor user. Having a way to do this by switching to a role by providing a password to confirm access would be nice, especially if the user didn't have to be a member of that role already. Right now I'm having to spawn a new connection with the supplied supervisor username and password, then do the work in that connection. This works OK, but: (a) The switch can't be done mid-transaction, so that privileges are held for the minimum time possible. I'm looking at using `SET ROLE' to drop down to lower rights in the supervisor connection instead, but this could be clumsy when the main user has several roles significant for the operation(s) being performed. (b) The new connection doesn't hold the advisory locks the first connection had, which is occasionally problematic. (c) It's a PITA when working through an ORM like Hibernate, whereas a `SET ROLE' would be trivial and convenient. (d) I'd rather not spawn the extra backend, though I guess it doesn't matter much with an event of this rarity. Anyway ... 
I'm curious about whether `SET ROLE rolename WITH PASSWORD' is something that's technically practical to implement in PostgreSQL, and what people think about the idea. I don't see any obvious, gaping security issues with doing this, since anybody who can 'SET ROLE ... WITH PASSWORD ...' should be able to log in with those credentials too. However, there may be issues interacting with external auth systems like LDAP or Kerberos. Thoughts? Also: I'm currently thinking of writing a `SECURITY DEFINER' function that tests a supplied password against a named role by direct access to pg_shadow, and if the password matches invokes SET ROLE with that role. Crazy? -- Craig Ringer
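[Editor's note] Craig's closing idea might look roughly like the sketch below. It is untested, the function name is made up, and it comes with two caveats: it presumes the password is stored as an MD5 hash (which in pg_shadow has the form 'md5' || md5(password || rolename)), and it can only *verify* the password - a role change made inside a SECURITY DEFINER function is popped again when the function returns, so the caller still has to issue SET ROLE itself. The cleartext password also ends up visible in logs and pg_stat_activity.

```sql
-- Hypothetical helper: check a supplied password against pg_shadow.
CREATE OR REPLACE FUNCTION check_role_password(name, text)
RETURNS boolean AS $$
    SELECT passwd = 'md5' || md5($2 || $1)
    FROM pg_shadow WHERE usename = $1;
$$ LANGUAGE sql STABLE SECURITY DEFINER;
```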
[GENERAL] Large Objects and Replication question
Does anyone know of a replication solution that can handle large objects? Preferably on a per-database rather than per-cluster basis. Incidentally - out of interest - why doesn't Slony handle large objects? Thanks. Howard www.selestial.com
[GENERAL] deferrable foreign keys
Hi, are there downsides of making foreign keys deferrable (but initially immediate) for updates, when the transaction does not set the constraint behaviour to deferred? I'd expect that to have the same behaviour as non-deferrable foreign keys. What I don't understand is why non-deferrable is the default, then. So maybe I'm missing something. regards Morus
Re: [GENERAL] deferrable foreign keys
On Wed, Dec 2, 2009 at 2:29 PM, Morus Walter morus.walter...@googlemail.com wrote: Hi, are there downsides of making foreign keys deferrable (but initially immediate) for updates, when the transaction does not set the constraint behaviour to deferred? I'd expect that to have the same behaviour as non deferrable foreign keys. What I don't understand is, why is non deferrable the default, then. It is just sometimes desirable not to check the constraints until commit. For instance, if you run a bit of code that is old and you don't want to mess around with keys. Or you have some strange way of putting information together. Basically it is all about the order of operations within a transaction. Sometimes it cannot be guaranteed, and hence there is an option to defer the constraint check. -- GJ
Re: [GENERAL] READ ONLY I/O ERROR
We are getting the below errors after 20 or 25 days of database creation. ERROR: could not open relation 1919829/1152694/1921473: Read-only file system ERROR: could not read block 312320 of relation 1964206/1152694/1981329: Input/output error If we create a new database the problem is repeated after 20 or 25 days. Until then we don't have any issues with the new database. The database is very large. We load millions of records every day, and the rate of fetching from the database is also high. Even the disks are not full. We are not dropping the old database. What is the reason for this issue? How can we ensure that it is not a database issue? We are using GridSQL: 1.1.0.9 PostgreSQL 8.3 Architecture Details: CentOS 5.3 64 bit Areca high point rocket raid 3520 8 port 32 GB RAM -- Thanks Sam Jas --- On Mon, 30/11/09, Greg Smith g...@2ndquadrant.com wrote: From: Greg Smith g...@2ndquadrant.com Subject: Re: [GENERAL] READ ONLY I/O ERROR To: Scott Marlowe scott.marl...@gmail.com Cc: Sam Jas samja...@yahoo.com, pgsql-general@postgresql.org Date: Monday, 30 November, 2009, 8:29 PM Scott Marlowe wrote: Areca doesn't make the HighPoint RocketRAID cards (which are medium quality RAID cards). On a good day maybe. HighPoint is a pretty miserable RAID vendor - in the same league as Promise, from what I've seen as far as their Linux driver support goes. In general, and for reasons I'm not completely sure of, everyone selling fake RAID cards seems to be completely incompetent. The page at http://linuxmafia.com/faq/Hardware/sata.html hasn't been updated in a while, but as of 2007 all the current HighPoint cards were still based on closed-source drivers only. Completely worthless hardware IMHO. Sounds like your hardware is bad. Could be mobo / cpu / memory or RAID card. Does this machine hang every so often or anything? It's not out of the question for this sort of problem to be caused by a bad driver too. In this case it seems more likely it's a drive failure though. 
-- Greg Smith 2ndQuadrant Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.com
Re: [GENERAL] Postgres Dump out of order
Craig Ringer cr...@postnewspapers.com.au writes: On 2/12/2009 3:41 PM, silly wrote: pg_dump dumps data first and then the constraints (including FK) so there shouldn't be any problems when you import the dump. ... assuming you're using a sufficiently recent version of pg_dump. Wasn't that added fairly recently? Depends on context, which the OP provided none of. pg_dump has handled FK dependencies -- even circular ones -- correctly for a long time, given that you're doing a full schema+data dump. If you ask it for a data-only dump, there is no way to handle circular dependencies, so until recently it just threw up its hands and dumped the tables in an arbitrary order. Recent versions (I think probably only 8.4.x) will order a data-only dump correctly for FK considerations so long as there are no circular dependencies. If you must use a data-only dump pre-8.4, I'd suggest using pg_restore's -L switch to manually control the restore order. regards, tom lane
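[Editor's note] Tom's -L suggestion in command form. A sketch, not tested here; the database and file names are made up, and it requires a custom-format (-Fc) archive, since -L operates on the archive's table of contents.

```sh
pg_dump -Fc -a mydb > mydb.dump          # data-only, custom-format archive
pg_restore -l mydb.dump > restore.list   # write out the archive's TOC
# Edit restore.list: reorder entries (or comment them out with ';') so
# referenced tables are loaded before the tables that reference them.
pg_restore -L restore.list -d target_db mydb.dump
```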
Re: [GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
Craig Ringer cr...@postnewspapers.com.au writes: Anyway ... I'm curious about whether `SET ROLE rolename WITH PASSWORD' is something that's technically practical to implement in PostgreSQL and what people think about the idea. Seems like it would have all the standard problems with cleartext passwords being exposed in pg_stat_activity, system logs, etc. Also, what about people who are using more-secure-than-password auth methods, like Kerberos? I'm not really for it. regards, tom lane
Re: [GENERAL] quote string exactly as it is
dario.ber wrote: How can I quote a string *exactly* as it is? I tried using quote_literal() but it doesn't return what I need in some cases. E.g. If my string is: ss\\\ss And I do: select quote_literal('ss\\\ss'); I get: E'ss\\ss' -- My string now has E'' added and one backslash has been removed! What I want to do is to pass a string to a custom made function. Since the string can contain various metacharacters I need some way to pass this string exactly as it is. If the backslashes are your only problem, set standard_conforming_strings=on (in postgresql.conf or in your session). Then backslashes are treated as normal characters (unless you prepend the string constant with E).

laurenz=> SET standard_conforming_strings = on;
SET
laurenz=> SELECT quote_literal('ss\\\ss');
 quote_literal
---------------
 E'ss\\ss'
(1 row)

Yours, Laurenz Albe
Re: [GENERAL] Large Objects and Replication question
On Dec 2, 2009, at 4:23 PM, Howard Cole wrote: Does anyone know of a replication solution that can handle large objects? Preferably on a per database rather than per cluster basis. Take a look at Mammoth Replicator: https://projects.commandprompt.com/public/replicator. Additionally there is a list of available open-source replication solutions here: http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling Personally I'm not aware of other solutions supporting large objects, but I'm sure there should be more of them :). Incidentally - out of interest - why doesn't Slony handle large objects? Here is an excerpt from the Slony documentation: Slony-I does not automatically propagate schema changes, nor does it have any ability to replicate large objects. There is a single common reason for these limitations, namely that Slony-I collects updates using triggers, and neither schema changes, large object operations, nor TRUNCATE requests are able to have triggers suitable to inform Slony-I when those sorts of changes take place. As a result, the only database objects where Slony-I can replicate updates are tables and sequences. http://www.slony.info/documentation/slonyintro.html#INTRODUCTION I'm not sure that it's up-to-date, since 8.4 at least has ON TRUNCATE triggers, but there are still no triggers on large objects. -- Alexey Klyukin http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc
Re: [GENERAL] Large Objects and Replication question
Does anyone know of a replication solution that can handle large objects? Preferably on a per database rather than per cluster basis. pgpool-II can handle large objects. However you need to use the newer API of libpq to create large objects: Oid lo_create(PGconn *conn, Oid lobjId); I'm not sure Java or some other languages support the equivalent API though. You cannot use the old API lo_creat(), since it relies on OID assignment, and pgpool-II does not guarantee that OIDs can be replicated. -- Tatsuo Ishii SRA OSS, Inc. Japan
Re: [GENERAL] READ ONLY I/O ERROR
(please use text only email to the list) On Wed, Dec 2, 2009 at 7:51 AM, Sam Jas samja...@yahoo.com wrote: We are getting the below errors after 20 or 25 days of database creation. ERROR: could not open relation 1919829/1152694/1921473: Read-only file system ERROR: could not read block 312320 of relation 1964206/1152694/1981329: Input/output error PostgreSQL cannot make a file system read only. The OS does that. What do your system logs in /var/log have to say when this happens? There's got to be more context in there than we're getting evidence of here on the list. If we create a new database the problem is repeated after 20 or 25 days. Until then we don't have any issues with the new database. My guess is that it's not a fixed number, just what you've seen so far; it could happen in a day or a month or a year. The size of database is very huge. We are loading millions of records every day and also fetching from the database is also high. Even the disks are not full. We are not dropping the old database. What is the reason for this issue? Looks like bad hardware to me. How can we ensure that it is not a database issue? It can't be a database issue, as the database isn't capable of actually locking a file system. Maybe it can trigger an OS bug that causes this problem, but given that no one else is having this issue with CentOS 5.3, I'm gonna bet on bad hardware. We are using GridSQL: 1.1.0.9 PostgreSQL 8.3 Architecture Details: CentOS 5.3 64 bit Areca high point rocket raid 3520 8 port 32 GB RAM I will repeat, Areca does NOT MAKE the HighPoint RocketRAID. I will also add that a RocketRAID is not, IMHO, suitable for a production environment. If it's an actual Areca, then the model will be something like 11xx, 12xx, or 16xx numbers, not 3520.
Re: [GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
One other topic that is related to this is that we now have an expiry date, but it would be nice to have a number of days also. This would make it easy to force the user to change their passwords every X days if internal security is being used instead of something like Kerberos or LDAP. Best Regards Michael Gould Tom Lane t...@sss.pgh.pa.us wrote: Craig Ringer cr...@postnewspapers.com.au writes: Anyway ... I'm curious about whether `SET ROLE rolename WITH PASSWORD' is something that's technically practical to implement in PostgreSQL and what people think about the idea. Seems like it would have all the standard problems with cleartext passwords being exposed in pg_stat_activity, system logs, etc. Also, what about people who are using more-secure-than-password auth methods, like Kerberos? I'm not really for it. regards, tom lane -- Michael Gould, Managing Partner Intermodal Software Solutions, LLC 904.226.0978 904.592.5250 fax
Re: [GENERAL] deferrable foreign keys
Morus Walter morus.walter...@googlemail.com writes: are there downsides of making foreign keys deferrable (but initially immediate) for updates, when the transaction does not set the constraint behaviour to deferred? I'd expect that to have the same behaviour as non deferrable foreign keys. What I don't understand is, why is non deferrable the default, then. Because the SQL standard says so. I don't believe there is any actual penalty for deferrable within the PG implementation, but perhaps there is in other systems' implementations. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
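For anyone wondering what the opt-in looks like in practice, here is a minimal sketch (table names are invented for illustration); until the SET CONSTRAINTS call, the constraint behaves exactly as a non-deferrable one would:

```sql
-- A deferrable, initially-immediate FK is checked at statement end
-- unless a transaction explicitly postpones it until commit.
CREATE TABLE parent (id integer PRIMARY KEY);
CREATE TABLE child (
    parent_id integer REFERENCES parent (id)
        DEFERRABLE INITIALLY IMMEDIATE
);

BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- opt in for this transaction only
INSERT INTO child VALUES (1);  -- no matching parent yet; checked at COMMIT
INSERT INTO parent VALUES (1);
COMMIT;                        -- check passes now that the parent exists
```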
Re: [GENERAL] Large Objects and Replication question
BTW Additionally there is a list of available open-source replication solutions here: http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling The link http://www.slony2.org/ mentioned in the wiki page above apparently has nothing to do with Slony-II. Can someone please fix it? -- Tatsuo Ishii SRA OSS, Inc. Japan -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Large Objects and Replication question
On Dec 2, 2009, at 5:48 PM, Tatsuo Ishii wrote: BTW Additionally there is a list of available open-source replication solutions here: http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling The link http://www.slony2.org/ mentioned in the wiki page above apparently has nothing to do with Slony-II. Can someone please fix it? Removed the link. -- Alexey Klyukin http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Books
Thomas Løcke wrote: Would I be supporting the PostgreSQL project by buying these, or are they just a compilation of the online manuals, done by some random author calling him-/herself The PostgreSQL Global Development Group? :o) If you really want to support the PostgreSQL project, you could donate money through SPI. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] READ ONLY I/O ERROR
On 2/12/2009 11:35 PM, Scott Marlowe wrote: (please use text only email to the list) On Wed, Dec 2, 2009 at 7:51 AM, Sam Jassamja...@yahoo.com wrote: We are getting the below errors after 20 or 25 days of database creation. ERROR: could not open relation 1919829/1152694/1921473: Read-only file system ERROR: could not read block 312320 of relation 1964206/1152694/1981329: Input/output error PostgreSQL cannot make a file system read only. The OS does that. What do your system logs in /var/log have to say when this happens? There's got to be more context in there than we're getting evidence of here on the list. In particular, if you're on a Linux system check the output of the dmesg command. I expect to see warnings about file system errors and about the file system being re-mounted read-only. I won't be surprised to see disk/raid errors either. If we create a new database the problem is repeated after 20 or 25 days. Until then we don't have any issues with the new database. My guess is that it's not a fixed number, just what you've seen so far, could happen in a day or a month or a year. Do you do any RAID scrubbing? On what schedule? Do you test the disks that are part of your RAID array using their internal SMART diagnostics? Is your server ever hard-reset or rebooted due to loss of power? (PostgreSQL is fine with this on a proper setup, but if you have a buggy RAID controller or one that caches writes without a battery backup, it's going to have issues). -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Large Objects and Replication question
Tatsuo Ishii wrote: However you need to use the newer API of libpq to create large objects: Oid lo_create(PGconn *conn, Oid lobjId); [...] You cannot use the old API lo_creat(), since it relies on OIDs, which pgpool-II does not guarantee to replicate. Does it mean that lo_create(conn, 0) is supported while lo_creat(conn,INV_READ|INV_WRITE) is not ? It sounds strange from the user's point of view, because they do the same thing, don't they? Best regards, -- Daniel PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
Michael Gould mgo...@intermodalsoftwaresolutions.net writes: One other topic that is related to this is that we now have an expire date but it would be nice to have a number of days also. This would make it easy to force the user to change their passwords every X days if internal security is being used instead of something like Kerberos or LDAP. There's already a solution to that in CVS HEAD: you can add a plug-in module to enforce password policy, including a limit on how far away the expiration date is. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
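For what is available in released versions: a password can already be given a hard expiry with VALID UNTIL, though the every-X-days rotation Michael describes has to be driven externally. A sketch, where the role name and dates are examples only:

```sql
-- VALID UNTIL sets a hard password-authentication expiry for a role.
CREATE ROLE app_user LOGIN PASSWORD 'secret'
    VALID UNTIL '2010-03-01';

-- To enforce rotation every 90 days, an external job (or the plug-in
-- module Tom mentions) must push the date forward on each change:
ALTER ROLE app_user VALID UNTIL '2010-06-01';
```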
Re: [GENERAL] import data from openoffice Calc
Hi Adrian, Thanks for pointing out the reporting bug information about my problem. I have tried your suggestion and it works great. But this approach can only solve one part of my problem, because I also need to open an existing db table through Calc, edit some tuples, and then save it as a new table. For this case, I cannot convert my original date data to text (because it will become an integer after converting as you mentioned). Thanks again. LW On Tue, Dec 1, 2009 at 6:38 PM, Adrian Klaver akla...@comcast.net wrote: - Le-shin Wu les...@gmail.com wrote: Hi Adrian, Thanks for your information. I think your suggestion will be my last approach, if I really cannot find a way to solve my problem. Actually, I tried to format the cells type (the date column in my sheet) in Calc as Date before I copy and paste, but somehow it works only once. This also confused me. Thanks again. LW Once the column is formatted as Date the underlying value will be the integer value. I did a little test and found the following. If I preformatted a column as Text and then entered date strings for example 12/01/09 it stayed a string when cut and pasted. If I just started entering a date string in a column the column would become a Date column. Changing the formatting to Text would change the date strings 12/01/09 to the underlying integer 40147. Any new date strings entered however would stay as 12/01/09. Hope this helps. Adrian Klaver akla...@comcast.net
Re: [GENERAL] Postgres performance on Veritas VxVM
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Greg Smith: What you should do is the following: postgresql.conf: wal_sync_method = open_datasync /sql/pg_xlog: 'noatime,cio,mincache=direct,convosync=direct That should work quite well. Thanks. That reduced the import time to about 4hr 30 minutes. However, this is still much slower than using VxFS with plain disks. Any other ideas? I'm now running a test using VxFS on SVM soft partitions to see if that improves performance at all (but I'd much rather have the flexibility of VxVM). - river. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.9 (HP-UX) iEYEARECAAYFAksWlf0ACgkQIXd7fCuc5vLDQQCghI3Dm8/GRuDVOucrYlQoq5jS sk0AoLR0SJyt+aK7k1F+iTEQ/t3Qto+U =y3ed -END PGP SIGNATURE- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Undefined subroutine main::spi_prepare
I have a PL/Perl stored procedure that's worked fine on my development boxes but is suddenly failing when installed on a test machine. Both my test machines and the new machine are RHEL AS5. The differences are the test machine is 64-bit, and running Postgres 8.1.18 instead of my development machine which is 32-bit and running Postgres 8.2.7. The error in the postgresql log is: ERROR: error from Perl function: Undefined subroutine main::spi_prepare called at line 2. LOG: unexpected EOF on client connection My first thought was that the postgresql-pl package wasn't installed, but it appears to be there. Does anyone else know what sort of installation/configuration error could cause a failure like this? Thanks. -Tony -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
On 2/12/2009 11:04 PM, Tom Lane wrote: Craig Ringer cr...@postnewspapers.com.au writes: Anyway ... I'm curious about whether `SET ROLE rolename WITH PASSWORD' is something that's technically practical to implement in PostgreSQL and what people think about the idea. Seems like it would have all the standard problems with cleartext passwords being exposed in pg_stat_activity, system logs, etc. Yeah, I was a bit concerned about that, but it can be worked around with careful use of parameterised queries (depending, admittedly, on client library/driver). It's still not pretty. And, of course, you'd have to run over SSL or some other encrypted channel to make it even faintly OK. Also, what about people who are using more-secure-than-password auth methods, like Kerberos? That's more what I was worried about. I thought it might be useful anyway, though, as it seems that a *lot* of people use Pg's built-in user management. Between the two, though, as I look at it more I'm inclined to agree that it's probably not worth it. Doing it right would need protocol-level support for re-running authentication, which I imagine would be major server- and client-surgery as well as a protocol version change ... hardly worth it for a niche capability. I'll probably achieve roughly the same thing for my particular needs with a function that does a manual lookup in pg_shadow. It's ugly, but should do what I need. I can hide the password in a query parameter, so long as I make sure the params aren't interpolated on the client end. I force the use of SSL to permit users to log in at all, so network sniffing shouldn't be an issue. It's frustrating that there's no way to change privileges on the fly (without a new connection and potential locking issues) ... but it's not the end of the world. If the above approach doesn't work I can always go back to spawning new connections and living with the issues. Thanks for looking at the notion, though - I thought it *might* be worth a glance.
-- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Undefined subroutine main::spi_prepare
Tony Cebzanov tony...@andrew.cmu.edu writes: I have a PL/Perl stored procedure that's worked fine on my development boxes but is suddenly failing when installed on a test machine. Both my test machines and the new machine are RHEL AS5. The differences are the test machine is 64-bit, and running Postgres 8.1.18 instead of my development machine which is 32-bit and running Postgres 8.2.7. I don't think spi_prepare existed in 8.1 plperl. In general it's not a terribly good idea to develop on a newer PG version than you plan to deploy on ... regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Roles with passwords; SET ROLE ... WITH PASSWORD ?
Craig Ringer cr...@postnewspapers.com.au writes: On 2/12/2009 11:04 PM, Tom Lane wrote: Seems like it would have all the standard problems with cleartext passwords being exposed in pg_stat_activity, system logs, etc. Yeah, I was a bit concerned about that, but it can be worked around with careful use of parameterised queries (depending, admittedly, on client library/driver). No, not really, because we don't support parameters in utility commands. Even if we did, parameter values get logged, so the leak to the postmaster log is still there. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
On Dec 2, 2009, at 1:03 AM, Dave Page wrote: 2009/12/2 Grzegorz Jaśkiewicz gryz...@gmail.com: On Tue, Dec 1, 2009 at 11:43 PM, Israel Brewster isr...@frontierflying.com wrote: I am trying to create a static, universal (x86 and ppc at least - 64 bit architectures are optional) build of the PostgreSQL libraries on a Mac OS X 10.6 machine. The command line I am using is as follows: 10.6 is intel only, so why do you even bother with PPC ? 10.6 supports building of PPC binaries (in theory at least) for use on older OSs, Exactly. If I was building for my own personal use, I wouldn't bother with PPC, or even i386 - i'd just go with x86_64. However, as I am developing software for wide distribution, I want it to be able to run on as many machines/OS's as possible. That said, I did sort of get this to work. What I ended up doing was building for each architecture separately (but on the same machine), then using lipo to combine the resulting libraries. When I took all but one architecture flag out of the configure string I used, it worked- regardless of which architecture I left in. I haven't had a chance to test this fully yet, but so far it seems to have worked - lipo reports all three architectures (I didn't bother with ppc64), and in the initial compile phase at least Qt is happy (still dealing with other unrelated compile issues there). Apparently PostgreSQL will happily build for any one of the platforms under Mac OS X 10.6, but trying to build for multiple platforms at once gives it heartburn. Given that, I tend to think there was something wrong with the way I was trying to do it - something missing/wrong from the CFLAGS or LDFLAGS or the like. I'm somewhat curious though. I didn't have any difficulties making universal builds of MySQL and SQLite by simply passing multiple -arch flags to CFLAGS and LDFLAGS. It makes me wonder what PostgreSQL is doing differently that causes problems? Thanks for the feedback and advice! 
-- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general --- Israel Brewster Computer Support Technician II Frontier Flying Service Inc. 5245 Airport Industrial Rd Fairbanks, AK 99709 (907) 450-7250 x293 --- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Auto Vacuum
Hi All - how can I find out when the last vacuum ran on the database? Can you please help? regards
Re: [GENERAL] Postgres performance on Veritas VxVM
River Tarnell wrote: I'm now running a test using VxFS on SVM soft partitions to see if that improves performance at all (but I'd much rather have the flexibility of VxVM). Flexibility is often expensive from a performance point of view. We regularly tell people here that they have to avoid using Linux's LVM for similar reasons--while it shouldn't be so slow, it is. Nothing you can do about it but use direct disk partitions instead if you need the performance to be good. -- Greg Smith2ndQuadrant Baltimore, MD PostgreSQL Training, Services and Support g...@2ndquadrant.com www.2ndQuadrant.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
Israel Brewster isr...@frontierflying.com writes: That said, I did sort of get this to work. What I ended up doing was building for each architecture separately (but on the same machine), then using lipo to combine the resulting libraries. When I took all but one architecture flag out of the configure string I used, it worked- regardless of which architecture I left in. I haven't had a chance to test this fully yet, but so far it seems to have worked - The server executables will probably not work, except on the arch you built on. The client programs might accidentally fail to fail; I'm not sure whether they contain any dependencies on the arch-specific values that are extracted by configure. You really need to create pg_config.h contents that are correct for the specific arch you're trying to compile for. The last time I tried this, the only good way to do that was by running configure on the particular architecture. (Maybe 10.6 has got some cute way around that, but I doubt it.) I'm somewhat curious though. I didn't have any difficulties making universal builds of MySQL and SQLite by simply passing multiple -arch flags to CFLAGS and LDFLAGS. Can't speak to SQLite, but I know quite well that mysql has got essentially the same issues as PG with having arch-specific configure output. Have you actually tested those universal builds on any arch except where you built them? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] return value for PQbinaryTuples
Merlin Moncure wrote: PQbinaryTuples is basically going to return whatever you passed into resultformat when you executed the query (in the case of PQexec, it's going to be 1 always). You mean 0 (i.e. text, not binary). And with an exception on PQexec(FETCH c) when c is a binary cursor. In this case the result is binary and PQbinaryTuples() reflects that. Best regards, -- Daniel PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
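The binary-cursor exception Daniel describes can be reproduced from plain SQL; a minimal sketch:

```sql
-- A FETCH from a BINARY cursor returns binary-format tuples even when
-- issued through plain PQexec(), so PQbinaryTuples() reports 1 for
-- that result set rather than the usual 0.
BEGIN;
DECLARE c BINARY CURSOR FOR SELECT relname FROM pg_class;
FETCH 10 FROM c;
CLOSE c;
COMMIT;
```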
Re: [GENERAL] Auto Vacuum
On Wed, 2009-12-02 at 12:13 -0500, akp geek wrote: how can I find out when the last vacuum ran on the database? Check pg_stat_user_tables. -- Devrim GÜNDÜZ, RHCE Command Prompt - http://www.CommandPrompt.com devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr http://www.gunduz.org Twitter: http://twitter.com/devrimgunduz
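Concretely, a query along these lines shows the relevant timestamps per table (columns as found in the 8.x statistics views):

```sql
-- Last manual and automatic vacuum/analyze times for each user table
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;
```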
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
On Dec 2, 2009, at 8:26 AM, Tom Lane wrote: Israel Brewster isr...@frontierflying.com writes: That said, I did sort of get this to work. What I ended up doing was building for each architecture separately (but on the same machine), then using lipo to combine the resulting libraries. When I took all but one architecture flag out of the configure string I used, it worked- regardless of which architecture I left in. I haven't had a chance to test this fully yet, but so far it seems to have worked - The server executables will probably not work, except on the arch you built on. The client programs might accidentally fail to fail; I'm not sure whether they contain any dependencies on the arch-specific values that are extracted by configure. You really need to create pg_config.h contents that are correct for the specific arch you're trying to compile for. The last time I tried this, the only good way to do that was by running configure on the particular architecture. (Maybe 10.6 has got some cute way around that, but I doubt it.) I'm somewhat curious though. I didn't have any difficulties making universal builds of MySQL and SQLite by simply passing multiple -arch flags to CFLAGS and LDFLAGS. Can't speak to SQLite, but I know quite well that mysql has got essentially the same issues as PG with having arch-specific configure output. Have you actually tested those universal builds on any arch except where you built them? Well, I'm not trying to use the server or client programs from this build - I just want the universal libraries for my programs. My point in this last section, however, doesn't necessarily extend as far as actual function, but rather is just with the build. MySQL and SQLite build for multiple architectures quite happily, Postgres doesn't build at all except for single architectures (the way I am trying at least). Granted, it's entirely possible that the server/client built by MySQL doesn't work on other platforms, but the build does - thus my curiosity.
FWIW, I have tested a program of mine linked against the multi-architecture builds of the MySQL and SQLite libraries on other platforms, and they do work. But then, I didn't need to do any fancy work with multiple builds and lipo with them. Actually, back when I was on Mac OS 10.5 I managed to get a universal build of Postgres 8.2 that worked - there's an old thread of mine from when I was having problems with that as well. But I'm trying to get updated here, and seem to be running into all new problems :-) --- Israel Brewster Computer Support Technician II Frontier Flying Service Inc. 5245 Airport Industrial Rd Fairbanks, AK 99709 (907) 450-7250 x293 --- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] import data from openoffice Calc
- Le-shin Wu les...@gmail.com wrote: HI Adrian, Thanks for pointing out the reporting bug information about my problem. I have tried your suggestion and it works great. But this approach can only solve one part of my problem, because I also need to open an existing db table through Clac, edit some tuples, and then save it as a new table. For this case, I can not convert my original date data to text (because it will become an integer after converting as you mentioned). Thanks again. LW Might be easier to use the CREATE TABLE AS command: http://www.postgresql.org/docs/8.4/interactive/sql-createtableas.html Adrian Klaver akla...@comcast.net -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
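A minimal sketch of the CREATE TABLE AS approach, with invented table and column names, which keeps the edit-and-save step entirely server-side so column types (including date) survive intact:

```sql
-- Copy selected rows into a new table; the new table inherits the
-- column types of the query output, so dates stay dates.
CREATE TABLE invoices_2009 AS
SELECT *
FROM invoices
WHERE invoice_date >= DATE '2009-01-01';
```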
Re: [GENERAL] Auto Vacuum
thank you 2009/12/2 Devrim GÜNDÜZ dev...@gunduz.org On Wed, 2009-12-02 at 12:13 -0500, akp geek wrote: how can I find out when the last vacuum ran on the database? Check pg_stat_user_tables. -- Devrim GÜNDÜZ, RHCE Command Prompt - http://www.CommandPrompt.com devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr http://www.gunduz.org Twitter: http://twitter.com/devrimgunduz
Re: [GENERAL] Postgres performance on Veritas VxVM
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Greg Smith: Flexibility is often expensive from a performance point of view. We regularly tell people here that they have to avoid using Linux's LVM for similar reasons--while it shouldn't be so slow, it is. Nothing you can do about it but use direct disk partitions instead if you need the performance to be good. Okay, that makes sense. What about using plain slices for the WAL, but using the VM for the data? For example, we have 14 disks, so I could allocate 2 for the log in RAID1 (146GB, which is more than enough), then use the remaining 12 under VxVM for the data. If I understand right, the critical factor is the WAL write speed; the VM is easily able to keep up with writes to the data files, since those are mostly asynchronous. Does this seem like a reasonable solution? (I'll benchmark this configuration anyway, but I'd be interested in any comments.) Thanks, River. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.9 (HP-UX) iEYEARECAAYFAksWwi0ACgkQIXd7fCuc5vKxzQCeMB0ECbxedXIcQ+YEhFcuUJzc 7egAn0zbzed5VL/E8UPFReZDhl50LTuK =NvX6 -END PGP SIGNATURE- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to get RTREE performance from GIST index?
Alban Hertroys wrote: In the default PostgreSQL configuration, The Autovacuum Daemon takes care of automatic analyzing of tables when they are first loaded with data, and as they change throughout regular operation. When autovacuum is disabled, it is a good idea to run ANALYZE periodically, or just after making major changes in the contents of a table. That last line isn't explicit about temporary tables, but the reason for running ANALYZE in both cases is the same. Actually, autovacuum doesn't process temp tables at all because it cannot get to them; they might live solely in the creating process' private memory area. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to get RTREE performance from GIST index?
Actually, autovacuum doesn't process temp tables at all because it cannot get to them; they might live solely in the creating process' private memory area. Does that mean that, in between creating a temporary table and actually using it in a complicated query, it is desirable to run an ANALYZE command on it? I haven't been doing that, because I didn't know. Regards -- Clive Page -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
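Yes - since autovacuum never sees a temp table, an explicit ANALYZE between loading it and querying it gives the planner real statistics. A sketch of the load-analyze-query pattern (table and column names are invented):

```sql
-- Without the ANALYZE, the planner has no statistics for the temp
-- table and must guess at its row count and value distribution.
CREATE TEMP TABLE hot_ids AS
SELECT id FROM big_table WHERE flag;

ANALYZE hot_ids;   -- autovacuum cannot do this for us

SELECT b.*
FROM big_table b
JOIN hot_ids h ON h.id = b.id;
```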
[GENERAL] Catastrophic changes to PostgreSQL 8.4
Hello, I am the project manager of Bacula. One of the database backends that Bacula uses is PostgreSQL. This email is to notify you that a change you made to how database character encodings are set has created havoc with certain unfortunate Bacula users. Bacula sets the database encoding to SQL_ASCII, because although Bacula supports UTF-8 character encoding, it cannot enforce it. Certain operating systems such as Unix, Linux and MacOS can have filenames that are not in UTF-8 format. Since Bacula stores filenames in PostgreSQL tables, we use SQL_ASCII. We set SQL_ASCII by default when creating the database via the command recommended in recent versions of PostgreSQL (e.g. 8.1), with: CREATE DATABASE bacula ENCODING 'SQL_ASCII'; However, with PostgreSQL 8.4, the above command is ignored because the default template copied is not template0. This means that some Bacula users who have created PostgreSQL databases with version 8.4 typically find them created with UTF8 encoding, which results in serious errors when doing backups for certain machines. Apparently, for PostgreSQL 8.4, one must specify: CREATE DATABASE bacula ENCODING 'SQL_ASCII' TEMPLATE=template0; Rather than making this incompatible modification, it would have been advisable to make the default equivalent to the above. Of course I imagine that there are lots of reasons why that could not be programmed as such. I also notice the following comment in your 8.4 documentation: PostgreSQL will allow superusers to create databases with SQL_ASCII encoding even when LC_CTYPE is not C or POSIX. As noted above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether. If you do eliminate SQL_ASCII at some point, it would be a real pity and create a lot of inconvenience for your users.
Regards, Kern -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
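For anyone checking what their installer actually created, the catalog records each database's encoding; e.g.:

```sql
-- Show each database's encoding as a readable name, to verify whether
-- a database ended up SQL_ASCII or UTF8.
SELECT datname, pg_encoding_to_char(encoding) AS encoding
FROM pg_database
ORDER BY datname;
```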
[GENERAL] Re: vacuumdb: vacuuming of database xy failed: PANIC: corrupted item pointer: 19227
On 29 nov., 07:02, Tech 2010 tch...@gmail.com wrote: What is this? PANIC: corrupted item lengths: total 8192, available space 8068 for the others... find the selected row and replace/delete it -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Cannot login, with C/C++ program
I have written a C/C++ program that tries to log into a local PostgreSQL database. Here is the code: pg_conn = PQconnectdb("hostaddr = '127.0.0.1' port = '' dbname = 'TBDB' user = 'sysdba' password = 'stelmo777' connect_timeout = '10'"); if (!pg_conn) { return false; } if (PQstatus(pg_conn) != CONNECTION_OK) { return false; // <- execution reaches here } I checked the database, user and password by logging in with psql. What could be wrong? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Query using partitioned table hangs
I have an app that was previously using a large unpartitioned table with no problems. I partitioned this table and am now experiencing intermittent hangs when inserting data into the partitioned table. The stored procedure that does the insert seems to run to completion even when it 'hangs'. There are no messages in the log file, no errors/exceptions that I'm aware of, and I'm at a loss as to what is causing this, so any help would be much appreciated. Here's the details: PostgreSQL version: 8.4 OS: Reproduced on both, RHEL 5.3 and OS X 10.5 Application: Java 1.6, using PostgreSQL 8.4 JDBC type 4 driver. The application invokes a pl/pgsql stored procedure (with autocommit set to true). This sproc does several inserts - see below for the code. This works fine with a single unpartitioned table, even for large inserts. With the table partitioned, it hangs intermittently, usually occurs within 20 - 30 minutes of running the application, after invoking the sproc ~25 times. When it hangs, it hangs indefinitely - I know because I inadvertently left it in 'hung' state for 24 hours, so it's not just slow. The last command of the sproc truncates the staging table, raw_data. The table raw_data is empty when the query is hung, which suggests that the sproc is running to completion. I tried sending a SIGHUP to the postmaster process (kill -1) while the sproc was hung, thinking it might be a thread blocked somewhere, but that had no effect. 
The relevant tables, queries etc. are listed below:

-- The parent table
CREATE TABLE event (
    id BIGSERIAL PRIMARY KEY,
    evt_time TIMESTAMP WITH TIME ZONE NOT NULL,
    fk_host INTEGER REFERENCES dim_host NOT NULL,
    fk_user INTEGER REFERENCES dim_user
);

-- Example of a child table:
CREATE TABLE event_y2009m09 (
    CONSTRAINT event_y2009m09_evt_time_check
        CHECK (evt_time >= '2009-09-01'::date AND evt_time < '2009-10-01'::date)
) INHERITS (event) WITH (OIDS=FALSE);

-- Example of trigger function used
CREATE OR REPLACE FUNCTION event_insert_trigger() RETURNS trigger AS
$BODY$
BEGIN
    IF ( NEW.evt_time >= DATE '2009-08-01' AND NEW.evt_time < DATE '2009-09-01' ) THEN
        INSERT INTO event_y2009m08 VALUES (NEW.*);
    ELSIF ( NEW.evt_time >= DATE '2009-09-01' AND NEW.evt_time < DATE '2009-10-01' ) THEN
        INSERT INTO event_y2009m09 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Date out of range.';
    END IF;
    RETURN NULL;
END;
$BODY$ LANGUAGE plpgsql;

-- The sproc that hangs
CREATE OR REPLACE FUNCTION normalize_data() RETURNS void AS
$BODY$
DECLARE
    str_value text;
    match record;
BEGIN
    -- Run ANALYZE to keep statistics up to date
    -- raw_data contains ~60,000 records at this point
    ANALYZE raw_data;

    INSERT INTO dim_host (name_str)
        SELECT DISTINCT host_name FROM raw_data
        EXCEPT SELECT name_str FROM dim_host;
    ANALYZE dim_host;

    -- Do a few more inserts like the above.
    -- Then copy all the records from the staging table.
    -- Simplified for brevity - the real query is a 12-way join.
    -- All FK constraints on table event are dropped before we run this query.
    -- All appropriate partitions and triggers have been created
    -- in advance of running this query.
    INSERT INTO event (evt_time, fk_host, fk_user)
        SELECT r.evt_time, dim_host.id, dim_user.id
        FROM raw_data AS r
        JOIN dim_host ON r.host_name = dim_host.name_str
        LEFT JOIN dim_user ON r.user_name = dim_user.user_name;

    TRUNCATE raw_data;
    RETURN;
END;
$BODY$ LANGUAGE plpgsql;
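When a backend appears hung like this, a second session can usually tell whether it is blocked on a lock. A minimal diagnostic sketch (column names as in 8.4 - pg_stat_activity was reworked in later releases):

```sql
-- Run from a second psql session while the insert is hung.
-- Shows each backend, what it is running, and whether it is waiting on a lock.
SELECT procpid, waiting, current_query
FROM pg_stat_activity;

-- List ungranted lock requests and the relation (if any) they target.
SELECT l.pid, l.locktype, l.mode, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE NOT l.granted;
```

If the hung backend shows waiting = true, the second query identifies what it is queued behind; if it shows no ungranted locks, the problem lies elsewhere (e.g. the client, not the server).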
Re: [GENERAL] return value for PQbinaryTuples
On Wed, Dec 2, 2009 at 12:28 PM, Daniel Verite dan...@manitou-mail.org wrote:
> Merlin Moncure wrote:
>> PQbinaryTuples is basically going to return whatever you passed into resultformat when you executed the query (in the case of PQexec, it's going to be 1 always).
>
> You mean 0 (i.e. text, not binary). And with an exception on PQexec("FETCH c") when c is a binary cursor. In this case the result is binary and PQbinaryTuples() reflects that.

Quite right! I had completely forgotten about binary cursors (which are, IMO, a total hack).

merlin

silonet=# declare test_cursor binary cursor with hold for select 1094795585::int;
DECLARE CURSOR
Time: 0.327 ms
silonet=# fetch from test_cursor;
 int4
──────
 AAAA
(1 row)
Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4
Kern Sibbald k...@sibbald.com writes:
> Bacula sets the database encoding to SQL_ASCII, because although Bacula supports UTF-8 character encoding, it cannot enforce it.

Okay ...

> CREATE DATABASE bacula ENCODING 'SQL_ASCII';
>
> However, with PostgreSQL 8.4, the above command is ignored because the default template copied is not template0.

... so copy template0. I think you probably want to force C locale (lc_ctype/lc_collate) as well, which was not possible for an individual database before 8.4. So on the whole I think 8.4 ought to be more amenable to what you want than prior versions were. No, it is not exactly the same as prior versions were, but that's the price of progress.

> I also notice the following comment in your 8.4 documentation: "PostgreSQL will allow superusers to create databases with SQL_ASCII encoding even when LC_CTYPE is not C or POSIX. As noted above, SQL_ASCII does not enforce that the data stored in the database has any particular encoding, and so this choice poses risks of locale-dependent misbehavior. Using this combination of settings is deprecated and may someday be forbidden altogether." If you do eliminate SQL_ASCII at some point, it would be a real pity and create a lot of inconvenience for your users.

You misread it. We are not talking about eliminating SQL_ASCII --- as you say, that's useful. What is deprecated is trying to use SQL_ASCII with a non-C locale, which is dangerous, and always has been. If you've been putting non-UTF8 data into a database that could be running under a UTF8-dependent locale, I'm surprised you haven't noticed problems already.

			regards, tom lane
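The two suggestions above - copy template0 and force the C locale - can be combined in a single statement on 8.4. A sketch, using the database name from the thread:

```sql
-- Copying template0 means the encoding need not match the cluster default;
-- forcing the C locale avoids any locale-dependent (e.g. UTF8-assuming) behavior.
CREATE DATABASE bacula
    ENCODING 'SQL_ASCII'
    LC_COLLATE 'C'
    LC_CTYPE 'C'
    TEMPLATE template0;
```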
Re: [GENERAL] Cannot login, with C/C++ program
Daniel danwgr...@gmail.com writes:
> I have written a C/C++ program that tries to log into a local PostgreSQL database. Here is the code:
>
> pg_conn = PQconnectdb("hostaddr = '127.0.0.1' port = '' dbname = 'TBDB' user = 'sysdba' password = 'stelmo777' connect_timeout = '10'");
> if (!pg_conn) { return false; }
> if (PQstatus(pg_conn) != CONNECTION_OK) { return false; // <- execution reaches here.
>
> I checked the database, user and password by logging in with psql. What could be wrong?

If you print out the connection object's error message, you might find out. It's hard to guess about it without that information.

			regards, tom lane
Re: [GENERAL] Cannot login, with C/C++ program
Did you check your pg_hba.conf?

Another thing: although port defaults to 5432, you *should* specify it in the connection string.

Also... is sysdba *really* an authorized user? Check your permissions...

BR, Pedro.

On 12/02/2009 09:42 PM, Tom Lane wrote:
> If you print out the connection object's error message, you might find out. It's hard to guess about it without that information.
Re: [GENERAL] use log_statement to log only SELECTs?
On Sun, 2009-11-29 at 06:00 -0500, Bruce Momjian wrote:
> Brian Witt wrote:
>> I've been looking for a way to use log_statement to log only select statements; is this possible? (I'm using PostgreSQL 8.1.18)
>
> No, log_statement doesn't allow that, and I can't think of another option.

Thanks for the response. I was afraid of that. Can I make this a feature request? How do I do that? Should I add it to this page: http://wiki.postgresql.org/wiki/Todo

Thanks again, Brian
Re: [GENERAL] Cannot login, with C/C++ program
Daniel wrote:
> I have written a C/C++ program that tries to log into a local PostgreSQL database. Here is the code:
> pg_conn = PQconnectdb("hostaddr = '127.0.0.1' port = '' dbname = 'TBDB' user = 'sysdba' password = 'stelmo777' connect_timeout = '10'");
> if (!pg_conn) { return false; }

In the program, you should fprintf(stderr, "%s", PQerrorMessage(pg_conn)) and/or copy-paste for us the psql command line that works.

Best regards,
-- Daniel
PostgreSQL-powered mail user agent and storage: http://www.manitou-mail.org
Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4
On Wednesday 02 December 2009 5:18:52 am Kern Sibbald wrote:
> Hello, I am the project manager of Bacula. One of the database backends that Bacula uses is PostgreSQL. This email is to notify you that a change you made to setting database character codes has created havoc with certain unfortunate Bacula users. Bacula sets the database encoding to SQL_ASCII, because although Bacula supports UTF-8 character encoding, it cannot enforce it. Certain operating systems such as Unix, Linux and MacOS can have filenames that are not in UTF-8 format. Since Bacula stores filenames in PostgreSQL tables, we use SQL_ASCII. We set SQL_ASCII by default when creating the database via the command recommended in recent versions of PostgreSQL (e.g. 8.1), with:
>
> CREATE DATABASE bacula ENCODING 'SQL_ASCII';
>
> However, with PostgreSQL 8.4, the above command is ignored because the default template copied is not template0. This means that some Bacula users who have created PostgreSQL databases with version 8.4 typically find them created with SQL_UTF8 format, which results in serious errors when doing backups for certain machines.
>
> Regards, Kern

When I tried the above, Postgres did not ignore the command; instead it gave me the following error and did not create the database:

CREATE DATABASE bacula ENCODING 'SQL_ASCII';
ERROR:  new encoding (SQL_ASCII) is incompatible with the encoding of the template database (UTF8)
HINT:  Use the same encoding as in the template database, or use template0 as template.

-- Adrian Klaver akla...@comcast.net
[GENERAL] How to auto-increment?
Hi, I have a table like this:

id_product  id_increment

and I need to increment the values in id_increment like this:

prod_1  1
prod_1  2
prod_1  3
prod_2  1

Which is the best way to do this? Using a trigger? Where can I find examples of plpgsql doing this?

Best Regards, André. Sorry for my bad english.
Re: [GENERAL] How to auto-increment?
See the data type SERIAL in the PostgreSQL manual for whatever flavor of the database you are using ...

Apologies for top-posting -- challenged mail client.

HTH, Greg W.

From: Andre Lopes lopes80an...@gmail.com
To: pgsql-general@postgresql.org
Sent: Wed, December 2, 2009 2:52:51 PM
Subject: [GENERAL] How to auto-increment?
Re: [GENERAL] How to auto-increment?
Andre Lopes wrote:
> I have a table like this: id_product, id_increment, and I need to increment the values in id_increment like this:
> prod_1  1
> prod_1  2
> prod_1  3
> prod_2  1
> Which is the best way to do this? Using a trigger?

Offhand, I'd have another table that has (id_product primary key, next_increment integer), and use it to populate your id_increment fields, bumping the next_increment each time you fetch it. I assume it's OK if there are missing increment values, like if you delete a product/increment from your table, or if in the middle of inserting a new one there's a transaction rollback for some reason?
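A sketch of the counter-table approach described above. All table and column names here are hypothetical; the SELECT ... FOR UPDATE serializes concurrent inserts for the same product so two rows don't get the same number. (Two simultaneous first inserts for a brand-new product can still collide on the counter table's primary key - this is a sketch, not a bulletproof implementation.)

```sql
CREATE TABLE product_counter (
    id_product     text PRIMARY KEY,
    next_increment integer NOT NULL DEFAULT 1
);

CREATE OR REPLACE FUNCTION assign_increment() RETURNS trigger AS
$BODY$
BEGIN
    -- Lock this product's counter row if it exists; create it otherwise.
    PERFORM 1 FROM product_counter WHERE id_product = NEW.id_product FOR UPDATE;
    IF NOT FOUND THEN
        INSERT INTO product_counter (id_product) VALUES (NEW.id_product);
    END IF;

    -- Bump the counter and hand the pre-bump value to the new row.
    UPDATE product_counter
       SET next_increment = next_increment + 1
     WHERE id_product = NEW.id_product
    RETURNING next_increment - 1 INTO NEW.id_increment;

    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;

CREATE TRIGGER set_increment BEFORE INSERT ON products
    FOR EACH ROW EXECUTE PROCEDURE assign_increment();
```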
Re: [GENERAL] How to get RTREE performance from GIST index?
On 2 Dec 2009, at 21:12, Clive Page wrote:
>> Actually, autovacuum doesn't process temp tables at all because it cannot get to them; they might live solely in the creating process' private memory area.
>
> Does that mean that, in between creating a temporary table and actually using it in a complicated query, it is desirable to run an ANALYZE command on it? I haven't been doing that, because I didn't know.

Only if you created an index on it. If you didn't, it doesn't matter, as in that case you always get a sequential scan, unless I'm mistaken.

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.
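The workflow under discussion, as a sketch (table and column names hypothetical):

```sql
-- Build a temporary working set, index it, then ANALYZE by hand:
-- autovacuum cannot see temp tables, so planner statistics stay
-- empty unless you collect them yourself.
CREATE TEMP TABLE work_set AS
    SELECT id, geom FROM big_table WHERE flag;
CREATE INDEX work_set_gist ON work_set USING gist (geom);
ANALYZE work_set;
-- ... now run the complicated query against work_set ...
```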
Re: [GENERAL] Auto Vacuum Daemon
On Wed, Dec 2, 2009 at 13:46, a.bhattacha...@sungard.com wrote:
> Is it fine to do that or should we manually perform vacuum and analyze? Will it automatically take care of vacuuming the tables and analyzing them?

Have a look at this wiki page that was posted by Craig on this list a few days ago: http://wiki.postgresql.org/index.php?title=VACUUM_FULL

-- Rikard - http://bos.hack.org/cv/
Re: [GENERAL] Auto Vacuum Daemon
On Wed, Dec 2, 2009 at 5:46 AM, a.bhattacha...@sungard.com wrote:
> Hello All, I am using Postgres 8.3.5 on Windows and was using manual VACUUM and ANALYZE commands in my scripts, but now I have turned on the autovacuum daemon by changing the postgresql.conf file. Is it fine to do that or should we manually perform vacuum and analyze? Will it automatically take care of vacuuming the tables and analyzing them? Any suggestions will be appreciated.

Most of the time autovacuum is fine and can handle all your vacuuming and analyzing needs no problem. However, on some very heavy-traffic or very large databases you may need to tune it so that it keeps up with your needs. On my production database servers I had to adjust a few settings to make autovacuum more aggressive so that it wasn't falling behind. On all my other pg instances the default settings are just fine.
[GENERAL] using querys like: 'select table.*'
Hi! In PostgreSQL 7.2 I can use queries like these:

select table.* ;
select * from table1 where table1.key = othertable.key;

but in PostgreSQL 8.3 I get an error like this:

ERROR: missing FROM-clause entry for table ...

What should I do to solve this? :-)
Re: [GENERAL] use log_statement to log only SELECTs?
Brian Witt wrote:
> Thanks for the response. I was afraid of that. Can I make this a feature request? How do I do that? Should I add it to this page: http://wiki.postgresql.org/wiki/Todo

Generally it's better to see if there's at least some general support for an idea before adding it to the TODO list. In this case, I don't think you'll find any. The current way you get everything out of log_statement is completely reasonable for most people, and I doubt you'll ever get one of the folks who hack on PostgreSQL to work on the feature you'd like unless you paid them to. You'd need a much better justification for why you can't just filter things out of the log yourself before it would be worth further complicating the code involved. It's just not a common request -- if anything, you might find people want everything *but* SELECTs.

-- Greg Smith, 2ndQuadrant, Baltimore, MD. PostgreSQL Training, Services and Support. g...@2ndquadrant.com www.2ndQuadrant.com
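"Filter things out of the log yourself" can be as simple as a grep over the statement log. A sketch, assuming log_statement = 'all' and the default stderr log format - the sample log lines below are made up for illustration:

```shell
# Create a small fake statement log to demonstrate the filter.
cat > /tmp/pg_sample.log <<'EOF'
2009-12-02 10:00:01 EST LOG:  statement: SELECT * FROM event;
2009-12-02 10:00:02 EST LOG:  statement: INSERT INTO event VALUES (1);
2009-12-02 10:00:03 EST LOG:  statement: select count(*) from raw_data;
EOF
# Keep only the lines whose logged statement starts with SELECT
# (case-insensitive, since SQL keywords may be written either way).
grep -iE 'statement: *select' /tmp/pg_sample.log
```

Multi-line statements and other log_line_prefix formats would need a smarter filter (e.g. CSV logging plus a small script), but for simple workloads this is often enough.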
Re: [GENERAL] using querys like: 'select table.*'
On Wednesday 02 December 2009 4:04:47 pm erobles wrote:
> In PostgreSQL 7.2 I can use queries like select * from table1 where table1.key = othertable.key, but in PostgreSQL 8.3 I get: ERROR: missing FROM-clause entry for table ... What should I do to solve this? :-)

Short term, see here for the config setting in postgresql.conf that can revert the behavior: http://www.postgresql.org/docs/8.3/interactive/runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-VERSION

Long term, change the queries :)

-- Adrian Klaver akla...@comcast.net
Re: [GENERAL] Large Objects and Replication question
>> However you need to use the newer API of libpq to create large objects:
>>
>> Oid lo_create(PGconn *conn, Oid lobjId);
>> [...]
>> You cannot use the old API lo_creat(), since it relies on OIDs, and pgpool-II does not guarantee that OIDs are replicated consistently.
>
> Does it mean that lo_create(conn, 0) is supported while lo_creat(conn, INV_READ|INV_WRITE) is not? It sounds strange from the user's point of view, because they do the same thing, don't they?

Well, I do not recommend using lo_create(conn, 0) either. Actually, users can use lo_create(conn, 0) or lo_creat without any problem until you try to extract existing large objects by OID.

-- Tatsuo Ishii, SRA OSS, Inc. Japan
Re: [GENERAL] quote string exactly as it is
My solution was to set standard_conforming_strings = on in postgresql.conf.

Bill

dario@libero.it wrote:
> Hello, how can I quote a string *exactly* as it is? I tried using quote_literal() but it doesn't return what I need in some cases. E.g. if my string is:
>
> ss\\\ss
>
> and I do:
>
> select quote_literal('ss\\\ss');
>
> I get:
>
> E'ss\\ss'
>
> My string now has E'' added and one backslash has been removed! What I want to do is to pass a string to a custom-made function. Since the string can contain various metacharacters, I need some way to pass this string exactly as it is. (For those who might be interested, the strings I'm working with are FASTQ quality scores.) I'm using PostgreSQL 8.3 on Windows XP. Thanks a lot! Dario
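Another option that sidesteps backslash processing entirely, regardless of the standard_conforming_strings setting, is dollar quoting. A sketch - my_function here is hypothetical, standing in for the custom function mentioned in the question:

```sql
-- Dollar-quoted strings are never subject to backslash escaping,
-- so every character between the $tags$ is taken literally.
SELECT my_function($q$ss\\\ss$q$);

-- With standard_conforming_strings = on, a plain literal behaves
-- the same way (backslashes are ordinary characters):
SELECT my_function('ss\\\ss');
```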
Re: [GENERAL] Build universal binary on Mac OS X 10.6?
Israel Brewster isr...@frontierflying.com writes:
> Well, I'm not trying to use the server or client programs from this build - I just want the universal libraries for my programs. My point in this last section, however, doesn't necessarily extend as far as actual function, but rather is just with the build. MySQL and SQLite build for multiple architectures quite happily; Postgres doesn't build at all except for single architectures (the way I am trying, at least).

Well, it's been done. Searching the PG archives for prior discussions I find http://archives.postgresql.org/pgsql-hackers/2008-07/msg00884.php which describes success with multiple -arch flags in CFLAGS plus hand creation of the relevant .h files. The first few steps in the older recipe here http://archives.postgresql.org/pgsql-general/2008-02/msg00200.php show a reasonably plausible way to mechanically generate the required .h files, although I wonder whether it's right in detail --- over here http://archives.postgresql.org/pgsql-hackers/2008-07/msg00898.php I had found out that you also need --host if you want to fool configure into generating .h files for a different architecture. On the whole I'd still recommend building the reference .h files on the actual target arch rather than trusting a cross-compile to create them correctly.

			regards, tom lane
Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4
On 2/12/2009 9:18 PM, Kern Sibbald wrote:
> Hello, I am the project manager of Bacula. One of the database backends that Bacula uses is PostgreSQL.

As a Bacula user (though I'm not on the Bacula lists), first - thanks for all your work. It's practically eliminated all human intervention from something that used to be a major pain. Configuring it to handle the different backup frequencies, retention periods and diff/inc/full needs of the different data sets was a nightmare, but once set up it's been bliss. The 3.x `Accurate' mode is particularly nice.

> Bacula sets the database encoding to SQL_ASCII, because although Bacula supports UTF-8 character encoding, it cannot enforce it. Certain operating systems such as Unix, Linux and MacOS can have filenames that are not in UTF-8 format. Since Bacula stores filenames in PostgreSQL tables, we use SQL_ASCII.

I noticed that while doing some work on the Bacula database a while ago. I was puzzled at the time about why Bacula does not translate file names from the source system's encoding to UTF-8 for storage in the database, so all file names are known to be sane and are in a known encoding.

Because Bacula does not store the encoding or seem to transcode the file name to a single known encoding, it does not seem to be possible to retrieve files by name if the Bacula console is run on a machine with a different text encoding to the machine the files came from. After all, café in UTF-8 is a different byte sequence to café in ISO-8859-1, and they won't match in equality tests under SQL_ASCII.

Additionally, I'm worried that restoring to a different machine with a different encoding may fail, and if it doesn't, will result in hopelessly mangled file names. This wouldn't be fun to deal with during disaster recovery. (I don't yet know if there are provisions within Bacula itself to deal with this and need to do some testing.)
Anyway, it'd be nice if Bacula would convert file names to UTF-8 at the file daemon, using the encoding of the client, for storage in a UTF-8 database.

Mac OS X (HFS Plus) and Windows (NTFS) systems store file names as Unicode (UTF-16, IIRC). Unix systems increasingly use UTF-8, but may use other encodings. If a Unix system does use another encoding, this may be determined from the locale in the environment and used to convert file names to UTF-8. Windows systems using FAT32 and Mac OS 9 machines on plain old HFS will have file names in the locale's encoding, like Unix systems, and are fairly easily handled.

About the only issue I see is that systems may have file names that are not valid text strings in the current locale, usually due to buggy software butchering text encodings. I guess a *nix system _might_ have different users running with different locales and encodings, too. The latter case doesn't seem easy to handle cleanly, as file names on Unix systems don't have any indication of what encoding they're in stored with them. I'm not really sure these cases actually show up in practice, though.

Personally, I'd like to see Bacula capable of using a UTF-8 database, with proper encoding conversion at the fd for non-UTF-8-encoded client systems. It'd really simplify managing backups for systems with a variety of different encodings.

(BTW, one way to handle incorrectly encoded filenames and paths might be to have a `bytea' field, generally null, to store such mangled file names. Personally, though, I'd favour just rejecting them.)

> We set SQL_ASCII by default when creating the database via the command recommended in recent versions of PostgreSQL (e.g. 8.1), with:
>
> CREATE DATABASE bacula ENCODING 'SQL_ASCII';
>
> However, with PostgreSQL 8.4, the above command is ignored because the default template copied is not template0.
It's a pity that attempting to specify an encoding other than the safe one when using a non-template0 database doesn't cause the CREATE DATABASE command to fail with an error.

-- Craig Ringer
Re: [GENERAL] Auto Vacuum Daemon
On 3/12/2009 7:24 AM, Rikard Bosnjakovic wrote:
> Have a look at this wiki page that was posted by Craig on this list a few days ago: http://wiki.postgresql.org/index.php?title=VACUUM_FULL

For that question you're better off reading:

http://www.postgresql.org/docs/current/static/routine-vacuuming.html#AUTOVACUUM
http://www.postgresql.org/docs/8.3/interactive/runtime-config-autovacuum.html

The main thing to understand is that turning on autovacuum may not be quite sufficient if you have some tables with really high update or delete loads. You may need to tell autovacuum to pay more attention to those tables. Similarly, on heavily loaded databases you might need to tune autovacuum to be more aggressive so that it keeps up with table bloat.

Since you're still on 8.3 you will also need to make sure that your max_fsm_pages setting is sufficient. 8.4 gets rid of that configuration option and manages the task for you automatically instead, so consider upgrading at some point.

-- Craig Ringer
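Telling autovacuum to pay more attention to a particular table can be done per table. A sketch using a hypothetical table name - note this ALTER TABLE syntax is 8.4+; on 8.3 the equivalent per-table settings live in the pg_autovacuum system catalog instead:

```sql
-- Vacuum/analyze this table when ~5% of it has changed,
-- instead of the default ~20%/10%.
ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor  = 0.05,
    autovacuum_analyze_scale_factor = 0.05
);
```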
Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4
Craig Ringer cr...@postnewspapers.com.au writes:
> It's a pity that attempting to specify an encoding other than the safe one when using a non-template0 database doesn't cause the CREATE DATABASE command to fail with an error.

Huh?

regression=# create database foo lc_ctype = 'en_US.utf8' encoding = 'latin1';
ERROR:  encoding LATIN1 does not match locale en_US.utf8
DETAIL:  The chosen LC_CTYPE setting requires encoding UTF8.

As best I can tell, the OP is complaining exactly because it did fail, in contrast to pre-8.4 versions that let him create unsafe combinations.

			regards, tom lane
Re: [GENERAL] using querys like: 'select table.*'
On 3/12/2009 7:58 AM, erobles wrote:
> In PostgreSQL 7.2 I can use queries like select * from table1 where table1.key = othertable.key, but in PostgreSQL 8.3 I get: ERROR: missing FROM-clause entry for table ... What should I do to solve this? :-)

You've already been pointed to the workaround backward-compat option. What's happening here is that you're doing an implicit inner join. Your query is being interpreted by PostgreSQL as if you wrote:

select * from table1, othertable where table1.key = othertable.key;

though I prefer to write it as the IMO more readable:

select * from table1 INNER JOIN othertable ON table1.key = othertable.key;

This behaviour isn't supported anymore partly because it's way too easy to write:

select * from table1 where table1.key = typotable.key;

and get confusing error messages or, if `typotable' exists, confusing query results. I'm pretty sure there were more reasons too, but I wasn't really active on the lists when that was going on. It's also really confusing when reading a query.

So - as already pointed out, you will need to re-write your queries to add the required tables to the FROM clause.

-- Craig Ringer
Re: [GENERAL] Catastrophic changes to PostgreSQL 8.4
On 3/12/2009 11:03 AM, Tom Lane wrote:
> As best I can tell, the OP is complaining exactly because it did fail, in contrast to pre-8.4 versions that let him create unsafe combinations.

Teach me not to check it myself before saying anything. I read the OP's post as saying that they were having issues with creation _succeeding_ but disregarding the encoding specification:

> CREATE DATABASE bacula ENCODING 'SQL_ASCII';
>
> However, with PostgreSQL 8.4, the above command is ignored because the default template copied is not template0. This means that some Bacula users who have created PostgreSQL databases with version 8.4 typically find them created with SQL_UTF8 format, which results in serious errors when doing backups for certain machines.

... which would indeed be nasty. That's what I was commenting on. Failing to create an unsafe database, on the other hand, is just what I want to happen, and it's good to see that's what happens on my 8.4 systems.

I'm confused by the above, though. Kern: is the issue that the CREATE DATABASE is failing, or that it's succeeding incorrectly? If it's succeeding incorrectly, it'd really help to know more about the problem systems. The creation script shipped in 3.0.2 was:

#
# KES: Note: the CREATE DATABASE, probably should be
#    CREATE DATABASE ${db_name} $ENCODING TEMPLATE template0
#
if $bindir/psql -f - -d template1 $* <<END-OF-DATA
CREATE DATABASE ${db_name} $ENCODING;
ALTER DATABASE ${db_name} SET datestyle TO 'ISO, YMD';
END-OF-DATA
then
    echo "Creation of ${db_name} database succeeded."
else
    echo "Creation of ${db_name} database failed."
fi
exit 0

... which doesn't look like it'd do anything unexpected to template1 if the CREATE DATABASE failed. You might want to set ON_ERROR_STOP in psql anyway, though.

-- Craig Ringer
Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4
On 3/12/2009 11:09 AM, Jerome Alet wrote:
> On Thu, Dec 03, 2009 at 10:54:07AM +0800, Craig Ringer wrote:
>> Anyway, it'd be nice if Bacula would convert file names to utf-8 at the file daemon, using the encoding of the client, for storage in a utf-8 database.
>
> +1 for me. This is the way to go. I understand people with an existing backup history won't be very happy with this unless you provide them the appropriate tools or instructions to convert their database's content, though.

I just noticed, while reading src/cats/create_postgresql_database:

# use SQL_ASCII to be able to put any filename into
# the database even those created with unusual character sets
ENCODING="ENCODING 'SQL_ASCII'"
# use UTF8 if you are using standard Unix/Linux LANG specifications
# that use UTF8 -- this is normally the default and *should* be
# your standard. Bacula works correctly *only* with correct UTF8.
#
# Note, with this encoding, if you have any weird filenames on
# your system (names generated from Win32 or Mac OS), you may
# get Bacula batch insert failures.
#
#ENCODING="ENCODING 'UTF8'"

... so it's defaulting to SQL_ASCII, but actually supports UTF-8 if your systems are all in a UTF-8 locale. Assuming there's some way for the filed to find out the encoding of the director's database, it probably wouldn't be too tricky to convert non-matching file names to the director's encoding in the fd (when the director's encoding isn't SQL_ASCII, of course).

This also makes me wonder how filenames on Mac OS X and Windows are handled. I didn't see any use of the Unicode-form APIs or any UTF-16 to UTF-8 conversion in an admittedly _very_ quick glance at the filed/ sources. How does Bacula handle file names on those platforms? Read them with the non-Unicode APIs and hope they fit into the current non-Unicode encoding? Or am I missing something?
-- Craig Ringer
Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4
* Craig Ringer (cr...@postnewspapers.com.au) wrote:
> ... so it's defaulting to SQL_ASCII, but actually supports utf-8 if
> your systems are all in a utf-8 locale. Assuming there's some way for
> the filed to find out the encoding of the director's database, it
> probably wouldn't be too tricky to convert non-matching file names to
> the director's encoding in the fd (when the director's encoding isn't
> SQL_ASCII, of course).

I'm not sure which piece of bacula connects to PostgreSQL, but whatever it is, it could just send a 'set client_encoding' to the PG backend and all the conversion will be done by PG.

> This also makes me wonder how filenames on Mac OS X and Windows are
> handled. I didn't see any use of the unicode-form APIs or any UTF-16
> to UTF-8 conversion in an admittedly _very_ quick glance at the
> filed/ sources. How does bacula handle file names on those platforms?
> Read them with the non-unicode APIs and hope they fit into the
> current non-unicode encoding? Or am I missing something?

Good question.

Stephen
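As a sketch of what Stephen describes (the LATIN1 encoding name is an assumption; in practice it would come from the client system):

```sql
-- Tell the backend what encoding this client's data arrives in;
-- PostgreSQL then converts it to the database encoding on the way in.
SET client_encoding TO 'LATIN1';
-- ... batch-insert the file names sent by this client ...
RESET client_encoding;
```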
Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4
On Thu, Dec 03, 2009 at 10:54:07AM +0800, Craig Ringer wrote:
> Anyway, it'd be nice if Bacula would convert file names to utf-8 at
> the file daemon, using the encoding of the client, for storage in a
> utf-8 database.

+1 for me. this is the way to go.

I understand people with an existing backup history won't be very happy with this unless you provide them the appropriate tools or instructions to convert their database's content, though.

bye

-- Jérôme Alet - jerome.a...@univ-nc.nc - Centre de Ressources Informatiques Université de la Nouvelle-Calédonie - BPR4 - 98851 NOUMEA CEDEX Tél : +687 266754 Fax : +687 254829
Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4
Hi!

On Thu, Dec 3, 2009 at 10:39 PM, Jerome Alet jerome.a...@univ-nc.nc wrote:
> On Thu, Dec 03, 2009 at 10:54:07AM +0800, Craig Ringer wrote:
>> Anyway, it'd be nice if Bacula would convert file names to utf-8 at
>> the file daemon, using the encoding of the client, for storage in a
>> utf-8 database.
>
> +1 for me.

+1 here: we have, in fact, had problems when restoring to a server with a different code page than the original one.

> this is the way to go. I understand people with an existing backup
> history won't be very happy with this unless you provide them the
> appropriate tools or instructions to convert their database's content,
> though.
>
> bye
> -- Jérôme Alet - jerome.a...@univ-nc.nc - Centre de Ressources
> Informatiques Université de la Nouvelle-Calédonie - BPR4 - 98851
> NOUMEA CEDEX Tél : +687 266754 Fax : +687 254829

___ Bacula-users mailing list bacula-us...@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/bacula-users
Re: [GENERAL] [Bacula-devel] Catastrophic changes to PostgreSQL 8.4
On Thu, December 3, 2009 00:18, Kern Sibbald wrote:
[SNIP change in Postgres defaults.]

Bloody typical. After the recent bollocking of MySQL, I was thinking, "Maybe I should switch over to postgres?" Hmm, I think I've got some spare Oracle or DB2 licenses floating around... Now, do I *have* the time to get Bacula to work with Oracle/DB2? (When's that roll-out planned?? Oh, Friday... :-')

Cheers,
GaryB-)
[GENERAL] Strange. I can logon with an invalid or no password at all
My program is reporting "Logged on." even if I enter an invalid password, or no password at all. Here is my login function, and below it the function that calls it:

    bool DBConn::Connect(const std::string host, const std::string user, const std::string pass)
    // Connects to the database
    {
        std::string cs = "hostaddr = '" + host + "' port = '' dbname = 'TBDB' user = '"
                       + user + "' password = '" + pass + "' connect_timeout = '10'";
        pg_conn = PQconnectdb(cs.c_str());
        if (!pg_conn)
            return false;
        if (PQstatus(pg_conn) != CONNECTION_OK)
            return false;
        return true;
    }

    void TBClientFrame::Login(const wxString user, const wxString pass)
    {
        std::string host("127.0.0.1");
        std::string user_str = std::string(user.mb_str(*wxConvCurrent));
        std::string pass_str = std::string(pass.mb_str(*wxConvCurrent));
        if (db_conn.Connect(host, user_str, pass_str))
            wxMessageBox(wxT("Logged on."), wxT("Client Info."));
        else {
            fprintf(stderr, "%s", db_conn.ErrorMsg());
        }
    }

It does report an error if the user name is invalid. What could cause this strange behaviour?
Re: [GENERAL] Cannot login, with C/C++ program
Oops, as it happens I had an error in my program logic and pg_conn was actually true.
Re: [GENERAL] Strange. I can logon with an invalid or no password at all
Daniel danwgr...@gmail.com writes:
> My program is reporting "Logged on." even if I enter an invalid or no
> password at all.

You sure the server is configured to ask for a password?
http://www.postgresql.org/docs/8.4/static/client-authentication.html

regards, tom lane
Re: [GENERAL] Strange. I can logon with an invalid or no password at all
> void TBClientFrame::Login(const wxString user, const wxString pass)
> {
>     std::string host("127.0.0.1");
>     std::string user_str = std::string(user.mb_str(*wxConvCurrent));
>     std::string pass_str = std::string(pass.mb_str(*wxConvCurrent));
>     if (db_conn.Connect(host, user_str, pass_str))
>         wxMessageBox(wxT("Logged on."), wxT("Client Info."));
>     else {
>         fprintf(stderr, "%s", db_conn.ErrorMsg());
>     }
> }
>
> It does report an error if the user name is invalid. What could cause
> this strange behaviour?

What is in your pg_hba.conf file? Seems like you've got it set to 'trust' for (at least) 127.0.0.1 - try changing that to 'md5' and restarting postgres.

-- Postgresql php tutorials http://www.designmagick.com/
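For illustration, the pg_hba.conf change being suggested might look like this (the exact existing line is an assumption; check your own file):

```
# pg_hba.conf
# TYPE  DATABASE  USER  ADDRESS       METHOD
#
# Before: local TCP connections are trusted, so no password is checked:
#host    all       all   127.0.0.1/32  trust
#
# After: require an MD5-hashed password (restart/reload postgres afterwards):
host     all       all   127.0.0.1/32  md5
```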
Re: [Bacula-users] [GENERAL] Catastrophic changes to PostgreSQL 8.4
Stephen Frost wrote:
> * Craig Ringer (cr...@postnewspapers.com.au) wrote:
>> ... so it's defaulting to SQL_ASCII, but actually supports utf-8 if
>> your systems are all in a utf-8 locale. Assuming there's some way for
>> the filed to find out the encoding of the director's database, it
>> probably wouldn't be too tricky to convert non-matching file names to
>> the director's encoding in the fd (when the director's encoding isn't
>> SQL_ASCII, of course).
>
> I'm not sure which piece of bacula connects to PostgreSQL, but
> whatever it is, it could just send a 'set client_encoding' to the PG
> backend and all the conversion will be done by PG.

The director is responsible for managing all the metadata, and it's the component that connects to Pg. If the fd sent the system charset along with the bundle of filenames etc that it sends to the director, then I don't see why the director couldn't `SET client_encoding' appropriately before inserting data from that fd, then `RESET client_encoding' once the batch insert was done.

The only downside is that if even one file has invalidly encoded data, the whole batch insert fails and is rolled back. For that reason, I'd personally prefer that the fd handle conversion so that it can exclude such files (with a loud complaint in the error log) or munge the file name into something that _can_ be stored.

Come to think of it, if the fd and database are both on a utf-8 encoding, the fd should *still* validate the utf-8 filenames it reads. There's no guarantee that just because the system thinks the filename should be utf-8, it's actually valid utf-8, and it'd be good to catch this at the fd rather than messing up the batch insert by the director, thus making it much safer than it presently is to use Bacula with a utf-8 database.

-- Craig Ringer
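The validate-and-munge step Craig describes could be as small as this (a sketch in Python rather than Bacula's C++, purely to show the idea; the function name is made up):

```python
def utf8_or_munged(raw: bytes) -> str:
    """Return the file name as valid UTF-8, replacing any invalid bytes
    so that one bad name can't poison the whole batch insert."""
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # A loud complaint would go to the error log here; the munged
        # name keeps the rest of the batch insertable.
        return raw.decode("utf-8", errors="replace")

print(utf8_or_munged(b"caf\xc3\xa9.txt"))  # valid UTF-8 passes through unchanged
print(utf8_or_munged(b"caf\xe9.txt"))      # stray latin-1 byte is replaced
```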
Re: [GENERAL] How to auto-increment?
In response to Andre Lopes:
> Hi, I have a table like this:
>
>     id_product   id_increment
>
> and I need to increment values in id_increment like this:
>
>     prod_1   1
>     prod_1   2
>     prod_1   3
>     prod_2   1

Ahh, you want to count per group, yes? Do you have 8.4? If yes, I would suggest another way: use a window function to count that, or create a view based on such a query. In your case:

    select id_product, row_number() over (partition by id_product) from table

> Which is the best way to do this? Using a trigger? Where can I find
> examples of plpgsql doing this?

With a TRIGGER it would be possible, but what happens if you delete the first record containing 'prod_1'?

> Sorry for my bad english.

Me too ...

Andreas

-- Andreas Kretschmer Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: - Header) GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99
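A quick way to see the numbering this produces (sketched with Python's sqlite3 as a stand-in for PostgreSQL 8.4+, since both support the same row_number() syntax in SQLite >= 3.25; note the ORDER BY inside the OVER clause, which you would want in practice to make the numbering deterministic):

```python
import sqlite3

# In-memory table mirroring the poster's sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id_product TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("prod_1",), ("prod_1",), ("prod_1",), ("prod_2",)])

rows = conn.execute(
    "SELECT id_product,"
    "       row_number() OVER (PARTITION BY id_product ORDER BY rowid)"
    "  FROM t ORDER BY id_product, 2"
).fetchall()
print(rows)  # [('prod_1', 1), ('prod_1', 2), ('prod_1', 3), ('prod_2', 1)]
```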
Re: [GENERAL] deferrable foreign keys
Hello Tom,

> Morus Walter morus.walter...@googlemail.com writes:
>> are there downsides of making foreign keys deferrable (but initially
>> immediate) for updates, when the transaction does not set the
>> constraint behaviour to deferred? I'd expect that to have the same
>> behaviour as non deferrable foreign keys. What I don't understand
>> is, why is non deferrable the default, then.
>
> Because the SQL standard says so.

Ok. Understood.

> I don't believe there is any actual penalty for deferrable within the
> PG implementation, but perhaps there is in other systems'
> implementations.

Thanks a lot for your help.

Morus
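For readers following along, the combination being discussed looks like this (table and constraint names are made up for illustration):

```sql
-- Deferrable, but still checked at the end of each statement by default:
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id)
    DEFERRABLE INITIALLY IMMEDIATE;

-- Only a transaction that asks for it gets deferred checking:
BEGIN;
SET CONSTRAINTS orders_customer_fk DEFERRED;
-- ... reorder parent/child changes freely; the FK is checked at COMMIT ...
COMMIT;
```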