[GENERAL] INSERT..RETURNING on a partitioned table
Hello, I'm attempting to split a big table into smaller tables; I currently do a lot of INSERT .. RETURNING calls on the big table, which work perfectly. To convert the table into smaller tables, I have set up a test case of 3 tables, based on a 'CHECK ( hashtext(field) % 2 ) = -1' (or 0 or 1) constraint. Even this works perfectly - data is inserted into the correct table according to this value, and 'SET constraint_exclusion TO on' even makes the SELECT calls work properly.

However, I'm now stuck on my INSERT .. RETURNING: basically, I have three conditional rules at the moment, and need to make these rules return the ID of the inserted row. But postgres tells me this: 'ERROR: RETURNING lists are not supported in conditional rules'

So my question is: does postgres support INSERT .. RETURNING with partitioned tables in any way, where the subtable to insert into is not known at application level? I know I could write a stored procedure for this, which SELECTs the id from a subtable after it has been INSERTed, but this would put more stress on the database server, and it sounds silly that INSERT .. RETURNING would not be supported in my use case. Any ideas/suggestions? Thanks in advance! Regards, Leon Mergen ---(end of broadcast)--- TIP 6: explain analyze is your friend
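One common workaround (a sketch, not from the thread - the table and column names are invented, and a two-way split is shown for brevity where the poster has three partitions) is to route inserts through a PL/pgSQL function instead of rules. Inside a function, INSERT .. RETURNING .. INTO works on 8.2, so the function can pick the child table itself and hand the generated id back:

```sql
-- Hypothetical parent table "big" (id serial, field text), partitioned
-- into big_0 / big_1 by hashtext(field) % 2. The function chooses the
-- child table and captures the generated id with RETURNING ... INTO.
CREATE OR REPLACE FUNCTION insert_big(p_field text) RETURNS integer AS $$
DECLARE
    new_id integer;
BEGIN
    IF hashtext(p_field) % 2 = 0 THEN
        INSERT INTO big_0 (field) VALUES (p_field) RETURNING id INTO new_id;
    ELSE
        INSERT INTO big_1 (field) VALUES (p_field) RETURNING id INTO new_id;
    END IF;
    RETURN new_id;
END;
$$ LANGUAGE plpgsql;

-- The application then replaces INSERT .. RETURNING with:
-- SELECT insert_big('some value');
```

Unlike a separate SELECT after the INSERT, this is a single round trip and reads the id straight from the INSERT, so the extra server load the poster worries about should be negligible.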
[GENERAL] Providing user based previleges to Postgres DB
Hi All, In one of our projects we currently want to restrict unauthorized users' access to the Postgres DB. We are using Postgres version 8.2.0. Can anybody tell me how I can provide user-based privileges to the Postgres DB, so that we can restrict unauthorized users as well as provide access control to users based on the privileges set by the administrator? Thanks and Regards, Ramac
Re: [GENERAL] Kill session in PostgreSQL
Beware that this is not a supported method, though. It often works, but don't make it a part of your regular maintenance. What you can do is use pg_cancel_backend() to kick the session back out to IDLE - that's fully supported, and it will drop any locks the process has, so it should get rid of most of your problem. //Magnus

On Tue, Apr 10, 2007 at 08:46:20AM -0400, Ardian Xharra wrote: You can use kill PID - Original Message - From: Ashish Karalkar To: pggeneral Sent: Tuesday, April 10, 2007 8:01 AM Subject: [GENERAL] Kill session in PostgreSQL Hello all, is there any command, like Oracle's kill session, to kill a particular session? I tried Linux kill -9 PID but it also kills all other sessions - or am I just giving the wrong signal to the kill command? Thanks in advance With Regards ashish
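For reference, a minimal psql session for the supported route (a sketch; the pid value is made up, and on the 8.x series the pg_stat_activity column is named procpid):

```sql
-- Find the backend serving the runaway session:
SELECT procpid, usename, current_query FROM pg_stat_activity;

-- Cancel just that backend's current query; the connection itself
-- stays open and simply goes idle, releasing its locks.
SELECT pg_cancel_backend(12345);  -- 12345: the procpid found above (invented here)
```

This is the in-database equivalent of sending SIGINT to one backend, which is why it cancels a single session rather than taking the whole postmaster down the way kill -9 can.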
Re: [GENERAL] What about SkyTools?
I will test them intensively in the coming weeks and report the results. I wrote an overview of the Skype PostgreSQL projects in Russian (it will be published this week), so there will be more details available soon. In my opinion, SkyTools are worth testing; they look more interesting than Slony in some respects. On 4/11/07, Dmitry Koterov [EMAIL PROTECTED] wrote: Hello. Has anybody used SkyTools in a production environment? What's the impression? In practice - is it now more preferable than Slony, or not yet?
Re : [GENERAL] Dumping part (not all) of the data in a database...methods?
Hi, AFAIK, you cannot do that with pg_dump ... but as you are working on only one table, you can write the COPY command yourself, and since version 8.2 you can write a SELECT query instead of a table name, like: COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO '/usr1/proj/bray/sql/a_list_countries.copy'; See http://www.postgresql.org/docs/8.2/interactive/sql-copy.html for all the details. I hope this helps, [EMAIL PROTECTED] The Computing Froggy

- Message d'origine - De : Andrew Edson [EMAIL PROTECTED] À : [EMAIL PROTECTED] Envoyé le : Mercredi, 11 Avril 2007, 20h43mn 39s Objet : Re: [GENERAL] Dumping part (not all) of the data in a database...methods? I am aware of this, yes, but the data in question is all (both sets) contained in a single table. That's why I was looking for a way to do a 'dump where (select foo where bar = 'criteria')' structure. Merlin Moncure [EMAIL PROTECTED] wrote: On 4/11/07, Andrew Edson wrote: I need to do a partial dump of a database. All of the entries in the db can be marked as one of two groups, and I've been asked to create a dump of just the second group. Is it possible to do a SELECT-based dump and just grab the one set of records in the output? You are aware you can dump a table at a time, right? pg_dump -t foo dumps table foo. A partial dumping scheme would probably involve using pg_dump with various flags in combination with a script that makes a list of things to dump. merlin
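One caveat worth adding (an assumption on my part, not stated in the thread): server-side COPY runs as the server process and writes on the server's filesystem, and TO a file requires superuser rights. If your psql is also 8.2, the same query form works with the client-side \copy meta-command, which writes on the client machine as an ordinary user - a sketch with the thread's example table:

```sql
-- Runs the COPY through the client connection; the output file is
-- created locally, relative to psql's current directory.
\copy (SELECT * FROM country WHERE country_name LIKE 'A%') TO 'a_list_countries.copy'
```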
[GENERAL] Oracle mailing lists
Hello, I am a little bit off the Postgres topic, but I would like to know about some good Oracle mailing lists. I am looking for something about backup and recovery in Oracle, which I am not familiar with. Thank you, Jan Mura [EMAIL PROTECTED]
[GENERAL] role passwords and md5()
Dear list, I am trying to verify the password given by a user against the system catalog. Since I need the password hash later on, I cannot just use the authentication mechanism for verification, but need to do this in SQL statements. Unfortunately, even if I set passwords to use MD5 encryption in pg_hba.conf, the SQL function MD5() returns a different hash. A (shortened) example: CREATE ROLE my_user WITH ENCRYPTED PASSWORD 'my_password'; SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword=MD5('my_password'); Any ideas what to do to make this work? Best regards, Lutz Broedel
Re: [GENERAL] [ADMIN] Increasing the shared memory
Hello! I first tried to increase the number of shared buffers; I doubled it, from 1000 to 2000 (16Mb). Unfortunately this had no effect. Then I increased max_locks_per_transaction from 64 to 128 (this should assure about 12,800 lock slots), considering max_connections=100 and max_prepared_transactions=5. (Quote from the manual: The shared lock table is created to track locks on max_locks_per_transaction * (max_connections http://www.postgresql.org/docs/8.2/interactive/runtime-config-connection.html#GUC-MAX-CONNECTIONS + max_prepared_transactions http://www.postgresql.org/docs/8.2/interactive/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS ) objects (e.g. tables).) I've also restarted. This also had no effect: I can't see any difference between the maximum input accepted by our application with the old configuration and the maximum input accepted now, with the new configuration. It looks like nothing happened. Thanks, Sorin

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Shoaib Mir Sent: Monday, April 02, 2007 6:02 PM To: Sorin N. Ciolofan Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: [ADMIN] Increasing the shared memory An extract from http://www.powerpostgresql.com/PerfList/ might help you: shared_buffers: As a reminder: This figure is NOT the total memory PostgreSQL has to work with. It is the block of dedicated memory PostgreSQL uses for active operations, and should be a minority of your total RAM on the machine, since PostgreSQL uses the OS disk cache as well. Unfortunately, the exact amount of shared buffers required is a complex calculation of total RAM, database size, number of connections, and query complexity. Thus it's better to go with some rules of thumb in allocating, and monitor the server (particularly pg_statio views) to determine adjustments. On dedicated servers, useful values seem to be between 8MB and 400MB (between 1000 and 50,000 for 8K page size).
Factors which raise the desired shared buffers are larger active portions of the database, large complex queries, large numbers of simultaneous queries, long-running procedures or transactions, more available RAM, and faster/more CPUs. And, of course, other applications on the machine. Contrary to some expectations, allocating much too much shared_buffers can actually lower performance, due to the time required for scanning. Here are some examples based on anecdotes and TPC tests on Linux machines: * Laptop, Celeron processor, 384MB RAM, 25MB database: 12MB/1500 * Athlon server, 1GB RAM, 10GB decision-support database: 120MB/15000 * Quad PIII server, 4GB RAM, 40GB, 150-connection heavy transaction processing database: 240MB/30000 * Quad Xeon server, 8GB RAM, 200GB, 300-connection heavy transaction processing database: 400MB/50000 Please note that increasing shared_buffers, and a few other memory parameters, will require you to modify your operating system's System V memory parameters. See the main PostgreSQL documentation for instructions on this. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com)
Re: [GENERAL] role passwords and md5()
Lutz Broedel wrote: Dear list, I am trying to verify the password given by a user against the system catalog. Since I need the password hash later on, I can not just use the authentication mechanism for verification, but need to do this in SQL statements. Unfortunately, even if I set passwords to use MD5 encryption in pg_hba.conf, the SQL function MD5() returns a different hash. A (shortened) example: CREATE ROLE my_user WITH ENCRYPTED PASSWORD 'my_password'; SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword=MD5('my_password'); Any ideas, what to do to make this work? Best regards, Lutz Broedel A quick look at the source shows that the hashed value stored in pg_authid uses the role name as a salt for the hashing of the password. Moreover, the value in pg_authid has the string md5 prepended to the hash value (I imagine to allow different hash algorithms to be used, but I haven't personally seen anything but md5). Given your example above, the following statement should do what you are looking for: SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword = 'md5' || md5('my_password' || 'my_user'); Hope this helps. Andrew
Re: [GENERAL] [ADMIN] Increasing the shared memory
In response to Sorin N. Ciolofan [EMAIL PROTECTED]: I've tried first to increase the number of shared buffers, I doubled it, from 1000 to 2000 (16Mb). Unfortunately this had no effect. The difference between 8M and 16M of shared buffers is pretty minor. Try bumping it up to 250M or so and see if that helps. You could install the pg_buffercache addon and monitor your buffer usage to see how much is actually being used. However, if the problem is write performance (which I'm inferring from your message that it is) then increasing shared_buffers isn't liable to make a significant improvement, unless the inserts are doing a lot of querying as well. With inserts, the speed is going to (most likely) be limited by the speed of your disks. I may have missed this information in earlier posts: did you provide details of your hardware configuration? Have you done tests to find out what speed your disks are running at? Have you monitored I/O during your inserts to see if the I/O subsystem is maxed out? Also, the original problem you were trying to solve has been trimmed from this thread, which makes me wonder if any of my advice is relevant. Then I increased max_locks_per_transaction from 64 to 128 (this should assure about 12,800 lock slots) considering max_connections=100 and max_prepared_transactions=5 ... I've also restarted. This also had no effect: I can't see any difference between the maximum input accepted by our application with the old configuration and the maximum input accepted now, with the new configuration. It looks like nothing happened.
-- Bill Moran http://www.potentialtech.com
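The pg_buffercache suggestion in the reply above can be acted on with something like this (a sketch against the 8.2 contrib module; install contrib/pg_buffercache first, and note the view shows one row per shared buffer):

```sql
-- How many of the configured shared buffers actually hold a page?
SELECT count(*) AS buffers_in_use
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;

-- Which relations in the current database occupy the most buffers?
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

If buffers_in_use stays well below the configured shared_buffers under load, raising the setting further is unlikely to help, which is the monitoring point Bill is making.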
Re: [GENERAL] [ADMIN] Increasing the shared memory
Sorin N. Ciolofan [EMAIL PROTECTED] writes: This had also no effect. Because I can't see any difference between the maximum input accepted for our application with the old configuration and the maximum input accepted now, with the new configuration. It looks like nothing happened. This is the first you've mentioned about *why* you wanted to increase the settings, and what it sounds like to me is that you are increasing the wrong thing. What's the actual problem? regards, tom lane
[GENERAL] digest data types?
Does anyone have postgresql types for message digests, especially md5 and sha1? Obviously I could store these as text (as I currently do), but I'm particularly interested in custom types that store digests as binary blobs and provide conversion to/from text. Am I correct in assuming that the space saved by storing digests as binary (1/2 the size of hex) will substantially improve index ins/upd/del performance, especially when the digest itself is a large fraction of the rest of the row size? Thanks, Reece -- Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0
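Short of writing a custom type, the built-in bytea type plus encode()/decode() gets the halved storage with no new C code - a sketch (the table and column names are invented; the example digest is md5 of the empty string):

```sql
-- Store the digest as raw bytes: 16 bytes for md5, 20 for sha1,
-- versus 32/40 bytes for the hex text representation.
CREATE TABLE files (
    path text PRIMARY KEY,
    md5  bytea NOT NULL
);

-- Converting to and from the usual hex text form:
INSERT INTO files
VALUES ('/etc/motd', decode('d41d8cd98f00b204e9800998ecf8427e', 'hex'));

SELECT path, encode(md5, 'hex') AS md5_hex FROM files;
```

bytea indexes and compares fine with the default operators, so this also answers the index-size half of the question: the index entries shrink along with the column.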
Re: [GENERAL] SQL - finding next date
On 11/04/2007 21:15, Jon Sime wrote: This is probably a very simple one, but I just can't see the answer and it's driving me nuts. I have a table holding details of academic terms, Many thanks indeed to all who replied - I particularly like Jeff's solution, and will use that one. Regards, Ray. --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] ---
Re: [GENERAL] SQL - finding next date
On 4/12/07, Raymond O'Donnell [EMAIL PROTECTED] wrote: On 11/04/2007 21:15, Jon Sime wrote: This is probably a very simple one, but I just can't see the answer and it's driving me nuts. I have a table holding details of academic terms, Many thanks indeed to all who replied - I particularly like Jeff's solution, and will use that one. I think this is by far the cleanest: select * from term where start_date > (select start_date from term where name = 'foo') order by start_date limit 1; merlin
Re: [GENERAL] Oracle mailing lists
Jan Mura wrote: Hello, I am a little bit off Postgres but would like to know about some good Oracle mailing lists. A little bit off??? Use the forums on the Oracle Technology Network: http://www.oracle.com/technology//index.html -- Guy Rouillier
Re: [GENERAL] SQL - finding next date
On 4/12/07, Merlin Moncure [EMAIL PROTECTED] wrote: On 4/12/07, Raymond O'Donnell [EMAIL PROTECTED] wrote: On 11/04/2007 21:15, Jon Sime wrote: This is probably a very simple one, but I just can't see the answer and it's driving me nuts. I have a table holding details of academic terms, Many thanks indeed to all who replied - I particularly like Jeff's solution, and will use that one. I think this is by far the cleanest: select * from term where start_date > (select start_date from term where name = 'foo') order by start_date limit 1; just to clarify, that would be the best way to pick out the next term from a known term. If you wanted to present the complete list of terms along with the next sequential term, I would suggest: select name, (select name from term f where e.start_date < f.start_date order by f.start_date limit 1) as next_term from term e; I tested it and this is much faster than the 'where exists' solution. If you want all the properties of the next term, just replace 'select name' with 'select term' which will return the term as a record object. merlin
[GENERAL] Autovac _scale_ settings not changed by SIGHUP?
Perhaps I misunderstand something here. The autovac daemon is configured on and working, but was using the standard settings, which are commented out in postgresql.conf. The context field of the pg_settings view suggests that the scale factor values and other autovac settings can be changed with SIGHUP. However, after uncommenting the settings in postgresql.conf and giving them new values, followed by a pg_reload_conf() call, I see no change in the settings. I also did pg_ctl reload for good measure, and nothing changed. Thanks -- --- Jerry Sievers 732 365-2844 (work) Production Database Administrator 305 321-1144 (mobil WWW E-Commerce Consultant
Re: [GENERAL] SQL - finding next date
On 12/04/2007 18:01, Merlin Moncure wrote: I tested it and this is much faster than the 'where exists' solution. Is this an attribute of PostgreSQL in particular, or would it be true of RDBMSs in general? Thanks again, Ray. --- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] ---
Re: [GENERAL] Autovac _scale_ settings not changed by SIGHUP?
Jerry Sievers [EMAIL PROTECTED] writes: Uncommenting the settings in postgresql.conf and giving the new values, then followed by a pg_reload_conf() call however and I see no change in the settings. Works for me ...

regression=# show autovacuum_vacuum_scale_factor;
 autovacuum_vacuum_scale_factor
--------------------------------
 0.2
(1 row)

-- edit postgresql.conf ...

regression=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

regression=# show autovacuum_vacuum_scale_factor;
 autovacuum_vacuum_scale_factor
--------------------------------
 0.25
(1 row)

Are you sure you're editing the right copy of postgresql.conf? regards, tom lane
Re: [GENERAL] SQL - finding next date
On 4/12/07, Raymond O'Donnell [EMAIL PROTECTED] wrote: On 12/04/2007 18:01, Merlin Moncure wrote: I tested it and this is much faster than the 'where exists' solution. Is this an attribute of PostgreSQL in particular, or would it be true of RDBMSs in general? Evaluation of subqueries is one place where various databases differ quite a lot. One of the nice things about postgresql is that sql optimization usually (but not always) entails finding the most direct query to attack the problem. Other databases might prefer joins or the standard subquery approach (where in/exists, etc). My suggestion to return the record in a field as a composite type is a non-standard trick (i think...do composite types exist in the sql standard?). merlin
Re: [GENERAL] SQL - finding next date
Merlin Moncure escribió: my suggestion to return the record in a field as a composite type is a non-standard trick (i think...do composite types exist in the sql standard?). I think composite types are in the standard, yes, but they are a bit different from what we have. I tried to read that part of the standard a while back and came up blank, it's too filled with strange constructs. Too closely linked to tables for one thing; there seems to be no notion of anonymous or standalone types. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
[GENERAL] Bugs not appearing in list archives
I've posted a bug report twice through the web based interface, most recently last Friday. To date, my bug has never shown up in the archives for the pg-bugs list. Chris Fischer Database Engineer ChannelAdvisor Corporation 2701 Aerial Center Parkway Morrisville | North Carolina | 27560 919.228.2011 www.channeladvisor.com
Re: [GENERAL] seeking: advice on reordering table
On 4/12/07, Jonathan Vanasco [EMAIL PROTECTED] wrote: can anyone suggest a good way to reorder the table? everything that i can think of involves creating a new table, which means I'd have to redo all the constraints. The common thing to do is to write your query in such a way that what interests you most is put leftmost ... Cheers, Andrej
Re: [GENERAL] Bugs not appearing in list archives
Chris Fischer wrote: I've posted a bug report twice through the web based interface, most recently last Friday. To date, my bug has never shown up in the archives for the pg-bugs list. There is currently an intermittent issue with the server that forwards the bugs from the website to the lists. We are planning to replace the server tomorrow. In the meantime, please forward your reports to [EMAIL PROTECTED] Sorry for any inconvenience caused. Regards, Dave
Re: [GENERAL] backend reset of database
I've done some further research on this problem. I'm not a big user of strace, so I'm looking to get some insight here. Basically, there are two different versions of the pcmiler libraries. One set works, the other does not. The following is the strace output from both processes. The first is the one that works, the second is the one that does not. Does anyone see anything here that might identify the problem? Thanks:

working strace:
send(3, Q\0\0\0008select pcm_getmiles_s(\'spar..., 57, 0) = 57
rt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_IGN}, 8) = 0
poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN}], 1, -1) = 1
recv(3, T\0\0\0\'\0\1pcm_getmiles_s\0\0\0\0\0\0\0\0\0\2\275..., 16384, 0) = 74
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {c_iflags=0x2d02, c_oflags=0x5, c_cflags=0x4bf, c_lflags=0x8a3b, c_line=0, c_cc=\x03\x1c\x7f\x15\x04\x00\x01\xff\x11\x13\x1a\xff\x12\x0f\x17\x16\xff\x00\x00}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {c_iflags=0x2d02, c_oflags=0x5, c_cflags=0x4bf, c_lflags=0x8a3b, c_line=0, c_cc=\x03\x1c\x7f\x15\x04\x00\x01\xff\x11\x13\x1a\xff\x12\x0f\x17\x16\xff\x00\x00}) = 0
ioctl(1, TIOCGWINSZ, {ws_row=24, ws_col=80, ws_xpixel=0, ws_ypixel=0}) = 0

broken strace:
send(3, Q\0\0\0008select pcm_getmiles_s(\'spar..., 57, 0) = 57
rt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_IGN}, 8) = 0
poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN|POLLHUP}], 1, -1) = 1
recv(3, , 16384, 0) = 0
time(NULL) = 1176403599
poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN|POLLHUP}], 1, 0) = 1
recv(3, , 16384, 0) = 0
close(3) = 0
write(2, server closed the connection une..., 137
  server closed the connection unexpectedly
  This probably means the server terminated abnormally
  before or while processing the request.
) = 137
write(2, The connection to the server was..., 57
  The connection to the server was lost. Attempting reset:
) = 57

-- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.
- Benjamin Franklin
[GENERAL] Performance of using tablespaces to seperate indexes and tables to different disks
Hi, Does separating tables and indexes onto different drives through the use of tablespaces increase COPY or CREATE INDEX performance? If you have a ballpark figure of how much gain, if any, that would be great too. Benjamin
[GENERAL] pg_dump when database contains multiple tablespaces
Hi, What does the resulting dump (pg_dump) look like for a database with multiple tablespaces? Does the dump just strip off the TABLESPACE clauses, so that restoring it will lump everything into the default tablespace? Benjamin
[GENERAL] local selectivity estimation - computing frequency of predicates
Dear All, How can I compute the frequency of a predicate (e.g. Salary > $7) in an SQL query from a DB's pre-defined indexes? I'm specifically looking at how to retrieve information about indices (like the number of pages at each level of an index, the range of attribute values, etc.). Any suggestions regarding the same would be great. Thanks, Avdhoot K. Saple Junior Research Associate High Performance Grid Computing Infosys Technologies Ltd.
[GENERAL] deadlock
Hello. I've encountered a deadlock on postgresql 8.1. Here is a simple example.

create table t1 ( id1 integer primary key, val1 integer );
create table t2 ( id2 integer primary key, id1 integer references t1 on delete cascade, val1 integer, val2 integer, val3 integer );
insert into t1 select generate_series(1,10) as id1;
insert into t2 select generate_series(1,100) as id2, generate_series(1,10) as id1;

Then three concurrent transactions start.

/*1*/ begin;
/*1*/ update t2 set val1=1 where id2=50;
/*1*/ update t2 set val2=2 where id2=50;
/*2*/ begin;
/*2*/ update t2 set val1=1 where id2=40;
/*2*/ update t2 set val2=2 where id2=40;
/*2*/ commit;
/*3*/ begin;
/*3*/ update t1 set val1=1 where id1=10;
/*1*/ update t2 set val3=3 where id2=50;

Here we have a deadlock for transactions 1 and 3. Is it a bug? And if so, will it be fixed? Or is it not a bug? What should I do then? Thank you. Sorry for my bad English. -- Alexey A. Nalbat Price Express http://www.price.ru/ http://www.tyndex.ru/
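A plausible reading (my interpretation, not from the thread): each UPDATE of a t2 row re-takes a shared lock on the referenced t1 row for the foreign-key check, so once transaction 3 is queued for an exclusive lock on that t1 row, transaction 1's *second* lock acquisition queues behind it while transaction 3 waits on transaction 1 - a classic deadlock. One way to sidestep this particular interleaving is to touch each t2 row only once per transaction, so the parent-row lock is acquired a single time:

```sql
/*1*/ begin;
/*1*/ update t2 set val1=1, val2=2, val3=3 where id2=50;  -- one statement, one row lock
/*1*/ commit;
```

Where the columns cannot be combined into one statement, acquiring locks in a consistent order (e.g. locking the t1 parent row first with SELECT ... FOR UPDATE) is the usual alternative.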
[GENERAL] Re: The rule question before, request official documentation on the problem
In article [EMAIL PROTECTED], Chris Travers [EMAIL PROTECTED] wrote: % DO ALSO rules involving NEW are fundamentally dangerous to the integrity % of data because NEW is not guaranteed to be internally consistent. DO % INSTEAD rules are fine (there is only one NEW), as are any DO ALSO rules % involving OLD. It seems to me that this sort of dogmatism is fundamentally dangerous.

CREATE TABLE x (a varchar(20) PRIMARY KEY, b INT NOT NULL);
CREATE TABLE y (a varchar(20) NOT NULL, b INT NOT NULL);
CREATE RULE y_ins AS ON INSERT TO y DO UPDATE x SET b=b+new.b WHERE a=new.a;
CREATE RULE y_del AS ON DELETE TO y DO UPDATE x SET b=b-old.b WHERE a=old.a;
INSERT INTO x VALUES ('a', 0);
INSERT INTO y VALUES ('a', 2);
INSERT INTO y VALUES ('a', 2);
SELECT * FROM x;
 a | b
---+---
 a | 4

DELETE FROM y;
SELECT * FROM x;
 a | b
---+---
 a | 2

The DO ALSO rules involving OLD didn't do so well here. The section on rules v. triggers could do with a caveat or two, but it's a bit much to call them fundamentally dangerous. -- Patrick TJ McPhee North York Canada [EMAIL PROTECTED]
[GENERAL] Start postgresql failed
Hi, everyone. I tried to restart postgresql after my computer crashed. However, it failed. The reason given in the log file is: runuser: cannot set groups: Operation not permitted. It is a really strange error, isn't it? Does anybody have a clue? Thanks very much
Re: [GENERAL] SQL - finding next date
On 4/11/07, Raymond O'Donnell [EMAIL PROTECTED] wrote:

 Hi all,

 This is probably a very simple one, but I just can't see the answer and
 it's driving me nuts. I have a table holding details of academic terms,
 and I need an SQL query such that for any given term I want to find the
 next term by starting date (or just NULL if there isn't one). Here's the
 table:

 CREATE TABLE terms (
   term_id serial NOT NULL,
   term_name character varying(40) NOT NULL,
   term_starts date NOT NULL,
   term_ends date NOT NULL,
   .
 );

 So, supposing I have the following data:

  term_id |  term_name  | term_starts | ...
 ---------+-------------+-------------+-----
        1 | Spring 2007 | 2007-01-10  | ...
        2 | Autumn 2007 | 2007-09-01  | ...
        6 | Spring 2008 | 2008-01-06  | ...

 then for term '1' I'd like to return '2', for term '2' I'd like to return
 '6', and so on. The closest I've got is getting ALL terms that start
 after a given one, but I run into trouble after that. Any help will be
 appreciated!

 Thanks in advance,
 Ray.

 ---
 Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
 [EMAIL PROTECTED]
 ---

SELECT main.term_name, main.term_starts mts,
       next.term_name, next.term_starts nts
FROM terms main
LEFT JOIN terms next ON main.term_starts < next.term_starts
WHERE NOT EXISTS (SELECT 1 FROM terms t
                  WHERE t.term_starts > main.term_starts
                    AND t.term_starts < next.term_starts)

It's kind of a sneak attack way of getting at the min(term_starts) without
using an aggregate.

Jeff
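For what it's worth, a correlated subquery with ORDER BY/LIMIT is another way to express "the next term" without an aggregate. This is a sketch assuming the terms table described above; the term_id 1 in the outer WHERE clause is just an example:

```sql
-- For a given term (here term_id = 1), return the term_id of the
-- term with the next-later start date, or NULL if there is none.
SELECT (SELECT t2.term_id
        FROM terms t2
        WHERE t2.term_starts > t1.term_starts
        ORDER BY t2.term_starts
        LIMIT 1) AS next_term_id
FROM terms t1
WHERE t1.term_id = 1;
```

With an index on term_starts, the inner ORDER BY/LIMIT 1 can typically be satisfied by a single index probe.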
Re: [GENERAL] Evaluate only one CASE WHEN in a select
On Apr 11, 11:35 pm, [EMAIL PROTECTED] (Guy Rouillier) wrote:

 dcrespo wrote:

  Hi everybody, I'm implementing something like this:

  SELECT
    CASE WHEN add_numbers(t1.main_number,t2.main_number) > 100
         THEN t1.description1 ELSE t2.description1
    END AS number_description1,
    CASE WHEN add_numbers(t1.main_number,t2.main_number) > 100
         THEN t1.description2 ELSE t2.description2
    END AS number_description2
  FROM table1 t1, table2 t2;

  Is there a way to evaluate the 'CASE WHEN' only once?

 Sure, see the implementation of CASE here:
 http://www.postgresql.org/docs/8.2/static/functions-conditional.html

 Not sure what you are looking for, though, since your condition in both
 CASEs above is exactly the same.

 --
 Guy Rouillier

They are exactly the same; that's why I want to evaluate it only once and,
depending on it, put the corresponding value into two different fields that
must be returned, instead of evaluating once for each field.

Any insight?

Thank you,
Daniel
Re: [GENERAL] backend reset of database
Geoffrey wrote:

I forgot to mention the one difference I did see in the strace output. I
don't understand why there would be a difference in the call parms to
poll(). Then again, I'm not altogether sure what code is calling poll at
this point. I'm assuming it's from the database engine. I've noted these
lines below:

working strace:
poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN}], 1, -1) = 1

broken strace:
poll([{fd=3, events=POLLIN|POLLERR, revents=POLLIN|POLLHUP}], 1, -1) = 1

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.  - Benjamin Franklin
Re: [GENERAL] backend reset of database
Geoffrey [EMAIL PROTECTED] writes:

 I've done some further research on this problem. I'm not a big user of
 strace, so I'm looking at getting some insight here. Basically, there are
 two different versions of the pcmiler libraries. One set works, the other
 does not. The following is the strace output from both processes.

You strace'd the wrong process, ie, the client.

regards, tom lane
Re: [GENERAL] backend reset of database
Tom Lane wrote:

 Geoffrey [EMAIL PROTECTED] writes:

  I've done some further research on this problem. I'm not a big user of
  strace, so I'm looking at getting some insight here. Basically, there
  are two different versions of the pcmiler libraries. One set works, the
  other does not. The following is the strace output from both processes.

 You strace'd the wrong process, ie, the client.

Okay, so how stupid can I be? (You could have sent it as private email.) :)
Now that I have my head on my shoulders where it belongs, I'll give this
another spin...

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.  - Benjamin Franklin
Re: [GENERAL] deadlock
Alexey Nalbat [EMAIL PROTECTED] writes:

 I've encountered deadlock on postgresql 8.1. Here is the simple example.

Your example doesn't deadlock for me ...

regards, tom lane
Re: [GENERAL] Evaluate only one CASE WHEN in a select
dcrespo [EMAIL PROTECTED] writes:

 They are exactly the same, that's why I want to evaluate it only once
 and, depending on it, put the corresponding value into two different
 fields that must be returned, instead of evaluating once for each field.
 Any insight?

There's no solution that wouldn't cost you more than double evaluation, for
such a simple expression. The general solution is to use two levels of
SELECT:

select ..., x, x, ... from
  (select ..., big-expr as x, ... from ... offset 0) ss;

You need the "offset 0" (which is otherwise a no-op) to prevent the planner
from folding the two selects into a single level and ending up with two
copies of big-expr anyway. The runtime overhead associated with the extra
plan level is probably going to eat up whatever you might save in this
example, though with a seriously expensive expression (for instance, a
function that does some fairly expensive SELECT itself) you might find it
worth doing.

regards, tom lane
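Applied to the query from earlier in the thread, the two-level form might look like the sketch below. This assumes the hypothetical table1/table2/add_numbers names from that post, and the column aliases (over_limit, d1a, etc.) are made up for illustration:

```sql
-- The condition is computed once per row in the inner select,
-- then reused by both CASE expressions in the outer select.
SELECT CASE WHEN ss.over_limit THEN ss.d1a ELSE ss.d1b END AS number_description1,
       CASE WHEN ss.over_limit THEN ss.d2a ELSE ss.d2b END AS number_description2
FROM (SELECT add_numbers(t1.main_number, t2.main_number) > 100 AS over_limit,
             t1.description1 AS d1a, t2.description1 AS d1b,
             t1.description2 AS d2a, t2.description2 AS d2b
      FROM table1 t1, table2 t2
      OFFSET 0) ss;
```

The OFFSET 0 keeps the planner from flattening the subquery, per the explanation above.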
Re: [GENERAL] deadlock
On Mon, 2007-04-09 at 04:24, Alexey Nalbat wrote:

 Hello. I've encountered deadlock on postgresql 8.1. Here is the simple
 example.

 create table t1 ( id1 integer primary key, val1 integer );
 create table t2 ( id2 integer primary key,
                   id1 integer references t1 on delete cascade,
                   val1 integer, val2 integer, val3 integer );
 insert into t1 select generate_series(1,10) as id1;
 insert into t2 select generate_series(1,100) as id2,
                       generate_series(1,10) as id1;

 Then three concurrent transaction start.

 /*1*/ begin;
 /*1*/ update t2 set val1=1 where id2=50;
 /*1*/ update t2 set val2=2 where id2=50;
 /*2*/ begin;
 /*2*/ update t2 set val1=1 where id2=40;
 /*2*/ update t2 set val2=2 where id2=40;
 /*2*/ commit;
 /*3*/ begin;
 /*3*/ update t1 set val1=1 where id1=10;
 /*1*/ update t2 set val3=3 where id2=50;

 Here we have deadlock for transactions 1 and 3.

That's not a deadlock; transaction 3 is simply waiting for transaction 1 to
commit or roll back. If you run a commit or rollback in transaction 1, then
transaction 3 will be ready to commit or roll back as needed.
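One way to tell simple blocking apart from a deadlock (a sketch; the pg_locks view exists in 8.1, though its column set varies somewhat across versions) is to look for ungranted locks from a third session while the transactions are open:

```sql
-- Run from a separate session while the others are blocked.
-- A plain blocked transaction shows up here as ungranted lock
-- requests that sit indefinitely; a true deadlock would instead
-- be detected by the server after deadlock_timeout and one
-- transaction aborted with a "deadlock detected" error.
SELECT locktype, relation::regclass, transactionid, pid, mode, granted
FROM pg_locks
WHERE NOT granted;
```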
Re: [GENERAL] pg_standby
On Mar 29, 2:17 pm, [EMAIL PROTECTED] (Thomas F. O'Connell) wrote:

 I see that Simon has pushed pg_standby into contrib for 8.3. Is there
 anything that would make the current version in CVS unsuitable for use in
 8.2.x? I've done a cursory inspection of the code, but I'll admit that I
 haven't looked up the interfaces used from postgres_fe.h and
 pg_config_manual.h to see whether anything would be unusable or behave
 unexpectedly in 8.2.

 I'm assuming the correct way to install it would be to take the
 pg_standby directory from CVS, add it to an 8.2.x source contrib tree,
 and build as if it were a part of contrib in 8.2?

A post from Simon on a thread in pgsql-admin answers the primary question:

http://archives.postgresql.org/pgsql-admin/2007-04/msg00069.php

And I answered the part about building by testing: the answer seems to be
yes.

--
Thomas F. O'Connell
optimizing modern web applications : for search engines, for usability,
and for performance : http://o.ptimized.com/ 615-260-0005
Re: [GENERAL] backend reset of database
Tom Lane wrote:

 Geoffrey [EMAIL PROTECTED] writes:

  I've done some further research on this problem. I'm not a big user of
  strace, so I'm looking at getting some insight here. Basically, there
  are two different versions of the pcmiler libraries. One set works, the
  other does not. The following is the strace output from both processes.

 You strace'd the wrong process, ie, the client.

Well, when I strace the proper process, it appears to point back at the
3rd-party application, thus I'm following up with them.

--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety.  - Benjamin Franklin
Re: [GENERAL] seeking: advice on reordering table
Andrej Ricnik-Bay wrote:

 On 4/12/07, Jonathan Vanasco [EMAIL PROTECTED] wrote:

  can anyone suggest a good way to reorder the table? everything that i
  can think of involves creating a new table, which means I'd have to
  redo all the constraints.

 The common thing to do is to write your query in such a way that what
 interests you most is put left ...

 // Jonathan Vanasco

Create a clustered index on the attribute you want it ordered by. (I'm not
sure if a clustered index can be a composite, but the docs or someone else
here will let you know :-)

Note that any inserts (or updates on the attr with the clustered index)
will cause the ordering to be broken, in which case you'll need to
re-cluster.

Cheers,
Brent Wood
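As a sketch, with made-up table and index names (in 8.2 and earlier the syntax is CLUSTER indexname ON tablename; later releases switched to CLUSTER tablename USING indexname):

```sql
-- Physically rewrite the table in index order.
CREATE INDEX orders_created_idx ON orders (created_at);
CLUSTER orders_created_idx ON orders;

-- CLUSTER is a one-time reorder, not a maintained property:
-- after enough inserts/updates have disturbed the order, repeat it.
-- A bare "CLUSTER orders;" reuses the previously chosen index.
CLUSTER orders;
```

Note that CLUSTER takes an exclusive lock on the table while it rewrites it, so it is best scheduled during a quiet period.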
[GENERAL] DTrace and PostgreSQL
I've got Solaris 10 11/06 on my PC. I removed the static keyword in
src/backend/access/transam/xact.c from the AbortTransaction and
CommitTransaction function declarations and compiled 8.2.3. Everything
works nicely.

I was wondering if DTrace could tell me how many inserts are being done in
a pl/pgsql function while in a loop, for example. As you know, a pl/pgsql
function executes in a single transaction, so the DTrace probe
transaction__commit(int) is, I believe, not helpful here. Could DTrace
measure how many inserts are being done in a transaction that has not yet
been committed, especially if that transaction block is in a pl/pgsql
function? This would be extremely useful: when one has a bunch of inserts,
one could see how far along the pl/pgsql function was.

regards,
karen
Re: [GENERAL] DTrace and PostgreSQL
Karen Hill wrote:

 I've got Solaris 10 11/06 on my PC. I removed the static keyword in
 src/backend/access/transam/xact.c. for the AbortTransaction and
 CommitTransaction functions declarations and compiled 8.2.3. Everything
 works nicely. I was wondering if DTrace could tell me how many inserts
 are being done in a pl/pgsql function while in a loop for example. As you
 know a pl/pgsql function executes in a single transaction so the DTrace
 probe transaction__commit(int) I believe is not helpful here. Could
 DTrace measure how many inserts are being done in a transaction that has
 not yet been commited, especially if that transaction block is in a
 pl/pgsql function? This would be extremely useful as when one has a bunch
 of inserts one could be able to see how far along the pl/pgsql function
 was.

There are no trace points for that, but you can insert them if you want. I
believe the appropriate place to put it would be either heap_insert or
ExecInsert.

--
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
[GENERAL] Arrays with Rails?
Has anyone here used a postgres array with Rails? If so, how? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Arrays with Rails?
Rick Schumeyer wrote:

 Has anyone here used a postgres array with Rails? If so, how?

split()?

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/