Re: [HACKERS] LWLock statistics collector (was: CSStorm occurred again by postgreSQL8.2)
Hi, all.

The cause of the CSStorm problem described in my previous mails has been found, and a provisional patch resolves it, so I am reporting the results.

Subject: [HACKERS] poor performance with Context Switch Storm at TPC-W
Date: Tue, 11 Jul 2006 20:09:24 +0900
From: Katsuhiko Okano [EMAIL PROTECTED]

The poor performance with the Context Switch Storm occurred in the following configuration.

Background:
Since PostgreSQL 8.0, SAVEPOINT is supported, so any transaction may contain one or more subtransactions. When judging the visibility of a tuple (e.g. whether its XMIN committed), we must determine whether the XID that inserted the tuple is a top-level transaction or a subtransaction. That judgment requires access to SubTrans (the data structure that tracks each transaction ID's parent), and SubTrans is accessed through an LRU buffer.

Conditions under which this phenomenon occurs:
- A transaction that reads tuples at high frequency (typically a seq scan).
- Updating transactions at a moderate rate.
- A sufficiently long-lived transaction.

Points of view:
(A) The buffer-replacement algorithm is bad. A page's timestamp is not updated until its swapout completes, so if other pages are accessed while the swap is in progress, the page being swapped out still looks least recently used and is chosen again as the swapout victim (under high workload).
(B) SubTrans is accessed at every tuple-visibility judgment, which is very frequent. If many processes wait on the LWLock using semop, a CSStorm occurs.

Results:
For (A), I created a patch that excludes pages whose read/write is IN PROGRESS from the replacement candidates (with a "betterslot" fallback for the case where all pages are IN PROGRESS). The patch was applied, but the problem recurred; it is not a fundamental solution.
For (B), a patch was created that treats all transactions as top-level transactions.
(Thank you, ITAGAKI.) The patch was applied and an 8-hour run was measured; the CSStorm problem no longer occurred.

Arguments:
(1) Since neither SAVEPOINT nor PL/pgSQL error traps are used in this workload, the subtransactions are unnecessary. Would it be better to implement a mode that does not use subtransactions?
(2) It would be better if SubTrans could be cached with a structure like CLOG's, so that the LRU buffer need not be checked on every occasion.

Are there problems with, or other ideas about, these?

Katsuhiko Okano
okano katsuhiko _at_ oss ntt co jp

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] LWLock statistics collector (was: CSStorm occurred again by postgreSQL8.2)
Katsuhiko Okano wrote:
> Since the cause was found and a provisional patch was made that solved the CSStorm problem described in previous mails, I am reporting the results.
(snip)
> (A) The buffer-replacement algorithm is bad. A page's timestamp is not updated until its swapout completes, so if other pages are accessed while the swap is in progress, the page being swapped out still looks least recently used and is chosen again as the swapout victim (under high workload).

The following is the patch.

diff -cpr postgresql-8.1.4-orig/src/backend/access/transam/slru.c postgresql-8.1.4-SlruSelectLRUPage-fix/src/backend/access/transam/slru.c
*** postgresql-8.1.4-orig/src/backend/access/transam/slru.c	2006-01-21 13:38:27.0 +0900
--- postgresql-8.1.4-SlruSelectLRUPage-fix/src/backend/access/transam/slru.c	2006-07-25 18:02:49.0 +0900
*************** SlruSelectLRUPage(SlruCtl ctl, int pagen
*** 703,710 ****
  	for (;;)
  	{
  		int			slotno;
! 		int			bestslot = 0;
  		unsigned int bestcount = 0;
  
  		/* See if page already has a buffer assigned */
  		for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
--- 703,712 ----
  	for (;;)
  	{
  		int			slotno;
! 		int			bestslot = -1;
! 		int			betterslot = -1;
  		unsigned int bestcount = 0;
+ 		unsigned int bettercount = 0;
  
  		/* See if page already has a buffer assigned */
  		for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
*************** SlruSelectLRUPage(SlruCtl ctl, int pagen
*** 720,732 ****
  		 */
  		for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
  		{
! 			if (shared->page_status[slotno] == SLRU_PAGE_EMPTY)
! 				return slotno;
! 			if (shared->page_lru_count[slotno] > bestcount &&
! 				shared->page_number[slotno] != shared->latest_page_number)
! 			{
! 				bestslot = slotno;
! 				bestcount = shared->page_lru_count[slotno];
  			}
  		}
--- 722,746 ----
  		 */
  		for (slotno = 0; slotno < NUM_SLRU_BUFFERS; slotno++)
  		{
! 			switch (shared->page_status[slotno])
! 			{
! 				case SLRU_PAGE_EMPTY:
! 					return slotno;
! 				case SLRU_PAGE_READ_IN_PROGRESS:
! 				case SLRU_PAGE_WRITE_IN_PROGRESS:
! 					if (shared->page_lru_count[slotno] > bettercount &&
! 						shared->page_number[slotno] != shared->latest_page_number)
! 					{
! 						betterslot = slotno;
! 						bettercount = shared->page_lru_count[slotno];
! 					}
! 					break;
! 				default:	/* SLRU_PAGE_CLEAN, SLRU_PAGE_DIRTY */
! 					if (shared->page_lru_count[slotno] > bestcount &&
! 						shared->page_number[slotno] != shared->latest_page_number)
! 					{
! 						bestslot = slotno;
! 						bestcount = shared->page_lru_count[slotno];
! 					}
  			}
  		}
*************** SlruSelectLRUPage(SlruCtl ctl, int pagen
*** 736,741 ****
--- 750,758 ----
+ 		if (bestslot == -1)
+ 			bestslot = betterslot;
+ 
  		if (shared->page_status[bestslot] == SLRU_PAGE_CLEAN)
  			return bestslot;
  
  		/*
  		 * We need to do I/O.  Normal case is that we have to write it out,
  		 * but it's possible in the worst case to have selected a read-busy

Regards,

Katsuhiko Okano
okano katsuhiko _at_ oss ntt co jp
Re: [HACKERS] [COMMITTERS] pgsql: First small patches needed for regression tests
[EMAIL PROTECTED] (Michael Meskes) writes:
> Log Message:
> First small patches needed for regression tests

According to the buildfarm, this commit broke the build under --enable-thread-safety.

			regards, tom lane
[HACKERS] Connection limit and Superuser
It appears that the superuser does not have connection limit enforcement. I think this should be changed. Slony in particular does not need more than N connections but does require being a super user.

--
Re: [HACKERS] Connection limit and Superuser
On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
> It appears that the superuser does not have connection limit enforcement. I think this should be changed.

So if some admin process goes awry and uses up all the connection slots, how does the admin get in to see what's happening? If there's a limit you're not really superuser, are you?

> Slony in particular does not need more than N connections but does require being a super user.

Maybe someone should look into enabling slony to not run as a superuser?

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] Connection limit and Superuser
Rod Taylor [EMAIL PROTECTED] writes:
> It appears that the superuser does not have connection limit enforcement. I think this should be changed.

If you're superuser, you are not subject to access restrictions, by definition. I cannot imagine any scenario under which the above would be a good idea. (Hint: it would be more likely to lock out manual admin connections than Slony.)

			regards, tom lane
Re: [HACKERS] DTrace enabled build fails
Robert Lor wrote:
> Peter Eisentraut wrote:
>> That rings a bell. Can we get a more precise designation on what version of DTrace we support? And where can one get that required update?
> Peter,
> The problem with static functions was fixed recently and the fix is now available in Solaris Express (the development version of Solaris). You can get the bits from http://www.sun.com/software/solaris/solaris-express/get.jsp. I forgot to mention this known issue in my previous emails! I was told by the DTrace engineer that this fix will be in the next update of Solaris 10.

Do we need to add detection logic to catch buggy versions?

--
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Connection limit and Superuser
On Mon, 2006-07-31 at 15:00, Martijn van Oosterhout wrote:
> On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
>> It appears that the superuser does not have connection limit enforcement. I think this should be changed.
> So if some admin process goes awry and uses up all the connection slots, how does the admin get in to see what's happening? If there's a limit you're not really superuser, are you?

I thought there is a limit for super-users too... citation from:
http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

max_connections (integer)
    Determines the maximum number of concurrent connections to the database server. The default is typically 100, but may be less if your kernel settings will not support it (as determined during initdb). This parameter can only be set at server start.
    Increasing this parameter may cause PostgreSQL to request more System V shared memory or semaphores than your operating system's default configuration allows. See Section 16.4.1 for information on how to adjust those parameters, if necessary.

superuser_reserved_connections (integer)
    Determines the number of connection slots that are reserved for connections by PostgreSQL superusers. At most max_connections connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least max_connections minus superuser_reserved_connections, new connections will be accepted only for superusers. The default value is 2. The value must be less than the value of max_connections. This parameter can only be set at server start.

Cheers,
Csaba.
Re: [HACKERS] Connection limit and Superuser
Martijn van Oosterhout wrote:
> On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
>> It appears that the superuser does not have connection limit enforcement. I think this should be changed.
> So if some admin process goes awry and uses up all the connection slots, how does the admin get in to see what's happening? If there's a limit you're not really superuser, are you?
>> Slony in particular does not need more than N connections but does require being a super user.
> Maybe someone should look into enabling slony to not run as a superuser?

That was my initial reaction to this suggestion. But then I realised that it might well make sense to have a separate connection-limited superuser for Slony purposes (or any other special purpose) alongside an unlimited superuser. If we were restricted to having just one superuser I would be much more inclined to agree with you.

Perhaps if this suggestion were to be adopted it could be argued that the superuser reserved connection slots should be kept only for superusers that are not connection-limited.

cheers

andrew
Re: [HACKERS] User-defined type similar to char(length), varchar(length)
On Mon, Jul 31, 2006 at 05:04:00PM +0400, Teodor Sigaev wrote:
> Is it possible to create a user-defined type with optional length in CREATE TABLE, similar to char()/varchar()/bit()? Without modifying gram.y, of course...

No. Search the archives for discussions about user-defined typmod. The basic problem came down to the fact that the set of allowed words for functions and types would be forced to be the same (due to restrictions in lookahead), and people weren't happy with that because it may hamper future SQL compatibility. There were ways around this (all fairly ugly though).

Note: this is what I remember about the discussions anyway. I'm sure a lot of people would be happy if someone took this on though.

Have a nice day,
--
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] Connection limit and Superuser
On Mon, 2006-07-31 at 09:06 -0400, Tom Lane wrote:
> Rod Taylor [EMAIL PROTECTED] writes:
>> It appears that the superuser does not have connection limit enforcement. I think this should be changed.
> If you're superuser, you are not subject to access restrictions, by definition. I cannot imagine any scenario under which the above would be a good idea. (Hint: it would be more likely to lock out manual admin connections than Slony.)

If you don't want an admin user to have a connection limit, give them -1 or no connection limit.

Anyway, you're right that Slony should not require superuser status, but at the moment that is rather tricky to accomplish since it wants to muck about in the system catalogues, use pg_cancel_backend, among other things.

--
Re: [HACKERS] Connection limit and Superuser
On Mon, 2006-07-31 at 15:07 +0200, Csaba Nagy wrote:
> On Mon, 2006-07-31 at 15:00, Martijn van Oosterhout wrote:
>> On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
>>> It appears that the superuser does not have connection limit enforcement. I think this should be changed.
>> So if some admin process goes awry and uses up all the connection slots, how does the admin get in to see what's happening? If there's a limit you're not really superuser, are you?
> I thought there is a limit for super-users too... citation from:
> http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

Sorry for not being more specific. I was speaking about ALTER ROLE WITH CONNECTION LIMIT.

--
Re: [HACKERS] Connection limit and Superuser
Nevermind, I realized now that you're talking about a different setting.

> I thought there is a limit for super-users too... citation from:
> http://www.postgresql.org/docs/8.1/static/runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS

Cheers,
Csaba.
Re: [HACKERS] Connection limit and Superuser
On Mon, 2006-07-31 at 15:00 +0200, Martijn van Oosterhout wrote:
> On Mon, Jul 31, 2006 at 08:47:38AM -0400, Rod Taylor wrote:
>> It appears that the superuser does not have connection limit enforcement. I think this should be changed.
> So if some admin process goes awry and uses up all the connection slots, how does the admin get in to see what's happening? If there's a limit you're not really superuser, are you?

Work this one through. If an admin process goes awry and uses up all the connection slots, it has reached max_connections AND used superuser_reserved_connections as well. This means an admin cannot get in to see what is happening. That's what happens today.

I would much prefer that superuser 'a' reaches WITH CONNECTION LIMIT for user 'a' and superuser 'b' can get in to see what is happening.

>> Slony in particular does not need more than N connections but does require being a super user.
> Maybe someone should look into enabling slony to not run as a superuser?
> Have a nice day,

--
Re: [HACKERS] User-defined type similar to char(length), varchar(length)
Martijn van Oosterhout kleptog@svana.org writes:
> No. Search the archives for discussions about user defined typmod. The basic problem came down to the fact that the set of allowed words for functions and types would be forced to be the same (due to restrictions in lookahead), and people weren't happy with that because it may hamper future SQL compatibility. There were ways around this (all fairly ugly though).

I seem to remember that someone had come up with an idea that might allow it to work, but no one pushed it as far as coming up with a patch. It's definitely the grammar that is the problem though, specifically constructs like

	char(42) 'literal value here'

If you don't treat the names of these types as reserved, it is darn hard to tell that you're not looking at a function call until you get to the right paren and see a string literal as lookahead ... and postponing the parse decision that long is painful.

			regards, tom lane
Re: [HACKERS] Connection limit and Superuser
Andrew Dunstan [EMAIL PROTECTED] writes:
> Martijn van Oosterhout wrote:
>> Maybe someone should look into enabling slony to not run as a superuser?
> That was my initial reaction to this suggestion. But then I realised that it might well make sense to have a separate connection-limited superuser for Slony purposes (or any other special purpose) alongside an unlimited superuser.

Actually, the real question in my mind is why Slony can't be trusted to use the right number of connections to start with. If you don't trust it that far, what are you doing letting it into your database as superuser to start with?

As for a connection-limited superuser: if you can't do ALTER USER SET on yourself then you aren't a superuser, so any such restriction is illusory anyway.

			regards, tom lane
Re: [HACKERS] Relation locking and relcache load (was Re: Going for all green buildfarm results)
I wrote:
> ... This means the only thing stopping us from taking lock before we invoke relcache is lack of knowledge about the rel's relisshared status.

While digging through all the places that open relcache entries, I've realized that there's another problem, specifically the way that we lock indexes. The current theory is that index_open() takes no lock, and then we establish a lock just for the duration of an index scan. The comments for index_open explain:

 *		Note: we acquire no lock on the index.  A lock is not needed when
 *		simply examining the index reldesc; the index's schema information
 *		is considered to be protected by the lock that the caller had better
 *		be holding on the parent relation.  Some type of lock should be
 *		obtained on the index before physically accessing it, however.
 *		This is handled automatically for most uses by index_beginscan
 *		and index_endscan for scan cases, or by ExecOpenIndices and
 *		ExecCloseIndices for update cases.  Other callers will need to
 *		obtain their own locks.

However, the lionfish failure makes the folly of this approach evident (it was in fact index_open that failed in that example, unless my theory about it is all wrong). If we're going to move to lock-before-open then we've got to fix index_open too.

I envision making index_open just about like heap_open, ie add a lockmode parameter, and then get rid of the separate lock step in index_beginscan. There is one small problem with doing that, which is this code in ExecOpenIndices:

		indexDesc = index_open(indexOid);

		if (indexDesc->rd_am->amconcurrent)
			LockRelation(indexDesc, RowExclusiveLock);
		else
			LockRelation(indexDesc, AccessExclusiveLock);

IOW you need to already have the index open to find out what sort of lock to take on it. I have a modest proposal for fixing that: let's get rid of amconcurrent, so that RowExclusiveLock is always the right lock to take here. All of the currently supported AMs have amconcurrent = true, and so does the proposed bitmap index patch, and given the project's current focus on high concurrent performance I cannot imagine that we'd accept a future patch to add a nonconcurrent index type. So let's just legislate that all AMs have to support concurrent updates (or at least that they can't rely on the main system to protect them from the case).

Any objections?

			regards, tom lane
Re: [HACKERS] Relation locking and relcache load (was Re: Going for all green buildfarm results)
* Tom Lane ([EMAIL PROTECTED]) wrote:
> I think the best solution for this might be to put the responsibility for creating system catalogs' toast tables into the bootstrap phase instead of making initdb do it afterwards. This would be a Good Thing anyway since currently we are incapable of dealing with bootstrap-time insertions of values large enough to need toasting. I'm imagining adding macros to the include/catalog/*.h files along the lines of

Would this make it much more difficult to support user-defined indexes on system catalogs? It looks like we don't support that at the moment, but as we see larger Postgres installations it seems likely we'll need to. I don't really consider myself a very heavy Postgres user but I've got databases w/ 30k entries in pg_class and near 300k in pg_attribute... Those aren't shared but it sounds like you were talking about all of them above anyway.

	Thanks,

		Stephen
Re: [HACKERS] Connection limit and Superuser
On Mon, 2006-07-31 at 09:52 -0400, Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> Martijn van Oosterhout wrote:
>>> Maybe someone should look into enabling slony to not run as a superuser?
>> That was my initial reaction to this suggestion. But then I realised that it might well make sense to have a separate connection-limited superuser for Slony purposes (or any other special purpose) alongside an unlimited superuser.
> Actually, the real question in my mind is why Slony can't be trusted to use the right number of connections to start with. If you don't trust it that far, what are you doing letting it into your database as superuser to start with?

I generally try to apply reasonable restrictions on all activities that take place on my systems unless the machine is dedicated to that task (in which case the limitations are those of the machine). When things go wrong, and they almost always do eventually, these types of restrictions ensure that only the one process grinds to a halt instead of the entire environment.

Cron jobs are another area that is frequently implemented incorrectly. Implementing checks to see whether a job is already running is overlooked often enough that I would like to restrict them as well. This is less important now that roles allow multiple users to take ownership of a relation; fewer jobs need to run as a superuser.

--
[HACKERS] tg_trigtuple not NULL in AFTER STATEMENT triggers?
I've noticed that tg_trigtuple and tg_newtuple aren't cleared to NULL in AFTER STATEMENT triggers. Is that an oversight, or does the code intentionally not bother because trigger functions shouldn't be referencing those members in statement-level triggers anyway, or is there some other reason?

--
Michael Fuhr
Re: [HACKERS] Relation locking and relcache load (was Re: Going for all green buildfarm results)
Stephen Frost [EMAIL PROTECTED] writes:
> * Tom Lane ([EMAIL PROTECTED]) wrote:
>> I think the best solution for this might be to put the responsibility for creating system catalogs' toast tables into the bootstrap phase instead of making initdb do it afterwards.
> Would this make it much more difficult to support user-defined indexes on system catalogs?

AFAICS the problems with that are orthogonal to this. You'll never have user-defined (as in added after initdb) indexes on shared catalogs, because there is no way to update their pg_class descriptions in all databases at once. For non-shared catalogs there's nothing except access permissions stopping you from adding ordinary indexes now. We don't support partial or functional indexes on system catalogs, but the implementation reasons for that are unrelated to what I'm doing.

> It looks like we don't support that at the moment but as we see larger Postgres installations it seems likely we'll need to. I don't really consider myself a very heavy Postgres user but I've got databases w/ 30k entries in pg_class and near 300k in pg_attribute...

And are you seeing any performance issues related to lack of indexes? For the system catalogs we understand the access patterns pretty well (I think), and I thought we pretty much had the right indexes on them already.

			regards, tom lane
Re: [HACKERS] Connection limit and Superuser
Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> That was my initial reaction to this suggestion. But then I realised that it might well make sense to have a separate connection-limited superuser for Slony purposes (or any other special purpose) alongside an unlimited superuser.
> Actually, the real question in my mind is why Slony can't be trusted to use the right number of connections to start with. If you don't trust it that far, what are you doing letting it into your database as superuser to start with?
> As for connection-limited superuser, if you can't do ALTER USER SET on yourself then you aren't a superuser, so any such restriction is illusory anyway.

As a protection against malice, yes. I think Rod was more interested in some protection against stupidity.

Maybe the real answer is that Slony should connect as a non-superuser and call security definer functions for the privileged things it needs to do.

cheers

andrew
Re: [HACKERS] tg_trigtuple not NULL in AFTER STATEMENT triggers?
Michael Fuhr [EMAIL PROTECTED] writes:
> I've noticed that tg_trigtuple and tg_newtuple aren't cleared to NULL in AFTER STATEMENT triggers. Is that an oversight,

Probably. Send a patch?

			regards, tom lane
Re: [HACKERS] [PATCHES] extension for sql update
On Wednesday, 26 July 2006 22:58, Tom Lane wrote:
> The reason people want this syntax is that they expect to be able to write, say,
> UPDATE mytab SET (foo, bar, baz) = (SELECT alpha, beta, gamma FROM othertab WHERE key = mytab.key);

I don't find any derivation in the standard that would permit this. The only things I could find are variations on

    SET (a) = x          -- no parentheses
    SET (a, b) = (x, y)
    SET (a, b) = ROW (x, y)

where x and y are some sort of value expression. I would have expected the sort of thing that you describe, but if you know how to derive that, I'd like to see it.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [HACKERS] Connection limit and Superuser
> As a protection against malice, yes. I think Rod was more interested in some protection against stupidity.
> Maybe the real answer is that Slony should connect as a non-superuser and call security definer functions for the privileged things it needs to do.

Wouldn't that break Slony's ability to connect to older PostgreSQL versions and replicate?

Joshua D. Drake

--
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
             http://www.commandprompt.com/
Re: [HACKERS] Connection limit and Superuser
Joshua D. Drake wrote:
>> Maybe the real answer is that Slony should connect as a non-superuser and call security definer functions for the privileged things it needs to do.
> Wouldn't that break Slony's ability to connect to older postgresql versions and replicate?

I don't know anything of Slony's internals, but I don't see why older versions should matter - Postgres has had security definer functions for every release that Slony supports. Maybe I'm missing something ...

cheers

andrew
Re: [HACKERS] [PATCHES] extension for sql update
On Mon, 2006-07-31 at 17:26 +0200, Peter Eisentraut wrote:
> On Wednesday, 26 July 2006 22:58, Tom Lane wrote:
>> The reason people want this syntax is that they expect to be able to write, say,
>> UPDATE mytab SET (foo, bar, baz) = (SELECT alpha, beta, gamma FROM othertab WHERE key = mytab.key);
> I don't find any derivation in the standard that would permit this. The only things I could find are variations on
>     SET (a) = x          -- no parentheses
>     SET (a, b) = (x, y)
>     SET (a, b) = ROW (x, y)
> where x and y are some sort of value expression. I would have expected the sort of thing that you describe, but if you know how to derive that, I'd like to see it.

I believe <contextually typed row value constructor element list> can be one or more value expressions, which includes a <row value expression>. <row value expression> gives us the <row subquery> option.

For that matter, the below portion of <contextually typed row value constructor> gives us:

    | <left paren> <contextually typed row value constructor element> <comma>
      <contextually typed row value constructor element list> <right paren>

This breaks down into one or more comma-separated row subqueries:

    UPDATE tab SET (...) = ((SELECT foo, bar FROM a), (SELECT bif, baz FROM b));

--
Re: [HACKERS] tg_trigtuple not NULL in AFTER STATEMENT triggers?
On Mon, Jul 31, 2006 at 11:12:14AM -0400, Tom Lane wrote:
> Michael Fuhr [EMAIL PROTECTED] writes:
>> I've noticed that tg_trigtuple and tg_newtuple aren't cleared to NULL in AFTER STATEMENT triggers. Is that an oversight,
> Probably. Send a patch?

Sure. Is the switch in AfterTriggerExecute() around line 2116 in commands/trigger.c close to where I should be looking?

--
Michael Fuhr
Re: [HACKERS] Going for all green buildfarm results
On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> Stefan Kaltenbrunner wrote:
>>> FYI: lionfish just managed to hit that problem again:
>>> http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06
>> The test alter_table, which is in the same parallel group as limit (the failing test), contains these lines:
>> ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1;
>> ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1;
> I bet Alvaro's spotted the problem. ALTER INDEX RENAME doesn't seem to take any lock on the index's parent table, only on the index itself. That means that a query on onek could be trying to read the pg_class entries for onek's indexes concurrently with someone trying to commit a pg_class update to rename an index. If the query manages to visit the new and old versions of the row in that order, and the commit happens between, *neither* of the versions would look valid. MVCC doesn't save us because this is all SnapshotNow.
> Not sure what to do about this. Trying to lock the parent table could easily be a cure-worse-than-the-disease, because it would create deadlock risks (we've already locked the index before we could look up and lock the parent). Thoughts?

The path of least resistance might just be to not run these tests in parallel. The chance of this issue causing problems in the real world seems small.

It doesn't seem that unusual to want to rename an index on a running system, and it certainly doesn't seem like the kind of operation that should pose a problem. So at the very least, we'd need a big fat warning in the docs about how renaming an index could cause other queries in the system to fail, and the error message needs to be improved.

--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
Re: [HACKERS] [PATCHES] extension for sql update
On Sun, Jul 30, 2006 at 08:38:30PM -0400, Rod Taylor wrote: On Sun, 2006-07-30 at 20:20 -0400, Robert Treat wrote: On Thursday 27 July 2006 09:28, Bruce Momjian wrote: Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: Tom Lane wrote: UPDATE mytab SET (foo, bar, baz) = (SELECT alpha, beta, gamma FROM othertab WHERE key = mytab.key); That UPDATE example is interesting because I remember when using Informix that I had to do a separate SELECT statement for each UPDATE column I wanted to update. I didn't realize that you could group columns and assign them from a single select --- clearly that is a powerful syntax we should support some day. No question. The decision at hand is whether we want to look like we support it, when we don't yet. I'd vote not, because I think the main use-case for the row-on-the-left syntax is exactly this, and so I fear people will just get frustrated if they see it in the syntax synopsis and try to use it. I'm not a big fan of implementing partial solutions (remember "left-joins are not implemented" messages :-) way back when), however in my experience with this form of the update command, the primary usage is not to use a subselect to derive the values, but to make it easier to generate sql, using a single I disagree. UPDATE mytab SET (foo, bar, baz) = (SELECT ...) is the specification's way of doing an update with a join. That is its primary purpose. UPDATE ... FROM is a PostgreSQL alternative to the above. An alternative that people have been using without complaint for years (probably because a number of other databases do the same thing). Perhaps a good compromise would be to allow UPDATE ... (SELECT) where it would meet the current requirements for UPDATE ... FROM. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
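Where the multi-column SET form isn't available, the per-column correlated-subquery workaround Bruce remembers from Informix can be sketched as below. This is an illustrative sketch only, run against SQLite through Python so it is self-contained; the table and column names (mytab, othertab, foo, bar, alpha, beta, key) are the hypothetical ones from the thread, not anything from a real schema.

```python
import sqlite3

# Build the two hypothetical tables from the thread.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE mytab (key INTEGER PRIMARY KEY, foo TEXT, bar TEXT)")
cur.execute("CREATE TABLE othertab (key INTEGER PRIMARY KEY, alpha TEXT, beta TEXT)")
cur.execute("INSERT INTO mytab VALUES (1, 'old', 'old')")
cur.execute("INSERT INTO othertab VALUES (1, 'new-alpha', 'new-beta')")

# One correlated subselect per target column -- verbose, but widely portable.
# The EXISTS guard keeps rows without a match from being nulled out.
cur.execute("""
    UPDATE mytab SET
        foo = (SELECT alpha FROM othertab WHERE othertab.key = mytab.key),
        bar = (SELECT beta  FROM othertab WHERE othertab.key = mytab.key)
    WHERE EXISTS (SELECT 1 FROM othertab WHERE othertab.key = mytab.key)
""")
conn.commit()
print(cur.execute("SELECT foo, bar FROM mytab").fetchall())
# [('new-alpha', 'new-beta')]
```

The multi-column form the thread wants would collapse the two subselects into a single scan of othertab; the workaround repeats the lookup once per column.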
Re: [HACKERS] tg_trigtuple not NULL in AFTER STATEMENT triggers?
Michael Fuhr [EMAIL PROTECTED] writes: On Mon, Jul 31, 2006 at 11:12:14AM -0400, Tom Lane wrote: Michael Fuhr [EMAIL PROTECTED] writes: I've noticed that tg_trigtuple and tg_newtuple aren't cleared to NULL in AFTER STATEMENT triggers. Is that an oversight, Probably. Send a patch? Sure. Is the switch in AfterTriggerExecute() around line 2116 in commands/trigger.c close to where I should be looking? Yeah, it looks like some attention needs to be paid to whether ate_oldctid and ate_newctid were supplied, rather than just blindly passing pointers to possibly-uninitialized local structs. Offhand I think you could remove the switch entirely in favor of driving the setup of these fields off the if (ItemPointerIsValid(... tests. regards, tom lane
Re: [HACKERS] Relation locking and relcache load (was Re: Going for all green buildfarm results)
* Tom Lane ([EMAIL PROTECTED]) wrote: Stephen Frost [EMAIL PROTECTED] writes: * Tom Lane ([EMAIL PROTECTED]) wrote: I think the best solution for this might be to put the responsibility for creating system catalogs' toast tables into the bootstrap phase instead of making initdb do it afterwards. Would this make it much more difficult to support user-defined indexes on system catalogs? AFAICS the problems with that are orthogonal to this. You'll never have user-defined (as in added after initdb) indexes on shared catalogs, because there is no way to update their pg_class descriptions in all databases at once. Ok. For non-shared catalogs there's nothing except access permissions stopping you from adding ordinary indexes now. I had thought this might be the case since I had some recollection of indexes on catalogs either being speculated about or suggested on -perform. The error message isn't entirely clear about this fact though: src/backend/catalog/index.c:495 (or so) (errcode(ERRCODE_FEATURE_NOT_SUPPORTED), errmsg("user-defined indexes on system catalog tables are not supported"))); And are you seeing any performance issues related to lack of indexes? Depends on the eye of the beholder to some extent I suppose. For the system catalogs we understand the access patterns pretty well (I think), and I thought we pretty much had the right indexes on them already. The case that I was specifically thinking about was the relowner in pg_class not being indexed. 
tsf=# explain analyze select cl.relname from pg_authid a join pg_class cl on (a.oid = cl.relowner) where a.rolname = 'postgres';

 Hash Join  (cost=2.54..1970.25 rows=383 width=64) (actual time=0.113..77.950 rows=223 loops=1)
   Hash Cond: ("outer".relowner = "inner".oid)
   ->  Seq Scan on pg_class cl  (cost=0.00..1881.59 rows=16459 width=68) (actual time=0.036..46.607 rows=17436 loops=1)
   ->  Hash  (cost=2.54..2.54 rows=1 width=4) (actual time=0.057..0.057 rows=1 loops=1)
         ->  Seq Scan on pg_authid a  (cost=0.00..2.54 rows=1 width=4) (actual time=0.047..0.050 rows=1 loops=1)
               Filter: (rolname = 'postgres'::name)
 Total runtime: 78.358 ms
(7 rows)

It's not exactly *slow* but an index might speed it up. I was trying to create one and couldn't figure out the right incantation to make it happen. 'allow_system_table_mods = true' wasn't working in postgresql.conf (it wouldn't start) for some reason... Other system-catalog queries that I've been a little unhappy about the performance of (though I don't know if indexes would help, so this is really just me complaining) are: the initial table list in ODBC w/ Access (takes *forever* when you have a lot of tables...); schema/table lists in phppgadmin when there are a lot of schemas/tables; information_schema queries (try looking at information_schema.columns for a given table when you've got a lot of tables... over 10x slower than looking at pg_class/pg_attribute directly, 3 seconds vs. 200ms, or so). Thanks, Stephen
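The plan above is a textbook hash join: the planner hashes the single pg_authid row that survives the rolname filter (the small "build" side), then probes that hash table once per pg_class row. A minimal Python sketch of the same strategy, with made-up catalog rows for illustration only:

```python
# Toy stand-ins for the two catalogs in the plan above (data is invented).
pg_authid = [(10, 'postgres'), (11, 'app')]           # (oid, rolname)
pg_class  = [('onek', 10), ('tenk1', 10), ('t', 11)]  # (relname, relowner)

# Build side: hash the rows passing the filter rolname = 'postgres'.
build = {oid for oid, rolname in pg_authid if rolname == 'postgres'}

# Probe side: one O(1) hash lookup per outer row, instead of rescanning
# the inner relation for every outer row as a nested loop would.
result = [relname for relname, relowner in pg_class if relowner in build]
print(result)  # ['onek', 'tenk1']
```

An index on pg_class.relowner would let the planner replace the outer sequential scan with an index scan, which is the speedup Stephen is speculating about.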
Re: [HACKERS] Going for all green buildfarm results
Jim C. Nasby wrote: On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: Stefan Kaltenbrunner wrote: FYI: lionfish just managed to hit that problem again: http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfish&dt=2006-07-29%2023:30:06 The test alter_table, which is on the same parallel group as limit (the failing test), contains these lines: ALTER INDEX onek_unique1 RENAME TO tmp_onek_unique1; ALTER INDEX tmp_onek_unique1 RENAME TO onek_unique1; I bet Alvaro's spotted the problem. ALTER INDEX RENAME doesn't seem to take any lock on the index's parent table, only on the index itself. That means that a query on onek could be trying to read the pg_class entries for onek's indexes concurrently with someone trying to commit a pg_class update to rename an index. If the query manages to visit the new and old versions of the row in that order, and the commit happens between, *neither* of the versions would look valid. MVCC doesn't save us because this is all SnapshotNow. Not sure what to do about this. Trying to lock the parent table could easily be a cure-worse-than-the-disease, because it would create deadlock risks (we've already locked the index before we could look up and lock the parent). Thoughts? The path of least resistance might just be to not run these tests in parallel. The chance of this issue causing problems in the real world seems small. It doesn't seem that unusual to want to rename an index on a running system, and it certainly doesn't seem like the kind of operation that should pose a problem. So at the very least, we'd need a big fat warning in the docs about how renaming an index could cause other queries in the system to fail, and the error message needs to be improved. it is my understanding that Tom is already tackling the underlying issue on a much more general base ... Stefan
Re: [HACKERS] Connection limit and Superuser
[EMAIL PROTECTED] (Andrew Dunstan) writes: Joshua D. Drake wrote: As a protection against malice, yes. I think Rod was more interested in some protection against stupidity. Maybe the real answer is that Slony should connect as a non-superuser and call security definer functions for the privileged things it needs to do. Wouldn't that break Slony's ability to connect to older postgresql versions and replicate? I don't know anything of Slony's internals, but I don't see why older versions should matter - Postgres has had security definer functions for every release that Slony supports. Maybe I'm missing something ... Most of Slony-I's activities don't require superuser access. The usual thing that's running are SYNC events, and those merely require write access to some internal Slony-I tables and write access to the replicated tables on the subscribers. The functions that do need superuser access are (basically) - subscribe set (needs to alter system tables) - execute script (ditto) The trouble is that you in effect need to have that superuser up and ready for action at any time in case it's needed, and it being that needful, we basically use it all the time. Perhaps it's worth looking at shoving the superuser stuff into SECURITY DEFINER functions; that may be worth considering post-1.2.0... -- output = reverse(gro.gultn @ enworbbc) http://cbbrowne.com/info/multiplexor.html Wow! Windows now can do everything using shared library DLLs, just like Multics did back in the 1960s! Maybe someday they'll discover separate processes and pipes, which came out in the 1970s! ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] [PATCHES] Allow commenting of variables in postgresql.conf to -
Zdenek Kotala wrote: I performed some cleanup in my code as well. I reduced some conditions which cannot occur and fixed context validation in the set_config_options function. I hope that it is the final version of our patch. The way I see it, combining a feature change with a code refactoring and random whitespace changes is a pretty optimal way to get your patch rejected. Please submit patches for these items separately. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Connection limit and Superuser
On a fine day, Mon, 2006-07-31 at 09:52, Tom Lane wrote: Andrew Dunstan [EMAIL PROTECTED] writes: Martijn van Oosterhout wrote: Maybe someone should look into enabling slony to not run as a superuser? That was my initial reaction to this suggestion. But then I realised that it might well make sense to have a separate connection-limited superuser for Slony purposes (or any other special purpose) alongside an unlimited superuser. Actually, the real question in my mind is why Slony can't be trusted to use the right number of connections to start with. If you don't trust it that far, what are you doing letting it into your database as superuser to start with? This probably has nothing to do with Slony. One way to shut users out of a PostgreSQL backend is to cut all connections in a way that a smart client notices (maybe by sending keepalives) but the backend does not (it times out after some TCP timeout, which by default is about 2.5 hours). BTW, sometimes this happens by itself in the case of long-lived connections. In such a case the client will likely establish new connection(s), and if the whole process happens many times, then the backend runs out of connections. As for a connection-limited superuser, if you can't do ALTER USER SET on yourself then you aren't a superuser, so any such restriction is illusory anyway. I guess they want protection against accidentally using up all connections, not a way for competing superusers to lock each other out ;) -- Hannu Krosing Database Architect Skype Technologies OÜ Akadeemia tee 21 F, Tallinn, 12618, Estonia Skype me: callto:hkrosing Get Skype for free: http://www.skype.com
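The half-open-connection failure mode Hannu describes (client long gone, backend waiting out a multi-hour TCP timeout) is conventionally mitigated by enabling TCP keepalives on the server-side socket. A hedged sketch in Python; the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT names are Linux-specific tunables and may not exist on other platforms, hence the hasattr guard:

```python
import socket

# Sketch: turn on TCP keepalives so a dead peer is detected in minutes
# rather than after the ~2.5-hour default timeout mentioned above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tunables
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))
```

With settings like these, a connection whose peer has silently vanished is torn down after roughly idle + intvl * cnt seconds, freeing the backend slot.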
Re: [HACKERS] Going for all green buildfarm results
Stefan Kaltenbrunner [EMAIL PROTECTED] writes: Jim C. Nasby wrote: On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote: The path of least resistance might just be to not run these tests in parallel. The chance of this issue causing problems in the real world seems small. It doesn't seem that unusual to want to rename an index on a running system, and it certainly doesn't seem like the kind of operation that should pose a problem. So at the very least, we'd need a big fat warning in the docs about how renaming an index could cause other queries in the system to fail, and the error message needs to be improved. it is my understanding that Tom is already tackling the underlying issue on a much more general base ... Done in HEAD, but we might still wish to think about changing the regression tests in the back branches, else we'll probably continue to see this failure once in a while ... regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
[HACKERS] ERROR: could not open relation with OID 909391158
Hackers, I have been loading 200+ million call records into a new PostgreSQL 8.1.4 install. Everything had been going great until a couple of minutes ago. After the process loads a single file (300k to 500k records), it summarizes the data into a summary table. I have been getting the following error message: ERROR: could not open relation with OID 909391158 I don't have any relations with an OID of 909391158; I checked this with the following query: select * from pg_class where oid = 909391158; I don't know where to go from here. What I have done to move on is rename the summary table to summary_old and create a new (empty) one for now. The process is happy with this. I have searched the logs and there are NO hardware-related errors. I am running a vacuum full verbose on the summary_old table now to see if any errors pop up. Any ideas? Thanks Jim select version() returns: PostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-5)
Re: [HACKERS] Going for all green buildfarm results
Tom Lane wrote: Stefan Kaltenbrunner [EMAIL PROTECTED] writes: Jim C. Nasby wrote: On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote: The path of least resistance might just be to not run these tests in parallel. The chance of this issue causing problems in the real world seems small. It doesn't seem that unusual to want to rename an index on a running system, and it certainly doesn't seem like the kind of operation that should pose a problem. So at the very least, we'd need a big fat warning in the docs about how renaming an index could cause other queries in the system to fail, and the error message needs to be improved. it is my understanding that Tom is already tackling the underlying issue on a much more general base ... Done in HEAD, but we might still wish to think about changing the regression tests in the back branches, else we'll probably continue to see this failure once in a while ... How sure are we that this is the cause of the problem? The feeling I got was this is a good guess. If so, do we want to prevent ourselves getting any further clues in case we're wrong? It's also an interesting case of a (low likelihood) bug which is not fixable on any stable branch. cheers andrew ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Going for all green buildfarm results
Andrew Dunstan wrote: Tom Lane wrote: Stefan Kaltenbrunner [EMAIL PROTECTED] writes: Jim C. Nasby wrote: On Sun, Jul 30, 2006 at 11:44:44AM -0400, Tom Lane wrote: The path of least resistance might just be to not run these tests in parallel. The chance of this issue causing problems in the real world seems small. It doesn't seem that unusual to want to rename an index on a running system, and it certainly doesn't seem like the kind of operation that should pose a problem. So at the very least, we'd need a big fat warning in the docs about how renaming an index could cause other queries in the system to fail, and the error message needs to be improved. it is my understanding that Tom is already tackling the underlying issue on a much more general base ... Done in HEAD, but we might still wish to think about changing the regression tests in the back branches, else we'll probably continue to see this failure once in a while ... How sure are we that this is the cause of the problem? The feeling I got was this is a good guess. If so, do we want to prevent ourselves getting any further clues in case we're wrong? It's also an interesting case of a (low likelihood) bug which is not fixable on any stable branch. Well, I have a lot of trust in Tom - though the main issue is that this bug seems to be very hard to trigger. AFAIK only one box (lionfish) ever managed to hit it, and even there only 2 times out of several hundred builds - I don't suppose we can come up with a testcase that shows the issue more reliably? Stefan
Re: [HACKERS] Going for all green buildfarm results
Stefan Kaltenbrunner wrote: Andrew Dunstan wrote: How sure are we that this is the cause of the problem? The feeling I got was this is a good guess. If so, do we want to prevent ourselves getting any further clues in case we're wrong? It's also an interesting case of a (low likelihood) bug which is not fixable on any stable branch. Well, I have a lot of trust in Tom - though the main issue is that this bug seems to be very hard to trigger. AFAIK only one box (lionfish) ever managed to hit it, and even there only 2 times out of several hundred builds - I don't suppose we can come up with a testcase that shows the issue more reliably? Maybe we could write a suitable test case using Martijn's concurrent testing framework. Or with a pair of custom SQL scripts running under pgbench, and a separate process sending random SIGSTOP/SIGCONT to backends. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
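Alvaro's SIGSTOP/SIGCONT idea can be sketched as a tiny harness: randomly freeze and resume a process so that narrow race windows stay open long enough to hit. This is only a sketch under stated assumptions; a real harness would target postgres backend PIDs, while here a dummy sleep process stands in, and it is POSIX-only:

```python
import os
import random
import signal
import subprocess
import time

# Hedged sketch of the suggestion above: repeatedly pause and resume a
# worker to widen race windows. A dummy 'sleep' stands in for a backend.
proc = subprocess.Popen(["sleep", "2"])
random.seed(0)
for _ in range(5):
    os.kill(proc.pid, signal.SIGSTOP)      # freeze it mid-execution
    time.sleep(random.uniform(0.0, 0.05))  # hold it stopped for a random interval
    os.kill(proc.pid, signal.SIGCONT)      # let it run again
proc.kill()                                # tear the dummy worker down
proc.wait()
```

Run against real backends while the regression tests execute, the random stops should make a timing-dependent failure like the lionfish one far more reproducible.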
Re: [HACKERS] [PATCHES] extension for sql update
Peter Eisentraut [EMAIL PROTECTED] writes: On Wednesday, 26 July 2006 22:58, Tom Lane wrote: The reason people want this syntax is that they expect to be able to write, say, UPDATE mytab SET (foo, bar, baz) = (SELECT alpha, beta, gamma FROM othertab WHERE key = mytab.key); I don't find any derivation in the standard that would permit this. Well, there are two ways to get there. SQL99 does not actually have the syntax with parentheses on the left, but what it does have is SET ROW:

<set clause> ::=
    <update target> <equals operator> <update source>
  | <mutated set clause> <equals operator> <update source>

<update target> ::=
    <object column>
  | ROW
  | <object column> <left bracket or trigraph> <simple value specification> <right bracket or trigraph>

<update source> ::=
    <value expression>
  | <contextually typed value specification>

and you can derive (SELECT ...) from <value expression> via

<value expression> ::= ... | <row value expression>
<row value expression> ::= ... | <row value constructor>
<row value constructor> ::= ... | <row subquery>
<row subquery> ::= <subquery>
<subquery> ::= <left paren> <query expression> <right paren>
<query expression> ::= [ <with clause> ] <query expression body>
<query expression body> ::= <non-join query expression>
<non-join query expression> ::= <non-join query term>
<non-join query term> ::= <non-join query primary>
<non-join query primary> ::= <simple table>
<simple table> ::= <query specification>
<query specification> ::= SELECT [ <set quantifier> ] <select list> <table expression>

Another interesting restriction in SQL99 is

9) If an <update target> specifies ROW, then:
   a) <set clause list> shall consist of exactly one <set clause> SC.

SQL2003 seems to have dropped the ROW syntax entirely, but instead they have

<set clause> ::= <multiple column assignment>
<multiple column assignment> ::= <set target list> <equals operator> <assigned row>
<assigned row> ::= <contextually typed row value expression>

and from there it goes through just like before. regards, tom lane
[HACKERS] Postgres Process in Kernel Mode?
Hello, I'm new to Postgres SQL and I have some questions about the space where the postgres processes run. 1- Can anybody tell me what libs postgres uses to make system calls, for example LIBC? 2- Can anybody tell me whether any postgres process can run in Linux kernel space? 3- Does anybody know of projects that port postgres processes to Linux kernel space? I have read some of the postgres source code, like postgres.c and others, and I found C functions like printf, which are prohibited in Linux kernel applications; in kernel mode we use printk instead. I suppose that postgres runs in user space only. Am I right? Thanks Moises -
Re: [HACKERS] Going for all green buildfarm results
Alvaro Herrera [EMAIL PROTECTED] writes: Maybe we could write a suitable test case using Martijn's concurrent testing framework. The trick is to get process A to commit between the times that process B looks at the new and old versions of the pg_class row (and it has to happen to do so in that order ... although that's not a bad bet given the way btree handles equal keys). I think the reason we've not tracked this down before is that that's a pretty small window. You could force the problem by stopping process B with a debugger breakpoint and then letting A do its thing, but short of something like that you'll never reproduce it with high probability. As far as Andrew's question goes: I have no doubt that this race condition is (or now, was) real and could explain Stefan's failure. It's not impossible that there's some other problem in there, though. If so we will still see the problem from time to time on HEAD, and know that we have more work to do. But I don't think that continuing to see it on the back branches will teach us anything. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
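Tom's description of the window can be modeled in a few lines: a reader visits the new row version first and the old one second, while the writer commits in between; a SnapshotNow-style visibility check ("is the inserting/deleting transaction committed right now?") then rejects both versions. A toy Python model of that interleaving, using events as the stand-in for Tom's debugger breakpoint; this is not PostgreSQL's actual visibility code:

```python
import threading

# Two versions of one pg_class row: the new one inserted by writer A (not
# yet committed) and the old one A is deleting. SnapshotNow semantics:
# visibility is judged against A's commit state at the instant of the visit.
committed = {"A": False}
versions = [
    {"xmin": "A",   "xmax": None},  # new version: visible only once A commits
    {"xmin": "old", "xmax": "A"},   # old version: visible only until A commits
]

def visible_now(v):
    ins_ok = v["xmin"] == "old" or committed.get(v["xmin"], False)
    deleted = v["xmax"] is not None and committed.get(v["xmax"], False)
    return ins_ok and not deleted

seen = []
b_read_new = threading.Event()
a_committed = threading.Event()

def reader():                                  # process B
    seen.append(visible_now(versions[0]))      # new version: A uncommitted -> invisible
    b_read_new.set()
    a_committed.wait()                         # the "breakpoint": A commits here
    seen.append(visible_now(versions[1]))      # old version: now deleted -> invisible

def writer():                                  # process A
    b_read_new.wait()
    committed["A"] = True                      # commit lands between B's two visits
    a_committed.set()

tb, ta = threading.Thread(target=reader), threading.Thread(target=writer)
tb.start(); ta.start(); tb.join(); ta.join()
print(seen)  # [False, False] -- neither version looked valid to B
```

Exactly as in the thread, the reader concludes the row does not exist at all, even though a valid version existed at every instant; only the interleaving order makes both checks fail.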
Re: [HACKERS] Postgres Process in Kernel Mode?
moises wrote: 1- Can anybody tell me what libs postgres uses to make system calls, for example LIBC? libc and a lot of others. 2- Can anybody tell me whether any postgres process can run in Linux kernel space? No. 3- Does anybody know of projects that port postgres processes to Linux kernel space? None that we've heard of lately. (But I remember a guy with a .cu domain asking not long ago; maybe it was you.) I found C functions like printf, which are prohibited in Linux kernel applications; in kernel mode we use printk. That's because Postgres is a user-space program and there are no intentions to change that. I suppose that postgres runs in user space only. Am I right? Yes. If you prefer Spanish, I suggest you subscribe to the pgsql-es-ayuda list. There is at least one Postgres hacker there. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Postgres Process in Kernel Mode?
[EMAIL PROTECTED] (moises) writes:

1- Can anybody tell me what libs postgres uses to make system calls, for example LIBC?

You can easily determine this yourself using ldd. The answer will vary depending on what options you use when compiling it.

2- Can anybody tell me whether any postgres process can run in Linux kernel space?

No, it doesn't.

3- Does anybody know of projects that port postgres processes to Linux kernel space?

I have read some of the postgres source code, like postgres.c and others, and I found C functions like printf; those are prohibited in Linux kernel applications, for example in kernel mode we use printk.

I suppose that postgres runs in user space only. Am I right?

That's correct.

You might want to consider using an email client that doesn't slobber: p class=MsoNormalfont size=2 face=Arialspan lang=EN-GB style='font-size: 10.0pt;font-family:Arial'o:pnbsp;/o:p/span/font/p around everywhere. It's really irritating to have to read around that deteriorata. --
Re: [HACKERS] DTrace enabled build fails
Bruce Momjian wrote: Do we need to add detection logic to catch buggy versions? Instead of adding extra logic, I think documentation is sufficient, since the issue will soon be fixed in the next Solaris update. Regards, -Robert
Re: [HACKERS] Forcing current WAL file to be archived
On a fine day, Tue, 2006-07-25 at 17:05, Simon Riggs wrote: On Tue, 2006-07-25 at 11:53 -0400, Tom Lane wrote: That's fine, but feature freeze is in a week and we don't even have the basic function for manually doing a log file switch. Let's get that done first and then think about automatic switches. Agreed. Simon, did you (or anybody else) manage to complete the patch for adding the (wal_filename, offset) returning function? -- Hannu Krosing Database Architect Skype Technologies OÜ Akadeemia tee 21 F, Tallinn, 12618, Estonia Skype me: callto:hkrosing Get Skype for free: http://www.skype.com
Re: [HACKERS] DTrace enabled build fails
Robert Lor [EMAIL PROTECTED] writes: Bruce Momjian wrote: Do we need to add detection logic to catch buggy versions? Instead of adding extra logic, I think it's sufficient with documentation since the issue will soon be fixed in the next Solaris update. I agree ... it's not like this is a feature aimed at novices. regards, tom lane
Re: [HACKERS] Forcing current WAL file to be archived
On Tue, 2006-08-01 at 00:40 +0300, Hannu Krosing wrote: On a fine day, Tue, 2006-07-25 at 17:05, Simon Riggs wrote: On Tue, 2006-07-25 at 11:53 -0400, Tom Lane wrote: That's fine, but feature freeze is in a week and we don't even have the basic function for manually doing a log file switch. Let's get that done first and then think about automatic switches. Agreed. Simon, did you (or anybody else) manage to complete the patch for adding the (wal_filename, offset) returning function? Just wrapping now. I tried to add archive_timeout also, though am still fiddling with that, so I've taken that back out for now. -- Simon Riggs EnterpriseDB http://www.enterprisedb.com
[HACKERS] trivial script for getting pgsql-committers patches
I found a script I wrote some time ago and had forgotten. I pipe the messages from pgsql-committers to it; it connects to our cvsweb, grabs the patches, and puts them into a single file in /tmp. It's pretty low-tech but it saves me from clicking each link to see what changed where. I attach it in case someone finds it useful. If you have any ideas for improvements, they're welcome. (Code improvements are even more welcome, of course.) It needs LWP::UserAgent. I don't know if that comes with the standard installation. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
[HACKERS] trivial script for getting pgsql-committers patches
I attach it in case someone finds it useful. If you have any ideas for improvements, they're welcome. (Code improvements are even more welcome of course). Too quick to hit send, sorry. Yeah, the name of the script sucks. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.

#!/usr/bin/perl -w

# Daemonize: let the parent exit so the shell gets its prompt back.
$pid = fork;
if ($pid == -1) {
    print "woops ... $!";
    exit;
} elsif ($pid != 0) {
    exit;
}

use LWP::UserAgent;

$suffix = "";
$temp = `mktemp /tmp/patch.XX`;
chomp $temp;
open SAL, ">", $temp or die "$temp: $!";

$agent = LWP::UserAgent->new(keep_alive => 3);

while (<>) {
    if (!defined $subject && /^Subject: \[COMMITTERS\] pgsql: (.*)/) {
        $subject = $1;
    } elsif (m/^Modified/) {
        $suffix = "f=H";
    } elsif (m/^Added/) {
        $suffix = "";
    } elsif (m,\((http://[^)]*)\)$,) {
        # Fetch each cvsweb URL mentioned in the commit message.
        my $url = $1 . $suffix;
        $res = $agent->get($url);
        print SAL $res->content;
    }
}
close SAL;

$subject =~ s/[ _'\/]/_/g;
rename $temp, sprintf("/tmp/patch.%s.html", $subject);
[HACKERS] OSCON fallout - Compressed Annealing optimizer
Robert Hansen did a talk at OSCON on a compressed annealing framework called Djinni: http://sixdemonbag.org/Djinni/ It's a framework to use compressed annealing (a derivative of simulated annealing) for finding approximate solutions to NP-complete problems such as the TSP with time windows. Note that while Djinni is implemented in C++, it already supports embedding via SWIG, has a C wrapper, and is accessible from other languages. And it's BSD-licensed... This has the potential to be an alternative to the present use of GEQO for query optimization in cases of large joins (e.g., those involving large numbers of tables). On the one hand, I'm somewhat suited to looking at this in that I have the operations research background to know what they're talking about. On the other hand, I'm not familiar with the optimizer, nor do I deal with the sorts of cases with so many joins that this would be worthwhile. I've bounced a ToDo entry over to Bruce, and am making sure this is documented here on pgsql.hackers so that a web search can readily find it in the future... -- cbbrowne,@,cbbrowne.com http://www3.sympatico.ca/cbbrowne/ It is far from complete, but it should explain enough that you don't just stare at your sendmail.cf file like a deer staring at an oncoming truck. -- David Charlap
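For readers unfamiliar with the underlying technique: the core loop can be pictured with a plain simulated-annealing sketch on a symmetric TSP. This is a hypothetical minimal illustration, not Djinni's API — compressed annealing additionally applies a growing penalty ("pressure") on constraint violations such as missed time windows, which is omitted here.

```python
import math
import random

def anneal(dist, t0=10.0, cooling=0.995, steps=20000, seed=0):
    """Plain simulated annealing for a symmetric TSP tour.

    `dist` is an n x n distance matrix.  This is an illustrative sketch;
    the compressed-annealing refinement (penalizing time-window
    violations with increasing pressure) is not modeled.
    """
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)

    def length(t):
        # Total cycle length, including the closing edge back to the start.
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    cur = length(tour)
    best, best_len = tour[:], cur
    temp = t0
    for _ in range(steps):
        # Propose a 2-opt style move: reverse a random sub-segment.
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = length(cand)
        # Always accept improvements; accept worse tours with
        # Boltzmann probability, which shrinks as the temperature cools.
        if cand_len < cur or rng.random() < math.exp((cur - cand_len) / temp):
            tour, cur = cand, cand_len
            if cur < best_len:
                best, best_len = tour[:], cur
        temp *= cooling
    return best, best_len
```

For query optimization the "cities" would be relations and the tour cost a plan-cost estimate, which is roughly the substitution GEQO already makes for its genetic search.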
Re: [HACKERS] Going for all green buildfarm results
Tom Lane wrote: As far as Andrew's question goes: I have no doubt that this race condition is (or now, was) real and could explain Stefan's failure. It's not impossible that there's some other problem in there, though. If so we will still see the problem from time to time on HEAD, and know that we have more work to do. But I don't think that continuing to see it on the back branches will teach us anything. Fair enough. cheers andrew
Re: [HACKERS] Forcing current WAL file to be archived
On Tue, 2006-08-01 at 00:40 +0300, Hannu Krosing wrote: On a fine day, Tue, 2006-07-25 at 17:05, Simon Riggs wrote: On Tue, 2006-07-25 at 11:53 -0400, Tom Lane wrote: That's fine, but feature freeze is in a week and we don't even have the basic function for manually doing a log file switch. Let's get that done first and then think about automatic switches. Agreed. So: automatic switching of xlogs. I've written a patch to implement archive_timeout, apart from the infrastructure required to allow the archiver to use LWLocks. If we do this, it will allow the archiver to write to shared memory and log files in particular. People may have a robustness issue with that, so I'd like to check before doing this. As a result, I'm thinking: what's the minimum infrastructure I can get away with? I'll post to -patches what I've got, to further this discussion. -- Simon Riggs EnterpriseDB http://www.enterprisedb.com
[HACKERS] Standby Mode
The restartableRecovery patch introduces the concept of standby_mode, where you define in the recovery.conf file that this server is acting as a log-shipping target. We can extend that concept to a few other useful places. We've discussed a number of times that we can use a script that waits indefinitely for a log file. On reflection, it seems fairly trivial to put this directly into the backend. I've got a number of possible designs: 1. standby_mode tests to see if restore_command fails; if so, it tests for a notification_file, then loops back round to the restore_command again. When a failover occurs, the failure-sensing mechanism writes the notification file and we bring up the standby. 2. standby_timeout - potentially usable in conjunction with archive_timeout. The standby server loops while waiting for the restore_command to work until it gets to standby_timeout seconds, then automatically comes up in standby mode. Sounds great, but possibly slightly less useful since there's no explicit instruction to perform failover. Both of those are very easy to implement, now that I've seen how, and would augment the other functionality recently delivered for 8.2. What does the panel think? Take the red pill, or stick with blue? -- Simon Riggs EnterpriseDB http://www.enterprisedb.com
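Design 1's wait loop is simple enough to sketch as a shell restore_command wrapper. This is a hypothetical illustration of the idea, not the proposed backend implementation: the archive directory layout, the failover.trigger file name, and the polling interval are all assumptions made for the example.

```shell
# Sketch of design 1: keep retrying the segment fetch, and watch for a
# hypothetical notification file written by the failure-sensing mechanism.
# Usage (illustrative): wait_for_wal <archive_dir> <wal_file> <target_path>
wait_for_wal() {
    archive_dir=$1; wal_file=$2; target=$3
    trigger_file="$archive_dir/failover.trigger"   # assumed name

    while true; do
        if [ -f "$archive_dir/$wal_file" ]; then
            cp "$archive_dir/$wal_file" "$target"
            return 0                 # segment restored; recovery continues
        fi
        if [ -f "$trigger_file" ]; then
            return 1                 # failover signal: end recovery
        fi
        sleep 5                      # wait for the archiver to deliver it
    done
}
```

In this sketch a zero exit status means "segment restored" and a non-zero status means "not available", so the appearance of the notification file is what breaks the wait and lets the standby come up.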
Re: [HACKERS] Standby Mode
Seems you completed most of the PITR items. That will make great additions for 8.2. --- Simon Riggs wrote: The restartableRecovery patch introduces the concept of standby_mode, where you define in the recovery.conf file that this server is acting as a log-shipping target. We can extend that concept to a few other useful places. We've discussed a number of times that we can use a script that waits indefinitely for a log file. On reflection, it seems fairly trivial to put this directly into the backend. I've got a number of possible designs: 1. standby_mode tests to see if restore_command fails; if so, it tests for a notification_file, then loops back round to the restore_command again. When a failover occurs, the failure-sensing mechanism writes the notification file and we bring up the standby. 2. standby_timeout - potentially usable in conjunction with archive_timeout. The standby server loops while waiting for the restore_command to work until it gets to standby_timeout seconds, then automatically comes up in standby mode. Sounds great, but possibly slightly less useful since there's no explicit instruction to perform failover. Both of those are very easy to implement, now that I've seen how, and would augment the other functionality recently delivered for 8.2. What does the panel think? Take the red pill, or stick with blue? -- Simon Riggs EnterpriseDB http://www.enterprisedb.com -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Standby Mode
On Mon, 2006-07-31 at 20:34 -0400, Bruce Momjian wrote: Seems you completed most of the PITR items. That will make great additions for 8.2. Diogo's xlogviewer stuff should be ready shortly also (1-3 weeks). Since that's a contrib module how do we stand on having that get into 8.2, even though it isn't going to hit the main feature freeze? I'll be able to spend more time with him on that now. Do we care whether that's part of the release or not? It's a pretty specific tool. -- Simon Riggs EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] Standby Mode
Simon Riggs wrote: On Mon, 2006-07-31 at 20:34 -0400, Bruce Momjian wrote: Seems you completed most of the PITR items. That will make great additions for 8.2. Diogo's xlogviewer stuff should be ready shortly also (1-3 weeks). Since that's a contrib module how do we stand on having that get into 8.2, even though it isn't going to hit the main feature freeze? I think we would allow a /contrib tool during freeze, especially since it is a debugging tool and not something that you use for queries. I'll be able to spend more time with him on that now. Do we care whether that's part of the release or not? It's a pretty specific tool. True. We could put it on pgfoundry and let him make improvements to it during the 8.2 cycle (faster than we can), and then roll it into 8.3. One idea is to put the URL of the pgfoundry project in our documentation. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [PATCHES] Restartable Recovery
On Sun, 2006-07-16 at 20:56 +0100, Simon Riggs wrote: On Sun, 2006-07-16 at 15:33 -0400, Tom Lane wrote: Simon Riggs [EMAIL PROTECTED] writes: On Sun, 2006-07-16 at 12:40 -0400, Tom Lane wrote: A compromise that might be good enough is to add an rmgr routine defined as bool is_idle(void) that tests whether the rmgr has any open state to worry about. Then, recovery checkpoints are done only if all rmgrs say they are idle. Perhaps that should be extended to say whether there are any non-idempotent changes made in the last checkpoint period. That might cover a wider set of potential actions. Perhaps best to call it safe_to_checkpoint(), and not pre-judge what reasons the rmgr might have for not wanting to restart here. You read my mind. If we are only going to do a recovery checkpoint at every Nth checkpoint record, then occasionally having to skip one seems no big problem --- just do it at the first subsequent record that is safe. Got it. I've implemented this for BTree, GIN, GIST using an additional rmgr function: bool rm_safe_restartpoint(void) The functions are actually trivial, assuming I've understood this and how GIST and GIN work for their xlogging. Recovery checkpoints are now renamed restartpoints to avoid confusion with checkpoints. So checkpoints occur during normal processing (only) and restartpoints occur during recovery (only). Updated patch enclosed, which I believe has no conflicts with the other patches on xlog.c just submitted. Much additional testing required, but the underlying concepts are very simple really. Andreas: any further gotchas? 
:-) -- Simon Riggs EnterpriseDB http://www.enterprisedb.com

Index: src/backend/access/gin/ginxlog.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/access/gin/ginxlog.c,v
retrieving revision 1.3
diff -c -r1.3 ginxlog.c
*** src/backend/access/gin/ginxlog.c	14 Jul 2006 14:52:16 -0000	1.3
--- src/backend/access/gin/ginxlog.c	31 Jul 2006 23:51:56 -0000
***************
*** 538,540 ****
--- 538,548 ----
  	MemoryContextDelete(opCtx);
  }
+ 
+ bool
+ gin_safe_restartpoint(void)
+ {
+ 	if (list_length(incomplete_splits) > 0)
+ 		return false;
+ 
+ 	return true;
+ }
Index: src/backend/access/gist/gistxlog.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/access/gist/gistxlog.c,v
retrieving revision 1.22
diff -c -r1.22 gistxlog.c
*** src/backend/access/gist/gistxlog.c	14 Jul 2006 14:52:16 -0000	1.22
--- src/backend/access/gist/gistxlog.c	31 Jul 2006 23:51:57 -0000
***************
*** 818,823 ****
--- 818,831 ----
  	MemoryContextDelete(insertCtx);
  }
+ 
+ bool
+ gist_safe_restartpoint(void)
+ {
+ 	if (list_length(incomplete_inserts) > 0)
+ 		return false;
+ 
+ 	return true;
+ }
  XLogRecData *
  formSplitRdata(RelFileNode node, BlockNumber blkno, bool page_is_leaf,
Index: src/backend/access/nbtree/nbtxlog.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/access/nbtree/nbtxlog.c,v
retrieving revision 1.36
diff -c -r1.36 nbtxlog.c
*** src/backend/access/nbtree/nbtxlog.c	25 Jul 2006 19:13:00 -0000	1.36
--- src/backend/access/nbtree/nbtxlog.c	31 Jul 2006 23:51:58 -0000
***************
*** 794,796 ****
--- 794,805 ----
  	}
  	incomplete_splits = NIL;
  }
+ 
+ bool
+ btree_safe_restartpoint(void)
+ {
+ 	if (list_length(incomplete_splits) > 0)
+ 		return false;
+ 
+ 	return true;
+ }
Index: src/backend/access/transam/rmgr.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/backend/access/transam/rmgr.c,v
retrieving revision 1.23
diff -c -r1.23 rmgr.c
*** src/backend/access/transam/rmgr.c	11 Jul 2006 17:26:58 -0000	1.23
--- src/backend/access/transam/rmgr.c	31 Jul 2006 23:51:58 -0000
***************
*** 23,42 ****
  const RmgrData RmgrTable[RM_MAX_ID + 1] = {
! 	{"XLOG", xlog_redo, xlog_desc, NULL, NULL},
! 	{"Transaction", xact_redo, xact_desc, NULL, NULL},
! 	{"Storage", smgr_redo, smgr_desc, NULL, NULL},
! 	{"CLOG", clog_redo, clog_desc, NULL, NULL},
! 	{"Database", dbase_redo, dbase_desc, NULL, NULL},
! 	{"Tablespace", tblspc_redo, tblspc_desc, NULL, NULL},
! 	{"MultiXact", multixact_redo, multixact_desc, NULL, NULL},
! 	{"Reserved 7", NULL, NULL, NULL, NULL},
! 	{"Reserved 8", NULL, NULL, NULL, NULL},
! 	{"Reserved 9", NULL, NULL, NULL, NULL},
! 	{"Heap", heap_redo, heap_desc, NULL, NULL},
! 	{"Btree", btree_redo, btree_desc, btree_xlog_startup, btree_xlog_cleanup},
! 	{"Hash", hash_redo, hash_desc, NULL, NULL},
! 	{"Gin", gin_redo, gin_desc, gin_xlog_startup, gin_xlog_cleanup},
! 	{"Gist", gist_redo, gist_desc, gist_xlog_startup, gist_xlog_cleanup},
! 	{"Sequence", seq_redo, seq_desc, NULL, NULL}
  };
--- 23,42 ----
  const RmgrData RmgrTable[RM_MAX_ID + 1] = {
! 	{"XLOG", xlog_redo, xlog_desc, NULL, NULL, NULL},
!
Re: [HACKERS] [PATCHES] Restartable Recovery
Nice. I was going to ask if this could make it into 8.2. --- Simon Riggs wrote: On Sun, 2006-07-16 at 20:56 +0100, Simon Riggs wrote: On Sun, 2006-07-16 at 15:33 -0400, Tom Lane wrote: Simon Riggs [EMAIL PROTECTED] writes: On Sun, 2006-07-16 at 12:40 -0400, Tom Lane wrote: A compromise that might be good enough is to add an rmgr routine defined as bool is_idle(void) that tests whether the rmgr has any open state to worry about. Then, recovery checkpoints are done only if all rmgrs say they are idle. Perhaps that should be extended to say whether there are any non-idempotent changes made in the last checkpoint period. That might cover a wider set of potential actions. Perhaps best to call it safe_to_checkpoint(), and not pre-judge what reasons the rmgr might have for not wanting to restart here. You read my mind. If we are only going to do a recovery checkpoint at every Nth checkpoint record, then occasionally having to skip one seems no big problem --- just do it at the first subsequent record that is safe. Got it. I've implemented this for BTree, GIN, GIST using an additional rmgr function: bool rm_safe_restartpoint(void) The functions are actually trivial, assuming I've understood this and how GIST and GIN work for their xlogging. Recovery checkpoints are now renamed restartpoints to avoid confusion with checkpoints. So checkpoints occur during normal processing (only) and restartpoints occur during recovery (only). Updated patch enclosed, which I believe has no conflicts with the other patches on xlog.c just submitted. Much additional testing required, but the underlying concepts are very simple really. Andreas: any further gotchas? :-) -- Simon Riggs EnterpriseDB http://www.enterprisedb.com [ Attachment, skipping... ] -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Standby Mode
Simon, Diogo's xlogviewer stuff should be ready shortly also (1-3 weeks). Since that's a contrib module how do we stand on having that get into 8.2, even though it isn't going to hit the main feature freeze? This is why I was asking for status reports on pgsql-students 10 days ago. I guess the question is, will it be *production-ready* in 3 weeks, or just ready to be tested? -- Josh Berkus PostgreSQL @ Sun San Francisco
Re: [HACKERS] Standby Mode
Josh Berkus wrote: Simon, Diogo's xlogviewer stuff should be ready shortly also (1-3 weeks). Since that's a contrib module how do we stand on having that get into 8.2, even though it isn't going to hit the main feature freeze? This is why I was asking for status reports on pgsql-students 10 days ago. I guess the question is, will it be *production-ready* in 3 weeks, or just ready to be tested? At this point we are making no promises that it will be in 8.2. It is possible if we like it in the future, and see no downside, but at this point, just assume it will not be in 8.2. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Improvements to Maintenance and External Projects documentation.
Bruce Momjian wrote: Log Message: --- Improvements to Maintenance and External Projects documentation. Joshua D. Drake Robert Treat Did you apply it just like it was in the pgpatches queue, or did you editorialize? I was considering editorializing it a bit and applying ... On the other hand, if I go and change it someone will have to check my grammar after I do. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] [COMMITTERS] pgsql: Improvements to Maintenance and External
Alvaro Herrera wrote: Bruce Momjian wrote: Log Message: --- Improvements to Maintenance and External Projects documentation. Joshua D. Drake Robert Treat Did you apply it just like it was in the pgpatches queue, or did you editorialize? I was considering editorializing it a bit and applying ... On the other hand, if I go and change it someone will have to check my grammar after I do. You apply changes and I will look over your CVS diffs, OK? -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [COMMITTERS] pgsql: Improvements to Maintenance and External Projects documentation.
Bruce Momjian wrote: Alvaro Herrera wrote: Bruce Momjian wrote: Log Message: --- Improvements to Maintenance and External Projects documentation. Joshua D. Drake Robert Treat Did you apply it just like it was in the pgpatches queue, or did you editorialize? I was considering editorializing it a bit and applying ... On the other hand, if I go and change it someone will have to check my grammar after I do. You apply changes and I will look over your CVS diffs, OK? Sure, thanks. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support