Re: [HACKERS] Split up the wiki TODO page?
Joshua D. Drake wrote: Tom Lane wrote: snip I think the wiki TODO needs to be broken into multiple pages. How can we go about that? I would think the easiest thing to do would be to break up the page into categories, either by difficulty or by type of fix (administrative, planner, etc...) +1 to split it by current chapters (administrative, planner ...). Zdenek -- Zdenek Kotala Sun Microsystems Prague, Czech Republic http://sun.com/postgresql -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Split up the wiki TODO page?
Zdenek Kotala wrote: Joshua D. Drake wrote: Tom Lane wrote: snip I think the wiki TODO needs to be broken into multiple pages. How can we go about that? I would think the easiest thing to do would be to break up the page into categories, either by difficulty or by type of fix (administrative, planner, etc...) +1 to split it by current chapters (administrative, planner ...). And maybe add a special TODO page called Testing for tracking ideas on how to improve the regression tests, buildfarm and so on. Zdenek -- Zdenek Kotala Sun Microsystems Prague, Czech Republic http://sun.com/postgresql -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Split up the wiki TODO page?
On Tue, 26 Aug 2008, Tom Lane wrote: WARNING: This page is 106 kilobytes long; some browsers may have problems editing pages approaching or longer than 32kb. Hmm ... my browser isn't failing, but I do seem to detect a certain lack of snappiness to the edits. There haven't been any popular browsers with the 32kb limit around since circa 2002. See http://en.wikipedia.org/wiki/Wikipedia:Article_size#Web_browsers_which_have_problems_with_long_articles for details. The practical limit for any recent browser is 400KB, almost 4X as large as the article is now. For comparison, the longest page on Wikipedia proper is http://en.wikipedia.org/wiki/Line_of_succession_to_the_British_throne at 362kb. There has certainly been a recent flurry of activity as this page has been tweaked to meet pent-up demand for an easy-to-edit TODO list, but I have my doubts that the real limit here will be hit anytime soon. The usual way to get better responsiveness when editing larger documents like this one is to edit one section at a time, using the [edit] buttons on the right of each section rather than the one at the top. If you're not doing that already, it might help you out. Not arguing against breaking it into sections (don't really care either way), just pointing out that it may not actually be necessary. -- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Auto-explain patch
Hi, I'm very interested in the auto-explain feature. Are there any plans to re-add it in the next commit-fest? Dean Rasheed [EMAIL PROTECTED] wrote: Please do not export ExplainState --- that's an internal matter for explain.c. Export some wrapper function with a cleaner API than explain_outNode, instead. OK, that's much neater. How about the attached patch? I moved the initialization of ExplainState and the explain_outNode call into a new function, ExplainOneResult. It receives executor nodes and returns the plan tree as text. I think it'd be better to implement the auto-explain feature not as a core feature but as an extension, because various users have various desires for the feature. We could write a small extension module using hooks and the ExplainOneResult function. If we include the extension as a contrib module, users can modify it and install a customized version of auto-explain. Regards, --- ITAGAKI Takahiro NTT Open Source Software Center export_explain.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Auto-explain patch
On Tue, 2008-08-26 at 19:24 +0900, ITAGAKI Takahiro wrote: I'm very interested in the auto-explain feature. Me too, though I must apologise that I've had no further time to review this. -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] gsoc, oprrest function for text search take 2
On Thu, 2008-08-14 at 22:27 +0200, Jan Urbański wrote: Jan Urbański wrote: + * ts_selfuncs.c Not sure why this is in its own file, but if it must be, could we please put it in a file called selfuncs_ts.c so it is similar to the existing filename? -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] gsoc, oprrest function for text search take 2
Simon Riggs wrote: On Thu, 2008-08-14 at 22:27 +0200, Jan Urbański wrote: Jan Urbański wrote: + * ts_selfuncs.c Not sure why this is in its own file I couldn't decide where to put it, so I came up with this. put it in a file called selfuncs_ts.c so it is similar to the existing filename? I followed the pattern of ts_parse.c, ts_utils.c and so on. Also, I see geo_selfuncs.c. No big deal, though, I can move it. Cheers, Jan -- Jan Urbanski GPG key ID: E583D7D2 ouden estin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Implementing cost limit/delays for insert/delete/update/select
On Mon, 2008-08-25 at 22:39 +0200, Peter Schuller wrote: Does this sound vaguely sensible? Is there an obvious show-stopper I am missing? This was a well-structured proposal. The main problem is where you put the delay_point() calls. If you put them at the top of the executor then you will get a delay proportional to the number of rows retrieved. For many queries, such as count(*), this might be just one row, yet the query may have run for hours. There's no point having a priority scheme if it doesn't apply to all queries equally. If you put them at each call of each node then you will get an unacceptable overhead, as Tom suggests. Not sure what to suggest, if anything, apart from just writing your own delay() function and using it in your query. -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Mon, Aug 25, 2008 at 1:07 PM, Dave Cramer [EMAIL PROTECTED] wrote: On 25-Aug-08, at 10:43 AM, Alvaro Herrera wrote: Dave Cramer wrote: Well, I go the extra mile and kill any remaining autovac procs Here are the logs 2008-08-25 04:00:01 EDT [32276] LOG: autovacuum launcher shutting down 2008-08-25 04:00:01 EDT [20526] LOG: autovacuum launcher started What did you SIGHUP, the launcher or the postmaster? You need the latter. The launcher should exit automatically at that time. No, I am HUP'ing the postmaster and then subsequently killing any autovacuums still around, which may be the problem. I may be killing the launcher prematurely. I'll try again tonight. Ok, here are the logs from last night 2008-08-26 04:00:02 EDT [25407] LOG: received SIGHUP, reloading configuration files 2008-08-26 04:00:02 EDT [22649] LOG: autovacuum launcher shutting down 2008-08-26 04:00:02 EDT [30438] LOG: autovacuum launcher started You can see the SIGHUP, the launcher being shut down, and it starting right back up again ??? Is this expected behaviour ? Dave
Re: [HACKERS] pg_dump roles support
Hello, Stephen Frost wrote: As I discuss above, it'd be really nice to have a --role or similar option to ask pg_dump to set role to a particular user before dumping the database. I created a patch to set the role to a specified name just after the db connection. Please review it for possible upstream inclusion. Regards, Laszlo Benedek

--- postgresql-8.3.1.orig/src/bin/pg_dump/pg_dump.c	2008-01-30 19:35:55.0 +0100
+++ postgresql-8.3.1/src/bin/pg_dump/pg_dump.c	2008-08-26 12:26:56.0 +0200
@@ -208,6 +208,7 @@
 	const char *pgport = NULL;
 	const char *username = NULL;
 	const char *dumpencoding = NULL;
+	const char *pgrole = NULL;
 	const char *std_strings;
 	bool		oids = false;
 	TableInfo  *tblinfo;
@@ -258,6 +259,7 @@
 		{"no-acl", no_argument, NULL, 'x'},
 		{"compress", required_argument, NULL, 'Z'},
 		{"encoding", required_argument, NULL, 'E'},
+		{"role", required_argument, NULL, 'r'},
 		{"help", no_argument, NULL, '?'},
 		{"version", no_argument, NULL, 'V'},
@@ -302,7 +304,7 @@
 		}
 	}

-	while ((c = getopt_long(argc, argv, "abcCdDE:f:F:h:in:N:oOp:RsS:t:T:U:vWxX:Z:",
+	while ((c = getopt_long(argc, argv, "abcCdDE:f:F:h:in:N:oOp:r:RsS:t:T:U:vWxX:Z:",
 							long_options, &optindex)) != -1)
 	{
 		switch (c)
@@ -374,6 +376,10 @@
 				pgport = optarg;
 				break;

+			case 'r':			/* role */
+				pgrole = optarg;
+				break;
+
 			case 'R':
 				/* no-op, still accepted for backwards compatibility */
 				break;
@@ -539,6 +545,18 @@
 			exit(1);
 		}
 	}
+
+	/* Set the role if requested */
+	if (pgrole)
+	{
+		PQExpBuffer roleQry = createPQExpBuffer();
+		appendPQExpBuffer(roleQry, "SET ROLE TO %s;\n", fmtId(pgrole));
+		PGresult   *res = PQexec(g_conn, roleQry->data);
+		check_sql_result(res, g_conn, roleQry->data, PGRES_COMMAND_OK);
+
+		PQclear(res);
+		destroyPQExpBuffer(roleQry);
+	}

 	/*
 	 * Get the active encoding and the standard_conforming_strings setting, so
@@ -771,6 +789,8 @@
 	printf(_("  --use-set-session-authorization\n"
 			 "                           use SESSION AUTHORIZATION commands instead of\n"
 			 "                           ALTER OWNER commands to set ownership\n"));
+	printf(_("  -r, --role               set role before dump\n"));
+
 	printf(_("\nConnection options:\n"));
 	printf(_("  -h, --host=HOSTNAME      database server host or socket directory\n"));

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: Ok, here are the logs from last night 2008-08-26 04:00:02 EDT [25407] LOG: received SIGHUP, reloading configuration files 2008-08-26 04:00:02 EDT [22649] LOG: autovacuum launcher shutting down 2008-08-26 04:00:02 EDT [30438] LOG: autovacuum launcher started you can see the SIGHUP, the launcher being shut down, and starting right back up again ??? is this expected behaviour ? Certainly not, and that's not what I see here either. I assume process 25407 is (was) the postmaster, yes? If you show autovacuum, is it on? -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 9:37 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: Ok, here are the logs from last night 2008-08-26 04:00:02 EDT [25407] LOG: received SIGHUP, reloading configuration files 2008-08-26 04:00:02 EDT [22649] LOG: autovacuum launcher shutting down 2008-08-26 04:00:02 EDT [30438] LOG: autovacuum launcher started you can see the SIGHUP, the launcher being shut down, and starting right back up again ??? is this expected behaviour ? Certainly not, and that's not what I see here either. I assume process 25407 is (was) the postmaster, yes? If you show autovacuum, is it on? Yes that was the postmaster, and I did check to see if autovacuum was on, and it was not. Dave -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 9:59 AM, Dave Cramer [EMAIL PROTECTED] wrote: On Tue, Aug 26, 2008 at 9:37 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: Ok, here are the logs from last night 2008-08-26 04:00:02 EDT [25407] LOG: received SIGHUP, reloading configuration files 2008-08-26 04:00:02 EDT [22649] LOG: autovacuum launcher shutting down 2008-08-26 04:00:02 EDT [30438] LOG: autovacuum launcher started you can see the SIGHUP, the launcher being shut down, and starting right back up again ??? is this expected behaviour ? Certainly not, and that's not what I see here either. I assume process 25407 is (was) the postmaster, yes? If you show autovacuum, is it on? Yes that was the postmaster, and I did check to see if autovacuum was on, and it was not. Dave So where do we go from here ?
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: On Tue, Aug 26, 2008 at 9:59 AM, Dave Cramer [EMAIL PROTECTED] wrote: On Tue, Aug 26, 2008 at 9:37 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Certainly not, and that's not what I see here either. I assume process 25407 is (was) the postmaster, yes? If you show autovacuum, is it on? Yes that was the postmaster, and I did check to see if autovacuum was on, and it was not. So where do we go from here ? The only possible explanation for this behavior is that somebody is signalling the postmaster due to Xid wraparound issues. This is keyed on some GUC vars -- perhaps you have autovacuum_freeze_max_age set to an insane value? varsup.c line 246:

	/*
	 * We'll start trying to force autovacuums when oldest_datfrozenxid gets
	 * to be more than autovacuum_freeze_max_age transactions old.
	 *
	 * Note: guc.c ensures that autovacuum_freeze_max_age is in a sane range,
	 * so that xidVacLimit will be well before xidWarnLimit.
	 *
	 * [...]
	 */
	xidVacLimit = oldest_datfrozenxid + autovacuum_freeze_max_age;
	...
	if (TransactionIdFollowsOrEquals(curXid, xidVacLimit) &&
		IsUnderPostmaster)
		SendPostmasterSignal(PMSIGNAL_START_AUTOVAC_LAUNCHER);

However, I think that in allowed configurations you should also receive these warnings:

	/* Give an immediate warning if past the wrap warn point */
	if (TransactionIdFollowsOrEquals(curXid, xidWarnLimit))
		ereport(WARNING,
				(errmsg("database \"%s\" must be vacuumed within %u transactions",
						NameStr(*oldest_datname),
						xidWrapLimit - curXid),
				 errhint("To avoid a database shutdown, execute a full-database VACUUM in \"%s\".",
						 NameStr(*oldest_datname))));

-- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 10:56 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: On Tue, Aug 26, 2008 at 9:59 AM, Dave Cramer [EMAIL PROTECTED] wrote: On Tue, Aug 26, 2008 at 9:37 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Certainly not, and that's not what I see here either. I assume process 25407 is (was) the postmaster, yes? If you show autovacuum, is it on? Yes that was the postmaster, and I did check to see if autovacuum was on, and it was not. So where do we go from here ? The only possible explanation for this behavior is that somebody is signalling the postmaster due to Xid wraparound issues. This is keyed on some GUC vars -- Perhaps you have autovacuum_freeze_max_age set to an insane value? Doesn't appear to be insane ? #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: On Tue, Aug 26, 2008 at 10:56 AM, Alvaro Herrera [EMAIL PROTECTED] The only possible explanation for this behavior is that somebody is signalling the postmaster due to Xid wraparound issues. This is keyed on some GUC vars -- Perhaps you have autovacuum_freeze_max_age set to an insane value? Doesn't appear to be insane ? #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum Not only sane, but also the default ;-) What's the max age(pg_database.datfrozenxid)? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 11:41 AM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: On Tue, Aug 26, 2008 at 10:56 AM, Alvaro Herrera [EMAIL PROTECTED] The only possible explanation for this behavior is that somebody is signalling the postmaster due to Xid wraparound issues. This is keyed on some GUC vars -- Perhaps you have autovacuum_freeze_max_age set to an insane value? Doesn't appear to be insane ? #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum Not only sane, but also the default ;-) What's the max age(pg_database.datfrozenxid)?

select datfrozenxid from pg_database ;
 datfrozenxid
--------------
    201850617
    101850961
     86039359
     21522712

-- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 11:51 AM, Dave Cramer [EMAIL PROTECTED] wrote: snip this code in autovacuum.c looks like it might be interesting

		if (AutoVacuumShmem->av_signal[AutoVacForkFailed])
		{
			/*
			 * If the postmaster failed to start a new worker, we sleep
			 * for a little while and resend the signal.  The new worker's
			 * state is still in memory, so this is sufficient.  After
			 * that, we restart the main loop.
			 *
			 * XXX should we put a limit to the number of times we retry?
			 * I don't think it makes much sense, because a future start
			 * of a worker will continue to fail in the same way.
			 */
			AutoVacuumShmem->av_signal[AutoVacForkFailed] = false;
			pg_usleep(100000L);	/* 100ms */
			SendPostmasterSignal(PMSIGNAL_START_AUTOVAC_WORKER);
			continue;

Do these signals get cleaned up on a reload ? Dave
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: On Tue, Aug 26, 2008 at 11:41 AM, Alvaro Herrera [EMAIL PROTECTED] What's the max age(pg_database.datfrozenxid)?

select datfrozenxid from pg_database ;
 datfrozenxid
--------------
    201850617
    101850961
     86039359
     21522712

Well, the first one is over the limit, isn't it? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote:

	AutoVacuumShmem->av_signal[AutoVacForkFailed] = false;
	pg_usleep(100000L);	/* 100ms */
	SendPostmasterSignal(PMSIGNAL_START_AUTOVAC_WORKER);
	continue;

Do these signals get cleaned up on a reload ? Well, not on a reload specifically, but this signal is of prompt response (i.e. the postmaster acts immediately on it). See CheckPostmasterSignal. Also, note that this code starts a worker, not the launcher which is what you're seeing. The signal you're looking for is PMSIGNAL_START_AUTOVAC_LAUNCHER (see the varsup.c code in the vicinity of what I posted earlier). The postmaster response is to set start_autovac_launcher (see sigusr1_handler in postmaster.c) and when this is seen set, the launcher is started (see ServerLoop in postmaster.c). However the bit you're interested in is *why* the signal is being sent, which is what the freeze limits determine. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 12:10 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: snip Well, not on a reload specifically, but this signal is of prompt response (i.e. the postmaster acts immediately on it). See CheckPostmasterSignal. Also, note that this code starts a worker, not the launcher which is what you're seeing. Ok, back to the why: it turns out template0 is the culprit. Why is autovac not vacuuming this ? Dave
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: Ok, back to why turns out template0 is the culprit, why is autovac not vacuuming this ? Hmm ... template0 is not supposed to need vacuuming, because it is frozen ... is it marked with datallowconn=false? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 12:21 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: Ok, back to why turns out template0 is the culprit, why is autovac not vacuuming this ? Hmm ... template0 is not supposed to need vacuuming, because it is frozen ... is it marked with datallowconn=false? Yes

select * from pg_database where datname='template0';
  datname  | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datfrozenxid | dattablespace | datconfig |               datacl
-----------+--------+----------+---------------+--------------+--------------+---------------+--------------+---------------+-----------+-------------------------------------
 template0 |     10 |        6 | t             | f            |           -1 |         11510 |    201850617 |          1663 |           | {=c/postgres,postgres=CTc/postgres}

So how to fix ? -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] gsoc, oprrest function for text search take 2
Jan Urbański [EMAIL PROTECTED] writes: Simon Riggs wrote: put it in a file called selfuncs_ts.c so it is similar to the existing filename? I followed the pattern of ts_parse.c, ts_utils.c and so on. Also, I see geo_selfuncs.c. No big deal, though, I can move it. Given the precedent of geo_selfuncs.c, I think you were right the first time. A more interesting question is whether it should just get folded into selfuncs.c ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Alvaro Herrera [EMAIL PROTECTED] writes: Dave Cramer wrote: turns out template0 is the culprit, why is autovac not vacuuming this ? Hmm ... template0 is not supposed to need vacuuming, because it is frozen ... is it marked with datallowconn=false? 8.3's autovac doesn't care about that, does it? Seems like the next step is to enable logging of autovac's decision-making. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Split up the wiki TODO page?
Greg Smith [EMAIL PROTECTED] writes: On Tue, 26 Aug 2008, Tom Lane wrote: WARNING: This page is 106 kilobytes long; some browsers may have problems editing pages approaching or longer than 32kb. Hmm ... my browser isn't failing, but I do seem to detect a certain lack of snappiness to the edits. The practical limit for any recent browser is 400KB, almost 4X as large as the article is now. Okay... There's certainly been a recent flurry of activity as this page has been tweaked to meet pent up demand for an easy to edit TODO list, Certainly true. Okay, let's leave it alone for a little while and see if the growth curve flattens out. It'll certainly be easiest to manage if it can stay a single page. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: snip So how to fix ? I think I see the problem -- vac_truncate_clog is not ignoring these databases when passing the new frozen value to SetTransactionIdLimit.

	/*
	 * Scan pg_database to compute the minimum datfrozenxid
	 *
	 * Note: we need not worry about a race condition with new entries being
	 * inserted by CREATE DATABASE.  Any such entry will have a copy of some
	 * existing DB's datfrozenxid, and that source DB cannot be ours because
	 * of the interlock against copying a DB containing an active backend.
	 * Hence the new entry will not reduce the minimum.  Also, if two VACUUMs
	 * concurrently modify the datfrozenxid's of different databases, the
	 * worst possible outcome is that pg_clog is not truncated as aggressively
	 * as it could be.
	 */
	relation = heap_open(DatabaseRelationId, AccessShareLock);

	scan = heap_beginscan(relation, SnapshotNow, 0, NULL);

	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
	{
		Form_pg_database dbform = (Form_pg_database) GETSTRUCT(tuple);

		Assert(TransactionIdIsNormal(dbform->datfrozenxid));

		if (TransactionIdPrecedes(myXID, dbform->datfrozenxid))
			frozenAlreadyWrapped = true;
		else if (TransactionIdPrecedes(dbform->datfrozenxid, frozenXID))
		{
			frozenXID = dbform->datfrozenxid;
			namecpy(&oldest_datname, &dbform->datname);
		}
	}

	...

	/*
	 * Update the wrap limit for GetNewTransactionId.  Note: this function
	 * will also signal the postmaster for an(other) autovac cycle if needed.
	 */
	SetTransactionIdLimit(frozenXID, &oldest_datname);

If it doesn't ignore them, then it should be properly vacuuming template0 as any other database. We've changed autovac's behavior on this area back and forth so I may be misremembering what's our rationale du jour. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 12:50 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: snip If it doesn't ignore them, then it should be properly vacuuming template0 as any other database. We've changed autovac's behavior on this area back and forth so I may be misremembering what's our rationale du jour. Well, I'm willing to help debug this, however this is a busy production database and I need to be able to turn it off for a few hours a day. Would changing autovacuum_freeze_max_age be a solution ? Dave -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Alvaro Herrera [EMAIL PROTECTED] writes: If it doesn't ignore them, then it should be properly vacuuming template0 as any other database. We've changed autovac's behavior on this area back and forth so I may be misremembering what's our rationale du jour. AFAICS, the only way in which current autovac treats !datallowconn databases specially is this test in do_autovacuum:

	if (dbForm->datistemplate || !dbForm->datallowconn)
		default_freeze_min_age = 0;
	else
		default_freeze_min_age = vacuum_freeze_min_age;

Perhaps there's something wrong with the idea of setting freeze_min_age to zero? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Split up the wiki TODO page?
Tom Lane wrote: Certainly true. Okay, let's leave it alone for a little while and see if the growth curve flattens out. It'll certainly be easiest to manage if it can stay a single page. FWIW most of the growth occurred when I changed the archive links to include the message subject. In any case, we already have some TODO items split -- for example the XML_Todo file, Todo:Collate (these need to be renamed), etc. I'm not sure if it's best to merge them back into Todo, or to separate the few items in Todo into those other files and add a direct reference to them. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Split up the wiki TODO page?
On Wed, Aug 27, 2008 at 2:48 AM, Tom Lane [EMAIL PROTECTED] wrote: Certainly true. Okay, let's leave it alone for a little while and see if the growth curve flattens out. It'll certainly be easiest to manage if it can stay a single page. Apart from the management aspect (which is very much a valid concern), others have expressed a desire to be able to easily search the list for easy Todo items. We got that working by adding the [E] tags to those items, and it's now very convenient to search for them. If we split the page up into subpages for each section, that feature would be sunk. The size of the page really isn't something we should be worrying about. As Greg points out, we have the usual wiki per-section edit capability, so in practice we will almost never need to edit the whole page at once. The only reasons for doing so are to perform global changes, or alter the intro text. All day-to-day maintenance of the list, such as adding new items, marking items as done, etc. should be done via the per-section [edit] links. Cheers, BJ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, 26 Aug 2008 12:58:59 -0400 Dave Cramer [EMAIL PROTECTED] wrote: Well, I'm willing to help debug this, however this is a busy production database and I need to be able to turn it off for a few hours a day. Would changing autovacuum_freeze_max_age be a solution ? Populate the table pg_autovacuum with all your relations and the defaults from the postgresql.conf. Then set enabled to FALSE on all the tuples. When you are ready to turn autovacuum back on, set it to TRUE. Joshua D. Drake -- The PostgreSQL Company since 1997: http://www.commandprompt.com/ PostgreSQL Community Conference: http://www.postgresqlconference.org/ United States PostgreSQL Association: http://www.postgresql.us/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
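A sketch of that workaround in SQL. Hedged: the ten-column row shape below assumes the 8.3-era pg_autovacuum catalog (earlier releases have fewer columns, so check `\d pg_autovacuum` on your server first); -1 in a pg_autovacuum column means "fall back to the postgresql.conf default", matching Joshua's description.

```sql
-- Disable autovacuum table-by-table: one pg_autovacuum row per plain
-- relation, enabled = false, every other setting left at -1 so the
-- postgresql.conf defaults still apply.
INSERT INTO pg_autovacuum
SELECT c.oid, false, -1, -1, -1, -1, -1, -1, -1, -1
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');

-- When the quiet hours are over:
UPDATE pg_autovacuum SET enabled = true;
```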
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: Well, I'm willing to help debug this, however this is a busy production database and I need to be able to turn it off for a few hours a day. Would changing autovacuum_freeze_max_age be a solution ? Yes. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Split up the wiki TODO page?
Brendan Jurd [EMAIL PROTECTED] writes: The size of the page really isn't something we should be worrying about. As Greg points out, we have the usual wiki per-section edit capability, so in practice we will almost never need to edit the whole page at once. The only reasons for doing so are to perform global changes, or alter the intro text. All day-to-day maintenance of the list, such as adding new items, marking items as done, etc. should be done via the per-section [edit] links. Hm, I'd never noticed those --- have they been there long? Maybe the above policy should be stated somewhere at the top of the page? regards, tom lane
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: If it doesn't ignore them, then it should be properly vacuuming template0 as any other database. We've changed autovac's behavior on this area back and forth so I may be misremembering what's our rationale du jour. AFAICS, the only way in which current autovac treats !datallowconn databases specially is this test in do_autovacuum: if (dbForm->datistemplate || !dbForm->datallowconn) default_freeze_min_age = 0; else default_freeze_min_age = vacuum_freeze_min_age; Perhaps there's something wrong with the idea of setting freeze_min_age to zero? Nope, AFAICS it's harmless; what it means is that on those databases, all tuples will be frozen immediately. I'll try to reproduce the problem here. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Alvaro Herrera [EMAIL PROTECTED] writes: Dave Cramer wrote: Well, I'm willing to help debug this, however this is a busy production database and I need to be able to turn it off for a few hours a day. Would changing autovacuum_freeze_max_age be a solution ? Yes. Could we first see a cycle of autovac log output with log_autovacuum_min_duration = 0? Otherwise we're not going to get closer to understanding why it's not cleaning up template0 for you. regards, tom lane
Re: [HACKERS] Split up the wiki TODO page?
On Wed, Aug 27, 2008 at 3:31 AM, Tom Lane [EMAIL PROTECTED] wrote: Brendan Jurd [EMAIL PROTECTED] writes: The size of the page really isn't something we should be worrying about. As Greg points out, we have the usual wiki per-section edit capability, so in practice we will almost never need to edit the whole page at once. The only reasons for doing so are to perform global changes, or alter the intro text. All day-to-day maintenance of the list, such as adding new items, marking items as done, etc. should be done via the per-section [edit] links. Hm, I'd never noticed those --- have they been there long? It's a standard mediawiki feature. We had some challenges getting them to cooperate with our nicely indented subsections, but they've certainly been there. Maybe the above policy should be stated somewhere at the top of the page? Yeah. I'm starting to think we could do with adding a Help page for the Todo, analogous to http://wiki.postgresql.org/wiki/CommitFest:Help. The Help page would explain how to take care of the various administrative tasks. I'd rather not clutter up the actual Todo page with that sort of thing; it gets in the way of the people who are just looking to view the list. I'll go write it up now. Cheers, BJ
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 1:33 PM, Tom Lane [EMAIL PROTECTED] wrote: Alvaro Herrera [EMAIL PROTECTED] writes: Dave Cramer wrote: Well, I'm willing to help debug this, however this is a busy production database and I need to be able to turn it off for a few hours a day. Would changing autovacuum_freeze_max_age be a solution ? Yes. Could we first see a cycle of autovac log output with log_autovacuum_min_duration = 0? Otherwise we're not going to get closer to understanding why it's not cleaning up template0 for you. I actually set that last time I restarted. There's nothing particularly interesting there. Is there another log GUC that needs to be tweaked to get more output ? Dave
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: On Tue, Aug 26, 2008 at 1:33 PM, Tom Lane [EMAIL PROTECTED] wrote: Could we first see a cycle of autovac log output with log_autovacuum_min_duration = 0? Otherwise we're not going to get closer to understanding why it's not cleaning up template0 for you. I actually set that last time I restarted. There's nothing particularly interesting there. Is there another log GUC that needs to be tweaked to get more output ? My guess is that autovacuum is skipping the database for some reason, so there's no log entry at all. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
On Tue, Aug 26, 2008 at 1:45 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: Dave Cramer wrote: On Tue, Aug 26, 2008 at 1:33 PM, Tom Lane [EMAIL PROTECTED] wrote: Could we first see a cycle of autovac log output with log_autovacuum_min_duration = 0? Otherwise we're not going to get closer to understanding why it's not cleaning up template0 for you. I actually set that last time I restarted. There's nothing particularly interesting there. Is there another log GUC that needs to be tweaked to get more output ? My guess is that autovacuum is skipping the database for some reason, so there's no log entry at all. Seems like a viable explanation, but doesn't advance us any further? Dave
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Dave Cramer wrote: On Tue, Aug 26, 2008 at 1:45 PM, Alvaro Herrera [EMAIL PROTECTED] wrote: My guess is that autovacuum is skipping the database for some reason, so there's no log entry at all. Seems like a viable explanation, but doesn't advance us any further ? Nope, it doesn't -- we need to understand what's the reason. I'm checking the code. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] Implementing cost limit/delays for insert/delete/update/select
Hello, [ I have not yet had the time to look at code again in response to some of the points raised by several people; but I wanted to follow up somewhat still on other bits. ] You would have to test for whether it's time to sleep much more often. Possibly before every ExecProcNode call would be enough. That would have overhead comparable to EXPLAIN ANALYZE, which is a lot. I'm fairly dubious about this whole proposal: it's not clear to me that the vacuum delay stuff works very well at all, and to the extent that it does work it's because vacuum has such stylized, predictable behavior. Well, it definitely works well enough to make a large difference in my use cases. In particular with respect to the amount of write activity generated, which easily causes latency problems. That said, it remains to be seen how much of an issue heavy write activity will be once upgraded to 8.3 and after tweaking the Linux buffer cache. Right now, I do not expect the database to be even useful in one of my use cases, if it were not for delay points during vacuuming. So although I make no argument as to whether it works better due to the limited and understood nature of vacuuming, it is definitely an appreciated feature for my use cases. The same can't be said of general SQL queries. For one thing, it's not apparent that rate-limiting I/O would be sufficient, because although vacuum is nearly always I/O bound, general queries often are CPU bound; or their system impact might be due to other factors like contention for shared-memory data structures. In my case I mostly care about I/O. I believe that this is going to be a fairly common fact with anyone whose primary concern is latency. The good part about CPU contention is that it is handled quite well by modern operating systems/hardware. 
Even on a single-core machine, a single CPU bound query should still only have a percentage-wise throughput impact on other traffic (normally; of course you might have some particularly bad contention on some resource, etc). If your database is very sensitive to latency, you are likely running it at far below full throughput, meaning that there should be quite a bit of margin in terms of CPU. This would be especially true on multi-core machines where the impact of a single backend is even less. The problem I have with I/O is that saturating I/O, in particular with writes, has all sorts of indirect effects that are difficult to predict, and are not at all guaranteed to translate into a simple percentage-wise slow-down. For example, I've seen stalls lasting several *minutes* due to a bulk DELETE of a million rows or so. With mixed random-access writes, streaming writes, and the PG buffer cache, the operating system buffer cache, and the RAID controller's cache, it is not at all unlikely that you will have significant latency problems when saturating the system with writes. So recognizing that I am not likely to ever have very good behavior while saturating the storage system with writes, I instead want to limit the write activity generated to a sensible amount (preferably such that individual bursts are small enough to e.g. fit in a RAID controller cache). This reduces the problem of ensuring good behavior with respect to short bursts of writes and their interaction with checkpoints, which is a much easier problem than somehow ensuring fairness under write-saturated load. So that is where my motivation comes from; in more or less all my use cases, limiting disk I/O is massively more important than limiting CPU usage. On this topic, I have started thinking again about direct I/O. I asked about this on -performance a while back in a different context and it seemed there was definitely no clear consensus that one should have direct I/O. 
That seemed to be mostly due to a perceived lack of increase in throughput. However, my gut feeling is that by bypassing the OS buffer cache, you could significantly improve real-time/latency sensitive aspects in PostgreSQL in cases where throughput is not your primary concern. Perhaps something like that would be a more effective approach. Priority inversion is a pretty serious concern as well (ie, a sleeping low priority query might be blocking other queries). I presume this is in reference to bulk modifications (rather than selects) blocking other transactions with conflicting updates? If so, yes I see that. The feature would be difficult to use reliably for writes except in very controlled situations (in my particular use case, which I am tunnel-visioning on, it is guaranteed that there is no conflict due to the nature of the application). But is my understanding correct that there is no reason to believe there are such issues for read-only queries, or queries that do not actually conflict (at the SQL level) with concurrent transactions? (Ignoring the impact it might have on old transactions hanging around for a longer time.) -- / Peter Schuller PGP userID: 0xE9758B7D or
Re: [HACKERS] Split up the wiki TODO page?
All, Apart from the management aspect (which is very much a valid concern), others have expressed a desire to be able to easily search the list for easy Todo items. We got that working by adding the [E] tags to those items, and it's now very convenient to search for them. Goodness, if only we had some kind of organized repository for these TODO items capable of holding multiple categories per item. Maybe something with items and attributes, and some kind of relationship between the TODO item and its categories. I don't know where we'd find something like that, though. Maybe we should see if the folks at Ingres have anything. ;-/ --Josh
[HACKERS] September Commit Fest coming soon!
Folks, The September commit fest starts in one week. The goal, this time, is to start reviewing on day 1 of the commit fest and not spend the first 3 days collecting extra patches. So if your patch isn't on the list *on* September 1, it may get pushed into November. So get it on the list now! http://wiki.postgresql.org/wiki/CommitFest:2008-09 Second, I need to know who's available for round-robin reviewing for this commitfest (September 1-15). If you volunteer, you'll be assigned a patch to review on September 4 from the list of unclaimed patches, and will be expected to review it within 5 days. And then hopefully be assigned a second one. Please, we really need help reviewing, and this is a good way to gain experience. --Josh Commitfest Mom Berkus
[HACKERS] Is it really such a good thing for newNode() to be a macro?
I happened to be looking at nodes.h and started wondering just how sane this coding really is: extern PGDLLIMPORT Node *newNodeMacroHolder; #define newNode(size, tag) \ ( \ AssertMacro((size) >= sizeof(Node)), /* need the tag, at least */ \ newNodeMacroHolder = (Node *) palloc0fast(size), \ newNodeMacroHolder->type = (tag), \ newNodeMacroHolder \ ) Given that we're calling palloc, it's not clear that saving one level of function call is really buying much; and what it's costing us is a store to a global variable that the compiler has no way to optimize away. On a lot of platforms, accessing global variables isn't especially cheap. Also, considering that palloc0fast is a nontrivial macro, and that there are a LOT of uses of newNode(), we're paying rather a lot of code space for a pretty dubious savings. So I'm tempted to get rid of this and just make newNode() an out-of-line function. Thoughts? regards, tom lane
Re: [HACKERS] Split up the wiki TODO page?
On Wed, Aug 27, 2008 at 4:23 AM, Josh Berkus [EMAIL PROTECTED] wrote: Goodness, if only we had some kind of organized repository for these TODO items capable of holding multiple categories per item. Maybe something with items and attributes, and some kind of relationship between the TODO item and its categories. I don't know where we'd find something like that, though. Maybe we should see if the folks at Ingres have anything. ;-/ I think I may have heard of such a beast. I believe they call it an infostation, or something. Anyway ... I've written up a quick explanation of how to administer the todo list: http://wiki.postgresql.org/wiki/Talk:Todo I just planted it on the Talk page for the Todo, rather than creating a separate page for it. Cheers, BJ
Re: [HACKERS] Restartable signals 'n all that
Tom Lane wrote: So we've got two problems: SA_RESTART is preventing some EINTRs from happening when we'd like, and yet it seems we are at risk of unwanted EINTRs anyway. The only really clean solution I can see is to stop using SA_RESTART and try to make all our syscalls EINTR-proof. But the probability of bugs-of-omission seems just about 100%, especially in third party backend add-ons that we don't get to review the code for. Did we do anything about this? I see we have it on TODO ... -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] Restartable signals 'n all that
Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote: So we've got two problems: SA_RESTART is preventing some EINTRs from happening when we'd like, and yet it seems we are at risk of unwanted EINTRs anyway. The only really clean solution I can see is to stop using SA_RESTART and try to make all our syscalls EINTR-proof. But the probability of bugs-of-omission seems just about 100%, especially in third party backend add-ons that we don't get to review the code for. Did we do anything about this? I see we have it on TODO ... No, I haven't done anything about it. (I'm not entirely convinced that there's a real problem on any modern platforms.) regards, tom lane
Re: [HACKERS] initdb change
On Monday 25 August 2008 14:05:21 Joshua Drake wrote: On Mon, 25 Aug 2008 13:56:16 -0400 Andrew Dunstan [EMAIL PROTECTED] wrote: That is what I was suggesting. Why should the xlog directory be treated specially? Consider the following: mount /dev/sda1 /var/lib/pgsql mount /dev/sdb1 /srv1/pgsql/pg_xlog (which has a link from /var/lib/pgsql/data/pg_xlog) initdb -D /var/lib/pgsql/data -X /var/lib/pgsql/data/pg_xlog Will fail; now you have multiple steps to get everything where it should be. We don't do this for any other subdirectory of PGDATA. The extra logic would be a Well the only other directory it would even matter for would be pg_clog (maybe). I grant that it is a very little feature that could be lived without. nuisance and for no great gain in functionality that I can see. In an environment where you are provisioning many spindle machines over many differently mounts and raid configurations it could be useful. The question is; is it worth it? I don't know. I was just trying to understand exactly what David was talking about and offer some suggestions. I would have thought the place you need this is where you have SA's who set up a machine, creating a $PGDATA and $PGDATA/xlog on separate mountpoints where the postgres user has full rights to use those directories, but not create directories in those locations. In that scenario, the DBA couldn't create the directories if he wanted, so allowing the behavior to use an existing directory would be helpful. -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] initdb change
Robert Treat wrote: I would have thought the place you need this is where you have SA's who set up a machine, creating a $PGDATA and $PGDATA/xlog on seperate mountpoints where the postgres user has full rights to use those directories, but not create directies in those locations. In that scenario, the DBA couldn't create the directories if he wanted, so allowing the behavior to use an existing directory would be helpful. As I have already pointed out in this thread, the allegation that you cannot use an existing directory is false. See below for proof. cheers andrew [EMAIL PROTECTED] inst.8.3.5707]$ sudo mkdir /bk/xl [EMAIL PROTECTED] inst.8.3.5707]$ sudo chown andrew:andrew /bk/xl [EMAIL PROTECTED] inst.8.3.5707]$ bin/initdb -X /bk/xl blurfl The files belonging to this database system will be owned by user andrew. This user must also own the server process. The database cluster will be initialized with locale en_US.UTF-8. The default database encoding has accordingly been set to UTF8. The default text search configuration will be set to english. creating directory blurfl ... ok fixing permissions on existing directory /bk/xl ... ok creating subdirectories ... ok selecting default max_connections ... 100 selecting default shared_buffers/max_fsm_pages ... 32MB/204800 creating configuration files ... ok creating template1 database in blurfl/base/1 ... ok initializing pg_authid ... ok initializing dependencies ... ok creating system views ... ok loading system objects' descriptions ... ok creating conversions ... ok creating dictionaries ... ok setting privileges on built-in objects ... ok creating information schema ... ok vacuuming database template1 ... ok copying template1 to template0 ... ok copying template1 to postgres ... ok WARNING: enabling trust authentication for local connections You can change this by editing pg_hba.conf or using the -A option the next time you run initdb. Success. 
You can now start the database server using: bin/postgres -D blurfl or bin/pg_ctl -D blurfl -l logfile start
Re: [HACKERS] Proposal to sync SET ROLE and pg_stat_activity
Grant Finnemore wrote: Invoking pg_stat_activity after the SET ROLE is changed will however leave the usename unchanged. You're right. Because, as you spotted, usename is a synonym for the session user name. SET SESSION AUTHORIZATION behaves similarly, although in that case, it's documented that both session_user and current_user are changed to reflect the new user. Ugh? The manual [1][2] documents the behavior of both commands. I have on occasion used a database pooling scheme that whenever a connection is retrieved from the pool, either a SET ROLE or SET SESSION AUTHORIZATION is issued to enable database level access restrictions. Similarly, when the connection is returned, a RESET instruction is issued. I can't see, in your use case, the advantage of showing current_user. IMHO, it would be advantageous to be able to display which connections are in use by a given user through the pg_stat_activity view. Wouldn't it be embarrassing if, for example, mary queried pg_stat_activity and saw that I'm using her role? I'm not against exposing this information but I think it could be superuser-only. There are two ways in which this could be done. Firstly, we could alter the current usename field in the view. This would keep the view definition the same, but would alter the semantics, which could affect existing clients. Alternatively, we could introduce another column that would reflect the role name. Why not add another column: current_usename? I would object if we intended to change the view semantics. [1] http://www.postgresql.org/docs/8.3/static/sql-set-role.html [2] http://www.postgresql.org/docs/8.3/static/sql-set-session-authorization.html -- Euler Taveira de Oliveira http://www.timbira.com/
Re: [HACKERS] September Commit Fest coming soon!
On Tue, Aug 26, 2008 at 1:28 PM, Josh Berkus [EMAIL PROTECTED] wrote: Second, I need to know who's available for round-robin reviewing for this commitfest (September 1-15). Let's give it a try... count me in for round-robin reviewing. -- regards, Jaime Casanova PostgreSQL support and training | Systems consulting and development Guayaquil - Ecuador Cel. (593) 87171157
[HACKERS] TODO - Commitfest
Folks, It looks to me like there should be more links, maybe even bidirectional ones, between the TODO wiki and commitfest pages. Does mediawiki have a bidirectional link capability? Cheers, David. -- David Fetter [EMAIL PROTECTED] http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED] Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [HACKERS] Is it really such a good thing for newNode() to be a macro?
Tom Lane wrote: I happened to be looking at nodes.h and started wondering just how sane this coding really is: extern PGDLLIMPORT Node *newNodeMacroHolder; #define newNode(size, tag) \ ( \ AssertMacro((size) >= sizeof(Node)), /* need the tag, at least */ \ newNodeMacroHolder = (Node *) palloc0fast(size), \ newNodeMacroHolder->type = (tag), \ newNodeMacroHolder \ ) Given that we're calling palloc, it's not clear that saving one level of function call is really buying much; and what it's costing us is a store to a global variable that the compiler has no way to optimize away. On a lot of platforms, accessing global variables isn't especially cheap. Also, considering that palloc0fast is a nontrivial macro, and that there are a LOT of uses of newNode(), we're paying rather a lot of code space for a pretty dubious savings. Correct analysis, I'd say. So I'm tempted to get rid of this and just make newNode() an out-of-line function. Getting rid of the global variable is imperative. However, for the rest you'd have two alternate options (besides making it a normal function): a. Use macros like: #define makeNode(_type_,_variable_) \ newNode(sizeof(_type_), T_##_type_, _variable_) #define newNode(size, tag, _variable_) \ do { \ Node *newNodeMacroHolder; \ AssertMacro((size) >= sizeof(Node)); /* need the tag, at least */ \ newNodeMacroHolder = (Node *) palloc0fast(size); \ newNodeMacroHolder->type = (tag); \ (_variable_) = newNodeMacroHolder; \ } while(0) b. Create a function newNode() which is declared as inline, which basically gives you the same code as under (a). -- Sincerely, Stephen R. van den Berg. Good moaning!
Re: [HACKERS] Is it really such a good thing for newNode() to be a macro?
Stephen R. van den Berg [EMAIL PROTECTED] writes: b. Create a function newNode() which is declared as inline, which basically gives you the same code as under (a). I considered that one, but since part of my argument is that inlining this is a waste of code space, it seems like a better inlining technology isn't really the answer. The other two alternatives would force notational changes on all the callers, which doesn't seem appealing (there are close to 1400 calls of makeNode() in the backend...) regards, tom lane
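For illustration, a minimal self-contained sketch of the out-of-line approach under discussion. Everything here is a stand-in, not the real backend code: calloc() substitutes for palloc0fast(), and the trimmed NodeTag/Node/Query definitions are invented for the example (the real ones live in nodes/nodes.h).

```c
#include <assert.h>
#include <stdlib.h>

/* Trimmed stand-ins for the backend's node machinery (assumptions for
 * this sketch only). */
typedef enum NodeTag { T_Invalid = 0, T_Query } NodeTag;
typedef struct Node  { NodeTag type; } Node;
typedef struct Query { NodeTag type; int dummy; } Query;

/* Out-of-line newNode(): no newNodeMacroHolder global, just one
 * ordinary function call.  calloc() zeroes the allocation the way
 * palloc0fast() would in the backend. */
static Node *
newNode(size_t size, NodeTag tag)
{
    Node   *result;

    assert(size >= sizeof(Node));   /* need the tag, at least */
    result = (Node *) calloc(1, size);
    result->type = tag;
    return result;
}

/* Caller notation is unchanged from the macro version. */
#define makeNode(_type_) ((_type_ *) newNode(sizeof(_type_), T_##_type_))
```

The appeal is exactly what the thread describes: the ~1400 makeNode() call sites compile to a plain call instead of an inlined allocation expansion plus a store to a global the compiler cannot optimize away, and no caller has to change notation.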
Re: [HACKERS] TODO - Commitfest
David Fetter wrote: Folks, It looks to me like there should be more links, maybe even bidirectional ones, between the TODO wiki and commitfest pages. Does mediawiki have a bidirectional link capability? Huh, what's a bidirectional link in this context? I think both the Commitfest and Todo are just ordered collections of pointers to the archives, IMHO anyway (which makes even sadder the fact that the archives are so puny). I'm not sure what you are envisioning. Can you be more explicit? -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] TODO - Commitfest
On Tue, Aug 26, 2008 at 09:40:26PM -0400, Alvaro Herrera wrote: David Fetter wrote: Folks, It looks to me like there should be more links, maybe even bidirectional ones, between the TODO wiki and commitfest pages. Does mediawiki have a bidirectional link capability? Huh, what's a bidirectional link in this context? I think both the Commitfest and Todo are just ordered collections of pointers to the archives, For example, Common Table Expressions is both on the TODO list and on September's Commitfest. They should probably point to each other so long as such a relationship exists. IMHO anyway (which makes even sadder the fact that the archives are so puny). I'm not sure what you are envisioning. Can you be more explicit? See above :) Cheers, David. -- David Fetter [EMAIL PROTECTED] http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED] Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [HACKERS] can't stop autovacuum by HUP'ing the server
Alvaro Herrera wrote: Nope, AFAICS it's harmless; what it means is that on those databases, all tuples will be frozen immediately. I'll try to reproduce the problem here. No luck :-( It works as expected for me. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.