Re: [HACKERS] pg_prewarm
So I wrote a prewarming utility. Patch is attached. You can prewarm either the OS cache or PostgreSQL's cache, and there are two options for prewarming the OS cache to meet different needs. By passing the correct arguments to the function, you can prewarm an entire relation or just the blocks you choose; prewarming of blocks from alternate relation forks is also supported, for completeness. Hope you like it. +1
Re: [HACKERS] pg_prewarm
Hi, On Thu, 2012-03-08 at 23:13 -0500, Robert Haas wrote: It's been bugging me for a while now that we don't have a prewarming utility, for a couple of reasons, including: 1. Our customers look at me funny when I suggest that they use pg_relation_filepath() and /bin/dd for this purpose. 2. Sometimes when I'm benchmarking stuff, I want to get all the data cached in shared_buffers. This is surprisingly hard to do if the size of any relation involved is >= 1/4 of shared buffers, because the BAS_BULKREAD stuff kicks in. You can do it by repeatedly seq-scanning the relation - eventually all the blocks trickle in - but it takes a long time, and that's annoying. So I wrote a prewarming utility. I was talking to an Oracle DBA about this just yesterday. We also have pgfincore, but pg_prewarm is pretty much what we need, actually, I think. Did not test the patch, but the feature should be in core/contrib/whatever. This will also increase performance for the static tables that need to be in the buffers all the time. I'm also seeing some use cases for BI databases. Thanks! Regards, -- Devrim GÜNDÜZ Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr http://www.gunduz.org Twitter: http://twitter.com/devrimgunduz
Re: Scaling XLog insertion (was Re: [HACKERS] Moving more work outside WALInsertLock)
On 07.03.2012 17:28, Tom Lane wrote: Simon Riggs si...@2ndquadrant.com writes: On Wed, Mar 7, 2012 at 3:04 PM, Tom Lane t...@sss.pgh.pa.us wrote: Alvaro Herrera alvhe...@commandprompt.com writes: So they are undoubtedly rare. Not sure if as rare as Higgs bosons. Even if they're rare, having a major performance hiccup when one happens is not a side-effect I want to see from a patch whose only reason to exist is better performance. I agree the effect you point out can exist, I just don't want to slow down the main case as a result. I don't see any reason to think that what I suggested would slow things down, especially not if the code were set up to fall through quickly in the typical case where no page boundary is crossed. Integer division is not slow on any machine made in the last 15 years or so. Agreed. I wasn't worried about the looping with extra-large records, but might as well not do it. Here's an updated patch. It now only loops once per segment that a record crosses. Plus a lot of other small cleanup. I've been doing some performance testing with this, using a simple C function that just inserts a dummy WAL record of given size. I'm not totally satisfied. Although the patch helps with scalability at 3-4 concurrent backends doing WAL insertions, it seems to slow down the single-client case with small WAL records by about 5-10%. This is what Robert also saw with an earlier version of the patch (http://archives.postgresql.org/pgsql-hackers/2011-12/msg01223.php). I tested this with the data directory on a RAM drive; unfortunately I don't have a server with a hard drive that can sustain the high insertion rate. I'll post more detailed results, once I've refined the tests a bit. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 1:13 PM, Robert Haas robertmh...@gmail.com wrote: It's been bugging me for a while now that we don't have a prewarming utility, for a couple of reasons, including: 1. Our customers look at me funny when I suggest that they use pg_relation_filepath() and /bin/dd for this purpose. 2. Sometimes when I'm benchmarking stuff, I want to get all the data cached in shared_buffers. This is surprisingly hard to do if the size of any relation involved is >= 1/4 of shared buffers, because the BAS_BULKREAD stuff kicks in. You can do it by repeatedly seq-scanning the relation - eventually all the blocks trickle in - but it takes a long time, and that's annoying. So I wrote a prewarming utility. Patch is attached. You can prewarm either the OS cache or PostgreSQL's cache, and there are two options for prewarming the OS cache to meet different needs. By passing the correct arguments to the function, you can prewarm an entire relation or just the blocks you choose; prewarming of blocks from alternate relation forks is also supported, for completeness. Hope you like it. +1 When a relation is loaded into cache, are corresponding indexes also loaded at the same time? Can this load only the specified index into cache? When the relation is too huge to fit into the cache and the dominant access pattern in the system is index scans, the DBA might want to load only the index rather than the table. For such systems, so far I've been suggesting using pgstatindex, but it would be good if pg_prewarm could do that. This utility might also be helpful to accelerate recovery of WAL records not containing FPWs. IOW, before starting recovery, list the relations to recover from the WAL files by using an xlogdump tool, load them into cache by using this utility, and then start recovery.
Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
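For readers wanting to try the patch, an invocation would presumably look like the following; the argument order (relation, fork, mode, first block, last block) and the 'read' mode name are inferred from Robert's example later in the thread, and the table name is just a placeholder:

```sql
-- Prewarm a whole table's main fork; the NULL block bounds appear to mean
-- "from the first block to the last" (placeholder table name).
SELECT pg_prewarm('my_table'::regclass, 'main', 'read', NULL, NULL);
```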
Re: Scaling XLog insertion (was Re: [HACKERS] Moving more work outside WALInsertLock)
On Fri, Mar 9, 2012 at 7:04 PM, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: Here's an updated patch. It now only loops once per segment that a record crosses. Plus a lot of other small cleanup. Thanks! But you forgot to attach the patch. I've been doing some performance testing with this, using a simple C function that just inserts a dummy WAL record of given size. I'm not totally satisfied. Although the patch helps with scalability at 3-4 concurrent backends doing WAL insertions, it seems to slow down the single-client case with small WAL records by about 5-10%. This is what Robert also saw with an earlier version of the patch (http://archives.postgresql.org/pgsql-hackers/2011-12/msg01223.php). I tested this with the data directory on a RAM drive, unfortunately I don't have a server with a hard drive that can sustain the high insertion rate. I'll post more detailed results, once I've refined the tests a bit. I'm also doing performance tests. If I get interesting results, I'll post them. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] pg_prewarm
We had a somewhat different idea here in the past: what if we had a procedure / method to allow people to save the list of current buffers / cached blocks to be written to disk (sorted)? We could then reload this cache profile on startup in the background, or people could load a certain cache content at runtime (maybe to test or whatever). Writing those block ids in sorted order would help us to avoid some random I/O on reload. regards, hans On Mar 9, 2012, at 5:13 AM, Robert Haas wrote: It's been bugging me for a while now that we don't have a prewarming utility, for a couple of reasons, including: 1. Our customers look at me funny when I suggest that they use pg_relation_filepath() and /bin/dd for this purpose. 2. Sometimes when I'm benchmarking stuff, I want to get all the data cached in shared_buffers. This is surprisingly hard to do if the size of any relation involved is >= 1/4 of shared buffers, because the BAS_BULKREAD stuff kicks in. You can do it by repeatedly seq-scanning the relation - eventually all the blocks trickle in - but it takes a long time, and that's annoying. So I wrote a prewarming utility. Patch is attached. You can prewarm either the OS cache or PostgreSQL's cache, and there are two options for prewarming the OS cache to meet different needs. By passing the correct arguments to the function, you can prewarm an entire relation or just the blocks you choose; prewarming of blocks from alternate relation forks is also supported, for completeness. Hope you like it.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company pg_prewarm_v1.patch -- Cybertec Schönig & Schönig GmbH Gröhrmühlgasse 26 A-2700 Wiener Neustadt Web: http://www.postgresql-support.de
Re: [HACKERS] Review of patch renaming constraints
Hi, Peter Eisentraut pete...@gmx.net writes: On Thu, 2012-01-12 at 22:43 -0600, Joshua Berkus wrote: Most normal uses of alter table ... rename constraint ... worked normally. However, the patch does not deal correctly with constraints which are not inherited, such as primary key constraints: New patch which works around that issue. I've been reviewing this new patch and it seems ready for committer to me: the code indeed looks like it's always been there, and the corner cases are covered in the added regression tests, including the one Josh ran into problems with in the previous round of testing. The regression test coverage made me lazy enough not to retry the patch here; I trust Peter on testing his own work :) I'll update my command trigger patch as soon as this one makes it in. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] [PATCH] Support for foreign keys with arrays
On Thu, 08/03/2012 at 08:11 -0500, Robert Haas wrote: On Fri, Feb 24, 2012 at 9:01 PM, Noah Misch n...@leadboat.com wrote: I consider these the core changes needed to reach Ready for Committer: - Fix crash in array_replace(arr, null, null). - Don't look through the domain before calling can_coerce_type(). - Compare operator family of pfeqop and TYPECACHE_EQ_OPR at creation. - Move post-processing from gram.y and remove t/f magic values. So, is someone working on making these changes? We are working on it and I hope we can send the v4 version during the upcoming weekend. Regards, Marco -- Marco Nenciarini - 2ndQuadrant Italy PostgreSQL Training, Services and Support marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it
[HACKERS] Bug: walsender and high CPU usage
Hi, I found a bug which causes walsender to enter a busy loop when the replication connection is terminated. Walsender consumes lots of CPU time (%sys), and this situation lasts until it has detected the termination of the replication connection and exited. The cause of this bug is that the walsender loop doesn't call ResetLatch at all in the above case. Since the latch remains set, the walsender loop cannot sleep on the latch, i.e., WaitLatch always returns immediately. We can fix this bug by adding ResetLatch at the top of the walsender loop. Patch attached. This bug exists in 9.1 but not in 9.2dev. In 9.2dev, this bug has already been fixed by the commit (cff75130b5f63e45423c2ed90d6f2e84c21ef840). This commit refactors and refines the walsender loop logic in addition to adding ResetLatch. So I'm tempted to backport this commit (except the deletion of wal_sender_delay) to 9.1 rather than applying the attached patch. OTOH, the attached patch is quite simple, and its impact on 9.1 would be very small, so it's easy to backport. Thoughts? Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center

diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 3497269..35c7042 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -700,6 +700,9 @@ WalSndLoop(void)
 	/* Loop forever, unless we get an error */
 	for (;;)
 	{
+		/* Clear any already-pending wakeups */
+		ResetLatch(&MyWalSnd->latch);
+
 		/*
 		 * Emergency bailout if postmaster has died. This is to avoid the
 		 * necessity for manual cleanup of all postmaster children.
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 00:28, Thom Brown t...@linux.com wrote: On 8 March 2012 22:24, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: We're getting there. :) It was late last night and I forgot to get around to testing pg_dump, which isn't working correctly:

--
-- Name: cmd_trg_after_alter_aggregate; Type: COMMAND TRIGGER; Schema: -; Owner: -
--

CREATE COMMAND TRIGGER cmd_trg_after_alter_aggregate AFTER'ALTER AGGREGATE' EXECUTE PROCEDURE cmd_trg_info ();

There shouldn't be quotes around the command, and when removing them, ensure there's a space before the command. All variations of the ALTER statements in the dump are fine, so are CREATE statements for ANY COMMAND command triggers. Also I notice that CREATE/ALTER/DROP COMMAND TRIGGER appears just before CREATE/ALTER/DROP TRIGGER in the documentation. This breaks the alphabetical order and I wasn't expecting to find it there when scanning down the page. Could we move them into an alphabetical position? -- Thom
Re: [HACKERS] xlog location arithmetic
On Sun, Mar 4, 2012 at 8:26 PM, Magnus Hagander mag...@hagander.net wrote: On Sun, Feb 26, 2012 at 00:53, Euler Taveira de Oliveira eu...@timbira.com wrote: On 25-02-2012 09:23, Magnus Hagander wrote: Do we even *need* the validate_xlog_location() function? If we just remove those calls, won't we still catch all the incorrectly formatted ones in the errors of the sscanf() calls? Or am I too deep into weekend-mode and missing something obvious? sscanf() is too fragile for input sanity checks. Try pg_xlog_location_diff('12/3', '-10/0'), for example. I won't object to removing that function if you protect xlog location input from silly users. Ah, good point. No, that's the reason I was missing :-) Patch applied, thanks! Thanks for committing the patch! Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. Is it also worth committing this patch? http://archives.postgresql.org/message-id/4f315f6c.8030...@timbira.com Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
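To make the arithmetic concrete, here is roughly what the committed function does for two locations in the same log file (the location literals are illustrative; pg_xlog_location_diff returns the difference in bytes):

```sql
-- 0x3000000 - 0x1000000 = 0x2000000 bytes
SELECT pg_xlog_location_diff('0/3000000', '0/1000000');  -- 33554432
```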
Re: [HACKERS] pg_stats_recovery view
On Tue, Feb 14, 2012 at 4:10 PM, Jaime Casanova ja...@2ndquadrant.com wrote: On Thu, Feb 2, 2012 at 2:32 AM, Magnus Hagander mag...@hagander.net wrote: I haven't looked through the code in detail, but one direct comment: do we really need/want to send this through the stats collector? It will only ever have one sender - perhaps we should just either store it in shared memory or store it locally and only send it on demand? fyi, i intend to send a reworked patch later today, it will store the info locally and send it on demand. Jaime, are you planning to submit the updated version of the patch? Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server
(2012/03/09 14:00), Tom Lane wrote: I wrote: There are a couple of other points that make me think we need to revisit the PlanForeignScan API definition some more, too. ... So we need to break down what PlanForeignScan currently does into three separate steps. The first idea that comes to mind is to call them GetForeignRelSize, GetForeignPaths, GetForeignPlan; but maybe somebody has a better idea for names? Attached is a draft patch for that. 1. FileFdwPlanState.pages and FileFdwPlanState.ntuples seem redundant. Why not use RelOptInfo.pages and RelOptInfo.tuples? 2. IMHO RelOptInfo.fdw_private seems confusing. How about renaming it to e.g., RelOptInfo.fdw_state? Attached is a patch for the draft patch. Best regards, Etsuro Fujita

*** a/contrib/file_fdw/file_fdw.c
--- b/contrib/file_fdw/file_fdw.c
***************
*** 74,87 ****
  static const struct FileFdwOption valid_options[] = {
  };
  
  /*
!  * FDW-specific information for RelOptInfo.fdw_private.
   */
  typedef struct FileFdwPlanState
  {
  	char	   *filename;	/* file to read */
  	List	   *options;	/* merged COPY options, excluding filename */
- 	BlockNumber pages;		/* estimate of file's physical size */
- 	double		ntuples;	/* estimate of number of rows in file */
  } FileFdwPlanState;
  
  /*
--- 74,85 ----
  };
  
  /*
!  * FDW-specific information for RelOptInfo.fdw_state.
   */
  typedef struct FileFdwPlanState
  {
  	char	   *filename;	/* file to read */
  	List	   *options;	/* merged COPY options, excluding filename */
  } FileFdwPlanState;
  
  /*
***************
*** 132,140 ****
  static void fileGetOptions(Oid foreigntableid,
  			   char **filename, List **other_options);
  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_size(PlannerInfo *root, RelOptInfo *baserel,
! 			  FileFdwPlanState *fdw_private);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
- 			   FileFdwPlanState *fdw_private,
  			   Cost *startup_cost, Cost *total_cost);
--- 130,137 ----
  static void fileGetOptions(Oid foreigntableid,
  			   char **filename, List **other_options);
  static List *get_file_fdw_attribute_options(Oid relid);
  static void estimate_size(PlannerInfo *root, RelOptInfo *baserel,
! 			  FileFdwPlanState *fpstate);
  static void estimate_costs(PlannerInfo *root, RelOptInfo *baserel,
  			   Cost *startup_cost, Cost *total_cost);
***************
*** 415,433 ****
  fileGetForeignRelSize(PlannerInfo *root,
  					  RelOptInfo *baserel,
  					  Oid foreigntableid)
  {
! 	FileFdwPlanState *fdw_private;
  
  	/*
  	 * Fetch options.  We only need filename at this point, but we might
  	 * as well get everything and not need to re-fetch it later in planning.
  	 */
! 	fdw_private = (FileFdwPlanState *) palloc(sizeof(FileFdwPlanState));
! 	fileGetOptions(foreigntableid,
! 				   &fdw_private->filename, &fdw_private->options);
! 	baserel->fdw_private = (void *) fdw_private;
  
  	/* Estimate relation size */
! 	estimate_size(root, baserel, fdw_private);
  }
  
  /*
--- 412,429 ----
  					  RelOptInfo *baserel,
  					  Oid foreigntableid)
  {
! 	FileFdwPlanState *fpstate;
  
  	/*
  	 * Fetch options.  We only need filename at this point, but we might
  	 * as well get everything and not need to re-fetch it later in planning.
  	 */
! 	fpstate = (FileFdwPlanState *) palloc(sizeof(FileFdwPlanState));
! 	fileGetOptions(foreigntableid, &fpstate->filename, &fpstate->options);
! 	baserel->fdw_state = (void *) fpstate;
  
  	/* Estimate relation size */
! 	estimate_size(root, baserel, fpstate);
  }
  
  /*
***************
*** 443,455 ****
  fileGetForeignPaths(PlannerInfo *root,
  					RelOptInfo *baserel,
  					Oid foreigntableid)
  {
- 	FileFdwPlanState *fdw_private = (FileFdwPlanState *) baserel->fdw_private;
  	Cost		startup_cost;
  	Cost		total_cost;
  
  	/* Estimate costs */
! 	estimate_costs(root, baserel, fdw_private,
! 				   &startup_cost, &total_cost);
  
  	/* Create a ForeignPath node and add it as only possible path */
  	add_path(baserel, (Path *)
--- 439,449 ----
  					RelOptInfo *baserel,
  					Oid foreigntableid)
  {
  	Cost		startup_cost;
  	Cost		total_cost;
  
  	/* Estimate costs */
! 	estimate_costs(root, baserel, &startup_cost, &total_cost);
  
  	/* Create
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 5:24 AM, Fujii Masao masao.fu...@gmail.com wrote: When a relation is loaded into cache, are corresponding indexes also loaded at the same time? No, although if you wanted to do that you could easily do so, using a query like this: select pg_prewarm(indexrelid, 'main', 'read', NULL, NULL) from pg_index where indrelid = 'your_table_name'::regclass; Can this load only the specified index into cache? Yes. The relation can be anything that has storage, so you can prewarm either a table or an index (or even a sequence or TOAST table, if you're so inclined). When the relation is too huge to fit into the cache and most access pattern in the system is index scan, DBA might want to load only index rather than table. For such system, so far I've been suggesting using pgstatindex, but it's good if pg_prewarm can do that. pgstatindex is an interesting idea; I hadn't thought of that. Actually, though, pgstatindex probably ought to be using a BufferAccessStrategy to avoid trashing the cache. I've had reports of pgstatindex torpedoing performance on production systems. This utility might be helpful to accelerate a recovery of WAL record not containing FPW. IOW, before starting a recovery, list the relations to recover from WAL files by using xlogdump tool, load them into cache by using this utility, and then start a recovery. Interesting idea. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] pg_prewarm
Robert Haas robertmh...@gmail.com writes: It's been bugging me for a while now that we don't have a prewarming utility, for a couple of reasons, including: 1. Our customers look at me funny when I suggest that they use pg_relation_filepath() and /bin/dd for this purpose. Try telling them about pgfincore maybe. https://github.com/klando/pgfincore 2. Sometimes when I'm benchmarking stuff, I want to get all the data cached in shared_buffers. This is surprisingly hard to do if the size of any relation involved is >= 1/4 of shared buffers, because the BAS_BULKREAD stuff kicks in. You can do it by repeatedly seq-scanning the relation - eventually all the blocks trickle in - but it takes a long time, and that's annoying. That reminds me of something…

cedric=# select * from pgfadvise_willneed('pgbench_accounts');
      relpath       | os_page_size | rel_os_pages | os_pages_free
--------------------+--------------+--------------+---------------
 base/11874/16447   |         4096 |       262144 |        169138
 base/11874/16447.1 |         4096 |        65726 |        103352
(2 rows)

Time: 4462,936 ms

With pgfincore you can also get at how many pages are in memory already, os cache or shared buffers, per file segment of a relation. So you can both force warming up a whole relation, parts of it, and check the current state of things. So I wrote a prewarming utility. Patch is attached. You can prewarm either the OS cache or PostgreSQL's cache, and there are two options for prewarming the OS cache to meet different needs. By passing the correct arguments to the function, you can prewarm an entire relation or just the blocks you choose; prewarming of blocks from alternate relation forks is also supported, for completeness. Is it possible with your tool to snapshot the OS and PostgreSQL cache in order to warm a Hot Standby server? Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 5:42 AM, Hans-Jürgen Schönig postg...@cybertec.at wrote: we had some different idea here in the past: what if we had a procedure / method to allow people to save the list of current buffers / cached blocks to be written to disk (sorted). we could then reload this cache profile on startup in the background or people could load a certain cache content at runtime (maybe to test or whatever). writing those block ids in sorted order would help us to avoid some random I/O on reload. I don't think that's a bad idea at all, and someone actually did write a patch for it at one point, though it didn't get committed, partly I believe because of technical issues and partly because Greg Smith was uncertain how much good it did to restore shared_buffers without thinking about the OS cache. Personally, I don't buy into the latter objection: a lot of people are running with data sets that fit inside shared_buffers, and those people would benefit tremendously. However, this just provides mechanism, not policy, and is therefore more general. You could use pg_buffercache to save the cache contents at shutdown and pg_prewarm to load those blocks back in at startup, if you were so inclined. Or if you just want to load up your main relation, and its indexes, you can do that, too. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
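A rough sketch of that save-and-reload policy, built from pg_buffercache plus pg_prewarm as Robert describes; the saved_buffers table, the restriction to the main fork, and the join via pg_relation_filenode() are illustrative assumptions, not part of the patch:

```sql
-- At shutdown: record which main-fork blocks of this database are cached,
-- sorted so that the later reload is mostly sequential (illustrative sketch).
CREATE TABLE saved_buffers AS
    SELECT c.oid::regclass AS rel, b.relblocknumber AS blk
    FROM pg_buffercache b
    JOIN pg_class c ON pg_relation_filenode(c.oid) = b.relfilenode
    WHERE b.relforknumber = 0
      AND b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    ORDER BY rel, blk;

-- At startup: pull each saved block back in, one pg_prewarm call per block.
SELECT pg_prewarm(rel, 'main', 'read', blk, blk) FROM saved_buffers;
```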
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 8:25 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: Robert Haas robertmh...@gmail.com writes: It's been bugging me for a while now that we don't have a prewarming utility, for a couple of reasons, including: 1. Our customers look at me funny when I suggest that they use pg_relation_filepath() and /bin/dd for this purpose. Try telling them about pgfincore maybe. https://github.com/klando/pgfincore Oh, huh. I had no idea that pgfincore could do that. I thought that was just for introspection; I didn't realize it could actually move data around for you. 2. Sometimes when I'm benchmarking stuff, I want to get all the data cached in shared_buffers. This is surprisingly hard to do if the size of any relation involved is >= 1/4 of shared buffers, because the BAS_BULKREAD stuff kicks in. You can do it by repeatedly seq-scanning the relation - eventually all the blocks trickle in - but it takes a long time, and that's annoying. That reminds me of something…

cedric=# select * from pgfadvise_willneed('pgbench_accounts');
      relpath       | os_page_size | rel_os_pages | os_pages_free
--------------------+--------------+--------------+---------------
 base/11874/16447   |         4096 |       262144 |        169138
 base/11874/16447.1 |         4096 |        65726 |        103352
(2 rows)

Time: 4462,936 ms

That's not the same thing. That's pulling them into the OS cache, not shared_buffers. Is it possible with your tool to snapshot the OS and PostgreSQL cache in order to warm a Hot Standby server? Nope. It doesn't have any capabilities to probe for information, because I knew those things already existed in pg_buffercache and pgfincore, and also because they weren't what I needed to solve my immediate problem, which was a way to get the entirety of a relation into shared_buffers. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Review of pg_archivecleanup -x option patch
On Fri, Mar 9, 2012 at 12:47 AM, Jaime Casanova ja...@2ndquadrant.com wrote: Sorry, here's the patch rebased and with the suggestion from Alex. Which reminds me, I never thanked him for the review (shame on me) :D with the patch this time This may be a stupid idea, but it seems to me that it might be better to dispense with all of the logic in here to detect whether the file name is still going to be long enough after chomping the extension. I feel like that's just making things complicated. I assume the extensions we're thinking people will want to strip here are things like .gz, in which case there should be no confusion; and if someone's dumb enough to use an extension like 0 (with no dot or anything) but only sometimes then I think they deserve what they get (viz: errors). See attached (only lightly tested) patch for what I'm thinking of. Also, I'm wondering about this warning in the documentation: +extension added by the compression program. Note that the +<filename>.backup</filename> file name passed to the program should not +include the extension. IIUC, the latest change you made makes that warning obsolete, no? [rhaas pgsql]$ contrib/pg_archivecleanup/pg_archivecleanup -d -x .gz . 00010010.0020.backup.gz pg_archivecleanup: keep WAL file ./00010010 and later Thoughts? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company archivecleanup-extension-rmh.patch
Re: [HACKERS] Command Triggers, patch v11
On Wed, Mar 7, 2012 at 4:53 PM, Thom Brown t...@linux.com wrote: I've also since found that if I issue a VACUUM, CLUSTER or REINDEX on a read-only standby, the BEFORE ANY COMMAND trigger fires. I don't think any trigger should fire on a read-only standby. Why ever not? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 14:09, Robert Haas robertmh...@gmail.com wrote: On Wed, Mar 7, 2012 at 4:53 PM, Thom Brown t...@linux.com wrote: I've also since found that if I issue a VACUUM, CLUSTER or REINDEX on a read-only standby, the BEFORE ANY COMMAND trigger fires. I don't think any trigger should fire on a read-only standby. Why ever not? Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. -- Thom
Re: [HACKERS] Command Triggers, patch v11
On Fri, Mar 9, 2012 at 9:22 AM, Thom Brown t...@linux.com wrote: On 9 March 2012 14:09, Robert Haas robertmh...@gmail.com wrote: On Wed, Mar 7, 2012 at 4:53 PM, Thom Brown t...@linux.com wrote: I've also since found that if I issue a VACUUM, CLUSTER or REINDEX on a read-only standby, the BEFORE ANY COMMAND trigger fires. I don't think any trigger should fire on a read-only standby. Why ever not? Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Someone could even use a command trigger that detects which machine it's running on, and if it's the standby, uses dblink to execute the command on the master, or something crazy like that. Command triggers could also be useful for logging all attempts to execute a particular command, which is probably still appropriate on the standby. I think that it will be a good thing to try to treat Hot Standby mode as much like regular operation as is reasonably possible, across the board. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 14:30, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 9:22 AM, Thom Brown t...@linux.com wrote: On 9 March 2012 14:09, Robert Haas robertmh...@gmail.com wrote: On Wed, Mar 7, 2012 at 4:53 PM, Thom Brown t...@linux.com wrote: I've also since found that if I issue a VACUUM, CLUSTER or REINDEX on a read-only standby, the BEFORE ANY COMMAND trigger fires. I don't think any trigger should fire on a read-only standby. Why ever not? Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Someone could even use a command trigger that detects which machine it's running on, and if it's the standby, uses dblink to execute the command on the master, or something crazy like that. Command triggers could also be useful for logging all attempts to execute a particular command, which is probably still appropriate on the standby. I think that it will be a good thing to try to treat Hot Standby mode as much like regular operation as is reasonably possible, across the board. I see your point. My suggestion to Dimitri in another email was either enable triggers for all commands or none. At the moment it's only available on utility commands. -- Thom
Re: [HACKERS] xlog location arithmetic
Fujii Masao masao.fu...@gmail.com writes: Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. It's also worth committing this patch? Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. regards, tom lane
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 9:37 AM, Tom Lane t...@sss.pgh.pa.us wrote: Fujii Masao masao.fu...@gmail.com writes: Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. It's also worth committing this patch? Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. The point is that it would be useful to use it on the difference between two xlog locations, but that is a numeric value, not int8, because of signedness issues. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
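For readers following along, pg_size_pretty-style rendering of such a difference can be approximated as follows. This is an illustrative Python sketch only; the rounding is not PostgreSQL's exact algorithm, and the unit ladder is extended past what the server function supported at the time.

```python
def size_pretty(n: int) -> str:
    # Loose approximation of pg_size_pretty: keep scaling by 1024 while the
    # magnitude stays at or above 10 * 1024, rounding at each step.
    units = ["bytes", "kB", "MB", "GB", "TB", "PB", "EB"]
    i = 0
    while abs(n) >= 10 * 1024 and i < len(units) - 1:
        n = (n + 512) // 1024
        i += 1
    return f"{n} {units[i]}"

print(size_pretty(2048))                  # 2048 bytes
print(size_pretty(18374686475393433600))  # the out-of-int8-range diff from this thread
```

The second value is the maximum xlog-location difference discussed later in the thread; it is exactly the kind of input that motivated accepting numeric rather than bigint.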
Re: [HACKERS] Command Triggers, patch v11
On Fri, Mar 9, 2012 at 9:35 AM, Thom Brown t...@linux.com wrote: I see your point. My suggestion to Dimitri in another email was either enable triggers for all commands or none. At the moment it's only available on utility commands. Yeah, that's clearly not the best of all possible worlds. :-) I think we had better look seriously at postponing this patch to 9.3. Your reviewing is obviously moving things forward rapidly, but I think it's unrealistic to think this is going to be in a committable state any time in the next week or two. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] pgsql_fdw, FDW for PostgreSQL server
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes: (2012/03/09 14:00), Tom Lane wrote: Attached is a draft patch for that. 1. FileFdwPlanState.pages and FileFdwPlanState.ntuples seem redundant. Why not use RelOptInfo.pages and RelOptInfo.tuples? I intentionally avoided setting RelOptInfo.pages because that would have other effects on planning (cf total_table_pages or whatever it's called). It's possible that that would actually be desirable, depending on whether you think the external file should be counted as part of the query's disk-access footprint; but it would take some investigation to conclude that, which I didn't feel like doing right now. Likewise, I'm not sure offhand what side effects might occur from using RelOptInfo.tuples, and didn't want to change file_fdw's behavior without more checking. 2. IMHO RelOptInfo.fdw_private seems confusing. How about renaming it to e.g., RelOptInfo.fdw_state? Why is that better? It seems just as open to confusion with another field (ie, the execution-time fdw_state). I thought for a little bit about trying to give different names to all four of the fdw private fields (RelOptInfo, Path, Plan, PlanState) but it's not obvious what naming rule to use, and at least the last two of those can't be changed without breaking existing FDW code. regards, tom lane
Re: [HACKERS] logging in high performance systems.
On Wed, Nov 23, 2011 at 9:28 PM, Theo Schlossnagle je...@omniti.com wrote: We have a need for logging in systems where it isn't feasible to log to disk as it negatively impacts performance. I'd like to be able to creatively solve this problem without modifying the core, but today I cannot. So... here's my first whack at solving this with some flexibility. The first thing I did was add hook points where immediate statement logging happens pre_exec and those that present duration post_exec. These should, with optimization turned on, have only a few instructions of impact when no hooks are registered (we could hoist the branch outside the function call if that were identified as an issue). https://github.com/postwait/postgres/commit/62bb9dfa2d373618f10e46678612720a3a01599a The second thing I did was write a sample use of those hooks to implement a completely non-blocking fifo logger. (if it would block, it drops the log line). The concept is that we could run this without risk of negative performance impact due to slow log reading (choosing to drop logs in lieu of pausing). And a simple process could be written to consume from the fifo. We use this method in other systems to log many 10s of thousands of log lines per second with negligible impact on performance. https://github.com/postwait/postgres/commit/c8f5a346c7b2c3eba9f72ea49077dc72f03a2679 Thoughts? Feedback? As can be seen, the patch is pretty tiny. Theo, Tom recently committed something by another author that is along similar lines to what you have here (I think). Can you comment on whether you think more is still needed and what the differences are between that approach and yours? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
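The drop-rather-than-block behaviour Theo describes can be sketched as follows. This is a hypothetical Python illustration of the idea only — the actual patch is C inside the backend — and the function name `try_log` is invented for the sketch. On POSIX, opening a FIFO for writing with O_NONBLOCK fails with ENXIO when no reader is attached, which is exactly the "would block, so drop the line" case.

```python
import errno
import os
import tempfile

def try_log(fifo_path: str, line: str) -> bool:
    """Attempt a non-blocking write to a fifo; drop the line (return False)
    rather than ever letting the backend stall on a slow or absent reader."""
    try:
        fd = os.open(fifo_path, os.O_WRONLY | os.O_NONBLOCK)
    except OSError as e:
        if e.errno in (errno.ENXIO, errno.EAGAIN):
            return False  # no reader attached / fifo full: drop the line
        raise
    try:
        os.write(fd, line.encode() + b"\n")
        return True
    except BlockingIOError:
        return False      # fifo buffer full: drop the line
    finally:
        os.close(fd)

d = tempfile.mkdtemp()
path = os.path.join(d, "pg.fifo")
os.mkfifo(path)
print(try_log(path, "duration: 0.42 ms"))  # False: nobody is reading yet
```

A separate consumer process opening the fifo for reading would flip the result to True, which is the deployment model described in the message.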
Re: [HACKERS] xlog location arithmetic
I wrote: Fujii Masao masao.fu...@gmail.com writes: Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. It's also worth committing this patch? Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Actually ... now that I look at it, isn't it completely bogus to be using numeric for the result of pg_xlog_location_diff? There's no way for the difference of two xlog locations to be anywhere near as wide as 64 bits. That'd only be possible if XLogFileSize exceeded 1GB, which we don't let it get anywhere near. regards, tom lane
Re: [HACKERS] xlog location arithmetic
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:37 AM, Tom Lane t...@sss.pgh.pa.us wrote: Why would it be useful to use pg_size_pretty on xlog locations? The point is that it would be useful to use it on the difference between two xlog locations, Um, that is exactly the claim I was questioning. Why is that useful? but that is a numeric value, not int8, because of signedness issues. See my followup --- this statement appears factually incorrect, whatever you may feel about the usefulness issue. regards, tom lane
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 14:47, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 9:35 AM, Thom Brown t...@linux.com wrote: I see your point. My suggestion to Dimitri in another email was either enable triggers for all commands or none. At the moment it's only available on utility commands. Yeah, that's clearly not the best of all possible worlds. :-) I think we had better look seriously at postponing this patch to 9.3. Your reviewing is obviously moving things forward rapidly, but I think it's unrealistic to think this is going to be in a committable state any time in the next week or two. That's unfortunate if that's the case. I'll dedicate any bandwidth necessary for additional testing as I would really like to see this get in, but if it transpires there's more outstanding work and polishing needed than time Dimitri personally has available, then I guess it'll have to be a 9.3 feature. :'( -- Thom
Re: [HACKERS] Command Triggers, patch v11
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:22 AM, Thom Brown t...@linux.com wrote: Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Um, surely the "you can't do that in a read-only session" error is going to get thrown long before the command trigger could be called? regards, tom lane
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 15:05, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:22 AM, Thom Brown t...@linux.com wrote: Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Um, surely the "you can't do that in a read-only session" error is going to get thrown long before the command trigger could be called? Yes, at the moment that's the case. I said that this wasn't the case for utility commands but I've noticed the message is different for those:

ERROR: cannot execute VACUUM during recovery
vs
ERROR: cannot execute CREATE TABLE in a read-only transaction

So my complaint around that was misleading and wrong. -- Thom
[HACKERS] pg_upgrade and umask
What do people think of pg_upgrade setting its umask to 0077 so the log and SQL files are only readable by the postgres user?

-rwx------ 1 postgres postgres   41 Mar  9 09:59 delete_old_cluster.sh*
-rw------- 1 postgres postgres 6411 Mar  8 21:56 pg_upgrade_dump_all.sql
-rw------- 1 postgres postgres 5651 Mar  8 21:56 pg_upgrade_dump_db.sql
-rw------- 1 postgres postgres  738 Mar  8 21:56 pg_upgrade_dump_globals.sql
-rw------- 1 postgres postgres 1669 Mar  8 21:56 pg_upgrade_internal.log
-rw------- 1 postgres postgres 1667 Mar  8 21:56 pg_upgrade_restore.log
-rw------- 1 postgres postgres 1397 Mar  8 21:56 pg_upgrade_server.log
-rw------- 1 postgres postgres  385 Mar  8 21:56 pg_upgrade_utility.log

The umask would also affect files it copies like clog and the data files, but those already have only postgres permissions. The downside is that users running pg_upgrade with 'su' or 'RUNAS' would need to use those to inspect the log files for errors. FYI, delete_old_cluster.sh probably has to be run as root, but root seems able to run an executable that it doesn't own. I am thinking it isn't worth the complexity of using umask and restricting those files, but wanted opinions. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] pg_upgrade and umask
Bruce Momjian br...@momjian.us writes: What do people think of pg_upgrade setting its umask to 0077 so the log and SQL files are only readable by the postgres user? +1 for restricting the log files, but I'm dubious that you should alter the existing permissions on copied files in any way. IOW, umask seems like the wrong tool. regards, tom lane
Re: [HACKERS] pg_upgrade and umask
On Fri, 2012-03-09 at 10:10 -0500, Bruce Momjian wrote: What do people think of pg_upgrade setting its umask to 0077 so the log and SQL files are only readable by the postgres user? That would be good to have.
Re: [HACKERS] pg_upgrade and umask
On Fri, Mar 09, 2012 at 10:18:31AM -0500, Tom Lane wrote: Bruce Momjian br...@momjian.us writes: What do people think of pg_upgrade setting its umask to 0077 so the log and SQL files are only readable by the postgres user? +1 for restricting the log files, but I'm dubious that you should alter the existing permissions on copied files in any way. IOW, umask seems like the wrong tool. I was afraid you would say that. :-( The problem is that these files are being created often by shell redirects, e.g. pg_dump -f out 2> log_file. There is no clean way to control the file creation permissions in this case --- only umask gives us a process-level setting. Actually, one crafty idea would be to do the umask only when I exec something, and when I create the initial files with the new banner you suggested. Let me look into that. Frankly, the permissions are already being modified by the default umask, e.g. 0022. Do we want a zero umask? -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] Command Triggers, patch v11
On Fri, Mar 9, 2012 at 10:05 AM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:22 AM, Thom Brown t...@linux.com wrote: Sorry, I meant any command trigger. It's because none of the commands can be run on a standby, so the triggers don't seem appropriate. I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Um, surely the "you can't do that in a read-only session" error is going to get thrown long before the command trigger could be called? Hmmm yeah. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 9:55 AM, Tom Lane t...@sss.pgh.pa.us wrote: I wrote: Fujii Masao masao.fu...@gmail.com writes: Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. It's also worth committing this patch? Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Actually ... now that I look at it, isn't it completely bogus to be using numeric for the result of pg_xlog_location_diff? There's no way for the difference of two xlog locations to be anywhere near as wide as 64 bits. That'd only be possible if XLogFileSize exceeded 1GB, which we don't let it get anywhere near.

rhaas=# select pg_xlog_location_diff('FFFFFFFF/0', '0/0');
 pg_xlog_location_diff
-----------------------
  18374686475393433600
(1 row)

rhaas=# select pg_xlog_location_diff('FFFFFFFF/0', '0/0')::int8;
ERROR:  bigint out of range
STATEMENT:  select pg_xlog_location_diff('FFFFFFFF/0', '0/0')::int8;

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] pg_prewarm
Robert Haas robertmh...@gmail.com writes: https://github.com/klando/pgfincore Oh, huh. I had no idea that pgfincore could do that. I thought that was just for introspection; I didn't realize it could actually move data around for you. Well, I thought Cédric already had included shared buffers related facilities, so that makes us square, it seems… Is it possible with your tool to snapshot the OS and PostgreSQL cache in order to warm a Hot Standby server? Nope. It doesn't have any capabilities to probe for information, because I knew those things already existed in pg_buffercache and pgfincore, and also because they weren't what I needed to solve my immediate problem, which was a way to get the entirety of a relation into shared_buffers. So that's complementary with pgfincore, ok. I still wish we could maintain the RAM content HOT on the standby in the same way we are able to maintain its data set on disk, though. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] elegant and effective way for running jobs inside a database
2012/3/6 Alvaro Herrera alvhe...@commandprompt.com: It seems to me that the only thing that needs core support is the ability to start up the daemon when postmaster is ready to accept queries, and shut the daemon down when postmaster kills backends (either because one crashed, or because it's shutting down). +10 Even though it is different from the original requirement, I also would like to see the feature to run daemon processes managed by an extension according to start/stop of the postmaster. I'm trying to implement an extension that uses GPU devices to help calculation of complex qualifiers. CUDA or OpenCL has a limitation that does not allow more than a particular number of processes to open a device concurrently. So, I launch calculation threads that handle all the communication with GPU devices on behalf of the postmaster process; however, it is not a graceful design, of course. Each backend communicates with the calculation thread via a shared-memory segment, thus it should be a child process of postmaster. So, although my motivation is not something like Cron in core, it seems to me Alvaro's idea is quite desirable and reasonable, to be discussed in v9.3. Thanks, -- KaiGai Kohei kai...@kaigai.gr.jp
Re: [HACKERS] xlog location arithmetic
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:55 AM, Tom Lane t...@sss.pgh.pa.us wrote: Actually ... now that I look at it, isn't it completely bogus to be using numeric for the result of pg_xlog_location_diff? rhaas=# select pg_xlog_location_diff('FFFFFFFF/0', '0/0')::int8; ERROR: bigint out of range Oh ... I see my mistake. I was looking at this:

/*
 * result = XLogFileSize * (xlogid1 - xlogid2) + xrecoff1 - xrecoff2
 */

and confusing XLogFileSize with XLogSegSize. Not the best choice of names. regards, tom lane
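The arithmetic in that comment can be checked directly. A small Python sketch follows; the constant mirrors the pre-9.3 XLogFileSize with the default 16 MB segments (255 usable segments per xlogid, i.e. 0xFF000000 bytes), which is an assumption of this illustration rather than something stated in the thread.

```python
# Pre-9.3 WAL layout assumption: each xlogid spans 0xFF000000 bytes
# (XLogFileSize), not a full 4 GB -- the distinction Tom tripped over.
XLOG_FILE_SIZE = 0xFF000000

def xlog_location_diff(loc1: str, loc2: str) -> int:
    # result = XLogFileSize * (xlogid1 - xlogid2) + xrecoff1 - xrecoff2
    xlogid1, xrecoff1 = (int(p, 16) for p in loc1.split("/"))
    xlogid2, xrecoff2 = (int(p, 16) for p in loc2.split("/"))
    return XLOG_FILE_SIZE * (xlogid1 - xlogid2) + xrecoff1 - xrecoff2

d = xlog_location_diff("FFFFFFFF/0", "0/0")
print(d)              # 18374686475393433600, matching Robert's psql output
print(d > 2**63 - 1)  # True: too wide for int8, hence the numeric result type
```

The second print is the whole argument in one line: the maximum difference exceeds the int8 range, so Robert's choice of numeric was not bogus after all.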
Re: [HACKERS] pg_upgrade and umask
Bruce Momjian br...@momjian.us writes: The problem is that these files are being created often by shell redirects, e.g. pg_dump -f out 2> log_file. There is no clean way to control the file creation permissions in this case --- only umask gives us a process-level setting. Actually, one crafty idea would be to do the umask only when I exec something, and when I create the initial files with the new banner you suggested. Let me look into that. You could create empty log files with the desired permissions, and then do the execs with >>log_file, and thereby not have to globally change umask. Frankly, the permissions are already being modified by the default umask, e.g. 0022. Do we want a zero umask? I'm not so worried about default umask; nobody's complained yet about wrong permissions on pg_upgrade output files. But umask 077 would be likely to do things like get rid of group access to postgresql.conf, which some people intentionally set. regards, tom lane
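Tom's suggestion — create the log file first with tight permissions, then let later redirections merely append — can be sketched in a few lines. This is a hypothetical Python illustration of the technique (pg_upgrade itself is C); the file name is invented for the example.

```python
import os
import stat
import tempfile

os.umask(0o022)  # a typical default umask; left untouched by the technique

d = tempfile.mkdtemp()
log_path = os.path.join(d, "pg_upgrade_server.log")

# Pre-create the log file owner-read/write only, instead of flipping the
# process-wide umask around every exec.
fd = os.open(log_path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
os.close(fd)

# A later ">>log_file"-style redirection appends without changing the mode.
with open(log_path, "a") as f:
    f.write("command output\n")

mode = stat.S_IMODE(os.stat(log_path).st_mode)
print(oct(mode))  # 0o600, regardless of later appends
```

The design point is the same one Tom makes: creation is the only moment permissions are decided, so restricting that one moment avoids the postgresql.conf side effects a global umask 077 would cause.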
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 10:33 AM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: So that's complementary with pgfincore, ok. I still wish we could maintain the RAM content HOT on the standby in the same way we are able to maintain its data set on disk, though. That's an interesting idea. It seems tricky, though. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 5:21 AM, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 5:24 AM, Fujii Masao masao.fu...@gmail.com wrote: When a relation is loaded into cache, are corresponding indexes also loaded at the same time? No, although if you wanted to do that you could easily do so, using a query like this: select pg_prewarm(indexrelid, 'main', 'read', NULL, NULL) from pg_index where indrelid = 'your_table_name'::regclass; Could that be included in an example? Maybe admins are expected to know how to construct such queries off the cuff, but I always need to look it up each time which is rather tedious. In the patch: s/no special projection/no special protection/ Thanks for putting this together. Cheers, Jeff
Re: [HACKERS] elegant and effective way for running jobs inside a database
On Fri, Mar 9, 2012 at 9:36 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote: 2012/3/6 Alvaro Herrera alvhe...@commandprompt.com: It seems to me that the only thing that needs core support is the ability to start up the daemon when postmaster is ready to accept queries, and shut the daemon down when postmaster kills backends (either because one crashed, or because it's shutting down). So, although my motivation is not something like Cron in core, it seems to me Alvaro's idea is quite desirable and reasonable, to be discussed in v9.3. 100% agree (having re-read the thread and Alvaro's idea having sunk in). Being able to set up daemon processes side by side with the postmaster would fit the bill nicely. It's pretty interesting to think of all the places you could go with it. merlin
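As a rough illustration of the lifecycle Alvaro describes — start a side daemon once the postmaster is ready, take it down when the postmaster shuts down — here is a hypothetical Python sketch. The class and method names are invented for the example; real core support would of course live inside the postmaster's C state machine.

```python
import signal
import subprocess
import sys

class SideDaemon:
    """Hypothetical supervisor for an extension-managed daemon tied to
    the postmaster's lifecycle."""

    def __init__(self, argv):
        self.argv = argv
        self.proc = None

    def on_postmaster_ready(self):
        # Postmaster is accepting queries: launch the side daemon.
        self.proc = subprocess.Popen(self.argv)

    def on_postmaster_shutdown(self):
        # Postmaster is going down (or resetting backends): stop it too.
        if self.proc and self.proc.poll() is None:
            self.proc.send_signal(signal.SIGTERM)
            self.proc.wait()

daemon = SideDaemon([sys.executable, "-c", "import time; time.sleep(60)"])
daemon.on_postmaster_ready()
print(daemon.proc.poll() is None)  # True: daemon running alongside
daemon.on_postmaster_shutdown()
print(daemon.proc.returncode)      # -15 on POSIX: terminated by SIGTERM
```

This also hints at why core support matters: only the postmaster reliably knows the "ready" and "shutting down" transitions that should drive these two callbacks.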
Re: [HACKERS] pg_upgrade and umask
On Fri, Mar 09, 2012 at 10:41:53AM -0500, Tom Lane wrote: Bruce Momjian br...@momjian.us writes: The problem is that these files are being created often by shell redirects, e.g. pg_dump -f out 2> log_file. There is no clean way to control the file creation permissions in this case --- only umask gives us a process-level setting. Actually, one crafty idea would be to do the umask only when I exec something, and when I create the initial files with the new banner you suggested. Let me look into that. You could create empty log files with the desired permissions, and then do the execs with >>log_file, and thereby not have to globally change umask. Yes, that is what I have done, with the attached patch. I basically wrapped the fopen call with umask calls, and have the system() call wrapped too. That takes care of all the files pg_upgrade creates. Frankly, the permissions are already being modified by the default umask, e.g. 0022. Do we want a zero umask? I'm not so worried about default umask; nobody's complained yet about wrong permissions on pg_upgrade output files. But umask 077 would be likely to do things like get rid of group access to postgresql.conf, which some people intentionally set. Yes, that was my conclusion too, but I wanted to ask. FYI, this doesn't affect the install itself, just what pg_upgrade changes, and it doesn't touch postgresql.conf, but, as you, I am worried there might be long-term problems with an aggressive umask that covered the entire executable. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +

diff --git a/contrib/pg_upgrade/check.c b/contrib/pg_upgrade/check.c
new file mode 100644
index a5f63eb..7905534
*** a/contrib/pg_upgrade/check.c
--- b/contrib/pg_upgrade/check.c
*** issue_warnings(char *sequence_script_fil
*** 165,176
  if (sequence_script_file_name)
  {
      prep_status("Adjusting sequences");
!     exec_prog(true,
!
          SYSTEMQUOTE "\"%s/psql\" --set ON_ERROR_STOP=on --no-psqlrc --port %d --username \"%s\"
!         -f \"%s\" --dbname template1 >> \"%s\"" SYSTEMQUOTE,
          new_cluster.bindir, new_cluster.port, os_info.user,
!         sequence_script_file_name, log_opts.filename2);
      unlink(sequence_script_file_name);
      check_ok();
  }
--- 165,177 ----
  if (sequence_script_file_name)
  {
      prep_status("Adjusting sequences");
!     exec_prog(true, UTILITY_LOG_FILE,
!         SYSTEMQUOTE "\"%s/psql\" --echo-queries
!         --set ON_ERROR_STOP=on --no-psqlrc --port %d --username \"%s\"
!         -f \"%s\" --dbname template1 >> \"%s\" 2>&1" SYSTEMQUOTE,
          new_cluster.bindir, new_cluster.port, os_info.user,
!         sequence_script_file_name, UTILITY_LOG_FILE);
      unlink(sequence_script_file_name);
      check_ok();
  }
diff --git a/contrib/pg_upgrade/controldata.c b/contrib/pg_upgrade/controldata.c
new file mode 100644
index 5239601..e01280d
*** a/contrib/pg_upgrade/controldata.c
--- b/contrib/pg_upgrade/controldata.c
*** get_control_data(ClusterInfo *cluster, b
*** 126,136
  /* we have the result of cmd in output. so parse it line by line now */
  while (fgets(bufin, sizeof(bufin), output))
  {
!     if (log_opts.debug)
!         fputs(bufin, log_opts.debug_fd);
  #ifdef WIN32
-
  /*
   * Due to an installer bug, LANG=C doesn't work for PG 8.3.3, but does
   * work 8.2.6 and 8.3.7, so check for non-ASCII output and suggest a
--- 126,134 ----
  /* we have the result of cmd in output. so parse it line by line now */
  while (fgets(bufin, sizeof(bufin), output))
  {
!     pg_log(PG_VERBOSE, "%s", bufin);
  #ifdef WIN32
  /*
   * Due to an installer bug, LANG=C doesn't work for PG 8.3.3, but does
   * work 8.2.6 and 8.3.7, so check for non-ASCII output and suggest a
diff --git a/contrib/pg_upgrade/dump.c b/contrib/pg_upgrade/dump.c
new file mode 100644
index 772ca37..b1d4034
*** a/contrib/pg_upgrade/dump.c
--- b/contrib/pg_upgrade/dump.c
***
*** 11,16 ****
--- 11,17 ----
  #include "pg_upgrade.h"
+ #include <sys/types.h>

  void generate_old_dump(void)
*** generate_old_dump(void)
*** 22,31
   * --binary-upgrade records the width of dropped columns in pg_class, and
   * restores the frozenid's for databases and relations.
   */
!     exec_prog(true, SYSTEMQUOTE "\"%s/pg_dumpall\" --port %d --username \"%s\"
!         --schema-only --binary-upgrade > \"%s/" ALL_DUMP_FILE "\""
!         SYSTEMQUOTE, new_cluster.bindir, old_cluster.port, os_info.user, os_info.cwd);
      check_ok();
  }
--- 23,33
   * --binary-upgrade records the width of dropped columns in pg_class, and
   * restores the frozenid's for databases and relations.
   */
!     exec_prog(true, UTILITY_LOG_FILE, SYSTEMQUOTE "\"%s/pg_dumpall\" --port %d --username \"%s\"
!         --schema-only --binary-upgrade > \"%s/%s\" 2> \"%s\"
!         SYSTEMQUOTE,
Re: [HACKERS] pg_prewarm
On Fri, Mar 9, 2012 at 10:53 AM, Jeff Janes jeff.ja...@gmail.com wrote: On Fri, Mar 9, 2012 at 5:21 AM, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 5:24 AM, Fujii Masao masao.fu...@gmail.com wrote: When a relation is loaded into cache, are corresponding indexes also loaded at the same time? No, although if you wanted to do that you could easily do so, using a query like this: select pg_prewarm(indexrelid, 'main', 'read', NULL, NULL) from pg_index where indrelid = 'your_table_name'::regclass; Could that be included in an example? Maybe admins are expected to know how to construct such queries off the cuff, but I always need to look it up each time which is rather tedious. Not a bad idea. I thought of including an Examples section, but it didn't seem quite worth it for the simple case of prewarming a heap. Might be worth it to also include this. In the patch: s/no special projection/no special protection/ OK, will fix. Thanks for putting this together. I will confess that it was 0% altruistic. Not having it was ruining my day. :-) -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] elegant and effective way for running jobs inside a database
On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote: 100% agree (having re-read the thread and Alvaro's idea having sunk in). Being able to set up daemon processes side by side with the postmaster would fit the bill nicely. It's pretty interesting to think of all the places you could go with it. pgAgent could use it *right now*. I keep forgetting to restart it after restarting PostgreSQL and finding after a day or so that no jobs have run. Best, David
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 16:37, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 9:55 AM, Tom Lane t...@sss.pgh.pa.us wrote: Actually ... now that I look at it, isn't it completely bogus to be using numeric for the result of pg_xlog_location_diff? rhaas=# select pg_xlog_location_diff('/0', '0/0')::int8; ERROR: bigint out of range Oh ... I see my mistake. I was looking at this: /* * result = XLogFileSize * (xlogid1 - xlogid2) + xrecoff1 - xrecoff2 */ and confusing XLogFileSize with XLogSegSize. Not the best choice of names. Yeah, the use of XLogFile to mean something other than, well a file in the xlog, is greatly annoying.. I guess we could change it, but it goes pretty deep in the system so it's not a small change... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
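The arithmetic in the quoted comment can be sketched outside the server. The following Python is illustrative only (names and the parsing helper are not PostgreSQL source); it also shows why the pre-9.3 XLogFileSize is 255 segments rather than a full 4 GB, which is exactly the XLogFileSize/XLogSegSize distinction Tom tripped over.

```python
# Sketch (not PostgreSQL source) of the byte difference computed by
# pg_xlog_location_diff, per the quoted comment:
#   result = XLogFileSize * (xlogid1 - xlogid2) + xrecoff1 - xrecoff2
# Pre-9.3 WAL addresses are (xlogid, xrecoff) pairs; XLogFileSize is
# the usable bytes per 4 GB "logical log file", slightly less than
# 2**32 because the last segment of each file is skipped.

XLOG_SEG_SIZE = 16 * 1024 * 1024                      # default 16 MB segments
XLOG_SEGS_PER_FILE = 0xFFFFFFFF // XLOG_SEG_SIZE      # 255
XLOG_FILE_SIZE = XLOG_SEGS_PER_FILE * XLOG_SEG_SIZE   # 4278190080 bytes

def parse_location(loc: str) -> tuple[int, int]:
    """Parse a 'XXX/YYY' text WAL location into (xlogid, xrecoff)."""
    hi, lo = loc.split("/")
    return int(hi, 16), int(lo, 16)

def xlog_location_diff(loc1: str, loc2: str) -> int:
    x1, r1 = parse_location(loc1)
    x2, r2 = parse_location(loc2)
    return XLOG_FILE_SIZE * (x1 - x2) + r1 - r2

print(xlog_location_diff("1/0", "0/0"))   # 4278190080, one logical file
```

With an xlogid difference near 0xFFFFFFFF the result overflows a signed 64-bit value, which is why the `::int8` cast above raises "bigint out of range".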
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 15:37, Tom Lane t...@sss.pgh.pa.us wrote: Fujii Masao masao.fu...@gmail.com writes: Euler proposed one more patch upthread, which replaces pg_size_pretty(bigint) with pg_size_pretty(numeric) so that pg_size_pretty(pg_xlog_location_diff()) succeeds. Is it also worth committing this patch? Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Given the expense, perhaps we need two different (overloaded) functions instead? -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] xlog location arithmetic
Magnus Hagander mag...@hagander.net writes: On Fri, Mar 9, 2012 at 15:37, Tom Lane t...@sss.pgh.pa.us wrote: Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Given the expense, perhaps we need two different (overloaded) functions instead? That would be a workable solution, but I continue to not believe that this is useful enough to be worth the trouble. regards, tom lane
Re: [HACKERS] Command Triggers, patch v11
Robert Haas robertmh...@gmail.com writes: I'm not convinced. Right now, it's fairly useless - all the triggers could possibly do is throw an error, and an error is going to get thrown anyway, so it's only a question of which error message the user will see. But we discussed before the idea of adding a capability for BEFORE triggers to request that the actual execution of the command get skipped, and then it's possible to imagine this being useful. Someone could even use a command trigger that detects which machine it's running on, and if it's the standby, uses dblink to execute the command on the master, or something crazy like that. Command triggers could also be useful for logging all attempts to execute a particular command, which is probably still appropriate on the standby. There are some other use cases, like using plsh to go apt-get install an extension's package when you see the master just created it, so that your read-only queries on the hot standby have a chance of loading the code you need. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 18:18, Tom Lane t...@sss.pgh.pa.us wrote: Magnus Hagander mag...@hagander.net writes: On Fri, Mar 9, 2012 at 15:37, Tom Lane t...@sss.pgh.pa.us wrote: Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Given the expense, perhaps we need two different (overloaded) functions instead? That would be a workable solution, but I continue to not believe that this is useful enough to be worth the trouble. There's certainly some use to being able to prettify it. Wouldn't a pg_size_pretty(numeric) also be useful if you want to pg_size_pretty() a sum() of something? Used on files it doesn't make too much sense, given how big those files have to be, but it can be used on other things as well... I can see a use case for having a pg_size_pretty(numeric) as an option. Not necessarily a very big one, but a >0 one. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] Command Triggers, patch v11
Robert Haas robertmh...@gmail.com writes: I think we had better look seriously at postponing this patch to 9.3. I understand why you're drawing that conclusion, but I don't think that's the best we can do here, by a long shot. Your reviewing is obviously moving things forward rapidly, but I think it's unrealistic to think this is going to be in a committable state any time in the next week or two. What's happening is that I've been abusing Thom's availability, leaving him with the testing and fixing oddities along the way. Those came mainly from an attempt at being as automatic as possible when writing the commands support. I'm now about done reviewing each and every call site and having them covered in the tests. What remains to be done now is how to pass down arguments to the triggers (switching from function arguments to trigger-style magic variables, per Tom's request), and reviewing REINDEX and CREATE OPERATOR CLASS support. That's about it. The API and the call site locations have been stable for a long time now, and follow your previous round of review. The catalog storage and commands grammar are OK too; we've been hashing them out. We've been very careful about not introducing code path hazards, the only novelty being a new place to ERROR out (no fancy silent utility command execution control). Really, I would think we're about there now. I would be rather surprised not to be able to finish that patch by the end of next week, and I will arrange to devote more time to it each day if that's what's needed. Remember that we intend to build an extension providing a C-coded function doing the heavy lifting of back-parsing the command string from the Node parse tree, with the goal of having Slony, Londiste and Bucardo able to use that to implement support for DDL. I really want that to happen in 2012.
Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] check function patch
Hello Alvaro, here is a new version -- I merged Peter's doc changes. I created a new header, functioncmds.h. This file contains only the lines related to the checker. I didn't want to clutter this patch with a header file reorganization. Regards Pavel 2012/3/8 Pavel Stehule pavel.steh...@gmail.com: Hello, here is another version reflecting your last comments. I removed the magic constants. This is not merged with Peter's changes; I'll do it tomorrow. Probably there will be some bigger changes in header files, but that can be a next step. Regards Pavel 2012/3/8 Alvaro Herrera alvhe...@alvh.no-ip.org: Hi guys, sorry, I'm stuck in an unfamiliar webmail. I checked the patch Petr just posted. http://archives.postgresql.org/pgsql-hackers/2012-03/msg00482.php I have two comments. First, I notice that the documentation changes have two places that describe the columns that a function checker returns -- one in the plhandler page, another in the create language page. I think it should exist only in one of those, probably the create language one; and the plhandler page should just say the checker should comply with the specification at create language. Or something like that. Also, the fact that the tuple description is prose makes it hard to read; I think it should be a table -- three columns: name, type, description. My second comment is that the checker tuple descriptor seems to have changed in the code. In the patch I posted, the FunctionCheckerDesc() function was not static; in this patch it has been made static. But what I intended was that the other places that need a descriptor for anything would use this function to get one, instead of building them by hand. There are two such places currently, one in CreateProceduralLanguage. I think this should be simply walking the tupdesc->attrs array to create the arrays it needs for the ProcedureCreate call -- should be a rather easy change. The other place is plpgsql's report_error().
Honestly I don't like this function at all due to the way it's assuming what the tupledesc looks like. I'm not sure how to improve it, however, but it seems wrong to me. One reason to do it this way (i.e. centralize knowledge of what the tupdesc looks like) is that otherwise they get out of sync -- I notice that CreateProceduralLanguage now knows that there are 15 columns while the other places believe there are only 11. reduced_pl_checker_2012-03-09-1.patch.gz Description: GNU Zip compressed data
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 12:38 PM, Magnus Hagander mag...@hagander.net wrote: On Fri, Mar 9, 2012 at 18:18, Tom Lane t...@sss.pgh.pa.us wrote: Magnus Hagander mag...@hagander.net writes: On Fri, Mar 9, 2012 at 15:37, Tom Lane t...@sss.pgh.pa.us wrote: Why would it be useful to use pg_size_pretty on xlog locations? -1 because of the large expense of bigint->numeric->whatever conversion that would be added to existing uses. Given the expense, perhaps we need two different (overloaded) functions instead? That would be a workable solution, but I continue to not believe that this is useful enough to be worth the trouble. There's certainly some use to being able to prettify it. Wouldn't a pg_size_pretty(numeric) also be useful if you want to pg_size_pretty() a sum() of something? Used on files it doesn't make too much sense, given how big those files have to be, but it can be used on other things as well... I can see a use case for having a pg_size_pretty(numeric) as an option. Not necessarily a very big one, but a >0 one. +1. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
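For readers wondering what a pg_size_pretty(numeric) overload would actually have to compute, here is a rough Python approximation of the bigint version's behaviour. This is an assumption for illustration, not PostgreSQL source: the function name and the exact rounding rule are simplified.

```python
# Approximate sketch of pg_size_pretty: pick the largest unit that
# keeps the value "big enough", rounding half up at each 1024 step.
# A numeric overload would do the same, just on arbitrary-precision
# input (the point of the overload: no bigint overflow).

def size_pretty(n: int) -> str:
    """Roughly mimic pg_size_pretty(bigint) output for positive sizes."""
    units = ["bytes", "kB", "MB", "GB", "TB"]
    u = 0
    while abs(n) >= 10 * 1024 and u < len(units) - 1:
        n = (n + 512) // 1024    # divide by 1024, rounding half up
        u += 1
    return f"{n} {units[u]}"

print(size_pretty(1000000))      # 977 kB
print(size_pretty(1000000000))   # 954 MB
```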
Re: [HACKERS] elegant and effective way for running jobs inside a database
On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheeler da...@justatheory.com wrote: On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote: 100% agree (having re-read the thread and Alvaro's idea having sunk in). Being able to set up daemon processes side by side with the postmaster would fit the bill nicely. It's pretty interesting to think of all the places you could go with it. pgAgent could use it *right now*. I keep forgetting to restart it after restarting PostgreSQL and finding after a day or so that no jobs have run. That can and should be fixed by teaching pgAgent that failing to connect to the server, or getting disconnected, is not a fatal error, but a reason to sleep and retry. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] elegant and effective way for running jobs inside a database
On 03/09/2012 01:40 PM, Robert Haas wrote: On Fri, Mar 9, 2012 at 12:02 PM, David E. Wheelerda...@justatheory.com wrote: On Mar 9, 2012, at 7:55 AM, Merlin Moncure wrote: 100% agree (having re-read the thread and Alvaro's idea having sunk in). Being able to set up daemon processes side by side with the postmaster would fit the bill nicely. It's pretty interesting to think of all the places you could go with it. pgAgent could use it *right now*. I keep forgetting to restart it after restarting PostgreSQL and finding after a day or so that no jobs have run. That can and should be fixed by teaching pgAgent that failing to connect to the server, or getting disconnected, is not a fatal error, but a reason to sleep and retry. Yeah. It's still not entirely clear to me what a postmaster-controlled daemon is going to be able to do that an external daemon can't. cheers andrew
[HACKERS] Rules containing INSERT/UPDATE lack dependencies on target columns
I looked into the misbehavior reported here: http://archives.postgresql.org/pgsql-bugs/2012-03/msg00068.php The reason the ALTER TABLE fails to fail is $SUBJECT: it goes looking for pg_depend entries showing that rewrite rules depend on the column to be altered, but there isn't one. This is basically because dependency.c only generates column dependencies for Vars, and the representation of an INSERT or UPDATE does not use a Var to specify a target column. ISTM we need to add such dependencies. Aside from the change-of-type issue reported above, you can get very curious behavior if you remove a target column with ALTER TABLE DROP COLUMN, and I don't think we want to allow that. I'm inclined to only fix this in HEAD. Back-patching would have limited usefulness since it wouldn't cause the missing pg_depend entries to spring into existence for existing rules; and given the lack of prior reports, it's clearly not something that comes up often. regards, tom lane
Re: [HACKERS] Command Triggers, patch v11
On Fri, Mar 9, 2012 at 12:51 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: Robert Haas robertmh...@gmail.com writes: I think we had better look seriously at postponing this patch to 9.3. I understand why you're drawing that conclusion, but I don't think that's the best we can do here, by a long shot. Well, if you get to the point where you're done churning the code in the next week or so, I'm willing to do one or two more rounds of serious review, but if that doesn't get us there then I think we need to give up. The energy you've put into this is commendable, but we're about to start the third month of this CommitFest, and until we get this release at least to beta or so, we can't start any new CommitFests or branch the tree. That basically means that nothing else of mine is going to get committed until the current crop of patches are dealt with - or for a good while after, for that matter, but getting the current crop of patches dealt with is the first step. Of course, I also want to have a good release and I understand the necessity of spending time on other people's patches as well as my own, as I believe I've demonstrated, but I don't want to stay in that mode indefinitely, which I think is an understandable position. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] xlog location arithmetic
On fre, 2012-03-09 at 18:13 +0100, Magnus Hagander wrote: and confusing XLogFileSize with XLogSegSize. Not the best choice of names. Yeah, the use of XLogFile to mean something other than, well a file in the xlog, is greatly annoying.. I guess we could change it, but it goes pretty deep in the system so it's not a small change... The whole thing was built around the lack of 64 bit integers. If we bit the bullet and changed the whole thing to be just a single 64-bit counter, we could probably delete thousands of lines of code.
Re: [HACKERS] xlog location arithmetic
Peter Eisentraut pete...@gmx.net writes: Yeah, the use of XLogFile to mean something other than, well a file in the xlog, is greatly annoying.. I guess we could change it, but it goes pretty deep in the system so it's not a small change... The whole thing was built around the lack of 64 bit integers. If we bit the bullet and changed the whole thing to be just a single 64-bit counter, we could probably delete thousands of lines of code. Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? regards, tom lane
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 2:26 PM, Tom Lane t...@sss.pgh.pa.us wrote: Peter Eisentraut pete...@gmx.net writes: Yeah, the use of XLogFile to mean something other than, well a file in the xlog, is greatly annoying.. I guess we could change it, but it goes pretty deep in the system so it's not a small change... The whole thing was built around the lack of 64 bit integers. If we bit the bullet and changed the whole thing to be just a single 64-bit counter, we could probably delete thousands of lines of code. Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? Probably not very much, since WAL files aren't portable across major versions anyway. But I don't see why you couldn't keep the naming convention - there's nothing to prevent you from converting a 64-bit integer back into two 32-bit integers if and where needed. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
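The conversion Robert mentions is trivial. A sketch (hypothetical names, assuming a flat 64-bit WAL position where the high word plays the role of xlogid):

```python
# Illustrative only: split a single 64-bit WAL position back into the
# traditional (xlogid, xrecoff) pair, and reassemble it.

def split_lsn(pos: int) -> tuple[int, int]:
    """High 32 bits as xlogid, low 32 bits as xrecoff."""
    return pos >> 32, pos & 0xFFFFFFFF

def join_lsn(xlogid: int, xrecoff: int) -> int:
    return (xlogid << 32) | xrecoff

xlogid, xrecoff = split_lsn(0x12345ABCD)
print(f"{xlogid:X}/{xrecoff:X}")   # 1/2345ABCD, the familiar text form
```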
Re: [HACKERS] Rules containing INSERT/UPDATE lack dependencies on target columns
On Fri, Mar 9, 2012 at 2:14 PM, Tom Lane t...@sss.pgh.pa.us wrote: I'm inclined to only fix this in HEAD. +1. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] xlog location arithmetic
Tom Lane t...@sss.pgh.pa.us wrote: Peter Eisentraut pete...@gmx.net writes: The whole thing was built around the lack of 64 bit integers. If we bit the bullet and changed the whole thing to be just a single 64-bit counter, we could probably delete thousands of lines of code. Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? We have a few scripts in our backup area that are based around the current WAL file naming convention, so there would be some impact; but I have to believe it would be pretty minor. Most of the pain would be related to the need to support both naming conventions for some transition period. If it simplifies the WAL-related logic, it seems well worth it to me. We just have to know it's coming and be clear on what the new naming rules are. -Kevin
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 2:34 PM, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 2:26 PM, Tom Lane t...@sss.pgh.pa.us wrote: Peter Eisentraut pete...@gmx.net writes: Yeah, the use of XLogFile to mean something other than, well a file in the xlog, is greatly annoying.. I guess we could change it, but it goes pretty deep in the system so it's not a small change... The whole thing was built around the lack of 64 bit integers. If we bit the bullet and changed the whole thing to be just a single 64-bit counter, we could probably delete thousands of lines of code. Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? Probably not very much, since WAL files aren't portable across major versions anyway. But I don't see why you couldn't keep the naming convention - there's nothing to prevent you from converting a 64-bit integer back into two 32-bit integers if and where needed. On further reflection, this seems likely to break quite a few third-party tools. Maybe it'd be worth it anyway, but it definitely seems like it would be worth going to at least some minor trouble to avoid it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] [v9.2] sepgsql's DROP Permission checks
On Sat, Feb 4, 2012 at 10:54 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote: OK, I modified the patch according to your suggestions. object_access_hook was extended to take an argument of void * pointer, and InvokeObjectAccessHook also allows delivering it. Sorry for the long radio silence on this patch. The changes to the object-access hook stuff seem sound to me now, so I've gone ahead and committed that part of this. I'll look at the rest next. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
[HACKERS] Is it time for triage on the open patches?
Robert Haas robertmh...@gmail.com writes: Well, if you get to the point where you're done churning the code in the next week or so, I'm willing to do one or two more rounds of serious review, but if that doesn't get us there then I think we need to give up. The energy you've put into this is commendable, but we're about to start the third month of this CommitFest, and until we get this release at least to beta or so, we can't start any new CommitFests or branch the tree. That basically means that nothing else of mine is going to get committed until the current crop of patches are dealt with - or for a good while after, for that matter, but getting the current crop of patches dealt with is the first step. Of course, I also want to have a good release and I understand the necessity of spending time on other people's patches as well as my own, as I believe I've demonstrated, but I don't want to stay in that mode indefinitely, which I think is an understandable position. This is a fair position, but I think it's a bit unfair to be applying such pressure to just the command-triggers patch and not all the other open issues. Hence, $SUBJECT: is it time to start forcing this commitfest to a conclusion, and if so what kind of schedule are we trying to set? Personally, the open patches that I'm excited about getting into the tree (or at least trying hard to) are: * NULLs support in SP-GiST * Caching constant stable expressions per execution and not that much else. (I'm also interested in the pgsql_fdw patch but I'm afraid that getting it to committable shape in the next week or two may be unrealistic.) Probably other people have their own, different shortlists. (This is not to say that I'm objecting to any other patches, only that I'd push to delay closing the CF to finish these, but not others.) 
I think a reasonable way to proceed might be to get some consensus on a short list of patches we're willing to try to push to completion, then set a schedule accordingly, and then anything that doesn't get done by the deadline gets kicked to 9.3. Or we can just keep drifting. But the number of open patches that are *not* Ready for Committer, nigh two months after the CF started, is depressingly large. regards, tom lane
Re: [HACKERS] RFC: Making TRUNCATE more MVCC-safe
On Fri, Mar 9, 2012 at 3:46 AM, Robert Haas robertmh...@gmail.com wrote: On Wed, Mar 7, 2012 at 5:41 PM, Simon Riggs si...@2ndquadrant.com wrote: Case #2 is certainly a problem for FrozenXID as well, because anything that's marked with FrozenXID is going to look visible to everybody, including our older snapshots. And I gather you're saying it's also a problem for HEAP_XMIN_COMMITTED. The problem there is that later subtransactions often have xids that are greater than xmax, so the xid shows as running when we do XidInMVCCSnapshot(), which must then be altered for this one weird case. I tried that and was not happy with the result. Altering XidInMVCCSnapshot() seems like a good thing to avoid, but I confess I don't quite follow what you're describing here otherwise. I had assumed that the way we were fixing this problem was to disable these optimizations for transactions that had more than one snapshot floating around. I'm not sure whether the patch does that or not, but I think it probably needs to. It does. I thought you already read the patch? I glanced over it, but did not look through it in detail. I'll do a more careful look at your next version. I'm not confident about the restrictions this patch imposes, and we aren't close enough to a final version for me to honestly request this be considered for this release. I think it's time to close this door for now. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Avoiding shutdown checkpoint at failover
On Thu, Mar 8, 2012 at 1:20 PM, Robert Haas robertmh...@gmail.com wrote: Are we still considering trying to do this for 9.2? Seems it's been over a month without a new patch, and it's not entirely clear that we know what the design should be. It's important, but not ready. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] xlog location arithmetic
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 2:34 PM, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 2:26 PM, Tom Lane t...@sss.pgh.pa.us wrote: Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? Probably not very much, since WAL files aren't portable across major versions anyway. But I don't see why you couldn't keep the naming convention - there's nothing to prevent you from converting a 64-bit integer back into two 32-bit integers if and where needed. On further reflection, this seems likely to break quite a few third-party tools. Maybe it'd be worth it anyway, but it definitely seems like it would be worth going to at least some minor trouble to avoid it. The main actual simplification would be in getting rid of the hole at the end of each 4GB worth of WAL, cf this bit in xlog_internal.h:

/*
 * We break each logical log file (xlogid value) into segment files of the
 * size indicated by XLOG_SEG_SIZE.  One possible segment at the end of each
 * log file is wasted, to ensure that we don't have problems representing
 * last-byte-position-plus-1.
 */
#define XLogSegSize		((uint32) XLOG_SEG_SIZE)
#define XLogSegsPerFile	(((uint32) 0xffffffff) / XLogSegSize)
#define XLogFileSize	(XLogSegsPerFile * XLogSegSize)

If we can't get rid of that and have a continuous 64-bit WAL address space then it's unlikely we can actually simplify any logic. Now, doing that doesn't break the naming convention exactly; what it changes is that there will be WAL files numbered xxx (for some number of trailing-1-bits I'm too lazy to work out at the moment) where before there were not. So the question really is how much external code there is that is aware of that specific noncontiguous numbering behavior and would be broken if things stopped being that way.
regards, tom lane
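To make the "hole" concrete, the quoted macros can be re-expressed in Python. This is illustrative only; segment_name() assumes the familiar 24-hex-digit timeline/xlogid/segment file-name layout, and the names are not from the PostgreSQL source.

```python
# With 16 MB segments, each 4 GB logical file holds only 255 usable
# segments (0x00..0xFE), so segment 0xFF of every xlogid never exists.
# A flat 64-bit address space would remove exactly that gap.

XLOG_SEG_SIZE = 16 * 1024 * 1024                      # XLOG_SEG_SIZE
XLOG_SEGS_PER_FILE = 0xFFFFFFFF // XLOG_SEG_SIZE      # 255, not 256
XLOG_FILE_SIZE = XLOG_SEGS_PER_FILE * XLOG_SEG_SIZE   # usable bytes per file

def segment_name(tli: int, xlogid: int, segno: int) -> str:
    """Render a WAL segment file name, e.g. 000000010000000000000001."""
    return f"{tli:08X}{xlogid:08X}{segno:08X}"

# The last segment that actually exists for xlogid 0 on timeline 1;
# the name ending in FF is the hole.
print(segment_name(1, 0, XLOG_SEGS_PER_FILE - 1))   # 0000000100000000000000FE
```

Third-party tools that iterate file names by "skip FF, bump xlogid" are exactly the code Robert worries about breaking.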
Re: [HACKERS] Is it time for triage on the open patches?
On Fri, Mar 9, 2012 at 2:47 PM, Tom Lane t...@sss.pgh.pa.us wrote: This is a fair position, but I think it's a bit unfair to be applying such pressure to just the command-triggers patch and not all the other open issues. Hence, $SUBJECT: is it time to start forcing this commitfest to a conclusion, and if so what kind of schedule are we trying to set? Just to be clear, it wasn't my intention to hold command triggers specifically to a different standard - but I do differentiate between small patches and big patches. Small patches that someone can get committed with an hour's worth of review can be treated a little more leniently than large patches that may take many cycles of review adding up to days of effort, and I believe command triggers to be one such patch. Personally, the open patches that I'm excited about getting into the tree (or at least trying hard to) are: * NULLs support in SP-GiST * Caching constant stable expressions per execution and not that much else. (I'm also interested in the pgsql_fdw patch but I'm afraid that getting it to committable shape in the next week or two may be unrealistic.) Probably other people have their own, different shortlists. I'd like to get the two sepgsql patches done if possible. I'm going to commit the rest of the DROP patch shortly, and the GUC for client label I will review and commit if it seems like it's in good shape. I would *like* to see Heikki's XLogInsert scaling patch committed, but it seems like it's still too buggy for that, and I'm not sure how long we should wait for it to get fixed; I also don't have plans to work on it personally. It's hard to pick favorites among the rest; there are a number of things I'd like to work on, but if it's just me working on them it's going to take longer than I want to wait for them to get done. 
There's been very little patch review going on, with a couple of notable exceptions like Thom and Noah, and not a lot of new patch versions from patch authors either, again with a few exceptions, like Dimitri. So it's not terribly surprising that progress is very slow. I'm not sure what to do about that, either: it doesn't seem very fair to start booting patches that are in relatively good shape, but on the other hand I'm not willing to single-handedly (or even with both hands) take on the task of reviewing everything that nobody else is paying attention to. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] Add GUC sepgsql.client_label
On 2012-03-06 15:14, Kohei KaiGai wrote: In case of sepgsql_setcon() being invoked with null argument to reset security label of the client, but not committed yet, the last item of the client_label_pending has null label. (It performs as a mark of a security label being reset.) Yes, I see that now. Another solution could be to append client_label_peer on the pending list instead of NULL, but maybe this is not important enough to discuss. I tried to do some testing by first transitioning into a smaller MLS context, then resetting to the original again, but that is not allowed by the regtest policy. I searched the internet for 30 minutes about how to write an allow rule that would allow e.g. the transition from c1.c4 back to c1.c1023 but failed. two other minor nitpicks -- * contrib/sepgsql/label.c -+ * contrib/sepgsqlet/label.c typo here.. -+ /* Append the supplied new_label on the pending list. */ ++ /* ++ * Append the supplied new_label on the pending list until ++ * the current transaction is committed. ++ */ the 'until the current transaction is committed' part is something going on outside of sepgsql_set_client_label() - this function just appends to the list, always. That the list is reset on transaction commit/abort time, is done and also already documented elsewhere. A new reader could be confused to not find transaction related things in sepgsql_set_client_label(). regards, Yeb -- Yeb Havinga http://www.mgrid.net/ Mastering Medical Data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
On tor, 2012-03-08 at 23:15 +0100, Pavel Stehule wrote: But you propose some little bit different than is current plpgsql checker and current design. Is it? Why? It looks like exactly the same thing, except that the interfaces you propose are tightly geared toward checking SQL-like languages, which looks like a mistake to me. It's not bad, but it is some different and it is not useful for plpgsql - external stored procedures are different, than SQL procedures and probably you will check different issues. I don't think so multiple checkers and external checkers are necessary - if some can living outside, then it should to live outside. Internal checker can be one for PL language. It is parametrized - so you can control goals of checking. What would be the qualifications for being an internal or an external checker? Why couldn't your plpgsql checker be an external checker? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] RFC: Making TRUNCATE more MVCC-safe
On Fri, Mar 9, 2012 at 2:59 PM, Simon Riggs si...@2ndquadrant.com wrote: On Fri, Mar 9, 2012 at 3:46 AM, Robert Haas robertmh...@gmail.com wrote: On Wed, Mar 7, 2012 at 5:41 PM, Simon Riggs si...@2ndquadrant.com wrote: Case #2 is certainly a problem for FrozenXID as well, because anything that's marked with FrozenXID is going to look visible to everybody, including our older snapshots. And I gather you're saying it's also a problem for HEAP_XMIN_COMMITTED. The problem there is that later subtransactions often have xids that are greater than xmax, so the xid shows as running when we do XidInMVCCSnapshot(), which must then be altered for this one weird case. I tried that and not happy with result. Altering XidInMVCCSnapshot() seems like a good thing to avoid, but I confess I don't quite follow what you're describing here otherwise. I had assumed that the way we were fixing this problem was to disable these optimizations for transactions that had more than one snapshot floating around. I'm not sure whether the patch does that or not, but I think it probably needs to It does. I thought you already read the patch? I glanced over it, but did not look through it in detail. I'll do a more careful look at your next version. I'm not confident about the restrictions this patch imposes and we aren't close enough to a final version for me to honestly request this be considered for this release. I think its time to close this door for now. OK, makes sense. I was trying to hold my nose, because I really would like to see this stuff work better than it does, but I had my doubts, too. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] xlog location arithmetic
On Fri, Mar 9, 2012 at 3:04 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 2:34 PM, Robert Haas robertmh...@gmail.com wrote: On Fri, Mar 9, 2012 at 2:26 PM, Tom Lane t...@sss.pgh.pa.us wrote: Hm. I think thousands is an overestimate, but yeah the logic could be greatly simplified. However, I'm not sure we could avoid breaking the existing naming convention for WAL files. How much do we care about that? Probably not very much, since WAL files aren't portable across major versions anyway. But I don't see why you couldn't keep the naming convention - there's nothing to prevent you from converting a 64-bit integer back into two 32-bit integers if and where needed. On further reflection, this seems likely to break quite a few third-party tools. Maybe it'd be worth it anyway, but it definitely seems like it would be worth going to at least some minor trouble to avoid it. The main actual simplification would be in getting rid of the hole at the end of each 4GB worth of WAL, cf this bit in xlog_internal.h: /* * We break each logical log file (xlogid value) into segment files of the * size indicated by XLOG_SEG_SIZE. One possible segment at the end of each * log file is wasted, to ensure that we don't have problems representing * last-byte-position-plus-1. */ #define XLogSegSize ((uint32) XLOG_SEG_SIZE) #define XLogSegsPerFile (((uint32) 0xffffffff) / XLogSegSize) #define XLogFileSize (XLogSegsPerFile * XLogSegSize) If we can't get rid of that and have a continuous 64-bit WAL address space then it's unlikely we can actually simplify any logic. Now, doing that doesn't break the naming convention exactly; what it changes is that there will be WAL files numbered xxx (for some number of trailing-1-bits I'm too lazy to work out at the moment) where before there were not. 
So the question really is how much external code there is that is aware of that specific noncontiguous numbering behavior and would be broken if things stopped being that way. I would expect that most things would NOT know about that particular foible, and just be matching pathnames on an RE, which should be fine. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Avoiding shutdown checkpoint at failover
On Fri, Mar 9, 2012 at 3:00 PM, Simon Riggs si...@2ndquadrant.com wrote: On Thu, Mar 8, 2012 at 1:20 PM, Robert Haas robertmh...@gmail.com wrote: Are we still considering trying to do this for 9.2? Seems it's been over a month without a new patch, and it's not entirely clear that we know what the design should be. It's important, but not ready. Thanks for the update. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
On tor, 2012-03-08 at 19:19 -0500, Robert Haas wrote: On Thu, Mar 8, 2012 at 4:54 PM, Peter Eisentraut pete...@gmx.net wrote: * It's not terribly important to me to be able to run checkers separately. If I wanted to do that, I would just disable or remove the checker. Does this requirement mean that you want to essentially associate a set of checkers with each language and then, when asked to check a function, run all of them serially in an undefined order? Well, the more I think about it and look at this patch, the more I think that this would be complete overkill and possibly quite useless for my purposes. I can implement the entire essence of this framework (except the plpgsql_checker itself, which is clearly useful) in 10 lines, namely: CREATE OR REPLACE FUNCTION pep8(src text) RETURNS text IMMUTABLE LANGUAGE plsh AS $$ #!/bin/bash pep8 --ignore=W391 <(echo $1) 2>&1 | sed -r 's/^[^:]*://' $$; SELECT proname, pep8(prosrc) FROM pg_proc WHERE prolang = ANY (SELECT oid FROM pg_language WHERE lanname LIKE '%python%') ORDER BY 1; I don't know what more one would need. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] sepgsql's DROP Permission checks
On Fri, Mar 9, 2012 at 2:41 PM, Robert Haas robertmh...@gmail.com wrote: On Sat, Feb 4, 2012 at 10:54 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote: OK, I modified the patch according to your suggestions. object_access_hook was extended to take an argument of void * pointer, and InvokeObjectAccessHook was also allows to deliver it. Sorry for the long radio silence on this patch. The changes to the object-access hook stuff seem sound to me now, so I've gone ahead and committed that part of this. I'll look at the rest next. That looks fine also, so committed that too. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, Mar 9, 2012 at 3:15 PM, Peter Eisentraut pete...@gmx.net wrote: On tor, 2012-03-08 at 19:19 -0500, Robert Haas wrote: On Thu, Mar 8, 2012 at 4:54 PM, Peter Eisentraut pete...@gmx.net wrote: * It's not terribly important to me to be able to run checkers separately. If I wanted to do that, I would just disable or remove the checker. Does this requirement mean that you want to essentially associate a set of checkers with each language and then, when asked to check a function, run all of them serially in an undefined order? Well, the more I think about it and look at this patch, the more I think that this would be complete overkill and possibly quite useless for my purposes. I can implement the entire essence of this framework (except the plpgsql_checker itself, which is clearly useful) in 10 lines, namely: CREATE OR REPLACE FUNCTION pep8(src text) RETURNS text IMMUTABLE LANGUAGE plsh AS $$ #!/bin/bash pep8 --ignore=W391 <(echo $1) 2>&1 | sed -r 's/^[^:]*://' $$; SELECT proname, pep8(prosrc) FROM pg_proc WHERE prolang = ANY (SELECT oid FROM pg_language WHERE lanname LIKE '%python%') ORDER BY 1; I don't know what more one would need. Well, I agree with you, but Tom disagrees, so that's why we're talking about it... -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is it time for triage on the open patches?
Robert Haas robertmh...@gmail.com writes: There's been very little patch review going on, with a couple of notable exceptions like Thom and Noah, and not a lot of new patch versions from patch authors either, again with a few exceptions, like Dimitri. So it's not terribly surprising that progress is very slow. I'm not sure what to do about that, either: it doesn't seem very fair to start booting patches that are in relatively good shape, but on the other hand I'm not willing to single-handedly (or even with both hands) take on the task of reviewing everything that nobody else is paying attention to. Right. IMO the point of setting a deadline would be to try to light a fire under people who have been letting reviewing and patch-updating slide. I don't want to arbitrarily boot a lot of small patches that could have gotten done, but I don't want the active committers to be the only ones pushing things to completion, either. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
Robert Haas robertmh...@gmail.com writes: On Fri, Mar 9, 2012 at 3:15 PM, Peter Eisentraut pete...@gmx.net wrote: Well, the more I think about it and look at this patch, the more I think that this would be complete overkill and possibly quite useless for my purposes. I can implement the entire essence of this framework (except the plpgsql_checker itself, which is clearly useful) in 10 lines, namely: CREATE OR REPLACE FUNCTION pep8(src text) RETURNS text IMMUTABLE LANGUAGE plsh AS $$ #!/bin/bash pep8 --ignore=W391 <(echo $1) 2>&1 | sed -r 's/^[^:]*://' $$; SELECT proname, pep8(prosrc) FROM pg_proc WHERE prolang = ANY (SELECT oid FROM pg_language WHERE lanname LIKE '%python%') ORDER BY 1; I don't know what more one would need. Well, I agree with you, but Tom disagrees, so that's why we're talking about it... What Peter's example demonstrates is that you can apply a single checker for a single language without bothering with any common framework. Well, yeah. What I've wanted from this patch from the beginning was a common framework. That is, I want to be able to write something like SELECT check_function(oid) FROM pg_proc WHERE proowner = 'tgl' and have it just work for all languages for which I have checkers. You can't get that with a collection of ad-hoc checkers. If we're going to go the ad-hoc route, there seems little reason to be considering a core patch at all. Freestanding checkers could just as well be independent projects. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is it time for triage on the open patches?
Robert Haas robertmh...@gmail.com writes: Just to be clear, it wasn't my intention to hold command triggers specifically to a different standard - but I do differentiate between small patches and big patches. Small patches that someone can get committed with an hour's worth of review can be treated a little more leniently than large patches that may take many cycles of review adding up to days of effort, and I believe command triggers to be one such patch. I share your view here, and in fact the code for the patch has been updated in only two ways since 1/15: adding support for new commands and reacting to review (refactoring, cleaning, feature removal, fixing glitches). That's the reason why I can see we're very near the end of it, the code churn is about to be over now. There's been very little patch review going on, with a couple Yeah, I'd like to get back to reviewing soon too, obviously I've been somehow more busy than expected. I'm not sure what to do about that, either: it doesn't seem very fair to start booting patches that are in relatively good shape, but on the other hand I'm not willing to single-handedly (or even with both hands) take on the task of reviewing everything that nobody else is paying attention to. It seems like February has seen lots of participants distracted away from the commit fest, we should probably take this into account. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [v9.2] Add GUC sepgsql.client_label
On Tue, Mar 6, 2012 at 9:14 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote: [ new patch ] Are we absolutely certain that we want the semantics of sepgsql_setcon() to be transactional? Because if we made them non-transactional, this would be a whole lot simpler, and it would still meet the originally proposed use case, which is to allow the security context of a connection to be changed on a one-time basis before handing it off to a client application. As a separate but related note, the label management here seems to be excessively complicated. In particular, it seems to me that the semantics of sepgsql_get_client_label() become quite muddled under this patch. An explicitly-set label overrides the default label, but a trusted procedure's temporary label overrides that. So if you enter a trusted procedure, and it calls sepgsql_setcon(), it'll change the label, but no actual security transition will occur; then, when you exit the trusted procedure, you'll get the new label in place of whatever the caller had before. That seems like one heck of a surprising set of semantics. It seems to me that it would make more sense to view the set of security labels in effect as a stack. When we enter a trusted procedure, it pushes a new label on the stack; when we exit a trusted procedure, it pops the top label off the stack. sepgsql_setcon() changes the top label on the stack. If we want to retain transactional semantics, then you can also have a toplevel label that doesn't participate in the stack; a commit copies the sole item on the stack into the toplevel label, and transaction start copies the toplevel label into an empty stack. In the current coding, I think client_label_peer is redundant with client_label_committed - once the latter is set, the former is unused, IIUC - and what I'm proposing is that client_label_func shouldn't be separate, but rather should mutate the stack of labels maintained by client_label_pending. 
The docs need updating, both to reflect the version bump in sepgsql-regtest.te and to describe the actual feature. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
2012/3/9 Peter Eisentraut pete...@gmx.net: On tor, 2012-03-08 at 23:15 +0100, Pavel Stehule wrote: But you propose some little bit different than is current plpgsql checker and current design. Is it? Why? It looks like exactly the same thing, except that the interfaces you propose are tightly geared toward checking SQL-like languages, which looks like a mistake to me. no, you can check any PL language - and the output is based on SQL errors, so it should be enough for all PLs too. It's not bad, but it is some different and it is not useful for plpgsql - external stored procedures are different, than SQL procedures and probably you will check different issues. I don't think so multiple checkers and external checkers are necessary - if some can living outside, then it should to live outside. Internal checker can be one for PL language. It is parametrized - so you can control goals of checking. What would be the qualifications for being an internal or an external checker? Why couldn't your plpgsql checker be an external checker? the plpgsql checker cannot be an external checker, because it reuses 70% of the plpgsql environment - parser, runtime, ... so creating an external checker is equal to creating a second plpgsql environment - maybe reduced, but you have to duplicate the parser, SQL integration, ... Regards Pavel -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
Pavel Stehule pavel.steh...@gmail.com writes: 2012/3/9 Peter Eisentraut pete...@gmx.net: What would be the qualifications for being an internal or an external checker? Why couldn't your plpgsql checker be an external checker? plpgsql checker cannot be external checker, because it reuse 70% of plpgsql environment, - parser, runtime, ... Well, that just means that it'd be a good idea for that function to be supplied by the same shared library that supplies the plpgsql execution functions. There wouldn't need to be any connection that the core system particularly understands. So, like Peter, I'm not quite sure what distinction is meant to be drawn by internal vs external. The thing that really struck me about Peter's previous comments was the desire to support multiple checkers per PL. I had been assuming that we'd just extend the validator model in some way --- either another column in pg_language or extending the API for validator functions. Neither of those work if we want to allow multiple checkers. Now, I'm not at all convinced that multiple checkers are worth the trouble ... but if they are it seems like we need a different system catalog to store them in. And the entries in that catalog wouldn't necessarily be created by the same extension that creates the PL language. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is it time for triage on the open patches?
On Fri, Mar 9, 2012 at 3:40 PM, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: I share your view here, and in fact the code for the patch has been updated in only two ways since 1/15: adding support for new commands and reacting to review (refactoring, cleaning, features removal, fix the glitch). That's the reason why I can see we're very near the end of it, the code churn is about to be over now. Eh, maybe. I suspect you may be underestimating the amount of work left to do, but I'll reserve judgement until I read a new version of the patch. It seems like February has seen lots of participants distracted away from the commit fest, we should probably take this into account. Sure, I got knocked out for a week by my trip to Japan, and Tom was busy with other things, too, and Heikki went on vacation for a week. But, really, I don't think that's the main problem. If people are tired of working on the CommitFest, they're not going to get reinvigorated just because we let it go on for another month. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, Mar 9, 2012 at 3:33 PM, Tom Lane t...@sss.pgh.pa.us wrote: If we're going to go the ad-hoc route, there seems little reason to be considering a core patch at all. Freestanding checkers could just as well be independent projects. I completely agree. I think there is little reason to be considering a core patch. I haven't seen any convincing evidence (or any evidence at all) that being able to fling checkers at a large number of functions written in different procedural languages is an important use case for anyone. I think the vast majority of checking will get done one function at a time; and therefore we are gilding the lily. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Is it time for triage on the open patches?
On Friday, March 09, 2012 10:13:15 PM Robert Haas wrote: If people are tired of working on the CommitFest, they're not going to get reinvigorated just because we let it go on for another month. On that line: From Sunday onwards I do have time again to do reviewing. I am not sure anybody is doing commitfest management right now, so I am asking here. Any patch that can benefit from the level of review I can do? If nobody raises a voice I will take a look at the next version of commandtriggers first. Andres -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
2012/3/9 Tom Lane t...@sss.pgh.pa.us: Pavel Stehule pavel.steh...@gmail.com writes: 2012/3/9 Peter Eisentraut pete...@gmx.net: What would be the qualifications for being an internal or an external checker? Why couldn't your plpgsql checker be an external checker? plpgsql checker cannot be external checker, because it reuse 70% of plpgsql environment, - parser, runtime, ... Well, that just means that it'd be a good idea for that function to be supplied by the same shared library that supplies the plpgsql execution functions. There wouldn't need to be any connection that the core system particularly understands. So, like Peter, I'm not quite sure what distinction is meant to be drawn by internal vs external. internal - implement in core, external - implement in extension. This is history about this feature: a) I wrote plgsql_lint, and I proposed it as core plpgsql functionality b) there was request using some different then GUC, and there was first check_function c) there was request do it with more user friendly - there is CHECK xxx d) there was request for support multiple checks, there is CHECK xxx ALL e) these features was reduced - step by step,... but really important @a Personally I think so any form of plpgsql check is big step against current state. We can move general check functions to plpgsql space, and it can be good enough for plpgsql developers. It is not user friendly like CHECK FUNCTION is it, but it has important functionality - checking of embedded SQL without execution and without dependency on executed paths. I cannot to move plpgsql checker to extension, because there is dependency on plpgsql lib, and this is problem. If I can do it, then I did it The thing that really struck me about Peter's previous comments was the desire to support multiple checkers per PL. 
I had been assuming that we'd just extend the validator model in some way I tried to do it, but it is not practical - a interface of validators is volatile now - it is different for functions and for triggers, and it doesn't expect returning anything else than exception. More - other new functionality can increase complexity of current plpgsql validators. So this is reason, why I designed new handler function. --- either another column in pg_language or extending the API for validator functions. Neither of those work if we want to allow multiple checkers. Now, I'm not at all convinced that multiple checkers are worth the trouble I don't see a reason why we need a multiple checkers - checkers are parametrised, so there are no real reason, But what statement will be maintain this catalog - CREATE CHECK ? You need DROP, ALTER, .. it is lot code too. ... but if they are it seems like we need a different system catalog to store them in. And the entries in that catalog wouldn't necessarily be created by the same extension that creates the PL language. This looks like real overhead. Without user interface - like CHECK statement, what is value of this? I am skeptic. Can we go back and can we solve a checking just plpgsql - it just needs integration of plpgsql checker to plpgsql runtime. Any PL can have one or more these functions. And when we will have a adequate user API - SQL statements (CHECK statements or some else), then we can create a new catalog. It is strange do it in different order. Regards Pavel regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, Mar 9, 2012 at 5:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: Well, that just means that it'd be a good idea for that function to be supplied by the same shared library that supplies the plpgsql execution functions. There wouldn't need to be any connection that the core system particularly understands. So, like Peter, I'm not quite sure what distinction is meant to be drawn by internal vs external. internal - implement in core, external - implement in extension. [...] I cannot to move plpgsql checker to extension, because there is dependency on plpgsql lib, and this is problem. If I can do it, then I did it I don't object to having this feature live in src/pl/plpgsql, and I don't think Tom's objecting to that either. I just don't think it needs any particular support in src/backend. I don't see a reason why we need a multiple checkers - checkers are parametrised, so there are no real reason, But what statement will be maintain this catalog - CREATE CHECK ? You need DROP, ALTER, .. it is lot code too. If the checkers are written by different people and shipped separately, then a parameter interface does not make anything better. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] pg_crypto failures with llvm on OSX
Buildfarm member mussel (OS X 10.7.3, llvm-gcc 4.2.1, x86_64) seems to be getting consistent warnings when running the pgcrypto regression tests, that look like this: WARNING: detected write past chunk end in ExprContext 0x7fec2b11eb58 Does anyone have an idea why that might be? cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] poll: CHECK TRIGGER?
2012/3/9 Robert Haas robertmh...@gmail.com: On Fri, Mar 9, 2012 at 5:09 PM, Pavel Stehule pavel.steh...@gmail.com wrote: Well, that just means that it'd be a good idea for that function to be supplied by the same shared library that supplies the plpgsql execution functions. There wouldn't need to be any connection that the core system particularly understands. So, like Peter, I'm not quite sure what distinction is meant to be drawn by internal vs external. internal - implement in core, external - implement in extension. [...] I cannot to move plpgsql checker to extension, because there is dependency on plpgsql lib, and this is problem. If I can do it, then I did it I don't object to having this feature live in src/pl/plpgsql, and I don't think Tom's objecting to that either. I just don't think it needs any particular support in src/backend. I don't see a reason why we need a multiple checkers - checkers are parametrised, so there are no real reason, But what statement will be maintain this catalog - CREATE CHECK ? You need DROP, ALTER, .. it is lot code too. If the checkers are written by different people and shipped separately, then a parameter interface does not make anything better. ok - it has sense, but it has sense only with some smart statements (like CHECK). Without these statements I have to directly call checker function and then concept of generalised checkers has not sense. Pavel -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Publish checkpoint timing and sync files summary data to pg_stat_bgwriter
On Wed, Feb 22, 2012 at 2:11 PM, Peter Geoghegan pe...@2ndquadrant.com wrote: One beef that I have with the variable name m_write_ms is that ms could equally well refer to microseconds or milliseconds, and these mistakes are very common. I would expect ms to mean milliseconds and us to mean microseconds. Details of existing comment bug: The docs show that pg_stat_*_functions accumulates time in units of milliseconds. However, a PgStat_FunctionEntry payload is commented as sending the time/self time to the stats collector in microseconds. So this comment, in the existing code, is actually wrong: PgStat_Counter f_time; /* times in microseconds */ PgStat_Counter f_time_self; } PgStat_StatFuncEntry; I think that the comment is not wrong. The code that populates it looks like this: m_ent->f_time = INSTR_TIME_GET_MICROSEC(entry->f_counts.f_time); m_ent->f_time_self = INSTR_TIME_GET_MICROSEC(entry->f_counts.f_time_self); If that's not producing microseconds, those macros are badly misnamed. Note that the pg_stat_user_functions view divides the return value of the function by 1000, which is why the view output is in milliseconds. As for the substance of the patch, I am in general agreement that this is a good idea. Storing the statistics in pg_stat_bgwriter is a more flexible approach than is immediately apparent. Users can usefully calculate delta values, as part of the same process through which checkpoint tuning is performed by comparing output from select now(), * from pg_stat_bgwriter at different intervals. This also puts this information within easy reach of monitoring tools. On further thought, I'll revise the position I took upthread and concede that write_ms and sync_ms are useful. Given values of the stats counters at time A and time B, you can calculate what percentage of that time was spent in the write phase of some checkpoint, the sync phase of some checkpoint, and not in any checkpoint. But I'm still pretty sketchy on sync_files.
I can't see how you can do anything useful with that. So, I'd ask Greg and/or Jaime to produce a revision of the patch with those concerns in mind, as well as fixing the md.c usage of log_checkpoints. It seems we are still waiting for an update of this patch. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, Mar 9, 2012 at 5:31 PM, Pavel Stehule pavel.steh...@gmail.com wrote: OK - it makes sense, but only with some smart statements (like CHECK). Without those statements I have to call the checker function directly, and then the concept of generalised checkers makes no sense. I agree. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] pg_crypto failures with llvm on OSX
On Fri, Mar 09, 2012 at 05:28:20PM -0500, Andrew Dunstan wrote: Buildfarm member mussel (OS X 10.7.3, llvm-gcc 4.2.1, x86_64) seems to be getting consistent warnings when running the pgcrypto regression tests, that look like this: WARNING: detected write past chunk end in ExprContext 0x7fec2b11eb58 Does anyone have an idea why that might be? Could it be related to this: openssl.c:840: warning: AES_cbc_encrypt is deprecated (declared at /usr/include/openssl/aes.h:106) basically every API under /usr/include/openssl gives this warning. Replaced or heavily hacked openssl library? Same for core code: be-secure.c:329: warning: SSL_renegotiate is deprecated (declared at /usr/include/openssl/ssl.h:1530) Could someone take a look at what's going on there? -- marko
Re: [HACKERS] Command Triggers, patch v11
On 9 March 2012 21:38, Dimitri Fontaine dimi...@2ndquadrant.fr wrote: Hi, Please find attached v15 of the patch, addressing all known issues apart from the trigger function argument passing style. Expect a new patch with that taken care of early next week. (The github branch too, should you be using that) Thom Brown t...@linux.com writes: CREATE OPERATOR CLASS now shows objectname as 'hash' instead of its name where it's declared as USING hash. This isn't a problem with ALTER/DROP OPERATOR CLASS. Everything else above works as expected now. Ah yes, that needed a special case; it's properly handled now, and tested. Confirmed. When dropping domains, the name of the domain includes the schema name: Fixed. Confirmed. Could we change this to REINDEX DATABASE triggers are not supported? This way it would be consistent with the AFTER CREATE INDEX CONCURRENTLY warning. Sure, done. Confirmed, thanks. REINDEX on a table seems to show no schema name but an object name for specific triggers: Was a typo, fixed. Confirmed. When REINDEXing an index rather than a table, the table's details are shown in the trigger. Is this expected?: Fixed. Confirmed. ALTER CAST is still listed and needs removing, not just from the documentation but every place it's used in your code too. I can currently create a trigger for it, but it's impossible for it to fire since there's no such command. Removed. Confirmed that it's removed from the code. Just needs removing from the docs too now. (doc/src/sgml/ref/create_command_trigger.sgml) All these corrections I mentioned previously still need to be made: That's about the docs, I edited them according to your comments. Confirmed. Thom Brown t...@linux.com writes: All other command triggers don't fire on read-only standbys, and the inconsistency doesn't seem right.
On the one hand all CREATE/DROP/ALTER triggers aren't fired because of the cannot execute command in a read-only transaction error message, but triggers do occur before utility commands, which would otherwise display the same message, and might not display it at all if the trigger causes an error in its function call. So it seems like they should either all fire, or none of them should. What are your thoughts? The other triggers don't fire because an ERROR case is detected before they have a chance to run, much like on a primary in some ERROR cases. Yes, I've since discovered that I shouldn't have been treating them equally due to the different type of error those sets of commands generate. That's fine then. Thom Brown t...@linux.com writes: It was late last night and I forgot to get around to testing pg_dump, which isn't working correctly: Fixed. Confirmed. Also I notice that CREATE/ALTER/DROP COMMAND TRIGGER appears just before CREATE/ALTER/DROP TRIGGER in the documentation. This breaks the alphabetical order and I wasn't expecting to find it there when scanning down the page. Could we move them into an alphabetic position? I don't see that problem in the source files, could you be more specific? I can't see the problem in the source either, but when viewing postgresql/doc/src/sgml/html/reference.html in my browser, CREATE COMMAND TRIGGER appears between CREATE TRIGGER and CREATE TYPE. Not sure why though. Unless you've cleverly distracted me away from some issue that's yet to be resolved, that appears to be nearly all my concerns addressed. All that appears to remain is the trigger-variable stuff which you said shall arrive early next week, and also that odd doc issue I mentioned in the paragraph above (although that could just be something weird happening when I build it). Sounds like I have the weekend off. :) Thanks Dimitri.
-- Thom
Re: [HACKERS] NULL's support in SP-GiST
Oleg Bartunov o...@sai.msu.su writes: the attached patch introduces NULLs indexing for SP-GiST. With this patch SP-GiST supports IS NULL, IS NOT NULL clauses, as well as full index scan. I've looked at this patch a bit. I share Jaime's extreme discomfort with re-using GIN code to handle some pages of an SP-GiST index. Making that code serve two masters is going to bite us on the rear sooner or later, probably sooner. I must also object in the strongest terms to the proposed rearrangement of SP-GiST page special space to make it sort-of-compatible with GIN special space. That will entirely break tools such as pg_filedump, which needs the special space to be visibly different from GIN, or it won't know how to print the page contents. The other aspect I don't particularly like is the proposed changes to the opclass interface API. Adding a satisfyAll field seems like just a kluge. Also, it does nothing to fix the complaints I had in http://archives.postgresql.org/pgsql-hackers/2011-12/msg00804.php about the search API being inherently inefficient for multiple scan keys, because it forces repeat reconstruction of the indexed value. I think a better fix for the opclass API would be to do what I suggested there: * Perhaps it'd be a good idea to move the loop over scankeys to inside the opclass consistent methods, ie call them just once to check all the scankeys. Then we could meaningfully define zero scankeys as a full index scan, and we would also get rid of redundant value reconstruction work when there's more than one scankey. I'm less sure about what to do to store nulls, but one idea is to have a separate SP-GiST tree storing only nulls and descending from its own root page, similar to the idea in this patch of having a separate root page for nulls. It'd be a tad less efficient than GIN-based storage for large numbers of nulls, but you probably don't want to use SP-GiST to index columns with lots of nulls anyway.
Normally, if I felt that a patch needed to be thrown away and rewritten, I'd just bounce it back to the author for rework. However, in this case we are under a time crunch, and I feel that it's critical that we try to get both the opclass API and the on-disk format right for 9.2. It will be much harder to change either thing once we release. So I'm willing to spend some time rewriting the patch according to these ideas, and will go off and do that if there are not objections. regards, tom lane
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, 2012-03-09 at 21:54 +0100, Pavel Stehule wrote: no, you can check any PL language - and the output is based on SQL errors, so it should be enough for all PLs too. But then I would have to map all language-specific error reports to some SQL error scheme, which is not only cumbersome but pretty useless. For example, a Python programmer will be familiar with the typical output that pylint produces and how to fix it. If we hide that output behind a layer of SQL-ness, that won't make things easier for anyone.
Re: [HACKERS] poll: CHECK TRIGGER?
On Fri, 2012-03-09 at 15:33 -0500, Tom Lane wrote: What I've wanted from this patch from the beginning was a common framework. That is, I want to be able to write something like SELECT check_function(oid) FROM pg_proc WHERE proowner = 'tgl' and have it just work for all languages for which I have checkers. You can't get that with a collection of ad-hoc checkers. Well, there isn't any program either that will run through, say, the PostgreSQL source tree and check each file according to the respective programming language. You pick the checkers for each language yourself and decide for each checker how to apply it and how to process the results.