Re: [HACKERS] New Event Trigger: table_rewrite
On 16 October 2014 10:18, Dimitri Fontaine wrote:
> Dimitri Fontaine writes:
>> Please find attached to this email a patch to implement a new Event
>> Trigger, fired on the "table_rewrite" event. As attached, it's meant
>> as a discussion enabler and only supports ALTER TABLE (and maybe not in
>> all forms of it). It will need to grow support for VACUUM FULL and
>> CLUSTER and more before getting committed.
>
> And here's already a new version of it, including support for ALTER
> TABLE, VACUUM and CLUSTER commands, and documentation.

The patch itself looks fine overall. Docs look in place, tests OK. The API
changes may need more thought; I'm not sure myself, they just look fairly
quick.

It would be more useful to work on the applications of this:

1. INSERT into a table:
   * action start time
   * schema
   * table name
   * number of blocks in the table
   which would then allow you to run an assessment report showing which
   tables would be rewritten.

2. Get access to the number of blocks, so you could limit rewrites to
   smaller tables only by putting a block limit in place.

3. It might be even cooler to contemplate having pg_stat_activity publish
   an estimated end time. We'd probably need some kind of time_per_block
   parameter for each tablespace so we can estimate the time.

Doing 1 and 2 at least would make this a good feature. We can do a later
patch for 3, or similar, once this is accepted.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
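Simon's points 1 and 2 could look roughly like this from userland, assuming the trigger exposes the rewritten table's OID through a function such as pg_event_trigger_table_rewrite_oid() — a name that is hypothetical at this stage of the thread:

```sql
-- Point 1: record which tables are about to be rewritten
CREATE TABLE rewrite_audit (
    started_at  timestamptz,
    schema_name name,
    table_name  name,
    num_blocks  integer
);

CREATE OR REPLACE FUNCTION log_and_limit_rewrite()
RETURNS event_trigger LANGUAGE plpgsql AS $$
DECLARE
    tbl  oid := pg_event_trigger_table_rewrite_oid();  -- hypothetical API
    blks integer;
BEGIN
    SELECT c.relpages INTO blks FROM pg_class c WHERE c.oid = tbl;

    INSERT INTO rewrite_audit
    SELECT now(), n.nspname, c.relname, c.relpages
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.oid = tbl;

    -- Point 2: refuse to rewrite tables above a block limit
    IF blks > 10000 THEN
        RAISE EXCEPTION 'refusing to rewrite table of % blocks', blks;
    END IF;
END;
$$;

CREATE EVENT TRIGGER no_big_rewrites
    ON table_rewrite
    EXECUTE PROCEDURE log_and_limit_rewrite();
```

Note that relpages is only an estimate maintained by VACUUM/ANALYZE; a real implementation might want the trigger API to expose the live block count directly, as point 2 asks.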
Re: [HACKERS] tracking commit timestamps
On 13 October 2014 10:05, Petr Jelinek wrote:
>> I worked a bit on this patch to make it closer to a committable state.
>
> Here is an updated version that works with current HEAD for the October
> commitfest.

I've reviewed this and it looks good to me. Clean, follows existing code
neatly, documented and tested.

Please could you document a few things:

* ExtendCommitTS() works only because commit_ts_enabled can only be set at
  server start. We need that documented so somebody doesn't make it more
  easily enabled and break something. (Could we make it enabled at the
  next checkpoint or similar?)

* The SLRU tracks timestamps of both xids and subxids. We need to document
  that it does this because the Subtrans SLRU is not persistent. If we
  made Subtrans persistent we might need to store only the top-level xid's
  commitTS, but tracking subxids is very useful for typical use cases and
  storing only the top-level xid wouldn't save much time at commit anyway.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
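For context, the user-visible surface being reviewed is roughly the following (the GUC name matches the patch under review; the lookup function name follows the same pattern and may differ in whatever finally gets committed):

```sql
-- postgresql.conf: per the review note, this can only be set at
-- server start:
--   commit_ts_enabled = on

CREATE TABLE t (id integer);
INSERT INTO t VALUES (1);

-- Retrieve the commit timestamp of the transaction that inserted
-- each row (t.xmin):
SELECT id, pg_xact_commit_timestamp(xmin) FROM t;
```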
Re: [HACKERS] On partitioning
On 2014-10-28 08:19:36 -0400, Robert Haas wrote:
> On Tue, Oct 28, 2014 at 6:06 AM, Andres Freund wrote:
> > In my opinion we can reuse (some of) the existing logic for INHERITS to
> > implement "proper" partitioning, but that should be an implementation
> > detail.
>
> Sure, that would be a sensible way to do it. I mostly care about not
> throwing out all the work that's been done on the planner and executor.

In that case I'm not sure if there's actual disagreement here.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] On partitioning
On Tue, Oct 28, 2014 at 6:06 AM, Andres Freund wrote:
> In my opinion we can reuse (some of) the existing logic for INHERITS to
> implement "proper" partitioning, but that should be an implementation
> detail.

Sure, that would be a sensible way to do it. I mostly care about not
throwing out all the work that's been done on the planner and executor.
Maybe you're thinking we'll eventually replace that with something better,
which is fine, but I wouldn't underestimate the effort to make that
happen.

For example, I think it'd be sensible for the first patch to just add some
new user-visible syntax with some additional catalog representation that
doesn't actually do all that much yet. Then subsequent patches could use
that additional metadata to optimize partition pruning, implement tuple
routing, etc.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
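The "syntax plus catalog representation first, optimizations later" approach Robert sketches might look like this from userland (purely hypothetical syntax at this point in the discussion):

```sql
-- Step 1: new syntax that only records partitioning metadata in the
-- catalogs, without changing planner or executor behaviour yet.
CREATE TABLE measurement (
    city_id  integer,
    logdate  date,
    peaktemp integer
) PARTITION BY RANGE (logdate);

CREATE TABLE measurement_2014
    PARTITION OF measurement
    FOR VALUES FROM ('2014-01-01') TO ('2015-01-01');

-- Step 2 (later patches): the planner and executor consult that metadata
-- for partition pruning and tuple routing, e.g. so this INSERT lands in
-- measurement_2014 automatically:
INSERT INTO measurement VALUES (1, '2014-06-01', 30);
```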
Re: [HACKERS] BUG: *FF WALs under 9.2 (WAS: .ready files appearing on slaves)
On Tue, Oct 28, 2014 at 1:12 AM, Heikki Linnakangas wrote:
> On 10/27/2014 02:12 PM, Fujii Masao wrote:
>> On Fri, Oct 24, 2014 at 10:05 PM, Heikki Linnakangas wrote:
>>> On 10/23/2014 11:09 AM, Heikki Linnakangas wrote:
>>>> At least for master, we should consider changing the way the
>>>> archiving works so that we only archive WAL that was generated in the
>>>> same server. I.e. we should never try to archive WAL files belonging
>>>> to another timeline.
>>>
>>> I just remembered that we discussed a different problem related to
>>> this some time ago, at
>>> http://www.postgresql.org/message-id/20131212.110002.204892575.horiguchi.kyot...@lab.ntt.co.jp.
>>> The conclusion of that was that at promotion, we should not archive
>>> the last, partial, segment from the old timeline.
>>>
>>> So, this is what I came up with for master. Does anyone see a problem
>>> with it?
>>
>> What about the problem that I raised upthread? That is, the patch
>> prevents the last, partial, WAL file of the old timeline from being
>> archived, so we can never PITR the database to the point that the last,
>> partial WAL file contains.
>
> A partial WAL file is never archived in the master server to begin with,
> so if it's ever used in archive recovery, the administrator must have
> performed some manual action to copy the partial WAL file from the
> original server.

Yes.

> When he does that, he can also copy it manually to the archive, or
> whatever he wants to do with it.

Yes, but currently he doesn't need to do that manually. At the end of
archive recovery, a .ready file is created and the last, partial WAL file
of the old timeline is archived later. It's never recycled until it's
archived. So when a subsequent archive recovery is necessary, we can
expect the file to exist in either pg_xlog or archival storage. OTOH, if
the patch is applied, it's never archived and may already have been
recycled, and then a subsequent archive recovery would fail.

Therefore, the problem that the patch can cause is that the partial WAL
file may be recycled even when it has not been archived yet. The partial
WAL file may fail to be archived for some reason, but that's not directly
a problem as long as it still exists in pg_xlog.

Regards,

--
Fujii Masao
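Heikki's suggestion that the administrator "can also copy it manually to the archive" amounts to something like the following (the paths and segment name are purely illustrative):

```shell
# After promotion, the last segment of the old timeline remains partial
# in pg_xlog and, with the patch applied, is no longer archived
# automatically. If it may be needed for a later PITR, copy it to the
# archive location by hand:
cp "$PGDATA/pg_xlog/000000010000000000000005" /mnt/server/archivedir/
```

Fujii's point is that today this manual step is unnecessary, because the .ready file makes the regular archiver pick the file up on its own.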
Re: [HACKERS] proposal: CREATE DATABASE vs. (partial) CHECKPOINT
On 10/28/2014 02:56 AM, David G Johnston wrote:
> Tom Lane-2 wrote:
>>> So maybe we shouldn't cling to the WAL-logging approach too much.
>>> Maybe Heikki's idea to abandon the full checkpoint and instead assume
>>> that once the transaction commits, all the files were fsynced OK. Of
>>> course, this will do nothing about the replay hazards.
>>
>> Well, I'm not insisting on any particular method of getting there, but
>> if we're going to touch this area at all then I think "fix the replay
>> hazards" should be a non-negotiable requirement. We'd never have
>> accepted such hazards if CREATE DATABASE were being introduced for the
>> first time; it's only like this because nobody felt like rewriting a
>> Berkeley-era kluge.
>
> Maybe adding a ToDo: "Fix replay hazards in CREATE DATABASE" and listing
> them explicitly would be a good start. Not sure if WAL or CREATE would
> be more appropriate, but WAL seems like a better fit.
>
>> My opinion is that we should do the WAL-logging, unless
>> wal_level=minimal, in which case we just fsync everything before
>> commit. That would hurt people who are currently using CREATE DATABASE
>> with a large template database, with WAL archiving enabled, but for
>> everyone else it would be a win or not noticeable.
>>
>> If that's not acceptable, perhaps we could assume that a database with
>> datallowconn=false can be copied without WAL-logging, i.e. assume that
>> a database with !datallowconn won't be modified. Of course, that's a
>> shaky assumption because someone might just switch datallowconn back on
>> after the CREATE DATABASE. Maybe we could force a checkpoint when you
>> do that, instead..
>
> To the topic at hand: would "CREATE DATABASE name WITH LOGGED = true"
> work? As with UNLOGGED tables, giving the user the choice of
> WAL/fsync/checkpoint behavior seems reasonable. As Thomas said, there
> are a couple of diametrically opposed use-cases here and it seems like
> we've already implemented the more difficult one.

Yeah, that would be another way to provide an escape hatch if someone
absolutely needs to avoid the extra I/O. I wouldn't like to maintain two
different ways of doing the same thing, though. It would be a feature that
would be used rarely, so even if we get it right, subtle bugs could easily
creep into it over the years.

- Heikki
Re: [HACKERS] Better support of exported snapshots with pg_dump
On Tue, Oct 28, 2014 at 8:08 PM, Petr Jelinek wrote:
> Hi,
>
> On 28/10/14 03:15, Michael Paquier wrote:
>> Updated patch with those comments addressed is attached.
>
> Great, I have no further comments so I consider this patch ready for
> committer (and will mark it so momentarily).

OK, thanks!

> Just as a note - there is the issue you raised yourself about the less
> than perfect error message, but I really don't think it's worth the
> trouble to have special handling for it as the message coming from the
> server is clear enough IMHO.

Yeah, I'm thinking the same. Let's see what others think (btw, if this
code gets committed, be sure to mark Simon as a co-author).

Regards,

--
Michael
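For reference, the workflow this patch aims to support looks roughly like this (the --snapshot option name is as proposed in the patch and could change before commit):

```shell
# Session 1: open a transaction and export its snapshot; the transaction
# must stay open for as long as pg_dump runs.
#   BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
#   SELECT pg_export_snapshot();   -- returns e.g. '00000003-00000002-1'

# Session 2: have pg_dump take its data using exactly that snapshot:
pg_dump --snapshot=00000003-00000002-1 mydb > mydb.sql
```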
Re: [HACKERS] Better support of exported snapshots with pg_dump
Hi,

On 28/10/14 03:15, Michael Paquier wrote:
> Updated patch with those comments addressed is attached.

Great, I have no further comments so I consider this patch ready for
committer (and will mark it so momentarily).

Just as a note - there is the issue you raised yourself about the less
than perfect error message, but I really don't think it's worth the
trouble to have special handling for it as the message coming from the
server is clear enough IMHO.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes
On Tue, Oct 28, 2014 at 4:54 PM, Syed, Rahila wrote:
> Hello Fujii-san,
>
> Thank you for your comments.
>
>> The patch isn't applied to the master cleanly.
>> The compilation of the document failed with the following error message.
>> openjade:config.sgml:2188:12:E: end tag for element "TERM" which is not open
>> make[3]: *** [HTML.index] Error 1
>> xlog.c:930: warning: ISO C90 forbids mixed declarations and code
>> xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code
>> xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code
>
> Please find attached patch with these rectified.
>
>> Only backend calls CompressBackupBlocksPagesAlloc when SIGHUP is sent.
>> Why does only backend need to do that? What about other processes which
>> can write FPW, e.g., autovacuum?
>
> I had overlooked this. I will correct it.
>
>> Do we release the buffers for compressed data when fpw is changed from
>> "compress" to "on"?
>
> The current code does not do this.

Don't we need to do that?

>> The memory is always (i.e., even when fpw=on) allocated to
>> uncompressedPages, but not to compressedPages. Why? I guess that the
>> test of fpw needs to be there.
>
> uncompressedPages is also used to store the decompression output at the
> time of recovery. Hence, memory for uncompressedPages needs to be
> allocated even if fpw=on, which is not the case for compressedPages.

You don't need to make the processes except the startup process allocate
the memory for uncompressedPages when fpw=on. Only the startup process
uses it for WAL decompression.

BTW, what happens if the memory allocation for uncompressedPages for the
recovery fails? That would prevent recovery entirely, so should a PANIC
happen in that case?

Regards,

--
Fujii Masao
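The "fpw" setting discussed throughout this review extends the existing full_page_writes GUC with a third value (as proposed in this patch series; the feature could well be committed in a different form):

```
# postgresql.conf
full_page_writes = compress    # on | off | compress
```

With compress, full-page images are compressed before being inserted into the WAL record, and the startup process decompresses them into uncompressedPages during recovery — which is why that buffer is needed even when the setting is plain "on" on the server doing the recovery.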
Re: [HACKERS] On partitioning
On 2014-10-28 14:34:22 +0900, Amit Langote wrote:
> Hi,
>
> > From: Andres Freund [mailto:and...@2ndquadrant.com]
> > On 2014-10-27 06:29:33 -0300, Alvaro Herrera wrote:
> > > Amit Langote wrote:
> > > > FWIW, I think Robert's criticism regarding not basing this on the
> > > > inheritance scheme was not responded to.
> > >
> > > It was responded to by ignoring it. I didn't see anybody else
> > > supporting the idea that inheritance is in any way a sane thing to
> > > base partitioning on. Sure, we have accumulated lots of kludges over
> > > the years to cope with the fact that, really, it doesn't work very
> > > well. So what. We can keep them, I don't care.
> >
> > As far as I understood Robert's criticism it was more about the
> > internals than about the userland representation. To me it's
> > absolutely clear that 'real partitioning' userland shouldn't be based
> > on the current hacks to allow it.
>
> For my understanding: by the partitioning 'userland' representation, do
> you mean an implementation choice where a partition is literally an
> inheritance child of the partitioned table as registered in pg_inherits?
> Or something else?

Yes, I mean explicit usage of INHERITS. In my opinion we can reuse (some
of) the existing logic for INHERITS to implement "proper" partitioning,
but that should be an implementation detail.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Add CREATE support to event triggers
On 2014-10-28 05:30:43 -0300, Alvaro Herrera wrote:
> > A couple of comments: this patch introduces a basic infrastructure
> > able to do the following set of operations:
> > - Obtaining the parse tree using StashedCommand
> > - Reorganization of the parse tree to become an ObjTree, with
> >   booleans, arrays
> > - Reorganization of the ObjTree into a JsonB value
> > I am actually a bit doubtful about why we actually need this
> > intermediate ObjTree step... What would be wrong in manipulating a
> > JSON(B) parsed tree, that is first empty, and pushing key/value
> > objects to it when processing each command? Something moving in this
> > direction is that ObjTree has some logic to manipulate booleans, null
> > values, etc. This results in some duplication with what jsonb and json
> > can actually do when creating and manipulating strings, as well as the
> > extra processing like objtree_to_jsonb_element. ddl_json.c (should be
> > ddl_jsonb.c?) would as well make more sense as it directly manipulates
> > JSONB containers.
>
> Uhm. Obviously we didn't have jsonb when I started this and we do have
> it now, so I could perhaps see about updating the patch to do things
> this way; but I'm not totally sold on that idea, as my ObjTree stuff is
> a lot easier to manage and the jsonb API is pretty ugly.

I looked at this as well, and I think trying to do so would not result in
readable code.

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [BUGS] ltree::text not immutable?
Tom Lane wrote:
> Not entirely sure what to do about this. It seems like for the purposes
> of contrib/chkpass, it's a good thing that chkpass_in() won't reuse the
> same salt. Weak as a 12-bit salt might be nowadays, it's still better
> than no salt. Nonetheless, this behavior is breaking assumptions made
> in places like array_in and record_in.
>
> For the moment I'm tempted to mark chkpass_in as stable (with a comment
> explaining that it isn't really) just so we can put in the error check
> in CREATE TYPE. But I wonder if anyone has a better idea.

Can we have a separate function that accepts unencrypted passwords, and
change chkpass_in to only accept encrypted ones? Similar to to_tsquery()
et al.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
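The non-immutability Tom describes is easy to see from SQL: converting the same literal twice yields different stored values, because each call to chkpass_in() generates a fresh random salt (the outputs shown are placeholders, not real results):

```sql
CREATE EXTENSION chkpass;

SELECT 'hello'::chkpass;   -- e.g. :ab8tQw2k...
SELECT 'hello'::chkpass;   -- a different value: new salt each call

-- Comparison still works, since the eq operator re-crypts the plaintext
-- using the salt stored in the chkpass value:
SELECT 'hello'::chkpass = 'hello';   -- true
```

It is exactly this "same input, different datum" behaviour that breaks the immutable-input-function assumption made in array_in and record_in.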
Re: [HACKERS] Autovacuum fails to keep visibility map up-to-date in mostly-insert-only-tables
Jeff Janes wrote:
> It is only a page read if you have to read the page. It would seem
> optimal to have bgwriter adventitiously set hint bits and vm bits,
> because that is the last point at which the page can be changed without
> risking that it be written out twice. At that point, it has been given
> the maximum amount of time it can be given for the interested
> transactions to have committed and to have aged past the xmin horizon.
> I seem to recall that the main problem with that, though, is that you
> must be attached to a database in order to determine visibility, and
> bgwriter is not attached to a database.

Regarding tuple hint bits, I couldn't find any such limitation in
SetHintBits, other than in MarkBufferDirtyHint there being some code that
would cause trouble: it accesses MyPgXact, which bgwriter would obviously
not have. Maybe worth some experimentation ...

I'm not sure about vm bits, though. That's a whole different topic.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] jsonb generator functions
Andrew Dunstan wrote:

This bit:

> +/*
> + * Determine how we want to render values of a given type in
> + * datum_to_jsonb.
> + *
> + * Given the datatype OID, return its JsonbTypeCategory, as well as the
> + * type's output function OID. If the returned category is
> + * JSONBTYPE_CAST, we return the OID of the type->JSON cast function
> + * instead.
> + */
> +static void
> +jsonb_categorize_type(Oid typoid,
> +                      JsonbTypeCategory *tcategory,
> +                      Oid *outfuncoid)
> +{

seems like it can return without having set the category and func OID, if
there's no available cast. Callers don't seem to check for this
condition; is this a bug? If not, why not? Maybe some extra comments are
warranted.

Right now, for the "general case" there, there are two syscache lookups
rather than one. The fix is simple: just do the getTypeOutputInfo call
inside each case inside the switch instead of once at the beginning, so
that the general case can omit it; then there is just one syscache access
in all the cases. json.c suffers from the same problem.

Anyway, this whole business of searching through the CASTSOURCETARGET
syscache seems like it could be refactored. If I'm counting correctly,
that block now appears four times (three in this patch, once in json.c).
Can't we add a new function to (say) lsyscache and remove that? I'm just
commenting on that part because the syscache.h/pg_cast.h inclusions look
a bit out of place; it's certainly not a serious issue.

I looked at what makes you include miscadmin.h. It's only USE_XSD_DATES
as far as I can tell. I looked at how that might be fixed, and a quick
patch that moves DateStyle, DateOrder and IntervalStyle (and associated
definitions) to datetime.h seems to work pretty well ... except that
initdb.c requires knowledge of some DATEORDER constants; but frontend
code cannot include datetime.h because of Datum. So that idea crashed and
burned until someone reorganizes the whole datetime code, which currently
is pretty messy.

I don't have any further comments on this patch, other than: please add
JsonbTypeCategory to pgindent/typedefs.list before doing your pgindent
run.

--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
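The C-level machinery being reviewed here (datum_to_jsonb and jsonb_categorize_type) sits underneath the patch's new user-facing generator functions. Their userland surface is along these lines (function names per the patch; sketched, not checked against the final commit):

```sql
-- to_jsonb() renders any datum according to its JsonbTypeCategory,
-- using the type's output function or its cast to json:
SELECT to_jsonb(42);
SELECT to_jsonb(timestamp '2014-10-28 12:00:00');

-- Variadic generators built on the same datum_to_jsonb machinery:
SELECT jsonb_build_object('a', 1, 'b', ARRAY[1, 2]);
SELECT jsonb_build_array(1, 'two', NULL);
```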
Re: [HACKERS] Add CREATE support to event triggers
Hi Michael, thanks for the review.

Michael Paquier wrote:
> On Thu, Oct 16, 2014 at 4:01 PM, Michael Paquier wrote:
> > On Mon, Oct 13, 2014 at 12:45 PM, Alvaro Herrera wrote:
> > > Here's a new version of this series.
>
> Here is some input on patch 4:
>
> 1) Use of OBJECT_ATTRIBUTE:
> +        case OBJECT_ATTRIBUTE:
> +            catalog_id = TypeRelationId;    /* XXX? */
> +            break;
> I think that the XXX mark could be removed, using TypeRelationId is
> correct IMO as OBJECT_ATTRIBUTE is used when managing an attribute
> object, which is a custom type used for CREATE/ALTER TYPE.

Agreed.

> 2) This patch is showing up many warnings, among them:

Will check.

> 3) What's that actually?
> +/* XXX merge this with ObjectTypeMap? */
>  static event_trigger_support_data event_trigger_support[] = {
> We may be able to do something smarter with event_trigger_support[],
> but as this consists in a mapping of subcommands associated with
> CREATE/DROP/ALTER and is only a reverse engineering operation of
> somewhat AlterObjectTypeCommandTag or CreateCommandTag, I am not sure
> how you could merge that. Some input perhaps?

ObjectTypeMap is part of the patch that handles DROP for replication; see
my other patch in the commitfest. I am also not sure how to merge all
this stuff; with these patches, we would have three "do event triggers
support this object type" functions, so I was thinking of having maybe a
text file from which these functions are generated by a Perl script or
something. But for now I think it's okay to keep things like this. That
comment is only there to remind me that some cleanup might be in order.

> 4)
> +/*
> + * EventTriggerStashCommand
> + *        Save data about a simple DDL command that was just executed
> + */
> Shouldn't this be "single" instead of "simple"?

In an older version it was "basic". I'm not wedded to "simple", but I
don't think "single" is the right thing. A later patch in the series
introduces type Grant, and there's also type AlterTable. The idea behind
Simple is to include command types that do not require special handling;
but all these commands are single commands.

> 6)
> +            command = deparse_utility_command(cmd);
> +
> +            /*
> +             * Some parse trees return NULL when deparse is attempted;
> +             * we don't emit anything for them.
> +             */
> +            if (command != NULL)
> +            {
> Small detail, but you may here just use a continue to make the code a
> bit more readable after deparsing the command.

Will check.

> 9) Those declarations are not needed in event_trigger.c:
> +#include "utils/json.h"
> +#include "utils/jsonb.h"

Will check. I split ddl_json.c off at the last minute and I may have
forgotten to remove these.

> 10) Would you mind explaining what "fmt" means?
> + * Allocate a new object tree to store parameter values -- varargs
> + * version.
> + *
> + * The "fmt" argument is used to append as a "fmt" element in the
> + * output blob.

"fmt" is equivalent to sprintf and friends' fmt argument. I guess this
comment needs to be expanded a bit.

> 11) deparse_utility.c mentions here and there JSON objects, but what is
> created are JSONB objects. I'd rather be clear here.

Good point.

> 12) Already mentioned before, but this reverse engineering machinery
> for types would be more welcome as a set of functions in lsyscache (one
> for namespace, one for type name and one for is_array). For typemodstr
> the need is different as it uses printTypmod.
> +void
> +format_type_detailed(Oid type_oid, int32 typemod,
> +                     Oid *nspid, char **typname, char **typemodstr,
> +                     bool *is_array)

I am unsure about this. Other things that require many properties of the
same object do a single lookup and return all of them in a single call,
rather than repeated calls.

> 13) This change seems unrelated to this patch...
> -            int         type = 0;
> +            JsonbIteratorToken type = WJB_DONE;
>              JsonbValue  v;
>              int         level = 0;
>              bool        redo_switch = false;
> @@ -454,7 +454,7 @@ JsonbToCString(StringInfo out, JsonbContainer *in,
> int estimated_len)
>                  first = false;
>                  break;
>              default:
> -                elog(ERROR, "unknown flag of jsonb iterator");
> +                elog(ERROR, "unknown jsonb iterator token type");

Yes, sorry. I was trying to figure out how to use the jsonb stuff and I
found this error message was quite unclear. In general, the jsonb code
seems to have random warts ...

> A couple of comments: this patch introduces a basic infrastructure able
> to do the following set of operations:
> - Obtaining the parse tree using StashedCommand
> - Reorganization of the parse tree to become
Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes
Hello Fujii-san,

Thank you for your comments.

>The patch isn't applied to the master cleanly.
>The compilation of the document failed with the following error message.
>openjade:config.sgml:2188:12:E: end tag for element "TERM" which is not open
>make[3]: *** [HTML.index] Error 1
>xlog.c:930: warning: ISO C90 forbids mixed declarations and code
>xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code
>xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code

Please find attached the patch with these rectified.

>Only backend calls CompressBackupBlocksPagesAlloc when SIGHUP is sent.
>Why does only backend need to do that? What about other processes which
>can write FPW, e.g., autovacuum?

I had overlooked this. I will correct it.

>Do we release the buffers for compressed data when fpw is changed from
>"compress" to "on"?

The current code does not do this.

>The memory is always (i.e., even when fpw=on) allocated to
>uncompressedPages, but not to compressedPages. Why? I guess that the test
>of fpw needs to be there.

uncompressedPages is also used to store the decompression output at the
time of recovery. Hence, memory for uncompressedPages needs to be
allocated even if fpw=on, which is not the case for compressedPages.

Thank you,
Rahila Syed

-----Original Message-----
From: Fujii Masao [mailto:masao.fu...@gmail.com]
Sent: Monday, October 27, 2014 6:50 PM
To: Rahila Syed
Cc: Andres Freund; Syed, Rahila; Alvaro Herrera; Rahila Syed;
PostgreSQL-development; Tom Lane
Subject: Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

On Fri, Oct 17, 2014 at 1:52 PM, Rahila Syed wrote:
> Hello,
>
> Please find the updated patch attached.

Thanks for updating the patch! Here are the comments.

The patch isn't applied to the master cleanly.

I got the following compiler warnings.

xlog.c:930: warning: ISO C90 forbids mixed declarations and code
xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code
xlogreader.c:744: warning: ISO C90 forbids mixed declarations and code

The compilation of the document failed with the following error message.

openjade:config.sgml:2188:12:E: end tag for element "TERM" which is not open
make[3]: *** [HTML.index] Error 1

Only backend calls CompressBackupBlocksPagesAlloc when SIGHUP is sent.
Why does only backend need to do that? What about other processes which
can write FPW, e.g., autovacuum?

Do we release the buffers for compressed data when fpw is changed from
"compress" to "on"?

+    if (uncompressedPages == NULL)
+    {
+        uncompressedPages = (char *) malloc(XLR_TOTAL_BLCKSZ);
+        if (uncompressedPages == NULL)
+            outOfMem = 1;
+    }

The memory is always (i.e., even when fpw=on) allocated to
uncompressedPages, but not to compressedPages. Why? I guess that the test
of fpw needs to be there.

Regards,

--
Fujii Masao

[Attachment: compress_fpw_v2.patch]
Re: [HACKERS] Add CREATE support to event triggers
On Thu, Oct 16, 2014 at 4:01 PM, Michael Paquier wrote:
> On Mon, Oct 13, 2014 at 12:45 PM, Alvaro Herrera wrote:
> > Here's a new version of this series.

Here is some input on patch 4:

1) Use of OBJECT_ATTRIBUTE:
+        case OBJECT_ATTRIBUTE:
+            catalog_id = TypeRelationId;    /* XXX? */
+            break;
I think that the XXX mark could be removed; using TypeRelationId is
correct IMO as OBJECT_ATTRIBUTE is used when managing an attribute
object, which is a custom type used for CREATE/ALTER TYPE.

2) This patch is showing up many warnings, among them:
event_trigger.c:1460:20: note: uninitialized use occurs here
        addr.classId = classId;
                       ^~~~~~~
event_trigger.c:1446:21: note: initialize the variable 'objSubId' to
silence this warning
        uint32      objSubId;
                            ^
                             = 0
Or:
deparse_utility.c:301:1: warning: unused function 'append_object_object'
[-Wunused-function]
append_object_object(ObjTree *tree, char *name, ObjTree *value)
^
In the 2nd case though I imagine that those functions in
deparse_utility.c are used afterwards... There are a couple of other
warnings related to SCT_Simple, but that's normal as long as patches 17
or 24 are not combined with it.

3) What's that actually?
+/* XXX merge this with ObjectTypeMap? */
 static event_trigger_support_data event_trigger_support[] = {
We may be able to do something smarter with event_trigger_support[], but
as this consists in a mapping of subcommands associated with
CREATE/DROP/ALTER and is only a reverse engineering operation of somewhat
AlterObjectTypeCommandTag or CreateCommandTag, I am not sure how you
could merge that. Some input perhaps?

4)
+/*
+ * EventTriggerStashCommand
+ *        Save data about a simple DDL command that was just executed
+ */
Shouldn't this be "single" instead of "simple"?

5) I think that SCT_Simple should be renamed as SCT_Single:
+typedef enum StashedCommandType
+{
+    SCT_Simple,
+} StashedCommandType;
This comment holds as well for deparse_simple_command.

6)
+            command = deparse_utility_command(cmd);
+
+            /*
+             * Some parse trees return NULL when deparse is attempted;
+             * we don't emit anything for them.
+             */
+            if (command != NULL)
+            {
Small detail, but you may here just use a continue to make the code a bit
more readable after deparsing the command.

7) pg_event_trigger_get_creation_commands is modified as well in patches
17 and 24. You may as well use an enum on cmd->type.

8) Rejoining a comment made earlier by Andres, it would be nice to have
ERRCODE_WRONG_CONTEXT (unrelated to this patch).
ERRCODE_FEATURE_NOT_SUPPORTED seems rather a different error type...

9) Those declarations are not needed in event_trigger.c:
+#include "utils/json.h"
+#include "utils/jsonb.h"

10) Would you mind explaining what "fmt" means?
+ * Allocate a new object tree to store parameter values -- varargs
+ * version.
+ *
+ * The "fmt" argument is used to append as a "fmt" element in the output
+ * blob.

11) deparse_utility.c mentions here and there JSON objects, but what is
created are JSONB objects. I'd rather be clear here.

12) Already mentioned before, but this reverse engineering machinery for
types would be more welcome as a set of functions in lsyscache (one for
namespace, one for type name and one for is_array). For typemodstr the
need is different as it uses printTypmod.
+void
+format_type_detailed(Oid type_oid, int32 typemod,
+                     Oid *nspid, char **typname, char **typemodstr,
+                     bool *is_array)

13) This change seems unrelated to this patch...
-            int         type = 0;
+            JsonbIteratorToken type = WJB_DONE;
             JsonbValue  v;
             int         level = 0;
             bool        redo_switch = false;
@@ -454,7 +454,7 @@ JsonbToCString(StringInfo out, JsonbContainer *in,
int estimated_len)
                 first = false;
                 break;
             default:
-                elog(ERROR, "unknown flag of jsonb iterator");
+                elog(ERROR, "unknown jsonb iterator token type");

14) This could already be pushed as a separate commit:
-extern bool creating_extension;
+extern PGDLLIMPORT bool creating_extension;

A couple of comments: this patch introduces a basic infrastructure able
to do the following set of operations:
- Obtaining the parse tree using StashedCommand
- Reorganization of the parse tree to become an ObjTree, with booleans,
  arrays
- Reorganization of the ObjTree into a JsonB value
I am actually a bit doubtful about why we actually need this intermediate
ObjTree step... What would be wrong in manipu