Re: [HACKERS] [PATCH] postgres_fdw extension support
On Tue, Oct 6, 2015 at 10:32 PM, Andres Freund wrote: > Maybe I'm missing something major here. But given that you're looking up > solely based on Oid objnumber, Oid classnumber, how would this cache > work if there's two foreign servers with different extension lists? Oh. Nice catch here. -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] No Issue Tracker - Say it Ain't So!
On Wednesday 30 September 2015 14:41:34 you wrote: > On Tue, Sep 29, 2015 at 12:08:56PM +1300, Gavin Flower wrote: > > Linux kernel project uses bugzilla (https://bugzilla.kernel.org) > > AIUI this is not mandatory for kernel hackers, and more opt-in from some/many/a few(?) subsystem maintainers. Some parts use it more, some less or not at all. > > > and so does LibreOffice (https://bugs.documentfoundation.org) > > That is true, however. > > Same for freedesktop.org and the Gnome project, I believe. > > > Michael What about Trac? http://trac.edgewall.org/wiki/TracUsers -- YUriy Zhuravlev Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
Re: [HACKERS] [PATCH] postgres_fdw extension support
On 2015-10-06 07:01:53 -0700, Paul Ramsey wrote: > diff --git a/contrib/postgres_fdw/sql/shippable.sql > b/contrib/postgres_fdw/sql/shippable.sql > new file mode 100644 > index 000..83ee38c > --- /dev/null > +++ b/contrib/postgres_fdw/sql/shippable.sql > @@ -0,0 +1,76 @@ I think it'd be good to add a test exercising two servers with different extensions marked as shippable. Greetings, Andres Freund
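A sketch of the kind of test Andres suggests might look like the following. The server names, table names, and extension choices here are hypothetical; the `extensions` server option is the one added by the patch under discussion:

```sql
-- Hypothetical sketch: two foreign servers whose "extensions" lists
-- differ, so the same operator/function must be treated as shippable
-- for one server but not the other (this is exactly the case a cache
-- keyed only on the function's OID would get wrong).
CREATE SERVER loopback1 FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (dbname 'postgres', extensions 'cube');
CREATE SERVER loopback2 FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (dbname 'postgres', extensions 'seg');

CREATE FOREIGN TABLE ft1 (c1 cube) SERVER loopback1
  OPTIONS (table_name 't1');
CREATE FOREIGN TABLE ft2 (c1 cube) SERVER loopback2
  OPTIONS (table_name 't1');

-- cube_dim() should appear in the Remote SQL for ft1 (cube is listed
-- as shippable on loopback1) but be evaluated locally for ft2.
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 WHERE cube_dim(c1) = 1;
EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft2 WHERE cube_dim(c1) = 1;
```

Comparing the two EXPLAIN outputs would also exercise the per-server behavior of the shippability cache that Andres questioned upthread.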
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
On 2015/10/07 6:19, Robert Haas wrote: On Fri, Oct 2, 2015 at 4:26 AM, Kyotaro HORIGUCHI wrote: During join search, a joinrel should be compatible between local joins and remote joins, and of course the target list also should be so. So it is quite difficult to add a wholerow resjunk for joinrels before the whole join tree is completed, even if we allow row marks that are not bound to base RTEs. Suppose ROW_MARK_COPY is in use, and suppose the query is: SELECT ft1.a, ft1.b, ft2.a, ft2.b FROM ft1, ft2 WHERE ft1.x = ft2.x; When the foreign join is executed, there's going to be a slot that needs to be populated with ft1.a, ft1.b, ft2.a, ft2.b, and a whole row reference. Now, let's suppose the slot descriptor has 5 columns: those 4, plus a whole-row reference for ROW_MARK_COPY. IIUC, I think that if ROW_MARK_COPY is in use, the descriptor would have 6 columns: those 4, plus a whole-row var for ft1 and another whole-row var for ft2. Maybe I'm missing something, though. 4, plus a whole-row reference for ROW_MARK_COPY. If we know what values we're going to store in columns 1..4, couldn't we just form them into a tuple to populate column 5? We don't actually need to be able to fetch such a tuple from the remote side because we can just construct it. I think. I also was thinking whether we could replace one of the whole-row vars with a whole-row var that represents the scan slot of the ForeignScanState node. Best regards, Etsuro Fujita
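For concreteness, the case where the row marks actually matter is when the pushed-down foreign join participates in row locking alongside a local table, so that EvalPlanQual may need to recheck the join. A hypothetical example (ft1/ft2 are the foreign tables from Robert's query; `lt` is an assumed local table, not from the thread):

```sql
-- Hypothetical example: ft1 and ft2 are postgres_fdw foreign tables,
-- lt is a plain local table.  The ft1/ft2 join may be pushed down as a
-- single ForeignScan, but if a concurrent update to lt forces an
-- EvalPlanQual recheck, the executor still needs the individual ft1
-- and ft2 rows -- which, under ROW_MARK_COPY, travel as whole-row
-- junk columns in the output slot.
SELECT ft1.a, ft1.b, ft2.a, ft2.b
FROM ft1
JOIN ft2 ON ft1.x = ft2.x
JOIN lt  ON lt.x = ft1.x
FOR UPDATE OF lt;
```

The debate below is whether those junk columns should be one whole-row var per base foreign table, or a single column populating the joined ForeignScan's own slot.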
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
Hello, At Wed, 7 Oct 2015 12:30:27 +0900, Etsuro Fujita wrote in <561491d3.3070...@lab.ntt.co.jp> > On 2015/10/07 6:19, Robert Haas wrote: > > On Fri, Oct 2, 2015 at 4:26 AM, Kyotaro HORIGUCHI > > wrote: > >> During join search, a joinrel should be compatible between local > >> joins and remote joins, of course target list also should be > >> so. So it is quite difficult to add wholerow resjunk for joinrels > >> before whole join tree is completed even if we allow row marks > >> that are not bound to base RTEs. > > > Suppose ROW_MARK_COPY is in use, and suppose the query is: SELECT > > ft1.a, ft1.b, ft2.a, ft2.b FROM ft1, ft2 WHERE ft1.x = ft2.x; > > > > When the foreign join is executed, there's going to be a slot that > > needs to be populated with ft1.a, ft1.b, ft2.a, ft2.b, and a whole row > > reference. Now, let's suppose the slot descriptor has 5 columns: those > > 4, plus a whole-row reference for ROW_MARK_COPY. > > IIUC, I think that if ROW_MARK_COPY is in use, the descriptor would > have 6 columns: those 4, plus a whole-row var for ft1 and another > whole-row var for ft2. Maybe I'm missing something, though. You're right. The result tuple for Robert's example has 6 attributes in the order of [ft1.a, ft1.b, (ft1.a, ft1.b), ft2.a...] But the point of the discussion lies elsewhere. The problem is when such joins are joined with another local table. In that case, the whole-row reference for the intermediate foreign join would lose the targets in the top-level tuple. > > 4, plus a whole-row reference for ROW_MARK_COPY. If we know what > > values we're going to store in columns 1..4, couldn't we just form > > them into a tuple to populate column 5? We don't actually need to be > > able to fetch such a tuple from the remote side because we can just > > construct it. I think. > > I also was thinking whether we could replace one of the whole-row vars > with a whole-row var that represents the scan slot of the > ForeignScanState node. 
I suppose it requires an additional resjunk column to be added at joinrel creation, which is what Kaigai-san said was overkill. But I'm interested in what it looks like. cheers, -- Kyotaro Horiguchi NTT Open Source Software Center
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
On Tue, Oct 6, 2015 at 11:30 PM, Etsuro Fujita wrote: > IIUC, I think that if ROW_MARK_COPY is in use, the descriptor would have 6 > columns: those 4, plus a whole-row var for ft1 and another whole-row var for > ft2. Maybe I'm missing something, though. Currently, yes, but I think we should change it so that... >> 4, plus a whole-row reference for ROW_MARK_COPY. If we know what >> values we're going to store in columns 1..4, couldn't we just form >> them into a tuple to populate column 5? We don't actually need to be >> able to fetch such a tuple from the remote side because we can just >> construct it. I think. > I also was thinking whether we could replace one of the whole-row vars with > a whole-row var that represents the scan slot of the ForeignScanState node. ...it works like this instead. KaiGai is suggesting that constructing a join plan to live under the foreign scan-qua-join isn't really that difficult, but that is like saying that it's OK to go out to a nice sushi restaurant without bringing any money because it won't be too hard to talk the manager into letting you leave for a quick trip to the ATM at the end of the meal. Maybe so, maybe not, but if you plan ahead and bring money then you don't have to worry about it. The only reason why we would need the join plan in the first place is because we had the foreign scan output whole-row vars for the baserels instead of its own slot. If we have it output a var for its own slot then it doesn't matter whether constructing the join plan is easy or hard, because we don't need it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
On Wed, Oct 7, 2015 at 12:10 AM, Kyotaro HORIGUCHI wrote: >> IIUC, I think that if ROW_MARK_COPY is in use, the descriptor would >> have 6 columns: those 4, plus a whole-row var for ft1 and another >> whole-row var for ft2. Maybe I'm missing something, though. > > You're right. The result tuple for Robert's example has 6 > attributes in the order of [ft1.a, ft1.b, (ft1.a, ft1.b), ft2.a...] > > But the point of the discussion is in another point. The problem > is when such joins are joined with another local table. For such > case, the whole-row reference for the intermediate foreign-join > would lose the targets in top-level tuple. Really? Would that mean that ROW_MARK_COPY is totally broken? I bet it's not. >> > 4, plus a whole-row reference for ROW_MARK_COPY. If we know what >> > values we're going to store in columns 1..4, couldn't we just form >> > them into a tuple to populate column 5? We don't actually need to be >> > able to fetch such a tuple from the remote side because we can just >> > construct it. I think. >> >> I also was thinking whether we could replace one of the whole-row vars >> with a whole-row var that represents the scan slot of the >> ForeignScanState node. > > I suppose it requires additional resjunk to be added on joinrel > creation, which is what Kaigai-san said was overkill. But I'm > interested in what it looks like. I think it rather requires *replacing* two resjunk columns by one new one. The whole-row references for the individual foreign tables are only there to support EvalPlanQual; if we instead have a column to populate the foreign scan's slot directly, then we can use that column for that purpose directly and there's no remaining use for the whole-row vars on the baserels. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 6, 2015 at 10:57:42AM -0700, Josh Berkus wrote: > This is kind of like CVS. We didn't upgrade to Subversion, because we > said "we already have a user-friendly interface to CVS, called Marc." > We only moved to git when it could provide us with solid advantages. > > I believe the same thing is happening here. The inefficiency of the old > system (Bruce's mailbox) is becoming higher than the inefficiency of a > new, hypothetical system. Yes, just like I used to handle the uncommitted patches until we had a commitfest app. I was glad to be done with that job too. > > Therefore, our current default behavior is to ignore user reports, > > unless someone takes an action to reply, record, or retain the email for > > later review. What a tracker does is to make the default user report be > > _retained_, meaning we have to take action to _not_ retain a user report > > as an open item. > > Well, we can determine how that's handled. There are bug trackers out > there that automatically archive unconfirmed bug reports after a certain > amount of time. I'd personally recommend it. > > Of course, that requires a bug tracker which can have an "unconfirmed" > status. Yes, interesting idea. Basically, someone needs to get more benefit from the tracking than the work we put into it. It might be that our users mostly get the benefits. > > Second, we have a mix of user reports. Some bug reports are not bugs > > and must be reclassified. In other cases, users ask questions via > > non-tracked communication channels, e.g. pgsql-general, but they are > > really bugs. So, to do this right, we need a way of marking tracked > > bugs as not bugs, and a way of adding bugs that were reported in a > > non-tracked manner. > > Yeah, I was wondering about that. Yes, that is 50% of the items that end up on the TODO list. > Speaking of which ... this project is rich in skilled users who are > involved in the community but don't code. 
Bug triage is exactly the > kind of thing very part-time community supporters can do, if we make it > easy for them to do. Yes. Part of the problem is that tracker maintenance is almost done in a closet, so there is little outward reinforcement to keep people motivated. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Roman grave inscription +
Re: [HACKERS] bugs and bug tracking
On 6 October 2015 at 12:32, Nathan Wagner wrote: > > Also, the version numbers are user reported and a bit of a mess. I > don't think they could really be relied on as is for users trying to > find out if a bug affects their version. Someone would have to update > that information, and communicate that update to the tracker. The > general consensus seems to be that magic phrases at the beginning of a > line in a message body is the way to go. I don't necessarily agree, but > any consistent system can be made to work. This obviously applies to > any metadata, not just version numbers. > I also don't agree that everything will happen by magic... for example, bug # 6150 (https://granicus.if.org/pgbugs/6150) was reported for PostgreSQL version: 0.9, so we need a way to, at least, fix those. Also, while the bug report allows you to say which OS was affected, that is the one the user was using. Still, the bug could happen in all OSes (versions?), only some, or only a specific OS in a specific version. So we need to have an interface to fix metadata... and people taking responsibility for that. -- Jaime Casanova www.2ndQuadrant.com Professional PostgreSQL: Soporte 24x7 y capacitación
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 06, 2015 at 01:17:48PM -0400, Bruce Momjian wrote: > I do think we rushed to choose a tracker a little too quickly. Have we chosen one? > Let me explain, from a high level, what a new tracker will change in > our workflow. [snip] I won't quote your whole message, which I essentially agree with. Let me say that the questions I have brought up have several purposes. One, I think it's important to identify what exactly we're after. I hope my questions have helped put some light on that. Two, I think any attempt to tell the developers and committers that they need to change their workflow to adapt to some system is bound to fail, so I have asked: just what changes would you all be willing to actually *make*? Tom Lane is pretty good at noting a bug number in his commit messages, for example. Would he be willing to modify that slightly to make it easier to machine parse? Would you be willing to add a bug number to your commit messages? I'm not asking for guarantees. Actually I'm not really asking for anything, I'm just trying to figure out what the parameters of a solution might be. If the answer to that is "no, I'm not willing to change anything at all", that's fine, it just colors what might be done and how much automation I or someone else might be able to write. I think even with a bug tracker the default "ignore" behavior can still be done. In principle, we could even mark bugs as "unconfirmed" or "logged" or something right away and only mark them as new or open or something if they actually draw a reply. We could even require a reply from a committer if that was wanted. If I haven't made it clear by now, I am trying to write a system that requires the absolute minimal amount of change to the existing way of doing things. As I noted in my original email, I've put together a bug tracker, not a ticket system. 
If people would like to make some small adjustments to make it easier to automate a bug tracker, that would be great, but if not, that's fine too; it's no *worse* than what we already have. And if people really wanted a ticket system, it wouldn't be hard to enhance a tracker. > My point is that we have our current workflow not because we are > idiots, but because it fit our workflow and resources best. I am not > sure if we have succeeded because of our current non-retain mode, or > in spite of it. It might be time to switch to a default-retain mode, > especially since most other projects have that mode, but we should be > clear what we are getting into. I think having a bug tracker and retention vs non-retention are orthogonal questions. But I agree that knowing what might be involved is a good idea. I think perhaps one of the problems is that different people want different things, so it's probably going to be hard to make everyone happy. -- nw
Re: [HACKERS] bugs and bug tracking
Joshua D. Drake wrote: > On 10/06/2015 11:33 AM, Alvaro Herrera wrote: > >So I am dubious that people that currently do not contribute will > >contribute in the future just because we change the system. > > No, not just because we change the software. The mindset has to change too > and procedures have to change too. > > That said, your argument boils down to, "I once heated water with wood and > it didn't boil. Therefore I won't heat water again." [I have heated water with wood till boiling point, FWIW] > It should be, "I once heated water with wood and it didn't boil. How can I > change my process so that it will?" Oh, I am not saying we shouldn't have a tracker ("change the process"). I'm just saying that this particular argument for it is bogus. > Until hackers have that mindset about all this stuff (except hacking) we > will continue to hit walls we don't have to hit. I personally deal with bug reports almost every day and would love to see a change in this area. I am not hopeful that unwashed masses will jump to assist me (us) in keeping the bug tracker clean and useful. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] bugs and bug tracking
On 10/06/2015 12:03 PM, Bruce Momjian wrote: > On Tue, Oct 6, 2015 at 03:33:20PM -0300, Alvaro Herrera wrote: >> Joshua D. Drake wrote: >>> On 10/06/2015 10:57 AM, Josh Berkus wrote: On 10/06/2015 10:17 AM, Bruce Momjian wrote: >> Speaking of which ... this project is rich in skilled users who are involved in the community but don't code. Bug triage is exactly the kind of thing very part-time community supporters can do, if we make it easy for them to do. >>> >>> That is an understatement. There is a huge pool of non-hackers that can >>> help contribute to this sort of thing. >> >> It was said, way back when, "adding Windows support will add a huge pool >> of Windows-only developers". I'm not sure that the impact was really >> all that big there. We have a few Windows-enabled people, but how many >> of them are Windows-only? >> >> We similarly said, "moving the TODO list to the wiki will add a huge >> pool of users that cannot edit the current CVS-only file". To date, >> most of what has happened is that the old items have become stale and >> Bruce continues to do 99% of the work of maintaining it. >> >> So I am dubious that people that currently do not contribute will >> contribute in the future just because we change the system. > > Agreed, though that did work for the commitfest --- I almost never deal > with those patches, for good and bad. There isn't a huge pool, but then we don't need a huge pool. We need like three people, and I think that's not unreasonable to expect. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 6, 2015 at 8:15 PM, Nathan Wagner wrote: > On Tue, Oct 06, 2015 at 10:57:42AM -0700, Josh Berkus wrote: > > > Speaking of which ... this project is rich in skilled users who are > > involved in the community but don't code. Bug triage is exactly the > > kind of thing very part-time community supporters can do, if we make it > > easy for them to do. > > I can make it easy. But that doesn't directly address two of the other > points: > > 1: Do we need any system for who is allowed to triage bugs? > 2: Should an equivalent email be sent to the list? > > Specifically with point number 2, suppose the above mechanism is > used. When a triager marks a bug as (say) not a bug, should > the system just update the database, or should it actually > send a formatted email to the bugs list with the 'Bug Status: not a bug' > line (among others, presumably)? I think it should send the email, > but I can see how that could be construed as junk. I think that's an absolute requirement. Otherwise the system will force people to check both email and the tracker. The whole point is that those who prefer the email-only workflow should be able to keep that one. If someone doesn't want them, it's easy enough to filter them in the MUA. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] [PATCH] postgres_fdw extension support
On Tue, Oct 6, 2015 at 8:52 AM, Andres Freund wrote: > I think it'd be good to add a test exercising two servers with different > extensions marked as shippable. Done, P [Attachment: 20151006b_postgres_fdw_extensions.patch]
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 6, 2015 at 03:33:20PM -0300, Alvaro Herrera wrote: > Joshua D. Drake wrote: > > On 10/06/2015 10:57 AM, Josh Berkus wrote: > > >On 10/06/2015 10:17 AM, Bruce Momjian wrote: > > >Speaking of which ... this project is rich in skilled users who are > > >involved in the community but don't code. Bug triage is exactly the > > >kind of thing very part-time community supporters can do, if we make it > > >easy for them to do. > > > > That is an understatement. There is a huge pool of non-hackers that can > > help contribute to this sort of thing. > > It was said, way back when, "adding Windows support will add a huge pool > of Windows-only developers". I'm not sure that the impact was really > all that big there. We have a few Windows-enabled people, but how many > of them are Windows-only? > > We similarly said, "moving the TODO list to the wiki will add a huge > pool of users that cannot edit the current CVS-only file". To date, > most of what has happened is that the old items have become stale and > Bruce continues to do 99% of the work of maintaining it. > > So I am dubious that people that currently do not contribute will > contribute in the future just because we change the system. Agreed, though that did work for the commitfest --- I almost never deal with those patches, for good and bad. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Roman grave inscription +
Re: [HACKERS] bugs and bug tracking
On 10/06/2015 10:17 AM, Bruce Momjian wrote: > First, let me say I am glad we are talking about this, and am open to > the criticism that my and others' tracking open items by keeping them in > our personal mailboxes is not only odd, but bizarre given the size of > our community and the responsibility we have. On the other hand, until we do have some kind of tracker, it is absolutely essential. But at this point, it's more than one person can do. This is kind of like CVS. We didn't upgrade to Subversion, because we said "we already have a user-friendly interface to CVS, called Marc." We only moved to git when it could provide us with solid advantages. I believe the same thing is happening here. The inefficiency of the old system (Bruce's mailbox) is becoming higher than the inefficiency of a new, hypothetical system. > Therefore, our current default behavior is to ignore user reports, > unless someone takes an action to reply, record, or retain the email for > later review. What a tracker does is to make the default user report be > _retained_, meaning we have to take action to _not_ retain a user report > as an open item. Well, we can determine how that's handled. There are bug trackers out there that automatically archive unconfirmed bug reports after a certain amount of time. I'd personally recommend it. Of course, that requires a bug tracker which can have an "unconfirmed" status. > Second, we have a mix of user reports. Some bug reports are not bugs > and must be reclassified. In other cases, users ask questions via > non-tracked communication channels, e.g. pgsql-general, but they are > really bugs. So, to do this right, we need a way of marking tracked > bugs as not bugs, and a way of adding bugs that were reported in a > non-tracked manner. Yeah, I was wondering about that. > My point is that we have our current workflow not because we are idiots, > but because it fit our workflow and resources best. 
I am not sure if we > have succeeded because of our current non-retain mode, or in spite of > it. It might be time to switch to a default-retain mode, especially > since most other projects have that mode, but we should be clear what we > are getting into. FWIW, when I talk about bugs which we lost track of, they're not generally unconfirmed bug reports. Usually, it's stuff which a contributor replied to, but the bug was low-impact, circumstantial, and hard to reproduce, and came in during a busy period (like release time). So I'd be perfectly OK with the idea that unconfirmed bugs hang around in the system for 60 days, then automatically convert to "stale" status. Until we build up a team of volunteers for bug triage, we might have to do that. Speaking of which ... this project is rich in skilled users who are involved in the community but don't code. Bug triage is exactly the kind of thing very part-time community supporters can do, if we make it easy for them to do. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] bugs and bug tracking
On 10/06/2015 10:57 AM, Josh Berkus wrote: On 10/06/2015 10:17 AM, Bruce Momjian wrote: This is kind of like CVS. We didn't upgrade to Subversion, because we said "we already have a user-friendly interface to CVS, called Marc." We only moved to git when it could provide us with solid advantages. This is a very good point. I believe the same thing is happening here. The inefficiency of the old system (Bruce's mailbox) is becoming higher than the inefficiency of a new, hypothetical system. As one of the longest running contributors to this community, I didn't even know that is where bugs went. How I didn't know this, I have no idea :P Second, we have a mix of user reports. Some bug reports are not bugs and must be reclassified. In other cases, users ask questions via non-tracked communication channels, e.g. pgsql-general, but they are really bugs. So, to do this right, we need a way of marking tracked bugs as not bugs, and a way of adding bugs that were reported in a non-tracked manner. Yeah, I was wondering about that. Right, that is why I am trying to push us toward an "issue" tracker, not a bug tracker. A bug tracker explicitly limits the purpose of something that may otherwise be a huge boon to this community. Speaking of which ... this project is rich in skilled users who are involved in the community but don't code. Bug triage is exactly the kind of thing very part-time community supporters can do, if we make it easy for them to do. That is an understatement. There is a huge pool of non-hackers that can help contribute to this sort of thing. JD -- Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564 PostgreSQL Centered full stack support, consulting and development. New rule for social situations: "If you think to yourself not even JD would say this..." Stop and shut your mouth. It's going to be bad. 
Re: [HACKERS] bugs and bug tracking
Joshua D. Drake wrote: > On 10/06/2015 10:57 AM, Josh Berkus wrote: > >On 10/06/2015 10:17 AM, Bruce Momjian wrote: > >Speaking of which ... this project is rich in skilled users who are > >involved in the community but don't code. Bug triage is exactly the > >kind of thing very part-time community supporters can do, if we make it > >easy for them to do. > > That is an understatement. There is a huge pool of non-hackers that can > help contribute to this sort of thing. It was said, way back when, "adding Windows support will add a huge pool of Windows-only developers". I'm not sure that the impact was really all that big there. We have a few Windows-enabled people, but how many of them are Windows-only? We similarly said, "moving the TODO list to the wiki will add a huge pool of users that cannot edit the current CVS-only file". To date, most of what has happened is that the old items have become stale and Bruce continues to do 99% of the work of maintaining it. So I am dubious that people that currently do not contribute will contribute in the future just because we change the system. -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] bugs and bug tracking
On 10/06/2015 11:33 AM, Alvaro Herrera wrote:
> Joshua D. Drake wrote:
>> On 10/06/2015 10:57 AM, Josh Berkus wrote:
>>> On 10/06/2015 10:17 AM, Bruce Momjian wrote:
>>> Speaking of which ... this project is rich in skilled users who are
>>> involved in the community but don't code. Bug triage is exactly the
>>> kind of thing very part-time community supporters can do, if we make
>>> it easy for them to do.
>>
>> That is an understatement. There is a huge pool of non-hackers that can
>> help contribute to this sort of thing.
>
> It was said, way back when, "adding Windows support will add a huge pool
> of Windows-only developers". I'm not sure that the impact was really all
> that big there. We have a few Windows-enabled people, but how many of
> them are Windows-only?

I think it was naive for anyone to suggest that Windows developers would
show up; we aren't friendly to them. It is true that we go to great pains
to have a decent Windows port, but our community is and (AFAICT) forever
will be Unix centric.

> We similarly said, "moving the TODO list to the wiki will add a huge
> pool of users that cannot edit the current CVS-only file". To date,
> most of what has happened is that the old items have become stale and
> Bruce continues to do 99% of the work of maintaining it.

That isn't a wiki issue; it is a lack-of-policy issue. How do we define
what becomes a TODO? How can someone submit a TODO? Does it get voted on?
Where is this documented? (The list goes on.)

> So I am dubious that people that currently do not contribute will
> contribute in the future just because we change the system.

No, not just because we change the software. The mindset has to change,
and the procedures have to change too. That said, your argument boils
down to, "I once heated water with wood and it didn't boil. Therefore I
won't heat water again." It should be, "I once heated water with wood and
it didn't boil. How can I change my process so that it will?"

Until hackers have that mindset about all this stuff (except hacking) we
will continue to hit walls we don't have to hit.

JD

-- 
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even JD
would say this..." Stop and shut your mouth. It's going to be bad.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] bugs and bug tracking
On 10/06/2015 11:51 AM, Alvaro Herrera wrote:
> Joshua D. Drake wrote:
>
> [I have heated water with wood till boiling point, FWIW]

Hahahah, I have no doubt.

>> It should be, "I once heated water with wood and it didn't boil. How
>> can I change my process so that it will?"
>
> Oh, I am not saying we shouldn't have a tracker ("change the process").
> I'm just saying that this particular argument for it is bogus.
>
>> Until hackers have that mindset about all this stuff (except hacking)
>> we will continue to hit walls we don't have to hit.
>
> I personally deal with bug reports almost every day and would love to
> see a change in this area. I am not hopeful that unwashed masses will
> jump to assist me (us) in keeping the bug tracker clean and useful.

I don't think the unwashed masses will either, but I bet we could get a
handful from -general and even a few that watch but don't participate on
this list. Frankly, I bet the next time I speak I could get at least one
volunteer just by bringing it up (how do you think this whole bug tracker
thing started?).

JD

-- 
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
New rule for social situations: "If you think to yourself not even JD
would say this..." Stop and shut your mouth. It's going to be bad.

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PATCH: index-only scans with partial indexes
Hello,

At Thu, 01 Oct 2015 01:36:51 +0200, Tomas Vondra wrote in
<560c7213.3010...@2ndquadrant.com>
>> Good point. I think we may simply point indexrinfos to the existing
>> list of restrictions in that case - we don't need to copy it. So no
>> additional memory / CPU consumption, and it should make the code
>> working with it a bit simpler.
>
> Attached is v5 of the patch improving the comments (rephrasing, moving
> before function etc.).

Looks good (to me).

> I also attempted to fix the issue with empty list for non-partial
> indexes by simply setting it to rel->baserestrictinfo in
> match_restriction_clauses_to_index (for non-partial indexes),

Although it is not the cause of these errors (or of any errors in the
regression tests), build_paths_for_OR (which doesn't seem to be called in
the regression tests) doesn't set indexrinfos properly.
match_clauses_to_index can share that code in return for an additional
list copy and concat. The attached diff takes the latter approach.
Avoiding the copies would make the code a bit more complicated. But in
any case the regression tests don't tell us whether it is sane or not :(
(TODO)

> but that results in a bunch of
>
>    ERROR: variable not found in subplan target list
>
> during "make installcheck". I can't quite wrap my head around why.

While considering parameterized joins, the clauseset that
get_join_index_paths feeds to get_index_paths is generated from the given
join clausesets (jclauseset and eclauseset). So completing that clauseset
with the necessary restrictinfos from rclauseset fixes the problem.

The attached patch applies on top of the latest v5 patch and addresses
the above issues. (It also modifies expected files, which are the
manifestation of this improvement.)
regards, -- Kyotaro Horiguchi NTT Open Source Software Center diff --git a/src/backend/optimizer/path/indxpath.c b/src/backend/optimizer/path/indxpath.c index c5c2b6e..105351f 100644 --- a/src/backend/optimizer/path/indxpath.c +++ b/src/backend/optimizer/path/indxpath.c @@ -662,6 +662,13 @@ get_join_index_paths(PlannerInfo *root, RelOptInfo *rel, /* We should have found something, else caller passed silly relids */ Assert(clauseset.nonempty); + /* + * get_index_paths requires that restririctinfo for the index is stored in + * clauseset->indexrinfos. Since it doesn't contain join clauses, just + * copying that of rclauseset is enough. + */ + clauseset.indexrinfos = rclauseset->indexrinfos; + /* Build index path(s) using the collected set of clauses */ get_index_paths(root, rel, index, , bitindexpaths); @@ -867,7 +874,6 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel, double loop_count; List *orderbyclauses; List *orderbyclausecols; - List *restrictinfo; List *index_pathkeys; List *useful_pathkeys; bool found_lower_saop_clause; @@ -1015,16 +1021,13 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel, orderbyclausecols = NIL; } - restrictinfo - = (index->indpred != NIL) ? clauses->indexrinfos : rel->baserestrictinfo; - /* * 3. Check if an index-only scan is possible. If we're not building * plain indexscans, this isn't relevant since bitmap scans don't support * index data retrieval anyway. */ index_only_scan = (scantype != ST_BITMAPSCAN && - check_index_only(rel, index, restrictinfo)); + check_index_only(rel, index, clauses->indexrinfos)); /* * 4. 
Generate an indexscan path if there are relevant restriction clauses @@ -1038,7 +1041,7 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel, ipath = create_index_path(root, index, index_clauses, clause_columns, - restrictinfo, + clauses->indexrinfos, orderbyclauses, orderbyclausecols, useful_pathkeys, @@ -1065,7 +1068,7 @@ build_index_paths(PlannerInfo *root, RelOptInfo *rel, ipath = create_index_path(root, index, index_clauses, clause_columns, - restrictinfo, + clauses->indexrinfos, NIL, NIL, useful_pathkeys, @@ -2121,6 +2124,11 @@ match_clauses_to_index(IndexOptInfo *index, Assert(IsA(rinfo, RestrictInfo)); match_clause_to_index(index, rinfo, clauseset); } + + /* copy clauses so that it won't be broken on concatenation afterward */ + if (index->indpred == NIL) + clauseset->indexrinfos = + list_concat(clauseset->indexrinfos, list_copy(clauses)); } /* diff --git a/src/test/regress/expected/aggregates.out b/src/test/regress/expected/aggregates.out index de826b5..88f6a02 100644 --- a/src/test/regress/expected/aggregates.out +++ b/src/test/regress/expected/aggregates.out @@ -780,7 +780,6 @@ explain (costs off) -> Index Only Scan Backward using minmaxtest2i on minmaxtest2 Index Cond: (f1 IS NOT NULL) -> Index Only Scan using minmaxtest3i on minmaxtest3 - Index Cond: (f1 IS NOT NULL) InitPlan 2 (returns $1)
Re: [HACKERS] run pg_rewind on an uncleanly shut down cluster.
On Tue, Oct 6, 2015 at 12:41 AM, Oleksii Kliukin wrote:
> pg_rewind -D postgresql0 --source-server="host=127.0.0.1 port=5433
> dbname=postgres"
> The servers diverged at WAL position 0/360 on timeline 1.
> could not open file "data/postgresql0/pg_xlog/00010002":
> No such file or directory
>
> Note that this problem happens not 100% of time during the tests,
> sometimes pg_rewind can actually rewind the former master.

I don't think that there is any actual reason preventing us from
rewinding a node that has its state in pg_control set as something else
than DB_SHUTDOWNED; the important point here is to be sure that the
target node is *not* running while pg_rewind is running (perhaps
pg_rewind should perform an action on the target node to keep it from
being started, let's say that it creates a fake postgresql.conf with
invalid data and renames the existing one). Checking pg_control makes
things easier though: there is no need to rely on external binaries like
"pg_ctl status" or some parsing of postmaster.pid with kill(pid, 0) for
example.

> I know I can copy the segment back from the archive, but I'd like to
> avoid putting this logic into the failover tool if possible. Is there
> anything we can do to avoid the problem described above, or is there a
> better way to bring up the former master after the crash with pg_rewind?

Well, for 9.5 (and actually the same applies to the 9.3 and 9.4 versions
on github, because I am keeping the previous versions consistent with
what is in 9.5), I guess no. This missing segment is going to be needed
in any case because the list of modified blocks needs to be found, hence
the question is "how can pg_rewind guess where a WAL segment missing from
the target's pg_xlog is located?". And there are multiple answers:
- An archive path; then let's add an option to pg_rewind to specify a
  path, though this needs the archive path to be mounted locally, and
  usually that's not the case.
- An existing node of the cluster; perhaps the segment is still present
  on another standby node that already replayed it, though this would
  need an extra node.
- The source node itself; if we are lucky the missing segment created
  before WAL forked is still there. It may not be there though if it has
  already been recycled.

At the end it seems to me that this is going to need some extra operation
by the failover tool or the system administrator either way, and that any
additional logic to check where this segment is located is never going to
completely satisfy the failover use cases. Hence I would keep pg_rewind
out of that.

Regards,
-- 
Michael

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH v1] GSSAPI encryption support
On Sun, Oct 4, 2015 at 1:18 AM, Andres Freund wrote:
> Hi,
>
> I quickly read through the patch, trying to understand what exactly is
> happening here. To me the way the patch is split doesn't make much sense
> - I don't mind incremental patches, but right now the steps don't
> individually make sense.

I agree with Andres. While I looked a bit at this patch, I only looked at
the patches as a whole block and not individually.

> On 2015-07-02 14:22:13 -0400, Robbie Harwood wrote:
> [Andres' comments]

Here are some comments on top of what Andres has mentioned.

--- a/configure.in
+++ b/configure.in
@@ -636,6 +636,7 @@ PGAC_ARG_BOOL(with, gssapi, no, [build with GSSAPI support],
   krb_srvtab="FILE:\$(sysconfdir)/krb5.keytab" ])
 AC_MSG_RESULT([$with_gssapi])
+AC_SUBST(with_gssapi)

I think that using a new configure variable like that with dedicated
files fe-secure-gss.c and be-secure-gss.c makes little sense done this
way, and that it would be better to group everything in fe-auth.c for the
frontend and auth.c for the backend, or to move all the GSSAPI-related
stuff into its own file. I can understand the move though: this is to
imitate OpenSSL in a way somewhat similar to what has been done for it
with a rather generic set of routines, but with GSSAPI that's a bit
different, we do not have such a set of routines, hence based on this
argument moving it to its own file has little sense. Now, a move that
would make sense is to move all the GSSAPI stuff into its own file, for
example pg_GSS_recvauth and pg_GSS_error for the backend, and you should
do the same for the frontend with all the pg_GSS_* routines. This should
be a refactoring patch on top of the actual feature.

diff --git a/src/interfaces/libpq/fe-secure-gss.c b/src/interfaces/libpq/fe-secure-gss.c
new file mode 100644
index 000..afea9c3
--- /dev/null
+++ b/src/interfaces/libpq/fe-secure-gss.c
@@ -0,0 +1,92 @@
+#include

You should add a proper header to those new files.
-- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
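"Proper header" here refers to PostgreSQL's conventional file header comment. A sketch of what one for the new file could look like, following the project convention (the one-line description is a guess at the file's purpose, not text from the patch):

```c
/*-------------------------------------------------------------------------
 *
 * fe-secure-gss.c
 *	  GSSAPI encryption support for frontend/libpq  (description is a guess)
 *
 * Portions Copyright (c) 1996-2015, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 * IDENTIFICATION
 *	  src/interfaces/libpq/fe-secure-gss.c
 *
 *-------------------------------------------------------------------------
 */
```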
Re: [HACKERS] Multi-tenancy with RLS
On Tue, Oct 6, 2015 at 10:56 AM, Haribabu Kommi wrote:
> Here I attached an updated version of the patch with the following changes.

I found some problems related to providing multi-tenancy on a system
catalog view. This is because a system catalog view uses the owner that
created the view instead of the current user, by storing that user's
information in the "checkAsUser" field of the RangeTblEntry structure.
The same "checkAsUser" is used in the check_enable_rls function before
getting the policies for the table. All the system catalog views are
created by the superuser, so no row level security policies are applied
to the views.

Example:

SET SESSION ROLE tenancy_user1;

select relname from pg_class where relname = 'tenancy_user2_tbl1';
 relname
---------
(0 rows)

select schemaname, relname from pg_stat_all_tables
  where relname = 'tenancy_user2_tbl1';
 schemaname |      relname
------------+--------------------
 public     | tenancy_user2_tbl1
(1 row)

The policy that is created on the pg_class system catalog is: return all
the objects on which the current user has permissions (the permissions
can be anything).

To fix the problem, I thought of using the current session user instead
of "checkAsUser" while applying row level security policies to system
catalog objects. This doesn't affect normal objects. But this solution
has caused some problems in foreign_data.sql while running the regression
tests, as the planner is generating a NULL targetlist.

Is the above solution the correct approach to handle this problem? If it
is, I will look into the foreign_data.sql problem; otherwise, is there a
better approach to handle the same?

Regards,
Hari Babu
Fujitsu Australia

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
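The user-id choice being proposed can be modeled with a small standalone sketch (toy code, not PostgreSQL's actual implementation — the function name and plain-integer Oid are stand-ins):

```c
#include <assert.h>

typedef unsigned int Oid;
#define InvalidOid ((Oid) 0)

/*
 * Toy model of the RLS user-id decision described above.  Today, policy
 * checks use the RTE's checkAsUser (for a catalog view that is the view
 * owner, i.e. the superuser, so no policies fire).  The proposed fix:
 * for system catalogs, always check as the current session user.
 */
static Oid
rls_user_to_check(Oid check_as_user, Oid session_user, int is_system_catalog)
{
	if (is_system_catalog)
		return session_user;	/* proposed behavior */

	/* existing behavior: honor checkAsUser when it is set */
	return (check_as_user != InvalidOid) ? check_as_user : session_user;
}
```

With this shape, `pg_stat_all_tables` (a view owned by the superuser) would still be filtered by the tenant's own policies, matching the `pg_class` behavior in the example.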
Re: [HACKERS] Obsolete use of volatile in walsender.c, walreceiver.c, walreceiverfuncs.c?
On Thu, Oct 1, 2015 at 11:01 PM, Michael Paquier wrote:
>> Right, see attached.
>
> It seems to me that we could as well simplify checkpoint.c and
> logical.c. In those files volatile casts are used as well to protect
> from reordering for spinlock operations. See for example 0002 on top
> of 0001 that is Thomas' patch.

These patches look good to me, so I have committed them.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] More work on SortSupport for text - strcoll() and strxfrm() caching
On Tue, Oct 6, 2015 at 1:07 PM, Robert Haas wrote:
> Reviewing 0001, I'm happy to see us add bswap64, but I'm not sure we
> should put it in c.h, because that's included by absolutely
> everything. How about putting it in a new #include inside src/port,
> like src/port/pg_bswap.h? Then pg_crc.h can include that, but other
> things can, too.

I guess I imagined that bswap64() was fundamental infrastructure, but on
second thought that's not actually in evidence -- it is not already
needed in plenty of other places. So yeah, works for me.

-- 
Peter Geoghegan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] More work on SortSupport for text - strcoll() and strxfrm() caching
On Sun, Oct 4, 2015 at 2:17 AM, Peter Geoghegan wrote:
> On Tue, Aug 4, 2015 at 12:41 PM, Robert Haas wrote:
>> Some comments:
>
> I attach a new version of the patch series that incorporates all your
> feedback. The patch series is now made cumulative in a way that makes
> it easy for someone to independently commit the unsigned integer
> comparison optimization for text, and nothing else. The macro that it
> uses is in a dedicated header now, because I have another patch
> (SortSupport for the UUID type) that uses the same optimization for
> the same reason. It seems like something that will probably end up
> with a third or fourth client before too long, so I think the byte
> swap macro wrapper belongs in sortsupport.h.

Reviewing 0001, I'm happy to see us add bswap64, but I'm not sure we
should put it in c.h, because that's included by absolutely everything.
How about putting it in a new #include inside src/port, like
src/port/pg_bswap.h? Then pg_crc.h can include that, but other things
can, too.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
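A `src/port/pg_bswap.h` along the lines Robert suggests could carry a portable fallback like the following (a sketch: the header name is his proposal, the function name is an assumption, and a real build would prefer a compiler builtin such as `__builtin_bswap64` where available):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Portable fallback for a 64-bit byte swap, as might live in a
 * hypothetical src/port/pg_bswap.h.  Each byte is masked out and moved
 * to its mirrored position.
 */
static inline uint64_t
pg_bswap64(uint64_t x)
{
	return ((x << 56) & 0xff00000000000000ULL) |
		((x << 40) & 0x00ff000000000000ULL) |
		((x << 24) & 0x0000ff0000000000ULL) |
		((x << 8) & 0x000000ff00000000ULL) |
		((x >> 8) & 0x00000000ff000000ULL) |
		((x >> 24) & 0x0000000000ff0000ULL) |
		((x >> 40) & 0x000000000000ff00ULL) |
		((x >> 56) & 0x00000000000000ffULL);
}
```

For abbreviated sort keys the swap turns a little-endian memory image into a value whose unsigned comparison matches memcmp() order, which is the reason the text and UUID patches both want it.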
Re: [HACKERS] More work on SortSupport for text - strcoll() and strxfrm() caching
On Tue, Oct 6, 2015 at 1:16 PM, Robert Haas wrote:
>> I guess I imagined that bswap64() was fundamental infrastructure, but
>> on second thought that's not actually in evidence -- it is not already
>> needed in plenty of other places. So yeah, works for me.
>
> If you would care to revise the patch accordingly, I will commit it
> (barring objections from others, of course).

Sure. It might take me a couple of days to get around to it, though --
things are a bit hectic here.

Thanks
-- 
Peter Geoghegan

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] check fails on Fedora 23
On Sun, Oct 4, 2015 at 11:52 AM, Andrew Dunstan wrote:
> Isn't this arguably a Fedora regression? What did they change in F23 to
> make it fail? I note that F23 is still in Beta.

Maybe, but it's pretty unfriendly for us to complain about a library
issue, if it is one, by failing an Assert(). People with
non-assert-enabled builds will just get wrong answers. Yuck.

Thinking about how this could happen, I believe that one possibility is
that there are two strings A and B and a locale L such that
strcoll_l(A, B, L) and memcmp(strxfrm(A, L), strxfrm(B, L)) disagree
(that is, the results are of different sign, or one is zero and the other
is not). I don't have an environment handy to reproduce this, but it
would be nifty if someone could figure out exactly what strings are
failing and then provide the strcoll result and the strxfrm blobs for
those strings.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
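The disagreement Robert describes can be probed with a small standalone check (a sketch; on an affected Fedora 23 box one would set LC_COLLATE to the locale initdb picked and feed it pairs of pg_proc prosrc strings):

```c
#include <assert.h>
#include <locale.h>
#include <string.h>

/* Normalize a comparison result to -1, 0, or 1. */
static int
sign(int v)
{
	return (v > 0) - (v < 0);
}

/*
 * Check the invariant the abbreviated-key code relies on: strcoll() and
 * a byte-wise comparison of strxfrm() blobs must agree in sign under the
 * current LC_COLLATE locale.  Returns 1 if they agree, 0 if they do not.
 */
static int
collation_agrees(const char *a, const char *b)
{
	char		xa[4096];
	char		xb[4096];

	/* strxfrm returns the needed length; bail out if our buffers are short */
	if (strxfrm(xa, a, sizeof(xa)) >= sizeof(xa) ||
		strxfrm(xb, b, sizeof(xb)) >= sizeof(xb))
		return 1;				/* cannot tell; skip this pair */

	return sign(strcoll(a, b)) == sign(strcmp(xa, xb));
}
```

A pair for which this returns 0 would pinpoint the offending strings, and printing the strxfrm buffers would give the blobs Robert asks for.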
Re: [HACKERS] More work on SortSupport for text - strcoll() and strxfrm() caching
On Tue, Oct 6, 2015 at 4:09 PM, Peter Geoghegan wrote:
> On Tue, Oct 6, 2015 at 1:07 PM, Robert Haas wrote:
>> Reviewing 0001, I'm happy to see us add bswap64, but I'm not sure we
>> should put it in c.h, because that's included by absolutely
>> everything. How about putting it in a new #include inside src/port,
>> like src/port/pg_bswap.h? Then pg_crc.h can include that, but other
>> things can, too.
>
> I guess I imagined that bswap64() was fundamental infrastructure, but
> on second thought that's not actually in evidence -- it is not already
> needed in plenty of other places. So yeah, works for me.

If you would care to revise the patch accordingly, I will commit it
(barring objections from others, of course).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SortSupport for UUID type
On Sun, Oct 4, 2015 at 2:21 AM, Peter Geoghegan wrote:
> Attached is the SortSupport for UUID type patch. This accelerates all
> UUID related sort-intense operations, making them about twice as fast
> with millions of tuples. I know that Heroku has plenty of tables used
> by various applications with a UUID primary key, so it will be nice to
> have CREATE INDEX go so much faster in those cases. The SortSupport
> routine uses abbreviation, and relies on unsigned integer comparisons.
> It has a dependency on the new abbreviated key for text patch series
> [1].
>
> For full details, see the commit message. It's a bit unfortunate that I
> have yet another sort patch here, without being much closer to getting
> every other such patch committed, but I happened to have written this
> patch a while ago. I wanted to shape up the common API that this patch
> and the related text patch use, so it just made sense to submit the
> UUID patch now.

+tmp = ((uint32) res ^ (uint32) ((uint64) res >> 32));

The outer set of parentheses here seems pretty worthless. Perhaps one of
the parentheses at the end of the statement should be moved to just after
"res". That seems like it would add considerable clarity.

> This patch is not all that exciting -- the techniques used here are
> very simple, and it's all familiar territory for us. I expect that
> this patch will not be at all contentious when someone eventually gets
> around to reviewing it.

There is no doubt that we need more reviewers.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
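Robert's suggestion amounts to writing the fold like this (a standalone sketch; the function name and the int64 type of `res` are assumptions about the patch's context):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fold a 64-bit result down to 32 bits by XORing its two halves -- the
 * expression from the patch, reparenthesized as Robert suggests so the
 * cast-then-shift order on the high half is explicit.
 */
static inline uint32_t
fold64to32(int64_t res)
{
	return (uint32_t) res ^ (uint32_t) (((uint64_t) res) >> 32);
}
```

The behavior is identical either way (a cast already binds tighter than `>>`); the extra parentheses only make the order of operations obvious to the reader.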
Re: [HACKERS] More work on SortSupport for text - strcoll() and strxfrm() caching
On Tue, Oct 6, 2015 at 4:26 PM, Peter Geoghegan wrote:
> On Tue, Oct 6, 2015 at 1:16 PM, Robert Haas wrote:
>>> I guess I imagined that bswap64() was fundamental infrastructure, but
>>> on second thought that's not actually in evidence -- it is not already
>>> needed in plenty of other places. So yeah, works for me.
>>
>> If you would care to revise the patch accordingly, I will commit it
>> (barring objections from others, of course).
>
> Sure. It might take me a couple of days to get around to it, though --
> things are a bit hectic here.

I know the feeling.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] about fsync in CLOG buffer write
On Sun, Oct 4, 2015 at 3:25 PM, Andres Freund wrote:
> I don't think it's worth investing time and complexity to bypass SLRU in
> certain cases. We should rather rewrite the thing completely.

+1. That code is considerably past its sell-by date.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.
On Sat, Oct 3, 2015 at 7:38 AM, Michael Paquier wrote:
>> Granted, you have to try fairly hard to shoot yourself in the leg,
>> but since the solution is so simple, why not? If we never reuse ports
>> within a single test, this goes away.
>
> Well, you can reuse the same port number in a test. Simply teardown
> the existing node and then recreate a new one. I think that port
> number assignment to a node should be transparent to the caller, in
> our case the perl test script holding a scenario.

It seems that these days 'make check' creates a directory in /tmp called
/tmp/pg_regress-RANDOMSTUFF. Listening on TCP ports is disabled, and the
socket goes inside this directory with a name like .s.PGSQL.PORT. You can
connect with psql -h /tmp/pg_regress-RANDOMSTUFF -p PORT, but not over
TCP. This basically removes the risk of TCP port number collisions, as
well as the risk of your temporary instance being hijacked by a malicious
user on the same machine. I'm not sure what we do on Windows, though.

>> In particular, I was shutting down an archiving node and the archiving
>> was truncated. I *think* smart doesn't do this. But again, it's really
>> that the test writer can't easily override, not that the default is
>> wrong.
>
> Ah, OK. Then fast is just fine. It shuts down the node correctly.
> "smart" would wait for all the current connections to finish but I am
> wondering if it currently matters here: I don't see yet a clear use
> case yet where it would make sense to have multi-threaded script... If
> somebody comes back with a clear idea here perhaps we could revisit
> that but it does not seem worth it now.

I don't have anything brilliant to say about this point, but here's a
perhaps-not-brilliant comment: If there's a bug in one of smart and fast
shutdown and the other works great, it would be nice to catch that.
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Connection string parameter 'replication' in documentation
On Mon, Oct 5, 2015 at 6:07 AM, Kyotaro HORIGUCHI wrote:
> /*
>  * We use the expand_dbname parameter to process the connection string (or
> -* URI), and pass some extra options. The deliberately undocumented
> -* parameter "replication=true" makes it a replication connection. The
> -* database name is ignored by the server in replication mode, but specify
> -* "replication" for .pgpass lookup.
> +* URI), and pass some extra options. The paramter "replication"
> +* deliberately documented out of the section for the ordiary client
> +* protocol, having "true" makes it a physical replication connection. The
> +* database name is ignored by the server in physical replication mode,
> +* but specify "replication" for .pgpass lookup.
> */

I don't think this is an improvement, even ignoring the fact that you've
spelled a couple of words incorrectly. The original text seems clear
enough, and the new text isn't really fully accurate either: the
discussion of when the database name is ignored really shouldn't be
linked to whether this is logical or physical replication.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] check fails on Fedora 23
On Wed, Oct 7, 2015 at 9:49 AM, Robert Haas wrote:
> On Sun, Oct 4, 2015 at 11:52 AM, Andrew Dunstan wrote:
>> Isn't this arguably a Fedora regression? What did they change in F23
>> to make it fail? I note that F23 is still in Beta.
>
> Maybe, but it's pretty unfriendly for us to complain about a library
> issue, if it is one, by failing an Assert(). People with
> non-assert-enabled builds will just get wrong answers. Yuck.
>
> Thinking about how this could happen, I believe that one possibility
> is that there are two strings A and B and a locale L such that
> strcoll_l(A, B, L) and memcmp(strxfrm(A, L), strxfrm(B, L)) disagree
> (that is, the results are of different sign, or one is zero and the
> other is not).

I wonder if Glibc bug 18589 is relevant. Bug 18934 says "Note that these
unittests pass with glibc-2.21 but fail with 2.22 and current git due to
bug 18589 which points to a broken change in the collate algorithm that
needs to be reverted first." Hungarian is mentioned. Doesn't Fedora 23
include glibc-2.22? Is it possible that that bug affects strcoll but not
strxfrm?

https://sourceware.org/bugzilla/show_bug.cgi?id=18589
https://sourceware.org/bugzilla/show_bug.cgi?id=18934

-- 
Thomas Munro
http://www.enterprisedb.com

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_ctl/pg_rewind tests vs. slow AIX buildfarm members
On Wed, Oct 7, 2015 at 6:44 AM, Tom Lane wrote:
> I wrote:
>> So the attached modified patch adjusts the PID-match logic and some
>> comments, but is otherwise what I posted before. I believe that this
>> might actually work on Windows, but I have no way to test it. Someone
>> please try that? (Don't forget to test the service-start path, too.)
>
> Has anyone tried that patch on Windows? I'd like to push it in hopes of
> improving buildfarm stability, but I'm hesitant to do so without some
> confirmation that it acts as I think it will.

I marked this patch as something to look at, though I haven't taken the
time to do it yet...

-- 
Michael

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Odd query execution behavior with extended protocol
Robert Haas writes:
> On Tue, Oct 6, 2015 at 5:53 PM, Tom Lane wrote:
>> I dunno, if you close a portal before you've gotten CommandComplete,
>> should you expect that all its actions were taken? Arguably, that
>> should be more like a ROLLBACK.
>
> I dunno either, but a rollback would undo everything, and a commit
> would do everything. Ending up in a state where we've done some of it
> but not all of it is strange. Being able to run an unbounded number
> of commands without a CommandCounterIncrement is *really* strange.

I'm fairly sure that we *have* done all of it; the Portal code is careful
to execute DML commands to completion whether or not the client accepts
all the RETURNING rows. It will become visible after you commit. The
issue here is just whether it's visible to a subsequent Bind within the
same transaction.

> I'm not very sure what to do about it, though.

Possibly we need the close-portal message processing code to do the CCI
stuff if it observes that the Portal hasn't been run to completion. (I
think there is already enough state in the Portal struct to tell that,
though I'm too lazy to look right now.) I'm concerned though whether this
would amount to a protocol break. It's certainly a behavioral change that
we'd have to document.

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_ctl/pg_rewind tests vs. slow AIX buildfarm members
I wrote: > So the attached modified patch adjusts the PID-match logic and some > comments, but is otherwise what I posted before. I believe that this > might actually work on Windows, but I have no way to test it. Someone > please try that? (Don't forget to test the service-start path, too.) Has anyone tried that patch on Windows? I'd like to push it in hopes of improving buildfarm stability, but I'm hesitant to do so without some confirmation that it acts as I think it will. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] check fails on Fedora 23
On 10/06/2015 04:49 PM, Robert Haas wrote: On Sun, Oct 4, 2015 at 11:52 AM, Andrew Dunstanwrote: Isn't this arguably a Fedora regression? What did they change in F23 to make it fail? I note that F23 is still in Beta. Maybe, but it's pretty unfriendly for us to complain about a library issue, if it is one, by failing an Assert(). People with non-assert-enabled builds will just get wrong answers. Yuck. Thinking about how this could happen, I believe that one possibility is that there are two strings A and B and a locale L such that strcoll_l(A, B, L) and memcmp(strxfrm(A, L), strxfrm(B, L)) disagree (that is, the results are of different sign, or one is zero and the other is not). I don't have an environment handy to reproduce this, but it would be nifty if someone could figure out exactly what strings are failing and then provide the strcoll result and the strxfrm blobs for those strings. Well, it's failing like this: TRAP: FailedAssertion("!(compareResult < 0)", File: "nodeMergejoin.c", Line: 942) 2015-10-04 20:03:42.894 UTC [56118614.53cf:2] LOG: server process (PID 21681) was terminated by signal 6: Aborted 2015-10-04 20:03:42.894 UTC [56118614.53cf:3] DETAIL: Failed process was running: SELECT p1.oid, p1.proname, p2.oid, p2.proname FROM pg_proc AS p1, pg_proc AS p2 WHERE p1.oid < p2.oid AND p1.prosrc = p2.prosrc AND p1.prolang = 12 AND p2.prolang = 12 AND (p1.proisagg = false OR p2.proisagg = false) AND (p1.prolang != p2.prolang OR p1.proisagg != p2.proisagg OR p1.prosecdef != p2.prosecdef OR p1.proleakproof != p2.proleakproof OR p1.proisstrict != p2.proisstrict OR p1.proretset != p2.proretset OR p1.provolatile != p2.provolatile OR p1.pronargs != p2.pronargs); So I think the right way would be to do something like this: for p1 in select * from pg_proc loop for p2 in select * from pg_proc loop raise notice 'p1: %, %, p2: % %', p1.proname, p1.prosrc, p2,proname, p2,prosrc; perform p1.prosrc = p2.prosrc; end loop; end loop; Then the last notice raised should show 
us the offending strings at least. Does that make sense? Alternatively one could try to get it with a debugger. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
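Robert's hypothesis above can be probed without writing C: Python's locale module wraps the same C library strcoll/strxfrm calls. A minimal sketch, assuming comparing the transformed strings approximates memcmp on the strxfrm blobs; the sample strings are illustrative, since the actual failing pair was unknown at this point in the thread:

```python
import locale

def sign(x: int) -> int:
    return (x > 0) - (x < 0)

def collation_consistent(a: str, b: str, loc: str) -> bool:
    """True when strcoll() and an ordering of strxfrm() results agree
    in sign for the pair (a, b) under locale 'loc'."""
    locale.setlocale(locale.LC_COLLATE, loc)
    coll = sign(locale.strcoll(a, b))
    xa, xb = locale.strxfrm(a), locale.strxfrm(b)
    return coll == (xa > xb) - (xa < xb)
```

Running this over pairs of pg_proc.prosrc values under the suspect locale (hu_HU on glibc 2.22, per the follow-up mail) would identify a disagreeing pair directly.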
Re: [HACKERS] check fails on Fedora 23
On 10/06/2015 05:45 PM, Thomas Munro wrote: On Wed, Oct 7, 2015 at 9:49 AM, Robert Haas wrote: On Sun, Oct 4, 2015 at 11:52 AM, Andrew Dunstan wrote: Isn't this arguably a Fedora regression? What did they change in F23 to make it fail? I note that F23 is still in Beta. Maybe, but it's pretty unfriendly for us to complain about a library issue, if it is one, by failing an Assert(). People with non-assert-enabled builds will just get wrong answers. Yuck. Thinking about how this could happen, I believe that one possibility is that there are two strings A and B and a locale L such that strcoll_l(A, B, L) and memcmp(strxfrm(A, L), strxfrm(B, L)) disagree (that is, the results are of different sign, or one is zero and the other is not). I wonder if Glibc bug 18589 is relevant. Bug 18934 says "Note that these unittests pass with glibc-2.21 but fail with 2.22 and current git due to bug 18589 which points to a broken change in the collate algorithm that needs to be reverted first." Hungarian is mentioned. Doesn't Fedora 23 include glibc-2.22? Is it possible that that bug affects strcoll but not strxfrm? https://sourceware.org/bugzilla/show_bug.cgi?id=18589 https://sourceware.org/bugzilla/show_bug.cgi?id=18934 Yes, it's 2.22: [vagrant@localhost ~ ]$ rpm -q -a | grep glibc glibc-2.22-3.fc23.x86_64 glibc-devel-2.22-3.fc23.x86_64 glibc-common-2.22-3.fc23.x86_64 glibc-headers-2.22-3.fc23.x86_64 cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Odd query execution behavior with extended protocol
On Tue, Oct 6, 2015 at 5:53 PM, Tom Lane wrote: > Robert Haas writes: >> From looking at the code, it appears to me that if the Execute is run >> to completion, then its results will be seen by future statements, but >> if the portal is closed earlier, then not. See the end of >> exec_execute_message. The handler for the Close message (inside >> PostgresMain) doesn't seem to do anything that would result in a >> CommandCounterIncrement() or CommitTransactionCommand(). > >> This does seem a little strange. > > I dunno, if you close a portal before you've gotten CommandComplete, > should you expect that all its actions were taken? Arguably, that > should be more like a ROLLBACK. I dunno either, but a rollback would undo everything, and a commit would do everything. Ending up in a state where we've done some of it but not all of it is strange. Being able to run an unbounded number of commands without a CommandCounterIncrement is *really* strange. I'm not very sure what to do about it, though. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Odd query execution behavior with extended protocol
Robert Haas writes: > On Tue, Oct 6, 2015 at 6:10 PM, Tom Lane wrote: >> I'm concerned though whether this would amount to a protocol break. > Me, too. There are enough CCI's floating around the code that there may not actually be any observable problem, at least not in typical cases. That would explain the lack of complaints ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Tsvector editing functions
On Mon, Oct 5, 2015 at 1:29 PM, Stas Kelvich wrote: > Hello. > > There is patch that adds some editing routines for tsvector type. > > tsvector delete(tsvector, text) > removes entry from tsvector by lexeme name > set unnest(tsvector) > expands a tsvector to a set of rows. Each row has following columns: > lexeme, postings, weights. > text[] to_array(tsvector) > converts tsvector to array of lexemes > tsvector to_tsvector(text[]) > converts array of lexemes to tsvector When submitting a patch, it's a good idea to explain why someone would want the feature you are adding. Maybe that's obvious to you, but it isn't clear to me why we'd want this. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Small documentation fix in src/bin/initdb/po/zh_CN.po
When working on a script, I stumbled over a mistake in the zh_CN.po translation for initdb. Patch attached. Regards, -- Andreas 'ads' Scherbaum German PostgreSQL User Group European PostgreSQL User Group - Board of Directors Volunteer Regional Contact, Germany - PostgreSQL Project diff --git a/src/bin/initdb/po/zh_CN.po b/src/bin/initdb/po/zh_CN.po index 3986eab..1dd6dcd 100644 --- a/src/bin/initdb/po/zh_CN.po +++ b/src/bin/initdb/po/zh_CN.po @@ -727,7 +727,7 @@ msgid "" "Report bugs to.\n" msgstr "" "\n" -"报告错误至 .\n" +"报告错误至 .\n" #: initdb.c:2907 msgid "" -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Obsolete comment in tidpath.c
On Mon, Oct 5, 2015 at 3:05 AM, Etsuro Fujita wrote: > I think "best_inner_indexscan()" in the following comment in tidpath.c > is obsolete. > > * There is currently no special support for joins involving CTID; in > * particular nothing corresponding to best_inner_indexscan(). Since it's > * not very useful to store TIDs of one table in another table, there > * doesn't seem to be enough use-case to justify adding a lot of code > * for that. > > How about s/best_inner_indexscan()/parameterized scans/? I'm not sure that's altogether clear. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
On Fri, Oct 2, 2015 at 4:26 AM, Kyotaro HORIGUCHI wrote: > During join search, a joinrel should be compatible between local > joins and remote joins, of course target list also should be > so. So it is quite difficult to add wholerow resjunk for joinrels > before whole join tree is completed even if we allow row marks > that are not bound to base RTEs. Suppose ROW_MARK_COPY is in use, and suppose the query is: SELECT ft1.a, ft1.b, ft2.a, ft2.b FROM ft1, ft2 WHERE ft1.x = ft2.x; When the foreign join is executed, there's going to be a slot that needs to be populated with ft1.a, ft1.b, ft2.a, ft2.b, and a whole row reference. Now, let's suppose the slot descriptor has 5 columns: those 4, plus a whole-row reference for ROW_MARK_COPY. If we know what values we're going to store in columns 1..4, couldn't we just form them into a tuple to populate column 5? We don't actually need to be able to fetch such a tuple from the remote side because we can just construct it. I think. Is this a dumb idea, or could it work? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
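A toy illustration of Robert's idea, assuming the slot is modeled as a plain mapping; the column and mark names are invented for illustration. If the slot already holds the individual output columns of the foreign join, the whole-row datum for each ROW_MARK_COPY column can be assembled locally from those values rather than fetched from the remote server:

```python
def fill_whole_row_marks(slot: dict) -> dict:
    """Form the whole-row mark columns ('ft1.*', 'ft2.*') from the
    per-column values already present in the slot."""
    slot = dict(slot)  # don't mutate the caller's slot
    slot["ft1.*"] = (slot["ft1.a"], slot["ft1.b"])
    slot["ft2.*"] = (slot["ft2.a"], slot["ft2.b"])
    return slot
```

The point of the sketch is only that nothing beyond columns 1..4 needs to cross the wire to populate column 5.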
Re: [HACKERS] Odd query execution behavior with extended protocol
On Sat, Oct 3, 2015 at 5:03 AM, Shay Rojansky wrote: > Hi hackers, some odd behavior has been reported with Npgsql and I wanted to > get your help. > > Npgsql supports sending multiple SQL statements in a single packet via the > extended protocol. This works fine, but when the second query SELECTs a > value modified by the first's UPDATE, I'm getting a result as if the UPDATE > hasn't yet occurred. > > The exact messages sent by Npgsql are: > > Parse (UPDATE data SET name='foo' WHERE id=1), statement=unnamed > Describe (statement=unnamed) > Bind (statement=unnamed, portal=MQ0) > Parse (SELECT * FROM data WHERE id=1), statement=unnamed > Describe (statement=unnamed) > Bind (statement=unnamed, portal=MQ1) > Execute (portal=MQ0) > Close (portal=MQ0) > Execute (portal=MQ1) > Close (portal=MQ1) > Sync > > Instead of returning the expected 'foo' value set in the first command's > UPDATE, I'm getting whatever value was previously there. > Note that this happens regardless of whether a transaction is already set and > of the isolation level. > > Is this the expected behavior, have I misunderstood the protocol specs? From looking at the code, it appears to me that if the Execute is run to completion, then its results will be seen by future statements, but if the portal is closed earlier, then not. See the end of exec_execute_message. The handler for the Close message (inside PostgresMain) doesn't seem to do anything that would result in a CommandCounterIncrement() or CommitTransactionCommand(). This does seem a little strange. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
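To make the message flow above concrete, here is a sketch (not a working client) of how such a batch is framed on the wire using the documented v3 extended-protocol message formats: each message is a type byte, an int32 length that includes itself, then the payload. Describe and Close are omitted for brevity:

```python
import struct

def pg_message(msg_type: bytes, payload: bytes = b"") -> bytes:
    """Frame one protocol message: type byte, int32 length, payload."""
    return msg_type + struct.pack("!I", len(payload) + 4) + payload

def parse_msg(query: str, stmt: str = "") -> bytes:
    # Parse ('P'): statement name, query string, count of parameter type OIDs (0 here)
    return pg_message(b"P", stmt.encode() + b"\x00" +
                      query.encode() + b"\x00" + struct.pack("!H", 0))

def bind_msg(portal: str, stmt: str = "") -> bytes:
    # Bind ('B'): portal name, statement name, then zero parameter-format
    # codes, zero parameter values, zero result-format codes
    return pg_message(b"B", portal.encode() + b"\x00" + stmt.encode() + b"\x00" +
                      struct.pack("!HHH", 0, 0, 0))

def execute_msg(portal: str, max_rows: int = 0) -> bytes:
    # Execute ('E'): portal name, maximum row count (0 = no limit)
    return pg_message(b"E", portal.encode() + b"\x00" + struct.pack("!I", max_rows))

# The Npgsql-style batch: both statements are parsed and bound before
# either Execute runs, and Sync ends the batch.
batch = (parse_msg("UPDATE data SET name='foo' WHERE id=1")
         + bind_msg("MQ0")
         + parse_msg("SELECT * FROM data WHERE id=1")
         + bind_msg("MQ1")
         + execute_msg("MQ0")
         + execute_msg("MQ1")
         + pg_message(b"S"))  # Sync
```

Note the planning-time implication: the second statement is parsed and bound before the first Execute runs at all, which is why snapshot/command-counter behavior between the two Executes matters.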
Re: [HACKERS] Odd query execution behavior with extended protocol
Robert Haas writes: > From looking at the code, it appears to me that if the Execute is run > to completion, then its results will be seen by future statements, but > if the portal is closed earlier, then not. See the end of > exec_execute_message. The handler for the Close message (inside > PostgresMain) doesn't seem to do anything that would result in a > CommandCounterIncrement() or CommitTransactionCommand(). > This does seem a little strange. I dunno, if you close a portal before you've gotten CommandComplete, should you expect that all its actions were taken? Arguably, that should be more like a ROLLBACK. Note there'd only be a difference in case of an operation with RETURNING, else there's no way (at this level anyway) to pause a data-modifying command mid-execution. This logic all predates RETURNING, I think, so maybe it does need to be revisited. But it's not entirely clear what should happen. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Obsolete comment in tidpath.c
Robert Haas writes: > On Mon, Oct 5, 2015 at 3:05 AM, Etsuro Fujita > wrote: >> I think "best_inner_indexscan()" in the following comment in tidpath.c >> is obsolete. >> >> * There is currently no special support for joins involving CTID; in >> * particular nothing corresponding to best_inner_indexscan(). Since it's >> * not very useful to store TIDs of one table in another table, there >> * doesn't seem to be enough use-case to justify adding a lot of code >> * for that. >> >> How about s/best_inner_indexscan()/parameterized scans/? > I'm not sure that's altogether clear. Probably consider_index_join_clauses() is the closest current equivalent. However, it may not be such a great idea to have this comment referencing a static function in another file, as it wouldn't occur to people to look here when rewriting indxpath.c. (Ahem.) Perhaps "in particular, no ability to produce parameterized paths here". regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_ctl/pg_rewind tests vs. slow AIX buildfarm members
On Wed, Oct 7, 2015 at 7:05 AM, Michael Paquier wrote: > On Wed, Oct 7, 2015 at 6:44 AM, Tom Lane wrote: >> I wrote: >>> So the attached modified patch adjusts the PID-match logic and some >>> comments, but is otherwise what I posted before. I believe that this >>> might actually work on Windows, but I have no way to test it. Someone >>> please try that? (Don't forget to test the service-start path, too.) >> >> Has anyone tried that patch on Windows? I'd like to push it in hopes of >> improving buildfarm stability, but I'm hesitant to do so without some >> confirmation that it acts as I think it will. > > I marked this patch as something to look at, though I haven't take the > time to do it yet... s/take/taken -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Odd query execution behavior with extended protocol
On Tue, Oct 6, 2015 at 6:10 PM, Tom Lane wrote: > I'm concerned though whether this would amount to a protocol break. Me, too. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Odd query execution behavior with extended protocol
On Tue, Oct 6, 2015 at 6:18 PM, Tom Lane wrote: > Robert Haas writes: >> On Tue, Oct 6, 2015 at 6:10 PM, Tom Lane wrote: >>> I'm concerned though whether this would amount to a protocol break. > >> Me, too. > > There are enough CCI's floating around the code that there may not > actually be any observable problem, at least not in typical cases. > That would explain the lack of complaints ... It's pretty to think so, but I've been doing this long enough to be skeptical. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] run pg_rewind on an uncleanly shut down cluster.
> On 06 Oct 2015, at 08:58, Michael Paquier wrote: > > On Tue, Oct 6, 2015 at 12:41 AM, Oleksii Kliukin > wrote: >> pg_rewind -D postgresql0 --source-server="host=127.0.0.1 port=5433 >> dbname=postgres" The servers diverged at WAL position 0/360 on >> timeline 1. could not open file >> "data/postgresql0/pg_xlog/00010002": No such file or >> directory >> >> Note that this problem happens not 100% of time during the tests, >> sometimes pg_rewind can actually rewind the former master. > > I don't think that there is any actual reason preventing us from > rewinding a node that has its state in pg_control set as something > else than DB_SHUTDOWNED, the important point here is to be sure that > the target node is *not* running while pg_rewind is running (perhaps > pg_rewind should perform an action in the target node to not have it > run, let's say that it creates a fake postgresql.conf with invalid > data and renames the existing one). Checking pg_control makes things > easier though, there is no need to rely on external binaries like > "pg_ctl status" or some parsing of postmaster.pid with kill(pid, 0) > for example. Does pg_rewind actually rely on the cluster being rewound to finish recovery? If not, then it would be a good idea to add a --force flag to force pg_rewind to ignore the state check, as you suggested in this thread: http://www.postgresql.org/message-id/flat/CAF8Q-Gw1HBKzpSEVtotLg=dr+ee-6q59qqfhy5tor3fyaen...@mail.gmail.com#CAF8Q-Gw1HBKzpSEVtotLg=dr+ee-6q59qqfhy5tor3fyaen...@mail.gmail.com > >> I know I can copy the segment back from the archive, but I'd like to >> avoid putting this logic into the failover tool if possible. Is there >> anything we can do to avoid the problem described above, or is there >> a better way to bring up the former master after the crash with >> pg_rewind?
> > Well, for 9.5 (and actually the same applies to the 9.3 and 9.4 > version on github because I am keeping the previous versions > consistent with what is in 9.5), I guess no. > > This missing segment is going to be needed in any case because the > list of blocks modified needs to be found, hence the question is "how > can pg_rewind guess where a WAL segment missing from the target's > pg_xlog is located?". And there are multiple answers: > - An archive path, then let's add an option to pg_rewind to add a > path, though this needs the archive path to be mounted locally, and > usually that's not the case. > - An existing node of the cluster, perhaps the segment is still > present on another standby node that already replayed it, though > this would need an extra node. > - The source node itself, if we are lucky the missing segment created > before WAL forked is still there. It may not be there though if it > has already been recycled. At the end it seems to me that this is > going to need some extra operation by the failover tool or the > system administrator either way, and that any additional logic to > check where this segment is located is never going to satisfy > completely the failover use cases. Hence I would keep just pg_rewind > out of that. Well, checking the source node looks like an option that does not require providing any additional information by DBA, as the connection string or the path to the data dir is already there. It would be nice if pg_rewind could fetch WAL from the given restore_command though, or even use the command already there in recovery.conf (if the node being recovered is a replica, which I guess is a pretty common case). Anyway, thank you for describing the issue. In my case, it seems I solved it by removing the files from the archive_status directory of the former master (the node being rewound). 
This makes PostgreSQL forget that it has to remove an already archived (but still required for pg_rewind) segment (I guess it does it during stop when the checkpoint is issued). Afterwards, postgres starts it in a single user mode with archive_command=false and archive_mode=on, to make sure no segments are archived/removed, and stopped right afterwards with: postgres --single -D . -c "max_replication_slots=5" -c "wal_level=hot_standby" -c "wal_log_hints=on" -c "archive_mode=on" -c "archive_command=false" postgres
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 06, 2015 at 10:57:42AM -0700, Josh Berkus wrote: > On 10/06/2015 10:17 AM, Bruce Momjian wrote: > > Therefore, our current default behavior is to ignore user reports, > > unless someone takes an action to reply, record, or retain the email for > > later review. What a tracker does is to make the default user report be > > _retained_, meaning we have to take action to _not_ retain a user report > > as an open item. > > Well, we can determine how that's handled. There are bug trackers out > there that automatically archive unconfirmed bug reports after a certain > amount of time. I'd personally recommend it. > > Of course, that requires a bug tracker which can have an "unconfirmed" > status. This is essentially what I have done with the 'Stale' status. Though I have done it at two years to be conservative, rather than 60 days, which I think is entirely more reasonable. > > Second, we have a mix of user reports. Some bug reports are not bugs > > and must be reclassified. In other cases, users ask questions via > > non-tracked communication channels, e.g. pgsql-general, but they are > > really bugs. So, to do this right, we need a way of marking tracked > > bugs as not bugs, and a way of adding bugs that were reported in a > > non-tracked manner. > > Yeah, I was wondering about that. I think I have suggested that there be a way to generate a bug id via email. Presumably someone could just copy that email address to make a not-tracked discussion get a bug id. If the system archived all the lists (not hard) it would be possible to pull the other emails from the thread into the bug (also not hard). As for marking as 'not-a-bug' this can easily be done via whatever mechanism might be used. Something along the lines of: Bug Status: not a bug would probably work. It's really whatever people are willing to actually do. > FWIW, when I talk about bugs which we lost track of, they're not > generally unconfirmed bug reports.
Usually, it's stuff which a > contributor replied to, but the bug was low-impact, circumstantial, and > hard to reproduce, and came in during a busy period (like release time). > So I'd be perfectly OK with the idea that unconfirmed bugs hang around > in the system for 60 days, then automatically convert to "stale" status. My tracker would do this trivially if I changed the query to 60 days instead of two years. > Until we build up a team of volunteers for bug triage, we might have to > do that. > > Speaking of which ... this project is rich in skilled users who are > involved in the community but don't code. Bug triage is exactly the > kind of thing very part-time community supporters can do, if we make it > easy for them to do. I can make it easy. But that doesn't directly address two of the other points: 1: Do we need any system for who is allowed to triage bugs? 2: Should an equivalent email be sent to the list? Specifically with point number 2, suppose the above mechanism is used. When a triager marks a bug as (say) not a bug, should the system just update the database, or should it actually send a formatted email to the bugs list with the 'Bug Status: not a bug' line (among others, presumably)? I think it should send the email, but I can see how that could be construed as junk. -- nw -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
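The "Bug Status: not a bug" control line suggested above is easy to recognize mechanically. A minimal sketch of what a triage bot might do when scanning an incoming message body; the line format is the one proposed in this mail, not an existing tool's syntax:

```python
import re

def extract_bug_status(body: str):
    """Return the value of the first 'Bug Status: ...' control line in
    an email body, or None when no such line is present."""
    for line in body.splitlines():
        m = re.match(r"^Bug Status:\s*(.+?)\s*$", line)
        if m:
            return m.group(1)
    return None
```

Whether the bot then merely updates the database or also echoes a formatted confirmation mail to the list is exactly the open question raised above.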
Re: [HACKERS] Obsolete comment in tidpath.c
On 2015/10/07 7:01, Tom Lane wrote: > Robert Haas writes: >> On Mon, Oct 5, 2015 at 3:05 AM, Etsuro Fujita >> wrote: >>> I think "best_inner_indexscan()" in the following comment in tidpath.c >>> is obsolete. >>> >>> * There is currently no special support for joins involving CTID; in >>> * particular nothing corresponding to best_inner_indexscan(). Since it's >>> * not very useful to store TIDs of one table in another table, there >>> * doesn't seem to be enough use-case to justify adding a lot of code >>> * for that. >>> >>> How about s/best_inner_indexscan()/parameterized scans/? > >> I'm not sure that's altogether clear. > > Probably consider_index_join_clauses() is the closest current equivalent. > However, it may not be such a great idea to have this comment referencing > a static function in another file, as it wouldn't occur to people to look > here when rewriting indxpath.c. (Ahem.) > > Perhaps "in particular, no ability to produce parameterized paths here". Works for me. Best regards, Etsuro Fujita -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Connection string parameter 'replication' in documentation
Ouch! At Tue, 6 Oct 2015 17:22:17 -0400, Robert Haas wrote in > On Mon, Oct 5, 2015 at 6:07 AM, Kyotaro HORIGUCHI > wrote: > > /* > > * We use the expand_dbname parameter to process the connection > > string (or > > -* URI), and pass some extra options. The deliberately undocumented > > -* parameter "replication=true" makes it a replication connection. > > The > > -* database name is ignored by the server in replication mode, but > > specify > > -* "replication" for .pgpass lookup. > > +* URI), and pass some extra options. The paramter "replication" > > +* deliberately documented out of the section for the ordiary client > > +* protocol, having "true" makes it a physical replication > > connection. The > > +* database name is ignored by the server in physical replication > > mode, > > +* but specify "replication" for .pgpass lookup. > > */ > > I don't think this is an improvement, even ignoring the fact that > you've spelled a couple of words incorrectly. The original text seems > clear enough, and the new text isn't really fully accurate either: the > discussion of when the database name is ignored really shouldn't be > linked to whether this is logical or physical replication. Thank you for kindly replying to this. I agree with the comment above. It is my mistake that "in physical.." looks to qualify "ignored", but there is no point in polishing it further. regards, -- Kyotaro Horiguchi NTT Open Source Software Center -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Multi-tenancy with RLS
On Tue, Oct 6, 2015 at 10:29 PM, Stephen Frost wrote: > * Haribabu Kommi (kommi.harib...@gmail.com) wrote: >> On Tue, Oct 6, 2015 at 10:56 AM, Haribabu Kommi >> wrote: >> > Here I attached an updated version of the patch with the following changes. >> >> I found some problems related to providing multi-tenancy on a system >> catalog view. >> This is because a system catalog view uses the owner that created the view >> instead >> of the current user, by storing the user information in the "checkAsUser" >> field in the RangeTblEntry >> structure. > Right, when querying through a view to tables underneath, we use the > permissions of the view owner. View creators should be generally aware > of this already. > > I agree that it adds complications to the multi-tenancy idea since the > system views, today, allow viewing of all objects. There are two ways > to address that: > > Modify the system catalog views to include the same constraints that the > policies on the tables do > > or > > Allow RLS policies against views and then create the necessary policies > on the views in the catalog. > > My inclination is to work towards the latter as that's a capability we'd > like to have anyway. Thanks for the solutions to handle the problem. Currently I am thinking of providing two multi-tenancy solutions to the user. They are: 1. Tenancy at the shared system catalog tables level 2. Tenancy at the database system catalog tables. Users can create views on system catalog tables, and I want to provide tenancy on those views also. I will do further analysis, provide details of which solution gives the benefit of the two tenancy levels, and then proceed to implementation after discussion. Regards, Hari Babu Fujitsu Australia -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Foreign join pushdown vs EvalPlanQual
> > > > Who can provide a projection to generate joined tuple? > > > > It is a job of individual plan-state-node to be walked on during > > > > EvalPlanQualNext(). > > > > > > EvalPlanQualNext simply does recheck tuples stored in epqTuples, > > > which are designed to be provided by EvalPlanQualFetchRowMarks. > > > > > > I think that that premise shouldn't be broken for convenience... > > > > > Do I see something different or understand incorrectly? > > EvalPlanQualNext() walks down entire subtree of the Lock node. > > (epqstate->planstate is entire subplan of Lock node.) > > > > TupleTableSlot * > > EvalPlanQualNext(EPQState *epqstate) > > { > > MemoryContext oldcontext; > > TupleTableSlot *slot; > > > > oldcontext = MemoryContextSwitchTo(epqstate->estate->es_query_cxt); > > slot = ExecProcNode(epqstate->planstate); > > MemoryContextSwitchTo(oldcontext); > > > > return slot; > > } > > > > If and when relations joins are kept in the sub-plan, ExecProcNode() > > processes the projection by join, doesn't it? > > Yes, but it is needed to prepare alternative plan to do such > projection. > No matter. The custom-scan node is a good reference to have underlying plan nodes that can be kicked by an extension. If we adopt the same semantics, these alternative plans are not kicked unless the FDW driver wants them. Also, I don't think it is difficult to construct an alternative join path using an unparameterized nested loop (note that all we need to do is evaluate at most one tuple for each base relation). If people feel it is really re-invention of the wheel, the core backend can provide a utility function to produce the alternative path. Probably, Path * foreign_join_alternative_local_join_path(PlannerInfo *root, RelOptInfo *joinrel) can generate what we need. > > Why projection by join is not a part of EvalPlanQualNext()? > > It is the core of its job. Without the projection by join, the upper node cannot > > recheck the tuples coming from child nodes.
> > What I'm uneasy on is the foreign join introduced the difference > in behavior between ordinary fetching and epq fetching. It is > quite natural having joined whole row but it seems hard to get. > hard to get, and easy to construct on the fly. > Another reason is that ExecScanFetch with scanrelid == 0 on EPQ > is FDW/CS specific feature and looks to be a kind of hack. (Even > if it would be one of many) > It means these are kind of exceptional ones, thus it is reasonable to avoid fundamental changes in the RowLock mechanism, isn't it? Thanks, -- NEC Business Creation Division / PG-Strom Project KaiGai Kohei -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] run pg_rewind on an uncleanly shut down cluster.
On Tue, Oct 6, 2015 at 6:04 PM, Oleksii Kliukin wrote: > Does pg_rewind actually rely on the cluster being rewound to finish > recovery? That's not mandatory AFAIK. I think that Heikki has just implemented it in the safest way possible for a first shot. That's something we could relax in the future. > If not, then it would be a good idea to add a --force flag to force the > pg_rewind to ignore the state check, as you suggested in this thread: > http://www.postgresql.org/message-id/flat/CAF8Q-Gw1HBKzpSEVtotLg=dr+ee-6q59qqfhy5tor3fyaen...@mail.gmail.com#CAF8Q-Gw1HBKzpSEVtotLg=dr+ee-6q59qqfhy5tor3fyaen...@mail.gmail.com Another one would be to replace this check of pg_control by something closer to what pg_ctl status does with postmaster.pid for example. And to perhaps add a safeguard to prevent a concurrent user from starting the target node once the pg_rewind run begins. > Well, checking the source node looks like an option that does not require > providing any additional information by DBA, as the connection string or the > path to the data dir is already there. It would be nice if pg_rewind could > fetch WAL from the given restore_command though, or even use the command > already there in recovery.conf (if the node being recovered is a replica, > which I guess is a pretty common case). Kind of. Except that we would want a user to be able to pass a custom restore_command for more flexibility that would be used by pg_rewind itself. > Anyway, thank you for describing the issue. In my case, it seems I solved it > by removing the files from the archive_status directory of the former master > (the node being rewound). This makes PostgreSQL forget that it has to remove > an already archived (but still required for pg_rewind) segment (I guess it > does it during stop when the checkpoint is issued).
Afterwards, postgres > starts it in a single user mode with archive_command=false and > archive_mode=on, to make sure no segments are archived/removed, and stopped > right afterwards with: Interesting. That's one way to go. > Afterwards, pg_rewind runs on the cluster without any noticeable issues. > Since the node is not going to continue as a master and the contents of > pg_xlog/archive_status is changed after pg_rewind anyway, I don’t think any > data is lost after initial removal of archive_status files. Yep. Its content is replaced by everything from the source node. -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.
Hello Fujii-san, >Here are another review comments Thank you for the review. Please find attached an updated patch. > You removed some empty lines, for example, in vacuum.h. >Which seems useless to me. Has been corrected in the attached. >Why did you use an array to store the progress information of VACUUM? >I think that it's better to use separate specific variables for them for >better code readability, for example, variables scanned_pages, >heap_total_pages, etc. It is designed this way to keep it generic for all the commands which can store different progress parameters in shared memory. >Currently only progress_param_float[0] is used. So there is no need to use an >array here. Same as before. This is for later use by other commands. >progress_param_float[0] saves the percentage of VACUUM progress. >But why do we need to save that information into shared memory? >We can just calculate the percentage whenever pg_stat_get_vacuum_progress() is >executed, instead. There seems to be no need to save that information. This has been corrected in the attached. >char progress_message[PROGRESS_MESSAGE_LENGTH][N_PROGRESS_PARAM]; >As Sawada pointed out, there is no user of this variable. Have used it to store table name in the updated patch. It can also be used to display index names, current phase of VACUUM. This has not been included in the patch yet to avoid cluttering the display with too much information. >For example, st_activity of autovacuum worker displays "autovacuum ...". >So as Sawada reported, he could not find any entry for autovacuum in >pg_stat_vacuum_progress. In the attached patch, I have performed the check for autovacuum also. >I think that you should add the flag or something which indicates whether this >backend is running VACUUM or not, into PgBackendStatus. >pg_stat_vacuum_progress should display the entries of only backends with that >flag set true.
This design means that you need to set the flag to true when >starting VACUUM and reset at the end of VACUUM progressing. This design seems better in the sense that we don’t rely on the st_activity entry to display progress values. A variable which stores flags for running commands can be created in PgBackendStatus. These flags can then be consulted when displaying the progress of a particular command. >That is, non-superuser should not be allowed to see the >pg_stat_vacuum_progress entry of superuser running VACUUM? This has been included in the updated patch. >This code may cause total_pages and total_heap_pages to decrease while VACUUM >is running. Yes. This is because the initial count of total pages to be vacuumed and the pages which are actually vacuumed can vary depending on the visibility of tuples. The pages which are all visible are skipped and hence have been subtracted from the total page count. Thank you, Rahila Syed __ Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. Vacuum_progress_checker_v4.patch Description: Vacuum_progress_checker_v4.patch -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
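The design being discussed above — fixed-size arrays of typed progress parameters in the per-backend status slot, plus a flag saying which command (if any) is currently reporting — can be sketched in a small, self-contained way. This is an illustrative model only: the names (`BackendProgress`, `progress_start`, etc.) are hypothetical, not the actual patch's or PgBackendStatus's identifiers.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of a per-backend progress slot: arrays of typed
 * parameters plus a flag identifying which command is reporting. */
#define N_PROGRESS_PARAM 10
#define PROGRESS_MESSAGE_LENGTH 64

typedef enum { PROGRESS_CMD_NONE, PROGRESS_CMD_VACUUM } ProgressCommand;

typedef struct BackendProgress
{
    ProgressCommand cmd;    /* which command is reporting, if any */
    int     param_int[N_PROGRESS_PARAM];
    double  param_float[N_PROGRESS_PARAM];
    char    param_msg[N_PROGRESS_PARAM][PROGRESS_MESSAGE_LENGTH];
} BackendProgress;

/* Set at command start; a view like pg_stat_vacuum_progress would only
 * display slots where cmd == PROGRESS_CMD_VACUUM. */
static void
progress_start(BackendProgress *bp, ProgressCommand cmd)
{
    memset(bp, 0, sizeof(*bp));
    bp->cmd = cmd;
}

static void
progress_update_int(BackendProgress *bp, int idx, int val)
{
    bp->param_int[idx] = val;
}

/* Reset at end of command so the backend no longer shows up in the view. */
static void
progress_end(BackendProgress *bp)
{
    bp->cmd = PROGRESS_CMD_NONE;
}
```

With three parameter types and N_PROGRESS_PARAM = 10 per type, this also illustrates where a column count of 30 (10 * 3) would come from.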
Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.
On Wed, Oct 7, 2015 at 5:58 AM, Robert Haas wrote: > On Sat, Oct 3, 2015 at 7:38 AM, Michael Paquier wrote: > It seems that these days 'make check' creates a directory in /tmp > called /tmp/pg_regress-RANDOMSTUFF. Listening on TCP ports is > disabled, and the socket goes inside this directory with a name like > .s.PGSQL.PORT. You can connect with psql -h > /tmp/pg_regress-RANDOMSTUFF -p PORT, but not over TCP. This basically > removes the risk of TCP port number collisions, as well as the risk of > your temporary instance being hijacked by a malicious user on the same > machine. Right, that's for example /var/folders/ on OSX, and this is defined once per test run via $tempdir_short. PGHOST is set to that as well. > I'm not sure what we do on Windows, though. sspi with include_realm through 127.0.0.1. >>> In particular, I was shutting down an archiving node and the archiving >>> was truncated. I *think* smart doesn't do this. But again, it's really >>> that the test writer can't easily override, not that the default is wrong. >> >> Ah, OK. Then fast is just fine. It shuts down the node correctly. >> "smart" would wait for all the current connections to finish but I am >> wondering if it currently matters here: I don't see yet a clear use >> case yet where it would make sense to have multi-threaded script... If >> somebody comes back with a clear idea here perhaps we could revisit >> that but it does not seem worth it now. > > I don't have anything brilliant to say about this point, but here's a > perhaps-not-brilliant comment: > > If there's a bug in one of smart and fast shutdown and the other works > great, it would be nice to catch that. Yes, sure. I extended the patch to support other stop modes than fast, the default being kept to fast if none is defined. -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.
On Wed, Oct 7, 2015 at 7:43 AM, Michael Paquier wrote: > On Wed, Oct 7, 2015 at 5:58 AM, Robert Haas wrote: >> On Sat, Oct 3, 2015 at 7:38 AM, Michael Paquier wrote: >> It seems that these days 'make check' creates a directory in /tmp >> called /tmp/pg_regress-RANDOMSTUFF. Listening on TCP ports is >> disabled, and the socket goes inside this directory with a name like >> .s.PGSQL.PORT. You can connect with psql -h >> /tmp/pg_regress-RANDOMSTUFF -p PORT, but not over TCP. This basically >> removes the risk of TCP port number collisions, as well as the risk of >> your temporary instance being hijacked by a malicious user on the same >> machine. > > Right, that's for example /var/folders/ on OSX, and this is defined > once per test run via $tempdir_short. PGHOST is set to that as well. Er, mistake here. That's actually once per standard_initdb, except that all the tests I have included in my patch run it just once to create a master node. It seems that it would be wiser to set one socket dir per node then, remove the port assignment stuff, and use tempdir_short as a key to define a node as well as in the connection string to this node. I'll update the patch later today... -- Michael -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Small documentation fix in src/interfaces/ecpg/preproc/po/pt_BR.po
When working on a script, I stumbled over a mistake in the pt_BR.po translation for ecpg. Patch attached. Regards, -- Andreas 'ads' Scherbaum German PostgreSQL User Group European PostgreSQL User Group - Board of Directors Volunteer Regional Contact, Germany - PostgreSQL Project diff --git a/src/interfaces/ecpg/preproc/po/pt_BR.po b/src/interfaces/ecpg/preproc/po/pt_BR.po index 804e201..1d855a3 100644 --- a/src/interfaces/ecpg/preproc/po/pt_BR.po +++ b/src/interfaces/ecpg/preproc/po/pt_BR.po @@ -313,7 +313,7 @@ msgstr "erro de sintaxe no comando EXEC SQL INCLUDE" #: pgc.l:1237 #, c-format msgid "internal error: unreachable state; please report this to" -msgstr "erro interno: estado inacessível; por favor relato isso a " +msgstr "erro interno: estado inacessível; por favor relato isso a " #: pgc.l:1362 #, c-format -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Getting sorted data from foreign server
Hi All, standard_qp_callback() sets root->query_pathkeys to the pathkeys on which the result of query_planner is expected to be sorted (see the function for more details). The patch checks if any prefix of these pathkeys can be entirely evaluated using the foreign relation and at the foreign server under consideration. If yes, it gets estimates of the costs involved and adds paths with those pathkeys. There can be multiple pathkeyless paths added for a given base relation. For every such path, one path with pathkeys is added. If there is a matching index on the foreign server, getting the data sorted from the foreign server improves execution time, as seen from the results. The patch adds this functionality entirely in postgres_fdw. For a postgres_fdw foreign table ft1(val int, val2 int), with the patch

EXPLAIN VERBOSE SELECT * FROM ft1 ORDER BY val;

gives

                              QUERY PLAN
 Foreign Scan on public.ft1  (cost=100.29..6480.42 rows=100118 width=8)
   Output: val, val2
   Remote SQL: SELECT val, val2 FROM public.lt ORDER BY val ASC
(3 rows)

Observe that the query sent to the foreign server has an ORDER BY clause in it. The test script attached has more examples of the same. The patch adds a small test case.

Results: attached is the script used to measure the performance. The script creates a foreign server and foreign table pointing to the local server and local table resp. The test runs three different types of queries (simple sort, group by, sorted result from inheritance hierarchy) multiple times and calculates the average execution time for each query with and without the patch. The performance is measured for the foreign table (ft1 above) populated with 100 rows (in-memory sorting) and with 10 rows (external sorting) resp. The output of the script with and without the patch and with different sizes of the foreign table is attached here. We can observe the following: 1. 
For a large number of rows (when the memory is not enough to hold all the data to be sorted) we see a 20-25% reduction in query execution time when there is a matching index on the foreign server. 2. For a very small number of rows (when the memory is enough to hold all the data to be sorted) there is not much performance gain, and sometimes the planner does not choose the path with pathkeys for foreign scans. 3. In all the cases, the planning time increases owing to EXPLAIN queries fired on the foreign server. -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company diff --git a/contrib/postgres_fdw/deparse.c b/contrib/postgres_fdw/deparse.c index 697de60..25d8650 100644 --- a/contrib/postgres_fdw/deparse.c +++ b/contrib/postgres_fdw/deparse.c @@ -186,23 +186,26 @@ is_foreign_expr(PlannerInfo *root, * Check that the expression consists of nodes that are safe to execute * remotely. */ glob_cxt.root = root; glob_cxt.foreignrel = baserel; loc_cxt.collation = InvalidOid; loc_cxt.state = FDW_COLLATE_NONE; if (!foreign_expr_walker((Node *) expr, &glob_cxt, &loc_cxt)) return false; - /* Expressions examined here should be boolean, ie noncollatable */ - Assert(loc_cxt.collation == InvalidOid); - Assert(loc_cxt.state == FDW_COLLATE_NONE); + /* + * The collation of the expression should be none or originate from a + * foreign var. + */ + Assert(loc_cxt.state == FDW_COLLATE_NONE || + loc_cxt.state == FDW_COLLATE_SAFE); /* * An expression which includes any mutable functions can't be sent over * because its result is not stable. For example, sending now() remote * side could cause confusion from clock offsets. Future versions might * be able to make this choice with more granularity. (We check this last * because it requires a lot of expensive catalog lookups.) 
*/ if (contain_mutable_functions((Node *) expr)) return false; @@ -1870,10 +1873,50 @@ printRemoteParam(int paramindex, Oid paramtype, int32 paramtypmod, */ static void printRemotePlaceholder(Oid paramtype, int32 paramtypmod, deparse_expr_cxt *context) { StringInfo buf = context->buf; char *ptypename = format_type_with_typemod(paramtype, paramtypmod); appendStringInfo(buf, "((SELECT null::%s)::%s)", ptypename, ptypename); } + +void +appendOrderByClause(StringInfo buf, PlannerInfo *root, RelOptInfo *baserel, List *pathkeys) +{ + ListCell *lcell; + deparse_expr_cxt context; + int nestlevel; + char *delim = " "; + + /* Set up context struct for recursion */ + context.root = root; + context.foreignrel = baserel; + context.buf = buf; + context.params_list = NULL; + + /* Make sure any constants in the exprs are printed portably */ + nestlevel = set_transmission_modes(); + + appendStringInfo(buf, " ORDER BY"); + foreach(lcell, pathkeys) + { + PathKey *pathkey = lfirst(lcell); + Expr *em_expr; + + em_expr = find_em_expr_for_rel(pathkey->pk_eclass, baserel); + Assert(em_expr); +
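The delimiter handling in the appendOrderByClause hunk above can be shown in isolation. This is an illustrative, self-contained sketch only (plain C string functions instead of StringInfo, and a hypothetical helper name): the separator starts as a single space and becomes ", " after the first sort key, yielding a clause like " ORDER BY val ASC, val2 ASC".

```c
#include <string.h>

/* Illustrative model (not the actual postgres_fdw code) of the delimiter
 * pattern used when deparsing an ORDER BY clause: the first key is
 * preceded by a space, subsequent keys by ", ". */
static void
append_order_by(char *buf, size_t buflen, const char **cols, int ncols)
{
    const char *delim = " ";    /* becomes ", " after the first key */
    int         i;

    strncat(buf, " ORDER BY", buflen - strlen(buf) - 1);
    for (i = 0; i < ncols; i++)
    {
        strncat(buf, delim, buflen - strlen(buf) - 1);
        strncat(buf, cols[i], buflen - strlen(buf) - 1);
        strncat(buf, " ASC", buflen - strlen(buf) - 1);
        delim = ", ";
    }
}
```

Applied to the query shown earlier in the thread, this would turn "SELECT val, val2 FROM public.lt" into "SELECT val, val2 FROM public.lt ORDER BY val ASC, val2 ASC".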
Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.
Hello, Please check the attached patch, as the earlier one had a typo in the regression test output. >+#define PG_STAT_GET_PROGRESS_COLS 30 >Why did you use 30? That comes from N_PROGRESS_PARAM * 3, where N_PROGRESS_PARAM = 10 is the number of progress parameters of each type stored in shared memory. There are three such types (int, float, string), hence the total number of progress parameters can be 30. Thank you, Rahila Syed Vacuum_progress_checker_v4.patch Description: Vacuum_progress_checker_v4.patch -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] postgres_fdw extension support
On October 4, 2015 at 9:56:10 PM, Michael Paquier (michael.paqu...@gmail.com) wrote: > On Sun, Oct 4, 2015 at 11:40 AM, Paul Ramsey wrote: > > I put all changes relative to your review here if you want a nice colorized > > place to check > > > > https://github.com/pramsey/postgres/commit/ed33e7489601e659f436d6afda3cce28304eba50 > > - /* updatable is available on both server and table */ > + /* updatable option is available on both server and table */ > This is just noise (perhaps I am the one who introduced it, oops). No, I did, it matches the change to the comment below that addresses Andres' note. The principle of least change is butting up against the principle of language usage consistency here. Can remove, of course. P -- http://postgis.net http://cleverelephant.ca -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Multi-tenancy with RLS
* Haribabu Kommi (kommi.harib...@gmail.com) wrote: > On Tue, Oct 6, 2015 at 10:56 AM, Haribabu Kommi > wrote: > > Here I attached an updated version of the patch with the following changes. > > I found some problems related to providing multi-tenancy on a system > catalog view. > This is because, system catalog view uses the owner that is created > the user instead > of the current user by storing the user information in "checkAsUser" > field in RangeTblEntry > structure. Right, when querying through a view to tables underneath, we use the permissions of the view owner. View creators should be generally aware of this already. I agree that it adds complications to the multi-tenancy idea since the system views, today, allow viewing of all objects. There are two ways to address that: modify the system catalog views to include the same constraints that the policies on the tables do, or allow RLS policies against views and then create the necessary policies on the views in the catalog. My inclination is to work towards the latter, as that's a capability we'd like to have anyway. Thanks! Stephen signature.asc Description: Digital signature
Re: [HACKERS] [PATCH] postgres_fdw extension support
On Tue, Oct 6, 2015 at 5:32 AM, Andres Freund wrote: > The problem is basically that cache invalidations can arrive while > you're building new cache entries. Every time you e.g. open a relation > cache invalidations can arrive. Assume you'd written the code like: > You're avoiding that by only entering into the hashtable *after* the > lookup. And I think that deserves a comment. Thanks, revised patch attached. P. 20151006_postgres_fdw_extensions.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] postgres_fdw extension support
On 2015-10-03 19:40:40 -0700, Paul Ramsey wrote: > > > + /* > > > + * Right now "shippability" is exclusively a function of whether > > > + * the obj (proc/op/type) is in an extension declared by the user. > > > + * In the future we could additionally have a whitelist of functions > > > + * declared one at a time. > > > + */ > > > + bool shippable = lookup_shippable(objnumber, extension_list); > > > + > > > + entry = (ShippableCacheEntry *) > > > + hash_search(ShippableCacheHash, > > > + (void *) &key, > > > + HASH_ENTER, > > > + NULL); > > > + > > > + entry->shippable = shippable; > > > + } > > > > I suggest adding a comment that we consciously are only HASH_ENTERing > > the key after doing the lookup_shippable(), to avoid the issue that the > > lookup might accept cache invalidations and thus delete the entry again. > I’m not understanding this one. I only ever delete cache entries in > bulk, when InvalidateShippableCacheCallback gets called on updates to > the foreign server definition. I must not be quite understanding your > gist. The problem is basically that cache invalidations can arrive while you're building new cache entries. Every time you e.g. open a relation cache invalidations can arrive. Assume you'd written the code like: entry = hash_search(HASH_ENTER, key, &found); if (found) return entry->whateva; entry->whateva = lookup_shippable(...); return entry->whateva; lookup_shippable() opens a relation, which accepts invalidations. Which then go and delete the contents of the hashtable. And you're suddenly accessing free()d memory... You're avoiding that by only entering into the hashtable *after* the lookup. And I think that deserves a comment. Andres Freund -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
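The hazard Andres describes can be modeled in a self-contained toy: a small direct-mapped array stands in for the dynahash table, and the catalog lookup is assumed (worst case) to always fire an invalidation that wipes the cache. Everything here is illustrative, not the actual postgres_fdw code; the point is only the ordering — compute the value first, enter it into the cache second.

```c
#include <stdbool.h>
#include <string.h>

/* Toy model: if we HASH_ENTERed *before* the lookup, the invalidation
 * fired during the lookup would delete the entry out from under us
 * (in real dynahash terms, leave us holding freed memory). */
#define CACHE_SIZE 8

typedef struct { bool used; int key; bool shippable; } Entry;
static Entry cache[CACHE_SIZE];

static void
invalidate_cache(void)        /* stands in for the syscache inval callback */
{
    memset(cache, 0, sizeof(cache));
}

static bool
lookup_shippable(int key)     /* catalog lookup; may trigger invalidation */
{
    invalidate_cache();       /* worst case: an inval always arrives */
    return key % 2 == 0;      /* arbitrary stand-in for the real check */
}

static bool
is_shippable(int key)
{
    int  slot = key % CACHE_SIZE;
    bool shippable;

    if (cache[slot].used && cache[slot].key == key)
        return cache[slot].shippable;

    /* Do the lookup FIRST; only then create the entry, so a mid-lookup
     * invalidation cannot delete the entry we are about to fill in. */
    shippable = lookup_shippable(key);
    cache[slot].used = true;
    cache[slot].key = key;
    cache[slot].shippable = shippable;
    return shippable;
}
```

If the two steps in is_shippable() were swapped, every call would return a freshly wiped (garbage) entry, which is exactly why the patch only enters the key after the lookup.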
[HACKERS] [PATCH] Teach Catalog.pm how many attributes there should be per DATA() line
Fixes a build issue I ran into while adding some columns to system tables: Throws a build error if we encounter a different number of fields in a DATA() line than we expect for the catalog in question. Previously, it was possible to silently ignore any mismatches at build time, which could result in undefined-symbol errors at link time. Now we stop and identify the offending line as soon as we encounter it, which greatly speeds up the debugging process. 0001-Teach-Catalog.pm-how-many-attributes-there-should-be.patch Description: Binary data -- David Christensen PostgreSQL Team Manager End Point Corporation da...@endpoint.com 785-727-1171 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] bugs and bug tracking
Nathan, * Nathan Wagner (nw...@hydaspes.if.org) wrote: > So, in order to do some clean up and see how my pgbugs page > (https://granicus.if.org/pgbugs/) might actually work I've been going > through bugs and marking their status. A lot of questions arise. Thanks for working on this! > A lot of the reports aren't bugs at all, but requests for help. My > guess is that the users either don't know where to ask or don't > understand the difference between a bug and not knowing how to do what > they want to do. Perhaps a more thorough explanation on the submission > form would be useful. While I agree with you that they are requests for help, I doubt changing the submission form would really help much. [...] > I have lots of different types of 'not a bug'. debbugs has categories for more-or-less all of these: For the case where it's a feature rather than a bug, there's "wishlist". > Not a bug, the user should have posted to a different list. (e.g. 13602) > Not a bug, probably user error, which is similar to the above. > Not a bug, but a bug in the libraries we use (e.g. openssl, 10184) The ones above would all simply be closed with a comment to the user about what the issue is. > Not a bug, works as intended, i.e. the user didn't make a mistake, but > had an incorrect notion of what it was supposed to do. (11596) This could be either closed or, if it pops up enough, left as 'wontfix', so other users don't report it again. > Not a bug, but the user never got a reply. That is, I decided > personally that this wasn't actually a bug. (13367) One thing with debbugs, at least, is that the user gets an initial reply saying "we got it" and then another whenever the status changes (including if it gets closed out as not-a-bug or similar). > And bug 1000 is not a bug, system test. Eh, not sure we need to worry about that one too much. :) > Do we care about the difference between any of these? 
I track them > differently in my update script, but they all get the same status in the > db. I like the set of categories which debbugs provides. > Can a bug be 'fixed' if there's no actually identifiable commit that > fixes the bug? See 13516, which Tom Lane claims was fixed in 9.1. A > grep for 13516 of the git log for both master and REL9_1_STABLE don't > turn up anything. Yes, that can certainly happen. It'd be better if the commit which actually fixed it was found, but that's just being idealistic - whatever we use shouldn't force us to close a bug with a commit (debbugs certainly doesn't). > I can't as yet figure out how to match up git commit messages to > identify every branch in which a fix was applied. I could of course > load all of the commit messages into a table and compare them that way. Can't this be done by simply looking for the bug# in the commit log for each branch and, when found, associating that branch with the bug#? > Should I mark as "open" (i.e. not "new") any report that has a response? > More than one response? That would at least distinguish between bug > reports that at least someone, in theory, has taken a look at, and those > that haven't been addressed at all. I'm not sure that distinction is particularly useful, but I know some systems do it. > I have created the following statuses: > > Fixed - bug is presumably fixed Ok. > Unreproduceable - we can't make the system demonstrate this error This should be a flag or attribute of the bug, not a final disposition. > Timed Out - the reporter was asked to provide more information and > didn't respond for a while. I would probably suggest somewhere around a > month for this. Should there be a 'waiting on feedback' to mark the > 'pre timed out' phase? Not sure we need this, isn't it just 'closed'? > Stale 5281 - the bug hasn't had any activity for >2 years, so just > close it. 
If people want to troll through these to give them a better > status, that would probably be good, but there's still a thousand open > bugs newer than that. How is this different from 'timed out'? > Not Our Bug - looks like a bug, but it's not a bug in postgres. What > exactly are our bugs? Just what you'd get out of the release tarballs > or the git repo? Or is there more? This would also be 'closed', but with a note saying it's not our issue. > Won't Fix - this is arguably a bug, but for some reason we're not going > to fix it. Perhaps we intentionally violate the standard, or it's a bug > against a version we don't support, or we're not going to backpatch it. I'm not sure that it's actually a bug, it's more of a placeholder that says "yes, we understand people might think this behavior should be different, but we don't agree and aren't going to change it." > Open - this seems to be a bug, or at least we're talking about it and > it's not where we want to close it. Note of course that "closing" a bug > just means it will show up somewhere else in my tracker, obviously it > doesn't affect the mailing list at all. Yes,
[HACKERS] [PATCH] Comment fixes
Use the correct name “pgindent” in comments. 0001-Make-pgindent-references-consistent.patch Description: Binary data -- David Christensen PostgreSQL Team Manager End Point Corporation da...@endpoint.com 785-727-1171 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] postgres_fdw extension support
On 2015-10-06 06:42:17 -0700, Paul Ramsey wrote: > *sigh*, no you’re not missing anything. In moving to the cache and > marking things just as “shippable” I’ve lost the test that ensures > they are shippable for this *particular* server (which only happens in > the lookup stage). So yeah, the cache will be wrong in cases where > different servers have different extension options. Should be easy enough to fix - just add the server's oid to the key. But I guess you're already on that. Andres -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
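The fix Andres suggests — making the foreign server's OID part of the cache key so that two servers with different "extensions" lists get distinct entries — can be sketched as follows. The struct layout and field names here are illustrative, not the actual patch's; the memset matters because a memcmp-style hash key comparison must see zeroed padding.

```c
#include <string.h>

typedef unsigned int Oid;   /* stand-in for the PostgreSQL typedef */

/* Sketch of a shippable-cache key that includes the server OID. */
typedef struct ShippableCacheKey
{
    Oid serverid;   /* foreign server OID -- the missing ingredient */
    Oid objid;      /* OID of the proc/op/type being checked */
    Oid classid;    /* catalog the object belongs to */
} ShippableCacheKey;

/* Zero the whole key before filling it so padding bytes compare equal
 * under a memcmp-based hash-key comparison. */
static void
init_key(ShippableCacheKey *key, Oid serverid, Oid objid, Oid classid)
{
    memset(key, 0, sizeof(*key));
    key->serverid = serverid;
    key->objid = objid;
    key->classid = classid;
}
```

With this layout, the same function looked up through two different servers hashes to two different entries, which is exactly what the per-server extension lists require.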
Re: [HACKERS] [Proposal] Table partition + join pushdown
Hello. I tried to read this and had some random comments on it. -- general I got some warnings on compilation about unused variables and a wrong argument type. I failed to find a query that this patch works on. Could you let me have some specific example for this patch? This patch needs more comments. Please put comments about not only what the code does but also the reason and other background for it. -- about namings Names for functions and variables need to be more appropriate; in other words, they should properly convey what they are. The following are examples of such names. "added_restrictlist", widely distributed as many function arguments and in JoinPathExtraData, makes me feel dizzy. create_mergejoin_path takes it as "filtering_clauses", which looks far better. try_join_pushdown() is also a name with a much wider meaning. This patch tries to move hashjoins on an inheritance parent to under the append paths. It could generically be called 'pushdown', but it would better be called something like 'transform appended hashjoin' or 'hashjoin distribution'. The latter would be better. (The function name would be try_distribute_hashjoin for that case.) The name make_restrictinfos_from_check_constr() also tells me the wrong thing. For example, extract_constraints_for_hashjoin_distribution() would inform me about what it returns. -- about what make_restrictinfos_from_check_constr() does In make_restrictinfos_from_check_constr, the function returns modified constraint predicates corresponding to vars under hashjoinable join clauses. I don't think expression_tree_mutator is suitable to do that, since it could allow unwanted results when constraint predicates or join clauses are not simple OpExprs. Could you try a more simple and straightforward way to do that? Otherwise, could you give me a clear explanation of what it does? 
regards, -- Kyotaro Horiguchi NTT Open Source Software Center -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] bugs and bug tracking
So, in order to do some cleanup and see how my pgbugs page (https://granicus.if.org/pgbugs/) might actually work I've been going through bugs and marking their status. A lot of questions arise. A lot of the reports aren't bugs at all, but requests for help. My guess is that the users either don't know where to ask or don't understand the difference between a bug and not knowing how to do what they want to do. Perhaps a more thorough explanation on the submission form would be useful. What is the difference between a bug and a feature request? Ok, I know the difference, but do we want to treat them differently? For example, see bug 9457 (https://granicus.if.org/pgbugs/9457). As Pavel Stehule noted, this isn't a *bug* per se, but what should we do with it? Close it as 'not a bug'? I don't like this because it's not really the same as the other 'not a bug's. Create another 'closed' status of 'feature request'? Except that if we decide to implement the feature, in some sense it becomes a bug until we actually implement it. Create an 'open' status of 'feature request' to mark it until it is either implemented or rejected. At least then it's tracked. This last choice is my preference. I conflate open bugs in the sense of 'not closed so we still need to do something with the bug even if it is just closing it' and open bugs in the sense of 'this seems to actually be a bug in postgres'. I'm not sure what terminology I should use. I have lots of different types of 'not a bug':

Not a bug, the user should have posted to a different list. (e.g. 13602)
Not a bug, probably user error, which is similar to the above.
Not a bug, but a bug in the libraries we use (e.g. openssl, 10184)
Not a bug, works as intended, i.e. the user didn't make a mistake, but had an incorrect notion of what it was supposed to do. (11596)
Not a bug, but the user never got a reply. That is, I decided personally that this wasn't actually a bug. (13367)
And bug 1000 is not a bug, system test. 
Do we care about the difference between any of these? I track them differently in my update script, but they all get the same status in the db. Can a bug be 'fixed' if there's no actually identifiable commit that fixes the bug? See 13516, which Tom Lane claims was fixed in 9.1. A grep for 13516 of the git log for both master and REL9_1_STABLE doesn't turn up anything. I can't as yet figure out how to match up git commit messages to identify every branch in which a fix was applied. I could of course load all of the commit messages into a table and compare them that way. Should I mark as "open" (i.e. not "new") any report that has a response? More than one response? That would at least distinguish between bug reports that at least someone, in theory, has taken a look at, and those that haven't been addressed at all. I have created the following statuses:

Fixed - bug is presumably fixed
Unreproduceable - we can't make the system demonstrate this error
Timed Out - the reporter was asked to provide more information and didn't respond for a while. I would probably suggest somewhere around a month for this. Should there be a 'waiting on feedback' status to mark the 'pre timed out' phase?
Stale - the bug hasn't had any activity for >2 years (e.g. 5281), so just close it. If people want to troll through these to give them a better status, that would probably be good, but there are still a thousand open bugs newer than that.
Not Our Bug - looks like a bug, but it's not a bug in postgres. What exactly are our bugs? Just what you'd get out of the release tarballs or the git repo? Or is there more?
Not a Bug - see above discussion
Won't Fix - this is arguably a bug, but for some reason we're not going to fix it. Perhaps we intentionally violate the standard, or it's a bug against a version we don't support, or we're not going to backpatch it.
Open - this seems to be a bug, or at least we're talking about it and it's not where we want to close it. 
Note of course that "closing" a bug just means it will show up somewhere else in my tracker, obviously it doesn't affect the mailing list at all. New - this hasn't been looked at enough for someone to change the status to something better. I don't have a "reopened" status. I'm not sure what it means, other than it used to be closed, but someone changed it back to open. I don't immediately see why we would want to distinguish between this and a regular open bug, other than perhaps as a way of conflating status with priority. It's easy to make one though if people really want it. I probably have too many statuses already. I will post later on my thoughts on how to control the system. Are people, in principle, willing to put magic incantations in their emails and commit messages? I'm not asking for a commitment to my tool here, I'm just exploring what the bounds of people's, and committer's in particular, willingness to adjust their workflow or message texts a bit to make it easier to automate
Re: [HACKERS] [PATCH] postgres_fdw extension support
On 2015-10-06 06:28:34 -0700, Paul Ramsey wrote: > +/* > + * is_shippable > + * Is this object (proc/op/type) shippable to foreign server? > + * Check cache first, then look-up whether (proc/op/type) is > + * part of a declared extension if it is not cached. > + */ > +bool > +is_shippable(Oid objnumber, Oid classnumber, List *extension_list) > +{ > + ShippableCacheKey key; > + ShippableCacheEntry *entry; > + > + /* Always return false if the user hasn't set the "extensions" option */ > + if (extension_list == NIL) > + return false; > + > + /* Find existing cache, if any. */ > + if (!ShippableCacheHash) > + InitializeShippableCache(); > + > + /* Zero out the key */ > + memset(&key, 0, sizeof(key)); > + > + key.objid = objnumber; > + key.classid = classnumber; > + > + entry = (ShippableCacheEntry *) > + hash_search(ShippableCacheHash, > + (void *) &key, > + HASH_FIND, > + NULL); > + > + /* Not found in ShippableCacheHash cache. Construct new entry. */ > + if (!entry) > + { > + /* > + * Right now "shippability" is exclusively a function of whether > + * the obj (proc/op/type) is in an extension declared by the > user. > + * In the future we could additionally have a whitelist of > functions > + * declared one at a time. > + */ > + bool shippable = lookup_shippable(objnumber, classnumber, > extension_list); > + > + /* > + * Don't create a new hash entry until *after* we have the > shippable > + * result in hand, as the shippable lookup might trigger a cache > + * invalidation. > + */ > + entry = (ShippableCacheEntry *) > + hash_search(ShippableCacheHash, > + (void *) &key, > + HASH_ENTER, > + NULL); > + > + entry->shippable = shippable; > + } > + > + if (!entry) > + return false; > + else > + return entry->shippable; > +} Maybe I'm missing something major here. But given that you're looking up solely based on Oid objnumber, Oid classnumber, how would this cache work if there's two foreign servers with different extension lists? 
Greetings,

Andres Freund

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [PATCH] postgres_fdw extension support
On October 6, 2015 at 6:32:36 AM, Andres Freund (and...@anarazel.de(mailto:and...@anarazel.de)) wrote:
> On 2015-10-06 06:28:34 -0700, Paul Ramsey wrote:
> > [... full is_shippable() patch quoted above ...]
>
> Maybe I'm missing something major here. But given that you're looking up
> solely based on Oid objnumber, Oid classnumber, how would this cache
> work if there's two foreign servers with different extension lists?

*sigh*, no you're not missing anything. In moving to the cache and
marking things just as "shippable" I've lost the test that ensures they
are shippable for this *particular* server (which only happens in the
lookup stage). So yeah, the cache will be wrong in cases where different
servers have different extension options.

Thanks,
P.

--
http://postgis.net
http://cleverelephant.ca
Re: [HACKERS] run pg_rewind on an uncleanly shut down cluster.
On Tue, Oct 6, 2015 at 03:58:44PM +0900, Michael Paquier wrote:
> On Tue, Oct 6, 2015 at 12:41 AM, Oleksii Kliukin wrote:
> > pg_rewind -D postgresql0 --source-server="host=127.0.0.1 port=5433
> > dbname=postgres"
> > The servers diverged at WAL position 0/360 on timeline 1.
> > could not open file "data/postgresql0/pg_xlog/00010002":
> > No such file or directory
> >
> > Note that this problem happens not 100% of the time during the tests;
> > sometimes pg_rewind can actually rewind the former master.
>
> I don't think that there is any actual reason preventing us from
> rewinding a node that has its state in pg_control set as something
> else than DB_SHUTDOWNED; the important point here is to be sure that
> the target node is *not* running while pg_rewind is running (perhaps
> pg_rewind should perform an action on the target node so that it
> cannot run, say, creating a fake postgresql.conf with invalid data and
> renaming the existing one). Checking pg_control makes things easier
> though, as there is no need to rely on external binaries like "pg_ctl
> status" or some parsing of postmaster.pid with kill(pid, 0) for
> example.

To disable the old cluster, pg_upgrade renames pg_control to
pg_control.old in disable_old_cluster(). You should do that, or
pg_upgrade should use whatever new method you choose.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +
Re: [HACKERS] [PATCH] postgres_fdw extension support
On Tue, Oct 6, 2015 at 6:55 AM, Andres Freund wrote:
> On 2015-10-06 06:42:17 -0700, Paul Ramsey wrote:
>> *sigh*, no you're not missing anything. In moving to the cache and
>> marking things just as "shippable" I've lost the test that ensures
>> they are shippable for this *particular* server (which only happens in
>> the lookup stage). So yeah, the cache will be wrong in cases where
>> different servers have different extension options.
>
> Should be easy enough to fix - just add the server's oid to the key.
>
> But I guess you're already on that.

Yep, just a bit chastened :)

Thanks for the catch, here's the patch,

P

20151006a_postgres_fdw_extensions.patch
Description: Binary data
Re: [HACKERS] bugs and bug tracking
Magnus Hagander writes:
> It doesn't actually. You can post to the bugs list without being subscribed
> and it hits moderation. If you fill out the bug form without being
> subscribed, it hits exactly the same moderation. There is no difference -
> the bug form basically just sends an email with your address as the from
> one.

> It might be that we don't make that clear, but that's how it works :)

> Maybe we need a "question" form that does the same thing but doesn't assign
> a bugid and just sends an email to -general instead?

Seems like a good idea, but if you make it a separate form, lots of
people will never see it. How about having just one page, but a
drop-down menu for "bug" versus "question"?

regards, tom lane
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 6, 2015 at 7:04 PM, Jaime Casanova <jaime.casan...@2ndquadrant.com> wrote:
> On 6 October 2015 at 08:05, Nathan Wagner wrote:
> > A lot of the reports aren't bugs at all, but requests for help. My
> > guess is that the users either don't know where to ask or don't
> > understand the difference between a bug and not knowing how to do what
> > they want to do. Perhaps a more thorough explanation on the submission
> > form would be useful.
>
> the real problem here is that filling the bug report form doesn't force
> you to register in a ML, while asking for help in a ML will. and a lot
> of people don't want to register in a ML; they just want a specific
> question answered. so i don't think any change in the form will avoid
> that.

It doesn't actually. You can post to the bugs list without being
subscribed and it hits moderation. If you fill out the bug form without
being subscribed, it hits exactly the same moderation. There is no
difference - the bug form basically just sends an email with your
address as the from one.

It might be that we don't make that clear, but that's how it works :)

Maybe we need a "question" form that does the same thing but doesn't
assign a bugid and just sends an email to -general instead?

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 6, 2015 at 01:05:24PM +0000, Nathan Wagner wrote:
> So, in order to do some clean up and see how my pgbugs page
> (https://granicus.if.org/pgbugs/) might actually work I've been going
> through bugs and marking their status. A lot of questions arise.
>
> A lot of the reports aren't bugs at all, but requests for help. My
> guess is that the users either don't know where to ask or don't
> understand the difference between a bug and not knowing how to do what
> they want to do. Perhaps a more thorough explanation on the submission
> form would be useful.

I am glad you are putting together a mock-up solution with actual data
so we can see what a final solution would look like, and see some of the
decisions we have to make.

First, let me say I am glad we are talking about this, and am open to
the criticism that my and others' tracking of open items by keeping them
in our personal mailboxes is not only odd, but bizarre given the size of
our community and the responsibility we have. I do think we rushed to
choose a tracker a little too quickly.

Let me explain, from a high level, what a new tracker will change in our
workflow. Right now, items stream at us via email. They are broadcast
to everyone on the lists. For most people, the email just flies by and
is deleted. A few people respond, and generate more activity. A few
others keep the email to see if the issue is resolved, and if not, try
to get it resolved later, or recorded.

Therefore, our current default behavior is to ignore user reports,
unless someone takes an action to reply, record, or retain the email for
later review. What a tracker does is to make the default user report be
_retained_, meaning we have to take action to _not_ retain a user report
as an open item.
This gets to the core issue --- maintaining a tracker is going to
require more work than what we do now, which explains why most community
project trackers (and perhaps most commercial project trackers) become
clogged with unaddressed items. This also highlights the need for a
serious dedication of time to keep a tracker orderly. This is perhaps
best expressed by this comment from Andrew Dunstan:

	http://www.postgresql.org/message-id/560d4960.5010...@dunslane.net

	In my former life I once had to send out a memo to developers
	that said "If you're not working on items in the tracker you're
	not doing your job."

In summary, a tracker can become an unrelenting task-master, where you
are continually un-retaining items and reclassifying, which might
explain why trackers often end up disorderly or ignored. There are
advantages to having a tracker, but it will take action on our part to
manage the new default-retain behavior. I think this is why email
integration is key, because it allows us to adjust the retain behavior
with minimal effort.

Second, we have a mix of user reports. Some bug reports are not bugs
and must be reclassified. In other cases, users ask questions via
non-tracked communication channels, e.g. pgsql-general, that are really
bugs. So, to do this right, we need a way of marking tracked bugs as
not bugs, and a way of adding bugs that were reported in a non-tracked
manner.

One of the advantages of the non-retain behavior we have now is that,
for responsible developers, a recognized bug either has to be recorded
or fixed --- both take time, so we have a tendency to choose "fix".
With a retain-default behavior, "recorded" becomes the default and the
pressure to fix is reduced. Of course, there is also the "let it fly by
and ignore it" option, which we have now, and which we will not have in
the default-retain mode. My point is that we have our current workflow
not because we are idiots, but because it fit our workflow and resources
best.
I am not sure if we have succeeded because of our current non-retain
mode, or in spite of it. It might be time to switch to a default-retain
mode, especially since most other projects have that mode, but we should
be clear about what we are getting into.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +
Re: [HACKERS] bugs and bug tracking
On Tue, Oct 06, 2015 at 12:04:11PM -0500, Jaime Casanova wrote:
> I like how this page is looking now, good work.

Thank you.

> Now, AFAIU from previous mails part of the reason to have a bug
> tracker is to make it easy to know when a bug was fixed. Which should
> be interpreted as: which versions did this bug affect? and which minor
> versions on those branches fix the issue?
>
> for example bug #13636 was reported for 9.4.4 but it existed in older
> branches so Tom fixed it in all active branches (ie:
> http://www.postgresql.org/message-id/e1zfjgx-0005lu...@gemulon.postgresql.org).
> is it possible (even if not yet implemented) to add that information?

It is possible. I don't know yet how easy it will be to automate it for
all the back patches. I think I can look for matching commit messages
but I haven't written any code yet. It might be possible to infer this
information after the fact by looking at which branches a commit fix was
applied to.

> also i like that we can search on bugs, but we can't filter by version.
> i'm just guessing that if someone looks for a bug he's probably
> worrying about the specific version he is using.

I'll probably get to adding filtering soon-ish. Maybe even today. I
haven't decided if I want to do that on the server side, or add some
javascript to let the user sort and filter whatever the page has already
returned. I'm not really a web guy, so it takes me a while to figure
out what to do.

> having a bug tracker that allows us to not lose track of bugs is good;
> having a bug tracker that allows us to have the whole information on a
> bug is better, IMHO.

I agree. It's just a matter of somehow automating the process. I'm
under no illusions though that I have any control over things. I'm
hoping that one or more of the committers will say "we'd like to do it
this way" and I'll work with that. Personally, I'd like to see either
'[Fixes #12345]' anywhere in a commit message, or 'Fixes: #12345' at the
beginning of a line.
But it has to come from them.

Also, the version numbers are user-reported and a bit of a mess. I
don't think they could really be relied on as-is for users trying to
find out if a bug affects their version. Someone would have to update
that information, and communicate that update to the tracker.

The general consensus seems to be that magic phrases at the beginning of
a line in a message body are the way to go. I don't necessarily agree,
but any consistent system can be made to work. This obviously applies
to any metadata, not just version numbers.

> > A lot of the reports aren't bugs at all, but requests for help. My
> > guess is that the users either don't know where to ask or don't
> > understand the difference between a bug and not knowing how to do what
> > they want to do. Perhaps a more thorough explanation on the submission
> > form would be useful.
>
> the real problem here is that filling the bug report form doesn't force
> you to register in a ML, while asking for help in a ML will. and a lot
> of people don't want to register in a ML; they just want a specific
> question answered. so i don't think any change in the form will avoid
> that.

True. Perhaps we could provide another form for other lists. Probably
tilting at windmills here, but it would be nice if we could cut down on
the amount of time taken up by "this isn't a bug, you should go ask down
the hall".

--
nw
[HACKERS] [PATCH] minor doc patch
This patch changes "variable length" into "variable-length" in
http://www.postgresql.org/docs/9.5/static/sql-createtype.html, since in
other places there it is written with a "-" in the middle, not a " ".

--
Nikolay Shaplov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

diff --git a/doc/src/sgml/ref/create_type.sgml b/doc/src/sgml/ref/create_type.sgml
index a6a4644..5a09f19 100644
--- a/doc/src/sgml/ref/create_type.sgml
+++ b/doc/src/sgml/ref/create_type.sgml
@@ -324,7 +324,7 @@ CREATE TYPE name
    internallength. Base data types can be fixed-length, in which case
    internallength is a
-   positive integer, or variable length, indicated by setting
+   positive integer, or variable-length, indicated by setting
    internallength to VARIABLE. (Internally, this is
    represented by setting typlen to -1.) The internal
    representation of all
Re: [HACKERS] bugs and bug tracking
On 6 October 2015 at 08:05, Nathan Wagner wrote:
> So, in order to do some clean up and see how my pgbugs page
> (https://granicus.if.org/pgbugs/) might actually work I've been going
> through bugs and marking their status. A lot of questions arise.

/* DISCLAIMER */ My opinion is not important in this issue /* END DISCLAIMER */

I like how this page is looking now, good work.

Now, AFAIU from previous mails part of the reason to have a bug tracker
is to make it easy to know when a bug was fixed. Which should be
interpreted as: which versions did this bug affect? and which minor
versions on those branches fix the issue?

for example bug #13636 was reported for 9.4.4 but it existed in older
branches so Tom fixed it in all active branches (ie:
http://www.postgresql.org/message-id/e1zfjgx-0005lu...@gemulon.postgresql.org).
is it possible (even if not yet implemented) to add that information?

also i like that we can search on bugs, but we can't filter by version.
i'm just guessing that if someone looks for a bug he's probably worrying
about the specific version he is using.

having a bug tracker that allows us to not lose track of bugs is good;
having a bug tracker that allows us to have the whole information on a
bug is better, IMHO.

> A lot of the reports aren't bugs at all, but requests for help. My
> guess is that the users either don't know where to ask or don't
> understand the difference between a bug and not knowing how to do what
> they want to do. Perhaps a more thorough explanation on the submission
> form would be useful.

the real problem here is that filling the bug report form doesn't force
you to register in a ML, while asking for help in a ML will. and a lot
of people don't want to register in a ML; they just want a specific
question answered. so i don't think any change in the form will avoid
that.
--
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación