On 19/08/10 04:46, Robert Haas wrote:
At any rate, we should definitely NOT wait another
month to start thinking about Sync Rep again.
Agreed. EnterpriseDB is interested in having that feature, so I'm on the
hook to spend time on it regardless of commitfests.
I haven't actually
looked at an
Magnus Hagander wrote:
> Is there some way to make cvs2git work this way, and just not bother
> even trying to create merge commits, or is that fundamentally
> impossible and we need to look at another tool?
The good news: (I just reminded myself/realized that) Max Bowsher has
already implemented
On Wed, Aug 18, 2010 at 7:46 PM, Greg Smith wrote:
> Kevin Grittner wrote:
>>
>> I don't think I want to try to handle two in a row, and I think your style
>> is better suited
>> than mine to the final CF for a release, but I might be able to take on
>> the 2010-11 CF if people want that
>
> Ha, y
Alvaro Herrera wrote:
> Excerpts from Michael Haggerty's message of Wed Aug 18 12:00:44 -0400 2010:
>
>> 3. Run
>>
>> git filter-branch
>>
>> This rewrites the commits using any parentage changes from the grafts
>> file. This changes most commits' SHA1 hashes. After this you can
>> discard t
>>> How about an idea to add a new flag in RangeTblEntry which shows where
>>> the RangeTblEntry came from, instead of clearing requiredPerms?
>>> If the flag is true, I think ExecCheckRTEPerms() can simply skip checks
>>> on the child tables.
>>
>> How about the external module just checks if the
(2010/08/18 21:52), Stephen Frost wrote:
> * KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
>> If rte->requiredPerms would not be cleared, the user of the hook will
>> be able to check access rights on the child tables, as they like.
>
> This would only be the case for those children which are being t
Kevin Grittner wrote:
I don't think I want to try to handle two in a row, and I think your style is
better suited
than mine to the final CF for a release, but I might be able to take on the
2010-11 CF if people want that
Ha, you just put yourself right back on the hook with that comment, and
At the close of the 2010-07 CommitFest, the numbers were:
72 patches were submitted
3 patches were withdrawn (deleted) by their authors
14 patches were moved to CommitFest 2010-09
--
55 patches in CommitFest 2010-07
--
3 committed to 9.0
--
52 patches for 9.1
--
1 rejected
20 returned with feedback
On Aug 18, 2010, at 9:02 AM, Robert Haas wrote:
> On Wed, Aug 18, 2010 at 8:45 AM, Greg Stark wrote:
>> On Tue, Aug 17, 2010 at 11:29 PM, Dave Page wrote:
>>> Which is ideal for monitoring your own connection - having the info in
>>> the pg_stat_activity is also valuable for monitoring and syst
Robert Haas wrote:
> I'd just like to take a minute to thank him publicly for his
> efforts. We started this CommitFest with something like 60
> patches, which is definitely on the larger side for a CommitFest,
> and Kevin did a great job staying on top of what was going on with
> all of them a
Josh Berkus writes:
>> Most likely that's the libc implementation of the select()-based sleeps
>> for vacuum_cost_delay. I'm still suspicious that the writes are eating
>> more cost_delay points than you think.
> Tested that. It does look like if I increase vacuum_cost_limit to 1
> and lowe
On Wed, 2010-08-18 at 13:45 +0100, Greg Stark wrote:
> But progress bars alone aren't really the big prize. I would really
> love to see the explain plans for running queries.
The auto_explain module does that already.
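For reference, a minimal way to turn auto_explain on for a session looks like this (the threshold of 0 ms is purely for demonstration; the plans go to the server log, and they are logged when a statement finishes, not while it runs):

```sql
-- Load auto_explain into the current session and log the plan of
-- every completed statement.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;    -- log all statements' plans
SET auto_explain.log_analyze = true;      -- include actual row counts/times

-- Any query run now has its EXPLAIN output written to the server log.
SELECT count(*) FROM pg_class;
```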
On Tue, 2010-08-17 at 13:52 -0400, Stephen Frost wrote:
> I don't like how the backend would have to send something NOTICE-like,
> I had originally been thinking "gee, it'd be nice if psql could query
> pg_stat while doing something else", but that's not really possible...
> So, I guess NOTICE-like
> Tested that. It does look like if I increase vacuum_cost_limit to 1
> and lower vacuum_cost_page_dirty to 10, it reads 5-7 pages and writes
> 2-3 before each pollsys. The math seems completely wrong on that,
> though -- it should be 50 and 30 pages, or similar. If I can, I'll test
> a vac
Dean Rasheed writes:
> The problem is that the trigger code assumes that anything it
> allocates in the per-tuple memory context will be freed per-tuple
> processed, which used to be the case because the loop in ExecutePlan()
> calls ResetPerTupleExprContext() once each time round the loop, and
>
Kevin didn't send out an official gavel-banging announcement of the
end of CommitFest 2010-07 (possibly because I neglected until today to
give him privileges to actually change it in the web application), but
I'd just like to take a minute to thank him publicly for his efforts.
We started this Com
> That would explain all the writes, but it doesn't seem to explain why
> your two servers aren't behaving similarly.
Well, that's why I said "ostensibly identical". There may in fact be
differences, not just in the databases but in some OS libs as well.
These servers have been in production for
Robert Haas writes:
> Anyway, it's not really important enough to me to have a protracted
> argument about it. Let's wait and see if anyone else has an opinion,
> and perhaps a consensus will emerge.
Well, nobody else seems to care, so I went ahead and committed the
shorter form of the patch, ie
Tom Lane wrote:
> Josh Berkus writes:
>> This is an anti-wraparound vacuum, so it could have something to
>> do with the hint bits. Maybe it's setting the freeze bit on
>> every page, and writing them one page at a time?
>
> That would explain all the writes, but it doesn't seem to explain
>
Fujii Masao writes:
> The explanation of trace_recovery_messages in the document
> is inconsistent with the definition of it in guc.c.
Setting the default to WARNING is confusing and useless, because
there are no trace_recovery calls with that debug level. IMO the
default setting should be LOG,
Josh Berkus writes:
>> Rather, what you need to be thinking about is how
>> come vacuum seems to be making lots of pages dirty on only one of these
>> machines.
> This is an anti-wraparound vacuum, so it could have something to do with
> the hint bits. Maybe it's setting the freeze bit on every
> On further reflection, though: since we put in the BufferAccessStrategy
> code, which was in 8.3, the background writer isn't *supposed* to be
> very much involved in writing pages that are dirtied by VACUUM. VACUUM
> runs in a small ring of buffers and is supposed to have to clean its own
> di
Josh Berkus writes:
>> What I find interesting about that trace is the large proportion of
>> writes. That appears to me to indicate that it's *not* a matter of
>> vacuum delays, or at least not just a matter of that. The process seems
>> to be getting involved in having to dump dirty buffers to
> What I find interesting about that trace is the large proportion of
> writes. That appears to me to indicate that it's *not* a matter of
> vacuum delays, or at least not just a matter of that. The process seems
> to be getting involved in having to dump dirty buffers to disk. Perhaps
> the ba
On Wed, Aug 18, 2010 at 11:29 AM, Peter Eisentraut wrote:
> On Tue, 2010-08-17 at 01:16 -0500, Jaime Casanova wrote:
>> >> creating collations ...FATAL: invalid byte sequence for encoding
>> >> "UTF8": 0xe56c09
>> >> CONTEXT: COPY tmp_pg_collation, line 86
>> >> STATEMENT: COPY tmp_pg_collation
Excerpts from Robert Haas's message of Wed Aug 18 13:10:19 -0400 2010:
> I think what is frustrating is that we have a mental image of what the
> history looks like in CVS based on what we actually do, and it doesn't
> look anything like the history that cvs2git created. You can to all
> kinds of
While testing triggers, I came across the following memory leak.
Here's a simple test case:
CREATE TABLE foo(a int);
CREATE OR REPLACE FUNCTION trig_fn() RETURNS trigger AS
$$
BEGIN
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER ins_trig BEFORE INSERT ON foo
FOR EACH ROW EXECUTE PROCEDURE trig_fn();
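The preview is cut off at this point; a bulk insert of the following shape is the usual way to make a per-row trigger leak visible (with the leak present, backend memory grows steadily as the rows are processed):

```sql
-- Fire the BEFORE INSERT row trigger once per row.
INSERT INTO foo SELECT generate_series(1, 1000000);
```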
On Wed, Aug 18, 2010 at 12:18 PM, Michael Haggerty wrote:
> Tom Lane wrote:
>> Michael Haggerty writes:
>>> The "exclusive" possibility is to ignore the fact that some of the
>>> content of B4 came from trunk and to pretend that FILE1 just appeared
>>> out of nowhere in commit B4 independent of t
On Wed, 2010-08-18 at 12:26 -0400, Alvaro Herrera wrote:
> Excerpts from Magnus Hagander's message of Wed Aug 18 11:52:58 -0400 2010:
> > On Wed, Aug 18, 2010 at 17:33, Khee Chin wrote:
> > > I previously proposed off-list an alternate solution to generate the git
> > > repository which was turned
Excerpts from Michael Haggerty's message of Wed Aug 18 12:00:44 -0400 2010:
> 3. Run
>
> git filter-branch
>
> This rewrites the commits using any parentage changes from the grafts
> file. This changes most commits' SHA1 hashes. After this you can
> discard the .git/info/grafts file. You
On Tue, 2010-08-17 at 01:16 -0500, Jaime Casanova wrote:
> >> creating collations ...FATAL: invalid byte sequence for encoding
> >> "UTF8": 0xe56c09
> >> CONTEXT: COPY tmp_pg_collation, line 86
> >> STATEMENT: COPY tmp_pg_collation FROM
> >> E'/usr/local/pgsql/9.1/share/locales.txt';
> >> """
>
Excerpts from Magnus Hagander's message of Wed Aug 18 11:52:58 -0400 2010:
> On Wed, Aug 18, 2010 at 17:33, Khee Chin wrote:
> > I previously proposed off-list an alternate solution to generate the git
> > repository which was turned down due to it not being able to handle
> > incremental updates.
Tom Lane wrote:
> Michael Haggerty writes:
>> The "exclusive" possibility is to ignore the fact that some of the
>> content of B4 came from trunk and to pretend that FILE1 just appeared
>> out of nowhere in commit B4 independent of the FILE1 in TRUNK:
>
>> T0 -- T1 -- T2 T3 -- T4
Robert Haas wrote:
> Exactly. IMHO, the way this should work is by starting at the
> beginning of time and working forward. [...]
What you are describing is more or less the algorithm that was used by
cvs2svn version 1.x. It mostly works, but has nasty edge cases that are
impossible to fix.
cv
Alvaro Herrera wrote:
> Excerpts from Michael Haggerty's message of Wed Aug 18 05:01:29 -0400 2010:
>
>> [...] Alternatively, you
>> could write a tool that would rewrite the ancestry information in the
>> repository *after* the cvs2git conversion using .git/info/grafts (see
>> git-filter-branch(
On Wed, Aug 18, 2010 at 11:03 AM, Tom Lane wrote:
> Michael Haggerty writes:
>> So let's take the simplest example: a branch BRANCH1 is created from
>> trunk commit T1, then some time later another FILE1 from trunk commit T3
>> is added to BRANCH1 in commit B4. How should this series of events b
On Wed, Aug 18, 2010 at 17:33, Khee Chin wrote:
> I previously proposed off-list an alternate solution to generate the git
> repository which was turned down due to it not being able to handle
> incremental updates. However, since we are now looking at a one-time
> conversion, this method might co
Tom Lane wrote:
> Michael Haggerty writes:
>> So let's take the simplest example: a branch BRANCH1 is created from
>> trunk commit T1, then some time later another FILE1 from trunk commit T3
>> is added to BRANCH1 in commit B4. How should this series of events be
>> represented in a git repositor
2010/8/18 Tom Lane :
> Pavel Stehule writes:
>> 2010/8/18 Tom Lane :
>>> There would be plenty of scope to re-use the machinery without any
>>> SQL-level extensions. All you need is a polymorphic aggregate
>>> transition function that maintains a tuplestore or whatever.
>
>> Have we to use a tran
Pavel Stehule writes:
> 2010/8/18 Tom Lane :
>> There would be plenty of scope to re-use the machinery without any
>> SQL-level extensions.  All you need is a polymorphic aggregate
>> transition function that maintains a tuplestore or whatever.
> Have we to use a transition function? If we impl
I previously proposed off-list an alternate solution to generate the git
repository which was turned down due to it not being able to handle
incremental updates. However, since we are now looking at a one-time
conversion, this method might come in handy.
---
Caveat: cvs2git apparently requires CVS
Excerpts from Michael Haggerty's message of Wed Aug 18 05:01:29 -0400 2010:
> cvs2git doesn't currently have this option. I'm not sure how much work
> it would be to implement; probably a few days'. Alternatively, you
> could write a tool that would rewrite the ancestry information in the
> repo
On Wed, Aug 18, 2010 at 04:46:57PM +0200, Pavel Stehule wrote:
> 2010/8/18 Tom Lane :
> > David Fetter writes:
> >> Apart from the medians, which "median-like" aggregates do you
> >> have in mind to start with? If you can provide examples of
> >> "median-like" aggregates that people might need to
Michael Haggerty writes:
> So let's take the simplest example: a branch BRANCH1 is created from
> trunk commit T1, then some time later another FILE1 from trunk commit T3
> is added to BRANCH1 in commit B4. How should this series of events be
> represented in a git repository?
> ...
> The "exclus
David Fetter writes:
> On Wed, Aug 18, 2010 at 10:39:33AM -0400, Tom Lane wrote:
>> There would be plenty of scope to re-use the machinery without any
>> SQL-level extensions. All you need is a polymorphic aggregate
>> transition function that maintains a tuplestore or whatever.
>> I don't see th
2010/8/18 Tom Lane :
> David Fetter writes:
>> Apart from the medians, which "median-like" aggregates do you have in
>> mind to start with? If you can provide examples of "median-like"
>> aggregates that people might need to implement as user-defined
>> aggregates, or other places where people wo
On Wed, Aug 18, 2010 at 10:39:33AM -0400, Tom Lane wrote:
> David Fetter writes:
> > Apart from the medians, which "median-like" aggregates do you have in
> > mind to start with? If you can provide examples of "median-like"
> > aggregates that people might need to implement as user-defined
> > ag
2010/8/18 David Fetter :
> On Wed, Aug 18, 2010 at 04:10:18PM +0200, Pavel Stehule wrote:
>> 2010/8/18 David Fetter :
>> > Which median do you plan to implement? Or do you plan to implement
>> > several different medians, each with distinguishing names?
>>
>> my proposal enabled implementation of
David Fetter writes:
> Apart from the medians, which "median-like" aggregates do you have in
> mind to start with? If you can provide examples of "median-like"
> aggregates that people might need to implement as user-defined
> aggregates, or other places where people would use this machinery, it
On Wed, Aug 18, 2010 at 04:10:18PM +0200, Pavel Stehule wrote:
> 2010/8/18 David Fetter :
> > Which median do you plan to implement? Or do you plan to implement
> > several different medians, each with distinguishing names?
>
> my proposal enabled implementation of any "median like" function. But
2010/8/18 David Fetter :
> On Wed, Aug 18, 2010 at 04:03:25PM +0200, Pavel Stehule wrote:
>> 2010/8/18 Tom Lane :
>> > Pavel Stehule writes:
>> >> I still thinking about a "median" type functions. My idea is to
>> >> introduce a new syntax for stype definition - like
>> >
>> >> stype = type, or
>>
On Wed, Aug 18, 2010 at 04:03:25PM +0200, Pavel Stehule wrote:
> 2010/8/18 Tom Lane :
> > Pavel Stehule writes:
> >> I still thinking about a "median" type functions. My idea is to
> >> introduce a new syntax for stype definition - like
> >
> >> stype = type, or
> >> stype = ARRAY OF type [ ORDER
2010/8/18 Tom Lane :
> Pavel Stehule writes:
>> I still thinking about a "median" type functions. My idea is to
>> introduce a new syntax for stype definition - like
>
>> stype = type, or
>> stype = ARRAY OF type [ ORDER [ DESC | ASC ]], or
>> stype = TUPLESTORE OF type, or
>> stype = TUPLESORT OF
Pavel Stehule writes:
> I still thinking about a "median" type functions. My idea is to
> introduce a new syntax for stype definition - like
> stype = type, or
> stype = ARRAY OF type [ ORDER [ DESC | ASC ]], or
> stype = TUPLESTORE OF type, or
> stype = TUPLESORT OF type [ DESC | ASC ]
This see
On Wed, Aug 18, 2010 at 8:45 AM, Greg Stark wrote:
> On Tue, Aug 17, 2010 at 11:29 PM, Dave Page wrote:
>> Which is ideal for monitoring your own connection - having the info in
>> the pg_stat_activity is also valuable for monitoring and system
>> administration. Both would be ideal :-)
>
> Hm, I
On 18 August 2010 13:45, Greg Stark wrote:
> On Tue, Aug 17, 2010 at 11:29 PM, Dave Page wrote:
>> Which is ideal for monitoring your own connection - having the info in
>> the pg_stat_activity is also valuable for monitoring and system
>> administration. Both would be ideal :-)
>
> Hm, I think I
On Wed, Aug 18, 2010 at 8:49 AM, Stephen Frost wrote:
> In the end, I'm thinking that if the external security module wants to
> enforce a check against all the children of a parent, they could quite
> possibly handle that already and do it in such a way that it won't break
> depending on the spec
* KaiGai Kohei (kai...@ak.jp.nec.com) wrote:
> If rte->requiredPerms would not be cleared, the user of the hook will
> be able to check access rights on the child tables, as they like.
This would only be the case for those children which are being touched
in the current query, which would depend o
Robert,
* Robert Haas (robertmh...@gmail.com) wrote:
> If C1, C2, and C3 inherit from P, it's perfectly reasonable to grant
> permissions to X on C1 and C2, Y on C3, and Z on C1, C2, C3, and P. I
> don't think we should disallow that. Sure, it's possible to do things
> that are less sane, but if
On Tue, Aug 17, 2010 at 11:29 PM, Dave Page wrote:
> Which is ideal for monitoring your own connection - having the info in
> the pg_stat_activity is also valuable for monitoring and system
> administration. Both would be ideal :-)
Hm, I think I've come around to the idea that having the info in
Hello
I am still thinking about "median"-type functions. My idea is to
introduce a new syntax for the stype definition, like:
stype = type, or
stype = ARRAY OF type [ ORDER [ DESC | ASC ]], or
stype = TUPLESTORE OF type, or
stype = TUPLESORT OF type [ DESC | ASC ]
when stype is ARRAY of then final an
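As a point of comparison, a median can already be built on the existing aggregate machinery, with an array as the transition state standing in for the proposed TUPLESORT stype (the names `median` and `float8_median_final` are made up for illustration):

```sql
-- Sketch: median over float8 using an array transition state.
CREATE OR REPLACE FUNCTION float8_median_final(float8[]) RETURNS float8 AS $$
    SELECT CASE
               WHEN cnt = 0 THEN NULL
               WHEN cnt % 2 = 1 THEN s[cnt / 2 + 1]
               ELSE (s[cnt / 2] + s[cnt / 2 + 1]) / 2.0
           END
    FROM (SELECT array_agg(v ORDER BY v) AS s, count(v) AS cnt
          FROM unnest($1) AS v
          WHERE v IS NOT NULL) AS t
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE median(float8) (
    sfunc     = array_append,
    stype     = float8[],
    initcond  = '{}',
    finalfunc = float8_median_final
);

-- Usage: SELECT median(x) FROM tbl;
```

The proposed stype syntax would mainly spare the author from spilling the whole input into an array (or tuplestore) by hand inside the transition function.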
2010/8/18 KaiGai Kohei :
>> It's also worth pointing out that the hook in ExecCheckRTPerms() does
>> not presuppose label-based security. It could be used to implement
>> some other policy altogether, which only strengthens the argument that
>> we can't know how the user of the hook wants to handl
Hello
I found a breakage in the GROUPING SETS implementation. Now I am playing
with my own executor and planner nodes and I can't move forward :(. This
feature will probably need a significant update of our agg implementation.
It probably needs a structure similar to CTE, but it can be a
little bit reduc
On Wed, Aug 18, 2010 at 11:01, Michael Haggerty wrote:
> Martijn van Oosterhout wrote:
>> On Wed, Aug 18, 2010 at 08:25:45AM +0200, Michael Haggerty wrote:
>>> So let's take the simplest example: a branch BRANCH1 is created from
>>> trunk commit T1, then some time later another FILE1 from trunk co
Martijn van Oosterhout wrote:
> On Wed, Aug 18, 2010 at 08:25:45AM +0200, Michael Haggerty wrote:
>> So let's take the simplest example: a branch BRANCH1 is created from
>> trunk commit T1, then some time later another FILE1 from trunk commit T3
>> is added to BRANCH1 in commit B4. How should this
On Wed, Aug 18, 2010 at 08:25, Michael Haggerty wrote:
> Tom Lane wrote:
>> I lack git-fu pretty completely, but I do have the CVS logs ;-).
>> It looks like some of these commits that are being ascribed to the
>> REL8_3_STABLE branch were actually only committed on HEAD. For
>> instance my commi