Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Should we force initdb to correct these pg_proc entries, or just quietly
>> change pg_proc.h?
> Considering the extent of the changes, I'd be in favor of forcing an initdb.
Well, if you're going to change the contents of pg_cast th
Simon Riggs <[EMAIL PROTECTED]> writes:
> That's a killer reason, I suppose. I was really trying to uncover what
> the thinking was, so we can document it. Having VACUUM ignore it
> completely seems wrong.
What you seem to be forgetting is that VACUUM is charged with cleaning
out LP_DEAD tuples, w
Bruce Momjian wrote:
> I am confused. You stated in your earlier email:
>
>> Looking again at bug report [1], I agree that's a glibc bug. Numbers
>> in pt_BR have the format 1.234.567,89; sometimes the format 1234567,89
>> is acceptable too, i.e., the thousands separator is optional. I guess
>
> s
Euler Taveira de Oliveira wrote:
> Bruce Momjian wrote:
>
> > OK, I researched this and realized it should have been obvious to me
> > when I added this code in 2006 that making the thousands separator
> > always "," for a locale of "" was going to cause a problem.
> >
> I tested your patch and I
Bruce Momjian wrote:
> OK, I researched this and realized it should have been obvious to me
> when I added this code in 2006 that making the thousands separator
> always "," for a locale of "" was going to cause a problem.
>
I tested your patch and IMHO it breaks the glibc behavior. I'm providing
The world rejoiced as [EMAIL PROTECTED] (Alvaro Herrera) wrote:
> Simon Riggs wrote:
>> I notice that slony records the oldestxmin that was running when it last
>> ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
>> when it would be clearly pointless to do so.
>>
>> AFAICS a
Tom Lane wrote:
> Given the actual behavior of xmltotext_with_xmloption, it certainly
> seems like a pretty useless error check. Also, xml_out doesn't behave
> that way, so why should xmltotext?
>
> The volatility markings of xml_in and texttoxml seem wrong too.
This is the patch that came out of
Simon Riggs wrote:
> On Thu, 2007-11-08 at 13:34 -0800, Joshua D. Drake wrote:
> > On Sun, 04 Nov 2007 18:55:59 +
> > Simon Riggs <[EMAIL PROTECTED]> wrote:
> > ve up and have ready access to
> > > > is a HP DL 585. It has 8 cores (Opteron), 32GB of ram and 28
> > > > spindles over 4 channels.
> > > > My question is -hackers,
On Thu, 2007-11-08 at 13:34 -0800, Joshua D. Drake wrote:
> On Sun, 04 Nov 2007 18:55:59 +
> Simon Riggs <[EMAIL PROTECTED]> wrote:
> ve up and have ready access to
> > > is a HP DL 585. It has 8 cores (Opteron), 32GB of ram and 28
> > > spindles over 4 channels.
> > >
> > > My question is -ha
"Markus Schiltknecht" <[EMAIL PROTECTED]> writes:
>> 1) Go through all subrels asking for any interesting pathkey lists. Gather up
>> the union of all of these.
>
> I also tried to modify the Append node first, then figured that it might be
> better to base on the merge join node instead. Whil
Hello Gregory,
Gregory Stark wrote:
> I've been hacking on the idea of an Append node which maintains the ordering
> of its subtables merging their records in order. This is important for
> partitioned tables since otherwise a lot of plans are not available such as
> merge joins.
Cool!
Some time ago,
On Thu, 2007-11-22 at 19:02 +, Heikki Linnakangas wrote:
> Even if we could use PageIsPrunable, it would be a bad thing from a
> robustness point of view. If we ever failed to set the Prunable-flag on
> a page for some reason, VACUUM would never remove the dead tuples.
That's a killer reaso
Simon Riggs wrote:
> On Thu, 2007-11-22 at 13:21 -0500, Tom Lane wrote:
> > Simon Riggs <[EMAIL PROTECTED]> writes:
> > > Why isn't VACUUM optimised the same way HOT is?
> > It doesn't do the same things HOT does.
> Thanks for the enlightenment :-)
> Clearly much of the code in heap_page_prune_opt() differs, y
On Thu, 2007-11-22 at 15:20 -0300, Alvaro Herrera wrote:
> Simon Riggs wrote:
> > I notice that slony records the oldestxmin that was running when it last
> > ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
> > when it would be clearly pointless to do so.
> >
> > AFAICS aut
On Thu, 2007-11-22 at 13:21 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > Why isn't VACUUM optimised the same way HOT is?
>
> It doesn't do the same things HOT does.
Thanks for the enlightenment :-)
Clearly much of the code in heap_page_prune_opt() differs, yet the test
fo
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Simon Riggs wrote:
>> [Also there is a comment saying "this is a bug" in autovacuum.c
>> Are we thinking to go production with that phrase in the code?]
> Yeah, well, it's only a comment ;-) The problem is that a worker can
> decide that a table needs
Simon Riggs <[EMAIL PROTECTED]> writes:
> Why isn't VACUUM optimised the same way HOT is?
It doesn't do the same things HOT does.
regards, tom lane
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Simon Riggs wrote:
> I notice that slony records the oldestxmin that was running when it last
> ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
> when it would be clearly pointless to do so.
>
> AFAICS autovacuum does not do this, or did I miss that?
Hmm, I think it's just
I notice that slony records the oldestxmin that was running when it last
ran a VACUUM on its tables. This allows slony to avoid running a VACUUM
when it would be clearly pointless to do so.
AFAICS autovacuum does not do this, or did I miss that?
It seems easy to add (another, groan) column onto p
"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> I thought I could also perform a test on CVS head every month from
> December 2006 to now to see if it can give us a better idea of when
> the overhead first appeared. Ping me if you're interested in it.
If you feel like doing that, it might be intere
On Nov 22, 2007 12:45 PM, Guillaume Smet <[EMAIL PROTECTED]> wrote:
> I thought I could also perform a test on CVS head every month from
> December 2006 to now to see if it can give us a better idea of when
> the overhead first appeared. Ping me if you're interested in it.
If I recall correctly, I
Tom,
On Nov 22, 2007 10:29 AM, Simon Riggs <[EMAIL PROTECTED]> wrote:
> Sounds comprehensive, thanks for double checking.
>
> Would it be possible to do these tests?
>
>
Do you want me to perform additional tests or are you pretty sure of
what the problem is?
I thought I could also perform a te
On Nov 22, 2007 5:00 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Gregory Stark <[EMAIL PROTECTED]> writes:
> > Out of curiosity have you recompiled 8.2.5 recently? That is, are they
> > compiled with the same version of gcc?
>
> CVS tip of both branches, freshly compiled for this test.
And in my cas
Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> "53.3. Database Page Layout"
> in table 53-2,
> "ItemPointerData Array of (offset,length) pairs pointing to the actual
> items. 4 bytes per item."
> the explanation should be for ItemIdData, not for ItemPointerData, I think.
Yeah, you're right.
Gregory Stark <[EMAIL PROTECTED]> writes:
> "Tom Lane" <[EMAIL PROTECTED]> writes:
>> The weird thing is that after a couple of hours of poking at it with
>> oprofile and other sharp objects, I have no idea *why* it's slower.
>> oprofile shows just about the same relative percentages for all the
>>
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2007-11-22 at 10:34 +0100, Zeugswetter Andreas ADI SD wrote:
>> On further reflection I think that initdb time is probably sufficient.
>> Do you think that would be a reasonable TODO ?
> I think you'd have to explain why this was needed. It was use
"Guillaume Smet" <[EMAIL PROTECTED]> writes:
> On Nov 22, 2007 6:44 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Are you examining only "trivial" queries? I've been able to identify a
>> couple of new planner hotspots that could explain some slowdown if the
>> planning time is material compared to t
cinu wrote:
> Hi All,
> I was exploring through the BuildFarm-specific perl
> script run_build.pl.
> I executed the perl script and it went on fine by
> downloading the latest PostgreSQL source code, that is
> 8.3Beta2; after successful completion of the script it
> creates the required set of logfiles in
I've been hacking on the idea of an Append node which maintains the ordering
of its subtables merging their records in order. This is important for
partitioned tables since otherwise a lot of plans are not available such as
merge joins.
The logic I've followed is to do as follows:
1) Go through
Hi,
"53.3. Database Page Layout"
in table 53-2,
"ItemPointerData  Array of (offset,length) pairs pointing to the actual
items. 4 bytes per item."
the explanation should be for ItemIdData, not for ItemPointerData, I think.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
On 22/11/2007, Peter Eisentraut <[EMAIL PROTECTED]> wrote:
> Pavel Stehule wrote:
> > I am playing with methods. It's +/- function with first hidden arguments.
> >
> > example: sin(10) ~ (10).sin() is equivalent.
> > legal is substring('',1,3).upper() too etc
> >
> > I spent some time with bis
On Thu, 2007-11-22 at 10:34 +0100, Zeugswetter Andreas ADI SD wrote:
> > I don't think that should even be a TODO item --- it seems far more
> > likely to provide a foot-gun than useful capability.
>
> On further reflection I think that initdb time is probably sufficient.
> Do you think that would
"Tom Lane" <[EMAIL PROTECTED]> writes:
> The weird thing is that after a couple of hours of poking at it with
> oprofile and other sharp objects, I have no idea *why* it's slower.
> oprofile shows just about the same relative percentages for all the
> hot-spot functions in the backend.
Out of cu
> I don't think that should even be a TODO item --- it seems far more
> likely to provide a foot-gun than useful capability.
On further reflection I think that initdb time is probably sufficient.
Do you think that would be a reasonable TODO ?
> Whether 16MB is still a reasonable default segment
On Thu, 2007-11-22 at 00:30 +0100, Guillaume Smet wrote:
> > Is the data identical on both systems?
Guillaume,
Sounds comprehensive, thanks for double checking.
Would it be possible to do these tests?
1. Compare SELECT 1;
This will allow us to remove planner and indexscan overheads from
resul
> > > Perhaps we should move the successful archived message to DEBUG1 now,
> > > except for the first message after the archiver starts or when the
> > > archive_command changes, plus one message every 255 segments?
> > > That would reduce the log volume in the normal case without endangering
>