On Tue, Mar 13, 2012 at 1:53 PM, Robert Haas wrote:
> What I mean is that the function ArchiveEntry() is defined in
> pg_backup_archiver.c, and it takes an argument called relpages, and
> the string "relpages" does not appear anywhere else in that file.
Uhm, that's kinda concerning, isn't it... f
se+sync: ~ 1.3s
Patch attached.
Now I feel much better about it. Most people will either have fadvise, a
write cache (rightly or wrongly), or actually need the sync. Those that
have none of those can use -N.
Regards,
Jeff Davis
initdb-fsync-20120313.patch.gz
Description: GNU Zip
On 13-03-2012 21:34, Bruce Momjian wrote:
> It might be a solution for cases where we don't modify it. I frankly am
> worried that if we copy over statistics even in ASCII that don't match
> what the server expects, it might lead to a crash, which has me back to
> wanting to speed up vacuumdb.
>
On Tue, Mar 13, 2012 at 01:46:24PM -0400, Robert Haas wrote:
> On Mon, Mar 12, 2012 at 3:28 PM, Simon Riggs wrote:
> > I agree with you that some worst case performance tests should be
> > done. Could you please say what you think the worst cases would be, so
> > those can be tested? That would av
On Tue, Mar 13, 2012 at 5:06 PM, Andres Freund wrote:
> Generally, upon rereading, I have to say that I am not very happy with the
> decision that ANY triggers are fired from places other than the specific
> triggers. That seems to be a rather dangerous/confusing route to me.
I agree. I think th
On Tue, Mar 13, 2012 at 6:44 PM, Josh Berkus wrote:
>> That's a speedup of nearly a factor of two, so clearly fsync-related
>> stalls are a big problem here, even with wal_buffers cranked up
>> through the ceiling.
>
> H. Do you have any ability to test on XFS?
It seems I do.
XFS, with fsy
On Tue, Mar 13, 2012 at 7:05 PM, Fujii Masao wrote:
> If it's really a high-UPDATE workload, wouldn't autovacuum start soon?
Also, while vacuum cleanup records are applied, could not the standby
also update its free space map, without having to send the actual FSM
updates? I guess that's boggin
I have created a 29GB test database by using standard pgbench -i -s
2000. Then I executed:
explain (analyze, buffers) select * from pgbench_accounts where aid in
(select cast(random()*2 as int) from generate_series(1,500));
Nested Loop (cost=30.00..6075.07 rows=1 width=97) (actu
On Tue, Mar 13, 2012 at 10:02 PM, Fujii Masao wrote:
> On Wed, Mar 14, 2012 at 7:20 AM, Robert Haas wrote:
>> On Tue, Mar 13, 2012 at 3:48 PM, Robert Haas wrote:
>>> On Mon, Mar 12, 2012 at 4:45 PM, Jeff Janes wrote:
Rerunning all 4 benchmarks (both 16MB and 32MB wal_buffers on both
m
On Wed, Mar 14, 2012 at 8:53 AM, Josh Berkus wrote:
> All,
>
> I've discovered a built-in performance issue with replication failover
> at one site, which I couldn't find searching the archives. I don't
> really see what we can do to fix it, so I'm posting it here in case
> others might have clev
On Wed, Mar 14, 2012 at 7:20 AM, Robert Haas wrote:
> On Tue, Mar 13, 2012 at 3:48 PM, Robert Haas wrote:
>> On Mon, Mar 12, 2012 at 4:45 PM, Jeff Janes wrote:
>>> Rerunning all 4 benchmarks (both 16MB and 32MB wal_buffers on both
>>> machines) with fsync=off (as well as synchronous_commit=off s
On Tue, Mar 13, 2012 at 4:53 PM, Josh Berkus wrote:
> All,
>
> I've discovered a built-in performance issue with replication failover
> at one site, which I couldn't find searching the archives. I don't
> really see what we can do to fix it, so I'm posting it here in case
> others might have clev
On Tue, Mar 13, 2012 at 08:22:51PM -0400, Bruce Momjian wrote:
> On Tue, Mar 13, 2012 at 05:33:29PM -0500, Kevin Grittner wrote:
> > Bruce Momjian wrote:
> >
> > > What is the target=10 duration? I think 10 is as low as we can
> > > acceptably recommend. Should we recommend they run vacuumdb
>
On Tue, Mar 13, 2012 at 08:30:17PM -0400, Tom Lane wrote:
> Bruce Momjian writes:
> > Another idea is to just copy over pg_statistic like we copy over
> > pg_largeobject now, and force autovacuum to run.
>
> That would be an automatic crash in a 9.1 to 9.2 migration, as well as
> any other release
Bruce Momjian writes:
> Another idea is to just copy over pg_statistic like we copy over
> pg_largeobject now, and force autovacuum to run.
That would be an automatic crash in a 9.1 to 9.2 migration, as well as
any other release where we changed the column layout of pg_statistic.
On Tue, Mar 13, 2012 at 05:33:29PM -0500, Kevin Grittner wrote:
> Bruce Momjian wrote:
>
> > What is the target=10 duration? I think 10 is as low as we can
> > acceptably recommend. Should we recommend they run vacuumdb
> > twice, once with default_statistics_target = 4, and another with
> > t
All,
I've discovered a built-in performance issue with replication failover
at one site, which I couldn't find searching the archives. I don't
really see what we can do to fix it, so I'm posting it here in case
others might have clever ideas.
1. The Free Space Map is not replicated between serve
On 03/13/2012 06:30 PM, Robert Haas wrote:
On Tue, Mar 13, 2012 at 5:42 PM, Bruce Momjian wrote:
What is the target=10 duration? I think 10 is as low as we can
acceptably recommend. Should we recommend they run vacuumdb twice, once
with default_statistics_target = 4, and another with the de
On Tue, Mar 13, 2012 at 3:30 PM, Robert Haas wrote:
> On Tue, Mar 13, 2012 at 5:42 PM, Bruce Momjian wrote:
>> What is the target=10 duration? I think 10 is as low as we can
>> acceptably recommend. Should we recommend they run vacuumdb twice, once
>> with default_statistics_target = 4, and ano
Jaime Casanova wrote:
> Has anyone ever complained about the inconsistency (at least it
> seems that way to me) of using INHERITS in CREATE TABLE and INHERIT in
> ALTER TABLE?
They make sense to me as an English speaker. CREATE TABLE more or
less has a *description* of the table to be created, wh
> That's a speedup of nearly a factor of two, so clearly fsync-related
> stalls are a big problem here, even with wal_buffers cranked up
> through the ceiling.
H. Do you have any ability to test on XFS?
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Robert Haas wrote:
> I'm not sure why we're so glibly rejecting Dan's original
> proposal. Sure, adjusting pg_upgrade when we whack around
> pg_statistic is work, but who ever said that a workable in-place
> upgrade facility would be maintenance-free? We're operating under
> a number of restri
Hi,
Has anyone ever complained about the inconsistency (at least it seems
that way to me) of using INHERITS in CREATE TABLE and INHERIT in ALTER
TABLE?
Does anyone besides me think we should fix that? Maybe support both
versions in both commands, or choose one and use it everywhere?
--
Jaime Casanova
Bruce Momjian wrote:
> What is the target=10 duration? I think 10 is as low as we can
> acceptably recommend. Should we recommend they run vacuumdb
> twice, once with default_statistics_target = 4, and another with
> the default?
Here are the results at various settings.
1 : 172198.892
On Tue, Mar 13, 2012 at 5:42 PM, Bruce Momjian wrote:
> What is the target=10 duration? I think 10 is as low as we can
> acceptably recommend. Should we recommend they run vacuumdb twice, once
> with default_statistics_target = 4, and another with the default?
I'm not sure why we're so glibly r
On Tue, Mar 13, 2012 at 4:55 AM, Andres Freund wrote:
> On Tuesday, March 13, 2012 03:26:34 AM Robert Haas wrote:
>> Meanwhile, here are some TPS graphs at 16MB and 32MB on the IBM POWER7
>> machine. 32 clients, 1800 seconds, scale factor 300, synchronous
>> commit off.
> That graph makes me crin
On Tue, Mar 13, 2012 at 3:48 PM, Robert Haas wrote:
> On Mon, Mar 12, 2012 at 4:45 PM, Jeff Janes wrote:
>> Rerunning all 4 benchmarks (both 16MB and 32MB wal_buffers on both
>> machines) with fsync=off (as well as synchronous_commit=off still)
>> might help clarify things.
>
> I reran the 32-cli
On Tue, Mar 13, 2012 at 03:29:22PM -0500, Kevin Grittner wrote:
> I went even lower than you suggested:
>
> set default_statistics_target = 4;
>
> And it was much faster, but still more time than the pg_upgrade run
> itself:
>
> cir=# analyze;
> ANALYZE
> Time: 474319.826 ms
>
> A little un
On Tue, Mar 13, 2012 at 09:28:33PM +, Greg Stark wrote:
> hmph. One thing that could speed up analyze on raid arrays would be
> doing prefetching so more than one spindle can be busy. Sacrificing
> statistical accuracy by reading a less random sample on contiguous
> blocks of rows would also be
On Tue, Mar 13, 2012 at 7:30 PM, Bruce Momjian wrote:
> OK, so a single 44GB tables took 2.5 minutes to analyze; that is not
> good. It would require 11 such tables to reach 500GB (0.5 TB), and
> would take 27 minutes. The report I had was twice as long, but still in
> the ballpark of "too long
Peter Eisentraut writes:
> On lör, 2012-03-10 at 18:47 -0500, Tom Lane wrote:
>> It appears to me that null-ness of tzp and tzn are used as a 3-way flag
>> to identify the style of timezone output wanted (none, numeric, or alpha).
> It's not quite a three-way flag, because it also depends on the
On Tuesday, March 13, 2012 09:07:32 PM Dimitri Fontaine wrote:
> Hi,
>
> Andres Freund writes:
> > I did a short review of what I found after merging master
>
> Thanks!
>
> > - I still find it strange not to fire on cascading actions
>
> We don't build statements for cascading actions, so we don't fire
Bruce Momjian wrote:
> OK, good idea. Kevin, can you test this:
>
> PGOPTIONS='-c default_statistics_target=10' vacuumdb --all
> --analyze-only
>
> Is it faster? Thanks.
Well, I just did something similar in psql -- I disabled the delays
by:
set vacuum_cost_delay = 0;
I checked f
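The low-target-first idea being discussed here can be sketched as a two-pass session. This is only a sketch, not a tested recommendation; the target value of 10 is the one floated in the thread, and whether two passes pay off depends on the site:

```sql
-- Pass 1: coarse but fast statistics, with the cost delay disabled
-- (Kevin's workaround above) so the planner has something to work with
-- immediately after the upgrade.
SET vacuum_cost_delay = 0;
SET default_statistics_target = 10;  -- low target: fast, less accurate
ANALYZE;

-- Pass 2: re-analyze at the normal target for full-accuracy statistics.
RESET default_statistics_target;
ANALYZE;
```

The same effect can be had from the shell via `PGOPTIONS='-c default_statistics_target=10' vacuumdb --all --analyze-only`, as Bruce suggests above, followed by a plain `vacuumdb --all --analyze-only`.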
On Tue, Mar 13, 2012 at 10:10:02PM +0200, Peter Eisentraut wrote:
> On tis, 2012-03-13 at 15:44 -0400, Tom Lane wrote:
> > I wonder whether it'd be worth recommending that people do an initial
> > ANALYZE with a low stats target, just to get some stats in place,
> > and then go back to analyze at w
Bruce Momjian wrote:
> OK, so a single 44GB tables took 2.5 minutes to analyze; that is
> not good. It would require 11 such tables to reach 500GB (0.5
> TB), and would take 27 minutes. The report I had was twice as
> long, but still in the ballpark of "too long". :-(
But it's really 600 t
On tis, 2012-03-13 at 15:44 -0400, Tom Lane wrote:
> I wonder whether it'd be worth recommending that people do an initial
> ANALYZE with a low stats target, just to get some stats in place,
> and then go back to analyze at whatever their normal setting is.
Perhaps going even further, ANALYZE coul
Hi,
Andres Freund writes:
> I did a short review of what I found after merging master
Thanks!
> - I still find it strange not to fire on cascading actions
We don't build statements for cascading actions, so we don't fire command
triggers. The user's view is that there was no drop command on the
sub-object
Peter Geoghegan writes:
> I probably should have exposed the query_id directly in the
> pg_stat_statements view, perhaps as "query_hash".
FWIW, I think that's a pretty bad idea; the hash seems to me to be
strictly an internal matter. Given the sponginess of its definition
I don't really want it
Bruce Momjian wrote:
cir=# analyze "CaseHist";
ANALYZE
Time: 143450.467 ms
> OK, so a single 44GB tables took 2.5 minutes to analyze; that is
> not good. It would require 11 such tables to reach 500GB (0.5
> TB), and would take 27 minutes. The report I had was twice as
> long
On Mon, Mar 12, 2012 at 4:45 PM, Jeff Janes wrote:
> Rerunning all 4 benchmarks (both 16MB and 32MB wal_buffers on both
> machines) with fsync=off (as well as synchronous_commit=off still)
> might help clarify things.
I reran the 32-client benchmark on the IBM machine with fsync=off and got this:
Greg Stark writes:
> On Tue, Mar 13, 2012 at 1:38 AM, Daniel Farina wrote:
>> You probably are going to ask: "why not just run ANALYZE and be done
>> with it?"
> Uhm yes. If analyze takes a long time then something is broken. It's
> only reading a sample which should be pretty much a fixed numbe
On lör, 2012-03-10 at 18:47 -0500, Tom Lane wrote:
> > void EncodeDateTime(struct pg_tm * tm, fsec_t fsec, const int *tzp, const
> > char *tzn, int style, char *str)
>
> It appears to me that null-ness of tzp and tzn are used as a 3-way flag
> to identify the style of timezone output wanted (none
On Tue, Mar 13, 2012 at 02:07:14PM -0500, Kevin Grittner wrote:
> Bruce Momjian wrote:
> > On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:
>
> >> cir=# analyze "CaseHist";
> >> ANALYZE
> >> Time: 143450.467 ms
> >> cir=# select relpages, reltuples from pg_class where relname =
>
Jan Urbański writes:
> Here are the updated patches which use PLy_elog instead of plain elog.
> The difference is that they will get marked for translation and that the
> original Python exception will show up in the errdetail field.
Applied, thanks.
r
"Kevin Grittner" wrote:
> Bruce Momjian wrote:
>> That is 2.5 minutes. How large is that database?
I dug around a little and found that we had turned on vacuum cost
limits on the central databases, because otherwise the web team
complained about performance during maintenance windows. On th
On Tue, Mar 13, 2012 at 2:56 PM, Bruce Momjian wrote:
> Is this a TODO?
If you're going to create one, link to Álvaro's message. But I'm not
sure we should without a better sense of what we actually want to do,
from the options he laid out.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.
On Tue, Mar 13, 2012 at 9:59 AM, Kevin Grittner wrote:
> Tom Lane wrote:
>
>> (I'm also unconvinced that sorting by relation size is a good idea
>> anyway. Anything that makes the dump order less predictable gets
>> push-back, IME.)
>
> Given that people often use diff on files from pg_dump,
> u
Bruce Momjian wrote:
> On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:
>> cir=# analyze "CaseHist";
>> ANALYZE
>> Time: 143450.467 ms
>> cir=# select relpages, reltuples from pg_class where relname =
>> 'CaseHist';
>> relpages | reltuples
>> --+-
>> 3588
On 2 March 2012 20:10, Tom Lane wrote:
> I do intend to take this one up in due course
I probably should have exposed the query_id directly in the
pg_stat_statements view, perhaps as "query_hash". The idea of that
would be to advertise the potential non-uniqueness of the value - a
collision is *e
On Wed, Feb 29, 2012 at 01:27:43PM -0500, Robert Haas wrote:
> On Wed, Feb 29, 2012 at 12:20 PM, Thom Brown wrote:
> > On 29 February 2012 17:16, Tom Lane wrote:
> >> Thom Brown writes:
> >>> So could we introduce either a command to show which objects are owned
> >>> by a particular role, or al
On Tue, Mar 13, 2012 at 01:18:58PM -0500, Kevin Grittner wrote:
> Greg Stark wrote:
> > Daniel Farina wrote:
> >> You probably are going to ask: "why not just run ANALYZE and be
> >> done with it?"
> >
> > Uhm yes. If analyze takes a long time then something is broken.
> > It's only reading a sa
On Tue, Mar 13, 2012 at 05:46:06PM +, Greg Stark wrote:
> On Tue, Mar 13, 2012 at 1:38 AM, Daniel Farina wrote:
> > You probably are going to ask: "why not just run ANALYZE and be done
> > with it?"
>
> Uhm yes. If analyze takes a long time then something is broken. It's
> only reading a samp
On sön, 2012-02-19 at 20:23 +0100, Pavel Stehule wrote:
> other very simple patch - enhance autocomplete to support CREATE OR
> REPLACE FUNCTION statement
I see two problems with this:
- We also have other CREATE OR REPLACE variants that this won't
complete.
- It won't complete something like CR
Greg Stark wrote:
> Daniel Farina wrote:
>> You probably are going to ask: "why not just run ANALYZE and be
>> done with it?"
>
> Uhm yes. If analyze takes a long time then something is broken.
> It's only reading a sample which should be pretty much a fixed
> number of pages per table. It shoul
On 03/13/2012 01:53 PM, Robert Haas wrote:
I tried this actually (patch attached) but then I wanted to test it
and couldn't find anything that used pgpipe() on Windows.
pg_basebackup/pg_basebackup.c is using it but it's in an #ifndef WIN32
and the same is true for postmaster/syslogger.c. Am I
On Mon, Mar 12, 2012 at 11:35 PM, Joachim Wieland wrote:
> How do you mean it's unused? pg_dump_sort.c uses relpages to dump the
> largest tables first. What you don't want to see in a parallel dump is
> a worker starting to dump a large table while everybody else is
> already idle...
What I mean
On Tue, Mar 13, 2012 at 1:38 AM, Daniel Farina wrote:
> You probably are going to ask: "why not just run ANALYZE and be done
> with it?"
Uhm yes. If analyze takes a long time then something is broken. It's
only reading a sample which should be pretty much a fixed number of
pages per table. It sho
On Mon, Mar 12, 2012 at 3:28 PM, Simon Riggs wrote:
> I agree with you that some worst case performance tests should be
> done. Could you please say what you think the worst cases would be, so
> those can be tested? That would avoid wasting time or getting anything
> backwards.
I've thought about
Excerpts from Bruce Momjian's message of Tue Mar 13 14:00:52 -0300 2012:
>
> On Tue, Mar 06, 2012 at 04:39:32PM -0300, Alvaro Herrera wrote:
> > When there is a single locker in a tuple, we can just store the locking info
> > in the tuple itself. We do this by storing the locker's Xid in XMAX,
Jan Urbański writes:
> I came up with a stack of context structures that gets pushed when a
> PL/Python starts being executed and popped when it returns. At first
> they contained just a scratch memory context used by PLyDict_FromTuple.
> Then under the premise of confirming th
On Mon, Feb 27, 2012 at 05:26:04PM +0100, Magnus Hagander wrote:
> On Mon, Feb 27, 2012 at 16:20, Tom Lane wrote:
> > Magnus Hagander writes:
> >> On Mon, Feb 27, 2012 at 04:37, Robert Haas wrote:
> >>> Why not change the default? Does anyone really prefer the bare bones
> >>> doc output?
> >
>
On Mon, Mar 12, 2012 at 9:24 PM, Noah Misch wrote:
> When we lock an update-in-progress row, we walk the t_ctid chain and lock all
> descendant tuples. They may all have uncommitted xmins. This is essential to
> ensure that the final outcome of the updating transaction does not affect
> whether
On Tue, Mar 06, 2012 at 04:39:32PM -0300, Alvaro Herrera wrote:
> Here's a first attempt at a README illustrating this. I intend this to
> be placed in src/backend/access/heap/README.tuplock; the first three
> paragraphs are stolen from the comment in heap_lock_tuple, so I'd remove
> those from th
Folks,
This is for 9.3, of course.
I noticed that CREATE FOREIGN TABLE (LIKE some_table) doesn't work. I
believe it should, as it would:
- Remove a POLA violation
- Make data loading into an extant table even easier, especially if
there need to be filtering or other cleanup steps
Come to thi
On Tue, Mar 13, 2012 at 12:08:41PM -0300, Alvaro Herrera wrote:
> > > You're wrong. Autovacuum does not consider time, only dead/live tuple
> > > counts. The formulas it uses are in the autovacuum docs; some details
> > > (such as the fact that it skips tables that do not have stat entries)
> > >
On Tue, Mar 13, 2012 at 09:28:58AM -0500, Kevin Grittner wrote:
> Bruce Momjian wrote:
>
> > I just received a sobering blog comment stating that pg_upgrade
> > took 5 minutes on a 0.5TB database, but analyze took over an hour:
>
> Yeah, we have had similar experiences. Even if this can't be
Excerpts from Andres Freund's message of Tue Mar 13 08:22:26 -0300 2012:
> - I think list_command_triggers should do a heap_lock_tuple(LockTupleShared)
> on the command trigger tuple. But then again just about nothing else does :(
If you want to do something like that, I think it's probably mor
amit sehas wrote:
> If we have a query of the form:
>
> Select *, (Select * FROM T2 WHERE p2 = T1.p1) FROM T1 ORDER
> BY 1 WHERE p3 = 75
>
> In SQL, if a subselect is present in the column-list, is the
> result set of this subselect considered to be a part of a
> single tuple returned from the o
Excerpts from Bruce Momjian's message of Tue Mar 13 11:49:26 -0300 2012:
>
> On Tue, Mar 13, 2012 at 11:34:16AM -0300, Alvaro Herrera wrote:
> >
> > Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:
> > > On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:
> > >
On Tue, Mar 13, 2012 at 11:34:16AM -0300, Alvaro Herrera wrote:
>
> Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:
> > On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:
> > > On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian wrote:
> > > > To answer your specif
Excerpts from Bruce Momjian's message of Tue Mar 13 11:14:43 -0300 2012:
> On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:
> > On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian wrote:
> > > To answer your specific question, I think clearing the last analyzed
> > > fields should cause
Bruce Momjian wrote:
> I just received a sobering blog comment stating that pg_upgrade
> took 5 minutes on a 0.5TB database, but analyze took over an hour:
Yeah, we have had similar experiences. Even if this can't be done
for every release or for every data type, bringing over statistics
from
Joel Jacobson writes:
> In situations when it's natural and makes sense to mix IN/OUT
> variables and columns names, it would be nice to being able to
> explicitly specifying you are referring to the IN or OUT variable with
> a specific name.
Can't you qualify them with the function name?
On Tue, Mar 13, 2012 at 12:33:09AM -0700, Daniel Farina wrote:
> On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian wrote:
> > To answer your specific question, I think clearing the last analyzed
> > fields should cause autovacuum to analyze those tables. What I
> > don't know is whether not c
Hello
you can use the function name as a qualifier
create or replace function fx(paramname type, ...)
returns ...
begin
SELECT INTO fx.paramname, ...
Regards
Pavel Stehule
2012/3/13 Joel Jacobson :
> The introduction of custom_variable_classes and #variable_conflict in
> 9.0 partly solves the prob
2012/3/12 Robert Haas :
> On Mon, Mar 12, 2012 at 12:30 PM, Kohei KaiGai wrote:
>>> Suppose that the connection starts out in context connection_pooler_t.
>>> Based on the identity of the user, we transition to foo_t, bar_t, or
>>> baz_t. If it's possible, by any method, for one of those context
On Tue, Mar 13, 2012 at 12:12:27AM -0400, Tom Lane wrote:
> Bruce Momjian writes:
> > Copying the statistics from the old server is on the pg_upgrade TODO
> > list. I have avoided it because it will add an additional requirement
> > that will make pg_upgrade more fragile in case of major version
The introduction of custom_variable_classes and #variable_conflict in
9.0 partly solves the problem with mixing IN/OUT variables with column
names.
In 8.4 and before, it defaulted to the IN/OUT variable, if it shared
the name of a column.
In 9.0 the behaviour was changed to raise an error if a vari
On Tuesday, March 13, 2012 02:48:11 PM Tom Lane wrote:
> (I'm also unconvinced that sorting by relation size is a good idea
> anyway. Anything that makes the dump order less predictable gets
> push-back, IME.)
Why? Especially in the directory format - which is a prerequisite for parallel
dump if
Tom Lane wrote:
> (I'm also unconvinced that sorting by relation size is a good idea
> anyway. Anything that makes the dump order less predictable gets
> push-back, IME.)
Given that people often use diff on files from pg_dump,
unpredictable ordering can be a bad thing. On the other hand, tha
On Mon, Mar 12, 2012 at 11:10 PM, Jaime Casanova wrote:
> On Mon, Mar 12, 2012 at 9:41 PM, Noah Misch wrote:
>>
>> I created a CF entry for this and marked it Ready for Committer.
>
> I wasn't sure if creating an entry this late was a good idea or not...
> but now I feel better because it's less prob
Joachim Wieland writes:
> On Sat, Mar 10, 2012 at 9:51 AM, Robert Haas wrote:
>> -const char *owner, bool withOids,
>> +const char *owner,
>> +unsigned long int relpages, bool withOids,
>>
>> The new argument to ArchiveEntry
On Mon, Mar 12, 2012 at 4:45 PM, Jeff Janes wrote:
> Rerunning all 4 benchmarks (both 16MB and 32MB wal_buffers on both
> machines) with fsync=off (as well as synchronous_commit=off still)
> might help clarify things.
> If it increases the TPS of Nate@16MB, but doesn't change the other 3
> situati
Hi,
I did a short review of what I found after merging master
(b4af1c25bbc636379efc5d2ffb9d420765705b8a) to what I currently fetched from
your repo (d63df64580114de4d83cfe8eb45eb630724b8b6f).
- I still find it strange not to fire on cascading actions
- I dislike the missing locking leading to s
(2012/03/13 15:53), Shigeru HANADA wrote:
> (2012/03/12 19:21), Etsuro Fujita wrote:
>> According to the following documentation on IterateForeignScan() in
>> 50.2. Foreign Data Wrapper Callback Routines, I have created a patch to
>> support the error handling in file_fdw. Please find attached a p
On Tuesday, March 13, 2012 03:26:34 AM Robert Haas wrote:
> Meanwhile, here are some TPS graphs at 16MB and 32MB on the IBM POWER7
> machine. 32 clients, 1800 seconds, scale factor 300, synchronous
> commit off.
That graph makes me cringe because its pretty representative of what I have
seen in p
On Tuesday, March 13, 2012 04:49:40 AM Jeff Davis wrote:
> On Sun, 2012-02-05 at 17:56 -0500, Noah Misch wrote:
> > I meant primarily to illustrate the need to be comprehensive, not comment
> > on which executable should fsync a particular file. Bootstrap-mode
> > backends do not sync anything dur
On Mon, Mar 12, 2012 at 9:12 PM, Tom Lane wrote:
> Bruce Momjian writes:
>> Copying the statistics from the old server is on the pg_upgrade TODO
>> list. I have avoided it because it will add an additional requirement
>> that will make pg_upgrade more fragile in case of major version changes.
>
On Mon, Mar 12, 2012 at 8:10 PM, Bruce Momjian wrote:
> To answer your specific question, I think clearing the last analyzed
> fields should cause autovacuum to analyze those tables. What I
> don't know is whether not clearing the last vacuum datetime will cause
> the table not to be analy
If we have a query of the form:
Select *, (Select * FROM T2 WHERE p2 = T1.p1) FROM T1 ORDER
BY 1 WHERE p3 = 75
In SQL, if a subselect is present in the column-list, is the
result set of this subselect considered to be a part of a
single tuple returned from the outer query, or does the result
set
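As quoted, the statement is not valid SQL: the WHERE clause must come before ORDER BY. A corrected sketch, keeping the question's hypothetical tables T1/T2 and columns p1/p2/p3:

```sql
SELECT T1.*,
       -- A subselect in the select list is a scalar subquery: its value
       -- is folded into the outer row as one additional column.  It must
       -- return at most one row and exactly one column per outer row,
       -- or an error is raised, so "SELECT *" is replaced by one column.
       (SELECT x.p2 FROM T2 AS x WHERE x.p2 = T1.p1) AS sub
FROM T1
WHERE T1.p3 = 75
ORDER BY 1;
```

So the answer to the question as posed: the subselect's result is part of the single tuple returned by the outer query, not a separate result set.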
On Tue, Mar 13, 2012 at 2:50 AM, Tom Lane wrote:
> Info appreciated.
Email seen, will reply when I can later today.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services