AMD FX 8120 / centos 6.2 / latest source (git head)
It seems to be quite easy to force a 'sync' replica to not be equal to master by
recreating+loading a table in a while loop.
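A minimal sketch of such a stress loop (hypothetical table name, database, and ports; assumes a primary on 6565 with a synchronous replica on 6566 — not Erik's actual commands):

```shell
# Hypothetical repro sketch: hammer the primary with table
# recreation + load, then compare row counts on both nodes.
while true; do
    psql -p 6565 -d bench -c "DROP TABLE IF EXISTS t;
                              CREATE TABLE t AS
                              SELECT i FROM generate_series(1, 10000) i;"
    m=$(psql -p 6565 -d bench -tAc "SELECT count(*) FROM t")
    r=$(psql -p 6566 -d bench -tAc "SELECT count(*) FROM t")
    [ "$m" = "$r" ] || echo "mismatch: master=$m replica=$r"
done
```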
For this test I compiled+checked+installed three separate instances on the same
machine. The
replica application_nam
On Thu, May 17, 2012 at 6:08 AM, Joshua Berkus wrote:
> As you can see, the indexonlyscan version of the query spends 5% as much time
> reading the data as the seq scan version, and doesn't have to read the heap
> at all. Yet it spends 20 seconds doing ... what, exactly?
>
> BTW, kudos on the n
Ants,
Well, that's somewhat better, but again hardly the gain in performance I'd
expect to see ... especially since this is ideal circumstances for index-only
scan.
bench2=# select count(*) from pgbench_accounts;
count
--
2000
(1 row)
Time: 3827.508 ms
bench2=# set enable_
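For reference, one way to confirm the scan really is index-only (a sketch; assumes the standard pgbench schema):

```sql
-- Hypothetical check: an index-only scan only skips the heap for
-- pages marked all-visible, so vacuum first to set the visibility map.
VACUUM ANALYZE pgbench_accounts;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM pgbench_accounts;
-- Look for an "Index Only Scan" node reporting "Heap Fetches: 0";
-- nonzero heap fetches mean the visibility map is stale.
```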
On Wed, May 16, 2012 at 11:38 PM, Alvaro Herrera
wrote:
> Well, that is not surprising in itself -- InitTempTableNamespace calls
> RemoveTempRelations to cleanup from a possibly crashed previous backend
> with the same ID. So that part of the backtrace looks normal to me
> (unless there is someth
On Thu, May 17, 2012 at 2:28 AM, Heikki Linnakangas
wrote:
> What percentage of total CPU usage is the palloc() overhead in these tests?
> If we could totally eliminate the palloc() overhead, how much faster would
> the test run?
AllocSetAlloc is often the top CPU consumer in profiling results, b
Erik,
Are you taking the counts *while* the table is loading? In sync replication,
it's possible for the counts to differ for a short time due to one of three
things:
* transaction has been saved to the replica and confirm message hasn't reached
the master yet
* replica has synched the transa
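One way to watch for those transient states is to poll the primary's replication view while the load runs (a sketch; columns as named in 9.1/9.2):

```sql
-- Shows how far the synchronous standby has received, written,
-- flushed, and replayed WAL, and whether it is currently sync.
SELECT application_name, state, sync_state,
       sent_location, write_location,
       flush_location, replay_location
FROM pg_stat_replication;
```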
Jim, Fujii,
Even more fun:
1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
standby_mode = on )
2) Connect the server to *itself* as a replica.
3) This will work and report success, up until you do your first write.
4) Then ... segfault!
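The misconfiguration in step 2 amounts to something like this (hypothetical port and user; a recovery.conf on the very server that is listening on 5432):

```
# recovery.conf pointing the server at itself
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=replicator'
```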
Robert Haas writes:
> One piece of reasonably low-hanging fruit appears to be OpExpr. It
> seems like it would be better all around to put Node *arg1 and Node
> *arg2 in there instead of a list... aside from saving pallocs, it
> seems like it would generally simplify the code.
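A rough sketch of the shape change being proposed (simplified, hypothetical definitions — not the actual nodes.h declarations):

```c
#include <stddef.h>

/* Simplified stand-ins for the real Node/List machinery. */
typedef struct Node { int type; } Node;
typedef struct ListCell { Node *ptr; struct ListCell *next; } ListCell;
typedef struct List { int length; ListCell *head; } List;

/* Today (simplified): operands live in a List, costing extra
 * pallocs for the List header plus one cell per argument. */
typedef struct OpExprCurrent {
    Node xpr;
    unsigned opno;   /* operator OID */
    List *args;
} OpExprCurrent;

/* Proposed shape: at most two operands stored inline. */
typedef struct OpExprProposed {
    Node xpr;
    unsigned opno;
    Node *arg1;
    Node *arg2;      /* NULL for a unary operator */
} OpExprProposed;

/* Builds a binary OpExpr in the proposed layout; no list cells needed. */
OpExprProposed make_binary_op(unsigned opno, Node *a, Node *b)
{
    OpExprProposed e;
    e.xpr.type = 0;
    e.opno = opno;
    e.arg1 = a;
    e.arg2 = b;
    return e;
}
```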
Obviously, Stephe
On May16, 2012, at 15:51 , Tom Lane wrote:
> Alvaro Herrera writes:
>> We just came across a situation where a corrupted HFS+ filesystem
>> appears to return ERANGE on a customer machine. Our first reaction was
>> to turn zero_damaged_pages on to allow taking a pg_dump backup of the
>> database,
On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus wrote:
> Even more fun:
>
> 1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
> standby_mode = on )
>
> 2) Connect the server to *itself* as a replica.
>
> 3) This will work and report success, up until you do your first write.
>
>
On Thu, May 17, 2012 14:32, Joshua Berkus wrote:
> Erik,
>
> Are you taking the counts *while* the table is loading? In sync replication,
> it's possible for
> the counts to differ for a short time due to one of three things:
>
> * transaction has been saved to the replica and confirm message has
I will investigate that.
Tom Lane wrote:
> Teodor Sigaev writes:
>> After editing a query with an external editor, psql exits on Ctrl-C:
> FWIW, I failed to reproduce that on any of my machines. Maybe
> your editor is leaving the tty in a funny state?
> regards, tom lane
--
Teodor Si
On Thu, May 17, 2012 at 4:53 PM, Erik Rijkers wrote:
> The count(*) was done in the way that I showed, i.e. *after* psql had exited.
> My understanding is
> that, with synchronous replication 'on' and configured properly, psql could
> only return *after*
> the sync-replica had the data safely o
On Thu, May 17, 2012 16:10, Ants Aasma wrote:
> On Thu, May 17, 2012 at 4:53 PM, Erik Rijkers wrote:
>> The count(*) was done in the way that I showed, i.e. *after* psql had
>> exited. My understanding
>> is
>> that, with synchronous replication 'on' and configured properly, psql could
>> only
Hitoshi Harada wrote:
> On Wed, May 16, 2012 at 12:50 AM, Volker Grabsch
> wrote:
> > I propose the following general optimization: If all window
> > functions are partitioned by the same first field (here: id),
> > then any filter on that field should be executed before
> > WindowAgg. So a que
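An illustration of the kind of query the proposal targets (hypothetical table and column names):

```sql
-- All window functions are partitioned by id, so rows with
-- id <> 42 cannot affect any surviving row's window result.
SELECT * FROM (
    SELECT id, ts,
           row_number() OVER (PARTITION BY id ORDER BY ts) AS rn
    FROM events
) s
WHERE id = 42;
-- The proposed optimization would push the id = 42 filter below
-- the WindowAgg node instead of applying it afterwards.
```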
* Robert Haas (robertmh...@gmail.com) wrote:
> So I guess the first question here is - does it improve performance?
>
> Because if it does, then it's worth pursuing ... if not, that's the
> first thing to fix.
Alright, so I've done some pgbench runs using all default configs with just
a straight up
On Thu, May 17, 2012 at 5:22 AM, Joshua Berkus wrote:
> Ants,
>
> Well, that's somewhat better, but again hardly the gain in performance I'd
> expect to see ... especially since this is ideal circumstances for index-only
> scan.
>
> bench2=# select count(*) from pgbench_accounts;
> count
>
On Thu, May 17, 2012 at 12:01 PM, Joshua Berkus wrote:
>
>> > And: if we still have to ship logs, what's the point in even having
>> > cascading replication?
>>
>> At least cascading replication (1) allows you to adopt more flexible
>> configuration of servers,
>
> I'm just pretty shocked. The la
On Thu, May 17, 2012 at 10:42 PM, Ants Aasma wrote:
> On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus wrote:
>> Even more fun:
>>
>> 1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
>> standby_mode = on )
>>
>> 2) Connect the server to *itself* as a replica.
>>
>> 3) This will
2012/5/17 Volker Grabsch :
> Also, is there any chance to include a (simple) attempt at
> such an optimization into PostgreSQL 9.2 beta, or is this
> only a possible topic for 9.3 and later?
For 9.2, you're about 4 months late :-). The last commitfest was in January:
https://commitfest.postgres
Jeff,
That's in-RAM speed ... I ran the query twice to make sure the index was
cached, and it didn't get any better. And I meant 5X per byte rather than 5X
per tuple.
I talked this over with Haas, and his opinion is that we have a LOT of overhead
in the way we traverse indexes, especially l
On Thu, May 17, 2012 at 11:35 AM, Joshua Berkus wrote:
> Jeff,
>
> That's in-RAM speed ... I ran the query twice to make sure the index was
> cached, and it didn't get any better. And I meant 5X per byte rather than 5X
> per tuple.
Ah, OK that makes more sense. I played around with this, spec
Yeah, I don't know how I produced the crash in the first place: the
self-replica should of course block all writes, and on retesting I can't get
it to accept one.
So the bug is just that you can connect a server to itself as its own replica.
FWIW, I failed to reproduce that on any of my machines. Maybe
your editor is leaving the tty in a funny state?
It seems the system() call cleans up the sigaction state on FreeBSD. I've modified

void
setup_cancel_handler(void)
{
	fprintf(stderr, "%p -> %p\n", pqsignal(SIGINT, handle_sigint),
			handle_sigint);
}