On 17.05.2012 06:43, Robert Haas wrote:
The attached patch provides some rough instrumentation for determining
where palloc calls are coming from. This is obviously just for
noodling around with, not for commit, and there may well be bugs. But
enjoy.
I gave this a quick spin on a couple of
AMD FX 8120 / centos 6.2 / latest source (git head)
It seems to be quite easy to force a 'sync' replica to not be equal to the master by
recreating+loading a table in a while loop.
For this test I compiled+checked+installed three separate instances on the same
machine. The
replica
On Thu, May 17, 2012 at 6:08 AM, Joshua Berkus j...@agliodbs.com wrote:
As you can see, the indexonlyscan version of the query spends 5% as much time
reading the data as the seq scan version, and doesn't have to read the heap
at all. Yet it spends 20 seconds doing ... what, exactly?
BTW,
Ants,
Well, that's somewhat better, but again hardly the gain in performance I'd
expect to see ... especially since this is ideal circumstances for index-only
scan.
bench2=# select count(*) from pgbench_accounts;
 count
-------
  2000
(1 row)
Time: 3827.508 ms
bench2=# set
On Wed, May 16, 2012 at 11:38 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Well, that is not surprising in itself -- InitTempTableNamespace calls
RemoveTempRelations to clean up after a possibly crashed previous backend
with the same ID. So that part of the backtrace looks normal to me
On Thu, May 17, 2012 at 2:28 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
What percentage of total CPU usage is the palloc() overhead in these tests?
If we could totally eliminate the palloc() overhead, how much faster would
the test run?
AllocSetAlloc is often the top CPU
Erik,
Are you taking the counts *while* the table is loading? In sync replication,
it's possible for the counts to differ for a short time due to one of three
things:
* transaction has been saved to the replica and confirm message hasn't reached
the master yet
* replica has synched the
Jim, Fujii,
Even more fun:
1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
standby_mode = on )
2) Connect the server to *itself* as a replica.
3) This will work and report success, up until you do your first write.
4) Then ... segfault!
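Step 2's misconfiguration can be sketched as a recovery.conf along these lines (host, port, and user name are illustrative):

```
# recovery.conf on the standby -- but this instance *is* the server
# listening on port 5432, so it would be streaming from itself
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=repluser'
```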
- Original Message -
Robert Haas robertmh...@gmail.com writes:
One piece of reasonably low-hanging fruit appears to be OpExpr. It
seems like it would be better all around to put Node *arg1 and Node
*arg2 in there instead of a list... aside from saving pallocs, it
seems like it would generally simplify the code.
On May16, 2012, at 15:51 , Tom Lane wrote:
Alvaro Herrera alvhe...@alvh.no-ip.org writes:
We just came across a situation where a corrupted HFS+ filesystem
appears to return ERANGE on a customer machine. Our first reaction was
to turn zero_damaged_pages on to allow taking a pg_dump backup of
On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus j...@agliodbs.com wrote:
Even more fun:
1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
standby_mode = on )
2) Connect the server to *itself* as a replica.
3) This will work and report success, up until you do your first
On Thu, May 17, 2012 14:32, Joshua Berkus wrote:
Erik,
Are you taking the counts *while* the table is loading? In sync replication,
it's possible for
the counts to differ for a short time due to one of three things:
* transaction has been saved to the replica and confirm message hasn't
will investigate that
Tom Lane wrote:
Teodor Sigaevteo...@sigaev.ru writes:
After editing a query with an external editor, psql exits on Ctrl-C:
FWIW, I failed to reproduce that on any of my machines. Maybe
your editor is leaving the tty in a funny state?
regards, tom
On Thu, May 17, 2012 at 4:53 PM, Erik Rijkers e...@xs4all.nl wrote:
The count(*) was done in the way that I showed, i.e. *after* psql had exited.
My understanding is
that, with synchronous replication 'on' and configured properly, psql could
only return *after*
the sync-replica had the
On Thu, May 17, 2012 16:10, Ants Aasma wrote:
On Thu, May 17, 2012 at 4:53 PM, Erik Rijkers e...@xs4all.nl wrote:
The count(*) was done in the way that I showed, i.e. *after* psql had
exited. My understanding
is
that, with synchronous replication 'on' and configured properly, psql could
Hitoshi Harada wrote:
On Wed, May 16, 2012 at 12:50 AM, Volker Grabsch v...@notjusthosting.com
wrote:
I propose the following general optimization: If all window
functions are partitioned by the same first field (here: id),
then any filter on that field should be executed before
* Robert Haas (robertmh...@gmail.com) wrote:
So I guess the first question here is - does it improve performance?
Because if it does, then it's worth pursuing ... if not, that's the
first thing to fix.
Alright, so I've done some pgbench runs using all default configs with just
a straight up
On Thu, May 17, 2012 at 5:22 AM, Joshua Berkus j...@agliodbs.com wrote:
Ants,
Well, that's somewhat better, but again hardly the gain in performance I'd
expect to see ... especially since this is ideal circumstances for index-only
scan.
bench2=# select count(*) from pgbench_accounts;
On Thu, May 17, 2012 at 12:01 PM, Joshua Berkus j...@agliodbs.com wrote:
And: if we still have to ship logs, what's the point in even having
cascading replication?
At least cascading replication (1) allows you to adopt more flexible
configuration of servers,
I'm just pretty shocked. The
On Thu, May 17, 2012 at 10:42 PM, Ants Aasma a...@cybertec.at wrote:
On Thu, May 17, 2012 at 3:42 PM, Joshua Berkus j...@agliodbs.com wrote:
Even more fun:
1) Set up a server as a cascading replica (e.g. max_wal_senders = 3,
standby_mode = on )
2) Connect the server to *itself* as a
2012/5/17 Volker Grabsch v...@notjusthosting.com:
Also, is there any chance to include a (simple) attempt of
such an optimization into PostgreSQL-9.2 beta, or is this
only a possible topic for 9.3 and later?
For 9.2, you’re about 4 months late :-). The last commitfest was in January:
Jeff,
That's in-RAM speed ... I ran the query twice to make sure the index was
cached, and it didn't get any better. And I meant 5X per byte rather than 5X
per tuple.
I talked this over with Haas, and his opinion is that we have a LOT of overhead
in the way we traverse indexes, especially
On Thu, May 17, 2012 at 11:35 AM, Joshua Berkus j...@agliodbs.com wrote:
Jeff,
That's in-RAM speed ... I ran the query twice to make sure the index was
cached, and it didn't get any better. And I meant 5X per byte rather than 5X
per tuple.
Ah, OK that makes more sense. I played around
Yeah, I don't know how I produced the crash in the first place, because of
course the self-replica should block all writes, and on retesting I can't get
it to accept a write.
So the bug is just that you can connect a server to itself as its own
FWIW, I failed to reproduce that on any of my machines. Maybe
your editor is leaving the tty in a funny state?
It seems the system() call resets the sigaction state on FreeBSD. I've modified
void
setup_cancel_handler(void)
{
fprintf(stderr, "%p - %p\n", pqsignal(SIGINT, handle_sigint),
handle_sigint);