On Sun, Oct 28, 2012 at 9:40 PM, Claudio Freire klaussfre...@gmail.com wrote:
On Sun, Oct 28, 2012 at 12:15 PM, Karl Denninger k...@denninger.net wrote:
4. pg_start_backup('Upgrading') and rsync the master to the NEW slave
directory, excluding the config files (postgresql.conf, recovery.conf and
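That step looks roughly like this (paths and host names below are made up, adjust to your setup; this is a sketch of the usual 9.1-era base-backup dance, not a tested script):

```shell
# 1. Put the master into backup mode (run on the master):
psql -c "SELECT pg_start_backup('Upgrading');"

# 2. Copy the data directory to the new slave, excluding the config
#    files and transient state:
rsync -av --delete \
      --exclude postgresql.conf \
      --exclude recovery.conf \
      --exclude postmaster.pid \
      --exclude pg_xlog/ \
      /var/lib/pgsql/data/ newslave:/var/lib/pgsql/data/

# 3. Take the master out of backup mode:
psql -c "SELECT pg_stop_backup();"
```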
On Mon, Oct 29, 2012 at 7:41 AM, Matheus de Oliveira matioli.math...@gmail.com wrote:
I also think that's a good option for most cases, but not because it is
faster; in fact, if you count the whole process, it is slower. But the master
will be in backup state (between pg_start_backup and
On Mon, Oct 29, 2012 at 9:53 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Mon, Oct 29, 2012 at 7:41 AM, Matheus de Oliveira matioli.math...@gmail.com wrote:
Just for the record, we do this quite frequently in our pre-production
servers, since the network there is a lot slower and
On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira matioli.math...@gmail.com wrote:
If you have incremental backup, a restore_command in recovery.conf seems
better than running rsync again when the slave gets out of sync. Doesn't
it?
What do you mean?
Usually, when it falls out of sync
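For reference, the restore_command fallback being discussed looks something like this on the standby (a minimal 9.1 recovery.conf sketch; the conninfo and archive path are invented examples, not from the thread):

```conf
# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master port=5432 user=replicator'
# If the standby falls behind the WAL the master still retains,
# fetch the missing segments from the WAL archive instead of
# re-running a full rsync:
restore_command = 'cp /mnt/wal_archive/%f "%p"'
```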
On Mon, Oct 29, 2012 at 10:23 AM, Claudio Freire klaussfre...@gmail.com wrote:
On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira matioli.math...@gmail.com wrote:
If you have incremental backup, a restore_command in recovery.conf seems
better than running rsync again when the slave get
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay 48 WAL files
and connect to
On Mon, Oct 29, 2012 at 02:05:24PM +0100, Albe Laurenz wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took
On 10/28/2012 10:06 AM, Tom Lane wrote:
9.2 will only pick the right plan if that plan's estimated cost is a
good bit cheaper than the wrong parameterized plan.
Is it also possible that the planner differences between extended and
simple query mode caused this? That really bit us in the ass
Albe Laurenz wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay
Alvaro Herrera wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay
On 10/26/2012 04:08 PM, Tom Lane wrote:
So the bottom line is that this is a case where you need a lot of
resolution in the histogram. I'm not sure there's anything good
we can do to avoid that.
I kinda hoped it wouldn't be something like that. For the particularly
painful instance, it was
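When a case needs more histogram resolution, the usual knob is the per-column statistics target (table and column names here are placeholders, not from the thread; the default target is 100 and the maximum is 10000):

```sql
-- Give the planner finer histogram buckets for the problem column,
-- then refresh the statistics.
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE some_table;
```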
On 10/27/2012 10:49 PM, Віталій Тимчишин wrote:
It can be that some queries use a lot of work_mem, either because of
a high work_mem setting or because of a planner error. In that case, the
moment the query runs it will need memory that will later be returned and
become free. Usually this can be seen
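To spot that kind of query, a few postgresql.conf settings help (the values below are examples, not recommendations from the thread):

```conf
work_mem = 16MB                     # per sort/hash node, per backend
log_temp_files = 0                  # log every temp file spill, with its size
log_min_duration_statement = 1000   # log any statement slower than 1s
```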
On Mon, Oct 29, 2012 at 6:05 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took for
Hi, thanks for any help. I've tried to be thorough, but let me know if I should
provide more information.
A description of what you are trying to achieve and what results you expect:
I have a large (3 million row) table called tape that represents files,
which I join to a small (100 row)
Did you try to add an index on filesystem_id?
From: Woolcock, Sean sean.woolc...@emc.com
To: pgsql-performance@postgresql.org
Sent: Monday, October 29, 2012 6:41 PM
Subject: [PERFORM] Request for help with slow query
Hi,
Shaun Thomas wrote:
I know that current_date seems like an edge case, but I can't see
how getting the most recent activity for something is an uncommon
activity. Tip tracking is actually the most frequent pattern in the
systems I've seen.
Yeah, this has been a recurring problem with database
I thought that an index was implicitly created for foreign keys, but I see that
that's not true. I've just created one now and re-ran the query but it did not
change the query plan or run time.
Thanks,
Sean
From: salah jubeh [s_ju...@yahoo.com]
Sent:
On 10/29/2012 12:25 PM, Woolcock, Sean wrote:
I thought that an index was implicitly created for foreign keys, but I see that
that's not true. I've just created one now and re-ran the query but it did not
change the query plan or run time.
1. EXPLAIN ANALYZE, not EXPLAIN, please
Check to
On 10/29/2012 12:41 PM, Woolcock, Sean wrote:
An example query that's running slowly for me is:
select tape.volser,
tape.path,
tape.scratched,
tape.size,
extract(epoch from tape.last_write_date) as last_write_date,
As Shaun has indicated, there is no need for the join; also, as Joshua suggested, it
is good to upgrade your server. Also add indexes for your predicates and
foreign keys, and you will get the desired result.
Regards
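Concretely, for the query shown earlier that would mean something like the following (index names are my own; the join column filesystem_id and the scratched predicate come from earlier messages, so verify against EXPLAIN ANALYZE before and after):

```sql
CREATE INDEX tape_filesystem_id_idx ON tape (filesystem_id);
CREATE INDEX tape_scratched_idx     ON tape (scratched);
ANALYZE tape;
```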
From: Shaun Thomas stho...@optionshouse.com
I'm going to push for the upgrade and make the other suggested changes.
Thanks to all for the help,
Sean
From: salah jubeh [s_ju...@yahoo.com]
Sent: Monday, October 29, 2012 3:49 PM
To: stho...@optionshouse.com; Woolcock, Sean
Cc:
Woolcock, Sean wrote:
A description of what you are trying to achieve and what results
you expect:
I have a large (3 million row) table called tape that represents
files, which I join to a small (100 row) table called filesystem
that represents filesystems. I have a web interface that
Thank you all for your replies.
I did figure out what is going on.
9.1 is indeed faster than 8.2.11 so we are good to go forward.
Thank you again
As I increase concurrency I'm experiencing queries that I believe are too
slow given the minuscule amount of data in my tables.
I have 20 Django worker processes and use ab to generate 3000 requests
to a particular URL which is doing some read only queries. I ran this
with ab concurrency level