Re: [PERFORM] The state of PG replication in 2008/Q2?

2008-08-22 Thread RW
My company finally has the means to install a new database server for replication. I have Googled and found a lot of sparse information out there regarding replication systems for PostgreSQL and a lot of it looks very out-of-date. Can I please get some ideas from those of you that are

Re: [PERFORM] The state of PG replication in 2008/Q2?

2008-08-22 Thread Mathias Stjernström
Yup, but sometimes you are not in charge of the DDL changes. You may have many different users that make changes, or, for example, if you are working with Ruby on Rails you have something that's called Migrations that handles all DDL changes; in those situations it can get really complicated

Re: [PERFORM] Slow query with a lot of data

2008-08-22 Thread Moritz Onken
On 21.08.2008 at 19:08, Merlin Moncure wrote: On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken [EMAIL PROTECTED] wrote: On 21.08.2008 at 16:39, Scott Carey wrote: It looks to me like the work_mem did have an effect. Your earlier queries had a sort followed by group aggregate at the
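A minimal sketch of the per-session work_mem tuning being discussed, with made-up table and column names rather than the poster's actual query:

    -- Raise the per-sort/per-hash memory budget for this session only,
    -- which can let the planner pick a hash aggregate over sort + group.
    SET work_mem = '256MB';
    EXPLAIN ANALYZE
    SELECT user_id, count(*) FROM big_table GROUP BY user_id;
    RESET work_mem;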

Re: [PERFORM] The state of PG replication in 2008/Q2?

2008-08-22 Thread Mathias Stjernström
I agree with Robert, but I have never heard of Cybercluster before. Does anyone have any experience with Cybercluster? It sounds really interesting! Best regards, Mathias Stjernström http://www.pastbedti.me/ On 22 aug 2008, at 08.18, RW wrote: My company finally has the means to install a new

Re: [PERFORM] The state of PG replication in 2008/Q2?

2008-08-22 Thread Peter Eisentraut
Dan Harris wrote: My desire would be to have a parallel server that could act as a hot standby system with automatic fail over in a multi-master role. I will add my me too for DRBD + Heartbeat.

Re: [PERFORM] Postgres not using array

2008-08-22 Thread Gregory Stark
André Volpato [EMAIL PROTECTED] writes: Tom Lane wrote: We are guessing that a dual core 3.0GHz will beat up a quad core 2.2, at least in this environment with less than 4 concurrent queries. The most you could hope for from that is less than a 50% speedup. I'd suggest investing some
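For context, the arithmetic behind that ceiling: the clock ratio is 3.0 GHz / 2.2 GHz, about 1.36, so a query pinned to a single core can hope for at most roughly a 36% speedup from the faster chip, well under 50%.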

[PERFORM] Identifying the nature of blocking I/O

2008-08-22 Thread Peter Schuller
[for the purpose of this post, 'blocking' refers to an I/O operation taking a long time for reasons other than the amount of work the I/O operation itself actually implies; not to use of blocking I/O calls or anything like that] Hello, I have a situation in which deterministic latency is a lot
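The snippet cuts off before any diagnosis; one common first check on 8.3-era systems (an assumption here, not something taken from the thread) is whether latency spikes line up with checkpoint activity:

    -- Sample these counters before and after a latency spike; a jump in
    -- buffers_checkpoint or checkpoints_req points at checkpoint I/O.
    SELECT checkpoints_timed, checkpoints_req,
           buffers_checkpoint, buffers_backend
      FROM pg_stat_bgwriter;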

Re: [PERFORM] The state of PG replication in 2008/Q2?

2008-08-22 Thread Jan Otto
Hi Mathias, On Aug 22, 2008, at 8:35 AM, Mathias Stjernström wrote: I agree with Robert, but I have never heard of Cybercluster before. Does anyone have any experience with Cybercluster? It sounds really interesting! Some months ago I took a look into Cybercluster. At that point Cybercluster

[PERFORM] Nested Loop join being improperly chosen

2008-08-22 Thread Brad Ediger
Hello, I'm having trouble with a Nested Loop being selected for a rather complex query; it turns out this is a pretty bad plan as the nested loop's row estimates are quite off (1 estimated / 1207881 actual). If I disable enable_nestloop, the query executes much faster (42 seconds instead
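A hedged sketch of the workaround described, plus the usual longer-term fix for bad row estimates; the query and names below are invented, not the poster's:

    -- Per-session workaround: forbid nested loops and re-test.
    SET enable_nestloop = off;
    EXPLAIN ANALYZE
    SELECT count(*) FROM orders o JOIN customers c ON c.id = o.customer_id;
    RESET enable_nestloop;

    -- Longer-term fix: give the planner better statistics on the join column.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;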

Re: [PERFORM] Optimizing a VIEW

2008-08-22 Thread Decibel!
On Aug 20, 2008, at 1:18 PM, Tom Lane wrote: If you're worried about storage space, I wouldn't go for arrays of composite :-(. The tuple header overhead is horrendous, almost certainly a lot worse than hstore. Oh holy cow, I didn't realize we had a big header in there. Is that to allow for
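The overhead is easy to see directly with pg_column_size; a rough comparison, assuming the hstore contrib module is installed and using an invented composite type:

    -- Each composite datum carries a full tuple header, so an array of
    -- composites comes out much fatter than the equivalent hstore value.
    CREATE TYPE kv AS (k text, v text);
    SELECT pg_column_size(ARRAY[ROW('a','1')::kv, ROW('b','2')::kv]) AS composite_bytes,
           pg_column_size('a=>1, b=>2'::hstore)                      AS hstore_bytes;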

Re: [PERFORM] Postgres not using array

2008-08-22 Thread André Volpato
Gregory Stark wrote: André Volpato [EMAIL PROTECTED] writes: I think we almost reached the tuning limit, without changing the schema. It's hard to tell from the plan you posted (and with only a brief look) but it looks to me like your query with that function is

Re: [PERFORM] Why do my hash joins turn to nested loops?

2008-08-22 Thread pgsql-performance
On Thu, 21 Aug 2008, Tom Lane wrote: I think you need to raise from_collapse_limit and/or join_collapse_limit. Ahah, that was it: a much simpler solution than I was fearing. I had already re-written the queries to get around it, but ran into another snag with that method, so this was good
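For reference, the two settings named above, shown with illustrative values rather than whatever the poster actually used:

    -- Both default to 8; raising them lets the planner reorder joins
    -- across larger FROM lists, at some extra planning-time cost.
    SET from_collapse_limit = 20;
    SET join_collapse_limit = 20;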

[PERFORM] Big delete on big table... now what?

2008-08-22 Thread Fernando Hevia
Hi list. I have a table with over 30 million rows. Performance was dropping steadily so I moved old data not needed online to an historic table. Now the table has about 14 million rows. I don't need the disk space returned to the OS but I do need to improve performance. Will a plain vacuum do or
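A sketch of the usual options after a mass delete, with a placeholder table name: a plain VACUUM marks the dead rows reusable without returning space to the OS, which matches the stated requirement; the alternatives compact the table but take exclusive locks.

    -- Reclaims dead tuples for reuse and refreshes planner statistics;
    -- does not shrink the file on disk.
    VACUUM ANALYZE mytable;

    -- Compacting alternatives (exclusive lock, long-running on 14M rows):
    -- VACUUM FULL mytable;
    -- CLUSTER mytable USING mytable_pkey;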

Re: [PERFORM] Big delete on big table... now what?

2008-08-22 Thread Kevin Grittner
Fernando Hevia [EMAIL PROTECTED] wrote: I have a table with over 30 million rows. Performance was dropping steadily so I moved old data not needed online to an historic table. Now the table has about 14 million rows. I don't need the disk space returned to the OS but I do need to improve

Re: [PERFORM] Big delete on big table... now what?

2008-08-22 Thread Bill Moran
Fernando Hevia [EMAIL PROTECTED] wrote: Hi list. I have a table with over 30 million rows. Performance was dropping steadily so I moved old data not needed online to an historic table. Now the table has about 14 million rows. I don't need the disk space returned to the OS but I do need to

Re: [PERFORM] Big delete on big table... now what?

2008-08-22 Thread Gregory Stark
Bill Moran [EMAIL PROTECTED] writes: Fernando Hevia [EMAIL PROTECTED] wrote: Hi list. I have a table with over 30 million rows. Performance was dropping steadily so I moved old data not needed online to an historic table. Now the table has about 14 million rows. I don't need the disk

[PERFORM] Large number of tables slow insert

2008-08-22 Thread Loic Petit
Hi, I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large amount of sensors. In order to get good performance when querying by timestamp on each sensor, I partitioned my measures table per sensor. Thus I created a lot of tables. I simulated a large sensor network with 3000
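For context, 8.3-style partitioning is done with inheritance plus CHECK constraints; a minimal sketch of the setup being described, with invented names and one child table per sensor:

    -- Parent table plus one child per sensor; with constraint_exclusion
    -- on, the planner skips children whose CHECK constraint rules them out.
    CREATE TABLE measures (sensor_id int, ts timestamptz, value float8);
    CREATE TABLE measures_s1 (CHECK (sensor_id = 1)) INHERITS (measures);
    SET constraint_exclusion = on;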