My company finally has the means to install a new database server for
replication. I have Googled and found a lot of sparse information out
there regarding replication systems for PostgreSQL and a lot of it
looks very out-of-date. Can I please get some ideas from those of you
that are
Yup, but sometimes you are not in charge of the DDL changes. You may
have many different users that make changes, or, for example, if you are
working with Ruby on Rails, you have something called Migrations
that handles all DDL changes. In those situations it can get really
complicated.
On 21.08.2008, at 19:08, Merlin Moncure wrote:
On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken
[EMAIL PROTECTED] wrote:
On 21.08.2008, at 16:39, Scott Carey wrote:
It looks to me like the work_mem did have an effect.
Your earlier queries had a sort followed by group aggregate at the
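For anyone following along, a minimal sketch of the kind of check being discussed (the table and column names below are made up, not from the thread): raise work_mem for the current session only and re-examine the plan to see whether the sort plus GroupAggregate is replaced by an in-memory HashAggregate.

  -- Session-local change; names are illustrative.
  SET work_mem = '128MB';

  EXPLAIN ANALYZE
  SELECT customer_id, count(*)
  FROM orders
  GROUP BY customer_id;

  RESET work_mem;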
I agree with Robert, but I have never heard of Cybercluster before.
Does anyone have any experience with Cybercluster? It sounds really
interesting!
Best regards,
Mathias Stjernström
http://www.pastbedti.me/
On 22 Aug 2008, at 08.18, RW wrote:
My company finally has the means to install a new
Dan Harris wrote:
My desire would be to have a parallel server that could act as a hot
standby system with automatic fail over in a multi-master role.
I will add my me too for DRBD + Heartbeat.
André Volpato [EMAIL PROTECTED] writes:
Tom Lane wrote:
We are guessing that a dual core 3.0GHz will beat up a quad core 2.2,
at least in this environment with fewer than 4 concurrent queries.
The most you could hope for from that is less than a 50% speedup. I'd
suggest investing some
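(For reference, the raw clock ratio is 3.0 GHz / 2.2 GHz ≈ 1.36, so a single core-bound query could gain at most roughly 36% from the faster clock alone, comfortably under 50%.)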
[for the purpose of this post, 'blocking' refers to an I/O operation
taking a long time for reasons other than the amount of work the I/O
operation itself actually implies; not to use of blocking I/O calls or
anything like that]
Hello,
I have a situation in which deterministic latency is a lot
Hi Mathias,
On Aug 22, 2008, at 8:35 AM, Mathias Stjernström wrote:
I agree with Robert, but I have never heard of Cybercluster before.
Does anyone have any experience with Cybercluster? It sounds really
interesting!
Some months ago I took a look into Cybercluster. At that point
Cybercluster
Hello,
I'm having trouble with a Nested Loop being selected for a rather
complex query; it turns out this is a pretty bad plan as the nested
loop's row estimates are quite off (1 estimated / 1207881 actual). If
I disable enable_nestloop, the query executes much faster (42 seconds
instead
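As an illustration of that kind of test (the table and column names below are invented, not from the original query), the setting can be flipped for a single transaction so the default planner behaviour is untouched elsewhere:

  BEGIN;
  SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
  EXPLAIN ANALYZE
  SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE o.created_at > now() - interval '1 day';
  ROLLBACK;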
On Aug 20, 2008, at 1:18 PM, Tom Lane wrote:
If you're worried about storage space, I wouldn't go for arrays of
composite :-(. The tuple header overhead is horrendous, almost
certainly a lot worse than hstore.
Oh holy cow, I didn't realize we had a big header in there. Is that
to allow for
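A rough way to see that per-tuple overhead for yourself (the composite type here is just an example) is to compare pg_column_size() of a composite value against the sizes of its individual fields:

  CREATE TYPE kv AS (k text, v text);

  SELECT pg_column_size(ROW('a','b')::kv)                      AS composite_bytes,
         pg_column_size('a'::text) + pg_column_size('b'::text) AS field_bytes;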
Gregory Stark wrote:
André Volpato [EMAIL PROTECTED] writes:
I think we almost reached the tuning limit, without changing the schema.
It's hard to tell from the plan you posted (and with only a brief look) but it
looks to me like your query with that function is
On Thu, 21 Aug 2008, Tom Lane wrote:
I think you need to raise from_collapse_limit and/or
join_collapse_limit.
Ahah, that was it... a much simpler solution than I was fearing.
I had already re-written the queries to get around it, but ran into
another snag with that method, so this was good
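For reference, the change amounts to something like this (the value 20 is only an example; the defaults for both settings are 8):

  SET from_collapse_limit = 20;
  SET join_collapse_limit = 20;
  -- then re-run EXPLAIN ANALYZE on the problem query to confirm the better plan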
Hi list.
I have a table with over 30 million rows. Performance was dropping steadily
so I moved old data not needed online to an historic table. Now the table
has about 14 million rows. I don't need the disk space returned to the OS
but I do need to improve performance. Will a plain vacuum do or
Fernando Hevia [EMAIL PROTECTED] wrote:
I have a table with over 30 million rows. Performance was dropping steadily
so I moved old data not needed online to an historic table. Now the table
has about 14 million rows. I don't need the disk space returned to the OS
but I do need to improve
Fernando Hevia [EMAIL PROTECTED] wrote:
Hi list.
I have a table with over 30 million rows. Performance was dropping steadily
so I moved old data not needed online to an historic table. Now the table
has about 14 million rows. I don't need the disk space returned to the OS
but I do need to
Bill Moran [EMAIL PROTECTED] writes:
Fernando Hevia [EMAIL PROTECTED] wrote:
Hi list.
I have a table with over 30 million rows. Performance was dropping steadily
so I moved old data not needed online to an historic table. Now the table
has about 14 million rows. I don't need the disk
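For completeness, a sketch of the usual options in this situation (the table and index names are made up): plain VACUUM only marks dead row versions as reusable space inside the table's existing files, which is enough when the space does not have to go back to the OS; VACUUM FULL or CLUSTER compacts the table and does return space, but takes an exclusive lock while it runs.

  VACUUM ANALYZE big_table;                   -- reclaims dead rows for reuse, no exclusive lock
  -- VACUUM FULL big_table;                   -- compacts and returns space to the OS, exclusive lock
  -- CLUSTER big_table USING big_table_pkey;  -- rewrites the table in index order, exclusive lock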
Hi,
I use PostgreSQL 8.3.1-1 to store a lot of data coming from a large number
of sensors. In order to have good performance on querying by timestamp on
each sensor, I partitioned my measures table per sensor. Thus I created
a lot of tables.
I simulated a large sensor network with 3000
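A minimal sketch of the inheritance-style partitioning described above, with one child table per sensor (all names, and the single sensor shown, are illustrative):

  CREATE TABLE measures (
      sensor_id integer          NOT NULL,
      ts        timestamptz      NOT NULL,
      value     double precision
  );

  CREATE TABLE measures_sensor_1 (
      CHECK (sensor_id = 1)
  ) INHERITS (measures);

  CREATE INDEX measures_sensor_1_ts_idx ON measures_sensor_1 (ts);

  -- With constraint_exclusion = on, a query that filters on sensor_id
  -- only scans the matching child tables.
  SET constraint_exclusion = on;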