Hello,
Clearly, I shouldn't actually use these transactions unless I have to, and
in cases where I do use them, I'd expect the completion of the transaction to
depend on the speed of all participating databases in the transaction, but
are there any additional overheads which might come with a
Hi,
I am ready to install ver. 8.1 to our db server, but I have some
questions about it.
When I use autovacuum (8.1), is it still required to run VACUUM ANALYZE for
maintenance, or is autovacuum enough?
We have 2 processors (with hyperthreading); is it necessary to configure
PostgreSQL to use them, or is it
On Mon, Dec 12, 2005 at 09:54:30AM +0100, Szabolcs BALLA wrote:
When I use autovacuum (8.1), is it still required to run VACUUM ANALYZE for
maintenance, or is autovacuum enough?
autovacuum should be enough.
We have 2 processors (with hyperthreading); is it necessary to configure
PostgreSQL to use them, or is
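For reference, enabling autovacuum on 8.1 takes a few postgresql.conf
settings; a minimal sketch (the stats collector settings are a hard
prerequisite, and the tuning values shown are just the 8.1 defaults as I
recall them):

```ini
# postgresql.conf -- minimal autovacuum setup for 8.1 (sketch)
stats_start_collector = on     # required: autovacuum reads collector stats
stats_row_level       = on     # required: per-row activity drives vacuum decisions
autovacuum            = on     # off by default in 8.1

# Optional tuning knobs (8.1 defaults shown):
autovacuum_naptime             = 60    # seconds between autovacuum runs
autovacuum_vacuum_threshold    = 1000  # min row changes before a VACUUM
autovacuum_analyze_threshold   = 500   # min row changes before an ANALYZE
```

A reload (pg_ctl reload) is enough for the autovacuum_* knobs, but the
stats_* and autovacuum settings need a server restart on 8.1.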
Thanks very much - there are a lot of good articles there... Reading as
fast as I can :)
Best,
Bealach
From: Thomas F. O'Connell [EMAIL PROTECTED]
To: Bealach-na Bo [EMAIL PROTECTED]
CC: PgSQL - Performance pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Very slow queries - please
Michael Fuhr wrote:
On Sun, Dec 11, 2005 at 11:53:36AM +, Carlos Benkendorf wrote:
I would like to use autovacuum, but isn't collecting row-level
statistics too expensive?
The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant
Paal,
On 12/12/05 2:10 AM, Pål Stenslet [EMAIL PROTECTED] wrote:
Here are the schema details, but first a little introduction:
Terrific, very helpful and thanks for both.
I wonder why the bitmap scan isn't selected in this query; Tom might have
some opinions and suggestions about it.
I'd
On Sun, Dec 11, 2005 at 11:53:36AM +, Carlos Benkendorf wrote:
I would like to use autovacuum, but isn't collecting row-level
statistics too expensive?
The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance
On Dec 9, 2005, at 10:50 AM, Andreas Pflug wrote:
Well, if your favourite dealer can't supply you with such common
equipment as 15k drives, you should consider changing the dealer.
They don't seem to be aware of db hardware requirements.
Thanks to all for your opinions. I'm definitely
On Mon, Dec 12, 2005 at 01:33:27PM -0500, Merlin Moncure wrote:
The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when
On Mon, Dec 12, 2005 at 10:23:42AM -0300, Alvaro Herrera wrote:
Michael Fuhr wrote:
The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about
On Dec 8, 2005, at 2:21 PM, Jeffrey W. Baker wrote:
For the write transactions, the speed and size of the DIMM on that LSI
card will matter the most. I believe the max memory on that
adapter is
512MB. These cost so little that it wouldn't make sense to go with
anything smaller.
From
On Dec 12, 2005, at 1:59 PM, Vivek Khera wrote:
From where did you get LSI MegaRAID controller with 512MB? The
320-2X doesn't seem to come with more than 128 from the factory.
Can you just swap out the DIMM card for higher capacity?
We've swapped out the DIMMs on MegaRAID controllers.
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap upgrade.
What's the max you can put into one of these
On Fri, Dec 02, 2005 at 06:28:09PM -0500, Francisco Reyes wrote:
I am in the process of designing a new system.
There will be a long list of words such as
-word table
word_id integer
word varchar
special boolean
Some special words are used to determine if some work is to be done and
On Thu, 8 Dec 2005 11:59:24 -0500, Amit V Shah [EMAIL PROTECTED]
wrote:
CONSTRAINT pk_runresult_has_catalogtable PRIMARY KEY
(runresult_id_runresult, catalogtable_id_catalogtable, value)
' - Index Scan using runresult_has_catalogtable_id_runresult
on runresult_has_catalogtable
Merlin Moncure [EMAIL PROTECTED] writes:
The cost depends on your usage patterns. I did tests with one of
my applications and saw no significant performance difference for
simple selects, but a series of insert/update/delete operations ran
about 30% slower when block- and row-level statistics
On Mon, 2005-12-12 at 16:19, Vivek Khera wrote:
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap
On Dec 12, 2005, at 2:19 PM, Vivek Khera wrote:
On Dec 12, 2005, at 5:16 PM, J. Andrew Rogers wrote:
We've swapped out the DIMMs on MegaRAID controllers. Given the
cost of a standard low-end DIMM these days (which is what the LSI
controllers use last I checked), it is a very cheap
On Mon, Dec 12, 2005 at 06:01:01PM -0500, Tom Lane wrote:
IIRC, the only significant cost from enabling stats is the cost of
transmitting the counts to the stats collector, which is a cost
basically paid once at each transaction commit. So short transactions
will definitely have more overhead
Michael Fuhr [EMAIL PROTECTED] writes:
Further tests show that for this application
the killer is stats_command_string, not stats_block_level or
stats_row_level.
I tried it with pgbench -c 10, and got these results:
41% reduction in TPS rate for stats_command_string
9%
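For anyone wanting to repeat that measurement, the comparison can be
sketched roughly like this (hypothetical scratch database name "bench";
assumes a running 8.1 server, and that you edit postgresql.conf and reload
between runs):

```shell
# Initialize a scratch pgbench database (hypothetical name "bench").
createdb bench
pgbench -i -s 10 bench

# Baseline run: stats_command_string = off in postgresql.conf, then reload.
pg_ctl reload
pgbench -c 10 -t 1000 bench    # note the reported TPS

# Repeat with stats_command_string = on and compare the TPS figures;
# the post above saw roughly a 41% TPS drop with it enabled.
```

Benchmark numbers from pgbench are noisy at small scales, so several runs
per setting are worth averaging before drawing conclusions.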