Erik Jones wrote:
Decibel! wrote:
I should mention that if you can handle splitting the
update into multiple transactions, that will help a
lot since it means you won't be doubling the size of
the table.
As I mentioned above, when you do an update you're actually inserting a
new row and
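(A sketch of the advice above, with a hypothetical table `accounts` and integer key `id` that are not from this thread: each batch commits separately, so a VACUUM between batches can reclaim the dead row versions the previous batch left behind, instead of letting the table double in size.)

```sql
-- Hypothetical batched update; "accounts", "balance" and "id" are
-- placeholder names. Each batch is its own transaction, so a VACUUM
-- afterwards can reclaim the dead tuples that batch created.
BEGIN;
UPDATE accounts SET balance = balance * 1.05 WHERE id >= 0 AND id < 100000;
COMMIT;

VACUUM accounts;  -- reclaim dead row versions from the batch above

BEGIN;
UPDATE accounts SET balance = balance * 1.05 WHERE id >= 100000 AND id < 200000;
COMMIT;
```

(In practice you would loop over the key ranges from a script; if the batches run slowly enough, autovacuum may make the explicit VACUUM unnecessary.)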
On Aug 8, 2007, at 3:00 AM, Heikki Linnakangas wrote:
Erik Jones wrote:
Decibel! wrote:
I should mention that if you can handle splitting the
update into multiple transactions, that will help a
lot since it means you won't be doubling the size of
the table.
As I mentioned above, when you do
On Jul 18, 2007, at 1:08 PM, Steven Flatt wrote:
Some background: we make extensive use of partitioned tables. In fact,
I'm really only considering reindexing partitions that have just closed.
In our simplest/most general case, we have a table partitioned by a
timestamp column, each
In response to Steven Flatt [EMAIL PROTECTED]:
On 8/8/07, Vivek Khera [EMAIL PROTECTED] wrote:
If all you ever did was insert into that table, then you probably
don't need to reindex. If you did mass updates/deletes mixed with
your inserts, then perhaps you do.
Do some experiments
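(One way to run that experiment is to compare an index's on-disk size before and after a REINDEX; a sketch, with `my_partition_pkey` standing in for a real index name:)

```sql
SELECT pg_relation_size('my_partition_pkey');  -- size in bytes before
REINDEX INDEX my_partition_pkey;
SELECT pg_relation_size('my_partition_pkey');  -- size after; a large drop
                                               -- suggests the index was bloated
```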
Hello all,
I am trying to enable capturing of the submitted code via an
application...how do I do this in Postgres? Performance is SLOW on my
server and I have autovacuum enabled as well as rebuilt indexes... what else
should be looked at?
Thanks...Michelle
On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:
I am trying to enable capturing of the submitted code via an
application...how do I do this in Postgres? Performance is SLOW on my
server and I have autovacuum enabled as well as rebuilt indexes... what else
should be looked at?
Try
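(A minimal way to capture every statement an application submits is PostgreSQL's statement logging; in postgresql.conf:)

```
log_statement = 'all'                 # log every submitted statement
# or, to capture only the slow ones:
# log_min_duration_statement = 100    # milliseconds
```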
On Wed, Aug 08, 2007 at 03:27:57PM -0400, Bill Moran wrote:
I've had similar experience. One thing you didn't mention that I've noticed
is that VACUUM FULL often bloats indexes. I've made it SOP that
after application upgrades (which usually includes lots of ALTER TABLES and
other massive
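(The SOP described here amounts to roughly the following, with `accounts` as a placeholder table name; VACUUM FULL compacts the heap but can leave the indexes bloated, so it is followed by a full reindex:)

```sql
VACUUM FULL accounts;
REINDEX TABLE accounts;  -- rebuild all of the table's indexes from scratch
```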
we currently have logging enabled for all queries over 100ms, and keep
the last 24 hours of logs before we rotate them. I've found this tool
very helpful in diagnosing new performance problems that crop up:
http://pgfouine.projects.postgresql.org/
Bryan
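(Bryan's setup corresponds to roughly this postgresql.conf fragment; the rotation scheme shown is one common way to keep 24 hours of logs, not quoted from his message:)

```
log_min_duration_statement = 100     # log queries running over 100 ms
log_filename = 'postgresql-%H.log'   # one file per hour of the day
log_truncate_on_rotation = on        # overwrite yesterday's file on reuse
log_rotation_age = 60                # rotate every 60 minutes
```

(pgFouine also expects a particular log_line_prefix; check its documentation for the exact format.)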
On 8/8/07, Steinar H. Gunderson [EMAIL PROTECTED]
Bill Moran [EMAIL PROTECTED] writes:
In response to Steven Flatt [EMAIL PROTECTED]:
What's interesting is that an insert-only table can benefit significantly
from reindexing after the table is fully loaded.
I've had similar experience. One thing you didn't mention that I've noticed
is that
I saw an interesting topic in the archives on best bang for the buck for
$20k... about a year old now.
So what are the thoughts on a current combined rack/disks/CPU combo around
the $10k-$15k point?
I can configure up a Dell PowerEdge 2900 for $9k, but am wondering if
I'm missing out
On 8/8/07, justin [EMAIL PROTECTED] wrote:
I saw an interesting topic in the archives on best bang for the buck for
$20k... about a year old now.
So what are the thoughts on a current combined rack/disks/CPU combo around
the $10k-$15k point?
I can configure up a Dell PowerEdge 2900
No, it wouldn't be much kit for $20k,
but that example is currently coming in at $9k ... (the $20k referred to
is last year's topic).
I think I can spend up to $15k, but it would have to be clearly
faster/better/more expandable than this config.
Or I can spend $9k with someone else if I can