On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
Well, it's not hard to get more disk than that, but you'd
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
What kind of RAID? It's _easy_ to get faster disk
Karl Wright wrote:
So, I guess this means that there's no way I can keep the database
adequately vacuumed with my anticipated load and hardware. One thing or
the other will have to change.
Have you checked your maintenance_work_mem setting? If it's not large
enough, vacuum will need to scan
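For reference, a minimal psql sketch of checking and raising the setting being asked about (the table name is one that appears later in this thread; the 512MB figure is only an example, not a recommendation from the poster):

```sql
-- Show the current setting; the 8.1 default is 16384 (kilobytes),
-- which is far too small for index cleanup on very large tables.
SHOW maintenance_work_mem;

-- Raise it for this session only, then run the vacuum by hand.
-- 8.1 takes the value in kilobytes; 524288 kB = 512MB.
SET maintenance_work_mem = 524288;
VACUUM VERBOSE intrinsiclink;
```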
Karl Wright wrote:
(b) the performance of individual queries had already degraded
significantly in the same manner as what I'd seen before.
You didn't answer whether you had smaller, more frequently updated
tables that need more vacuuming. This comment makes me think you do. I
think what
Karl Wright writes:
Okay - I started a VACUUM with the 8.1 database yesterday morning,
having the database remain under load. As of 12:30 today (~27 hours),
the original VACUUM was still running. At that point:
I don't recall if you said it already, but what is your
maintenance_work_mem?
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
I am afraid that I did answer this. My largest tables are the ones
continually being updated. The smaller ones are updated only
infrequently.
Can you afford to vacuum them in parallel?
Hmm, interesting question. If VACUUM
Is there a sensible way to partition the large table into smaller
tables?
Mike Stone
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
Michael Stone wrote:
Is there a sensible way to partition the large table into smaller tables?
It entirely depends on your data set.
Joshua D. Drake
On Wed, Jun 20, 2007 at 02:01:34PM -0400, Karl Wright wrote:
(FWIW, ANALYZE operations are kicked off after every 30,000 inserts,
updates, or deletes, by the application itself).
I don't think you should do it that way. I suspect that automatic
VACUUM ANALYSE way more often on each table --
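A sketch of what handing this job to autovacuum looks like in postgresql.conf on 8.1, where autovacuum ships disabled (the scale factors shown are the 8.1 defaults; they would typically be tightened for very busy tables):

```
autovacuum = on
stats_start_collector = on   # autovacuum needs the stats collector...
stats_row_level = on         # ...with row-level stats enabled
# A table is vacuumed once dead rows exceed
# autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples:
autovacuum_vacuum_scale_factor = 0.4
autovacuum_analyze_scale_factor = 0.2
```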
On Wed, Jun 20, 2007 at 11:14:45AM -0700, Joshua D. Drake wrote:
Michael Stone wrote:
Is there a sensible way to partition the large table into smaller tables?
It entirely depends on your data set.
Yes, that's why it was a question rather than a suggestion. :)
Mike Stone
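For concreteness, an 8.1-style partitioning of the big table might look like the sketch below (table and column names are from this thread; using that particular jobid as a split point is purely hypothetical):

```sql
-- Children inherit the parent's columns; the CHECK constraints let
-- the planner skip non-matching children when constraint_exclusion
-- is on. Each child can then be vacuumed on its own schedule.
CREATE TABLE intrinsiclink_old (
    CHECK (jobid < 1181766706097)
) INHERITS (intrinsiclink);

CREATE TABLE intrinsiclink_new (
    CHECK (jobid >= 1181766706097)
) INHERITS (intrinsiclink);

SET constraint_exclusion = on;  -- off by default in 8.1
```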
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You know, it actually sounds like you're getting whacked by the same
problem that got us a
On Wed, Jun 20, 2007 at 05:29:41PM -0400, Karl Wright wrote:
A nice try, but I had just completed a VACUUM on this database three
hours prior to starting the VACUUM that I gave up on after 27 hours.
You keep putting it that way, but your problem is essentially that
you have several tables
Question: Does anyone have any idea what bottleneck I am hitting? An
index's performance should in theory scale as the log of the number of
rows - what am I missing here?
These details can help people on the list help you:
- Your hardware config (CPU, RAM, disk) ?
-
An overnight VACUUM helped things quite a bit. I am now getting
throughput of around 75 transactions per minute, where before I was
getting 30. Also, the CPU is no longer pegged, and the machine's load
average has dropped to an acceptable 6-10 from somewhere above 20.
While this is still
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentially
correct?
Are you saying you weren't running VACUUM
Karl Wright [EMAIL PROTECTED] writes:
This particular run lasted four days before a VACUUM became essential. The
symptom that indicates that VACUUM is needed seems to be that the CPU usage of
any given postgresql query skyrockets. Is this essentially correct?
Postgres is designed on the
Karl Wright [EMAIL PROTECTED] writes:
Fine - but what if the previous vacuum is still in progress, and does not
finish in 5 minutes?
Yes, well, there are problems with this design but the situation is already
much improved in 8.2 and there are more improvements on the horizon.
But it's likely
A useful utility that I've found is PgFouine. It has an option to
analyze VACUUM VERBOSE logs. It has been instrumental in helping me
figure out what's been going on with my VACUUM that is taking 4+
hours, specifically tracking the tables that are taking the longest.
I highly recommend
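For anyone wanting to do the same, the raw material is just VACUUM VERBOSE output captured to a file (the database name here is the one visible in the psql prompts elsewhere in this thread):

```sql
-- VACUUM VERBOSE reports pages and tuples removed per table and per
-- index; the messages arrive on stderr client-side, so capture both
-- streams when redirecting, e.g.:
--   psql -d metacarta -c "VACUUM VERBOSE" > vacuum.log 2>&1
VACUUM VERBOSE;
```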
Tom Lane wrote:
Karl Wright [EMAIL PROTECTED] writes:
Also, as I said before, I have done extensive query analysis and found
that the plans for the queries that are taking a long time are in fact
very reasonable. Here's an example from the application log of a query
that took way more time
Karl Wright [EMAIL PROTECTED] writes:
[2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on
intrinsiclink (cost=0.00..14177.29 rows=5 width=253)
[2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND
((childidhash)::text = ($3)::text))
[2007-06-18
Gregory Stark writes:
VACUUM doesn't require shutting down the system, it doesn't lock any tables or
otherwise prevent other jobs from making progress. It does add extra i/o but
In addition to what Gregory pointed out, you may want to also consider using
Autovacuum. That may also help.
Alvaro Herrera writes:
How large is the database? I must admit I have never seen a database
that took 4 days to vacuum. This could mean that your database is
humongous, or that the vacuum strategy is wrong for some reason.
Especially with 16GB of RAM.
I have a setup with several databases
Karl Wright [EMAIL PROTECTED] writes:
I did an ANALYZE on that table and repeated the explain, and got this:
...
... even more wildly wrong.
Hmm. You might need to increase the statistics target for your larger
tables. It's probably not a big deal for queries like this one, but I'm
worried
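A sketch of that suggestion, using the columns from the misestimated query quoted above (a target of 100 is only an example; the 8.1 default is 10):

```sql
-- Larger per-column statistics targets make ANALYZE sample more rows,
-- which usually improves selectivity estimates on big tables.
ALTER TABLE intrinsiclink ALTER COLUMN jobid SET STATISTICS 100;
ALTER TABLE intrinsiclink ALTER COLUMN childidhash SET STATISTICS 100;
ANALYZE intrinsiclink;
```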
Karl Wright wrote:
I did an ANALYZE on that table and repeated the explain, and got this:
metacarta=> analyze intrinsiclink;
ANALYZE
metacarta=> explain select count(*) from intrinsiclink where
jobid=1181766706097 and
childidhash='7E130F3B688687757187F1638D8776ECEF3009E0';
Karl Wright writes:
I'm not writing off autovacuum - just the concept that the large tables
aren't the ones that are changing. Unfortunately, they *are* the most
dynamically updated.
Would it be possible for you to partition the tables?
By date or some other fashion to try to have some tables
Gregory Stark [EMAIL PROTECTED] writes:
Karl Wright [EMAIL PROTECTED] writes:
In this case it looks like the planner is afraid that that's exactly
what will happen --- a cost of 14177 suggests that several thousand row
fetches are expected to happen, and yet it's only predicting 5 rows out
Francisco Reyes wrote:
I have a setup with several databases (the largest of which is 1TB
database) and I do a nightly vacuum analyze for ALL databases. It takes
about 22 hours. And this is with constant updates to the large 1TB
database. This is with Postgresql 8.1.3
22h nightly? Wow, you
Heikki Linnakangas writes:
On a serious note, the index vacuum improvements in 8.2 might help you
to cut that down. You seem to be happy with your setup, but I thought
I'd mention it.
I am really, really trying to go to 8.2.
I have a thread on general going on for about a week.
I am
Hi,
I have an application which really exercises the performance of
postgresql in a major way, and I am running into a performance
bottleneck with Postgresql 8.1 that I do not yet understand.
Here are the details:
- There is a primary table, with some secondary tables
- The principle
Karl Wright [EMAIL PROTECTED] writes:
- At any given time, there are up to 100 of these operations going on at
once against the same database.
It sounds like your hardware is far past maxed out. Which is odd
since tables with a million or so rows are pretty small for modern
hardware. What's