Michael Stone wrote:
> On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
>> I checked the disk picture - this is a RAID disk array with 6 drives,
>> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
>> hard to get more/faster disk than that.
>
> Well, it's not hard
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
Well, it's not hard to get more disk than that, but you'd
Scott Marlowe wrote:
Andrew Sullivan wrote:
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
W
Andrew Sullivan wrote:
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
I checked the disk picture - this is a RAID disk array with 6 drives,
with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
hard to get more/faster disk than that.
What kind of RAID? It
On Thu, Jun 21, 2007 at 12:29:49PM -0400, Karl Wright wrote:
> I checked the disk picture - this is a RAID disk array with 6 drives,
> with a bit more than 1Tbyte total storage. 15,000 RPM. It would be
> hard to get more/faster disk than that.
What kind of RAID? It's _easy_ to get faster disk
Scott Marlowe wrote:
Karl Wright wrote:
Scott Marlowe wrote:
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only i
Karl Wright wrote:
Scott Marlowe wrote:
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You
Scott Marlowe wrote:
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You know, it actually so
Karl Wright wrote:
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You know, it actually sounds like you're getting
On Wed, Jun 20, 2007 at 05:29:41PM -0400, Karl Wright wrote:
> A nice try, but I had just completed a VACUUM on this database three
> hours prior to starting the VACUUM that I gave up on after 27 hours.
You keep putting it that way, but your problem is essentially that
you have several tables th
Shaun Thomas wrote:
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
I am afraid that I did answer this. My largest tables
are the ones continually being updated. The smaller
ones are updated only infrequently.
You know, it actually sounds like you're getting whacked by the same
On Wednesday 20 June 2007 12:55:20 pm Karl Wright wrote:
> I am afraid that I did answer this. My largest tables
> are the ones continually being updated. The smaller
> ones are updated only infrequently.
You know, it actually sounds like you're getting whacked by the same
problem that got us
On Wed, Jun 20, 2007 at 11:14:45AM -0700, Joshua D. Drake wrote:
Michael Stone wrote:
Is there a sensible way to partition the large table into smaller tables?
It entirely depends on your data set.
Yes, that's why it was a question rather than a suggestion. :)
Mike Stone
On Wed, Jun 20, 2007 at 02:01:34PM -0400, Karl Wright wrote:
> (FWIW, ANALYZE operations are kicked off after every 30,000 inserts,
> updates, or deletes, by the application itself).
I don't think you should do it that way. I suspect that automatic
VACUUM ANALYSE way more often on each table --
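For reference, the suggestion here is to let the autovacuum daemon handle VACUUM and ANALYZE instead of having the application fire ANALYZE every 30,000 changes. A minimal sketch of enabling it on 8.1 follows; the threshold values are illustrative, not taken from this thread:

# postgresql.conf (8.1): autovacuum needs the stats collector and row-level stats
stats_start_collector = on
stats_row_level = on
autovacuum = on
autovacuum_naptime = 60                 # seconds between checks of each database
autovacuum_vacuum_scale_factor = 0.2    # vacuum a table after ~20% of its rows change
autovacuum_analyze_scale_factor = 0.1   # analyze after ~10% of its rows change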
Michael Stone wrote:
Is there a sensible way to partition the large table into smaller tables?
It entirely depends on your data set.
Joshua D. Drake
Mike Stone
Is there a sensible way to partition the large table into smaller
tables?
Mike Stone
Karl Wright wrote:
> Alvaro Herrera wrote:
> >Karl Wright wrote:
> >>I am afraid that I did answer this. My largest tables are the ones
> >>continually being updated. The smaller ones are updated only
> >>infrequently.
> >
> >Can you afford to vacuum them in parallel?
>
> Hmm, interesting que
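Vacuuming them "in parallel" here just means running per-table VACUUMs from separate sessions at the same time, rather than one long database-wide pass. A minimal sketch, assuming intrinsiclink is one of the hot tables; the second table name is made up for illustration:

-- session 1
VACUUM ANALYZE intrinsiclink;

-- session 2, started at the same time (hypothetical second hot table)
VACUUM ANALYZE hopcount;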
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
(b) the performance of individual queries had already degraded
significantly in the same manner as what I'd seen before.
You didn't answer whether you had smaller, more frequently updated
tables that need more
Francisco Reyes wrote:
Karl Wright writes:
Okay - I started a VACUUM with the 8.1 database yesterday morning,
having the database remain under load. As of 12:30 today (~27 hours),
the original VACUUM was still running. At that point:
I don't recall if you said it already, but what is your
Karl Wright wrote:
> Alvaro Herrera wrote:
> >Karl Wright wrote:
> >
> >>(b) the performance of individual queries had already degraded
> >>significantly in the same manner as what I'd seen before.
> >
> >You didn't answer whether you had smaller, more frequently updated
> >tables that need more v
Alvaro Herrera wrote:
Karl Wright wrote:
(b) the performance of individual queries had already degraded
significantly in the same manner as what I'd seen before.
You didn't answer whether you had smaller, more frequently updated
tables that need more vacuuming. This comment makes me think yo
Karl Wright writes:
Okay - I started a VACUUM with the 8.1 database yesterday morning,
having the database remain under load. As of 12:30 today (~27 hours),
the original VACUUM was still running. At that point:
I don't recall if you said it already, but what is your
maintenance_work_mem?
Karl Wright wrote:
> (b) the performance of individual queries had already degraded
> significantly in the same manner as what I'd seen before.
You didn't answer whether you had smaller, more frequently updated
tables that need more vacuuming. This comment makes me think you do. I
think what y
Karl Wright wrote:
So, I guess this means that there's no way I can keep the database
adequately vacuumed with my anticipated load and hardware. One thing or
the other will have to change.
Have you checked your maintenance_work_mem setting? If it's not large
enough, vacuum will need to scan
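The reason maintenance_work_mem matters so much for this: VACUUM can only remember a limited number of dead row pointers at once, and when that memory fills up it has to make an extra pass over every index on the table. A minimal sketch of raising it just for a manual VACUUM session on 8.1; the value is illustrative, and 8.1 takes it as a plain number of kilobytes:

SHOW maintenance_work_mem;
SET maintenance_work_mem = 524288;   -- 512MB, for this session only
VACUUM ANALYZE intrinsiclink;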
Francisco Reyes wrote:
Alvaro Herrera writes:
How large is the database? I must admit I have never seen a database
that took 4 days to vacuum. This could mean that your database is
humongous, or that the vacuum strategy is wrong for some reason.
Especially with 16GB of RAM.
I have a setup w
Heikki Linnakangas writes:
On a serious note, the index vacuum improvements in 8.2 might help you
to cut that down. You seem to be happy with your setup, but I thought
I'd mention it..
I am really, really trying... to go to 8.2.
I have a thread on "general" going on for about a week.
I am unab
Francisco Reyes wrote:
I have a setup with several databases (the largest of which is a 1TB
database) and I do a nightly vacuum analyze for ALL databases. It takes
about 22 hours. And this is with constant updates to the large 1TB
database. This is with PostgreSQL 8.1.3
22h nightly? Wow, you h
"Gregory Stark" <[EMAIL PROTECTED]> writes:
> "Karl Wright" <[EMAIL PROTECTED]> writes:
>
>>> In this case it looks like the planner is afraid that that's exactly
>>> what will happen --- a cost of 14177 suggests that several thousand row
>>> fetches are expected to happen, and yet it's only predi
Karl Wright writes:
I'm not writing off autovacuum - just the concept that the large tables
aren't the ones that are changing. Unfortunately, they *are* the most
dynamically updated.
Would it be possible for you to partition the tables?
By date or some other fashion to try to have some tables n
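On 8.1 the usual way to partition by date is table inheritance plus CHECK constraints, with constraint_exclusion turned on so the planner can skip irrelevant partitions. A minimal sketch with made-up table and column names (routing inserts into the right child still needs a rule or trigger, or the application inserts into the child directly):

-- parent table, normally kept empty
CREATE TABLE events (
    id        bigint,
    logged_at date,
    payload   text
);

-- one child per month, constrained to its date range
CREATE TABLE events_2007_06 (
    CHECK (logged_at >= DATE '2007-06-01' AND logged_at < DATE '2007-07-01')
) INHERITS (events);

-- let the planner prune children whose CHECK constraint excludes the query range
SET constraint_exclusion = on;

-- each partition can then be vacuumed (or dropped) on its own
VACUUM ANALYZE events_2007_06;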
Karl Wright wrote:
> I did an ANALYZE on that table and repeated the explain, and got this:
>
> >>
> metacarta=> analyze intrinsiclink;
> ANALYZE
> metacarta=> explain select count(*) from intrinsiclink where
> jobid=1181766706097 and
> childidhash='7E130F3B688687757187F1638D8776ECEF3009E0'
Karl Wright <[EMAIL PROTECTED]> writes:
> I did an ANALYZE on that table and repeated the explain, and got this:
> ...
> ... even more wildly wrong.
Hmm. You might need to increase the statistics target for your larger
tables. It's probably not a big deal for queries like this one, but I'm
worri
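Raising the statistics target gives ANALYZE a larger sample and more histogram/most-common-value entries for a column, which usually improves row estimates on big, skewed tables. A minimal sketch against the table from the earlier EXPLAIN; the target of 300 is just an example (the 8.1 default is 10):

ALTER TABLE intrinsiclink ALTER COLUMN childidhash SET STATISTICS 300;
ANALYZE intrinsiclink;
-- or raise default_statistics_target in postgresql.conf to affect all columns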
"Karl Wright" <[EMAIL PROTECTED]> writes:
>> In this case it looks like the planner is afraid that that's exactly
>> what will happen --- a cost of 14177 suggests that several thousand row
>> fetches are expected to happen, and yet it's only predicting 5 rows out
>> after the filter. It's using
Alvaro Herrera writes:
How large is the database? I must admit I have never seen a database
that took 4 days to vacuum. This could mean that your database is
humongous, or that the vacuum strategy is wrong for some reason.
Especially with 16GB of RAM.
I have a setup with several databases (t
Gregory Stark writes:
VACUUM doesn't require shutting down the system, it doesn't lock any tables or
otherwise prevent other jobs from making progress. It does add extra i/o but
In addition to what Gregory pointed out, you may want to also consider using
Autovacuum. That may also help.
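On 8.1 autovacuum can also be tuned per table through the pg_autovacuum system catalog, which helps when a few large, hot tables need far more aggressive thresholds than everything else. A sketch only: the threshold values below are purely illustrative, and -1 means "use the global default":

INSERT INTO pg_autovacuum
    (vacrelid, enabled,
     vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor,
     vac_cost_delay, vac_cost_limit)
VALUES
    ('intrinsiclink'::regclass, true,
     1000, 0.05,
     500,  0.02,
     -1, -1);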
Tom Lane wrote:
Karl Wright <[EMAIL PROTECTED]> writes:
[2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on
intrinsiclink (cost=0.00..14177.29 rows=5 width=253)
[2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND
((childidhash)::text = ($3)::text))
[2007-
Karl Wright <[EMAIL PROTECTED]> writes:
> [2007-06-18 09:39:49,797]ERROR Plan: Index Scan using i1181764142395 on
> intrinsiclink (cost=0.00..14177.29 rows=5 width=253)
> [2007-06-18 09:39:49,797]ERROR Plan: Index Cond: ((jobid = $2) AND
> ((childidhash)::text = ($3)::text))
> [2007-06-18 09
Tom Lane wrote:
Karl Wright <[EMAIL PROTECTED]> writes:
Also, as I said before, I have done extensive query analysis and found
that the plans for the queries that are taking a long time are in fact
very reasonable. Here's an example from the application log of a query
that took way more time
Karl Wright <[EMAIL PROTECTED]> writes:
> Also, as I said before, I have done extensive query analysis and found
> that the plans for the queries that are taking a long time are in fact
> very reasonable. Here's an example from the application log of a query
> that took way more time than its p
A useful utility that I've found is PgFouine. It has an option to
analyze VACUUM VERBOSE logs. It has been instrumental in helping me
figure out what's been going on with my VACUUM that is taking 4+
hours, specifically tracking the tables that are taking the longest.
I highly recommend ch
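PgFouine's VACUUM report works from the output of VACUUM VERBOSE, which psql prints as INFO messages on stderr, so it is easy to capture to a file and feed to the tool. A minimal sketch; the database and table names are taken from elsewhere in this thread:

-- run from psql, capturing the INFO output with something like:
--   psql -d metacarta -c "VACUUM VERBOSE" 2> vacuum.log
VACUUM VERBOSE intrinsiclink;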
"Karl Wright" <[EMAIL PROTECTED]> writes:
> Fine - but what if the previous vacuum is still in progress, and does not
> finish in 5 minutes?
Yes, well, there are problems with this design but the situation is already
much improved in 8.2 and there are more improvements on the horizon.
But it's l
Bill Moran wrote:
In response to Karl Wright <[EMAIL PROTECTED]>:
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
In response to Karl Wright <[EMAIL PROTECTED]>:
> Alvaro Herrera wrote:
> > Karl Wright wrote:
> >> Alvaro Herrera wrote:
> >>> Karl Wright wrote:
> >>>
> This particular run lasted four days before a VACUUM became essential.
> The symptom that indicates that VACUUM is needed seems to b
Alvaro Herrera wrote:
Karl Wright wrote:
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentia
Gregory Stark wrote:
"Karl Wright" <[EMAIL PROTECTED]> writes:
This particular run lasted four days before a VACUUM became essential. The
symptom that indicates that VACUUM is needed seems to be that the CPU usage of
any given postgresql query skyrockets. Is this essentially correct?
Postgre
On Tue, 2007-06-19 at 09:37 -0400, Karl Wright wrote:
> Alvaro Herrera wrote:
> > Karl Wright wrote:
> >
> >> This particular run lasted four days before a VACUUM became essential.
> >> The symptom that indicates that VACUUM is needed seems to be that the
> >> CPU usage of any given postgresql q
Karl Wright wrote:
> Alvaro Herrera wrote:
> >Karl Wright wrote:
> >
> >>This particular run lasted four days before a VACUUM became essential.
> >>The symptom that indicates that VACUUM is needed seems to be that the
> >>CPU usage of any given postgresql query skyrockets. Is this essentially
>
"Karl Wright" <[EMAIL PROTECTED]> writes:
> This particular run lasted four days before a VACUUM became essential. The
> symptom that indicates that VACUUM is needed seems to be that the CPU usage of
> any given postgresql query skyrockets. Is this essentially correct?
Postgres is designed on t
Alvaro Herrera wrote:
Karl Wright wrote:
This particular run lasted four days before a VACUUM became essential.
The symptom that indicates that VACUUM is needed seems to be that the
CPU usage of any given postgresql query skyrockets. Is this essentially
correct?
Are you saying you weren't
Karl Wright wrote:
> This particular run lasted four days before a VACUUM became essential.
> The symptom that indicates that VACUUM is needed seems to be that the
> CPU usage of any given postgresql query skyrockets. Is this essentially
> correct?
Are you saying you weren't used to running VACUU
An overnight VACUUM helped things quite a bit. I am now getting
throughput of around 75 transactions per minute, where before I was
getting 30. Also, the CPU is no longer pegged, and the machine's load
average has dropped to an acceptable 6-10 from somewhere above 20.
While this is still pret
Tom Lane wrote:
Karl Wright <[EMAIL PROTECTED]> writes:
- At any given time, there are up to 100 of these operations going on at
once against the same database.
It sounds like your hardware is far past "maxed out". Which is odd
since tables with a million or so rows are pretty small for moder
Question: Does anyone have any idea what bottleneck I am hitting? An
index's performance should in theory scale as the log of the number of
rows - what am I missing here?
These can help people on the list help you:
- Your hardware config (CPU, RAM, disk) ?
- E
Karl Wright <[EMAIL PROTECTED]> writes:
> - At any given time, there are up to 100 of these operations going on at
> once against the same database.
It sounds like your hardware is far past "maxed out". Which is odd
since tables with a million or so rows are pretty small for modern
hardware. Wh
Karl Wright wrote:
Hi,
I have an application which really exercises the performance of
postgresql in a major way, and I am running into a performance
bottleneck with Postgresql 8.1 that I do not yet understand.
Here are the details:
- There is a primary table, with some secondary tables
- T
Hi,
I have an application which really exercises the performance of
postgresql in a major way, and I am running into a performance
bottleneck with Postgresql 8.1 that I do not yet understand.
Here are the details:
- There is a primary table, with some secondary tables
- The principal transac
Josh Berkus writes:
>> Please help in understanding how I can find out what the system is
>> waiting for or why is it taking the query so long.
> First guess would be I/O bound. The planner, at least, thinks you're
> inserting 2 million records. What kind of disk support do you have?
I don't
Abu,
> I really do not know how to find out what the query is waiting on,
> unlike Oracle DB, which provides some of the information through its
> dynamic performance views.
Yeah, we don't have that yet.
> Please help in understanding how I can find out what the system is
> waiting for or why is it
1. The function:
SELECT a.birth_date FROM (
SELECT indiv_fkey, birth_dt as birth_date, intern_last_update::date as
last_update, 'fed' as source
FROM cdm.cdm_fedcustomer
WHERE birth_dt IS NOT NULL
AND indiv_fkey = $1
UNION
SELECT indiv_fkey, birthdate as birth_date, last_up