Fresh import. I've been told this includes an ANALYZE.
Uh - no it doesn't.
Chris
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Tom Lane writes:
> Rob Nagler <[EMAIL PROTECTED]> writes:
> > q5 and q6 are too complex to discuss here,
>
> How do you expect us to get better if you don't show us the problems?
With all due respect and thanks for the massive amount of help, I have
presented the problems. q5 and q6 are a subset
Josh Berkus wrote:
> > Our new project is large, high-profile, but not as data intensive as
> > the problematic one. We are willing to commit significant funding and
> > effort to make Postgres faster. We are "business value" driven. That
> > means we solve problems practically instead of theore
Rob,
> I have had a lot push back from the core Postgres folks on the idea of
> planner hints, which would go a long way to solve the performance
> problems we are seeing.
I can tell you that the general reaction that you'll get is "let's fix the
problems with the planner instead of giving the
On Thu, 30 Oct 2003, Rob Nagler wrote:
> scott.marlowe writes:
> > t2 was 'vacuum full'ed and analyzed, right? Just guessing.
>
> Fresh import. I've been told this includes an ANALYZE.
You should probably run analyze by hand just to be sure. If the planner
is using an index scan on a table wi
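Running the statistics pass by hand is cheap insurance after a bulk load. A minimal sketch (the table name t2 is taken from the thread; VACUUM ANALYZE additionally reclaims dead space):

```sql
-- Refresh planner statistics for the table under discussion
ANALYZE t2;

-- or, for the whole database, also reclaiming dead tuples:
VACUUM ANALYZE;
```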
On Thu, 30 Oct 2003, Rob Nagler wrote:
> The vacuum problem is very serious for the problematic database to the
> point that one of my customer's customers said:
>
> However, I am having a hard time understanding why the system is so
> slow... from my perspective it seems like you have so
Rob Nagler <[EMAIL PROTECTED]> writes:
> When vacuum is running, it's going through the entire
> database, and that pretty much trashes all other queries, especially
> DSS queries. As always, it is just software, and there's got to be
> an 80/20 solution.
One thing that's been discussed but not yet t
scott.marlowe writes:
> t2 was 'vacuum full'ed and analyzed, right? Just guessing.
Fresh import. I've been told this includes an ANALYZE.
Rob
Josh Berkus writes:
> I hope that you'll stay current with PostgreSQL developments so that you can
> do a similarly thorough evaluation for your next project.
Oh, no worries. This project just happens to be a tough one. We're
heavily invested in Postgres. Other projects we maintain that use
P
On Wed, 29 Oct 2003, Rob Nagler wrote:
> Greg Stark writes:
> > > > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)
> > > > FROM t1
> > > > GROUP BY f2
> > >
> > > This doesn't solve the problem. It's the GROUP BY that is doing the
> > > wrong thing. It's grouping, then aggregating.
> >
Rob Nagler <[EMAIL PROTECTED]> writes:
> One of the reason postgres is faster on the q1-4 is that postgres
> supports OFFSET/LIMIT, and oracle doesn't. q7 and q8 are the queries
> that I've referred to recently (avg of group by).
Well the way to do offset/limit in Oracle is:
SELECT *
FROM (
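The message is cut off here; the usual pre-12c Oracle idiom is the nested-ROWNUM form. A sketch, with :min/:max as placeholder bind variables and the ORDER BY column assumed:

```sql
-- Emulating LIMIT :max OFFSET :min in Oracle with ROWNUM.
-- ROWNUM is assigned before ORDER BY in the same query block,
-- so the ordering must happen in an inner query first.
SELECT *
  FROM (SELECT a.*, ROWNUM AS rnum
          FROM (SELECT * FROM t1 ORDER BY f2) a
         WHERE ROWNUM <= :min + :max)
 WHERE rnum > :min;
```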
Rob,
> q5 and q6 are too complex to discuss here, but the fundamental issue
> is the order in which postgres decides to do things. The choice for
> me is clear: the developer time trying to figure out how to make the
> planner do the "obviously right thing" has been too high with
> postgres. The
Rob Nagler <[EMAIL PROTECTED]> writes:
> q5 and q6 are too complex to discuss here,
How do you expect us to get better if you don't show us the problems?
BTW, have you tried any of this with a 7.4beta release? Another project
that I'm aware of saw several bottlenecks in their Oracle-centric code
Greg Stark writes:
> > > SELECT a, (SELECT name FROM t2 WHERE t2.f2 = t1.f2)
> > > FROM t1
> > > GROUP BY f2
> >
> > This doesn't solve the problem. It's the GROUP BY that is doing the
> > wrong thing. It's grouping, then aggregating.
>
> But at least in the form above it will consider usin
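One rewrite along the lines being discussed is to aggregate first and look the names up afterwards, so the lookup runs once per group rather than once per row. A sketch using the thread's t1/t2/f2/name identifiers, with avg(a) assumed from the "avg of group by" description:

```sql
-- Aggregate first, then join: the planner can hash/merge-join
-- the small grouped result against t2 instead of probing per row.
SELECT g.f2, g.avg_a, t2.name
  FROM (SELECT f2, avg(a) AS avg_a
          FROM t1
         GROUP BY f2) g
  JOIN t2 ON t2.f2 = g.f2;
```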
Rob Nagler <[EMAIL PROTECTED]> writes:
> I didn't find ALTER SESSION for postgres (isn't that Oracle?), so I
> set sort_mem in the conf file to 512000, restarted postgres. Reran
> the simpler query (no name) 3 times, and it was still 27 secs.
Sorry, I don't know how that bubbled up from the dep
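For what it's worth, the per-session counterpart of Oracle's ALTER SESSION in PostgreSQL is a plain SET, which avoids editing the conf file and restarting. A sketch (7.x parameter name; the value is in kilobytes):

```sql
-- Takes effect for this session only; no restart needed.
SET sort_mem = 512000;

-- Confirm the setting before re-running the query.
SHOW sort_mem;
```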
Greg Stark writes:
> I don't understand why you would expect overwriting to win here.
> What types of updates do you do on these tables?
These are statistics that we're adjusting. I think that's pretty
normal stuff. The DSS component is the avg() of these numbers on
particular groups. The gro
Greg Stark writes:
> Sorry I was unclear. By "usual case" I meant reading, as opposed to updates.
> The size of the on-disk representation turns out to be a major determinant in
> a lot of database applications, since the dominant resource is i/o bandwidth.
> Try doing a fresh import of a large tab
Rob Nagler <[EMAIL PROTECTED]> writes:
> Greg Stark writes:
> > Note that pctfree/pctused are a big performance drain on the usual case. Try
> > setting them to 0/100 on a table that doesn't get updates (like a many-many
> > relation table) and see how much faster it is to insert and scan.
>
> R
Rob Nagler <[EMAIL PROTECTED]> writes:
> Mario Weilguni writes:
> > of course both approaches have advantages, it simply depends on the usage
> > pattern. A case where oracle really rules over postgresql are m<-->n
> > connection tables where each record consists of two foreign keys, the
> > ov
Greg Stark writes:
> Note that pctfree/pctused are a big performance drain on the usual case. Try
> setting them to 0/100 on a table that doesn't get updates (like a many-many
> relation table) and see how much faster it is to insert and scan.
Right. You can optimize each table independently. Th
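As a concrete illustration of the per-table tuning being discussed, an Oracle storage clause for an insert-only link table might look like this. This is a sketch; the table and column names are invented, and note that Oracle caps PCTUSED below 100, so 0/99 is the practical version of "0/100":

```sql
-- Oracle storage-clause sketch for a many-to-many table that never
-- receives UPDATEs: leave no free space per block (PCTFREE 0) and
-- keep blocks on the free list until nearly full (PCTUSED 99).
CREATE TABLE t_link (
  t1_id NUMBER NOT NULL,
  t2_id NUMBER NOT NULL
) PCTFREE 0 PCTUSED 99;
```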
Vivek Khera writes:
> Also, how close are you to the capacity of your disk bandwidth? I
> don't see that in your numbers. I know in freebsd I can run "systat
> -vmstat" and it gives me a percentage of utilization that lets me know
> when I'm near the capacity.
The vacuum totally consumes the sys
Mario Weilguni writes:
> of course both approaches have advantages, it simply depends on the usage
> pattern. A case where oracle really rules over postgresql are m<-->n
> connection tables where each record consists of two foreign keys, the
> overwrite approach is a big win here.
That's usually
Stephen writes:
> I ran into the same problem with VACUUM on my Linux box. If you are running
> Linux, take a look at "elvtune" or read this post:
The default values were -r 64 -w 8192. The article said this was
"optimal". I just futzed with different values anywere from -w 128 -r
128 to -r 16 -
I ran into the same problem with VACUUM on my Linux box. If you are running
Linux, take a look at "elvtune" or read this post:
http://groups.google.com/groups?q=stephen+vacuum+linux&hl=en&lr=&ie=UTF-8&selm=gRdjb.7484%241o2.77%40nntp-post.primus.ca&rnum=3
Regards, Stephen
"Rob Nagler" <[EMAIL P
Rob Nagler <[EMAIL PROTECTED]> writes:
> Incorrect. If the tuples are smaller, Oracle does the right thing. If
> there's enough space in the page, it shifts the tuples to make room.
> That's what pctfree, pctused and pctincrease allow you to control.
> It's all in memory so its fast, and I don't thi
On Thursday, 23 October 2003, 15:26, Tom Lane wrote:
> ... if all tuples are the same size, and if you never have any
> transactions that touch enough tuples to overflow your undo segment
> (or even just sit there for a long time, preventing you from recycling
> undo-log space; this is the dual
> "RN" == Rob Nagler <[EMAIL PROTECTED]> writes:
RN> Vivek Khera writes:
>> AMI or Adaptec based?
RN> Adaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and
RN> Red Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it
Cool. No need to diddle with it, then. The Adaptec
Tom Lane writes:
> ... if all tuples are the same size, and if you never have any
Incorrect. If the tuples are smaller, Oracle does the right thing. If
there's enough space in the page, it shifts the tuples to make room.
That's what pctfree, pctused and pctincrease allow you to control.
It's all in
On Thu, Oct 23, 2003 at 09:17:41AM -0400, Tom Lane wrote:
>
> Maybe, but only if it actually had reason to use a ton of memory ---
> that is, it were recycling a very large number of tuples in a single
> table. IIRC that didn't seem to be the case here.
Ah, that's what I was trying to ask. I di
Mario Weilguni <[EMAIL PROTECTED]> writes:
> I think oracle does not do garbage collect, it overwrites the tuples directly
> and stores the old tuples in undo buffers. Since most transactions are
> commits, this is a big win.
... if all tuples are the same size, and if you never have any
transact
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Wed, Oct 22, 2003 at 09:27:47PM -0400, Tom Lane wrote:
>> trace. What is causing that? Not VACUUM I don't think. It doesn't have
>> any huge memory demand. But swapping out processes could account for
> What about if you've set vacuum_mem too hi
On Wed, Oct 22, 2003 at 09:27:47PM -0400, Tom Lane wrote:
> trace. What is causing that? Not VACUUM I don't think. It doesn't have
> any huge memory demand. But swapping out processes could account for
What about if you've set vacuum_mem too high?
A
--
Andrew Sullivan
On Thursday, 23 October 2003, 01:32, Rob Nagler wrote:
> The concept of vacuuming seems to be problematic. I'm not sure why
> the database simply can't garbage collect incrementally. AGC is very
> tricky, especially AGC that involves gigabytes of data on disk.
> Incremental garbage collection
Rob Nagler <[EMAIL PROTECTED]> writes:
> Here's the vmstat 5 at a random time:
>   procs                    memory      swap        io    system       cpu
>r  b  w   swpd   free   buff  cache  si  so  bi  bo  in  cs  us sy id
>0  0  0 272372  38416  78220 375048   0   3   2
Vivek Khera writes:
> AMI or Adaptec based?
Adaptec, I think. AIC-7899 LVD SCSI is what dmidecode says, and
Red Hat/Adaptec aacraid driver, Aug 18 2003 is what comes up when it
boots. I haven't been able to use the aac utilities with this driver,
however, so it's hard to interrogate the device.
>
> "RN" == Rob Nagler <[EMAIL PROTECTED]> writes:
RN> This solution doesn't really fix the fact that VACUUM consumes the
RN> disk while it is running. I want to avoid the erratic performance on
RN> my web server when VACUUM is running.
What's the disk utilization prior to running vacuum? If
> "RN" == Rob Nagler <[EMAIL PROTECTED]> writes:
RN> Vendor: DELL Model: PERCRAID Mirror Rev: V1.0
RN> Type: Direct-Access    ANSI SCSI revision: 02
AMI or Adaptec based?
If AMI, make sure it has write-back cache enabled (and you have
battery backup!), and disable
Josh Berkus writes:
> Yes, but it will have less of an impact on the system while it's running.
We'll find out. I lowered vacuum_mem to 32000.
> What sort of disk array do you have? That seems like a lot of time
> considering how little work VACUUM is doing.
Vendor: DELL Model: PE
Manfred Koizar writes:
> ISTM you are VACUUMing too aggressively. You are reclaiming less than
> 1% and 0.005%, respectively, of tuples. I would increase FSM settings
> to ca. 1000 fsm_relations, 10 fsm_pages and VACUUM *less* often,
> say every two hours or so.
I did this. We'll see how it
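The settings Manfred suggests live in postgresql.conf. A sketch of the fragment (7.3-era parameter names; the fsm_pages figure in the quoted mail is cut off, so the value below is illustrative only):

```
# Free Space Map sizing (requires a server restart in 7.3)
max_fsm_relations = 1000
max_fsm_pages = 100000    # illustrative; the quoted figure is truncated
```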
On Fri, 17 Oct 2003 09:52:26 -0600, Rob Nagler <[EMAIL PROTECTED]>
wrote:
>INFO: Removed 8368 tuples in 427 pages.
>CPU 0.06s/0.04u sec elapsed 1.54 sec.
>INFO: Pages 24675: Changed 195, Empty 0; Tup 1031519: Vac 8368, Keep 254, UnUsed 1739.
>Total CPU 2.92s/2.58u sec elapsed 65
Rob,
> vacuum_mem might be slowing down the system? But if I reduce it,
> won't vacuuming get slower?
Yes, but it will have less of an impact on the system while it's running.
> INFO: Removed 8368 tuples in 427 pages.
> CPU 0.06s/0.04u sec elapsed 1.54 sec.
> INFO: Pages 24675: Change
Shridhar Daithankar writes:
> You should try 7.4 beta and pg_autovacuum which is a contrib module
> in CVS tip.
It's on our todo list. :)
How does pg_autovacuum differ from vacuumdb? I mean it seems to call
the vacuum operation underneath just as vacuumdb does. I obviously
didn't follow the lo
> Any suggestions how to make vacuuming more effective and reducing the
> time it takes to vacuum? I'd settle for less frequent vacuuming or
> perhaps index rebuilding. The database can be re-imported in about an
> hour.
Which version and what are your FSM settings?
Rob Nagler wrote:
It seems a simple "vacuum" (not full or analyze) slows down the
database dramatically. I am running vacuum every 15 minutes, but it
takes about 5 minutes to run even after a fresh import. Even with
vacuuming every 15 minutes, I'm not sure vacuuming is working
properly.
There ar