Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Wager, Ryan D [NTK]
All,
   I am currently working on a project for my company that entails
databasing upwards of 300 million specific parameters.  In the current
DB design, these parameters are mapped against two lookup tables (2
million and 1.5 million rows respectively), and I am having extreme
trouble getting PG to scale to a working level.  Here are my issues:
  1) The 250 million records are currently wiped and reinserted as a
daily snapshot, and the fastest way I have found to do this, COPY from
a file, is nowhere near fast enough.  SQL*Loader from Oracle does some
things that I need, i.e. direct-path writes to the database files
(bypassing the SQL layer) and skipping index maintenance, which saves
a ton of time (dropping the indexes, COPYing 250 million records, then
recreating the indexes just takes way too long; a rough sketch of the
load cycle follows below).
  2) Finding a way to keep this many records in a fashion that can be
easily queried.  I even tried breaking the data up into almost 2800
separate tables, basically pre-partitioned views of the data.  If that
is a workable method I can do it that way, but when I tried it, VACUUM
and the COPYs all slowed down dramatically.
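For reference, the nightly load currently looks roughly like the
following (a minimal sketch only; the table, index, and file names are
placeholders, not our real schema):

    -- nightly snapshot reload (names are illustrative only)
    DROP INDEX parameters_param_idx;   -- index maintenance during COPY is the killer
    TRUNCATE parameters;               -- wipe yesterday's snapshot
    COPY parameters FROM '/data/daily_snapshot.dat';
    CREATE INDEX parameters_param_idx ON parameters (param_id);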
   If there is anyone who can give me some tuning parameters or design
help on this, it would be ridiculously appreciated.  I have already
built this in Oracle and it works, but we don't want to have to pay
the monster if something as wonderful as Postgres can handle it.


Ryan Wager

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Josh Berkus
Sent: Tuesday, January 04, 2005 12:06 PM
To: pgsql-performance@postgresql.org
Cc: Yann Michel
Subject: Re: [PERFORM] query rewrite using materialized views

Yann,

 are there any plans for rewriting queries to preexisting materialized
 views?  I mean, rewrite a query (within the optimizer) to use a
 materialized view instead of the originating table?

Automatically, and by default, no.  Using the RULES system?  Yes, you
can already do this, and the folks on the MattView project on pgFoundry
are working to make it easier.
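For the archives, here is a minimal sketch of the RULES-style approach
(the table, view, and column names are invented for illustration): keep
the pre-aggregated data in an ordinary table, and let a view's ON
SELECT rewrite rule redirect queries to it instead of the base table.

    -- summary table playing the role of the materialized view
    CREATE TABLE sales_summary AS
      SELECT region, sum(amount) AS total_amount
      FROM sales
      GROUP BY region;

    -- queries go through the view; its rewrite rule expands them
    -- against the pre-aggregated table, not the big base table
    CREATE VIEW sales_by_region AS
      SELECT region, total_amount FROM sales_summary;

    -- refresh by hand (or from triggers on the base table):
    BEGIN;
    DELETE FROM sales_summary;
    INSERT INTO sales_summary
      SELECT region, sum(amount) FROM sales GROUP BY region;
    COMMIT;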

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] query rewrite using materialized views

2005-01-04 Thread Wager, Ryan D [NTK]
Rod,
  I do this; PG gets forked many times.  It is tough to find the
maximum number of concurrent loads, but I have a Proc::Queue-based Perl
driver that handles all of the COPY calls.  I have a quad-CPU machine.
Each COPY only hits one CPU at about 2.1%, but anything over about 5
concurrent COPYs kicks the load average up.

  I'll get some EXPLAIN ANALYZE output and table structures out there
pronto.

-----Original Message-----
From: Rod Taylor [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, January 04, 2005 1:02 PM
To: Wager, Ryan D [NTK]
Cc: Postgresql Performance
Subject: Re: [PERFORM] query rewrite using materialized views

   1) The 250 million records are currently wiped and reinserted as a
 daily snapshot, and the fastest way I have found to do this, COPY
 from a file, is nowhere near fast enough.  SQL*Loader from Oracle
 does some things that I need, i.e. direct-path writes to the database
 files (bypassing the SQL layer) and skipping index maintenance, which
 saves a ton of time (dropping the indexes, COPYing 250 million
 records, then recreating the indexes just takes way too long).

If you have the hardware for it, instead of doing one COPY, run one
COPY command per CPU (until your I/O is maxed out anyway) and divide
the work amongst them. I can push through 100MB/sec using methods like
this, which makes loading 100GB of data much faster.
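As a sketch (assuming the snapshot file has already been split into
per-CPU chunks; the table and file names here are hypothetical), each
chunk gets its own connection so the backends land on different CPUs:

    -- session 1 (its own psql connection / backend):
    COPY parameters FROM '/data/snapshot.part1';

    -- session 2, started concurrently:
    COPY parameters FROM '/data/snapshot.part2';

    -- ...one session per CPU, until the disks saturate.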

Ditto for indexes. Don't create a single index on one CPU and wait --
send off one index creation command per CPU.
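Again as an illustration (the index and column names are invented),
issue each CREATE INDEX from a separate connection rather than
serially; the locks they take do not conflict with each other:

    -- session 1:
    CREATE INDEX parameters_lookup1_idx ON parameters (lookup1_id);

    -- session 2, at the same time:
    CREATE INDEX parameters_lookup2_idx ON parameters (lookup2_id);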

   2) Finding a way to keep this many records in a fashion that can be
 easily queried.  I even tried breaking the data up into almost 2800
 separate tables, basically pre-partitioned views of the data.  If
 that is a workable method I can do it that way, but when I tried it,
 VACUUM and the COPYs all slowed down dramatically.

Can you send us EXPLAIN ANALYZE output for the slow selects and a
little insight into what you're doing? The basic table structure and
the indexes involved would be handy. You may change column and table
names if you like.
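For example, something along these lines for each problem query (the
query itself is just a stand-in, not your schema):

    EXPLAIN ANALYZE
      SELECT p.*
      FROM parameters p
      JOIN lookup1 l ON l.id = p.lookup1_id
      WHERE l.code = 'ABC';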

