[EMAIL PROTECTED] wrote:
On Wed, 6 Jun 2007, Steinar H. Gunderson wrote:
On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:
I think the main argument for partitioning is when you are interested in
being able to drop whole partitions cheaply.
Wasn't there also talk about adding the
[EMAIL PROTECTED] wrote:
On Tue, 5 Jun 2007, Tom Lane wrote:
[EMAIL PROTECTED] writes:
however I really don't understand why it is more efficient to have a 5B
line table that you do a report/query against 0.1% of than it is to have
1000 different tables of 5M lines each and do a report/query
Tom Lane wrote:
The degree to which this is a win is *highly* debatable, and certainly
depends on a whole lot of assumptions about filesystem performance.
You also need to assume that constraint-exclusion in the planner is
pretty doggone cheap relative to the table searches, which means it
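The constraint exclusion being discussed works on 8.x-style inheritance partitioning; a minimal sketch (table, column, and partition names are illustrative, not taken from the thread):

```sql
-- Parent table plus one child per partition key value.
CREATE TABLE responses (
    survey_id  integer NOT NULL,
    answer     text
);

CREATE TABLE responses_s42 (
    CHECK (survey_id = 42)
) INHERITS (responses);

-- With constraint_exclusion on, the planner proves from each child's
-- CHECK constraint which partitions cannot match the WHERE clause and
-- skips scanning them entirely.
SET constraint_exclusion = on;
SELECT count(*) FROM responses WHERE survey_id = 42;
```

The planner cost Tom refers to is exactly this proof step: it runs once per child table per query, which is why it matters how many partitions you have.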
[EMAIL PROTECTED] wrote:
various people (not database experts) are pushing to install Oracle
cluster so that they can move all of these to one table with a
customerID column.
They're blowing smoke if they think Oracle can do this. One of my applications
had this exact same problem --
Craig James wrote:
Oracle is simply not better than Postgres in this regard. As far as I
know, there is only one specific situation (discussed frequently here)
where Oracle is faster: the count(), min() and max() functions, and I
know significant progress has been made since I started using
Scott Marlowe wrote:
OTOH, there are some things, like importing data, which are MUCH faster
in pgsql than in the big database.
An excellent point, I forgot about this. The COPY command is the best thing
since the invention of a shirt pocket. We have a database-per-customer design,
and one
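The COPY command being praised loads a whole file in one statement rather than one INSERT per row; a sketch, with a hypothetical table and file path:

```sql
-- Server-side bulk load; the file must be readable by the backend.
COPY responses (survey_id, answer) FROM '/tmp/responses.tsv';
```

From a client, psql's `\copy` variant does the same thing but reads the file on the client side, which avoids needing server filesystem access.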
On 6/6/07, Craig James [EMAIL PROTECTED] wrote:
They're blowing smoke if they think Oracle can do this.
Oracle could handle this fine.
Oracle fell over dead, even with the best indexing possible,
tuned by the experts, and using partitions keyed to the
customerID.
I don't think so, whoever
On 6/6/07, Craig James [EMAIL PROTECTED] wrote:
Last time I checked, Oracle didn't have anything close to this.
When did you check, 15 years ago? Oracle has direct-path
import/export and data pump; both of which make generic COPY look like
a turtle. The new PostgreSQL bulk-loader takes
On Wed, Jun 06, 2007 at 12:06:09AM +0200, Steinar H. Gunderson wrote:
Wasn't there also talk about adding the ability to mark individual
partitions as read-only, thus bypassing MVCC and allowing queries
to be satisfied using indexes only?
I have a (different) problem that read-only data
On Tue, Jun 05, 2007 at 03:31:55PM -0700, [EMAIL PROTECTED] wrote:
various people (not database experts) are pushing to install Oracle
cluster so that they can move all of these to one table with a customerID
column.
Well, you will always have to deal with the sort of people who will
base
On 6/6/07, Andrew Sullivan [EMAIL PROTECTED] wrote:
Well, you will always have to deal with the sort of people who will
base their technical prescriptions on the shiny ads they read in
SuperGlobalNetworkedExecutiveGoFast, or whatever rag they're reading
these days.
Always.
I usually
Jonah H. Harris wrote:
On 6/6/07, Craig James [EMAIL PROTECTED] wrote:
They're blowing smoke if they think Oracle can do this.
Oracle could handle this fine.
Oracle fell over dead, even with the best indexing possible,
tuned by the experts, and using partitions keyed to the
customerID.
I
On 6/6/07, Craig James [EMAIL PROTECTED] wrote:
You didn't read my message. I said that *BOTH* Oracle
and Postgres performed well with table-per-customer.
Yes, I did. My belief is that Oracle can handle all customers in a
single table.
The technical question is simple: Table-per-customer
[EMAIL PROTECTED] wrote:
On Mon, 4 Jun 2007, Scott Marlowe wrote:
Gregory Stark wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create
thousands of
tables like this?
Sometimes. But usually it's not a good idea.
What you're
So, partitioning in PSQL 8 is workable, but breaking the table up into
actual separate tables is not?
Another solution we have proposed is having 'active' and 'completed' tables.
So, rather than thousands, we'd have four tables:
responders_active
responders_completed
responses_active
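Retiring a finished survey under that active/completed split could look like the following hedged sketch (names and the survey id are hypothetical):

```sql
-- Move a finished survey's rows out of the hot "active" table into
-- the "completed" one, atomically.
BEGIN;
INSERT INTO responses_completed
    SELECT * FROM responses_active WHERE survey_id = 42;
DELETE FROM responses_active WHERE survey_id = 42;
COMMIT;
```

The win is that inserts and vacuums only ever touch the small active table, while the completed table grows append-only.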
Thomas Andrews wrote:
On 6/5/07 12:48 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
On Mon, 4 Jun 2007, Scott Marlowe wrote:
Gregory Stark wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it
Scott Marlowe [EMAIL PROTECTED] writes:
Sorry, I think I initially read your response as "Postgres doesn't really get
any faster by breaking the tables up" without the "like that" part.
Well breaking up the tables like that or partitioning, either way should be
about equivalent really. Breaking
Gregory Stark wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
Sorry, I think I initially read your response as "Postgres doesn't really get
any faster by breaking the tables up" without the "like that" part.
Well breaking up the tables like that or partitioning, either way should be
about
On Tue, 5 Jun 2007, Gregory Stark wrote:
Scott Marlowe [EMAIL PROTECTED] writes:
Sorry, I think I initially read your response as "Postgres doesn't really get
any faster by breaking the tables up" without the "like that" part.
Well breaking up the tables like that or partitioning, either way
On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:
I think the main argument for partitioning is when you are interested in
being able to drop whole partitions cheaply.
Wasn't there also talk about adding the ability to mark individual partitions
as read-only, thus bypassing MVCC and
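Tom's "drop whole partitions cheaply" argument comes down to this being a catalog operation rather than a row-by-row DELETE plus vacuum; a sketch, assuming an inheritance child named as in no particular real schema:

```sql
-- Instant, regardless of how many rows the partition holds:
DROP TABLE responses_s42;

-- Or (8.2+) detach it from the parent but keep the data around:
ALTER TABLE responses_s42 NO INHERIT responses;
```

Deleting the same rows from one big table would dirty every affected page and leave dead tuples for vacuum to reclaim.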
[EMAIL PROTECTED] writes:
however I really don't understand why it is more efficient to have a 5B
line table that you do a report/query against 0.1% of than it is to have
1000 different tables of 5M lines each and do a report/query against 100%
of.
Essentially what you are doing when you
On Tue, 5 Jun 2007, Tom Lane wrote:
[EMAIL PROTECTED] writes:
however I really don't understand why it is more efficient to have a 5B
line table that you do a report/query against 0.1% of than it is to have
1000 different tables of 5M lines each and do a report/query against 100%
of.
On Wed, 6 Jun 2007, Steinar H. Gunderson wrote:
On Tue, Jun 05, 2007 at 05:59:25PM -0400, Tom Lane wrote:
I think the main argument for partitioning is when you are interested in
being able to drop whole partitions cheaply.
Wasn't there also talk about adding the ability to mark individual
On Mon, 2007-06-04 at 13:40 -0400, Thomas Andrews wrote:
I have several thousand clients. Our clients do surveys, and each survey
has two tables for the client data,
responders
responses
Frequent inserts into both tables.
Right now, we are seeing significant time during inserts
Oh, and we vacuum every day. Not sure about REINDEX, but I doubt we
have done that.
=thomas
Mark Lewis wrote:
On Mon, 2007-06-04 at 13:40 -0400, Thomas Andrews wrote:
I have several thousand clients. Our clients do surveys, and each survey
has two tables for the client data,
responders
We're running 7.4 but will be upgrading to 8.2.
The responses table has 20,000,000 records.
Sometimes (but not all the time) an insert into the responses table can
take 5-6 seconds.
I guess my real question is, does it ever make sense to create thousands
of tables like this?
=thomas
Mark
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create thousands of
tables like this?
Sometimes. But usually it's not a good idea.
What you're proposing is basically partitioning, though you may not actually
need to put all the partitions
On 6/4/07 3:43 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create thousands of
tables like this?
Sometimes. But usually it's not a good idea.
What you're proposing is basically
On 6/4/07, Thomas Andrews [EMAIL PROTECTED] wrote:
On 6/4/07 3:43 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create
thousands of
tables like this?
Sometimes. But usually it's not a good
Linux 2.4.9, if I'm reading this right.
=thomas
On 6/4/07 4:08 PM, Y Sidhu [EMAIL PROTECTED] wrote:
On 6/4/07, Thomas Andrews [EMAIL PROTECTED] wrote:
On 6/4/07 3:43 PM, Gregory Stark [EMAIL PROTECTED] wrote:
Thomas Andrews [EMAIL PROTECTED]
mailto:[EMAIL PROTECTED] writes:
Thomas Andrews [EMAIL PROTECTED] writes:
Clustering sounds like it might be a really good solution. How long does a
cluster command usually take on a table with 50,000,000 records? Is it
something that can be run daily/weekly?
ouch, ok, with 50M records cluster isn't going to be quick
can tell you our vacuum every night is taking 2 hours and that disk IO is
the real killer - the CPU rarely gets higher than 20% or so.
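For reference, the 7.4/8.2-era CLUSTER syntax rewrites the whole table in index order; the index name here is hypothetical:

```sql
-- Takes an exclusive lock and rewrites all rows in index order, so on
-- a 50M-row table this is a maintenance-window job, not a nightly one.
CLUSTER responses_survey_idx ON responses;
```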
How many gigabytes of stuff do you have in this database ?
( du -sh on the *right* directory will suffice, don't include the logs
etc, aim for
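The measurement being asked for might look like this (the data-directory path and database OID are hypothetical; `pg_database` maps a database name to its OID subdirectory under `base/`):

```shell
# Size only this database's heap and index files, excluding WAL
# (pg_xlog) and the server logs.
du -sh /var/lib/postgresql/data/base/16384
```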
Gregory Stark wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create thousands of
tables like this?
Sometimes. But usually it's not a good idea.
What you're proposing is basically partitioning, though you may not actually
On Mon, 4 Jun 2007, Scott Marlowe wrote:
Gregory Stark wrote:
Thomas Andrews [EMAIL PROTECTED] writes:
I guess my real question is, does it ever make sense to create thousands
of
tables like this?
Sometimes. But usually it's not a good idea.
What you're proposing is