See comments . . . thanks for the feedback.
--- Christopher Browne <[EMAIL PROTECTED]> wrote:
> The world rejoiced as Mischa Sandberg
> <[EMAIL PROTECTED]> wrote:
> > Mark Cotner wrote:
> >> Requirements:
> >> Merge table definition equivalent. We use these
> >> extensively.
> > Looked all over mysql.com etc, and afaics merge
> > table is indeed exactly a view of a union-all. Is
> > that right?
> > PG supports views, of course, as well (now) as
> > tablespaces, allowing you to split tables/tablesets
> > across multiple disk systems. PG is also pretty
> > efficient in query plans on such views, where (say)
> > you make one column a constant (identifier, sort of)
> > per input table.
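For anyone following along, the union-all view Mischa describes might look like this (table and column names are invented for illustration):

```sql
-- Hypothetical per-period tables standing in for MySQL merge-table members.
CREATE TABLE stats_2004_09 (host text, hits bigint);
CREATE TABLE stats_2004_10 (host text, hits bigint);

-- The "merge table" equivalent: a view of a UNION ALL, with a
-- constant column identifying the source table.
CREATE VIEW stats AS
    SELECT '2004_09' AS period, host, hits FROM stats_2004_09
    UNION ALL
    SELECT '2004_10' AS period, host, hits FROM stats_2004_10;

-- Filtering on the constant lets the planner avoid branches
-- that cannot match.
SELECT host, hits FROM stats WHERE period = '2004_10';
```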
> The thing that _doesn't_ work well with these sorts
> of UNION views is when you do self-joins. Supposing
> you have 10 members, a self-join leads to a 100-way
> join, which is not particularly pretty.
> I'm quite curious as to how MySQL(tm) copes with
> this, although it may not be able to take place; they
> may not support self-joins on merge tables at all.
> >> Um, gonna sound silly, but the web interface has
> >> to remain "snappy" under load. I don't see this as
> >> a major concern since you don't require table
> >> locking.
> > Agreed. It's more in your warehouse design, and
> > intelligent bounding of queries. I'd say PG's query
> > analyzer is a few years ahead of MySQL for large
> > and complex queries.
> The challenge comes in if the application has had
> enormous amounts of effort put into it to attune it
> exactly to MySQL(tm)'s feature set. The guys working
> on RT/3 have found this a challenge; they had rather
> a lot of dependencies on its case-insensitive string
> handling, causing considerable grief.
Not so much; I've tried to be as agnostic as possible.
Much of the more advanced mining that I've written is
kinda MySQL specific, but needs to be rewritten as
stored procedures anyway.
> > On the other hand, if you do warehouse-style
> > loading (Insert, or PG COPY, into a temp table; and
> > then 'upsert' into the perm table), I can guarantee
> > 2500 inserts/sec is no problem.
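The temp-table-then-upsert pattern Mischa mentions could be sketched like this (table names and the file path are invented; since there's no single upsert statement here, it's an UPDATE followed by an INSERT of the leftovers):

```sql
BEGIN;
CREATE TEMP TABLE staging (host text, hits bigint);
COPY staging FROM '/tmp/batch.dat';   -- bulk-load the raw batch

-- Fold rows that already exist in the permanent table...
UPDATE stats SET hits = stats.hits + s.hits
FROM staging s WHERE stats.host = s.host;

-- ...then insert the rows that don't exist yet.
INSERT INTO stats (host, hits)
SELECT host, hits FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM stats t WHERE t.host = s.host);
COMMIT;
```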
> The big wins are thus:
> 1. Group plenty of INSERTs into a single transaction.
> 2. Better still, use COPY to cut parsing costs
> plenty more.
> 3. Adding indexes _after_ the COPY is a further win.
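Concretely, those three wins look something like this (names are illustrative only):

```sql
-- 1. Many INSERTs in one transaction instead of one commit each:
BEGIN;
INSERT INTO stats VALUES ('a', 1);
INSERT INTO stats VALUES ('b', 2);
-- ... thousands more ...
COMMIT;

-- 2. Better still, a single COPY replaces them all:
COPY stats FROM '/tmp/batch.dat';

-- 3. Build indexes after the data is in, not before:
CREATE INDEX stats_host_idx ON stats (host);
```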
> Another possibility is to do clever things with
> stored procs; load
> incoming data using the above optimizations, and
> then run stored
> procedures to use some more or less fancy logic to
> put the data where
> it's ultimately supposed to be. Having the logic
> running inside the
> engine is the big optimization.
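A stored procedure doing that in-engine logic might look like this sketch in PL/pgSQL (function and table names are invented, and this assumes a PostgreSQL version with dollar quoting):

```sql
CREATE FUNCTION merge_staging() RETURNS void AS $$
BEGIN
    -- Fold staged rows into existing ones...
    UPDATE stats SET hits = stats.hits + s.hits
    FROM staging s WHERE stats.host = s.host;

    -- ...insert the genuinely new ones...
    INSERT INTO stats (host, hits)
    SELECT host, hits FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM stats t WHERE t.host = s.host);

    -- ...and clear the staging area for the next batch.
    TRUNCATE staging;
END;
$$ LANGUAGE plpgsql;

-- Run after each bulk load:
SELECT merge_staging();
```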
Agreed, I did some preliminary testing today and am
very impressed. I wasn't used to running analyze
after a data load, but once I did that everything was
much faster.
My best results from MySQL bulk inserts were around 36k
rows per second on a fairly wide table. Today I got
42k using the COPY command, and with the post-insert
analyze the results were similar. These are excellent
numbers. It basically means we could have our
cake (great features) and eat it too (performance
that's good enough to run the app).
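For the record, the post-load step is just one statement (table name invented):

```sql
COPY stats FROM '/tmp/batch.dat';
ANALYZE stats;   -- refresh planner statistics after the bulk load
```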
Queries from my test views were equally pleasing. I
won't bore you with the details just yet, but
PostgreSQL is doing great. Not that you all are
surprised, I'm sure.
> Rules of the Evil Overlord #198. "I will
> remember that any
> vulnerabilities I have are to be revealed strictly
> on a need-to-know
> basis. I will also remember that no one needs to
> know."
> ---------------------------(end of broadcast)---------------------------
> TIP 8: explain analyze is your friend
---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]