Hi Tom,
Sorry, I didn't ask the right question. I meant to ask "Why does it estimate a
smaller cost for the seqscan?"
With some further staring I was able to find the bad estimate and fix it by
increasing the relevant statistics target.
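(In case it helps anyone else: raising a per-column statistics target and re-analyzing
looks roughly like the lines below. The table and column names are made up for
illustration; 8.x syntax.)

  ALTER TABLE orders ALTER COLUMN status SET STATISTICS 500;  -- default target is 10
  ANALYZE orders;  -- rebuild the MCV list / histogram with the larger sample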
Thanks,
Brian
----- Original Message -----
From: Tom Lane
Krishna,
> I'm a performance engineer, quite interested in getting deep into the
> PGSQL performance enhancement effort. In that regard, I have the
> following questions:
> 1. Is there a benchmarking setup that I can access online?
> 2. What benchmarks are we running, for performance numbers?
>
You'll find that PostGIS does a pretty good job of selectivity
estimation.
P
On 13-Feb-07, at 9:09 AM, Guillaume Smet wrote:
Hi all,
Following the work of Mark Stosberg on this list (thanks Mark!), I
optimized our slow proximity queries by using cube, earthdistance
(shipped with contrib) an
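(A minimal sketch of the cube/earthdistance approach, using made-up table and
column names; earth_box() takes a radius in metres and the containment test can
use a GiST index built over ll_to_earth():)

  -- functional GiST index on the coordinates (hypothetical table "places")
  CREATE INDEX places_earth_idx ON places USING gist (ll_to_earth(lat, lon));

  -- everything within 10 km of a given point
  SELECT id, name
  FROM places
  WHERE earth_box(ll_to_earth(48.85, 2.35), 10000) @> ll_to_earth(lat, lon);

Since earth_box() is a bounding-box test, an extra earth_distance() filter can be
added when the exact radius matters.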
Unless you specify otherwise, every insert carries its own transaction
begin/commit. That's a lot of overhead for a single insert, no? Why
not use a single transaction for, say, every 1000 inserts? That would
strike a nice balance between safety and efficiency.
pseudo code for the insert:
Begin
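(A minimal SQL sketch of the batching idea, with a made-up table; the point is
that one COMMIT, and therefore one fsync, covers the whole batch:)

  -- hypothetical table: CREATE TABLE log_entries (msg text);
  BEGIN;
  INSERT INTO log_entries (msg) VALUES ('row 1');
  INSERT INTO log_entries (msg) VALUES ('row 2');
  -- ... up to ~1000 inserts ...
  COMMIT;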
On Tuesday 13 February 2007 14:51, Tom Lane wrote:
> "Chuck D." <[EMAIL PROTECTED]> writes:
> > It is still using that sequence scan on the view after the Append for the
> > us_city and world_city tables. Any reason why the view won't use the
> > indexes when it is JOINed to another table but it w
On Wed, Feb 14, 2007 at 01:07:23PM -0500, Tom Lane wrote:
> Mark Stosberg <[EMAIL PROTECTED]> writes:
> > Your suggestion about the pet_state index was right on. I tried
> > "Analyze" on it, but still got the same bad estimate. However, I then
> > used "reindex" on that index, and that fixed the
Approx. 200 requests a sec. should be a problem unless the queries are heavy.
Thanks Claus, that's good news!
I'm having a reputable vendor build the box and test it for me before
delivering it. About the bottom line of your message: did you mean 'should not
be a problem'? I wonder what the main reason for
Hi Merlin,
On 2/14/07 8:20 AM, "Merlin Moncure" <[EMAIL PROTECTED]> wrote:
> I am curious what your take is on the maximum insert performance, in
> MB/sec, of large bytea columns (toasted), and how much, if any, Greenplum
> was able to advance this over the baseline. I am asking on behalf of
> anot
Paul,
On 2/14/07, Paul Ramsey <[EMAIL PROTECTED]> wrote:
You'll find that PostGIS does a pretty good job of selectivity
estimation.
PostGIS is probably what I'm going to experiment with in the future. The
only problem is that it's really big for a very basic need.
With my current method, I don't ev
Mark Stosberg <[EMAIL PROTECTED]> writes:
> Your suggestion about the pet_state index was right on. I tried
> "Analyze" on it, but still got the same bad estimate. However, I then
> used "reindex" on that index, and that fixed the estimate accuracy,
> which made the query run faster!
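(For reference, the sequence described above boils down to something like the
following; the table and index names are guesses, not the actual schema:)

  ANALYZE pets;                      -- refresh planner statistics; didn't change the estimate here
  REINDEX INDEX pets_pet_state_idx;  -- rebuild the index; the query ran faster afterwards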
No, the es
Thanks Claus, that's good news!
I'm having a reputable vendor build the box and test it for me before
delivering it. About the bottom line of your message: did you mean 'should not be a
problem'? I wonder what the main reason for your improvement is; your RAM was
increased by a factor of 2, but 4-way Opteron
Merlin Moncure wrote:
On 2/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
There are two things wrong here: first, that the estimated row
count is only 20% of actual; it should certainly not be that far
off for such a simple condition. I wonder if your vacuum/analyze
procedures are actually working.
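(One quick sanity check that analyze is really running, assuming 8.2 or later
where the statistics views carry these timestamps; the table name is made up:)

  SELECT relname, last_analyze, last_autoanalyze, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'pets';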
On 2/14/07, Luke Lonergan <[EMAIL PROTECTED]> wrote:
Here's one:
Insert performance is limited to about 10-12 MB/s no matter how fast the
underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts
this to perhaps 20 MB/s. We've found that the biggest time consumer in the
profi
Here's one:
Insert performance is limited to about 10-12 MB/s no matter how fast the
underlying I/O hardware. Bypassing the WAL (write ahead log) only boosts
this to perhaps 20 MB/s. We've found that the biggest time consumer in the
profile is the collection of routines that "convert to datum".
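(For context: one way to get the "bypass the WAL" effect on a bulk load is to
stage the data in a temporary table, since temp tables are not WAL-logged at
all; the table name and file path here are made up:)

  BEGIN;
  CREATE TEMP TABLE staging_bulk (id int, payload bytea);
  COPY staging_bulk FROM '/tmp/bulk_data.copy';  -- no WAL traffic for a temp table
  COMMIT;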
On 2/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
There are two things wrong here: first, that the estimated row count is
only 20% of actual; it should certainly not be that far off for such a
simple condition. I wonder if your vacuum/analyze procedures are
actually working. Second, you mentioned
Have you tried pgbench yet?
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 2/14/07, Krishna Kumar <[EMAIL PROTECTED]> wrote:
Hello All,
I'm a performance engineer, quite interested in getting deep into the
PGSQL performance enhancement effort. In that regard, I have the following
questio
Hello All,
I'm a performance engineer, quite interested in getting deep into the PGSQL
performance enhancement effort. In that regard, I have the following
questions:
1. Is there a benchmarking setup that I can access online?
2. What benchmarks are we running, for performance numbers?
3. What a
I am about to pull the trigger on a new machine after analyzing some
discussions I posted here last year. I've been trying to spec out a reliable
and powerful enough machine that I won't have to replace for some time.
Currently I've been using a dual Xeon 3.06GHz with 4GB of RAM and utilizin
Brian Herlihy <[EMAIL PROTECTED]> writes:
> I am having trouble understanding why a seqscan is chosen for this query.
As far as anyone can see from this output, the planner's decisions are
correct: it prefers the plans with the smaller estimated cost. If you
want us to take an interest, provide s
Hi,
I am having trouble understanding why a seqscan is chosen for this query.
In practice the seqscan is very expensive, whereas the nested loop is usually
quite fast, even with several hundred rows returned from meta_keywords_url.
The server is running version 8.1.3, and both tables were analy
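(A common way to compare the two plans with real timings is to disable seqscans
for the session and run EXPLAIN ANALYZE both ways; substitute the actual problem
query for <query>:)

  EXPLAIN ANALYZE <query>;      -- the plan the planner prefers (seqscan)
  SET enable_seqscan = off;     -- session-local, for experimentation only
  EXPLAIN ANALYZE <query>;      -- the nested-loop plan, with actual rows and times
  RESET enable_seqscan;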