es to the absolute minimum
was one of the design goals.
Reducing the total amount of IO to the absolute minimum should
help as well.
Ron
-Original Message-
From: Kevin Grittner <[EMAIL PROTECTED]>
Sent: Sep 27, 2005 11:21 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 12:27 AM
>To: Ron Peacetree <[EMAIL PROTECTED]>
>Cc: pgsql-hackers@postgresql.org, pgsql-performance@postgresql.org
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
>You are e
>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 12:33 AM
>Subject: Sequential I/O Cost (was Re: [PERFORM] A Better External Sort?)
>
>On Wed, 2005-09-28 at 12:03 -0400, Ron Peacetree wrote:
>>>From: "Jeffrey W. Baker" <[EMAIL
ntial passes, returning the first value
>>>on the first pass and the second value on the second pass.
>>> This will be faster than the method you propose.
>>
>>Ron Peacetree:
>>1= No that was not my main example. It was the simplest example
>>used to frame
all HD IO for the process is only
two effectively sequential passes through the data.
Subsequent retrieval of the sorted information from HD can be
done at full HD streaming speed and whatever we've decided to
save to HD can be reused later if we desire.
Hope this helps,
Ron
a small minimum (around 128mb) had no benefit on
>the overall index creation speed.
>
No surprise. The process is severely limited by the abysmally
slow HD IO.
Ron
---(end of broadcast)---
TIP 6: explain analyze is your friend
could replace the
present sorting code with infinitely fast sorting code and we'd
still be scrod performance-wise.
So why does basic IO suck so badly?
Ron
-Original Message-
From: Josh Berkus
Sent: Sep 30, 2005 1:23 PM
To: Ron Peacetree <[EMAIL PROTECTED]>
Cc: pg
overhead to access a particular piece of data.
The simplest solution is for us to implement a new VFS compatible
filesystem tuned to exactly our needs: pgfs.
We may be able to avoid that by some amount of hacking or
modifying of the current FSs we use, but I suspect it would be more
work for
rmance fixed...
(because until we do, _nothing_ is going to help us as much)
Ron
-Original Message-
From: Tom Lane <[EMAIL PROTECTED]>
Sent: Oct 1, 2005 2:01 AM
Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
"Jeffrey W. Baker" <[EMAIL PROTECTED]> write
code intimately enough to make changes this
deep in the core functionality, nor is there enough time for me to
do so if we are going to be timely enough to get this into 8.2
(and no, I can't devote 24x7 to doing pg development unless
someone is going to replace my current ways of paying my bills
of concern. If any of these IO rates came
from any reasonable 300+MBps RAID array, then they are BAD.
What your simple experiment really does is prove We Have A
Problem (tm) with our IO code at either or both of the OS or the pg
level(s).
Ron
-Original Message-
From: Martijn van Oosterho
d all those features, I'd buy it at even 2x or possibly
even 3x its current price.
8, 16, or 32GB (using 1, 2, or 4GB DIMMs respectively in an 8 slot
form factor) of very fast temporary work memory (sorting
anyone ;-) ). Yum.
Ron
-Original Message-
From: Dan Harris <[EMAI
Jeff, are those _burst_ rates from HD buffer or _sustained_ rates from
actual HD media? Rates from IO subsystem buffer or cache are
usually considerably higher than Average Sustained Transfer Rate.
Also, are you measuring _raw_ HD IO (bits straight off the platters, no
FS or other overhead) or _c
etc faster than that, do we
see any performance increase?
If a modest CPU can drive a DB IO rate of 25MBps, but that rate
does not go up regardless of how much extra CPU we throw at
it...
Ron
-Original Message-
From: Josh Berkus
Sent: Oct 3, 2005 6:03 PM
To: "Jeffrey W. Baker&quo
OK, change "performance" to "single thread performance" and we
still have a valid starting point for a discussion.
Ron
-Original Message-
From: Gregory Maxwell <[EMAIL PROTECTED]>
Sent: Oct 3, 2005 8:19 PM
To: Ron Peacetree <[EMAIL PROTECTED]>
Subject:
rely written to and where data loss would not be
a disaster, "tmpfs" can be combined with an asynchronous writer
process pushing updates to HD. Just remember that a power hit
means that
The (much) more expensive alternative is to buy SSD(s) and put
the critical tables on it at load time.
Ron
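The tmpfs-plus-asynchronous-writer idea above can be sketched as a toy in Python (the structure and names are invented for illustration; a real implementation would need crash handling):

```python
import os
import queue
import tempfile
import threading

# Toy sketch (invented structure) of the asynchronous-writer idea: updates
# are accepted immediately into an in-memory queue (standing in for tmpfs)
# while a background thread pushes them to durable storage. A power hit
# loses whatever is still queued -- exactly the caveat in the post above.
fd, path = tempfile.mkstemp()
os.close(fd)
q = queue.Queue()

def writer():
    with open(path, "a") as f:
        while True:
            item = q.get()
            if item is None:          # sentinel: shut down cleanly
                break
            f.write(item + "\n")
            f.flush()
            os.fsync(f.fileno())      # durability happens here, later

t = threading.Thread(target=writer)
t.start()
for update in ["row1", "row2", "row3"]:
    q.put(update)                     # returns to the caller immediately
q.put(None)
t.join()

with open(path) as f:
    lines = f.read().splitlines()
os.remove(path)
print(lines)
```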
t dependent, i.e. having application- or domain-specific
awareness, caching strategies are, the better they are.
Maybe after we do all we can about physical IO and sorting
performance I'll take on the religious fanatics on this one.
One problem set at a time.
Ron
-Original Message-
From:
hat we can tell _exactly_ where
the performance drainage is?
We have to fix this.
Ron
-Original Message-
From: Luke Lonergan <[EMAIL PROTECTED]>
Sent: Oct 5, 2005 11:24 AM
To: Michael Stone <[EMAIL PROTECTED]>, Martijn van Oosterhout
Cc: pgsql-hackers@postgresql.or
fundamentals that don't seem to change no
matter how far or fast the computing field evolves.
As usual, the proper answers involve finding a sometimes nontrivial
balance between building on known precedent and not being trapped
by doctrine.
Ron
"the best doc is the code", but the code in isolation is often the Slow Path to
understanding with systems as complex as a DBMS IO layer.
Ron
-Original Message-
From: "Joshua D. Drake" <[EMAIL PROTECTED]>
Sent: Oct 5, 2005 1:18 PM
Subject: Re: [HACKERS] [PERFORM] A B
ertainly not to annoy you or anyone else.
At least from my perspective, this for the most part seems to have
been a useful and reasonable engineering discussion that has
exposed a number of important things.
Regards,
Ron
At ~$75-$150 per GB as of this post, RAM is the cheapest investment you can
make in a high-performance, low-hassle DBMS. IWill's and Tyan's 16 DIMM slot
mainboards are worth every penny.
ron
-Original Message-
From: PostgreSQL <[EMAIL PROTECTED]>
Se
occur in ~ the same place?
Etc.
If the effect does seem to be sensitive to the amount of RAM in the
server, it might be worth redoing the experiment(s) with 2GB and
16GB as well...
ron
-Original Message-
From: Kelly Burkhart <[EMAIL PROTECTED]>
Sent: Oct 31, 2005 12:12 PM
To: pgs
ic actions/queries are problems.
If you get to here and the entire DBMS is still not close to acceptable, your
fundamental assumptions have to be re-examined.
Ron
-Original Message-
From: Yves Vindevogel <[EMAIL PROTECTED]>
Sent: Nov 9, 2005 3:11 PM
To: pgsql-performance@postgr
DB server. It makes all kinds of problems just not exist.
Ron
-Original Message-
From: Simon Riggs <[EMAIL PROTECTED]>
Sent: Nov 9, 2005 4:35 AM
To: Charlie Savage <[EMAIL PROTECTED]>, Luke Lonergan <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Subject: Re: [PE
ou can fit into RAM during
normal operation the better.
...and it all starts with proper DB design. Otherwise, you are quite right in
stating that you risk wasting time, effort, and HW.
Ron
-Original Message-
From: Frank Wiles <[EMAIL PROTECTED]>
Sent: Nov 9, 2005 6:53 PM
To
anded stuff), OCZ, etc.
Such companies sell via multiple channels, including reputable websites like
dealtime.com, pricewatch.com, newegg.com, etc.
You are quite correct that there's poor quality junk out there. I was not
talking about it, only reasonable quality components.
Ron
--
William Yu wrote:
Our SCSI drives have failed maybe a little less than our IDE drives.
Microsoft, in their database showcase TerraServer project, has
had the same experience. They studied multiple configurations,
including a SCSI/SAN solution as well as a cluster of SATA boxes.
They measured a
Christopher Kings-Lynne wrote:
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lin
Tom Lane wrote:
[EMAIL PROTECTED] writes:
On Mon, 5 Dec 2005, Tom Lane wrote:
I speculate that the seq_scan wasn't really the slow part
compared to not using both parts of the index in the
second part of the plan. The table point_features is tens of
thousands of rows, while the table
Tom Lane wrote:
...planner is actually going to choose based on the ultimate join cost,
not on the subplan costs...
In this explanation, the reason for the change in plans over time could
be a change in the statistics for the other table. Is "facets" more
dynamic than "point_features"?
In to
Tom Lane wrote:
If you have background tasks doing ANALYZEs then this explanation seems
plausible enough. I'm willing to accept it anyway ...
Yup, there are such tasks. I could dig through logs to try to confirm
or reject it; but I think it's reasonably likely that this happened.
Basically,
Short summary:
* Papers studying priority inversion issues with
databases including PostgreSQL and realistic workloads
conclude setpriority() helps even in the presence of
priority inversion issues for TPC-C and TPC-W like
workloads.
* Avoiding priority inversion with priority in
Before asking them to remove it, are we sure priority inversion
is really a problem?
I thought this paper: http://www.cs.cmu.edu/~bianca/icde04.pdf
did a pretty good job of studying priority inversion on RDBMSs,
including PostgreSQL, on various workloads (TPC-W and TPC-C) and
found that the benefit
Mark Kirkwood wrote:
> Ron Mayer wrote:
>> Short summary:
>> * Papers studying priority inversion issues with
>> databases including PostgreSQL and realistic workloads
>> conclude setpriority() helps even in the presence of
>> priority inversio
Brian Hurt wrote:
> Mark Lewis wrote:
>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
>>
>>> I have the same question. I've done some embedded real-time
>>> programming, so my innate reaction to priority inversions is that
>>> they're evil. But, especially given priority inheritance,
Brian Hurt wrote:
> Ron Mayer wrote:
>> Brian Hurt wrote:
>>> Mark Lewis wrote:
>>>> On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
>>>>> But, especially given priority inheritance, is there any
>
> That second paper is interes
Bruno Wolff III wrote:
> On Thu, Dec 14, 2006 at 01:39:00 -0500,
> Jim Nasby <[EMAIL PROTECTED]> wrote:
>> On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:
>>> This appears to be changing under Linux. Recent kernels have write
>>> barriers implemented using cache flush commands (which
>>>
Xiaoning Ding wrote:
> Postgresql is 7.3.18. [...]
> 1 process takes 0.65 second to finish.
> I update PG to 8.2.3. The results are [...] now.
> 1 process :0.94 second
You sure about your test environment? Anything else
running at the same time, perhaps?
I'm a bit surprised that 8.2.3 would
[EMAIL PROTECTED] wrote:
> 8*73GB SCSI 15k ...(dell poweredge 2900)...
> 24*320GB SATA II 7.2k ...(generic vendor)...
>
> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendor
Craig A. James wrote:
> Merlin Moncure wrote:
>> Using surrogate keys is dangerous and can lead to very bad design
>> habits that are unfortunately so prevalent in the software industry
>> they are virtually taught in schools. ... While there is
>> nothing wrong with them in principle (you are ex
Dan Harris wrote:
> Daniel Haensse wrote:
>> Has anybody a nice
>> solution to change process priority? A shell script, maybe even for java?
One way is to write a stored procedure that sets its own priority.
An example is here:
http://weblog.bignerdranch.com/?p=11
> While this may technically wo
Andrew Sullivan wrote:
> On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:
>> One way is to write a stored procedure that sets its own priority.
>> An example is here:
>> http://weblog.bignerdranch.com/?p=11
>
> Do you have evidence to show this will
Greg Smith wrote:
>
> Count me on the side that agrees adjusting the vacuuming parameters is
> the more straightforward way to cope with this problem.
Agreed for vacuum; but it still seems interesting to me that
across databases and workloads high priority transactions
tended to get through fast
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Greg Smith wrote:
>>> Count me on the side that agrees adjusting the vacuuming parameters is
>>> the more straightforward way to cope with this problem.
>
>> Agreed for vacuum; but it still seems
Greg Smith wrote:
>
> Let's break this down into individual parts:
Great summary.
> 4) Is vacuuming a challenging I/O demand? Quite.
>
> Add all this up, and that fact that you're satisfied with how nice has
> worked successfully for you doesn't have to conflict with an opinion
> that it's not
Seems Linux has IO scheduling through a program called ionice.
Has anyone here experimented with using it rather than
vacuum sleep settings?
http://linux.die.net/man/1/ionice
This program sets the io scheduling class and priority
for a program. As of this writing, Linux supports 3 schedulin
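setpriority(), the CPU-side analogue of ionice discussed earlier in this thread, is exposed directly in Python's os module; a minimal sketch (note that an unprivileged process can only lower its own priority):

```python
import os

# setpriority(2), as studied in the priority-inversion papers, via Python.
# Unprivileged processes may only *lower* their priority (raise the nice
# value), so this sketch only moves toward 19 (the weakest priority).
current = os.getpriority(os.PRIO_PROCESS, 0)   # 0 means "this process"
target = min(current + 1, 19)
os.setpriority(os.PRIO_PROCESS, 0, target)
print(os.getpriority(os.PRIO_PROCESS, 0))
```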
Jay Kang wrote:
> Hello,
>
> I'm currently trying to decide on a database design for tags in my web
> 2.0 application. The problem I'm facing is that I have 3 separate tables
> i.e. cars, planes, and schools. All three tables need to interact with
> the tags, so there will only be one universal se
I notice that I get different plans when I run the
following two queries that I thought would be
identical.
select distinct test_col from mytable;
select test_col from mytable group by test_col;
Any reason why it favors one in one case but not the other?
d=# explain analyze select distinct
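PostgreSQL's planner may well pick different plans for the two forms; the semantic equivalence itself is easy to check with the stdlib sqlite3 module (standing in for a real database purely for illustration):

```python
import sqlite3

# The two formulations from the post, run against SQLite (stdlib) purely to
# show they are semantically interchangeable; which *plan* a given database
# picks for each is a separate question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (test_col INTEGER)")
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [(1,), (2,), (2,), (3,), (3,)])

distinct = conn.execute(
    "SELECT DISTINCT test_col FROM mytable ORDER BY test_col").fetchall()
grouped = conn.execute(
    "SELECT test_col FROM mytable GROUP BY test_col ORDER BY test_col").fetchall()
print(distinct, grouped)
```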
Trevor Talbot wrote:
>
> Lack of reliability compared to _UFS_? Can you elaborate on this?
What elaboration's needed? UFS seems to have one of the longest
histories of support from major vendors of any file system supported
on any OS (Solaris, HP-UX, SVR4, Tru64 Unix all use it).
Can you elab
Csaba Nagy wrote:
>
> Well, my problem was actually solved by rising the statistics target,
Would it do more benefit than harm if postgres increased the
default_statistics_target?
I see a fair number of people (myself included) asking questions whose
resolution was to ALTER TABLE SET STATISTICS;
Heikki Linnakangas wrote:
> Peter Schuller wrote:
>> to have a slow background process (similar to normal non-full vacuums
> ...
> I think it's doable, if you take a copy of the tuple, and set the ctid
> pointer on the old one like an UPDATE, and wait until the old tuple is
> no longer visible to
Greg Smith wrote:
> Bruce Momjian wrote:
>> I always assumed SCSI disks had a write-through cache and therefore
>> didn't need a drive cache flush comment.
Some do; but some SCSI disks have write-back caches.
Some have both(!) - a write-back cache, but the user can explicitly
send write-through requests.
Jon Schewe wrote:
> OK, so if I want the 15 minute speed, I need to give up safety (OK in
> this case as this is just research testing), or see if I can tune
> postgres better.
Depending on your app, one more possibility would be to see if you
can re-factor the application so it can do multiple w
Kevin Grittner wrote:
>
> ...Sybase named caches...segment off portions of the memory for
> specific caches... bind specific database
> objects (tables and indexes) to specific caches. ...
>
> When I posted to the list about it, the response was that LRU
> eviction was superior to any tuning an
Where "*" ==
{print | save to PDF | save to format | display on screen}
Anyone know of one?
TiA
Ron
haps more money than sense.)
Ron
-Original Message-
>From: Jim Nasby <[EMAIL PROTECTED]>
>Sent: Mar 16, 2006 1:33 PM
>To: pgsql-performance@postgresql.org
>Subject: [PERFORM] 1 TB of memory
>
>PostgreSQL tuned to the max and still too slow? Database too big to
>
causing
trouble for yourself.
Bad experiences with Dell in general, and their poor PERC RAID controllers in
particular, are all over this and other DB forums.
Ron
-Original Message-
>From: Bill Moran <[EMAIL PROTECTED]>
>Sent: Apr 25, 2006 2:14 PM
>To: pgsql-performance@postgresql
reasonable place for them... ...if the
present reliability problems I'm seeing go away.
Ron
-Original Message-
>From: David Boreham <[EMAIL PROTECTED]>
>Sent: Apr 25, 2006 5:15 PM
>To: pgsql-performance@postgresql.org
>Subject: Re: [PERFORM] Large (8M) cache vs
>Another benefit of Pentium D over AMD X2, at least until AMD chooses
>to switch, is that Pentium D supports DDR2, whereas AMD only supports
>DDR. There are a lot of technical pros and cons to each - with claims
>from AMD that DDR2 can be slower than DDR - but one claim that isn't
>often made, but
I'm posting this to the entire performance list in the hopes that it will be
generally useful.
=r
-Original Message-
>From: [EMAIL PROTECTED]
>Sent: Apr 26, 2006 3:25 AM
>To: Ron Peacetree <[EMAIL PROTECTED]>
>Subject: Re: [PERFORM] Large (8M) cache vs. dual-core
actually are.
Again, my apologies.
Ron
-Original Message-
>From: Ron Peacetree <[EMAIL PROTECTED]>
>Sent: Apr 26, 2006 8:40 AM
>To: [EMAIL PROTECTED], pgsql-performance@postgresql.org
>Subject: Re: [PERFORM] Large (8M) cache vs. dual-core CPUs
>
>I'm posting t
The best of all worlds is to use a HW RAID card with battery backed cache.
Then you can have both high performance and high reliability.
Benches suggest that the best such cards currently are the Areca cards which
support up to 2GB of battery backed cache.
Ron
-Original Message-
>F
g low-priority transactions.
"
Or am I missing something?
Ron
ocks ~20,000 inserts/second
When I last measured it, it was about a factor of 4 speedup
(3 seconds vs 0.7 seconds) by concatenating the inserts, with
sample code shown here [1].
If the same ratio holds for your test case, these concatenated
inserts would be almost the exact same spee
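A hypothetical sketch of the concatenation trick (table and column names are invented; the real speedup depends on round trips and per-statement parsing):

```python
# Invented sketch of the concatenation trick: N single-row INSERT statements
# versus one multi-row statement. Table name "t" and columns are hypothetical.
def single_row_inserts(rows):
    # one statement (and, over a network, one round trip) per row
    return [f"INSERT INTO t (a, b) VALUES ({a}, {b});" for a, b in rows]

def concatenated_insert(rows):
    # one statement carrying every row
    values = ", ".join(f"({a}, {b})" for a, b in rows)
    return f"INSERT INTO t (a, b) VALUES {values};"

rows = [(1, 10), (2, 20), (3, 30)]
print(len(single_row_inserts(rows)))
print(concatenated_insert(rows))
```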
f BBC.
ARC-11xx are the PCI-X based products.
ARC-12xx are the PCI-E based products.
Reviews at places like tweakers.net
Areca is based in Taiwan, but has European and US offices as well
Ron Peacetree
-Original Message-
>From: Kenji Morishige <[EMAIL PROTECTED]>
>Sent: Jul 5,
sizes =after= you optimize your OS,
FS, and pg design for best IO for your usage pattern(s).
Hope this helps,
Ron
-Original Message-
>From: Mikael Carneholm <[EMAIL PROTECTED]>
>Sent: Jul 17, 2006 5:16 PM
>To: Ron Peacetree <[EMAIL PROTECTED]>, pgsql-performance@postgresql.org
>Subject: RE: [PERFORM] RAID stripe size question
>
>>15Krpm HDs will have average access
>From: Alex Turner <[EMAIL PROTECTED]>
>Sent: Jul 18, 2006 12:21 AM
>To: Ron Peacetree <[EMAIL PROTECTED]>
>Cc: Mikael Carneholm <[EMAIL PROTECTED]>, pgsql-performance@postgresql.org
>Subject: Re: [PERFORM] RAID stripe size question
>
>On 7/17/06,
Have you done any experiments implementing RAID 50 this way (HBA does RAID 5,
OS does RAID 0)? If so, what were the results?
Ron
-Original Message-
>From: Scott Marlowe <[EMAIL PROTECTED]>
>Sent: Jul 18, 2006 3:37 PM
>To: Alex Turner <[EMAIL PROTECTED]>
>C
Luke Lonergan wrote:
>
> I think the topic is similar to "cache bypass", used in cache capable vector
> processors (Cray, Convex, Multiflow, etc) in the 90's. When you are
> scanning through something larger than the cache, it should be marked
> "non-cacheable" and bypass caching altogether. Thi
Jim C. Nasby wrote:
>
> Index scans are also pretty picky about correlation. If you have really
> low correlation you don't want to index scan,
I still don't think "correlation" is the right metric
at all for making this decision.
If you have a list of addresses clustered by "zip"
the "correla
Jean-David Beyer wrote:
>
> Sure, some even read the entire cylinder. But unless the data are stored
> contiguously, this does little good. The Linux ext2 and ext3 file systems
> try to get more contiguity by allocating (IIRC) 8 blocks each time a write
> needs space
>From where do you recall thi
toby wrote:
>
> That's not quite what I meant by "trust". Some drives lie about the
> flush.
Is that really true, or a misdiagnosed software bug?
I know many _drivers_ lie about flushing - for example EXT3
on Linux before early 2005 "did not have write barrier support
that issues the FLUSH CACH
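From userspace, the most an application can do is the standard write-then-fsync pattern sketched below; everything beneath that call is where drivers and drives can lie:

```python
import os
import tempfile

# The userspace half of the "does the drive really flush?" question:
# write, then fsync() to push the data through the kernel cache. If the
# drive lies about FLUSH CACHE, this still returns success -- which is
# precisely the failure mode being discussed.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"commit record")
    os.fsync(fd)                      # request stable storage
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 64)
finally:
    os.close(fd)
    os.remove(path)
print(data)
```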
Tom Lane wrote:
> "Craig A. James" <[EMAIL PROTECTED]> writes:
>> Here's something I found googling for "memory overcommitment"+linux
>> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html
>
> That might have been right when it was written (note the reference to a
> 2.2 Linux kernel
Tom Lane wrote:
> "Craig A. James" <[EMAIL PROTECTED]> writes:
>> Here's something I found googling for "memory overcommitment"+linux
>> http://archives.neohapsis.com/archives/postfix/2000-04/0512.html
>
> That might have been right when it was written (note the reference to a
> 2.2 Linux kernel
On Fri, 23 Apr 2004, Manfred Koizar wrote:
>
> Setting shared_buffers to half your available memory is the worst thing
> you can do. You would end up caching exactly the same set of blocks in
> the internal buffers and in the OS cache, thus effectively making one of
> the caches useless.
One min
mes to the point in the salesguy's presentation when he's playing with
PowerPoint :).
Much of this tuning was guesswork; but it did make the demo go from
"unacceptable" to "reasonable". Were any of my guesses particularly
bad, such that they may be doing more harm than good?
Any more ideas
Richard Huxton wrote:
If you've got a web-application then you'll probably want to insert the
results into a cache table for later use.
If I have quite a bit of activity like this (people selecting 1 out
of a few million rows and paging through them in a web browser), would
it be good to have
Randolf Richardson wrote:
While this doesn't exactly answer your question, I use this little
tidbit of information when "selling" people on PostgreSQL. PostgreSQL
was chosen over Oracle as the database to handle all of the .org TLDs
information. ...
Do you have a link for that informatio
Ron Mayer wrote:
http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53
Wrong link...
http://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=812
This is the one that discusses scalability, price, performance,
failover, power consumption, hard
Merlin Moncure wrote:
...You need to build a bigger, faster box with lots of storage...
Clustering ...
B: will cost you more, not less
Is this still true when you get to 5-way or 17-way systems?
My (somewhat outdated) impression is that up to about 4-way systems
they're price competitive; but bey
I sometimes also think it's fun to point out that PostgreSQL has
bigger companies supporting its software - like this one:
http://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf
with $43 billion revenue -- instead of those little companies
like MySQL AB or Oracle.
:)
Short summary... the second query runs faster, and I think
they should be identical queries. Should the optimizer
have found this optimization?
I have two identical (or so I believe) queries; one where I
explicitly add a "is not null" comparison; and one where I
think it would implicitly onl
Tom Lane wrote:
Great Bridge did essentially that years ago, but I think we only got
away with it because we didn't say which DBs "Commercial Database A"
and "Commercial Database B" actually were. Even off the record, we
were only allowed to tell people that the commercial DBs were Oracle
and SQL
Josh Berkus wrote:
Now you can see why other DBMSs don't use the OS disk cache. ...
...as long as we use the OS disk cache, we can't
eliminate checkpoint spikes, at least on Linux.
Wouldn't the VM settings like the ones under /proc/sys/vm
and/or the commit=XXX mount option if using ext3 be a go
Merlin Moncure wrote:
readv and writev are in the single unix spec...and yes ...
On some systems they might just be implemented as a loop inside the
library, or even as a macro.
You sure?
Requirements like this:
http://www.opengroup.org/onlinepubs/007908799/xsh/write.html
"Write requests of {PIPE
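Python's os module exposes the same scatter/gather calls; a minimal round trip (whether the underlying libc issues one syscall or loops internally is exactly the question raised here):

```python
import os
import tempfile

# Scatter/gather I/O through os.writev()/os.readv(), which wrap the calls
# named in the Single Unix Specification. One call writes three buffers;
# one call reads the bytes back into two separate buffers.
fd, path = tempfile.mkstemp()
try:
    written = os.writev(fd, [b"hello ", b"gathered ", b"world"])
    os.lseek(fd, 0, os.SEEK_SET)
    buf1, buf2 = bytearray(6), bytearray(14)
    got = os.readv(fd, [buf1, buf2])   # fills both buffers in one call
finally:
    os.close(fd)
    os.remove(path)
print(written, got, bytes(buf1), bytes(buf2))
```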
this:
select * from (query) as a;
which I believe should be a no-op.
Should the optimizer have noticed that it could have used a hash
join in this case? Anything I can do to help convince it to?
Explain analyze output follows.
Thanks,
Ron
Tom Lane wrote:
And you can't just dismiss the issue of wrong cost models and say we can
get numbers anyway.
Is there a way to see more details about the cost estimates?
EXPLAIN ANALYZE seems to show the total time and rows, but not
information like how many disk pages were accessed.
I get the feel
I'm guessing your data is actually more "clustered" than the
"correlation" statistic thinks it is.
Alex Turner wrote:
> trendmls=# explain analyze select listnum from propmain where
> listprice<=30 and listprice>=20;
Is that a database of properties like land/houses?
If
Bill Moran wrote:
> On Fri, 9 Nov 2007 11:11:18 -0500 (EST)
> Greg Smith <[EMAIL PROTECTED]> wrote:
>> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:
>>> If the queries are complex, this is understandable.
>> The queries used for this comparison are trivial. There's only one table
>> involved and
like there's a step where it expects
511 rows and gets 2390779 which seems to be off by a factor of 4600x.
Also shown below it seems that if I use "OFFSET 0" as a "hint"
I can force a much (10x) better plan. I wonder if there's room for
a pgfoundry project for a
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>
>> Also shown below it seems that if I use "OFFSET 0" as a "hint"
>> I can force a much (10x) better plan. I wonder if there's room for
>> a pgfoundry project for a patch set tha
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> ...given that that plan has a lower cost estimate, it
>>> should've picked it without any artificial constraints.
>
>>I think the reason it's not picking it was discu
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> There's something fishy about this --- given that that plan has a lower
>>> cost estimate, it should've picked it without any artificial
>>> constraints.
One final t
Joshua D. Drake wrote:
> Actually this is not true. Although I have yet to test 8.3. It is
> pretty much common knowledge that after 8 cores the acceleration of
> performance drops with PostgreSQL...
>
> This has gotten better every release. 8.1 for example handles 8 cores
> very well, 8.0 didn't
Greg Smith wrote:
On Sat, 1 Mar 2008, Steve Poe wrote:
SATA over SCSI?
I've collected up many of the past list comments on this subject and put
a summary at
http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks
Should this section:
ATA Disks... Always default to the write cache
blowing up.
Not sure if that's redundant with the condition you
mentioned, or if it's yet a separate condition where
we might also want to consider cartesian joins.
Ron M
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that would be part of thi