I'm running PostgreSQL 7.4 on a quad Xeon attached to a
beefy disk array. However, I am beginning to wonder if this is
a waste of CPU power.
I think I read somewhere that PostgreSQL is NOT multi-threaded.
But, will it be able to take advantage of multiple CPUs? Will
I have to run separate postmast
"D. Dante Lorenso" <[EMAIL PROTECTED]> writes:
> I'm running PostgreSQL 7.4 on a quad Xeon attached to a
> beefy disk array. However, I am beginning to wonder if this is
> a waste of CPU power.
>
> I think I read somewhere that PostgreSQL is NOT multi-threaded.
> But, will it be able to take advan
It would seem we're experiencing something similar with our scratch
volume (JFS mounted with noatime). It is still much faster than our
experiments with ext2, ext3, and reiserfs, but occasionally during
large loads it will hiccup for a couple of seconds, with no crashes yet.
I'm reluctant to switch bac
"Spiegelberg, Greg" <[EMAIL PROTECTED]> writes:
> PostgreSQL 7.3.3 from source
*Please* update to 7.3.4 or 7.3.5 before you get bitten by the
WAL-page-boundary bug ...
regards, tom lane
I understand that COUNT queries are expensive. So I'm looking for advice on
displaying paginated query results.
I display my query results like this:
Displaying 1 to 50 of 2905.
1-50 | 51-100 | 101-150 | etc.
I do this by executing two queries. One is of the form:
SELECT FROM WHERE
On Mon, 2004-01-05 at 14:57, David Teran wrote:
> ... wow:
>
> executing a batch file with about 4250 selects, including lots of joins
> and other things, PostgreSQL 7.4 is about 2 times faster than FrontBase
> 3.6.27. OK, we will start to make larger tests but this is quite
> interesting already: w
Hi there,
I am quite new to postgresql, and love the explain feature. It enables
us to predict which SQL queries need to be optimized before we see any
problems. However, I've run into an issue where explain tells us the
costs of a query are tremendous (105849017586), but the query actually
runs quite fast. Even "explain analyze" shows these costs.
On Fri, 9 Jan 2004, Richard van den Berg wrote:
> problems. However, I've run into an issue where explain tells us the
> costs of a query are tremendous (105849017586), but the query actually
> runs quite fast. Even "explain analyze" shows these costs.
It would be helpful if you can show the
So I'm looking for advice on displaying paginated query results.
Displaying 1 to 50 of 2905.
1-50 | 51-100 | 101-150 | etc.
I do this by executing two queries. One is of the form:
SELECT FROM WHERE LIMIT m
OFFSET n
The other is identical except that I replace the select list with
COUNT(*)
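The two-query approach described above can be sketched end to end. This is a minimal illustration, assuming a hypothetical items table; it uses Python's bundled sqlite3 only so the example is self-contained, since the COUNT(*) plus LIMIT/OFFSET pattern is the same against PostgreSQL:

```python
import sqlite3

# Hypothetical table standing in for the poster's data; SQLite is used here
# only so the sketch runs standalone -- the SQL pattern is unchanged in PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item{i}",) for i in range(2905)])

page_size, page = 50, 0  # page 0 is rows 1-50

# Query 1: the (potentially expensive) total count
total = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]

# Query 2: one page of results
rows = conn.execute(
    "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
    (page_size, page * page_size)).fetchall()

first = page * page_size + 1
print(f"Displaying {first} to {first + len(rows) - 1} of {total}.")
```

Note that OFFSET still walks and discards all the skipped rows, so for deep pages keyset pagination (WHERE id > last_seen_id ORDER BY id LIMIT 50) tends to scale better than large offsets.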
You need to regularly run 'analyze'.
Chris
Richard van den Berg wrote:
Hi there,
I am quite new to postgresql, and love the explain feature. It enables
us to predict which SQL queries need to be optimized before we see any
problems. However, I've run into an issue where explain tells us the
costs of a query are tremendous (105849017586), but the query actually
runs quite fast.
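Chris's advice refers to PostgreSQL's ANALYZE command, which refreshes the statistics the planner uses to compute those cost estimates. A small illustration, using SQLite's analogous ANALYZE so the sketch is runnable standalone (the table and index names are hypothetical):

```python
import sqlite3

# Illustrative only: SQLite's ANALYZE plays the same role as PostgreSQL's,
# refreshing the statistics the planner bases its cost estimates on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("CREATE INDEX t_x ON t (x)")
conn.executemany("INSERT INTO t (x) VALUES (?)", [(i,) for i in range(1000)])

conn.execute("ANALYZE")  # in PostgreSQL: ANALYZE; typically run periodically

# The gathered statistics are visible in sqlite_stat1
# (roughly analogous to PostgreSQL's pg_statistic / pg_stats views).
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```

If the statistics are stale, the planner's row-count guesses drift, which is one common reason an estimated cost can be wildly out of line with the actual runtime.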
Richard van den Berg <[EMAIL PROTECTED]> writes:
> Hi there,
>
> I am quite new to postgresql, and love the explain feature. It enables us to
> predict which SQL queries need to be optimized before we see any problems.
> However, I've run into an issue where explain tells us the costs of a query
> are tremendous (105849017586), but the query actually runs quite fast.
I have a situation that is giving me small fits, and would like to see
if anyone can shed any light on it.
I have a modest table (~1.4 million rows, and growing), that has a
variety of queries run against it. One is
a very straightforward one - pull a set of distinct rows out based on
two co
On Sun, 11 Jan 2004, Andrew Rawnsley wrote:
> 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything
> until I exceed 0.5, which strikes me as a bit high (though please
> correct me if I am assuming too much...). RANDOM_PAGE_COST seems to have
> no effect.
What about the effective c
Low (1000). I'll fiddle with that. I just noticed that the machine only
has 512MB of ram in it, and not 1GB. I must
have raided it for some other machine...
On Jan 11, 2004, at 10:50 PM, Dennis Bjorklund wrote:
On Sun, 11 Jan 2004, Andrew Rawnsley wrote:
20-25% of the time. Fiddling with CPU_TU
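The settings being tuned in this exchange live in postgresql.conf. A hedged sketch of what they might look like for the 512 MB machine described above (the values are illustrative guesses, not recommendations; in the 7.x series effective_cache_size is measured in 8 kB disk pages):

```
# postgresql.conf (PostgreSQL 7.x) -- illustrative values only
effective_cache_size = 32000   # 8 kB pages, ~250 MB; the 1000-page default is tiny
random_page_cost = 2.0         # default is 4; lower if most data fits in cache
cpu_tuple_cost = 0.01          # the default; values above 0.5 are almost never sane
```

Of these, effective_cache_size does not change any costs directly; it tells the planner how much of an index is likely to be cached, which shifts the seqscan/indexscan break-even point.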
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] (Andrew Rawnsley) would
write:
> I would like, of course, for it to use the index, given that it
> takes 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do
> anything until I exceed 0.5, which strikes me as a bit high (though
> ple
Andrew Rawnsley <[EMAIL PROTECTED]> writes:
> I have a situation that is giving me small fits, and would like to see
> if anyone can shed any light on it.
In general, pulling 10% of a table *should* be faster as a seqscan than
an indexscan, except under the most extreme assumptions about cluster
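The seqscan-versus-indexscan trade-off Tom describes can be seen in miniature with any cost-based planner. A hedged sketch using SQLite's EXPLAIN QUERY PLAN (its output format differs from PostgreSQL's EXPLAIN, and the schema is hypothetical), contrasting an index lookup for a selective predicate with a full scan when every row is needed:

```python
import sqlite3

# Illustrative schema; SQLite's EXPLAIN QUERY PLAN stands in for PostgreSQL's EXPLAIN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER, y TEXT)")
conn.execute("CREATE INDEX t_x ON t (x)")
conn.executemany("INSERT INTO t (x, y) VALUES (?, ?)",
                 [(i, "payload") for i in range(10000)])
conn.execute("ANALYZE")  # give the planner statistics to work with

def plan(sql):
    # Column 3 of each EXPLAIN QUERY PLAN row is the human-readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Highly selective predicate: the planner searches via the index.
selective = plan("SELECT * FROM t WHERE x = 42")

# No predicate: the index buys nothing, so the planner does a full scan.
full = plan("SELECT * FROM t")

print(selective)
print(full)
```

Between those extremes the planner compares costs: each index probe risks a random page fetch, while a sequential scan reads pages in order, so once a query touches a sizable fraction of the table the scan usually wins.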