As for "vendor support" for Opteron, that sure looks like a
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a "bog-standard" RHAS/RHES
release from Red Hat. And that, on Opteron, is preposterous, because
there's plenty of the bits of
Gabriele Bartolini <[EMAIL PROTECTED]> writes:
> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8)
> (actual time=5338.120..40237.283 rows=1 loops=1)
> Filter: ((1040878301::bigint >= ip_address_from) AND
> (1040878301::bigint <= ip_address_to))
> Total runtime: 40237.424 ms
Francisco Reyes <[EMAIL PROTECTED]> writes:
> With explain analyze, here is what I get with sequential scan on:
> Sort (cost=382.01..382.15 rows=56 width=196)
> (actual time=64.346..64.469 rows=24 loops=1)
> And with seqscan off
> Sort (cost=490.82..490.96 rows=56 width=196)
> (actual time=56.668..56.789 rows
Hi Tom,
thanks for your interest.
At 23.33 11/10/2004, Tom Lane wrote:
Gabriele Bartolini <[EMAIL PROTECTED]> writes:
> QUERY PLAN
>
On Mon, 11 Oct 2004, Rosser Schwarz wrote:
In general, it's best to let the planner make the appropriate choice
without any artificial constraints.
As someone suggested, I ran with EXPLAIN ANALYZE.
With enable_seqscan off it was better.
I ran a VACUUM ANALYZE this afternoon, so the stats were up to date.
Although
On Mon, 11 Oct 2004, John Meinel wrote:
Postgres believes that it will cost 382 to do a sequential scan, versus 490
for an indexed scan, which is why it prefers the sequential scan. Try
running explain analyze to see how accurate those estimates are.
With explain analyze, here is what I get with sequential scan on:
So
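[A hedged sketch of the comparison being suggested above, reusing the test table and predicate that appear in another message in this digest:]
EXPLAIN ANALYZE SELECT * FROM test WHERE today < '2004-01-01';
-- Compare the planner's estimate (the rows= figure in the cost part) with the
-- rows= figure in the "actual time" part; a large mismatch usually means the
-- statistics are stale, so run ANALYZE on the table and look again.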
On Mon, 11 Oct 2004, Gabriele Bartolini wrote:
> -
> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 width=8)
> (actual time=5338.120..40237.283 rows=1 loops=1)
>
[EMAIL PROTECTED] (Rod Taylor) wrote:
> On Mon, 2004-10-11 at 13:38, Andrew Sullivan wrote:
>> On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:
>> > As long as you're on x86, scaling outward is the way to go. If
>> > you want to continue to scale upwards, ask Andrew Sullivan about
>> >
John Meinel <[EMAIL PROTECTED]> writes:
> As Janning mentioned, sometimes sequential scans *are* faster. If the number of
> entries that will be found is large compared to the number of total entries (I
> don't know the percentages, but probably >30-40%),
Actually 30%-40% is unrealistic. The tr
"Chris Hutchinson" <[EMAIL PROTECTED]> writes:
> Running a trivial query in v7.4.2 (installed with fedora core2) using
> EXPLAIN ANALYZE is taking considerably longer than just running the query
> (2mins vs 6 secs). I was using this query to quickly compare a couple of
> systems after installing a faster disk.
On Mon, 2004-10-11 at 13:38, Andrew Sullivan wrote:
> On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:
> > As long as you're on x86, scaling outward is the way to go. If you want to
> > continue to scale upwards, ask Andrew Sullivan about his experiences running
> > PostgreSQL on big IBM boxes.
Francisco Reyes wrote:
On Mon, 11 Oct 2004, Janning Vygen wrote:
[...]
When I saw the default explain I was surprised to see that indexes were
not being used. For example, the join on lines 4 and 5 is exactly on the
primary key of the tables, yet a sequential scan was used.
Note this:
The default explain
Folks,
In order to have a place for scripts, graphs, results, etc., I've started the
TestPerf project on pgFoundry:
http://pgfoundry.org/projects/testperf/
If you are interested in doing performance testing for PostgreSQL, please join
the mailing list for the project. I could certainly use so
Gabriele Bartolini <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> -
> Seq Scan on ip2location (cost=0.00..30490.65 rows=124781 wid
while you weren't looking, Francisco Reyes wrote:
> Is there any disadvantage of having the enable_seqscan off?
Plenty.
The planner will choose whichever plan looks "cheapest", based on the
information it has available (table size, statistics, &c). If a
sequential scan looks cheaper, and in you
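[A minimal psql sketch of the safer way to use the setting, i.e. per session and only for comparison; the WHERE clause is the ip2location filter quoted elsewhere in this thread, and the selected column is only a guess:]
SET enable_seqscan TO off;   -- affects only this connection
EXPLAIN ANALYZE SELECT id_location
  FROM ip2location
 WHERE 1040878301 BETWEEN ip_address_from AND ip_address_to;
RESET enable_seqscan;        -- hand the choice back to the planner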
On Monday, 11 October 2004 at 22:49, Francisco Reyes wrote:
> On Mon, 11 Oct 2004, Janning Vygen wrote:
> > postgres uses a seq scan if it's faster. In your case postgres seems to
> > know that most of your rows have a date < 2004-01-01 and so doesn't need
> > to consult the index if it has to read every page anyway.
Hi guys,
please consider this scenario. I have this table:
CREATE TABLE ip2location (
    ip_address_from BIGINT NOT NULL,
    ip_address_to   BIGINT NOT NULL,
    id_location     BIGINT NOT NULL,
    PRIMARY KEY (ip_address_from, ip_address_to)
);
I created a cluster on its primary key, by running:
C
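[The command above was cut off; a hedged reconstruction, assuming the default primary-key index name ip2location_pkey and the pre-8.3 CLUSTER syntax:]
CLUSTER ip2location_pkey ON ip2location;   -- hypothetical: physically reorders the table by the PK index
ANALYZE ip2location;                       -- refresh the statistics after the reorganization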
Is there a tutorial or reference to the different terms that appear on the
explain output?
Items such as "Nested Loop", "Hash", and so on.
Also, is there a way to easily tell which of two explains is "worse"?
For example, I am running a query with "set enable_seqscan to off;" and I see
the explain now shows in
On Mon, 11 Oct 2004, Janning Vygen wrote:
postgres uses a seq scan if it's faster. In your case postgres seems to know
that most of your rows have a date < 2004-01-01 and so doesn't need to
consult the index if it has to read every page anyway. A seq scan can be faster
on small tables. Try (in psql) "
On Tue, Oct 05, 2004 at 09:47:36AM -0700, Josh Berkus wrote:
> As long as you're on x86, scaling outward is the way to go. If you want to
> continue to scale upwards, ask Andrew Sullivan about his experiences running
> PostgreSQL on big IBM boxes. But if you consider a quad-Opteron server
>
OK, thanks Tom; what would we do without you?
By the way, I have looked at my kernel: getrusage() is properly configured
and returns good results.
The I/O stats do too.
I tested another version of PostgreSQL and now it works fine.
It seems to have been an install bug.
Thanks.
Regards, Alban Médici
on 08/10/2004 1
HyunSung Jang <[EMAIL PROTECTED]> writes:
> can you explain to me why it's not doing what I expected?
Have you ANALYZEd this table recently? The estimated row counts seem
way off.
regards, tom lane
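[A small hedged sketch of the check being suggested; the table name here is only illustrative, since it is not shown in the excerpt:]
ANALYZE mytable;                 -- refresh the planner's statistics
SELECT relname, reltuples, relpages
  FROM pg_class
 WHERE relname = 'mytable';      -- sanity-check the row and page estimates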
Mike Harding <[EMAIL PROTECTED]> writes:
> I just ran a COPY of a million records several times, and each time I
> ran it, it apparently got exponentially slower.
Tell us about indexes, foreign keys involving this table, triggers, rules?
Some mention of your PG version would be appropriate, too.
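[A hedged sketch of how one might gather that information from psql; the table name is illustrative:]
SELECT version();   -- reports the PostgreSQL version, as asked
\d mytable          -- psql meta-command: lists the table's indexes, foreign keys, triggers and rules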
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> This, I guess, would be quite beneficial for VIEWs. :)
Have you tried it?
regression-# SELECT entry_id,message FROM entries NATURAL JOIN messages ORDER BY
entry_id DESC LIMIT 10;
QUERY PLAN
-
On Wednesday, 6 October 2004 at 09:31, you wrote:
> postgres=# explain ANALYZE select * from test where today < '2004-01-01';
>                        QUERY PLAN
> ------------------------------------------------------------------
>  Seq Scan on test  (cost=0.00..19.51 rows=334 width=44) (actual
>  time=0.545..2.429 row
On Tuesday, 5 October 2004 at 08:49, Chris Hutchinson wrote:
> Running a trivial query in v7.4.2 (installed with fedora core2) using
> EXPLAIN ANALYZE is taking considerably longer than just running the query
> (2mins vs 6 secs). I was using this query to quickly compare a couple of
> systems after installing a faster disk.
Tom Lane wrote:
Ole Langbehn <[EMAIL PROTECTED]> writes:
What do you think about the idea of a "UniqueSort" which would do
sort+unique in one pass?
This is what Oracle does, and it is quite fast with it...
Hashing is at least as fast, if not faster.
regards, tom lane
I got good mileage in a
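[A hedged illustration of the hashing point, assuming a table t with a column col; in this era SELECT DISTINCT is implemented with sort+unique, while an aggregate-free GROUP BY can use a HashAggregate:]
EXPLAIN ANALYZE SELECT DISTINCT col FROM t;       -- typically Sort + Unique
EXPLAIN ANALYZE SELECT col FROM t GROUP BY col;   -- may show HashAggregate instead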
postgres=# explain ANALYZE select * from test where today < '2004-01-01';
QUERY PLAN
Seq Scan on test (cost=0.00..19.51 rows=334 width=44) (actual
time=
Running a trivial query in v7.4.2 (installed with fedora core2) using
EXPLAIN ANALYZE is taking considerably longer than just running the query
(2mins vs 6 secs). I was using this query to quickly compare a couple of
systems after installing a faster disk.
Is this sort of slowdown to be expected?
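[A hedged sketch of how to reproduce the comparison from psql; the table is borrowed from another message in this digest, and the per-row timing calls that EXPLAIN ANALYZE makes are the usual source of this overhead:]
\timing
SELECT count(*) FROM test;                  -- plain execution time
EXPLAIN ANALYZE SELECT count(*) FROM test;  -- same work plus per-row instrumentation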
I just ran a COPY of a million records several times, and each time I
ran it, it apparently got exponentially slower. If I do an insert of
10 million records, even with 2 indexes (same table) it doesn't appear
to slow down at all. Any ideas?
- Mike H.
(I apologize for the ^Hs)
Script started o
I've been wondering...
Suppose we have two tables
CREATE TABLE messages (
    message_id serial PRIMARY KEY,
    message    text NOT NULL
);
CREATE TABLE entries (
    entry_id   serial PRIMARY KEY,
    message_id integer NOT NULL REFERENCES messages
);
And we have a join:
SELECT entry_id,message FROM entries NATURAL JOIN messages ORDER BY entry_id DESC LIMIT 10;
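[To see what the planner actually does with the LIMIT, per the "Have you tried it?" reply earlier in this digest:]
EXPLAIN ANALYZE
SELECT entry_id, message
  FROM entries NATURAL JOIN messages
 ORDER BY entry_id DESC LIMIT 10;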