Re: [PERFORM] COPY vs INSERT

2005-05-06 Thread Jim C. Nasby
On Wed, May 04, 2005 at 10:22:56PM -0400, Tom Lane wrote:
 Also, there is a whole lot of one-time-per-statement overhead that can
 be amortized across many rows instead of only one.  Stuff like opening
 the target table, looking up the per-column I/O conversion functions,
 identifying trigger functions if any, yadda yadda.  It's not *that*
 expensive, but compared to an operation as small as inserting a single
 row, it's significant.

Has thought been given to supporting inserting multiple rows in a single
insert? DB2 supported:

INSERT INTO table VALUES(
(1,2,3),
(4,5,6),
(7,8,9)
);

I'm not sure how standard that is or if other databases support it.
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?



Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona
Thank you for the swift reply.
The test server is hardly ever vacuumed as it generally sees very
limited traffic. VACUUM is only necessary if the server sees a lot of
write operations (updates, deletes, inserts), right?

What explains the different choice of query plans then?
As can be seen from the following snippets, the test server decides to
use an index twice in Query 2, whereas the live server decides to do a
full scan of tables with 38.5k and 5.5k records.
In Query 3 it's the other way around.
Seems strange to me...

Query 2:
--- Bad idea, price_tbl holds 38.5k records

Test:
- Index Scan using aff_price_uq on price_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=2838)

Live:
- Seq Scan on price_tbl (cost=0.00..883.48 rows=2434 width=4) (actual time=0.86..67.25 rows=4570 loops=1)
  Filter: (affid = 8)


--- Bad idea, sct2subcattype_tbl holds 5.5k records

Test:
- Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..79.26 rows=26 width=8) (actual time=0.01..0.17 rows=59 loops=48)

Live:
- Seq Scan on sct2subcattype_tbl (cost=0.00..99.26 rows=5526 width=8) (actual time=0.01..30.16 rows=5526 loops=1)


Query 3:
--- Bad idea, sct2lang_tbl has 8.6k records

Test:
- Seq Scan on sct2lang_tbl (cost=0.00..150.79 rows=8679 width=8) (actual time=0.03..10.70 rows=8679 loops=1)

Live:
- Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..8.13 rows=2 width=8) (actual time=1.10..2.39 rows=2 loops=69)


I'll get a VACUUM VERBOSE of StatCon_Tbl.

Cheers
Jona

PS: The query plans are extracted using pgAdmin on Windows; if you can
recommend a better cross-platform PostgreSQL client I'd be happy to try
it out.
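(If psql output is easier to read, the plans can also be captured from
the psql command-line client that ships with PostgreSQL, which keeps the
indentation intact, e.g. something like

psql -d comm -c "EXPLAIN ANALYZE SELECT ...;" > plan.txt

with the query of interest in place of the SELECT.)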

Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently
not, as the query plans outlined below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...


Re: [PERFORM] COPY vs INSERT

2005-05-06 Thread Dennis Bjorklund
On Fri, 6 May 2005, Jim C. Nasby wrote:

 Has thought been given to supporting inserting multiple rows in a single
 insert? DB2 supported:
 
 INSERT INTO table VALUES(
 (1,2,3),
 (4,5,6),
 (7,8,9)
 );
 
 I'm not sure how standard that is or if other databases support it.

The SQL standard includes this, except that you cannot have the outer
parentheses. So it should be:

INSERT INTO table VALUES
(1,2,3),
(4,5,6),
(7,8,9);

Does DB2 demand these extra parentheses?

-- 
/Dennis Björklund




Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona
Results of VACUUM VERBOSE from both servers

Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO: --Relation public.statcon_tbl--
INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0,
UnUsed 0.
 Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO: --Relation pg_toast.pg_toast_179851--
INFO: Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0,
UnUsed 0.
 Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO: --Relation public.statcon_tbl--
INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed
6101.
 Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO: --Relation pg_toast.pg_toast_891830--
INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0,
UnUsed 5487.
 Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM
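(For what it's worth, the same page and tuple counts can also be pulled
straight from pg_class on either server, which makes it easy to keep an
eye on the main table and its TOAST table between vacuums; a small
sketch:

SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('statcon_tbl', 'pg_toast_179851', 'pg_toast_891830');

relpages is the size in 8 kB pages (with the default block size) and
reltuples the planner's row estimate, both as of the last VACUUM or
ANALYZE.)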

Cheers
Jona

Tom Lane wrote:

  Jona [EMAIL PROTECTED] writes:
  
  
I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently
not, as the query plans outlined below show.

  
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.

  
  
1) How come the query plans between the 2 servers are different?

  
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?

It'd be interesting to see the output of "vacuum verbose statcon_tbl"
on both servers ...

			regards, tom lane

PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...


Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Christopher Kings-Lynne
You didn't do analyze.
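That is, something along the lines of (a quick sketch):

ANALYZE statcon_tbl;
-- or, to refresh statistics for the whole database:
VACUUM ANALYZE;

Plain VACUUM reclaims dead space but does not gather the per-column
statistics the planner relies on.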
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] writes:
 

I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently
not, as the query plans outlined below show.
   

I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
   Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
   Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))
Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
   

The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test server?
It'd be interesting to see the output of vacuum verbose statcon_tbl
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...


Re: [PERFORM] COPY vs INSERT

2005-05-06 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Dennis Bjorklund [EMAIL PROTECTED] writes:

 On Fri, 6 May 2005, Jim C. Nasby wrote:
 Has thought been given to supporting inserting multiple rows in a single
 insert? DB2 supported:
 
 INSERT INTO table VALUES(
 (1,2,3),
 (4,5,6),
 (7,8,9)
 );
 
 I'm not sure how standard that is or if other databases support it.

 The sql standard include this, except that you can not have the outer ().
 So it should be

 INSERT INTO table VALUES
 (1,2,3),
 (4,5,6),
 (7,8,9);

Since MySQL has been supporting this idiom for ages, it can't be
standard ;-)




Re: [PERFORM] Bad choice of query plan from PG 7.3.6 to PG 7.3.9

2005-05-06 Thread Jona
Now with analyze
Test Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.
   Total CPU 0.02s/0.00u sec elapsed 1.98 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.
   Total CPU 1.75s/0.23u sec elapsed 30.36 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM
Live Server:
comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.
   Total CPU 0.00s/0.01u sec elapsed 0.01 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
   Total CPU 3.21s/0.47u sec elapsed 18.03 sec.
INFO:  Analyzing public.statcon_tbl
VACUUM

I have done some sampling, running the same query a few times over the
past few hours, and it appears that the VACUUM has helped.
The following are the results after the vacuum:

After VACUUM VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..21.29 rows=5 width=4) (actual time=0.07..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

After VACUUM ANALYZE VERBOSE:
Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.03 rows=5 width=4) (actual time=0.09..0.37 rows=39 loops=4)
  Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

The only question that remains is why one server uses its indexes and
the other doesn't, even though VACUUM ANALYZE has now been run on both
servers.
And even more interesting: before the VACUUM ANALYZE it was the server
where no vacuum had taken place that used its index.
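(If it helps narrow it down, one diagnostic I could run, purely as a
sketch, is to force the planner's hand on the server that avoids the
index and compare estimated versus actual cost:

SET enable_seqscan = off;   -- discourage seq scans for this session only
EXPLAIN ANALYZE <the query in question>;
SET enable_seqscan = on;

If the index scan then wins by a wide margin in actual time but still
looks more expensive in estimated cost, that points at the cost
estimates rather than the index itself.)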

Cheers
Jona
Christopher Kings-Lynne wrote:
You didn't do analyze.
Chris
Jona wrote:
  Results of VACUUM VERBOSE from both servers
Test server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, 
UnUsed 0.
Total CPU 0.02s/0.00u sec elapsed 0.04 sec.
INFO:  --Relation pg_toast.pg_toast_179851--
INFO:  Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 
0, UnUsed 0.
Total CPU 4.03s/0.40u sec elapsed 70.99 sec.
VACUUM

Live Server:
comm=# VACUUM VERBOSE StatCon_Tbl;
INFO:  --Relation public.statcon_tbl--
INFO:  Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, 
UnUsed 6101.
Total CPU 0.01s/0.00u sec elapsed 0.60 sec.
INFO:  --Relation pg_toast.pg_toast_891830--
INFO:  Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, 
UnUsed 5487.
Total CPU 4.44s/0.34u sec elapsed 35.48 sec.
VACUUM

Cheers
Jona
Tom Lane wrote:
Jona [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] writes:
 

I'm currently experiencing problems with long query execution times.
What I believe makes these problems particularly interesting is the
difference in execution plans between our test server running PostgreSQL
7.3.6 and our production server running PostgreSQL 7.3.9.
The test server is an upgraded "home machine", a Pentium 4 with 1GB of
memory and an IDE disk.
The production server is a dual CPU Xeon Pentium 4 with 2GB memory and
SCSI disks.
One should expect the production server to be faster, but apparently
not, as the query plans outlined below show.
  
I think the plans are fine; it looks to me like the production server
has serious table-bloat or index-bloat problems, probably because of
inadequate vacuuming.  For instance compare these entries:
-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)
   Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

-  Index Scan using ctp_statcon on statcon_tbl  (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)
   Index Cond: ((statcon_tbl.sctid = "outer".sctid) AND (statcon_tbl.ctpid = 1))

Appears to be exactly the same task ... but the test server spent
1.24 msec total while the production server spent 687.36 msec total.
That's more than half of your problem right there.  Some of the other
scans seem a lot slower on the production machine too.
 

1) How come the query plans between the 2 servers are different?
  
The production server's rowcount estimates are pretty good, the test
server's are not.  How long since you vacuumed/analyzed the test 
server?

It'd be interesting to see the output of vacuum verbose statcon_tbl
on both servers ...
regards, tom lane
PS: if you post any more query plans, please try to use software that
doesn't mangle the formatting so horribly ...


Re: [PERFORM] COPY vs INSERT

2005-05-06 Thread Bruno Wolff III
On Fri, May 06, 2005 at 01:51:29 -0500,
  Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Wed, May 04, 2005 at 10:22:56PM -0400, Tom Lane wrote:
  Also, there is a whole lot of one-time-per-statement overhead that can
  be amortized across many rows instead of only one.  Stuff like opening
  the target table, looking up the per-column I/O conversion functions,
  identifying trigger functions if any, yadda yadda.  It's not *that*
  expensive, but compared to an operation as small as inserting a single
  row, it's significant.
 
 Has thought been given to supporting inserting multiple rows in a single
 insert? DB2 supported:
 
 INSERT INTO table VALUES(
 (1,2,3),
 (4,5,6),
 (7,8,9)
 );
 
 I'm not sure how standard that is or if other databases support it.

It's on the TODO list. I don't remember anyone bringing this up for about
a year now, so I doubt anyone is actively working on it.



Re: [PERFORM] COPY vs INSERT

2005-05-06 Thread Tom Lane
Bruno Wolff III [EMAIL PROTECTED] writes:
   Jim C. Nasby [EMAIL PROTECTED] wrote:
 Has thought been given to supporting inserting multiple rows in a single
 insert?

 It's on the TODO list. I don't remember anyone bringing this up for about
 a year now, so I doubt anyone is actively working on it.

It is on TODO but I think it is only there for standards compliance.
It won't produce near as much of a speedup as using COPY does ---
in particular, trying to put thousands of rows through at once with
such a command would probably be a horrible idea.  You'd still have
to pay the price of lexing/parsing, and there would also be considerable
flailing about with deducing the data type of the VALUES() construct.
(Per spec that can be used in SELECT FROM, not only in INSERT, and so
it's not clear to what extent we can use knowledge of the insert target
columns to avoid running the generic union-type-resolution algorithm for
each column of the VALUES() :-(.)  Add on the price of shoving an
enormous expression tree through the planner and executor, and it starts
to sound pretty grim.
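For the bulk-load case itself, COPY remains the fast path. A minimal
sketch, assuming a table mytable and a tab-separated data file:

COPY mytable FROM '/path/to/data.txt';
-- or, when the file lives on the client rather than the server,
-- the psql equivalent:
\copy mytable from 'data.txt'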

regards, tom lane



Re: [PERFORM] [SQL] ORDER BY Optimization

2005-05-06 Thread Rosser Schwarz
while you weren't looking, Derek Buttineau|Compu-SOLVE wrote:

 I'm hoping this is the right place to send this.

The PostgreSQL Performance list, pgsql-performance@postgresql.org
would be more appropriate. I'm copying my followup there, as well.

As for your query, almost all the time is actually spent in the
nestloop, not the sort.  Compare:

   -  Sort  (cost=31402.85..31405.06 rows=886 width=306) (actual
 time=87454.187..87454.240 rows=10 loops=1)

vs.

  -  Nested Loop  (cost=0.00..31359.47 rows=886 width=306)
 (actual time=4.740..86430.468 rows=26308 loops=1)

That's 50-ish ms versus 80-odd seconds.

It seems to me a merge join might be more appropriate here than a
nestloop. What's your work_mem set at?  Off-the-cuff numbers show the
dataset weighing in the sub-ten mbyte range.

Provided it's not already at least that big, and you don't want to up
it permanently, try saying:

SET work_mem = 10240; -- 10 mbytes

immediately before running this query (uncached, of course) and see
what happens.

Also, your row-count estimates look pretty off-base.  When were these
tables last VACUUMed or ANALYZEd?
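Another thing worth trying, purely as a diagnostic (not a production
setting), is to see what plan you get with the nestloop taken off the
table:

SET enable_nestloop = off;   -- session-local; discourages nested-loop joins
EXPLAIN ANALYZE <your query>;
RESET enable_nestloop;

If the merge or hash join that replaces it runs in a fraction of the
time, that's further evidence the planner is being misled by the row
estimates.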

/rls

-- 
:wq



Re: [PERFORM] [SQL] ORDER BY Optimization

2005-05-06 Thread Derek Buttineau|Compu-SOLVE
Thanks for the response :)
 That's 50-ish ms versus 80-odd seconds.
 It seems to me a merge join might be more appropriate here than a
 nestloop. What's your work_mem set at?  Off-the-cuff numbers show the
 dataset weighing in the sub-ten mbyte range.
 Provided it's not already at least that big, and you don't want to up
 it permanently, try saying:
 SET work_mem = 10240; -- 10 mbytes

It's currently set at 16MB. I've also tried upping sort_mem as well,
without any noticeable impact on the uncached query. :(

 immediately before running this query (uncached, of course) and see
 what happens.
 Also, your row-count estimates look pretty off-base.  When were these
 tables last VACUUMed or ANALYZEd?

I'm not entirely sure what's up with the row-count estimates; the tables
are updated quite frequently (and VACUUM is also run quite frequently),
but I had just run a VACUUM ANALYZE on both databases before running
the explain.

I'm also still baffled by the differences in the plans between the two
servers. On the one that uses the index to sort, I get for comparison a
nestloop of:

Nested Loop  (cost=0.00..1175943.99 rows=1814 width=311) (actual time=25.337..26.867 rows=10 loops=1)

The plan the live server is using seems fairly inefficient.
Derek


[PERFORM] Whence the Opterons?

2005-05-06 Thread Mischa Sandberg
After reading the comparisons between Opteron and Xeon processors for Linux,
I'd like to add an Opteron box to our stable of Dells and Sparcs, for 
comparison.

IBM, Sun and HP have their fairly pricey Opteron systems.
The IT people are not swell about unsupported purchases off ebay.
Anyone care to suggest any other vendors/distributors?
Looking for names with national support, so that we can recommend as much to our
customers.

Many thanks in advance.
-- 
Dreams come true, not free. -- S.Sondheim




Re: [PERFORM] Whence the Opterons?

2005-05-06 Thread Ian Meyer
Mischa,

What kind of budget are you on? penguincomputing.com deals with
Opteron servers. I looked at a couple of their servers before deciding
on a HP DL145.

Ian

On 5/6/05, Mischa Sandberg [EMAIL PROTECTED] wrote:
 After reading the comparisons between Opteron and Xeon processors for Linux,
 I'd like to add an Opteron box to our stable of Dells and Sparcs, for 
 comparison.
 
 IBM, Sun and HP have their fairly pricey Opteron systems.
 The IT people are not swell about unsupported purchases off ebay.
 Anyone care to suggest any other vendors/distributors?
 Looking for names with national support, so that we can recommend as much to 
 our
 customers.
 
 Many thanks in advance.
 --
 Dreams come true, not free. -- S.Sondheim
 




Re: [PERFORM] Whence the Opterons?

2005-05-06 Thread Steve Poe
IBM, Sun and HP have their fairly pricey Opteron systems.
The IT people are not swell about unsupported purchases off ebay.

Mischa,
I certainly understand your concern, but price and support sometimes
go hand in hand. You may have to pick your battles if you want more
bang for the buck or more support. I might be wrong on this, but not
everything you buy on eBay is unsupported.

We purchased a dual Opteron from Sun off their eBay store for about $3K
less than the Buy It Now price.

From an IT perspective, support is not as critical if I can do it 
myself. If it is for business 24/7 operations, then the company should 
be able to put some money behind what they want to put their business 
on. Your mileage may vary.

Steve



Re: [PERFORM] Whence the Opterons?

2005-05-06 Thread Gavin M. Roy
Please wait a week before buying Sun v20z's or v40z's off of eBay
(j/k), as I'm in the process of picking up a few.  From everything I
hear the v20z/v40z's are a great way to go, and I'll know more in 15
days or so.

Regards,
Gavin
Steve Poe wrote:
IBM, Sun and HP have their fairly pricey Opteron systems.
The IT people are not swell about unsupported purchases off ebay.

Mischa,
I certainly understand your concern, but price and support sometimes
go hand in hand. You may have to pick your battles if you want more
bang for the buck or more support. I might be wrong on this, but not
everything you buy on eBay is unsupported.

We purchased a dual Opteron from Sun off their eBay store for about
$3K less than the Buy It Now price.

From an IT perspective, support is not as critical if I can do it 
myself. If it is for business 24/7 operations, then the company should 
be able to put some money behind what they want to put their business 
on. Your mileage may vary.

Steve

