[PERFORM] PostgreSQL is slow...HELP

2003-09-03 Thread Azlin Ghazali
Hi,

I'm working on a project to make an application run on MySQL and PostgreSQL.
I find that PostgreSQL runs up to 10 times slower than MySQL.  For small record
counts it is not much of a problem, but as the records grow (up to 12,000
records) the difference is quite significant.  We are talking about 15s (MySQL)
vs 111s (PostgreSQL).  Someone suggested that my way of implementing PostgreSQL
is not efficient and that someone out there might be able to help.

FYI, I'm running the application on ASP, on a Pentium 4 machine with Windows
XP Professional.

Below is the exact statement I used:

strSQL = "CREATE TABLE temp1 SELECT accposd.item, items.name, Sum(accposd.qty) as Quantity" _
     & ", accposd.loose, AVG(accposd.price) as price, Sum(accposd.amount) as sum_amount" _
     & ", Sum(accposd.cost) as sum_cost FROM" _
     & " accposd left join items on accposd.item = items.fcc" _
     & " where accposd.date between '" & varStartDate & "' AND '" & varStopDate & "'" _
     & " GROUP by accposd.item, items.name, accposd.loose ORDER by items.name"
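
One thing worth checking before any tuning: the statement as posted uses MySQL's `CREATE TABLE temp1 SELECT ...` shorthand, whereas PostgreSQL requires the standard `CREATE TABLE ... AS SELECT ...` form. A sketch of the PostgreSQL-compatible statement (the date literals are placeholders standing in for varStartDate/varStopDate):

```sql
CREATE TABLE temp1 AS
SELECT accposd.item,
       items.name,
       Sum(accposd.qty)    AS quantity,
       accposd.loose,
       AVG(accposd.price)  AS price,
       Sum(accposd.amount) AS sum_amount,
       Sum(accposd.cost)   AS sum_cost
FROM accposd
LEFT JOIN items ON accposd.item = items.fcc
WHERE accposd.date BETWEEN '2003-01-01' AND '2003-09-01'
GROUP BY accposd.item, items.name, accposd.loose
ORDER BY items.name;
```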

Below is the information about the fields:

CREATE TABLE accposd (
  fcc double precision default NULL,
  date date default NULL,
  recvbch double precision default NULL,
  type int default NULL,
  item double precision default NULL,
  qty double precision default NULL,
  price double precision default NULL,
  amount double precision default NULL,
  discamt double precision default NULL,
  cost double precision default NULL,
  loose varchar(10) default NULL,
  discflg varchar(10) default NULL,
  hour smallint default NULL,
  min smallint default NULL,
  sec smallint default NULL,
  who varchar(50) default NULL,
  promoter varchar(50) default NULL,
  userID double precision default '0',
  batchno double precision default '0'
);


CREATE TABLE items (
  fcc serial,
  code varchar(20) default NULL,
  name varchar(40) default NULL,
  description varchar(255) default NULL,
  barcode varchar(15) default NULL,
  brand varchar(30) default NULL,
  sub_category double precision default NULL,
  schedule char(1) default NULL,
  price double precision default NULL,
  lprice double precision default NULL,
  avgcost double precision default NULL,
  gname varchar(40) default NULL,
  strength varchar(10) default NULL,
  packsize double precision default NULL,
  whspack varchar(15) default NULL,
  packing varchar(10) default NULL,
  lowstock double precision default NULL,
  lstockls double precision default NULL,
  orderqty double precision default NULL,
  creation date default NULL,
  shelfno varchar(8) default NULL,
  status char(1) default NULL,
  q_cust double precision default NULL,
  ql_cust double precision default NULL,
  qoh double precision default NULL,
  qohl double precision default NULL,
  poison double precision default NULL,
  candisc double precision default NULL,
  maxdisc double precision default NULL,
  chkdate date default NULL,
  chkby varchar(5) default NULL,
  isstock double precision default NULL,
  wprice double precision default '0',
  wlprice double precision default '0',
  PRIMARY KEY  (fcc)
);


I appreciate your advice.  Thank you.

Regards,
AZLIN.


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] PostgreSQL is slow...HELP

2003-09-03 Thread Andrew Sullivan
On Wed, Sep 03, 2003 at 06:08:57AM -0700, Azlin Ghazali wrote:
 I find that PostgreSQL runs up to 10 times slower than MySQL.  For small records

Have you done any tuning on PostgreSQL?  Have you vacuumed, etc.?  All
the usual questions.

A

-- 

Andrew Sullivan 204-4141 Yonge Street
Liberty RMS   Toronto, Ontario Canada
[EMAIL PROTECTED]  M2P 2A8
 +1 416 646 3304 x110


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] PostgreSQL is slow...HELP

2003-09-03 Thread Nick Fankhauser

 For small records
 it is not much problems.  But as the records grew (up to 12,000
 records) the
 difference is quite significant.

Although there are many tuning options, I'd suggest starting by making sure
you have an index (unique in cases where appropriate) on accposd.date,
accposd.item, accposd.loose and items.name. Then run ANALYZE on the DB to
make sure the database takes advantage of the indexes where appropriate.
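
In SQL terms, that suggestion might look something like this (the index names here are invented; adjust to taste):

```sql
-- cover the WHERE, join, and GROUP BY columns
CREATE INDEX accposd_date_idx  ON accposd (date);
CREATE INDEX accposd_item_idx  ON accposd (item);
CREATE INDEX accposd_loose_idx ON accposd (loose);
CREATE INDEX items_name_idx    ON items (name);

-- refresh planner statistics so the new indexes actually get considered
ANALYZE accposd;
ANALYZE items;
```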

If this doesn't help, there are other options to pursue, but this is where I
would start.

-Nick



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] PostgreSQL is slow...HELP

2003-09-03 Thread Shridhar Daithankar
On 3 Sep 2003 at 6:08, Azlin Ghazali wrote:

 Hi,
 
 I'm working on a project to make an application run on MySQL and PostgreSQL.
 I find that PostgreSQL runs up to 10 times slower than MySQL.  For small records
 it is not much problems.  But as the records grew (up to 12,000 records) the
 difference is quite significant.  We are talking about 15s (MySQL) vs 111s 
 (PostgreSQL).  Someone suggest that my way of implementing PostgreSQL is not
 efficient and someone out there might be able to help.
 
 FYI, I running the application on ASP, XP Professional and Pentium 4 machine.

Are you running postgresql on Windows? That's not exactly a performance
monster. Is it under Cygwin?

BTW, did you do any performance tuning to postgresql?

HTH

Bye
 Shridhar

--
Vulcans do not approve of violence. -- Spock, Journey to Babel, stardate 
3842.4


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[PERFORM] FreeBSD page size (was Re: The results of my PostgreSQL/filesystemperformance tests)

2003-09-03 Thread Vivek Khera
 "SC" == Sean Chittenden [EMAIL PROTECTED] writes:

 I need to step in and do 2 things:
SC Thanks for posting that.  Let me know if you have any questions while
SC doing your testing.  I've found that using 16K blocks on FreeBSD
SC results in about an 8% speedup in writes to the database, fwiw.

Just double checking: if I do this, then I need to halve the
parameters in postgresql.conf that involve buffers, specifically,
max_fsm_pages and shared_buffers.  I think max_fsm_pages should be
adjusted since the number of pages in the system overall has been
halved.
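
In postgresql.conf terms the halving might look like this (numbers purely illustrative, assuming stock 8K-page settings to start from):

```ini
# hypothetical 8K-page settings...
shared_buffers = 8192          # 64 MB at 8K per buffer
max_fsm_pages = 1000000

# ...become these with 16K pages, keeping the same memory footprint
shared_buffers = 4096          # still 64 MB
max_fsm_pages = 500000         # total pages in the cluster halved
# effective_cache_size is also measured in pages, so it halves too
```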

Anything else that should be re-tuned for this?

My tests are still running so I don't have numbers yet.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Vivek Khera
Ok... simple tests have completed.  Here are some numbers.

FreeBSD 4.8
PG 7.4b2
4GB Ram
Dual Xeon 2.4GHz processors
14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5
 config with 32k stripe size

Dump file:
-rw-r--r--  1 vivek  wheel  1646633745 Aug 28 11:01 19-Aug-2003.dump

When restored (after deleting one index that took up ~1Gb -- turned
out it was redundant to another multi-column index):

% df -k /u/d02
Filesystem1K-blocks Used Avail Capacity  Mounted on
/dev/amrd1s1e 226408360 18067260 190228432 9%/u/d02



postgresql.conf alterations from standard:
shared_buffers = 6
sort_mem = 8192
vacuum_mem=131702
max_fsm_pages=100
effective_cache_size=25600
random_page_cost = 2


restore time: 14777 seconds
vacuum analyze time: 30 minutes
select count(*) from user_list where owner_id=315;   50388.64 ms


the restore complained often about checkpoints occurring every few
seconds:

Sep  2 11:57:14 d02 postgres[49721]: [5-1] LOG:  checkpoints are occurring too 
frequently (15 seconds apart)
Sep  2 11:57:14 d02 postgres[49721]: [5-2] HINT:  Consider increasing 
CHECKPOINT_SEGMENTS.

The HINT threw me off since I had to set checkpoint_segments in
postgresql.conf, whereas CHECKPOINT_SEGMENTS implied to me a
compile-time constant.

Anyhow, so I deleted the PG data directory, and made these two
changes:

checkpoint_segments=50
sort_mem = 131702

This *really* improved the time for the restore:

restore time: 11594 seconds

then I reset the checkpoint_segments and sort_mem back to old
values...

vacuum analyze time is still 30 minutes
select count(*) from user_list where owner_id=315;   51363.98 ms

so the select appears a bit slower but it is hard to say why.  the
system is otherwise idle as it is not in production yet.


Then I took the suggestion to update PG's page size to 16k and did the
same increase on sort_mem and checkpoint_segments as above.  I also
halved the shared_buffers and max_fsm_pages  (probably should have
halved the effective_cache_size too...)

restore time: 11322 seconds
vacuum analyze time: 27 minutes
select count(*) from user_list where owner_id=315;   48267.66 ms


Granted, given this simple test it is hard to say whether the 16k
blocks will make an improvement under live load, but I'm gonna give it
a shot.  The 16k block size shows me roughly 2-6% improvement on these
tests.

So throw in my vote for 16k blocks on FreeBSD (and annotate the docs
to tell which parameters need to be halved to account for it).


-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.Khera Communications, Inc.
Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Sean Chittenden
 Ok... simple tests have completed.  Here are some numbers.
 
 FreeBSD 4.8
 PG 7.4b2
 4GB Ram
 Dual Xeon 2.4GHz processors
 14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5
  config with 32k stripe size
[snip]
 Then I took the suggestion to update PG's page size to 16k and did the
 same increase on sort_mem and checkpoint_segments as above.  I also
 halved the shared_buffers and max_fsm_pages  (probably should have
 halved the effective_cache_size too...)
 
 restore time: 11322 seconds
 vacuum analyze time: 27 minutes
 select count(*) from user_list where owner_id=315;   48267.66 ms
 
 
 Granted, given this simple test it is hard to say whether the 16k
 blocks will make an improvement under live load, but I'm gonna give it
 a shot.  The 16k block size shows me roughly 2-6% improvement on these
 tests.
 
 So throw in my vote for 16k blocks on FreeBSD (and annotate the docs
 to tell which parameters need to be halved to account for it).

I haven't had a chance to run any tests yet (ELIFE), but there was a
suggestion that 32K blocks was a better performer than 16K blocks
(!!??!!??).  I'm not sure why this is and my only guess is that it
relies more heavily on the disk cache to ease IO.  Since you have the
hardware setup, Vivek, would it be possible for you to run a test with
32K blocks?

I've started writing a threaded benchmarking program called pg_crush
that I hope to post here in a few days that'll time connection startup
times, INSERTs, DELETEs, UPDATEs, and both sequential scans as well as
index scans for random and sequentially ordered tuples.  It's similar
to pgbench, except it generates its own data, uses pthreads (cheers on
KSE!), and returns more fine grained timing information for the
various activities.

-sc

-- 
Sean Chittenden

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Vivek Khera
 "SC" == Sean Chittenden [EMAIL PROTECTED] writes:

SC hardware setup, Vivek, would it be possible for you to run a test with
SC 32K blocks?

Will do.  What's another 4 hours... ;-)

I guess I'll halve the buffer size parameters again...

SC I've started writing a threaded benchmarking program called pg_crush
SC that I hope to post here in a few days that'll time connection startup

Ok.  Please post it when it is ready.  I've decided to wait until 7.4
is final before going to production so I've got this very expensive
very fast box doing not much of anything for a little while...



Re: [PERFORM] SQL slower when running for the second time

2003-09-03 Thread Ron Johnson
On Wed, 2003-09-03 at 14:15, Rhaoni Chiu Pereira wrote:
 Hi List,
 
 I'm trying to increase performance in my PostgreSQL but there is something
 wrong.  When I run this SQL for the first time it takes 1 min. 40 seconds to
 return, but when I run it for the second time it takes more than 2 minutes,
 and it should return faster than the first time.
 
 Does anyone have any advice?

Is it a query or insert/update?

-- 
-
Ron Johnson, Jr. [EMAIL PROTECTED]
Jefferson, LA USA

Vanity, my favorite sin.
 Larry/John/Satan, The Devil's Advocate




Re: [PERFORM] [ADMIN] SQL slower when running for the second time

2003-09-03 Thread Rhaoni Chiu Pereira

 Version of PostgreSQL?

7.3.2-3 on RedHat 9
 
 Standard server configuration?

   Follows attached

 Hardware configuration?

   P4 1.7 Ghz
   512 MB RAM DDR
   HD 20 GB 7200 RPM

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Rhaoni Chiu
 Pereira
 Sent: Wednesday, September 03, 2003 3:16 PM
 To: PostgreSQL Performance; Lista PostgreSQL
 Subject: [ADMIN] SQL slower when running for the second time
 
 
 [snip: quoted original message]



postgresql.conf
Description: Binary data



Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Marc G. Fournier


Just curious, but Bruce(?) mentioned that apparently a 32k block size was
found to show a 15% improvement ... care to run one more test? :)

On Wed, 3 Sep 2003, Vivek Khera wrote:

 [snip: full quote of Vivek's earlier test results]




Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Rod Taylor
 I uppercased it because config parameters are uppercased in the
 documentation.  Do we mention config parameters in any other error
 messages?  Should it be lowercased?

How about changing the hint?

Consider increasing CHECKPOINT_SEGMENTS in your postgresql.conf




Re: [PERFORM] FreeBSD page size

2003-09-03 Thread Marc G. Fournier


On Wed, 3 Sep 2003, Bruce Momjian wrote:

 Vivek Khera wrote:
  the restore complained often about checkpoints occurring every few
  seconds:
 
  Sep  2 11:57:14 d02 postgres[49721]: [5-1] LOG:  checkpoints are occurring too 
  frequently (15 seconds apart)
  Sep  2 11:57:14 d02 postgres[49721]: [5-2] HINT:  Consider increasing 
  CHECKPOINT_SEGMENTS.
 
  The HINT threw me off since I had to set checkpoint_segments in
  postgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a
  compile-time constant.

 Woo hoo, my warning worked.  Great.

 I uppercased it because config parameters are uppercased in the
 documentation.  Do we mention config parameters in any other error
 messages?  Should it be lowercased?

k, to me upper case denotes a compiler #define, so I would have been
confused ... I'd go with lower case and single quotes around it to denote
it's a variable to be changed ...



Re: [PERFORM] Selecting random rows efficiently

2003-09-03 Thread scott.marlowe
Can you just create an extra serial column and make sure that one is 
always in order with no holes in it?  (i.e. a nightly process, etc...)

If so, then something like this truly flies:

select * from accounts where aid = (select cast(floor(random()*100000)+1 as int));

My times on it on a 100,000 row table are under 1 millisecond.

Note that you have to have a hole-free sequence AND know how many rows 
there are, but if you can meet those needs, this is screamingly fast.
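
If the row count isn't known ahead of time, it can be folded into the query itself, still assuming a hole-free id column (a sketch; the `ORDER BY ... DESC LIMIT 1` form is used because a plain max() isn't index-optimized on 7.x):

```sql
-- pick a random existing row without hard-coding the table size
SELECT *
FROM accounts
WHERE aid = (SELECT cast(floor(random() *
               (SELECT aid FROM accounts ORDER BY aid DESC LIMIT 1)) + 1 AS int));
```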

On Sat, 30 Aug 2003, Russell Garrett wrote:

 Considering that we'd have to index the random field too, it'd be neater in
 the long term to re-number the primary key. Although, being a primary key,
 that's foreign-keyed from absolutely everywhere, so that'd probably take an
 amusingly long time.
 
 ...and no we're not from Micronesia, we're from ever so slightly less exotic
 London. Though Micronesia might be nice...
 
 Russ (also from last.fm but without the fancy address)
 
 [EMAIL PROTECTED] wrote:
  On Sat, 2003-08-30 at 09:01, Rod Taylor wrote:
  i was hoping there was some trickery with sequences that would
  allow me to easily pick a random valid sequence number..?
 
  I would suggest renumbering the data.
 
  ALTER SEQUENCE ... RESTART WITH 1;
  UPDATE table SET pkey = DEFAULT;
 
  Of course, PostgreSQL may have trouble with that update due to
  evaluation of the unique constraint immediately -- so drop the
  primary key first, and add it back after.
 
  And if there are child tables, they'd all have to be updated, too.
 
 
 
 ---(end of broadcast)---
 TIP 8: explain analyze is your friend
 




[PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Relaxin
I have a table with 102,384 records in it, each record is 934 bytes.

Using the follow select statement:
  SELECT * from table

PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100

Machine: 500 Mhz/ 512MB RAM / IDE HDD


Under PG:  Data is returned in 26 secs!!
Under SQL Server:  Data is returned in 5 secs.
Under SQLBase: Data is returned in 6 secs.
Under SAPDB:   Data is returned in 7 secs.

This is the ONLY table in the database and only 1 user.

And yes I did a vacuum.

Is this normal behavior for PG?

Thanks





Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Rod Taylor
 Under PG:  Data is returned in 26 secs!!
 Under SQL Server:  Data is returned in 5 secs.
 Under SQLBase: Data is returned in 6 secs.
 Under SAPDB:   Data is returned in 7 secs.

What did you use as the client? Do those times include ALL resulting
data or simply the first few lines?

PostgreSQL performance on windows (via Cygwin) is known to be poor.
Do you receive similar results with 7.4 beta 2?




Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Relaxin
Yes I Analyze also, but there was no need to because it was a fresh brand
new database.

Rudi Starcevic [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
 Hi,


 And yes I did a vacuum.
 

 Did you 'Analyze' too ?

 Cheers
 Rudi.








Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Rudi Starcevic
Hi,

 Yes I Analyze also, but there was no need to because it was a fresh brand
 new database.
Hmm ... Sorry I'm not sure then. I only use Linux with PG.
Even though it's 'brand new' you still need to Analyze so that any 
Indexes etc. are built.

I'll keep an eye on this thread - Good luck.

Regards
Rudi.


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Relaxin
All queries were ran on the SERVER for all of the databases I tested.

This is all resulting data for all of the databases that I tested.


Rod Taylor [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]





Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Neil Conway
On Wed, 2003-09-03 at 21:32, Rudi Starcevic wrote:
 Hmm ... Sorry I'm not sure then. I only use Linux with PG.
 Even though it's 'brand new' you still need to Analyze so that any 
 Indexes etc. are built.

ANALYZE doesn't build indexes, it only updates the statistics used by
the query optimizer (and in any case, select * from foo has only one
reasonable query plan anyway).
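
The difference is easy to see in the catalogs: ANALYZE populates the per-column statistics and builds nothing. A quick sketch (the table name is a stand-in):

```sql
ANALYZE mytable;

-- what the planner now knows about each column
SELECT attname, null_frac, n_distinct
FROM pg_stats
WHERE tablename = 'mytable';
```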

-Neil




Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Christopher Browne
Quoth Relaxin [EMAIL PROTECTED]:
 Yes I Analyze also, but there was no need to because it was a fresh
 brand new database.

That is _absolutely not true_.

It is not true with any DBMS that uses a cost-based optimizer.
Cost-based optimizers need some equivalent to ANALYZE in order to
collect statistics to allow them to pick any path other than a
sequential scan.

In this particular case, a seq scan is pretty likely to be the best
answer when there is no WHERE clause on the query.

Actually, it doesn't make all that much sense that the other systems
would be terribly much faster, because they obviously need to do some
processing on 102,384 records.

Can you tell us what you were *actually* doing?  Somehow it sounds as
though the other databases were throwing away the data whereas
PostgreSQL was returning it all kawhump! in one batch.

What programs were you using to submit the queries?
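
One way to separate scan time from transfer/materialization time is to pull the rows through a cursor in batches rather than in one go; a sketch (the actual table name wasn't posted, so `bigtable` stands in):

```sql
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM bigtable;
FETCH 1000 FROM c;   -- repeat until it returns no rows
CLOSE c;
COMMIT;
```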
-- 
let name=cbbrowne and tld=acm.org in name ^ @ ^ tld;;
http://cbbrowne.com/info/oses.html
Computers let you make more  mistakes faster than any other invention
in  human  history,  with  the  possible  exception  of  handguns  and
tequila.  -- Mitch Radcliffe



Re: [PERFORM] SELECT's take a long time compared to other DBMS

2003-09-03 Thread Nick Fankhauser

 Yes I Analyze also, but there was no need to because it was a fresh brand
 new database.

This apparently wasn't the source of problem since he did an analyze anyway,
but my impression was that a fresh brand new database is exactly the
situation where an analyze is needed- ie: a batch of data has just been
loaded and stats haven't been collected yet.

Am I mistaken?

-Nick





[PERFORM] SQL slower when running for the second time

2003-09-03 Thread Rhaoni Chiu Pereira
Hi List,

   I'm trying to increase performance in my PostgreSQL but there is something
wrong.  When I run this SQL for the first time it takes 1 min. 40 seconds to
return, but when I run it for the second time it takes more than 2 minutes, and
it should return faster than the first time.

Does anyone have any advice?

Atenciosamente,

Rhaoni Chiu Pereira
Sistêmica Computadores

Visite-nos na Web: http://sistemica.info
Fone/Fax : +55 51 3328 1122








Re: [PERFORM] [ADMIN] SQL slower when running for the second time

2003-09-03 Thread Peter Eisentraut
Rhaoni Chiu Pereira writes:

 I'm trying to increase performance in my PostgreSQL but there is something
 wrong.  When I run this SQL for the first time

Which SQL?

 it takes 1 min. 40 seconds to
 return, but when I run it for the second time it takes more than 2 minutes, and
 it should return faster than the first time.

What happens the third time?

-- 
Peter Eisentraut   [EMAIL PROTECTED]


---(end of broadcast)---
TIP 8: explain analyze is your friend