Hi,
I'm working on a project to make an application run on MySQL and PostgreSQL.
I find that PostgreSQL runs up to 10 times slower than MySQL. For small record
counts it is not much of a problem, but as the number of records grows (up to
12,000 records) the difference becomes quite significant. We are talking about 15s
On Wed, Sep 03, 2003 at 06:08:57AM -0700, Azlin Ghazali wrote:
> I find that PostgreSQL runs up to 10 times slower than MySQL. For small records
Have you done any tuning on PostgreSQL? Have you vacuumed, &c.? All
the usual questions.
A
--
Andrew Sullivan 204-414
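For reference, the routine maintenance Andrew is asking about can be run like this (a sketch; `accounts` is a placeholder table name, not one from the thread):

```sql
-- Reclaim dead rows and refresh planner statistics in one pass.
VACUUM ANALYZE accounts;

-- Or for every table in the current database (run as the database owner):
VACUUM ANALYZE;
```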
Hi everyone,
Saw this earlier on today, on the mailing list of the Open Source
Development Labs people who are porting their database testing suite
from SAP to PostgreSQL.
The comment near the end by Jenny Zhang (one of the porters), saying
that "I will put a tar ball on SourceForge today, tho
> For small records
> it is not much problems. But as the records grew (up to 12,000
> records) the
> difference is quite significant.
Although there are many tuning options, I'd suggest starting by making sure
you have an index (unique where appropriate) on accposd.date,
accposd.item, i
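A sketch of the index being suggested, assuming the truncated advice means a composite index on those two accposd columns (the exact column list is cut off, so this is an assumption):

```sql
-- Composite index on the columns named above; make it UNIQUE only if
-- (date, item) pairs really are unique in accposd.
CREATE INDEX accposd_date_item_idx ON accposd (date, item);
```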
Azlin Ghazali <[EMAIL PROTECTED]> writes:
> Below is the exact statement I used:
That's not very informative. Could we see the results of EXPLAIN ANALYZE
on that SELECT? Also, what PG version are you running?
regards, tom lane
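For reference, the diagnostics Tom is asking for can be gathered like this (the SELECT is a placeholder for the actual slow query):

```sql
-- Show the actual plan chosen and per-node execution timings.
EXPLAIN ANALYZE SELECT * FROM accposd WHERE date = '2003-09-03';

-- Report the server version string.
SELECT version();
```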
On 3 Sep 2003 at 6:08, Azlin Ghazali wrote:
> Hi,
>
> I'm working on a project to make an application run on MySQL and PostgreSQL.
> I find that PostgreSQL runs up to 10 times slower than MySQL. For small records
> it is not much problems. But as the records grew (up to 12,000 records) the
> di
> "SC" == Sean Chittenden <[EMAIL PROTECTED]> writes:
>> I need to step in and do 2 things:
SC> Thanks for posting that. Let me know if you have any questions while
SC> doing your testing. I've found that using 16K blocks on FreeBSD
SC> results in about an 8% speedup in writes to the databas
Ok... simple tests have completed. Here are some numbers.
FreeBSD 4.8
PG 7.4b2
4GB Ram
Dual Xeon 2.4GHz processors
14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5
config with 32k stripe size
Dump file:
-rw-r--r-- 1 vivek wheel 1646633745 Aug 28 11:01 19-Aug-2003.dump
Hi,
I am currently using PostgreSQL for a research
project. I observed some performance results of PostgreSQL which I would like
to discuss.
I have a server which accepts requests from
clients. It spawns a new thread for each client. The clients are trying to add
entries to a relatio
> Ok... simple tests have completed. Here are some numbers.
>
> FreeBSD 4.8
> PG 7.4b2
> 4GB Ram
> Dual Xeon 2.4GHz processors
> 14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5
> config with 32k stripe size
[snip]
> Then I took the suggestion to update PG's page size to 16
> "SC" == Sean Chittenden <[EMAIL PROTECTED]> writes:
SC> hardware setup, Vivek, would it be possible for you to run a test with
SC> 32K blocks?
Will do. What's another 4 hours... ;-)
I guess I'll halve the buffer size parameters again...
SC> I've started writing a threaded benchmarking pr
On Wed, 2003-09-03 at 14:15, Rhaoni Chiu Pereira wrote:
> Hi List,
>
>I trying to increase performance in my PostgreSQL but there is something
> wrong.when I run this SQL for the first time it takes 1 min. 40 seconds to
> return, but when I run it for the second time it takes more than 2 m
<> Version of PostgreSQL?
7.3.2-3 on RedHat 9
<>
<> Standard server configuration?
Follows attached
<> Hardware configuration?
P4 1.7 Ghz
512 MB RAM DDR
HD 20 GB 7200 RPM
<> -Original Message-
<> From: [EMAIL PROTECTED]
<> [mailto:[EMAIL PROTECTED] Behalf Of Rha
Just curious, but Bruce(?) mentioned that apparently a 32k block size was
found to show a 15% improvement ... care to run one more test? :)
On Wed, 3 Sep 2003, Vivek Khera wrote:
> Ok... simple tests have completed. Here are some numbers.
>
> FreeBSD 4.8
> PG 7.4b2
> 4GB Ram
> Dual Xeon 2.4GHz
Vivek Khera wrote:
> the restore complained often about checkpoints occurring every few
> seconds:
>
> Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too
> frequently (15 seconds apart)
> Sep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing
> CHECKPOI
> I uppercased it because config parameters are uppercased in the
> documentation. Do we mention config parameters in any other error
> messages? Should it be lowercased?
How about changing the hint?
Consider increasing CHECKPOINT_SEGMENTS in your postgresql.conf
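A sketch of the postgresql.conf change the hint refers to (the value 16 is illustrative; in the 7.x series each WAL segment is 16 MB and the default setting is 3):

```
# postgresql.conf
checkpoint_segments = 16   # raise from the default of 3 to space checkpoints out
```

Restores and other bulk loads generate WAL quickly, so a larger value keeps checkpoints from firing every few seconds.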
On Wed, 3 Sep 2003, Bruce Momjian wrote:
> Vivek Khera wrote:
> > the restore complained often about checkpoints occurring every few
> > seconds:
> >
> > Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too
> > frequently (15 seconds apart)
> > Sep 2 11:57:14 d02 post
Can you just create an extra serial column and make sure that it is
always in order with no holes in it (i.e. maintained by a nightly process, etc.)?
If so, then something like this truly flies:
select * from accounts where aid = (select cast(floor(random()*10)+1 as int));
My times on it on a 100,0
Marc G. Fournier wrote:
> On Wed, 3 Sep 2003, Bruce Momjian wrote:
>
> > Vivek Khera wrote:
> > > the restore complained often about checkpoints occurring every few
> > > seconds:
> > >
> > > Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too
> > > frequently (15 secon
I have a table with 102,384 records in it; each record is 934 bytes.
Using the following select statement:
SELECT * from
PG Info: version 7.3.4 under cygwin on Windows 2000
ODBC: version 7.3.100
Machine: 500 Mhz/ 512MB RAM / IDE HDD
Under PG: Data is returned in 26 secs!!
Under SQL Server: D
Hi,
And yes I did a vacuum.
Did you 'Analyze' too?
Cheers
Rudi.
> Under PG: Data is returned in 26 secs!!
> Under SQL Server: Data is returned in 5 secs.
> Under SQLBase: Data is returned in 6 secs.
> Under SAPDB:Data is returned in 7 secs.
What did you use as the client? Do those times include ALL resulting
data or simply the first few lines?
P
Yes, I Analyzed also, but there was no need to because it was a brand
new database.
"Rudi Starcevic" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> Hi,
>
>
> >And yes I did a vacuum.
> >
>
> Did you 'Analyze' too ?
>
> Cheers
> Rudi.
>
>
Hi,
> Yes I Analyze also, but there was no need to because it was a fresh brand
> new database.
Hmm ... Sorry I'm not sure then. I only use Linux with PG.
Even though it's 'brand new' you still need to Analyze so that any
Indexes etc. are built.
I'll keep an eye on this thread - Good luck.
Regards
All queries were run on the SERVER for all of the databases I tested.
This is all resulting data for all of the databases that I tested.
"Rod Taylor" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
On Wed, 2003-09-03 at 21:32, Rudi Starcevic wrote:
> Hmm ... Sorry I'm not sure then. I only use Linux with PG.
> Even though it's 'brand new' you still need to Analyze so that any
> Indexes etc. are built.
ANALYZE doesn't build indexes; it only updates the statistics used by
the query optimizer.
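To make the distinction concrete (table and column names are placeholders): indexes come from CREATE INDEX, statistics come from ANALYZE, and the two are independent:

```sql
-- Builds an index; does not gather any statistics.
CREATE INDEX accounts_aid_idx ON accounts (aid);

-- Gathers planner statistics; does not touch index definitions.
ANALYZE accounts;
```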
On Wed, 2003-09-03 at 15:32, Naveen Palavalli wrote:
> shared_buffers = 200
If you're using a relatively modern machine, this is probably on the low
side.
> 1) Effects related to Vacuum: I performed 10 trials of adding and
> deleting entries. In each trial, 1 client adds 10,000 entries and
(Please follow Mail-Followup-To, I'm not on the pgsql-performance
mailing list but am on the Linux-XFS mailing list. My apologies too for
the cross-post. I'm cc'ing the Linux-XFS mailing list in case people
there will be interested in this, too.)
Hi,
We have a server running PostgreSQL v7.3.3 on
In the last exciting episode, "Relaxin" <[EMAIL PROTECTED]> wrote:
> All queries were ran on the SERVER for all of the databases I tested.
Queries obviously run "on the server." That's kind of the point of
the database system being a "client/server" system.
The question is what client program(s)
Quoth "Relaxin" <[EMAIL PROTECTED]>:
> Yes I Analyze also, but there was no need to because it was a fresh
> brand new database.
That is _absolutely not true_.
It is not true with any DBMS that uses a cost-based optimizer.
Cost-based optimizers need some equivalent to ANALYZE in order to
collect
> - the way PostgreSQL expects data to be written to disk without the
>fsync calls for things not to get corrupted in the event of a crash,
>and
If you want the filesystem to deal with this, I believe it is necessary
for it to write the data out in the same order the write requests are
su
> Yes I Analyze also, but there was no need to because it was a fresh brand
> new database.
This apparently wasn't the source of the problem since he did an analyze anyway,
but my impression was that a fresh, brand-new database is exactly the
situation where an analyze is needed, i.e. a batch of data h
"Nick Fankhauser" <[EMAIL PROTECTED]> writes:
> This apparently wasn't the source of problem since he did an analyze anyway,
> but my impression was that a fresh brand new database is exactly the
> situation where an analyze is needed- ie: a batch of data has just been
> loaded and stats haven't be
Rod Taylor kirjutas N, 04.09.2003 kell 06:36:
> Another alternative is
> to buy a small 15krpm disk dedicated for WAL. In theory you can achieve
> one commit per rotation.
One commit per rotation would still be only 15000/60 = 250 tps, but
fortunately you can get better results if you use multipl
Hi List,
I am trying to increase performance in my PostgreSQL, but there is something
wrong. When I run this SQL for the first time it takes 1 min. 40 seconds to
return, but when I run it for the second time it takes more than 2 minutes, and
it should return faster than the first time.
Does anyo
Rhaoni Chiu Pereira writes:
>I trying to increase performance in my PostgreSQL but there is something
> wrong.when I run this SQL for the first time
Which SQL?
> it takes 1 min. 40 seconds to
> return, but when I run it for the second time it takes more than 2 minutes, and
> I should ret