Re: [PERFORM] [pgsql-performance] function difference(geometry,geometry) is SLOW!

2008-06-16 Thread Chris Mair


 Date: Mon, 16 Jun 2008 11:06:44 +0200 (CEST)
 From: [EMAIL PROTECTED] [EMAIL PROTECTED]
 To: pgsql-performance@postgresql.org
 Subject: function difference(geometry,geometry) is SLOW!
 Message-ID:
 [EMAIL PROTECTED]

 Hi,
 In my pgsql procedure, I use the function

 geometryDiff := difference(geometry1, geometry2);

 but this function is very slow!
 What can I do to speed up this function?
 Is there a special index for it?

 Thanks in advance!
 Luke

Hi,

this is a PostGIS function. PostGIS is an independent project,
and you might want to ask there:

http://www.postgis.org/mailman/listinfo/postgis-users

or

http://www.faunalia.com/cgi-bin/mailman/listinfo/gfoss
(Italian).

Anyway, as long as you just compute the difference between
two given shapes, no index can help you. Indices speed up
searches...
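
For completeness: if the slow part were finding candidate geometries
rather than computing a single difference, a spatial index would help.
A minimal sketch, assuming a hypothetical table shapes(id, geom):

  -- a GiST index speeds up *searching* for geometries,
  -- not the difference() computation itself
  CREATE INDEX shapes_geom_idx ON shapes USING GIST (geom);

  -- the index prunes candidate pairs via the bounding-box operator &&
  SELECT difference(a.geom, b.geom)
    FROM shapes a, shapes b
   WHERE a.geom && b.geom AND a.id <> b.id;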

Bye,
Chris.





Re: [PERFORM] Postgres performance problem

2007-08-27 Thread Chris Mair

Hi,

Note: I have already run VACUUM FULL. It does not solve the problem.

I have a postgres 8.1 database. In the last few days I have had half
the traffic of 4 weeks ago, yet resource usage has doubled. The resource
monitor graphs also show high peaks (usually there are no peaks).

Performance is degrading over time.

I'm not able to find the problem; there seem to be no slow queries (I have
log_min_duration_statement = 5000 right now, tomorrow I'll decrease it).

The server is an HP box, and no hardware problems have been detected.

Any ideas on how to debug this?


Hi,

first of all: let us know the exact version of PG and the OS.

If performance is getting worse, there usually is some bloat
involved. Not vacuuming aggressively enough might be the most
common cause. Do you autovacuum or vacuum manually?
Tell us more...
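
For example, a quick way to look for bloat on 8.1 (table name hypothetical):

  -- VACUUM VERBOSE reports page and row-version counts per table;
  -- lots of pages holding few live rows is a sign of bloat
  VACUUM VERBOSE mytable;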


Bye,
Chris.





Re: [PERFORM] Performance Solaris vs Linux

2007-08-18 Thread Chris Mair

Fredrik Bertilsson wrote:

Hi,
we are using Postgres on both Solaris and Linux servers, and 
Postgres is much slower on the Solaris servers. We have tested with 
different versions of Solaris and Postgres, but the fact remains: 
Postgres seems to be much faster on Linux. Does anybody else have 
the same experience?


Best regards,
Fredrik B


I had some performance problems on Solaris a while ago which led to
this interesting thread:

http://archives.postgresql.org/pgsql-performance/2006-04/thrd4.php#00035

Executive summary:
 - the disk write cache might be (unexpectedly) off by default on Sun gear
 - set wal_sync_method = fsync explicitly
 - some other settings (see thread; a condensed sketch follows)
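
In postgresql.conf terms, the core of it looks like this (a sketch of
that thread's findings, not a universal recommendation):

  wal_sync_method = fsync   # don't rely on the platform default on Solaris
  wal_buffers = 128         # was 8; see the thread for measurements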

Bye,
Chris.





Re: [PERFORM] Drop table vs Delete record

2007-05-23 Thread Chris Mair

Hi,

with that setup you should vacuum aggressively.
I'd send a vacuum statement in a third thread every 15 minutes or so.
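
For example (sketch: t1 is the table from the post quoted below; mydb is
a hypothetical database name):

  -- from a third connection, every 15 minutes or so:
  VACUUM ANALYZE t1;

or, scheduled via cron:

  */15 * * * * psql -d mydb -c 'VACUUM ANALYZE t1;'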

The table renaming trick doesn't sound very handy or even
necessary...

Bye,
Chris.



 Date: Tue, 22 May 2007 14:38:40 -0400
 From: Orhan Aglagul [EMAIL PROTECTED]
 To: pgsql-performance@postgresql.org
 Subject: Drop table vs Delete record
 Message-ID: [EMAIL PROTECTED]


 My application has two threads: one inserts thousands of records per second
 into a table (t1), and the other periodically deletes expired records (also
 in thousands) from the same table. So we have one thread adding rows while
 the other thread is trying to delete rows. In a short time the overall
 performance of any SQL statement on that instance degrades. (E.g. SELECT
 count(*) FROM t1 takes more than a few seconds with fewer than 10K rows.)

 My question is: would any SQL statement perform better if I periodically
 renamed the table to t1_%indx, created a new table t1 (for new inserts) and
 just dropped the tables holding expired records, rather than doing a delete?
 (t1 is a simple table with many rows and no constraints.)

 (I know I could run VACUUM ANALYZE.)

 Thanks,

 Orhan A.




Re: [PERFORM] [pgsql-advocacy] Postgres and really huge tables

2007-01-18 Thread Chris Mair


Is there any experience with PostgreSQL and really huge tables?  I'm 
talking about terabytes (plural) here in a single table.  Obviously the 
table will be partitioned, and probably spread among several different 
file systems.  Any other tricks I should know about?


We have a problem of that form here.  When I asked why postgres wasn't 
being used, the opinion that postgres would just [expletive] die was 
given.  Personally, I'd bet money postgres could handle the problem (and 
better than the ad-hoc solution we're currently using).  But I'd like a 
couple of replies of the form "yeah, we do that here - no problem" to 
wave around.


I've done a project using 8.1 on Solaris that had a table that was 
close to 2 TB. The funny thing is that it worked just fine even without 
partitioning.


But, then again: the size of a single record was huge too: ~50 KB.
So there were not insanely many records: just something
on the order of tens of millions.

The queries were done on just some int fields, so the index of the
whole thing fit into RAM.

A lot of data, but not a lot of records... I don't know if that
counts. I guess the people at Greenplum and/or Sun have more exciting
stories ;)


Bye, Chris.






Re: [PERFORM] running benchmark test on a 50GB database

2006-09-20 Thread Chris Mair

 I am running a benchmark test on a 50 GB PostgreSQL database.
 I have postgresql.conf with all parameters at their defaults.
 In this configuration the database is very, very slow.
 
 Could you please tell which is the best configuration?
 
 My system:
 Pentium D 3.0Ghz
 RAM: 1GB
 HD: 150GB SATA

We don't know what your database looks like, what queries
you're running, what "very, very slow" means for you,
or what version of PostgreSQL on what OS this is :/

These two links are a good starting point for tuning your DB:
http://www.postgresql.org/docs/8.1/static/performance-tips.html
http://www.powerpostgresql.com/PerfList/
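
As a rough, hedged starting point for 8.1 on a 1 GB box (values are
illustrative, not tuned for your workload; see the links above):

  shared_buffers = 32768          # in 8kB pages on 8.1, so ~256 MB
  effective_cache_size = 65536    # ~512 MB; what the OS will likely cache
  work_mem = 8192                 # in kB, per sort/hash operation
  checkpoint_segments = 16        # more WAL between checkpoints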


Bye, Chris.





Re: [PERFORM] Reg - Autovacuum

2006-09-12 Thread Chris Mair

 Hi all,
   I have installed an application with postgres-8.1.4 and I
 have to optimize its performance. As a measure I thought of enabling
 autovacuum; is that the right decision to take? If so, please suggest
 the steps that I need to follow in order to implement autovacuum.

http://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM
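
Note that on 8.1 autovacuum is off by default and needs the stats
collector. A minimal postgresql.conf sketch:

  stats_start_collector = on   # both stats settings are required
  stats_row_level = on         # for autovacuum on 8.1
  autovacuum = on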



 
 Please also suggest other steps that I can take to improve performance.
 

http://www.powerpostgresql.com/PerfList


Bye,
Chris.


-- 

Chris Mair
http://www.1006.org




Re: [PERFORM] [PATCHES] Template0 age is increasing speedily.

2006-09-07 Thread Chris Mair
On Thu, 2006-09-07 at 16:01 +0530, Nimesh Satam wrote:
 
 I noticed that the age of template0 is increasing very rapidly. Can
 you please let me know how we can control this and what causes
 such problems?
  
 We also noticed that the database slows down heavily at a particular
 time. Can you suggest any tools which will help in diagnosing the
 root cause behind the data load?


Hi,

first of all: there is no need to cross-post to 4 lists.
If you have a performance problem, post on pgsql-performance.

Second, please tell us which version of PostgreSQL on
which operating system you're using. Diagnosing your
problem might depend on which OS you use...

Finally, explain what you mean by "the age of template0 is
increasing very rapidly" - do you mean the size is increasing?
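
If you mean transaction-ID age, it is presumably being read with
something like:

  -- age in transactions since each database was last frozen;
  -- steady growth is normal, and vacuuming resets it
  SELECT datname, age(datfrozenxid) FROM pg_database;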

Bye,
Chris.



-- 

Chris Mair
http://www.1006.org




[PERFORM] lowering priority automatically at connection

2006-05-25 Thread Chris Mair
Hi,

I find this very helpful:

  Lowering the priority of a PostgreSQL query
  http://weblog.bignerdranch.com/?p=11

Now I was wondering whether one could have a
  SELECT pg_setpriority(10);
executed automatically each time a certain user
connects (not necessarily using psql)?

Any ideas if and how this might be possible?

Regards :)
Chris.





Re: [PERFORM] lowering priority automatically at connection

2006-05-25 Thread Chris Mair

  I find this very helpful:
Lowering the priority of a PostgreSQL query
http://weblog.bignerdranch.com/?p=11
 
 That guy doesn't actually have the foggiest idea what he's doing.
 The reason there is no built-in capability to do that is that it *does
 not work well*.  Search the list archives for "priority inversion" to
 find out why not.
 
   regards, tom lane

Ok,
I've learned something new (*).
I'll drop that idea :)

Bye,
Chris.

(*) 
http://en.wikipedia.org/wiki/Priority_inversion






Re: [PERFORM] Benchmarck PostgreSQL 8.1.4 MySQL 5.0.20 and Oracle

2006-05-18 Thread Chris Mair

  Could you give us some more info about the box's performance while you
  run the PG benchmark? A few minutes' output of "vmstat 10" maybe? What
  does top say?
 
 
 Here is an extract from "vmstat 3" during the test; you can see that
 my problem is probably very high disk usage (both writes and reads).
 

 procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
  r  b   swpd   free   buff   cache  si  so   bi    bo    in    cs  us sy id wa
  0 11     92 128344   9224 2428432   0   0  287  9691  2227   685   4  3  0 93
 [...]

Yes, as is the case most of the time, disk I/O is the bottleneck here...
I'd look into everything disk-related here...



  How are you using the 3 disks? Did you split pg_xlog and the database
  on different disks or not?
 
 
 Data are on disks 1 and 2, indexes on disk 3. Perhaps I'm wrong, but
 with fsync = off, does pg_xlog run with that too?

Yes, pg_xlog is also subject to fsync=off. You might gain quite some
performance if you can manage to put pg_xlog on its own disk (just
symlink the directory).
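
A sketch of the usual procedure (stop the server first; /disk4 stands in
for the dedicated disk's mount point):

  pg_ctl stop -D $PGDATA
  mv $PGDATA/pg_xlog /disk4/pg_xlog
  ln -s /disk4/pg_xlog $PGDATA/pg_xlog
  pg_ctl start -D $PGDATA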

Anyway, as others have pointed out, consider that with fsync = off
you're losing crash safety: a power failure or OS crash can corrupt
the database.


  Can you say something about the clients? Do they run over network from
  other hosts? What language/bindings do they use?
 
 
 The client is another server on the same network. Clients are connected
 with the JDBC connector.


Ok, I don't know about that one...

  When they do inserts, are the inserts bundled or are there
  single insert transactions? Are the statements prepared?

 I use prepared statements for all requests. Each transaction is about
 5-45 requests.

Sounds ok; the inserts could be bundled together even more
if the application is compatible with that.


Bye, Chris.





Re: [PERFORM] bad performance on Solaris 10

2006-04-10 Thread Chris Mair

  Chris,
  Just to make sure the x4100 config is similar to your Linux system, can 
  you verify the default setting for disk write cache and make sure they 
  are both enabled or disabled. Here's how to check in Solaris.
  As root, run format -e -> pick a disk -> cache -> write_cache -> display
  
  Not sure how to do it on Linux though!
  
  Regards,
  -Robert
 
 I don't have access to the machine for the next few days due to, eh...
 let's call it a firewall accident ;), but it might very well be that it
 was off on the x4100 (I know it's enabled on the smaller Linux box).
 
 That together with the bad default sync method can definitely explain
 the strangely slow out-of-the-box performance I got.
 
 So thanks again for explaining this to me :)
 
 Bye, Chris.

Just for completeness:
I checked now using the above commands and can confirm that the write
cache was disabled on the x4100 and enabled on Linux.

Bye, Chris.








Re: [PERFORM] bad performance on Solaris 10

2006-04-07 Thread Chris Mair

 Ok, so I did a few runs for each of the sync methods, keeping all the
 rest constant and got this:
 
 open_datasync  0.7
 fdatasync  4.6
 fsync  4.5
 fsync_writethrough not supported
 open_sync  0.6
 
 in arbitrary units - higher is faster.
 
 Quite impressive!
 
 
   
 
 Chris,
 Just to make sure the x4100 config is similar to your Linux system, can 
 you verify the default setting for disk write cache and make sure they 
 are both enabled or disabled. Here's how to check in Solaris.
  As root, run format -e -> pick a disk -> cache -> write_cache -> display
 
 Not sure how to do it on Linux though!
 
 Regards,
 -Robert

I don't have access to the machine for the next few days due to, eh...
let's call it a firewall accident ;), but it might very well be that it
was off on the x4100 (I know it's enabled on the smaller Linux box).

That together with the bad default sync method can definitely explain
the strangely slow out-of-the-box performance I got.

So thanks again for explaining this to me :)

Bye, Chris.






Re: [PERFORM] bad performance on Solaris 10

2006-04-06 Thread Chris Mair

  Yeah - looks good! (Is the default still open_datasync?) Might be worth
  trying out the fdatasync method too (ISTR this being quite good... again
  on Solaris 8, so things might have changed)!
 
 I was just talking to a member of the Solaris-UFS team who recommended
 that we test fdatasync.

Ok, so I did a few runs for each of the sync methods, keeping all the
rest constant and got this:

open_datasync  0.7
fdatasync  4.6
fsync  4.5
fsync_writethrough not supported
open_sync  0.6

in arbitrary units - higher is faster.

Quite impressive!

Bye, Chris.






Re: [PERFORM] bad performance on Solaris 10

2006-04-05 Thread Chris Mair

   Doing what http://blogs.sun.com/roller/page/jkshah suggests:
 wal_sync_method = fsync (unchanged)
 wal_buffers = 128 (was 8)
 checkpoint_segments = 128 (was 3)
 bgwriter_all_percent = 0 (was 0.333)
 bgwriter_all_maxpages = 0 (was 5)
   and leaving everything else default (Solaris packages from pgfoundry)
   increased performance ~ 7 times!

Ok, so I couldn't quite believe my own benchmarks, and I decided
to do a fresh initdb and retry everything.

At first it looked like I couldn't reproduce the speedup I had just seen.

Then I realized it was the
wal_sync_method = fsync
line that made all the difference!

Normally, commented-out parameters show their default values, but for
wal_sync_method the sample file actually says (note the comment):

wal_sync_method = fsync  # the default is the first option
 # supported by the operating system:
 #   open_datasync
 #   fdatasync
 #   fsync
 #   fsync_writethrough
 #   open_sync

So in my last mail I drew the wrong conclusion, because I didn't comment
out wal_sync_method to double-check.

To the point: the default wal_sync_method chosen on Solaris 10 appears
to be a very bad one - for me, picking fsync increases performance
~7 times, with all other parameters unchanged!

Would it be a good idea to change this in the default install?

Bye, Chris.

PS: yes I did a fresh initdb again to double check ;)




Re: [PERFORM] bad performance on Solaris 10

2006-04-05 Thread Chris Mair

  I've done a few tests.
  
  Remounting the fs where $PGDATA lives with forcedirectio
  (together with logging, which is the default) did not help
  performance (if anything, it hurt).
  
 
 
 Sure - forcedirectio on the entire $PGDATA is a definite loss; you only 
 want it on $PGDATA/pg_xlog. The usual way this is accomplished is by 
 making a separate filesystem for pg_xlog and symlinking from $PGDATA.
 
 Did you try the other option of remounting the fs for $PGDATA without 
 logging or forcedirectio?

Not yet - I'm not on the final disk set yet.

When I get there I'll have two separate filesystems for pg_xlog and
base, and will try what you suggest.

(but note the other mail about wal_sync_method = fsync)

bye, chris.




Re: [PERFORM] bad performance on Solaris 10

2006-04-05 Thread Chris Mair
Hi,

thanks for all the replies.

I've done a few tests.

Remounting the fs where $PGDATA lives with forcedirectio
(together with logging, which is the default) did not help
performance (if anything, it hurt).

Doing what http://blogs.sun.com/roller/page/jkshah suggests:
  wal_sync_method = fsync (unchanged)
  wal_buffers = 128 (was 8)
  checkpoint_segments = 128 (was 3)
  bgwriter_all_percent = 0 (was 0.333)
  bgwriter_all_maxpages = 0 (was 5)
and leaving everything else default (Solaris packages from pgfoundry)
increased performance ~7 times!

Playing around with these modifications I find that it's actually the
  wal_buffers = 128
line alone that makes all the difference!

Quickly playing around with wal_buffers on Linux and Mac OS X
I see it influences the performance of my test a bit, maybe in the
10-20% range (I'm really doing quick tests, nothing systematic),
but nowhere near as spectacularly as on Solaris.

I'm happy so far, but I find it very surprising that this single
parameter has such an impact, and only on Solaris 10.

(My test program does bulk inserts using PQputCopyData in large
transactions - all tests were on 8.1.3.)

Bye, Chris







[PERFORM] bad performance on Solaris 10

2006-04-03 Thread Chris Mair
Hi,

I've got a somewhat puzzling performance problem here.

I'm trying to do a few tests with PostgreSQL 8.1.3 under Solaris
(an OS I'm sort of a newbie in).

The machine is an X4100 and the OS is Solaris 10 1/06, freshly installed
according to the manual. It has two SAS disks in RAID 1 and 4 GB of RAM.

Now the problem is: this box is *much* slower than I expect.

I've got a libpq test program that happily inserts data
using PQputCopyData().

It performs an order of magnitude worse than the same thing
on a small Sun (Ultra20) running Linux. Or 4 times slower than
an iBook (sic!) running MacOS X.

So I have this very bad feeling that there is something basic
I'm missing here.

Following are some stats:

"sync; dd; sync" shows these disks write at 53 MB/s = good.
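
For reference, the kind of quick test meant here (path and sizes are
illustrative; write a file larger than RAM to defeat caching):

  sync; time dd if=/dev/zero of=/pgdata/ddtest bs=1024k count=8192; sync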

iostat 1 while my test is running says:

   tty          sd0           sd1           sd2           sd5          cpu
 tin tout  kps tps serv  kps tps serv  kps tps serv   kps tps serv  us sy wt id
   1   57    0   0    0    0   0    0    0   0    0   1809  23    7   0  1  0 99
   0  235    0   0    0    0   0    0    0   0    0   2186 223   14   1  1  0 99
   0   81    0   0    0    0   0    0    0   0    0   2488 251   13   1  1  0 98
   0   81    0   0    0    0   0    0    0   0    0   2296 232   15   1  0  0 99
   0   81    0   0    0    0   0    0    0   0    0   2416 166    9   1  0  0 98
   0   81    0   0    0    0   0    0    0   0    0   2528 218   14   1  1  0 99
   0   81    0   0    0    0   0    0    0   0    0   2272 223   15   1  0  0 99

If I interpret this correctly, the disk writes at no more than 2.5
MB/sec while the Opterons do nothing = this is bad.

I've tried both a hand compile with gcc and the Solaris packages
from pgfoundry.org = same result.

Eons ago PCs had those turbo switches (it was never totally clear
why they put them there in the first place, anyway). I have this bad
feeling there's a secret turbo switch I can't spot, hidden somewhere
in Solaris :/


Bye, Chris.




Re: [PERFORM] big databases hospitals

2006-01-14 Thread Chris Mair

 Additionally, because this company develops hospital information systems:
 if someone knows of a medical institution which uses PostgreSQL and is
 happy with it, please send me information. I have only now subscribed to
 the advocacy list, and have only started to browse the archives.

Hi,

have you seen this case study:
http://www.postgresql.org/about/casestudies/shannonmedical

Bye, Chris.





Re: [PERFORM] insertion of bytea

2005-10-27 Thread Chris Mair
 On Tue, Oct 25, 2005 at 03:44:36PM +0200, Chris Mair wrote:
Is there a better, faster way to do these inserts?

 COPY is generally the fastest way to do bulk inserts (see
 PQputCopyData).


Hi,

I've rewritten the test client now to use COPY, but I'm getting
exactly the same results as when doing bundled, prepared inserts.

I'm CPU-bound, with I/O well below what my disks could do :(


Bye, Chris.


PS1: someone off-list suggested using oprofile, which I will do.

PS2: in case somebody is interested, the test client is here:
 http://www.1006.org/tmp/20051027/

 pgclient-1.1.c is prepared inserts, 2.0 is binary copy.







[PERFORM] insertion of bytea

2005-10-25 Thread Chris Mair
Hi,

I have the following test setup:

* PG 8.0.4 on Linux (CentOS 4), compiled from source.

* DB schema: essentially one table with a few int columns and
  one bytea column that stores blobs of 52000 bytes each, a
  primary key on one of the int columns.

* A test client was written in C using libpq to see what rate
  can be reached (inserting records). The client uses a
  prepared statement and bundles n inserts into a single
  transaction (n is variable for testing).

* Hardware: different setups tested, in particular a
  single-opteron box with a built in SATA disk and also an
  array of SATA disks connected via FC.

From the test runs it appears that the insert rate here is
essentially CPU-bound. I'm getting about 11 MB/s net transfer,
regardless of whether I use the built-in disk or the much faster
array, and regardless of various settings (like n, shared_mem).

vmstat says that disk bo is about 30 MB/s (the array can do much
better; I tried with dd and sync!) while the CPU is maxed out
at about 90% us and 10% sy. The client accounts for just 2% CPU;
most of it goes to the postmaster.

The client inserts random data. I found out that I can improve
things by 35% if I use random sequences of bytes that are
in the printable range vs. the full range.


Question 1:
Am I correct in assuming that even though I'm passing my 52000
bytes as a (char *) to PQexecPrepared(), encoding/decoding is
happening (think 0 -> \000) somewhere in the transfer?


Question 2:
Is there a better, faster way to do these inserts?
I'm unsure about large objects. I'm planning to use some
custom server-side functions to do computations on the bytes
in these records, and the large objects API doesn't appear
to be well suited for this.


Sidequestion:
I've tried to profile the server using CFLAGS=-p -DLINUX_PROFILE.
I'm getting profiling output, but when I look at it using
gprof bin-somewhere/postgres $PGDATA/gmon.out, I'm only seeing
what I think are the calls for the server startup. How can I profile
the (forked) backend process that actually performs all the work on
my connection?


Sorry for the long post :)
Bye,
Chris.






Re: [PERFORM] insertion of bytea

2005-10-25 Thread Chris Mair
Is there a better, faster way to do these inserts?

 COPY is generally the fastest way to do bulk inserts (see
 PQputCopyData).

Thanks :)
I'll give that a try and report the results here later.

Bye, Chris.






Re: [PERFORM] insertion of bytea

2005-10-25 Thread Chris Mair
 Am I correct in assuming that even though I'm passing my 52000
 bytes as a (char *) to PQexecPrepared(), encoding/decoding is
 happening (think 0 -> \000) somewhere in the transfer?

 Are you specifying it as a text or binary parameter?  Have you looked to
 see if the stored data is what you expect?

I'm specifying it as binary (i.e., 1's in PQexecPrepared's format
parameter). The stored data is correct.

I'll try COPY FROM STDIN WITH BINARY tomorrow and see what
I get...
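
For reference, the shape of that test (schema hypothetical, matching the
setup from the original post):

  CREATE TABLE blobs (id integer PRIMARY KEY, payload bytea);

  -- binary COPY avoids the text-mode escaping of bytea entirely
  COPY blobs (id, payload) FROM STDIN WITH BINARY;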

Thanks & Bye, Chris.





Re: [PERFORM] Data Selection Slow From VB 6.0

2005-08-17 Thread Chris Mair


 When I fire a query to search for a debtor id, it takes around 5 seconds
 to return an answer for a query [...]

Are you sure that time is actually spent in the database engine?
Maybe there are DNS resolution issues or something...

Did you try to execute the queries directly on the server from
the psql shell?

Bye, Chris.


 




Re: [PERFORM] PostgresSQL vs. Firebird

2005-07-15 Thread Chris Mair

On Thu, 2005-07-14 at 00:19 -0700, Relaxin wrote:
 Before I ask, I don't want to start a war.
 
 Can someone here give me an honest opinion of how PostgreSQL (PG) is
 better than Firebird on Windows?

A colleague of mine ran some benchmarks using those two:
http://www.1006.org/pg/postgresql_firebird_win_linux.pdf

He benchmarked inserts done through *his* own Delphi code, varying a few
parameters. The server ran on Windows in all tests. The clients
were on Windows or Linux.

The summary is that PG beats FB performance-wise in all tests except
when you do many small transactions (autocommit on) with fsync on.

Bye, Chris.







[PERFORM] interesting benchmarks PG/Firebird Linux/Windows fsync/nofsync

2005-03-15 Thread Chris Mair
Hello,

just recently I held a short course on PG.

One course attendee, Robert Dollinger, got
interested in benchmarking single inserts (since
he currently maintains an application that does
exactly that on Firebird and speed is an issue
there).

He came up with a table that I think is interesting
for other people so I asked permission to publish
it on this list.

Here it is:
http://1006.org/pg/postgresql_firebird_win_linux.pdf

Note: there are some German words in it; I can't change
the PDF, so here's a short explanation:

He tested the speed of 4000 inserts through a Delphi
application with Zeos components.

the 3 parameters are:

* transaction
  - single: all 4000 inserts inside 1 transaction
  - multi: 4000 inserts with 4000 commits

* fsync (for PG) or forced writes (for FB)
  - true/false

* Verbindung = connection
  - local
  - LAN
  - wireless

  notes: the server ran either on a Windows desktop
  machine or a Linux laptop; the client always ran
  on the Windows desktop

Timings are in msec. Note that you cannot directly
compare Windows and Linux performance, since the
machines were different.

You can, however, compare PG to Firebird, and you
can see the effect of the 3 varied parameters.

One thing that stands out is how terribly
badly Windows performed with many small single
transactions and fsync=true.

Apparently fsync on Windows is a very costly
operation.

Another (good) thing is that PG beats FB on all
other tests :-)


Bye, Chris.








Re: [PERFORM] poor performance of db?

2005-01-24 Thread Chris Mair
 I then rewrote the page to use a single select query to fetch all the
 information needed by PHP to draw the screen. That managed to shave it
 down to 3.5 seconds... but this so far is as fast as I can get the
 page to load. I have tried vacuuming and creating indexes, but to no
 avail. (Increasing shared mem buffers is yet to be done.)

If you call this select statement directly from psql instead of through
the PHP page, does the timing change?

(Just to make sure the time is actually spent in the query and not
somewhere else.)

PS: use \timing in psql to see timing information.
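
Something like this (the output shown is illustrative):

  db=> \timing
  Timing is on.
  db=> SELECT ... ;   -- your query here
  Time: 3500.000 ms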

Bye, Chris.


