Re: [PERFORM] Compression of text columns

2005-10-11 Thread Simon Riggs
On Mon, 2005-10-10 at 14:57 +0200, Stef wrote: Is there any way to achieve better compression? You can use XML schema-aware compression techniques, but PostgreSQL doesn't know about those. You have to do it yourself, or translate the XML into an infoset-preserving form that will still allow
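
PostgreSQL already applies its own LZ-family compression to large text values through TOAST; schema-aware XML compression, as the reply notes, has to happen outside the database. A minimal sketch of the built-in storage knobs, using a hypothetical table docs:

    -- Values in EXTENDED storage are compressed and, if still large,
    -- moved out of line once the row passes the TOAST threshold.
    CREATE TABLE docs (
        id       serial PRIMARY KEY,
        xml_body text                -- text defaults to EXTENDED storage
    );

    -- Make the default explicit:
    ALTER TABLE docs ALTER COLUMN xml_body SET STORAGE EXTENDED;

    -- EXTERNAL stores out of line *without* compression; a reasonable
    -- choice if the application compresses the XML itself before insert.
    -- ALTER TABLE docs ALTER COLUMN xml_body SET STORAGE EXTERNAL;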

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Sean Davis
On 10/11/05 3:47 AM, Andy [EMAIL PROTECTED] wrote: Hi to all, I have the following problem: I have a client to which we send every night a dump of the database containing only their data. It is a stupid solution, but I chose it because I couldn't find any better.

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
Do you have foreign key relationships that must be followed for cascade delete? If so, make sure that you have indices on them. Yes, I have such things, and indexes are on these fields. To be honest, this delete takes the longest time, but it involves about 10 tables. Are you running any type
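
For cascade deletes the index that matters is on the *referencing* column of the child table: PostgreSQL indexes the primary key automatically, but not the foreign-key side. A sketch with names borrowed from later in this thread (the orders parent table is an assumption):

    CREATE TABLE orders (
        id integer PRIMARY KEY               -- indexed automatically
    );
    CREATE TABLE report (
        id       serial PRIMARY KEY,
        id_order integer REFERENCES orders (id) ON DELETE CASCADE
    );

    -- Not created automatically; without it every cascaded delete
    -- scans report sequentially.
    CREATE INDEX report_id_order_idx ON report (id_order);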

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Sean Davis
On 10/11/05 8:05 AM, Andy [EMAIL PROTECTED] wrote: Do you have foreign key relationships that must be followed for cascade delete? If so, make sure that you have indices on them. Yes, I have such things, and indexes are on these fields. To be honest, this delete takes the longest time, but it

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Steinar H. Gunderson
On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote: So, I have a replication only with the tables that I need to send, then I make a copy of this replication, and from this copy I delete all the data that is not needed. How can I speed up this DELETE procedure, because it is really

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
We run the DB on a Linux system. The client has a Windows system. The application is almost the same (so the database structure is 80% the same). The difference is that the client does not need all the tables, so in the remaining tables there is a lot of extra data that doesn't belong to this
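
When most of a table is being discarded, a standard alternative (not what the thread ends up doing, just a common trick) is to copy the rows you keep and swap tables, so no delete triggers fire at all. A rough sketch; the client_id filter and the join are hypothetical:

    BEGIN;
    -- Copy only this client's rows; a plain SELECT fires no triggers.
    CREATE TABLE report_keep AS
        SELECT r.*
        FROM report r
        JOIN orders o ON o.id = r.id_order
        WHERE o.client_id = 42;

    DROP TABLE report;
    ALTER TABLE report_keep RENAME TO report;
    -- Indexes, constraints and foreign keys must be recreated by hand.
    COMMIT;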

Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Sven Willenberger
On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote: I have a PostgreSQL 7.4.8 server with 4 GB RAM. [snip] #effective_cache_size = 1000 # typically 8KB each This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I changed it to: effective_cache_size = 27462 #

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Tom Lane
Andy [EMAIL PROTECTED] writes: EXPLAIN ANALYZE DELETE FROM report WHERE id_order IN ... Hash IN Join (cost=3532.83..8182.33 rows=32042 width=6) (actual time=923.456..2457.323 rows=59557 loops=1) ... Total runtime: 456718.658 ms. So the runtime is all in the delete triggers. The usual
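
The tell-tale gap is between the join itself (~2.5 s) and the total runtime (~457 s): the difference is spent in per-row foreign-key triggers on tables that reference report. One way to test a suspected missing index without committing anything; report_detail and orders_to_drop are made-up names:

    BEGIN;
    -- Index the referencing column the FK trigger probes for each
    -- deleted report row.
    CREATE INDEX report_detail_id_report_idx ON report_detail (id_report);

    -- EXPLAIN ANALYZE actually executes the delete...
    EXPLAIN ANALYZE
    DELETE FROM report
    WHERE id_order IN (SELECT id_order FROM orders_to_drop);

    ROLLBACK;   -- ...and the rollback discards it, index included.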

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy
Oops folks, indeed there were 2 important indexes missing. Now it runs about 10 times faster. Sorry for the trouble :) and thanks for the help. Hash IN Join (cost=3307.49..7689.47 rows=30250 width=6) (actual time=227.666..813.786 rows=56374 loops=1) Hash Cond: (outer.id_order =

Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alex Turner
Realise also that unless you are running the 1.5 x86-64 build, Java will not use more than 1 GB, and if the app server requests more than 1 GB, Java will die (I've been there) with an out-of-memory error, even though there is plenty of free memory available. This can easily be caused by a lazy GC

[PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Claus Guttesen
I have a PostgreSQL 7.4.8 server with 4 GB RAM. #effective_cache_size = 1000 # typically 8KB each This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I changed it to: effective_cache_size = 27462 # typically 8KB each Apparently this formula is no longer
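
The arithmetic behind that number, spelled out (the hibufspace byte count is reconstructed from 27462 x 8192, so treat it as an example value):

    -- sysctl -n vfs.hibufspace  -->  224968704 bytes
    -- 224968704 / 8192          =    27462 eight-kilobyte pages
    -- effective_cache_size is only a planner hint, so it can be tried
    -- per session before touching postgresql.conf:
    SET effective_cache_size = 27462;
    SHOW effective_cache_size;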

Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Sven Willenberger
On Tue, 2005-10-11 at 16:54 +0200, Claus Guttesen wrote: I have a PostgreSQL 7.4.8 server with 4 GB RAM. #effective_cache_size = 1000 # typically 8KB each This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I changed it to: effective_cache_size = 27462 #

Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Claus Guttesen
Apparently this formula is no longer relevant on FreeBSD systems, as the OS can cache up to almost all the available RAM. With 4 GB of RAM, one could specify most of the RAM as being available for caching, assuming that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM
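
Worked through for the 4 GB machine in the thread, taking the suggested half-of-RAM figure (the fraction itself is a judgment call, not a rule):

    -- 2 GB = 2147483648 bytes; 2147483648 / 8192 = 262144 pages
    SET effective_cache_size = 262144;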

Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alex Turner
Well - to each his own I guess - we did extensive testing on 1.4, and it refused to allocate much past 1 GB on both Linux x86/x86-64 and Windows. Alex. On 10/11/05, Alan Stange [EMAIL PROTECTED] wrote: Alex Turner wrote: Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64), but I was

Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Vivek Khera
On Oct 11, 2005, at 10:54 AM, Claus Guttesen wrote: Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on amd64 (or both)? It applies to FreeBSD >= 5.0. However, I have not been able to get a real answer from the FreeBSD hacker community on what the max buffer space usage will

Re: [PERFORM] Massive delete performance

2005-10-11 Thread Enrico Weigelt
* Andy [EMAIL PROTECTED] wrote: [snip] I have the following problem: I have a client to which we send every night a dump of the database containing only their data. It is a stupid solution, but I chose it because I couldn't find any better. The target