Re: [PERFORM] Compression of text columns

2005-10-11 Thread Simon Riggs
On Mon, 2005-10-10 at 14:57 +0200, Stef wrote:
> Is there any way to achieve better compression?

You can use XML schema aware compression techniques, but PostgreSQL
doesn't know about those. You have to do it yourself, or translate the
XML into an infoset-preserving form that will still allow XPath and
friends.

Best Regards, Simon Riggs




Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Claus Guttesen
I have a PostgreSQL 7.4.8 server with 4 GB of RAM.

> #max_fsm_pages = 20000  # min max_fsm_relations*16, 6 bytes each
> #max_fsm_relations = 1000   # min 100, ~50 bytes each

If you do a vacuum verbose (when it's convenient) the last couple of
lines will tell you something like this:

INFO:  free space map: 143 relations, 62034 pages stored; 63792 total
pages needed
DETAIL:  Allocated FSM size: 300 relations + 75000 pages = 473 kB shared memory.

It says 143 relations and 63792 total pages needed, so I raised my values to
these settings:

max_fsm_relations = 300 # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 75000   # min 1000, fsm is free space map, ~6 bytes
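
If it helps, the whole check boils down to something like this (a sketch; run
it when load permits, and as a superuser so all tables are covered):

  -- in psql:
  VACUUM VERBOSE;
  -- then read the final INFO/DETAIL lines and raise max_fsm_relations and
  -- max_fsm_pages in postgresql.conf above the reported totals (both
  -- settings need a server restart to take effect)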

> #effective_cache_size = 1000# typically 8KB each

This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
changed it to:

effective_cache_size = 27462   # typically 8KB each

Bear in mind that this is 7.4.8 on FreeBSD, so these suggestions may not
apply to your environment; perhaps other members of this list can confirm or
correct them.

regards
Claus



[PERFORM] Massive delete performance

2005-10-11 Thread Andy



Hi to all,

I have the following problem: every night we send a client a "dump" of the
database that contains only their data. It is a stupid solution, but I chose
it because I couldn't find anything better. The target machine is a Windows
2003 server.

So, I have a replication with only the tables that I need to send; then I
make a copy of this replication, and from this copy I delete all the data
that is not needed.

How can I speed up this DELETE procedure? It is really slow, and there is of
course a lot of data to be deleted.

Or is there any other solution for this?

DB -> (replication) RE_DB -> (copy) -> COPY_DB -> (delete unnecessary data)
-> CLIENT_DB -> (ISDN connection) -> data to the client.

Regards,
Andy.


Re: [PERFORM] Massive delete performance

2005-10-11 Thread Sean Davis
On 10/11/05 3:47 AM, "Andy" <[EMAIL PROTECTED]> wrote:

> Hi to all, 
> 
> I have the following problem: every night we send a client a "dump" of the
> database that contains only their data. It is a stupid solution, but I chose
> it because I couldn't find anything better. The target machine is a Windows
> 2003 server.
> 
> So, I have a replication with only the tables that I need to send; then I
> make a copy of this replication, and from this copy I delete all the data
> that is not needed.
> 
> How can I speed up this DELETE procedure? It is really slow, and there is of
> course a lot of data to be deleted.

Do you have foreign key relationships that must be followed for cascade
delete?  If so, make sure that you have indices on them.  Are you running
any type of vacuum after the whole process?  What kind?
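
For example, if a child table references the table you are deleting from,
something along these lines helps (just a sketch; the table and column names
here are invented, not from your schema):

  -- index the referencing (foreign key) column so the cascade/FK checks
  -- can use an index scan instead of a sequential scan per deleted row
  CREATE INDEX child_table_report_id_idx ON child_table (report_id);
  ANALYZE child_table;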

Sean




Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy

> Do you have foreign key relationships that must be followed for cascade
> delete?  If so, make sure that you have indices on them.

Yes, I have such things, and indexes are on these fields. To be honest, this
delete is taking the longest time, but it involves about 10 tables.



> Are you running
> any type of vacuum after the whole process?  What kind?

Full vacuum (cmd: vacuumdb -f).

Is there any configuration parameter to speed up deletes?




Re: [PERFORM] Massive delete performance

2005-10-11 Thread Sean Davis
On 10/11/05 8:05 AM, "Andy" <[EMAIL PROTECTED]> wrote:

>> Do you have foreign key relationships that must be followed for cascade
>> delete?  If so, make sure that you have indices on them.
> Yes, I have such things, and indexes are on these fields. To be honest, this
> delete is taking the longest time, but it involves about 10 tables.

Can you post the EXPLAIN ANALYZE output of the next delete?

Sean




Re: [PERFORM] Massive delete performance

2005-10-11 Thread Steinar H. Gunderson
On Tue, Oct 11, 2005 at 10:47:03AM +0300, Andy wrote:
> So, I have a replication with only the tables that I need to send; then I
> make a copy of this replication, and from this copy I delete all the data
> that is not needed.
> 
> How can I speed up this DELETE procedure? It is really slow, and there is
> of course a lot of data to be deleted.

Instead of copying and then deleting, could you try just selecting out what
you wanted in the first place?
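
For instance (just a sketch; the table name, column, and value below are made
up, since I don't know your schema):

  -- build the client copy directly from a filtered SELECT,
  -- instead of copying everything and then deleting the rest
  CREATE TABLE client_copy AS
    SELECT * FROM some_table WHERE client_id = 42;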

/* Steinar */
-- 
Homepage: http://www.sesse.net/



Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy

We run the DB on a Linux system. The client has a Windows system. The
application is almost the same (so the database structure is about 80% the
same). The difference is that the client does not need all the tables.

So, the remaining tables contain a lot of extra data that does not belong to
this client. We have to send an updated "info" to the client database every
night. Our (I have to admit) "fast but not the best" solution was to
replicate the needed tables and then delete from them the information that is
not needed.

So, I send this client a "dump" of the database.

I also find the idea "not the best", but I couldn't find another fast
solution in two days, and it has worked this way for 4 months.

Our database is not THAT big (500 MB), the replication about 300 MB...
everything works fast enough except this delete.


How can I also see the cascading deletes in the EXPLAIN ANALYZE output?

The answer for Sean Davis <[EMAIL PROTECTED]>:

EXPLAIN ANALYZE
DELETE FROM report WHERE id_order IN
  (SELECT o.id FROM orders o WHERE o.id_ag NOT IN
    (SELECT cp.id_ag FROM users u
     INNER JOIN contactpartner cp ON cp.id_user = u.id
     WHERE u.name IN ('dc') ORDER BY cp.id_ag))


Hash IN Join  (cost=3532.83..8182.33 rows=32042 width=6) (actual time=923.456..2457.323 rows=59557 loops=1)
  Hash Cond: ("outer".id_order = "inner".id)
  ->  Seq Scan on report  (cost=0.00..2613.83 rows=64083 width=10) (actual time=33.269..1159.024 rows=64083 loops=1)
  ->  Hash  (cost=3323.31..3323.31 rows=32608 width=4) (actual time=890.021..890.021 rows=0 loops=1)
        ->  Seq Scan on orders o  (cost=21.12..3323.31 rows=32608 width=4) (actual time=58.428..825.306 rows=60596 loops=1)
              Filter: (NOT (hashed subplan))
              SubPlan
                ->  Sort  (cost=21.11..21.12 rows=3 width=4) (actual time=47.612..47.612 rows=1 loops=1)
                      Sort Key: cp.id_ag
                      ->  Nested Loop  (cost=0.00..21.08 rows=3 width=4) (actual time=47.506..47.516 rows=1 loops=1)
                            ->  Index Scan using users_name_idx on users u  (cost=0.00..5.65 rows=1 width=4) (actual time=20.145..20.148 rows=1 loops=1)
                                  Index Cond: ((name)::text = 'dc'::text)
                            ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..15.38 rows=4 width=8) (actual time=27.348..27.352 rows=1 loops=1)
                                  Index Cond: (cp.id_user = "outer".id)
Total runtime: 456718.658 ms






Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Sven Willenberger
On Tue, 2005-10-11 at 09:41 +0200, Claus Guttesen wrote:
> I have a postgresql 7.4.8-server with 4 GB ram.


> 
> > #effective_cache_size = 1000# typically 8KB each
> 
> This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> changed it to:
> 
> effective_cache_size = 27462# typically 8KB each

Apparently this formula is no longer relevant on FreeBSD systems, as FreeBSD
can cache up to almost all the available RAM. With 4 GB of RAM, one could
specify most of the RAM as being available for caching, assuming that nothing
but PostgreSQL runs on the server -- certainly 1/2 the RAM would be a
reasonable value to tell the planner.
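
For example (just a sketch, taking roughly half of the 4 GB at the usual 8 kB
page size):

  effective_cache_size = 262144   # 262144 * 8 kB = 2 GB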

(This was verified by using dd:
dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create
a 2G file, then
dd if=/usr/local/pgsql/iotest of=/dev/null

If you run systat -vmstat 2 you will see 0% disk access during the read
of the 2G file, indicating that it has, in fact, been cached.)


Sven




Re: [PERFORM] Massive delete performance

2005-10-11 Thread Tom Lane
"Andy" <[EMAIL PROTECTED]> writes:
> EXPLAIN ANALYZE
> DELETE FROM report WHERE id_order IN
> ...

> Hash IN Join  (cost=3532.83..8182.33 rows=32042 width=6) (actual 
> time=923.456..2457.323 rows=59557 loops=1)
> ...
> Total runtime: 456718.658 ms

So the runtime is all in the delete triggers.  The usual conclusion from
this is that there is a foreign key column pointing at this table that
does not have an index, or is not the same datatype as the column it
references.  Either condition will force a fairly inefficient way of
handling the FK deletion check.
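
One way to find the candidates is to list every foreign key that points at
the table being deleted from (a sketch; the information_schema views exist as
of 7.4, and 'report' is taken from the plan you posted):

  SELECT tc.table_name   AS referencing_table,
         kcu.column_name AS referencing_column
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu
    ON kcu.constraint_name = tc.constraint_name
  JOIN information_schema.constraint_column_usage ccu
    ON ccu.constraint_name = tc.constraint_name
  WHERE tc.constraint_type = 'FOREIGN KEY'
    AND ccu.table_name = 'report';

  -- then check that each referencing column has an index and that its
  -- datatype matches the column it references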

regards, tom lane



Re: [PERFORM] Massive delete performance

2005-10-11 Thread Andy

Oops folks,

Indeed, there were 2 important indexes missing. Now it runs about 10 times
faster. Sorry for the trouble caused :) and thanks for the help.



Hash IN Join  (cost=3307.49..7689.47 rows=30250 width=6) (actual time=227.666..813.786 rows=56374 loops=1)
  Hash Cond: ("outer".id_order = "inner".id)
  ->  Seq Scan on report  (cost=0.00..2458.99 rows=60499 width=10) (actual time=0.035..269.422 rows=60499 loops=1)
  ->  Hash  (cost=3109.24..3109.24 rows=30901 width=4) (actual time=227.459..227.459 rows=0 loops=1)
        ->  Seq Scan on orders o  (cost=9.73..3109.24 rows=30901 width=4) (actual time=0.429..154.219 rows=57543 loops=1)
              Filter: (NOT (hashed subplan))
              SubPlan
                ->  Sort  (cost=9.71..9.72 rows=3 width=4) (actual time=0.329..0.330 rows=1 loops=1)
                      Sort Key: cp.id_ag
                      ->  Nested Loop  (cost=0.00..9.69 rows=3 width=4) (actual time=0.218..0.224 rows=1 loops=1)
                            ->  Index Scan using users_name_idx on users u  (cost=0.00..5.61 rows=1 width=4) (actual time=0.082..0.084 rows=1 loops=1)
                                  Index Cond: ((name)::text = 'dc'::text)
                            ->  Index Scan using contactpartner_id_user_idx on contactpartner cp  (cost=0.00..4.03 rows=3 width=8) (actual time=0.125..0.127 rows=1 loops=1)
                                  Index Cond: (cp.id_user = "outer".id)
Total runtime: 31952.811 ms





Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alex Turner
Realise also that unless you are running the 1.5 x86-64 build, Java will not
use more than 1 gig, and if the app server requests more than 1 gig, Java
will die (I've been there) with an out-of-memory error, even though there is
plenty of free memory available.  This can easily be caused by a lazy GC
thread if the application is running high on CPU usage.

The kernel will not report memory used for caching pages as being
unavailable; if a program calls malloc, the kernel will just drop the oldest
cached disk page and give the memory to the application.

Your free -mo shows 3 gig free even with cached disk pages.  It
looks to me more like either a Java problem, or a kernel problem...

Alex Turner
NetEconomist

On 10/10/05, Jon Brisbin <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Are you sure it's not cached data pages, rather than cached inodes?
> > If so, the above behavior is *good*.
> >
> > People often have a mistaken notion that having near-zero free RAM means
> > they have a problem.  In point of fact, that is the way it is supposed
> > to be (at least on Unix-like systems).  This is just a reflection of the
> > kernel doing what it is supposed to do, which is to use all spare RAM
> > for caching recently accessed disk pages.  If you're not swapping then
> > you do not have a problem.
>
> Except for the fact that my Java app server crashes when all the
> available memory is being used by caching and not reclaimed :-)
>
> If it wasn't for the app server going down, I probably wouldn't care.
>
> --
> Jon Brisbin
> Webmaster
> NPC International, Inc.



Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alan Stange

Alex Turner wrote:

> Realise also that unless you are running the 1.5 x86-64 build, java
> will not use more than 1Gig, and if the app server requests more than
> 1gig, Java will die (I've been there) with an out of memory error,
> even though there is plenty of free mem available.  This can easily be
> caused by a lazy GC thread if the application is running high on CPU usage.


On my side of Planet Earth, the standard non-x64 1.5 JVM will happily 
use more than 1G of memory (on linux and Solaris, can't speak for 
Windows).  If you're running larger programs, it's probably a good idea 
to use the -server compiler in the JVM as well.  I regularly run with 
-Xmx1800m and regularly have >1GB heap sizes.


The standard GC will not cause an OOM error if space remains for the 
requested object.  The GC thread blocks all other threads during its 
activity, whatever else is happening on the machine.   The 
newer/experimental GC's did have some potential race conditions, but I 
believe those have been resolved in the 1.5 JVMs.  

Finally, note that the latest _05 release of the 1.5 JVM also now 
supports large page sizes on Linux and Windows:
-XX:+UseLargePages   this can be quite beneficial depending on the 
memory patterns in your programs.
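
Putting those pieces together, a launch command along these lines is what I
have in mind (the jar name is only a placeholder):

  java -server -Xmx1800m -XX:+UseLargePages -jar yourapp.jar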


-- Alan



[PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Claus Guttesen
> > I have a postgresql 7.4.8-server with 4 GB ram.
> > #effective_cache_size = 1000# typically 8KB each
> >
> > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> > changed it to:
> >
> > effective_cache_size = 27462# typically 8KB each
>
> Apparently this formula is no longer relevant on the FreeBSD systems as
> it can cache up to almost all the available RAM. With 4GB of RAM, one
> could specify most of the RAM as being available for caching, assuming
> that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM
> would be a reasonable value to tell the planner.
>
> (This was verified by using dd:
> dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create
> a 2G file then
> dd if=/usr/local/pgsql/iotest of=/dev/null
>
> If you run systat -vmstat 2 you will see 0% diskaccess during the read
> of the 2G file indicating that it has, in fact, been cached)

Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
amd64 (or both)?

regards
Claus



Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alex Turner
Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
but I was more thinking 1.4 which many folks are still using.

Alex

On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
> On my side of Planet Earth, the standard non-x64 1.5 JVM will happily
> use more than 1G of memory (on linux and Solaris, can't speak for
> Windows).

Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/

2005-10-11 Thread Sven Willenberger
On Tue, 2005-10-11 at 16:54 +0200, Claus Guttesen wrote:
> > > I have a postgresql 7.4.8-server with 4 GB ram.
> > > #effective_cache_size = 1000# typically 8KB each
> > >
> > > This is computed by sysctl -n vfs.hibufspace / 8192 (on FreeBSD). So I
> > > changed it to:
> > >
> > > effective_cache_size = 27462# typically 8KB each
> >
> > Apparently this formula is no longer relevant on the FreeBSD systems as
> > it can cache up to almost all the available RAM. With 4GB of RAM, one
> > could specify most of the RAM as being available for caching, assuming
> > that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM
> > would be a reasonable value to tell the planner.
> >
> > (This was verified by using dd:
> > dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create
> > a 2G file then
> > dd if=/usr/local/pgsql/iotest of=/dev/null
> >
> > If you run systat -vmstat 2 you will see 0% diskaccess during the read
> > of the 2G file indicating that it has, in fact, been cached)
> 
> Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
> amd64 (or both)?
> 

Not sure about 6.0 (but I don't know why it would change) but definitely
on 5.4 amd64 (and I would imagine i386 as well).

Sven




Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alan Stange

Alex Turner wrote:

Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64) 
but I was more thinking 1.4 which many folks are still using.


The 1.4.x JVMs will also work just fine with much more than 1GB of
memory.   Perhaps you'd like to try again?


-- Alan





Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/

2005-10-11 Thread Claus Guttesen
> > > Apparently this formula is no longer relevant on the FreeBSD systems as
> > > it can cache up to almost all the available RAM. With 4GB of RAM, one
> > > could specify most of the RAM as being available for caching, assuming
> > > that nothing but PostgreSQL runs on the server -- certainly 1/2 the RAM
> > > would be a reasonable value to tell the planner.
> > >
> > > (This was verified by using dd:
> > > dd if=/dev/zero of=/usr/local/pgsql/iotest bs=128k count=16384 to create
> > > a 2G file then
> > > dd if=/usr/local/pgsql/iotest of=/dev/null
> > >
> > > If you run systat -vmstat 2 you will see 0% diskaccess during the read
> > > of the 2G file indicating that it has, in fact, been cached)
> >
> > Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
> > amd64 (or both)?
> >
>
> Not sure about 6.0 (but I don't know why it would change) but definitely
> on 5.4 amd64 (and I would imagine i386 as well).

Works on FreeBSD 6.0 RC1 as well. Tried using count=4096 on a 1 GB ram
box. Same behaviour as you describe above.

regards
Claus



Re: [PERFORM] Performance on SUSE w/ reiserfs

2005-10-11 Thread Alex Turner
Well - to each his own I guess - we did extensive testing on 1.4, and
it refused to allocate much past 1gig on both Linux x86/x86-64 and
Windows.

Alex

On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
> The 1.4.x JVMs will also work just fine with much more than 1GB of
> memory.   Perhaps you'd like to try again?


Re: [PERFORM] effective cache size on FreeBSD (WAS: Performance on SUSE w/ reiserfs)

2005-10-11 Thread Vivek Khera

On Oct 11, 2005, at 10:54 AM, Claus Guttesen wrote:


Thank you for your reply. Does this apply to FreeBSD 5.4 or 6.0 on
amd64 (or both)?



It applies to FreeBSD >= 5.0.

However, I have not been able to get a real answer from the FreeBSD hacker
community on what the maximum buffer space usage will be, in order to set
this properly.  The `sysctl -n vfs.hibufspace` / 8192 estimate still works
very well for me, and I continue to use it.





Re: [PERFORM] Massive delete performance

2005-10-11 Thread Enrico Weigelt
* Andy <[EMAIL PROTECTED]> wrote:


>I have the following problem: every night we send a client a "dump" of
>the database that contains only their data. It is a stupid solution, but
>I chose it because I couldn't find anything better. The target machine is
>a Windows 2003 server.
> 
>So, I have a replication with only the tables that I need to send; then
>I make a copy of this replication, and from this copy I delete all the
>data that is not needed.

Why not filter out as much of the unnecessary stuff as possible when copying?



>How can I speed up this DELETE procedure? It is really slow, and there
>is of course a lot of data to be deleted.

Have you set up the right indices?


cu
-- 
-
 Enrico Weigelt==   metux IT service
  phone: +49 36207 519931 www:   http://www.metux.de/
  fax:   +49 36207 519932 email: [EMAIL PROTECTED]
-
  Realtime Forex/Stock Exchange trading powered by postgreSQL :))
http://www.fxignal.net/
-
