Re: [PERFORM] Used Memory

2005-11-06 Thread Christian Paul B. Cosinas

It affects my application because the database server starts to slow down; as a
result, function calls return very slowly.

Any more ideas about this, everyone? Please.

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Turner
Sent: Friday, October 21, 2005 3:42 PM
To: Jon Brisbin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

[snip]

to the second processor in my dual Xeon eServer) has got me to the
point that the perpetually high memory usage doesn't affect my
application server.

I'm curious - how does the high memory usage affect your application server?

Alex

I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html

FW: [PERFORM] Used Memory

2005-11-06 Thread Christian Paul B. Cosinas
Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60 
shared_buffers = 48000   
sort_mem = 32168
sync = false

Do you think this is enough, or can you recommend a better configuration for
my server?

The server is also running PHP and Apache, but we're not using them
extensively - for development purposes only.

The database slowdown occurs most of the time (when free memory is low).
I don't think it has anything to do with vacuum; we only run a full server
vacuum once a day.


-Original Message-
From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
Sent: Monday, October 24, 2005 3:14 AM
To: Christian Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory
  
 
 I just noticed that whenever the free memory in the first row (which 
 is 55036 as of now) gets low, the database server's response gets slower. 
  

Also, how about posting your postgresql.conf (or just the non-default
parameters) to this list?



Some other stuff that could be relevant:

- Is the machine just a database server, or does it run (say) Apache + Php?
- When the slowdown is noticed, does this coincide with certain activities -
e.g. backup, daily maintenance, data load(!), etc.?


regards

Mark

 
 I choose Polesoft Lockspam to fight spam, and you?
 http://www.polesoft.com/refer.html

Nope, not me either.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


FW: [PERFORM] Used Memory

2005-10-25 Thread Christian Paul B. Cosinas

Here are the configuration of our database server:
port = 5432
max_connections = 300
superuser_reserved_connections = 10
authentication_timeout = 60 
shared_buffers = 48000   
sort_mem = 32168
sync = false

Do you think this is enough, or can you recommend a better configuration for
my server?

The server is also running PHP and Apache, but we're not using them
extensively - for development purposes only.

The database slowdown occurs most of the time (when free memory is low).
I don't think it has anything to do with vacuum; we only run a full server
vacuum once a day.





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] Used Memory

2005-10-24 Thread Kevin Grittner
In addition to what Mark pointed out, there is the possibility that a query
is running which is scanning a large table or otherwise bringing in a
large number of pages from disk.  That would first use up all available
unused cache space, and then may start replacing some of your
frequently used data.  This can cause slowness for some time after the
process which flushed the cache, as pages are reread and recached.

Keep in mind that the cache could be flushed by some external process,
such as copying disk files.

The use of free memory for caching is not slowing you down; but if it
coincides with slowness, it could be a useful clue.

-Kevin


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Used Memory

2005-10-24 Thread Craig A. James

Kevin Grittner wrote:

In addition to what Mark pointed out, there is the possibility that a
query
is running which is scanning a large table or otherwise bringing in a
large number of pages from disk.  That would first use up all available
unused cache space, and then may start replacing some of your
frequently used data.  


An LRU cache is often a bad strategy for database applications.  There are two 
illustrations that show why.

1. You have an index that's used for EVERY search, but each search returns a 
large and unique set of rows.  If the rows returned happen to exceed the 
system's cache size, part or all of your index will be flushed with EVERY 
query.

2. You do a sequential scan of a table that's one block bigger than the file 
system cache, then you do it again.  At the beginning of the second scan, the 
first block of the table will have just been swapped out because it was the 
oldest, so the file system brings it back in, replacing the second block, which 
is now the oldest.  As you scan the table, each block of the table is swapped 
out JUST BEFORE you get to it.  At the start of your query, the file system 
might have had 99.9% of the relevant data in memory, but it swaps out 100% of 
it as your query progresses.

Scenario 2 above is interesting because a system that is performing very well 
can suddenly experience a catastrophic performance decline when the size of the 
data exceeds a critical limit - the file system's available cache.
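
[Editor's note] Scenario 2 can be sketched with a toy simulation (illustrative
only - not the kernel's actual page-cache code): an LRU cache one block smaller
than the table gets zero hits on a repeat sequential scan.

```python
from collections import OrderedDict

# Toy LRU "file-system cache". The table is one block larger than the
# cache, so on the second sequential scan every block has been evicted
# just before it is needed - exactly the pathology described above.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, block):
        if block in self.pages:             # cache hit
            self.pages.move_to_end(block)
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[block] = True
        return False

cache_blocks = 1000
table_blocks = cache_blocks + 1             # one block too big
cache = LRUCache(cache_blocks)

for block in range(table_blocks):           # first scan: warm the cache
    cache.access(block)

hits = sum(cache.access(b) for b in range(table_blocks))  # second scan
print(hits)   # 0 - the cache held 1000 of 1001 blocks, yet helped not at all
```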

LRU works well if your frequently-used data is used often enough to keep it in memory.  
But many applications don't have that luxury.  It's often the case that a single query 
will exceed the file system's cache size.  The file system cache is dumb -- 
its strategy is too simple for a relational database.

What's needed is a way for the application developer to explicitly say, "This 
object is frequently used, and I want it kept in memory."

Craig

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Used Memory

2005-10-24 Thread Scott Marlowe
On Mon, 2005-10-24 at 12:00, Craig A. James wrote:
 Kevin Grittner wrote:
  In addition to what Mark pointed out, there is the possibility that a
  query
  is running which is scanning a large table or otherwise bringing in a
  large number of pages from disk.  That would first use up all available
  unused cache space, and then may start replacing some of your
  frequently used data.  
 
 An LRU cache is often a bad strategy for database applications.  There are 
 two illustrations that show why.
 
 1. You have an index that's used for EVERY search, but each search returns a 
 large and unique set of rows.  If the rows returned happen to exceed the 
 system's cache size, part or all of your index will be flushed with EVERY 
 query.
 
 2. You do a sequential scan of a table that's one block bigger than the file 
 system cache, then you do it again.  At the beginning of the second scan, the 
 first block of the table will have just been swapped out because it was the 
 oldest, so the file system brings it back in, replacing the second block, 
 which is now the oldest.  As you scan the table, each block of the table is 
 swapped out JUST BEFORE you get to it.  At the start of your query, the file 
 system might have had 99.9% of the relevant data in memory, but it swaps out 
 100% of it as your query progresses.
 
 Scenario 2 above is interesting because a system that is performing very well 
 can suddenly experience a catastrophic performance decline when the size of 
 the data exceeds a critical limit - the file system's available cache.
 
 LRU works well if your frequently-used data is used often enough to keep it 
 in memory.  But many applications don't have that luxury.  It's often the 
 case that a single query will exceed the file system's cache size.  The file 
 system cache is dumb -- its strategy is too simple for a relational 
 database.
 
 What's needed is a way for the application developer to explicitly say, 
 "This object is frequently used, and I want it kept in memory."

There's an interesting conversation happening on the linux kernel
hackers mailing list right about now that applies:

http://www.gossamer-threads.com/lists/linux/kernel/580789

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Used Memory

2005-10-24 Thread Craig A. James

Scott Marlowe wrote:

What's needed is a way for the application developer to explicitly say,
"This object is frequently used, and I want it kept in memory."
 
There's an interesting conversation happening on the linux kernel

hackers mailing list right about now that applies:

http://www.gossamer-threads.com/lists/linux/kernel/580789


Thanks for the pointer.  If you're a participant in that mailing list, maybe 
you could forward this comment...

A fundamental flaw in the kernel, which goes WAY back to early UNIX implementations, is that 
the nice(1) setting of a program only applies to CPU usage, not to other resources.  In this 
case, the file-system cache has no priority, so even if I set postmaster's nice(1) value to 
a very high priority, any pissant process with the lowest priority possible can come along 
with a "cat some-big-file > /dev/null" and trash my cached file-system pages.  
It's essentially a denial-of-service mechanism that's built in to the kernel.

The kernel group's discussion on the heuristics of how and when to toss stale 
cache pages should have a strong nice(1) component to it.  A process with a low 
priority should not be allowed to toss memory from a higher-priority process 
unless there is no other source of memory.

Getting back to Postgres, the same points that the linux kernel group are discussing 
apply to Postgres.  There is simply no way to devise a heuristic that comes even close to 
what the app developer can tell you.  A mechanism that allowed an application to say, 
"Keep this table in memory" is the only way.  App developers should be advised 
to use it sparingly, because most of the time the system is pretty good at memory 
management, and such a mechanism hobbles the system's ability to manage.  But when it's 
needed, there is no substitute.

Craig


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [PERFORM] Used Memory

2005-10-24 Thread Christian Paul B. Cosinas
Hi to all those who replied - thank you.

I monitored my database server a while ago and found out that memory is used
extensively when I am fetching records from the database. I use the command
"fetch all" in my VB code and put the result in a recordset. This command also
uses the CPU extensively.

Is there something wrong with my code, or is it just the way PostgreSQL
behaves, which I cannot do anything about?

I monitored just one workstation connecting to the database server, and it is
already eating up about 20% of the database server's CPU.

I don't think this will scale to our system, since we have a target of
25 PCs connecting to the database server most of the time.
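
[Editor's note] The cost of "fetch all" comes from materializing the entire
result set at once. A minimal sketch of the difference (pure Python with
simulated rows - in practice one would use a named server-side cursor and
fetch in batches instead of FETCH ALL; the helper names here are made up for
illustration):

```python
# Materializing every row holds the whole result set in client memory
# (and makes the server produce it all at once); iterating in fixed-size
# batches keeps the working set small and spreads the CPU cost out.

def rows(n):
    """Simulated result set - stands in for a large query result."""
    for i in range(n):
        yield (i, f"row-{i}")

def fetch_all(src):
    return list(src)                   # entire result set in memory at once

def fetch_batched(src, size=1000):
    """Process rows batch by batch; return the peak batch size held."""
    batch, peak = [], 0
    for r in src:
        batch.append(r)
        if len(batch) == size:
            peak = max(peak, len(batch))
            batch.clear()              # process and discard the batch
    return max(peak, len(batch))

all_rows = fetch_all(rows(100_000))
print(len(all_rows))                   # 100000 rows held simultaneously
print(fetch_batched(rows(100_000)))    # never more than 1000 at once
```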

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Craig A. James
Sent: Monday, October 24, 2005 9:47 PM
To: Scott Marlowe
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

Scott Marlowe wrote:
What's needed is a way for the application developer to explicitly 
say, "This object is frequently used, and I want it kept in memory."
  
 There's an interesting conversation happening on the linux kernel 
 hackers mailing list right about now that applies:
 
 http://www.gossamer-threads.com/lists/linux/kernel/580789

Thanks for the pointer.  If you're a participant in that mailing list, maybe
you could forward this comment...

A fundamental flaw in the kernel, which goes WAY back to early UNIX
implementations, is that the nice(1) setting of a program only applies to
CPU usage, not to other resources.  In this case, the file-system cache has
no priority, so even if I set postmaster's nice(1) value to a very high
priority, any pissant process with the lowest priority possible can come
along with a "cat some-big-file > /dev/null" and trash my cached file-system
pages.  It's essentially a denial-of-service mechanism that's built in to
the kernel.

The kernel group's discussion on the heuristics of how and when to toss
stale cache pages should have a strong nice(1) component to it.  A process
with a low priority should not be allowed to toss memory from a
higher-priority process unless there is no other source of memory.

Getting back to Postgres, the same points that the linux kernel group are
discussing apply to Postgres.  There is simply no way to devise a heuristic
that comes even close to what the app developer can tell you.  A mechanism
that allowed an application to say, "Keep this table in memory" is the only
way.  App developers should be advised to use it sparingly, because most of
the time the system is pretty good at memory management, and such a
mechanism hobbles the system's ability to manage.  But when it's needed,
there is no substitute.

Craig


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Used Memory

2005-10-24 Thread Mark Kirkwood

Christian Paul B. Cosinas wrote:

Hi To all those who replied. Thank You.

I monitored my database server a while ago and found out that memory is used
extensively when I am fetching records from the database. I use the command
"fetch all" in my VB code and put the result in a recordset. This command also
uses the CPU extensively.

Is there something wrong with my code, or is it just the way PostgreSQL
behaves, which I cannot do anything about?

I monitored just one workstation connecting to the database server, and it is
already eating up about 20% of the database server's CPU.

I don't think this will scale to our system, since we have a target of
25 PCs connecting to the database server most of the time.



Could you post the query and the output of EXPLAIN ANALYZE?

In addition, have you run ANALYZE on all the tables in that database ? 
(sorry, have to ask :-) ).


cheers

Mark

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Used Memory

2005-10-24 Thread Christian Paul B. Cosinas
Hi mark

I have many functions - more than 100 - in the database :) And I am
dealing with about 3 million records in one database.
And there are about 100 databases :)


-Original Message-
From: Mark Kirkwood [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 25, 2005 3:07 AM
To: Christian Paul B. Cosinas
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

Christian Paul B. Cosinas wrote:
 Hi To all those who replied. Thank You.
 
 I monitored my database server a while ago and found out that memory is 
 used extensively when I am fetching records from the database. I use 
 the command "fetch all" in my VB code and put the result in a recordset. 
 This command also uses the CPU extensively.
 
 Is there something wrong with my code, or is it just the way PostgreSQL 
 behaves, which I cannot do anything about?
 
 I monitored just one workstation connecting to the database server, and 
 it is already eating up about 20% of the database server's CPU.
 
 I don't think this will scale to our system, since we have a target of 
 25 PCs connecting to the database server most of the time.
 
 

Could you post the query and the output of EXPLAIN ANALYZE?

In addition, have you run ANALYZE on all the tables in that database ? 
(sorry, have to ask :-) ).

cheers

Mark




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Used Memory

2005-10-24 Thread Mark Kirkwood

Christian Paul B. Cosinas wrote:

Hi mark

I have so many functions, more than 100 functions in the database :) And I
am dealing about 3 million of records in one database.
And about 100 databases :)



LOL - sorry, I misunderstood your previous message to mean you had 
identified *one* query where 'fetch all' was causing the problem!


Having said that, to make much more progress, you probably want to 
identify those queries that are consuming your resources, pick one or two 
of the particularly bad ones, and post 'em.


There are a number of ways to perform said identification, enabling 
stats collection might be worth a try.


regards

Mark


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Used Memory

2005-10-23 Thread Christian Paul B. Cosinas

Here is the result of the "free" command I am talking about.
What does this result mean?

             total       used       free     shared    buffers     cached
Mem:       6192460    6137424      55036          0      85952    5828844
-/+ buffers/cache:     222628    5969832
Swap:      2096472          0    2096472

I just noticed that whenever the free memory in the first row (which is
55036 as of now) gets low, the database server's response gets slower.









Re: [PERFORM] Used Memory

2005-10-23 Thread Mark Kirkwood

Christian Paul B. Cosinas wrote:
 


Here is the result of the "free" command I am talking about.

What does this result mean?



I seem to recall the Linux man page for 'free' being most 
unenlightening, so have a look at:


http://gentoo-wiki.com/FAQ_Linux_Memory_Management

(For Gentoo, but should be applicable to RHEL).

The basic idea is that modern operating systems try to make as much use 
of the memory as possible. PostgreSQL depends on this behavior - e.g. a 
page that has previously been fetched from disk will be cached, so it 
can be read from memory next time, as this is faster(!)


 

I just noticed that whenever the free memory in the first row (which 
is 55036 as of now) gets low, the database server's response gets 
slower.
 


Well, you could be swapping - what does the swap line of 'free' show then?

Also, how about posting your postgresql.conf (or just the non-default 
parameters) to this list?


Some other stuff that could be relevant:

- Is the machine just a database server, or does it run (say) Apache + Php?
- When the slowdown is noticed, does this coincide with certain 
activities - e.g. backup, daily maintenance, data load(!), etc.?



regards

Mark



I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html


Nope, not me either.


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Used Memory

2005-10-21 Thread Jens-Wolfhard Schicke
--On Friday, October 21, 2005 03:40:47 + Christian Paul B. Cosinas 
[EMAIL PROTECTED] wrote:

I am confused about PostgreSQL's memory handling.
I rebooted my server, which runs PostgreSQL 8.0 on Redhat 9 - a dual Xeon
server with 6 GB of memory.

Of course there is not much memory still used since it is just restarted.

But after a number of accesses to the tables, the memory gets used up and
is not freed. Even after this access to the database, when the server is
just idle, the memory is still used up. I am monitoring this using the
"free" command, which shows about 5.5 GB of used memory and the rest free.

I suppose you looked at the top row of the free output?

Because there the disk-cache is counted as used... Have a look at the 
second row where buffers are counted as free, which they more or less are.
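
[Editor's note] Using the numbers posted earlier in this thread, the second
row of free's output can be reproduced by hand: the first row counts the page
cache as "used", while the memory actually available to applications is
free + buffers + cached.

```python
# Figures (in kB) from the "free" output posted earlier in this thread.
total, used, free_kb = 6192460, 6137424, 55036
buffers, cached = 85952, 5828844

really_used = used - buffers - cached   # memory applications truly hold
available   = free_kb + buffers + cached  # reclaimable on demand

print(really_used)  # 222628 - matches the "-/+ buffers/cache" used column
print(available)    # 5969832 - ~5.7 GB effectively free
```

So roughly 5.7 GB of the "used" 6 GB is just cache the kernel will hand back
whenever a process needs it.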



Is there something that I should do to minimize and free up the used
memory?
No, the buffers make your database faster because they reduce direct disk 
access.



I choose Polesoft Lockspam to fight spam, and you?
http://www.polesoft.com/refer.html

I don't :)

With kind regards,
Jens Schicke
--
Jens Schicke  [EMAIL PROTECTED]
asco GmbH http://www.asco.de
Mittelweg 7   Tel 0531/3906-127
38106 BraunschweigFax 0531/3906-400

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Used Memory

2005-10-21 Thread Christian Paul B. Cosinas

Also, does creating temporary tables in a function and not dropping them
affect the performance of the database?


-Original Message-
From: Jens-Wolfhard Schicke [mailto:[EMAIL PROTECTED]
Sent: Friday, October 21, 2005 7:23 AM
To: Christian Paul B. Cosinas; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Used Memory

--On Friday, October 21, 2005 03:40:47 + Christian Paul B. Cosinas 
[EMAIL PROTECTED] wrote:
 I am confused about PostgreSQL's memory handling.
 I rebooted my server, which runs PostgreSQL 8.0 on Redhat 9 - a dual 
 Xeon server with 6 GB of memory.

 Of course there is not much memory still used since it is just restarted.

 But after a number of accesses to the tables, the memory gets used up 
 and is not freed. Even after this access to the database, when the 
 server is just idle, the memory is still used up. I am monitoring this 
 using the "free" command, which shows about 5.5 GB of used memory and 
 the rest free.
I suppose you looked at the top row of the free output?

Because there the disk-cache is counted as used... Have a look at the
second row where buffers are counted as free, which they more or less are.

 Is there something that I should do to minimize and free up the used 
 memory?
No, the buffers make your database faster because they reduce direct disk
access

 I choose Polesoft Lockspam to fight spam, and you?
 http://www.polesoft.com/refer.html
I don't :)

With kind regards,
Jens Schicke
-- 
Jens Schicke  [EMAIL PROTECTED]
asco GmbH http://www.asco.de
Mittelweg 7   Tel 0531/3906-127
38106 BraunschweigFax 0531/3906-400




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PERFORM] Used Memory

2005-10-21 Thread Jon Brisbin
On Fri, 21 Oct 2005 03:40:47 -
Christian Paul B. Cosinas [EMAIL PROTECTED] wrote:


 But after a number of access to the tables the memory is being used
 and it is not being free up. Actually after this access to the
 database and the server is just idle

I noticed this behavior on my SUSE linux box as well. I thought it was
a memory leak in something (I think there was an actual memory leak in
the kernel shared memory stuff, which I fixed by upgrading my kernel
to 2.6.13-ck8). It turns out that some file systems are better than
others when it comes to increasing the performance of I/O on Linux.
ReiserFS was what I put on originally and by the end of the day, the
box would be using all of its available memory in caching inodes.

I kept rebooting and trying to get the memory usage to go down, but it
never did. All but 500MB of it was disk cache. I let my apps just run
and when the application server needed more memory, it reclaimed it from
the disk cache, so there weren't side effects to the fact that top and
free always reported full memory usage.

They tell me that this is a good thing, as it reduces disk I/O and
increases performance. That's all well and good, but it's entirely
unnecessary in our situation. Despite that, I can't turn it off because
my research into the issue has shown that kernel developers don't want
users to be able to turn off disk caching. There is a value
in /proc/sys/vm/vfs_cache_pressure that can be changed, which will
affect the propensity of the kernel to cache files in RAM (google it
to find the suggestions on what value to set it to), but there isn't a
setting to turn that off on purpose.

After rolling my own CK-based kernel, switching to XFS, and tweaking
the nice and CPU affinity of my database process (I use schedtool in my
CK kernel to run it at SCHED_FIFO, nice -15, and CPU affinity confined
to the second processor in my dual Xeon eServer) has got me to the
point that the perpetually high memory usage doesn't affect my
application server.

Hope any of this helps.

Jon Brisbin
Webmaster
NPC International, Inc.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Used Memory

2005-10-21 Thread Alex Turner
[snip]

to the second processor in my dual Xeon eServer) has got me to the
point that the perpetually high memory usage doesn't affect my
application server.
I'm curious - how does the high memory usage affect your application server?

Alex 



[PERFORM] Used Memory

2005-10-20 Thread Christian Paul B. Cosinas

Hi!

I am confused about PostgreSQL's memory handling.

Here is the scenario:

I rebooted my server, which runs PostgreSQL 8.0 on Redhat 9 - a dual Xeon
server with 6 GB of memory.

Of course there is not much memory used yet, since it has just been
restarted.

But after a number of accesses to the tables, the memory gets used up and
is not freed. Even after this access to the database, when the server is
just idle, the memory is still used up. I am monitoring this using the
"free" command, which shows about 5.5 GB of used memory and the rest free.

Is there something that I should do to minimize and free up the used
memory?

Thank you.




