Re: [PERFORM] CPU load

2008-09-29 Thread Harald Armin Massa
Hello Maja,

 EXPLAIN ANALYSE  SELECT * FROM system_alarm WHERE id_camera='3' AND
 confirmed='false' AND dismissed='false' ORDER BY date DESC, time DESC
 LIMIT 1;

 (the table is indexed by id_camera, has around 1 million rows; the
 query returns around 700,000 rows and runs (EXPLAIN ANALYSE) in
 around 4800 ms. The table is queried a lot, although not modified
 very often.)

700,000 of 1,000,000 rows is around 70% ... that is nearly all rows.
As I read you, this table is not modified very often. What reason is
there for querying all that data again and again instead of keeping it
in memory (if it is really needed at all)?
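
Purely as an illustration of that "keep it in memory" idea, a minimal
sketch assuming psycopg2 (the table and columns come from the quoted
query; the cache policy, TTL value and function name are invented here):

import time

_cache = {}          # id_camera -> (timestamp, rows)
CACHE_TTL = 300      # seconds between refreshes; tune to taste

def newest_open_alarm(conn, id_camera):
    # conn is assumed to be an open psycopg2 connection
    hit = _cache.get(id_camera)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]                      # serve from memory
    cur = conn.cursor()
    cur.execute(
        """SELECT * FROM system_alarm
           WHERE id_camera = %s AND confirmed = 'false' AND dismissed = 'false'
           ORDER BY date DESC, time DESC LIMIT 1""",
        (id_camera,))
    rows = cur.fetchall()
    cur.close()
    _cache[id_camera] = (time.time(), rows)
    return rows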


Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
EuroPython 2009 will take place in Birmingham - Stay tuned!



Re: [PERFORM] CPU bound at 99%

2008-04-22 Thread Harald Armin Massa
Bryan,

   about 2300 connections in idle
 (ps auxwww | grep postgres | grep idle)

that is about 2300 processes being task-scheduled by your kernel, each
of them using 1 MB of RAM and some other resources. Are you sure
that this is what you want?

The usual recommended design for a web application:

start request, rent a connection from the connection pool, do the query,
put the connection back, finish the request, wait for the next request.

So to get 500 connections busy in parallel, you would need the outside
situation of 500 browsers submitting requests within the time needed to
fulfill one request.
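
For illustration, a minimal sketch of that rent/return cycle using
psycopg2's built-in pool (psycopg2.pool.ThreadedConnectionPool); the DSN
and pool sizes are invented:

from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(minconn=5, maxconn=20,
                              dsn="dbname=app user=web host=dbserver")

def handle_request(sql, params=()):
    conn = pool.getconn()            # rent a connection from the pool
    try:
        cur = conn.cursor()
        cur.execute(sql, params)     # do the query
        rows = cur.fetchall()
        cur.close()
        conn.commit()
        return rows                  # finish the request
    finally:
        pool.putconn(conn)           # put the connection back for the next one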

Harald
-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!



Re: [PERFORM] what is the maximum number of rows in a table in postgresql 8.1

2008-03-25 Thread Harald Armin Massa
Sathiya,

The maximum number of records in one PostgreSQL table is unlimited:

http://www.postgresql.org/about/

[for some values of unlimited]

Some further help:

googling for:
postgresql limits site:postgresql.org

leads you to this answer quite quickly, while googling for

maximum number of rows in a postgresql table

leads you to a lot of misleading pages.

Harald


On Tue, Mar 25, 2008 at 12:42 PM, sathiya psql [EMAIL PROTECTED] wrote:
 Ok, finally I am changing my question.

 To get a quick response from postgresql: what is the maximum number of
 records I can have in a table in postgresql 8.1???







-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!



Re: [PERFORM] viewing source code

2007-12-20 Thread Harald Armin Massa

 wrapping pl/pgsql with encryptor/decryptor

 It's quite a good idea, because it has more than zero chance of
 succeeding politically in the community.


It's additionally a good idea because the other big database uses the
same approach. An easier sell to the PHBs (pointy-haired bosses).

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!


[PERFORM] URI to kind of a benchmark

2007-12-12 Thread Harald Armin Massa
Reading postgres benchmarks, for beginners: the advice is to stop reading
at the word "default" (i.e. an unchanged postgresql.conf). But the real
test is given right after:

http://www.kaltenbrunner.cc/blog/index.php?/archives/21-guid.html

That confirms my first impression (on a different workload) that the speed
has doubled.

If reality confirms it, 8.2 to 8.3 will be a small step in version numbers,
and a great step for databases.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!


Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-07 Thread Harald Armin Massa
Scott,

Well, there've been a lot of issues with anti-virus and postgresql not
 getting along.  I wonder if pgsql takes out a stronger lock, and when
 it can't get it then the failure happens.  Not familiar enough with
 windows to do more than speculate.


Without touching the file-concurrency issues caused by virus scanners:

a LOT of the Postgres / virus scanner problems on Windows were caused when
postgres spawns a new process and communicates with that process via the
IP stack.

Many virus scanners seem to have dealt with the TCP/IP stack in a
not-quite-compatible manner...

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!


Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-07-05 Thread Harald Armin Massa

Magnus,

don't bother reporting a bug

unless you're on the latest in a branch, and at least make sure you're
on one of the major releases listed on www.postgresql.org?

Seems reasonable?



absolutely. Should be standard practice.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July
to Wednesday 11th July. See you there!


Re: [PERFORM] Data transfer very slow when connected via DSL

2007-06-22 Thread Harald Armin Massa

Rainer,


I did not find a solution so far; and for bulk data transfers I now
programmed a workaround.

But that is surely based on some component installed on the server, isn't
it?



Correct. I use a Pyro remote server. On request this remote server copies
the relevant rows into a temporary table, uses a copy_to call to push them
into a StringIO object (that's Python's version of an in-memory file),
serializes that StringIO object, bz2-compresses it and transfers the
whole block via VPN.
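
Roughly, the server-side part of that workaround might look like this
(just a sketch assuming psycopg2; the Pyro plumbing is left out, and
names like big_table and updated_at are invented):

import bz2
import io

def export_rows(conn, since):
    # conn is assumed to be an open psycopg2 connection.
    # 8.1 cannot COPY from a query, hence the temporary table.
    cur = conn.cursor()
    cur.execute("CREATE TEMP TABLE export_tmp AS "
                "SELECT * FROM big_table WHERE updated_at >= %s", (since,))
    buf = io.StringIO()
    cur.copy_to(buf, "export_tmp")    # COPY the temp table into memory
    cur.close()
    conn.rollback()                   # throw the temporary table away again
    return bz2.compress(buf.getvalue().encode("utf-8"))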

I read on in this thread, and I plan to check what psycopg2 is doing with
cursors.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July
to Wednesday 11th July. See you there!


Re: [PERFORM] Data transfer very slow when connected via DSL

2007-06-22 Thread Harald Armin Massa

PFC,


 Correct. I use a Pyro remote server. On request this remote server copies
 the relevant rows into a temporary table, uses a copy_to call to push
 them into a StringIO object (that's Python's version of an in-memory
 file), serializes that StringIO object, bz2-compresses it and transfers
 the whole block via VPN.




  What about a SSH tunnel using data compression ?

Setting this up on multiple Windows workstations in multiple installations
is not possible.


If you fetch all rows from a query in one go, would it be fast ?

I tried the same copy_to via VPN. It took 10-50x the time it took locally.


 Also, PG can now COPY from a query, so you don't really need the temp
 table...

I know, but I was stuck on 8.1 on some servers.
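
For readers on 8.2 or later, a rough sketch of COPYing straight from a
query with psycopg2's copy_expert (table, column and the date literal are
invented):

import io

def export_query(conn, dest_file):
    # conn is assumed to be an open psycopg2 connection
    cur = conn.cursor()
    cur.copy_expert(
        "COPY (SELECT * FROM big_table WHERE updated_at >= '2007-06-01') "
        "TO STDOUT", dest_file)
    cur.close()

# usage: export_query(conn, io.StringIO()) or a file opened for writing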

Best wishes,

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July
to Wednesday 11th July. See you there!


Re: [PERFORM] Data transfer very slow when connected via DSL

2007-06-21 Thread Harald Armin Massa

Hello Rainer,

The database computer is connected via a 2MBit SDL connection. I myself
have a 768/128 KBit ADSL connection and pinging the server takes 150ms
on average.



I do not have a solution, but I can confirm the problem :)

One PostgreSQL installation: servers 8.1 and 8.2 on Windows at the central
site; various others connected via VPN. Queries are sub-second when run
locally (including data transfer), and take up to 10 seconds and more via
VPN, even in off-hours.

The data transfer is done via pgAdmin or via the psycopg2 Python database
adapter; nothing with ODBC or similar in between.

I did not find a solution so far; and for bulk data transfers I have now
programmed a workaround.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2007 will take place in Vilnius, Lithuania from Monday 9th July
to Wednesday 11th July. See you there!


Re: [PERFORM] Kernel cache vs shared_buffers

2007-05-13 Thread Harald Armin Massa

A common rule of thumb people quote here is to set shared_buffers to 1/4
of available RAM, and leave the rest for OS cache. That's probably a
good configuration to start with.



And just for the record: this rule of thumb does NOT apply to
PostgreSQL on Windows. My current rule of thumb on Windows: set
shared_buffers to the minimum * 2, and adjust effective_cache_size to
the "System Cache" figure shown in Task Manager.
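
Purely as an illustration of that rule of thumb, a hypothetical
postgresql.conf fragment for a Windows box whose Task Manager reports
about 1.5 GB of "System Cache" (all numbers invented):

max_connections = 200
shared_buffers = 800            # about 2x the minimum of max_connections*2
effective_cache_size = 196608   # ~1.5 GB in 8KB pages, per Task Manager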

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.



Re: [PERFORM] Kernel cache vs shared_buffers

2007-05-13 Thread Harald Armin Massa

Heikki,



 PostgreSQL on Windows. My current rule of thumb on Windows: set
 shared_buffers to minimum * 2
 Adjust effective_cache_size to the number given as system cache
 within the task manager.

Why?


I tried with shared_buffers = 50% of available memory, and with 30% of
available memory, and the throughput on complex queries stalled or got
worse.

I lowered shared_buffers to the minimum and started raising
effective_cache_size, and performance on real-world queries improved.
pg_bench did not fully agree when simulating large numbers of concurrent
queries.

So I tried setting shared_buffers between minimum and 2.5*minimum, and
pg_bench speeds recovered and real-world queries behaved similarly.

My understanding is that shared_buffers is realised as a memory-mapped
file on win32, and that it is only usually kept in memory. Maybe I
understood that wrong.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.



Re: [PERFORM] Feature Request --- was: PostgreSQL Performance Tuning

2007-04-28 Thread Harald Armin Massa

Carlos,

about your feature proposal: as I learned, nearly all performance
configuration can be done by editing the .INI file and making the
postmaster re-read it.

So WHY at all should those parameters be guessed at the installation of
the database? Wouldn't it be a safer point in time to have some
postgresql-tune utility which gets run after the installation, and maybe
every once in a while? That tool could check vital information like the
database-size-to-memory relation, and suggest a new postgresql.ini.

That tool needs NO INTEGRATION whatsoever - it can be developed and
deployed totally independently and only be bundled later.

Does my suggestion make more sense now? Or is it still too unrealistic to
make it work properly/safely?

And as this tool can be tested separately and does not need a new initdb
every time ... it can be developed more easily.

Maybe there is even a pointy, flashy version possible (perhaps even for
money :) which gives nice graphics and optimizes, like those Windows
optimizers. :) I am sure some DBAs in BIGCOMPs would be thrilled :)

May that be a possible way?

Harald


--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] Feature Request --- was: PostgreSQL Performance Tuning

2007-04-28 Thread Harald Armin Massa

Jonah,

Um, shared_buffers is one of the most important initial parameters to

set and it most certainly cannot be set after startup.



Not after startup, correct. But after installation. It is possible to
change PostgreSQL.conf (not ini, too much Windows on my side, sorry) and
restart the postmaster.

Because a lot of good assumptions can be made on the initial install.

Likewise, some of the most important parameters cannot be tuned after
startup.



Yes. These assumptions can be made - but then they are assumptions. When
the database is filled and working, there are measurable facts. And yes,
that needs a restart of the postmaster, which does not work for 24/7
operation. But there are many databases which can be restarted for tuning
in regular maintenance sessions.


:) which gives nice graphics and optimized, like those Windows
Optimizers.
 :)  I am sure, some DBAs in BIGCOMPs would be thrilled :)

I'd suggest that you not make snide remarks about someone else's
design when your own analysis is somewhat flawed.



Sorry, Jonah, if my words sounded snide. I had feedback from some DBAs in
BIGCOMPs who said very positive things about the beauty of pgAdmin. I saw
some DBAs quite happy about the graphical displays of TOAD. I worked for an
MVS hoster who paid BIG SUMS to Candle Software for a product called
Omegamon, which made it possible to have charts about performance figures.
So I deduced that people would even be willing to pay money for a GUI which
presents the optimizing process.

The idea of tuning the PostgreSQL database after installation also came
from the various requests on pgsql-performance. Some ask before they
install; but there are also MANY questions of the form "our PostgreSQL
database was running fast until ...", with the "..." usually being a table
having grown bigger than n records.

And I really did not want to discredit the idea of properly configuring
from the start. I just wanted to open another option for doing that tuning.
Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] PostgreSQL in virtual machine

2007-03-12 Thread Harald Armin Massa

Andreas,

I am responsible for some active PostgreSQL databases within virtual
machines and on bare metal, all using Windows.

There are spurious, strange performance issues on those PostgreSQL
databases within the virtual machines. I cannot pin them down (yet), and
am not really able to blame them on vmware; but I just want to recommend
being very, very careful about PostgreSQL in vmware, and please
communicate your findings.


from a PostgreSQL 8.1 database in a virtual machine.  I'm able to
adjust the memory of this machine according to reasonable values
and can choose between one or two (emulated) processors.  The
question is: How can I find an optimal relation between the
virtual hardware parameters and PostgreSQL performance.  I guess


You did not specify the OS in your VM. I can report my experiences
with W2k3 inside the VM:

contrary to usual recommendations, smaller shared_buffers yielded
better results.
Growing effective_cache_size yielded the best results for me.


Harald



--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.



Re: [PERFORM] Tuning

2007-02-06 Thread Harald Armin Massa

Tuners,

always be aware that results on Windows may be totally different!

My main customer is running PostgreSQL 8.1 on MINIMUM shared_buffers:

max_connections = 100    #
shared_buffers = 200     # min 16 or max_connections*2, 8KB each

I changed it to this value from the very low default 2, and the system is
responding better, especially after fixing the available memory setting
within the planner.

... the frustrating part is, I could not replicate this behaviour with
pg_bench :(

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
fx 01212-5-13695179
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] pgBench on Windows

2006-10-22 Thread Harald Armin Massa
 Does the one that ships in the installer not work?

 //Magnus

It does work. *putting ashes on my head* I googled around and only found
pgbench.c; never looked in the program directory. Sorry, my mistake.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] measuring shared memory usage on Windows

2006-10-20 Thread Harald Armin Massa
Mark,

 And THAT is exactly the challenge of this thread: I am searching for
 tools to check shared memory usage on Windows. ipcs is not available.
 And neither Magnus nor Dave, both main contributors of the win32 port
 of PostgreSQL, and both way wiser concerning Windows internals than me,
 know of one :(

 Would it help to have the postmaster report what the shared memory
 allocation is when it starts up? (this won't help with the activity
 stuff, but at least you would know for sure how *much* you have to use).

That would be of no use ... I am quite sure that PostgreSQL allocates the
memory as specified; that is, as much as is written in postgresql.conf.
The interesting part is the usage of shared memory throughout the workload.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] measuring shared memory usage on Windows

2006-10-20 Thread Harald Armin Massa
Performance readers,

I follow up on my promise:

 I try my theories :)) and contrary to all wisdom from all PostgreSQL
 tuning recommendations reconfigured shared memory nearly to the
 minimum: 1000 for a maximum of 400 concurrent connections (800 would
 be the minimum). Single user performance was fine, now I am looking
 forward to the full user scenario tomorrow.
 I will keep you posted.
 Harald

I went even further down,

max_connections = 200  #
shared_buffers = 400   # min 16 or max_connections*2, 8KB each

to the minimum allowed value of shared_buffers. And the response times are
better than ever before (with 10.000, 20.000, 5.000 and 40.000
shared_buffers):

An application-level response dropped from 16 / 9.5 seconds (first run /
cached run) to 12 / 6.5 seconds average runtime. That response time is
mainly dependent on SQL performance, and especially this drop is caused by
this change.

Moreover, the "swapped out memory" column in Task Manager stays low at
~26k per postgres.exe, compared to ~106k when my shared_buffers were at
10.000. The memory column of postgres.exe in Task Manager still grows up
to 10.000K, while virtual memory now stays at ~3.600k per process.

So, in this configuration / workload:

- Windows 2k3
- small (~0,4 GB) database
- rather complex queries
- ~1 GB memory
- running in a virtual machine

and

- Windows XP
- small (~0,4 GB) database
- ~0,5 GB memory
- rather complex queries
- running native (laptops)

I could verify a substantial speed gain in single and many user situations
by lowering shared_buffers to the allowed minimum.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


[PERFORM] pgBench on Windows

2006-10-20 Thread Harald Armin Massa
Hello Performancers,

has anyone got a pgBench tool running on Windows?

I want to experiment with various settings for tuning, and would prefer to
use something ready-made before coming up with my own mistakes.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


[PERFORM] measuring shared memory usage on Windows

2006-10-16 Thread Harald Armin Massa

Hello,

Shridhar Daithankar and Josh Berkus write on
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

shared_memory


There is one way to decide what is best for you. Set a  high value of
this parameter and run the database for  typical usage. Watch usage of
shared memory using ipcs  or similar tools. A recommended figure would
be between  1.2 to 2 times peak shared memory usage.


I tried to find a way to do this on Windows. Scanning all the lines of
perfmon memory options, I could not see anything like "shared memory
usage".

Googling for shared memory usage just drove me to some ancient WRONG
information that PostgreSQL is not possible on Windows because of
lacking shared memory. (I guess that was for Windows 95 or similar.)

So: has anybody a hint how I can check how much shared_memory is
really used by PostgreSQL on Windows, to fine-tune this parameter?

I learned the hard way that just raising it can lead to a hard
performance loss :)

Harald


--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.



Re: [PERFORM] measuring shared memory usage on Windows

2006-10-16 Thread Harald Armin Massa
Magnus,

  So: has anybody a hint how I can check how much shared_memory is
  really used by PostgreSQL on Windows, to fine-tune this parameter?
  I learned the hard way that just raising it can lead to a hard
  performance loss :)

 Not really sure :) We're talking about anonymous mapped memory, and I
 don't think perfmon lets you look at that.

Thanks for the clarification. However,

anonymous mapped memory site:microsoft.com

turns out 0 (zero) results. And even splitting it up, there seems to be
nearly no information ... is the same thing by any chance also known by
different names?

 However, there is no limit to it as there often is on Unix - you can map
 up to whatever the virtual RAM size is (2Gb/3Gb depending on what boot
 flag you use, IIRC). You can monitor it as a part of the total memory
 usage on the server, but there's no way to automatically show the
 difference between them.

So the performance shock with high shared memory becomes obvious: memory
mapped files get swapped to disk. I assume that swapping is nearly
transparent to the application, leading to nice thrashing ...

I'll keep on searching...

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.


Re: [PERFORM] measuring shared memory usage on Windows

2006-10-16 Thread Harald Armin Massa

Magnus,


 anonymous mapped memory  site:microsoft.com
 turns out 0 (zero) results. And even splitting it up there
 seems to be nearly no information ... is the same thing by
 any chance also known by different names?

Hmm. Yeah, most likely :) I may have grabbed that name from something
else. The documentation for the call is on
http://windowssdk.msdn.microsoft.com/en-us/library/ms685007(VS.80).aspx,
we specify INVALID_HANDLE_VALUE for hFile, which means:


[...]
CreateFileMapping creates a file mapping object of a specified size
that _the operating system paging file backs_
[...]

I assume that DWORD dwMaximumSizeHigh and DWORD dwMaximumSizeLow get
filled with whatever I configure in shared_memory?

My reading of that function gives me the impression that this kind of
shared *memory* is essentially a shared disk file - _the operating
system paging file backs_ it.

Especially documentation lines like "If an application specifies a
size for the file mapping object that is larger than the size of the
actual named file on disk, the file on disk is increased to match the
specified size of the file mapping object."

really make me think that that area is just a comfortable way to
access files on disk as memory areas, with the hope of probably better
caching than for not-memory-mapped files.

That would explain my disturbing impression that the performance of
PostgreSQL on win32 rises when lowering shared_memory...

Harald
--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.



[PERFORM] measuring shared memory usage on Windows

2006-10-16 Thread Harald Armin Massa

David,


For example, if you have 1G of RAM on the box, you can't
configure a cache of 900 meg and expect things to work well.
This is because the OS and associated other stuff running on
the box will use ~300megs. The system will page as a result.


Overcommitting of memory leads to thrashing, yes; that is also my experience.


The only sure fire way I know of to find the absolute maximum
cache size that can be safely configured is to experiment with
larger and larger sizes until paging occurs, then back off a bit.


Yeah, I know the trial-and-error method. But I also learned that
reading the manuals and documentation often helps.

So after fast-reading the various PostgreSQL tuning materials, I came
across formulas to calculate a fine starting point for the shared memory
size, and the recommendation to check with shared-memory information
tools whether that size is okay.

And THAT is exactly the challenge of this thread: I am searching for
tools to check shared memory usage on Windows. ipcs is not available.
And neither Magnus nor Dave, both main contributors of the win32 port
of PostgreSQL, and both way wiser concerning Windows internals than me,
know of one :(

The challenge below that: I maintain a win32 PostgreSQL server which
gets slow every 3-4 weeks. After restarting, it runs perfectly, for
another 3-4 weeks.

The Oracle guys at the same customer solved a similar problem by
simply restarting Oracle every night. But that would not be good
enough for my sense of honour :)

Thanks for your thoughts,

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.



Re: [PERFORM] measuring shared memory usage on Windows

2006-10-16 Thread Harald Armin Massa

Magnus,



That shows that you don't really know how the memory manager in NT+
works ;-) *ALL* normal file I/O is handled through the memory manager
:-) So yes, they are both different access methods to the memory
manager, really.


"don't really" is an overstatement; I do not know at all how the memory
manager works in NT+. All I learned is from "Inside Windows NT" by H.
Custer, from 1993 :)

So, just to make sure I understood correctly:

If PostgreSQL reads a file from disk, Windows NT does this file I/O
through the same memory manager as when PostgreSQL puts parts of this
read file [for example an index segment] into shared memory - which is
nothing but a file that usually stays in main memory.

Correct so far?

I continued from these thoughts:

Let's say there is 500MB of memory available, and we have 100MB of
shared_memory configured.

Now PostgreSQL reads 100MB from a file - the memory manager takes 100MB
of memory to fulfill this file access (optimizations aside).

Now PostgreSQL reshuffles that 100MB and decides: hmm, that may be
valuable for ALL of the currently running postgres.exe, and pushes
those 100MB into shared memory for all to use. It caches the 100MB - a
fine chunk of an index.

From this kind of understanding, the memory manager has 200MB in use: the
100MB from the file read, and the 100MB of shared memory.

Of course the 100MB of the file in the memory manager will get flushed
soon.

Now let's restrict PostgreSQL: I only give it the minimum amount of shared
memory. It will NOT cache those 100MB in shared memory.

But: PostgreSQL really was correct. The other 20 postgres.exe access
the same file on a regular basis. Won't the memory manager keep that file
cached in RAM anyway?

I try my theories :)) and, contrary to all wisdom from all PostgreSQL
tuning recommendations, reconfigured shared memory nearly to the
minimum: 1000 for a maximum of 400 concurrent connections (800 would be
the minimum). Single user performance was fine; now I am looking forward
to the full user scenario tomorrow.

I will keep you posted.

Harald

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.
