Christopher Browne wrote:
One of our sysadmins did all the "configuring OS stuff" part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel resources under various
Unixen[1] doesn't have any
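On Linux, for instance, the "configuring OS stuff" for great gobs of shared memory comes down to the SysV shared memory sysctls. A minimal sketch, with sizes that are illustrative assumptions for an 8GB box rather than recommendations:

```shell
# Raise the SysV shared memory limits so the postmaster can allocate
# a large shared buffer segment (values are illustrative assumptions).
# Maximum size of a single shared memory segment, in bytes (2GB here):
sysctl -w kernel.shmmax=2147483648
# Total shared memory allowed system-wide, in 4kB pages (~2GB here):
sysctl -w kernel.shmall=524288

# To persist across reboots, put the same settings in /etc/sysctl.conf:
#   kernel.shmmax = 2147483648
#   kernel.shmall = 524288
```

Other Unixen use different knobs (e.g. SHMMAX in a kernel config file on the BSDs), which is what the per-platform documentation section covers.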
Clinging to sanity, [EMAIL PROTECTED] (Dan Harris) mumbled into her beard:
> I will soon have at my disposal a new IBM pSeries server. The main
> mission for this box will be to serve several pg databases. I have
> ordered 8GB of RAM and want to learn the best way to tune pg and AIX
> for this configuration.

I will soon have at my disposal a new IBM pSeries server. The main
mission for this box will be to serve several pg databases. I have
ordered 8GB of RAM and want to learn the best way to tune pg and AIX for
this configuration. Specifically, I am curious about shared memory
limitations. I'v
Bill Montgomery <[EMAIL PROTECTED]> writes:
> I have a particularly troublesome table in my 7.3.4 database. It
> typically has less than 50k rows, and a usage pattern of about 1k
> INSERTs, 50-100k UPDATEs, and no DELETEs per day. It is vacuumed and
> analyzed three times per week.
You probably
Matthew T. O'Connor wrote:
Is there any way to avoid doing a periodic VACUUM FULL on this table,
given the fairly radical usage pattern? Or is the (ugly) answer to
redesign our application to avoid this usage pattern?
pg_autovacuum would probably help as it monitors activity and vacuums
tables
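For 7.3/7.4, pg_autovacuum lives in contrib and runs as a daemon alongside the postmaster. A sketch of starting it; the threshold values are illustrative assumptions, not tuned settings:

```shell
# pg_autovacuum polls the statistics collector and issues plain VACUUM
# ANALYZE on tables whose insert/update/delete churn crosses a
# threshold. It needs stats_start_collector = true and
# stats_row_level = true in postgresql.conf.
#
# -D: detach and run as a daemon
# -v/-V: vacuum a table once dead tuples exceed (v + V * reltuples)
# -s: base sleep time between checks, in seconds
pg_autovacuum -D -v 1000 -V 0.5 -s 300
```

With a 50k-row table taking 50-100k updates a day, a scale factor like 0.5 would trigger a vacuum every few hours instead of relying on the thrice-weekly cron job.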
> "BM" == Bill Montgomery <[EMAIL PROTECTED]> writes:
BM> Is there any way to avoid doing a periodic VACUUM FULL on this table,
BM> given the fairly radical usage pattern? Or is the (ugly) answer to
BM> redesign our application to avoid this usage pattern?
I'll bet upgrading to 7.4.2 clears u
Not knowing a whole lot about the internals of Pg, one thing jumped out
at me, that each trip to get data from bv_books took 2.137 ms, which
came to over 4.2 seconds right there.
The problem "seems" to be the 1993 times that the nested loop spins, as
almost all of the time is spent there.
Pers
> Is there any way to avoid doing a periodic VACUUM FULL on this table,
> given the fairly radical usage pattern? Or is the (ugly) answer to
> redesign our application to avoid this usage pattern?
Yes, you should be able to avoid doing periodic VACUUM FULL. The problem
is that your table needs to
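The usual 7.x-era remedy is frequent plain (non-FULL) VACUUMs plus a free-space map big enough to remember the reclaimed pages. A sketch, with hypothetical table name and illustrative sizes:

```shell
# Cron entry: plain VACUUM ANALYZE on the hot table every hour, so the
# dead rows from 50-100k daily UPDATEs are reclaimed before the table
# bloats ("hot_table" and "mydb" are placeholders):
#   0 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_table;'

# postgresql.conf: make the free-space map large enough to track the
# freed pages, or plain VACUUM cannot reuse them (illustrative values;
# the 7.x defaults are considerably smaller):
#   max_fsm_pages     = 100000
#   max_fsm_relations = 1000
```

A plain VACUUM only takes ordinary locks, so unlike VACUUM FULL it can run hourly without blocking the application.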
Rosser Schwarz wrote:
PostgreSQL uses the operating system's disk cache.
... in addition to its own buffer cache, which is stored in shared
memory. You're correct though, in that the best practice is to keep the
PostgreSQL cache small and give more memory to the operating system's
disk cache.
P
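In postgresql.conf terms, that best practice looks roughly like the following on an 8GB machine; the numbers are illustrative assumptions, not recommendations:

```shell
# postgresql.conf sketch: keep PostgreSQL's own buffer cache modest
# and leave the bulk of RAM to the OS disk cache.
# shared_buffers is in 8kB buffers: 10000 buffers ~= 80MB.
shared_buffers = 10000
# effective_cache_size doesn't allocate anything; it just tells the
# planner how much the OS is probably caching on our behalf
# (in 8kB pages: 786432 pages ~= 6GB):
effective_cache_size = 786432
```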
All,
I have a particularly troublesome table in my 7.3.4 database. It
typically has less than 50k rows, and a usage pattern of about 1k
INSERTs, 50-100k UPDATEs, and no DELETEs per day. It is vacuumed and
analyzed three times per week. However, the performance of queries
performed on this table
> What is essentially required is the "prescient cacheing algorithm,"
> where the postmaster must consult /dev/esp in order to get a
> prediction of what blocks it may need to refer to in the next sixty
> seconds.
Easy enough. Television does it all the time with live shows. The guy
with the buzze
[EMAIL PROTECTED] (Richard Huxton) writes:
> If you could "pin" data in the cache it would run quicker, but at the
> cost of everything else running slower.
>
> Suggested steps:
> 1. Read the configuration/tuning guide at:
> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
> 2. Post
Hello Richard and Rosser,
Thank you both for the answers.
I tried creating a semi-cache by running all the queries, and indeed it
worked, and I might use such a method in the future if needed. Still, I
can't help but feel it isn't exactly the right way to work around
this problem. If I do, I mig
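That "semi cache" warm-up can be scripted to run right after the postmaster starts. A minimal sketch; the database name is a placeholder, and bv_books is the table from earlier in the thread:

```shell
#!/bin/sh
# Warm the OS disk cache after a restart by reading the hot tables.
# A sequential scan pulls the table's pages into the kernel's disk
# cache, so the first real queries avoid cold-start I/O.
psql -d mydb -c 'SELECT count(*) FROM bv_books;' > /dev/null
# Repeat for each other table the slow first-run queries touch.
```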
Vitaly Belman wrote:
Hello,
I have the following problem:
When I run some query just after starting the Postmaster, it takes
several seconds to execute (sometimes more than 10); if I rerun it
afterwards, it takes mere milliseconds.
So, I guess it has to do with PostgreSQL caching.. But how exactly
while you weren't looking, Vitaly Belman wrote:
> So, I guess it has to do with PostgreSQL caching.. But how exactly
> does it work? What does it cache? And how can I control it?
PostgreSQL uses the operating system's disk cache. You can hint to
the postmaster how much memory is available for caching.
Hello,
I have the following problem:
When I run some query just after starting the Postmaster, it takes
several seconds to execute (sometimes more than 10); if I rerun it
afterwards, it takes mere milliseconds.
So, I guess it has to do with PostgreSQL caching.. But how exactly
does it work? What does it cache? And how can I control it?