On Tue, 2004-05-11 at 15:46 -0700, Paul Tuckfield wrote:
> - the "cache" column shows that linux is using 2.3G for cache. (way too
> much) you generally want to give memory to postgres to keep it "close" to
> the user, not leave it unused to be claimed by linux cache (need to leave
> *some* for
One big caveat regarding the "SAME" striping strategy is that readahead can
really hurt an OLTP workload.
Mind you, if you're going from a few disks to a caching array with many
disks, it'll be hard not to see a big improvement.
But if you push the envelope of the array with a "SAME" configuration,
readahead can start to work against you.
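On Linux, the block-device readahead being warned about here can be inspected and dialed down per device. A sketch, assuming the array shows up as a single SCSI device (the device name is a placeholder, not from the thread):

    # show the current readahead setting, in 512-byte sectors
    blockdev --getra /dev/sda

    # reduce readahead for a random-I/O (OLTP) workload; 0 disables it entirely
    blockdev --setra 0 /dev/sda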
Hadley Willan wrote:
To answer question 1, if you use software RAID, the chunk size is part of
the /etc/raidtab file that is used on initial array creation. 4KB is the
standard, and a LARGE chunk size of 1MB may hurt performance if you're not
continuously writing blocks of that size.
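For reference, a minimal /etc/raidtab sketch showing where that chunk size is declared (device names and disk count are illustrative, not from the thread; chunk-size is given in KB):

    raiddev /dev/md0
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            4          # the 4KB default mentioned above
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1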
Bjoern Metzdorf wrote:
You might also consider configuring the Postgres data drives for a
RAID 10 SAME configuration as described in the Oracle paper "Optimal
Storage Configuration Made Easy"
(http://otn.oracle.com/deploy/availability/pdf/oow2000_same.pdf). Has
anyone delved into this before?
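With the raidtools of that era, a RAID 10 layout is built as a stripe over mirrored pairs. A rough /etc/raidtab sketch with placeholder devices and a large SAME-style stripe unit (none of the values below come from the paper or the thread):

    # mirrored pair 1
    raiddev /dev/md0
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        device                /dev/sda1
        raid-disk             0
        device                /dev/sdb1
        raid-disk             1

    # mirrored pair 2
    raiddev /dev/md1
        raid-level            1
        nr-raid-disks         2
        persistent-superblock 1
        device                /dev/sdc1
        raid-disk             0
        device                /dev/sdd1
        raid-disk             1

    # stripe over the mirrors with a large chunk, per the SAME idea
    raiddev /dev/md2
        raid-level            0
        nr-raid-disks         2
        persistent-superblock 1
        chunk-size            1024       # 1MB stripe unit
        device                /dev/md0
        raid-disk             0
        device                /dev/md1
        raid-disk             1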
James Thornton wrote:
This is what I am considering the ultimate platform for postgresql:
Hardware:
Tyan Thunder K8QS board
2-4 x Opteron 848 in NUMA mode
4-8 GB RAM (DDR400 ECC Registered 1 GB modules, 2 for each processor)
LSI Megaraid 320-2 with 256 MB cache ram and battery backup
6 x 36GB SCSI
Bjoern Metzdorf wrote:
Hi,
First of all, many thanks for your valuable replies. On my quest for the
ultimate hardware platform, I'll try to summarize the things I learned.
This is what I am considering the ultimate platform for postgresql:
On Thu, 2004-05-13 at 14:42, Doug Y wrote:
> We don't seem to be swapping much:
>
Linux swaps aggressively. If you have any process in memory which is
sleeping a lot, Linux may actively attempt to page it out. This is true
even when you are not low on memory. Just because you see some swap
space in use does not mean the machine is actively swapping.
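One way to tell swap space that was merely used at some point apart from active swapping is to watch vmstat for a while. A sketch (the column remarks describe vmstat itself; no figures here come from the thread):

    $ vmstat 5
    # "swpd" - swap space in use; it only shows that pages were swapped out at some point
    # "si"   - pages swapped in per interval
    # "so"   - pages swapped out per interval
    # sustained non-zero si/so is what indicates the box is actively swapping right now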
(Sorry if this ends up being a duplicate post, I sent a reply yesterday,
but it doesn't appear to have gone through... I think I typo'd the address
but never got a bounce.)
Hi,
Thanks for your initial help. I have some more questions below.
At 05:02 AM 5/12/2004, Shridhar Daithankar wrote:
Dou
On Thu, 13 May 2004, Fabio Panizzutti wrote:
> I don't understand why the planner chose a different query plan on
> identical tables with the same indexes.
Because it's more than table structure that affects the choice made by the
planner. In addition, the statistics about the values actually stored in
each table play a part.
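Those per-column statistics can be inspected directly. A sketch (only storico_misure is named in the excerpt, so the second table name is a placeholder):

    -- compare what the planner knows about the two tables
    SELECT tablename, attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename IN ('storico_misure', 'other_table');   -- 'other_table' is a placeholder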
"Fabio Panizzutti" <[EMAIL PROTECTED]> writes:
> I don't understand why the planner chose a different query plan on
> identical tables with the same indexes.
Different data statistics; not to mention different table sizes
(the cost equations are not linear).
Have you ANALYZEd (or VACUUM ANALYZEd) both tables recently?
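A sketch of that step (again, the second table name is a placeholder, since the excerpt only names storico_misure):

    VACUUM ANALYZE storico_misure;
    VACUUM ANALYZE other_table;   -- placeholder name for the smaller table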
Note that effective_cache_size is merely a hint to the planner saying
"I have this much OS buffer cache to use" - it is not actually allocated.
It is shared_buffers that will hurt you if it is too high (1 - 25000
is the usual sweet spot).
best wishes
Mark
Shridhar Daithankar wrote:
Increa
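A postgresql.conf sketch of the two settings being discussed; the values are placeholders for illustration only, and both parameters are counted in 8KB pages in 7.4:

    shared_buffers = 10000          # ~80MB, actually allocated in shared memory
    effective_cache_size = 100000   # ~800MB, only a hint about the OS buffer cache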
>>>-----Original Message-----
>>>From: [EMAIL PROTECTED]
>>>[mailto:[EMAIL PROTECTED] On Behalf Of
>>>Shridhar Daithankar
>>>Sent: Thursday, 13 May 2004 15:05
>>>To: Fabio Panizzutti
>>>Cc: [EMAIL PROTECTED]
>>>Subject: Re: [PERFORM] Query plan on identical tables differs. Why?
>>>
>>>
>>
This is something I wish more of us did on the lists. The list archives
have solutions and workarounds for every variety of problem, but very few
summary emails exist. A good example of this practice is the
sun-managers mailing list. The original poster sends a "SUMMARY" reply
to the list with a digest of the answers that were received.
On Wed, 2004-05-12 at 05:02, Shridhar Daithankar wrote:
> I agree. For shared buffers, start with 5000 and increase in batches of 1000. Or
> set it to a high value and check with ipcs for maximum shared memory usage. If
> shared memory usage peaks at 100MB, you don't need more than, say, 120MB of buffers.
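The ipcs check mentioned above is simply (nothing in the comments comes from the thread):

    $ ipcs -m
    # lists the active shared memory segments; the "bytes" column of the segment
    # owned by the postgres user is the shared memory figure referred to above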
Fabio Panizzutti wrote:
storico=# explain select tag_id,valore_tag,data_tag from storico_misure
where (data_tag>'2004-05-03' and data_tag <'2004-05-12') and
tag_id=37423 ;
Can you please post explain analyze? That includes actual timings.
Looking at the schema, can you try "and tag_id=37423::integer"?
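Putting the two suggestions together, the requested command would look something like this (it is the same query quoted above, with the proposed cast added):

    EXPLAIN ANALYZE
    SELECT tag_id, valore_tag, data_tag
    FROM storico_misure
    WHERE data_tag > '2004-05-03' AND data_tag < '2004-05-12'
      AND tag_id = 37423::integer;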
Hello, I'm tuning a postgresql (7.4.2) server for best performance.
I have a question about the planner.
I have two identical tables: one stores short-term data (about 2,000,000
records now) and
the other historical data (about 8,000,000 records now and growing ...)
A simple test query: select tag_id,valore_tag,data_tag from storico_misure
where (data_tag>'2004-05-03' and data_tag <'2004-05-12') and tag_id=37423;