To: Sean Shanny
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] General performance questions about postgres on Apple
In-reply-to: <[EMAIL PROTECTED]>
References: <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
Sean Shanny <[EMAIL PROTECTED]> writes:
> We have the following setting for random page cost:
> random_page_cost = 1    # units are one sequential page fetch cost
> Any suggestions on what to bump it up to?
Well, the default setting is 4 ... what measurements prompted you to
reduce it to 1?
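(For context: random_page_cost can be changed per session, so the effect of different values on a plan can be checked before touching postgresql.conf. A minimal sketch, with a placeholder query:)

```sql
-- Try the default and a lower value in one session and compare plans.
SET random_page_cost = 4;    -- the stock default
EXPLAIN ANALYZE SELECT ...;  -- your date-range query here
SET random_page_cost = 2;    -- a common compromise for fast disk arrays
EXPLAIN ANALYZE SELECT ...;  -- does the planner switch to an index scan?
```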
On Fri, 20 Feb 2004, Sean Shanny wrote:
> max_connections = 100
>
> # - Memory -
>
> shared_buffers = 16000    # min 16, at least max_connections*2, 8KB each
> sort_mem = 256000         # min 64, size in KB
You might wanna drop sort_mem somewhat and just set it during your import
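(The suggestion above can be sketched as follows — keep the global sort_mem modest and raise it only in the session that runs the bulk load. Table name and path are hypothetical:)

```sql
-- sort_mem is a per-sort allocation, so a large global value multiplied
-- across 100 connections can exhaust RAM; set it per session instead.
SET sort_mem = 256000;                       -- KB, for this session only
COPY fact_table FROM '/path/to/data.txt';    -- hypothetical bulk import
RESET sort_mem;                              -- back to postgresql.conf value
```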
Simon Riggs wrote:
Sean Shanny
Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two
7-disk hardware based RAID5 sets software striped to form a RAID50 set.
The DB, WALs, etc. are all on that file set. Running OSX journaled
>Sean Shanny
> Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
> channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two
> 7-disk hardware based RAID5 sets software striped to form a RAID50 set.
> The DB, WALs, etc. are all on that file set. Running OSX journaled
Scott,
> I am certainly open to any suggestions on how to deal with speed issues
> on these sorts of large tables, it isn't going to go away for us. :-(
I'm not sure what to suggest. I can't think of anything off the top of my
head that would improve cripplingly slow random seek times.
This
Scott,
We did try clustering on the date_key for the fact table below for a
month's worth of data, as most of our requests for data are date range
based, i.e. get me info for the time period between 2004-02-01 and
2004-02-07. This normally results in a plan that is doing an index scan
on the da
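(The clustering step described above would look roughly like this in the 7.4-era syntax; index and table names are hypothetical:)

```sql
-- CLUSTER physically reorders the table to match the index, so a
-- date-range scan reads mostly sequential pages instead of random ones.
CREATE INDEX idx_fact_date ON fact_table (date_key);
CLUSTER idx_fact_date ON fact_table;  -- PostgreSQL 7.x form of CLUSTER
ANALYZE fact_table;                   -- refresh stats after the reorder
```

Note that CLUSTER is a one-time reorder: rows inserted afterwards are not kept in date order, so it has to be re-run periodically on tables with ongoing loads.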
On Sun, 22 Feb 2004, Sean Shanny wrote:
> Tom,
>
> We have the following setting for random page cost:
>
> random_page_cost = 1    # units are one sequential page fetch cost
>
> Any suggestions on what to bump it up to?
>
> We are waiting to hear back from Apple on the speed issues, so
Tom,
We have the following setting for random page cost:
random_page_cost = 1    # units are one sequential page fetch cost
Any suggestions on what to bump it up to?
We are waiting to hear back from Apple on the speed issues, so far we
are not impressed with the hardware in helping in
Sean Shanny <[EMAIL PROTECTED]> writes:
> New results with the above changes: (Rather a huge improvement!!!)
> Thanks Scott. I will next attempt to make the cpu_* changes to see if
> it then picks the correct plan.
> explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT
> OU
scott.marlowe wrote:
On Fri, 20 Feb 2004, Sean Shanny wrote:
max_connections = 100
# - Memory -
shared_buffers = 16000    # min 16, at least max_connections*2, 8KB each
sort_mem = 256000         # min 64, size in KB
You might wanna drop sort_mem somewhat and just set it during your import
To all,
This is a two-question email. The first asks about general tuning of the
Apple hardware/postgres combination. The second is whether it is
possible to speed up a particular query.
PART 1
Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre
channel to a fully loaded 3.5