Josh Berkus wrote:
We talked about bumping it to 512kB or 1MB for 9.1. Did that get in?
Do I need to write that patch?
If it defaulted to 3% of shared_buffers, with a minimum of 64kB and a
maximum of 16MB for the auto setting, it would for the most part become an
autotuned parameter. That would make it 0.75 to 1MB
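As a back-of-the-envelope check (a sketch of the arithmetic only, not the
actual initdb or GUC code), the proposed rule can be computed from
pg_settings on a running server, where shared_buffers is reported in 8kB
pages:

-- Sketch: wal_buffers = 3% of shared_buffers, clamped to 64kB..16MB.
SELECT pg_size_pretty(
         greatest(64 * 1024::bigint,
                  least(16 * 1024 * 1024::bigint,
                        setting::bigint * 8192 * 3 / 100))
       ) AS proposed_wal_buffers
FROM pg_settings
WHERE name = 'shared_buffers';

For example, with shared_buffers at 32MB, 3% is roughly 1MB; at 24MB it is
roughly 0.75MB, which is where the range above comes from.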
On 1/6/2011 9:36 PM, Γιωργος Βαλκανας wrote:
1) Why is it taking *so* long for the first query (with the "NOT IN")
to do even the simple select?
Because NOT IN has to execute the correlated subquery for every row and
then check whether the requested value is in the result set, usually by
doi
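For reference, a minimal sketch of the usual rewrite, using the poster's
Document and Doc2 tables and an assumed "id" column (the column name is a
guess, not taken from the thread):

-- NOT IN form from the question: the subquery's result is checked per row.
SELECT * FROM Document d
WHERE d.id NOT IN (SELECT d2.id FROM Doc2 d2);

-- NOT EXISTS form: planned as an anti-join, and equivalent as long as the
-- compared columns are NOT NULL.
SELECT * FROM Document d
WHERE NOT EXISTS (SELECT 1 FROM Doc2 d2 WHERE d2.id = d.id);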
Hi all,
I'm using Postgres 8.4.2 on an Ubuntu Linux machine.
I have several tables, one of which is named Document, which of course
represents information I need about my documents. I also have another
table, similar to the first one, called Doc2. The schema of both tables is
the following:
CREAT
On Thu, Jan 6, 2011 at 10:58 AM, Josh Berkus wrote:
>
>> And the risks are rather asymmetric. I don't know of any problem from
>> too large a buffer until it starts crowding out shared_buffers, while
>> under-sizing leads to the rather drastic performance consequences of
>> AdvanceXLInsertBuffer
On Jan 6, 2011, at 10:58 AM, Josh Berkus wrote:
>
>> But I wonder if initdb.c, when selecting the default shared_buffers,
>> shouldn't test with wal_buffers = shared_buffers/64 or
>> shared_buffers/128, with a lower limit of 8 blocks, and set that as
>> the default.
>
> We talked about bumping it to 512kB or 1MB for 9.1.
On Thu, Jan 6, 2011 at 2:41 PM, Scott Marlowe wrote:
> On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas wrote:
>> On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith wrote:
>>> Scott Marlowe wrote:
>>>> I can sustain about 5,000 transactions per second on a machine with 8
>>>> cores (2 years old) and 14 15k seagate hard drives.
On Thu, Jan 6, 2011 at 2:31 PM, Robert Haas wrote:
> On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith wrote:
>> Scott Marlowe wrote:
>>> I can sustain about 5,000 transactions per second on a machine with 8
>>> cores (2 years old) and 14 15k seagate hard drives.
>>
>> Right. You can hit 2 to 3000/se
Thanks for the assistance.
Here is an explain analyze of the query with the problem limit:
production=# explain analyze select * from landing_page.messages where
((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY
messages.created_at ASC limit 10;
QUERY PLAN
-
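(Not an answer from the thread, just a hedged sketch: for a
filter-plus-ORDER-BY-plus-LIMIT query like the one above, a partial index
that matches the WHERE clause and supplies the sort order often lets the
plan stop after the first 10 rows.)

-- Sketch only: column choices taken from the query above.
CREATE INDEX messages_topic_unprocessed_created_at_idx
    ON landing_page.messages (topic, created_at)
    WHERE processed = 'f';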
On Mon, Dec 20, 2010 at 12:49 PM, Greg Smith wrote:
> Scott Marlowe wrote:
>> I can sustain about 5,000 transactions per second on a machine with 8
>> cores (2 years old) and 14 15k seagate hard drives.
>
> Right. You can hit 2 to 3000/second with a relatively inexpensive system,
> so long as you
> And the risks are rather asymmetric. I don't know of any problem from
> too large a buffer until it starts crowding out shared_buffers, while
> under-sizing leads to the rather drastic performance consequences of
> AdvanceXLInsertBuffer having to wait on the WALWriteLock while holding
> the WAL