On Thu, Mar 16, 2017 at 12:39 PM, David Steele wrote:
>> Anyway, I committed the patch posted here. Or the important line out
>> of the two, anyway. :-)
>
> It seems that this submission should be marked as "Committed" with
> Robert as the committer. Am I missing
On 3/16/17 12:41 PM, Robert Haas wrote:
> On Thu, Mar 16, 2017 at 12:39 PM, David Steele wrote:
>>> Anyway, I committed the patch posted here. Or the important line out
>>> of the two, anyway. :-)
>>
>> It seems that this submission should be marked as "Committed" with
>>
On 2/2/17 2:47 PM, Robert Haas wrote:
> On Wed, Feb 1, 2017 at 9:47 PM, Jim Nasby wrote:
>> Before doing that the first thing to look at would be why the limit is
>> currently INT_MAX / 2 instead of INT_MAX.
>
> Generally the rationale for GUCs with limits of that sort
On 2/3/17 7:34 PM, Andres Freund wrote:
On 2017-02-03 19:26:55 -0600, Jim Nasby wrote:
On 2/3/17 6:20 PM, Andres Freund wrote:
- The ringbuffers in shared buffers can be problematic. One possible way of
solving that is to get rid of ringbuffers entirely and rely on different
initial values for
On 2017-02-03 19:26:55 -0600, Jim Nasby wrote:
> On 2/3/17 6:20 PM, Andres Freund wrote:
> > > - The ringbuffers in shared buffers can be problematic. One possible way of
> > > solving that is to get rid of ringbuffers entirely and rely on different
> > > initial values for usage_count
On 2/3/17 6:20 PM, Andres Freund wrote:
- The ringbuffers in shared buffers can be problematic. One possible way of
solving that is to get rid of ringbuffers entirely and rely on different
initial values for usage_count instead, but that's not desirable if it just
means more clock sweep work for
On 2017-02-03 18:12:48 -0600, Jim Nasby wrote:
> Interesting. Probably kills a couple birds with one stone:
>
> - This should be a lot cheaper for backends than the clock sweep
Right, that's one of the motivations - the current method is pretty much
guaranteed to create the worst cacheline
On 2/2/17 1:50 PM, Andres Freund wrote:
FWIW, I think working on replacing bgwriter wholesale (e.g. by working on
the patch I sent with a POC replacement) is a better approach than
spending time increasing limits.
Do you have a link to that? I'm not seeing anything in the archives.
Not at
On 2017-02-02 14:47:53 -0500, Robert Haas wrote:
> I expect that increasing the maximum value of shared_buffers beyond
> what can be stored by an integer could have a noticeable distributed
> performance cost for the entire system. It might be a pretty small
> cost, but then again maybe not; for
On 2017-02-02 11:41:44 -0800, Jim Nasby wrote:
> On 2/1/17 4:28 PM, Andres Freund wrote:
> > On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
> > > With current limits, the most bgwriter can do (with 8k pages) is 1000
> > > pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
On Wed, Feb 1, 2017 at 9:47 PM, Jim Nasby wrote:
> Before doing that the first thing to look at would be why the limit is
> currently INT_MAX / 2 instead of INT_MAX.
Generally the rationale for GUCs with limits of that sort is that
there is or might be code someplace
On 2/1/17 4:28 PM, Andres Freund wrote:
On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
* 100 times/sec = 780MB/s. It's not hard to exceed that with modern
hardware. Should we increase the limit on bgwriter_lru_maxpages?
On 2/1/17 4:27 PM, Andres Freund wrote:
On 2017-02-02 09:22:46 +0900, Michael Paquier wrote:
On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby wrote:
Speaking of which... I have a meeting in 15 minutes to discuss moving to a
server with 4TB of memory. With current limits
On 2017-02-01 20:38:58 -0500, Robert Haas wrote:
> On Wed, Feb 1, 2017 at 8:35 PM, Andres Freund wrote:
> > On 2017-02-01 20:30:30 -0500, Robert Haas wrote:
> >> On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund wrote:
> >> > On 2016-11-28 11:40:53 -0800, Jim
On Wed, Feb 1, 2017 at 8:35 PM, Andres Freund wrote:
> On 2017-02-01 20:30:30 -0500, Robert Haas wrote:
>> On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund wrote:
>> > On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
>> >> With current limits, the most bgwriter
On 2017-02-01 20:30:30 -0500, Robert Haas wrote:
> On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund wrote:
> > On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
> >> With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
> >> * 100 times/sec = 780MB/s. It's not
On Wed, Feb 1, 2017 at 7:28 PM, Andres Freund wrote:
> On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
>> With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
>> * 100 times/sec = 780MB/s. It's not hard to exceed that with modern
>> hardware. Should we
On 2016-11-28 11:40:53 -0800, Jim Nasby wrote:
> With current limits, the most bgwriter can do (with 8k pages) is 1000 pages
> * 100 times/sec = 780MB/s. It's not hard to exceed that with modern
> hardware. Should we increase the limit on bgwriter_lru_maxpages?
FWIW, I think working on replacing
On 2017-02-02 09:22:46 +0900, Michael Paquier wrote:
> On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby wrote:
> > Speaking of which... I have a meeting in 15 minutes to discuss moving to a
> > server with 4TB of memory. With current limits shared buffers maxes at 16TB,
> >
On Thu, Feb 2, 2017 at 9:17 AM, Jim Nasby wrote:
> Speaking of which... I have a meeting in 15 minutes to discuss moving to a
> server with 4TB of memory. With current limits shared buffers maxes at 16TB,
> which isn't all that far in the future. While 16TB of shared
On 2/1/17 3:36 PM, Michael Paquier wrote:
On Thu, Feb 2, 2017 at 7:01 AM, Jim Nasby wrote:
On 2/1/17 10:27 AM, Robert Haas wrote:
This looks fine to me.
This could go without the comments; they are likely to be
forgotten if any updates happen in the future.
On Thu, Feb 2, 2017 at 7:01 AM, Jim Nasby wrote:
> On 2/1/17 10:27 AM, Robert Haas wrote:
>> This looks fine to me.
This could go without the comments; they are likely to be
forgotten if any updates happen in the future.
> If someone wants to proactively commit
On 2/1/17 10:27 AM, Robert Haas wrote:
On Tue, Jan 31, 2017 at 5:07 PM, Jim Nasby wrote:
On 11/29/16 9:58 AM, Jeff Janes wrote:
Considering a single SSD can do 70% of that limit, I would say
yes.
Next question becomes... should there even be an upper
On Tue, Jan 31, 2017 at 5:07 PM, Jim Nasby wrote:
> On 11/29/16 9:58 AM, Jeff Janes wrote:
>> Considering a single SSD can do 70% of that limit, I would say
>> yes.
>>
>> Next question becomes... should there even be an upper limit?
>>
>>
>> Where the
On 11/29/16 9:58 AM, Jeff Janes wrote:
Considering a single SSD can do 70% of that limit, I would say yes.
Next question becomes... should there even be an upper limit?
Where the contortions needed to prevent calculation overflow become
annoying?
I'm not a big fan of nannyism
On Mon, Nov 28, 2016 at 1:20 PM, Jim Nasby wrote:
> On 11/28/16 11:53 AM, Joshua D. Drake wrote:
>
>> On 11/28/2016 11:40 AM, Jim Nasby wrote:
>>
>>> With current limits, the most bgwriter can do (with 8k pages) is 1000
>>> pages * 100 times/sec = 780MB/s. It's not hard
On Tue, Nov 29, 2016 at 6:20 AM, Jim Nasby wrote:
> On 11/28/16 11:53 AM, Joshua D. Drake wrote:
>>
>> On 11/28/2016 11:40 AM, Jim Nasby wrote:
>>>
>>> With current limits, the most bgwriter can do (with 8k pages) is 1000
>>> pages * 100 times/sec = 780MB/s. It's not
On 11/28/16 11:53 AM, Joshua D. Drake wrote:
On 11/28/2016 11:40 AM, Jim Nasby wrote:
With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
On 11/28/2016 11:40 AM, Jim Nasby wrote:
With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
Considering a single SSD can do 70% of that
Hi,
On Mon, 2016-11-28 at 11:40 -0800, Jim Nasby wrote:
> With current limits, the most bgwriter can do (with 8k pages) is 1000
> pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
> modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
+1 for that. I've seen
With current limits, the most bgwriter can do (with 8k pages) is 1000
pages * 100 times/sec = 780MB/s. It's not hard to exceed that with
modern hardware. Should we increase the limit on bgwriter_lru_maxpages?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics,