On 2014-04-28 10:03:58 -0400, Tom Lane wrote:
> What I find much more worrisome about Andres' proposals is that he
> seems to be thinking that there are *no* other changes to the buffer
> headers on the horizon.
Err. I am not thinking that at all. I am pretty sure I never made that
argument. The r
Robert Haas writes:
> I think the fact that making 20k connections might crash your computer
> is an artifact of other problems that we really ought to also fix
> (like per-backend memory utilization, and lock contention on various
> global data structures) rather than baking it into more places.
On Mon, Apr 28, 2014 at 7:37 AM, Andres Freund wrote:
>> Well, often that's still good enough.
>
> That may be true for 2-4k max_connections, but >65k? That won't even
> *run*, not to speak of doing something, in most environments because of
> the number of processes required.
>
> Even making only
On 04/26/2014 09:27 PM, Andres Freund wrote:
I don't think we need to decide this without benchmarks proving the
benefits. I basically want to know whether somebody has an actual
usecase - even if I really, really, can't think of one - of setting
max_connections even remotely that high. If there'
On Sat, Apr 26, 2014 at 1:58 PM, Peter Geoghegan wrote:
> The 2Q paper also suggests a correlated reference period.
I withdraw this. 2Q in fact does not have such a parameter, while
LRU-K does. But the other major system I mentioned very explicitly has
a configurable delay that serves this exact
On Sat, Apr 26, 2014 at 1:30 PM, Noah Misch wrote:
> Sure, let's not actually commit a patch to impose this limit until the first
> change benefiting from doing so is ready to go. There remains an opportunity
> to evaluate whether that beneficiary change is better done a different way.
> By havin
Noah Misch writes:
> On Sat, Apr 26, 2014 at 11:20:56AM -0400, Tom Lane wrote:
>> While I agree with you that it seems somewhat unlikely we'd ever get
>> past 2^16 backends, these arguments are not nearly good enough to
>> justify a hard-wired limitation.
> I'm satisfied with the arguments Andres
On Sat, Apr 26, 2014 at 11:20:56AM -0400, Tom Lane wrote:
> Andres Freund writes:
> > What I think it's necessary for is at least:
>
> > * Move the buffer content lock inline into the buffer descriptor,
> > while still fitting into one cacheline.
> > * lockless/atomic Pin/Unpin Buffer.
>
>
On 2014-04-26 13:16:38 -0700, Josh Berkus wrote:
> However, I agree with Tom that Andres should "show his hand" before we
> decrease MAX_BACKENDS by 256X.
I just don't want to invest time in developing and benchmarking
something that's not going to be accepted anyway. Thus my question.
Greetings,
On 04/26/2014 11:06 AM, David Fetter wrote:
> I know we allow for gigantic numbers of backend connections, but I've
> never found a win for >2x the number of cores in the box, which at
> least in my experience so far tops out in the 8-bit (in extreme cases
> unsigned 8-bit) range.
For my part, I'v
On 2014-04-26 11:20:56 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-04-26 11:52:44 +0100, Greg Stark wrote:
> >> But I don't think it's beyond the realm of possibility
> >> that we'll reduce the overhead in the future with an eye to being able
> >> to do that. Is it that helpful that
Andres Freund writes:
> On 2014-04-26 05:40:21 -0700, David Fetter wrote:
>> Out of curiosity, where are you finding that a 32-bit integer is
>> causing problems that a 16-bit one would solve?
> Save space? For one it allows to shrink some structs (into one
> cacheline!).
And next week when we n
Andres Freund writes:
> On 2014-04-26 11:52:44 +0100, Greg Stark wrote:
>> But I don't think it's beyond the realm of possibility
>> that we'll reduce the overhead in the future with an eye to being able
>> to do that. Is it that helpful that it's worth baking in more
>> dependencies on that limit
On Sat, Apr 26, 2014 at 12:15:40AM +0200, Andres Freund wrote:
> Hi,
>
> Currently the maximum for max_connections (+ bgworkers + autovacuum) is
> defined by
> #define MAX_BACKENDS 0x7fffff
> which unfortunately means that some things like buffer reference counts
> need a full integer to store
On Fri, Apr 25, 2014 at 11:15 PM, Andres Freund wrote:
> Since there's absolutely no sensible scenario for setting
> max_connections that high, I'd like to change the limit to 2^16, so we
> can use a uint16 in BufferDesc->refcount.
Clearly there's no sensible way to run 64k backends in the curren