On 2014-04-26 11:20:56 -0400, Tom Lane wrote:
> Andres Freund <and...@2ndquadrant.com> writes:
> > On 2014-04-26 11:52:44 +0100, Greg Stark wrote:
> >> But I don't think it's beyond the realm of possibility
> >> that we'll reduce the overhead in the future with an eye to being able
> >> to do that. Is it that helpful that it's worth baking in more
> >> dependencies on that limitation?
>
> > What I think it's necessary for is at least:
>
> > * Move the buffer content lock inline into the buffer descriptor,
> >   while still fitting into one cacheline.
> > * Lockless/atomic Pin/Unpin of buffers.
>
> TBH, that argument seems darn weak, not to mention probably applicable
> only to current-vintage Intel chips.
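To put some numbers to the first point, here's a rough sketch of the kind
of byte counting involved. The names and field widths are made up for
illustration - this is *not* the actual BufferDesc layout:

#include <stdatomic.h>
#include <stdint.h>

/* hypothetical buffer tag: which block of which relation */
typedef struct buftag
{
	uint32_t	spc;			/* tablespace */
	uint32_t	db;				/* database */
	uint32_t	rel;			/* relation */
	uint32_t	fork;			/* fork number */
	uint32_t	block;			/* block number */
} buftag;						/* 20 bytes */

typedef struct buf_desc
{
	buftag		tag;			/* 20 bytes */
	int32_t		buf_id;			/* 4: index of this descriptor */
	_Atomic uint32_t state;		/* 4: flags + usagecount + refcount */
	int32_t		free_next;		/* 4: freelist link */
	uint16_t	wait_backend_id; /* 2: only possible with 16bit ids */
	/* 2 bytes padding */
	char		content_lock[16]; /* content lock, moved inline */
} buf_desc;						/* 52 bytes, 12 to spare */

_Static_assert(sizeof(buf_desc) <= 64, "doesn't fit in one cacheline");

Every additional or widened field eats into what's left for the inlined
content lock, and a 16 bit backend id is one of the cheaper places to
claw back space.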
64 bytes has been the cacheline size for more than a decade, and not just
on x86: ARM has moved to it as well, as have other architectures. And even
if it's 32 or 128 bytes somewhere, fitting a datastructure into a power of
two of the cacheline size is still beneficial. I don't think many
datastructures in pg deserve that kind of attention, but the buffer
descriptors are one of the few - they're currently among the top three
sources of cpu cache misses in pg.

> And you have not proven that
> narrowing the backend ID is necessary to either goal, even if we
> accepted that these goals were that important.

I am pretty sure there are other ways, but since the actual cost of that
restriction is, imo, just about zero, it seems like a quite sensible
solution.

> While I agree with you that it seems somewhat unlikely we'd ever get
> past 2^16 backends, these arguments are not nearly good enough to
> justify a hard-wired limitation.

Even if you include a lockless pin/unpin of buffers? Besides the lwlocks'
internal spinlocks, the buffer spinlocks are by far the hottest ones in
PG.
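To illustrate what lockless pinning means - again just a sketch, not pg
code, assuming the shared refcount lives in the low bits of a 32bit
atomic state word:

#include <stdatomic.h>
#include <stdint.h>

static void
pin_buffer(_Atomic uint32_t *state)
{
	uint32_t	old = atomic_load_explicit(state, memory_order_relaxed);

	/* on CAS failure 'old' is refreshed with the current value */
	while (!atomic_compare_exchange_weak(state, &old, old + 1))
		;
}

Unpinning is the same with old - 1. No spinlock acquire/release at all;
the real thing obviously has to cope with flag bits sharing the word,
but the hot path stays a single CAS - which is exactly why the
descriptor layout above matters.

Greetings,

Andres Freund

--
 Andres Freund	                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services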