On Tue, Jan 3, 2017 at 11:11 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> A survey of s_lock.h shows that we prefer char-width slock_t only on
>>> these architectures:
>>> x86
>>> sparc (but not sparcv9, there we use int)
>>> m68k
>>> vax
>> I don't think that's right, because on my MacBook Pro:
> ... which is an x86, which won't be affected by the proposed change.

Uh, x86 means 32-bit to me, and this MacBook Pro is definitely x86_64.

>> I think we would be well-advised to get the size of slock_t down to a
>> single byte on as many platforms as possible, because when it's any
>> wider than that it makes some critical structures that would otherwise
>> fit into a cache line start to not fit, and that can have a very big
>> impact on performance.
> I really doubt that that's a good argument for choosing a markedly less
> efficient locking primitive, which is what's at stake for PPC.  I have
> no info about the other architectures.

I don't know for certain, but I would not be willing to take that on
faith.  I'm not sure if you've read all of the discussion threads on
this mailing list about fitting things into cache lines and/or
aligning things to cache lines to avoid major performance regressions
on large servers, but there have been quite a few of those over the
last few years and there will doubtless be more.

> Also, since pg_atomic_flag is currently used in a grand total of zero
> places (other than the test case in regress.c), arguing that making
> it word-wide will bloat critical data structures is flat wrong.

Well, that just raises the question of whether we should rip it out.  If
it's unused, then, yes, the performance characteristics don't matter,
but if it's going to get used for anything important, I maintain that
both the speed of the implementation and the number of bytes it
consumes will be relevant.

Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)