Hi,

On 2026-02-10 19:15:27 +0000, Bertrand Drouvot wrote:
> On Tue, Feb 10, 2026 at 01:15:01PM -0500, Andres Freund wrote:
> > On 2026-02-10 19:14:44 +0200, Heikki Linnakangas wrote:
> > Yea, I don't think we need to be perfect here. Just a bit less bad. And, as
> > you say, the current order doesn't make a lot of sense.
> > Just grouping things like
> > - pid, pgxactoff, backendType (i.e. barely if ever changing)
> > - wait_event_info, waitStart (i.e. very frequently changing, but typically
> >   accessed within one proc)
> > - sem, lwWaiting, waitLockMode (i.e. stuff that is updated frequently and
> >   accessed across processes)
> 
> With an ordering like in the attached (to apply on top of Heikki's patch),
> we're back to 832 bytes.

You'd really need to insert padding between the sections to make it work...
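
To make that concrete, here's a rough standalone sketch (not the real PGPROC
definition and not a proposed patch; the member names are just taken from the
groups listed above, with simplified stand-in types) of one way to force each
group onto its own cache line by aligning the first member of each group:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for PG_CACHE_LINE_SIZE (128 by default in pg_config_manual.h). */
#define CACHE_LINE_SIZE 128

/*
 * Illustrative layout only: three access-pattern groups, each starting on
 * its own cache line, so frequent writes to one group don't cause false
 * sharing with the others.
 */
typedef struct PGPROCSketch
{
	/* Group 1: set at backend start, barely if ever changing */
	int			pid;
	int			pgxactoff;
	int			backendType;

	/* Group 2: very frequently changing, mostly accessed by this backend */
	uint32_t	wait_event_info __attribute__((aligned(CACHE_LINE_SIZE)));
	int64_t		waitStart;

	/* Group 3: updated frequently and accessed across processes */
	void	   *sem __attribute__((aligned(CACHE_LINE_SIZE)));
	uint8_t		lwWaiting;
	int			waitLockMode;
} PGPROCSketch;

int
main(void)
{
	/* Each group's first member lands on a 128-byte boundary. */
	printf("pid at %zu, wait_event_info at %zu, sem at %zu, total %zu\n",
		   offsetof(PGPROCSketch, pid),
		   offsetof(PGPROCSketch, wait_event_info),
		   offsetof(PGPROCSketch, sem),
		   sizeof(PGPROCSketch));
	return 0;
}

Each group boundary can cost up to a cache line of trailing padding, which is
exactly the size-vs-false-sharing trade-off being discussed here.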


> But, then the pg_attribute_aligned() added in Heikki's patch makes it
> 896 bytes...
> 
> "
> /*    816      |      16 */    dlist_node lockGroupLink;
> /* XXX 64-byte padding   */
> 
>                                /* total size (bytes):  896 */
>                              }
> "
> 
> What about applying this new ordering and removing the pg_attribute_aligned()?


> (I thought the aligned attribute would be smarter than that and not add these
> 64 padding bytes).

That's just because we have

/*
 * Assumed cache line size.  This doesn't affect correctness, but can be used
 * for low-level optimizations.  This is mostly used to pad various data
 * structures, to ensure that highly-contended fields are on different cache
 * lines.  Too small a value can hurt performance due to false sharing, while
 * the only downside of too large a value is a few bytes of wasted memory.
 * The default is 128, which should be large enough for all supported
 * platforms.
 */
#define PG_CACHE_LINE_SIZE              128
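
With pg_attribute_aligned(PG_CACHE_LINE_SIZE) the struct's size gets rounded
up to the next multiple of 128; 832 isn't one, so the compiler appends 64
bytes of trailing padding to reach 896 (= 7 * 128). A minimal standalone
example (nothing PostgreSQL-specific, the 832-byte payload just mimics the
reordered struct's size):

#include <stdio.h>

#define CACHE_LINE_SIZE 128		/* same value as PG_CACHE_LINE_SIZE */

/* 832 bytes of members, no alignment request: sizeof() stays 832. */
typedef struct
{
	char		payload[832];
} PlainStruct;

/*
 * Same members, but aligned the way pg_attribute_aligned(PG_CACHE_LINE_SIZE)
 * expands for GCC-style compilers.  A struct's size is always a multiple of
 * its alignment, so 64 bytes of trailing padding are added: 832 -> 896.
 */
typedef struct
{
	char		payload[832];
} __attribute__((aligned(CACHE_LINE_SIZE))) AlignedStruct;

int
main(void)
{
	printf("plain:   %zu\n", sizeof(PlainStruct));		/* 832 */
	printf("aligned: %zu\n", sizeof(AlignedStruct));	/* 896 */
	return 0;
}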


I don't think we need to worry about the number of bytes here very much. This
isn't much compared to all the other overheads a connection slot has (like the
memory for locks).

Greetings,

Andres Freund

