On Fri, Jan 12, 2018 at 5:06 PM, Andres Freund wrote:
> OTOH, it seems quite likely that we'll add more transaction-lifetime
> shared data (e.g. combocid), so building per-xact infrastructure
> actually seems like a good idea.
Sure, but there's no urgency about it.
On 2018-01-12 07:51:34 -0500, Robert Haas wrote:
> On Thu, Jan 11, 2018 at 11:01 PM, Thomas Munro wrote:
> > Are you saying we should do the work now to create a per-transaction
> > DSM segment + DSA area + thing that every backend attaches to?
>
> No, I was just thinking you could stuff it into the per-parallel-query
On Fri, Jan 12, 2018 at 4:19 PM, Robert Haas wrote:
> On Thu, Jan 11, 2018 at 6:01 PM, Thomas Munro wrote:
>> [ the data isn't session lifetime ]
>>
>> So I agree with Tom's suggestion:
>>
>> On Wed, Oct 4, 2017 at 2:29 PM, Tom Lane wrote:
>>> Perhaps serialize the contents into an array in DSM, then rebuild a hash
>>> table from that in the worker. Robert might have a better idea though.
On Fri, Oct 6, 2017 at 2:45 AM, Robert Haas wrote:
> On Tue, Oct 3, 2017 at 9:38 PM, Andres Freund wrote:
>>> Do you have any suggestion as to how we should transmit the blacklist to
>>> parallel workers?
>>
>> How about storing them in a dshash