2015-09-14 14:11 GMT+02:00 Tomas Vondra <tomas.von...@2ndquadrant.com>:

>
>
> On 09/14/2015 01:15 PM, Shulgin, Oleksandr wrote:
>
>> On Mon, Sep 14, 2015 at 11:53 AM, Tomas Vondra
>> <tomas.von...@2ndquadrant.com> wrote:
>>
>>
>>     On 09/14/2015 10:23 AM, Shulgin, Oleksandr wrote:
>>
>>         On Sat, Sep 12, 2015 at 11:50 AM, Tomas Vondra
>>         <tomas.von...@2ndquadrant.com> wrote:
>>
>>     ...
>>
>>              - Attempts to get the plan for simple insert queries like this
>>
>>                     INSERT INTO x SELECT * FROM x;
>>
>>                 end with a segfault, because ActivePortal->queryDesc is
>>                 0x0 for this query. Needs investigation.
>>
>>
>>         Yes, I've hit the same problem after submitting the latest
>>         version of the patch.  For now I've just added a check for
>>         queryDesc being not NULL, but I guess the top of the
>>         current_query_stack might contain something useful.  Something
>>         I need to try.
>>
>>
>>     Well, the thing is we're able to do EXPLAIN on those queries, and
>>     IIRC auto_explain can log them too. So perhaps look into the hooks
>>     where they take the queryDesc in those cases - it has to be
>>     available somewhere.
>>
>>
>> Yes, this seems to work.  I was able to successfully retrieve plans for
>> "create table as" and even "explain analyze" queries.  In both cases
>> ActivePortal doesn't have a valid queryDesc, but the executor hook
>> captures one.
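
For reference, capturing the queryDesc in an executor hook (roughly the
way auto_explain does it) could look like the sketch below.  This is
untested and only meant as an illustration: the names other than the
standard hook/executor symbols are made up, and the actual patch keeps a
stack (current_query_stack) rather than a single pointer, so that nested
executor invocations are handled as well:

    #include "postgres.h"
    #include "executor/executor.h"

    /* invented name; the patch apparently keeps a stack instead */
    static QueryDesc *current_query_desc = NULL;

    static ExecutorStart_hook_type prev_ExecutorStart = NULL;

    static void
    cmdstatus_ExecutorStart(QueryDesc *queryDesc, int eflags)
    {
        /* remember the descriptor so it can be explained on request */
        current_query_desc = queryDesc;

        if (prev_ExecutorStart)
            prev_ExecutorStart(queryDesc, eflags);
        else
            standard_ExecutorStart(queryDesc, eflags);
    }

    /* installed once at startup, e.g.:
     *     prev_ExecutorStart = ExecutorStart_hook;
     *     ExecutorStart_hook = cmdstatus_ExecutorStart;
     * a matching ExecutorEnd hook would pop/reset the pointer.
     */
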
>>
>>              - The lockless approach seems fine to me, although I think
>>                 the fear of performance issues is a bit moot (I don't
>>                 think we expect a large number of processes calling
>>                 pg_cmdstatus at the same time). But it's not
>>                 significantly more complex, so why not.
>>
>>
>>         I believe the main benefit of the less-locking approach is that
>>         if something goes wrong while two backends are communicating, it
>>         doesn't prevent the rest of them from operating, because there
>>         is no shared (and thus locked) communication channel.
>>
>>
>>     Sure, but I think it really deserves a bit more detailed explanation
>>     of the protocol, and discussion of the expected behavior for some
>>     basic failure types.
>>
>>     For example - what happens when a backend requests info, but dies
>>     before receiving it, and the backend is reused for another
>>     connection? Doesn't this introduce a race condition? Perhaps not,
>>     I'm just asking.
>>
>>
>> Now explained better in code comments.
>>
>> The worst thing that could happen in this case, I believe, is that the
>> requesting backend will not receive any response from the second use of
>> its slot, due to the slot being marked as processed by the backend that
>> was asked first:
>>
>> A:
>> slot->is_processed = false;
>> slot->is_valid = true;
>> /* signals backend B */;
>> shm_mq_read(); /* blocks */
>>
>> B: /* finds the slot and prepares the response */
>>
>> A: /* decides to bail out */
>> InvalidateCmdStatusSlot();
>> shm_mq_detach();
>> /* exits pg_cmdstatus() */
>>
>> /* pg_cmdstatus() is called again */
>> /* initializes the slot and signals some backend */
>> shm_mq_read(); /* blocks */
>>
>> B: /* completes preparing the response */
>> slot->is_processed = true;
>> shm_mq_send() => SHM_MQ_DETACHED
>> slot->is_valid = false;
>> /* gives up with this slot */
>>
>
hmm .. yes - there has to be a check whether the slot is still related to
the queue before changing the is_valid attribute.
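
Something like this on B's side, perhaps - just a sketch: only
is_processed and is_valid appear in the trace above, the
request_generation field, the function and its arguments are invented,
and the real slot layout in the patch may differ:

    #include "postgres.h"
    #include "storage/shm_mq.h"

    /* hypothetical slot layout */
    typedef struct CmdStatusSlot
    {
        bool    is_processed;
        bool    is_valid;
        uint32  request_generation; /* invented: bumped by A on each new request */
        /* ... shm_mq location and other bookkeeping ... */
    } CmdStatusSlot;

    /* B, after preparing the response for the request it saw */
    static void
    send_response(CmdStatusSlot *slot, shm_mq_handle *mqh,
                  uint32 generation_seen, Size len, void *data)
    {
        slot->is_processed = true;

        if (shm_mq_send(mqh, len, data, false) == SHM_MQ_DETACHED)
        {
            /*
             * A is gone.  Invalidate the slot only if it still belongs to
             * the request we were answering; if A has already reused it
             * for a new request (new generation), leave is_valid alone.
             */
            if (slot->request_generation == generation_seen)
                slot->is_valid = false;
        }
    }
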

Regards

Pavel
