2015-09-17 22:13 GMT+02:00 Robert Haas <robertmh...@gmail.com>:

> On Thu, Sep 17, 2015 at 11:16 AM, Pavel Stehule <pavel.steh...@gmail.com>
> wrote:
> > Is there some risk if we allocate too large a DSM segment? And what is
> > the maximum size of a DSM segment? When we use shm_mq, we don't need to
> > worry about limits.
>
> I can't really see how you are going to have a problem.  How much data
> do you really intend to send back?  Surely it's going to be <100kB.
> If you think it's not a problem to have a running query stop and send
> a gigabyte of data someplace anytime somebody asks, well, I don't
> think I agree.
>
>
I'm afraid <100kB is too optimistic. The GoodData environment is admittedly
an exception - it uses a query generator - but I found a few plans >1MB
there. That is probably unusual, partly due to long identifiers (63-byte
hashes are used), and some of the queries and models are strange: the
generator translates between an analytic, multidimensional,
business-oriented query language and SQL. Back to the topic: with high
probability we can expect <10MB.


> >> Also, if there are any bugs in the way the shm_mq is being used,
> >> they're likely to be quite rare and hard to find, because the vast
> >> majority of messages will probably be short enough to be sent in a
> >> single chunk, so whatever bugs may exist when the processes play
> >> ping-pong are unlikely to occur in practice except in unusual cases
> >> where the message being returned is very long.
> >
> > This is true for any functionality based on shm_mq - parallel seq scan,
>
> Parallel sequential scan is likely to put a lot more data through a
> shm_mq than you would for this.
>
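
Just to make the chunking behavior concrete, the sending side looks roughly
like this (only a sketch, not a tested patch - the queue size and the
surrounding setup are my assumptions; contrib/test_shm_mq shows the
complete dance):

#include "postgres.h"

#include "storage/dsm.h"
#include "storage/proc.h"
#include "storage/shm_mq.h"

#define MQ_SIZE 16384       /* ring-buffer size inside the DSM segment */

static void
send_one_message(dsm_segment *seg, const char *data, Size len)
{
    shm_mq     *mq;
    shm_mq_handle *mqh;
    shm_mq_result res;

    /* Create the queue in the segment and register us as the sender. */
    mq = shm_mq_create(dsm_segment_address(seg), MQ_SIZE);
    shm_mq_set_sender(mq, MyProc);
    mqh = shm_mq_attach(mq, seg, NULL);

    /*
     * If len > MQ_SIZE, this call loops internally, sleeping on the
     * process latch whenever the ring fills - the "ping-pong" between
     * sender and receiver mentioned above.  The receiver has to call
     * shm_mq_set_receiver() and shm_mq_attach() on its side.
     */
    res = shm_mq_send(mqh, len, data, false);
    if (res != SHM_MQ_SUCCESS)
        elog(ERROR, "could not send message via shm_mq");

    shm_mq_detach(mq);
}
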
> >> Second, using a shm_mq manipulates the state of the process latch.  I
> >> don't think you can make the assumption that it's safe to reset the
> >> process latch at any and every place where we check for interrupts.
> >> For example, suppose the process is already using a shm_mq and the
> >> CHECK_FOR_INTERRUPTS() call inside that code then discovers that
> >> somebody has activated this mechanism and you now go try to send and
> >> receive from a new shm_mq.  But even if that and every other
> >> CHECK_FOR_INTERRUPTS() in the code can tolerate a process latch reset
> >> today, it's a new coding rule that could easily trip people up in the
> >> future.
> >
> > That is valid, and probably most important. But if we introduce our own
> > mechanism, we will play with the process latch too (although we could
> > use LWLocks).
>
> With the design I proposed, there is zero need to touch the process
> latch, which is good, because I'm pretty sure that is going to be a
> problem.  I don't think there is any need to use LWLocks here either.
> When you get a request for data, you can just publish a DSM segment
> with the data and that's it.  Why do you need anything more?  You
> could set the requestor's latch if it's convenient; that wouldn't be a
> problem.  But the process supplying the data can't end up in a
> different state than it was before supplying that data, or stuff WILL
> break.
>

Sure - but then shm_mq has to have the same behavior; if it doesn't, only
one of the two mechanisms can be used for communication between processes,
which is a little bit limiting.
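
If I understand the proposed design well, the supplying process would do
only something like the following (again just a sketch under my own
assumptions - the function name and how the dsm_handle reaches the
requestor, presumably some slot in shared memory, are hypothetical, and
segment lifetime management is left out):

#include "postgres.h"

#include "storage/dsm.h"
#include "storage/latch.h"
#include "storage/proc.h"

static dsm_handle
publish_status_data(const char *data, Size len, PGPROC *requestor)
{
    dsm_segment *seg;
    char       *addr;

    /* One segment sized for the whole payload - no chunking needed. */
    seg = dsm_create(sizeof(Size) + len, 0);
    addr = dsm_segment_address(seg);

    /* Length first, then the payload itself. */
    memcpy(addr, &len, sizeof(Size));
    memcpy(addr + sizeof(Size), data, len);

    /*
     * Setting the requestor's latch is fine, per above; the important
     * thing is that we never reset our *own* latch here, so this
     * process ends in exactly the state it was in before.
     */
    SetLatch(&requestor->procLatch);

    return dsm_segment_handle(seg);
}

The requestor then only has to dsm_attach() the handle and read the
buffer, with no LWLocks and no ping-pong.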

>
> >> Using a shm_mq is appropriate when the amount of data that needs to be
> >> transmitted might be very large - too large to just allocate a buffer
> >> for the whole thing - or when the amount of data can't be predicted
> >> before memory is allocated.  But there is obviously no rule that a
> >> shm_mq should be used any time we have "data exchange between
> >> processes"; we have lots of shared-memory based IPC already and have
> >> for many years, and shm_mq is newer than the vast majority of that
> >> code.
> >
> > I am a little bit disappointed - I hoped shm_mq could be used as a
> > generic interprocess mechanism that takes care of all the corner cases
> > of working with shared memory. I understand that shm_mq is new and
> > almost nobody has used it yet, but that risk is better than reinventing
> > the wheel again and again.
>
> shm_mq is useful, but if you insist on using a complicated tool when a
> simple one is plenty sufficient, you may not get the results you're
> hoping for.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
