On Thu, Sep 21, 2017 at 10:34 PM, Dilip Kumar wrote:
> On Thu, Sep 21, 2017 at 4:50 PM, Rafia Sabih wrote:
> ...

On Thu, Sep 21, 2017 at 4:50 PM, Rafia Sabih wrote:
> On Sun, Sep 17, 2017 at 9:10 PM, Dilip Kumar wrote:
>
> Please find the attached file for the ...

On Sun, Sep 17, 2017 at 9:10 PM, Dilip Kumar wrote:
> On Wed, Sep 6, 2017 at 4:14 PM, Rafia Sabih wrote:
>> I worked on this idea of using a local queue as a temporary buffer to
>> write the tuples when the master is busy and the shared queue is full,
>> and it gives quite some improvement in the query performance.
>
> I have done ...

On Fri, Jun 2, 2017 at 6:31 PM, Amit Kapila wrote:
> Your reasoning sounds sensible to me. I think the other way to attack
> this problem is that we can maintain some local queue in each of the
> workers when the shared memory queue becomes full. Basically, we can ...

On Fri, Jun 2, 2017 at 9:15 AM, Amit Kapila wrote:
> On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas wrote:
> ...

On Fri, Jun 2, 2017 at 6:38 PM, Robert Haas wrote:
> On Fri, Jun 2, 2017 at 9:01 AM, Amit Kapila wrote:
> ...

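The shape of that idea can be shown with a short, self-contained C sketch (illustrative only, not the patch later posted on this thread; the types and function names are made up). In the real executor the shared queue is a shm_mq in the parallel DSM segment and "full" shows up as a would-block condition; here toy types stand in so the sketch compiles on its own. The worker tries the shared queue first; when the send would block, it appends the tuple to a private overflow buffer and drains that buffer opportunistically before sending anything new, so tuple order is preserved and the worker does not stall just because the master is busy:

/*
 * Self-contained sketch of the local-queue idea; all names are invented for
 * illustration and the queues are toy types so this compiles on its own.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SHARED_QUEUE_CAPACITY 4		/* stand-in for the 64kB shm_mq */

typedef struct SharedQueue
{
	int		items[SHARED_QUEUE_CAPACITY];
	int		count;
} SharedQueue;

typedef struct LocalQueue
{
	int	   *items;			/* worker-local overflow buffer */
	int		count;
	int		capacity;
} LocalQueue;

/* Try to put one tuple on the shared queue; false means "would block". */
static bool
shared_queue_try_send(SharedQueue *sq, int tuple)
{
	if (sq->count >= SHARED_QUEUE_CAPACITY)
		return false;
	sq->items[sq->count++] = tuple;
	return true;
}

/* Append a tuple to the worker-local buffer, growing it as needed. */
static void
local_queue_append(LocalQueue *lq, int tuple)
{
	if (lq->count == lq->capacity)
	{
		lq->capacity = lq->capacity ? lq->capacity * 2 : 8;
		lq->items = realloc(lq->items, lq->capacity * sizeof(int));
	}
	lq->items[lq->count++] = tuple;
}

/* Move as many locally buffered tuples as will fit into the shared queue. */
static void
local_queue_flush(LocalQueue *lq, SharedQueue *sq)
{
	int		sent = 0;

	while (sent < lq->count && shared_queue_try_send(sq, lq->items[sent]))
		sent++;
	if (sent > 0)
	{
		memmove(lq->items, lq->items + sent,
				(lq->count - sent) * sizeof(int));
		lq->count -= sent;
	}
}

/*
 * What the worker would do for every tuple it produces.  Tuples must reach
 * the master in order, so once anything is buffered locally, new tuples go
 * behind the backlog instead of jumping straight onto the shared queue.
 */
static void
worker_send_tuple(LocalQueue *lq, SharedQueue *sq, int tuple)
{
	local_queue_flush(lq, sq);			/* first try to drain the backlog */

	if (lq->count > 0 || !shared_queue_try_send(sq, tuple))
		local_queue_append(lq, tuple);	/* master is busy: buffer locally */
}

int
main(void)
{
	SharedQueue sq = {{0}, 0};
	LocalQueue	lq = {NULL, 0, 0};

	/* Produce ten tuples without ever blocking on the full shared queue. */
	for (int t = 1; t <= 10; t++)
		worker_send_tuple(&lq, &sq, t);
	printf("shared: %d, locally buffered: %d\n", sq.count, lq.count);

	/* Pretend the master drained the shared queue, then flush the backlog. */
	sq.count = 0;
	local_queue_flush(&lq, &sq);
	printf("after flush: shared %d, local %d\n", sq.count, lq.count);

	free(lq.items);
	return 0;
}

A real implementation would presumably bound the local buffer and fall back to blocking once both queues are full; the sketch is only meant to show where the buffering sits relative to the shared queue.
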
On Thu, Jun 1, 2017 at 6:41 PM, Rafia Sabih wrote:
> On Tue, May 30, 2017 at 4:57 PM, Robert Haas wrote:
>> I did a little bit of brief experimentation on this same topic a long
>> time ago and didn't see an improvement from boosting the ...

On 2017-06-01 18:41:20 +0530, Rafia Sabih wrote:
> As per my understanding, it looks like this increase in tuple queue
> size is helping only Gather Merge, particularly in the cases where the
> master stalls enough in Gather Merge because it is maintaining the
> sort order. Like in q12, the ...
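To spell out the stall being described: Gather Merge has to emit tuples in sort order, so at any moment it can only consume from the worker whose next tuple sorts lowest. The other workers keep producing until their 64kB queues fill up and then block, which is why a larger queue, or a local overflow buffer as in the idea discussed above, helps this node type in particular.
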
On Tue, May 30, 2017 at 4:57 PM, Robert Haas wrote:
> I did a little bit of brief experimentation on this same topic a long
> time ago and didn't see an improvement from boosting the queue size
> beyond 64k, but Rafia is testing Gather rather than Gather Merge and,
> as I ...

On Wed, May 31, 2017 at 2:35 AM, Ashutosh Bapat wrote:
> AFAIK, work_mem comes from memory private to the process whereas this
> memory will come from the shared memory pool.

I don't think that really matters. The point of limits like work_mem
is to avoid ...

> I did a little bit of brief experimentation on this same topic a long
> time ago and didn't see an improvement from boosting the queue size
> beyond 64k, but Rafia is testing Gather rather than Gather Merge and,
> as I say, my test was very brief. I think it would be a good idea to
> try to ...

On 2017-05-30 07:27:12 -0400, Robert Haas wrote:
> The other is that I figured 64k was small enough that nobody would
> care about the memory utilization. I'm not sure we can assume the
> same thing if we make this bigger. It's probably fine to use a 6.4M
> tuple queue for each worker if ...
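To put that utilization concern in rough numbers: with the current 64kB queue and, say, four workers, a Gather node sets aside about 256kB of dynamic shared memory for tuple queues, whereas a 6.4MB queue per worker would mean roughly 25.6MB for the same plan, and proportionally more at higher worker counts.
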
On Tue, May 30, 2017 at 6:50 AM, Ashutosh Bapat wrote:
> Increasing that number would require increased DSM, which may not be
> available. Also, I don't see any analysis as to why 6553600 was chosen.
> Is it optimal? Does it work for all kinds of workloads?

On Tue, May 30, 2017 at 5:28 AM, Rafia Sabih wrote:
> Hello everybody,
>
> Here is a thing I observed in my recent experimentation: on changing
> the value of PARALLEL_TUPLE_QUEUE_SIZE to 6553600, the performance of
> a TPC-H query improved by more than 50%.

How ...
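For reference, PARALLEL_TUPLE_QUEUE_SIZE is a compile-time constant in the executor (src/backend/executor/execParallel.c in this era, though the exact location may vary between versions), so the experiment described above amounts to a one-line change along these lines:

/* Size of each worker's shared-memory tuple queue; the default is 64kB. */
#define PARALLEL_TUPLE_QUEUE_SIZE	65536

/*
 * The experiment replaces the value above with one 100x larger, i.e. a
 * roughly 6.4MB queue per worker:
 *
 * #define PARALLEL_TUPLE_QUEUE_SIZE	6553600
 */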