On Mon, Oct 23, 2017 at 10:43 AM, Amit Langote wrote:
> On 2017/10/22 5:25, Thomas Munro wrote:
>> On Sun, Oct 22, 2017 at 5:09 AM, Robert Haas wrote:
>>> On Tue, Sep 19, 2017 at 3:31 AM, Masahiko Sawada
>>> wrote:
> Down at the bottom of the build log in the regression diffs file you can
On 2017/10/22 5:25, Thomas Munro wrote:
> On Sun, Oct 22, 2017 at 5:09 AM, Robert Haas wrote:
>> On Tue, Sep 19, 2017 at 3:31 AM, Masahiko Sawada wrote:
Down at the bottom of the build log in the regression diffs file you can
see:
! ERROR: cache lookup failed for relation 32893
On Sun, Oct 22, 2017 at 5:09 AM, Robert Haas wrote:
> On Tue, Sep 19, 2017 at 3:31 AM, Masahiko Sawada wrote:
>>> Down at the bottom of the build log in the regression diffs file you can
>>> see:
>>>
>>> ! ERROR: cache lookup failed for relation 32893
>>>
>>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/277165907
On Tue, Sep 19, 2017 at 3:31 AM, Masahiko Sawada wrote:
>> Down at the bottom of the build log in the regression diffs file you can see:
>>
>> ! ERROR: cache lookup failed for relation 32893
>>
>> https://travis-ci.org/postgresql-cfbot/postgresql/builds/277165907
>
> Thank you for letting me know.
On Tue, Sep 19, 2017 at 4:31 PM, Masahiko Sawada wrote:
> On Tue, Sep 19, 2017 at 3:33 PM, Thomas Munro wrote:
>> On Fri, Sep 8, 2017 at 10:37 PM, Masahiko Sawada wrote:
>>> Since v4 patch conflicts with current HEAD I attached the latest version
>>> patch.
>>
>> Hi Sawada-san,
>>
>> Here is an interesting failure with this patch:
On Tue, Sep 19, 2017 at 3:33 PM, Thomas Munro wrote:
> On Fri, Sep 8, 2017 at 10:37 PM, Masahiko Sawada wrote:
>> Since v4 patch conflicts with current HEAD I attached the latest version
>> patch.
>
> Hi Sawada-san,
>
> Here is an interesting failure with this patch:
>
> test rowsecurity ... FAILED
On Fri, Sep 8, 2017 at 10:37 PM, Masahiko Sawada wrote:
> Since v4 patch conflicts with current HEAD I attached the latest version
> patch.
Hi Sawada-san,
Here is an interesting failure with this patch:
test rowsecurity ... FAILED
test rules... FAILED
Down at
On Tue, Aug 15, 2017 at 10:13 AM, Masahiko Sawada wrote:
> On Wed, Jul 26, 2017 at 5:38 PM, Masahiko Sawada wrote:
>> On Sun, Mar 5, 2017 at 4:09 PM, Masahiko Sawada wrote:
>>> On Sun, Mar 5, 2017 at 12:14 PM, David Steele wrote:
>>>> On 3/4/17 9:08 PM, Masahiko Sawada wrote:
>>>>> On Sat, Mar 4, 2017 at 5:47 PM, Robert Haas wrote:
On Sun, Mar 5, 2017 at 12:14 PM, David Steele wrote:
> On 3/4/17 9:08 PM, Masahiko Sawada wrote:
>> On Sat, Mar 4, 2017 at 5:47 PM, Robert Haas wrote:
>>> On Fri, Mar 3, 2017 at 9:50 PM, Masahiko Sawada wrote:
>>>> Yes, it's taking a time to update logic and measurement but it's
>>>> coming along. Also I'm working on changing deadlock detection. Will
>>>> post new patch and measurement result.
On 3/4/17 9:08 PM, Masahiko Sawada wrote:
> On Sat, Mar 4, 2017 at 5:47 PM, Robert Haas wrote:
>> On Fri, Mar 3, 2017 at 9:50 PM, Masahiko Sawada wrote:
>>> Yes, it's taking a time to update logic and measurement but it's
>>> coming along. Also I'm working on changing deadlock detection. Will
>>> post new patch and measurement result.
On Sat, Mar 4, 2017 at 5:47 PM, Robert Haas wrote:
> On Fri, Mar 3, 2017 at 9:50 PM, Masahiko Sawada wrote:
>> Yes, it's taking a time to update logic and measurement but it's
>> coming along. Also I'm working on changing deadlock detection. Will
>> post new patch and measurement result.
>
> I think that we should push this patch out to v11.
On Fri, Mar 3, 2017 at 9:50 PM, Masahiko Sawada wrote:
> Yes, it's taking a time to update logic and measurement but it's
> coming along. Also I'm working on changing deadlock detection. Will
> post new patch and measurement result.
I think that we should push this patch out to v11. I think ther
On Fri, Mar 3, 2017 at 11:01 PM, David Steele wrote:
> On 1/10/17 11:23 AM, Claudio Freire wrote:
>> On Tue, Jan 10, 2017 at 6:42 AM, Masahiko Sawada wrote:
>>>> Does this work negate the other work to allow VACUUM to use >
>>>> 1GB memory?
>>>
>>> Partly yes. Because memory space for dead TIDs needs to be allocated
On 1/10/17 11:23 AM, Claudio Freire wrote:
> On Tue, Jan 10, 2017 at 6:42 AM, Masahiko Sawada wrote:
>>> Does this work negate the other work to allow VACUUM to use >
>>> 1GB memory?
>>
>> Partly yes. Because memory space for dead TIDs needs to be allocated
>> in DSM before vacuum worker launches, parallel lazy vacuum cannot use
>> such a variable
On Tue, Jan 10, 2017 at 6:42 AM, Masahiko Sawada wrote:
>> Does this work negate the other work to allow VACUUM to use >
>> 1GB memory?
>
> Partly yes. Because memory space for dead TIDs needs to be allocated
> in DSM before vacuum worker launches, parallel lazy vacuum cannot use
> such a variable
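
To make that constraint concrete, here is a minimal standalone sketch of the sizing arithmetic being discussed, assuming PostgreSQL's 6-byte ItemPointerData and the default 64MB maintenance_work_mem; the struct and main() wrapper are illustrative, not code from the patch:

#include <stdint.h>
#include <stdio.h>

/* Stand-in with the same 6-byte size as PostgreSQL's ItemPointerData. */
typedef struct { uint16_t bi_hi, bi_lo, offnum; } DeadTidSlot;

int main(void)
{
    long maintenance_work_mem_kb = 64 * 1024;   /* the 64MB default */
    long max_dead_tuples = (maintenance_work_mem_kb * 1024L) / sizeof(DeadTidSlot);

    /* With a backend-local array, serial lazy vacuum can size this however it
     * likes; with DSM the segment holding the array must be created with a
     * fixed size before any parallel worker launches, and cannot grow later. */
    printf("dead-TID slots reserved up front: %ld\n", max_dead_tuples);
    return 0;
}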
On Tue, Jan 10, 2017 at 6:42 AM, Masahiko Sawada wrote:
> Attached are the result of a performance test with scale factor = 500 and
> the test script I used. I measured each test four times and plotted the
> average of the last three execution times in the sf_500.png file. When the
> table has an index, vacuum execution time
On Mon, Jan 9, 2017 at 6:01 PM, Simon Riggs wrote:
> On 9 January 2017 at 08:48, Masahiko Sawada wrote:
>
> I had not considered the necessity of deadlock detection support.
>
> It seems like a big potential win to scan multiple indexes in parallel.
>
> Does the design for collecting dead TIDs use
On Tue, Jan 10, 2017 at 3:46 PM, Amit Kapila wrote:
> On Mon, Jan 9, 2017 at 2:18 PM, Masahiko Sawada wrote:
>> On Sat, Jan 7, 2017 at 2:47 PM, Amit Kapila wrote:
>>> On Fri, Jan 6, 2017 at 11:08 PM, Masahiko Sawada wrote:
>>>> On Mon, Oct 3, 2016 at 11:00 AM, Michael Paquier wrote:
On Mon, Jan 9, 2017 at 2:18 PM, Masahiko Sawada wrote:
> On Sat, Jan 7, 2017 at 2:47 PM, Amit Kapila wrote:
>> On Fri, Jan 6, 2017 at 11:08 PM, Masahiko Sawada wrote:
>>> On Mon, Oct 3, 2016 at 11:00 AM, Michael Paquier wrote:
>>>> On Fri, Sep 16, 2016 at 6:56 PM, Masahiko Sawada wrote:
On Sat, Jan 7, 2017 at 7:18 AM, Claudio Freire wrote:
> On Fri, Jan 6, 2017 at 2:38 PM, Masahiko Sawada wrote:
>>  table_size | indexes | parallel_degree |   time
>> ------------+---------+-----------------+----------
>>  6.5GB      |       0 |               1 | 00:00:14
>>  6.5GB      |       0
On 9 January 2017 at 08:48, Masahiko Sawada wrote:
> I had not considered the necessity of deadlock detection support.
It seems like a big potential win to scan multiple indexes in parallel.
What do we actually gain from having the other parts of VACUUM execute
in parallel? Does truncation happen
On Sat, Jan 7, 2017 at 2:47 PM, Amit Kapila wrote:
> On Fri, Jan 6, 2017 at 11:08 PM, Masahiko Sawada wrote:
>> On Mon, Oct 3, 2016 at 11:00 AM, Michael Paquier wrote:
>>> On Fri, Sep 16, 2016 at 6:56 PM, Masahiko Sawada wrote:
>>>> Yeah, I don't have a good solution for this problem so far.
On Fri, Jan 6, 2017 at 11:08 PM, Masahiko Sawada wrote:
> On Mon, Oct 3, 2016 at 11:00 AM, Michael Paquier wrote:
>> On Fri, Sep 16, 2016 at 6:56 PM, Masahiko Sawada wrote:
>>> Yeah, I don't have a good solution for this problem so far.
>>> We might need to improve group locking mechanism
On Fri, Jan 6, 2017 at 2:38 PM, Masahiko Sawada wrote:
>  table_size | indexes | parallel_degree |   time
> ------------+---------+-----------------+----------
>  6.5GB      |       0 |               1 | 00:00:14
>  6.5GB      |       0 |               2 | 00:00:02
>  6.5GB      |       0 |
On Mon, Oct 3, 2016 at 11:00 AM, Michael Paquier wrote:
> On Fri, Sep 16, 2016 at 6:56 PM, Masahiko Sawada wrote:
>> Yeah, I don't have a good solution for this problem so far.
>> We might need to improve group locking mechanism for the updating
>> operation or came up with another approach to
On Fri, Sep 16, 2016 at 6:56 PM, Masahiko Sawada wrote:
> Yeah, I don't have a good solution for this problem so far.
> We might need to improve group locking mechanism for the updating
> operation or came up with another approach to resolve this problem.
> For example, one possible idea is that t
On Thu, Sep 15, 2016 at 11:44 PM, Robert Haas wrote:
> On Thu, Sep 15, 2016 at 7:21 AM, Masahiko Sawada wrote:
>> I'm implementing this patch but I need to resolve the problem
>> regarding lock for extension by multiple parallel workers.
>> In parallel vacuum, multiple workers could try to acquire the
>> exclusive lock for extension on same relation.
On Thu, Sep 15, 2016 at 7:21 AM, Masahiko Sawada wrote:
> I'm implementing this patch but I need to resolve the problem
> regarding lock for extension by multiple parallel workers.
> In parallel vacuum, multiple workers could try to acquire the
> exclusive lock for extension on same relation.
> Si
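
For reference, the lock being discussed is the relation extension lock. Below is a minimal sketch of the usual single-backend pattern, using the existing LockRelationForExtension()/UnlockRelationForExtension() API; the helper function is illustrative, not code from the patch, and the open question in the thread is how this behaves when several cooperating vacuum workers request it on the same relation:

#include "postgres.h"

#include "storage/lmgr.h"
#include "storage/lockdefs.h"
#include "utils/rel.h"

/* Illustrative helper: only one backend at a time is supposed to be able
 * to extend the relation while holding this lock. */
static void
add_block_with_extension_lock(Relation rel)
{
    LockRelationForExtension(rel, ExclusiveLock);
    /* ... add a new block and update the free space map here ... */
    UnlockRelationForExtension(rel, ExclusiveLock);
}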
On Sat, Sep 10, 2016 at 7:44 PM, Pavan Deolasee wrote:
>
>
> On Wed, Aug 24, 2016 at 3:31 AM, Michael Paquier wrote:
>>
>> On Tue, Aug 23, 2016 at 10:50 PM, Amit Kapila wrote:
>> > On Tue, Aug 23, 2016 at 6:11 PM, Michael Paquier wrote:
>> >> On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
On Wed, Aug 24, 2016 at 3:31 AM, Michael Paquier wrote:
> On Tue, Aug 23, 2016 at 10:50 PM, Amit Kapila wrote:
> > On Tue, Aug 23, 2016 at 6:11 PM, Michael Paquier wrote:
> >> On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
> >>> As for PoC, I implemented parallel vacuum so that each worker
On Tue, Aug 23, 2016 at 10:50 PM, Amit Kapila wrote:
> On Tue, Aug 23, 2016 at 6:11 PM, Michael Paquier wrote:
>> On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
>>> As for PoC, I implemented parallel vacuum so that each worker
>>> processes both 1 and 2 phases for particular block range.
On 2016-08-23 12:17:30 -0400, Robert Haas wrote:
> On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera wrote:
> > Robert Haas wrote:
> >> 2. When you finish the heap scan, or when the array of dead tuple IDs
> >> is full (or very nearly full?), perform a cycle of index vacuuming.
> >> For now, have each worker process a separate index; extra workers just
> >> wait.
Robert Haas wrote:
> On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera wrote:
> > Robert Haas wrote:
> >> 2. When you finish the heap scan, or when the array of dead tuple IDs
> >> is full (or very nearly full?), perform a cycle of index vacuuming.
> >> For now, have each worker process a separate index; extra workers just
> >> wait.
On Tue, Aug 23, 2016 at 11:17 AM, Alvaro Herrera wrote:
> Robert Haas wrote:
>> 2. When you finish the heap scan, or when the array of dead tuple IDs
>> is full (or very nearly full?), perform a cycle of index vacuuming.
>> For now, have each worker process a separate index; extra workers just
>>
On Tue, Aug 23, 2016 at 10:50 PM, Robert Haas wrote:
> On Tue, Aug 23, 2016 at 7:02 AM, Masahiko Sawada wrote:
>> I'd like to propose block level parallel VACUUM.
>> This feature makes it possible for VACUUM to use multiple CPU cores.
>
> Great. This is something that I have thought about, too. Andres and
> Heikki recommended it as a project to me a few PGCons ago.
Robert Haas wrote:
> 2. When you finish the heap scan, or when the array of dead tuple IDs
> is full (or very nearly full?), perform a cycle of index vacuuming.
> For now, have each worker process a separate index; extra workers just
> wait. Perhaps use the condition variable patch that I posted
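
A minimal sketch of that scheme, assuming a shared state struct placed in DSM plus PostgreSQL's atomics and condition-variable APIs; ParallelIndexVacState, its fields, and vacuum_one_index() are illustrative stand-ins, not the patch's code:

#include "postgres.h"

#include "pgstat.h"
#include "port/atomics.h"
#include "storage/condition_variable.h"

typedef struct ParallelIndexVacState   /* hypothetical, lives in DSM */
{
    int                 nindexes;      /* number of indexes on the table */
    pg_atomic_uint32    next_index;    /* next unclaimed index */
    pg_atomic_uint32    nfinished;     /* indexes fully vacuumed so far */
    ConditionVariable   cv;            /* signalled whenever one finishes */
} ParallelIndexVacState;

static void vacuum_one_index(int idx); /* stand-in for the real index work */

/* Each worker runs this once per index-vacuuming cycle.  The leader would
 * initialize the counters with pg_atomic_init_u32() and the condition
 * variable with ConditionVariableInit() before launching workers. */
static void
index_vacuum_cycle(ParallelIndexVacState *state)
{
    for (;;)
    {
        uint32 idx = pg_atomic_fetch_add_u32(&state->next_index, 1);

        if (idx >= (uint32) state->nindexes)
            break;              /* no index left to claim */

        vacuum_one_index((int) idx);
        pg_atomic_fetch_add_u32(&state->nfinished, 1);
        ConditionVariableBroadcast(&state->cv);
    }

    /* "Extra workers just wait": sleep until every index is done. */
    while (pg_atomic_read_u32(&state->nfinished) < (uint32) state->nindexes)
        ConditionVariableSleep(&state->cv, WAIT_EVENT_PARALLEL_FINISH);
    ConditionVariableCancelSleep();
}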
On Tue, Aug 23, 2016 at 9:40 PM, Alexander Korotkov wrote:
> On Tue, Aug 23, 2016 at 3:32 PM, Tom Lane wrote:
>>
>> Claudio Freire writes:
>> > Not only that, but from your description (I haven't read the patch,
>> > sorry), you'd be scanning the whole index multiple times (one per
>> > worker).
On Tue, Aug 23, 2016 at 7:02 AM, Masahiko Sawada wrote:
> I'd like to propose block level parallel VACUUM.
> This feature makes it possible for VACUUM to use multiple CPU cores.
Great. This is something that I have thought about, too. Andres and
Heikki recommended it as a project to me a few PGCons ago.
On Tue, Aug 23, 2016 at 6:11 PM, Michael Paquier wrote:
> On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
>> As for PoC, I implemented parallel vacuum so that each worker
>> processes both 1 and 2 phases for particular block range.
>> Suppose we vacuum 1000 blocks table with 4 workers,
On 23.08.2016 15:41, Michael Paquier wrote:
> On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
>> As for PoC, I implemented parallel vacuum so that each worker
>> processes both 1 and 2 phases for particular block range.
>> Suppose we vacuum 1000 blocks table with 4 workers, each worker
>> processes 250 consecutive blocks in phase 1
On Tue, Aug 23, 2016 at 8:02 PM, Masahiko Sawada wrote:
> As for PoC, I implemented parallel vacuum so that each worker
> processes both 1 and 2 phases for particular block range.
> Suppose we vacuum 1000 blocks table with 4 workers, each worker
> processes 250 consecutive blocks in phase 1 and th
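
The split being described is just consecutive block ranges; a small illustrative calculation (not patch code) for the 1000-block, 4-worker example:

#include <stdio.h>

int main(void)
{
    unsigned int nblocks = 1000, nworkers = 4;
    unsigned int per_worker = nblocks / nworkers;   /* 250 blocks each */

    for (unsigned int w = 0; w < nworkers; w++)
    {
        unsigned int start = w * per_worker;
        /* The last worker also picks up any remainder blocks. */
        unsigned int end = (w == nworkers - 1) ? nblocks : start + per_worker;

        printf("worker %u: blocks [%u, %u)\n", w, start, end);
    }
    return 0;
}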
On Tue, Aug 23, 2016 at 3:32 PM, Tom Lane wrote:
> Claudio Freire writes:
> > Not only that, but from your description (I haven't read the patch,
> > sorry), you'd be scanning the whole index multiple times (one per
> > worker).
>
> What about pointing each worker at a separate index? Obviously the
> degree of concurrency during index cleanup is then limited
Claudio Freire writes:
> Not only that, but from your description (I haven't read the patch,
> sorry), you'd be scanning the whole index multiple times (one per
> worker).
What about pointing each worker at a separate index? Obviously the
degree of concurrency during index cleanup is then limited
I repeated your test on a ProLiant DL580 Gen9 with Xeon E7-8890 v3:
pgbench -s 100, then the command "vacuum pgbench_accounts" after 10_000 transactions,
with: alter system set vacuum_cost_delay to DEFAULT;

 parallel_vacuum_workers |      time
-------------------------+----------------
                       1 | 138.703,263 ms
                       2 | 83.
On Tue, Aug 23, 2016 at 8:02 AM, Masahiko Sawada wrote:
>
> 2. Vacuum table and index (after 1 transaction executed)
> 1 worker : 12 sec
> 2 workers : 49 sec
> 3 workers : 54 sec
> 4 workers : 53 sec
>
> As a result of my test, since multiple process could frequently try to
> acquire
Hi all,
I'd like to propose block level parallel VACUUM.
This feature makes it possible for VACUUM to use multiple CPU cores.
Vacuum Processing Logic
=======================
PostgreSQL VACUUM processing logic consists of 2 phases:
1. Collecting dead tuple locations on heap.
2. Reclaiming dead tuples from heap
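
A simplified, self-contained sketch of that two-phase flow (the helpers below are placeholders, not PostgreSQL functions; the real serial implementation lives in vacuumlazy.c):

#include <stddef.h>

typedef struct { unsigned int block; unsigned short offset; } DeadTid;

/* Placeholder helpers standing in for the real per-block and per-index work. */
static size_t collect_dead_tids_from_block(unsigned int blkno, DeadTid *out, size_t avail)
{
    (void) blkno; (void) out; (void) avail;
    return 0;                   /* pretend the block had no dead tuples */
}
static void vacuum_all_indexes(const DeadTid *tids, size_t ntids)         { (void) tids; (void) ntids; }
static void remove_dead_tids_from_heap(const DeadTid *tids, size_t ntids) { (void) tids; (void) ntids; }

static void
lazy_vacuum_sketch(unsigned int nblocks, DeadTid *tids, size_t max_tids)
{
    size_t ntids = 0;

    for (unsigned int blkno = 0; blkno < nblocks; blkno++)
    {
        /* Phase 1: scan the heap block and remember dead tuple locations. */
        ntids += collect_dead_tids_from_block(blkno, tids + ntids, max_tids - ntids);

        /* When the dead-TID array fills up, run a Phase 2 cycle and reset. */
        if (ntids >= max_tids)
        {
            vacuum_all_indexes(tids, ntids);           /* drop matching index entries */
            remove_dead_tids_from_heap(tids, ntids);   /* make the heap slots reusable */
            ntids = 0;
        }
    }

    /* Final Phase 2 cycle for whatever was collected last. */
    if (ntids > 0)
    {
        vacuum_all_indexes(tids, ntids);
        remove_dead_tids_from_heap(tids, ntids);
    }
}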