Re: v1 -> v2 migration tests
Hi Stephen,

Thank you so much for such a detailed investigation! Very glad to hear
Patchwork is doing the right things.

Regards,
Daniel

> On Thu, 2017-06-22 at 11:16 +0100, Stephen Finucane wrote:
>> On Thu, 2017-06-22 at 09:05 +1000, Daniel Axtens wrote:
>>> Hi Stephen,
>>>
>>>> Thanks for doing this. I've tinkered with both instances and haven't
>>>> seen any serious issues. I'm most happy with the fact that the
>>>> performance of the web UI hasn't been altered (at least, from my brief
>>>> testing).
>>>>
>>>> I'm thinking we give this another day or two and we might be good to go?
>>>
>>> Russell picked up one thing that looked questionable:
>>> https://py3.patchwork.dja.id.au/project/lkml/list/?series== =&; q=powerpc%2Fnuma%3A+update==
>>>
>>> It looks like it's slightly misthreaded; I haven't had a chance to dig
>>> into it properly just yet. It could well be that the code is correct and
>>> the messages were just sent with weird headers.
>>
>> I took a look at the v6 versions of these...
>
> I also just took a look at "v1" (though it doesn't have a prefix, of
> course) and v5 of this over lunch. The "v1" patches/covers weren't
> threaded, but that's because they were missing References and In-Reply-To
> headers. We've fixed this since those patches were merged, so were they to
> be parsed today that wouldn't happen. As for v5, from looking at marc.info
> [1], it would appear that patch 1/2 was never sent or never made it to the
> list. Patch 2/2 in this series is also missing 'References' and
> 'In-Reply-To' headers, but because the patches arrived *after* we'd fixed
> that issue, they're threaded correctly.
>
> Important metadata about the patches is provided below, but all in all, I
> think Patchwork is working a-ok, which is good to hear.
> Stephen
>
> PS: I can't comment on the quality of the patches themselves, but this guy
> really needs to start sending his patches with git-send-email instead of
> Thunderbird :)
>
> [1] https://marc.info/?a=14570338841=1=2

_______________________________________________
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork
Re: v1 -> v2 migration tests
On Thu, 2017-06-22 at 11:16 +0100, Stephen Finucane wrote:
> However, despite this, Patchwork seems to be managing just fine. There's
> only one patch out of series, and that's the one that was mistakenly sent
> as 1/2. I don't know why this didn't get lumped into the existing series,
> but it's correct behavior so meh.

Found it:

https://github.com/getpatchwork/patchwork/blob/v2.0.0-rc4/patchwork/parser.py#L915

We actually codify this. Still seems correct.

Stephen
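For anyone following along without the source open: the gist of the behavior codified around that line is that a patch whose number is already taken in a series starts a new series rather than joining the existing one. The sketch below is a hypothetical simplification (the names `belongs_to_series` and the dict shape are invented for illustration, not Patchwork's actual API), but it captures why the mistakenly resent 1/2 ended up out of series:

```python
def belongs_to_series(series, patch_number):
    """Decide whether an incoming patch can slot into an existing series.

    Hypothetical simplification of the check Patchwork applies: a patch
    is rejected from a series that already contains a patch with the
    same number, so a duplicate "1/2" spawns a new series instead of
    clobbering the one already on file.
    """
    existing_numbers = {p['number'] for p in series['patches']}
    return patch_number not in existing_numbers


series = {'patches': [{'number': 1}, {'number': 2}]}
print(belongs_to_series(series, 1))  # False: slot 1/2 already taken
print(belongs_to_series(series, 3))  # True: a new number is fine
```

Under that rule, the behavior Stephen observed (one patch left out of series) is exactly what you'd expect.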
Re: v1 -> v2 migration tests
On Thu, 2017-06-22 at 11:16 +0100, Stephen Finucane wrote:
> On Thu, 2017-06-22 at 09:05 +1000, Daniel Axtens wrote:
>> Hi Stephen,
>>
>>> Thanks for doing this. I've tinkered with both instances and haven't
>>> seen any serious issues. I'm most happy with the fact that the
>>> performance of the web UI hasn't been altered (at least, from my brief
>>> testing).
>>>
>>> I'm thinking we give this another day or two and we might be good to go?
>>
>> Russell picked up one thing that looked questionable:
>> https://py3.patchwork.dja.id.au/project/lkml/list/?series== =&; q=powerpc%2Fnuma%3A+update==
>>
>> It looks like it's slightly misthreaded; I haven't had a chance to dig
>> into it properly just yet. It could well be that the code is correct and
>> the messages were just sent with weird headers.
>
> I took a look at the v6 versions of these...

I also just took a look at "v1" (though it doesn't have a prefix, of
course) and v5 of this over lunch. The "v1" patches/covers weren't
threaded, but that's because they were missing References and In-Reply-To
headers. We've fixed this since those patches were merged, so were they to
be parsed today that wouldn't happen. As for v5, from looking at marc.info
[1], it would appear that patch 1/2 was never sent or never made it to the
list. Patch 2/2 in this series is also missing 'References' and
'In-Reply-To' headers, but because the patches arrived *after* we'd fixed
that issue, they're threaded correctly.

Important metadata about the patches is provided below, but all in all, I
think Patchwork is working a-ok, which is good to hear.
Stephen

PS: I can't comment on the quality of the patches themselves, but this guy
really needs to start sending his patches with git-send-email instead of
Thunderbird :)

[1] https://marc.info/?a=14570338841=1=2

---

Original series ("v1"):

[0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs and memory
Date: Tue, 23 May 2017 10:15:11 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 48630CA6D5
    for ; Wed, 24 May 2017 01:15:45 +1000
Message-Id: <790af26b-7055-7997-2080-f967aef2d...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/cover/15561/

[1/2] powerpc/numa: Update CPU topology when VPHN enabled
Date: Tue, 23 May 2017 10:15:29 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id DE4FDCA6D5
    for ; Wed, 24 May 2017 01:15:47 +1000
Message-Id: <4a1bec9a-d3d2-c0bd-3956-e6e402be3...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/patch/15562/

[2/2] powerpc/hotplug/mm: Fix hot-add memory node assoc
Date: Tue, 23 May 2017 10:15:44 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 222F2CA6D5
    for ; Wed, 24 May 2017 01:16:03 +1000
Message-Id: <3bb44d92-b2ff-e197-4bdf-ec6d588d6...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/patch/15563/

Fifth revision (v5):

[V5,0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs and memory
Date: Sun, 18 Jun 2017 13:45:17 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 2CDAFBE57D
    for ; Mon, 19 Jun 2017 04:45:30 +1000
Message-Id: <2a7f2a4f-5885-d186-2baf-da72ce071...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/cover/27710/

[V5,2/2] powerpc/numa: Update CPU topology when VPHN enabled
Date: Sun, 18 Jun 2017 13:46:55 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 13FEBCE81B
    for ; Mon, 19 Jun 2017 04:47:11 +1000
Message-Id:

https://py3.patchwork.dja.id.au/patch/27711/
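The misthreading Stephen diagnoses above comes down to the standard mail-threading rule: a message is attached to its parent via its In-Reply-To header (or, failing that, the last entry in References), so a mail that omits both, like the "v1" patches did, can only stand alone. A small sketch with Python's standard `email` module (the message-ids here are made up for illustration):

```python
from email.message import EmailMessage


def find_parent_id(msg):
    """Return the message-id this mail claims as its parent, if any.

    In-Reply-To wins; otherwise the last entry of References is used,
    per the usual threading convention for RFC 5322 mail.
    """
    in_reply_to = msg.get('In-Reply-To')
    if in_reply_to:
        return in_reply_to.strip()
    references = msg.get('References', '').split()
    return references[-1] if references else None


cover = EmailMessage()
cover['Message-Id'] = '<cover@example.com>'

good_patch = EmailMessage()  # like the v5 2/2: headers present, threads fine
good_patch['Message-Id'] = '<patch-1@example.com>'
good_patch['In-Reply-To'] = '<cover@example.com>'

bare_patch = EmailMessage()  # like the "v1" patches: no threading headers
bare_patch['Message-Id'] = '<patch-2@example.com>'

print(find_parent_id(good_patch))  # <cover@example.com>
print(find_parent_id(bare_patch))  # None -> cannot be threaded
```

With no recoverable parent, the best any parser can do is what Patchwork did: leave the mail unthreaded.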
Re: v1 -> v2 migration tests
On Thu, 2017-06-22 at 09:05 +1000, Daniel Axtens wrote:
> Hi Stephen,
>
>> Thanks for doing this. I've tinkered with both instances and haven't
>> seen any serious issues. I'm most happy with the fact that the
>> performance of the web UI hasn't been altered (at least, from my brief
>> testing).
>>
>> I'm thinking we give this another day or two and we might be good to go?
>
> Russell picked up one thing that looked questionable:
> https://py3.patchwork.dja.id.au/project/lkml/list/?series===; q=powerpc%2Fnuma%3A+update==
>
> It looks like it's slightly misthreaded; I haven't had a chance to dig
> into it properly just yet. It could well be that the code is correct and
> the messages were just sent with weird headers.

I took a look at the v6 versions of these - it looks to be some messed-up
emailing and nothing more. There are two issues:

- He's resent a series with the same version on different dates.
- He's sent patch 2/2 of the first series as 1/2 mistakenly, then sent it
  as 2/2 (with a broken diff, for that matter) immediately after.

However, despite this, Patchwork seems to be managing just fine. There's
only one patch out of series, and that's the one that was mistakenly sent
as 1/2. I don't know why this didn't get lumped into the existing series,
but it's correct behavior so meh.

I've given a breakdown of the patches below, along with important
metadata, in case anyone reads differently into it.
Stephen

---

First series (v6):

[V6,0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs and memory
Date: Mon, 19 Jun 2017 17:09:20 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 25D50CE7E4
    for ; Tue, 20 Jun 2017 08:09:57 +1000
Message-Id: <081bb19d-3008-842d-6872-f51fccb94...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/cover/28644/

[V6,1/2] powerpc/hotplug: Ensure enough nodes avail for operations
Date: Mon, 19 Jun 2017 17:10:20 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 0821BCE7E4
    for ; Tue, 20 Jun 2017 08:11:11 +1000
Message-Id:
In-Reply-To: <081bb19d-3008-842d-6872-f51fccb94...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/patch/28647/

!!! Start broken patch

[V6,1/2] powerpc/numa: Update CPU topology when VPHN enabled
Date: Mon, 19 Jun 2017 17:10:29 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 10B4CCE820
    for ; Tue, 20 Jun 2017 08:11:12 +1000
Message-Id: <2b085918-be91-7654-9e1c-eacd9b8fa...@linux.vnet.ibm.com>
In-Reply-To: <081bb19d-3008-842d-6872-f51fccb94...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/patch/28648/

!!! End broken patch

[V6,2/2] powerpc/numa: Update CPU topology when VPHN enabled
Date: Mon, 19 Jun 2017 17:13:27 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id A1EABBE57D
    for ; Tue, 20 Jun 2017 08:13:41 +1000
Message-Id:
In-Reply-To: <081bb19d-3008-842d-6872-f51fccb94...@linux.vnet.ibm.com>

https://py3.patchwork.dja.id.au/patch/28649/

Second series (also v6, but a resend):

[V6,0/2] powerpc/dlpar: Correct display of hot-add/hot-remove CPUs and memory
Date: Tue, 20 Jun 2017 10:14:23 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id B7FB9CEB1B
    for ; Wed, 21 Jun 2017 01:15:39 +1000
Message-Id: b0e02344-965b-5298-4ef6-3ec67d120...@linux.vnet.ibm.com

https://py3.patchwork.dja.id.au/cover/29113/

[V6,1/2] powerpc/hotplug: Ensure enough nodes avail for operations
Date: Tue, 20 Jun 2017 10:14:57 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id E8EB2CEB1B
    for ; Wed, 21 Jun 2017 01:15:27 +1000
Message-Id:
In-Reply-To:

https://py3.patchwork.dja.id.au/patch/29112/

[V6,2/2] powerpc/numa: Update CPU topology when VPHN enabled
Date: Tue, 20 Jun 2017 10:15:00 -0500
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
    by patchwork.dja.id.au (Postfix) with ESMTP id 657ACCEB56
    for ; Wed, 21 Jun 2017 01:16:08 +1000
Message-Id:
Re: v1 -> v2 migration tests
Hi Stephen,

> Thanks for doing this. I've tinkered with both instances and haven't seen
> any serious issues. I'm most happy with the fact that the performance of
> the web UI hasn't been altered (at least, from my brief testing).
>
> I'm thinking we give this another day or two and we might be good to go?

Russell picked up one thing that looked questionable:
https://py3.patchwork.dja.id.au/project/lkml/list/?seriespowerpc%2Fnuma%3A+update==

It looks like it's slightly misthreaded; I haven't had a chance to dig into
it properly just yet. It could well be that the code is correct and the
messages were just sent with weird headers.

I'll probably get a moment to look at it in the next couple of days if you
don't get to it before then.

Regards,
Daniel
Re: v1 -> v2 migration tests
On Wed, 2017-06-21 at 00:16 +1000, Daniel Axtens wrote:
> Hi all,
>
> I am running tests of v1 to v2 migrations. The origin is
> v1.patchwork.dja.id.au and the destination is migrate.patchwork.dja.id.au.
>
> Every hour, data is synced from 'v1' to 'migrate' - emails are only
> ingested in v1.
>
> I haven't seen any _obvious_ issues, but please do check it out.
>
> Please continue to test things, including your API clients. (Obviously
> events won't work!) Users created in v1 should exist in migrate (after
> the hourly sync).

Thanks for doing this. I've tinkered with both instances and haven't seen
any serious issues. I'm most happy with the fact that the performance of
the web UI hasn't been altered (at least, from my brief testing).

I'm thinking we give this another day or two and we might be good to go?

Stephen