Re: [PATCH v11 00/26] Speculative page faults
On 05/11/2018 at 11:42, Balbir Singh wrote:
> On Thu, May 17, 2018 at 01:06:07PM +0200, Laurent Dufour wrote:
>> This is a port on kernel 4.17 of the work done by Peter Zijlstra to
>> handle page fault without holding the mm semaphore [1].
>>
>> The idea is to try to handle user space page faults without holding the
>> mmap_sem. This should allow better concurrency for massively threaded
>
> Question -- I presume mmap_sem (the rw_semaphore implementation tested
> against) was qrwlock?

I don't think so: this series doesn't change the mmap_sem definition, so it still is a 'struct rw_semaphore'.

Laurent.
Re: [PATCH v11 00/26] Speculative page faults
On Thu, May 17, 2018 at 01:06:07PM +0200, Laurent Dufour wrote:
> This is a port on kernel 4.17 of the work done by Peter Zijlstra to
> handle page fault without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded

Question -- I presume mmap_sem (the rw_semaphore implementation tested against) was qrwlock?

Balbir Singh.
RE: [PATCH v11 00/26] Speculative page faults
Hi Laurent,

I am sorry for replying to you so late. The previous LKP tests for this case were run on the same Intel Skylake 4S platform, but it has been under maintenance recently. So I moved to another test box to run the page_fault3 test case: an Intel Skylake 2S platform (nr_cpu: 104, memory: 64G).

I applied your patch to the SPF kernel (commit: a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12), then ran the 2 test cases below.

a) Turn on the SPF handler with the command below, then run the page_fault3-thp-always test.
   echo 1 > /proc/sys/vm/speculative_page_fault

b) Turn off the SPF handler with the command below, then run the page_fault3-thp-always test.
   echo 0 > /proc/sys/vm/speculative_page_fault

Every test was run 3 times to get the result and capture perf data. Here is the average result for will-it-scale.per_thread_ops:

                                                       SPF_turn_off   SPF_turn_on
 page_fault3-THP-Always.will-it-scale.per_thread_ops   31963          26285

Best regards,
Haiyan Song

From: owner-linux...@kvack.org [owner-linux...@kvack.org] on behalf of Laurent Dufour [lduf...@linux.vnet.ibm.com]
Sent: Wednesday, August 22, 2018 10:23 PM
To: Song, HaiyanX
Cc: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults

On 03/08/2018 08:36, Song, HaiyanX wrote:
> Hi Laurent,

Hi Haiyan,

Sorry for the late answer, I was off for a couple of days.

> Thanks for your analysis of the last perf results.
> You mentioned "the major difference at the head of the perf report is the
> 92% testcase which is weirdly not reported on the head side", which is a
> bug of 0-day, and it caused the item not to be counted in perf.
>
> I've triggered the tests page_fault2 and page_fault3 again, only with the
> thread mode of will-it-scale, on 0-day (on the same test box, every case
> tested 3 times).
> I checked that the perf reports don't have the problem mentioned above.
>
> I have compared them and found some items show a difference, such as the
> cases below:
>   page_fault2-thp-always: handle_mm_fault, base: 45.22%  head: 29.41%
>   page_fault3-thp-always: handle_mm_fault, base: 22.95%  head: 14.15%

This would mean that the system spends less time running handle_mm_fault() when SPF is in the picture in these 2 cases, which is good. This should lead to better results with the SPF series, and I can't find any values that are higher on the head side.

> So I attached the perf results to the mail again; could you have a look
> again to check the difference between the base and head commits?

I took a close look at all the perf results you sent, but I can't identify any major difference. However, the compiler optimization is getting rid of the handle_pte_fault() symbol on the base kernel, which adds complexity when checking the differences.

To get rid of that, I'm proposing that you apply the attached patch to the SPF kernel. This patch allows turning the SPF handler on/off through /proc/sys/vm/speculative_page_fault. This should ease the testing by limiting reboots and avoiding kernel symbol mismatches. Obviously there is still a small overhead due to the check, but it should not be noticeable.

With this patch applied you can simply run

	echo 1 > /proc/sys/vm/speculative_page_fault

to run a test with the speculative page fault handler activated, or run

	echo 0 > /proc/sys/vm/speculative_page_fault

to run a test without it.
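For what it's worth, the on/off runs described above could be wrapped in a small shell sketch. This is only an illustration: the run_case body is a placeholder (the real will-it-scale/LKP command line is not shown here), and SPF_CTL is parameterized so the sketch can be dry-run without root against a scratch file.

```shell
#!/bin/sh
# A/B driver for the sysctl knob added by the attached patch.
# SPF_CTL defaults to the real control file but can be overridden
# (e.g. with a temp file) for a dry run without root privileges.
SPF_CTL=${SPF_CTL:-/proc/sys/vm/speculative_page_fault}

spf_set() {
    # 1 = speculative page fault handler on, 0 = off.
    echo "$1" > "$SPF_CTL"
}

run_case() {
    # Placeholder for the actual benchmark run, e.g. the will-it-scale
    # page_fault3 thread-mode test driven by LKP (command line omitted).
    :
}

# Only touch the knob when it is actually writable (i.e. patched
# kernel, running as root); otherwise do nothing.
if [ -w "$SPF_CTL" ]; then
    for mode in 1 0; do
        spf_set "$mode"
        run_case
    done
fi
```

Both result sets are then collected from the same booted kernel, which is exactly the point of the sysctl: no reboot and no symbol mismatch between the SPF-on and SPF-off runs.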
I'm really sorry to be asking this again, but could you please run the test page_fault3_base_THP-Always with and without SPF and capture the perf output? I think we should focus on that test, which showed the biggest regression.

Thanks,
Laurent.

> Thanks,
> Haiyan Song
RE: [PATCH v11 00/26] Speculative page faults
Adding another 3 perf files.

From: Song, HaiyanX
Sent: Friday, August 03, 2018 2:36 PM
To: Laurent Dufour
Subject: RE: [PATCH v11 00/26] Speculative page faults
Re: [PATCH v11 00/26] Speculative page faults
On 13/07/2018 05:56, Song, HaiyanX wrote:
> Hi Laurent,

Hi Haiyan,

Thanks a lot for sharing these perf reports.

I looked at them closely, and I have to admit that I was not able to find a major difference between the base and the head reports, except that handle_pte_fault() is no longer in-lined in the head one.

As expected, __handle_speculative_fault() is never traced, since these tests are dealing with file mappings, which are not handled in the speculative way.

When running these tests, did you see a major difference in the tests' results between base and head?

From the number of cycles counted, the biggest difference is page_fault3 when run with THP enabled:

                               BASE            HEAD            Delta
 page_fault2_base_thp_never    1142252426747   1065866197589    -6.69%
 page_fault2_base_THP-Always   1124844374523   1076312228927    -4.31%
 page_fault3_base_thp_never    1099387298152   1134118402345    +3.16%
 page_fault3_base_THP-Always   1059370178101    853985561949   -19.39%

The very weird thing is the difference of the delta cycles reported between thp never and thp always, because the speculative way is aborted when checking the vma->ops field, which is the same in both cases, and the THP setting is never checked. So there is no code coverage difference, on the speculative path, between these 2 cases. This leads me to think that there are other interactions interfering with the measure.

Looking at the perf-profile_page_fault3_*_THP-Always reports, the major difference at the head of the perf report is the 92% testcase entry which is weirdly not reported on the head side:

 92.02%   22.33%  page_fault3_processes  [.] testcase
 92.02%  testcase

Then the base reports 37.67% for __do_page_fault() where the head reports 48.41%, but the only difference in this function, between base and head, is the call to handle_speculative_fault(). This is a macro checking the fault flags and mm->users and then calling __handle_speculative_fault() if needed. So this can't explain the difference, except if __handle_speculative_fault() is inlined in __do_page_fault().

Is this the case on your build?

Haiyan, do you still have the output of the test, to check those numbers too?

Cheers,
Laurent

> I attached the perf-profile.gz files for the cases page_fault2 and
> page_fault3. These files were captured while running the related test
> cases.
> Please help to check these data to see if they can help you find the
> higher change. Thanks.
>
> File name perf-profile_page_fault2_head_THP-Always.gz means the
> perf-profile result obtained from page_fault2 tested on the head commit
> (a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12) with the THP_always
> configuration.
>
> Best regards,
> Haiyan Song
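As an aside, the Delta column in the cycle table quoted in this message is simply the relative change between the BASE and HEAD counts. A small shell helper (hypothetical, not part of the LKP tooling) reproduces it from the raw numbers:

```shell
# Relative delta between BASE and HEAD cycle counts, printed with the
# same two-decimal precision as the table in the thread.
delta() {
    awk -v base="$1" -v head="$2" \
        'BEGIN { printf "%.2f%%\n", (head - base) / base * 100 }'
}

delta 1142252426747 1065866197589   # page_fault2_base_thp_never  -> -6.69%
delta 1059370178101 853985561949    # page_fault3_base_THP-Always -> -19.39%
```

Running it against the page_fault2 and page_fault3 counts above reproduces the -6.69% and -19.39% figures, confirming Delta = (HEAD - BASE) / BASE.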
Re: [PATCH v11 00/26] Speculative page faults
Hi Haiyan,

Did you get a chance to capture some performance cycles on your system?
I still can't get these numbers on my hardware.

Thanks,
Laurent.
Re: [PATCH v11 00/26] Speculative page faults
On 04/07/2018 05:23, Song, HaiyanX wrote:
> Hi Laurent,
>
> For the test result on the Intel 4S Skylake platform (192 CPUs, 768G
> memory), the below test cases were all run 3 times.
> I checked the test results: only page_fault3_thread/enable THP has a 6%
> stddev for the head commit; the other tests have lower stddev.

Repeating the test only 3 times seems a bit too low to me.

I'll focus on the biggest change for the moment, but I don't have access to such hardware.

Is it possible to provide a diff of the performance cycles measured between base and SPF when running page_fault3 and page_fault2, when the 20% change is detected?

Please stay focused on the test case process, to see exactly where the series is impacting.

Thanks,
Laurent.
RE: [PATCH v11 00/26] Speculative page faults
Hi Laurent,

For the test result on the Intel 4S Skylake platform (192 CPUs, 768G memory), the below test cases were all run 3 times.
I checked the test results: only page_fault3_thread/enable THP has a 6% stddev for the head commit; the other tests have lower stddev.

And I did not find any other high variation in the test case results.

a). Enable THP

 testcase                     base     stddev   change   head     stddev   metric
 page_fault3/enable THP       10519    ± 3%     -20.5%   8368     ± 6%     will-it-scale.per_thread_ops
 page_fault2/enable THP       8281     ± 2%     -18.8%   6728              will-it-scale.per_thread_ops
 brk1/enable THP              998475            -2.2%    976893            will-it-scale.per_process_ops
 context_switch1/enable THP   223910            -1.3%    220930            will-it-scale.per_process_ops
 context_switch1/enable THP   233722            -1.0%    231288            will-it-scale.per_thread_ops

b). Disable THP

 page_fault3/disable THP      10856             -23.1%   8344              will-it-scale.per_thread_ops
 page_fault2/disable THP      8147              -18.8%   6613              will-it-scale.per_thread_ops
 brk1/disable THP             957               -7.9%    881               will-it-scale.per_thread_ops
 context_switch1/disable THP  237006            -2.2%    231907            will-it-scale.per_thread_ops
 brk1/disable THP             997317            -2.0%    98                will-it-scale.per_process_ops
 page_fault3/disable THP      467454            -1.8%    459251            will-it-scale.per_process_ops
 context_switch1/disable THP  224431            -1.3%    221567            will-it-scale.per_process_ops

Best regards,
Haiyan Song

From: Laurent Dufour [lduf...@linux.vnet.ibm.com]
Sent: Monday, July 02, 2018 4:59 PM
To: Song, HaiyanX
Subject: Re: [PATCH v11 00/26] Speculative page faults
Re: [PATCH v11 00/26] Speculative page faults
On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions
> were found by LKP-tools (linux kernel performance) tested on the Intel 4S
> Skylake platform. This time we only tested the cases which had been run
> and found regressions on the v9 patch series.
>
> The regression results are sorted by the metric will-it-scale.per_thread_ops.
>
> branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
> commit id:
>   head commit: a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
>   base commit: ba98a1cdad71d259a194461b3a61471b49b14df1
> Benchmark: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master
>
> Metrics:
>   will-it-scale.per_process_ops = processes / nr_cpu
>   will-it-scale.per_thread_ops = threads / nr_cpu
> test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
> THP: enable / disable
> nr_task: 100%
>
> 1. Regressions:
>
> a). Enable THP
>  testcase                     base     change   head     metric
>  page_fault3/enable THP       10519    -20.5%   836      will-it-scale.per_thread_ops
>  page_fault2/enable THP       8281     -18.8%   6728     will-it-scale.per_thread_ops
>  brk1/enable THP              998475   -2.2%    976893   will-it-scale.per_process_ops
>  context_switch1/enable THP   223910   -1.3%    220930   will-it-scale.per_process_ops
>  context_switch1/enable THP   233722   -1.0%    231288   will-it-scale.per_thread_ops
>
> b). Disable THP
>  page_fault3/disable THP      10856    -23.1%   8344     will-it-scale.per_thread_ops
>  page_fault2/disable THP      8147     -18.8%   6613     will-it-scale.per_thread_ops
>  brk1/disable THP             957      -7.9%    881      will-it-scale.per_thread_ops
>  context_switch1/disable THP  237006   -2.2%    231907   will-it-scale.per_thread_ops
>  brk1/disable THP             997317   -2.0%    98       will-it-scale.per_process_ops
>  page_fault3/disable THP      467454   -1.8%    459251   will-it-scale.per_process_ops
>  context_switch1/disable THP  224431   -1.3%    221567   will-it-scale.per_process_ops
>
> Notes: for the above values of the test results, higher is better.
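Since the per_thread_ops figures in this thread are averages over repeated runs, their spread matters as much as the mean. A small sketch (the run values here are hypothetical, not taken from the reports) of how a per-run series could be reduced to the mean and relative stddev that LKP flags; note it uses the population stddev, which may differ from LKP's exact formula:

```shell
# Reduce one number per line on stdin to "mean=M stddev=P%",
# where P is the population standard deviation relative to the mean.
stats() {
    awk '{ n++; s += $1; ss += $1 * $1 }
         END { mean = s / n;
               sd = sqrt(ss / n - mean * mean);
               printf "mean=%.0f stddev=%.1f%%\n", mean, sd / mean * 100 }'
}

# Three hypothetical per-run throughputs for one test case.
printf '%s\n' 10519 10200 10850 | stats   # mean=10523 stddev=2.5%
```

A relative stddev of a few percent, as in this sketch, is on the same order as the smaller changes reported above, which is why the number of repetitions matters.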
I tried the same tests on my PowerPC victim VM (1024 CPUs, 11TB) and I can't get reproducible results. The results show huge variation, even on the vanilla kernel, so I can't draw any conclusion from them.

I tried on a smaller node (80 CPUs, 32G), and the tests ran better, but I didn't measure any change between the vanilla and the SPF-patched kernels:

test (THP enabled)          4.17.0-rc4-mm1   spf          delta
page_fault3_threads         2697.7           2683.5       -0.53%
page_fault2_threads         170660.6         169574.1     -0.64%
context_switch1_threads     6915269.2        6877507.3    -0.55%
context_switch1_processes   6478076.2        6529493.5    +0.79%
brk1                        243391.2         238527.5     -2.00%

Tests were run 10 times; no high variation was detected. Did you see high variation on your side? How many times were the tests run to compute the average values?

Thanks,
Laurent.

> 2. Improvements: no improvement found based on the selected test cases.
>
> Best regards
> Haiyan Song
>
> From: owner-linux...@kvack.org [owner-linux...@kvack.org] on behalf of Laurent Dufour [lduf...@linux.vnet.ibm.com]
> Sent: Monday, May 28, 2018 4:54 PM
> To: Song, HaiyanX
> Cc: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org
> Subject: Re: [PATCH v11 00/26] Speculative page faults
>
> On 28/05/2018 10:22, Haiyan Song wrote:
>> Hi Laurent,
>>
>> Yes, these tests are done on V9 patch.
>
> Do you plan to give this V11 a run ?
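The reproducibility check Laurent describes above (averaging 10 runs and watching for high variation) can be sketched as follows. This is a minimal illustration, not part of will-it-scale or LKP, and the sample numbers are hypothetical:

```python
import statistics

def summarize(runs):
    """Return (mean, coefficient of variation) for a list of per-run scores.

    A high CV (stddev/mean) means the runs are too noisy to compare
    kernels on; a low CV makes the averaged delta meaningful.
    """
    mean = statistics.mean(runs)
    cv = statistics.stdev(runs) / mean
    return mean, cv

# Hypothetical page_fault3_threads scores over 10 runs.
runs = [2697.7, 2701.2, 2695.4, 2699.8, 2694.1,
        2703.0, 2696.5, 2698.9, 2700.4, 2693.6]
mean, cv = summarize(runs)
print(f"mean={mean:.1f}, cv={cv:.3%}")
```

A CV of a few percent or less roughly corresponds to "no high variation" in Laurent's sense; on the large VM where variation was huge even on vanilla, the kernel-to-kernel deltas are smaller than the noise and no conclusion can be drawn.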
Re: [PATCH v11 00/26] Speculative page faults
Hi Haiyan,

I don't have access to the same hardware you ran the tests on, but I gave those tests a try on a Power8 system (2 sockets, 5 cores/socket, 8 threads/core, 80 CPUs, 32G). I ran each will-it-scale test 10 times and computed the average.

test (THP enabled)          4.17.0-rc4-mm1   spf          delta
page_fault3_threads         2697.7           2683.5       -0.53%
page_fault2_threads         170660.6         169574.1     -0.64%
context_switch1_threads     6915269.2        6877507.3    -0.55%
context_switch1_processes   6478076.2        6529493.5    +0.79%
brk1                        243391.2         238527.5     -2.00%

Tests were launched with the arguments '-t 80 -s 5'; only the average report is taken into account. Note that the page size is 64K by default on ppc64.

It would be nice if you could capture some perf data to figure out why page_fault2/3 are showing such a performance regression.

Thanks,
Laurent.

On 11/06/2018 09:49, Song, HaiyanX wrote:
> Hi Laurent,
>
> Regression tests for the v11 patch series have been run; some regressions were
> found by LKP-tools (linux kernel performance) tested on an Intel 4s Skylake
> platform. This time only the cases which had been run and shown regressions on
> the v9 patch series were tested.
>
> [...]
>
> Notes: for the above test result values, higher is better.
>
> 2. Improvements: no improvement found based on the selected test cases.
> > > Best regards > Haiyan Song > > From: owner-linux...@kvack.org [owner-linux...@kvack.org] on behalf of > Laurent Dufour [lduf...@linux.vnet.ibm.com] > Sent: Monday, May 28, 2018 4:54 PM > To: Song, HaiyanX > Cc: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; > kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; > Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; > b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas > Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; > sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; > Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; > Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; > linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; > npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim > Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org > Subject: Re: [PATCH v11 00/26] Speculative page faults > > On 28/05/2018 10:22, Haiyan Song wrote: >> Hi Laurent, >> >> Yes, these tests are done on V9 patch. > > Do you plan to give this V11 a run ? > >>
RE: [PATCH v11 00/26] Speculative page faults
Hi Laurent,

Regression tests for the v11 patch series have been run; some regressions were found by LKP-tools (linux kernel performance) tested on an Intel 4s Skylake platform. This time only the cases which had been run and shown regressions on the v9 patch series were tested.

The regression results are sorted by the metric will-it-scale.per_thread_ops.
branch: Laurent-Dufour/Speculative-page-faults/20180520-045126
commit id:
  head commit : a7a8993bfe3ccb54ad468b9f1799649e4ad1ff12
  base commit : ba98a1cdad71d259a194461b3a61471b49b14df1
Benchmark: will-it-scale
Download link: https://github.com/antonblanchard/will-it-scale/tree/master

Metrics:
  will-it-scale.per_process_ops = processes/nr_cpu
  will-it-scale.per_thread_ops  = threads/nr_cpu
test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
THP: enable / disable
nr_task: 100%

1. Regressions:

a) Enable THP
   testcase                     base     change   head     metric
   page_fault3/enable THP       10519    -20.5%   836      will-it-scale.per_thread_ops
   page_fault2/enable THP       8281     -18.8%   6728     will-it-scale.per_thread_ops
   brk1/enable THP              998475   -2.2%    976893   will-it-scale.per_process_ops
   context_switch1/enable THP   223910   -1.3%    220930   will-it-scale.per_process_ops
   context_switch1/enable THP   233722   -1.0%    231288   will-it-scale.per_thread_ops

b) Disable THP
   page_fault3/disable THP      10856    -23.1%   8344     will-it-scale.per_thread_ops
   page_fault2/disable THP      8147     -18.8%   6613     will-it-scale.per_thread_ops
   brk1/disable THP             957      -7.9%    881      will-it-scale.per_thread_ops
   context_switch1/disable THP  237006   -2.2%    231907   will-it-scale.per_thread_ops
   brk1/disable THP             997317   -2.0%    98       will-it-scale.per_process_ops
   page_fault3/disable THP      467454   -1.8%    459251   will-it-scale.per_process_ops
   context_switch1/disable THP  224431   -1.3%    221567   will-it-scale.per_process_ops

Notes: for the above test result values, higher is better.

2. Improvements: no improvement found based on the selected test cases.
Best regards,
Haiyan Song

From: owner-linux...@kvack.org [owner-linux...@kvack.org] on behalf of Laurent Dufour [lduf...@linux.vnet.ibm.com]
Sent: Monday, May 28, 2018 4:54 PM
To: Song, HaiyanX
Cc: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults

On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.

Do you plan to give this V11 a run ?

> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>> Commit id:
>>>   base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>   head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>>   will-it-scale.per_process_ops = processes/nr_cpu
>>>   will-it-scale.per_thread_ops  = threads/nr_cpu
>>> test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
RE: [PATCH v11 00/26] Speculative page faults
A full run would take one or two weeks, depending on our resource availability. Could you pick some of them up, e.g. those showing a performance regression?

-Original Message-
From: owner-linux...@kvack.org [mailto:owner-linux...@kvack.org] On Behalf Of Laurent Dufour
Sent: Monday, May 28, 2018 4:55 PM
To: Song, HaiyanX
Cc: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi; linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org
Subject: Re: [PATCH v11 00/26] Speculative page faults

On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.

Do you plan to give this V11 a run ?

> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>> Commit id:
>>>   base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>   head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>>   will-it-scale.per_process_ops = processes/nr_cpu
>>>   will-it-scale.per_thread_ops  = threads/nr_cpu
>>> test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> [...]
Re: [PATCH v11 00/26] Speculative page faults
On 28/05/2018 10:22, Haiyan Song wrote:
> Hi Laurent,
>
> Yes, these tests are done on V9 patch.

Do you plan to give this V11 a run ?

> Best regards,
> Haiyan Song
>
> On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
>> On 28/05/2018 07:23, Song, HaiyanX wrote:
>>>
>>> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.
>>
>> Hi,
>>
>> Thanks for reporting these benchmark results, but you mentioned the "V9 patch series" while responding to the v11 header series...
>> Were these tests done on v9 or v11 ?
>>
>> Cheers,
>> Laurent.
>>
>>> The regression result is sorted by the metric will-it-scale.per_thread_ops.
>>> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
>>> Commit id:
>>>   base commit: d55f34411b1b126429a823d06c3124c16283231f
>>>   head commit: 0355322b3577eeab7669066df42c550a56801110
>>> Benchmark suite: will-it-scale
>>> Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
>>> Metrics:
>>>   will-it-scale.per_process_ops = processes/nr_cpu
>>>   will-it-scale.per_thread_ops  = threads/nr_cpu
>>> test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
>>> THP: enable / disable
>>> nr_task: 100%
>>>
>>> [...]
>>>
>>> Notes: for the above values in the "change" column, a higher value means that the related testcase result on the head commit is better than on the base commit for this benchmark.
>>>
>>> Best regards
>>> Haiyan Song
Re: [PATCH v11 00/26] Speculative page faults
Hi Laurent,

Yes, these tests are done on the V9 patch.

Best regards,
Haiyan Song

On Mon, May 28, 2018 at 09:51:34AM +0200, Laurent Dufour wrote:
> On 28/05/2018 07:23, Song, HaiyanX wrote:
> >
> > Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.
>
> Hi,
>
> Thanks for reporting these benchmark results, but you mentioned the "V9 patch series" while responding to the v11 header series...
> Were these tests done on v9 or v11 ?
>
> Cheers,
> Laurent.
>
> > The regression result is sorted by the metric will-it-scale.per_thread_ops.
> > Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> > Commit id:
> >   base commit: d55f34411b1b126429a823d06c3124c16283231f
> >   head commit: 0355322b3577eeab7669066df42c550a56801110
> > Benchmark suite: will-it-scale
> > Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
> > Metrics:
> >   will-it-scale.per_process_ops = processes/nr_cpu
> >   will-it-scale.per_thread_ops  = threads/nr_cpu
> > test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
> > THP: enable / disable
> > nr_task: 100%
> >
> > [...]
> >
> > Notes: for the above values in the "change" column, a higher value means that the related testcase result on the head commit is better than on the base commit for this benchmark.
> >
> > Best regards
> > Haiyan Song
Re: [PATCH v11 00/26] Speculative page faults
On 28/05/2018 07:23, Song, HaiyanX wrote:
>
> Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.

Hi,

Thanks for reporting these benchmark results, but you mentioned the "V9 patch series" while responding to the v11 header series...
Were these tests done on v9 or v11 ?

Cheers,
Laurent.

> The regression result is sorted by the metric will-it-scale.per_thread_ops.
> Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
> Commit id:
>   base commit: d55f34411b1b126429a823d06c3124c16283231f
>   head commit: 0355322b3577eeab7669066df42c550a56801110
> Benchmark suite: will-it-scale
> Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
> Metrics:
>   will-it-scale.per_process_ops = processes/nr_cpu
>   will-it-scale.per_thread_ops  = threads/nr_cpu
> test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
> THP: enable / disable
> nr_task: 100%
>
> [...]
>
> Notes: for the above values in the "change" column, a higher value means that the related testcase result on the head commit is better than on the base commit for this benchmark.
>
> Best regards
> Haiyan Song
RE: [PATCH v11 00/26] Speculative page faults
Some regressions and improvements were found by LKP-tools (linux kernel performance) on the V9 patch series, tested on an Intel 4s Skylake platform.

The regression result is sorted by the metric will-it-scale.per_thread_ops.
Branch: Laurent-Dufour/Speculative-page-faults/20180316-151833 (V9 patch series)
Commit id:
  base commit: d55f34411b1b126429a823d06c3124c16283231f
  head commit: 0355322b3577eeab7669066df42c550a56801110
Benchmark suite: will-it-scale
Download link: https://github.com/antonblanchard/will-it-scale/tree/master/tests
Metrics:
  will-it-scale.per_process_ops = processes/nr_cpu
  will-it-scale.per_thread_ops  = threads/nr_cpu
test box: lkp-skl-4sp1 (nr_cpu=192, memory=768G)
THP: enable / disable
nr_task: 100%

1. Regressions:

a) THP enabled:
   testcase                      base     change    head     metric
   page_fault3/ enable THP       10092    -17.5%    8323     will-it-scale.per_thread_ops
   page_fault2/ enable THP       8300     -17.2%    6869     will-it-scale.per_thread_ops
   brk1/ enable THP              957.67   -7.6%     885      will-it-scale.per_thread_ops
   page_fault3/ enable THP       172821   -5.3%     163692   will-it-scale.per_process_ops
   signal1/ enable THP           9125     -3.2%     8834     will-it-scale.per_process_ops

b) THP disabled:
   testcase                      base     change    head     metric
   page_fault3/ disable THP      10107    -19.1%    8180     will-it-scale.per_thread_ops
   page_fault2/ disable THP      8432     -17.8%    6931     will-it-scale.per_thread_ops
   context_switch1/ disable THP  215389   -6.8%     200776   will-it-scale.per_thread_ops
   brk1/ disable THP             939.67   -6.6%     877.33   will-it-scale.per_thread_ops
   page_fault3/ disable THP      173145   -4.7%     165064   will-it-scale.per_process_ops
   signal1/ disable THP          9162     -3.9%     8802     will-it-scale.per_process_ops

2. Improvements:

a) THP enabled:
   testcase                      base     change    head     metric
   malloc1/ enable THP           66.33    +469.8%   383.67   will-it-scale.per_thread_ops
   writeseek3/ enable THP        2531     +4.5%     2646     will-it-scale.per_thread_ops
   signal1/ enable THP           989.33   +2.8%     1016     will-it-scale.per_thread_ops

b) THP disabled:
   testcase                      base     change    head     metric
   malloc1/ disable THP          90.33    +417.3%   467.33   will-it-scale.per_thread_ops
   read2/ disable THP            58934    +39.2%    82060    will-it-scale.per_thread_ops
   page_fault1/ disable THP      8607     +36.4%    11736    will-it-scale.per_thread_ops
   read1/ disable THP            314063   +12.7%    353934   will-it-scale.per_thread_ops
   writeseek3/ disable THP       2452     +12.5%    2759     will-it-scale.per_thread_ops
   signal1/ disable THP          971.33   +5.5%     1024     will-it-scale.per_thread_ops

Notes: for the above values in the "change" column, a higher value means that the related testcase result on the head commit is better than on the base commit for this benchmark.

Best regards
Haiyan Song

From: owner-linux...@kvack.org [owner-linux...@kvack.org] on behalf of Laurent Dufour [lduf...@linux.vnet.ibm.com]
Sent: Thursday, May 17, 2018 7:06 PM
To: a...@linux-foundation.org; mho...@kernel.org; pet...@infradead.org; kir...@shutemov.name; a...@linux.intel.com; d...@stgolabs.net; j...@suse.cz; Matthew Wilcox; khand...@linux.vnet.ibm.com; aneesh.ku...@linux.vnet.ibm.com; b...@kernel.crashing.org; m...@ellerman.id.au; pau...@samba.org; Thomas Gleixner; Ingo Molnar; h...@zytor.com; Will Deacon; Sergey Senozhatsky; sergey.senozhatsky.w...@gmail.com; Andrea Arcangeli; Alexei Starovoitov; Wang, Kemi; Daniel Jordan; David Rientjes; Jerome Glisse; Ganesh Mahendran; Minchan Kim; Punit Agrawal; vinayak menon; Yang Shi
Cc: linux-ker...@vger.kernel.org; linux...@kvack.org; ha...@linux.vnet.ibm.com; npig...@gmail.com; bsinghar...@gmail.com; paul...@linux.vnet.ibm.com; Tim Chen; linuxppc-dev@lists.ozlabs.org; x...@kernel.org
Subject: [PATCH v11 00/26] Speculative page faults

This is a port on kernel 4.17 of the work done by Peter Zijlstra to handle page faults without holding the mm semaphore [1].

The idea is to try to handle user space page faults without holding the mmap_sem. This should allow better concurrency for massively threaded processes, since the page fault handler will not wait for other threads' memory layout changes to be done, assuming that the change is done in another part of the process's memory space. This type of page fault is named speculative page fault.
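The "change" columns in the will-it-scale reports throughout this thread are the relative delta of the head (patched) result versus the base commit. A minimal sketch of that computation (the helper name is mine, not LKP's):

```python
def pct_change(base, head):
    """Relative change of head vs base, in percent, as shown in the
    'change' column of the will-it-scale reports (negative = regression)."""
    return (head - base) / base * 100.0

# page_fault3 / THP enabled, per-thread ops (numbers from the v9 report).
print(f"{pct_change(10092, 8323):+.1f}%")  # → -17.5%
```

Since higher ops/s is better, a negative change is a regression; the -17.5% here matches the page_fault3 per-thread entry in the v9 THP-enabled table.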