> > >> 100.00  2054  14.18  kernel_lock
> > >>  47.43   846   6.72  kernel_lock  fileassoc_file_delete+20
> > >>  23.73   188   3.36  kernel_lock  intr_biglock_wrapper+16
> > >>  16.01   203   2.27  kernel_lock  scsipi_adapter_request+63
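For context: the intr_biglock_wrapper entry above comes from the mechanism
that runs non-MPSAFE interrupt handlers with the global kernel_lock held.
A minimal sketch of that wrapper pattern, with simplified, hypothetical
names rather than the actual NetBSD source:

#include <sys/systm.h>	/* KERNEL_LOCK()/KERNEL_UNLOCK_ONE() */

struct wrapped_handler {
	int	(*wh_real)(void *);	/* the non-MPSAFE handler */
	void	*wh_arg;
};

static int
biglock_wrapper_sketch(void *v)
{
	struct wrapped_handler *wh = v;
	int rv;

	KERNEL_LOCK(1, NULL);	/* serialize with all other big-locked code */
	rv = (*wh->wh_real)(wh->wh_arg);
	KERNEL_UNLOCK_ONE(NULL);

	return rv;
}

Every interrupt dispatched this way spins if another CPU holds kernel_lock,
which is why the wrapper shows up as a top kernel_lock caller.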
> On Aug 9, 2018, at 10:40 AM, Thor Lancelot Simon wrote:
>
> Actually, I wonder if we could kill off the time spent by fileassoc. Is
> it still used only by veriexec? We can easily option that out of the build
> box kernels.
Indeed. And there are better ways to do what veriexec does, in
I would be interested to finish that off; I need to make some time to
get to doing it, though.
I have been sitting on some changes to veriexec for ~ years that
change it from locking everything to using reference counts and
condition variables, which removes some nasty hacks I did. I have not
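The refcount-plus-condvar scheme mentioned here is a standard kernel
pattern: readers take a reference under a short mutex hold, and a thread
that needs exclusive access waits on the condvar until the count drains,
instead of holding a lock across the whole operation. A minimal sketch
(hypothetical names; not the actual veriexec patch):

#include <sys/mutex.h>
#include <sys/condvar.h>

struct ve_entry {
	kmutex_t	ve_lock;	/* protects ve_refs */
	kcondvar_t	ve_cv;		/* signalled when ve_refs hits 0 */
	unsigned	ve_refs;
};
/* mutex_init()/cv_init() at entry creation omitted for brevity */

static void
ve_ref(struct ve_entry *ve)
{
	mutex_enter(&ve->ve_lock);
	ve->ve_refs++;
	mutex_exit(&ve->ve_lock);
}

static void
ve_unref(struct ve_entry *ve)
{
	mutex_enter(&ve->ve_lock);
	if (--ve->ve_refs == 0)
		cv_broadcast(&ve->ve_cv);
	mutex_exit(&ve->ve_lock);
}

/* Wait until no references remain, e.g. before tearing the entry down. */
static void
ve_drain(struct ve_entry *ve)
{
	mutex_enter(&ve->ve_lock);
	while (ve->ve_refs != 0)
		cv_wait(&ve->ve_cv, &ve->ve_lock);
	mutex_exit(&ve->ve_lock);
}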
Jason Thorpe wrote:
>
>
> > On Aug 9, 2018, at 10:40 AM, Thor Lancelot Simon wrote:
> >
> > Actually, I wonder if we could kill off the time spent by fileassoc. Is
> > it still used only by veriexec? We can easily option that out of the
> > build box kernels.
>
> Indeed. And there are
On Fri, Aug 10, 2018 at 12:29:49AM +0200, Joerg Sonnenberger wrote:
> On Thu, Aug 09, 2018 at 08:14:57PM +0200, Jaromír Doleček wrote:
> > 2018-08-09 19:40 GMT+02:00 Thor Lancelot Simon :
> > > On Thu, Aug 09, 2018 at 10:10:07AM +0200, Martin Husemann wrote:
> > >> 100.00  2054  14.18
On Thu, Aug 09, 2018 at 08:14:57PM +0200, Jaromír Doleček wrote:
> 2018-08-09 19:40 GMT+02:00 Thor Lancelot Simon :
> > On Thu, Aug 09, 2018 at 10:10:07AM +0200, Martin Husemann wrote:
> >> 100.00  2054  14.18  kernel_lock
> >>  47.43   846   6.72  kernel_lock
On Thu, Aug 09, 2018 at 10:10:07AM +0200, Martin Husemann wrote:
> With the patch applied:
>
> Elapsed time: 1564.93 seconds.
>
> -- Kernel lock spin
>
> Total%  Count  Time/ms  Lock         Caller
> ------  -----  -------  -----------  ------------------------
With the patch applied:
Elapsed time: 1564.93 seconds.
-- Kernel lock spin
Total%  Count  Time/ms  Lock         Caller
------  -----  -------  -----------  ------------------------
100.00   2054    14.18  kernel_lock
 47.43    846     6.72
> On Aug 6, 2018, at 12:17 PM, Rhialto wrote:
>
> 21.96  24626  82447.93  kernel_lock  ip_slowtimo+1a
The work that's happening here looks like a scalability nightmare, regardless
of whether it holds the kernel lock or not. A couple of things:
1- It should do a better job of determining
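The shape being criticized is a periodic "slow timeout" that walks a global
queue with kernel_lock held, so its cost grows with queue length and every
other big-locked CPU spins in the meantime. A rough sketch of that pattern
(hypothetical names and list; not the actual ip_slowtimo code):

#include <sys/systm.h>
#include <sys/queue.h>

struct frag {
	TAILQ_ENTRY(frag)	f_link;
	int			f_ttl;	/* remaining lifetime in ticks */
};
static TAILQ_HEAD(, frag) fragq = TAILQ_HEAD_INITIALIZER(fragq);

static void frag_free(struct frag *);	/* hypothetical helper */

/* Called periodically; the whole walk happens under kernel_lock. */
static void
slowtimo_sketch(void)
{
	struct frag *f, *next;

	KERNEL_LOCK(1, NULL);
	TAILQ_FOREACH_SAFE(f, &fragq, f_link, next) {
		if (--f->f_ttl <= 0) {
			TAILQ_REMOVE(&fragq, f, f_link);
			frag_free(f);
		}
	}
	KERNEL_UNLOCK_ONE(NULL);
}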
rhia...@falu.nl (Rhialto) writes:
>From the caller list, I get the impression that the statistics include
>(potentially) all system activity, not just work being done by/on behalf
>of the specified program. Otherwise I can't explain the network related
>names. All distfiles should have been
Martin Husemann wrote:
> So here is a more detailed analysis using flamegraphs:
>
> https://netbsd.org/~martin/bc-perf/
>
>
> All operations happen on tmpfs, and the locking there is a significant
> part of the game - however, lots of mostly idle time is wasted and it is
> not clear to
I happened to be rebuilding some packages on amd64/8.0 while I was
reading your article, so I thought I might as well measure something
myself.
My results are not directly comparable to yours. I build from local
rotating disk, not a ram disk. But I have 32 GB of RAM so there should
be a lot of
So here is a more detailed analysis using flamegraphs:
https://netbsd.org/~martin/bc-perf/
All operations happen on tmpfs, and the locking there is a significant
part of the game - however, lots of mostly idle time is wasted and it is
not clear to me what is going on.
Initial tests
So after more investigation, here is an update:
The effect depends highly on "too high" -j args to build.sh. As a stopgap
hack I reduced them on the build cluster and now the netbsd-7 builds are
slow, but not super slow (and all other builds are slightly slower).
I tested building of netbsd-7 on
On Fri, Jul 13, 2018 at 06:50:02AM -0400, Thor Lancelot Simon wrote:
> Only the master does much if any disk (actually SSD) I/O. It must be
> either the mpt driver or the scsi subsystem. At a _guess_, mpt? I
> don't have time to look right now.
I couldn't spot any special dma map allocations
On Fri, Jul 13, 2018 at 09:37:26AM +0200, Martin Husemann wrote:
>
> Do you happen to know why
>
> vmstat -e | fgrep "bus_dma bounces"
>
> shows a > 500 rate e.g. on b45? I never see a single bounce on any of my
> amd64 machines. The build slaves seem to do only a few of them though.
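A "bus_dma bounce" is counted when a buffer lies outside the physical range
the device can address, forcing bus_dma to copy the data through a low
"bounce" buffer on every transfer. A very rough sketch of the decision
(hypothetical names and limit; not the actual bus_dma code):

#include <stdint.h>
#include <stddef.h>

#define DMA_LIMIT	0xffffffffULL	/* e.g. a device with 32-bit DMA */

static uint64_t bounce_count;		/* the counter vmstat -e reports */

static void bounce_copy(uint64_t pa, size_t len);	/* hypothetical */

/* Returns nonzero if the transfer had to be bounced. */
static int
dma_map_sketch(uint64_t pa, size_t len)
{
	if (pa + len - 1 > DMA_LIMIT) {
		bounce_copy(pa, len);	/* stage via a buffer below the limit */
		bounce_count++;
		return 1;
	}
	return 0;			/* device can reach the buffer directly */
}

Seeing bounces on only some build slaves would then point at a device or
physical memory layout difference on those machines.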
On Fri, 13 Jul 2018, Martin Husemann wrote:
On Fri, Jul 13, 2018 at 05:55:17AM +0800, Paul Goyette wrote:
I have a system with (probably) enough RAM - amd64, 8/16 core/threads,
with 128GB, running 8.99.18 - to test if someone wants to provide an
explicit test scenario that can run on top of an
On Fri, Jul 13, 2018 at 05:55:17AM +0800, Paul Goyette wrote:
> I have a system with (probably) enough RAM - amd64, 8/16 core/threads,
> with 128GB, running 8.99.18 - to test if someone wants to provide an
> explicit test scenario that can run on top of an existing "production"
> environment.
If
On Fri, Jul 13, 2018 at 04:18:39AM +0700, Robert Elz wrote:
> But it is only really serious on the netbsd-7(-*) builds, -6 -8 and HEAD
> are not affected (or not nearly as much).
They are, but only by ~20%, while the -7 builds are > 400%.
My test builds on local hardware show +/-3% (I'd call
Date: Thu, 12 Jul 2018 15:18:08 -0400
From: Thor Lancelot Simon
Message-ID: <20180712191808.ga2...@panix.com>
| So if 8.0 has a serious tmpfs regression... we don't yet know.
If the most serious problem is something in the basic infrastructure
then it ought to be
On Thu, Jul 12, 2018 at 07:39:10PM +0200, Martin Husemann wrote:
> On Thu, Jul 12, 2018 at 12:30:39PM -0400, Thor Lancelot Simon wrote:
> > Are you running the builds from tmpfs to tmpfs, like the build cluster
> > does?
>
> No, I don't have enough ram to test it that way.
So if 8.0 has a
On Thu, Jul 12, 2018 at 12:30:39PM -0400, Thor Lancelot Simon wrote:
> Are you running the builds from tmpfs to tmpfs, like the build cluster
> does?
No, I don't have enough ram to test it that way.
Martin
On Thu, Jul 12, 2018 at 05:29:57PM +0200, Martin Husemann wrote:
> On Tue, Jul 10, 2018 at 11:01:01AM +0200, Martin Husemann wrote:
> > So stay tuned, maybe only Intel to blame ;-)
>
> Nope, that wasn't it.
>
> I failed to reproduce *any* slowdown on an AMD CPU locally; maya@ kindly
> repeated
On Tue, Jul 10, 2018 at 11:01:01AM +0200, Martin Husemann wrote:
> So stay tuned, maybe only Intel to blame ;-)
Nope, that wasn't it.
I failed to reproduce *any* slowdown on an AMD CPU locally; maya@ kindly
repeated the same test on an affected Intel CPU and also could see no
slowdown.
So this
On 11.07.2018 11:47, Takeshi Nakayama wrote:
Martin Husemann wrote
>
>>> Another observation is that grep(1) on one NetBSD server is
>>> significantly slower between the switch from -7 to 8RC1.
>>
>> Please file separate PRs for each (and maybe provide some input files
>> to reproduce the
>>> Martin Husemann wrote
> > Another observation is that grep(1) on one NetBSD server is
> > significantly slower between the switch from -7 to 8RC1.
>
> Please file separate PRs for each (and maybe provide some input files
> to reproduce the issue).
Already filed:
On 11.07.2018 09:09, Simon Burge wrote:
> Hi folks,
>
> Martin Husemann wrote:
>
>> On Tue, Jul 10, 2018 at 12:11:41PM +0200, Kamil Rytarowski wrote:
>>> After the switch from NetBSD-HEAD (version from 1 year ago) to 8.0RC2,
>>> the ld(1) linker has serious issues with linking Clang/LLVM single
Hi folks,
Martin Husemann wrote:
> On Tue, Jul 10, 2018 at 12:11:41PM +0200, Kamil Rytarowski wrote:
> > After the switch from NetBSD-HEAD (version from 1 year ago) to 8.0RC2,
> > the ld(1) linker has serious issues with linking Clang/LLVM single
> > libraries within 20 minutes. This causes
On Tue, Jul 10, 2018 at 12:11:41PM +0200, Kamil Rytarowski wrote:
> After the switch from NetBSD-HEAD (version from 1 year ago) to 8.0RC2,
> the ld(1) linker has serious issues with linking Clang/LLVM single
> libraries within 20 minutes. This causes frequent timeouts on the NetBSD
> buildbot in
On 10.07.2018 11:01, Martin Husemann wrote:
> On Fri, Jul 06, 2018 at 04:04:50PM +0200, Martin Husemann wrote:
>> I have no scientific data yet, but just noticed that build times on the
>> auto build cluster did rise very dramatically since it has been updated
>> to run NetBSD 8.0 RC2.
>>
>> Since
On 10/07/2018 11:01, Martin Husemann wrote:
On Fri, Jul 06, 2018 at 04:04:50PM +0200, Martin Husemann wrote:
I have no scientific data yet, but just noticed that build times on the
auto build cluster did rise very dramatically since it has been updated
to run NetBSD 8.0 RC2.
Since builds
On Fri, Jul 06, 2018 at 04:04:50PM +0200, Martin Husemann wrote:
> I have no scientific data yet, but just noticed that build times on the
> auto build cluster did rise very dramatically since it has been updated
> to run NetBSD 8.0 RC2.
>
> Since builds move around build slaves sometimes (not
On 06/07/2018 16:48, Martin Husemann wrote:
On Fri, Jul 06, 2018 at 04:40:48PM +0200, Maxime Villard wrote:
These are all successful builds of HEAD for alpha that happened after June 1:
What does that mean? Are you building something *on* an Alpha CPU, or are
you building the Alpha port