Re: stable/11 r329462 - Meltdown/Spectre MFC questions

2018-02-17 Thread Kevin Oberman
On Sat, Feb 17, 2018 at 12:38 PM, Pete French wrote:

> On 17/02/2018 20:19, Matt Smith wrote:
>
>> And thank you for pointing this out. I can now just wait a while to see
>> what comes along rather than accidentally upgrading it and killing the
>> already really slow performance.
>
> I was just looking at this too, and wondering what (if any) the
> performance impact is on FreeBSD. I had a quick google to see if I could
> find anything on current@ about it, but no luck. Anyone done any
> measurements?
>
> -pete.


Well, I would also like a bit of information on this. When I get a moment,
I'll see if I can find anything about it in the current@ archives.

It looks like these are all loader tunables, so they need to be set in
/boot/loader.conf and can only be changed at boot, with the exception of
hw.ibrs_disable, which can be set either at boot or at runtime via sysctl.
That looks like the one to twiddle to check the impact.
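
Something along these lines ought to work for a quick before/after test
(untested on my side, so treat it as a sketch):

  # show the current mitigation state
  sysctl hw.ibrs_active hw.ibrs_disable
  # turn IBRS off, run a benchmark, then turn it back on
  sysctl hw.ibrs_disable=1
  # (run benchmark here)
  sysctl hw.ibrs_disable=0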

I'm currently updating my stable system as this hit the tree after my
morning updates.
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkober...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683


Re: stable/11 r329462 - Meltdown/Spectre MFC questions

2018-02-17 Thread Pete French



On 17/02/2018 20:19, Matt Smith wrote:

> And thank you for pointing this out. I can now just wait a while to see
> what comes along rather than accidentally upgrading it and killing the
> already really slow performance.


I was just looking at this too, and wondering what (if any) the
performance impact is on FreeBSD. I had a quick google to see if I could
find anything on current@ about it, but no luck. Anyone done any
measurements?


-pete.


Re: stable/11 r329462 - Meltdown/Spectre MFC questions

2018-02-17 Thread Matt Smith

On Feb 17 11:47, Jeremy Chadwick wrote:

> Reference: https://svnweb.freebsd.org/base?view=revision&revision=329462
>
> Do the following new loader tunables and sysctls have documentation
> anywhere?  I ask because I wish to know how to turn all of this off (yes
> you heard me correctly), as not all systems necessarily require
> mitigation of these flaws.



+1. I have an Intel Atom D525 "Pineview", which I'm led to believe
doesn't have these flaws, so unless that's detected and the mitigation
disabled automatically, I too would like documentation on how to view
the current status, and disable it as required.
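
Going by the names in Jeremy's mail, something like this should at least
show the current state of both mitigations (untested):

  sysctl vm.pmap.pti hw.ibrs_active hw.ibrs_disable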


And thank you for pointing this out. I can now just wait a while to see 
what comes along rather than accidentally upgrading it and killing the 
already really slow performance.


--
Matt


stable/11 r329462 - Meltdown/Spectre MFC questions

2018-02-17 Thread Jeremy Chadwick
Reference: https://svnweb.freebsd.org/base?view=revision&revision=329462

Do the following new loader tunables and sysctls have documentation
anywhere?  I ask because I wish to know how to turn all of this off (yes
you heard me correctly), as not all systems necessarily require
mitigation of these flaws.

Best I can tell from skimming source:

vm.pmap.pti
  - Description: Page Table Isolation enabled
  - Loader tunable, visible in sysctl (read-only)
  - Integer
  - Default value: depends on CPU model and capabilities, see
function pti_get_default(); looks like AMD = 0, any CPU with
RDCL_NO capability enabled = 0, else 1

hw.ibrs_active
  - Description: Indirect Branch Restricted Speculation active
  - sysctl (read-only)
  - Integer
  - Real-time indicator of whether IBRS is currently on or off

hw.ibrs_disable 
  - Description: Disable Indirect Branch Restricted Speculation
  - Loader tunable and sysctl tunable (read-write)
  - Integer
  - Default value: unsure.  Variable declaration has 1 but
SYSCTL_PROC() macro has 0.
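
If I'm reading the source correctly, something like the following in
/boot/loader.conf should turn all of this off; this is a guess on my
part, so corrections welcome:

  # disable the Meltdown (PTI) and Spectre V2 (IBRS) mitigations (untested)
  vm.pmap.pti="0"
  hw.ibrs_disable="1"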

Thank you.

-- 
| Jeremy Chadwick   j...@koitsu.org |
| UNIX Systems Administratorhttp://jdc.koitsu.org/ |
| Making life hard for others since 1977. PGP 4BD6C0CB |



Re: package building performance (was: Re: FreeBSD on AMD Epyc boards)

2018-02-17 Thread Rainer Duffner


> On 17.02.2018 at 10:09, Don Lewis wrote:
> 
> It is unfortunate that there don't seem to be any server-grade Ryzen
> motherboards.  They all seem to be gamer boards with a lot of
> unnecessary bling.



That’s because few people use servers to build packages.

Increasingly, all the other parts of a server are becoming important
(fast memory, fast networking, fast I/O), and because everything else is
expensive, it's simply not economical to skimp on the CPU when everything
else (SSDs, 40G switch ports, rack space, etc.) costs the same regardless
of which CPU you pick.




Re: package building performance (was: Re: FreeBSD on AMD Epyc boards)

2018-02-17 Thread Don Lewis
On 14 Feb, Mark Linimon wrote:
> On Wed, Feb 14, 2018 at 09:15:53AM +0100, Kurt Jaeger wrote:
>> On the plus side: 16+16 cores; on the minus side: a low CPU clock of
>> 2.2 GHz.  Would a box like this be better for a package build host
>> than 4+4 cores at 3.x GHz?
> 
> In my experience, "it depends".
> 
> I think that above a certain number of cores, I/O will dominate.  I _think_;
> I have never done any metrics on any of this.
> 
> The dominant term of the equation is, as you might guess, RAM.  Previous
> experience suggests that you need at least 2GB per build.  By default,
> nbuilds is set equal to ncores.  Less than 2GB-per and you're going to be
> unhappy.
> 
> (It's true that for modern systems, where large amounts of RAM are standard,
> this is probably no longer a concern.)
> 
> Put it this way: with 4 cores and 16GB and netbooting (7GB of which was
> devoted to md(4)), I was having lots of problems on powerpc64.  The same
> machine with 64GB gives me no problems.
> 
> My guess is that after RAM, there is I/O, ncores, and speed.  But I'm just
> speculating.
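
A side note on the nbuilds-equals-ncores default Mark mentions: the
builder count can be capped explicitly; a sketch, assuming the stock
poudriere.conf location:

  # /usr/local/etc/poudriere.conf
  PARALLEL_JOBS=4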

I've been configuring 4 GB per builder, so on my 8-core 16-thread Ryzen
machine, that means 64 GB of RAM.  I also set USE_TMPFS to "wrkdir data
localbase" in poudriere.conf, so I'm leaning pretty heavily on RAM.  I do
figure that zfs clone is more efficient than tmpfs for the builder
jails.  With this configuration, building my default set of ports is
pretty much CPU-bound.  When it starts building the larger ports
that need a lot of space for WRKDIR, like openoffice-4,
openoffice-devel, libreoffice, chromium, etc., the machine does end up
using a lot of swap space, but it is mostly dead data from the wrkdirs,
so generally there isn't a lot of paging activity.  I also have
ALLOW_MAKE_JOBS=yes to load up the CPUs a bit more, though I did get
the best results with MAKE_JOBS_NUMBER=7 building my default port set on
this machine.  The hard drive is a fairly old WD Green that I removed
from one of my other machines, and it is plenty fast enough to keep CPU
idle % at or near zero most of the time during the build run.
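
For reference, the relevant knobs look roughly like this (paraphrased,
and the file locations assume a default poudriere install):

  # /usr/local/etc/poudriere.conf
  USE_TMPFS="wrkdir data localbase"

  # /usr/local/etc/poudriere.d/make.conf
  ALLOW_MAKE_JOBS=yes
  MAKE_JOBS_NUMBER=7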

I did just try out "poudriere bulk -a" on this machine to build ports
for 11.1-RELEASE amd64 and got these results:

[111amd64-default] [2018-02-14_23h40m24s] [committing:] Queued: 29787
Built: 29277 Failed: 59 Skipped: 112 Ignored: 339 Tobuild: 0 Time: 47:39:48

I did notice some periods of high idle CPU during this run, but a lot
of that was due to a bunch of the builders being in the fetch state at
the same time.  Without that, the runtime would have been lower.  On the
other hand, some ports failed due to a gmake issue, and others looked
like they failed due to having problems with ALLOW_MAKE_JOBS=yes.  The
runtime would have been higher without those problems.
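
For anyone wanting to reproduce this, the jail setup and invocation were
essentially the following (reconstructed, so the exact flags may differ):

  poudriere jail -c -j 111amd64 -v 11.1-RELEASE -a amd64
  poudriere bulk -a -j 111amd64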

As far as Epyc goes, I think the larger core count would win.  A lot
depends on how effective cache is for this workload, so it would be
interesting to plot poudriere run time vs. clock speed.  If cache misses
dominate execution time, then lowering the clock speed would not hurt
that much.  Something important to keep in mind with Threadripper and
Epyc is NUMA.  For best results, all of the memory channels should be
used and the work should be distributed so that the processes on each
core primarily access RAM local to that CPU die.  If this isn't the case,
then the Infinity Fabric that connects all of the CPU dies will be the
bottleneck.  The lower core clock speed on Epyc lessens that penalty,
but it is still something to be avoided if possible.

Something else to consider is price/performance.  If you want to build
packages for four OS/arch combinations, then doing it in parallel on
four Ryzen machines is likely to be both cheaper and faster than doing
the same builds sequentially on an Epyc machine with 4x the core count
and RAM.

It is unfortunate that there don't seem to be any server-grade Ryzen
motherboards.  They all seem to be gamer boards with a lot of
unnecessary bling.
