People have speculated about the impact of Meltdown and Spectre.
My initial view was that the effect was minimal - after this testing
I have revised that to 'small' :)

This will be a *long* set of postings - an overview of the method is
below, I'll reply with the results for the three sets of LFS builds,
and after that with all the timings from running some scripted ffmpeg
tests to manipulate video files.

I used my build scripts to build LFS as at 20171231 (I had already
done that on a different machine, and then on this one, which is my
Haswell - Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz).

For those who do not know, 'retpoline' (return trampoline) is a
technique published by Google to overcome the v2 Spectre attacks -
https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)#Mitigation
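
Once a kernel has been built with those patches, the easiest check is
the config symbol they add - a minimal sketch, assuming you keep the
config in /boot and that the patched kernel uses the CONFIG_RETPOLINE
symbol:

  # expect CONFIG_RETPOLINE=y on a retpoline-enabled build
  grep CONFIG_RETPOLINE /boot/config-$(uname -r)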

Overview
--------

There are currently 3 sets of runs to build LFS -

An initial position with PTI turned off and using older firmware.
I did NOT run this as the first step - I had already moved to using
PTI and new firmware, so I did this later.  Host kernel 4.14.12 and
the same version of LFS.

A 'Meltdown' run with PTI (this also includes the new firmware),
host kernel 4.14.12.  This was where I started these runs; then I
reverted to the so-called 'initial' position to run the next set.

A 'Spectre' (v2) run using a patched gcc and patches from Fedora to
apply a no-longer-current version of the retpoline patches to the
4.14.13 kernel.  This is just to get an early view of how badly
things will slow down.  For this set only, the patched 4.14.13 kernel
built with patched gcc was used on the host.
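
For the 'initial' position, note that a kernel built with PTI can
also be booted with it turned off from the command line - something
like this in a grub entry (the root device is a placeholder, and I'm
not claiming this is exactly how these runs were set up):

  # boot the same 4.14.12 kernel with page-table isolation disabled
  linux /boot/vmlinuz-4.14.12 root=/dev/sda2 pti=off
  # 'nopti' is the older spelling of the same parameter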

Patches are at http://www.linuxfromscratch.org/~ken/retpoline-test/

For the moment there is no patch for Spectre v1.

For each set of tests I have made 3 runs, which is not enough to give
an accurate feel for how much timings may vary, but it is a little
more reliable than building only once.
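
For anyone wanting to summarize their own runs, a mean and range from
three timings is a one-liner - the numbers here are made-up
placeholders, not my results:

  printf '%s\n' 2841 2855 2848 | awk '
    { sum += $1; if (NR==1 || $1<min) min=$1; if ($1>max) max=$1 }
    END { printf "mean %.0f, range %d\n", sum/NR, max-min }'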

Each run builds the same kernel as the host, so 4.14.12 for the first
two runs, then 4.14.13 with the retpoline patches for the third run.
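
The kernel build itself I timed by hand - a minimal sketch, with
placeholder paths for the source tree and a saved config:

  cd /sources/linux-4.14.13
  make mrproper
  cp /sources/saved-kernel.config .config   # placeholder config
  time make -j8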

I have only run 3 sets of tests on these builds, in the hope that
this would highlight _most_ of the *variation* in timing.  For the kernel
build, the retpoline run was actually faster, but the differences are
only seconds.

After that I did some more testing of scripted ffmpeg use on all
three kernels - for that I ran 10 times on each.
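
The shape of those runs, as a sketch - the input file and codec
options here are placeholders, not my actual test:

  for i in $(seq 1 10); do
    /usr/bin/time -f '%e' -o times.log -a \
      ffmpeg -y -loglevel error -i input.mkv -c:v libx264 out-$i.mkv
  done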

Depending on what happens upstream, I hope to revisit this after
patches against Spectre v1 are upstream and usable (even if that is
only in an -rc kernel) - by that stage the patches for Spectre v2
might also be upstream.  But undoubtedly there will be regressions
and attempts to optimize after the basics are in place.

Process
-------

Builds are LFS-svn-20171231 (because I had the scripts, and that is
what the host is running), but for each build of gcc I've added HJL's
patches which will be needed to build the retpoline Spectre
workaround.  I also added elfutils, and for building the kernel in these
runs I used a tarball of 4.14.13 because that removes the possibility
that I forget to chmod an added objtool script on my normal patched
version.
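
Adding HJL's patches is the usual patch step at the start of the gcc
build - the filename below is a placeholder for the patches linked
above:

  cd gcc-7.2.0    # the gcc version in this snapshot
  patch -Np1 -i ../gcc-hjl-indirect-branch.patch   # placeholder name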

These builds are onto a real ext4 filesystem (on an SSD), rather than
into a bind-mounted directory.  I'm making a clean filesystem before
each run.  Except for the SBU measurement I'm using make -j8.  My scripts use NFS
to read the sources and the scripts, and my logging process (find all
files before and after the install, work out what changed) adds a
significant overhead to the build.  That is not reflected in the
times for the individual steps, only in the times for the tools and
chroot scripts.
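
The logging is the classic snapshot-and-diff idea - in outline, with
illustrative paths rather than my actual script:

  find / -xdev | sort > /tmp/before.lst
  make install
  find / -xdev | sort > /tmp/after.lst
  # lines only in the second list are what the package installed
  comm -13 /tmp/before.lst /tmp/after.lst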

I'm building with all tests (except vim - those started hanging on
some of my machines last year).  For many of the tests in LFS I used
make -j8 check.  If a test has failed in the past, I
generally now allow it to fail, e.g. ISTR bison tests fail for me
but not for others.
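
Where a suite is known to fail here, the script just notes that and
carries on - a minimal sketch:

  make -j8 check || echo "bison: tests failed (known on this host)"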

NB I have NOT installed the patched gcc on the host for these tests;
at the moment it merely provides infrastructure which will allow the
kernel to build the retpoline mitigation against Spectre, if that gets
added to the kernel.

Apart from the kernel, which I timed manually, the step times are
calculated in my scripts - start after untarring and patching, stop
before doing logging of what got installed and before hiding most
static libs and libtool files.  The time for the driving scripts
(tools.sh, chroot.sh) includes all that overhead as well as
untarring, patching, removing the source.  In chroot, and on the way
into chroot, I do some things differently from the book.
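
The per-step timing roughly amounts to this sketch (the variable
names and log path are invented for illustration):

  start=$(date +%s)
  ./configure --prefix=/usr && make -j8 && make install
  end=$(date +%s)
  echo "$PKG: $(( end - start )) seconds" >> /var/log/steptimes.log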

Although variation in build/test time for individual steps is
sometimes noticeable, I would place most reliance on the times for
the overall scripts for tools.sh and chroot.sh, and also the total
time to build the kernel - each of these is rounded to the nearest
second.  For a few packages I've worked out a mean, but generally I
can't see the point of filling in any more numbers!

An Oddity I noticed
-------------------

When testing tar, before these tests the new test 92 (link mismatch)
always failed for me.  In all three runs of the 'initial' position the
test passed, and in the first two runs with 'retpoline' it passed as
well, but then it failed in the final run.  I had guessed that the
variation in timings for chroot tar was related to test failures, but
no, it isn't.
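
For anyone who wants to poke at it, tar's testsuite is autotest-based,
so a single test can be rerun on its own - standard autotest usage,
not something from my scripts:

  cd tar-*/tests
  make check TESTSUITEFLAGS='92'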

Summary
-------

The results may improve with the current Spectre v2 patchset, which
has just been offered to Linus.  But the Spectre v1 patches might
slow things down again.  And the results WILL vary on each processor
variant, and for different workloads.

ĸen
-- 
Truth, in front of her huge walk-in wardrobe, selected black leather
boots with stiletto heels for such a barefaced truth.
                                     - Unseen Academicals