Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Frank Steinmetzger
On Wed, May 15, 2019 at 01:53:45PM -0500, Dale wrote:
> Nikos Chantziaras wrote:
> > […]
> > If your system is on, how is it going to replace vulnerable kernels
> > with patched ones?

> […]
>
> While I want to keep the bad CPU code from being used, they first have
> to get past other things.  My DSL modem has protections, my router adds
> yet another layer of it.  I use adblock, noscript and such on all my
> browsers as well.

I’m kinda on the same train of thought. All those vulnerabilities of recent
years are about data exfiltration through cross-process memory reads, i.e.
exploiting the caching and speculative-execution mechanisms CPUs use to
optimise instructions. The threat scenario is mostly relevant for servers
which run unverified processes from any number of users, each of whom may be
trying to attack other users’ processes.

On a personal computer, nowadays the most common point of entry for malware
is the browser (or a manipulated data file hitting any kind of parser bug,
such as in Adobe or M$ Office). And in the browser, the threat comes from
active elements, IOW EcmaScript. I use uMatrix with strict defaults; scripts
are only enabled when actually needed. And contrary to often-heard street
talk, you can still use many corners of the Web without JS.
And of course I don’t blindly extract any ace archive that pretends to be a
rar.

Linux doesn’t “support” Windows crapware, and as long as you are careful
about where you get your programs from (i.e. the package manager and other
trustworthy sources), you are reasonably safe, as opposed to Joe
Average-Windows-User who loads Adobe Reader and Google Chrome from
free-full-version-software.com instead of the developer’s official website
because he simply doesn’t know any better.

So I might not be as safe as technically possible, but by now I’ve grown
tired of following which fix incurs what performance penalty and don’t
really give a crap. I set mitigations=off on my cmdline and watch the tech
media burn itself down in a spiral of hysteria. In the meantime I protect
myself by (hopefully) knowing what each of my actions does and by using
software that applies common sense and offers a small attack surface, for
example mutt and vim instead of HTML mail and a text editor based on an
entire browser engine.
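
For anyone wanting to flip the same switch, a minimal sketch of how this is
typically done on a GRUB2 setup (assuming GRUB2 is the bootloader, /boot is
mounted, and the kernel is recent enough to know the mitigations= parameter):

  # /etc/default/grub -- append the flag to the kernel command line
  GRUB_CMDLINE_LINUX_DEFAULT="... mitigations=off"

  # regenerate the GRUB config so the new cmdline is picked up
  grub-mkconfig -o /boot/grub/grub.cfg

  # after the next reboot, check which mitigations are (in)active
  grep . /sys/devices/system/cpu/vulnerabilities/*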

At some point in the future, my stationary PC will require a hardware
refresh. At that point I will say goodbye to Intel. This is the only
language companies understand. They’ve been getting ahead by developing
features without due diligence and by cutting corners, and that is now
coming back to bite them in the behind.

-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

There is only one way to the lung and it must be tarred.




Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Dale
Mick wrote:
> On Friday, 17 May 2019 09:43:46 BST Dale wrote:
>
>> My problems with init thingys date all the way back to the Mandrake
>> 9.1 days when I first used Linux.
> I was never a Mandrake user, but also avoided using an initrd unless it came 
> with a binary distro - at which point I would also expect it to Just-Work(TM).
>
> The world has moved on since Mandrake 9.x and the generation of initramfs
> is a much more automated and reliable process now.

I was not that lucky with Mandrake.  I stopped counting the number of
times the init thingy failed.  It got to the point where I would not
update Mandrake; I would download a new CD and reinstall, leaving only
/home untouched.  Even then, with some dodgy hardware, rebooting was not
something I looked forward to.


>
> Regarding a separate /usr fs necessitating initramfs, it shouldn't be too 
> difficult to plan some downtime, reboot with Live-media and move the /usr fs 
> contents into /, following any required partition modifications.  Unless of 
> course you *want* to keep /usr separate for mounting it as read-only, or 
> sharing it among multiple OS, in which case I don't think you can escape 
> initramfs.
>
> The downtime for rebooting a new kernel is measured in seconds.  Even if the 
> new kernel fails, you can fall back onto the previous kernel and boot that in
> seconds.
>

As I posted earlier, if I ever replace the hard drive, most likely with an
SSD, that is the plan.  With an SSD there is little need to have a
separate partition.  I may still make /var separate tho, since I've had
logs go crazy and fill it up before.  Having /var fill up is less of a
problem than / filling up.


>> As to hardware, I had one time where that was an issue.  Power failed and
>> a shutdown was needed.  When I went to power back up, the CPU fan
>> wouldn't spin up.  After a couple drops of oil were added, it was
>> spinning up again and of course, I ordered a replacement fan right
>> away.  I don't recall ever having any other hardware problem.  
> Count yourself lucky.  You could have discovered your disk wouldn't spin up 
> again, your PSU packed up, or even the MoBo chipset decided to retire from 
> active service.  Eventually, any of these hardware problems would manifest 
> themselves, but a reboot could reveal their demise sooner and hopefully at a 
> point where you were somewhat prepared for it.
>

As I posted, I've had a fan fail, and that's it.  Thing is, at the moment
I'm not prepared for any of that, but when things age, I replace them.
Of course, that really requires planning and is one reason I wouldn't
mind having a second system.  Thing is, if it's running, it is working.
Avoiding reboots avoids those issues.  Rebooting only forces them to
show up sooner, which I don't want.  I'm not sure how making something
fail sooner is really going to help anything.  If making something fail
sooner is the answer, never change the oil in your car.  ROFL

Dale

:-)  :-) 



Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Dale
Rich Freeman wrote:
> On Fri, May 17, 2019 at 6:28 AM Mick  wrote:
>> Count yourself lucky.  You could have discovered your disk wouldn't spin up
>> again, your PSU packed up, or even the MoBo chipset decided to retire from
>> active service.  Eventually, any of these hardware problems would manifest
>> themselves, but a reboot could reveal their demise sooner and hopefully at a
>> point where you were somewhat prepared for it.
>>
> ++
>
> You can't completely prevent reboots (not unless you are willing to
> spend big and go mainframe or something like that - and those create a
> different set of issues).  What you can do is take steps to reduce the
> risk that an unplanned reboot will cause problems.
>
> One of the best ways to ensure you're prepared for disaster is to make
> disaster routine.  Regular reboots can be a part of this, because you
> can do them at a time when you have time to deal with problems, and
> when you're looking for problems.
>
> This is why I've largely made the move to containers.  I still have a
> few processes running on my host, but almost everything has
> moved into containers that do one thing.  When I update a container I
> take a snapshot, run updates, shut it down, take another snapshot,
> start it up, and test the service it runs.  Since each container only
> does one thing, I know exactly what to test.  If it works I'm good,
> and if it doesn't work I can roll it back and not worry about what
> that might break on the 47 other services running on the same host.
> Every update involves an effective reboot for that one service, so I
> know that in the event of a host reboot they will generally all come
> up fine.  I of course update the host regularly and reboot that for
> kernel updates, which seem to come about twice a week these days
> anyway.
>
> Obviously I don't run updates the day before I leave on vacation,
> unless they are security critical, and then I exercise some care.
>
> The downside is that I end up with a lot more hosts to keep up to
> date, because I can't just run emerge -u world once on one host and
> have every service I run updated.  However, I gladly accept the extra
> work because the work itself becomes much simpler and predictable.  If
> I'm updating my imapd container and imapd still works, then I'm fine.
> I don't have to worry about suddenly realizing two days later that
> postgrey is bouncing a ton of mail or whatever.  If something obscure
> like a text editor breaks in my imapd container which I didn't catch,
> that might be an annoyance but it doesn't really impact me much since
> it isn't critical for the operation of that container.
>


But none of this changes one main point: my system is in use virtually
all the time.  A KDE upgrade has been ready for a while.  While waiting
for time to log out and back in, yet another KDE upgrade became
available.  So, I ended up with two updates that were ready.  Still, it
took me a couple days to get to a stopping point where I could log out
and back in again.  Even restarting Firefox gets on my nerves at times.
I've even learned how to figure out which tab is going wacky so that I
can close or reload it to fix either a high CPU usage problem or a large
amount of memory being used.  Again, if it is in the middle of a download
that may still have hours to go, I can't just close it without losing
what I've already downloaded.

Even with that, I still didn't want to risk rebooting and having any
sort of failure, no matter what it was.  As I pointed out, I can't think
of anything that a person can post that will change how I use my
system.  One thing I have considered is building a second system and
using that to at least play my videos to the TV.  Still, I download a lot.

Until how I use my system changes, init thingy or not, it is still
difficult to find time to reboot.  Add in the risk of it all, since I do
not trust the init thingy, and that just adds to the issue.

Dale

:-)  :-) 



[gentoo-user] LVM2 problem, meta data format change?

2019-05-17 Thread Paul Colquhoun
Recently I found that new kernels were not booting for me, because they could 
not assemble the LVM partition that I use for the root filesystem.

Booting back to my old kernel still worked.

I have tracked this back to the lvm2 version.

After booting with the old kernel, I ran lvm and tried the 'fullreport' 
command.

sys-fs/lvm2-2.02.184-r3 gives an error:

lvm> fullreport 
 LV root invalid: visible raid meta LV for raid1 segment 
 LV root invalid: visible raid meta LV for raid1 segment 
 Internal error: LV segments corrupted in root.


After backing out to an earlier version, sys-fs/lvm2-2.02.183
the 'fullreport' actually gives a report.

I'm assuming the only reason the old kernel boots is that it has the older
lvm in the initramfs, and once the volumes are assembled the handover to the
live system still works.

I can't find anything online that looks like the same thing to me, so I was 
wondering if anyone here had encountered a similar problem?

The next step is to try to find out how to update the on-disk lvm metadata
so the later versions understand it, hopefully without having to rebuild my
system from scratch.
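
Whatever the fix turns out to be, backing up the current metadata first seems
prudent.  A minimal sketch, run from the old, still-working lvm2 version;
"vg0" is only a placeholder for the actual volume group name:

  # dump the on-disk metadata to a plain-text file for safekeeping
  vgcfgbackup -f /root/vg0-metadata.backup vg0

  # the file is human-readable; the raid1 segment layout the newer
  # lvm2 complains about can be inspected there
  less /root/vg0-metadata.backup

  # if a later repair attempt makes things worse, the saved metadata
  # can be written back (with the affected LVs inactive)
  vgcfgrestore -f /root/vg0-metadata.backup vg0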


-- 
Reverend Paul Colquhoun, ULC. http://andor.dropbear.id.au/
  Asking for technical help in newsgroups?  Read this first:
 http://catb.org/~esr/faqs/smart-questions.html#intro






Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Rich Freeman
On Fri, May 17, 2019 at 6:28 AM Mick  wrote:
>
> Count yourself lucky.  You could have discovered your disk wouldn't spin up
> again, your PSU packed up, or even the MoBo chipset decided to retire from
> active service.  Eventually, any of these hardware problems would manifest
> themselves, but a reboot could reveal their demise sooner and hopefully at a
> point where you were somewhat prepared for it.
>

++

You can't completely prevent reboots (not unless you are willing to
spend big and go mainframe or something like that - and those create a
different set of issues).  What you can do is take steps to reduce the
risk that an unplanned reboot will cause problems.

One of the best ways to ensure you're prepared for disaster is to make
disaster routine.  Regular reboots can be a part of this, because you
can do them at a time when you have time to deal with problems, and
when you're looking for problems.

This is why I've largely made the move to containers.  I still have a
few processes running on my host, but almost everything has
moved into containers that do one thing.  When I update a container I
take a snapshot, run updates, shut it down, take another snapshot,
start it up, and test the service it runs.  Since each container only
does one thing, I know exactly what to test.  If it works I'm good,
and if it doesn't work I can roll it back and not worry about what
that might break on the 47 other services running on the same host.
Every update involves an effective reboot for that one service, so I
know that in the event of a host reboot they will generally all come
up fine.  I of course update the host regularly and reboot that for
kernel updates, which seem to come about twice a week these days
anyway.
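
As a rough illustration of that snapshot/update/test cycle (using LXD here
purely as an example, with "imapd" as a placeholder container name; any
container manager with snapshots works much the same way):

  # snapshot the container before touching anything
  lxc snapshot imapd pre-update

  # run the update inside the container, then restart it so the
  # service comes up from a clean boot
  lxc exec imapd -- emerge --sync
  lxc exec imapd -- emerge -uDN @world
  lxc restart imapd

  # test the one service this container provides; if it is broken,
  # roll back to the snapshot taken above
  lxc restore imapd pre-update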

Obviously I don't run updates the day before I leave on vacation,
unless they are security critical, and then I exercise some care.

The downside is that I end up with a lot more hosts to keep up to
date, because I can't just run emerge -u world once on one host and
have every service I run updated.  However, I gladly accept the extra
work because the work itself becomes much simpler and predictable.  If
I'm updating my imapd container and imapd still works, then I'm fine.
I don't have to worry about suddenly realizing two days later that
postgrey is bouncing a ton of mail or whatever.  If something obscure
like a text editor breaks in my imapd container which I didn't catch,
that might be an annoyance but it doesn't really impact me much since
it isn't critical for the operation of that container.

-- 
Rich



Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Mick
On Friday, 17 May 2019 09:43:46 BST Dale wrote:

> My problems with init thingys date all the way back to to the Mandrake
> 9.1 days when I first used Linux.

I was never a Mandrake user, but also avoided using an initrd unless it came 
with a binary distro - at which point I would also expect it to Just-Work(TM).

The world has moved on since Mandrake 9.x and the generation of initramfs is a 
much more automated and reliable process now.

Regarding a separate /usr fs necessitating initramfs, it shouldn't be too 
difficult to plan some downtime, reboot with Live-media and move the /usr fs 
contents into /, following any required partition modifications.  Unless of 
course you *want* to keep /usr separate for mounting it as read-only, or 
sharing it among multiple OS, in which case I don't think you can escape 
initramfs.
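
Roughly, from the live environment, something along these lines (the device
names are placeholders, and taking a backup beforehand is assumed):

  # mount the root and the old /usr partitions of the installed system
  mkdir -p /mnt/root /mnt/usr
  mount /dev/sdX2 /mnt/root        # the / partition
  mount /dev/sdX3 /mnt/usr         # the old separate /usr partition

  # copy the /usr contents into the root filesystem, preserving
  # permissions, ownership, hard links and extended attributes
  rsync -aHAX /mnt/usr/ /mnt/root/usr/

  # finally, remove or comment out the /usr entry in
  # /mnt/root/etc/fstab so the old partition is no longer mounted
  # on top of the copied data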

The downtime for rebooting a new kernel is measured in seconds.  Even if the 
new kernel fails, you can fall back onto the previous kernel and boot that in
seconds.


> As to hardware, I had one time where that was an issue.  Power failed and
> a shutdown was needed.  When I went to power back up, the CPU fan
> wouldn't spin up.  After a couple drops of oil were added, it was
> spinning up again and of course, I ordered a replacement fan right
> away.  I don't recall ever having any other hardware problem.  

Count yourself lucky.  You could have discovered your disk wouldn't spin up 
again, your PSU packed up, or even the MoBo chipset decided to retire from 
active service.  Eventually, any of these hardware problems would manifest 
themselves, but a reboot could reveal their demise sooner and hopefully at a 
point where you were somewhat prepared for it.

-- 
Regards,
Mick



Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Dale
Adam Carter wrote:
>
> something even worse.  Since rebooting is when those tend to
> fail/break/whatever, it is yet another reason I avoid rebooting.  
>
>
> I take the opposite approach. If I update the kernel and reboot often,
> I see the following benefits:
> - Each increment in version is smaller, therefore there's less change
> per update, which makes it easier to troubleshoot if there are problems
> - Doing something regularly is practice, and practice makes perfect.
> If you were to update regularly you would become more proficient and
> confident with doing the init thingy (initrd?)
> - If a hardware issue occurs, I find it at a good time when I'm not
> busy, and have the time to troubleshoot
> - Getting the benefits of the automated kernel code testing (eg
> syzbot, KASAN) that is used these days finding issues that then get
> fixed (including security issues). You'd have to assume that the
> overall quality of the kernel is improving at a faster rate now than
> before those extra checks were in place.
>  
> At work I have raised tickets to have systems with big uptimes have
> their hardware status reviewed then restarted, a couple of days before
> I undertake risky/critical work. That way I can have more confidence
> in the system's health before starting. The classic one is where OPS
> haven't noticed that disks in a RAID array have died years ago...
>
> Even when I have a power fail here, it makes me very nervous to
> shutdown. 
>
>
> Another benefit of regular updates would be to reduce stress of
> deciding to shutdown, as you will have more confidence that the
> systems are healthy when you need to do it.
>
> :)


My problems with init thingys date all the way back to the Mandrake
9.1 days when I first used Linux.  At that time, I didn't make the init
thingys at all; the OS did that during install or updates.  Still, they
would work, but eventually an update or something would break them, which
left me with an unbootable OS.  After a few times of that, I grew to
hate the init thingys and have hated them ever since.  It is just one
more thing that tends to fail, and since it shouldn't really even be
needed (it wasn't for many years), I would rather not have one at
all.  I find it odd that I can build a kernel from scratch that boots
and works on the first try, but that silly init thingy seems to cause
problems even when not messed with.  I might add, the init thingy is one
reason I left Mandrake.  Gentoo didn't require the stupid thing, and for
years when I didn't have one, rebooting wasn't an issue since I was
confident my kernels would work.  After all, even back then, I didn't
change or update the kernels that often.

As to hardware, I had one time where that was an issue.  Power failed and
a shutdown was needed.  When I went to power back up, the CPU fan
wouldn't spin up.  After a couple drops of oil were added, it was
spinning up again and of course, I ordered a replacement fan right
away.  I don't recall ever having any other hardware problem.  Thing is,
even if I had shut down a week earlier, that fan may have worked fine.
Who knows when it would have eventually failed.

As I also said, my system is almost always doing something I need it to
do.  It is doing things that it can't do if I'm rebooting or shutting
down.  It certainly can't do them if it is unable to boot due
to a broken init thingy.  If I wanted a system that required rebooting
on a regular basis to work, I'd be using windoze, not Linux.  Reboots
frequently fix windoze issues but don't usually do so with Linux.

As I said before, for some the advice is good advice.  For me, it is
not.  It is even counterproductive in my use case.  I can't think
of anything that will be changing that either.  If how I use my system
changes, that may change things.

Dale

:-)  :-)


Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Adam Carter
> Yep, and not just to Adam.  I had to ask support twice to check their
> array
> because performance was degraded, but they preferred to blame (my) network
> for
> it.  So much for keeping an eye on monitoring kit for their storage.
>
>
Systems guys blame the network, network guys blame the firewall. I just ask
"what evidence do you have there is a network/firewall problem?". That
question is troubling if you haven't done your homework before blaming
someone else.


Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-17 Thread Mick
On Friday, 17 May 2019 06:48:28 BST Adam Carter wrote:
> > > The classic one is where OPS haven't noticed that disks in a RAID array
> > > have died years ago...
> >
> > This really happened?
> 
> Yeah. Spent huge money on NMSes but then didn't spend the relatively small
> amount on thorough integration to make it really worthwhile... It seems
> common to me that companies penny pinch too hard and shoot themselves in
> the foot. Probably due to perverse incentives and/or the inability to
> capture true value on a ledger. It requires leadership.

Yep, and not just to Adam.  I had to ask support twice to check their array 
because performance was degraded, but they preferred to blame (my) network for 
it.  So much for keeping an eye on monitoring kit for their storage.

-- 
Regards,
Mick
