Re: [gentoo-user] Can I use containers?

2019-05-18 Thread Grant Taylor

On 5/18/19 7:04 PM, Wols Lists wrote:
Not that I do it (it would be a bit of a learning experience :-) 
but this is where using ldap for user management would score ...


Centralized ID administration is nice.  I've dabbled with the following:

 · Manual UID & GID management
 · Copying passwd/shadow & group/gshadow files (bad idea, would 
recommend against)

 · LDAP
 · NIS(+)
 · AD via Samba

There are some other options too.  Hesiod and something else use DNS as 
the central directory.


I recently used LDAP + Kerberos + NFS and was quite happy with it.  NFS 
even used Kerberos for authentication.




Re: [gentoo-user] Can I use containers?

2019-05-18 Thread Grant Taylor

On 5/18/19 5:49 PM, Rich Freeman wrote:
I'd be interested if there are other scripts people have put out 
there, but I agree that most of the container solutions on Linux 
are overly-complex.


Here's what I use for some networking, which probably qualifies as 
extremely lightweight "containers".


Prerequisite:  Create a place for the name spaces to anchor:

   # Create the directories to contain the *NS mount points.
   sudo mkdir -p /run/{mount,net,uts}ns

You can use any path that you want.  —  I do a lot with iproute2's 
network namespaces (which is where this evolved from), which use 
/run/netns/$NetNSname.  So I used that as a pattern for the other types 
of namespaces.  Adjust as you want.  —  What I'm doing is interoperable 
with iproute2's netns command.


Per ""Container:  Create the ""Containers mount points:

   # Create the *NS mount points
   sudo touch /run/{mount,net,uts}ns/$ContainerName

Start the actual namespaces:

   # Spawn the container's mount, network, and UTS namespaces.
   unshare --mount=/run/mountns/$ContainerName \
      --net=/run/netns/$ContainerName \
      --uts=/run/utsns/$ContainerName /bin/true


Note:  The namespaces don't die when true exits because they are 
associated with a mount point.
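
Since the network namespace is anchored at /run/netns/$ContainerName, 
iproute2's netns tooling should be able to see and use it.  As a quick 
check (an illustrative aside, not part of the original recipe):

   # The anchored NetNS shows up in iproute2's listing ...
   ip netns list
   # ... and can be entered via iproute2 as well.
   ip netns exec $ContainerName ip link show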


Tweak the namespaces:

   # Set the container's hostname inside its UTS namespace.
   nsenter --mount=/run/mountns/$ContainerName \
      --net=/run/netns/$ContainerName \
      --uts=/run/utsns/$ContainerName \
      /bin/hostname $ContainerName


I reuse this command, calling different binaries, any time I want to do 
something in the "container".  Calling /bin/bash (et al.) enters the 
container.


I've created a wrapper script (nsenter.wrapper) that passes the proper 
parameters to nsenter.  I've then sym-linked the container name to the 
nsenter.wrapper script.  This means that I can run "$ContainerName 
$Command" or simply enter the container with $ContainerName.  (The 
script checks the number of parameters and assumes /bin/bash if no 
command is specified.)
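
For reference, a minimal sketch of what such a wrapper might look like. 
The file name nsenter.wrapper comes from the description above; the use 
of basename and the exact argument handling are illustrative guesses, 
not the actual script:

   #!/bin/sh
   # nsenter.wrapper: invoked via a symlink named after the container,
   # e.g.  ln -s nsenter.wrapper lab1  and then  lab1 [command ...]
   ContainerName=$(basename "$0")

   # No arguments?  Assume an interactive shell.
   [ $# -eq 0 ] && set -- /bin/bash

   exec nsenter --mount=/run/mountns/"$ContainerName" \
      --net=/run/netns/"$ContainerName" \
      --uts=/run/utsns/"$ContainerName" "$@"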


I think it's ultimately extremely trivial to have a "container" (a 
glorified collection of namespaces) do the things I want with virtually 
zero disk space.  Ok, ok, maybe 1 or 2 kB for the script & links.


Note:  Since I'm using the mount namespace, I can have a completely 
different mount tree inside the "container" than I have outside the 
container / on the host.  I'm not currently doing that, but it's 
possible to change things as desired.
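
As an illustrative aside (not something from my setup): util-linux's 
unshare defaults a new mount namespace to private propagation, so a 
mount made through the same nsenter invocation should stay inside the 
"container", e.g.:

   # Illustrative only: give the container a private /tmp; the
   # host's /tmp is unaffected.
   nsenter --mount=/run/mountns/$ContainerName \
      --net=/run/netns/$ContainerName \
      --uts=/run/utsns/$ContainerName \
      /bin/mount -t tmpfs tmpfs /tmp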


I personally use nspawn, which is actually pretty minimal, but it 
depends on systemd, which I'm sure many would argue is overly complex. 
:)  However, if you are running systemd you can basically do a 
one-liner that requires zero setup to turn a chroot into a container.


As much as I might not like systemd, if you have it, and it reliably 
does what you want, then I see no reason to /not/ use it.  Just 
acknowledge it as a dependency of your solution, which you have done. 
So I think we're cool.



On to the original questions about mounts:

In general you can mount stuff in containers without issue.  There are 
two ways to go about it.  One is to mount something on the host and 
bind-mount it into the container, typically at launch time.  The other 
is to give the container the necessary capabilities so that it can 
do its own mounting (typically containers are not given the necessary 
capabilities, so mounting will fail even as root inside the container).


Given that one of the uses of containers is security isolation (such as 
it is), I feel like giving the container the ability to mount things is 
less than a stellar idea.  But to each his / her own.


I believe the reason the wiki says to be careful with mounts has more 
to do with UID/GID mapping.  As you are using nfs this is already an 
issue you're probably dealing with.  You're probably aware that running 
nfs with multiple hosts with unsynchronized passwd/group files can 
be tricky, because linux (and unix in general) works with UIDs/GIDs, 
and not really directly with names,


That's true for NFS v1-3.  But NFS v4 changes that.  NFS v4 actually 
uses user names & group names and has a daemon that runs on the client & 
server to translate things as necessary.
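
The daemon in question is rpc.idmapd (or the nfsidmap helper on newer 
clients); the setting that has to agree on both ends is the NFSv4 
domain in /etc/idmapd.conf.  A minimal, illustrative sketch:

   # /etc/idmapd.conf (excerpt) -- must match on client and server.
   [General]
   Domain = example.com

   [Mapping]
   Nobody-User = nobody
   Nobody-Group = nobody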


so if you're doing something with one UID on one host and with a 
different UID on another host you might get unexpected permissions 
behavior.


Yep.  You need to do /something/ to account for this.  Be it manually 
managing UIDs & GIDs across systems, or using something like NFSv4's 
name-mapping mechanism described above.


In a nutshell the same thing can happen with containers, or for 
that matter with chroots.


I mostly agree.  However, user namespaces can nullify this.

I've not dabbled with user namespaces yet, but my understanding is that 
they can have completely different UIDs & GIDs inside the user namespace 
than outside of it.  It's my understanding that UID 0 / GID 0 inside a 
user namespace can be mapped to UID 12345 / GID 23456 outside of the 
user namespace.  Refer to nsenter / unshare man pages for more details.
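
A quick, illustrative way to see this with util-linux 
(--map-root-user writes the uid_map/gid_map for you):

   # A new user namespace in which the unprivileged caller appears
   # as UID 0 / GID 0.
   unshare --user --map-root-user id
   # -> uid=0(root) gid=0(root) ..., while the rest of the system
   #    still sees the process running as the original user.
   #    Arbitrary mappings can be written to /proc/$PID/uid_map
   #    and /proc/$PID/gid_map.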




Re: [gentoo-user] Can I use containers?

2019-05-18 Thread Wols Lists
On 19/05/19 00:49, Rich Freeman wrote:
> I believe the reason the wiki says to be careful with mounts has more
> to do with UID/GID mapping.  As you are using nfs this is already an
> issue you're probably dealing with.  You're probably aware that
> running nfs with multiple hosts with unsynchronized passwd/group files
> can be tricky, because linux (and unix in general) works with
> UIDs/GIDs, and not really directly with names, so if you're doing
> something with one UID on one host and with a different UID on another
> host you might get unexpected permissions behavior.

Not that I do it (it would be a bit of a learning experience :-) but
this is where using ldap for user management would score ...

Cheers,
Wol



Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-18 Thread Paul Colquhoun
On Saturday, May 18, 2019 11:01:30 P.M. AEST Wols Lists wrote:
> On 17/05/19 06:19, Andrew Udvare wrote:
> >> On May 17, 2019, at 01:14, Adam Carter  wrote:
> >> 
> >> The classic one is where OPS haven't noticed that disks in a RAID array
> >> have died years ago...
> > This really happened?
> 
> It's probably more common than you think.
> 
> Can't tell (don't really know) the details, but I was told a story first
> hand about someone who went in to the computer room and asked "what are
> those flashing red lights?"
> 
> Cue massive panic as ops suddenly realised that (a) it was the main
> billing server with terabytes of critical information and (b) the two
> flashing lights meant their terribly expensive raid-6 disk array was now
> running in raid-0!


And the even bigger worry would be that a drive replacement and rebuild, which 
is the whole point of using RAID, may fail. The degraded RAID is working (so 
far) but a rebuild (unless it is *very* file system aware) needs to read EVERY 
BLOCK on the existing disks to rebuild the failed drive/s, and if it 
encounters any failed blocks in unused areas of the RAID it may be unable to 
complete the rebuild.

I have seen this happen in previous positions. Not an easy thing to report to 
management, and the unexpected downtime to rebuild everything from backups 
onto new drives can be extensive (and expensive).

This is why good RAID systems have a background task that regularly reads and 
checks every block of every disk, to avoid undetected errors.

Hot Spares are also a good safety measure, along with monitoring software that 
alerts you when the spares have gone live.
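
With Linux md RAID, for example, both pieces are readily available 
(the device name and mail address below are illustrative):

   # Kick off a background scrub that reads every block of md0; most
   # distros ship a periodic cron/systemd job that does this.
   echo check > /sys/block/md0/md/sync_action

   # Have mdadm watch the arrays and send mail on failures, degraded
   # arrays, and spares being pulled into service.
   mdadm --monitor --scan --daemonise --mail=root@example.com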


-- 
Reverend Paul Colquhoun, ULC. http://andor.dropbear.id.au/
  Asking for technical help in newsgroups?  Read this first:
 http://catb.org/~esr/faqs/smart-questions.html#intro






Re: [gentoo-user] Can I use containers?

2019-05-18 Thread Rich Freeman
On Sat, May 18, 2019 at 12:44 PM Grant Taylor
 wrote:
>
> On 5/18/19 9:26 AM, Peter Humphrey wrote:
> > Hello list,
>
> Hi,
>
> > Can anyone answer this?
>
> I would think that containers could be made to do this.  But I'm not a
> fan of the containerization systems that I've seen.  They seem to be too
> large and try to control too many things and impose too many
> limitations.

I'd be interested if there are other scripts people have put out
there, but I agree that most of the container solutions on Linux are
overly-complex.

I personally use nspawn, which is actually pretty minimal, but it
depends on systemd, which I'm sure many would argue is overly complex.
:)  However, if you are running systemd you can basically do a
one-liner that requires zero setup to turn a chroot into a container.
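
Roughly along these lines (the chroot path here is illustrative, and 
--boot is only needed if you want the container to run its own init):

   # Spawn a shell inside an existing chroot directory as a container.
   systemd-nspawn -D /srv/mychroot

   # Or boot the chroot's own init system inside the container.
   systemd-nspawn --boot -D /srv/mychroot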

On to the original questions about mounts:

In general you can mount stuff in containers without issue.  There are
two ways to go about it.  One is to mount something on the host and
bind-mount it into the container, typically at launch time.  The other
is to give the container the necessary capabilities so that it can do
its own mounting (typically containers are not given the necessary
capabilities, so mounting will fail even as root inside the
container).

I believe the reason the wiki says to be careful with mounts has more
to do with UID/GID mapping.  As you are using nfs this is already an
issue you're probably dealing with.  You're probably aware that
running nfs with multiple hosts with unsynchronized passwd/group files
can be tricky, because linux (and unix in general) works with
UIDs/GIDs, and not really directly with names, so if you're doing
something with one UID on one host and with a different UID on another
host you might get unexpected permissions behavior.

In a nutshell the same thing can happen with containers, or for that
matter with chroots.  If you have identical passwd/group files it
should be a non-issue.  However, if you want to do mapping with
unprivileged containers you have to be careful with mounts as they
might not get translated properly.  Using completely different UIDs in
a container is their suggested solution, which is fine as long as the
actual container filesystem isn't shared with anything else.  That
tends to be the case anyway when you're using container
implementations that do a lot of fancy image management.  If you're
doing something very minimal and just using a path/chroot on the host
as your container then you need to be mindful of your UIDs/GIDs if you
go accessing anything from the host directly.

The other thing I'd be careful with is mounting physical devices in
more than one place.  Since you're actually sharing a kernel I suspect
linux will "do the right thing" if you mount an ext4 on /dev/sda2 on
two different containers, but I've never tried it (and again doing
that requires giving containers access to even see sda2 because they
probably won't see physical devices by default).  In a VM environment
you definitely can't do this, because the VMs are completely isolated
at the kernel level and having two different kernels having dirty
buffers on the same physical device is going to kill any filesystem
that isn't designed to be clustered.  In a container environment the
two containers aren't really isolated at the actual physical
filesystem level since they share the kernel, so I think you'd be fine
but I'd really want to test or do some research before relying on it.

In any case, the more typical solution is to just mount everything on
the host and then bind-mount it into the container.  So, you could
mount the nfs in /mnt and then bind-mount that into your container.
There is really no performance hit and it should work fine without
giving the container a bunch of capabilities.
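
As a sketch of that approach (paths illustrative; --bind is 
nspawn-specific, while mount --bind works for plain chroot/namespace 
setups):

   # On the host: mount the NFS export once ...
   mount -t nfs server:/export /mnt

   # ... then hand it to the container, e.g. with systemd-nspawn:
   systemd-nspawn -D /srv/mychroot --bind=/mnt

   # ... or, for a plain chroot/namespace setup, with a bind mount:
   mount --bind /mnt /srv/mychroot/mnt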

-- 
Rich



Re: [gentoo-user] Can I use containers?

2019-05-18 Thread Grant Taylor

On 5/18/19 9:26 AM, Peter Humphrey wrote:

Hello list,


Hi,


Can anyone answer this?


I can't comment on LXC or containers in general, but I will say that I 
think that namespaces (which is largely what I think containers are) 
could do this.


I'd suggest a mount, UTS, and possibly user namespace.  The mount 
namespace will allow you to have a different mount tree inside of the 
"container" / chroot / et al.  The UTS [1] namespace will allow you to 
have a different hostname inside of the "container".  The user 
namespace will allow you to have different user IDs and group IDs 
inside of the "container".


I would think that containers could be made to do this.  But I'm not a 
fan of the containerization systems that I've seen.  They seem to be too 
large and try to control too many things and impose too many 
limitations.  Especially when I just want something similar to a chroot 
/ Solaris Zone.  I've always been able to get namespaces to do what I 
want with little problem.


[1] Unix Time Sharing



Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-18 Thread Mick
On Saturday, 18 May 2019 02:25:58 BST Frank Steinmetzger wrote:

> At some point in the future, my stationary PC will require a hardware
> refresh. At that point I will say goodbye to Intel. This is the only
> language companies understand. 

Yes, Intel has been permanently erased as an option for any future computer 
purchases of mine.

However, in the current oligopoly of hardware suppliers and their market 
carve-up, there isn't much (if any) choice for the retail consumer.  The only 
CPU which does not come with ME/PSP hardware backdoors built in by design is 
the POWER9, which is an expensive server CPU.  There is no choice for laptops.

Unless big OEMs like Apple start exerting pressure on the CPU manufacturers to 
secure their designs, I can't see them changing their strategy just because an 
infinitesimally small number of users stopped buying their products.


> They’ve been getting ahead by developing
> features without due diligence and by cutting corners. And this is biting
> them in their behind now all the way back.

It's not just a matter of cutting corners and trying to remain competitive by 
being first to market with shoddy products.  They have also consciously 
decided to incorporate co-processors (OOB hypervisors) in their chips, with no 
option of physically removing them, or at least fully disabling or replacing 
the proprietary firmware blobs they run.  The concept of 'secure computing' 
with today's market offerings is increasingly showing itself to be an oxymoron.

-- 
Regards,
Mick



[gentoo-user] Can I use containers?

2019-05-18 Thread Peter Humphrey
Hello list,

I use this box as a compile host for two other boxes on the LAN. From each of 
those I NFS-export $PORTDIR to a chroot jail on this box, run portage etc. in 
the jail as needed and then install the binaries on the smaller boxes.

Recently there've been several mentions of containers instead of chroots, and 
before I put too many hours into trying them I'd like just a little help, 
please.

The Gentoo containers guide[1] says: "Do not mount parts of external 
filesystems within a container, except ro (read only)." (This is under 
Limitations of LXC). My question is whether "external" means "outside the 
container but on the same machine" or absolutely any file system not in the 
container. In the latter case I wouldn't be able to use the NFS-mounted remote 
directories, so I'd be wasting time and energy trying to make it work.

Can anyone answer this?

1.  https://wiki.gentoo.org/wiki/LXC

-- 
Regards,
Peter.






Re: [gentoo-user] Re: New Intel CPU flaws discovered

2019-05-18 Thread Wols Lists
On 17/05/19 06:19, Andrew Udvare wrote:
>> On May 17, 2019, at 01:14, Adam Carter  wrote:
>>
>> The classic one is where OPS haven't noticed that disks in a RAID array have 
>> died years ago...
> 
> This really happened?
> 
It's probably more common than you think.

Can't tell (don't really know) the details, but I was told a story first
hand about someone who went in to the computer room and asked "what are
those flashing red lights?"

Cue massive panic as ops suddenly realised that (a) it was the main
billing server with terabytes of critical information and (b) the two
flashing lights meant their terribly expensive raid-6 disk array was now
running in raid-0!

Cheers,
Wol