Thank you for your response, Mantas!

I think I may have been wrong with my terminology and use of "host-only" 
networking, having now read VirtualBox's definition in their networking 
documentation. Currently I have two containers configured.

- Container 1

  [Files]
  Bind=/datastore/downloads:/data/downloads

  [Network]
  VirtualEthernet=true
  Port=tcp:32401

- Container 2

  [Files]
  Bind=/datastore/mediacentre:/data
  Bind=/datastore/mediacentre/.plexconfig:/var/lib/plex/Plex\ Media\ Server

Here is a copy of my systemd-nspawn@.service file, which is symlinked for both 
of my containers. As you can see, I have removed the --network-veth option that 
appears by default.

[Unit]
Description=Container %i
Documentation=man:systemd-nspawn(1)
PartOf=machines.target
Before=machines.target
After=network.target

[Service]
ExecStart=/usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest -U --settings=override --machine=%i
KillMode=mixed
Type=notify
RestartForceExitStatus=133
SuccessExitStatus=133
Slice=machine.slice
Delegate=yes
TasksMax=16384



Within Container 1:

[root@container1 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: host0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff link-netnsid 0



Within Container 2:

[root@container2 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff



This works fine. However, is there a way of explicitly setting container 2 to 
use the host's network adapter without modifying the systemd-nspawn@.service 
file to omit --network-veth? I think I'm correct in saying this can be achieved 
within Docker by passing a --net=host parameter when launching a container, 
though I could be wrong, as I haven't really played with Docker.
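
For reference, I did notice that the unit already passes --settings=override, 
so I am wondering whether something like the following in 
/etc/systemd/nspawn/container2.nspawn (the path and file name here are just my 
guess at the expected layout) would take precedence over --network-veth and 
leave the container in the host's network namespace, per systemd.nspawn(5):

[Network]
# Untested sketch: with --settings=override, settings in the .nspawn file are
# supposed to take precedence over the corresponding command line options.
Private=no
VirtualEthernet=no

I have not tried this yet, so please correct me if I'm misreading the man pages.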



Thank you folks, again, really appreciate any input/assistance.



Sam

From:  Mantas Mikulėnas <graw...@gmail.com>
Date:  Tuesday, 20 June 2017 at 17:17
To:  Samuel Taylor <s...@tailornetworks.com>, 
<systemd-devel@lists.freedesktop.org>
Subject:  Re: [systemd-devel] Systems-nspawn host-only networking?


I haven't used nspawn much. But I think the terminology is the opposite – veth 
*is* the most similar to other tools' "host-only network", as it essentially 
creates a connection completely separate from the physical LAN, unless the host 
itself decides to route between them. (Compare with VirtualBox's vboxnet0.)

Meanwhile, the opposite option would be macvlan, which attaches to a physical 
interface (like "bridged network" in VirtualBox) and separates traffic by MAC.

In between, you have the option of first creating a "host-only" veth, and 
*then* putting it in a Linux bridge interface (br0/virbr) together with eth0.

(I don't remember if nspawn can do this automatically or whether you need to 
'ip link set veth0 master br0'...)
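
(By hand it would be roughly the following, untested and with made-up interface 
names:

    # create a bridge and enslave both the physical NIC and the host side
    # of the container's veth ("ve-container1" is just a guess at its name)
    ip link add name br0 type bridge
    ip link set br0 up
    ip link set eno1 master br0
    ip link set ve-container1 master br0

plus whatever addressing you want moved from eno1 onto br0.)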

On Tue, Jun 20, 2017, 19:07 Samuel Taylor <s...@tailornetworks.com> wrote:
Hello to all,

I'm new to the scene here, so forgive me if this is not the most appropriate 
place to post this. I have posed this question on Freenode IRC a couple of 
times but have not had any takers.

At the moment I am in the process of deploying a couple of nspawn containers, 
one using a VirtualEthernet config and the other sharing the host's network 
adapter, which I believe is typically referred to, outside of the nspawn 
universe, as host-only networking (please correct me if I am wrong).

At present I have omitted --network-veth from the default systemd-nspawn 
.service unit file for containers, to enable host-only networking within one of 
my containers. For the second container, which uses a VirtualEthernet, I have 
configured this parameter using its .nspawn file. Is there a way to avoid 
modifying the default systemd-nspawn unit file and instead configure host-only 
networking within the .nspawn file? I have noted from the documentation that a 
network interface can be specified, i.e.

[Network]
Interface=eth0

However, the documentation suggests this would move the adapter out of the 
calling namespace so that it is only available within my container, which is 
not what happens when I simply remove --network-veth and set nothing at all.

If this is considered a bad practice I will instead use the VirtualEthernet and 
Port parameters on my container currently utilising host-only networking.

I've been really enjoying getting my hands dirty with systemd the last few 
days, so if you could shed some light on where I'm going wrong here, that would 
be greatly appreciated!

Many thanks,

Sam


Sent from my iPhone
-- 
Mantas Mikulėnas <graw...@gmail.com>
Sent from my phone

_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel
