Re: [systemd-devel] How to deploy systemd-nspawn containers and use for deployment

2016-10-12 Thread Brian Kroth
Seems really dependent on the container layout as to what's the most
appropriate way of doing that. For instance, if the underlying fs of the
source container is something like btrfs or zfs, you could imagine doing a
send/recv of a golden snapshot; possibly also for an lvm volume/snapshot.
For others rsync might be best; for others still, maybe it's just a
deployment script, a tarball, or a git repo.
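As a rough sketch of the options above (paths, host, and container names here are assumptions; adjust to your layout):

```shell
# btrfs: replicate a read-only "golden" snapshot of the container to the
# remote host (requires /var/lib/machines to be a btrfs subvolume)
btrfs subvolume snapshot -r /var/lib/machines/web1 /var/lib/machines/web1-golden
btrfs send /var/lib/machines/web1-golden | ssh remote btrfs receive /var/lib/machines

# generic filesystems: rsync the container tree, preserving hard links,
# ACLs, and extended attributes along with ownership and permissions
rsync -aHAX --delete /var/lib/machines/web1/ remote:/var/lib/machines/web1/
```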



Cheers,
Brian

On Wed, Oct 12, 2016, 16:23 Samuel Williams wrote:

> I'm not sure if this belongs in machinectl, but it would be interesting to
> explore some kind of deployment mechanism, e.g. machinectl deploy
> local-container-name ssh://remote-server:container-name. I'm sure
> this is going to be a really common use case, and there are enough details
> (e.g. stopping and starting the remote machine) that it would be nice to
> keep it easy for new users.
>
> On 13 October 2016 at 02:10, Chris Bell wrote:
>
> On 2016-10-11 22:29, Samuel Williams wrote:
>
>
> For step 2, what would be the best practice. Rsync the local container
> to the remote container?
>
>
> That's worked fine for me so far. Just to state the obvious: make sure
> the container is stopped before using rsync.
>
>
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] How to deploy systemd-nspawn containers and use for deployment

2016-10-12 Thread Samuel Williams
I'm not sure if this belongs in machinectl, but it would be interesting to
explore some kind of deployment mechanism, e.g. machinectl deploy
local-container-name ssh://remote-server:container-name. I'm sure
this is going to be a really common use case, and there are enough details
(e.g. stopping and starting the remote machine) that it would be nice to
keep it easy for new users.
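A manual approximation of such a "deploy" verb already exists via machinectl's tar export/import (available in newer systemd releases); the container name and remote host below are assumptions:

```shell
# export the local container as a compressed tarball, copy it over,
# import it on the remote side, then start it there
machinectl export-tar web1 /tmp/web1.tar.xz
scp /tmp/web1.tar.xz remote:/tmp/
ssh remote machinectl import-tar /tmp/web1.tar.xz web1
ssh remote machinectl start web1
```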

On 13 October 2016 at 02:10, Chris Bell wrote:

> On 2016-10-11 22:29, Samuel Williams wrote:
>
>>
>> For step 2, what would be the best practice. Rsync the local container
>> to the remote container?
>>
>>
> That's worked fine for me so far. Just to state the obvious: make sure
> the container is stopped before using rsync.
>


Re: [systemd-devel] tomcat start up script wait for message

2016-10-12 Thread Tyler Couto
Thank you for responding, Mantas. I haven’t gotten around to checking this out.

Tyler

From: Mantas Mikulėnas
Date: Tuesday, October 4, 2016 at 10:57 AM
To: Tyler Couto
Cc: systemd-devel@lists.freedesktop.org
Subject: Re: [systemd-devel] tomcat start up script wait for message

On Tue, Oct 4, 2016 at 8:32 PM, Tyler Couto wrote:
Hi all,

We have a tomcat application that requires some initialization after
tomcat starts up. That is, we run an initialization script after catalina.out
says 'Server startup in:'. Currently we do this in a number of ways:
manually, through a custom tail script, or through logstash. But I'm
thinking it might be best to let the init system do it. Is this a good
idea? And if so, how best to implement it?

systemd will not react to stdout messages, but it does have Type=notify which 
will react to a "READY=1" message sent via Unix socket – ideally directly by 
the program (see sd_notify, $NOTIFY_SOCKET) but also possibly via the 
`systemd-notify` tool:

sd_notify(0, "READY=1");

systemd-notify --ready
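To tie that to the catalina.out line from the original post, one option is a wrapper launched from a Type=notify unit with NotifyAccess=all, which tails the log and signals readiness once the startup line appears. This is only a sketch; the Tomcat paths and the exact log line are assumptions:

```shell
#!/bin/sh
# Hypothetical wrapper for a Type=notify / NotifyAccess=all unit.
# Start Tomcat in the background, then watch catalina.out and tell
# systemd the service is ready once startup completes.
/opt/tomcat/bin/startup.sh
tail -n 0 -F /opt/tomcat/logs/catalina.out | while read -r line; do
    case "$line" in
        *"Server startup in"*)
            systemd-notify --ready
            break
            ;;
    esac
done
```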

--
Mantas Mikulėnas


Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

On 2016-10-12 11:26, Chris Bell wrote:

On 2016-10-12 09:28, Reindl Harald wrote:

Am 12.10.2016 um 15:08 schrieb Chris Bell:

Not sure if this is the right place to ask


no


Sorry


I'll unsub so this doesn't happen again. Sorry again for spamming.


Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

On 2016-10-12 09:28, Reindl Harald wrote:

Am 12.10.2016 um 15:08 schrieb Chris Bell:

Not sure if this is the right place to ask


no


Sorry

box runs 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has
been (if I'm not mistaken) since the 7.0 release


not true, there were at least one if not two major jumps within the
CentOS 7 lifetime, and it was released with 208


My mistake




Any idea on when they
may port an update?


when they think it's needed for very good reasons. You implicitly bought
"no version jumps" with RHEL7 - so choose another distribution or live
with it


Thanks :)

On 2016-10-12 09:51, Greg KH wrote:

Why not ask Red Hat?  You are paying for support, why wouldn't you be
able to get information like this directly from them?

good luck!

greg k-h


I don't have direct access; that's a couple levels above my paygrade :/


I apologize for spamming the list. Thank you for your responses.

Regards,
Chris


Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Greg KH
On Wed, Oct 12, 2016 at 09:08:27AM -0400, Chris Bell wrote:
> Hey everyone,
> 
> Not sure if this is the right place to ask, but I figured someone here or at
> RH would know. What's the TTL for systemd updates on EL7? My Arch box runs
> 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has been (if I'm
> not mistaken) since the 7.0 release. Any idea on when they may port an
> update?

Why not ask Red Hat?  You are paying for support, why wouldn't you be
able to get information like this directly from them?

good luck!

greg k-h


[systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

Hey everyone,

Not sure if this is the right place to ask, but I figured someone here 
or at RH would know. What's the TTL for systemd updates on EL7? My Arch 
box runs 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has 
been (if I'm not mistaken) since the 7.0 release. Any idea on when they 
may port an update?


Thanks,
Chris


Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Reindl Harald



Am 12.10.2016 um 15:08 schrieb Chris Bell:

Hey everyone,

Not sure if this is the right place to ask


no


but I figured someone here
or at RH would know. What's the TTL for systemd updates on EL7? My Arch
box runs 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has
been (if I'm not mistaken) since the 7.0 release


not true, there were at least one if not two major jumps within the
CentOS 7 lifetime, and it was released with 208



Any idea on when they
may port an update?


when they think it's needed for very good reasons. You implicitly bought
"no version jumps" with RHEL7 - so choose another distribution or live
with it



Re: [systemd-devel] How to deploy systemd-nspawn containers and use for deployment

2016-10-12 Thread Chris Bell

On 2016-10-11 22:29, Samuel Williams wrote:


For step 2, what would be the best practice. Rsync the local container
to the remote container?



That's worked fine for me so far. Just to state the obvious: make sure
the container is stopped before using rsync.
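The stop-sync-start cycle could look something like this (container name, remote host, and paths are assumptions):

```shell
# stop both copies before overwriting the target, then sync and restart;
# "|| true" tolerates a container that is not currently running
machinectl stop web1 || true
ssh remote machinectl stop web1 || true
rsync -aHAX --delete /var/lib/machines/web1/ remote:/var/lib/machines/web1/
ssh remote machinectl start web1
```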



[systemd-devel] Identical model scsi disks reported by scsi_id with identical ID_SERIAL

2016-10-12 Thread Doug VanLeuven
System under discussion:
centos 7 (Core) under hyper-V gen2, systemd-219-19.el7_2.12.x86_64
kernel 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016

I bought two drives (ID_VENDOR=HGST, ID_MODEL=HMS5C4040BLE640)
to create a mirror raid. Using the default
IMPORT{program}="scsi_id --export --whitelisted -d $devnode"
in /usr/lib/udev/rules.d, they are both reported
as ID_SERIAL=SHGST_HMS5C4040BLE640

What ended up in /dev/disk/by-id was only
   scsi-SHGST_HMS5C4040BLE640 -> ../../sdb
and no link for sdc. Notice that ID_SERIAL is the same as ID_MODEL.

This ID_SERIAL duplication gets logged in journal as:
Device dev-disk-by\x2did-scsi\x2dSHGST_HMS5C4040BLE640.device \
appeared twice with different sysfs paths \
/sys/devices/LNXSYSTM:00/device:00/ACPI0004:00/VMBUS:00/\
vmbus_8/host0/target0:0:0/0:0:0:2/block/sdb and \
/sys/devices/LNXSYSTM:00/device:00/ACPI0004:00/VMBUS:00/\
 vmbus_8/host0/target0:0:0/0:0:0:3/block/sdc

When users have posted similar log messages to the distribution lists,
the usual reply is that the message is "harmless", because general
practice has been to mount partitions using /dev/disk/by-uuid.
But I need a whole-disk reference for the raid.

/dev/sdb & /dev/sdc were created OK.
I could reference the drives that way, but since I was using them in a
mirror, I worried that after a future failure, removal, and replacement,
the right physical devices might not come back as sdb & sdc. For example,
if I added a hot spare it would be on the intel controller, whereas
these drives are on the marvell controller; they might end up
assigned sdc & sdd with the spare at sdb.

By using SCSI page 0x83 instead of page 0x80 (which seems to be the
default) the drives report their true serial numbers
ID_SCSI_SERIAL=PL1331LAGY0HSH and ID_SCSI_SERIAL=PL1331LAGY0Z4H

By using:
IMPORT{program}="scsi_id --page=0x83 --export --whitelisted -d $devnode"
then for the ID_SERIAL SYMLINK:
ENV{ID_SCSI_SERIAL}!="?*", ENV{ID_SERIAL}=="?*"
and for the ID_SCSI_SERIAL SYMLINK:
ENV{ID_SCSI_SERIAL}=="?*"

this stops the duplicate allocation and creates unique entries for
drives that report ID_SCSI_SERIAL, with the prior behavior for drives
that don't.

But I understand that this would break existing installations.

For existing installations without disk ID_SERIAL duplicates, changing
the name from what appears to be the model number to a true serial
number would be a regression. There should be a way to either:
1. alter the 1st & 2nd drives' names only when the duplicate occurs
   on the 2nd drive
2. leave the 1st drive alone and use the different naming technique
   only for the 2nd drive after an "appeared twice" error, because right
   now those drives don't have any name at all
The upside of 2 is existing implementations don't change, but the
downside is drive by-id names would change if the port sequence was
changed.

For my own purposes, I crafted a rules file in /etc/udev/rules.d that
creates entries in a new /dev/disk/by-serial/ to hold the links using
ID_SCSI_SERIAL and that works quite well for me. For example:
zpool import -d /dev/disk/by-serial

True, I'm only using disks for raid that report ID_SCSI_SERIAL, but
it does offer a way to introduce a different naming convention
without disturbing the existing conventions.
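A rules file along the lines described above might look like this (a sketch only; the exact match conditions and rule ordering are assumptions, not the poster's verbatim file):

```
# /etc/udev/rules.d/61-disk-by-serial.rules
# Import page 0x83 identifiers, then link whole disks by their true
# serial number under /dev/disk/by-serial/
ACTION!="remove", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", IMPORT{program}="scsi_id --page=0x83 --export --whitelisted -d $devnode"
ENV{ID_SCSI_SERIAL}=="?*", ENV{DEVTYPE}=="disk", SYMLINK+="disk/by-serial/$env{ID_SCSI_SERIAL}"
```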

I could have used /dev/disk/by-path, except path_id_compat has been
dropped from systemd-219-19 udev and the IMPORT builtin 'path_id' returned
non-zero.
However openSUSE Leap 42.1 (x86_64) in a virtual machine under hyper-V
gen1 is still using udev-210-95.1.x86_64 and path_id_compat returns
ID_PATH_COMPAT=scsi-0:0:0:0 and correctly populates that directory.
I can only speculate why systemd dropped support for that, but
that is a different issue.

Hopefully this will kickstart some discussion
and someone else will have a better idea.