[Bug 1881747] Re: cephadm does not work with zfs root

2022-04-26 Thread Tobias Bossert
Related pull request on ceph side:
https://github.com/ceph/ceph/pull/46043

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1881747

Title:
  cephadm does not work with zfs root

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881747/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1881747] Re: cephadm does not work with zfs root

2021-03-04 Thread Anders Johansen
** Also affects: zfs-linux (Arch Linux)
   Importance: Undecided
   Status: New


[Bug 1881747] Re: cephadm does not work with zfs root

2020-10-06 Thread Martin Strange
I think the reason that ZFS behaves differently is because of this...

/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/activate.py

from ceph_volume.util import system

# mount on tmpfs the osd directory
osd_path = '/var/lib/ceph/osd/%s-%s' % (conf.cluster, osd_id)
if not system.path_is_mounted(osd_path):
    # mkdir -p and mount as tmpfs
    prepare_utils.create_osd_path(osd_id, tmpfs=tmpfs)


This "path_is_mounted" test appears to misbehave on a ZFS root,
causing it to fall back to using the tmpfs.

The test is ultimately traced to "get_mounts" in 
/usr/lib/python3.6/site-packages/ceph_volume/util/system.py

On Linux, this reads through /proc/mounts

On a ZFS root, the line it should be finding resembles this...

rpool/ROOT/ubuntu_4trzhh/var/lib /var/lib/ceph/osd/ceph-0 zfs rw,relatime,xattr,posixacl 0 0

...whereas on a normal ext4 root, it looks like this...

/dev/nvme0n1p2 /var/lib/ceph/osd/ceph-0 ext4 rw,relatime,errors=remount-ro 0 0

There's some logic in there about the device needing to start with a
leading "/", and I think that is what confuses the test, since on a ZFS
root the device is a dataset like "rpool/..." with no leading slash.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-10-06 Thread Martin Strange
Follow up - it does seem to be the tmpfs mount that activate creates
that causes the problem.

I manually started the activate container by running the podman command
from unit.run for the activate step, but ran "bash -l" instead of the
actual activate command.

Then I prevented the tmpfs mount from doing anything by removing
/usr/bin/mount and replacing it with a link to /usr/bin/true, and then
ran the original activate command:

# /usr/sbin/ceph-volume lvm activate 2 56b13799-3ef5-4ea5-91d5-474f829f12dc --no-systemd

Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2   <<< WHY DOES IT DO THIS?
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-6fc7e3e3-2ce6-47ab-aac8-adc5c6633dfb/osd-block-56b13799-3ef5-4ea5-91d5-474f829f12dc /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--6fc7e3e3--2ce6--47ab--aac8--adc5c6633dfb-osd--block--56b13799--3ef5--4ea5--91d5--474f829f12dc
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
--> ceph-volume lvm activate successful for osd ID: 2

Because the tmpfs mount was now effectively a no-op, this activation
created the necessary files in the real OSD directory, and after a
systemctl restart of the osd service it came up apparently OK.
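The no-op-mount trick can be sketched without touching the real /usr/bin/mount: a symlink to "true" in a temporary sandbox stands in for the replacement described above, and the OSD path is only illustrative.

```python
# Sketch of the workaround above: point "mount" at "true" so a tmpfs
# mount becomes a no-op. Done in a temporary sandbox directory rather
# than over the real /usr/bin/mount.
import os
import shutil
import subprocess
import tempfile

true_path = shutil.which("true")            # usually /usr/bin/true
sandbox = tempfile.mkdtemp()
fake_mount = os.path.join(sandbox, "mount")
os.symlink(true_path, fake_mount)           # stand-in for: ln -s /usr/bin/true /usr/bin/mount

# Any "mount" invocation now exits 0 without mounting anything, so the
# activate step would write into the real OSD directory, not a tmpfs.
result = subprocess.run([fake_mount, "-t", "tmpfs", "tmpfs", "/var/lib/ceph/osd/ceph-2"])
print(result.returncode)  # 0, although no tmpfs was actually mounted
```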

I also did another fresh install on the same hardware using a normal
non-ZFS root, and this problem did not happen, so it does appear in some
way to be an interaction with ZFS.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-10-05 Thread Martin Strange
For what it's worth, I've now had the exact same problem, which led me
here.

On a bare-metal 20.04 install using whole blank HDDs as OSDs (/dev/sda
etc.), installing with cephadm worked fine on an XFS root, but when I
later reinstalled with a ZFS root, I got the same behaviour described
above, despite trying device zaps and everything else I could think of.

It seems that unit.run does two separate steps: first "/usr/sbin/ceph-volume
lvm activate 0" and then "/usr/bin/ceph-osd -n osd.0".

The activate step does its work inside a tmpfs mounted at
"/var/lib/ceph/osd/ceph-0", which is thrown away entirely when that
container ends, so the symlink "/var/lib/ceph/osd/ceph-0/block" it
creates is gone before the ceph-osd container starts up, resulting in it
not finding a "block" any more and then declaring an unknown type
because of that.

I don't understand how that could ever possibly work, so maybe the ZFS
root is not relevant, or maybe it somehow causes activate to use the
tmpfs?

Note that if I run a single container manually, and do the same activate
followed by running ceph-osd then the OSD does come up.

How is "/var/lib/ceph/osd/ceph-0/block" meant to persist between running
the activate in one container and then running the ceph-osd in a
different one afterwards? Or is the "/usr/bin/mount -t tmpfs tmpfs
/var/lib/ceph/osd/ceph-0" it does during activate somehow the source of
this problem?


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-09 Thread Bryant G Ly
We tried docker by itself, then tried ceph-ansible by itself, to deploy.
https://docs.ceph.com/ceph-ansible/master/
For ceph-ansible we used version 5.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-09 Thread Bryant G Ly
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-09 Thread Andrea Righi
I was pretty much following this simple tutorial:
http://prashplus.blogspot.com/2018/01/ceph-single-node-setup-ubuntu.html

I'll try to add docker and ceph-ansible to the equation and see if I can
reproduce it.

** Changed in: zfs-linux (Ubuntu)
   Status: New => In Progress


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-09 Thread Andrea Righi
BTW, how did you install ceph-ansible? I can't find a 20.04 package in
the ansible ppa.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-08 Thread Bryant G Ly
We are using the latest Ubuntu 20.04, and we have tried both ceph-ansible
and docker deploys; both give us issues with a ZFS root fs. How are you
deploying?

If you give me your list of commands + image I can retry.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-08 Thread Andrea Righi
I've tried to reproduce the problem on a VM (that uses ZFS as the rootfs)
by setting up a single-node ceph cluster, but the OSD comes up correctly:

$ sudo ceph -s | grep osd
osd: 1 osds: 1 up (since 50m), 1 in (since 59m)

Could you provide more details about your particular ceph configuration
/ infrastructure, so that I can try to reproduce the problem in an
environment more similar to yours? Thanks.


[Bug 1881747] Re: cephadm does not work with zfs root

2020-09-08 Thread Andrea Righi
** Changed in: zfs-linux (Ubuntu)
 Assignee: (unassigned) => Andrea Righi (arighi)
