[ceph-users] Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

2024-05-27 Thread P Wagner-Beccard
Hey Dulux-Oz,
Care to share how you did it in the end?
The vg/lv syntax, or the <host>:/dev/vg_osd/lvm_osd form?
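(For reference, the two spellings I mean, roughly — a sketch only, `ceph01` is just a
placeholder host name and I'm not sure every release accepts both forms:)

```
# vg/lv notation, the form ceph-volume uses internally
ceph orch daemon add osd ceph01:vg_osd/lvm_osd

# full path to the LV's device node
ceph orch daemon add osd ceph01:/dev/vg_osd/lvm_osd
```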

On Mon, 27 May 2024 at 09:49, duluxoz  wrote:

> @Eugen, @Cedric
>
> DOH!
>
> Sorry lads, my bad! I had a typo in my lv name - that was the cause of
> my issues.
>
> My apologies for being so stupid - and *thank you* for the help; having
> a couple of fresh brains on things helps to eliminate possibilities and
> narrow down the cause of the issue.
>
> Thanks again for all the help
>
> Cheers
>
> Dulux-Oz


[ceph-users] Re: ceph recipe for nfs exports

2024-04-25 Thread P Wagner-Beccard
I'm not using Ceph Ganesha but GPFS Ganesha, so YMMV

> ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt
> --fsname vol1
> --> nfs mount
> mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
>
> - Although I can mount the export I can't write on it

You created an export, but no clients are set up; see
https://docs.ceph.com/en/latest/mgr/nfs/#create-cephfs-export

[--client_addr ...]

It should be fine to set this to an IP range like /24, /8, or whatever;
you should end up with a client config like this:
https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/export.txt#L264
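
A minimal sketch of the command with --client_addr, reusing the values from your
commands above (flag spelling per the docs linked a few lines up; adjust for your
release):

```
# limit the export to a client range; --client_addr can be repeated
ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt \
    --fsname vol1 --client_addr 192.168.7.0/24
```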



> - I can't understand the concept of "pseudo path"

Let's say you export your fs with the pseudo path /mnt;
when you then mount the NFS under /mnt on the client machine, you end up with a
folder /mnt/mnt.
We use this to separate different products in GPFS:
we have a pseudo path set to /products/something, and on the machines we
have the NFS mounted under /corp.
The combination of both then comes to /corp/products/something.
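
Translated to the ceph side, a sketch of that layout (cluster id, fsname and server
IP reused from your example, purely illustrative):

```
# export under the pseudo path /products/something
ceph nfs export create cephfs --cluster-id nfs-cephfs \
    --pseudo-path /products/something --fsname vol1

# on the client, mount the NFS pseudo root under /corp ...
mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/ /corp
# ... and the data then appears at /corp/products/something
ls /corp/products/something
```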





On Thu, 25 Apr 2024 at 16:55, Roberto Maggi @ Debian 
wrote:

> first of all thanks to all!
>
>
> As supposed by Robert Sander, I get "permission denied", but even writing
> with root privileges I get the same error.
>
> As soon as I can I'll test your suggestions and update the thread.
>
>
> Thanks again
>
> On 4/24/24 16:05, Adam King wrote:
> >
> > - Although I can mount the export I can't write on it
> >
> > What error are you getting when trying to do the write? The way you set
> > things up doesn't look too different from one of our integration tests
> > for ingress over nfs
> > (
> https://github.com/ceph/ceph/blob/main/qa/suites/orch/cephadm/smoke-roleless/2-services/nfs-ingress.yaml)
>
> > and that test tests a simple read/write to the export after
> > creating/mounting it.
> >
> > - I can't understand how to use the sdc disks for journaling
> >
> >
> > you should be able to specify a `journal_devices` section in an OSD
> > spec. For example
> >
> > service_type: osd
> > service_id: foo
> > placement:
> >   hosts:
> >   - vm-00
> > spec:
> >   data_devices:
> > paths:
> > - /dev/vdb
> >   journal_devices:
> > paths:
> > - /dev/vdc
> >
> > That will make non-colocated OSDs where the devices from the
> > journal_devices section are used as journal devices for the OSDs on
> > the devices in the data_devices section. Although I'd recommend
> > looking through
> >
> https://docs.ceph.com/en/latest/cephadm/services/osd/#advanced-osd-service-specifications
> > and see if there are any other filtering options besides the path that
> > can be used first. It's possible the path the device gets can change on
> > reboot, and you could end up with cephadm using a device you don't want
> > it to use for this as that other device gets the path another device held
> > previously.
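
(My sketch of what that filtering could look like — not from Adam's mail: select
devices by their rotational flag instead of by path, then hand the spec to cephadm.
`foo` and the host `vm-00` are just the placeholders from his example.)

```
cat > osd-spec.yaml <<EOF
service_type: osd
service_id: foo
placement:
  hosts:
  - vm-00
spec:
  data_devices:
    rotational: 1        # spinning disks become data devices
  journal_devices:
    rotational: 0        # flash devices take the journals
EOF
ceph orch apply -i osd-spec.yaml --dry-run   # drop --dry-run once the preview looks right
```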
> >
> > - I can't understand the concept of "pseudo path"
> >
> >
> > I don't know at a low level either, but it seems to just be the path
> > nfs-ganesha will present to the user. There is another argument to
> > `ceph nfs export create` which is just "path" rather than pseudo-path
> > that marks what actual path within the cephfs the export is mounted
> > on. It's optional and defaults to "/" (so the export you made is
> > mounted at the root of the fs). I think that's the one that really
> > matters. The pseudo-path seems to just act like a user facing name for
> > the path.
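
(Sketching the difference — not from Adam's mail; /volumes/data is a made-up CephFS
directory, the rest reuses the values from earlier in the thread:)

```
# export the CephFS directory /volumes/data (the "path") under the
# NFS-visible name /mnt (the "pseudo-path")
ceph nfs export create cephfs --cluster-id nfs-cephfs --pseudo-path /mnt \
    --fsname vol1 --path /volumes/data

# clients only ever see the pseudo-path
mount -t nfs -o nfsvers=4.1,proto=tcp 192.168.7.80:/mnt /mnt/ceph
```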
> >
> > On Wed, Apr 24, 2024 at 3:40 AM Roberto Maggi @ Debian
> >  wrote:
> >
> > Hi you all,
> >
> > I'm almost new to ceph and I'm understanding, day by day, why the
> > official support is so expensive :)
> >
> >
> > I'm setting up a ceph nfs network cluster; the recipe can be found
> > below.
> >
> > ###
> >
> > --> cluster creation
> > cephadm bootstrap --mon-ip 10.20.20.81 --cluster-network 10.20.20.0/24 --fsid $FSID \
> >   --initial-dashboard-user adm --initial-dashboard-password 'Hi_guys' --dashboard-password-noupdate \
> >   --allow-fqdn-hostname --ssl-dashboard-port 443 \
> >   --dashboard-crt /etc/ssl/wildcard.it/wildcard.it.crt --dashboard-key /etc/ssl/wildcard.it/wildcard.it.key \
> >   --allow-overwrite --cleanup-on-failure
> > cephadm shell --fsid $FSID -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
> > cephadm add-repo --release reef && cephadm install ceph-common
> > --> adding hosts and set labels
> > for IP in $(grep ceph /etc/hosts | awk '{print $1}') ; do ssh-copy-id -f -i /etc/ceph/ceph.pub root@$IP ; done
> > ceph orch host add cephstage01 10.20.20.81 --labels _admin,mon,mgr,prometheus,grafana
> > ceph orch host add 

[ceph-users] Re: Ceph image delete error - NetHandler create_socket couldnt create socket

2024-04-19 Thread P Wagner-Beccard
With cephadm you're able to set these values cluster wide.
See the host-management section of the docs.
https://docs.ceph.com/en/reef/cephadm/host-management/#os-tuning-profiles

On Fri, 19 Apr 2024 at 12:40, Konstantin Shalygin  wrote:

> Hi,
>
> > On 19 Apr 2024, at 10:39, Pardhiv Karri  wrote:
> >
> > Thank you for the reply. I tried setting ulimit to 32768 when I saw 25726
> > in the lsof output, and then after deleting 2 disks it got an error again;
> > I checked lsof and it is now above 35000. I'm not sure how to handle it.
> > I rebooted the monitor node, but the open files kept growing.
> >
> > root@ceph-mon01 ~# lsof | wc -l
> > 49296
> > root@ceph-mon01 ~#
>
> This means that this is not a Ceph problem. It is a problem in this system itself.
>
>
> k


[ceph-users] Setting Up Multiple HDDs with replicated DB Device

2023-12-03 Thread P Wagner-Beccard
Hey Cephers,

Hope you're all doing well! I'm in a bit of a pickle and could really use
some of your collective brainpower.

Here's the scoop:

I have a setup with around 10 HDDs and 2 NVMes (plus uninteresting boot
disks).
My initial goal was to configure part of the HDDs (6 out of 7TB) into an
md0 or similar device to be used as a DB device (the rest would be nice to
use as an NVMe OSD).
I made some clumsy attempts to set them up "right".

While the OSDs do get deployed, they are not showing up in the
dashboard.
The specific error when running `ceph orch device ls`: 'Insufficient space
(<10 extents) on vgs, LVM detected, locked.'
Given these, I have a few questions:

Are there specific configurations or steps that I might be missing when
setting up the DB device with multiple HDDs?
(Related: I'm currently trying things like this:
https://paste.openstack.org/show/bdPHXQ0BMypWnZTYosT2/ )
Could the error message indicate a particular issue with my current setup
or approach?
If anyone has successfully configured a similar setup, could you please
share your insights or the steps you took? (The kind of spec I mean is
sketched below.)
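
Roughly what I have in mind (a sketch only; the service_id is a placeholder, and
leftover LVM on the disks may need a `ceph orch device zap <host> <device> --force`
before cephadm will touch them):

```
cat > osd-spec.yaml <<EOF
service_type: osd
service_id: hdd-with-nvme-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1        # the HDDs
  db_devices:
    rotational: 0        # the NVMes carry the DB/WAL
    limit: 2
EOF
ceph orch apply -i osd-spec.yaml --dry-run
```

That sidesteps the md0 idea entirely and just points db_devices at the NVMes, which
seems to be what the advanced OSD service spec docs expect.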

Thanks a bunch!

Cheers,


[ceph-users] Re: Rook-Ceph OSD Deployment Error

2023-11-28 Thread P Wagner-Beccard
(Again to the mailing list, oops)
Hi Travis,

Thanks for your input – it's greatly appreciated. I assume that my
deployment was using v17.2.6, as I hadn't explicitly specified a version in
my provided rook-ceph-cluster/values.yaml
<https://github.com/rook/rook/blob/master/deploy/charts/rook-ceph-cluster/values.yaml#L96>
However, due to an issue with the Dashboard (I could not use the object-gateway
tab), I updated to v17.2.7, so I can't confirm this anymore.
I'm considering completely destroying the cluster and redeploying with
potentially even v16. Any further advice or insights in this regard would
be very helpful.
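
(For the archives: if I do redeploy, pinning the Ceph image in the chart values is
probably the cleanest way to stay on a known-good version until v17.2.8/v18.2.1
lands — an untested sketch; the release name, namespace and repo alias are
assumptions, the key layout follows the linked values.yaml:)

```
cat > ceph-version-override.yaml <<EOF
cephClusterSpec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.6
EOF
helm upgrade rook-ceph-cluster rook-release/rook-ceph-cluster \
    --namespace rook-ceph --reuse-values -f ceph-version-override.yaml
```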
Best regards,
P

On Mon, 27 Nov 2023 at 20:32, Travis Nielsen  wrote:

> Sounds like you're hitting a known issue with v17.2.7.
> https://github.com/rook/rook/issues/13136
>
> The fix will be in v18.2.1 if it's an option to upgrade to Reef. If not,
> you'll need to use v17.2.6 until the fix comes out for quincy in v17.2.8.
>
> Travis
>
> On Thu, Nov 23, 2023 at 4:06 PM P Wagner-Beccard <
> wagner-kerschbau...@schaffroth.eu> wrote:
>
>> Hi Mailing-Lister's,
>>
>> I am reaching out for assistance regarding a deployment issue I am facing
>> with Ceph on a 4 node RKE2 cluster. We are attempting to deploy Ceph via
>> the rook helm chart, but we are encountering an issue that seems to be
>> related to a known bug (https://tracker.ceph.com/issues/61597).
>>
>> During the OSD preparation phase, the deployment consistently fails with
>> an
>> IndexError: list index out of range. The logs indicate a problem occurs
>> when configuring new Disks, specifically using /dev/dm-3 as a metadata
>> device. It's important to note that /dev/dm-3 is an LVM on top of an mdadm
>> RAID, which might or might not be contributing to this issue. (I swear,
>> this setup worked already)
>>
>> Here is a snippet of the error from the deployment logs:
>> > 2023-11-23 23:11:30.196913 D | exec: IndexError: list index out of range
>> > 2023-11-23 23:11:30.236962 C | rookcmd: failed to configure devices:
>> failed to initialize osd: failed ceph-volume report: exit status 1
>> https://paste.openstack.org/show/bileqRFKbolrBlTqszmC/
>>
>> We have attempted different configurations, including specifying devices
>> explicitly and using the useAllDevices: true option with a specified
>> metadata device (/dev/dm-3 or the /dev/pv_md0/lv_md0 path). However, the
>> issue persists across multiple configurations.
>>
>> tested configurations are as follows:
>>
>> Explicit device specification:
>>
>> ```yaml
>> nodes:
>>   - name: "ceph01.maas"
>> devices:
>>   - name: /dev/dm-1
>>   - name: /dev/dm-2
>>   - name: "sdb"
>> config:
>>   metadataDevice: "/dev/dm-3"
>>   - name: "sdc"
>> config:
>>   metadataDevice: "/dev/dm-3"
>> ```
>>
>> General device specification with metadata device:
>> ```yaml
>> storage:
>>   useAllNodes: true
>>   useAllDevices: true
>>   config:
>> metadataDevice: /dev/dm-3
>> ```
>>
>> I would greatly appreciate any insights or recommendations on how to
>> proceed or work around this issue.
>> Is there a halfway decent way to apply the fix or maybe a workaround that
>> we can apply to successfully deploy Ceph in our environment?
>>
>> Kind regards,


[ceph-users] Rook-Ceph OSD Deployment Error

2023-11-23 Thread P Wagner-Beccard
Hi Mailing-Lister's,

I am reaching out for assistance regarding a deployment issue I am facing
with Ceph on a 4 node RKE2 cluster. We are attempting to deploy Ceph via
the rook helm chart, but we are encountering an issue that seems to be
related to a known bug (https://tracker.ceph.com/issues/61597).

During the OSD preparation phase, the deployment consistently fails with an
IndexError: list index out of range. The logs indicate a problem occurs
when configuring new Disks, specifically using /dev/dm-3 as a metadata
device. It's important to note that /dev/dm-3 is an LVM on top of an mdadm
RAID, which might or might not be contributing to this issue. (I swear,
this setup worked already)

Here is a snippet of the error from the deployment logs:
> 2023-11-23 23:11:30.196913 D | exec: IndexError: list index out of range
> 2023-11-23 23:11:30.236962 C | rookcmd: failed to configure devices:
failed to initialize osd: failed ceph-volume report: exit status 1
https://paste.openstack.org/show/bileqRFKbolrBlTqszmC/

We have attempted different configurations, including specifying devices
explicitly and using the useAllDevices: true option with a specified
metadata device (/dev/dm-3 or the /dev/pv_md0/lv_md0 path). However, the
issue persists across multiple configurations.

tested configurations are as follows:

Explicit device specification:

```yaml
nodes:
  - name: "ceph01.maas"
devices:
  - name: /dev/dm-1
  - name: /dev/dm-2
  - name: "sdb"
config:
  metadataDevice: "/dev/dm-3"
  - name: "sdc"
config:
  metadataDevice: "/dev/dm-3"
```

General device specification with metadata device:
```yaml
storage:
  useAllNodes: true
  useAllDevices: true
  config:
metadataDevice: /dev/dm-3
```

I would greatly appreciate any insights or recommendations on how to
proceed or work around this issue.
Is there a halfway decent way to apply the fix or maybe a workaround that
we can apply to successfully deploy Ceph in our environment?

Kind regards,