[ceph-users] Re: adding mds service , unable to create keyring for mds

2022-09-14 Thread Xiubo Li
On 15/09/2022 03:09, Jerry Buburuz wrote: Hello, I am trying to add my first mds service on any node. I am unable to add keyring to start mds service. # $ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' Error EINVAL: key for

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Neha Ojha
Hi Yuri, On Wed, Sep 14, 2022 at 8:02 AM Adam King wrote: > > orch suite failures fall under > https://tracker.ceph.com/issues/49287 > https://tracker.ceph.com/issues/57290 > https://tracker.ceph.com/issues/57268 > https://tracker.ceph.com/issues/52321 > > For rados/cephadm the failures are both

[ceph-users] adding mds service , unable to create keyring for mds

2022-09-14 Thread Jerry Buburuz
Hello, I am trying to add my first mds service on any node. I am unable to add keyring to start mds service. # $ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' Error EINVAL: key for mds.mynode exists but cap mds does not match. I tried
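
That "key ... exists but cap mds does not match" error usually means an mds.mynode key was already created earlier with different caps, so get-or-create refuses to hand it back. A minimal sketch of the two usual ways out, assuming admin access and that the daemon name mds.mynode from the thread is being reused:

    # see which caps the existing key actually carries
    sudo ceph auth get mds.mynode

    # either update the existing key's caps in place ...
    sudo ceph auth caps mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'

    # ... or drop the stale key and recreate it (this invalidates the old secret)
    sudo ceph auth rm mds.mynode
    sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'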

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Adam King
orch suite failures fall under https://tracker.ceph.com/issues/49287 https://tracker.ceph.com/issues/57290 https://tracker.ceph.com/issues/57268 https://tracker.ceph.com/issues/52321 For rados/cephadm the failures are both https://tracker.ceph.com/issues/57290 Overall, nothing new/unexpected.

[ceph-users] Re: Manual deployment, documentation error?

2022-09-14 Thread Ranjan Ghosh
Hi Eugen, thanks for your answer. I don't want to use the cephadm tool because it needs docker. I don't like it because it's total overkill for our small 3-node cluster. I'd like to avoid the added complexity, added packages, everything. Just another thing I have to learn in detail about in

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Guillaume Abrioux
the ceph-volume failure seems valid. I need to investigate. thanks On Wed, 14 Sept 2022 at 11:12, Ilya Dryomov wrote: > On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein > wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/57472#note-1 > >

[ceph-users] Re: Manual deployment, documentation error?

2022-09-14 Thread Eugen Block
Hi, I'm currently trying the manual deployment because ceph-deploy unfortunately doesn't seem to exist anymore, and under step 19 it says you should run "sudo ceph -s". That doesn't seem to work. I guess this is because the manager service isn't yet running, right? ceph-deploy was deprecated
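
If a missing manager is indeed the problem, one can also be started by hand at this point. A rough sketch along the lines of the ceph-mgr administrator docs, assuming the monitor is up, the admin keyring is in /etc/ceph, and using "mynode" purely as an illustrative daemon name:

    # create a keyring for a manager daemon called "mynode" and store it in its data dir
    sudo mkdir -p /var/lib/ceph/mgr/ceph-mynode
    sudo ceph auth get-or-create mgr.mynode mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /var/lib/ceph/mgr/ceph-mynode/keyring

    # start the manager, then re-check cluster status
    sudo ceph-mgr -i mynode
    sudo ceph -s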

[ceph-users] Manual deployment, documentation error?

2022-09-14 Thread Ranjan Ghosh
Hi all, I think there's an error in the documentation: https://docs.ceph.com/en/quincy/install/manual-deployment/ I'm currently trying the manual deployment because ceph-deploy unfortunately doesn't seem to exist anymore, and under step 19 it says you should run "sudo ceph -s". That doesn't seem

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
Den ons 14 sep. 2022 kl 11:08 skrev gagan tiwari : > Yes. To start with we only have one HP server with DAS. Which I am planning > to set up as ceph on. We can have one more server later. > > But I think you are correct. I will use ZFS file systems on it and NFS export > all the data to all
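
For the single-box ZFS-plus-NFS route being discussed, a minimal sketch of what that could look like, assuming the controller is put into pass-through/JBOD mode so ZFS sees the raw disks as /dev/sdb through /dev/sdm, and with pool/dataset names and the client subnet chosen only for illustration:

    # one raidz2 vdev across the 12 disks (double parity, comparable to RAID6)
    sudo zpool create tank raidz2 /dev/sd{b..m}

    # a dataset for the shared data, exported read-write to the client subnet
    # (requires the NFS server service to be installed and running)
    sudo zfs create tank/data
    sudo zfs set sharenfs='rw=@192.168.1.0/24' tank/data

    # a RockyLinux 9 client would then mount it along the lines of
    sudo mount -t nfs storage-server:/tank/data /mnt/data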

[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Ilya Dryomov
On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/57472#note-1 > Release Notes - https://github.com/ceph/ceph/pull/48072 > > Seeking approvals for: > > rados - Neha, Travis, Ernesto, Adam > rgw - Casey > fs

[ceph-users] Re: 16.2.10 Cephfs with CTDB, Samba running on Ubuntu

2022-09-14 Thread Rafael Diaz Maurin
Hello, We recently built a similar config here: a Samba CTDB cluster on top of CephFS (under Pacific), via LXC containers (RockyLinux) under Proxmox (7.2), for 35000 users authenticated against Active Directory. It's used for personal homedirs and shared directories. The LXC Proxmox Samba
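
For anyone wanting to reproduce that kind of setup, a very rough sketch of the two pieces that tie clustered Samba to CephFS, assuming CephFS is already mounted at /mnt/cephfs on every Samba node and that the paths and share name below are only illustrative:

    # /etc/ctdb/ctdb.conf -- CTDB wants its recovery lock on shared storage, here the CephFS mount
    [cluster]
        recovery lock = /mnt/cephfs/ctdb/.reclock

    # /etc/samba/smb.conf -- enable clustering and back the share with the CephFS mount
    [global]
        clustering = yes
    [shared]
        path = /mnt/cephfs/shares
        read only = no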

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
Yes. To start with we only have one HP server with DAS, which I am planning to set up Ceph on. We can have one more server later. But I think you are correct. I will use ZFS file systems on it and NFS export all the data to all clients. So, please advise me whether I should use RAID6 with ZFS /

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
Den ons 14 sep. 2022 kl 10:14 skrev gagan tiwari : > > Sorry. I meant SSD Solid state disks. >> > We have a HP storage server with 12 SDD of 5T each and have set-up hardware >> > RAID6 on these disks. >> >> You have only one single machine? >> If so, run zfs on it and export storage as NFS.

[ceph-users] Re: Increasing number of unscrubbed PGs

2022-09-14 Thread Burkhard Linke
Hi, On 9/13/22 16:33, Wesley Dillingham wrote: what does "ceph pg ls scrubbing" show? Do you have PGs that have been stuck in a scrubbing state for a long period of time (many hours, days, weeks etc.)? This will show in the "SINCE" column. The deep scrubs have been running for some minutes to
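
To act on that suggestion, a quick sketch, assuming a release where "ceph pg ls" accepts a state filter (the command Wesley quotes):

    # list PGs currently scrubbing; the SINCE column shows how long each has been at it
    ceph pg ls scrubbing

    # deep scrubs show up with a state of active+clean+scrubbing+deep
    ceph pg ls scrubbing | grep deep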

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
Sorry. I meant SSD, solid-state disks. Thanks, Gagan On Wed, Sep 14, 2022 at 12:49 PM Janne Johansson wrote: > Den ons 14 sep. 2022 kl 08:54 skrev gagan tiwari > : > > Hi Guys, > > I am new to Ceph and storage. We have a requirement of > > managing around 40T of data which will

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
Den ons 14 sep. 2022 kl 08:54 skrev gagan tiwari : > Hi Guys, > I am new to Ceph and storage. We have a requirement of > managing around 40T of data which will be accessed by around 100 clients > all running RockyLinux9. > > We have a HP storage server with 12 SDD of 5T each and

[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Jarett
Did you mean SSD? 12 x 5TB solid-state disks? Or is that “Spinning Disk Drive?” Do you have any SSDs/NVMe you can use? From: gagan tiwari Sent: Wednesday, September 14, 2022 1:54 AM To: ceph-users@ceph.io Subject: [ceph-users] ceph deployment best practice Hi Guys, I am new to Ceph and

[ceph-users] ceph deployment best practice

2022-09-14 Thread gagan tiwari
Hi Guys, I am new to Ceph and storage. We have a requirement of managing around 40T of data which will be accessed by around 100 clients all running RockyLinux9. We have a HP storage server with 12 SDD of 5T each and have set up hardware RAID6 on these disks. HP storage server