[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Ian Kaufman
Actually, that is exactly what I was looking for. Thanks. Ian On Thu, Oct 27, 2022 at 3:31 PM Federico Lucifredi wrote: > Not exactly what you asked, but just to make sure you are aware, there is > a project delivering Windows native Ceph drivers. If performance is an > issue, these are going

[ceph-users] Re: 16.2.11 branch

2022-10-27 Thread Laura Flores
Hi Oleksiy, The Pacific RC has not been declared yet since there have been problems in our upstream testing lab. There is no ETA yet for v16.2.11 for that reason, but the full diff of all the patches that were included will be published to ceph.io when v16.2.11 is released. There will also be a

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Bailey Allison
Hi, That is most likely possible, but the difference in performance between CephFS + Samba and RBD + Ceph iSCSI + Windows SMB would probably be extremely noticeable, and not in a good way. As Wyll mentioned, the recommended way is to just share out SMB on top of an existing CephFS mount

[ceph-users] Re: Mirror de.ceph.com broken?

2022-10-27 Thread Oliver Freyermuth
Hi all, according to the list of mirror contacts in the repo at: https://github.com/ceph/ceph/blob/main/mirroring/MIRRORS the person to ask is Oliver Dzombic. I have added him in CC. Cheers and hope that helps, Oliver On 27.10.22 at 21:43 Mike Perez wrote: Hi Christian,

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Ian Kaufman
Would it be plausible to have Windows DFS servers mount the Ceph cluster via iSCSI? And then share the data out in a more Windows native way? Thanks, Ian On Thu, Oct 27, 2022 at 1:50 PM Wyll Ingersoll < wyllys.ingers...@keepertech.com> wrote: > > No - the recommendation is just to mount

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Christophe BAILLON
Thanks, it's fine > From: "Wyll Ingersoll" > To: "Christophe BAILLON" > Cc: "Eugen Block" , "ceph-users" > Sent: Thursday, October 27, 2022 22:49:18 > Subject: Re: [ceph-users] Re: SMB and ceph question > No - the recommendation is just to mount /cephfs using the kernel module and > then share it

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Edward R Huyer
There do exist vfs_ceph and vfs_ceph_snapshots modules for Samba, at least in theory. https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html https://www.samba.org/samba/docs/current/man-html/vfs_ceph_snapshots.8.html However, they don't exist in, for instance, the version of Samba in
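
For reference, a minimal smb.conf sketch of what a vfs_ceph share might look like where the module is available (the share name, user id, and paths are made-up placeholders; ceph:config_file and ceph:user_id are options documented in the vfs_ceph man page linked above):

    [cephfs-share]
        ; placeholder share; vfs_ceph talks to the cluster directly via libcephfs,
        ; so no kernel mount of the path is needed on the Samba host
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        ; files are not opened through the local kernel, so kernel share modes are usually disabled
        kernel share modes = no
        read only = no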

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Wyll Ingersoll
No - the recommendation is just to mount /cephfs using the kernel module and then share it via standard VFS module from Samba. Pretty simple. From: Christophe BAILLON Sent: Thursday, October 27, 2022 4:08 PM To: Wyll Ingersoll Cc: Eugen Block ; ceph-users
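
A minimal sketch of that recommended setup, assuming a hypothetical monitor address, client name, secret file, and mount point:

    # on the gateway host: mount CephFS with the kernel client
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret

    # then export the mount point with an ordinary Samba share in smb.conf,
    # using the standard VFS (no Ceph-specific module)
    [cephfs]
        path = /mnt/cephfs
        read only = no
        browseable = yes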

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Christophe BAILLON
Re Ok, I thought there was a module like Ganesha for NFS to install directly on the cluster... - Original Message - > From: "Wyll Ingersoll" > To: "Eugen Block" , "ceph-users" > Sent: Thursday, October 27, 2022 15:25:36 > Subject: [ceph-users] Re: SMB and ceph question > I don't think there

[ceph-users] OSD crashes

2022-10-27 Thread Daniel Brunner
Hi, I noticed one of my OSDs keeps crashing even when run manually. This is my homelab and nothing too critical is going on in my cluster, but I'd like to know what the issue is. I am running on Arch Linux ARM (aarch64 on an odroid-hc4) and compiled everything ceph-related myself, ceph version 17.2.4

[ceph-users] Re: Mirror de.ceph.com broken?

2022-10-27 Thread Mike Perez
Hi Christian, Thank you for reporting this. I did a git blame on the file and saw that Wido added it. 63be401a411ffc7c2f78e450a29c69eee1af02d3 Wido, do you happen to know who is maintaining this mirror? On Thu, Oct 20, 2022 at 1:06 AM Christian Rohmann wrote: > > Hey ceph-users, > > it

[ceph-users] 16.2.11 branch

2022-10-27 Thread Oleksiy Stashok
Hey guys, Could you please point me to the branch that will be used for the upcoming 16.2.11 release? I'd like to see the diff w/ 16.2.10 to better understand what was fixed. Thank you. Oleksiy
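
In the meantime, one way to eyeball the delta is to compare the v16.2.10 tag with the tip of the pacific branch, on the assumption that the point release will be cut from pacific:

    git clone https://github.com/ceph/ceph.git && cd ceph
    git log --oneline v16.2.10..origin/pacific     # commits merged since 16.2.10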

[ceph-users] Re: 1 pg stale, 1 pg undersized

2022-10-27 Thread Josh Baergen
Hi Alexander, I'd be suspicious that something is up with pool 25. Which pool is that? ('ceph osd pool ls detail') Knowing the pool and the CRUSH rule it's using is a good place to start. Then that can be compared to your CRUSH map (e.g. 'ceph osd tree') to see why Ceph is struggling to map that
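
A sketch of those checks (pool id 25 comes from the earlier report; the PG id is a placeholder, substitute the real stale/undersized one):

    ceph osd pool ls detail | grep "^pool 25 "   # pool name, size, and crush_rule
    ceph osd crush rule dump                     # inspect the rule that pool references
    ceph osd tree                                # can the topology satisfy that rule?
    ceph pg 25.0 query                           # up/acting sets for the stuck PG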

[ceph-users] cephadm node-exporter extra_container_args for textfile_collector

2022-10-27 Thread Lee Carney
Has anyone had success in using cephadm to add extra_container_args onto the node-exporter config? For example changing the collector config. I am trying and failing using the following: 1. Create ne.yml service_type: node-exporter service_name: node-exporter placement: host_pattern: '*'
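
For reference, a sketch of how the spec might look with the extra arguments added (the bind-mount path is a made-up example; extra_container_args is meant for flags to the container engine's run command (podman/docker), not for node_exporter's own flags such as --collector.textfile.directory, which is a common source of confusion here):

    service_type: node-exporter
    service_name: node-exporter
    placement:
      host_pattern: '*'
    extra_container_args:
      - "--volume=/var/lib/node_exporter/textfile_collector:/var/lib/node_exporter/textfile_collector:ro"
    # apply with: ceph orch apply -i ne.yml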

[ceph-users] Correction: 10/27/2022 perf meeting with guest speaker Peter Desnoyers today!

2022-10-27 Thread Mark Nelson
Hi Folks, The weekly performance meeting will be starting in approximately 55 minutes at 8AM PST. Peter Desnoyers from Khoury College of Computer Sciences, Northeastern University will be speaking today about his work on local storage for RBD caching. A short architectural overview is

[ceph-users] 10/20/2022 perf meeting with guest speaker Peter Desnoyers today!

2022-10-27 Thread Mark Nelson
Hi Folks, The weekly performance meeting will be starting in approximately 70 minutes at 8AM PST. Peter Desnoyers from Khoury College of Computer Sciences, Northeastern University will be speaking today about his work on local storage for RBD caching. A short architectural overview is

[ceph-users] Re: large omap objects in the .rgw.log pool

2022-10-27 Thread Anthony D'Atri
This prior post https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2QNKWK642LWCNCJEB5THFGMSLR37FLX7/ may help. You can bump up the warning threshold to make the warning go away - a few releases ago it was reduced to 1/10 of the prior value. There’s also information about trimming
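
If the goal is just to quiet the warning while investigating, a sketch using the key-count threshold that this warning is commonly driven by (2000000 was roughly the old default; the current default is about a tenth of that, matching the note above):

    ceph config set osd osd_deep_scrub_large_omap_object_key_threshold 2000000
    # the warning is re-evaluated when the affected PGs are deep-scrubbed again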

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Wyll Ingersoll
I don't think there is anything particularly special about exposing /cephfs (or subdirs thereof) over SMB with SAMBA. We've done it for years over various releases of both Ceph and Samba. Basically, you create a NAS server host that mounts /cephfs and run Samba on that host. You share

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Eugen Block
Hi, the SUSE docs [1] are not that old; they apply to Ceph Pacific. Have you tried them yet? Maybe the upstream docs could adapt the SUSE docs, just an idea if there aren't any guides yet on docs.ceph.com. Regards, Eugen [1]

[ceph-users] Re: Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-27 Thread Nizamudeen A
Great, thanks Ilya. Regards, On Thu, Oct 27, 2022 at 2:00 PM Ilya Dryomov wrote: > On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote: > > > > > > > > lab issues blocking centos container builds and teuthology testing: > > > * https://tracker.ceph.com/issues/57914 > > > * delays testing for

[ceph-users] SMB and ceph question

2022-10-27 Thread Christophe BAILLON
Hello, For a side project we need to expose CephFS data to legacy users via SMB, and I can't find the official way to do that in the Ceph docs. In an old SUSE doc I found a reference to ceph-samba, but I can't find any information in the official Ceph docs. We have a small dedicated cephadm cluster to do that, can

[ceph-users] Re: Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-27 Thread Ilya Dryomov
On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote: > > > > > lab issues blocking centos container builds and teuthology testing: > > * https://tracker.ceph.com/issues/57914 > > * delays testing for 16.2.11 > > > The quay.ceph.io has been down for some days now. Not sure who is actively >

[ceph-users] large omap objects in the .rgw.log pool

2022-10-27 Thread Sarah Coxon
Hey, I would really appreciate any help I can get on this, as googling has led me to a dead end. We have 2 data centers, each with 4 servers running ceph on kubernetes in multisite config. Everything is working great, but recently the master cluster changed status to HEALTH_WARN and the issues are

[ceph-users] Re: 1 pg stale, 1 pg undersized

2022-10-27 Thread Alexander Fiedler
Hi, any updates on this? Best regards Alexander Fiedler From: Alexander Fiedler Sent: Tuesday, October 25, 2022 14:45 To: 'ceph-users@ceph.io' Subject: 1 pg stale, 1 pg undersized Hello, we run a ceph cluster with the following error which came up suddenly without any

[ceph-users] Re: ceph-volume claiming wrong device

2022-10-27 Thread Oleksiy Stashok
Hey Eugen, valid points. I first tried to provision OSDs via ceph-ansible (later excluded), which does run the batch command with all 4 disk devices, but it often failed with the same issue I mentioned earlier, something like: ``` bluefs _replay 0x0: stop: uuid

[ceph-users] Re: cephfs ha mount expectations

2022-10-27 Thread Eugen Block
Hi, Thanks for the interesting discussion. Actually it's a bit disappointing to see that even CephFS with multiple MDS servers is not as HA as we would like. It really depends on what you're trying to achieve, since there are lots of different scenarios for how to set up and configure one

[ceph-users] Re: how to upgrade host os under ceph

2022-10-27 Thread Simon Oosthoek
Dear list, thanks for the answers; it looks like we have worried about this far too much ;-) Cheers /Simon On 26/10/2022 22:21, shubjero wrote: We've done 14.04 -> 16.04 -> 18.04 -> 20.04 all at various stages of our ceph cluster's life. The latest 18.04 to 20.04 upgrade was painless and we ran:

[ceph-users] Re: ceph-volume claiming wrong device

2022-10-27 Thread Eugen Block
Hi, first of all, if you really need to issue ceph-volume manually, there's a batch command: cephadm ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde Second, are you using cephadm? Maybe your manual intervention conflicts with the automatic osd setup (all available devices). You

[ceph-users] Re: Ceph Leadership Team Meeting Minutes - 2022 Oct 26

2022-10-27 Thread Nizamudeen A
> > lab issues blocking centos container builds and teuthology testing: > * https://tracker.ceph.com/issues/57914 > * delays testing for 16.2.11 quay.ceph.io has been down for some days now. I'm not sure who is actively maintaining the quay repos now. At least in the ceph-dashboard, we have a

[ceph-users] Re: how to upgrade host os under ceph

2022-10-27 Thread Stefan Kooman
On 10/26/22 16:14, Simon Oosthoek wrote: Dear list, I'm looking for some guide or pointers to how people upgrade the underlying host OS in a ceph cluster (I don't even know if this is the right way to proceed...). Our cluster is nearing 4.5 years of age and now our Ubuntu 18.04 is
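
For reference, a rough sketch of a common per-host approach (general practice rather than anything prescribed in this thread; the upgrade command assumes Ubuntu hosts):

    ceph osd set noout       # don't mark OSDs out / trigger rebalancing while a host is down
    # upgrade the host OS (e.g. do-release-upgrade), reboot, wait for its OSDs to rejoin
    ceph -s                  # confirm all PGs are active+clean before moving to the next host
    ceph osd unset noout     # once every host has been upgraded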