[ceph-users] Re: Create iscsi targets from CLI

2022-03-25 Thread York Huang
Hi Budai, Could you take a look at gwcli? https://docs.ceph.com/en/latest/rbd/iscsi-target-cli/ --Original-- From: "BudaiLaszlo"
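
A minimal sketch of the gwcli flow described on that page, with the IQN, gateway names/IPs, pool and image names all illustrative:

  # gwcli
  /> cd /iscsi-targets
  /iscsi-targets> create iqn.2003-01.com.example.iscsi-gw:iscsi-igw
  /iscsi-targets> cd iqn.2003-01.com.example.iscsi-gw:iscsi-igw/gateways
  /gateways> create ceph-gw-1 192.168.1.10
  /gateways> create ceph-gw-2 192.168.1.11
  /> cd /disks
  /disks> create pool=rbd image=disk_1 size=90G
  /> cd /iscsi-targets/iqn.2003-01.com.example.iscsi-gw:iscsi-igw/hosts
  /hosts> create iqn.1994-05.com.example:client1
  /hosts> disk add rbd/disk_1

gwcli is interactive, but like targetcli it also accepts a command as arguments (e.g. "gwcli ls"), so the same steps can likely be scripted; worth verifying against your gwcli version.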

[ceph-users] Re: [RGW] Too much index objects and OMAP keys on them

2022-03-25 Thread David Orman
Hi Gilles, Did you ever figure this out? Also, your rados ls output indicates that the prod cluster has fewer objects in the index pool than the backup cluster, or am I misreading this? David On Wed, Dec 1, 2021 at 4:32 AM Gilles Mocellin < gilles.mocel...@nuagelibre.org> wrote: > Hello, > >
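
A rough way to compare the two clusters would be counting the index objects and the OMAP keys on each shard (the pool name below is the usual default and may differ in your setup):

  rados -p default.rgw.buckets.index ls | wc -l
  # OMAP keys per index shard object:
  for obj in $(rados -p default.rgw.buckets.index ls); do
      echo "$obj: $(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l)"
  done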

[ceph-users] Re: Even number of replicas?

2022-03-25 Thread Wolfpaw - Dale Corse
Hi Nico, No, 2 data centers. - We use size=4 - our CRUSH map is configured with OSDs assigned to 2 separate data center locations, so we end up with 2 OSDs in use in each DC - min_size=2. - we have (1) monitor in each DC - we have a 3rd monitor that is in a 3rd DC and has a VPN connection
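
For reference, a CRUSH rule that places two replicas in each of two datacenters might look roughly like this (assuming datacenter buckets already exist in the map; rule and bucket names are illustrative):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # add a rule along these lines to crushmap.txt:
  #   rule replicated_2dc {
  #       id 1
  #       type replicated
  #       step take default
  #       step choose firstn 2 type datacenter
  #       step chooseleaf firstn 2 type host
  #       step emit
  #   }
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new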

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-25 Thread Ilya Dryomov
On Fri, Mar 25, 2022 at 4:11 PM Ilya Dryomov wrote: > > On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote: > > > > Hi Ilya, > > > > Thank you for your answer! > > > > On 3/24/22 14:09, Ilya Dryomov wrote: > > > > > > How can we see whether a lock is exclusive or shared? the rbd lock ls > >
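
For context, the commands being discussed -- listing advisory locks and checking whether the exclusive-lock feature is enabled on an image (pool/image names are illustrative):

  rbd lock ls mypool/myimage
  rbd info mypool/myimage | grep features   # look for "exclusive-lock"
  rbd status mypool/myimage                 # shows current watchers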

[ceph-users] Re: Even number of replicas?

2022-03-25 Thread Wolfpaw - Dale Corse
Hi George, We use 4/2 for our deployment and it works fine - but it's a huge waste of space :) Our reason is that we want to be able to lose a data center and still have Ceph running. You could accomplish that with size=1 on an emergency basis, but we didn't like the redundancy loss.

[ceph-users] Even number of replicas?

2022-03-25 Thread Kyriazis, George
Hello ceph-users, I was wondering if it is good practice to have an even number of replicas in a replicated pool. For example, have size=4 and min_size=2. Thank you! George
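
For reference, size and min_size are set per pool, e.g. (pool name illustrative):

  ceph osd pool set mypool size 4
  ceph osd pool set mypool min_size 2
  ceph osd pool get mypool size
  ceph osd pool get mypool min_size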

[ceph-users] Re: Kingston DC500M IO problems

2022-03-25 Thread Janne Johansson
Do read this https://yourcmc.ru/wiki/index.php?title=Ceph_performance#Drive_cache_is_slowing_you_down and check whether each of those drives performs better with the write cache on or off. At least some Microns we have want it off for better performance in Ceph. On Fri, 25 Mar 2022 at 15:01
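
A quick way to check and toggle the volatile write cache (device names are illustrative; benchmark both settings before rolling anything out fleet-wide):

  # SATA drives:
  hdparm -W /dev/sdX        # show the current write-cache setting
  hdparm -W 0 /dev/sdX      # disable the volatile write cache
  # SAS/SCSI drives:
  sdparm --get WCE /dev/sdX
  sdparm --clear WCE /dev/sdX
  # note: the setting may not survive a power cycle, so it usually
  # needs to be reapplied at boot via a udev rule or similar.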

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-25 Thread Ilya Dryomov
On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote: > > Hi Ilya, > > Thank you for your answer! > > On 3/24/22 14:09, Ilya Dryomov wrote: > > > How can we see whether a lock is exclusive or shared? the rbd lock ls command > output looks identical for the two cases. > > You can't. The way

[ceph-users] OSD crash with end_of_buffer

2022-03-25 Thread Wissem MIMOUNA
Hi, I found more information in the OSD logs about this assertion; maybe it could help => ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable) in thread 7f8002357700 thread_name:msgr-worker-2 *** Caught signal (Aborted) ** what(): buffer::end_of_buffer

[ceph-users] Re: ceph namespace access control

2022-03-25 Thread Eugen Block
Hi, This is because the default client id is "admin" -- you are trying to connect to the cluster as admin with user3's key here. That makes sense, of course. This is a bit broader than perhaps needed. If the intention is to allow user3 to create and use RBD images in namespace user3 of
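
For the narrower variant, the RBD docs describe restricting a user to a single namespace via the rbd cap profile, roughly like this (pool, namespace and client names are illustrative):

  ceph auth get-or-create client.user3 \
      mon 'profile rbd' \
      osd 'profile rbd pool=rbd namespace=user3'
  # or adjust an existing user:
  ceph auth caps client.user3 \
      mon 'profile rbd' \
      osd 'profile rbd pool=rbd namespace=user3'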

[ceph-users] Re: ceph namespace access control

2022-03-25 Thread Kai Stian Olstad
On Wed, Mar 23, 2022 at 07:14:22AM +0200, Budai Laszlo wrote: > Hello all, > > what capabilities a ceph user should have in order to be able to create rbd > images in one namespace only? > > I have tried the following: > > [root@ceph1 ~]# rbd namespace ls --format=json >
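
For completeness, creating a namespace and then an image inside it looks roughly like this (pool, namespace and image names are illustrative):

  rbd namespace create --pool rbd --namespace user3
  rbd namespace ls --pool rbd --format=json
  # as the restricted user:
  rbd --id user3 create rbd/user3/image1 --size 1G
  rbd --id user3 ls --pool rbd --namespace user3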

[ceph-users] Re: ceph namespace access control

2022-03-25 Thread Ilya Dryomov
On Fri, Mar 25, 2022 at 10:11 AM Eugen Block wrote: > > Hi, > > I was curious and tried the same with debug logs. One thing I noticed > was that if I use the '-k ' option I get a different error > message than with '--id user3'. So with '-k' the result is the same: > > ---snip--- > pacific:~ #

[ceph-users] Re: OSD crash on a new ceph cluster

2022-03-25 Thread Wissem MIMOUNA
Hi, I found more information in the OSD logs about this assertion; maybe it could help => ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable) in thread 7f8002357700 thread_name:msgr-worker-2 *** Caught signal (Aborted) ** what(): buffer::end_of_buffer terminate

[ceph-users] Create iscsi targets from CLI

2022-03-25 Thread Budai Laszlo
Hello everybody, Is there a way to create the iSCSI targets from the command line with the ceph command? (Or a series of commands that can be put in a script.) I have reviewed the "ceph -h" but I guess I'm missing something. Thank you, Laszlo

[ceph-users] Re: [ERR] OSD_FULL: 1 full osd(s) - with 73% used

2022-03-25 Thread Eugen Block
Can you add more information about your cluster like 'ceph -s' and 'ceph osd df tree'? I haven't seen full OSD errors yet when they are only around 75% full. It can't be a pool quota since it would report the pool(s) as full, not the OSDs. Is there anything in the logs of that OSD?
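
For reference, the usual commands to look at here, plus the configured fullness ratios:

  ceph -s
  ceph osd df tree
  ceph health detail
  ceph osd dump | grep ratio    # full_ratio, backfillfull_ratio, nearfull_ratio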

[ceph-users] Re: ceph namespace access control

2022-03-25 Thread Eugen Block
Hi, I was curious and tried the same with debug logs. One thing I noticed was that if I use the '-k ' option I get a different error message than with '--id user3'. So with '-k' the result is the same: ---snip--- pacific:~ # rbd -k /etc/ceph/ceph.client.user3.keyring -p test2 --namespace