[ceph-users] krbd upmap support on kernel-4.16 ?

2018-05-22 Thread Heðin Ejdesgaard Møller
Hello, I have a test environment with a single CentOS 7.5 Ceph server and one RBD client running Fedora 28. I have set the minimum client compatibility level to luminous, as shown in the ceph osd dump below. When I use krbd to map rbd/demo, it shows up as a "Jewel" client in ceph features. When I
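For reference, the commands involved would look roughly like this (image name taken from the post; a sketch, not the poster's exact session):

  # require luminous-capable clients, a prerequisite for pg-upmap
  ceph osd set-require-min-compat-client luminous
  # show which feature level each connected client reports
  ceph features
  # map the image with krbd; older kernel clients may still be reported as "jewel"
  rbd map rbd/demo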

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi, Here is our complete crush map that is being used. # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable straw_calc_version 1 # devices device 0 osd.0 device 1 osd.1 device 2 osd.2 device

Re: [ceph-users] Web panel is failing when creating rpm

2018-05-22 Thread John Spray
On Tue, May 22, 2018 at 6:38 PM, Antonio Novaes wrote: > Hi people, > I need your help. > I am trying to create the RPM packages for Ceph Calamari but I get an error. > The calamari-server package is OK, but Diamond gets an error. > I am sending the log. > My Ceph is OK, create pool, put file

Re: [ceph-users] Web panel is failing when creating rpm

2018-05-22 Thread Antonio Novaes
The error: -- ID: cp-artifacts-up Diamond/dist/diamond-*.rpm Function: cmd.run Name: cp Diamond/dist/diamond-*.rpm /git Result: False Comment: Command "cp Diamond/dist/diamond-*.rpm /git" run Started: 14:26:08.056879 Duration: 20.077 ms

Re: [ceph-users] Data recovery after losing all monitors

2018-05-22 Thread Frank Li
Just having reliable hardware isn't enough to protect against monitor failures. I've had a case where a mistyped command brought down all three monitors via a segfault, with no way to bring them back since the command corrupted the monitor database. I wish there was a checkpoint implemented in

[ceph-users] Web panel is failing when creating rpm

2018-05-22 Thread Antonio Novaes
Hi people, I need your help. I am trying to create the RPM packages for Ceph Calamari but I get an error. The calamari-server package is OK, but Diamond gets an error. I am sending the log. My Ceph is OK: I can create a pool, put a file, remove the file and remove the pool successfully. But building the RPM for the web panel is failing. I am using

Re: [ceph-users] Delete pool nicely

2018-05-22 Thread David Turner
From my experience, that would cause you some trouble, as it would throw the entire pool into the deletion queue to be processed while it cleans up the disks and everything. I would suggest taking a pool listing from `rados -p .rgw.buckets ls` and iterating on that using some scripts around the `rados
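A minimal sketch of that approach (pool name from the thread; the throttling value is an arbitrary example):

  # list the objects once, then delete them one by one with a small pause
  rados -p .rgw.buckets ls > objects.txt
  while read -r obj; do
      rados -p .rgw.buckets rm "$obj"
      sleep 0.01   # throttle so the cluster isn't flooded with deletions
  done < objects.txt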

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi David, We are using the tree algorithm. Thanks, Pardhiv Karri On Tue, May 22, 2018 at 9:42 AM, David Turner wrote: > Your PG counts per pool per OSD don't show any PGs on osd.38. That > definitely matches what you're seeing, but I've never seen this happen > before.

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread David Turner
Your PG counts per pool per OSD don't show any PGs on osd.38. That definitely matches what you're seeing, but I've never seen this happen before. The OSD doesn't seem to be misconfigured at all. Does anyone have any idea what could be happening here? I expected to see something wrong in one of

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi David, root@or1010051251044:~# ceph df GLOBAL: SIZE AVAIL RAW USED %RAW USED 79793G 56832G 22860G 28.65 POOLS: NAMEID USED %USED MAX AVAIL OBJECTS rbd 0 0 014395G 0 compute

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread David Turner
This is all weird. Maybe it just doesn't have any PGs with data in them. Can you share `ceph df`, how many PGs you have in each pool, and which PGs are on osd.38? On Tue, May 22, 2018, 11:19 AM Pardhiv Karri wrote: > Hi David, > > > > root@or1010051251044:~# ceph osd tree > ID WEIGHT

[ceph-users] Recovery time is very long till we have a double tree in the crushmap

2018-05-22 Thread Vincent Godin
Two months ago, we had a simple crushmap: one root, one region, two datacenters, one room per datacenter, two pools per room (one SATA and one SSD), hosts in the SATA pool only, and OSDs in the hosts. So we created a Ceph pool at the SATA level on each site. After some disk problems which impacted

Re: [ceph-users] How to see PGs of a pool on an OSD

2018-05-22 Thread Pardhiv Karri
This is exactly what I'm looking for. I tested it in our lab and it works great. Thank you Caspar! Thanks, Pardhiv Karri On Tue, May 22, 2018 at 3:42 AM, Caspar Smit wrote: > Here you go: > > ps. You might have to map your poolnames to pool ids > >

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi David, root@or1010051251044:~# ceph osd tree ID WEIGHT TYPE NAMEUP/DOWN REWEIGHT PRIMARY-AFFINITY -1 80.0 root default -2 40.0 rack rack_A1 -3 20.0 host or1010051251040 0 2.0 osd.0 up 1.0

[ceph-users] Delete pool nicely

2018-05-22 Thread Simon Ironside
Hi Everyone, I have an older cluster (Hammer 0.94.7) with a broken radosgw service that I'd just like to blow away before upgrading to Jewel, after which I'll start again with EC pools. I don't need the data, but I'm worried that deleting the .rgw.buckets pool will cause performance

Re: [ceph-users] RGW won't start after upgrade to 12.2.5

2018-05-22 Thread Marc Spencer
This is now filed as bug #24228 Marc D. Spencer Chief Technology Officer T: 866.808.4937 × 202 E: mspen...@liquidpixels.com www.liquidpixels.com

Re: [ceph-users] Data recovery after losing all monitors

2018-05-22 Thread Caspar Smit
2018-05-22 15:51 GMT+02:00 Wido den Hollander : > > > On 05/22/2018 03:38 PM, George Shuklin wrote: > > Good news, it's not an emergency, just a curiosity. > > > > Suppose I lost all monitors in a ceph cluster in my laboratory. I have > > all OSDs intact. Is it possible to recover

Re: [ceph-users] Data recovery after losing all monitors

2018-05-22 Thread Wido den Hollander
On 05/22/2018 03:38 PM, George Shuklin wrote: > Good news, it's not an emergency, just a curiosity. > > Suppose I lost all monitors in a ceph cluster in my laboratory. I have > all OSDs intact. Is it possible to recover something from Ceph? Yes, it is. Using ceph-objectstore-tool you are
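The procedure Wido is pointing at is roughly the following (paths are examples; the complete steps are in the Ceph disaster-recovery documentation):

  # gather cluster maps from every OSD into a fresh mon store
  ms=/root/mon-store
  mkdir -p $ms
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --op update-mon-db --mon-store-path $ms
  # ... repeat for each OSD on every host, then rebuild and install the store
  ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring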

[ceph-users] Data recovery after losing all monitors

2018-05-22 Thread George Shuklin
Good news, it's not an emergency, just a curiosity. Suppose I lost all monitors in a ceph cluster in my laboratory. I have all OSDs intact. Is it possible to recover something from Ceph?

Re: [ceph-users] Crush Map Changed After Reboot

2018-05-22 Thread Caspar Smit
FWIW, you could also put this into your ceph.conf to explicitly put an OSD into the correct chassis at start, if you have other OSDs for which you still want the crush_update_on_start setting set to true: [osd.34] osd crush location = "chassis=ceph-osd3-internal" [osd.35] osd crush
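Spelled out, a hedged sketch of the two options discussed in this thread (section and chassis names are examples):

  [osd.34]
  osd crush location = "chassis=ceph-osd3-internal"

  # or, alternatively, stop OSDs from updating their crush location at startup:
  [osd]
  osd crush update on start = false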

Re: [ceph-users] samba gateway experiences with cephfs ?

2018-05-22 Thread David Disseldorp
Hi Daniel and Jake, On Mon, 21 May 2018 22:46:01 +0200, Daniel Baumann wrote: > Hi > > On 05/21/2018 05:38 PM, Jake Grimmett wrote: > > Unfortunately we have a large number (~200) of Windows and Mac clients > > which need CIFS/SMB access to cephfs. > > we too, which is why we're

Re: [ceph-users] Luminous: resilience - private interface down, no read/write

2018-05-22 Thread David Turner
What happens when a storage node loses its cluster network but not its public network is that all the other OSDs in the cluster see that it's down and report that to the mons, but the node can still talk to the mons, telling them that it is up and that, in fact, everything else is down. The setting osd

Re: [ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread David Turner
What are your `ceph osd tree` and `ceph status` as well? On Tue, May 22, 2018, 3:05 AM Pardhiv Karri wrote: > Hi, > > We are using Ceph Hammer 0.94.9. Some of our OSDs never get any data or > PGs even at their full crush weight, up and running. Rest of the OSDs are > at

Re: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread David Turner
We use radosgw in our deployment. It doesn't really matter as you can specify the key in the config file. You could call it client.thatobjectthing.hostname and it would work fine. On Tue, May 22, 2018, 5:54 AM Massimo Sgaravatto < massimo.sgarava...@gmail.com> wrote: > # ls
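For illustration, a hedged ceph.conf sketch using the hostname from this thread (paths and frontend settings are examples, not the poster's actual configuration):

  [client.rgw.ceph-test-rgw-01]
  host = ceph-test-rgw-01
  keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-test-rgw-01/keyring
  rgw frontends = "civetweb port=7480"
  log file = /var/log/ceph/client.rgw.ceph-test-rgw-01.log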

[ceph-users] Several questions on the radosgw-openstack integration

2018-05-22 Thread Massimo Sgaravatto
I have several questions on the radosgw - OpenStack integration. I was more or less able to set it up (using a Luminous ceph cluster and an Ocata OpenStack cloud), but I don't know if it is working as expected. So, the questions: 1. I don't understand the meaning of the attribute "rgw keystone implicit
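For context, a typical Luminous radosgw/Keystone block looks roughly like the sketch below; the values are placeholders, and "rgw keystone implicit tenants" is assumed to be the option the question refers to:

  [client.rgw.gateway]
  rgw keystone api version = 3
  rgw keystone url = https://keystone.example.com:5000
  rgw keystone admin user = rgw
  rgw keystone admin password = secret
  rgw keystone admin project = service
  rgw keystone admin domain = default
  rgw keystone implicit tenants = true
  rgw s3 auth use keystone = true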

Re: [ceph-users] How to see PGs of a pool on an OSD

2018-05-22 Thread Caspar Smit
Here you go (PS: you might have to map your pool names to pool ids): http://cephnotes.ksperis.com/blog/2015/02/23/get-the-number-of-placement-groups-per-osd Kind regards, Caspar 2018-05-22 9:13 GMT+02:00 Pardhiv Karri : > Hi, > > Our ceph cluster have 12 pools and only 3
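The linked post boils down to parsing `ceph pg dump`; a rough sketch of the same idea, assuming the pgs_brief column order of pg id, state, up set:

  # count PGs per pool on osd.0
  ceph pg dump pgs_brief 2>/dev/null | awk -v osd=0 '
      $1 ~ /^[0-9]+\./ {
          pool = $1; sub(/\..*/, "", pool)      # pool id is the part before the dot
          up = $3; gsub(/[][]/, "", up)         # e.g. [0,12,38] -> 0,12,38
          n = split(up, o, ",")
          for (i = 1; i <= n; i++) if (o[i] == osd) count[pool]++
      }
      END { for (p in count) print "pool " p ": " count[p] " PGs on osd." osd }'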

Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread Eugen Block
Hi, So "somthing" goes wrong: # cat /var/log/libvirt/libxl/libxl-driver.log -> ... 2018-05-20 15:28:15.270+: libxl: libxl_bootloader.c:634:bootloader_finished: bootloader failed - consult logfile /var/log/xen/bootloader.7.log 2018-05-20 15:28:15.270+: libxl:

Re: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread Massimo Sgaravatto
# ls /var/lib/ceph/radosgw/ ceph-rgw.ceph-test-rgw-01 So [client.rgw.ceph-test-rgw-01] Thanks, Massimo On Tue, May 22, 2018 at 6:28 AM, Marc Roos wrote: > > I can relate to your issue, I am always looking at > > /var/lib/ceph/ > > See what is used there > > >

Re: [ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread Marc Roos
I can relate to your issue; I am always looking at /var/lib/ceph/ to see what is used there. -Original Message- From: Massimo Sgaravatto [mailto:massimo.sgarava...@gmail.com] Sent: Tuesday, 22 May 2018 11:46 To: Ceph Users Subject: [ceph-users] [client.rgw.hostname] or

[ceph-users] [client.rgw.hostname] or [client.radosgw.hostname] ?

2018-05-22 Thread Massimo Sgaravatto
I am really confused about the use of [client.rgw.hostname] or [client.radosgw.hostname] in the configuration file. I don't understand if they have different purposes or if there is just a problem with documentation. E.g.: http://docs.ceph.com/docs/luminous/start/quick-rgw/ says that

[ceph-users] leveldb to rocksdb migration

2018-05-22 Thread Захаров Алексей
Hi all. I'm trying to change an OSD's kv backend using the instructions mentioned here: http://pic.doit.com.cn/ceph/pdf/20180322/4/0401.pdf But the ceph-osdomap-tool --check step fails with the following error: ceph-osdomap-tool: /build/ceph-12.2.5/src/rocksdb/db/version_edit.h:188: void

Re: [ceph-users] rgw default user quota for OpenStack users

2018-05-22 Thread Massimo Sgaravatto
The OpenStack nodes have their own ceph config file, but it has the same content. On Mon, May 21, 2018 at 4:14 PM, David Turner wrote: > Is openstack/keystone maintaining its own version of the ceph config > file? I know that's the case with software like Proxmox.

Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread thg
Hi Marc, > in the last weeks we spent some time improving RBDSR, an RBD storage > repository for XenServer. > RBDSR can use RBD via fuse, krbd and rbd-nbd. I will have a look at this, thank you very much! > I am pretty sure that we will use this in production in a few weeks :-)

Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-05-22 Thread Robert Sander
On 19.05.2018 00:16, Gregory Farnum wrote: > Is there any chance of sharing those slides when the meetup has > finished? It sounds interesting! :) We usually put a link to the slides on the MeetUp page. Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] How to see PGs of a pool on an OSD

2018-05-22 Thread Pardhiv Karri
Hi, Our ceph cluster has 12 pools and only 3 pools are really used. How can I see the number of PGs on an OSD, and which PGs belong to which pool on that OSD? Something like below: OSD 0 = 1000 PGs (500 PGs belong to PoolA, 200 PGs to PoolB, 300 PGs to PoolC) Thanks, Pardhiv Karri

Re: [ceph-users] Ceph - Xen accessing RBDs through libvirt

2018-05-22 Thread Marc Schöchlin
Hello thg, in the last weeks we have spent some time improving RBDSR, an RBD storage repository for XenServer. RBDSR can use RBD via fuse, krbd and rbd-nbd. Our improvements are based on https://github.com/rposudnevskiy/RBDSR/tree/v2.0 and are currently published at

Re: [ceph-users] ceph mds memory usage 20GB : is it normal ?

2018-05-22 Thread Alexandre DERUMIER
Hi, some new stats: MDS memory is now 16G, and I have almost the same number of items and bytes in cache as some weeks ago when the MDS was using 8G. (ceph 12.2.5) root@ceph4-2:~# while sleep 1; do ceph daemon mds.ceph4-2.odiso.net perf dump | jq '.mds_mem.rss'; ceph daemon mds.ceph4-2.odiso.net
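Reconstructed for readability, the sampling loop in the post looks roughly like this (daemon name taken from the thread; the extra jq fields are an assumption about what was being inspected):

  MDS=mds.ceph4-2.odiso.net
  while sleep 1; do
      ceph daemon "$MDS" perf dump | jq '{rss: .mds_mem.rss, inodes: .mds_mem.ino, dentries: .mds_mem.dn}'
  done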

[ceph-users] Some OSDs never get any data or PGs

2018-05-22 Thread Pardhiv Karri
Hi, We are using Ceph Hammer 0.94.9. Some of our OSDs never get any data or PGs, even though they are at their full crush weight and are up and running. The rest of the OSDs are about 50% full. Is there a bug in Hammer that is causing this issue? Does upgrading to Jewel or Luminous fix it? I tried deleting and
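A quick way to confirm the imbalance, assuming `ceph osd df` is available on this Hammer release, would be something along these lines:

  # per-OSD utilization and PG counts; the affected OSD (osd.38 later in this thread) should show 0 PGs
  ceph osd df
  # check that the OSD is in the crush tree with a non-zero weight
  ceph osd tree | grep -w osd.38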