[ceph-users] FW: radosgw: stale/leaked bucket index entries

2017-06-19 Thread Pavan Rallabhandi
Trying one more time with ceph-users. On 19/06/17, 11:07 PM, "Pavan Rallabhandi" wrote: On many of our clusters running Jewel (10.2.5+), I am running into a strange problem of having stale bucket index entries left over for (some of the) deleted objects. Though

[ceph-users] FW: Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread 许雪寒
By the way, I intend to install the Jewel version through the “rpm” command, and I already have a user “ceph” on the target machine. Is there any problem if I do “systemctl start ceph.target” after the installation of the Jewel version? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] on behalf of 许雪寒

[ceph-users] Re: Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread 许雪寒
By the way, I intend to install the Jewel version through the “rpm” command, and I already have a user “ceph” on the target machine. Is there any problem if I do “systemctl start ceph.target” after the installation of the Jewel version? From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] on behalf of 许雪寒
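For context: Jewel daemons run as the unprivileged "ceph" user rather than root, so data directories created under Hammer generally need a change of ownership before "systemctl start ceph.target" will behave. A minimal sketch of the post-install steps, assuming default paths and an RPM-based install:

    # stop the old Hammer daemons (sysvinit under Hammer)
    service ceph stop
    # after installing the Jewel RPMs, hand the existing data over to the 'ceph' user
    chown -R ceph:ceph /var/lib/ceph /etc/ceph
    # start everything under systemd with the Jewel units
    systemctl start ceph.target

The Jewel release notes also describe a "setuser match path" option as an alternative that keeps the daemons running as root, if the chown is too disruptive.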

[ceph-users] Question about upgrading ceph clusters from Hammer to Jewel

2017-06-19 Thread 许雪寒
Hi, everyone. I intend to upgrade one of our ceph clusters from Hammer to Jewel, and I wonder in what order I should upgrade the MON, OSD and LIBRBD. Is there any problem with having some of these components running the Hammer version while others run the Jewel version? Do I have to upgrade QEMU as well
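The order commonly documented for this kind of upgrade is monitors first, then OSDs, then MDS/RGW, and clients (librbd, and QEMU if it bundles librbd) last; a cluster can run a mix of Hammer and Jewel daemons while the rolling upgrade is in progress. A rough, hedged sketch for one RPM-based host at a time (package names and service handling are assumptions; Hammer uses sysvinit while Jewel moves to systemd):

    # 1. monitors, one host at a time, waiting for a healthy quorum in between
    yum update ceph ceph-common                 # install the Jewel packages
    service ceph restart mon.$(hostname -s)
    ceph -s
    # 2. OSD hosts, one at a time, waiting for HEALTH_OK between hosts
    service ceph restart osd
    # 3. clients last: upgrade librbd1 (and QEMU if needed) on the hypervisors,
    #    then restart or live-migrate VMs so they pick up the new library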

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Marko Sluga
Not a problem at all, sometimes all we need is just a second pair of eyes! ;) On Mon, 19 Jun 2017 21:23:34 -0400 tribe...@tribecc.us wrote That was it! Thank you so much for your help, Marko! What a silly thing for me to miss! <3 Trilliams Sent from my iPhone On Jun 19, 2017,

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread T. Nichole Williams
That was it! Thank you so much for your help, Marko! What a silly thing for me to miss! <3 Trilliams Sent from my iPhone > On Jun 19, 2017, at 8:12 PM, Marko Sluga wrote: > > Sorry, > > rbd_user = volumes > > Not client.volumes > > > > On Mon, 19 Jun 2017

Re: [ceph-users] Introduction

2017-06-19 Thread Brad Hubbard
On Tue, Jun 20, 2017 at 10:40 AM, Marko Sluga wrote: > Hi Everyone, > > My name is Marko, I'm an independent consultant and trainer on cloud > solutions and I work a lot with OpenStack. > > Recently my clients have started asking about Ceph so I went on the docs and >

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Marko Sluga
Sorry, rbd_user = volumes Not client.volumes On Mon, 19 Jun 2017 21:09:38 -0400 ma...@markocloud.com wrote Hi Nichole, Yeah, your setup looks ok, so the only thing here could be an auth issue. So I went through the config again and I see you have set the client.volumes ceph

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Marko Sluga
Hi Nichole, Yeah, your setup looks ok, so the only thing here could be an auth issue. So I went through the config again and I see you have set the client.volumes ceph user with rwx permissions on the volumes pool. In your cinder.conf the setup is: rbd_user = cinder Unless the cinder
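For reference, the fix posted in this thread was to use the bare Ceph user name (no "client." prefix) in cinder.conf. A hedged sketch of the two options involved, with the UUID being only an example value for the libvirt secret that holds the client.volumes key:

    [ceph]
    rbd_user = volumes                                        # bare name, not 'client.volumes'
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337    # placeholder libvirt secret UUID

The matching secret also has to be defined in libvirt on the compute nodes so instances can attach the volumes.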

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread T. Nichole Williams
Hi Marko! Here are my details: OpenStack Newton deployed with PackStack [controller + network node] Ceph Kraken 3-node setup deployed with ceph-ansible # cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.3 (Maipo) # ceph --version ceph version 11.2.0

[ceph-users] Introduction

2017-06-19 Thread Marko Sluga
Hi Everyone, My name is Marko, I'm an independent consultant and trainer on cloud solutions and I work a lot with OpenStack. Recently my clients have started asking about Ceph so I went on the docs and learned how to use it and feel pretty comfortable using it now, especially in connection

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Marko Sluga
Hi Nichole, Since your config is ok, I'm going to need more details on the OpenStack release, the hypervisor, linux and librados versions. You could also test whether you can mount a volume from your OS and/or hypervisor and from the machine that runs the cinder-volume service, to start with.

[ceph-users] Ceph packages for Debian Stretch?

2017-06-19 Thread Christian Balzer
Hello, can we have the status and projected release date of the Ceph packages for Debian Stretch? Christian -- Christian Balzer, Network/Systems Engineer ch...@gol.com Rakuten Communications

Re: [ceph-users] Packages for Luminous RC 12.1.0?

2017-06-19 Thread Linh Vu
No worries, thanks a lot, look forward to testing it :) From: Abhishek Lekshmanan Sent: Monday, 19 June 2017 10:03:15 PM To: Linh Vu; ceph-users Subject: Re: [ceph-users] Packages for Luminous RC 12.1.0? Linh Vu writes:

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread T. Nichole Williams
Hi Marko, Here’s my ceph config: [ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver volume_backend_name = ceph rbd_pool = volumes rbd_ceph_conf = /etc/ceph/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1
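Laid out as it would appear in /etc/cinder/cinder.conf, the quoted backend section looks roughly like this (the options after rados_connect_timeout are cut off in the preview; rbd_user and rbd_secret_uuid below are assumptions based on the rest of the thread):

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    rbd_user = ...              # this setting turned out to be the culprit (see the replies in this thread)
    rbd_secret_uuid = ...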

Re: [ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-19 Thread Marko Sluga
Hi Jonas, ceph osd map [poolname] [objectname] should provide you with more information about where the object and chunks are stored on the cluster. Regards, Marko Sluga Independent Trainer W: http://markocloud.com T: +1 (647) 546-4365 L + M Consulting Inc. Ste 212, 2121
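A quick illustration of what that returns (pool and object names here are made up, and the output format is approximately what Jewel-era clusters print):

    $ ceph osd map volumes volume-1234
    osdmap e1480 pool 'volumes' (5) object 'volume-1234' -> pg 5.d5066e42 (5.42) -> up ([3,11,7], p3) acting ([3,11,7], p3)

The "up" and "acting" lists are the OSDs backing the object's placement group, with "p<id>" marking the primary.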

[ceph-users] OSDs are not mounting on startup

2017-06-19 Thread Alex Gorbachev
We are seeing the same problem as http://tracker.ceph.com/issues/18945 where OSDs are not activating, with the lockbox error as below. -- Alex Gorbachev Storcium Jun 19 17:11:56 roc03r-sca070 ceph-osd[6804]: starting osd.75 at :/0 osd_data /var/lib/ceph/osd/ceph-75 /var/lib/ceph/osd/ceph-75/journal

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Alejandro Comisario
You might want to configure cinder.conf with verbose = true and debug = true and see /var/log/cinder/cinder-volume.log after a "systemctl restart cinder-volume" to see the real cause. Best, alejandrito On Mon, Jun 19, 2017 at 6:25 PM, T. Nichole Williams wrote: > Hello, > >
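In other words, something along these lines in /etc/cinder/cinder.conf ("verbose" was already being deprecated around Newton, so "debug" alone may be enough):

    [DEFAULT]
    debug = True
    verbose = True

Then restart the volume service ("systemctl restart cinder-volume"; the unit is named openstack-cinder-volume on RDO/PackStack installs) and watch /var/log/cinder/cinder-volume.log for the RBD driver initialization error.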

Re: [ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread Marko Sluga
Hi Nichole, I can help, I have been working on my own openstack connected to ceph - can you send over the config in your /etc/cinder/cinder.conf file - especially the rbd relevant section starting with: volume_driver = cinder.volume.drivers.rbd.RBDDriver Also, make sure your

[ceph-users] Errors connecting cinder-volume to ceph

2017-06-19 Thread T. Nichole Williams
Hello, I’m having trouble connecting Ceph to OpenStack Cinder following the guide in docs.ceph.com & I can’t figure out what’s wrong. I’ve confirmed auth connectivity for both root & ceph users on my openstack controller node, but the RBDDriver is not initializing. I’ve

Re: [ceph-users] ceph on raspberry pi - unable to locate package ceph-osd and ceph-mon

2017-06-19 Thread Gregory Farnum
On Sat, Jun 17, 2017 at 10:11 AM Craig Wilson wrote: > Hi ceph-users > > I'm looking at ceph for a new storage cluster at work and have been trying to > build a test cluster using an old laptop and 2 raspberry pi's to evaluate > it in something other than virtualbox. I've got

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Willem Jan Withagen
On 19-6-2017 at 19:57, Alfredo Deza wrote: On Mon, Jun 19, 2017 at 11:37 AM, Willem Jan Withagen wrote: On 19-6-2017 16:13, Alfredo Deza wrote: On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza

[ceph-users] Erasure Coding: Determine location of data and coding chunks

2017-06-19 Thread Jonas Jaszkowic
Hello all, I have a simple question: I have an erasure coded pool with k = 2 data chunks and m = 3 coding chunks. How can I determine the location of the data and coding chunks? Given an object A that is stored on n = k + m different OSDs, I want to find out where (i.e. on which OSDs) the data
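A hedged illustration with a hypothetical k=2/m=3 pool named "ecpool": for erasure-coded pools, the acting set printed by "ceph osd map" is ordered by shard, so the first two OSDs hold the data chunks (shards 0-1) and the remaining three hold the coding chunks (shards 2-4):

    $ ceph osd map ecpool objectA
    osdmap e2210 pool 'ecpool' (7) object 'objectA' -> pg 7.89abcdef (7.ef) -> up ([12,3,25,9,17], p12) acting ([12,3,25,9,17], p12)

Here osd.12 and osd.3 would hold the data shards and osd.25/9/17 the coding shards; per-shard PG instances also appear in some health and OSD output with an "s<shard>" suffix on the PG id.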

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Andrew Schoen
>> >> I think having one part of Ceph on a different release cycle to the >> rest of Ceph is an even more dramatic thing than having it in a >> separate git repository. >> >> It seems like there is some dissatisfaction with how the Ceph project >> as whole is doing things that is driving you to

Re: [ceph-users] Mon Create currently at the state of probing

2017-06-19 Thread David Turner
Question... Why are you reinstalling the node, removing the mon from the cluster, and adding it back into the cluster to upgrade to Kraken? Upgrading from 10.2.5 to 11.2.0 is an acceptable upgrade path. If you just needed to reinstall the OS for some reason, then you can keep the

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Alfredo Deza
On Mon, Jun 19, 2017 at 11:37 AM, Willem Jan Withagen wrote: > On 19-6-2017 16:13, Alfredo Deza wrote: >> On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: >>> On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: On Fri, Jun 16, 2017

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Alfredo Deza
On Mon, Jun 19, 2017 at 12:55 PM, John Spray wrote: > On Mon, Jun 19, 2017 at 3:13 PM, Alfredo Deza wrote: >> On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: >>> On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote:

Re: [ceph-users] Mon Create currently at the state of probing

2017-06-19 Thread Jim Forde
No, I don’t think Ubuntu 14.04 has it enabled by default. Double checked: sudo ufw status reports "Status: inactive". No other symptoms of a firewall. From: Sasha Litvak [mailto:alexander.v.lit...@gmail.com] Sent: Sunday, June 18, 2017 11:10 PM To: Jim Forde Cc: ceph-users@lists.ceph.com

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread John Spray
On Mon, Jun 19, 2017 at 3:13 PM, Alfredo Deza wrote: > On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: >> On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: >>> On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD >>>

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Willem Jan Withagen
On 19-6-2017 16:13, Alfredo Deza wrote: > On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: >> On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: >>> On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD >>> wrote: I would

Re: [ceph-users] radosgw: scrub causing slow requests in the md log

2017-06-19 Thread Casey Bodley
Hi Dan, That's good news that it can remove 1000 keys at a time without hitting timeouts. The output of 'du' will depend on when the leveldb compaction runs. If you do find that compaction leads to suicide timeouts on this osd (you would see a lot of 'leveldb:' output in the log), consider
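If compaction does become necessary, it can also be triggered manually in a quiet period instead of waiting for leveldb to do it on its own; a hedged sketch, using the OSD id from the log excerpt quoted further down:

    # ask the OSD to compact its omap/leveldb store (adds load and can take a while)
    ceph tell osd.155 compact
    # keep an eye on slow requests while it runs
    ceph -w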

Re: [ceph-users] radosgw: scrub causing slow requests in the md log

2017-06-19 Thread Dan van der Ster
On Thu, Jun 15, 2017 at 7:56 PM, Casey Bodley wrote: > > On 06/14/2017 05:59 AM, Dan van der Ster wrote: >> >> Dear ceph users, >> >> Today we had O(100) slow requests which were caused by deep-scrubbing >> of the metadata log: >> >> 2017-06-14 11:07:55.373184 osd.155 >>

Re: [ceph-users] CephFS | flapping OSD locked up NFS

2017-06-19 Thread John Petrini
Hi David, While I have no personal experience with this, from what I've been told, if you're going to export cephfs over NFS it's recommended that you use a userspace implementation of NFS (like nfs-ganesha) rather than nfs-kernel-server. This may be the source of your issues and might be worth
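For anyone taking that route, a minimal nfs-ganesha export of CephFS looks roughly like the block below (a sketch only: Export_Id, Pseudo and the access settings are placeholders, and the FSAL_CEPH plugin needs to be installed alongside ganesha):

    EXPORT {
        Export_Id = 1;
        Path = "/";                 # path inside CephFS to export
        Pseudo = "/cephfs";         # NFSv4 pseudo path that clients mount
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;            # use the userspace libcephfs client
        }
    }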

[ceph-users] CephFS | flapping OSD locked up NFS

2017-06-19 Thread David
Hi All We had a faulty OSD that was going up and down for a few hours until Ceph marked it out. During this time Cephfs was accessible, however, for about 10 mins all NFS processes (kernel NFSv3) on a server exporting Cephfs were hung, locking up all the NFS clients. The cluster was healthy

Re: [ceph-users] Luminous: ETA on LTS production release?

2017-06-19 Thread Lars Marowsky-Bree
On 2017-06-16T20:09:04, Gregory Farnum wrote: > There's a lot going into this release, and things keep popping up. I > suspect it'll be another month or two, but I doubt anybody is capable of > giving a more precise date. :/ The downside of giving up on train > releases...

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Alfredo Deza
On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: > On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: >> On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD >> wrote: >>> I would prefer that this is something more generic, to

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread John Spray
On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: > On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD > wrote: >> I would prefer that this is something more generic, to possibly support >> other backends one day, like ceph-volume. Creating one tool

Re: [ceph-users] ceph packages on stretch from eu.ceph.com

2017-06-19 Thread Ronny Aasen
Thanks for the suggestions. I did do a trial with the proxmox ones, on a single-node machine though. But I hope, now that Debian 9 is released and stable, that the ceph repos will include stretch soon.. Hint Hint :) I am itching to try to upgrade my testing cluster. :) kind regards Ronny

Re: [ceph-users] FAILED assert(i.first <= i.last)

2017-06-19 Thread Peter Rosell
That sounds like an easy rule to follow. Thanks again for your reply. /Peter On Mon, 19 June 2017 at 10:19, Wido den Hollander wrote: > > > On 19 June 2017 at 9:55, Peter Rosell wrote: > > > > > > I have my servers on a UPS and shut them down manually the way I

Re: [ceph-users] RadosGW not working after upgrade to Hammer

2017-06-19 Thread Gerson Jamal
Hi, has anyone found a solution for this issue? I upgraded from Firefly to Hammer and I'm facing this problem. Thanks in advance On Mon, Jun 19, 2017 at 10:32 AM, Gerson Jamal wrote: > Hi, > > Has anyone found a solution for this issue? > I upgraded from

Re: [ceph-users] FAILED assert(i.first <= i.last)

2017-06-19 Thread Wido den Hollander
> On 19 June 2017 at 9:55, Peter Rosell wrote: > > > I have my servers on a UPS and shut them down manually the way I usually turn > them off. There was enough power in the UPS after the servers were > shut down because it continued to beep. Anyway, I will wipe it and

Re: [ceph-users] FAILED assert(i.first <= i.last)

2017-06-19 Thread Peter Rosell
I have my servers on a UPS and shut them down manually the way I usually turn them off. There was enough power in the UPS after the servers were shut down because it continued to beep. Anyway, I will wipe it and re-add it. Thanks for your reply. /Peter On Mon, 19 June 2017 at 09:11, Wido den
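For the record, the usual Hammer/Jewel-era sequence for wiping and re-adding a single OSD is roughly the following (osd.2 and /dev/sdX are placeholders; make sure the cluster can tolerate losing the disk first):

    ceph osd out 2
    service ceph stop osd.2            # or: systemctl stop ceph-osd@2 on systemd hosts
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2
    # wipe and re-create it with the tooling of that era
    ceph-disk zap /dev/sdX
    ceph-disk prepare /dev/sdX         # prepare (and the udev-triggered activate) brings it back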

Re: [ceph-users] Kernel RBD client talking to multiple storage clusters

2017-06-19 Thread Wido den Hollander
> On 19 June 2017 at 5:15, Alex Gorbachev wrote: > > > Has anyone run into such a config where a single client consumes storage from > several ceph clusters, unrelated to each other (different MONs and OSDs, > and keys)? > Should be possible, you can simply supply a
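The truncated answer presumably continues with supplying a separate conf and keyring per cluster; a hedged sketch with made-up cluster names and paths:

    # one conf + keyring per cluster under /etc/ceph, e.g.
    #   /etc/ceph/prod.conf    /etc/ceph/prod.client.admin.keyring
    #   /etc/ceph/backup.conf  /etc/ceph/backup.client.admin.keyring
    rbd --cluster prod map rbd/vol1      # the kernel client gets prod's mons and key
    rbd --cluster backup map rbd/vol2
    # or point at the files explicitly
    rbd -c /etc/ceph/backup.conf --keyring /etc/ceph/backup.client.admin.keyring map rbd/vol2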

Re: [ceph-users] FAILED assert(i.first <= i.last)

2017-06-19 Thread Wido den Hollander
> On 18 June 2017 at 16:21, Peter Rosell wrote: > > > Hi, > I have a small cluster with only three nodes, 4 OSDs + 3 OSDs. I have been > running version 0.87.2 (Giant) for over 2.5 years, but a couple of days ago I > upgraded to 0.94.10 (Hammer) and then up to 10.2.7