Re: [ceph-users] tracker.ceph.com

2016-12-19 Thread Nathan Cutler
Please let me know if you notice anything is amiss. I haven't received any email notifications since the crash. Normally on a Monday I'd have several dozen. -- Nathan Cutler Software Engineer Distributed Storage SUSE LINUX, s.r.o. Tel.: +420 284 084 037

[ceph-users] How exactly does rgw work?

2016-12-19 Thread Gerald Spencer
Hello all, We're currently waiting on a delivery of equipment for a small 50TB proof of concept cluster, and I've been lurking/learning a ton from you. Thanks for how active everyone is. Question(s): How does the rados gateway work exactly? Does it introduce a single point of failure? Does all
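In rough terms, radosgw (the RADOS Gateway) is a stateless HTTP frontend that translates S3/Swift calls into RADOS operations, so a single gateway instance is a single point of failure for the API endpoint even though the data itself sits in the replicated cluster. The usual answer is to run several radosgw instances behind a load balancer; a minimal Jewel-era sketch, with gw1/gw2 as hypothetical hostnames:

  # ceph.conf sections on the gateway hosts (hostnames are placeholders)
  [client.rgw.gw1]
  host = gw1
  rgw frontends = civetweb port=7480

  [client.rgw.gw2]
  host = gw2
  rgw frontends = civetweb port=7480

Clients then talk to a VIP or HAProxy address in front of both instances, so losing one gateway leaves the S3/Swift endpoint up.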

Re: [ceph-users] centos 7.3 libvirt (2.0.0-10.el7_3.2) and openstack volume attachment w/ cephx broken

2016-12-19 Thread Mike Lowe
Not that I’ve found; it’s a little hard to search for. I believe it’s related to this libvirt mailing list thread: https://www.redhat.com/archives/libvir-list/2016-October/msg00396.html You’ll find this in the libvirt qemu
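For context, the operation that breaks is a hot-plug of an rbd-backed disk carrying a cephx secret; a sketch of the kind of device XML OpenStack hands to libvirt for that attach (domain name, secret UUID, monitor host, pool and volume names are all placeholders):

  # sketch: hot-plug an rbd/cephx-backed virtio-scsi disk into a running guest
  cat > disk.xml <<'EOF'
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <auth username='cinder'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <source protocol='rbd' name='volumes/volume-1234'>
      <host name='mon1' port='6789'/>
    </source>
    <target dev='sdb' bus='scsi'/>
  </disk>
  EOF
  virsh attach-device instance-0000abcd disk.xml --live

With the libvirt build discussed in this thread the attach fails when rbd and cephx auth are involved; the pre-7.3 packages reportedly handle the same XML fine.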

Re: [ceph-users] Unwanted automatic restart of daemons during an upgrade since 10.2.5 (on Trusty)

2016-12-19 Thread Francois Lafont
Hi, On 12/19/2016 09:58 PM, Ken Dreyer wrote: > I looked into this again on a Trusty VM today. I set up a single > mon+osd cluster on v10.2.3, with the following: > > # status ceph-osd id=0 > ceph-osd (ceph/0) start/running, process 1301 > > # ceph daemon osd.0 version >
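A quick way to confirm whether the package upgrade restarted the running daemon is to compare the version reported over the admin socket with the freshly installed binaries; a sketch, assuming osd.0 on the local node:

  # version of the running osd process, via its admin socket
  ceph daemon osd.0 version
  # version of the installed packages/binaries
  ceph -v
  dpkg -l ceph-osd | tail -n 1

If the admin-socket version already matches the new packages before any reboot, the daemon was restarted by the upgrade itself.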

Re: [ceph-users] centos 7.3 libvirt (2.0.0-10.el7_3.2) and openstack volume attachment w/ cephx broken

2016-12-19 Thread Jason Dillaman
Do you happen to know if there is an existing bugzilla ticket against this issue? On Mon, Dec 19, 2016 at 3:46 PM, Mike Lowe wrote: > It looks like the libvirt (2.0.0-10.el7_3.2) that ships with centos 7.3 is > broken out of the box when it comes to hot plugging new

Re: [ceph-users] CephFS metadata inconsistent PG Repair Problem

2016-12-19 Thread Goncalo Borges
Hi Sean, In our case, the last time we had this error, we stopped the OSD, marked it out, let Ceph recover, and then reinstalled it. We did it because we suspected issues with the OSD, and that is why we decided to take this approach. The fact is that the pg we were seeing constantly
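Spelled out, that procedure is roughly the following, with osd.12 as a placeholder id (the stop command differs between upstart and systemd hosts):

  stop ceph-osd id=12          # upstart; on systemd: systemctl stop ceph-osd@12
  ceph osd out 12
  ceph -w                      # watch until recovery finishes and health is OK
  # once the cluster is healthy, remove the OSD so it can be redeployed
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12

After that the disk can be wiped and the OSD recreated with ceph-disk or ceph-deploy.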

Re: [ceph-users] CephFS metadata inconsistent PG Repair Problem

2016-12-19 Thread Wido den Hollander
> On 19 December 2016 at 18:14, Sean Redmond wrote: > > > Hi Ceph-Users, > > I have been running into a few issues with CephFS metadata pool corruption > over the last few weeks. For background please see > tracker.ceph.com/issues/17177 > > # ceph -v > ceph version

[ceph-users] centos 7.3 libvirt (2.0.0-10.el7_3.2) and openstack volume attachment w/ cephx broken

2016-12-19 Thread Mike Lowe
It looks like the libvirt (2.0.0-10.el7_3.2) that ships with centos 7.3 is broken out of the box when it comes to hot plugging new virtio-scsi devices backed by rbd and cephx auth. If you use openstack, cephx auth, and centos, I’d caution against the upgrade to centos 7.3 right now.
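For hosts that have not yet upgraded, one way to avoid picking up the broken build is to pin the currently installed libvirt with the yum versionlock plugin (a sketch; adjust the package glob to what is actually installed on your compute nodes):

  yum install yum-plugin-versionlock
  yum versionlock add 'libvirt*'
  yum versionlock list            # confirm which versions are locked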

Re: [ceph-users] Unwanted automatic restart of daemons during an upgrade since 10.2.5 (on Trusty)

2016-12-19 Thread Ken Dreyer
On Tue, Dec 13, 2016 at 4:42 AM, Francois Lafont wrote: > So, now with the 10.2.5 version, in my process, OSD daemons are stopped, > then automatically restarted by the upgrade, and then stopped again > by the reboot. This is not an optimal process, of course. ;) We do
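Independent of how the packaging behaves, it helps to set the noout flag for the duration of the upgrade so the stop/restart/stop sequence does not trigger unnecessary rebalancing; the usual pattern is:

  ceph osd set noout
  # upgrade the packages and reboot the node
  ceph osd unset noout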

Re: [ceph-users] rgw civetweb ssl official documentation?

2016-12-19 Thread Christian Wuerdig
No official documentation, but here is how I got it to work on Ubuntu 16.04.1 (in this case I'm using a self-signed certificate). Assuming you're running rgw on a computer called rgwnode: 1. create a self-signed certificate: ssh rgwnode openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem
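In general the civetweb SSL setup boils down to combining the key and certificate into a single PEM file and pointing the frontend at it; a sketch of a Jewel-era configuration, with the paths and the client section name as placeholders:

  # on rgwnode: generate the certificate and combine key + cert into one file
  openssl req -x509 -nodes -newkey rsa:4096 -days 365 -subj '/CN=rgwnode' \
      -keyout key.pem -out cert.pem
  cat key.pem cert.pem > /etc/ceph/private/keyandcert.pem

  # ceph.conf on rgwnode (the trailing 's' on the port enables SSL)
  [client.rgw.rgwnode]
  rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem

Restart the radosgw service afterwards and test with curl -k https://rgwnode.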

[ceph-users] CephFS metadata inconsistent PG Repair Problem

2016-12-19 Thread Sean Redmond
Hi Ceph-Users, I have been running into a few issues with CephFS metadata pool corruption over the last few weeks. For background please see tracker.ceph.com/issues/17177 # ceph -v ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367) I am currently facing a side effect of this issue
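For the inconsistent-PG side of this, Jewel can show which object copies disagree before any repair is attempted; a sketch, with the PG id as a placeholder:

  ceph health detail | grep inconsistent
  rados list-inconsistent-obj 5.3d --format=json-pretty
  ceph pg repair 5.3d

Whether a repair is actually safe for a metadata-pool object in this corruption scenario is worth cross-checking against the tracker issue above before running it.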

Re: [ceph-users] Jewel + kernel 4.4 Massive performance regression (-50%)

2016-12-19 Thread Yoann Moulin
Hello, Finally, I found time to do some new benchmarks with the latest Jewel release (10.2.5) on 4 nodes. Each node has 10 OSDs. I ran "ceph tell osd.* bench" twice over the 40 OSDs; here is the average speed: 4.2.0-42-generic 97.45 MB/s, 4.4.0-53-generic 55.73 MB/s, 4.8.15-040815-generic
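For anyone wanting to reproduce the comparison, the per-OSD results can be collected and averaged along these lines (a sketch; it assumes the JSON-style bytes_per_sec field that Jewel's osd bench prints):

  # run the default 1 GB bench on every OSD and average the throughput
  ceph tell osd.* bench 2>/dev/null \
      | awk -F'[:,]' '/bytes_per_sec/ {gsub(/ /, "", $2); sum += $2; n++}
                      END {printf "%.2f MB/s\n", sum / n / 1048576}'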

Re: [ceph-users] fio librbd result is poor

2016-12-19 Thread David Turner
All of our DC S3500 and S3510 drives ran out of writes this week after being in production for 1.5 years as journal drives for 4 disks each. Having 43 drives say they have less than 1% of their writes left is scary. I'd recommend having a monitoring check for your SSDs' durability in Ceph. As a
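A simple version of that check is to watch the SMART wear attribute on the journal SSDs; on Intel DC-series drives this is typically attribute 233, Media_Wearout_Indicator, whose normalized value counts down from 100 (a sketch, the device name is a placeholder):

  # media wearout on an Intel DC S3500/S3510 journal SSD
  smartctl -A /dev/sdb | grep -i media_wearout

Wiring that into the existing monitoring, and alerting well before the value approaches the vendor threshold, leaves time to swap journals before they run out of writes.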