Re: [ceph-users] RGW how to delete orphans

2017-10-02 Thread Christian Wuerdig
Yes, at least that's how I'd interpret the information given in this thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-February/016521.html On Tue, Oct 3, 2017 at 1:11 AM, Webert de Souza Lima wrote: > Hey Christian, > >> On 29 Sep 2017 12:32 a.m.,

[ceph-users] Ceph on ARM meeting canceled

2017-10-02 Thread Leonardo Vaz
Hey Cephers, My apologies for the short notice, but the Ceph on ARM meeting scheduled for tomorrow (Oct 3) has been canceled. Kindest regards, Leo -- Leonardo Vaz Ceph Community Manager Open Source and Standards Team

Re: [ceph-users] MDS crashes shortly after startup while trying to purge stray files.

2017-10-02 Thread Patrick Donnelly
On Thu, Sep 28, 2017 at 5:16 AM, Micha Krause wrote: > Hi, > > I had a chance to catch John Spray at the Ceph Day, and he suggested that I > try to reproduce this bug in Luminous. Did you edit the code before trying Luminous? I also noticed from your original mail that it

Re: [ceph-users] decreasing number of PGs

2017-10-02 Thread Jack
You cannot. On 02/10/2017 21:43, Andrei Mikhailovsky wrote: > Hello everyone, > > what is the safest way to decrease the number of PGs in the cluster. > Currently, I have too many per osd. > > Thanks

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Reed Dier
As someone currently running a collectd/influxdb/grafana stack for monitoring, I am curious whether anyone has seen issues moving Jewel -> Luminous. I thought I remembered reading that collectd wasn’t working perfectly in Luminous, likely not helped by the MGR daemon. Also thought about trying

[ceph-users] decreasing number of PGs

2017-10-02 Thread Andrei Mikhailovsky
Hello everyone, what is the safest way to decrease the number of PGs in the cluster? Currently, I have too many per OSD. Thanks

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Erik McCormick
On Mon, Oct 2, 2017 at 11:55 AM, Matthew Vernon wrote: > On 02/10/17 12:34, Osama Hasebou wrote: >> Hi Everyone, >> >> Is there a guide/tutorial about how to setup Ceph monitoring system >> using collectd / grafana / graphite ? Other suggestions are welcome as >> well ! > > We

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread Matthew Vernon
On 02/10/17 12:34, Osama Hasebou wrote: > Hi Everyone, > > Is there a guide/tutorial about how to setup Ceph monitoring system > using collectd / grafana / graphite ? Other suggestions are welcome as > well ! We just installed the collectd plugin for ceph, and pointed it at our graphite server;
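
As an illustration of the "point it at graphite" step, here is a minimal Python sketch that speaks Graphite's plaintext protocol directly; the host name, the metric path and the sample value are illustrative assumptions, not values taken from this thread (port 2003 is Graphite's usual plaintext listener).

    # Minimal sketch: push one metric to a Graphite plaintext listener.
    # The host, the metric path "ceph.cluster.num_osd_up" and the value
    # are illustrative assumptions, not taken from the thread.
    import socket
    import time

    def send_metric(path, value, host="graphite.example.com", port=2003):
        """Send a single 'path value timestamp' line to Graphite."""
        line = "%s %s %d\n" % (path, value, int(time.time()))
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    if __name__ == "__main__":
        send_metric("ceph.cluster.num_osd_up", 24)

collectd does the same thing through its write_graphite plugin, so this is only a stand-in for what that plugin sends on your behalf.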

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread German Anders
Prometheus has a nice data exporter built in Go, which you can then feed into Grafana or any other tool: https://github.com/digitalocean/ceph_exporter *German* 2017-10-02 8:34 GMT-03:00 Osama Hasebou : > Hi Everyone, > > Is there a guide/tutorial about how to setup Ceph
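
To make the exporter route concrete, here is a minimal Python sketch that scrapes the /metrics endpoint exposed by digitalocean/ceph_exporter and prints the Ceph samples; the URL, including the default port 9128, is an assumption taken from the exporter's documentation, so verify it against your own deployment.

    # Minimal sketch: fetch the ceph_exporter /metrics endpoint and print
    # the samples in the ceph_* namespace. The URL and the default port
    # 9128 are assumptions; adjust for your deployment.
    import urllib.request

    def fetch_ceph_metrics(url="http://localhost:9128/metrics"):
        with urllib.request.urlopen(url, timeout=5) as resp:
            text = resp.read().decode("utf-8")
        # Drop comment lines (# HELP / # TYPE) and non-Ceph metrics.
        return [line for line in text.splitlines() if line.startswith("ceph_")]

    if __name__ == "__main__":
        for sample in fetch_ceph_metrics():
            print(sample)

In the usual setup Prometheus scrapes this endpoint and Grafana then queries Prometheus, rather than the exporter feeding Grafana directly.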

[ceph-users] Discontinuation of cn.ceph.com

2017-10-02 Thread Shengjing Zhu
Hi, According to regulations in China, we, the mirror site mirrors.ustc.edu.cn, are no longer able to serve the domain cn.ceph.com, as it has no ICP license[1]. Please either disable the CNAME record of cn.ceph.com or change it to another mirror such as hk.ceph.com. People can still access our

Re: [ceph-users] Ceph monitoring

2017-10-02 Thread David
If you take Ceph out of your search string you should find loads of tutorials on setting up the popular collectd/influxdb/grafana stack. Once you've got that in place, the Ceph bit should be fairly easy. There are Ceph collectd plugins out there, or you could write your own. On Mon, Oct 2, 2017 at
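
For the "write your own" route, a minimal sketch could shell out to the ceph CLI and pull a few numbers from its JSON output, as below; the JSON field names (health status, pgmap counters) differ between releases, so treat them as assumptions to check against your own "ceph status --format json" output.

    # Minimal roll-your-own collector sketch: shell out to the ceph CLI and
    # extract a few values from its JSON status output. The field names
    # below vary between Ceph releases and are assumptions to verify.
    import json
    import subprocess

    def ceph_status():
        out = subprocess.check_output(["ceph", "status", "--format", "json"])
        return json.loads(out.decode("utf-8"))

    def extract_metrics(status):
        pgmap = status.get("pgmap", {})
        health = status.get("health", {})
        return {
            "health": health.get("status") or health.get("overall_status"),
            "num_pgs": pgmap.get("num_pgs"),
            "bytes_used": pgmap.get("bytes_used"),
        }

    if __name__ == "__main__":
        print(extract_metrics(ceph_status()))

A real collectd plugin would emit these values through collectd's Python plugin interface instead of printing them, but the data-gathering part is the same idea.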

Re: [ceph-users] 1 osd Segmentation fault in test cluster

2017-10-02 Thread Gregory Farnum
Please file a tracker ticket with all the info you have for stuff like this. They’re a lot harder to lose than emails are. ;) On Sat, Sep 30, 2017 at 8:31 AM Marc Roos wrote: > Is this useful for someone? > > > > [Sat Sep 30 15:51:11 2017] libceph: osd5

Re: [ceph-users] Ceph OSD on Hardware RAID

2017-10-02 Thread Vincent Godin
In addition to the points that you made: I noticed on RAID0 disks that read I/O errors are not always trapped by Ceph, leading to unexpected behaviour of the impacted OSD daemon. On both RAID0 and non-RAID disks, an I/O error is logged in /var/log/messages: Oct 2 15:20:37 os-ceph05 kernel: sd
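
A small, hedged way to catch these cases is to scan the kernel log for the kind of lines quoted above; the sketch below does that in Python, with the log path and the matched substrings as assumptions to adjust for your distribution.

    # Minimal sketch: scan the kernel log for block-device I/O errors of the
    # kind quoted above, since Ceph itself may not react to them. The log
    # path and the matched substrings are assumptions; adjust as needed.
    import re

    ERROR_PATTERNS = (
        re.compile(r"I/O error", re.IGNORECASE),
        re.compile(r"Unrecovered read error", re.IGNORECASE),
    )

    def find_disk_errors(log_path="/var/log/messages"):
        hits = []
        with open(log_path, errors="replace") as log:
            for line in log:
                if any(pattern.search(line) for pattern in ERROR_PATTERNS):
                    hits.append(line.rstrip())
        return hits

    if __name__ == "__main__":
        for line in find_disk_errors():
            print(line)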

Re: [ceph-users] [Ceph-announce] Luminous v12.2.1 released

2017-10-02 Thread Fabian Grünbichler
On Thu, Sep 28, 2017 at 05:46:30PM +0200, Abhishek wrote: > This is the first bugfix release of Luminous v12.2.x long term stable > release series. It contains a range of bug fixes and a few features > across CephFS, RBD & RGW. We recommend all the users of 12.2.x series > update. > > For more

[ceph-users] Ceph monitoring

2017-10-02 Thread Osama Hasebou
Hi Everyone, Is there a guide/tutorial on how to set up a Ceph monitoring system using collectd / grafana / graphite? Other suggestions are welcome as well! I found some GitHub solutions but not much documentation on how to implement them. Thanks. Regards, Ossi

[ceph-users] BlueStore questions about workflow and performance

2017-10-02 Thread Sam Huracan
Hi, I'm reading this document: http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf I have 3 questions: 1. Does BlueStore write data (to the raw block device) and metadata (to RocksDB) simultaneously, or sequentially? 2. In my opinion, the performance of BlueStore cannot

Re: [ceph-users] tunable question

2017-10-02 Thread Manuel Lausch
Hi, We have similar issues. After upgrading from Hammer to Jewel, the tunable "chooseleaf_stable" was introduced. If we activate it, nearly all data will be moved. The cluster has 2400 OSDs on 40 nodes across two datacenters and is filled with 2.5 PB of data. We tried to enable it but the
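
Before planning such a change, it can help to confirm whether the tunable is already active. The sketch below asks the cluster for its CRUSH tunables, with the assumption (to be verified) that the command accepts --format json and reports a chooseleaf_stable key.

    # Minimal sketch: read the cluster's CRUSH tunables and report whether
    # chooseleaf_stable is set. That the command accepts --format json and
    # that the key is named "chooseleaf_stable" are assumptions to verify.
    import json
    import subprocess

    def crush_tunables():
        out = subprocess.check_output(
            ["ceph", "osd", "crush", "show-tunables", "--format", "json"])
        return json.loads(out.decode("utf-8"))

    if __name__ == "__main__":
        tunables = crush_tunables()
        print("chooseleaf_stable =", tunables.get("chooseleaf_stable"))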

Re: [ceph-users] zone, zonegroup and resharding bucket on luminous

2017-10-02 Thread Orit Wasserman
On Fri, Sep 29, 2017 at 5:56 PM, Yoann Moulin wrote: > Hello, > > I'm doing some tests with radosgw on Luminous (12.2.1) and I have a few > questions. > > In the documentation[1], there is a reference to "radosgw-admin region get" > but it seems to no longer be available.