[ceph-users] Re: Re: Ceph user management question

2016-09-28 Thread 卢 迪
ok. thanks. From: Daleep Singh Bais Sent: 28 September 2016, 8:14:53 To: 卢 迪; ceph-users@lists.ceph.com Subject: Re: Re: [ceph-users] Ceph user management question Hi Dillon, Please check

[ceph-users] KVM vm using rbd volume hangs on 120s when one of the nodes crash

2016-09-28 Thread wei li
Hi colleagues! I'm using Ceph 10.0.2 to build a Ceph cluster for a production environment, together with the OpenStack L (Liberty) release. I tested a Ceph OSD node crash, e.g. pulling out the power supply or the network cable. At the same time I tried to run some commands in the VM; it
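The 120 s figure typically matches the guest kernel's hung-task warning; on the Ceph side it is worth checking how long it takes for the dead OSDs to be marked down and out while the node is unplugged. A minimal sketch, assuming the monitor id is the short hostname (adjust to your naming):

  # blocked/slow requests reported while the node is down
  ceph health detail | grep -i block
  # how quickly a failed OSD is detected and later marked out
  ceph daemon mon.$(hostname -s) config get osd_heartbeat_grace
  ceph daemon mon.$(hostname -s) config get mon_osd_down_out_interval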

Re: [ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Christian Balzer
Hello, On Wed, 28 Sep 2016 19:36:28 +0200 Sascha Vogt wrote: > Hi Christian, > > On 28.09.2016 at 16:56, Christian Balzer wrote: > > 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log > > of the OSDs. > > > > Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the
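A minimal sketch of the check described above, assuming the cache-tier OSDs live under the usual /var/lib/ceph/osd/ceph-* paths:

  # look for an oversized leveldb LOG file in each OSD's omap directory
  for d in /var/lib/ceph/osd/ceph-*/current/omap; do
      ls -lh "$d"/LOG*
  done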

[ceph-users] OSD Down but not marked down by cluster

2016-09-28 Thread Tyler Bishop
S1148 is down but the cluster does not mark it as such. cluster 3aac8ab8-1011-43d6-b281-d16e7a61b2bd health HEALTH_WARN 3888 pgs backfill 196 pgs backfilling 6418 pgs degraded 52 pgs down 52 pgs peering 1 pgs recovery_wait 3653 pgs stuck degraded 52 pgs stuck inactive 6088 pgs stuck
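To cross-check what the cluster believes against what the host reports, something like the following helps; marking the dead OSDs down by hand is an option if the daemons are really gone (osd.12 is purely illustrative):

  ceph osd tree | grep -i down        # what the cluster already considers down
  ceph daemon osd.12 status           # on the host: fails if the daemon is not running
  ceph osd down 12                    # manually mark a dead-but-still-"up" OSD down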

Re: [ceph-users] Same pg scrubbed over and over (Jewel)

2016-09-28 Thread Arvydas Opulskis
Hi, we have the same situation with one PG on a different cluster of ours. Scrubs and deep-scrubs are running over and over for the same PG (38.34). I've logged a period with a deep-scrub and some scrubs repeating. The OSD log from the primary OSD can be found there:
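A small sketch for confirming whether the scrub stamps actually advance between the repeated runs, using PG 38.34 from the report above:

  # compare the recorded stamps before and after a reported scrub
  ceph pg 38.34 query | grep -E 'last_scrub_stamp|last_deep_scrub_stamp'
  # check whether the PG keeps showing up in a scrubbing state
  ceph pg dump pgs_brief | grep ^38.34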

[ceph-users] Attempt to access beyond end of device

2016-09-28 Thread Brady Deetz
The question: Is this something I need to investigate further, or am I being paranoid? It seems bad to me. I have a fairly new cluster built using ceph-deploy 1.5.34-0, Ceph 10.2.2-0, and CentOS 7.2.1511. I recently noticed on every one
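One low-risk way to narrow this down is to compare the size the kernel complains about with the actual device and partition sizes; a sketch, with /dev/sdb1 as a purely illustrative device:

  dmesg | grep -i 'beyond end of device'   # which device and sector the warning refers to
  blockdev --getsz /dev/sdb1               # partition size in 512-byte sectors
  lsblk -b /dev/sdb                        # partition layout in bytes, for comparison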

Re: [ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Sascha Vogt
Hi Christian, On 28.09.2016 at 16:56, Christian Balzer wrote: > 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log > of the OSDs. > > Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the cache tier and > most likely discover a huge "LOG" file. You're right, it was

Re: [ceph-users] Ceph Very Small Cluster

2016-09-28 Thread Vasu Kulkarni
On Wed, Sep 28, 2016 at 8:03 AM, Ranjan Ghosh wrote: > Hi everyone, > > Up until recently, we were using GlusterFS to have two web servers in sync > so we could take one down and switch back and forth between them - e.g. for > maintenance or failover. Usually, both were running,

Re: [ceph-users] fixing zones

2016-09-28 Thread Michael Parson
On Wed, 28 Sep 2016, Orit Wasserman wrote: see below On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote: We googled around a bit and found the fix-zone script: https://raw.githubusercontent.com/yehudasa/ceph/wip-fix-default-zone/src/fix-zone Which ran fine until the
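For anyone following along, the zone configuration the script is supposed to repair can be inspected before and after running it; a sketch assuming the default names and a Jewel radosgw-admin:

  radosgw-admin zonegroup get --rgw-zonegroup=default
  radosgw-admin zone get --rgw-zone=default
  radosgw-admin period get      # the committed period the gateways actually use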

[ceph-users] Ceph Very Small Cluster

2016-09-28 Thread Ranjan Ghosh
Hi everyone, Up until recently, we were using GlusterFS to have two web servers in sync so we could take one down and switch back and forth between them - e.g. for maintenance or failover. Usually, both were running, though. The performance was abysmal, unfortunately. Copying many small files

Re: [ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Christian Balzer
On Wed, 28 Sep 2016 14:08:43 +0200 Sascha Vogt wrote: > Hi all, > > we currently experience a few "strange" things on our Ceph cluster and I > wanted to ask if anyone has recommendations for further tracking them > down (or maybe even an explanation already ;) ) > > Ceph version is 0.94.5 and

Re: [ceph-users] RGW multisite replication failures

2016-09-28 Thread Ben Morrice
and subsequent retries of the sync fail with a return code of -5. Any other suggestions? 2016-09-28 16:14:52.145933 7f84609e3700 20 rgw meta sync: entry: name=20160928:bbp-gva-master.106061599.1 2016-09-28 16:14:52.145994 7f84609e3700 20 rgw meta sync: entry: name=20160928:bbp-gva-master.106061599.1
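When retries keep failing with a return code like -5 (EIO), the overall sync state is usually the first thing to check on the secondary zone; a minimal sketch using the standard Jewel commands:

  radosgw-admin sync status             # summary of metadata and data sync
  radosgw-admin metadata sync status    # metadata sync markers in detail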

Re: [ceph-users] Bcache, partitions and BlueStore

2016-09-28 Thread Wido den Hollander
> On 26 September 2016 at 19:51, Sam Yaple wrote: > > > On Mon, Sep 26, 2016 at 5:44 PM, Wido den Hollander wrote: > > > > > > On 26 September 2016 at 17:48, Sam Yaple wrote: > > > > > > > > > On Mon, Sep 26, 2016 at 9:31 AM, Wido den

[ceph-users] v10.2.3 Jewel Released

2016-09-28 Thread Abhishek Lekshmanan
This point release fixes several important bugs in RBD mirroring, RGW multi-site, CephFS, and RADOS. We recommend that all v10.2.x users upgrade. Notable changes in this release include: * build/ops: 60-ceph-partuuid-workaround-rules still needed by debian jessie (udev 215-17) (#16351,

[ceph-users] Radosgw Orphan and multipart objects

2016-09-28 Thread William Josefsson
Hi, I'm on CentOS 7 / Hammer 0.94.9 (upgraded from 0.94.7, where the RGW S3 objects were created) and I have radosgw multipart and shadow objects in .rgw.buckets even though I deleted all buckets two weeks ago. Can anybody advise on how to prune or garbage-collect the orphan and multipart objects? Please help. Thanks
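There is no single-command cleanup for this; a hedged sketch of the usual approach (the job id is made up, and the object lists should be reviewed before deleting anything):

  # let the normal garbage collector catch up first
  radosgw-admin gc list --include-all
  radosgw-admin gc process
  # then scan for leaked multipart/shadow objects (available in recent Hammer point releases and in Jewel)
  radosgw-admin orphans find --pool=.rgw.buckets --job-id=orphan-scan-1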

Re: [ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Sascha Vogt
Hi Burkhard, thanks a lot for the quick response. On 28.09.2016 at 14:15, Burkhard Linke wrote: > someone correct me if I'm wrong, but removing objects in a cache tier > setup results in empty objects which act as markers for deleting the > object on the backing store. I've seen the same

Re: [ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Burkhard Linke
Hi, someone correct me if I'm wrong, but removing objects in a cache tier setup results in empty objects which act as markers for deleting the object on the backing store. I've seen the same pattern you have described in the past. As a test you can try to evict all objects from the cache
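The eviction test mentioned above can be done with the rados tool; a short sketch, with the cache pool name as a placeholder:

  # flush dirty objects and evict everything from the cache tier
  rados -p cache-pool cache-flush-evict-all
  # compare object counts before and after, to see whether the deletion markers go away
  rados df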

[ceph-users] Ceph with Cache pool - disk usage / cleanup

2016-09-28 Thread Sascha Vogt
Hi all, we currently experience a few "strange" things on our Ceph cluster and I wanted to ask if anyone has recommendations for further tracking them down (or maybe even an explanation already ;) ) Ceph version is 0.94.5 and we have an HDD-based pool with a cache pool on NVMe SSDs in front of

Re: [ceph-users] rgw multi-site replication issues

2016-09-28 Thread Orit Wasserman
On Tue, Sep 27, 2016 at 10:19 PM, John Rowe wrote: > Hi Orit, > > It appears it must have been one of the known bugs in 10.2.2. I just > upgraded to 10.2.3 and bi-directional syncing now works. > Good > I am still seeing some errors when I run synch-related commands

Re: [ceph-users] fixing zones

2016-09-28 Thread Orit Wasserman
see below On Tue, Sep 27, 2016 at 8:31 PM, Michael Parson wrote: > (I tried to start this discussion on IRC, but I wound up with the wrong > paste buffer and wound up getting kicked off for a paste flood, sorry, > that was on me :( ) > > We were having some weirdness with our

Re: [ceph-users] Adding new monitors to production cluster

2016-09-28 Thread Nick @ Deltaband
On 28 September 2016 at 19:22, Wido den Hollander wrote: > > > > On 28 September 2016 at 0:35, "Nick @ Deltaband" > > wrote: > > > > > > Hi Cephers, > > > > We need to add two new monitors to a production cluster (0.94.9) which has > > 3 existing monitors. It

Re: [ceph-users] Adding new monitors to production cluster

2016-09-28 Thread Wido den Hollander
> On 28 September 2016 at 0:35, "Nick @ Deltaband" wrote: > > > Hi Cephers, > > We need to add two new monitors to a production cluster (0.94.9) which has > 3 existing monitors. It looks like it's as easy as ceph-deploy mon add <mon>. > You are going to add two
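A hedged sketch of that procedure, adding the monitors one at a time and verifying quorum in between (mon4/mon5 are illustrative hostnames):

  ceph-deploy mon add mon4                 # add the fourth monitor and wait for it to join
  ceph quorum_status --format json-pretty  # confirm quorum_names now lists four monitors
  ceph-deploy mon add mon5                 # only then add the fifth
  ceph -s                                  # should report 5 mons in quorum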

[ceph-users] Troubles seting up radosgw

2016-09-28 Thread Iban Cabrillo
Dear Admins, During the last day I have been trying to deploy a new radosgw, following the Jewel guide; the Ceph cluster is healthy (3 mon and 2 OSD servers). [root@cephrgw ceph]# ceph -v ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b) [root@cephrgw ceph]# rpm -qa | grep ceph
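For comparison, the Jewel guide boils down to roughly the following on the gateway host; a sketch only, with the instance name rgw.cephrgw inferred from the hostname above and possibly different from the original setup:

  # create the gateway key with the usual caps
  ceph auth get-or-create client.rgw.cephrgw osd 'allow rwx' mon 'allow rw' \
      -o /var/lib/ceph/radosgw/ceph-rgw.cephrgw/keyring
  # with the [client.rgw.cephrgw] section in ceph.conf in place, start the service
  systemctl start ceph-radosgw@rgw.cephrgw
  systemctl status ceph-radosgw@rgw.cephrgw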

Re: [ceph-users] Re: Ceph user management question

2016-09-28 Thread Daleep Singh Bais
Hi Dillon, Please check http://docs.ceph.com/docs/firefly/rados/operations/auth-intro/#ceph-authorization-caps http://docs.ceph.com/docs/jewel/rados/operations/user-management/ This might provide some information on permissions. Thanks, Daleep Singh Bais On 09/28/2016 11:28 AM, 卢 迪 wrote:
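As a concrete illustration of the caps described in those documents, a CephX user restricted to a single pool might be created like this (the user and pool names are made up for the example):

  # read-only on the monitors, read/write limited to one pool
  ceph auth get-or-create client.dillon mon 'allow r' osd 'allow rw pool=appdata'
  ceph auth get client.dillon      # show the stored key and caps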