[ceph-users] radosgw leaked orphan objects

2016-12-02 Thread Marius Vaitiekunas
Hi Cephers, I would like to ask more about this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1254398 On our backup cluster we've run a search for leaked objects: # radosgw-admin orphans find --pool=.rgw.buckets --job-id=bck1 The result is 131288 objects. Before running radosgw-admin orphans finish, I wou
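
In case it helps the archive, a rough sketch of the usual orphans workflow (command names from the Jewel-era radosgw-admin; the pool and job-id are simply the ones from the post, and the object name below is a placeholder — verify everything on your own cluster before deleting anything):

  radosgw-admin orphans find --pool=.rgw.buckets --job-id=bck1   # scan; reports the leaked objects it finds
  radosgw-admin orphans list-jobs                                # show outstanding scan jobs
  rados -p .rgw.buckets rm <leaked-object-name>                  # removing the orphans themselves is a manual step
  radosgw-admin orphans finish --job-id=bck1                     # cleans up the scan's own intermediate data only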

[ceph-users] mds reconnect timeout

2016-12-02 Thread Xusangdi
Hi John, In our environment we want to deploy the MDS and the cephfs client on the same node (users actually use cifs/nfs to access ceph storage). However, it takes a long time to recover if the node with the active MDS fails, and a large part of that time is the new MDS waiting for all clients to reconnect. T
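
For reference, a hedged sketch of the knobs usually mentioned for shortening this window (option names and defaults from memory of the Jewel era — verify with ceph daemon mds.<name> config show):

  [mds]
  mds reconnect timeout = 15        # how long a recovering MDS waits for old clients (default is around 45s)
  mds standby replay = true         # run a standby-replay daemon following the journal for faster takeover

  ceph daemon mds.<name> config set mds_reconnect_timeout 15     # change it on a running daemon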

[ceph-users] How to create two isolated rgw services in one ceph cluster?

2016-12-02 Thread piglei
Hi, I am a ceph newbie. I want to create two isolated rgw services in a single ceph cluster, the requirements: - Two radosgw will have different hosts, such as radosgw-x.site.com and radosgw-y.site.com. *Files uploaded to rgw-x cannot be accessed via rgw-y.*

[ceph-users] Sandisk SSDs

2016-12-02 Thread Matteo Dacrema
Hi All, Has anyone ever used or tested the Sandisk Cloudspeed Eco II 1.92TB with Ceph? I know they have 0.6 DWPD, which with the journal will be only 0.3 DWPD, which means 560GB of data per day over 5 years. I need to know about the performance side. Thanks Matteo
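
For context, the endurance arithmetic behind those figures, assuming the journal sits on the same SSD so every write lands twice:

  0.6 DWPD / 2 (journal double-write)   ≈ 0.3 effective DWPD
  1.92 TB x 0.3                         ≈ 0.58 TB (~560-580 GB) written per day
  0.58 TB/day x 365 x 5 years           ≈ 1 PB total over the warranty period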

Re: [ceph-users] renaming ceph server names

2016-12-02 Thread Andrei Mikhailovsky
*BUMP* > From: "andrei" > To: "ceph-users" > Sent: Tuesday, 29 November, 2016 12:46:05 > Subject: [ceph-users] renaming ceph server names > Hello. > As a part of the infrastructure change we are planning to rename the servers > running ceph-osd, ceph-mon and radosgw services. The IP addresses

[ceph-users] rgw: how to prevent rgw user from creating a new bucket?

2016-12-02 Thread Yang Joseph
Hello, I would like to allow the user only to read objects in an already existing bucket, and not to allow users to create new buckets. I supposed it could be done by executing the following command: $ radosgw-admin metadata put user:test3 < ... ... "caps": [ { "type": "bucket

Re: [ceph-users] renaming ceph server names

2016-12-02 Thread Peter Maloney
I did something like this the other day on a test cluster... can't guarantee the same results, but it worked for me. I don't see an official procedure documented anywhere. I didn't have mds or radosgw. (I also renamed the cluster at the same time... I omitted those steps) assuming services are sto

Re: [ceph-users] renaming ceph server names

2016-12-02 Thread Peter Maloney
On 12/02/16 12:33, Peter Maloney wrote: > # last section on the other mons (using the file produced on > the first) > # repeat on each monitor node > ceph-mon --cluster newname -i newhostname --inject-monmap > /tmp/monmap Correction: do that on all mons.
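
Pulling the thread together, a rough sketch of the monmap-rewrite approach being described (default cluster name assumed — add --cluster as Peter did if you renamed the cluster too — hostnames and IP are placeholders; try it on a throwaway cluster first):

  ceph mon getmap -o /tmp/monmap                          # grab the current monmap while the cluster is healthy
  monmaptool --print /tmp/monmap                          # sanity check
  monmaptool --rm oldhostname /tmp/monmap                 # drop the old mon entry
  monmaptool --add newhostname <mon-ip>:6789 /tmp/monmap  # re-add it under the new name
  # stop every ceph-mon, then on each monitor node:
  ceph-mon -i newhostname --inject-monmap /tmp/monmap
  # for OSD hosts the rename is mostly a crush-map matter, e.g.:
  ceph osd crush rename-bucket oldhostname newhostname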

Re: [ceph-users] New to ceph - error running create-initial

2016-12-02 Thread Oleg Kolosov
Hi, Thank you for your answer. Unfortunately neither workaround solved the issue: 1) Pushing the admin key onto the mon: ubuntu@ip-172-31-38-183:~/my-cluster$ ceph-deploy --username ubuntu admin mon1 [ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf [ceph_deploy.cli][I
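
For the record, the usual ceph-deploy sequence around this step looks roughly like the following (hostname and --username are just the ones from the post):

  ceph-deploy new mon1                                 # write ceph.conf and the initial mon keyring
  ceph-deploy --username ubuntu mon create-initial     # deploy the mon(s) and gather the bootstrap keys
  ceph-deploy --username ubuntu gatherkeys mon1        # re-fetch keys if create-initial failed partway
  ceph-deploy --username ubuntu admin mon1             # push ceph.conf and the admin keyring to the node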

Re: [ceph-users] Is there a setting on Ceph that we can use to fix the minimum read size?

2016-12-02 Thread Thomas Bennett
Hi Steve and Kate, Thanks again for the great suggestions. Increasing the allocsize did not help in the situation relating to my current testing (poor read performance). However, allocsize is a great parameter for overall performance tuning and I intend to use it. :) After discuss
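
For anyone searching later, allocsize can be set cluster-wide via the OSD mount options rather than per-mount by hand; a minimal sketch, assuming XFS filestore OSDs (it takes effect the next time the OSD filesystems are mounted):

  [osd]
  osd mount options xfs = rw,noatime,inode64,allocsize=4M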

Re: [ceph-users] rbd_default_features

2016-12-02 Thread Ilya Dryomov
On Thu, Dec 1, 2016 at 10:31 PM, Florent B wrote: > Hi, > > On 12/01/2016 10:26 PM, Tomas Kukral wrote: >> >> I wasn't successful trying to find table with indexes of features ... >> does anybody know? > > In sources : > https://github.com/ceph/ceph/blob/master/src/include/rbd/features.h There is
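
For the archive, the feature bits as defined in features.h around the Jewel release (worth double-checking against your own tree):

  layering = 1, striping = 2, exclusive-lock = 4, object-map = 8, fast-diff = 16, deep-flatten = 32, journaling = 64

  rbd_default_features = 3     # layering + striping, i.e. krbd-friendly
  rbd_default_features = 61    # layering + exclusive-lock + object-map + fast-diff + deep-flatten (the Jewel default)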

Re: [ceph-users] How to create two isolated rgw services in one ceph cluster?

2016-12-02 Thread Abhishek L
piglei writes: > Hi, I am a ceph newbie. I want to create two isolated rgw services in a > single ceph cluster, the requirements: > > * Two radosgw will have different hosts, such as radosgw-x.site.com and > radosgw-y.site.com. File uploaded to rgw-x cannot be accessed via rgw-y. > * Isolated bu
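
A rough sketch of the separate-realm flavour of this on Jewel multisite (all names here are made up, and the exact flag set is from memory — check radosgw-admin help on your version):

  radosgw-admin realm create --rgw-realm=realm-x
  radosgw-admin zonegroup create --rgw-zonegroup=zg-x --rgw-realm=realm-x --master --default
  radosgw-admin zone create --rgw-zonegroup=zg-x --rgw-zone=zone-x --master --default
  radosgw-admin period update --commit
  # repeat with realm-y / zg-y / zone-y, then point each gateway at its own zone in ceph.conf:
  [client.rgw.radosgw-x]
  rgw_zone = zone-x

With separate realms each gateway keeps its own users, buckets and pools, so objects uploaded through rgw-x are not visible through rgw-y.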

Re: [ceph-users] node and its OSDs down...

2016-12-02 Thread David Turner
If you want to reweight only once when you have a failed disk that is being balanced off of, set the crush weight for that osd to 0.0. Then when you fully remove the disk from the cluster it will not do any additional backfilling. Any change to the crush map will likely move data around, even
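
A minimal sketch of that sequence, with osd.12 standing in for the failed disk:

  ceph osd crush reweight osd.12 0.0     # triggers the one and only rebalance, draining the disk
  # once backfill finishes, removing the OSD causes no further data movement:
  ceph osd out osd.12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm osd.12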

Re: [ceph-users] Migrate OSD Journal to SSD

2016-12-02 Thread Reed Dier
> On Dec 1, 2016, at 6:26 PM, Christian Balzer wrote: > > On Thu, 1 Dec 2016 18:06:38 -0600 Reed Dier wrote: > >> Apologies if this has been asked dozens of times before, but most answers >> are from pre-Jewel days, and want to double check that the methodology still >> holds. >> > It does.
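
For reference, the per-OSD steps usually look roughly like this on a filestore/ceph-disk layout (osd.3 and the partuuid are placeholders):

  ceph osd set noout                              # avoid rebalancing while the OSD is down
  systemctl stop ceph-osd@3
  ceph-osd -i 3 --flush-journal                   # drain the old journal
  ln -sf /dev/disk/by-partuuid/<new-uuid> /var/lib/ceph/osd/ceph-3/journal
  echo <new-uuid> > /var/lib/ceph/osd/ceph-3/journal_uuid
  ceph-osd -i 3 --mkjournal                       # initialise the journal on the new SSD partition
  systemctl start ceph-osd@3
  ceph osd unset noout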

Re: [ceph-users] rgw: how to prevent rgw user from creating a new bucket?

2016-12-02 Thread Yehuda Sadeh-Weinraub
On Fri, Dec 2, 2016 at 3:18 AM, Yang Joseph wrote: > Hello, > > I would like only to allow the user to read the object in a already existed > bucket, and not allow users > to create new bucket. It supposed to execute the following command: > > $ radosgw-admin metadata put user:test3 < ... > ...
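
For anyone landing here later: as far as I know the admin "caps" shown above govern the RGW admin API rather than S3 operations. The knob usually pointed at for blocking bucket creation is the per-user bucket limit, though the meaning of a 0 value has varied between releases (on some it blocks creation, on others it means unlimited), so verify on your version first:

  radosgw-admin user modify --uid=test3 --max-buckets=0
  radosgw-admin user info --uid=test3          # confirm the resulting max_buckets value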

Re: [ceph-users] stalls caused by scrub on jewel

2016-12-02 Thread Dan Jakubiec
For what it's worth... this sounds like the condition we hit when we re-enabled scrub on our 16 OSDs (after 6 to 8 weeks of noscrub). They flapped for about 30 minutes as most of the OSDs randomly hit suicide timeouts here and there. This settled down after about an hour and the OSDs stopped dying.

Re: [ceph-users] stalls caused by scrub on jewel

2016-12-02 Thread Sage Weil
On Fri, 2 Dec 2016, Dan Jakubiec wrote: > For what it's worth... this sounds like the condition we hit we > re-enabled scrub on our 16 OSDs (after 6 to 8 weeks of noscrub). They > flapped for about 30 minutes as most of the OSDs randomly hit suicide > timeouts here and there. > > This settled

Re: [ceph-users] stalls caused by scrub on jewel

2016-12-02 Thread Dan Jakubiec
> On Dec 2, 2016, at 10:48, Sage Weil wrote: > > On Fri, 2 Dec 2016, Dan Jakubiec wrote: >> For what it's worth... this sounds like the condition we hit we >> re-enabled scrub on our 16 OSDs (after 6 to 8 weeks of noscrub). They >> flapped for about 30 minutes as most of the OSDs randomly hit
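
For the archive, the knobs that usually come up for easing scrub back in on jewel look something like this (the values are illustrative only):

  ceph osd set noscrub && ceph osd set nodeep-scrub     # stop the bleeding while the OSDs settle
  ceph tell osd.* injectargs '--osd_scrub_sleep 0.1 --osd_scrub_chunk_max 5 --osd_scrub_during_recovery false'
  ceph osd unset noscrub                                # re-enable shallow scrub first
  ceph osd unset nodeep-scrub                           # deep scrub once things stay stable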

[ceph-users] Ceph QoS user stories

2016-12-02 Thread Sage Weil
Hi all, We're working on getting infrastructure into RADOS to allow for proper distributed quality-of-service guarantees. The work is based on the mclock paper published in OSDI'10 https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Gulati.pdf There are a few ways this can be appl

Re: [ceph-users] Announcing: Embedded Ceph and Rook

2016-12-02 Thread Dan Mick
On 11/30/2016 03:46 PM, Bassam Tabbara wrote: > Hello Cephers, > > I wanted to let you know about a new library that is now available in > master. It's called “libcephd” and it enables the embedding of Ceph > daemons like MON and OSD (and soon MDS and RGW) into other applications. > Using libcephd

[ceph-users] Ceph and rrdtool

2016-12-02 Thread Steve Jankowski
Anyone using rrdtool with Ceph via rados or cephfs? If so, how many rrd files and how many rrd file updates per minute? We have a large population of rrd files that's growing beyond a single machine. We're already using SSD and rrdcached with great success, but it's not enough for the growt

Re: [ceph-users] Announcing: Embedded Ceph and Rook

2016-12-02 Thread Bassam Tabbara
Hi Dan, Is there anyplace you explain in more detail about why this design is attractive? I'm having a hard time imagining why applications would want to try to embed the cluster. Take a look at https://github.com/rook/rook for a small explanation of how we use embedded Ceph. Thanks! Bassam _

Re: [ceph-users] Migrate OSD Journal to SSD

2016-12-02 Thread Warren Wang - ISD
I’ve actually had to migrate every single journal in many clusters from one (horrible) SSD model to a better SSD. It went smoothly. You’ll also need to update your /var/lib/ceph/osd/ceph-*/journal_uuid file. Honestly, the only challenging part was mapping and automating the back and forth conv

Re: [ceph-users] Ceph QoS user stories

2016-12-02 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sage > Weil > Sent: 02 December 2016 19:02 > To: ceph-de...@vger.kernel.org; ceph-us...@ceph.com > Subject: [ceph-users] Ceph QoS user stories > > Hi all, > > We're working on getting infrastu

Re: [ceph-users] Ceph QoS user stories

2016-12-02 Thread Federico Lucifredi
Hi Sage, The primary QoS issue we see with OpenStack users is wanting to guarantee minimum IOPS to each Cinder-mounted RBD volume as a way to guarantee the health of well-mannered workloads against badly-behaving ones. As an OpenStack Administrator, I want to guarantee a minimum number of IOPS

[ceph-users] RBD Image Features not working on Ubuntu 16.04 + Jewel 10.2.3.

2016-12-02 Thread Rakesh Parkiti
Hi All, I. Firstly, as per my understanding, RBD image features (exclusive-lock, object-map, fast-diff, deep-flatten, journaling) are not yet ready in the Ceph Jewel version? II. The only working image feature is "layering". III. Trying to configure rbd-mirroring on two different clusters, which
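
For reference, the usual way around this on a stock Ubuntu 16.04 kernel (whose krbd, as far as I know, only supports layering) is to create krbd-friendly images or strip an existing one down; the pool/image names below are placeholders:

  rbd create rbd/test1 --size 10G --image-feature layering                                    # new, kernel-mappable image
  rbd feature disable rbd/test2 journaling fast-diff object-map deep-flatten exclusive-lock   # strip an existing image
  rbd map rbd/test1

Note that rbd-mirroring needs exclusive-lock and journaling on the image, so mirrored images cannot be mapped with the kernel client; librbd (e.g. via rbd-nbd or QEMU) is the usual route there.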