[ceph-users] ceph-deploy issues on RHEL6.4

2013-09-27 Thread Guang
Hi ceph-users, I recently deployed a Ceph cluster with the *ceph-deploy* utility on RHEL6.4, and along the way I came across a couple of issues/questions I would like to ask for your help with. 1. ceph-deploy does not help to install dependencies (snappy leveldb gdisk python-argparse
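A likely workaround (a sketch, not taken from the thread itself; it assumes EPEL is enabled on RHEL6.4) is to install the named dependencies by hand before running ceph-deploy:

    # manually install the dependencies ceph-deploy skips on RHEL6.4
    sudo yum install -y snappy leveldb gdisk python-argparse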

[ceph-users] gateway instance

2013-09-27 Thread lixuehui
Hi all, do gateway instances mean multiple processes of a gateway user for a Ceph cluster? Although they are configured independently in the configuration file, can they be configured with zones across different regions? lixuehui

Re: [ceph-users] ceph-deploy issues on RHEL6.4

2013-09-27 Thread Mariusz Gronczewski
On 2013-09-27, at 15:30:21, Guang yguan...@yahoo.com wrote: Hi ceph-users, I recently deployed a Ceph cluster with the *ceph-deploy* utility on RHEL6.4, and along the way I came across a couple of issues/questions I would like to ask for your help with. 1. ceph-deploy does

Re: [ceph-users] PG distribution scattered

2013-09-27 Thread Niklas Goerke
Sorry for replying only now, I did not get to try it earlier… On Thu, 19 Sep 2013 08:43:11 -0500, Mark Nelson wrote: On 09/19/2013 08:36 AM, Niklas Goerke wrote: […] My Setup: * Two Hosts with 45 Disks each -- 90 OSDs * Only one newly created pool with 4500 PGs and a Replica Size of 2 --
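As a back-of-the-envelope check (not from the thread): with 4500 PGs at replica size 2 spread across 90 OSDs, the expected average is 4500 × 2 / 90 = 100 PG replicas per OSD; the question in this thread is how widely individual OSDs scatter around that mean.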

Re: [ceph-users] ceph-deploy issues on RHEL6.4

2013-09-27 Thread Alfredo Deza
On Fri, Sep 27, 2013 at 3:30 AM, Guang yguan...@yahoo.com wrote: Hi ceph-users, I recently deployed a Ceph cluster with the *ceph-deploy* utility on RHEL6.4, and along the way I came across a couple of issues/questions I would like to ask for your help with. 1. ceph-deploy does not

Re: [ceph-users] CephFS Pool Specification?

2013-09-27 Thread Aronesty, Erik
> You can also create additional data pools and map directories to them, but this probably isn't what you need (yet). Is there a link to a web page where you can read how to map a directory to a pool? (I googled ceph map directory to pool ... and got this post) From:

Re: [ceph-users] CephFS Pool Specification?

2013-09-27 Thread Gregory Farnum
On Fri, Sep 27, 2013 at 7:10 AM, Aronesty, Erik earone...@expressionanalysis.com wrote: > You can also create additional data pools and map directories to them, but this probably isn't what you need (yet). Is there a link to a web page where you can read how to map a directory to a pool?

Re: [ceph-users] CephFS Pool Specification?

2013-09-27 Thread Aronesty, Erik
I see it's the undocumented ceph.dir.layout.pool. Something like: setfattr -n ceph.dir.layout.pool -v mynewpool DIR on an empty dir should work. I'd like one directory to be more heavily mirrored so that a) objects are more likely to be on a less busy server, b) availability
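For reference, the full sequence implied here would look something like this sketch (the pool name and PG count are illustrative, and the add_data_pool step uses 0.67-era syntax):

    # create the pool; a higher replica count gives the heavier mirroring asked about
    ceph osd pool create mynewpool 128
    ceph osd pool set mynewpool size 3
    # register the pool as a CephFS data pool before directories can use it
    ceph mds add_data_pool <pool-id>
    # point an empty directory at the pool; new files created under DIR land there
    setfattr -n ceph.dir.layout.pool -v mynewpool DIR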

[ceph-users] RBD Snap removal priority

2013-09-27 Thread Travis Rhoden
Hello everyone, I'm running a Cuttlefish cluster that hosts a lot of RBDs. I recently removed a snapshot of a large one (rbd snap rm -- 12TB), and I noticed that all of the clients had markedly decreased performance. Looking at iostat on the OSD nodes showed most disks pegged at 100% util. I know

Re: [ceph-users] failure starting radosgw after setting up object storage

2013-09-27 Thread Yehuda Sadeh
On Wed, Sep 25, 2013 at 2:07 PM, Gruher, Joseph R joseph.r.gru...@intel.com wrote: Hi all- I am following the object storage quick start guide. I have a cluster with two OSDs and have followed the steps on both. Both are failing to start radosgw but each in a different manner. All the

Re: [ceph-users] gateway instance

2013-09-27 Thread Yehuda Sadeh
On Fri, Sep 27, 2013 at 1:10 AM, lixuehui lixue...@chinacloud.com.cn wrote: Hi all, do gateway instances mean multiple processes of a gateway user for a Ceph cluster? Although they are configured independently in the configuration file, can they be configured with zones across different regions? Not
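To make the instance/zone distinction concrete, a hypothetical ceph.conf sketch (instance names, hosts, and region/zone values are all illustrative) with two independent gateway instances, each serving its own region and zone:

    [client.radosgw.us-east-1]
        host = gw1
        rgw region = us
        rgw zone = us-east
        rgw socket path = /var/run/ceph/radosgw.us-east-1.sock

    [client.radosgw.eu-west-1]
        host = gw2
        rgw region = eu
        rgw zone = eu-west
        rgw socket path = /var/run/ceph/radosgw.eu-west-1.sock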

Re: [ceph-users] RBD Snap removal priority

2013-09-27 Thread Mike Dawson
[cc ceph-devel] Travis, RBD doesn't behave well when Ceph maintenance operations create spindle contention (i.e. 100% util from iostat). More about that below. Do you run XFS under your OSDs? If so, can you check for extent fragmentation? Should be something like: xfs_db -c frag -r
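Filled out, the check (and a possible remedy) would look like this sketch; the device and mount point are illustrative, and xfs_fsr as a follow-up is a suggestion, not from the thread:

    # report extent fragmentation on an OSD's XFS device (read-only check)
    xfs_db -c frag -r /dev/sdb1
    # if fragmentation is high, defragment the mounted filesystem online
    xfs_fsr /var/lib/ceph/osd/ceph-0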

Re: [ceph-users] Scaling radosgw module

2013-09-27 Thread Mark Nelson
Hi Somnath, With SSDs, you almost certainly are going to be running into bottlenecks on the RGW side... Maybe even fastcgi or apache depending on the machine and how things are configured. Unfortunately this is probably one of the more complex performance optimization scenarios in the Ceph

Re: [ceph-users] Scaling radosgw module

2013-09-27 Thread Somnath Roy
Yes, I understand that. I tried with a thread pool size of 300 (default 100, I believe). I am in the process of running perf on radosgw as well as on the OSDs for profiling. BTW, let me know if there is any particular Ceph component you want me to focus on. Thanks Regards Somnath -Original Message- From:
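For anyone following along, the knob being tuned here is set per gateway instance in ceph.conf (the instance name is illustrative):

    [client.radosgw.gateway]
        rgw thread pool size = 300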

Re: [ceph-users] Scaling radosgw module

2013-09-27 Thread Mark Nelson
Likely on the radosgw side you are going to see the top consumers be malloc/free/memcpy/memcmp. If you have kernel 3.9 or newer compiled with libunwind, you might get better callgraphs in perf which could be helpful. Mark On 09/27/2013 01:56 PM, Somnath Roy wrote: Yes, I understand that..
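A minimal profiling recipe along these lines (a sketch; it assumes perf is installed and a single radosgw process):

    # sample the running gateway with call graphs for 30 seconds
    perf record -g -p $(pidof radosgw) -- sleep 30
    # inspect where CPU time goes (look for malloc/free/memcpy/memcmp)
    perf report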

Re: [ceph-users] RBD Snap removal priority

2013-09-27 Thread Travis Rhoden
Hi Mike, Thanks for the info. I had seen some of the previous reports of reduced performance during various recovery tasks (and certainly experienced them), but you summarized them all quite nicely. Yes, I'm running XFS on the OSDs. I checked fragmentation on a few of my OSDs -- all came back

[ceph-users] OSD: Newbie question regarding ceph-deploy osd create

2013-09-27 Thread Piers Dawson-Damer
Hi, I'm trying to set up my first cluster (I have never manually bootstrapped a cluster). Is ceph-deploy osd activate/prepare supposed to write specific entries for each OSD to the master ceph.conf file, along the lines of http://ceph.com/docs/master/rados/configuration/osd-config-ref/ ? I
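For reference, the usual ceph-deploy flow of that era looked like the sketch below (hostname and device are illustrative); as far as I know it records OSDs in the cluster maps rather than writing per-OSD sections into ceph.conf:

    ceph-deploy osd prepare node1:/dev/sdb
    ceph-deploy osd activate node1:/dev/sdb1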

[ceph-users] Can't mount CephFS - where to start troubleshooting?

2013-09-27 Thread Aaron Ten Clay
Hi, I probably did something wrong setting up my cluster with 0.67.3. I previously built a cluster with 0.61 and everything went well, even after an upgrade to 0.67.3. Now I built a fresh 0.67.3 cluster and when I try to mount CephFS: aaron@seven ~ $ sudo mount -t ceph 10.42.6.21:/ /mnt/ceph

Re: [ceph-users] Can't mount CephFS - where to start troubleshooting?

2013-09-27 Thread Aaron Ten Clay
On Fri, Sep 27, 2013 at 2:44 PM, Gregory Farnum g...@inktank.com wrote: What is the output of ceph -s? It could be something underneath the filesystem.

    root@chekov:~# ceph -s
      cluster 18b7cba7-ccc3-4945-bb39-99450be81c98
       health HEALTH_OK
       monmap e3: 3 mons at {chekov=
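Since the cluster reports HEALTH_OK, one plausible thing to rule out on a fresh 0.67 cluster (a guess, not the thread's resolution; paths are illustrative) is cephx, since the kernel client needs credentials passed explicitly:

    # extract the admin key, then mount with it
    ceph auth get-key client.admin > /etc/ceph/admin.secret
    mount -t ceph 10.42.6.21:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret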

Re: [ceph-users] rdosgw swift subuser creation

2013-09-27 Thread Snider, Tim
Thanks, that worked - you were close. This is another documentation issue on http://ceph.com/docs/next/radosgw/config/ : the --gen-secret parameter requirement isn't mentioned. Enabling Swift Access Allowing access to the object store with Swift (OpenStack Object Storage)
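The working sequence being referred to is presumably something like this (the uid and subuser names are illustrative):

    # create a swift subuser, then generate its secret key
    radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
    radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret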

[ceph-users] OpenStack Grizzly Authentication (Keystone PKI) with RADOS Gateway

2013-09-27 Thread Amit Vijairania
Hello! Does RADOS Gateway support or integrate with OpenStack (Grizzly) authentication (Keystone PKI)? Can RADOS Gateway use PKI tokens to conduct user token verification without explicit calls to Keystone? Thanks! Amit Vijairania | 978.319.3684
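For what it's worth, the relevant ceph.conf knobs look like the sketch below (URL, token, and roles are placeholders); the nss db path setting is what lets radosgw verify PKI tokens locally, after Keystone's signing certificates have been imported into that NSS database, rather than calling Keystone for every token:

    [client.radosgw.gateway]
        rgw keystone url = http://keystone-host:35357
        rgw keystone admin token = ADMIN_TOKEN
        rgw keystone accepted roles = Member, admin
        rgw keystone token cache size = 500
        nss db path = /var/lib/ceph/nss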

Re: [ceph-users] About the data movement in Ceph

2013-09-27 Thread Zh Chen
Thanks Sage for helping me understand Ceph much more deeply! Recently I have another couple of questions, as follows. 1. As we know, ceph -s is a summary of the system's state; is there any tool to monitor the details of the data flow when the CRUSH map is changed? 2. In my understanding, the mapping
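On question 1, the stock tools that come closest (a suggestion, not an answer from the thread) are:

    # stream cluster state changes, including PG remapping and recovery, live
    ceph -w
    # dump the PG-to-OSD mapping, to diff before and after a CRUSH change
    ceph pg dump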