[ceph-users] POC doc

2015-02-02 Thread Hoc Phan
Hi all, I remember seeing a POC doc from someone on an initial evaluation of Ceph. I would like to see some examples and use cases I should go through. Is there such a doc or blog post? Thanks.

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Mark Kirkwood
On 03/02/15 01:28, Loic Dachary wrote: On 02/02/2015 13:27, Ritesh Raj Sarraf wrote: By the way, I'm trying to build Ceph from master, on Ubuntu Trusty. I hope that is supported? Yes, that's also what I have. Same here - in the event you need to rebuild the whole thing, using

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Somnath Roy
This rbd_cache is only applicable to librbd, not to the kernel rbd. Hope you are testing with a librbd-based env. If not, then the caching effect you are seeing is the filesystem cache. Thanks Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]
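For reference, a minimal ceph.conf sketch of the librbd cache settings Somnath is referring to (values are illustrative assumptions; this only affects librbd clients such as QEMU/libvirt, never the kernel rbd driver):

    [client]
        rbd cache = true
        rbd cache size = 33554432                  # 32 MB, illustrative
        rbd cache max dirty = 25165824             # illustrative
        rbd cache writethrough until flush = true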

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
I'm still missing something. I can check on the monitor to see that the running config on the cluster has rbd cache = false [root@essperf13 ceph]# ceph --admin-daemon /var/run/ceph/ceph-mon.essperf13.asok config show | grep rbd debug_rbd: 0/5, rbd_cache: false, Since rbd caching is a

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Sage Weil
I've never seen this either. The build is slow and there is certainly code reorg that could be done to speed it up but incremental builds definitely work and are used extensively by all developers... sage On Mon, 2 Feb 2015, Loic Dachary wrote: Hi, I re-compile without cleaning and don't

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Loic Dachary
On 02/02/2015 13:27, Ritesh Raj Sarraf wrote: By the way, I'm trying to build Ceph from master, on Ubuntu Trusty. I hope that is supported? Yes, that's also what I have. On Mon, Feb 2, 2015 at 5:51 PM, Ritesh Raj Sarraf r...@researchut.com wrote: Thanks

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Ritesh Raj Sarraf
By the way, I'm trying to build Ceph from master, on Ubuntu Trusty. I hope that is supported? On Mon, Feb 2, 2015 at 5:51 PM, Ritesh Raj Sarraf r...@researchut.com wrote: Thanks Loic. I guess I need to look at the deb building script first then. And now, looking at src/CMakeLists.txt, it is

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread John Spray
While I don't know about this odd boost issue you see, personally I find it useful to always be specific about build targets, as an overall make will build e.g. unit tests -- quite slow. My favourite is make ceph-mds ceph-osd ceph-mon ceph-fuse -- incremental builds are pretty snappy that way.
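Spelled out, the kind of invocation John describes (the -j flag is an added assumption, not part of his quote):

    make ceph-mds ceph-osd ceph-mon ceph-fuse -j$(nproc)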

[ceph-users] features of the next stable release

2015-02-02 Thread Andrei Mikhailovsky
Hi cephers, I've got three questions: 1. Does anyone have an estimation on the release dates of the next stable ceph branch? 2. Will the new stable release have improvements in the following areas: a) working with ssd disks; b) cache tier? 3. Will the new stable release introduce support

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Ritesh Raj Sarraf
Thanks Loic. I guess I need to look at the deb building script first then. And now, looking at src/CMakeLists.txt, it is clear that Ceph does reuse the built libraries. On Mon, Feb 2, 2015 at 5:43 PM, Loic Dachary l...@dachary.org wrote: Hi, I re-compile without cleaning and

Re: [ceph-users] Question about primary OSD of a pool

2015-02-02 Thread Dennis Chen
Hello Sudarshan, Thanks, it should be useful when I want to designate a specific OSD as primary ;-) On Mon, Feb 2, 2015 at 3:50 PM, Sudarshan Pathak sushan@gmail.com wrote: Hello Dennis, You can create a CRUSH rule to select one of the osds as primary, e.g.: rule ssd-primary {
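The quoted rule is truncated; for context, the ssd-primary example from the CRUSH documentation looks roughly like this (it assumes 'ssd' and 'platter' roots exist in your CRUSH map):

    rule ssd-primary {
        ruleset 5
        type replicated
        min_size 5
        max_size 10
        step take ssd
        step chooseleaf firstn 1 type host
        step emit
        step take platter
        step chooseleaf firstn -1 type host
        step emit
    }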

[ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Ritesh Raj Sarraf
Hi, We are currently working on adding changes to a sub-feature of Ceph. My current challenge lies with the build environment of Ceph. Ceph is huge and takes a lot of time to build. The build folder is close to 15 GiB. I would like to re-use the compiled files when adding changes to the

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Loic Dachary
Hi, I re-compile without cleaning and don't have the same problem. It is supported by Ceph, the problem is elsewhere. My 2cts ;-) On 02/02/2015 13:02, Ritesh Raj Sarraf wrote: Hi, We are currently working on adding changes to a sub-feature of Ceph. My current challenge lies with the

Re: [ceph-users] RGW region metadata sync prevents writes to non-master region

2015-02-02 Thread Mark Kirkwood
On 30/01/15 13:39, Mark Kirkwood wrote: On 30/01/15 12:34, Yehuda Sadeh wrote: On Thu, Jan 29, 2015 at 3:27 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: On 30/01/15 11:08, Yehuda Sadeh wrote: What does your regionmap look like? Is it updated correctly on all zones? Regionmap

Re: [ceph-users] features of the next stable release

2015-02-02 Thread Gregory Farnum
It's not merely unstable, it's not actually complete. The XIOMessenger is merged so that things don't get too far out of sync, but it should not be used by anybody except developers who are working on it. :) -Greg On Mon, Feb 2, 2015 at 7:43 PM Nicheal zay11...@gmail.com wrote: 2015-02-03 0:48

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Nicheal
Huh, that is strange. You said you have already cleared the caches on both the client and the OSD node, so the data must be coming directly from the disk. Let's wait for others' ideas. 2015-02-03 11:44 GMT+08:00 Bruce McFarland bruce.mcfarl...@taec.toshiba.com: Yes, I'm using the kernel rbd in Ubuntu 14.04, which makes calls

[ceph-users] ceph reports 10x actuall available space

2015-02-02 Thread pixelfairy
Tried ceph on 3 KVM instances, each with a 40G root drive and 6 virtio disks of 4G each. When I look at available space, instead of some number less than 72G, I get 689G, with 154G used. The journal is in a folder on the root drive. The images were made with virt-builder using ubuntu-14.04 and

Re: [ceph-users] Rbd device on RHEL 6.5

2015-02-02 Thread Nick @ Deltaband
Hi, Short of changing the kernel, we've not found a way... This is how we got it to work using the elrepo kernel-lt (3.10): rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm yum --enablerepo=elrepo-kernel install kernel-lt sed -i 's/default=1/default=0/g' /etc/grub.conf
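For readability, the same steps one per line (verbatim from the quote above, which is truncated after the sed command):

    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-lt
    sed -i 's/default=1/default=0/g' /etc/grub.conf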

Re: [ceph-users] ceph reports 10x actuall available space

2015-02-02 Thread pixelfairy
ceph 0.87 On Mon, Feb 2, 2015 at 7:53 PM, pixelfairy pixelfa...@gmail.com wrote: Tried ceph on 3 KVM instances, each with a 40G root drive and 6 virtio disks of 4G each. When I look at available space, instead of some number less than 72G, I get 689G, with 154G used. The journal is in a

Re: [ceph-users] Question about CRUSH rule set parameter min_size max_size

2015-02-02 Thread Sahana Lokeshappa
Hi Mika, The command below will set the ruleset on the pool: ceph osd pool set poolname crush_ruleset 1 For more info: http://ceph.com/docs/master/rados/operations/crush-map/ Thanks Sahana From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickie ch Sent: Tuesday, February
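A minimal usage sketch, assuming the target pool is named 'rbd' and ruleset 1 is the intended rule (the get call just verifies the change took effect):

    ceph osd pool set rbd crush_ruleset 1
    ceph osd pool get rbd crush_ruleset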

Re: [ceph-users] Selecting between multiple public networks

2015-02-02 Thread Wido den Hollander
On 02/03/2015 03:06 AM, Nick @ Deltaband wrote: Hi Cephers, If there is more than one public network, is it possible to tell a client (RBD) which public network to prefer? We have a working ceph cluster with a separate public (public1) and cluster network. What I'd like to do is add a
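One possible starting point, as a hedged sketch: if I recall correctly, ceph.conf accepts a comma-delimited list of subnets for the public network (subnets below are illustrative). Whether RBD clients will actually prefer the intended subnet is exactly the open question in this thread:

    [global]
        public network = 10.0.1.0/24, 10.0.2.0/24   # public1, public2 (illustrative)
        cluster network = 10.0.3.0/24               # illustrative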

Re: [ceph-users] Selecting between multiple public networks

2015-02-02 Thread Nick @ Deltaband
On 3 February 2015 at 13:54, Wido den Hollander w...@42on.com wrote: On 02/03/2015 03:06 AM, Nick @ Deltaband wrote: Hi Cephers, If there is more than one public network, is it possible to tell a client (RBD) which public network to prefer? We have a working ceph cluster with a separate

[ceph-users] Question about CRUSH rule set parameter min_size max_size

2015-02-02 Thread Vickie ch
Hi, the CRUSH map has two parameters, min_size and max_size. The explanation of min_size is *If a pool makes fewer replicas than this number, CRUSH will NOT select this rule*. For max_size it is *If a pool makes more replicas than this number, CRUSH will NOT select this rule*. The default setting of
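For reference, the stock replicated rule from the default CRUSH map, showing where min_size and max_size sit; a pool whose replica count falls outside [min_size, max_size] will not use the rule:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }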

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Nicheal
It seems you are using the kernel rbd, so rbd_cache does not apply; it is designed for librbd only. The kernel rbd directly uses the system page cache. You said that you have already run something like echo 3 > /proc/sys/vm/drop_caches to invalidate all pages cached in the kernel. So do you test the /dev/rbd1 based

Re: [ceph-users] CEPH BackUPs

2015-02-02 Thread Georgios Dimitrakakis
Hi Christian, On Fri, 30 Jan 2015 01:22:53 +0200 Georgios Dimitrakakis wrote: Urged by a previous post by Mike Winfield, where he suffered a leveldb loss, I would like to know which files are critical for Ceph operation and must be backed up regularly, and how are you people doing it?

[ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-02 Thread Daniel Schneller
Hello! We are planning to upgrade our Ubuntu 14.04.1 based cluster from Ceph Firefly 0.80.7 to 0.80.8. We have 4 nodes, 12x4TB spinners each (plus OS disks). Apart from the 12 OSDs per node, nodes 1-3 have MONs running. The instructions on ceph.com say it is best to first restart the MONs,
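A rough sketch of the usual Firefly-on-Ubuntu-14.04 (upstart) sequence the ceph.com instructions describe -- monitors first, then OSDs one at a time; daemon ids below are placeholders, and whether the packages themselves already trigger restarts is the open question later in this thread:

    # on each monitor node, one at a time
    sudo restart ceph-mon id=$(hostname -s)

    # then on each OSD node, one OSD at a time, waiting for the cluster
    # to report HEALTH_OK before moving on
    sudo restart ceph-osd id=<N>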

Re: [ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread Haomai Wang
There is a more recent discussion in PR https://github.com/ceph/ceph/pull/1665. On Mon, Feb 2, 2015 at 11:05 PM, J-P Methot jpmet...@gtcomm.net wrote: Hi, I've been looking into increasing the performance of my ceph cluster for openstack that will be moved into production soon. It's a full

[ceph-users] JCloud on Ceph

2015-02-02 Thread Alexis KOALLA
Hi all, Is anyone using JCloud on Ceph? Any feedback on the topic is welcome and will be very much appreciated. Regards Alex

[ceph-users] CacheCade to cache pool - worth it?

2015-02-02 Thread mailinglist
Hi, I have a small, 3-node Firefly cluster. Each node hosts 6 OSDs, a 3 TB spinner each. Each host has 2 SSDs used for the journals. Also, each host has 4 SSDs used as a 2 x RAID1 CacheCade array. The cluster is used to host KVM-based virtual machines, about 180 now. I'm thinking about
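For context, the usual commands for putting a cache tier in front of a backing pool; pool names 'cold-pool' and 'hot-pool' are hypothetical, and this is only a sketch of the mechanism being weighed against CacheCade, not a recommendation:

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool
    ceph osd pool set hot-pool hit_set_type bloom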

[ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread J-P Methot
Hi, I've been looking into increasing the performance of my ceph cluster for openstack that will be moved into production soon. It's a full 1TB SSD cluster with 16 OSDs per node over 6 nodes. As I searched for possible tweaks to implement, I stumbled upon unitedstack's presentation at the
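For reference, the tweak under discussion is a single filestore option; a minimal ceph.conf sketch (as the thread notes, it has been buggy on some kernels and is not verified in production here, so treat it as experimental):

    [osd]
        filestore fiemap = true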

Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-02 Thread Alexandre DERUMIER
If you have K=2,M=1 you will survive one node failure. If your failure domain is the host (i.e. there never is more than one chunk per node for any given object), it will also survive two disk failures within a given node because only one of them will have a chunk. It won't be able to resist

Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-02 Thread Alexandre DERUMIER
Hi Alexandre, nice to meet you here ;-) Hi Udo! (Udo from proxmox? ;) With only 3 hosts you can't survive a full node failure, because for that you need hosts >= k + m. And k:1 m:2 doesn't make any sense. I start with 5 hosts and use k:3, m:2. In this case two hdds can fail or one host
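If it helps, the Firefly-era commands for setting up such a profile (profile name, pool name, and PG count below are illustrative assumptions):

    ceph osd erasure-code-profile set ec32 k=3 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec32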

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Gregory Farnum
Are you actually using CMake? It's an alternative and incomplete build system right now; the autotools build chain is the canonical one. (I don't think it should be causing your problem, but...who knows?) -Greg On Mon, Feb 2, 2015 at 4:21 AM Ritesh Raj Sarraf r...@researchut.com wrote: Thanks
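For completeness, the canonical autotools chain of that era looks roughly like this (the -j value is illustrative):

    ./autogen.sh
    ./configure
    make -j4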

Re: [ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-02 Thread Gregory Farnum
Oh, yeah, that'll hurt on a small cluster more than a large one. I'm not sure how much it matters, sorry. On Mon, Feb 2, 2015 at 8:18 AM Daniel Schneller daniel.schnel...@centerdevice.com wrote: On 2015-02-02 16:09:27 +, Gregory Farnum said: That said, for a point release it shouldn't

Re: [ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-02 Thread Gregory Farnum
The packages might trigger restarts; the behavior has fluctuated a bit and I don't know where it is right now. That said, for a point release it shouldn't matter what order stuff gets restarted in. I wouldn't worry about it. :) -Greg On Mon, Feb 2, 2015 at 6:47 AM Daniel Schneller

Re: [ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread Haomai Wang
I mean it seemed fine for the current master branch run under kernel 2.6.32, but I can't be sure there are no other problems because it has not been verified in production. On Tue, Feb 3, 2015 at 12:21 AM, Haomai Wang haomaiw...@gmail.com wrote: Hmm, I think some bugs still exist in 2.6.32. I only tried to make

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread Florent MONTHEL
Hi Mad, 3Gbps, so you will have SATA SSDs? I think you should use 6Gbps controllers to make sure you don't hit SATA limitations. Thanks Sent from my iPhone On 2 Feb 2015, at 09:27, mad Engineer themadengin...@gmail.com wrote: I am trying to create a 5 node cluster using 1 Tb SSD disks with 2
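Back-of-envelope numbers behind that advice (8b/10b line encoding only; real protocol overhead shaves off a bit more):

    SATA II : 3 Gb/s x 0.8 / 8 bits ≈ 300 MB/s usable  -> right at the SSD's rated ~300 MBps
    SATA III: 6 Gb/s x 0.8 / 8 bits ≈ 600 MB/s usable  -> comfortable headroom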

Re: [ceph-users] features of the next stable release

2015-02-02 Thread Gregory Farnum
On Mon, Feb 2, 2015 at 5:27 AM, Andrei Mikhailovsky and...@arhont.com wrote: Hi cephers, I've got three questions: 1. Does anyone have an estimation on the release dates of the next stable ceph branch? We should be branching Hammer from master today, and it's feature-frozen at this point. I

Re: [ceph-users] Update 0.80.7 to 0.80.8 -- Restart Order

2015-02-02 Thread Daniel Schneller
On 2015-02-02 16:09:27 +0000, Gregory Farnum said: That said, for a point release it shouldn't matter what order stuff gets restarted in. I wouldn't worry about it. :) That is good to know. One follow-up then: If the packages trigger restarts, they will most probably do so for *all* daemons

Re: [ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread J-P Methot
Thank you very much. Also thank you for the presentation you made in Paris, it was very instructive. So, from what I understand, the fiemap patch is proven to work on kernel 2.6.32. The good news is that we use the same kernel in our setup. How long has your production cluster been running

Re: [ceph-users] filestore_fiemap and other ceph tweaks

2015-02-02 Thread Haomai Wang
Hmm, I think some bugs still exist in 2.6.32. I only tried to make the write block size aligned (which is already merged into master) but have not verified it in production. Our production cluster runs a customized kernel version based on 3.12. On Tue, Feb 3, 2015 at 12:18 AM, J-P Methot

Re: [ceph-users] Repetitive builds for Ceph

2015-02-02 Thread Ritesh Raj Sarraf
On 02/02/2015 09:35 PM, Gregory Farnum wrote: Are you actually using CMake? It's an alternative and incomplete build system right now; the autotools build chain is the canonical one. (I don't think it should be causing your problem, but...who knows?) While CMake was installed, I doubt if it

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread Florent MONTHEL
Hi, Writes will be distributed every 4MB (the size of an IMAGEV1 RBD object). IMAGEV2 is not fully supported on KRBD (but you can customize the object size and striping). You need to take: - SSD SATA 6Gbps - or SSD SAS 12Gbps (more expensive) Florent Monthel On 2 Feb 2015, at 18:29, mad
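As an illustration of the customization Florent mentions, an rbd create invocation with format 2 and non-default striping (pool/image name and all values are hypothetical; note that the krbd of that era could not map images with non-default striping):

    rbd create --image-format 2 --size 102400 \
        --stripe-unit 65536 --stripe-count 16 rbd/myimage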

Re: [ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread mad Engineer
Thanks Florent, can Ceph distribute writes to multiple hosts? On Mon, Feb 2, 2015 at 10:17 PM, Florent MONTHEL fmont...@flox-arts.net wrote: Hi Mad, 3Gbps, so you will have SATA SSDs? I think you should use 6Gbps controllers to make sure you don't hit SATA limitations

[ceph-users] Selecting between multiple public networks

2015-02-02 Thread Nick @ Deltaband
Hi Cephers, If there is more than one public network, is it possible to tell a client (RBD) which public network to prefer? We have a working ceph cluster with a separate public (public1) and cluster network. What I'd like to do is add a new public (public2) network for some newer clients. The

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
I'm using Ubuntu 14.04 and the kernel rbd, which makes calls into libceph. root@essperf3:/etc/ceph# lsmod | grep rbd rbd 63707 1 libceph 225026 1 rbd root@essperf3:/etc/ceph# I'm doing raw device IO with either fio or vdbench (preferred tool) and there is no

Re: [ceph-users] RBD caching on 4K reads???

2015-02-02 Thread Bruce McFarland
Yes, I'm using the kernel rbd in Ubuntu 14.04, which makes calls into libceph. root@essperf3:/etc/ceph# lsmod | grep rbd rbd 63707 1 libceph 225026 1 rbd root@essperf3:/etc/ceph# I'm doing raw device IO with either fio or vdbench (preferred tool) and there
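A minimal sketch of the kind of raw-device 4K random-read run being discussed (device path, job name, and all parameter values are illustrative assumptions; --direct=1 bypasses the page cache, which matters for the caching question in this thread):

    fio --name=rbd-randread --filename=/dev/rbd1 --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based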

Re: [ceph-users] features of the next stable release

2015-02-02 Thread Nicheal
2015-02-03 0:48 GMT+08:00 Gregory Farnum g...@gregs42.com: On Mon, Feb 2, 2015 at 5:27 AM, Andrei Mikhailovsky and...@arhont.com wrote: Hi cephers, I've got three questions: 1. Does anyone have an estimation on the release dates of the next stable ceph branch? We should be branching

Re: [ceph-users] features of the next stable release

2015-02-02 Thread Gregory Farnum
On Mon, Feb 2, 2015 at 11:28 AM, Andrei Mikhailovsky and...@arhont.com wrote: I'm not sure what you mean about improvements for SSD disks, but the OSD should be generally a bit faster. There are several cache tier improvements included that should improve

[ceph-users] ssd OSD and disk controller limitation

2015-02-02 Thread mad Engineer
I am trying to create a 5 node cluster using 1 TB SSD disks with 2 OSDs on each server. Each server will have a 10G NIC. The SSD disks are of good quality and, per the label, can support ~300 MBps. What are the limiting factors that prevent utilizing the full speed of the SSD disks? Disk controllers are 3