Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
Thanks for the feedback Nick and Zoltan, I have been seeing periodic kernel panics when I used LIO. It was either due to LIO or the kernel rbd mapping. I have seen this on Ubuntu precise with kernel 3.14.14 and again in Ubuntu trusty with the utopic kernel (currently 3.16.0-28). Ironically,

Re: [ceph-users] RGW Enabling non default region on existing cluster - data migration

2015-01-23 Thread Yehuda Sadeh
Also, one more point to consider. A bucket that was created at the default region before a region was set is considered to belong to the master region. Yehuda On Fri, Jan 23, 2015 at 8:40 AM, Yehuda Sadeh yeh...@redhat.com wrote: On Wed, Jan 21, 2015 at 7:24 PM, Mark Kirkwood

Re: [ceph-users] RGW Enabling non default region on existing cluster - data migration

2015-01-23 Thread Yehuda Sadeh
On Wed, Jan 21, 2015 at 7:24 PM, Mark Kirkwood mark.kirkw...@catalyst.net.nz wrote: I've been looking at the steps required to enable (say) multi-region metadata sync where there is an existing RGW that has been in use (i.e. a non-trivial number of buckets and objects) which has been set up without any
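For reference, a rough sketch of how the existing region and bucket layout can be inspected before attempting such a migration (pre-Jewel region/zone naming; the bucket name is a placeholder and exact subcommand spellings varied slightly between releases):

    radosgw-admin region get --rgw-region=default
    radosgw-admin bucket stats --bucket=mybucket
    radosgw-admin metadata get bucket:mybucket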

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Nick Fisk
Thanks for your responses guys, I've been spending a lot of time looking at this recently and I think I'm even more confused than when I started. I've been looking at trying to adapt a resource agent made by Tiger Computing (https://github.com/tigercomputing/ocf-lio) to create a HA LIO

[ceph-users] Ceph with IB and ETH

2015-01-23 Thread German Anders
Hi to all, I've a question regarding Ceph and IB: we plan to migrate our Ethernet Ceph cluster to an InfiniBand FDR 56Gb/s architecture. We are going to use 2x Mellanox IB SX6036G switches for the Public Network and 2x IB SX6018F switches for the Cluster network, and Mellanox FDR ADPT
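A minimal ceph.conf sketch of the split being described, assuming IPoIB (or Ethernet) subnets on the two fabrics; the subnets are placeholders, the option names are the standard Ceph network settings:

    [global]
        # client-facing traffic (assumed IPoIB subnet on the SX6036G fabric)
        public network  = 10.10.10.0/24
        # replication/backfill traffic (assumed IPoIB subnet on the SX6018F fabric)
        cluster network = 10.10.20.0/24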

Re: [ceph-users] Having an issue with: 7 pgs stuck inactive; 7 pgs stuck unclean; 71 requests are blocked 32

2015-01-23 Thread Jean-Charles Lopez
Hi Glen, Run a ceph pg {id} query on one of your stuck PGs to find out what the PG is waiting on before it can complete. Rgds JC On Friday, January 23, 2015, Glen Aidukas gaidu...@behaviormatrix.com wrote: Hello fellow ceph users, I ran into a major issue where two KVM hosts will not start due
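A hedged example of the query JC suggests; the PG id here is hypothetical and would come from the health output:

    # find the stuck PGs, then query one of them
    ceph health detail | grep stuck
    ceph pg 3.7f query     # check the "recovery_state" section for what it is waiting on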

[ceph-users] Having an issue with: 7 pgs stuck inactive; 7 pgs stuck unclean; 71 requests are blocked 32

2015-01-23 Thread Glen Aidukas
Hello fellow ceph users, I ran into a major issue where two KVM hosts will not start due to issues with my Ceph cluster. Here are some details: Running ceph version 0.87. There are 10 hosts with 6 drives each for 60 OSDs. # ceph -s cluster 1431e336-faa2-4b13-b50d-c1d375b4e64b health
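For readers hitting the same symptoms, a sketch of the usual first commands for narrowing down stuck PGs and blocked requests (these are standard ceph CLI subcommands of that era):

    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    ceph osd tree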

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Zoltan Arnold Nagy
Correct me if I'm wrong, but tgt doesn't have full SCSI-3 persistence support when _not_ using the LIO backend for it, right? AFAIK you can either run tgt with its own iSCSI implementation or you can use tgt to manage your LIO targets. I assume when you're running tgt with the rbd backend

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Jake Young
I would go with tgt regardless of your HA solution. I tried to use LIO for a long time and am glad I finally seriously tested tgt. Two big reasons are 1) the latest rbd code will be in tgt 2) two fewer reasons for a kernel panic on the proxy node (rbd and iscsi) For me, I'm comfortable with how my
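For anyone wanting to try the tgt route, a minimal sketch of a tgt target backed directly by rbd; the IQN, pool and image names are made up, and this assumes tgt was built with the rbd backing store:

    # /etc/tgt/conf.d/iscsi-rbd.conf
    <target iqn.2015-01.com.example:rbd-lun0>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmware-lun0    # pool/image
        initiator-address ALL
    </target>

Reloading with something like tgt-admin --update ALL (or restarting tgt) should pick the target up.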

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
It depends. There are a lot of variables, like how many nodes and disks you currently have, whether you are using journals on SSD, how much data is already in the cluster, and what the client load is on the cluster. Since you only have 40 GB in the cluster, it shouldn't take long to backfill. You may
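One knob commonly turned down during an expansion so client IO isn't starved; a sketch with illustrative values, using options that exist in this era of Ceph:

    # throttle backfill/recovery while the new OSDs fill
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

    # and make it persistent in ceph.conf
    [osd]
        osd max backfills = 1
        osd recovery max active = 1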

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Craig Lewis
You've either modified the crushmap, or changed the pool size to 1. The defaults create 3 replicas on different hosts. What does `ceph osd dump | grep ^pool` output? If the size param is 1, then you reduced the replica count. If the size param is 3, then you must've adjusted the crushmap. Either
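To check, and, if the replica count was in fact reduced, to raise it again once enough hosts exist (the pool name is a placeholder; under the default CRUSH rule you need at least as many hosts as replicas):

    ceph osd dump | grep ^pool
    # e.g. with two nodes, two host-level replicas:
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1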

Re: [ceph-users] CEPH Expansion

2015-01-23 Thread Georgios Dimitrakakis
Hi Craig! For the moment I have only one node with 10 OSDs. I want to add a second one with 10 more OSDs. Each OSD in every node is a 4TB SATA drive. No SSD disks! The data are approximately 40GB and I will do my best to have zero, or at least very very low, load during the expansion process.
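A common way to soften the impact when the new node comes in is to add its OSDs at a low CRUSH weight and step it up gradually; a sketch with illustrative OSD ids and weights (by convention the final weight is roughly the drive size in TB):

    # start the new OSD with a small weight...
    ceph osd crush reweight osd.10 0.2
    # ...let backfill settle, then raise it toward the 4TB drive's full weight
    ceph osd crush reweight osd.10 3.64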

Re: [ceph-users] RBD backup and snapshot

2015-01-23 Thread Frank Yu
I'm also interested in this question. Can anybody give some viewpoint? I wonder: do we really need to back up an image through snapshots when it already lives on a distributed storage system with excellent performance, reliability and scalability? 2015-01-19 18:58 GMT+08:00 Luis Periquito periqu...@gmail.com: Hi, I'm
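For completeness, the snapshot-plus-export pattern the question is about, as a sketch (pool, image and snapshot names are made up):

    # take a point-in-time snapshot
    rbd snap create rbd/vm-disk1@backup-20150123
    # full export of that snapshot
    rbd export rbd/vm-disk1@backup-20150123 /backup/vm-disk1-20150123.img
    # or, for incrementals, export only blocks changed since the previous snapshot
    rbd export-diff --from-snap backup-20150116 rbd/vm-disk1@backup-20150123 /backup/vm-disk1-week04.diff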

Re: [ceph-users] Different flavors of storage?

2015-01-23 Thread Luis Periquito
You have a nice howto here http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/ on how to do this with CRUSH rules. On Fri, Jan 23, 2015 at 6:06 AM, Jason King chn@gmail.com wrote: Hi Don, Take a look at CRUSH settings.
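The linked howto boils down to a separate CRUSH root and rule per disk type; a compressed fragment of a decompiled crushmap in that spirit (bucket and rule names are illustrative, and the host bucket node1-ssd would have to be defined elsewhere in the map):

    root ssd {
        id -10
        alg straw
        hash 0
        item node1-ssd weight 1.000
    }
    rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
    }
    # then point a pool at it:
    #   ceph osd pool set fast-pool crush_ruleset 4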

[ceph-users] remote storage

2015-01-23 Thread Robert Duncan
Hi All, This is my first post. I have been using Ceph OSD in OpenStack Icehouse as part of the Mirantis distribution with Fuel; this is my only experience with Ceph, so as you can imagine it works, but I don't really understand all of the technical details. I am working for a college in

Re: [ceph-users] Different flavors of storage?

2015-01-23 Thread Jason King
Hi Don, Take a look at CRUSH settings. http://ceph.com/docs/master/rados/operations/crush-map/ Jason 2015-01-22 2:41 GMT+08:00 Don Doerner dondoer...@sbcglobal.net: OK, I've set up 'giant' in a single-node cluster, played with a replicated pool and an EC pool. All goes well so far.