[ceph-users] metadata management in case of ceph object storage and ceph block storage

2015-04-04 Thread pragya jain
hello all! As the documentation says, "One of the unique features of Ceph is that it decouples data and metadata." For applying this mechanism of decoupling, Ceph uses a Metadata Server (MDS) cluster. The MDS cluster manages metadata operations, like opening or renaming a file. On the other hand, Ceph

Re: [ceph-users] Recovering incomplete PGs with ceph_objectstore_tool

2015-04-04 Thread Chris Kitzmiller
On Apr 3, 2015, at 12:37 AM, LOPEZ Jean-Charles jelo...@redhat.com wrote: according to your ceph osd tree capture, although the OSD reweight is set to 1, the OSD CRUSH weight is set to 0 (2nd column). You need to assign the OSD a CRUSH weight so that it can be selected by CRUSH: ceph osd
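The fix described above can be sketched as follows. These commands require a live cluster; the OSD id `osd.0` and the weight `1.0` are assumptions for illustration, not values from the thread:

```shell
# Check the current CRUSH weight (2nd column of the output);
# a weight of 0 means CRUSH will never select this OSD
ceph osd tree

# Assign a non-zero CRUSH weight so the OSD can be selected.
# By convention the weight is the disk size in TB, e.g. 1.0 for a 1 TB disk.
ceph osd crush reweight osd.0 1.0

# Watch the cluster rebalance data onto the OSD
ceph -w
```

Note that `ceph osd reweight` (the override reweight, already at 1 here) and `ceph osd crush reweight` (the CRUSH weight) are different settings; only the latter fixes this case.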

[ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Justin Chin-You
Hi All, Hoping someone can help me understand Ceph HA or point me in the direction of a doc I missed. I understand how Ceph HA itself works with regard to PGs, OSDs and monitoring. However, what isn't clear to me is failover with regard to things like iSCSI and the not yet production ready

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Eric Eastman
You may want to look at the Clustered SCSI Target Using RBD Status Blueprint, Etherpad and video at: https://wiki.ceph.com/Planning/Blueprints/Hammer/Clustered_SCSI_target_using_RBD http://pad.ceph.com/p/I-scsi

[ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
Folks, I am having a hard time setting up a fresh install of GIANT on a fresh install of RHEL7 - which you would think would be about the easiest of all situations... 1. Using ceph-deploy 1.5.22 - for some reason it never updates /etc/yum.repos.d to include all of the various ceph

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Iain Geddes
Hi Justin, I could be wrong on this, but you're having to use a Ceph gateway rather than natively interacting with the cluster, right? If so, then the only way that you'd really be able to get HA would be to install a load balancer in front of multiple gateways. Under normal conditions
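The load-balancer approach suggested above might look like this minimal HAProxy sketch. The hostnames, the backend port, and the health check are assumptions for illustration, not details from the thread; adjust them to your gateway deployment:

```
frontend rgw_frontend
    bind *:80
    default_backend rgw_backend

backend rgw_backend
    balance roundrobin
    # Mark a gateway down if it stops answering HTTP
    option httpchk GET /
    server rgw1 rgw1.example.com:7480 check
    server rgw2 rgw2.example.com:7480 check
```

With two or more gateways behind the balancer, losing one gateway node no longer interrupts client access.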

Re: [ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
OK, apparently it's also a good idea to install EPEL, not just copy over the repo configuration from another installation. That resolved the key error, and it appears that I have it all installed. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent:

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Don Doerner
Hi Justin, Ceph proper does not provide those services. Ceph does provide Linux block devices (look for RADOS Block Devices, aka RBD) and a filesystem, CephFS. I don’t know much about the filesystem, but the block devices are present on an RBD client that you set up, following the
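Setting up such a block device on an RBD client typically looks like the sketch below. The pool (`rbd`, the default), image name, and size are illustrative assumptions; the commands require a configured client with cluster access:

```shell
# Create a 10 GB image (size is given in MB) in the default 'rbd' pool
rbd create mydisk --size 10240

# Map it on the client via the kernel RBD module; a /dev/rbdX device appears
sudo rbd map mydisk

# From here it behaves like any local block device
sudo mkfs.xfs /dev/rbd0
sudo mount /dev/rbd0 /mnt
```

Services like iSCSI, CIFS or NFS would then be layered on top of such a device by a separate gateway host, which is where the HA question in this thread comes in.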

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Justin Chin-You
Thank you all!! This all makes more sense now. I think I know the direction we are heading. Justin On Apr 4, 2015 6:18 PM, Don Doerner don.doer...@quantum.com wrote: Hi Justin, Ceph, proper, does not provide those services. Ceph *does* provide Linux block devices (look for Rados

Re: [ceph-users] Understanding High Availability - iSCSI/CIFS/NFS

2015-04-04 Thread Wido den Hollander
On 04/04/2015 03:30 PM, Justin Chin-You wrote: Hi All, Hoping someone can help me understand CEPH HA or point me in the direction of a doc I missed. I understand how CEPH HA itself works in regards to PG, OSD and Monitoring. However what isn't clear for me is the failover in regards to

Re: [ceph-users] Install problems GIANT on RHEL7

2015-04-04 Thread Don Doerner
Key problem resolved by actually installing (as opposed to simply configuring) the EPEL repo. And with that, the cluster became viable. Thanks all. -don- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Don Doerner Sent: 04 April, 2015 09:47 To: ceph-us...@ceph.com
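The "actually installing" step on RHEL 7 is typically done as below. The EPEL release URL is the standard one published by the Fedora project, and the node name is an assumption; verify both for your environment:

```shell
# Install the EPEL repo *package* rather than hand-copying a .repo file,
# so the GPG keys ship along with the repo definition (avoids the key error)
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

# Ceph's EPEL dependencies should then resolve during deployment
ceph-deploy install --release giant node1
```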

[ceph-users] OSD auto-mount after server reboot

2015-04-04 Thread shiva rkreddy
Hi, I'm currently testing Firefly 0.80.9 and noticed that OSDs are not auto-mounted after a server reboot. They used to auto-mount with Firefly 0.80.7. OS is RHEL 6.5. There was another thread earlier on this topic with v0.80.8; the suggestion was to add mount points to /etc/fstab. Question is whether
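The /etc/fstab workaround mentioned in the earlier thread looks like the entry below. The UUID, OSD id, and filesystem are example values, not from this thread; referencing the partition by UUID is safer than a raw device name, which can change across reboots:

```
# /etc/fstab entry for an OSD data partition (example values)
UUID=3f1b2c4d-0000-0000-0000-000000000000  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0 0
```

The partition UUID can be found with `blkid` on the OSD's data device.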