Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Henrik Korkuc

On 16-05-02 02:14, Stuart Longland wrote:

On 02/05/16 00:32, Henrik Korkuc wrote:

mons generate these bootstrap keys. You can find them in
/var/lib/ceph/bootstrap-*/ceph.keyring

on pre-infernalis these were created automagically (I guess by init).
Infernalis and jewel have a ceph-create-keys@.service systemd job for that.

Just place that directory and file in the same location on the OSD hosts
and you'll be able to activate the OSDs.

Yeah, in my case the OSD hosts are the MON hosts, and there was no such
file or directory created on any of them.  Monitors were running at the
time.

You need to run the ceph-create-keys systemd job to generate these keys
(or run the command it wraps). I am not sure whether it is intentional
that it doesn't run automatically or just some dependency problem. I
think Jewel did create the keys for me; I didn't pay much attention to
it as it is rare for me to start new clusters.
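
Something along these lines on a mon host is usually enough (the mon id
"a" below is just a placeholder for whatever your monitor is called):

systemctl start ceph-create-keys@a        # via the systemd unit, or
ceph-create-keys --cluster ceph --id a    # run the tool directly
ls /var/lib/ceph/bootstrap-*/             # the keyrings should show up here

It just waits for the mon to reach quorum and then writes the bootstrap
keyrings, so it is safe to re-run.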



Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Bill Sharer
I have an active and a standby setup. The failover takes less than a 
minute if you manually stop the active service.  Add whatever the 
timeout is for the failover to happen if things go pear-shaped on the box.
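
If you want to watch it happen, something along these lines does the trick:

ceph mds stat       # note which daemon is currently active
ceph mds fail 0     # fail rank 0 (the active one), or just stop its service
ceph mds stat       # the standby should take over shortly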


Things are back to letters now for mds servers.  I had started with 
letters on firefly as recommended.  Then somewhere (giant?), I was 
getting prodded to use numbers instead.  Now with later hammer and 
infernalis, I'm back to getting scolded for not using letters :-)


I'm holding off on jewel for the moment until I get things straightened 
out with the kde4 to plasma upgrade.  I think that one got stabilized 
before it was quite ready for prime time.  Even then I'll probably take 
a good long time to back up some stuff before I try out the shiny new 
fsck utility.


On 05/01/2016 07:13 PM, Stuart Longland wrote:

Hi Bill,
On 02/05/16 04:37, Bill Sharer wrote:

Actually you didn't need to do a udev rule for raw journals.  Disk
devices in gentoo have their group ownership set to 'disk'.  I only
needed to drop ceph into that in /etc/group when going from hammer to
infernalis.

Yeah, I recall trying that on the Ubuntu-based Ceph cluster at work, and
Ceph still wasn't happy, hence I've gone the route of making the
partition owned by the ceph user.


Did you poke around any of the ceph howto's on the gentoo wiki? It's
been a while since I wrote this guide when I first rolled out with firefly:

https://wiki.gentoo.org/wiki/Ceph/Guide

That used to be https://wiki.gentoo.org/wiki/Ceph before other people
came in behind me and expanded on things

No, hadn't looked at that.


I've pretty much had these bookmarks sitting around forever for adding
and removing mons and osds

http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/

For the MDS server I think I originally went to this blog which also has
other good info.

http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/

That might be my next step, depending on how stable CephFS is now.  One
thing that has worried me is that since you can only deploy one MDS, what
happens if that MDS goes down?

If it's simply a case of spinning up another one, then fine, I can put up
with a little downtime.  If there's data loss though, then no, that's
not good.




[ceph-users] jewel, cephfs and selinux

2016-05-01 Thread Andrus, Brian Contractor
All,

I thought there was a way to mount CephFS using the kernel driver and have it 
honor SELinux labeling.
Right now, if I do 'ls -lZ' on a mounted CephFS, I get question marks instead 
of any contexts.
When I mount it, I see in dmesg:

[858946.554719] SELinux: initialized (dev ceph, type ceph), not configured for 
labeling


Is this something that is in the works and will be available to test?
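
A stopgap that should work, though I haven't verified it, is pinning a single
label on the whole mount with the generic context= option. Monitor address,
credentials and the label below are just examples:

mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:default_t:s0"

That would at least avoid the question marks, but it obviously doesn't give
per-file labels.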


Brian Andrus
ITACS/Research Computing
Naval Postgraduate School
Monterey, California
voice: 831-656-6238




Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Robin H. Johnson
On Sun, May 01, 2016 at 08:46:36PM +1000, Stuart Longland wrote:
> Hi all,
> 
> This evening I was in the process of deploying a ceph cluster by hand.
> I did it by hand because to my knowledge, ceph-deploy doesn't support
> Gentoo, and my cluster here runs that.
You'll want the ceph-disk & ceph-detect-init pieces here:
https://github.com/ceph/ceph/pull/8317

ceph-deploy on Gentoo should only need a little bit of work after this.

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Infrastructure Lead, Foundation Trustee
E-Mail : robb...@gentoo.org
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85


Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Stuart Longland
On 02/05/16 00:32, Henrik Korkuc wrote:
> mons generate these bootstrap keys. You can find them in
> /var/lib/ceph/bootstrap-*/ceph.keyring
> 
> on pre-infernalis these were created automagically (I guess by init).
> Infernalis and jewel have a ceph-create-keys@.service systemd job for that.
> 
> Just place that directory and file in the same location on the OSD hosts
> and you'll be able to activate the OSDs.

Yeah, in my case the OSD hosts are the MON hosts, and there was no such
file or directory created on any of them.  Monitors were running at the
time.
-- 
Stuart Longland
Systems Engineer
VRT Systems | 38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619 | F: +61 7 3535 9699 | http://www.vrt.com.au




Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Stuart Longland
Hi Bill,
On 02/05/16 04:37, Bill Sharer wrote:
> Actually you didn't need to do a udev rule for raw journals.  Disk
> devices in gentoo have their group ownership set to 'disk'.  I only
> needed to drop ceph into that in /etc/group when going from hammer to
> infernalis.

Yeah, I recall trying that on the Ubuntu-based Ceph cluster at work, and
Ceph still wasn't happy, hence I've gone the route of making the
partition owned by the ceph user.

> Did you poke around any of the ceph howto's on the gentoo wiki? It's
> been a while since I wrote this guide when I first rolled out with firefly:
> 
> https://wiki.gentoo.org/wiki/Ceph/Guide
> 
> That used to be https://wiki.gentoo.org/wiki/Ceph before other people
> came in behind me and expanded on things

No, hadn't looked at that.

> I've pretty much had these bookmarks sitting around forever for adding
> and removing mons and osds
> 
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/
> 
> For the MDS server I think I originally went to this blog which also has
> other good info.
> 
> http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/

That might be my next step, depending on how stable CephFS is now.  One
thing that has worried me is that since you can only deploy one MDS, what
happens if that MDS goes down?

If it's simply a case of spinning up another one, then fine, I can put up
with a little downtime.  If there's data loss though, then no, that's
not good.
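
From what I've read it should just be a matter of bringing up a second
ceph-mds daemon, which then sits as a standby.  Untested on my part, but
roughly, with "b" as the new MDS name:

mkdir -p /var/lib/ceph/mds/ceph-b
ceph auth get-or-create mds.b mon 'allow profile mds' osd 'allow rwx' \
    mds 'allow' -o /var/lib/ceph/mds/ceph-b/keyring
ceph-mds -i b     # joins as a standby while the other MDS is active

If that's really all it takes, a little downtime is something I can live with.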
-- 
Stuart Longland
Systems Engineer
VRT Systems | 38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619 | F: +61 7 3535 9699 | http://www.vrt.com.au




Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Bill Sharer
Actually you didn't need to do a udev rule for raw journals.  Disk 
devices in gentoo have their group ownership set to 'disk'.  I only 
needed to drop ceph into that in /etc/group when going from hammer to 
infernalis.
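
i.e. something like:

gpasswd -a ceph disk    # or add ceph to the disk line in /etc/group by hand
ls -l /dev/sd*          # journal devices show group 'disk'

plus a restart of the OSDs so the daemons pick up the new group membership.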


Did you poke around any of the ceph howto's on the gentoo wiki? It's 
been a while since I wrote this guide when I first rolled out with firefly:


https://wiki.gentoo.org/wiki/Ceph/Guide

That used to be https://wiki.gentoo.org/wiki/Ceph before other people 
came in behind me and expanded on things


I've pretty much had these bookmarks sitting around forever for adding 
and removing mons and osds


http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/
http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/

For the MDS server I think I originally went to this blog which also has 
other good info.


http://www.sebastien-han.fr/blog/2013/05/13/deploy-a-ceph-mds-server/


On 05/01/2016 06:46 AM, Stuart Longland wrote:

Hi all,

This evening I was in the process of deploying a ceph cluster by hand.
I did it by hand because to my knowledge, ceph-deploy doesn't support
Gentoo, and my cluster here runs that.

The instructions I followed are these ones:
http://docs.ceph.com/docs/master/install/manual-deployment and I'm
running the 10.0.2 release of Ceph:

ceph version 10.0.2 (86764eaebe1eda943c59d7d784b893ec8b0c6ff9)

Things went okay bootstrapping the monitors.  I'm running a 3-node
cluster, with OSDs and monitors co-located.  Each node has a 1TB 2.5"
HDD and a 40GB partition on SSD for the journal.

Things went pear shaped however when I tried bootstrapping the OSDs.
All was going fine until it came time to activate my first OSD.

ceph-disk activate barfed because I didn't have the bootstrap-osd key.
No one told me I needed to create one, or how to do it.  There's a brief
note about using --activate-key, but no word on what to pass as the
argument.  I tried passing in my admin keyring in /etc/ceph, but it
didn't like that.

In the end, I muddled my way through the manual OSD deployment steps,
which worked fine.  After correcting permissions for the ceph user, I
found the OSDs came up.  As an added bonus, I now know how to work
around the journal permission issue at work since I've reproduced it
here, using a UDEV rules file like the following:

SUBSYSTEM=="block", KERNEL=="sda7", OWNER="ceph", GROUP="ceph", MODE="0600"

The cluster seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.

Regards,




[ceph-users] Change MDS's mode from active to standby

2016-05-01 Thread Jevon Qiao

Hi,

Currently, I'm testing the functionality of multiple MDS in Jewel 
10.2.0. I know increasing the value of max_mds can make the standby MDS 
become active (I have two MDSes and max_mds=1). But I'm wondering if 
there is a way to change the mode back. I tried to decrease max_mds, but 
it did not work. I also tried 'mds_standby_for_name', but got 'Function 
not implemented'.
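
The one thing I haven't tried yet is explicitly deactivating the extra rank
after lowering max_mds, something along the lines of the following (with
"cephfs" being my filesystem's name):

ceph fs set cephfs max_mds 1
ceph mds deactivate 1     # ask rank 1 to stop and drop back to standby

but I'm not sure whether that is the intended way on 10.2.0, hence the question.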


Thanks,
Jevon


[ceph-users] snaps & consistency group

2016-05-01 Thread Yair Magnezi
Hello Guys .

I'm a little bit confused about ceph's capability to take consistent
snapshots across more than one rbd image.

Is there a way to do this? (We're running hammer right now.)
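
The closest workaround I can think of is quiescing I/O myself and snapshotting
each image back to back, roughly as below (pool and image names are just
examples):

fsfreeze -f /mnt/vol1                     # quiesce the filesystem using the images
rbd snap create rbd/image1@grp-20160501
rbd snap create rbd/image2@grp-20160501
fsfreeze -u /mnt/vol1

but that is not a real consistency group, hence the question.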

Thanks

-- 
This e-mail, as well as any attached document, may contain material which 
is confidential and privileged and may include trademark, copyright and 
other intellectual property rights that are proprietary to Kenshoo Ltd, 
 its subsidiaries or affiliates ("Kenshoo"). This e-mail and its 
attachments may be read, copied and used only by the addressee for the 
purpose(s) for which it was disclosed herein. If you have received it in 
error, please destroy the message and any attachment, and contact us 
immediately. If you are not the intended recipient, be aware that any 
review, reliance, disclosure, copying, distribution or use of the contents 
of this message without Kenshoo's express permission is strictly prohibited.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Henrik Korkuc
mons generate these bootstrap keys. You can find them in
/var/lib/ceph/bootstrap-*/ceph.keyring

on pre-infernalis these were created automagically (I guess by init).
Infernalis and jewel have a ceph-create-keys@.service systemd job for that.

Just place that directory and file in the same location on the OSD hosts
and you'll be able to activate the OSDs.
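
i.e. assuming the keyrings already exist on a mon host, something along these
lines (hostname and device below are placeholders):

scp -r /var/lib/ceph/bootstrap-osd osd-host:/var/lib/ceph/
ceph-disk activate /dev/sdb1
# or point ceph-disk at the keyring explicitly:
ceph-disk activate --activate-key /var/lib/ceph/bootstrap-osd/ceph.keyring /dev/sdb1

If the keyring is missing entirely you can also create it by hand, e.g.:

ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
    -o /var/lib/ceph/bootstrap-osd/ceph.keyring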


On 16-05-01 13:46, Stuart Longland wrote:

Hi all,

This evening I was in the process of deploying a ceph cluster by hand.
I did it by hand because to my knowledge, ceph-deploy doesn't support
Gentoo, and my cluster here runs that.

The instructions I followed are these ones:
http://docs.ceph.com/docs/master/install/manual-deployment and I'm
running the 10.0.2 release of Ceph:

ceph version 10.0.2 (86764eaebe1eda943c59d7d784b893ec8b0c6ff9)

Things went okay bootstrapping the monitors.  I'm running a 3-node
cluster, with OSDs and monitors co-located.  Each node has a 1TB 2.5"
HDD and a 40GB partition on SSD for the journal.

Things went pear shaped however when I tried bootstrapping the OSDs.
All was going fine until it came time to activate my first OSD.

ceph-disk activate barfed because I didn't have the bootstrap-osd key.
No one told me I needed to create one, or how to do it.  There's a brief
note about using --activate-key, but no word on what to pass as the
argument.  I tried passing in my admin keyring in /etc/ceph, but it
didn't like that.

In the end, I muddled my way through the manual OSD deployment steps,
which worked fine.  After correcting permissions for the ceph user, I
found the OSDs came up.  As an added bonus, I now know how to work
around the journal permission issue at work since I've reproduced it
here, using a UDEV rules file like the following:

SUBSYSTEM=="block", KERNEL=="sda7", OWNER="ceph", GROUP="ceph", MODE="0600"

The cluster seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.

Regards,




[ceph-users] Deploying ceph by hand: a few omissions

2016-05-01 Thread Stuart Longland
Hi all,

This evening I was in the process of deploying a ceph cluster by hand.
I did it by hand because to my knowledge, ceph-deploy doesn't support
Gentoo, and my cluster here runs that.

The instructions I followed are these ones:
http://docs.ceph.com/docs/master/install/manual-deployment and I'm
running the 10.0.2 release of Ceph:

ceph version 10.0.2 (86764eaebe1eda943c59d7d784b893ec8b0c6ff9)

Things went okay bootstrapping the monitors.  I'm running a 3-node
cluster, with OSDs and monitors co-located.  Each node has a 1TB 2.5"
HDD and a 40GB partition on SSD for the journal.
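
For anyone following along at home: creating such a journal partition with
sgdisk would look roughly like the line below.  The typecode is, as far as I
know, the partition GUID ceph-disk expects for journals; the partition number
and disk are whatever suits your layout.

sgdisk --new=7:0:+40G --typecode=7:45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
    --change-name=7:'ceph journal' /dev/sda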

Things went pear shaped however when I tried bootstrapping the OSDs.
All was going fine until it came time to activate my first OSD.

ceph-disk activate barfed because I didn't have the bootstrap-osd key.
No one told me I needed to create one, or how to do it.  There's a brief
note about using --activate-key, but no word on what to pass as the
argument.  I tried passing in my admin keyring in /etc/ceph, but it
didn't like that.

In the end, I muddled my way through the manual OSD deployment steps,
which worked fine.  After correcting permissions for the ceph user, I
found the OSDs came up.  As an added bonus, I now know how to work
around the journal permission issue at work since I've reproduced it
here, using a UDEV rules file like the following:

SUBSYSTEM=="block", KERNEL=="sda7", OWNER="ceph", GROUP="ceph", MODE="0600"
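
That rule lives in a file under /etc/udev/rules.d/ (the file name is
arbitrary, something like 70-ceph-journal.rules) and takes effect after:

udevadm control --reload
udevadm trigger

or after a reboot.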

The cluster seems to be happy enough now, but some notes on how one
generates the OSD activation keys to use with `ceph-disk activate` would
be a big help.

Regards,
-- 
Stuart Longland
Systems Engineer
VRT Systems | 38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619 | F: +61 7 3535 9699 | http://www.vrt.com.au

