Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Wade Holler
I'm interested in this too. Should start testing next week at 1B+ objects
and I sure would like a recommendation of what config to start with.

We learned the hard way that not sharding is very bad at scales like this.
On Wed, Dec 16, 2015 at 2:06 PM Florian Haas  wrote:

> Hi Ben & everyone,
>
> just following up on this one from July, as I don't think there's been
> a reply here then.
>
> On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines  wrote:
> > Anyone have any data on optimal # of shards for a radosgw bucket index?
> >
> > We've had issues with bucket index contention with a few million+
> > objects in a single bucket so i'm testing out the sharding.
> >
> > Perhaps at least one shard per OSD? Or, less? More?
>
> I'd like to make this more concrete: what about having several buckets
> each holding 2-4M objects, created on hammer, with 64 index shards? Is
> that type of fill expected to bring radosgw performance down by a
> factor of 5, versus an unpopulated (empty) radosgw setup?
>
> Ben, you wrote elsewhere
> (
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html
> )
> that you found approx. 900k objects to be the threshold where index
> sharding becomes necessary. Have you found that to be a reasonable
> rule of thumb, as in "try 1-2 shards per million objects in your most
> populous bucket"? Also, do you reckon that beyond that, more shards
> make things worse?
>
> > I noticed some discussion here regarding slow bucket listing with
> > ~200k obj --
> http://cephnotes.ksperis.com/blog/2015/05/12/radosgw-big-index
> > - bucket list seems significantly impacted.
> >
> > But i'm more concerned about general object put  (write) / object read
> > speed since 'bucket listing' is not something that we need to do. Not
> > sure if the index has to be completely read to write an object into
> > it?
>
> This is a question where I'm looking for an answer, too.
>
> Cheers,
> Florian
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] OSDs stuck in booting state on CentOS 7.2.1511 and ceph infernalis 9.2.0

2015-12-16 Thread Bob R
We've been operating a cluster relatively incident-free since 0.86. On
Monday I did a yum update on one node, ceph00, and after rebooting we're
seeing every OSD stuck in the 'booting' state. I've tried removing all of the
OSDs and recreating them with ceph-deploy (ceph-disk required a modification
to use partx -a rather than partprobe), but we see the same status. I'm not
sure how to troubleshoot this further. The OSDs on this host now run as the
ceph user, which may be related to the issue, as the other three hosts are
running as root (although I followed the steps listed for upgrading from
hammer to infernalis and did chown -R ceph:ceph /var/lib/ceph
on each node).
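
For what it's worth, a quick way to compare the ownership and the run-as user
across nodes (a rough sketch only; the setuser match path line is, as far as I
recall, the Infernalis option for letting a daemon keep running as root when
its data directory is still root-owned):

# check who owns the OSD data dirs and which user the daemons actually run as
ls -ld /var/lib/ceph/osd/ceph-*
ps -o user,cmd -C ceph-osd

# per the Infernalis release notes, ceph.conf can opt a daemon back into
# running as root wherever the data path is still root-owned:
#   [osd]
#   setuser match path = /var/lib/ceph/$type/$cluster-$id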

[root@ceph00 ceph]# lsb_release -idrc
Distributor ID: CentOS
Description:CentOS Linux release 7.2.1511 (Core)
Release:7.2.1511
Codename:   Core

[root@ceph00 ceph]# ceph --version
ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)

[root@ceph00 ceph]# ceph daemon osd.0 status
{
"cluster_fsid": "2e4ea2c0-fb62-41fa-b7b7-e34d759b851e",
"osd_fsid": "ddf659ad-a3db-4094-b4d0-7d50f34b8f75",
"whoami": 0,
"state": "booting",
"oldest_map": 25243,
"newest_map": 26610,
"num_pgs": 0
}

[root@ceph00 ceph]# ceph daemon osd.3 status
{
"cluster_fsid": "2e4ea2c0-fb62-41fa-b7b7-e34d759b851e",
"osd_fsid": "8b1acd8a-645d-4dc2-8c1d-6dbb1715265f",
"whoami": 3,
"state": "booting",
"oldest_map": 25243,
"newest_map": 26612,
"num_pgs": 0
}

[root@ceph00 ceph]# ceph osd tree
ID  WEIGHT    TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-23   1.43999 root ssd
-19 0 host ceph00_ssd
-20   0.48000 host ceph01_ssd
 40   0.48000 osd.40   up  1.0  1.0
-21   0.48000 host ceph02_ssd
 43   0.48000 osd.43   up  1.0  1.0
-22   0.48000 host ceph03_ssd
 41   0.48000 osd.41   up  1.0  1.0
 -1 120.0 root default
-17  80.0 room b1
-14  40.0 host ceph01
  1   4.0 osd.1up  1.0  1.0
  4   4.0 osd.4up  1.0  1.0
 18   4.0 osd.18   up  1.0  1.0
 19   4.0 osd.19   up  1.0  1.0
 20   4.0 osd.20   up  1.0  1.0
 21   4.0 osd.21   up  1.0  1.0
 22   4.0 osd.22   up  1.0  1.0
 23   4.0 osd.23   up  1.0  1.0
 24   4.0 osd.24   up  1.0  1.0
 25   4.0 osd.25   up  1.0  1.0
-16  40.0 host ceph03
 30   4.0 osd.30   up  1.0  1.0
 31   4.0 osd.31   up  1.0  1.0
 32   4.0 osd.32   up  1.0  1.0
 33   4.0 osd.33   up  1.0  1.0
 34   4.0 osd.34   up  1.0  1.0
 35   4.0 osd.35   up  1.0  1.0
 36   4.0 osd.36   up  1.0  1.0
 37   4.0 osd.37   up  1.0  1.0
 38   4.0 osd.38   up  1.0  1.0
 39   4.0 osd.39   up  1.0  1.0
-18  40.0 room b2
-13 0 host ceph00
-15  40.0 host ceph02
  2   4.0 osd.2up  1.0  1.0
  5   4.0 osd.5up  1.0  1.0
 14   4.0 osd.14   up  1.0  1.0
 15   4.0 osd.15   up  1.0  1.0
 16   4.0 osd.16   up  1.0  1.0
 17   4.0 osd.17   up  1.0  1.0
 26   4.0 osd.26   up  1.0  1.0
 27   4.0 osd.27   up  1.0  1.0
 28   4.0 osd.28   up  1.0  1.0
 29   4.0 osd.29   up  1.0  1.0
  0 0 osd.0  down0  1.0
  3 0 osd.3  down0  1.0
  6 0 osd.6  down0  1.0
  7 0 osd.7  down0  1.0
  8 0 osd.8  down0  1.0
  9 0 osd.9  down0  1.0
 10 0 osd.10 down0  1.0
 11 0 osd.11 down0  1.0
 12 0 osd.12 down0  1.0
 13 0 osd.13 down0  1.0


Any assistance is greatly appreciated.

Bob
___
ceph-users mailing list

Re: [ceph-users] sync writes - expected performance?

2015-12-16 Thread Nikola Ciprich
Hello Mark,

thanks for your explanation, it all makes sense. I've done
some measuring on Google and Amazon clouds as well, and really,
those numbers seem to be pretty good. I'll be playing with
fine-tuning a little bit more, but overall performance
really seems to be quite nice.

Thanks to all of you for your replies guys!
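
In case it helps anyone repeating this, a minimal sketch of the comparison
(assuming /dev/vdb is the RBD-backed device inside the guest - the device name
is just an example, and the fio run is destructive to whatever is on it):

# same O_DSYNC test as on the raw SSD, but against the RBD-backed device in the guest
fio --filename=/dev/vdb --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 \
    --iodepth=1 --runtime=60 --time_based --group_reporting --name=guest-sync-test

# converting IOPS to average per-IO latency (1000 / IOPS, in milliseconds):
echo "scale=2; 1000/16100" | bc    # ~0.06 ms per IO on the raw SSD
echo "scale=2; 1000/600" | bc      # ~1.66 ms per IO inside the guest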

nik


On Mon, Dec 14, 2015 at 11:03:16AM -0600, Mark Nelson wrote:
> 
> 
> On 12/14/2015 04:49 AM, Nikola Ciprich wrote:
> >Hello,
> >
> >I'm doing some measuring on a test (3-node) cluster and see a strange
> >performance drop for sync writes.
> >
> >I'm using SSD for both journalling and OSD. It should be suitable for
> >journal, giving about 16.1KIOPS (67MB/s) for sync IO.
> >
> >(measured using fio --filename=/dev/xxx --direct=1 --sync=1 --rw=write 
> >--bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting 
> >--name=journal-test)
> >
> >On top of this cluster, I have a KVM guest running (using the qemu librbd
> >backend). Overall performance seems to be quite good, but the problem is when
> >I try to measure sync IO performance inside the guest: I'm getting only about
> >600 IOPS, which I think is quite poor.
> >
> >The problem is, I don't see any bottleneck: the OSD daemons don't seem to be
> >hanging on IO or hogging CPU, and the qemu process is not particularly loaded
> >either.
> >
> >I'm using hammer 0.94.5 on top of centos 6 (4.1 kernel), all debugging 
> >disabled,
> >
> >my question is, what results can I expect for synchronous writes? I
> >understand there will always be some performance drop, but 600 IOPS on top
> >of storage which can give as much as 16K IOPS seems too little.
> 
> So basically what this comes down to is latency.  Since you get 16K IOPS for
> O_DSYNC writes on the SSD, there's a good chance that it has a
> super-capacitor on board and can basically acknowledge a write as complete
> as soon as it hits the on-board cache rather than when it's written to
> flash.  Figure that 16K O_DSYNC IOPS means that each IO is completing in
> around 0.06ms on average.  That's very fast!  At 600 IOPS for O_DSYNC writes
> on your guest, you're looking at about 1.6ms per IO on average.
> 
> So how do we account for the difference?  Let's start out by looking at a
> quick example of network latency (This is between two random machines in one
> of our labs at Red Hat):
> 
> >64 bytes from gqas008: icmp_seq=1 ttl=64 time=0.583 ms
> >64 bytes from gqas008: icmp_seq=2 ttl=64 time=0.219 ms
> >64 bytes from gqas008: icmp_seq=3 ttl=64 time=0.224 ms
> >64 bytes from gqas008: icmp_seq=4 ttl=64 time=0.200 ms
> >64 bytes from gqas008: icmp_seq=5 ttl=64 time=0.196 ms
> 
> Now consider that when you do a write in Ceph, you write to the primary OSD,
> which then writes out to the replica OSDs.  Every replica IO has to complete
> before the primary will send the acknowledgment to the client (i.e. you have
> to add the latency of the worst of the replica writes!). In your case, the
> network latency alone is likely dramatically increasing IO latency vs raw
> SSD O_DSYNC writes.  Now add in the time to process crush mappings, look up
> directory and inode metadata on the filesystem where objects are stored
> (assuming it's not cached), and other processing time, and the 1.6ms latency
> for the guest writes starts to make sense.
> 
> Can we improve things?  Likely yes.  There's various areas in the code where
> we can trim latency away, implement alternate OSD backends, and potentially
> use alternate network technology like RDMA to reduce network latency.  The
> thing to remember is that when you are talking about O_DSYNC writes, even
> very small increases in latency can have dramatic effects on performance.
> Every fraction of a millisecond has huge ramifications.
> 
> >
> >Has anyone done similar measuring?
> >
> >thanks a lot in advance!
> >
> >BR
> >
> >nik
> >
> >
> >
> >
> >___
> >ceph-users mailing list
> >ceph-users@lists.ceph.com
> >http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
-
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:+420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: ser...@linuxbox.cz
-


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mount.ceph not accepting options, please help

2015-12-16 Thread Mike Miller

Hi,

sorry, the question might seem very easy, probably my bad, but can you
please help me understand why I am unable to change the read-ahead size and
other options when mounting cephfs?


mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rsize=1024000

the result is:

ceph: Unknown mount option rsize

I am using hammer 0.94.5 and ubuntu trusty.

Thanks for your help!

Mike
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Florian Haas
Hi Ben & everyone,

just following up on this one from July, as I don't think there's been
a reply here then.

On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines  wrote:
> Anyone have any data on optimal # of shards for a radosgw bucket index?
>
> We've had issues with bucket index contention with a few million+
> objects in a single bucket so i'm testing out the sharding.
>
> Perhaps at least one shard per OSD? Or, less? More?

I'd like to make this more concrete: what about having several buckets
each holding 2-4M objects, created on hammer, with 64 index shards? Is
that type of fill expected to bring radosgw performance down by a
factor of 5, versus an unpopulated (empty) radosgw setup?

Ben, you wrote elsewhere
(http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html)
that you found approx. 900k objects to be the threshold where index
sharding becomes necessary. Have you found that to be a reasonable
rule of thumb, as in "try 1-2 shards per million objects in your most
populous bucket"? Also, do you reckon that beyond that, more shards
make things worse?

> I noticed some discussion here regarding slow bucket listing with
> ~200k obj -- http://cephnotes.ksperis.com/blog/2015/05/12/radosgw-big-index
> - bucket list seems significantly impacted.
>
> But i'm more concerned about general object put  (write) / object read
> speed since 'bucket listing' is not something that we need to do. Not
> sure if the index has to be completely read to write an object into
> it?

This is a question where I'm looking for an answer, too.

Cheers,
Florian
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Matt Taylor

Hi Loic,

No problem, I'll add my report to your tracker issue.

I also tried adding the sleep prior to invoking partprobe, but it didn't
work (same error).


See pastebin for complete output:

http://pastebin.com/Q26CeUge

Cheers,
Matt.


On 16/12/2015 19:57, Loic Dachary wrote:

Hi Matt,

Could you please add your report to http://tracker.ceph.com/issues/14080? I
think what you're seeing is a partprobe timeout because things take too long to
complete (that's also why adding a sleep, as mentioned in the mail thread,
sometimes helps). There is a variant of that problem where udevadm settle also
times out (but it is less common on real hardware). I'm testing a fix to make
this more robust.

Cheers

On 16/12/2015 07:17, Matt Taylor wrote:

Hi all,

After recently upgrading to CentOS 7.2 and installing a new Ceph cluster using
Infernalis v9.2.0, I have noticed that disks are failing to prepare.

I have observed the same behaviour over multiple Ceph servers when preparing
disks. All the servers are identical.

Disks are zapping fine; however, when running 'ceph-deploy disk prepare', we're
encountering the following error:


[ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
kvsrv02:/dev/sdr
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : 
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : 
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
[kvsrv02][DEBUG ] connection detected need for sudo
[kvsrv02][DEBUG ] connected to host: kvsrv02
[kvsrv02][DEBUG ] detect platform information from remote host
[kvsrv02][DEBUG ] detect machine type
[kvsrv02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
[kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
activate False
[kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
--fs-type xfs -- /dev/sdr
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdr
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on 
/dev/sdr
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M 
--change-name=2:ceph journal 

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi, 

Some more information showing in the boot.log; 

2015-12-16 07:35:33.289830 7f1b990ad800 -1 
filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on 
/var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument 
2015-12-16 07:35:33.289842 7f1b990ad800 -1 OSD::mkfs: ObjectStore::mkfs failed 
with error -22 
2015-12-16 07:35:33.289883 7f1b990ad800 -1 ** ERROR: error creating empty 
object store in /var/lib/ceph/tmp/mnt.aWZTcE: (22) Invalid argument 
ERROR:ceph-disk:Failed to activate 
ceph-disk: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', 
'--mkkey', '-i', '7', '--monmap', 
'/var/lib/ceph/tmp/mnt.aWZTcE/activate.monmap', '--osd-data', 
'/var/lib/ceph/tmp/mnt.aWZTcE', '--osd-journal', 
'/var/lib/ceph/tmp/mnt.aWZTcE/journal', '--osd-uuid', 
'c83b5aa5-fe77-42f6-9415-25ca0266fb7f', '--keyring', 
'/var/lib/ceph/tmp/mnt.aWZTcE/keyring']' returned non-zero exit status 1 
ceph-disk: Error: One or more partitions failed to activate 

Maybe related to the "(22) Invalid argument" part..? 

/Jesper 

* 

Hi, 

I have done several reboots, and it did not lead to healthy symlinks :-( 

/Jesper 

 

Hi, 

On 16/12/2015 07:39, Jesper Thorhauge wrote: 
> Hi, 
> 
> A fresh server install on one of my nodes (and yum update) left me with 
> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. 
> 
> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but 
> "ceph-disk activate / dev/sda1" fails. I have traced the problem to 
> "/dev/disk/by-partuuid", where the journal symlinks are broken; 
> 
> -rw-r--r-- 1 root root 0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168 
> -rw-r--r-- 1 root root 0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072 
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f 
> -> ../../sdb1 
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 
> -> ../../sda1 
> 
> Re-creating them manually wont survive a reboot. Is this a problem with the 
> udev rules in Ceph 0.94.3+? 

This usually is a symptom of something else going wrong (i.e. it is possible to 
confuse the kernel into creating the wrong symbolic links). The correct 
symlinks should be set when you reboot. 

> Hope that somebody can help me :-) 

Please let us know if rebooting leads to healthy symlinks. 

Cheers 
> 
> Thanks! 
> 
> Best regards, 
> Jesper 
> 
> 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 

-- 
Loïc Dachary, Artisan Logiciel Libre 


___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Ben Hines
Great, glad to see that others are concerned about this.

One serious problem is that the number of index shards cannot be changed
once the bucket has been created. So if you have a bucket that you can't just
recreate easily, you're screwed. Fortunately for my use case I can delete
the contents of our buckets and recreate them if need be, though it takes
time.

Adding the ability to scale up bucket index shards (or just making it fully
dynamic -- the user shouldn't have to worry about this) would be great.
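
For reference, the only knob I'm aware of on hammer is set in ceph.conf and
applies to buckets created after the change (a sketch; the section name and
value are examples, not recommendations):

[client.radosgw.gateway]                      # use your own rgw section name
rgw override bucket index max shards = 32     # example value only

# restart radosgw afterwards; existing buckets keep the shard count they were
# created with, which is exactly the limitation described above.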

-Ben



On Wed, Dec 16, 2015 at 11:25 AM, Wade Holler  wrote:

> I'm interested in this too. Should start testing next week at 1B+ objects
> and I sure would like a recommendation of what config to start with.
>
> We learned the hard way that not sharding is very bad at scales like this.
> On Wed, Dec 16, 2015 at 2:06 PM Florian Haas  wrote:
>
>> Hi Ben & everyone,
>>
>> just following up on this one from July, as I don't think there's been
>> a reply here then.
>>
>> On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines  wrote:
>> > Anyone have any data on optimal # of shards for a radosgw bucket index?
>> >
>> > We've had issues with bucket index contention with a few million+
>> > objects in a single bucket so i'm testing out the sharding.
>> >
>> > Perhaps at least one shard per OSD? Or, less? More?
>>
>> I'd like to make this more concrete: what about having several buckets
>> each holding 2-4M objects, created on hammer, with 64 index shards? Is
>> that type of fill expected to bring radosgw performance down by a
>> factor of 5, versus an unpopulated (empty) radosgw setup?
>>
>> Ben, you wrote elsewhere
>> (
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html
>> )
>> that you found approx. 900k objects to be the threshold where index
>> sharding becomes necessary. Have you found that to be a reasonable
>> rule of thumb, as in "try 1-2 shards per million objects in your most
>> populous bucket"? Also, do you reckon that beyond that, more shards
>> make things worse?
>>
>> > I noticed some discussion here regarding slow bucket listing with
>> > ~200k obj --
>> http://cephnotes.ksperis.com/blog/2015/05/12/radosgw-big-index
>> > - bucket list seems significantly impacted.
>> >
>> > But i'm more concerned about general object put  (write) / object read
>> > speed since 'bucket listing' is not something that we need to do. Not
>> > sure if the index has to be completely read to write an object into
>> > it?
>>
>> This is a question where I'm looking for an answer, too.
>>
>> Cheers,
>> Florian
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Ben Hines
On Wed, Dec 16, 2015 at 11:05 AM, Florian Haas  wrote:

> Hi Ben & everyone,
>
>
> Ben, you wrote elsewhere
> (
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html
> )
> that you found approx. 900k objects to be the threshold where index
> sharding becomes necessary. Have you found that to be a reasonable
> rule of thumb, as in "try 1-2 shards per million objects in your most
> populous bucket"? Also, do you reckon that beyond that, more shards
> make things worse?
>
>

Oh, and to answer this part: I didn't do that much experimentation,
unfortunately.  I am actually using about 24 index shards per bucket
currently, and we delete each bucket once it hits about a million objects
(it's just a throwaway cache for us). That seems OK, so I stopped tweaking.

Also, I think I have a pretty slow cluster as far as write speed is
concerned, since we do not have SSD journals. With SSD journals I imagine
the index write speed is significantly improved, but I am not sure by how
much. A faster cluster could probably handle bigger indexes.

-Ben
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi,

On 17/12/2015 07:53, Jesper Thorhauge wrote:
> Hi,
> 
> Some more information showing in the boot.log;
> 
> 2015-12-16 07:35:33.289830 7f1b990ad800 -1 
> filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument
> 2015-12-16 07:35:33.289842 7f1b990ad800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -22
> 2015-12-16 07:35:33.289883 7f1b990ad800 -1  ** ERROR: error creating empty 
> object store in /var/lib/ceph/tmp/mnt.aWZTcE: (22) Invalid argument
> ERROR:ceph-disk:Failed to activate
> ceph-disk: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', 
> '--mkkey', '-i', '7', '--monmap', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/activate.monmap', '--osd-data', 
> '/var/lib/ceph/tmp/mnt.aWZTcE', '--osd-journal', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/journal', '--osd-uuid', 
> 'c83b5aa5-fe77-42f6-9415-25ca0266fb7f', '--keyring', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/keyring']' returned non-zero exit status 1
> ceph-disk: Error: One or more partitions failed to activate
> 
> Maybe related to the "(22) Invalid argument" part..?

After a reboot the symlinks are reconstructed, and if they are still incorrect,
it means there is an inconsistency somewhere else. To debug the problem, could
you mount /dev/sda1 and verify the symlink of the journal file? Then verify
the content of /dev/disk/by-partuuid, and also display the partition
information with sgdisk -i 1 /dev/sda and sgdisk -i 2 /dev/sda. Are you
colocating your journal with the data on the same disk, or are they on two
different disks?
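
Something along these lines (a sketch; the mount point is arbitrary, and
journal_uuid is the file ceph-disk writes next to the journal symlink):

mkdir -p /mnt/osd-check
mount /dev/sda1 /mnt/osd-check
ls -l /mnt/osd-check/journal       # should be a symlink into /dev/disk/by-partuuid/
cat /mnt/osd-check/journal_uuid    # should match one of the 'ceph journal' partition GUIDs
umount /mnt/osd-check

ls -l /dev/disk/by-partuuid/       # journal partuuids should be symlinks, not empty files
sgdisk -i 1 /dev/sda
sgdisk -i 2 /dev/sda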

git log --no-merges --oneline tags/v0.94.3..tags/v0.94.5 udev 

shows nothing, meaning there has been no change to the udev rules. There is one
change related to the installation of the udev rules:
https://github.com/ceph/ceph/commit/4eb58ad2027148561d94bb43346b464b55d041a6
Could you double-check that 60-ceph-partuuid-workaround.rules is installed where
it should be?

Cheers

> 
> /Jesper
> 
> *
> 
> Hi,
> 
> I have done several reboots, and it did not lead to healthy symlinks :-(
> 
> /Jesper
> 
> 
> 
> Hi,
> 
> On 16/12/2015 07:39, Jesper Thorhauge wrote:
>> Hi,
>>
>> A fresh server install on one of my nodes (and yum update) left me with 
>> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2.
>>
>> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but 
>> "ceph-disk activate / dev/sda1" fails. I have traced the problem to 
>> "/dev/disk/by-partuuid", where the journal symlinks are broken;
>>
>> -rw-r--r-- 1 root root  0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168
>> -rw-r--r-- 1 root root  0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072
>> lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f 
>> -> ../../sdb1
>> lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 
>> -> ../../sda1
>>
>> Re-creating them manually wont survive a reboot. Is this a problem with the 
>> udev rules in Ceph 0.94.3+?
> 
> This usually is a symptom of something else going wrong (i.e. it is possible 
> to confuse the kernel into creating the wrong symbolic links). The correct 
> symlinks should be set when you reboot.
> 
>> Hope that somebody can help me :-)
> 
> Please let us know if rebooting leads to healthy symlinks.
> 
> Cheers
>>
>> Thanks!
>>
>> Best regards,
>> Jesper
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> 
> -- 
> Loïc Dachary, Artisan Logiciel Libre
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] recommendations for file sharing

2015-12-16 Thread lin zhou 周林
Seafile is another way; it supports writing data to Ceph using librados
directly.

On 2015-12-15 10:51, Wido den Hollander wrote:
> Are you sure you need file sharing? ownCloud for example now has native
> RADOS support using phprados.
>
> Isn't ownCloud something that could work? Talking native RADOS is always
> the best.
>
> Wido
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi Loic, 

OSDs are on /dev/sda and /dev/sdb; journals are on /dev/sdc (sdc3 / sdc4).

sgdisk for sda shows:

Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) 
Partition unique GUID: E85F4D92-C8F1-4591-BD2A-AA43B80F58F6 
First sector: 2048 (at 1024.0 KiB) 
Last sector: 1953525134 (at 931.5 GiB) 
Partition size: 1953523087 sectors (931.5 GiB) 
Attribute flags:  
Partition name: 'ceph data' 

for sdb 

Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown) 
Partition unique GUID: C83B5AA5-FE77-42F6-9415-25CA0266FB7F 
First sector: 2048 (at 1024.0 KiB) 
Last sector: 1953525134 (at 931.5 GiB) 
Partition size: 1953523087 sectors (931.5 GiB) 
Attribute flags:  
Partition name: 'ceph data' 

for /dev/sdc3 

Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown) 
Partition unique GUID: C34D4694-B486-450D-B57F-DA24255F0072 
First sector: 935813120 (at 446.2 GiB) 
Last sector: 956293119 (at 456.0 GiB) 
Partition size: 20480000 sectors (9.8 GiB) 
Attribute flags:  
Partition name: 'ceph journal' 

for /dev/sdc4 

Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown) 
Partition unique GUID: 1E9D527F-0866-4284-B77C-C1CB04C5A168 
First sector: 956293120 (at 456.0 GiB) 
Last sector: 976773119 (at 465.8 GiB) 
Partition size: 20480000 sectors (9.8 GiB) 
Attribute flags:  
Partition name: 'ceph journal' 

60-ceph-partuuid-workaround.rules is located in /lib/udev/rules.d, so it seems 
correct to me. 

after a reboot, /dev/disk/by-partuuid is; 

-rw-r--r-- 1 root root 0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168 
-rw-r--r-- 1 root root 0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072 
lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f -> 
../../sdb1 
lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 -> 
../../sda1 

I don't know how to verify the symlink of the journal file - can you guide me
on that one?

Thanks :-)!

/Jesper 

** 

Hi, 

On 17/12/2015 07:53, Jesper Thorhauge wrote: 
> Hi, 
> 
> Some more information showing in the boot.log; 
> 
> 2015-12-16 07:35:33.289830 7f1b990ad800 -1 
> filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on 
> /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument 
> 2015-12-16 07:35:33.289842 7f1b990ad800 -1 OSD::mkfs: ObjectStore::mkfs 
> failed with error -22 
> 2015-12-16 07:35:33.289883 7f1b990ad800 -1 ** ERROR: error creating empty 
> object store in /var/lib/ceph/tmp/mnt.aWZTcE: (22) Invalid argument 
> ERROR:ceph-disk:Failed to activate 
> ceph-disk: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', 
> '--mkkey', '-i', '7', '--monmap', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/activate.monmap', '--osd-data', 
> '/var/lib/ceph/tmp/mnt.aWZTcE', '--osd-journal', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/journal', '--osd-uuid', 
> 'c83b5aa5-fe77-42f6-9415-25ca0266fb7f', '--keyring', 
> '/var/lib/ceph/tmp/mnt.aWZTcE/keyring']' returned non-zero exit status 1 
> ceph-disk: Error: One or more partitions failed to activate 
> 
> Maybe related to the "(22) Invalid argument" part..? 

After a reboot the symlinks are reconstructed and if they are still incorrect, 
it means there is an inconsistency somewhere else. To debug the problem, could 
you mount /dev/sda1 and verify the symlink of the journal file ? Then verify 
the content of /dev/disk/by-partuuid. And also display the partition 
information with sgdisk -i 1 /dev/sda and sgdisk -i 2 /dev/sda. Are you 
collocating your journal with the data, on the same disk ? Or are they on two 
different disks ? 

git log --no-merges --oneline tags/v0.94.3..tags/v0.94.5 udev 

shows nothing, meaning there has been no change to udev rules. There is one 
change related to the installation of the udev rules 
https://github.com/ceph/ceph/commit/4eb58ad2027148561d94bb43346b464b55d041a6. 
Could you double check 60-ceph-partuuid-workaround.rules is installed where it 
should ? 

Cheers 

> 
> /Jesper 
> 
> * 
> 
> Hi, 
> 
> I have done several reboots, and it did not lead to healthy symlinks :-( 
> 
> /Jesper 
> 
>  
> 
> Hi, 
> 
> On 16/12/2015 07:39, Jesper Thorhauge wrote: 
>> Hi, 
>> 
>> A fresh server install on one of my nodes (and yum update) left me with 
>> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. 
>> 
>> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but 
>> "ceph-disk activate / dev/sda1" fails. I have traced the problem to 
>> "/dev/disk/by-partuuid", where the journal symlinks are broken; 
>> 
>> -rw-r--r-- 1 root root 0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168 
>> -rw-r--r-- 1 root root 0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072 
>> lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f 
>> -> ../../sdb1 
>> lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 
>> -> ../../sda1 
>> 

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi, 

I have done several reboots, and it did not lead to healthy symlinks :-( 

/Jesper 

 

Hi, 

On 16/12/2015 07:39, Jesper Thorhauge wrote: 
> Hi, 
> 
> A fresh server install on one of my nodes (and yum update) left me with 
> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. 
> 
> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but 
> "ceph-disk activate / dev/sda1" fails. I have traced the problem to 
> "/dev/disk/by-partuuid", where the journal symlinks are broken; 
> 
> -rw-r--r-- 1 root root 0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168 
> -rw-r--r-- 1 root root 0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072 
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f 
> -> ../../sdb1 
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 
> -> ../../sda1 
> 
> Re-creating them manually wont survive a reboot. Is this a problem with the 
> udev rules in Ceph 0.94.3+? 

This usually is a symptom of something else going wrong (i.e. it is possible to 
confuse the kernel into creating the wrong symbolic links). The correct 
symlinks should be set when you reboot. 

> Hope that somebody can help me :-) 

Please let us know if rebooting leads to healthy symlinks. 

Cheers 
> 
> Thanks! 
> 
> Best regards, 
> Jesper 
> 
> 
> ___ 
> ceph-users mailing list 
> ceph-users@lists.ceph.com 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
> 

-- 
Loïc Dachary, Artisan Logiciel Libre 

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Change servers of the Cluster

2015-12-16 Thread Oliver Dzombic
Hi,

if you want to do this nicely, free of interruption, you should consider adding
the new mon/osd to your existing cluster, letting it sync, and then removing
the old mon/osd.

So this is an add/remove task, not a 1:1 replace. You will need to copy
the data from your existing hard disks anyway.
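
Roughly, the documented add-then-remove cycle looks like this (only a sketch -
host names, devices and osd.N are placeholders, and you would wait for
HEALTH_OK between steps):

# add the new OSDs on the new server and let the cluster backfill onto them
ceph-deploy osd prepare newhost:/dev/sdb        # example device
ceph-deploy osd activate newhost:/dev/sdb1

# once healthy, drain and remove one old OSD at a time
ceph osd out N
# ...wait for rebalancing to finish (watch ceph -w / ceph health)...
# stop the ceph-osd daemon on the old host, then:
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm N

# monitors follow the same add-then-remove pattern
ceph-deploy mon add newmonhost
ceph mon remove oldmonhost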

Greetings
Oliver
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hi Oliver,

Thank you for the answer.

My cluster currently runs on VM servers, and I will move it to physical
servers. The data lives on iSCSI storage, and I will map the iSCSI targets
again on the new servers.

Thanks.

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br


On Wed, Dec 16, 2015 at 11:17 AM, Oliver Dzombic 
wrote:

> Hi,
>
> if you want to be nice/free of interruption, you should consider adding
> the new mon/osd to your existing cluster, let it sync, and then remove
> the old mon/osd.
>
> So this is an add/remove task, not a 1:1 replace. You will need to copy
> the data from your existing harddisks anyway.
>
> Greetings
> Oliver
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hello,

Can anyone help me, please?

I need to change the servers (OSDs and MDS) of my cluster.

I have a mini cluster with 3 OSDs, 1 MON and 1 MDS on Ceph 0.94.1.

How can I change the servers? Do I just install the OS and the Ceph packages
and copy over ceph.conf? Is that it?

Thanks,

Att.

---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular Oncology Center
Instituto Sírio-Libanês de Ensino e Pesquisa
Hospital Sírio-Libanês
Phone: +55 11 3155-0200 (extension 1927)
R: Cel. Nicolau dos Santos, 69
São Paulo-SP. 01308-060
http://www.bioinfo.mochsl.org.br
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
When installing Hammer on RHEL 7.1 we regularly got the message that partprobe
failed to inform the kernel. We are using the ceph-disk command from Ansible to
prepare the disks. The partprobe failure seems harmless and our OSDs always
activated successfully.

If the Infernalis version of ceph-disk is going to trap an error from
partprobe, then we will be unable to prepare our OSDs and this becomes a bug.

Regards
Paul



On 16/12/2015, 06:17, "ceph-users on behalf of Matt Taylor" 
 wrote:

>Hi all,
>
>After recently upgrading to CentOS 7.2 and installing a new Ceph cluster 
>using Infernalis v9.2.0, I have noticed that disk's are failing to prepare.
>
>I have observed the same behaviour over multiple Ceph servers when 
>preparing disk's. All the servers are identical.
>
>Disk's are zapping fine, however when running 'ceph-deploy disk 
>prepare', we're encountering the following error:
>
>> [ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
>> kvsrv02:/dev/sdr
>> [ceph_deploy.cli][INFO ] ceph-deploy options:
>> [ceph_deploy.cli][INFO ] username : None
>> [ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
>> [ceph_deploy.cli][INFO ] dmcrypt : False
>> [ceph_deploy.cli][INFO ] verbose : False
>> [ceph_deploy.cli][INFO ] overwrite_conf : False
>> [ceph_deploy.cli][INFO ] subcommand : prepare
>> [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO ] quiet : False
>> [ceph_deploy.cli][INFO ] cd_conf : > instance at 0x7f1d54a4a7a0>
>> [ceph_deploy.cli][INFO ] cluster : ceph
>> [ceph_deploy.cli][INFO ] fs_type : xfs
>> [ceph_deploy.cli][INFO ] func : 
>> [ceph_deploy.cli][INFO ] ceph_conf : None
>> [ceph_deploy.cli][INFO ] default_release : False
>> [ceph_deploy.cli][INFO ] zap_disk : False
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
>> [kvsrv02][DEBUG ] connection detected need for sudo
>> [kvsrv02][DEBUG ] connected to host: kvsrv02
>> [kvsrv02][DEBUG ] detect platform information from remote host
>> [kvsrv02][DEBUG ] detect machine type
>> [kvsrv02][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
>> [ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
>> [kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
>> activate False
>> [kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
>> --fs-type xfs -- /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-allows-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-wants-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-needs-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=fsid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=osd_journal_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdr
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 
>> on /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: 

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Paul,

On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote:
> When installing Hammer on RHEL7.1 we regularly got the message that partprobe 
> failed to inform the kernel. We are using the ceph-disk command from ansible 
> to prepare the disks. The partprobe failure seems harmless and our OSDs 
> always activated successfully.

Do you have a copy of those errors by any chance? ceph-disk in hammer on RHEL
should use partx, not partprobe.

> If the Infernalis version of ceph-disk is going to trap an error from part 
> probe then we will be unable to prepare our OSDs and this becomes a bug.

Agreed.

Cheers

> Regards
> Paul
> 
> 
> 
> On 16/12/2015, 06:17, "ceph-users on behalf of Matt Taylor" 
>  wrote:
> 
>> Hi all,
>>
>> After recently upgrading to CentOS 7.2 and installing a new Ceph cluster 
>> using Infernalis v9.2.0, I have noticed that disk's are failing to prepare.
>>
>> I have observed the same behaviour over multiple Ceph servers when 
>> preparing disk's. All the servers are identical.
>>
>> Disk's are zapping fine, however when running 'ceph-deploy disk 
>> prepare', we're encountering the following error:
>>
>>> [ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk 
>>> prepare kvsrv02:/dev/sdr
>>> [ceph_deploy.cli][INFO ] ceph-deploy options:
>>> [ceph_deploy.cli][INFO ] username : None
>>> [ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
>>> [ceph_deploy.cli][INFO ] dmcrypt : False
>>> [ceph_deploy.cli][INFO ] verbose : False
>>> [ceph_deploy.cli][INFO ] overwrite_conf : False
>>> [ceph_deploy.cli][INFO ] subcommand : prepare
>>> [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
>>> [ceph_deploy.cli][INFO ] quiet : False
>>> [ceph_deploy.cli][INFO ] cd_conf : >> instance at 0x7f1d54a4a7a0>
>>> [ceph_deploy.cli][INFO ] cluster : ceph
>>> [ceph_deploy.cli][INFO ] fs_type : xfs
>>> [ceph_deploy.cli][INFO ] func : 
>>> [ceph_deploy.cli][INFO ] ceph_conf : None
>>> [ceph_deploy.cli][INFO ] default_release : False
>>> [ceph_deploy.cli][INFO ] zap_disk : False
>>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
>>> [kvsrv02][DEBUG ] connection detected need for sudo
>>> [kvsrv02][DEBUG ] connected to host: kvsrv02
>>> [kvsrv02][DEBUG ] detect platform information from remote host
>>> [kvsrv02][DEBUG ] detect machine type
>>> [kvsrv02][DEBUG ] find the location of an executable
>>> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
>>> [ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
>>> [kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>> [ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
>>> activate False
>>> [kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
>>> --fs-type xfs -- /dev/sdr
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>>> --check-allows-journal -i 0 --cluster ceph
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>>> --check-wants-journal -i 0 --cluster ceph
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>>> --check-needs-journal -i 0 --cluster ceph
>>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>>> /sys/dev/block/65:16/dm/uuid
>>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>>> /sys/dev/block/65:16/dm/uuid
>>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>>> /sys/dev/block/65:16/dm/uuid
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>>> --cluster=ceph --show-config-value=fsid
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>>> --cluster=ceph --show-config-value=osd_journal_size
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
>>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>>> /sys/dev/block/65:16/dm/uuid
>>> [kvsrv02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdr
>>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid 

Re: [ceph-users] recommendations for file sharing

2015-12-16 Thread Alex Leake
Martin / Wade,


Thanks for the response. I had a feeling that would be the case!


I've been playing around with that approach anyway, glad to know that's the 
general agreement.
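
For anyone else reading along, approach #1 boils down to something like this
(a sketch with made-up pool/image/share names, not a tested recipe):

rbd create rbd/share01 --size 1024000      # ~1 TB image; size is in MB here
rbd map rbd/share01                        # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /srv/share01
mount /dev/rbd0 /srv/share01

# minimal smb.conf share definition (example only):
#   [share01]
#   path = /srv/share01
#   read only = no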




Kind Regards,

Alex.


From: Martin Palma 
Sent: 15 December 2015 16:30
To: Wade Holler
Cc: Alex Leake; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] recommendations for file sharing

Currently, we use approach #1 with kerberized NFSv4 and Samba (with AD as KDC) 
- desperately waiting for CephFS :-)

Best,
Martin

On Tue, Dec 15, 2015 at 11:51 AM, Wade Holler 
> wrote:
Keep it simple is my approach. #1

If needed Add rudimentary HA with pacemaker.

http://linux-ha.org/wiki/Samba

Cheers
Wade
On Tue, Dec 15, 2015 at 5:45 AM Alex Leake 
> wrote:

Good Morning,


I have a production Ceph cluster at the University I work at, which runs 
brilliantly.


However, I'd like your advice on the best way of sharing CIFS / SMB from Ceph. 
So far I have three ideas:

  1.  Use a server as a head node, with an RBD mapped, then just export with
samba
  2.  Use a platform like OpenStack / KVM to host VMs that reside on Ceph RBDs
- and use those to export filesystems (similar to 1.)
  3.  Use the S3 functionality of the RADOS gateway in conjunction with tools
like SoftNAS (https://www.softnas.com/wp/)

They all seem a little inefficient though, especially the 2nd option.

Any help would be welcomed, I think it's an interesting problem.


Kind Regards,
Alex.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi,

On 16/12/2015 07:39, Jesper Thorhauge wrote:
> Hi,
> 
> A fresh server install on one of my nodes (and yum update) left me with 
> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2.
> 
> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but 
> "ceph-disk activate / dev/sda1" fails. I have traced the problem to 
> "/dev/disk/by-partuuid", where the journal symlinks are broken;
> 
> -rw-r--r-- 1 root root  0 Dec 16 07:35 1e9d527f-0866-4284-b77c-c1cb04c5a168
> -rw-r--r-- 1 root root  0 Dec 16 07:35 c34d4694-b486-450d-b57f-da24255f0072
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 c83b5aa5-fe77-42f6-9415-25ca0266fb7f 
> -> ../../sdb1
> lrwxrwxrwx 1 root root 10 Dec 16 07:35 e85f4d92-c8f1-4591-bd2a-aa43b80f58f6 
> -> ../../sda1
> 
> Re-creating them manually wont survive a reboot. Is this a problem with the 
> udev rules in Ceph 0.94.3+?

This usually is a symptom of something else going wrong (i.e. it is possible to 
confuse the kernel into creating the wrong symbolic links). The correct 
symlinks should be set when you reboot. 

> Hope that somebody can help me :-)

Please let us know if rebooting leads to healthy symlinks.

Cheers
> 
> Thanks!
> 
> Best regards,
> Jesper
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Matt,

Could you please add your report to http://tracker.ceph.com/issues/14080? I
think what you're seeing is a partprobe timeout because things take too long to
complete (that's also why adding a sleep, as mentioned in the mail thread,
sometimes helps). There is a variant of that problem where udevadm settle also
times out (but it is less common on real hardware). I'm testing a fix to make
this more robust.
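
In the meantime, a manual sequence that may be worth trying (only a sketch; it
assumes the partitions were in fact created on /dev/sdr and only the kernel
notification timed out):

partx -a /dev/sdr || true          # ask the kernel to pick up the new partitions
udevadm settle --timeout=120       # let udev finish creating the by-partuuid links
ls /dev/disk/by-partuuid/          # the new data/journal partuuids should now exist
ceph-disk -v activate /dev/sdr1    # then retry activation of the data partition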

Cheers

On 16/12/2015 07:17, Matt Taylor wrote:
> Hi all,
> 
> After recently upgrading to CentOS 7.2 and installing a new Ceph cluster
> using Infernalis v9.2.0, I have noticed that disks are failing to prepare.
> 
> I have observed the same behaviour over multiple Ceph servers when preparing
> disks. All the servers are identical.
> 
> Disks are zapping fine; however, when running 'ceph-deploy disk prepare',
> we're encountering the following error:
> 
>> [ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
>> kvsrv02:/dev/sdr
>> [ceph_deploy.cli][INFO ] ceph-deploy options:
>> [ceph_deploy.cli][INFO ] username : None
>> [ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
>> [ceph_deploy.cli][INFO ] dmcrypt : False
>> [ceph_deploy.cli][INFO ] verbose : False
>> [ceph_deploy.cli][INFO ] overwrite_conf : False
>> [ceph_deploy.cli][INFO ] subcommand : prepare
>> [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
>> [ceph_deploy.cli][INFO ] quiet : False
>> [ceph_deploy.cli][INFO ] cd_conf : > instance at 0x7f1d54a4a7a0>
>> [ceph_deploy.cli][INFO ] cluster : ceph
>> [ceph_deploy.cli][INFO ] fs_type : xfs
>> [ceph_deploy.cli][INFO ] func : 
>> [ceph_deploy.cli][INFO ] ceph_conf : None
>> [ceph_deploy.cli][INFO ] default_release : False
>> [ceph_deploy.cli][INFO ] zap_disk : False
>> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
>> [kvsrv02][DEBUG ] connection detected need for sudo
>> [kvsrv02][DEBUG ] connected to host: kvsrv02
>> [kvsrv02][DEBUG ] detect platform information from remote host
>> [kvsrv02][DEBUG ] detect machine type
>> [kvsrv02][DEBUG ] find the location of an executable
>> [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
>> [ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
>> [kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>> [ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
>> activate False
>> [kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
>> --fs-type xfs -- /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-allows-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-wants-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --check-needs-journal -i 0 --cluster ceph
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=fsid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
>> --cluster=ceph --show-config-value=osd_journal_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
>> --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdr
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
>> /sys/dev/block/65:16/dm/uuid
>> [kvsrv02][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 
>> on /dev/sdr
>> [kvsrv02][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk 
>> --new=2:0:5120M --change-name=2:ceph journal 
>> 

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
Hi Loic

You are correct – it is partx – sorry for the confusion

ansible.stderr:partx: specified range <1:0> does not make sense
ansible.stderr:partx: /dev/sdg: error adding partition 2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2


Regards
Paul

On 16/12/2015, 09:36, "Loic Dachary" 
> wrote:

Hi Paul,

On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote:
When installing Hammer on RHEL7.1 we regularly got the message that partprobe 
failed to inform the kernel. We are using the ceph-disk command from ansible to 
prepare the disks. The partprobe failure seems harmless and our OSDs always 
activated successfully.

Do you have a copy of those errors by any chance? ceph-disk in hammer on RHEL
should use partx, not partprobe.

If the Infernalis version of ceph-disk is going to trap an error from
partprobe, then we will be unable to prepare our OSDs and this becomes a bug.

Agreed.

Cheers

Regards
Paul
On 16/12/2015, 06:17, "ceph-users on behalf of Matt Taylor" 
 on 
behalf of mtay...@mty.net.au> wrote:
Hi all,

After recently upgrading to CentOS 7.2 and installing a new Ceph cluster
using Infernalis v9.2.0, I have noticed that disks are failing to prepare.

I have observed the same behaviour over multiple Ceph servers when
preparing disks. All the servers are identical.

Disks are zapping fine; however, when running 'ceph-deploy disk
prepare', we're encountering the following error:

[ceph_deploy.cli][INFO ] Invoked (1.5.30): /usr/bin/ceph-deploy disk prepare 
kvsrv02:/dev/sdr
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('kvsrv02', '/dev/sdr', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : 
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : 
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kvsrv02:/dev/sdr:
[kvsrv02][DEBUG ] connection detected need for sudo
[kvsrv02][DEBUG ] connected to host: kvsrv02
[kvsrv02][DEBUG ] detect platform information from remote host
[kvsrv02][DEBUG ] detect machine type
[kvsrv02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to kvsrv02
[kvsrv02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host kvsrv02 disk /dev/sdr journal None 
activate False
[kvsrv02][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph 
--fs-type xfs -- /dev/sdr
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-allows-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-wants-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--check-needs-journal -i 0 --cluster ceph
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=fsid
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd 
--cluster=ceph --show-config-value=osd_journal_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[kvsrv02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf 
--cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[kvsrv02][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdr uuid path is 
/sys/dev/block/65:16/dm/uuid