Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Wade Holler
I'm interested in this too. We should start testing next week at 1B+ objects, and I sure would like a recommendation of what config to start with. We learned the hard way that not sharding is very bad at scales like this. On Wed, Dec 16, 2015 at 2:06 PM Florian Haas wrote: > Hi
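
For reference, a minimal sketch of pre-setting the shard count for newly created buckets, assuming the rgw_override_bucket_index_max_shards option (Hammer and later); it does not touch existing buckets, and the value 64 and the client section name are only examples:

  # ceph.conf on the radosgw host -- only affects buckets created afterwards
  [client.radosgw.gateway]
  rgw override bucket index max shards = 64

  # restart the gateway so the setting takes effect (service name varies by distro)
  service ceph-radosgw restart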

[ceph-users] OSDs stuck in booting state on CentOS 7.2.1511 and ceph infernalis 9.2.0

2015-12-16 Thread Bob R
We've been operating a cluster relatively incident-free since 0.86. On Monday I did a yum update on one node, ceph00, and after rebooting we're seeing every OSD stuck in the 'booting' state. I've tried removing all of the OSDs and recreating them with ceph-deploy (ceph-disk required modification to
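
A quick sketch for checking what a stuck OSD reports over its admin socket (the OSD id and socket path are examples, assuming the default /var/run/ceph location):

  # shows the daemon's own view, including "state": "booting" vs "active"
  ceph daemon osd.3 status

  # same query via the socket path directly
  ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok status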

Re: [ceph-users] sync writes - expected performance?

2015-12-16 Thread Nikola Ciprich
Hello Mark, thanks for your explanation, it all makes sense. I've done some measuring on the Google and Amazon clouds as well and really, those numbers seem to be pretty good. I'll be playing with fine tuning a little bit more, but overall performance really seems to be quite nice. Thanks to all of
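
For anyone repeating such measurements, a minimal fio sketch for sync write latency (4k direct writes with an fsync after every write); the target file is an example and will be overwritten:

  fio --name=syncwrite --filename=/mnt/test/fio.dat --size=1G \
      --rw=write --bs=4k --ioengine=libaio --iodepth=1 \
      --direct=1 --fsync=1 --runtime=60 --time_based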

[ceph-users] mount.ceph not accepting options, please help

2015-12-16 Thread Mike Miller
Hi, sorry, the question might seem very easy, probably my bad, but can you please help me understand why I am unable to change the read-ahead size and other options when mounting cephfs? mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rsize=1024000 the result is: ceph: Unknown mount option rsize
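
If the kernel client rejects rsize, one workaround sketch is to mount without it and raise the read-ahead afterwards through the BDI entry the cephfs mount registers (the ceph-1 name below is an example; newer kernels also accept an rasize mount option instead):

  mount.ceph m2:6789:/ /foo2 -o name=cephfs,secret=

  # find the cephfs backing-dev-info entry and raise its read-ahead (in KiB)
  ls /sys/class/bdi/ | grep ceph
  echo 8192 > /sys/class/bdi/ceph-1/read_ahead_kb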

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Florian Haas
Hi Ben & everyone, just following up on this one from July, as I don't think there's been a reply to it since then. On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines wrote: > Anyone have any data on optimal # of shards for a radosgw bucket index? > > We've had issues with bucket index

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Matt Taylor
Hi Loic, No problem, I'll add my report to your bug report. I also tried adding the sleep prior to invoking partprobe, but it didn't work (same error). See pastebin for the complete output: http://pastebin.com/Q26CeUge Cheers, Matt. On 16/12/2015 19:57, Loic Dachary wrote: Hi Matt,

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi, Some more information showing in the boot.log:
2015-12-16 07:35:33.289830 7f1b990ad800 -1 filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument
2015-12-16 07:35:33.289842 7f1b990ad800 -1 OSD::mkfs:

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Ben Hines
Great, glad to see that others are concerned about this. One serious problem is that the number of index shards cannot be changed once the bucket has been created. So if you have a bucket that you can't just recreate easily, you're screwed. Fortunately for my use case I can delete the contents of our
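
Since the shard count is fixed when the bucket is created, keeping an eye on per-bucket object counts helps decide when a bucket needs to be recreated; a sketch, where the bucket name is an example:

  # "num_objects" under usage/rgw.main is the figure to watch per bucket
  radosgw-admin bucket stats --bucket=mybucket

  # list all buckets to iterate over
  radosgw-admin bucket list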

Re: [ceph-users] radosgw bucket index sharding tips?

2015-12-16 Thread Ben Hines
On Wed, Dec 16, 2015 at 11:05 AM, Florian Haas wrote: > Hi Ben & everyone, > > > Ben, you wrote elsewhere > ( > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html > ) > that you found approx. 900k objects to be the threshold where index > sharding

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi, On 17/12/2015 07:53, Jesper Thorhauge wrote: > Hi, > > Some more information showing in the boot.log; > > 2015-12-16 07:35:33.289830 7f1b990ad800 -1 > filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on > /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument

Re: [ceph-users] recommendations for file sharing

2015-12-16 Thread lin zhou 周林
Seafile is another option; it supports writing data to Ceph using librados directly. On 2015-12-15 10:51, Wido den Hollander wrote: > Are you sure you need file sharing? ownCloud for example now has native > RADOS support using phprados. > > Isn't ownCloud something that could work? Talking native RADOS is

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi Loic, The OSDs are on /dev/sda and /dev/sdb, the journals are on /dev/sdc (sdc3 / sdc4). sgdisk for sda shows:
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: E85F4D92-C8F1-4591-BD2A-AA43B80F58F6
First sector: 2048 (at 1024.0 KiB)
Last sector:
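
A short sketch of cross-checking the partition type codes and the udev-provided symlinks (partition numbers follow the layout above):

  # journal partitions should carry the ceph journal type code 45B0969E-9B03-4F30-B4C6-B4B80CEFF106
  sgdisk --info=3 /dev/sdc
  sgdisk --info=4 /dev/sdc

  # udev should expose each partition under its unique GUID
  ls -l /dev/disk/by-partuuid/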

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Jesper Thorhauge
Hi, I have done several reboots, and they did not lead to healthy symlinks :-( /Jesper Hi, On 16/12/2015 07:39, Jesper Thorhauge wrote: > Hi, > > A fresh server install on one of my nodes (and yum update) left me with > CentOS 6.7 / Ceph 0.94.5. All the other nodes are

Re: [ceph-users] Change servers of the Cluster

2015-12-16 Thread Oliver Dzombic
Hi, if you want this to be nice / free of interruption, you should consider adding the new mon/osd to your existing cluster, letting it sync, and then removing the old mon/osd. So this is an add/remove task, not a 1:1 replacement. You will need to copy the data from your existing hard disks anyway. Greetings
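
A rough sketch of the removal side once the replacement OSDs have synced (the ID is an example; wait for the cluster to return to HEALTH_OK after the 'out' before removing anything):

  ceph osd out 12              # start draining PGs off the old OSD
  # ... wait for recovery to finish, stop the daemon on the old host ...
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12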

Re: [ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hi Oliver, Thank you for the answer. My cluster currently runs on VM servers; I will change to physical servers. The data sits on storage reached over iSCSI; I will map the iSCSI again on the new servers. Thanks. Att. --- Daniel Takatori Ohara. System Administrator - Lab. of Bioinformatics Molecular

[ceph-users] Change servers of the Cluster

2015-12-16 Thread Daniel Takatori Ohara
Hello, Can anyone help me, please? I need to change the servers (OSDs and MDS) of my cluster. I have a mini cluster with 3 OSDs, 1 MON and 1 MDS on Ceph 0.94.1. How can I change the servers? Do I just install the OS and the Ceph packages and copy ceph.conf? Is that all? Thanks, Att. --- Daniel

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
When installing Hammer on RHEL7.1 we regularly got the message that partprobe failed to inform the kernel. We are using the ceph-disk command from ansible to prepare the disks. The partprobe failure seems harmless and our OSDs always activated successfully. If the Infernalis version of
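
A quick sketch for verifying that the kernel really did pick up the new partitions despite the partprobe warning (the device is an example):

  partx --show /dev/sdg        # partitions the kernel currently knows about
  sgdisk --print /dev/sdg      # partitions actually present in the GPT
  grep sdg /proc/partitions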

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Paul, On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote: > When installing Hammer on RHEL7.1 we regularly got the message that partprobe > failed to inform the kernel. We are using the ceph-disk command from ansible > to prepare the disks. The partprobe failure seems harmless and our OSDs >

Re: [ceph-users] recommendations for file sharing

2015-12-16 Thread Alex Leake
Martin / Wade, Thanks for the response. I had a feeling that would be the case! I've been playing around with that approach anyway, glad to know that's the general agreement. Kind Regards, Alex. From: Martin Palma Sent: 15 December

Re: [ceph-users] Journal symlink broken / Ceph 0.94.5 / CentOS 6.7

2015-12-16 Thread Loic Dachary
Hi, On 16/12/2015 07:39, Jesper Thorhauge wrote: > Hi, > > A fresh server install on one of my nodes (and yum update) left me with > CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2. > > "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but > "ceph-disk
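
One way to narrow this down is to mount the prepared data partition and look at what the journal symlink points to (mount point and device are examples):

  mount /dev/sda1 /mnt
  ls -l /mnt/journal           # should be a symlink into /dev/disk/by-partuuid/
  readlink -f /mnt/journal     # a dangling target would explain the mkjournal EINVAL
  cat /mnt/journal_uuid        # the journal partition uuid ceph-disk recorded
  umount /mnt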

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread Loic Dachary
Hi Matt, Could you please add your report to http://tracker.ceph.com/issues/14080 ? I think what you're seeing is a partprobe timeout because things take too long to complete (that's also why adding a sleep, as mentioned in the mail thread, sometimes helps). There is a variant of that problem where
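
The workaround pattern being discussed is roughly the following sketch (timings and the device are arbitrary examples):

  udevadm settle --timeout=600     # let pending udev events finish first
  for i in 1 2 3 4 5; do
      partprobe /dev/sdg && break  # retry a few times with a pause in between
      sleep 10
  done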

Re: [ceph-users] CentOS 7.2, Infernalis, preparing osd's and partprobe issues.

2015-12-16 Thread HEWLETT, Paul (Paul)
Hi Loic, You are correct – it is partx – sorry for the confusion.
ansible.stderr:partx: specified range <1:0> does not make sense
ansible.stderr:partx: /dev/sdg: error adding partition 2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2
ansible.stderr:partx: /dev/sdg: error adding