I'm interested in this too. We should start testing next week at 1B+ objects,
and I sure would like a recommendation of what config to start with.
We learned the hard way that not sharding is very bad at scales like this.
On Wed, Dec 16, 2015 at 2:06 PM Florian Haas wrote:
> Hi
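For what it's worth, the knob people usually reach for here is the Hammer-era
rgw_override_bucket_index_max_shards option. A minimal ceph.conf sketch,
assuming a typical [client.radosgw.gateway] section name; the shard count is
purely illustrative, not a tested recommendation for this scale:

    [client.radosgw.gateway]
    # Only affects buckets created after this is set; existing bucket
    # indexes keep the shard count they were created with.
    rgw override bucket index max shards = 64

Note that it only applies at bucket creation time, which is exactly the
limitation discussed further down in this thread.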
We've been operating a cluster relatively incident-free since 0.86. On
Monday I did a yum update on one node, ceph00, and after rebooting we're
seeing every OSD stuck in 'booting' state. I've tried removing all of the
OSDs and recreating them with ceph-deploy (ceph-disk required modification
to
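Two quick things worth checking when OSDs sit in 'booting', sketched below
assuming default admin socket paths; a yum update on one node can easily leave
it running a different release than its peers:

    # Ask an OSD via its admin socket what state it believes it is in
    ceph daemon osd.0 status
    # Compare daemon versions across the cluster after the update
    ceph tell osd.* version
    # Confirm whether the monitors have marked the OSDs up/in
    ceph osd tree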
Hello Mark,
thanks for your explanation; it all makes sense. I've done
some measuring on Google and Amazon clouds as well, and
those numbers do seem to be pretty good. I'll be playing with
fine-tuning a little bit more, but overall performance
really seems to be quite nice.
Thanks to all of
Hi,
sorry, the question might seem very easy (probably my bad), but can you
please help me understand why I am unable to change the read-ahead size and
other options when mounting CephFS?
mount.ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rsize=1024000
the result is:
ceph: Unknown mount option rsize
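In case it helps: the kernel CephFS client of this era rejects rsize but does
accept rasize (the read-ahead size, in bytes), so a variant like the one below
may work. This is just the example from above with the option renamed; the
secret is left elided as in the original:

    mount -t ceph m2:6789:/ /foo2 -v -o name=cephfs,secret=,rasize=1024000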
Hi Ben & everyone,
just following up on this one from July, as I don't think there's been
a reply here then.
On Wed, Jul 8, 2015 at 7:37 AM, Ben Hines wrote:
> Anyone have any data on optimal # of shards for a radosgw bucket index?
>
> We've had issues with bucket index
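For anyone trying to quantify this on their own cluster, radosgw-admin can
report per-bucket object counts; a small sketch (the bucket name is
hypothetical):

    # Object count and size usage for a single bucket
    radosgw-admin bucket stats --bucket=mybucket
    # Enumerate buckets to find the big ones
    radosgw-admin bucket list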
Hi Loic,
No problems, I'll add my report to your bug report.
I also tried adding the sleep prior to invoking partprobe, but it didn't
work (same error).
See pastebin for complete output:
http://pastebin.com/Q26CeUge
Cheers,
Matt.
On 16/12/2015 19:57, Loic Dachary wrote:
Hi Matt,
Hi,
Some more information showing in the boot.log;
2015-12-16 07:35:33.289830 7f1b990ad800 -1
filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on
/var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument
2015-12-16 07:35:33.289842 7f1b990ad800 -1 OSD::mkfs:
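mkjournal failing with EINVAL like this is often a journal path that doesn't
resolve to a real device. A quick diagnostic sketch, assuming the journal is
the usual symlink into /dev/disk/by-partuuid:

    # Does each OSD's journal symlink point at a device that exists?
    ls -l /var/lib/ceph/osd/ceph-*/journal
    # Are the by-partuuid links present at all?
    ls -l /dev/disk/by-partuuid/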
Great, glad to see that others are concerned about this.
One serious problem is that the number of index shards cannot be changed
once the bucket has been created. So if you have a bucket that you can't just
recreate easily, you're screwed. Fortunately for my use case I can delete
the contents of our
On Wed, Dec 16, 2015 at 11:05 AM, Florian Haas wrote:
> Hi Ben & everyone,
>
>
> Ben, you wrote elsewhere
> (
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-August/003955.html
> )
> that you found approx. 900k objects to be the threshold where index
> sharding
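As a rough back-of-the-envelope reading of that number: if ~900k objects is
where a single unsharded index starts to hurt, then a bucket expected to hold
N objects wants at least N / 900,000 shards. For the 1B-object deployment
mentioned at the top of this thread, that works out to something on the order
of 1,100+ shards.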
Hi,
On 17/12/2015 07:53, Jesper Thorhauge wrote:
> Hi,
>
> Some more information showing in the boot.log;
>
> 2015-12-16 07:35:33.289830 7f1b990ad800 -1
> filestore(/var/lib/ceph/tmp/mnt.aWZTcE) mkjournal error creating journal on
> /var/lib/ceph/tmp/mnt.aWZTcE/journal: (22) Invalid argument
Seafile is another option; it supports writing data to Ceph using librados
directly.
On 2015-12-15 10:51, Wido den Hollander wrote:
> Are you sure you need file sharing? ownCloud for example now has native
> RADOS support using phprados.
>
> Isn't ownCloud something that could work? Talking native RADOS is
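For anyone who just wants to confirm that writing straight into RADOS works
before wiring up an application like Seafile or ownCloud, the plain rados CLI
is the quickest check (the pool name here is hypothetical):

    # Write a local file as a RADOS object, read it back, list the pool
    rados -p mypool put hello.txt ./hello.txt
    rados -p mypool get hello.txt /tmp/hello.txt
    rados -p mypool ls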
Hi Loic,
The OSDs are on /dev/sda and /dev/sdb; the journals are on /dev/sdc (sdc3 / sdc4).
sgdisk for sda shows;
Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: E85F4D92-C8F1-4591-BD2A-AA43B80F58F6
First sector: 2048 (at 1024.0 KiB)
Last sector:
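For comparison, the journal partitions can be dumped the same way. On a disk
prepared by ceph-disk, the type GUID should be
4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D for OSD data and
45B0969E-9B03-4F30-B4C6-B4B80CEFF106 for journals; sgdisk labels both as
"(Unknown)", which is harmless:

    # Inspect the two journal partitions mentioned above
    sgdisk --info=3 /dev/sdc
    sgdisk --info=4 /dev/sdc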
Hi,
I have done several reboots, and it did not lead to healthy symlinks :-(
/Jesper
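If the by-partuuid links stay broken across reboots, one thing worth trying is
replaying the udev block-device events by hand (a sketch, nothing
Ceph-specific about it):

    # Re-fire 'add' events for all block devices and wait for udev
    udevadm trigger --subsystem-match=block --action=add
    udevadm settle
    ls -l /dev/disk/by-partuuid/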
Hi,
On 16/12/2015 07:39, Jesper Thorhauge wrote:
> Hi,
>
> A fresh server install on one of my nodes (and yum update) left me with
> CentOS 6.7 / Ceph 0.94.5. All the other nodes are
Hi,
if you want to do this gracefully and without interruption, you should
consider adding the new mon/osd to your existing cluster, letting it sync, and
then removing the old mon/osd.
So this is an add/remove task, not a 1:1 replacement. You will need to copy
the data from your existing hard disks anyway.
Greetings
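For the OSD side, the "remove" half of that add/remove sequence usually looks
roughly like this, sketched for a hypothetical osd.0 and waiting for the
cluster to return to HEALTH_OK between steps:

    # Drain the old OSD so its data rebalances onto the rest
    ceph osd out 0
    # ... wait for recovery to finish (watch 'ceph -s') ...
    # Then stop the daemon on the old host and strip it from the cluster
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0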
Hi Oliver,
Thank you for the answer.
My cluster currently runs on VM servers; I will move it to physical servers.
The data lives on storage reached over iSCSI, so I will map the iSCSI targets
again on the new server.
Thanks.
Att.
---
Daniel Takatori Ohara.
System Administrator - Lab. of Bioinformatics
Molecular
Hello,
Can anyone help me, please?
I need to change the servers (OSDs and MDS) of my cluster.
I have a mini cluster with 3 OSDs, 1 MON and 1 MDS on Ceph 0.94.1.
How can I change the servers? Do I just install the OS and the Ceph packages
and copy over ceph.conf? Is that all?
Thanks,
Att.
---
Daniel
When installing Hammer on RHEL7.1 we regularly got the message that partprobe
failed to inform the kernel. We are using the ceph-disk command from ansible to
prepare the disks. The partprobe failure seems harmless and our OSDs always
activated successfully.
If the Infernalis version of
Hi Paul,
On 16/12/2015 10:26, HEWLETT, Paul (Paul) wrote:
> When installing Hammer on RHEL7.1 we regularly got the message that partprobe
> failed to inform the kernel. We are using the ceph-disk command from ansible
> to prepare the disks. The partprobe failure seems harmless and our OSDs
>
Martin / Wade,
Thanks for the response. I had a feeling that would be the case!
I've been playing around with that approach anyway; glad to know that's the
general consensus.
Kind Regards,
Alex.
From: Martin Palma
Sent: 15 December
Hi,
On 16/12/2015 07:39, Jesper Thorhauge wrote:
> Hi,
>
> A fresh server install on one of my nodes (and yum update) left me with
> CentOS 6.7 / Ceph 0.94.5. All the other nodes are running Ceph 0.94.2.
>
> "ceph-disk prepare /dev/sda /dev/sdc" seems to work as expected, but
> "ceph-disk
Hi Matt,
Could you please add your report to http://tracker.ceph.com/issues/14080 ? I
think what you're seeing is a partprobe timeout because things take too long
to complete (that's also why adding a sleep, as mentioned in the mail thread,
sometimes helps). There is a variant of that problem where
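While that's being sorted out in the tracker, a crude by-hand version of the
"sleep then retry" workaround is easy enough (purely a sketch; the device name
is borrowed from the partx errors quoted below):

    # Retry partprobe a few times, giving udev time to settle in between
    for i in 1 2 3 4 5; do
        sleep 5
        partprobe /dev/sdg && break
    done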
Hi Loic
You are correct, it is partx; sorry for the confusion.
ansible.stderr:partx: specified range <1:0> does not make sense
ansible.stderr:partx: /dev/sdg: error adding partition 2
ansible.stderr:partx: /dev/sdg: error adding partitions 1-2
ansible.stderr:partx: /dev/sdg: error adding
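Those "error adding partition" messages usually just mean the kernel already
knows about the partitions in question, in which case asking partx to update
rather than add is quieter (a sketch):

    # Refresh the kernel's view of existing partitions instead of re-adding
    partx -u /dev/sdg
    # Show what the kernel currently has
    partx -s /dev/sdg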