On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote:
On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
Hi Emmanuel,
This is interesting, because we've had sales guys telling us that those
Samsung drives are definitely the best for a Ceph journal O_o!
Our sales guys or Samsung's?
On Wed, Sep 24, 2014 at 08:49:21PM +0200, Alexandre DERUMIER wrote:
What about writes with Giant?
I'm around
- 4k iops (4k random) with 1 OSD (1 node, 1 OSD)
- 8k iops (4k random) with 2 OSDs (1 node, 2 OSDs)
- 16k iops (4k random) with 4 OSDs (2 nodes, 2 OSDs per node)
- 22k iops (4k
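Numbers like these typically come from fio's rbd engine; a minimal sketch of
such a 4k random-write job (the pool, image, and client names here are
placeholder assumptions):

fio --name=randwrite-4k --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=testimg \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based --group_reporting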
On Thu, Sep 18, 2014 at 03:36:48PM +0200, Alexandre DERUMIER wrote:
Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
I know that Stefan Priebe runs full SSD clusters in production and has done
benchmarks.
(As far as I remember, he has benched around 20k peak with
Hi Sébastien,
On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
Hey all,
(...)
We have been able to reproduce this on 3 distinct platforms with some
deviations (because of the hardware), but the behaviour is the same.
Any thoughts would be highly appreciated; we are only getting 3.2k out
Hi Sage,
a couple of months ago (maybe last year) I was able to change the
assignment of directories and files of CephFS to different pools
back and forth (with cephfs set_layout as well as with setfattr).
Now (with ceph v0.81 and kernel 3.10 on the client side)
neither 'cephfs set_layout' nor
' commands) and it is the same with both
ceph-fuse and the kernel client.
sage
On Mon, 18 Aug 2014, Kasper Dieter wrote:
Hi Sage,
a couple of months ago (maybe last year) I was able to change the
assignment of directories and files of CephFS to different pools
back and forth
Hi Sage,
it seems the pools must be added to the MDS first:
ceph mds add_data_pool 3    # = SSD-r2
ceph mds add_data_pool 4    # = SAS-r2
After these commands the 'setfattr -n ceph.dir.layout.pool' command worked.
Thanks,
-Dieter
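As a sketch for later readers, the layout change itself then looks like this
(the mount point and directory name are hypothetical):

setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/fastdir   # new files go to the SSD pool
getfattr -n ceph.dir.layout /mnt/cephfs/fastdir                  # verify the layout took effect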
On Mon, Aug 18, 2014 at 10:19:08PM +0200, Kasper Dieter wrote
We have observed a very similar behavior.
In a 140 OSD cluster (newly created and idle) ~8000 PGs are available.
After adding two new pools (each with 2 PGs)
100 out of 140 OSDs are going down + out.
The cluster never recovers.
This problem can be reproduced every time with v0.67 and 0.72.
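For reference, the reproduction sketched as commands (pool names are
placeholders; the PG count matches the description above):

ceph osd pool create testpool1 2
ceph osd pool create testpool2 2
ceph -w   # watch OSDs being marked down and out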
On Thu, Mar 13, 2014 at 11:16:45AM +0100, Gandalf Corvotempesta wrote:
2014-03-13 10:53 GMT+01:00 Kasper Dieter dieter.kas...@ts.fujitsu.com:
After adding two new pools (each with 2 PGs)
100 out of 140 OSDs are going down + out.
The cluster never recovers.
In my case, cluster
of --stripe-unit to 8192, for example.
Or increase the order so that the object size (2^order bytes) is bigger than
your stripe unit and contains a whole multiple of stripe units (e.g. order 21,
i.e. 2 MiB objects).
And it will work without any problem.
JC
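To make this concrete, a hedged example (the image name and sizes are
illustrative): with order 22 the object size is 2^22 = 4 MiB, which is larger
than the 8192-byte stripe unit and holds a whole number of stripe units
(8192 * 16 = 128 KiB divides 4 MiB evenly):

rbd create --pool SSD-r2 --image-format 2 --size 20480 \
    --order 22 --stripe-unit 8192 --stripe-count 16 testimg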
On Mar 11, 2014, at 07:22, Kasper Dieter dieter.kas...@ts.fujitsu.com
wrote:
So
Please see this Email on ceph-devel
---snip---
Date: Thu, 15 Aug 2013 14:30:24 +0200
From: Damien Churchill dam...@gmail.com
To: Kasper, Dieter dieter.kas...@ts.fujitsu.com
CC: ceph-de...@vger.kernel.org ceph-de...@vger.kernel.org
Subject: Re: rbd: format 2 support in rbd.ko ?
On 15 August 2013
When using 'rbd create ... --image-format 2', in some cases the command is
rejected with EINVAL and the message
librbd: STRIPINGV2 and format 2 or later required for non-default striping
But in v0.61.9, STRIPINGV2 and format 2 should be supported:
[root@rx37-3 ~]# rbd create --pool SSD-r2 --size 20480
.
On Tue, Mar 11, 2014 at 7:16 PM, Kasper Dieter
[1]dieter.kas...@ts.fujitsu.com wrote
images on RADOS as
block storage.
On Tue, Mar 11, 2014 at 7:37 PM, Kasper Dieter
[1]dieter.kas
On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/27/2013 09:25 AM, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:
The
Hi Greg,
on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/1705
I found a statement from you regarding snapshots on cephfs:
---snip---
Filesystem snapshots exist and you can experiment with them on CephFS
(there's a hidden .snaps folder; you can create or remove snapshots
by
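A minimal sketch of that workflow (the snapshot directory is called .snap by
default; /mnt/cephfs/mydir is a hypothetical path):

mkdir /mnt/cephfs/mydir/.snap/before-upgrade   # snapshot the directory tree
rmdir /mnt/cephfs/mydir/.snap/before-upgrade   # remove the snapshot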
Hi,
under
http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/
I found a good description about RBD cache parameters.
But I am missing information on
- by whom these parameters are evaluated, and
- when this happens.
My assumption:
- the rbd_cache* parameters will be read by MONs
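For context, the rbd_cache* settings normally live in the [client] section of
ceph.conf and are read by librbd on the client side when an image is opened;
a hedged sketch (the values shown are illustrative):

[client]
    rbd cache = true
    rbd cache size = 33554432                  # 32 MiB
    rbd cache max dirty = 25165824             # 24 MiB
    rbd cache writethrough until flush = true  # writethrough until the guest flushes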
On Wed, Aug 28, 2013 at 04:24:59PM +0200, Gandalf Corvotempesta wrote:
2013/6/20 Matthew Anderson manderson8...@gmail.com:
Hi All,
I've had a few conversations on IRC about getting RDMA support into Ceph and
thought I would give it a quick attempt to hopefully spur some interest.
What I
On Mon, Aug 12, 2013 at 03:19:04PM +0200, Jeff Moskow wrote:
Hi,
The activity on our ceph cluster has gone up a lot. We are using exclusively
RBD
storage right now.
Is there a tool/technique that could be used to find out which rbd images are
receiving the most activity (something like
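A hedged note for later readers: much newer Ceph releases (Nautilus and up,
so not the cluster described here) can answer this directly via the
rbd_support mgr module:

ceph mgr module enable rbd_support   # enable per-image performance counters
rbd perf image iotop                 # top-like view, busiest images first
rbd perf image iostat rbd            # periodic per-image stats for pool 'rbd'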
] On Behalf Of Kasper Dieter
Sent: Wednesday, July 17, 2013 2:17 PM
To: Chen, Xiaoxi
Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
Subject: Re: Any concern about Ceph on CentOS
Hi Xiaoxi,
we are really running Ceph on CentOS-6.4
(6 server nodes, 3 client nodes, 160 OSDs).
We put
subscribe
Thanks,
Dieter
Hi Sage,
I'm a little bit confused about 'ceph-deploy' in 0.61:
- the 0.61 release notes say: "ceph-deploy: our new deployment tool to replace
'mkcephfs'"
- http://ceph.com/docs/master/rados/deployment/mkcephfs/
says: "To deploy a test or development cluster, you can use the mkcephfs
tool."
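For orientation, a minimal ceph-deploy bootstrap sketch from that era
(hostnames and the device path are placeholders):

ceph-deploy new mon1                   # write ceph.conf and the initial keyring
ceph-deploy install mon1 osd1          # install ceph packages on the nodes
ceph-deploy mon create mon1            # deploy the first monitor
ceph-deploy gatherkeys mon1            # collect the bootstrap keys
ceph-deploy osd create osd1:/dev/sdb   # prepare and activate one OSD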