On 28/08/2014 16:29, Mike Dawson wrote:
On 8/28/2014 12:23 AM, Christian Balzer wrote:
On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote:
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary wrote:
Hi Craig,
I assume the reason for
On Thu, 28 Aug 2014 10:29:20 -0400 Mike Dawson wrote:
On 8/28/2014 12:23 AM, Christian Balzer wrote:
On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote:
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary wrote:
Hi Craig,
Hello,
Is there any way to provoke a ceph cluster to level out its OSD usage?
Currently, a cluster of 3 servers with 4 identical OSDs each is
showing disparity of about 20% between the most-used OSD and the
least-used OSD. This wouldn't be too big of a problem, but the
most-used OSD is now at
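For context, a minimal sketch of the stock ceph CLI commands usually used to inspect and nudge per-OSD usage; the 110% threshold below is illustrative, not something suggested in this thread:

    ceph osd tree                          # confirm every OSD carries the same CRUSH weight
    ceph pg dump osds                      # per-OSD statistics, including used space
    ceph osd reweight-by-utilization 110   # lower the override weight of OSDs above 110% of the mean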
Having heard some suggestions on RAID configuration under Gluster (we have
someone else doing that evaluation, I'm doing the Ceph piece), I'm wondering
what (if any) RAID configurations would be recommended for Ceph. I have the
impression that striping data could counteract/undermine data
Is Ceph Filesystem ready for production servers?
The documentation says it's not, but I don't see that mentioned anywhere
else.
http://ceph.com/docs/master/cephfs/
Thanks,
Brian
On 8/28/2014 11:17 AM, Loic Dachary wrote:
On 28/08/2014 16:29, Mike Dawson wrote:
On 8/28/2014 12:23 AM, Christian Balzer wrote:
On Wed, 27 Aug 2014 13:04:48 +0200 Loic Dachary wrote:
On 27/08/2014 04:34, Christian Balzer wrote:
Hello,
On Tue, 26 Aug 2014 20:21:39 +0200 Loic Dachary
Hi Sebastian,
If you are trying with the latest Ceph master, there are some changes we made that will increase your read performance from SSD by a factor of ~5X when the IOs are actually hitting the disks. If the reads are served from memory, the improvement is even greater. The single OSD will be CPU bound
Just out of curiosity, is there a way to mount a Ceph filesystem directly on a
MSWindows system (2008 R2 server)? Just wanted to try something out from a VM.
Yes, Mark, all of my changes are in the Ceph master branch now and we are getting significant random-read performance improvement with that.
Thanks & Regards
Somnath
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Thursday, August 28, 2014 10:43
There aren't too many people running RAID under Ceph, as it's a second
layer of redundancy that in normal circumstances is a bit pointless.
But there are scenarios where it might be useful. You might check the
list archives for the anti-cephalopod question thread.
-Greg
Software Engineer #42 @
On Thu, Aug 28, 2014 at 10:36 AM, Brian C. Huffman
bhuff...@etinternational.com wrote:
Is Ceph Filesystem ready for production servers?
The documentation says it's not, but I don't see that mentioned anywhere
else.
http://ceph.com/docs/master/cephfs/
Everybody has their own standards, but
That's definitely interesting.
Is this meant to be released in a Firefly dot release, or will the changes land in Giant?
--
David Moreau Simard
On 2014-08-28 at 1:49 PM, "Somnath Roy" somnath@sandisk.com wrote:
Yes, Mark, all of my changes are in ceph main now and we are getting
On Thu, Aug 28, 2014 at 10:41 AM, LaBarre, James (CTR) A6IT
james.laba...@cigna.com wrote:
Just out of curiosity, is there a way to mount a Ceph filesystem directly on
a MSWindows system (2008 R2 server)? Just wanted to try something out from
a VM.
Nope, sorry.
-Greg
Nope, this will not be backported to Firefly, I guess.
Thanks & Regards
Somnath
-----Original Message-----
From: David Moreau Simard [mailto:dmsim...@iweb.com]
Sent: Thursday, August 28, 2014 11:32 AM
To: Somnath Roy; Mark Nelson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD
On Thu, Aug 28, 2014 at 10:48 PM, Somnath Roy somnath@sandisk.com wrote:
Nope, this will not be back ported to Firefly I guess.
Thanks Regards
Somnath
Thanks for sharing this; the first thing that came to mind when I looked at this thread was your patches :)
If Giant incorporates them,
My initial experience was similar to Mike's, causing a similar level of
paranoia. :-) I'm dealing with RadosGW though, so I can tolerate higher
latencies.
I was running my cluster with noout and nodown set for weeks at a time.
Recovery of a single OSD might cause other OSDs to crash. In the
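For reference, the flags mentioned here are set and cleared with the standard ceph CLI; a minimal sketch:

    ceph osd set noout       # prevent down OSDs from being marked out (no automatic re-replication)
    ceph osd set nodown      # prevent OSDs from being marked down by their peers
    ceph osd unset noout     # clear the flags again once the cluster is stable
    ceph osd unset nodown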
On 8/28/2014 4:17 PM, Craig Lewis wrote:
My initial experience was similar to Mike's, causing a similar level of
paranoia. :-) I'm dealing with RadosGW though, so I can tolerate
higher latencies.
I was running my cluster with noout and nodown set for weeks at a time.
I'm sure Craig will
Yes, from what I saw, the messenger-level bottleneck is still huge!
Hopefully the RDMA messenger will resolve that, and the performance gain will be significant for reads (on SSDs). For writes we need to uncover the OSD bottlenecks first to take advantage of the improved messaging upstream.
What I experienced is that
How many PGs do you have in your pool? This should be about 100/OSD. If it
is too low, you could get an imbalance. I don't know the consequence of
changing it on such a full cluster. The default values are only good for
small test environments.
Robert LeBlanc
Sent from a mobile device please
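As a rough illustration of the 100-PGs-per-OSD guideline; the 3x replication below is an assumption for the sake of the arithmetic, not something stated in this thread:

    # pg_num ≈ (num_osds * 100) / replica_count, rounded up to a power of two
    echo $(( 12 * 100 / 3 ))             # 400 -> round up to 512 for 12 OSDs at 3x replication
    ceph osd pool get <pool> pg_num      # check what a given pool currently uses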
On Thu, Aug 28, 2014 at 7:00 PM, Robert LeBlanc rob...@leblancnet.us wrote:
How many PGs do you have in your pool? This should be about 100/OSD.
There are 1328 PGs in the pool, so about 110 per OSD.
Thanks!
On Thu, Aug 28, 2014 at 1:30 PM, Gregory Farnum g...@inktank.com wrote:
On Thu, Aug 28, 2014 at 10:36 AM, Brian C. Huffman
bhuff...@etinternational.com wrote:
Is Ceph Filesystem ready for production servers?
The documentation says it's not, but I don't see that mentioned anywhere
else.
On Fri, Aug 29, 2014 at 8:36 AM, James Devine fxmul...@gmail.com wrote:
On Thu, Aug 28, 2014 at 1:30 PM, Gregory Farnum g...@inktank.com wrote:
On Thu, Aug 28, 2014 at 10:36 AM, Brian C. Huffman
bhuff...@etinternational.com wrote:
Is Ceph Filesystem ready for production servers?
The
On 29/08/14 04:11, Sebastien Han wrote:
Hey all,
See my fio template:
[global]
#logging
#write_iops_log=write_iops_log
#write_bw_log=write_bw_log
#write_lat_log=write_lat_log
time_based
runtime=60
ioengine=rbd
clientname=admin
pool=test
rbdname=fio
invalidate=0    # mandatory
#rw=randwrite
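For anyone reproducing this, a sketch of the setup the job file assumes (fio must be built with the rbd engine; the pool and image names come from the template, but the size, pg_num, and job file name are illustrative):

    ceph osd pool create test 128        # the pool named in the template; pg_num is illustrative
    rbd create test/fio --size 10240     # 10 GiB image to back rbdname=fio
    fio rbd-bench.fio                    # run the job file above (file name is made up here)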
On 29/08/14 14:06, Mark Kirkwood wrote:
... mounting (xfs) with nobarrier seems to get
much better results. The run below is for a single osd on an xfs
partition from an Intel 520. I'm using another 520 as a journal:
...and adding
filestore_queue_max_ops = 2
improved IOPS a bit more:
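For reference, a shell sketch of applying the same two tweaks to a running OSD; the OSD id and mount point are placeholders, nobarrier trades crash safety for speed and is only sensible on drives with power-loss protection, and the value 2 is copied verbatim from the message above:

    mount -o remount,nobarrier /var/lib/ceph/osd/ceph-0         # remount the OSD's XFS data partition
    ceph tell osd.0 injectargs '--filestore_queue_max_ops 2'    # runtime change; persist it in ceph.conf to keep it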
Hello,
On Thu, 28 Aug 2014 19:49:59 -0400 J David wrote:
On Thu, Aug 28, 2014 at 7:00 PM, Robert LeBlanc rob...@leblancnet.us
wrote:
How many PGs do you have in your pool? This should be about 100/OSD.
There are 1328 PGs in the pool, so about 110 per OSD.
And just to be pedantic, the
Hi Roy,
I have already scanned your merged code for the fdcache and the lfn_find/lfn_open optimizations; could you give some performance improvement data for it? I fully agree with your direction; do you have any update on it?
As for the messenger level, I have some very early work on
Another thread about it (http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/19284)
On Fri, Aug 29, 2014 at 11:01 AM, Haomai Wang haomaiw...@gmail.com wrote:
Hi Roy,
I already scan your merged codes about fdcache and optimizing for
lfn_find/lfn_open, could you give some performance
Hi,
There's also an early-stage TCP transport implementation for Accelio, also
EPOLL-based. (We haven't attempted to run Ceph protocols over it yet, to my
knowledge, but it should be straightforward.)
Regards,
Matt
- Haomai Wang haomaiw...@gmail.com wrote:
Hi Roy,
As for
I have a basic question about the monitor and Paxos relationship:
As the documentation says, a Ceph monitor contains the cluster map; if there is any change in the state of the cluster, the change is updated in the cluster map. Monitors use the Paxos algorithm to create consensus among the monitors to
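The monitors' Paxos-driven agreement can be observed from the standard CLI, which may help while exploring the question above:

    ceph mon stat          # monmap epoch, quorum membership and the current election epoch
    ceph quorum_status     # JSON detail: the quorum leader, members and monmap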