On Thursday, June 21, 2012 at 11:12 AM, Barry Pederson wrote:
On Jun 20, 2012, at 4:59 PM, Gregory Farnum wrote:
On Wed, Jun 20, 2012 at 2:53 PM, Travis Rhoden trho...@gmail.com wrote:
This incorrect syntax is still published in the docs at
Thanks, yes, it is from the next branch.
On 23.06.2012 at 02:26, Dan Mick dan.m...@inktank.com wrote:
The ceph-osd binary you sent claims to be version 0.47.2-521-g88c762, which
is not quite 0.47.3. You can get the version by running the binary with -v,
or (in my case) by examining strings in the binary.
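For the archives, a minimal sketch of both version checks Dan mentions; the binary path is illustrative:

```shell
# Ask the binary directly (assumes a runnable ceph-osd in the current dir):
./ceph-osd -v

# If it won't run on this host, grep the embedded version string instead;
# the pattern matches release tags like 0.47.2 and git builds like
# 0.47.2-521-g88c762:
strings ./ceph-osd | grep -oE '[0-9]+\.[0-9]+\.[0-9]+(-[0-9]+-g[0-9a-f]+)?' | sort -u
```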
Thanks did you find anything?
On 23.06.2012 at 01:59, Sam Just sam.j...@inktank.com wrote:
I am still looking into the logs.
-Sam
On Fri, Jun 22, 2012 at 3:56 PM, Dan Mick dan.m...@inktank.com wrote:
Stefan, I'm looking at your logs and coredump now.
On 06/21/2012 11:43 PM, Stefan
Thanks Dan,
The btrfs checksum had failed on that file.
All three servers are running Fedora 17 with kernel 3.4.3
The logs on all three servers are full of messages like:
Jun 23 04:02:19 Store2 kernel: [63811.494955] ceph-osd: page allocation
failure: order:3, mode:0x4020
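On the allocation failure itself: order:3 means the kernel wanted 2^3 = 8 physically contiguous pages, i.e. 32 KiB with 4 KiB pages. A hedged, Linux-only way to see whether higher orders are fragmented:

```shell
# Each numeric column in /proc/buddyinfo is the count of free blocks of that
# order; the counts start at field 5 (order 0), so order 3 is field 8. Few
# or zero entries in the higher-order columns suggest fragmentation.
cat /proc/buddyinfo
awk '/Normal/ { print "free order-3 blocks in", $4, "zone:", $8 }' /proc/buddyinfo

# Sanity check of the size math: 2^3 pages * 4 KiB/page = 32 KiB
echo "$(( (1 << 3) * 4 )) KiB"
```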
The difference
On Fri, 22 Jun 2012, Alexandre DERUMIER wrote:
Hi Sage,
thanks for your response.
If you turn off the journal completely, you will see bursty write commits
from the perspective of the client, because the OSD is periodically doing
a sync or snapshot and only acking the writes then.
If
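A hedged illustration of the knobs involved: with the journal off, commit latency is governed by how often the filestore syncs. The option names below are from the 0.4x-era ceph.conf, and the values are only the assumed defaults, not a recommendation:

```ini
[osd]
    ; lower bound between filestore syncs/snapshots (seconds)
    filestore min sync interval = .01
    ; upper bound; with no journal, client commits can stall up to this long
    filestore max sync interval = 5
```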
On 6/23/12 10:38 AM, Sage Weil wrote:
On Fri, 22 Jun 2012, Alexandre DERUMIER wrote:
Hi Sage,
thanks for your response.
If you turn off the journal completely, you will see bursty write commits
from the perspective of the client, because the OSD is periodically doing
a sync or snapshot and
I was just talking with Elder on IRC yesterday about looking into how
much small network transfers are hurting us in cases like these. Even
with SSD based OSDs I haven't seen a very dramatic improvement in small
request performance. How tough would it be to aggregate requests into
larger
On Sat, 23 Jun 2012, Alexandre DERUMIER wrote:
I was just talking with Elder on IRC yesterday about looking into how
much small network transfers are hurting us in cases like these. Even
with SSD based OSDs I haven't seen a very dramatic improvement in small
request performance. How tough
Is that 2000 ios from a single client? You might try multiple clients and
see if the sum of the ios will scale any higher.
yes from a single client. (qemu-kvm guest).
Tomorrow, I'll retry with 3 qemu-kvm guests, on the same host and on 3 different hosts.
I'll also try on a bigger CPU machine to compare. (I
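For the multi-client run, something along these lines might do (hostnames, the device path, and the fio workload are all placeholders, not what was actually used):

```shell
# Launch the same small-block random-write load from three guests in
# parallel, then wait; compare the summed IOPS against the single-client run.
for host in guest1 guest2 guest3; do
  ssh "$host" fio --name=randwrite --rw=randwrite --bs=4k --direct=1 \
      --ioengine=libaio --iodepth=32 --runtime=30 --time_based \
      --filename=/dev/vdb &
done
wait
```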
Hi,
I got stuck while selecting the right FS for Ceph / RBD.
XFS:
- deadlock / hung task under 3.0.34 in xfs_ilock / xfs_buf_lock while syncfs
- under 3.5-rc3 all my machines got loaded doing nothing but waiting
for XFS / SSDs, so Ceph is really slow / unusable
btrfs:
- 3.5-rc3 ceph is
Hi all from hot Kiev))
Does anybody use Ceph as backend storage for NOVA-INST-DIR/instances/ ?
Is it in production use? Is live migration still possible?
I would appreciate any advice from a best-practices point of view.
--
Igor Laskovy
facebook.com/igor.laskovy
Kiev, Ukraine