I enabled compression with a command like this:
radosgw-admin zone placement modify --rgw-zone=coredumps \
    --placement-id=default-placement --compression=zlib
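(For reference, the change can be verified and applied roughly like this; a
sketch assuming the zone above and a systemd deployment, not the poster's
exact steps:)

    # Check that the placement target now shows "compression": "zlib".
    radosgw-admin zone get --rgw-zone=coredumps

    # Restart the gateways so they pick up the new placement settings;
    # the unit name depends on how RGW was deployed.
    systemctl restart ceph-radosgw.target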
Then, once an object was uploaded, Elasticsearch kept dumping the following
messages:
On Fri, Jan 19, 2018 at 11:54 PM, Youzhong Yang wrote:
> I don't think it's a hardware issue. All the hosts are VMs. By the way,
> using the same set of VMware hypervisors, I switched back to Ubuntu 16.04
> last night; so far so good, no freeze.
Too little information to make
Hi all,
I'm seeing weird problems when we, for instance, re-add a server or add
disks. Most of the time some PGs end up in "active+clean+remapped" state,
but today some of them got stuck "activating", which meant those PGs were
offline for a while. I'm able to fix things, but the fix
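(When PGs hang in "activating" like this, the usual first steps to narrow it
down look roughly like the following; a sketch with a made-up PG id, not the
poster's actual commands:)

    # Show which PGs are unhealthy and where they map.
    ceph health detail
    ceph pg dump_stuck inactive

    # Query one stuck PG to see why peering has not completed;
    # "2.1f" is a placeholder PG id.
    ceph pg 2.1f query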
Hello,
bcache didn't support partitions in the past, so a lot of our OSDs have
their data directly on:
/dev/bcache[0-9]
But that means I can't give them the needed partition type of
4fbd7e29-9d25-41b8-afd0-062c0ceff05d, and that means that activation
with udev and ceph-disk does not work.
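(One workaround sketch, assuming an OSD with id 12 whose data filesystem
sits directly on /dev/bcache0: skip the partition-type/udev trigger entirely
and mount and start the OSD from a boot script or unit file. An
illustration, not a tested recipe:)

    # Mount the OSD data filesystem straight from the bcache device.
    mount /dev/bcache0 /var/lib/ceph/osd/ceph-12

    # Start the OSD through its systemd instance once the data is in place.
    systemctl start ceph-osd@12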
I am not able to find documentation for how to convert an existing CephFS
filesystem to use allow_ec_overwrites. The documentation says that the
metadata pool needs to be replicated, but that the data pool can be EC. But
it says, "For Cephfs, using an erasure coded pool means setting that pool
in
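(For reference, the commands involved in attaching an EC data pool look
roughly like this; a sketch assuming a filesystem named "cephfs", an EC pool
named "ecpool", and a mount at /mnt/cephfs, not a verified conversion
procedure:)

    # Allow overwrites on the erasure-coded pool (BlueStore OSDs required).
    ceph osd pool set ecpool allow_ec_overwrites true

    # Attach the EC pool to the filesystem as an additional data pool.
    ceph fs add_data_pool cephfs ecpool

    # Direct new files under one directory to the EC pool via a layout xattr.
    setfattr -n ceph.dir.layout.pool -v ecpool /mnt/cephfs/archive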
If I test my connections with sockperf via a 1Gbit switch I get around
25 usec; when I test the 10Gbit connection via the switch I get around
12 usec. Is that normal, or should there be a 10x difference?
sockperf ping-pong
sockperf: Warmup stage (sending a few dummy messages)...
sockperf:
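(For anyone reproducing the measurement, a typical sockperf run looks like
this; a sketch assuming the server listens on 10.0.0.2 with the default
port:)

    # On the receiving host: start the sockperf server.
    sockperf server

    # On the sending host: run a ping-pong latency test for 10 seconds.
    sockperf ping-pong -i 10.0.0.2 -t 10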
Sorry for asking what is maybe obvious, but is this the kernel available
in elrepo? Or a different one?
-----Original Message-----
From: Mike Christie [mailto:mchri...@redhat.com]
Sent: Saturday, January 20, 2018 1:19
To: Steven Vacaroaia; Joshua Chen
Cc: ceph-users
Subject: Re: [ceph-users]
On Thu, Jan 18, 2018 at 6:39 PM, Florent B wrote:
> I still have file corruption on ceph-fuse with Luminous (on Debian
> Jessie, default kernel)!
>
> My mounts are using fuse_disable_pagecache=true
>
> And I have a lot of errors like "EOF reading msg header (got 0/30
>
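(For context, the option quoted above is a client-side ceph.conf setting; a
minimal sketch of where it lives, assuming the stock config layout:)

    # /etc/ceph/ceph.conf, on the client
    [client]
    fuse_disable_pagecache = true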