[ceph-users] RGW compression causing issue for ElasticSearch

2018-01-20 Thread Youzhong Yang
I enabled compression with a command like this: "radosgw-admin zone placement modify --rgw-zone=coredumps --placement-id=default-placement --compression=zlib". Then, once the object was uploaded, Elasticsearch kept dumping the following messages:
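A rough sketch of how to double-check that the compression setting actually took effect (zone name taken from the command above; bucket and object names are placeholders):

    # List the zone's placement targets and their compression setting
    radosgw-admin zone placement list --rgw-zone=coredumps

    # Stat an uploaded object; its attributes should record the compression used
    radosgw-admin object stat --bucket=<bucket> --object=<object>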

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-20 Thread Brad Hubbard
On Fri, Jan 19, 2018 at 11:54 PM, Youzhong Yang wrote: > I don't think it's a hardware issue. All the hosts are VMs. By the way, using the same set of VMware hypervisors, I switched back to Ubuntu 16.04 last night; so far so good, no freeze. Too little information to make
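A rough sketch of the kind of kernel-side information that helps narrow a hang like this down (assuming console or SSH access survives long enough to run it):

    # Recent kernel messages with readable timestamps
    dmesg -T | tail -n 200

    # Dump backtraces of blocked (uninterruptible) tasks into the kernel log
    echo w > /proc/sysrq-trigger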

[ceph-users] Weird issues related to (large/small) weights in mixed nvme/hdd pool

2018-01-20 Thread peter . linder
Hi all, I'm getting weird problems when we, for instance, re-add a server, add disks, etc. Most of the time some PGs end up in "active+clean+remapped" state, but today some of them got stuck "activating", which meant that some PGs were offline for a while. I'm able to fix things, but the fix
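A minimal sketch of the usual commands for inspecting this kind of state (the PG ID is a placeholder):

    # List PGs stuck in a non-clean state
    ceph pg dump_stuck unclean

    # Per-OSD utilization plus the CRUSH tree with device classes and weights
    ceph osd df tree

    # Ask one problematic PG why it is not going active
    ceph pg <pgid> query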

[ceph-users] udev rule or script to auto add bcache devices?

2018-01-20 Thread Stefan Priebe - Profihost AG
Hello, bcache didn't support partitions in the past, so a lot of our OSDs have their data directly on /dev/bcache[0-9]. But that means I can't give them the needed partition type of 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, and that means that activation with udev and ceph-disk does not work.
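One possible shape for such a rule, as a hedged sketch only (the activation script path and its contents are hypothetical and would have to do whatever ceph-disk normally does for these OSDs):

    # /etc/udev/rules.d/99-bcache-osd.rules (sketch)
    # Whenever a bcache device appears, hand it to a site-local activation script,
    # since ceph-disk cannot match these devices by partition-type GUID.
    ACTION=="add", KERNEL=="bcache[0-9]*", RUN+="/usr/local/sbin/activate-bcache-osd.sh /dev/%k"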

[ceph-users] Luminous upgrade with existing EC pools

2018-01-20 Thread David Turner
I am not able to find documentation on how to convert an existing CephFS filesystem to use allow_ec_overwrites. The documentation says that the metadata pool needs to be replicated, but that the data pool can be EC. But it says, "For Cephfs, using an erasure coded pool means setting that pool in
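For what it's worth, the pieces that usually go together when adding an EC data pool to an existing filesystem look roughly like this (a sketch only; pool and filesystem names are placeholders, and allow_ec_overwrites requires BlueStore OSDs):

    # Allow overwrites on the erasure-coded pool
    ceph osd pool set <ec_pool> allow_ec_overwrites true

    # Add the EC pool as an additional data pool of the existing filesystem
    ceph fs add_data_pool <fs_name> <ec_pool>

    # Point a directory's file layout at the EC pool so new files land there
    setfattr -n ceph.dir.layout.pool -v <ec_pool> /mnt/cephfs/<some_dir>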

[ceph-users] What should be the expected latency of 10Gbit network connections

2018-01-20 Thread Marc Roos
If I test my connections with sockperf via a 1Gbit switch I get around 25 usec; when I test the 10Gbit connection via the switch I get around 12 usec. Is that normal, or should there be a difference of 10x? sockperf ping-pong sockperf: Warmup stage (sending a few dummy messages)... sockperf:
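Roughly a 2x rather than 10x gap is plausible here: for small ping-pong messages the latency is dominated by switch, NIC and kernel overhead, while serialization time (a 64-byte message is 512 bits, about 0.5 usec at 1 Gbit vs 0.05 usec at 10 Gbit) is only a small part of the total. A sketch of a repeatable measurement (IP and port are placeholders):

    # On the receiving host
    sockperf server -i <server_ip> -p 11111

    # On the sending host: ping-pong for 10 seconds with 64-byte messages
    sockperf ping-pong -i <server_ip> -p 11111 -t 10 -m 64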

Re: [ceph-users] iSCSI over RBD

2018-01-20 Thread Marc Roos
Sorry for asking what is maybe obvious, but is this the kernel available in elrepo, or a different one? -Original Message- From: Mike Christie [mailto:mchri...@redhat.com] Sent: Saturday, 20 January 2018 1:19 To: Steven Vacaroaia; Joshua Chen Cc: ceph-users Subject: Re: [ceph-users]

Re: [ceph-users] Corrupted files on CephFS since Luminous upgrade

2018-01-20 Thread Yan, Zheng
On Thu, Jan 18, 2018 at 6:39 PM, Florent B wrote: > I still have file corruption on Ceph-fuse with Luminous (on Debian Jessie, default kernel)! My mounts are using fuse_disable_pagecache=true and I have a lot of errors like "EOF reading msg header (got 0/30
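For reference, the fuse_disable_pagecache option mentioned above is a client-side setting; a minimal sketch of putting it in place on a ceph-fuse client (assuming the standard /etc/ceph/ceph.conf path):

    # Append the client option to ceph.conf, then remount the ceph-fuse mounts
    cat >> /etc/ceph/ceph.conf <<'EOF'

    [client]
    fuse_disable_pagecache = true
    EOF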