Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev wrote: > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev wrote: >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote: >>> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev wrote: I am not sure this is related to RBD, but

Re: [ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote: > On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev wrote: >> I am not sure this is related to RBD, but in case it is, this would be an important bug to fix. Running LVM on top of RBD, XFS filesystem on top of that, consumed in

[ceph-users] LVM on top of RBD apparent pagecache corruption with snapshots

2018-07-25 Thread Alex Gorbachev
I am not sure this is related to RBD, but in case it is, this would be an important bug to fix. Running LVM on top of RBD, with an XFS filesystem on top of that, consumed in RHEL 7.4. When running a large read operation and doing LVM snapshots during that operation, the block being read winds up all zeroes
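For context, a minimal sketch of the kind of setup and workload described above; the pool, image, and volume group names are examples, not taken from the original report:

  # map an RBD image and build LVM + XFS on top of it
  rbd create rbd/lvmtest --size 102400
  rbd map rbd/lvmtest                  # appears as e.g. /dev/rbd0
  pvcreate /dev/rbd0
  vgcreate vg_rbd /dev/rbd0
  lvcreate -n lv_data -L 80G vg_rbd
  mkfs.xfs /dev/vg_rbd/lv_data
  mount /dev/vg_rbd/lv_data /mnt/data

  # start a long sequential read, then take an LVM snapshot while it runs
  dd if=/mnt/data/bigfile of=/dev/null bs=4M &
  lvcreate -s -n lv_data_snap -L 10G /dev/vg_rbd/lv_data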

Re: [ceph-users] ls operation is too slow in cephfs

2018-07-25 Thread Ronny Aasen
What are you talking about when you say you have an MDS in a region? AFAIK only radosgw supports multisite and regions. It sounds like you have a cluster spread out over a geographical area, and this will have a massive impact on latency. What is the latency between all servers in the cluster?
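A quick way to put numbers on that latency question; the hostnames are placeholders:

  # round-trip time from the client to each mon/OSD host
  for h in mon1 osd1 osd2 osd3; do ping -c 5 -q "$h" | tail -1; done

  # commit/apply latency as reported by the OSDs themselves
  ceph osd perf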

Re: [ceph-users] Reclaim free space on RBD images that use Bluestore?????

2018-07-25 Thread Sean Bolding
Thanks. Yes, it turns out this was not an issue with Ceph, but rather an issue with XenServer. Starting in version 7, XenServer changed how they manage LVM by adding a VHD layer on top of it. They did it to handle live migrations, but ironically broke live migrations when using any iSCSI including

Re: [ceph-users] Why LZ4 isn't built with ceph?

2018-07-25 Thread Casey Bodley
On 07/25/2018 08:39 AM, Elias Abacioglu wrote: Hi, I'm wondering why LZ4 isn't built by default for newer Linux distros like Ubuntu Xenial? I understand that it wasn't built for Trusty because of too-old lz4 libraries. But why isn't it built for the newer distros? Thanks, Elias
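For reference, LZ4 support is a build-time option in Ceph; a hedged sketch of enabling it when compiling from source (flag and script names as found in the Ceph source tree):

  # build Ceph from source with the LZ4 compressor enabled
  ./do_cmake.sh -DWITH_LZ4=ON
  cd build && make -j$(nproc)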

Re: [ceph-users] download.ceph.com repository changes

2018-07-25 Thread Sage Weil
On Tue, 24 Jul 2018, Alfredo Deza wrote: > Hi all, > After the 12.2.6 release went out, we've been thinking about better ways > to remove a version from our repositories to prevent users from > upgrading/installing a known bad release. > The way our repos are structured today means every single
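Until the repositories themselves can hide a bad build, package pinning on the client side can keep a known-bad version such as 12.2.6 off a node; a hedged sketch, not an official recommendation:

  # yum/CentOS: globally refuse the bad build
  echo "exclude=ceph*-12.2.6*" >> /etc/yum.conf

  # apt/Ubuntu: pin the known-good version above everything else
  printf 'Package: ceph*\nPin: version 12.2.7*\nPin-Priority: 1001\n' \
    > /etc/apt/preferences.d/ceph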

[ceph-users] Cephfs meta data pool to ssd and measuring performance difference

2018-07-25 Thread Marc Roos
From this thread, I got how to move the metadata pool from the HDDs to the SSDs: https://www.spinics.net/lists/ceph-users/msg39498.html
  ceph osd pool get fs_meta crush_rule
  ceph osd pool set fs_meta crush_rule replicated_ruleset_ssd
I guess this can be done on a live system? What would
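A hedged sketch of the same move using device classes (Luminous and later); the rule name is an example. Changing the rule on a live pool triggers data movement, which can be watched while it runs:

  # create a replicated rule restricted to SSD OSDs and point the metadata pool at it
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd pool set fs_meta crush_rule replicated_ssd

  # confirm the rule and watch the metadata objects migrate
  ceph osd pool get fs_meta crush_rule
  ceph -s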

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Yan, Zheng
On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng wrote: > On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote: >> Hello, I've attached the PDF. I don't know if it is important, but I made changes to the configuration and I've restarted the servers after dumping that heap file. I've

[ceph-users] Why LZ4 isn't built with ceph?

2018-07-25 Thread Elias Abacioglu
Hi, I'm wondering why LZ4 isn't built by default for newer Linux distros like Ubuntu Xenial? I understand that it wasn't built for Trusty because of too-old lz4 libraries. But why isn't it built for the newer distros? Thanks, Elias
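One way to check whether an installed build shipped the LZ4 compressor is to look for the plugin itself; the directory below is an assumption for RPM-based installs and may differ per distro:

  # compressor plugins present in this build
  ls /usr/lib64/ceph/compressor/
  # libceph_lz4.so should sit alongside libceph_snappy.so and libceph_zlib.so if LZ4 was built in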

Re: [ceph-users] Insane CPU utilization in ceph.fuse

2018-07-25 Thread Yan, Zheng
On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote: > Hello, I've attached the PDF. I don't know if it is important, but I made changes to the configuration and I've restarted the servers after dumping that heap file. I've changed the memory_limit to 25 MB to test if still with acceptable
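The preview does not say which limit was lowered to 25 MB; if it is the MDS cache limit, a hedged sketch of adjusting it and re-checking the heap (the daemon name is a placeholder):

  # set the MDS cache memory limit on a running daemon (value in bytes)
  ceph daemon mds.a config set mds_cache_memory_limit 1073741824

  # dump tcmalloc heap statistics and return freed memory to the OS
  ceph daemon mds.a heap stats
  ceph daemon mds.a heap release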

Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
My cache pool seems affected by an old/closed bug... but I don't think this is (directly?) related to the current issue - but this won't help anyway :-/ http://tracker.ceph.com/issues/12659 Since I got promote issues, I tried to flush only the affected RBD image: I got 6 unflushable
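A hedged sketch of flushing just one image's objects out of a cache tier; the pool and image names are placeholders:

  # find the object-name prefix for the image
  rbd info rbd/myimage | grep block_name_prefix    # e.g. rbd_data.1234abcd

  # flush and evict only that image's objects from the cache pool
  rados -p cachepool ls | grep rbd_data.1234abcd | while read obj; do
      rados -p cachepool cache-flush "$obj"
      rados -p cachepool cache-evict "$obj"
  done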

Re: [ceph-users] Error creating compat weight-set with mgr balancer plugin

2018-07-25 Thread Martin Overgaard Hansen
> On 24 Jul 2018, at 13.22, Lothar Gesslein wrote: >> On 07/24/2018 12:58 PM, Martin Overgaard Hansen wrote: >> Creating a compat weight set manually with 'ceph osd crush weight-set create-compat' gives me: Error EPERM: crush map contains one or more bucket(s) that are not straw2
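If every node in the cluster is on Luminous, the legacy buckets can be converted first and the command retried; a hedged sketch (the conversion may move some data):

  # convert all straw buckets to straw2, then retry the compat weight-set
  ceph osd crush set-all-straw-buckets-to-straw2
  ceph osd crush weight-set create-compat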

Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
Hi again, Now with all OSDs restarted, I'm getting:
  health: HEALTH_ERR
          777 scrub errors
          Possible data damage: 36 pgs inconsistent
  (...)
  pgs: 4764 active+clean
       36   active+clean+inconsistent
But from what I could read up to now, this is what's
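A hedged sketch of inspecting and repairing the inconsistent PGs once the cause is understood; the pool name and PG id are placeholders:

  # list inconsistent PGs in a pool and the objects inside one of them
  rados list-inconsistent-pg <pool>
  rados list-inconsistent-obj 2.1a --format=json-pretty

  # repair a single PG after confirming the errors are the expected digest mismatches
  ceph pg repair 2.1a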

Re: [ceph-users] 12.2.7 + osd skip data digest + bluestore + I/O errors

2018-07-25 Thread SCHAER Frederic
Hi Dan, Just checked again: arggghhh...
  # grep AUTO_RESTART /etc/sysconfig/ceph
  CEPH_AUTO_RESTART_ON_UPGRADE=no
So no :'( RPMs were upgraded, but OSDs were not restarted as I thought. Or at least not restarted with the new 12.2.7 binaries (but since the skip digest option was present in the
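A hedged way to confirm which binaries the OSDs are actually running after the upgrade, and to restart them if needed:

  # running daemon versions vs. the installed package
  ceph versions
  rpm -q ceph-osd

  # restart all OSDs on this host so they pick up the 12.2.7 binaries
  systemctl restart ceph-osd.target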

Re: [ceph-users] JBOD question

2018-07-25 Thread Caspar Smit
Satish, Yes, that card supports 'both'. You have to flash the IR firmware (IT firmware = JBOD only) and then you are able to create RAID1 sets in the BIOS of the card, and any unused disks will be seen by the OS as 'JBOD'. Kind regards, Caspar Smit 2018-07-23 20:43 GMT+02:00 Satish Patel: > I
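A hedged sketch of checking and flashing the controller firmware with the LSI tools; the firmware file names depend on the exact card and are placeholders:

  # show adapters and whether they currently run IT or IR firmware
  sas2flash -listall

  # flash the IR firmware plus option ROM (example names for a SAS2-based card)
  sas2flash -o -f 9211_8i_IR.bin -b mptsas2.rom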