Re: [ceph-users] MDS damaged

2018-07-15 Thread Nicolas Huillard
On Sunday, July 15, 2018 at 11:01 -0500, Adam Tygart wrote:
> Check out the message titled "IMPORTANT: broken luminous 12.2.6
> release in repo, do not upgrade"
> It sounds like 12.2.7 should come *soon* to fix this transparently.
Thanks. I didn't notice this one. I should monitor more

Re: [ceph-users] MDS damaged

2018-07-15 Thread Adam Tygart
Check out the message titled "IMPORTANT: broken luminous 12.2.6 release in repo, do not upgrade". It sounds like 12.2.7 should come *soon* to fix this transparently. -- Adam
On Sun, Jul 15, 2018 at 10:28 AM, Nicolas Huillard wrote:
> Hi all,
> I have the same problem here:
> * during the

Re: [ceph-users] MDS damaged

2018-07-15 Thread Nicolas Huillard
Hi all, I have the same problem here:
* it happened during the upgrade from 12.2.5 to 12.2.6
* I restarted all the OSD servers in turn, which did not trigger any bad thing
* a few minutes after upgrading the OSDs/MONs/MDSs/MGRs (all on the same set of servers) and unsetting noout, I upgraded the clients,
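The set/unset-noout rolling restart described above can be sketched as follows; this is a generic outline, not the poster's exact commands, and the package manager and unit names are assumptions:

```shell
# Prevent CRUSH from marking restarting OSDs "out" and rebalancing data.
ceph osd set noout

# On each OSD server in turn (Debian-style package manager assumed):
apt-get update && apt-get install -y ceph
systemctl restart ceph-osd.target
# Wait until `ceph -s` reports all PGs active+clean before the next host.
ceph -s

# Once every host runs the new version, allow normal out-marking again:
ceph osd unset noout
```
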

Re: [ceph-users] RBD image repurpose between iSCSI and QEMU VM, how to do properly ?

2018-07-15 Thread Wladimir Mutel
Jason Dillaman wrote:
> > I am doing more experiments with the Ceph iSCSI gateway and I am a bit confused about how to properly repurpose an RBD image from an iSCSI target into a QEMU virtual disk and back.
> This isn't really a use case that we support nor intend to support. Your best bet would

[ceph-users] chkdsk /b fails on Ceph iSCSI volume

2018-07-15 Thread Wladimir Mutel
Hi, I cloned an NTFS filesystem with bad blocks from a USB HDD onto a Ceph RBD volume (using ntfsclone, so the copy has sparse regions), and decided to clean up the bad blocks within the copy. I ran chkdsk /b from Windows and it fails on free-space verification (step 5 of 5). In tcmu-runner.log I see that
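The clone path described above can be sketched like this; the device path and image name are hypothetical, and the exact options the poster used are not given in the message:

```shell
# Create and map an RBD image to receive the NTFS copy
# (rbd/ntfs-copy and /dev/sdb1 are hypothetical names).
rbd create rbd/ntfs-copy --size 500G
DEV=$(rbd map rbd/ntfs-copy)

# --rescue continues past read errors on the damaged source disk;
# ntfsclone skips unused clusters, so the copy stays sparse on RBD.
ntfsclone --rescue --overwrite "$DEV" /dev/sdb1

rbd unmap "$DEV"
```
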

[ceph-users] CephFS with erasure coding, do I need a cache-pool?

2018-07-15 Thread Oliver Schulz
Dear all, we're planning a new Ceph cluster, with CephFS as the main workload, and would like to use erasure coding to use the disks more efficiently. The access pattern will probably be more read- than write-heavy, on average. I don't have any practical experience with erasure-coded pools, so
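Since Luminous, an erasure-coded data pool can back CephFS directly, without a cache tier, provided EC overwrites are enabled on the pool. A minimal sketch, where the pool names, PG counts, k/m profile, filesystem name, and mount path are all assumptions:

```shell
# Hypothetical EC profile: 4 data + 2 coding chunks, host failure domain.
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host

ceph osd pool create cephfs_data_ec 128 erasure ec42
# CephFS performs partial overwrites, so this flag is required on an EC pool:
ceph osd pool set cephfs_data_ec allow_ec_overwrites true

# Keep metadata on a replicated pool; attach the EC pool as an extra data
# pool and direct a directory tree to it via a file layout attribute.
ceph fs add_data_pool cephfs cephfs_data_ec
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/ec-data
```
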

[ceph-users] Safe to use rados -p rbd cleanup?

2018-07-15 Thread Mehmet
hello guys, in my production cluster I have many objects like this:
#> rados -p rbd ls | grep 'benchmark'
...
benchmark_data_inkscope.example.net_32654_object1918
benchmark_data_server_26414_object1990
...
Is it safe to run "rados -p rbd cleanup" or is there any risk for my images?
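Objects named `benchmark_data_*` are leftovers from `rados bench` runs, and `rados cleanup` only removes objects matching the benchmark prefix, so RBD image objects (`rbd_data.*`, `rbd_header.*`) are not touched. A sketch of a cautious cleanup; the prefix value here is an assumption based on the object names shown above:

```shell
# First inspect what matches the benchmark prefix (read-only check):
rados -p rbd ls | grep '^benchmark_data' | head

# Remove bench leftovers; --prefix restricts deletion to matching objects.
rados -p rbd cleanup --prefix benchmark_data
```
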