Re: [ceph-users] Running Jewel and Luminous mixed for a longer period

2017-12-31 Thread David Herselman
be much faster using BlueStore... Regards David Herselman On 29 Dec 2017 22:06, Travis Nielsen <travis.niel...@quantum.com> wrote: Since bluestore was declared stable in Luminous, is there any remaining scenario to use filestore in new deployments? Or is it safe to assume that bluestore is
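
For reference (not part of the quoted thread): under Luminous a new OSD is typically created as BlueStore via ceph-volume, with FileStore still available behind an explicit flag. A minimal sketch, with the device paths as placeholders:

  # BlueStore, the default object store from Luminous onwards
  ceph-volume lvm create --bluestore --data /dev/sdX
  # FileStore remains selectable, but needs a separate journal device or partition
  ceph-volume lvm create --filestore --data /dev/sdX --journal /dev/sdY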

[ceph-users] Copy locked parent and clones to another pool

2017-12-24 Thread David Herselman
20480M 11056M Repeating the copy using the Perl solution is much slower, but as the VM is currently off nothing has changed and each snapshot consumes zero data:

real    1m49.000s
user    1m34.339s
sys     0m17.847s

[admin@kvm5a ~]# rbd du rbd_hdd/vm-211-disk-3_backup
warning: fast-diff map is not enabled for vm-211-disk-3_backup. operation may be slow.
NAME                         PROVISIONED  USED
vm-211-disk-3_backup@snap3        20480M  2764M
vm-211-disk-3_backup@snap2        20480M      0
vm-211-disk-3_backup@snap1        20480M      0
vm-211-disk-3_backup              20480M      0
                                  20480M  2764M

PS: Not sure if this is a Ceph display bug, but why would the snapshot base be reported as not consuming any data while the first snapshot (rotated to 'snap3') reports all the usage? Purging all snapshots yields the following:

[admin@kvm5a ~]# rbd du rbd_hdd/vm-211-disk-3_backup
warning: fast-diff map is not enabled for vm-211-disk-3_backup. operation may be slow.
NAME                  PROVISIONED  USED
vm-211-disk-3_backup       20480M  2764M

Regards David Herselman
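
The "fast-diff map is not enabled" warning above suggests the object-map/fast-diff features are switched off on the copied image. A minimal sketch of enabling them afterwards, reusing the image name from the output above but otherwise illustrative (fast-diff depends on object-map, which depends on exclusive-lock):

  # enable the feature chain in one go, then rebuild the object map
  rbd feature enable rbd_hdd/vm-211-disk-3_backup exclusive-lock object-map fast-diff
  rbd object-map rebuild rbd_hdd/vm-211-disk-3_backup
  # rbd du should now return quickly and without the warning
  rbd du rbd_hdd/vm-211-disk-3_backup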

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-01-10 Thread David Herselman
or not they were connected to a RAID module/card. PS: After much searching we’ve decided to order the NVMe conversion kit and have ordered HGST UltraStar SN200 2.5 inch SFF drives with a 3 DWPD rating. Regards David Herselman From: Sean Redmond [mailto:sean.redmo...@gmail.com] Sent: Thursday, 11 January

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2018-02-22 Thread David Herselman
their Data Centre reliability stamp. I returned the lot and am done with Intel SSDs, will advise as many customers and peers to do the same… Regards David Herselman From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mike Lovell Sent: Thursday, 22 February 2018 11:19 PM

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2017-12-21 Thread David Herselman
s to 0 before starting again. This should prevent data flowing onto them, provided they are not in a different device class or covered by another CRUSH selection ruleset, i.e.: for OSD in `seq 24 35`; do ceph osd crush reweight osd.$OSD 0; done Regards David Herselman -Original Message- From: Davi
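
To confirm the reweight took effect, something like the following could be used (illustrative only, not from the original message; same OSD range assumed):

  # the CRUSH weight column should now read 0 for osd.24 through osd.35
  ceph osd df tree
  # watch PGs backfill away from the reweighted OSDs
  ceph -s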

[ceph-users] Many concurrent drive failures - How do I activate pgs?

2017-12-20 Thread David Herselman
so we used ddrescue to read the source image forwards, rebooted the node when it stalled on the missing data and repeated the copy in reverse direction thereafter... Regards David Herselman
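
For illustration, a forward pass followed by a reverse pass with GNU ddrescue might look like the sketch below; the device paths and mapfile name are placeholders, not taken from the original message:

  # first pass: copy forwards, skipping the slow scraping phase
  ddrescue -f -n /dev/mapper/source /dev/mapper/dest rescue.map
  # after the reboot: resume in the reverse direction using the same mapfile
  ddrescue -f -R /dev/mapper/source /dev/mapper/dest rescue.map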

Re: [ceph-users] Many concurrent drive failures - How do I activate pgs?

2017-12-20 Thread David Herselman
ices to check if there was newer firmware (there wasn't) and once when we restarted the node to see if it could then access a failed drive. Regards David Herselman -Original Message- From: Christian Balzer [mailto:ch...@gol.com] Sent: Thursday, 21 December 2017 3:24 AM To: ceph-users@l

Re: [ceph-users] Ceph Nautilus - can't balance due to degraded state

2019-08-03 Thread David Herselman
403]
pg_upmap_items 8.409 [404,403]
pg_upmap_items 8.40b [103,102,404,405]
pg_upmap_items 8.40c [404,400]
pg_upmap_items 8.410 [404,403]
pg_upmap_items 8.411 [404,405]
pg_upmap_items 8.417 [404,403]
pg_upmap_items 8.418 [404,403]
pg_upmap_items 9.2 [10401,10400]
pg_upmap
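
These entries are explicit upmap exceptions stored in the osdmap. They can be listed and, if one is pinning a PG to unwanted OSDs, removed individually so CRUSH places the PG again. A sketch, reusing a PG id from the listing above for illustration:

  # show all explicit upmap exceptions in the osdmap
  ceph osd dump | grep pg_upmap_items
  # drop a single exception so the PG is placed by CRUSH again
  ceph osd rm-pg-upmap-items 8.409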

[ceph-users] Problem formatting erasure coded image

2019-09-22 Thread David Herselman
MiB 0 666 GiB Regards David Herselman
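
For context (not from the original mail): an RBD image backed by an erasure coded pool keeps its metadata in a replicated pool and only its data objects in the EC pool, which must allow overwrites. A minimal sketch with assumed pool and image names:

  # the EC pool must permit overwrites before RBD can use it
  ceph osd pool set ecpool allow_ec_overwrites true
  # image metadata goes to the replicated pool, data objects to the EC pool
  rbd create rbd_hdd/vm-test-disk --size 100G --data-pool ecpool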

Re: [ceph-users] Ceph assimilated configuration - unable to remove item

2019-12-13 Thread David Herselman
rbd_default_features 31; ceph config dump | grep -e WHO -e rbd_default_features;
WHO     MASK  LEVEL     OPTION                VALUE  RO
global        advanced  rbd_default_features  31
Regards David Herselman -Original Message- From: Stefan Kooman Sent
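
Assuming the stray entry lives in the monitor configuration database, the usual way to drop it would be something like the following (illustrative, not quoted from the thread):

  # remove the option from the central config database and confirm
  ceph config rm global rbd_default_features
  ceph config dump | grep rbd_default_features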

[ceph-users] Ceph assimilated configuration - unable to remove item

2019-12-11 Thread David Herselman
OPTION VALUE RO
global advanced rbd_default_features 7
global advanced rbd_default_features 31
Regards David Herselman
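
For background, assimilated options such as the above typically enter the monitor configuration database via something like the command below; the file paths are assumptions for illustration:

  # import ceph.conf options into the mon config database,
  # writing out a minimised ceph.conf with whatever could not be assimilated
  ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.minimal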