be much faster using BlueStore...
Regards
David Herselman
On 29 Dec 2017 22:06, Travis Nielsen <travis.niel...@quantum.com> wrote:
Since bluestore was declared stable in Luminous, is there any remaining
scenario to use filestore in new deployments? Or is it safe to assume that
bluestore is
20480M 11056M
Repeating the copy using the Perl solution is much slower, but as the VM is
currently off nothing has changed and each snapshot consumes zero data:
real    1m49.000s
user    1m34.339s
sys     0m17.847s
[admin@kvm5a ~]# rbd du rbd_hdd/vm-211-disk-3_backup
warning: fast-diff map is not enabled for vm-211-disk-3_backup. operation may
be slow.
NAME                       PROVISIONED  USED
vm-211-disk-3_backup@snap3      20480M 2764M
vm-211-disk-3_backup@snap2      20480M     0
vm-211-disk-3_backup@snap1      20480M     0
vm-211-disk-3_backup            20480M     0
                                20480M 2764M
PS: Not sure whether this is a Ceph display bug, but why would the snapshot base
be reported as not consuming any data while the first snapshot (rotated to
'snap3') reports all the usage? Purging all snapshots yields the following:
[admin@kvm5a ~]# rbd du rbd_hdd/vm-211-disk-3_backup
warning: fast-diff map is not enabled for vm-211-disk-3_backup. operation may
be slow.
NAME                 PROVISIONED  USED
vm-211-disk-3_backup      20480M 2764M
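For what it's worth, the "fast-diff map is not enabled" warning above can be
avoided by enabling the object-map and fast-diff features on the image and
rebuilding the object map. A minimal sketch (assuming exclusive-lock is already
enabled on the image, since fast-diff requires object-map, which in turn
requires exclusive-lock):
rbd feature enable rbd_hdd/vm-211-disk-3_backup object-map fast-diff
rbd object-map rebuild rbd_hdd/vm-211-disk-3_backup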
Regards
David Herselman
or not they were connected to a RAID module/card.
PS: After much searching we’ve decided to order the NVMe conversion kit and
have ordered HGST UltraStar SN200 2.5 inch SFF drives with a 3 DWPD rating.
Regards
David Herselman
From: Sean Redmond [mailto:sean.redmo...@gmail.com]
Sent: Thursday, 11 January
their Data Centre reliability stamp.
I returned the lot and am done with Intel SSDs; I will advise as many customers
and peers as I can to do the same…
Regards
David Herselman
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mike
Lovell
Sent: Thursday, 22 February 2018 11:19 PM
s to 0 before starting again.
This should prevent data flowing onto them, provided they are not in a different
device class or selected by another CRUSH ruleset, i.e.:
for OSD in $(seq 24 35); do
    ceph osd crush reweight osd.$OSD 0
done
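After the loop completes, the reweighted OSDs should show a CRUSH weight of 0
and their placement groups should start draining off; a quick way to confirm,
as a sketch using the same osd.24-35 range as above:
ceph osd df tree    # CRUSH weight column should now read 0 for osd.24 .. osd.35
ceph -s             # watch recovery/backfill move the PGs off them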
Regards
David Herselman
-Original Message-
From: Davi
so we used ddrescue to read the source image forwards, rebooted the node when it
stalled on the missing data, and thereafter repeated the copy in the reverse
direction...
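As a rough illustration of that workflow (the device paths and map file name
here are hypothetical, not taken from the thread), GNU ddrescue supports exactly
this forward-then-reverse pattern when resumed against the same map file:
# forward pass; -f is required because the destination is a block device
ddrescue -f /dev/rbd/rbd_hdd/vm-211-disk-3 /dev/rbd/rbd_hdd/vm-211-disk-3_copy rescue.map
# after rebooting the node, resume the remaining areas in reverse
ddrescue -f -R /dev/rbd/rbd_hdd/vm-211-disk-3 /dev/rbd/rbd_hdd/vm-211-disk-3_copy rescue.map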
Regards
David Herselman
ices to check if there
was newer firmware (there wasn't) and once when we restarted the node to see if
it could then access a failed drive.
Regards
David Herselman
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Thursday, 21 December 2017 3:24 AM
To: ceph-users@l
403]
pg_upmap_items 8.409 [404,403]
pg_upmap_items 8.40b [103,102,404,405]
pg_upmap_items 8.40c [404,400]
pg_upmap_items 8.410 [404,403]
pg_upmap_items 8.411 [404,405]
pg_upmap_items 8.417 [404,403]
pg_upmap_items 8.418 [404,403]
pg_upmap_items 9.2 [10401,10400]
pg_upmap
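For context, pg_upmap_items entries like those above can be listed from the OSD
map and, if no longer wanted, removed per placement group; a minimal sketch (the
PG id is copied from the output above):
ceph osd dump | grep pg_upmap_items
ceph osd rm-pg-upmap-items 8.409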
MiB 0
666 GiB
Regards
David Herselman
rbd_default_features 31;
ceph config dump | grep -e WHO -e rbd_default_features;
WHO    MASK LEVEL    OPTION               VALUE RO
global advanced rbd_default_features 31
Regards
David Herselman
-Original Message-
From: Stefan Kooman
Sent
OPTION VALUE RO
global advanced rbd_default_features 7
global advanced rbd_default_features 31
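For reference, rbd_default_features is a bitmask of RBD feature bits, so the two
conflicting values above decompose as:
7  = layering(1) + striping(2) + exclusive-lock(4)
31 = layering(1) + striping(2) + exclusive-lock(4) + object-map(8) + fast-diff(16)
Assuming the cluster uses the centralised config store shown in the dump above,
a single authoritative value could be reasserted with something like
'ceph config set global rbd_default_features 31' (value chosen only as an example).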
Regards
David Herselman