[ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Kenneth Van Alstyne
ack" - rbd_concurrent_management_ops is unset, so it appears the default is “10” Thanks, -- Kenneth Van Alstyne Systems Architect Knight Point Systems, LLC Service-Disabled Veteran-Owned Business 1775 Wiehle Avenue Suite 101 | Reston, VA 20190 c: 228-547-8045 f: 571-266-3106 www.knightpoint.com DHS EAGLE

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-08-31 Thread Kenneth Van Alstyne
…I/O coalescing to deal with my crippling IOPS limit due to the low number of spindles? Thanks, -- Kenneth Van Alstyne
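
If by coalescing you mean the librbd writeback cache, it can absorb and merge small guest writes before they reach the spindles; a hedged ceph.conf sketch (these are the standard librbd cache option names, but the sizes are illustrative, not recommendations):

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        rbd cache size = 67108864           # 64 MiB per image
        rbd cache max dirty = 50331648      # allow up to 48 MiB dirty before writeback
        rbd cache target dirty = 33554432   # begin flushing at 32 MiB

Note that with qemu-kvm the guest drive generally needs cache=writeback, since qemu's cache mode takes precedence over the librbd setting.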

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Kenneth Van Alstyne
Thanks for the awesome advice, folks. Until I can go larger scale (50+ SATA disks), I’m thinking my best option here is to just swap out these 1TB SATA disks with 1TB SSDs. Am I oversimplifying the short-term solution? Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Ceph Performance Questions with rbd images access by qemu-kvm

2015-09-01 Thread Kenneth Van Alstyne
Got it — I’ll keep that in mind. That may just be what I need to “get by” for now. Ultimately, we’re looking to buy at least three server nodes that can each hold 40+ OSDs backed by 2TB+ SATA disks. Thanks, -- Kenneth Van Alstyne

[ceph-users] Snapshot cleanup performance impact on client I/O?

2017-06-30 Thread Kenneth Van Alstyne
…know if I’ve missed something fundamental. Thanks, -- Kenneth Van Alstyne
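
For anyone else seeing client I/O suffer during snapshot removal, the OSD-side snapshot trimming can be throttled at runtime; a sketch with illustrative values (option names as they exist in the Jewel/Luminous era, tune to the cluster):

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
    ceph tell osd.* injectargs '--osd_snap_trim_priority 1'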

[ceph-users] OSD Crash When Upgrading from Jewel to Luminous?

2018-08-17 Thread Kenneth Van Alstyne
…amount of logging and debug information I have available, unfortunately. If it helps, all ceph-mon, ceph-mds, radosgw, and ceph-mgr daemons were running 12.2.7, while 30 of the 50 total ceph-osd daemons were also on 12.2.7 when the remaining 20 ceph-osd daemons (on 10.2.10) crashed. Thanks, -- Kenneth Van Alstyne
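
For tracking a mixed-version cluster mid-upgrade, the running daemon versions can be listed once the monitors are on Luminous; for example:

    # versions of every running daemon, grouped by type
    ceph versions

    # ask each OSD directly which build it is running
    ceph tell osd.* version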

Re: [ceph-users] OSD Crash When Upgrading from Jewel to Luminous?

2018-08-21 Thread Kenneth Van Alstyne
…duplicate the issue in a lab, but I highly suspect this is what happened. Thanks, -- Kenneth Van Alstyne

[ceph-users] Anyone tested Samsung 860 DCT SSDs?

2018-10-12 Thread Kenneth Van Alstyne
Cephers: As the subject suggests, has anyone tested Samsung 860 DCT SSDs? They are really inexpensive, and we are considering buying some to test. Thanks, -- Kenneth Van Alstyne
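
If it helps anyone evaluating these drives, the test most commonly cited on this list for journal/WAL suitability is a single-job 4k synchronous write with fio; a sketch (destructive to the target device, /dev/sdX is a placeholder):

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=sync-write-test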

Re: [ceph-users] Anyone tested Samsung 860 DCT SSDs?

2018-10-12 Thread Kenneth Van Alstyne
; "Seagate Nytro 1551 DuraWrite 3DWPD Mainstream Endurance 960GB, SATA"? > Seems really cheap too and has TBW 5.25PB. Anybody tested that? What > about (RBD) performance? > > Cheers > Corin > > On Fri, 2018-10-12 at 13:53 +, Kenneth Van Alstyne wrote: >> C

[ceph-users] Image has watchers, but cannot determine why

2019-01-09 Thread Kenneth Van Alstyne
…| grep -i qemu | grep -i rbd | grep -i 145
# ceph version
ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
#
Thanks, -- Kenneth Van Alstyne
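
For anyone else chasing a stuck watcher, the usual checks look roughly like the following (the pool/image names, image ID, and client address are placeholders):

    # list the current watchers librbd knows about
    rbd status rbd/<image>

    # or read them straight off the image's header object (format 2 images)
    rados -p rbd listwatchers rbd_header.<image_id>

    # a stale watcher can be blacklisted so the watch times out and the image frees up
    ceph osd blacklist add <client_addr>:0/<nonce>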

Re: [ceph-users] Image has watchers, but cannot determine why

2019-01-10 Thread Kenneth Van Alstyne
…the watcher did indeed go away and I was able to remove the images. Very, very strange. (But situation solved… except I don’t know what the cause was, really.) Thanks, -- Kenneth Van Alstyne

[ceph-users] RBD Mirror Proxy Support?

2019-01-11 Thread Kenneth Van Alstyne
…Has anything been done in this regard? If not, is my best bet perhaps a tertiary cluster that both can reach and do one-way replication to? Thanks, -- Kenneth Van Alstyne
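
For what it's worth, one-way replication only requires the rbd-mirror daemon on the receiving side; a rough sketch of the pool setup on the third cluster (pool, peer, and cluster names are illustrative):

    # on the receiving cluster ("Cluster C"), enable pool-mode mirroring and add each source as a peer
    rbd mirror pool enable rbd pool
    rbd mirror pool peer add rbd client.rbd-mirror@site-a
    rbd mirror pool peer add rbd client.rbd-mirror@site-b

    # then run rbd-mirror on Cluster C with the site-a/site-b conf and keyrings available locally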

Re: [ceph-users] RBD Mirror Proxy Support?

2019-01-14 Thread Kenneth Van Alstyne
…build out a test lab to see how that would work for us. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] RBD Mirror Proxy Support?

2019-01-14 Thread Kenneth Van Alstyne
In this case, I’m imagining clusters A/B both having write access to a third “Cluster C”. So A/B -> C rather than A -> C -> B, B -> C -> A, or A -> B -> C. I admit that, in the event I need to replicate back to either primary cluster, there may be challenges. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] RBD Mirror Proxy Support?

2019-01-14 Thread Kenneth Van Alstyne
D’oh! I was hoping that the destination pools could have unique names, regardless of the source pool name. Thanks, -- Kenneth Van Alstyne

[ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Kenneth Van Alstyne
…B 0.02
POOLS:
    NAME    ID    USED     %USED    MAX AVAIL    OBJECTS
    rbd     1     133 B    0        83 GiB       10
# df -h /var/lib/ceph/osd/cephfs-0/
Filesystem             Size  Used  Avail  Use%  Mounted on
10.0.0.1:/ceph-remote  87G   12M   87G    1%    /var/lib/ce…

Re: [ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Kenneth Van Alstyne
…have that. The single OSD is simply due to the underlying cluster already being either erasure coded or replicated. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Kenneth Van Alstyne
…mind — I just didn’t want to risk impacting the underlying cluster too much or hit any other caveats that perhaps someone else has run into before. I doubt many people have tried CephFS as a Filestore OSD backend, since in general it seems like a pretty silly idea. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Kenneth Van Alstyne
I’d actually rather it not be an extra cluster, but can the destination pool name be different? If not, I have conflicting image names in the “rbd” pool on either side. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Nautilus upgrade but older releases reported by features

2019-03-27 Thread Kenneth Van Alstyne
…705696d4fe619afc) nautilus (stable)": 1
    },
    "mds": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)": 1
    },
    "rgw": {
        "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (st…
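
For anyone comparing the two views, the JSON above is the daemon side; clients are reported separately, and kernel clients are labeled by feature bits, so they can show an older release name even when fully patched. For example:

    # versions of the daemon binaries actually running
    ceph versions

    # feature bits advertised by connected clients and daemons
    ceph features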

Re: [ceph-users] VM management setup

2019-04-05 Thread Kenneth Van Alstyne
…5.6.1 or wait for 5.8.1 to be released, since the issues have already been fixed upstream. Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Data distribution question

2019-04-30 Thread Kenneth Van Alstyne
Shain: Have you looked into doing a “ceph osd reweight-by-utilization” by chance? I’ve found that data distribution is rarely perfect, and on aging clusters I always have to do this periodically. Thanks, -- Kenneth Van Alstyne
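
For reference, it is worth dry-running the reweight first; a minimal example (120 is the default overload threshold, i.e. only OSDs more than 20% above the mean utilization are adjusted):

    # show what would be reweighted without changing anything
    ceph osd test-reweight-by-utilization 120

    # apply the same adjustment for real
    ceph osd reweight-by-utilization 120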

Re: [ceph-users] Data distribution question

2019-04-30 Thread Kenneth Van Alstyne
Unfortunately it looks like he’s still on Luminous, but if upgrading is an option, the tooling is indeed significantly better. If I recall correctly, at least the balancer module is available in Luminous. Thanks, -- Kenneth Van Alstyne
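
If upgrading does happen, enabling the mgr balancer on Luminous looks roughly like this (crush-compat mode shown, since upmap additionally requires all clients to be Luminous-capable):

    ceph mgr module enable balancer
    ceph balancer mode crush-compat
    ceph balancer on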

[ceph-users] Ceph capacity versus pool replicated size discrepancy?

2019-08-13 Thread Kenneth Van Alstyne
…done
rbd size: 3
data size: 3
metadata size: 3
.rgw.root size: 3
default.rgw.control size: 3
default.rgw.meta size: 3
default.rgw.log size: 3
default.rgw.buckets.index size: 3
default.rgw.buckets.data size: 3
default.rgw.buckets.non-ec size: 3
Thanks, -- Kenneth Van Alstyne
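
The listing above reads like the output of a small shell loop over the pools; one way to produce it (not necessarily the exact command used):

    # print each pool's replicated size, e.g. "rbd size: 3"
    for pool in $(ceph osd pool ls); do
        echo -n "${pool} "
        ceph osd pool get "${pool}" size
    done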

Re: [ceph-users] Ceph capacity versus pool replicated size discrepancy?

2019-08-14 Thread Kenneth Van Alstyne
Got it! I can calculate individual clone usage using “rbd du”, but does anything exist to show total clone usage across the pool? Otherwise it looks like phantom space is just missing. Thanks, -- Kenneth Van Alstyne
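
One pool-wide view worth trying is running “rbd du” against the pool rather than a single image; it walks every image and snapshot and prints a TOTAL row at the end (fast and accurate only when the fast-diff feature is enabled). Pool name is illustrative:

    # per-image and per-snapshot usage for the whole pool, plus a TOTAL row
    rbd du -p rbd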

[ceph-users] Panic in kernel CephFS client after kernel update

2019-10-01 Thread Kenneth Van Alstyne
…crashed machine and, to avoid attaching an image, I’ll link to where they are: http://kvanals.kvanals.org/.ceph_kernel_panic_images/ Am I way off base, or has anyone else run into this issue? Thanks, -- Kenneth Van Alstyne

Re: [ceph-users] Panic in kernel CephFS client after kernel update

2019-10-05 Thread Kenneth Van Alstyne
Thanks! I’ll remove my patch from my local build of the 4.19 kernel and upgrade to 4.19.77. Appreciate the quick fix. Thanks, -- Kenneth Van Alstyne On Oct 5, 2019, at 7:29 AM, Ilya Dryomov…