Re: [ceph-users] Enabling Jumbo Frames on ceph cluster

2017-08-11 Thread Joe Comeau
I'm no expert, but another test might be iperf - watch your CPU utilization while doing it. You can set iperf to run between a couple of monitors and OSD servers. Try setting it at 1500 or your switch's stock MTU, then put the servers at 9000 and the switch at 9128 (for packet
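A minimal sketch of that kind of test (the hostname and interface name are placeholders, not from the thread):

  # on one node, start an iperf server
  iperf -s
  # from another node, run the client for 30 seconds and watch CPU with top
  iperf -c osd-host-1 -t 30
  # raise the MTU on both ends (and the switch), then re-run to compare
  ip link set dev eth0 mtu 9000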

Re: [ceph-users] Some OSDs are down after Server reboot

2017-09-15 Thread Joe Comeau
We're running journals on NVMe as well - SLES. Before rebooting, try deleting the links here: /etc/systemd/system/ceph-osd.target.wants/ If we delete first, it boots OK; if we don't delete, the disks sometimes don't come up and we have to ceph-disk activate them all. HTH Thanks Joe >>> David
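A rough sketch of that sequence (the ceph-osd@*.service glob is an assumption about the link names; check your own directory first):

  # before rebooting, remove the per-OSD unit links
  rm /etc/systemd/system/ceph-osd.target.wants/ceph-osd@*.service
  # after boot, if OSDs still haven't come up, activate them by hand
  ceph-disk activate-all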

Re: [ceph-users] Performance, and how much wiggle room there is with tunables

2017-11-10 Thread Joe Comeau
Hi Not too sure what you are looking for, but these are the kind of performance numbers we are getting on our Jewel 10.2 install. We have tweaked things up a bit to get better write performance. All writes using fio - libaio, with a 2-minute warm-up and a 10-minute run. 6-node cluster - spinning disk with
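A minimal fio sketch of that kind of run (target file, block size, and queue depth are placeholders, not Joe's actual job file):

  fio --name=writetest --ioengine=libaio --direct=1 --rw=write \
      --bs=4M --iodepth=16 --size=10G --filename=/mnt/test/fio.dat \
      --ramp_time=120 --runtime=600 --time_based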

Re: [ceph-users] Stop scrubbing

2018-06-06 Thread Joe Comeau
When I am upgrading from filestore to bluestore, or doing any other server maintenance for a short time (i.e. high I/O while rebuilding):
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub
when finished:
ceph osd unset noscrub
ceph osd unset nodeep-scrub
ceph osd unset noout
again
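If you want to confirm the flags took before starting, one way (output format varies a little by release):

  ceph osd dump | grep flags
  # e.g.: flags noout,noscrub,nodeep-scrub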

Re: [ceph-users] Uneven OSD data distribution

2018-02-15 Thread Joe Comeau
Hi What are you using your cluster for? Are the pools RBD by chance, with images for VMs of some sort? Did you add any OSDs after the pools were created, or redeploy any? Thanks Joe >>> Osama Hasebou 2/15/2018 6:14 AM >>> Hi All, I am seeing a lot of uneven
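One quick way to quantify that kind of imbalance (plain ceph CLI, nothing cluster-specific):

  # per-OSD utilization, variance, and PG counts, laid out by CRUSH tree
  ceph osd df tree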

Re: [ceph-users] Bluestore Hardwaresetup

2018-02-16 Thread Joe Comeau
I have a question about block.db and block.wal: how big should they be? Relative to drive size or SSD size? Thanks Joe >>> Michel Raabe 2/16/2018 9:12 AM >>> Hi Peter, On 02/15/18 @ 19:44, Jan Peters wrote: > I want to evaluate ceph with bluestore, so I need some
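For what it's worth, the upstream BlueStore docs have suggested block.db on the order of a few percent of the data device (4% is the commonly quoted figure), with block.wal much smaller. A sketch of carving both out explicitly (device names are placeholders):

  ceph-volume lvm create --bluestore --data /dev/sdb \
      --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2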

Re: [ceph-users] cephfs speed

2018-08-31 Thread Joe Comeau
Are you using bluestore OSDs? If so, my thought process on this is that what we are having an issue with is caching and bluestore - see the thread on bluestore caching, "Re: [ceph-users] Best practices for allocating memory to bluestore cache". Before, when we were on Jewel and filestore, we could get a
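For reference, the knobs that caching thread revolves around look like this in ceph.conf (the values here are illustrative, not a recommendation; Nautilus and later largely supersede these with osd_memory_target):

  [osd]
  # per-OSD BlueStore cache, in bytes
  bluestore_cache_size_hdd = 3221225472
  bluestore_cache_size_ssd = 6442450944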

Re: [ceph-users] cephfs speed

2018-09-01 Thread Joe Comeau
…rs so from the thread, but I thought it would be good to confirm that before digging in further. David Byte, Sr. Technology Strategist, SUSE Enterprise Linux / SUSE Enterprise Storage, Alliances and SUSE Embedded, db...@suse.com, 918.528.4422 From: ceph-users on behalf of Joe Comeau, Date: Friday, Au

Re: [ceph-users] Disk write cache - safe?

2018-03-15 Thread Joe Comeau
Hi We're using SUSE Enterprise Storage - Ceph. And have Dell 730xd and expansion trays with 8 TB disks. We initially had the controller cache turned off as per the ceph documentation (so configured as JBOD in the Dell BIOS). We reconfigured as RAID0 and use the cache now for both internal and expansion
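If you also want to check the drives' own volatile write cache (separate from the controller cache), hdparm is one way on directly attached drives (device name is a placeholder):

  # query the drive's write cache state
  hdparm -W /dev/sdb
  # disable (0) or enable (1) it
  hdparm -W0 /dev/sdb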

Re: [ceph-users] Disk write cache - safe?

2018-03-15 Thread Joe Comeau
After reading Reed's comments about losing power to his data center, I think he brings up a lot of good points. So take the Dell advice I linked to into consideration with your own environment. We also have 8 TB disks with Intel P3700 for journal. Our large UPS and new generators, which are tested

Re: [ceph-users] DELL R630 and Optane NVME

2018-10-11 Thread Joe Comeau
I'm curious about Optane too. We are running Dell 730xd & 740xd with expansion chassis: 12 x 8 TB disks in the server and 12 x 8 TB in the expansion unit, plus 2 x 2 TB Intel NVMe for caching in the servers (12 disks cached, with wal/db on the opposite NVMe from the Intel cache - so interleaved). Intel cache running
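A sketch of that interleaving with ceph-volume (device names are placeholders; the thread doesn't give the exact commands):

  # disks cached on nvme0 put their wal/db on nvme1, and vice versa
  ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme1n1p1
  ceph-volume lvm create --bluestore --data /dev/sdd --block.db /dev/nvme0n1p1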

Re: [ceph-users] Benchmark performance when using SSD as the journal

2018-11-14 Thread Joe Comeau
Hi Dave Have you looked at the Intel P4600 vs the P4500? The P4600 has better random writes and a better drive-writes-per-day rating, I believe. Thanks Joe >>> 11/13/2018 8:45 PM >>> Thanks Merrick! I checked with the Intel spec [1]; the performance Intel quotes is: · Sequential Read (up to) 500

Re: [ceph-users] cephfs speed

2018-09-12 Thread Joe Comeau
>>> "Joe Comeau" 9/1/2018 8:21 PM >>> Yes, I was referring to Windows Explorer copies, as that is what users typically use, but also with Windows robocopy set to 32 threads. The difference is we may go from a peak of 300 MB/s to
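For reference, a 32-thread robocopy invocation looks like this (source and destination paths are placeholders; /MT:32 is the thread flag Joe mentions):

  robocopy \\fileserver\share X:\dest /E /MT:32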

[ceph-users] scrub start hour = heavy load

2019-06-14 Thread Joe Comeau
I wonder if anyone has dealt with deep-scrubbing being really heavy when it kicks off at the defined start time? I currently have a script that kicks off and runs deep-scrub every 10 minutes on the oldest un-deep-scrubbed PG. This script runs 24/7 regardless of when deep scrub is scheduled. My
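A minimal sketch of that kind of script - not Joe's actual one - assuming jq is available (on Luminous/Mimic the JSON dump has a top-level pg_stats array; newer releases nest it under pg_map):

  #!/bin/bash
  # deep-scrub the PG with the oldest deep-scrub stamp, once per invocation
  pg=$(ceph pg dump --format json 2>/dev/null |
       jq -r '.pg_stats | sort_by(.last_deep_scrub_stamp) | .[0].pgid')
  ceph pg deep-scrub "$pg"

Run it from cron every 10 minutes and deep scrubs spread around the clock instead of piling up at the configured start hour.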

Re: [ceph-users] best pool usage for vmware backing

2019-12-05 Thread Joe Comeau
Just a note that we use SUSE for our Ceph/VMware system. This is the general Ceph doc for VMware/iSCSI: https://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/ This is the SUSE doc: https://documentation.suse.com/ses/6/html/ses-all/cha-ceph-iscsi.html They differ. I'll tell you what