Sorry about the delay. We did confirm the jumbo frames. We dropped iSCSI
and switched to NFS on FreeNAS before I got your reply. Seems to have
gotten rid of any hiccups. And we like being able to see the files
better than with iSCSI anyway.
So we decided that we were very confident it's
Once upon a time, Karli Sjöberg said:
> Lastly network, are you sure you activated jumbo frames, all the way
> from the storage to the hosts? That makes a huge difference on 10 Gb
BTW: just wanted to mention that a good way to check you are really
getting jumbo frames is to use "ss".
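For anyone else checking: besides "ss -ti" (which shows the negotiated mss on an
established connection), a quick end-to-end test is a do-not-fragment ping sized
for a 9000 MTU. Interface and host names below are just placeholders:

    ip link show dev eth0                  # confirm mtu 9000 on the storage NIC
    ping -M do -s 8972 storage-host-fqdn   # 8972 + 28 bytes of headers = 9000; must not fragment
    ss -ti dst storage-host-ip             # look for a jumbo-sized mss on an open connection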
On 2019-03-13 05:20, Drew Rash wrote:
> Pictures and speeds are the latest. Which seems to be the best
> performance we've ever gotten so far. Still seems like the hardware is
> sitting idling by not doing much after an initial burst.
> Took a picture of a file copy using the latest setup. You
OK, thanks. I'd also asked for the gluster version you're running. Could you
share that information as well?
On Thu, Mar 7, 2019 at 9:38 PM Drew Rash wrote:
> Here is the output for our ssd gluster which exhibits the same issue as
> the hdd glusters.
> However, I can replicate the
Pictures and speeds below are the latest, which seems to be the best performance
we've ever gotten so far. Still seems like the hardware is sitting idle, not
doing much after an initial burst.
Took a picture of a file copy using the latest setup. You can see it
transfer like 25% of a 7gig file at
What is the host RAM size, and what are the settings for vm.dirty_ratio and
vm.dirty_background_ratio on those hosts?
What about your iSCSI target?
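(For reference, in case it saves a round trip: those host settings can be read
and temporarily changed like this; the values below are only illustrative, not a
recommendation.)

    sysctl vm.dirty_ratio vm.dirty_background_ratio   # read current values
    sysctl -w vm.dirty_background_ratio=5              # illustrative only
    sysctl -w vm.dirty_ratio=10                        # illustrative only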
On Mar 11, 2019 23:51, Drew Rash wrote:
> Added the disable:false, removed the gluster, re-added using nfs.
Added the disable:false, removed the gluster, re-added using nfs.
Performance is still in the low 10s of MB/s, plus or minus 5.
Ran the showmount -e "" and it displayed the mount.
Trying right now to re-mount using gluster with a negative-timeout=1 option.
We converted one of our 4 boxes to FreeNAS, took 4
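For reference, the fuse remount with that negative-timeout option looks roughly
like this (server, volume, and mount point are placeholders):

    mount -t glusterfs -o negative-timeout=1 gluster-node-fqdn:/volname /mnt/volname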
For the test, change the gluster parameter nfs.disable to false.
Something like: gluster volume set volname nfs.disable false
Then use showmount -e gluster-node-fqdn
Note: NFS might not be allowed in the firewall.
Then add this NFS domain (don't forget to remove the gluster storage
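A minimal sketch of those steps, assuming the volume is called "volname" and
that this gluster version still ships the built-in gNFS server (host and volume
names are placeholders):

    gluster volume set volname nfs.disable false    # turn the built-in NFS export back on
    showmount -e gluster-node-fqdn                  # the volume should show up as an export
    firewall-cmd --add-service=nfs --permanent && firewall-cmd --reload   # if the firewall blocks it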
Or do I just set it up as a glusterfs just how I have it and change that
On Fri, Mar 8, 2019 at 3:01 PM Drew Rash wrote:
> I'm having trouble finding exactly how to setup a gluster with oVirt to
> talk NFS.
> Is there a walk through of some kinda?
> On Thu, Mar 7, 2019 at 2:34 PM
I'm having trouble finding exactly how to set up gluster with oVirt to talk NFS.
Is there a walkthrough of some kind?
On Thu, Mar 7, 2019 at 2:34 PM Strahil wrote:
> Hi Drew,
> During my tests on gluster v3, using 'disable.nfs: false' and using that
> as ovirt storage brought maximum
During my tests on gluster v3, setting 'nfs.disable' to false and using that as
oVirt storage brought maximum performance on my 1 Gbit/s network.
Can you try using NFS? If it works well, you can disable gluster's built-in NFS
and get NFS-Ganesha up and running.
It seems that FUSE in gluster v5 is
Here is the output for our ssd gluster, which exhibits the same issue as the
hdd glusters.
However, I can replicate the issue on an 8TB WD Gold disk mounted over NFS as
well (with the gluster part removed), which is the reason I'm on the oVirt
site. I can start a file copy that writes at max speed, then
So from the profile, it appears the XATTROPs and FINODELKs are way higher
than the number of WRITEs:
[gluster volume profile excerpt: per-FOP %-latency, Avg-latency, Min-Latency,
Max-Latency, and No. of calls; the full table did not survive in the archive]
Could you share the following pieces of information to begin with -
1. output of `gluster volume info $AFFECTED_VOLUME_NAME`
2. glusterfs version you're running
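(Roughly, that would be something like the following; the volume name is
whatever volume is affected:)

    gluster volume info affected-volname    # volume options and layout
    glusterfs --version                     # installed glusterfs version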
On Sat, Mar 2, 2019 at 3:38 AM Drew R wrote:
> Saw some people asking for profile info. So I had started a migration
Another option is to try nfs-ganesha - in my case I can reach 80MB/s sequential
writes in the VM.
Strahil Nikolov

On Mar 1, 2019 00:15, Jayme wrote:
> Also one more thing, did you make sure to setup the 10Gb gluster network in
> ovirt and set migration and vm traffic to use the
Saw some people asking for profile info. So I had started a migration from a
6TB WD Gold 2+1 arbiter replicated gluster to a 1TB Samsung SSD 2+1 replicated
gluster, and it's been running a while for a 100GB thin-provisioned file with
like 28GB actually used. Here is the profile info. I started the
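For context, profile data like that is normally collected with something along
these lines (volume name is a placeholder):

    gluster volume profile myvol start    # begin gathering per-FOP latency stats
    gluster volume profile myvol info     # dump the cumulative stats
    gluster volume profile myvol stop     # stop gathering when done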
This kind of helped. It helped on a vm with a single disk nfs mount using
I tried it on a Windows Server 2016 VM and got weird non-responsiveness happening a lot.
Kind of surprised the Windows 10 box actually got full speed, 100 MB/s
basically. Way better than 5 MB/s.
Now if I could fix
Also, I just noticed that the Windows Resource Monitor response times show zero
(0) across the board now with the viodiskcache = writeback setting. Not sure
if that's accurate, but it's a notable difference to go with the notable speed boost.
Yes, it's all on the 10Gb. The only way out of it is a single 1Gb going to
the rest of the servers/clients and internet.
On Thu, Feb 28, 2019 at 4:15 PM Jayme wrote:
> Also one more thing, did you make sure to setup the 10Gb gluster network
> in ovirt and set migration and vm traffic to use
I set my account name to Drew R earlier and it seems to have altered the course
of this thread. But anyway. This guy Strahil replied to me (not showing below)
and said to try this:
Strahil: "Can you try setting the viodiskcache custom property to write back ?
My coworker says he did the optimize thing a while back. I'll see if he
wants to click it again tomorrow sometime. If it has a positive effect I'll
be sure to post it.
On Thu, Feb 28, 2019 at 4:12 PM Jayme wrote:
> Check volumes in Ovirt admin and make sure the optimize volume for cm
Thanks, adding your reply to the web page. Somehow changing my name has
created a rift between us.
On Thu, Feb 28, 2019 at 4:01 PM Strahil wrote:
> Can you try setting the viodiskcache custom property to write back ?
> Best Regards,
Also one more thing, did you make sure to setup the 10Gb gluster network in
ovirt and set migration and vm traffic to use the gluster network?
On Thu, Feb 28, 2019 at 6:11 PM Jayme wrote:
> Check volumes in Ovirt admin and make sure the optimize volume for cm
> storage is selected
> I have a
Check volumes in oVirt admin and make sure the optimize volume for vm
storage option is selected.
I have a three-node oVirt HCI with SSD gluster-backed storage and a 10Gb
storage network, and I write at around 50-60 megabytes per second from
within VMs. Before I used the optimize for VM storage option it was about
Can you try setting the viodiskcache custom property to write back ?
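In case the viodiskcache key isn't registered on your engine yet, it usually has
to be added to the user-defined VM properties first and then set per VM under
Edit VM -> Custom Properties. A sketch, assuming oVirt 4.2 and no other custom
properties already defined (the --cver value and the regex are assumptions, and
-s overwrites the existing list):

    # on the engine host
    engine-config -s UserDefinedVMProperties='viodiskcache=^(none|writeback|writethrough)$' --cver=4.2
    systemctl restart ovirt-engine
    # then set viodiskcache=writeback in the VM's Custom Properties and restart the VM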
I expect VMs to be able to write at 200 MB/s or 500 MB/s to the hdd/ssd when using
an NFS mount or gluster from inside a VM.
But heck, it's not even 10% (20 MB/s or 50 MB/s); it's closer to 2.5%-5% of the speed
of the drives.
I expect Windows services to not time out due to slow hard drives while booting.
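For measuring those rates from inside a Linux VM, a quick sequential-write check
that bypasses the guest page cache looks something like this (path and size are
arbitrary):

    dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct status=progress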
Can you explain your expectations for where you are trying to get?
oVirt uses KVM under the hood and performs very well for many
organizations. I am sure it's just a configuration thing.
On Thu, Feb 28, 2019 at 12:48 PM wrote:
> Hi everybody, my coworker and I have some decent hardware that