[ovirt-users] ovirt glusterfs

2020-11-13 Thread garcialiang . anne
Hello,
I am trying to run the Gluster deployment and I get the following error message:
failed: [llrovirttest02.in2p3.fr] (item={u'path': u'/gluster_bricks/engine', 
u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'}) => 
{"ansible_loop_var": "item", "changed": false, "item": {"lvname": 
"gluster_lv_engine", "path": "/gluster_bricks/engine", "vgname": 
"gluster_vg_sdb"}, "msg": "SELinux is disabled on this host."}

Could you help me?

Thanks,

Anne Garcia


[ovirt-users] ovirt glusterfs

2020-11-02 Thread garcialiang . anne
Hello,

I have a problem with the hyperconverged "Configure Gluster storage and oVirt hosted engine" deployment. I get the following error message:

failed: [node2.x.fr] (item={u'key': u'gluster_vg_sdb', u'value': 
[{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]}) => 
{"ansible_loop_var": "item", "changed": false, "err": "  Device /dev/sdb 
excluded by a filter.\n", "item": {"key": "gluster_vg_sdb", "value": 
[{"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}]}, "msg": "Creating 
physical volume '/dev/sdb' failed", "rc": 5}
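For what it's worth, this error usually means LVM is rejecting /dev/sdb, either because of a filter/global_filter in lvm.conf or because the disk still carries an old partition table or filesystem signature. A rough sketch of how one might check (only wipe the disk if it really holds no data):

grep -E '^[[:space:]]*(global_)?filter' /etc/lvm/lvm.conf
lsblk -f /dev/sdb          # show any existing filesystem / RAID signatures
wipefs -n /dev/sdb         # dry run: list signatures without erasing anything
# wipefs -a /dev/sdb       # destructive: clear the signatures, then re-run the deployment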

Could you help me figure out where the problem is, please?

Thanks,

Anne Garcia


[ovirt-users] Ovirt Glusterfs

2019-02-21 Thread suporte
Hi, 

How can I get the best performance when using GlusterFS as an oVirt Storage Domain?

Thanks 

José 

-- 

Jose Ferradeira 
http://www.logicworks.pt 


[ovirt-users] oVirt + GlusterFS over FCoE

2019-02-07 Thread Николаев Алексей
Hi community! Is it possible to use oVirt with GlusterFS over FCoE, following the instructions at https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/html-single/administration_guide/#How_to_Set_Up_RHVM_to_Use_FCoE ?


Re: [ovirt-users] oVIRT / GlusterFS / Data (HA)

2017-01-24 Thread David Gossage
On Tue, Jan 24, 2017 at 4:56 PM, Devin Acosta 
wrote:

>
> I have created an oVIRT 4.0.6 Cluster, it has 2 Compute nodes, and 3
> Dedicated Gluster nodes. The Gluster nodes are configured correctly and
> they have the replica set to 3. I'm trying to figure out when I go to
> attach the Data (Master) domain to the oVIRT manager what is the best
> method to do so in the configuration?  I initially set the mount point to
> be like: gluster01-int:/data, then set in the mount options
> "backup-volfile-servers=gluster02-int:/data,gluster03-int:/data", so I
> understand that will choose another host if the 1st one is down but if i
> was to reboot the 1st Gluster node would that provide HA for my Data
> domain?
>

If you are mounting with gluster fuse, then gluster itself takes care of keeping the mount available if one node goes down. The backup-volfile-servers setting only matters at mount time: if the main host in the config (here gluster01-int) is down when a host first tries to mount the volume, it has two other hosts in the cluster it can contact for the volume file, which it otherwise would not know about until after the initial mount was made and it became aware of the other nodes.
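For what it's worth, backup-volfile-servers takes a colon-separated list of extra hostnames only (no volume path), so a manual fuse mount equivalent to what oVirt does would look roughly like this (hostnames taken from the post above, target directory just an example):

mount -t glusterfs -o backup-volfile-servers=gluster02-int:gluster03-int gluster01-int:/data /mnt/data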


> I also configured ctdb with a floating-ip address that floats between all
> 3 Gluster nodes, and I am wondering if I should be pointing the mount to
> that VIP? What is the best solution with dealing with Gluster and keeping
> your mount HA?
>

If you use gluster fuse, I don't think you need the floating IP. That would be more useful if you mounted via NFS.


>
> --
>
> Devin Acosta
> Red Hat Certified Architect, LinuxStack
> 602-354-1220 || de...@linuxguru.co
>


[ovirt-users] oVIRT / GlusterFS / Data (HA)

2017-01-24 Thread Devin Acosta
I have created an oVIRT 4.0.6 Cluster, it has 2 Compute nodes, and 3
Dedicated Gluster nodes. The Gluster nodes are configured correctly and
they have the replica set to 3. I'm trying to figure out when I go to
attach the Data (Master) domain to the oVIRT manager what is the best
method to do so in the configuration?  I initially set the mount point to
be like: gluster01-int:/data, then set in the mount options
"backup-volfile-servers=gluster02-int:/data,gluster03-int:/data", so I
understand that will choose another host if the 1st one is down but if i
was to reboot the 1st Gluster node would that provide HA for my Data
domain?

I also configured ctdb with a floating-ip address that floats between all 3
Gluster nodes, and I am wondering if I should be pointing the mount to that
VIP? What is the best solution with dealing with Gluster and keeping your
mount HA?

-- 

Devin Acosta
Red Hat Certified Architect, LinuxStack
602-354-1220 || de...@linuxguru.co


Re: [ovirt-users] ovirt glusterfs performance

2016-04-12 Thread Roderick Mooi
Hi

> It is not removed. Can you try 'gluster volume set volname cluster.eager-lock 
> enable`?

This works. BTW by default this setting is “on”. What’s the difference between 
“on” and “enable”?

Thanks for the clarification.

Regards,

Roderick

> On 06 Apr 2016, at 10:56 AM, Ravishankar N  wrote:
> 
> On 04/06/2016 02:08 PM, Roderick Mooi wrote:
>> Hi Ravi and colleagues
>> 
>> (apologies for hijacking this thread but I’m not sure where else to report 
>> this (and it is related).)
>> 
>> With gluster 3.7.10, running
>> #gluster volume set  group virt
>> fails with:
>> volume set: failed: option : eager-lock does not exist
>> Did you mean eager-lock?
>> 
>> I had to remove the eager-lock setting from /var/lib/glusterd/groups/virt to 
>> get this to work. It seems like setting eager-lock has been removed from 
>> latest gluster. Is this correct? Either way, is there anything else I should 
>> do?
> 
> It is not removed. Can you try 'gluster volume set volname cluster.eager-lock 
> enable`?
> I think the disperse (EC) translator introduced a `disperse.eager-lock` which 
> is why you would need to mention entire volume option name to avoid ambiguity.
> We probably need to fix the virt profile setting to include the entire name. 
> By the way 'gluster volume set help` should give you the list of all options.
> 
> -Ravi
> 
>> 
>> Cheers,
>> 
>> Roderick
>> 
>>> On 12 Feb 2016, at 6:18 AM, Ravishankar N wrote:
>>> 
>>> Hi Bill,
>>> Can you enable virt-profile setting for your volume and see if that helps? 
>>> You need to enable this optimization when you create the volume using 
>>> ovrit, or use the following command for an existing volume:
>>> 
>>> #gluster volume set  group virt
>>> 
>>> -Ravi
>>> 
>>> 
>>> On 02/12/2016 05:22 AM, Bill James wrote:
 My apologies, I'm showing how much of a noob I am.
 Ignore last direct to gluster numbers, as that wasn't really glusterfs.
 
 
 [root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 /mnt/tmp/
 [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M 
 count=1000 oflag=direct
 1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s
 
 That's more how I expected, it is pointing to glusterfs performance.
 
 
 
 On 02/11/2016 03:27 PM, Bill James wrote:
> don't know if it helps, but I ran a few more tests, all from the same 
> hardware node.
> 
> The VM:
> [root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
> count=1000 oflag=direct
> 1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
> 
> Writing directly to gluster volume:
> [root@ovirt2 test ~]# time dd if=/dev/zero 
> of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
> 1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
> 
> 
> Writing to NFS volume:
> [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile 
> bs=1M count=1000 oflag=direct
> 1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
> 
> NFS & Gluster are using the same interface. Tests were not run at same 
> time.
> 
> This would suggest my problem isn't glusterfs, but the VM performance.
> 
> 
> 
> On 02/11/2016 03:13 PM, Bill James wrote:
>> xml attached. 
>> 
>> 
>> On 02/11/2016 12:28 PM, Nir Soffer wrote: 
>>> On Thu, Feb 11, 2016 at 8:27 PM, Bill James wrote:
 thank you for the reply. 
 
 We setup gluster using the names associated with  NIC 2 IP. 
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
 
 That's NIC 2's IP. 
 Using 'iftop -i eno2 -L 5 -t' : 
 
 dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct 
 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s 
>>> Can you share the xml of this vm? You can find it in vdsm log, 
>>> at the time you start the vm. 
>>> 
>>> Or you can do (on the host): 
>>> 
>>> # virsh 
>>> virsh # list 
>>> (username: vdsm@ovirt password: shibboleth) 
>>> virsh # dumpxml vm-id 
>>> 
 Peak rate (sent/received/total):  281Mb 5.36Mb 
 282Mb 
 Cumulative (sent/received/total):1.96GB 14.6MB 
 1.97GB 
 
 gluster volume info gv1: 
   Options Reconfigured: 
   performance.write-behind-window-size: 4MB 
   performance.readdir-ahead: on 
   performance.cache-size: 1GB 
>>>

Re: [ovirt-users] ovirt glusterfs performance

2016-04-12 Thread Niels de Vos
On Tue, Apr 12, 2016 at 11:11:54AM +0200, Roderick Mooi wrote:
> Hi
> 
> > It is not removed. Can you try 'gluster volume set volname 
> > cluster.eager-lock enable`?
> 
> This works. BTW by default this setting is “on”

Thanks for reporting back!

> What’s the difference between “on” and “enable”?

Both are the same, you could also use "yes", "true" and possibly others.

Cheers,
Niels


> 
> Thanks for the clarification.
> 
> Regards,
> 
> Roderick
> 
> > On 06 Apr 2016, at 10:56 AM, Ravishankar N  wrote:
> > 
> > On 04/06/2016 02:08 PM, Roderick Mooi wrote:
> >> Hi Ravi and colleagues
> >> 
> >> (apologies for hijacking this thread but I’m not sure where else to report 
> >> this (and it is related).)
> >> 
> >> With gluster 3.7.10, running
> >> #gluster volume set  group virt
> >> fails with:
> >> volume set: failed: option : eager-lock does not exist
> >> Did you mean eager-lock?
> >> 
> >> I had to remove the eager-lock setting from /var/lib/glusterd/groups/virt 
> >> to get this to work. It seems like setting eager-lock has been removed 
> >> from latest gluster. Is this correct? Either way, is there anything else I 
> >> should do?
> > 
> > It is not removed. Can you try 'gluster volume set volname 
> > cluster.eager-lock enable`?
> > I think the disperse (EC) translator introduced a `disperse.eager-lock` 
> > which is why you would need to mention entire volume option name to avoid 
> > ambiguity.
> > We probably need to fix the virt profile setting to include the entire 
> > name. By the way 'gluster volume set help` should give you the list of all 
> > options.
> > 
> > -Ravi
> > 
> >> 
> >> Cheers,
> >> 
> >> Roderick
> >> 
> >>> On 12 Feb 2016, at 6:18 AM, Ravishankar N  >>> > wrote:
> >>> 
> >>> Hi Bill,
> >>> Can you enable virt-profile setting for your volume and see if that 
> >>> helps? You need to enable this optimization when you create the volume 
> >>> using ovrit, or use the following command for an existing volume:
> >>> 
> >>> #gluster volume set  group virt
> >>> 
> >>> -Ravi
> >>> 
> >>> 
> >>> On 02/12/2016 05:22 AM, Bill James wrote:
>  My apologies, I'm showing how much of a noob I am.
>  Ignore last direct to gluster numbers, as that wasn't really glusterfs.
>  
>  
>  [root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com 
>  :/gv1 /mnt/tmp/
>  [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M 
>  count=1000 oflag=direct
>  1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s
>  
>  That's more how I expected, it is pointing to glusterfs performance.
>  
>  
>  
>  On 02/11/2016 03:27 PM, Bill James wrote:
> > don't know if it helps, but I ran a few more tests, all from the same 
> > hardware node.
> > 
> > The VM:
> > [root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
> > count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
> > 
> > Writing directly to gluster volume:
> > [root@ovirt2 test ~]# time dd if=/dev/zero 
> > of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
> > 
> > 
> > Writing to NFS volume:
> > [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile 
> > bs=1M count=1000 oflag=direct
> > 1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
> > 
> > NFS & Gluster are using the same interface. Tests were not run at same 
> > time.
> > 
> > This would suggest my problem isn't glusterfs, but the VM performance.
> > 
> > 
> > 
> > On 02/11/2016 03:13 PM, Bill James wrote:
> >> xml attached. 
> >> 
> >> 
> >> On 02/11/2016 12:28 PM, Nir Soffer wrote: 
> >>> On Thu, Feb 11, 2016 at 8:27 PM, Bill James  
> >>>  
> >>>  wrote: 
>  thank you for the reply. 
>  
>  We setup gluster using the names associated with  NIC 2 IP. 
>    Brick1: ovirt1-ks.test.j2noc.com 
>  :/gluster-store/brick1/gv1 
>    Brick2: ovirt2-ks.test.j2noc.com 
>  :/gluster-store/brick1/gv1 
>    Brick3: ovirt3-ks.test.j2noc.com 
>  :/gluster-store/brick1/gv1 
>  
>  That's NIC 2's IP. 
>  Using 'iftop -i eno2 -L 5 -t' : 
>  
>  dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct 
>  1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s 
> >>> Can you share the xml of this vm? You can find it in vdsm log, 
> >>> at the time you start the vm. 
> >>> 
> >>> Or you can do (on the host): 
> >>> 
> >>> # virsh 
> >>> virsh # list 
> >>> (username: vdsm@ovirt password: s

Re: [ovirt-users] ovirt glusterfs performance

2016-04-12 Thread Ravishankar N

On 04/12/2016 02:41 PM, Roderick Mooi wrote:

Hi

It is not removed. Can you try 'gluster volume set volname 
cluster.eager-lock enable`?


This works. BTW by default this setting is “on”. What’s the difference 
between “on” and “enable”?


Both are identical. You can use any of the booleans to achieve the same 
effect. {"1", "on", "yes", "true", "enable"} or {"0", "off", "no", 
"false", "disable"}
FYI, the patch http://review.gluster.org/#/c/13958/ to fix this issue 
should make it to glusterfs-3.7.11.

-Ravi


Thanks for the clarification.

Regards,

Roderick

On 06 Apr 2016, at 10:56 AM, Ravishankar N > wrote:


On 04/06/2016 02:08 PM, Roderick Mooi wrote:

Hi Ravi and colleagues

(apologies for hijacking this thread but I’m not sure where else to 
report this (and it is related).)


With gluster 3.7.10, running
#gluster volume set  group virt
fails with:
volume set: failed: option : eager-lock does not exist
Did you mean eager-lock?

I had to remove the eager-lock setting from 
/var/lib/glusterd/groups/virt to get this to work. It seems like 
setting eager-lock has been removed from latest gluster. Is this 
correct? Either way, is there anything else I should do?


It is not removed. Can you try 'gluster volume set volname 
cluster.eager-lock enable`?
I think the disperse (EC) translator introduced a 
`disperse.eager-lock` which is why you would need to mention entire 
volume option name to avoid ambiguity.
We probably need to fix the virt profile setting to include the 
entire name. By the way 'gluster volume set help` should give you the 
list of all options.


-Ravi



Cheers,

Roderick

On 12 Feb 2016, at 6:18 AM, Ravishankar N > wrote:


Hi Bill,
Can you enable virt-profile setting for your volume and see if that 
helps? You need to enable this optimization when you create the 
volume using ovrit, or use the following command for an existing 
volume:


#gluster volume set  group virt

-Ravi


On 02/12/2016 05:22 AM, Bill James wrote:

My apologies, I'm showing how much of a noob I am.
Ignore last direct to gluster numbers, as that wasn't really 
glusterfs.



[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com 
:/gv1 /mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 
bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s

That's more how I expected, it is pointing to glusterfs performance.



On 02/11/2016 03:27 PM, Bill James wrote:
don't know if it helps, but I ran a few more tests, all from the 
same hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at 
same time.


This would suggest my problem isn't glusterfs, but the VM 
performance.




On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 8:27 PM, Bill James  
wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com 
:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com 
:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com 
:/gluster-store/brick1/gv1


That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total): 281Mb 5.36Mb
282Mb
Cumulative (sent/received/total): 1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better 
throughput?




On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing 
not ste

Re: [ovirt-users] ovirt glusterfs performance

2016-04-06 Thread Roderick Mooi
Hi Ravi and colleagues

(apologies for hijacking this thread but I’m not sure where else to report this 
(and it is related).)

With gluster 3.7.10, running
#gluster volume set  group virt
fails with:
volume set: failed: option : eager-lock does not exist
Did you mean eager-lock?

I had to remove the eager-lock setting from /var/lib/glusterd/groups/virt to 
get this to work. It seems like setting eager-lock has been removed from latest 
gluster. Is this correct? Either way, is there anything else I should do?

Cheers,

Roderick

> On 12 Feb 2016, at 6:18 AM, Ravishankar N  wrote:
> 
> Hi Bill,
> Can you enable virt-profile setting for your volume and see if that helps? 
> You need to enable this optimization when you create the volume using ovrit, 
> or use the following command for an existing volume:
> 
> #gluster volume set  group virt
> 
> -Ravi
> 
> 
> On 02/12/2016 05:22 AM, Bill James wrote:
>> My apologies, I'm showing how much of a noob I am.
>> Ignore last direct to gluster numbers, as that wasn't really glusterfs.
>> 
>> 
>> [root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 
>> /mnt/tmp/
>> [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M 
>> count=1000 oflag=direct
>> 1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s
>> 
>> That's more how I expected, it is pointing to glusterfs performance.
>> 
>> 
>> 
>> On 02/11/2016 03:27 PM, Bill James wrote:
>>> don't know if it helps, but I ran a few more tests, all from the same 
>>> hardware node.
>>> 
>>> The VM:
>>> [root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M count=1000 
>>> oflag=direct
>>> 1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
>>> 
>>> Writing directly to gluster volume:
>>> [root@ovirt2 test ~]# time dd if=/dev/zero 
>>> of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct
>>> 1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s
>>> 
>>> 
>>> Writing to NFS volume:
>>> [root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile 
>>> bs=1M count=1000 oflag=direct
>>> 1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s
>>> 
>>> NFS & Gluster are using the same interface. Tests were not run at same time.
>>> 
>>> This would suggest my problem isn't glusterfs, but the VM performance.
>>> 
>>> 
>>> 
>>> On 02/11/2016 03:13 PM, Bill James wrote:
 xml attached. 
 
 
 On 02/11/2016 12:28 PM, Nir Soffer wrote: 
> On Thu, Feb 11, 2016 at 8:27 PM, Bill James  
>  wrote: 
>> thank you for the reply. 
>> 
>> We setup gluster using the names associated with  NIC 2 IP. 
>>   Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1 
>>   Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1 
>>   Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1 
>> 
>> That's NIC 2's IP. 
>> Using 'iftop -i eno2 -L 5 -t' : 
>> 
>> dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct 
>> 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s 
> Can you share the xml of this vm? You can find it in vdsm log, 
> at the time you start the vm. 
> 
> Or you can do (on the host): 
> 
> # virsh 
> virsh # list 
> (username: vdsm@ovirt password: shibboleth) 
> virsh # dumpxml vm-id 
> 
>> Peak rate (sent/received/total):  281Mb 5.36Mb 
>> 282Mb 
>> Cumulative (sent/received/total):1.96GB 14.6MB 
>> 1.97GB 
>> 
>> gluster volume info gv1: 
>>   Options Reconfigured: 
>>   performance.write-behind-window-size: 4MB 
>>   performance.readdir-ahead: on 
>>   performance.cache-size: 1GB 
>>   performance.write-behind: off 
>> 
>> performance.write-behind: off didn't help. 
>> Neither did any other changes I've tried. 
>> 
>> 
>> There is no VM traffic on this VM right now except my test. 
>> 
>> 
>> 
>> On 02/10/2016 11:55 PM, Nir Soffer wrote: 
>>> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N  
>>>  
>>> wrote: 
 +gluster-users 
 
 Does disabling 'performance.write-behind' give a better throughput? 
 
 
 
 On 02/10/2016 11:06 PM, Bill James wrote: 
> I'm setting up a ovirt cluster using glusterfs and noticing not 
> stellar 
> performance. 
> Maybe my setup could use some adjustments? 
> 
> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
> 3.6.2.6-1. 
> Each node has 8 spindles configured in 1 array which is split using 
> LVM 
> with one logical volume for system and one for gluster. 
> They each have 4 NICs, 
>NIC1 = ovirtmgmt 
>NIC2 = gluster  (1GbE) 
>>> How do you ensure that gluster trafic is using this nic? 
>>> 
>NIC3 = VM traffic 
>>> How do 

Re: [ovirt-users] ovirt glusterfs performance

2016-04-06 Thread Ravishankar N

On 04/06/2016 02:08 PM, Roderick Mooi wrote:

Hi Ravi and colleagues

(apologies for hijacking this thread but I’m not sure where else to 
report this (and it is related).)


With gluster 3.7.10, running
#gluster volume set  group virt
fails with:
volume set: failed: option : eager-lock does not exist
Did you mean eager-lock?

I had to remove the eager-lock setting from 
/var/lib/glusterd/groups/virt to get this to work. It seems like 
setting eager-lock has been removed from latest gluster. Is this 
correct? Either way, is there anything else I should do?


It is not removed. Can you try `gluster volume set volname cluster.eager-lock enable`?
I think the disperse (EC) translator introduced a `disperse.eager-lock`, which is why you need to mention the entire volume option name to avoid ambiguity.
We probably need to fix the virt profile setting to include the entire name. By the way, `gluster volume set help` should give you the list of all options.
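A sketch of the fully-qualified commands, assuming a volume named gv1:

gluster volume set gv1 cluster.eager-lock enable
gluster volume get gv1 cluster.eager-lock
gluster volume set help | grep -i eager-lock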


-Ravi



Cheers,

Roderick

On 12 Feb 2016, at 6:18 AM, Ravishankar N > wrote:


Hi Bill,
Can you enable virt-profile setting for your volume and see if that 
helps? You need to enable this optimization when you create the 
volume using ovrit, or use the following command for an existing volume:


#gluster volume set  group virt

-Ravi


On 02/12/2016 05:22 AM, Bill James wrote:

My apologies, I'm showing how much of a noob I am.
Ignore last direct to gluster numbers, as that wasn't really glusterfs.


[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com 
:/gv1 /mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 
bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s

That's more how I expected, it is pointing to glusterfs performance.



On 02/11/2016 03:27 PM, Bill James wrote:
don't know if it helps, but I ran a few more tests, all from the 
same hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at 
same time.


This would suggest my problem isn't glusterfs, but the VM performance.



On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 8:27 PM, Bill James  
wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com 
:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com 
:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com 
:/gluster-store/brick1/gv1


That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total): 281Mb 5.36Mb
282Mb
Cumulative (sent/received/total): 1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better 
throughput?




On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing 
not stellar

performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split 
using LVM

with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[

Re: [ovirt-users] ovirt glusterfs performance

2016-02-12 Thread Ravishankar N

On 02/12/2016 09:11 PM, Bill James wrote:

wow, that made a whole lot of difference!
Thank you!

[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile1 bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 20.2778 s, 51.7 MB/s
That's great. It was Vijay Bellur who noticed that it was not enabled on 
your volume while we were talking on irc. So thanks to him.






Re: [ovirt-users] ovirt glusterfs performance

2016-02-12 Thread Bill James

wow, that made a whole lot of difference!
Thank you!

[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile1 bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 20.2778 s, 51.7 MB/s

these are the options now for the record.

Options Reconfigured:
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.write-behind: off
performance.write-behind-window-size: 4MB
performance.cache-size: 1GB
performance.readdir-ahead: on
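For anyone reproducing this, the list above is roughly what applying the gluster virt profile sets (plus the write-behind tweaks); a minimal sketch, assuming a started volume named gv1:

gluster volume set gv1 group virt
gluster volume info gv1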


Thanks again!


On 2/11/16 8:18 PM, Ravishankar N wrote:

Hi Bill,
Can you enable virt-profile setting for your volume and see if that 
helps? You need to enable this optimization when you create the volume 
using ovrit, or use the following command for an existing volume:


#gluster volume set  group virt

-Ravi


On 02/12/2016 05:22 AM, Bill James wrote:

My apologies, I'm showing how much of a noob I am.
Ignore last direct to gluster numbers, as that wasn't really glusterfs.


[root@ovirt2 test ~]# mount -t glusterfs 
ovirt2-ks.test.j2noc.com:/gv1 /mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 
bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s

That's more how I expected, it is pointing to glusterfs performance.



On 02/11/2016 03:27 PM, Bill James wrote:
don't know if it helps, but I ran a few more tests, all from the 
same hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at 
same time.


This would suggest my problem isn't glusterfs, but the VM performance.



On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 8:27 PM, Bill James  
wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total):  281Mb 5.36Mb
282Mb
Cumulative (sent/received/total): 1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better 
throughput?




On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing 
not stellar

performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split 
using LVM

with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


   NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using sim

Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Ravishankar N

Hi Bill,
Can you enable virt-profile setting for your volume and see if that 
helps? You need to enable this optimization when you create the volume 
using ovrit, or use the following command for an existing volume:


#gluster volume set  group virt

-Ravi


On 02/12/2016 05:22 AM, Bill James wrote:

My apologies, I'm showing how much of a noob I am.
Ignore last direct to gluster numbers, as that wasn't really glusterfs.


[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 
/mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s

That's more how I expected, it is pointing to glusterfs performance.



On 02/11/2016 03:27 PM, Bill James wrote:
don't know if it helps, but I ran a few more tests, all from the same 
hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/mnt/storage/qa/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at 
same time.


This would suggest my problem isn't glusterfs, but the VM performance.



On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 8:27 PM, Bill James  wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total):  281Mb 5.36Mb
282Mb
Cumulative (sent/received/total): 1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better throughput?



On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing not 
stellar

performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split 
using LVM

with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


   NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)


1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
 dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get 
better

performance?

Adding Niels for advice.


This email, its contents and 
Please avoid this, this is a public mailing list, everything you 
write

here is public.

Nir
I'll have to look into how to remove this sig for this m

Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Bill James

My apologies, I'm showing how much of a noob I am.
Ignore last direct to gluster numbers, as that wasn't really glusterfs.


[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1 
/mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 65.8596 s, 15.9 MB/s

That's more how I expected, it is pointing to glusterfs performance.



On 02/11/2016 03:27 PM, Bill James wrote:
don't know if it helps, but I ran a few more tests, all from the same 
hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile 
bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at same 
time.


This would suggest my problem isn't glusterfs, but the VM performance.



On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 8:27 PM, Bill James  wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total):  281Mb 5.36Mb
282Mb
Cumulative (sent/received/total):1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better throughput?



On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing not 
stellar

performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split 
using LVM

with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


   NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)


1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
 dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get 
better

performance?

Adding Niels for advice.


This email, its contents and 
Please avoid this, this is a public mailing list, everything you 
write

here is public.

Nir
I'll have to look into how to remove this sig for this mailing 
list



Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Bill James
don't know if it helps, but I ran a few more tests, all from the same 
hardware node.


The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M 
count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s

Writing directly to gluster volume:
[root@ovirt2 test ~]# time dd if=/dev/zero 
of=/gluster-store/brick1/gv1/testfile bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 9.92048 s, 106 MB/s


Writing to NFS volume:
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/storage/qa/testfile 
bs=1M count=1000 oflag=direct

1048576000 bytes (1.0 GB) copied, 10.5776 s, 99.1 MB/s

NFS & Gluster are using the same interface. Tests were not run at same time.

This would suggest my problem isn't glusterfs, but the VM performance.



On 02/11/2016 03:13 PM, Bill James wrote:

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 8:27 PM, Bill James  wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total):  281Mb 5.36Mb
282Mb
Cumulative (sent/received/total):1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 


wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better throughput?



On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing not 
stellar

performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 
3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split 
using LVM

with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


   NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)


1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
 dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get better
performance?

Adding Niels for advice.


This email, its contents and 

Please avoid this, this is a public mailing list, everything you write
here is public.

Nir

I'll have to look into how to remove this sig for this mailing list


Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Bill James

xml attached.


On 02/11/2016 12:28 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 8:27 PM, Bill James  wrote:

thank you for the reply.

We setup gluster using the names associated with  NIC 2 IP.
  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id


Peak rate (sent/received/total):  281Mb 5.36Mb
282Mb
Cumulative (sent/received/total):1.96GB 14.6MB
1.97GB

gluster volume info gv1:
  Options Reconfigured:
  performance.write-behind-window-size: 4MB
  performance.readdir-ahead: on
  performance.cache-size: 1GB
  performance.write-behind: off

performance.write-behind: off didn't help.
Neither did any other changes I've tried.


There is no VM traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 
wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better throughput?



On 02/10/2016 11:06 PM, Bill James wrote:

I'm setting up a ovirt cluster using glusterfs and noticing not stellar
performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split using LVM
with one logical volume for system and one for gluster.
They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


   NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)


1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
 dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get better
performance?

Adding Niels for advice.


This email, its contents and 

Please avoid this, this is a public mailing list, everything you write
here is public.

Nir

I'll have to look into how to remove this sig for this mailing list



[Attachment: libvirt domain XML for VM billjov1.test.j2noc.com (UUID c6aa56b4-f387-4a5b-84b6-a7db6ef89686); oVirt Node 7-2.1511.el7.centos.2.10, CPU model SandyBridge, emulator /usr/libexec/qemu-kvm. The XML markup was stripped by the list archive and only fragments remain.]

Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Nir Soffer
On Thu, Feb 11, 2016 at 8:27 PM, Bill James  wrote:
> thank you for the reply.
>
> We setup gluster using the names associated with  NIC 2 IP.
>  Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>  Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>  Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>
> That's NIC 2's IP.
> Using 'iftop -i eno2 -L 5 -t' :
>
> dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
> 1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Can you share the xml of this vm? You can find it in vdsm log,
at the time you start the vm.

Or you can do (on the host):

# virsh
virsh # list
(username: vdsm@ovirt password: shibboleth)
virsh # dumpxml vm-id

>
> Peak rate (sent/received/total):  281Mb 5.36Mb
> 282Mb
> Cumulative (sent/received/total):1.96GB 14.6MB
> 1.97GB
>
> gluster volume info gv1:
>  Options Reconfigured:
>  performance.write-behind-window-size: 4MB
>  performance.readdir-ahead: on
>  performance.cache-size: 1GB
>  performance.write-behind: off
>
> performance.write-behind: off didn't help.
> Neither did any other changes I've tried.
>
>
> There is no VM traffic on this VM right now except my test.
>
>
>
> On 02/10/2016 11:55 PM, Nir Soffer wrote:
>>
>> On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N 
>> wrote:
>>>
>>> +gluster-users
>>>
>>> Does disabling 'performance.write-behind' give a better throughput?
>>>
>>>
>>>
>>> On 02/10/2016 11:06 PM, Bill James wrote:

 I'm setting up a ovirt cluster using glusterfs and noticing not stellar
 performance.
 Maybe my setup could use some adjustments?

 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
 Each node has 8 spindles configured in 1 array which is split using LVM
 with one logical volume for system and one for gluster.
 They each have 4 NICs,
   NIC1 = ovirtmgmt
   NIC2 = gluster  (1GbE)
>>
>> How do you ensure that gluster trafic is using this nic?
>>
   NIC3 = VM traffic
>>
>> How do you ensure that vm trafic is using this nic?
>>
 I tried with default glusterfs settings
>>
>> And did you find any difference?
>>
 and also with:
 performance.cache-size: 1GB
 performance.readdir-ahead: on
 performance.write-behind-window-size: 4MB

 [root@ovirt3 test scripts]# gluster volume info gv1

 Volume Name: gv1
 Type: Replicate
 Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
 Status: Started
 Number of Bricks: 1 x 3 = 3
 Transport-type: tcp
 Bricks:
 Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
 Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
 Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
 Options Reconfigured:
 performance.cache-size: 1GB
 performance.readdir-ahead: on
 performance.write-behind-window-size: 4MB


 Using simple dd test on VM in ovirt:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>
>> block size of 1G?!
>>
>> Try 1M (our default for storage operations)
>>
1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

 Another VM not in ovirt using nfs:
 dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


 Is that expected or is there a better way to set it up to get better
 performance?
>>
>> Adding Niels for advice.
>>
 This email, its contents and 
>>
>> Please avoid this, this is a public mailing list, everything you write
>> here is public.
>>
>> Nir
>
> I'll have to look into how to remove this sig for this mailing list
>
> Cloud Services for Business www.j2.com
> j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox
>
>
> This email, its contents and attachments contain information from j2 Global,
> Inc. and/or its affiliates which may be privileged, confidential or
> otherwise protected from disclosure. The information is intended to be for
> the addressee(s) only. If you are not an addressee, any disclosure, copy,
> distribution, or use of the contents of this message is prohibited. If you
> have received this email in error please notify the sender by reply e-mail
> and delete the original message and any copies. (c) 2015 j2 Global, Inc. All
> rights reserved. eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and Onebox
> are registered trademarks of j2 Global, Inc. and its affiliates.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt glusterfs performance

2016-02-11 Thread Bill James

Thank you for the reply.

We set up gluster using the names associated with NIC 2's IP.
 Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
 Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
 Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1

That's NIC 2's IP.
Using 'iftop -i eno2 -L 5 -t' :

dd if=/dev/zero of=/root/testfile bs=1M count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 68.0714 s, 15.4 MB/s

Peak rate (sent/received/total):        281Mb    5.36Mb    282Mb
Cumulative (sent/received/total):       1.96GB   14.6MB    1.97GB


gluster volume info gv1:
 Options Reconfigured:
 performance.write-behind-window-size: 4MB
 performance.readdir-ahead: on
 performance.cache-size: 1GB
 performance.write-behind: off

Setting performance.write-behind to off didn't help.
Neither did any of the other changes I've tried.


There is no other traffic on this VM right now except my test.



On 02/10/2016 11:55 PM, Nir Soffer wrote:

On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N  wrote:

+gluster-users

Does disabling 'performance.write-behind' give a better throughput?



On 02/10/2016 11:06 PM, Bill James wrote:

I'm setting up a ovirt cluster using glusterfs and noticing not stellar
performance.
Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split using LVM
with one logical volume for system and one for gluster.
They each have 4 NICs,
  NIC1 = ovirtmgmt
  NIC2 = gluster  (1GbE)

How do you ensure that gluster trafic is using this nic?


  NIC3 = VM traffic

How do you ensure that vm trafic is using this nic?


I tried with default glusterfs settings

And did you find any difference?


and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
   dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)


   1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
   1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get better
performance?

Adding Niels for advice.


This email, its contents and 

Please avoid this, this is a public mailing list, everything you write
here is public.

Nir

I'll have to look into how to remove this sig for this mailing list

Cloud Services for Business www.j2.com
j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox


This email, its contents and attachments contain information from j2 Global, 
Inc. and/or its affiliates which may be privileged, confidential or otherwise 
protected from disclosure. The information is intended to be for the 
addressee(s) only. If you are not an addressee, any disclosure, copy, 
distribution, or use of the contents of this message is prohibited. If you have 
received this email in error please notify the sender by reply e-mail and 
delete the original message and any copies. (c) 2015 j2 Global, Inc. All rights 
reserved. eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and Onebox are 
registered trademarks of j2 Global, Inc. and its affiliates.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt glusterfs performance

2016-02-10 Thread Nir Soffer
On Thu, Feb 11, 2016 at 2:42 AM, Ravishankar N  wrote:
> +gluster-users
>
> Does disabling 'performance.write-behind' give a better throughput?
>
>
>
> On 02/10/2016 11:06 PM, Bill James wrote:
>>
>> I'm setting up a ovirt cluster using glusterfs and noticing not stellar
>> performance.
>> Maybe my setup could use some adjustments?
>>
>> 3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
>> Each node has 8 spindles configured in 1 array which is split using LVM
>> with one logical volume for system and one for gluster.
>> They each have 4 NICs,
>>  NIC1 = ovirtmgmt
>>  NIC2 = gluster

How do you ensure that gluster traffic is using this NIC?
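
One way to check (assuming the *-ks brick hostnames are the ones bound to
NIC2) would be something like:

    gluster volume status gv1
    getent hosts ovirt1-ks.test.j2noc.com
    iftop -i eno2 -t    # or whatever device NIC2 is

i.e. confirm the brick hostnames resolve to NIC2's addresses and watch that
interface while a test runs.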

>>  NIC3 = VM traffic

How do you ensure that VM traffic is using this NIC?

>> I tried with default glusterfs settings

And did you find any difference?

>> and also with:
>> performance.cache-size: 1GB
>> performance.readdir-ahead: on
>> performance.write-behind-window-size: 4MB
>>
>> [root@ovirt3 test scripts]# gluster volume info gv1
>>
>> Volume Name: gv1
>> Type: Replicate
>> Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
>> Status: Started
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
>> Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
>> Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
>> Options Reconfigured:
>> performance.cache-size: 1GB
>> performance.readdir-ahead: on
>> performance.write-behind-window-size: 4MB
>>
>>
>> Using simple dd test on VM in ovirt:
>>   dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct

block size of 1G?!

Try 1M (our default for storage operations)
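
For example (same write, just in 1M blocks and a similar total size):

    dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct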

>>   1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s
>>
>> Another VM not in ovirt using nfs:
>>dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
>>   1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s
>>
>>
>> Is that expected or is there a better way to set it up to get better
>> performance?

Adding Niels for advice.

>> This email, its contents and attachments contain information from j2
>> Global, Inc. and/or its affiliates which may be privileged, confidential or
>> otherwise protected from disclosure. The information is intended to be for
>> the addressee(s) only. If you are not an addressee, any disclosure, copy,
>> distribution, or use of the contents of this message is prohibited. If you
>> have received this email in error please notify the sender by reply e-mail
>> and delete the original message and any copies. (c) 2015 j2 Global, Inc. All
>> rights reserved. eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and Onebox
>> are registered trademarks of j2 Global, Inc. and its affiliates.

Please avoid this; this is a public mailing list, and everything you write
here is public.

Nir
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt glusterfs performance

2016-02-10 Thread Ravishankar N

+gluster-users

Does disabling 'performance.write-behind' give better throughput?
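
That is, something like:

    gluster volume set gv1 performance.write-behind off

and, to go back to the default afterwards:

    gluster volume reset gv1 performance.write-behind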


On 02/10/2016 11:06 PM, Bill James wrote:
I'm setting up a ovirt cluster using glusterfs and noticing not 
stellar performance.

Maybe my setup could use some adjustments?

3 hardware nodes running centos7.2, glusterfs 3.7.6.1, ovirt 3.6.2.6-1.
Each node has 8 spindles configured in 1 array which is split using 
LVM with one logical volume for system and one for gluster.

They each have 4 NICs,
 NIC1 = ovirtmgmt
 NIC2 = gluster
 NIC3 = VM traffic

I tried with default glusterfs settings and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using simple dd test on VM in ovirt:
  dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
  1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM not in ovirt using nfs:
   dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
  1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get better 
performance?


Thanks.


Cloud Services for Business www.j2.com
j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox


This email, its contents and attachments contain information from j2 
Global, Inc. and/or its affiliates which may be privileged, 
confidential or otherwise protected from disclosure. The information 
is intended to be for the addressee(s) only. If you are not an 
addressee, any disclosure, copy, distribution, or use of the contents 
of this message is prohibited. If you have received this email in 
error please notify the sender by reply e-mail and delete the original 
message and any copies. (c) 2015 j2 Global, Inc. All rights reserved. 
eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and Onebox are 
registered trademarks of j2 Global, Inc. and its affiliates.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] ovirt glusterfs performance

2016-02-10 Thread Bill James
I'm setting up an oVirt cluster using GlusterFS and noticing less-than-stellar
performance.

Maybe my setup could use some adjustments?

Three hardware nodes running CentOS 7.2, GlusterFS 3.7.6.1, and oVirt 3.6.2.6-1.
Each node has 8 spindles configured in one array, which is split using LVM
into one logical volume for the system and one for gluster.

They each have 4 NICs:
 NIC1 = ovirtmgmt
 NIC2 = gluster
 NIC3 = VM traffic

I tried with default glusterfs settings and also with:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB
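
(For reference, these are set per volume with, e.g.:

    gluster volume set gv1 performance.cache-size 1GB

and show up under "Options Reconfigured" in the volume info below.)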

[root@ovirt3 test scripts]# gluster volume info gv1

Volume Name: gv1
Type: Replicate
Volume ID: 71afc35b-09d7-4384-ab22-57d032a0f1a2
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick3: ovirt3-ks.test.j2noc.com:/gluster-store/brick1/gv1
Options Reconfigured:
performance.cache-size: 1GB
performance.readdir-ahead: on
performance.write-behind-window-size: 4MB


Using a simple dd test on a VM in oVirt:
  dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
  1073741824 bytes (1.1 GB) copied, 65.9337 s, 16.3 MB/s

Another VM, not in oVirt, using NFS:
   dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
  1073741824 bytes (1.1 GB) copied, 27.0079 s, 39.8 MB/s


Is that expected or is there a better way to set it up to get better 
performance?


Thanks.


Cloud Services for Business www.j2.com
j2 | eFax | eVoice | FuseMail | Campaigner | KeepItSafe | Onebox


This email, its contents and attachments contain information from j2 Global, 
Inc. and/or its affiliates which may be privileged, confidential or otherwise 
protected from disclosure. The information is intended to be for the 
addressee(s) only. If you are not an addressee, any disclosure, copy, 
distribution, or use of the contents of this message is prohibited. If you have 
received this email in error please notify the sender by reply e-mail and 
delete the original message and any copies. (c) 2015 j2 Global, Inc. All rights 
reserved. eFax, eVoice, Campaigner, FuseMail, KeepItSafe, and Onebox are 
registered trademarks of j2 Global, Inc. and its affiliates.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users