Re: [ovirt-users] How to create a new Gluster volume

2017-07-07 Thread knarra

On 07/07/2017 02:03 PM, Gianluca Cecchi wrote:
On Fri, Jul 7, 2017 at 10:15 AM, knarra wrote:






It seems I have to de-select the checkbox "Show available bricks
from host", and then I can manually type the directory of the bricks

I see that bricks are mounted in /gluster/brick3 and that is the
reason it does not show anything in the "Brick Directory" drop-down
field. If bricks were mounted under /gluster_bricks they would
have been detected automatically. There is an RFE which is raised to
detect bricks which are created manually.
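For reference, a rough sketch of what mounting a brick under /gluster_bricks would look like, so the GUI can pick it up (the device name is taken from the df output later in this thread; adjust everything to your environment):

```shell
# Sketch only: mount the brick filesystem under /gluster_bricks so the
# oVirt web admin GUI can auto-detect it. Device and directory names
# are examples from this thread.
mkdir -p /gluster_bricks/brick3
mount /dev/mapper/gluster-export /gluster_bricks/brick3
# Persist the mount across reboots:
echo '/dev/mapper/gluster-export /gluster_bricks/brick3 xfs defaults 0 0' >> /etc/fstab
```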


I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I think
I used the "default" path that was proposed inside the
ovirt-gluster.conf file used to feed gdeploy...

I think it was based on this from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and this conf file
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf

Good that there is an RFE. Thanks




BTW: I see that after creating a volume optimized for oVirt in the
web admin GUI of 4.1.2 I get slightly different options for it
compared with a pre-existing volume created in 4.0.5 during the
initial setup with gdeploy.

NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I
have gluster 3.10 (manually updated from CentOS storage SIG)

Making a "gluster volume info" and then a diff of the output for
the 2 volumes I have:

new volume ==   <
old volume  ==>

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on

Do I have to change anything for the newly created one?

No, you do not need to change anything for the new volume. But if
you plan to enable o-direct on the volume then you will have to
disable/turn off remote-dio.
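For the record, those toggles would be something like the following ("newvol" is just a placeholder volume name, not from this thread):

```shell
# Hypothetical example: before enabling O_DIRECT on a volume named
# "newvol", turn off remote-dio first, then enable strict-o-direct.
gluster volume set newvol network.remote-dio off
gluster volume set newvol performance.strict-o-direct on
# Check the resulting values:
gluster volume get newvol network.remote-dio
gluster volume get newvol performance.strict-o-direct
```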


OK.
Again, in the ovirt-gluster.conf file I see there was this kind of setting
for the Gluster volumes when running gdeploy for them:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,1,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine
I'm going to cross-check now what the suggested values are for oVirt
4.1 and Gluster 3.10 combined...
Now the virt group sets the shard block size to the default, which is
4MB, and that is the suggested value. With 4MB shards we see that healing is
much faster when granular entry heal is enabled on the volume.


I am not sure why the conf file sets the shard size again. Maybe this
can be removed from the file.
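For what it's worth, checking what a given volume effectively uses is a one-liner (the volume name is a placeholder):

```shell
# Check the effective shard block size on a volume; with the current
# virt group defaults this is expected to report 4MB.
# "newvol" is a placeholder name.
gluster volume get newvol features.shard-block-size
```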


Other than this everything looks good to me.


I was in particular worried by the difference 
of features.shard-block-size but after reading this


http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure if 512MB is the best in the case of VM storage; I'm going
to dig more eventually.


Thanks,
Gianluca



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] How to create a new Gluster volume

2017-07-07 Thread Gianluca Cecchi
On Fri, Jul 7, 2017 at 10:15 AM, knarra wrote:

>
>
>>
> It seems I have to de-select the checkbox "Show available bricks from
> host", and then I can manually type the directory of the bricks
>
> I see that bricks are mounted in /gluster/brick3 and that is the reason it
> does not show anything in the "Brick Directory" drop-down field. If bricks
> were mounted under /gluster_bricks they would have been detected automatically.
> There is an RFE which is raised to detect bricks which are created manually.
>

I deployed this HCI system with gdeploy back at oVirt 4.0.5 time, so I think I
used the "default" path that was proposed inside the ovirt-gluster.conf
file used to feed gdeploy...
I think it was based on this from Jason:
https://www.ovirt.org/blog/2016/08/up-and-running-with-ovirt-4-0-and-gluster-storage/
and this conf file
https://gist.githubusercontent.com/jasonbrooks/a5484769eea5a8cf2fa9d32329d5ebe5/raw/ovirt-gluster.conf

Good that there is an RFE. Thanks



>
>
> BTW: I see that after creating a volume optimized for oVirt in the web admin
> GUI of 4.1.2 I get slightly different options for it compared with a
> pre-existing volume created in 4.0.5 during the initial setup with gdeploy.
>
> NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I have
> gluster 3.10 (manually updated from CentOS storage SIG)
>
> Making a "gluster volume info" and then a diff of the output for the 2
> volumes I have:
>
> new volume ==   <
> old volume  ==>
>
> < cluster.shd-max-threads: 8
> ---
> > cluster.shd-max-threads: 6
> 13a13,14
> > features.shard-block-size: 512MB
> 16c17
> < network.remote-dio: enable
> ---
> > network.remote-dio: off
> 23a25
> > performance.readdir-ahead: on
> 25c27
> < server.allow-insecure: on
> ---
> > performance.strict-o-direct: on
>
> Do I have to change anything for the newly created one?
>
> No, you do not need to change anything for the new volume. But if you plan
> to enable o-direct on the volume then you will have to disable/turn off
> remote-dio.
>
>
> OK.
Again, in the ovirt-gluster.conf file I see there was this kind of setting for
the Gluster volumes when running gdeploy for them:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,on,512MB,32,full,granular,1,8,30,off,on,off,on
brick_dirs=/gluster/brick1/engine

I'm going to cross-check now what the suggested values are for oVirt 4.1 and
Gluster 3.10 combined...

I was in particular worried by the difference of features.shard-block-size
but after reading this

http://blog.gluster.org/2015/12/introducing-shard-translator/

I'm not sure if 512MB is the best in the case of VM storage; I'm going to
dig more eventually.

Thanks,
Gianluca


Re: [ovirt-users] How to create a new Gluster volume

2017-07-07 Thread knarra

On 07/06/2017 04:38 PM, Gianluca Cecchi wrote:
On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi
<gianluca.cec...@gmail.com> wrote:


Hello,
I'm trying to create a new volume. I'm in 4.1.2
I'm following these indications:

http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/



When I click the "add brick" button, I don't see anything in
"Brick Directory" dropdown field and I cannot manuall input a
directory name.

On the 3 nodes I already have formatted and mounted fs

[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
[root@ovirt01 ~]#

The guide says

7. Click the Add Bricks button to select bricks to add to the
volume. Bricks must be created externally on the Gluster Storage
nodes.

What does it mean with "created externally"?
The next step from os point would be volume creation but it is
indeed what I would like to do from the gui...

Thanks,
Gianluca


It seems I have to de-select the checkbox "Show available bricks from
host", and then I can manually type the directory of the bricks.
I see that bricks are mounted in /gluster/brick3 and that is the reason
it does not show anything in the "Brick Directory" drop-down field. If
bricks were mounted under /gluster_bricks they would have been detected
automatically. There is an RFE which is raised to detect bricks which
are created manually.
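In other words, "created externally" roughly means preparing the brick filesystem on each node before using the GUI. A sketch under the assumptions visible in this thread (XFS on an LVM device; all names are examples):

```shell
# Rough sketch of creating a brick "externally" on each Gluster node
# before adding it in the GUI. Device, mount point and directory names
# are examples, not prescriptions.
mkfs.xfs -i size=512 /dev/mapper/gluster-export   # XFS with 512-byte inodes, commonly recommended for Gluster
mkdir -p /gluster_bricks/brick3
mount /dev/mapper/gluster-export /gluster_bricks/brick3
mkdir -p /gluster_bricks/brick3/data              # the directory to use as the brick
```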


BTW: I see that after creating a volume optimized for oVirt in the web
admin GUI of 4.1.2 I get slightly different options for it compared with
a pre-existing volume created in 4.0.5 during the initial setup with gdeploy.


NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I have 
gluster 3.10 (manually updated from CentOS storage SIG)


Making a "gluster volume info" and then a diff of the output for the 2 
volumes I have:


new volume ==   <
old volume  ==>

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on

Do I have to change anything for the newly created one?
No, you do not need to change anything for the new volume. But if you 
plan to enable o-direct on the volume then you will have to disable/turn 
off remote-dio.







Re: [ovirt-users] How to create a new Gluster volume

2017-07-06 Thread Gianluca Cecchi
On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi wrote:

> Hello,
> I'm trying to create a new volume. I'm in 4.1.2
> I'm following these indications:
> http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/
>
> When I click the "add brick" button, I don't see anything in "Brick
> Directory" dropdown field and I cannot manuall input a directory name.
>
> On the 3 nodes I already have formatted and mounted fs
>
> [root@ovirt01 ~]# df -h /gluster/brick3/
> Filesystem  Size  Used Avail Use% Mounted on
> /dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
> [root@ovirt01 ~]#
>
> The guide says
>
> 7. Click the Add Bricks button to select bricks to add to the volume.
> Bricks must be created externally on the Gluster Storage nodes.
>
> What does it mean with "created externally"?
> The next step from os point would be volume creation but it is indeed what
> I would like to do from the gui...
>
> Thanks,
> Gianluca
>
>
It seems I have to de-select the checkbox "Show available bricks from host"
and so I can manually the the directory of the bricks

BTW: I see that after creating a volume optimized for oVirt in the web admin
GUI of 4.1.2 I get slightly different options for it compared with a
pre-existing volume created in 4.0.5 during the initial setup with gdeploy.

NOTE: during 4.0.5 setup I had gluster 3.7 installed, while now I have
gluster 3.10 (manually updated from CentOS storage SIG)

Making a "gluster volume info" and then a diff of the output for the 2
volumes I have:

new volume ==   <
old volume  ==>

< cluster.shd-max-threads: 8
---
> cluster.shd-max-threads: 6
13a13,14
> features.shard-block-size: 512MB
16c17
< network.remote-dio: enable
---
> network.remote-dio: off
23a25
> performance.readdir-ahead: on
25c27
< server.allow-insecure: on
---
> performance.strict-o-direct: on

Do I have to change anything for the newly created one?

Thanks,
Gianluca


[ovirt-users] How to create a new Gluster volume

2017-07-06 Thread Gianluca Cecchi
Hello,
I'm trying to create a new volume. I'm in 4.1.2
I'm following these indications:
http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/

When I click the "add brick" button, I don't see anything in "Brick
Directory" dropdown field and I cannot manuall input a directory name.

On the 3 nodes I already have formatted and mounted fs

[root@ovirt01 ~]# df -h /gluster/brick3/
Filesystem  Size  Used Avail Use% Mounted on
/dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
[root@ovirt01 ~]#

The guide says

7. Click the Add Bricks button to select bricks to add to the volume.
Bricks must be created externally on the Gluster Storage nodes.

What does it mean with "created externally"?
The next step from os point would be volume creation but it is indeed what
I would like to do from the gui...

Thanks,
Gianluca