[ovirt-users] Re: Quick generic Questions

2019-11-08 Thread Christian Reiss

Hello all,

thank you all for your overwhelming support and information.
Waking up to a plethora of answers is a great way to start the day.

This goes out to all: Thank you!

-Christian.

On 07/11/2019 18:38, Staniforth, Paul wrote:

Hello Christian,

Here are some useful links:

https://www.ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide.html


https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.6/

You need at least one device for the O/S (node install) and at least
one device for the bricks; ideally, mirror two drives for the O/S and
RAID the rest, plus a spare.


a) Yes, the oVirt node installer has a hyperconverged/gluster mode.

b) The storage will be shared between all 3 nodes and distributed to 
bricks on all 3 nodes.


c) A VM can start on, and migrate to, any node if all requirements
are met (memory/CPU/networks), as the storage is shared.


d) I don't think so.

e) The storage network should not die; if it does, VMs will probably be
paused, and if nodes lose access to storage on the master storage domain
I think the engine will restart them, depending on the fencing policy.
The gluster network should be separate from the front-end network and
should use bonds for resilience.
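As an illustration, a bonded interface for the dedicated storage network could be set up with NetworkManager roughly like this (a minimal sketch; the NIC names ens1f0/ens1f1, the connection names, and the address are made-up placeholders and must match your own hardware and addressing):

```shell
# Create an active-backup bond for the dedicated gluster/storage network
nmcli connection add type bond con-name storage-bond ifname bond1 \
    bond.options "mode=active-backup,miimon=100"

# Enslave two physical NICs (interface names are hypothetical)
nmcli connection add type ethernet con-name storage-bond-p1 ifname ens1f0 master bond1
nmcli connection add type ethernet con-name storage-bond-p2 ifname ens1f1 master bond1

# Give the bond a static address on the storage subnet and bring it up
nmcli connection modify storage-bond ipv4.method manual ipv4.addresses 10.10.10.11/24
nmcli connection up storage-bond
```

Active-backup survives a single link or switch failure without any switch-side configuration; other modes (e.g. LACP) need matching switch support.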


f) Yes, you can add more shared storage domains using iSCSI, FC, NFS, or
external Gluster storage. With iSCSI it may be better to use multipath
connections rather than a bond.
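For the iSCSI case, the usual pattern (sketched below; the portal address 192.168.1.50 and the target IQN are made-up placeholders) is to log in to the target over each storage NIC and let dm-multipath aggregate the paths, instead of bonding:

```shell
# Discover and log in to the target (repeat the login over each storage NIC/subnet)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node -T iqn.2019-11.org.example:freenas-target -l

# Enable dm-multipath and inspect the resulting paths
mpathconf --enable
systemctl enable --now multipathd
multipath -ll
```

Multipath gives per-LUN failover and load distribution at the SCSI layer, which is why it is generally preferred over a bond for block storage.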




Regards,
                     Paul S.



*From:* Christian Reiss 
*Sent:* 07 November 2019 13:29
*To:* users 
*Subject:* [ovirt-users] Quick generic Questions
Hey folks,

I am looking at setting up a hyperconverged cluster with 3 nodes (and
oVirt 4.3). Before setting up I have some generic questions that I would
love to get hints or even an answer on.

First off, the servers are each outfitted with 24 (SSD) drives in a
HW-RAID. Due to wear-leveling and speed I am looking at RAID 10, so I
would end up with one giant sda device.

a) Partitioning
The oVirt node installer will use the full size of /dev/sda; is this
still the right approach for hyperconverged, given the Gluster requirement?
If I understood it correctly, Gluster uses empty drives or partitions,
so a fully utilized drive is of no use here. Does the oVirt node installer
have a hyperconverged/Gluster mode?

b) Storage Location
In this 3-node cluster, if I create a VM on node01, will its data
always end up on the local node01 server?

c) Starting VMs
Can a VM be migrated to or launched from node03 if the data resides on
node01 and node02 (replica 2 with arbiter)?

d) Efficiency / High IO Load
If node01 has high IO load, would additional data be read from the
other node that holds the copy, to even out the load? I am aware Virtuozzo
does this.

e) Storage Network dies
What would happen if node01, node02, and node03 are operational but
only the storage network dies (the frontend is still alive, as are the nodes)?

f) External iSCSI / FreeNAS
We have a FreeNAS system with tons of space and fast network
connectivity. Can oVirt import storage such as a remote iSCSI target,
running the VMs on the oVirt nodes but storing the data there?

Thank you for your time to clear this up.
I have found many approaches out there that are either old (oVirt 3) or
even contradict each other (regarding RAID levels...)

Cheers!
-Christian.

--
   Christian Reiss - em...@christian-reiss.de /"\  ASCII Ribbon
     supp...@alpha-labs.net   \ /    Campaign
   X   against HTML
   WEB alpha-labs.net / \   in eMails

   GPG Retrieval https://gpg.christian-reiss.de

   GPG ID ABCD43C5, 0x44E29126ABCD43C5
   GPG fingerprint = 9549 F537 2596 86BA 733C  A4ED 44E2 9126 ABCD 43C5

   "It's better to reign in hell than to serve in heaven.",
    John Milton, Paradise lost.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: Quick generic Questions

2019-11-07 Thread Strahil

On Nov 7, 2019 15:29, Christian Reiss  wrote:
>
> Hey folks, 
>
> I am looking at setting up a hyperconverged cluster with 3 nodes (and 
> oVirt 4.3). Before setting up I have some generic questions that I would 
> love to get hints or even an answer on. 
>
> First off, the servers are outfitted with 24 (SSD) drives each in a 
> HW-RAID. Due to wear-leveling and speed I am looking at RAID10. So I 
> would end up with one giant sda device. 

Go with RAID 0, or RAID 5/6, as you will have the same data on all nodes
(3 copies in total).
> a) Partitioning 
> Using oVirt node installer which will use the full size of /dev/sda is 
> this still the right solution to Hyperconverged given the gluster issue? 
> If I understood it correctly gluster is using empty drives or partitions 
> so a fully utilized drive is of no use here. Does oVirt node installer 
> have a hyperconverged/ gluster mode? 
The cockpit installer can prepare the gluster infrastructure and then the
oVirt cluster.
> b) Storage Location 
> In this 3 node cluster, creating a VM on node01 will the data for node01 
> always end up in the local node01 server? 
Nope, all data is replicated on all 3 nodes (or on 2 nodes when using
'replica 2 arbiter 1' volumes).
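For reference, a 'replica 2 + arbiter' volume of the kind mentioned above could be created roughly like this (a sketch; the volume name, hostnames, and brick paths are hypothetical, and the hyperconverged deployment wizard normally does this for you):

```shell
# Two full data copies on node01/node02, metadata-only arbiter brick on node03
gluster volume create data replica 3 arbiter 1 \
    node01:/gluster_bricks/data/brick \
    node02:/gluster_bricks/data/brick \
    node03:/gluster_bricks/data/brick

gluster volume start data
gluster volume info data   # brick count should read "1 x (2 + 1) = 3"
```

The arbiter brick stores only file metadata, so it prevents split-brain at a fraction of the disk cost of a third full replica.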
> c) Starting VMs 
> Can a VM be migrated or launched from node03 if the data resides on 
> node01 and node02 (copies 2 with arbiter). 
As Gluster is a shared storage, the VMs can migrate to any host that has
access to the storage (in your case, any of the 3 nodes).
> d) Efficiency / High IO Load 
> If node01 has high IO Load would additional data be loaded from the 
> other node which has the copy to even the load? I am aware Virtuozzo 
> does this. 
Gluster clients (in this case the oVirt nodes) read from all 3 nodes
simultaneously for better I/O. The same is valid for writes.
> e) Storage Network dies 
> What would happen with node01, node02 and node03 are operational but 
> only the storage network dies (frontend is still alive as are the nodes). 
Nodes will become non-operational and all VMs will be paused until storage
is restored.
> f) External iSCSI / FreeNAS 
> We have a FreeNAS system with tons of space and fast network 
> connectivity. Can oVirt handle storage import like remote iscsi target 
> and run VMs on the ovirt nodes but store data there? 
Yep.
> Thank you for your time to clear this up. 
> I have found many approaches out there that either are old (oVirt 3) or 
> even contradict themselves (talk about RAID level...) 
>
> Cheers! 
> -Christian


[ovirt-users] Re: Quick generic Questions

2019-11-07 Thread Gianluca Cecchi
On Thu, Nov 7, 2019 at 3:59 PM Jayme  wrote:

> Your nodes should have separate drives for node image. The installer will
> expect empty block device for gluster setup.  I would do a separate array
> for OS and storage.
>
> Most of your node specific questions I think can be cleared up if you
> think of the storage as being networked storage. Even though the nodes are
> acting as storage they are gluster clients mounting the shares. If you
> create a vm the storage doesn’t go to one specific node. Since the data is
> mounted over network you can run vms from any node.
>
> On Thu, Nov 7, 2019 at 9:39 AM Christian Reiss 
> wrote:
>
>>
>>
[snip]


>> b) Storage Location
>> In this 3 node cluster, creating a VM on node01 will the data for node01
>> always end up in the local node01 server?
>>
>>
Actually, there is this parameter that one can set on the volumes:
cluster.choose-local

It was implemented starting with Gluster 3.12:
https://bugzilla.redhat.com/show_bug.cgi?id=1501022

I think the default is off; I have not tried toggling it in a composite
environment.
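On an existing volume, the current value can be checked and toggled like this (a sketch; 'data' is a placeholder volume name):

```shell
# Show the current setting (reportedly off by default, as noted above)
gluster volume get data cluster.choose-local

# Prefer the local brick for reads on this volume
gluster volume set data cluster.choose-local on
```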

Gianluca


[ovirt-users] Re: Quick generic Questions

2019-11-07 Thread Jayme
Your nodes should have separate drives for the node image. The installer
will expect an empty block device for the Gluster setup. I would do a
separate array for OS and storage.

Most of your node-specific questions, I think, can be cleared up if you
think of the storage as networked storage. Even though the nodes are acting
as storage, they are Gluster clients mounting the shares. If you create a VM,
the storage doesn't go to one specific node. Since the data is mounted over
the network, you can run VMs from any node.
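In other words, every node FUSE-mounts the shared volume as a Gluster client; roughly what oVirt sets up looks like this (a sketch; hostnames, the volume name 'data', and the exact mount path are illustrative):

```shell
# FUSE-mount the shared volume, with fallback servers in case node01 is down
mount -t glusterfs -o backup-volfile-servers=node02:node03 \
    node01:/data /rhev/data-center/mnt/glusterSD/node01:_data

# The same files are visible from every node, regardless of which bricks hold them
df -h /rhev/data-center/mnt/glusterSD/node01:_data
```

The backup-volfile-servers option only affects fetching the volume layout at mount time; actual I/O always goes to all replica bricks.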

On Thu, Nov 7, 2019 at 9:39 AM Christian Reiss 
wrote:

[snip]