[ovirt-users] Re: Hyperconverged solution

2020-01-24 Thread Jayme
I believe you would have to either combine the drives with RAID or LVM so
they are presented as one device, or just create multiple storage domains.
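
For the LVM route, a minimal sketch of combining several disks into one device might look like the following (the device names /dev/sdb-/dev/sdd, the volume group name, and sizes are assumptions for illustration, not from the thread):

```shell
# Combine three disks into a single logical volume presented as one device
pvcreate /dev/sdb /dev/sdc /dev/sdd              # initialize each disk as an LVM physical volume
vgcreate gluster_vg /dev/sdb /dev/sdc /dev/sdd   # group them into one volume group
lvcreate -l 100%FREE -n brick_lv gluster_vg      # one logical volume spanning all free space
mkfs.xfs /dev/gluster_vg/brick_lv                # XFS is the usual filesystem for Gluster bricks
```

A hardware or software RAID set below LVM would achieve the same single-device presentation while adding redundancy at the disk layer.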

On Fri, Jan 24, 2020 at 5:41 AM Benedetto Vassallo <
benedetto.vassa...@unipa.it> wrote:

> Quoting Nir Soffer:
>
> > Hyperconverged uses gluster, and gluster uses replication (replica 3 or
> > replica 2 + arbiter) so adding raid below may not be needed.
>
> Yes, I know this, but is there a way from the UI to create the storage
> domain using more than one disk?
> I can't figure this out from the guide available at
>
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html
>
> >
> > You may use the SSDs for lvm cache for the gluster setup.
> >
>
> That would be great!
>
>
> > I would try to ask on Gluster mailing list about this.
>
> Thank you, I'm waiting for your news.
>
> Best Regards
>
>
> --
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax:   +3909123860880
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YH5AMDC4CAHJJXRNHFR5TLHPMBH26QR5/


[ovirt-users] Re: Hyperconverged solution

2020-01-24 Thread Benedetto Vassallo

Quoting Nir Soffer:


Hyperconverged uses gluster, and gluster uses replication (replica 3 or
replica 2 + arbiter) so adding raid below may not be needed.


Yes, I know this, but is there a way from the UI to create the storage
domain using more than one disk?
I can't figure this out from the guide available at
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged.html




You may use the SSDs for lvm cache for the gluster setup.



That would be great!



I would try to ask on Gluster mailing list about this.


Thank you, I'm waiting for your news.

Best Regards


--
--
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax:   +3909123860880
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/INQJOX7FY7MDXADMIVWZKPS6D6DZZ5YY/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Nir Soffer
On Wed, Jan 22, 2020, 17:54 Benedetto Vassallo 
wrote:

> Hello Guys,
> Here at University of Palermo (Italy) we are planning to switch from
> vmware to ovirt using the hyperconverged solution.
> Our design is a 6 nodes cluster, each node with this configuration:
>
> - 1x Dell PowerEdge R7425 server;
> - 2x AMD EPYC 7301 Processor;
> - 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
> - 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
> - 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
> - 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
>
The hosted engine storage domain is small and should run only one VM, so you
probably don't need 1.2T disks for it.

> - 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
> - 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 + hotspare);
>
Hyperconverged uses gluster, and gluster uses replication (replica 3 or
replica 2 + arbiter) so adding raid below may not be needed.
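
As a sketch of what that replication looks like at the Gluster level (the volume name, hostnames, and brick paths below are hypothetical), a replica volume with an arbiter is created as:

```shell
# "replica 3 arbiter 1": two full data copies plus one metadata-only arbiter brick,
# giving replica-2 capacity cost with replica-3 split-brain protection
gluster volume create vmstore replica 3 arbiter 1 \
    host1:/gluster_bricks/vmstore/brick \
    host2:/gluster_bricks/vmstore/brick \
    host3:/gluster_bricks/vmstore/brick
gluster volume start vmstore
```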

You may use the SSDs for lvm cache for the gluster setup.
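
A rough outline of attaching an SSD as an LVM cache in front of a spinning-disk brick could be the following (the device name /dev/sdf, the VG/LV names, and the cache size are assumptions for illustration):

```shell
# Add the SSD to the brick's volume group and use it as a dm-cache pool
pvcreate /dev/sdf                                    # the SSD
vgextend gluster_vg /dev/sdf                         # add it to the existing volume group
lvcreate --type cache-pool -L 800G -n ssd_cache gluster_vg /dev/sdf
lvconvert --type cache --cachepool gluster_vg/ssd_cache gluster_vg/brick_lv
```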

I would try to ask on Gluster mailing list about this.

Sahina, what do you think?

Nir

> Is this configuration supported, or do I have to change something?
> Thank you and Best Regards.
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax: +3909123860880
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/JU7PVYPSNUASWZAU2VG2DRCLSWHK5XRX/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N3GTOJ5ZHELKWPEI7SM4WL427SKQU2KM/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Strahil Nikolov
On January 23, 2020 11:45:37 AM GMT+02:00, Benedetto Vassallo 
 wrote:
>  Quoting Strahil Nikolov:
>
>> On January 22, 2020 6:46:39 PM GMT+02:00, Benedetto Vassallo  
>>  wrote:
>>> Hello Guys,
>>> Here at University of Palermo (Italy) we are planning to switch from
>>> vmware to ovirt using the hyperconverged solution.
>>> Our design is a 6 nodes cluster, each node with this configuration:
>>>
>>> - 1x Dell PowerEdge R7425 server;
>>> - 2x AMD EPYC 7301 Processor;
>>> - 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
>>> - 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
>>> - 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
>>> - 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 +
>>> hotspare);
>>> - 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
>>> - 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 +
>>> hotspare);
>>>
>>> Is this configuration supported, or do I have to change something?
>>> Thank you and Best Regards.
>>> --
>>> Benedetto Vassallo
>>> Responsabile U.O. Sviluppo e manutenzione dei sistemi
>>> Sistema Informativo di Ateneo
>>> Università degli studi di Palermo
>>>
>>> Phone: +3909123860056
>>> Fax: +3909123860880
>>
>> Hi,
>>
>> Recently it was mentioned that there were some issues with the 'too
>> new' EPYC CPUs.
>> For now, you can do one of the following:
>> 1. Use some older machines for the initial setup of the HostedEngine VM
>> (disable all Spectre/Meltdown mitigations in advance), then add the new
>> EPYC-based hosts and remove the older systems. Sadly, the older
>> systems cannot be too old :)
>>
>> 2. Host the HostedEngine VM on your current VMware environment or on
>> a separate KVM host. Hosting the HostedEngine on bare metal is also OK.
>>
>> 3. Wait (I don't know how long) until the EPYC issues are solved.
>>
>> Best Regards,
>> Strahil Nikolov
>
>Thank you.
>Maybe it's better to use Intel processors?
>Best Regards.
>  --
>Benedetto Vassallo
>Responsabile U.O. Sviluppo e manutenzione dei sistemi
>Sistema Informativo di Ateneo
>Università degli studi di Palermo
>
>Phone: +3909123860056
>Fax: +3909123860880

AMD gives higher memory bandwidth and a better bang-for-the-buck ratio, with more
cores per dollar than Intel, so I wouldn't recommend Intel.

I guess you can try installing an older version (v4.2.x) and then upgrading
the cluster to 4.3.

Good Luck and Welcome to oVirt.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4ANFV7APCBXULQ4NRASSJIKY3QZO7SAP/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Paolo Margara
Hi Benedetto,

we have a running cluster with the same machines and a similar
configuration, and currently we haven't encountered any issues. We're
running oVirt 4.3.7.

Greetings,

    Paolo

On 22/01/20 17:46, Benedetto Vassallo wrote:
>
> Hello Guys,
> Here at University of Palermo (Italy) we are planning to switch from
> vmware to ovirt using the hyperconverged solution.
> Our design is a 6 nodes cluster, each node with this configuration:
>
> - 1x Dell PowerEdge R7425 server;
> - 2x AMD EPYC 7301 Processor;
> - 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
> - 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
> - 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
> - 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
> - 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
> - 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 +
> hotspare);
>
> Is this configuration supported, or do I have to change something?
> Thank you and Best Regards.
>
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax: +3909123860880
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MR6BHDW43K6J4RNJ7OGCQWCQIIJ7KEOY/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Paolo Margara
You know my mail address ;-)


Greetings,

    Paolo

On 23/01/20 11:04, Benedetto Vassallo wrote:
>
> Thank you Paolo.
> Can we keep in touch (in private) to exchange further information?
> Best Regards.
>
>
> Quoting Paolo Margara:
>
>> Hi Benedetto,
>>
>> we have a running cluster with the same machines and a similar
>> configuration, and currently we haven't encountered any issues. We're
>> running oVirt 4.3.7.
>>
>> Greetings,
>>
>>     Paolo
>>
>> On 22/01/20 17:46, Benedetto Vassallo wrote:
>>>
>>>  
>>>
> Hello Guys,
> Here at University of Palermo (Italy) we are planning to switch from
> vmware to ovirt using the hyperconverged solution.
> Our design is a 6 nodes cluster, each node with this configuration:
>
> - 1x Dell PowerEdge R7425 server;
> - 2x AMD EPYC 7301 Processor;
> - 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
> - 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
> - 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
> - 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
> - 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
> - 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 +
> hotspare);
>
> Is this configuration supported, or do I have to change something?
> Thank you and Best Regards.
>
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax: +3909123860880
>
>
> --
> Benedetto Vassallo
> Responsabile U.O. Sviluppo e manutenzione dei sistemi
> Sistema Informativo di Ateneo
> Università degli studi di Palermo
>
> Phone: +3909123860056
> Fax: +3909123860880

-- 
LABINF - HPC@POLITO
DAUIN - Politecnico di Torino
Corso Castelfidardo, 34D - 10129 Torino (TO)
phone: +39 011 090 7051
site: http://www.labinf.polito.it/
site: http://hpc.polito.it/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LO2LBNPOSZOENYNIUHKH7CQGLZBLB5ZJ/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Benedetto Vassallo

 Thank you Paolo.
Can we keep in touch (in private) to exchange further information?
Best Regards.

Quoting Paolo Margara:


Hi Benedetto,

we have a running cluster with the same machines and a similar
configuration, and currently we haven't encountered any issues. We're
running oVirt 4.3.7.

Greetings,

    Paolo

On 22/01/20 17:46, Benedetto Vassallo wrote:


 


 Hello Guys,
Here at University of Palermo (Italy) we are planning to switch from  
vmware to ovirt using the hyperconverged solution.

Our design is a 6 nodes cluster, each node with this configuration:

- 1x Dell PowerEdge R7425 server;
- 2x AMD EPYC 7301 Processor;
- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
- 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
- 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
- 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 + hotspare);
- 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
- 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 + hotspare);

Is this configuration supported, or do I have to change something?
Thank you and Best Regards.
 --
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax: +3909123860880


 --
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax: +3909123860880
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2U2VKUFTVHPHNIZJFUFJEMRFJMYAZPLW/


[ovirt-users] Re: Hyperconverged solution

2020-01-23 Thread Benedetto Vassallo

Quoting Strahil Nikolov:

On January 22, 2020 6:46:39 PM GMT+02:00, Benedetto Vassallo  
 wrote:

Hello Guys,
Here at University of Palermo (Italy) we are planning to switch from
vmware to ovirt using the hyperconverged solution.
Our design is a 6 nodes cluster, each node with this configuration:

- 1x Dell PowerEdge R7425 server;
- 2x AMD EPYC 7301 Processor;
- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
- 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
- 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
- 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 +
hotspare);
- 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
- 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 +
hotspare);

Is this configuration supported, or do I have to change something?
Thank you and Best Regards.
--
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax: +3909123860880


Hi,

Recently it was mentioned that there were some issues with the 'too new' EPYC CPUs.

For now, you can do one of the following:
1. Use some older machines for the initial setup of the HostedEngine VM
(disable all Spectre/Meltdown mitigations in advance), then add the new
EPYC-based hosts and remove the older systems. Sadly, the older
systems cannot be too old :)


2. Host the HostedEngine VM on your current VMware environment or on
a separate KVM host. Hosting the HostedEngine on bare metal is also OK.


3. Wait (I don't know how long) until the EPYC issues are solved.

Best Regards,
Strahil Nikolov


Thank you.
Maybe it's better to use Intel processors?
Best Regards.
 --
Benedetto Vassallo
Responsabile U.O. Sviluppo e manutenzione dei sistemi
Sistema Informativo di Ateneo
Università degli studi di Palermo

Phone: +3909123860056
Fax: +3909123860880
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/H5LGDEEECPIMCJF55ME4RAAZGPIDV4IF/


[ovirt-users] Re: Hyperconverged solution

2020-01-22 Thread Strahil Nikolov
On January 22, 2020 6:46:39 PM GMT+02:00, Benedetto Vassallo 
 wrote:
>Hello Guys,
>Here at University of Palermo (Italy) we are planning to switch from  
>vmware to ovirt using the hyperconverged solution.
>Our design is a 6 nodes cluster, each node with this configuration:
>
>- 1x Dell PowerEdge R7425 server;
>- 2x AMD EPYC 7301 Processor;
>- 512GB of RAM (8x 64GB LRDIMM, 2666MT/s, Quad Rank);
>- 2x Broadcom 57412 Dual Port 10Gb SFP+ ethernet card;
>- 3x 600GB 10K RPM SAS for the OS (Raid1 + hotspare);
>- 5x 1.2TB 10K RPM SAS for the hosted storage domain (Raid5 +
>hotspare);
>- 11x 2.4TB 10K RPM SAS for the vm data domain (Raid6 + hotspare);
>- 4x 960GB SSD SAS for an additional SSD storage domain (Raid5 +
>hotspare);
>
>Is this configuration supported, or do I have to change something?
>Thank you and Best Regards.
>  --
>Benedetto Vassallo
>Responsabile U.O. Sviluppo e manutenzione dei sistemi
>Sistema Informativo di Ateneo
>Università degli studi di Palermo
>
>Phone: +3909123860056
>Fax: +3909123860880


Hi,

Recently it was mentioned that there were some issues with the 'too new' EPYC CPUs.
For now, you can do one of the following:
1. Use some older machines for the initial setup of the HostedEngine VM (disable all
Spectre/Meltdown mitigations in advance), then add the new EPYC-based hosts and
remove the older systems. Sadly, the older systems cannot be too old :)

2. Host the HostedEngine VM on your current VMware environment or on a separate
KVM host. Hosting the HostedEngine on bare metal is also OK.

3. Wait (I don't know how long) until the EPYC issues are solved.


Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U5VB2ZZZA5J2EVGT3N4OPPF6RFL7MSBK/