Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-31 Thread Vincent Royer
Lots of great hardware knowledge in this thread!  I'm also making the move
to 10Gb.

I'm adding a 3rd host to my deployment and moving to GlusterFS on 3 nodes,
from my current NFS share on a separate storage server.

Each of the 3 nodes has dual E5-2640 v4s with 128 GB RAM. I have some
hardware choices I would love some advice about:

- Should I use Intel X520-DA2 or X710-DA2 NICs for the storage network?  No
significant price difference.  The hosts are running oVirt Node 4.2.  I
hope to use them in bridge mode so that I don't need a 10GbE switch.  I do
have a single 10GbE port left on my router.

- The hosts have 12 Gbps 520i SAS cards; should I spec 6 or 12 Gbps SSD
drives?  Here there is a large price difference, and also a large difference
between the Enterprise performance, Enterprise mainstream, and Enterprise
entry tiers.  I'm not sure how to estimate the value of those options
in a GlusterFS deployment.

The workload is pretty I/O intensive, with fairly small read/write
operations (under 128 KB) on Windows VMs.
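One way to ground that estimate is to benchmark a candidate drive with fio at a similar small block size before committing to a tier. A rough sketch (the device path is a placeholder, and this mixed read/write test will destroy data on the target disk, so point it only at a drive you can wipe):

```shell
# Hypothetical 70/30 random read/write test at 64k blocks, roughly
# matching the "under 128 KB" workload described above.
fio --name=smallblock --filename=/dev/sdX \
    --rw=randrw --rwmixread=70 --bs=64k \
    --ioengine=libaio --iodepth=32 --numjobs=4 \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Comparing the IOPS and latency numbers between a 6 Gbps and a 12 Gbps drive on the same 520i controller should show whether the faster interface actually pays off at these block sizes.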

Any obvious weak links with this plan?


On Sat, Mar 31, 2018, 12:27 PM Michael Watters wrote:

> We run Dell PowerEdge R720s and R730s with 32 GB of RAM and quad Xeon
> processors.  Storage is provided by Dell MD3800i and Promise arrays
> using iSCSI.  The network is all 10 gigabit interfaces using 802.3ad
> bonds.  We actually just upgraded from 1 gigabit NICs since there were
> some performance issues with storage causing high IOwait on VMs.  I'd
> recommend avoiding 1 gigabit if you can.
>
>
> On 3/24/18 4:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this, but I was wondering
> > which hardware you are all using, and why, so I can see what I would need.
> >
> > I would like to set up an HA cluster consisting of 3 hosts to be able to
> > run 30 VMs.
> > The engine I can run on another server. The hosts can be fitted with
> > storage and share the space through GlusterFS. I would think I will need
> > at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
> > sufficient?)
> >
> > Any input you guys would like to share would be greatly appreciated.
> >
> > Thanks,
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>


Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-31 Thread Michael Watters
We run Dell PowerEdge R720s and R730s with 32 GB of RAM and quad Xeon
processors.  Storage is provided by Dell MD3800i and Promise arrays
using iSCSI.  The network is all 10 gigabit interfaces using 802.3ad
bonds.  We actually just upgraded from 1 gigabit NICs since there were
some performance issues with storage causing high IOwait on VMs.  I'd
recommend avoiding 1 gigabit if you can.
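For reference, an 802.3ad bond like the one described can be sketched with NetworkManager on a CentOS/RHEL host roughly as follows (interface names eno1/eno2 are placeholders, the switch ports must also be configured for LACP, and in practice oVirt can build the bond from the Administration Portal instead):

```shell
# Create the bond and enslave two 10GbE ports (hypothetical names).
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
nmcli con add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname eno2 master bond0
nmcli con up bond0

# Verify the negotiated LACP state.
cat /proc/net/bonding/bond0
```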


On 3/24/18 4:33 AM, Andy Michielsen wrote:
> Hi all,
>
> Not sure if this is the place to be asking this, but I was wondering
> which hardware you are all using, and why, so I can see what I would need.
>
> I would like to set up an HA cluster consisting of 3 hosts to be able to
> run 30 VMs.
> The engine I can run on another server. The hosts can be fitted with
> storage and share the space through GlusterFS. I would think I will need
> at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
> sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.
>
> Thanks,



Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-30 Thread Yaniv Kaul
On Mon, Mar 26, 2018, 7:04 PM Christopher Cox  wrote:

> On 03/24/2018 03:33 AM, Andy Michielsen wrote:
> > Hi all,
> >
> > Not sure if this is the place to be asking this, but I was wondering
> > which hardware you are all using, and why, so I can see what I would need.
> >
> > I would like to set up an HA cluster consisting of 3 hosts to be able to
> > run 30 VMs.
> > The engine I can run on another server. The hosts can be fitted with
> > storage and share the space through GlusterFS. I would think I will need
> > at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
> > sufficient?)
>
> Just because you asked, but not because this is helpful to you
>
> But first, a comment on "3 hosts to be able to run 30 VMs".  The SPM
> node shouldn't run a lot of VMs.  There are settings (the setting slips
> my mind) on the engine to give it a "virtual set" of VMs in order to
> keep VMs off of it.
>
> With that said, CPU wise, it doesn't require a lot to run 30 VM's.  The
> costly thing is memory (in general).  So while a cheap set of 3 machines
> might handle the CPU requirements of 30 VM's, those cheap machines might
> not be able to give you the memory you need (depends).  You might be
> fine.  I mean, there are cheap desktop like machines that do 64G (and
> sometimes more).  Just something to keep in mind.  Memory and storage
> will be the most costly items.  It's simple math.  Linux hosts, of
> course, don't necessarily need much memory (or storage).  But Windows...
>
> 1Gbit NIC's are "ok", but again, depends on storage.  Glusterfs is no
> speed demon.  But you might not need "fast" storage.
>
> Lastly, your setup is just for "fun", right?  Otherwise, read on.
>
>
> Running oVirt 3.6 (this is a production setup)
>
> ovirt engine (manager):
> Dell PowerEdge 430, 32G
>
> ovirt cluster nodes:
> Dell m1000e 1.1 backplane Blade Enclosure
> 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all
> 10GbE, CentOS 7.2
> 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)
>
> 120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
> We've run on as few as 3 nodes.
>
> Network, SAN and Storage (for ovirt Domains):
> 2 x S4810 (part is used for SAN, part for LAN)
> Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K
> SAS)
> Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS)
>
> ISO and Export Domains are handled by:
> Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above),
> CentOS 7.4, NFS
>
> What I like:
> * Easy setup.
> * Relatively good network and storage.
>
> What I don't like:
> * 2 "effective" networks, LAN and iSCSI.  All networking uses the same
> effective path.  Would be nice to have more physical isolation for mgmt
> vs motion vs VMs.  QoS is provided in oVirt, but still, would be nice to
> have the full pathways.
> * Storage doesn't use active/active controllers, so controller failover
> is VERY slow.
> * We have a fast storage system, and somewhat slower storage system
> (matter of IOPS),  neither is SSD, so there isn't a huge difference.  No
> real redundancy or flexibility.
> * vdsm can no longer respond fast enough for the amount of disks defined
> (in the event of a new Storage Domain add).  We have raised vdsTimeout,
> but have not tested yet.
>

We have substantially changed and improved VDSM for better scale since 3.6.
How many disks are defined, in how many storage domains and LUNs?
(also the OS itself has improved).
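For anyone following along, the vdsTimeout change mentioned above is made with engine-config on the engine host; a sketch (180 seconds is an arbitrary example, not a recommendation):

```shell
engine-config -g vdsTimeout        # show the current value (seconds)
engine-config -s vdsTimeout=180    # set a new value
systemctl restart ovirt-engine     # the engine must be restarted to apply it
```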


> I inherited the "style" above.  My recommendation of where to start for
> a reasonable production instance, minimum (assumes the S4810's above,
> not priced here):
>
> 1 x ovirt manager/engine, approx $1500
>

What about high availability for the engine?

> 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
> 3 x Nexsan 18P 108TB, approx $96K
>

Alternatively, how many reasonable SSDs can you buy? A Samsung 860 EVO 4TB
costs $1,300 on Amazon (US). You could buy tens (70+) of those and be left
with some change.
Could you instead use them in a fast storage setup?
https://www.backblaze.com/blog/open-source-data-storage-server/ for example
is interesting.


> While significantly cheaper (by 6 figures), it provides active/active
> controllers, storage reliability and flexibility and better network
> pathways.  Why 4 x nodes?  Need at least N+1 for reliability.  The extra
> 4th node is merely capacity.  Why 3 x storage?  Need at least N+1 for
> reliability.
>

Are they running in some cluster?


> Obviously, you'll still want to back things up and test the ability to
> restore components like the ovirt engine from scratch.
>

+1.
Y.


> Btw, my recommended minimum above is regardless of hypervisor cluster
> choice (could be VMware).

Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-28 Thread Andy Michielsen
Hello Christopher,

Thank you very much for sharing.

It started out just for fun, but now people at work are looking to me to provide
an environment to do testing, simulate problems they have encountered, etc.

And more and more of them see the benefits of this. At work we are running
VMware, but that was far too expensive to use for these tests. As I suspected
at the beginning, I knew I had to be able to expand, so whenever an old server
was decommissioned from production I converted it to a node. I now have 4 in
use and demand keeps growing.

So now I want to ask my boss to invest in new hardware, as people are now
asking me why I do not have proper backups, and even why they cannot use the
VMs when I perform administrative tasks or upgrades.

So that's why I'm very interested in what others are using.

Kind regards.

> On 26 Mar 2018, at 18:03, Christopher Cox  wrote:
> 
>> On 03/24/2018 03:33 AM, Andy Michielsen wrote:
>> Hi all,
>> Not sure if this is the place to be asking this, but I was wondering
>> which hardware you are all using, and why, so I can see what I would need.
>> I would like to set up an HA cluster consisting of 3 hosts to be able to
>> run 30 VMs.
>> The engine I can run on another server. The hosts can be fitted with
>> storage and share the space through GlusterFS. I would think I will need
>> at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
>> sufficient?)
> 
> Just because you asked, but not because this is helpful to you
> 
> But first, a comment on "3 hosts to be able to run 30 VMs".  The SPM node 
> shouldn't run a lot of VMs.  There are settings (the setting slips my mind) 
> on the engine to give it a "virtual set" of VMs in order to keep VMs off of 
> it.
> 
> With that said, CPU wise, it doesn't require a lot to run 30 VM's.  The 
> costly thing is memory (in general).  So while a cheap set of 3 machines 
> might handle the CPU requirements of 30 VM's, those cheap machines might not 
> be able to give you the memory you need (depends).  You might be fine.  I 
> mean, there are cheap desktop like machines that do 64G (and sometimes more). 
>  Just something to keep in mind.  Memory and storage will be the most costly 
> items.  It's simple math.  Linux hosts, of course, don't necessarily need 
> much memory (or storage).  But Windows...
> 
> 1Gbit NIC's are "ok", but again, depends on storage.  Glusterfs is no speed 
> demon.  But you might not need "fast" storage.
> 
> Lastly, your setup is just for "fun", right?  Otherwise, read on.
> 
> 
> Running oVirt 3.6 (this is a production setup)
> 
> ovirt engine (manager):
> Dell PowerEdge 430, 32G
> 
> ovirt cluster nodes:
> Dell m1000e 1.1 backplane Blade Enclosure
> 9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 10GbE, 
> CentOS 7.2
> 4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)
> 
> 120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
> We've run on as few as 3 nodes.
> 
> Network, SAN and Storage (for ovirt Domains):
> 2 x S4810 (part is used for SAN, part for LAN)
> Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K SAS)
> Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS)
> 
> ISO and Export Domains are handled by:
> Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), CentOS 
> 7.4, NFS
> 
> What I like:
> * Easy setup.
> * Relatively good network and storage.
> 
> What I don't like:
> * 2 "effective" networks, LAN and iSCSI.  All networking uses the same 
> effective path.  Would be nice to have more physical isolation for mgmt vs 
> motion vs VMs.  QoS is provided in oVirt, but still, would be nice to have 
> the full pathways.
> * Storage doesn't use active/active controllers, so controller failover is 
> VERY slow.
> * We have a fast storage system, and somewhat slower storage system (matter 
> of IOPS),  neither is SSD, so there isn't a huge difference.  No real 
> redundancy or flexibility.
> * vdsm can no longer respond fast enough for the amount of disks defined (in 
> the event of a new Storage Domain add).  We have raised vdsTimeout, but have 
> not tested yet.
> 
> I inherited the "style" above.  My recommendation of where to start for a 
> reasonable production instance, minimum (assumes the S4810's above, not 
> priced here):
> 
> 1 x ovirt manager/engine, approx $1500
> 4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
> 3 x Nexsan 18P 108TB, approx $96K
> 
> While significantly cheaper (by 6 figures), it provides active/active 
> controllers, storage reliability and flexibility and better network pathways. 
>  Why 4 x nodes?  Need at least N+1 for reliability.  The extra 4th node is 
> merely capacity.  Why 3 x storage?  Need at least N+1 for reliability.
> 
> Obviously, you'll still want to back things up and test the ability to 
> restore components like the ovirt engine from scratch.

Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-26 Thread Christopher Cox

On 03/24/2018 03:33 AM, Andy Michielsen wrote:

Hi all,

Not sure if this is the place to be asking this, but I was wondering which 
hardware you are all using, and why, so I can see what I would need.

I would like to set up an HA cluster consisting of 3 hosts to be able to run 30 
VMs.
The engine I can run on another server. The hosts can be fitted with storage 
and share the space through GlusterFS. I would think I will need at least 
3 NICs, but would be able to install OVN. (Are 1 Gb NICs sufficient?)


Just because you asked, but not because this is helpful to you

But first, a comment on "3 hosts to be able to run 30 VMs".  The SPM 
node shouldn't run a lot of VMs.  There are settings (the setting slips 
my mind) on the engine to give it a "virtual set" of VMs in order to 
keep VMs off of it.


With that said, CPU wise, it doesn't require a lot to run 30 VM's.  The 
costly thing is memory (in general).  So while a cheap set of 3 machines 
might handle the CPU requirements of 30 VM's, those cheap machines might 
not be able to give you the memory you need (depends).  You might be 
fine.  I mean, there are cheap desktop like machines that do 64G (and 
sometimes more).  Just something to keep in mind.  Memory and storage 
will be the most costly items.  It's simple math.  Linux hosts, of 
course, don't necessarily need much memory (or storage).  But Windows...


1Gbit NIC's are "ok", but again, depends on storage.  Glusterfs is no 
speed demon.  But you might not need "fast" storage.


Lastly, your setup is just for "fun", right?  Otherwise, read on.


Running oVirt 3.6 (this is a production setup)

ovirt engine (manager):
Dell PowerEdge 430, 32G

ovirt cluster nodes:
Dell m1000e 1.1 backplane Blade Enclosure
9 x M630 Blades (2xE5-2669v3, 384GB), 4 iSCSI paths, 4 bonded LAN, all 
10GbE, CentOS 7.2

4 x MXL 10/40GbE (2x40Gbit LAN, 2x40Gbit iSCSI SAN to the S4810's)

120 VM's, CentOS 6, CentOS 7, Windows 10 Ent., Windows Server 2012
We've run on as few as 3 nodes.

Network, SAN and Storage (for ovirt Domains):
2 x S4810 (part is used for SAN, part for LAN)
Equallogic dual controller (note: passive/active) PS6610S (84 x 4TB 7.2K 
SAS)

Equallogic dual controller (note: passive/active) PS6610X (84 x 1TB 10K SAS)

ISO and Export Domains are handled by:
Dell PE R620, 32G, 2x10Gbit LAN, 2x10Gbit iSCSI to the SAN (above), 
CentOS 7.4, NFS


What I like:
* Easy setup.
* Relatively good network and storage.

What I don't like:
* 2 "effective" networks, LAN and iSCSI.  All networking uses the same 
effective path.  Would be nice to have more physical isolation for mgmt 
vs motion vs VMs.  QoS is provided in oVirt, but still, would be nice to 
have the full pathways.
* Storage doesn't use active/active controllers, so controller failover 
is VERY slow.
* We have a fast storage system, and somewhat slower storage system 
(matter of IOPS),  neither is SSD, so there isn't a huge difference.  No 
real redundancy or flexibility.
* vdsm can no longer respond fast enough for the amount of disks defined 
(in the event of a new Storage Domain add).  We have raised vdsTimeout, 
but have not tested yet.


I inherited the "style" above.  My recommendation of where to start for 
a reasonable production instance, minimum (assumes the S4810's above, 
not priced here):


1 x ovirt manager/engine, approx $1500
4 x Dell R620, 2xE5-2660, 768G, 6x10GbE (LAN, Storage, Motion), approx $42K
3 x Nexsan 18P 108TB, approx $96K

While significantly cheaper (by 6 figures), it provides active/active 
controllers, storage reliability and flexibility and better network 
pathways.  Why 4 x nodes?  Need at least N+1 for reliability.  The extra 
4th node is merely capacity.  Why 3 x storage?  Need at least N+1 for 
reliability.


Obviously, you'll still want to back things up and test the ability to 
restore components like the ovirt engine from scratch.


Btw, my recommended minimum above is regardless of hypervisor cluster 
choice (could be VMware).



Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-25 Thread Juan Pablo
Andy, I'm using a 2-node cluster:
- 2x Supermicro 6017 (2x Intel 2420, 12C/24T per node), 384 GB RAM total, 10GbE.
All hosted-engine via NFS.

Storage side: 2x SC836BE16-R1K28B (192 GB ARC cache) with RAID-10 ZFS + an
Intel SLOG, serving iSCSI at 10GbE.
80 VMs, more or less.

regards,



2018-03-25 4:36 GMT-03:00 Andy Michielsen :

> Hello Alex,
>
> Thanks for sharing. Much appreciated.
>
> I believe my setup would need 96 GB of RAM in each host, and would need
> at least 3 TB of storage. Probably 4 TB would be better if I want to
> work with snapshots. (Will be running mostly Windows 2016 servers or
> Windows 10 desktops with 6 GB of RAM and 100 GB of disk each.)
>
> I agree that a 10 Gb network for storage would be very beneficial.
>
> Now, if I can figure out how to set up GlusterFS on a 3-node cluster in
> oVirt 4.2 just for the data storage, I'm golden to get started. :-)
>
> Kind regards.
>
> On 24 Mar 2018, at 20:08, Alex K  wrote:
>
> I have 2- and 3-node clusters with the following hardware (all with
> self-hosted engine):
>
> 2 node cluster:
> RAM: 64 GB per host
> CPU: 8 cores per host
> Storage: 4x 1TB SAS in RAID10
> NIC: 2x Gbit
> VMs: 20
>
> The above, although I would have liked a third NIC for gluster
> storage redundancy, has been running smoothly for quite some time without
> performance issues.
> The VMs it is running are not high on IO (mostly small Linux servers).
>
> 3 node clusters:
> RAM: 32 GB per host
> CPU: 16 cores per host
> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space
> without purchasing extra disks)
> NIC: 6x Gbit
> VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)
>
> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at
> least a dual 10Gbit NIC dedicated to the gluster traffic only.
>
> Alex
>
>
> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen <
> andy.michiel...@gmail.com> wrote:
>
>> Hello Andrei,
>>
>> Thank you very much for sharing info on your hardware setup. Very
>> informative.
>>
>> At this moment I have my ovirt engine on our vmware environment which is
>> fine for good backup and restore.
>>
>> I have 4 nodes running now all different in make and model with local
>> storage and it works but lacks performance a bit.
>>
>> But I can get my hands on some old Dell R415s with 96 GB of RAM, 2
>> quad-cores, and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm
>> hard disks. That isn't bad, but I will add more RAM for starters. I would
>> also like to have some good redundant storage for this, and the servers
>> have limited space to add that.
>>
>> Hopefully others will also share their setups and experience like you did.
>>
>> Kind regards.
>>
>> On 24 Mar 2018, at 10:35, Andrei Verovski  wrote:
>>
>> Hi,
>>
>> HL ProLiant DL380, dual Xeon
>> 120 GB RAID L1 for system
>> 2 TB RAID L10 for VM disks
>> 5 VMs, 3 Linux, 2 Windows
>> Total CPU load is low most of the time; the high level of activity is
>> related to disk.
>> Host engine under KVM appliance on SuSE, can be easily moved, backed up,
>> copied, experimented with, etc.
>>
>> You'll have to use servers with more RAM and storage than mine.
>> More than one NIC is required if some of your VMs are on different subnets,
>> e.g. one in an internal zone and a second on a DMZ.
>> For your setup, 10 Gb NICs + an L3 switch for ovirtmgmt.
>>
>> BTW, I would suggest to have several separate hardware RAIDs unless you
>> have SSD, otherwise limit of the disk system I/O will be a bottleneck.
>> Consider SSD L1 RAID for heavy-loaded databases.
>>
>> *Please note many cheap SSDs do NOT work reliably with SAS controllers
>> even in SATA mode*.
>>
>> For example, I intended to use 2 x WD Green SSDs configured as RAID L1 for
>> the OS.
>> It was possible to install the system, yet under a heavy load simulated with
>> iozone the disk subsystem froze, rendering the OS unbootable.
>> The same crash was experienced with a 512GB KingFast SSD connected to a
>> Broadcom/AMCC SAS RAID card.
>>
>>
>> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>>
>> Hi all,
>>
>> Not sure if this is the place to be asking this, but I was wondering which
>> hardware you are all using, and why, so I can see what I would need.
>>
>> I would like to set up an HA cluster consisting of 3 hosts to be able to
>> run 30 VMs.
>> The engine I can run on another server. The hosts can be fitted with
>> storage and share the space through GlusterFS. I would think I will need
>> at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
>> sufficient?)
>>
>> Any input you guys would like to share would be greatly appreciated.
>>
>> Thanks,
>>
>>
>>
>>
>>
>
> 

Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-25 Thread Andy Michielsen
Hello Alex,

Thanks for sharing. Much appreciated.

I believe my setup would need 96 GB of RAM in each host, and would need
at least 3 TB of storage. Probably 4 TB would be better if I want to work with 
snapshots. (Will be running mostly Windows 2016 servers or Windows 10 desktops 
with 6 GB of RAM and 100 GB of disk each.)
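The arithmetic behind those figures can be sketched as follows (assuming a replica-3 Gluster layout where every host stores a full copy of each disk, and sizing RAM so the VMs still fit with one host down):

```shell
# Capacity math for the proposed cluster; all inputs from the message above.
VMS=30
RAM_PER_VM_GB=6
DISK_PER_VM_GB=100
HOSTS=3

TOTAL_RAM_GB=$((VMS * RAM_PER_VM_GB))             # 180 GB across the cluster
RAM_PER_HOST_GB=$((TOTAL_RAM_GB / (HOSTS - 1)))   # 90 GB per host with one host down (N+1)
DISK_PER_HOST_TB=$((VMS * DISK_PER_VM_GB / 1000)) # 3 TB per host, since replica 3 keeps a full copy

echo "RAM per host (N+1): ${RAM_PER_HOST_GB} GB"
echo "Disk per host:      ${DISK_PER_HOST_TB} TB"
```

which lands close to the 96 GB / 3-4 TB estimate once you add headroom for the hosts themselves and for snapshots.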

I agree that a 10 Gb network for storage would be very beneficial.

Now, if I can figure out how to set up GlusterFS on a 3-node cluster in oVirt 
4.2 just for the data storage, I'm golden to get started. :-)
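In oVirt 4.2 the Cockpit hyperconverged wizard can deploy this, but the manual equivalent is roughly the following (host names node1-node3 and the brick path are placeholders; run from one node after the brick filesystems are mounted on all three):

```shell
gluster peer probe node2
gluster peer probe node3

# Replica-3 data volume: every host keeps a full copy.
gluster volume create data replica 3 \
    node1:/gluster/data/brick \
    node2:/gluster/data/brick \
    node3:/gluster/data/brick

# Apply the "virt" tuning group shipped for VM workloads, then start it.
gluster volume set data group virt
gluster volume start data
```

The volume can then be attached in the oVirt UI as a GlusterFS storage domain.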

Kind regards.

> On 24 Mar 2018, at 20:08, Alex K  wrote:
> 
> I have 2- and 3-node clusters with the following hardware (all with
> self-hosted engine):
> 
> 2 node cluster: 
> RAM: 64 GB per host
> CPU: 8 cores per host
> Storage: 4x 1TB SAS in RAID10
> NIC: 2x Gbit
> VMs: 20
> 
> The above, although I would have liked a third NIC for gluster storage
> redundancy, has been running smoothly for quite some time without
> performance issues.
> The VMs it is running are not high on IO (mostly small Linux servers). 
> 
> 3 node clusters: 
> RAM: 32 GB per host
> CPU: 16 cores per host
> Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space 
> without purchasing extra disks)
> NIC: 6x Gbit
> VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)
> 
> For your setup (30 VMs) I would rather go with RAID10 SAS disks and at least 
> a dual 10Gbit NIC dedicated to the gluster traffic only. 
> 
> Alex
> 
> 
>> On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen  
>> wrote:
>> Hello Andrei,
>> 
>> Thank you very much for sharing info on your hardware setup. Very 
>> informative.
>> 
>> At this moment I have my ovirt engine on our vmware environment which is 
>> fine for good backup and restore.
>> 
>> I have 4 nodes running now all different in make and model with local 
>> storage and it works but lacks performance a bit.
>> 
>> But I can get my hands on some old Dell R415s with 96 GB of RAM, 2
>> quad-cores, and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm
>> hard disks. That isn't bad, but I will add more RAM for starters. I would
>> also like to have some good redundant storage for this, and the servers
>> have limited space to add that.
>> 
>> Hopefully others will also share their setups and experience like you did.
>> 
>> Kind regards.
>> 
>>> On 24 Mar 2018, at 10:35, Andrei Verovski  wrote:
>>> 
>>> Hi,
>>> 
>>> HL ProLiant DL380, dual Xeon
>>> 120 GB RAID L1 for system
>>> 2 TB RAID L10 for VM disks
>>> 5 VMs, 3 Linux, 2 Windows
>>> Total CPU load is low most of the time; the high level of activity is
>>> related to disk.
>>> Host engine under KVM appliance on SuSE, can be easily moved, backed up, 
>>> copied, experimented with, etc.
>>> 
>>> You'll have to use servers with more RAM and storage than mine.
>>> More than one NIC is required if some of your VMs are on different subnets,
>>> e.g. one in an internal zone and a second on a DMZ.
>>> For your setup, 10 Gb NICs + an L3 switch for ovirtmgmt.
>>> 
>>> BTW, I would suggest to have several separate hardware RAIDs unless you 
>>> have SSD, otherwise limit of the disk system I/O will be a bottleneck. 
>>> Consider SSD L1 RAID for heavy-loaded databases.
>>> 
>>> Please note many cheap SSDs do NOT work reliably with SAS controllers even 
>>> in SATA mode.
>>> 
>>> For example, I intended to use 2 x WD Green SSDs configured as RAID L1 for
>>> the OS.
>>> It was possible to install the system, yet under a heavy load simulated
>>> with iozone the disk subsystem froze, rendering the OS unbootable.
>>> The same crash was experienced with a 512GB KingFast SSD connected to a
>>> Broadcom/AMCC SAS RAID card.
>>> 
>>> 
 On 03/24/2018 10:33 AM, Andy Michielsen wrote:
 Hi all,
 
 Not sure if this is the place to be asking this, but I was wondering which
 hardware you are all using, and why, so I can see what I would need.

 I would like to set up an HA cluster consisting of 3 hosts to be able to
 run 30 VMs.
 The engine I can run on another server. The hosts can be fitted with
 storage and share the space through GlusterFS. I would think I will need
 at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
 sufficient?)

 Any input you guys would like to share would be greatly appreciated.

 Thanks,
>>> 
>> 
>> 
> 


Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-24 Thread Alex K
I have 2- and 3-node clusters with the following hardware (all with
self-hosted engine):

2 node cluster:
RAM: 64 GB per host
CPU: 8 cores per host
Storage: 4x 1TB SAS in RAID10
NIC: 2x Gbit
VMs: 20

The above, although I would have liked a third NIC for gluster
storage redundancy, has been running smoothly for quite some time without
performance issues.
The VMs it is running are not high on IO (mostly small Linux servers).

3 node clusters:
RAM: 32 GB per host
CPU: 16 cores per host
Storage: 5x 600GB in RAID5 (not ideal but I had to gain some storage space
without purchasing extra disks)
NIC: 6x Gbit
VMs: fewer than 10 large Windows VMs (Windows Server 2016 and Windows 10)

For your setup (30 VMs) I would rather go with RAID10 SAS disks and at
least a dual 10Gbit NIC dedicated to the gluster traffic only.

Alex


On Sat, Mar 24, 2018 at 1:24 PM, Andy Michielsen 
wrote:

> Hello Andrei,
>
> Thank you very much for sharing info on your hardware setup. Very
> informative.
>
> At this moment I have my ovirt engine on our vmware environment which is
> fine for good backup and restore.
>
> I have 4 nodes running now all different in make and model with local
> storage and it works but lacks performance a bit.
>
> But I can get my hands on some old Dell R415s with 96 GB of RAM, 2
> quad-cores, and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm
> hard disks. That isn't bad, but I will add more RAM for starters. I would
> also like to have some good redundant storage for this, and the servers
> have limited space to add that.
>
> Hopefully others will also share their setups and experience like you did.
>
> Kind regards.
>
> On 24 Mar 2018, at 10:35, Andrei Verovski  wrote:
>
> Hi,
>
> HL ProLiant DL380, dual Xeon
> 120 GB RAID L1 for system
> 2 TB RAID L10 for VM disks
> 5 VMs, 3 Linux, 2 Windows
> Total CPU load is low most of the time; the high level of activity is
> related to disk.
> Host engine under KVM appliance on SuSE, can be easily moved, backed up,
> copied, experimented with, etc.
>
> You'll have to use servers with more RAM and storage than mine.
> More than one NIC is required if some of your VMs are on different subnets,
> e.g. one in an internal zone and a second on a DMZ.
> For your setup, 10 Gb NICs + an L3 switch for ovirtmgmt.
>
> BTW, I would suggest to have several separate hardware RAIDs unless you
> have SSD, otherwise limit of the disk system I/O will be a bottleneck.
> Consider SSD L1 RAID for heavy-loaded databases.
>
> *Please note many cheap SSDs do NOT work reliably with SAS controllers
> even in SATA mode*.
>
> For example, I intended to use 2 x WD Green SSDs configured as RAID L1 for
> the OS.
> It was possible to install the system, yet under a heavy load simulated with
> iozone the disk subsystem froze, rendering the OS unbootable.
> The same crash was experienced with a 512GB KingFast SSD connected to a
> Broadcom/AMCC SAS RAID card.
>
>
> On 03/24/2018 10:33 AM, Andy Michielsen wrote:
>
> Hi all,
>
> Not sure if this is the place to be asking this, but I was wondering which
> hardware you are all using, and why, so I can see what I would need.
>
> I would like to set up an HA cluster consisting of 3 hosts to be able to
> run 30 VMs.
> The engine I can run on another server. The hosts can be fitted with
> storage and share the space through GlusterFS. I would think I will need
> at least 3 NICs, but would be able to install OVN. (Are 1 Gb NICs
> sufficient?)
>
> Any input you guys would like to share would be greatly appreciated.
>
> Thanks,
>
>
>
>
>


Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-24 Thread Andy Michielsen
Hello Andrei,

Thank you very much for sharing info on your hardware setup. Very informative.

At this moment I have my ovirt engine on our vmware environment which is fine 
for good backup and restore.

I have 4 nodes running now all different in make and model with local storage 
and it works but lacks performance a bit.

But I can get my hands on some old Dell R415s with 96 GB of RAM, 2 quad-core CPUs, 
and 6 x 1 Gb NICs. They all come with 2 x 146 GB 15,000 rpm hard disks. That isn’t 
bad, but I will add more RAM for starters. I would also like some good redundant 
storage for this, and the servers have limited space to add it.

Hopefully others will also share their setups and experience like you did.

Kind regards.



Re: [ovirt-users] Which hardware are you using for oVirt

2018-03-24 Thread Andrei Verovski
Hi,

HP ProLiant DL380, dual Xeon
120 GB RAID L1 for system
2 TB RAID L10 for VM disks
5 VMs, 3 Linux, 2 Windows
Total CPU load is low most of the time; most of the activity is disk-related.
The hosted engine runs as a KVM appliance on SuSE, so it can be easily moved,
backed up, copied, experimented with, etc.

You'll have to use servers with more RAM and storage than mine.
More than one NIC is required if some of your VMs are on different subnets,
e.g. one in an internal zone and a second on a DMZ.
For your setup, 10 Gb NICs + an L3 switch for ovirtmgmt.

BTW, I would suggest several separate hardware RAIDs unless you have SSDs;
otherwise the I/O limit of the disk system will be a bottleneck.
Consider an SSD RAID 1 for heavily loaded databases.

*Please note that many cheap SSDs do NOT work reliably with SAS controllers,
even in SATA mode*.

For example, I had planned to use 2 x WD Green SSDs configured as RAID 1
for the OS.
It was possible to install the system, yet under heavy load simulated with
iozone the disk subsystem froze, rendering the OS unbootable.
The same crash occurred with a 512 GB KingFast SSD connected to a
Broadcom/AMCC SAS RAID card.
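For reference, a heavy-load run of the kind described above can be reproduced
with a single iozone invocation; this is only a sketch, and the mount point,
file size, and record size are placeholders you would adapt to your array:

```shell
# Stress the mounted RAID volume (path is an assumption, not from this thread).
# -I       : use O_DIRECT so the page cache does not mask the disks
# -s 4g    : file size large enough to exceed any controller cache
# -r 128k  : record size in the range of typical VM I/O
# -i 0 -i 1: run the sequential write and read tests
iozone -I -s 4g -r 128k -i 0 -i 1 -f /mnt/test/iozone.tmp
```

If the controller/SSD combination is flaky, a freeze typically shows up here
long before it would in normal operation.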




[ovirt-users] Which hardware are you using for oVirt

2018-03-24 Thread Andy Michielsen
Hi all,

Not sure if this is the place to be asking this, but I was wondering which 
hardware you are all using, and why, so I can see what I would need.

I would like to set up an HA cluster consisting of 3 hosts, able to run 30 VMs.
The engine I can run on another server. The hosts can be fitted with the storage 
and share the space through GlusterFS. I think I will need at least 3 NICs, but I 
would be able to install OVN. (Are 1 Gb NICs sufficient?)
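For a 3-host setup like this, the shared GlusterFS space would be created as a
replica-3 volume along these lines; the hostnames and brick paths below are
placeholders, not details from this thread:

```shell
# Create a replica-3 volume across the three hosts
# (host names and brick paths are assumptions).
gluster volume create vmstore replica 3 \
    host1:/gluster/bricks/vmstore \
    host2:/gluster/bricks/vmstore \
    host3:/gluster/bricks/vmstore

# Apply the virt tuning profile shipped with Gluster for VM workloads,
# then start the volume.
gluster volume set vmstore group virt
gluster volume start vmstore
```

With replica 3, every write goes to all three hosts, which is why the storage
network (not 1 Gb, ideally) tends to be the limiting factor.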

Any input you guys would like to share would be greatly appreciated.

Thanks,