Re: [ovirt-users] Where do you run oVirt? Here are the answers!

2015-03-03 Thread Eyal Edri
Very nice!
Great to see the distribution of OSes;
it can give us a hint on where to focus testing/CI/etc.

e.

- Original Message -
 From: Sandro Bonazzola sbona...@redhat.com
 To: de...@ovirt.org, Users@ovirt.org
 Sent: Tuesday, March 3, 2015 1:56:30 PM
 Subject: [ovirt-users] Where do you run oVirt? Here are the answers!
 

[ovirt-users] Where do you run oVirt? Here are the answers!

2015-03-03 Thread Sandro Bonazzola
Hi,
This is a summary of the 85 responses we got to last month's poll. Thanks to
everyone who answered!

Which distribution are you using for running ovirt-engine?
Fedora 20   8   9%
CentOS 652  61%
CentOS 722  26%
Other   3   4%

Which distribution are you using for your nodes?
Fedora 20   6   7%
CentOS 640  47%
CentOS 731  36%
oVirt Node  6   7%
Other   2   2%

In Other: RHEL 6.6, 7.0, 7.1 and a mixed environment of CentOS 6 and 7.

Do you use Hosted Engine?
Yes 42  49%
No  42  49%
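As a quick sanity check, the percentages above are just each answer count divided by the 85 total responses, rounded to a whole percent. A minimal sketch (counts taken from the tables above):

```python
# Recompute the poll percentages from the raw counts quoted above.
TOTAL = 85  # total responses to the poll

engine = {"Fedora 20": 8, "CentOS 6": 52, "CentOS 7": 22, "Other": 3}
nodes = {"Fedora 20": 6, "CentOS 6": 40, "CentOS 7": 31, "oVirt Node": 6, "Other": 2}
hosted = {"Yes": 42, "No": 42}  # 84 of 85; one respondent skipped the question

def shares(counts):
    # Percentage share of each answer, rounded to a whole percent.
    return {name: round(100 * n / TOTAL) for name, n in counts.items()}

print(shares(engine))  # CentOS 6 comes out at 61%, matching the table
print(shares(nodes))
print(shares(hosted))  # 42/85 rounds to 49% on each side
```

Note the Hosted Engine answers sum to 84, which is why Yes and No are both 49% rather than 50/50.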


Would you like to share more info on your datacenter, VMs, ...? Tell us about it
---

oVirt is so AWESOME! I luv it.
--
We currently run the engine on CentOS 6, as CentOS 7 was not yet supported. We
plan on migrating it to a CentOS 7 machine.
Our nodes are currently CentOS 6, but we are planning to migrate to CentOS 7.
(For the nodes, a checkbox for each distribution would be better than the
radio button, as you can have multiple clusters with different distributions.)

--
FC Storage (Dell md3620f and IBM Blade-S internal SAS storage)

--
Please provide Ceph support and built-in backup tools

--
3 separate virtual RHEV datacenters: test, dev, prod.
We use direct-attach Fibre Channel LUNs for application storage heavily on VMs
to take advantage of the snapshot/restore features on our array.
Hosted Engine in test and dev; physical manager in prod.

--
2 nodes running CentOS 6.6 from SSD (i3, 35 W, 32 GB)
1x NFS datastore
1x ISO datastore
1 node running NFS (i3, 35 W, 8 GB, Dell PERC SAS controller)
All 3 nodes connected via 10 Gb Ethernet
Between 9 and 15 VMs, depending on my tests
Always active
- Zimbra
- ldap/dhcp
- web/php
- devel/php
- pfsense
Develop/test
- Hadoop
- Openstack
- Gluster
- ms...

--
- 4 nodes for KVM, 4 nodes for GlusterFS.
- 1Gigabit for management and 4Gigabit channel bonding for GlusterFS replica.
- Master Storage Domain lives on a replica-3 GlusterFS volume.

--
30 hypervisors, NFS storage over InfiniBand, and a custom portal for task
automation and classroom abstraction via the API

--
We currently use local storage.
Our running VM count is over 100.
I'd like to use the EL7 platform in the future, but I'm uncertain how best to
upgrade everything with minimal downtime.
We currently run ovirt-engine 3.3.3.
We will stick with the EL platform and not switch to a Fedora base, because
we need the improved stability.
We also do not upgrade to dot-zero releases, as these introduced
some breakage (regressions) in the past.
I hope this gets better with future releases.
Keep up the good work!
Sven

--
Storage GlusterFS (Virt+Gluster on Nodes), and FreeNAS via NFS

--
- iSCSI dedicated network
- 2x Poweredge M1000e chassis (so, 2 x 16 blades)

--
Yes, it's NIELIT, a government agency providing various training on virtual environments.

--
Running the production engine on CentOS 6 with CentOS 6 nodes.
Test/staging environment based on CentOS 7 with CentOS 7 nodes; Hosted Engine
on iSCSI.

--
Mix of Dell, HP, UCS for compute
Netapp for NAS, VNX for FC

--
Cloud used for CI purposes, made from about 50 old desktop PCs (and still
growing) with Celerons, i3s, i5s and a few i7s. VMs are light nodes for
Jenkins (2-6 GB / 2-4 cores). Some resources are used for the cloud's services
like VPN, Zabbix, httpd, etc. For storage we use MooseFS!

--
This is a sample config for the few installs I have performed, but ideal for
a small office.
2x nodes - CentOS 6 with SSD boot and 2x 2 TB drives, with 2 Gluster volumes
spread over the two: 1 for VM storage and 1 for file storage
1x engine (planning on changing to hosted)
5x VMs - 2x DNS/DHCP/management, 1x webserver for the intranet, 1x mailserver
and 1x Asterisk PBX


--
I think that we really need more troubleshooting tools and guides more than
anything. There are various logs, but there is no reason why we
shouldn't be publishing some of this information to the engine UI and even
automating certain self-healing.
The absolute most important feature in my mind is getting the ability to
auto-start (restart) VMs after certain failures and to attempt to unlock
disks, etc. VMware does a tremendous amount of that in order to provide
better HA. We need this.

--
We have FC only, using SVC with a DS4700 behind it now. Going to add other
storage too.
This is BYOD.


--
One node cluster with local storage for education, POC etc. at home.

--
No

--
Combined GlusterFS storage and VM-hosting nodes. Will be migrating the engine
to CentOS 7 at some point. Wish libgfapi was properly supported now that it's
feasible.

--
3 x Supermicro A1SAi-2750F nodes (16 GiB RAM + 8 TiB storage + 8x1GiB/s 
Ethernet each) with hyperconverged GlusterFS (doubling as an NFS/CIFS storage
cluster)

--
Running 15 VMs for university apps (LMS/SIS/etc.), using dual FreeIPA (VMs)
and GlusterFS (replicated-distributed, 10G network).

--
Tried to install the hosted engine on CentOS 7, but ran into issues and went
with Fedora 20 instead.