On 10/07/2013 02:35 PM, Itamar Heim wrote:
On 10/07/2013 06:13 PM, Jason Keltz wrote:
I've been experimenting with oVirt 3.2 on some old hardware, and am now
preparing to buy new hardware for using oVirt 3.3 in production.  I'm
interested in any feedback about what I plan to purchase.  I want to
keep the setup as simple as possible.  Our current environment consists
of mostly CentOS 6.4 systems.

The combined oVirt engine and file server will be a Dell R720 with dual
Xeon E5-2660 and 64 GB of memory.  The server would have an LSI 9207-8i
HBA connected to the SAS backplane.  The R720 enclosure has 16 x 2.5"
disk slots.  I would get 2 x 500 GB NLSAS drives for a mirrored md root
(RAID1), use 12 slots for RAID10 SAS 10K rpm drives (either 600 GB or
900 GB), and keep the remaining 2 as spares.  The data storage would
hold the virtual machines and some associated data.  The O/S would be
CentOS 6.4.
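
To make the md layout concrete, it would be something like the
following (device names are placeholders and will depend on how the
backplane enumerates the drives):

   # mirrored root on the 2 x 500 GB NLSAS drives
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
   # RAID10 across 12 SAS drives, with the remaining 2 as hot spares
   mdadm --create /dev/md1 --level=10 --raid-devices=12 \
       --spare-devices=2 /dev/sd[c-p]1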

The nodes would be 3 x Dell R620, dual Xeon E5-2690, 128 GB memory, each
with just a single, small NLSAS root drive.  There would be no other
local storage.  All VMs would use the file server as the datastore.  The
nodes would run oVirt node.

In terms of networking, each machine would have 4 ports - 2 x 1 Gb
(bonded) giving the machines access to the "public" network (which we
do not control).  The 2 x 10 Gb copper ports would be connected to a
locally installed 10G copper switch that we fully control - 1 port used
for storage, and 1 for management/consoles/VM migration.
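
As a sketch, the 1 Gb bond on CentOS 6.4 would look roughly like this
(interface names, addressing, and the bonding mode are assumptions -
802.3ad would need switch support, so active-backup is shown):

   # /etc/sysconfig/network-scripts/ifcfg-bond0
   DEVICE=bond0
   ONBOOT=yes
   BOOTPROTO=dhcp
   BONDING_OPTS="mode=active-backup miimon=100"

   # /etc/sysconfig/network-scripts/ifcfg-em1 (and likewise for em2)
   DEVICE=em1
   MASTER=bond0
   SLAVE=yes
   ONBOOT=yes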

A few additional notes ...

I chose to stick with software RAID (md) on the file server, mostly
for cost and simplicity.  I have a lot of good experience with md, and
performance seems reasonable.

I would have gone with an SSD for the file server root disk, but the
cost of Dell's SSDs is prohibitive, and I want the whole system to be
covered by the warranty.  NLSAS is the cheapest disk that will be
supported for the duration of the warranty period (with Dell servers,
SATA drives are only warranted for 1 year).

The nodes with 1 NLSAS drive... I've thought about replacing that with
simply an SD card.  It's not clear if this is the best solution, or how
much space I would need on the card.  At least when I configure via the
Dell web site, the biggest SD card I can purchase with a server is
2 GB, which doesn't seem like very much!  I guess people buy bigger
cards separately.  I know a disk will work, and will give me more than
enough space with no hassle.

I've chosen to keep the setup simple by using NFS on the file server,
but I see a whole lot of people here experimenting with the new Gluster
capabilities in oVirt 3.3.  It's not clear whether that's being used in
production, or how reliable it would be.  I really can't find
information on performance tests, etc. with Gluster and oVirt - in
particular, any comparison of NFS and Gluster.

Gluster is still not available for CentOS 6.4, and there are still some
issues with snapshots around it for libgfapi.
For POSIXFS, it has been supported since 3.2.

OK - I guess it's probably best that I stick with NFS this time around.
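
For the NFS side, my understanding is that an oVirt data domain just
needs an export owned by vdsm:kvm (36:36), so the file server end would
be roughly this (the path is a placeholder):

   mkdir -p /exports/vmstore
   chown 36:36 /exports/vmstore
   # add to /etc/exports:
   /exports/vmstore  *(rw,sync,no_subtree_check)
   exportfs -ra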

Would there be a performance advantage to using Gluster here?  How
would it work?  By adding disks to the nodes and getting rid of the
file server (or at least turning the file server into a smaller,
engine-only server)?  How would this impact the nodes' ability to
handle VMs (performance)?  I presently have no experience with Gluster
whatsoever, though I'm certainly never against learning something new,
especially if it would benefit my project.  Unfortunately, as I'm sure
everyone can attest, the trouble is just finding enough hours in the
day :)  One thing is for sure - Gluster itself, while maybe not TOO
complicated, is still more complicated than an NFS-only setup.

I don't have details on this, and hope others do.
But you are correct: it's an entirely different deployment architecture
between a central NFS server and distributed storage on the nodes.
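
Roughly: instead of one central export, each node contributes a local
brick to a replicated Gluster volume - as a sketch (hostnames and brick
paths below are just examples):

   gluster peer probe node2
   gluster peer probe node3
   gluster volume create vmstore replica 3 \
       node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
   gluster volume start vmstore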

It would be helpful if the documentation for oVirt had more information on this.


As I've mentioned before, we don't use LDAP for authentication, so I'll
be restricted to one admin user for the moment unless I set up a
separate infrastructure for oVirt authentication.  That will be fine
for a little while.  I understand that work may be underway on
pluggable authentication for oVirt.  I'm not sure if that ties into any
of the items on Itamar's list though.  Itamar? :)  I was hoping to see
that pluggable authentication model sooner rather than later so that I
could write something to work with our custom auth system.

Well, you could also launch an OpenLDAP/IPA/AD/etc. server in a VM.  Of
course, if it has issues, you'd need admin@internal to fix it.
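
Attaching the directory to the engine would then be a matter of
engine-manage-domains, something like the following (example values -
verify the exact flags against the version you install):

   engine-manage-domains -action=add -domain=example.com \
       -provider=IPA -user=admin -interactive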

I was thinking of doing this if I had to, but it's still a lot of
headache for a few logins.
Is pluggable authentication coming in a new version of oVirt?

In terms of power management - my existing machines use a Raritan KVM
with Raritan power management dongles and power bars.  I haven't had an
opportunity to see whether oVirt can manage these devices, but I guess
if oVirt can't do it, I can continue to manage power through the KVM
interface.

Are they supported by fence-agents in CentOS?
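
Newer fence-agents releases include a fence_raritan agent (for the
Dominion PX); whether the CentOS 6.4 package ships it is worth
checking - e.g. (address, outlet number, and login are placeholders):

   yum install fence-agents
   rpm -ql fence-agents | grep -i raritan
   # if present, query an outlet's status:
   fence_raritan -a pdu.example.com -n 1 -l admin -p secret -o status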

I've never tried.  I don't often need to power off hosts the hard
way... a reboot is usually fine.  When I do need to power-manage a
host, I go into the Raritan KVM, click on the host, turn it off and
back on, and everything is fine.  I haven't hooked the power management
up to anything on the Linux side.


Any feedback would be much appreciated.

With your experience with oVirt, any feedback on the hardware/NFS server combination?

Jason.
