Just this past week we had a node outage on our ESX cluster (we lost both
fiber pass-thru modules on one of our blade chassis).  This placed about 25
guests on each remaining node.  Our underlying storage is RAID5 on an IBM
Shark, and we had absolutely no appreciable disk I/O issues.  So I would
look for the cause of the poor performance outside of ESX itself - perhaps
immature drivers or a less-than-traditional/less-proven storage subsystem.
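
As a quick sanity check (just an illustration - adjust the path and sizes
for your environment), a direct-I/O write/read from inside one of the
guests usually shows whether the storage path itself is the bottleneck:

  # sequential write, then read back, bypassing the guest page cache
  dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
  dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct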



-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Eric Greer
Sent: Thursday, August 27, 2009 5:03 PM
To: Half-Life dedicated Linux server mailing list
Subject: Re: [hlds_linux] srcds virtualized

Strange!

I hadn't heard of or experienced any disk problems with ESXi yet.  Maybe I
just haven't loaded it down enough while using it to notice.

Eric


On Thu, Aug 27, 2009 at 5:58 PM, Valtteri Kiviniemi <
[email protected]> wrote:

> Hi,
>
> ESXi 3.5; I haven't tested 4.0 because Areca only has a beta driver for
> it.
>
> - Valtteri Kiviniemi
>
> Eric Greer wrote:
> > VMware Server or ESXi?
> > Eric
> >
> >
> > On Thu, Aug 27, 2009 at 5:35 PM, Valtteri Kiviniemi <
> > [email protected]> wrote:
> >
> >> Hi,
> >>
> >> You are correct.  But I'm just giving my opinion here, and I think
> >> that Xen is better.
> >>
> >> VMware ESXi is maybe a bit more user-friendly than XenServer 5.5, but
> >> I still don't understand why ESXi is so much slower.  I'm using both
> >> of them because my company sells virtual servers and some customers
> >> want VMware ones.
> >>
> >> I have identical hardware on all machines, but I'm still seeing 30-40%
> >> better performance on Xen virtual servers than on VMware.  I don't
> >> know why, but disk I/O is way better on Xen than on VMware.
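> >>
> >> If anyone wants to put a number on that, one way (illustrative only -
> >> exact flags depend on your fio version) is to run the same small
> >> random-I/O job inside a guest on each hypervisor and compare:
> >>
> >>   fio --name=randwrite --rw=randwrite --bs=4k --size=1G \
> >>       --direct=1 --ioengine=libaio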
> >>
> >> - Valtteri Kiviniemi
> >>
> >> Eric Greer wrote:
> >>> If everyone wants to get technical with all of this nonsense... you
> >>> can run srcds just fine on a VPS - as long as there is enough power.
> >>> Xen quite simply adds another hardware layer that data must pass
> >>> through.  However, we're talking nanoseconds here, people.  It's not
> >>> like another hop on your way to Chicago - just another *virtual*
> >>> device on the way to the hardware and back.  It's like nothing.
> >>> VMware ESXi adds a few more layers as it passes through more virtual
> >>> devices... but it still does not matter.
> >>> A VM can be provisioned with plenty enough power to run any Source
> >>> server just fine.  You just have to give it plenty of dedicated
> >>> resources.
> >>>
> >>> I feel like people start bringing emotions into computing at some
> >>> point.  There aren't any - it's all benchmarks and numbers.  If the
> >>> system can hit the right CPU benchmark and has the memory and
> >>> bandwidth available... it can run the server - simple as that.
> >>>
> >>> A VPS is generally considered 'weaker' because it can share resources
> >>> with other VMs - but it doesn't have to.  If for some reason you
> >>> wanted to give root shell access to a game server customer, you could
> >>> VM them.  Yes, there's a good 100 MB of memory overhead for the
> >>> hypervisor, but it can be worth it.
> >>> Eric
> >>>
> >>>
> >>> On Thu, Aug 27, 2009 at 12:05 PM, Valtteri Kiviniemi <
> >>> [email protected]> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> You should probably read the facts before posting.  Of course it's
> >>>> not exactly the same, but if you knew anything about Xen you would
> >>>> know that the performance difference (between, for example, the
> >>>> 2.6.18-xen and 2.6.18 kernels) is so small that you can't even
> >>>> notice it.
> >>>>
> >>>> Maybe with ESXi you see a greater performance difference compared to
> >>>> bare-metal, but not with Xen.
> >>>>
> >>>> - Valtteri Kiviniemi
> >>>>
> >>>>> Kveri wrote:
> >>>>> Believe me, if you have a paravirtualized environment you don't get
> >>>>> performance equal to bare-metal.  Paravirtualization adds another
> >>>>> layer, and with it overhead.  Maybe the performance is equal in CSS,
> >>>>> but I doubt it.
> >>>>> I'm using full VT on 4x quad-core Xeons with 16 GB RAM and providing
> >>>>> 1000fps 1.6 servers (yes, stable 1000fps, kernel self-patched with
> >>>>> RT and some HZ tweaks), CSS servers at 100 tickrate, and some TF2
> >>>>> servers without any problems.
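> >>>>>
> >>>>> For reference, the relevant kernel bits are roughly along these
> >>>>> lines (a sketch only - the RT option name varies between -rt patch
> >>>>> versions):
> >>>>>
> >>>>>   CONFIG_HZ_1000=y
> >>>>>   CONFIG_HZ=1000
> >>>>>   # from the -rt patch set:
> >>>>>   CONFIG_PREEMPT_RT=y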
> >>>>>
> >>>>> Kveri
> >>>>>
> >>>>> On 25.8.2009, at 20:52, Valtteri Kiviniemi wrote:
> >>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> We are running multiple TF2 servers with Xen 3.4.1, paravirtualized.
> >>>>>> Performance is exactly the same as bare-metal, maybe even better.
> >>>>>> The only downside is that you need a Xen-patched kernel, so to get
> >>>>>> the most stable and working environment you have to use the default
> >>>>>> 2.6.18.8-xen kernel.  Of course you can compile a 1000 Hz domU
> >>>>>> kernel like we have.
> >>>>>>
> >>>>>> There are also pv_ops kernels, which are included in the
> >>>>>> xen-unstable tree.  They are the normal kernel.org kernel with
> >>>>>> patches that make it suitable for the Xen hypervisor.
> >>>>>>
> >>>>>> In my opinion Xen is the best solution for gameserver
> >>>>>> virtualization because it is the fastest.  ESXi virtuals are not
> >>>>>> paravirtualized, so they have slower disk I/O and network
> >>>>>> performance.  They also use more resources.
> >>>>>>
> >>>>>> If you want the same performance as bare-metal you need
> >>>>>> paravirtualized guest operating systems, and Xen is the best
> >>>>>> solution for that.
> >>>>>>
> >>>>>> We have a physical 2 x 2.5 GHz quad-core Xeon machine with 16 GB
> >>>>>> RAM and an Areca ARC-1220 RAID controller with a RAID10 array.
> >>>>>>
> >>>>>> We are also running many other virtuals on the same machine without
> >>>>>> them affecting the gameserver virtual's performance.
> >>>>>>
> >>>>>> With Xen you can, for example, assign 4 physical cores to the
> >>>>>> gameserver virtual and use the other 4 for other virtuals.
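> >>>>>>
> >>>>>> Just as an illustration (the domain name here is made up, and the
> >>>>>> exact syntax depends on your Xen/xm version), that pinning can be
> >>>>>> done in the domU config or at runtime:
> >>>>>>
> >>>>>>   # in the gameserver domU config file:
> >>>>>>   vcpus = 4
> >>>>>>   cpus  = "4-7"     # keep this guest on physical cores 4-7
> >>>>>>
> >>>>>>   # or pin a running domain, one VCPU at a time:
> >>>>>>   xm vcpu-pin gameserver 0 4
> >>>>>>   xm vcpu-pin gameserver 1 5
> >>>>>>   xm vcpu-pin gameserver 2 6
> >>>>>>   xm vcpu-pin gameserver 3 7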
> >>>>>>
> >>>>>> - Valtteri Kiviniemi
> >>>>>>
> >>>>>> Daniel Worley wrote:
> >>>>>>> I don't have exact numbers, but I've run srcds both natively and
> >>>>>>> under ESXi on a PowerEdge server.  Under both I was able to run
> >>>>>>> multiple instances, no issues.  I saw no difference in performance
> >>>>>>> playing on the servers, but once again I don't have numbers to
> >>>>>>> back it up.
> >>>>>>>
> >>>>>>> On Tue, Aug 25, 2009 at 11:07 AM, Claudio Beretta
> >>>>>>> <[email protected]> wrote:
> >>>>>>>> Hi, I'd like to know your experiences with running srcds in a
> >>>>>>>> virtualized environment.  Searching mail-archive for past
> >>>>>>>> discussions about this subject didn't provide a reliable
> >>>>>>>> conclusion on this topic.
> >>>>>>>> From what I understand, only hypervisors such as ESXi, Xen (and
> >>>>>>>> maybe Hyper-V) are suitable for game servers, because they should
> >>>>>>>> be the ones that introduce the lowest overhead and response delay.
> >>>>>>>> Having a minor performance loss is fine, as long as no noticeable
> >>>>>>>> jitter is introduced and ping is not increased.  Has anyone had a
> >>>>>>>> chance to test these products and compare srcds performance on
> >>>>>>>> the same machine when virtualized and when running on the bare
> >>>>>>>> metal?
> >>>>>>>> Provided that the machine can handle it, do you know if it is
> >>>>>>>> possible to virtualize tickrate 100, 1000fps CSS servers?  Not
> >>>>>>>> that I want to do that, but if it can be done... anything can be
> >>>>>>>> done :-)
> >>>>>>>>
> >>>>>>>> best regards,
> >>>>>>>> Claudio



