Hello,

This has taken a wrong turn, so my apologies, Bryan. It is my fault for not
stating clearly what I meant: because you work for <vendor with a space>, you
will naturally be biased, since you will mostly see their solutions
implemented and will support or implement them yourself.

*>RHEV has gained major traction just because of lack of features for Linux
guests.  Heck, on the VDI front, RHEV+Spice is the _only_ solution.  And
it's 100% open source.*
There are other open source VDI solutions that existed long before SPICE was
even created (http://freenx.berlios.de/ for example, a spinoff of the
open-sourced NX 3.5 components from http://www.nomachine.com). We can debate
how good they are, but that doesn't change the fact that they have been
around for a long time, there are lots of deployments out there, and FreeNX
is open source.
If you say SPICE is the only solution, I'll say that the last time I tested
it (around 8 months ago) it was not viable: the moment some Flash banner or
other animated element appears on screen, network traffic spikes to around
30-40 Mbit/s and stays there until you get rid of the banner/animation. If
you try to limit the traffic (qdiscs + HTB on a Linux gateway), the VDI
session (i.e. the desktop) becomes completely unusable for as long as the
animation is on screen. You can't guarantee that no banner will ever be
displayed, or that no screen elements will change in rapid succession, which
is what SPICE needs to work well.

I even asked Hans de Goede about this live at a conference, and he said the
video compression poses problems because they are working with whole video
frames and it gets complicated (you can't do the things that solutions
sitting on top of Xorg can do by forwarding drawing instructions). Otherwise
SPICE is great (especially because of the working USB transport), but in the
VDI world network traffic is one of the most important factors, and other
solutions keep it to between 50-120 kB/s at most (often even lower). For two
years I worked for an SMB doing VDI, and I would have loved to replace
FreeNX with something better and open source, but unfortunately I couldn't
find it.
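For reference, the kind of shaping I tried looks roughly like this (a
minimal sketch, not my exact setup; the interface name eth0, the port
number, and the rate/ceil values are placeholders):

```shell
#!/bin/sh
# Minimal HTB shaping sketch for a Linux gateway (run as root).
# eth0, port 4000, and the rates below are placeholders, not real values.

DEV=eth0

# Remove any existing root qdisc on the interface (ignore errors).
tc qdisc del dev "$DEV" root 2>/dev/null

# Attach an HTB root qdisc; unclassified traffic falls into class 1:20.
tc qdisc add dev "$DEV" root handle 1: htb default 20

# Parent class: total bandwidth available on the link.
tc class add dev "$DEV" parent 1: classid 1:1 htb rate 10mbit

# Child class for VDI traffic: guaranteed 1 Mbit, capped at 2 Mbit.
tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate 1mbit ceil 2mbit

# Default class for everything else.
tc class add dev "$DEV" parent 1:1 classid 1:20 htb rate 1mbit ceil 10mbit

# Send the VDI protocol's TCP port into the capped class.
tc filter add dev "$DEV" parent 1: protocol ip prio 1 \
    u32 match ip dport 4000 0xffff flowid 1:10
```

The problem I described is that once SPICE hits a ceiling like this, the
session doesn't degrade gracefully; it becomes unusable until the animation
stops.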

Regards,
Alexandru.

On Sat, Jun 8, 2013 at 12:18 AM, Bryan J Smith <[email protected]> wrote:

> On Fri, Jun 7, 2013 at 5:16 PM, Alexandru Ionica <
> [email protected]> wrote:
>
>> The 90% ratio is most probably caused by the fact that companies willing
>> to send employees to training and pay several thousand euro for it can
>> also afford, and already have, a SAN.
>>
>
> That's the argument for a lot of things in general when it comes to
> hardware.
> But newer "Software-based Storage" will change that.
>
> E.g., for private compute farms ... RHEV+RHS (upstream: KVM+Gluster w/oVirt)
> I.e., "Just add a node to get both more compute and more highly available
> storage"
>
>  I am not an advocate for PowerPath, nor do I have anything against
>> multipathd.
>>
>
> I deal with (redacted) regularly.  I really hate it when they abuse
> customer relationships rather than working with us; the customer always
> loses.  I also really hate debugging their modules, proving -- beyond a
> shadow of a doubt -- that it was their module.  Why does Red Hat have to
> debug a proprietary module it doesn't have the source code to?
>
>
>> I am just stating what I have seen (and it was Rhel 5 at that time).
>>
>
> If multipathd is misconfigured, sure, it'll f'up things.  I deal with
> those all the time.
>
> RHEL5 has been solid, when properly configured.
> RHEL6 improves performance heavily.
>
> HP, IBM and many others provide _excellent_ documentation on how to
> optimize RHEL5.4/5.6+ and RHEL6 for active-active configurations, ALUA and
> other configurations, with the stock, out-of-the-box, GPL configurations.
>
> Several others, I've discovered myself through trial'n error.  I know my
> work has been fed back both inside of these IHVs, as well as upstream.
>  I.e., I've worked at some of the largest SAN environments in North
> America.  ;)
>
> It does get political and at least back then
>>
>
> "Political" for IHVs/ISVs, yes.  They are going to lose a revenue stream.
>
>
>> you had the choice of either fighting with RedHat or EMC.
>>
>
> Red Hat doesn't "fight."  It respects what the customer wants to do.
>
> But when you, as a customer, tell Red Hat to debug a proprietary module,
> understand you're asking the impossible.  At the same time, the proprietary
> vendor _could_ sell you a proprietary DSM that works very well with
> DM-MPIO, solving the issue.
>
> Again, this is _exactly_ what Microsoft _makes_ vendors do in NT6 for its
> MPIO.  It _requires_ them to create a DSM for NT6's new MPIO, instead of
> loading a bunch of vendor-centric stuff.  Before NT6, this was a real
> problem.  But as of NT6, it's much better.
>
> But Red Hat is not Microsoft.  Red Hat cannot tell IHVs/ISVs what to do.
>
> Plus you also need to take into account that some enterprises will
>> also have Solaris and AIX running PowerPath, and a sysadmin group trying
>> to get things to a standard level;
>>
>
> And the same argument is used for Symantec/Veritas products, instead of Red
> Hat Enterprise solutions.  But even Symantec works with Red Hat.
>
> I.e., if there is a problem with a Symantec proprietary solution atop
> Red Hat, I can work with them.
>
>
>>  the whole point of mentioning this is that things are not black and
>> white and it doesn't always matter which is the best or most capable
>> technical solution.
>>
>
> But we can _not_ cover proprietary software in LPI Objectives.
> However, we _can_ cover the GPL software.
>
> DM-MPIO is very widely and heavily deployed.
>
> This has come up before, like with the 3rd-party, $300/node CAL "AD
> clients" for Linux.
> We should cover SSSD, which provides the equivalent, instead of not
> addressing it at all.
>
> If we are talking about virtualization in the enterprise, generally it's
>> done with VMware and then the Linux guest is not booting off the SAN or
>> being directly connected.
>>
>
> Or Hyper-V, or RHEV, or Xen Server, etc...
>
> VMware vSphere tends to be preferred in organizations that are heavily
> Windows.  When Linux guests start getting involved, things change
> significantly.  I'm not just talking the KVM v. ESXi HyperVisor, but the
> management aspects too.
>
> EMC VMware could easily support libvirt and other, open source interfaces
> in ESXi and vSphere, instead of just paying lip service.  Red Hat
> specifically created libvirt so the HyperVisor would _not_ matter.
>
> Open source projects should be about open standards and open interfaces.
>  I don't agree with the argument, "oh, people don't use open source, so
> let's not cover it."
>
> RHEV has gained major traction just because of lack of features for Linux
> guests.  Heck, on the VDI front, RHEV+Spice is the _only_ solution.  And
> it's 100% open source.
>
> So I guess a lot of people here run in heavily Windows environments?
>
> Again, oVirt and SPICE are open source -- and oVirt is involved with a
> lot more than just KVM.  But even libvirt and virsh aren't specific to
> KVM either.  It's up to other HyperVisors to support them.
>
> So let's not make this about KVM v. ESXi, okay?  Let's make it about
> libvirt, oVirt, etc...
>
>
>  I know Bryan will say he disagrees, but he works for RedHat and they
>> sell KVM based solutions :) .
>>
>
> I don't have to take this.  Seriously.  On a community list built around
> open source.
>
> I'm about as "cross platform" and "cross vendor" as they come, being an
> _active_ MCITP/MCSE/MCSA as well as LPIC-1/2/3.  I'm really tired of the,
> "Oh, he works for Red(nospace)hat," blah, blah, blah ...
>
> I could easily say, "wow, you sound like you are heavily entrenched in
> EMC."
>
> At least everything my employer does is 100% and given back upstream.  I
> really, really, really _tire_ of people forgetting that.
>
> Goodbye.  I'm done.  This is _exactly_ what happened last time.
>
>
> --
> Bryan J Smith - Professional, Technical Annoyance
> b.j.smith at ieee.org - http://www.linkedin.com/in/bjsmith
>
>
> _______________________________________________
> lpi-examdev mailing list
> [email protected]
> http://list.lpi.org/cgi-bin/mailman/listinfo/lpi-examdev
>