Hari

I don't want to be a selfish person and make that call.

There are VMware best practices, and the example I've given below is 
considered common best practice. Technically, you really don't need to use 
DVS on the vSwitch0 management portgroup, because that portgroup is created 
by default when ESXi is installed. DVS is a template switch configuration 
with port accountability (and other features). Typically, you use DVS to 
avoid the manual configuration/management of virtual switches within a 
cluster - it's usually done for guest VM networks, and you usually don't 
want to mix guest VM traffic with the management network. Hence the 
management network resides on the separate vSwitch0, which comes by default, 
with only the hosts' management traffic.
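
If it helps to visualize that default layout, here is a minimal pyVmomi 
sketch - the vCenter address and credentials are made-up placeholders, not 
anything from this thread - that lists each host's standard vSwitches and 
portgroups. On a fresh ESXi install you would see vSwitch0 carrying the 
"Management Network" portgroup:

    # Minimal pyVmomi sketch: list standard vSwitches/portgroups per host.
    # The vCenter address and credentials below are made-up placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()  # lab only - skips cert checks
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]      # first datacenter
    cluster = dc.hostFolder.childEntity[0]      # first compute resource
    for host in cluster.host:
        ns = host.configManager.networkSystem
        for vsw in ns.networkInfo.vswitch:      # standard vSwitches, e.g. vSwitch0
            print(host.name, vsw.name, vsw.pnic)
        for pg in ns.networkInfo.portgroup:     # e.g. "Management Network"
            print(host.name, pg.spec.name, "on", pg.spec.vswitchName)
    Disconnect(si)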

This is common best practice, but people can get very fancy with configs and I 
don't want to speak for the rest of the community.

There may be customers who only have 2 NICs on their servers, and in that 
case - if they use DVS - they won't be able to use CS. Also, for most 
proof-of-concept work with CS, people tend to use basic gear with 2 NICs in 
the lab; they won't be able to test CS if they use DVS on everything, 
including the management net.

My humble opinion: it's certainly a needed feature in 4.2, and while Sateesh 
remembers how it's done (fresh mind), it would probably make sense to add 
this feature sooner rather than later.

I will leave it for someone else to make a judgement call on urgency.

Thanks
Ilya

Hari Kannan <hari.kan...@citrix.com> wrote:
Hi Ilya,

Thanks for the feedback - so did I understand right that, in your view, the 
mgmt. network on DVS is not a super-critical need?

Hari

-----Original Message-----
From: Musayev, Ilya [mailto:imusa...@webmd.net]
Sent: Friday, March 8, 2013 5:01 PM
To: cloudstack-dev@incubator.apache.org
Cc: Sateesh Chodapuneedi; Koushik Das; Anantha Kasetty
Subject: RE: question on Distributed Virtual Switch support

Hari

I gave a second thought to your request about support for the management 
network on DVS.

Here are the use cases:

By default, the hypervisors are deployed with a local vSwitch0 and a 
management network portgroup.

In most cases, if you have more than 2 NICs - assume it's 6-8 - then the 
network breakdown is usually something like the following (a rough config 
sketch follows the list):

2 NICs (bonded) for vSwitch0
2 NICs (bonded) for vMotion
2-4 NICs (bonded) for guest VMs - usually this is where you insert DVS.
2 NICs (bonded) for storage - either a local or a DVS switch - if no SAN.
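
To make the "manual configuration per host" point concrete, here is a rough 
pyVmomi sketch of what you would repeat on every host in the cluster if the 
guest VM switch were a standard vSwitch instead of a DVS. The switch name, 
uplinks and VLAN are made-up examples, and 'ns' is a host's 
configManager.networkSystem handle (obtained via pyVmomi SmartConnect):

    # Rough sketch: bonded standard vSwitch plus a guest portgroup on one
    # host - without DVS, this has to be repeated for each host.
    from pyVmomi import vim

    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    spec.bridge = vim.host.VirtualSwitch.BondBridge(
        nicDevice=["vmnic2", "vmnic3"])          # example bonded uplinks
    ns.AddVirtualSwitch(vswitchName="vSwitch1", spec=spec)

    pg = vim.host.PortGroup.Specification()
    pg.name = "GuestVMs"                         # example portgroup name
    pg.vswitchName = "vSwitch1"
    pg.vlanId = 100                              # example VLAN
    pg.policy = vim.host.NetworkPolicy()         # default policy
    ns.AddPortGroup(portgrp=pg)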

If your hypervisor only has 2 NICs, technically this is bad design, but even 
so, you have to bond the 2 interfaces and use DVS for everything, from 
management to vMotion to guest VM communication. This is usually a lab 
environment (at least in my case).
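
And for contrast, a hypothetical sketch of creating the DVS itself - a 
single object at the datacenter level that hosts then join, rather than a 
switch built per host. The name "dvSwitch0" and the 'dc' datacenter object 
are assumptions on my part:

    # Hypothetical sketch: create a DVS in the datacenter's network folder.
    # 'dc' is a vim.Datacenter object; "dvSwitch0" is a made-up name.
    from pyVmomi import vim

    dvs_cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    dvs_cfg.name = "dvSwitch0"
    create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_cfg)
    task = dc.networkFolder.CreateDVS_Task(create_spec)  # returns a vCenter task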

While this is an important feature request, it will help a smaller subset of 
customers who only use 2 NICs for everything. Also, looking forward, VMware 
may decide to put everything on DVS at some point, and we would need this 
ability anyway.

Regards
Ilya

"Musayev, Ilya" <imusa...@webmd.net> wrote:
+1 .. MGMT is also part of DVS in our environment and in others.
> -----Original Message-----
> From: Chip Childers [mailto:chip.child...@sungard.com]
> Sent: Friday, March 08, 2013 2:25 PM
> To: cloudstack-dev@incubator.apache.org
> Cc: Sateesh Chodapuneedi; Koushik Das; Anantha Kasetty
> Subject: Re: question on Distributed Virtual Switch support
>
> On Fri, Mar 8, 2013 at 2:20 PM, Hari Kannan <hari.kan...@citrix.com> wrote:
> > Hi Sateesh,
> >
> > As we increase the cluster size, I wonder whether not having the
> > management network on DVS might be an issue. I would strongly suggest
> > we consider this. I also spoke to some folks who are more knowledgeable
> > about customer implementations, and they also say this would be an issue.
> >
> > As you know, we have a separate feature being discussed - support for
> > PVLAN - so, PVLAN support via DVS is a must-have requirement.
>
> +1 - yes please.


