Sorry, I was out on holiday :)

I guess that should work. Just know that Primary traffic is hypervisor to 
storage and Secondary traffic is SSVM/Mgmt to storage. CloudStack generally 
doesn't consider primary storage in its architecture design, as it mostly 
relies on recommendations from the hypervisor vendors.

-----Original Message-----
From: Andrija Panic [mailto:andrija.pa...@gmail.com] 
Sent: Friday, December 26, 2014 5:59 PM
To: us...@cloudstack.apache.org
Cc: dev@cloudstack.apache.org
Subject: RE: Physical network design options - which crime to commit

On storage nodes - yes, I will definitely do it.

One final piece of advice/opinion, please...?

On compute nodes, since one 10G link will be shared by both primary and
secondary traffic - would you separate them onto 2 different VLANs and then
implement some QoS, e.g. guarantee 8Gb/s for the primary traffic VLAN, or
limit the secondary storage VLAN to e.g. 2Gb/s? Or just simply let them
compete for the bandwidth? I'm afraid secondary traffic may influence or
completely overwhelm primary traffic if no QoS is implemented...
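
To illustrate what I mean - a rough sketch on the hypervisor side, assuming
the shared 10G NIC is eth1 and VLAN 100/200 carry primary/secondary storage
(interface names, VLAN IDs and the numbers are all just examples):

# VLAN sub-interfaces on the shared 10G NIC
ip link add link eth1 name eth1.100 type vlan id 100   # primary storage
ip link add link eth1 name eth1.200 type vlan id 200   # secondary storage
# hard-cap secondary storage at ~2Gb/s with a token bucket filter,
# which in practice leaves ~8Gb/s of the link for primary traffic
tc qdisc add dev eth1.200 root tbf rate 2gbit burst 10mb latency 50ms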

Sorry for borring you with details.

Thanks

Sent from Google Nexus 4
On Dec 26, 2014 11:51 PM, "Somesh Naidu" <somesh.na...@citrix.com> wrote:

> Actually, I would highly consider NIC bonding for the storage network if
> possible.
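>
> On the storage nodes that could be as simple as this (a minimal sketch,
> assuming Linux hosts with two 10G ports eth1/eth2 and an LACP-capable
> switch; all names are placeholders):
>
> # create an LACP (802.3ad) bond from the two 10G ports
> ip link add bond0 type bond mode 802.3ad
> ip link set eth1 down && ip link set eth1 master bond0
> ip link set eth2 down && ip link set eth2 master bond0
> ip link set bond0 up
> # the switch side needs a matching LACP port-channel configured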
>
> -----Original Message-----
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Friday, December 26, 2014 4:42 PM
> To: dev@cloudstack.apache.org
> Cc: us...@cloudstack.apache.org
> Subject: RE: Physical network design options - which crime to commit
>
> Thanks Somesh, the first option also seems the most logical to me.
>
> I guess you wouldn't consider doing NIC bonding and then VLANs with some
> QoS based on VLANs at the switch level?
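>
> E.g. VLAN sub-interfaces on top of the bond, with per-VLAN rate limiting
> done on the switch (just a sketch; interface names and VLAN IDs are
> made up):
>
> ip link add link bond0 name bond0.100 type vlan id 100   # primary storage
> ip link add link bond0 name bond0.200 type vlan id 200   # secondary storage
> # the per-VLAN guarantee/limit would then live in the switch QoS config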
>
> Thx again
>
> Sent from Google Nexus 4
> On Dec 26, 2014 9:48 PM, "Somesh Naidu" <somesh.na...@citrix.com> wrote:
>
> > I generally prefer to keep the storage traffic separate. The reason is that
> > storage performance (provisioning templates to primary storage, snapshots,
> > copying templates, etc.) significantly impacts the end user experience. In
> > addition, it also helps isolate network issues when troubleshooting.
> >
> > So I'd go for one of the following in that order:
> > Case I
> > 1G = mgmt network (only mgmt)
> > 10G = Primary and Secondary storage traffic
> > 10G = Guest and Public traffic
> >
> > Case II
> > 10G = Primary and Secondary storage traffic
> > 10G = mgmt network, Guest and Public traffic
> >
> > Case III
> > 10G = mgmt network, Primary and Secondary storage traffic
> > 10G = Guest and Public traffic
> >
> > -----Original Message-----
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Friday, December 26, 2014 10:06 AM
> > To: us...@cloudstack.apache.org; dev@cloudstack.apache.org
> > Subject: Physical network design options - which crime to commit
> >
> > Hi folks,
> >
> > I'm designing some stuff - and wondering which crime to commit - I have 2
> > possible scenarios in my head.
> > I have the following NICs available on the compute nodes:
> > 1 x 1G NIC
> > 2 x 10G NIC
> >
> > I was wondering which approach would be better, as I'm thinking about 2
> > possible solutions at the moment, maybe 3.
> >
> > *First scenario:*
> >
> > 1G = mgmt network (only mgmt)
> > 10G = Primary and Secondary storage traffic
> > 10G = Guest and Public traffic
> >
> >
> > *Second scenario*
> >
> > 1G = not used at all
> > 10G = mgmt,primary,secondary storage
> > 10G = Guest and Public
> >
> >
> > And possibly a 3rd scenario:
> >
> > 1G = not used at all
> > 10G = mgmt+primary storage
> > 10G = secondary storage, guest,public network
> >
> >
> > I could continue here with different scenarios - but I'm wondering if a 1G
> > NIC dedicated to mgmt would make sense - I know it is "better" to have it
> > dedicated if possible, but following "KISS" and knowing it's extremely
> > lightweight traffic - I was thinking of putting everything on the 2 x 10G
> > interfaces.
> >
> > Any opinions are most welcome.
> > Thanks,
> >
> >
> > --
> >
> > Andrija Panić
> >
>
