+1, most DCs have a mixture of virtual and bare metal. The document therefore 
needs to be flexible. 

--
Paul Unbehagen

Sent from my iPhone

On Apr 24, 2013, at 6:10 PM, David Allan I <[email protected]> wrote:

> The architecture should not care...No difference between a bare metal server 
> and any other non-NVO3 networking or storage appliance that needs to be 
> included in the overall compute and networking mix...we just have fewer "v"s 
> in the bare metal case...
> 
> Cheers
> Dave 
> 
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of 
> Reith, Lothar
> Sent: Wednesday, April 24, 2013 3:49 AM
> To: ramki Krishnan; Jon Hudson
> Cc: Thomas Narten; Pat Thaler; [email protected]; Qin Wu
> Subject: Re: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
> 
> I have no problem if the framework document explicitly states that bare 
> metal servers without a hypervisor are out of scope.
> 
> Lothar
> 
> -----Original Message-----
> From: ramki Krishnan [mailto:[email protected]]
> Sent: Wednesday, April 24, 2013 08:01
> To: Jon Hudson; Reith, Lothar
> Cc: Thomas Narten; Pat Thaler; [email protected]; Qin Wu
> Subject: RE: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
> 
> Agree with Jon.
> 
> From my interaction with customers, one of the popular use cases for bare 
> metal servers is analytics applications, e.g. Hadoop; the key reason is that 
> these applications keep the CPU busy all the time, so there is no benefit to 
> virtualization.
> 
> Thanks,
> Ramki
> 
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Jon 
> Hudson
> Sent: Tuesday, April 23, 2013 8:40 PM
> To: Reith, Lothar
> Cc: Thomas Narten; Pat Thaler; [email protected]; Qin Wu
> Subject: Re: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
> 
> Actually, I have customers that still virtualize an OS/App even when the 
> result is a 1:1:1 relationship. 
> 
> Due to direct-mapping advancements, the overhead of running on a hypervisor 
> is very small, whereas the benefits of running an OS/App pair as a VM are 
> many.
> 
> For example
> 
> 1.) Abstract an older OS like Windows NT that does not have drivers for 
> current hardware.
> 
> 2.) Ability to pause, clone, snapshot or rollback the VM.
> 
> 3.) Ability to move the VM live from one hypervisor to another. 
> 
> Jon
> 
> 
> On Apr 22, 2013, at 12:36 AM, "Reith, Lothar" <[email protected]> 
> wrote:
> 
>> Hi Qin,
>> 
>> My understanding is that bare metal servers are used by people who want to 
>> eliminate the performance overhead that a hypervisor introduces. If the 
>> server runs only a single application - why introduce the overhead of a 
>> hypervisor and suffer the associated performance degradation?
>> 
>> Therefore I tend to disagree with the statement that a bare metal server is 
>> some kind of hypervisor.
>> 
>> Lothar
>> 
>> -----Original Message-----
>> From: Qin Wu [mailto:[email protected]]
>> Sent: Monday, April 22, 2013 03:59
>> To: Reith, Lothar; Thomas Narten; Pat Thaler
>> Cc: [email protected]
>> Subject: RE: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
>> 
>> Hi, Lothar:
>> A bare metal server is some kind of hypervisor. As I discussed with Larry on 
>> the list before, a hypervisor usually hosts multiple VMs, and each VM can be 
>> a tenant system. So I think Case B doesn't stand.
>> 
>> Regarding Case A and Case C, I think a tenant system can be either a 
>> physical system or a virtual system, as the tenant system definition points 
>> out.
>> 1. If the tenant system is a physical system, this could be out of scope of 
>> NVO3 since, as Thomas mentioned, native connection to the DC network is not 
>> necessarily in the scope of NVO3. I agree with this, since if the tenant 
>> system is a physical system (e.g., a physical network service appliance), it 
>> doesn't need to be a VM.
>> 
>> 2. If the tenant system is a virtual system, it falls into two categories:
>> Category (a): the tenant system plays a host role.
>> Category (b): the tenant system plays a forwarding element role (I think it 
>> should be a virtual forwarding element; if the tenant system is a firewall, 
>> it should be a virtual firewall).
>> 
>> Case A and Case C, in my understanding, fall into these two categories. 
>> 
>> Regards!
>> -Qin
>> -----Original Message-----
>> From: Reith, Lothar [mailto:[email protected]]
>> Sent: Sunday, April 21, 2013 7:21 PM
>> To: Thomas Narten; Pat Thaler
>> Cc: [email protected]; Qin Wu
>> Subject: AW: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
>> 
>> Thomas,
>> 
>> See below. I do not agree with the wording.
>> 
>> And I suggest changing the definition of tenant system, which I have 
>> identified as perhaps a root cause of the confusion.
>> 
>> Lothar
>> 
>> -----Original Message-----
>> From: [email protected] [mailto:[email protected]] On Behalf Of 
>> Thomas Narten
>> Sent: Friday, April 19, 2013 20:49
>> To: Pat Thaler
>> Cc: [email protected]; Qin Wu
>> Subject: Re: [nvo3] vNICs and pNics in draft-wu-nvo3-nve2nve-04.txt
>> 
>> "Pat Thaler" <[email protected]> writes:
>> 
>>> In addition to Thomas's point, we should not restrict the number of 
>>> physical NICs that a tenant system can have. Some tenant systems will 
>>> have more than one physical NIC.
>> 
>> Agreed.
>> 
>> Lothar: Disagree - given the current definition of tenant system, one would 
>> have to make a case differentiation throughout the document as follows:
>> 
>> Case A: Tenant System is a VM
>> - In this case - which may be the most important one to many - the above 
>> statement is wrong, because then the tenant system has zero physical NICs.
>> 
>> Case B: Tenant System is a bare metal server
>> - In this case the above statement is true.
>> 
>> Case C: Tenant System is, according to the current definition, a router or 
>> firewall...
>> - In this case we start referring to router ports as NICs or pNICs, which 
>> may further increase the confusion.
>> 
>>> We may describe some typical tenant systems as part of examining use 
>>> cases, but NVO3 should define behavior in terms of the network 
>>> interface, i.e. TSI, behavior and should not restrict tenant system 
>>> architecture.
>> 
>> Another way of looking at it is that the TSI is an attachment 
>> point/interface to the TS. The point where the TSI attaches to the TS has 
>> two sides. On the tenant-facing side, it appears to be a NIC: it looks 
>> like a NIC, behaves like a NIC, etc. On the side facing away from the 
>> tenant (e.g., the hypervisor in the case of a virtualized system), we call 
>> it a TSI. The TSI side will have attributes that are specific to NVO3.
>> 
>> Does that make sense?
>> 
>> Thomas
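
Thomas's two-sided view of the attachment point (a NIC on the tenant-facing 
side, a TSI with NVO3-specific attributes on the other side) can be sketched 
as a minimal data model. This is purely illustrative; all names and 
attributes below are hypothetical and are not taken from any NVO3 draft.

```python
# Illustrative sketch of the two-sided attachment point described above.
# All names/attributes here are hypothetical, not from any NVO3 document.
from dataclasses import dataclass

@dataclass
class NicView:
    """Tenant-facing side: looks and behaves like an ordinary NIC."""
    mac_address: str
    mtu: int = 1500

@dataclass
class TsiView:
    """Side facing away from the tenant: NVO3-specific attributes."""
    vn_id: int      # virtual network this attachment belongs to (hypothetical)
    nve_name: str   # NVE that terminates this TSI (hypothetical)

@dataclass
class AttachmentPoint:
    """A single attachment point between a Tenant System and the NVE."""
    nic: NicView
    tsi: TsiView

ap = AttachmentPoint(
    nic=NicView(mac_address="52:54:00:12:34:56"),
    tsi=TsiView(vn_id=5001, nve_name="nve-1"),
)
print(ap.nic.mac_address, ap.tsi.vn_id)  # the same object, seen from two sides
```

The point of the sketch is that the NIC view and the TSI view are two faces 
of one object, so restricting the tenant side says nothing about the NVO3 
side, and vice versa.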
>> 
>> _______________________________________________
>> nvo3 mailing list
>> [email protected]
>> https://www.ietf.org/mailman/listinfo/nvo3