IMO you're not missing anything. As you say, with FCoE out of the way, storage
is basically either a (1) IP-based client/server application operating at the
VM level or (2) served locally by the hypervisor and operating directly over
the underlay network or on a separate network. Overlay network
discussions/solutions can focus on applications generically rather than spend
cycles trying to figure out how to glue in FCoE (which most large clouds
probably avoid like the plague). As with any application with special
performance requirements, it's up to the operator to deal with optimizing other
parameters for storage, such as latency and bandwidth.
Best -- aldrin
On Aug 28, 2012, at 3:01 AM, Ivan Pepelnjak wrote:
> Now is my time to be overly simplistic. With FC(oE)? out of the picture,
> storage access becomes just another TCP application (admittedly a
> drop-sensitive one with large bandwidth requirements).
>
> If a VM uses its own NFS or iSCSI client, the hypervisor treats that as yet
> another TCP session - all we need is IP address and session preservation on
> hot VM moves. VM probably has to reattach to storage after a cold move anyway.
>
> If the hypervisor emulates VM local storage with files on shared file system,
> then the source and target hosts have to have access to the same file system
> (hopefully over a separate VLAN or even a separate network).
>
> In any case, there might be interesting problems if the source and target
> hypervisor hosts are "far apart" (even within a single data center), but they
> have nothing to do with virtual networking.
>
> Am I missing something fundamental?
> Ivan
>
> On 8/28/12 1:34 AM, Aldrin Isaac wrote:
>> Robert,
>>
>> OK, I agree that we can't neglect transition, but as David Black
>> points out, it's probably the right thing to sweep certain
>> "traditional" storage transports. namely FCoE (for now, if not
>> indefinitely), under the rug for a number of reasons.
>>
>> Likewise we also need to evolve technologies in the "traditional"
>> network that will bridge gaps that pure hypervisor-based overlays
>> will not satisfy (for one reason or other) for some time.
>>
>> Best -- aldrin
>>
>>
>>
>> On Aug 27, 2012, at 3:54 AM, Robert Raszuk wrote:
>>
>>> Aldrin,
>>>
>>> What I primarily had in mind was "object storage" like S3,
>>> Rackspace cloud Files, Swift etc ...
>>>
>>> So one can say this is all IP so VM will just be able to access it
>>> - done. If everyone in this WG agrees with it is great.
>>>
>>> However perhaps if VM is using storage over IP during migration we
>>> need to special handle it ... propose fast-connectivity-restoration
>>> techniques on the storage PE side as mandatory to reduce the
>>> switchover time ? Maybe recommend right sequence of events ?
>>>
>>> Thx, R.
>>>
>>>
>>>> Neither is overlay networking new. But that's not my point.
>>>>
>>>> Ceph is merely an example I use to make the point that VMs
>>>> accessing storage servers over a virtual network is not IMO
>>>> compatible with arguments for a non-traditional cloud.
>>>>
>>>> Feel free to provide other real world examples of real cloud
>>>> storage other than Ceph. Would love to see what's on your list.
>>>>
>>>> Best. -- aldrin
>>>>
>>>> On Sunday, August 26, 2012, wrote:
>>>>
>>>> That is the case anytime a hypervisor provides a VM with a
>>>> "local" block device which is backed by network based storage
>>>> ("VM itself does not need to connect to network storage"). This
>>>> is not something new or unique provided by so called "cloud
>>>> storage" such as Ceph.
>>>>
>>>> Cheers,
>>>>
>>>> Brad
>>>>
>>>>
>>>>
>>>> -----Original Message----- *From: *Aldrin Isaac
>>>> [[email protected]] *Sent: *Sunday, August 26, 2012 07:45 PM
>>>> Central Standard Time *To: *Black, David *Cc: *[email protected];
>>>> [email protected] *Subject: *Re: [nvo3] Storage (part of: Let's
>>>> refocus on real world)
>>>>
>>>> AFAIK, scale out cloud storage software such as Ceph do not rely
>>>> on FC, FCoE, NFS or iSCSI on the VM. Ceph storage appears to the
>>>> VM as local storage and does not depend on network
>>>> virtualization. VM migration is not an issue for Ceph since the
>>>> VM itself does not need to connect to a storage server over the
>>>> network. So as far as real cloud storage is concerned nothing is
>>>> being swept under the rug.
>>>>
>>>> -- aldrin
>>>>
>>>> On Sunday, August 26, 2012, Black, David wrote:
>>>>
>>>> Robert,
>>>>
>>>>> Also as you have pointed out storage discussion can not be
>>>> just swapped
>>>>> under carpet and addressed by quote: "storage issues are out
>>>> of the scope".
>>>>
>>>> I agree ... and that looks like my cue to say something ... e.g.,
>>>> see the domain part of my email address ;-).
>>>>
>>>> iSCSI and NFS use TCP/IP in the storage stack and hence will run
>>>> fine over all of the data encapsulations being discussed here.
>>>> If the iSCSI initiator or NFS client is in the VM, that's most of
>>>> the discussion. That's not always the case for a number of
>>>> reasons - the obvious one is that a hypervisor iSCSI initiator or
>>>> NFS client is required if the VM's executable image is being
>>>> loaded and/or paged using one of those protocols. It's also the
>>>> case that many hypervisors simplify the storage interface
>>>> presented to VMs so that it looks like direct attached or
>>>> internal disk drives), and map those disks to networked storage
>>>> using a hypervisor iSCSI initiator or NFS client. Ensuring that
>>>> the VM migration destination hypervisor has appropriate
>>>> connectivity to storage is mostly a configuration concern. The
>>>> upshot is that iSCSI and NFS run fine over nvo3-encapsulated
>>>> networks.
>>>>
>>>> In contrast, as I said at the microphone at the nvo3 BOF in
>>>> Paris, I suggest that the WG not initially consider FCoE, in
>>>> order to defer spending time on discussing how to deliver DCB
>>>> Ethernet service/behavior (required by FCoE - ordinary non-DCB
>>>> Ethernet isn't sufficient for FCoE because FCoE is *very*
>>>> sensitive to drops) through the encapsulation(s).
>>>>
>>>> Thanks, --David
>>>>
>>>>
>>>>> -----Original Message----- From: [email protected]
>>>>> [mailto:[email protected]] On
>>>> Behalf Of Robert
>>>>> Raszuk Sent: Saturday, August 25, 2012 11:55 AM To: Ivan
>>>>> Pepelnjak Cc: Black, David; [email protected]; Linda Dunbar
>>>>> Subject: Re: [nvo3] Let's refocus on real world
>>>>>
>>>>> Ivan,
>>>>>
>>>>>> ... or I may be completely wrong.
>>>>>
>>>>> I think you are actually right on the spot correct.
>>>>>
>>>>> However I am afraid authors of this document are not likely
>>>> to admit
>>>>> that TOR switches should be just basic IP nodes providing
>>>> only transport
>>>>> between servers.
>>>>>
>>>>> Likewise they will not likely to admit that all logic of
>>>> encapsulation
>>>>> should happen on the hyper-visors as they are simply not in
>>>>> that technology space.
>>>>>
>>>>> Similarly I very much agree and support providing clear
>>>> distinction
>>>>> between "cold" and "hot" VM mobility cases and perhaps even
>>>> further
>>>>> provide number of sub-classes hot VM mobility can be
>>>> accomplish today -
>>>>> clearly there is more then one way.
>>>>>
>>>>> Also as you have pointed out storage discussion can not be
>>>> just swapped
>>>>> under carpet and addressed by quote: "storage issues are out
>>>> of the scope".
>>>>>
>>>>> While Linda was perhaps right to say that today most storage
>>>> today is
>>>>> coming to servers via backend this is what I would call very
>>>> inefficient
>>>>> and legacy way. If we are to think ahead one needs to observe
>>>> how the
>>>>> industry advances in storage virtualization via front-end IP
>>>> very often
>>>>> not co-located with the compute racks.
>>>>>
>>>>> In my view network related mobility discussion is not about
>>>> TOR or about
>>>>> VLANs. It is about an IP layer above IP transport which would
>>>> carry all
>>>>> necessary information of the actual location of the VMs and
>>>> which in
>>>>> fact would play the main role in shortening or eliminating the
>>>>> triangular routing problem.
>>>>>
>>>>> Rgs, R.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________ nvo3 mailing
>>>> list [email protected] https://www.ietf.org/mailman/listinfo/nvo3
>>>>
>>>
>>
>> _______________________________________________ nvo3 mailing list
>> [email protected] https://www.ietf.org/mailman/listinfo/nvo3
_______________________________________________
nvo3 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nvo3