So, based on your message, those storages should be available to containers
over some network protocol, NFS for instance.
I thought we could attach a virtual disk to the VM that hosts the pod,
and then attach it directly to the pod. In case of pod migration, we could
detach the disk and attach it to another VM.
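
To make the idea concrete, here is a minimal sketch of how that
detach-and-reattach flow could look as the two CSI controller RPCs such a
driver would need. The csi package is the real CSI spec Go binding;
CloudStackClient and its method names are illustrative assumptions standing
in for CloudStack's attachVolume and detachVolume API calls, not a real SDK
surface.

package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// CloudStackClient abstracts the two CloudStack API calls the flow relies
// on. The method names here are assumptions for illustration only.
type CloudStackClient interface {
	AttachVolume(ctx context.Context, volumeID, vmID string) error
	DetachVolume(ctx context.Context, volumeID string) error
}

type controller struct {
	cloud CloudStackClient
}

// ControllerPublishVolume attaches the CloudStack volume to the VM that
// hosts the pod (the CSI "node"). Kubernetes calls this before the volume
// is mounted into the pod on that node.
func (c *controller) ControllerPublishVolume(ctx context.Context,
	req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
	if err := c.cloud.AttachVolume(ctx, req.GetVolumeId(), req.GetNodeId()); err != nil {
		return nil, err
	}
	return &csi.ControllerPublishVolumeResponse{}, nil
}

// ControllerUnpublishVolume detaches the volume from the old VM. On pod
// migration, Kubernetes calls this for the source node and then
// ControllerPublishVolume for the destination node, which is exactly the
// detach-and-reattach step proposed above.
func (c *controller) ControllerUnpublishVolume(ctx context.Context,
	req *csi.ControllerUnpublishVolumeRequest) (*csi.ControllerUnpublishVolumeResponse, error) {
	if err := c.cloud.DetachVolume(ctx, req.GetVolumeId()); err != nil {
		return nil, err
	}
	return &csi.ControllerUnpublishVolumeResponse{}, nil
}

Since a CloudStack volume can be attached to only one VM at a time, a driver
like this could only advertise single-node access modes, which matches the
Read/Write Single Pod limitation Alireza mentions below.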


On Sat, Mar 13, 2021 at 2:14 PM Wei ZHOU <ustcweiz...@gmail.com> wrote:

> Thanks Alireza.
> It is a long list. Almost all drivers in the list are for block storage
> solutions or file shares; containers can access those resources directly.
>
> Unlike them, CloudStack volumes are attached to VMs, not to pods or
> containers, so containers cannot attach CloudStack volumes directly. All
> IaaS platforms have the same problem.
> The list contains no drivers for other IaaS projects (OpenStack,
> OpenNebula, etc.) or for public cloud compute services (Amazon EC2, Google
> GCP, Azure Compute, etc.).
>
> In my opinion, CSI drivers should be implemented by storage providers, not
> cloud providers.
>
> -Wei
>
>
>
>
> On Fri, 12 Mar 2021 at 21:56, Alireza Eskandari <astro.alir...@gmail.com>
> wrote:
>
> > From a technical viewpoint, at the level of low-level hypervisor
> > operations, does it need anything more than detaching the disk from the
> > source VM and attaching it to the destination VM, which is already
> > implemented in CloudStack? These features could also be exposed through
> > CSI because CloudStack supports them: raw block, snapshots, expansion,
> > and cloning.
> > But a volume in CloudStack can only operate in Read/Write Single Pod
> > mode, not with multiple pods.
> > You can see a list of already developed CSI drivers at this link:
> > https://kubernetes-csi.github.io/docs/drivers.html
> > Regards
> >
>
