Sounds great, keep us posted.
A good solution for using iSCSI/SAN/etc. would be great, as it is one of
the bigger let-downs of CloudStack (or should I say, the ACS+KVM/libvirt
combo).
PS: NVMeOF sounds like a game changer; I wasn't even aware of it, thanks
for mentioning it.
I would need to test
On 21/01/2021 21:31, n...@li.nux.ro wrote:
Well, there you go..
On 2021-01-21 18:50, Simon Weller wrote:
We used to use CLVM a while ago before we shifted to Ceph. Cluster
suite/corosync was a bit of a nightmare, and fencing events caused all
sorts of locking (DLM) problems.
I helped a CloudStack user out a couple of months ago, after
To: us...@cloudstack.apache.org
Subject: Re: Experiences with KVM, iSCSI and OCFS2 (SharedMountPoint)
On 21/01/2021 11:34, n...@li.nux.ro wrote:
Hi,
I used SharedMountPoint a very long time ago with GlusterFS before
Cloudstack had native integration.
Don't remember the details, but overall my impression was that it worked
surprisingly well; of course, back then there weren't as many features, so
there was less stuff to test. I would give it a go.
Hi,
For a specific use-case I'm looking into the possibility to use iSCSI in
combination with KVM.
Use-case: Low-latency I/O with 4k blocks and QD=1
KVM with CloudStack doesn't support iSCSI natively, and the docs and
other blogs refer to using 'SharedMountPoint' with OCFS2 or GFS2:
-
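For what it's worth, the low-latency use-case above is easy to measure with fio once a candidate filesystem is mounted. A minimal sketch, assuming fio is installed and that /mnt/primary is the shared mount path (both are assumptions, adjust to your setup):

```shell
# Generate a fio job matching the use-case: 4k blocks, queue depth 1,
# direct I/O so the page cache doesn't mask the storage latency.
# /mnt/primary is a hypothetical SharedMountPoint path.
cat > qd1-lat.fio <<'EOF'
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=1
runtime=60
time_based=1
group_reporting=1

[randread-qd1]
rw=randread
size=1G
directory=/mnt/primary
EOF

# Run it on a host with the shared filesystem mounted:
#   fio qd1-lat.fio
```

Comparing the completion-latency (clat) percentiles from a run like this on each candidate (OCFS2, GFS2, NVMe-oF, etc.) should show quickly which backend holds up at QD=1.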