Re: KVM FC shared storage

2024-05-20 Thread Kristian Liivak


Indeed, XenServer is good and our primary choice; we already have it up and
running.
The reason I am asking about a KVM cluster is that CloudStack lacks a
Kubernetes CSI driver for XenServer, and we want to use Kubernetes. My idea is
simply to build one KVM cluster for the Kubernetes users.

Of course we will check the CloudStack CSI integration; maybe we can make some
changes to the current CSI driver to support XenServer.
But it is wise to evaluate all options :)

Lugupidamisega / Best regards,

Kristian Liivak
Tegevjuht / CEO
k...@wavecom.ee | +372 5685 0001

WaveCom AS | ISO 9001, 27001 & 27017 Certified DC and Cloud services
Endla 16, Tallinn 10142 | www.wavecom.ee | www.facebook.com/wavecom.ee



Re: KVM FC shared storage

2024-05-20 Thread Vivek Kumar
We once tried to set up a PCS cluster with GFS2 in production. It required a
lot of expertise to manage the PCS cluster, and our experience with it was
very bad as well; after a year we had to move to NFS due to so many issues.
FC works better with XenServer: we ran XenServer with FC for almost 5-6 years,
and that without a single instance of downtime.

Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 
www.indiqus.com 






Re: KVM FC shared storage

2024-05-20 Thread Andreas S. Kerber
We're running FC with OCFS2 (OS: Oracle Linux 9) on a two-node demo cluster.
It works reasonably well, but of course it is not the same as ESXi. For now it
works well enough, but expanding that to an actual >10-node setup with
hundreds of VMs doesn't feel right.




RE: KVM FC shared storage

2024-05-20 Thread Alex Mattioli
If you are 100% FC/NVMe, then OCFS2 is probably the best (or least bad) option.

I have personally always tried to stick to NFS for KVM; it just works.

Reliability-wise, I personally consider Ceph to be more reliable (and more
supportable) than OCFS2.

Regards
Alex




Re: KVM FC shared storage

2024-05-20 Thread Dietrich, Alex
Hello Kristian,

Take this perspective with a grain of salt, given that our time spent on this
was in a proof-of-concept deployment.

We tested iSCSI connectivity with OCFS2 as the underlying technology providing
the shared mount point. We found that, over the course of host reboots, the
reliability with which the various components re-established connectivity was
not up to par with the reliability of the underlying storage technology
(iSCSI). In a few cases, I found OCFS2 unable to establish peering with the
other KVM hypervisors, which required rebuilding the OCFS2 cluster to
re-establish connectivity.
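
For anyone hitting the same symptom, a minimal health check along these lines
(a sketch assuming the standard ocfs2-tools/o2cb stack; the cluster name
"mycluster" and the device path are placeholder assumptions) can show whether
the o2cb membership layer or the mount itself is at fault:

    # Is the o2cb cluster stack up, and is the cluster registered?
    systemctl status o2cb          # or "/etc/init.d/o2cb status" on older installs
    o2cb cluster-status mycluster  # "mycluster" is a placeholder name

    # Which nodes does this host currently see as cluster members?
    cat /sys/kernel/config/cluster/mycluster/node/*/name 2>/dev/null

    # Any o2net/o2hb connection or heartbeat errors since boot?
    dmesg | grep -iE 'o2net|o2hb|ocfs2'

    # Is the volume a valid OCFS2 filesystem, and which nodes have it mounted?
    mounted.ocfs2 -f /dev/mapper/shared_lun   # device path is a placeholder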

The other mechanisms for creating a shared mountpoint were not tested, as we
pivoted away from the idea given our experience with OCFS2.

Thanks,
Alex





KVM FC shared storage

2024-05-20 Thread Kristian Liivak
Hi All,

Currently, we can see from the documentation that KVM supports Fibre Channel
via shared mountpoints.
Can someone recommend, or share their experience with, usable technical
solutions for shared mountpoints? There seem to be quite a few options, such
as the shared/clustered file systems GFS2, OCFS2, and cLVM.
From the information I have found, OCFS2 seems to be the best option.

P.S. I don't want to use any distributed storage like Ceph, etc.
We are partially moving away from VMware, and we believe in old-school FC NVMe
storage arrays, which are the most reliable, in my opinion.
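
For reference, a rough sketch of what the OCFS2-over-FC shared-mountpoint
approach looks like (the device path, label, and mount path are placeholder
assumptions, and /etc/ocfs2/cluster.conf is assumed to already list every KVM
host); the pool is then registered in CloudStack by choosing the
SharedMountPoint protocol and giving the same path for every host in the
cluster:

    # On one host: format the multipathed FC LUN with OCFS2, reserving
    # one node slot per KVM host in the cluster (4 here)
    mkfs.ocfs2 -L acs-primary -N 4 /dev/mapper/fc_lun01

    # On every host: bring up the o2cb cluster stack and mount the volume
    # at an identical path (this path is what CloudStack will use)
    systemctl enable --now o2cb
    mkdir -p /mnt/acs-primary
    mount -t ocfs2 /dev/mapper/fc_lun01 /mnt/acs-primary

    # Persist the mount across reboots
    echo '/dev/mapper/fc_lun01 /mnt/acs-primary ocfs2 _netdev,defaults 0 0' >> /etc/fstab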




Lugupidamisega / Best regards,

Kristian Liivak
Tegevjuht / CEO
k...@wavecom.ee | +372 5685 0001

WaveCom AS | ISO 9001, 27001 & 27017 Certified DC and Cloud services
Endla 16, Tallinn 10142 | www.wavecom.ee | www.facebook.com/wavecom.ee



Re: AW: AW: KVM with shared storage

2018-02-21 Thread Nux!
That's great news, thanks for sharing.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


AW: AW: KVM with shared storage

2018-02-21 Thread Swen - swen.io
I heard some rumors that CloudStack is on ScaleIO's roadmap. We, as the
community, need to make some buzz and create some business cases.

@Lucian: Thank you very much for your mail.

@Andrija: Thank you very much, too!

Mit freundlichen Grüßen / With kind regards,

Swen


Re: AW: KVM with shared storage

2018-02-20 Thread Andrija Panic
FYI, we eventually went with SolidFire.

The ACS integration development support from Mike T. and the actual support on
the hardware product are just astonishing (vs. all the shoddy support and
quality of other vendors' solutions I have experienced so far in my work), so
I simply have to share my delight that this kind of vendor support still
exists somewhere...


Re: AW: KVM with shared storage

2018-02-20 Thread Nux!
Hi Swen,

If I were to build a cloud now, I'd use NFS and/or local storage, depending on
requirements (HA, non-HA, etc.). I know these technologies, they are very
robust, and I can manage them on my own.

If you are willing to pay for a more sophisticated solution, then look either
at Ceph, with 42on.com for support (or others), or at StorPool (similar to
Ceph but proprietary, with excellent support).

Or go SolidFire if the budget allows.

My 2 pence.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


AW: KVM with shared storage

2018-02-19 Thread S. Brüseke - proIO GmbH
Thank you guys for your replies!
So the only stable options for KVM and shared storage at the moment are NFS or
Ceph?

@Dag: I am evaluating ScaleIO with KVM and it works great! Easy installation,
good performance and handling. I installed ScaleIO on the same servers that
KVM is installed on, because I took a hyper-converged approach. We do not have
any development skills in our company, but I am really interested in ScaleIO
drivers. Do you know of any other customers/users of CS who are looking for
them too?

Mit freundlichen Grüßen / With kind regards,

Swen


Re: KVM with shared storage

2018-02-19 Thread Andrija Panic
From my (production) experience a few years ago, even GFS2 as a clustered file
system was so unstable that it led to locks, causing the share to become 100%
unresponsive, and then you go and fix things any way you can (and this was
with only 3 nodes accessing the share with light write IO!)... all set up by
RH themselves back in the day (not cloud, some web project).

Avoid at all cost.

Again, I need to refer to the work of Mike Tutkowski from SolidFire, who
implemented "online storage migration" from NFS/Ceph to SolidFire. I hope that
someone (devs) could implement this globally, between NFS storages for a
start, if there is any interest, since you can very easily do this migration
manually with virsh by editing the XML to reference new volumes on new
storage; then you can live-migrate the VM while it's running, with zero
downtime (so no shared storage here, but live migration still works, together
with its storage).
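
A minimal sketch of that manual flow (the VM name, disk paths, and destination
host are placeholder assumptions, and the new volumes are presumed to already
exist on the destination storage):

    # Dump the current definition and point the disk source at the new volume
    virsh dumpxml myvm > myvm-new.xml
    sed -i 's|/mnt/old-nfs/myvm-root.qcow2|/mnt/new-nfs/myvm-root.qcow2|' myvm-new.xml

    # Live-migrate, copying disk contents along with the running VM;
    # --xml makes the destination use the edited definition
    virsh migrate --live --persistent --copy-storage-all \
          --xml myvm-new.xml myvm qemu+ssh://dest-host/system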

Cheers


Re: KVM with shared storage

2018-02-19 Thread Dag Sonstebo
Hi Swen,

+1 for Simon's comments. I did a fair bit of CLVM POC work a year ago and
ended up concluding it's just not fit for purpose: it's too unstable, and
STONITH was a challenge, to say the least. As you've seen from our blog
article, we have had a project in the past using OCFS2 – but it is again a
challenge to set up and to get running smoothly, plus you need to go
cherry-picking modules, which can be tricky since you are in Oracle territory.

For smooth running I recommend sticking to NFS if you can – if not, take a
look at Ceph or Gluster, or ScaleIO as Simon suggested.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue






dag.sonst...@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue



Re: KVM with shared storage

2018-02-19 Thread Simon Weller
So, CLVM used to be supported (and probably still works). I'd highly recommend
you avoid using a clustered file system if you can. See if your SAN supports
the ability to do exclusive locking.
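
For context, "exclusive locking" here usually means SCSI-3 persistent
reservations. A quick way to check whether a LUN supports them is sg_persist
from sg3_utils (a sketch; the device path and registration key are placeholder
assumptions):

    # Does the LUN report persistent-reservation capability?
    sg_persist --in --report-capabilities /dev/mapper/fc_lun01

    # Register a key for this host, then take a write-exclusive reservation
    sg_persist --out --register --param-sark=0x1234 /dev/mapper/fc_lun01
    sg_persist --out --reserve --param-rk=0x1234 --prout-type=1 /dev/mapper/fc_lun01

    # Confirm which key currently holds the reservation
    sg_persist --in --read-reservation /dev/mapper/fc_lun01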

There was chatter on the list a couple of years ago about a ScaleIO storage
driver, but I'm not sure whether there was any movement on that or not.


- Si







KVM with shared storage

2018-02-19 Thread S. Brüseke - proIO GmbH
Hi @all,

I am evaluating KVM for our CloudStack installation; we are using XenServer at
the moment. We want to use shared storage so we can do live migration of VMs.
For our KVM hosts I am using CentOS 7 with the standard kernel as the OS, and
for shared storage I am evaluating ScaleIO. The KVM and ScaleIO installation
is working great, and I can map a volume to all KVM hosts.
I end up with a block storage device (/dev/scinia) on all KVM hosts.

As far as I understand it, I need a cluster filesystem on this device so that
I can use it on all KVM hosts simultaneously. So my options are OCFS2, GFS2,
and CLVM. As documented, CLVM is not supported by CloudStack and is therefore
not really an option.
I found this ShapeBlue how-to for OCFS2
(http://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/),
but it is for CentOS 6, and I am unable to find a working OCFS2 module for the
CentOS 7 standard kernel.

So my question is: how are you implementing shared storage with KVM hosts in
your CloudStack installation, if it is not NFS? Thanks for your help!
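
If the missing OCFS2 module is the blocker, GFS2 is the usual alternative on
CentOS. A rough sketch, assuming a working corosync/pacemaker cluster named
"kvmclust" with dlm running on three hosts (the cluster name, filesystem name,
and mount path are placeholder assumptions):

    # Format the shared ScaleIO device with GFS2 using the DLM lock manager;
    # -t is <clustername>:<fsname>, -j allocates one journal per host (3 here)
    mkfs.gfs2 -p lock_dlm -t kvmclust:acs-primary -j 3 /dev/scinia

    # On every host: mount it at an identical path for CloudStack's
    # SharedMountPoint primary storage
    mkdir -p /mnt/acs-primary
    mount -t gfs2 /dev/scinia /mnt/acs-primary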

Mit freundlichen Grüßen / With kind regards,

Swen


