Re: AW: AW: KVM with shared storage

2018-02-21 Thread Nux!
That's great news, thanks for sharing.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "Swen - swen.io" <m...@swen.io>
> To: "users" <users@cloudstack.apache.org>
> Sent: Wednesday, 21 February, 2018 08:15:24
> Subject: AW: AW: KVM with shared storage

> I heard some rumors that CloudStack is on ScaleIO's roadmap. We, as the
> community, need to make some buzz and create some business cases.
> 
> @Lucian: Thank you very much for your mail.
> 
> @Andrija: Thank you very much, too!
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> -Original Message-
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Tuesday, 20 February 2018 22:26
> To: users <users@cloudstack.apache.org>
> Cc: S. Brüseke, proIO GmbH <s.brues...@proio.com>
> Subject: Re: AW: KVM with shared storage
> 
> FYI, we eventually went with SolidFire.
> 
> ACS integration development support from Mike T. and the actual support on the
> hardware product are just astonishing (vs. all the shitty support and
> quality of other vendors' solutions I have experienced so far in my work), so I
> simply have to share my delight that this kind of vendor
> (support) still exists somewhere...
> 
> On 20 February 2018 at 19:24, Nux! <n...@li.nux.ro> wrote:
> 
>> Hi Swen,
>>
>> If I were to build a cloud now, I'd use NFS and/or local storage,
>> depending on requirements (HA, non-HA, etc.). I know these technologies,
>> they are very robust and I can manage them on my own.
>>
>> If you are willing to pay for a more sophisticated solution, then
>> either look at Ceph with 42on.com for support (or others), or StorPool
>> (similar to Ceph but proprietary, with excellent support).
>>
>> Or go SolidFire if the $$$ allows.
>>
>> My 2 pence.
>>
>> Lucian
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro
>>
>> - Original Message -
>> > From: "S. Brüseke, proIO GmbH" <s.brues...@proio.com>
>> > To: "users" <users@cloudstack.apache.org>
>> > Sent: Monday, 19 February, 2018 15:04:13
>> > Subject: AW: KVM with shared storage
>>
>> > Thank you guys for your replies!
>> > So the only stable options for KVM with shared storage at the moment are
>> > NFS or Ceph?
>> >
>> > @Dag: I am evaluating ScaleIO with KVM and it works great! Easy
>> > installation, good performance and handling. I installed ScaleIO on the
>> > same servers that KVM runs on, since I took a hyper-converged approach.
>> > We do not have any development skills in our company, but I am really
>> > interested in ScaleIO drivers. Do you know any other customers/users of
>> > CS who are looking for this too?
>> >
>> > Mit freundlichen Grüßen / With kind regards,
>> >
>> > Swen
>> >
>> > -Original Message-
>> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
>> > Sent: Monday, 19 February 2018 15:39
>> > To: users <users@cloudstack.apache.org>
>> > Subject: Re: KVM with shared storage
>> >
>> > From my (production) experience a few years ago, even GFS2 as a clustered
>> > file system was SO unstable and led to locks, causing the share to become
>> > 100% unresponsive, and then you go and fix things any way you can (and
>> > this was with only 3 nodes accessing the share, with LIGHT write IO!)...
>> > all set up by RH themselves back in the day (not cloud, some web project).
>> >
>> > Avoid at all costs.
>> >
>> > Again, I need to refer to the work of Mike Tutkowski from SolidFire,
>> > where he implemented "online storage migration" from NFS/Ceph to
>> > SolidFire. I hope that someone (DEVs) could implement this globally
>> > between NFS storages to begin with, if there is any interest, since you
>> > can very easily do this migration manually with virsh, editing the XML to
>> > reference the new volumes on the new storage; then you can live migrate
>> > the VM while it's running, with zero downtime... (so zero shared storage
>> > here, but again, live migration works (with storage)).

AW: AW: KVM with shared storage

2018-02-21 Thread Swen - swen.io
I heard some rumors that CloudStack is on ScaleIO's roadmap. We, as the
community, need to make some buzz and create some business cases.

@Lucian: Thank you very much for your mail.

@Andrija: Thank you very much, too!

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Tuesday, 20 February 2018 22:26
To: users <users@cloudstack.apache.org>
Cc: S. Brüseke, proIO GmbH <s.brues...@proio.com>
Subject: Re: AW: KVM with shared storage

FYI, we eventually went with SolidFire.

ACS integration development support from Mike T. and the actual support on the
hardware product are just astonishing (vs. all the shitty support and
quality of other vendors' solutions I have experienced so far in my work), so I
simply have to share my delight that this kind of vendor
(support) still exists somewhere...

On 20 February 2018 at 19:24, Nux! <n...@li.nux.ro> wrote:

> Hi Swen,
>
> If I were to build a cloud now, I'd use NFS and/or local storage,
> depending on requirements (HA, non-HA, etc.). I know these technologies,
> they are very robust and I can manage them on my own.
>
> If you are willing to pay for a more sophisticated solution, then
> either look at Ceph with 42on.com for support (or others), or StorPool
> (similar to Ceph but proprietary, with excellent support).
>
> Or go SolidFire if the $$$ allows.
>
> My 2 pence.
>
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "S. Brüseke, proIO GmbH" <s.brues...@proio.com>
> > To: "users" <users@cloudstack.apache.org>
> > Sent: Monday, 19 February, 2018 15:04:13
> > Subject: AW: KVM with shared storage
>
> > Thank you guys for your replies!
> > So the only stable options for KVM with shared storage at the moment are
> > NFS or Ceph?
> >
> > @Dag: I am evaluating ScaleIO with KVM and it works great! Easy
> > installation, good performance and handling. I installed ScaleIO on the
> > same servers that KVM runs on, since I took a hyper-converged approach.
> > We do not have any development skills in our company, but I am really
> > interested in ScaleIO drivers. Do you know any other customers/users of
> > CS who are looking for this too?
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> > -Original Message-
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Monday, 19 February 2018 15:39
> > To: users <users@cloudstack.apache.org>
> > Subject: Re: KVM with shared storage
> >
> > From my (production) experience a few years ago, even GFS2 as a clustered
> > file system was SO unstable and led to locks, causing the share to become
> > 100% unresponsive, and then you go and fix things any way you can (and
> > this was with only 3 nodes accessing the share, with LIGHT write IO!)...
> > all set up by RH themselves back in the day (not cloud, some web project).
> >
> > Avoid at all costs.
> >
> > Again, I need to refer to the work of Mike Tutkowski from SolidFire,
> > where he implemented "online storage migration" from NFS/Ceph to
> > SolidFire. I hope that someone (DEVs) could implement this globally
> > between NFS storages to begin with, if there is any interest, since you
> > can very easily do this migration manually with virsh, editing the XML to
> > reference the new volumes on the new storage; then you can live migrate
> > the VM while it's running, with zero downtime... (so zero shared storage
> > here, but again, live migration works (with storage)).
> >
> > Cheers
> >
> > On 19 February 2018 at 15:21, Dag Sonstebo 
> > <dag.sonst...@shapeblue.com>
> > wrote:
> >
> >> Hi Swen,
> >>
> >> +1 for Simon’s comments. I did a fair bit of CLVM POC work a year ago
> >> and ended up concluding it’s just not fit for purpose; it’s too
> >> unstable, and STONITH was a challenge to say the least. As you’ve
> >> seen from our blog article, we have had a project in the past using
> >> OCFS2 – but it is again a challenge to set up and to get running
> >> smoothly, plus you need to go cherry-picking modules, which can be
> >> tricky since you are into Oracle territory.
> >>
> >> For smooth running I recommend sticking to NFS if you can – if not,
> >> take a look at Ceph or Gluster, or ScaleIO as Simon suggested.

Re: AW: KVM with shared storage

2018-02-20 Thread Andrija Panic
FYI, we eventually went with SolidFire.

ACS integration development support from Mike T. and the actual support on
the hardware product are just astonishing (vs. all the shitty support and
quality of other vendors' solutions I have experienced so far in my work),
so I simply have to share my delight that this kind of vendor
(support) still exists somewhere...

On 20 February 2018 at 19:24, Nux! <n...@li.nux.ro> wrote:

> Hi Swen,
>
> If I were to build a cloud now, I'd use NFS and/or local storage,
> depending on requirements (HA, non-HA, etc.). I know these technologies,
> they are very robust and I can manage them on my own.
>
> If you are willing to pay for a more sophisticated solution, then either
> look at Ceph with 42on.com for support (or others), or StorPool (similar to
> Ceph but proprietary, with excellent support).
>
> Or go SolidFire if the $$$ allows.
>
> My 2 pence.
>
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
> - Original Message -
> > From: "S. Brüseke, proIO GmbH" <s.brues...@proio.com>
> > To: "users" <users@cloudstack.apache.org>
> > Sent: Monday, 19 February, 2018 15:04:13
> > Subject: AW: KVM with shared storage
>
> > Thank you guys for your replies!
> > So the only stable options for KVM with shared storage at the moment are
> > NFS or Ceph?
> >
> > @Dag: I am evaluating ScaleIO with KVM and it works great! Easy
> > installation, good performance and handling. I installed ScaleIO on the
> > same servers that KVM runs on, since I took a hyper-converged approach.
> > We do not have any development skills in our company, but I am really
> > interested in ScaleIO drivers. Do you know any other customers/users of
> > CS who are looking for this too?
> >
> > Mit freundlichen Grüßen / With kind regards,
> >
> > Swen
> >
> > -Original Message-
> > From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> > Sent: Monday, 19 February 2018 15:39
> > To: users <users@cloudstack.apache.org>
> > Subject: Re: KVM with shared storage
> >
> > From my (production) experience a few years ago, even GFS2 as a clustered
> > file system was SO unstable and led to locks, causing the share to become
> > 100% unresponsive, and then you go and fix things any way you can (and
> > this was with only 3 nodes accessing the share, with LIGHT write IO!)...
> > all set up by RH themselves back in the day (not cloud, some web project).
> >
> > Avoid at all costs.
> >
> > Again, I need to refer to the work of Mike Tutkowski from SolidFire,
> > where he implemented "online storage migration" from NFS/Ceph to
> > SolidFire. I hope that someone (DEVs) could implement this globally
> > between NFS storages to begin with, if there is any interest, since you
> > can very easily do this migration manually with virsh, editing the XML to
> > reference the new volumes on the new storage; then you can live migrate
> > the VM while it's running, with zero downtime... (so zero shared storage
> > here, but again, live migration works (with storage)).
> >
> > Cheers
> >
> > On 19 February 2018 at 15:21, Dag Sonstebo <dag.sonst...@shapeblue.com>
> > wrote:
> >
> >> Hi Swen,
> >>
> >> +1 for Simon’s comments. I did a fair bit of CLVM POC work a year ago
> >> and ended up concluding it’s just not fit for purpose; it’s too
> >> unstable, and STONITH was a challenge to say the least. As you’ve
> >> seen from our blog article, we have had a project in the past using
> >> OCFS2 – but it is again a challenge to set up and to get running
> >> smoothly, plus you need to go cherry-picking modules, which can be
> >> tricky since you are into Oracle territory.
> >>
> >> For smooth running I recommend sticking to NFS if you can – if not,
> >> take a look at Ceph or Gluster, or ScaleIO as Simon suggested.
> >>
> >> Regards,
> >> Dag Sonstebo
> >> Cloud Architect
> >> ShapeBlue
> >>
> >> On 19/02/2018, 13:45, "Simon Weller" <swel...@ena.com.INVALID> wrote:
> >>
> >> So CLVM used to be supported (and probably still works). I'd highly
> >> recommend you avoid using a clustered file system if you can avoid it.
> >> See if your SAN supports exclusive locking.
> >>
> >> There was chatter on the list a couple of years ago about a ScaleIO
> >> storage driver, but I'm not sure whether there was any movement on that
> >> or not.

Re: AW: KVM with shared storage

2018-02-20 Thread Nux!
Hi Swen,

If I were to build a cloud now, I'd use NFS and/or local storage, depending
on requirements (HA, non-HA, etc.). I know these technologies, they are very
robust and I can manage them on my own.
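
For what it's worth, once a zone is up, pointing CloudStack at an NFS primary
store is a couple of lines with cloudmonkey — a rough sketch only, where the
pool name, NFS server, export path and the zone/pod/cluster UUIDs are all
placeholders:

  # placeholders throughout; this just drives the createStoragePool API
  cloudmonkey create storagepool name=nfs-primary-01 \
      zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> \
      scope=CLUSTER url=nfs://nfs01.example.com/export/primary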

If you are willing to pay for a more sophisticated solution, then either look
at Ceph with 42on.com for support (or others), or StorPool (similar to Ceph
but proprietary, with excellent support).

Or go SolidFire if the $$$ allows.

My 2 pence.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

- Original Message -
> From: "S. Brüseke, proIO GmbH" <s.brues...@proio.com>
> To: "users" <users@cloudstack.apache.org>
> Sent: Monday, 19 February, 2018 15:04:13
> Subject: AW: KVM with shared storage

> Thank you guys for your replies!
> So the only stable options for KVM with shared storage at the moment are NFS
> or Ceph?
> 
> @Dag: I am evaluating ScaleIO with KVM and it works great! Easy installation,
> good performance and handling. I installed ScaleIO on the same servers that
> KVM runs on, since I took a hyper-converged approach. We do not have any
> development skills in our company, but I am really interested in ScaleIO
> drivers. Do you know any other customers/users of CS who are looking for
> this too?
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> -Original Message-
> From: Andrija Panic [mailto:andrija.pa...@gmail.com]
> Sent: Monday, 19 February 2018 15:39
> To: users <users@cloudstack.apache.org>
> Subject: Re: KVM with shared storage
> 
> From my (production) experience a few years ago, even GFS2 as a clustered
> file system was SO unstable and led to locks, causing the share to become
> 100% unresponsive, and then you go and fix things any way you can (and this
> was with only 3 nodes accessing the share, with LIGHT write IO!)... all set
> up by RH themselves back in the day (not cloud, some web project).
> 
> Avoid at all costs.
> 
> Again, I need to refer to the work of Mike Tutkowski from SolidFire, where
> he implemented "online storage migration" from NFS/Ceph to SolidFire. I hope
> that someone (DEVs) could implement this globally between NFS storages to
> begin with, if there is any interest, since you can very easily do this
> migration manually with virsh, editing the XML to reference the new volumes
> on the new storage; then you can live migrate the VM while it's running,
> with zero downtime... (so zero shared storage here, but again, live
> migration works (with storage)).
> 
> Cheers
> 
> On 19 February 2018 at 15:21, Dag Sonstebo <dag.sonst...@shapeblue.com>
> wrote:
> 
>> Hi Swen,
>>
>> +1 for Simon’s comments. I did a fair bit of CLVM POC work a year ago
>> and ended up concluding it’s just not fit for purpose; it’s too unstable,
>> and STONITH was a challenge to say the least. As you’ve seen from our
>> blog article, we have had a project in the past using OCFS2 – but it is
>> again a challenge to set up and to get running smoothly, plus you need to
>> go cherry-picking modules, which can be tricky since you are into Oracle
>> territory.
>>
>> For smooth running I recommend sticking to NFS if you can – if not,
>> take a look at Ceph or Gluster, or ScaleIO as Simon suggested.
>>
>> Regards,
>> Dag Sonstebo
>> Cloud Architect
>> ShapeBlue
>>
>> On 19/02/2018, 13:45, "Simon Weller" <swel...@ena.com.INVALID> wrote:
>>
>> So CLVM used to be supported (and probably still works). I'd highly
>> recommend you avoid using a clustered file system if you can avoid it.
>> See if your SAN supports exclusive locking.
>>
>> There was chatter on the list a couple of years ago about a ScaleIO
>> storage driver, but I'm not sure whether there was any movement on that
>> or not.
>>
>>
>> - Si
>>
>>
>> 
>> From: S. Brüseke - proIO GmbH <s.brues...@proio.com>
>> Sent: Monday, February 19, 2018 4:57 AM
>> To: users@cloudstack.apache.org
>> Subject: KVM with shared storage
>>
>> Hi @all,
>>
>> I am evaluating KVM for our CloudStack installation. We are using
>> XenServer at the moment. We want to use shared storage so we can do live
>> migration of VMs.
>> For our KVM hosts I am using CentOS 7 with the standard kernel as the OS,
>> and for shared storage I am evaluating ScaleIO. The KVM and ScaleIO
>> installation is working great and I can map a volume to all KVM hosts. I
>> end up with a block-storage device (/dev/scinia) on all KVM hosts.

AW: KVM with shared storage

2018-02-19 Thread S . Brüseke - proIO GmbH
Thank you guys for your replies!
So the only stable options for KVM with shared storage at the moment are NFS
or Ceph?

@Dag: I am evaluating ScaleIO with KVM and it works great! Easy installation,
good performance and handling. I installed ScaleIO on the same servers that
KVM runs on, since I took a hyper-converged approach. We do not have any
development skills in our company, but I am really interested in ScaleIO
drivers. Do you know any other customers/users of CS who are looking for this too?

Mit freundlichen Grüßen / With kind regards,

Swen

-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Monday, 19 February 2018 15:39
To: users
Subject: Re: KVM with shared storage

From my (production) experience a few years ago, even GFS2 as a clustered
file system was SO unstable and led to locks, causing the share to become
100% unresponsive, and then you go and fix things any way you can (and this
was with only 3 nodes accessing the share, with LIGHT write IO!)... all set
up by RH themselves back in the day (not cloud, some web project).

Avoid at all costs.

Again, I need to refer to the work of Mike Tutkowski from SolidFire, where he
implemented "online storage migration" from NFS/Ceph to SolidFire. I hope that
someone (DEVs) could implement this globally between NFS storages to begin
with, if there is any interest, since you can very easily do this migration
manually with virsh, editing the XML to reference the new volumes on the new
storage; then you can live migrate the VM while it's running, with zero
downtime... (so zero shared storage here, but again, live migration works
(with storage)).
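
Roughly, that manual recipe looks like this — a sketch only: the VM name,
target host and volume paths below are placeholders, and same-size empty
target volumes must already exist on the new storage before the copy:

  # grab the running VM's definition
  virsh dumpxml i-2-42-VM > /tmp/i-2-42-VM.xml

  # edit each disk's <source> element to point at its new volume
  vi /tmp/i-2-42-VM.xml

  # live-migrate and copy the disks in one go; the VM keeps running
  virsh migrate --live --persistent --copy-storage-all \
      --xml /tmp/i-2-42-VM.xml i-2-42-VM qemu+ssh://new-kvm-host/system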

Cheers

On 19 February 2018 at 15:21, Dag Sonstebo 
wrote:

> Hi Swen,
>
> +1 for Simon’s comments. I did a fair bit of CLVM POC work a year ago
> and ended up concluding it’s just not fit for purpose; it’s too unstable,
> and STONITH was a challenge to say the least. As you’ve seen from our
> blog article, we have had a project in the past using OCFS2 – but it is
> again a challenge to set up and to get running smoothly, plus you need to
> go cherry-picking modules, which can be tricky since you are into Oracle
> territory.
>
> For smooth running I recommend sticking to NFS if you can – if not,
> take a look at Ceph or Gluster, or ScaleIO as Simon suggested.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 19/02/2018, 13:45, "Simon Weller"  wrote:
>
> So CLVM used to be supported (and probably still works). I'd highly
> recommend you avoid using a clustered file system if you can avoid it.
> See if your SAN supports exclusive locking.
>
> There was chatter on the list a couple of years ago about a ScaleIO
> storage driver, but I'm not sure whether there was any movement on
> that or not.
>
>
> - Si
>
>
> 
> From: S. Brüseke - proIO GmbH 
> Sent: Monday, February 19, 2018 4:57 AM
> To: users@cloudstack.apache.org
> Subject: KVM with shared storage
>
> Hi @all,
>
> I am evaluating KVM for our CloudStack installation. We are using
> XenServer at the moment. We want to use shared storage so we can do
> live migration of VMs.
> For our KVM hosts I am using CentOS 7 with the standard kernel as the OS,
> and for shared storage I am evaluating ScaleIO. The KVM and ScaleIO
> installation is working great and I can map a volume to all KVM hosts.
> I end up with a block-storage device (/dev/scinia) on all KVM hosts.
>
> As far as I understand, I need a cluster filesystem on this device so I
> can use it on all KVM hosts simultaneously. So my options are OCFS2, GFS2
> and CLVM. As documented, CLVM is not supported by CloudStack and therefore
> not really an option.
> I found this ShapeBlue howto for OCFS2
> (http://www.shapeblue.com/installing-and-configuring-an-ocfs2-clustered-file-system/),
> but it is for CentOS 6 and I am unable to find a working OCFS2 module for
> the CentOS 7 standard kernel.
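>
> If a module were available, the filesystem side itself would be short — a
> rough sketch only, assuming the o2cb stack from ocfs2-tools, with the
> label, slot count and mount point below as placeholders:
>
>   # after /etc/ocfs2/cluster.conf lists every KVM host, on each node:
>   systemctl enable --now o2cb
>
>   # format once, from a single node; -N = number of node slots
>   mkfs.ocfs2 -L CS_PRIMARY -N 8 /dev/scinia
>
>   # mount on every host, then hand the mount point to CloudStack as
>   # SharedMountPoint primary storage
>   mount -t ocfs2 /dev/scinia /mnt/primary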
>
> So my question is: how are you implementing shared storage with KVM
> hosts in your CloudStack installation, if not NFS? Thanks for your help!
>
> Mit freundlichen Grüßen / With kind regards,
>
> Swen
>
>
>
- proIO GmbH -
Managing Director (Geschäftsführer): Swen Brüseke