Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-14 Thread Lars Marowsky-Bree
On 2018-03-02T15:24:29, Joshua Chen  wrote:

> Dear all,
>   I wonder how we could support VM systems with Ceph storage (block
> device)? My colleagues are waiting for my answer for VMware (vSphere 5), and
> I myself use oVirt (RHEV). The default protocol is iSCSI.

Lean on VMware to stop being difficult about a native RBD driver. I can
guarantee you all the Ceph vendors are champing at the bit to get that
done, but ... the VMware licensing terms for their header files, SDKs,
etc. aren't exactly Open Source friendly.

iSCSI - yes, it works, but it's a work-around that introduces
significant performance penalties and architectural complexities.

If you are a VMWare customer, let them know you're considering moving
off to OpenStack et al to get Ceph supported better.

Imagine Linus scolding NVidia. Maybe eventually it'll help ;-)



Regards,
Lars

-- 
Architect SDS, Distinguished Engineer
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde



Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-06 Thread Martin Emrich

Hi!

On 02.03.18 at 13:27, Federico Lucifredi wrote:


We do speak to the Xen team every once in a while, but while there is 
interest in adding Ceph support on their side, I think we are somewhat 
down the list of their priorities.



Maybe things change with XCP-ng (https://xcp-ng.github.io). Now that
Citrix is removing features from 7.3 and cutting off users of the free
version, this project looks very interesting (trying to be to XenServer
what CentOS is/was to RHEL).


And they have Ceph RBD support on their ideas list already.

Cheers,

Martin


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-06 Thread Konstantin Shalygin

Dear all,
   I wonder how we could support VM systems with Ceph storage (block
device)? My colleagues are waiting for my answer for VMware (vSphere 5), and
I myself use oVirt (RHEV). The default protocol is iSCSI.
   I know that OpenStack/Cinder works well with Ceph, and Proxmox (just heard)
too. But currently we are using VMware and oVirt.


Your wise suggestion is appreciated

Cheers
Joshua



oVirt works with Ceph natively via librbd.
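For anyone curious what that looks like underneath: a KVM guest disk backed by
RBD is plain libvirt network-disk XML, roughly like the sketch below (pool,
image, monitor host and secret UUID are placeholder values):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/vm-disk-01'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>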



k



Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-05 Thread Robert Sander
On 05.03.2018 00:26, Adrian Saul wrote:
>  
> 
> We are using Ceph+RBD+NFS under pacemaker for VMware.  We are doing
> iSCSI using SCST but have not used it against VMware, just Solaris and
> Hyper-V.
> 
> 
> It generally works and performs well enough – the biggest issues are the
> clustering for iSCSI ALUA support and NFS failover, most of which we
> have developed in house – we still have not quite got that right yet.

You should look at setting up a Samba CTDB cluster with CephFS as the
backend. This can also be used with NFS, including NFS failover.
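A minimal sketch of the Samba side, assuming the vfs_ceph module is available
(share name, user id and paths are placeholders):

    [vms]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no

CTDB then only needs its recovery lock on shared storage (e.g. on CephFS) so
all nodes can arbitrate.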

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin





Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-04 Thread Adrian Saul

We are using Ceph+RBD+NFS under pacemaker for VMware.  We are doing iSCSI using 
SCST but have not used it against VMware, just Solaris and Hyper-V.

It generally works and performs well enough – the biggest issues are the 
clustering for iSCSI ALUA support and NFS failover, most of which we have 
developed in house – we still have not quite got that right yet.
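For reference, a rough sketch of such a pacemaker setup using the stock
resource agents; device paths, networks and IPs are placeholders, mapping the
RBD image has to happen before the Filesystem resource starts, and a real
cluster needs fencing on top:

    pcs resource create ceph_fs ocf:heartbeat:Filesystem \
        device=/dev/rbd/rbd/nfs0 directory=/export fstype=xfs
    pcs resource create nfs_export ocf:heartbeat:exportfs \
        clientspec=192.168.0.0/24 options=rw directory=/export fsid=1
    pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=24
    pcs constraint colocation add nfs_export with ceph_fs INFINITY
    pcs constraint colocation add nfs_ip with nfs_export INFINITY
    pcs constraint order ceph_fs then nfs_export
    pcs constraint order nfs_export then nfs_ip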



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Daniel 
K
Sent: Saturday, 3 March 2018 1:03 AM
To: Joshua Chen <csc...@asiaa.sinica.edu.tw>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph iSCSI is a prank?

There have been quite a few VMware/Ceph threads on the mailing list in the past.

One setup I've been toying with is a Linux guest running on the VMware host on
local storage, with the guest mounting a Ceph RBD with a filesystem on it, then
exporting that via NFS to the VMware host as a datastore.

Exporting CephFS via NFS to VMware is another option.

I'm not sure how well shared storage will work with either of these
configurations, but they work fairly well for single-host deployments.

There are also quite a few products that do support iSCSI on Ceph. SUSE
Enterprise Storage is a commercial one; PetaSAN is an open-source option.


On Fri, Mar 2, 2018 at 2:24 AM, Joshua Chen <csc...@asiaa.sinica.edu.tw> wrote:
Dear all,
  I wonder how we could support VM systems with Ceph storage (block device)? My
colleagues are waiting for my answer for VMware (vSphere 5), and I myself use
oVirt (RHEV). The default protocol is iSCSI.
  I know that OpenStack/Cinder works well with Ceph, and Proxmox (just heard)
too. But currently we are using VMware and oVirt.


Your wise suggestion is appreciated

Cheers
Joshua


On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten <m...@tuxis.nl> wrote:
Does Xen still not support RBD? Ceph has been around for years now!
With kind regards,

--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


From: Massimiliano Cuttini <m...@phoenixweb.it>
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Sent: 28-2-2018 13:53
Subject: [ceph-users] Ceph iSCSI is a prank?

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old technology
like iSCSI.
So sad.











Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Mike Christie
On 03/02/2018 01:24 AM, Joshua Chen wrote:
> Dear all,
>   I wonder how we could support VM systems with Ceph storage (block
> device)? My colleagues are waiting for my answer for VMware (vSphere 5)

We were having difficulties supporting older versions, because they will
drop down to using SCSI-2 reservations if an ATS request fails. Sometimes the
ATS-only setting worked and sometimes it didn't, and in older versions it
may not have existed. We have not had time to fully debug and QE it.
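For anyone experimenting with this: the ATS-only flag can, as far as I know,
be toggled per VMFS datastore from the ESXi side with something like the line
below (the device path is a placeholder, and the datastore must not be in use
while changing it):

    vmkfstools --configATSOnly 0 /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1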

When distributed PGR/reservation support is added it will not be an
issue, so we have been concentrating on that instead of debugging each
vSphere version individually, due to lack of time/resources.

SUSE's implementation does support distributed PGRs, so you should be
able to use it now.

Do you mean 5.0 or 5.5 btw? And if 5.0, just wondering why the older
version.

> and I myself use oVirt (RHEV). The default protocol is iSCSI.

For RHEV, RHCS iSCSI is supported with the current version. It works
like a normal old iSCSI target. There were no changes done to RHEV for
this, so I think it should work just fine with upstream/downstream ceph
iscsi and oVirt too.


>   I know that OpenStack/Cinder works well with Ceph, and Proxmox (just
> heard) too. But currently we are using VMware and oVirt.
> 
> 
> Your wise suggestion is appreciated
> 
> Cheers
> Joshua
> 
> 
> On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten wrote:
> 
> Does Xen still not support RBD? Ceph has been around for years now!
> 
> With kind regards,
> 
> -- 
> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | i...@tuxis.nl
> 
> 
> 
> *From:* Massimiliano Cuttini
> *To:* "ceph-users@lists.ceph.com"
> *Sent:* 28-2-2018 13:53
> *Subject:* [ceph-users] Ceph iSCSI is a prank?
> 
> I was building Ceph in order to use it with iSCSI.
> But I just saw from the docs that it needs:
> 
> *CentOS 7.5*
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
> 
> 
> *Kernel 4.17*
> (which is not available yet, it is still at 4.15.7)
> https://www.kernel.org/
> 
> So I guess there is no official support and this is just a bad
> prank.
> 
> Ceph has been ready to be used with S3 for many years.
> But it needs the kernel of the next century to work with such an
> old technology like iSCSI.
> So sad.
> 
> 
> 
> 
> 
> 



Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Daniel K
There have been quite a few VMware/Ceph threads on the mailing list in the
past.

One setup I've been toying with is a Linux guest running on the VMware host
on local storage, with the guest mounting a Ceph RBD with a filesystem on
it, then exporting that via NFS to the VMware host as a datastore.
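A rough sketch of that guest-side plumbing, with placeholder pool, image and
network names:

    # inside the Linux guest: map the RBD image and put a filesystem on it
    rbd map rbd/vmware-ds0 --id nfsgw
    mkfs.xfs /dev/rbd0
    mkdir -p /export/ds0 && mount /dev/rbd0 /export/ds0
    # export it to the ESXi host(s)
    echo '/export/ds0 192.168.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra

The ESXi side then mounts that export as a regular NFS datastore.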

Exporting CephFS via NFS to VMware is another option.
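With nfs-ganesha that can be a small export block using the CEPH FSAL; a
sketch with placeholder values (a kernel CephFS mount re-exported by knfsd is
the other common variant):

    EXPORT {
        Export_Id = 1;
        Path = /;
        Pseudo = /cephfs;
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL { Name = CEPH; }
    }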

I'm not sure how well shared storage will work with either of these
configurations, but they work fairly well for single-host deployments.

There are also quite a few products that do support iSCSI on Ceph. SUSE
Enterprise Storage is a commercial one; PetaSAN is an open-source option.


On Fri, Mar 2, 2018 at 2:24 AM, Joshua Chen 
wrote:

> Dear all,
>   I wonder how we could support VM systems with Ceph storage (block
> device)? My colleagues are waiting for my answer for VMware (vSphere 5), and
> I myself use oVirt (RHEV). The default protocol is iSCSI.
>   I know that OpenStack/Cinder works well with Ceph, and Proxmox (just
> heard) too. But currently we are using VMware and oVirt.
>
>
> Your wise suggestion is appreciated
>
> Cheers
> Joshua
>
>
> On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten  wrote:
>
>> Does Xen still not support RBD? Ceph has been around for years now!
>>
>> With kind regards,
>>
>> --
>> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
>> Mark Schouten | Tuxis Internet Engineering
>> KvK: 61527076 | http://www.tuxis.nl/
>> T: 0318 200208 | i...@tuxis.nl
>>
>>
>>
>> * From: * Massimiliano Cuttini 
>> * To: * "ceph-users@lists.ceph.com" 
>> * Sent: * 28-2-2018 13:53
>> * Subject: * [ceph-users] Ceph iSCSI is a prank?
>>
>> I was building Ceph in order to use it with iSCSI.
>> But I just saw from the docs that it needs:
>>
>> *CentOS 7.5*
>> (which is not available yet, it's still at 7.4)
>> https://wiki.centos.org/Download
>>
>> *Kernel 4.17*
>> (which is not available yet, it is still at 4.15.7)
>> https://www.kernel.org/
>>
>> So I guess there is no official support and this is just a bad prank.
>>
>> Ceph has been ready to be used with S3 for many years.
>> But it needs the kernel of the next century to work with such an old
>> technology like iSCSI.
>> So sad.
>>
>>
>>
>>
>>
>>
>>
>
>
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins



On 02/03/2018 13:27, Federico Lucifredi wrote:


On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins wrote:




Hi Federico,

Hi Max,

On Feb 28, 2018, at 10:06 AM, Max Cuttins wrote:

This is true, but having something that just works in
order to have minimum compatibility and start to
decommission old disks is something you should think about.
You'll have ages to improve and get better
performance. But you should allow users to cut off old
solutions as soon as possible while waiting for a better
implementation.

I like your thinking, but I wonder why doesn't a
locally-mounted kRBD volume meet this need? It seems easier
than iSCSI, and I would venture it would show twice the
performance, at least in some cases.


Simply because it's not possible.
XenServer is closed. You cannot add RPMs (so, basically, install
Ceph) without hacking the distribution by removing the limitations on YUM.
And this is what we do here:
https://github.com/rposudnevskiy/RBDSR



Understood. Thanks Max, I did not realize you were also speaking about
Xen; I thought you meant to find an arbitrary non-virtual disk
replacement strategy ("start to dismiss old disk").
I need to find an arbitrary non-virtual disk replacement strategy
compatible with Xen.





We do speak to the Xen team every once in a while, but while there is 
interest in adding Ceph support on their side, I think we are somewhat 
down the list of their priorities.


Thanks -F


They are somewhat interested in raising the limits instead of
improving their hypervisor.

Xen 7.3 is _*exactly*_ Xen 7.2 with new limitations and no added features.
It's a shame.


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Federico Lucifredi
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins  wrote:

>
>
> Hi Federico,
>
> Hi Max,
>>
>> On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:
>>>
>>> This is true, but having something that just works in order to have
>>> minimum compatibility and start to decommission old disks is something you
>>> should think about.
>>> You'll have ages to improve and get better performance. But you
>>> should allow users to cut off old solutions as soon as possible while
>>> waiting for a better implementation.
>>>
>> I like your thinking, but I wonder why doesn't a locally-mounted kRBD
>> volume meet this need? It seems easier than iSCSI, and I would venture it
>> would show twice the performance, at least in some cases.
>>
>
> Simply because it's not possible.
> XenServer is closed. You cannot add RPMs (so, basically, install Ceph)
> without hacking the distribution by removing the limitations on YUM.
> And this is what we do here: https://github.com/rposudnevskiy/RBDSR


Understood. Thanks Max, I did not realize you were also speaking about Xen;
I thought you meant to find an arbitrary non-virtual disk replacement
strategy ("start to dismiss old disk").

We do speak to the Xen team every once in a while, but while there is
interest in adding Ceph support on their side, I think we are somewhat down
the list of their priorities.

Thanks -F


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Max Cuttins



Hi Federico,


Hi Max,


On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:

This is true, but having something that just works in order to have minimum
compatibility and start to decommission old disks is something you should think
about.
You'll have ages to improve and get better performance. But you should
allow users to cut off old solutions as soon as possible while waiting for a
better implementation.

I like your thinking, but I wonder why doesn't a locally-mounted kRBD volume
meet this need? It seems easier than iSCSI, and I would venture it would show
twice the performance, at least in some cases.


Simply because it's not possible.
XenServer is closed. You cannot add RPMs (so, basically, install Ceph)
without hacking the distribution by removing the limitations on YUM.

And this is what we do here: https://github.com/rposudnevskiy/RBDSR

In order to make live migration work, the VHD/VDI driver needs to be
rewritten (because that driver is monolithically fused with iSCSI and HBA).
So any implementation is more than just a plugin or a
class extension; it's an entire rewrite of the SR manager.

Is it working? Yes.
Is it suitable for production? I think not.




Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Joshua Chen
Dear all,
  I wonder how we could support VM systems with Ceph storage (block
device)? My colleagues are waiting for my answer for VMware (vSphere 5), and
I myself use oVirt (RHEV). The default protocol is iSCSI.
  I know that OpenStack/Cinder works well with Ceph, and Proxmox (just heard)
too. But currently we are using VMware and oVirt.


Your wise suggestion is appreciated

Cheers
Joshua


On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten  wrote:

> Does Xen still not support RBD? Ceph has been around for years now!
>
> With kind regards,
>
> --
> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | i...@tuxis.nl
>
>
>
> * From: * Massimiliano Cuttini 
> * To: * "ceph-users@lists.ceph.com" 
> * Sent: * 28-2-2018 13:53
> * Subject: * [ceph-users] Ceph iSCSI is a prank?
>
> I was building Ceph in order to use it with iSCSI.
> But I just saw from the docs that it needs:
>
> *CentOS 7.5*
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
>
> *Kernel 4.17*
> (which is not available yet, it is still at 4.15.7)
> https://www.kernel.org/
>
> So I guess there is no official support and this is just a bad prank.
>
> Ceph has been ready to be used with S3 for many years.
> But it needs the kernel of the next century to work with such an old
> technology like iSCSI.
> So sad.
>
>
>
>
>
>
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Milanov, Radoslav Nikiforov
Probably priorities have changed since Red Hat acquired Ceph/Inktank (
https://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph
)?
Why support a competing hypervisor? Long term, switching to KVM seems to be the
solution.

- Rado

From: ceph-users <ceph-users-boun...@lists.ceph.com> On Behalf Of Max Cuttins
Sent: Thursday, March 1, 2018 7:27 AM
To: David Turner <drakonst...@gmail.com>; dilla...@redhat.com
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] Ceph iSCSI is a prank?

On 28/02/2018 18:16, David Turner wrote:

My thought is that in 4 years you could have migrated to a hypervisor that will
have better performance with Ceph than an added iSCSI layer. I won't deploy VMs
for Ceph on anything that won't allow librbd to work. Anything else is added
complexity and reduced performance.

You are definitely right: I have to change hypervisors. So why didn't I do this
before?
Because both Citrix/Xen and Inktank/Ceph claimed that they were ready to add
support for Xen in 2013!

It was 2013:
XEN claimed to support Ceph:
https://www.citrix.com/blogs/2013/07/08/xenserver-tech-preview-incorporating-ceph-object-stores-is-now-available/
Inktank said the support for Xen was almost ready:
https://ceph.com/geen-categorie/xenserver-support-for-rbd/

And also iSCSI was close (it was 2014):
https://ceph.com/geen-categorie/updates-to-ceph-tgt-iscsi-support/

So why change hypervisors if everybody tells you that compatibility is almost
ready to be deployed?
... but then "just" 4 years pass and XEN and Ceph never become
compatible...

It's obvious that Citrix is no longer believable.
However, at least Ceph should have added iSCSI to its platform during all
these years.
Ceph is awesome, so why not just kill all the competitors and make it compatible
even with washing machines?





Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

Almost...


On 01/03/2018 16:17, Heðin Ejdesgaard Møller wrote:

Hello,

I would like to point out that we are running Ceph + redundant iSCSI gateways,
connecting the LUNs to an esxi+vcsa-6.5 cluster, with Red Hat support.

We did encounter a few bumps on the road to production, but those got
fixed by Red Hat engineering and are included in the rhel7.5 and 4.17
kernel.

I can recommend having a look at https://github.com/open-iscsi if you
want to contribute on the userspace side.

Regards
Heðin Ejdesgaard
Synack Sp/f

Direct: +298 77 11 12
Phone:  +298 20 11 11
E-Mail: h...@synack.fo


On Thu, 2018-03-01 at 13:33 +0100, Kai Wagner wrote:

I totally understand and see your frustration here, but you have to keep
in mind that this is an Open Source project with a lot of volunteers.
If you have a really urgent need, you have the possibility to develop
such a feature on your own, or you have to pay someone who could do the
work for you.

It's a long journey, but it seems like it is finally coming to an end.


On 03/01/2018 01:26 PM, Max Cuttins wrote:

It's obvious that Citrix is no longer believable.
However, at least Ceph should have added iSCSI to its platform during
all these years.
Ceph is awesome, so why not just kill all the competitors and make it
compatible even with washing machines?





Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Heðin Ejdesgaard Møller
Hello,

I would like to point out that we are running Ceph + redundant iSCSI gateways,
connecting the LUNs to an esxi+vcsa-6.5 cluster, with Red Hat support.
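For reference, the upstream ceph-iscsi tooling drives this kind of setup with
gwcli; a hedged sketch (IQNs, hostnames, IPs and sizes are placeholders, and
the exact paths may differ between versions):

    # on a configured gateway node
    gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.org.example.gw:ceph-gw
    /> cd /iscsi-targets/iqn.2003-01.org.example.gw:ceph-gw/gateways
    /gateways> create gw1 192.168.1.11
    /gateways> create gw2 192.168.1.12
    /> cd /disks
    /disks> create pool=rbd image=esx-lun0 size=500G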

We did encounter a few bumps on the road to production, but those got
fixed by Red Hat engineering and are included in the rhel7.5 and 4.17
kernel.

I can recommend having a look at https://github.com/open-iscsi if you
want to contribute on the userspace side.

Regards
Heðin Ejdesgaard
Synack Sp/f 

Direct: +298 77 11 12
Phone:  +298 20 11 11
E-Mail: h...@synack.fo


On Thu, 2018-03-01 at 13:33 +0100, Kai Wagner wrote:
> I totally understand and see your frustration here, but you have to keep
> in mind that this is an Open Source project with a lot of volunteers.
> If you have a really urgent need, you have the possibility to develop
> such a feature on your own, or you have to pay someone who could do the
> work for you.
> 
> It's a long journey, but it seems like it is finally coming to an end.
> 
> 
> On 03/01/2018 01:26 PM, Max Cuttins wrote:
> > It's obvious that Citrix is no longer believable.
> > However, at least Ceph should have added iSCSI to its platform during
> > all these years.
> > Ceph is awesome, so why not just kill all the competitors and make it
> > compatible even with washing machines?
> 


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Donny Davis
I wonder when EMC/NetApp are going to start giving away production-ready
bits that fit into your architecture.

At least support for this feature is coming in the near term.

I say keep on keepin' on. Kudos to the Ceph team (and maybe more teams) for
taking care of the hard stuff for us.




On Thu, Mar 1, 2018 at 9:42 AM, Samuel Soulard 
wrote:

> Hi Jason,
>
> That's awesome.  Keep up the good work guys, we all love the work you are
> doing with that software!!
>
> Sam
>
> On Mar 1, 2018 09:11, "Jason Dillaman"  wrote:
>
>> It's very high on our priority list to get a solution merged in the
>> upstream kernel. There was a proposal to use DLM to distribute the PGR
>> state between target gateways (a la the SCST target) and it's quite
>> possible that would have the least amount of upstream resistance since
>> it would work for all backends and not just RBD. We, of course, would
>> love to just use the Ceph cluster to distribute the state information
>> instead of requiring a bolt-on DLM (with its STONITH error handling),
>> but we'll take what we can get (merged).
>>
>> I believe SUSE uses a custom downstream kernel that stores the PGR
>> state in the Ceph cluster but requires two round-trips to the cluster
>> for each IO (first to verify the PGR state and the second to perform
>> the IO). The PetaSAN project is built on top of these custom kernel
>> patches as well, I believe.
>>
>> On Thu, Mar 1, 2018 at 8:50 AM, Samuel Soulard 
>> wrote:
>> > On another note, is there any work being done for persistent group
>> > reservations support for Ceph/LIO compatibility? Or just a rough
>> estimate :)
>> >
>> > Would love to see Redhat/Ceph support this type of setup.  I know Suse
>> > supports it as of late.
>> >
>> > Sam
>> >
>> > On Mar 1, 2018 07:33, "Kai Wagner"  wrote:
>> >>
>> >> I totally understand and see your frustration here, but you have to keep
>> >> in mind that this is an Open Source project with a lot of volunteers.
>> >> If you have a really urgent need, you have the possibility to develop
>> >> such a feature on your own, or you have to pay someone who could do the
>> >> work for you.
>> >>
>> >> It's a long journey, but it seems like it is finally coming to an end.
>> >>
>> >>
>> >> On 03/01/2018 01:26 PM, Max Cuttins wrote:
>> >> > It's obvious that Citrix is no longer believable.
>> >> > However, at least Ceph should have added iSCSI to its platform during
>> >> > all these years.
>> >> > Ceph is awesome, so why not just kill all the competitors and make it
>> >> > compatible even with washing machines?
>> >>
>> >> --
>> >> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>> HRB
>> >> 21284 (AG Nürnberg)
>> >>
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Jason
>>
>
>
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Samuel Soulard
Hi Jason,

That's awesome.  Keep up the good work guys, we all love the work you are
doing with that software!!

Sam

On Mar 1, 2018 09:11, "Jason Dillaman"  wrote:

> It's very high on our priority list to get a solution merged in the
> upstream kernel. There was a proposal to use DLM to distribute the PGR
> state between target gateways (a la the SCST target) and it's quite
> possible that would have the least amount of upstream resistance since
> it would work for all backends and not just RBD. We, of course, would
> love to just use the Ceph cluster to distribute the state information
> instead of requiring a bolt-on DLM (with its STONITH error handling),
> but we'll take what we can get (merged).
>
> I believe SUSE uses a custom downstream kernel that stores the PGR
> state in the Ceph cluster but requires two round-trips to the cluster
> for each IO (first to verify the PGR state and the second to perform
> the IO). The PetaSAN project is built on top of these custom kernel
> patches as well, I believe.
>
> On Thu, Mar 1, 2018 at 8:50 AM, Samuel Soulard 
> wrote:
> > On another note, is there any work being done for persistent group
> > reservations support for Ceph/LIO compatibility? Or just a rough
> estimate :)
> >
> > Would love to see Redhat/Ceph support this type of setup.  I know Suse
> > supports it as of late.
> >
> > Sam
> >
> > On Mar 1, 2018 07:33, "Kai Wagner"  wrote:
> >>
> >> I totally understand and see your frustration here, but you have to keep
> >> in mind that this is an Open Source project with a lot of volunteers.
> >> If you have a really urgent need, you have the possibility to develop
> >> such a feature on your own, or you have to pay someone who could do the
> >> work for you.
> >>
> >> It's a long journey, but it seems like it is finally coming to an end.
> >>
> >>
> >> On 03/01/2018 01:26 PM, Max Cuttins wrote:
> >> > It's obvious that Citrix is no longer believable.
> >> > However, at least Ceph should have added iSCSI to its platform during
> >> > all these years.
> >> > Ceph is awesome, so why not just kill all the competitors and make it
> >> > compatible even with washing machines?
> >>
> >> --
> >> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB
> >> 21284 (AG Nürnberg)
> >>
> >>
> >>
> >
>
>
>
> --
> Jason
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Ric Wheeler

On 02/28/2018 10:06 AM, Max Cuttins wrote:



On 28/02/2018 15:19, Jason Dillaman wrote:

On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini  
wrote:

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

The necessary kernel changes actually are included as part of 4.16-rc1
which is available now. We also offer a pre-built test kernel with the
necessary fixes here [1].

This is a release candidate and it's not ready for production.
Does anybody know when the kernel 4.16 will be ready for production?


Every user/customer has a different definition of "production" - most enterprise 
users will require their distribution vendor to have this prebuilt into a 
product with commercial support.


If you are looking at using brand new kernels in production for your definition 
of production without vendor support, you need to have the personal expertise 
and staffing required to validate production readiness and carry out support 
yourself.


As others have said, that is the joy of open source - you get to make that call 
on your own, but that does come at a price (spending money for vendor support or 
spending your time and expertise to do it on your own :))


Regards,

Ric







So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old technology
like iSCSI.
So sad.

Unfortunately, kernel vs userspace have very different development
timelines. We have no interest in maintaining out-of-tree patchsets to
the kernel.


This is true, but having something that just works in order to have minimum
compatibility and start to decommission old disks is something you should think
about.
You'll have ages to improve and get better performance. But you
should allow users to cut off old solutions as soon as possible while waiting
for a better implementation.




Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread David Disseldorp
On Thu, 1 Mar 2018 09:11:21 -0500, Jason Dillaman wrote:

> It's very high on our priority list to get a solution merged in the
> upstream kernel. There was a proposal to use DLM to distribute the PGR
> state between target gateways (a la the SCST target) and it's quite
> possible that would have the least amount of upstream resistance since
> it would work for all backends and not just RBD. We, of course, would
> love to just use the Ceph cluster to distribute the state information
> instead of requiring a bolt-on DLM (with its STONITH error handling),
> but we'll take what we can get (merged).

I'm also very keen on having a proper upstream solution for this. My
preference is still to proceed with PR state backed by Ceph.

> I believe SUSE uses a custom downstream kernel that stores the PGR
> state in the Ceph cluster but requires two round-trips to the cluster
> for each IO (first to verify the PGR state and the second to perform
> the IO). The PetaSAN project is built on top of these custom kernel
> patches as well, I believe.

Maged from PetaSAN added support for rados-notify based PR state
retrieval. Still, in the end the PR patch-set is too intrusive to make it
upstream, so we need to work on a proper upstreamable solution, with
tcmu-runner or otherwise.

Cheers, David


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Federico Lucifredi
Hi Max,

> On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:
> 
> This is true, but having something that just works in order to have minimum
> compatibility and start to decommission old disks is something you should
> think about.
> You'll have ages to improve and get better performance. But you
> should allow users to cut off old solutions as soon as possible while waiting
> for a better implementation.

I like your thinking, but I wonder why doesn't a locally-mounted kRBD volume
meet this need? It seems easier than iSCSI, and I would venture it would show
twice the performance, at least in some cases.

iSCSI in ALUA mode may be as close as it gets to scale-out iSCSI in software.
It is not bad, but you pay for the extra hops in performance and complexity. So
it totally makes sense where kRBD and libRBD are not (yet) available, like
VMware and Windows, but not where native drivers are available.
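On the ESXi side, pointing the software initiator at such a gateway is just a
discovery address plus a rescan; adapter name and IP below are placeholders:

    esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.11:3260
    esxcli storage core adapter rescan -A vmhba64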

And about Xen... patches are accepted in this project — folks who really care 
should go out and code it.

Best-F


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Jason Dillaman
It's very high on our priority list to get a solution merged in the
upstream kernel. There was a proposal to use DLM to distribute the PGR
state between target gateways (a la the SCST target) and it's quite
possible that would have the least amount of upstream resistance since
it would work for all backends and not just RBD. We, of course, would
love to just use the Ceph cluster to distribute the state information
instead of requiring a bolt-on DLM (with its STONITH error handling),
but we'll take what we can get (merged).

I believe SUSE uses a custom downstream kernel that stores the PGR
state in the Ceph cluster but requires two round-trips to the cluster
for each IO (first to verify the PGR state and the second to perform
the IO). The PetaSAN project is built on top of these custom kernel
patches as well, I believe.

On Thu, Mar 1, 2018 at 8:50 AM, Samuel Soulard  wrote:
> On another note, is there any work being done for persistent group
> reservations support for Ceph/LIO compatibility? Or just a rough estimate :)
>
> Would love to see Redhat/Ceph support this type of setup.  I know Suse
> supports it as of late.
>
> Sam
>
> On Mar 1, 2018 07:33, "Kai Wagner"  wrote:
>>
>> I totally understand and see your frustration here, but you have to keep
>> in mind that this is an Open Source project with a lot of volunteers.
>> If you have a really urgent need, you have the possibility to develop
>> such a feature on your own, or you have to pay someone who could do the
>> work for you.
>>
>> It's a long journey, but it seems like it is finally coming to an end.
>>
>>
>> On 03/01/2018 01:26 PM, Max Cuttins wrote:
>> > It's obvious that Citrix is no longer believable.
>> > However, at least Ceph should have added iSCSI to its platform during
>> > all these years.
>> > Ceph is awesome, so why not just kill all the competitors and make it
>> > compatible even with washing machines?
>>
>> --
>> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB
>> 21284 (AG Nürnberg)
>>
>>
>>
>



-- 
Jason


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Samuel Soulard
On another note, is there any work being done for persistent group
reservations support for Ceph/LIO compatibility? Or just a rough estimate :)

Would love to see Redhat/Ceph support this type of setup.  I know Suse
supports it as of late.

Sam

On Mar 1, 2018 07:33, "Kai Wagner"  wrote:

> I totally understand and see your frustration here, but you have to keep
> in mind that this is an Open Source project with a lot of volunteers.
> If you have a really urgent need, you have the possibility to develop
> such a feature on your own, or you have to pay someone who could do the
> work for you.
>
> It's a long journey, but it seems like it is finally coming to an end.
>
>
> On 03/01/2018 01:26 PM, Max Cuttins wrote:
> > It's obvious that Citrix is no longer believable.
> > However, at least Ceph should have added iSCSI to its platform during
> > all these years.
> > Ceph is awesome, so why not just kill all the competitors and make it
> > compatible even with washing machines?
>
> --
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB
> 21284 (AG Nürnberg)
>
>
>
>
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Kai Wagner
I totally understand and see your frustration here, but you have to keep
in mind that this is an Open Source project with a lot of volunteers.
If you have a really urgent need, you have the possibility to develop
such a feature on your own, or you have to pay someone who could do the
work for you.

It's a long journey, but it seems like it is finally coming to an end.


On 03/01/2018 01:26 PM, Max Cuttins wrote:
> It's obvious that Citrix is no longer believable.
> However, at least Ceph should have added iSCSI to its platform during
> all these years.
> Ceph is awesome, so why not just kill all the competitors and make it
> compatible even with washing machines?

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

On 28/02/2018 18:16, David Turner wrote:
My thought is that in 4 years you could have migrated to a hypervisor
that will have better performance with Ceph than an added iSCSI layer.
I won't deploy VMs for Ceph on anything that won't allow librbd to
work. Anything else is added complexity and reduced performance.




You are definitely right: I have to change hypervisors. So why didn't I do
this before?
Because both Citrix/Xen and Inktank/Ceph claimed that they were ready to
add support for Xen in _*2013*_!


It was 2013:
XEN claimed to support Ceph:
https://www.citrix.com/blogs/2013/07/08/xenserver-tech-preview-incorporating-ceph-object-stores-is-now-available/
Inktank said the support for Xen was almost ready:
https://ceph.com/geen-categorie/xenserver-support-for-rbd/


And also iSCSI was close (it was 2014):
https://ceph.com/geen-categorie/updates-to-ceph-tgt-iscsi-support/

So why change hypervisors if everybody tells you that compatibility is
almost ready to be deployed?
... but then "just" 4 years pass and XEN and Ceph never become
compatible...


It's obvious that Citrix is no longer believable.
However, at least Ceph should have added iSCSI to its platform during
all these years.
Ceph is awesome, so why not just kill all the competitors and make it
compatible even with washing machines?







Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Max Cuttins

Xen by Citrix used to be a very good hypervisor.
However, they used a very old kernel until 7.1.

The distribution doesn't allow you to add packages from yum, so you need
to hack it.

I helped to develop the installer of the unofficial plugin:
https://github.com/rposudnevskiy/RBDSR

However, I still don't feel safe using that in production.
So I need to fall back to iSCSI.



On 28/02/2018 20:16, Mark Schouten wrote:

Does Xen still not support RBD? Ceph has been around for years now!

With kind regards,

--
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



*From:* Massimiliano Cuttini 
*To:* "ceph-users@lists.ceph.com" 
*Sent:* 28-2-2018 13:53
*Subject:* [ceph-users] Ceph iSCSI is a prank?

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

*CentOS 7.5*
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

*Kernel 4.17*
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old
technology like iSCSI.
So sad.










Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Mark Schouten
Does Xen still not support RBD? Ceph has been around for years now!


With kind regards,

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
Mark Schouten  | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl



 From:   Massimiliano Cuttini
 To:   "ceph-users@lists.ceph.com"
 Sent:   28-2-2018 13:53
 Subject:   [ceph-users] Ceph iSCSI is a prank?


 
I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old
technology like iSCSI.
So sad.
  

  

 



Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread David Turner
My thought is that in 4 years you could have migrated to a hypervisor that
will have better performance with Ceph than an added iSCSI layer. I won't
deploy VMs for Ceph on anything that won't allow librbd to work. Anything
else is added complexity and reduced performance.

On Wed, Feb 28, 2018, 11:49 AM Jason Dillaman  wrote:

> On Wed, Feb 28, 2018 at 9:17 AM, Max Cuttins  wrote:
> > Sorry for being rude, Ross,
> >
> > I have been following Ceph since 2014, waiting for iSCSI support in order
> > to use it with Xen.
>
> What OS are you using in Dom0 that you cannot just directly use krbd?
> iSCSI is going to add an extra hop so it will never be able to match
> the performance of something that is directly talking to the OSDs.
>
> > When it finally seems to have been implemented, the OS requirements are
> > unrealistic.
> > Seems like a bad prank: 4 years waiting for this... and still no true
> > support yet.
> >
> >
> >
> >
> >
> > On 28/02/2018 14:11, Marc Roos wrote:
> >>
> >>   Hi Massimiliano, have an espresso. You know the Indians have a nice
> >> saying:
> >>
> >> "Everything will be good at the end. If it is not good, it is still not
> >> the end."
> >>
> >>
> >>
> >> -Original Message-
> >> From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
> >> Sent: woensdag 28 februari 2018 13:53
> >> To: ceph-users@lists.ceph.com
> >> Subject: [ceph-users] Ceph iSCSI is a prank?
> >>
> >> I was building Ceph in order to use it with iSCSI.
> >> But I just saw from the docs that it needs:
> >>
> >> CentOS 7.5
> >> (which is not available yet, it's still at 7.4)
> >> https://wiki.centos.org/Download
> >>
> >> Kernel 4.17
> >> (which is not available yet, it is still at 4.15.7)
> >> https://www.kernel.org/
> >>
> >> So I guess there is no official support and this is just a bad prank.
> >>
> >> Ceph has been ready to be used with S3 for many years.
> >> But it needs the kernel of the next century to work with such an old
> >> technology like iSCSI.
> >> So sad.
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >
>
>
>
> --
> Jason
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 9:17 AM, Max Cuttins  wrote:
> Sorry for being rude, Ross,
>
> I have been following Ceph since 2014, waiting for iSCSI support in order to
> use it with Xen.

What OS are you using in Dom0 that you cannot just directly use krbd?
iSCSI is going to add an extra hop so it will never be able to match
the performance of something that is directly talking to the OSDs.
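For comparison, the krbd path is a one-liner on any reasonably recent Linux;
pool, image and user names are placeholders:

    rbd map rbd/vm-disk-01 --id admin
    # the image shows up as e.g. /dev/rbd0, usable like any local block device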

> When it finally seems to have been implemented, the OS requirements are
> unrealistic.
> Seems like a bad prank: 4 years waiting for this... and still no true support
> yet.
>
>
>
>
>
> On 28/02/2018 14:11, Marc Roos wrote:
>>
>>   Hi Massimiliano, have an espresso. You know the Indians have a nice
>> saying:
>>
>> "Everything will be good at the end. If it is not good, it is still not
>> the end."
>>
>>
>>
>> -Original Message-
>> From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
>> Sent: woensdag 28 februari 2018 13:53
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] Ceph iSCSI is a prank?
>>
>> I was building Ceph in order to use it with iSCSI.
>> But I just saw from the docs that it needs:
>>
>> CentOS 7.5
>> (which is not available yet, it's still at 7.4)
>> https://wiki.centos.org/Download
>>
>> Kernel 4.17
>> (which is not available yet, it is still at 4.15.7)
>> https://www.kernel.org/
>>
>> So I guess there is no official support and this is just a bad prank.
>>
>> Ceph has been ready to be used with S3 for many years.
>> But it needs the kernel of the next century to work with such an old
>> technology like iSCSI.
>> So sad.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>



-- 
Jason


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Nico Schottelius

Max,

I understand your frustration.
However, last time I checked, ceph was open source.

Some of you might not remember, but one major reason why open source is
great is that YOU CAN DO your own modifications.

If you need a change like iSCSI support and it isn't there,
it is probably best if you implement it.

Even if a lot of people are voluntarily contributing to open source,
and even if there is a company behind Ceph as a product, there
is no right to a feature.

Best,

Nico

p.s.: If your answer is "I don't have experience to implement it" then
my answer will be "hire somebody" and if your answer is "I don't have the
money", my answer is "You don't have the resource to have that feature".
(from: the book of reality)

Max Cuttins  writes:

> Sorry for being rude, Ross,
>
> I have been following Ceph since 2014, waiting for iSCSI support in order
> to use it with Xen.
> When it finally seems to have been implemented, the OS requirements are
> unrealistic.
> Seems like a bad prank: 4 years waiting for this... and still no true
> support yet.

--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 10:06 AM, Max Cuttins  wrote:
>
>
> On 28/02/2018 15:19, Jason Dillaman wrote:
>>
>> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini 
>> wrote:
>>>
>>> I was building Ceph in order to use it with iSCSI.
>>> But I just saw from the docs that it needs:
>>>
>>> CentOS 7.5
>>> (which is not available yet, it's still at 7.4)
>>> https://wiki.centos.org/Download
>>>
>>> Kernel 4.17
>>> (which is not available yet, it is still at 4.15.7)
>>> https://www.kernel.org/
>>
>> The necessary kernel changes actually are included as part of 4.16-rc1
>> which is available now. We also offer a pre-built test kernel with the
>> necessary fixes here [1].
>
> This is a release candidate and it's not ready for production.
> Does anybody know when the kernel 4.16 will be ready for production?
>
>
>>
>>> So I guess there is no official support and this is just a bad prank.
>>>
>>> Ceph has been ready to be used with S3 for many years.
>>> But it needs the kernel of the next century to work with such an old
>>> technology
>>> like iSCSI.
>>> So sad.
>>
>> Unfortunately, kernel vs userspace have very different development
>> timelines. We have no interest in maintaining out-of-tree patchsets to
>> the kernel.
>
>
> This is true, but having something that just works in order to have minimum
> compatibility and start to decommission old disks is something you should
> think about.
> You'll have ages to improve and get better performance. But you
> should allow users to cut off old solutions as soon as possible while
> waiting for a better implementation.

That's exactly what is included in the kernel changes -- changes
required to stabilize LIO iSCSI with RBD (specifically in a clustered
environment). You have been able to use LIO, TGT, SCST, SPDK, etc. with
various levels of capabilities for a while now. There are plenty of
performance changes that are still required, along with additional features
like support for SCSI persistent group reservations. Most important of all,
remember that this is a *free* open source project, so it might be
recommended to set your demands and expectations accordingly.

>
>>>
>>>
>>>
>> [1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/
>>
>

-- 
Jason


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Erik McCormick
On Feb 28, 2018 10:06 AM, "Max Cuttins"  wrote:



On 28/02/2018 15:19, Jason Dillaman wrote:

> On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini 
> wrote:
>
>> I was building Ceph in order to use it with iSCSI.
>> But I just saw from the docs that it needs:
>>
>> CentOS 7.5
>> (which is not available yet, it's still at 7.4)
>> https://wiki.centos.org/Download
>>
>> Kernel 4.17
>> (which is not available yet, it is still at 4.15.7)
>> https://www.kernel.org/
>>
> The necessary kernel changes actually are included as part of 4.16-rc1
> which is available now. We also offer a pre-built test kernel with the
> necessary fixes here [1].
>
This is a release candidate and it's not ready for production.
Does anybody know when the kernel 4.16 will be ready for production?


Release date is late March / early April.





>> So I guess there is no official support and this is just a bad prank.
>>
>> Ceph has been ready to be used with S3 for many years.
>> But it needs the kernel of the next century to work with such an old
>> technology
>> like iSCSI.
>> So sad.
>>
> Unfortunately, kernel vs userspace have very different development
> timelines. We have no interest in maintaining out-of-tree patchsets to
> the kernel.
>

This is true, but having something that just works in order to have minimum
compatibility and start to decommission old disks is something you should think
about.
You'll have ages to improve and get better performance. But you
should allow users to cut off old solutions as soon as possible while
waiting for a better implementation.



>>
>>
>> [1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/
>
>


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins



On 28/02/2018 15:19, Jason Dillaman wrote:

On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini  
wrote:

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

The necessary kernel changes actually are included as part of 4.16-rc1
which is available now. We also offer a pre-built test kernel with the
necessary fixes here [1].

This is a release candidate and it's not ready for production.
Does anybody know when the kernel 4.16 will be ready for production?





So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old technology
like iSCSI.
So sad.

Unfortunately, kernel vs userspace have very different development
timelines. We have no interest in maintaining out-of-tree patchsets to
the kernel.


This is true, but having something that just works in order to have
minimum compatibility and start to decommission old disks is something you
should think about.
You'll have ages to improve and get better performance. But you
should allow users to cut off old solutions as soon as possible while
waiting for a better implementation.







[1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/





Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 7:53 AM, Massimiliano Cuttini  
wrote:
> I was building Ceph in order to use it with iSCSI.
> But I just saw from the docs that it needs:
>
> CentOS 7.5
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
>
> Kernel 4.17
> (which is not available yet, it is still at 4.15.7)
> https://www.kernel.org/

The necessary kernel changes actually are included as part of 4.16-rc1
which is available now. We also offer a pre-built test kernel with the
necessary fixes here [1].

> So I guess there is no official support and this is just a bad prank.
>
> Ceph has been ready to be used with S3 for many years.
> But it needs the kernel of the next century to work with such an old technology
> like iSCSI.
> So sad.

Unfortunately, kernel vs userspace have very different development
timelines. We have no interest in maintaining out-of-tree patchsets to
the kernel.

>
>
>
>

[1] https://shaman.ceph.com/repos/kernel/ceph-iscsi-test/

-- 
Jason


Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Max Cuttins

Sorry for being rude, Ross,

I have been following Ceph since 2014, waiting for iSCSI support in order
to use it with Xen.
When it finally seems to have been implemented, the OS requirements are
unrealistic.
Seems like a bad prank: 4 years waiting for this... and still no true
support yet.





On 28/02/2018 14:11, Marc Roos wrote:

Hi Massimiliano, have an espresso. You know the Indians have a nice
saying:

"Everything will be good at the end. If it is not good, it is still not
the end."



-Original Message-
From: Massimiliano Cuttini [mailto:m...@phoenixweb.it]
Sent: woensdag 28 februari 2018 13:53
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph iSCSI is a prank?

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old
technology like iSCSI.
So sad.














Re: [ceph-users] Ceph iSCSI is a prank?

2018-02-28 Thread Marc Roos
 
Hi Massimiliano, have an espresso. You know the Indians have a nice
saying:

"Everything will be good at the end. If it is not good, it is still not
the end."



-Original Message-
From: Massimiliano Cuttini [mailto:m...@phoenixweb.it] 
Sent: woensdag 28 februari 2018 13:53
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph iSCSI is a prank?

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years.
But it needs the kernel of the next century to work with such an old
technology like iSCSI.
So sad.








