[ceph-users] advice needed for different projects design

2018-10-08 Thread Joshua Chen
Hello all,
  When planning for my institute's needs, I would like to seek design
suggestions from you for my particular situation:

1, I will support many projects; currently they are all nfs servers (and
each nfs server serves its own clients). For example nfsA (for clients
belonging to projectA), nfsB, nfsC, ...

2, Of the institute's total capacity (currently 200TB), I would like nfsA,
nfsB, nfsC, ... to only see their individually assigned capacities; for
example, nfsA only gets 50TB at its /export/nfsdata, nfsB only sees 140TB,
nfsC only 10TB, ...

3, my question is, what would be a good choice to provide storage to
those nfs servers?

RBD? Is RBD suitable for a single block device of hundreds of TB backing
an nfs server?

cephFS? This seems like a good solution for me: the nfs servers could mount
cephfs and share it over nfs. But how could I make the different projects
(nfsA, nfsB, nfsC) 'see' or 'mount' only part of the total 200TB capacity?
There would have to be many small cephfs(es), each with its own smaller
assigned capacity.
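
For what it's worth, a minimal sketch of one way to do this with a single
CephFS (directory names, mount point, secret and the export subnet are all
assumptions here): one directory per project, a quota on each directory, and
each nfs server mounting and re-exporting only its own directory:

  # on an admin client with the filesystem mounted at /mnt/cephfs:
  mkdir /mnt/cephfs/ProjectA
  setfattr -n ceph.quota.max_bytes -v $((50*1024**4)) /mnt/cephfs/ProjectA   # 50 TiB cap

  # on nfsA: mount only ProjectA's directory and re-export it over nfs
  mount -t ceph cephmon1:/ProjectA /export/nfsdata -o name=projecta,secret=...
  echo '/export/nfsdata 10.0.0.0/24(rw,sync)' >> /etc/exports
  exportfs -ra

A quota-aware client (a recent kernel client or ceph-fuse) reports the quota
as the filesystem size in df, so nfsA only ever sees its 50TB.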

Rados? I don't have much experience with this; is rados suitable for this
multi-project servers' need?


Thanks in advance

Cheers
Joshua
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] list admin issues

2018-10-07 Thread Joshua Chen
I was also removed once and got another warning once (I needed to re-enable
my subscription).

Cheers
Joshua


On Sun, Oct 7, 2018 at 5:38 AM Svante Karlsson 
wrote:

> I'm also getting removed, and not only from ceph. I subscribe to the
> d...@kafka.apache.org list and the same thing happens there.
>
> On Sat, 6 Oct 2018 at 23:24, Jeff Smith  wrote:
>
>> I have been removed twice.
>> On Sat, Oct 6, 2018 at 7:07 AM Elias Abacioglu
>>  wrote:
>> >
>> > Hi,
>> >
>> > I'm bumping this old thread because it's getting annoying. My membership
>> gets disabled twice a month.
>> > Between my two Gmail accounts I'm on more than 25 mailing lists and I
>> see this behavior only here. Why is only ceph-users affected? Maybe
>> Christian was on to something; is this intentional?
>> > The reality is that there are a lot of ceph-users subscribers with Gmail
>> accounts, so perhaps it wouldn't be so bad to actually try to figure this one out?
>> >
>> > So can the maintainers of this list please investigate what actually
>> gets bounced? Look at my address if you want.
>> > I got disabled on 20181006, 20180927, 20180916, 20180725, and 20180718,
>> most recently.
>> > Please help!
>> >
>> > Thanks,
>> > Elias
>> >
>> > On Mon, Oct 16, 2017 at 5:41 AM Christian Balzer  wrote:
>> >>
>> >>
>> >> Most mails to this ML score low or negatively with SpamAssassin,
>> however
>> >> once in a while (this is a recent one) we get relatively high scores.
>> >> Note that the forged bits are false positives, but the SA is up to
>> date and
>> >> google will have similar checks:
>> >> ---
>> >> X-Spam-Status: No, score=3.9 required=10.0 tests=BAYES_00,DCC_CHECK,
>> >>  FORGED_MUA_MOZILLA,FORGED_YAHOO_RCVD,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM,
>> >>  HEADER_FROM_DIFFERENT_DOMAINS,HTML_MESSAGE,MIME_HTML_MOSTLY,RCVD_IN_MSPIKE_H4,
>> >>  RCVD_IN_MSPIKE_WL,RDNS_NONE,T_DKIM_INVALID shortcircuit=no autolearn=no
>> >> ---
>> >>
>> >> Between attachment mails and some of these, you're well on your way
>> out.
>> >>
>> >> The default mailman settings and logic require 5 bounces to trigger
>> >> unsubscription and 7 days of NO bounces to reset the counter.
>> >>
>> >> Christian
>> >>
>> >> On Mon, 16 Oct 2017 12:23:25 +0900 Christian Balzer wrote:
>> >>
>> >> > On Mon, 16 Oct 2017 14:15:22 +1100 Blair Bethwaite wrote:
>> >> >
>> >> > > Thanks Christian,
>> >> > >
>> >> > > You're no doubt on the right track, but I'd really like to figure
>> out
>> >> > > what it is at my end - I'm unlikely to be the only person
>> subscribed
>> >> > > to ceph-users via a gmail account.
>> >> > >
>> >> > > Re. attachments, I'm surprised mailman would be allowing them in
>> the
>> >> > > first place, and even so gmail's attachment requirements are less
>> >> > > strict than most corporate email setups (those that don't already
>> use
>> >> > > a cloud provider).
>> >> > >
>> >> > Mailman doesn't do anything with this by default AFAIK, but see
>> below.
>> >> > Strict is fine if you're in control, corporate mail can be hell,
>> doubly so
>> >> > if on M$ cloud.
>> >> >
>> >> > > This started happening earlier in the year after I turned off
>> digest
>> >> > > mode. I also have a paid google domain, maybe I'll try setting
>> >> > > delivery to that address and seeing if anything changes...
>> >> > >
>> >> > Don't think google domain is handled differently, but what do I know.
>> >> >
>> >> > Though the digest bit confirms my suspicion about attachments:
>> >> > ---
>> >> > When a subscriber chooses to receive plain text daily “digests” of
>> list
>> >> > messages, Mailman sends the digest messages without any original
>> >> > attachments (in Mailman lingo, it “scrubs” the messages of
>> attachments).
>> >> > However, Mailman also includes links to the original attachments
>> that the
>> >> > recipient can click on.
>> >> > ---
>> >> >
>> >> > Christian
>> >> >
>> >> > > Cheers,
>> >> > >
>> >> > > On 16 October 2017 at 13:54, Christian Balzer 
>> wrote:
>> >> > > >
>> >> > > > Hello,
>> >> > > >
>> >> > > > You're on gmail.
>> >> > > >
>> >> > > > Aside from various potential false positives with regards to
>> spam my bet
>> >> > > > is that gmail's known dislike for attachments is the cause of
>> these
>> >> > > > bounces and that setting is beyond your control.
>> >> > > >
>> >> > > > Because Google knows best[tm].
>> >> > > >
>> >> > > > Christian
>> >> > > >
>> >> > > > On Mon, 16 Oct 2017 13:50:43 +1100 Blair Bethwaite wrote:
>> >> > > >
>> >> > > >> Hi all,
>> >> > > >>
>> >> > > >> This is a mailing-list admin issue - I keep being unsubscribed
>> from
>> >> > > >> ceph-users with the message:
>> >> > > >> "Your membership in the mailing list ceph-users has been
>> disabled due
>> >> > > >> to excessive bounces..."
>> >> > > >> This seems to be happening on roughly a monthly basis.
>> >> > > >>
>> >> > > >> Thing is I have no idea what the bounce is or where it is
>> coming from.
>> >> > > >> I've tried emailing ceph-users-ow...@lists.ceph.com and the
>> contact
>> >> > > >> listed in Mailman 

[ceph-users] provide cephfs to multiple projects

2018-10-03 Thread Joshua Chen
Hello all,
  I am almost ready to provide storage (cephfs in the beginning) to my
colleagues. They belong to different main projects and, according to the
budgets they previously claimed, should get different capacities. For example,
ProjectA will have 50TB and ProjectB will have 150TB.

I chose cephfs because it has good enough throughput compared to rbd.

But I would like to let clients in ProjectA only see 50TB of mount space (by
linux df -h, maybe) and ProjectB clients see 150TB. So my questions are:
1, Is that possible? Can cephfs make different clients see different
available space?

2, What is a good setup so that ProjectA has a reasonable mount source and
ProjectB has its own?

For example, on a ProjectA client, root will do

mount -t ceph cephmon1,cephmon2:/ProjectA /mnt/ProjectA

but cannot do

mount -t ceph cephmon1,cephmon2:/ProjectB /mnt/ProjectB

(they cannot mount the root /, nor /ProjectB, which is not their area).

Or what is the usual production-style setup for this need?
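
For reference, a hedged sketch of the pieces this would need (client names
and paths are assumptions; the quota is only enforced and reported by
quota-aware clients such as a recent kernel client or ceph-fuse):

  # give each project a cephx key that is only allowed inside its own directory:
  ceph fs authorize cephfs client.projecta /ProjectA rw
  ceph fs authorize cephfs client.projectb /ProjectB rw

  # cap each directory; df -h on the clients then reports this value as the size:
  setfattr -n ceph.quota.max_bytes -v $((50*1024**4))  /mnt/cephfs/ProjectA   # 50 TiB
  setfattr -n ceph.quota.max_bytes -v $((150*1024**4)) /mnt/cephfs/ProjectB   # 150 TiB

  # a ProjectA client then mounts only its own subtree:
  mount -t ceph cephmon1,cephmon2:/ProjectA /mnt/ProjectA -o name=projecta,secret=...

With path-restricted caps like these, access to / or /ProjectB using the
projecta key is denied by the MDS.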

Thanks in advance
Cheers
Joshua
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount cephfs from a public network ip of mds

2018-10-01 Thread Joshua Chen
Thank you all for your replies.
I will consider changing the design or negotiating with my colleagues on the
topology issue, or, if none of that works, come back to this solution.

Cheers
Joshua

On Mon, Oct 1, 2018 at 9:05 PM Paul Emmerich  wrote:

> No, mons can only have exactly one IP address and they'll only listen
> on that IP.
>
> As David suggested: check if you really need separate networks. This
> setup usually creates more problems than it solves, especially if you
> have one 1G and one 10G network.
>
> Paul
> On Mon, 1 Oct 2018 at 04:11, Joshua Chen wrote:
> >
> > Hello Paul,
> >   Thanks for your reply.
> >   Now my clients will be from 140.109 (LAN, the real ip network 1Gb/s)
> and from 10.32 (SAN, a closed 10Gb network). Could I make this
> public_network to be 0.0.0.0? so mon daemon listens on both 1Gb and 10Gb
> network?
> >   Or could I have
> > public_network = 140.109.169.0/24, 10.32.67.0/24
> > cluster_network = 10.32.67.0/24
> >
> > does ceph allow 2 (multiple) public_network?
> >
> >   And I don't want to limit the client read/write speed to be 1Gb/s nics
> unless they don't have 10Gb nic installed. To guarantee clients read/write
> to osd (when they know the details of the location) they should be using
> the fastest nic (10Gb) when available. But other clients with only 1Gb nic
> will go through 140.109.0.0 (1Gb LAN) to ask mon or to read/write to osds.
> This is why my osds also have 1Gb and 10Gb nics with 140.109.0.0 and
> 10.32.0.0 networking respectively.
> >
> > Cheers
> > Joshua
> >
> > On Sun, Sep 30, 2018 at 12:09 PM David Turner 
> wrote:
> >>
> >> The cluster/private network is only used by the OSDs. Nothing else in
> ceph or its clients communicate using it. Everything other than osd to osd
> communication uses the public network. That includes the MONs, MDSs,
> clients, and anything other than an osd talking to an osd. Nothing else
> other than osd to osd traffic can communicate on the private/cluster
> network.
> >>
> >> On Sat, Sep 29, 2018, 6:43 AM Paul Emmerich 
> wrote:
> >>>
> >>> All Ceph clients will always first connect to the mons. Mons provide
> >>> further information on the cluster such as the IPs of MDS and OSDs.
> >>>
> >>> This means you need to provide the mon IPs to the mount command, not
> >>> the MDS IPs. Your first command works by coincidence since
> >>> you seem to run the mons and MDS' on the same server.
> >>>
> >>>
> >>> Paul
> >>> On Sat, 29 Sep 2018 at 12:07, Joshua Chen wrote:
> >>> >
> >>> > Hello all,
> >>> >   I am testing the cephFS cluster so that clients could mount -t
> ceph.
> >>> >
> >>> >   the cluster has 6 nodes, 3 mons (also mds), and 3 osds.
> >>> >   All these 6 nodes has 2 nic, one 1Gb nic with real ip
> (140.109.0.0) and 1 10Gb nic with virtual ip (10.32.0.0)
> >>> >
> >>> > 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
> >>> > 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
> >>> >
> >>> >
> >>> >
> >>> > and I have the following questions:
> >>> >
> >>> > 1, can I have both public (140.109.0.0) and cluster (10.32.0.0)
> clients all be able to mount this cephfs resource
> >>> >
> >>> > I want to do
> >>> >
> >>> > (in a 140.109 network client)
> >>> > mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=
> >>> >
> >>> > and also in a 10.32.0.0 network client)
> >>> > mount -t ceph mds1(10.32.67.48):/
> >>> > /mnt/cephfs -o user=,secret=
> >>> >
> >>> >
> >>> >
> >>> >
> >>> > Currently, only this 10.32.0.0 clients can mount it. that of public
> network (140.109) can not. How can I enable this?
> >>> >
> >>> > here attached is my ceph.conf
> >>> >
> >>> > Thanks in advance
> >>> >
> >>> > Cheers
> >>> > Joshua
> >>> > ___
> >>> > ceph-users mailing list
> >>> > ceph-users@lists.ceph.com
> >>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>>
> >>>
> >>>
> >>> --
> >>> Paul Emmerich
> >>>
> >>> Looking for help with your Ceph cluster? Contact us at
> https://croit.io
> >>>
> >>> croit GmbH
> >>> Freseniusstr. 31h
> >>> 81247 München
> >>> www.croit.io
> >>> Tel: +49 89 1896585 90
> >>> ___
> >>> ceph-users mailing list
> >>> ceph-users@lists.ceph.com
> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] mount cephfs from a public network ip of mds

2018-09-30 Thread Joshua Chen
Hello Paul,
  Thanks for your reply.
  Now my clients will be from 140.109 (LAN, the real-IP network, 1Gb/s) and
from 10.32 (SAN, a closed 10Gb network). Could I make this public_network
0.0.0.0, so that the mon daemon listens on both the 1Gb and 10Gb networks?
  Or could I have
public_network = 140.109.169.0/24, 10.32.67.0/24
cluster_network = 10.32.67.0/24

does ceph allow 2 (multiple) public_network?

  And I don't want to limit the client read/write speed to 1Gb/s
unless a client doesn't have a 10Gb nic installed. To guarantee that clients
read/write to the osds (when they know the details of the location) they should
use the fastest nic (10Gb) when available. But other clients with only a
1Gb nic will go through 140.109.0.0 (the 1Gb LAN) to ask the mons or to
read/write to the osds. This is why my osds also have 1Gb and 10Gb nics, on
140.109.0.0 and 10.32.0.0 respectively.
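
For reference, a sketch of what these two options actually control, per
David's reply archived above (subnets reused from this thread; note Paul's
point that each mon binds exactly one IP, so listing two public networks does
not by itself make a mon reachable on both):

  [global]
  public_network  = 140.109.169.0/24   # mons, MDS, and all client traffic
  cluster_network = 10.32.67.0/24      # osd <-> osd replication/heartbeat only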

Cheers
Joshua

On Sun, Sep 30, 2018 at 12:09 PM David Turner  wrote:

> The cluster/private network is only used by the OSDs. Nothing else in ceph
> or its clients communicate using it. Everything other than osd to osd
> communication uses the public network. That includes the MONs, MDSs,
> clients, and anything other than an osd talking to an osd. Nothing else
> other than osd to osd traffic can communicate on the private/cluster
> network.
>
> On Sat, Sep 29, 2018, 6:43 AM Paul Emmerich 
> wrote:
>
>> All Ceph clients will always first connect to the mons. Mons provide
>> further information on the cluster such as the IPs of MDS and OSDs.
>>
>> This means you need to provide the mon IPs to the mount command, not
>> the MDS IPs. Your first command works by coincidence since
>> you seem to run the mons and MDS' on the same server.
>>
>>
>> Paul
>> On Sat, 29 Sep 2018 at 12:07, Joshua Chen wrote:
>> >
>> > Hello all,
>> >   I am testing the cephFS cluster so that clients could mount -t ceph.
>> >
>> >   the cluster has 6 nodes, 3 mons (also mds), and 3 osds.
>> >   All these 6 nodes has 2 nic, one 1Gb nic with real ip (140.109.0.0)
>> and 1 10Gb nic with virtual ip (10.32.0.0)
>> >
>> > 140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
>> > 140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
>> > 140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
>> > 140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
>> > 140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
>> > 140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.
>> >
>> >
>> >
>> > and I have the following questions:
>> >
>> > 1, can I have both public (140.109.0.0) and cluster (10.32.0.0) clients
>> all be able to mount this cephfs resource
>> >
>> > I want to do
>> >
>> > (in a 140.109 network client)
>> > mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=
>> >
>> > and also in a 10.32.0.0 network client)
>> > mount -t ceph mds1(10.32.67.48):/
>> > /mnt/cephfs -o user=,secret=
>> >
>> >
>> >
>> >
>> > Currently, only this 10.32.0.0 clients can mount it. that of public
>> network (140.109) can not. How can I enable this?
>> >
>> > here attached is my ceph.conf
>> >
>> > Thanks in advance
>> >
>> > Cheers
>> > Joshua
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Paul Emmerich
>>
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>
>> croit GmbH
>> Freseniusstr. 31h
>> 81247 München
>> www.croit.io
>> Tel: +49 89 1896585 90
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] mount cephfs from a public network ip of mds

2018-09-29 Thread Joshua Chen
Hello all,
  I am testing the cephFS cluster so that clients could mount -t ceph.

  The cluster has 6 nodes: 3 mons (also mds) and 3 osds.
  All 6 nodes have 2 nics: one 1Gb nic with a real IP (140.109.0.0) and one
10Gb nic with a virtual IP (10.32.0.0).

140.109. Nic1 1G<-MDS1->Nic2 10G 10.32.
140.109. Nic1 1G<-MDS2->Nic2 10G 10.32.
140.109. Nic1 1G<-MDS3->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD1->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD2->Nic2 10G 10.32.
140.109. Nic1 1G<-OSD3->Nic2 10G 10.32.



and I have the following questions:

1, can I have both public (140.109.0.0) and cluster (10.32.0.0) clients all
be able to mount this cephfs resource

I want to do

(in a 140.109 network client)
mount -t ceph mds1(140.109.169.48):/ /mnt/cephfs -o user=,secret=

and also in a 10.32.0.0 network client)
mount -t ceph mds1(10.32.67.48):/
/mnt/cephfs -o user=,secret=




Currently, only the 10.32.0.0 clients can mount it; those on the public network
(140.109) cannot. How can I enable this?

here attached is my ceph.conf

Thanks in advance

Cheers
Joshua


ceph.conf
Description: Binary data
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] changing my cluster network ip

2018-09-26 Thread Joshua Chen
Hello all,
  I am building my testing cluster with a public_network and a cluster_network
interface. For some reason, the testing cluster needs to peer with my
colleague's machines, so it's better that I change my original
cluster_network from 172.20.x.x to 10.32.67.x.

Now, since I don't want to rebuild the whole ceph cluster, I just want to
change the interface IP/network settings like this
(it's a 6-node CentOS 7.5 cluster, 3 mons and 3 osds):

1, (all 6) edit /etc/hosts to become

10.32.67.48 cephmon1
10.32.67.49 cephmon2
10.32.67.50 cephmon3
10.32.67.51 cephosd1
10.32.67.52 cephosd2
10.32.67.53 cephosd3


2,(all 6)  edit /etc/ceph/ceph.conf
  (deploy svr's)   /home/cephuser/pescadores/ceph.conf

to be

mon_host = 10.32.67.48
cluster_network = 10.32.67.0/24

3, edit individual host's /etc/sysconfig/network-scripts/ifcfg-bond1
to be 10.32.67.x ip

4, reboot

(and forget about firewall and selinux, they are disabled)

What else should I do before the ceph cluster will run on the new
cluster_network setting?
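
One thing the steps above don't cover: since the mon addresses themselves
change (mon_host moves to 10.32.67.48), the monitors' monmap has to be
rewritten too; editing ceph.conf alone doesn't do that. A hedged sketch of the
usual manual procedure (mon names and the 6789 port are assumptions based on
the hosts above):

  # export the current monmap from a running mon, rewrite the addresses,
  # then inject it into each stopped mon before restarting it
  ceph mon getmap -o /tmp/monmap
  monmaptool --rm cephmon1 /tmp/monmap
  monmaptool --add cephmon1 10.32.67.48:6789 /tmp/monmap
  # ... repeat --rm/--add for cephmon2 (10.32.67.49) and cephmon3 (10.32.67.50)
  systemctl stop ceph-mon@cephmon1          # on each mon host
  ceph-mon -i cephmon1 --inject-monmap /tmp/monmap
  systemctl start ceph-mon@cephmon1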

Thanks in advance
Cheers

Joshua
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] customized ceph cluster name by ceph-deploy

2018-09-21 Thread Joshua Chen
Hi all,
  I am using ceph-deploy 2.0.1 to create my testing cluster by this command:

ceph-deploy --cluster pescadores  new  --cluster-network 100.109.240.0/24
--public-network 10.109.240.0/24  cephmon1 cephmon2 cephmon3

but the --cluster pescadores (the name of the cluster) doesn't seem to work.
Could anyone help me with this or point me in the right direction? Is anything
wrong with my command line?

Or what is the equivalent ceph command to do the same job?

Cheers
Joshua
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Joshua Chen
Dear all,
  I wonder how we could support VM systems with ceph storage (block
device)? My colleagues are waiting for my answer for VMware (vSphere 5) and
I myself use oVirt (RHEV); the default protocol is iSCSI.
  I know that OpenStack/Cinder works well with ceph, and Proxmox too (so I've
heard). But currently we are using VMware and oVirt.


Your wise suggestion is appreciated

Cheers
Joshua


On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten  wrote:

> Does Xen still not support RBD? Ceph has been around for years now!
>
> With kind regards,
>
> --
> Kerio Operator in de Cloud? https://www.kerioindecloud.nl/
> Mark Schouten | Tuxis Internet Engineering
> KvK: 61527076 | http://www.tuxis.nl/
> T: 0318 200208 | i...@tuxis.nl
>
>
>
> * From: * Massimiliano Cuttini 
> * To: * "ceph-users@lists.ceph.com" 
> * Sent: * 28-2-2018 13:53
> * Subject: * [ceph-users] Ceph iSCSI is a prank?
>
> I was building ceph in order to use it with iSCSI.
> But I just saw from the docs that it needs:
>
> *CentOS 7.5*
> (which is not available yet, it's still at 7.4)
> https://wiki.centos.org/Download
>
> *Kernel 4.17*
> (which is not available yet, it is still at 4.15.7)
> https://www.kernel.org/
>
> So I guess there is no official support and this is just a bad prank.
>
> Ceph has been ready to be used with S3 for many years,
> but needs the kernel of the next century to work with such an old
> technology like iSCSI.
> So sad.
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD

2018-01-06 Thread Joshua Chen
That is awesome and wonderful!
Thanks for making this ACL option available.


Cheers
Joshua

On Sat, Jan 6, 2018 at 7:17 AM, Mike Christie <mchri...@redhat.com> wrote:

> On 01/04/2018 09:36 PM, Joshua Chen wrote:
> > Hello Michael,
> >   Thanks for the reply.
> >   I did check this ceph doc at
> > http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
> >   And yes, I need ACL instead of chap user/password, but I will negotiate
> > with my colleagues about changing the management style.
> >   Really appreciate you pointing out the doc's bug and the current status of
> > the chap/acl limitation. Looking forward to this ACL function being added to
> > gwcli.
>
> I made a patch for that here:
>
> https://github.com/ceph/ceph-iscsi-config/pull/44
>
> It is enabled by default when you first create an initiator/client. If
> you have chap enabled but want to switch, then when you do "auth nochap"
> it will switch to the initiator ACL.
>
>
> >
> >
> > Cheers
> > Joshua
> >
> > On Fri, Jan 5, 2018 at 12:47 AM, Michael Christie <mchri...@redhat.com
> > <mailto:mchri...@redhat.com>> wrote:
> >
> > On 01/04/2018 03:50 AM, Joshua Chen wrote:
> > > Dear all,
> > >   Although I managed to run gwcli and created some iqns, or luns,
> > > but I do need some working config example so that my initiator
> could
> > > connect and get the lun.
> > >
> > >   I am familiar with targetcli and I used to do the following ACL
> > style
> > > connection rather than password,
> > > the targetcli setting tree is here:
> >
> > What docs have you been using? Did you check out the gwcli man page
> and
> > upstream ceph doc:
> >
> > http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
> > <http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/>
> >
> > Let me know what is not clear in there.
> >
> > There is a bug in the upstream doc and instead of doing
> > > cd /iscsi-target/iqn.2003-01.com
> > <http://iqn.2003-01.com>.redhat.iscsi-gw:/disks/
> >
> > you do
> >
> > > cd /disks
> >
> > in step 3. Is that the issue you are hitting?
> >
> >
> > For gwcli, a client is the initiator. It only supports one way chap,
> so
> > there is just the 3 commands in those docs above.
> >
> > 1. create client/initiator-name. This is the same as creating the
> ACL in
> > targetcli.
> >
> > > create  iqn.1994-05.com.redhat:15dbed23be9e
> >
> > 2. set CHAP username and password for that initiator. You have to do
> > this with gwcli right now due to a bug, or maybe feature :), in the
> > code. This is simiar to doing the set auth command in targetcli.
> >
> > auth chap=/
> >
> > 3. export a image as a lun. This is equivalent to creating the lun in
> > targetcli.
> >
> > disk add rbd.some-image
> >
> >
> > >
> > > (or see this page <http://www.asiaa.sinica.edu.tw/~cschen/targetcli.html>)
> > >
> > > #targetcli ls
> > > o- / ............................................................... [...]
> > >   o- backstores ...................................................... [...]
> > >   | o- block .......................................... [Storage Objects: 1]
> > >   | | o- vmware_5t ...... [/dev/rbd/rbd/vmware_5t (5.0TiB) write-thru activated]
> > >   | |   o- alua .............................................. [ALUA Groups: 1]
> > >   | | o- default_tg_pt_gp ..................... [ALUA state: Active/optimized]
> > >   | o- fileio ...

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
Hello Steven,
  I am using CentOS 7.4.1708 with kernel 3.10.0-693.el7.x86_64
  and the following packages:

ceph-iscsi-cli-2.5-9.el7.centos.noarch.rpm
ceph-iscsi-config-2.3-12.el7.centos.noarch.rpm
libtcmu-1.3.0-0.4.el7.centos.x86_64.rpm
libtcmu-devel-1.3.0-0.4.el7.centos.x86_64.rpm
python-rtslib-2.1.fb64-2.el7.centos.noarch.rpm
python-rtslib-doc-2.1.fb64-2.el7.centos.noarch.rpm
targetcli-2.1.fb47-0.1.20170815.git5bf3517.el7.centos.noarch.rpm
tcmu-runner-1.3.0-0.4.el7.centos.x86_64.rpm
tcmu-runner-debuginfo-1.3.0-0.4.el7.centos.x86_64.rpm


Cheers
Joshua


On Fri, Jan 5, 2018 at 2:14 AM, Steven Vacaroaia <ste...@gmail.com> wrote:

> Hi Joshua,
>
> How did you manage to use iSCSI gateway ?
> I would like to do that but still waiting for a patched kernel
>
> What kernel/OS did you use and/or how did you patch it ?
>
> Thanks
> Steven
>
> On 4 January 2018 at 04:50, Joshua Chen <csc...@asiaa.sinica.edu.tw>
> wrote:
>
>> Dear all,
>>   Although I managed to run gwcli and created some iqns, or luns,
>> but I do need some working config example so that my initiator could
>> connect and get the lun.
>>
>>   I am familiar with targetcli and I used to do the following ACL style
>> connection rather than password,
>> the targetcli setting tree is here:
>>
>> (or see this page <http://www.asiaa.sinica.edu.tw/~cschen/targetcli.html>
>> )
>>
>> #targetcli ls
>> o- / ................................................................. [...]
>>   o- backstores ........................................................ [...]
>>   | o- block ............................................ [Storage Objects: 1]
>>   | | o- vmware_5t ........ [/dev/rbd/rbd/vmware_5t (5.0TiB) write-thru activated]
>>   | |   o- alua ................................................ [ALUA Groups: 1]
>>   | | o- default_tg_pt_gp ....................... [ALUA state: Active/optimized]
>>   | o- fileio ............................................ [Storage Objects: 0]
>>   | o- pscsi ............................................. [Storage Objects: 0]
>>   | o- ramdisk ........................................... [Storage Objects: 0]
>>   | o- user:rbd .......................................... [Storage Objects: 0]
>>   o- iscsi ........................................................ [Targets: 1]
>>   | o- iqn.2017-12.asiaa.cephosd1:vmware5t ............................ [TPGs: 1]
>>   |   o- tpg1 ................................................ [gen-acls, no-auth]
>>   | o- acls ......................................................... [ACLs: 12]
>>   | | o- iqn.1994-05.com.redhat:15dbed23be9e .................. [Mapped LUNs: 1]
>>   | | | o- mapped_lun0 ........................... [lun0 block/vmware_5t (rw)]
>>   | | o- iqn.1994-05.com.redhat:15dbed23be9e-ovirt1 ........... [Mapped LUNs: 1]
>>   | | | o- mapped_lun0 ........................... [lun0 block/vmware_5t (rw)]
>>   | | o- iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test .. [Mapped LUNs: 1]
>>   | | | o- mapped_lun0 ........................... [lun0 block/vmware_5t (rw)]
>>   | | o- iqn.1994-05.com.redhat:67669afedddf .................. [Mapped LUNs: 1]
>>   | | | o- mapped_lun0 ........................... [lun0 block/vmware_5t (rw)]

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
Hello Michael,
  Thanks for the reply.
  I did check this ceph doc at
http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
  And yes, I need ACL instead of chap user/password, but I will negotiate with
my colleagues about changing the management style.
  Really appreciate you pointing out the doc's bug and the current status of
the chap/acl limitation. Looking forward to this ACL function being added to
gwcli.


Cheers
Joshua

On Fri, Jan 5, 2018 at 12:47 AM, Michael Christie <mchri...@redhat.com>
wrote:

> On 01/04/2018 03:50 AM, Joshua Chen wrote:
> > Dear all,
> >   Although I managed to run gwcli and created some iqns, or luns,
> > but I do need some working config example so that my initiator could
> > connect and get the lun.
> >
> >   I am familiar with targetcli and I used to do the following ACL style
> > connection rather than password,
> > the targetcli setting tree is here:
>
> What docs have you been using? Did you check out the gwcli man page and
> upstream ceph doc:
>
> http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
>
> Let me know what is not clear in there.
>
> There is a bug in the upstream doc and instead of doing
> > cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:/disks/
>
> you do
>
> > cd /disks
>
> in step 3. Is that the issue you are hitting?
>
>
> For gwcli, a client is the initiator. It only supports one way chap, so
> there is just the 3 commands in those docs above.
>
> 1. create client/initiator-name. This is the same as creating the ACL in
> targetcli.
>
> > create  iqn.1994-05.com.redhat:15dbed23be9e
>
> 2. set CHAP username and password for that initiator. You have to do
> this with gwcli right now due to a bug, or maybe feature :), in the
> code. This is similar to doing the set auth command in targetcli.
>
> auth chap=/
>
> 3. export a image as a lun. This is equivalent to creating the lun in
> targetcli.
>
> disk add rbd.some-image
>
>
> >
> > (or see this page <http://www.asiaa.sinica.edu.tw/~cschen/targetcli.html
> >)
> >
> > #targetcli ls
> > o- / ................................................................ [...]
> >   o- backstores ....................................................... [...]
> >   | o- block ........................................... [Storage Objects: 1]
> >   | | o- vmware_5t ....... [/dev/rbd/rbd/vmware_5t (5.0TiB) write-thru activated]
> >   | |   o- alua ............................................... [ALUA Groups: 1]
> >   | | o- default_tg_pt_gp ...................... [ALUA state: Active/optimized]
> >   | o- fileio ........................................... [Storage Objects: 0]
> >   | o- pscsi ............................................ [Storage Objects: 0]
> >   | o- ramdisk .......................................... [Storage Objects: 0]
> >   | o- user:rbd ......................................... [Storage Objects: 0]
> >   o- iscsi ....................................................... [Targets: 1]
> >   | o- iqn.2017-12.asiaa.cephosd1:vmware5t ........................... [TPGs: 1]
> >   |   o- tpg1 ............................................... [gen-acls, no-auth]
> >   | o- acls ........................................................ [ACLs: 12]
> >   | | o- iqn.1994-05.com.redhat:15dbed23be9e ................. [Mapped LUNs: 1]
> >   | | | o- mapped_lun0 .......................... [lun0 block/vmware_5t (rw)]
> >   | | o- iqn.1994-05.com.redhat:15dbed23be9e-ovirt1 .......... [Mapped LUNs: 1]

Re: [ceph-users] iSCSI over RBD

2018-01-04 Thread Joshua Chen
  | | o- iqn.1998-01.com.vmware:localhost-0f904dfd ........... [Mapped LUNs: 1]
  | | | o- mapped_lun0 ............................ [lun0 block/vmware_5t (rw)]
  | | o- iqn.1998-01.com.vmware:localhost-6af62e4c ............ [Mapped LUNs: 1]
  | |   o- mapped_lun0 ............................ [lun0 block/vmware_5t (rw)]
  | o- luns .......................................................... [LUNs: 1]
  | | o- lun0 ........ [block/vmware_5t (/dev/rbd/rbd/vmware_5t) (default_tg_pt_gp)]
  | o- portals .................................................... [Portals: 1]
  |   o- 172.20.0.12:3260 ................................................... [OK]
  o- loopback ..................................................... [Targets: 0]
  o- xen_pvscsi ................................................... [Targets: 0]






My targetcli setup procedure is like this; could someone translate it to the
equivalent gwcli procedure?
Sorry for asking; it's due to the lack of documentation and examples.
Thanks in advance.

Cheers
Joshua




targetcli /backstores/block create name=vmware_5t dev=/dev/rbd/rbd/vmware_5t
targetcli /iscsi/ create iqn.2017-12.asiaa.cephosd1:vmware5t
targetcli /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1/portals delete
ip_address=0.0.0.0 ip_port=3260

targetcli
  cd /iscsi/iqn.2017-12.asiaa.cephosd1:vmware5t/tpg1
portals/  create 172.20.0.12
acls/
create iqn.1994-05.com.redhat:e7692a10f661-ceph-node1
create iqn.1994-05.com.redhat:b01662ec2129-ceph-node2
create iqn.1994-05.com.redhat:d46b42a1915b-ceph-node3
create iqn.1994-05.com.redhat:15dbed23be9e
create iqn.1994-05.com.redhat:a7c1ec3c43f7
create iqn.1994-05.com.redhat:67669afedddf
create iqn.1994-05.com.redhat:15dbed23be9e-ovirt1
create iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2
create iqn.1994-05.com.redhat:67669afedddf-ovirt3
create iqn.1994-05.com.redhat:2af344ba6ae5-ceph-admin-test
create iqn.1998-01.com.vmware:localhost-6af62e4c
create iqn.1998-01.com.vmware:localhost-0f904dfd
cd ..
set attribute generate_node_acls=1
cd luns
create /backstores/block/vmware_5t
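
For comparison, a hedged sketch of roughly the same flow in gwcli, pieced
together from Mike Christie's replies archived above and the iscsi-target-cli
doc (the gateway name/IP and exact disk-create semantics for a pre-existing
image are assumptions, and gwcli at this point only supports one-way CHAP
rather than ACL-only auth):

  # inside gwcli (paths as in http://docs.ceph.com/docs/master/rbd/iscsi-target-cli/)
  cd /iscsi-target
  create iqn.2017-12.asiaa.cephosd1:vmware5t
  cd /iscsi-target/iqn.2017-12.asiaa.cephosd1:vmware5t/gateways
  create cephosd1 172.20.0.12 skipchecks=true    # gateway name/IP are assumptions
  cd /disks
  create pool=rbd image=vmware_5t size=5T        # registers the rbd image as a disk
  cd /iscsi-target/iqn.2017-12.asiaa.cephosd1:vmware5t/hosts
  create iqn.1994-05.com.redhat:15dbed23be9e     # one entry per initiator, like the acls above
  auth chap=<user>/<password>                    # one-way CHAP only
  disk add rbd.vmware_5t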




On Thu, Jan 4, 2018 at 10:55 AM, Joshua Chen <csc...@asiaa.sinica.edu.tw>
wrote:

> I had the same problem before, mine is CentOS, and when I created
> /iscsi/create iqn_bla-bla
> it goes
> Local LIO instance already has LIO configured with a target - unable to
>  continue
>
>
>
> then finally the solution happened to be, turn off target service
>
> systemctl stop target
> systemctl disable target
>
>
> somehow they are doing the same thing; you need to disable the 'target'
> service (targetcli) in order to allow gwcli (rbd-target-api) to do its job.
>
> Cheers
> Joshua
>
> On Thu, Jan 4, 2018 at 2:39 AM, Mike Christie <mchri...@redhat.com> wrote:
>
>> On 12/25/2017 03:13 PM, Joshua Chen wrote:
>> > Hello folks,
>> >   I am trying to share my ceph rbd images through iscsi protocol.
>> >
>> > I am trying iscsi-gateway
>> > http://docs.ceph.com/docs/master/rbd/iscsi-overview/
>> >
>> >
>> > now
>> >
>> > systemctl start rbd-target-api
>> > is working and I could run gwcli
>> > (at a CentOS 7.4 osd node)
>> >
>> > gwcli
>> > /> ls
>> > o- / .............................................................. [...]
>> >   o- clusters .............................................. [Clusters: 1]
>> >   | o- ceph .................................................. [HEALTH_OK]
>> >   |   o- pools ................................................. [Pools: 1]
>> >   |   | o- rbd .................. [(x3), Commit: 0b/25.9T (0%), Used: 395M]

Re: [ceph-users] iSCSI over RBD

2018-01-03 Thread Joshua Chen
I had the same problem before, mine is CentOS, and when I created
/iscsi/create iqn_bla-bla
it goes
Local LIO instance already has LIO configured with a target - unable to
 continue



then finally the solution happened to be, turn off target service

systemctl stop target
systemctl disable target


Somehow they are doing the same thing; you need to disable the 'target' service
(targetcli) in order to allow gwcli (rbd-target-api) to do its job.

Cheers
Joshua

On Thu, Jan 4, 2018 at 2:39 AM, Mike Christie <mchri...@redhat.com> wrote:

> On 12/25/2017 03:13 PM, Joshua Chen wrote:
> > Hello folks,
> >   I am trying to share my ceph rbd images through iscsi protocol.
> >
> > I am trying iscsi-gateway
> > http://docs.ceph.com/docs/master/rbd/iscsi-overview/
> >
> >
> > now
> >
> > systemctl start rbd-target-api
> > is working and I could run gwcli
> > (at a CentOS 7.4 osd node)
> >
> > gwcli
> > /> ls
> > o- / ............................................................... [...]
> >   o- clusters ............................................... [Clusters: 1]
> >   | o- ceph ................................................... [HEALTH_OK]
> >   |   o- pools .................................................. [Pools: 1]
> >   |   | o- rbd ................... [(x3), Commit: 0b/25.9T (0%), Used: 395M]
> >   |   o- topology ....................................... [OSDs: 9,MONs: 3]
> >   o- disks .................................................. [0b, Disks: 0]
> >   o- iscsi-target ............................................. [Targets: 0]
> >
> >
> > but when I created iscsi-target, I got
> >
> > Local LIO instance already has LIO configured with a target - unable to
> > continue
> >
> >
> > /> /iscsi-target create
> > iqn.2003-01.org.linux-iscsi.ceph-node1.x8664:sn.571e1ab51af2
> > Local LIO instance already has LIO configured with a target - unable to
> > continue
> > />
> >
>
>
> Could you send the output of
>
> targetcli ls
>
> ?
>
> What distro are you using?
>
> You might just have a target setup from a non gwcli source. Maybe from
> the distro targetcli systemd tools.
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] iSCSI over RBD

2017-12-25 Thread Joshua Chen
Hello folks,
  I am trying to share my ceph rbd images through iscsi protocol.

I am trying iscsi-gateway
http://docs.ceph.com/docs/master/rbd/iscsi-overview/


now

systemctl start rbd-target-api
is working and I could run gwcli
(at a CentOS 7.4 osd node)

gwcli
/> ls
o- / ................................................................... [...]
  o- clusters ................................................... [Clusters: 1]
  | o- ceph ....................................................... [HEALTH_OK]
  |   o- pools ...................................................... [Pools: 1]
  |   | o- rbd ....................... [(x3), Commit: 0b/25.9T (0%), Used: 395M]
  |   o- topology ........................................... [OSDs: 9,MONs: 3]
  o- disks ...................................................... [0b, Disks: 0]
  o- iscsi-target ................................................. [Targets: 0]


but when I created iscsi-target, I got

Local LIO instance already has LIO configured with a target - unable to
continue


/> /iscsi-target create
iqn.2003-01.org.linux-iscsi.ceph-node1.x8664:sn.571e1ab51af2
Local LIO instance already has LIO configured with a target - unable to
continue
/>

and there is no more progress at all.

Is there something I need to check? Something missing? Please direct me
on further debugging.
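
(As it turned out in the replies archived above, the cause was the distro's
own 'target' service holding the LIO configuration; the fix was simply:

  systemctl stop target
  systemctl disable target

after which gwcli / rbd-target-api could create the target.)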

Thanks in advance

Cheers
Joshua
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com