Re: [ceph-users] Ceph repo is broken, no repodata at all

2016-09-26 Thread Chengwei Yang
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote:
> 
> > Op 23 september 2016 om 5:59 schreef Chengwei Yang 
> > :
> > 
> > 
> > Hi list,
> > 
> > I found that the ceph repo has been broken these days: there is no repodata in
> > the repo at all.
> > 
> > http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/
> > 
> > It's just empty, so how can I install the ceph RPMs with yum?
> > 
> 
> Thanks for the report! I contacted the mirror admin for you asking him to 
> check out why these files are not present.

Thanks Wido, I expect it will be fixed soon.

> 
> Wido
> 
> > As a workaround I synced all the RPMs locally and created the repodata with the
> > createrepo command, but I think the upstream repo has to be fixed.
> > 
> > -- 
> > Thanks,
> > Chengwei
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Thanks,
Chengwei




[ceph-users] Ceph repo is broken, no repodata at all

2016-09-22 Thread Chengwei Yang
Hi list,

I found that the ceph repo has been broken these days: there is no repodata in
the repo at all.

http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/

It's just empty, so how can I install the ceph RPMs with yum?

As a workaround I synced all the RPMs locally and created the repodata with the
createrepo command, but I think the upstream repo has to be fixed.
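
For reference, a rough sketch of that workaround (assuming the repo id in
/etc/yum.repos.d/ceph.repo is "ceph" and /srv/ceph-mirror is a scratch directory;
both names are placeholders):

$ sudo yum install -y yum-utils createrepo
$ sudo reposync --repoid=ceph --download_path=/srv/ceph-mirror
$ sudo createrepo /srv/ceph-mirror/ceph

and then point a local .repo file at baseurl=file:///srv/ceph-mirror/ceph.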

-- 
Thanks,
Chengwei




Re: [ceph-users] ceph map error

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:21:55AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen  wrote:
> > hi,
> > when I run `rbd map` on pool rbd image test, I get an error:
> > hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> > rbd: sysfs write failed
> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
> > rbd: map failed: (5) Input/output error
> >
> > dmesg |tail
> > [ 4148.672530] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> > 384a042a42 < server's 4384a042a42, missing 400
> > [ 4148.672576] libceph: mon1 172.22.111.173:6789 socket error on read
> > [ 4158.688709] libceph: mon0 172.22.111.172:6789 feature set mismatch, my
> > 384a042a42 < server's 4384a042a42, missing 400
> > [ 4158.688750] libceph: mon0 172.22.111.172:6789 socket error on read
> > [ 4168.704629] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> > 384a042a42 < server's 4384a042a42, missing 400
> > [ 4168.704674] libceph: mon1 172.22.111.173:6789 socket error on read
> > [ 4178.721313] libceph: mon2 172.22.111.174:6789 feature set mismatch, my
> > 384a042a42 < server's 4384a042a42, missing 400
> > [ 4178.721396] libceph: mon2 172.22.111.174:6789 socket error on read
> > [ 4188.736345] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> > 384a042a42 < server's 4384a042a42, missing 400
> > [ 4188.736383] libceph: mon1 172.22.111.173:6789 socket error on read
> >
> >
> > sudo ceph -v
> > ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> >
> > hdu@ceph-mon2:~$ uname -a
> > Linux ceph-mon2 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23
> > UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> >
> > can you help me ?
> 
> Hi Yanjun,
> 
> Please disregard Chengwei's message.  The page he linked to clearly
> states that hashpspool is supported in kernels 3.9 and above...
> 
> Your CRUSH tunables are set to jewel (tunables5).  As per [1],
> tunables5 are supported starting with kernel 4.5.  You need to either
> upgrade your kernel or set your tunables to legacy with

Ah, thanks Ilya for correcting me.

> 
> $ ceph osd crush tunables legacy
> 
> [1] 
> http://docs.ceph.com/docs/master/rados/operations/crush-map/#which-client-versions-support-crush-tunables5
> 
> Thanks,
> 
> Ilya
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
Thanks,
Chengwei




Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:46:37AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang
>  wrote:
> > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> >> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
> >>  wrote:
> >> > Hi List,
> >> >
> >> > I read from ceph document[1] that there are several rbd image features
> >> >
> >> >   - layering: layering support
> >> >   - striping: striping v2 support
> >> >   - exclusive-lock: exclusive locking support
> >> >   - object-map: object map support (requires exclusive-lock)
> >> >   - fast-diff: fast diff calculations (requires object-map)
> >> >   - deep-flatten: snapshot flatten support
> >> >   - journaling: journaled IO support (requires exclusive-lock)
> >> >
> >> > But I couldn't find any document or blog post that says since which kernel
> >> > version these features are supported.
> >>
> >> No released kernel currently supports these features.  exclusive-lock
> >> is staged for 4.9, we are working on staging object-map and fast-diff.
> >
> > Thanks Ilya
> >
> > Since object-map, fast-diff and journaling depend on exclusive-lock, I expect
> > they will become available after exclusive-lock.
> >
> > And I verified that layering is supported fine by the CentOS kernel 3.10.0-327,
> > while neither striping nor deep-flatten is supported. Do you know which vanilla
> > kernel version starts to support rbd striping and deep-flatten?
> 
> Right, I should have been more specific - layering has been supported
> for a long time (3.10+), so I crossed it off.
> 
> None of the features except layering are currently supported by the
> kernel client.  It will let you map an image with the striping feature
> enabled, but only if stripe_unit == object_size and stripe_count == 1
> (the default striping pattern).
> 
> exclusive-lock, object-map and fast-diff are being worked on.
> striping, deep-flatten and journaling are farther away.

Thanks Ilya, much appreciated!
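
For anyone else hitting this with the kernel client: a hedged sketch of creating
an image that an older krbd can map, by enabling only the layering feature (the
image name and size are placeholders):

$ rbd create test-image --size 10240 --image-feature layering
$ sudo rbd map test-image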

> 
> Thanks,
> 
> Ilya

-- 
Thanks,
Chengwei




Re: [ceph-users] ceph map error

2016-08-15 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 11:18:06AM +0800, Yanjun Shen wrote:
> hi,
> when I run `rbd map` on pool rbd image test, I get an error:
> hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> rbd: sysfs write failed
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (5) Input/output error
> 
> dmesg |tail
> [ 4148.672530] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> 384a042a42 < server's 4384a042a42, missing 400
> [ 4148.672576] libceph: mon1 172.22.111.173:6789 socket error on read
> [ 4158.688709] libceph: mon0 172.22.111.172:6789 feature set mismatch, my
> 384a042a42 < server's 4384a042a42, missing 400
> [ 4158.688750] libceph: mon0 172.22.111.172:6789 socket error on read
> [ 4168.704629] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> 384a042a42 < server's 4384a042a42, missing 400
> [ 4168.704674] libceph: mon1 172.22.111.173:6789 socket error on read
> [ 4178.721313] libceph: mon2 172.22.111.174:6789 feature set mismatch, my
> 384a042a42 < server's 4384a042a42, missing 400
> [ 4178.721396] libceph: mon2 172.22.111.174:6789 socket error on read
> [ 4188.736345] libceph: mon1 172.22.111.173:6789 feature set mismatch, my
> 384a042a42 < server's 4384a042a42, missing 400
> [ 4188.736383] libceph: mon1 172.22.111.173:6789 socket error on read

It seems this page could help:
http://cephnotes.ksperis.com/blog/2014/01/21/feature-set-mismatch-error-on-ceph-kernel-client

For your situation, try turning off the hashpspool feature as below:

# ceph osd pool set rbd hashpspool false
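
Before changing anything it may be worth inspecting both sides of the mismatch;
these read-only commands should be safe to run:

$ ceph osd crush show-tunables      (current CRUSH tunables profile)
$ ceph osd dump | grep '^pool'      (per-pool flags, including hashpspool)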

> 
> 
> sudo ceph -v
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> 
> hdu@ceph-mon2:~$ uname -a
> Linux ceph-mon2 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23 
> UTC
> 2014 x86_64 x86_64 x86_64 GNU/Linux
> 
> can you help me ?
> 
> 

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Thanks,
Chengwei




Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
>  wrote:
> > Hi List,
> >
> > I read from ceph document[1] that there are several rbd image features
> >
> >   - layering: layering support
> >   - striping: striping v2 support
> >   - exclusive-lock: exclusive locking support
> >   - object-map: object map support (requires exclusive-lock)
> >   - fast-diff: fast diff calculations (requires object-map)
> >   - deep-flatten: snapshot flatten support
> >   - journaling: journaled IO support (requires exclusive-lock)
> >
> > But I didn't found any document/blog/google tells these features supported 
> > since
> > which kernel version.
> 
> No released kernel currently supports these features.  exclusive-lock
> is staged for 4.9, we are working on staging object-map and fast-diff.

Thanks Ilya

Since object-map, fast-diff and journaling depend on exclusive-lock, I expect
they will become available after exclusive-lock.

And I verified that layering is supported fine by the CentOS kernel 3.10.0-327,
while neither striping nor deep-flatten is supported. Do you know which vanilla
kernel version starts to support rbd striping and deep-flatten?


-- 
Thanks,
Chengwei

> 
> Thanks,
> 
> Ilya




[ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
Hi List,

I read from ceph document[1] that there are several rbd image features

  - layering: layering support
  - striping: striping v2 support
  - exclusive-lock: exclusive locking support
  - object-map: object map support (requires exclusive-lock)
  - fast-diff: fast diff calculations (requires object-map)
  - deep-flatten: snapshot flatten support
  - journaling: journaled IO support (requires exclusive-lock)

But I couldn't find any document or blog post that says since which kernel
version these features are supported.

I'm using CentOS 7 with kernel 3.10 and found that only layering is supported,
so I'd like to learn which version supports all the features and give it a try.
Thanks in advance.

[1] http://docs.ceph.com/docs/master/man/8/rbd/
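
In case it helps, a sketch of inspecting and trimming the feature bits on an
existing image so that an older kernel can map it (the image name is a
placeholder, and which features you actually need to drop depends on your kernel):

$ rbd info test-image
$ rbd feature disable test-image fast-diff object-map exclusive-lock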

-- 
Thanks,
Chengwei




Re: [ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread Chengwei Yang
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote:
> Hi All,
> 
>  
> 
> I am trying to do a fresh install of Ceph Jewel on my cluster.  I went through
> all the steps in configuring the network, ssh, password, etc.  Now I am at the
> stage of running the ceph-deploy commands to install monitors and other 
> nodes. 
> I am getting the below error when I am deploying the first monitor.  Not able
> to figure out what it is that I am missing here.  Any pointers or help
> appreciated.
> 
>  
> 
> Thanks in advance.
> 
>  
> 
> - epk
> 
>  
> 
> [ep-c2-mon-01][DEBUG ] ---> Package librbd1.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package librbd1.x86_64 1:10.2.2-0.el7 will be an
> update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-cephfs.x86_64 1:0.94.7-0.el7 will 
> be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-cephfs.x86_64 1:10.2.2-0.el7 will 
> be
> an update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rados.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rados.x86_64 1:10.2.2-0.el7 will be
> an update
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rbd.x86_64 1:0.94.7-0.el7 will be
> updated
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-rbd.x86_64 1:10.2.2-0.el7 will be 
> an
> update
> 
> [ep-c2-mon-01][DEBUG ] --> Running transaction check
> 
> [ep-c2-mon-01][DEBUG ] ---> Package ceph-selinux.x86_64 1:10.2.2-0.el7 will be
> installed
> 
> [ep-c2-mon-01][DEBUG ] --> Processing Dependency: selinux-policy-base >=
> 3.13.1-60.el7_2.3 for package: 1:ceph-selinux-10.2.2-0.el7.x86_64
> 
> [ep-c2-mon-01][DEBUG ] ---> Package python-setuptools.noarch 0:0.9.8-4.el7 
> will
> be installed
> 
> [ep-c2-mon-01][DEBUG ] --> Finished Dependency Resolution
> 
> [ep-c2-mon-01][WARNIN] Error: Package: 1:ceph-selinux-10.2.2-0.el7.x86_64
> (ceph)
> 
> [ep-c2-mon-01][DEBUG ]  You could try using --skip-broken to work around the
> problem
> 
> [ep-c2-mon-01][WARNIN]Requires: selinux-policy-base >=
> 3.13.1-60.el7_2.3

It says it requires selinux-policy-base >= 3.13.1-60.el7_2.3.

> 
> [ep-c2-mon-01][WARNIN]Installed:
> selinux-policy-targeted-3.13.1-60.el7.noarch (@CentOS/7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7
> 
> [ep-c2-mon-01][WARNIN]Available:
> selinux-policy-minimum-3.13.1-60.el7.noarch (CentOS-7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7
> 
> [ep-c2-mon-01][WARNIN]Available:
> selinux-policy-mls-3.13.1-60.el7.noarch (CentOS-7)
> 
> [ep-c2-mon-01][WARNIN]selinux-policy-base = 3.13.1-60.el7

However, neither the installed version nor the available versions meet that
requirement, so the install fails.

You may have an incorrect repo configuration.
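
A quick way to see which repo could provide a new enough package (assuming the
stock CentOS repo names; on CentOS 7.2 the required selinux-policy build usually
comes from the updates repo):

$ yum provides 'selinux-policy-base'
$ yum --showduplicates list selinux-policy-targeted
$ yum repolist enabled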

> 
> [ep-c2-mon-01][DEBUG ]  You could try running: rpm -Va --nofiles --nodigest
> 
> [ep-c2-mon-01][ERROR ] RuntimeError: command returned non-zero exit status: 1
> 
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install
> ceph ceph-radosgw
> 
>  
> 
>  
> 
> EP KOMARLA,
> 
> Flex_RGB_Sml_tm
> 
> Emal: ep.koma...@flextronics.com
> 
> Address: 677 Gibraltor Ct, Building #2, Milpitas, CA 94035, USA
> 
> Phone: 408-674-6090 (mobile)
> 
>  
> 
> 
> Legal Disclaimer:
> The information contained in this message may be privileged and confidential.
> It is intended to be read only by the individual or entity to whom it is
> addressed or by their designee. If the reader of this message is not the
> intended recipient, you are on notice that any distribution of this message, 
> in
> any form, is strictly prohibited. If you have received this message in error,
> please immediately notify the sender and delete or destroy any copy of this
> message!



> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Thanks,
Chengwei




Re: [ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
On Tue, Aug 02, 2016 at 11:35:01AM +0800, Chengwei Yang wrote:
> Hi list,
> 
> I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be
> **changed** if you like.
> 
> By default, if bucket ID hasn't been configured, then ceph will assign one
> automatically to the bucket.
> 
> When planning to manage a production ceph cluster, we'd like to create a custom
> crush map, including the bucket IDs, like:
> 
> host ID range [1, 100]
> rack ID range [101, 200]
> datacenter ID range [201, 300]

Oops, all the above IDs should be negative integers.

> ...
> 
> rather than leave the IDs to be assigned by ceph automatically.
> 
> Please help me understand whether it's worthwhile to manage bucket IDs myself or
> just let ceph handle it.
> 
> Thanks in advance!
> 
> -- 
> Thanks,
> Chengwei



-- 
Thanks,
Chengwei




[ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
Hi list,

I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be
**changed** if you like.

By default, if bucket ID hasn't been configured, then ceph will assign one
automatically to the bucket.

When planning to manage a production ceph cluster, we'd like to create a custom
crush map, including the bucket IDs, like:

host ID range [1, 100]
rack ID range [101, 200]
datacenter ID range [201, 300]
...

rather than leave the IDs to be assigned by ceph automatically.

Please help me understand whether it's worthwhile to manage bucket IDs myself or
just let ceph handle it.

Thanks in advance!
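
For illustration only, here is a hypothetical fragment of a decompiled CRUSH map
with hand-picked (negative, unique) bucket IDs; the names, IDs and weights are
made up:

host node-01 {
        id -2                   # chosen by hand, must be negative and unique
        alg straw
        hash 0                  # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
rack rack-01 {
        id -101
        alg straw
        hash 0                  # rjenkins1
        item node-01 weight 2.000
}

A map edited this way can be recompiled with `crushtool -c map.txt -o map.bin`
before being injected back into the cluster.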

-- 
Thanks,
Chengwei




Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-07-31 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote:
> 
> > Op 29 juli 2016 om 16:30 schreef Chengwei Yang :
> > 
> > 
> > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> > > 
> > > > Op 29 juli 2016 om 13:20 schreef Chengwei Yang 
> > > > :
> > > > 
> > > > 
> > > > Hi Christian,
> > > > 
> > > > Thanks for your reply. I really don't like the HEALTH_WARN, and it is not
> > > > allowed to decrease the pg_num of a pool.
> > > > 
> > > > So can I just remove the default **rbd** pool and re-create it by using 
> > > > `ceph osd
> > > > pool create`?
> > > > 
> > > 
> > > Yes, you can.
> > > 
> > > > If so, is there anything I have to pay attention to?
> > > > 
> > > 
> > > Your data! Are you sure there is no data in the pool? If so, make sure 
> > > you back that up first. Otherwise you can remove the pool and re-create 
> > > it. The pool is not required, so if you don't use it you can also just 
> > > remove it.
> > 
> > Wow! Great to know that. Thanks Wido, much appreciated!
> > 
> 
> Again, please make sure there is no data in the pool:
> 
> $ rbd ls
> $ ceph df
> 
> It should show in both cases that there is no data in the pool. If so, be 
> convinced that it is not important data.

Confirmed, thanks!
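
For the archives, a hedged sketch of the remove/re-create steps (the PG counts
are placeholders only; pick your own with pgcalc):

$ rbd ls                    (should print nothing)
$ ceph df                   (the rbd pool should show no data)
$ ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
$ ceph osd pool create rbd 2048 2048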

> 
> Wido
> 
> > -- 
> > Thanks,
> > Chengwei
> > 
> > > 
> > > Wido
> > > 
> > > > Thanks in advance!
> > > > 
> > > > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> > > > > 
> > > > > > Hi list,
> > > > > > 
> > > > > > I just followed the placement group guide to set pg_num for the rbd 
> > > > > > pool.
> > > > > > 
> > > > > How many other pools do you have, or is that the only pool?
> > > > > 
> > > > > The numbers mentioned are for all pools, not per pool, something that
> > > > > isn't abundantly clear from the documentation either.
> > > > > 
> > > > > >   "
> > > > > >   Less than 5 OSDs set pg_num to 128
> > > > > >   Between 5 and 10 OSDs set pg_num to 512
> > > > > >   Between 10 and 50 OSDs set pg_num to 4096
> > > > > >   If you have more than 50 OSDs, you need to understand the 
> > > > > > tradeoffs and how to
> > > > > >   calculate the pg_num value by yourself
> > > > > >   For calculating pg_num value by yourself please take help of 
> > > > > > pgcalc tool
> > > > > >   "
> > > > > > 
> > > > > You should have heeded the hint about pgcalc, which is by far the best
> > > > > thing to do.
> > > > > The above numbers are an (imprecise) attempt to give a quick answer 
> > > > > to a
> > > > > complex question.
> > > > > 
> > > > > > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > > > > > recommendation.
> > > > > > 
> > > > > > However, after configured pg_num and pgp_num both to 4096, I found 
> > > > > > that my
> > > > > > ceph cluster in **HEALTH_WARN** status, which does surprised me and 
> > > > > > still
> > > > > > confusing me.
> > > > > > 
> > > > > PGcalc would recommend 2048 PGs at most (for a single pool) with 40 
> > > > > OSDs.
> > > > > 
> > > > > I assume the above high number of 4096 stems from the wisdom that with
> > > > > small clusters more PGs than normally recommended (100 per OSD) can be
> > > > > helpful. 
> > > > > It was also probably written before those WARN calculations were 
> > > > > added to
> > > > > Ceph.
> > > > > 
> > > > > The above would better read:
> > > > > ---
> > > > > Use PGcalc!
> > > > > [...]
> > > > > Between 10 and 20 OSDs set pg_num to 1024
> > > > > Between 20 and 40 OSDs set pg_num to 2048
> > > > > 
> > > > > Over 40 definitely use and understand PGcalc.

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-31 Thread Chengwei Yang
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
> 
> > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> > > 
> > > > Hi list,
> > > > 
> > > > I just followed the placement group guide to set pg_num for the rbd 
> > > > pool.
> > > > 
> > > How many other pools do you have, or is that the only pool?
> > 
> > Yes, this is the only one.
> > 
> > > 
> > > The numbers mentioned are for all pools, not per pool, something that
> > > isn't abundantly clear from the documentation either.
> > 
> > Exactly, especially for a newbie like me. :-)
> > 
> Given how often and how LONG this issue has come up, it really needs a
> rewrite and lots of BOLD sentences. 
> 
> > > 
> > > >   "
> > > >   Less than 5 OSDs set pg_num to 128
> > > >   Between 5 and 10 OSDs set pg_num to 512
> > > >   Between 10 and 50 OSDs set pg_num to 4096
> > > >   If you have more than 50 OSDs, you need to understand the tradeoffs 
> > > > and how to
> > > >   calculate the pg_num value by yourself
> > > >   For calculating pg_num value by yourself please take help of pgcalc 
> > > > tool
> > > >   "
> > > > 
> > > You should have heeded the hint about pgcalc, which is by far the best
> > > thing to do.
> > > The above numbers are an (imprecise) attempt to give a quick answer to a
> > > complex question.
> > > 
> > > > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > > > recommendation.
> > > > 
> > > > However, after configured pg_num and pgp_num both to 4096, I found that 
> > > > my
> > > > ceph cluster in **HEALTH_WARN** status, which does surprised me and 
> > > > still
> > > > confusing me.
> > > > 
> > > PGcalc would recommend 2048 PGs at most (for a single pool) with 40 OSDs.
> > 
> > BTW, I read PGcalc and found that it may also have a flaw, as it says:
> > 
> > "
> > If the value of the above calculation is less than the value of (OSD#) / 
> > (Size),
> > then the value is updated to the value of ((OSD#) / (Size)). This is to 
> > ensure
> > even load / data distribution by allocating at least one Primary or 
> > Secondary PG
> > to every OSD for every Pool.
> > "
> > 
> > However, in the above **OpenStack w RGW** use case, there are a lot of small
> > pools with 32 PGs, which is apparently smaller than OSD# / Size (100/3 ~= 33.33).
> > 
> > I do mean it, even though it's not smaller by much. :-)
> >
> 
> Well, there are always trade-offs to "automatic" solutions like this when
> operating either small or large clusters.
> 
> While the goal of distributing pools amongst all OSDs is commendable, it
> is also not going to be realistic in all cases.
> 
> Nor is it typically necessary, since a small (data size) pool is
> supposedly going to see less activity than a larger one, so the amount of
> IOPS (# of OSDs) is going to be lower, too.
> 
> In cases where that might not true (CephFS metadata comes to mind),
> putting such a pool on SSD based OSDs might be the better choice than
> increasing PGs on HDD based OSDs.
> 
> Or if you have a large (data size) pool that is being used for something
> like backups and sees very little activity, give that one less PGs than
> you'd normally do and give those PGs to more active ones.

Thanks, it's much clearer to me now.

> 
> It boils down to the "understanding" part.
> 
> > > 
> > > I assume the above high number of 4096 stems from the wisdom that with
> > > small clusters more PGs than normally recommended (100 per OSD) can be
> > > helpful. 
> > > It was also probably written before those WARN calculations were added to
> > > Ceph.
> > > 
> > > The above would better read:
> > > ---
> > > Use PGcalc!
> > > [...]
> > > Between 10 and 20 OSDs set pg_num to 1024
> > > Between 20 and 40 OSDs set pg_num to 2048
> > > 
> > > Over 40 definitely use and understand PGcalc.
> > > ---
> > > 
> > > > >   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
> > > >  health HEALTH_WARN
> > > > too many PGs per OSD (307 > max 300)
> > > > 
> >

Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> 
> > Op 29 juli 2016 om 13:20 schreef Chengwei Yang :
> > 
> > 
> > Hi Christian,
> > 
> > Thanks for your reply. I really don't like the HEALTH_WARN, and it is not
> > allowed to decrease the pg_num of a pool.
> > 
> > So can I just remove the default **rbd** pool and re-create it by using 
> > `ceph osd
> > pool create`?
> > 
> 
> Yes, you can.
> 
> > If so, is there anything I have to pay attention to?
> > 
> 
> Your data! Are you sure there is no data in the pool? If so, make sure you 
> back that up first. Otherwise you can remove the pool and re-create it. The 
> pool is not required, so if you don't use it you can also just remove it.

Wow! Great to know that. Thanks Wido, much appreciated!

-- 
Thanks,
Chengwei

> 
> Wido
> 
> > Thanks in advance!
> > 
> > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> > > 
> > > > Hi list,
> > > > 
> > > > I just followed the placement group guide to set pg_num for the rbd 
> > > > pool.
> > > > 
> > > How many other pools do you have, or is that the only pool?
> > > 
> > > The numbers mentioned are for all pools, not per pool, something that
> > > isn't abundantly clear from the documentation either.
> > > 
> > > >   "
> > > >   Less than 5 OSDs set pg_num to 128
> > > >   Between 5 and 10 OSDs set pg_num to 512
> > > >   Between 10 and 50 OSDs set pg_num to 4096
> > > >   If you have more than 50 OSDs, you need to understand the tradeoffs 
> > > > and how to
> > > >   calculate the pg_num value by yourself
> > > >   For calculating pg_num value by yourself please take help of pgcalc 
> > > > tool
> > > >   "
> > > > 
> > > You should have heeded the hint about pgcalc, which is by far the best
> > > thing to do.
> > > The above numbers are an (imprecise) attempt to give a quick answer to a
> > > complex question.
> > > 
> > > > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > > > recommendation.
> > > > 
> > > > However, after configured pg_num and pgp_num both to 4096, I found that 
> > > > my
> > > > ceph cluster in **HEALTH_WARN** status, which does surprised me and 
> > > > still
> > > > confusing me.
> > > > 
> > > PGcalc would recommend 2048 PGs at most (for a single pool) with 40 OSDs.
> > > 
> > > I assume the above high number of 4096 stems from the wisdom that with
> > > small clusters more PGs than normally recommended (100 per OSD) can be
> > > helpful. 
> > > It was also probably written before those WARN calculations were added to
> > > Ceph.
> > > 
> > > The above would better read:
> > > ---
> > > Use PGcalc!
> > > [...]
> > > Between 10 and 20 OSDs set pg_num to 1024
> > > Between 20 and 40 OSDs set pg_num to 2048
> > > 
> > > Over 40 definitely use and understand PGcalc.
> > > ---
> > > 
> > > > >   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
> > > >  health HEALTH_WARN
> > > > too many PGs per OSD (307 > max 300)
> > > > 
> > > > I see the cluster also says "4096 active+clean" so it's safe, but I do 
> > > > not like
> > > > the HEALTH_WARN in anyway.
> > > >
> > > You can ignore it, but yes, it is annoying.
> > >  
> > > > As I know(from ceph -s output), the recommended pg_num per OSD is [30, 
> > > > 300], any
> > > > other pg_num out of this range with bring cluster to HEALTH_WARN.
> > > > 
> > > > So what I would like to say: is the document misleading? Should we fix 
> > > > it?
> > > > 
> > > Definitely.
> > > 
> > > Christian
> > > -- 
> > > Christian BalzerNetwork/Systems Engineer
> > > ch...@gol.com Global OnLine Japan/Rakuten Communications
> > > http://www.gol.com/
> > 
> > -- 
> > Thanks,
> > Chengwei
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




[ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
Hi Christian,

Thanks for your reply. I really don't like the HEALTH_WARN, and it is not
allowed to decrease the pg_num of a pool.

So can I just remove the default **rbd** pool and re-create it by using
`ceph osd pool create`?

If so, is there anything I have to pay attention to?

Thanks in advance!

On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> 
> > Hi list,
> > 
> > I just followed the placement group guide to set pg_num for the rbd pool.
> > 
> How many other pools do you have, or is that the only pool?
> 
> The numbers mentioned are for all pools, not per pool, something that
> isn't abundantly clear from the documentation either.
> 
> >   "
> >   Less than 5 OSDs set pg_num to 128
> >   Between 5 and 10 OSDs set pg_num to 512
> >   Between 10 and 50 OSDs set pg_num to 4096
> >   If you have more than 50 OSDs, you need to understand the tradeoffs and 
> > how to
> >   calculate the pg_num value by yourself
> >   For calculating pg_num value by yourself please take help of pgcalc tool
> >   "
> > 
> You should have heeded the hint about pgcalc, which is by far the best
> thing to do.
> The above numbers are an (imprecise) attempt to give a quick answer to a
> complex question.
> 
> > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > recommendation.
> > 
> > However, after configured pg_num and pgp_num both to 4096, I found that my
> > ceph cluster in **HEALTH_WARN** status, which does surprised me and still
> > confusing me.
> > 
> PGcalc would recommend 2048 PGs at most (for a single pool) with 40 OSDs.
> 
> I assume the above high number of 4096 stems from the wisdom that with
> small clusters more PGs than normally recommended (100 per OSD) can be
> helpful. 
> It was also probably written before those WARN calculations were added to
> Ceph.
> 
> The above would better read:
> ---
> Use PGcalc!
> [...]
> Between 10 and 20 OSDs set pg_num to 1024
> Between 20 and 40 OSDs set pg_num to 2048
> 
> Over 40 definitely use and understand PGcalc.
> ---
> 
> > >   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
> >  health HEALTH_WARN
> > too many PGs per OSD (307 > max 300)
> > 
> > I see the cluster also says "4096 active+clean" so it's safe, but I do not 
> > like
> > the HEALTH_WARN in anyway.
> >
> You can ignore it, but yes, it is annoying.
>  
> > As I know(from ceph -s output), the recommended pg_num per OSD is [30, 
> > 300], any
> > other pg_num out of this range with bring cluster to HEALTH_WARN.
> > 
> > So what I would like to say: is the document misleading? Should we fix it?
> > 
> Definitely.
> 
> Christian
> -- 
> Christian BalzerNetwork/Systems Engineer
> ch...@gol.com Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

-- 
Thanks,
Chengwei




Re: [ceph-users] [RGW] how to choise the best placement groups ?

2016-07-29 Thread Chengwei Yang
Would http://ceph.com/pgcalc/ help?

On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> Hi all,
> I have a cluster consists of: 3 Monitors, 1 RGW, 1 host of 24 OSDs(2TB/OSD) 
> and
> some pool as: 
>     ap-southeast.rgw.data.root
>     ap-southeast.rgw.control
>     ap-southeast.rgw.gc
>     ap-southeast.rgw.log
>     ap-southeast.rgw.intent-log
>     ap-southeast.rgw.usage
>     ap-southeast.rgw.users.keys
>     ap-southeast.rgw.users.email
>     ap-southeast.rgw.users.swift
>     ap-southeast.rgw.users.uid
>     ap-southeast.rgw.buckets.index
>     ap-southeast.rgw.buckets.data
>     ap-southeast.rgw.buckets.non-ec
>     ap-southeast.rgw.meta
> In which "ap-southeast.rgw.buckets.data" is a erasure pool(k=20, m=4) and all
> of the remaining pool are replicated(size=3). I've used (100*OSDs)/size  to
> calculate the number of PGs, e.g. 100*24/3 = 800(nearest power of 2: 1024) for
> replicated pools and 100*24/24=100(nearest power of 2: 128) for erasure pool.
> I'm not sure this is the best placement group number, someone can give me some
> advice ?
> Thank !

> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


-- 
Thanks,
Chengwei




Re: [ceph-users] Reply: Reply: too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 04:46:54AM +, zhu tong wrote:
> Right, that was the one that I calculated the   osd_pool_default_pg_num in our
> test cluster.
> 
> 
> 7 OSD, 11 pools, osd_pool_default_pg_num is calculated to be 256, but when 
> ceph
> status shows 
> 
> health HEALTH_WARN
> too many PGs per OSD (5818 > max 300)
>  monmap e1: 1 mons at {open-kvm-app63=192.168.32.103:6789/0}
> election epoch 1, quorum 0 open-kvm-app63
>  osdmap e143: 7 osds: 7 up, 7 in
>   pgmap v717609: 6916 pgs, 11 pools, 1617 MB data, 4577 objects
> 17600 MB used, 3481 GB / 3498 GB avail
> 6916 active+clean
> 
> How so?

It says there are 6916 PGs; 6916 / 11 ≈ 629, so about 629 PGs per pool I think,
quite a bit larger than the 256 you said.

PGs per OSD = PGs * pool size / OSDs, so pool size = 5818 * 7 / 6916 ≈ 5.9, so I
think you have quite a large pool size (replica count)?

You may get the pg_num and pool size from the commands below:

$ ceph osd pool get <pool> pg_num
pg_num: 4096
# ceph osd pool get <pool> size
size: 3
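
If it helps, a rough one-liner for the number the warning actually checks, i.e.
sum(pg_num * size) over all pools divided by the OSD count (the 7 is assumed from
your output above):

$ ceph osd dump | awk '/^pool/ {for(i=1;i<=NF;i++){if($i=="size")s=$(i+1); if($i=="pg_num")p=$(i+1)}; t+=s*p} END {print t/7, "PGs per OSD"}'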


-- 
Thanks,
Chengwei

> 
> 
> Thanks.
> 
> ━━━
> From: Christian Balzer 
> Sent: 29 July 2016 3:31:18
> To: ceph-users@lists.ceph.com
> Cc: zhu tong
> Subject: Re: [ceph-users] Reply: too many PGs per OSD (307 > max 300)
>  
> 
> Hello,
> 
> On Fri, 29 Jul 2016 03:18:10 + zhu tong wrote:
> 
> > The same problem is confusing me recently too, trying to figure out the
> relationship (an equation would be the best) among number of pools, OSD and 
> PG.
> >
> The pgcalc tool and the equation on that page are your best bet/friend.
>  http://ceph.com/pgcalc/
> 
> > For example, having 10 OSD, 7 pools in one cluster, and
> osd_pool_default_pg_num = 128, then how many PGs the health status would show?
> > I have seen some recommended calc the other way round -- inferring
> osd_pool_default_pg_num  value by giving a fixed amount of OSD and PGs, but
> when I try it in the way above mentioned, the two results do not match.
> >
> Number of PGs per OSD is your goal.
> To use a simpler example, 20 OSDs, 4 pools, all of equal (expected amount
> of data) size.
> So that's 1024 total PGs (about 150 per OSD),  thus 256 per pool.
>  
> Again, see pgcalc.
> 
> Christian
> > Thanks.
> > 
> > From: ceph-users on behalf of Christian Balzer
> 
> > Sent: 29 July 2016 2:47:59
> > To: ceph-users
> > Subject: Re: [ceph-users] too many PGs per OSD (307 > max 300)
> >
> > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> >
> > > Hi list,
> > >
> > > I just followed the placement group guide to set pg_num for the rbd pool.
> > >
> > How many other pools do you have, or is that the only pool?
> >
> > The numbers mentioned are for all pools, not per pool, something that
> > isn't abundantly clear from the documentation either.
> >
> > >   "
> > >   Less than 5 OSDs set pg_num to 128
> > >   Between 5 and 10 OSDs set pg_num to 512
> > >   Between 10 and 50 OSDs set pg_num to 4096
> > >   If you have more than 50 OSDs, you need to understand the tradeoffs and
> how to
> > >   calculate the pg_num value by yourself
> > >   For calculating pg_num value by yourself please take help of pgcalc tool
> > >   "
> > >
> > You should have heeded the hint about pgcalc, which is by far the best
> > thing to do.
> > The above numbers are an (imprecise) attempt to give a quick answer to a
> > complex question.
> >
> > > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > > recommendation.
> > >
> > > However, after configured pg_num and pgp_num both to 4096, I found that my
> > > ceph cluster in **HEALTH_WARN** status, which does surprised me and still
> > > confusing me.
> > >
> > PGcalc would recommend 2048 PGs at most (for a single pool) with 40 OSDs.
> >
> > I assume the above high number of 4096 stems from the wisdom that with
> > small clusters more PGs than normally recommended (100 per OSD) can be
> > helpful.
> > It was also probably written before those WARN calculations were added to
> > Ceph.
> >
> > The above would better read:
> > ---
> > Use PGcalc!
> > [...]
> > Between 10 and 20 OSDs set pg_num to 1024
> > Between 20 and 40 OSDs set pg_num to 2048
> >
> > Over 40 definitely use and understand PGcalc.
> > ---
> >
> > > >   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
> > >  health HEALTH_WARN

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> 
> > Hi list,
> > 
> > I just followed the placement group guide to set pg_num for the rbd pool.
> > 
> How many other pools do you have, or is that the only pool?

Yes, this is the only one.

> 
> The numbers mentioned are for all pools, not per pool, something that
> isn't abundantly clear from the documentation either.

Exactly, especially for a newbie like me. :-)

> 
> >   "
> >   Less than 5 OSDs set pg_num to 128
> >   Between 5 and 10 OSDs set pg_num to 512
> >   Between 10 and 50 OSDs set pg_num to 4096
> >   If you have more than 50 OSDs, you need to understand the tradeoffs and 
> > how to
> >   calculate the pg_num value by yourself
> >   For calculating pg_num value by yourself please take help of pgcalc tool
> >   "
> > 
> > You should have heeded the hint about pgcalc, which is by far the best
> thing to do.
> The above numbers are an (imprecise) attempt to give a quick answer to a
> complex question.
> 
> > Since I have 40 OSDs, so I set pg_num to 4096 according to the above
> > recommendation.
> > 
> > However, after configured pg_num and pgp_num both to 4096, I found that my
> > ceph cluster in **HEALTH_WARN** status, which does surprised me and still
> > confusing me.
> > 
> PGcalc would recommend 2048 PGs at most (for a single pool) with 40 OSDs.

BTW, I read PGcalc and found that it may also have a flaw, as it says:

"
If the value of the above calculation is less than the value of (OSD#) / (Size),
then the value is updated to the value of ((OSD#) / (Size)). This is to ensure
even load / data distribution by allocating at least one Primary or Secondary PG
to every OSD for every Pool.
"

However, in the above **OpenStack w RGW** use case, there are a lot of small
pools with 32 PGs, which is apparently smaller than OSD# / Size (100/3 ~= 33.33).

I do mean it, even though it's not smaller by much. :-)

> 
> I assume the above high number of 4096 stems from the wisdom that with
> small clusters more PGs than normally recommended (100 per OSD) can be
> helpful. 
> It was also probably written before those WARN calculations were added to
> Ceph.
> 
> The above would better read:
> ---
> Use PGcalc!
> [...]
> Between 10 and 20 OSDs set pg_num to 1024
> Between 20 and 40 OSDs set pg_num to 2048
> 
> Over 40 definitely use and understand PGcalc.
> ---
> 
> > >   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
> >  health HEALTH_WARN
> > too many PGs per OSD (307 > max 300)
> > 
> > I see the cluster also says "4096 active+clean" so it's safe, but I do not 
> > like
> > the HEALTH_WARN in anyway.
> >
> You can ignore it, but yes, it is annoying.
>  
> > As I know(from ceph -s output), the recommended pg_num per OSD is [30, 
> > 300], any
> > other pg_num out of this range with bring cluster to HEALTH_WARN.
> > 
> > So what I would like to say: is the document misleading? Should we fix it?
> > 
> Definitely.

OK, I'd like to submit a PR.

> 
> Christian
> -- 
> Christian BalzerNetwork/Systems Engineer
> ch...@gol.com Global OnLine Japan/Rakuten Communications
> http://www.gol.com/

-- 
Thanks,
Chengwei




[ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Chengwei Yang
Hi list,

I just followed the placement group guide to set pg_num for the rbd pool.

  "
  Less than 5 OSDs set pg_num to 128
  Between 5 and 10 OSDs set pg_num to 512
  Between 10 and 50 OSDs set pg_num to 4096
  If you have more than 50 OSDs, you need to understand the tradeoffs and how to
  calculate the pg_num value by yourself
  For calculating pg_num value by yourself please take help of pgcalc tool
  "

Since I have 40 OSDs, so I set pg_num to 4096 according to the above
recommendation.

However, after configured pg_num and pgp_num both to 4096, I found that my
ceph cluster in **HEALTH_WARN** status, which does surprised me and still
confusing me.

>   cluster bf6fa9e4-56db-481e-8585-29f0c8317773
 health HEALTH_WARN
too many PGs per OSD (307 > max 300)

I see the cluster also says "4096 active+clean" so it's safe, but I do not like
the HEALTH_WARN in any way.

As I know (from the ceph -s output), the recommended number of PGs per OSD is
[30, 300]; any value outside this range will bring the cluster to HEALTH_WARN.
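
For what it's worth, the 307 is easy to reproduce by hand, assuming the default
pool size of 3 (4096 PGs times 3 replicas spread over 40 OSDs):

$ echo $(( 4096 * 3 / 40 ))
307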

So what I would like to say: is the document misleading? Should we fix it?

-- 
Thanks,
Chengwei




Re: [ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
In addition, I tried `ceph auth rm`, which failed as well.

```
# ceph auth rm client.chengwei
Error EINVAL: 
```
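
In case it's useful to someone else, the removal subcommand I know for sure exists
on Jewel is `ceph auth del`, and inspecting the entity first doesn't hurt (same
user as in my keyring above):

```
# ceph auth get client.chengwei
# ceph auth del client.chengwei
```

If the goal is only to narrow the caps rather than delete the user, passing a
minimal valid capspec (e.g. mon 'allow r') avoids the empty-string parse error.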

-- 
Thanks,
Chengwei

On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote:
> Hi list,
> 
> I'm learning ceph and follow
> http://docs.ceph.com/docs/master/rados/operations/user-management/
> to experience ceph user management.
> 
> I create a user `client.chengwei` which looks like below.
> 
> ```
> exported keyring for client.chengwei
> [client.chengwei]
> key = AQBC1ZlXnVRgOBAA/nO03Hr1C1hGdrckTJWX/w==
> caps mon = "allow r"
> caps osd = "allow rw pool=rbd"
> ```
> 
> And now I'm trying to cleanup it's caps, so I executed below command
> 
> ```
> # ceph auth caps client.chengwei mon ' ' osd ' '
> Error EINVAL: moncap parse failed, stopped at ' ' of ' '
> 
> ```
> 
> However it failed, this command is exactly copied from the above guide and 
> just
> replaced with my own user, so any help?
> 
> I'm using Jewel 10.2.2 on CentOS 7.2, thanks in advance!
> 
> -- 
> Thanks,
> Chengwei






[ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
Hi list,

I'm learning ceph and following
http://docs.ceph.com/docs/master/rados/operations/user-management/
to get hands-on experience with ceph user management.

I create a user `client.chengwei` which looks like below.

```
exported keyring for client.chengwei
[client.chengwei]
key = AQBC1ZlXnVRgOBAA/nO03Hr1C1hGdrckTJWX/w==
caps mon = "allow r"
caps osd = "allow rw pool=rbd"
```

And now I'm trying to clean up its caps, so I executed the command below:

```
# ceph auth caps client.chengwei mon ' ' osd ' '
Error EINVAL: moncap parse failed, stopped at ' ' of ' '

```

However it failed. This command is copied exactly from the above guide, with just
my own user substituted, so can anyone help?

I'm using Jewel 10.2.2 on CentOS 7.2, thanks in advance!

-- 
Thanks,
Chengwei




[ceph-users] Can I use ceph from a ceph node?

2016-07-06 Thread Chengwei Yang
Hi List,

I just set up my first ceph demo cluster by following the step-by-step quick
start guide.

However, I noted that there is a FAQ at
http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try which says
it may be problematic to use a ceph client from a ceph cluster node.

Is that still the case?

So I cannot mix other software like Mesos or Hadoop onto the ceph cluster nodes?

Thanks in advance!

-- 
Thanks,
Chengwei




[ceph-users] confused by ceph quick install and manual install

2016-07-01 Thread Chengwei Yang
Hi List,

Sorry if this question was answered before.

I'm new to ceph and am following the ceph documentation to set up a ceph cluster.
However, I noticed that the manual install guide says the following:

http://docs.ceph.com/docs/master/install/install-storage-cluster/

> Ensure your YUM ceph.repo entry includes priority=2. See Get Packages for
> details:

But the /etc/yum.repos.d/ceph.repo created by ceph-deploy (following the quick
install guide) contains **priority=1**.

So I'm now quite confused: which one is correct?
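
For reference, a hypothetical ceph.repo stanza with the priority the manual guide
asks for (the URLs are the Jewel/el7 ones from the Get Packages page; double-check
them for your release, and note that the priority line only takes effect if the
yum-plugin-priorities package is installed):

[ceph]
name=Ceph packages for x86_64
baseurl=http://download.ceph.com/rpm-jewel/el7/x86_64
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc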

-- 
Thanks,
Chengwei

