Re: [ceph-users] Ceph repo is broken, no repodata at all

2016-09-26 Thread Chengwei Yang
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote: > > > Op 23 september 2016 om 5:59 schreef Chengwei Yang > > <chengwei.yang...@gmail.com>: > > > > > > Hi list, > > > > I found that ceph repo is broken these days, no any r

[ceph-users] Ceph repo is broken, no repodata at all

2016-09-22 Thread Chengwei Yang
Hi list, I found that the ceph repo is broken these days: there is no repodata in the repo at all. http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/ is just empty, so how can I install ceph rpms from yum? As a workaround, I synced all the rpms locally and created the repodata with the createrepo command,
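The workaround described in this thread can be sketched as a short script. This is a dry run that only prints the commands (the mirror URL is the one from the post; the local path is a hypothetical example), so review it and drop the `echo` prefixes before actually running it:

```shell
#!/bin/sh
# Dry-run sketch of the workaround: mirror the rpms locally, then
# rebuild the missing repodata with createrepo. Prints the commands
# instead of executing them.
mirror_and_rebuild() {
    mirror=$1
    dest=$2
    echo "mkdir -p $dest"
    # -A '*.rpm' restricts the mirror to package files
    echo "wget -r -np -nH --cut-dirs=3 -A '*.rpm' -P $dest $mirror"
    # createrepo regenerates the repodata/ directory yum needs
    echo "createrepo $dest"
}

mirror_and_rebuild "http://us-east.ceph.com/rpm-jewel/el7/x86_64/" "/srv/repos/ceph-jewel-el7"
```

The resulting directory can then be served over HTTP or referenced directly with a `file://` baseurl in a local .repo file.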

Re: [ceph-users] ceph map error

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:21:55AM +0200, Ilya Dryomov wrote: > On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen wrote: > > hi, > >when i run cep map -p pool rbd test, error > > hdu@ceph-mon2:~$ sudo rbd map -p rbd test > > rbd: sysfs write failed > > In some cases useful
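The usual cause of `rbd: sysfs write failed` is an image feature the kernel client does not understand. The triage discussed in this thread can be sketched as a dry run that prints the commands to run on a cluster node (pool/image names are examples; which features to disable is an assumption, so check the `rbd info` output first):

```shell
#!/bin/sh
# Dry-run triage sketch for "rbd: sysfs write failed": prints the
# commands rather than executing them.
rbd_map_triage() {
    pool=$1
    image=$2
    # the kernel usually logs which feature set it rejected
    echo "dmesg | tail"
    # inspect the features enabled on the image
    echo "rbd info $pool/$image"
    # disable features old kernels cannot handle; disable object-map
    # before exclusive-lock if you need to disable that one as well
    echo "rbd feature disable $pool/$image deep-flatten fast-diff object-map"
    echo "rbd map $pool/$image"
}

rbd_map_triage rbd test
```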

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:46:37AM +0200, Ilya Dryomov wrote: > On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang > <chengwei.yang...@gmail.com> wrote: > > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > >> On Mon, Aug 15, 2016 at 9:54 AM, Chengw

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang > <chengwei.yang...@gmail.com> wrote: > > Hi List, > > > > I read from ceph document[1] that there are several rbd image features > > > > - l

[ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
Hi List, I read from ceph document[1] that there are several rbd image features - layering: layering support - striping: striping v2 support - exclusive-lock: exclusive locking support - object-map: object map support (requires exclusive-lock) - fast-diff: fast diff calculations
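For clusters whose images must be mappable by older kernel clients, a common approach in this era (a hedged sketch, not official guidance for every release) was to restrict the default feature set in ceph.conf so that newly created images get only layering:

```
[client]
# 1 is the bit value of the "layering" feature; images created with
# only this feature can be mapped by older krbd clients.
rbd default features = 1
```

The per-image equivalent at creation time is `rbd create --image-feature layering <pool>/<image> --size <MB>`.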

Re: [ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread Chengwei Yang
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote: > Hi All, > > > > I am trying to do a fresh install of Ceph Jewel on my cluster. I went through > all the steps in configuring the network, ssh, password, etc. Now I am at the > stage of running the ceph-deploy commands to install

Re: [ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
On Tue, Aug 02, 2016 at 11:35:01AM +0800, Chengwei Yang wrote: > Hi list, > > I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be > **changed** if you like. > > By default, if bucket ID hasn't been configured, then ceph will assign one > automa

[ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
Hi list, I'm learning the Ceph CRUSH map and know that the bucket ID is optional and can be **changed** if you like. By default, if a bucket ID hasn't been configured, ceph will assign one to the bucket automatically. When planning for a production ceph cluster, we'd like to create some
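Pinning bucket IDs explicitly looks like this in a decompiled CRUSH map (a hypothetical fragment; host and OSD names are examples, and the map is obtained with `ceph osd getcrushmap -o map.bin && crushtool -d map.bin -o map.txt`):

```
host node-a {
        id -2           # bucket ID, explicitly managed instead of auto-assigned
        alg straw
        hash 0          # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
```

Bucket IDs must be negative and unique; non-negative IDs are reserved for devices (OSDs).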

Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-08-01 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote: > > > Op 29 juli 2016 om 16:30 schreef Chengwei Yang <chengwei.yang...@gmail.com>: > > > > > > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote: > > > > > &g

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-08-01 Thread Chengwei Yang
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote: > > > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: &

Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote: > > > Op 29 juli 2016 om 13:20 schreef Chengwei Yang <chengwei.yang...@gmail.com>: > > > > > > Hi Christian, > > > > Thanks for your reply, and since I do really don't lik

[ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
in advance! On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > > > Hi list, > > > > I just followed the placement group guide to set pg_num for the rbd pool. > > > How many other pools do y

Re: [ceph-users] [RGW] how to choise the best placement groups ?

2016-07-29 Thread Chengwei Yang
Would http://ceph.com/pgcalc/ help? On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote: > Hi all, > I have a cluster consists of: 3 Monitors, 1 RGW, 1 host of 24 OSDs(2TB/OSD) > and > some pool as:  >     ap-southeast.rgw.data.root >     ap-southeast.rgw.control >    

Re: [ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
50 per OSD), thus 256 per pool. > > Again, see pgcalc. > > Christian > > Thanks. > > > > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Christian Balzer > <ch...@gol.com> > > Sent: July 29, 2016 2:47:59 &

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > > > Hi list, > > > > I just followed the placement group guide to set pg_num for the rbd pool. > > > How many other pools do you hav

[ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Chengwei Yang
Hi list, I just followed the placement group guide to set pg_num for the rbd pool. " Less than 5 OSDs set pg_num to 128 Between 5 and 10 OSDs set pg_num to 512 Between 10 and 50 OSDs set pg_num to 4096 If you have more than 50 OSDs, you need to understand the tradeoffs and how to
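The sizing guidance quoted above (and http://ceph.com/pgcalc/) boils down to a rule of thumb: target on the order of 100 PGs per OSD divided by the replica count, rounded up to a power of two. A minimal sketch, with the 100-PGs-per-OSD target as the assumption:

```shell
#!/bin/sh
# pg_num <osds> <replica-size>: next power of two >= osds * 100 / size.
pg_num() {
    osds=$1
    size=$2
    target=$(( osds * 100 / size ))
    p=1
    while [ "$p" -lt "$target" ]; do
        p=$(( p * 2 ))
    done
    echo "$p"
}

pg_num 9 3    # 9 OSDs, 3 replicas -> prints 512
```

Note that the "too many PGs per OSD" warning in the subject counts PGs from *all* pools against each OSD, so the per-pool result has to be divided across however many pools share the OSDs.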

Re: [ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
In addition, I tried `ceph auth rm`, which failed as well. ``` # ceph auth rm client.chengwei Error EINVAL: ``` -- Thanks, Chengwei On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote: > Hi list, > > I'm learning ceph and follow > http://docs.ceph.com/docs/master/rados/ope

[ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
Hi list, I'm learning ceph and following http://docs.ceph.com/docs/master/rados/operations/user-management/ to try out ceph user management. I created a user `client.chengwei`, which looks like below. ``` exported keyring for client.chengwei [client.chengwei] key =
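The truncated keyring above follows the standard format; a complete, hypothetical example with a placeholder key looks like:

```
[client.chengwei]
        key = AQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
        caps mon = "allow r"
        caps osd = "allow rw pool=rbd"
```

Keep in mind that `ceph auth caps` replaces the entire cap set in one call rather than editing individual caps, so "cleaning up" caps means restating every cap you want to keep.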

[ceph-users] Can I use ceph from a ceph node?

2016-07-06 Thread Chengwei Yang
Hi List, I just set up my first ceph demo cluster by following the step-by-step quick start guide. However, I noted that there is a FAQ http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try that says it may be problematic to use a ceph client from a ceph cluster node. Is that still the

[ceph-users] confused by ceph quick install and manual install

2016-07-01 Thread Chengwei Yang
Hi List, Sorry if this question was answered before. I'm new to ceph and following the ceph documentation to set up a ceph cluster. However, I noticed that the manual install guide says the following http://docs.ceph.com/docs/master/install/install-storage-cluster/ > Ensure your YUM ceph.repo entry
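A ceph.repo entry of the kind the manual-install guide expects looked roughly like this for Jewel on el7 (a sketch based on the documentation of that era; verify the baseurl and key against the current docs):

```
[ceph]
name=Ceph packages for x86_64
baseurl=http://download.ceph.com/rpm-jewel/el7/x86_64
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```

If you keep the `priority` line, install yum-plugin-priorities first; otherwise it is silently ignored.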