Hi List,
Sorry if this question was answered before.
I'm new to ceph and am following the ceph documentation to set up a ceph cluster.
However, I noticed that the manual install guide says the following:
http://docs.ceph.com/docs/master/install/install-storage-cluster/
> Ensure your YUM ceph.repo entry incl
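For reference, here is roughly what a minimal ceph.repo stanza looks like; the release/distro in the baseurl are only an example, and the priority line only matters if yum-plugin-priorities is installed:
```
# rough sketch of /etc/yum.repos.d/ceph.repo (run as root; values are examples)
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
```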
Hi List,
I just set up my first ceph demo cluster by following the step-by-step quick
start guide.
However, I noted that there is a FAQ
http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try that says
it may be problematic to use the ceph client from a ceph cluster node.
Is that still the
Hi list,
I'm learning ceph and following
http://docs.ceph.com/docs/master/rados/operations/user-management/
to try out ceph user management.
I created a user `client.chengwei`, which looks like below.
```
exported keyring for client.chengwei
[client.chengwei]
key = AQBC1ZlXnVRgOBAA/nO03Hr1
```
In addition, I tried `ceph auth rm`, which failed as well:
```
# ceph auth rm client.chengwei
Error EINVAL:
```
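For reference, this is roughly the sequence I would expect to work; the caps below are only an example, not necessarily what I used:
```
# hypothetical caps, shown only for illustration
ceph auth get-or-create client.chengwei mon 'allow r' osd 'allow rw pool=rbd' \
    -o ceph.client.chengwei.keyring
ceph auth get client.chengwei                  # prints the exported keyring
ceph auth caps client.chengwei mon 'allow r' osd 'allow r'   # change caps later
ceph auth del client.chengwei                  # removal; older releases may only know "del", not "rm"
```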
--
Thanks,
Chengwei
On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote:
> Hi list,
>
> I'm learning ceph and follow
> http://docs.ceph.com/docs/master/rado
Hi list,
I just followed the placement group guide to set pg_num for the rbd pool (a sketch of the commands follows the quote).
"
Less than 5 OSDs set pg_num to 128
Between 5 and 10 OSDs set pg_num to 512
Between 10 and 50 OSDs set pg_num to 4096
If you have more than 50 OSDs, you need to understand the tradeoffs and how to
calc
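For my small test cluster that means something like the following; 128 is just the value the guide suggests for fewer than 5 OSDs:
```
ceph osd pool set rbd pg_num 128     # placement groups for the rbd pool
ceph osd pool set rbd pgp_num 128    # keep pgp_num in step with pg_num
ceph osd pool get rbd pg_num         # verify the new value
```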
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
>
> > Hi list,
> >
> > I just followed the placement group guide to set pg_num for the rbd pool.
> >
> How many other pools do you hav
per pool.
>
> Again, see pgcalc.
>
> Christian
> > Thanks.
> >
> > From: ceph-users on behalf of Christian Balzer
>
> > Sent: 29 July 2016 2:47:59
> > To: ceph-users
> > Subject: Re: [ceph-users] too many PGs per OSD (307 >
Would http://ceph.com/pgcalc/ help?
On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> Hi all,
> I have a cluster consisting of: 3 Monitors, 1 RGW, 1 host of 24 OSDs (2TB/OSD)
> and
> some pools such as:
> ap-southeast.rgw.data.root
> ap-southeast.rgw.control
> ap-southeast.
Thanks in advance!
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
>
> > Hi list,
> >
> > I just followed the placement group guide to set pg_num for the rbd pool.
> >
> How many other pools
On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
>
> > On 29 July 2016 at 13:20, Chengwei Yang wrote:
> >
> >
> > Hi Christian,
> >
> > Thanks for your reply, and since I really don't like the HEALTH_WARN and
> >
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
>
> > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
> >
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote:
>
> > On 29 July 2016 at 16:30, Chengwei Yang wrote:
> >
> >
> > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> > >
> > > >
Hi list,
I'm learning the Ceph CRUSH map and know that the bucket ID is optional and can be
**changed** if you like.
By default, if a bucket ID hasn't been configured, ceph will assign one to the
bucket automatically.
When thinking about how to manage a production ceph cluster, we'd like to create some
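To make the question concrete, this is roughly the workflow I have in mind; the bucket name and id below are made up:
```
ceph osd getcrushmap -o crush.bin      # dump the compiled CRUSH map
crushtool -d crush.bin -o crush.txt    # decompile it to editable text
# in crush.txt a bucket looks roughly like this (name/id are examples):
#   host node1 {
#       id -2                          # optional; ceph assigns one if omitted
#       alg straw
#       hash 0                         # rjenkins1
#       item osd.0 weight 1.000
#   }
crushtool -c crush.txt -o crush.new    # recompile after editing
ceph osd setcrushmap -i crush.new      # inject the edited map
```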
On Tue, Aug 02, 2016 at 11:35:01AM +0800, Chengwei Yang wrote:
> Hi list,
>
> I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be
> **changed** if you like.
>
> By default, if bucket ID hasn't been configured, then ceph will assign one
>
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote:
> Hi All,
>
>
>
> I am trying to do a fresh install of Ceph Jewel on my cluster. I went through
> all the steps in configuring the network, ssh, password, etc. Now I am at the
> stage of running the ceph-deploy commands to install mo
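For reference, the ceph-deploy steps at that stage usually look roughly like this; the hostnames are just placeholders:
```
ceph-deploy new mon1 mon2 mon3                            # write ceph.conf and the initial monmap
ceph-deploy install --release jewel mon1 mon2 mon3 osd1   # install the packages on each node
ceph-deploy mon create-initial                            # deploy the monitors and gather keys
ceph-deploy admin mon1 osd1                               # push ceph.conf and the admin keyring
```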
Hi List,
I read in the ceph documentation[1] that there are several rbd image features (see the sketch after the list):
- layering: layering support
- striping: striping v2 support
- exclusive-lock: exclusive locking support
- object-map: object map support (requires exclusive-lock)
- fast-diff: fast diff calculations (requir
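A minimal sketch of how these features are toggled; the image name is just an example:
```
rbd create --size 1024 --image-feature layering rbd/test   # create an image with only layering
rbd info rbd/test                                          # show the enabled features
# for an existing image created with the defaults, features can be disabled:
rbd feature disable rbd/test deep-flatten fast-diff object-map exclusive-lock
```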
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
> wrote:
> > Hi List,
> >
> > I read from ceph document[1] that there are several rbd image features
> >
> > - layering: layering support
> >
On Tue, Aug 16, 2016 at 11:18:06AM +0800, Yanjun Shen wrote:
> hi,
> when I run rbd map -p rbd test, I get an error:
> hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> rbd: sysfs write failed
> In some cases useful info is found in syslog - try "dmesg | tail" or so.
> rbd: map failed: (5) Input/output err
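When rbd map fails like that, the first things to check are usually along these lines (the pool/image names follow the post):
```
dmesg | tail          # the kernel usually logs the real reason for the EIO
rbd info rbd/test     # check the image's enabled features against what the kernel supports
rbd showmapped        # list what is already mapped
```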
On Tue, Aug 16, 2016 at 10:46:37AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang
> wrote:
> > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> >> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
> >> wrote:
> >>
On Tue, Aug 16, 2016 at 10:21:55AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen wrote:
> > hi,
> > when I run rbd map -p rbd test, I get an error:
> > hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> > rbd: sysfs write failed
> > In some cases useful info is found in syslog -
Hi list,
I found that the ceph repo has been broken these days; there is no repodata in the repo at all.
http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/
It's just empty, so how can I install the ceph rpms with yum?
As a workaround, I synced all the rpms locally and created the repodata with the
createrepo command,
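Concretely, something along these lines; the local path is just an example:
```
mkdir -p /var/local/ceph-jewel && cd /var/local/ceph-jewel
wget -r -np -nd -A '*.rpm' http://us-east.ceph.com/rpm-jewel/el7/x86_64/   # fetch the rpms
createrepo .                                                               # generate repodata locally
# then point a ceph.repo entry at baseurl=file:///var/local/ceph-jewel
```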
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote:
>
> > On 23 September 2016 at 5:59, Chengwei Yang wrote:
> >
> >
> > Hi list,
> >
> > I found that ceph repo is broken these days, no any repodata in the repo at
> >