On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote:
>
> > On 23 September 2016 at 05:59, Chengwei Yang
> > <chengwei.yang...@gmail.com> wrote:
> >
> >
> > Hi list,
> >
> > I found that ceph repo is broken these days, no any r
Hi list,
I found that the Ceph repo has been broken these days; there is no repodata in the repo at all.
http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/
It's just empty, so how can I install the Ceph RPMs with yum?
As a workaround, I synced all the RPMs to a local directory and created the repodata with the
createrepo command.
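For anyone hitting the same breakage, here is a minimal sketch of that workaround; the local directory, the use of wget for mirroring, and the file:// baseurl are my own assumptions rather than details from the original mail:

```
# Mirror the Jewel RPMs from the (currently broken) repo into a local directory
wget -r -np -nH --cut-dirs=2 -A '*.rpm' \
    -P /srv/ceph-jewel-el7 \
    http://us-east.ceph.com/rpm-jewel/el7/x86_64/

# Regenerate the missing repodata locally
createrepo /srv/ceph-jewel-el7

# Point a yum .repo entry at baseurl=file:///srv/ceph-jewel-el7 and install as usual
```

Once createrepo has written repodata/, yum can install from the file:// baseurl until the upstream mirror is fixed.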
On Tue, Aug 16, 2016 at 10:21:55AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen wrote:
> > hi,
> > when I run `rbd map -p rbd test`, I get the following error:
> > hdu@ceph-mon2:~$ sudo rbd map -p rbd test
> > rbd: sysfs write failed
> > In some cases useful info is found in syslog - try "dmesg | tail" or so.
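The usual cause of this error on Jewel is an image created with features the kernel rbd client does not yet support. A sketch of how that is typically diagnosed and worked around; the image spec comes from the command above, and which features actually need to go depends on the kernel version:

```
# See why the kernel refused the map
dmesg | tail

# List the features the image was created with
rbd info rbd/test

# Disable the features krbd cannot handle (layering stays enabled)
rbd feature disable rbd/test deep-flatten fast-diff object-map exclusive-lock

# Try the map again
sudo rbd map -p rbd test
```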
On Tue, Aug 16, 2016 at 10:46:37AM +0200, Ilya Dryomov wrote:
> On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang
> <chengwei.yang...@gmail.com> wrote:
> > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> >> On Mon, Aug 15, 2016 at 9:54 AM, Chengw
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote:
> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang
> <chengwei.yang...@gmail.com> wrote:
> > Hi List,
> >
> > I read from ceph document[1] that there are several rbd image features
> >
> > - l
Hi List,
I read in the Ceph documentation[1] that there are several rbd image features:
- layering: layering support
- striping: striping v2 support
- exclusive-lock: exclusive locking support
- object-map: object map support (requires exclusive-lock)
- fast-diff: fast diff calculations (requires object-map)
- deep-flatten: snapshot flatten support
- journaling: journaled IO support (requires exclusive-lock)
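If an image is meant to be mapped with the kernel client, a common approach (a sketch, not something from the original mail) is to create it with only the features you need, or to set a conservative default in ceph.conf:

```
# Create an image with only the layering feature enabled
rbd create --size 1024 --image-feature layering rbd/test-image

# Confirm which features ended up on the image
rbd info rbd/test-image

# Alternatively, in ceph.conf under [client]:
#   rbd default features = 1    # bitmask; 1 == layering only
```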
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote:
> Hi All,
>
>
>
> I am trying to do a fresh install of Ceph Jewel on my cluster. I went through
> all the steps in configuring the network, ssh, password, etc. Now I am at the
> stage of running the ceph-deploy commands to install
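For reference, a rough sketch of the ceph-deploy sequence at that stage; the host names and device paths are hypothetical, since the original mail does not show the exact commands:

```
# From the admin node: define the cluster with its initial monitor
ceph-deploy new mon1

# Install the Jewel packages on the target hosts
ceph-deploy install --release jewel mon1 osd1 osd2 osd3

# Bring up the initial monitor(s) and gather the keys
ceph-deploy mon create-initial

# Prepare and activate one OSD per host (data device shown, journal colocated)
ceph-deploy osd prepare osd1:/dev/sdb osd2:/dev/sdb osd3:/dev/sdb
ceph-deploy osd activate osd1:/dev/sdb1 osd2:/dev/sdb1 osd3:/dev/sdb1
```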
On Tue, Aug 02, 2016 at 11:35:01AM +0800, Chengwei Yang wrote:
> Hi list,
>
> I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be
> **changed** if you like.
>
> By default, if bucket ID hasn't been configured, then ceph will assign one
> automa
Hi list,
I'm learning the Ceph CRUSH map and know that the bucket ID is optional and can be
**changed** if you like.
By default, if a bucket ID hasn't been configured, Ceph will assign one
automatically to the bucket.
When thinking about how to manage a production Ceph cluster, we'd like to create some
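A sketch of how an explicit bucket ID is usually pinned down, assuming the offline decompile/edit/recompile workflow; the file names and the example host bucket are mine, not from this thread:

```
# Pull the current CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# In crushmap.txt, give the bucket a fixed (negative) ID, for example:
#   host ceph-osd-01 {
#       id -101              # chosen by us instead of the auto-assigned ID
#       alg straw
#       hash 0
#       item osd.0 weight 1.000
#   }

# Recompile the edited map and inject it back into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```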
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote:
>
> > On 29 July 2016 at 16:30, Chengwei Yang <chengwei.yang...@gmail.com> wrote:
> >
> >
> > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
> > >
> > >
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote:
>
> > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote:
>
> > On 29 July 2016 at 13:20, Chengwei Yang <chengwei.yang...@gmail.com> wrote:
> >
> >
> > Hi Christian,
> >
> > Thanks for your reply, and since I really don't lik
in advance!
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
>
> > Hi list,
> >
> > I just followed the placement group guide to set pg_num for the rbd pool.
> >
> How many other pools do y
Would http://ceph.com/pgcalc/ help?
On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote:
> Hi all,
> I have a cluster consisting of 3 monitors, 1 RGW, and 1 host of 24 OSDs (2 TB/OSD), and
> some pools such as:
> ap-southeast.rgw.data.root
> ap-southeast.rgw.control
>
50 per OSD), thus 256 per pool.
>
> Again, see pgcalc.
>
> Christian
> > Thanks.
> >
> > From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Christian Balzer
> <ch...@gol.com>
> > Sent: 29 July 2016 2:47:59
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote:
> On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:
>
> > Hi list,
> >
> > I just followed the placement group guide to set pg_num for the rbd pool.
> >
> How many other pools do you hav
Hi list,
I just followed the placement group guide to set pg_num for the rbd pool.
"
Less than 5 OSDs set pg_num to 128
Between 5 and 10 OSDs set pg_num to 512
Between 10 and 50 OSDs set pg_num to 4096
If you have more than 50 OSDs, you need to understand the tradeoffs and how to
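As a concrete reading of that guidance, here is a sketch of the usual rule of thumb and the commands that apply it; the OSD count and the final value below are made-up numbers, so substitute your own (or just use pgcalc):

```
# Rule of thumb: total PGs ~ (number of OSDs * 100) / replica count,
# rounded up to the next power of two, then split across your pools.
# Example: 20 OSDs, size 3 -> 20 * 100 / 3 ~ 667 -> 1024 PGs in total.

# Apply the chosen value to the rbd pool (pgp_num should follow pg_num)
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512
```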
In addition, I tried `ceph auth rm`, but that failed as well.
```
# ceph auth rm client.chengwei
Error EINVAL:
```
--
Thanks,
Chengwei
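For what it's worth, the removal subcommand I know of on Jewel is `ceph auth del`; whether that is the actual problem here is an assumption on my part, but it is cheap to try:

```
# List the existing keys to confirm the entity name
ceph auth list

# Remove the user (Jewel-era spelling of the subcommand)
ceph auth del client.chengwei
```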
On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote:
> Hi list,
>
> I'm learning ceph and follow
> http://docs.ceph.com/docs/master/rados/ope
Hi list,
I'm learning Ceph and following
http://docs.ceph.com/docs/master/rados/operations/user-management/
to try out Ceph user management.
I created a user `client.chengwei`, which looks like below.
```
exported keyring for client.chengwei
[client.chengwei]
key =
```
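In case it helps to reproduce, a sketch of how such a user is typically created per that page; the capabilities shown are only an example and may not match the ones used in the original mail:

```
# Create (or fetch) the user with example caps and write its keyring to a file
ceph auth get-or-create client.chengwei \
    mon 'allow r' \
    osd 'allow rw pool=rbd' \
    -o /etc/ceph/ceph.client.chengwei.keyring

# Inspect the result; it should match the keyring shown above
ceph auth get client.chengwei
```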
Hi List,
I just set up my first Ceph demo cluster by following the step-by-step quick
start guide.
However, I noted that there is a FAQ,
http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try, that says
it may be problematic to use a Ceph client from a Ceph cluster node.
Is that still the case?
Hi List,
Sorry if this question was answered before.
I'm new to Ceph and following the Ceph documentation to set up a Ceph cluster.
However, I noticed that the manual install guide says the following:
http://docs.ceph.com/docs/master/install/install-storage-cluster/
> Ensure your YUM ceph.repo entry
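For reference, the repo entry that guide asks for looks roughly like the snippet below, adapted from memory for Jewel on el7, so double-check it against the linked page before using it:

```
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
```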