Re: [ceph-users] Radosgw index has been inconsistent with reality

2018-10-18 Thread Yang Yang
…why it happens and how to avoid it. Yehuda Sadeh-Weinraub wrote on Fri, Oct 19, 2018 at 2:25 AM: > On Wed, Oct 17, 2018 at 1:14 AM Yang Yang wrote: > > > > Hi, > > A few weeks ago I found the radosgw index has been inconsistent with > reality. Some objects I cannot list, but I can get them by…

[ceph-users] RadosGW multipart completion is already in progress

2018-10-18 Thread Yang Yang
Hi, I copy some big files to radosgw with awscli, but I found some copies fail, like: aws s3 --endpoint=XXX cp ./bigfile s3://mybucket/bigfile upload failed: ./bigfile to s3://mybucket/bigfile An error occurred (InternalError) when calling the CompleteMultipartUpload operation…

[ceph-users] Radosgw index has been inconsistent with reality

2018-10-17 Thread Yang Yang
Hi, A few weeks ago I found the radosgw index has been inconsistent with reality. Some objects I cannot list, but I can get them by key. Please see the details below: BACKGROUND: Ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable). Index pool is on SSD. Th…
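A hedged first diagnostic for this kind of index drift, with a hypothetical bucket name:
```
# compare the bucket index against the objects it should reference
radosgw-admin bucket check --bucket=mybucket --check-objects
# re-run with --fix to rebuild the index if the report confirms drift
radosgw-admin bucket check --bucket=mybucket --check-objects --fix
```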

Re: [ceph-users] cephfs compression?

2018-06-29 Thread Youzhong Yang
Thanks Richard. Yes, it seems to be working, per perf dump: osd.6 "bluestore_compressed": 62622444, "bluestore_compressed_allocated": 186777600, "bluestore_compressed_original": 373555200. It's very interesting that bluestore_compressed_allocated is approximately half of bluestore_compressed_original…
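Those counters come from the OSD admin socket; a minimal sketch of pulling them (run against whichever OSD you are inspecting):
```
# run on the host where osd.6 lives; reads its admin socket
ceph daemon osd.6 perf dump | grep bluestore_compressed
```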

[ceph-users] cephfs compression?

2018-06-28 Thread Youzhong Yang
For RGW, compression works very well. We use rgw to store crash dumps; in most cases the compression ratio is about 2.0 ~ 4.0. I tried to enable compression for the cephfs data pool: # ceph osd pool get cephfs_data all | grep ^compression compression_mode: force compression_algorithm: lz4 compression…
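For reference, a hedged sketch of how those pool options are usually set (pool name taken from the message above; lz4 must be compiled into your build):
```
ceph osd pool set cephfs_data compression_mode force
ceph osd pool set cephfs_data compression_algorithm lz4
```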

Re: [ceph-users] How to make nfs v3 work? nfs-ganesha for cephfs

2018-06-27 Thread Youzhong Yang
…, but > only subdirectories due to limitations in the inode size) > > > Paul > > 2018-06-26 18:13 GMT+02:00 Youzhong Yang: > >> NFS v4 works like a charm, no issue for Linux clients, but when trying to >> mount on MAC OS X client, it doesn't work - likely due to…

[ceph-users] How to make nfs v3 work? nfs-ganesha for cephfs

2018-06-26 Thread Youzhong Yang
NFS v4 works like a charm, no issue for Linux clients, but when trying to mount on a Mac OS X client it doesn't work, likely due to 'mountd' not being registered in rpc by ganesha when it comes to v4. So I tried to set up v3, no luck: # mount -t nfs -o rw,noatime,vers=3 ceph-dev:/ceph /mnt/ceph mount.n…
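For reference, a minimal nfs-ganesha export sketch that advertises both v3 and v4, assuming the CephFS FSAL; the paths and export id are hypothetical, and v3 additionally needs rpcbind running on the ganesha host:
```
EXPORT {
    Export_Id = 1;
    Path = "/";          # CephFS subtree to export
    Pseudo = "/ceph";    # NFSv4 pseudo-fs location
    Access_Type = RW;
    Protocols = 3, 4;    # advertise v3 so mountd gets registered
    Transports = TCP;
    FSAL {
        Name = CEPH;
    }
}
```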

Re: [ceph-users] fstrim issue in VM for cloned rbd image with fast-diff feature

2018-05-09 Thread Youzhong Yang
Thanks. On Wed, May 9, 2018 at 11:52 AM, Jason Dillaman wrote: > On Wed, May 9, 2018 at 11:39 AM, Youzhong Yang wrote: > > This is what I did: > > > > # rbd import /var/tmp/debian93-raw.img images/debian93 > > # rbd info images/debian93 > > rbd image 'debian93'…

[ceph-users] fstrim issue in VM for cloned rbd image with fast-diff feature

2018-05-09 Thread Youzhong Yang
This is what I did: # rbd import /var/tmp/debian93-raw.img images/debian93 # rbd info images/debian93 rbd image 'debian93': size 81920 MB in 20480 objects order 22 (4096 kB objects) block_name_prefix: rbd_data.384b74b0dc51 format: 2 features: layering, exclusive-lock, object-map, fast-diff, deep-flatten…

[ceph-users] _setup_block_symlink_or_file failed to create block symlink to spdk:5780A001A5KD: (17) File exists

2018-04-27 Thread Yang, Liang
Hi, I am enabling SPDK on Ceph and got the error below. Could someone help me? Thank you very much. 1. SPDK code will be compiled by default: if(CMAKE_SYSTEM_PROCESSOR MATCHES "i386|i686|amd64|x86_64|AMD64|aarch64") option(WITH_SPDK "Enable SPDK" ON) else() option(WITH_SPDK "Enable SPD…

Re: [ceph-users] How to deploy ceph with spdk step by step?

2018-04-27 Thread Yang, Liang
Hi Nathan Cutler, Orlando Moreno, Loic Dachary and Sage Weil, I am enabling SPDK on Ceph but failed. My steps are listed below. Could you help check whether all the steps are right? And…

[ceph-users] how to make spdk enable on ceph

2018-04-26 Thread Yang, Liang
Hi, I have run src/spdk/setup.sh as below: [root@ceph-rep-05 ceph-ansible]# ../ceph/src/spdk/scripts/setup.sh 0005:01:00.0 (1179 010e): nvme -> vfio-pci I cannot find /dev/uio. How can I deploy an OSD on the NVMe SSD? Thank you very much.
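For what it's worth, the BlueStore side of SPDK is usually wired up in ceph.conf by device serial number rather than a /dev node; a hedged sketch (the serial is the one from the symlink error in the earlier thread, the OSD id is hypothetical):
```
[osd.0]
# have BlueStore drive the NVMe device through SPDK, keyed by its serial
bluestore_block_path = spdk:5780A001A5KD
```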

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-21 Thread Youzhong Yang
…ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of Youzhong Yang > Sent: 21 January 2018 19:50 > To: Brad Hubbard > Cc: ceph-users > Subject: Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = > random OS hang ? > > > > As someone sugge…

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-21 Thread Youzhong Yang
…On Sat, Jan 20, 2018 at 7:03 PM, Brad Hubbard wrote: > On Fri, Jan 19, 2018 at 11:54 PM, Youzhong Yang > wrote: > > I don't think it's a hardware issue. All the hosts are VMs. By the way, > using > > the same set of VMWare hypervisors, I switched back to Ubuntu…

[ceph-users] RGW compression causing issue for ElasticSearch

2018-01-20 Thread Youzhong Yang
I enabled compression by a command like this: radosgw-admin zone placement modify --rgw-zone=coredumps --placement-id=default-placement --compression=zlib Then once the object was uploaded, elasticsearch kept dumping the following messages: [2018-01-20T23:13:43,587][DEBUG][o.e.a.b.TransportShard…

Re: [ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread Youzhong Yang
I don't think it's a hardware issue. All the hosts are VMs. By the way, using the same set of VMWare hypervisors, I switched back to Ubuntu 16.04 last night; so far so good, no freeze. On Fri, Jan 19, 2018 at 8:50 AM, Daniel Baumann wrote: > Hi, > > On 01/19/18 14:46,…

[ceph-users] Ubuntu 17.10 or Debian 9.3 + Luminous = random OS hang ?

2018-01-19 Thread Youzhong Yang
One month ago when I first started evaluating ceph, I chose Debian 9.3 as the operating system. I saw random OS hangs so I gave up and switched to Ubuntu 16.04. Everything works well using Ubuntu 16.04. Yesterday I tried Ubuntu 17.10, and again I saw random OS hangs, no matter whether it's a mon, mgr, osd, or rgw…

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Youzhong Yang
…issue? I understand it may sound obvious to experienced users ... http://ceph.com/rgw/new-luminous-rgw-metadata-search/ Thanks a lot. On Tue, Jan 16, 2018 at 3:59 PM, Yehuda Sadeh-Weinraub wrote: > On Tue, Jan 16, 2018 at 12:20 PM, Youzhong Yang > wrote: > > Hi Yehuda, > >…

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-16 Thread Youzhong Yang
…'re seeing there don't look like related to > elasticsearch. It's a generic radosgw related error that says that it > failed to reach the rados (ceph) backend. You can try bumping up the > messenger log (debug ms = 1) and see if there's any hint in there. > > Yehuda…

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-15 Thread Youzhong Yang
…gw related error that says that it > failed to reach the rados (ceph) backend. You can try bumping up the > messenger log (debug ms = 1) and see if there's any hint in there. > > Yehuda > > On Fri, Jan 12, 2018 at 12:54 PM, Youzhong Yang > wrote: > > So I did the…

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-12 Thread Youzhong Yang
So I did the exact same thing using Kraken and the same set of VMs, no issue. What is the magic to make it work in Luminous? Anyone lucky enough to have this RGW ElasticSearch working using Luminous? On Mon, Jan 8, 2018 at 10:26 AM, Youzhong Yang wrote: > Hi Yehuda, > > Thanks for…

Re: [ceph-users] Luminous RGW Metadata Search

2018-01-08 Thread Youzhong Yang
Hi Yehuda, Thanks for replying. > radosgw failed to connect to your ceph cluster. Does the rados command > with the same connection params work? I am not quite sure what to do when running the rados command to test. So I tried again, could you please take a look and check what could have gone wrong? H…
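What the quoted suggestion amounts to is running the plain rados CLI with the same cephx identity the gateway uses; a hedged sketch, with hypothetical key name and keyring path:
```
# if this fails, the problem is the cluster connection, not rgw itself
rados --id rgw.ceph-rgw2 \
      --keyring /etc/ceph/ceph.client.rgw.ceph-rgw2.keyring \
      lspools
```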

[ceph-users] rbd map failed when ms_public_type=async+rdma

2017-12-26 Thread Yang, Liang
Hi all, rbd map fails when ms_public_type=async+rdma, and the network of the ceph cluster is blocked. Is this caused by the kernel rbd client not supporting RDMA?

[ceph-users] Luminous RGW Metadata Search

2017-12-22 Thread Youzhong Yang
I followed the exact steps of the following page: http://ceph.com/rgw/new-luminous-rgw-metadata-search/ "us-east-1" zone is serviced by host "ceph-rgw1" on port 8000, no issue, the service runs successfully. "us-east-es" zone is serviced by host "ceph-rgw2" on port 8002, the service was unable to…
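The elasticsearch tier zone in that post is created along these lines (a hedged sketch following the blog's shape; the endpoint mirrors the message above, the elasticsearch host is a placeholder):
```
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
    --endpoints=http://ceph-rgw2:8002 --tier-type=elasticsearch
radosgw-admin zone modify --rgw-zone=us-east-es \
    --tier-config=endpoint=http://elastic-host:9200
```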

[ceph-users] radosgw: Couldn't init storage provider (RADOS)

2017-12-18 Thread Youzhong Yang
Hello, I tried to install Ceph 12.2.2 (Luminous) on Ubuntu 16.04.3 LTS (kernel 4.4.0-104-generic), but I am having trouble starting the radosgw service: # systemctl status ceph-radosgw@rgw.ceph-rgw1 ● ceph-radosgw@rgw.ceph-rgw1.service - Ceph rados gateway Loaded: loaded (/lib/systemd/system/ceph-…

[ceph-users] Anybody worked with collectd and Luminous build? help please

2017-07-24 Thread Yang X
The perf counter dump added an "avgtime" field which the collectd-5.7.2 ceph plugin does not understand; it puts out a warning and exits: ceph plugin: ds %s was not properly initialized. Does anybody know of a patch to collectd that might help?

Re: [ceph-users] Help! Access ceph cluster from multiple networks?

2017-07-21 Thread Yang X
separated" multiple public network work. And also I wonder if it is possible to have the monitor daemon to listen on multiple interfaces. such as: mon host = 153.64.109.25,10.25.3.8 Your help in this will be much appreciated!! Sincerely, Yang __

Re: [ceph-users] Modification Time of RBD Images

2017-03-28 Thread Dongsheng Yang
…to the bottom of the backlog. [1] https://trello.com/b/ugTc2QFH/ceph-backlog Yes, agree, let's focus on the other higher-priority backlog items now. Thanx On Fri, Mar 24, 2017 at 3:27 AM, Dongsheng Yang wrote: Hi Jason, do you think this is a good feature for rbd? maybe we can implement…

Re: [ceph-users] Questions on rbd-mirror

2017-03-27 Thread Dongsheng Yang
Jason, do you think it's a good idea to introduce an rbd_config object to record some per-pool configurations, such as default_features? That means we could set some configurations differently in different pools. In this way, we can also handle the per-pool settings in rbd-mirror. Thanx

Re: [ceph-users] RBD image perf counters: usage, access

2017-03-27 Thread Dongsheng Yang
On 03/27/2017 04:06 PM, Masha Atakova wrote: Hi Yang, Hi Masha, thank you for your reply. It is very useful indeed that there are many ImageCtx objects for one image. But in my setting, I don't have any particular ceph client connected to ceph (I could, but this is not the…

Re: [ceph-users] Questions on rbd-mirror

2017-03-27 Thread Dongsheng Yang
…ng mirroring) "per-pool default features" sounds like a reasonable feature request. About "ceph auth" for mirroring, I am working on an rbd ACL design and will consider pool-level, namespace-level and image-level. Then I think we can…

Re: [ceph-users] RBD image perf counters: usage, access

2017-03-26 Thread Dongsheng Yang
ard": 0, "discard_bytes": 0, "discard_latency": { "omap_rd": 0, But, note that, this is a counter of this one ImageCtx, but not the counter for this image. There are possible several ImageCtxes reading or writing on the same image. Ya

Re: [ceph-users] Modification Time of RBD Images

2017-03-24 Thread Dongsheng Yang
Hi Jason, do you think this is a good feature for rbd? Maybe we can implement an "rbd stat" command to show the atime, mtime and ctime of an image. Yang On 03/23/2017 08:36 PM, Christoph Adomeit wrote: Hi, no I did not enable the journaling feature since we do not use mirroring…

Re: [ceph-users] Modification Time of RBD Images

2017-03-23 Thread Dongsheng Yang
Did you enable the journaling feature? On 03/23/2017 07:44 PM, Christoph Adomeit wrote: Hi Yang, I mean "any write" to this image. I am sure we have a lot of not-used-anymore rbd images in our pool and I am trying to identify them. The mtime would be a good hint to show which images…

Re: [ceph-users] Modification Time of RBD Images

2017-03-23 Thread Dongsheng Yang
On 03/23/2017 07:32 PM, Dongsheng Yang wrote: Hi Christoph, On 03/23/2017 07:16 PM, Christoph Adomeit wrote: Hello List, I am wondering if there is by now an easy method in ceph to find more information about rbd-images. For example I am interested in the modification time of an rbd…

Re: [ceph-users] Modification Time of RBD Images

2017-03-23 Thread Dongsheng Yang
…such as resize? Or any write to this image? Thanx Yang I found some posts from 2015 that say we have to go over all the objects of an rbd image and find the newest mtime, but this is not a preferred solution for me. It takes too much time and too many system resources. Any ideas? Thanks…

[ceph-users] Fwd: rbd: show the snapshot tree

2017-03-20 Thread Dongsheng Yang
…Subject: rbd: show the snapshot tree Date: Fri, 17 Mar 2017 11:50:17 +0800 From: Dongsheng Yang To: 'Ceph Users' CC: jason, chenhanx...@gmail.com Hi guys, there is an idea about showing the snapshots of an image in a tree view, as what vmware is doing in the screensho…

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-16 Thread Dongsheng Yang
BTW, is there anybody using EnhanceIO? On 02/15/2017 05:51 PM, Dongsheng Yang wrote: thanx Nick, Gregory and Wido. So at least we can say the cache tiering in Jewel is stable enough, I think. I like cache tiering more than the others, but yes, there is a problem with cache tiering in…

Re: [ceph-users] bcache vs flashcache vs cache tiering

2017-02-15 Thread Dongsheng Yang
…Guys: is there any plan to enhance cache tiering to solve such problems? Or, as Nick asked, is cache tiering fading away? Yang On 15/02/2017, 06:42, Nick Fisk wrote: -Original Message- From: Gregory Farnum [mailto:gfar...@redhat.com] Sent: 14 February 2017 21:05 To: Wido den…

[ceph-users] bcache vs flashcache vs cache tiering

2017-02-14 Thread Dongsheng Yang
…Thanks in advance. Yang

Re: [ceph-users] rgw: how to prevent rgw user from creating a new bucket?

2016-12-04 Thread Yang Joseph
Thank you very much for your response. I'm confused about what this cap relates to. On 12/03/2016 12:13 AM, Yehuda Sadeh-Weinraub wrote: On Fri, Dec 2, 2016 at 3:18 AM, Yang Joseph wrote: Hello, I would like only to allow the user to read the objects in an already existing bucket, and not…

[ceph-users] rgw: how to prevent rgw user from creating a new bucket?

2016-12-02 Thread Yang Joseph
"type": "buckets", "perm": "read" } But why user test3 can still create new bucket after I have set its caps to "buckets=read"? thx, Yang Honggang ___ ceph-users mailing li

Re: [ceph-users] Ceph repo is broken, no repodata at all

2016-09-26 Thread Chengwei Yang
On Fri, Sep 23, 2016 at 09:31:46AM +0200, Wido den Hollander wrote: > > > On 23 September 2016 at 5:59, Chengwei Yang > > wrote: > > > > > > Hi list, > > > > I found that the ceph repo is broken these days, no repodata in the repo at > >…

[ceph-users] deploy ceph cluster in containers

2016-09-25 Thread yang
Hi cephers, I want to run a ceph cluster in docker, including MONs, OSDs, RGWs and maybe MDS. I do it with the guide https://github.com/ceph/ceph-docker/tree/master/ceph-releases/hammer/ubuntu/14.04/daemon , and it runs well on a single node without a KV store. That is, every component runs as one c…

[ceph-users] Ceph repo is broken, no repodata at all

2016-09-22 Thread Chengwei Yang
Hi list, I found that the ceph repo is broken these days, with no repodata in the repo at all. http://us-east.ceph.com/rpm-jewel/el7/x86_64/repodata/ is just empty, so how can I install ceph rpms from yum? As a workaround I synced all the rpms locally and created repodata with the createrepo command,…

Re: [ceph-users] ceph map error

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:21:55AM +0200, Ilya Dryomov wrote: > On Tue, Aug 16, 2016 at 5:18 AM, Yanjun Shen wrote: > > hi, > > when I run rbd map -p rbd test, error: > > hdu@ceph-mon2:~$ sudo rbd map -p rbd test > > rbd: sysfs write failed > > In some cases useful info is found in syslog -…

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-16 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 10:46:37AM +0200, Ilya Dryomov wrote: > On Tue, Aug 16, 2016 at 4:06 AM, Chengwei Yang > wrote: > > On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > >> On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang > >> wrote: > >>…

Re: [ceph-users] ceph map error

2016-08-15 Thread Chengwei Yang
On Tue, Aug 16, 2016 at 11:18:06AM +0800, Yanjun Shen wrote: > hi, > when I run rbd map -p rbd test, error: > hdu@ceph-mon2:~$ sudo rbd map -p rbd test > rbd: sysfs write failed > In some cases useful info is found in syslog - try "dmesg | tail" or so. > rbd: map failed: (5) Input/output error…

Re: [ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
On Mon, Aug 15, 2016 at 03:27:50PM +0200, Ilya Dryomov wrote: > On Mon, Aug 15, 2016 at 9:54 AM, Chengwei Yang > wrote: > > Hi List, > > > > I read from the ceph document[1] that there are several rbd image features > > > > - layering: layering support…

[ceph-users] rbd image features supported by which kernel version?

2016-08-15 Thread Chengwei Yang
Hi List, I read from the ceph document[1] that there are several rbd image features:
- layering: layering support
- striping: striping v2 support
- exclusive-lock: exclusive locking support
- object-map: object map support (requires exclusive-lock)
- fast-diff: fast diff calculations (requires object-map)…
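As background, and hedged since it is not from the thread itself: older kernel rbd clients understand only a subset of these features, so a common workaround before mapping an image with krbd is to strip the unsupported ones (image name hypothetical):
```
# leave only layering, which even old krbd releases support
rbd feature disable rbd/myimage deep-flatten fast-diff object-map exclusive-lock
```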

Re: [ceph-users] Ceph-deploy on Jewel error

2016-08-03 Thread Chengwei Yang
On Thu, Aug 04, 2016 at 12:20:01AM +, EP Komarla wrote: > Hi All, > > > > I am trying to do a fresh install of Ceph Jewel on my cluster. I went through > all the steps in configuring the network, ssh, password, etc. Now I am at the > stage of running the ceph-deploy commands to install mo…

Re: [ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
On Tue, Aug 02, 2016 at 11:35:01AM +0800, Chengwei Yang wrote: > Hi list, > > I'm learning Ceph CRUSH map and know that the bucket ID is optional and can be > **changed** if you like. > > By default, if bucket ID hasn't been configured, then ceph will assign one…

[ceph-users] Should I manage bucket ID myself?

2016-08-01 Thread Chengwei Yang
Hi list, I'm learning the Ceph CRUSH map and know that the bucket ID is optional and can be **changed** if you like. By default, if a bucket ID hasn't been configured, ceph will assign one automatically to the bucket. When considering how to manage a production ceph cluster, we'd like to create some…

Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-07-31 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 05:36:16PM +0200, Wido den Hollander wrote: > > > On 29 July 2016 at 16:30, Chengwei Yang wrote: > > > > On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote: > > >…

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-31 Thread Chengwei Yang
On Mon, Aug 01, 2016 at 10:37:27AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 16:20:03 +0800 Chengwei Yang wrote: > > > On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > > > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote:…

Re: [ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 01:48:43PM +0200, Wido den Hollander wrote: > > > On 29 July 2016 at 13:20, Chengwei Yang wrote: > > > > Hi Christian, > > > > Thanks for your reply, and since I really don't like the HEALTH_WARN and…

[ceph-users] Can I remove rbd pool and re-create it?

2016-07-29 Thread Chengwei Yang
…Thanks in advance! On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > > > Hi list, > > > > I just followed the placement group guide to set pg_num for the rbd pool. > > > How many other pools…

Re: [ceph-users] [RGW] how to choise the best placement groups ?

2016-07-29 Thread Chengwei Yang
Would http://ceph.com/pgcalc/ help? On Mon, Jul 18, 2016 at 01:27:38PM +0700, Khang Nguyễn Nhật wrote: > Hi all, > I have a cluster consisting of 3 Monitors, 1 RGW, 1 host of 24 OSDs (2TB/OSD) and > some pools such as: > ap-southeast.rgw.data.root > ap-southeast.rgw.control > ap-southeast.…

Re: [ceph-users] Re: Re: too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
…per pool. > > Again, see pgcalc. > > Christian > > Thanks. > > > > From: ceph-users on behalf of Christian Balzer > > Sent: 29 July 2016 2:47:59 > > To: ceph-users > > Subject: Re: [ceph-users] too many PGs per OSD (307 >…

Re: [ceph-users] too many PGs per OSD (307 > max 300)

2016-07-29 Thread Chengwei Yang
On Fri, Jul 29, 2016 at 11:47:59AM +0900, Christian Balzer wrote: > On Fri, 29 Jul 2016 09:59:38 +0800 Chengwei Yang wrote: > > > Hi list, > > > > I just followed the placement group guide to set pg_num for the rbd pool. > > > How many other pools do you have…

[ceph-users] too many PGs per OSD (307 > max 300)

2016-07-28 Thread Chengwei Yang
Hi list, I just followed the placement group guide to set pg_num for the rbd pool:
" Less than 5 OSDs: set pg_num to 128
Between 5 and 10 OSDs: set pg_num to 512
Between 10 and 50 OSDs: set pg_num to 4096
If you have more than 50 OSDs, you need to understand the tradeoffs and how to calc…
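For a small cluster the pgcalc arithmetic comes down to roughly (OSDs x 100) / replica size, rounded to a power of two and then divided across pools; a hedged sketch with illustrative numbers:
```
# e.g. 8 OSDs, size 3: 8 * 100 / 3 ~ 267 -> budget ~256 PGs across all pools
ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128
```
Note that on releases of this era pg_num can be raised but never lowered, which is what makes the warning in the subject awkward to undo.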

Re: [ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
In addition, I tried `ceph auth rm`, which failed as well.
```
# ceph auth rm client.chengwei
Error EINVAL:
```
-- Thanks, Chengwei On Thu, Jul 28, 2016 at 06:23:09PM +0800, Chengwei Yang wrote: > Hi list, > > I'm learning ceph and following > http://docs.ceph.com/docs/master/rado…

[ceph-users] ceph auth caps failed to cleanup user's cap

2016-07-28 Thread Chengwei Yang
Hi list, I'm learning ceph and following http://docs.ceph.com/docs/master/rados/operations/user-management/ to experience ceph user management. I created a user `client.chengwei` which looks like below.
```
exported keyring for client.chengwei
[client.chengwei]
    key = AQBC1ZlXnVRgOBAA/nO03Hr1…
```
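For context, `ceph auth caps` replaces the entire cap set rather than editing a single entry, so changes are made by restating everything; a sketch with illustrative cap strings:
```
# restate all caps in one call; anything omitted here is dropped
ceph auth caps client.chengwei mon 'allow r' osd 'allow rw pool=rbd'
```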

[ceph-users] Can I use ceph from a ceph node?

2016-07-06 Thread Chengwei Yang
Hi List, I just set up my first ceph demo cluster by following the step-by-step quick start guide. However, I noted there is a FAQ http://tracker.ceph.com/projects/ceph/wiki/How_Can_I_Give_Ceph_a_Try that says it may be problematic to use a ceph client from a ceph cluster node. Is that still the…

[ceph-users] confused by ceph quick install and manual install

2016-07-01 Thread Chengwei Yang
Hi List, sorry if this question was answered before. I'm new to ceph and following the ceph documentation to set up a ceph cluster. However, I noticed that the manual install guide says the following: http://docs.ceph.com/docs/master/install/install-storage-cluster/ > Ensure your YUM ceph.repo entry incl…

[ceph-users] Can I modify ak/sk?

2016-06-29 Thread yang
Hello everyone, when I want to modify the access_key using the following cmd: radosgw-admin user modify --uid=user --access_key="userak" I got: { "user_id": "user", "display_name": "User name", "email": "", "suspended": 0, "max_buckets": 1000, "auid": 0, "subusers": []…
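As an aside, hedged since it goes beyond the thread: key changes are normally done through the key subcommands rather than user modify; a sketch reusing the uid and access key from the message, with placeholder secrets:
```
# add the new S3 key pair explicitly
radosgw-admin key create --uid=user --key-type=s3 \
    --access-key=userak --secret=NEWSECRET
# then drop the old access key
radosgw-admin key rm --uid=user --access-key=OLDKEY
```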

Re: [ceph-users] Ceph for online file storage

2016-06-26 Thread yang
…use rados directly. Best Regards, yang -- Original -- From: "m.da...@bluewin.ch"; Date: Mon, Jun 27, 2016 02:30 AM To: "ceph-users"; Subject: [ceph-users] Ceph for online file storage Hi all, after a quick review of the mailing li…

[ceph-users] Use of legacy bobtail tunables and potential performance impact to "jewel"?

2016-06-22 Thread Yang X
…ffect data placement, or would it actually not find a mapping for certain objects and thus cause errors? Thanks in advance, Yang
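A hedged way to check what a cluster currently requires of its clients is to inspect the active CRUSH tunables profile:
```
# prints the active tunables and the feature flags old clients must support
ceph osd crush show-tunables
```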

[ceph-users] What's the minimal version of "ceph" client side the current "jewel" release would support?

2016-05-12 Thread Yang X
See title. We have Firefly on the client side (SLES11SP3) and it does not seem to work well with the "jewel" server nodes (CentOS 7). Can somebody please provide some guidelines? Thanks, Yang

[ceph-users] Permission problem when "ceph-disk activate" with ceph-osd-10.2.0-0.el7.x86_64

2016-05-06 Thread Yang X
Hi, I am following the documentation on how to prepare and activate ceph-disk and ran into the following problem: command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 8 --monmap /var/lib/ceph/tmp/mnt.RxRUd8/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.RxRUd8…

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
…information by persons or entities other than the intended recipient > is prohibited. If you received this in error, please contact the sender and > destroy any copies of this information. > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of…

Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
…have to detach and re-attach the volume on server B.) On Mon, May 2, 2016 at 9:34 AM, Sean Redmond wrote: > Hi, > > You could set the below to create ephemeral disks as RBDs: > > [libvirt] > > libvirt_images_type = rbd > > > On Mon, May 2, 2016 at 2:28 PM, yan…

[ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread yang sheng
Hi, I am using ceph infernalis. It works fine with my openstack liberty. I am trying to test nova evacuate. All the VMs' volumes are shared among all compute nodes; however, the instance files (/var/lib/nova/instances) are in each compute node's local storage. Based on redhat docs ( https://acce…

[ceph-users] how ceph mon works

2016-04-26 Thread yang sheng
Hi, according to the ceph docs it recommends at least 3 monitors. All the clients will contact the monitors first to get the ceph map and then connect to the osd. I am curious: if I have 3 monitors, do these monitors run in master-master mode or master-slave mode? In other words, will clients talk to any o…
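For the record: monitors elect a single Paxos leader and the rest act as peons, but clients may contact any monitor in the quorum. A quick way to see the current roles:
```
# shows quorum membership and which mon is the elected leader
ceph quorum_status --format json-pretty
```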

[ceph-users] 1 pg stuck

2016-03-24 Thread yang sheng
Hi all, I am testing ceph right now using 4 servers with 8 OSDs (all OSDs are up and in). I have 3 pools in my cluster (image pool, volume pool and the default rbd pool); both the image and volume pools have replication size 3. Based on the pg equation, there are 448 pgs in my cluster. $ ceph osd tre…

Re: [ceph-users] How many mds node that ceph need.

2016-03-24 Thread yang
Hi Mika, by default a single MDS is used unless you set max_mds to a value bigger than 1. Creating more than one MDS is okay, as the extra ones will by default simply become standbys. All the standby MDSes can be down as long as the master MDS (up:active) works. Thanks, yang
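A hedged sketch of the knob mentioned above, assuming a filesystem named cephfs and a reasonably recent release:
```
# allow two active MDS daemons; the rest remain standbys
ceph fs set cephfs max_mds 2
```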

Re: [ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-23 Thread yang
…e you are using, you should / have to stick to the user root. Everything else might cause trouble." You mean root is the best practice? Is there any document about this, or some details? Best Regards, yang -- Original -- From: "Oliver Dzombic"; D…

[ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-23 Thread yang
Can anyone help me? -- Original -- From: "yang"; Date: Wed, Mar 23, 2016 11:30 AM To: "ceph-users"; Subject: root and non-root user for ceph/ceph-deploy Hi everyone, in my ceph cluster, first I deployed ceph using ceph-deploy with…

[ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-22 Thread yang
…g any more. And after I umount the OSD and re-deploy the cluster, the old OSD is still displayed in my new cluster. My other question is: what's the difference between the root and non-root user for ceph/ceph-deploy? ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) yang, T…

Re: [ceph-users] Why my cluster performance is so bad?

2016-02-25 Thread yang
Thanks for your suggestion, I have re-tested my cluster and the result is much better. Regards, Yang -- Original -- From: "Christian Balzer"; Date: Tue, Feb 23, 2016 09:49 PM To: "ceph-users"; Cc: "yang"; Subject: Re: [ceph-users] Wh…

[ceph-users] Why my cluster performance is so bad?

2016-02-23 Thread yang
My ceph cluster config: 7 nodes (including 3 mons, 3 mds), 9 SATA HDDs in every node, each HDD as an OSD & journal (deployed by ceph-deploy). CPU: 32 cores. Mem: 64GB. Public network: 1Gb x2 bond0, cluster network: 1Gb x2 bond0. The bandwidth is 109910 KB/s for 1M reads and 34329 KB/s for 1M writes. Why is i…

Re: [ceph-users] how to monit ceph bandwidth?

2016-02-02 Thread yang
Yes, it is monitor, sorry for the misspelling. -- Original -- From: "Shinobu Kinjo"; Date: Feb 3, 2016 To: "yang"; Cc: "ceph-users"; Subject: Re: [ceph-users] how to monit ceph bandwidth? monit? probably monitor??

[ceph-users] how to monit ceph bandwidth?

2016-02-02 Thread yang
…know how to monitor the IO performance of those clients. Furthermore, is it better to separate read bandwidth and write bandwidth? Does the current version of ceph support this? Thanks, yang
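One built-in option, hedged since it reports per pool rather than per client: pool stats already split read and write rates:
```
# prints client I/O per pool with rd and wr bandwidth broken out
ceph osd pool stats
```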

Re: [ceph-users] Urgent help needed for ceph storage "mount error 5= Input/output error"

2016-02-02 Thread yang
You can try ceph daemon mds.host session evict to kill it off. -- Original -- From: "Zhao Xu"; Date: Feb 3, 2016 To: "Goncalo Borges"; Cc: "ceph-users@lists.ceph.com"; Subject: Re: [ceph-users] Urgent help needed for ceph storage "mount error 5 = Input/ou…

[ceph-users] How to configure placement_targets?

2016-01-11 Thread Yang Honggang
…'bj:fast-placement') // Create bucket 'mmm-2' in placement target 'default-placement' bucket = conn.create_bucket('mmm-2', location='bj:default-placement') thx joseph On 01/07/2016 09:04 PM, Yang Honggang wrote: Hello, How to configure p…

[ceph-users] How to configure placement_targets?

2016-01-07 Thread Yang Honggang
Hello, how to configure placement_targets? Which step is wrong in my following steps? I want to use different pools to hold users' buckets. Two pools are created, one is '.bj-dz.rgw.buckets', the other is '.bj-dz.rgw.buckets.hot'. 1. Two placement targets are added to the region map. Targets…
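The client side of this, as the reply entry above shows, is passing '<region>:<placement-target>' as the bucket location; a hedged boto sketch with placeholder credentials and endpoint:
```python
import boto
import boto.s3.connection

# connect to the RGW endpoint; keys and host are hypothetical
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# 'bj:fast-placement' selects the placement target, as in the reply above
bucket = conn.create_bucket('mmm-1', location='bj:fast-placement')
```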

Re: [ceph-users] Long peering - throttle at FileStore::queue_transactions

2016-01-05 Thread Guang Yang
On Mon, Jan 4, 2016 at 7:21 PM, Sage Weil wrote: > On Mon, 4 Jan 2016, Guang Yang wrote: >> Hi Cephers, >> Happy New Year! I have a question regarding the long PG peering. >> >> Over the last several days I have been looking into the *long peering* >> problem when…

Re: [ceph-users] How to run multiple RadosGW instances under the same zone

2016-01-04 Thread Yang Honggang
…Tuesday, January 05, 2016 10:07 AM To: Yang Honggang Cc: Srinivasula Maram; ceph-us...@ceph.com; Javen Wu Subject: Re: [ceph-users] How to run multiple RadosGW instances under the same zone It works fine. The federated config reference is not related to running multiple instances on the s…

Re: [ceph-users] How to run multiple RadosGW instances under the same zone

2016-01-04 Thread Yang Honggang
…y for load balancing and failover. Thanks, Srinivas From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Joseph Yang Sent: Monday, January 04, 2016 2:09 PM To: ceph-us...@ceph.com; Joseph Yang Subject: [ceph-users] How to run multiple RadosGW instances under the same zone

[ceph-users] Long peering - throttle at FileStore::queue_transactions

2016-01-04 Thread Guang Yang
Hi Cephers, Happy New Year! I have a question regarding long PG peering. Over the last several days I have been looking into the *long peering* problem when we start an OSD / OSD host; what I observed was that the two peering worker threads were throttled (stuck) when trying to queue new transa…

[ceph-users] How to run multiple RadosGW instances under the same zone

2016-01-04 Thread Joseph Yang
Hello, how do I run multiple RadosGW instances under the same zone? Assume there are two hosts, HOST_1 and HOST_2. I want to run two RadosGW instances on these two hosts for my zone ZONE_MULI, so that when one of the radosgw instances is down, I can still access the zone. There are some questions: 1. Ho…
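A hedged sketch of what the replies in this thread point toward: each host gets its own rgw client section referencing the same zone, fronted by an external load balancer for failover (names mirror the message above; the frontend line is illustrative):
```
[client.rgw.host1]
host = HOST_1
rgw_zone = ZONE_MULI
rgw_frontends = civetweb port=8000

[client.rgw.host2]
host = HOST_2
rgw_zone = ZONE_MULI
rgw_frontends = civetweb port=8000
```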

[ceph-users] Journal size when using ceph-deploy to add a new osd

2015-03-05 Thread Alexander Yang
Hello everyone, recently I have a doubt about the ceph osd journal. I use ceph-deploy (version 1.4.0) to add a new osd, and my ceph version is 0.80.5. /dev/sdb is a SATA disk and /dev/sdk is an SSD disk; the sdk1 partition size is 50G. ceph-deploy osd prepare host1:/dev/sdb1:/de…

[ceph-users] Re: Re: can not add osd

2014-12-21 Thread yang . bin18
Hi, I have deployed the ceph osd according to the official Ceph docs, and the same error came out again. From: Karan Singh To: yang.bi...@zte.com.cn Cc: ceph-users Date: 2014/12/16 22:51 Subject: Re: [ceph-users] can not add osd Hi, your logs do not provide much information; if y…

Re: [ceph-users] 'rbd list' stuck

2014-12-17 Thread yang . bin18
The cluster state must be wrong, but how do I recover? root@node3 ceph-cluster]# ceph -w cluster 1365f2dd-b86c-436c-a64f-3318a937f3c2 health HEALTH_WARN 64 pgs incomplete; 64 pgs stale; 64 pgs stuck inactive; 64 pgs stuck stale; 64 pgs stuck unclean; 8 requests are blocked > 32 sec m…
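For a state like that, a hedged first pass at narrowing down which PGs are wedged and why:
```
# list the affected PGs and the OSDs they map to
ceph health detail
ceph pg dump_stuck inactive
# then interrogate one of the listed PGs directly (pg id is a placeholder)
ceph pg 0.1f query
```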

[ceph-users] 'rbd list' stuck

2014-12-17 Thread yang . bin18
Hi, why does the command 'rbd list', executed on the monitor, get stuck? Any hint would be appreciated!
Backtrace:
[] futex_wait_queue_me+0xde/0x140
[] futex_wait+0x179/0x280
[] do_futex+0xfe/0x5e0
[] SyS_futex+0x80/0x180
[] system_call_fastpath+0x16/0x1b
[] 0x…
Best Regards! YangBin

Re: [ceph-users] can not add osd

2014-12-16 Thread yang . bin18
Following the official Ceph docs, I still get the same error: [root@node3 ceph-cluster]# ceph-deploy osd activate node2:/dev/sdb1 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.21): /usr/bin/ceph-deploy osd activate node2:/dev/sdb1 [ceph…

[ceph-users] can not add osd

2014-12-15 Thread yang . bin18
Hi, when I execute "ceph-deploy osd prepare node3:/dev/sdb", an error like this always comes out: [node3][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.u2KXW3 [node3][WARNIN] umount: /var/lib/ceph/tmp/mnt.u2KXW3: target is busy. Then I execute "/bin/umount -- /var/lib/c…
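A hedged aside for "target is busy": before retrying the prepare, it usually helps to see what is still holding the temp mount (path taken from the log above):
```
# list the processes keeping the mount point busy
fuser -vm /var/lib/ceph/tmp/mnt.u2KXW3
```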

[ceph-users] can not add osd

2014-12-10 Thread yang . bin18
Hi, I am new to ceph. When the mon starts up, ceph -s shows "no monitors specified to connect to. Error connecting to cluster: ObjectNotFound" (even on the mon itself). What may be the reason?

Re: [ceph-users] experimental features

2014-12-08 Thread Fred Yang
You will have to consider that in the real world whoever built the cluster might not document the dangerous option to make support staff or successors aware. Thus any experimental feature considered not safe for production should be included in a warning message in 'ceph health' and in the logs, either logged p…

[ceph-users] Negative number of objects degraded for extended period of time

2014-11-13 Thread Fred Yang
Hi, the Ceph cluster we are running had a few OSDs approaching 95% a week-plus ago, so I ran a reweight to balance it out, in the meantime instructing the application to purge data not required. But after a large amount of data purging was issued from the application side (all OSDs' usage dropped below 20%), the cl…
