[ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-24 Thread Mohd Bazli Ab Karim
Dear ceph-devel and ceph-users, I am currently facing an issue with my Ceph MDS server. The ceph-mds daemon will not come back up. I tried running it manually with ceph-mds -i mon01 -d, but it gets stuck at a failed assert(session) at line 1303 in mds/journal.cc and aborts. Can someone shed

Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-24 Thread wsnote
Hi, Yehuda. It doesn't matter. We have fixed it. The filename is transcoded by url_encode and decoded by url_decode. There is a bug when decoding the filename. There is another bug when decoding the filename: when radosgw-agent fails to decode a filename, file sync gets stuck and other f

Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-24 Thread wsnote
Hi, Yehuda. It doesn't matter. We have fixed it. The filename is transcoded by url_encode and decoded by url_decode. There is a bug when decoding the filename. At 2014-04-25 03:32:02, "Yehuda Sadeh" wrote: >Hi, > > sorry for the late response. I opened a ceph tracker issue for it (#8202). > >

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Christopher O'Connell
I'm going to note that I've seen this on Firefly. I don't think it necessarily needs a named release, but I can confirm that it seems to be related to radosgw. All the best, ~ Christopher On Thu, Apr 24, 2014 at 10:33 AM, Gregory Farnum wrote: > Yehuda says he's fixed several of these bugs in

[ceph-users] Copying RBD images between clusters?

2014-04-24 Thread Brian Rak
Is there a recommended way to copy an RBD image between two different clusters? My initial thought was 'rbd export - | ssh "rbd import -"', but I'm not sure if there's a more efficient way.
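A minimal sketch of that pipe approach, assuming a hypothetical image name (rbd/myimage) and remote host, and that the destination cluster has a pool of the same name:

    # stream the image over ssh; no intermediate file is written locally
    rbd export rbd/myimage - | ssh other-cluster-host "rbd import - rbd/myimage"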

Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-24 Thread Yehuda Sadeh
Hi, sorry for the late response. I opened a ceph tracker issue for it (#8202). Thanks, Yehuda On Wed, Apr 16, 2014 at 1:00 AM, wsnote wrote: > OS: CentOS 6.5 > Version: Ceph 0.67.7 or 0.79 > > Hello, everyone! > I have configured federated gateways for several. > Now I can sync files from mas

Re: [ceph-users] RBD Cloning

2014-04-24 Thread Dyweni - Ceph-Users
Great! I've just confirmed this with 3.12. However, I also see 'rbd: image test1: WARNING: kernel layering is EXPERIMENTAL!' FYI On 2014-04-24 13:07, McNamara, Bradley wrote: I believe any kernel greater than 3.9 supports format 2 RBD's. I'm sure someone will correct me if this is a miss
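For context, a hedged sketch of the format-2 clone workflow this thread is about (pool, image, and snapshot names are hypothetical):

    rbd create mypool/parent --size 10240 --image-format 2   # format 2 image, size in MB
    rbd snap create mypool/parent@base                       # snapshot to clone from
    rbd snap protect mypool/parent@base                      # clones require a protected snapshot
    rbd clone mypool/parent@base mypool/child                # copy-on-write clone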

Re: [ceph-users] OSD distribution unequally -- osd crashes

2014-04-24 Thread Craig Lewis
Your OSDs shouldn't be crashing during a remap. Although... you might try to get that one OSD below 96% first; it could have an effect. If OSDs continue to crash after that, I'd start a new thread about the crashes. If you have the option to delete some data and reload it later, I'd do t

Re: [ceph-users] RBD Cloning

2014-04-24 Thread McNamara, Bradley
I believe any kernel greater than 3.9 supports format 2 RBDs. I'm sure someone will correct me if this is a misstatement. Brad

Re: [ceph-users] cluster_network ignored

2014-04-24 Thread McNamara, Bradley
Do you have all of the cluster IPs defined in the hosts file on each OSD server? As I understand it, the mons do not use a cluster network, only the OSD servers.

Re: [ceph-users] OOM-Killer for ceph-osd

2014-04-24 Thread Andrey Korolyov
On 04/24/2014 08:14 PM, Gandalf Corvotempesta wrote: > During a recovery, I'm hitting oom-killer for ceph-osd because it's > using more than 90% of available RAM (8GB) > > How can I decrease the memory footprint during a recovery? You can reduce the PG count per OSD, for example; it scales down well
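Besides lowering the PG count, recovery and backfill concurrency can also be throttled; a hedged ceph.conf sketch using standard OSD options (the values here are illustrative, not recommendations):

    [osd]
        osd max backfills = 1         # concurrent backfills per OSD
        osd recovery max active = 1   # concurrent recovery ops per OSD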

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Gregory Farnum
Yehuda says he's fixed several of these bugs in recent code, but if you're seeing it from a recent dev release, please file a bug! Likewise if you're on a named release and would like to see a backport. :) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Apr 24, 2014 at

Re: [ceph-users] osd_recovery_max_single_start

2014-04-24 Thread Chad Seys
Hi David, Thanks for the reply. I'm a little confused by OSD versus PGs in the description of the two options osd_recovery_max_single_start and osd_recovery_max_active . The ceph webpage describes osd_recovery_max_active as "The number of active recovery requests per OSD at one time." It doe

[ceph-users] RBD Cloning

2014-04-24 Thread Dyweni - Ceph-Users
Hi, Per the docs, I see that cloning is only supported in format 2, and that the kernel rbd module does not support format 2. Is there another way to be able to mount a format 2 rbd image on a physical host without using the kernel rbd module? One idea I had (not tested) is to use rbd-fuse
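A hedged sketch of the rbd-fuse idea mentioned above (pool name and mountpoint are hypothetical):

    mkdir -p /mnt/rbdfuse
    rbd-fuse -p mypool /mnt/rbdfuse   # each image in the pool appears as a file under the mountpoint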

Re: [ceph-users] osd_recovery_max_single_start

2014-04-24 Thread David Zafman
The value of osd_recovery_max_single_start (default 5) is used in conjunction with osd_recovery_max_active (default 15). This means that a given PG will start up to 5 recovery operations at a time, out of a total of 15 operations active at a time. This allows recovery to spread operations across mo
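Written out as ceph.conf settings, using the defaults David quotes, just to show how the two options relate:

    [osd]
        osd recovery max single start = 5   # ops a single PG may start at once
        osd recovery max active = 15        # total active recovery ops per OSD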

Re: [ceph-users] cluster_network ignored

2014-04-24 Thread Gandalf Corvotempesta
2014-04-24 18:09 GMT+02:00 Peter : > Do you have a typo? : > > public_network = 192.168.0/24 > > > should this read: > > public_network = 192.168.0.0/24 Sorry, it was a typo when posting to the list. ceph.conf is correct.

[ceph-users] OOM-Killer for ceph-osd

2014-04-24 Thread Gandalf Corvotempesta
During a recovery, I'm hitting oom-killer for ceph-osd because it's using more than 90% of available RAM (8GB). How can I decrease the memory footprint during a recovery?

Re: [ceph-users] Access denied error

2014-04-24 Thread Yehuda Sadeh
On Thu, Apr 24, 2014 at 8:04 AM, Punit Dambiwal wrote: > > > Hi Yehuda, > > I am getting the following from the radosgw logs :- > > --- > 2014-04-22 09:36:00.024618 7ff16ccf6700 1 == starting new request > req=0x1ec7270 = > 2014-04-22 09:36:00.024719 7ff16ccf6700 2 r

Re: [ceph-users] ceph manual deploy doesnt start at step 15

2014-04-24 Thread *sm1Ly
Hello again. I am planning to rent 2 servers with 22 HDDs each: 2 in RAID1 for the system and 20 for testing two ways, 20 HDDs = 20 OSDs or 20 HDDs = 1 OSD. 10-15k SAS and SATA drives; I don't know the exact specification, the servers are still being prepared. Workload: a lot of reads of small files, 1-20 MB, but 1,000,000,000 +

Re: [ceph-users] cluster_network ignored

2014-04-24 Thread Peter
Do you have a typo? : public_network = 192.168.0/24 should this read: public_network = 192.168.0.0/24 On 04/24/2014 04:53 PM, Gandalf Corvotempesta wrote: I'm trying to configure a small ceph cluster with both public and cluster networks. This is my conf: [global] public_network = 192.

Re: [ceph-users] Troubles MDS

2014-04-24 Thread Gregory Farnum
On Thursday, April 24, 2014, Georg Höllrigl wrote: > > And that's exactly what it sounds like — the MDS isn't finding objects >> that are supposed to be in the RADOS cluster. >> > > I'm not sure, what I should think about that. MDS shouldn't access data > for RADOS and vice versa? The metadata

[ceph-users] cluster_network ignored

2014-04-24 Thread Gandalf Corvotempesta
I'm trying to configure a small ceph cluster with both public and cluster networks. This is my conf: [global] public_network = 192.168.0/24 cluster_network = 10.0.0.0/24 auth cluster required = cephx auth service required = cephx auth client required = cephx fsid = 004baba0-74dc-4429-8
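With the typo Peter points out above corrected, a minimal sketch of the network-related part of that [global] section:

    [global]
        public_network  = 192.168.0.0/24
        cluster_network = 10.0.0.0/24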

Re: [ceph-users] ceph manual deploy doesnt start at step 15

2014-04-24 Thread Karan Singh
It totally depends on what kind of hardware you are using in your Ceph cluster. You should always choose the right hardware for your cluster, keeping your performance needs in mind. Can you tell us about the hardware you have or are planning to purchase, and also what kind of workload your setup woul

[ceph-users] osd_recovery_max_single_start

2014-04-24 Thread Chad Seys
Hi All, What does osd_recovery_max_single_start do? I could not find a description of it. Thanks! Chad.

Re: [ceph-users] Access denied error

2014-04-24 Thread Punit Dambiwal
Hi Yehuda, I am getting the following from the radosgw logs :- --- 2014-04-22 09:36:00.024618 7ff16ccf6700 1 == starting new request req=0x1ec7270 = 2014-04-22 09:36:00.024719 7ff16ccf6700 2 req 15:0.000100::GET /admin/usage::initializing 2014-04-22 09:36:00.024731 7

Re: [ceph-users] 403 error sync objects

2014-04-24 Thread Peter
I also have this issue and there is another thread on it. radosgw-agent will sync metadata but not data. Do you have different gateway system user keys on the master and slave zones? On 04/24/2014 09:45 AM, lixuehui wrote: hi,list: I tried to sync between master zone and slave zone belong one regi

[ceph-users] radosgw-agent failed to parse

2014-04-24 Thread Peter
Hello, I am testing radosgw-agent for federation. I have two fully working clusters with master/secondary zones. When I try to run radosgw-agent, I receive the following error: root@us-master:/etc/ceph# radosgw-agent -c inter-sync.conf ERROR:root:Could not retrieve region map from destination Tr

Re: [ceph-users] Fresh Ceph Deployment

2014-04-24 Thread Srinivasa Rao Ragolu
Hi, you can try with 1 monitor node and 2 OSD nodes minimum; detailed instructions are given below. http://karan-mj.blogspot.in/2013/12/what-is-ceph-ceph-is-open-source.html Srinivas. On Thu, Apr 24, 2014 at 5:54 PM, Cedric Lemarchand wrote: > Hello, > > Le 24/04/2014 12:39, Sakhi Hadebe
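A hedged ceph-deploy sketch of that minimal 1-mon/2-OSD layout (hostnames and the data device are hypothetical; the linked post covers the details):

    ceph-deploy new mon1                                      # create initial cluster config on the admin node
    ceph-deploy install mon1 osd1 osd2                        # install ceph packages on all nodes
    ceph-deploy mon create-initial                            # bring up the monitor and gather keys
    ceph-deploy osd prepare osd1:/dev/sdb osd2:/dev/sdb       # prepare one data disk per OSD node
    ceph-deploy osd activate osd1:/dev/sdb1 osd2:/dev/sdb1    # activate the prepared partitions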

Re: [ceph-users] Fresh Ceph Deployment

2014-04-24 Thread Sakhi Hadebe
Hi Cedric Regards, Sakhi Hadebe SANReN Engineer - CSIR Meraka Institute Tel: +27 12 841 2308 Fax: +27 12 841 4223 http://www.sanren.ac.za >>> Cedric Lemarchand 4/24/2014 2:24 PM >>> Hello, On 24/04/2014 12:39, Sakhi Hadebe wrote: Hi, I am new to the storage cluster concept

Re: [ceph-users] Fresh Ceph Deployment

2014-04-24 Thread Cedric Lemarchand
Hello, On 24/04/2014 12:39, Sakhi Hadebe wrote: > > Hi, > > > I am new to the storage cluster concept. I have been tasked to test Ceph. > > > I have two DELL PE R515 machines running Ubuntu 12.04 LTS. I > understand that I need to make one the admin node where I will be able > to run all the c

Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with cache=writeback, is live migration safe ?

2014-04-24 Thread Alexandre DERUMIER
>>My recommendation would be to add that bdrv_invalidate() implementation, >>then we can be sure for raw, and get the rest fixed as well. There is a bug tracker issue about bdrv_invalidate(), closed 2 years ago: http://tracker.ceph.com/issues/2467 Can we reopen it?

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Dan van der Ster
Hi, We also get the '' pool from rgw, which is clearly a bug somewhere. But we recently learned that you can prevent it from being recreated by removing the 'x' capability on the mon from your client.radosgw.* users, for example: client.radosgw.cephrgw1 key: xxx caps: [mon] allow r
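A hedged sketch of adjusting those caps with ceph auth caps (the osd caps shown are an assumption; keep whatever your gateway user already has):

    ceph auth caps client.radosgw.cephrgw1 mon 'allow r' osd 'allow rwx'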

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Irek Fasikhov
I do not use distributed replication across zones. :) 2014-04-24 15:00 GMT+04:00 : > I dont use distributed replication across zones. > $ sudo radosgw-admin zone list > { "zones": [ > "default"]} > > -- > Regards, > Mikhail > > > On Thu, 24 Apr 2014 14:52:09 +0400 > Irek Fasikhov wrote:

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread mykr0t
I dont use distributed replication across zones. $ sudo radosgw-admin zone list { "zones": [ "default"]} -- Regards, Mikhail On Thu, 24 Apr 2014 14:52:09 +0400 Irek Fasikhov wrote: > These pools of different purposes. > > > [root@ceph01 ~]# radosgw-admin zone list > { "zones": [ >

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Irek Fasikhov
These pools of different purposes. [root@ceph01 ~]# radosgw-admin zone list { "zones": [ "default"]} [root@ceph01 ~]# radosgw-admin zone get default { "domain_root": ".rgw", "control_pool": ".rgw.control", "gc_pool": ".rgw.gc", "log_pool": ".log", "intent_log_pool": ".intent-log",

[ceph-users] Fresh Ceph Deployment

2014-04-24 Thread Sakhi Hadebe
Hi, I am new to the storage cluster concept. I have been tasked to test Ceph. I have two DELL PE R515 machines running Ubuntu 12.04 LTS. I understand that I need to make one the admin node, where I will be able to run all the commands, and the other node the server, where the OSD processes will

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread mykr0t
> You need to create a pool named ".rgw.buckets.index" I tried it before I sent a message to the list. All of my buckets have "index_pool": ".rgw.buckets". -- Regards, Mikhail On Thu, 24 Apr 2014 14:21:57 +0400 Irek Fasikhov wrote: > You need to create a pool named ".rgw.buckets.index" > > >

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Irek Fasikhov
You need to create a pool named ".rgw.buckets.index" 2014-04-24 14:05 GMT+04:00 : > Hi, > > I cant delete pool with empty name: > > $ sudo rados rmpool "" "" --yes-i-really-really-mean-it > successfully deleted pool > > but after a few seconds it is recreated automatically. > > $ sudo ceph osd
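A hedged sketch of creating that pool by hand (the pg_num of 8 only mirrors the small pools shown later in this thread and is an assumption, not a sizing recommendation):

    ceph osd pool create .rgw.buckets.index 8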

Re: [ceph-users] Pool with empty name recreated

2014-04-24 Thread Wido den Hollander
On 04/24/2014 12:05 PM, myk...@gmail.com wrote: Hi, I cant delete pool with empty name: $ sudo rados rmpool "" "" --yes-i-really-really-mean-it successfully deleted pool but after a few seconds it is recreated automatically. I have the same 'problem'. I think it's something which is done by

[ceph-users] Pool with empty name recreated

2014-04-24 Thread mykr0t
Hi, I can't delete the pool with an empty name: $ sudo rados rmpool "" "" --yes-i-really-really-mean-it successfully deleted pool but after a few seconds it is recreated automatically. $ sudo ceph osd dump | grep '^pool' pool 3 '.rgw' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8

Re: [ceph-users] Monitor Restart Consequences

2014-04-24 Thread Joao Eduardo Luis
On 04/23/2014 09:35 PM, Craig Lewis wrote: On 4/23/14 12:33 , Dyweni - Ceph-Users wrote: Hi, I'd like to know what happens to a cluster with one monitor while that one monitor process is being restarted. For example, if I have an RBD image mounted and in use (actively reading/writing) when I

Re: [ceph-users] OSD distribution unequally -- osd crashes

2014-04-24 Thread Kenneth Waegeman
- Message from Craig Lewis - Date: Fri, 18 Apr 2014 14:59:25 -0700 From: Craig Lewis Subject: Re: [ceph-users] OSD distribution unequally To: ceph-users@lists.ceph.com When you increase the number of PGs, don't just go to the max value. Step into it. You'll want to e

[ceph-users] Docs - trouble shooting mon

2014-04-24 Thread Guang
Hello, Today I read the monitor trouble shooting doc (https://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/) with this section: Scrap the monitor and create a new one You should only take this route if you are positive that you won’t lose the information kept by th

[ceph-users] 403 error sync objects

2014-04-24 Thread lixuehui
Hi, list: I tried to sync between a master zone and a slave zone belonging to one region. I stored a file "hello" in the bucket beast, and it returned an error when running radosgw-agent to sync the objects. But metadata is OK! The radosgw-agent reported this info: Thu, 24 Apr 2014 08:33:29 GMT /admi

Re: [ceph-users] ceph manual deploy doesnt start at step 15

2014-04-24 Thread *sm1Ly
Hi Karan, thanks for your cooperation. I got some help on #ceph, so now I have mon[123], mds[123] and osd[12345]. I also have one client on CentOS 6.5 with the elrepo lt kernel, which gives me a non-FUSE mount of my CephFS. I have some simple test cases, like bonnie++ and this: for i in `seq 0 256`; do t

Re: [ceph-users] Troubles MDS

2014-04-24 Thread Georg Höllrigl
And that's exactly what it sounds like — the MDS isn't finding objects that are supposed to be in the RADOS cluster. I'm not sure what I should think about that. The MDS shouldn't access data for RADOS and vice versa? Anyway, glad it fixed itself, but it sounds like you've got some infrastruc

Re: [ceph-users] Troubles MDS

2014-04-24 Thread Georg Höllrigl
Looks like you enabled directory fragments, which is buggy in ceph version 0.72. Regards Yan, Zheng If it was enabled, it wasn't intentional. So how would I disable it? Regards, Georg
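Assuming the setting in question is the standard mds bal frag option (an assumption on my part, not confirmed in this thread), a hedged ceph.conf sketch to turn it off:

    [mds]
        mds bal frag = false   # disable directory fragmentation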