[ceph-users] Cache data consistency among multiple RGW instances

2015-01-18 Thread ZHOU Yuan
Hi list, I'm trying to understand the RGW cache consistency model. My Ceph cluster has multiple RGW instances with HAProxy as the load balancer. HAProxy chooses one RGW instance to serve each request (round-robin). The question is: if the RGW cache is enabled, which is the default behavior,
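For context, a minimal sketch of the setup being described; the section name, ports and addresses are illustrative, not from the thread:
# ceph.conf on each RGW host: metadata caching is on by default
[client.radosgw.gw1]
rgw cache enabled = true      # default
rgw cache lru size = 10000    # default number of cached entries
# haproxy.cfg: round-robin across the RGW instances
backend rgw_backend
    balance roundrobin
    server rgw1 10.0.0.1:7480 check
    server rgw2 10.0.0.2:7480 check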

Re: [ceph-users] MDS aborted after recovery and active, FAILED assert (r =0)

2015-01-18 Thread Mohd Bazli Ab Karim
Hi John, Good shot! I've increased osd_max_write_size to 1 GB (still smaller than the OSD journal size) and the MDS has now been running fine for an hour. Now checking whether the fs is still accessible or not. Will update from time to time. Thanks again John. Regards, Bazli
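For reference, the change being described amounts to something like the following in ceph.conf on the OSD hosts (the value is illustrative; osd_max_write_size is in MB, default 90):
[osd]
osd max write size = 1024
followed by a daemon restart, or injected at runtime with ceph tell osd.* injectargs '--osd_max_write_size 1024'.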

[ceph-users] rgw-agent copy file failed

2015-01-18 Thread baijia...@126.com
When I write a file named 1234% in the master region, rgw-agent sends a copy-object request containing x-amz-copy-source:nofilter_bucket_1/1234% to the replica region, and it fails with a 404 error. My analysis is that rgw-agent cannot URL-encode x-amz-copy-source:nofilter_bucket_1/1234%, but rgw could decode
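To illustrate the encoding issue (a sketch, not the actual rgw-agent request; the host name and Authorization header are placeholders): a literal '%' in an object name has to be sent percent-encoded as '%25', both in the request path and in the x-amz-copy-source header, otherwise the receiving RGW decodes it and the object lookup fails.
curl -X PUT "http://replica.example.com/nofilter_bucket_1/1234%25" \
     -H "x-amz-copy-source: nofilter_bucket_1/1234%25" \
     -H "Authorization: ..."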

Re: [ceph-users] Cache pool tiering SSD journal

2015-01-18 Thread lidc...@redhat.com
No, if you use cache tiering, there is no need to use an SSD journal as well. From: Florent MONTHEL Date: 2015-01-17 23:43 To: ceph-users Subject: [ceph-users] Cache pool tiering SSD journal Hi list, With the cache pool tiering (write-back mode) enhancement, should I keep using an SSD journal on SSD?
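For reference, a write-back cache tier is normally wired up along these lines (pool names are examples, not from the thread):
ceph osd tier add coldpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay coldpool hotpool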

Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Jiri Kanicky
Hi George, List available disks:
# ceph-deploy disk list {node-name [node-name]...}
Add an OSD using osd create:
# ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
Or you can use the manual steps to prepare and activate the disk described at
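A concrete invocation of the two ceph-deploy commands above might look like this (host and device names are examples):
# ceph-deploy disk list osdnode1
# ceph-deploy osd create osdnode1:sdb:/dev/ssd1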

Re: [ceph-users] Cache pool tiering SSD journal

2015-01-18 Thread Mark Nelson
On 01/17/2015 08:17 PM, lidc...@redhat.com wrote: No, if you use cache tiering, there is no need to use an SSD journal as well. Cache tiering and SSD journals serve somewhat different purposes. In Ceph, all of the data for every single write is written to both the journal and to the data
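For reference, the journal placement being discussed is configured per OSD in ceph.conf, roughly like this (paths, sizes and IDs are illustrative):
[osd]
osd journal size = 10240        # MB
[osd.0]
osd journal = /dev/ssd0p1       # journal partition on the SSD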

Re: [ceph-users] Giant on Centos 7 with custom cluster name

2015-01-18 Thread Jiri Kanicky
Hi, I have upgraded Firefly to Giant on Debian Wheezy and it went without any problems. Jiri On 16/01/2015 06:49, Erik McCormick wrote: Hello all, I've got an existing Firefly cluster on Centos 7 which I deployed with ceph-deploy. In the latest version of ceph-deploy, it refuses to

Re: [ceph-users] Cache pool tiering SSD journal

2015-01-18 Thread Lindsay Mathieson
On Sun, 18 Jan 2015 10:17:50 AM lidc...@redhat.com wrote: No, if you use cache tiering, there is no need to use an SSD journal as well. Really? Are writes as fast as with SSD journals?

Re: [ceph-users] two mount points, two diffrent data

2015-01-18 Thread RafaƂ Michalak
Because you are not using a cluster-aware filesystem, the respective mounts don't know when changes are made to the underlying block device (rbd) by the other mount. What you are doing *will* lead to file corruption. You need to use a distributed filesystem such as GFS2 or CephFS.
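For the CephFS route, mounting the same filesystem from both clients with the kernel client would look roughly like this (monitor address and secret file are placeholders):
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret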

Re: [ceph-users] CEPH Expansion

2015-01-18 Thread Georgios Dimitrakakis
Hi Jiri, thanks for the feedback. My main concern is whether it's better to add each OSD one by one and wait for the cluster to rebalance every time, or do it all together at once. Furthermore, an estimate of the time to rebalance would be great! Regards, George
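Not an answer from the thread, but a common way to keep a large rebalance gentle while adding OSDs is to throttle backfill and recovery, e.g.:
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
and then watch ceph -s / ceph -w to gauge how long the remaining recovery will take.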