[ceph-users] How to move OSD from 1TB disk to 2TB disk

2015-09-19 Thread wsnote
I know another way: mark the 1TB osd out, bring up the 2TB disk as osd.X
without data, and Ceph will backfill the data onto the 2TB disk.
For now I have used rsync to move the data from the 1TB disk to the 2TB disk,
but the new osd coredumps.
What's the problem?


ceph version:0.80.1
osd.X 
host1 with 1TB disks
host2 with 2TB disks


on host1:
osd.X down
ceph-osd -i X --flush-journal
rsync -av /data/osd/osd.X/ root@host2:/data/osd/osd.X/
on host2:
vim ceph.conf
ceph-osd -i X --mkjournal
ceph-osd -i X
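
A note on the rsync step: FileStore keeps object metadata in extended attributes, and a plain `rsync -av` does not copy xattrs; `-aX` (and `-A` for ACLs) would be needed for a faithful copy. If copying keeps failing, the backfill approach mentioned at the top is safer. A rough sketch of that approach as a generated command plan (strings only, nothing is executed here; the sequence is the commonly documented Firefly-era OSD replacement recipe, so verify each step against your version's documentation):

```python
def replace_osd_plan(osd_id, new_host="host2", weight_tb=2.0):
    """Build the usual command sequence for retiring an OSD on the old disk
    and recreating it on the new disk, letting Ceph backfill the data."""
    osd = "osd.%d" % osd_id
    return [
        "ceph osd out %d" % osd_id,                # start draining PGs off it
        "service ceph stop %s" % osd,              # stop the daemon on host1
        "ceph osd crush remove %s" % osd,          # drop it from the CRUSH map
        "ceph auth del %s" % osd,                  # remove its key
        "ceph osd rm %d" % osd_id,                 # delete the OSD id
        "ceph osd create",                         # may hand back a new id
        "ceph-osd -i %d --mkfs --mkkey" % osd_id,  # init the new data dir
        "ceph osd crush add %s %.1f host=%s" % (osd, weight_tb, new_host),
        "service ceph start %s" % osd,             # backfill begins on its own
    ]

for cmd in replace_osd_plan(29):
    print(cmd)
```

Generating the plan as data first makes it easy to review before touching a live cluster.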


then osd.X coredumps
osd log:
-1> 2015-09-19 14:52:22.371149 7f008cd007a0 0 osd.29 416 load_pgs
0> 2015-09-19 14:52:22.378677 7f008cd007a0 -1 osd/PG.cc: In function 'static 
epoch_t PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
ceph::bufferlist*)' thread 7f008cd007a0 time 2015-09-19 14:52:22.377569
osd/PG.cc: 2559: FAILED assert(r > 0)

ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
1: (PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
ceph::buffer::list*)+0x48d) [0x7fa4ad]
2: (OSD::load_pgs()+0x18f1) [0x63c771]
3: (OSD::init()+0x22b0) [0x6550e0]
4: (main()+0x359e) [0x5f931e]
5: (__libc_start_main()+0xfd) [0x3073c1ed5d]
6: ceph-osd() [0x5f59c9]

coredump:
(gdb) bt

#0  0x00307400f5db in raise () from /lib64/libpthread.so.0
#1  0x009ab7f4 in ?? ()
#2  <signal handler called>
#3  0x003073c32635 in raise () from /lib64/libc.so.6
#4  0x003073c33e15 in abort () from /lib64/libc.so.6
#5  0x003b4febea7d in __gnu_cxx::__verbose_terminate_handler() () from 
/usr/lib64/libstdc++.so.6
#6  0x003b4febcbd6 in ?? () from /usr/lib64/libstdc++.so.6
#7  0x003b4febcc03 in std::terminate() () from /usr/lib64/libstdc++.so.6
#8  0x003b4febcd22 in __cxa_throw () from /usr/lib64/libstdc++.so.6
#9  0x00aec612 in ceph::__ceph_assert_fail(char const*, char const*, int, 
char const*) ()
#10 0x007fa4ad in PG::peek_map_epoch(ObjectStore*, coll_t, hobject_t&, 
ceph::buffer::list*) ()
#11 0x0063c771 in OSD::load_pgs() ()
#12 0x006550e0 in OSD::init() ()
#13 0x005f931e in main ()





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] ceph can't recognize ext4 extended attributes when --mkfs --mkkey

2015-03-03 Thread wsnote
ceph version 0.80.1
System: CentOS 6.5


[root@dn1 osd.6]# mount
/dev/sde1 on /cache4 type ext4 (rw,noatime,user_xattr) —— osd.6
/dev/sdf1 on /cache5 type ext4 (rw,noatime,user_xattr) —— osd.7
/dev/sdg1 on /cache6 type ext4 (rw,noatime,user_xattr) —— osd.8
/dev/sdh1 on /cache7 type ext4 (rw,noatime,user_xattr) —— osd.9
/dev/sdi1 on /cache8 type ext4 (rw,noatime,user_xattr) —— osd.10
/dev/sdj1 on /cache9 type ext4 (rw,noatime,user_xattr) —— osd.11


[root@dn1 osd.6]# ceph-osd -i 6 --mkfs --mkkey
2015-03-03 15:52:12.156548 7fba6de2b7a0 -1 journal FileJournal::_open: 
disabling aio for non-block journal.  Use journal_force_aio to force use of aio 
anyway
2015-03-03 15:52:12.468304 7fba6de2b7a0 -1 filestore(/cache4/osd.6) Extended 
attributes don't appear to work. Got error (95) Operation not supported. If you 
are using ext3 or ext4, be sure to mount the underlying file system with the 
'user_xattr' option.
2015-03-03 15:52:12.468367 7fba6de2b7a0 -1 filestore(/cache4/osd.6) 
FileStore::mount : error in _detect_fs: (95) Operation not supported
2015-03-03 15:52:12.468387 7fba6de2b7a0 -1 OSD::mkfs: couldn't mount 
ObjectStore: error -95
2015-03-03 15:52:12.468470 7fba6de2b7a0 -1  ** ERROR: error creating empty 
object store in /cache4/osd.6: (95) Operation not supported
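
What the failing _detect_fs check boils down to can be reproduced with a small probe (an illustration, not Ceph's actual code). Since the mounts already list user_xattr, it's worth confirming that the option is actually in effect on the mounted path; a remount is needed after editing fstab:

```python
import os
import tempfile

def fs_supports_user_xattr(directory):
    """Probe whether user.* extended attributes work in a directory,
    roughly what FileStore's _detect_fs verifies before mounting."""
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        os.setxattr(path, "user.ceph-probe", b"1")
        return os.getxattr(path, "user.ceph-probe") == b"1"
    except (OSError, AttributeError):
        # errno 95 (ENOTSUP) is the "Operation not supported" seen above
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(fs_supports_user_xattr(tempfile.gettempdir()))
```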




[root@dn1 osd.6]# tail -f /var/log/ceph/osd.6.log
2015-03-03 15:52:11.770484 7fba6de2b7a0  0 ceph version 0.80.1 
(a38fe1169b6d2ac98b427334c12d7cf81f809b74), process ceph-osd, pid 30336
2015-03-03 15:52:12.156548 7fba6de2b7a0 -1 journal FileJournal::_open: 
disabling aio for non-block journal.  Use journal_force_aio to force use of aio 
anyway
2015-03-03 15:52:12.224362 7fba6de2b7a0  0 filestore(/cache4/osd.6) mkjournal 
created journal on /cache4/osd.6/journal
2015-03-03 15:52:12.274706 7fba6de2b7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: FIEMAP ioctl is 
supported and appears to work
2015-03-03 15:52:12.274733 7fba6de2b7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: FIEMAP ioctl is 
disabled via 'filestore fiemap' config option
2015-03-03 15:52:12.468181 7fba6de2b7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: syscall(SYS_syncfs, fd) 
fully supported
2015-03-03 15:52:12.468304 7fba6de2b7a0 -1 filestore(/cache4/osd.6) Extended 
attributes don't appear to work. Got error (95) Operation not supported. If you 
are using ext3 or ext4, be sure to mount the underlying file system with the 
'user_xattr' option.
2015-03-03 15:52:12.468367 7fba6de2b7a0 -1 filestore(/cache4/osd.6) 
FileStore::mount : error in _detect_fs: (95) Operation not supported
2015-03-03 15:52:12.468387 7fba6de2b7a0 -1 OSD::mkfs: couldn't mount 
ObjectStore: error -95
2015-03-03 15:52:12.468470 7fba6de2b7a0 -1  ** ERROR: error creating empty 
object store in /cache4/osd.6: (95) Operation not supported


Thanks!


Re: [ceph-users] backfill_toofull, but OSDs not full

2015-03-03 Thread wsnote
ceph 0.80.1 
The same question.
I have deleted 1/4 of the data, but the problem didn't disappear.
Does anyone have another way to solve it?


At 2015-01-10 05:31:30,Udo Lembke ulem...@polarzone.de wrote:
Hi,
I had a similar effect two weeks ago - one PG was backfill_toofull, and
although reweighting and deleting had freed enough space, the rebuild
process stopped after a while.

After stopping and starting ceph on the second node, the rebuild process ran
without trouble and the backfill_toofull states were gone.

This happens with firefly.

Udo

On 09.01.2015 21:29, c3 wrote:
 In this case the root cause was half denied reservations.

 http://tracker.ceph.com/issues/9626

 This stopped backfills since those listed as backfilling were
 actually half denied and doing nothing. The toofull status is not
 checked until a free backfill slot opens up, so everything was just stuck.

 Interestingly, the toofull was created by other backfills which were
 not stopped.
 http://tracker.ceph.com/issues/9594

 Quite the log jam to clear.


 Quoting Craig Lewis cle...@centraldesktop.com:

 What was the osd_backfill_full_ratio?  That's the config that controls
 backfill_toofull.  By default, it's 85%.  The mon_osd_*_ratio affect the
 ceph status.

 I've noticed that it takes a while for backfilling to restart after
 changing osd_backfill_full_ratio.  Backfilling usually restarts for
 me in
 10-15 minutes.  Some PGs will stay in that state until the cluster is
 nearly done recovering.

 I've only seen backfill_toofull happen after the OSD exceeds the
 ratio (so
 it's reactive, not proactive).  Mine usually happens when I'm
 rebalancing a
 nearfull cluster, and an OSD backfills itself toofull.




 On Mon, Jan 5, 2015 at 11:32 AM, c3 ceph-us...@lopkop.com wrote:

 Hi,

 I am wondering how a PG gets marked backfill_toofull.

 I reweighted several OSDs using ceph osd crush reweight. As
 expected, PGs
 began moving around (backfilling).

 Some PGs got marked +backfilling (~10), some +wait_backfill (~100).

 But some are marked +backfill_toofull. My OSDs are between 25% and 72%
 full.

 Looking at ceph pg dump, I can find the backfill_toofull PGs and
 verified
 the OSDs involved are less than 72% full.

 Do backfill reservations include a size? Are these OSDs projected to be
 toofull once the current backfilling completes? Some of the
 backfill_toofull and backfilling point to the same OSDs.

 I did adjust the full ratios, but that did not change the
 backfill_toofull
 status.
 ceph tell mon.\* injectargs '--mon_osd_full_ratio 0.95'
 ceph tell osd.\* injectargs '--osd_backfill_full_ratio 0.92'
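
The threshold those injectargs lines adjust can be illustrated with the projection Ceph effectively makes (simplified arithmetic only; the real reservation logic in 0.80 is more involved, and as noted above the check is reactive):

```python
def backfill_toofull(used_bytes, capacity_bytes, incoming_bytes,
                     osd_backfill_full_ratio=0.85):
    """A backfill target is refused when its projected fill ratio would
    exceed osd_backfill_full_ratio (default 0.85)."""
    return (used_bytes + incoming_bytes) / capacity_bytes > osd_backfill_full_ratio

TB, GB = 10**12, 10**9
# A 72%-full 4 TB OSD asked to absorb another 600 GB of backfill data:
print(backfill_toofull(0.72 * 4 * TB, 4 * TB, 600 * GB))  # 0.87 > 0.85 -> True
# The same OSD at 25% full stays well under the ratio:
print(backfill_toofull(0.25 * 4 * TB, 4 * TB, 600 * GB))  # 0.40 -> False
```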





[ceph-users] OSDs can't start after server restart

2015-02-01 Thread wsnote
Ceph Version: 0.80.1
Server Number: 4
OSD Number: 6 disks per server
None of the OSDs on one server can start after that server restarted, but the
other 3 servers' OSDs can.
---
ceph -s:
[root@dn1 ~]# ceph -s
cluster 73ceed62-9a53-414b-95dd-61f802251df4
 health HEALTH_WARN 65 pgs stale; 65 pgs stuck stale; 51 requests are 
blocked > 32 sec; pool .rgw.buckets has too few pgs; clock skew detected on 
mon.1, mon.2
 monmap e1: 3 mons at 
{0=172.16.0.166:6789/0,1=172.16.0.167:6789/0,2=172.16.0.168:6789/0}, election 
epoch 628, quorum 0,1,2 0,1,2
 osdmap e513: 24 osds: 18 up, 18 in
  pgmap v707911: 7424 pgs, 14 pools, 513 GB data, 310 kobjects
9321 GB used, 6338 GB / 16498 GB avail
  65 stale+active+clean
7359 active+clean

[root@dn1 ~]# cat /var/log/ceph/osd.6.log
2015-02-02 10:08:11.534384 7f771012f7a0  0 ceph version 0.80.1 
(a38fe1169b6d2ac98b427334c12d7cf81f809b74), process ceph-osd, pid 22691
2015-02-02 10:08:11.986865 7f771012f7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: FIEMAP ioctl is 
supported and appears to work
2015-02-02 10:08:11.986910 7f771012f7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: FIEMAP ioctl is 
disabled via 'filestore fiemap' config option
2015-02-02 10:08:12.612637 7f771012f7a0  0 
genericfilestorebackend(/cache4/osd.6) detect_features: syscall(SYS_syncfs, fd) 
fully supported
2015-02-02 10:08:12.612824 7f771012f7a0 -1 filestore(/cache4/osd.6) Extended 
attributes don't appear to work. Got error (95) Operation not supported. If you 
are using ext3 or ext4, be sure to mount the underlying file system with the 
'user_xattr' option.
2015-02-02 10:08:12.612942 7f771012f7a0 -1 filestore(/cache4/osd.6) 
FileStore::mount : error in _detect_fs: (95) Operation not supported
2015-02-02 10:08:12.612964 7f771012f7a0 -1  ** ERROR: error converting store 
/cache4/osd.6: (95) Operation not supported
---
[root@dn1 ~]# mount
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/sde1 on /cache4 type ext4 (rw,noatime,user_xattr)
/dev/sdf1 on /cache5 type ext4 (rw,noatime,user_xattr)
/dev/sdg1 on /cache6 type ext4 (rw,noatime,user_xattr)
/dev/sdh1 on /cache7 type ext4 (rw,noatime,user_xattr)
/dev/sdi1 on /cache8 type ext4 (rw,noatime,user_xattr)
/dev/sdj1 on /cache9 type ext4 (rw,noatime,user_xattr)
# Other 3 server's disks also use ext4 with rw,noatime,user_xattr.


What's the possible reason?


[ceph-users] How to improve performance of a ceph object storage cluster

2014-06-25 Thread wsnote
OS: CentOS 6.5
Version: Ceph 0.79


Hi, everybody!
I have installed a ceph cluster with 10 servers.
I tested the throughput of the ceph cluster within the same datacenter.
Uploading 1GB files from one or several servers to one or several servers, the
total throughput is about 30MB/s.
That is to say, for upload throughput there is no difference between one server
and the whole cluster.
How can I optimize the performance of ceph object storage?
Thanks!



Info about the ceph cluster:
4 MONs on the first 4 nodes of the cluster.
11 OSDs per server, 109 OSDs in total (one disk was bad).
4TB per disk, 391TB in total (109*4 = 436TB raw; where did the missing 45TB go?).
1 RGW on each server, 10 RGWs in total. That is to say, I can use the S3 API on
each server.
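
Part of the "missing" 45 TB is probably just units: disks are sold in decimal TB (10^12 bytes) while `ceph -s` reports binary TiB under the label "TB" (an assumption about the reporting; the remainder would be filesystem and journal overhead):

```python
def tb_to_tib(tb):
    """Convert marketing terabytes (10**12 bytes) to TiB (2**40 bytes)."""
    return tb * 10**12 / 2**40

raw = tb_to_tib(109 * 4)  # 109 disks x 4 TB = 436 decimal TB
print(round(raw, 1))      # -> 396.5, already much closer to the 391 shown
```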


ceph.conf:
[global]
auth supported = none


;auth_service_required = cephx
;auth_client_required = cephx
;auth_cluster_required = cephx
filestore_xattr_use_omap = true


max open files = 131072
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid
keyring = /etc/ceph/keyring.admin

mon_clock_drift_allowed = 2 ;clock skew detected


[mon]
mon data = /data/mon$id
keyring = /etc/ceph/keyring.$name
[osd]
osd data = /data/osd$id
osd journal = /data/osd$id/journal
osd journal size = 1024;
keyring = /etc/ceph/keyring.$name
osd mkfs type = xfs
osd mount options xfs = rw,noatime
osd mkfs options xfs = -f


[client.radosgw.cn-bj-1]
rgw region = cn
rgw region root pool = .cn.rgw.root
rgw zone = cn-bj
rgw zone root pool = .cn-wz.rgw.root
host = yun168
public_addr = 192.168.10.115
rgw dns name = s3.domain.com
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw socket path = /var/run/ceph/$name.sock
log file = /var/log/ceph/radosgw.log
debug rgw = 20
rgw print continue = true
rgw should log = true








[root@yun168 ~]# ceph -s
cluster e48b0d5b-ff08-4a8e-88aa-4acd3f5a6204
 health HEALTH_OK
 monmap e7: 4 mons at {... ...  ...}, election epoch 78, quorum 0,1,2,3 
0,1,2,3
 mdsmap e49: 0/0/1 up
 osdmap e3722: 109 osds: 109 up, 109 in
  pgmap v106768: 29432 pgs, 19 pools, 12775 GB data, 12786 kobjects
640 GB used, 390 TB / 391 TB avail
   29432 active+clean
  client io 1734 kB/s rd, 29755 kB/s wr, 443 op/s





Re: [ceph-users] what is the recommended configuration for a ceph cluster with 10 servers without memory leaks?

2014-06-18 Thread wsnote
 acting 
[903,404]
pg 1.fb7b is stuck inactive since forever, current state creating, last acting 
[505,803,303]
pg 0.fb7a is stuck inactive since forever, current state 
stale+remapped+peering, last acting [1003]
pg 2.fb7b is stuck inactive since forever, current state creating, last acting 
[303,505,802]
pg 1.fb78 is stuck inactive since forever, current state creating, last acting 
[803,303,905]
pg 0.fb79 is stuck inactive since forever, current state creating, last acting 
[901,403,805]
pg 0.fb78 is stuck inactive since forever, current state creating, last acting 
[404,901]
pg 2.fb7a is stuck inactive since forever, current state creating, last acting 
[403,303,805]
pg 1.fb79 is stuck inactive since forever, current state creating, last acting 
[803,901,404]
pg 0.fb77 is stuck inactive for 24155.756030, current state stale+peering, last 
acting [101,1005]
pg 1.fb76 is stuck inactive since forever, current state creating, last acting 
[901,505,403]
pg 2.fb75 is stuck inactive since forever, current state creating, last acting 
[905,403,204]
pg 1.fb77 is stuck inactive since forever, current state creating, last acting 
[905,805,204]
pg 2.fb74 is stuck inactive since forever, current state creating, last acting 
[901,404]
pg 0.fb75 is stuck inactive since forever, current state creating, last acting 
[903,403]
pg 1.fb74 is stuck inactive since forever, current state creating, last acting 
[901,204,403]
pg 2.fb77 is stuck inactive since forever, current state creating, last acting 
[505,802,905]
pg 0.fb74 is stuck inactive for 24042.660267, current state stale+incomplete, 
last acting [101]
pg 1.fb75 is stuck inactive since forever, current state creating, last acting 
[905,403,804]





At 2014-06-19 04:31:09,Craig Lewis cle...@centraldesktop.com wrote:

I haven't seen behavior like that.  I have seen my OSDs use a lot of RAM while 
they're doing a recovery, but it goes back down when they're done.


Your OSD is doing something, it's using 126% CPU. What does `ceph osd tree` and 
`ceph health detail` say?




When you say you're installing Ceph on 10 servers, are you running a monitor on
all 10 servers?







On Wed, Jun 18, 2014 at 4:18 AM, wsnote wsn...@163.com wrote:

If I install ceph on 10 servers with one disk per server, the problem remains.
This is the memory usage of ceph-osd.
ceph-osd VIRT: 10.2G, RES: 4.2G
The memory usage of ceph-osd is too big!



At 2014-06-18 16:51:02,wsnote wsn...@163.com wrote:

Hi, Lewis!
I have come up against a question and don't know how to solve it, so I'm asking
you for help.
I can install ceph successfully in a cluster with 3 or 4 servers, but fail to
do it with 10 servers.
I install and start it, and then there is a server whose memory usage rises to
100% and that server crashes. I have to restart it.
All the configs are the same. I don't know what the problem is.
Can you give some suggestions?
Thanks!

ceph.conf:
[global]
auth supported = none


;auth_service_required = cephx
;auth_client_required = cephx
;auth_cluster_required = cephx
filestore_xattr_use_omap = true


max open files = 131072
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid
keyring = /etc/ceph/keyring.admin

;mon_clock_drift_allowed = 1 ;clock skew detected


[mon]
mon data = /data/mon$id
keyring = /etc/ceph/keyring.$name
[mds]
mds data = /data/mds$id
keyring = /etc/ceph/keyring.$name
[osd]
osd data = /data/osd$id
osd journal = /data/osd$id/journal
osd journal size = 1024
keyring = /etc/ceph/keyring.$name
osd mkfs type = xfs
osd mount options xfs = rw,noatime
osd mkfs options xfs = -f
filestore fiemap = false


On every server there are one mds, one mon, and 11 osds with 4TB of space each.
The mon address is a public IP, and each osd has both a public IP and a cluster IP.


wsnote






[ceph-users] Ceph can't start when one server's memory is leaking

2014-06-17 Thread wsnote
OS: CentOS 6.5
Version: ceph 0.79


Hello, everyone!
I am installing ceph in a cluster with 10 servers.
When I installed on 3 servers, the installation succeeded and the cluster
started normally.
But when I installed on 10 servers, I started ceph and the monitor server's
memory usage rose to 100% rapidly and the server crashed.
I don't know where the problem is. Why did ceph use all of the memory?
When I ran the command ceph -s on another server, the result was:


2014-06-17 22:40:48.240226 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.172:6789/0 pipe(0x7f4020024850 sd=4 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4020024ab0).fault
2014-06-17 22:40:51.240005 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.175:6789/0 pipe(0x7f4014000d10 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014000f70).fault
2014-06-17 22:40:54.240154 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.173:6789/0 pipe(0x7f40140031d0 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014003430).fault
2014-06-17 22:40:57.239757 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.172:6789/0 pipe(0x7f4014000df0 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014002000).fault
2014-06-17 22:41:00.240047 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.169:6789/0 pipe(0x7f4014002560 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f40140027c0).fault
2014-06-17 22:41:03.240459 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.175:6789/0 pipe(0x7f40140040e0 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014004340).fault
2014-06-17 22:41:06.242056 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.170:6789/0 pipe(0x7f40140049f0 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014004c50).fault
2014-06-17 22:41:09.242216 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.176:6789/0 pipe(0x7f4014006590 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f40140067f0).fault
2014-06-17 22:41:12.241405 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.170:6789/0 pipe(0x7f4014005000 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014005260).fault
2014-06-17 22:41:15.241588 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.175:6789/0 pipe(0x7f4014005a80 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f4014005ce0).fault
2014-06-17 22:41:18.241897 7f40247c4700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.169:6789/0 pipe(0x7f401400a050 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f401400a2b0).fault
2014-06-17 22:41:21.243095 7f40246c3700  0 Pipe.cc  1324 -- 
:/1001983 >> 122.228.248.174:6789/0 pipe(0x7f401400aaf0 sd=5 :0 s=1 pgs=0 cs=0 
l=1 c=0x7f401400ad50).fault


Can anyone give some advice?
Thanks!


[ceph-users] what is the recommended configuration for a ceph cluster with 10 servers without memory leaks?

2014-06-17 Thread wsnote
OS: CentOS 6.5
Version: Ceph 0.79


ceph.conf:
[global]
auth supported = none


;auth_service_required = cephx
;auth_client_required = cephx
;auth_cluster_required = cephx
filestore_xattr_use_omap = true


max open files = 131072
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid
keyring = /etc/ceph/keyring.admin

;mon_clock_drift_allowed = 1 ;clock skew detected


[mon]
mon data = /data/mon$id
keyring = /etc/ceph/keyring.$name
[mds]
mds data = /data/mds$id
keyring = /etc/ceph/keyring.$name
[osd]
osd data = /data/osd$id
osd journal = /data/osd$id/journal
osd journal size = 1024
keyring = /etc/ceph/keyring.$name
osd mkfs type = xfs
osd mount options xfs = rw,noatime
osd mkfs options xfs = -f
filestore fiemap = false


On every server there are one mds, one mon, and 11 osds with 4TB of space each.
The mon address is a public IP, and each osd has both a public IP and a cluster IP.
If I install ceph on 4 servers, it starts normally.
But if I install ceph on 10 servers, there is always one server whose memory is
used up rapidly and which then crashes. All I can do is restart the server.
What's the difference between a cluster with 4 servers and one with 10 servers?
Can anyone recommend a configuration?
Thanks!



[ceph-users] ceph gets stuck when starting

2014-06-16 Thread wsnote
OS: CentOS 6.5
Ceph: 0.79


Hi, everyone!
I am installing a ceph cluster.
After installing, I started the ceph cluster with the command service ceph
start, but it failed.
The entire log of one osd is:


2014-06-17 09:57:24.494599 7f206a806760  0 XfsFileStoreBackend.cc 108  
xfsfilestorebackend(/data/osd1001) detect_feature: extsize is supported
2014-06-17 09:57:24.646110 7f206a806760  0 FileStore.cc 1415 
filestore(/data/osd1001) mount: WRITEAHEAD journal mode explicitly enabled in 
conf
2014-06-17 09:57:24.754289 7f206a806760 -1 FileJournal.cc   95   journal 
FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio 
to force use of aio anyway
2014-06-17 09:57:24.754311 7f206a806760  1 FileJournal.cc   118  journal 
_open /data/osd1001/journal fd 21: 1073741824 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2014-06-17 09:57:24.754787 7f206a806760  1 FileJournal.cc   118  journal 
_open /data/osd1001/journal fd 21: 1073741824 bytes, block size 4096 bytes, 
directio = 1, aio = 0
2014-06-17 09:57:24.760049 7f206a806760  0 class_api.cc 620  cls 
cls/hello/cls_hello.cc:271: loading cls_hello
2014-06-17 09:57:24.73 7f206a806760  0 OSD.cc   5647 osd.1001 
217 crush map has features 1107558400, adjusting msgr requires for clients
2014-06-17 09:57:24.766683 7f206a806760  0 OSD.cc   5656 osd.1001 
217 crush map has features 1107558400, adjusting msgr requires for osds
2014-06-17 09:57:24.766709 7f206a806760  0 OSD.cc   2010 osd.1001 
217 load_pgs
2014-06-17 09:57:28.445373 7f206a806760  0 OSD.cc   2139 osd.1001 
217 load_pgs opened 5395 pgs


The ceph cluster got stuck, and the osd log stopped growing after the last line 
2014-06-17 09:57:28.445373 7f206a806760  0 OSD.cc   2139 osd.1001 
217 load_pgs opened 5395 pgs.
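
One thing to check before anything else: this single OSD had to open 5395 PGs, roughly 50 times the commonly cited target of about 100 PGs per OSD, which by itself makes load_pgs and peering extremely slow and memory-hungry (rule-of-thumb arithmetic, not a confirmed diagnosis):

```python
TARGET_PGS_PER_OSD = 100  # widely used rule of thumb, not a hard limit

def pg_overload_factor(pgs_opened, target=TARGET_PGS_PER_OSD):
    """How far above the per-OSD PG rule of thumb this OSD sits."""
    return pgs_opened / target

print(pg_overload_factor(5395))  # -> 53.95
```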
Can anyone help?
Thanks!


[ceph-users] Failed - InvalidArgument 400 when changing an object's ACL

2014-06-02 Thread wsnote
Hi, everyone!
I have installed a ceph cluster with object storage. Now I have a question.
I can use an S3 client or SDK to upload or delete an object, but I can't change
the ACL of objects.
When I try to change the ACL, the error is Failed - InvalidArgument 400.
What config controls this?
Thanks!



[ceph-users] Can ceph object storage distinguish the upload domain and the download domain?

2014-05-27 Thread wsnote
There are two different modes of accessing buckets. The upload domain and 
the download domain are the same.
GET /mybucket HTTP/1.1
Host: cname.domain.com
GET / HTTP/1.1
Host: mybucket.cname.domain.com 




Now I want to distinguish them. The reason I want to do this is:
I have two ceph clusters which hold the same data and sync via radosgw-agent.
Users can read the data from both clusters but write only to the master
zone.
When configuring the domains, I feel puzzled. I mean the two domains:
cname.domain.com and mybucket.cname.domain.com.
If I point the domain at the master zone, users can't reach the slave zone,
whose computing power is wasted.
If I point the domain at both zones, a user's write request may be
directed to the slave zone and fail, which is a bad user experience.
If I use two domains for the two zones, then when I want to release a file for
public download, I must supply two URLs, which is also a poor experience.
Has anyone met this situation and can offer some suggestions?
Thanks!







Re: [ceph-users] Can ceph object storage distinguish the upload domain and the download domain?

2014-05-27 Thread wsnote
Thanks for reply!


At 2014-05-27 22:13:19,Wido den Hollander w...@42on.com wrote:
On 05/27/2014 04:10 PM, wsnote wrote:
 There are two different modes of accessing the buckets.The upload domain
 and the download domain are the same.
 GET /mybucket HTTP/1.1
 Host: cname.domain.com
 GET / HTTP/1.1
 Host: mybucket.cname.domain.com


 Now I want to distinguish them. The reason that I want to do this is:
 I have two ceph clusters which have the same data and sync by
 radosgw-agent. The user can read the data from both clusters but write
 to only the master zone.
 When configure the domain, I feel puzzled. I mean the two domain:
 cname.domain.com and mybucket.cname.domain.com.
 If I configure the domain to the master zone, users can't surf the slave
 zone whose computin power was wasted.
 If I configure the domain to the both zone.  user's writing request may
 be directed to the slave zone and come up failure, which has bad user
 experience.
 If I use two domain for two zones, when I want to release an file for
 public downloading, I must supply two url, which has not good experience
 too.
 Does any one meet with this situation and give some suggestion?

Use Varnish in front? In the VCL you can direct all PUT, DELETE and POST 
request towards the master zone and serve GET and HEAD requests from 
both zones.


In front of the ceph cluster there will be an LVS server to balance load. In
front of LVS there will be a CDN. I'll use the CDN for both upload and download.
Using Varnish may make this a more complex question.
There is a setting in ceph.conf: rgw dns name = cname.domain.com.
If I use another domain that doesn't match *.cname.domain.com, ceph doesn't
recognize it.
Can I solve the problem by configuring ceph or Apache?

Simply directing HTTP traffic.
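
The routing Wido describes, whether implemented in Varnish, LVS, or the CDN layer, reduces to one rule: send write methods to the master zone and let reads hit either zone. A sketch of that dispatch (the hostnames are placeholders, not real configuration):

```python
WRITE_METHODS = {"PUT", "POST", "DELETE"}

def backend_for(method,
                master="master-zone.domain.com",
                slave="slave-zone.domain.com"):
    """Pick the zone a request should go to: writes must reach the master,
    reads (GET/HEAD) can be served by either zone."""
    return master if method.upper() in WRITE_METHODS else slave

print(backend_for("PUT"))   # -> master-zone.domain.com
print(backend_for("GET"))   # -> slave-zone.domain.com
```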

 Thanks!








-- 
Wido den Hollander
Ceph consultant and trainer
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


[ceph-users] How to use Admin Ops API?

2014-05-27 Thread wsnote
Hi, everyone!
I am trying to use Admin Ops API.
In the document, there is a statement:
An admin API request will be done on a URI that starts with the configurable 
‘admin’ resource entry point. 


My question is: where is this 'admin' entry point configured, and what is the default value?
Thanks!


[ceph-users] Questions about zones and disaster recovery

2014-05-21 Thread wsnote
Hi, everyone!
I have 2 ceph clusters: one master zone, the other a secondary zone.
Now I have some questions.
1. Can ceph have two or more secondary zones?


2. Can the master zone and the secondary zone swap roles?
I mean, can I change the secondary zone to be the master and the master zone to
be the secondary?


3. How do I deal with the situation when the master zone is down?
Currently the secondary zone forbids all file operations, such as creating
objects and deleting objects.
When the master zone is down, users can't do anything with the files except
read objects from the secondary zone.
It's a bad user experience. Additionally, it will have a bad influence on
users' confidence.
I know the restriction on the secondary zone exists for the sake of data
consistency. However, is there a way to improve the experience?
I think:
There could be a config option that allows file operations on the secondary
zone. If the master zone is down, the admin can enable it, and users can then
perform file operations as usual. The secondary zone records all the file
operations. When the master zone recovers, the admin can sync the files back
to the master zone manually.


Thanks!









Re: [ceph-users] Inter-region data replication through radosgw

2014-05-21 Thread wsnote
Hi, Lewis!
With your way, there will be a contradition because of the limit of secondary 
zone.
In secondary zone, one can't do any files operations.
Let me give some example.I define the symbols first.


The instances of cluster 1:
M1: master zone of cluster 1
S2: slave zone for M2 of cluster 2; the files of cluster 2 are synced from M2
to S2
I13: the third instance of cluster 1 (M1 and S2 are both instances too.)


The instances of cluster 2:
M2: master zone of cluster 2
S1: slave zone for M1 of cluster 1; the files of cluster 1 are synced from
M1 to S1
I23: the third instance of cluster 2 (M2 and S1 are both instances too.)


cluster 1:  M1  S2  I13
cluster 2:  M2  S1  I23


Questions:
1. If I upload objects from I13 of cluster 1, are they synced to cluster 2 from M1?
2. In cluster 1, can I do some operations on the files synced from cluster 2
through M1 or I13?
3. Suppose I upload an object to cluster 1; its metadata will be synced to
cluster 2 before the file data. If the metadata has been synced but the file
data has not, and cluster 1 goes down, the object has not been fully synced
yet. Then I upload the same object to cluster 2. Can it succeed?
I think it will fail: cluster 2 has the metadata of the object and will
consider the object to be in cluster 2, and this object was synced from
cluster 1, so I have no permission to operate on it.
Am I right?


Because of the restrictions on file operations in the slave zone, I think there
will be some contradictions.


Looking forward to your reply.
Thanks!







At 2014-05-22 07:12:17,Craig Lewis cle...@centraldesktop.com wrote:

On 5/21/14 09:02 , Fabrizio G. Ventola wrote:

Hi everybody,

I'm reading the doc regarding the replication through radosgw. It
talks just about inter-region METAdata replication, nothing about data
replication.

My question is, it's possible to have (everything) geo-replicated
through radosgw? Actually we have 2 ceph cluster (geo-dislocated)
instances and we wanna exploit the radosgw to make replicas across our
two clusters.

It's possible to read/write on both replicas (one placed on primary
region and one on the secondary one) done through radosgw? I'm
wondering because on the doc it's suggested to write just on a master
zone, avoiding to write on secondary zones. It's the same for
primary/secondary regions?


Cheers,
Fabrizio

The federated setup will replicate both data and metadata.  You can do just 
metadata if you want, but it's not the default.

You can have all of the RadosGW data geo-replicated.  Raw RADOS isn't possible 
yet, and rbd replication is under development.

You can read from both the master and slave, but you don't want to write to a 
slave.  The master and slave have different URLs, so it's up to you to use the 
appropriate URL.

You can run multiple zones in each cluster, as long as each zone has its own 
URL.  If you do this, you might want to share apache/radosgw/osd daemons across 
all the zones, or dedicate them to specific zones.  Multiple zones in one 
cluster can share everything, or just the monitors.

If you really want both clusters to handle writes, this is how you'd do it:
ClusterWest1 contains us-west-1 (master), and us-west-2 (slave for us-east-2).
ClusterEast1 contains us-east-1 (slave for us-west-1), and us-east-2 (master).
If users and buckets need to be globally unique across all zones, set up 
metadata (not data) replication between the two zones.
Write to us-west-1 or us-east-2, up to you.


This replication setup makes more sense when you have 3 or more data centers 
and you set them up in a ring.
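Craig mentions metadata-only replication between two zones; that mode is selected in radosgw-agent. A minimal sketch of its config (all endpoints and keys below are placeholders, and the option names should be checked against your radosgw-agent version):

```yaml
# cluster-metadata-sync.conf -- every value here is a placeholder
src_access_key: SRC_SYSTEM_USER_ACCESS_KEY
src_secret_key: SRC_SYSTEM_USER_SECRET_KEY
destination: http://us-east-2.example.com:80
dest_access_key: DEST_SYSTEM_USER_ACCESS_KEY
dest_secret_key: DEST_SYSTEM_USER_SECRET_KEY
log_file: /var/log/radosgw/radosgw-sync.log
```

Then run radosgw-agent --metadata-only -c cluster-metadata-sync.conf so that only users and buckets, not object data, are replicated.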


Does that help?


--


Craig Lewis
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com



[ceph-users] How to point custom domains to a bucket and set default page and error page

2014-05-17 Thread wsnote
Hi, everyone!
I want to use ceph to host static websites.
In Amazon S3, if I want to host test.domain.com, I create a bucket named 
test.domain.com and set a CNAME record: test.domain.com CNAME 
test.domain.com.s3.amazonaws.com.


In Ceph, I do the same but without any effect. My Ceph host is s3.cephtest.com. 
I create a bucket named test.domain.com, and I can download files from 
test.domain.com.s3.cephtest.com. However, when I try to download a file through 
the custom domain, it fails with the following error:
<Error><Code>NoSuchBucket</Code></Error>


Is there any config in Ceph that I don't know about?


Additionally, how can I set a default (index) page and a 4xx error page for a static website?
Thanks a lot!
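For context: RGW maps a request to a bucket by matching the HTTP Host header against the configured rgw dns name (Host = bucket.rgw-dns-name). With test.domain.com CNAMEd to test.domain.com.s3.cephtest.com, DNS resolves fine, but the client still sends Host: test.domain.com, which RGW cannot map to a bucket, hence the NoSuchBucket error. A sketch of the relevant config (the ceph.conf section name here is a placeholder):

```ini
; ceph.conf -- hypothetical gateway section name
[client.radosgw.gateway]
; RGW treats <bucket>.s3.cephtest.com as bucket-style access
rgw dns name = s3.cephtest.com
```

For arbitrary custom domains, one workaround (an assumption, not something RGW does natively at this version) is to have the front-end web server rewrite the Host header from test.domain.com to test.domain.com.s3.cephtest.com before handing the request to radosgw.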



[ceph-users] Where is the SDK of ceph object storage

2014-05-13 Thread wsnote
Hi, everyone!
Where can I find SDKs for Ceph object storage?
Python: boto
C++: libs3, which I found in the Ceph source and at github.com/ceph/libs3.
Where are the SDKs for other languages? Does Ceph supply them, or should I use 
the Amazon S3 SDKs directly?
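Since RGW speaks the S3 protocol, any S3-compatible SDK ultimately just computes the AWS v2 request signature; a minimal stdlib-only sketch of that Authorization header, with made-up credentials (this is what boto or libs3 do under the hood, so any S3 SDK should interoperate):

```python
import base64
import hashlib
import hmac


def s3_auth_header(access_key, secret_key, string_to_sign):
    """Build an AWS v2 'Authorization' header the way S3 SDKs do:
    AWS <access_key>:base64(HMAC-SHA1(secret_key, string_to_sign))."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return "AWS %s:%s" % (access_key, base64.b64encode(digest).decode())


# The StringToSign layout for a simple GET is:
# VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedResource
hdr = s3_auth_header(
    "MY_ACCESS_KEY",   # made-up access key
    "MY_SECRET_KEY",   # made-up secret key
    "GET\n\n\nThu, 17 Apr 2014 00:00:00 GMT\n/ttt/rad/radosgw-agent.txt",
)
print(hdr)
```

Because the signing scheme is identical, pointing an Amazon S3 SDK at an RGW endpoint (host override plus path-style addressing) generally works.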


Thanks!


[ceph-users] too slowly upload on ceph object storage

2014-05-08 Thread wsnote
Hi, everyone!
I am testing Ceph RGW and found that uploads are slow. I don't know where the 
bottleneck might be.


OS: CentOS 6.5
Version: Ceph 0.79


Hardware:
CPU: 2 * quad-core
Mem: 32GB
Disk: 2TB*1+3TB*11
Network: 1 x 1Gb Ethernet NIC


Ceph Cluster:
My cluster was composed of 4 servers (called ceph1-4).
I installed a monitor, a radosgw, and 11 OSDs on every server.
So the cluster had 4 servers (ceph1-4), 4 monitors, 4 radosgw instances, and 44 OSDs.
I configured Ceph as the ceph.com documentation says and didn't do any special config.


Then I use s3cmd to test the Ceph Cluster.
Test 1:
On ceph1, upload a big file to the rgw on ceph1. Repeat the test several times.
The speed is about 10MB/s!
That's too slow! I am uploading files from ceph1 to ceph1, so there is no 
network latency at all.


Test 2:
On ceph1, upload a big file to the rgw on ceph2. Repeat the test several times.
The speed is also about 10MB/s!


Test 3:
On each ceph server, upload a big file to its own rgw at the same time. Repeat 
the test several times.
The speed on each server is about 1-3MB/s. The sum of the speeds is about 
10MB/s!


I use the command iostat -kx 3 to watch the load on the disks.
While testing, iowait is lower than 1%, and %util is lower than 1% too.


There may be some problem. The speed from one server is too slow, and the total 
speed didn't increase with the number of rgw instances.
Can anyone give some suggestions?
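As a sanity check on these numbers (assuming the NIC really is 1 gigabit Ethernet): the wire limit is about 125 MB/s, so a flat 10 MB/s ceiling is well under a tenth of line rate, which, together with the idle disks, points at the gateway path rather than the network or the OSDs. A back-of-the-envelope computation:

```python
# How much of an assumed 1GbE link does the observed 10 MB/s use?
LINK_GBPS = 1.0                          # assumed 1 gigabit Ethernet NIC
wire_limit_mb_s = LINK_GBPS * 1000 / 8   # ~125 MB/s before protocol overhead
observed_mb_s = 10.0
pct_of_line_rate = 100 * observed_mb_s / wire_limit_mb_s
print(wire_limit_mb_s, pct_of_line_rate)  # 125.0 8.0
```

To tell whether RADOS itself or the rgw/fastcgi path is the limit, running rados bench write directly against a pool is the usual next step: if raw RADOS throughput is much higher, the bottleneck is in front of it.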
Thanks!



Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-24 Thread wsnote
Hi, Yehuda.
It doesn't matter. We have fixed it.
The filename is transcoded by url_encode and decoded by url_decode. There is a 
bug when decoding the filename.

At 2014-04-25 03:32:02,Yehuda Sadeh yeh...@inktank.com wrote:
Hi,

  sorry for the late response. I opened a ceph tracker issue for it (#8202).

Thanks,
Yehuda

On Wed, Apr 16, 2014 at 1:00 AM, wsnote wsn...@163.com wrote:
 OS: CentOS 6.5
 Version: Ceph 0.67.7 or 0.79

 Hello, everyone!
 I have had federated gateways configured for several days.
 Now I can sync files from master zone to slave zone.
 However, I found that files with English filenames could be synced, but files
 with Chinese filenames would not be synced.
 I compared the log between file with English filename and with Chinese
 filename in master zone's log.
 These two files are with name radosgw-agent.txt and
 radosgw-agent-测试用例-233.txt .

 radosgw-agent.txt in master zone's log
 2014-04-16 15:10:20.883445 7fd1635be700  1 == starting new request
 req=0x12ea120 =
 2014-04-16 15:10:20.883501 7fd1635be700  2 req 199:0.57::GET
 /ttt/rad%2Fradosgw-agent.txt::initializing
 2014-04-16 15:10:20.883507 7fd1635be700 10 host=s3.ceph69.com
 rgw_dns_name=s3.ceph69.com
 2014-04-16 15:10:20.883518 7fd1635be700 10 meta HTTP_X_AMZ_COPY_SOURCE
 2014-04-16 15:10:20.883524 7fd1635be700 10 x
 x-amz-copy-source:ttt/rad/radosgw-agent.txt
 2014-04-16 15:10:20.883560 7fd1635be700 10 s-object=rad/radosgw-agent.txt
 s-bucket=ttt
 2014-04-16 15:10:20.883572 7fd1635be700 20 FCGI_ROLE=RESPONDER
 2014-04-16 15:10:20.883573 7fd1635be700 20
 SCRIPT_URL=/ttt/rad/radosgw-agent.txt
 2014-04-16 15:10:20.883573 7fd1635be700 20
 SCRIPT_URI=http://s3.ceph69.com/ttt/rad/radosgw-agent.txt
 2014-04-16 15:10:20.883574 7fd1635be700 20 HTTP_AUTHORIZATION=AWS
 18GNN0DH1900H0L1LEBY:T23G/DqMa8KeIfJuv95XVRS4Hes=
 2014-04-16 15:10:20.883575 7fd1635be700 20 SERVER_PORT_SECURE=443
 2014-04-16 15:10:20.883575 7fd1635be700 20 HTTP_HOST=s3.ceph69.com

 radosgw-agent-测试用例-233.txt in master zone's log
 2014-04-16 15:10:21.108608 7fd1739d8700  1 == starting new request
 req=0x126fe10 =
 2014-04-16 15:10:21.108670 7fd1739d8700  2 req 200:0.63::GET
 /ttt/rad%2Fradosgw-agent-%FFE6%FFB5%FF8B%FFE8%FFAF%FF95%FFE7%FF94%FFA8%FFE4%FFBE%FF8B-233.txt::initializing
 2014-04-16 15:10:21.108677 7fd1739d8700 10 host=s3.ceph69.com
 rgw_dns_name=s3.ceph69.com
 2014-04-16 15:10:21.108687 7fd1739d8700 10 meta HTTP_X_AMZ_COPY_SOURCE
 2014-04-16 15:10:21.108693 7fd1739d8700 10 x
 x-amz-copy-source:ttt/rad/radosgw-agent-%E6%B5%8B%E8%AF%95%E7%94%A8%E4%BE%8B-233.txt
 2014-04-16 15:10:21.108714 7fd1739d8700 10
 s-object=rad/radosgw-agent-憆FE6憆FB5憆F8B憆FE8憆FAF憆F95憆FE7憆F94憆FA8憆FE4憆FBE憆F8B-233.txt
 s-bucket=ttt
 2014-04-16 15:10:21.108738 7fd1739d8700  2 req 200:0.000131::GET
 /ttt/rad%2Fradosgw-agent-%FFE6%FFB5%FF8B%FFE8%FFAF%FF95%FFE7%FF94%FFA8%FFE4%FFBE%FF8B-233.txt::http
 status=400
 2014-04-16 15:10:21.108921 7fd1739d8700  1 == req done req=0x126fe10
 http_status=400 ==

 The difference between them is the highlighted part:
 radosgw-agent.txt in master zone's log
 2014-04-16 15:10:20.883572 7fd1635be700 20 FCGI_ROLE=RESPONDER

 radosgw-agent-测试用例-233.txt in master zone's log
 2014-04-16 15:10:21.108738 7fd1739d8700  2 req 200:0.000131::GET
 /ttt/rad%2Fradosgw-agent-%FFE6%FFB5%FF8B%FFE8%FFAF%FF95%FFE7%FF94%FFA8%FFE4%FFBE%FF8B-233.txt::http
 status=400

 Therefore I don't know whether it's Ceph's error or fastcgi's error.
 I attach the test files so anyone can test. The other two files are the full
 logs for the test files.

 Thanks!






Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-24 Thread wsnote
Hi, Yehuda.
It doesn't matter. We have fixed it.
The filename is transcoded by url_encode and decoded by url_decode; there was a 
bug when decoding the filename.
There is another bug as well: when radosgw-agent fails to decode a filename, 
the sync gets stuck and the other files are not synced either. This must be 
optimized: if one file fails to sync, radosgw-agent should keep syncing the 
other files.
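The behavior the agent needs is a clean percent-encoding round trip of the UTF-8 filename (the broken %FFE6%FFB5... form in the logs suggests each byte was widened incorrectly before re-encoding). In Python 3 this is just urllib; the encoded form below matches the x-amz-copy-source header in the logs:

```python
from urllib.parse import quote, unquote

name = "radosgw-agent-测试用例-233.txt"
encoded = quote(name)       # percent-encodes the filename's UTF-8 bytes
decoded = unquote(encoded)  # must round-trip back to the original

print(encoded)
# radosgw-agent-%E6%B5%8B%E8%AF%95%E7%94%A8%E4%BE%8B-233.txt
assert decoded == name
```

(The Python 2 equivalents of that era are urllib.quote/unquote on the UTF-8 byte string.)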







[ceph-users] [Bug]radosgw.log won't be generated when deleted

2014-04-16 Thread wsnote
OS: CentOS 6.5
Ceph version: 0.67.7


When I delete or move /var/log/ceph/radosgw.log, I can continue operating on 
files through rgw, but then I find there are no logs. The log file is not 
re-generated automatically. Even if I create it myself, nothing is written to 
it. Only if I restart radosgw is the log generated again.
I think this is a bug and don't know whether it has been fixed or not. Whenever 
I delete or move the log, it should be re-created and record the subsequent 
operations through rgw.
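This is standard Unix file-descriptor behavior rather than Ceph failing to regenerate logs: the daemon keeps writing to the deleted inode it still has open. A hedged workaround sketch (the daemon name client.radosgw.gateway is a placeholder, and command availability depends on your Ceph version):

```shell
# After rotating/moving the log, ask the daemon to reopen its log file
# instead of restarting it (this is what the packaged logrotate script does):
mv /var/log/ceph/radosgw.log /var/log/ceph/radosgw.log.1
ceph daemon client.radosgw.gateway log reopen   # via the admin socket
# or, if the admin socket is unavailable:
killall -HUP radosgw
```

After the reopen, radosgw creates a fresh radosgw.log and subsequent requests are logged there.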



[ceph-users] federated gateways can sync files with English filename but can't sync files with Chinese filename

2014-04-14 Thread wsnote
Hi, everyone!
After several days' attempts, files can sync between the two zones.
But now I find another question: only files with English filenames were synced; 
files with Chinese filenames were not.
Has anyone run into the same problem?
Thanks!



The master zone radosgw's log follows:
2014-04-14 15:37:00.775529 7fd54dde2700 10 moving 
.fj-xm.domain.rgw+.bucket.meta.test:fj-xm.4302.1 to cache LRU end
2014-04-14 15:37:00.775557 7fd54dde2700 10 cache get: 
name=.fj-xm.domain.rgw+.bucket.meta.test:fj-xm.4302.1 : type miss (requested=1
7, cached=22)
2014-04-14 15:37:00.775574 7fd54dde2700 20 get_obj_state: rctx=0x7fd580001b50 
obj=.fj-xm.domain.rgw:.bucket.meta.test:fj-xm.4302.1 s
tate=0x7fd58000d1d8 s-prefetch_data=0
2014-04-14 15:37:00.775580 7fd54dde2700 20 get_obj_state: rctx=0x7fd580001b50 
obj=.fj-xm.domain.rgw:.bucket.meta.test:fj-xm.4302.1 s
tate=0x7fd58000d1d8 s-prefetch_data=0
2014-04-14 15:37:00.775581 7fd54dde2700 20 state for 
obj=.fj-xm.domain.rgw:.bucket.meta.test:fj-xm.4302.1 is not atomic, not appendi
ng atomic test
2014-04-14 15:37:00.775612 7fd54dde2700 20 rados-read obj-ofs=0 read_ofs=0 
read_len=524288
2014-04-14 15:37:00.776528 7fd54dde2700 20 rados-read r=0 bl.length=153
2014-04-14 15:37:00.776581 7fd54dde2700 10 cache put: 
name=.fj-xm.domain.rgw+.bucket.meta.test:fj-xm.4302.1
2014-04-14 15:37:00.776595 7fd54dde2700 10 moving 
.fj-xm.domain.rgw+.bucket.meta.test:fj-xm.4302.1 to cache LRU end
2014-04-14 15:37:00.777375 7fd54dde2700 15 Read 
AccessControlPolicyAccessControlPolicy 
xmlns=http://s3.amazonaws.com/doc/2006-03-0
1/OwnerIDtestname/IDDisplayNametest 
name/DisplayName/OwnerAccessControlListGrantGrantee 
xmlns:xsi=http://www.w
3.org/2001/XMLSchema-instance 
xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers/URI/GranteePermissionREAD
/Permission/GrantGrantGrantee 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance; 
xsi:type=CanonicalUserIDtestname/ID
DisplayNametest 
name/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPol
icy
2014-04-14 15:37:00.777417 7fd54dde2700 20 get_obj_state: rctx=0x7fd580001b50 
obj=test:mytest/win7鏃楄埌婵€娲婚槄璇?txt state=0x7fd
58000ffb8 s-prefetch_data=1
2014-04-14 15:37:00.785713 7fd54dde2700 15 Read 
AccessControlPolicyAccessControlPolicy 
xmlns=http://s3.amazonaws.com/doc/2006-03-0
1/OwnerIDtestname/IDDisplayNametest 
name/DisplayName/OwnerAccessControlListGrantGrantee 
xmlns:xsi=http://www.w
3.org/2001/XMLSchema-instance 
xsi:type=GroupURIhttp://acs.amazonaws.com/groups/global/AllUsers/URI/GranteePermissionREAD
/Permission/GrantGrantGrantee 
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance; 
xsi:type=CanonicalUserIDtestname/ID
DisplayNametest 
name/DisplayName/GranteePermissionFULL_CONTROL/Permission/Grant/AccessControlList/AccessControlPol
icy
2014-04-14 15:37:00.785923 7fd54dde2700 10 read_permissions on 
test(@{i=.fj-xm.rgw.buckets.index}.fj-xm.rgw.buckets[fj-xm.4302.1]):m
ytest/win7鏃楄埌婵€娲婚槄璇?txt only_bucket=0 ret=-2
2014-04-14 15:37:00.785993 7fd54dde2700  2 req 2:0.063973:s3:HEAD 
/test/mytest%2Fwin7%E6%97%97%E8%88%B0%E6%BF%80%E6%B4%BB%E9%98%85%E
8%AF%BB.txt:get_obj:http status=404
2014-04-14 15:37:00.786126 7fd54dde2700  1
 == req done req=0x2527870 http_status=404 ==


The slave zone radosgw's log follows:
Mon Apr 14 07:37:53 2014
x-amz-copy-source:test/mytest/win7%E6%97%97%E8%88%B0%E6%BF%80%E6%B4%BB%E9%98%85%E8%AF%BB.txt
/test/mytest%2Fwin7%FFE6%FF97%FF97%FFE8%FF88%FFB0%FFE6%FFBF%FF80%FFE6%FFB4%FFBB%
FFE9%FF98%FF85%FFE8%FFAF%FFBB.txt
2014-04-14 15:37:53.713088 7f460a3e9700 15 generated auth header: AWS 
UWN9EF6XJ8E9KUQX4XAE:jRhG5XJ6va5nFFmXZK4inGLTO4A=
2014-04-14 15:37:53.722011 7f460a3e9700 20 sending request to 
http://s3.ceph69.com:80/test/mytest%2Fwin7%FFE6%FF97%FF97%
FFE8%FF88%FFB0%FFE6%FFBF%FF80%FFE6%FFB4%FFBB%FFE9%FF98%FF85%FFE8%FFAF%FF
BB.txt?rgwx-uid=fj-xmrgwx-region=fjrgwx-prepend-metadata=fj
2014-04-14 15:37:53.863186 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.863215 7f460a3e9700 10 received header:HTTP/1.1 400 Bad 
Request
2014-04-14 15:37:53.863219 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.863220 7f460a3e9700 10 received header:Date: Mon, 14 Apr 
2014 07:37:51 GMT
2014-04-14 15:37:53.863240 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.863241 7f460a3e9700 10 received header:Server: 
Apache/2.2.22 (Fedora)
2014-04-14 15:37:53.863244 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.863244 7f460a3e9700 10 received header:Accept-Ranges: bytes
2014-04-14 15:37:53.872057 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.872272 7f460a3e9700 10 received header:Content-Length: 83
2014-04-14 15:37:53.872285 7f460a3e9700 10 receive_http_header
2014-04-14 15:37:53.872286 

Re: [ceph-users] Questions about federated gateways configure

2014-04-10 Thread wsnote
Now my configuration is normal, but there are still some mistakes.
The bucket list can sync, but objects do not.
In the secondary zone, with the secondary zone's key, I can't see the bucket 
list; but with the master zone's key, I can see it.
The log follows:
the master zone:
Thu, 10 Apr 2014 09:35:31 GMT
/admin/log
2014-04-10 17:35:31.184939 7f0ea79d8700 15 calculated 
digest=dffaFPagxbrKq4OIGUW37/p/LZ0=
2014-04-10 17:35:31.184941 7f0ea79d8700 15 
auth_sign=dffaFPagxbrKq4OIGUW37/p/LZ0=
2014-04-10 17:35:31.184943 7f0ea79d8700 15 compare=0
2014-04-10 17:35:31.184945 7f0ea79d8700 20 system request
2014-04-10 17:35:31.184948 7f0ea79d8700  2 req 27796:0.000323::GET 
/admin/log:list_data_changes_log:reading permissions
2014-04-10 17:35:31.184950 7f0ea79d8700  2 req 27796:0.000326::GET 
/admin/log:list_data_changes_log:verifying op mask
2014-04-10 17:35:31.184952 7f0ea79d8700 20 required_mask= 0 user.op_mask=7
2014-04-10 17:35:31.184953 7f0ea79d8700  2 req 27796:0.000329::GET 
/admin/log:list_data_changes_log:verifying op permissions
2014-04-10 17:35:31.184956 7f0ea79d8700  2 overriding permissions due to system 
operation
2014-04-10 17:35:31.184957 7f0ea79d8700  2 req 27796:0.000333::GET 
/admin/log:list_data_changes_log:verifying op params
2014-04-10 17:35:31.184959 7f0ea79d8700  2 req 27796:0.000335::GET 
/admin/log:list_data_changes_log:executing
2014-04-10 17:35:31.186112 7f0ea79d8700  2 req 27796:0.001488::GET 
/admin/log:list_data_changes_log:http status=404
2014-04-10 17:35:31.186276 7f0ea79d8700  1 == req done req=0x1cf1b10 
http_status=404 ==


The secondary zone:
Thu, 10 Apr 2014 09:32:48 GMT
/admin/replica_log
2014-04-10 17:32:49.388584 7fd6a17fb700 15 calculated 
digest=0ZQB/sBiIIsDLExsbzzmF9G02Js=
2014-04-10 17:32:49.388586 7fd6a17fb700 15 
auth_sign=0ZQB/sBiIIsDLExsbzzmF9G02Js=
2014-04-10 17:32:49.388587 7fd6a17fb700 15 compare=0
2014-04-10 17:32:49.388589 7fd6a17fb700 20 system request
2014-04-10 17:32:49.388592 7fd6a17fb700  2 req 79527:0.000359::GET 
/admin/replica_log:replicadatalog_getbounds:reading permissions
2014-04-10 17:32:49.388596 7fd6a17fb700  2 req 79527:0.000363::GET 
/admin/replica_log:replicadatalog_getbounds:verifying op mask
2014-04-10 17:32:49.388617 7fd6a17fb700 20 required_mask= 0 user.op_mask=7
2014-04-10 17:32:49.388619 7fd6a17fb700  2 req 79527:0.000386::GET 
/admin/replica_log:replicadatalog_getbounds:verifying op permissions
2014-04-10 17:32:49.388622 7fd6a17fb700  2 overriding permissions due to system 
operation
2014-04-10 17:32:49.388624 7fd6a17fb700  2 req 79527:0.000391::GET 
/admin/replica_log:replicadatalog_getbounds:verifying op params
2014-04-10 17:32:49.388626 7fd6a17fb700  2 req 79527:0.000393::GET 
/admin/replica_log:replicadatalog_getbounds:executing
2014-04-10 17:32:49.389355 7fd6a17fb700  2 req 79527:0.001122::GET 
/admin/replica_log:replicadatalog_getbounds:http status=404
2014-04-10 17:32:49.389586 7fd6a17fb700  1 == req done req=0xcc69b0 
http_status=404 ==







At 2014-04-10 09:40:07,wsnote wsn...@163.com wrote:

In cluster-data-sync.conf, if I use https, then it shows this error:


INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/radosgw_agent/cli.py, line 269, in 
main
region_map = client.get_region_map(dest_conn)
  File /usr/lib/python2.6/site-packages/radosgw_agent/client.py, line 391, in 
get_region_map
region_map = request(connection, 'get', 'admin/config')
  File /usr/lib/python2.6/site-packages/radosgw_agent/client.py, line 153, in 
request
result = handler(url, params=params, headers=request.headers, data=data)
  File /usr/lib/python2.6/site-packages/requests/api.py, line 55, in get
return request('get', url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
return session.request(method=method, url=url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 279, in 
request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, 
cert=cert, proxies=proxies)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 374, in 
send
r = adapter.send(request, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/adapters.py, line 213, in 
send
raise SSLError(e)
SSLError: hostname 's3.ceph71.com' doesn't match u'ceph71'


If I use http, there is no error, and the log is:
INFO:radosgw_agent.worker:finished processing shard 26
INFO:radosgw_agent.sync:27/128 items processed
INFO:radosgw_agent.worker:15413 is processing shard number 27
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:radosgw_agent.worker:finished processing shard 27
INFO:radosgw_agent.sync:28/128 items processed
INFO:radosgw_agent.worker:15413 is processing shard number 28
INFO:urllib3

Re: [ceph-users] Questions about federated gateways configure

2014-04-09 Thread wsnote
Thank you very much!
I did as you said, but there are some mistakes.


 [root@ceph69 ceph]# radosgw-agent -c region-data-sync.conf
Traceback (most recent call last):
  File /usr/bin/radosgw-agent, line 5, in module
from pkg_resources import load_entry_point
  File /usr/lib/python2.6/site-packages/pkg_resources.py, line 2659, in 
module
parse_requirements(__requires__), Environment()
  File /usr/lib/python2.6/site-packages/pkg_resources.py, line 546, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: requests>=1.2.1
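The traceback means the installed python-requests package is older than what radosgw-agent declares (requests>=1.2.1; the ">" tends to get eaten by the list archive). Upgrading it is usually enough; a sketch (whether you use pip or your distro's package manager is up to your environment):

```shell
# Satisfy radosgw-agent's python-requests requirement
pip install --upgrade 'requests>=1.2.1'
pip show requests   # confirm the installed version
```

After that, re-run radosgw-agent -c region-data-sync.conf.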





At 2014-04-09 12:11:09,Craig Lewis cle...@centraldesktop.com wrote:
I posted inline.


1. Create Pools
there are many us-east and us-west pools.
Do I have to create both us-east and us-west pools in a ceph instance? Or, I 
just create us-east pools in us-east zone and create us-west pools in us-west 
zone?

No, just create the us-east pools in the us-east cluster, and the us-west pools 
in the us-west cluster.




2. Create a keyring

Generate a Ceph Object Gateway user name and key for each instance.

sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-east-1 --gen-key
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-west-1 --gen-key
Do I use the all above commands in every ceph instance, or use first in us-east 
zone and use second in us-west zone?

For the keyrings, you should only need to do the key in the respective zone.  
I'm not 100% sure though, as I'm not using CephX.





3. add instances to ceph config file

[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}


Does both of above config put in one ceph.conf, or put us-east in us-east zone 
and us-west in us-west zone?

It only needs to be in each cluster's ceph.conf.  Assuming your client names 
are globally unique, it won't hurt if you put it in all clusters' ceph.conf. 




4. Create Zones
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-west-1
Use both commands in every instance or separately?

Yes, the zones need to know about each other.  The slaves definitely need to 
know the master zone information.  The master might be able to get away with 
not knowing about the slave zones, but I haven't tested it.  I ran both 
commands in both zones, using the respective --name argument for the node in 
the zone I was running the command on.
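For reference, the us-east.json fed to zone set is essentially the zone's pool map plus the system user's keys. A sketch modeled on the federation docs of that era (pool names are illustrative, and the exact field set should be checked against radosgw-admin zone get on your version):

```json
{ "domain_root": ".us-east.domain.rgw",
  "control_pool": ".us-east.rgw.control",
  "gc_pool": ".us-east.rgw.gc",
  "log_pool": ".us-east.log",
  "intent_log_pool": ".us-east.intent-log",
  "usage_log_pool": ".us-east.usage",
  "user_keys_pool": ".us-east.users",
  "user_email_pool": ".us-east.users.email",
  "user_swift_pool": ".us-east.users.swift",
  "user_uid_pool": ".us-east.users.uid",
  "system_key": { "access_key": "", "secret_key": "" }
}
```

The system_key fields are filled with the system user's access/secret keys, which is why they must match across zones.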




5. Create Zone Users


radosgw-admin user create --uid=us-east --display-name=Region-US Zone-East 
--name client.radosgw.us-east-1 --system
radosgw-admin user create --uid=us-west --display-name=Region-US Zone-West 
--name client.radosgw.us-west-1 --system
Does us-east zone have to create uid us-west?
Does us-west zone have to create uid us-east?

When you create the system users, you do need to create all users in all zones.  
I think you don't need the master user in the slave zones, but I haven't taken 
the time to test it.  You do need the access keys to match in all zones.  So if 
you create the users in the master zone with

radosgw-admin user create --uid=$name --display-name=$display_name --name 
client.radosgw.us-west-1 --gen-access-key --gen-secret --system

you'll copy the access and secret keys to the slave zone with

radosgw-admin user create --uid=$name --display-name=$display_name --name 
client.radosgw.us-east-1 --access_key=$access_key --secret=$secret_key 
--system




6. about secondary region


Create zones from master region in the secondary region.
Create zones from secondary region in the master region.


Are these two steps meant to give the two regions the same pools?

I haven't tried multiple regions yet, but since the two regions are in two 
different clusters, they can't share pools.  They could use the same pool names 
in different clusters, but I recommend against that.  You really want all pools 
in all locations to be named uniquely.  Having the same names in different 
locations is a recipe for human error.

I'm pretty sure you just need to load the region and zone maps in all of the 
clusters.  Since the other regions will only be storing metadata about the 
other regions and zones, they shouldn't need extra pools.  Similar to my answer 
to question #1.




The best advice I can give is to setup a pair of virtual machines, and start 
messing around.  Make liberal use of VM snapshots.  I broke my test clusters 
several times.  I could've 

Re: [ceph-users] Questions about federated gateways configure

2014-04-09 Thread wsnote
Now I can configure it, but it doesn't seem to have any effect.
The following is the error info.


 [root@ceph69 ceph]# radosgw-agent -c /etc/ceph/cluster-data-sync.conf
INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceback (most recent call last):
  File /usr/lib/python2.6/site-packages/radosgw_agent/cli.py, line 269, in 
main
region_map = client.get_region_map(dest_conn)
  File /usr/lib/python2.6/site-packages/radosgw_agent/client.py, line 391, in 
get_region_map
region_map = request(connection, 'get', 'admin/config')
  File /usr/lib/python2.6/site-packages/radosgw_agent/client.py, line 153, in 
request
result = handler(url, params=params, headers=request.headers, data=data)
  File /usr/lib/python2.6/site-packages/requests/api.py, line 55, in get
return request('get', url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/api.py, line 44, in request
return session.request(method=method, url=url, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 279, in 
request
resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, 
cert=cert, proxies=proxies)
  File /usr/lib/python2.6/site-packages/requests/sessions.py, line 374, in 
send
r = adapter.send(request, **kwargs)
  File /usr/lib/python2.6/site-packages/requests/adapters.py, line 213, in 
send
raise SSLError(e)
SSLError: hostname 's3.ceph71.com' doesn't match u'ceph71' 


What's the probable reason?
Thanks!
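The probable reason is in the error message itself: the gateway's SSL certificate was issued for the name 'ceph71', while the agent connects to 's3.ceph71.com', and HTTPS hostname verification compares the certificate's name (allowing a leading '*.' wildcard) against the host being contacted. A simplified sketch of that matching rule (an illustration, not the full RFC 6125 algorithm):

```python
def cert_matches(cert_name, hostname):
    """Simplified HTTPS hostname check: exact match, or a single
    leading '*.' wildcard covering exactly one DNS label."""
    if cert_name == hostname:
        return True
    if cert_name.startswith("*."):
        suffix = cert_name[1:]                  # e.g. ".ceph71.com"
        head, sep, rest = hostname.partition(".")
        return sep == "." and "." + rest == suffix
    return False

print(cert_matches("ceph71", "s3.ceph71.com"))        # False: the SSLError above
print(cert_matches("*.ceph71.com", "s3.ceph71.com"))  # True: a wildcard cert works
```

So the fix is to reissue the gateway's certificate for s3.ceph71.com (or as a *.ceph71.com wildcard), rather than disabling verification.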




At 2014-04-09 16:24:48,wsnote wsn...@163.com wrote:

Thank you very much!
I did as what you said. But there are some mistake.


 [root@ceph69 ceph]# radosgw-agent -c region-data-sync.conf
Traceback (most recent call last):
  File /usr/bin/radosgw-agent, line 5, in module
from pkg_resources import load_entry_point
  File /usr/lib/python2.6/site-packages/pkg_resources.py, line 2659, in 
module
parse_requirements(__requires__), Environment()
  File /usr/lib/python2.6/site-packages/pkg_resources.py, line 546, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: requests=1.2.1 





At 2014-04-09 12:11:09,Craig Lewis cle...@centraldesktop.com wrote:
I posted inline.


1. Create Pools
there are many us-east and us-west pools.
Do I have to create both us-east and us-west pools in a ceph instance? Or, I 
just create us-east pools in us-east zone and create us-west pools in us-west 
zone?

No, just create the us-east pools in the us-east cluster, and the us-west pools 
in the us-west cluster.




2. Create a keyring

Generate a Ceph Object Gateway user name and key for each instance.

sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-east-1 --gen-key
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-west-1 --gen-key
Do I use the all above commands in every ceph instance, or use first in us-east 
zone and use second in us-west zone?

For the keyrings, you should only need to do the key in the respective zone.  
I'm not 100% sure though, as I'm not using CephX.





3. add instances to ceph config file
[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}


Does both of above config put in one ceph.conf, or put us-east in us-east zone 
and us-west in us-west zone?

It only needs to be in each cluster's ceph.conf.  Assuming your client names 
are globally unique., it won't hurt if you put it in all cluster's ceph.conf. 




4. Create Zones
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-west-1
Do I run both commands in every instance, or each in its own zone?

Yes, the zones need to know about each other.  The slaves definitely need to 
know the master zone information.  The master might be able to get away with 
not knowing about the slave zones, but I haven't tested it.  I ran both 
commands in both zones, using the respective --name argument for the node in 
the zone I was running the command on.
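In other words (a sketch of the procedure described above, not verified beyond it): generate the full cross-product, then on each cluster run the two lines whose `--name` matches that cluster's local client:

```shell
# Print all zone-json / client-name combinations; on each cluster, run the
# two commands that carry the local --name (same json files on both sides).
cmds=$(for zone in us-east us-west; do
  for name in client.radosgw.us-east-1 client.radosgw.us-west-1; do
    echo "radosgw-admin zone set --rgw-zone=$zone --infile $zone.json --name $name"
  done
done)
printf '%s\n' "$cmds"
```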




5. Create Zone Users


radosgw-admin user create --uid=us-east --display-name=Region-US Zone-East 
--name client.radosgw.us-east-1 --system
radosgw-admin user create --uid=us-west --display-name=Region-US Zone-West 
--name client.radosgw.us-west-1 --system
Does us-east zone have to create uid us-west?
Does us-west zone have to create uid us-east?

When you create the system users, you do need to create all

Re: [ceph-users] Questions about federated gateways configure

2014-04-09 Thread wsnote
In cluster-data-sync.conf, if I use https, then it shows this error:


INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/radosgw_agent/cli.py", line 269, in main
    region_map = client.get_region_map(dest_conn)
  File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 391, in get_region_map
    region_map = request(connection, 'get', 'admin/config')
  File "/usr/lib/python2.6/site-packages/radosgw_agent/client.py", line 153, in request
    result = handler(url, params=params, headers=request.headers, data=data)
  File "/usr/lib/python2.6/site-packages/requests/api.py", line 55, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/api.py", line 44, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 279, in request
    resp = self.send(prep, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies)
  File "/usr/lib/python2.6/site-packages/requests/sessions.py", line 374, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python2.6/site-packages/requests/adapters.py", line 213, in send
    raise SSLError(e)
SSLError: hostname 's3.ceph71.com' doesn't match u'ceph71'
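That SSLError is a certificate problem rather than a sync problem: the gateway's certificate was issued for 'ceph71', but the agent connects to s3.ceph71.com. A sketch of the fix (temp files here so nothing real is touched; on the gateway box use the cert/key paths from ssl.conf): reissue the certificate with a CN, or a subjectAltName, that covers the destination hostname.

```shell
# Generate a self-signed cert whose CN matches the destination URL used in
# cluster-data-sync.conf, then verify the subject before deploying it.
crt=$(mktemp) key=$(mktemp)
openssl req -new -x509 -days 365 -nodes -newkey rsa:2048 \
    -subj "/CN=s3.ceph71.com" -keyout "$key" -out "$crt" 2>/dev/null
subject=$(openssl x509 -in "$crt" -noout -subject)
echo "$subject"      # should contain CN=s3.ceph71.com
# on the gateway: install as /etc/pki/tls/certs/ca.crt + private/ca.key,
# then restart httpd so the new cert is served
```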


If I use http, there is no error, and the log is
INFO:radosgw_agent.worker:finished processing shard 26
INFO:radosgw_agent.sync:27/128 items processed
INFO:radosgw_agent.worker:15413 is processing shard number 27
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:radosgw_agent.worker:finished processing shard 27
INFO:radosgw_agent.sync:28/128 items processed
INFO:radosgw_agent.worker:15413 is processing shard number 28
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph69.com
INFO:radosgw_agent.worker:syncing bucket zhangyt6
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com
INFO:urllib3.connectionpool:Starting new HTTP connection (1): s3.ceph71.com


I can see the bucket sync starting, but the sync fails:
 INFO:radosgw_agent.worker:syncing bucket zhangyt6




Another question: a pool called .rgw.root is created automatically. Does it have
any effect?


Thanks!





At 2014-04-10 00:40:15, Craig Lewis cle...@centraldesktop.com wrote:

On 4/9/2014 3:33 AM, wsnote wrote:

Now I can configure it, but it doesn't seem to work.
The following is the error info.


 [root@ceph69 ceph]# radosgw-agent -c /etc/ceph/cluster-data-sync.conf
INFO:urllib3.connectionpool:Starting new HTTPS connection (1): s3.ceph71.com
ERROR:root:Could not retrieve region map from destination


This error means that radosgw-agent can't retrieve the region and zone maps 
from the slave zone.

In cluster-data-sync.conf, double check that the destination URL is correct, 
and that ceph69 can connect to that URL. 

Next verify dest_access_key and dest_secret_key are correct.  Compare them to 
radosgw-admin user show --name client.radosgw.us-east-1.  radosgw-admin uses 
those credentials to pull that data.  Before I started, I made sure that all of 
my secret keys did not have backslashes.

One of the issues I ran into was making sure I created everything in the zone 
RGW pools, not the default RGW pools.  Sometimes my users would end up in 
.users.uid, not .us.east-1.users.uid, because I forgot to add the --name 
parameter to the radosgw-admin commands.
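For the backslash check, a quick sketch (the JSON below is a stand-in for real `radosgw-admin user info` output, and the secret is fabricated):

```shell
# Extract the secret_key and flag backslashes, a known source of signature
# mismatches between radosgw-agent and the gateway.
json='{ "keys": [{ "user": "us-east", "secret_key": "abc\/def+ghi" }]}'
secret=$(printf '%s' "$json" | sed -n 's/.*"secret_key": "\([^"]*\)".*/\1/p')
case "$secret" in
  *\\*) msg="secret contains a backslash: regenerate the key" ;;
  *)    msg="secret looks safe" ;;
esac
echo "$msg"   # prints: secret contains a backslash: regenerate the key
```

If it trips, regenerating the key (e.g. with `radosgw-admin key create --gen-secret` and the right `--uid`/`--name`) until the secret is backslash-free is the usual workaround.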
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Questions about federated gateways configure

2014-04-08 Thread wsnote
Hello, everyone.
I tried to configure federated gateways but failed.
I have read the document several times and have some questions.


OS: CentOS 6.5
Version: Ceph 0.67.7


1. Create Pools
There are many us-east and us-west pools.
Do I have to create both the us-east and us-west pools in each Ceph instance? Or
do I just create the us-east pools in the us-east zone and the us-west pools in
the us-west zone?


2. Create a keyring

Generate a Ceph Object Gateway user name and key for each instance.

sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-east-1 --gen-key
sudo ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n 
client.radosgw.us-west-1 --gen-key
Do I run both of the above commands in every Ceph instance, or the first in the
us-east zone and the second in the us-west zone?


3. add instances to ceph config file


[client.radosgw.us-east-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-east
rgw zone root pool = .us-east.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}

[client.radosgw.us-west-1]
rgw region = us
rgw region root pool = .us.rgw.root
rgw zone = us-west
rgw zone root pool = .us-west.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw socket path = /var/run/ceph/$name.sock
host = {host-name}


Do both of the above configs go in one ceph.conf, or does the us-east section go
in the us-east zone and the us-west section in the us-west zone?


4. Create Zones
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-east-1
radosgw-admin zone set --rgw-zone=us-east --infile us-east.json --name 
client.radosgw.us-west-1
Do I run both commands in every instance, or each in its own zone?


5. Create Zone Users


radosgw-admin user create --uid=us-east --display-name=Region-US Zone-East 
--name client.radosgw.us-east-1 --system
radosgw-admin user create --uid=us-west --display-name=Region-US Zone-West 
--name client.radosgw.us-west-1 --system
Does us-east zone have to create uid us-west?
Does us-west zone have to create uid us-east?


6. about secondary region


Create zones from master region in the secondary region.
Create zones from secondary region in the master region.


Are these two steps meant to ensure that the two regions have the same pools?
Can anyone help?
Thanks!


Best Wishes!
wsnote


[ceph-users] how to bind customer's domain to ceph?

2014-04-03 Thread wsnote
Hello, everyone!
I have installed ceph radosgw. My domain is cephtest.com, and a new bucket's
domain is {bucket-name}.cephtest.com.
Now a customer has his own domain, such as domain.com. He wants to bind
domain.com to {bucket-name}.cephtest.com.
Then he can download a file via domain.com/filename instead of
{bucket-name}.cephtest.com/filename.
How can I do it?
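For reference, the usual S3-style answer is a sketch like the following. It assumes the gateway resolves buckets from the HTTP Host header, which requires `rgw dns name` to be set; in practice the customer should use a subdomain (e.g. files.domain.com), since a CNAME at the zone apex is not allowed:

```
; gateway's ceph.conf (fragment)
[client.radosgw.gateway]
rgw dns name = cephtest.com

; customer's DNS zone (fragment): point their hostname at the bucket host
files.domain.com.   IN   CNAME   {bucket-name}.cephtest.com.
```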
Thanks!


[ceph-users] Does rgw support lighttpd?

2014-03-28 Thread wsnote
Hello, everyone!
I have installed ceph rgw in CentOS 6.5.
Now I want to change the web server from httpd to lighttpd, but I haven't found
any useful information.
Has anyone done this? What do I have to pay attention to?
Thanks!


Re: [ceph-users] help with ceph radosgw configure

2014-03-15 Thread wsnote
-
12. vi /var/www/html/s3gw.fcgi and chmod +x  /var/www/html/s3gw.fcgi
-
#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
-
13. rm -rf /tmp/radosgw.sock
14. start radosgw
chkconfig --add ceph-radosgw
chkconfig ceph-radosgw on
service ceph -a restart
service httpd restart
service ceph-radosgw start
service ceph-radosgw status
15. add user
radosgw-admin user create --uid admin --display-name admin


Is there something wrong with my rgw.conf or httpd.conf?




At 2014-03-15 01:07:00, Yehuda Sadeh yeh...@inktank.com wrote:
You might have a default web server set up on apache. Remove it and
restart apache.
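On CentOS, that default page usually comes from Apache's welcome config. A sketch of disabling it (run here against a temp copy so nothing real is touched; on a real box the file is /etc/httpd/conf.d/welcome.conf):

```shell
# Comment out every line of the welcome config, then restart httpd.
conf=$(mktemp)
printf '%s\n' '<LocationMatch "^/+$">' '    Options -Indexes' '</LocationMatch>' > "$conf"
sed -i 's/^/#/' "$conf"       # on a real box: /etc/httpd/conf.d/welcome.conf
first=$(head -n 1 "$conf")
echo "$first"                 # every line is now commented out
# service httpd restart
```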

On Fri, Mar 14, 2014 at 8:03 AM, wsnote wsn...@163.com wrote:
 OS: CentOS 6.5
 version: ceph 0.67.7

 I have configured radosgw and start it.
 When I surfed https://hostname:65443/, I thought it should be
 
 <ListAllMyBucketsResult>
  <Owner>
   <ID>anonymous</ID>
   <DisplayName/>
  </Owner>
  <Buckets/>
 </ListAllMyBucketsResult>
 -
 But, what I saw is
 ---
 Index of /

 [ICO] Name Last modified Size Description
 [   ] s3gw.fcgi 14-Mar-2014 21:16 81
 Apache/2.2.22 (Fedora) Server at ceph65 Port 65443
 ---

 What's the possible reason for this situation?
 Thanks.





[ceph-users] help with ceph radosgw configure

2014-03-14 Thread wsnote
OS: CentOS 6.5
version: ceph 0.67.7


I have configured radosgw and start it.
When I surfed https://hostname:65443/, I thought it should be

<ListAllMyBucketsResult>
 <Owner>
  <ID>anonymous</ID>
  <DisplayName/>
 </Owner>
 <Buckets/>
</ListAllMyBucketsResult>
-
But, what I saw is
---
Index of /


[ICO]NameLast modifiedSizeDescription
[   ]s3gw.fcgi14-Mar-2014 21:16 81 
Apache/2.2.22 (Fedora) Server at ceph65 Port 65443
---


What's the possible reason for this situation?
Thanks.


Re: [ceph-users] how to configure ceph object gateway

2014-03-11 Thread wsnote
Thanks for your reply!
I have tried it, but it didn't help.


I installed ceph in 3 servers called ceph69, ceph70, ceph71.


All my steps are as follows:
1. vi /etc/ceph/ceph.conf
add these content:


[client.radosgw.gateway]
host = {host-name}
keyring = /etc/ceph/keyring.radosgw.gateway
rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log


2. copy ceph.conf to another 2 server.


cd /etc/ceph
ssh ceph70 tee /etc/ceph/ceph.conf < ceph.conf
ssh ceph71 tee /etc/ceph/ceph.conf < ceph.conf


3. mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway


4. vi /etc/httpd/conf.d/fastcgi.conf, modify FastCgiWrapper to off, and add 
FastCgiExternalServer.
FastCgiWrapper Off
FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock


5. vi /etc/httpd/conf/httpd.conf
add these content:


<VirtualHost *:80>
ServerName ceph69
ServerAdmin zhan...@chinanetcenter.com
DocumentRoot /var/www
<IfModule mod_fastcgi.c>
<Directory /var/www>
Options +ExecCGI
AllowOverride All
SetHandler fastcgi-script
Order allow,deny
Allow from all
AuthBasicAuthoritative Off
</Directory>
</IfModule>
AllowEncodedSlashes On
ErrorLog /var/log/httpd/error.log
CustomLog /var/log/httpd/access.log combined
ServerSignature Off
</VirtualHost>
RewriteEngine On
RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]


6. vi /etc/httpd/conf.d/ssl.conf, and modify the following content.
SSLCertificateFile /etc/pki/tls/certs/ca.crt
SSLCertificateKeyFile /etc/pki/tls/private/ca.key


7. vi /var/www/s3gw.fcgi and chmod +x /var/www/s3gw.fcgi 
add these content:


#!/bin/sh
exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway


8. GENERATE A KEYRING AND KEY FOR THE GATEWAY
ceph-authtool --create-keyring /etc/ceph/keyring.radosgw.gateway
chmod +r /etc/ceph/keyring.radosgw.gateway


ceph-authtool /etc/ceph/keyring.radosgw.gateway -n client.radosgw.gateway 
--gen-key
ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow 
rw' /etc/ceph/keyring.radosgw.gateway


9. ADD TO CEPH KEYRING ENTRIES
ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i 
/etc/ceph/keyring.radosgw.gateway
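A quick sanity check after step 9 (a sketch against a local file; the key value below is a placeholder, and on the cluster itself `ceph auth get client.radosgw.gateway` should show the same entry):

```shell
# Grep a gateway keyring for the expected caps; a real keyring produced by
# steps 8-9 should contain the same two caps lines.
keyring=$(mktemp)
cat > "$keyring" <<'EOF'
[client.radosgw.gateway]
        key = AQExampleExamplePlaceholderKey==
        caps mon = "allow rw"
        caps osd = "allow rwx"
EOF
grep -q 'caps osd = "allow rwx"' "$keyring" && ok=yes || ok=no
echo "caps present: $ok"
# on the cluster: ceph auth get client.radosgw.gateway
```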


10. RESTART SERVICES AND START THE GATEWAY
[root@ceph69 ceph]# service ceph restart
=== mon.0 === 
=== mon.0 === 
Stopping Ceph mon.0 on ceph69...kill 9444...done
=== mon.0 === 
Starting Ceph mon.0 on ceph69...
=== mds.0 === 
=== mds.0 === 
Stopping Ceph mds.0 on ceph69...kill 9518...done
=== mds.0 === 
Starting Ceph mds.0 on ceph69...
starting mds.0 at :/0
=== osd.0 === 
=== osd.0 === 
Stopping Ceph osd.0 on ceph69...kill 9677...done
=== osd.0 === 
Mounting Btrfs on ceph69:/data/osd.0
Scanning for Btrfs filesystems
create-or-move updated item name 'osd.0' weight 0.04 at location 
{host=ceph69,root=default} to crush map
Starting Ceph osd.0 on ceph69...
starting osd.0 at :/0 osd_data /data/osd.0 /data/osd.0/journal
[root@ceph69 ceph]# service httpd restart
Stopping httpd:[  OK  ]
Starting httpd:[  OK  ]
[root@ceph69 ceph]# /etc/init.d/ceph-radosgw start
Starting radosgw instance(s)...
2014-03-07 11:41:55.743996 7fdb098c4820 -1 WARNING: libcurl doesn't support 
curl_multi_wait()
2014-03-07 11:41:55.744001 7fdb098c4820 -1 WARNING: cross zone / region 
transfer performance may be affected
Starting client.radosgw.gateway... [  OK  ]


11. CREATE A GATEWAY USER
[root@ceph69 conf.d]# radosgw-admin user create --uid=wsnote 
--display-name=wsnote --email=wsn...@163.com
{ "user_id": "wsnote",
  "display_name": "wsnote",
  "email": "wsn...@163.com",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
        { "user": "wsnote",
          "access_key": "8BRTJ746Q6AC38MAF5EO",
          "secret_key": "M0dfKnZuwANWefPyujHzABFJPbCMfDDxPAw4vxAU"}],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": []}
  
The following content is the /etc/ceph/ceph.conf:


; global
[global]
auth supported = none
max open files = 131072
log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid


; monitors
[mon]
mon data = /data/$name


[mon.0]
host = ceph69
mon addr = 121.205.7.69:6789






; mds
[mds]
keyring = /data/keyring.$name


[mds.0]
host = ceph69


; osd
[osd]
osd data = /data/$name
osd journal = /data/$name/journal
osd journal size = 1000 ; journal size, in megabytes
osd mkfs type = btrfs
osd mount options btrfs = rw,noatime


[osd.0]
host = ceph69
devs = /dev/sda3


[osd.1]
host = ceph70

Re: [ceph-users] how to configure ceph object gateway

2014-03-10 Thread wsnote
You must also create an rgw.conf file in the /etc/apache2/sites-enabled 
directory. 
There is no /etc/apache2/sites-enabled directory on CentOS, so I didn't create
rgw.conf; I put the content of rgw.conf into httpd.conf.


sudo a2ensite rgw.conf
sudo a2dissite default


These two commands were not found on CentOS.
I can start /etc/init.d/ceph-radosgw and create a gateway user, but when I use
the API it returns 403 Forbidden.
I didn't add a wildcard record to DNS, because I don't use a domain.


At 2014-03-11 10:47:57, Jean-Charles LOPEZ jeanchlo...@mac.com wrote:
Hi,

what commands are “not found”?

This page for configuring the RGW works fine as far as I know as I used it no 
later than a week ago.

Can you please give us more details? What is your layout (radosgw installed on 
a ceph node, mon node, standalone node)?

Note: In order to get it running, remember you need to have a web server 
installed and running (apache), ceph base packages obviously, swift if you 
want to use the swift tool, s3cmd also, s3curl, …

JC

On Mar 10, 2014, at 19:35, wsnote wsn...@163.com wrote:

 OS: CentOS 6.4
 version: ceph 0.67.7
 
 Hello, everyone.
 With the help of the documentation, I have installed the ceph gateway.
 But I don't know how to configure it. Many commands from the page
 http://ceph.com/docs/master/radosgw/config/ are not found. I think it's
 written for Ubuntu.
 can anyone help?
 Thanks!
 
 