nginx for rgw fcgi frontend

2015-09-18 Thread Zhou, Yuan
Hi Yehuda,

I was trying to do some tests on nginx over rgw and ran into an issue on the
PUT side:

$ swift upload con ceph_fuse.cc
Object PUT failed: http://localhost/swift/v1/con/ceph_fuse.cc 411 Length 
Required   MissingContentLength

However, GET/HEAD/POST requests all work. From past mail on ceph-users, nginx
should work well. There's no such issue if I switch to the civetweb frontend.
Has anything changed in the fcgi frontend? I'm testing on the master branch.

Here's the request log; CONTENT_LENGTH is actually present:

http://paste2.org/YDJFYIcp



rgw part of ceph.conf

        rgw frontends = fastcgi
        rgw dns name = localhost
        rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
        rgw print continue = false
...


Nginx site.conf:

server {
    listen 80;

    client_max_body_size 10g;

    access_log /dev/stdout;
    error_log /dev/stderr;

    location / {
        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;

        if ($request_method = PUT) {
            rewrite ^ /PUT$request_uri;
        }

        include fastcgi_params;

        fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
    }

    location /PUT/ {
        internal;

        fastcgi_pass_header Authorization;
        fastcgi_pass_request_headers on;

        include fastcgi_params;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param HTTP_CONTENT_LENGTH $content_length;

        fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
    }
}
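
FWIW, to see what nginx itself computed for the request, one option is a
debug log format. A minimal sketch, assuming the log_format goes in the
enclosing http{} block (the format name here is made up):

    log_format putdebug '$request_method $uri len=$content_length';
    access_log /dev/stdout putdebug;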



Sincerely, Yuan



RE: Newstore create failed with fio_objectstore

2015-09-18 Thread James (Fei) Liu-SSI
Hi Xiaoxi,
  We fixed the problems that came from fio-objectstore and the Makefile. We are
able to enable both rocksdb and newstore at the same time with fio-objectstore.

 However, fio got the errors below while issuing commands to newstore.

2015-09-18 16:26:29.471099 7ffd86aec700 -1 newstore(/mnt/nvmedevice/) 
_aio_thread got (14) Bad address
2015-09-18 16:26:29.471106 7ffd86aec700 -1 newstore(/mnt/nvmedevice/) 
_aio_thread got (14) Bad address
2015-09-18 16:26:29.471112 7ffd86aec700 -1 newstore(/mnt/nvmedevice/) 
_aio_thread got (14) Bad address
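
Error (14) here is EFAULT ("Bad address"), i.e. the kernel rejected a
user-space buffer pointer, which in an aio completion path suggests an iocb
pointed at an invalid buffer address. A quick way to decode errno values like
this (plain shell, nothing Ceph-specific):

$ python -c 'import errno, os; print(errno.errorcode[14], os.strerror(14))'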

We are debugging and trying to fix them. Thanks.

Regards,
James



-Original Message-
From: James (Fei) Liu-SSI 
Sent: Friday, September 18, 2015 2:55 PM
To: 'Chen, Xiaoxi'
Cc: 'ceph-devel@vger.kernel.org'
Subject: RE: Newstore create failed with fio_objectstore

Hi Xiaoxi and Cephers,
 Thanks for your feedback. I am trying to get newstore working with rocksdb in
the fio-objectstore plugin.

 Here are the steps I did:
1. Reconfigure ceph with RocksDB built in:
   ./configure --with-fio-dir=./src/fio/ --with-librocksdb-static
2. sudo ./fio/fio ./test/objectstore.fio

I always get an "undefined symbol: RocksDBStore" error:
 
 fio: engine ./.libs/libfio_ceph_objectstore.so not loadable
 fio: failed to load engine ./.libs/libfio_ceph_objectstore.so
 Bad option 
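
FWIW, an undefined-symbol failure when fio dlopen()s the engine usually means
the plugin .so was built without the rocksdb objects linked in. A quick check
(standard binutils; paths as in the error above):

 $ ldd ./.libs/libfio_ceph_objectstore.so
 $ nm -C -u ./.libs/libfio_ceph_objectstore.so | grep -i rocksdb

If the second command lists RocksDBStore symbols as undefined, the link step
is the place to look.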

Re: [Ceph-community] Getting WARN in __kick_osd_requests doing stress testing

2015-09-18 Thread Abhishek L
Redirecting to ceph-devel, where such a question might have a better
chance of a reply.

On Fri, Sep 18, 2015 at 4:03 AM,   wrote:
> I'm running a 3-node cluster and doing osd/rbd creation and deletion, and
> ran across this WARN. Note, it only happened once (on one rbd add) after
> approximately 500 cycles of the test, but I was wondering if someone can
> explain to me why this warning would be happening, and how I can prevent it.
>
> Here is what my test script is doing:
>
> while(1):
>     create 5 ceph pools   - sleep 2 between each pool create
>     sleep 5
>     create 5 ceph volumes - sleep 2 between each volume create
>     sleep 5
>     delete 5 ceph volumes - sleep 2 between each volume delete
>     sleep 5
>     delete 5 ceph pools   - sleep 2 between each pool delete
>     sleep 5
>
>
> 333940 Sep 17 00:31:54 10.0.41.9 [18372.272771] Call Trace:
> 333941 Sep 17 00:31:54 10.0.41.9 [18372.273489]  []
> dump_stack+0x45/0x57
> 333942 Sep 17 00:31:54 10.0.41.9 [18372.274226]  []
> warn_slowpath_common+0x97/0xe0
> 333943 Sep 17 00:31:54 10.0.41.9 [18372.274923]  []
> warn_slowpath_null+0x1a/0x20
> 333944 Sep 17 00:31:54 10.0.41.9 [18372.275635]  []
> __kick_osd_requests+0x1dc/0x240 [libceph]
> 333945 Sep 17 00:31:54 10.0.41.9 [18372.276305]  []
> osd_reset+0x57/0xa0 [libceph]
> 333946 Sep 17 00:31:54 10.0.41.9 [18372.276962]  []
> con_work+0x112/0x290 [libceph]
> 333947 Sep 17 00:31:54 10.0.41.9 [18372.277608]  []
> process_one_work+0x144/0x470
> 333948 Sep 17 00:31:54 10.0.41.9 [18372.278247]  []
> worker_thread+0x11e/0x450
> 333949 Sep 17 00:31:54 10.0.41.9 [18372.278880]  [] ?
> create_worker+0x1f0/0x1f0
> 333950 Sep 17 00:31:54 10.0.41.9 [18372.279543]  []
> kthread+0xc9/0xe0
> 333951 Sep 17 00:31:54 10.0.41.9 [18372.280174]  [] ?
> flush_kthread_worker+0x90/0x90
> 333952 Sep 17 00:31:54 10.0.41.9 [18372.280803]  []
> ret_from_fork+0x58/0x90
> 333953 Sep 17 00:31:54 10.0.41.9 [18372.281430]  [] ?
> flush_kthread_worker+0x90/0x90
>
> static void __kick_osd_requests(struct ceph_osd_client *osdc,
>                                 struct ceph_osd *osd)
> {
>         :
>         list_for_each_entry_safe(req, nreq, &osd->o_linger_requests,
>                                  r_linger_osd_item) {
>                 WARN_ON(!list_empty(&req->r_req_lru_item));
>                 __kick_linger_request(req);
>         }
>         :
> }
>
> - Bart
>
>
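
For reference, the quoted loop might look like the following as a shell
script (a sketch only; pool names, pg counts, and volume sizes are made up):

while true; do
    for i in 1 2 3 4 5; do ceph osd pool create pool$i 64; sleep 2; done
    sleep 5
    for i in 1 2 3 4 5; do rbd create pool$i/vol$i --size 1024; sleep 2; done
    sleep 5
    for i in 1 2 3 4 5; do rbd rm pool$i/vol$i; sleep 2; done
    sleep 5
    for i in 1 2 3 4 5; do
        ceph osd pool delete pool$i pool$i --yes-i-really-really-mean-it
        sleep 2
    done
    sleep 5
done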


RE: nginx for rgw fcgi frontend

2015-09-18 Thread Zhou, Yuan
Thanks Yehuda for the quick response!

My nginx is 1.4.6 (old, but the default on Ubuntu Trusty) and for some reason
it's sending both CONTENT_LENGTH and HTTP_CONTENT_LENGTH to the backend even
if I comment out the fastcgi_params part in the site conf.

With the config below the issue is fixed:
 
rgw content length compat = true
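
For completeness, the full rgw section now reads (the stanza name is from my
setup and is an assumption on my side; yours may differ):

[client.radosgw.gateway]
        rgw frontends = fastcgi
        rgw dns name = localhost
        rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
        rgw print continue = false
        rgw content length compat = true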


Thanks, -yuan

-Original Message-
From: Yehuda Sadeh-Weinraub [mailto:yeh...@redhat.com] 
Sent: Friday, September 18, 2015 11:29 PM
To: Zhou, Yuan
Cc: Ceph Development
Subject: Re: nginx for rgw fcgi frontend

On Thu, Sep 17, 2015 at 11:38 PM, Zhou, Yuan  wrote:
> Hi Yehuda,
>
> I was trying to do some tests on nginx over rgw and ran into an issue on
> the PUT side:
>
> $ swift upload con ceph_fuse.cc
> Object PUT failed: http://localhost/swift/v1/con/ceph_fuse.cc 411 Length 
> Required   MissingContentLength
>
> However, GET/HEAD/POST requests all work. From past mail on ceph-users,
> nginx should work well. There's no such issue if I switch to the civetweb
> frontend. Has anything changed in the fcgi frontend? I'm testing on the
> master branch.
>
> Here's the request log; CONTENT_LENGTH is actually present:
>
> http://paste2.org/YDJFYIcp
>
>

What version are you running? Note that you're getting an HTTP_CONTENT_LENGTH
header instead of a CONTENT_LENGTH header. There should be some support for
that on the rgw side now, but maybe you can get nginx to send the appropriate
header?

Yehuda


>
> rgw part of ceph.conf
> 
>         rgw frontends = fastcgi
>         rgw dns name = localhost
>         rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
>         rgw print continue = false
> ...
>
>
> Nginx site.conf:
>
> server {
>     listen 80;
>
>     client_max_body_size 10g;
>
>     access_log /dev/stdout;
>     error_log /dev/stderr;
>
>     location / {
>         fastcgi_pass_header Authorization;
>         fastcgi_pass_request_headers on;
>
>         if ($request_method = PUT) {
>             rewrite ^ /PUT$request_uri;
>         }
>
>         include fastcgi_params;
>
>         fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
>     }
>
>     location /PUT/ {
>         internal;
>
>         fastcgi_pass_header Authorization;
>         fastcgi_pass_request_headers on;
>
>         include fastcgi_params;
>         fastcgi_param CONTENT_LENGTH $content_length;
>         fastcgi_param HTTP_CONTENT_LENGTH $content_length;
>
>         fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
>     }
> }
>
>
>
> Sincerely, Yuan
>


RE: Newstore create failed with fio_objectstore

2015-09-18 Thread James (Fei) Liu-SSI
Hi Xiaoxi and Cephers,
 Thanks for your feedback. I am trying to get newstore working with rocksdb in
the fio-objectstore plugin.

 Here are the steps I did:
1. Reconfigure ceph with RocksDB built in:
   ./configure --with-fio-dir=./src/fio/ --with-librocksdb-static
2. sudo ./fio/fio ./test/objectstore.fio

I always get an "undefined symbol: RocksDBStore" error:
 
 fio: engine ./.libs/libfio_ceph_objectstore.so not loadable
 fio: failed to load engine ./.libs/libfio_ceph_objectstore.so
 Bad option 

Re: [Ceph-community] Getting WARN in __kick_osd_requests doing stress testing

2015-09-18 Thread Ilya Dryomov
On Fri, Sep 18, 2015 at 9:48 AM, Abhishek L
 wrote:
> Redirecting to ceph-devel, where such a question might have a better
> chance of a reply.
>
> On Fri, Sep 18, 2015 at 4:03 AM,   wrote:
>> I'm running a 3-node cluster and doing osd/rbd creation and deletion, and
>> ran across this WARN. Note, it only happened once (on one rbd add) after
>> approximately 500 cycles of the test, but I was wondering if someone can
>> explain to me why this warning would be happening, and how I can prevent it.
>>
>> Here is what my test script is doing:
>>
>> while(1):
>>     create 5 ceph pools   - sleep 2 between each pool create
>>     sleep 5
>>     create 5 ceph volumes - sleep 2 between each volume create
>>     sleep 5
>>     delete 5 ceph volumes - sleep 2 between each volume delete
>>     sleep 5
>>     delete 5 ceph pools   - sleep 2 between each pool delete
>>     sleep 5
>>
>>
>> 333940 Sep 17 00:31:54 10.0.41.9 [18372.272771] Call Trace:
>> 333941 Sep 17 00:31:54 10.0.41.9 [18372.273489]  []
>> dump_stack+0x45/0x57
>> 333942 Sep 17 00:31:54 10.0.41.9 [18372.274226]  []
>> warn_slowpath_common+0x97/0xe0
>> 333943 Sep 17 00:31:54 10.0.41.9 [18372.274923]  []
>> warn_slowpath_null+0x1a/0x20
>> 333944 Sep 17 00:31:54 10.0.41.9 [18372.275635]  []
>> __kick_osd_requests+0x1dc/0x240 [libceph]
>> 333945 Sep 17 00:31:54 10.0.41.9 [18372.276305]  []
>> osd_reset+0x57/0xa0 [libceph]
>> 333946 Sep 17 00:31:54 10.0.41.9 [18372.276962]  []
>> con_work+0x112/0x290 [libceph]
>> 333947 Sep 17 00:31:54 10.0.41.9 [18372.277608]  []
>> process_one_work+0x144/0x470
>> 333948 Sep 17 00:31:54 10.0.41.9 [18372.278247]  []
>> worker_thread+0x11e/0x450
>> 333949 Sep 17 00:31:54 10.0.41.9 [18372.278880]  [] ?
>> create_worker+0x1f0/0x1f0
>> 333950 Sep 17 00:31:54 10.0.41.9 [18372.279543]  []
>> kthread+0xc9/0xe0
>> 333951 Sep 17 00:31:54 10.0.41.9 [18372.280174]  [] ?
>> flush_kthread_worker+0x90/0x90
>> 333952 Sep 17 00:31:54 10.0.41.9 [18372.280803]  []
>> ret_from_fork+0x58/0x90
>> 333953 Sep 17 00:31:54 10.0.41.9 [18372.281430]  [] ?
>> flush_kthread_worker+0x90/0x90
>>
>> static void __kick_osd_requests(struct ceph_osd_client *osdc,
>>                                 struct ceph_osd *osd)
>> {
>>         :
>>         list_for_each_entry_safe(req, nreq, &osd->o_linger_requests,
>>                                  r_linger_osd_item) {
>>                 WARN_ON(!list_empty(&req->r_req_lru_item));
>>                 __kick_linger_request(req);
>>         }
>>         :
>> }

What is your kernel version?

There is no mention of rbd map/unmap in the pseudo code you provided.
How are you mapping/unmapping those rbd images?  More details or the
script itself would be nice to see.
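
(For comparison, the krbd sequence I would expect to see is something like
the following; a sketch with made-up names, and the device node is whatever
the kernel assigns:)

$ rbd create pool1/vol1 --size 1024
$ sudo rbd map pool1/vol1        # device appears as /dev/rbd<N>
$ sudo rbd unmap /dev/rbd0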

Thanks,

Ilya


Re: What should be in the next hammer/firefly release ?

2015-09-18 Thread Loic Dachary
Hi Robert,

http://tracker.ceph.com/issues/10399 was backported to hammer and will be in 
v0.94.4 (see http://tracker.ceph.com/issues/12751 for details).

Cheers

On 18/09/2015 02:44, Robert LeBlanc wrote:
> Can we get http://tracker.ceph.com/issues/10399 into hammer? We hit this
> today.
> 
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
> 
> 
> On Thu, Sep 10, 2015 at 4:57 AM, Miyamae, Takeshi  wrote:
>> Hi Loic,
> 
>> As the last pull request, #5257, was committed into the main trunk, we
>> believe all the SHEC code in the main trunk is ready to be backported
>> into the Hammer branch. What can we do at this moment?
> 
>> Best regards,
>> Takeshi Miyamae
> 
>> -Original Message-
>> From: Miyamae, Takeshi/宮前 剛
>> Sent: Thursday, September 3, 2015 4:09 PM
>> To: 'ceph-devel-ow...@vger.kernel.org'
>> Cc: Paul Von-Stamwitz (pvonstamw...@us.fujitsu.com); Toshine, Naoyoshi/利根 
>> 直佳; Shiozawa, Kensuke/塩沢 賢輔; Nakao, Takanori/中尾 鷹詔 
>> (nakao.takan...@jp.fujitsu.com)
>> Subject: Re: What should be in the next hammer/firefly release ?
> 
>> Dear Loic,
> 
>> We would like the following two patches to be backported to hammer v0.94.4.
>> (And our eventual wish is to backport these patches to RHCS v1.3.) Would
>> that be possible? If so, please let us know what should be started first.
>> (Caution: #5257 has not been committed to the master branch yet.)
> 
>> erasure-code: shec plugin feature #5493
>> https://github.com/ceph/ceph/pull/5493
> 
>> erasure code: shec performance optimization by decoding cache #5257
>> https://github.com/ceph/ceph/pull/5257
> 
>> Best regards,
>> Takeshi Miyamae
> 
>> -Original Message-
>> From: Loic Dachary  dachary.org>
>> Subject: What should be in the next hammer/firefly release ?
>> Newsgroups: gmane.comp.file-systems.ceph.devel
>> Date: 2015-09-02 11:00:53 GMT
>>
>> Hi,
> 
>> I added a link to
> 
>> http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO#Overview-of-the-backports-in-progress
> 
>> to show all issues that should be in the next point release for
> 
>> hammer v0.94.4 : http://tracker.ceph.com/versions/495
>> firefly v0.80.11 : http://tracker.ceph.com/versions/480
> 
>> Cheers
> 
>> --
>> Loïc Dachary, Artisan Logiciel Libre
> 

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: nginx for rgw fcgi frontend

2015-09-18 Thread Yehuda Sadeh-Weinraub
On Thu, Sep 17, 2015 at 11:38 PM, Zhou, Yuan  wrote:
> Hi Yehuda,
>
> I was trying to do some tests on nginx over rgw and ran into an issue on
> the PUT side:
>
> $ swift upload con ceph_fuse.cc
> Object PUT failed: http://localhost/swift/v1/con/ceph_fuse.cc 411 Length 
> Required   MissingContentLength
>
> However, GET/HEAD/POST requests all work. From past mail on ceph-users,
> nginx should work well. There's no such issue if I switch to the civetweb
> frontend. Has anything changed in the fcgi frontend? I'm testing on the
> master branch.
>
> Here's the request log; CONTENT_LENGTH is actually present:
>
> http://paste2.org/YDJFYIcp
>
>

What version are you running? Note that you're getting an HTTP_CONTENT_LENGTH
header instead of a CONTENT_LENGTH header. There should be some support for
that on the rgw side now, but maybe you can get nginx to send the appropriate
header?

Yehuda


>
> rgw part of ceph.conf
> 
>         rgw frontends = fastcgi
>         rgw dns name = localhost
>         rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
>         rgw print continue = false
> ...
>
>
> Nginx site.conf:
>
> server {
>     listen 80;
>
>     client_max_body_size 10g;
>
>     access_log /dev/stdout;
>     error_log /dev/stderr;
>
>     location / {
>         fastcgi_pass_header Authorization;
>         fastcgi_pass_request_headers on;
>
>         if ($request_method = PUT) {
>             rewrite ^ /PUT$request_uri;
>         }
>
>         include fastcgi_params;
>
>         fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
>     }
>
>     location /PUT/ {
>         internal;
>
>         fastcgi_pass_header Authorization;
>         fastcgi_pass_request_headers on;
>
>         include fastcgi_params;
>         fastcgi_param CONTENT_LENGTH $content_length;
>         fastcgi_param HTTP_CONTENT_LENGTH $content_length;
>
>         fastcgi_pass unix:/var/run/ceph/ceph.radosgw.gateway.fastcgi.sock;
>     }
> }
>
>
>
> Sincerely, Yuan
>


Re: What should be in the next hammer/firefly release ?

2015-09-18 Thread Robert LeBlanc

Thanks, I didn't find the original ticket number in the list.

Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Fri, Sep 18, 2015 at 12:57 AM, Loic Dachary  wrote:
> Hi Robert,
>
> http://tracker.ceph.com/issues/10399 was backported to hammer and will be in 
> v0.94.4 (see http://tracker.ceph.com/issues/12751 for details).
>
> Cheers
>
> On 18/09/2015 02:44, Robert LeBlanc wrote:
>> Can we get http://tracker.ceph.com/issues/10399 into hammer? We hit this
>> today.
>> 
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>
>>
>> On Thu, Sep 10, 2015 at 4:57 AM, Miyamae, Takeshi  wrote:
>>> Hi Loic,
>>
>>> As the last pull request, #5257, was committed into the main trunk, we
>>> believe all the SHEC code in the main trunk is ready to be backported
>>> into the Hammer branch. What can we do at this moment?
>>
>>> Best regards,
>>> Takeshi Miyamae
>>
>>> -Original Message-
>>> From: Miyamae, Takeshi/宮前 剛
>>> Sent: Thursday, September 3, 2015 4:09 PM
>>> To: 'ceph-devel-ow...@vger.kernel.org'
>>> Cc: Paul Von-Stamwitz (pvonstamw...@us.fujitsu.com); Toshine, Naoyoshi/利根 
>>> 直佳; Shiozawa, Kensuke/塩沢 賢輔; Nakao, Takanori/中尾 鷹詔 
>>> (nakao.takan...@jp.fujitsu.com)
>>> Subject: Re: What should be in the next hammer/firefly release ?
>>
>>> Dear Loic,
>>
>>> We would like the following two patches to be backported to hammer v0.94.4.
>>> (And our eventual wish is to backport these patches to RHCS v1.3.) Would
>>> that be possible? If so, please let us know what should be started first.
>>> (Caution: #5257 has not been committed to the master branch yet.)
>>
>>> erasure-code: shec plugin feature #5493
>>> https://github.com/ceph/ceph/pull/5493
>>
>>> erasure code: shec performance optimization by decoding cache #5257
>>> https://github.com/ceph/ceph/pull/5257
>>
>>> Best regards,
>>> Takeshi Miyamae
>>
>>> -Original Message-
>>> From: Loic Dachary  dachary.org>
>>> Subject: What should be in the next hammer/firefly release ?
>>> Newsgroups: gmane.comp.file-systems.ceph.devel
>>> Date: 2015-09-02 11:00:53 GMT
>>>
>>> Hi,
>>
>>> I added a link to
>>
>>> http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO#Overview-of-the-backports-in-progress
>>
>>> to show all issues that should be in the next point release for
>>
>>> hammer v0.94.4 : http://tracker.ceph.com/versions/495
>>> firefly v0.80.11 : http://tracker.ceph.com/versions/480
>>
>>> Cheers
>>
>>> --
>>> Loïc Dachary, Artisan Logiciel Libre
>>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
