Re: RGW Keystone interaction (was [ceph-users] Ceph.conf)

2015-09-13 Thread Shinobu Kinjo
> Looked a bit more into this, swift apis seem to support the use
> of an admin tenant, user & token for validating the bearer token,
> similar to other OpenStack services, which use service tenant
> credentials for authentication.

Yes, it just works as middleware under Keystone.

>  Though it needs documentation, the configurables `rgw keystone admin
> tenant`, `rgw keystone admin user` and `rgw keystone admin password`
> make this possible, so as to avoid configuring the keystone shared
> admin password completely.

It's getting better but still problematic:

keystonemiddleware.auth_token

> S3 APIs with keystone seem to be a bit more different, apparently

Yes, from a code point of view the difference is small. But from an
architecture point of view, I think the difference is not so small.

> s3tokens interface does seem to allow authenticating without an
> `X-Auth-Token` in the headers and validates based on the access key,
> secret key provided to it. So basically not configuring
> `rgw_keystone_admin_password` seems to work; can you check whether this
> is the case for you as well?

Yes, I think so.
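
For reference, the service-credential style being discussed boils down to a
ceph.conf fragment like the following (a minimal sketch; the client section
name, tenant, user and password are assumptions, not anyone's actual setup):

  [client.rgw.gateway]                    ; hypothetical RGW instance
  rgw keystone url = http://keystone:35357
  ; service credentials instead of the shared admin token:
  rgw keystone admin tenant = service     ; assumed tenant
  rgw keystone admin user = swift         ; assumed user
  rgw keystone admin password = secret    ; assumed password
  rgw keystone accepted roles = admin, _member_

With these set, the shared `rgw keystone admin token` can be left out
entirely, which is the point being made above.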

I am not saying that Keystone is the problem. Keystone itself is quite fine.

But what we have to keep in mind is that OpenStack has grown much bigger
than we expected just 4 years ago.

There are already additional components, such as TripleO, that were totally
out of OpenStack's scope at the beginning.

So it's impossible for Keystone to properly handle every single identity
request coming from every OpenStack service.

With respect to our page:

  http://ceph.com/docs/giant/radosgw/keystone/

It would probably be better to explain why we are using v1 rather than v2,
if we have a specific reason.

What do you think?

 Shinobu


Reply: 2 replications, flapping cannot stop for a very long time

2015-09-13 Thread zhao.ming...@h3c.com
hi, did you set both public_network and cluster_network, but cut off only the
cluster_network?
And do you have more than one osd on the same host?
=> yes, public network + cluster network, and I cut off the cluster network;
2 nodes, each node has several osds;

If so, maybe the cluster can not get stable: each osd has heartbeat peers (the
previous and next osd ids), and they exchange ping messages.
When you cut off the cluster_network, the peer osds on other hosts can not
receive the pings, so they report the osd failure to the MON; once the MON has
gathered enough reporters and reports, the osd will be marked down.
=> when an osd receives a new map in which it is marked down, it thinks the
MON wrongly marked it down. What will it do then - join the cluster again, or
some other action? Can you give me a more detailed explanation?

But the osd can still report to the MON because the public_network is ok, so
the MON thinks the osd was wrongly marked down and marks it UP again.
=> you mean that if the MON receives a message from this osd even ONE TIME, it
will mark this osd up?

So flapping happens again and again.
=> I also tried 3 replicas (public network + cluster network, 3 nodes, each
node with several osds); although flapping occurs, the cluster becomes stable
after several minutes.
Compared with that, in the 2-replica case I waited for the same interval and
the cluster could not become stable;
so I'm confused about the mechanism: how does the monitor decide which osd is
actually down?

thanks

-----Original Message-----
From: huang jun [mailto:hjwsm1...@gmail.com]
Sent: 13 September 2015 10:39
To: zhaomingyue 09440 (RD)
Cc: ceph-devel@vger.kernel.org
Subject: Re: 2 replications, flapping cannot stop for a very long time

hi, did you set both public_network and cluster_network, but cut off only the
cluster_network?
And do you have more than one osd on the same host?
If so, maybe the cluster can not get stable: each osd has heartbeat peers (the
previous and next osd ids), and they exchange ping messages.
When you cut off the cluster_network, the peer osds on other hosts can not
receive the pings, so they report the osd failure to the MON; once the MON has
gathered enough reporters and reports, the osd will be marked down.
But the osd can still report to the MON because the public_network is ok, so
the MON thinks the osd was wrongly marked down and marks it UP again.
So flapping happens again and again.

2015-09-12 20:26 GMT+08:00 zhao.ming...@h3c.com :
>
> Hi,
> I have been testing the reliability of ceph recently, and I have run into
> the flapping problem.
> I have 2 replicas and cut off the cluster network; now the flapping does not
> stop. I have waited more than 30 min, but the status of the osds is still
> not stable;
> I want to know: when the monitor receives reports from osds, how does it
> decide to mark an osd down?
> (reports && reporters && grace) need to satisfy some conditions; how is the
> grace calculated?
> And how long will the flapping last? Must the flapping be stopped by
> configuration, such as marking an osd lost?
> Can someone help me?
> Thanks~
> --
> ---
> This e-mail and its attachments contain confidential information from 
> H3C, which is intended only for the person or entity whose address is 
> listed above. Any use of the information contained herein in any way 
> (including, but not limited to, total or partial disclosure, 
> reproduction, or dissemination) by persons other than the intended
> recipient(s) is prohibited. If you receive this e-mail in error, 
> please notify the sender by phone or email immediately and delete it!



--
thanks
huangjun


Re: Reply: 2 replications, flapping cannot stop for a very long time

2015-09-13 Thread huang jun
2015-09-13 14:07 GMT+08:00 zhao.ming...@h3c.com :
> hi, did you set both public_network and cluster_network, but cut off only
> the cluster_network?
> And do you have more than one osd on the same host?
> => yes, public network + cluster network, and I cut off the cluster network;
> 2 nodes, each node has several osds;
>
> If so, maybe the cluster can not get stable: each osd has heartbeat peers
> (the previous and next osd ids), and they exchange ping messages.
> When you cut off the cluster_network, the peer osds on other hosts can not
> receive the pings, so they report the osd failure to the MON; once the MON
> has gathered enough reporters and reports, the osd will be marked down.
> => when an osd receives a new map in which it is marked down, it thinks the
> MON wrongly marked it down. What will it do then - join the cluster again,
> or some other action? Can you give me a more detailed explanation?

It will send a boot message to MON, and will be marked UP by MON.
>
> But the osd can still report to the MON because the public_network is ok, so
> the MON thinks the osd was wrongly marked down and marks it UP again.
> => you mean that if the MON receives a message from this osd even ONE TIME,
> it will mark this osd up?
>

> So flapping happens again and again.
> => I also tried 3 replicas (public network + cluster network, 3 nodes, each
> node with several osds); although flapping occurs, the cluster becomes
> stable after several minutes.
> Compared with that, in the 2-replica case I waited for the same interval and
> the cluster could not become stable;
> so I'm confused about the mechanism: how does the monitor decide which osd
> is actually down?
>
That is strange: if you cut off the cluster_network, the osds on the other
node can not receive the ping messages and will naturally conclude that the
osd has failed.
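
For reference, the knobs involved in this decision are roughly the following
(a sketch only; these are the usual OSD/MON failure-detection options, and
the defaults may differ between releases):

  [osd]
  osd heartbeat grace = 20               ; seconds of missed pings before a peer reports failure

  [mon]
  mon osd min down reporters = 2         ; distinct reporters needed before marking an osd down
  mon osd min down reports = 3           ; total failure reports needed
  mon osd adjust heartbeat grace = true  ; scale the grace by the osd's recorded laggy history

When `mon osd adjust heartbeat grace` is true, the effective grace is the base
grace scaled up by the osd's decayed "laggy" statistics, which is why it is
hard to predict exactly. Note also that several reports are needed to mark an
osd down, but a single boot message from the osd itself is enough to mark it
up again, which is why the two-node case can flap indefinitely. If you just
want the flapping to stop while the network is broken, the usual workaround is
to freeze the state with the cluster flags:

  ceph osd set nodown      # ignore failure reports for now
  ceph osd unset nodown    # back to normal once the network is restored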

> thanks
>
> -----Original Message-----
> From: huang jun [mailto:hjwsm1...@gmail.com]
> Sent: 13 September 2015 10:39
> To: zhaomingyue 09440 (RD)
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: 2 replications, flapping cannot stop for a very long time
>
> hi, did you set both public_network and cluster_network, but cut off only
> the cluster_network?
> And do you have more than one osd on the same host?
> If so, maybe the cluster can not get stable: each osd has heartbeat peers
> (the previous and next osd ids), and they exchange ping messages.
> When you cut off the cluster_network, the peer osds on other hosts can not
> receive the pings, so they report the osd failure to the MON; once the MON
> has gathered enough reporters and reports, the osd will be marked down.
> But the osd can still report to the MON because the public_network is ok, so
> the MON thinks the osd was wrongly marked down and marks it UP again.
> So flapping happens again and again.
>
> 2015-09-12 20:26 GMT+08:00 zhao.ming...@h3c.com :
>>
>> Hi,
>> I have been testing the reliability of ceph recently, and I have run into
>> the flapping problem.
>> I have 2 replicas and cut off the cluster network; now the flapping does
>> not stop. I have waited more than 30 min, but the status of the osds is
>> still not stable;
>> I want to know: when the monitor receives reports from osds, how does it
>> decide to mark an osd down?
>> (reports && reporters && grace) need to satisfy some conditions; how is the
>> grace calculated?
>> And how long will the flapping last? Must the flapping be stopped by
>> configuration, such as marking an osd lost?
>> Can someone help me?
>> Thanks~
>> --
>> ---
>> This e-mail and its attachments contain confidential information from
>> H3C, which is intended only for the person or entity whose address is
>> listed above. Any use of the information contained herein in any way
>> (including, but not limited to, total or partial disclosure,
>> reproduction, or dissemination) by persons other than the intended
>> recipient(s) is prohibited. If you receive this e-mail in error,
>> please notify the sender by phone or email immediately and delete it!
>
>
>
> --
> thanks
> huangjun



-- 
thanks
huangjun


[PATCH 33/39] rbd: drop null test before destroy functions

2015-09-13 Thread Julia Lawall
Remove unneeded NULL test.

The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@ expression x; @@
-if (x != NULL) {
  \(kmem_cache_destroy\|mempool_destroy\|dma_pool_destroy\)(x);
  x = NULL;
-}
// </smpl>

Signed-off-by: Julia Lawall 

---
 drivers/block/rbd.c |6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index d93a037..0507246 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -5645,10 +5645,8 @@ static int rbd_slab_init(void)
if (rbd_segment_name_cache)
return 0;
 out_err:
-   if (rbd_obj_request_cache) {
-   kmem_cache_destroy(rbd_obj_request_cache);
-   rbd_obj_request_cache = NULL;
-   }
+   kmem_cache_destroy(rbd_obj_request_cache);
+   rbd_obj_request_cache = NULL;
 
kmem_cache_destroy(rbd_img_request_cache);
rbd_img_request_cache = NULL;



[PATCH 00/39] drop null test before destroy functions

2015-09-13 Thread Julia Lawall
Recent commits to kernel/git/torvalds/linux.git have made the following
functions able to tolerate NULL arguments:

kmem_cache_destroy (commit 3942d29918522)
mempool_destroy (commit 4e3ca3e033d1)
dma_pool_destroy (commit 44d7175da6ea)

These patches remove the associated NULL tests for the files that I found
easy to compile test.  If these changes are OK, I will address the
remainder later.

---

 arch/x86/kvm/mmu.c |6 --
 block/bio-integrity.c  |7 --
 block/bio.c|7 --
 block/blk-core.c   |3 -
 block/elevator.c   |3 -
 drivers/atm/he.c   |7 --
 drivers/block/aoe/aoedev.c |3 -
 drivers/block/drbd/drbd_main.c |   21 ++-
 drivers/block/pktcdvd.c|3 -
 drivers/block/rbd.c|6 --
 drivers/dma/dmaengine.c|6 --
 drivers/firmware/google/gsmi.c |3 -
 drivers/gpu/drm/i915/i915_dma.c|   19 ++
 drivers/iommu/amd_iommu_init.c |7 --
 drivers/md/bcache/bset.c   |3 -
 drivers/md/bcache/request.c|3 -
 drivers/md/bcache/super.c  |9 +--
 drivers/md/dm-bufio.c  |3 -
 drivers/md/dm-cache-target.c   |3 -
 drivers/md/dm-crypt.c  |6 --
 drivers/md/dm-io.c |3 -
 drivers/md/dm-log-userspace-base.c |3 -
 drivers/md/dm-region-hash.c|4 -
 drivers/md/dm.c|   13 +---
 drivers/md/multipath.c |3 -
 drivers/md/raid1.c |6 --
 drivers/md/raid10.c|9 +--
 drivers/md/raid5.c |3 -
 drivers/mtd/nand/nandsim.c |3 -
 drivers/mtd/ubi/attach.c   |4 -
 drivers/net/ethernet/intel/ixgbe/ixgbe_fcoe.c  |3 -
 drivers/staging/lustre/lustre/llite/super25.c  |   16 +
 drivers/staging/lustre/lustre/obdclass/genops.c|   24 ++--
 drivers/staging/lustre/lustre/obdclass/lu_object.c |6 --
 drivers/staging/rdma/hfi1/user_sdma.c  |3 -
 drivers/thunderbolt/ctl.c  |3 -
 drivers/usb/gadget/udc/bdc/bdc_core.c  |3 -
 drivers/usb/gadget/udc/gr_udc.c|3 -
 drivers/usb/gadget/udc/mv_u3d_core.c   |3 -
 drivers/usb/gadget/udc/mv_udc_core.c   |3 -
 drivers/usb/host/fotg210-hcd.c |   12 +---
 drivers/usb/host/fusbh200-hcd.c|   12 +---
 drivers/usb/host/whci/init.c   |3 -
 drivers/usb/host/xhci-mem.c|   12 +---
 fs/btrfs/backref.c |3 -
 fs/btrfs/delayed-inode.c   |3 -
 fs/btrfs/delayed-ref.c |   12 +---
 fs/btrfs/disk-io.c |3 -
 fs/btrfs/extent_io.c   |6 --
 fs/btrfs/extent_map.c  |3 -
 fs/btrfs/file.c|3 -
 fs/btrfs/inode.c   |   18 ++
 fs/btrfs/ordered-data.c|3 -
 fs/dlm/memory.c|6 --
 fs/ecryptfs/main.c |3 -
 fs/ext4/crypto.c   |9 +--
 fs/ext4/extents_status.c   |3 -
 fs/ext4/mballoc.c  |3 -
 fs/f2fs/crypto.c   |9 +--
 fs/gfs2/main.c |   29 ++
 fs/jbd2/journal.c  |   15 +
 fs/jbd2/revoke.c   |   12 +---
 fs/jbd2/transaction.c  |6 --
 fs/jffs2/malloc.c  |   27 +++--
 fs/nfsd/nfscache.c |6 --
 fs/nilfs2/super.c  |   12 +---
 fs/ocfs2/dlm/dlmlock.c |3 -
 fs/ocfs2/dlm/dlmmaster.c   |   16 +
 fs/ocfs2/super.c   |   18 ++
 fs/ocfs2/uptodate.c|3 -
 lib/debugobjects.c |3 -
 net/core/sock.c|   12 +---
 net/dccp/ackvec.c  |   12 +---
 net/dccp/ccid.c

size_t and related types on mn10300

2015-09-13 Thread Ilya Dryomov
On Thu, Sep 10, 2015 at 10:57 AM, kbuild test robot
 wrote:
> tree:   https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git 
> master
> head:   22dc312d56ba077db27a9798b340e7d161f1df05
> commit: 5f1c79a71766ba656762636936edf708089bdb14 [12335/12685] libceph: check 
> data_len in ->alloc_msg()
> config: mn10300-allmodconfig (attached as .config)
> reproduce:
>   wget 
> https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross
>  -O ~/bin/make.cross
>   chmod +x ~/bin/make.cross
>   git checkout 5f1c79a71766ba656762636936edf708089bdb14
>   # save the attached .config to linux build tree
>   make.cross ARCH=mn10300
>
> All warnings (new ones prefixed by >>):
>
>net/ceph/osd_client.c: In function 'get_reply':
>>> net/ceph/osd_client.c:2865:3: warning: format '%zu' expects argument of 
>>> type 'size_t', but argument 6 has type 'unsigned int' [-Wformat=]
>   pr_warn("%s osd%d tid %llu data %d > preallocated %zu, skipping\n",
>   ^
>
> vim +2865 net/ceph/osd_client.c
>
>   2849   req->r_reply, req->r_reply->con);
>   2850  ceph_msg_revoke_incoming(req->r_reply);
>   2851
>   2852  if (front_len > req->r_reply->front_alloc_len) {
>   2853  pr_warn("%s osd%d tid %llu front %d > preallocated 
> %d\n",
>   2854  __func__, osd->o_osd, req->r_tid, front_len,
>   2855  req->r_reply->front_alloc_len);
>   2856  m = ceph_msg_new(CEPH_MSG_OSD_OPREPLY, front_len, 
> GFP_NOFS,
>   2857   false);
>   2858  if (!m)
>   2859  goto out;
>   2860  ceph_msg_put(req->r_reply);
>   2861  req->r_reply = m;
>   2862  }
>   2863
>   2864  if (data_len > req->r_reply->data_length) {
>> 2865  pr_warn("%s osd%d tid %llu data %d > preallocated %zu, 
>> skipping\n",
>   2866  __func__, osd->o_osd, req->r_tid, data_len,
>   2867  req->r_reply->data_length);
>   2868  m = NULL;
>   2869  *skip = 1;
>   2870  goto out;
>   2871  }
>   2872
>   2873  m = ceph_msg_get(req->r_reply);

req->r_reply->data_length is size_t, formatted with %zu, as it should
be.  I think this is a problem with either mn10300 itself or the build
toolchain - compiling mn10300 defconfig throws a whole bunch of similar
errors all over the place.

arch/mn10300/include/uapi/asm/posix_types.h
 30 #if __GNUC__ == 4
 31 typedef unsigned int    __kernel_size_t;
 32 typedef signed int  __kernel_ssize_t;
 33 #else
 34 typedef unsigned long   __kernel_size_t;
 35 typedef signed long __kernel_ssize_t;
 36 #endif

This came from commit 3ad001c04f1a ("MN10300: Fix size_t and ssize_t")
by David.  Now, if that's right, it should probably be >= 4.  But then

$ echo | /opt/gcc-4.9.0-nolibc/am33_2.0-linux/bin/am33_2.0-linux-gcc
-dM -E - | grep __SIZE_TYPE__
#define __SIZE_TYPE__ long unsigned int

and sure enough

gcc/config/mn10300/linux.h
 87 #undef SIZE_TYPE
 88 #undef PTRDIFF_TYPE
 89 #undef WCHAR_TYPE
 90 #undef WCHAR_TYPE_SIZE

gcc/config/mn10300/mn10300.h
125 #undef  SIZE_TYPE
126 #define SIZE_TYPE "unsigned int"

so it looks like "linux" variant uses gcc defaults, at least since
gcc.git commit 05381348ac78 (which is dated a few months after David's
commit made it into the kernel).  Can someone who cares about mn10300
or at least knows what it is look at this?
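
For what it's worth, the class of warning is easy to reproduce on any target
whose size_t is not plain unsigned int (e.g. x86-64), without an mn10300
toolchain. A standalone sketch, not kernel code - the typedef merely stands in
for the mn10300 __kernel_size_t:

  #include <stdio.h>

  /* stand-in for __kernel_size_t when __GNUC__ == 4 on mn10300 */
  typedef unsigned int my_kernel_size_t;

  int main(void)
  {
          my_kernel_size_t data_length = 42;

          /* intentionally mismatched to trigger the diagnostic: gcc checks
           * %zu against its own __SIZE_TYPE__ (long unsigned int here), so
           * -Wformat fires even though the code calls the argument size_t */
          printf("data %u > preallocated %zu, skipping\n", 0u, data_length);
          return 0;
  }

Building that with gcc -Wformat produces essentially the same "expects
argument of type 'size_t', but argument ... has type 'unsigned int'" text as
the kbuild report above.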

Thanks,

Ilya


RE: Question about big EC pool.

2015-09-13 Thread Somnath Roy
 12-Sep-15 19:34, Somnath Roy wrote:
>> >I don't think there is any limit from Ceph side..
>> >We are testing with ~768 TB deployment with 4:2 EC on Flash and it is 
>> >working well so far..
>> >
>> >Thanks & Regards
>> >Somnath
> Thanks for the answer!
>
> It's very interesting!
>
> What hardware do you use for your test cluster?
> [Somnath] Three 256 TB SanDisk JBOFs (IF100) with 2 heads in front of each,
> so a 6-node cluster in total. FYI, each IF100 can support a max of 512 TB.
> Each head server has 128 GB RAM and dual-socket Xeon E5-2690 v3.

What version of Ceph do you use?
[Somnath] As of now it's Giant, but we will be moving to Hammer soon.

How does the cluster work in a degraded state? Is the performance degradation huge?

[Somnath] That's one of the reasons we are using cauchy_good: its performance
in a degraded state is much better. By reducing the recovery traffic (lower
values for the recovery settings), we are able to get a significant performance
improvement in the degraded state as well... BTW, the degradation will depend
on how much data the cluster has to recover. In our case, we are seeing ~8%
degradation if, say, ~64 TB (one Ceph node) has failed, but ~28% if ~128 TB
(2 nodes) is down. This is for 4M reads.
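
For anyone wanting to try a similar layout, a 4:2 cauchy_good profile can be
created along these lines (a sketch only - the profile/pool names, PG count
and failure domain are made up, and on newer releases the option is spelled
crush-failure-domain rather than ruleset-failure-domain):

  ceph osd erasure-code-profile set ec42 \
      k=4 m=2 \
      plugin=jerasure technique=cauchy_good \
      ruleset-failure-domain=host
  ceph osd pool create ecpool 2048 2048 erasure ec42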

I think the E5-2690 isn't enough for that flash cluster.

[Somnath] In our case, and especially for larger-block-size object use cases, a
dual-socket E5-2690 should be more than sufficient. We are not able to saturate
it in this case. For smaller-block-size block use cases we are almost
saturating the CPUs with our config, though. If you are planning to use EC with
RGW, and considering you have object sizes of at least 256K or so, this CPU
complex is good enough IMO.


How do you have 6 nodes if, as you say, "Three 256 TB SanDisk's JBOF (IF100)
and 2 heads in front of that"? Maybe I have not understood how the IF100 works.

[Somnath] There are 2 Ceph nodes connected to each IF100 (you can connect up to
8 servers in front). The IF100 drives are partitioned between the 2 head
servers. We used 3 IF100s, so in total 3 * 2 = 6 head nodes, i.e. 6 Ceph server
nodes. Hope that makes sense now.

> Do you use only SSD, or SSD + NVMe?
>
> [Somnath] For now, it is all SSDs.
>
> Is the journal located on the same SSD or not?
>
> [Somnath] Yes, journal is on the same SSD.
>
> What plugin do you use?
>
> [Somnath] Cauchy_good jerasure.

Did you try the isa plugin?

>
> Have you caught any bugs or strange things?
>
> [Somnath] So far all is well :-)
>

It's good :)

Thanks for the answer!
--
Mike, yes.




PLEASE NOTE: The information contained in this electronic mail message is 
intended only for the use of the designated recipient(s) named above. If the 
reader of this message is not the intended recipient, you are hereby notified 
that you have received this message in error and that any review, 
dissemination, distribution, or copying of this message is strictly prohibited. 
If you have received this communication in error, please notify the sender by 
telephone or e-mail (as shown above) immediately and destroy any and all copies 
of this message in your possession (whether hard copies or electronically 
stored copies).


Re: Question about big EC pool.

2015-09-13 Thread Mike Almateia

13-Sep-15 01:12, Somnath Roy wrote:

12-Sep-15 19:34, Somnath Roy wrote:

>I don't think there is any limit from Ceph side..
>We are testing with ~768 TB deployment with 4:2 EC on Flash and it is working 
well so far..
>
>Thanks & Regards
>Somnath

Thanks for the answer!

It's very interesting!

What hardware do you use for your test cluster?
[Somnath] Three 256 TB SanDisk JBOFs (IF100) with 2 heads in front of each, so
a 6-node cluster in total. FYI, each IF100 can support a max of 512 TB. Each
head server has 128 GB RAM and dual-socket Xeon E5-2690 v3.


What version of Ceph do you use?
How does the cluster work in a degraded state? Is the performance degradation huge?
I think the E5-2690 isn't enough for that flash cluster.

How do you have 6 nodes if, as you say, "Three 256 TB SanDisk's JBOF (IF100)
and 2 heads in front of that"? Maybe I have not understood how the IF100 works.



Do you use only SSD, or SSD + NVMe?

[Somnath] For now, it is all SSDs.

Is the journal located on the same SSD or not?

[Somnath] Yes, journal is on the same SSD.

What plugin do you use?

[Somnath] Cauchy_good jerasure.


Did you try the isa plugin?



Have you caught any bugs or strange things?

[Somnath] So far all is well :-)



It's good :)

Thanks for the answer!
--
Mike, yes.



reducing package size / compression time

2015-09-13 Thread Loic Dachary
Hi Sage,

You did something to reduce the size (hence the compression time) of the debug 
packages using https://fedoraproject.org/wiki/Features/DwarfCompressor. Would 
you be so kind as to remind me which commit does that?

Thanks in advance :-)

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: reducing package size / compression time

2015-09-13 Thread Sage Weil
On Mon, 14 Sep 2015, Loic Dachary wrote:
> Hi Sage,
> 
> You did something to reduce the size (hence the compression time) of the 
> debug packages using 
> https://fedoraproject.org/wiki/Features/DwarfCompressor. Would you be so 
> kind as to remind me which commit does that?

https://github.com/ceph/autobuild-ceph/commit/193864ec69edb4dbb0112bb3ea54e6d2f20b30dd

The suggestion came from Boris.
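
For anyone else wondering what the change amounts to: the "DWARF compressor"
from that Fedora feature page is the dwz tool, which rewrites the DWARF in the
split debuginfo files into a more compact form, so there is simply less data
left for the compression step of packaging. Roughly (assuming dwz is installed
on the build host; the file name is made up):

  dwz ceph-osd.debug     # rewrite the DWARF sections in place
  du -h ceph-osd.debug   # compare before/after to see the reduction

On Fedora-style rpm builds, find-debuginfo.sh can invoke dwz for you when the
tool is available.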

sage