Re: [PATCH] net: ceph: osd_client: change osd_req_op_data() macro
On Thu, Oct 22, 2015 at 5:06 PM, Ioana Ciornei wrote:
> This patch changes the osd_req_op_data() macro to not evaluate
> parameters more than once in order to follow the kernel coding style.
>
> Signed-off-by: Ioana Ciornei
> Reviewed-by: Alex Elder
> ---
>  net/ceph/osd_client.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
> index a362d7e..856e8f8 100644
> --- a/net/ceph/osd_client.c
> +++ b/net/ceph/osd_client.c
> @@ -120,10 +120,12 @@ static void ceph_osd_data_bio_init(struct ceph_osd_data *osd_data,
>  }
>  #endif /* CONFIG_BLOCK */
>
> -#define osd_req_op_data(oreq, whch, typ, fld)			\
> -	({							\
> -		BUG_ON(whch >= (oreq)->r_num_ops);		\
> -		&(oreq)->r_ops[whch].typ.fld;			\
> +#define osd_req_op_data(oreq, whch, typ, fld)			\
> +	({							\
> +		struct ceph_osd_request *__oreq = (oreq);	\
> +		unsigned int __whch = (whch);			\
> +		BUG_ON(__whch >= __oreq->r_num_ops);		\
> +		&__oreq->r_ops[__whch].typ.fld;			\
>  	})
>
>  static struct ceph_osd_data *

For some reason this ended up in Spam - you should CC subsystem
maintainer(s) on any patch, as noted in Documentation/SubmittingPatches.

Applied with minor changes, see
https://github.com/ceph/ceph-client/commit/51dcb83f3d1143819fe0791c112e1b1f830e457d.

Thanks,

                Ilya
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: Fix OP dequeuing order
Here is the pull request against master.

https://github.com/ceph/ceph/pull/6429

- Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1

On Wed, Oct 28, 2015 at 11:44 PM, Robert LeBlanc wrote:
> I've only tested on my dev cluster, but I'm not seeing the lone long
> blocked I/O. There is still blocked I/O, but it behaves a bit more
> like I expect. I'll try to get this fix out on our production cluster
> and see how it goes.
>
> I've tried implementing a max-min priority queue that had varied
> success, but I think there are multiple queues (one per thread or
> something) that made it challenging. I'm still trying to figure out
> how everything is getting queued and dequeued. There is one more queue
> idea I'd like to try that I think is less computationally expensive
> than the current three-tier priority token bucket or the max-min I
> wrote up. I just wanted to get this fix in if it was the right
> direction.
>
> My max-min queue suffered from segfaults when OSDs were taken out, and
> I think it had to do with OPs not getting cleared out of the queue (I
> removed all of the cutting in line and the multiple queues and was
> relying on strict priority). Is there some documentation outlining the
> priorities and the ops that I can reference? i.e. 128 - replication op,
> 64 - primary op, etc.
>
> Do you want me to write up a patch against master as well?
>
> Thanks,
>
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> On Wed, Oct 28, 2015 at 10:52 PM, Sage Weil wrote:
>> On Wed, 28 Oct 2015, Robert LeBlanc wrote:
>>> I created a pull request to fix an op dequeuing order problem. I'm not
>>> sure if I need to mention it here.
>>>
>>> https://github.com/ceph/ceph/pull/6417
>>
>> Wow, good catch. Have you found that this materially impacts the
>> behavior in your cluster?
>>
>> sage
>>
>>> --
>>> Robert LeBlanc
>>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
civetweb upstream/downstream divergence
Hi Ceph:

The civetweb code in RGW is taken from https://github.com/ceph/civetweb/
which is a fork of https://github.com/civetweb/civetweb. The last commit
to our fork took place on March 18.

Upstream civetweb development has progressed ("This branch is 19 commits
ahead, 972 commits behind civetweb:master.")

Are there plans to rebase to a newer upstream version, or should we think
more in terms of backporting (to ceph/civetweb.git) from upstream
(civetweb/civetweb.git) when we need to fix bugs or add features?

Thanks and regards

--
Nathan Cutler
Software Engineer Distributed Storage
SUSE LINUX, s.r.o.
Tel.: +420 284 084 037
Re: civetweb upstream/downstream divergence
On 29-10-15 10:19, Nathan Cutler wrote:
> Hi Ceph:
>
> The civetweb code in RGW is taken from https://github.com/ceph/civetweb/
> which is a fork of https://github.com/civetweb/civetweb. The last commit
> to our fork took place on March 18.
>
> Upstream civetweb development has progressed ("This branch is 19 commits
> ahead, 972 commits behind civetweb:master.")
>
> Are there plans to rebase to a newer upstream version or should we think
> more in terms of backporting (to ceph/civetweb.git) from upstream
> (civetweb/civetweb.git) when we need to fix bugs or add features?

I think it would be smart to keep tracking civetweb from upstream;
otherwise we have effectively forked civetweb. We might run into issues
with civetweb that we need to fix upstream, and that's a lot easier if
we stay close to where upstream is.

Wido

> Thanks and regards
Re: Bucket namespaces pull req. 5872
On Mon, 26 Oct 2015 16:09:41 +0100 Radoslaw Zarzynski wrote:
> Those are the reasons I'm behind keeping rgw_user even if the
> entire information it carries would be solely an ID string.

Okay. It felt a little obfuscatory, but perhaps it's my kernel
background talking.

> Yeah, bucket namespace is a part of account/RGWUserInfo*. Property
> "has_own_bns" even now is serialized together with RGWUserInfo.

Very well, we're good.

>> The rgw_swift_create_account_with_bns should go away with rgw_user.
>
> Option "rgw_swift_create_account_with_bns" is needed mostly due to
> integration with OpenStack (Keystone), where accounts* are automatically
> created at first access. Without the parameter you would lose the
> ability to tell radosgw what is more important to you: compliance with
> the Swift API or the previous behavior, which may still be useful in
> some cases. Creating a massive number of accounts by hand might not be
> an option here.

I'm buying the logic here: at the time of auto-creation, we do not
possess the information about whether the account being auto-created
wants BNS or not. Still, it feels unsatisfactory. I'd rather look into
some sort of user attributes in Keystone or whatnot. I'll investigate
and report.

>> The rgw_swift_account_in_url should be possible to incorporate
>> in a compatible fashion (it does not add an extra next_tok()).
>
> Regarding "rgw_swift_account_in_url": I don't see a viable method for
> deducing whether two tokens in a URL refer to 1) account and bucket or
> 2) bucket and object. Of course, we may apply some kind of heuristic
> like scanning the first token for an auth prefix (e.g. "AUTH_", "KEY_"),
> but this would introduce limitations on bucket naming.

That makes sense, but it's not how I read the actual code. I'll look
again.

-- Pete
Is lttng enabled by default in debian hammer-0.94.5?
Hi, everyone

After installing hammer-0.94.5 on Debian, I wanted to trace librbd with
LTTng, but after running the following steps I got nothing:

    mkdir -p traces
    lttng create -o traces librbd
    lttng enable-event -u 'librbd:*'
    lttng add-context -u -t pthread_id
    lttng start
    lttng stop

So, is LTTng enabled in this version on Debian?

Thanks!

--
hzwulibin
2015-10-30