[ https://issues.apache.org/jira/browse/HADOOP-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12631867#action_12631867 ]

George Porter commented on HADOOP-4049:
---------------------------------------

Thank you for the feedback.  I'm implementing it now, but wanted to bring up an 
issue related to making Client and Server more extensible.

I like the idea of providing an addHeader() and setHeader() interface to the 
Client and Server, but my understanding of the code is that a single Client 
can multiplex remote procedure calls from multiple Caller threads (I think 
this is exhibited in TestRPC.java lines 277 through 284).  So I think we would 
need to associate headers with individual Call objects rather than with 
Client objects.

If so, one way of doing that would be to create an IPCHeaders object that 
contains the headers to include with the IPC call, and then change

  Client.call(Writable param, InetSocketAddress addr, UserGroupInformation ticket)

to

  Client.call(Writable param, InetSocketAddress addr, UserGroupInformation ticket,
              IPCHeaders headers)
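
To make that concrete, here is a rough sketch of what IPCHeaders might look 
like as a Writable (the class name, field layout, and method names are my own 
strawman, not a settled API):

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import java.util.Map;
  import java.util.TreeMap;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.io.Writable;

  /** Strawman: a small Writable map of header name/value pairs
   *  that travels with a single IPC Call. */
  public class IPCHeaders implements Writable {
    private final Map<String, String> headers = new TreeMap<String, String>();

    public void setHeader(String name, String value) {
      headers.put(name, value);
    }

    public String getHeader(String name) {
      return headers.get(name);
    }

    public void write(DataOutput out) throws IOException {
      out.writeInt(headers.size());
      for (Map.Entry<String, String> e : headers.entrySet()) {
        Text.writeString(out, e.getKey());
        Text.writeString(out, e.getValue());
      }
    }

    public void readFields(DataInput in) throws IOException {
      headers.clear();
      int count = in.readInt();
      for (int i = 0; i < count; i++) {
        headers.put(Text.readString(in), Text.readString(in));
      }
    }
  }

Making it a Writable means it can ride along with the call's param using the 
existing serialization machinery.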

This way each set of headers would be associated with its Call on the client 
side and included on the wire when we call into the server.  For the return 
path, instead of returning a Writable, call() could return a pair consisting 
of a Writable and an IPCHeaders (as set by the server); a sketch of such a 
pair type follows.
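
Again as a strawman (the name IPCResponse is just a placeholder), the pair 
could be as simple as:

  import org.apache.hadoop.io.Writable;

  /** Strawman return type: the RPC result plus whatever headers
   *  the server attached to the response. */
  public class IPCResponse {
    private final Writable value;
    private final IPCHeaders headers;

    public IPCResponse(Writable value, IPCHeaders headers) {
      this.value = value;
      this.headers = headers;
    }

    public Writable getValue()    { return value; }
    public IPCHeaders getHeaders() { return headers; }
  }

so the full signature would read roughly

  IPCResponse Client.call(Writable param, InetSocketAddress addr,
                          UserGroupInformation ticket, IPCHeaders headers)

Callers that don't care about headers could pass null and ignore the headers 
on the response, which would keep the change friendly to existing call sites.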

> Cross-system causal tracing within Hadoop
> -----------------------------------------
>
>                 Key: HADOOP-4049
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4049
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs, ipc, mapred
>            Reporter: George Porter
>         Attachments: HADOOP-4049.patch, multiblockread.png, 
> multiblockwrite.png
>
>
> Much of Hadoop's behavior is client-driven, with clients responsible for 
> contacting individual datanodes to read and write data, as well as dividing 
> up work for map and reduce tasks.  In a large deployment with many concurrent 
> users, identifying the effects of individual clients on the infrastructure is 
> a challenge.  The use of data pipelining in HDFS and Map/Reduce makes it hard 
> to follow the effects of a given client request through the system.
>
> This proposal is to instrument the HDFS, IPC, and Map/Reduce layers of Hadoop 
> with X-Trace.  X-Trace is an open-source framework for capturing the 
> causality of events in a distributed system.  It can correlate operations 
> making up a single user request, even if those operations span multiple 
> machines.  As an example, you could use X-Trace to follow an HDFS write 
> operation as it is pipelined through intermediate nodes.  Additionally, you 
> could trace a single Map/Reduce job and see how it is decomposed into 
> lower-layer HDFS operations.
>
> Matei Zaharia and Andy Konwinski initially integrated X-Trace with a local 
> copy of the 0.14 release, and I've brought that code up to release 0.17.  
> Performing the integration involves modifying the IPC protocol, the 
> inter-datanode protocol, and some data structures in the map/reduce layer to 
> include 20 bytes of tracing metadata.  With release 0.18, the generated 
> traces could be collected with Chukwa.
>
> I've attached some example traces of the HDFS and IPC layers from the 0.17 
> patch to this JIRA issue.
>
> More information about X-Trace is available from http://www.x-trace.net/ as 
> well as in a paper that appeared at NSDI 2007, available online at 
> http://www.usenix.org/events/nsdi07/tech/fonseca.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
