[ 
http://issues.apache.org/jira/browse/HADOOP-347?page=comments#action_12420906 ] 

Devaraj Das commented on HADOOP-347:
------------------------------------

I am taking the approach where the user first connects to the namenode's Jetty 
server and is redirected to a random datanode. The random datanode (which has an 
embedded DFS client) then fetches the directory listing (starting at the root) 
from the namenode through the regular IPC channel. The user then typically 
chooses a file to view; the JSP at the backend lists the blocks of the file and 
offers the options HEAD, TAIL, DOWNLOAD_FULL_FILE, and BLOCK_VIEW (view the 
contents of a block in a chunked fashion, with prev/next links for 
navigation). 
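The first redirect step can be sketched as below. This is only an illustration of the idea, not actual HDFS code: the function name, the `browseDirectory.jsp` endpoint, and the port numbers are hypothetical.

```python
import random

def choose_redirect_url(datanodes, path):
    """Pick a random datanode and build the browse URL that the namenode's
    HTTP server would redirect the client to.

    datanodes: list of (host, http_port) pairs known to the namenode.
    path: the DFS directory the user wants to browse.
    All endpoint and parameter names here are illustrative assumptions.
    """
    host, port = random.choice(datanodes)
    return "http://%s:%d/browseDirectory.jsp?dir=%s" % (host, port, path)

# Example: redirect a client that asked the namenode to browse the root.
nodes = [("dn1.example.com", 50075), ("dn2.example.com", 50075)]
url = choose_redirect_url(nodes, "/")
print(url)
```

The datanode chosen here would then use its embedded DFS client to fetch the listing from the namenode and render it.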

For these options the user is redirected to one of the datanodes that holds the 
block to be viewed. This can be handled because the LocatedBlock array obtained 
from dfs.namenode.open lists the blocks in the byte order of the file, so the 
block number and the offset within it can be tracked: as part of each HTTP 
request, the client sends the block number it is currently at, along with the 
offsets (start, end) it wants to view next. When the current datanode finishes 
serving all the chunks of the current block, the client is redirected to one of 
the datanodes holding the next block, with chunk offsets 
(0, <some-preconfigured-value>). The current datanode knows where to send the 
client because it can also obtain the LocatedBlock array from the namenode and 
look up the next block's locations there. The end offset 
(<some-preconfigured-value>) can be customized by the user but must be a factor 
of the DFS block size that the file was created with.
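The chunk bookkeeping described above can be sketched as follows. `next_chunk`, `block_sizes`, and `chunk_size` are illustrative names, not HDFS APIs; `chunk_size` plays the role of <some-preconfigured-value> and is assumed to divide the DFS block size evenly, as required above.

```python
def next_chunk(block_sizes, block_idx, start, end, chunk_size):
    """Given the chunk (start, end) just served from block block_idx,
    compute the (block index, start, end) of the next chunk to view.

    block_sizes: byte lengths of the file's blocks, in file order -- the
    information the LocatedBlock array from dfs.namenode.open provides,
    alongside the datanode locations of each block.
    Returns None once the whole file has been served.
    """
    if end < block_sizes[block_idx]:
        # More chunks remain in the current block; keep the same datanode.
        return block_idx, end, min(end + chunk_size, block_sizes[block_idx])
    if block_idx + 1 < len(block_sizes):
        # Current block exhausted: redirect to a datanode holding the next
        # block, starting at offsets (0, chunk_size).
        return block_idx + 1, 0, min(chunk_size, block_sizes[block_idx + 1])
    return None  # end of file

# A three-block file (two full 64 MB blocks and a 10-byte tail),
# viewed in 16 MB chunks.
MB = 1 << 20
sizes = [64 * MB, 64 * MB, 10]
print(next_chunk(sizes, 0, 48 * MB, 64 * MB, 16 * MB))  # -> (1, 0, 16777216)
print(next_chunk(sizes, 2, 0, 10, 16 * MB))             # -> None
```

In the actual design the redirecting datanode would map the returned block index back to a LocatedBlock entry to pick a target datanode for the HTTP redirect.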

> Implement HDFS content browsing interface
> -----------------------------------------
>
>          Key: HADOOP-347
>          URL: http://issues.apache.org/jira/browse/HADOOP-347
>      Project: Hadoop
>         Type: New Feature
>   Components: dfs
>     Versions: 0.1.0, 0.2.0, 0.1.1, 0.3.0, 0.4.0, 0.2.1, 0.3.1, 0.3.2
>     Reporter: Devaraj Das
>     Assignee: Devaraj Das
>      Fix For: 0.5.0
>
> Implement HDFS content browsing interface over HTTP. Clients would connect to 
> the NameNode and this would send a redirect to a random DataNode. The 
> DataNode, via dfs client, would proxy to namenode for metadata browsing and 
> to other datanodes for content. One can also view the local blocks on any 
> DataNode. HEAD and TAIL will be provided as shorthands for viewing the first 
> block and the last block of a file. 
> For full file viewing, the data displayed per HTTP request will be a block 
> with a PREV/NEXT link. The block size for viewing can be a configurable 
> parameter (the user sets it via the web browser) to the HTTP server (e.g., 
> 256 KB can be the default block size for viewing files).

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
