[
https://issues.apache.org/jira/browse/HDFS-2246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13109143#comment-13109143
]
Suresh Srinivas commented on HDFS-2246:
---------------------------------------
bq. Could you explain that? I imagine the more likely deployment is that the
client is running as 'hbase' and the DN is running as 'hdfs'. Then they would
share a common group and have block files chmodded g+r.
My understanding is that both the HBase region server and the DN run as the
same user, and that this is not done using group access. Sanjay, any comments?
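To make the permission scenario concrete, here is a minimal sketch using plain JDK calls (not code from the patch; the block file path is hypothetical): a client running as a different user than the DN can only open the block file directly if the file is group-readable and the two users share a group.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class BlockFilePermissionCheck {
  public static void main(String[] args) throws IOException {
    // Hypothetical DN block file path, owned by the 'hdfs' user.
    Path blockFile = Paths.get("/data/dfs/current/blk_1234");
    Set<PosixFilePermission> perms = Files.getPosixFilePermissions(blockFile);
    // g+r on the block file is what would let an 'hbase' client in the same group read it.
    boolean groupReadable = perms.contains(PosixFilePermission.GROUP_READ);
    // Resolves owner/group/other permissions against the user this process runs as.
    boolean readableByThisProcess = Files.isReadable(blockFile);
    System.out.println("g+r: " + groupReadable + ", readable here: " + readableByThisProcess);
  }
}
{code}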
bq. With the path cache, why's this true? It seems that, so long as you're
accessing a relatively bounded number of blocks, the paths will all be cached
and you won't need to re-RPC unless a block moves, etc?
How do you ensure a relatively bounded number of blocks? I am not sure what the
HBase use case here is; additional information on the usage pattern would help.
For every block needed to fill up the cache, you end up creating an RPC proxy
and a connection.
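To make the caching question concrete, here is a rough sketch of the kind of bounded LRU path cache being discussed (not code from the patch; the size bound, method names, and block-id type are illustrative), where every cache miss costs an RPC to the local DN, which is where the per-block proxy and connection come from:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class BlockPathCache {
  // Illustrative bound, not an actual HDFS configuration value.
  private static final int MAX_ENTRIES = 256;

  // Access-ordered map so the least-recently-used path is evicted first.
  private final Map<Long, String> pathByBlockId =
      new LinkedHashMap<Long, String>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Long, String> eldest) {
          return size() > MAX_ENTRIES;
        }
      };

  public synchronized String getBlockPath(long blockId) {
    String path = pathByBlockId.get(blockId);
    if (path == null) {
      // Each miss pays for an RPC proxy plus a connection to the local DN.
      path = askDatanodeForPath(blockId);
      pathByBlockId.put(blockId, path);
    }
    return path;
  }

  private String askDatanodeForPath(long blockId) {
    // Placeholder for the path-lookup RPC to the local DN; in the patch under
    // discussion this is where the proxy and connection are created per uncached block.
    return "/data/dfs/current/blk_" + blockId;
  }
}
{code}
If the working set of blocks exceeds the bound, the client is back to an RPC (and a new proxy and connection) per read, which is why the HBase access pattern matters here.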
The new patch is much cleaner. Also, when I ran my client accessing 1000 blocks,
I saw OutOfMemoryErrors.
> Shortcut a local client reads to a Datanodes files directly
> -----------------------------------------------------------
>
> Key: HDFS-2246
> URL: https://issues.apache.org/jira/browse/HDFS-2246
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Sanjay Radia
> Attachments: 0001-HDFS-347.-Local-reads.patch, HDFS-2246.20s.1.patch,
> HDFS-2246.20s.2.txt, HDFS-2246.20s.3.txt, HDFS-2246.20s.patch,
> localReadShortcut20-security.2patch
>
>
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira