[ https://issues.apache.org/jira/browse/HADOOP-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12507209 ]
Hudson commented on HADOOP-1283:
--------------------------------
Integrated in Hadoop-Nightly #132 (See [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/132/])
> Eliminate internal UTF8 to String and vice versa conversions in the name-node.
> ------------------------------------------------------------------------------
>
> Key: HADOOP-1283
> URL: https://issues.apache.org/jira/browse/HADOOP-1283
> Project: Hadoop
> Issue Type: Improvement
> Components: dfs
> Affects Versions: 0.12.0
> Reporter: Konstantin Shvachko
> Assignee: Konstantin Shvachko
> Fix For: 0.14.0
>
> Attachments: EliminateUTF8-2.patch, EliminateUTF8.patch
>
>
> We have internal conversions between these two types inside the name-node code. One
> example:
> NameNode.complete(String src, String clientName)
> calls
> FSNamesystem.completeFile(new UTF8(src), new UTF8(clientName));
> which in turn calls
> FSDirectory.addNode(path.toString(), newNode)
> and, in another place,
> FSDirectory.getNode(src.toString())
> So the same parameters are converted back and forth several times during a single
> computation (see the sketch below).
> We should keep the parameter type consistent across methods.
> The question is which type should be used: String or Text.
> From previous discussions I recall that Text is more efficient in space and time
> for non-ASCII data. Here we mostly deal with file names and network addresses,
> which are ASCII.
> Does it make sense to use Text in this case?
> UTF8 is also used as a key in two maps: pendingCreates and leases.
> This should be replaced too.
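
To make the back-and-forth concrete, below is a minimal, self-contained Java sketch of the pattern described in the quoted issue. The UTF8 class here is a simplified stand-in, and ConversionSketch, completeFileOld, and completeFileNew are hypothetical names for illustration, not the actual Hadoop 0.12 sources.

    // Hedged sketch of the redundant conversions; simplified stand-ins only.
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.Map;

    public class ConversionSketch {

        // Stand-in for org.apache.hadoop.io.UTF8: construction encodes a String
        // to bytes, toString() decodes the bytes back into a new String.
        static class UTF8 {
            private final byte[] bytes;
            UTF8(String s) { bytes = s.getBytes(StandardCharsets.UTF_8); }
            @Override public String toString() { return new String(bytes, StandardCharsets.UTF_8); }
        }

        // Stand-in for the name-node's directory map, keyed by String.
        static final Map<String, Object> directory = new HashMap<>();

        // Shape of the current code path: the RPC layer hands in Strings, the
        // namesystem wraps them in UTF8, and the directory layer immediately
        // converts each UTF8 back to a String -- extra encodes and decodes
        // on every call, for no gain.
        static void completeFileOld(String src, String clientName) {
            UTF8 path = new UTF8(src);                     // String -> UTF8
            UTF8 holder = new UTF8(clientName);            // String -> UTF8
            Object node = directory.get(path.toString());  // UTF8 -> String
            directory.put(path.toString(), holder);        // UTF8 -> String again
        }

        // Shape of the proposed fix: keep one type (String here) end to end
        // and convert only at the serialization boundary.
        static void completeFileNew(String src, String clientName) {
            Object node = directory.get(src);
            directory.put(src, clientName);
        }
    }

If String is the type chosen, the same reasoning would apply to the pendingCreates and leases maps mentioned above: keying them by String directly avoids constructing UTF8 wrapper objects just to use them as hash keys.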
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.