[ https://issues.apache.org/jira/browse/HADOOP-1864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17692818#comment-17692818 ]

ASF GitHub Bot commented on HADOOP-1864:
----------------------------------------

steveloughran opened a new pull request, #5429:
URL: https://github.com/apache/hadoop/pull/5429

   * Exclude imports which come in with hadoop-common
   * Add an explicit declaration of hadoop's org.codehaus.jettison import to hadoop-aliyun
   * Cut duplicate and inconsistent hbase-server declarations from hadoop-project
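   The pattern described above might look roughly like this in a Maven POM. This is an illustrative sketch only; the group/artifact ids shown are plausible placeholders, not the PR's actual diff, and versions are assumed to come from dependencyManagement:

   ```xml
   <!-- Hypothetical fragment: exclude a transitive import that arrives via
        hadoop-common, then declare it explicitly in the module that needs it. -->
   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-common</artifactId>
     <exclusions>
       <exclusion>
         <groupId>org.codehaus.jettison</groupId>
         <artifactId>jettison</artifactId>
       </exclusion>
     </exclusions>
   </dependency>
   <!-- Explicit declaration, so the module controls the version it compiles against. -->
   <dependency>
     <groupId>org.codehaus.jettison</groupId>
     <artifactId>jettison</artifactId>
   </dependency>
   ```

   Declaring the dependency explicitly rather than inheriting it transitively is what keeps the resolved version consistent across modules.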
   
   ### How was this patch tested?
   
   * building and looking at imports; verifying compilation worked.
   * testing of azure in progress.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
> Support for big jar file (>2G)
> ------------------------------
>
>                 Key: HADOOP-1864
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1864
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.14.1
>            Reporter: Yiping Han
>            Priority: Critical
>
> We have huge binaries that need to be distributed onto the tasktracker 
> nodes in Hadoop streaming mode. We have tried both the -file option and 
> the -cacheArchive option; it seems the tasktracker node cannot unjar jar 
> files bigger than 2G. We are considering splitting our binaries into 
> multiple jars, but with -file it seems we cannot do that. We would also 
> prefer the -cacheArchive option for performance reasons, but -cacheArchive 
> does not seem to allow more than one appearance in the streaming options. 
> Even if -cacheArchive supported multiple jars, we would still need a way 
> to put the jars into a single directory tree instead of using multiple 
> symbolic links. 
> So, in general, we need a feasible and efficient way to distribute large 
> (>2G) binaries for Hadoop streaming. We don't know whether there is an 
> existing solution that we either didn't find or misunderstood, or whether 
> some extra work is needed to provide one.
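
One generic way around a per-archive size limit, assuming the payload can be shipped as fixed-size chunks and reassembled on the worker node (a sketch, not part of any patch; file names and chunk sizes are placeholders, and the distribution step itself is elided):

```shell
#!/bin/sh
# Sketch: split a large payload into fixed-size chunks, ship the chunks
# individually, then reassemble and verify on the receiving side.
set -e

# Stand-in for a multi-gigabyte jar; in practice this is the real binary.
dd if=/dev/zero of=big-binary.jar bs=1024 count=8 2>/dev/null

# Split into chunks (use e.g. -b 1G for real payloads).
split -b 4096 big-binary.jar big-binary.jar.part.

# ...each chunk could then be distributed separately...

# Reassemble in lexical order and verify byte-for-byte equality.
cat big-binary.jar.part.* > reassembled.jar
cmp big-binary.jar reassembled.jar && echo "round-trip ok"
```

`split` emits suffixes (`.aa`, `.ab`, ...) that sort lexically, which is why a plain `cat` of the glob restores the original byte order.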



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
