[ https://issues.apache.org/jira/browse/ACCUMULO-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15847705#comment-15847705 ]

ASF GitHub Bot commented on ACCUMULO-4579:
------------------------------------------

Github user ctubbsii commented on a diff in the pull request:

    https://github.com/apache/accumulo-testing/pull/3#discussion_r98796795
  
    --- Diff: core/src/main/java/org/apache/accumulo/testing/core/TestEnv.java ---
    @@ -96,15 +97,22 @@ public String getPid() {
       }
     
       public Configuration getHadoopConfiguration() {
    -    Configuration config = new Configuration();
    -    config.set("mapreduce.framework.name", "yarn");
    -    // Setting below are required due to bundled jar breaking default
    -    // config.
    -    // See
    -    // http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
    -    config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
    -    config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
    -    return config;
    +    if (hadoopConfig == null) {
    +      String hadoopPrefix = System.getenv("HADOOP_PREFIX");
    +      if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
    +        throw new IllegalArgumentException("HADOOP_PREFIX must be sent in env");
    +      }
    +      hadoopConfig = new Configuration();
    +      hadoopConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
    --- End diff ---
    
    I think using the properties file is better. I don't think we should be writing Java code that depends on env variables, especially arbitrary script conventions like `HADOOP_PREFIX`, which is very sensitive to Hadoop packaging/deployment changes.


> Continuous ingest failing due to bad Hadoop configuration
> ---------------------------------------------------------
>
>                 Key: ACCUMULO-4579
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-4579
>             Project: Accumulo
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.0.0
>            Reporter: Mike Walch
>            Assignee: Mike Walch
>             Fix For: 2.0.0
>
>
> I ran the continuous ingest test in the accumulo-testing repo on a
> distributed cluster. The test failed due to Twill storing configuration on
> the local file system rather than HDFS. This is occurring because the
> YarnTwillRunnerService is not being provided with the proper Hadoop
> configuration.
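
A minimal sketch of what providing that configuration could look like, assuming
Twill's YarnTwillRunnerService(YarnConfiguration, String) constructor and
placeholder values for the config directory and ZooKeeper connect string; the
point is that the runner gets a YarnConfiguration built from the cluster's site
files so Twill stages its runtime files in HDFS rather than on the local file
system:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.twill.yarn.YarnTwillRunnerService;

    public class TwillRunnerSetup {

      // Hands the cluster's Hadoop configuration to Twill; without it, Twill
      // falls back to defaults and keeps its files on the local file system.
      public static YarnTwillRunnerService createRunner(String hadoopConfDir, String zkConnect) {
        YarnConfiguration yarnConfig = new YarnConfiguration();
        yarnConfig.addResource(new Path(hadoopConfDir + "/core-site.xml"));
        yarnConfig.addResource(new Path(hadoopConfDir + "/yarn-site.xml"));
        return new YarnTwillRunnerService(yarnConfig, zkConnect);
      }
    }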


