[ https://issues.apache.org/jira/browse/NIFI-2873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15572693#comment-15572693 ]

ASF GitHub Bot commented on NIFI-2873:
--------------------------------------

Github user mattyb149 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/1113#discussion_r83274530
  
    --- Diff: nifi-nar-bundles/nifi-hive-bundle/nifi-hive-processors/src/main/java/org/apache/nifi/util/hive/HiveConfigurator.java ---
    @@ -71,6 +72,14 @@ public Configuration getConfigurationFromFiles(final String configFiles) {
             return hiveConfig;
         }
     
    +    public void preload(Configuration configuration) {
    +        try {
    +            FileSystem.get(configuration);
    +        }catch(IOException ioe) {
    --- End diff --
    
    Minor CheckStyle violation here; I will add whitespace before merging your PR.
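The change under review eagerly initializes the Hadoop FileSystem during processor setup, so that the HA NameNode logical URI is resolved before Hive streaming opens any writers. A minimal sketch of that pattern is below; this is illustrative only (the class name and catch-body comment are mine, and the catch body truncated in the diff above is not reproduced), and it requires hadoop-common on the classpath:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Illustrative sketch, not the PR's exact code.
public class HiveConfiguratorSketch {

    /**
     * Eagerly resolve the default FileSystem so that an HA NameNode
     * logical URI (a nameservice ID such as "tdcdv2" in the stack
     * trace below) is registered before any Hive streaming writer
     * opens an ORC file. Without this, the HDFS client can fall back
     * to a non-HA proxy and fail with UnknownHostException on the
     * nameservice ID.
     */
    public void preload(Configuration configuration) {
        try {
            FileSystem.get(configuration);
        } catch (IOException ioe) {
            // Swallow here: if the preload fails, the underlying
            // problem will surface later with more context when the
            // processor actually tries to write.
        }
    }
}
```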


> PutHiveStreaming throws UnknownHostException with HA NameNode
> -------------------------------------------------------------
>
>                 Key: NIFI-2873
>                 URL: https://issues.apache.org/jira/browse/NIFI-2873
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.0.0
>            Reporter: Franco
>             Fix For: 1.1.0
>
>
> This is the same issue that previously affected Spark:
> https://github.com/Jianfeng-chs/spark/commit/9f2b2bf001262215742be418f24d5093c92ff10f
> We are experiencing this issue consistently when trying to use PutHiveStreaming. In theory this should also be a problem with GetHDFS, but for whatever reason it is not.
> The fix is identical, namely preloading the Hadoop configuration during the processor setup phase. Pull request forthcoming.
> {code:title=Stack Trace|borderStyle=solid}
> 2016-10-06 16:07:59,225 ERROR [Timer-Driven Process Thread-9] o.a.n.processors.hive.PutHiveStreaming
> java.lang.IllegalArgumentException: java.net.UnknownHostException: tdcdv2
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310) ~[hadoop-hdfs-2.6.2.jar:na]
>         at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) ~[hadoop-hdfs-2.6.2.jar:na]
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668) ~[hadoop-hdfs-2.6.2.jar:na]
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604) ~[hadoop-hdfs-2.6.2.jar:na]
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148) ~[hadoop-hdfs-2.6.2.jar:na]
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[hadoop-common-2.6.2.jar:na]
>         at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:221) ~[hive-exec-1.2.1.jar:1.2.1]
>         at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:292) ~[hive-exec-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:141) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:121) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:37) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:509) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:461) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:345) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:325) ~[hive-hcatalog-streaming-1.2.1.jar:1.2.1]
>         at org.apache.nifi.util.hive.HiveWriter.lambda$nextTxnBatch$1(HiveWriter.java:250) ~[nifi-hive-processors-1.0.0.jar:1.0.0]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
