Hi Team,

 

We have an HCP cluster installed along with HDP. Here are the stack versions:

Ambari-2.6.2.2
HDP-2.6.5.0
HCP-1.8.0.0 (which includes Apache Metron 0.7.0)
 

We are using custom Stellar functions while parsing data. At present we have 
copied our custom Stellar function jar into an HDFS location and specified that 
location in global.json 
(/usr/hcp/1.8.0.0-58/metron/config/zookeeper/global.json). We have HA enabled 
for the NameNode service, and we would like to use the dfs nameservice name to 
access the file from HDFS. Our dfs nameservice name is set to TTNNHA, and I am 
able to access the Stellar function jar using the nameservice name (i.e., 
hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar). However, if I give 
the same name in the global.json file, I get the error below:

    java.lang.IllegalArgumentException: java.net.UnknownHostException: ttnnha

    Caused by: java.net.UnknownHostException: ttnnha

Not sure what went wrong here; I can see from the error message that the dfs 
name appears in lowercase, whereas in my configuration it is in uppercase.
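
For reference, reading the jar from the shell with the same nameservice URI 
works fine:

    hdfs dfs -ls hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar

and the relevant entry in my global.json (trimmed to just the Stellar 
property; stellar.function.paths is the Metron property for loading external 
Stellar jars) looks roughly like this:

    {
      "stellar.function.paths": "hdfs://TTNNHA/apps/metron/stellar/custom-stellars-1.0.jar"
    }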

 

I have also tried giving my two NameNode hostnames as a comma-separated list 
in global.json, but got an error because the file cannot be read using the 
standby NameNode hostname.
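
That attempt looked roughly like this (the hostnames below are placeholders 
for my actual NameNode hosts):

    {
      "stellar.function.paths": "hdfs://nn1.example.com:8020/apps/metron/stellar/custom-stellars-1.0.jar,hdfs://nn2.example.com:8020/apps/metron/stellar/custom-stellars-1.0.jar"
    }

The entry pointing at the standby host fails, which I suppose is expected, 
since the standby NameNode does not serve client reads.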

 

I have also tried serving the jar file through a web server and pointing 
global.json at the HTTP address. This time I did not get any error, but the 
custom functions were not loaded from that location.
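
That attempt looked roughly like this (the web server host is a placeholder):

    {
      "stellar.function.paths": "http://webserver.example.com/metron/custom-stellars-1.0.jar"
    }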

 

Could someone please help me with this?
