[https://issues.apache.org/jira/browse/SPARK-21470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16093716#comment-16093716]
Maciej Bryński edited comment on SPARK-21470 at 7/19/17 8:05 PM:
-----------------------------------------------------------------
[~vanzin]
I tried.
{code}
/etc/hadoop/conf$ grep -A1 fs.defaultFS core-site.xml
<name>fs.defaultFS</name>
<value>hdfs://hdfs1</value>
{code}
So I changed spark.history.fs.logDirectory to hdfs://hdfs1/apps/spark
Result:
{code}
Exception in thread "main" java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:278)
	at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfs1
	at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
	at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:312)
	at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:178)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:665)
	at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:601)
	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:108)
	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:78)
	... 6 more
Caused by: java.net.UnknownHostException: hdfs1
	... 20 more
{code}
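The {{java.net.UnknownHostException: hdfs1}} (and the {{createNonHAProxy}} frame) suggests the History Server JVM does not see the HDFS HA client settings, so Hadoop treats the logical nameservice {{hdfs1}} as a plain hostname. As a sketch only (assuming the nameservice is named {{hdfs1}}; the namenode host names below are placeholders), the hdfs-site.xml on the History Server's classpath would need something like:
{code}
<!-- Hypothetical HA client configuration; host names are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>hdfs1</value>
</property>
<property>
  <name>dfs.ha.namenodes.hdfs1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfs1.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfs1.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdfs1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}
If these properties are already present in /etc/hadoop/conf, the next thing to check would be whether that directory (HADOOP_CONF_DIR) is actually on the History Server's classpath.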
> [SPARK 2.2 Regression] Spark History server doesn't support HDFS HA
> -------------------------------------------------------------------
>
> Key: SPARK-21470
> URL: https://issues.apache.org/jira/browse/SPARK-21470
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 2.2.0
> Reporter: Maciej Bryński
>
> With Spark versions up to 2.1.1, it was possible to configure the history
> server to read from HDFS without specifying the namenode:
> spark.history.fs.logDirectory hdfs:///apps/spark
> This works with HDFS HA.
> Unfortunately, there is a regression in Spark 2.2.0: this configuration now
> fails with an error:
> {code}
> Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs:///apps/spark
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:142)
> 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
> 	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
> 	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
> 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
> 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
> 	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> 	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:108)
> 	at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:78)
> ... 6 more
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)