Re: Spark (K8S) IPv6 support
I don't know about the state of IPv6 support, but yes, you're right in guessing that 3.4.0 might be released early next year. You can always clone the source repo and build it!

On Thu, Jul 14, 2022 at 2:19 PM Valer wrote:

> Hi,
>
> We're starting to use an IPv6-only K8S cluster (EKS), which currently breaks
> Spark. I've noticed SPARK-39457
> <https://issues.apache.org/jira/browse/SPARK-39457?jql=project%20%3D%20SPARK%20AND%20resolution%20%3D%20Unresolved%20AND%20text%20~%20%22IPv6%22%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC>
> contains a lot of focus on this, where all the sub-tasks seem to be done and
> it indicates this should come in 3.4.0, so I'd like to ask a couple of
> questions:
>
> - Is 3.4.0 supposed to fully support IPv6?
> - When should I roughly expect it to be released? I've noticed that 3.2 was
>   released in October and 3.3 this June. Is this a somewhat stable release
>   cadence (half-yearly)?
> - Is there currently any way to download a tarball of the "master" / "latest"
>   version that we could run before release? The Apache archive only has
>   actual semver'd releases.
>
> Thanks in advance :)
>
> Regards,
> Valér
Spark (K8S) IPv6 support
Hi,

We're starting to use an IPv6-only K8S cluster (EKS), which currently breaks Spark. I've noticed [SPARK-39457](https://issues.apache.org/jira/browse/SPARK-39457?jql=project%20%3D%20SPARK%20AND%20resolution%20%3D%20Unresolved%20AND%20text%20~%20%22IPv6%22%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC) contains a lot of focus on this, where all the sub-tasks seem to be done and it indicates this should come in 3.4.0, so I'd like to ask a couple of questions:

- Is 3.4.0 supposed to fully support IPv6?
- When should I roughly expect it to be released? I've noticed that 3.2 was released in October and 3.3 this June. Is this a somewhat stable release cadence (half-yearly)?
- Is there currently any way to download a tarball of the "master" / "latest" version that we could run before release? The Apache archive only has actual semver'd releases.

Thanks in advance :)

Regards,
Valér
Re: IPv6 support
On 24 Jun 2015, at 18:56, Kevin Liu <kevin...@fb.com> wrote:

> Continuing this thread beyond standalone - onto clusters, does anyone have
> experience successfully running any Spark cluster on IPv6-only (not dual-stack)
> machines? More companies are moving to IPv6, and some, such as Facebook, are
> only allocating new clusters on an IPv6-only network, so this is getting more
> relevant. YARN still doesn't support IPv6 per
> http://wiki.apache.org/hadoop/HadoopIPv6

Facebook are doing work on IPv6, which they are clearly using in-house:
https://issues.apache.org/jira/browse/HADOOP-11890

Nobody has been reviewing that work, but we should. For most people you can just say "don't use IPv6", but FB have probably run out of IP addresses somewhere and are stuck unless they buy HP for the two Class-A subnets it holds. If people were to play with those patches on IPv6, it could make it into Hadoop 2.8.
Re: IPv6 support
Hi Kevin,

Did you try adding a host name for the IPv6 address? I have a few IPv6 boxes; Spark failed for me when I used just the IPv6 addresses, but it works fine when I use host names.

Here's an entry in my /etc/hosts:

2607:5300:0100:0200::::0a4d hacked.work

My spark-env.sh file:

export SPARK_MASTER_IP=hacked.work

Here's the master listening on my v6: [image: Inline image 1]

The Master UI with a running spark-shell: [image: Inline image 2]

I even ran a simple sc.parallelize(1 to 100).collect().

Thanks
Best Regards

On Wed, May 20, 2015 at 11:09 PM, Kevin Liu <kevin...@fb.com> wrote:

> Hello,
>
> I have to work with IPv6-only servers, and when I installed the 1.3.1
> hadoop 2.6 build, I couldn't get the example to run due to IPv6 issues
> (errors below). I tried adding the -Djava.net.preferIPv6Addresses=true
> setting, but it still doesn't work. A search on Spark's support for IPv6
> is inconclusive. Can someone help clarify the current status for IPv6?
>
> Thanks
> Kevin
>
> -- errors --
> 15/05/20 10:17:30 INFO Executor: Fetching http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432142250197
> 15/05/20 10:17:30 INFO Executor: Fetching http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432142250197
> 15/05/20 10:17:30 ERROR Executor: Exception in task 5.0 in stage 0.0 (TID 5)
> java.net.MalformedURLException: For input string: db00:2030:709b:face:0:9:0:51453
>     at java.net.URL.<init>(URL.java:620)
>     at java.net.URL.<init>(URL.java:483)
>     at java.net.URL.<init>(URL.java:432)
>     at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:603)
>     at org.apache.spark.util.Utils$.fetchFile(Utils.scala:431)
>     at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:374)
>     at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:366)
>     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
>     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>     at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
>     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
>     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
>     at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
>     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
>     at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:366)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:184)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NumberFormatException: For input string: db00:2030:709b:face:0:9:0:51453
>     at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>     at java.lang.Integer.parseInt(Integer.java:580)
>     at java.lang.Integer.parseInt(Integer.java:615)
>     at java.net.URLStreamHandler.parseURL(URLStreamHandler.java:216)
>     at java.net.URL.<init>(URL.java:615)
>     ... 18 more

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
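The root cause visible in the trace above is that `java.net.URL` treats everything after the first colon in an unbracketed authority as a port number, so a bare IPv6 literal dies in `Integer.parseInt` on the remaining groups. Per RFC 2732 an IPv6 literal must be wrapped in square brackets to be parseable as a URL host, while a host name never hits this code path at all, which is why the /etc/hosts workaround works. A minimal sketch of all three cases (the address and port are made up for illustration, not taken from a real cluster):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class Ipv6UrlDemo {
    public static void main(String[] args) throws Exception {
        // Bare IPv6 literal: URL parsing stops at the first colon and tries
        // to parse the rest of the authority as a port -> NumberFormatException
        // wrapped in MalformedURLException, just like the trace above.
        try {
            new URL("http://2401:db00::9:0:51453/jars/app.jar");
            System.out.println("unbracketed: parsed (unexpected)");
        } catch (MalformedURLException e) {
            System.out.println("unbracketed: " + e.getMessage());
        }

        // The same literal in square brackets (RFC 2732) parses cleanly.
        URL ok = new URL("http://[2401:db00::9:0]:51453/jars/app.jar");
        System.out.println("bracketed: host=" + ok.getHost() + " port=" + ok.getPort());

        // A host name avoids the issue entirely -- this is the /etc/hosts
        // workaround suggested above.
        URL byName = new URL("http://hacked.work:51453/jars/app.jar");
        System.out.println("hostname: host=" + byName.getHost() + " port=" + byName.getPort());
    }
}
```

Judging by the log ("Fetching http://2401:db00:..."), Spark built the fetch URL from the raw driver address without brackets, so until that URL construction is fixed upstream, pointing the driver/master at a resolvable name is the practical workaround.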
IPv6 support
Hello,

I have to work with IPv6-only servers, and when I installed the 1.3.1 hadoop 2.6 build, I couldn't get the example to run due to IPv6 issues (errors below). I tried adding the -Djava.net.preferIPv6Addresses=true setting but it still doesn't work. A search on Spark's support for IPv6 is inconclusive. Can someone help clarify the current status for IPv6?

Thanks
Kevin

-- errors --
15/05/20 10:17:30 INFO Executor: Fetching http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432142250197
15/05/20 10:17:30 INFO Executor: Fetching http://2401:db00:2030:709b:face:0:9:0:51453/jars/spark-examples-1.3.1-hadoop2.6.0.jar with timestamp 1432142250197
15/05/20 10:17:30 ERROR Executor: Exception in task 5.0 in stage 0.0 (TID 5)
java.net.MalformedURLException: For input string: db00:2030:709b:face:0:9:0:51453
    at java.net.URL.<init>(URL.java:620)
    at java.net.URL.<init>(URL.java:483)
    at java.net.URL.<init>(URL.java:432)
    at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:603)
    at org.apache.spark.util.Utils$.fetchFile(Utils.scala:431)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:374)
    at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$5.apply(Executor.scala:366)
    at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
    at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
    at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
    at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
    at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
    at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:366)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:184)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: db00:2030:709b:face:0:9:0:51453
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Integer.parseInt(Integer.java:615)
    at java.net.URLStreamHandler.parseURL(URLStreamHandler.java:216)
    at java.net.URL.<init>(URL.java:615)
    ... 18 more
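As for why -Djava.net.preferIPv6Addresses=true didn't help: that property only biases which address family name resolution (`InetAddress`) prefers. It has no effect on URL parsing, and the failure above happens while parsing an unbracketed IPv6 literal out of the fetch URL, before any resolution takes place. A small sketch of the distinction (the literal address is illustrative, and the property is normally set on the JVM command line rather than programmatically):

```java
import java.net.InetAddress;
import java.net.MalformedURLException;
import java.net.URL;

public class PreferIpv6Demo {
    public static void main(String[] args) throws Exception {
        // Biases the ordering of addresses returned by name resolution;
        // usually passed as -Djava.net.preferIPv6Addresses=true.
        System.setProperty("java.net.preferIPv6Addresses", "true");

        // Parsing an IPv6 *literal* as an address works fine and needs no
        // brackets -- InetAddress recognizes literals without a DNS lookup.
        InetAddress addr = InetAddress.getByName("2401:db00::9:0");
        System.out.println("literal -> " + addr.getHostAddress());

        // But URL parsing is a separate code path the property never touches:
        // an unbracketed literal in the authority still fails, exactly as in
        // the MalformedURLException above.
        try {
            new URL("http://2401:db00::9:0:51453/jars/app.jar");
        } catch (MalformedURLException e) {
            System.out.println("URL still fails: " + e.getMessage());
        }
    }
}
```

In other words, the flag addresses "which address do I connect to", not "can I spell this address inside a URL", so the fix has to be bracketed literals (or host names) in the URLs Spark constructs.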