[jira] [Commented] (SPARK-36057) Support volcano/alternative schedulers

2021-10-24 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-36057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17433527#comment-17433527
 ] 

Jiaxin Shan commented on SPARK-36057:
-

Is it better to do this at the operator level?

> Support volcano/alternative schedulers
> --
>
> Key: SPARK-36057
> URL: https://issues.apache.org/jira/browse/SPARK-36057
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.2.0
>Reporter: Holden Karau
>Priority: Major
>
> This is an umbrella issue tracking the work to support Volcano & 
> Yunikorn on Kubernetes. These schedulers provide more YARN-like features 
> (such as queues and minimum resources before scheduling jobs) that many folks 
> want on Kubernetes.
>  
> Yunikorn is an ASF project & Volcano is a CNCF project (sig-batch).
>  
> They've taken slightly different approaches to solving the same problem, but 
> from Spark's point of view we should be able to share much of the code.
>  
> See the initial brainstorming discussion in SPARK-35623.
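
For context, a minimal sketch of how a job can already target an alternative 
scheduler through pod templates (a hedged illustration, assuming Spark 3.0+ and 
a Volcano installation that registers a scheduler named "volcano"; the paths 
and flags are illustrative, and first-class support beyond this is what this 
umbrella issue tracks):

{code:java}
# Hedged sketch: point driver and executor pods at a custom scheduler.
cat > /tmp/scheduler-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  schedulerName: volcano
EOF

spark-submit \
  --master k8s://https://<api-server> \
  --deploy-mode cluster \
  --conf spark.kubernetes.driver.podTemplateFile=/tmp/scheduler-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=/tmp/scheduler-template.yaml \
  ...
{code}

Queue and minimum-resource semantics would still come from the scheduler's own 
configuration (e.g. its CRDs); integrating those is the harder part tracked 
here.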






[jira] [Commented] (SPARK-31173) Spark Kubernetes add tolerations and nodeName support

2020-03-19 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17062785#comment-17062785
 ] 

Jiaxin Shan commented on SPARK-31173:
-

I am trying to get more details.

There are two levels of performance issues:
 # Every pod needs to be mutated by the webhook, which drags down overall 
throughput.
 # Node selectors, tolerations, and node affinities each have an impact on 
Kubernetes scheduler performance.

Does the difference in your benchmark reflect both of the points above?

 

BTW, tolerations should be supported through pod templates in the 3.0.0 
release; a sketch of that route follows.
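
A minimal sketch of the pod-template route (a hedged illustration; the taint 
key and file paths are assumptions, not from this issue):

{code:java}
# Hedged sketch: express a toleration via a pod template instead of a webhook.
cat > /tmp/pod-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  tolerations:
  - key: "virtual-kubelet.io/provider"
    operator: "Exists"
    effect: "NoSchedule"
EOF

spark-submit \
  --conf spark.kubernetes.driver.podTemplateFile=/tmp/pod-template.yaml \
  --conf spark.kubernetes.executor.podTemplateFile=/tmp/pod-template.yaml \
  ...
{code}

Because the template is merged at submission time, no mutating webhook sits on 
the pod-creation path, avoiding the throughput hit benchmarked below.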

 

 

 

> Spark Kubernetes add tolerations and nodeName support
> -
>
> Key: SPARK-31173
> URL: https://issues.apache.org/jira/browse/SPARK-31173
> Project: Spark
>  Issue Type: New Feature
>  Components: Kubernetes
>Affects Versions: 3.1.0, 2.4.6
> Environment: Alibaba Cloud ACK with spark 
> operator(v1beta2-1.1.0-2.4.5) and spark(2.4.5)
>Reporter: zhongwei liu
>Priority: Trivial
>  Labels: features
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> When you run Spark on a serverless Kubernetes cluster (virtual-kubelet), you 
> need to specify nodeSelectors, tolerations, and even nodeName if you want to 
> gain better scheduling performance. Currently Spark doesn't support 
> tolerations; if you want to use this feature, you must use an admission 
> controller webhook to decorate the pod, but the performance of that is 
> extremely bad. Here is the benchmark:
> With webhook: batch size 500, pod creation about 7 pods/s, all pods running 
> after 5 min.
> Without webhook: batch size 500, pod creation more than 500 pods/s, all pods 
> running after 45 s.
> Adding tolerations and nodeName support in Spark will bring great help when 
> you want to run a large-scale job on a serverless Kubernetes cluster.
>  
>  






[jira] [Commented] (SPARK-27696) kubernetes driver pod not deleted after finish.

2020-01-25 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-27696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17023458#comment-17023458
 ] 

Jiaxin Shan commented on SPARK-27696:
-

I think this is by design. If the driver pod were deleted after job completion, 
how would users check the Spark application logs?

> kubernetes driver pod not deleted after finish.
> ---
>
> Key: SPARK-27696
> URL: https://issues.apache.org/jira/browse/SPARK-27696
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes
>Affects Versions: 2.4.0
>Reporter: Henry Yu
>Priority: Minor
>
> When submitting to k8s, the driver pod is not deleted after job completion, 
> and k8s then complains that the driver pod name already exists. It is 
> especially painful when we use a workflow tool to resubmit a failed Spark 
> job. (By the way, the client always exiting with 0 is another painful issue.)
> I have fixed this with a new config, 
> spark.kubernetes.submission.deleteCompletedPod=true, in our internally 
> maintained Spark version. 
>  Do you have more insights, or should I make a PR for this issue?
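
In the meantime, a hedged workaround sketch (not a Spark feature; it assumes 
the standard spark-role=driver label that the Kubernetes submission client 
attaches to driver pods):

{code:java}
# Delete driver pods that have already completed successfully.
kubectl delete pod \
  -l spark-role=driver \
  --field-selector=status.phase=Succeeded
{code}

Logs stay available until this cleanup runs, which is the trade-off the 
comment above points at.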






[jira] [Commented] (SPARK-30626) [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID env

2020-01-23 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-30626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17022426#comment-17022426
 ] 

Jiaxin Shan commented on SPARK-30626:
-

I have an improvement for this; let me create a PR.

> [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID env
> 
>
> Key: SPARK-30626
> URL: https://issues.apache.org/jira/browse/SPARK-30626
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.4.4, 3.0.0
>Reporter: Jiaxin Shan
>Priority: Minor
>
> This should be a minor improvement.
> The use case: we want to look up this environment variable to create an 
> application folder and redirect driver logs into it. Executors already have 
> it, and we want to make the same change for the driver. 
>  
> {code:java}
> Limits:
>  cpu: 1024m
>  memory: 896Mi
>  Requests:
>  cpu: 1
>  memory: 896Mi
> Environment:
>  SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
>  SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
>  SPARK_CONF_DIR: /opt/spark/conf{code}
>  
> [https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79]
> We need SPARK_APPLICATION_ID inside the pod to organize logs 
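
For illustration, a hedged sketch of how the variable could be used once the 
driver pod has it (the log directory and the entrypoint wrapper are 
assumptions for illustration, not part of this issue):

{code:java}
# Inside the driver container: group logs per application using the env var.
mkdir -p "/var/log/spark/${SPARK_APPLICATION_ID}"
/opt/entrypoint.sh driver "$@" 2>&1 \
  | tee "/var/log/spark/${SPARK_APPLICATION_ID}/driver.log"
{code}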






[jira] [Updated] (SPARK-30626) [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID env

2020-01-23 Thread Jiaxin Shan (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiaxin Shan updated SPARK-30626:

Summary: [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID env  
(was: [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID)

> [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID env
> 
>
> Key: SPARK-30626
> URL: https://issues.apache.org/jira/browse/SPARK-30626
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.4.4, 3.0.0
>Reporter: Jiaxin Shan
>Priority: Minor
>
> This should be a minor improvement.
> The use case: we want to look up this environment variable to create an 
> application folder and redirect driver logs into it. Executors already have 
> it, and we want to make the same change for the driver. 
>  
> {code:java}
> Limits:
>  cpu: 1024m
>  memory: 896Mi
>  Requests:
>  cpu: 1
>  memory: 896Mi
> Environment:
>  SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
>  SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
>  SPARK_CONF_DIR: /opt/spark/conf{code}
>  
> [https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79]
> We need SPARK_APPLICATION_ID inside the pod to organize logs 






[jira] [Updated] (SPARK-30626) [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID

2020-01-23 Thread Jiaxin Shan (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiaxin Shan updated SPARK-30626:

Description: 
This should be a minor improvement.

The use case: we want to look up this environment variable to create an 
application folder and redirect driver logs into it. Executors already have it, 
and we want to make the same change for the driver. 

 
{code:java}
Limits:
 cpu: 1024m
 memory: 896Mi
 Requests:
 cpu: 1
 memory: 896Mi
Environment:
 SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
 SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
 SPARK_CONF_DIR: /opt/spark/conf{code}
 

[https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79]

We need SPARK_APPLICATION_ID inside the pod to organize logs 

  was:
This should be a minor improvement.

The use case: we want to look up this environment variable to create an 
application folder and redirect driver logs into it. Executors already have it, 
and we want to make the same change for the driver. 

```
 Limits:
 cpu: 1024m
 memory: 896Mi
 Requests:
 cpu: 1
 memory: 896Mi
 Environment:
 SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
 SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
 SPARK_CONF_DIR: /opt/spark/conf

```

 

https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79

We need SPARK_APPLICATION_ID inside the pod to organize logs 


> [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID
> 
>
> Key: SPARK-30626
> URL: https://issues.apache.org/jira/browse/SPARK-30626
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 2.4.4, 3.0.0
>Reporter: Jiaxin Shan
>Priority: Minor
>
> This should be a minor improvement.
> The use case: we want to look up this environment variable to create an 
> application folder and redirect driver logs into it. Executors already have 
> it, and we want to make the same change for the driver. 
>  
> {code:java}
> Limits:
>  cpu: 1024m
>  memory: 896Mi
>  Requests:
>  cpu: 1
>  memory: 896Mi
> Environment:
>  SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
>  SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
>  SPARK_CONF_DIR: /opt/spark/conf{code}
>  
> [https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79]
> We need SPARK_APPLICATION_ID inside the pod to organize logs 






[jira] [Created] (SPARK-30626) [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID

2020-01-23 Thread Jiaxin Shan (Jira)
Jiaxin Shan created SPARK-30626:
---

 Summary: [K8S] Spark driver pod doesn't have SPARK_APPLICATION_ID
 Key: SPARK-30626
 URL: https://issues.apache.org/jira/browse/SPARK-30626
 Project: Spark
  Issue Type: Improvement
  Components: Kubernetes
Affects Versions: 2.4.4, 3.0.0
Reporter: Jiaxin Shan


This should be a minor improvement.

The use case: we want to look up this environment variable to create an 
application folder and redirect driver logs into it. Executors already have it, 
and we want to make the same change for the driver. 

```
 Limits:
 cpu: 1024m
 memory: 896Mi
 Requests:
 cpu: 1
 memory: 896Mi
 Environment:
 SPARK_DRIVER_BIND_ADDRESS: (v1:status.podIP)
 SPARK_LOCAL_DIRS: /var/data/spark-9c315655-aba4-47fb-821c-30268d02af7e
 SPARK_CONF_DIR: /opt/spark/conf

```

 

https://github.com/apache/spark/blob/afe70b3b5321439318a456c7d19b7074171a286a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala#L73-L79

We need SPARK_APPLICATION_ID inside the pod to organize logs 






[jira] [Commented] (SPARK-28022) k8s pod affinity to achieve cloud native friendly autoscaling

2019-10-18 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954398#comment-16954398
 ] 

Jiaxin Shan commented on SPARK-28022:
-

I don't quite understand the use case here. It sounds like you want to place 
your executors as close together as possible.

Kubernetes has native support for node affinity and pod affinity, but the two 
are a little different, even though both make your pods sit close together to 
some degree:
 # Node selector or node affinity: the k8s scheduler places your application 
on a subset of the node pool. The problem is that with a large pool of certain 
nodes, it won't bin-pack inside the target node group. In a cloud environment 
with the autoscaler enabled, it at least guarantees resources are utilized.
 # Pod affinity: the k8s scheduler tries to find a qualifying pod and place 
subsequent pods on the same node.

My question is: can either of these address your use case? A sketch of the 
pod-affinity option follows.
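
For concreteness, a hedged sketch of option 2 through a pod template (it 
assumes the spark-app-selector label that the Kubernetes backend applies to an 
application's pods; the weight and topology key are illustrative):

{code:java}
# Hedged sketch: soft pod affinity so an app's executors pack onto fewer nodes.
cat > /tmp/affinity-template.yaml <<'EOF'
apiVersion: v1
kind: Pod
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: spark-app-selector
              operator: Exists
          topologyKey: kubernetes.io/hostname
EOF

spark-submit \
  --conf spark.kubernetes.executor.podTemplateFile=/tmp/affinity-template.yaml \
  ...
{code}

Note the limitation the reporter calls out below: the application id is not 
known when the template is written, so the selector can only match "any Spark 
pod" (Exists) rather than this application's pods specifically.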

> k8s pod affinity to achieve cloud native friendly autoscaling 
> --
>
> Key: SPARK-28022
> URL: https://issues.apache.org/jira/browse/SPARK-28022
> Project: Spark
>  Issue Type: New Feature
>  Components: Kubernetes
>Affects Versions: 3.0.0
>Reporter: Henry Yu
>Priority: Major
>
> Hi, in order to achieve cloud-native friendly autoscaling, I propose adding 
> a pod affinity feature.
> Traditionally, when we run Spark on a fixed-size YARN cluster, it makes sense 
> to spread containers across every node.
> Coming to cloud-native resource management, we want to release a node when we 
> no longer need it.
> A pod affinity feature amounts to placing all pods of a given application on 
> some nodes instead of across all nodes.
> By the way, using a pod template is not a good choice here; adding the 
> application id to the pod affinity term at submit time is more robust.
>  






[jira] [Commented] (SPARK-28042) Support mapping spark.local.dir to hostPath volume

2019-09-08 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925349#comment-16925349
 ] 

Jiaxin Shan commented on SPARK-28042:
-

[~dongjoon] Thanks! I will try to cherry-pick changes and build a customized 
version for now. 

> Support mapping spark.local.dir to hostPath volume
> --
>
> Key: SPARK-28042
> URL: https://issues.apache.org/jira/browse/SPARK-28042
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Minor
> Fix For: 3.0.0
>
>
> Currently the k8s executor builder mounts spark.local.dir as an emptyDir or 
> in-memory volume. That satisfies small workloads, but in heavy workloads like 
> TPC-DS both can run into problems: pods evicted due to disk pressure when 
> using emptyDir, and OOM when using tmpfs.
> In particular in cloud environments, users may allocate a cluster with a 
> minimal configuration and attach cloud storage when running the workload. In 
> this case, we could specify multiple elastic storage volumes as 
> spark.local.dir to accelerate spilling. 
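
For reference, a hedged sketch of the usage this enables (Spark 3.0 volume 
syntax; the mount and host paths are illustrative):

{code:java}
# Back spark.local.dir with a hostPath volume by naming it spark-local-dir-1;
# additional spark-local-dir-N volumes add more local directories.
spark-submit \
  --conf spark.kubernetes.executor.volumes.hostPath.spark-local-dir-1.mount.path=/data/spark-local \
  --conf spark.kubernetes.executor.volumes.hostPath.spark-local-dir-1.options.path=/mnt/disk1 \
  ...
{code}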






[jira] [Commented] (SPARK-28042) Support mapping spark.local.dir to hostPath volume

2019-09-08 Thread Jiaxin Shan (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16925265#comment-16925265
 ] 

Jiaxin Shan commented on SPARK-28042:
-

[~vanzin] Do we usually backport these improvements to 2.4.x? 

> Support mapping spark.local.dir to hostPath volume
> --
>
> Key: SPARK-28042
> URL: https://issues.apache.org/jira/browse/SPARK-28042
> Project: Spark
>  Issue Type: Improvement
>  Components: Kubernetes
>Affects Versions: 3.0.0
>Reporter: Junjie Chen
>Assignee: Junjie Chen
>Priority: Minor
> Fix For: 3.0.0
>
>
> Currently the k8s executor builder mounts spark.local.dir as an emptyDir or 
> in-memory volume. That satisfies small workloads, but in heavy workloads like 
> TPC-DS both can run into problems: pods evicted due to disk pressure when 
> using emptyDir, and OOM when using tmpfs.
> In particular in cloud environments, users may allocate a cluster with a 
> minimal configuration and attach cloud storage when running the workload. In 
> this case, we could specify multiple elastic storage volumes as 
> spark.local.dir to accelerate spilling. 






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-07 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787163#comment-16787163
 ] 

Jiaxin Shan commented on SPARK-26742:
-

[~shaneknapp] Thanks for letting me know. I use VirtualBox as the VM driver 
locally. Right now I have to manually create the namespace and service account 
and specify them in the integration testing scripts, which is not fully 
automated; the steps look roughly like the sketch below.
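
The manual steps in question (a sketch; the namespace and service-account 
names are placeholders, not values the scripts require):

{code:java}
# Pre-create the namespace and service account the tests will use.
kubectl create namespace spark-it
kubectl create serviceaccount spark -n spark-it
kubectl create clusterrolebinding spark-it-role \
  --clusterrole=edit \
  --serviceaccount=spark-it:spark
{code}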

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-07 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787145#comment-16787145
 ] 

Jiaxin Shan commented on SPARK-26742:
-

Double-confirming that the master PR change passes the integration tests on 
the following Kubernetes versions:
 * v1.10.13
 * v1.11.7
 * v1.12.6
 * v1.13.3

{code:java}
Tests: succeeded 15, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 4.647 s]
[INFO] Spark Project Tags . SUCCESS [ 3.894 s]
[INFO] Spark Project Local DB . SUCCESS [ 2.867 s]
[INFO] Spark Project Networking ... SUCCESS [ 5.506 s]
[INFO] Spark Project Shuffle Streaming Service  SUCCESS [ 3.178 s]
[INFO] Spark Project Unsafe ... SUCCESS [ 3.026 s]
[INFO] Spark Project Launcher . SUCCESS [ 3.903 s]
[INFO] Spark Project Core . SUCCESS [ 32.987 s]
[INFO] Spark Project Kubernetes Integration Tests . SUCCESS [08:28 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 09:28 min
[INFO] Finished at: 2019-03-07T10:51:01-08:00
[INFO] 
➜ integration-tests git:(update_k8s_sdk_master) ✗ minikube start --cpus 4 
--memory 6000 --kubernetes-version=v1.10.13
Tests: succeeded 15, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 4.852 s]
[INFO] Spark Project Tags . SUCCESS [ 4.028 s]
[INFO] Spark Project Local DB . SUCCESS [ 2.872 s]
[INFO] Spark Project Networking ... SUCCESS [ 5.205 s]
[INFO] Spark Project Shuffle Streaming Service  SUCCESS [ 3.200 s]
[INFO] Spark Project Unsafe ... SUCCESS [ 3.681 s]
[INFO] Spark Project Launcher . SUCCESS [ 3.734 s]
[INFO] Spark Project Core . SUCCESS [ 31.357 s]
[INFO] Spark Project Kubernetes Integration Tests . SUCCESS [08:02 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 09:01 min
[INFO] Finished at: 2019-03-07T11:11:39-08:00
[INFO] 
➜ integration-tests git:(update_k8s_sdk_master) ✗ minikube start --cpus 4 
--memory 6000 --kubernetes-version=v1.11.7
All tests passed.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 4.512 s]
[INFO] Spark Project Tags . SUCCESS [ 3.632 s]
[INFO] Spark Project Local DB . SUCCESS [ 3.182 s]
[INFO] Spark Project Networking ... SUCCESS [ 4.543 s]
[INFO] Spark Project Shuffle Streaming Service  SUCCESS [ 2.444 s]
[INFO] Spark Project Unsafe ... SUCCESS [ 2.777 s]
[INFO] Spark Project Launcher . SUCCESS [ 3.773 s]
[INFO] Spark Project Core . SUCCESS [ 29.389 s]
[INFO] Spark Project Kubernetes Integration Tests . SUCCESS [08:38 min]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 09:33 min
[INFO] Finished at: 2019-03-07T11:47:33-08:00
[INFO] 
➜ integration-tests git:(update_k8s_sdk_master) ✗ minikube start --cpus 4 
--memory 6000 --kubernetes-version=v1.12.6
Tests: succeeded 15, failed 0, canceled 0, ignored 0, pending 0
All tests passed.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 5.327 s]
[INFO] Spark Project Tags . SUCCESS [ 4.828 s]
[INFO] Spark Project Local DB . SUCCESS [ 3.686 s]

[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-07 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16787097#comment-16787097
 ] 

Jiaxin Shan commented on SPARK-26742:
-

Thanks [~shaneknapp] for the help on the testing stuff. It would be great to 
add the info [~skonto] provided to the integration-test README.md; that would 
help onboard new developers.

The cut-over plan sounds good.

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Comment Edited] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-07 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786573#comment-16786573
 ] 

Jiaxin Shan edited comment on SPARK-26742 at 3/7/19 9:55 AM:
-

[~skonto] Thanks! This is really helpful. I am running the following commands 
to kick off the tests. They pop up some timeout failures, but the run does get 
into the tests themselves. I will finish local integration testing for all the 
k8s versions tomorrow morning and then update the thread.
{code:java}
./dev/make-distribution.sh --name spark-0307 --r --tgz -Psparkr -Phadoop-2.7 
-Pkubernetes -Phive
tar -zxvf spark-3.0.0-SNAPSHOT-bin-test.tgz

./dev/dev-run-integration-tests.sh --spark-tgz 
/Users/me/Github/spark/spark-3.0.0-SNAPSHOT-bin-spark-0307-v1.tgz
{code}
 

Do you have any idea about this error?
{code:java}
- Run SparkPi with no resources *** FAILED ***
The code passed to eventually never returned normally. Attempted 70 times over 
2.003577789266 minutes. Last failure message: false was not true. 
(KubernetesSuite.scala:276)
- Run SparkPi with a very long application name. *** FAILED ***
The code passed to eventually never returned normally. Attempted 70 times over 
2.003119812684 minutes. Last failure message: false was not true. 
(KubernetesSuite.scala:276)
{code}
 


was (Author: seedjeffwan):
[~skonto] Thanks! This is really helpful. I am running the following commands 
to kick off the tests. They pop up some timeout failures, but the run does get 
into the tests themselves. I will finish local integration testing for all the 
k8s versions tomorrow morning and then update the thread.
{code:java}
./dev/make-distribution.sh --name spark-0307 --r --tgz -Psparkr -Phadoop-2.7 
-Pkubernetes -Phive
tar -zxvf spark-3.0.0-SNAPSHOT-bin-test.tgz

./dev/dev-run-integration-tests.sh --spark-tgz 
/Users/me/Github/spark/spark-3.0.0-SNAPSHOT-bin-spark-0307-v1.tgz
{code}
 

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-07 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786573#comment-16786573
 ] 

Jiaxin Shan commented on SPARK-26742:
-

[~skonto] Thanks! This is really helpful. I am running the following commands 
to kick off the tests. They pop up some timeout failures, but the run does get 
into the tests themselves. I will finish local integration testing for all the 
k8s versions tomorrow morning and then update the thread.
{code:java}
./dev/make-distribution.sh --name spark-0307 --r --tgz -Psparkr -Phadoop-2.7 
-Pkubernetes -Phive
tar -zxvf spark-3.0.0-SNAPSHOT-bin-test.tgz

./dev/dev-run-integration-tests.sh --spark-tgz 
/Users/me/Github/spark/spark-3.0.0-SNAPSHOT-bin-spark-0307-v1.tgz
{code}
 

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Comment Edited] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-06 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786418#comment-16786418
 ] 

Jiaxin Shan edited comment on SPARK-26742 at 3/7/19 6:05 AM:
-

Got some failures. I think they're not related to the Kubernetes cluster 
version but to some other configuration. Do you have any ideas? [~shaneknapp] 
[~skonto]

 
{code:java}
[INFO] --- exec-maven-plugin:1.4.0:exec (setup-integration-test-env) @ 
spark-kubernetes-integration-tests_2.12 ---
Must specify a Spark tarball to build Docker images against with --spark-tgz.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 4.754 s]
[INFO] Spark Project Tags . SUCCESS [ 3.560 s]
[INFO] Spark Project Local DB . SUCCESS [ 3.040 s]
[INFO] Spark Project Networking ... SUCCESS [ 4.559 s]
[INFO] Spark Project Shuffle Streaming Service  SUCCESS [ 2.559 s]
[INFO] Spark Project Unsafe ... SUCCESS [ 3.040 s]
[INFO] Spark Project Launcher . SUCCESS [ 3.807 s]
[INFO] Spark Project Core . SUCCESS [ 32.979 s]
[INFO] Spark Project Kubernetes Integration Tests . FAILURE [ 2.045 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:00 min
[INFO] Finished at: 2019-03-06T21:58:26-08:00
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:exec 
(setup-integration-test-env) on project 
spark-kubernetes-integration-tests_2.12: Command execution failed.: Process 
exited with an error: 1 (Exit value: 1) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.4.0:exec (setup-integration-test-env) on 
project spark-kubernetes-integration-tests_2.12: Command execution failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Command execution 
failed.
at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:276)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at 

[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-06 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786418#comment-16786418
 ] 

Jiaxin Shan commented on SPARK-26742:
-

Got some failures. I think they're not related to the Kubernetes cluster 
version but to some other configuration. Do you have any ideas? [~shaneknapp] 
[~skonto]

 

```

[INFO] --- exec-maven-plugin:1.4.0:exec (setup-integration-test-env) @ 
spark-kubernetes-integration-tests_2.12 ---
Must specify a Spark tarball to build Docker images against with --spark-tgz.
[INFO] 
[INFO] Reactor Summary for Spark Project Parent POM 3.0.0-SNAPSHOT:
[INFO]
[INFO] Spark Project Parent POM ... SUCCESS [ 4.754 s]
[INFO] Spark Project Tags . SUCCESS [ 3.560 s]
[INFO] Spark Project Local DB . SUCCESS [ 3.040 s]
[INFO] Spark Project Networking ... SUCCESS [ 4.559 s]
[INFO] Spark Project Shuffle Streaming Service  SUCCESS [ 2.559 s]
[INFO] Spark Project Unsafe ... SUCCESS [ 3.040 s]
[INFO] Spark Project Launcher . SUCCESS [ 3.807 s]
[INFO] Spark Project Core . SUCCESS [ 32.979 s]
[INFO] Spark Project Kubernetes Integration Tests . FAILURE [ 2.045 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:00 min
[INFO] Finished at: 2019-03-06T21:58:26-08:00
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.4.0:exec 
(setup-integration-test-env) on project 
spark-kubernetes-integration-tests_2.12: Command execution failed.: Process 
exited with an error: 1 (Exit value: 1) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:exec-maven-plugin:1.4.0:exec (setup-integration-test-env) on 
project spark-kubernetes-integration-tests_2.12: Command execution failed.
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:215)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
 at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
 at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
 at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
 at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
 at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:288)
 at org.apache.maven.cli.MavenCli.main (MavenCli.java:192)
 at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke (Method.java:498)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:289)
 at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
 at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:415)
 at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: Command execution 
failed.
 at org.codehaus.mojo.exec.ExecMojo.execute (ExecMojo.java:276)
 at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:210)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
 at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
 at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
 at org.apache.maven.DefaultMaven.doExecute 

[jira] [Comment Edited] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-06 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786217#comment-16786217
 ] 

Jiaxin Shan edited comment on SPARK-26742 at 3/6/19 11:28 PM:
--

I am willing to do that. I will make sure the local integration tests pass and 
then check it in. [~shaneknapp]

I am a new contributor and not very familiar with the integration settings, so 
it may take some time. I will sync with you later today.


was (Author: seedjeffwan):
I am willing to do that. I will make sure the local integration tests pass and 
then check it in. [~shaneknapp]

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-06 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786217#comment-16786217
 ] 

Jiaxin Shan commented on SPARK-26742:
-

I am willing to do that. I will make sure the local integration tests pass and 
then check it in. [~shaneknapp]

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.2

2019-03-06 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786140#comment-16786140
 ] 

Jiaxin Shan commented on SPARK-26742:
-

Agreed on targeting v1.13.x, even though 4.1.2 may not pass the compatibility 
test.

Here's a feature list for v1.13.0; we need to make sure the APIs Spark uses 
are not affected:

[https://sysdig.com/blog/whats-new-in-kubernetes-1-13/]

 

> Bump Kubernetes Client Version to 4.1.2
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
> Fix For: 3.0.0
>
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Comment Edited] (SPARK-26742) Bump Kubernetes Client Version to 4.1.1

2019-02-15 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769618#comment-16769618
 ] 

Jiaxin Shan edited comment on SPARK-26742 at 2/15/19 7:25 PM:
--

[~dongjoon] A follow-up question: I think the latest kubernetes-client 4.1.2 
is compatible with Spark 3.0.0. If we'd like to patch 2.4.x, what's the right 
way to go?
 # Cherry-pick the version upgrade to 2.4.x and resolve conflicts?
 # Patch 2.4.x directly?


was (Author: jiaxin):
[~dongjoon] A follow-up question: I think the latest kubernetes-client is 
compatible with Spark 3.0.0. If we'd like to patch 2.4.x, what's the right way 
to go?
 # Cherry-pick the version upgrade to 2.4.x and resolve conflicts?
 # Patch 2.4.x directly?

> Bump Kubernetes Client Version to 4.1.1
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix






[jira] [Commented] (SPARK-26742) Bump Kubernetes Client Version to 4.1.1

2019-02-15 Thread Jiaxin Shan (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-26742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16769618#comment-16769618
 ] 

Jiaxin Shan commented on SPARK-26742:
-

[~dongjoon] A follow-up question: I think the latest kubernetes-client is 
compatible with Spark 3.0.0. If we'd like to patch 2.4.x, what's the right way 
to go?
 # Cherry-pick the version upgrade to 2.4.x and resolve conflicts? (A sketch 
follows this list.)
 # Patch 2.4.x directly?
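
For option 1, the mechanics would look roughly like this (a sketch; the commit 
hash is a placeholder for the actual upgrade commit on master):

{code:java}
# Hedged sketch of the cherry-pick route onto the 2.4 maintenance branch.
git checkout branch-2.4
git cherry-pick <upgrade-commit-sha>
# Resolve any conflicts, then rebuild and rerun the k8s integration tests.
{code}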

> Bump Kubernetes Client Version to 4.1.1
> ---
>
> Key: SPARK-26742
> URL: https://issues.apache.org/jira/browse/SPARK-26742
> Project: Spark
>  Issue Type: Dependency upgrade
>  Components: Kubernetes
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Steve Davids
>Priority: Major
>  Labels: easyfix
>
> Spark 2.x is using Kubernetes Client 3.x, which is pretty old; the master 
> branch has 4.0. The client should be upgraded to 4.1.1 to have the broadest 
> Kubernetes compatibility support for newer clusters: 
> https://github.com/fabric8io/kubernetes-client#compatibility-matrix


