Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20722#discussion_r171972187
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -159,6 +159,19
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20722#discussion_r171968886
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -159,6 +159,19
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20722#discussion_r171968410
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -159,6 +159,19
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20553
Oops - I knew we tried that before. Just paged in the discussion from
https://github.com/apache/spark/pull/20460. Looks like that approach might need
revisiting
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20553
@cloud-fan @jiangxb1987 I assume you're referring to
[ExecutorAllocationManager.scala#L114-L118](https://github.com/apache/spark/blob/fc6fe8a1d0f161c4788f3db94de49a8669ba3bcc/core/src/main/scal
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20669
+1, the benefits outweigh the desire to be k8s-like in the approach. We
should have tests that validate the behavior changes though. I'm wondering if
we should move the integration tests int
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20553#discussion_r170138424
--- Diff: docs/running-on-kubernetes.md ---
@@ -576,14 +576,21 @@ specific to Spark on Kubernetes.
spark.kubernetes.driver.limit.cores
(none
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20553#discussion_r170137631
--- Diff: docs/running-on-kubernetes.md ---
@@ -576,14 +576,21 @@ specific to Spark on Kubernetes.
spark.kubernetes.driver.limit.cores
(none
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20553#discussion_r170138218
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodFactory.scala
---
@@ -144,7 +149,7 @@ private
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20460#discussion_r165195110
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -267,7 +267,11 @@ private[deploy] class SparkSubmitArguments(args
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20272
Makes sense. The change LGTM.
On Jan 29, 2018 10:23 AM, "Jiang Xingbo" wrote:
> IIUC there was an issue in launching Thrift Server on YARN cluster mode,
>
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20383
That plan LGTM - we can merge into 2.3 after removing the non-existent
config, and getting a clean test run against the 2.3 branch.
Should be low risk
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20383
I'm ok with backporting it once the non-existent config is removed and
we're confident we're covering all the requisite config. Also would make sense
to have a test under https://gi
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20383#discussion_r164084503
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/Checkpoint.scala ---
@@ -53,6 +53,21 @@ class Checkpoint(ssc: StreamingContext, val
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20272
@jiangxb1987 Is there any specific owner of the thrift server that we can
ping here? The testing looks good - so, all we're waiting for is confirmation
from them on the original intent b
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20314
@felixcheung @vanzin @rxin this is ready to merge.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20320
Suspected that it would use the same code path - might be worth a manual
test of that as well.
LGTM from my end.
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20320#discussion_r162484585
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/DriverConfigOrchestrator.scala
---
@@ -117,6 +117,13
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20320#discussion_r162485460
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/DriverConfigOrchestrator.scala
---
@@ -117,6 +117,13
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
Opened https://github.com/apache/spark/pull/20322.
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20322
[SPARK-23133][K8S] Fix passing java options to Executor
## What changes were proposed in this pull request?
Supersedes https://github.com/apache/spark/pull/20296 and targets master.
cc
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
I think one of us should do it then - in the interest of time and making
the next RC.
It looks like the PR author may be in a different timezone
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
That would also explain why the tests aren't running.
@sameeragarwal/@vanzin, can someone with manual merge powers retarget this
to the master b
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20320#discussion_r162465264
--- Diff: docs/running-on-kubernetes.md ---
@@ -117,7 +117,10 @@ This URI is the location of the example jar that is
already in the Docker image.
If
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20320#discussion_r162465377
--- Diff: docs/running-on-kubernetes.md ---
@@ -117,7 +117,10 @@ This URI is the location of the example jar that is
already in the Docker image.
If
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
Good point. @andrusha, can you target it to master instead?
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
LGTM, looks like we missed this when unifying the docker images. Would be
good to get this into 2.3.0 as well.
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
ok to test
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
cc/ @vanzin @felixcheung @liyinan926
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20296
retest this please
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20314#discussion_r162430012
--- Diff: docs/running-on-kubernetes.md ---
@@ -41,11 +45,10 @@ logs and remains in "completed" state in the Kubernetes
API until it
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20314#discussion_r162429954
--- Diff: docs/cluster-overview.md ---
@@ -52,8 +52,7 @@ The system currently supports three cluster managers:
* [Apache Mesos](running-on-mesos.html
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20314
/cc @felixcheung PTAL.
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20314
[SPARK-23104][K8S][Docs] Changes to Kubernetes scheduler documentation
## What changes were proposed in this pull request?
Docs changes:
- Adding a warning that the backend is
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20314
cc/ @vanzin @sameeragarwal @liyinan926 @ash211
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20272
@ozzieba, can we add a test to our integration test set to ensure this
works? It's rather late in the Spark 2.3 release, and I'd be apprehensive about
adding things that haven't b
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20256
Can someone trigger a retest please?
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20256#discussion_r161366632
--- Diff: dev/create-release/releaseutils.py ---
@@ -185,6 +185,7 @@ def get_commits(tag):
"graphx": "GraphX",
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20256#discussion_r161365711
--- Diff: dev/create-release/releaseutils.py ---
@@ -185,6 +185,7 @@ def get_commits(tag):
"graphx": "GraphX",
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20256
cc/ @felixcheung @sameeragarwal @vanzin @rxin
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20256
[SPARK-23063][K8S] K8s changes for publishing scripts (and a couple of
other misses)
## What changes were proposed in this pull request?
Including the `-Pkubernetes` flag in a few places
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20192
Great, thanks @vanzin. We'll probably need to add a test case using the new
option as well - I can take care of that.
Thanks for the c
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20192
Thanks @vanzin. I was waiting on spark-dev [thread on integration
testing](http://apache-spark-developers-list.1001551.n3.nabble.com/Integration-testing-and-Scheduler-Backends-td23105.html)
to
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20192
@vanzin, do you have some time to modify the integration tests as well? The
change LGTM, but a clean run on minikube would give us a lot more confidence.
Until the integration tests get checked in
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20192
Our [integration
tests](https://github.com/apache-spark-on-k8s/spark-integration) should be
changed to accommodate this modification and test it, and we should also add
some new tests utilizing the
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20193
LGTM. This exists in [our
fork](https://github.com/apache-spark-on-k8s/spark/blob/branch-2.2-kubernetes/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20187
```
Discovery starting.
Discovery completed in 191 milliseconds.
Run starting. Expected test count is: 8
KubernetesSuite:
- Run SparkPi with no resources
- Run SparkPi with a
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20187
@liyinan926 -- not yet, will be running them shortly.
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20187
cc/ @zihongz @bowei from K8s networking - can you guys confirm that using
`<name>.<namespace>.svc` is strictly better than using the FQDN, which
made an assumption of the dns zone (cluster.local
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20187
[SPARK-22992][K8S] Remove assumption of the DNS domain
## What changes were proposed in this pull request?
Remove the use of FQDN to access the driver because it assumes that it's
s
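The trade-off discussed above can be sketched as follows. This is an illustrative helper, not the actual Spark code from the PR; the object and method names are invented for the example:

```scala
// Illustrative sketch (not the actual Spark helper): building the driver's
// in-cluster hostname from its headless service name and namespace.
// The short "<name>.<namespace>.svc" form resolves regardless of the
// cluster's DNS zone, whereas a FQDN hard-codes a zone such as
// "cluster.local" -- the assumption the PR removes.
object DriverDns {
  def shortHostname(serviceName: String, namespace: String): String =
    s"$serviceName.$namespace.svc"

  def fqdn(serviceName: String, namespace: String,
           dnsZone: String = "cluster.local"): String =
    s"$serviceName.$namespace.svc.$dnsZone" // breaks on clusters with a custom zone
}
```

For example, `DriverDns.shortHostname("spark-driver-svc", "default")` yields `spark-driver-svc.default.svc`, which works on any conformant cluster DNS setup.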
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20170
LGTM, thanks!
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20157
LGTM, thanks!
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159754427
--- Diff: docs/running-on-kubernetes.md ---
@@ -215,8 +218,8 @@ If the pod has encountered a runtime error, the status
can be probed further usi
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159754082
--- Diff: sbin/build-push-docker-images.sh ---
@@ -19,51 +19,131 @@
# This script builds and pushes docker images when run from a release of
Spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20148#discussion_r159740606
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/BasicDriverConfigurationStep.scala
---
@@ -119,7 +119,7
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159735381
--- Diff: sbin/build-push-docker-images.sh ---
@@ -19,51 +19,118 @@
# This script builds and pushes docker images when run from a release of
Spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159724221
--- Diff: sbin/build-push-docker-images.sh ---
@@ -19,51 +19,118 @@
# This script builds and pushes docker images when run from a release of
Spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159720022
--- Diff: sbin/build-push-docker-images.sh ---
@@ -19,51 +19,118 @@
# This script builds and pushes docker images when run from a release of
Spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159716761
--- Diff: docs/running-on-kubernetes.md ---
@@ -16,6 +16,8 @@ Kubernetes scheduler that has been added to Spark.
you may setup a test cluster on your
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20154#discussion_r159719030
--- Diff: sbin/build-push-docker-images.sh ---
@@ -19,51 +19,118 @@
# This script builds and pushes docker images when run from a release of
Spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20148#discussion_r159588871
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/MountSecretsBootstrap.scala
---
@@ -28,20 +28,26 @@ private[spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20148#discussion_r159588348
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/MountSecretsBootstrap.scala
---
@@ -28,20 +28,26 @@ private[spark
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158568498
--- Diff: docs/running-on-kubernetes.md ---
@@ -120,6 +120,23 @@ by their appropriate remote URIs. Also, application
dependencies can be pre-moun
Those
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158566909
--- Diff: docs/running-on-kubernetes.md ---
@@ -120,6 +120,23 @@ by their appropriate remote URIs. Also, application
dependencies can be pre-moun
Those
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158566870
--- Diff: docs/running-on-kubernetes.md ---
@@ -120,6 +120,23 @@ by their appropriate remote URIs. Also, application
dependencies can be pre-moun
Those
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/19954
@mridulm, can you PTAL? This is ready for another round of review, and is
the last large feature we're planning to do i
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564513
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564505
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564324
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564775
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564741
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564475
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20059#discussion_r158564558
--- Diff: docs/running-on-kubernetes.md ---
@@ -528,51 +545,94 @@ specific to Spark on Kubernetes
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20051
Done, comments updated.
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20051
@mridulm thanks for looking into it! Looks like I missed updating the
dockerfile comments. On it.
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20051#discussion_r158427619
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark-base/Dockerfile
---
@@ -38,7 +38,7 @@ COPY jars /opt/spark/jars
COPY bin /opt
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20051
[SPARK-22866] [K8S] Fix path issue in Kubernetes dockerfile
## What changes were proposed in this pull request?
The path was recently changed in
https://github.com/apache/spark/pull/19946
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/19946
Ping - any new comments?
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r158357822
--- Diff: docs/running-on-yarn.md ---
@@ -18,7 +18,8 @@ Spark application's configuration (driver, executors, and
the AM when running in
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r158173626
--- Diff: docs/running-on-kubernetes.md ---
@@ -0,0 +1,573 @@
+---
+layout: global
+title: Running Spark on Kubernetes
+---
+* This will
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/19946
Addressed comments. This PR should be ready to go, with one separate issue
of renaming to be discussed
in https://issues.apache.org/jira/browse/SPARK-22853.
PTAL @vanzin @mridulm @ueshin
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r158172980
--- Diff: docs/building-spark.md ---
@@ -49,7 +49,7 @@ To create a Spark distribution like those distributed by
the
to be runnable, use `./dev/make
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r158170697
--- Diff: docs/running-on-kubernetes.md ---
@@ -0,0 +1,573 @@
+---
+layout: global
+title: Running Spark on Kubernetes
+---
+* This will
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/19946
Makes sense, will be sure to categorize appropriately moving forward.
Thanks!
On Dec 20, 2017 4:16 PM, "Marcelo Vanzin" wrote:
BTW can I ask you guys to start using &q
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20032
Fixed, thanks for the reviews. @mridulm, the questions about performance
are super helpful for us also to keep track of how we're doing wrt other
cluster managers and wrt user expectations. T
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20032#discussion_r158154833
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
---
@@ -86,7
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20032#discussion_r158148895
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
---
@@ -86,7
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20032
> (And what happens if set to 0/-ve ?)
We have a check preventing that in the option itself. The value should be
strictly greater than 0
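The validation mentioned above (a duration-valued setting that must be strictly positive) can be sketched like this. This is a simplified stand-in, not Spark's real `ConfigBuilder` time handling, and the names are invented for illustration:

```scala
// Illustrative sketch of a strictly-positive duration setting, in the
// spirit of spark.kubernetes.allocation.batch.delay. Parsing here is a
// simplified stand-in for Spark's real time-string handling.
object BatchDelay {
  def parseDelayMs(value: String): Long = {
    val ms =
      if (value.endsWith("ms")) value.stripSuffix("ms").trim.toLong
      else if (value.endsWith("s")) value.stripSuffix("s").trim.toLong * 1000L
      else value.trim.toLong // bare numbers are treated as milliseconds
    // reject 0 or negative values, matching the check described in the comment
    require(ms > 0, s"Allocation batch delay must be > 0, got '$value'")
    ms
  }
}
```

With this check, a setting of `"0"` or a negative value fails fast at configuration time rather than producing a degenerate allocation loop.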
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20032
Within the allocator's control loop, it's all asynchronous requests being
made for executor pods from the k8s API, so, each loop doesn't take very long.
If a user were to set a ver
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20032#discussion_r158131312
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
---
@@ -217,7
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r158008588
--- Diff: docs/running-on-kubernetes.md ---
@@ -0,0 +1,498 @@
+---
+layout: global
+title: Running Spark on Kubernetes
+---
+* This will
GitHub user foxish opened a pull request:
https://github.com/apache/spark/pull/20032
[SPARK-22845] [Scheduler] Modify spark.kubernetes.allocation.batch.delay to
take time instead of int
## What changes were proposed in this pull request?
Fixing configuration that was
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/19954
> I don't think they are independent as architecturally they make sense
together and represent a single concern: enabling use of remote dependencies
through init-containers. Missing any on
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r157967544
--- Diff: docs/running-on-kubernetes.md ---
@@ -0,0 +1,498 @@
+---
+layout: global
+title: Running Spark on Kubernetes
+---
+* This will
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r157966640
--- Diff: docs/running-on-kubernetes.md ---
@@ -0,0 +1,502 @@
+---
+layout: global
+title: Running Spark on Kubernetes
+---
+* This will
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19946#discussion_r157965524
--- Diff: docs/building-spark.md ---
@@ -49,7 +49,7 @@ To create a Spark distribution like those distributed by
the
to be runnable, use `./dev/make
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r157643149
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark-base/Dockerfile
---
@@ -0,0 +1,47 @@
+#
+# Licensed to the Apache Software
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20007
@vanzin PTAL
Github user foxish commented on the issue:
https://github.com/apache/spark/pull/20007
700 might break openshift that doesn't run these containers as root.
@erikerlandson can you verify?
Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/20007#discussion_r157531263
--- Diff: dev/make-distribution.sh ---
@@ -168,12 +168,18 @@ echo "Build flags: $@" >> "$DISTDIR/RELEASE"
# Copy jars