GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/20167
Allow providing Mesos principal & secret via files (SPARK-16501)
## What changes were proposed in this pull request?
This commit modifies the Mesos submission client to allow the princ
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
@felixcheung Well, currently you are required to provide them in clear text,
either via `--conf` arguments or in your `spark-defaults.conf`, which was
unacceptable to our security-conscious users. The
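The file-based alternative can be sketched as follows. This is a hedged illustration, not the PR's exact mechanics: the paths are made up, and `spark.mesos.principal.file` / `spark.mesos.secret.file` are assumed to be the file-based keys this PR introduces.

```shell
# Keep the Mesos credentials in files readable only by the submitting user,
# instead of passing them in clear text on the command line or in
# spark-defaults.conf. Paths are illustrative.
printf 'my-principal' > /tmp/mesos-principal
printf 'my-secret'    > /tmp/mesos-secret
chmod 600 /tmp/mesos-principal /tmp/mesos-secret

# The submission would then reference the files rather than the values:
#   spark-submit ... \
#     --conf spark.mesos.principal.file=/tmp/mesos-principal \
#     --conf spark.mesos.secret.file=/tmp/mesos-secret
cat /tmp/mesos-principal
```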
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
CC @ArtRand @vanzin I would appreciate your reviews as and when you have
time
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/20167#discussion_r161484346
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
---
@@ -80,10 +80,27 @@ trait
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
@vanzin @ArtRand Thanks for the initial reviews, I have refactored the
patch based on comments so it should hopefully be in a better state now. I was
also able to add some unit test coverage for this
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
@vanzin Thanks for the detailed review. I have now added the ability to
also specify the credentials directly via environment variables. I have added
additional unit test coverage and a new section
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/20167#discussion_r164123285
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
---
@@ -71,40 +74,64 @@ trait
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
I've fixed up the Scala style issues, so I think this is ready to merge
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/20167
@vanzin Ok I should hopefully have all those addressed and the
documentation clarified appropriately
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r207299324
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -107,7 +109,14
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r208531808
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -164,7 +164,15 @@ private[spark] class SparkSubmit extends Logging
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r208530663
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -212,6 +212,60 @@ private[spark] object Config
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22215
[SPARK-25222][K8S][WIP] Improve container status logging
## What changes were proposed in this pull request?
Currently when running Spark on Kubernetes a logger is run by the client
that
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/13599
@holdenk What we're doing in some of our products currently is that we
require that users create their Python environments up front and that they be
stored on a file system that is accessible t
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/21796
[SPARK-24833][K8S][WIP] Add host name aliases feature
## What changes were proposed in this pull request?
This adds a new feature to the driver and executor builders for K8S that
allows
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/21796
@liyinan926 Ok, I will keep an eye on that issue
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/21796
@felixcheung No problem, have already made a couple of comments on the
design doc
Github user rvesse closed the pull request at:
https://github.com/apache/spark/pull/21796
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r204403925
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -107,7 +109,14
Github user rvesse commented on the pull request:
https://github.com/apache/spark/pull/4650#issuecomment-121210915
A Spark plugin seems like a much better approach; I've done some
experimentation on a plugin for this, which seems much cleaner and
more lightweight though
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22584#discussion_r221538327
--- Diff: docs/running-on-kubernetes.md ---
@@ -799,7 +799,7 @@ specific to Spark on Kubernetes.
spark.kubernetes.local.dirs.tmpfs
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r221539921
--- Diff: docs/security.md ---
@@ -729,6 +729,15 @@ so that non-local processes can authenticate. These
delegation tokens in Kuberne
shared by the
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22146
@mccheah I was taking that as a given
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22146#discussion_r223620758
--- Diff: docs/running-on-kubernetes.md ---
@@ -799,4 +815,168 @@ specific to Spark on Kubernetes.
This sets the major Python version of the docker
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/18519
@ArtRand Any plans to add delegation token renewal under Mesos in the
future?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22215
@liyinan926 @nrchakradhar Addressed all your comments, thanks for the
reviews.
Is someone able to kick off the Jenkins testing on this PR
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22256
[SPARK-25262][K8S][WIP] Better support configurability of Spark scratch
space when using Kubernetes
## What changes were proposed in this pull request?
This change improves how Spark on
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22256
@skonto Here is the PR for the `SPARK_LOCAL_DIRS` behaviour customisation
we were discussing in the context of SPARK-24434.
I have minimised config to adding a single new setting for the
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22256#discussion_r213954156
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -37,41 +40,99 @@ private
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22256
@skonto I haven't done anything specific for the size limit ATM. Per the
K8S docs, `tmpfs`-backed `emptyDir` usage counts towards your container's memory
limits so you can jus
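The accounting point above can be illustrated with a pod-spec fragment (standard Kubernetes fields; the names and the 4Gi limit are illustrative, not taken from the PR): declaring an `emptyDir` with `medium: Memory` makes it `tmpfs`, and anything written to it is charged against the container's memory limit.

```shell
# Write out a pod-spec fragment showing a memory-backed emptyDir.
# With medium: Memory the volume is tmpfs, so usage counts toward the
# container's memory limit rather than node disk.
cat > tmpfs-volume-fragment.yaml <<'EOF'
volumes:
  - name: spark-local-dirs
    emptyDir:
      medium: Memory
containers:
  - name: spark-executor
    resources:
      limits:
        memory: "4Gi"   # tmpfs usage is charged against this limit
    volumeMounts:
      - name: spark-local-dirs
        mountPath: /tmp/spark-local
EOF
cat tmpfs-volume-fragment.yaml
```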
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22256#discussion_r214277672
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -37,41 +40,99 @@ private
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22256#discussion_r214416634
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -37,41 +40,99 @@ private
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22215#discussion_r214612510
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
---
@@ -60,4 +64,81 @@ private[spark] object
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22215
@mccheah Thanks for the review, have made the change you suggested to use
N/A instead of empty string.
I have left indentation as tabs for now, as I said in a previous comment
this was just
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22215#discussion_r214614277
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsLifecycleManager.scala
---
@@ -151,13 +152,15
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22323
[SPARK-25262][K8S] Allow SPARK_LOCAL_DIRS to be tmpfs backed on K8S
## What changes were proposed in this pull request?
The default behaviour of Spark on K8S currently is to create
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22256#discussion_r214623272
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -37,41 +40,99 @@ private
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/21669
@ifilonenko I think the issue with the `UnixUsername` might be avoided by
exporting `HADOOP_USER_NAME` as an environment variable in the pod spec,
set to the same value as `SPARK_USER
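The workaround described above can be sketched as a pod-spec environment fragment. This is a hedged sketch: the field names follow the standard Kubernetes pod spec, the user name `spark` and the file name are illustrative.

```shell
# Export HADOOP_USER_NAME in the pod spec with the same value as SPARK_USER,
# so Hadoop code can resolve the user name even without a passwd entry.
cat > pod-env-fragment.yaml <<'EOF'
env:
  - name: SPARK_USER
    value: spark
  - name: HADOOP_USER_NAME
    value: spark
EOF
cat pod-env-fragment.yaml
```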
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215187426
--- Diff: docs/running-on-kubernetes.md ---
@@ -215,6 +215,19 @@
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.clai
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215338145
--- Diff: docs/running-on-kubernetes.md ---
@@ -215,6 +215,19 @@
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.clai
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215625362
--- Diff: docs/running-on-kubernetes.md ---
@@ -215,6 +215,19 @@
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.clai
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215625299
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -45,6 +47,10 @@ private
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215625636
--- Diff: docs/running-on-kubernetes.md ---
@@ -215,6 +215,19 @@
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.clai
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215625448
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -225,6 +225,15 @@ private[spark] object Config
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22323#discussion_r215625508
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala
---
@@ -22,6 +22,7 @@ import
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22323
All comments so far addressed, can we kick off the PR builder on this now?
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22215
Think this is pretty much ready to merge, can folks take another look when
they get a chance
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r215692843
--- Diff: docs/security.md ---
@@ -722,6 +722,62 @@ with encryption, at least.
The Kerberos login will be periodically renewed using the provided
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r215695909
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -212,6 +212,60 @@ private[spark] object Config
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22256
Closed in favour of #22323 which has been merged
Github user rvesse closed the pull request at:
https://github.com/apache/spark/pull/22256
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/21669
@vanzin I think in the current implementation of this PR the Kerberos login
is happening inside the driver pod which is running inside the K8S cluster.
The old design from the Spark on K8S
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22748
Suggested reviewers: @mccheah @liyinan926 @skonto
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22748
[SPARK-25745][K8S] Improve docker-image-tool.sh script
## What changes were proposed in this pull request?
Adds error checking and handling to `docker` invocations ensuring the
script
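The kind of error handling the PR describes can be sketched as a small wrapper; the function name is illustrative, not the script's actual code. The idea is to run each `docker` invocation through a helper that fails loudly instead of letting the script continue in a broken state.

```shell
# Illustrative wrapper: run a command and surface a clear error on failure.
run_checked() {
  if ! "$@"; then
    echo "ERROR: command failed: $*" >&2
    return 1
  fi
}

# A docker build step would be wrapped like:
#   run_checked docker build -t "$IMAGE" -f "$DOCKERFILE" .
run_checked echo "docker build simulated ok"
run_checked false || echo "caught the failure, so the script can abort cleanly"
```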
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22608#discussion_r225614226
--- Diff: bin/docker-image-tool.sh ---
@@ -71,18 +71,29 @@ function build {
--build-arg
base_img=$(image_ref spark)
)
- local
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22748
> There seems to be overlapping logic between this PR and #22681
Yes sorry, I was having issues with the script while working on something
unrelated and hadn't realised your int
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22748#discussion_r225867566
--- Diff: bin/docker-image-tool.sh ---
@@ -44,28 +44,41 @@ function image_ref {
function build {
local BUILD_ARGS
local IMG_PATH
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22748#discussion_r225867655
--- Diff: bin/docker-image-tool.sh ---
@@ -78,20 +91,38 @@ function build {
docker build $NOCACHEARG "${BUILD_ARGS[@]}" \
-t $
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22681#discussion_r226289881
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile ---
@@ -18,6 +18,7 @@
FROM openjdk:8-alpine
ARG
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22681
I actually just got bitten by this today while trying to run the K8S integration
tests with custom images. The integration tests assume the runnable
distribution layout for `/opt/spark/examples` in the
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22748
Rebased onto master, should be ready for merging
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22782#discussion_r226983785
--- Diff: bin/docker-image-tool.sh ---
@@ -79,7 +79,7 @@ function build {
fi
# Verify that Spark has actually been built/is a runnable
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22782#discussion_r227274978
--- Diff: bin/docker-image-tool.sh ---
@@ -79,7 +79,7 @@ function build {
fi
# Verify that Spark has actually been built/is a runnable
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22782#discussion_r227373891
--- Diff: bin/docker-image-tool.sh ---
@@ -79,7 +79,7 @@ function build {
fi
# Verify that Spark has actually been built/is a runnable
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22805
[WIP][SPARK-25809][K8S][TEST] New K8S integration testing backends
## What changes were proposed in this pull request?
Currently K8S integration tests are hardcoded to use a `minikube
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r227400699
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
@srowen Yes, there are a lot of assumptions made by the integration tests
that are not documented anywhere and that I figured out by digging into the
code and POMs.
Broadly speaking right now to
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
@skonto Yep, I plan to do that tomorrow
At least for my `minikube` instance I found 4g insufficient and a couple of
tests would fail because their pods didn't get scheduled. 8g is pro
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r227833846
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r227835021
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/docker/DockerForDesktopBackend.scala
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
@ifilonenko I have done the generalisation today since it was fairly
trivial and it actually resolves a number of concerns about the first pass
implementation
@skonto I have restored the
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
Ran successfully against one of our dev K8S clusters today:
(attached screenshot: "screen shot 2018-10-25 at 17 39 40"; image link truncated)
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22820#discussion_r228458281
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
---
@@ -157,7 +157,9 @@ private[spark] object
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22820#discussion_r228458947
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
---
@@ -157,7 +157,9 @@ private[spark] object
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
@liyinan926 I will rebase and squash appropriately once PR #22820 is merged
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228467766
--- Diff: resource-managers/kubernetes/integration-tests/README.md ---
@@ -13,15 +13,45 @@ The simplest way to run the integration tests is to
install and
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228467591
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228467650
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/cloud/KubeConfigBackend.scala
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228467937
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228467705
--- Diff: resource-managers/kubernetes/integration-tests/README.md ---
@@ -41,12 +71,127 @@ The Spark code to test is handed to the integration
test system
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228469510
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/Utils.scala
---
@@ -27,4 +27,36
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228470004
--- Diff:
resource-managers/kubernetes/integration-tests/scripts/setup-integration-test-env.sh
---
@@ -71,19 +71,36 @@ if [[ $IMAGE_TAG == &quo
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r228470066
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/Utils.scala
---
@@ -27,4 +27,36
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r229395036
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/backend/IntegrationTestBackend.scala
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r229400780
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
---
@@ -42,6 +42,9 @@ private
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
Did a bunch more testing on our internal K8S clusters today after rebasing
this onto master. I am now happy that this is ready for final review and
merging so I have removed the `[WIP]` tag
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22805#discussion_r229630011
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/SparkKubernetesClientFactory.scala
---
@@ -63,6 +66,8 @@ private
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/22904
[SPARK-25887][K8S] Configurable K8S context support
## What changes were proposed in this pull request?
This enhancement allows for specifying the desired context to use for
the initial
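The idea can be sketched against a kubeconfig holding several contexts. This is a hedged illustration: the file and context names are made up, and `spark.kubernetes.context` is assumed to be the setting this PR introduces for selecting a context other than the current one.

```shell
# A kubeconfig typically carries several contexts; previously Spark always
# used current-context. A configurable context lets you pick one explicitly.
cat > demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: dev-cluster
contexts:
  - name: dev-cluster
  - name: prod-cluster
EOF

# The submission would then select the non-current context, e.g.:
#   spark-submit ... --conf spark.kubernetes.context=prod-cluster
grep 'name:' demo-kubeconfig
```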
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22805
@mccheah I have updated the comment to reference the follow up issue and
opened a PR for that as #22904. Can we go ahead and merge now
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22904#discussion_r230318413
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -23,6 +23,18 @@ import
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22904
@mccheah I have made the requested changes, can I get another review please?
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22911#discussion_r231195413
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/KerberosConfDriverFeatureStep.scala
---
@@ -126,20 +134,53
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/22959
At first glance this looks like a lot of nice simplification; I'll take a
proper look over this tomorrow
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/22959#discussion_r231838596
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesConf.scala
---
@@ -112,125 +72,139 @@ private[spark] case
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/23013
[SPARK-25023] More detailed security guidance for K8S
## What changes were proposed in this pull request?
Highlights specific security issues to be aware of with Spark on K8S and
GitHub user rvesse opened a pull request:
https://github.com/apache/spark/pull/23017
[WIP][SPARK-26015][K8S] Set a default UID for Spark on K8S Images
## What changes were proposed in this pull request?
Adds USER directives to the Dockerfiles which is configurable via build
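The Dockerfile change can be sketched like this. It is a hedged illustration: `spark_uid` is assumed to be the name of the build argument, and 185 is an illustrative default, not necessarily the PR's value.

```shell
# Write a Dockerfile fragment showing a configurable non-root USER,
# overridable at build time, e.g.:
#   docker build --build-arg spark_uid=1000 ...
cat > Dockerfile.fragment <<'EOF'
ARG spark_uid=185
# ... install Spark ...
# Run the image as the (non-root) Spark user by default.
USER ${spark_uid}
EOF
cat Dockerfile.fragment
```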
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/23017
For those with more knowledge of client mode here is the specific error
seen in the integration tests:
```
Exception in thread "main" java.lang.IllegalArgumentException: basedi
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/23013
@mccheah I have tried to keep it minimal and just point to the official K8S
docs. Obviously there is a balance to be had between high level warnings and
detailed advice. K8S is still a relatively
Github user rvesse commented on the issue:
https://github.com/apache/spark/pull/23013
@tgravescs Re: Point 1 I have a separate PR #22904 which makes some
improvements to the docs around that point
Github user rvesse commented on a diff in the pull request:
https://github.com/apache/spark/pull/23017#discussion_r233385383
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/entrypoint.sh ---
@@ -30,6 +30,10 @@ set -e
# If there is no passwd entry for