[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233266609
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import org.mockito.{Mock, MockitoAnnotations}
+import org.mockito.Matchers.{eq => Eq}
+import org.mockito.Mockito.when
+import org.mockito.invocation.InvocationOnMock
+import org.mockito.stubbing.Answer
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.{SparkConf, SparkFunSuite}
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesExecutorSpecificConf, SparkPod}
+import org.apache.spark.deploy.k8s.Constants.HADOOP_CONFIG_MAP_NAME
+import org.apache.spark.deploy.k8s.features.hadooputils.HadoopBootstrapUtil
+
+class HadoopConfExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
+  private val hadoopConfMapName = "HADOOP_CONF_NAME"
+  private val sparkPod = SparkPod.initialPod()
+
+  @Mock
+  private var kubernetesConf: KubernetesConf[KubernetesExecutorSpecificConf] = _
+
+  @Mock
+  private var hadoopBootstrapUtil: HadoopBootstrapUtil = _
+
+  before {
+    MockitoAnnotations.initMocks(this)
+    val sparkConf = new SparkConf(false)
+      .set(HADOOP_CONFIG_MAP_NAME, hadoopConfMapName)
+    when(kubernetesConf.sparkConf).thenReturn(sparkConf)
+  }
+
+  test("bootstrapHadoopConf being applied to a base spark pod") {
+    when(hadoopBootstrapUtil.bootstrapHadoopConfDir(
--- End diff --

> I think it's fine though that we want to test "We're calling this 
submodule with these parameters"

I think that's a very restrictive view of testing. I prefer my previous 
interpretation: the goal of this step is to mount a volume if a certain config 
is set. Whether it does that by calling into another module or not, it doesn't 
matter. That's what should be tested. You're taking "white box testing" to an 
extreme here - you're testing specific lines of code in the implementation 
instead of testing the contract ("for these inputs generate these outputs").

If it's done via another module, and you want to have unit tests for that 
module too, that's fine. But then you get into your other question...

Looking from the other side: if you change the implementation in a way that 
breaks this test, but keeps the actual functionality, what good was this test 
doing?

> did we need a submodule to begin with?

My answer to that question is no - and as part of my updates to SPARK-25815 
I'll be removing that class. But that's kinda orthogonal to my point.


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233257218
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala
 ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import org.mockito.{Mock, MockitoAnnotations}
+import org.mockito.Matchers.{eq => Eq}
+import org.mockito.Mockito.when
+import org.mockito.invocation.InvocationOnMock
+import org.mockito.stubbing.Answer
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.{SparkConf, SparkFunSuite}
+import org.apache.spark.deploy.k8s.{KubernetesConf, 
KubernetesExecutorSpecificConf, SparkPod}
+import org.apache.spark.deploy.k8s.Constants.HADOOP_CONFIG_MAP_NAME
+import org.apache.spark.deploy.k8s.features.hadooputils.HadoopBootstrapUtil
+
+class HadoopConfExecutorFeatureStepSuite extends SparkFunSuite with 
BeforeAndAfter {
+  private val hadoopConfMapName = "HADOOP_CONF_NAME"
+  private val sparkPod = SparkPod.initialPod()
+
+  @Mock
+  private var kubernetesConf: 
KubernetesConf[KubernetesExecutorSpecificConf] = _
+
+  @Mock
+  private var hadoopBootstrapUtil: HadoopBootstrapUtil = _
+
+  before {
+MockitoAnnotations.initMocks(this)
+val sparkConf = new SparkConf(false)
+  .set(HADOOP_CONFIG_MAP_NAME, hadoopConfMapName)
+when(kubernetesConf.sparkConf).thenReturn(sparkConf)
+  }
+
+  test("bootstrapHadoopConf being applied to a base spark pod") {
+when(hadoopBootstrapUtil.bootstrapHadoopConfDir(
--- End diff --

There is a point at which this becomes over-mocking, I agree. I think 
`KubernetesConf` shouldn't be mocked in most cases, because it's just a 
structure - similar to not mocking something like a case class or a hash map. I 
also don't think we need to use Mockito `Answer`s here; that could be done with 
stub implementations of the submodule. I don't feel strongly enough to insist 
on avoiding Mockito answers, but in general I think we should favor stub 
implementations over `Answer`: they're more readable.

I think it's fine though that we want to test "We're calling this submodule 
with these parameters", because the rest of the module's correctness is 
unit-test covered in the unit tests of that submodule. The general premise we'd 
like to follow is that a unit test should only execute code that is in the 
class under test. In other words, in this concrete case, since 
`HadoopBootstrapUtil` is a separate class, no code in `HadoopBootstrapUtil` 
should be run as part of the test. The class under test is responsible for 
calling the utility submodule with certain arguments but is not concerned about 
what that submodule actually does. Thus the test also doesn't have to be 
concerned here about what that submodule actually does, but should check that 
the submodule was actually called. If we want to check the submodule's 
correctness, we unit test the submodule.

Now - did we need a submodule to begin with? We could have, for example, not 
made HadoopBootstrapUtil a submodule at all, but just a `static` call on a 
utility method. I think that's debatable - it makes each test need to cover a 
larger amount of code; that is, we would no longer test the utility method in 
isolation, which is what we get here. So it's unclear whether 
HadoopBootstrapUtil being its own object that can be stubbed is the right 
approach.
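For illustration, a hand-written stub in place of a Mockito `Answer` could look like the sketch below. The trait and every name in it are simplified stand-ins invented for this example, not the real `HadoopBootstrapUtil` signature:

```scala
// Simplified stand-in types (illustrative only; the real classes live in
// org.apache.spark.deploy.k8s and have richer signatures).
case class Pod(labels: Map[String, String])

// Putting the submodule's operation behind a trait lets a test swap in a stub.
trait BootstrapUtil {
  def bootstrapConfDir(confMapName: Option[String], pod: Pod): Pod
}

// A plain stub implementation: no Answer needed, and the test behavior
// ("tag the pod so we can see the call happened") is spelled out directly.
class StubBootstrapUtil extends BootstrapUtil {
  var calledWith: Option[Option[String]] = None
  override def bootstrapConfDir(confMapName: Option[String], pod: Pod): Pod = {
    calledWith = Some(confMapName)
    pod.copy(labels = pod.labels + ("bootstrap-hconf" -> "true"))
  }
}

val stub = new StubBootstrapUtil
val configured = stub.bootstrapConfDir(Some("conf-map"), Pod(Map.empty))
assert(stub.calledWith == Some(Some("conf-map")))
assert(configured.labels("bootstrap-hconf") == "true")
```

The class under test would then receive the stub where it currently receives the Mockito mock.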


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233210145
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala ---
--- End diff --

> The purpose of the mocks is to make every unit test run only code that is 
in the class under test

First, let me clarify that I'm against what I think is the excessive use of 
mocks in these tests; by that I mean mostly mocking a whole bunch of things 
using Mockito.

You can mock things without Mockito. E.g., if what you want is to provide the 
test with a specific mock configuration, it's pretty trivial to do that with 
existing classes. Tests do this all over the code base without having to mock 
`SparkConf` or any other classes.

(This is currently more painful than it should be in the k8s backend, which 
is one of the things I'm trying to fix with my cleanups.)

So let's use this particular test here as an example. What is it testing?

I think it should be testing that if a user has set the appropriate 
configuration in their `SparkConf`, that this step will mount a volume with the 
user's Hadoop configuration in the final pod. Is that what the code here is 
testing?

I don't think so. It's testing that a specific method call is being made 
with certain parameters. A much better test would be to actually let the code 
in the step run and do its thing, and test the final result for expected things 
(in this case, the hadoop conf being mounted).

Or, thinking about it another way: how could you actually break this test 
without removing that one method call from the step implementation, given that 
both the inputs and the output of that call are mocked here?
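As a sketch of what such a black-box test could look like - note that the types below are simplified stand-ins invented for illustration, not Spark's actual `SparkPod`/feature-step API:

```scala
// Simplified stand-ins for the real pod and feature-step types (illustrative only).
case class Pod(volumes: Set[String])

// A minimal feature step: mounts the Hadoop conf volume iff the config is set.
class HadoopConfStep(confMapName: Option[String]) {
  def configurePod(pod: Pod): Pod =
    confMapName.fold(pod)(name => pod.copy(volumes = pod.volumes + name))
}

// Black-box test: run the real code path and assert on the resulting pod,
// rather than verifying that a particular method call was made.
val step = new HadoopConfStep(Some("hadoop-conf-map"))
val result = step.configurePod(Pod(Set.empty))
assert(result.volumes.contains("hadoop-conf-map"))

// And the contract's other side: with no config set, nothing is mounted.
assert(new HadoopConfStep(None).configurePod(Pod(Set.empty)).volumes.isEmpty)
```

A refactoring that changes how the volume gets mounted internally leaves such a test green, while an actual behavior change breaks it.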


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233198612
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala ---
--- End diff --

Instead of mocks, would we implement stub interfaces then? The purpose of the 
mocks is to make every unit test run only code that is in the class under 
test, plus utility methods. I think the only alternative to mocks is writing 
stub interfaces, and it's not clear whether that would be more or less 
boilerplate. If stub interfaces lead to less boilerplate than mocks, then we 
should go with that.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233168980
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import org.mockito.{Mock, MockitoAnnotations}
+import org.mockito.Matchers.{eq => Eq}
+import org.mockito.Mockito.when
+import org.mockito.invocation.InvocationOnMock
+import org.mockito.stubbing.Answer
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.{SparkConf, SparkFunSuite}
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesExecutorSpecificConf, SparkPod}
+import org.apache.spark.deploy.k8s.Constants.HADOOP_CONFIG_MAP_NAME
+import org.apache.spark.deploy.k8s.features.hadooputils.HadoopBootstrapUtil
+
+class HadoopConfExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
+  private val hadoopConfMapName = "HADOOP_CONF_NAME"
+  private val sparkPod = SparkPod.initialPod()
+
+  @Mock
+  private var kubernetesConf: KubernetesConf[KubernetesExecutorSpecificConf] = _
+
+  @Mock
+  private var hadoopBootstrapUtil: HadoopBootstrapUtil = _
+
+  before {
+    MockitoAnnotations.initMocks(this)
+    val sparkConf = new SparkConf(false)
+      .set(HADOOP_CONFIG_MAP_NAME, hadoopConfMapName)
+    when(kubernetesConf.sparkConf).thenReturn(sparkConf)
+  }
+
+  test("bootstrapHadoopConf being applied to a base spark pod") {
+    when(hadoopBootstrapUtil.bootstrapHadoopConfDir(
+      Eq(None),
+      Eq(None),
+      Eq(Some(hadoopConfMapName)),
+      Eq(sparkPod)
+    )).thenAnswer(new Answer[SparkPod] {
+      override def answer(invocation: InvocationOnMock): SparkPod = {
+        KubernetesFeaturesTestUtils.hadoopConfBootPod(
+          invocation.getArgumentAt(3, classOf[SparkPod]))
+      }
+    })
+    val hConfStep = new HadoopConfExecutorFeatureStep(kubernetesConf, hadoopBootstrapUtil)
+    val pod = hConfStep.configurePod(sparkPod)
+    KubernetesFeaturesTestUtils.podHasLabels(pod.pod, Map("bootstrap-hconf" -> "true"))
+    assert(hConfStep.getAdditionalPodSystemProperties() == Map.empty,
--- End diff --

use `assert(foo.isEmpty)`, no need for a message. Also in a bunch of places 
below.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-13 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r233174157
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/HadoopConfExecutorFeatureStepSuite.scala ---
--- End diff --

So after I spent some time cleaning other code in the k8s backend, I 
started to really dislike this kind of test that over-mocks things. This test 
is actually just a result of a lot of other things that I think are wrong in 
the current code (not just kerberos-related, but in the k8s backend in general).

Basically you have a lot of code and mocks here to test a single thing: 
that the feature step is calling this one method with the parameters you expect.

If instead the feature step was written to be more easily testable, this 
wouldn't be needed. You wouldn't need to mock anything, and it would just run 
the normal code and you'd assert that the resulting pod has what you expect.

I filed SPARK-25874 to fix all this stuff (with a few sub-tasks), and I'm 
now thinking it's better to do that first. It would simplify the tests here a 
lot. It would avoid all this noisy and IMO unnecessary mocking. And we'd have 
better tests. You can look at my p.o.c. code linked from that bug to see what I 
mean.

As another data point, to address the comments on SPARK-25815 I'm having to 
rewrite a bunch of code in this area. So all these tests would have to be 
largely rewritten (in which PR depends on which one goes in first).

So, long story short, checking this in would mean I'd be rewriting all this 
code very, very soon. So I'm not seeing a lot of gains here...


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-05 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230965800
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala
 ---
@@ -63,4 +63,66 @@ object KubernetesFeaturesTestUtils {
   def containerHasEnvVar(container: Container, envVarName: String): 
Boolean = {
 container.getEnv.asScala.exists(envVar => envVar.getName == envVarName)
   }
+
+  def containerHasEnvVars(container: Container, envs: Map[String, 
String]): Unit = {
+assertHelper[Set[(String, String)]](envs.toSet,
+  container.getEnv.asScala
+.map { e => (e.getName, e.getValue) }.toSet,
+  subsetOfTup[Set[(String, String)], String], "a subset of")
+  }
+
+  def containerHasVolumeMounts(container: Container, vms: Map[String, 
String]): Unit = {
+assertHelper[Set[(String, String)]](vms.toSet,
+  container.getVolumeMounts.asScala
+.map { vm => (vm.getName, vm.getMountPath) }.toSet,
+  subsetOfTup[Set[(String, String)], String], "a subset of")
+  }
+
+  def podHasLabels(pod: Pod, labels: Map[String, String]): Unit = {
+assertHelper[Set[(String, String)]](labels.toSet, 
pod.getMetadata.getLabels.asScala.toSet,
+  subsetOfTup[Set[(String, String)], String], "a subset of")
+  }
+
+  def podHasVolumes(pod: Pod, volumes: Seq[Volume]): Unit = {
+assertHelper[Set[Volume]](volumes.toSet, 
pod.getSpec.getVolumes.asScala.toSet,
+  subsetOfElem[Set[Volume], Volume], "a subset of")
+  }
+
+  // Mocking bootstrapHadoopConfDir
+  def hadoopConfBootPod(inputPod: SparkPod): SparkPod =
+SparkPod(
+  new PodBuilder(inputPod.pod)
+.editOrNewMetadata()
+  .addToLabels("bootstrap-hconf", "true")
+  .endMetadata()
+.build(),
+  inputPod.container)
+
+  // Mocking bootstrapKerberosPod
+  def krbBootPod(inputPod: SparkPod): SparkPod =
+SparkPod(
+  new PodBuilder(inputPod.pod)
+.editOrNewMetadata()
+  .addToLabels("bootstrap-kerberos", "true")
+  .endMetadata()
+.build(),
+  inputPod.container)
+
+  // Mocking bootstrapSparkUserPod
+  def userBootPod(inputPod: SparkPod): SparkPod =
+SparkPod(
+  new PodBuilder(inputPod.pod)
+.editOrNewMetadata()
+  .addToLabels("bootstrap-user", "true")
+  .endMetadata()
+.build(),
+  inputPod.container)
+
+  def subsetOfElem[T <: Set[B], B <: Any]: (T, T) => Boolean = (a, b) => 
a.subsetOf(b)
+  def subsetOfTup[T <: Set[(B, B)], B <: Any]: (T, T) => Boolean = (a, b) 
=> a.subsetOf(b)
+
+  def assertHelper[T](con1: T, con2: T,
+  expr: (T, T) => Boolean = (a: T, b: T) => a == b, exprMsg: String = 
"equal to"): Unit = {
+assert(expr(con1, con2), s"$con1 is not $exprMsg $con2 as expected")
--- End diff --

I think the simpler approach is preferred here - on each assertion line I want 
to see right then and there what error message I'll get back, rather than 
having to go through a helper method.
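For instance, an inline subset check with its message spelled out at the call site might look like this (the label values are made-up sample data):

```scala
// The expected labels and the labels actually found on the pod (sample data).
val expected = Set("spark-role" -> "executor")
val actual = Set("spark-role" -> "executor", "spark-app" -> "demo")

// The failure message is visible right at the assertion, with no helper indirection.
assert(expected.subsetOf(actual), s"$expected is not a subset of $actual")
```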


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-05 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230951222
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala ---
--- End diff --

I thought it would be better than writing a custom string at every assert 
statement. I don't find it too awkward, but *shrug* :) I actually kind of like 
it, since we can do a check with any expression.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-05 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230942826
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala ---
--- End diff --

This interaction seems really awkward to me. Why can't we just embed the 
error message directly into each assertion statement? I'm also not familiar 
enough with the ScalaTest framework, but perhaps we can explore other options 
the library has to offer. (In Java I'm used to using 
[AssertJ](https://github.com/joel-costigliola/assertj-core) for fluent 
assertions, but I'm not sure whether that will be of any help for Scala 
objects, particularly Scala collections, without converting everything.)
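
One alternative, sketched here purely for illustration (this is not code from the PR), is to drop the generic helper and let each check assert directly, using ScalaTest's `withClue` to prefix the failure message with context. `FunSuite` and `withClue` are standard ScalaTest; the example data is made up:

```scala
// Hedged sketch: per-check assertion with withClue, instead of threading a
// comparison function and message string through a generic assertHelper.
import org.scalatest.FunSuite

class SubsetAssertionSketch extends FunSuite {

  // Asserts that `expected` is a subset of `actual`, reporting both sets on failure.
  private def assertSubset[A](expected: Set[A], actual: Set[A]): Unit =
    withClue(s"$expected is not a subset of $actual: ") {
      assert(expected.subsetOf(actual))
    }

  test("expected labels are a subset of the pod's labels") {
    val podLabels = Map("app" -> "spark", "bootstrap-kerberos" -> "true")
    assertSubset(Map("bootstrap-kerberos" -> "true").toSet, podLabels.toSet)
  }
}
```

On failure the clue is prepended to ScalaTest's own message, so the computed sets appear in the test report without any custom message plumbing.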


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-05 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230941006
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+    tmpDir = Utils.createTempDir()
+    tmpFile = File.createTempFile(s"${UUID.randomUUID().toString}", ".txt", tmpDir)
+    Files.write("contents".getBytes, tmpFile)
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("bootstrapKerberosPod with file location specified for krb5.conf 
file") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val fileLocation = Some(tmpFile.getAbsolutePath)
+val stringPath = tmpFile.getName
+val newKrb5ConfName = Some("/etc/krb5.conf")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  fileLocation,
+  newKrb5ConfName,
+  None,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(newKrb5ConfName.get)
+  .withItems(new KeyToPathBuilder()
+.withKey(stringPath)
+.withPath(stringPath)
+.build())
+.endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
+podHasVolumes(resultingPod.pod, expectedVolumes)
+containerHasEnvVars(resultingPod.container, Map(
+  ENV_HADOOP_TOKEN_FILE_LOCATION -> 
s"$SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR/$dtSecretItemKey",
+  ENV_SPARK_USER -> userName)
+)
+containerHasVolumeMounts(resultingPod.container, Map(
+  SPARK_APP_HADOOP_SECRET_VOLUME_NAME -> 
SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR,
+  KRB_FILE_VOLUME -> (KRB_FILE_DIR_PATH + "/krb5.conf"))
+)
+  }
+
+  test("bootstrapKerberosPod with pre-existing configMap specified for 
krb5.conf file") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
--- End diff --

These can be constants at the top of the file or in a companion object.
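
For illustration, hoisting the repeated literals into a companion object might look like the following hypothetical sketch (constant names and the surrounding test are illustrative, not from the PR):

```scala
// Hypothetical sketch: shared test fixtures defined once in a companion
// object rather than redefined as local vals in every test.
class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter {
  import HadoopBootstrapUtilSuite._

  test("bootstrapKerberosPod with pre-existing configMap") {
    // body references DT_SECRET_NAME, DT_SECRET_ITEM_KEY, USER_NAME directly
  }
}

object HadoopBootstrapUtilSuite {
  val DT_SECRET_NAME = "EXAMPLE_SECRET_NAME"
  val DT_SECRET_ITEM_KEY = "EXAMPLE_ITEM_KEY"
  val USER_NAME = "SPARK_USER_NAME"
}
```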


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-05 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230941669
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,221 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+    tmpDir = Utils.createTempDir()
+    tmpFile = File.createTempFile(s"${UUID.randomUUID().toString}", ".txt", tmpDir)
+    Files.write("contents".getBytes, tmpFile)
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("bootstrapKerberosPod with file location specified for krb5.conf 
file") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
--- End diff --

These can be constants in a companion object or at the top of the class.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223616
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala
 ---
@@ -63,4 +63,54 @@ object KubernetesFeaturesTestUtils {
   def containerHasEnvVar(container: Container, envVarName: String): 
Boolean = {
 container.getEnv.asScala.exists(envVar => envVar.getName == envVarName)
   }
+
+  def containerHasEnvVars(container: Container, envs: Map[String, String]): Boolean = {
+    envs.toSet.subsetOf(container.getEnv.asScala
+      .map { e => (e.getName, e.getValue)}.toSet)
--- End diff --

space before `}`

Also in other places.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230221109
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+tmpDir = Utils.createTempDir()
+tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val fileLocation = Some(tmpFile.getAbsolutePath)
+val stringPath = tmpFile.toPath.getFileName.toString
+val newKrb5ConfName = Some("/etc/krb5.conf")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  fileLocation,
+  newKrb5ConfName,
+  None,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(newKrb5ConfName.get)
+  .withItems(new KeyToPathBuilder()
+.withKey(stringPath)
+.withPath(stringPath)
+.build())
+.endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
+assert(podHasVolumes(resultingPod.pod, expectedVolumes))
+assert(containerHasEnvVars(resultingPod.container, Map(
+  ENV_HADOOP_TOKEN_FILE_LOCATION -> 
s"$SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR/$dtSecretItemKey",
+  ENV_SPARK_USER -> userName)
+))
+assert(containerHasVolumeMounts(resultingPod.container, Map(
+  SPARK_APP_HADOOP_SECRET_VOLUME_NAME -> 
SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR,
+  KRB_FILE_VOLUME -> (KRB_FILE_DIR_PATH + "/krb5.conf"))
+))
+  }
+
+  test("Testing bootstrapKerberosPod with configMap of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val existingKrb5ConfName = Some("krb5CMap")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  None,
+  None,
+  existingKrb5ConfName,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(existingKrb5ConfName.get)
+  .endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
   

[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223300
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+tmpDir = Utils.createTempDir()
+tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val fileLocation = Some(tmpFile.getAbsolutePath)
+val stringPath = tmpFile.toPath.getFileName.toString
+val newKrb5ConfName = Some("/etc/krb5.conf")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  fileLocation,
+  newKrb5ConfName,
+  None,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(newKrb5ConfName.get)
+  .withItems(new KeyToPathBuilder()
+.withKey(stringPath)
+.withPath(stringPath)
+.build())
+.endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
+assert(podHasVolumes(resultingPod.pod, expectedVolumes))
+assert(containerHasEnvVars(resultingPod.container, Map(
+  ENV_HADOOP_TOKEN_FILE_LOCATION -> 
s"$SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR/$dtSecretItemKey",
+  ENV_SPARK_USER -> userName)
+))
+assert(containerHasVolumeMounts(resultingPod.container, Map(
+  SPARK_APP_HADOOP_SECRET_VOLUME_NAME -> 
SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR,
+  KRB_FILE_VOLUME -> (KRB_FILE_DIR_PATH + "/krb5.conf"))
+))
+  }
+
+  test("Testing bootstrapKerberosPod with configMap of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val existingKrb5ConfName = Some("krb5CMap")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  None,
+  None,
+  existingKrb5ConfName,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(existingKrb5ConfName.get)
+  .endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
   

[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223467
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+tmpDir = Utils.createTempDir()
+tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val fileLocation = Some(tmpFile.getAbsolutePath)
+val stringPath = tmpFile.toPath.getFileName.toString
+val newKrb5ConfName = Some("/etc/krb5.conf")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  fileLocation,
+  newKrb5ConfName,
+  None,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(newKrb5ConfName.get)
+  .withItems(new KeyToPathBuilder()
+.withKey(stringPath)
+.withPath(stringPath)
+.build())
+.endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
+assert(podHasVolumes(resultingPod.pod, expectedVolumes))
+assert(containerHasEnvVars(resultingPod.container, Map(
+  ENV_HADOOP_TOKEN_FILE_LOCATION -> 
s"$SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR/$dtSecretItemKey",
+  ENV_SPARK_USER -> userName)
+))
+assert(containerHasVolumeMounts(resultingPod.container, Map(
+  SPARK_APP_HADOOP_SECRET_VOLUME_NAME -> 
SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR,
+  KRB_FILE_VOLUME -> (KRB_FILE_DIR_PATH + "/krb5.conf"))
+))
+  }
+
+  test("Testing bootstrapKerberosPod with configMap of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val existingKrb5ConfName = Some("krb5CMap")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  None,
+  None,
+  existingKrb5ConfName,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(existingKrb5ConfName.get)
+  .endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
   

[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223525
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+tmpDir = Utils.createTempDir()
+tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val fileLocation = Some(tmpFile.getAbsolutePath)
+val stringPath = tmpFile.toPath.getFileName.toString
+val newKrb5ConfName = Some("/etc/krb5.conf")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  fileLocation,
+  newKrb5ConfName,
+  None,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(newKrb5ConfName.get)
+  .withItems(new KeyToPathBuilder()
+.withKey(stringPath)
+.withPath(stringPath)
+.build())
+.endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
+assert(podHasVolumes(resultingPod.pod, expectedVolumes))
+assert(containerHasEnvVars(resultingPod.container, Map(
+  ENV_HADOOP_TOKEN_FILE_LOCATION -> 
s"$SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR/$dtSecretItemKey",
+  ENV_SPARK_USER -> userName)
+))
+assert(containerHasVolumeMounts(resultingPod.container, Map(
+  SPARK_APP_HADOOP_SECRET_VOLUME_NAME -> 
SPARK_APP_HADOOP_CREDENTIALS_BASE_DIR,
+  KRB_FILE_VOLUME -> (KRB_FILE_DIR_PATH + "/krb5.conf"))
+))
+  }
+
+  test("Testing bootstrapKerberosPod with configMap of krb5") {
+val dtSecretName = "EXAMPLE_SECRET_NAME"
+val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+val userName = "SPARK_USER_NAME"
+val existingKrb5ConfName = Some("krb5CMap")
+val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+  dtSecretName,
+  dtSecretItemKey,
+  userName,
+  None,
+  None,
+  existingKrb5ConfName,
+  sparkPod)
+val expectedVolumes = Seq(
+  new VolumeBuilder()
+.withName(KRB_FILE_VOLUME)
+.withNewConfigMap()
+  .withName(existingKrb5ConfName.get)
+  .endConfigMap()
+.build(),
+  new VolumeBuilder()
+.withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+.withNewSecret()
+  .withSecretName(dtSecretName)
+  .endSecret()
+.build()
+)
   

[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223741
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala
 ---
@@ -63,4 +63,54 @@ object KubernetesFeaturesTestUtils {
   def containerHasEnvVar(container: Container, envVarName: String): 
Boolean = {
 container.getEnv.asScala.exists(envVar => envVar.getName == envVarName)
   }
+
+  def containerHasEnvVars(container: Container, envs: Map[String, String]): Boolean = {
+    envs.toSet.subsetOf(container.getEnv.asScala
+      .map { e => (e.getName, e.getValue)}.toSet)
+  }
+
+  def containerHasVolumeMounts(container: Container, vms: Map[String, String]): Boolean = {
+    vms.toSet.subsetOf(container.getVolumeMounts.asScala
+      .map { vm => (vm.getName, vm.getMountPath)}.toSet)
+  }
+
+  def podHasLabels(pod: Pod, labels: Map[String, String]): Boolean = {
+    labels.toSet.subsetOf(pod.getMetadata.getLabels.asScala.toSet)
+  }
+
+  def podHasVolumes(pod: Pod, volumes: Seq[Volume]): Boolean = {
+    volumes.toSet.subsetOf(pod.getSpec.getVolumes.asScala.toSet)
+  }
+
+  // Kerberos Specific Test utils
--- End diff --

Unnecessary comment.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230223780
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala
 ---
@@ -63,4 +63,54 @@ object KubernetesFeaturesTestUtils {
   def containerHasEnvVar(container: Container, envVarName: String): 
Boolean = {
 container.getEnv.asScala.exists(envVar => envVar.getName == envVarName)
   }
+
+  def containerHasEnvVars(container: Container, envs: Map[String, String]): Boolean = {
+    envs.toSet.subsetOf(container.getEnv.asScala
+      .map { e => (e.getName, e.getValue)}.toSet)
+  }
+
+  def containerHasVolumeMounts(container: Container, vms: Map[String, String]): Boolean = {
+    vms.toSet.subsetOf(container.getVolumeMounts.asScala
+      .map { vm => (vm.getName, vm.getMountPath)}.toSet)
+  }
+
+  def podHasLabels(pod: Pod, labels: Map[String, String]): Boolean = {
+    labels.toSet.subsetOf(pod.getMetadata.getLabels.asScala.toSet)
+  }
+
+  def podHasVolumes(pod: Pod, volumes: Seq[Volume]): Boolean = {
+    volumes.toSet.subsetOf(pod.getSpec.getVolumes.asScala.toSet)
+  }
+
+  // Kerberos Specific Test utils
+
+  // Upon use of bootstrapHadoopConfDir
--- End diff --

Not sure I understand what this comment means.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230222736
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+tmpDir = Utils.createTempDir()
+tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+tmpFile.delete()
+tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
--- End diff --

Could you create better names for these tests? All of them start with 
"Testing", which is redundant and also makes it harder to differentiate them.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-11-01 Thread vanzin
Github user vanzin commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r230222388
  
--- Diff: 
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/hadooputils/HadoopBootstrapUtilSuite.scala
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features.hadooputils
+
+import java.io.File
+import java.util.UUID
+
+import scala.collection.JavaConverters._
+
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model._
+import org.scalatest.BeforeAndAfter
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.SparkPod
+import org.apache.spark.deploy.k8s.features.KubernetesFeaturesTestUtils._
+import org.apache.spark.util.Utils
+
+class HadoopBootstrapUtilSuite extends SparkFunSuite with BeforeAndAfter{
+  private val sparkPod = SparkPod.initialPod()
+  private val hadoopBootstrapUtil = new HadoopBootstrapUtil
+  private var tmpDir: File = _
+  private var tmpFile: File = _
+
+  before {
+    tmpDir = Utils.createTempDir()
+    tmpFile = createTempFile(tmpDir, "contents")
+  }
+
+  after {
+    tmpFile.delete()
+    tmpDir.delete()
+  }
+
+  test("Testing bootstrapKerberosPod with file location of krb5") {
+    val dtSecretName = "EXAMPLE_SECRET_NAME"
+    val dtSecretItemKey = "EXAMPLE_ITEM_KEY"
+    val userName = "SPARK_USER_NAME"
+    val fileLocation = Some(tmpFile.getAbsolutePath)
+    val stringPath = tmpFile.toPath.getFileName.toString
+    val newKrb5ConfName = Some("/etc/krb5.conf")
+    val resultingPod = hadoopBootstrapUtil.bootstrapKerberosPod(
+      dtSecretName,
+      dtSecretItemKey,
+      userName,
+      fileLocation,
+      newKrb5ConfName,
+      None,
+      sparkPod)
+    val expectedVolumes = Seq(
+      new VolumeBuilder()
+        .withName(KRB_FILE_VOLUME)
+        .withNewConfigMap()
+          .withName(newKrb5ConfName.get)
+          .withItems(new KeyToPathBuilder()
+            .withKey(stringPath)
+            .withPath(stringPath)
+            .build())
+          .endConfigMap()
+        .build(),
+      new VolumeBuilder()
+        .withName(SPARK_APP_HADOOP_SECRET_VOLUME_NAME)
+        .withNewSecret()
+          .withSecretName(dtSecretName)
+          .endSecret()
+        .build()
+    )
+    assert(podHasVolumes(resultingPod.pod, expectedVolumes))
--- End diff --

If this assert (and others like it) fails, you'll get a pretty generic 
message ("false was not true" or some such).

Might be better to make `podHasVolumes` (and friends) assert with a more 
helpful message that shows at least the computed set.
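One way to act on this suggestion is a minimal sketch along these lines, using simplified stand-in types (not fabric8's `Pod`/`Volume` model, whose builders the suite above uses):

```scala
// Simplified stand-ins for illustration only.
case class Volume(name: String)
case class Pod(volumes: Seq[Volume])

object PodAssertions {
  // Asserts instead of returning a Boolean, so a failure reports the
  // expected and computed volume sets rather than "false was not true".
  def assertPodHasVolumes(pod: Pod, expected: Seq[Volume]): Unit = {
    val actual = pod.volumes.toSet
    assert(expected.toSet.subsetOf(actual),
      s"Expected volumes ${expected.map(_.name).mkString(", ")} " +
        s"but pod volumes were ${actual.map(_.name).mkString(", ")}")
  }
}
```

A failing call then prints both sets, which makes diagnosing a missing mount immediate instead of requiring a debugger.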


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-10-22 Thread liyinan926
Github user liyinan926 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r227064253
  
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesConf.scala ---
@@ -78,6 +79,12 @@ private[spark] case class KubernetesConf[T <: KubernetesRoleSpecificConf](
 
   def krbConfigMapName: String = s"$appResourceNamePrefix-krb5-file"
 
+  // For unit-testing purposes
+  def hadoopBootstrapUtil: HadoopBootstrapUtil = new HadoopBootstrapUtil()
--- End diff --

+1.


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-10-22 Thread liyinan926
Github user liyinan926 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r227065151
  
--- Diff: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/KubernetesFeaturesTestUtils.scala ---
@@ -63,4 +63,40 @@ object KubernetesFeaturesTestUtils {
   def containerHasEnvVar(container: Container, envVarName: String): Boolean = {
     container.getEnv.asScala.exists(envVar => envVar.getName == envVarName)
   }
+
+  def podHasLabels(pod: Pod, labels: Map[String, String]): Boolean = {
+    labels.toSet.subsetOf(pod.getMetadata.getLabels.asScala.toSet)
+  }
+
+  // Kerberos Specific Test utils
+
+  // Upon use of bootstrapHadoopConfDir
+  def hConfBootPod(inputPod: SparkPod): SparkPod =
--- End diff --

Can we just name it `hadoopConfBootPod`?
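The `podHasLabels` helper quoted in this diff uses a subset check, which can be illustrated with a plain `Map` stand-in (hypothetical signature, not the fabric8 `Pod` metadata accessors):

```scala
// Expected labels must be a subset of the pod's labels;
// extra labels on the pod are fine.
def podHasLabels(podLabels: Map[String, String],
    expected: Map[String, String]): Boolean =
  expected.toSet.subsetOf(podLabels.toSet)
```

Note the subset comparison is over key-value pairs, so a matching key with a different value fails the check.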


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-10-22 Thread mccheah
Github user mccheah commented on a diff in the pull request:

https://github.com/apache/spark/pull/22760#discussion_r227062895
  
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesConf.scala ---
@@ -78,6 +79,12 @@ private[spark] case class KubernetesConf[T <: KubernetesRoleSpecificConf](
 
   def krbConfigMapName: String = s"$appResourceNamePrefix-krb5-file"
 
+  // For unit-testing purposes
+  def hadoopBootstrapUtil: HadoopBootstrapUtil = new HadoopBootstrapUtil()
--- End diff --

KubernetesConf might not be the right place for hadoopBootstrapUtil, 
hadoopKerberosLogin, and tokenManager - I'd expect KubernetesConf to just be a 
struct-like object. Can we rewrite these things such that these modules are 
injected into the constructors? We can use default implementations and override 
in tests.
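The injection pattern suggested here can be sketched as follows (hypothetical, simplified signatures; the real `HadoopBootstrapUtil` operates on pods, not maps): collaborators get default production implementations, and tests substitute stubs through the constructor instead of reaching into `KubernetesConf`.

```scala
// Production collaborator with a default implementation.
class HadoopBootstrapUtil {
  def bootstrap(conf: Map[String, String]): Map[String, String] =
    conf + ("hadoop.bootstrapped" -> "true")
}

// The step receives its collaborator via the constructor; the default
// argument keeps production call sites unchanged while tests override it.
class HadoopConfStep(
    conf: Map[String, String],
    bootstrapUtil: HadoopBootstrapUtil = new HadoopBootstrapUtil) {
  def configure(): Map[String, String] = bootstrapUtil.bootstrap(conf)
}
```

This keeps the config object struct-like while still letting tests verify the step's interaction with the collaborator in isolation.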


---




[GitHub] spark pull request #22760: [SPARK-25751][K8S][TEST] Unit Testing for Kerbero...

2018-10-17 Thread ifilonenko
GitHub user ifilonenko opened a pull request:

https://github.com/apache/spark/pull/22760

[SPARK-25751][K8S][TEST] Unit Testing for Kerberos Support

## What changes were proposed in this pull request?

Unit tests for the Kerberos support addition

## How was this patch tested?

Unit and Integration tests


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ifilonenko/spark SPARK-25751

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/22760.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #22760


commit 47bd9071f88199e4c73b43227626108481d72595
Author: Ilan Filonenko 
Date:   2018-10-17T22:12:51Z

unit tests for all features




---
