[GitHub] spark issue #22595: [SPARK-25577][Web UI] Add an on-off switch to display th...

2018-10-06 Thread LantaoJin
Github user LantaoJin commented on the issue:

https://github.com/apache/spark/pull/22595
  
@srowen The checkbox is what I added in this PR to show or hide columns that 
have always been hidden, namely on-heap memory and off-heap memory. Today, if 
we want to display them on the executor page, we have to change the CSS file 
and rebuild the spark-core.jar file. Besides, more columns are likely to be 
added in the future. Ref 
[SPARK-23206](https://issues.apache.org/jira/browse/SPARK-23206). If all of 
them were visible all the time, this page would no longer be as concise. So we 
can simply add a checkbox that lets users display these columns on demand while 
keeping them hidden by default. 


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22637
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97072/
Test PASSed.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22637
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22637
  
**[Test build #97072 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97072/testReport)**
 for PR 22637 at commit 
[`9c9f065`](https://github.com/apache/spark/commit/9c9f065ceb2c8458a67b2a0fb24664a36ef67484).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `public abstract class RowBasedKeyValueBatch extends MemoryConsumer 
implements Closeable `


---




[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223200104
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
 ---
@@ -121,74 +122,257 @@ private[ui] class AllExecutionsPage(parent: SQLTab) 
extends WebUIPage("") with L
   {
 if (running.nonEmpty) {
   
-Running 
Queries:
+Running Queries:
 {running.size}
   
 }
   }
   {
 if (completed.nonEmpty) {
   
-Completed 
Queries:
+Completed 
Queries:
 {completed.size}
   
 }
   }
   {
 if (failed.nonEmpty) {
   
-Failed 
Queries:
+Failed Queries:
 {failed.size}
   
 }
   }
 
   
+
 UIUtils.headerSparkPage(request, "SQL", summary ++ content, parent, 
Some(5000))
   }
+
+  private def executionsTable(
+request: HttpServletRequest,
+executionTag: String,
+executionData: Seq[SQLExecutionUIData],
+currentTime: Long,
+showRunningJobs: Boolean,
+showSucceededJobs: Boolean,
+showFailedJobs: Boolean): Seq[Node] = {
+
+// stripXSS is called to remove suspicious characters used in XSS 
attacks
+val allParameters = request.getParameterMap.asScala.toMap.map { case 
(k, v) =>
+  UIUtils.stripXSS(k) -> v.map(UIUtils.stripXSS).toSeq
+}
+val parameterOtherTable = 
allParameters.filterNot(_._1.startsWith(executionTag))
+  .map(para => para._1 + "=" + para._2(0))
+
+val parameterExecutionPage = 
UIUtils.stripXSS(request.getParameter(s"$executionTag.page"))
+val parameterExecutionSortColumn = UIUtils.stripXSS(request.
+  getParameter(s"$executionTag.sort"))
+val parameterExecutionSortDesc = 
UIUtils.stripXSS(request.getParameter(s"$executionTag.desc"))
+val parameterExecutionPageSize = UIUtils.stripXSS(request.
--- End diff --

Thank you @felixcheung. I have modified it.


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21669
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/21669
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3764/
Test PASSed.


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21669
  
Kubernetes integration test status success
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3764/



---




[GitHub] spark issue #22608: [SPARK-23257][K8S][TESTS] Kerberos Support Integration T...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/22608
  
> for now it's probably ok, but is there a solution before the next release?

This integration-test suite works seamlessly and is quite robust when 
rebased on top of the Kerberos PR. So if we leave this PR as is, it should be 
good to merge. Pulling from `ifilonenko/hadoop-base:latest` makes it so 
much easier :)


---




[GitHub] spark issue #22624: [SPARK-23781][CORE] Add base class for token renewal fun...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/22624
  
If we are talking about the token renewal functionality, could we possibly 
refactor `HadoopFSDelegationTokenProvider` as well? I found that within the 
function `obtainDelegationTokens()`:

This code block:
```
val fetchCreds = fetchDelegationTokens(getTokenRenewer(hadoopConf), 
fsToGetTokens, creds)

// Get the token renewal interval if it is not set. It will only be 
called once.
if (tokenRenewalInterval == null) {
  tokenRenewalInterval = getTokenRenewalInterval(hadoopConf, sparkConf, 
fsToGetTokens)
}
```
calls `fetchDelegationTokens()` twice, since `tokenRenewalInterval` will 
always be null upon creation of the `TokenManager`. I think this is unnecessary 
in the case of Kubernetes (you are creating two delegation tokens when only one 
is needed). I don't know whether the use case is different in Mesos / YARN, but 
could this possibly be refactored to call `fetchDelegationTokens()` only once on 
startup, or to take a parameter specifying `tokenRenewalInterval`? I could send 
a follow-up PR if desired, but I don't know whether that fits better within the 
scope of this PR. 


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21669
  
Kubernetes integration test starting
URL: 
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/3764/



---




[GitHub] spark pull request #21669: [SPARK-23257][K8S] Kerberos Support for Spark on ...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/21669#discussion_r223198885
  
--- Diff: docs/security.md ---
@@ -722,6 +722,67 @@ with encryption, at least.
 The Kerberos login will be periodically renewed using the provided 
credentials, and new delegation
 tokens for supported will be created.
 
+## Secure Interaction with Kubernetes
+
+When talking to Hadoop-based services behind Kerberos, Spark needs to obtain delegation tokens
+so that non-local processes can authenticate. In Kubernetes, these delegation tokens are stored in Secrets that are 
+shared by the Driver and its Executors. As such, there are three ways of 
submitting a Kerberos job: 
+
+In all cases you must define the environment variable `HADOOP_CONF_DIR`.
+It is also important to note that the KDC needs to be visible from inside the 
containers if the user uses a local
+krb5 file. 
+
+If a user wishes to use a remote HADOOP_CONF directory that contains the 
Hadoop configuration files, or 
+a remote krb5 file, this can be achieved by mounting a pre-defined 
ConfigMap and mounting the volume in the
+desired location that you can point to via the appropriate configs. This 
method is useful for those who do not wish to
+rebuild their Docker images, but instead point to a ConfigMap that they 
can modify. This strategy is supported
+via the pod-template feature. 
+
+1. Submitting with a $kinit that stores a TGT in the Local Ticket Cache:
+```bash
+/usr/bin/kinit -kt  /
+/opt/spark/bin/spark-submit \
+--deploy-mode cluster \
+--class org.apache.spark.examples.HdfsTest \
+--master k8s:// \
+--conf spark.executor.instances=1 \
+--conf spark.app.name=spark-hdfs \
+--conf spark.kubernetes.container.image=spark:latest \
+--conf spark.kubernetes.kerberos.krb5location=/etc/krb5.conf \
+local:///opt/spark/examples/jars/spark-examples_-SNAPSHOT.jar 
\
+
+```
+2. Submitting with a local keytab and principal
--- End diff --

> So If I understand the code correctly, this mode is just replacing the 
need to run `kinit`. Unlike the use of this option in YARN and Mesos, you do 
not get token renewal, right? That can be a little confusing to users who are 
coming from one of those envs.

Correct. 

> I've sent #22624 which abstracts some of the code used by Mesos and YARN 
to make it more usable. It could probably be used by k8s too with some 
modifications.

Can we possibly merge this in, and then refactor once that PR is merged 
in the future? Or would you prefer to block this PR on that one getting 
in? I agree with the sentiment of leveraging the `AbstractCredentialRenewer` 
presented in the work you linked, though. 


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/21669
  
Thank you for all of your reviews @mccheah @liyinan926 @vanzin . I have 
resolved the comments and am wondering if there are any further comments before 
this is in a state that is ready to merge! The current PR passes all tests 
(including the ability to configure the krb5 ConfigMap). 


---




[GitHub] spark issue #21669: [SPARK-23257][K8S] Kerberos Support for Spark on K8S

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/21669
  
**[Test build #97073 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97073/testReport)**
 for PR 21669 at commit 
[`89063fd`](https://github.com/apache/spark/commit/89063fdfa76184bb87bcdf1f4b193f3571200fac).


---




[GitHub] spark pull request #21669: [SPARK-23257][K8S] Kerberos Support for Spark on ...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/21669#discussion_r223198790
  
--- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/HadoopGlobalFeatureDriverStep.scala
 ---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import java.io.File
+
+import scala.collection.JavaConverters._
+
+import com.google.common.base.Charsets
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model.{ConfigMapBuilder, HasMetadata}
+
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesUtils, 
SparkPod}
+import org.apache.spark.deploy.k8s.Config._
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.KubernetesDriverSpecificConf
+import org.apache.spark.deploy.k8s.features.hadoopsteps._
+import org.apache.spark.internal.Logging
+
+ /**
+  * Runs the necessary Hadoop-based logic based on Kerberos configs and 
the presence of the
+  * HADOOP_CONF_DIR. This runs various bootstrap methods defined in 
HadoopBootstrapUtil.
+  */
+private[spark] class HadoopGlobalFeatureDriverStep(
+kubernetesConf: KubernetesConf[KubernetesDriverSpecificConf])
+extends KubernetesFeatureConfigStep with Logging {
+
+private val conf = kubernetesConf.sparkConf
+private val maybePrincipal = 
conf.get(org.apache.spark.internal.config.PRINCIPAL)
+private val maybeKeytab = 
conf.get(org.apache.spark.internal.config.KEYTAB)
+private val maybeExistingSecretName = 
conf.get(KUBERNETES_KERBEROS_DT_SECRET_NAME)
+private val maybeExistingSecretItemKey =
+  conf.get(KUBERNETES_KERBEROS_DT_SECRET_ITEM_KEY)
+private val kubeTokenManager = kubernetesConf.tokenManager
+private val isKerberosEnabled = kubeTokenManager.isSecurityEnabled
+
+require(maybeKeytab.forall( _ => isKerberosEnabled ),
+  "You must enable Kerberos support if you are specifying a Kerberos 
Keytab")
+
+require(maybeExistingSecretName.forall( _ => isKerberosEnabled ),
+  "You must enable Kerberos support if you are specifying a Kerberos 
Secret")
+
+KubernetesUtils.requireBothOrNeitherDefined(
+  maybeKeytab,
+  maybePrincipal,
+  "If a Kerberos principal is specified you must also specify a 
Kerberos keytab",
+  "If a Kerberos keytab is specified you must also specify a Kerberos 
principal")
+
+KubernetesUtils.requireBothOrNeitherDefined(
+  maybeExistingSecretName,
+  maybeExistingSecretItemKey,
+  "If a secret data item-key where the data of the Kerberos Delegation 
Token is specified" +
+" you must also specify the name of the secret",
+  "If a secret storing a Kerberos Delegation Token is specified you 
must also" +
+" specify the item-key where the data is stored")
+
+require(kubernetesConf.hadoopConfDir.isDefined, "Ensure that 
HADOOP_CONF_DIR is defined")
+private val hadoopConfDir = kubernetesConf.hadoopConfDir.get
+private val hadoopConfigurationFiles = 
kubeTokenManager.getHadoopConfFiles(hadoopConfDir)
+
+// Either use pre-existing secret or login to create new Secret with 
DT stored within
+private val hadoopSpec: Option[KerberosConfigSpec] = (for {
+  secretName <- maybeExistingSecretName
+  secretItemKey <- maybeExistingSecretItemKey
+} yield {
+  KerberosConfigSpec(
+ dtSecret = None,
+ dtSecretName = secretName,
+ dtSecretItemKey = secretItemKey,
+ jobUserName = kubeTokenManager.getCurrentUser.getShortUserName)
+}).orElse(
+  if (isKerberosEnabled) {
+ Some(HadoopKerberosLogin.buildSpec(
+ conf,
+ kubernetesConf.appResourceNamePrefix,
+ kubeTokenManager))
+   } else None )
+
+override def 

[GitHub] spark pull request #21669: [SPARK-23257][K8S] Kerberos Support for Spark on ...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on a diff in the pull request:

https://github.com/apache/spark/pull/21669#discussion_r223198761
  
--- Diff: 
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/HadoopGlobalFeatureDriverStep.scala
 ---
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import java.io.File
+
+import scala.collection.JavaConverters._
+
+import com.google.common.base.Charsets
+import com.google.common.io.Files
+import io.fabric8.kubernetes.api.model.{ConfigMapBuilder, HasMetadata}
+
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesUtils, 
SparkPod}
+import org.apache.spark.deploy.k8s.Config._
+import org.apache.spark.deploy.k8s.Constants._
+import org.apache.spark.deploy.k8s.KubernetesDriverSpecificConf
+import org.apache.spark.deploy.k8s.features.hadoopsteps._
+import org.apache.spark.internal.Logging
+
+ /**
+  * Runs the necessary Hadoop-based logic based on Kerberos configs and 
the presence of the
+  * HADOOP_CONF_DIR. This runs various bootstrap methods defined in 
HadoopBootstrapUtil.
+  */
+private[spark] class HadoopGlobalFeatureDriverStep(
+kubernetesConf: KubernetesConf[KubernetesDriverSpecificConf])
+extends KubernetesFeatureConfigStep with Logging {
+
+private val conf = kubernetesConf.sparkConf
+private val maybePrincipal = 
conf.get(org.apache.spark.internal.config.PRINCIPAL)
+private val maybeKeytab = 
conf.get(org.apache.spark.internal.config.KEYTAB)
+private val maybeExistingSecretName = 
conf.get(KUBERNETES_KERBEROS_DT_SECRET_NAME)
+private val maybeExistingSecretItemKey =
+  conf.get(KUBERNETES_KERBEROS_DT_SECRET_ITEM_KEY)
+private val kubeTokenManager = kubernetesConf.tokenManager
+private val isKerberosEnabled = kubeTokenManager.isSecurityEnabled
+
+require(maybeKeytab.forall( _ => isKerberosEnabled ),
+  "You must enable Kerberos support if you are specifying a Kerberos 
Keytab")
+
+require(maybeExistingSecretName.forall( _ => isKerberosEnabled ),
+  "You must enable Kerberos support if you are specifying a Kerberos 
Secret")
+
+KubernetesUtils.requireBothOrNeitherDefined(
+  maybeKeytab,
+  maybePrincipal,
+  "If a Kerberos principal is specified you must also specify a 
Kerberos keytab",
+  "If a Kerberos keytab is specified you must also specify a Kerberos 
principal")
+
+KubernetesUtils.requireBothOrNeitherDefined(
+  maybeExistingSecretName,
+  maybeExistingSecretItemKey,
+  "If a secret data item-key where the data of the Kerberos Delegation 
Token is specified" +
+" you must also specify the name of the secret",
+  "If a secret storing a Kerberos Delegation Token is specified you 
must also" +
+" specify the item-key where the data is stored")
+
+require(kubernetesConf.hadoopConfDir.isDefined, "Ensure that 
HADOOP_CONF_DIR is defined")
+private val hadoopConfDir = kubernetesConf.hadoopConfDir.get
+private val hadoopConfigurationFiles = 
kubeTokenManager.getHadoopConfFiles(hadoopConfDir)
+
+// Either use pre-existing secret or login to create new Secret with 
DT stored within
+private val hadoopSpec: Option[KerberosConfigSpec] = (for {
--- End diff --

In this specific case, the `for..yield` is quite clear IMHO. I think it is 
easier to parse. 
I would prefer to leave it.
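As a neutral illustration of the pattern being discussed (with simplified names, not the real Spark classes), the `for..yield` over two `Option`s composes with `orElse` like this: the yield runs only when both values are defined, and the fallback decides what happens otherwise.

```scala
// Simplified stand-in for the real KerberosConfigSpec.
case class KerberosConfigSpec(dtSecretName: String, dtSecretItemKey: String)

object ForYieldDemo {
  def resolve(
      maybeSecretName: Option[String],
      maybeSecretItemKey: Option[String],
      kerberosEnabled: Boolean): Option[KerberosConfigSpec] = {
    // Either use the pre-existing secret (both pieces required) or,
    // when Kerberos is enabled, fall back to a generated spec.
    (for {
      name <- maybeSecretName
      key <- maybeSecretItemKey
    } yield KerberosConfigSpec(name, key)).orElse(
      if (kerberosEnabled) Some(KerberosConfigSpec("generated-secret", "token-key"))
      else None)
  }

  def main(args: Array[String]): Unit = {
    // Both pieces supplied: the pre-existing secret wins.
    assert(resolve(Some("my-secret"), Some("my-key"), kerberosEnabled = true)
      .contains(KerberosConfigSpec("my-secret", "my-key")))
    // Neither supplied but Kerberos enabled: fall back to a generated spec.
    assert(resolve(None, None, kerberosEnabled = true).isDefined)
    // Neither supplied and Kerberos disabled: no spec at all.
    assert(resolve(None, None, kerberosEnabled = false).isEmpty)
    println("ok")
  }
}
```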


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97070/
Test PASSed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22618
  
**[Test build #97070 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97070/testReport)**
 for PR 22618 at commit 
[`90eb1d7`](https://github.com/apache/spark/commit/90eb1d7f5895e442a86506e3e7dae382e138b3b0).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `sealed abstract class Node extends Serializable `


---




[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223198079
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
 ---
@@ -121,74 +122,257 @@ private[ui] class AllExecutionsPage(parent: SQLTab) 
extends WebUIPage("") with L
   {
 if (running.nonEmpty) {
   
-Running 
Queries:
+Running Queries:
 {running.size}
   
 }
   }
   {
 if (completed.nonEmpty) {
   
-Completed 
Queries:
+Completed 
Queries:
 {completed.size}
   
 }
   }
   {
 if (failed.nonEmpty) {
   
-Failed 
Queries:
+Failed Queries:
 {failed.size}
   
 }
   }
 
   
+
 UIUtils.headerSparkPage(request, "SQL", summary ++ content, parent, 
Some(5000))
   }
+
+  private def executionsTable(
+request: HttpServletRequest,
+executionTag: String,
+executionData: Seq[SQLExecutionUIData],
+currentTime: Long,
+showRunningJobs: Boolean,
+showSucceededJobs: Boolean,
+showFailedJobs: Boolean): Seq[Node] = {
+
+// stripXSS is called to remove suspicious characters used in XSS 
attacks
+val allParameters = request.getParameterMap.asScala.toMap.map { case 
(k, v) =>
+  UIUtils.stripXSS(k) -> v.map(UIUtils.stripXSS).toSeq
+}
+val parameterOtherTable = 
allParameters.filterNot(_._1.startsWith(executionTag))
+  .map(para => para._1 + "=" + para._2(0))
+
+val parameterExecutionPage = 
UIUtils.stripXSS(request.getParameter(s"$executionTag.page"))
+val parameterExecutionSortColumn = UIUtils.stripXSS(request.
+  getParameter(s"$executionTag.sort"))
+val parameterExecutionSortDesc = 
UIUtils.stripXSS(request.getParameter(s"$executionTag.desc"))
+val parameterExecutionPageSize = UIUtils.stripXSS(request.
--- End diff --

Could you place the `.` either at the end of the line or at the start of the 
next line? For example, L172 has it at the end while L166 has it in front - 
let's do this consistently with the other code in the file.
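For readers skimming the thread, the two dot placements being compared look like this (a toy, runnable Scala sketch, unrelated to the actual file):

```scala
object DotStyleDemo {
  def main(args: Array[String]): Unit = {
    val s = "Spark"

    // Trailing-dot style: the method-chain dot ends the line.
    val upper = s.
      toUpperCase

    // Leading-dot style: the dot begins the continuation line.
    val lower = s
      .toLowerCase

    assert(upper == "SPARK" && lower == "spark")
    println(upper)
  }
}
```

Both compile; the review comment is only asking that one style be used consistently within a file.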


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22658
  
**[Test build #4360 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4360/testReport)**
 for PR 22658 at commit 
[`cd2264b`](https://github.com/apache/spark/commit/cd2264b6de5f386ece66e28ff62ec75cf3d34e22).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97069/
Test PASSed.


---




[GitHub] spark issue #22608: [SPARK-23257][K8S][TESTS] Kerberos Support Integration T...

2018-10-06 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/22608
  
> calling an external docker-image like: ifilonenko/hadoop-base:latest for 
now

for now it's probably ok, but is there a solution before the next release?


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22651
  
**[Test build #97069 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97069/testReport)**
 for PR 22651 at commit 
[`2a512ce`](https://github.com/apache/spark/commit/2a512ce82560014469ce5c35e164b7c074b429a6).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark pull request #22275: [SPARK-25274][PYTHON][SQL] In toPandas with Arrow...

2018-10-06 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/22275#discussion_r223197940
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -4434,6 +4434,12 @@ def test_timestamp_dst(self):
 self.assertPandasEqual(pdf, df_from_python.toPandas())
 self.assertPandasEqual(pdf, df_from_pandas.toPandas())
 
+def test_toPandas_batch_order(self):
+df = self.spark.range(64, numPartitions=8).toDF("a")
+with 
self.sql_conf({"spark.sql.execution.arrow.maxRecordsPerBatch": 4}):
+pdf, pdf_arrow = self._toPandas_arrow_toggle(df)
+self.assertPandasEqual(pdf, pdf_arrow)
--- End diff --

that sounds good


---




[GitHub] spark pull request #22455: [SPARK-24572][SPARKR] "eager execution" for R she...

2018-10-06 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/22455#discussion_r223197863
  
--- Diff: R/pkg/R/DataFrame.R ---
@@ -246,30 +248,38 @@ setMethod("showDF",
 #' @note show(SparkDataFrame) since 1.4.0
 setMethod("show", "SparkDataFrame",
   function(object) {
-allConf <- sparkR.conf()
-if (!is.null(allConf[["spark.sql.repl.eagerEval.enabled"]]) &&
-identical(allConf[["spark.sql.repl.eagerEval.enabled"]], 
"true")) {
-  argsList <- list()
-  argsList$x <- object
-  if 
(!is.null(allConf[["spark.sql.repl.eagerEval.maxNumRows"]])) {
-numRows <- 
as.numeric(allConf[["spark.sql.repl.eagerEval.maxNumRows"]])
-if (numRows > 0) {
-  argsList$numRows <- numRows
+showFunc <- getOption("sparkr.SparkDataFrame.base_show_func")
--- End diff --

IMO pretty print should plug in to something more R standard like 
[printr](https://yihui.name/printr/) or

[lemon](https://cran.r-project.org/web/packages/lemon/vignettes/lemon_print.html)
or

[print.x](https://stat.ethz.ch/R-manual/R-devel/library/base/html/print.dataframe.html)


---




[GitHub] spark pull request #22455: [SPARK-24572][SPARKR] "eager execution" for R she...

2018-10-06 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/22455#discussion_r223197875
  
--- Diff: R/pkg/R/DataFrame.R ---
@@ -246,30 +248,38 @@ setMethod("showDF",
 #' @note show(SparkDataFrame) since 1.4.0
 setMethod("show", "SparkDataFrame",
   function(object) {
-allConf <- sparkR.conf()
-if (!is.null(allConf[["spark.sql.repl.eagerEval.enabled"]]) &&
-identical(allConf[["spark.sql.repl.eagerEval.enabled"]], 
"true")) {
-  argsList <- list()
-  argsList$x <- object
-  if 
(!is.null(allConf[["spark.sql.repl.eagerEval.maxNumRows"]])) {
-numRows <- 
as.numeric(allConf[["spark.sql.repl.eagerEval.maxNumRows"]])
-if (numRows > 0) {
-  argsList$numRows <- numRows
+showFunc <- getOption("sparkr.SparkDataFrame.base_show_func")
--- End diff --

Could we consider leaving the print/show option out? I'd like to get eager 
compute to work even in the basic sparkR / R shell.


---




[GitHub] spark issue #22145: [SPARK-25152][K8S] Enable SparkR Integration Tests for K...

2018-10-06 Thread felixcheung
Github user felixcheung commented on the issue:

https://github.com/apache/spark/pull/22145
  
@shaneknapp could we do this soon?


---




[GitHub] spark issue #22608: [SPARK-23257][K8S][TESTS] Kerberos Support Integration T...

2018-10-06 Thread ifilonenko
Github user ifilonenko commented on the issue:

https://github.com/apache/spark/pull/22608
  
@erikerlandson the clusterrolebinding is something the user doing the testing 
should set up. As such, we may disregard that bullet point from the conversation. 
However, I am wondering what the thoughts are on calling an external docker image 
like `ifilonenko/hadoop-base:latest` for now? This would just require the 
hadoop-base image to be built in the docker-image-builder and the 
distribution to contain the `hadoop-2.7.3.tgz` file for the image to build. 

> Although this is a large patch, its impact on existing code is small, and 
it is nearly all testing code. Unless the tests themselves are unstable, I'd 
consider this plausible to include with the 2.4 release.

Very true; this feature is very isolated and was designed to be extremely 
stable (via the WatcherCaches), but it should only be merged together with 
https://github.com/apache/spark/pull/21669. I would like a review on the design 
so that we can merge this in ASAP once the above PR is merged, as the two are 
completely isolated.



---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22641
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97071/
Test PASSed.


---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22641
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22641
  
**[Test build #97071 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97071/testReport)**
 for PR 22641 at commit 
[`01f1f97`](https://github.com/apache/spark/commit/01f1f97114892174cf52996c297e14ae6800628b).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22637
  
**[Test build #97072 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97072/testReport)**
 for PR 22637 at commit 
[`9c9f065`](https://github.com/apache/spark/commit/9c9f065ceb2c8458a67b2a0fb24664a36ef67484).


---




[GitHub] spark pull request #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread Fokko
Github user Fokko commented on a diff in the pull request:

https://github.com/apache/spark/pull/22637#discussion_r223196616
  
--- Diff: 
sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIService.java 
---
@@ -154,6 +154,7 @@ public synchronized void start() {
   throw new ServiceException("Unable to connect to MetaStore!", e);
 }
 finally {
+  // IMetaStoreClient is not AutoCloseable, closing it manually
--- End diff --

Done!


---




[GitHub] spark pull request #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark ...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/22501#discussion_r223196145
  
--- Diff: 
core/src/test/scala/org/apache/spark/benchmark/BenchmarkBase.scala ---
@@ -48,15 +48,11 @@ abstract class BenchmarkBase {
   if (!file.exists()) {
 file.createNewFile()
   }
-  output = Some(new FileOutputStream(file))
+  output = Option(new FileOutputStream(file))
--- End diff --

IIUC, @HyukjinKwon meant `when you need to touch this file`.


---




[GitHub] spark pull request #22658: [SPARK-25671] Build external/spark-ganglia-lgpl i...

2018-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/22658


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/22658
  
Thanks! Merged to master/2.4


---




[GitHub] spark pull request #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark ...

2018-10-06 Thread wangyum
Github user wangyum commented on a diff in the pull request:

https://github.com/apache/spark/pull/22501#discussion_r223195740
  
--- Diff: 
core/src/test/scala/org/apache/spark/benchmark/BenchmarkBase.scala ---
@@ -48,15 +48,11 @@ abstract class BenchmarkBase {
   if (!file.exists()) {
 file.createNewFile()
   }
-  output = Some(new FileOutputStream(file))
+  output = Option(new FileOutputStream(file))
--- End diff --

Change here because: 
https://github.com/apache/spark/pull/22443#discussion_r221181428
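
For readers not following the linked discussion, here is a minimal standalone sketch (hypothetical, not the Spark code under review) of why `Option(...)` is generally preferred over `Some(...)` when the wrapped value could be null: `Option.apply` converts `null` to `None`, while `Some(...)` wraps `null` as-is.

```scala
// Sketch: Option(...) is null-safe, Some(...) is not.
object OptionVsSome {
  def main(args: Array[String]): Unit = {
    val maybeNull: String = null                   // e.g. a value from a Java API

    val safe: Option[String] = Option(maybeNull)   // Option.apply turns null into None
    val unsafe: Option[String] = Some(maybeNull)   // Some wraps null as Some(null)

    println(safe.isEmpty)    // true
    println(unsafe.isEmpty)  // false -- Some(null) counts as defined
    // unsafe.map(_.length) would throw a NullPointerException
  }
}
```

In this particular diff `new FileOutputStream(file)` can never return null (constructors don't), so the change is purely stylistic, but `Option(...)` is the defensive default.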


---




[GitHub] spark pull request #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBe...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/22652#discussion_r223195444
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/HashByteArrayBenchmark.scala 
---
@@ -19,15 +19,23 @@ package org.apache.spark.sql
 
 import java.util.Random
 
-import org.apache.spark.benchmark.Benchmark
+import org.apache.spark.benchmark.{Benchmark, BenchmarkBase}
 import org.apache.spark.sql.catalyst.expressions.{HiveHasher, XXH64}
 import org.apache.spark.unsafe.Platform
 import org.apache.spark.unsafe.hash.Murmur3_x86_32
 
 /**
  * Synthetic benchmark for MurMurHash 3 and xxHash64.
+ * To run this benchmark:
+ * {{{
+ *   1. without sbt: bin/spark-submit --class  
--- End diff --

It seems that we missed this because we thought this was a legacy guide 
that had worked before.


---




[GitHub] spark pull request #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBe...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/22652#discussion_r223195385
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/HashByteArrayBenchmark.scala 
---
@@ -19,15 +19,23 @@ package org.apache.spark.sql
 
 import java.util.Random
 
-import org.apache.spark.benchmark.Benchmark
+import org.apache.spark.benchmark.{Benchmark, BenchmarkBase}
 import org.apache.spark.sql.catalyst.expressions.{HiveHasher, XXH64}
 import org.apache.spark.unsafe.Platform
 import org.apache.spark.unsafe.hash.Murmur3_x86_32
 
 /**
  * Synthetic benchmark for MurMurHash 3 and xxHash64.
+ * To run this benchmark:
+ * {{{
+ *   1. without sbt: bin/spark-submit --class  
--- End diff --

Is this guide correct? `BenchmarkBase` is in a different jar file, isn't it?


---




[GitHub] spark issue #22650: [SPARK-25575][WEBUI][FOLLOWUP]SQL tab in the spark UI su...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on the issue:

https://github.com/apache/spark/pull/22650
  
Thanks a lot @srowen 


---




[GitHub] spark pull request #22650: [SPARK-25575][WEBUI][FOLLOWUP]SQL tab in the spar...

2018-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/22650


---




[GitHub] spark issue #22650: [SPARK-25575][WEBUI][FOLLOWUP]SQL tab in the spark UI su...

2018-10-06 Thread srowen
Github user srowen commented on the issue:

https://github.com/apache/spark/pull/22650
  
Merged to master


---




[GitHub] spark pull request #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark ...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/22501#discussion_r223195081
  
--- Diff: 
core/src/test/scala/org/apache/spark/benchmark/BenchmarkBase.scala ---
@@ -48,15 +48,11 @@ abstract class BenchmarkBase {
   if (!file.exists()) {
 file.createNewFile()
   }
-  output = Some(new FileOutputStream(file))
+  output = Option(new FileOutputStream(file))
--- End diff --

This looks like an irrelevant piggy-back.


---




[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22652
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22652
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97066/
Test PASSed.


---




[GitHub] spark issue #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pagination...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on the issue:

https://github.com/apache/spark/pull/22645
  
Thank you @srowen, I have modified the code based on your suggestions.


---




[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223195007
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
 ---
@@ -229,73 +406,88 @@ private[ui] abstract class ExecutionTable(
 }
 
 val desc = if (execution.description != null && 
execution.description.nonEmpty) {
-  {execution.description}
+  {execution.description}
 } else {
-  {execution.executionId}
+  {execution.executionId}
 }
 
-{desc} {details}
-  }
-
-  def toNodeSeq(request: HttpServletRequest): Seq[Node] = {
-UIUtils.listingTable[SQLExecutionUIData](
-  header, row(request, currentTime, _), executionUIDatas, id = 
Some(tableId))
+{desc}{details}
   }
 
   private def jobURL(request: HttpServletRequest, jobId: Long): String =
 "%s/jobs/job/?id=%s".format(UIUtils.prependBaseUri(request, 
parent.basePath), jobId)
 
-  private def executionURL(request: HttpServletRequest, executionID: 
Long): String =
+  private def executionURL(executionID: Long): String =
 s"${UIUtils.prependBaseUri(
   request, 
parent.basePath)}/${parent.prefix}/execution/?id=$executionID"
 }
 
-private[ui] class RunningExecutionTable(
-parent: SQLTab,
-currentTime: Long,
-executionUIDatas: Seq[SQLExecutionUIData])
-  extends ExecutionTable(
-parent,
-"running-execution-table",
-currentTime,
-executionUIDatas,
-showRunningJobs = true,
-showSucceededJobs = true,
-showFailedJobs = true) {
 
-  override protected def header: Seq[String] =
-baseHeader ++ Seq("Running Job IDs", "Succeeded Job IDs", "Failed Job 
IDs")
-}
+private[ui] class ExecutionTableRowData(
+val submissionTime: Long,
+val duration: Long,
+val executionUIData: SQLExecutionUIData)
+
 
-private[ui] class CompletedExecutionTable(
+private[ui] class ExecutionDataSource(
+request: HttpServletRequest,
 parent: SQLTab,
+executionData: Seq[SQLExecutionUIData],
+basePath: String,
 currentTime: Long,
-executionUIDatas: Seq[SQLExecutionUIData])
-  extends ExecutionTable(
-parent,
-"completed-execution-table",
-currentTime,
-executionUIDatas,
-showRunningJobs = false,
-showSucceededJobs = true,
-showFailedJobs = false) {
+pageSize: Int,
+sortColumn: String,
+desc: Boolean) extends 
PagedDataSource[ExecutionTableRowData](pageSize) {
 
-  override protected def header: Seq[String] = baseHeader ++ Seq("Job IDs")
-}
+  // Convert ExecutionData to ExecutionTableRowData which contains the 
final contents to show
+  // in the table so that we can avoid creating duplicate contents during 
sorting the data
+  private val data = 
executionData.map(executionRow).sorted(ordering(sortColumn, desc))
 
-private[ui] class FailedExecutionTable(
-parent: SQLTab,
-currentTime: Long,
-executionUIDatas: Seq[SQLExecutionUIData])
-  extends ExecutionTable(
-parent,
-"failed-execution-table",
-currentTime,
-executionUIDatas,
-showRunningJobs = false,
-showSucceededJobs = true,
-showFailedJobs = true) {
+  private var _slicedJobIds: Set[Int] = _
+
+  override def dataSize: Int = data.size
+
+  override def sliceData(from: Int, to: Int): Seq[ExecutionTableRowData] = 
{
+val r = data.slice(from, to)
+_slicedJobIds = r.map(_.executionUIData.executionId.toInt).toSet
+r
+  }
 
-  override protected def header: Seq[String] =
-baseHeader ++ Seq("Succeeded Job IDs", "Failed Job IDs")
+  private def executionRow(executionUIData: SQLExecutionUIData): 
ExecutionTableRowData = {
+val submissionTime = executionUIData.submissionTime
+val duration = executionUIData.completionTime.map(_.getTime())
+  .getOrElse(currentTime) - submissionTime
+
+new ExecutionTableRowData(
+  submissionTime,
+  duration,
+  executionUIData)
+  }
+
+  /**
+* Return Ordering according to sortColumn and desc
+*/
+  private def ordering(sortColumn: String, desc: Boolean): 
Ordering[ExecutionTableRowData] = {
+val ordering: Ordering[ExecutionTableRowData] = sortColumn match {
+  case "ID" => Ordering.by(_.executionUIData.executionId)
+  case "Description" => Ordering.by(_.executionUIData.description)
+  case "Submitted" => Ordering.by(_.executionUIData.submissionTime)
+  case "Duration" => Ordering.by(_.duration)
+  case "Job IDs" | "Succeeded Job IDs" => Ordering by 

[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22652
  
**[Test build #97066 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97066/testReport)**
 for PR 22652 at commit 
[`cc268ca`](https://github.com/apache/spark/commit/cc268caa70792cb1fa91bc3fd5e79687bc4cefde).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22658
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22658
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97064/
Test PASSed.


---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22641
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22641
  
**[Test build #97071 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97071/testReport)**
 for PR 22641 at commit 
[`01f1f97`](https://github.com/apache/spark/commit/01f1f97114892174cf52996c297e14ae6800628b).


---




[GitHub] spark issue #22641: [SPARK-25611][SPARK-25612][SQL][TESTS] Improve test run ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22641
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3763/
Test PASSed.


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22658
  
**[Test build #97064 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97064/testReport)**
 for PR 22658 at commit 
[`cd2264b`](https://github.com/apache/spark/commit/cd2264b6de5f386ece66e28ff62ec75cf3d34e22).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22643: [SPARK-25630][TEST] Reduce test time of HadoopFsRelation...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22643
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97068/
Test PASSed.


---




[GitHub] spark issue #22643: [SPARK-25630][TEST] Reduce test time of HadoopFsRelation...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22643
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22643: [SPARK-25630][TEST] Reduce test time of HadoopFsRelation...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22643
  
**[Test build #97068 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97068/testReport)**
 for PR 22643 at commit 
[`59ca9e0`](https://github.com/apache/spark/commit/59ca9e0f2fd6234217f63c25c41a477c4e435b50).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22618
  
**[Test build #97070 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97070/testReport)**
 for PR 22618 at commit 
[`90eb1d7`](https://github.com/apache/spark/commit/90eb1d7f5895e442a86506e3e7dae382e138b3b0).


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3762/
Test PASSed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/22618
  
Retest this please.


---




[GitHub] spark pull request #22603: [SPARK-25062][SQL] Clean up BlockLocations in InM...

2018-10-06 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/22603


---




[GitHub] spark issue #22603: [SPARK-25062][SQL] Clean up BlockLocations in InMemoryFi...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/22603
  
@peter-toth . What is your Apache JIRA user id? I need to assign you to the 
resolved SPARK-25062, but I cannot find your id and user name `Peter Toth`.


---




[GitHub] spark issue #22603: [SPARK-25062][SQL] Clean up BlockLocations in InMemoryFi...

2018-10-06 Thread dongjoon-hyun
Github user dongjoon-hyun commented on the issue:

https://github.com/apache/spark/pull/22603
  
Congratulations on your first contribution, @peter-toth. And thank you, 
@cloud-fan and @mgaido91.

Merged to master.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 

https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/3761/
Test PASSed.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22637
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97063/
Test PASSed.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22637
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22637: [SPARK-25408] Move to more ideomatic Java8

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22637
  
**[Test build #97063 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97063/testReport)**
 for PR 22637 at commit 
[`4c084f9`](https://github.com/apache/spark/commit/4c084f959ca45a8074362324f3f79b9c208251b7).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `public abstract class RowBasedKeyValueBatch extends MemoryConsumer 
implements Closeable `


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22651
  
**[Test build #97069 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97069/testReport)**
 for PR 22651 at commit 
[`2a512ce`](https://github.com/apache/spark/commit/2a512ce82560014469ce5c35e164b7c074b429a6).


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread wangyum
Github user wangyum commented on the issue:

https://github.com/apache/spark/pull/22651
  
retest this please


---




[GitHub] spark issue #22650: [SPARK-25575][WEBUI][FOLLOWUP]SQL tab in the spark UI su...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22650
  
**[Test build #4359 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4359/testReport)**
 for PR 22650 at commit 
[`cd9ef14`](https://github.com/apache/spark/commit/cd9ef14c4060d38a26dd31555b53a6bf9820fe17).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22603: [SPARK-25062][SQL] Clean up BlockLocations in InMemoryFi...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22603
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97065/
Test PASSed.


---




[GitHub] spark issue #22603: [SPARK-25062][SQL] Clean up BlockLocations in InMemoryFi...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22603
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22603: [SPARK-25062][SQL] Clean up BlockLocations in InMemoryFi...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22603
  
**[Test build #97065 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97065/testReport)**
 for PR 22603 at commit 
[`a50ae71`](https://github.com/apache/spark/commit/a50ae71f4c9b035482df20d2565ae553cac350bc).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22651
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97067/
Test FAILed.


---




[GitHub] spark issue #22651: [SPARK-25657][SQL][TEST] Refactor HashBenchmark to use m...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22651
  
**[Test build #97067 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97067/testReport)**
 for PR 22651 at commit 
[`2a512ce`](https://github.com/apache/spark/commit/2a512ce82560014469ce5c35e164b7c074b429a6).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223193975
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala ---
@@ -121,65 +122,242 @@ private[ui] class AllExecutionsPage(parent: SQLTab) extends WebUIPage("") with L
   {
 if (running.nonEmpty) {
   
-Running Queries:
+Running Queries:
 {running.size}
   
 }
   }
   {
 if (completed.nonEmpty) {
   
-Completed Queries:
+Completed Queries:
 {completed.size}
   
 }
   }
   {
 if (failed.nonEmpty) {
   
-Failed Queries:
+Failed Queries:
 {failed.size}
   
 }
   }
 
   
+
 UIUtils.headerSparkPage(request, "SQL", summary ++ content, parent, Some(5000))
   }
+
+  private def executionsTable(
+request: HttpServletRequest,
+executionTag: String,
+executionData: Seq[SQLExecutionUIData],
+currentTime: Long,
+showRunningJobs: Boolean,
+showSucceededJobs: Boolean,
+showFailedJobs: Boolean): Seq[Node] = {
+
+// stripXSS is called to remove suspicious characters used in XSS attacks
+val allParameters = request.getParameterMap.asScala.toMap.map { case (k, v) =>
+  UIUtils.stripXSS(k) -> v.map(UIUtils.stripXSS).toSeq
+}
+val parameterOtherTable = allParameters.filterNot(_._1.startsWith(executionTag))
+  .map(para => para._1 + "=" + para._2(0))
+
+val parameterExecutionPage = UIUtils.stripXSS(request.getParameter(s"$executionTag.page"))
+val parameterExecutionSortColumn = UIUtils.stripXSS(request.
+  getParameter(s"$executionTag.sort"))
+val parameterExecutionSortDesc = UIUtils.stripXSS(request.getParameter(s"$executionTag.desc"))
+val parameterExecutionPageSize = UIUtils.stripXSS(request.
+  getParameter(s"$executionTag.pageSize"))
+val parameterExecutionPrevPageSize = UIUtils.stripXSS(request.
+  getParameter(s"$executionTag.prevPageSize"))
+
+val executionPage = Option(parameterExecutionPage).map(_.toInt).getOrElse(1)
+val executionSortColumn = Option(parameterExecutionSortColumn).map { sortColumn =>
+  UIUtils.decodeURLParameter(sortColumn)
+}.getOrElse("ID")
+val executionSortDesc = Option(parameterExecutionSortDesc).map(_.toBoolean).getOrElse(
+  // New executions should be shown above old executions by default.
+  executionSortColumn == "ID"
+)
+val executionPageSize = Option(parameterExecutionPageSize).map(_.toInt).getOrElse(100)
+val executionPrevPageSize = Option(parameterExecutionPrevPageSize).map(_.toInt).
+  getOrElse(executionPageSize)
+
+// If the user has changed to a larger page size, then go to page 1 in order to avoid
+// IndexOutOfBoundsException.
+val page: Int = if (executionPageSize <= executionPrevPageSize) {
+  executionPage
+} else {
+  1
+}
+val tableHeaderId = executionTag // "running", "completed" or "failed"
+
+try {
+  new ExecutionPagedTable(
+request,
+parent,
+executionData,
+tableHeaderId,
+executionTag,
+UIUtils.prependBaseUri(request, parent.basePath),
+"SQL", // subPath
+parameterOtherTable,
+currentTime,
+pageSize = executionPageSize,
+sortColumn = executionSortColumn,
+desc = executionSortDesc,
+showRunningJobs,
+showSucceededJobs,
+showFailedJobs).table(page)
+} catch {
+  case e@(_: IllegalArgumentException | _: IndexOutOfBoundsException) =>
+
+  Error while rendering execution table:
+  
+{Utils.exceptionString(e)}
+  
+
+}
+  }
 }
 
-private[ui] abstract class ExecutionTable(
+
+private[ui] class ExecutionPagedTable(
+request: HttpServletRequest,
 parent: SQLTab,
-tableId: String,
+data: Seq[SQLExecutionUIData],
+tableHeaderId: String,
+executionTag: String,
+basePath: String,
+subPath: String,
+parameterOtherTable: Iterable[String],
 currentTime: Long,
-executionUIDatas: Seq[SQLExecutionUIData],
+pageSize: Int,
+sortColumn: String,
+desc: Boolean,

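The pagination guard in the diff above (fall back to page 1 when the user enlarges the page size, so a stale page index cannot run past the end of the data) can be sketched outside of Spark roughly as follows. This is an illustrative sketch only: the helper names `get_int_param` and `resolve_page` are hypothetical and not part of the Spark code base.

```python
def get_int_param(params, key, default):
    # Parse an optional query parameter as an int, falling back to a default;
    # analogous to Option(...).map(_.toInt).getOrElse(...) in the Scala diff.
    value = params.get(key)
    return int(value) if value is not None else default

def resolve_page(params):
    # Mirror the guard from the diff: if the page size grew since the previous
    # request, the old page index may now point past the end of the data set,
    # so reset to page 1 instead of risking an out-of-bounds lookup.
    page = get_int_param(params, "execution.page", 1)
    page_size = get_int_param(params, "execution.pageSize", 100)
    prev_page_size = get_int_param(params, "execution.prevPageSize", page_size)
    return page if page_size <= prev_page_size else 1
```

For example, a request carrying `execution.page=3` with `execution.pageSize=200` and `execution.prevPageSize=100` would be reset to page 1, while an unchanged page size keeps the requested page.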
[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223193956
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala ---

[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223193970
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala ---

[GitHub] spark pull request #22645: [SPARK-25566][SPARK-25567][WEBUI][SQL]Support pag...

2018-10-06 Thread shahidki31
Github user shahidki31 commented on a diff in the pull request:

https://github.com/apache/spark/pull/22645#discussion_r223193960
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala ---

[GitHub] spark issue #22623: [SPARK-25636][CORE] spark-submit cuts off the failure re...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22623
  
**[Test build #4358 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4358/testReport)**
 for PR 22623 at commit 
[`a82e75f`](https://github.com/apache/spark/commit/a82e75fb4019cf7c0e5ca8279a40e1ac8dbbf53e).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22615: [SPARK-25016][BUILD][CORE] Remove support for Hadoop 2.6

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22615
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97054/
Test PASSed.


---




[GitHub] spark issue #22615: [SPARK-25016][BUILD][CORE] Remove support for Hadoop 2.6

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22615
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark to use ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22501
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97056/
Test PASSed.


---




[GitHub] spark issue #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark to use ...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22501
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22615: [SPARK-25016][BUILD][CORE] Remove support for Hadoop 2.6

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22615
  
**[Test build #97054 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97054/testReport)**
 for PR 22615 at commit 
[`9efb76c`](https://github.com/apache/spark/commit/9efb76cde8b7fa31866266dbd90fd57408147dcf).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22501: [SPARK-25492][TEST] Refactor WideSchemaBenchmark to use ...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22501
  
**[Test build #97056 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97056/testReport)**
 for PR 22501 at commit 
[`e6f39f3`](https://github.com/apache/spark/commit/e6f39f36b5d806f1afcea980ba43d544dadbe35f).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97061/
Test FAILed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22618
  
Merged build finished. Test FAILed.


---




[GitHub] spark issue #22618: [SPARK-25321][ML] Revert SPARK-14681 to avoid API breaki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22618
  
**[Test build #97061 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97061/testReport)**
 for PR 22618 at commit 
[`90eb1d7`](https://github.com/apache/spark/commit/90eb1d7f5895e442a86506e3e7dae382e138b3b0).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `sealed abstract class Node extends Serializable `


---




[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22652
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97060/
Test PASSed.


---




[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22652
  
Merged build finished. Test PASSed.


---




[GitHub] spark issue #22652: [SPARK-25658][SQL][TEST] Refactor HashByteArrayBenchmark...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22652
  
**[Test build #97060 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/97060/testReport)**
 for PR 22652 at commit 
[`b5190d4`](https://github.com/apache/spark/commit/b5190d476762295415d80b9c47c6497d49295c26).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/22658
  
**[Test build #4360 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4360/testReport)**
 for PR 22658 at commit 
[`cd2264b`](https://github.com/apache/spark/commit/cd2264b6de5f386ece66e28ff62ec75cf3d34e22).


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22658
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/97058/
Test FAILed.


---




[GitHub] spark issue #22658: [SPARK-25671] Build external/spark-ganglia-lgpl in Jenki...

2018-10-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/22658
  
Merged build finished. Test FAILed.


---



