GitHub user lresende opened a pull request:
https://github.com/apache/spark/pull/13092
[SPARK-15309] Bump master to version 2.1.0-SNAPSHOT
## What changes were proposed in this pull request?
Update pom artifact version to 2.1.0-SNAPSHOT to avoid any conflicts with
2.0.0
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/13092#issuecomment-218951567
@srowen Please review.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well.
Github user lresende closed the pull request at:
https://github.com/apache/spark/pull/13092
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12508#issuecomment-216679636
The SparkPullRequestBuilder build basically calls ./dev/run-tests-jenkins,
while the nightly snapshot build runs some scripts from @pwendell. So I
don't believe
Github user lresende commented on a diff in the pull request:
https://github.com/apache/spark/pull/12508#discussion_r62191641
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/OracleIntegrationSuite.scala
---
@@ -46,12 +44,11 @@ import
Github user lresende commented on a diff in the pull request:
https://github.com/apache/spark/pull/12270#discussion_r60103432
--- Diff: project/SparkBuild.scala ---
@@ -364,12 +364,15 @@ object Flume {
}
object DockerIntegrationTests {
+ // Ignore checksum
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12270#issuecomment-211477523
Jenkins, retest this please.
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12270#issuecomment-211084045
Jenkins, retest this
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12269#issuecomment-212198729
@davies @liancheng It looks like after rebasing to the latest code, this issue
has been resolved. I am going to wait for a build to complete to double-check.
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12508#issuecomment-212150753
@JoshRosen How would I go about updating the PR build to run the integration
tests in Jenkins only? Should we run the tests after the
./dev/run-tests-jenkins
GitHub user lresende opened a pull request:
https://github.com/apache/spark/pull/12508
[SPARK-14738][BUILD] Separate docker integration tests from main build
## What changes were proposed in this pull request?
Create a Maven profile for executing the docker integration tests separately
from the main build.
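The profile described above might look roughly like the following in the parent pom.xml. This is only a sketch: the profile id and module path are taken from other messages in this thread (the build command and diff paths quoted later), not from the PR diff itself.

```xml
<!-- Sketch: a profile that adds the docker integration tests module to the
     reactor only when explicitly activated with -Pdocker-integration-tests -->
<profile>
  <id>docker-integration-tests</id>
  <modules>
    <module>external/docker-integration-tests</module>
  </modules>
</profile>
```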
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12269#issuecomment-212137015
@davies @liancheng
So, when converting these tests, I noticed the following:
test("uncorrelated scalar subquery on a DataFrame generated
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12508#issuecomment-212722890
@JoshRosen @rxin ping
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12348#issuecomment-212722791
@JoshRosen ping
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12552#issuecomment-212730550
LGTM
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12544#issuecomment-212732645
Is there any documentation that needs to be updated with the addition of
--packages or --jars to run the examples (e.g. running-on-yarn.md)?
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12270#issuecomment-211731480
@rxin Let me move them to a specific docker profile. But I would still run
them on Jenkins, as the infrastructure is already set up there.
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12270#issuecomment-211732862
Ok, I will work with @JoshRosen on the trigger part.
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12348#issuecomment-211705986
@JoshRosen All good now.
---
Github user lresende commented on the pull request:
https://github.com/apache/spark/pull/12270#issuecomment-211729901
@rxin, just trying to understand: is the Oracle test the only one failing?
Or are you suggesting we move the whole set of docker-based tests to a separate
profile?
Github user lresende commented on a diff in the pull request:
https://github.com/apache/spark/pull/14601#discussion_r74440796
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -107,6 +107,14 @@ class SparkHadoopUtil extends Logging
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14601
Jenkins test this please
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14601
Minor: the title should be [CORE] instead of [SPARK CORE].
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14601
Also, will this fix the scenario where the user has provided the
properties programmatically?
---
Github user lresende commented on a diff in the pull request:
https://github.com/apache/spark/pull/14601#discussion_r74441097
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -107,6 +107,14 @@ class SparkHadoopUtil extends Logging
GitHub user lresende opened a pull request:
https://github.com/apache/spark/pull/14606
[SPARK-17023][BUILD] Upgrade to Kafka 0.10.0.1 release
## What changes were proposed in this pull request?
Update Kafka streaming connector to use Kafka 0.10.0.1 release
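A connector version bump like this usually amounts to a one-line property change in the parent pom.xml; the property name below is an assumption for illustration, not quoted from the PR.

```xml
<!-- Hypothetical: bump the Kafka version used by the streaming connector -->
<kafka.version>0.10.0.1</kafka.version>
```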
## How was this patch tested?
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14606
Jenkins test this please
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14606
A couple of
[bugs](https://archive.apache.org/dist/kafka/0.10.0.1/RELEASE_NOTES.html)
seemed relevant to Spark; it is also good to maintain dependency currency,
similar to what we have been doing
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14066
The issue here is that releases keep getting archived when new releases
come out. For old releases (or by default) we could use
https://archive.apache.org/dist/maven/maven-3/, which is always
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
Spark Kinesis has a dependency on the Kinesis client, which is category-X:

    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>amazon-kinesis-client</artifactId>
      <version>${aws.kinesis.client.version}</version>
    </dependency>

Thus
GitHub user lresende opened a pull request:
https://github.com/apache/spark/pull/14981
[SPARK-17418] Remove Kinesis artifacts from Spark release scripts
## What changes were proposed in this pull request?
This PR removes Kinesis from the release scripts as Kinesis license
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
@srowen, The Kinesis assembly has been published by Spark releases for a
while. Here is the link to the 2.0 release on repository.apache.org:
https://repository.apache.org/service/local
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
As for the Ganglia one, I will create another JIRA to track that
separately, as this (Kinesis) one might involve more changes around the Python
side and the examples.
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
I would still wait for the feedback from legal before removing anything.
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
@srowen Should I update this PR with the removal of the Kinesis assembly then?
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14601
@agsachin Are you planning to address these updates in this PR? It would
be good to have this as part of Spark, as it affects multiple usage scenarios
in cloud platforms and other cases as well.
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
@srowen Please don't get me wrong, I don't have any interest in this
extension either; I just want to make sure we start doing the right thing for
Apache Spark. I will try to ping some
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/15114
I verified this works on native Docker on Linux with:
build/mvn -Pdocker-integration-tests -Pscala-2.11 -pl :spark-docker-integration-tests_2.11 clean compile test
LGTM
Github user lresende closed the pull request at:
https://github.com/apache/spark/pull/14981
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/15170
Mostly style-related changes.
LGTM
---
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
Ok, reverting the commit to remove the Kinesis assembly, as the Python tests
are relying on it for the transitive dependencies. Note that I was also trying
to overcome this requirement by appending all
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
The pointer is exactly your quote in the e-mail to legal-discuss:
http://www.apache.org/legal/resolved.html#prohibited says:
-
CAN APACHE PROJECTS RELY ON COMPONENTS UNDER
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
@srowen @rxin My understanding is that the mvn deploy is what takes care of
actually publishing the files to the Maven staging repository:

    $MVN -DzincPort=$ZINC_PORT --settings
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/14981
Yes, and this is the intent. It's ok to have these in the source release
(similar to Ganglia), but we don't publish them in the Maven repository, and
it becomes available only if people go
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/15594
@vanzin I believe this might be your realm :) Could you please help review
this?
---
Github user lresende commented on a diff in the pull request:
https://github.com/apache/spark/pull/15594#discussion_r88752705
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIService.scala
---
@@ -57,7 +59,24 @@ private[hive] class
Github user lresende commented on the issue:
https://github.com/apache/spark/pull/22867
@gss2002 Sorry I missed this initially, but great that @vanzin is helping
you with the fix.
---