Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7944#issuecomment-127891984
Nah - not a big enough deal to create a new JIRA. Anyway, this
LGTM. @JoshRosen feel free to merge.
---
If your project is set up for it, you can reply
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7929#discussion_r36274062
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala ---
@@ -62,6 +64,52 @@ private[hive] class ClientWrapper
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7629#issuecomment-128098191
Hey guys - thanks this is a cool and useful change. One thing I was
wondering, do you think we could wait until our builds are in better shape to
merge this since it's
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7629#issuecomment-128099706
If you break out the individual builds, many of them have been fine until
the Hive 1.2.1 patch. The top level dashboard isn't that useful because if any
one of the ~5
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127805528
I took a look at the spark submit changes and they LGTM.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7929#issuecomment-127828215
I discussed with Cheng offline but I'd prefer to narrow the scope of the
changes we make to _only_ override Hive's behavior when we see the special CDH
version string
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7944#issuecomment-127774145
LGTM - thanks for adding this. We've been manually working around this on
jenkins (by passing -DzincPort, etc) but it's much nicer to have this handled
properly. /cc
Github user pwendell closed the pull request at:
https://github.com/apache/spark/pull/7876
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7139#issuecomment-127810552
@shivaram maybe you can merge? I looked at the spark submit stuff but
overall it was a very small part of the changes.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7944#issuecomment-127797901
@ryan-williams I think the zinc port is already set if none is provided:
https://github.com/ryan-williams/spark/blob/zinc-status/build/mvn#L113
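The default-port behavior referenced above can be sketched in shell (a minimal illustration; the `ZINC_PORT` variable name and the default value are assumptions, not the actual contents of `build/mvn`):

```shell
# Use an explicitly provided zinc port if one is set, otherwise fall back
# to a default. Port 3030 here is an illustrative assumption.
zinc_port="${ZINC_PORT:-3030}"
echo "using zinc port: $zinc_port"
```

Jenkins-style workarounds like passing `-DzincPort` become unnecessary once the wrapper script applies a fallback of this shape itself.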
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7914#issuecomment-127445537
hey @shanyu - we are not adding third party integrations like this in the
spark codebase. The best route is to create a third party package. See
spark-packages.org
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127413708
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7047#issuecomment-127441475
@trystanleftwich we are actually recommending that MapR users use the
hadoop provided builds that became available in Spark 1.4. You just add the
MapR hadoop bindings
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-127413421
@steveloughran any issues preventing us from merging?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7770#issuecomment-127378665
Hey Imran,
So I think there is a larger design discussion at hand that we can probably
break out into a mailing list thread or maybe discuss offline - about
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127113516
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7770#issuecomment-127122956
@squito regarding accumulators, there was an earlier patch that supports
incremental updates of accumulators, so we can use them for this type of
measurement now. Over
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7770#discussion_r36055835
--- Diff: core/src/main/scala/org/apache/spark/ui/ToolTips.scala ---
@@ -62,6 +62,13 @@ private[spark] object ToolTips {
Time that the executor
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7770#discussion_r36055866
--- Diff: core/src/main/scala/org/apache/spark/ui/ToolTips.scala ---
@@ -62,6 +62,13 @@ private[spark] object ToolTips {
Time that the executor
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7770#discussion_r36056120
--- Diff: core/src/main/scala/org/apache/spark/ui/ToolTips.scala ---
@@ -62,6 +62,13 @@ private[spark] object ToolTips {
Time that the executor
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/7876
SPARK-8064, build against Hive 1.2.1 (with Maven tests)
Attempting to run maven tests on this PR.
You can merge this pull request into a Git repository by running:
$ git pull https
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127076980
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127081488
Jenkins, test this please.
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/7878
SPARK-9545: Use Maven in PRB if title contains [maven-test]
This is just some small glue code to actually make use of the
AMPLAB_JENKINS_BUILD_TOOL switch. As far as I can tell, we actually
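The title-based switch described above can be sketched as follows (the title string is a made-up example, and the title variable name is an assumption about the PRB environment; only `AMPLAB_JENKINS_BUILD_TOOL` is named in the PR itself):

```shell
# Choose the build tool based on a [maven-test] tag in the PR title.
ghprbPullTitle="[SPARK-9545] Some change [maven-test]"   # hypothetical title
if printf '%s' "$ghprbPullTitle" | grep -q '\[maven-test\]'; then
  AMPLAB_JENKINS_BUILD_TOOL=maven
else
  AMPLAB_JENKINS_BUILD_TOOL=sbt
fi
echo "$AMPLAB_JENKINS_BUILD_TOOL"
```

Tag-in-title triggers like this are a common way to opt into an alternate CI path without new infrastructure.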
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127079848
/cc @JoshRosen @srowen and @rxin for any thoughts on this.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127081454
@rxin I've also just added support for triggering different hadoop version
builds.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127081073
@rxin unfortunately neither of those is supported by the current PRB
infrastructure. We only have access to a limited number of contextual variables
set by the PRB
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7770#discussion_r36046965
--- Diff: core/src/main/scala/org/apache/spark/ui/ToolTips.scala ---
@@ -62,6 +62,13 @@ private[spark] object ToolTips {
Time that the executor
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-12701
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127079353
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127133065
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127132935
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127083643
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127084223
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127085715
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127085099
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7878#issuecomment-127090315
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7876#issuecomment-127109761
Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-12694
I've cleared the maven/ivy cache on all the jenkins machines, so going to
restart the build again. Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7411#issuecomment-126946591
Yeah I need to update this - but it's off the critical path for the release
so it's on hold.
On Sat, Aug 1, 2015 at 10:59 AM, Josh Rosen notificati
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7732#issuecomment-126878276
I might slightly prefer to just allow users to specify files with something
like `--conf X --conf Y` and then those files are put in the conf directory
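The repeated-flag pattern suggested above can be illustrated with a tiny parser. This is only a sketch of collecting repeated flag values; it is not Spark's actual argument handling, and the file-passing variant of `--conf` discussed here is a proposal, not an existing spark-submit feature:

```shell
# Collect every value passed via repeated --conf flags, in order.
set -- --conf hive-site.xml --conf log4j.properties app.jar  # sample argv
confs=""
while [ $# -gt 0 ]; do
  case "$1" in
    --conf) confs="$confs $2"; shift 2 ;;
    *) shift ;;
  esac
done
echo "collected:$confs"
```

The collected file names could then be copied into the conf directory in a later step.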
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7826#issuecomment-126805344
Yeah seems like a good idea - LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-126362290
Thanks @steveloughran I can take a crack at publishing to maven. Since that
might take a day or so, one thing you can do is just put the forked hive jars
in your
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7004#issuecomment-126112201
Okay sounds good. Thanks for looking at it Sean.
- Patrick
On Wed, Jul 29, 2015 at 1:37 PM, Sean Owen notificati...@github.com wrote:
Re
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7004#issuecomment-126063730
Did @srowen look at the build change? Sean or I should be signing off on
any dependency changes in the build.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-125758372
Hey @steveloughran and @vanzin. If I understand correctly, I think we need
to modify the dependency on hive-exec to depend on the `core` artifact.
Currently I think
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7191#issuecomment-125759747
Yeah the issue is that the `hive-exec` jar in upstream hive is actually an
assembly jar that includes guava and a bunch of other things. This causes
dependency
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7577#issuecomment-125373420
I don't think so. We update the Tachyon version fairly regularly and I
think users are conditioned to expect updates periodically.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7663#issuecomment-124854699
Thanks Ryan. This seems like an improvement. But my concern is overall
whether we should have a test that depends on external services. What if
Kinesis experiences
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7663#issuecomment-124923327
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=blobdiff;f=extras/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala;h
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7663#issuecomment-124923264
@zsxwing is this still failing PRs? In master the Kinesis test is ignored.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7595#issuecomment-124552527
LGTM - seems this doesn't break compatibility, only adds a help option.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7639#issuecomment-124552679
Thanks Sean, looks good. We can also have it print a deprecation warning
directing users to the new script.
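The warn-then-delegate suggestion above is a common shell pattern; a minimal sketch (the script name below is a placeholder, not an actual Spark script):

```shell
# Legacy wrapper: print a deprecation warning, then hand off to the new script.
warn_and_delegate() {
  echo "WARNING: this script is deprecated; use new-script.sh instead." >&2
  # exec "$(dirname "$0")/new-script.sh" "$@"   # delegate to the replacement
}
msg=$(warn_and_delegate 2>&1)
echo "$msg"
```

Keeping the old entry point alive this way avoids breaking downstream users while still steering them to the replacement.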
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7639#issuecomment-124657693
LGTM - no problem @mallman, it's not something that's well documented as a
public API, but it did break a few things downstream to remove these scripts,
so if we can
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7572#issuecomment-123476011
Hi all,
IMO - this patch is far too invasive to be considered for a backport. Of
course, it is always a judgement call, but here is how I think about
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7558#issuecomment-123180122
LGTM
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7399#discussion_r34856722
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -231,52 +241,25 @@ private[ui] class StagePage(parent: StagesTab)
extends
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7399#issuecomment-122176897
I took a look at the UI and I like it. I did have two thoughts though:
1. Can we make it so the headers, when hovering, show the link cursor?
Otherwise
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7399#discussion_r34862450
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -231,52 +241,25 @@ private[ui] class StagePage(parent: StagesTab)
extends
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7457#issuecomment-122182225
LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7014#issuecomment-121306600
I think the compatibility is okay, but two other quick questions:
1. Is it well defined which exception caused the task to fail? What if a
task fails N times
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7014#discussion_r34590635
--- Diff: core/src/main/scala/org/apache/spark/TaskEndReason.scala ---
@@ -97,11 +101,17 @@ case class ExceptionFailure(
description: String
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/7411
[SPARK-1517] Refactor release scripts to facilitate nightly publishing
This update contains some code changes to the release scripts that allow
easier nightly publishing. I've been using these new
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7374#issuecomment-121058778
@brennonyork JW - did you test this to verify that it works? LGTM but bash
can sometimes have unexpected consequences, and there are no unit tests
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7296#issuecomment-119845615
Looks good, did you do browser inspection and make sure this is actually
working?
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7219#discussion_r33964449
--- Diff: pom.xml ---
@@ -1826,6 +1830,26 @@
</properties>
</profile>
+<profile>
+  <!--
+  Use this profile
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-119013600
Yeah so it looks like the circular dependencies mean we can't really make
good use of a separate build project (because we can never have any code in
there that refers
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-119031353
@vanzin thanks for helping with this
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118951585
Marcelo - That is not a bad idea. It's not guaranteed it will work because
the underlying issue here is that having `test-jar` dependencies screws up the
maven shade
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118992226
Oh I see - I thought you meant to make it a test-jar dependency. Yeah, this
is a good idea.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118640533
My understanding was this bug affects maven version 3.3 and later. I've
never tried publishing the release with Maven 3.3, so I don't know whether we
would still just
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118648645
Hey sean - option 2 still enables the bug-causing option when we publish
releases. So I think this means we may not be able to publish releases with
maven 3.3 and later
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118640061
Hey @andrewor14 - sorry for the delay I didn't see this comment yesterday.
I see - I didn't realize one thing - the maven version being used on jenkins is
3.1.X
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118677343
Hey Andrew,
Can you update the release preparation script
https://github.com/apache/spark/blob/master/dev/create-release/create-release.sh#L121
so
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7219#discussion_r33906211
--- Diff: pom.xml ---
@@ -1826,6 +1830,26 @@
</properties>
</profile>
+<profile>
+  <!--
+  Use this profile
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7219#discussion_r33906199
--- Diff: pom.xml ---
@@ -1826,6 +1830,26 @@
</properties>
</profile>
+<profile>
+  <!--
+  Use this profile
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118719759
I made a minor comment regarding the name. Pending that small update this
LGTM
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7219#discussion_r33885611
--- Diff: pom.xml ---
@@ -678,7 +682,13 @@
<groupId>org.scalatest</groupId>
<artifactId>scalatest_${scala.binary.version}</artifactId>
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118533990
Hey @srowen - there is more going on here. The issue is that there is
actually a class called `SparkFunSuite` that is defined inside of the core
module that the other
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7219#issuecomment-118523169
Hey @andrewor14 - I took a look at this. It seems to me a bit weird to
have Spark core depend on Scalatest. This will affect all of our downstream
applications
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7072#discussion_r33516520
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -166,8 +167,26 @@ private[spark] object JettyUtils extends Logging
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7072#issuecomment-116853701
I would still argue for this change because in remote environments having a
page with a 10MB payload is pretty bad. It's just good form to compress output
for very
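The payload concern is easy to demonstrate: repetitive HTML table rows (like a very large stage page) compress dramatically. This is a generic illustration using gzip on a synthetic payload, not Spark's actual Jetty configuration:

```shell
# Generate a repetitive HTML-like payload and compare raw vs gzipped size.
tmp=$(mktemp)
for i in $(seq 1 5000); do
  echo "<tr><td>task $i</td><td>0.2 s</td></tr>"
done > "$tmp"
gzip -c "$tmp" > "$tmp.gz"
orig=$(wc -c < "$tmp")
comp=$(wc -c < "$tmp.gz")
echo "raw=$orig gzipped=$comp"
rm -f "$tmp" "$tmp.gz"
```

On highly repetitive markup the compressed size is typically a small fraction of the original, which is exactly the remote-environment case argued above.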
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7072#issuecomment-116931991
@srowen are you strongly against this or just mildly?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7054#issuecomment-116220075
Sure, we can fix this up. In general with the hadoop-provided builds we
expect people to use those from now on. Also, this definitely worked for me
locally, so I
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/7072#issuecomment-116442602
@zsxwing have you noticed any improvement in user-facing response time for
the loading of the page?
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/7014#discussion_r33435542
--- Diff: core/src/main/scala/org/apache/spark/TaskEndReason.scala ---
@@ -97,11 +101,17 @@ case class ExceptionFailure(
description: String
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6903#issuecomment-115357990
I think @mengxr was involved in the javadoc fork.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5403#issuecomment-111727508
Hey All,
I would like to close this issue pending some further discussion, maybe
offline. The main reason is that people keep asking me why we aren't merging
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6770#issuecomment-111556246
Has this issue ever affected anyone before now? If it's a relatively
obscure issue I think failing builds with Maven 3.3 might be a big
inconvenience for developers
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6766#issuecomment-111242115
LGTM thanks @vanzin!
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6765#issuecomment-111245461
LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6764#issuecomment-111245502
LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6765#issuecomment-111245451
Yes it's true - in the short term I will post this as a known issue on the
release notes and point people to this fix.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6764#issuecomment-111245518
LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6734#issuecomment-110613004
This looks good, but will wait until #6735 is merged for final sign off.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6681#issuecomment-110452495
Hey guys - I'd actually prefer not to put this feature in branch 1.4 since
that release is already tied off. Let's just put it in the master branch
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/6729
[SPARK-6511] [Documentation] Explain how to use Hadoop provided builds
This provides preliminary documentation pointing out how to use the
Hadoop free builds. I am hoping over time this list
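The mechanism the Hadoop free builds rely on is pointing Spark at an existing Hadoop installation's classpath via `SPARK_DIST_CLASSPATH`. A sketch of the documented usage; the `hadoop` function below fakes the real `hadoop classpath` command (and its output path is made up) so the snippet is self-contained:

```shell
# Simulate `hadoop classpath`; on a real cluster you would call the actual binary.
hadoop() { echo "/opt/hadoop/share/hadoop/common/*:/opt/hadoop/etc/hadoop"; }

# Point a hadoop-free Spark build at the cluster's Hadoop jars.
export SPARK_DIST_CLASSPATH="$(hadoop classpath)"
echo "$SPARK_DIST_CLASSPATH"
```

Because the Hadoop jars come from the cluster rather than the Spark tarball, the same Spark build works against vendor distributions (MapR, CDH, etc.) without rebuilding.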
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6729#issuecomment-110479694
/cc @vanzin and @srowen for any feedback.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6729#issuecomment-110485927
@vanzin Any specific details in mind? Personally I don't mind having
details for different distros. My feeling is if people can't get these
hadoop-free builds running
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/6729#issuecomment-110489016
Ah okay - I think it could be good to add those too over time. For instance
the MapR one probably won't work at all unless some native libraries are added
since