Re: Unable to run applications on spark in standalone cluster mode

2015-10-25 Thread Rohith P
No, the ./sbin/start-master.sh --ip option did not work; I still get the
same error.




--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/Unable-to-run-applications-on-spark-in-standalone-cluster-mode-tp14683p14779.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Josh Rosen
Hi Mark,

The shuffle memory leaks that I identified in SPARK-11293 have been around
for multiple releases, and it's not clear whether they have caused
performance problems in real workloads, so I would say that it's fine to
move the release forward without including my patch. If we have to cut
another release candidate for some other reason, it would be nice to
include the fix then, but I don't think it qualifies as a release-blocker
by itself.

On Sun, Oct 25, 2015 at 3:50 PM, Mark Hamstra wrote:

> Should 1.5.2 wait for Josh's fix of SPARK-11293?
>
> On Sun, Oct 25, 2015 at 2:25 PM, Sean Owen  wrote:
>
>> The signatures and licenses are fine. I continue to get failures in
>> these tests though, with "-Pyarn -Phadoop-2.6 -Phive
>> -Phive-thriftserver" on Ubuntu 15 / Java 7.
>>
>> - Unpersisting HttpBroadcast on executors and driver in distributed
>> mode *** FAILED ***
>>   java.util.concurrent.TimeoutException: Can't find 2 executors before
>> 1 milliseconds elapsed
>>   at
>> org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
>>   at org.apache.spark.broadcast.BroadcastSuite.org
>> $apache$spark$broadcast$BroadcastSuite$$testUnpersistHttpBroadcast(BroadcastSuite.scala:238)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply$mcV$sp(BroadcastSuite.scala:149)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
>>   at
>> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>>   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>>   ...
>> - Unpersisting TorrentBroadcast on executors only in local mode
>> - Unpersisting TorrentBroadcast on executors and driver in local mode
>> - Unpersisting TorrentBroadcast on executors only in distributed mode
>> - Unpersisting TorrentBroadcast on executors and driver in distributed
>> mode *** FAILED ***
>>   java.util.concurrent.TimeoutException: Can't find 2 executors before
>> 1 milliseconds elapsed
>>   at
>> org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
>>   at org.apache.spark.broadcast.BroadcastSuite.org
>> $apache$spark$broadcast$BroadcastSuite$$testUnpersistTorrentBroadcast(BroadcastSuite.scala:287)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply$mcV$sp(BroadcastSuite.scala:165)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
>>   at
>> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
>>   at
>> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>>   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>>   ...
>>
>> On Sun, Oct 25, 2015 at 7:07 AM, Reynold Xin  wrote:
>> > Please vote on releasing the following candidate as Apache Spark version
>> > 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and passes
>> if a
>> > majority of at least 3 +1 PMC votes are cast.
>> >
>> > [ ] +1 Release this package as Apache Spark 1.5.2
>> > [ ] -1 Do not release this package because ...
>> >
>> >
>> > The release fixes 51 known issues in Spark 1.5.1, listed here:
>> > http://s.apache.org/spark-1.5.2
>> >
>> > The tag to be voted on is v1.5.2-rc1:
>> > https://github.com/apache/spark/releases/tag/v1.5.2-rc1
>> >
>> > The release files, including signatures, digests, etc. can be found at:
>> > http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
>> >
>> > Release artifacts are signed with the following key:
>> > https://people.apache.org/keys/committer/pwendell.asc
>> >
>> > The staging repository for this release can be found at:
>> > - as version 1.5.2-rc1:
>> > https://repository.apache.org/content/repositories/orgapachespark-1151
>> > - as version 1.5.2:
>> > https://repository.apache.org/content/repositories/orgapachespark-1150
>> >
>> > The documentation corresponding to this release can be found at:
>> >
>> http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
>> >
>> >
>> > ===
>> > How can I help test this release?
>> > ===
>> > If you are a Spark user, you can help us test this release by taking an
>> > existing Spark workload and running on this release candidate, then
>> > reporting any regressions.
>> >
>> > ==

Duplicate (?) code paths to handle Executor failures

2015-10-25 Thread Kay Ousterhout
Hi all,

I noticed that when the JVM for an executor fails, in Standalone mode, we
have two duplicate code paths that handle the failure, one via Akka, and
the second via the Worker/ExecutorRunner:

via Akka:
(1) CoarseGrainedSchedulerBackend is notified that the remote Akka endpoint
is disconnected:
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L189
and it calls CoarseGrainedSchedulerBackend.removeExecutor
(2) removeExecutor() tells the task scheduler to reschedule all of the
tasks that were running on that executor

via the Worker/ExecutorRunner:
(1) The ExecutorRunner notes that the Executor process has failed and
notifies the Spark Master
(2) The Master notifies the AppClient (for the particular application),
which then notifies the SparkDeploySchedulerBackend (which subclasses
CoarseGrainedSchedulerBackend)
(3) SparkDeploySchedulerBackend calls
CoarseGrainedSchedulerBackend.removeExecutor, which eventually tells the
task scheduler that the executor was lost and all tasks running on it
should be re-scheduled (as above)
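To make the duplication concrete, here is a toy sketch (illustrative Scala, not Spark's actual code; all names are mine) of the invariant both paths rely on: whichever notification arrives second must be a no-op, i.e. the shared removeExecutor handler has to be idempotent.

```scala
// Toy model: two independent failure-notification paths converging on one handler.
class ToySchedulerBackend {
  private var live = Set("exec-1", "exec-2")
  private var rescheduled = List.empty[String]

  // Both the Akka-disconnect path and the Worker/ExecutorRunner path land here.
  def removeExecutor(id: String, reason: String): Unit = {
    if (live.contains(id)) {   // first notification wins
      live -= id
      rescheduled ::= id       // ask the task scheduler to reschedule its tasks
    }                          // a second notification for the same id is a no-op
  }

  def rescheduledExecutors: List[String] = rescheduled
}

val backend = new ToySchedulerBackend
backend.removeExecutor("exec-1", "Rpc disassociated")        // Akka path: little detail
backend.removeExecutor("exec-1", "process exited with 42")   // Worker path: has an exit code
assert(backend.rescheduledExecutors == List("exec-1"))       // rescheduled exactly once
```

If the paths were unified, only the richer notification (the one carrying the exit code) would need to reach the handler at all.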

For YARN, my understanding is that there is a 3rd code path where the
YarnAllocator's processCompletedContainers() gets information about the
process's exit from the master, and translates it into an "ExecutorExited"
message that gets passed to the scheduler, similar to in the
Worker/ExecutorRunner case (YARN folks, is this correct?).

It's confusing and error prone to have these multiple different ways of
handling failures (I ran into this problem because I was fixing a bug where
one of the code paths can lead to a hang, but the other one doesn't).  Can
we eliminate all but one of these code paths?  Is there a reason for the
duplicate error handling?

Do all of the cluster managers (Standalone, YARN, Mesos) communicate in
some way when an Executor has failed, so we can ignore the Akka code path?
The Akka code path is most tempting to eliminate because it has less
information about the failure (the other code path typically has an exit
code for the process, at a minimum).

I'm also curious if others have seen this issue; for example, Marcelo, I'm
wondering if this came up in your attempts to treat YARN pre-emption
differently (did you run into issues where, when YARN pre-empts an
executor, Spark gets the "Rpc disassociated" failure from Akka before the
more useful error from YARN saying that the executor was pre-empted?).

-Kay

--

To reproduce this issue, you can run one of these jobs:

sc.parallelize(1 to 10, 2).foreach { x =>
  if (x == 1) throw new OutOfMemoryError("test OOM")
}

or

sc.parallelize(1 to 10, 2).foreach { x =>
  if (x == 1) System.exit(42)
}


Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Mark Hamstra
Should 1.5.2 wait for Josh's fix of SPARK-11293?

On Sun, Oct 25, 2015 at 2:25 PM, Sean Owen  wrote:

> The signatures and licenses are fine. I continue to get failures in
> these tests though, with "-Pyarn -Phadoop-2.6 -Phive
> -Phive-thriftserver" on Ubuntu 15 / Java 7.
>
> - Unpersisting HttpBroadcast on executors and driver in distributed
> mode *** FAILED ***
>   java.util.concurrent.TimeoutException: Can't find 2 executors before
> 1 milliseconds elapsed
>   at
> org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
>   at
> org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
>   at org.apache.spark.broadcast.BroadcastSuite.org
> $apache$spark$broadcast$BroadcastSuite$$testUnpersistHttpBroadcast(BroadcastSuite.scala:238)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply$mcV$sp(BroadcastSuite.scala:149)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
>   at
> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   ...
> - Unpersisting TorrentBroadcast on executors only in local mode
> - Unpersisting TorrentBroadcast on executors and driver in local mode
> - Unpersisting TorrentBroadcast on executors only in distributed mode
> - Unpersisting TorrentBroadcast on executors and driver in distributed
> mode *** FAILED ***
>   java.util.concurrent.TimeoutException: Can't find 2 executors before
> 1 milliseconds elapsed
>   at
> org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
>   at
> org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
>   at org.apache.spark.broadcast.BroadcastSuite.org
> $apache$spark$broadcast$BroadcastSuite$$testUnpersistTorrentBroadcast(BroadcastSuite.scala:287)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply$mcV$sp(BroadcastSuite.scala:165)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
>   at
> org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
>   at
> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>   at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   ...
>
> On Sun, Oct 25, 2015 at 7:07 AM, Reynold Xin  wrote:
> > Please vote on releasing the following candidate as Apache Spark version
> > 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and passes
> if a
> > majority of at least 3 +1 PMC votes are cast.
> >
> > [ ] +1 Release this package as Apache Spark 1.5.2
> > [ ] -1 Do not release this package because ...
> >
> >
> > The release fixes 51 known issues in Spark 1.5.1, listed here:
> > http://s.apache.org/spark-1.5.2
> >
> > The tag to be voted on is v1.5.2-rc1:
> > https://github.com/apache/spark/releases/tag/v1.5.2-rc1
> >
> > The release files, including signatures, digests, etc. can be found at:
> > http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/pwendell.asc
> >
> > The staging repository for this release can be found at:
> > - as version 1.5.2-rc1:
> > https://repository.apache.org/content/repositories/orgapachespark-1151
> > - as version 1.5.2:
> > https://repository.apache.org/content/repositories/orgapachespark-1150
> >
> > The documentation corresponding to this release can be found at:
> > http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
> >
> >
> > ===
> > How can I help test this release?
> > ===
> > If you are a Spark user, you can help us test this release by taking an
> > existing Spark workload and running on this release candidate, then
> > reporting any regressions.
> >
> > 
> > What justifies a -1 vote for this release?
> > 
> > -1 vote should occur for regressions from Spark 1.5.1. Bugs already
> present
> > in 1.5.1 will not block this release.
> >
> > ===
> > What should happen to JIRA tickets still targeting 1.5.2?
> > ===
> > Please target 1.5.3 or 1.6.0.
> >
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> 

Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Sean Owen
The signatures and licenses are fine. I continue to get failures in
these tests though, with "-Pyarn -Phadoop-2.6 -Phive
-Phive-thriftserver" on Ubuntu 15 / Java 7.

- Unpersisting HttpBroadcast on executors and driver in distributed
mode *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
1 milliseconds elapsed
  at 
org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
  at 
org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
  at 
org.apache.spark.broadcast.BroadcastSuite.org$apache$spark$broadcast$BroadcastSuite$$testUnpersistHttpBroadcast(BroadcastSuite.scala:238)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply$mcV$sp(BroadcastSuite.scala:149)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$12.apply(BroadcastSuite.scala:149)
  at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)
  ...
- Unpersisting TorrentBroadcast on executors only in local mode
- Unpersisting TorrentBroadcast on executors and driver in local mode
- Unpersisting TorrentBroadcast on executors only in distributed mode
- Unpersisting TorrentBroadcast on executors and driver in distributed
mode *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
1 milliseconds elapsed
  at 
org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:561)
  at 
org.apache.spark.broadcast.BroadcastSuite.testUnpersistBroadcast(BroadcastSuite.scala:313)
  at 
org.apache.spark.broadcast.BroadcastSuite.org$apache$spark$broadcast$BroadcastSuite$$testUnpersistTorrentBroadcast(BroadcastSuite.scala:287)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply$mcV$sp(BroadcastSuite.scala:165)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
  at 
org.apache.spark.broadcast.BroadcastSuite$$anonfun$16.apply(BroadcastSuite.scala:165)
  at 
org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)
  ...

On Sun, Oct 25, 2015 at 7:07 AM, Reynold Xin  wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and passes if a
> majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.5.2
> [ ] -1 Do not release this package because ...
>
>
> The release fixes 51 known issues in Spark 1.5.1, listed here:
> http://s.apache.org/spark-1.5.2
>
> The tag to be voted on is v1.5.2-rc1:
> https://github.com/apache/spark/releases/tag/v1.5.2-rc1
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> - as version 1.5.2-rc1:
> https://repository.apache.org/content/repositories/orgapachespark-1151
> - as version 1.5.2:
> https://repository.apache.org/content/repositories/orgapachespark-1150
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
>
>
> ===
> How can I help test this release?
> ===
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions.
>
> 
> What justifies a -1 vote for this release?
> 
> -1 vote should occur for regressions from Spark 1.5.1. Bugs already present
> in 1.5.1 will not block this release.
>
> ===
> What should happen to JIRA tickets still targeting 1.5.2?
> ===
> Please target 1.5.3 or 1.6.0.
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Mark Hamstra
You're correct, Sean: That build change isn't in branch-1.5, so the
two-phase build is still needed there.

On Sun, Oct 25, 2015 at 9:30 AM, Sean Owen  wrote:

> I believe you still need to "clean package" and then "test"
> separately. Or did the change to make that unnecessary go in to 1.5?
>
> FWIW I do not see this (my results coming soon)
>
> On Sun, Oct 25, 2015 at 4:28 PM, Ted Yu  wrote:
> > When I ran the following command:
> > ~/apache-maven-3.3.3/bin/mvn -Phive -Phive-thriftserver -Pyarn
> -Phadoop-2.4
> > -Dhadoop.version=2.6.0 package
> >
> > I got:
> >
> > testChildProcLauncher(org.apache.spark.launcher.SparkLauncherSuite)  Time
> > elapsed: 0.031 sec  <<< FAILURE!
> > java.lang.AssertionError: expected:<0> but was:<1>
> > at org.junit.Assert.fail(Assert.java:93)
> > at org.junit.Assert.failNotEquals(Assert.java:647)
> > at org.junit.Assert.assertEquals(Assert.java:128)
> > at org.junit.Assert.assertEquals(Assert.java:472)
> > at org.junit.Assert.assertEquals(Assert.java:456)
> > at
> >
> org.apache.spark.launcher.SparkLauncherSuite.testChildProcLauncher(SparkLauncherSuite.java:105)
> >
> > java version "1.7.0_67"
> > Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
> > Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
> >
> > I checked
> >
> ./launcher/target/surefire-reports/TEST-org.apache.spark.launcher.SparkLauncherSuite.xml
> > but didn't get much clue.
> >
> > FYI
> >
> > On Sun, Oct 25, 2015 at 12:07 AM, Reynold Xin 
> wrote:
> >>
> >> Please vote on releasing the following candidate as Apache Spark version
> >> 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and passes
> if a
> >> majority of at least 3 +1 PMC votes are cast.
> >>
> >> [ ] +1 Release this package as Apache Spark 1.5.2
> >> [ ] -1 Do not release this package because ...
> >>
> >>
> >> The release fixes 51 known issues in Spark 1.5.1, listed here:
> >> http://s.apache.org/spark-1.5.2
> >>
> >> The tag to be voted on is v1.5.2-rc1:
> >> https://github.com/apache/spark/releases/tag/v1.5.2-rc1
> >>
> >> The release files, including signatures, digests, etc. can be found at:
> >> http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
> >>
> >> Release artifacts are signed with the following key:
> >> https://people.apache.org/keys/committer/pwendell.asc
> >>
> >> The staging repository for this release can be found at:
> >> - as version 1.5.2-rc1:
> >> https://repository.apache.org/content/repositories/orgapachespark-1151
> >> - as version 1.5.2:
> >> https://repository.apache.org/content/repositories/orgapachespark-1150
> >>
> >> The documentation corresponding to this release can be found at:
> >>
> http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
> >>
> >>
> >> ===
> >> How can I help test this release?
> >> ===
> >> If you are a Spark user, you can help us test this release by taking an
> >> existing Spark workload and running on this release candidate, then
> >> reporting any regressions.
> >>
> >> 
> >> What justifies a -1 vote for this release?
> >> 
> >> -1 vote should occur for regressions from Spark 1.5.1. Bugs already
> >> present in 1.5.1 will not block this release.
> >>
> >> ===
> >> What should happen to JIRA tickets still targeting 1.5.2?
> >> ===
> >> Please target 1.5.3 or 1.6.0.
> >>
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
> For additional commands, e-mail: dev-h...@spark.apache.org
>
>


spark-sql / apache-drill / jboss-teiid

2015-10-25 Thread Pranay Tonpay
Hi,
In terms of federated query, has anyone done any evaluation comparing
Spark SQL, Apache Drill, and JBoss Teiid?
I have a very urgent requirement for creating a virtualized layer (sitting
atop several databases) and am evaluating these three as options. Any help
would be appreciated.
I know Spark SQL has the benefit that I can invoke MLlib algorithms on the
data fetched, but apart from that, are there any other considerations?
Drill does not seem to have support for many data sources.

Any inputs?
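For what it's worth, the Spark SQL shape of such a virtualized layer is usually: load each source as a DataFrame, register it as a table, and query across them. The sketch below shows that shape against the 1.5-era API; the URLs, table, and column names are placeholders rather than a tested configuration, and it assumes an existing SparkContext `sc` plus the relevant JDBC drivers on the classpath.

```scala
// Sketch of a federated query in Spark SQL 1.5.x; all connection details are placeholders.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val orders = sqlContext.read.format("jdbc")
  .option("url", "jdbc:postgresql://host1/sales")  // placeholder source #1
  .option("dbtable", "orders")
  .load()

val customers = sqlContext.read.format("jdbc")
  .option("url", "jdbc:mysql://host2/crm")         // placeholder source #2
  .option("dbtable", "customers")
  .load()

orders.registerTempTable("orders")
customers.registerTempTable("customers")

// One SQL statement spanning both backing databases; the result is a DataFrame,
// so it can feed straight into MLlib.
val joined = sqlContext.sql(
  """SELECT c.name, SUM(o.amount) AS total
     FROM orders o JOIN customers c ON o.cust_id = c.id
     GROUP BY c.name""")
```

Note that Spark pulls rows from each source and joins them itself, whereas a dedicated virtualization layer may push more of the query down to the databases.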

thx
pranay


Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Sean Owen
I believe you still need to "clean package" and then "test"
separately. Or did the change to make that unnecessary go in to 1.5?

FWIW I do not see this (my results coming soon)

On Sun, Oct 25, 2015 at 4:28 PM, Ted Yu  wrote:
> When I ran the following command:
> ~/apache-maven-3.3.3/bin/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.4
> -Dhadoop.version=2.6.0 package
>
> I got:
>
> testChildProcLauncher(org.apache.spark.launcher.SparkLauncherSuite)  Time
> elapsed: 0.031 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<0> but was:<1>
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at org.junit.Assert.assertEquals(Assert.java:456)
> at
> org.apache.spark.launcher.SparkLauncherSuite.testChildProcLauncher(SparkLauncherSuite.java:105)
>
> java version "1.7.0_67"
> Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
>
> I checked
> ./launcher/target/surefire-reports/TEST-org.apache.spark.launcher.SparkLauncherSuite.xml
> but didn't get much clue.
>
> FYI
>
> On Sun, Oct 25, 2015 at 12:07 AM, Reynold Xin  wrote:
>>
>> Please vote on releasing the following candidate as Apache Spark version
>> 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and passes if a
>> majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 1.5.2
>> [ ] -1 Do not release this package because ...
>>
>>
>> The release fixes 51 known issues in Spark 1.5.1, listed here:
>> http://s.apache.org/spark-1.5.2
>>
>> The tag to be voted on is v1.5.2-rc1:
>> https://github.com/apache/spark/releases/tag/v1.5.2-rc1
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> - as version 1.5.2-rc1:
>> https://repository.apache.org/content/repositories/orgapachespark-1151
>> - as version 1.5.2:
>> https://repository.apache.org/content/repositories/orgapachespark-1150
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
>>
>>
>> ===
>> How can I help test this release?
>> ===
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload and running on this release candidate, then
>> reporting any regressions.
>>
>> 
>> What justifies a -1 vote for this release?
>> 
>> -1 vote should occur for regressions from Spark 1.5.1. Bugs already
>> present in 1.5.1 will not block this release.
>>
>> ===
>> What should happen to JIRA tickets still targeting 1.5.2?
>> ===
>> Please target 1.5.3 or 1.6.0.
>>
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: Adding support for truncate operator

2015-10-25 Thread Shagun Sodhani
My bad. I did not specify that I meant a truncate operator on a column,
similar to how the other math operators work.
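For readers wondering what a column-level truncate would do: it chops a value toward zero at a given number of decimal places, as opposed to the TRUNCATE TABLE DDL above. As a rough sketch (my own naming and semantics, not an existing Spark SQL expression in 1.5.x), the scalar function would look like:

```scala
// Truncate toward zero at `scale` decimal places.
// Illustrative only: Spark SQL 1.5.x has no built-in expression with these semantics.
def truncate(x: Double, scale: Int = 0): Double = {
  val factor = math.pow(10, scale)
  // Go through BigDecimal so e.g. 3.789 * 100 doesn't pick up binary rounding noise.
  (BigDecimal(x) * BigDecimal(factor)).toLong / factor
}

println(truncate(3.789))     // 3.0
println(truncate(3.789, 2))  // 3.78
println(truncate(-2.9))      // -2.0 (toward zero, unlike floor)
```

In a 1.5-era shell this could then be exposed to SQL as a UDF, e.g. `sqlContext.udf.register("trunc", (x: Double, s: Int) => truncate(x, s))` (hypothetical wiring); a proper built-in expression would of course be preferable, which is presumably the point of the proposed PR.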

On Sun, Oct 25, 2015 at 9:36 PM, Ted Yu  wrote:

> Have you seen the following ?
> [SPARK-3907][SQL] Add truncate table support
>
> Cheers
>
> On Sun, Oct 25, 2015 at 9:01 AM, Shagun Sodhani 
> wrote:
>
>> Hi! I noticed that SparkSQL does not support truncate operator as of now.
>> Can we add it? I am willing to send over a PR for it
>>
>
>


Re: [VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Ted Yu
When I ran the following command:
~/apache-maven-3.3.3/bin/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.4
-Dhadoop.version=2.6.0 package

I got:

testChildProcLauncher(org.apache.spark.launcher.SparkLauncherSuite)  Time
elapsed: 0.031 sec  <<< FAILURE!
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at
org.apache.spark.launcher.SparkLauncherSuite.testChildProcLauncher(SparkLauncherSuite.java:105)

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

I checked
./launcher/target/surefire-reports/TEST-org.apache.spark.launcher.SparkLauncherSuite.xml
but didn't get much clue.

FYI

On Sun, Oct 25, 2015 at 12:07 AM, Reynold Xin  wrote:

> Please vote on releasing the following candidate as Apache Spark
> version 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.5.2
> [ ] -1 Do not release this package because ...
>
>
> The release fixes 51 known issues in Spark 1.5.1, listed here:
> http://s.apache.org/spark-1.5.2
>
> The tag to be voted on is v1.5.2-rc1:
> https://github.com/apache/spark/releases/tag/v1.5.2-rc1
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> - as version 1.5.2-rc1:
> https://repository.apache.org/content/repositories/orgapachespark-1151
> - as version 1.5.2:
> https://repository.apache.org/content/repositories/orgapachespark-1150
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/
>
>
> ===
> How can I help test this release?
> ===
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions.
>
> 
> What justifies a -1 vote for this release?
> 
> -1 vote should occur for regressions from Spark 1.5.1. Bugs already
> present in 1.5.1 will not block this release.
>
> ===
> What should happen to JIRA tickets still targeting 1.5.2?
> ===
> Please target 1.5.3 or 1.6.0.
>
>
>


Re: Adding support for truncate operator

2015-10-25 Thread Ted Yu
Have you seen the following ?
[SPARK-3907][SQL] Add truncate table support

Cheers

On Sun, Oct 25, 2015 at 9:01 AM, Shagun Sodhani 
wrote:

> Hi! I noticed that SparkSQL does not support truncate operator as of now.
> Can we add it? I am willing to send over a PR for it
>


Adding support for truncate operator

2015-10-25 Thread Shagun Sodhani
Hi! I noticed that SparkSQL does not support truncate operator as of now.
Can we add it? I am willing to send over a PR for it


[VOTE] Release Apache Spark 1.5.2 (RC1)

2015-10-25 Thread Reynold Xin
Please vote on releasing the following candidate as Apache Spark
version 1.5.2. The vote is open until Wed Oct 28, 2015 at 08:00 UTC and
passes if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 1.5.2
[ ] -1 Do not release this package because ...


The release fixes 51 known issues in Spark 1.5.1, listed here:
http://s.apache.org/spark-1.5.2

The tag to be voted on is v1.5.2-rc1:
https://github.com/apache/spark/releases/tag/v1.5.2-rc1

The release files, including signatures, digests, etc. can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-1.5.2-rc1-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
- as version 1.5.2-rc1:
https://repository.apache.org/content/repositories/orgapachespark-1151
- as version 1.5.2:
https://repository.apache.org/content/repositories/orgapachespark-1150

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-v1.5.2-rc1-docs/


===
How can I help test this release?
===
If you are a Spark user, you can help us test this release by taking an
existing Spark workload and running on this release candidate, then
reporting any regressions.


What justifies a -1 vote for this release?

-1 vote should occur for regressions from Spark 1.5.1. Bugs already present
in 1.5.1 will not block this release.

===
What should happen to JIRA tickets still targeting 1.5.2?
===
Please target 1.5.3 or 1.6.0.