FWIW, I tested the first RC and saw no regressions. I ran our benchmarks
built against Spark 1.3 and saw results consistent with Spark 1.2/1.2.1.

On 2/25/15, 5:51 PM, "Patrick Wendell" <pwend...@gmail.com> wrote:

>Hey All,
>
>Just a quick update on this thread. Issues have continued to trickle
>in. Not all of them are blocker level but enough to warrant another
>RC:
>
>I've been keeping the JIRA dashboard up and running with the latest
>status (sorry, long link):
>https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20%22Target%20Version%2Fs%22%20%3D%201.3.0%20AND%20(fixVersion%20IS%20EMPTY%20OR%20fixVersion%20!%3D%201.3.0)%20AND%20(Resolution%20IS%20EMPTY%20OR%20Resolution%20IN%20(Done%2C%20Fixed%2C%20Implemented))%20ORDER%20BY%20priority%2C%20component
>
>Once these are in, I will cut another RC. Thanks everyone for the
>continued voting!
>
>- Patrick
>
>On Mon, Feb 23, 2015 at 10:52 PM, Tathagata Das
><tathagata.das1...@gmail.com> wrote:
>> Hey all,
>>
>> I found a major issue where JobProgressListener (a listener used to keep
>> track of jobs for the web UI) never forgets stages in one of its data
>> structures. This is a blocker for long-running applications.
>> 
>>https://issues.apache.org/jira/browse/SPARK-5967
>>
>> I am testing a fix for this right now.
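[For anyone curious what this class of leak looks like, here is a minimal, hypothetical sketch in Python; it is not Spark's actual JobProgressListener code. It shows a listener whose per-stage map is kept bounded by evicting the oldest completed stages, similar in spirit to Spark's `spark.ui.retainedStages` limit; omitting the eviction loop gives exactly the unbounded growth described above.]

```python
from collections import OrderedDict

class BoundedStageTracker:
    """Illustrative sketch only (not Spark's JobProgressListener):
    keeps at most `retained_stages` completed stages, evicting the
    oldest, so memory stays bounded in long-running applications."""

    def __init__(self, retained_stages):
        self.retained_stages = retained_stages
        self.stage_data = OrderedDict()  # stage_id -> per-stage info

    def on_stage_completed(self, stage_id, info):
        self.stage_data[stage_id] = info
        # Without this eviction loop the map grows without bound --
        # the failure mode described in the message above.
        while len(self.stage_data) > self.retained_stages:
            self.stage_data.popitem(last=False)  # drop oldest entry

tracker = BoundedStageTracker(retained_stages=3)
for i in range(10):
    tracker.on_stage_completed(i, {"name": "stage-%d" % i})
print(sorted(tracker.stage_data))  # only the 3 most recent stages remain
```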
>>
>> TD
>>
>> On Mon, Feb 23, 2015 at 7:23 PM, Soumitra Kumar
>><kumar.soumi...@gmail.com>
>> wrote:
>>
>>> +1 (non-binding)
>>>
>>> For: 
>>>https://issues.apache.org/jira/browse/SPARK-3660
>>>
>>> . Docs OK
>>> . Example code is good
>>>
>>> -Soumitra.
>>>
>>>
>>> On Mon, Feb 23, 2015 at 10:33 AM, Marcelo Vanzin <van...@cloudera.com>
>>> wrote:
>>>
>>> > Hi Tom, are you using an sbt-built assembly by any chance? If so, take
>>> > a look at SPARK-5808.
>>> >
>>> > I haven't had any problems with the maven-built assembly. Setting
>>> > SPARK_HOME on the executors is a workaround if you want to use the sbt
>>> > assembly.
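[A sketch of what that workaround could look like on the command line. The install path and script name here are made up for illustration; `spark.executorEnv.[Name]` is Spark's standard mechanism for setting an environment variable on executors.]

```shell
# Workaround sketch for the sbt-assembly issue (SPARK-5808): set SPARK_HOME
# in the executors' environment. /opt/spark and wordcount.py are
# illustrative values, not taken from the original report.
spark-submit \
  --master yarn-client \
  --conf spark.executorEnv.SPARK_HOME=/opt/spark \
  wordcount.py
```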
>>> >
>>> > On Fri, Feb 20, 2015 at 2:56 PM, Tom Graves
>>> > <tgraves...@yahoo.com.invalid> wrote:
>>> > > Trying to run pyspark on yarn in client mode with the basic wordcount
>>> > > example, I see the following error when doing the collect:
>>> > > Error from python worker:
>>> > >   /usr/bin/python: No module named sql
>>> > > PYTHONPATH was:
>>> > >   /grid/3/tmp/yarn-local/usercache/tgraves/filecache/20/spark-assembly-1.3.0-hadoop2.6.0.1.1411101121.jar
>>> > > java.io.EOFException
>>> > >       at java.io.DataInputStream.readInt(DataInputStream.java:392)
>>> > >       at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:163)
>>> > >       at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:86)
>>> > >       at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
>>> > >       at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:105)
>>> > >       at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:69)
>>> > >       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>> > >       at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>> > >       at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:308)
>>> > >       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
>>> > >       at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
>>> > >       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
>>> > >       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>>> > >       at org.apache.spark.scheduler.Task.run(Task.scala:64)
>>> > >       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:197)
>>> > >       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> > >       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> > >       at java.lang.Thread.run(Thread.java:722)
>>> > > any ideas on this?
>>> > > Tom
>>> > >
>>> > >      On Wednesday, February 18, 2015 2:14 AM, Patrick Wendell <
>>> > pwend...@gmail.com> wrote:
>>> > >
>>> > >
>>> > >  Please vote on releasing the following candidate as Apache Spark
>>> > version 1.3.0!
>>> > >
>>> > > The tag to be voted on is v1.3.0-rc1 (commit f97b0d4a):
>>> > > https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=f97b0d4a6b26504916816d7aefcf3132cd1da6c2
>>> > >
>>> > > The release files, including signatures, digests, etc. can be found at:
>>> > > http://people.apache.org/~pwendell/spark-1.3.0-rc1/
>>> > >
>>> > > Release artifacts are signed with the following key:
>>> > > https://people.apache.org/keys/committer/pwendell.asc
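[For voters who want to check signatures, one plausible sequence of commands. The artifact file name `spark-1.3.0.tgz` is assumed from the usual rc directory layout, not stated in the original mail, and these staging URLs were only live during the vote.]

```shell
# Sketch of signature verification for a release candidate; the exact
# artifact name is an assumption, not taken from the original mail.
curl -O https://people.apache.org/keys/committer/pwendell.asc
gpg --import pwendell.asc
curl -O http://people.apache.org/~pwendell/spark-1.3.0-rc1/spark-1.3.0.tgz
curl -O http://people.apache.org/~pwendell/spark-1.3.0-rc1/spark-1.3.0.tgz.asc
gpg --verify spark-1.3.0.tgz.asc spark-1.3.0.tgz
```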
>>> > >
>>> > > The staging repository for this release can be found at:
>>> > > https://repository.apache.org/content/repositories/orgapachespark-1069/
>>> > >
>>> > > The documentation corresponding to this release can be found at:
>>> > > http://people.apache.org/~pwendell/spark-1.3.0-rc1-docs/
>>> > >
>>> > > Please vote on releasing this package as Apache Spark 1.3.0!
>>> > >
>>> > > The vote is open until Saturday, February 21, at 08:03 UTC and passes
>>> > > if a majority of at least 3 +1 PMC votes are cast.
>>> > >
>>> > > [ ] +1 Release this package as Apache Spark 1.3.0
>>> > > [ ] -1 Do not release this package because ...
>>> > >
>>> > > To learn more about Apache Spark, please see
>>> > > http://spark.apache.org/
>>> > >
>>> > > == How can I help test this release? ==
>>> > > If you are a Spark user, you can help us test this release by
>>> > > taking a Spark 1.2 workload and running it on this release candidate,
>>> > > then reporting any regressions.
>>> > >
>>> > > == What justifies a -1 vote for this release? ==
>>> > > This vote is happening towards the end of the 1.3 QA period,
>>> > > so -1 votes should only occur for significant regressions from 1.2.1.
>>> > > Bugs already present in 1.2.X, minor regressions, or bugs related
>>> > > to new features will not block this release.
>>> > >
>>> > > - Patrick
>>> > >
>>> > > 
>>>---------------------------------------------------------------------
>>> > > To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
>>> > > For additional commands, e-mail: dev-h...@spark.apache.org
>>> > >
>>> > >
>>> > >
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> > Marcelo
>>> >
>>> >
>>> >
>>>
>
>
