If it would help, I'd be more than happy to look at kicking off the
packaging for RC3, since I've been poking around in Jenkins a bit (for
SPARK-20216 & friends). I'd still probably need some guidance from a
previous release coordinator, so I understand if that's not actually faster.

On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <dbt...@dbtsai.com> wrote:

> I backported the fix into both branch-2.1 and branch-2.0. Thanks.
>
> Sincerely,
>
> DB Tsai
> ----------------------------------------------------------
> Web: https://www.dbtsai.com
> PGP Key ID: 0x5CED8B896A6BDFA0
>
>
> On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <rb...@netflix.com> wrote:
> > DB,
> >
> > This vote already failed and there isn't an RC3 vote yet. If you
> > backport the changes to branch-2.1, they will make it into the next RC.
> >
> > rb
> >
> > On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <dbt...@dbtsai.com> wrote:
> >>
> >> -1
> >>
> >> I think that back-porting SPARK-20270 and SPARK-18555 is very important,
> >> since there is a critical bug where na.fill will corrupt data in Long
> >> columns even when the data isn't null.
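> >>
> >> A rough PySpark sketch of the kind of call affected (the column names
> >> and values are illustrative only, not taken from the JIRAs):
> >>
> >>   # Minimal sketch, assuming a local SparkSession; the large value is
> >>   # just an example of a long that does not survive a double round-trip.
> >>   from pyspark.sql import SparkSession
> >>
> >>   spark = SparkSession.builder.getOrCreate()
> >>   df = spark.createDataFrame(
> >>       [(9123146099426677101, None), (1, 2)],
> >>       ["big_long", "other"],
> >>   )
> >>   # Filling nulls should leave the non-null long untouched; the reported
> >>   # bug was that large Long values could come back changed.
> >>   df.na.fill(0).show(truncate=False)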
> >>
> >> Thanks.
> >>
> >>
> >> Sincerely,
> >>
> >> DB Tsai
> >> ----------------------------------------------------------
> >> Web: https://www.dbtsai.com
> >> PGP Key ID: 0x5CED8B896A6BDFA0
> >>
> >> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <hol...@pigscanfly.ca>
> >> wrote:
> >>>
> >>> Following up, the issues with missing pypandoc/pandoc on the packaging
> >>> machine have been resolved.
> >>>
> >>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <hol...@pigscanfly.ca>
> >>> wrote:
> >>>>
> >>>> See SPARK-20216. If Michael can let me know which machine is being
> >>>> used for packaging, I can see if I can install pandoc on it (it should
> >>>> be simple, but I know the Jenkins cluster is a bit on the older side).
> >>>>
> >>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <hol...@pigscanfly.ca>
> >>>> wrote:
> >>>>>
> >>>>> So the fix is installing pandoc on whichever machine is used for
> >>>>> packaging. I thought that was generally done on the machine of the
> >>>>> person rolling the release, so I wasn't sure it made sense as a JIRA,
> >>>>> but from chatting with Josh it sounds like that part might be done on
> >>>>> one of the Jenkins workers - is there a fixed one that is used?
> >>>>>
> >>>>> Regardless I'll file a JIRA for this when I get back in front of my
> >>>>> desktop (~1 hour or so).
> >>>>>
> >>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
> >>>>> <mich...@databricks.com> wrote:
> >>>>>>
> >>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
> >>>>>> think we should proceed:
> >>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
> >>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
> >>>>>> report if this is a regression and if there is an easy fix that we
> >>>>>> should wait for.
> >>>>>>
> >>>>>> For all the other test failures, please take the time to look through
> >>>>>> JIRA and open an issue if one does not already exist so that we can
> >>>>>> triage if these are just environmental issues.  If I don't hear any
> >>>>>> objections I'm going to go ahead with RC3 tomorrow.
> >>>>>>
> >>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
> >>>>>> <felixcheun...@hotmail.com> wrote:
> >>>>>>>
> >>>>>>> -1
> >>>>>>> sorry, found an issue with SparkR CRAN check.
> >>>>>>> Opened SPARK-20197 and working on fix.
> >>>>>>>
> >>>>>>> ________________________________
> >>>>>>> From: holden.ka...@gmail.com <holden.ka...@gmail.com> on behalf of
> >>>>>>> Holden Karau <hol...@pigscanfly.ca>
> >>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
> >>>>>>> To: Xiao Li
> >>>>>>> Cc: Michael Armbrust; dev@spark.apache.org
> >>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
> >>>>>>>
> >>>>>>> -1 (non-binding)
> >>>>>>>
> >>>>>>> Python packaging doesn't seem to have quite worked out: looking at
> >>>>>>> PKG-INFO, the description is "Description: !!!!! missing pandoc do
> >>>>>>> not upload to PyPI !!!!". Ideally this would be a version we could
> >>>>>>> upload to PyPI. Building this on my own machine results in a longer
> >>>>>>> description.
> >>>>>>>
> >>>>>>> My guess is that whichever machine was used to package this is
> >>>>>>> missing the pandoc executable (or possibly pypandoc library).
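> >>>>>>>
> >>>>>>> For context, here is a sketch of the usual pypandoc fallback pattern
> >>>>>>> in a setup.py (illustrative only, not necessarily Spark's exact code):
> >>>>>>>
> >>>>>>>   # If pypandoc or the pandoc binary is missing, ship a placeholder
> >>>>>>>   # long_description instead of the converted README.
> >>>>>>>   try:
> >>>>>>>       import pypandoc
> >>>>>>>       long_description = pypandoc.convert("README.md", "rst")
> >>>>>>>   except (ImportError, OSError):
> >>>>>>>       long_description = "!!!!! missing pandoc do not upload to PyPI !!!!"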
> >>>>>>>
> >>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <gatorsm...@gmail.com>
> >>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> +1
> >>>>>>>>
> >>>>>>>> Xiao
> >>>>>>>>
> >>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
> >>>>>>>> <mich...@databricks.com>:
> >>>>>>>>>
> >>>>>>>>> Please vote on releasing the following candidate as Apache Spark
> >>>>>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at
> >>>>>>>>> 16:30 PST and passes if a majority of at least 3 +1 PMC votes are
> >>>>>>>>> cast.
> >>>>>>>>>
> >>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
> >>>>>>>>> [ ] -1 Do not release this package because ...
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> To learn more about Apache Spark, please see
> >>>>>>>>> http://spark.apache.org/
> >>>>>>>>>
> >>>>>>>>> The tag to be voted on is v2.1.1-rc2
> >>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
> >>>>>>>>>
> >>>>>>>>> List of JIRA tickets resolved can be found with this filter.
> >>>>>>>>>
> >>>>>>>>> The release files, including signatures, digests, etc. can be
> >>>>>>>>> found at:
> >>>>>>>>>
> >>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
> >>>>>>>>>
> >>>>>>>>> Release artifacts are signed with the following key:
> >>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
> >>>>>>>>>
> >>>>>>>>> The staging repository for this release can be found at:
> >>>>>>>>>
> >>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
> >>>>>>>>>
> >>>>>>>>> The documentation corresponding to this release can be found at:
> >>>>>>>>>
> >>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> FAQ
> >>>>>>>>>
> >>>>>>>>> How can I help test this release?
> >>>>>>>>>
> >>>>>>>>> If you are a Spark user, you can help us test this release by
> >>>>>>>>> taking an existing Spark workload, running it on this release
> >>>>>>>>> candidate, and then reporting any regressions.
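> >>>>>>>>>
> >>>>>>>>> For example, a minimal PySpark smoke test run with the RC's
> >>>>>>>>> bin/spark-submit might look like the sketch below (the app name
> >>>>>>>>> and numbers are placeholders; real testing should reuse your own
> >>>>>>>>> workloads):
> >>>>>>>>>
> >>>>>>>>>   from pyspark.sql import SparkSession
> >>>>>>>>>
> >>>>>>>>>   # Tiny sanity check: count and sum over a generated range.
> >>>>>>>>>   spark = SparkSession.builder.appName("rc-smoke-test").getOrCreate()
> >>>>>>>>>   df = spark.range(1000).withColumnRenamed("id", "n")
> >>>>>>>>>   assert df.count() == 1000
> >>>>>>>>>   assert df.agg({"n": "sum"}).collect()[0][0] == 499500
> >>>>>>>>>   spark.stop()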
> >>>>>>>>>
> >>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
> >>>>>>>>>
> >>>>>>>>> Committers should look at those and triage. Extremely important
> >>>>>>>>> bug fixes, documentation, and API tweaks that impact compatibility
> >>>>>>>>> should be worked on immediately. Everything else please retarget
> >>>>>>>>> to 2.1.2 or 2.2.0.
> >>>>>>>>>
> >>>>>>>>> But my bug isn't fixed!??!
> >>>>>>>>>
> >>>>>>>>> In order to make timely releases, we will typically not hold the
> >>>>>>>>> release unless the bug in question is a regression from 2.1.0.
> >>>>>>>>>
> >>>>>>>>> What happened to RC1?
> >>>>>>>>>
> >>>>>>>>> There were issues with the release packaging and as a result it
> >>>>>>>>> was skipped.
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> --
> >>>>>>> Cell : 425-233-8271
> >>>>>>> Twitter: https://twitter.com/holdenkarau
> >>>>>>
> >>>>>>
> >>>>> --
> >>>>> Cell : 425-233-8271
> >>>>> Twitter: https://twitter.com/holdenkarau
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Cell : 425-233-8271
> >>>> Twitter: https://twitter.com/holdenkarau
> >>>
> >>>
> >>>
> >>>
> >>> --
> >>> Cell : 425-233-8271
> >>> Twitter: https://twitter.com/holdenkarau
> >>
> >>
> >
> >
> >
> > --
> > Ryan Blue
> > Software Engineer
> > Netflix
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>


-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau
