Ok, and with a bit more digging: between RC2 and RC3 we apparently switched
which JVM we are building the docs with.

The relevant side-by-side diff of the build logs (RC3 build #60 vs. RC2 build #59):
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/consoleFull
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/59/consoleFull

HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3    | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                              | Checked out Spark git hash 02b165d
Building Spark docs                                               Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc     Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                     Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /     Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.               | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.         Note, this will be overridden by -java-home if it is set.

There have been some known issues with building the docs with JDK 8. I
believe the fixes for those are in mainline and we could cherry-pick them
into branch-2.1, but I think it might be more reasonable to just build the
2.1 docs with JDK 7.
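
If we go that route, something like this on the doc-build worker should work
(a sketch only -- the JDK 7 path below is the one from the RC2 log and I'm
assuming it is still installed on the worker):

  # pin the doc build to the JDK 7 install that RC2 used
  JAVA_HOME=/usr/java/jdk1.7.0_79 build/sbt -Pkinesis-asl clean compile unidoc

  # or pass it explicitly, since the launcher notes -java-home overrides the default
  build/sbt -java-home /usr/java/jdk1.7.0_79 -Pkinesis-asl clean compile unidoc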

What do people think?


On Fri, Apr 14, 2017 at 4:53 PM, Holden Karau <hol...@pigscanfly.ca> wrote:

> At first glance the error seems similar to one Pedro Rodriguez ran into
> during 2.0, so I'm looping Pedro in in case they have any insight into
> what the cause was last time.
>
> On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <hol...@pigscanfly.ca>
> wrote:
>
>> Sure, let me dig into it :)
>>
>> On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <mich...@databricks.com> wrote:
>>
>>> Have time to figure out why the doc build failed?
>>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console
>>>
>>> On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <hol...@pigscanfly.ca>
>>> wrote:
>>>
>>>> If it would help I'd be more than happy to look at kicking off the
>>>> packaging for RC3, since I've been poking around in Jenkins a bit (for
>>>> SPARK-20216 & friends). I'd still probably need some guidance from a
>>>> previous release coordinator, so I understand if that's not actually faster.
>>>>
>>>> On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <dbt...@dbtsai.com> wrote:
>>>>
>>>>> I backported the fix into both branch-2.1 and branch-2.0. Thanks.
>>>>>
>>>>> Sincerely,
>>>>>
>>>>> DB Tsai
>>>>> ----------------------------------------------------------
>>>>> Web: https://www.dbtsai.com
>>>>> PGP Key ID: 0x5CED8B896A6BDFA0
>>>>>
>>>>>
>>>>> On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <rb...@netflix.com> wrote:
>>>>> > DB,
>>>>> >
>>>>> > This vote already failed and there isn't an RC3 vote yet. If you
>>>>> > backport the changes to branch-2.1 they will make it into the next RC.
>>>>> >
>>>>> > rb
>>>>> >
>>>>> > On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <dbt...@dbtsai.com> wrote:
>>>>> >>
>>>>> >> -1
>>>>> >>
>>>>> >> I think that back-porting SPARK-20270 and SPARK-18555 is very
>>>>> >> important, since it's a critical bug: na.fill will corrupt the data
>>>>> >> in Long columns even when the data isn't null.
>>>>> >>
>>>>> >> Thanks.
>>>>> >>
>>>>> >>
>>>>> >> Sincerely,
>>>>> >>
>>>>> >> DB Tsai
>>>>> >> ----------------------------------------------------------
>>>>> >> Web: https://www.dbtsai.com
>>>>> >> PGP Key ID: 0x5CED8B896A6BDFA0
>>>>> >>
>>>>> >> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <hol...@pigscanfly.ca> wrote:
>>>>> >>>
>>>>> >>> Following up, the issues with missing pypandoc/pandoc on the
>>>>> >>> packaging machine have been resolved.
>>>>> >>>
>>>>> >>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <hol...@pigscanfly.ca> wrote:
>>>>> >>>>
>>>>> >>>> See SPARK-20216; if Michael can let me know which machine is
>>>>> >>>> being used for packaging, I can see about installing pandoc on it
>>>>> >>>> (it should be simple, but I know the Jenkins cluster is a bit on
>>>>> >>>> the older side).
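>>>>> >>>>
>>>>> >>>> Roughly what I'd run on that worker, just as a sketch -- I'm
>>>>> >>>> assuming it's a yum-based box with EPEL, so the package name and
>>>>> >>>> pip invocation may differ on the older images:
>>>>> >>>>
>>>>> >>>>   # pandoc executable, used to convert README.md for the package metadata
>>>>> >>>>   sudo yum install -y pandoc
>>>>> >>>>   # pypandoc, the Python wrapper the packaging step appears to rely on
>>>>> >>>>   pip install --user pypandoc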
>>>>> >>>>
>>>>> >>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <hol...@pigscanfly.ca> wrote:
>>>>> >>>>>
>>>>> >>>>> So the fix is installing pandoc on whichever machine is used for
>>>>> >>>>> packaging. I thought that was generally done on the machine of the
>>>>> >>>>> person rolling the release, so I wasn't sure it made sense as a
>>>>> >>>>> JIRA, but from chatting with Josh it sounds like that part might
>>>>> >>>>> be on one of the Jenkins workers - is there a fixed one that is used?
>>>>> >>>>>
>>>>> >>>>> Regardless, I'll file a JIRA for this when I get back in front of
>>>>> >>>>> my desktop (~1 hour or so).
>>>>> >>>>>
>>>>> >>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>>>> >>>>> <mich...@databricks.com> wrote:
>>>>> >>>>>>
>>>>> >>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>>>> >>>>>> think we should proceed:
>>>>> >>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>> >>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA
>>>>> >>>>>> and report if this is a regression and if there is an easy fix
>>>>> >>>>>> that we should wait for.
>>>>> >>>>>>
>>>>> >>>>>> For all the other test failures, please take the time to look
>>>>> >>>>>> through JIRA and open an issue if one does not already exist so
>>>>> >>>>>> that we can triage if these are just environmental issues.  If I
>>>>> >>>>>> don't hear any objections I'm going to go ahead with RC3 tomorrow.
>>>>> >>>>>>
>>>>> >>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>>>> >>>>>> <felixcheun...@hotmail.com> wrote:
>>>>> >>>>>>>
>>>>> >>>>>>> -1
>>>>> >>>>>>> sorry, found an issue with SparkR CRAN check.
>>>>> >>>>>>> Opened SPARK-20197 and working on fix.
>>>>> >>>>>>>
>>>>> >>>>>>> ________________________________
>>>>> >>>>>>> From: holden.ka...@gmail.com <holden.ka...@gmail.com> on behalf
>>>>> >>>>>>> of Holden Karau <hol...@pigscanfly.ca>
>>>>> >>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>>>> >>>>>>> To: Xiao Li
>>>>> >>>>>>> Cc: Michael Armbrust; dev@spark.apache.org
>>>>> >>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>> >>>>>>>
>>>>> >>>>>>> -1 (non-binding)
>>>>> >>>>>>>
>>>>> >>>>>>> Python packaging doesn't seem to have quite worked out (looking
>>>>> >>>>>>> at PKG-INFO the description is "Description: !!!!! missing pandoc
>>>>> >>>>>>> do not upload to PyPI !!!!"); ideally it would be nice to have
>>>>> >>>>>>> this be a version we can upload to PyPI.
>>>>> >>>>>>> Building this on my own machine results in a longer description.
>>>>> >>>>>>>
>>>>> >>>>>>> My guess is that whichever machine was used to package this is
>>>>> >>>>>>> missing the pandoc executable (or possibly the pypandoc library).
>>>>> >>>>>>>
>>>>> >>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <gatorsm...@gmail.com> wrote:
>>>>> >>>>>>>>
>>>>> >>>>>>>> +1
>>>>> >>>>>>>>
>>>>> >>>>>>>> Xiao
>>>>> >>>>>>>>
>>>>> >>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>>>> >>>>>>>> <mich...@databricks.com>:
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> Please vote on releasing the following candidate as Apache
>>>>> >>>>>>>>> Spark version 2.1.1. The vote is open until Sun, April 2nd,
>>>>> >>>>>>>>> 2017 at 16:30 PST and passes if a majority of at least 3 +1
>>>>> >>>>>>>>> PMC votes are cast.
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>> >>>>>>>>> [ ] -1 Do not release this package because ...
>>>>> >>>>>>>>>
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> To learn more about Apache Spark, please see
>>>>> >>>>>>>>> http://spark.apache.org/
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>>>> >>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> The release files, including signatures, digests, etc. can be
>>>>> >>>>>>>>> found at:
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> Release artifacts are signed with the following key:
>>>>> >>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> The staging repository for this release can be found at:
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> The documentation corresponding to this release can be found at:
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>> >>>>>>>>>
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> FAQ
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> How can I help test this release?
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> If you are a Spark user, you can help us test this release by
>>>>> >>>>>>>>> taking an existing Spark workload and running it on this
>>>>> >>>>>>>>> release candidate, then reporting any regressions.
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> Committers should look at those and triage. Extremely important
>>>>> >>>>>>>>> bug fixes, documentation, and API tweaks that impact
>>>>> >>>>>>>>> compatibility should be worked on immediately. Everything else
>>>>> >>>>>>>>> please retarget to 2.1.2 or 2.2.0.
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> But my bug isn't fixed!??!
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> In order to make timely releases, we will typically not hold
>>>>> >>>>>>>>> the release unless the bug in question is a regression from 2.1.0.
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> What happened to RC1?
>>>>> >>>>>>>>>
>>>>> >>>>>>>>> There were issues with the release packaging and as a result
>>>>> >>>>>>>>> it was skipped.
>>>>> >>>>>>>>
>>>>> >>>>>>>>
>>>>> >>>>>>>
>>>>> >>>>>>>
>>>>> >>>>>>>
>>>>> >>>>>>> --
>>>>> >>>>>>> Cell : 425-233-8271
>>>>> >>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>> >>>>>>
>>>>> >>>>>>
>>>>> >>>>> --
>>>>> >>>>> Cell : 425-233-8271
>>>>> >>>>> Twitter: https://twitter.com/holdenkarau
>>>>> >>>>
>>>>> >>>>
>>>>> >>>>
>>>>> >>>>
>>>>> >>>> --
>>>>> >>>> Cell : 425-233-8271
>>>>> >>>> Twitter: https://twitter.com/holdenkarau
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> >>>
>>>>> >>> --
>>>>> >>> Cell : 425-233-8271
>>>>> >>> Twitter: https://twitter.com/holdenkarau
>>>>> >>
>>>>> >>
>>>>> >
>>>>> >
>>>>> >
>>>>> > --
>>>>> > Ryan Blue
>>>>> > Software Engineer
>>>>> > Netflix
>>>>>
>>>>> ---------------------------------------------------------------------
>>>>> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>
>>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau
