It's not that you're starting 2.1 per se, but that you're committing
things that are not in 2.0. Releases are never made from master in
moderately complex projects. It has nothing to do with the pace of
release.
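
To make the distinction concrete, here is a minimal, purely illustrative
sketch of the rule (nothing below is real Spark tooling; the object name,
commit ids and version strings are all hypothetical): a commit merged only
to master can first show up in the next minor release, while only commits
also back-ported to branch-2.0 are eligible for a 2.0.x release.

// Toy model of the branch-based release rule, not actual Spark tooling.
object BranchModel {
  final case class Commit(id: String, mergedTo: Set[String])

  // Earliest release line that can contain the commit under this model.
  def earliestFixVersion(c: Commit): String =
    if (c.mergedTo.contains("branch-2.0")) "2.0.x" // back-ported => 2.0 line
    else "2.1.0"                                   // master-only => next minor

  def main(args: Array[String]): Unit = {
    val masterOnly = Commit("abc1234", Set("master"))
    val backported = Commit("def5678", Set("master", "branch-2.0"))
    println(s"${masterOnly.id} -> ${earliestFixVersion(masterOnly)}") // 2.1.0
    println(s"${backported.id} -> ${earliestFixVersion(backported)}") // 2.0.x
  }
}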

On Sun, Jul 3, 2016 at 1:24 PM, Jacek Laskowski <ja...@japila.pl> wrote:
> Hi,
>
> Why would I need to start 2.1? If it's ready for master, why couldn't it be
> part of 2.0? "Release early and often" is what would benefit Spark a lot.
> The time to ship 2.0 is far too long, I think. And I know companies that
> won't use 2.0 because... it's a "0" version :-(
>
> Jacek
>
> On 3 Jul 2016 2:59 a.m., "Reynold Xin" <r...@databricks.com> wrote:
>>
>> Because in that case you cannot merge anything meant for 2.1 until 2.0 is
>> released.
>>
>> On Saturday, July 2, 2016, Jacek Laskowski <ja...@japila.pl> wrote:
>>>
>>> Hi,
>>>
>>> Always release from master. What could be the gotchas?
>>>
>>> Pozdrawiam,
>>> Jacek Laskowski
>>> ----
>>> https://medium.com/@jaceklaskowski/
>>> Mastering Apache Spark http://bit.ly/mastering-apache-spark
>>> Follow me at https://twitter.com/jaceklaskowski
>>>
>>>
>>> On Sat, Jul 2, 2016 at 11:36 PM, Sean Owen <so...@cloudera.com> wrote:
>>> > I am not sure any other process makes sense. What are you suggesting
>>> > should
>>> > happen?
>>> >
>>> >
>>> > On Sat, Jul 2, 2016, 22:27 Jacek Laskowski <ja...@japila.pl> wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> Thanks Sean! It makes sense.
>>> >>
>>> >> I'm not fully convinced that's how it should be, so I apologize in
>>> >> advance if I ever ask about version management in Spark again :)
>>> >>
>>> >> Pozdrawiam,
>>> >> Jacek Laskowski
>>> >> ----
>>> >> https://medium.com/@jaceklaskowski/
>>> >> Mastering Apache Spark http://bit.ly/mastering-apache-spark
>>> >> Follow me at https://twitter.com/jaceklaskowski
>>> >>
>>> >>
>>> >> On Sat, Jul 2, 2016 at 11:19 PM, Sean Owen <so...@cloudera.com> wrote:
>>> >> > Because a 2.0.0 release candidate is out. If for some reason the
>>> >> > release candidate becomes the 2.0.0 release, then anything merged to
>>> >> > branch-2.0 after it is necessarily fixed in 2.0.1 at best. At this
>>> >> > stage we know RC1 will not be 2.0.0, so really that vote should
>>> >> > be
>>> >> > formally cancelled. Then we just mark anything fixed for 2.0.1 as
>>> >> > fixed for 2.0.0 and make another RC.
>>> >> >
>>> >> > master is not what will be released as 2.0.0. branch-2.0 is what
>>> >> > will
>>> >> > contain that release.
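
As a purely hypothetical sketch of the bookkeeping described above (not any
actual release tooling; the helper names and the second issue key are made
up), the point is just that a rejected RC lets issues provisionally marked
as fixed in 2.0.1 be re-marked as fixed in 2.0.0, because the next RC cut
from branch-2.0 will contain them:

// Illustrative only: re-target 2.0.1 fix versions after a rejected RC.
object RcFixVersions {
  final case class Issue(key: String, fixVersion: String)

  def onRcRejected(issues: Seq[Issue]): Seq[Issue] =
    issues.map { i =>
      if (i.fixVersion == "2.0.1") i.copy(fixVersion = "2.0.0") else i
    }

  def main(args: Array[String]): Unit = {
    // SPARK-16345 is the issue quoted below; SPARK-99999 is made up.
    val before = Seq(Issue("SPARK-16345", "2.0.1"), Issue("SPARK-99999", "2.1.0"))
    onRcRejected(before).foreach(i => println(s"${i.key}: ${i.fixVersion}"))
    // SPARK-16345: 2.0.0  (rolled into the next 2.0.0 RC)
    // SPARK-99999: 2.1.0  (master-only work is unaffected)
  }
}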
>>> >> >
>>> >> > On Sat, Jul 2, 2016 at 10:11 PM, Jacek Laskowski <ja...@japila.pl>
>>> >> > wrote:
>>> >> >> Hi Sean, devs,
>>> >> >>
>>> >> >> How is it possible that Fix Version/s is 2.0.1 given 2.0.0 has not
>>> >> >> been released yet? Why isn't master what's going to be released, so
>>> >> >> that it eventually becomes 2.0.0? I don't get it. Appreciate any
>>> >> >> guidance. Thanks.
>>> >> >>
>>> >> >> Pozdrawiam,
>>> >> >> Jacek Laskowski
>>> >> >> ----
>>> >> >> https://medium.com/@jaceklaskowski/
>>> >> >> Mastering Apache Spark http://bit.ly/mastering-apache-spark
>>> >> >> Follow me at https://twitter.com/jaceklaskowski
>>> >> >>
>>> >> >>
>>> >> >> On Sat, Jul 2, 2016 at 5:30 PM, Sean Owen (JIRA) <j...@apache.org>
>>> >> >> wrote:
>>> >> >>>
>>> >> >>>      [
>>> >> >>>
>>> >> >>> https://issues.apache.org/jira/browse/SPARK-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>>> >> >>> ]
>>> >> >>>
>>> >> >>> Sean Owen resolved SPARK-16345.
>>> >> >>> -------------------------------
>>> >> >>>        Resolution: Fixed
>>> >> >>>     Fix Version/s: 2.0.1
>>> >> >>>
>>> >> >>> Issue resolved by pull request 14015
>>> >> >>> [https://github.com/apache/spark/pull/14015]
>>> >> >>>
>>> >> >>>> Extract GraphX programming guide example snippets from source
>>> >> >>>> files instead of hard-coding them
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> ---------------------------------------------------------------------------------------------
>>> >> >>>>
>>> >> >>>>                 Key: SPARK-16345
>>> >> >>>>                 URL:
>>> >> >>>> https://issues.apache.org/jira/browse/SPARK-16345
>>> >> >>>>             Project: Spark
>>> >> >>>>          Issue Type: Improvement
>>> >> >>>>          Components: Documentation, Examples, GraphX
>>> >> >>>>    Affects Versions: 2.0.0
>>> >> >>>>            Reporter: Weichen Xu
>>> >> >>>>             Fix For: 2.0.1
>>> >> >>>>
>>> >> >>>>
>>> >> >>>> Currently, all example snippets in the GraphX programming guide
>>> >> >>>> are hard-coded, which makes them hard to update and verify. In
>>> >> >>>> contrast, the ML documentation pages use the include_example
>>> >> >>>> Jekyll plugin to extract snippets from actual source files under
>>> >> >>>> the examples sub-project. This guarantees that the Java and Scala
>>> >> >>>> code is compilable, and it is much easier to verify these example
>>> >> >>>> snippets since they are part of complete Spark applications.
>>> >> >>>> A similar task is SPARK-11381.
>>> >> >>>
>>> >> >>>
>>> >> >>>
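
For context, the include_example mechanism the description refers to works
roughly like this (the file below only illustrates the pattern; it is not
the actual change from the PR, and its path and contents are assumptions):
a guide page pulls a marked region out of a compiled example with a Liquid
tag such as
{% include_example scala/org/apache/spark/examples/graphx/ConnectedComponentsExample.scala %}
and the example source brackets the snippet with marker comments:

// Hypothetical example file under the examples sub-project; only the code
// between the $example on$ / $example off$ markers is copied into the guide.
package org.apache.spark.examples.graphx

import org.apache.spark.graphx.GraphLoader
import org.apache.spark.sql.SparkSession

object ConnectedComponentsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ConnectedComponentsExample").getOrCreate()
    val sc = spark.sparkContext

    // $example on$
    // Load the edge list and compute the connected component of each vertex.
    val graph = GraphLoader.edgeListFile(sc, "data/graphx/followers.txt")
    val cc = graph.connectedComponents().vertices
    println(cc.collect().mkString("\n"))
    // $example off$

    spark.stop()
  }
}

Because the file is part of the examples sub-project, it is compiled with the
rest of the build, so the snippet shown in the guide cannot silently go stale.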
>

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
