Re: [RELEASE] Apache Cassandra 3.1 released

2015-12-10 Thread Kai Wang
Josh,

Thank you very much for the clarification.

On Thu, Dec 10, 2015 at 11:13 AM, Josh McKenzie 
wrote:

> Kai,
>
>
>> The most stable version will be 3.1 because it includes the critical
>> fixes in 3.0.1 and some additional bug fixes
>
> 3.0.1 and 3.1 are identical. This is a unique overlap specific to 3.0.1
> and 3.1.
>
> >> To summarize, the most stable version should be x.Max(2n+1).z.
>
> Going forward, you can expect the following:
> 3.2: new features
> 3.3: stabilization (built on top of 3.2)
> 3.4: new features
> 3.5: stabilization (built on top of 3.4)
>
> And in parallel (for the 3.x major version / tick-tock transition period
> only):
> 3.0.2: bugfixes only
> 3.0.3: bugfixes only
> 3.0.4: bugfixes only
> etc
>
> *Any bugfix that goes into 3.0.X will be in the 3.X line; however, not all
> bugfixes in 3.X will be in 3.0.X* (bugfixes for new features introduced
> in 3.2, 3.4, etc will obviously not be back-ported to 3.0.X).
>
> So, for the 3.x line:
>
>- If you absolutely must have the most stable version of C* and don't
>care at all about the new features introduced in even versions of 3.x, you
>want the 3.0.N release.
>- If you want access to the new features introduced in even release
>versions of 3.x (3.2, 3.4, 3.6), you'll want to run the latest odd version
>(3.3, 3.5, 3.7, etc) after the release containing the feature you want
>access to (so, if the feature's introduced in 3.4 and we haven't dropped
>3.5 yet, obviously you'd need to run 3.4).
>
>
> This is only going to be the case during the transition phase from old
> release cycles to tick-tock. We're targeting changes to CI and quality
> focus going forward to greatly increase the stability of the odd releases
> of major branches (3.1, 3.3, etc) so, for the 4.X releases, our
> recommendation would be to run the highest # odd release for greatest
> stability.
>
> Hope that helps clarify.
>
> On Thu, Dec 10, 2015 at 10:34 AM, Kai Wang  wrote:
>
>> Paulo,
>>
>> Thank you for the examples.
>>
>> So if I go to the download page and see 3.0.1, 3.1 and 3.2, the most stable
>> version will be 3.1 because it includes the critical fixes in 3.0.1 and
>> some additional bug fixes while it doesn't have any new features introduced
>> in 3.2. In that sense 3.0.1 becomes obsolete as soon as 3.1 comes out.
>>
>> To summarize, the most stable version should be x.Max(2n+1).z.
>>
>> Am I correct?
>>
>>
>> On Thu, Dec 10, 2015 at 6:22 AM, Paulo Motta 
>> wrote:
>>
>>> > Will 3.2 contain the bugfixes that are in 3.0.2 as well?
>>>
>>> If the bugfix affects both 3.2 and 3.0.2, yes. Otherwise it will only go
>>> in the affected version.
>>>
>>> > Is 3.x.y just 3.0.x plus new stuff? Where most of the time y is 0,
>>> unless there's a really serious issue that needs fixing?
>>>
>>> You can't really compare 3.0.y with 3.x(.y) because they're two
>>> different versioning schemes.  To make it a bit clearer:
>>>
>>> Old model:
>>> * x.y.z, where:
>>>   * x.y represents the "major" version (eg: 2.1, 2.2)
>>>   * z represents the "minor" version (eg: 2.1.1, 2.2.2)
>>>
>>> New model:
>>> * a.b(.c), where:
>>>   * a represents the "major" version (3, 4, 5)
>>>   * b represents the "minor" version (3.1, 3.2, 4.1, etc), where:
>>> * if b is even, it's a tick release, meaning it can contain both
>>> bugfixes and new features.
>>> * if b is odd, it's a tock release, meaning it can only contain
>>> bugfixes.
>>>   * c is a "subminor" optional version, which will only happen in
>>> emergency situations, for example, if a critical/blocker bug is discovered
>>> before the next release is out. So we probably won't have a 3.1.1, unless a
>>> critical bug is discovered in 3.1 and needs urgent fix before 3.2.
>>>
>>> The 3.0.x series is an interim stabilization release using the old
>>> versioning scheme, and will only receive bug fixes that affect it.
>>>
>>> 2015-12-09 18:21 GMT-08:00 Maciek Sakrejda :
>>>
 I'm still confused, even after reading the blog post twice (and reading
 the linked Intel post). I understand what you are doing conceptually, but
 I'm having a hard time mapping that to actual planned release numbers.

 > The 3.0.2 will only contain bugfixes, while 3.2 will introduce new
 features.



>>>
>>
>


Re: [RELEASE] Apache Cassandra 3.1 released

2015-12-10 Thread Maciek Sakrejda
Thanks, Josh and Paulo--that's much clearer.


Re: Unable to start one Cassandra node: OutOfMemoryError

2015-12-10 Thread Mikhail Strebkov
Jeff, CMS GC didn't help. Thinking about it, I don't see how it can help if
there are 8GB of strongly reachable objects from the GC roots.

Walsh, thanks for your suggestion. I checked the log and there are some
compactions_in_progress entries, but their total size is ~300 MiB as far as I
understand.

Here is the log of the last unsuccessful start:
https://gist.github.com/kluyg/7b9955d34def947f5e0a

On Thu, Dec 10, 2015 at 2:09 AM, Walsh, Stephen 
wrote:

> 8GB is the max recommended for heap size and that’s if you have 32GB or
> more available.
>
>
>
> We use 6GB on our 16GB machines and it's very stable
>
>
>
> The out of memory could be coming from cassandra reloading
> compactions_in_progress into memory; you can check this in the log files
> if need be.
>
> You can safely delete this folder inside the data directory.
>
>
>
> This can happen if you didn’t stop cassandra with a drain command and wait
> for the compactions to finish.
>
> The last time we hit it was due to testing HA, when we force-killed an
> entire cluster.
>
>
>
> Steve
>
>
>
>
>
>
>
> *From:* Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
> *Sent:* 10 December 2015 02:49
> *To:* user@cassandra.apache.org
> *Subject:* Re: Unable to start one Cassandra node: OutOfMemoryError
>
>
>
> 8G is probably too small for a G1 heap. Raise your heap or try CMS instead.
>
>
>
> 71% of your heap is collections – may be a weird data model quirk, but try
> CMS first and see if that behaves better.
>
>
>
>
>
>
>
> *From: *Mikhail Strebkov
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Wednesday, December 9, 2015 at 5:26 PM
> *To: *"user@cassandra.apache.org"
> *Subject: *Unable to start one Cassandra node: OutOfMemoryError
>
>
>
> Hi everyone,
>
>
>
> While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra
> 2.1.8) to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't
> start due to an OutOfMemoryError.
>
> We're using HotSpot 64-Bit Server VM/1.8.0_45 and G1 garbage collector
> with 8 GiB heap.
>
> Average node size is 300 GiB.
>
>
>
> I looked at the heap dump with YourKit profiler (www.yourkit.com) and it
> was quite hard since it's so big, but I can't get much out of it:
> http://i.imgur.com/fIRImma.png
>
>
>
> As far as I understand the report, there are 1,332,812 instances of
> org.apache.cassandra.db.Row which retain 8 GiB. I don't understand why all
> of them are still strongly reachable?
>
>
>
> Please help me debug this. I don't even know where to start.
>
> I feel very uncomfortable with 1 node running 4.8.2, 1 node down and 3
> nodes running 4.7.1 at the same time.
>
>
>
> Thanks,
>
> Mikhail
>
>
>
>


Re: Unable to start one Cassandra node: OutOfMemoryError

2015-12-10 Thread Mikhail Strebkov
Steve, thanks a ton! Removing compactions_in_progress helped! Now the node
is running again.

P.S. Sorry for referring to you by your last name in my last email; I got
confused.

On Thu, Dec 10, 2015 at 2:09 AM, Walsh, Stephen 
wrote:

> 8GB is the max recommended for heap size and that’s if you have 32GB or
> more available.
>
>
>
> We use 6GB on our 16GB machines and it's very stable
>
>
>
> The out of memory could be coming from cassandra reloading
> compactions_in_progress into memory; you can check this in the log files
> if need be.
>
> You can safely delete this folder inside the data directory.
>
>
>
> This can happen if you didn’t stop cassandra with a drain command and wait
> for the compactions to finish.
>
> The last time we hit it was due to testing HA, when we force-killed an
> entire cluster.
>
>
>
> Steve
>
>
>
>
>
>
>
> *From:* Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
> *Sent:* 10 December 2015 02:49
> *To:* user@cassandra.apache.org
> *Subject:* Re: Unable to start one Cassandra node: OutOfMemoryError
>
>
>
> 8G is probably too small for a G1 heap. Raise your heap or try CMS instead.
>
>
>
> 71% of your heap is collections – may be a weird data model quirk, but try
> CMS first and see if that behaves better.
>
>
>
>
>
>
>
> *From: *Mikhail Strebkov
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Wednesday, December 9, 2015 at 5:26 PM
> *To: *"user@cassandra.apache.org"
> *Subject: *Unable to start one Cassandra node: OutOfMemoryError
>
>
>
> Hi everyone,
>
>
>
> While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra
> 2.1.8) to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't
> start due to an OutOfMemoryError.
>
> We're using HotSpot 64-Bit Server VM/1.8.0_45 and G1 garbage collector
> with 8 GiB heap.
>
> Average node size is 300 GiB.
>
>
>
> I looked at the heap dump with YourKit profiler (www.yourkit.com) and it
> was quite hard since it's so big, but I can't get much out of it:
> http://i.imgur.com/fIRImma.png
>
>
>
> As far as I understand the report, there are 1,332,812 instances of
> org.apache.cassandra.db.Row which retain 8 GiB. I don't understand why all
> of them are still strongly reachable?
>
>
>
> Please help me debug this. I don't even know where to start.
>
> I feel very uncomfortable with 1 node running 4.8.2, 1 node down and 3
> nodes running 4.7.1 at the same time.
>
>
>
> Thanks,
>
> Mikhail
>
>
>
>


Re: Unable to start one Cassandra node: OutOfMemoryError

2015-12-10 Thread Carlos Rolo
Dealt with that recently, and the only solution that made it work was to
increase heap sizes.



Regards,

Carlos Juzarte Rolo
Cassandra Consultant

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
Mobile: +351 91 891 81 00 | Tel: +1 613 565 8696 x1649
www.pythian.com

On Thu, Dec 10, 2015 at 10:14 PM, Mikhail Strebkov 
wrote:

> Jeff, CMS GC didn't help. Thinking about it, I don't see how it can help
> if there are 8GB of strongly reachable objects from the GC roots.
>
> Walsh, thanks for your suggestion. I checked the log and there are some
> compactions_in_progress entries, but their total size is ~300 MiB as far as I
> understand.
>
> Here is the log of the last unsuccessful start:
> https://gist.github.com/kluyg/7b9955d34def947f5e0a
>
>
> On Thu, Dec 10, 2015 at 2:09 AM, Walsh, Stephen 
> wrote:
>
>> 8GB is the max recommended for heap size and that’s if you have 32GB or
>> more available.
>>
>>
>>
>> We use 6GB on our 16GB machines and it's very stable
>>
>>
>>
>> The out of memory could be coming from cassandra reloading
>> compactions_in_progress into memory; you can check this in the log files
>> if need be.
>>
>> You can safely delete this folder inside the data directory.
>>
>>
>>
>> This can happen if you didn’t stop cassandra with a drain command and
>> wait for the compactions to finish.
>>
>> The last time we hit it was due to testing HA, when we force-killed an
>> entire cluster.
>>
>>
>>
>> Steve
>>
>>
>>
>>
>>
>>
>>
>> *From:* Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
>> *Sent:* 10 December 2015 02:49
>> *To:* user@cassandra.apache.org
>> *Subject:* Re: Unable to start one Cassandra node: OutOfMemoryError
>>
>>
>>
>> 8G is probably too small for a G1 heap. Raise your heap or try CMS
>> instead.
>>
>>
>>
>> 71% of your heap is collections – may be a weird data model quirk, but
>> try CMS first and see if that behaves better.
>>
>>
>>
>>
>>
>>
>>
>> *From: *Mikhail Strebkov
>> *Reply-To: *"user@cassandra.apache.org"
>> *Date: *Wednesday, December 9, 2015 at 5:26 PM
>> *To: *"user@cassandra.apache.org"
>> *Subject: *Unable to start one Cassandra node: OutOfMemoryError
>>
>>
>>
>> Hi everyone,
>>
>>
>>
>> While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra
>> 2.1.8) to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't
>> start due to an OutOfMemoryError.
>>
>> We're using HotSpot 64-Bit Server VM/1.8.0_45 and G1 garbage collector
>> with 8 GiB heap.
>>
>> Average node size is 300 GiB.
>>
>>
>>
>> I looked at the heap dump with YourKit profiler (www.yourkit.com) and it
>> was quite hard since it's so big, but I can't get much out of it:
>> http://i.imgur.com/fIRImma.png
>>
>>
>>
>> As far as I understand the report, there are 1,332,812 instances of
>> org.apache.cassandra.db.Row which retain 8 GiB. I don't understand why all
>> of them are still strongly reachable?
>>
>>
>>
>> Please help me debug this. I don't even know where to start.
>>
>> I feel very uncomfortable with 1 node running 4.8.2, 1 node down and 3
>> nodes running 4.7.1 at the same time.
>>
>>
>>
>> Thanks,
>>
>> Mikhail
>>
>>
>>
>>
>
>


RE: Unable to start one Cassandra node: OutOfMemoryError

2015-12-10 Thread Walsh, Stephen
8GB is the max recommended for heap size and that’s if you have 32GB or more 
available.

We use 6GB on our 16GB machines and it's very stable

The out of memory could be coming from cassandra reloading 
compactions_in_progress into memory; you can check this in the log files if
need be.
You can safely delete this folder inside the data directory.

This can happen if you didn’t stop cassandra with a drain command and wait for 
the compactions to finish.
The last time we hit it was due to testing HA, when we force-killed an entire
cluster.
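
For reference, a minimal Python sketch of that cleanup (not an official
procedure; the data directory path and the glob pattern are assumptions for a
default install, and the node must be fully stopped before removing anything):

# Hypothetical cleanup sketch: remove the system compactions_in_progress
# directory while the node is stopped. Adjust data_dir for your install.
import glob
import shutil

data_dir = "/var/lib/cassandra/data"   # assumed default data directory
for path in glob.glob(data_dir + "/system/compactions_in_progress-*"):
    print("removing", path)
    shutil.rmtree(path)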

Steve



From: Jeff Jirsa [mailto:jeff.ji...@crowdstrike.com]
Sent: 10 December 2015 02:49
To: user@cassandra.apache.org
Subject: Re: Unable to start one Cassandra node: OutOfMemoryError

8G is probably too small for a G1 heap. Raise your heap or try CMS instead.

71% of your heap is collections – may be a weird data model quirk, but try CMS 
first and see if that behaves better.



From: Mikhail Strebkov
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, December 9, 2015 at 5:26 PM
To: "user@cassandra.apache.org"
Subject: Unable to start one Cassandra node: OutOfMemoryError

Hi everyone,

While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra 2.1.8)
to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't start due to an
OutOfMemoryError.

We're using HotSpot 64-Bit Server VM/1.8.0_45 and G1 garbage collector with 8 
GiB heap.

Average node size is 300 GiB.

I looked at the heap dump with YourKit profiler 
(www.yourkit.com) and it was quite hard since it's so 
big, but I can't get much out of it: http://i.imgur.com/fIRImma.png

As far as I understand the report, there are 1,332,812 instances of 
org.apache.cassandra.db.Row which retain 8 GiB. I don't understand why all of 
them are still strongly reachable?

Please help me debug this. I don't even know where to start.
I feel very uncomfortable with 1 node running 4.8.2, 1 node down and 3 nodes 
running 4.7.1 at the same time.

Thanks,
Mikhail




Re: [RELEASE] Apache Cassandra 3.1 released

2015-12-10 Thread Paulo Motta
> Will 3.2 contain the bugfixes that are in 3.0.2 as well?

If the bugfix affects both 3.2 and 3.0.2, yes. Otherwise it will only go in
the affected version.

> Is 3.x.y just 3.0.x plus new stuff? Where most of the time y is 0, unless
there's a really serious issue that needs fixing?

You can't really compare 3.0.y with 3.x(.y) because they're two different
versioning schemes.  To make it a bit clearer:

Old model:
* x.y.z, where:
  * x.y represents the "major" version (eg: 2.1, 2.2)
  * z represents the "minor" version (eg: 2.1.1, 2.2.2)

New model:
* a.b(.c), where:
  * a represents the "major" version (3, 4, 5)
  * b represents the "minor" version (3.1, 3.2, 4.1, etc), where:
* if b is even, it's a tick release, meaning it can contain both
bugfixes and new features.
* if b is odd, it's a tock release, meaning it can only contain
bugfixes.
  * c is a "subminor" optional version, which will only happen in emergency
situations, for example, if a critical/blocker bug is discovered before the
next release is out. So we probably won't have a 3.1.1, unless a critical
bug is discovered in 3.1 and needs urgent fix before 3.2.

The 3.0.x series is an interim stabilization release using the old
versioning scheme, and will only receive bug fixes that affect it.
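
To make the mapping concrete, here is a minimal Python sketch of the a.b(.c)
scheme described above (the function and the sample version strings are
illustrative assumptions, not anything shipped with Cassandra):

# Illustrative sketch of the tick-tock numbering described above. Assumes
# plain "a.b" or "a.b.c" version strings; 3.0.x is treated as the interim
# old-style stabilization line.
def classify(version):
    parts = [int(p) for p in version.split(".")]
    major, minor = parts[0], parts[1]
    subminor = parts[2] if len(parts) > 2 else None

    if (major, minor) == (3, 0):
        return "3.0.x interim line: bugfixes only (old versioning scheme)"
    if minor % 2 == 0:
        kind = "tick: new features and bugfixes"
    else:
        kind = "tock: bugfixes only"
    if subminor is not None:
        kind += " (emergency subminor release)"
    return kind

for v in ["3.0.2", "3.1", "3.2", "3.3", "3.1.1"]:
    print(v, "->", classify(v))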

2015-12-09 18:21 GMT-08:00 Maciek Sakrejda :

> I'm still confused, even after reading the blog post twice (and reading
> the linked Intel post). I understand what you are doing conceptually, but
> I'm having a hard time mapping that to actual planned release numbers.
>
> > The 3.0.2 will only contain bugfixes, while 3.2 will introduce new
> features.
>
>
>


read time coprocessor?

2015-12-10 Thread Li Yang
This is Yang from the Apache Kylin project. We are thinking about using
Cassandra instead of HBase as storage. I searched and read around and still
have one question.

Does Cassandra support a read-time coprocessor that allows moving computation
to the data node before the scan result is returned? This would reduce network
traffic greatly in our case.

Thanks,
Yang


Re: [RELEASE] Apache Cassandra 3.1 released

2015-12-10 Thread Kai Wang
Paulo,

Thank you for the examples.

So if I go to the download page and see 3.0.1, 3.1 and 3.2, the most stable
version will be 3.1 because it includes the critical fixes in 3.0.1 and
some additional bug fixes while it doesn't have any new features introduced
in 3.2. In that sense 3.0.1 becomes obsolete as soon as 3.1 comes out.

To summarize, the most stable version should be x.Max(2n+1).z.

Am I correct?


On Thu, Dec 10, 2015 at 6:22 AM, Paulo Motta 
wrote:

> > Will 3.2 contain the bugfixes that are in 3.0.2 as well?
>
> If the bugfix affects both 3.2 and 3.0.2, yes. Otherwise it will only go
> in the affected version.
>
> > Is 3.x.y just 3.0.x plus new stuff? Where most of the time y is 0,
> unless there's a really serious issue that needs fixing?
>
> You can't really compare 3.0.y with 3.x(.y) because they're two different
> versioning schemes.  To make it a bit clearer:
>
> Old model:
> * x.y.z, where:
>   * x.y represents the "major" version (eg: 2.1, 2.2)
>   * z represents the "minor" version (eg: 2.1.1, 2.2.2)
>
> New model:
> * a.b(.c), where:
>   * a represents the "major" version (3, 4, 5)
>   * b represents the "minor" version (3.1, 3.2, 4.1, etc), where:
> * if b is even, it's a tick release, meaning it can contain both
> bugfixes and new features.
> * if b is odd, it's a tock release, meaning it can only contain
> bugfixes.
>   * c is a "subminor" optional version, which will only happen in
> emergency situations, for example, if a critical/blocker bug is discovered
> before the next release is out. So we probably won't have a 3.1.1, unless a
> critical bug is discovered in 3.1 and needs urgent fix before 3.2.
>
> The 3.0.x series is an interim stabilization release using the old
> versioning scheme, and will only receive bug fixes that affect it.
>
> 2015-12-09 18:21 GMT-08:00 Maciek Sakrejda :
>
>> I'm still confused, even after reading the blog post twice (and reading
>> the linked Intel post). I understand what you are doing conceptually, but
>> I'm having a hard time mapping that to actual planned release numbers.
>>
>> > The 3.0.2 will only contain bugfixes, while 3.2 will introduce new
>> features.
>>
>>
>>
>


Re: [RELEASE] Apache Cassandra 3.1 released

2015-12-10 Thread Josh McKenzie
Kai,


> The most stable version will be 3.1 because it includes the critical fixes
> in 3.0.1 and some additional bug fixes

3.0.1 and 3.1 are identical. This is a unique overlap specific to 3.0.1 and
3.1.

> To summarize, the most stable version should be x.Max(2n+1).z.

Going forward, you can expect the following:
3.2: new features
3.3: stabilization (built on top of 3.2)
3.4: new features
3.5: stabilization (built on top of 3.4)

And in parallel (for the 3.x major version / tick-tock transition period
only):
3.0.2: bugfixes only
3.0.3: bugfixes only
3.0.4: bugfixes only
etc

*Any bugfix that goes into 3.0.X will be in the 3.X line; however, not all
bugfixes in 3.X will be in 3.0.X* (bugfixes for new features introduced in
3.2, 3.4, etc will obviously not be back-ported to 3.0.X).

So, for the 3.x line:

   - If you absolutely must have the most stable version of C* and don't
   care at all about the new features introduced in even versions of 3.x, you
   want the 3.0.N release.
   - If you want access to the new features introduced in even release
   versions of 3.x (3.2, 3.4, 3.6), you'll want to run the latest odd version
   (3.3, 3.5, 3.7, etc) after the release containing the feature you want
   access to (so, if the feature's introduced in 3.4 and we haven't dropped
   3.5 yet, obviously you'd need to run 3.4).


This is only going to be the case during the transition phase from old
release cycles to tick-tock. We're targeting changes to CI and quality
focus going forward to greatly increase the stability of the odd releases
of major branches (3.1, 3.3, etc) so, for the 4.X releases, our
recommendation would be to run the highest # odd release for greatest
stability.
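
As a rough illustration of the selection rule above, here is a small Python
sketch (the function name, the tuple representation and the example releases
are assumptions made for the example, not part of any Cassandra tooling):

# Illustrative sketch of the "which 3.x release should I run" rule above.
# Versions are (major, minor) tuples; feature_version is the even release
# that introduced the feature you need, or None if you only want stability.
def pick_release(available, feature_version=None):
    if feature_version is None:
        # Maximum stability: stay on the bugfix-only 3.0.N line.
        return (3, 0)
    # Otherwise run the latest odd (stabilization) release at or above the
    # feature release; if none has shipped yet, run the feature release itself.
    odd = [v for v in available if v[1] % 2 == 1 and v >= feature_version]
    return max(odd) if odd else feature_version

releases = [(3, 0), (3, 1), (3, 2), (3, 3), (3, 4)]
print(pick_release(releases))                           # -> (3, 0)
print(pick_release(releases, feature_version=(3, 2)))   # -> (3, 3)
print(pick_release(releases, feature_version=(3, 4)))   # -> (3, 4), until 3.5 ships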

Hope that helps clarify.

On Thu, Dec 10, 2015 at 10:34 AM, Kai Wang  wrote:

> Paulo,
>
> Thank you for the examples.
>
> So if I go to the download page and see 3.0.1, 3.1 and 3.2, the most stable
> version will be 3.1 because it includes the critical fixes in 3.0.1 and
> some additional bug fixes while it doesn't have any new features introduced
> in 3.2. In that sense 3.0.1 becomes obsolete as soon as 3.1 comes out.
>
> To summarize, the most stable version should be x.Max(2n+1).z.
>
> Am I correct?
>
>
> On Thu, Dec 10, 2015 at 6:22 AM, Paulo Motta 
> wrote:
>
>> > Will 3.2 contain the bugfixes that are in 3.0.2 as well?
>>
>> If the bugfix affects both 3.2 and 3.0.2, yes. Otherwise it will only go
>> in the affected version.
>>
>> > Is 3.x.y just 3.0.x plus new stuff? Where most of the time y is 0,
>> unless there's a really serious issue that needs fixing?
>>
>> You can't really compare 3.0.y with 3.x(.y) because they're two different
>> versioning schemes.  To make it a bit clearer:
>>
>> Old model:
>> * x.y.z, where:
>>   * x.y represents the "major" version (eg: 2.1, 2.2)
>>   * z represents the "minor" version (eg: 2.1.1, 2.2.2)
>>
>> New model:
>> * a.b(.c), where:
>>   * a represents the "major" version (3, 4, 5)
>>   * b represents the "minor" version (3.1, 3.2, 4.1, etc), where:
>> * if b is even, it's a tick release, meaning it can contain both
>> bugfixes and new features.
>> * if b is odd, it's a tock release, meaning it can only contain
>> bugfixes.
>>   * c is a "subminor" optional version, which will only happen in
>> emergency situations, for example, if a critical/blocker bug is discovered
>> before the next release is out. So we probably won't have a 3.1.1, unless a
>> critical bug is discovered in 3.1 and needs urgent fix before 3.2.
>>
>> The 3.0.x series is an interim stabilization release using the old
>> versioning scheme, and will only receive bug fixes that affect it.
>>
>> 2015-12-09 18:21 GMT-08:00 Maciek Sakrejda :
>>
>>> I'm still confused, even after reading the blog post twice (and reading
>>> the linked Intel post). I understand what you are doing conceptually, but
>>> I'm having a hard time mapping that to actual planned release numbers.
>>>
>>> > The 3.0.2 will only contain bugfixes, while 3.2 will introduce new
>>> features.
>>>
>>>
>>>
>>
>