3.7 falls under the tick-tock release cycle, which is almost completely
untested in production by experienced operators.  In the cases where it has
been tested, numerous bugs have been found that I (and, I think, most
people on this list) consider showstoppers.  Additionally, the tick-tock
release cycle puts the operator in the uncomfortable position of having to
choose between upgrading to a new version with new features (and probably
new bugs) or backporting bug fixes from future versions themselves.  There
will never be a 3.7.1 release that fixes bugs in 3.7 without also adding
new features.

https://github.com/apache/cassandra/blob/trunk/NEWS.txt

For new projects I recommend starting with the recently released 3.0.9.

Assuming the project changes its policy on releases (all signs point to
yes), then by the time 4.0 rolls out, a lot of the features released in the
3.x series will have matured a bit, so it's very possible 4.0 will
stabilize faster than the usual six months a major release takes.

All that said, there's nothing wrong with doing compatibility & smoke tests
against the latest 3.x release as well as 3.0 (a rough sketch of what that
could look like is below) and reporting bugs back to the Apache Cassandra
JIRA; I'm sure it would be greatly appreciated.

https://issues.apache.org/jira/secure/Dashboard.jspa
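
A smoke test here really can be as small as connecting, creating a
throwaway keyspace and table, and doing a round-trip write/read against
each version you care about. A rough sketch, assuming the DataStax Java
driver 3.x and a local node; the contact point, keyspace, and table names
below are just placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class CassandraSmokeTest {
    public static void main(String[] args) {
        // Point this at a 3.0.x node, then at the latest 3.x node, and
        // compare behavior. "127.0.0.1" is a placeholder contact point.
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .build();
             Session session = cluster.connect()) {
            String version = session.execute(
                "SELECT release_version FROM system.local").one().getString(0);
            System.out.println("Connected to Cassandra " + version);

            // Round-trip a write and a read through a throwaway keyspace.
            session.execute("CREATE KEYSPACE IF NOT EXISTS smoke WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS smoke.kv (k text PRIMARY KEY, v text)");
            session.execute("INSERT INTO smoke.kv (k, v) VALUES ('ping', 'pong')");
            ResultSet rs = session.execute("SELECT v FROM smoke.kv WHERE k = 'ping'");
            System.out.println("Read back: " + rs.one().getString("v"));
        }
    }
}

Run the same thing against 3.0.9 and the latest 3.x, and anything that only
breaks on one of them is worth a JIRA ticket.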

Jon


On Tue, Sep 20, 2016 at 8:10 PM Jesse Hodges <hodges.je...@gmail.com> wrote:

> Can you elaborate on why not 3.7?
>
> On Tue, Sep 20, 2016 at 7:41 PM, Jonathan Haddad <j...@jonhaddad.com>
> wrote:
>
>> If you haven't yet deployed to prod I strongly recommend *not* using 3.7.
>>
>>
>> What network storage are you using?  Outside of a handful of highly
>> experienced experts using EBS in very specific ways, running Cassandra on
>> network storage usually ends in failure.
>>
>> On Tue, Sep 20, 2016 at 3:30 PM John Sanda <john.sa...@gmail.com> wrote:
>>
>>> I am deploying multiple Java web apps that connect to a Cassandra 3.7
>>> instance. Each app creates its own schema at startup. One of the schema
>>> changes involves dropping a table. I am seeing frequent client-side
>>> timeouts reported by the DataStax driver after the DROP TABLE statement is
>>> executed. I don't see this behavior in all environments. I do see it
>>> consistently in a QA environment in which Cassandra is running in Docker
>>> with network storage, so writes are pretty slow from the get-go. In my logs
>>> I see a lot of tables getting flushed, which I guess are all of the dirty
>>> column families in the respective commit log segment. Then I see a whole
>>> bunch of flushes getting queued up. Can I reach a point where too many
>>> table flushes get queued such that writes would be blocked?
>>>
>>>
>>> --
>>>
>>> - John
>>>
>>
>
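
For what it's worth, if the timeouts described above turn out to be the
driver's client-side read timeout expiring while the node is busy flushing,
one thing worth trying is raising the read timeout on just the DDL
statement and checking whether schema agreement was reached. A rough
sketch, assuming the DataStax Java driver 3.x; the contact point, keyspace,
and table names are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class DropTableWithLongerTimeout {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // placeholder contact point
                .build();
             Session session = cluster.connect()) {
            // Give the slow-storage environment more headroom than the
            // driver's default 12 second read timeout, for this statement only.
            SimpleStatement drop =
                new SimpleStatement("DROP TABLE IF EXISTS myks.old_table");
            drop.setReadTimeoutMillis(60_000);
            ResultSet rs = session.execute(drop);
            // The driver waits for schema agreement after a schema change;
            // this reports whether agreement was reached within its wait window.
            System.out.println("Schema in agreement: "
                + rs.getExecutionInfo().isSchemaInAgreement());
        }
    }
}

That doesn't fix a flush backlog on the server, but it at least helps
distinguish "the node is slow" from "the node is stuck."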
