Thanks Marcus, that may have contributed to the issue as well. Repair
certainly works great beyond 3.0.3; we have tested it on 3.5+ as well as on
3.0.7.

On that note, it is evident that there have been a number of optimizations
on various fronts post 3.0.3. I would like to know the general opinion on
which version is considered stable for production deployment.

I understand this has been asked multiple times, but while upgrading to 3.5
we encountered the issue mentioned by Atul, which was critical for us and not
something we could compromise on, as it would have required a lot of code
changes. To what extent can we consider 3.6+ versions for production
deployment?

On a number of occasions it has been suggested, citing past history, that
Cassandra versions x.y.6+ can be assumed to be production stable, as most of
the major fixes are in place by then. How far can we apply that logic to the
3.x series? Since fixes made in later versions are not back-ported to earlier
versions in the tick-tock series, it becomes a bit tricky: if a feature in
3.x is being used and a bug is encountered, the options are to either:

1. go back to 3.(x-1) or 3.(x-2), or
2. go higher up in the series (if a newer version has been released).

In case of option 1, if the feature isn't present in the earlier 3.x release,
a lot of code changes may be required in the application using Cassandra.

Say, for example, someone deploys 3.5, starts using SASI in production, and
encounters an issue they cannot live with, where moving to a Cassandra
version without that issue is the only option. Then 3.3 is possibly an
option, but it would require them to find an alternative to SASI if they had
been using it. In a surprise situation on production, changing application
code and working out alternatives may not be that quick.

Although the chances are low, one can never be sure what issues the new
features in 3.(x+1) may bring, so going forward to a newer version is also a
bit dicey.

If a similar situation arose under the earlier release strategy, one could go
ahead with the next release in the same series almost blindly, because it
would be purely a bug-fix release. I'm sure a lot of thought went into
adopting the tick-tock strategy (possibly to roll out a number of features
that had been pending).

But from a user's point of view, adopting a feature in a new version, even
after testing that feature, exposes us to issues arising from the other
features developed alongside it, whose fixes will not be back-ported, and in
that situation neither going forward nor going backward may be an option.

Any help in mitigating this apprehension would be much appreciated.


On Wed, Jun 22, 2016 at 6:33 PM, Marcus Eriksson <krum...@gmail.com> wrote:

> it could also be CASSANDRA-11412 if you have many sstables and vnodes
>
> On Wed, Jun 22, 2016 at 2:50 PM, Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
>
>> Thanks for the info, Paulo and Robert. I tried further testing with other
>> parameters and the issue persisted. It could be either CASSANDRA-11739 or
>> CASSANDRA-11206, but I'm skeptical about 11739 because repair works well in
>> 3.5, while 11739 appears to be fixed only in 3.7/3.0.7.
>>
>> We may be able to work around this by increasing the heap size, giving up
>> some page cache in the process, until we upgrade to a higher version.
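>>
>> Concretely that would just mean bumping the standard heap settings in
>> cassandra-env.sh (16G is the value that worked in the earlier test; the
>> HEAP_NEWSIZE figure below is only a placeholder and should follow the usual
>> ~100 MB per physical core guideline):
>>
>>     # conf/cassandra-env.sh -- set both variables together, or neither
>>     MAX_HEAP_SIZE="16G"
>>     HEAP_NEWSIZE="800M"   # placeholder; size to roughly 100 MB per core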
>>
>> On Mon, Jun 20, 2016 at 10:00 PM, Paulo Motta <pauloricard...@gmail.com>
>> wrote:
>>
>>> You could also be hitting CASSANDRA-11739, which was fixed in 3.0.7 and
>>> could potentially cause OOMs for long-running repairs.
>>>
>>>
>>> 2016-06-20 13:26 GMT-03:00 Robert Stupp <sn...@snazy.de>:
>>>
>>>> One possibility might be CASSANDRA-11206 (Support large partitions on
>>>> the 3.0 sstable format), which reduces heap usage for other operations
>>>> (like repair, compactions) as well.
>>>> You can verify that by setting column_index_cache_size_in_kb in c.yaml
>>>> to a really high value like 10000000 - if you see the same behaviour in 3.7
>>>> with that setting, there’s not much you can do except upgrade to 3.7, as
>>>> that change went into 3.6 and was not back-ported to 3.0.x.
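>>>>
>>>> To spell that out, the test is a one-line change in cassandra.yaml on the
>>>> 3.7 node (the option only exists from 3.6 onwards), followed by a node
>>>> restart before re-running the repair; the exact number does not matter as
>>>> long as it is absurdly large, e.g.:
>>>>
>>>>     # cassandra.yaml on the 3.7 node -- value is deliberately oversized
>>>>     column_index_cache_size_in_kb: 10000000
>>>>
>>>> A value that large effectively restores the pre-3.6 behaviour of keeping
>>>> the whole column index on heap, so if 3.7 then shows the same heap usage
>>>> as 3.0.3, CASSANDRA-11206 is very likely what you are hitting.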
>>>>
>>>> —
>>>> Robert Stupp
>>>> @snazy
>>>>
>>>> On 20 Jun 2016, at 18:13, Bhuvan Rawal <bhu1ra...@gmail.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> We are running Cassandra 3.0.3 in production with a max heap size of 8GB.
>>>> There has been a consistent issue with nodetool repair for a while. We
>>>> have tried issuing it with multiple options (--pr, --local, and so on, as
>>>> shown below); sometimes a node went down with an OutOfMemory error, and
>>>> at times nodes stopped accepting connections altogether, even JMX
>>>> connections from nodetool.
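>>>>
>>>> For completeness, the invocations were along these lines (the keyspace
>>>> name is a stand-in here; each variant was run separately):
>>>>
>>>>     nodetool repair -pr mykeyspace      # primary-range repair
>>>>     nodetool repair -local mykeyspace   # restrict repair to the local DC
>>>>     nodetool repair -full mykeyspace    # full (non-incremental) repair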
>>>>
>>>> Trying with the same data on 3.7, repair ran successfully without
>>>> encountering any of the above-mentioned issues. I then tried increasing
>>>> the heap to 16GB on 3.0.3 and repair ran successfully there as well.
>>>>
>>>> I then analyzed memory usage during nodetool repair for 3.0.3 (16GB
>>>> heap) vs 3.7 (8GB heap): 3.0.3 occupied 11-14 GB at all times, whereas
>>>> 3.7 fluctuated between 1 and 4.5 GB while repair ran. Both were full
>>>> repairs over the same dataset and the same unrepaired data.
>>>>
>>>> We would like to know whether this is a known bug that was fixed after
>>>> 3.0.3, and whether there is a way to run repair on 3.0.3 without
>>>> increasing the heap size, as 8GB works for us for all other activities.
>>>>
>>>> PFA the VisualVM snapshots.
>>>>
>>>> <Screenshot from 2016-06-20 21:06:09.png>
>>>> 3.0.3 VisualVM snapshot: consistent heap usage of more than 12 GB.
>>>>
>>>>
>>>> <Screenshot from 2016-06-20 21:05:57.png>
>>>> 3.7 VisualVM snapshot: 8GB max heap, with peak heap usage of about 5 GB.
>>>>
>>>> Thanks & Regards,
>>>> Bhuvan Rawal
>>>>
>>>>
>>>> PS: In case the snapshots are not visible, they can be viewed at the
>>>> following links:
>>>> 3.0.3:
>>>> https://s31.postimg.org/4e7ifsjaz/Screenshot_from_2016_06_20_21_06_09.png
>>>> 3.7:
>>>> https://s31.postimg.org/xak32s9m3/Screenshot_from_2016_06_20_21_05_57.png
>>>>
>>>>
>>>>
>>>
>>
>
