rt 3.0 and picked up a new feature you didn't realize you picked up, etc)
On Fri, May 11, 2018 at 11:26 AM, kooljava2 <koolja...@yahoo.com.invalid> wrote:
After further analyzing the data, I see a pattern. The rows which were
updated in the last 2-3 weeks, the columns which were not part of this
Hello,
Thank you, Lucas, for sharing. I am still a beginner in the Cassandra/NoSQL world.
Are there any other good books related to performance tuning and architecture
overview?
Thank you.
On Monday, 14 May 2018, 07:57:38 GMT-7, Nitan Kainth
wrote:
Hi Lucas,
I am
Hello Jeff,
2.1.19 to 3.0.15.
Thank you.
On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa <jji...@gmail.com>
wrote:
Which minor version of 3.0
-- Jeff Jirsa
On May 11, 2018, at 2:54 AM, kooljava2 <koolja...@yahoo.com.INVALID> wrote:
Hello,
Upgraded Cassandra 2.1
Hello,
We recently upgraded C* from 2.1 to 3.0. After the upgrade we are seeing an
increase in the total read bytes and read ops on the EBS volumes. It has almost
doubled on all the nodes. The number of writes is the same.
Thank you.
Which 3.0? Newest, or an earlier 3.0.x?
On Fri, Aug 10, 2018 at 3:05 PM, kooljava2 wrote:
We found the solution for our high read volume (bytes) with the help of DSE. We
made the following changes:
$ lvchange -r 16
and set the following parameter in cassandra.yaml:
disk_access_mode: mmap_index_only
Thank you.
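For context on why lowering readahead can cut total read bytes so sharply, here is a back-of-envelope sketch. The 512-byte sector size, the 256-sector default readahead, and the ~4 KiB random read size are assumptions for illustration, not figures from this thread:

```python
# Rough read-amplification estimate for small random reads on a block device.
# Assumptions (not from the thread): 512-byte sectors, a 256-sector default
# readahead, and ~4 KiB random read requests typical of index lookups.
SECTOR = 512

def bytes_per_read(readahead_sectors, request_bytes=4096):
    """Bytes actually pulled from the volume for one small random read."""
    return max(readahead_sectors * SECTOR, request_bytes)

default_ra = bytes_per_read(256)  # 128 KiB pulled for a 4 KiB request
tuned_ra = bytes_per_read(16)     # 8 KiB pulled after `lvchange -r 16`
print(default_ra // tuned_ra)     # prints 16
```

Under these assumptions, each small random read drags in 16x fewer bytes after the readahead change, which is consistent with the large drop in EBS read bytes reported here.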
On Monday, 13 August 2018, 10:39:28 GMT-7, kooljava2
wrote:
We did run "nod
Hello Anuj,
The 2nd workaround should work, as the app will auto-discover all the other
nodes. It's the first contact the app makes with a node that determines the
protocol version. So if you remove the newer-version nodes from the app
configuration, after startup it will auto-discover the newer
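A minimal sketch of the workaround described above: configure the driver so its first contact is an older-version node, then let auto-discovery find the rest. The node addresses and version tags below are hypothetical examples, not values from this thread:

```python
# Sketch: keep only old-version nodes in the configured contact list, so the
# driver negotiates the protocol version against an old node first.
# Addresses and version numbers are made-up examples.

def initial_contact_points(nodes):
    """Return addresses of old-version nodes from (address, major_version) pairs."""
    old = [addr for addr, version in nodes if version < 3]
    if not old:
        raise ValueError("no old-version nodes left to contact first")
    return old

nodes = [("10.0.0.1", 2), ("10.0.0.2", 2), ("10.0.0.3", 3)]
print(initial_contact_points(nodes))  # prints ['10.0.0.1', '10.0.0.2']
```

Once every node is upgraded, the restriction can be dropped and the full node list restored to the configuration.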
Hello,
We are exploring configuring Solr/Spark and wanted to get input on this.
1) How do we decide which one to use?
2) Do we run this on a DC where there is less workload?
Any other suggestion or comments are appreciated.
Thank you.