[ANNOUNCE] Apache Gora 0.8 Release

2017-09-20 Thread lewis john mcgibbney
Hi Folks, The Apache Gora team are pleased to announce the immediate availability of Apache Gora 0.8. The Apache Gora open source framework provides an in-memory data model and persistence for big data. Gora supports persisting to:
- column stores,
- key value stores,
- document stores,
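As an illustration of the in-memory data model plus pluggable persistence described above, here is a minimal Java sketch of Gora's DataStore API. The WebPage bean is hypothetical (real projects compile such a bean from an Avro schema with Gora's compiler), and the concrete backing store is selected via gora.properties rather than in code:

    import org.apache.gora.store.DataStore;
    import org.apache.gora.store.DataStoreFactory;
    import org.apache.hadoop.conf.Configuration;
    // import com.example.generated.WebPage; // hypothetical Avro-compiled bean

    public class GoraSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The concrete store class (HBase, Cassandra, MongoDB, ...)
        // is picked up from gora.properties.
        DataStore<String, WebPage> store =
            DataStoreFactory.getDataStore(String.class, WebPage.class, conf);

        WebPage page = WebPage.newBuilder()
            .setUrl("http://example.org")
            .setContent("hello gora")
            .build();

        store.put("http://example.org", page); // key-value style write
        store.flush();                          // push buffered mutations to the store

        WebPage fetched = store.get("http://example.org");
        store.close();
      }
    }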

Re: Drastic increase in disk usage after starting repair on 3.7

2017-09-20 Thread kurt greaves
Repair does overstream by design, so if that node is inconsistent you'd expect a bit of an increase. If you've got a backlog of compactions, that's probably due to the repair and likely the cause of the increase. If you're really worried you can do a rolling restart to stop the repair, otherwise maybe try
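For the compaction backlog kurt mentions, nodetool compactionstats is the usual check; the same gauge is also readable over JMX. A rough Java sketch, assuming Cassandra's default JMX port 7199 on localhost and its standard Compaction metrics MBean:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Reads Cassandra's pending-compactions gauge over JMX -- the same
    // figure nodetool compactionstats prints as "pending tasks".
    public class PendingCompactions {
      public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
          MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
          ObjectName gauge = new ObjectName(
              "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
          Object pending = mbsc.getAttribute(gauge, "Value");
          System.out.println("Pending compactions: " + pending);
        }
      }
    }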

Re: Drastic increase in disk usage after starting repair on 3.7

2017-09-20 Thread Paul Pollack
Just a quick additional note -- we have checked and this is the only node in the cluster exhibiting this behavior; disk usage is steady on all the others. CPU load on the repairing node is slightly higher, but nothing significant. On Wed, Sep 20, 2017 at 9:08 PM, Paul Pollack

Drastic increase in disk usage after starting repair on 3.7

2017-09-20 Thread Paul Pollack
Hi, I'm running a repair on a node in my 3.7 cluster and today got alerted on disk space usage. We keep the data and commit log directories on separate EBS volumes. The data volume is 2TB. The node went down due to EBS failure on the commit log drive. I stopped the instance and was later told by

Re: Debugging write timeouts on Cassandra 2.2.5

2017-09-20 Thread Jai Bheemsen Rao Dhanwada
Apologies for the typo, Mike. On Wed, Sep 20, 2017 at 9:49 AM, Jai Bheemsen Rao Dhanwada <jaibheem...@gmail.com> wrote: > Hello Nike, > > Were you able to fix the issue? If so, what change helped you? > > On Wed, Feb 24, 2016 at 5:36 PM, Jack Krupansky > wrote: > >>

Re: Debugging write timeouts on Cassandra 2.2.5

2017-09-20 Thread Jai Bheemsen Rao Dhanwada
Hello Nike, Were you able to fix the issue? If so, what change helped you? On Wed, Feb 24, 2016 at 5:36 PM, Jack Krupansky wrote: > Great that you found a specific release that triggers the problem - 2.1.x > has a huge number of changes. > > How many partitions and
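For anyone digging into a thread like this: when debugging write timeouts it helps to log exactly what the coordinator reported. A sketch against the DataStax Java driver (3.x-era API; the contact point, keyspace, and table below are made up for illustration):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    public class WriteTimeoutDebug {
      public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumed contact point
                .build();
             Session session = cluster.connect()) {
          try {
            session.execute(new SimpleStatement(
                "INSERT INTO ks.tbl (id, val) VALUES (1, 'x')")); // hypothetical table
          } catch (WriteTimeoutException e) {
            // The coordinator reports how far the write got before timing out:
            System.err.printf("write timed out: type=%s cl=%s acks=%d/%d%n",
                e.getWriteType(),
                e.getConsistencyLevel(),
                e.getReceivedAcknowledgements(),
                e.getRequiredAcknowledgements());
          }
        }
      }
    }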

Re: Multi-node repair fails after upgrading to 3.0.14

2017-09-20 Thread Jeff Jirsa
It certainly violates the principle of least astonishment. Generally, people with large clusters do it the same way they did in 2.1 - with ring-aware scheduling (which people running large clusters can probably do because they’re less likely to be using vnodes). The conversation beyond this
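For readers unfamiliar with the ring-aware scheduling Jeff describes: two nodes can safely repair at the same time only if their replica sets don't overlap. A toy Java sketch of that disjointness check, assuming simple contiguous replica placement, a fixed ring size, and no vnodes (all values hypothetical):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Toy model: a ring of NODES nodes with replication factor RF, where
    // node i's primary range is replicated on nodes i, i+1, ..., i+RF-1.
    public class RingAwareScheduler {
      static final int NODES = 12;
      static final int RF = 3;

      // Replica set for the primary range owned by node i.
      static Set<Integer> replicas(int i) {
        Set<Integer> r = new HashSet<>();
        for (int k = 0; k < RF; k++) r.add((i + k) % NODES);
        return r;
      }

      public static void main(String[] args) {
        // Greedily pick a batch of nodes whose replica sets are pairwise
        // disjoint; these can all run repair concurrently without their
        // streaming sessions touching the same replicas.
        List<Integer> batch = new ArrayList<>();
        Set<Integer> busy = new HashSet<>();
        for (int i = 0; i < NODES; i++) {
          Set<Integer> r = replicas(i);
          if (Collections.disjoint(busy, r)) {
            batch.add(i);
            busy.addAll(r);
          }
        }
        // With 12 nodes and RF=3 this prints [0, 3, 6, 9].
        System.out.println("Nodes that can repair concurrently: " + batch);
      }
    }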