Found the problem, it turns out that what Bharatendra suggested was
correct.
I had set the memtable_flush_writers to equal the number of cores but
hadn't restarted the Cassandra process, so they didn't take the
configuration.
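For anyone who lands on this thread later, the fix looks roughly like this (a sketch; the config path and service name are assumptions for a typical package install):

```shell
# cassandra.yaml fragment (path assumed: /etc/cassandra/cassandra.yaml):
#
#   memtable_flush_writers: 24    # e.g. one per core, per Bharatendra's suggestion
#
# The value is only read at startup, so the Cassandra process must be
# restarted before the change takes effect:
sudo systemctl restart cassandra   # or: sudo service cassandra restart
```

After the restart you can confirm the node came back up with `nodetool status`.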
On Wed, Jul 29, 2015 at 12:59 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Jul 28, 2015 at 4:49 PM, Soerian Lieve sli...@liveramp.com wrote:
I did already set that to the number of cores of the machines (24), but it
made no difference.
I continue to suggest that you file a JIRA ticket... I feel you have done
sufficient community-based due diligence to
On Fri, Jul 24, 2015 at 1:55 PM, Soerian Lieve sli...@liveramp.com wrote:
I was on CFQ so I changed it to noop. The problem still persisted however.
Do you have any other ideas?
On Tue, Jul 28, 2015 at 4:44 PM, Bharatendra Boddu bharatend...@gmail.com
wrote:
Increase memtable_flush_writers. In cassandra.yaml, it is recommended to
increase this setting when SSDs are used for storing data.
On Thu, Jul 23, 2015 at 5:00 PM, Jeff Ferland j...@tubularlabs.com wrote:
Imbalanced disk use is ok in itself. It’s only saturated throughput that’s
harmful. RAID 0 does give more consistent throughput and balancing, but that’s
another story.
As for your situation with SSD drives, you can probably tweak this by making
sure the scheduler is set to noop, or read up on
My immediate guess: your transaction logs are on the same media as your
sstables and your OS prioritizes read requests.
-Jeff
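For reference, checking and switching the scheduler on a given block device looks roughly like this (sda is an assumption; substitute your data device):

```shell
# Show the available schedulers; the active one appears in [brackets]:
cat /sys/block/sda/queue/scheduler

# Switch to noop for the current boot (requires root; not persistent
# across reboots):
echo noop | sudo tee /sys/block/sda/queue/scheduler
```

To make the change persistent on older kernels, the `elevator=noop` kernel boot parameter is one common approach.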
On Jul 23, 2015, at 2:51 PM, Soerian Lieve sli...@liveramp.com wrote:
Hi,
I am currently performing benchmarks on Cassandra. Independently from each
other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and
write at the same time, writing drops down to ~1000 writes/sec and reading
stays roughly the same.
The heap used is the same as when only reading,
I set up RAID0 after experiencing highly imbalanced disk usage with a JBOD
setup so my transaction logs are indeed on the same media as the sstables.
Is there any alternative to setting up RAID0 that doesn't have this issue?
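One option, if a spare device is available: keep RAID0 for the data directories but point the commit log at a separate device in cassandra.yaml, so log writes and sstable reads don't contend. A sketch; the paths are assumptions:

```shell
# cassandra.yaml fragment (paths are assumptions):
#
#   data_file_directories:
#       - /mnt/raid0/cassandra/data
#   commitlog_directory: /mnt/commitlog-ssd/cassandra/commitlog
```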
On Thu, Jul 23, 2015 at 4:03 PM, Jeff Ferland j...@tubularlabs.com wrote: