Re: how to reduce disk read? (and bloom filter performance)

2011-10-17 Thread Mohit Anchlia
On Sun, Oct 16, 2011 at 2:20 AM, Radim Kolar h...@sendmail.cz wrote: On 10.10.2011 18:53, Mohit Anchlia wrote: Does it mean you are not updating rows or deleting them? Yes. I have 350M rows and only about 100k of them are updated. Can you look at the JMX values of BloomFilter*? I

Re: how to reduce disk read? (and bloom filter performance)

2011-10-17 Thread Radim Kolar
Look in jconsole - org.apache.cassandra.db - ColumnFamilies. The bloom filter false ratio on this server is 0.0018, and 0.06% of reads hit more than 1 SSTable. From Cassandra's point of view, it looks good.

Re: how to reduce disk read? (and bloom filter performance)

2011-10-16 Thread Radim Kolar
On 10.10.2011 18:53, Mohit Anchlia wrote: Does it mean you are not updating rows or deleting them? Yes. I have 350M rows and only about 100k of them are updated. Can you look at the JMX values of BloomFilter*? I could not find this in the jconsole MBeans or in JMX over HTTP in Cassandra 1.0

Re: how to reduce disk read? (and bloom filter performance)

2011-10-10 Thread Mohit Anchlia
Does it mean you are not updating rows or deleting them? Can you look at the JMX values of BloomFilter*? I don't believe the bloom filter false positive % value is configurable. Someone else might be able to throw more light on this. I believe if you want to keep disk seeks to 1 SSTable you will need

Re: how to reduce disk read? (and bloom filter performance)

2011-10-09 Thread Radim Kolar
On 7.10.2011 23:16, Mohit Anchlia wrote: You'll see output like: Offset SSTables 1 8021 2 783 Which means 783 read operations accessed 2 SSTables. Thank you for explaining it to me. I see this: Offset SSTables 1 59323 2
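
That histogram comes from nodetool; a minimal sketch of producing it, assuming placeholder keyspace/column family names MyKeyspace and MyCF:

    # Per-CF histograms, including how many SSTables each read touched
    nodetool -h localhost cfhistograms MyKeyspace MyCF
    # In the SSTables column, Offset is the number of SSTables a read touched and the
    # value is how many reads touched that many; ideally nearly everything sits at offset 1.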

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread Radim Kolar
On 16.9.2011 8:20, Yang wrote: I looked at the JMX attributes CFS.BloomFilterFalseRatio, it's 1.0, and BloomFilterFalsePositives, it's 2810. Is it possible to query this bloom filter false ratio from the command line?

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread aaron morton
Off the top of my head, it's not exposed via nodetool. You can get it via HTTP if you install mx4j, or you could try http://wiki.cyclopsgroup.org/jmxterm Cheers - Aaron Morton Freelance Cassandra Developer @aaronmorton http://www.thelastpickle.com On 7/10/2011, at 8:09 PM,
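
A minimal non-interactive jmxterm sketch for reading those attributes, assuming Cassandra's default JMX port 7199, placeholder keyspace/column family names, and whatever jmxterm uber-jar you downloaded (jmxterm-uber.jar below is a placeholder):

    # Read the per-CF bloom filter counters over JMX (bean name as exposed by Cassandra 1.0)
    echo "get -b org.apache.cassandra.db:type=ColumnFamilies,keyspace=MyKeyspace,columnfamily=MyCF BloomFilterFalseRatio RecentBloomFilterFalseRatio" \
      | java -jar jmxterm-uber.jar -l localhost:7199 -n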

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread Radim Kolar
On 7.10.2011 10:04, aaron morton wrote: Off the top of my head, it's not exposed via nodetool. You can get it via HTTP if you install mx4j, or you could try http://wiki.cyclopsgroup.org/jmxterm I have MX4J/HTTP but can't find that info in the listing. I suspect that the bloom filter
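
If the mx4j HttpAdaptor is on its default port 8081, the attributes should appear on the MBean page itself; a sketch with placeholder keyspace/column family names (the exact view URL may vary by mx4j version):

    # Fetch the column family MBean page from mx4j and filter for the bloom filter attributes
    curl -s "http://localhost:8081/mbean?objectname=org.apache.cassandra.db:type=ColumnFamilies,keyspace=MyKeyspace,columnfamily=MyCF" \
      | grep -i bloomfilter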

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread Mohit Anchlia
Check your disk utilization using iostat. Also, check whether compactions are causing reads to be slow. Check GC too. You can look at the cfhistograms output or post it here. On Fri, Oct 7, 2011 at 1:44 AM, Radim Kolar h...@sendmail.cz wrote: On 7.10.2011 10:04, aaron morton wrote: Off the top of
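
A rough sketch of those checks, assuming the default Cassandra log location:

    # Per-device utilization and wait times, refreshed every 5 seconds
    iostat -x 5
    # See whether compactions are running and how much work is pending
    nodetool -h localhost compactionstats
    # Long GC pauses are reported by GCInspector in the system log
    grep GCInspector /var/log/cassandra/system.log | tail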

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread Radim Kolar
On 7.10.2011 15:55, Mohit Anchlia wrote: Check your disk utilization using iostat. Also, check whether compactions are causing reads to be slow. Check GC too. You can look at the cfhistograms output or post it here. I don't know how to interpret cfhistograms. Can you write it up on the wiki?

Re: how to reduce disk read? (and bloom filter performance)

2011-10-07 Thread Mohit Anchlia
You'll see output like: Offset SSTables 1 8021 2 783 Which means 783 read operations accessed 2 SSTables. On Fri, Oct 7, 2011 at 2:03 PM, Radim Kolar h...@sendmail.cz wrote: On 7.10.2011 15:55, Mohit Anchlia wrote: Check your disk utilization using

how to reduce disk read? (and bloom filter performance)

2011-09-16 Thread Yang
After I put my Cassandra cluster under heavy load (1k/s writes + 1k/s reads) for 1 day, I accumulated about 30GB of data in SSTables. I think the caches have warmed up to their stable state. When I started this, I manually catted all the SSTables to /dev/null so that they were loaded into memory (the
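
For reference, that warm-up step amounts to roughly the following, assuming the default 1.0 data directory layout:

    # Read every SSTable data file once so it ends up in the OS page cache
    cat /var/lib/cassandra/data/*/*-Data.db > /dev/null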