I'm currently working on configuring AppDynamics to monitor Cassandra. It
does byte-code instrumentation, so an agent is added to the Cassandra JVM,
which makes it possible to capture latency for requests and see where the
bottleneck is coming from. We have been using it on our other Java apps,
and they have a free version to try it out. It doesn't track Thrift calls
out of the box, but I'm encouraging AppDynamics to figure out a way to do
that, and in the meantime I'm working on a config for capturing the entry
points.
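
For reference, this is roughly how a byte-code instrumentation agent hooks
into a JVM; the class and package names below are mine, not AppDynamics
internals, so treat it as an illustrative sketch only. It registers a
transformer and just logs the Thrift-facing Cassandra classes as they get
loaded, which is where the request entry points live:

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Minimal java.lang.instrument agent sketch. It does not rewrite any
// bytecode; it only prints the Thrift-facing Cassandra classes as they
// load, which is where an entry-point config would need to hook in.
public final class EntryPointAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Class names arrive in internal form, e.g.
                // "org/apache/cassandra/thrift/CassandraServer".
                if (className != null
                        && className.startsWith("org/apache/cassandra/thrift/")) {
                    System.out.println("candidate entry point: " + className);
                }
                // null means "leave the class unchanged"; a real agent
                // would return rewritten bytecode here.
                return null;
            }
        });
    }
}

Packaged with a Premain-Class manifest entry and attached with -javaagent
in the Cassandra startup script's JVM options, it at least shows which
classes an entry-point config would need to name.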

The way the page cache works is that pages stay in memory linked to a
specific file. If you delete that file, all of its pages are considered
invalid at that point, so they get zeroed out and go to the start of the
free list. Compaction creates a new file first (which competes with
existing read traffic to keep its pages in memory) and then removes the
old files that were being merged, so at that point there is a supply of
blank pages, but disk reads will be needed to warm up the cache again.
The use case I'm working with is more like a persistent memcached
replacement, so we are trying to have more RAM than data on m2.4xlarge
EC2 instances (~70GB) and keep all reads in memory all the time.
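
One thing we may end up doing (it's not something Cassandra does for you)
is pre-warming freshly compacted SSTables by reading them sequentially
before they take read traffic, so the disk reads happen once up front
instead of against live requests. A rough sketch, with the path just an
example:

import java.io.FileInputStream;
import java.io.IOException;

// Rough sketch: sequentially read a freshly compacted SSTable so the
// kernel pulls its pages into the page cache before it serves reads.
public final class SSTableWarmer {
    public static void warm(String path) throws IOException {
        byte[] buf = new byte[1 << 20]; // 1 MB read chunks
        FileInputStream in = new FileInputStream(path);
        try {
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n; // data is discarded; the useful side effect
            }                // is that the pages are resident afterwards
            System.out.println("warmed " + total + " bytes from " + path);
        } finally {
            in.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Point this at the new -Data.db file produced by compaction.
        warm(args[0]);
    }
}

This only helps if there really is more RAM than data, which is the whole
point of the sizing above.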

Adrian

On 12/19/10 5:36 AM, "Peter Schuller" <peter.schul...@infidyne.com> wrote:

>> How / what are you monitoring? Best practices someone?
>
>I recently set up monitoring using the cassandra-munin-plugins
>(https://github.com/jamesgolick/cassandra-munin-plugins). However, due
>to various little details it wasn't much fun to integrate properly
>with munin-node-configure and automated configuration management.
>Another issue is that a JVM is started for each use of jmxquery, which
>can become a problem with many column families.
>
>I like your web server idea. Something persistent that can sit there
>and do the JMX acrobatics, and expose something more easily consumed
>for stuff like munin/zabbix/etc. It would be pretty nice to have that
>out of the box with Cassandra, though I expect that would be
>considered bloat. :)
>
>-- 
>/ Peter Schuller
>
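
To make the persistent web server idea concrete, something like the sketch
below could stay connected to Cassandra's JMX port and answer plain-text
HTTP requests that munin/zabbix can scrape, avoiding the per-poll JVM
startup that jmxquery pays. The JMX port and the MBean/attribute names are
assumptions on my part; check them with jconsole against the version you
actually run:

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

// Long-running JMX-to-HTTP bridge sketch: connect to Cassandra's JMX
// endpoint once, then serve plain text that munin/zabbix/etc. can poll
// cheaply. MBean and attribute names below are assumptions to verify.
public final class JmxHttpBridge {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8080/jmxrmi");
        final MBeanServerConnection mbs =
                JMXConnectorFactory.connect(url).getMBeanServerConnection();
        final ObjectName storageProxy =
                new ObjectName("org.apache.cassandra.db:type=StorageProxy");

        HttpServer http = HttpServer.create(new InetSocketAddress(8778), 0);
        http.createContext("/metrics", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                String body;
                try {
                    body = "recent_read_latency_micros "
                             + mbs.getAttribute(storageProxy, "RecentReadLatencyMicros") + "\n"
                         + "recent_write_latency_micros "
                             + mbs.getAttribute(storageProxy, "RecentWriteLatencyMicros") + "\n";
                } catch (Exception e) {
                    body = "error " + e + "\n";
                }
                byte[] bytes = body.getBytes("UTF-8");
                exchange.sendResponseHeaders(200, bytes.length);
                OutputStream out = exchange.getResponseBody();
                out.write(bytes);
                out.close();
            }
        });
        http.start();
    }
}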
