Re: Seeing writes when only expecting reads

2012-07-20 Thread jmodha
Thanks for the reply Aaron.

I was thinking along the same lines as well, since it's only specific nodes
that were showing excessive writes during the heavy read operations.

We will be performing the same exercise again today. Where within the JMX
info can I see whether a specific node is performing a lot of read repairs?
I will look out for this and report back if I see the same problem again.
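For when we re-run the test, my plan is to pull the ReadRepairStage line out of `nodetool tpstats` on each node while the reads are running. A small sketch of the filter, run here against a canned sample (the `filter_read_repair` helper name and the counts are mine; column layout per the 1.x tpstats output):

```shell
#!/bin/sh
# Filter the ReadRepairStage counters out of `nodetool tpstats` output.
filter_read_repair() {
  # tpstats columns (1.x): Pool Name, Active, Pending, Completed, ...
  awk '$1 == "ReadRepairStage" {print "read repairs completed:", $4}'
}

# Canned sample standing in for real `nodetool tpstats` output:
cat <<'EOF' | filter_read_repair
ReadStage                0    0    1234567
ReadRepairStage          0    0      98765
MutationStage            0    0    7654321
EOF
# prints: read repairs completed: 98765
```

Against a live node this would be `nodetool -h <host> tpstats | filter_read_repair`; a node doing a lot of read repair should show that completed count climbing during the read load.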



--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Seeing-writes-when-only-expecting-reads-tp7581333p7581360.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at 
Nabble.com.


Re: BulkLoading SSTables and compression

2012-07-02 Thread jmodha
Thanks Sylvain.

I had a look at a node that we streamed data to, and I do indeed see the
*-CompressionInfo.db files.

However, prior to running the upgradesstables command the total size of
all the SSTables was 27GB, and afterwards it's 12GB.

So even though the CompressionInfo files were there immediately after bulk
loading the data, the data wasn't really compressed?

Can you think of anything else I can try to confirm this is indeed a bug?
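One thing I can check directly on disk is the SSTable components and total Data.db size in the CF's data directory, before and after running upgradesstables. A throwaway sketch (the `sstable_report` helper is mine, not a Cassandra tool, and the demo directory stands in for the real data directory):

```shell
#!/bin/sh
# Count SSTable components and total Data.db size in a CF data directory.
sstable_report() {
  dir="$1"
  n_data=$(ls "$dir"/*-Data.db 2>/dev/null | wc -l | tr -d ' ')
  n_comp=$(ls "$dir"/*-CompressionInfo.db 2>/dev/null | wc -l | tr -d ' ')
  size_kb=$(du -ck "$dir"/*-Data.db 2>/dev/null | awk '/total/ {print $1}')
  echo "Data.db: $n_data, CompressionInfo.db: $n_comp, total: ${size_kb:-0} KB"
}

# Demo on a throwaway directory standing in for .../data/<Keyspace>/<CF>:
demo=$(mktemp -d)
touch "$demo/test-hc-1-Data.db" "$demo/test-hc-1-CompressionInfo.db"
sstable_report "$demo"   # Data.db: 1, CompressionInfo.db: 1, total: 0 KB
rm -r "$demo"
```

If the CompressionInfo components exist but the Data.db total still drops by more than half after upgradesstables, that would match what we're seeing.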

Out of interest, we're not specifying a chunk size in the schema (in the
hope that it will just use the default of 64KB), so it reads something
like:

create column family test
  with column_type = 'Standard'
  and comparator = 'BytesType'
  and default_validation_class = 'UTF8Type'
  and key_validation_class = 'BytesType'
  and compaction_strategy = 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'
  and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'};

Would this cause any issues? 



--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/BulkLoading-SSTables-and-compression-tp7580849p7580933.html


Re: BulkLoading SSTables and compression

2012-07-02 Thread jmodha
Just to clarify: the data that we're loading the SSTables from (v1.0.3)
doesn't have compression enabled on any of the CFs.

So in theory the compression should occur on the receiving end (v1.1.1),
since we're going from uncompressed data to compressed data.

So I'm not sure that the bug you mention is causing the behaviour we're
seeing here.

The only thing I can think of is that upgradesstables follows a slightly
different path from the bulk loader when generating the SSTables that are
flushed to disk?

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/BulkLoading-SSTables-and-compression-tp7580849p7580938.html


Re: BulkLoading SSTables and compression

2012-07-01 Thread jmodha
Sure. Before I create a ticket, is there a way I can confirm that the
SSTables are indeed not compressed, other than running the rebuildsstables
nodetool command (and observing the live size go down)?
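One heuristic I could try in the meantime: gzip a Data.db file and compare sizes. Data already Snappy-compressed by Cassandra should barely shrink further, while uncompressed data typically shrinks a lot. A sketch (the `ratio` helper is mine; the demo file stands in for an uncompressed Data.db):

```shell
#!/bin/sh
# Compare a file's raw size against its gzipped size. A large reduction
# suggests the file was stored uncompressed.
ratio() {
  raw=$(wc -c < "$1" | tr -d ' ')
  gz=$(gzip -c "$1" | wc -c | tr -d ' ')
  echo "$1: $raw bytes raw, $gz bytes gzipped"
}

# Demo with a highly compressible dummy file standing in for an
# uncompressed Data.db; the gzipped size comes out a tiny fraction of raw:
demo=$(mktemp)
yes "uncompressed row data" | head -n 1000 > "$demo"
ratio "$demo"
rm "$demo"
```

It's only a rough signal (real row data won't compress as dramatically as the demo file), but a Data.db that gzips down to well under half its size probably wasn't Snappy-compressed on disk.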

Thanks.

--
View this message in context: 
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/BulkLoading-SSTables-and-compression-tp7580849p7580922.html