On Wednesday, August 13, 2014, Robert Coli wrote:
> On Wed, Aug 13, 2014 at 5:53 AM, Ruchir Jha wrote:
>
>> We are adding nodes currently and it seems like compression is falling
>> behind. I judge that by the fact that the new node which has a 4.5T disk
Hi,
I see a lot of activity around the OpsCenter_rollups CFs in the logs. Why
is there so much OpsCenter work happening? Is there a way to disable it,
and what's the impact?
Ruchir.
All,
I am trying to use the new Astyanax over Java driver to connect to
Cassandra version 1.2.12. The following settings are turned on in
cassandra.yaml:
start_rpc: true
native_transport_port: 9042
start_native_transport: true
*Code to connect:*
final Supplier<List<Host>> hostSupplier = new Supplier<List<Host>>() {
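The code above is cut off in the archive. A minimal self-contained sketch of the same pattern, assuming the astyanax-cql module (CqlFamilyFactory, JavaDriverConfigBuilder) and hypothetical seed/cluster/keyspace names:

import java.util.Collections;
import java.util.List;
import com.google.common.base.Supplier;
import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.Host;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.cql.CqlFamilyFactory;
import com.netflix.astyanax.cql.JavaDriverConfigBuilder;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;

// Hypothetical contact point; 9042 is the native transport port enabled
// in the cassandra.yaml settings above.
final Supplier<List<Host>> hostSupplier = new Supplier<List<Host>>() {
    @Override
    public List<Host> get() {
        return Collections.singletonList(new Host("10.10.20.1", 9042));
    }
};

AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster("TestCluster")
    .forKeyspace("test_ks")
    .withHostSupplier(hostSupplier)
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.DISCOVERY_SERVICE))
    .withConnectionPoolConfiguration(new JavaDriverConfigBuilder().build())
    .buildKeyspace(CqlFamilyFactory.getInstance());
context.start();
Keyspace keyspace = context.getClient();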
That exception is on the Cassandra server and not on the client.
On Mon, Oct 6, 2014 at 2:10 PM, DuyHai Doan wrote:
> java.lang.NoSuchMethodError -> Jar dependency issue probably. Did you try
> to create an issue on the Astyanax github repo ?
>
> On Mon, Oct 6, 2014 at 6:01
We have a column family that has about 800K rows, with on average about a
million columns per row. I am interested in getting all the row keys in this
column family, and I am using the following Astyanax code snippet to do this.
This query never finishes (we ran it for 2 days and it did not complete).
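The snippet itself is elided in the archive; the usual Astyanax recipe for walking all rows is AllRowsReader, roughly like this (a sketch; the column family name is hypothetical, and a zero-length column range fetches keys only):

import com.google.common.base.Function;
import com.netflix.astyanax.model.ColumnFamily;
import com.netflix.astyanax.model.Row;
import com.netflix.astyanax.recipes.reader.AllRowsReader;
import com.netflix.astyanax.serializers.StringSerializer;

ColumnFamily<String, String> CF_WIDE = new ColumnFamily<String, String>(
    "my_wide_cf", StringSerializer.get(), StringSerializer.get());

// call() throws Exception; handle or declare it in real code
boolean finished = new AllRowsReader.Builder<String, String>(keyspace, CF_WIDE)
    .withPageSize(100)                       // rows fetched per token-range page
    .withColumnRange(null, null, false, 0)   // 0 columns: row keys only
    .forEachRow(new Function<Row<String, String>, Boolean>() {
        @Override
        public Boolean apply(Row<String, String> row) {
            System.out.println(row.getKey());
            return true;  // returning false aborts the iteration
        }
    })
    .build()
    .call();

With ~800K rows averaging a million columns each, restricting the column range to zero columns matters: a naive scan that materializes whole rows is exactly the kind of query that runs for days.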
Hi,
I am trying to investigate ParNew promotion failures that happen routinely in
production. As part of this exercise, I enabled
-XX:+PrintClassHistogramBeforeFullGC and saw the following output. As you can
see, there are a ton of Columns, ExpiringColumns and DeletedColumns before the
GC ran, and these numbers
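For reference, that histogram comes from a HotSpot flag, set in cassandra-env.sh roughly like this (a sketch):

# print a class histogram before every full GC
JVM_OPTS="$JVM_OPTS -XX:+PrintClassHistogramBeforeFullGC"
# useful companions when chasing ParNew promotion failures
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintPromotionFailure"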
No, we don't.
Sent from my iPhone
> On Apr 16, 2014, at 9:21 AM, Mark Reddy wrote:
>
> Do you delete and/or set TTLs on your data?
>
>
>> On Wed, Apr 16, 2014 at 2:14 PM, Ruchir Jha wrote:
>> Hi,
>>
>> I am trying to investigate ParN
Lowering CMSInitiatingOccupancyFraction below 75 (the default in
cassandra-env.sh) will lead to more GC interference and will impact write
performance. If you're not sensitive to this impact, your expectation is
correct; however, make sure your flush_largest_memtables_at is always set
to less than or equal to the occupancy fraction.
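Concretely, the relationship is between a JVM flag and a cassandra.yaml setting, roughly (a sketch; values are the usual defaults, not recommendations):

# cassandra-env.sh
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

# cassandra.yaml -- keep at or below the occupancy fraction above (75% = 0.75)
flush_largest_memtables_at: 0.75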
I tried to do this; however, the doubling in disk space is not "temporary"
as you state in your note. What am I missing?
On Fri, Apr 11, 2014 at 10:44 AM, William Oberman
wrote:
> So, if I was impatient and just "wanted to make this happen now", I could:
>
> 1.) Change GCGraceSeconds of the CF to
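The quoted steps are cut off in the archive; step 1 would look roughly like this in CQL (keyspace/table names hypothetical):

-- lower the tombstone grace period on the CF
ALTER TABLE my_ks.my_cf WITH gc_grace_seconds = 0;
-- then trigger compaction so droppable tombstones are actually purged:
-- nodetool compact my_ks my_cf

(During a major compaction the old and new SSTables coexist on disk, so usage can double until the old files are deleted.)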
Sent from my iPhone
We have these exact settings but are still seeing the broken pipe exception
in our logs. Any clues?
Sent from my iPhone
> On Jul 8, 2014, at 1:17 PM, Bhaskar Singhal wrote:
>
> Thanks Mark. Yes, 1024 is the limit. I haven't changed it to the
> recommended production settings.
>
>
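For context, the recommended production settings referenced above raise those limits well past 1024, e.g. in /etc/security/limits.conf (a sketch; check the documentation for your version's exact values):

# user running the Cassandra process
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
cassandra  -  nproc    32768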
We have a 12 node cluster and we are consistently seeing this exception being
thrown during peak write traffic. We have a replication factor of 3 and a write
consistency level of QUORUM. Also note there is no unusual GC or Full GC
activity during this time. Appreciate any help.
Sent from my iPhone
xception.
>
>
> On Fri, Jul 11, 2014 at 1:50 PM, Ruchir Jha wrote:
>
>> We have a 12 node cluster and we are consistently seeing this exception
>> being thrown during peak write traffic. We have a replication factor of 3
>> and a write consistency level of QUORUM. Also
'class': 'NetworkTopologyStrategy',
'datacenter1': '3'
};
On Fri, Jul 11, 2014 at 3:48 PM, Chris Lohfink
wrote:
> What replication strategy are you using? If you are using
> NetworkTopologyStrategy, double check that your DC names match up (they
> are case sensitive).
>
> Chris
>
> On Jul 11,
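A quick way to make the check Chris suggests (keyspace name hypothetical): compare the DC names stored with the keyspace against what the cluster reports:

-- in cqlsh: strategy_options holds the DC names the keyspace was defined with
SELECT keyspace_name, strategy_class, strategy_options
FROM system.schema_keyspaces WHERE keyspace_name = 'my_ks';
-- then compare, case sensitively, with the "Datacenter:" header printed by:
-- nodetool status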
DefaultReadConsistencyLevel: CL_QUORUM
DefaultWriteConsistencyLevel: CL_QUORUM
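Those defaults map to the Astyanax configuration object, roughly (a sketch):

import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.model.ConsistencyLevel;

// set on the configuration passed to AstyanaxContext.Builder
new AstyanaxConfigurationImpl()
    .setDefaultReadConsistencyLevel(ConsistencyLevel.CL_QUORUM)
    .setDefaultWriteConsistencyLevel(ConsistencyLevel.CL_QUORUM);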
On Fri, Jul 11, 2014 at 5:04 PM, Mark Reddy wrote:
> Can you post the output of nodetool status and your Astyanax connection
> settings?
>
>
> On Fri, Jul 11, 2014 at 9:06 PM, Ruchir Jha wrote:
>
>> This is how we crea
to
> cluster your app won't start using it until all bootstrapping and
> everything's settled down.
>
> Chris
>
> On Jul 14, 2014, at 12:04 PM, Ruchir Jha wrote:
>
> Mark,
>
> Here you go:
>
> *nodetool status:*
>
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
Really curious to know what's causing the spike in Columns and
DeletedColums below :
2014-07-28T09:30:27.471-0400: 127335.928: [Full GC 127335.928: [Class Histogram:
 num     #instances         #bytes  class name
----------------------------------------------
   1:     132626060     6366050880  j
wrote:
> What is your data size and number of columns in Cassandra? Do you do many
> deletions?
>
>
> On Mon, Jul 28, 2014 at 2:53 PM, Ruchir Jha wrote:
>
>> Really curious to know what's causing the spike in Columns and
>> DeletedColums below :
>>
Also, we do subsequent updates (at least 4) for each piece of data that we
write.
On Mon, Jul 28, 2014 at 10:36 AM, Ruchir Jha wrote:
> Doing about 5K writes / second. Avg Data Size = 1.6 TB / node. Total Data
> Size = 21 TB.
>
> And this is the nodetool cfstats output for one of
I am trying to bootstrap the thirteenth node in a 12 node cluster where the
average data size per node is about 2.1 TB. The bootstrap streaming has
been going on for 2 days now, and the disk size on the new node is already
above 4 TB and still growing. Is this because the new node is running major
compactions?
e seed list, it is generally advisable to use 3 seed
> nodes per AZ / DC.
>
> Cheers,
>
>
> On Mon, Aug 4, 2014 at 11:41 AM, Ruchir Jha wrote:
>
>> I am trying to bootstrap the thirteenth node in a 12 node cluster where
>> the average data size per node is about 2.1 T
>
>
> If you are using vnodes and you have num_tokens set to 256, the new node
> will take token ranges dynamically. What is the configuration of your other
> nodes: are you setting num_tokens or initial_token on those?
>
>
> Mark
>
>
> On Tue, Aug 5, 2014 at 2:57 PM, R
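For reference, the vnode configuration Mark describes lives in cassandra.yaml (a sketch):

# cassandra.yaml on every node, including the joining one
num_tokens: 256
# leave initial_token blank; with vnodes each node picks its 256 ranges itself
# initial_token: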
blocked.
On Tue, Aug 5, 2014 at 10:14 AM, Ruchir Jha wrote:
> Yes num_tokens is set to 256. initial_token is blank on all nodes
> including the new one.
>
>
> On Tue, Aug 5, 2014 at 10:03 AM, Mark Reddy
> wrote:
>
>> My understanding was that if initial_token is left em
Just ran this on the new node:
nodetool netstats | grep "Streaming from" | wc -l
10
Seems like the new node is receiving data from 10 other nodes. Is that
expected in a vnodes-enabled environment?
Ruchir.
On Tue, Aug 5, 2014 at 10:21 AM, Ruchir Jha wrote:
> Also not su
Sorry for the multiple updates, but another thing I found is that all the
other existing nodes have themselves in their seeds list, while the new node
does not have itself in its seeds list. Can that cause this issue?
On Tue, Aug 5, 2014 at 10:30 AM, Ruchir Jha wrote:
> Just ran this on the new n
> What is the current output of 'nodetool compactionstats'? Could you also
> paste the output of 'nodetool status'?
>
> Mark
>
>
>
> On Tue, Aug 5, 2014 at 3:59 PM, Ruchir Jha wrote:
>
>> Sorry for the multiple updates, but another thing I found was all the
12:13 PM, Ruchir Jha wrote:
> nodetool status:
>
> Datacenter: datacenter1
> =======================
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  Address    Load    Tokens  Owns (effective)  Host ID    Rack
> UN  10.10.20.
of
> 'iostat -x 5 5'
>
> If you do in fact have spare IO, there are several configuration options
> you can tune, such as increasing the number of flush writers and
> compaction_throughput_mb_per_sec
>
> Mark
>
>
> On Tue, Aug 5, 2014 at 5:22 PM, Ruchir Jha
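The two knobs Mark names are plain cassandra.yaml settings (a sketch; values illustrative, not recommendations):

# cassandra.yaml
memtable_flush_writers: 4              # default is 1 per data directory
compaction_throughput_mb_per_sec: 32   # default 16; 0 disables throttling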
Also, right now the "top" command shows that we are at 500-700% CPU, and we
have 23 total processors, which means we have a lot of idle CPU left over.
So throwing more threads at compaction and flush should alleviate the
problem?
On Tue, Aug 5, 2014 at 2:57 PM, Ruchir Jha wrote:
>
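If CPU really is the spare resource, "more threads" maps mainly to concurrent_compactors, and the compaction throttle can also be lifted at runtime (a sketch; values illustrative):

# cassandra.yaml -- defaults to the number of cores
concurrent_compactors: 12

# or, without a restart, remove the throughput cap temporarily:
nodetool setcompactionthroughput 0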
olumn
   6:        31623      498012768  [Lorg.apache.cassandra.io.compress.CompressionMetadata$Chunk;
On Tue, Aug 5, 2014 at 2:59 PM, Ruchir Jha wrote:
> Also, right now the "top" command shows that we are at 500-700% CPU, and
> we have 23 total processors, which means we have a lot o
Hello,
We are currently on C* 1.2 and are using the SnappyCompressor for all our
CFs. Total data size is 24 TB across a 12 node cluster; average node size
is 2 TB.
We are adding nodes currently and it seems like compression is falling
behind. I judge that by the fact that the new node which has