Re: Node is UNREACHABLE after decommission

2020-09-17 Thread Krish Donald
Thanks Paulo,

We have to decommission multiple nodes from the cluster and move those
nodes to other clusters.
So if we have to wait 3 days for every node, then it is going to take a
lot of time.
If I try to add the decommissioned node to the other cluster, it gives me
an error that cluster_name does not match, even though the cluster name is
set correctly for the new cluster.
So until I issue assassinate, I am not able to move forward.
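
For now, the way I am double-checking before reusing a node (a rough
sketch; the IP and paths below are placeholders for our actual values):

# on any live node of the old cluster: is the decommissioned node still in gossip?
nodetool gossipinfo | grep -A 5 "/10.0.0.15"

# on the node being moved: wipe the old state (including the old cluster_name
# recorded in the system keyspace) before pointing it at the new cluster;
# paths are the stock package defaults
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*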

On Thu, Sep 17, 2020 at 1:13 PM Paulo Motta 
wrote:

> After decommissioning, the node remains in gossip for a period of 3 days
> (if I recall correctly) and it will show up on describecluster during that
> period, so this is expected behavior. This allows other nodes that
> happened to be down when the node decommissioned to learn that this node
> left the cluster.
>
> What assassinate does is remove the node from gossip, so that's why it no
> longer shows up on describecluster, but this shouldn't be necessary. You
> should check that the node successfully decommissioned if it doesn't show
> up on "nodetool status".
>
> On Thu, Sep 17, 2020 at 2:26 PM, Krish Donald
> wrote:
>
>> We are on 3.11.5 open-source Cassandra
>>
>> On Thu, Sep 17, 2020 at 10:25 AM Krish Donald 
>> wrote:
>>
>>> Hi,
>>>
>>> We decommissioned a node from the cluster.
>>> The system.log on the decommissioned node said the node had been
>>> decommissioned.
>>> But just a couple of minutes later, the node is showing as UNREACHABLE on
>>> the rest of the nodes when we issue nodetool describecluster.
>>>
>>> nodetool status does not show the node, however nodetool describecluster
>>> shows it as UNREACHABLE.
>>>
>>> I tried nodetool assassinate and now the node does not show in nodetool
>>> describecluster, however that seems like it should be a last resort.
>>>
>>> Ideally it should leave the cluster immediately after decommission.
>>> Once decommission is complete as per the log, is there any issue with
>>> issuing nodetool assassinate?
>>>
>>> Thanks
>>>
>>>


Re: Node is UNREACHABLE after decommission

2020-09-17 Thread Krish Donald
We are on 3.11.5 open-source Cassandra

On Thu, Sep 17, 2020 at 10:25 AM Krish Donald  wrote:

> Hi,
>
> We decommissioned a node from the cluster.
> The system.log on the decommissioned node said the node had been
> decommissioned.
> But just a couple of minutes later, the node is showing as UNREACHABLE on
> the rest of the nodes when we issue nodetool describecluster.
>
> nodetool status does not show the node, however nodetool describecluster
> shows it as UNREACHABLE.
>
> I tried nodetool assassinate and now the node does not show in nodetool
> describecluster, however that seems like it should be a last resort.
>
> Ideally it should leave the cluster immediately after decommission.
> Once decommission is complete as per the log, is there any issue with
> issuing nodetool assassinate?
>
> Thanks
>
>


Node is UNREACHABLE after decommission

2020-09-17 Thread Krish Donald
Hi,

We decommissioned a node from the cluster.
The system.log on the decommissioned node said the node had been
decommissioned.
But just a couple of minutes later, the node is showing as UNREACHABLE on
the rest of the nodes when we issue nodetool describecluster.

nodetool status does not show the node, however nodetool describecluster
shows it as UNREACHABLE.

I tried nodetool assassinate and now the node does not show in nodetool
describecluster, however that seems like it should be a last resort.

Ideally it should leave the cluster immediately after decommission.
Once decommission is complete as per the log, is there any issue with
issuing nodetool assassinate?

Thanks


How to know if we need to increase heap size?

2020-08-20 Thread Krish Donald
Hi,

We have a cluster where, if reads suddenly increase 2-3x, the Cassandra CPU
goes to around 100% (we have 48-CPU machines with 128GB RAM) on a few nodes
and Cassandra becomes unresponsive.
We are on 3.11.5 and using G1GC with a 16GB heap size.
Going through system.log and gc.log, I see system.log printing messages like
the ones below every 5 seconds (I have removed the lines for many keyspaces
to reduce the size of the text), and a lot of messages are getting printed
in gc.log. I feel I may need to increase the heap size on these nodes, but I
wanted to understand how we determine whether the heap size should be
increased or not. The nodes are not dying due to OOMs. When we have OOMs, we
know for sure we need to increase the heap size, but *what should we look at
in gc.log, system.log and debug.log to determine whether we have to increase
the heap size?*
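
For context, this is how I have been pulling pause times out of gc.log so
far (a rough sketch; the pattern assumes the stock JDK 8
-XX:+PrintGCDetails style log lines and may need adjusting for other
logging flags):

# list the longest stop-the-world pauses recorded in gc.log
grep -Eo '[0-9]+\.[0-9]+ secs]' gc.log | sort -rn | head -20

If the pauses stay short even while the dropped-READ messages below are
being printed, I assume the bottleneck is more likely read concurrency or
disk than heap, but I would like to confirm that reading as well.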

INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,368 MessagingService.java:1246 - READ messages were dropped in last 5000 ms: 199 internal and 232 cross node. Mean internal dropped latency: 10443 ms and Mean cross-node dropped latency: 10402 ms
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,369 StatusLogger.java:47 - Pool Name                      Active   Pending   Completed   Blocked   All Time Blocked
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,377 StatusLogger.java:51 - MutationStage                       0         0    80051890         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,378 StatusLogger.java:51 - ViewMutationStage                   0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,378 StatusLogger.java:51 - ReadStage                         192      1331   152624049         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,378 StatusLogger.java:51 - RequestResponseStage                0         0   172822890         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,378 StatusLogger.java:51 - ReadRepairStage                     0         0     1545869         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,379 StatusLogger.java:51 - CounterMutationStage                0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,379 StatusLogger.java:51 - MiscStage                           0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,379 StatusLogger.java:51 - CompactionExecutor                  0         0      623536         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,379 StatusLogger.java:51 - MemtableReclaimMemory               0         0        6700         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,380 StatusLogger.java:51 - PendingRangeCalculator              0         0          18         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,380 StatusLogger.java:51 - GossipStage                         0         0     1613366         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,380 StatusLogger.java:51 - SecondaryIndexManagement            0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,380 StatusLogger.java:51 - HintsDispatcher                     0         0           5         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,381 StatusLogger.java:51 - MigrationStage                      0         0           1         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,381 StatusLogger.java:51 - MemtablePostFlush                   0         0       14830         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,381 StatusLogger.java:51 - PerDiskMemtableFlushWriter_0        0         0        6700         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,381 StatusLogger.java:51 - ValidationExecutor                  0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,382 StatusLogger.java:51 - Sampler                             0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,382 StatusLogger.java:51 - MemtableFlushWriter                 0         0        6700         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,382 StatusLogger.java:51 - InternalResponseStage               0         0       33229         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,383 StatusLogger.java:51 - AntiEntropyStage                    0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,383 StatusLogger.java:51 - CacheCleanupExecutor                0         0           0         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,383 StatusLogger.java:51 - Native-Transport-Requests         661         0    84577742         0          0
INFO  [ScheduledTasks:1] 2020-08-19 08:13:12,383 StatusLogger.java:61 - CompactionManager
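
(For reference, I have also been watching the same pools live with nodetool
rather than waiting for the StatusLogger lines; this is just stock nodetool:)

# same thread-pool view plus dropped-message counters, on demand
nodetool tpstats
# GC pause counts and maximum pause since the previous invocation
nodetool gcstats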

Re: Hints replays very slow in one DC

2020-02-27 Thread Krish Donald
Thanks everyone for the responses.
How can I debug the GC issue further?
Is there any known GC issue present in 3.11.0?
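
For reference, the first things I was planning to check (a sketch; the log
paths assume the default package install):

# GCInspector summarizes pauses into system.log (INFO for pauses over ~200ms, if I recall)
grep GCInspector /var/log/cassandra/system.log | tail -50
# and correlate with the failure-detector local-pause warnings
grep "local pause" /var/log/cassandra/system.log | tail -20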

On Thu, Feb 27, 2020 at 8:46 AM Reid Pinchback 
wrote:

> Our experience with G1GC was that 31gb wasn’t optimal (for us) because
> while you have less frequent full GCs they are bigger when they do happen.
> But even so, not to the point of a 9.5s full collection.
>
>
>
> Unless it is a rare event associated with something weird happening
> outside of the JVM (there are some whacky interactions between memory and
> dirty page writing that could cause it, but not typically), then that is
> evidence of a really tough fight to reclaim memory.  There are a lot of
> things that can impact garbage collection performance.  Something is either
> being pushed very hard, or something is being constrained very tightly
> compared to resource demand.
>
>
>
> I’m with Erick, I wouldn’t be putting my attention right now on anything
> but the GC issue. Everything else that happens within the JVM envelope is
> going to be a misread on timing until you have stable garbage collection.
> You might have other issues later, but you aren’t going to know what those
> are yet.
>
>
>
> One thing you could at least try to eliminate quickly as a factor.  Are
> repairs running at the time that things are slow?  Prior to 3.11.5 you
> lack one of the tuning knobs for doing a tradeoff on memory vs network
> bandwidth when doing repairs.
>
>
>
> I’d also make sure you have tuned C* to migrate whatever you reasonably
> can to be off-heap.
>
>
>
> Another thought for surprise demands on memory.  I don’t know if this is
> in 3.11.0, you’ll have to check the C* bash scripts for launching the
> service.  The number of malloc arenas hasn't always been curtailed, and
> that could result in an explosion in memory demand.  I just don’t recall
> where in C* version history that was addressed.
>
>
>
>
>
> *From: *Erick Ramirez 
> *Reply-To: *"user@cassandra.apache.org" 
> *Date: *Wednesday, February 26, 2020 at 9:55 PM
> *To: *"user@cassandra.apache.org" 
> *Subject: *Re: Hints replays very slow in one DC
>
>
> Nodes are going down due to Out of Memory. We are using a 31GB heap size
> in DC1, however DC2 (which serves the traffic) has a 16GB heap.
>
> The reason we had to increase the heap in DC1 is that DC1 nodes were going
> down due to an Out of Memory issue, but DC2 nodes never went down.
>
>
>
> It doesn't sound right that the primary DC is DC2 but DC1 is under load.
> You might not be aware of it but the symptom suggests DC1 is getting hit
> with lots of traffic. If you run netstat (or whatever utility/tool of
> your choice), you should see established connections to the cluster. That
> should give you clues as to where it's coming from.
>
>
>
> We also noticed messages like the below in system.log:
>
> FailureDetector.java:288 - Not marking nodes down due to local pause of
> 9532654114 > 50
>
>
>
> That's another smoking gun that the nodes are buried in GC. A 9.5-second
> pause is significant. The slow hinted handoffs are really the least of your
> problems right now. If nodes weren't going down, there wouldn't be hints to
> hand off in the first place. Cheers!
>
>


Re: Hints replays very slow in one DC

2020-02-26 Thread Krish Donald
Nodes are going down due to Out of Memory. We are using a 31GB heap size
in DC1, however DC2 (which serves the traffic) has a 16GB heap.
The reason we had to increase the heap in DC1 is that DC1 nodes were going
down due to an Out of Memory issue, but DC2 nodes never went down.

We also noticed messages like the below in system.log:
FailureDetector.java:288 - Not marking nodes down due to local pause of
9532654114 > 50



On Tue, Feb 25, 2020 at 9:43 PM Erick Ramirez 
wrote:

> What's the reason for nodes going down? Is it because the cluster is
> overloaded? Hints will get handed off periodically when nodes come back to
> life but if they happen to go down again or become unresponsive (for
> whatever reason), the handoff will be delayed until the next cycle. I think
> it's every 5 minutes but don't quote me.
>
> Hinted MV updates can be problematic so it is a symptom but with limited
> info, I'm not sure that it's the cause for slow handoffs. Cheers!
>
>>


Re: Hints replays very slow in one DC

2020-02-25 Thread Krish Donald
DC2 is our main datacenter, which serves all the traffic.
This cluster has materialized views.


On Tue, Feb 25, 2020 at 9:32 PM Erick Ramirez 
wrote:

> Krish, with the limited info and assuming things like hint throttle and
> delivery threads all being equal, my guess would be DC1 is your primary DC
> and is busier than DC2. Got any diagnostic data/troubleshooting info you
> could share? Otherwise, it's a little difficult to speculate as to what may
> be going on. Cheers!
>


Hints replays very slow in one DC

2020-02-25 Thread Krish Donald
Hi,

We have 2 datacenters in our Cassandra cluster.
Whenever a node goes down in DC1 and hints get collected on all the other
nodes, we have noticed that hint replay to the DC1 node is very, very slow;
but if a node goes down in DC2 and comes back, the hints replay quickly.

We are on 3.11.0.
We are using G1GC with a heap size of 31GB on the DC1 nodes.

Any suggestions on OS-related parameters I should look at?
We use Chef, so all the Cassandra parameters are the same on all nodes.
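
For reference, the hint-related knobs as I understand the stock defaults
(please correct me if these are off):

# cassandra.yaml (believed defaults):
#   hinted_handoff_throttle_in_kb: 1024   # throttled further as the cluster grows, per the yaml comments
#   max_hints_delivery_threads: 2
# raising the throttle at runtime to test replay speed (the value is just an example):
nodetool sethintedhandoffthrottlekb 10240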

Thanks


Re: What is "will be anticompacted on range" ?

2020-02-10 Thread Krish Donald
Thanks Jeff. But we are running repair using the command below; how do we
know whether incremental repair is enabled?

repair -full -pr
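
One way to check, if I understand Jeff's explanation correctly (a sketch;
the sstable path is a placeholder for one of the table's data files):

# "Repaired at" stays 0 on sstables that have never been marked by incremental repair
sstablemetadata /var/lib/cassandra/data/customer/profile-*/mc-79976-big-Data.db | grep -i "repaired at"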

Thanks
KD

On Mon, Feb 10, 2020 at 10:09 AM Jeff Jirsa  wrote:

> Incremental repair is splitting the data it repaired from the data it
> didn't repair so it can mark the repaired data with a repairedAt timestamp
> annotation on the data file / sstable.
>
>
> On Mon, Feb 10, 2020 at 9:39 AM Krish Donald  wrote:
>
>> Hi,
>>
>> I noticed a few messages in system.log like the one below:
>> INFO  [CompactionExecutor:21] 2020-02-08 17:56:16,998
>> CompactionManager.java:677 - [repair #fb044b01-4ab5-11ea-a736-a367dba4ed71]
>> SSTable BigTableReader(path='xyz/mc-79976-big-Data.db')
>> ((-8828745000913291684,8954981413747359495]) will be anticompacted on range
>> (1298637302462891853,1299655718091763872]
>>
>> And compactionstats was showing the following:
>> id                                   compaction type              keyspace  table    completed     total         unit   progress
>> 82ee9720-3c86-11ea-adda-b11edeb80235 Anticompaction after repair  customer  profile  182882813624  196589990177  bytes  93.03%
>>
>> We are on 3.11.
>>
>> What is the meaning of the compaction type "Anticompaction after repair"?
>> I haven't noticed this in the 2.x versions.
>>
>> Thanks
>> KD
>>
>>


What is "will be anticompacted on range" ?

2020-02-10 Thread Krish Donald
Hi,

I noticed a few messages in system.log like the one below:
INFO  [CompactionExecutor:21] 2020-02-08 17:56:16,998
CompactionManager.java:677 - [repair #fb044b01-4ab5-11ea-a736-a367dba4ed71]
SSTable BigTableReader(path='xyz/mc-79976-big-Data.db')
((-8828745000913291684,8954981413747359495]) will be anticompacted on range
(1298637302462891853,1299655718091763872]

And compactionstats was showing the following:
id                                   compaction type              keyspace  table    completed     total         unit   progress
82ee9720-3c86-11ea-adda-b11edeb80235 Anticompaction after repair  customer  profile  182882813624  196589990177  bytes  93.03%

We are on 3.11.

What is the meaning of the compaction type "Anticompaction after repair"?
I haven't noticed this in the 2.x versions.

Thanks
KD


Re: Cassandra Repair question

2019-10-18 Thread Krish Donald
Thanks Manish,

What is the best and fastest way to repair a table using nodetool repair?
We are using 256 vnodes.
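
What we are doing today, for reference (a sketch; the keyspace and table
names are placeholders):

# full (non-incremental) repair of one table, primary ranges only,
# run on every node in a rolling fashion
nodetool repair -full -pr my_keyspace my_table

We have also been reading about splitting the work into token subranges
with -st/-et, but have not tried that at scale yet.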


On Fri, Oct 18, 2019 at 10:05 PM manish khandelwal <
manishkhandelwa...@gmail.com> wrote:

> No, it will only cover the primary ranges of the nodes in that single rack.
> Repair with the -pr option is to be run on all nodes in a rolling manner.
>
> Regards
> Manish
>
> On 19 Oct 2019 10:03, "Krish Donald"  wrote:
>
>> Hi Cassandra experts,
>>
>>
>> We are on Cassandra 3.11.1.
>>
>> We have to run repairs for a big cluster.
>>
>> We have 2 DCs.
>>
>> 3 RACs in each DC.
>>
>> Replication factor is 3 for each datacenter .
>>
>> So if I run repair on all the nodes of a single RAC with the "pr" option,
>> then ideally it will cover all the ranges.
>>
>> Please correct my understanding.
>>
>>
>> Thanks
>>
>>
>>


Cassandra Repair question

2019-10-18 Thread Krish Donald
Hi Cassandra experts,


We are on Cassandra 3.11.1.

We have to run repairs for a big cluster.

We have 2 DCs.

3 RACs in each DC.

Replication factor is 3 for each datacenter .

So if I run repair on all the nodes of a single RAC with the "pr" option,
then ideally it will cover all the ranges.

Please correct my understanding.


Thanks


Backups in Cassandra

2019-08-07 Thread Krish Donald
Hi Folks,

First question: do you take backups of your Cassandra cluster?
If the answer is yes, then these questions follow:
1. How do you take backups?
   1.1) Is it only snapshots?
   1.2) We are on AWS with a very large cluster, around 51 nodes
        with 1TB of data on each node.
   1.3) Do you take backups and move them to S3?

2. If you take backups, how has the restore process worked for you?
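
For context, the approach we have been testing so far (a sketch; the tag,
keyspace and bucket names are placeholders, and the paths assume the default
data directory layout):

# per-node snapshot (hard links, so cheap to take on the node itself)
nodetool snapshot -t nightly_2019_08_07 my_keyspace
# ship only the snapshot directories to S3
aws s3 sync /var/lib/cassandra/data/my_keyspace/ \
    s3://my-backup-bucket/$(hostname)/my_keyspace/ \
    --exclude "*" --include "*/snapshots/nightly_2019_08_07/*"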

Thanks
Krish


CDC enabled settings and performance impact

2019-07-29 Thread Krish Donald
Hi,

We need to enable CDC on one of our clusters, which is on DSE 5.1. We need to
change the settings below:
cdc_enabled
cdc_raw_directory
cdc_total_space_in_mb
cdc_free_space_check_interval_ms

What values do you keep for the following?
cdc_total_space_in_mb
cdc_free_space_check_interval_ms

Is there any performance impact you have seen?
Should we keep cdc_raw_directory on a different volume than the data volume?
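
For reference, the values we are planning to start with (I believe these are
the stock defaults; please correct me if they are off, and the directory is
a placeholder):

cdc_enabled: true
cdc_raw_directory: /var/lib/cassandra/cdc_raw
cdc_total_space_in_mb: 4096
cdc_free_space_check_interval_ms: 250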

Thanks
Krish


Cheat Sheet for Unix based OS, Performance troubleshooting

2019-07-26 Thread Krish Donald
Does anyone have a cheat sheet for Unix-based OS performance troubleshooting?


Openings in BayArea/Remote for Cassandra Admin

2019-07-26 Thread Krish Donald
Hi,

This community is very helpful.
Looking for any pointers.
Does anyone know of any openings on your team for a Cassandra Admin in the
Bay Area or remote?
Please send me an email.

Thanks
Krish


Re: Cassandra STIG

2019-04-02 Thread Krish Donald
Hi Joe,

Thanks for the reply; I am looking for the Cassandra STIG.
I found one link.
https://grokbase.com/p/cassandra/user/162g7mfvg2/security-assessment-of-cassandra

Does anyone have a complete Cassandra STIG?
The CIS benchmark is not the one I am looking for.

Thanks
Krish


On Tue, Apr 2, 2019 at 1:25 PM Joseph Testa  wrote:

> There is a recently published CIS benchmark for Cassandra.
>
> Joe
>
>
> On Tue, Apr 2, 2019 at 4:19 PM Krish Donald  wrote:
>
>> Hi,
>>
>> Does anyone have a Cassandra STIG?
>>
>> Thanks
>> Krish
>>
>


Cassandra STIG

2019-04-02 Thread Krish Donald
Hi,

Does anyone have a Cassandra STIG?

Thanks
Krish


Re: How do u setup networking for Opening Solr Web Interface when on cloud?

2019-04-01 Thread Krish Donald
I have searched on the internet but did not find any link that worked for me.

Even on
https://s3.amazonaws.com/quickstart-reference/datastax/latest/doc/datastax-enterprise-on-the-aws-cloud.pdf
it is mentioned to use SSH tunneling:

"DSE nodes have no public IP addresses. Access to the web consoles for Solr
or Spark can be established by using an SSH tunnel. For example, you can
access the Solr console from http://NODE_IP:8983/solr/. You can bind to a
local port with a command like the following (replacing the key and IP
values for those of your cluster): ssh -v -i $KEY_FILE -L
8983:$NODE_IP:8983 ubuntu@$OPSC_PUBLIC_IP -N The Solr console is then
accessible at http://127.0.0.1:8983/solr/. When you’re prompted to log in,
enter the user name cassandra and the password you chose. "

But I am not looking for the SSH tunneling option.

I tried to follow the link below as well:

https://forums.aws.amazon.com/thread.jspa?threadID=31406

But the DSE nodes have no public IP addresses, so this also did not work.

Thanks



On Mon, Apr 1, 2019 at 12:32 PM Rahul Singh 
wrote:

> This is probably not a question for this community... but rather for
> Datastax support or the Datastax Academy slack group. More specifically
> this is a "how to expose solr securely" question which is amply answered
> well on the interwebs if you look for it on Google.
>
>
> rahul.xavier.si...@gmail.com
>
> http://cassandra.link
>
>
>
> On Mon, Apr 1, 2019 at 12:19 PM Krish Donald  wrote:
>
>> Hi,
>>
>> We have a DSE Cassandra cluster running on AWS.
>> Now we have a requirement to enable Solr and Spark on the cluster.
>> We have Cassandra on a private data subnet which has connectivity to the
>> app layer.
>> From Cassandra, we can't open the Solr web interface directly.
>> We tried using SSH tunneling and it works, but we can't give the SSH
>> tunneling option to developers.
>>
>> We would like to create a load balancer and put the Cassandra nodes under
>> it, but the question here is: what health check do I need to give the load
>> balancer so that it can open the Solr web UI?
>>
>> My solution might not be perfect; please suggest any other solution if
>> you have one.
>>
>> Thanks
>>
>>


How do u setup networking for Opening Solr Web Interface when on cloud?

2019-04-01 Thread Krish Donald
Hi,

We have a DSE Cassandra cluster running on AWS.
Now we have a requirement to enable Solr and Spark on the cluster.
We have Cassandra on a private data subnet which has connectivity to the app
layer.
From Cassandra, we can't open the Solr web interface directly.
We tried using SSH tunneling and it works, but we can't give the SSH
tunneling option to developers.

We would like to create a load balancer and put the Cassandra nodes under
it, but the question here is: what health check do I need to give the load
balancer so that it can open the Solr web UI?

My solution might not be perfect; please suggest any other solution if you
have one.
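
For reference, what I am planning to try for the health check (a sketch; the
node IP is a placeholder, and I am assuming the Solr admin UI answers on
8983 as in the DataStax docs):

# from a host inside the subnet, confirm what the UI endpoint returns
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.1.15:8983/solr/

If that returns 200 (or a 3xx redirect), an HTTP health check on port 8983
with path /solr/ should work; otherwise a plain TCP check on 8983 seems like
the safer choice.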

Thanks


What kind of Automation you have for Cassandra related operations on AWS ?

2018-02-08 Thread Krish Donald
Hi All,

What kind of automation do you have for Cassandra-related operations on AWS,
like restacking, restarting the cluster, changing cassandra.yaml
parameters, etc.?

Thanks


Error while starting Cassandra for the first time

2015-02-04 Thread Krish Donald
Hi,

I am getting the error below and am not able to understand why:

[csduser@master bin]$ ./cassandra -f
CompilerOracle: inline org/apache/cassandra/db/AbstractNativeCell.compareTo
(Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline
org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned
(Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
(Ljava/nio/ByteBuffer;[B)I
CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
([BLjava/nio/ByteBuffer;)I
CompilerOracle: inline
org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned
(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
CompilerOracle: inline
org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
(Ljava/lang/Object;JILjava/lang/Object;JI)I
CompilerOracle: inline
org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
(Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
CompilerOracle: inline
org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
INFO  22:17:19 Hostname: master.my.com
INFO  22:17:19 Loading settings from
file:/home/csduser/cassandra/conf/cassandra.yaml
ERROR 22:17:20 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:120)
~[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
~[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
~[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
~[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:96)
[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:448)
[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:537)
[apache-cassandra-2.1.2.jar:2.1.2]
Caused by: org.yaml.snakeyaml.scanner.ScannerException: while scanning a
simple key; could not found expected ':';  in 'reader', line 33, column 1:
# See http://wiki.apache.org/cas ...
^
at
org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:460)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:558)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
~[snakeyaml-1.11.jar:na]
at
org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.load(Yaml.java:412)
~[snakeyaml-1.11.jar:na]
at
org.apache.cassandra.config.YamlConfigurationLoader.logConfig(YamlConfigurationLoader.java:126)
~[apache-cassandra-2.1.2.jar:2.1.2]
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:104)
~[apache-cassandra-2.1.2.jar:2.1.2]
... 6 common frames omitted
Invalid yaml
Fatal configuration error; unable to start. See log for stacktrace.


Thanks
Krish


Case Study for Learning Cassandra

2015-02-04 Thread Krish Donald
Hi,

I am new to Cassandra and have set up a 4-node Cassandra cluster using VMs.
I am looking for a case study that I can work through to understand Cassandra
administration and also put on my resume.

Any help is appreciated.

Thanks
Krish


Re: Error while starting Cassandra for the first time

2015-02-04 Thread Krish Donald
I used the YAML validator and tried to fix the file based on the error
messages. I had to comment out data_file_directories, commitlog_directory
and saved_caches_directory, and after that it worked.
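
In case it helps anyone else hitting the same error, a quicker way to find
the offending line than an online validator (a sketch, assuming PyYAML is
installed), plus what I believe the correctly indented form of those
settings looks like (paths are the stock defaults):

python -c "import yaml; yaml.safe_load(open('/home/csduser/cassandra/conf/cassandra.yaml'))"

data_file_directories:
    - /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog
saved_caches_directory: /var/lib/cassandra/saved_caches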

Thanks a lot for the help ...

On Wed, Feb 4, 2015 at 2:32 PM, Mark Reddy mark.l.re...@gmail.com wrote:

 INFO  22:17:19 Loading settings from file:/home/csduser/cassandra/
 conf/cassandra.yaml
 ERROR 22:17:20 Fatal configuration error
 org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml


 You have a malformed cassandra.yaml config file that is resulting in
 Cassandra not being able to start. You should be able to validate your yaml
 file with some online validator such as http://www.yamllint.com/


 Regards,
 Mark

 On 4 February 2015 at 22:23, Krish Donald gotomyp...@gmail.com wrote:

 Hi,

 I am getting below error:
 Not able to understand why ??

 [csduser@master bin]$ ./cassandra -f
 CompilerOracle: inline
 org/apache/cassandra/db/AbstractNativeCell.compareTo
 (Lorg/apache/cassandra/db/composites/Composite;)I
 CompilerOracle: inline
 org/apache/cassandra/db/composites/AbstractSimpleCellNameType.compareUnsigned
 (Lorg/apache/cassandra/db/composites/Composite;Lorg/apache/cassandra/db/composites/Composite;)I
 CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
 (Ljava/nio/ByteBuffer;[B)I
 CompilerOracle: inline org/apache/cassandra/utils/ByteBufferUtil.compare
 ([BLjava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/ByteBufferUtil.compareUnsigned
 (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/lang/Object;JILjava/lang/Object;JI)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/lang/Object;JILjava/nio/ByteBuffer;)I
 CompilerOracle: inline
 org/apache/cassandra/utils/FastByteOperations$UnsafeOperations.compareTo
 (Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I
 INFO  22:17:19 Hostname: master.my.com
 INFO  22:17:19 Loading settings from
 file:/home/csduser/cassandra/conf/cassandra.yaml
 ERROR 22:17:20 Fatal configuration error
 org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:120)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:158)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:133)
 ~[apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:96)
 [apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:448)
 [apache-cassandra-2.1.2.jar:2.1.2]
 at
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:537)
 [apache-cassandra-2.1.2.jar:2.1.2]
 Caused by: org.yaml.snakeyaml.scanner.ScannerException: while scanning a
 simple key; could not found expected ':';  in 'reader', line 33, column 1:
 # See http://wiki.apache.org/cas ...
 ^
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.stalePossibleSimpleKeys(ScannerImpl.java:460)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.needMoreTokens(ScannerImpl.java:280)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:225)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:558)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
 ~[snakeyaml-1.11.jar:na]
 at
 org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
 ~[snakeyaml-1.11.jar:na]
 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
 ~[snakeyaml-1.11.jar:na]
 at org.yaml.snakeyaml.Yaml.load(Yaml.java:412)
 ~[snakeyaml-1.11.jar:na]
 at
 org.apache.cassandra.config.YamlConfigurationLoader.logConfig(YamlConfigurationLoader.java:126)
 ~[apache-cassandra-2.1.2.jar:2.1.2