Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Vineet Mishra
Hi All,

I am setting up a multinode Cassandra cluster on 4 CentOS nodes; my
cassandra.yaml looks like this:

cluster_name: 'node'
initial_token: 0
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: 192.168.1.32
listen_address: 192.168.1.32
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch

Similarly, the cassandra.yaml for the second node:

cluster_name: 'node'
initial_token: 2305843009213693952
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: 192.168.1.32
listen_address: 192.168.1.36
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch

And so on for the remaining nodes.

When I try to start the server on the seed node (192.168.1.32), it
throws this exception and doesn't start:


-bash-4.1$ sudo bin/cassandra

-bash-4.1$  INFO 12:19:46,653 Logging initialized
 INFO 12:19:46,688 Loading settings from
file:/home/cluster/cassandra/conf/cassandra.yaml
ERROR 12:19:46,985 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:100)
at
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:135)
at
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:111)
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:156)
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Caused by: Can't construct a java object for
tag:yaml.org,2002:org.apache.cassandra.config.Config;
exception=Cannot create property=seed_provider for
JavaBean=org.apache.cassandra.config.Config@676c6370;
java.lang.reflect.InvocationTargetException
 in 'reader', line 1, column 1:
cluster_name: 'pcross'
^

at
org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:333)
at
org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:182)
at
org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:141)
at
org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:127)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:475)
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:93)
... 5 more
Caused by: org.yaml.snakeyaml.error.YAMLException: Cannot create
property=seed_provider for
JavaBean=org.apache.cassandra.config.Config@676c6370;
java.lang.reflect.InvocationTargetException
at
org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:299)
at
org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(Constructor.java:189)
at
org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:331)
... 11 more
Caused by: org.yaml.snakeyaml.error.YAMLException:
java.lang.reflect.InvocationTargetException
at
org.yaml.snakeyaml.constructor.Constructor$ConstructSequence.construct(Constructor.java:542)
at
org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:182)
at
org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:296)
... 13 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.yaml.snakeyaml.constructor.Constructor$ConstructSequence.construct(Constructor.java:540)
... 15 more
Caused by: java.lang.NullPointerException
at
org.apache.cassandra.config.SeedProviderDef.init(SeedProviderDef.java:33)
... 20 more
Invalid yaml


I am not sure exactly what's making it throw the NullPointerException and halt
the process.

Expert Advice would be appreciated!
URGENT!

Thanks!


Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Mark Reddy
It is telling you that your YAML is invalid. From the snippet you have
provided, it looks like seed_provider.parameters is not correctly
indented; it should look something like:

seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: 192.168.1.32
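The difference between the two layouts is easy to see by feeding both to any YAML parser. A minimal sketch in Python (using PyYAML; the snippets are illustrative, not Vineet's actual file):

```python
import yaml

# Correct layout: 'parameters' is indented under the list item, so it
# becomes a property of the seed provider entry.
good = """\
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: 192.168.1.32
"""

# Flattened layout (as in the original snippet): 'parameters' is parsed
# as a *separate top-level key*, so the seed provider entry has no
# parameters at all -- which is what triggers the NullPointerException
# in SeedProviderDef.
bad = """\
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: 192.168.1.32
"""

good_cfg = yaml.safe_load(good)
bad_cfg = yaml.safe_load(bad)

print("parameters" in good_cfg["seed_provider"][0])  # True
print("parameters" in bad_cfg["seed_provider"][0])   # False: leaked to top level
```

Note that the flattened version is still *syntactically* valid YAML; it just describes a different structure than Cassandra's Config bean expects, which is why the failure surfaces as a reflection error rather than a parse error.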


Regards,
Mark



Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Vineet Mishra
Thanks Mark,
That was indeed a YAML formatting issue.
However, I am now getting the following error:

INFO 15:33:43,770 Loading settings from
file:/home/cluster/cassandra/conf/cassandra.yaml
 INFO 15:33:44,100 Data files directories: [/var/lib/cassandra/data]
 INFO 15:33:44,101 Commit log directory: /var/lib/cassandra/commitlog
ERROR 15:33:44,103 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Missing required
directive CommitLogSync
at
org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:147)
at
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:111)
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:156)
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Missing required directive CommitLogSync
Fatal configuration error; unable to start. See log for stacktrace.

Do you have any idea about this?

Thanks!



Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Mark Reddy
You are missing commitlog_sync in your cassandra.yaml.

Are you generating your own cassandra.yaml or editing the package default?
If you are generating your own, there are several required configuration
options; if they are not present, Cassandra will fail to start.
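For reference, the commit log sync setting looks like this in the stock 2.0 cassandra.yaml (values shown are the packaged defaults; check the copy bundled with your tarball):

```yaml
# Either 'periodic' or 'batch'. With 'periodic', writes are acked
# immediately and the commit log is fsynced every sync period.
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# If you choose 'batch' instead, remove the two lines above and set:
# commitlog_sync: batch
# commitlog_sync_batch_window_in_ms: 50
```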


Regards,
Mark



Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Vineet Mishra
Hi Mark,

Yes, I was generating my own cassandra.yaml with the configuration mentioned
below:

cluster_name: 'node'
initial_token: 0
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: 192.168.1.32
listen_address: 192.168.1.32
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch

Similarly for second node

cluster_name: 'node'
initial_token: 2305843009213693952
num_tokens: 256
seed_provider:
- class_name: org.apache.cassandra.locator.SimpleSeedProvider
parameters:
- seeds: 192.168.1.32
listen_address: 192.168.1.36
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch

And so on for the remaining nodes.



But even if I use the default cassandra.yaml with the necessary configuration
changes, I get the following error:

 INFO 16:13:38,225 Loading settings from
file:/home/cluster/cassandra/conf/cassandra.yaml
ERROR 16:13:38,301 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:100)
at
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:135)
at
org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:111)
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:156)
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
Caused by: while parsing a block mapping
 in 'reader', line 10, column 2:
 cluster_name: 'node'
 ^
expected block end, but found BlockMappingStart
 in 'reader', line 30, column 3:
  initial_token: 0
  ^

at
org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:570)
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
at
org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230)
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
at
org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:475)
at
org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:93)
... 5 more
Invalid yaml

Could you help figure out what's making the YAML invalid?

Thanks!



Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Vineet Mishra
Thanks Vivek!

It was indeed a formatting issue in the YAML; got it to work!


On Tue, Aug 26, 2014 at 6:06 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Please read http://www.yaml.org/start.html.
 It looks like a formatting issue; you might be missing spaces or adding incorrect ones.

 Validate your YAML file. This should help you out
 http://yamllint.com/

 -Vivek



Cassandra Installation

2014-08-26 Thread Malay Nilabh
Hi

I want to set up a one-node Cassandra cluster on my Ubuntu machine, which has
Java 1.7 (Oracle JDK), and I have already downloaded the Cassandra 2.0 tar
file. I need a full document for setting up a single-node Cassandra cluster;
please guide me through this.

Thanks & Regards
Malay Nilabh
BIDW BU/ Big Data CoE
LT Infotech Ltd, Hinjewadi,Pune
+91-20-66571746
+91-73-879-00727
Email: malay.nil...@lntinfotech.com
|| Save Paper - Save Trees ||



The contents of this e-mail and any attachment(s) may contain confidential or 
privileged information for the intended recipient(s). Unintended recipients are 
prohibited from taking action on the basis of information in this e-mail and 
using or disseminating the information, and must notify the sender and delete 
it from their system. LT Infotech will not accept responsibility or liability 
for the accuracy or completeness of, or the presence of any virus or disabling 
code in this e-mail


Re: Cassandra Installation

2014-08-26 Thread Umang Shah
Hi Malay,

Have a look at this video; it will give you very clear instructions on how to
achieve what you want.

https://www.youtube.com/watch?v=Wohi9B-1Omc
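In short, for a single node there is very little to do: the conf/cassandra.yaml bundled with the 2.0 tarball already binds to localhost and seeds itself. The handful of settings that matter (shown here with the bundled defaults; an illustrative fragment, not a complete file) are:

```yaml
cluster_name: 'Test Cluster'
listen_address: localhost
rpc_address: localhost
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "127.0.0.1"
```

After untarring, `bin/cassandra -f` starts the node in the foreground and `bin/cqlsh` connects to it.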

Thanks,
Umang Shah
Pentaho BI-ETL Developer
shahuma...@gmail.com





-- 
Regards,
Umang V.Shah
+919886829019


MapReduce Integration?

2014-08-26 Thread Oliver Ruebenacker
 Hello,

  I read that Cassandra has had MapReduce integration since early on. There
are instructions on how to use Hadoop or Spark. However, it appears to me
that, according to these instructions, Hadoop and Spark submit requests
to Cassandra just like any other client would. So, I'm not sure what is
meant by integration.

  Any pointers? Thanks!

 Best,
 Oliver

-- 
Oliver Ruebenacker
Solutions Architect at Altisource Labs http://www.altisourcelabs.com/
Be always grateful, but never satisfied.


Re: MapReduce Integration?

2014-08-26 Thread Russell Bradberry
If you want true integration of Cassandra and Hadoop and Spark then you will 
need to use Datastax Enterprise (DSE).  There are connectors that will allow 
MapReduce over vanilla Cassandra, however, they are just making requests to 
Cassandra under the covers while DSE uses CFS which is similar to HDFS.



On August 26, 2014 at 9:23:38 AM, Oliver Ruebenacker (cur...@gmail.com) wrote:


 Hello,

  I read that Cassandra has had MapReduce integration since early on. There are 
instructions on how to use Hadoop or Spark. However, it appears to me that 
according to these instructions, Hadoop and Spark just submit requests to 
Cassandra just like any other client would. So, I'm not sure what is meant by 
integration.

  Any pointers? Thanks!

 Best,
 Oliver

--
Oliver Ruebenacker
Solutions Architect at Altisource Labs
Be always grateful, but never satisfied.


Re: Cassandra process exiting mysteriously

2014-08-26 Thread Or Sher
Hi Clint,
I think I kind of found the reason for my problem; I doubt you have the
exact same problem, but here it is:

We're using Zabbix as our monitoring system, and it uses /usr/bin/at to
schedule its monitoring runs.
Every time the at command adds another scheduled task, it sends a kill
signal to the pid of atd, probably just to check whether it's alive, not to
kill it.
Now, looking at the syscall audit log, it seems like sometimes, although
the kill syscall targets one pid (the atd one), it actually sends the
kill to our C* java process.
I'm really starting to think it's some kind of Linux kernel bug...
BTW, atd was always stopped, so I'm not really sure yet if it was part of
the problem or not.

HTH,
Or.



On Wed, Aug 13, 2014 at 9:22 AM, Or Sher or.sh...@gmail.com wrote:

 Will do the same!
 Thanks,
 Or.


 On Tue, Aug 12, 2014 at 6:47 PM, Clint Kelly clint.ke...@gmail.com
 wrote:

 Hi Or,

 For now I removed the test that was failing like this from our suite
 and made a note to revisit it in a couple of weeks.  Unfortunately I
 still don't know what the issue is.  I'll post here if I figure it out
 (please do the same!).  My working hypothesis now is that we had some
 kind of OOM problem.

 Best regards,
 Clint

 On Tue, Aug 12, 2014 at 12:23 AM, Or Sher or.sh...@gmail.com wrote:
  Clint, did you find anything?
  I just noticed it happens to us too on only one node in our CI cluster.
  I don't think there is a special usage before it happens... The last line
  in the log before the shutdown lines is at least an hour before.
  We're using C* 2.0.9.
 
 
  On Thu, Aug 7, 2014 at 12:49 AM, Clint Kelly clint.ke...@gmail.com
 wrote:
 
  Hi Rob,
 
  Thanks for the clarification; this is really useful.  I'll run some
  experiments to see if the problem is a JVM OOM on our build machine.
 
  Best regards,
  Clint
 
  On Wed, Aug 6, 2014 at 1:14 PM, Robert Coli rc...@eventbrite.com
 wrote:
   On Wed, Aug 6, 2014 at 1:12 PM, Robert Coli rc...@eventbrite.com
   wrote:
  
   On Wed, Aug 6, 2014 at 1:11 AM, Duncan Sands 
 duncan.sa...@gmail.com
   wrote:
  
   this doesn't look like an OOM to me.  If the kernel OOM kills
   Cassandra
   then Cassandra instantly vaporizes, and there will be nothing in
 the
   Cassandra logs (you will find information about the OOM in the
 system
   logs
   though, eg in dmesg).  In the log snippet above you see an orderly
   shutdown,
   this is completely different to the instant OOM kill.
  
  
   Not really.
  
   https://issues.apache.org/jira/browse/CASSANDRA-7507
  
  
   To be clear, there's two different OOMs here, I am talking about the
 JVM
   OOM, not system level. As CASSANDRA-7507 indicates, JVM OOM does not
   necessarily result in the cassandra process dying, and can in fact
   trigger
   clean shutdown.
  
   System level OOM will in fact send the equivalent of KILL, which will
   not
   trigger the clean shutdown hook in Cassandra.
  
   =Rob
 
 
 
 
  --
  Or Sher




 --
 Or Sher




-- 
Or Sher
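

As a practical footnote to the kernel-OOM vs JVM-OOM distinction discussed
above: the two leave traces in different places, so they can be told apart
after the fact. A hedged triage sketch (the log paths are typical defaults
and may differ on your system; this is illustrative, not Cassandra tooling):

```python
# A kernel OOM kill is instant: the shutdown hook never runs, and the only
# trace is in the kernel log (dmesg/syslog).  A JVM-level OOM appears in
# Cassandra's own log and can be followed by an orderly shutdown
# (see CASSANDRA-7507).
import os

def grep_log(path: str, needle: str) -> list:
    """Return matching lines from a log file, or [] if the file is absent."""
    if not os.path.exists(path):
        return []
    with open(path, errors="replace") as f:
        return [line.rstrip("\n") for line in f if needle.lower() in line.lower()]

# Kernel OOM-killer trace (process vaporized, nothing in Cassandra's log):
kernel_hits = grep_log("/var/log/messages", "killed process")
# JVM-level OOM (may trigger Cassandra's clean shutdown hook):
jvm_hits = grep_log("/var/log/cassandra/system.log", "OutOfMemoryError")
print(len(kernel_hits), len(jvm_hits))
```

If the kernel log has a "killed process" entry for the Cassandra pid but the
Cassandra log ends abruptly, it was a system-level kill, not a JVM OOM.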


Re: MapReduce Integration?

2014-08-26 Thread Chris Lohfink
There is a "Bring Your Own Hadoop" option for DSE as well: 
http://www.datastax.com/documentation/datastax_enterprise/4.5/datastax_enterprise/byoh/byohIntro.html

You can also run Hadoop against your backups/snapshots:
https://github.com/Netflix/aegisthus
https://github.com/fullcontact/hadoop-sstable

Chris

On Aug 26, 2014, at 8:41 AM, Russell Bradberry rbradbe...@gmail.com wrote:

 If you want true integration of Cassandra and Hadoop and Spark then you will 
 need to use Datastax Enterprise (DSE).  There are connectors that will allow 
 MapReduce over vanilla Cassandra, however, they are just making requests to 
 Cassandra under the covers while DSE uses CFS which is similar to HDFS.
 
 
 
 On August 26, 2014 at 9:23:38 AM, Oliver Ruebenacker (cur...@gmail.com) wrote:
 
 
  Hello,
 
   I read that Cassandra has had MapReduce integration since early on. There 
 are instructions on how to use Hadoop or Spark. However, it appears to me 
 that according to these instructions, Hadoop and Spark just submit requests 
 to Cassandra just like any other client would. So, I'm not sure what is 
 meant by integration.
 
   Any pointers? Thanks!
 
  Best,
  Oliver
 
 --
 Oliver Ruebenacker
 Solutions Architect at Altisource Labs
 Be always grateful, but never satisfied.



CQL performance inserting multiple cluster keys under same partition key

2014-08-26 Thread Jaydeep Chovatia
Hi,

I have question on inserting multiple cluster keys under same partition
key.

Ex:

CREATE TABLE Employee (
  deptId int,
  empId int,
  name   varchar,
  address varchar,
  salary int,
  PRIMARY KEY(deptId, empId)
);

BEGIN UNLOGGED BATCH
  INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
10, 'testNameA', 'testAddressA', 2);
  INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
20, 'testNameB', 'testAddressB', 3);
APPLY BATCH;

Here we are inserting two cluster keys (10 and 20) under the same partition
key (1).
Q1) Is this batch atomic and isolated? If yes, is there any
performance overhead with this syntax?
Q2) Can this CQL syntax be considered equivalent to Thrift's
batch_mutate?

-jaydeep


Re: Installing Cassandra Multinode on CentOs coming up with exception

2014-08-26 Thread Patricia Gorla
Vineet,

One more thing -- you have initial_token and num_tokens both set. If you
are trying to use virtual nodes, you should comment out initial_token as
this setting overrides num_tokens.

Cheers,


On Tue, Aug 26, 2014 at 5:39 AM, Vineet Mishra clearmido...@gmail.com
wrote:

 Thanks Vivek!

 It was indeed a formatting issue in yaml, got it work!


 On Tue, Aug 26, 2014 at 6:06 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Please read about http://www.yaml.org/start.html.
 Looks like formatting issue. You might be missing/adding incorrect spaces

 Validate your YAML file. This should help you out
 http://yamllint.com/

 -Vivek


 On Tue, Aug 26, 2014 at 4:20 PM, Vineet Mishra clearmido...@gmail.com
 wrote:

 Hi Mark,

 Yes I was generating my own cassandra.yaml with the configuration
 mentioned below,

 cluster_name: 'node'
 initial_token: 0
 num_tokens: 256
 seed_provider:
 - class_name: org.apache.cassandra.locator.SimpleSeedProvider
 parameters:
 - seeds: 192.168.1.32
 listen_address: 192.168.1.32
 rpc_address: 0.0.0.0
 endpoint_snitch: RackInferringSnitch

 Similarly for second node

 cluster_name: 'node'
 initial_token: 2305843009213693952
 num_tokens: 256
 seed_provider:
 - class_name: org.apache.cassandra.locator.SimpleSeedProvider
 parameters:
 - seeds: 192.168.1.32
 listen_address: 192.168.1.36
 rpc_address: 0.0.0.0
 endpoint_snitch: RackInferringSnitch

 and so on. . .



 But even if I use the default cassandra.yaml with the necessary
 configuration changes, I am getting the following error.

  INFO 16:13:38,225 Loading settings from
 file:/home/cluster/cassandra/conf/cassandra.yaml
 ERROR 16:13:38,301 Fatal configuration error
 org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
  at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:100)
 at
 org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:135)
  at
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:111)
 at
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:156)
  at
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
 at
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 Caused by: while parsing a block mapping
  in 'reader', line 10, column 2:
  cluster_name: 'node'
  ^
 expected block end, but found BlockMappingStart
  in 'reader', line 30, column 3:
   initial_token: 0
   ^

 at
 org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:570)
  at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
 at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143)
  at
 org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230)
 at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
  at
 org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122)
 at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105)
  at
 org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120)
 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481)
  at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:475)
 at
 org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:93)
  ... 5 more
 Invalid yaml

 Could you figure out what's making the yaml invalid?

 Thanks!


 On Tue, Aug 26, 2014 at 4:06 PM, Mark Reddy mark.l.re...@gmail.com
 wrote:

 You are missing commitlog_sync in your cassandra.yaml.

 Are you generating your own cassandra.yaml or editing the package
 default? If you are generating your own there are several configuration
 options that are required and if not present, Cassandra will fail to
 start.


 Regards,
 Mark


 On 26 August 2014 11:14, Vineet Mishra clearmido...@gmail.com wrote:

 Thanks Mark,
 That was indeed a yaml formatting issue.
 Moreover, I am now getting the following error:

 INFO 15:33:43,770 Loading settings from
 file:/home/cluster/cassandra/conf/cassandra.yaml
  INFO 15:33:44,100 Data files directories: [/var/lib/cassandra/data]
  INFO 15:33:44,101 Commit log directory: /var/lib/cassandra/commitlog
 ERROR 15:33:44,103 Fatal configuration error
 org.apache.cassandra.exceptions.ConfigurationException: Missing
 required directive CommitLogSync
  at
 org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:147)
 at
 org.apache.cassandra.config.DatabaseDescriptor.clinit(DatabaseDescriptor.java:111)
  at
 org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:156)
 at
 org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
  at
 org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)
 Missing required directive CommitLogSync
 Fatal configuration error; unable to start. See log for stacktrace.

 Do you have any idea about this?

 Thanks!


 On Tue, Aug 26, 2014 at 3:07 PM, Mark Reddy 
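

For reference, the root cause in this thread is YAML nesting: seed_provider
takes a nested list and mapping, so its sub-keys must be indented, while the
flattened form posted above parses every key as a top-level sibling and
trips the "expected block end, but found BlockMappingStart" error. A minimal
correctly indented sketch, using the values from the thread:

```yaml
cluster_name: 'node'
# initial_token: 0   # comment this out when using vnodes, as noted in this thread
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "192.168.1.32"
listen_address: 192.168.1.32
rpc_address: 0.0.0.0
endpoint_snitch: RackInferringSnitch
```

Note that a hand-written cassandra.yaml also needs the other required
directives (e.g. commitlog_sync, as Mark pointed out); starting from the
packaged default file and editing it is the safer route.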

Question about MemoryMeter liveRatio

2014-08-26 Thread Leleu Eric
Hi,


I'm trying to understand what the liveRatio is and whether I have to care about it.
I found some references on the web and, if I understand them, the liveRatio
represents the Memtable size divided by the amount of data serialized to
disk. Is that correct?


When I see the following log, what can I deduce from it?

INFO [MemoryMeter:1] 2014-08-26 19:02:41,047 Memtable.java (line 481) 
CFS(Keyspace='ufapi', ColumnFamily='users') liveRatio is 8.52308554793235 
(just-counted was 8.514143642185562).  calculation took 3613ms for 272646 cells
   INFO [MemoryMeter:1] 2014-08-26 18:36:09,965 Memtable.java (line 481) 
CFS(Keyspace='system', ColumnFamily='compactions_in_progress') liveRatio is 
40.18934911242604 (just-counted was 16.37869822485207).  calculation took 0ms for 7 cells


From what I've read, the liveRatio is bounded between 1 and 64. If my liveRatio is 
close to 64, should I be concerned about anything?
Does Cassandra use the liveRatio for some internal task, or is it just a metric?


Regards,
Eric



Ce message et les pièces jointes sont confidentiels et réservés à l'usage 
exclusif de ses destinataires. Il peut également être protégé par le secret 
professionnel. Si vous recevez ce message par erreur, merci d'en avertir 
immédiatement l'expéditeur et de le détruire. L'intégrité du message ne pouvant 
être assurée sur Internet, la responsabilité de Worldline ne pourra être 
recherchée quant au contenu de ce message. Bien que les meilleurs efforts 
soient faits pour maintenir cette transmission exempte de tout virus, 
l'expéditeur ne donne aucune garantie à cet égard et sa responsabilité ne 
saurait être recherchée pour tout dommage résultant d'un virus transmis.

This e-mail and the documents attached are confidential and intended solely for 
the addressee; it may also be privileged. If you receive this e-mail in error, 
please notify the sender immediately and destroy it. As its integrity cannot be 
secured on the Internet, the Worldline liability cannot be triggered for the 
message content. Although the sender endeavours to maintain a computer 
virus-free network, the sender does not warrant that this transmission is 
virus-free and will not be liable for any damages resulting from any virus 
transmitted.


Re: Question about MemoryMeter liveRatio

2014-08-26 Thread Chris Lohfink
Computing how much memory the memtables actually cost, including JVM overhead 
etc., is expensive (it uses https://github.com/jbellis/jamm).  So instead the 
MemoryMeter thread pool periodically measures the live size and compares it to 
the serialized size to compute a ratio, giving a cheap real-time estimate.  This 
is used to determine how much memory a memtable is taking up, and therefore how 
often to flush it.

---
Chris Lohfink

On Aug 26, 2014, at 12:20 PM, Leleu Eric eric.le...@worldline.com wrote:

 Hi,
  
  
 I’m trying to understand what is the liveRatio and if I have to care about it.
 I found some reference on the web and if I understand them, the liveRatio 
 represents  the Memtable size divided by the amount of data serialized on the 
 disk. Is it the truth?
  
  
 When I see the following log, what can I deduce about it ?
  
 INFO [MemoryMeter:1] 2014-08-26 19:02:41,047 Memtable.java (line 481) 
 CFS(Keyspace='ufapi', ColumnFamily='users') liveRatio is 8.52308554793235 
 (just-counted was 8.514143642185562).  calculation took 3613ms for 272646 
 cells
INFO [MemoryMeter:1] 2014-08-26 18:36:09,965 Memtable.java (line 481) 
 CFS(Keyspace='system', ColumnFamily='compactions_in_progress') liveRatio is 
 40.18934911242604 (just-counted was 16.37869822485207).  calculation took 0ms 
 for 7 cells
  
  
 According to my read, the liveRatio is set between 1 and 64. If My liveRatio 
 is around 64, should I care about some things?
 Does Cassandra use the liveRatio for some internal task or it is just a 
 metric?
  
  
 Regards,
 Eric
 
 
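

To make the numbers in this thread concrete, here is a small illustrative
sketch (plain Python, not Cassandra's actual code) of how liveRatio feeds
the memory estimate: the serialized memtable size is multiplied by the
measured ratio, which Cassandra clamps to the range [1, 64].

```python
# Illustrative arithmetic behind liveRatio; function names are ours,
# not Cassandra's internal API.

def clamp_live_ratio(measured: float, lo: float = 1.0, hi: float = 64.0) -> float:
    """Cassandra bounds the measured ratio to the range [1, 64]."""
    return max(lo, min(hi, measured))

def estimated_heap_bytes(serialized_bytes: int, live_ratio: float) -> float:
    """Estimated in-memory cost = serialized size * (clamped) liveRatio."""
    return serialized_bytes * clamp_live_ratio(live_ratio)

# Using the ratio from Eric's 'users' log line (8.523...): 10 MiB of
# serialized cells is estimated at roughly 8.5x that on the JVM heap.
print(estimated_heap_bytes(10 * 1024 * 1024, 8.52308554793235))
```

So a ratio pinned at the 64 ceiling mostly means the sampled memtable was
tiny (like the 7-cell compactions_in_progress one above), not that
something is wrong; the estimate just becomes very conservative.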



Too many SSTables after rebalancing cluster (LCS)

2014-08-26 Thread Paulo Ricardo Motta Gomes
Hey folks,

After adding more nodes and moving tokens on the old nodes to rebalance the
ring, I noticed that the old nodes had significantly more data than the
newly bootstrapped nodes, even after cleanup.

I noticed that the old nodes had a much larger number of SSTables on LCS
CFs, most of them located in the last level:

Node N-1 (old node): [1, 10, 102/100, 173, 2403, 0, 0, 0, 0] (total: 2695)
Node N   (new node): [1, 10, 108/100, 214, 0, 0, 0, 0, 0] (total: 339)
Node N+1 (old node): [1, 10, 87, 113, 1076, 0, 0, 0, 0] (total: 1287)

Since these sstables have a lot of tombstones, and they're not updated
frequently, they remain in the last level forever, and are never cleaned.

What is the solution here? The good old change to STCS and then back to
LCS, or is there something less brute force?

Environment: Cassandra 1.2.16 - non-vnodes

Any help would be very much appreciated.

Cheers,

-- 
Paulo Motta

Chaordic | Platform
www.chaordic.com.br
+55 48 3232.3200


Re: Too many SSTables after rebalancing cluster (LCS)

2014-08-26 Thread Robert Coli
On Tue, Aug 26, 2014 at 11:38 AM, Paulo Ricardo Motta Gomes 
paulo.mo...@chaordicsystems.com wrote:

 What is the solution here? The good old change to STCS and then back to
 LCS, or is there something less brute force?


In theory you could use user-defined compaction via JMX, but I'd probably
just switch to STCS, major compact, and switch back to LCS, because I don't
fully understand if and when user-defined compaction even works with LCS.

=Rob
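

For reference, the brute-force route is just two schema changes around a
major compaction. A sketch (keyspace/table names here are illustrative):

```sql
-- 1. Temporarily switch the CF to size-tiered compaction
ALTER TABLE ks.cf WITH compaction = { 'class': 'SizeTieredCompactionStrategy' };

-- 2. Major-compact it from a shell on each node:
--      nodetool compact ks cf
--    This merges the sstables and can finally drop the old tombstones.

-- 3. Switch back; the data is then re-leveled from scratch
ALTER TABLE ks.cf WITH compaction = { 'class': 'LeveledCompactionStrategy' };
```

Both the major compaction and the subsequent re-leveling are I/O-heavy on
large CFs, so schedule them for a quiet period.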


Re: CQL performance inserting multiple cluster keys under same partition key

2014-08-26 Thread Vivek Mishra
AFAIK, it is not. With CAS it should be.
On 26/08/2014 10:21 pm, Jaydeep Chovatia chovatia.jayd...@gmail.com
wrote:

 Hi,

 I have question on inserting multiple cluster keys under same partition
 key.

 Ex:

 CREATE TABLE Employee (
   deptId int,
   empId int,
   name   varchar,
   address varchar,
   salary int,
   PRIMARY KEY(deptId, empId)
 );

 BEGIN UNLOGGED BATCH
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 10, 'testNameA', 'testAddressA', 2);
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 20, 'testNameB', 'testAddressB', 3);
 APPLY BATCH;

 Here we are inserting two cluster keys (10 and 20) under same partition
 key (1).
 Q1) Is this batch transaction atomic and isolated? If yes then is there
 any performance overhead with this syntax?
 Q2) Is this CQL syntax can be considered equivalent of Thrift
 batch_mutate?

 -jaydeep



Re: CQL performance inserting multiple cluster keys under same partition key

2014-08-26 Thread Jaydeep Chovatia
But if we look at the Thrift-world batch_mutate, it used to perform all
mutations within a partition key atomically, without using CAS, i.e. no
extra penalty.
Does this mean CQL degrades in performance compared to Thrift if we want
to do multiple updates to a partition key atomically?


On Tue, Aug 26, 2014 at 11:51 AM, Vivek Mishra mishra.v...@gmail.com
wrote:

 AFAIK, it is not. With CAS it should br
 On 26/08/2014 10:21 pm, Jaydeep Chovatia chovatia.jayd...@gmail.com
 wrote:

 Hi,

 I have question on inserting multiple cluster keys under same partition
 key.

 Ex:

 CREATE TABLE Employee (
   deptId int,
   empId int,
   name   varchar,
   address varchar,
   salary int,
   PRIMARY KEY(deptId, empId)
 );

 BEGIN UNLOGGED BATCH
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 10, 'testNameA', 'testAddressA', 2);
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 20, 'testNameB', 'testAddressB', 3);
 APPLY BATCH;

 Here we are inserting two cluster keys (10 and 20) under same partition
 key (1).
 Q1) Is this batch transaction atomic and isolated? If yes then is there
 any performance overhead with this syntax?
 Q2) Is this CQL syntax can be considered equivalent of Thrift
 batch_mutate?

 -jaydeep




are dynamic columns supported at all in CQL 3?

2014-08-26 Thread Ian Rose
Is it possible in CQL to create a table that supports dynamic column names?
 I am using C* v2.0.9, which I assume implies CQL version 3.

This page appears to show that this was supported in CQL 2 with the 'with
comparator' and 'with default_validation' options but that CQL 3 does not
support this: http://www.datastax.com/dev/blog/whats-new-in-cql-3-0

Am I understanding that right?  If so, what is my best course of action?
 Create the table using the cassandra-cli tool?

Thanks,
- Ian


Re: are dynamic columns supported at all in CQL 3?

2014-08-26 Thread Shane Hansen
Does this answer your question Ian?
http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows



On Tue, Aug 26, 2014 at 1:12 PM, Ian Rose ianr...@fullstory.com wrote:

 Is it possible in CQL to create a table that supports dynamic column
 names?  I am using C* v2.0.9, which I assume implies CQL version 3.

 This page appears to show that this was supported in CQL 2 with the 'with
 comparator' and 'with default_validation' options but that CQL 3 does not
 support this: http://www.datastax.com/dev/blog/whats-new-in-cql-3-0

 Am I understanding that right?  If so, what is my best course of action?
  Create the table using the cassandra-cli tool?

 Thanks,
 - Ian




Re: are dynamic columns supported at all in CQL 3?

2014-08-26 Thread Robert Coli
On Tue, Aug 26, 2014 at 12:14 PM, Shane Hansen shanemhan...@gmail.com
wrote:

 Does this answer your question Ian?
 http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows


If dissembling can be considered to answer a question...

"A common misunderstanding is that CQL does not support dynamic columns or
wide rows. On the contrary, CQL was designed to support everything you can
do with the Thrift model, but make it easier and more accessible."

The sentence makes it clear that CQL must not support dynamic columns,
because it instead supports everything you can do with them.

tl;dr :
---

If OP is asking if CQL3 supports actual dynamic columns as they existed
in CQL2 and thrift, the answer is : no.

If OP is asking if one can use CQL3's E-A-V like scheme to achieve a
similar [1] result, the answer is : yes.

=Rob
[1] http://www.edwardcapriolo.com/roller/edwardcapriolo/entry/legacy_tables
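

Concretely, the CQL3 shape of the blog post's sensor example looks
something like this (a sketch; the table and column names are illustrative,
following the post's scenario rather than copied from it):

```sql
-- One partition per sensor; each reading becomes a clustering row,
-- playing the role of a dynamic column in the Thrift model.
CREATE TABLE data (
    sensor_id    int,
    collected_at timestamp,
    reading      blob,
    PRIMARY KEY (sensor_id, collected_at)
);

-- "All readings for sensor 1" is a single slice query, not N queries:
SELECT collected_at, reading FROM data WHERE sensor_id = 1;
```

A regular column value like reading is also not subject to the 64KB
per-element limit that applies to collection entries.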


Re: are dynamic columns supported at all in CQL 3?

2014-08-26 Thread Ian Rose
Unfortunately, no.  I've read that, and the solution presented only works in
limited scenarios.  Using the post's example, consider the query "get all
readings for sensor 1".  With dynamic columns, the query is just
select * from data where sensor_id=1.  In CQL, not only does this take N
different queries (one per sample), but you have to explicitly know the
collected_at values to query for.  Right?

The other suggestion, to use collections (such as a map), again works in
some circumstances, but not all.  In particular, each item in a collection
is limited to 64KB, which is not something we want to be limited to (we
are storing byte arrays that occasionally exceed this size).



On Tue, Aug 26, 2014 at 3:14 PM, Shane Hansen shanemhan...@gmail.com
wrote:

 Does this answer your question Ian?
 http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows



 On Tue, Aug 26, 2014 at 1:12 PM, Ian Rose ianr...@fullstory.com wrote:

 Is it possible in CQL to create a table that supports dynamic column
 names?  I am using C* v2.0.9, which I assume implies CQL version 3.

 This page appears to show that this was supported in CQL 2 with the 'with
 comparator' and 'with default_validation' options but that CQL 3 does not
 support this: http://www.datastax.com/dev/blog/whats-new-in-cql-3-0

 Am I understanding that right?  If so, what is my best course of action?
  Create the table using the cassandra-cli tool?

 Thanks,
 - Ian