[jira] [Created] (CASSANDRA-7952) DataStax Agent Null Pointer Exception

2014-09-17 Thread Hari Sekhon (JIRA)
Hari Sekhon created CASSANDRA-7952:
--

 Summary: DataStax Agent Null Pointer Exception
 Key: CASSANDRA-7952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7952
 Project: Cassandra
  Issue Type: Bug
 Environment: DSE 4.5.1, DataStax OpsCenter Agent 5.0.0
Reporter: Hari Sekhon


I've got a Null Pointer Exception in my DataStax OpsCenter Agent log, and as a 
result the agent is not reporting in to OpsCenter. Here is the log:
{code}
 INFO [StompConnection receiver] 2014-09-17 13:01:15,992 New JMX connection 
(127.0.0.1:7199)
 INFO [Jetty] 2014-09-17 13:01:16,019 Jetty server started
 INFO [Initialization] 2014-09-17 13:01:16,031 Using x.x.x.x as the cassandra 
broadcast address
 INFO [StompConnection receiver] 2014-09-17 13:01:16,032 Starting up agent 
collection.
 INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC address is  x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC address is  
x.x.x.x
 INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC broadcast address is  
x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC broadcast 
address is  x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,163 Starting OS metric 
collectors (Linux)
 INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore
 INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore.password
 INFO [Initialization] 2014-09-17 13:01:16,167 Setting ssl.store.type to JKS
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing 
kerberos.service.principal.name
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.principal
 INFO [Initialization] 2014-09-17 13:01:16,167 Setting kerberos.useTicketCache 
to true
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.ticketCache
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.useKeyTab to 
true
 INFO [Initialization] 2014-09-17 13:01:16,168 Clearing kerberos.keyTab
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.renewTGT to true
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.debug to false
 INFO [StompConnection receiver] 2014-09-17 13:01:16,171 Starting Cassandra JMX 
metric collectors
 INFO [thrift-init] 2014-09-17 13:01:16,171 Connecting to Cassandra cluster: 
x.x.x.x (port 9160)
 INFO [StompConnection receiver] 2014-09-17 13:01:16,187 New JMX connection 
(127.0.0.1:7199)
 INFO [thrift-init] 2014-09-17 13:01:16,189 Downed Host Retry service started 
with queue size -1 and retry delay 10s
 INFO [thrift-init] 2014-09-17 13:01:16,192 Registering JMX 
me.prettyprint.cassandra.service_Agent 
Cluster:ServiceType=hector,MonitorType=hector
 INFO [pdp-loader] 2014-09-17 13:01:16,231 in execute with client 
org.apache.cassandra.thrift.Cassandra$Client@7a22c094
 INFO [pdp-loader] 2014-09-17 13:01:16,237 Attempting to load stored metric 
values.
 INFO [thrift-init] 2014-09-17 13:01:16,240 Connected to Cassandra cluster: PoC
 INFO [thrift-init] 2014-09-17 13:01:16,240 in execute with client 
org.apache.cassandra.thrift.Cassandra$Client@7a22c094
 INFO [thrift-init] 2014-09-17 13:01:16,240 Using partitioner: 
org.apache.cassandra.dht.Murmur3Partitioner
 INFO [jmx-metrics-1] 2014-09-17 13:01:21,181 New JMX connection 
(127.0.0.1:7199)
ERROR [StompConnection receiver] 2014-09-17 13:01:24,376 Failed to collect 
machine info
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:942)
at clojure.lang.Numbers.divide(Numbers.java:157)
at 
opsagent.nodedetails.machine_info$get_machine_info.invoke(machine_info.clj:76)
at 
opsagent.nodedetails$get_static_properties$fn__4313.invoke(nodedetails.clj:161)
at 
opsagent.nodedetails$get_static_properties.invoke(nodedetails.clj:160)
at 
opsagent.nodedetails$get_longtime_values$fn__4426.invoke(nodedetails.clj:227)
at opsagent.nodedetails$get_longtime_values.invoke(nodedetails.clj:226)
at 
opsagent.nodedetails$send_all_nodedetails$fn__.invoke(nodedetails.clj:245)
at opsagent.jmx$jmx_wrap.doInvoke(jmx.clj:111)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.nodedetails$send_all_nodedetails.invoke(nodedetails.clj:241)
at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:125)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at opsagent.conf$handle_new_conf.invoke(conf.clj:179)
at opsagent.messaging$message_callback$fn__6059.invoke(messaging.clj:30)
at 
opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown
 Source)
at 
org.jgroups.client.StompConnection.notifyListeners(StompConnection.java:319)
at org.jgroups.client.StompConnection.run(StompConnection.java:269)
at java.lang.Thread.run(Thread.java:745)
{code}
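
The stack points at clojure.lang.Numbers.ops/divide, which throws exactly this 
NullPointerException when an arithmetic operand is nil - so a plausible guess is 
that get_machine_info divides by a machine-info value the OS probe never 
populated. A minimal JVM-level sketch of that failure mode (the field name is 
hypothetical; this is not the agent's actual code):
{code}
public class NpeRepro {
    public static void main(String[] args) {
        // Stand-in for a machine-info field (e.g. total RAM) that the
        // OS probe failed to populate.
        Long totalMemoryBytes = null;

        try {
            // Auto-unboxing a null Long throws NullPointerException, just as
            // clojure.lang.Numbers.ops(nil) does before the divide runs.
            long megabytes = totalMemoryBytes / (1024 * 1024);
            System.out.println(megabytes);
        } catch (NullPointerException e) {
            e.printStackTrace(); // mirrors "Failed to collect machine info"
        }
    }
}
{code}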



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7952) DataStax Agent Null Pointer Exception

2014-09-17 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7952:
---
Description: 
I've got a Null Pointer Exception in my DataStax OpsCenter Agent log, and the 
agent is not reporting in to OpsCenter. Here is the log:
{code}
 INFO [StompConnection receiver] 2014-09-17 13:01:15,992 New JMX connection 
(127.0.0.1:7199)
 INFO [Jetty] 2014-09-17 13:01:16,019 Jetty server started
 INFO [Initialization] 2014-09-17 13:01:16,031 Using x.x.x.x as the cassandra 
broadcast address
 INFO [StompConnection receiver] 2014-09-17 13:01:16,032 Starting up agent 
collection.
 INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC address is  x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC address is  
x.x.x.x
 INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC broadcast address is  
x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC broadcast 
address is  x.x.x.x
 INFO [StompConnection receiver] 2014-09-17 13:01:16,163 Starting OS metric 
collectors (Linux)
 INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore
 INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore.password
 INFO [Initialization] 2014-09-17 13:01:16,167 Setting ssl.store.type to JKS
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing 
kerberos.service.principal.name
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.principal
 INFO [Initialization] 2014-09-17 13:01:16,167 Setting kerberos.useTicketCache 
to true
 INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.ticketCache
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.useKeyTab to 
true
 INFO [Initialization] 2014-09-17 13:01:16,168 Clearing kerberos.keyTab
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.renewTGT to true
 INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.debug to false
 INFO [StompConnection receiver] 2014-09-17 13:01:16,171 Starting Cassandra JMX 
metric collectors
 INFO [thrift-init] 2014-09-17 13:01:16,171 Connecting to Cassandra cluster: 
x.x.x.x (port 9160)
 INFO [StompConnection receiver] 2014-09-17 13:01:16,187 New JMX connection 
(127.0.0.1:7199)
 INFO [thrift-init] 2014-09-17 13:01:16,189 Downed Host Retry service started 
with queue size -1 and retry delay 10s
 INFO [thrift-init] 2014-09-17 13:01:16,192 Registering JMX 
me.prettyprint.cassandra.service_Agent 
Cluster:ServiceType=hector,MonitorType=hector
 INFO [pdp-loader] 2014-09-17 13:01:16,231 in execute with client 
org.apache.cassandra.thrift.Cassandra$Client@7a22c094
 INFO [pdp-loader] 2014-09-17 13:01:16,237 Attempting to load stored metric 
values.
 INFO [thrift-init] 2014-09-17 13:01:16,240 Connected to Cassandra cluster: PoC
 INFO [thrift-init] 2014-09-17 13:01:16,240 in execute with client 
org.apache.cassandra.thrift.Cassandra$Client@7a22c094
 INFO [thrift-init] 2014-09-17 13:01:16,240 Using partitioner: 
org.apache.cassandra.dht.Murmur3Partitioner
 INFO [jmx-metrics-1] 2014-09-17 13:01:21,181 New JMX connection 
(127.0.0.1:7199)
ERROR [StompConnection receiver] 2014-09-17 13:01:24,376 Failed to collect 
machine info
java.lang.NullPointerException
at clojure.lang.Numbers.ops(Numbers.java:942)
at clojure.lang.Numbers.divide(Numbers.java:157)
at 
opsagent.nodedetails.machine_info$get_machine_info.invoke(machine_info.clj:76)
at 
opsagent.nodedetails$get_static_properties$fn__4313.invoke(nodedetails.clj:161)
at 
opsagent.nodedetails$get_static_properties.invoke(nodedetails.clj:160)
at 
opsagent.nodedetails$get_longtime_values$fn__4426.invoke(nodedetails.clj:227)
at opsagent.nodedetails$get_longtime_values.invoke(nodedetails.clj:226)
at 
opsagent.nodedetails$send_all_nodedetails$fn__.invoke(nodedetails.clj:245)
at opsagent.jmx$jmx_wrap.doInvoke(jmx.clj:111)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.nodedetails$send_all_nodedetails.invoke(nodedetails.clj:241)
at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:125)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at opsagent.conf$handle_new_conf.invoke(conf.clj:179)
at opsagent.messaging$message_callback$fn__6059.invoke(messaging.clj:30)
at 
opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown
 Source)
at 
org.jgroups.client.StompConnection.notifyListeners(StompConnection.java:319)
at org.jgroups.client.StompConnection.run(StompConnection.java:269)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
I've got a Null Pointer Exception in my DataStax OpsCenter Agent log, and it's 
not reporting in to the OpsCenter as a result. Here is the log
{code}
 INFO [StompConnection receiver] 2014-09-17 13:01:15,992 New JMX connection 
(127.0.0.1:7199)
 INFO 

[jira] [Updated] (CASSANDRA-7952) DataStax Agent Null Pointer Exception

2014-09-17 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7952:
---
Priority: Minor  (was: Major)

 DataStax Agent Null Pointer Exception
 -

 Key: CASSANDRA-7952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7952
 Project: Cassandra
  Issue Type: Bug
 Environment: DSE 4.5.1, DataStax OpsCenter Agent 5.0.0
Reporter: Hari Sekhon
Priority: Minor

 I've got a Null Pointer Exception in my DataStax OpsCenter Agent log, and 
 the agent is not reporting in to OpsCenter. Here is the log:
 {code}
  INFO [StompConnection receiver] 2014-09-17 13:01:15,992 New JMX connection 
 (127.0.0.1:7199)
  INFO [Jetty] 2014-09-17 13:01:16,019 Jetty server started
  INFO [Initialization] 2014-09-17 13:01:16,031 Using x.x.x.x as the cassandra 
 broadcast address
  INFO [StompConnection receiver] 2014-09-17 13:01:16,032 Starting up agent 
 collection.
  INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC address is  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC address is 
  x.x.x.x
  INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC broadcast address is 
  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC broadcast 
 address is  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,163 Starting OS metric 
 collectors (Linux)
  INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore
  INFO [Initialization] 2014-09-17 13:01:16,166 Clearing 
 ssl.truststore.password
  INFO [Initialization] 2014-09-17 13:01:16,167 Setting ssl.store.type to JKS
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing 
 kerberos.service.principal.name
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.principal
  INFO [Initialization] 2014-09-17 13:01:16,167 Setting 
 kerberos.useTicketCache to true
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.ticketCache
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.useKeyTab to 
 true
  INFO [Initialization] 2014-09-17 13:01:16,168 Clearing kerberos.keyTab
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.renewTGT to 
 true
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.debug to false
  INFO [StompConnection receiver] 2014-09-17 13:01:16,171 Starting Cassandra 
 JMX metric collectors
  INFO [thrift-init] 2014-09-17 13:01:16,171 Connecting to Cassandra cluster: 
 x.x.x.x (port 9160)
  INFO [StompConnection receiver] 2014-09-17 13:01:16,187 New JMX connection 
 (127.0.0.1:7199)
  INFO [thrift-init] 2014-09-17 13:01:16,189 Downed Host Retry service started 
 with queue size -1 and retry delay 10s
  INFO [thrift-init] 2014-09-17 13:01:16,192 Registering JMX 
 me.prettyprint.cassandra.service_Agent 
 Cluster:ServiceType=hector,MonitorType=hector
  INFO [pdp-loader] 2014-09-17 13:01:16,231 in execute with client 
 org.apache.cassandra.thrift.Cassandra$Client@7a22c094
  INFO [pdp-loader] 2014-09-17 13:01:16,237 Attempting to load stored metric 
 values.
  INFO [thrift-init] 2014-09-17 13:01:16,240 Connected to Cassandra cluster: 
 PoC
  INFO [thrift-init] 2014-09-17 13:01:16,240 in execute with client 
 org.apache.cassandra.thrift.Cassandra$Client@7a22c094
  INFO [thrift-init] 2014-09-17 13:01:16,240 Using partitioner: 
 org.apache.cassandra.dht.Murmur3Partitioner
  INFO [jmx-metrics-1] 2014-09-17 13:01:21,181 New JMX connection 
 (127.0.0.1:7199)
 ERROR [StompConnection receiver] 2014-09-17 13:01:24,376 Failed to collect 
 machine info
 java.lang.NullPointerException
 at clojure.lang.Numbers.ops(Numbers.java:942)
 at clojure.lang.Numbers.divide(Numbers.java:157)
 at 
 opsagent.nodedetails.machine_info$get_machine_info.invoke(machine_info.clj:76)
 at 
 opsagent.nodedetails$get_static_properties$fn__4313.invoke(nodedetails.clj:161)
 at 
 opsagent.nodedetails$get_static_properties.invoke(nodedetails.clj:160)
 at 
 opsagent.nodedetails$get_longtime_values$fn__4426.invoke(nodedetails.clj:227)
 at 
 opsagent.nodedetails$get_longtime_values.invoke(nodedetails.clj:226)
 at 
 opsagent.nodedetails$send_all_nodedetails$fn__.invoke(nodedetails.clj:245)
 at opsagent.jmx$jmx_wrap.doInvoke(jmx.clj:111)
 at clojure.lang.RestFn.invoke(RestFn.java:410)
 at 
 opsagent.nodedetails$send_all_nodedetails.invoke(nodedetails.clj:241)
 at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:125)
 at clojure.lang.RestFn.invoke(RestFn.java:421)
 at opsagent.conf$handle_new_conf.invoke(conf.clj:179)
 at 
 opsagent.messaging$message_callback$fn__6059.invoke(messaging.clj:30)
 at 
 opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown
  

[jira] [Commented] (CASSANDRA-7952) DataStax Agent Null Pointer Exception

2014-09-17 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14137210#comment-14137210
 ] 

Hari Sekhon commented on CASSANDRA-7952:


Yes, I know, but I don't think they have a public JIRA for this. I was hoping 
someone from DataStax would pick this up and copy it to their internal JIRA; it 
didn't seem suitable to post on Server Fault...

 DataStax Agent Null Pointer Exception
 -

 Key: CASSANDRA-7952
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7952
 Project: Cassandra
  Issue Type: Bug
 Environment: DSE 4.5.1, DataStax OpsCenter Agent 5.0.0
Reporter: Hari Sekhon
Priority: Minor

 I've got a Null Pointer Exception in my DataStax OpsCenter Agent log, and 
 the agent is not reporting in to OpsCenter. Here is the log:
 {code}
  INFO [StompConnection receiver] 2014-09-17 13:01:15,992 New JMX connection 
 (127.0.0.1:7199)
  INFO [Jetty] 2014-09-17 13:01:16,019 Jetty server started
  INFO [Initialization] 2014-09-17 13:01:16,031 Using x.x.x.x as the cassandra 
 broadcast address
  INFO [StompConnection receiver] 2014-09-17 13:01:16,032 Starting up agent 
 collection.
  INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC address is  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC address is 
  x.x.x.x
  INFO [Initialization] 2014-09-17 13:01:16,162 agent RPC broadcast address is 
  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,162 agent RPC broadcast 
 address is  x.x.x.x
  INFO [StompConnection receiver] 2014-09-17 13:01:16,163 Starting OS metric 
 collectors (Linux)
  INFO [Initialization] 2014-09-17 13:01:16,166 Clearing ssl.truststore
  INFO [Initialization] 2014-09-17 13:01:16,166 Clearing 
 ssl.truststore.password
  INFO [Initialization] 2014-09-17 13:01:16,167 Setting ssl.store.type to JKS
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing 
 kerberos.service.principal.name
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.principal
  INFO [Initialization] 2014-09-17 13:01:16,167 Setting 
 kerberos.useTicketCache to true
  INFO [Initialization] 2014-09-17 13:01:16,167 Clearing kerberos.ticketCache
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.useKeyTab to 
 true
  INFO [Initialization] 2014-09-17 13:01:16,168 Clearing kerberos.keyTab
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.renewTGT to 
 true
  INFO [Initialization] 2014-09-17 13:01:16,168 Setting kerberos.debug to false
  INFO [StompConnection receiver] 2014-09-17 13:01:16,171 Starting Cassandra 
 JMX metric collectors
  INFO [thrift-init] 2014-09-17 13:01:16,171 Connecting to Cassandra cluster: 
 x.x.x.x (port 9160)
  INFO [StompConnection receiver] 2014-09-17 13:01:16,187 New JMX connection 
 (127.0.0.1:7199)
  INFO [thrift-init] 2014-09-17 13:01:16,189 Downed Host Retry service started 
 with queue size -1 and retry delay 10s
  INFO [thrift-init] 2014-09-17 13:01:16,192 Registering JMX 
 me.prettyprint.cassandra.service_Agent 
 Cluster:ServiceType=hector,MonitorType=hector
  INFO [pdp-loader] 2014-09-17 13:01:16,231 in execute with client 
 org.apache.cassandra.thrift.Cassandra$Client@7a22c094
  INFO [pdp-loader] 2014-09-17 13:01:16,237 Attempting to load stored metric 
 values.
  INFO [thrift-init] 2014-09-17 13:01:16,240 Connected to Cassandra cluster: 
 PoC
  INFO [thrift-init] 2014-09-17 13:01:16,240 in execute with client 
 org.apache.cassandra.thrift.Cassandra$Client@7a22c094
  INFO [thrift-init] 2014-09-17 13:01:16,240 Using partitioner: 
 org.apache.cassandra.dht.Murmur3Partitioner
  INFO [jmx-metrics-1] 2014-09-17 13:01:21,181 New JMX connection 
 (127.0.0.1:7199)
 ERROR [StompConnection receiver] 2014-09-17 13:01:24,376 Failed to collect 
 machine info
 java.lang.NullPointerException
 at clojure.lang.Numbers.ops(Numbers.java:942)
 at clojure.lang.Numbers.divide(Numbers.java:157)
 at 
 opsagent.nodedetails.machine_info$get_machine_info.invoke(machine_info.clj:76)
 at 
 opsagent.nodedetails$get_static_properties$fn__4313.invoke(nodedetails.clj:161)
 at 
 opsagent.nodedetails$get_static_properties.invoke(nodedetails.clj:160)
 at 
 opsagent.nodedetails$get_longtime_values$fn__4426.invoke(nodedetails.clj:227)
 at 
 opsagent.nodedetails$get_longtime_values.invoke(nodedetails.clj:226)
 at 
 opsagent.nodedetails$send_all_nodedetails$fn__.invoke(nodedetails.clj:245)
 at opsagent.jmx$jmx_wrap.doInvoke(jmx.clj:111)
 at clojure.lang.RestFn.invoke(RestFn.java:410)
 at 
 opsagent.nodedetails$send_all_nodedetails.invoke(nodedetails.clj:241)
 at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:125)
 at clojure.lang.RestFn.invoke(RestFn.java:421)
 at 

[jira] [Created] (CASSANDRA-7877) Tunable cross datacenter replication

2014-09-04 Thread Hari Sekhon (JIRA)
Hari Sekhon created CASSANDRA-7877:
--

 Summary: Tunable cross datacenter replication
 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor


Right now tunable consistency allows for things like local quorum and quorum 
across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do a full QUORUM 
or ALL write level across multiple datacenters.
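
For reference, the write consistency levels that exist today are set 
per-statement from the client; a minimal sketch with the DataStax Java driver 
(contact point, keyspace, and table are invented for illustration - the 
proposed "LOCAL_QUORUM + 1 remote node" level does not exist):
{code}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

public class ConsistencyDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect("my_keyspace"); // hypothetical keyspace

        SimpleStatement write =
            new SimpleStatement("INSERT INTO kv (k, v) VALUES ('a', 'b')");

        // Today's middle option: a quorum in the local DC only...
        write.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
        // ...or a quorum in every DC, the expensive alternative the ticket
        // wants a middle ground for:
        // write.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM);

        session.execute(write);
        cluster.close();
    }
}
{code}
EACH_QUORUM is the closest existing level to the proposal, which is what the 
later comments on this ticket circle back to.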



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Summary: Tunable consistency across cross datacenters LOCAL_QUORUM + 1  
(was: Tunable cross datacenter replication)

 Tunable consistency across cross datacenters LOCAL_QUORUM + 1
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and quorum 
 across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do a full QUORUM or ALL write level across multiple datacenters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters.

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write level across multiple datacenters.


 Tunable consistency across cross datacenters LOCAL_QUORUM + 1
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write level across multiple datacenters.

  was:
Right now tunable consistency allows for things like local quorum and quorum 
across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do a full QUORUM 
or ALL write level across multiple datacenters.


 Tunable consistency across cross datacenters LOCAL_QUORUM + 1
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write level across multiple datacenters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across cross datacenters LOCAL_QUORUM + 1

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters.


 Tunable consistency across cross datacenters LOCAL_QUORUM + 1
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1  (was: 
Tunable consistency across cross datacenters LOCAL_QUORUM + 1)

 Tunable consistency across datacenters - LOCAL_QUORUM + 1
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other 
DC  (was: Tunable consistency across datacenters - LOCAL_QUORUM + 1)

 Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do each quorum in local + quorum in at least 1 other DC before 
confirming the write, to prevent data loss on site failure?

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...


 Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...
 EDIT: thinking about this more, this is probably reasonably covered by each 
 quorum, given that by the time you write to 1 replica node in the other DC 
 you may as well write to 2, in which case each quorum is probably the way to 
 go.
 Although if you have 3 or more datacenters, then perhaps there should be an 
 option to do each quorum in local + quorum in at least 1 other DC before 
 confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do quorum in the local DC + quorum in one other DC before confirming 
the write, to prevent data loss on site failure?

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do each quorum in local + quorum in at least 1 other DC before 
confirming the write, to prevent data loss on site failure?


 Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC
 -

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...
 EDIT: thinking about this more, this is probably reasonably covered by each 
 quorum, given that by the time you write to 1 replica node in the other DC 
 you may as well write to 2, in which case each quorum is probably the way to 
 go.
 Although if you have 3 or more datacenters, then perhaps there should be an 
 option to do quorum in the local DC + quorum in one other DC before 
 confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Summary: Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC 
 (was: Tunable consistency across datacenters - LOCAL_QUORUM + 1 at other DC)

 Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC
 --

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...
 EDIT: thinking about this more, this is probably reasonably covered by each 
 quorum, given that by the time you write to 1 replica node in the other DC 
 you may as well write to 2, in which case each quorum is probably the way to 
 go.
 Although if you have 3 or more datacenters, then perhaps there should be an 
 option to do quorum in the local DC + quorum in one other DC before 
 confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do local_quorum + quorum in one other DC, but not all of the DCs, 
before confirming the write, to prevent data loss on site failure?

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do quorum in the local DC + quorum in one other DC before confirming 
the write, to prevent data loss on site failure?


 Tunable consistency across datacenters - LOCAL_QUORUM + 1 other DC
 --

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...
 EDIT: thinking about this more, this is probably reasonably covered by each 
 quorum, given that by the time you write to 1 replica node in the other DC 
 you may as well write to 2, in which case each quorum is probably the way to 
 go.
 Although if you have 3 or more datacenters, then perhaps there should be an 
 option to do local_quorum + quorum in one other DC, but not all of the DCs, 
 before confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Description: 
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

If you have 3 or more datacenters, then perhaps there should be an option to do 
local_quorum + quorum in one other DC, but not all of the DCs, before 
confirming the write, to prevent data loss on site failure?
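
To make the proposed rule concrete, here is a purely hypothetical sketch - 
plain Java, not Cassandra's actual write path - of the ack counting such a 
level would imply: succeed once a quorum of local replicas plus a quorum in 
any one remote DC have acknowledged:
{code}
import java.util.Map;

public class LocalQuorumPlusOneDcSketch {
    static int quorum(int replicas) {
        return replicas / 2 + 1;
    }

    /** replicasPerDc: DC name -> replication factor; acksPerDc: DC name -> acks seen. */
    static boolean writeSatisfied(String localDc,
                                  Map<String, Integer> replicasPerDc,
                                  Map<String, Integer> acksPerDc) {
        boolean localOk =
            acksPerDc.getOrDefault(localDc, 0) >= quorum(replicasPerDc.get(localDc));
        // Quorum reached in at least one of the N-1 remote datacenters.
        boolean oneRemoteOk = replicasPerDc.entrySet().stream()
            .filter(e -> !e.getKey().equals(localDc))
            .anyMatch(e -> acksPerDc.getOrDefault(e.getKey(), 0) >= quorum(e.getValue()));
        return localOk && oneRemoteOk;
    }

    public static void main(String[] args) {
        // RF 3 in each of three DCs; local quorum met, quorum met in dc2 only.
        Map<String, Integer> rf = Map.of("dc1", 3, "dc2", 3, "dc3", 3);
        Map<String, Integer> acks = Map.of("dc1", 2, "dc2", 2, "dc3", 1);
        System.out.println(writeSatisfied("dc1", rf, acks)); // prints true
    }
}
{code}
This sits between LOCAL_QUORUM (which needs no remote acks at all) and 
EACH_QUORUM (which needs a quorum in every DC).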

  was:
Right now tunable consistency allows for things like local quorum and each 
quorum across multiple datacenters.

I propose something in between, where you achieve local quorum + 1 node at the 
other DC. This would make sure the data is written to the other datacenter for 
resilience purposes, but be better performing than having to do an each quorum 
write at both datacenters. Mind you, re-reviewing each_quorum, I'm not sure how 
much more performant this would be...

EDIT: thinking about this more, this is probably reasonably covered by each 
quorum, given that by the time you write to 1 replica node in the other DC you 
may as well write to 2, in which case each quorum is probably the way to go.

Although if you have 3 or more datacenters, then perhaps there should be an 
option to do local_quorum + quorum in one other DC, but not all of the DCs, 
before confirming the write, to prevent data loss on site failure?


 Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 
 out of N other DCs
 ---

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 If you have 3 or more datacenters, then perhaps there should be an option to 
 do local_quorum + quorum in one other DC, but not all of the DCs, before 
 confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7877) Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 out of N other DCs

2014-09-04 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7877:
---
Summary: Tunable consistency across multiple datacenters - LOCAL_QUORUM + 
quorum at 1 out of N other DCs  (was: Tunable consistency across datacenters - 
LOCAL_QUORUM + 1 other DC)

 Tunable consistency across multiple datacenters - LOCAL_QUORUM + quorum at 1 
 out of N other DCs
 ---

 Key: CASSANDRA-7877
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7877
 Project: Cassandra
  Issue Type: Improvement
Reporter: Hari Sekhon
Priority: Minor

 Right now tunable consistency allows for things like local quorum and each 
 quorum across multiple datacenters.
 I propose something in between, where you achieve local quorum + 1 node at 
 the other DC. This would make sure the data is written to the other 
 datacenter for resilience purposes, but be better performing than having to 
 do an each quorum write at both datacenters. Mind you, re-reviewing 
 each_quorum, I'm not sure how much more performant this would be...
 EDIT: thinking about this more, this is probably reasonably covered by each 
 quorum, given that by the time you write to 1 replica node in the other DC 
 you may as well write to 2, in which case each quorum is probably the way to 
 go.
 Although if you have 3 or more datacenters, then perhaps there should be an 
 option to do local_quorum + quorum in one other DC, but not all of the DCs, 
 before confirming the write, to prevent data loss on site failure?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-7860) csv2sstable - bulk load CSV data to SSTables similar to json2sstable

2014-09-02 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14118291#comment-14118291
 ] 

Hari Sekhon edited comment on CASSANDRA-7860 at 9/2/14 4:03 PM:


Do we know when 2.1.1 will be released, to try that COPY speed improvement in 
CQL?


was (Author: harisekhon):
Do we know when 2.1.1 will be released?

 csv2sstable - bulk load CSV data to SSTables similar to json2sstable
 

 Key: CASSANDRA-7860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7860
 Project: Cassandra
  Issue Type: New Feature
 Environment: DataStax Community Edition 2.0.9
Reporter: Hari Sekhon
Priority: Minor

 Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
 impractical to have to pre-convert to JSON before bulk loading to SSTables.
 CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
 directly took 28 minutes... while it only takes 60 secs to cat all that data 
 off the HDFS source filesystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7860) csv2sstable - bulk load CSV data to SSTables similar to json2sstable

2014-09-02 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14118291#comment-14118291
 ] 

Hari Sekhon commented on CASSANDRA-7860:


Do we know when 2.1.1 will be released?

 csv2sstable - bulk load CSV data to SSTables similar to json2sstable
 

 Key: CASSANDRA-7860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7860
 Project: Cassandra
  Issue Type: New Feature
 Environment: DataStax Community Edition 2.0.9
Reporter: Hari Sekhon
Priority: Minor

 Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
 impractical to have to pre-convert to JSON before bulk loading to SSTables.
 CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
 directly took 28 minutes... while it only takes 60 secs to cat all that data 
 off the HDFS source filesystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-7860) csv2sstable - bulk load CSV data to SSTables similar to json2sstable

2014-09-01 Thread Hari Sekhon (JIRA)
Hari Sekhon created CASSANDRA-7860:
--

 Summary: csv2sstable - bulk load CSV data to SSTables similar to 
json2sstable
 Key: CASSANDRA-7860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7860
 Project: Cassandra
  Issue Type: New Feature
 Environment: DataStax Community Edition 2.0.9
Reporter: Hari Sekhon
Priority: Blocker


Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
impractical to have to pre-convert to JSON before bulk loading to SSTables.

CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
directly took 28 minutes... while it only takes 60 secs to cat all the data 
off HDFS.
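
For what it's worth, Cassandra 2.0 ships CQLSSTableWriter, which can be 
scripted into roughly the csv2sstable asked for here. A rough sketch - the 
schema, column names, and paths are invented, and the CSV parsing is 
deliberately naive:
{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

import org.apache.cassandra.io.sstable.CQLSSTableWriter;

public class Csv2SSTable {
    public static void main(String[] args) throws Exception {
        // Hypothetical target table.
        String schema = "CREATE TABLE ks.events (id text PRIMARY KEY, payload text)";
        String insert = "INSERT INTO ks.events (id, payload) VALUES (?, ?)";

        File outputDir = new File("/tmp/ks/events"); // must exist beforehand
        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                .inDirectory(outputDir)
                .forTable(schema)
                .using(insert)
                .build();

        // Naive CSV handling: two columns, no quoting or escaping.
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",", 2);
                writer.addRow(cols[0], cols[1]);
            }
        }
        writer.close();
        // The generated sstables are then streamed in with sstableloader.
    }
}
{code}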



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-7860) csv2sstable - bulk load CSV data to SSTables similar to json2sstable

2014-09-01 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon updated CASSANDRA-7860:
---
Description: 
Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
impractical to have to pre-convert to JSON before bulk loading to SSTables.

CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
directly took 28 minutes... while it only takes 60 secs to cat all that data 
off the HDFS source filesystem.

  was:
Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
impractical to have to pre-convert to JSON before bulk loading to SSTables.

CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
directly took 28 minutes... while it only takes 60 secs to cat all the data 
off HDFS.


 csv2sstable - bulk load CSV data to SSTables similar to json2sstable
 

 Key: CASSANDRA-7860
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7860
 Project: Cassandra
  Issue Type: New Feature
 Environment: DataStax Community Edition 2.0.9
Reporter: Hari Sekhon
Priority: Blocker

 Need a csv2sstable utility to bulk load billions of rows of CSV data - it is 
 impractical to have to pre-convert to JSON before bulk loading to SSTables.
 CQL COPY really is too slow - a test of a mere 4-million-row, 6GB CSV loaded 
 directly took 28 minutes... while it only takes 60 secs to cat all that data 
 off the HDFS source filesystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-6195) Typo in error msg in cqlsh: Bad Request: Only superusers are allowed to perfrom CREATE USER queries

2013-10-13 Thread Hari Sekhon (JIRA)
Hari Sekhon created CASSANDRA-6195:
--

 Summary: Typo in error msg in cqlsh: Bad Request: Only superusers 
are allowed to perfrom CREATE USER queries
 Key: CASSANDRA-6195
 URL: https://issues.apache.org/jira/browse/CASSANDRA-6195
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Reporter: Hari Sekhon
Priority: Trivial


Typo in the error message: "perfrom" instead of "perform":

cqlsh
Connected to MyCluster1 at x.x.x.x:9160.
[cqlsh 4.0.1 | Cassandra 2.0.1 | CQL spec 3.0.0 | Thrift protocol 19.37.0]
Use HELP for help.
cqlsh> create user hari with password 'mypass';
Bad Request: Only superusers are allowed to perfrom CREATE USER queries



--
This message was sent by Atlassian JIRA
(v6.1#6144)