RE: Are Cassandra writes faster than reads?

2016-11-08 Thread Rajesh Radhakrishnan
Hi, I just found that reducing the batch size below 20 also increases write speed and reduces memory usage (especially for the Python driver). Kind regards, Rajesh R From: Ben Bromhead [b...@instaclustr.com] Sent: 07 November 2016 05:44 To:
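A minimal sketch of the small-batch approach described above, assuming the DataStax cassandra-driver and a hypothetical ks.events table (table names and contact points are placeholders, not taken from the thread):

#---
from uuid import uuid4
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, BatchType

BATCH_SIZE = 20  # keep batches small, as suggested in this thread

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

batch = BatchStatement(batch_type=BatchType.UNLOGGED)
pending = 0
for i in range(10000):
    batch.add(insert, (uuid4(), "payload-%d" % i))
    pending += 1
    if pending == BATCH_SIZE:
        session.execute(batch)   # flush every BATCH_SIZE statements
        batch = BatchStatement(batch_type=BatchType.UNLOGGED)
        pending = 0
if pending:
    session.execute(batch)       # flush the final partial batch
cluster.shutdown()
#---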

RE: Cassandra Python Driver : execute_async consumes lots of memory?

2016-11-08 Thread Rajesh Radhakrishnan
Hi Lahiru, Great! You know what, reducing the batch size from 50 to 20 solved my issue, and the memory problem is gone. Thank you very much, good job! Next I will try using Spark to speed it up. Kind regards, Rajesh Radhakrishnan From: Lahiru Gamathige [lah

RE: Are Cassandra writes faster than reads?

2016-11-08 Thread Rajesh Radhakrishnan
Hi, In my case writing is slower using the Python driver with batch execution and prepared statements. I am looking at different ways to speed it up, as I am trying to write 100 * 200 million records. Cheers Rajesh R From: Vikas Jaiman [er.vikasjai...@gmail.com]
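A rough sketch of one way to push write throughput with the Python driver, using a prepared statement plus execute_concurrent_with_args instead of batches; the keyspace, table and concurrency value are assumptions for illustration:

#---
from uuid import uuid4
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")
insert = session.prepare("INSERT INTO records (id, value) VALUES (?, ?)")

# Parameters for one slice of the workload; in practice this would be
# driven from the source data in chunks.
params = [(uuid4(), "value-%d" % i) for i in range(100000)]

# concurrency caps the number of in-flight requests, which also bounds memory
execute_concurrent_with_args(session, insert, params, concurrency=100)
cluster.shutdown()
#---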

RE: Cassandra Python Driver : execute_async consumes lots of memory?

2016-11-08 Thread Rajesh Radhakrishnan
, 2016 at 8:51 AM, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.gov.uk> wrote: Hi, We are trying to inject millions of data into a table by execu

Cassandra Python Driver : execute_async consumes lots of memory?

2016-11-07 Thread Rajesh Radhakrishnan
cluster = None; del cluster; cassandraCluster = None; del cassandraCluster; gc.collect() === CODE === Kind regards, Rajesh Radhakrishnan
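Besides releasing the cluster objects as in the snippet above, a common way to keep execute_async() memory bounded is to cap the number of in-flight requests. A minimal sketch, assuming the DataStax cassandra-driver and a hypothetical ks.events table:

#---
from threading import Semaphore
from uuid import uuid4
from cassandra.cluster import Cluster

MAX_IN_FLIGHT = 64   # tune to taste; unfinished futures cannot pile up beyond this

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")
insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")
slots = Semaphore(MAX_IN_FLIGHT)

def release(_):
    # called on success or failure; real code would also log the errors
    slots.release()

for i in range(1000000):
    slots.acquire()   # blocks once MAX_IN_FLIGHT requests are pending
    future = session.execute_async(insert, (uuid4(), "payload-%d" % i))
    future.add_callbacks(callback=release, errback=release)

cluster.shutdown()
#---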

RE: cqlsh fails to connect

2016-10-28 Thread Rajesh Radhakrishnan
Hi John Z, Did you try running with the latest Python 2.7.11 or 2.7.12? Kind regards, Rajesh Radhakrishnan From: Ioannis Zafiropoulos [john...@gmail.com] Sent: 27 October 2016 22:16 To: user@cassandra.apache.org Subject: cqlsh fails to connect I upgraded DSE

RE: Error creating pool to /IP_ADDRESS33:9042 (Proving Cassandra's NO SINGLE point of failure)

2016-10-26 Thread Rajesh Radhakrishnan
Hi Vladimir, Thank you for the response. Yes, I added all three node IPs while connecting to the cluster via the driver. It is not a failed operation. While the script is running (it takes some time to read millions of rows of data), I intentionally put one node down to see how the
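For reference, a minimal sketch of the client-side setup being tested here: give the driver all three contact points and a consistency level that the surviving replicas can satisfy, so one node going down does not stop the reads (the IPs reuse the masked placeholders from this thread; the keyspace, table and replication factor are assumptions):

#---
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["IP_ADDRESS219", "IP_ADDRESS229", "IP_ADDRESS230"])
session = cluster.connect("ks")

# With replication_factor=3, QUORUM (2 of 3) still succeeds with one node down;
# drop to ConsistencyLevel.ONE if the keyspace has a lower replication factor.
query = SimpleStatement("SELECT * FROM sam LIMIT 10",
                        consistency_level=ConsistencyLevel.QUORUM)
for row in session.execute(query):
    print(row)
cluster.shutdown()
#---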

Error creating pool to /IP_ADDRESS33:9042 (Proving Cassandra's NO SINGLE point of failure)

2016-10-24 Thread Rajesh Radhakrishnan
Hi, I have a 3-node Cassandra cluster. Cassandra version: dsc-cassandra-2.1.5. Python Cassandra driver: 2.5.1. The nodes run in Red Hat virtual machines. Node IP info: Node 1: IP_ADDRESS219, Node 2: IP_ADDRESS229, Node 3: IP_ADDRESS230 (IP_ADDRESS219 is masked for this email which

RE: During writing data into Cassandra 3.7.0 using Python driver 3.7, connection is sometimes lost because of a server NullPointerException (Help please!)

2016-09-23 Thread Rajesh Radhakrishnan
ot;cassandra/cluster.py", line 3781, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:73073) cassandra.protocol.ServerError: ====== Kind regards, Rajesh Radhakrishnan From: li...@beobal.com [li...@beobal.com] on behalf of Sam Tunnicliffe [s...@b

RE: data file directory path customization

2016-09-23 Thread Rajesh Radhakrishnan
, Rajesh Radhakrishnan From: Mehdi Bada [mehdi.b...@dbi-services.com] Sent: 23 September 2016 08:58 To: user@cassandra.apache.org Subject: data file directory path customization Hi all, With the new Apache Cassandra 3.7 version, it is possible to set up

During writing data into Cassandra 3.7.0 using Python driver 3.7, connection is sometimes lost because of a server NullPointerException (Help please!)

2016-09-23 Thread Rajesh Radhakrishnan
.jar:3.7.0] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-3.7.0.jar:3.7.0] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] == Thank you. Kind regar

RE: UUID coming as int while using SPARK SQL

2016-05-25 Thread Rajesh Radhakrishnan
ise my theory is incorrect :) Cheers, ml On Tue, May 24, 2016 at 6:57 AM, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.gov.uk> wrote: Hi Michael, Thank you for

RE: UUID coming as int while using SPARK SQL

2016-05-24 Thread Rajesh Radhakrishnan
converting that int from decimal to hex and inserting dashes in the appropriate spots - or go the other way. Also, you are looking at different rows, based upon your selection criteria... ml On Tue, May 24, 2016 at 6:23 AM, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.gov.uk>
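Python's uuid module can do this hex-and-dashes conversion directly, which may be easier than doing it by hand; a small sketch with a made-up example value:

#---
import uuid

# A uuid column surfaced by Spark SQL as a 128-bit integer (example value)
as_int = 0x123e4567e89b12d3a456426614174000
print(uuid.UUID(int=as_int))   # -> 123e4567-e89b-12d3-a456-426614174000

# Going the other way: dashed string back to its integer representation
print(uuid.UUID("123e4567-e89b-12d3-a456-426614174000").int == as_int)  # True
#---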

UUID coming as int while using SPARK SQL

2016-05-24 Thread Rajesh Radhakrishnan
Hi, I have a Cassandra keyspace, but reading the data (especially UUID columns) via Spark SQL using Python is not returning the correct value. Cassandra: -- My table 'SAM' is described below: CREATE TABLE ks.sam (id uuid, dept text, workflow text, type double, PRIMARY KEY (id,
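For context, a rough sketch of how a table like ks.sam might be read through Spark SQL from Python with the DataStax spark-cassandra-connector (Spark 1.x style; the connector packaging, session setup and column handling below are assumptions, not details from this thread), normalising an id that comes back as a plain integer:

#---
import uuid
from numbers import Integral
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="read-ks-sam")
sqlContext = SQLContext(sc)

df = (sqlContext.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="ks", table="sam")
      .load())

for row in df.select("id", "dept").collect():
    raw = row["id"]
    # If the uuid column arrives as an integer (the behaviour reported here),
    # convert it back to a proper UUID value.
    id_value = uuid.UUID(int=raw) if isinstance(raw, Integral) else raw
    print("%s %s" % (id_value, row["dept"]))
sc.stop()
#---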

RE: Getting code=2200 [Invalid query] message=Invalid column name ... while executing ALTER statement

2015-11-20 Thread Rajesh Radhakrishnan
o weight to empty columns. 2. If the set of columns is really dynamic, would using one or more map column(s) be better? Avoiding dynamic schema modification and concurrent schema changes is always better. On Fri, Nov 13, 2015 at 7:40 AM, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.go
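A small sketch of the map-column suggestion, driven from the Python driver (keyspace, table and column names are hypothetical): one map<text, text> column absorbs new attributes as plain data updates, so no ALTER TABLE, and therefore no concurrent schema change, is needed.

#---
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")

session.execute("""
    CREATE TABLE IF NOT EXISTS samples (
        id text PRIMARY KEY,
        attributes map<text, text>
    )
""")

# Adding a previously unseen attribute is just an update of the map.
update = session.prepare(
    "UPDATE samples SET attributes = attributes + ? WHERE id = ?")
session.execute(update, ({"new_attribute": "some value"}, "sample-1"))
cluster.shutdown()
#---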

RE: Getting code=2200 [Invalid query] message=Invalid column name ... while executing ALTER statement

2015-11-13 Thread Rajesh Radhakrishnan
On 13 November 2015 at 11:14, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.gov.uk> wrote: Hi, I am using Cassandra 2.1.5 in a cluster of two nodes (running CentOS) and using Python driver to connect to Cassandra. My Python code sn

RE: Getting code=2200 [Invalid query] message=Invalid column name ... while executing ALTER statement

2015-11-13 Thread Rajesh Radhakrishnan
are generally a bad idea, especially if they are rapid. You should rethink your approach. On Fri, Nov 13, 2015 at 7:20 AM, Rajesh Radhakrishnan <rajesh.radhakrish...@phe.gov.uk> wrote: Thank you Carlos for looking. But when I rand
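If schema changes cannot be avoided entirely, one mitigation is to issue them serially from a single client and give the driver generous time to observe schema agreement before the next change; a minimal sketch (column names are hypothetical, and max_schema_agreement_wait is in seconds):

#---
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"], max_schema_agreement_wait=30)
session = cluster.connect("ks")

for name in ["col_a", "col_b", "col_c"]:
    # One schema change at a time; execute() returns after the driver has
    # waited (up to max_schema_agreement_wait) for the nodes to agree.
    session.execute("ALTER TABLE samples ADD %s text" % name)
cluster.shutdown()
#---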

Getting code=2200 [Invalid query] message=Invalid column name ... while executing ALTER statement

2015-11-13 Thread Rajesh Radhakrishnan
Hi, I am using Cassandra 2.1.5 in a cluster of two nodes (running CentOS) and using the Python driver to connect to Cassandra. My Python code snippet is shown here: #--- import time, os,