I've observed that reducing the fetch size results in better latency (isn't
that obvious :-)). I tried fetch sizes varying from 100 down to 1, and saw a
lot of errors at 1. I haven't tried modifying the number of columns.
Let me start a new thread focused on fetch size.
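The tradeoff behind that observation can be sketched in plain Java (no driver calls; the row count and fetch sizes below are made-up illustration values): a smaller fetch size means each page arrives sooner, but streaming the full result costs proportionally more round trips, and tiny pages amplify any per-request failure rate.

```java
public class FetchSizeTradeoff {
    // Round trips needed to stream totalRows pages of fetchSize rows each.
    static int roundTrips(int totalRows, int fetchSize) {
        return (totalRows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        int totalRows = 10_000;
        for (int fetchSize : new int[] {1, 100, 5000}) {
            System.out.println("fetchSize=" + fetchSize
                    + " -> roundTrips=" + roundTrips(totalRows, fetchSize));
        }
    }
}
```

With fetchSize=1 every row is its own round trip, which is consistent with seeing many more errors at that setting.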
On Wed, Apr 2, 2014 at
Hello All,
We have a schema which can be modelled as *(studentID int, subjectID int,
marks int, PRIMARY KEY(studentID, subjectID))*. There can be ~1M studentIDs,
and for each studentID there can be ~10K subjectIDs. The queries can use
studentID alone or the studentID-subjectID pair. We have a 3 node (each
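In CQL that schema and the two query shapes would look roughly like this (table name and literal values are placeholders):

```sql
CREATE TABLE marks (
    studentID int,
    subjectID int,
    marks int,
    PRIMARY KEY (studentID, subjectID)
);

-- all subjects for one student: one partition, up to ~10K clustered rows
SELECT * FROM marks WHERE studentID = 42;

-- one specific mark
SELECT marks FROM marks WHERE studentID = 42 AND subjectID = 7;
```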
Hello,
I am building a write client in Java to insert records into Cassandra 2.0.5.
I am using the Datastax Java driver.
Problem: the data model is dynamic. By dynamic, I mean that the number of
columns and the datatypes of the columns will be given as input by the user. It
has only 1
Hello Varsha
Your best bet is to go with the blob type, serializing all the data into bytes.
Another alternative is to use text and serialize to JSON.
For the dynamic columns, use clustering columns in CQL3 with a blob/text value column.
Regards
Duy Hai DOAN
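A minimal sketch of the blob approach using only the JDK (the column names and string-only values here are hypothetical; a real client would bind the resulting byte[] to a blob column via the Datastax driver):

```java
import java.io.*;
import java.util.*;

public class DynamicRowCodec {
    // Serialize a user-defined (name -> value) row into one byte[] for a blob column.
    static byte[] encode(Map<String, String> columns) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(columns.size());
        for (Map.Entry<String, String> e : columns.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeUTF(e.getValue());
        }
        return buf.toByteArray();
    }

    // Recover the dynamic columns from the stored blob.
    static Map<String, String> decode(byte[] blob) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(blob));
        int n = in.readInt();
        Map<String, String> columns = new LinkedHashMap<>();
        for (int i = 0; i < n; i++) {
            columns.put(in.readUTF(), in.readUTF());
        }
        return columns;
    }

    public static void main(String[] args) throws IOException {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("temperature", "21.5");
        row.put("unit", "C");
        byte[] blob = encode(row);
        System.out.println(decode(blob)); // round-trips to the original map
    }
}
```

The JSON-in-text alternative is the same idea with a human-readable encoding; blob avoids escaping issues at the cost of opacity in cqlsh.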
On Wed, Apr 2, 2014 at 11:21 AM, Raveendran,
I want to export all the data of a particular column family to a text file
from the Cassandra cluster.
I tried:
copy keyspace.mycolumnfamily to '/root/ddd/xx.csv';
It gave me a timeout error.
I tried the following in cassandra.yaml:
request_timeout_in_ms: 1000
read_request_timeout_in_ms: 1000
http://mail-archives.apache.org/mod_mbox/cassandra-user/201309.mbox/%3C9AF3ADEDDFED4DDEA840B8F5C6286BBA@vig.local%3E
http://stackoverflow.com/questions/18872422/rpc-timeout-error-while-exporting-data-from-cql
Google for more.
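Also note that 1000 ms is *lower* than the shipped defaults (request_timeout_in_ms defaults to 10000), which makes timeouts more likely, not less. To give a long COPY more headroom, raise the values instead, e.g. (illustrative values):

```yaml
# cassandra.yaml -- raise, don't lower, the defaults for a long-running export
request_timeout_in_ms: 60000
read_request_timeout_in_ms: 60000
range_request_timeout_in_ms: 60000   # COPY TO does range scans
```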
Best regards / Pagarbiai
Viktor Jevdokimov
Senior Developer
Hi,
Thanks for replying.
I didn't quite get what you meant by "use clustering columns in CQL3 with
blob/text type".
I have elaborated my problem statement below.
Assume the schema of the keyspace into which random records need to be inserted
is given in the following format:
KeySpace Name : KS_1
Cassandra 1.2.15, using commodity hardware.
On Tue, Apr 1, 2014 at 6:37 PM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Apr 1, 2014 at 3:24 PM, Redmumba redmu...@gmail.com wrote:
Is it possible to have true drop-in node replacements? For example, I
have a cluster of 51 Cassandra nodes,
Thanks for the reply. Most of the solutions provided on the web involve
some kind of 'where' clause to extract one set of data, then export the next
set, until done. I have a column family with no timestamp and no other column
I can use to filter the data. One other solution provided was to use
pagination,
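For a full scan with no timestamp or filter column, the usual pagination trick is to page on the partition key's token rather than on a data column (table and column names below are placeholders):

```sql
-- first page
SELECT key, col1 FROM mycolumnfamily LIMIT 1000;

-- each subsequent page resumes after the last partition seen
SELECT key, col1 FROM mycolumnfamily
 WHERE token(key) > token('last-key-from-previous-page')
 LIMIT 1000;
```

Repeating this until a page comes back short walks the whole column family in token order without needing any filterable column.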