Hi All,
I want to dump a query result into a csv file with custom column delimiter.
Please help.
Regards,
Rahul Bhardwaj
I think this might be what you are looking for
http://www.datastax.com/documentation/cql/3.1/cql/cql_reference/copy_r.html
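For the custom column delimiter specifically, cqlsh's COPY takes a DELIMITER option. A sketch, using the keyspace/table mentioned later in this thread:

```
COPY clickstream.business_feed TO 'business_feed.csv'
  WITH DELIMITER = '|' AND HEADER = true;
```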
Andi
From: Rahul Bhardwaj [rahul.bhard...@indiamart.com]
Sent: 12 January 2015 09:22
To: user
Subject: how dump a query result into csv file
Sorry, consider these stats instead:
nodetool cfstats clickstream.business_feed_new
Keyspace: clickstream
Read Count: 2108
Read Latency: 8.148092030360532 ms.
Write Count: 923452
Write Latency: 2.8382575358545976 ms.
Pending Flushes: 0
Table:
There are likely two things occurring:
1) the cfhistograms error is due to
https://issues.apache.org/jira/browse/CASSANDRA-8028
which is resolved in 2.1.3. It looks like voting is under way for the 2.1.3
release. As rcoli mentioned, you are running the latest open-source release of
C*, which should be treated as beta until
Are you using compression on the sstables? If so, possibly you're CPU
bound instead of disk bound.
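One way to check: if I remember correctly, cfstats reports an "SSTable Compression Ratio" line per table (a ratio between 0 and 1 when compression is in effect), e.g.:

```
nodetool cfstats clickstream.business_feed_new | grep "SSTable Compression Ratio"
```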
On Mon, Jan 12, 2015 at 3:47 AM, William Saar william.s...@king.com wrote:
Hi,
We are running a test with Cassandra 2.1.2 on Fusion I/O drives where we
load about 2 billion rows of data
I might be misinterpreting you, but it seems you are only using one seed
per node. Is there a specific reason for that? A node can have multiple
seeds in its seed list. It is my understanding that typically, every node
in a cluster has the same seed list.
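As a sketch, giving every node the same seed list in cassandra.yaml would look something like this (the addresses are illustrative):

```yaml
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1,10.0.0.2,10.0.0.3"
```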
On Sun, Jan 11, 2015 at 10:03 PM, Tim
*Environment*
- Cassandra 2.1.0
- 5 nodes in one DC (DC_A), 4 nodes in second DC (DC_B)
- 2500 writes per seconds, I write only to DC_A with local_quorum
- minimal reads (usually none, sometimes few)
*Problem*
After a few weeks of running I cannot read any data from my cluster,
Hi Tim, replies inline below.
On Sun, Jan 11, 2015 at 8:03 PM, Tim Dunphy bluethu...@gmail.com wrote:
Hey all,
I've been experimenting with Cassandra on a small scale and in my own
sandbox for a while now. I'm pretty used to working with it to get small
clusters up and running and
Hi All,
While using bulk loader we are getting this error:
sstableloader -d 162.217.99.217
/var/lib/cassandra/data/clickstream/business_feed_new
ERROR 17:50:48,218 Unable to initialize MemoryMeter (jamm not specified as
javaagent). This means Cassandra will be unable to measure object sizes
Hi,
Thanks for your quick reply.
I know this command, but for one table with around 10 lakh (1 million) rows,
this command (COPY table_name TO 'table_name.csv') gets stuck for a long time
and also slows down my cluster.
Please find below the table stats:
nodetool cfstats clickstream.business_feed
Keyspace: clickstream
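As an alternative to COPY for a large table, you can page through the query with a driver and write the CSV yourself, which also lets you throttle the export. A minimal sketch of the CSV side with a custom delimiter; the rows here are mocked, and in practice they would come from a paged SELECT via the Python driver (an assumption, installed separately):

```python
import csv

# In practice these rows would come from a paged query, e.g. with the
# DataStax Python driver (hypothetical connection details):
#   from cassandra.cluster import Cluster
#   session = Cluster(['162.217.99.217']).connect('clickstream')
#   rows = session.execute('SELECT id, url, ts FROM business_feed')
rows = [(1, '/home', '2015-01-12'), (2, '/cart', '2015-01-12')]

with open('business_feed.csv', 'w') as f:
    writer = csv.writer(f, delimiter='|')   # custom column delimiter
    writer.writerow(['id', 'url', 'ts'])    # header row
    for row in rows:
        writer.writerow(row)
```

Because the driver pages results (fetch_size), this keeps memory flat on the client and avoids the long cqlsh hang.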
Hi,
We are running a test with Cassandra 2.1.2 on Fusion I/O drives where we load
about 2 billion rows of data during a few hours each night onto a 6-node
cluster, but compactions that run 24/7 don't seem to be keeping up as the
number of SSTables keep growing and our disks seem way
Hi all,
I'm trying to install Cassandra 2.1.2 in Solaris 11 but I'm getting a
core dump at startup.
Any help is appreciated, since I can't change the operating system...
*My setup is:*
- Solaris 11
- JDK build 1.8.0_25-b17
*The error:*
appserver02:/opt/apache-cassandra-2.1.2/bin$
On Mon, Jan 12, 2015 at 4:26 AM, Rahul Bhardwaj
rahul.bhard...@indiamart.com wrote:
sstableloader -d 162.217.99.217
/var/lib/cassandra/data/clickstream/business_feed_new
ERROR 17:50:48,218 Unable to initialize MemoryMeter (jamm not specified as
javaagent). This means Cassandra will be
To address your remarks precisely:
1) About the 30 sec GCs: I know my cluster has had this problem for some time;
we added the magic flag, but the results will take ~2 weeks to show (as I
presented in the screenshot on StackOverflow). If you have any idea how I can
fix/diagnose this problem, I will be very grateful.
2) It is probably
Hey Guys,
I am seeking advice on designing a system that maintains a historical view of
a user's activities over the past year. Each user can have different
activities: email_open, email_click, item_view, add_to_cart, purchase etc.
The query I would like to do is, for example,
Find all customers who
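One common sketch for this kind of history is a partition per user, clustered by time, with a one-year default TTL to age data out. All names and types here are illustrative, not from the thread:

```
CREATE TABLE user_activity (
    user_id  text,
    ts       timestamp,
    activity text,        -- email_open, email_click, item_view, add_to_cart, purchase
    item_id  text,
    PRIMARY KEY ((user_id), ts, activity)
) WITH CLUSTERING ORDER BY (ts DESC)
  AND default_time_to_live = 31536000;  -- one year, in seconds
```

Queries like "all activities for a user in a time range" then become a single-partition slice; queries across all users (e.g. "all customers who purchased") would need a second table keyed by activity.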
On Mon, Jan 12, 2015 at 5:46 PM, Sotirios Delimanolis sotodel...@yahoo.com
wrote:
So do we have to guarantee that the schema change will be backwards
compatible? Which node should send the schema change query? Should we just
make all nodes send it and ignore failures?
- Yes is the easiest
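On the "which node sends it" point: the usual pattern is to have one node apply the change and wait for schema agreement before rolling the application. A quick way to check from the command line:

```
nodetool describecluster
```

All nodes should report the same schema version before you restart the application nodes.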
The heap usage is pretty low ( less than 700MB) when the application starts. I
can see the heap usage gradually climbing once the application starts. C* does
not log any errors before OOM happens.
Data is on EBS. Write throughput is quite high with two applications
simultaneously pumping data
Hey all,
Assuming a cluster with X > 1 application nodes backed by Y > 1 Cassandra
nodes, how do you best apply a schema modification?
Typically, such a schema modification is going to be done in parallel with code
changes (for querying the table) so all application nodes have to be restarted.
Are you changing schema so frequently that you really need to automate this
process?
I guess not. Though, if such a (consistent) process existed, I'd love to use
it.
The single node solution will have to do. Because of the source code change, it
seems I still have to make sure that the patch
I think it's more accurate to say that auto paging prevents one type
of OOM. It's premature to diagnose it as 'not happening'.
What is heap usage when you start? Are you storing your data on EBS? What
kind of write throughput do you have going on at the same time? What errors
do you have in
Does your use case include many tombstones? If yes then that might explain
the OOM situation.
If you want to know for sure, you can enable heap dump generation on
crash in cassandra-env.sh: just uncomment JVM_OPTS="$JVM_OPTS
-XX:+HeapDumpOnOutOfMemoryError" and then run your query again. The
Hi,
When I connect to C* with the driver, I found some warnings in the log (I increased
tombstone_failure_threshold to 15 to see the warning)
WARN [ReadStage:5] 2015-01-13 12:21:14,595 SliceQueryFilter.java (line 225)
Read 34188 live and 104186 tombstoned cells in system.schema_columns (see
Hi All,
We are using C* 2.1. We need to export the data of one table (consisting of
about 10 lakh records) using the COPY command. After executing the COPY
command, cqlsh hangs and gets stuck. Please help in resolving this, or suggest
an alternative. Please find below the table stats:
Keyspace: clickstream
Read
There are no tombstones.
Mohammed
On Jan 12, 2015, at 9:11 PM, Dominic Letz
dominicl...@exosite.commailto:dominicl...@exosite.com wrote:
Does your use case include many tombstones? If yes then that might explain the
OOM situation.
If you want to know for sure you can enable the heap dump
Probably a bad answer, but I was able to run it on JDK 1.7. So if possible,
downgrade your JDK version and try. I hit the same block on Red Hat
Enterprise...
On Jan 12, 2015 9:31 PM, Bernardino Mota bernardino.m...@inovaworks.com
wrote:
Hi all,
I'm trying to install Cassandra 2.1.2 in Solaris 11 but
Hi All!
We are hosting a Cassandra Meetup Group at our office in Phoenix, AZ on Monday
(1/26). If anyone is in the Phoenix area and would like to attend please let me
know or RSVP through the Cassandra Meetup page:
http://www.meetup.com/Phoenix-Cassandra-User-Group/events/219687372/.
Here is
Hello all,
In my implementation of the FutureCallback interface, in the onSuccess method
I print Thread.currentThread().getName(). What I saw was that there is a
thread pool... That is all fine, but it seems to me that the pool does not have
that many threads. About 10 from my observations - I did not
If you're getting 30-second GCs, this all by itself could and probably
does explain the problem.
If you're writing exclusively to A, and there are frequent partitions
between A and B, then A is potentially working a lot harder than B, because
it needs to keep track of hinted handoffs to replay
Hi Bogdan,
This question would be better on the specific driver's mailing list.
Assuming you are using the Java driver the mailing list is [1]. As for your
question look into PoolingOptions [2] that you pass when configuring the
Cluster instance.
[1]: