Can you include your read code?
On Aug 18, 2015 5:50 AM, Hervé Rivière herve.rivi...@zenika.com wrote:
Hello,
I have an issue with an ErrorMessage code= [Server error]
message=java.lang.NullPointerException when I query a table with static
fields (without a WHERE clause) with Cassandra
What happens to indexes when a table is truncated?
Are the indexes removed, or do they stay around?
Rahul Gupta
DEKA Research & Development http://www.dekaresearch.com/
340 Commercial St Manchester, NH 03101
P: 603.666.3908 extn. 6504 | C: 603.718.9676
On Wed, Aug 19, 2015 at 11:05 AM, Rahul Gupta rgu...@dekaresearch.com
wrote:
What happens to indexes when a table is truncated?
Are the indexes removed, or do they stay around?
Secondary indexes are stored on disk in the same data directory as the data
they index, and they are truncated when that data is truncated.
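This is easy to check against a test node. A minimal sketch, assuming a single local node reachable via cqlsh; the keyspace and table names (ks.users) are made up for illustration:

```shell
# Hypothetical keyspace/table, single-node replication for a local test.
cqlsh -e "CREATE KEYSPACE IF NOT EXISTS ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
cqlsh -e "CREATE TABLE IF NOT EXISTS ks.users (id int PRIMARY KEY, email text);"
cqlsh -e "CREATE INDEX IF NOT EXISTS ON ks.users (email);"
cqlsh -e "INSERT INTO ks.users (id, email) VALUES (1, 'a@example.com');"

# Truncate the table, then query through the index.
cqlsh -e "TRUNCATE ks.users;"

# The index definition itself survives the truncate; only its data is gone,
# so this query simply returns no rows rather than failing.
cqlsh -e "SELECT * FROM ks.users WHERE email = 'a@example.com';"
```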
Hello guys,
I have a cassandra cluster 2.1 comprised of 4 nodes.
I removed a lot of data in a Column Family, then manually ran a
compaction on this Column Family on every node. After doing that, if I
query that data, Cassandra correctly says the data is not there. But the
space on disk has not been reclaimed.
Possibly you have snapshots? If so, use nodetool to clear them.
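For reference, checking for and clearing snapshots looks roughly like this. A sketch, assuming nodetool is on the PATH and the node is running; the keyspace name and the data directory path are illustrative and depend on your installation:

```shell
# Look for snapshot directories under the data path
# (/var/lib/cassandra/data is the common default, adjust to your install).
find /var/lib/cassandra/data -type d -name snapshots

# Clear all snapshots for one keyspace (name is illustrative).
nodetool clearsnapshot my_keyspace

# Or clear every snapshot on the node.
nodetool clearsnapshot
```

Note that snapshots are hard links to SSTables, so the disk space they pin is only freed once the snapshot is cleared and the underlying SSTables have been compacted away.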
On Wed, Aug 19, 2015 at 4:54 PM, Analia Lorenzatto
analialorenza...@gmail.com wrote:
Hello guys,
I have a cassandra cluster 2.1 comprised of 4 nodes.
I removed a lot of data in a Column Family, then manually ran a
Hello Michael,
Thanks for responding!
I do not have snapshots on any node of the cluster.
Saludos / Regards.
Analía Lorenzatto.
"Happiness is not something ready-made. It comes from your own actions."
- Dalai Lama
On 19 Aug 2015 6:19 pm, Laing, Michael michael.la...@nytimes.com wrote:
Dear Alain,
Thanks again for your precious help.
I might help, but I need to know what you have done recently (changed the RF,
added or removed a node, cleanups, anything else, in as much detail as possible...)
I have a cluster of 5 nodes all running Cassandra 2.1.8.
I have a fixed schema which never changes. I
Hello Doan,
Thank you for your answer !
In my Spark job I changed spark.cassandra.input.split.size
(spark.cassandra.input.fetch.size_in_rows isn't recognized by my v1.2.3
spark-cassandra-connector) from 8,000 to 200 (so that creates a lot more
tasks per node), but I still get the NullPointerException.
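For anyone tuning the same knob: the property can also be passed on the command line instead of in code. A sketch, where the class name, jar name, and contact point are placeholders:

```shell
# spark.cassandra.input.split.size is the connector 1.2.x property
# controlling roughly how many CQL rows land in each Spark partition;
# smaller values mean more, smaller tasks.
spark-submit \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --conf spark.cassandra.input.split.size=200 \
  --class com.example.MyJob \
  my-job.jar
```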