Thank you for reporting this.
I've filed https://issues.apache.org/jira/browse/CASSANDRA-11333.
On Thu, Mar 10, 2016 at 6:16 AM, Rakesh Kumar wrote:
> Cassandra : 3.3
> CQLSH : 5.0.1
>
> If there is a typo in the column name of the COPY command, we get this:
>
> copy mytable
> (event_id,event_class_cd,event_ts,receive_ts,event_source_instance,client_id,client_id_type,event_tag,event_udf,client_event_date)
> from '/pathtofile.dat'
> with DELIMITER =
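For reference, the general shape of a cqlsh COPY ... FROM command is below (only a few of the columns from the quoted command are shown, and the '|' delimiter is an assumed example, since the original message is cut off before the value):

```sql
COPY mytable (event_id, event_class_cd, event_ts, receive_ts)
FROM '/pathtofile.dat'
WITH DELIMITER = '|';  -- '|' is an assumed example; the original value is truncated
```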
Just installed Cassandra 1.1.1 and ran:
root@carlo-laptop:/tmp# cassandra-cli -h localhost
Connected to: Test Cluster on localhost/9160
Welcome to Cassandra CLI version 1.1.1
Type 'help;' or '?' for help.
Type 'quit;' or 'exit;' to quit.
[default@unknown] create keyspace accounts
... with
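The command above is truncated in the archive; for reference, a complete create keyspace statement in the 1.1-era cassandra-cli looks roughly like this (the strategy and replication options are assumptions, not from the original message):

```
create keyspace accounts
  with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options = {replication_factor:1};
```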
The property file snitch isn't used by default. Did you change your
cassandra.yaml to use PropertyFileSnitch so it reads
cassandra-topology.properties?
Also the formatting in your dc property file isn't right. It should be
'ip=dc:rack'. So:
127.0.0.1=dc-test:my-notebook
On Mon, Jun 11, 2012 at
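A minimal sketch of the two pieces Nick mentions, assuming a single-node setup (everything beyond the 127.0.0.1 line from the thread is an assumption):

```
# cassandra.yaml -- switch the snitch so the topology file is read
endpoint_snitch: PropertyFileSnitch

# cassandra-topology.properties -- one ip=dc:rack entry per node
127.0.0.1=dc-test:my-notebook
# catch-all for nodes not listed explicitly (assumed entry)
default=dc-test:rack1
```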
I forgot to change cassandra.yaml to use PropertyFileSnitch AND
cassandra-topology syntax was incorrect. Thanks, Nick.
I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
cassandra.yaml and wrong syntax in cassandra-topology.properties.
PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
instead of 128k.
> I don't know why I got no error in 1.0.8 with PropertyFileSnitch in
> cassandra.yaml and wrong syntax in cassandra-topology.properties.
Not sure either.
> PS: I had to change JVM_OPTS in /etc/cassandra/cassandra-env.sh to use 160k
> instead of 128k. This has not been fixed?
Still marked as
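The change described above (raising the per-thread stack size from 128k to 160k) would look roughly like this in cassandra-env.sh; -Xss is the standard JVM stack-size flag, and 160k is the value from the message:

```shell
# Sketch of the edit to /etc/cassandra/cassandra-env.sh:
# append the larger stack size to the options passed to the JVM.
JVM_OPTS="$JVM_OPTS -Xss160k"
echo "$JVM_OPTS"
```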
I'm running into a quirky issue with Brisk 1.0 Beta 2 (w/ Cassandra 0.8.1).
I think the last node in our cluster is having problems (10.201.x.x).
OpsCenter and nodetool ring (run from that node) show the node as down, but
the rest of the cluster sees it as up.
If I run nodetool ring from one of
For the Too many open files error see:
http://www.datastax.com/docs/0.8/troubleshooting/index#java-reports-an-error-saying-there-are-too-many-open-files
Restart the node and see if the node is able to complete the pending repair
this time. Your node may have just been stuck on this error that
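To check whether a node is close to the file-descriptor ceiling that causes "Too many open files", you can inspect the shell limit before starting Cassandra (the 32768 value below is an example, not a recommendation from the thread):

```shell
# Show the current per-process open-file limit; the stock default of 1024
# is commonly too low for a Cassandra node with many SSTables.
ulimit -n
# To raise it for the current shell before launching Cassandra
# (example value; tune for your workload):
# ulimit -n 32768
```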
blow all the data away ... how do you do that? What is the timestamp
precision that you are using when creating key/col or key/supercol/col items?
I have seen a fail to write a key when the timestamp is identical to the
previous timestamp of a deleted key/col. While I didn't examine the source
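The behaviour described above can be sketched in Python. This is a simplified model of last-write-wins reconciliation with tombstones, not Cassandra's actual source: when timestamps tie, the deletion wins, so a write that reuses the delete's exact timestamp appears to fail.

```python
# Simplified sketch of timestamp-based cell reconciliation.
# A cell is (timestamp, value); value None models a tombstone (deletion).

def reconcile(a, b):
    """Return the winning cell. Higher timestamp wins; on a tie,
    a tombstone beats a live value."""
    if a[0] != b[0]:
        return a if a[0] > b[0] else b
    # Equal timestamps: prefer the tombstone if either cell is one.
    if a[1] is None or b[1] is None:
        return (a[0], None)
    # Two live values with equal timestamps: pick deterministically.
    return a if a[1] >= b[1] else b

delete = (1000, None)    # deletion at timestamp 1000
rewrite = (1000, "v2")   # new write reusing the same timestamp
print(reconcile(delete, rewrite))  # the tombstone wins: (1000, None)
```

A write with any strictly later timestamp would win, which is why reusing the delete's timestamp looks like a silently dropped write.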
Fixed for 0.6.3: https://issues.apache.org/jira/browse/CASSANDRA-1042
On Fri, Jun 18, 2010 at 2:49 PM, Corey Hulen c...@earnstone.com wrote:
We are using MapReduce to periodically verify and rebuild our secondary
indexes along with counting total records. We started to notice double
counting
Awesome...thanks.
I just downloaded the patch and applied it and verified it fixes our
problems.
what's the ETA on 0.6.3? (debating on whether to tolerate it or maintain
our own 0.6.2+patch).
-Corey
On Fri, Jun 18, 2010 at 8:21 PM, Jonathan Ellis jbel...@gmail.com wrote:
> Fixed for 0.6.3:
Looks like the end of June.
On Fri, Jun 18, 2010 at 8:38 PM, Corey Hulen c...@earnstone.com wrote:
> Awesome...thanks.
> I just downloaded the patch and applied it and verified it fixes our
> problems.
> what's the ETA on 0.6.3? (debating on whether to tolerate it or maintain
> our own 0.6.2+patch).