Short answer: you'll need to pass something like --cqlversion=3.0.0 to
cqlsh.
Longer answer: when a CQL client connects (and cqlsh is one), it asks to
use a specific version of CQL. If it asks for a version that is newer than
what the server knows, you get the error message you have above. So
OK great, and thank you!
[root@beta:~]# cqlsh --cqlversion=3.0.0
Connected to mycluster Cluster at beta.mydomain.com:9160.
[cqlsh 4.0.0 | Cassandra 1.2.2 | CQL spec 3.0.0 | Thrift protocol 19.35.0]
Use HELP for help.
cqlsh
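For what it's worth, one quick way to discover which versions a server accepts is to deliberately ask for one it certainly does not have; in my experience the server's error lists the supported versions. A hedged sketch (host name reused from above, exact wording varies by release):

cqlsh --cqlversion=9.9.9 beta.mydomain.com
# expected shape of the error:
#   ... Provided version 9.9.9 is not supported ...
#   (supported: 2.0.0, 3.0.0) ...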
On Tue, Sep 17, 2013 at 2:41 AM, Sylvain Lebresne
You may want to be careful: until compaction, column 1 could be stored in both
files. When column 1 has been changed, Cassandra returns the latest version of
column 1 even though two SSTables contain it. (At least that is the way I
understand it.)
Later,
Dean
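To see Dean's point concretely, here is a minimal sketch; the keyspace "ks", column family "cf1", the key column name, and the data paths are all assumptions for illustration:

# write a column, flush, overwrite it, flush again
cqlsh <<'EOF'
UPDATE ks.cf1 SET col1 = 'v1' WHERE key = 'row1';
EOF
nodetool flush ks cf1     # first SSTable now holds col1 = 'v1'
cqlsh <<'EOF'
UPDATE ks.cf1 SET col1 = 'v2' WHERE key = 'row1';
EOF
nodetool flush ks cf1     # second SSTable holds col1 = 'v2'
# both SSTables contain col1 until compaction merges them; a read
# reconciles by timestamp and returns 'v2'
bin/sstable2json /var/lib/cassandra/data/ks/cf1/ks-cf1-ic-1-Data.db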
From: Takenori Sato
I am very new to Cassandra; I just started exploring.
I am running a single-node Cassandra server and am facing a problem seeing
the status of Cassandra with the nodetool command.
I have the hostname configured on my VM as "myMachineIP cass1" in /etc/hosts,
and
I configured my
Have you tried specifying your hostname (not localhost) in cassandra.yaml
and starting it?
Regards,
Shahab
On Tue, Sep 17, 2013 at 8:39 AM, pradeep kumar pradeepkuma...@gmail.com wrote:
I am very new to Cassandra; I just started exploring.
I am running a single-node Cassandra server and am facing a
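A minimal sketch of Shahab's suggestion; the config path and the host name "cass1" (taken from the /etc/hosts entry above) are assumptions:

grep -E 'listen_address|rpc_address' /etc/cassandra/cassandra.yaml
# set both to the resolvable hostname instead of localhost, e.g.:
#   listen_address: cass1
#   rpc_address: cass1
# restart Cassandra, then point nodetool at the same host:
nodetool -h cass1 ring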
Hi Takenori,
Thanks for your quick reply. Your explanation makes it clear to me what
compaction means, and I now also understand that the same row key can exist in
multiple SSTable files.
But beyond that, I want to know what happens if one row's data is too large to
fit in one SSTable file. In your
Thanks, Dean, for the clarification.
But if I put hundreds of megabytes of data into one row through one put, you
mean Cassandra will put all of it into one SSTable, even though the data is
very big, right? Let's assume in this case the memtable in memory reaches its
limit because of this change. What I want to
java8964, basically, are you asking what will happen if we put a large amount
of data into one column of one row at once? Will this blob of data representing
one column of one row (i.e., a cell) be split into multiple SSTables? Or in
such particular cases will it always be one extra-large
Netflix created file streaming into Cassandra in Astyanax specifically because
writing too big a column cell is a bad thing. The limit really depends on the
use case. Do you have servers writing thousands of 200 MB files at the same
time? If so, Astyanax streaming may be a better way to go there.
Another question: the SSTable files generated by the incremental backup are not
really ONLY the incremental delta, right? They will include more than the delta.
I will use the example to show my question:
first, we have this data in the SSTable file 1:
rowkey(1), columns
On Tue, Sep 17, 2013 at 6:54 AM, Shahab Yunus shahab.yu...@gmail.com wrote:
java8964, basically, are you asking what will happen if we put a large amount
of data into one column of one row at once? Will this blob of data representing
one column of one row (i.e., a cell) be split into
I am running shuffle on a cluster after upgrading to 1.2.X, and I don't
understand how to check progress.
I'm counting the lines of cassandra-shuffle ls, and it decreases VERY
slowly. Sometimes not at all after 24 hours of processing.
Is that value accurate? Does the shuffle operation support
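For reference, the count described above is easy to script; a minimal sketch (check cassandra-shuffle --help on your build for the exact flags):

# number of pending relocations; it should shrink as shuffle proceeds
cassandra-shuffle ls | wc -l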
On Tue, Sep 17, 2013 at 12:13 PM, Juan Manuel Formoso jform...@gmail.com wrote:
I am running shuffle on a cluster after upgrading to 1.2.X, and I don't
understand how to check progress.
If your shuffle succeeds, you will be the first reported case of shuffle
succeeding on a non-test cluster.
On Thu, Sep 5, 2013 at 6:14 AM, Chris Burroughs
chris.burrou...@gmail.com wrote:
We have a 2-DC cluster running Cassandra 1.2.9. The DCs are actual, physically
separate DCs on opposite coasts of the US, not just logical ones. The primary
use of this cluster is CL.ONE reads out of a single
Thanks, Robert, for the answer. It makes sense. If that happens, it means your
design or use case needs some rework ;)
Regards,
Shahab
On Tue, Sep 17, 2013 at 2:37 PM, java8964 java8964 java8...@hotmail.com wrote:
Another question related to the SSTable files generated in the
Cassandra 2.0, cqlsh 3.0.
I was trying to import a CSV file (fields delimited by |); however, the import
chokes after a certain number of lines with the following error:
Bad Request: line 1:300 no viable alternative at input '-'
Aborting import at record #514 (line 515). Previously-inserted values still
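One thing worth checking: whether the offending field is quoted in the CSV. An unquoted value containing a character such as '-' can trip the parser. A hedged sketch of the COPY options involved, assuming a table ks.t whose three column names are made up:

cqlsh <<'EOF'
COPY ks.t (id, name, value) FROM 'data.csv'
  WITH DELIMITER = '|' AND QUOTE = '"';
EOF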
sstableloader is the way to go to load up the new cluster.
On Tuesday, September 17, 2013, Juan Manuel Formoso wrote:
If your shuffle succeeds, you will be the first reported case of
shuffle succeeding on a non-test cluster.
Awesome! :O
I'll try to migrate to a new cluster then.
Any
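For the record, the basic invocation is short; a minimal sketch (the target address 10.0.0.1 and the data path are assumptions, and each column family's directory is streamed separately):

# stream one column family's SSTables to the new cluster
sstableloader -d 10.0.0.1 /var/lib/cassandra/data/ks/cf1/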
So in fact, the incremental backup in Cassandra just hard-links all the new
SSTable files generated during the incremental backup period. It could
contain any data, not just the data updated/inserted/deleted in this period,
correct?
Correct.
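A minimal sketch of what that looks like on disk (paths are assumptions):

grep incremental_backups /etc/cassandra/cassandra.yaml
#   incremental_backups: true
# after each flush, newly written SSTables are hard-linked into the
# backups/ directory next to the live data:
ls /var/lib/cassandra/data/ks/cf1/backups/
# a hard link shares its inode with the live file, so the link count on
# a freshly flushed SSTable should read 2:
stat -c '%h %n' /var/lib/cassandra/data/ks/cf1/*-Data.db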
But over time, some old enough SSTable files
On Tue, Sep 17, 2013 at 4:01 PM, Juan Manuel Formoso jform...@gmail.com wrote:
Anyone who knows for sure if this would work?
Sankalp Kohli (whose last name is phonetically awesome!) has pointed you in
the correct direction.
To be a bit more explicit:
1) determine if sstable names are unique
Thanks! But, shouldn't I be able to just stop Cassandra, copy the files,
change the config and restart? Why should I drain?
My RF + consistency level can handle one replica down (I forgot to mention
that in my OP, apologies).
Would it work in theory?
On Tuesday, September 17, 2013, Robert Coli
Will the new cluster be evenly balanced? Remember that the old one was
pre-1.2.X, so I had no vnodes.
I haven't used that tool, will look it up.
Thanks for the suggestion!
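(On the balance question: if the new cluster has vnodes enabled before its very first start, it should balance itself as nodes bootstrap. A hedged sketch of the relevant setting, config path assumed:

grep num_tokens /etc/cassandra/cassandra.yaml
#   num_tokens: 256    # enables vnodes; leave initial_token unset
)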
On Tuesday, September 17, 2013, David McNelis wrote:
sstableloader is the way to go to load up the new cluster.
On
On Tue, Sep 17, 2013 at 5:46 PM, Takenori Sato ts...@cloudian.com wrote:
So in fact, the incremental backup in Cassandra just hard-links all the
new SSTable files generated during the incremental backup period. It
could contain any data, not just the data updated/inserted/deleted in
On Tue, Sep 17, 2013 at 5:57 PM, Juan Manuel Formoso jform...@gmail.com wrote:
Thanks! But, shouldn't I be able to just stop Cassandra, copy the files,
change the config and restart? Why should I drain?
If you drain, you reduce to zero the chance of having some problem with the
SSTables.
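The sequence under discussion is short; a minimal sketch (the destination host and paths are assumptions):

nodetool drain      # flush memtables and stop accepting writes
# stop the cassandra process, then copy the quiescent data files:
rsync -a /var/lib/cassandra/data/ newhost:/var/lib/cassandra/data/
# update cassandra.yaml on the destination, then start cassandra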
Thanks, Rob, for clarifying!
- Takenori
(2013/09/18 10:01), Robert Coli wrote:
On Tue, Sep 17, 2013 at 5:46 PM, Takenori Sato ts...@cloudian.com wrote:
So in fact, the incremental backup in Cassandra just hard-links
all the new SSTable files generated
That is very disappointing to hear. Vnodes support is one of the main
reasons we're upgrading from 1.1.X to 1.2.X.
So you're saying the only feasible way of enabling vnodes on an upgraded C*
1.2 is by doing fork writes to a brand-new cluster + a bulk load of SSTables
from the old cluster? Or is it
Quote:
To be clear, the incremental backup feature backs up the data modified in that
period, because it writes only those files to the incremental backup dir as
hard links, between full snapshots.
I thought I was being clear, but your clarification confused me again. My
understanding so far
As Rob mentioned, no one (myself included) has successfully used shuffle in
the wild (that I've heard of).
Shuffle is *supposed* to be a transparent background process... and is
designed, in theory, to take a long time to run (weeks is the right way to
think of it).
Be sure to keep an eye on
I have been trying to make it work non-stop since Friday afternoon. I
officially gave up today and I'm going to go the sstableloader route.
I wrote a little of what I tried here:
http://seniorgeek.com.ar/blog/2013/09/16/tips-for-running-cassandra-shuffle/
(I have yet to update it with the fact
Hello
I just saw this error. Anyone knows how to fix it?
[root@gary-vm1 apache-cassandra-2.0.0]# bin/cassandra -f
xss = -ea -javaagent:bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities
-XX:ThreadPriorityPolicy=42 -Xms4014M -Xmx4014M -Xmn400M
-XX:+HeapDumpOnOutOfMemoryError -Xss180k
Exception
Thanks, Jason. Does Node.js work with 2.0? I'm wondering which version I
should run. Thanks.
On Tue, Sep 17, 2013 at 8:24 PM, Jason Wee peich...@gmail.com wrote:
Cassandra 2.0? Then use Oracle or OpenJDK version 7.
Jason
On Wed, Sep 18, 2013 at 11:21 AM, Gary Zhao garyz...@gmail.com
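A quick pre-flight check before launching 2.0; a minimal sketch (the JAVA_HOME path is an assumption for a typical Linux install):

java -version       # should report a 1.7.x JVM
# if several JDKs are installed, point Cassandra at the right one:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
bin/cassandra -f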
Yong,
It seems there is still a misunderstanding.
But there is no way we can be sure that these SSTable files will ONLY
contain modified data. So the statement being quoted above is not exactly
right. I agree that all the modified data in that period will be in the
incremental SSTable files,
Cassandra 2.0 needs to run on JDK 7.
On 09/17/2013 11:21 PM, Gary Zhao wrote:
Hello
I just saw this error. Anyone knows how to fix it?
[root@gary-vm1 apache-cassandra-2.0.0]# bin/cassandra -f
xss = -ea -javaagent:bin/../lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities
Sorry, I have no knowledge of Node.js; someone else might know.
Jason
On Wed, Sep 18, 2013 at 11:29 AM, Gary Zhao garyz...@gmail.com wrote:
Thanks Jason. Does Node.js work with 2.0? I'm wondering which version
should I run. Thanks.
On Tue, Sep 17, 2013 at 8:24 PM, Jason Wee
It depends on which driver you're using, and drivers for 1.2 may indeed mostly
work with 2.0. I've been using Astyanax (for Java) with 2.0 even though it
doesn't specifically support the new release.
On Sep 17, 2013, at 11:34 PM, Jason Wee peich...@gmail.com wrote:
Sorry, I have no knowledge