I just started with Cassandra. Currently I'm reading the following
tutorial about CQL:
http://www.datastax.com/docs/1.1/dml/using_cql#use-cql
But I already fail when trying to create a keyspace:
$ ./cqlsh --cql3
Connected to Test Cluster at localhost:9160.
[cqlsh 2.3.0 | Cassandra 1.2.0 | CQL
cqlsh> CREATE KEYSPACE demodb WITH replication = {'class':
'SimpleStrategy', 'replication_factor': 3};
cqlsh> use demodb;
cqlsh:demodb>
On Tue, Jan 22, 2013 at 7:04 PM, Paul van Hoven
paul.van.ho...@googlemail.com wrote:
CREATE KEYSPACE demodb WITH strategy_class = 'SimpleStrategy'
AND
But these keys have the same prefix. So they will be distributed on the
same node, right?
2013/1/21 Jason Brown jasbr...@netflix.com
The reason for multiple keys (and, by extension, multiple columns) is to
better distribute the write/read load across the cluster as keys will
(hopefully) be
Hi,
No, the keys are hashed to be distributed, at least if you use
RandomPartitioner. From
http://www.datastax.com/docs/1.0/cluster_architecture/partitioning:
"To distribute the data evenly across the number of nodes, a hashing
algorithm creates an MD5 hash value of the row key."
.vegard,
-
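The hashing described above can be sketched in a few lines. This is only an illustration, not Cassandra's actual token math (RandomPartitioner additionally maps the hash into its token range); the `user:N` keys are made up:

```python
import hashlib

def md5_token(row_key: str) -> int:
    # RandomPartitioner derives a token from the MD5 hash of the row key.
    return int(hashlib.md5(row_key.encode("utf-8")).hexdigest(), 16)

# Keys sharing a prefix still hash to very different tokens,
# so they spread across nodes rather than clustering together.
for key in ("user:1", "user:2", "user:3"):
    print(key, md5_token(key))
```

This is why a common key prefix does not pin rows to one node: the hash of the whole key, not its prefix, decides placement.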
Okay, that worked. Why is the statement from the tutorial wrong? I
mean, why would a company like DataStax post something like this?
2013/1/22 Jason Wee peich...@gmail.com:
cqlsh> CREATE KEYSPACE demodb WITH replication = {'class': 'SimpleStrategy',
'replication_factor': 3};
cqlsh> use demodb;
You're right Vegard! Thanks
2013/1/22 Vegard Berget p...@fantasista.no
Hi,
No, the keys are hashed to be distributed, at least if you use
RandomPartitioner.
From http://www.datastax.com/docs/1.0/cluster_architecture/partitioning:
To distribute the data evenly across the number of nodes, a
Maybe a typo, or they forgot to update the doc... but anyway, you can use the help
command when you are in cqlsh. For example:
cqlsh> HELP CREATE_KEYSPACE;
CREATE KEYSPACE ksname
WITH replication = {'class':'strategy' [,'option':val]};
On Tue, Jan 22, 2013 at 8:06 PM, Paul van
I have Cassandra 1.1.7 cluster with 4 nodes in 2 datacenters (2+2).
Replication is configured as DC1:2,DC2:2 (i.e. every node holds the entire
data).
I am load-testing counter increments at the rate of about 10k per second.
All writes are directed to two nodes in DC1 (DC2 nodes are basically
Alright. Thanks for your quick help. :)
2013/1/22 Jason Wee peich...@gmail.com:
maybe typo or forget to update the doc... but anyway, you can use the help
command when you are in cqlsh.. for example:
cqlsh> HELP CREATE_KEYSPACE;
CREATE KEYSPACE ksname
WITH replication =
The output of this command seems to make no sense unless I think of it as 5
completely separate histograms that just happen to be displayed together.
Using this example output, should I read it as: my reads all took either 1
or 2 SSTables? And separately, I had write latencies of 3, 7, 19? And
On 2013-01-22, at 8:59 AM, Brian Tarbox tar...@cabotresearch.com wrote:
The output of this command seems to make no sense unless I think of it as 5
completely separate histograms that just happen to be displayed together.
Using this example output should I read it as: my reads all took
Thank you! Since this is a very non-standard way to display data, it might
be worth a better explanation in the various online documentation sets.
Thank you again.
Brian
On Tue, Jan 22, 2013 at 9:19 AM, Mina Naguib mina.nag...@adgear.com wrote:
On 2013-01-22, at 8:59 AM, Brian Tarbox
This was described in good detail here:
http://thelastpickle.com/2011/04/28/Forces-of-Write-and-Read/
On Tue, Jan 22, 2013 at 9:41 AM, Brian Tarbox tar...@cabotresearch.com wrote:
Thank you! Since this is a very non-standard way to display data it
might be worth a better explanation in the
Indeed, but how many Cassandra users have the good fortune to stumble
across that page? Just saying that the explanation of the very powerful
nodetool commands should be more front and center.
Brian
On Tue, Jan 22, 2013 at 10:03 AM, Edward Capriolo edlinuxg...@gmail.com wrote:
This was
You were most likely looking at the wrong documentation. The syntax for
CQL3 changed between Cassandra 1.1 and 1.2. When I google "cassandra
CQL3", the first result is the Cassandra 1.1 documentation about CQL3, which
is wrong for 1.2.
Make sure you are looking at the documentation for the version
Hi everyone,
I am looking for any places where the Cassandra source code structure would be
explained.
Are there any articles / wiki available?
Kind regards,
Radek Gruchalski
radek.gruchal...@technicolor.com | radek.gruchal...@portico.io
http://wiki.apache.org/cassandra/ArchitectureInternals
From: Radek Gruchalski radek.gruchal...@portico.io
Reply-To: user@cassandra.apache.org
To: user@cassandra.apache.org
Date: Tuesday, January 22,
Thank you. I found this, but was hoping there was something broader out there.
This will have to be enough.
Kind regards,
Radek Gruchalski
radek.gruchal...@technicolor.com | radek.gruchal...@portico.io |
I agree that Cassandra cfhistograms is probably the most bizarre metrics
display I have ever come across, although it's extremely useful.
I believe the offset is actually the metric it has tracked (the x-axis on a
traditional histogram), and the number under each column is how many times that
value has
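That reading of cfhistograms output can be sketched as follows. The offsets and counts here are made up for illustration, not real nodetool output; each data column is its own independent histogram keyed by the Offset column:

```python
# cfhistograms prints several independent histograms side by side:
# the Offset column is the bucket value (x-axis), and each data column
# holds the count of observations that fell into that bucket.
# Hypothetical data: offsets and an "SSTables per read" column.
offsets = [1, 2, 3, 7, 19]
sstables_per_read = [100, 50, 0, 0, 0]  # counts per offset bucket

total = sum(sstables_per_read)
# Fraction of reads that touched exactly 1 SSTable (bucket at offset 1):
frac_one = sstables_per_read[0] / total
print(f"{frac_one:.0%} of reads hit a single SSTable")
```

With these made-up numbers, 100 of 150 reads touched one SSTable and 50 touched two; the write-latency column would be read the same way, entirely independently.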
I sent a note to our docs team to add a warning/note to the docs there
about the difference between the syntax in 1.1 and 1.2.
Thanks!
On Tue, Jan 22, 2013 at 10:49 AM, Colin Blower cblo...@barracuda.com wrote:
You were most likely looking at the wrong documentation. The syntax for
CQL3
I have seen logs about that. I didn't worry much, since the JVM's GC was
not under pressure.
When Cassandra logs a ParNew event from the GCInspector, that is time the server
is paused / frozen. CMS events have a very small pause, but they take a
non-trivial amount of CPU time.
If
For background, see my talk here:
http://www.datastax.com/events/cassandrasummit2012/presentations
Mutations to a row are isolated. In practice this means that simultaneous
writes to the same row are possible; however, the first write thread to complete
wins, and the other threads start their work
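When concurrent writes to the same row do land, Cassandra reconciles each column by timestamp: the value with the highest write timestamp wins. A loose sketch of that per-column rule (illustration only, not the actual implementation; the column names and timestamps are made up):

```python
# Sketch of per-column last-write-wins reconciliation by timestamp.
# Each column maps to (timestamp, value); the highest timestamp wins.
def reconcile(a: dict, b: dict) -> dict:
    merged = dict(a)
    for col, (ts, val) in b.items():
        if col not in merged or ts > merged[col][0]:
            merged[col] = (ts, val)
    return merged

w1 = {"name": (100, "alice"), "age": (100, "30")}
w2 = {"age": (200, "31")}
print(reconcile(w1, w2))  # "age" from w2 wins: newer timestamp
```

Note the merge is commutative: reconciling in either order yields the same row, which is what lets replicas converge.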
William,
If the solution from Binh works for you, can you please submit a ticket
to https://issues.apache.org/jira/browse/CASSANDRA
The error message could be better if that is the case.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
On Wed, Jan 16, 2013 at 1:30 PM, Nicolas Lalevée
nicolas.lale...@hibnet.org wrote:
Here is the long story.
After some long useless staring at the monitoring graphs, I gave a try to
using the openjdk 6b24 rather than openjdk 7u9
OpenJDK 6 and 7 are both not recommended with regards to
Thanks Aaron and Jim for your reply. The data import is done. We have about
135G on each node, and it's about 28K SSTables. For normal operation, we only
have about 90 writes per second, but when I ran nodetool compactionstats, it
remains at 9 and hardly changes. I guess it's just an estimated
No, I have the other files unfortunately and I had it fail once and succeed
every time after.
I'm tracking the external information of sstable2json more carefully now
(exit status, stdout, stderr), so hopefully if it happens again I can be
more help.
will
On Tue, Jan 22, 2013 at 3:38 PM, aaron
On Tue, Jan 22, 2013 at 5:03 AM, Sergey Olefir solf.li...@gmail.com wrote:
I am load-testing counter increments at the rate of about 10k per second.
Do you need highly performant counters that count accurately, without
meaningful chance of over-count? If so, Cassandra's counters are
probably not
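The accuracy caveat with Cassandra counters comes from increments not being idempotent: if an increment times out but was actually applied, a client retry counts it twice. A toy illustration (no Cassandra involved, just the retry hazard):

```python
# Toy illustration: counter increments are not idempotent.
# If a client retries a timed-out increment that actually applied,
# the counter over-counts.
counter = 0

def increment():
    global counter
    counter += 1

increment()  # applied, but suppose the acknowledgement was lost
increment()  # client retries, producing a double count
print(counter)  # 2, though logically one increment was intended
```

An idempotent design (e.g. writing uniquely-identified events and counting them later) avoids this, at the cost of read-time aggregation.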
What version are you using? Are you seeing any compaction-related assertions
in the logs?
Might be https://issues.apache.org/jira/browse/CASSANDRA-4411
We experienced this problem of the count only decreasing to a certain number
and then stopping. If you are idle, it should go to 0. I have
Do you have a suggestion as to what could be a better fit for counters?
Something that can also replicate across DCs and survive link breakdown
between nodes (across DCs)? (and no, I don't need 100.00% precision
(although it would be nice obviously), I just need to be pretty close for
the values
On Tue, Jan 22, 2013 at 2:57 PM, Sergey Olefir solf.li...@gmail.com wrote:
Do you have a suggestion as to what could be a better fit for counters?
Something that can also replicate across DCs and survive link breakdown
between nodes (across DCs)? (and no, I don't need 100.00% precision
On Wed 23 Jan 2013 01:10:58 AM CST, Radek Gruchalski wrote:
Thank you. I found this but was hoping that there's anything broader
out there.
This will have to be enough.
Kind regards,
Radek Gruchalski
radek.gruchal...@technicolor.com |
Replication is configured as DC1:2,DC2:2 (i.e. every node holds the entire
data).
I really recommend using RF 3.
The error is the coordinator node protecting itself.
Basically it cannot handle the volume of local writes plus the writes for HH
(hinted handoff). The number of in-flight hints is greater
It turns out that having gc_grace=0 isn't required to produce the problem.
My colleague did a lot of digging into the compaction code and we think
he's found the issue. It's detailed in
https://issues.apache.org/jira/browse/CASSANDRA-5182
Basically tombstones for a row will not be removed from
Thanks for letting us know. I also have some tables with a lot of
activity and very short TTLs, and while I haven't experienced this problem,
it's good to know just in case.
On Tue, Jan 22, 2013 at 7:35 PM, Bryan Talbot btal...@aeriagames.com wrote:
It turns out that having gc_grace=0 isn't
Thanks!
A node writing to the log because it cannot handle load is much different from a
node writing to the log just because. Although the amount of logging is still
excessive, would it really hurt anything to add something like "can't
handle load" to the exception message?
On the subject of RF:3 -- could