They both have 0 for their token, and this is stored in their System keyspace.
Scrub them and start again.
But I found that the tokens that were being generated would require way too
much memory
Token assignments have nothing to do with memory usage.
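For context, balanced initial_token values for RandomPartitioner are just evenly spaced points on the 0..2**127 ring, and computing them costs essentially no memory; a hedged sketch (assuming RandomPartitioner, which the thread does not confirm):

```python
# Evenly spaced initial_token values for RandomPartitioner,
# whose token ring spans 0 .. 2**127. The tokens are plain big
# integers; generating them requires no meaningful memory.
def balanced_tokens(node_count):
    ring_size = 2 ** 127
    return [i * ring_size // node_count for i in range(node_count)]

for node, token in enumerate(balanced_tokens(4)):
    print(f"node {node}: initial_token = {token}")
```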
m1.micro instances
You are better off
How did you create the table?
Anyway, that looks like a bug. I *think* they should go here:
http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/issues/list
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On
Hello,
We see that BulkOutputFormat fails to stream data from multiple reduce
instances that run on the same host.
We get the same error messages that issue
https://issues.apache.org/jira/browse/CASSANDRA-4223 tries to address.
It looks like (ip-address + in_out_flag + atomic integer) is not unique.
Alexei,
You were right.
It was already fixed to use UUID for streaming session and released in 1.2.0.
See https://issues.apache.org/jira/browse/CASSANDRA-4813.
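To illustrate why the counter-based scheme collides: each reduce task runs in its own JVM, and a per-JVM atomic counter restarts at 0 in each, so two tasks on one host generate identical (host, flag, counter) triples. A small hedged simulation (the tuple layout is an assumption based on the description above, not the actual Cassandra code):

```python
import itertools
import uuid

def counter_session_ids(host, sessions):
    # Per-JVM atomic counter: every JVM starts at 0, so two JVMs on
    # the same host emit the same (host, flag, counter) triples.
    counter = itertools.count()
    return [(host, "out", next(counter)) for _ in range(sessions)]

# Two reduce tasks (separate JVMs) streaming from the same host:
jvm_a = counter_session_ids("10.0.0.1", 2)
jvm_b = counter_session_ids("10.0.0.1", 2)
print(set(jvm_a) & set(jvm_b))  # every session id collides

# The CASSANDRA-4813 approach: a random UUID per streaming session.
uuid_a = [uuid.uuid4() for _ in range(2)]
uuid_b = [uuid.uuid4() for _ in range(2)]
print(set(uuid_a) & set(uuid_b))  # set()
```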
On Thursday, January 24, 2013 at 6:49 AM, Alexei Bakanov wrote:
Hello,
We see that BulkOutputFormat fails to stream data from
Oh, that's nice! Thanks!
On 24 January 2013 13:55, Yuki Morishita mor.y...@gmail.com wrote:
Alexei,
You were right.
It was already fixed to use UUID for streaming session and released in
1.2.0.
See https://issues.apache.org/jira/browse/CASSANDRA-4813.
On Thursday, January 24, 2013 at 6:49
Cool, thanks for the advice Aaron. I actually did get this working before I
read your reply. The trick for me, apparently, was to use the IP of the
first node in the seeds setting of each successive node. But I like the
idea of using larges for an hour or so and terminating them for some basic
-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Thu, Jan 24, 2013 at 8:29 PM
Subject: {kundera-discuss} Kundera 2.3 released
To: kundera-disc...@googlegroups.com kundera-disc...@googlegroups.com
Hi All,
We are happy to announce the release of Kundera
Hi,
I have spent half of the day today trying to get a new Cassandra
cluster working. I have set up a single data center cluster, using
NetworkTopologyStrategy, DC1:3.
I'm using the latest version of the Astyanax client to connect. After many
hours of debugging, I found out that the problem may be in
Hi,
Astyanax is not 1.2 compatible yet
https://github.com/Netflix/astyanax/issues/191
Eran planned to make it in 1.57.x.
On Thursday, 24 January 2013, Gabriel Ciuloaica wrote:
Hi,
I have spent half of the day today trying to make a new
I do not think that it has anything to do with Astyanax; after I
recreated the keyspace with cassandra-cli, everything is working fine.
Also, as I mentioned below, not even nodetool describering foo
showed the correct information for the tokens, endpoint_details, if
the
The reason for the error was that I opened the connection to the database incorrectly.
I did:
con = cql.connect(host, port, keyspace)
but correct is:
con = cql.connect(host, port, keyspace, cql_version='3.0.0')
Now it works fine. Thanks for reading.
2013/1/24 aaron morton aa...@thelastpickle.com:
Gabriel,
It looks like you used DC1 for the datacenter name in your replication
strategy options, while the actual datacenter name was DC-1 (based on the
nodetool status output). Perhaps that was causing the problem?
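For reference, the datacenter key in the replication options has to match the snitch's datacenter name character for character; a sketch of the kind of statement involved (the keyspace name and replication factor are illustrative, not taken from the thread):

```sql
-- 'DC1' must match the datacenter name the snitch reports
-- (see `nodetool status`); 'DC-1' would be a different, unknown DC.
CREATE KEYSPACE foo
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```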
On Thu, Jan 24, 2013 at 1:57 PM, Gabriel Ciuloaica gciuloa...@gmail.comwrote:
Hi Tyler,
No, it was just a typo in the email; I changed the DC names in the email
after copy/pasting from the tool output.
It is quite easy to reproduce (assuming you have a correct configuration
for NetworkTopologyStrategy, with vnodes (default, 256)):
1. launch cqlsh and create the
Hi Tim
If you want to check out Cassandra on AWS you should also have a look at
www.instaclustr.com.
We are still very much in Beta (so if you come across anything, please let
us know), but if you have a few minutes and want to deploy a cluster in
just a few clicks I highly recommend trying
Hi all,
I started looking at Cassandra a while ago and got used to the Thrift API. I
put it on the back burner for a while, though, until now. To get back up to
speed I have read a lot of documentation on the DataStax website, and it
appears that the Thrift API is no longer considered the ideal way
Some of the mapping libraries can help translate rows into objects so you
don't end up with so much DAO code.
PlayOrm has a whole feature list of things that can be helpful. I am sure
other high level clients have stuff as well that can speed up development time.
Either CQL or a higher-level API running on top of Thrift like
Hector/Astyanax/etc.
Thrift is uh... painful.
On Thu, Jan 24, 2013 at 3:35 PM, Matthew Langton mjla...@gmail.com wrote:
Hi all,
I started looking at Cassandra a while ago and got used to the Thrift API. I
put it on the back burner
I use both Thrift and CQL.
My biased take is to use CQL for select queries and Thrift for
inserts/updates. I like being able to insert exactly the data type I
want for the column name and value. CQL is more user friendly, but it
lacks the flexibility of Thrift in terms of using different data types
for
In cassandra.yaml, there is the following setting:
# Throttles all outbound streaming file transfers on this node to the
# given total throughput in Mbps. This is necessary because Cassandra does
# mostly sequential IO when streaming data during bootstrap or repair, which
# can lead to saturating the
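The comment block quoted above belongs to the streaming throughput cap; in a 1.2-era cassandra.yaml the setting itself looks roughly like this (the value shown is the shipped default as best I recall):

```yaml
# Cap outbound streaming (bootstrap/repair) per node, in Mbps.
stream_throughput_outbound_megabits_per_sec: 200
```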
Can you provide details of the snitch configuration and the number of nodes you
have?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 25/01/2013, at 9:39 AM, Gabriel Ciuloaica gciuloa...@gmail.com wrote:
Hi Tyler,
Hi Aaron,
I'm using PropertyFileSnitch, and my cassandra-topology.properties looks
like this:
# Cassandra Node IP=Data Center:Rack

# default for unknown nodes
default=DC1:RAC1

# all known nodes
10.11.1.108=DC1:RAC1
10.11.1.109=DC1:RAC2
10.11.1.200=DC1:RAC3
Do you mean 90% of the reads should come from 1 SSTable?
By the way, after I finished migrating the data, I ran nodetool repair -pr on
one of the nodes. Before nodetool repair, all the nodes had the same disk
space usage. After I ran nodetool repair, the disk space for that node
jumped
Increasing the stack size in cassandra-env.sh should help you get past the
stack overflow. Doesn't help with your original problem though.
On Fri, Jan 25, 2013 at 12:00 AM, Wei Zhu wz1...@yahoo.com wrote:
Well, even after a restart, it throws the same exception. I am basically
stuck. Any
Thanks Derek,
in cassandra-env.sh, it says:
# reduce the per-thread stack size to minimize the impact of Thrift
# thread-per-client. (Best practice is
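The knob being discussed is the JVM -Xss flag set via JVM_OPTS in cassandra-env.sh; raising it looks roughly like this (the exact value is illustrative, not a tuned recommendation; the shipped default in that era was much smaller):

```shell
# cassandra-env.sh: raise the per-thread stack size to get past
# the stack overflow. Value below is illustrative only.
JVM_OPTS="$JVM_OPTS -Xss256k"
```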