Hello All,
I'm attempting to create what the DataStax 1.1 documentation calls a
Dynamic Column Family
(http://www.datastax.com/docs/1.1/ddl/column_family#dynamic-column-families)
via cqlsh.
This works in v2 of the shell:
create table data ( key varchar PRIMARY KEY) WITH comparator=LongType;
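For what it's worth, the `WITH comparator=...` syntax is specific to CQL 2. In CQL 3 the same dynamic layout is usually expressed with a clustering column plus COMPACT STORAGE; a sketch of what I believe is the equivalent (the `column1`/`value` names are the defaults cqlsh would assign, used here for illustration):

```sql
-- CQL 3 sketch of the same dynamic column family: each (key, column1)
-- pair becomes one dynamic cell, with column1 compared as a 64-bit
-- integer, matching comparator=LongType above.
CREATE TABLE data (
    key varchar,
    column1 bigint,
    value blob,
    PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE;
```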
When
Hello,
We're planning an upgrade from 0.6.7 (after it's released) to 0.7.0 (after
it's released) and I wanted to validate my assumptions about what can be
expected. Obviously I'll need to test my assumptions myself, but I was
hoping for some guidance to make sure my understanding is correct.
My core
This may or may not be related, but I thought I'd recount a similar experience
we had in EC2 in hopes it helps someone else.
As background, we had been running several servers in a 0.6.8 ring with no
Cassandra issues (some EC2 issues, but none related to Cassandra) on
multiple EC2 XL instances in a
was causing problems.
Either way, I'm looking forward to hearing about anything you find.
Mike
On Thu, Jan 13, 2011 at 11:47 PM, Erik Onnen eon...@gmail.com wrote:
Too similar to be a coincidence I'd say:
Good node (old AZ): 2.11.1-0ubuntu7.5
Bad node (new AZ): 2.11.1-0ubuntu7.6
You beat
One of the developers will have to confirm, but this looks like a bug
to me. MessagingService is a singleton, and there's a Multimap used for
targets that isn't accessed in a thread-safe manner.
The thread dump would seem to confirm this: when you hammer what is
ultimately a standard HashMap with
Filed as https://issues.apache.org/jira/browse/CASSANDRA-2037
I can't see how the code would be correct as written but I'm usually
wrong about most things.
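For anyone unfamiliar with the failure mode being described: a Multimap is, underneath, a map from key to collection, so an unsynchronized put is the classic check-then-act race. A minimal sketch in Python, with hypothetical class names (this is an illustration of the pattern, not Cassandra's actual code), showing the unsafe shape and a lock-guarded fix:

```python
import threading

class UnsafeMultimap:
    """Check-then-act on a plain dict: two threads can both observe the
    key as missing, both create a fresh list, and one thread's entries
    are silently clobbered."""
    def __init__(self):
        self._map = {}

    def put(self, key, value):
        if key not in self._map:   # check
            self._map[key] = []    # act (racy: may replace another thread's list)
        self._map[key].append(value)

class SafeMultimap:
    """Same structure, but every put happens under a single lock --
    analogous to wrapping the map in a synchronized collection."""
    def __init__(self):
        self._map = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._map.setdefault(key, []).append(value)

    def count(self, key):
        with self._lock:
            return len(self._map.get(key, ()))

def hammer(mm, n_threads=8, n_puts=1000):
    # Many threads inserting under the same key, as MessagingService
    # would see with many in-flight messages to one target.
    threads = [threading.Thread(
                   target=lambda: [mm.put("target", i) for i in range(n_puts)])
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

safe = SafeMultimap()
hammer(safe)
print(safe.count("target"))  # with the lock, no puts are lost: 8000
```

The unsafe variant may happen to produce the right count on any given run; the point is that nothing guarantees it, which is exactly why losing entries only shows up under load.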
On Sun, Jan 23, 2011 at 12:14 PM, Erik Onnen eon...@gmail.com wrote:
One of the developers will have to confirm but this looks like a bug
During a recent upgrade of our Cassandra ring from 0.6.8 to 0.7.3, and
prior to a drain on the 0.6.8 nodes, we lost a node for reasons
unrelated to cassandra. We decided to push forward with the drain on
the remaining healthy nodes. The upgrade completed successfully for
the remaining nodes and the
realize the data it's streaming over is
older-version data. Can you create a ticket?
In the meantime, nodetool scrub (on the existing nodes) will rewrite
the data in the new format, which should work around the problem.
On Mon, Mar 7, 2011 at 1:23 PM, Erik Onnen eon...@gmail.com wrote:
During
I'd recommend not storing commit logs or data files on EBS volumes if
your machines are under any decent amount of load. I say that for
three reasons.
First, both EBS volumes contend directly for network throughput with
what appears to be a peer QoS policy to standard packets. In other
words, if
Thanks, so is it the [Index.db, Statistics.db, Data.db, Filter.db];
skipped that indicates it's in Statistics? Basically I need a way to
know if the same is true of all the other tables showing this issue.
-erik
It's been about 7 months now, but at the time G1 would regularly
segfault for me under load on Linux x64. I'd advise extra precautions
in testing, and make sure you test with representative load.
I'll capture what we're seeing here for anyone else who may look
into this in more detail later.
Our standard heap growth is ~300K in between collections with regular
ParNew collections happening on average about every 4 seconds. All
very healthy.
The memtable flush (where we see almost all
Sorry for the complex setup; it took a while to identify the behavior, and
I'm still not sure I'm reading the code correctly.
Scenario:
Six node ring w/ SimpleSnitch and RF3. For the sake of discussion
assume the token space looks like:
node-0 1-10
node-1 11-20
node-2 21-30
node-3 31-40
node-4
should route the request to another node for the retry
without adding additional complexity to StorageProxy. (If that's not
what you see in practice, then we probably have a dynamic snitch bug.)
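To make the routing discussion concrete: with SimpleSnitch, SimpleStrategy places a key's RF replicas on the node owning the key's token plus the next RF-1 nodes walking the ring clockwise. A minimal sketch of that placement rule for a six-node, RF=3 ring like the one above (the end-of-range tokens here are assumptions for illustration, not from any real cluster):

```python
from bisect import bisect_left

# Hypothetical six-node ring: each node owns the range ending at its
# token, so a token of 10 corresponds to the range 1-10, and so on.
RING = [(10, "node-0"), (20, "node-1"), (30, "node-2"),
        (40, "node-3"), (50, "node-4"), (60, "node-5")]

def replicas(key_token, ring=RING, rf=3):
    """Return the rf replicas for key_token under SimpleStrategy:
    the owning node, then the next rf-1 nodes clockwise."""
    tokens = [t for t, _ in ring]
    # First node whose token >= key_token; wrap past the last token.
    start = bisect_left(tokens, key_token) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

# A key with token 25 falls in node-2's range (21-30), so its replicas
# are node-2 plus the next two nodes clockwise.
print(replicas(25))  # ['node-2', 'node-3', 'node-4']
print(replicas(55))  # ['node-5', 'node-0', 'node-1'] (wraps around)
```

Which of those three replicas actually serves a read is then up to the coordinator and, when enabled, the dynamic snitch's latency scores.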
On Wed, Apr 13, 2011 at 12:32 PM, Erik Onnen eon...@gmail.com wrote:
Sorry for the complex setup
On Thu, Jul 29, 2010 at 9:57 PM, Ryan Daum r...@thimbleware.com wrote:
Barring this, we (the place where I work, Chango) will probably eventually fork
Cassandra to have a RESTful interface and use the Jetty async HTTP client to
connect to it. It's just ridiculous for us to have threads and